Author
Freyja van den Boom, Research Fellow at FARI
The CPDP 2025 panel “Elevating AI Oversight: the Crucial Role of Regulatory Sandboxes and Competent Authorities under the AI Act” explored the practicalities and challenges of implementing AI regulatory sandboxes within the EU’s AI Act framework.
The panel examined how Member States can establish regulatory sandboxes that align with their oversight frameworks, how market surveillance and data protection authorities can collaborate effectively, and how innovation can be balanced with robust supervision so that sandboxes support AI development while adhering to the AI Act’s provisions. Four questions framed the discussion:
1. Which competent authorities should oversee sandboxes, and what roles should market surveillance authorities play?
2. How can data protection authorities be effectively integrated into sandboxes when personal data is being processed?
3. How can companies’ concerns, such as intellectual property and confidentiality issues, be addressed so that trust can be built between them and regulators in the sandbox?
4. How could civil society be meaningfully integrated into the sandbox, and what role should it assume?
The discussion highlighted the need for clarity, trust-building, and effective collaboration between various stakeholders.
The following summarises key themes and insights from the panel discussion.
Sensor technology for automated navigation and healthcare chatbots used for mental health support are just two examples of technologies that require a regulatory framework to protect society against harm and ensure consumer trust. Regulatory sandboxes offer companies an opportunity to gain clear regulatory guidance and to inspire consumer confidence in the AI products they develop.
However, as Sam Jungyun Choi (Covington &amp; Burling) explained, there are challenges in getting companies to participate in regulatory sandboxes. Some companies may be concerned about the resources and time commitments involved; they may also have confidentiality concerns, particularly in relation to sensitive and proprietary technology.
Regulators could seek to address these concerns by having a clear roadmap and confidentiality guarantees. Companies will likely have more confidence and trust if there are clear rules at the outset about what documentation will be needed, what outcomes can be expected, and what guarantees are in place regarding the information the regulators gain through the regulatory sandbox.
The panel stressed that practical guidance from regulators is crucial for companies to understand how to apply high-level principles in practice. For instance, companies will want clear, practical guidelines on the measures they should take when accessing datasets for AI training and determining the appropriate level of human review in certain AI deployments. If regulatory sandboxes operate as one avenue for companies to obtain tailored, fact-specific guidance on such points, this will likely support companies’ willingness to participate in regulatory sandboxes.
Alex Moltzau from the European AI Office joined the panel in his personal capacity and did not represent the official position of the European Commission. He highlighted the need for clearly defined competence and remit for the authorities overseeing sandboxes, so that expectations are set from the outset. He also stressed the importance of streamlining communication and product adaptation processes, particularly for cross-border AI applications. More broadly, he asked what kind of society we want to live in, arguing that the regulatory environment we build matters in the everyday life of citizens. Horizontal coordination is therefore important so that both regulators and participating organisations understand, in practice, how new products are introduced and how AI product safety is ensured. In this context, he mentioned the recent pilot EU AI Act regulatory sandbox in Spain and the structure outlined in the Netherlands’ white paper proposal.
Thiago Moraes from LSTS (VUB) gave several examples of existing pilots in Europe, including from the Netherlands. The Dutch proposal to establish a single entry point for stakeholders, with coordination between the national competent authority (NCA) and other market surveillance authorities (MSAs) to assess whether and how an AI regulatory sandbox should operate, is an interesting example and a possible model for other Member States. He further urged Member States to engage with the Commission on implementation in the autumn to ensure effective learning and coordination across regions. The panel emphasized the importance of Member States learning from each other’s experiences, referencing examples such as Singapore, Latin America, Norway’s podcast on sandboxes, and CNIL’s experience with digital health.
Sophie Tomlinson from the Datasphere Initiative emphasized that regulatory sandboxes represent a shift in mindset, fostering innovation in a controlled environment. Sandboxes can be agile tools for testing regulation and understanding how it supports or hampers development across diverse sectors.
The EU AI Act’s inclusion of sandboxes was seen as an exciting development, offering a safe space for companies and others to test and understand technology. A key challenge is striking a balance between flexibility and accountability, ensuring robust supervision while fostering innovation.
The panel raised several critical questions about how data protection authorities can be effectively integrated into sandboxes, especially when personal data is processed. The discussion also touched upon how civil society can be meaningfully integrated into the sandbox process and what role it should assume, particularly in light of concerns about intellectual property and confidentiality when sharing sensitive information with regulators and potentially civil society groups.
A recurring sub-theme was the need for technical literacy amongst regulators to effectively understand and address the complex technical questions arising from AI development within sandboxes. Regulators need to be able to answer “nitty-gritty” questions from companies about how to apply principles and draw lines in practical scenarios.
To conclude, the panel underscored that the success of AI regulatory sandboxes under the EU AI Act hinges on clear communication, well-defined roles for competent authorities, effective collaboration between market surveillance and data protection bodies, and a practical approach to addressing company concerns while ensuring robust oversight. The focus was on creating an environment where innovation can thrive with appropriate safeguards and trust.
© Head Picture: LenoirPhotography