March 4, 2024
Decoding the EU AI Act: Regulatory Sandboxes and GPAI Systems
In our first post of this series, we conducted a high-level breakdown of the EU AI Act, focusing on regulatory objectives, key actors, and technologies targeted, laying the groundwork required for a holistic understanding of the act’s provisions. Moving forward, we will adopt a more granular approach, dissecting and simplifying the act’s complex web of rights and requirements as they relate to the factors stated above.
Here, we explore the EU AI Act’s provisions concerning the implementation of regulatory sandboxes as well as the development and deployment of general purpose AI (GPAI) systems. To meaningfully interpret this information, readers should familiarize themselves with a few core concepts discussed in our previous post (linked above), namely the EU AI Act’s ambition to promote trustworthy AI innovation, standardize AI legislation across the Union, and most importantly, evaluate AI systems by reference to a tiered risk classification structure.
Moreover, seeing as this post delves into the nitty-gritty details, we must first lay out some key definitions of relevance to our discussion. These definitions, which are covered in Article 3 of the EU AI Act, are the following:
- Artificial Intelligence Office: “the Commission’s function of contributing to the implementation, monitoring and supervision of AI systems, general purpose AI models and AI governance.”
- General purpose AI model: “an AI model…that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications.”
- High-impact capabilities: “capabilities that match or exceed the capabilities recorded in the most advanced general purpose AI models.”
- Market surveillance authority: “the national authority carrying out the activities and taking the measures pursuant to Regulation.”
- National competent authority: “the notifying authority and the market surveillance authority…any reference to national competent authorities or market surveillance authorities in this Regulation shall be understood as referring to the European Data Protection Supervisor.” (The latter clause applies where AI systems are put into service or used by Union institutions, bodies, offices, or agencies.)
- AI regulatory sandbox: “a concrete and controlled framework set up by a competent authority which offers providers or prospective providers of AI systems the possibility to develop, train, validate and test, where appropriate in real world conditions, an innovative AI system.”
- Serious incident: “any incident or malfunctioning of an AI system that directly or indirectly leads to…the death of a person or serious damage to a person’s health; a serious and irreversible disruption of the management and operation of critical infrastructure; breach of obligations under Union law intended to protect fundamental rights; serious damage to property or the environment.”
- Systemic risk at Union level: “a risk that is specific to the high-impact capabilities of general purpose AI models, having a significant impact on the internal market due to its reach, and with actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain.”
Whether or not the EU AI Act will work as intended is unclear, but one thing is certain: the act will play a major role in setting global AI regulatory standards. Consequently, stakeholders who familiarize themselves with the EU AI Act now can take initiative in fleshing out the approaches required to future-proof their organization’s AI compliance strategy, and possibly even gain a competitive edge.
In line with our mission to streamline the responsible AI integration process from end to end, we at Lumenova AI leverage our expertise to make the AI regulation discourse more accessible, interpretable, and ultimately, useful. To ensure compliance as the AI wave continues to build and spread, stakeholders must think proactively, maintaining an adaptable and flexible mindset that allows them to quickly evolve alongside novel changes in the AI regulatory landscape, and Lumenova AI can help.
Regulatory Sandboxes
Regulatory sandboxes are one of the primary mechanisms by which the EU AI Act will encourage and promote trustworthy AI innovation. However, for these sandboxes to work as intended, they must function uniformly and consistently across the EU, which necessitates the involvement of multiple different bodies and actors at various levels, from the EU Commission itself to national competent authorities and AI providers.
Therefore, we begin this section by outlining the purpose and function of AI regulatory sandboxes. Following this, we subdivide regulatory sandbox obligations into three categories: 1) the EU Commission and member states, 2) the AI Office, national competent authorities, and market surveillance authorities, and 3) AI providers.
Purpose and Function
At their core, established regulatory sandboxes must be designed to facilitate the development, training, testing, and validation of AI systems within a controlled and secure environment prior to deployment. More specifically, sandboxes should aim to promote, maintain, and/or streamline the following practices:
- Compliance with the EU AI Act, especially in light of novel AI advancements or regulatory changes.
- Cooperation and information sharing of best practices, especially across borders and between national competent authorities.
- EU AI innovation and competition, in particular, the development of a healthy AI ecosystem.
- Regulatory adaptation and revision fueled by evidence-driven learning and experimentation.
- EU market accessibility, especially for SMEs and start-ups.
- Controlled real-world AI testing where appropriate and as agreed upon by AI providers and national competent authorities.
In addition to these practices, sandboxes also create an environment in which AI providers can leverage lawfully obtained personal data for model development and training, insofar as the sandbox maintains robust access control, data security, and privacy protocols. However, for AI providers to take advantage of this sandbox feature, the intended purpose of their AI systems must meet one of these criteria:
- Safeguarding public interest, safety, and health.
- Promoting environmental and energy sustainability.
- Maintaining or improving the safety of critical infrastructure.
- Streamlining and improving the access to and quality of essential goods and services.
Once participation in the sandbox is complete, all personal data logs must be deleted, and measures must be taken to guarantee that the processing of personal data in the sandbox does not generate any real-world impacts on data subjects (a minimal sketch of this data lifecycle follows below).
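To make these two requirements more tangible, here is a minimal, illustrative Python sketch of how a sandbox participant might restrict access to personal data while the sandbox runs and purge personal-data logs at exit. The class and method names (SandboxDataStore, exit_sandbox, and so on) are hypothetical and are not drawn from the act or any official tooling.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SandboxDataStore:
    """Hypothetical store for personal data processed during sandbox participation."""
    authorized_users: List[str] = field(default_factory=list)
    personal_data_logs: Dict[str, list] = field(default_factory=dict)
    sandbox_active: bool = True

    def log_personal_data(self, user: str, subject_id: str, record: dict) -> None:
        # Access control: only authorized users may process data, and only while the sandbox runs.
        if not self.sandbox_active:
            raise RuntimeError("Sandbox participation has ended; no further processing allowed.")
        if user not in self.authorized_users:
            raise PermissionError(f"{user} is not authorized to process personal data.")
        self.personal_data_logs.setdefault(subject_id, []).append(record)

    def exit_sandbox(self) -> None:
        # On exit, delete all personal-data logs so that processing inside the sandbox
        # leaves no residue that could affect data subjects in the real world.
        self.personal_data_logs.clear()
        self.sandbox_active = False

# Usage sketch
store = SandboxDataStore(authorized_users=["provider_ml_team"])
store.log_personal_data("provider_ml_team", "subject-001", {"feature": 0.42})
store.exit_sandbox()
assert store.personal_data_logs == {}  # all personal-data logs removed at exit
```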
In essence, the overall purpose of AI regulatory sandboxes is to foster safe and trustworthy AI innovation by functioning as controlled and secure testing/development hubs in which AI providers can evaluate their model’s performance and potential impacts.
EU Commission and Member States
Given that the EU Commission is a high-level governance body, its obligations and role in the development and establishment of AI regulatory sandboxes operate at a broader scope than other government bodies and actors we discuss in this section. Consequently, the EU Commission is largely responsible for two major high-level objectives:
- Formulating common principles that are applicable across the EU concerning the establishment, development, implementation, operation, and supervision of AI regulatory sandboxes. These principles should target three main areas: 1) the development of eligibility and selection criteria required for regulatory sandbox participation, 2) regulatory sandbox procedures and exit reports, and 3) regulatory sandbox terms and conditions.
- The creation of a singular interface that: 1) contains all pertinent AI regulatory sandbox information, 2) facilitates stakeholder-sandbox interaction, 3) provides a mechanism by which stakeholders can raise sandbox inquiries with national competent authorities, and 4) provides a mechanism by which stakeholders can request assistance with EU AI Act compliance.
For EU member states, AI regulatory sandbox obligations are explained below:
- Member states must develop and establish operational AI regulatory sandboxes, either alone, or in partnership with other EU member states, within two years of the EU AI Act taking effect.
- Member states must ensure that sufficient resources are allocated to the development of regulatory sandboxes, and where appropriate, allow or request the involvement of other actors in the AI ecosystem.
- Member states should target SMEs and startups for priority access to AI regulatory sandboxes, provided that they meet selection and eligibility criteria.
- Member states must create targeted and tailored training and awareness initiatives for SMEs, startups, users, and public authorities regarding how the EU AI Act’s provisions (in this case, AI regulatory sandboxes) apply to them.
- Member states must establish communication channels devoted to regulatory training and awareness initiatives so that SMEs, startups, users, and public authorities can pose regulatory questions, receive advice on how to comply with the EU AI Act, and participate in AI regulatory sandboxes.
- Member states must ensure that any relevant fees are administered proportionately, by reference to factors such as business size or market capitalization, if SMEs or startups violate any of the act’s provisions, such as regulatory sandbox protocols.
The AI Office, National Competent Authorities, and Market Surveillance Authorities
The AI Office, as part of the EU Commission, will play a pivotal role in the implementation and continual oversight, enforcement, maintenance, and revision of the EU AI Act. Therefore, it is crucial to understand the AI Office’s core responsibilities, not just at the level of AI regulatory sandboxes, but in terms of the entire EU AI Act. The AI Office is responsible for:
- Developing and providing standardized templates for each area covered by the EU AI Act—for instance, how to apply for regulatory sandbox participation—when they are requested by the European AI Board (hereafter referred to as the “Board”).
- Developing and implementing an easily accessible, holistic, and high-utility platform that contains all relevant information on the application of the EU AI Act. For example, once approved for regulatory sandbox participation, AI providers will require access to information on how to effectively operate within sandbox guidelines.
- Running public awareness and communication campaigns to increase regulatory understanding, with an overall focus on highlighting the EU AI Act’s requirements in terms of obligations for key actors.
- Facilitating and evaluating the development of best practices concerning AI deployment procedures.
In addition to these high-level responsibilities, the AI Office must keep an up-to-date and publicly accessible inventory of active AI regulatory sandboxes within the EU. On the other hand, national competent authorities (NCAs) and market surveillance authorities (MSAs), which will help carry out many of the AI Office’s responsibilities on a more granular scale, are subject to a host of regulatory sandbox obligations, which are described below:
- NCAs must guide, oversee, and promote the development and implementation of AI regulatory sandboxes, demonstrating a particular focus on risk identification and mitigation.
- NCAs must disclose their AI regulatory sandboxes to the AI Office and Board, and can also request assistance on how to develop and implement them appropriately.
- NCAs can suspend or ban the further testing and development of AI technologies within regulatory sandboxes if they determine that inadequate risk mitigation efforts have been taken.
- NCAs must document sandbox activities, in particular those that are successfully carried out, as well as sandbox results, learning outcomes, and exit reports.
- NCAs must help AI providers comply with sandbox requirements upon request, and providers can leverage the NCA-documented sandbox reports to demonstrate compliance with other components of the EU AI Act. Such reports can also be made publicly available insofar as an agreement is reached between NCAs and AI providers.
- NCAs must submit yearly regulatory sandbox progress reports to the AI Office and the Board until the sandbox is terminated; these reports will be made public to advance and inform further regulatory revisions and best practices.
- MSAs can administer inspections and safety checks without having to notify the provider beforehand, but only if the provider’s AI system is approved for real-world testing.
AI Providers
Regulatory sandboxes can be enormously useful for AI providers, both in terms of developing safe, effective, and responsible AI systems and with respect to ensuring that such systems, once deployed, remain compliant under the EU AI Act. However, for AI providers to maximize the utility they gain from regulatory sandboxes, they must first understand what’s required of them. Below, we outline the core regulatory sandbox obligations for AI providers, followed by a brief sketch of how a provider might track the real-world testing conditions:
- AI providers must meet the selection and eligibility criteria established by the EU Commission to participate in AI regulatory sandboxes.
- AI providers must generate a sandbox plan in collaboration with their NCA that outlines the timeframe, methods, objectives, conditions and requirements for sandbox activities.
- AI providers will be held liable for any damages inflicted on third parties during sandbox experimentation, provided that such damages arise as a direct consequence of the failure to follow NCA sandbox guidelines.
- If AI providers, even those whose systems are considered high-risk, wish to conduct real-world testing outside of the regulatory sandbox, they must meet several conditions, which, most notably, include:
- The submission of a concrete, real-world testing plan to their respective MSA, followed by MSA approval of this plan.
- The acquisition of informed consent from real-world testing participants and the implementation of adequate protection measures for potentially vulnerable participants.
- The communication of all relevant information and real-world testing procedures with AI deployers that choose to cooperate.
- The continued and consistent oversight, by the provider, of real-world testing procedures.
- The guarantee that any decisions, predictions, or recommendations made by the AI system in question can be reversed or disregarded.
- Providers must explain to their MSAs where real-world testing will take place, and report on progress, termination procedures, and final testing outcomes.
- Providers must promptly report any serious incidents that emerge during real-world testing to their MSA and take immediate measures to address them, and where resolution is not possible, suspend or terminate testing entirely.
- Providers are liable for any negative consequences or harms that occur as a consequence of real-world testing.
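As a rough illustration of the real-world testing conditions listed above, the sketch below checks whether a hypothetical testing plan covers the required elements before a provider approaches its MSA. The RealWorldTestingPlan record and its field names are our own invention for illustration and do not reflect any official format.

```python
from dataclasses import dataclass, asdict

@dataclass
class RealWorldTestingPlan:
    """Hypothetical checklist for real-world testing outside a sandbox (illustrative only)."""
    msa_plan_submitted_and_approved: bool = False
    informed_consent_obtained: bool = False
    vulnerable_participants_protected: bool = False
    cooperating_deployers_informed: bool = False
    provider_oversight_in_place: bool = False
    outputs_reversible_or_disregardable: bool = False

def unmet_conditions(plan: RealWorldTestingPlan) -> list:
    """Return the names of any conditions that are not yet satisfied."""
    return [name for name, satisfied in asdict(plan).items() if not satisfied]

# Usage sketch: a plan that still lacks informed consent and a reversibility guarantee.
plan = RealWorldTestingPlan(
    msa_plan_submitted_and_approved=True,
    vulnerable_participants_protected=True,
    cooperating_deployers_informed=True,
    provider_oversight_in_place=True,
)
print(unmet_conditions(plan))
# ['informed_consent_obtained', 'outputs_reversible_or_disregardable']
```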
GPAI Systems
Under the EU AI Act, GPAI systems are viewed as distinct from other kinds of AI due to their versatile capabilities, which allow them to perform a wide variety of tasks across numerous domains and integrate with multiple different types of AI systems. These capabilities can make GPAI both more powerful and more risky than conventional AI, though it is important to note that GPAI systems are not classified as inherently high-risk under the EU AI Act’s tiered risk classification structure. Nevertheless, the EU AI Act holds GPAI providers to a more stringent standard than most.
GPAI providers have six core obligations. They must:
- Ensure the production of up-to-date and sufficiently detailed technical documentation regarding the design, function, training, testing, and evaluation of their models (see the illustrative sketch after this list).
- Promptly provide technical documentation to the AI Office or NCA when it is requested.
- Share technical documentation with AI providers who indicate plans to integrate GPAI capabilities into their AI systems, ensuring that models’ limits and capabilities are clearly communicated prior to integration and that IP rights are respected.
- Develop and establish policies that enable compliance with EU copyright law.
- Create a publicly available and detailed explanation of the content on which the GPAI model was trained.
- Cooperate with the EU Commission and NCAs when necessary to ensure compliance with the EU AI Act’s provisions.
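For a sense of what keeping such documentation up to date might look like in practice, the sketch below models a minimal documentation record that maps the obligations above onto fields a provider could maintain. The structure and field names are hypothetical and are not prescribed by the act.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class GPAITechnicalDocumentation:
    """Hypothetical, minimal documentation record for a GPAI model (illustrative only)."""
    model_name: str
    last_updated: date
    design_and_architecture: str        # model design and intended function
    training_process: str               # data pipeline, compute, and training procedure
    testing_and_evaluation: str         # benchmarks, evaluation results, known limitations
    capabilities_and_limitations: str   # information shared with downstream AI providers
    copyright_policy_url: str           # policy enabling compliance with EU copyright law
    training_content_summary_url: str   # publicly available summary of training content

    def is_current(self, as_of: date, max_age_days: int = 180) -> bool:
        # Illustrative freshness check; the act requires documentation to be kept
        # up to date but does not prescribe a specific interval.
        return (as_of - self.last_updated).days <= max_age_days
```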
GPAI providers who open-source their models are exempt from the first three (documentation-related) obligations listed above, though they must still comply with the remaining obligations, including establishing copyright policies and publicly explaining the content on which their models were trained; moreover, this exemption does not apply to GPAI providers whose models pose a systemic risk. Seeing as the EU has yet to publish harmonized standards for GPAI systems, GPAI providers should follow the AI Office’s codes of practice, which, although not yet available, will target three main areas:
- Maintaining up-to-date technical documentation alongside detailed explanations of the content on which a GPAI model was trained.
- Pinpointing systemic risks and their sources.
- Mitigating and managing systemic risks proportionately, by reference to risk saliency and probability.
Additionally, the AI Office will be responsible for ensuring that these codes of practice sufficiently illustrate the core objectives, necessary commitments or measures, and KPIs relevant to GPAI providers under the EU AI Act. Fortunately, the AI Office does not need to pursue the development of these codes alone, and can request the assistance of GPAI providers and NCAs, who, after agreeing to help, will provide regular updates and progress reports on how such codes are working for them. Ultimately, the AI Office will determine whether these codes become standardized practice across the EU, or alternatively, must be revised in light of novel AI advancements or changes to the regulatory landscape.
GPAI Systems with Systemic Risk
For GPAI models to pose a systemic risk at the Union level, they must possess capabilities that are considered high impact by the EU Commission or the Commission’s pre-established scientific review panel.
- High-impact capabilities will be evaluated using relevant technical tools, methods, indicators, and benchmarks, and by reference to a certain compute capacity threshold, all of which the Commission may amend in accordance with changes in the AI landscape (a back-of-envelope sketch follows below).
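To illustrate what a compute-based presumption could look like in practice, here is a back-of-envelope sketch in Python. It assumes the common 6 × parameters × training tokens approximation of training FLOPs and uses the 10^25 FLOP figure widely cited as the act’s presumption threshold; both values are assumptions for this illustration, and the Commission may amend the threshold over time.

```python
# Back-of-envelope estimate of cumulative training compute for a dense transformer,
# using the common 6 * N * D approximation (N = parameters, D = training tokens).
# The 1e25 FLOP threshold below is the widely cited presumption figure for systemic
# risk; treat it as an assumption here, since the Commission may amend it.

PRESUMPTION_THRESHOLD_FLOP = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    return 6 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    return estimated_training_flops(parameters, training_tokens) >= PRESUMPTION_THRESHOLD_FLOP

# Example: a hypothetical 70B-parameter model trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} estimated FLOPs -> presumed systemic risk: {presumed_systemic_risk(70e9, 15e12)}")
# 6.30e+24 estimated FLOPs -> presumed systemic risk: False
```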
Moreover, GPAI models with systemic risk will be added to a publicly available list that is maintained and updated by the EU Commission, which will be separate from the Commission’s database on high-risk AI systems. Nonetheless, when GPAI systems meet, or are expected to meet, the standards required to pose a systemic risk, GPAI providers must:
- Immediately notify the EU Commission, after which the Commission will determine whether to designate the model as one with systemic risk.
If GPAI providers believe that their systems do not meet systemic risk standards, and should therefore not be labeled as such, they can formally argue against the Commission’s designation. However, if the provider’s arguments are deemed insufficient, the Commission can reject their plea and nonetheless designate the model as one with systemic risk.
- GPAI providers who wish to appeal a systemic risk designation must wait at least 6 months before doing so, and be sure to present an updated argument that does not simply reiterate the one they submitted previously.
Finally, in addition to the six core GPAI provider obligations listed in the previous section, GPAI providers whose models pose a systemic risk must also follow these procedural requirements:
- Leveraging state-of-the-art standardized protocols and tools to evaluate model performance, with a particular focus on adversarial testing intended to help uncover and manage potential systemic risks.
- Evaluating and addressing potential systemic risks and their sources, throughout the development, deployment, and use stages of the AI lifecycle.
- Immediately documenting and reporting serious incidents, along with the steps taken to prevent and mitigate them, to the AI Office and NCAs (a minimal reporting sketch follows this list).
- Maintaining robust cybersecurity protocols, with a specific focus on preserving the security of the GPAI model and its physical infrastructure.
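As a final illustration, the sketch below shows how a provider might structure an internal serious-incident record before notifying the AI Office and NCAs. The incident categories paraphrase the Article 3 definition quoted at the start of this post; the SeriousIncidentReport class and its fields are hypothetical and do not represent an official reporting format.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class SeriousIncidentCategory(Enum):
    # Categories paraphrased from the act's definition of a "serious incident".
    DEATH_OR_SERIOUS_HARM_TO_HEALTH = "death or serious damage to a person's health"
    CRITICAL_INFRASTRUCTURE_DISRUPTION = "serious and irreversible disruption of critical infrastructure"
    FUNDAMENTAL_RIGHTS_BREACH = "breach of obligations protecting fundamental rights"
    PROPERTY_OR_ENVIRONMENTAL_DAMAGE = "serious damage to property or the environment"

@dataclass
class SeriousIncidentReport:
    """Hypothetical internal record used to prepare a notification (illustrative only)."""
    model_name: str
    category: SeriousIncidentCategory
    description: str
    detected_at: datetime
    corrective_measures: list = field(default_factory=list)

    def summary(self) -> str:
        measures = "; ".join(self.corrective_measures) or "none yet"
        return (f"[{self.detected_at:%Y-%m-%d %H:%M}] {self.model_name}: "
                f"{self.category.value} - {self.description} (measures: {measures})")

# Usage sketch
report = SeriousIncidentReport(
    model_name="example-gpai-v1",
    category=SeriousIncidentCategory.FUNDAMENTAL_RIGHTS_BREACH,
    description="Discriminatory outputs detected in a downstream hiring integration.",
    detected_at=datetime(2024, 3, 1, 9, 30),
    corrective_measures=["output filter deployed", "downstream provider notified"],
)
print(report.summary())
```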
Key Takeaways
In this post, we distilled the essential obligations, rights, and procedures concerning two significant areas of the EU AI Act: 1) regulatory sandboxes, and 2) GPAI systems.
We deliberately decided to group these two areas together because they share a critical regulatory characteristic: targeted and specific evaluation procedures, guidelines, or codes of practice have yet to be established. This should not be interpreted as a regulatory oversight, but rather, a proactive regulatory strategy, whereby the fine details of the EU AI Act’s application in these areas will be ironed out in accordance with early-stage regulatory trial and experimentation outcomes.
Consequently, if readers were to take away one piece of information from this post, we suggest the following: pay close attention to the EU Commission and AI Office’s progress on the standardization of regulatory sandbox requirements and procedures as well as the eventual harmonization of rules for GPAI systems, especially those that pose a systemic risk. Such developments are likely to occur in the short term, and if AI providers play their cards right, they may actually be granted the opportunity to participate in the regulatory discourse.
In the meantime, if readers wish to maintain a current understanding of the AI regulatory landscape, and in particular, the present and future state of the EU AI Act, we invite you to follow Lumenova AI’s blog, where you can also explore additional content on AI risk management, generative AI, and responsible AI.
Decoding the EU AI Act Series
Decoding the EU AI Act: Scope and Impact
Decoding the EU AI Act: Regulatory Sandboxes and GPAI Systems
Decoding the EU AI Act: Transparency and Governance
Decoding the EU AI Act: Standardizing AI Legislation
Decoding the EU AI Act: Influence on Market Dynamics
Decoding the EU AI Act: Future Provisions