In our previous post on the EU AI Act, we distilled the central regulatory rights and requirements concerning regulatory sandboxes and general purpose AI (GPAI) systems. In this post—the third of our EU AI Act series—we’ll break down transparency obligations, proposed governance roles and structures, and post-market monitoring procedures. This will add a layer of complexity to our discussion of the EU AI Act’s multifaceted web of rights and requirements, so we suggest that readers familiarize themselves with the first two pieces in this series, which can be accessed at Lumenova AI’s blog.
Furthermore, the EU AI Act is a horizontal piece of legislation, meaning that it’s all-encompassing and attempts to address AI-related impacts—as well as the mechanisms, procedures, and bodies/authorities required to mitigate them appropriately—across numerous industries, sectors, and domains. The horizontal structure of the EU AI Act will be an important factor as we move forward, namely because it affords the legislation a degree of ingrained adaptability and flexibility that enables further refinement and revision in light of advances in the AI and regulatory landscape. In simple terms, the EU AI Act will see considerable updates, even in the short-term, and while its foundation is likely to remain consistent, stakeholders should recognize that, at its core, it’s a living and breathing document.
Generating a holistic understanding of the EU AI Act is no easy feat. Nonetheless, Lumenova AI embraces the challenge of breaking down the tenets of this comprehensive regulation in a way that’s easily digestible and useful to readers. Keeping up with the tide of AI policy, especially at the global scale, can be extremely difficult and/or confusing, and as a responsible AI organization, Lumenova AI recognizes its role and responsibility in the continued democratization of the AI regulation discourse.
Before we begin our discussion on the transparency, governance, and post-market monitoring provisions of the EU AI Act, we lay out some key definitions, obtained from Article 3, which highlight essential concepts and terminology. These definitions are listed below (additional definitions—such as those for a serious incident, market surveillance authority (MSA), national competent authority (NCA), and general purpose AI (GPAI) system—relevant to this discussion are included in our previous post):
- Conformity assessment: “the process of demonstrating whether the requirements set out in Title III, Chapter 2 of this Regulation relating to a high-risk AI system have been fulfilled.”
- Operator: “the provider, the product manufacturer, the deployer, the authorised representative, the importer or the distributor” of an AI system.
- Post-market monitoring system: “all activities carried out by providers of AI systems to collect and review experience gained from the use of AI systems they place on the market or put into service for the purpose of identifying any need to immediately apply any necessary corrective or preventive actions.”
- Recall of an AI system: “any measure aimed at achieving the return to the provider or taking it out of service or disabling the use of an AI system made available to deployers.”
- Withdrawal of an AI system: “any measure aimed at preventing an AI system in the supply chain being made available on the market.”
- Deep fake: “AI generated or manipulated image, audio or video content that resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful.”
- Notifying authority: “the national authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring.”
- EU Data Protection Supervisor: “as regards AI systems put into service or used by EU institutions, agencies, offices and bodies, any reference to national competent authorities or market surveillance authorities in this Regulation shall be understood as referring to the European Data Protection Supervisor.”
Transparency Obligations
Transparency has emerged as a core principle in many active and prospective AI regulatory, safety, and ethics frameworks around the world, so it’s unsurprising that it also sits at the heart of the EU AI Act. However, transparency doesn’t just concern how transparent an AI system is in terms of intended purpose, design, function, and data, but also how it’s used and the impacts it generates. In other words, transparency operates at more than one level of the AI lifecycle, from development and training to deployment and integration.
Transparency obligations under the EU AI Act pertain to AI providers and deployers, including providers and deployers of GPAI and high-risk AI systems, which are subject to more nuanced and stringent requirements. Before we break down these obligations, note that a variety of granular regulatory exemptions may apply to certain kinds of AI providers, deployers, or AI use cases; a full discussion of these exemptions is beyond the scope of this post, so we suggest that readers looking for more information review Article 52 of the EU AI Act. That being said, transparency standards for AI providers and deployers are illustrated below, followed by a brief illustrative sketch:
- AI Providers whose systems directly interact with users/consumers must guarantee that users/consumers are aware of their interaction with such a system. In other words, AI providers must disclose to users/consumers that they are interacting with AI, and this disclosure must be immediate, corresponding with the first time of exposure to or use of an AI system.
- Generative AI (GenAI) providers must authenticate and disclose AI-generated content across audio, image, video, and text domains to users/consumers when they interact with it. This disclosure must be immediate, corresponding with the first time of exposure.
- AI deployers whose systems are leveraged for the purposes of emotional recognition and/or biometric categorization must inform those affected by such systems of 1) their utilization, and 2) the role of their personal data in the use of such systems.
- GenAI deployers whose systems are leveraged to create deep fakes must immediately disclose such content, upon the first point of exposure, as AI-generated or manipulated. When deep fakes are generated for artistic or creative endeavors, this standard still applies, but only insofar as disclosure does not compromise the artwork or people’s enjoyment of it.
- GenAI deployers whose systems are utilized to inform the public on issues of public interest (e.g., AI-generated news) must immediately and publicly authenticate text-based content as AI-generated or manipulated.
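To make the disclosure obligations above more concrete, here is a minimal, hypothetical sketch of how a provider might label AI-generated content before it first reaches a user. The EU AI Act does not prescribe any particular data model, API, or labeling format, so every name below (GeneratedContent, with_disclosure, the notice text) is an assumption made purely for illustration.

```python
from dataclasses import dataclass

# Illustrative sketch only: the EU AI Act does not prescribe any particular
# data model or API. All names here (GeneratedContent, with_disclosure) are
# hypothetical and exist purely to make the disclosure obligation concrete.

@dataclass
class GeneratedContent:
    modality: str       # e.g., "text", "audio", "image", "video"
    payload: str        # the AI-generated or manipulated content itself
    disclosed: bool = False
    notice: str = ""

def with_disclosure(content: GeneratedContent) -> GeneratedContent:
    """Attach a human-readable disclosure before the content's first exposure."""
    content.notice = "Notice: this content was generated or manipulated by an AI system."
    content.disclosed = True
    return content

# Example: a provider labels a piece of generated text before serving it.
article = GeneratedContent(modality="text", payload="...")
print(with_disclosure(article).notice)
```

In practice, a visible notice like this would likely be paired with machine-readable marking or metadata, which is the kind of detail the forthcoming Union-level code discussed below is expected to address.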
For the most part, the exemptions we previously alluded to concern systems that are authorized and validated for law enforcement purposes; however, more nuanced exemptions, in particular for GenAI systems, also apply. In terms of the latter, the AI Office is poised to develop a code, at the Union level, that enables the streamlined and effective detection and authentication of AI-generated content. Importantly, the first iterations of this code will emerge in the short term and will undergo further revisions until they are standardized and/or deemed sufficient.
Governance
Seeing as the EU AI Act applies to the entire EU, coordination and cooperation between the various national and international government bodies within the Union will be critical to the application, oversight, and subsequent revisions and/or improvements made to the EU AI Act. In this respect, the overarching goal of the EU Commission, which will be exercised mainly through the AI Office, is to facilitate EU-wide expertise in the field of AI, laying the groundwork required for a healthy, trustworthy, and internationally inclusive AI ecosystem.
In terms of governance under the EU AI Act, three major bodies emerge: 1) the EU AI Board (hereafter referred to as “the Board”), 2) the Advisory Forum, and 3) the scientific panel of independent experts. Crucially, each EU member state will also be required to establish or designate at least one notifying authority and one MSA as NCAs to ensure a single point of contact with regard to the independent, impartial, and unbiased application of the EU AI Act.
The EU AI Board
The Board will be overseen by the EU Data Protection Supervisor, chaired by an agreed-upon EU member state representative, and contain only one representative from each EU member state, each serving for a period of three years. Moreover, the Board will hold regular meetings at which the AI Office will be present (though it will not receive a vote), and the Board may decide to include other EU authorities in these meetings, though such decisions will be contextually motivated. Importantly, to help facilitate international coordination and collaboration, the Board will create two dedicated sub-groups targeting cooperation and information exchange among member states’ MSAs and notifying authorities.
The overall purpose of the Board is to help the Commission and EU Member States ensure that EU AI Act implementation is successful and consistent across the Union. In this context, the Board’s primary responsibilities are:
- To facilitate the sharing of relevant technical/regulatory knowledge and best practices, particularly as it concerns the AI Office in the context of regulatory sandboxes.
- To provide guidance and recommendations on EU AI Act application, especially for GPAI systems. More broadly, this also applies to NCAs and the Commission in terms of helping them build the skills required to enforce and oversee compliance with the EU AI Act.
- To encourage and support the standardization of administrative procedures throughout the EU, for example, regulatory sandbox safeguards and guidelines.
- To develop agreed-upon shared codes of conduct and practice to be applied throughout the EU, as well as to provide guidance documents and advice where relevant.
- To scrutinize the EU AI Act in terms of additional legislative components or provisions, AI trends, serious incident reports, and the overall function of the high-risk AI database.
- To assist the Commission in its development and implementation of public awareness campaigns that highlight AI risks and benefits, obligations and rights under the EU AI Act, and most importantly, the promotion of AI literacy.
- To develop and uphold a shared regulatory language through which NCAs and market operators can effectively communicate with one another in terms of their obligations under the EU AI Act.
- To encourage and maintain cooperation between different bodies within the EU, such as Union expert groups, as well as at the international level with NCAs and non-EU organizations. The Board should also act as a resource for the Commission on international AI issues.
- To help guide, develop, and inform qualified alerts on GPAI systems—alerts raised by the scientific panel of independent experts when a GPAI system meets the standards for systemic risk or poses a clear risk to the Union.
The Advisory Forum
An advisory forum, composed of relevant actors within the AI ecosystem such as SMEs and startups, industry experts, and civil society and/or academia, will be established for the purposes of providing guidance and technical acumen to the Board and Commission as to the continual evolution of their roles and responsibilities under the EU AI Act.
The forum will be headed by two co-chairs selected by the Commission from the member pool. Members will serve for a period of two years, though their service can be extended to a maximum of four years. Moreover, several critical Union bodies, such as the Fundamental Rights Agency and the EU Agency for Cybersecurity, will be permanent members of the forum, and the forum will be required to hold a minimum of two meetings per year—if it sees fit, it can invite other relevant actors in the AI ecosystem to participate.
Finally, the forum will prepare and make publicly available an annual report outlining its activities for that year. In special cases, and if the forum deems it necessary, it can establish targeted subgroups to consider questions and concerns about the scope and objectives of the EU AI Act across various subdomains, industries, or sectors.
The Scientific Panel of Independent Experts
The Commission will select and appoint a scientific panel of independent experts who have demonstrated their technical acumen and expertise in the field of AI. Members of this panel will play a critical role in helping the Commission enforce the EU AI Act in an accurate and objective manner.
Additionally, selected experts must be entirely independent of any AI providers, which include GPAI providers. To this point, each expert will be required to provide a publicly available declaration of interests to avoid potential conflicts of interest and guarantee the integrity of the scientific panel’s activities. Where EU member states require further assistance with the application of the EU AI Act, they will be afforded the opportunity to hire the panel’s experts for guidance, support, and assistance.
As for its responsibilities, the panel will collaborate with the AI Office to help enforce the EU AI Act, especially where its provisions concern GPAI systems. For GPAI systems specifically, the panel must notify the AI Office of potential systemic risks, play a foundational role in creating tools, methods, and templates for evaluating GPAI capabilities, and help classify GPAI risks at all levels. More broadly, the panel should facilitate cross-border collaboration with MSAs and provide overall support for the activities carried out by the AI Office. Though such provisions have not yet been established, the panel will also need a means by which to request assistance from the AI Office on how best to accomplish the tasks it has been assigned.
NCAs as a Single Point of Contact
It’s imperative that NCAs operate objectively and refrain from any actions that could conflict with or undermine their duties under the EU AI Act. At the same time, NCAs are granted the option to consolidate tasks within multiple national authorities based on organizational needs. In this context, the most relevant of these national authorities is a member state’s MSA.
In addition, EU member states must clearly communicate the identities and tasks of their designated NCAs to the Commission, ensure that this information is publicly available, and establish a single point of contact for market surveillance (i.e., an MSA), whose details will be included in a publicly available list maintained by the Commission. More specifically, member states’ responsibilities are the following:
- Member states must allocate sufficient resources to their NCAs, including but not limited to personnel with AI, cybersecurity, and civil rights expertise. Such resources are vital for these authorities to perform their duties effectively and efficiently.
- Member states must administer annual competence and resource assessments required for the successful and continued functioning of their NCAs.
- Member states must ensure that their respective NCAs, specifically MSAs, adhere to existing cybersecurity and confidentiality standards.
- Member states must submit biennial reports on NCAs’ resource adequacy to the Commission. Once the Commission reviews these reports, it will determine whether further resource-driven action is required.
- Member states must designate an MSA as their single point of contact for market surveillance, provide an electronic service by which said MSA can be easily communicated with or contacted, and share the MSA’s details with the Commission.
Where AI systems are put into service or used by EU institutions, agencies, offices, and bodies, the EU Data Protection Supervisor takes on the role of the competent authority (as per the definition cited earlier). The Commission will also play a role here, namely by encouraging and supporting information exchange between various member states’ NCAs, whereas NCAs will provide guidance to SMEs and startups on EU AI Act application characteristics and requirements.
Post-Market Monitoring
The idea behind post-market monitoring is straightforward: to encourage the development of an EU AI market in which all AI systems are deemed trustworthy, safe, and effective. Accomplishing this task, however, is already difficult and will increase in difficulty as AI systems, in particular GenAI and GPAI, become more sophisticated and widespread, hence the importance of robust post-market monitoring procedures.
Post-market monitoring provisions under the EU AI Act mainly address two kinds of key actors: 1) AI providers, which include providers of GPAI and high-risk AI systems, and 2) MSAs. There are several additional noteworthy rights and requirements that concern individual citizens, AI deployers, the Commission and AI Office, EU member states, and NCAs, which we will discuss at the end of this section.
Moreover, this chapter of the EU AI Act also outlines a series of targeted procedures for dealing with AI systems that pose a national risk as well as compliant AI systems that nonetheless pose a significant risk. These procedures are not especially complex, but they could substantially increase the nuances of EU AI Act application, and should thus be taken seriously.
Overall, the purpose of the post-market monitoring system is to monitor high-risk AI system performance throughout the system’s lifetime, and to ensure that AI providers and deployers can maintain compliance with the EU AI Act’s provisions, even as regulatory changes or developments in the AI ecosystem occur. In certain cases and where appropriate, a post-market monitoring system may also be leveraged for more complex analytical purposes, like analyzing interactions between various AI systems.
AI Provider Obligations
For AI providers, post-market monitoring obligations are listed below:
- AI providers must document their post-market monitoring system. Documentation must proportionally reflect the risk profile of the AI system in question and include a concrete post-market monitoring plan, which serves as the foundation for the post-market monitoring system.
- Providers of high-risk AI systems must report any serious incidents to the MSA of the member state in which the incident happened. The timeliness of this report should directly correspond with how serious the incident was—the more serious an incident, the more quickly a report must be submitted.
- AI providers must report a serious incident according to the following criteria, which determine how quickly a report is due (see the illustrative sketch at the end of this list):
- A causal link between the incident and the AI system in question is established. If such a link is identified, a serious incident report must be submitted within 15 days.
- The incident affects critical infrastructure. A report of this kind must be submitted within 2 days.
- The incident results in the death of an EU citizen. In this case, the incident must be reported immediately, and no later than 10 days after its occurrence.
- Following a serious incident report, AI providers must immediately conduct an investigation that includes a risk assessment and corrective measures aimed at the source of the problem. However, providers must proceed carefully, since any profound modifications to their AI system may compromise their ability to identify the root causes of the serious incident.
- Providers of high-risk AI systems must disclose all relevant documentation to MSAs when such documentation is required for the MSA to successfully fulfill its responsibilities. Any documentation received by MSAs should be kept confidential, notwithstanding some exceptions.
- Providers of high-risk AI systems must disclose their source code to MSAs, but only if both of the following conditions are met:
- A conformity assessment necessitates access to the source code.
- Validation and verification procedures have proved inadequate despite all available efforts being taken.
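As a rough illustration of the incident-reporting timelines listed above, the sketch below maps the three incident criteria to their respective deadlines. This is a minimal sketch under the assumptions noted in the comments; the function and parameter names are hypothetical and are not defined by the EU AI Act.

```python
from datetime import date, timedelta

# Illustrative sketch only: deadlines reflect the criteria summarized above
# (2 days for incidents affecting critical infrastructure, no later than 10
# days for a death, 15 days where a causal link is established). Names and
# the fallback case are assumptions, not provisions of the EU AI Act.

def reporting_deadline(incident_date: date,
                       causal_link_established: bool,
                       affects_critical_infrastructure: bool,
                       resulted_in_death: bool) -> date:
    """Return the latest date by which a serious incident report is due."""
    if affects_critical_infrastructure:
        days = 2    # critical infrastructure: report within 2 days
    elif resulted_in_death:
        days = 10   # death: report immediately, and no later than 10 days
    elif causal_link_established:
        days = 15   # established causal link: report within 15 days
    else:
        days = 15   # assumption: default to the general 15-day window
    return incident_date + timedelta(days=days)

# Example: an incident on 1 March affecting critical infrastructure
print(reporting_deadline(date(2025, 3, 1), False, True, False))  # 2025-03-03
```

An actual reporting workflow would, of course, also need to identify the MSA of the member state in which the incident occurred and track any further guidance the Commission issues on the procedure.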
MSA Obligations
MSAs are the primary drivers of post-market monitoring operations and are therefore subject to a host of requirements, described below:
- When an MSA receives a serious incident report, it must notify national public authorities within one week of receiving it. The Commission will establish guidance on this specific procedure within one year of the EU AI Act taking effect.
- MSAs must immediately notify the Commission when serious incidents occur irrespective of whether the Commission has already begun addressing them.
- MSAs must provide annual reports on market surveillance activities to the Commission and NCAs, provided the information is relevant to EU AI Act implementation. Reports should include a description of any prohibited activities that occurred as well as the steps taken to address them.
- MSAs are responsible for enforcing necessary corrective or preventive measures if the use of a high-risk AI system leads to certain kinds of criminal offenses (listed in Annex IIa).
- MSAs are the authorities responsible for overseeing the use of high-risk AI systems when they are leveraged by financial institutions for the provision of financial services.
- EU member states’ respective MSAs can administer a joint investigation with the Commission to identify and resolve compliance failures when the EU AI Act’s provisions are violated. During such investigations, the AI Office should help relevant competent authorities coordinate their efforts.
- MSAs can appeal to the AI Office when they require additional information of interest during their investigations of high-risk AI systems. Within one month, the AI Office must determine whether to fulfill such requests and establish what kind of information will be accessed from the provider.
- MSAs will supervise and enforce real-world AI testing to ensure that it complies with the provisions listed in Article 54. When real-world testing is conducted within the confines of regulatory sandboxes, the MSA will simply verify that compliance requirements are met. If requirements are violated, the MSA can either:
- Suspend or terminate real-world testing procedures entirely.
- Require that the provider change some central aspect of real-world testing conditions.
In most cases, when MSAs make consequential decisions affecting AI providers, they need to explain to providers the why and how behind the decision, and providers should be granted the opportunity to challenge these decisions on appropriate grounds.
AI Systems that Pose a National Risk
Under the EU AI Act, national risks are those that pose a significant threat to EU citizens’ health, safety, and fundamental rights. The national governance body responsible for ensuring that such risks do not come to fruition is the MSA.
When a member state’s MSA determines that an AI system presents a national risk, it must conduct an evaluation of the system in question. If the system is found to be non-compliant prior to deployment, the provider must take corrective measures to address the identified risks. If the system is already in use, it must be withdrawn or recalled from the market so that risks can be effectively managed without increasing the probability of public harm. The MSA should also pay especially close attention to systems that present a foreseeable risk to vulnerable populations, and where risks to fundamental rights are noted, the MSA must openly inform and work with other relevant national governance bodies to mitigate them.
If AI providers fail to take corrective measures within 15 days of a national risk being identified, the MSA may prohibit or restrict the provider from placing their AI system on the market. If the system has already been deployed, the MSA may require that it be withdrawn from the market.
Finally, a member state’s MSA must inform the MSAs of other member states, as well as the Commission, of the measures suggested for ensuring a high-risk AI system’s compliance. If, within a period of three months, no objections to these measures are raised by any of the bodies previously mentioned, the measures will be deemed appropriate and should be taken.
Compliant AI Systems that Still Pose a Risk
Even if providers' high-risk AI systems are compliant, the MSA may still determine that they pose a risk to human health and safety, fundamental rights, and/or other characteristics of public interest. In such cases, the burden still falls on AI providers, and they must ensure that such risks are addressed prior to deployment.
In the same vein, when the aforementioned conditions are met, member states must report the existence of high-risk AI systems to the Commission and other member states in the EU, taking care to highlight the data required to identify the AI system in question, its supply chain characteristics, and the specific kind of risk(s) it poses in conjunction with the measures taken to address it.
Furthermore, the Commission must consult with member states and those who aim to use the AI system to scrutinize whether the corrective measures taken at the national level are sufficient. The Commission can then determine whether to validate these measures or suggest additional ones as necessary.
Additional Rights and Requirements
When EU government bodies whose purpose is to protect fundamental human rights encounter high-risk AI-driven scenarios that fall under their jurisdiction, they may request guidance on how best to fulfill their duties under the EU AI Act. To this point, member states must determine which governing bodies will serve to protect fundamental rights under the EU AI Act and include said bodies in a publicly available list.
- During periods of uncertainty, EU government bodies that protect fundamental rights may request that the MSA conduct a series of technical tests on a high-risk AI system to uncover more information, with the ultimate goal of informing procedures for dealing with possible AI-related infringements on fundamental rights.
- The EU Data Protection Supervisor is responsible for overseeing the use of high-risk AI systems utilized for law enforcement purposes. If such systems are lawfully used by judicial actors, the MSA must ensure that market surveillance activities do not compromise judicial capacity.
- The EU Data Protection Supervisor is responsible for overseeing the use of high-risk AI systems by EU government bodies.
- Member states should promote and uphold cooperation between their respective NCAs and MSAs to encourage and facilitate the EU-wide standardization of rules governing high-risk AI systems.
As for GPAI systems, the authority responsible for monitoring, overseeing, and enforcing the compliance of such systems is the AI Office, regardless of whether an AI system is based on a GPAI system or developed by a GPAI provider. Crucially, when GPAI systems can be leveraged by deployers for purposes that are deemed high-risk, deployers must work with the AI Office to ensure compliance and subsequently inform the Board and other MSAs as appropriate.
On a different note, MSAs themselves are not beyond scrutiny. If EU citizens believe that an MSA has violated any of the EU AI Act’s provisions, they have the right to lodge a complaint against it. Individual citizens also have a right to understand how AI-driven or AI-assisted decision-making may impact them, and AI deployers must provide this kind of information to individuals when the decision made can or does produce adverse impacts on their wellbeing and livelihood.
Finally, readers may be asking themselves how such a monumental piece of legislation will be enforced—when the EU AI Act’s provisions are violated, especially as they concern providers and deployers of GPAI and high-risk AI systems, regulatory penalties and fines will apply. Union bodies, agencies, and institutions will also be held accountable in this context, and will receive administrative fines in accordance with any relevant regulatory infringements. Of note, fines administered to SMEs, start-ups, and enterprises should proportionately reflect the financial capacity of said institutions and the risk profile of the non-compliant AI system in question. Overall, whether fines or penalties are administered will depend on a variety of different factors, however, in most cases, the most salient determining factors will be the risk classification of a given AI system and the measures taken to mitigate risks accordingly.
Conclusion
There is a lot to digest here, which speaks to the breadth, complexity, and depth of the EU AI Act. Fortunately, this post, along with our two previous posts in the EU AI Act series, has now distilled the essential notions, principles, functional characteristics, and rights and requirements proposed by the EU AI Act, at both the macro and micro scale.
In terms of rights and requirements, however, there are many additional important details—such as exemptions and exceptions to certain rules or guidelines—that will determine the nuances of how the Act applies to various actors within the European AI ecosystem. In other words, when it comes to ironing out the fine details of EU AI Act implementation, actors subject to regulatory provisions would be wise to review the Act directly for more targeted and specific guidance.
Moving forward, the next few pieces in our EU AI Act series will be more analytical than descriptive in nature. They will be motivated by the information we’ve uncovered and illustrated in the first part of this series, but their ultimate goal will be to help stakeholders better understand how the EU AI Act might evolve over time as well as the effects it could generate on markets, the employment landscape, and the eventual standardization of AI policy.
To keep up with the next wave of EU AI Act content and thought leadership, we invite readers to follow Lumenova AI’s blog, where they can also glean insights on the responsible and generative AI landscape alongside other important developments in AI policy.
For readers who are interested in taking tangible steps toward responsible AI (RAI) integration, governance, risk management, and compliance, Lumenova AI’s RAI platform can serve as a valuable resource and tool. To gain a better understanding of what our platform can do for you or your organization, book a product demo today.
Decoding the EU AI Act Series
Decoding the EU AI Act: Scope and Impact
Decoding the EU AI Act: Regulatory Sandboxes and GPAI Systems
Decoding the EU AI Act: Transparency and Governance
Decoding the EU AI Act: Standardizing AI Legislation
Decoding the EU AI Act: Influence on Market Dynamics
Decoding the EU AI Act: Future Provisions