June 29, 2023

Europe's AI Act: Key Takeaways


This is our second blog post on the EU AI Act, expanding on the insights shared in our previous article. If you haven't read the earlier post, we recommend starting there for a comprehensive view of the Act's impact on the future of AI.

With the European Parliament's adoption of its negotiating position on the AI Act this month, the EU took a major step forward in tech policy. The landmark vote, with 499 in favor, 28 against, and 93 abstentions, positions Europe as a global leader in AI regulation.

The Parliament’s position on the AI Act highlights several key aspects.

Firstly, there is a push for a complete ban on AI for biometric surveillance, emotion recognition, and predictive policing.

Secondly, generative AI systems like ChatGPT will be required to disclose that the content they produce is AI-generated (a brief sketch of what such a disclosure might look like follows this list).

Lastly, AI systems used to influence voters in elections will be considered high-risk.
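
To make the disclosure obligation concrete, below is a minimal, hypothetical Python sketch of how a provider might attach a machine-readable "AI-generated" label to every piece of generated content. The field names and the `generate` stub are illustrative assumptions, not part of the Act's text or of any particular model API.

```python
# A hypothetical sketch of satisfying a disclosure obligation:
# every generated text is returned with an explicit provenance label.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class LabeledOutput:
    text: str
    ai_generated: bool  # explicit disclosure flag
    model_id: str       # which system produced the content
    generated_at: str   # ISO 8601 timestamp


def generate(prompt: str) -> str:
    # Stand-in for a call to a real generative model.
    return f"[model response to: {prompt!r}]"


def generate_with_disclosure(prompt: str, model_id: str) -> LabeledOutput:
    return LabeledOutput(
        text=generate(prompt),
        ai_generated=True,
        model_id=model_id,
        generated_at=datetime.now(timezone.utc).isoformat(),
    )


if __name__ == "__main__":
    out = generate_with_disclosure("Summarize the EU AI Act", "example-llm-1")
    print(f"{out.text}\n-- AI-generated by {out.model_id} at {out.generated_at}")
```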

However, while the EU AI Act has moved one step closer to enforcement, it's important to note that immediate clarity should not be expected.

This is because the draft rules will now need to be negotiated with the Council of the European Union and the European Commission, the executive branch of the EU.

The final legislation will therefore be a compromise between the differing drafts of the three institutions (the European Commission, the Council, and the Parliament), hammered out in negotiations known as the "trilogue." Officials are aiming for a final agreement by year-end.

A Recap of the EU AI Act's Risk-Based Approach

As we previously discussed in one of our articles, the EU AI Act adopts a risk-based approach, similar to the EU’s Digital Services Act. This approach imposes restrictions based on the perceived level of risk associated with an AI application.

The Regulation introduces a four-tier system based on risk levels: unacceptable risk, high risk, limited risk, and minimal risk. High-risk systems will have specific requirements for providers, including risk management, transparency, and human oversight.

The Act also has the potential to ban certain AI applications entirely if the risk is deemed “unacceptable.”
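
To illustrate the recap above, here is a small, hypothetical Python sketch of the four-tier taxonomy. The tier names come from the Act; the example use cases and the one-line obligation summaries are simplified illustrations drawn from this post, not a legal classification rule.

```python
# A simplified, illustrative model of the AI Act's four risk tiers.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk"  # prohibited outright
    HIGH = "high risk"                  # strict provider obligations
    LIMITED = "limited risk"            # transparency obligations
    MINIMAL = "minimal risk"            # largely unregulated


# Illustrative mapping only; real classification follows the Act's annexes.
EXAMPLE_CLASSIFICATION = {
    "social scoring by public agencies": RiskTier.UNACCEPTABLE,
    "AI used to influence voters in elections": RiskTier.HIGH,
    "chatbot interacting with users": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}


def obligations(tier: RiskTier) -> str:
    return {
        RiskTier.UNACCEPTABLE: "banned from the EU market",
        RiskTier.HIGH: "risk management, transparency, human oversight",
        RiskTier.LIMITED: "disclosure when users interact with AI",
        RiskTier.MINIMAL: "no new obligations",
    }[tier]


if __name__ == "__main__":
    for use_case, tier in EXAMPLE_CLASSIFICATION.items():
        print(f"{use_case}: {tier.value} -> {obligations(tier)}")
```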

Key Implications of the Act

Prohibition of Emotion-Recognition AI: The draft text from the European Parliament bans the use of AI for emotion recognition in settings such as law enforcement, schools, and workplaces.

Restrictions on Real-Time Biometrics and Predictive Policing: The proposal to ban real-time biometrics and predictive policing in public places is expected to be a topic of contention and might require careful legislative negotiation.

Social Scoring Prohibition: The draft rules propose a prohibition on social scoring by public agencies, a practice commonly associated with autocratic governments.

Regulation of Generative AI: The draft offers the first proposals for regulating generative AI, including a requirement that providers of large language models publish summaries of the copyrighted material used in their training data.

Greater Scrutiny of Social Media Recommendation Algorithms: The new draft categorizes the recommender systems of very large social media platforms as "high risk," subjecting them to closer scrutiny.

Compliance Mechanism & Penalties

To enforce the law, Member States will be required to establish notifying authorities and conformity assessment bodies for high-risk AI systems. More lightly regulated AI systems will instead carry transparency obligations for certain interactions with individuals.

Non-compliance sanctions will be similar to those under GDPR, with penalties based on a company’s worldwide annual turnover.
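
As a worked illustration of this GDPR-style formula, the sketch below computes the maximum applicable fine as the higher of a fixed cap and a percentage of worldwide annual turnover. The €35 million / 7% figures are the final Act's headline maximums for the most serious violations (the Parliament's draft proposed €40 million / 7%); treat them as illustrative inputs, not legal advice.

```python
# Worked example of a turnover-based penalty cap, GDPR-style:
# the maximum fine is the higher of a fixed amount and a
# percentage of worldwide annual turnover.
def max_fine(turnover_eur: float,
             fixed_cap_eur: float = 35_000_000,
             turnover_pct: float = 0.07) -> float:
    """Return the maximum applicable fine: whichever amount is higher."""
    return max(fixed_cap_eur, turnover_pct * turnover_eur)


if __name__ == "__main__":
    for turnover in (100_000_000, 1_000_000_000):
        print(f"turnover €{turnover:,}: max fine €{max_fine(turnover):,.0f}")
```

For example, a company with €1 billion in worldwide turnover would face a maximum of €70 million (7% of turnover), since that exceeds the fixed cap; a company with €100 million in turnover would face the €35 million cap instead.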

It’s also important to note that, while the regulation itself does not provide a specific right to compensation, violations can potentially lead to civil claims under Member State law.

Enforcement

Enforcement of the AI Act will primarily be the responsibility of national competent authorities in each Member State. However, the Parliament’s AI Act introduces a significant shift in the approach to market surveillance compared to the European Commission and the Council.

Specifically, the Parliament's proposal mandates the establishment of a single national surveillance authority (NSA) in each Member State. This deviates from the Council and Commission versions of the AI Act, which would allow Member States to create multiple market surveillance authorities (MSAs) according to their preference.

All three AI Act proposals recognize the need for existing agencies to serve as MSAs in certain areas, such as AI in financial services, AI in consumer products, and AI in law enforcement. The Council and Commission proposals even allow for the expansion of this approach. However, the Parliament's proposal restricts the creation of additional MSAs, requiring Member States to establish only one NSA for enforcing the AI Act, with a few selected exceptions like finance and law enforcement.

This difference in approach between the three institutions will be a point of discussion during the trilogue process.

Advantages of a Single NSA Approach:

Talent and expertise: A single NSA allows for better talent recruitment and internal expertise, enhancing the enforcement of the AI Act compared to multiple distributed MSAs.

Streamlined coordination: Centralization simplifies coordination between member states, with one agency per state and a voting seat on the AI Office board for all NSAs. This eliminates the need for numerous coordination councils.

Disadvantages of a Single NSA Approach:

Fragmented oversight: Having a separate NSA from existing regulators means algorithms used in areas like hiring, workplace management, and education would be governed by different authorities than human actions in the same areas, leading to fragmented oversight.

Interpretation and implementation challenges: AI Act interpretation and implementation may suffer in some areas due to the separation of AI experts and subject matter experts in separate agencies.

Prioritizing Discussions

Given how much government oversight will shape the AI Act in practice, the choice between a single NSA and multiple MSAs should be a priority in the trilogue discussions. The goal is to strike a balance between centralized expertise and streamlined coordination while addressing the fragmentation and interpretation concerns outlined above.

Europe’s AI Act: Guiding the Future of AI Regulation

Europe’s AI Act represents a significant development in AI regulation and sets the stage for comprehensive rules governing AI applications. The risk-based approach, compliance mechanism, and enforcement provisions outlined in the Act will shape the future of AI in Europe.

While the negotiation process and implementation may take time, this landmark legislation demonstrates Europe’s commitment to addressing the ethical and societal challenges posed by AI.

Lumenova AI - Your Trusted Guide in the EU AI Act Journey

Compliance with the EU AI Act is vital for your organization’s prosperity, and we understand its significance. Lumenova AI simplifies and streamlines your compliance journey, so your organization can adhere to the EU AI Act’s requirements.

Get in touch with us at sales@lumenova.ai or via our contact form, and experience the seamless integration of Lumenova AI with your existing ML stack.

Frequently Asked Questions

What is the EU AI Act?

The EU AI Act is a landmark regulation that establishes a risk-based framework for AI governance, requiring businesses developing or deploying AI within the European Union to comply with strict transparency, accountability, and safety standards. It supports a centralized, horizontal regulatory structure that ensures uniform rules and enforcement across all member states. Companies operating in high-risk AI sectors, such as finance, healthcare, and law enforcement, must implement robust AI risk management strategies and adhere to compliance requirements to avoid penalties and ensure responsible AI deployment.

What are the compliance requirements for businesses using AI in the EU?

Businesses using AI in the EU must meet specific compliance obligations based on the risk classification of their AI systems, with high-risk AI applications requiring conformity assessments, bias detection measures, human oversight, and detailed documentation. Compliance also involves aligning AI governance policies with the AI Act's transparency, explainability, and data protection standards to ensure ethical AI practices and regulatory adherence.

What are the penalties for non-compliance with the EU AI Act?

Companies failing to comply with the EU AI regulation face significant financial penalties, with fines for the most serious violations reaching up to €35 million or 7% of global annual turnover, whichever is higher, surpassing GDPR penalties. Non-compliant AI systems risk legal action, reputational damage, and potential market restrictions, making it essential for businesses to integrate AI compliance frameworks and risk mitigation strategies to avoid regulatory consequences. The penalty provisions became applicable on August 2, 2025, alongside the obligations for General-Purpose AI models, although fines for providers of those models only become enforceable from August 2, 2026. Businesses are expected to ensure compliance by these deadlines, and any remediation efforts would be handled on a case-by-case basis by national authorities rather than under a uniform six-month window.

How does the AI Act classify AI systems?

The AI Act classifies AI systems into unacceptable, high-risk, limited-risk, and minimal-risk categories, with high-risk AI applications, such as biometric identification and AI used in law enforcement, subject to stringent regulatory oversight. Businesses using AI in critical areas must conduct risk assessments, implement AI governance frameworks, and ensure compliance with AI transparency and accountability standards.

How can organizations prepare for EU AI Act compliance?

To comply with the EU AI Act, organizations should establish AI governance frameworks, conduct AI risk assessments, implement bias detection and mitigation techniques, and ensure transparency in AI decision-making. Partnering with AI compliance solutions like Lumenova AI helps businesses streamline regulatory adherence, safeguard AI model integrity, and align with the EU's evolving AI legal landscape.

What is the AI Office?

The AI Office is a dedicated unit created by the European Commission to guide and coordinate the implementation of the EU AI Act. It offers expert advice on meeting compliance requirements, works closely with national market surveillance authorities, and helps develop non-binding codes of practice. Its central role is to ensure that the regulation is applied uniformly across the EU, thereby promoting transparency and trust in AI technologies.

What are regulatory sandboxes under the EU AI Act?

Regulatory sandboxes are controlled environments established by national regulators where companies can test innovative AI solutions under real-world conditions without fully committing to all regulatory requirements immediately. They provide a safe space for businesses to experiment and refine their technologies, receive regulatory guidance, and demonstrate compliance. This setup helps balance innovation with oversight, reducing risks before a full market launch.

What is post-market monitoring under the EU AI Act?

Post-market monitoring is an ongoing process that ensures AI systems continue to meet safety, fairness, and performance standards after deployment. By continuously assessing the operation of AI applications, regulators and companies can promptly identify and address any emerging issues. This practice is essential for maintaining long-term compliance, enabling timely corrective actions, and ensuring that AI systems remain reliable and trustworthy throughout their lifecycle.
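
For a flavor of what this can look like in practice, here is a minimal, hypothetical Python sketch that scores a deployed model on recent production data and raises an alert when a tracked metric drifts outside an agreed tolerance. The baseline figure, tolerance, metric, and `evaluate` stub are all assumptions for illustration, not requirements prescribed by the Act.

```python
# A minimal, hypothetical post-market monitoring check: compare a
# deployed model's current accuracy against a deployment-time baseline.
from statistics import mean

BASELINE_ACCURACY = 0.92  # measured at deployment time (assumed)
TOLERANCE = 0.05          # acceptable drop before corrective action


def evaluate(batch: list[tuple[int, int]]) -> float:
    """Stand-in for a real evaluation: fraction of correct predictions."""
    return mean(1.0 if pred == label else 0.0 for pred, label in batch)


def check_post_market(batch: list[tuple[int, int]]) -> None:
    accuracy = evaluate(batch)
    if accuracy < BASELINE_ACCURACY - TOLERANCE:
        # In practice: log the incident, notify the provider's
        # quality-management system, and investigate root causes.
        print(f"ALERT: accuracy {accuracy:.2f} below tolerance; review required")
    else:
        print(f"OK: accuracy {accuracy:.2f} within tolerance")


if __name__ == "__main__":
    # (prediction, label) pairs from recent production traffic (synthetic)
    recent = [(1, 1), (0, 0), (1, 0), (1, 1), (0, 0), (1, 1), (0, 1), (1, 1)]
    check_post_market(recent)
```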

Related topics: EU AI Act

Make your AI ethical, transparent, and compliant - with Lumenova AI

Book your demo