December 11, 2023

Landmark Agreement on EU AI Act


The European Parliament has achieved a significant milestone in the regulation of Artificial Intelligence (AI) with a provisional agreement on the much-anticipated Artificial Intelligence Act. This comprehensive framework is set to ensure that AI systems are developed and deployed in a manner that is safe, respects fundamental rights, and promotes democracy, all while fostering innovation and economic growth within the EU.

The Deal: Balancing Innovation with Fundamental Rights

After intensive negotiations, the Parliament and the Council agreed on a bill designed to protect citizens from high-risk AI applications without stifling the technological advancements that can benefit society. The act introduces new obligations for AI systems based on their potential risks and levels of impact, aiming to protect fundamental rights, the rule of law, and environmental sustainability.

Following the announcement of the EU deal, co-rapporteur Brando Benifei, an Italian member of the EU Parliament (MEP), said that “it was long and intense, but the effort was worth it. Thanks to the European Parliament’s resilience, the world’s first horizontal legislation on artificial intelligence will keep the European promise – ensuring that rights and freedoms are at the center of the development of this ground-breaking technology. Correct implementation will be key – the Parliament will continue to keep a close eye, to ensure support for new business ideas with sandboxes, and effective rules for the most powerful models”.

Prohibited and High-Risk Applications

The act explicitly prohibits certain AI applications deemed to pose a threat to citizens' rights and democracy. These include:

  • Biometric categorization systems that process sensitive characteristics;
  • Indiscriminate scraping of facial images for recognition databases;
  • Emotion recognition in the workplace and educational institutions;
  • Social scoring systems that assess individuals based on social behavior or personal characteristics;
  • AI systems that manipulate human behavior or exploit vulnerabilities due to age, disability, or socioeconomic status.

Moreover, high-risk AI systems associated with significant potential harm will be subject to stringent requirements, including mandatory fundamental rights impact assessments.

Law Enforcement and AI

A contentious aspect of the negotiations was the use of AI by law enforcement agencies. The agreed text allows for narrow, safeguarded exceptions for real-time and “post-remote” biometric identification in public spaces for serious crime prevention, subject to judicial authorization and other stringent conditions.

Obligations for General-Purpose AI Systems

General-purpose AI (GPAI) systems are subject to transparency requirements. These include technical documentation, compliance with EU copyright law, and detailed summaries of training data content. For high-impact GPAI models with systemic risks, the act stipulates additional obligations, including risk assessments, adversarial testing, and energy efficiency reporting.
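As a quick reference, the sketch below condenses the obligation tiers described in the sections above into a simple lookup table. It is a minimal illustrative summary in Python, not the Act's legal text; the key names and the `obligations_for` helper are our own shorthand, and the lists paraphrase this article rather than the regulation itself.

```python
# Illustrative, simplified summary of the tiers described above; not legal text.
OBLIGATIONS_BY_TIER: dict[str, list[str]] = {
    "prohibited": [
        "biometric categorization using sensitive characteristics",
        "indiscriminate scraping of facial images for recognition databases",
        "emotion recognition in workplaces and educational institutions",
        "social scoring based on behavior or personal characteristics",
        "manipulation of behavior or exploitation of vulnerabilities",
    ],
    "high_risk": [
        "fundamental rights impact assessment",
        "ongoing monitoring and regulatory oversight",
    ],
    "general_purpose_ai": [
        "technical documentation",
        "compliance with EU copyright law",
        "detailed summary of training data content",
    ],
    "gpai_with_systemic_risk": [
        "model risk assessments",
        "adversarial testing",
        "energy efficiency reporting",
    ],
}


def obligations_for(tier: str) -> list[str]:
    """Look up the headline obligations for a given tier (illustrative only)."""
    return OBLIGATIONS_BY_TIER.get(tier, [])


if __name__ == "__main__":
    for item in obligations_for("gpai_with_systemic_risk"):
        print("-", item)
```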

Fostering Innovation and Protecting SMEs

The act encourages innovation and supports small and medium-sized enterprises (SMEs) by promoting regulatory sandboxes and real-world testing, enabling businesses to develop and refine AI technologies.

Sanctions and Implementation

The agreement also sets out severe penalties for non-compliance, with fines of up to €35 million or 7% of global turnover, depending on the severity and nature of the infringement.
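To make the headline figure concrete, here is a minimal illustrative sketch of how the upper bound of the top penalty tier could be computed, assuming the ceiling is the higher of €35 million or 7% of worldwide annual turnover (as noted in the FAQ below). The function name and example turnover are hypothetical; actual fines are set by regulators based on the severity and nature of the infringement, not by a formula.

```python
def max_fine_eur(global_annual_turnover_eur: float,
                 flat_cap_eur: float = 35_000_000,
                 turnover_share: float = 0.07) -> float:
    """Illustrative upper bound of the top penalty tier: the higher of a
    flat cap or a share of worldwide annual turnover (35 million EUR or 7%)."""
    return max(flat_cap_eur, turnover_share * global_annual_turnover_eur)


# Example: for a company with EUR 2 billion in global annual turnover,
# 7% of 2,000,000,000 is 140,000,000, which exceeds the 35 million flat cap.
print(f"Maximum fine: EUR {max_fine_eur(2_000_000_000):,.0f}")
```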

A Pioneering Step Toward Robust AI Regulation

Dragos Tudorache, co-rapporteur of the act and Romanian MEP, said, “The EU is the first in the world to set in place robust regulation on AI, guiding its development and evolution in a human-centric direction. The AI Act sets rules for large, powerful AI models, ensuring they do not present systemic risks to the Union and offers strong safeguards for our citizens and our democracies against any abuses of technology by public authorities. It protects our SMEs, strengthens our capacity to innovate and lead in the field of AI, and protects vulnerable sectors of our economy. The European Union has made impressive contributions to the world; the AI Act is another one that will significantly impact our digital future”.

Next Steps

The provisional agreement marks just the beginning of the journey. The agreed text must now be formally adopted by both the Parliament and the Council to become EU law, and Parliament's Internal Market and Civil Liberties committees will vote on the agreement at a forthcoming meeting, one of the final steps towards enshrining the AI Act in EU law.

As the European Union positions itself as a leader in the ethical development and application of AI, the world watches with keen interest, recognizing the potential global impact of this groundbreaking legislation.

A Framework for Trustworthy AI

The Lumenova AI platform is set to incorporate the Act’s provisions on transparency, accountability, and data governance. This means clients can expect a level of product integrity that respects user privacy, avoids bias, and promotes fairness. The inclusion of the Act’s guidelines will reinforce Lumenova AI’s position as a trusted provider in the AI industry, keeping the company at the forefront of responsible technological development.

Frequently Asked Questions

What is the EU AI Act?

The EU AI Act is the world’s first comprehensive AI regulation, designed to ensure that AI is developed and used safely, ethically, and transparently. It establishes a tiered risk classification framework that categorizes AI systems based on their potential impact, balancing the need for innovation with responsible AI development and deployment.

Which AI applications are banned under the EU AI Act?

The EU AI Act bans certain AI applications that pose significant risks to fundamental rights and democracy, including biometric categorization systems that use sensitive characteristics, indiscriminate scraping of facial images for recognition databases, emotion recognition in the workplace and educational institutions, social scoring systems, and AI models that manipulate human behavior or exploit vulnerabilities.

What are the requirements for high-risk AI systems?

High-risk AI systems under the EU AI Act must meet strict compliance requirements, including transparency obligations, risk assessments, and fundamental rights impact assessments. These systems are also subject to ongoing monitoring, particularly after deployment, and to regulatory oversight to ensure ethical and responsible AI use.

What are the penalties for non-compliance?

Companies that fail to comply with the EU AI Act face significant penalties, including fines of up to €35 million or 7% of their global annual revenue, whichever is higher, depending on the severity of the violation. These penalties are intended to enforce accountability and promote responsible AI practices. The timeframe for resolving compliance penalties can vary widely: uncontested fines might be resolved within about 30 days, while complex cases or appeals could take several years. The final text of the regulation (Regulation (EU) 2024/1689) addresses penalties, enforcement, and implementation timelines in Articles 99, 101, and 113.

How can businesses prepare for the EU AI Act?

Businesses should implement AI governance frameworks, conduct AI risk assessments, and align their AI practices with transparency and accountability standards. Leveraging AI compliance solutions like Lumenova AI can help organizations navigate regulatory requirements and ensure adherence to the EU AI Act.

Related topics: EU AI Act

Make your AI ethical, transparent, and compliant - with Lumenova AI

Book your demo