June 20, 2024

AI Policy Analysis: European Union vs. United States


The European Union (EU) and the United States (US) are arguably the two most influential actors in setting global AI policy trends. However, while many similarities exist between the US and EU’s AI governance strategies, noteworthy differences do emerge, several of which carry significant implications for AI design, development, and deployment within each jurisdiction.

Nonetheless, these implications, while they may initially be confined to the US and EU, will produce consequences that extend far beyond national borders for a few important reasons.

First, the US holds a commanding international lead in AI innovation, exemplified by a disproportionately well-funded AI ecosystem that houses several of the world’s leading AI and technology companies. The US is in a unique position, given its wealth of AI resources and talent, to set AI innovation benchmarks defined by advancements in frontier AI applications. This isn’t to say that other countries can’t develop and deploy frontier AI on their own, only that compared to the US, it’s much more difficult. Simply put, the US AI ecosystem is more likely to churn out high-value AI companies, many of which will ultimately make their way into the global ICT market, 36% of which is already dominated by US-based tech companies.

Second, the US and EU are the wealthiest democracies in the world, which is relevant because democratic countries are more likely to cooperate on international research collaborations and institution building; some have even gone so far as to suggest that scientific values are fundamentally incompatible with autocratic governance systems due to their dogmatic belief structures. Still, the “wealth” component here, when coupled with the democratic component, is what really fuels both the US and EU’s wide-ranging influence on global governance. For instance, together, the US and EU account for approximately one-third of world GDP, exhibit some of the highest Quality of Life (QoL) ratings globally, contain several of the world’s leading academic institutions and scientific research institutes, and maintain currencies (the US Dollar and the Euro) that consistently rank among the top 10 most powerful in the world.

Third, the US and several EU countries, including Sweden, Denmark, Finland, Germany, and the Netherlands, rank among the most innovative and technologically advanced countries globally. Similarly, the US, France, and Germany have historically ranked among the world’s most prominent scientific research contributors as measured by total yearly research publications, and currently, the US holds the second-highest number of AI patent filings, outpaced only by China. It’s also worth noting that non-EU countries like the UK and Switzerland, which also rank among the most innovative and technologically advanced nations, have established strong relationships with the US and EU, correspondingly increasing the breadth and availability of potential sources of cross-border value, talent, and innovation.

Finally, since the adoption of the General Data Protection Regulation (GDPR), the EU has established a sturdy foothold and influence in the global technology regulation landscape, with several jurisdictions around the world, including many US states, modeling their data protection and privacy laws on European standards. To further put this into perspective, the US and EU have one of the strongest bilateral partnerships in the world, exemplified by the post-pandemic renewal of the Transatlantic partnership. The US’s lead in AI innovation, in conjunction with the EU’s lead in AI regulation, lays the foundation for these two bodies, together, to exercise more influence on the global AI ecosystem than any other individual nation or group of nations (at least for now).

Consequently, it’s vital that the US and EU are aligned with one another on all the core characteristics of AI governance; however, as we’ll see shortly, this isn’t exactly the case. In the following sections, we’ll begin by examining relevant similarities and differences between the US and EU AI governance approaches, after which we’ll discuss the implications of this comparison, both for the US and EU and on an international scale.

Policy Comparison: US vs. EU

Throughout this discussion, we’ll focus our attention on two major pieces of legislation: the EU AI Act (AI Act) and President Biden’s Executive Order on Safe, Secure, and Trustworthy AI (President Biden’s EO).

The EU AI Act is an obvious choice given its status as the world’s first comprehensive and enforceable piece of AI legislation. However, President Biden’s EO is a less obvious choice when considering how many AI laws are emerging on a state-by-state basis in the US, and the EO’s potentially volatile legal status as an executive action rather than a statute. Nonetheless, we’re examining President Biden’s EO, as opposed to other prominent US AI legislation, because it represents the voice of the federal government, not individual US states, in the same way that the AI Act represents the voice of the EU Parliament, not individual member states. And yes, we realize that both pieces of legislation should ideally and holistically represent the voices of those they govern, but this is a point that falls beyond the scope of our current discourse.

Before we get into it, here’s a brief overview of both pieces of legislation:

  • The AI Act: An all-encompassing, horizontal, and legally binding AI regulatory framework that leverages a tiered risk classification structure to calibrate compliance, safety, and risk management requirements for AI systems. It aims to promote human-centric and responsible AI (RAI) innovation in the EU while preserving and protecting democratic values, fundamental rights, and human health and safety. The AI Act also explicitly strives to standardize AI legislation across the EU, ensure the adequate management of AI risks and benefits, establish AI-specific governance bodies, and foster AI literacy at all levels of society. For a deep dive into the AI Act, see Lumenova AI’s EU AI Act series.
  • President Biden’s EO: The first centralized attempt to ensure that federal agencies’ AI initiatives are aligned with best practices and standards in safe, ethical, and trustworthy AI. President Biden’s EO is principle-centric, supporting and upholding core RAI principles like transparency and accountability, mandating regular reporting and review mechanisms, and leveraging existing US tech governance bodies for oversight and implementation. It also outlines AI-specific concerns surrounding national security, environmental sustainability, and critical infrastructure. While the order primarily targets the government’s design, development, procurement, and deployment of AI systems, it aims to set a broader precedent for AI governance throughout the private sector, and ultimately, the entire US.

At a high level, some notable similarities between both these regulations emerge. For one, each strives to ensure that AI innovation and proliferation don’t result in severe adverse consequences to democracy, fundamental human rights, and public safety. In the same vein, both aim to promote and sustain the development of trustworthy and human-centric AI systems, encouraging continual support for and investment in AI research and innovation, and upholding core RAI principles throughout the AI lifecycle. Each also facilitates some form of cross-agency collaboration and attempts to lay the foundation for a unified national AI governance strategy that transcends all industries and domains.

There are, however, additional similarities that emerge at a more granular level, though even in this context, we’ll begin to note some key differences:

  • While both the EO and AI Act strongly emphasize the importance of fairness and non-discrimination, only the AI Act outlines strict legal parameters aimed at actively preventing AI-driven discrimination and bias, especially across high-impact domains.
  • Transparency is a core principle at the center of both regulations, with each calling for the disclosure of AI use and content, as well as output explainability. However, unlike the AI Act, which qualifies transparency requirements by reference to AI risk classifications, President Biden’s EO doesn’t actually mandate any targeted transparency obligations.
  • The AI Act sets clear and enforceable accountability standards through mandatory conformity assessments, human oversight measures, and documentation requirements. While President Biden’s EO also stresses the importance of accountability by requiring that federal agencies instill regular reporting, monitoring, and ethical review mechanisms, federal agencies aren’t yet held to any enforceable accountability standards.
  • President Biden’s EO recognizes the necessity of AI system robustness and resilience in critical application domains like national security and public safety, whereas the AI Act requires that AI systems, particularly those classified as high-risk, be subjected to comprehensive risk management systems, data governance protocols, post-market monitoring provisions, and third-party conformity assessments, among other requirements.
  • Both legislations are risk-centric, aiming to mitigate and manage key risks and impacts associated primarily with AI deployment; however, the AI Act adopts a much more intensive and targeted perspective through its tiered risk classification structure, which categorizes AI systems across four main risk categories: minimal, limited, high, and unacceptable (see the illustrative sketch after this list).
  • Both legislations rely on technology-centric governance bodies for regulatory oversight, guidance, and implementation, but only the AI Act explicitly requires the creation and establishment of centralized AI governance bodies like the AI Office, which are also responsible for regulatory enforcement. By contrast, President Biden’s EO relies on existing government agencies, namely the National Institute of Standards and Technology (NIST) and the Office of Science and Technology Policy (OSTP).
  • Both legislations promote stakeholder engagement at all levels of society, including government agencies, international partners, industry specialists, and academia, and while both also value engagement with civil society, the AI Act places a somewhat stronger emphasis on this component than President Biden’s EO, given that AI literacy is one of its core objectives.
  • Both legislations support inclusive innovation, equitable access to AI benefits, and the cultivation of AI literacy through public awareness campaigns, increased funding for R&D, the facilitation of public-private partnerships, educational programs and resources, ethical guidelines and AI training, and stakeholder engagement, but only the AI Act establishes safe innovation hubs and specific accessibility requirements.
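
To make the contrast between the AI Act’s tiered structure and the EO’s uniform, principles-driven approach more concrete, below is a minimal, purely illustrative Python sketch of how the Act’s four risk tiers might map to escalating obligations. The tier names come from the AI Act itself, but the OBLIGATIONS mapping and the obligations_for helper are simplified assumptions for illustration, not a rendering of the legal text.

    from enum import Enum

    class RiskTier(Enum):
        """The AI Act's four risk categories."""
        MINIMAL = "minimal"
        LIMITED = "limited"
        HIGH = "high"
        UNACCEPTABLE = "unacceptable"

    # Hypothetical, heavily simplified mapping from tier to example obligations;
    # the actual AI Act attaches far more detailed requirements to each tier.
    OBLIGATIONS = {
        RiskTier.MINIMAL: ["voluntary codes of conduct"],
        RiskTier.LIMITED: ["transparency disclosures, e.g., labeling AI-generated content"],
        RiskTier.HIGH: [
            "risk management system",
            "conformity assessment",
            "human oversight",
            "post-market monitoring",
        ],
        RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    }

    def obligations_for(tier: RiskTier) -> list[str]:
        """Return the illustrative obligations attached to a given risk tier."""
        return OBLIGATIONS[tier]

    print(obligations_for(RiskTier.HIGH))

Under the EO’s principles-driven approach, by contrast, there is no analogous tier lookup: the same high-level guidelines apply to all AI systems indiscriminately.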

In addition to these relatively subtle differences between the AI Act and President Biden’s EO, several more pronounced differences also emerge, one of the clearest of which relates to scope: President Biden’s EO exclusively recognizes government agencies as AI providers and deployers, whereas the AI Act expands these terms to cover all relevant private and civil entities, regulating AI horizontally (across industries and domains) rather than vertically (within one industry or domain). Moreover, the AI Act offers a comprehensive risk classification framework through its tiered structure, and while President Biden’s EO acknowledges the importance of AI risk profiles, specifically those of dual-use foundation models, its principles-driven approach favors guidelines rather than concrete standards. Below, we expand upon several additional notable differences:

  • The AI Act is an enforceable piece of legislation, meaning that significant penalties such as fines and market sanctions can be administered when its provisions are violated. By contrast, while President Biden’s EO does mandate implementation plans, oversight procedures, reporting mechanisms, interagency coordination, and public accountability measures, it lacks any direct penalties for regulatory violations.
  • In terms of legal status, President Biden’s EO is volatile: future US presidents could repeal it if they see fit, or a court could strike it down as unconstitutional. Repealing the AI Act, however, would be much more difficult, since the EU Commission would first need to submit a repeal proposal, which would then have to be approved by both the EU Parliament and the EU Council.
  • The AI Act specifically defines certain kinds of high-risk AI systems, such as systems used for biometric categorization or consequential decision-making, general-purpose systems that pose systemic risk, and generative AI models used to create deepfakes, to name a few examples. By contrast, President Biden’s EO, while it covers broad AI use cases like national security, defense, and public services, doesn’t offer any such definitions.
  • President Biden’s EO mandates several risk management, transparency, accountability, human oversight, and reporting guidelines, but unlike the AI Act, it doesn’t establish and implement any secure safety testing and validation hubs. The AI Act, through its implementation of regulatory sandboxes, ensures that high-risk AI systems can be safely and securely tested before deployment, and in some cases, even allows for controlled real-world testing.
  • The AI Act holds all AI providers and deployers who intend to do business in the EU accountable for any risks and impacts their systems could produce; AI providers and deployers don’t need to be EU-based to be subject to AI Act compliance requirements. President Biden’s EO, on the other hand, is domestically driven, meaning that only US-based government entities must adhere to its provisions.
  • If it isn’t obvious yet, President Biden’s EO is principles-driven whereas the AI Act, while it is principles-based, is rules-driven. Under the AI Act, AI providers and deployers are held to a strict set of requirements, whereas under the EO, key actors are expected to follow the high-level guidelines that have been set for them.
  • While both the AI Act and President Biden’s EO consider several mechanisms for regulatory adaptation, revision, and improvement, the EO retains a much higher degree of flexibility due to its principles-driven approach, which applies to all AI technologies indiscriminately. Conversely, the AI Act, through its comprehensive scope, stringent parameters, tiered risk classification structure, and establishment of multiple coordinating AI governance bodies, is far more bureaucratic and therefore less amenable to change.
  • President Biden’s EO explicitly supports the cultivation of international AI talent by making it easier for non-citizens to live and work in the US on AI-related projects and initiatives. The AI Act, while it implicitly encourages such initiatives through investment in AI R&D and EU-wide application, doesn’t establish similarly targeted mechanisms for bringing in non-EU talent.

The differences and similarities we’ve just examined represent the major takeaways readers should consider when comparing and contrasting the AI Act with President Biden’s EO. This isn’t to say that additional similarities and differences don’t exist, only that the ones we’ve highlighted here are the most noteworthy and consequential. As with any regulatory analysis, we recommend that readers who wish to take a deep dive into these two AI governance initiatives review the legislation directly and formulate their own opinions.

Implications

The AI Act and President Biden’s EO are most strongly aligned in terms of their core principles—principles like safety, accountability, fairness and non-discrimination, transparency, democracy, fundamental rights, and international cooperation. While the AI Act operationalizes these principles as concrete legal standards and President Biden’s EO recognizes them more as high-level guidelines—laying the groundwork for concrete standards—the shared understanding of their importance will play a crucial role in facilitating regulatory interoperability, international cooperation, and ultimately, the creation of a standardized universal language for AI regulation.

Moreover, seeing as the EU and the US represent the world’s two most powerful and influential democracies, with the EU leading AI regulation and the US leading AI innovation, global standards for AI regulation will likely adhere to democratic and innovation-centric ideals. To this point, both the AI Act and President Biden’s EO encourage continual investment in AI innovation and research, focus more heavily on regulating technology deployment over development, and actively seek to protect and preserve democratic institutions and value structures.

On a different note, it’s difficult to predict whether the EU’s horizontal approach to AI regulation will prove more fruitful than the US’s vertical approach. With its strict yet comprehensive standards and requirements, the AI Act ensures enforceability and accountability across almost all industries and domains, with penalties reaching as high as €35 million or 7% of a company’s total worldwide annual turnover, whichever is higher. While the AI Act is technically pro-innovation, its stringent provisions and severe penalties, despite mechanisms like regulatory sandboxes, could make it challenging for EU-based AI companies, particularly start-ups and SMEs, to innovate locally.
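
To illustrate how this penalty ceiling scales with company size, here’s a minimal sketch of the fine calculation for the Act’s most serious violations (the function name and example turnover figures are our own assumptions):

    def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
        """Upper bound on fines for the AI Act's most serious violations:
        EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
        return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

    # A start-up with EUR 10M in turnover still faces the EUR 35M ceiling,
    # while a company with EUR 1B in turnover faces a ceiling of EUR 70M.
    print(max_fine_eur(10_000_000))     # 35000000.0
    print(max_fine_eur(1_000_000_000))  # 70000000.0

The “whichever is higher” rule means the €35 million floor binds for smaller companies, while the 7% term dominates for large ones, which is partly why start-ups and SMEs may feel the Act’s penalties most acutely relative to their resources.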

Moreover, while the enforcement structure of the AI Act will generate strong incentives for compliance, its breadth, when coupled with its bureaucratic approach, could make the revision and remediation process unnecessarily inefficient. AI regulations must be highly adaptable and flexible to account for rapid AI-driven changes at all levels of society.

Conversely, a clear benefit of the US AI policy approach is regulatory flexibility and adaptability. Individual states can design and implement AI policies on a more targeted, citizen-centric, and streamlined basis than the federal government (many states are already doing so), and often, such policies end up informing the federal government’s regulatory strategy. As for President Biden’s EO, its principles-driven structure implicitly applies to all kinds of AI systems, whereas the AI Act, through its tiered risk classification structure, singles out specific kinds of AI systems and categorizes them accordingly. Simply put, President Biden’s EO could feasibly capture novel AI advancements with a much higher degree of ease and efficiency than the AI Act. Fortunately, current federal AI regulation efforts also suggest that AI regulation is an issue with significant bipartisan support.

All that being said, readers should be aware that the US and EU are not the only international actors taking AI legislation seriously: Brazil, Singapore, South Korea, the UK, Switzerland, South Africa, Egypt, and Tunisia are all beginning to develop and implement national AI governance strategies, and while the US and EU’s influence on global AI governance may be stronger, the influence of other nations should by no means be discounted. It’s also worth noting that the US and EU maintain bilateral relations with each of these countries, and all of them are members of the United Nations, the world’s most renowned international governance body.

Conclusion

Throughout this regulatory analysis, we compared and contrasted the EU AI Act with President Biden’s Executive Order on Safe, Secure, and Trustworthy AI, highlighting a series of relevant similarities and differences and then discussing their implications on a few related levels. In doing so, we hope to have provided readers with the tools they need to think about the future of AI regulation in a proactive and productive manner. However, we urge readers to maintain a flexible, continuous-learning mindset when thinking about these topics, given the rapid pace at which AI progresses and proliferates.

Since the arguments in this piece are predicated on the premise that the US and EU are the most influential actors in the global AI regulation ecosystem, we’d like to challenge readers to consider a few big-picture questions rooted in this assumption:

  • What additional values and principles could motivate global AI policy standards? How might these values and principles deviate from currently accepted ones?
  • What kinds of AI use cases, risks, and impacts might emerging and future AI policies target? What wouldn’t they target, and why?
  • Would other nations around the world be more or less likely to follow suit? Which nations might adopt similar AI governance strategies and which nations might reject them in favor of their own?
  • How could these policies impact the course of AI innovation on a global scale? Who would benefit and who wouldn’t, and why?

These hypothetical questions are intended to push readers to think critically about the global state of AI policy and the kind of AI-driven world they wish to be a part of. They should also help readers understand how complex the subject of AI policy is, and where democratic nations are most likely to encounter vulnerabilities or inadequacies in their AI governance strategies. In the words of Winston Churchill, “It has been said that democracy is the worst form of government except all those other forms that have been tried from time to time.”

For readers who wish to continue exploring the AI policy and governance landscape, alongside other topics like RAI, generative AI, and risk management, we invite you to follow Lumenova AI’s blog.

Alternatively, for those who are in the midst of designing and implementing their AI governance and/or risk management strategies, we suggest checking out Lumenova AI’s RAI platform and booking a product demo today.

