A little over a year after its public release, ChatGPT has amassed over 180 million users, shattering the adoption-speed records previously set by tech giants such as Facebook (now Meta), Twitter (now X), and Instagram (also Meta). By contrast, work-specific platforms and applications such as Slack, Notion, and Zoom, although popular today, have been around for roughly a decade or more, and did not see widespread adoption until Covid-19 forced a global transition to remote work. This raises the question: what makes ChatGPT—and AI more broadly—so special, and why did so many people adopt it so quickly?
Sure, AI is a cool technology, just as smartphones, EVs, and wearables are. But the “cool factor,” while it may influence adoption rates initially, does not determine them in the long run—the ability to harness solar and nuclear energy is cool too, and yet the global energy economy still runs on fossil fuels. Similarly, EVs have been around for over a century, but widespread adoption did not occur until the last decade, driven mainly by advances in range, safety, charging infrastructure, and green policy incentives. AI, too, has been around since the 1950s, but “AI winters” have repeatedly slowed and even halted AI innovation, primarily due to a mismatch between expectations of the technology’s capabilities and the infrastructure required to sustain it.
Today, AI is special because it demonstrates an immediate practical utility—determined by its capacity to aid, supplement, or drive human decision making and creativity—that is not confined to any particular industry or domain. From streamlining workflows and automating mundane tasks to debugging and generating code, interpreting large bodies of text, strengthening cybersecurity and resilience to adversarial threats, supporting risk management and forecasting, accelerating scientific R&D, and aiding creative ideation and strategic thinking, AI appears limitless in its range of possible applications.
However, it is only on the surface that AI appears limitless—its vast range of capabilities and use cases also gives rise to serious risks and vulnerabilities that could permeate every facet of society. Individuals may make harmful decisions based on AI-generated outputs; employers, financial lenders, or healthcare providers may inadvertently discriminate against certain applicants due to biased algorithms; and bad actors may rapidly create and disseminate disinformation campaigns that threaten the very foundations of democracy. There are many other prevalent risks to consider, but the bottom line is this: AI has a unique risk profile that encompasses both localized and systemic risks.
AI can do a lot of good, but it can also do a lot of harm. It is also an exponential technology, meaning progress does not accumulate at a constant rate—what took a year to achieve in 2023 might take only two months in 2024. This risk profile, combined with AI’s rapid rate of innovation and proliferation, has regulators, policymakers, safety researchers, and other key stakeholders scrambling to establish robust and resilient policies that maximize the potential benefits of AI while mitigating both known and emerging risks.
Therefore, we begin this post with a discussion of the progress of AI policy in 2023, painting a picture of last year’s AI policy landscape and the most prevalent policy trends that emerged. Following this, we explore some of the key challenges regulators face and offer a series of AI policy predictions for 2024, which we hope will help governments, companies, and citizens better navigate the oncoming slew of AI regulations.
Here at Lumenova AI, our mission is not only to streamline the Responsible AI (RAI) integration process through the digital products and services we provide, but also to drive AI policy thought leadership that enables key stakeholders and business leaders to better prepare themselves for an AI-driven future.
Progress of AI Policy in 2023
2023 was an exciting and ambitious year for AI policymaking. From the EU Data Act and UK AI Safety Summit to the White House’s Executive Order on Safe, Secure, and Trustworthy AI (hereafter referred to as “President Biden’s EO”) and the EU AI Act, the first comprehensive attempts at regulating AI development and deployment have officially emerged.
Importantly, while Western nations may be leading the global race for AI policymaking, they are not the only ones involved—China has released its first large-scale Generative AI (GenAI) regulatory initiative, Brazil is leading South American AI policy efforts through AI Bill No. 2338, and various nations in Africa, including Egypt, Mauritius, South Africa, and Tunisia, are taking tangible steps toward developing national AI strategies. In a nutshell, the world is “waking up” to the importance of AI regulation.
Nonetheless, this post focuses on Western AI policy trends—not out of a lack of concern or interest in what other nations around the world are doing, but because the US and EU have developed and implemented the most comprehensive AI policy to date, the effects of which will likely ripple outward at the global level. So, let’s get into it.
AI regulation has garnered significant bipartisan attention in the US. In 2023, more than 30 congressional hearings on AI were held and more than 30 AI bills were proposed, alongside a House proposal to establish a Blue-Ribbon Commission that would evaluate US AI regulatory strategy from a risk-centric perspective. Crucially, two major bipartisan frameworks have emerged in the Senate:
- Senate Majority Leader Chuck Schumer’s SAFE Innovation Framework: this framework sets forth five core principles—security, accountability, explainability, foundations (AI systems should be aligned with democratic values), and innovation. It also calls for AI Insight Forums and closed-door sessions with senators and key stakeholders, alongside an overall focus on promoting AI innovation without compromising consumer protections and Americans’ civil rights.
- Blumenthal-Hawley Framework: this framework places an emphasis on transparency and accountability in the context of consumer privacy. It also advocates for the creation of an independent AI oversight body, specific requirements concerning AI-generated content, and an increased focus on national security.
The White House has also taken an ambitious approach, namely through President Biden’s EO, in addition to Secretary of Commerce Gina Raimondo’s pledge to establish an AI Safety Institute in partnership with the UK AI Safety Institute. The US is rapidly advancing national AI policy initiatives, cementing the importance of safety, security, trust, accountability, transparency, explainability, and fairness—principles that mirror those of the White House Blueprint for an AI Bill of Rights and the most recent iteration of NIST’s AI Risk Management Framework.
Overall, the US AI policy approach strives to maintain American AI innovation prowess by fostering market competitiveness, public-private partnerships, interagency coordination, and international collaboration, while ensuring the responsible development and deployment of AI through safety-by-design, workforce education, equitable distribution of AI benefits, built-in regulatory adaptability, and robust AI risk mitigation.
Moreover, President Biden’s EO, Senator Schumer’s SAFE Innovation Framework, and the Blumenthal-Hawley Framework all emphasize preserving national security and preventing foreign adversarial threats posed by nations such as China, suggesting a broader focus on protecting American democratic values. Notably, President Biden’s EO also targets both systemic and localized risks stemming from frontier AI models—state-of-the-art systems like ChatGPT.
By contrast, the EU has adopted an even more aggressive approach to AI regulation, as evidenced by its nearly 900-page EU AI Act, which serves as the world’s first holistic AI-specific legal framework. Importantly, the act has been under development for over four years, and while it will take effect within the next 18 to 24 months, additional provisions and requirements targeting GenAI and Large Language Models (LLMs) are expected to emerge.
The EU AI Act is nothing short of a regulatory milestone, and just as we saw with the General Data Protection Regulation (GDPR) in 2018, this act will likely lay the groundwork for a globally standardized approach to AI policymaking—in fact, this is one of the act’s primary intentions. In this context, there are several high-level regulatory trends worth noting:
- International collaboration → the establishment of an EU AI Board, an EU AI Office, and advisory forums aims to ensure collaboration at the Union level and beyond. The act also requires compliance with international standards and agreements such as the GDPR and the World Trade Organization Agreement, and encourages global partnerships with nations aligned with the EU’s core democratic values.
- Risk-based categorization of AI systems → the act categorizes AI systems according to the risk they pose in their application contexts, with a special emphasis on high-risk AI systems such as foundation models or systems deployed in high-impact domains like healthcare, law enforcement, and immigration.
- Application over technology → the act targets risks stemming from AI application and deployment as opposed to AI design and development. It also mandates regular evaluation, review, and monitoring of AI systems to ensure they are utilized in line with their intended purpose.
- Responsible innovation → through the establishment of regulatory sandboxes, digital innovation hubs and testing facilities, member state funding for beneficial and trustworthy AI projects—especially for SMEs and start-ups—and the creation of an EU AI Board, the act champions RAI innovation.
- Consumer rights → AI systems should be utilized transparently, and where AI-driven outcomes produce consequential impacts on consumers, accountability must be enforced. Consumers have a right to know and understand how AI systems may impact them, and must be protected from potential AI risks. Deployers should also ensure product safety prior to deployment.
- One to rule them all → all deployers of AI technologies within the EU are subject to the requirements proposed by the act; however, deployers of high-risk AI systems face stricter requirements. Additionally, sector-specific regulations for high-impact domains are articulated.
- Public engagement and built-in adaptability → the EU Commission must organize public consultation events centered on AI regulatory developments and ensure that the results of these consultations are publicly accessible—quarterly consultations with the advisory forum are also mandated. Overall, member states should run public awareness campaigns on AI risks, benefits, safeguards, rights, and obligations, and the act itself must be adapted in response to novel AI developments and stakeholder feedback.
In addition to the data governance measures proposed by the EU AI Act, the GDPR, and the Data Governance Act, the EU passed the EU Data Act in November 2023, and it entered into force in January 2024. While this regulation is admittedly much narrower than the AI Act, it introduces some important additions to EU data protection law.
Broadly speaking, the EU Data Act targets the need for fair and equitable access to—and use of—data. It notably strengthens user rights around data access and sharing, requiring data holders to ensure fairness and prevent potential abuse or discrimination. The act also encourages competitive digital innovation via provisions for interoperability, data portability, contractual agreements, and data sharing.
The progress of AI policy in 2023 has been monumental, with the EU and US leading the way. Still, it is not yet clear whether this leadership will succeed in maximizing AI benefits and encouraging continual innovation while mitigating risks appropriately, especially given the exponential rate of AI innovation and proliferation. What is clear, however, is that broad regulatory trends are beginning to emerge in the AI landscape, and that these trends will likely shape the form and structure that AI policy takes in 2024.
Challenges and Predictions for 2024
Challenges to Crafting AI Policy
AI isn’t just ChatGPT or GenAI. Facebook’s content recommendation algorithm, Tesla’s object recognition, Google’s personalized search, and resume screening software could also be considered AI, among many other kinds of applications utilizing machine learning and statistical methods. To date, there is no standardized definition of AI, and while some technologies clearly fall under the AI umbrella, others might not. This is one major challenge regulators must grapple with.
Another key challenge regulators face is that the AI risk profile may change throughout the AI lifecycle. The risks that arise during data collection and model training often differ from those associated with integration and deployment, which may be specific to some application contexts and not others. These risks can shift further depending on the specific technology in question—for example, whether we are dealing with a GenAI system or a more rudimentary data classification algorithm. This array of risks, which depends on an AI system’s inherent risk profile throughout the stages of its lifecycle as well as its intended purpose and application context, makes it extremely difficult for regulators to capture all possible risks in the policies they design and implement.
Regulators must also walk a fine line between potential AI benefits and potential risks. Given that both benefits and risks can be localized or systemic in nature, regulators must take care to develop policies that don’t inadvertently stifle innovation purely in the interest of risk management—or vice versa—ensuring that widespread potential benefits are realized while the most salient risks are addressed.
Moreover, there is the problem of exponential AI innovation and proliferation. Laws and policies are frequently designed reactively—consider drunk driving laws, which were crafted in response to alcohol-related accidents and fatalities. Proactive AI policymaking is not impossible, but given the rate at which these technologies evolve and spread, regulators must often adopt a reactive approach, quickly accounting for novel AI developments in the policies they design. Ultimately, there is a legitimate risk that the rate of AI innovation and proliferation will outpace regulators’ ability to craft adequate legislation.
To this final point, there is an additional challenge worth discussing. To ensure that the policies they design remain relevant as the AI landscape fluctuates, regulators must build in flexibility and adaptability—this is where proactive thinking comes into play. AI policies should reflect the values and preferences of those they protect, hold key actors accountable, and account for novel AI developments and risks without compromising innovation. In other words, the mechanisms by which built-in flexibility and adaptability are instilled can’t be one-dimensional: stakeholder feedback; collaboration with industry experts, safety researchers, academic institutions, and various branches of government; and regulatory sandboxes and testing facilities are all mechanisms regulators must consider when determining the degree of built-in flexibility and adaptability required for their policies to remain effective.
Predictions for 2024
In this final section, we offer a series of high-level AI policy predictions for 2024, aimed at anticipating the emergence of global AI policy trends:
- The number of AI policies that emerge in 2024 will dwarf the number that emerged in 2023. There will be a significant lag between when these policies are passed and when they take effect, but key stakeholders should not assume that they’ve fulfilled all compliance requirements simply by complying with current policies.
- Emergence of standardized AI definitions and core RAI principles. While there are still some granular definitional disparities between the most prominent AI policy initiatives, there is nonetheless high-level agreement on fundamental RAI principles and on what AI is. This high-level agreement will help facilitate standardization, especially as national AI Boards, Offices, and Committees are established.
- Increased focus on application over technology. The intent to maintain continual innovation suggests that AI deployment, rather than development, will be at the heart of AI policy in 2024. This may also motivate more stringent requirements concerning how AI is leveraged for decision making by key actors, especially in high-stakes contexts.
- Emergence of sector-specific AI policies. Despite the broad scope of the EU AI Act and President Biden’s EO, the increasing focus on application over technology further suggests that various sectors will need to adopt AI policies that consider the risks and benefits stemming from sector-specific application contexts. Moreover, requirements for high-impact domains such as healthcare and finance will become progressively stricter.
- Preservation of national security and democratic values. As AI technologies become more powerful and widespread, threats to national security and democracy become much more salient. Though current policies already address this possibility, exponential AI innovation and proliferation will give rise to novel threats and vulnerabilities that must be quickly accounted for, especially as they concern foreign adversaries.
- Tighter controls on the digital information ecosystem and GenAI. AI-generated content and synthetic media can drive disinformation campaigns, undermine democracy, and threaten IP rights. Seeing as current methods for authenticating AI-generated content remain largely ineffective, we can expect more stringent regulatory requirements to emerge in this area.
- Public awareness campaigns, AI education, and training. Currently, this domain of AI policymaking is still fairly underdeveloped. If the public and workforce are AI literate, they will be better able to use AI responsibly and identify potential risks or vulnerabilities as they emerge. An AI-literate public can also play a more active and useful role in the AI policymaking process.
- Emergence of independent AI oversight bodies and third-party certification providers. While current AI policies encourage collaboration with industry, academia, and civil society, government bodies will ultimately determine how policies are enforced. Mechanisms for holding the government accountable for its own development and deployment of AI are still scarce, and this will give rise to non-government-affiliated AI oversight bodies at both the national and international levels. In parallel, we will also see a rise in third-party certification providers for safe and trustworthy AI—certification results will likely be validated by the government.
- The rest of the world will adopt a “wait and see” approach. Just as we saw with the GDPR, it is likely that the rest of the world will “wait and see” how AI policies unfold in the US and EU, developing national AI strategies in consideration of observed AI policy impacts in Western nations.
Given that the international stage is anarchic by default, we recognize that these predictions may hold true in some cases and not others, but overall, we think it would be wise for readers to consider them seriously.
Lumenova AI
The AI policy landscape is rapidly evolving, and the sooner businesses begin developing internal AI governance and RAI procedures, the better equipped they will be to deal with the oncoming wave of compliance requirements. Fortunately, your business doesn’t need to embark on this journey alone. With Lumenova AI, you can automate, streamline, and simplify your organization’s AI governance process. Using our platform, you can:
- Launch governance initiatives specific to your organization’s needs.
- Identify and establish relevant policies and frameworks.
- Assess model performance over time to ensure safety and trustworthiness.
- Pinpoint potential risks and vulnerabilities that arise throughout the AI lifecycle.
- Continually monitor and report on discoveries.
To learn more about what Lumenova AI can offer you, request a product demo today.