May 10, 2024

Analysis: Canada's Artificial Intelligence and Data Act


As discussed in the conclusion of our previous post, which provided a high-level overview of Canada's Artificial Intelligence and Data Act (AIDA), this post examines the extent to which AIDA aligns with major AI legislation like the EU AI Act, as well as areas of interest the Act has yet to cover.

It's important to note that AIDA is poised to take effect sometime in 2025, and while this timeline is fairly immediate, much more work is needed before the Act reaches regulatory maturity. Readers should also keep in mind that AIDA is intentionally designed to align with the national AI strategies and regulatory approaches of the EU, US, and UK. Since the EU is currently the only one of these jurisdictions to have released mature, comprehensive AI legislation, it's quite likely that major regulatory revisions to AIDA will, at least to some degree, reflect the shortcomings and successes of the EU AI Act once it takes effect.

Still, despite AIDA's clear intention to facilitate regulatory interoperability and collaboration between Canada and other international actors, significant legislative differences nonetheless emerge. We'll discuss these differences in the section that follows, but given AIDA's immaturity, we advise readers not to treat them as settled. The areas in which we identify prominent similarities between AIDA and other approaches to AI legislation, specifically those of the EU, US, and UK, will serve as a more reliable signal of what to expect once AIDA is finalized.

Consequently, this post will primarily explore, examine, and interpret relevant similarities and differences between AIDA, the EU AI Act, the White House Blueprint for an AI Bill of Rights, and the UK's Proposal for AI Regulation. We'll conclude from a more critical perspective, identifying areas where AIDA fails to grasp key AI-specific issues or warrants further revision.

Comparative Analysis

Regulatory Summaries

Before diving into the heart of our discussion, we'll briefly set the stage for readers who might be unfamiliar with the AI regulatory approaches we're comparing with AIDA. Broad overviews of the EU AI Act, the AI Bill of Rights, and the UK's Proposal for AI Regulation are provided below:

  • EU AI Act: The most comprehensive risk-centric, pro-innovation AI legislation developed to date. The AI Act is unique for many reasons: it proposes a tiered risk-classification structure for AI systems and aims to standardize AI legislation across the EU, promote human-centric and trustworthy AI innovation, establish AI-specific governance bodies, maintain robust consumer and data protection, preserve fundamental rights and democratic values, facilitate international cooperation, and ensure that AI systems can be securely tested and validated before deployment. It's also a horizontal piece of legislation, regulating AI from the top down across numerous industries, domains, and sectors, with a flexible and adaptable structure that enables lawmakers to account for changes and advancements in the AI ecosystem.
  • AI Bill of Rights: A non-binding regulatory initiative aimed at protecting and preserving the fundamental rights of US citizens in light of advancements across the AI landscape. Its main objective is to ensure that the structure and function of US democracy remain robust, resilient, and effective as an AI-driven future unfolds. The initiative outlines five core principles: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback. Each principle is further supported by mechanisms through which it can be enacted, such as equity assessments, bias and pre-deployment testing, and continuous monitoring.
  • UK Proposal for AI Regulation: A non-binding framework for an adaptive, pro-innovation, pro-safety, and cross-sectoral AI regulatory strategy intended to cement the UK as a global leader in safe AI development and deployment. The proposal outlines several universal safe AI principles: safety, fairness, transparency, accountability, and contestability (the ability to contest and address AI-related harms). Ultimately, the framework strives to enable sector-specific AI policies that account for the variety of AI risks and benefits arising across different domains, particularly those considered high-impact, like finance, healthcare, and education. It also encourages strategic investment in AI research and infrastructure, categorizes AI systems according to specific risk profiles, supports the establishment of a UK AI Safety Institute, and promotes international cooperation.

Similarities

This section covers several similarities between AIDA, the EU AI Act, the AI Bill of Rights, and the UK Proposal for AI Regulation, which, for simplicity, will hereafter be referred to as "these regulations". Importantly, the similarities we identify hold at a high level, meaning granular differences may still exist between the underlying mechanisms, procedures, enforcement guidelines, and implementation details; we'll discuss some of these in the next section. The key similarities are the following:

  • Protection for fundamental rights and democracy: All of these regulations were designed in democratic nations, so it's unsurprising that each strives to protect and preserve fundamental human rights, human health and safety, and overall democratic functioning. This intention also implicitly signals a concern for the risks AI could pose at the systemic level, from mass manipulation, social control, and unchecked surveillance to the subversion of electoral, judicial, and law enforcement processes; each regulation makes at least some mention of these risks.
  • Accountability, transparency, and fairness: The principles of accountability, transparency, and fairness appear throughout virtually all well-known and generally accepted AI governance and risk management frameworks, standards, and guidelines, from the NIST AI Risk Management Framework and the OECD AI Principles to the ASEAN Guide on AI Governance and Ethics and ISO 42001. It's safe to say these principles have assumed the status of common standards, as evidenced by their consistent presence in large-scale AI regulations. In connection with the broad interest in preserving fundamental rights, all of these regulations place increased emphasis on mitigating AI bias, preventing discrimination, understanding the role AI plays in decision-making, and ensuring that people always have a transparent yet context-specific understanding of how AI works and the impacts it generates.
  • Pro-innovation and safety: Balancing the risks AI poses with its potential benefits is integral to developing a national regulatory strategy that fosters safe AI development and deployment without stifling innovation. While these regulations differ in their approaches to safe innovation, notably in their processes and procedures for pre-deployment testing, each stresses the importance of validating AI systems for safety and efficacy before deployment. Moreover, Canada, the US, the EU, and the UK are all wealthy jurisdictions with vibrant, well-funded technology ecosystems, which speaks to a shared incentive to maintain global leadership in AI innovation.
  • Pro-international cooperation and collaboration: International cooperation and collaboration on AI regulatory standards and best practices is essential not only to preserving national security but also to realizing AI's benefits at an international scale. Additionally, each of these regulations recognizes the importance of continual support for AI research and infrastructure, which, if it is to progress in a way that genuinely benefits humanity, must consider international interests so that all citizens subject to AI impacts are reasonably and equally represented.
  • Distinguishing between different forms of AI: AI is often used as a blanket term for an extremely wide range of technologies, but regulation needs to be specific and targeted to prove effective. Fortunately, each of these regulations distinguishes between different kinds of AI systems, whether by the impacts they generate, the risks they pose, the context in which they're used, or the specific kind of system they are (for instance, generative AI vs. AI agents). At a higher level, each regulation also exhibits heightened concern for the development and deployment of high-risk or high-impact AI systems, though none of the others possesses a tiered risk-classification structure as sophisticated as that of the EU AI Act.
  • Regulatory adaptability and flexibility: AI moves fast, both in terms of innovation and proliferation. Novel AI use cases will continuously crop up, introducing new risks and benefits that current regulations don't account for. Accordingly, each of these regulations proposes some mechanism for regulatory feedback and revision, whether stakeholder consultations, regular collaboration with industry specialists, safety researchers, and academia, the establishment of national AI governance bodies, AI awareness and education initiatives, or AI audits, advisory panels, and expert review boards.
  • Establishment of centralized AI governance bodies: AI is a deeply complex and rapidly evolving technology that can provide value across almost all industries and domains. This highlights the need for AI-specific governance bodies that harness AI expertise to inform and enforce AI regulation in a cross-sectoral and consistent fashion. With the exception of the AI Bill of Rights, each of these regulations recognizes the immediate importance of establishing centralized AI governance bodies. Even though the AI Bill of Rights doesn't tackle this issue, the US has indicated a tangible interest in developing its own national AI Safety Institute, which would work in partnership with that of the UK.

Differences

Given AIDA's immaturity, numerous differences exist between it and the regulations we've discussed thus far, many of which aren't worth discussing given the changes AIDA will undergo in the coming months. Consequently, the differences we've chosen to highlight here are those we expect to be most consequential and persistent. Each is described in detail below:

  • Promotion of AI awareness and education initiatives: Of these regulations, only two, the EU AI Act and the UK Proposal, have made tangible progress toward fostering AI awareness and education. For example, one of the core tenets of the EU AI Act is to cultivate AI literacy throughout all dimensions of society and government, and the UK has, since 2018, contributed over £290 million to national AI skills and talent initiatives, most notably in the academic sector. Conversely, while AIDA does express an interest in promoting AI awareness and education, the initiatives, procedures, and mechanisms that would support this process have yet to be developed. The AI Bill of Rights doesn't consider this point at all, likely because President Biden's Executive Order on Safe, Secure, and Trustworthy AI already outlines specific responsibilities for the US Department of Education in this respect.
  • Classification of AI systems according to potential impacts: The EU AI Act classifies AI systems according to their risk profile, which is mainly defined by whether a particular system possesses high-impact capabilities, either as a standalone product, as a component of a product, or in a specific context, like law enforcement or surveillance. The UK Proposal and the AI Bill of Rights adopt a similar though far less developed approach, whereas AIDA classifies AI systems purely according to the impacts they generate. It's unclear whether, under AIDA, high-impact is synonymous with high-risk; since a clear distinction can be drawn between the two, as the EU AI Act demonstrates, this could become a significant source of regulatory trouble down the line, especially for international regulatory interoperability and collaboration.
  • Voluntary code of conduct for generative AI: Codes of conduct or practice are crucial to ensuring that AI is developed and deployed in compliance with existing regulations and in accordance with industry best practices. While Canada's voluntary code of conduct for generative AI isn't yet part of AIDA, it's expected to be incorporated eventually, which sets AIDA apart from the EU AI Act and the AI Bill of Rights, since neither policy has yet outlined specific codes of practice for AI development and deployment. That being said, the AI Office, as one of its core responsibilities under the EU AI Act, is currently developing codes of practice expected to emerge within the next year, and the UK, as a major partner in the Hiroshima AI Process, played a significant role in developing a code of conduct for advanced AI systems, which will likely work its way into UK AI regulation.
  • No proposal for a concrete governance structure: Appropriately enacting and revising AI legislation requires coherent governance structures with clear enforcement and accountability procedures, stakeholder feedback mechanisms, channels through which government entities can communicate with each other and with civil society, and AI-specific governance institutions. While AIDA touches on some of these points, it covers them nowhere near as exhaustively as the EU AI Act or even the UK Proposal for AI Regulation, so this is very likely to become an area warranting further expansion in AIDA.
  • Regulation throughout the full AI lifecycle: The EU AI Act distinctly favors regulating AI deployment over development, as does the UK Proposal, though perhaps to a slightly lesser extent. Conversely, both AIDA and the AI Bill of Rights suggest that AI regulation should apply throughout the full AI lifecycle, beginning with system design and culminating in system operation. It remains to be seen whether the full-lifecycle approach will prove more effective than the deployment-oriented strategy, both in addressing AI risks and in fostering a pro-innovation AI ecosystem.
  • Absence of robust mechanisms for managing pre-deployment risks: AIDA does require that AI deployment methodologies undergo risk assessments and that AI systems undergo performance, risk, and impact assessments alongside continuous human oversight and monitoring. Taken together, however, these mechanisms for managing pre-deployment risks are less robust than those presented in the EU AI Act and the UK Proposal. Such mechanisms aren't outlined in the AI Bill of Rights, though the US does have the NIST AI Risk Management Framework, which currently serves as a strong high-level guide for managing AI risks. Notably, both the EU AI Act and the UK Proposal stress the importance of regulatory sandboxes as secure testing hubs for high-risk AI systems. Such mechanisms will arguably play the most critical role in mitigating pre-deployment risks, and they're likely to become part of AIDA, especially given Canada's already established fintech regulatory sandboxes.
  • Enforcement procedures: The EU AI Act is by far the most mature regulation we've discussed in terms of enforcement procedures: for severe violations, companies subject to the AI Act can incur penalties of up to 35 million euros or 7% of their annual global turnover, whichever is higher (see the sketch following this list). That being said, AIDA isn't necessarily immature in this respect. While it hasn't outlined specific parameters for violation penalties, it would enable Canadian citizens to take companies to court for AIDA violations, and in cases where an intent to harm is present at any stage of the AI lifecycle, prosecution for a criminal offense may also be viable. As for the UK Proposal and the AI Bill of Rights, neither outlines concrete enforcement procedures; however, in the US, President Biden's Executive Order does assign enforcement responsibilities to various federal agencies.
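To make the EU AI Act's penalty ceiling concrete, here's a minimal sketch of the calculation described above: the maximum fine for a severe violation is the greater of €35 million and 7% of annual global turnover. This is an illustrative reading of the ceiling only; actual fines are determined case by case by regulators.

```python
def eu_ai_act_max_penalty(annual_global_turnover_eur: float) -> float:
    """Illustrative ceiling for severe EU AI Act violations: the greater
    of EUR 35 million and 7% of annual global turnover. Actual fines are
    set case by case and may fall well below this ceiling."""
    FLAT_CAP_EUR = 35_000_000
    turnover_cap = 0.07 * annual_global_turnover_eur
    return max(FLAT_CAP_EUR, turnover_cap)

# Example: a company with EUR 2 billion in annual global turnover faces
# a ceiling of EUR 140 million, since 7% of turnover exceeds the EUR 35M cap.
print(f"EUR {eu_ai_act_max_penalty(2_000_000_000):,.0f}")  # EUR 140,000,000
```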

Areas that Require Further Development

Beyond the proposed AIDA amendments we discussed in our previous post, there are a few additional areas within AIDA that will almost certainly require further development.

One such area concerns AIDA's classification structure, which, as discussed, categorizes AI systems according to whether they're deemed high-impact. While AIDA identifies a few kinds of high-impact AI use cases, like biometric identification and categorization or systems critical to human health and safety, there are many more high-impact use cases it fails to cover, especially when compared with the EU AI Act's exhaustive approach to this topic.

Moreover, if AIDA is to facilitate international regulatory interoperability, it will need to adopt a risk-classification structure similar to that of the EU AI Act, in which a clear distinction between high-risk and high-impact AI is defined. And while AIDA alludes to categorizing general-purpose AI systems as high-impact, it affords them far less attention than the EU AI Act or the UK Proposal, which is concerning since these systems, like ChatGPT or Gemini, have become extremely popular among ordinary citizens and can be leveraged for numerous tasks across many domains. The sketch below illustrates why the high-risk/high-impact distinction matters.
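To see why conflating the two categories matters, consider a hypothetical model in which impact (breadth of societal consequence) and risk (likelihood and severity of harm) are independent axes. The categories and example systems below are our own simplifications for illustration, not definitions drawn from either law.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    impact: str  # breadth of societal consequence: "low" or "high"
    risk: str    # likelihood/severity of harm: "low" or "high"

# Hypothetical examples showing that the two axes can diverge.
systems = [
    AISystem("spam filter", impact="low", risk="low"),
    AISystem("general-purpose writing assistant", impact="high", risk="low"),
    AISystem("biometric surveillance tool", impact="high", risk="high"),
]

# A classification scheme that tracks only impact, as AIDA currently does,
# can't distinguish the last two systems despite their differing risk profiles.
for s in systems:
    print(f"{s.name}: impact={s.impact}, risk={s.risk}")
```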

In addition, AIDA will need to support the development and implementation of AI regulatory sandboxes, which are quickly becoming a crucial part of AI regulation. How AIDA approaches this issue, for instance, whether sandbox testing will be available only to developers of high-risk AI systems, remains an open question. Nonetheless, more robust pre-deployment testing procedures and mechanisms are needed, and this will also require a sandbox-specific governance and reporting structure, along with broader expertise, guidance, and documentation on participating and operating in regulatory sandbox environments.

In a similar vein but on a broader scale, AIDA will need to flesh out its commitment to raising AI awareness and fostering AI education, skills development, and talent initiatives. Such practices will be integral to ensuring not only compliance but also alignment with responsible and trustworthy AI principles among those directly involved in the AI lifecycle, as well as policymakers, government employees, and average citizens. Indirectly, AI awareness and education will also substantially shape the AI innovation landscape: the more informed people are, the more capable they'll be of identifying AI risks and opportunities, especially across high-impact domains.

Furthermore, AIDA will need to outline the mechanisms and procedures for a robust AI governance structure, along with the roles and responsibilities that key actors and institutions would assume in developing and maintaining it. AIDA may not need to take as aggressive an approach as the EU AI Act has, for example by establishing market surveillance authorities and defining complex post-market monitoring procedures, but it will need to investigate this issue far more deeply than it has thus far.

Finally, the AI governance structure that AIDA, or one of its future counterparts, eventually defines will have widespread ramifications for the Canadian AI innovation ecosystem, AI awareness and education initiatives, pre-deployment testing procedures, stakeholder engagement and feedback mechanisms, and ultimately, enforcement guidelines. It's therefore critical that Canadian legislators begin examining what kinds of AI governance structures would support the institutions and laws currently in place without compromising AI safety and continual innovation.

For those who are interested in further exploring the AI policy landscape, we recommend that you follow Lumenova AI’s blog, where you can also find in-depth content on generative and responsible AI, as well as AI risk management.

Alternatively, for readers who want to begin executing tangible steps toward AI governance, risk management, and/or responsible AI integration, we invite you to check out Lumenova AI’s platform and book a product demo today.
