In 2022, the Minister responsible for Innovation, Science and Economic Development Canada (ISED), in collaboration with the Minister of Justice, introduced the Digital Charter Implementation Act, which contains three proposed acts, one of which is the Artificial Intelligence and Data Act (AIDA), the subject of this post.
However, even though Canada was one of the first countries to develop a national AI strategy, in addition to being a founding member of the Global Partnership on AI, its trajectory toward comprehensive AI legislation currently lags behind major international actors like the US and EU. Fortunately, this is likely to change, especially in light of Prime Minister Justin Trudeau’s recent pledge of $2.4 billion for the Canadian AI ecosystem, alongside the ongoing investments in AI research, talent, and industry standards for AI systems that form an integral part of Canada’s national AI strategy.
AIDA hasn’t yet matured, but if all goes well, it should take effect by 2025. The Act is being designed to align with the AI legislation approaches of the UK, EU, and US to facilitate interoperability and international collaboration; many of the core concepts ingrained in the EU AI Act, NIST AI Risk Management Framework, Blueprint for an AI Bill of Rights, UK Proposal for AI Regulation, and OECD AI Principles have also made their way into AIDA. Simply put, AIDA is risk-centric legislation that aims to protect fundamental rights, human health and safety, and democracy while balancing AI risks with opportunities to foster trustworthy and responsible AI (RAI) innovation.
AIDA will be revised and updated accordingly, in consultation with key stakeholders across Canadian regulatory bodies, industries, academia, and civil society. Ultimately, the Act should integrate smoothly with existing Canadian legislation relevant to the impacts AI could generate on society, such as the Canadian Human Rights Act (CHRA) and the Criminal Code, and possess enough built-in flexibility and adaptability to account for novel AI advancements and regulatory changes.
AIDA may not yet be active and enforceable legislation, but this doesn’t mean that Canadian AI actors can simply relax until it is. The rate at which AI evolves and proliferates necessitates rapid regulatory development and implies that once legislation takes effect, compliance windows will be quite short, especially for technologies that pose a systemic risk like general-purpose AI (GPAI) systems.
Consequently, this post will break down AIDA at a high level, in terms of key actors and technologies targeted, core objectives, and enforcement procedures. Following this, we’ll also take a look ahead, discussing a series of proposed amendments to AIDA as well as Canada’s Voluntary Code of Conduct for Generative AI (GenAI). For more up-to-date information on AI policy, risk management, RAI, and GenAI, we invite readers to follow Lumenova AI’s blog.
Key Actors and Technologies Targeted
AIDA strives to set concrete standards for AI systems in Canada, so that consumers can trust them and so that businesses can manage the design, development, deployment, and operation of AI responsibly: under AIDA, businesses are held accountable and consumers are protected. In this respect, AIDA describes four kinds of AI actors, each of which has a specific set of regulatory responsibilities:
- AI designers and developers must account for potential AI risks and adverse impacts, define and document systems’ intended purpose, and provide explanations of their limitations. They must also respond to any relevant changes across these factors.
- AI deployers must examine potential use cases for the AI systems they deploy and communicate to end users the parameters surrounding a system’s intended use, while also facilitating an understanding of its limitations.
- AI operators must continually monitor AI use and performance, ensuring that any risks are measured and managed appropriately. Importantly, the business is the entity responsible for overseeing AI operations, not individual employees.
Seeing as AIDA is risk-centric legislation, distinct risk management requirements also apply to these AI actors, with each requirement corresponding to a different stage of the AI lifecycle, as described below (a brief illustrative sketch of how these requirements might be tracked internally follows the list):
- System design:
- Assess possible risks tied to a system’s intended purpose and use.
- Mitigate any biases present in training data.
- Determine whether the system is sufficiently interpretable and whether any design changes are needed to increase interpretability.
- System development:
- Document training data alongside any models leveraged in the creation of the system.
- Administer performance assessments to ensure the system works as intended—when it doesn’t, retraining is required.
- Create and establish concrete mechanisms for human oversight and monitoring.
- Develop documentation that outlines a system’s intended use and any of the limitations it faces.
- System deployment:
- Maintain documentation that demonstrates compliance with requirements concerning system design and development.
- Communicate information on training data, model limitations, and intended use to end users; this kind of information should also be documented.
- Assess and manage potential risks associated with the deployment methodology.
- System operation:
- Ensure that system outputs are monitored and stored appropriately when the operational context requires it.
- Guarantee continual human oversight and monitoring of any system that is actively operated.
- Adjust system parameters when a system doesn’t perform as intended in a given operational context.
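To make these stage-specific obligations easier to operationalize, an organization might track them internally as a simple, evidence-backed checklist keyed to lifecycle stages. The sketch below is purely illustrative: the stage names, requirement wording, and structure are our own assumptions, and AIDA does not prescribe any particular tooling or format.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    description: str         # obligation, paraphrased from the list above
    satisfied: bool = False  # flipped once supporting evidence exists
    evidence: str = ""       # pointer to the supporting documentation

@dataclass
class LifecycleStage:
    name: str
    requirements: list[Requirement] = field(default_factory=list)

    def outstanding(self) -> list[str]:
        """Return the requirements that still lack supporting evidence."""
        return [r.description for r in self.requirements if not r.satisfied]

# Hypothetical checklist mirroring two of the stages described above.
design = LifecycleStage("design", [
    Requirement("Assess risks tied to the system's intended purpose and use"),
    Requirement("Mitigate biases present in training data"),
    Requirement("Evaluate interpretability and any needed design changes"),
])
deployment = LifecycleStage("deployment", [
    Requirement("Maintain documentation demonstrating design/development compliance"),
    Requirement("Communicate training data, limitations, and intended use to end users"),
    Requirement("Assess and manage risks tied to the deployment methodology"),
])

for stage in (design, deployment):
    print(f"{stage.name}: outstanding -> {stage.outstanding()}")
```

A checklist like this won’t make a system compliant on its own, but it makes gaps visible and gives auditors a concrete artifact to review.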
On a related note, the kinds of AI systems that AIDA targets are those with high-impact capabilities, which are classified according to the following criteria (a rough screening sketch follows the list):
- Whether a system poses a tangible threat to fundamental rights, human health, and safety, as a consequence of its intended purpose and/or negative externalities tied to real-world use.
- The magnitude of potential harmful impacts. For example, systems that could generate severe harm, such as predictive policing algorithms, would qualify as high-impact.
- The scale at which a system is used. Mildly harmful impacts may be manageable within small groups or among individuals, but if they are aggregated across millions of people, mitigating their impacts becomes much more difficult.
- The kinds of adverse impacts that have already occurred. AI innovation follows a pattern of exponential growth—regulation may not be able to account for all emergent risks in a timely manner, but it can more easily account for the adverse impacts that arise due to such risks.
- The inability to opt out of system use, either for pragmatic or legal reasons. The inability to opt out essentially guarantees that an AI system will produce tangible impacts on people, regardless of whether they’re positive or negative.
- Whether potential impacts compromise or undermine equity and equality. For instance, systems that foster discriminatory outcomes in consequential settings like employment or education, would be considered high-impact.
- Whether certain risks are already captured by other existing regulations. If a system poses risks that fall outside the existing regulatory umbrella, it could be classified as high-impact so that those risks don’t go unaddressed.
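As a rough illustration of how a team might apply these criteria during an internal intake review, the sketch below checks a set of yes/no flags and returns a provisional classification. The flag names and the "any single criterion triggers review" rule are our own assumptions; only the final regulations will determine what actually counts as high-impact under AIDA.

```python
# Hypothetical screening flags loosely derived from the criteria listed above.
CRITERIA = (
    "threatens_rights_health_or_safety",
    "severe_potential_harm",
    "large_scale_of_use",
    "known_prior_adverse_impacts",
    "users_cannot_opt_out",
    "risk_of_discriminatory_outcomes",
    "risks_not_covered_by_existing_regulation",
)

def provisional_classification(answers: dict[str, bool]) -> str:
    """Flag a system for deeper review if any screening criterion applies."""
    unknown = set(answers) - set(CRITERIA)
    if unknown:
        raise ValueError(f"Unrecognized criteria: {sorted(unknown)}")
    flagged = [name for name in CRITERIA if answers.get(name, False)]
    if flagged:
        return "potentially high-impact; escalate for review: " + ", ".join(flagged)
    return "likely outside high-impact scope; document the rationale"

print(provisional_classification({
    "risk_of_discriminatory_outcomes": True,
    "large_scale_of_use": True,
}))
```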
Open-source AI models, which by themselves do not represent a complete system, are exempt from these requirements. However, if such models are trained and fine-tuned to perform certain tasks, and subsequently open-sourced as complete AI systems, they will fall within AIDA’s regulatory scope. In addition, AIDA describes several kinds of AI systems that already qualify as high-impact according to their use case and/or intended purpose:
- Systems leveraged to drive, assist with, or execute employment-related decision-making. Examples include systems used for workplace monitoring, compensation analysis, or resume screening.
- Systems aiding in the delivery of and access to critical goods and services. Examples include systems that evaluate creditworthiness, adjust insurance policy premiums, or streamline university admissions procedures.
- Systems utilized for biometric identification and categorization. Examples include facial recognition, gait analysis, and behavioral profiling systems.
- Systems critical to the preservation of human health and safety. Examples include autonomous driving algorithms and systems leveraged for medical admissions and discharge protocols—these kinds of systems are defined by their heightened probability to cause direct harm.
- Content moderation and curation systems. Examples include the recommendation and ranking systems used by social media platforms and search engines.
Core Objectives
AIDA addresses potential AI harms across two broad categories: 1) individual, and 2) collective or systemic. Collective or systemic risks and impacts will typically be evaluated according to the CHRA, which upholds key democratic values like equality and non-discrimination, diversity and multiculturalism, equal access to critical goods and services, pay equity, privacy, and human dignity. In this respect, AIDA requires that businesses assess and manage potential systemic risks by reference to CHRA provisions, whereas individual harms should be understood in terms of their impacts on psychological and physical well-being, property, and a person’s economic status.
Consequently, it’s unsurprising that AIDA also places a strong emphasis on preventing and mitigating AI bias and discrimination, especially with respect to how proxy indicators—data points that are used to make indirect inferences about certain variables like consumer behavior—might be used to fuel discriminatory outcomes. However, AIDA does recognize that there exist nuanced scenarios in which leveraging differential data points could be appropriate and ethical, such as when a system is designed to foster equality of opportunity or identify targeted health interventions.
Taking a step back, AIDA supports four high-level objectives, which are intended to broadly align with the objectives of the EU AI Act, OECD AI Principles, and NIST AI RMF, without compromising integration with existing Canadian legislation. These objectives are:
- Protect Canadian citizens and consumers from potential AI risks and adverse impacts.
- Promote and uphold trustworthy and RAI innovation.
- Maintain and support Canada’s leadership in AI development.
- Ensure interoperability with national legislative frameworks and international AI legislation.
Extrapolating from these high-level objectives, AIDA also suggests four targeted objectives, which are described below:
- Ensure that high-impact AI systems do not pose a threat to citizens' fundamental rights, health, and safety.
- Establish a Canadian AI and Data office that provides education on, enforces, and assists with AI-specific compliance measures.
- Ban certain AI systems that could cause serious harm to Canadian citizens.
- Ensure that businesses are held accountable for risks that arise throughout all stages of the AI lifecycle.
Importantly, AIDA stresses the criticality of managing potential AI risks before systems are made available for use or deployed. While the following AI risk management principles, and their corresponding requirements, can be applied to most stages of the AI lifecycle, they’re especially crucial for AI deployers (a brief monitoring sketch follows the list):
- Human oversight and monitoring → AI systems should be clearly understandable to human operators, and undergo regular performance assessments.
- Transparency → Information on systems’ intended purpose and use should be easily accessible to the public, and highlight potential AI impacts alongside capabilities and limitations.
- Fairness and equity → AI systems should not foster discrimination or produce unfair outcomes.
- Safety → Risk and impact assessments should be the key factors motivating AI risk mitigation strategies.
- Accountability → Organizations designing, developing, or deploying AI must ensure the internal development of robust and resilient AI governance frameworks that correspond with existing legislation and industry best practices.
- Validity and robustness → Systems should perform in a way that aligns with their intended purpose, and maintain consistent and reliable performance within changing environments or novel situations.
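To make the oversight and robustness principles slightly more tangible, the sketch below tracks a model’s accuracy over a rolling window of labeled feedback and escalates to a human reviewer when performance drifts below a threshold. The window size and threshold are arbitrary assumptions chosen for illustration, not values specified by AIDA.

```python
from collections import deque

class PerformanceMonitor:
    """Rolling accuracy check that escalates to human review on drift."""

    def __init__(self, window: int = 500, min_accuracy: float = 0.9):
        self.results = deque(maxlen=window)  # True = prediction matched ground truth
        self.min_accuracy = min_accuracy

    def record(self, prediction: str, ground_truth: str) -> None:
        self.results.append(prediction == ground_truth)

    def accuracy(self) -> float:
        return sum(self.results) / len(self.results) if self.results else 1.0

    def needs_human_review(self) -> bool:
        # Alert only once the window is full, so a handful of early errors
        # doesn't trigger spurious escalations.
        return len(self.results) == self.results.maxlen and self.accuracy() < self.min_accuracy

monitor = PerformanceMonitor(window=100, min_accuracy=0.85)
monitor.record("approve", "approve")  # toy labeled feedback
monitor.record("deny", "approve")
if monitor.needs_human_review():
    print("Accuracy drift detected; route recent decisions to a human reviewer.")
```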
Moving forward, in consultation with key stakeholders across industry, academia, and civil society, the following AIDA domains will be ironed out in more detail before a comprehensive draft is released:
- The specific kinds of AI systems that qualify as high-impact.
- What concrete standards and certifications should be established and administered to ensure AI systems are trustworthy and aligned with Canadians’ values and interests.
- Which areas of AIDA, including enforcement procedures, must be prioritized for further development.
- The role and responsibilities of the AI and Data Commissioner as well as the creation of an advisory committee.
To briefly summarize, AIDA would strive to ensure that Canada’s citizens are protected from possible AI harms, that AI risks are adequately managed throughout every stage of the AI lifecycle, from design to operation, that concrete AI industry standards reflecting core risk management principles are established, and that the Canadian AI ecosystem rewards and supports trustworthy and RAI innovation.
Enforcement
During its first few years as active legislation, AIDA would mainly aim to foster AI literacy among key stakeholders, lay the groundwork for RAI best practices, and ensure that businesses can comply with its provisions; AI actors will need time to adequately prepare themselves, so enforcement will not be immediate. That being said, certain key stakeholders, selected according to demonstrated RAI expertise in the private sector, academia, and civil society, will be recruited to aid in the compliance and enforcement process, and this group will eventually form the AIDA advisory committee.
Ultimately, the ISED Minister, in partnership with the AI and Data Commissioner, would be responsible for enforcing AIDA, though the brunt of lower-level enforcement work will fall on the Commissioner. To this point, the Commissioner would be required to investigate the ongoing evolution of systemic AI risks to guide future AI policy decisions while also communicating and collaborating with regulators across various regulatory bodies, to ensure they’re adequately prepared for AIDA.
Meanwhile, when a system can produce biased outputs or is non-compliant with existing regulations, the Minister can either require that documentation demonstrating compliance be provided or that an independent audit be administered. Moreover, when a system is likely to produce harmful impacts, the Minister can either halt its further operation or publicly disclose information on relevant areas of non-compliance, effectively “shaming” a company into compliance. Even though AIDA’s enforcement mechanisms are still relatively immature given its unofficial status, three have been suggested:
- Administrative monetary penalties (AMPs) → Fines that are administered when AIDA is violated.
- Public prosecution → Prosecution by the Public Prosecution Service of Canada upon violation of AIDA’s provisions.
- True criminal offenses → Prosecution that occurs independently of AIDA when a system is designed, developed, deployed, or operated in a way that demonstrates intent to cause harm.
There are, however, some important caveats to these enforcement mechanisms. For instance, the Minister can direct prosecutors toward potential cases involving AIDA violations but can’t influence whether prosecutors choose to take such cases on. In cases where indirect violations occur, such as lying about compliance or providing false documentation, public prosecution remains fair game. As for criminal prosecutions, the same standard applies, and while the Minister does not have the authority to investigate such offenses, three AI-specific criminal offenses have already been defined (and many more are likely to emerge):
- Leveraging unlawfully acquired data at any stage of the AI lifecycle.
- Knowingly deploying an AI system that is likely to or does cause serious harm to individuals or substantial damage to their property.
- Knowingly deploying an AI system whose purpose is to defraud the public or directly cause economic harm to an individual.
Finally, AIDA requirements will be applied proportionately, to reflect the resources and size of a given business—small businesses won’t be held to the same governance standards as enterprises, and AMPs will be administered in a way that respects the scale of a business’s operations.
Looking Ahead
As previously mentioned, AIDA will not take effect until 2025. We can therefore expect it to undergo substantial changes, especially when considering the Canadian government’s intent to align it with the AI legislation strategies of the US, UK, and EU, which will themselves likely undergo significant updates and revisions.
Nonetheless, looking ahead, this final section examines a series of proposed amendments to AIDA, submitted by the ISED Minister to the House Standing Committee in November of 2023, as well as Canada’s Voluntary Code of Conduct for GenAI, which is likely to become mandated under AIDA.
Proposed Amendments
This section will not cover all of the proposed amendments to AIDA, since doing so is beyond the scope of this post; for those interested in building a more detailed understanding of possible changes to this legislation, see this document. However, we will briefly discuss what we interpret as the most consequential amendments to this regulation.
One such amendment would expand the array of high-impact AI systems to cover the following domains:
- Healthcare and emergency response services. Examples include systems utilized for disease classification and diagnosis, personalized medicine interventions, emergency response optimization, and emergency call analysis and prioritization.
- Judicial decision making. Examples include systems leveraged to predict the rate of recidivism for criminal offenders, determine the severity of a particular sentence, or parole eligibility.
- Law enforcement.
Additional requirements specific to risk management procedures at different stages of the AI lifecycle, such as feedback mechanisms and incident identification, management, and reporting procedures, have also been mentioned. More important, however, is the suggestion that GPAI systems should be classified as distinct from high-impact AI systems; this doesn’t imply that a GPAI system couldn’t qualify as high-impact, just that it would be judged according to a different regulatory standard.
GPAI systems possess a vast capabilities repertoire that enables high-impact use across various domains, tasks, and functions. The potential scale at which these systems could prove useful, coupled with the diversity of tasks they can accomplish or be fine-tuned to perform, implies that some of the threats they pose are likely to manifest systemically. For example, synthetically generated media, such as deepfakes, could be used to influence election cycles and undermine democratic functioning. In this respect, suggested requirements for GPAI systems include:
- Authenticating AI-generated content and ensuring that end users know they’re interacting with an AI system where relevant (see the sketch after this list).
- Establishing robust human oversight procedures alongside incident response and reporting mechanisms.
- Providing publicly accessible plain language descriptions regarding systems’ limits and capabilities as well as the risks and adverse impacts they might generate.
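To give a flavor of what the authentication and disclosure requirement could look like in practice, the sketch below attaches provenance metadata and a plain-language notice to a piece of generated text. The field names and model identifier are hypothetical, and a real authentication scheme (e.g., cryptographic signing or content credentials) would be considerably more involved.

```python
import hashlib
import json
from datetime import datetime, timezone

def label_generated_content(text: str, model_name: str) -> dict:
    """Wrap AI-generated text with a user-facing disclosure and provenance metadata."""
    return {
        "content": text,
        "disclosure": "This content was generated by an AI system.",
        "provenance": {
            "model": model_name,  # hypothetical model identifier
            "generated_at": datetime.now(timezone.utc).isoformat(),
            # A content hash lets downstream consumers detect tampering, though
            # it is not a substitute for cryptographic signing or watermarking.
            "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        },
    }

record = label_generated_content("Draft announcement text...", model_name="example-gpai-v1")
print(json.dumps(record, indent=2))
```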
Furthermore, the amendments put forth by the ISED Minister propose a structured approach to accountability for AI developers and deployers. Accountability frameworks should:
- Outline key roles and responsibilities for all personnel involved in AI system development and deployment, as well as the AI training and resources such personnel need to exercise their duties effectively and responsibly.
- Establish a concrete reporting and advisory structure for key personnel involved in system development, management, or operation.
- Clearly define internal governance protocols for AI risk management and data integrity.
Finally, these amendments would also afford more powers to the AI and Data Commissioner, which are described below:
- The ability to require that organizations share their accountability framework for evaluation and, if necessary, administer corrective actions.
- The ability to require that organizations share their regulatory compliance assessments, but only if they have conducted any.
- The ability to require that organizations conduct an AI audit to evaluate compliance if there is good reason to suspect an AIDA violation.
- The ability to require that other Canadian regulators and agencies of relevance freely share information with one another, to foster a collaborative regulatory ecosystem.
Voluntary Code of Conduct for Generative AI
Canada’s Voluntary Code of Conduct for GenAI is predicated on six core principles, each of which directly mirrors a core principle described in AIDA’s risk management approach. This strongly suggests that the Code of Conduct will become an integral part of AIDA, and that its voluntary status is unlikely to persist for much longer. The Code’s core principles are laid out below:
- Accountability → Develop internal risk management protocols tailored to a given system’s risk profile. Such protocols should outline key personnel roles, responsibilities, training, and any additional resources that are required to successfully manage AI risks. Organizations should also openly collaborate with other organizations that are considering similar GenAI integration or deployment initiatives and ensure that multiple safety nets (such as AI audits) are in place to guarantee accountability and safety before deployment.
- Safety → Administer comprehensive risk and impact assessments, ensure the presence of safeguards for adverse impacts linked to pre-identified risks, and explain a system’s intended use to possible end users.
- Fairness and equity → Address any potential biases that emerge in training data and ensure that a diverse array of testing and evaluation mechanisms are leveraged to manage any output biases that emerge before deployment.
- Transparency → Provide clear and publicly available explanations of the data on which a system is trained, the measures taken to address potential risks and adverse impacts, and a system’s capabilities repertoire, which includes its limitations. Methods for authenticating AI-generated content and notifying users of AI interaction should also be established.
- Human oversight and monitoring → Following deployment, continually monitor systems for any emergent risks and corresponding adverse impacts. Any serious incidents—harmful impacts—must be immediately addressed or communicated to developers and documented within an internal database, and any measures taken to manage serious incidents must also be articulated and documented.
- Validity and robustness → To demonstrate robustness before deployment, employ a diverse variety of testing and performance assessments that range across disparate tasks and potential use cases. Adversarial testing and cybersecurity risk assessments are also required alongside benchmarking protocols that consider model performance by reference to established industry standards.
This Code of Conduct applies to developers and managers of both private and commercially available GenAI systems; however, application details can differ. To understand how this Code might apply to you, we suggest that you review it directly here.
Conclusion
We’ve unpacked a lot of information on AIDA in this post, and we recognize that given its regulatory status, some of what we discussed might change as the legislation undergoes further revisions and updates. This doesn’t, however, imply that AIDA isn’t worth thinking critically about, especially in terms of its ambitions to facilitate international regulatory interoperability and align Canada’s approach to AI legislation with that of the EU, US, and UK. Fortunately, we have plenty of material to work with in this context.
Moving forward, our next post on this topic will be analytical, examining two main lines of inquiry: 1) the extent to which AIDA aligns with major AI legislation, namely the White House Blueprint for an AI Bill of Rights, EU AI Act, and the UK Proposal for AI Regulation, and 2) which additional AI-specific areas AIDA should cover or incorporate before being finalized.
For readers interested in exploring additional content on AI regulation, policy making, and risk management, we invite you to follow Lumenova AI’s blog. If you happen to be more curious about GenAI and RAI developments, don’t worry, we have plenty of well-researched content on those topics too.
Alternatively, if you or your organization is beginning to consider concrete approaches to RAI integration and/or AI risk management, feel free to check out Lumenova AI’s RAI platform and book a product demo today.