May 23, 2024

Colorado Senate Bill 24-205: Algorithmic Discrimination Protection


The array of possible AI use cases is steadily expanding as AI innovation persists, introducing novel AI risks, benefits, and impact scenarios. However, there’s one specific use context in which AI has garnered an especially high degree of attention from policymakers and responsible AI (RAI) practitioners—the use of AI systems to drive or assist with consequential decision-making.

Consequential decisions, such as those concerning access to essential goods and services like housing or health insurance, profoundly impact people’s lives, both in terms of the resources they require to maintain a minimum acceptable standard of living, and with respect to their fundamental rights of autonomy, non-discrimination, privacy, health, and safety.

When these kinds of decisions are being made or influenced by AI, the processes by which a decision is arrived at, the precise role AI plays in orchestrating it, the nature of the decision and impacts it generates, and the means by which affected persons can exercise and understand their rights, all arise as crucial concerns that must be addressed.

However, this doesn’t mean that we shouldn’t leverage AI systems for consequential decision-making. In fact, it would be absurd not to, considering the benefits they can deliver, from streamlining the end-to-end decision-making process and reducing human error to giving interested parties a simpler, more accessible way to obtain the goods or services they need.

Nonetheless, leveraging AI to drive or execute consequential decisions introduces a major risk: the potential for algorithmic discrimination. Fortunately, US regulators are aware of this risk, and several influential jurisdictions, including California, New York, and Colorado, have enacted or advanced AI legislation that explicitly protects consumers, users, and citizens from the adverse effects of algorithmic discrimination.

This post will tackle one such recently enacted law, Colorado Senate Bill 24-205 (SB 24-205), whose primary purpose is to protect consumers from algorithmic discrimination when AI systems are leveraged for consequential decision-making. In the discussion that follows, we’ll break down SB 24-205, detailing the key rights and obligations the law establishes. For readers interested in exploring a wider selection of content examining the latest trends, insights, and regulations in the AI policy landscape, we recommend following Lumenova AI’s blog.

Overview: Key Actors and Definitions

SB 24-205 will take effect on February 1st, 2026. The bill has a narrow scope, targeting developers and deployers of high-risk AI systems and holding them accountable for foreseeable or preventable risks stemming from algorithmic discrimination. The protections outlined in this regulation are specific to consumers, who are simply defined as “Colorado residents”, and enforcement authority rests exclusively with the Colorado Attorney General.

Before moving forward, however, here are some key definitions from the bill (a brief sketch of how the "high-risk" classification might be operationalized follows the list):

  • Algorithmic discrimination: “Any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law.”
  • Deployer: “A person doing business in this state that deploys a high-risk artificial intelligence system.” Deploying is equivalent to “using” a high-risk AI system.
  • Developer: “A person doing business in this state that develops or intentionally and substantially modifies an artificial intelligence system.”
  • High-risk AI system: “Any artificial intelligence system that, when deployed, makes, or is a substantial factor in making, a consequential decision.”
  • Consequential decision: “A decision that has a material or similarly significant effect on the provision or denial to any consumer of” essential goods and services. To see what these goods and services are, we recommend that readers review SB 24-205 directly.
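
To make the "high-risk" definition more concrete, below is a minimal sketch (in Python) of how an internal AI inventory might triage systems against it. The SystemProfile fields and the is_high_risk helper are hypothetical illustrations, not terms defined by the bill, and actual classification decisions should be grounded in the statutory text.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    # Hypothetical triage record; field names are illustrative, not from the bill.
    name: str
    makes_consequential_decision: bool     # the system itself issues the decision
    substantial_factor_in_decision: bool   # the system's output materially shapes the decision

def is_high_risk(profile: SystemProfile) -> bool:
    """Mirror the bill's definition: a system is high-risk if, when deployed,
    it makes, or is a substantial factor in making, a consequential decision."""
    return profile.makes_consequential_decision or profile.substantial_factor_in_decision

# Example: a resume-screening model that scores applicants for a human reviewer.
screener = SystemProfile(
    name="resume-screener",
    makes_consequential_decision=False,
    substantial_factor_in_decision=True,
)
assert is_high_risk(screener)
```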

Regulatory Obligations

SB 24-205 specifies several regulatory obligations for developers and deployers alike. However, because developers and deployers have differing responsibilities throughout the AI lifecycle, these obligations are tailored to the role each actor plays. Consequently, we’ll begin this discussion by outlining developers’ obligations, followed by those of deployers, after which we’ll describe a series of more targeted requirements for generative AI (GenAI) and general-purpose AI (GPAI) systems. We’ll conclude by briefly discussing the enforcement structure the bill describes.

Developer Obligations

Some of the obligations developers have directly concern deployers. For instance, developers of high-risk AI systems must:

  • Communicate and explain the intended use cases of their systems and specify what would constitute inappropriate or harmful use, so that deployers can utilize their AI systems responsibly.
  • Provide a summary of the data leveraged to train the system in question.
  • Explain known or foreseeable model limitations and preventable risks arising from intended use cases.
  • Outline and describe the intended purpose and intended benefits of the system in question.

Developers must also satisfy several documentation requirements (a brief illustrative sketch follows the list below). In this respect, developers should document:

  • The measures taken to address and mitigate algorithmic discrimination risks before deployment.
  • The measures taken, in terms of data governance, to address potential issues with training data and data lineage.
  • The intended outputs of the system.
  • What techniques and/or methods were used to evaluate model performance and risk before deployment.
  • How the system should and shouldn’t be used, and how human oversight is maintained when a system is leveraged in consequential decision-making contexts.
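
As a rough illustration of how a developer might organize this material, here is a minimal sketch of a documentation package assembled for deployers. The field names loosely mirror the lists above but are our own assumptions; the bill does not prescribe a particular format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DeveloperDocumentation:
    # Hypothetical documentation package shared with deployers; field names are illustrative.
    system_name: str
    intended_uses: List[str]                 # what the system is meant to do
    inappropriate_or_harmful_uses: List[str]
    intended_benefits: List[str]
    training_data_summary: str               # summary of the data used to train the system
    known_limitations: List[str]
    foreseeable_risks: List[str]
    intended_outputs: List[str]
    discrimination_mitigations: List[str]    # pre-deployment measures against algorithmic discrimination
    data_governance_measures: List[str]      # training data and data lineage controls
    evaluation_methods: List[str]            # performance and risk evaluation techniques
    human_oversight_guidance: str            # how oversight is maintained in consequential decisions
```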

Importantly, developers that also function as deployers are not required to produce this kind of documentation. However, if they intend to sell or license their product to a deployer other than themselves, they are subject to these requirements. Developers must also make the following information publicly available via their website or in a publicly accessible use case inventory:

  • Which high-risk AI systems are actively available to deployers or have undergone significant modifications.
  • The methods and techniques leveraged to address and/or mitigate algorithmic discrimination risks.
  • When a high-risk AI system undergoes significant modifications, whether they are intentional or emergent, developers must appropriately update information on the methods and techniques leveraged to manage algorithmic discrimination risks.

Deployer Obligations

Under SB 24-205, deployers are held to a more stringent standard than developers—this corresponds with an emerging trend in the AI policy landscape, namely an increased focus on regulating technology deployment over development.

For instance, deployers must implement risk management protocols and governance measures for the high-risk AI systems they deploy. These protocols and governance measures must be continually reviewed, revised, and updated to ensure relevance and efficacy throughout the lifecycle of a high-risk AI system. In this context, deployers must:

  • Outline the roles and responsibilities of key personnel involved in risk management and AI governance procedures.
  • Describe the techniques and methods used to manage preventable or foreseeable algorithmic discrimination risks.
  • Describe the key principles underlying and motivating their risk management protocols.

Crucially, deployers must ensure that risk management and governance procedures adhere to existing industry standards like the NIST AI Risk Management Framework (NIST AI RMF) and/or the ISO/IEC 42001 AI management system standard. At the same time, a deployer’s risk management framework should (see the sketch after this list):

  • Consider the size and operational complexity of their organization.
  • Consider the intended use case and scope of the high-risk AI system that is to be deployed.
  • Consider the nature and lineage of the data processed by a deployed high-risk AI system.
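
As a sketch of what this tailoring might look like in practice, the record below captures the scoping factors a deployer's risk management program could document. The structure and example values are assumptions for illustration; the bill does not mandate a specific format.

```python
from dataclasses import dataclass

@dataclass
class RiskProgramScope:
    # Hypothetical scoping record for a deployer's risk management program; names are illustrative.
    reference_framework: str      # e.g., "NIST AI RMF 1.0" or "ISO/IEC 42001"
    organization_size: int        # headcount, as a rough proxy for operational complexity
    operational_complexity: str   # short description of business and technical complexity
    system_intended_use: str      # intended use case of the deployed high-risk system
    deployment_scope: str         # breadth of deployment (teams, geographies, decision volume)
    data_nature_and_lineage: str  # what data the system processes and where it comes from

program = RiskProgramScope(
    reference_framework="NIST AI RMF 1.0",
    organization_size=220,
    operational_complexity="single business line, centralized ML platform",
    system_intended_use="tenant-screening support for rental applications",
    deployment_scope="statewide, roughly 5,000 decisions per month",
    data_nature_and_lineage="applicant-supplied data plus licensed consumer-report attributes",
)
```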

Deployers are also required to complete annual impact assessments on actively deployed models. These impact assessments should reveal and/or include the following kinds of information (a minimal record sketch follows the list):

  • The real-world benefits linked to the intended use case of the system in question.
  • The real-world risks stemming from the intended use case of the system in question, as well as the nature of such risks and the measures taken to mitigate and manage them.
  • The kinds of data the system processes and/or the kinds of data a deployer leveraged to fine-tune an AI system.
  • The performance metrics leveraged to evaluate model performance over time.
  • The measures taken to maintain transparency standards and obligations.
  • The measures taken to monitor and oversee post-deployment system performance and operation.
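
A minimal sketch of such an assessment record, plus a rough annual-review check, is shown below; the field names are illustrative assumptions rather than a format specified by the bill.

```python
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class ImpactAssessment:
    # Hypothetical annual impact assessment record; field names are illustrative.
    system_name: str
    assessment_date: date
    intended_use_benefits: List[str]
    known_risks: List[str]
    risk_mitigations: List[str]
    data_categories_processed: List[str]
    fine_tuning_data_sources: List[str]
    performance_metrics: List[str]       # metrics used to evaluate performance over time
    transparency_measures: List[str]
    post_deployment_monitoring: List[str]

def annual_review_due(assessments: List[ImpactAssessment], today: date) -> bool:
    """Rough check: no assessment on record yet, or the most recent one is over a year old."""
    if not assessments:
        return True
    latest = max(a.assessment_date for a in assessments)
    return (today - latest).days > 365
```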

When deployers conduct impact assessments in response to significant modifications made to an AI system, they must evaluate the degree to which the system’s deployment remains aligned with the intended use cases previously outlined by the developer. Deployers must also maintain records of their impact assessments for a minimum of three years following initial deployment, and review each deployed high-risk AI system at least annually, either themselves or via a third party, to confirm that it isn’t perpetuating algorithmic discrimination.

As for obligations toward consumers, when a high-risk AI system is leveraged and before a consequential decision is made, deployers must:

  • Provide notice and explanation to consumers regarding:

    • The role the system plays in decision-making.
    • The kind of decision being made.
    • The system’s intended purpose.
    • How to contact the deployer if necessary.
    • A plain language description detailing the function and design of the system itself.
  • Where necessary, provide any information required by the consumer to exercise their right to opt out of having their data processed by an AI system.

If an AI-driven consequential decision adversely impacts a consumer, deployers are required to (a sketch of this adverse-decision notice follows the list):

  • Describe the reasons for which the decision was made.
  • Describe the role that the AI system played in making this decision.
  • Describe the kind of data that was processed by the system to make the decision.
  • Describe the data lineage of the data processed by the system.
  • Where necessary, provide consumers with the chance to exercise their right to correct any personal data the system uses for processing purposes.
  • Provide consumers with the ability to appeal an AI-driven consequential decision and request human review.
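
As a sketch of how a deployer might package this adverse-decision notice, the record below bundles the explanation, the data-correction path, and the appeal path. The field names are illustrative assumptions, not a format mandated by the bill.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AdverseDecisionNotice:
    # Hypothetical notice sent to a consumer after an adverse AI-driven decision; names are illustrative.
    decision_summary: str            # what was decided (e.g., application denied)
    principal_reasons: List[str]     # the reasons the decision was made
    ai_role: str                     # the role the AI system played in the decision
    data_categories_used: List[str]  # kinds of data the system processed
    data_lineage: List[str]          # where that data came from
    correction_instructions: str     # how to correct inaccurate personal data
    appeal_instructions: str         # how to appeal and request human review
```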

Like developers, deployers must make certain information publicly available via their website. This information should include:

  • An inventory of high-risk AI systems that are actively deployed.
  • A description of the risk management measures taken to address potential algorithmic discrimination risks.
  • A detailed description of the type of data used for processing and its lineage.

Deployers with fewer than 50 employees are exempt from the requirements listed above, provided that they don’t train or fine-tune the system with their own data, use the AI system in line with the intended purpose described by the developer, and make the relevant impact assessment information available to consumers.

Still, if an AI system is intended to directly interact with consumers, deployers must disclose to each consumer that they are interacting with an AI system before or at the time of the interaction, except in cases where it would be obvious to a reasonable person that they’re interacting with AI.

GenAI and GPAI Requirements

Developers of GPAI models must document AI governance policies that enable compliance with federal and state copyright laws while also documenting, in detail, the data leveraged to train the model. In terms of obligations toward deployers, developers of GPAI models must:

  • Document and explain the capabilities and limitations of their GPAI models.
  • Document and describe the technical requirements relevant to the deployer’s integration of GPAI models into their larger AI systems or infrastructures.
  • Document and describe how the GPAI model was designed, with a specific focus on training methodologies and procedures used.
  • Document and describe why certain design choices were made, as well as the assumptions motivating them.
  • Document and describe the intended purpose of the GPAI model and why certain parameters were selected.
  • Document and describe the data leveraged for training, testing, and validation procedures.

Developers of GenAI or GPAI models that are leveraged to generate or manipulate synthetic content must (a minimal labeling sketch follows this list):

  • Guarantee that AI-generated content can be authenticated and detected.
  • Establish technical solutions that facilitate interoperability and robustness.
  • Disclose AI-generated or manipulated synthetic content to consumers.
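
As an illustration of what machine-readable labeling and consumer disclosure could look like, here is a minimal sketch that attaches provenance metadata and a content digest to a piece of synthetic content. The ProvenanceTag structure is our own assumption; production systems typically rely on dedicated provenance and watermarking standards (such as C2PA) rather than ad hoc schemes like this.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class ProvenanceTag:
    # Hypothetical provenance metadata for AI-generated content; fields are illustrative.
    generator: str      # model or product that produced the content
    ai_generated: bool  # True for AI-generated or AI-manipulated content
    disclosure: str     # consumer-facing disclosure text

def tag_content(content: bytes, tag: ProvenanceTag) -> dict:
    """Bundle the content's digest with its provenance metadata so downstream
    systems can detect and disclose synthetic content."""
    return {
        "sha256": hashlib.sha256(content).hexdigest(),
        "provenance": asdict(tag),
    }

record = tag_content(
    b"...synthetic image bytes...",
    ProvenanceTag(
        generator="example-genai-model",
        ai_generated=True,
        disclosure="This content was generated by an AI system.",
    ),
)
print(json.dumps(record, indent=2))
```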

Enforcement

When a deployed high-risk AI system is found to perpetuate algorithmic discrimination, the deployer must report it to the Attorney General within 90 days of discovery. In this respect, the Attorney General can require that deployers disclose the following information:

  • The details of their risk management framework.
  • The results of their latest impact assessment.
  • The records detailing all impact assessments conducted over the last 3 years (if relevant) or thus far.

If a developer concludes, through ongoing testing and validation, that their system is likely to perpetuate algorithmic discrimination, or that it already has, they must submit a report to the Attorney General and all known deployers of the system within 90 days of identifying the risk. If a deployer notifies a developer that their system has perpetuated algorithmic discrimination, the developer must follow the same reporting procedure.
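
As a trivial illustration of the reporting window, the helper below computes the latest notification date after a discovery. The 90-day figure comes from the bill, while the function itself and the example date are just a convenience sketch.

```python
from datetime import date, timedelta

REPORTING_WINDOW = timedelta(days=90)  # reporting window described in SB 24-205

def report_due_by(discovery_date: date) -> date:
    """Latest date by which the Attorney General (and, for developers, known
    deployers of the system) should be notified after algorithmic
    discrimination is discovered."""
    return discovery_date + REPORTING_WINDOW

print(report_due_by(date(2026, 3, 1)))  # 2026-05-30
```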

Fortunately for developers and deployers, compliance with SB 24-205 will not generally require them to share trade secrets or divulge information in a way that would introduce significant security risks. However, during the law’s first year in effect, violations are subject to a 60-day cure period; if a violation has not been addressed within that window, enforcement action will be warranted.

Conclusion

Colorado SB 24-205 represents an important milestone in the US AI regulatory ecosystem, since AI-specific algorithmic discrimination laws have yet to be enacted in most states despite ongoing drafting and amendment efforts. SB 24-205 also adopts a comprehensive approach to this issue, which differs from other active laws like New York City’s Local Law No. 144, which addresses the potential for algorithmic discrimination specifically in hiring.

Nonetheless, moving forward, we expect that algorithmic discrimination regulations will take both comprehensive and targeted approaches: specific domains or industries, like finance or healthcare, introduce their own sets of algorithmic discrimination risks, so targeted guidelines will be highly useful, if not outright necessary, at some point. Still, at the federal level, it’s critical that regulatory standards for algorithmic discrimination protection are established and implemented, and these kinds of standards are more likely to emerge as horizontal rather than vertical pieces of legislation.

All that being said, we advise readers to pay close attention to the evolution and trajectory of algorithmic discrimination laws in the US as well as both national and international risk management standards. Algorithmic discrimination, as we alluded to at the beginning of this post, is quickly becoming a central issue in the AI policy landscape, and the tide of AI regulation is strongly hinting that AI risk management standards and frameworks, though they aren’t yet mandatory, soon will be.

Fortunately, by following Lumenova AI’s blog, readers can maintain an up-to-date understanding of the latest developments, insights, and predictions on the AI policy and risk management landscape. Our blog also offers readers the opportunity to explore different but related topics like GenAI, RAI, and AI governance.

For readers interested in initiating concrete AI governance and risk management protocols, we invite you to check out Lumenova AI’s RAI platform and book a product demo today.

