
Since last year’s veto of California’s SB 1047 (the Safe and Secure Innovation for Frontier AI Models Act), individual states have continued to make modest albeit arguably inconsequential progress on AI regulation. Most states in the US tend to focus on distinct AI-induced regulatory challenges like AI-powered decision-making and deepfakes, responding to these challenges as they emerge and crafting regulatory provisions that narrowly target certain high-impact domains like hiring, healthcare, and financial services.
This isn’t to say that, within these domains and states, such regulations are ineffective (in many cases, it’s still too early to tell) or unimportant. However, their narrow scope, in terms of both the technologies and the issues they prioritize, reflects broader and influential sentiments within the US regulatory ecosystem: hesitation, reactivity, fear of stifling innovation, unwillingness to navigate uncertainty and regulate frontier AI, and a deliberate lack of foresight. Together, these sentiments culminate in a visibly fragmented regulatory landscape.
While the White House has issued an Executive Order (EO) on advancing national leadership in AI infrastructure, it’s questionable, pragmatically speaking, whether this EO (and its 2023 predecessor) will actively unify our regulatory landscape, especially since federally enforceable regulatory standards don’t yet exist. Unlike the EU, the US has adopted a vertical approach to AI regulation, “waiting” for one or more states to successfully develop and implement large-scale, high-impact AI legislation that aligns with our national best interests while cementing the foundation for federal AI regulation.
While this vertical method does offer significant benefits in terms of regulatory flexibility and revision, recent AI advancements suggest that we’re rapidly approaching a critical point at which ensuring our continued well-being as a nation could prove remarkably difficult, even if robust regulation exists. Had SB 1047 been signed into law, its influence would’ve extended far beyond California, setting a firm tone for US federal AI strategy due to a variety of factors, including its scope, magnitude, and ambition. Over the last several months, very few bills paralleling SB 1047’s characteristics have made it through a state legislature, that is, until now.
Consequently, this series will examine the Texas Responsible AI Governance Act (TRAIGA or HB 149), which was recently enacted by the Texas legislature and which some previously argued dwarfs SB 1047 in its ambition and scope (given TRAIGA’s final version, this argument might not hold up anymore). In this first part, we’ll provide a detailed breakdown of the core components of the act. In part II, we’ll critically discuss TRAIGA, returning to some of the themes we touched upon in this introduction, identifying its key strengths, weaknesses, and potential impacts, and concluding with a series of recommendations for improving the act.
Before we dive in, it’s important to make explicit an idea we only alluded to earlier: when we refer to AI legislation as “consequential” or “inconsequential,” we’re not referring to its potential impacts within the state that enacts it, but rather to the degree to which it can or will realistically influence the federal government’s AI regulation strategy.
Executive Summary
TRAIGA represents Texas’s ambitious and fairly proactive attempt to robustly and comprehensively regulate the development, deployment, and oversight of AI systems. Like most other AI legislation, the act prioritizes core responsible AI (RAI) principles, namely ethical, safe, and transparent AI practices, with a focus on managing well-known risks like data misuse and algorithmic discrimination. Perhaps most importantly, the act establishes the Texas AI Council, an advisory body responsible for overseeing legislative implementation and driving the development of future AI regulations and updates within Texas.
At a high level, it’s also worth noting that TRAIGA shares some clear similarities with the EU AI Act. Broadly speaking, the AI Council serves a function akin to that of the EU AI Board, and the prohibited AI use cases TRAIGA outlines overlap considerably with those in the EU AI Act, though the EU AI Act covers a much wider range. Both laws also establish regulatory sandboxes, although, unlike the EU AI Act, TRAIGA does not include workforce development initiatives. Still, TRAIGA is by no means as comprehensive as the EU AI Act, though, unlike SB 1047, it does expand its scope well beyond frontier AI systems.
Key Definitions & Stakeholders
TRAIGA’s key definitions include:
- Artificial Intelligence System: “Any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”
- Biometric Identifier: “A retina or iris scan, fingerprint, voiceprint, or record of hand or face geometry.”
- Biometric Data: “Data generated by automatic measurements of an individual’s biological characteristics. The term includes a fingerprint, voiceprint, eye retina or iris, or other unique biological pattern or characteristic that is used to identify a specific individual. The term does not include a physical or digital photograph or data generated from a physical or digital photograph, a video or audio recording or data generated from a video or audio recording, or information collected, used, or stored for health care treatment, payment, or operations.”
- Protected Class: “A group or class of persons with a characteristic, quality, belief, or status protected from discrimination by state or federal civil rights laws, and includes race, color, national origin, sex, age, religion, or disability.”
TRAIGA targets the following stakeholders:
- Consumer: “An individual who is a resident of this state acting only in an individual or household context. The term does not include an individual acting in a commercial or employment context.”
- Deployer: “A person who deploys an artificial intelligence system for use in this state.”
- Developer: “A person who develops an artificial intelligence system that is offered, sold, leased, given, or otherwise provided in this state.”
- Governmental Entity: “Any department, commission, board, office, authority, or other administrative unit of this state or of any political subdivision of this state, that exercises governmental functions under the authority of the laws of this state.”
Core Provisions & Requirements
Stakeholder Responsibilities & Consumer Rights
Developers must:
- For AI systems trained using biometric identifiers, ensure compliance with Business and Commerce Code requirements, unless exempted for AI development purposes that don’t uniquely identify individuals.
- For AI systems that interact with consumers, provide clear disclosures to deployers to enable compliance with disclosure requirements and consumer-facing transparency.
- Ensure AI systems do not violate prohibited use provisions including manipulation of human behavior, social scoring, unlawful discrimination, or generation of prohibited sexual content.
- If developing AI systems for governmental entities, ensure systems comply with biometric data capture restrictions and constitutional protections, to preserve consumer rights.
Deployers must:
- For governmental entities deploying AI systems that interact with consumers, disclose before or at the time of interaction that the consumer is interacting with an AI system.
- Ensure disclosures are clear, conspicuous, written in plain language, and free of dark patterns, so that consumers can easily interpret and understand them.
- For healthcare services, provide disclosure no later than the date the service is provided (except in emergency situations, in which case disclosure must be provided as soon as reasonably possible).
- Governmental entities must not deploy AI for social scoring or unconstitutional biometric identification, or in ways that directly infringe upon constitutional rights.
- All deployers must ensure AI systems do not intentionally manipulate behavior to cause harm or encourage illegal activity.
- For data processors handling AI-related personal data, assist controllers with security requirements and data protection assessments.
Distributors must:
- Ensure compliance with all prohibited use provisions when distributing AI systems in Texas.
- Not distribute AI systems designed for manipulation, social scoring, unlawful discrimination, or prohibited content generation.
Shared responsibilities (those that apply to developers, deployers, and distributors) include:
- Ensuring AI systems do not violate constitutional protections or enable unlawful discrimination against protected classes.
- Maintaining robust communication channels with all relevant stakeholders (including consumers) regarding system capabilities, limitations, and risks.
- Adhering strictly to data privacy and security standards to actively prevent unauthorized data use and access.
- Cooperating with Attorney General investigations and complying with notice and cure provisions.
- Cooperating without obstruction in all cases where a system is deemed non-compliant, especially where responsibility for the system is distributed across parties.
Under TRAIGA, consumers have the right to:
- Transparency: Knowing when they are interacting with an AI system deployed by a governmental entity, and receiving required disclosures.
- Protection from AI systems that manipulate behavior, enable social scoring, or unlawfully discriminate.
- Filing complaints through the Attorney General’s online mechanism.
- Non-Discrimination: Protection from AI systems developed or deployed with the intent to unlawfully discriminate against protected classes.
- Protection from governmental use of AI for biometric identification without consent where it would infringe on constitutional or legal rights.
- Protection from AI systems that generate prohibited sexual content or child exploitation materials (e.g., impersonating sexual interactions with minors).
Prohibited Use Cases & Regulatory Sandboxes
TRAIGA explicitly prohibits six kinds of AI use cases:
- AI systems developed or deployed to intentionally manipulate individuals or groups for the purpose of inciting or encouraging physical self-harm (including suicide), harm to another person, or criminal activity.
Example: A system that coerces individuals to engage in self-harm or criminal behavior.
- AI systems used by governmental entities for social scoring, i.e., evaluating or classifying individuals or groups based on social behavior or personal characteristics (whether known, inferred, or predicted) in ways that result in detrimental or unfavorable treatment unrelated to the context in which the behavior was observed, unjustified treatment, or infringement of constitutional or legal rights.
Example: A system that assigns social credit scores resulting in denial of unrelated government services.
- AI systems deployed by governmental entities that utilize biometric data to uniquely identify individuals or gather images from the Internet/public sources without consent, where such gathering would infringe on constitutional or legal rights.
Example: A governmental system that scrapes images from social media to identify individuals without their consent in violation of their rights.
- AI systems developed or deployed with the sole intent to infringe upon US Constitutional rights.
Example: A system designed specifically to violate First Amendment protections.
- AI systems developed or deployed with intent to unlawfully discriminate against protected classes in violation of state or federal law.
Example: A loan-approval system that intentionally denies loans to applicants based on their race in violation of civil rights laws.
- AI systems that generate prohibited sexual content, including deepfake videos/images violating the Texas Penal Code, or text-based conversations simulating sexual conduct while impersonating minors.
To enable and support AI developers and deployers in testing and evaluating innovative AI systems within controlled environments, TRAIGA establishes regulatory sandboxes, managed by the Department of Information Resources (DIR) in coordination with the Texas AI Council and applicable state agencies. In sandboxes, systems can be tested for a period of up to 36 months, and where good cause exists, the DIR may grant extensions. Importantly, while engaged in sandbox operations, developers and deployers are temporarily exempt from certain regulatory obligations. Overall, sandboxes aim to:
- Promote the safe and innovative use of AI systems across various sectors, including healthcare, finance, education, and public services.
- Encourage responsible deployment of AI systems while balancing consumer protection, privacy, and public safety.
To be eligible for sandbox testing, participants must:
- Obtain approval from the DIR and any applicable agency.
- Provide detailed information on their AI system’s characteristics, including intended use, purpose, benefits, and deployment context.
- Submit a benefit assessment that addresses potential impacts on consumers, privacy, and public safety.
- Describe their plan for mitigating any adverse consequences that may occur during the test.
- Provide proof of compliance with any applicable federal AI laws and regulations.
During sandbox operations, participants must provide quarterly reports to the DIR covering system performance, updates on risk mitigation, and feedback from consumers and affected stakeholders. In turn, the DIR must maintain confidentiality regarding intellectual property, trade secrets, and other sensitive information obtained through the program.
Participants will be removed from the sandbox program, upon recommendation from the Council or applicable agency, if they violate federal law or any state law or regulation not waived under the program, or if their AI system poses an undue risk to public safety or welfare.
Oversight & Enforcement
TRAIGA designates three core oversight bodies tasked with ensuring its successful implementation, enforcement, and potential future modification.
The Texas AI Council, one of the act’s key advisory bodies, consists of seven government-appointed members. Each member must be a Texas resident with demonstrated expertise in one or more of the following areas: AI systems, data privacy and security, ethics in technology or law, public policy and regulation, risk management related to AI systems, improving the efficiency and effectiveness of governmental operations, or anticompetitive practices and market fairness. The Council’s core functions include:
- Ensuring AI systems are ethical and developed in the public’s best interest.
- Ensuring AI systems don’t compromise public safety or undermine individual freedoms.
- Identifying and evaluating existing laws and regulations that impede AI innovation and subsequently recommending relevant reforms or improvements.
- Analyzing opportunities to augment state government efficiency through AI use and integration.
- Providing recommendations on AI efficiency and efficacy to state agencies.
- Examining possible regulatory capture, including undue influence by technology companies and the censorship of competitors or users.
- Offering guidance to the state legislature on ethical and legal AI use and development.
- Administering and publishing the results of studies on the current US AI regulatory environment.
- Receiving reports from the DIR concerning regulatory sandbox program efforts and providing recommendations for potential program improvements.
The Attorney General serves as TRAIGA’s primary enforcement authority and must maintain an online complaint mechanism through which consumers can report violations. The Attorney General’s powers include:
- Exclusive authority to enforce TRAIGA’s general provisions (except for state agency sanctions).
- The ability to investigate potential violations upon receiving complaints from consumers.
- The authority to request detailed information about AI systems, including intended purpose, training data, inputs/outputs, performance metrics, limitations, monitoring measures, and other relevant documentation if necessary.
Finally, the DIR, in addition to its management of regulatory sandbox operations, must also support inter-agency regulatory coordination by:
- Collecting information from state agencies on their use or consideration of AI systems.
- Including AI system inventories in agency information resources reviews.
On the enforcement side, provisions include:
- Attorney General oversight, whereby TRAIGA compliance is monitored via civil investigative demands issued following complaints. Where violations are discovered, violators are granted 60 days to cure them before penalties or enforcement actions are taken, and must provide written statements confirming that violations were cured as well as evidence of policy changes intended to prevent future violations.
- Non-compliance penalties range from $10,000 to $12,000 for curable violations, $80,000 to $200,000 for uncurable violations, and $2,000 to $40,000 per day for continued violations (see the illustrative sketch after this list).
- State agency sanctions, whereby state agencies are empowered to suspend or revoke licenses of those found in violation of general provisions. Agencies may also impose financial penalties, though they may not exceed $100,000.
- Liability protections, whereby those subject to TRAIGA’s requirements may establish protection from liability in certain cases: where they benefit from a rebuttable presumption of reasonable care, where they discover violations through feedback, testing, guidelines, or compliance with the NIST AI Risk Management Framework (or another relevant standard), or where another person misuses their AI system in a prohibited manner.
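To make the penalty ranges above more concrete, here is a minimal, back-of-the-envelope sketch in Python of how a team might estimate its potential civil penalty exposure under a given scenario. The scenario, function name, and variable names are our own illustrative assumptions, not part of the act, and actual penalties would be determined by the Attorney General and the courts.

```python
# Hypothetical penalty-exposure estimate based on the statutory ranges quoted above.
# Illustrative only: it is not part of TRAIGA and ignores factors (e.g., cure periods,
# mitigating circumstances) that would shape a real assessment.

CURABLE_RANGE = (10_000, 12_000)        # per curable violation
UNCURABLE_RANGE = (80_000, 200_000)     # per uncurable violation
CONTINUED_PER_DAY = (2_000, 40_000)     # per day of continued violation


def penalty_exposure(curable: int, uncurable: int, continued_days: int) -> tuple[int, int]:
    """Return the (minimum, maximum) potential civil penalty for a given scenario."""
    low = (curable * CURABLE_RANGE[0]
           + uncurable * UNCURABLE_RANGE[0]
           + continued_days * CONTINUED_PER_DAY[0])
    high = (curable * CURABLE_RANGE[1]
            + uncurable * UNCURABLE_RANGE[1]
            + continued_days * CONTINUED_PER_DAY[1])
    return low, high


# Example: one uncurable violation that continues for 30 days.
print(penalty_exposure(curable=0, uncurable=1, continued_days=30))
# -> (140000, 1400000), i.e., roughly $140,000 to $1.4 million in this hypothetical.
```

Even under this simplified model, the per-day figures for continued violations quickly dominate total exposure, which underscores the practical importance of the 60-day cure period.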
Conclusion
Regarding US-based AI regulation, TRAIGA undeniably represents a crucial and considerable development within our nation’s regulatory landscape. Whether it will set us on the right track depends not only on the act as written but, even more so, on its impacts once implemented. Our next post (part II of this series) will be dedicated in its entirety to exploring this topic. For now, however, we leave readers with this comprehensive breakdown.
For those interested in examining the details of other notable AI regulations, both national and international, we invite you to follow Lumenova’s blog, where you can find numerous additional resources on topics including AI governance, risk management, ethics and safety, and GenAI. If you crave more detailed, experimental, and/or future-oriented content, we suggest engaging with our deep dive series.
Similarly, for those who have already initiated AI governance and risk management practices, we invite you to check out Lumenova’s RAI platform, as well as our AI policy analyzer and risk advisor.