Since last year’s veto of California’s SB 1047—the Safe and Secure Innovation for Frontier AI Models Act—individual states have continued to make modest albeit arguably inconsequential progress on AI regulation. Most states in the US tend to focus on distinct AI-induced regulatory challenges like AI-powered decision-making and deepfakes, responding to these challenges as they emerge and crafting regulatory provisions that narrowly target certain high-impact domains like hiring, healthcare, and financial services.
This isn’t to say that within these domains and states, such regulations are ineffective (in many cases, it’s still too early to tell) or unimportant. However, their confined application scope, both in terms of the technologies and issues they prioritize, is indicative of some broader yet influential sentiments within the US regulatory ecosystem—hesitation, reactivity, fear of stifling innovation, unwillingness to navigate uncertainty and regulate frontier AI, and a deliberate lack of foresight—all of which culminate in an evidently fragmented regulatory landscape.
While the White House did just issue another Executive Order (EO) on AI for advancing national infrastructure leadership, pragmatically speaking, it’s questionable whether this EO (and its 2023 predecessor) will actively unify our regulatory landscape, especially since federally enforceable regulatory standards don’t exist yet. Unlike the EU, the US has adopted a vertical approach to AI regulation, “waiting” for one or more states to successfully develop and implement high-magnitude, large-scale AI legislation that aligns with our national best interests while cementing the foundation for federal AI regulation.
While this vertical method does offer significant benefits for regulatory flexibility and revision, recent AI advancements suggest that we’re rapidly approaching a critical point where ensuring our continued well-being as a nation could prove remarkably difficult, even if robust regulation exists. Had SB 1047 passed, its influence would’ve reached far beyond California, setting a firm tone for the US federal AI strategy due to a variety of factors, including its scope, magnitude, and ambition. Over the last several months, very few bills that parallel SB 1047’s characteristics have made it to a state’s House of Representatives, that is, until now.
Consequently, this series will examine the Texas Responsible AI Governance Act (TRAIGA or HB 1709), which was recently introduced into the Texas legislature and which some have argued dwarfs SB 1047 in its ambition and scope. In this first part, we’ll provide a detailed breakdown of the core components of the act. In part II, we’ll critically discuss TRAIGA, returning to some of the themes we touched upon in this introduction, identifying its key strengths, weaknesses, and potential impacts, and concluding with a series of recommendations for improving the act.
Before we dive in, it’s important to re-emphasize an idea we only alluded to earlier—when we refer to AI legislation as “consequential” or “inconsequential,” we’re not referring to its potential impacts within the state that enacts it, but rather, the degree to which it can or will realistically influence the federal government’s AI regulation strategy. SB 1047 became a hot topic within the US AI policy discourse precisely because of its anticipated federal influence, and TRAIGA, being even more aggressive and wide-ranging, is similarly poised (provided it passes).
Executive Summary
TRAIGA represents Texas’ ambitious, far-ranging, and fairly proactive attempt to robustly and comprehensively regulate the development, deployment, and oversight of AI systems. Like most other AI legislation, the act prioritizes core responsible AI (RAI) principles, namely ethical, safe, and transparent AI practices, with a focus on managing well-known risks like data misuse and algorithmic discrimination. Perhaps most importantly, the act establishes the Texas AI Council, a powerful enforcement and advisory body responsible for overseeing legislative implementation and driving the development of future AI regulations and updates within Texas.
At a high level, it’s also worth noting that TRAIGA shares some clear similarities with the EU AI Act. Broadly speaking, the AI Council serves a function akin to that of the EU AI Board, and the prohibited AI use cases TRAIGA outlines considerably overlap with those in the EU AI Act, though the EU AI Act covers a significantly wider variety of prohibited practices. Additional similarities include the establishment of regulatory sandboxes, workforce development initiatives, and the parameters by which certain AI systems are deemed high-risk. Still, TRAIGA is by no means as comprehensive as the EU AI Act, though unlike SB 1047, it does expand its scope well beyond frontier AI systems.
Key Definitions & Stakeholders
TRAIGA’s key definitions include:
- High-Risk AI System: “Any AI system that is a substantial factor to a consequential decision.”
- Open-Source AI System: “An AI system that (A) can be used or modified for any purpose without securing permission from the owner or creator of such an AI system, (B) can be shared for any use with or without modifications, and (C) includes information about the data used to train such system that is sufficiently detailed such that a person skilled in AI could create a substantially equivalent system.”
- Generative AI (GenAI): “AI models that can emulate the structure and characteristics of input data in order to generate derived synthetic content. This can include images, videos, audio, text, and other digital content.”
- Consequential Decision: “Any decision that has a material, legal, or similarly significant, effect on a consumer’s access to,” services, rights, or opportunities including employment, housing, and healthcare.
- Algorithmic Discrimination: “Any condition in which an AI system when deployed creates unlawful discrimination of a protected classification in violation of the law of this state or federal law.”
- Sensitive Attributes: “Race, political opinions, religious or philosophical beliefs, ethnic orientation, mental health diagnosis, or sex.”
- Biometric Identifier: “A retina or iris scan, fingerprint, voiceprint, or record of hand or face geometry.”
- Substantial Factor: “A factor that is (A) considered when making a consequential decision, (B) likely to alter the outcome of a consequential decision, and (C) weighed more heavily than any other factor contributing to the consequential decision.”
- Deploy: “To put into effect or commercialize.”
- Digital Service: “A website, an application, a program, or software that collects or processes personal identifying information.”
TRAIGA targets the following stakeholders:
- Consumer: “An individual who is a resident of this state acting only in an individual or household context. The term does not include an individual acting in a commercial or employment context.”
- Deployer: “A person doing business in this state that deploys a high-risk AI system.”
- Developer: “A person doing business in this state that develops a high-risk AI system or substantially or intentionally modifies an AI system.”
- Digital Service Provider: “A person who (A) owns or operates a digital service, (B) determines the purpose of collecting and processing the person identifying information of users of the digital service, and (C) determines the means used to collect and process the personal identifying information of users.”
- Distributor: “A person, other than the Developer, that makes an AI system available in the market for a commercial purpose.”
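To make the definitions and stakeholder roles above concrete, here is a minimal, purely illustrative sketch of how a compliance tool might encode them as a data model. The names used here (e.g., `StakeholderRole`, `FactorAssessment`, `AISystemRecord`) are our own hypothetical constructs, not terms drawn from the act itself.

```python
from dataclasses import dataclass
from enum import Enum, auto


class StakeholderRole(Enum):
    """Hypothetical encoding of the stakeholder categories TRAIGA targets."""
    CONSUMER = auto()
    DEVELOPER = auto()
    DEPLOYER = auto()
    DISTRIBUTOR = auto()
    DIGITAL_SERVICE_PROVIDER = auto()


@dataclass
class FactorAssessment:
    """Mirrors TRAIGA's three-part 'substantial factor' definition."""
    considered_in_decision: bool   # (A) considered when making a consequential decision
    likely_to_alter_outcome: bool  # (B) likely to alter the decision's outcome
    weighed_most_heavily: bool     # (C) weighed more heavily than any other factor

    def is_substantial_factor(self) -> bool:
        return (self.considered_in_decision
                and self.likely_to_alter_outcome
                and self.weighed_most_heavily)


@dataclass
class AISystemRecord:
    """Hypothetical record for flagging a system as high-risk under TRAIGA's definition."""
    name: str
    informs_consequential_decision: bool  # e.g., employment, housing, healthcare decisions
    factor_assessment: FactorAssessment | None = None

    def is_high_risk(self) -> bool:
        # High-risk = the system is a substantial factor to a consequential decision.
        return (self.informs_consequential_decision
                and self.factor_assessment is not None
                and self.factor_assessment.is_substantial_factor())
```

Under this sketch, a résumé-screening model whose output is considered, decisive, and weighted above all other factors would return `True` from `is_high_risk()`.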
Core Provisions & Requirements
Stakeholder Responsibilities & Consumer Rights
Developers must:
- Adhere to the standards and guidance set by the NIST AI Risk Management Framework.
- Share high-risk reports with deployers that describe, in detail, usage guidelines, performance metrics, known risks, training data characteristics, data governance provisions, and concrete risk management policies (see the sketch following this list).
- Upon substantially modifying a high-risk AI system, share a revised high-risk report with deployers within 30 days of the modification, and notify deployers of any changes in the system’s performance metrics.
- For non-compliant systems, take immediate corrective action (i.e., withdrawing, disabling, or recalling the system) and appropriately notify all affected distributors, deployers, and authorities.
- Report regulatory violations and substantial risks (e.g., algorithmic discrimination), accompanied by corrective action plans, to the Attorney General (AG).
- For GenAI systems, rigorously maintain records detailing training data and system characteristics (e.g., model weights and parameters). Such documentation must meet transparency and accessibility standards, supporting the deployer’s ability to fulfill impact assessment provisions.
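As a purely illustrative aid, the sketch below groups the report elements listed above into a single record a developer might share with deployers. The field names are hypothetical groupings of TRAIGA’s requirements, not a format the act prescribes.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta


@dataclass
class HighRiskReport:
    """Hypothetical structure for a developer's high-risk report to deployers."""
    system_name: str
    usage_guidelines: str                  # how the system should (and should not) be used
    performance_metrics: dict[str, float]  # e.g., accuracy or error rates by subgroup
    known_risks: list[str]                 # e.g., "algorithmic discrimination in screening"
    training_data_characteristics: str     # provenance, composition, known gaps
    data_governance_provisions: str        # retention, access control, privacy measures
    risk_management_policies: list[str]    # concrete mitigation and monitoring policies
    issued_on: date = field(default_factory=date.today)


def revised_report_deadline(modification_date: date) -> date:
    # Under TRAIGA, a revised report is due within 30 days of a substantial modification.
    return modification_date + timedelta(days=30)
```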
Deployers must:
- Conduct rigorous impact assessments for all deployed high-risk AI systems. Such assessments must describe a system’s intended purpose, use, and benefits, analyze and identify relevant risks and mitigation strategies, support data and consumer transparency provisions, outline post-deployment monitoring and cybersecurity measures, and disclose system performance metrics.
- Administer impact assessments annually and, where substantial modifications are made, within 90 days of the modification (see the timing sketch following this list).
- Provide consumers interacting with high-risk AI systems with disclosures describing the system’s intended purpose and decision-making role, the human and automated components influencing decisions, and deployer contact information. Such disclosures must be easily accessible and comprehensible.
- Provide consumer interaction disclosures prior to or at the time of interaction with an AI system.
- For non-compliant systems, promptly suspend continued use and notify developers, distributors, and relevant authorities.
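The assessment cadence in the second bullet above can be expressed as a small scheduling rule. This is a minimal sketch under our own assumptions (a single hypothetical helper, `next_assessment_due`); TRAIGA specifies only the annual and 90-day windows, not how they should be tracked.

```python
from datetime import date, timedelta


def next_assessment_due(last_assessment: date,
                        last_substantial_modification: date | None = None) -> date:
    """Return the earlier of the annual deadline and the 90-day post-modification deadline."""
    annual_deadline = last_assessment + timedelta(days=365)
    if last_substantial_modification is None:
        return annual_deadline
    modification_deadline = last_substantial_modification + timedelta(days=90)
    return min(annual_deadline, modification_deadline)


# Example: a system assessed on 2025-01-15 and substantially modified on 2025-06-01
# would need a new impact assessment by 2025-08-30 (90 days after the modification).
print(next_assessment_due(date(2025, 1, 15), date(2025, 6, 1)))
```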
Distributors must:
- Immediately upon discovering that a system is non-compliant, withdraw, disable, or recall it from the market.
- Notify developers and deployers of non-compliant systems.
- Follow developer requirements if they modify an existing AI system such that it qualifies as high-risk or increases risks, or rebrand/sell the system under their trademark.
Shared responsibilities—those that apply to developers, deployers, and distributors—include:
- The identification, documentation, and mitigation of algorithmic discrimination risks before an AI system is deployed.
- The maintenance of robust communication channels with all relevant stakeholders, including consumers, regarding system capabilities, limitations, and risks.
- Strict adherence to data privacy and security standards to actively prevent potential unauthorized use and access.
- The notification of any regulatory infringements to all relevant parties, including the AG and consumers.
- Clear cooperation in all cases where systems are deemed non-compliant, especially if responsibility for a system is distributed across multiple parties.
Under TRAIGA, consumers have the right to:
- Transparency: Knowing when they are interacting with an AI system, regardless of the nature of the interaction, and receiving developer- and deployer-provided disclosures.
- Understand Decision-Making: Receiving detailed information on AI decision-making roles, uses, substantial factors, and impacts on their fundamental rights to health and safety.
- Appeal: Where an AI-driven consequential decision results in adverse outcomes, appeals may be submitted directly to deployers. If consumers suspect deployer responses are insufficient or in violation of TRAIGA, formal complaints may be escalated to the AG’s office.
- Non-Discrimination: Where an AI system is believed to facilitate discrimination, incidents can be reported directly to the AG’s office and/or to developers and deployers.
- Data Awareness: Obtaining unobstructed information on the role and purpose their personal data plays within an AI system.
- Opt-Out: The ability to deny the sale of personal data for AI purposes and to decline the processing of personal data for targeted advertising, profiling, and AI-driven consequential decision-making.
Prohibited Use Cases & Regulatory Sandboxes
TRAIGA explicitly prohibits seven kinds of AI use cases:
1. AI systems leveraged to manipulate individuals or groups for the purpose of impairing sound decision-making or causing significant harm.
Example: A system that coerces individuals into voting for some political candidate over another.
2. AI systems that enable social scoring practices by evaluating or classifying individuals or groups by reference to personal characteristics, whether they are observed, inferred, or predicted, and social behavior.
Example: A system that assigns social credit scores according to whether individual citizens speed while driving or cross the street when they are supposed to.
3. AI systems that utilize biometric data to identify individuals who have not explicitly provided informed consent.
Example: A system that scrapes images from social media to identify individuals without their knowledge.
4. AI systems that categorize individuals by reference to sensitive personal attributes including race, religion, political opinions, and sexual orientation.
Example: A system that classifies individuals according to their religious and political inclinations.
5. AI systems that exploit personal attributes like race or gender to manipulate individual or group behavior or cause harm, whether intended or as a result of negligence.
Example: A loan-approval system that denies loans to applicants based on their race or socioeconomic status.
6. AI systems that generate unlawful explicit content such as deepfake pornography or child exploitation materials.
7. Any AI systems that infringe upon state and/or federal anti-discrimination laws and privacy protections.
To enable and support AI developers and deployers in testing and evaluating innovative or high-risk AI systems within controlled environments, TRAIGA establishes regulatory sandboxes. Systems can be tested in a sandbox for a period of up to three years, and where good cause exists, the Texas Department of Information Resources (DIR) may grant extensions. Both the DIR and the Texas AI Council oversee and maintain the sandbox program, with the DIR submitting annual sandbox reports to the state legislature. Moreover, while engaged in sandbox operations, developers and deployers are temporarily exempt from certain regulatory obligations. Overall, sandboxes aim to:
- Maintain a careful balance between innovation, public safety, consumer protection, privacy, and accountability.
- Actively foster RAI development initiatives within structured and controlled testing environments.
To be eligible for sandbox testing, participants must:
- Formally submit an application to the Texas AI Council, the primary enforcement entity responsible for overseeing sandbox activities.
- Provide detailed information on their AI system’s characteristics, including intended use, purpose, benefits, and deployment context.
- Submit a risk assessment that covers consumer impacts, privacy and security risks, and impact mitigation strategies.
- Ensure that their AI system complies with any relevant federal AI standards.
During sandbox operations, participants must:
- Issue quarterly reports on system performance, risk mitigation, and consumer feedback.
- Compile and report on any relevant risk updates that emerge during testing procedures.
If participants violate federal or state laws, fail to meet reporting and risk mitigation requirements, or attempt to test prohibited AI uses or systems that pose major risks to public safety and wellbeing, they will be swiftly removed from sandbox participation, and in certain cases, face civil and regulatory penalties.
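As an illustration of the testing window and reporting cadence described above, here is a rough sketch of how a participant might track them. The class and method names (`SandboxParticipation`, `quarterly_report_dates`) are hypothetical, and the three-year window is approximated in days.

```python
from datetime import date, timedelta


class SandboxParticipation:
    """Hypothetical tracker for a TRAIGA regulatory sandbox participant."""

    def __init__(self, start: date, extension_days: int = 0):
        self.start = start
        # Baseline testing window is up to three years; the DIR may grant extensions for good cause.
        self.end = start + timedelta(days=3 * 365 + extension_days)

    def quarterly_report_dates(self) -> list[date]:
        """Approximate due dates for quarterly reports across the testing window."""
        dates, due = [], self.start + timedelta(days=91)
        while due <= self.end:
            dates.append(due)
            due += timedelta(days=91)
        return dates

    def is_active(self, today: date) -> bool:
        return self.start <= today <= self.end


# Example: a participation starting 2025-09-01 with no extension ends around 2028-08-31
# and owes roughly twelve quarterly reports over that period.
p = SandboxParticipation(date(2025, 9, 1))
print(p.end, len(p.quarterly_report_dates()))
```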
Oversight & Enforcement
TRAIGA designates three core oversight bodies tasked with ensuring its successful implementation, enforcement, and potential future modification.
The Texas AI Council, which would contain 10 government-appointed members with demonstrated expertise in fields relevant to AI ethics, governance, and risk management, is the most influential of these three bodies. Its core functions include:
- Exercising its role as the primary rulemaking authority on evaluation guidelines for AI safety, fairness, and privacy, and ensuring that any such rules align with relevant state laws.
- Identifying and resolving gaps in legal oversight pertaining to AI ethics, safety, and governance across state laws.
- Acting as an advisory service for state agencies on matters related to legal and ethical AI uses.
- Providing recommendations on legislative reform processes to adequately capture emerging AI risks.
- Investigating cases where corporate influence or regulatory capture undermines regulatory development and implementation.
- Reporting comprehensively and annually to the state legislature on all the AI governance efforts Texas has undertaken.
In terms of actionable enforcement power, the Attorney General serves as the primary authority. The AG’s powers include:
- The ability to directly investigate any alleged TRAIGA violations, coupled with the ability to assess and enforce compliance provisions. The AG can also apply regulatory penalties and seek injunctive relief for violations.
- The authority to require that entities who have allegedly violated TRAIGA promptly share information on risk management policies, impact assessments, and relevant compliance documentation.
Finally, the Department of Information Resources, in addition to its collaboration with the Texas AI Council in running regulatory sandbox operations, must also support inter-agency regulatory coordination by:
- Engaging with state agencies to assess and understand how they utilize high-risk AI systems.
- Holding state agencies accountable with respect to existing AI governance standards, and ensuring that where non-compliance exists, sufficient guidance and recommendations for improvement are provided.
On the enforcement side, provisions include:
- AG oversight, whereby TRAIGA compliance is monitored via audits and investigative demands. Where violations are discovered, relevant entities are granted 30 days to resolve them before penalties or enforcement action are administered, and must prove resolution with documentation of corrective actions taken.
- Non-compliance penalties ranging from $50,000 to $200,000 per violation, depending on the nature of the violation. If violations persist, additional daily fines ranging between $2,000 and $40,000 will be administered (see the sketch following this list).
- State agency sanctions, whereby state agencies are empowered to suspend and revoke the licenses of those found to have violated TRAIGA requirements. In cases where licensed professionals commit violations, penalties of up to $100,000 may be imposed.
- Routine audits, administered by regulators, either to ensure continued compliance with TRAIGA or in response to consumer complaints or reported violations.
- The ability for those subject to TRAIGA’s requirements to establish preemptive protections from liability in certain cases if they exhibit a rebuttable presumption of care, that is, consistent evidence of compliance with TRAIGA’s provisions.
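To illustrate how the penalty figures in the list above could compound, here is a rough, purely hypothetical worst-case calculation for a single persisting violation. TRAIGA sets the dollar ranges; how regulators would actually apply and stack them is a matter of enforcement discretion the act’s text doesn’t fully resolve.

```python
def penalty_exposure(days_unresolved: int,
                     base_penalty: float,
                     daily_fine: float) -> float:
    """Hypothetical exposure for one violation: base penalty plus accrued daily fines."""
    return base_penalty + days_unresolved * daily_fine


# Worst case under TRAIGA's stated ranges: a $200,000 base penalty plus $40,000/day.
# A violation left unresolved for 60 days beyond the cure period would total $2,600,000.
print(penalty_exposure(days_unresolved=60, base_penalty=200_000, daily_fine=40_000))
```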
Workforce Development Initiatives
To drive future AI readiness, particularly within the workforce, TRAIGA establishes the Texas AI Workforce Development Grant Program (TAIWDGP), overseen by the Texas Workforce Commission (TWC). The overarching objectives of this initiative are as follows:
- Close workforce gaps by proactively combatting anticipated shortages across AI-related professional fields like data science and robotics.
- Foster career preparedness by creating resources and opportunities for active and incoming professionals to build, refine, and update the skills necessary to thrive in AI-centric roles.
- Support industry collaboration that engages workforce organizations, AI companies, and educational institutions.
- Enhance competitiveness by cementing Texas as a distinct AI innovation and workforce development leader, focusing on equipping the state workforce with skills demanded by Texas-based AI companies.
Businesses, educational institutions, and workforce development organizations are eligible to receive grants from the TAIWDGP. Grants are intended to be used across six core areas:
- Curriculum Development: Targeted curricula and training programs for AI-related fields.
- Instructional Support: Ensuring educators receive the financial and practical support required to teach AI skills.
- Infrastructure and Tech: Ensuring that AI training programs leverage the right equipment, technology, and resources.
- Student Support: Providing scholarships and aid to students interested in pursuing AI-related fields, most notably those from underrepresented communities.
- Industry Partnerships: Encouraging internships, workforce training, and apprenticeships with local businesses.
- Reskilling: Providing upskilling and reskilling opportunities for those displaced by automation or other AI-induced advancements.
Conclusion
Regarding US-based AI regulation, TRAIGA undeniably represents a crucial and considerable development within our nation’s regulatory landscape. Whether it will set us on the right track depends not only on its approval but even more so on its potential impacts once implemented. Our next post, part II of this series, will be dedicated entirely to exploring this topic. For now, however, we leave readers with this comprehensive breakdown.
For those interested in examining the details of other notable AI regulations, both national and international, we invite you to follow Lumenova’s blog, where you can find numerous additional resources on topics including AI governance, risk management, ethics and safety, and GenAI. If you crave more detailed, experimental, and/or future-oriented content, we suggest engaging with our deep dive series.
Similarly, for those who have already initiated AI governance and risk management practices, we invite you to check out Lumenova’s RAI platform, as well as our AI policy analyzer and risk advisor.