Frontier AI models (state-of-the-art AI systems with potentially hazardous or high-impact capabilities) are steadily becoming more capable, sophisticated, and consequential, giving rise to a wide range of novel and emerging AI risks and benefits. Many of these models are also multi-modal, interpreting data across audio, visual, and text domains, which, when combined with increasingly popular generalist approaches like Mixture-of-Experts (MoE) architectures, suggests that capability repertoires will only continue to widen and deepen, particularly as breakthroughs in compute persist.
In other words, the versatility, accessibility, and power of Frontier AI models, in conjunction with exponential AI innovation and proliferation, underscore the importance of proactively managing foreseeable, preventable, and emergent AI risks without blocking high-utility benefits from materializing. That being said, targeted and enforceable federal legislation aimed at regulating Frontier AI model development and deployment has yet to be enacted in the US.
For example, President Biden’s Executive Order on Safe, Secure, and Trustworthy AI (President Biden’s EO) does implicitly apply to Frontier AI and highlights several critical risks of harm, especially for dual-use foundation models, yet it has a narrow, domain-specific scope and follows a principles-driven rather than rules-driven structure. Although the EO is a useful starting point for developing a federally enforceable AI regulation strategy, concrete AI-specific legislation currently exists primarily on a state-by-state basis, with a handful of states, most notably California, Colorado, and Connecticut, leading the regulatory charge. It’s worth noting, however, that over 40 states are in the process of developing and implementing AI legislation, drafting approximately 400 AI-specific bills among them, and bipartisan initiatives aimed at building a federal AI regulation strategy persist.
By contrast, the EU’s AI Act, in addition to laying out a comprehensive, industry-wide tiered risk classification structure for AI systems, supports the creation of centralized AI governance bodies that will enforce and oversee the Act’s implementation across the entire EU. Under the AI Act, penalties for violations can reach 7% of a company’s annual revenue or the equivalent of 35 million euros; as enforceable legislation, the AI Act creates strong incentives for compliance, particularly for startups and SMEs, most of which would view such penalties as potentially crippling. For an in-depth comparative analysis of the AI Act and President Biden’s EO, see this Lumenova AI blog post.
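To illustrate why these penalties weigh so heavily on startups and SMEs, here is a minimal arithmetic sketch. It assumes the higher of the two figures applies, which mirrors the AI Act's general penalty structure for the most serious violations, and the revenue inputs are purely hypothetical.

```python
def eu_ai_act_max_penalty(annual_revenue_eur: float) -> float:
    """Upper bound on an AI Act penalty for the most serious violations:
    the greater of EUR 35 million or 7% of annual revenue (a simplification;
    actual penalties depend on the violation tier)."""
    return max(35_000_000, 0.07 * annual_revenue_eur)

# A large enterprise with EUR 10B in annual revenue faces up to EUR 700M.
print(eu_ai_act_max_penalty(10_000_000_000))  # 700000000.0

# A startup with EUR 20M in annual revenue still faces the EUR 35M floor,
# i.e., a potential penalty larger than its entire yearly revenue.
print(eu_ai_act_max_penalty(20_000_000))      # 35000000.0
```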
The lack of federally enforceable AI regulation, while certainly an issue that warrants attention, doesn’t indicate that the US isn’t taking AI regulation seriously, especially for Frontier AI. Although California’s Safe and Secure Innovation for Frontier AI Models Act (CSSIFAIMA or SB 1047) still has a ways to go before being enacted as official legislation, it represents a significant milestone toward regulating Frontier AI systems and will likely play an influential role in informing the course of federal AI regulation, seeing as California is the epicenter of US AI innovation.
Consequently, this post will break down SB 1047, beginning with some core definitions and an examination of precisely who and what the bill targets. Next, we’ll provide a detailed interpretation of key actors’ obligations, employee protections, and the legislation’s overall enforcement structure. We’ll then look at the AI-specific institutions SB 1047 would establish if approved and enacted, and conclude by considering a few broader implications the bill might inspire. Still, we remind readers that SB 1047, given its current status, will likely undergo further revisions before being established as official California state law.
Key Actors, Technologies Targeted, and Core Definitions
SB 1047 is a targeted piece of legislation, focusing solely on the regulation of Frontier AI models that are considered “covered” and/or “derivative” (provided that derivative models meet the specifications outlined for covered models), as defined below:
- Covered Model: An AI model trained on a quantity of computing power greater than 10^26 integer or floating-point operations, the cost of which exceeds $100 million. Examples of such models include OpenAI’s GPT-4o and Anthropic’s Claude 3, among several others (a rough check of these thresholds appears in the sketch just below).
- Derivative Model: A modified or unmodified version of an existing AI model, or an AI model that’s integrated with other kinds of software. Microsoft’s Copilot platform, which is powered by a fine-tuned version of OpenAI’s GPT models, is one example of a covered derivative model.
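For readers who prefer to see these thresholds spelled out, here is a minimal sketch that encodes the covered model definition above as a simple check. It assumes, per the wording above, that both the compute and cost conditions must hold; the constant, function, and parameter names are ours, not the bill’s, and the figures in the usage example are hypothetical.

```python
COMPUTE_THRESHOLD_FLOP = 1e26       # training compute, integer or floating-point operations
COST_THRESHOLD_USD = 100_000_000    # cost of that quantity of compute

def is_covered_model(training_flop: float, training_cost_usd: float) -> bool:
    """Rough check of SB 1047's covered model definition as described above:
    training compute above 10^26 operations AND a compute cost above $100M."""
    return training_flop > COMPUTE_THRESHOLD_FLOP and training_cost_usd > COST_THRESHOLD_USD

# Hypothetical figures, for illustration only.
print(is_covered_model(training_flop=3e26, training_cost_usd=150_000_000))  # True
print(is_covered_model(training_flop=5e24, training_cost_usd=40_000_000))   # False
```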
However, not all covered models are subject to the full provisions of SB 1047. Only those with potentially hazardous capabilities must closely adhere to covered guidance standards, whereas those that don’t display such capabilities may qualify for a limited duty exemption and, consequently, looser regulatory requirements. In extreme cases (emergencies, safety incidents, or advanced persistent threats posing an imminent risk of critical harm), developers of covered models may also be required to orchestrate a full shutdown. All of these terms are explained below:
- Hazardous Capabilities: Capabilities that present a critical risk of harm, whether in the form of threats to public safety, theft, bodily harm, penal code violations, critical infrastructure damage, or the creation of weapons capable of causing mass casualties. This standard also applies to covered and derivative models that might display such capabilities after fine-tuning or post-training enhancements.
- Covered Guidance: Guidance established by the National Institute of Standards and Technology (NIST) and the Frontier Model Division (FMD), together with relevant industry best practices, covering safety, security, risk management, and testing requirements for AI models with hazardous capability repertoires.
- Limited Duty Exemption: A regulatory exemption that applies to non-derivative covered models, provided that developers can prove that a covered model doesn’t possess hazardous capabilities or is unlikely to possess them, within a reasonable safety margin, following fine-tuning and post-training enhancements.
- Safety Incident: A situation in which a model’s autonomous behaviors increase its hazardous capabilities, a model or its hazardous capabilities are used without authorization or for malicious purposes such as theft, technical and administrative controls fail, or model weights are released inadvertently.
- Advanced Persistent Threat: A sophisticated and resource-rich adversary that leverages multiple attack channels to infiltrate IT infrastructure, extract information, or disrupt critical missions.
- Full Shutdown: Halting the operation of a covered model—and all of its derivatives—possessed by a non-derivative model developer or computing cluster operator.
Moving on, key actors subject to SB 1047 fall into two categories, developers and computing cluster operators, defined as follows:
- Developer: “An individual, proprietorship, firm, partnership, joint venture, business trust, company, corporation, limited liability company, association, committee, or any other non-governmental organization” that develops, maintains ownership, or is responsible for an AI model.
- Computing Cluster Operator: An operator of a machine learning platform, leveraged to train AI models, with a data transfer rate greater than 100 gigabits per second and a theoretical maximum compute capacity of 10^20 integer or floating-point operations per second. The widely known AI computing company NVIDIA is one example (see the sketch just below).
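The computing cluster thresholds reduce to a similarly simple check. This is a sketch under the definition above; the names are ours, and the exact comparison operators (greater than versus at least) are assumptions worth verifying against the bill’s text.

```python
NETWORK_THRESHOLD_GBPS = 100    # data transfer rate between machines
CAPACITY_THRESHOLD_OPS = 1e20   # theoretical maximum operations per second

def is_covered_computing_cluster(data_rate_gbps: float, max_ops_per_second: float) -> bool:
    """Rough check of the computing cluster definition above: networking over
    100 Gbps and a theoretical maximum capacity of roughly 10^20 operations
    per second. Names and operators are illustrative, not the bill's."""
    return data_rate_gbps > NETWORK_THRESHOLD_GBPS and max_ops_per_second >= CAPACITY_THRESHOLD_OPS

# Hypothetical cluster figures, for illustration only.
print(is_covered_computing_cluster(data_rate_gbps=400, max_ops_per_second=2e20))  # True
print(is_covered_computing_cluster(data_rate_gbps=50, max_ops_per_second=1e18))   # False
```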
The definitions we’ve just outlined represent the core concepts readers will need to make sense of the discussion that follows. However, SB 1047 defines several other important terms which, although not critical to understanding the bill’s scope and applicability, might be of interest to some readers; if you fall in this group, we recommend reviewing the legislation directly, especially since we’ve paraphrased almost all of the definitions covered here.
Core Obligations and Protections
SB 1047 is a developer-centric piece of legislation, aiming to ensure that Frontier AI models are developed responsibly, safely, and effectively, hence its secondary focus on operators of computing clusters. Core obligations for both sets of key actors are described and categorized below in terms of central domains of interest, like risk management or pricing transparency. In this section, we’ll also examine the protections SB 1047 offers to employees of Frontier AI model developers (including independent contractors and unpaid advisors) who wish to report compliance and/or safety concerns.
Developer Obligations
- Cybersecurity: Establish and implement sufficiently robust and resilient technical, administrative, and physical cybersecurity safeguards aimed at preventing unauthorized model access or misuse, or the modification/inadvertent release of model weights.
- Risk Management: Develop and enact AI life cycle risk management protocols whereby targeted hazardous capability tests are administered during training, fine-tuning, and post-training enhancement stages. Developers must also ensure that models can undergo a full shutdown in the event of an emergency, and clearly describe, in detail, how their models have met compliance requirements in addition to the procedures they’ve established for revising and improving their safety and security protocols.
- Protocol and Testing: Hazardous capability testing and safety and security protocols must meet the standards set by the FMD, which include concrete risk mitigation efforts for covered model development and operation procedures and the requirement to specify model compliance as a prerequisite for model training, operation, ownership, and accessibility. Moreover, developers must ensure that they conduct post-training hazardous capabilities tests to determine whether a model qualifies for a limited duty exemption while also submitting all relevant compliance certifications to the FMD.
- Incident Reporting: In the event of a safety incident, whether the developer has directly observed it or has reason to believe it occurred, a report must be submitted to the FMD within 72 hours.
- Annual Review: Developers must annually review and scrutinize their safety and security protocols with respect to any relevant model updates and industry best practices. These reviews must be formally documented and submitted to the FMD, and include detailed information on a model’s hazardous or potentially hazardous capabilities repertoire, the risk assessments taken to evaluate the efficacy of safety and security protocols, and any other information requested by the FMD.
- Assessment and Certification: Developers must evaluate whether they qualify for a limited duty exemption by reference to all relevant covered guidance requirements, after which they must submit to the FMD, under penalty of perjury, a certification detailing the reasoning behind their conclusion. If a developer makes a good faith error (accounting for foreseeable risks of harm or inadequacies in capabilities testing procedures despite a model displaying hazardous capabilities) and reports and addresses this error within 30 days, ceasing model operation until it’s resolved, they will be deemed compliant.
- Notice to Employees: Developers must provide clear notice and explanation to all their employees regarding the rights and responsibilities they have under this regulation (we’ll discuss these employee rights and responsibilities at the end of this section).
Computing Cluster Operator Obligations
- Pricing Transparency: Pricing schedules concerning access to compute cluster resources must be transparent and consistent across potential customers to prevent practices such as unlawful discrimination or anticompetitive behavior. However, operators of computing clusters are allowed to offer preferential access and pricing to public entities, academic institutions, and noncommercial researchers or research groups.
- Customer Information and Assessment: Operators must validate the identity of prospective customers as well as the business purpose for which they require access to compute resources. Such information should include a customer’s identity verification, payment details, contact information, and IP addresses, along with access and administrative timestamps. Operators must also administer a customer assessment to determine whether the prospective customer intends to deploy a covered model, and must annually validate all the customer information they collect to ensure continued relevance.
- Record-Keeping and Emergencies: Operators must maintain a record of their compliance with the above requirements for a minimum of 7 years, and provide the FMD or California Attorney General (AG) with such records upon request (see the sketch after this list). Importantly, operators, like developers, must establish a reliable method for orchestrating a full shutdown in the event of an emergency.
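To make the customer information and record-keeping obligations more concrete, here is a minimal sketch of the kind of record an operator might retain, paired with a seven-year retention check. The field names and structure are our own illustration, not a schema prescribed by SB 1047.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

RETENTION_PERIOD = timedelta(days=7 * 365)  # minimum 7-year retention, approximated in days

@dataclass
class ClusterCustomerRecord:
    """Illustrative record of the customer details SB 1047 asks operators to collect."""
    verified_identity: str
    business_purpose: str
    payment_details: str
    contact_information: str
    ip_addresses: list[str]
    access_timestamps: list[datetime] = field(default_factory=list)
    intends_to_deploy_covered_model: bool = False
    last_validated: datetime = field(default_factory=datetime.now)  # re-validated annually
    created_at: datetime = field(default_factory=datetime.now)

def must_retain(record: ClusterCustomerRecord, now: datetime) -> bool:
    """True while the record still falls within the minimum retention window."""
    return now - record.created_at < RETENTION_PERIOD
```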
Employee Protections
- Whistleblower Protection: If an employee of a Frontier AI model developer believes that the developer is non-compliant with any provision of this regulation, the developer can’t prevent the employee from sharing their concerns with the AG or retaliate against them for doing so. Moreover, the AG can publicly release an employee complaint and the reasons for which it was submitted if the AG determines that it’s a matter of public concern, and employees are afforded the right to seek legal aid or relief, via the AG, for any retaliatory actions taken by developers in response to whistleblowing.
- Internal Reporting: Developers must establish and implement a mechanism through which employees can anonymously report compliance and safety concerns to the AG’s office. The mechanism must also provide employees with monthly updates on the status of their submitted concern and the actions taken in response, and must inform senior company officials, on a quarterly basis, of the nature of such disclosures and of the company’s and AG’s responses to them.
Enforcement and Oversight
Now that we’ve covered the core requirements of SB 1047, we can dive into the details of the enforcement structure it proposes, where we’ll also highlight the roles, responsibilities, and functions of two AI-specific institutions this regulation would establish and support if enacted: the FMD and CalCompute.
Following the same approach as the previous section, we’ll define and categorize all relevant concepts in terms of the central domains of interest relevant to them.
Enforcement Procedures and Requirements
- Role of the Attorney General: The AG is partially responsible for enforcing SB 1047. If the AG finds that a developer or operator of a computing cluster violates any of the provisions stated in this regulation, the AG may bring a civil action.
- Preventive Relief: Once a civil action has begun, a court may pursue a few different legal avenues, including temporary injunctions (a requirement to halt a particular action, like further model development), full shutdowns, monetary or punitive damages, or the deletion of a covered model, all its derivatives, and its weights, among other orders. Importantly, a court can only issue preventive relief when violations pose critical risks of harm, such as bodily harm, death, theft, or public safety threats, to name a few.
- Civil Penalties: An initial violation can warrant penalties of up to 10% of a covered model’s development costs, whereas subsequent violations can reach 30% (see the sketch after this list).
- Relevant Restrictions: Before January 1, 2026, a court may not require the deletion of a covered model and its weights; however, it can order a full shutdown, provided that the covered model poses an imminent public safety threat. Likewise, before July 1, 2025, a court may not award monetary damages to aggrieved persons (persons who have had their legal rights infringed upon).
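Because these civil penalties scale with development costs rather than revenue, a quick calculation shows how substantial they could be for a frontier-scale training run. The percentages come from the bill as described above; the cost figure and function below are hypothetical.

```python
def sb1047_max_penalty(development_cost_usd: float, prior_violations: int) -> float:
    """Maximum civil penalty as described above: up to 10% of a covered model's
    development costs for a first violation, up to 30% for subsequent ones."""
    rate = 0.10 if prior_violations == 0 else 0.30
    return rate * development_cost_usd

# For a hypothetical $100M training run, a first violation could cost up to $10M,
# and a repeat violation up to $30M.
print(sb1047_max_penalty(100_000_000, prior_violations=0))  # 10000000.0
print(sb1047_max_penalty(100_000_000, prior_violations=1))  # 30000000.0
```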
Frontier Model Division Roles and Responsibilities
- Creation: The FMD would function as part of the California Department of Technology, also serving as a regulatory enforcement and oversight hub that encourages the AG to pursue civil action when AI-driven harms or threats to public safety occur.
- Certification Review and Safety Incident Publishing: The FMD is responsible for conducting annual reviews of developers’ certification reports and publicly disclosing a summary of their findings. The FMD must also ensure that the AI Safety reports it receives from developers are published anonymously.
- Advising: The FMD must advise the AG in all matters related to SB 1047 to ensure that potential violations are pursued appropriately. More broadly, the FMD must also work with an advisory committee to inform the governor on how best to deal with AI-related emergencies. Separately, the FMD must establish an open-source AI advisory committee, whose purpose would be to develop guidelines and best practices for open-source model evaluation, incentives for open-source development, and any other future policies of interest.
- Guidance and Standards: The FMD, alongside NIST but in a manner specifically targeted at SB 1047, is responsible for developing concrete standards and guidance for mitigating and preventing critical risks of harm stemming from covered models with hazardous capabilities. The FMD must also issue guidance on how developers and operators of computing clusters can fulfill compliance requirements under SB 1047, like certification and AI safety reports. Finally, the FMD must determine which technical thresholds and benchmarks will be leveraged to establish whether a model is covered and whether it qualifies for a limited duty exemption.
- Accreditation: The FMD must develop and implement a voluntary accreditation procedure third parties can leverage to demonstrate and certify adherence to industry best practices and standards. This accreditation, if obtained by a third party, would be valid for a period of 3 years.
- Funding: The FMD must establish a fund, nested within the California General Fund, into which regulatory fees, such as those collected for certification submissions, are deposited to support the legislature’s ability to execute the provisions of this regulation.
CalCompute Structure, Role, and Responsibilities
- Creation and Components: Commissioned by the Department of Technology, CalCompute would be a consultant-created, publicly accessible cloud computing cluster whose mission would be to foster equitable AI innovation and the safe and secure deployment of Frontier AI models. Consultants would be recruited from reputable sources such as national laboratories, academic institutions, industry expert hubs, and accredited professional networks, and the public would maintain platform ownership. CalCompute would also stress the importance of human expertise, oversight, and involvement in all procedures, from operation, maintenance, and support to training and the overall creation and establishment of the platform.
- Roles: If created, CalCompute would be responsible for executing a variety of roles, including AI infrastructure and ecosystem analysis, the establishment of partnerships aimed at maintaining state-of-the-art compute infrastructure, the creation of a framework for internal decision-making, project support, and resource management procedures, the evaluation of potential harms and impacts regarding publicly available cloud resources, and the scrutiny and improvement of emergency response procedures for AI-driven safety incidents across all relevant domains. CalCompute would also play a role in California workforce development, conducting analyses of the state’s investments in workforce AI training and awareness initiatives offered by accredited universities while also assessing the effectiveness with which CalCompute retains technology expertise throughout the workforce.
- Reporting and Funding: If CalCompute is created, the Department of Technology would have to submit yearly progress reports on the fulfillment of its objectives; as for funding, the Department of Technology could accept private donations, grants, local funds, and any state-subsidized budget allocations.
Broader Implications
There’s admittedly a lot to unpack here, particularly for readers who might not be familiar with the ins and outs of AI regulation and policy in the US, not to mention at a global scale. Therefore, to make this final discussion more palatable and accessible to a wider audience, we’ll focus on the broader implications of SB 1047 for the US regulatory landscape.
Still, we remind readers that SB 1047 is in progress, meaning that certain provisions will likely be further revised and modified before the bill is enacted as official law. Consequently, we advise readers to take these broader implications with a grain of salt, keeping them in mind as the bill progresses.
One immediate implication is the extent to which SB 1047 implicitly mirrors many of the core principles and concerns in President Biden’s EO, such as the promotion of transparency, accountability, safety, and security guidelines for advanced AI models. While SB 1047 doesn’t explicitly address the risks stemming from dual-use foundation models, which represent a central concern in President Biden’s EO, the interest SB 1047 places on proactively mitigating critical risks of harm, particularly across domains such as public safety and critical infrastructure, suggests that it’s well-aligned, at least in scope, with President Biden’s EO. To this point, SB 1047 also outlines the importance of robust safety, security, and performance testing protocols, regular reporting and certification requirements, and clear communication with government and civic entities concerning models’ hazardous capabilities, related risks, and the steps taken to manage adverse consequences.
However, SB 1047 places a much stronger emphasis on managing the risks associated with Frontier AI model development as opposed to deployment, while also requiring the establishment of AI-specific governance and standard-setting bodies like the FMD and CalCompute, following more closely in the EU AI Act’s footsteps in this respect. Seeing as President Biden’s EO sets guidelines rather than concrete standards, it’s possible that institutions like the FMD and CalCompute, if established, would, through the standards they set, not only define California’s AI governance strategy but also deeply influence the federal government’s approach. This possibility appears even more likely when considering California’s position as the center of US-based AI innovation in conjunction with its role as one of the most rapid and prolific producers of AI policy, governance, and research initiatives.
On the other hand, SB 1047’s use of terms like “covered models” and “hazardous capabilities” deviates from the more commonly accepted vernacular of “high-risk”, “high-impact”, or “general purpose” AI systems, which could create significant issues for regulatory interoperability, enforcement, and overall understanding. In a broader context, the US has adopted a vertical approach to AI regulation, whereby it’s up to individual states to define, develop, and execute their AI policies. Although this affords a higher degree of adaptability and flexibility in response to nationwide AI-driven changes, it could make for a rather disjointed and incoherent regulatory landscape that’s complex and hard to navigate, especially in the absence of federal standards.
Returning to an earlier point, even though SB 1047 favors regulating technology development over deployment, advocating for robust safety and security safeguards, it makes no mention of secure testing and validation hubs, which are crucial for identifying and probing hazardous capabilities in advanced AI models before deployment, especially for red-teaming purposes (simulated adversarial attacks that reveal an organization’s security vulnerabilities). While this doesn’t mean that sandbox environments won’t be created, this is an area of SB 1047 that will likely require further review and remediation; if left unaddressed, it could expose the California AI innovation ecosystem to a series of adversarial vulnerabilities, not to mention the array of risks linked to real-world capabilities testing procedures.
Furthermore, the high compute threshold SB 1047 sets adequately captures Frontier AI models like ChatGPT, Claude, Gemini, and others, but it’s unclear whether scaled-down derivative versions of such models with similarly hazardous capabilities repertoires, if they emerge in the near future, would be covered under this regulation. That being said, SB 1047’s additional focus on operators of computing clusters represents a major step in ensuring that AI risks are holistically managed and addressed throughout the AI lifecycle and that additional parties involved in the AI development process are held accountable.
Overall, SB 1047, despite some potential oversights and drawbacks, would, if enacted, represent a significant milestone in the US-based AI regulation landscape. By qualifying the risks of Frontier AI development, establishing concrete enforcement structures and AI-specific governance bodies, aligning with core principles in AI governance, promoting and defining accountability, reporting, certification, transparency, and testing standards, and protecting and preserving the rights of affected persons in response to AI-driven harms, SB 1047 is a strong move in the right direction.
For readers interested in learning more about AI governance and risk management, we invite you to follow Lumenova AI’s blog, where you can also explore generative and responsible AI content.
Alternatively, for those interested in developing and implementing an AI governance and/or risk management framework, we suggest taking a look at Lumenova AI’s platform and booking a product demo today.