This March, the California Department of Technology (CDT) issued general guidelines on public sector procurement of Generative AI (GenAI) systems, effectively enacting the provisions listed in Section 3a of California Governor Gavin Newsom’s Executive Order N-12-23 (EO N-12-23); for an overview of EO N-12-23, see this Lumenova AI blog post. These guidelines are set to become official state policy by 2025, but only once they are deemed sufficient based on the results of testing and piloting procedures as well as stakeholder feedback.
The overarching goal of the California GenAI Procurement Guidelines (CGAIPG) is to build on EO N-12-23 by establishing trustworthy and responsible AI (RAI) best practices for the procurement, utilization, and training of GenAI systems by California government entities, thereby enabling them to begin integrating GenAI solutions into operations to improve management, workflow, and the delivery of goods and services. In the meantime, to comply with EO N-12-23, California state entities that intend to integrate GenAI products or components should follow these guidelines, irrespective of their GenAI maturity levels.
Moreover, according to the CGAIPG, the key personnel responsible for applying its provisions should be the Chief Information Officer (CIO) or Agency Information Officer (AIO) of a given state entity—involvement from other relevant actors, such as state procurement specialists and Information Security Officers (ISOs), may also be required.
Nonetheless, the CGAIPG offers GenAI guidance not only to a state entity’s executive suite but also to its broader staff and workforce. Broadly speaking, the guidelines target five areas: 1) use cases, 2) responsibilities, 3) training, 4) risk assessment and management, and 5) procurement. We’ll discuss the core characteristics of each area in the discussion that follows, but first, we need to lay out some key definitions (the definitions provided here are not general-purpose; they pertain specifically to the state’s use of GenAI):
- GenAI: “Pretrained AI models that can generate images, videos, audio, text, and derived synthetic content. GenAI does this by analyzing the structure and characteristics of the input data to generate new, synthetic content similar to the original. Decision support, machine learning, natural language processing/translation services, computer vision and chatbot technologies or activities support may be related to GenAI, but they are not GenAI on their own.”
- Incidental GenAI purchase: “A purchase for which a state entity identifies the use of GenAI tools as part of the overall purchase for any type of procurement. A request to primarily purchase a good or service, where the state or vendor identifies a subcomponent of the purchase as using GenAI tool(s) to assist with the delivery of the solution, is considered an incidental purchase of GenAI.”
- Intentional GenAI purchase: “A purchase for which a state entity identifies a GenAI product or solution to meet a business need for any type of procurement. A request to purchase a specific GenAI product or solution at the onset of a procurement is considered an intentional purchase of GenAI.”
The AI policy and risk management landscape is changing rapidly to accommodate novel AI advancements and developments, which can muddy one’s understanding of the current and future trajectory of AI regulation. Regardless, keeping up with the AI policy tide, especially if you’re already leveraging AI or plan to integrate it within your organization, is crucial for compliance, safety, and RAI best practices. Fortunately, Lumenova AI’s blog offers readers consistent and up-to-date insights on most things AI, especially risk management and regulation.
Use Cases and Responsibilities
State entities must rigorously identify and understand which characteristics or components of AI procurement initiatives and procedures are GenAI-specific—one way in which they can begin cultivating this kind of understanding is by reference to possible GenAI functions and their corresponding use cases. In this respect, the CGAIPG outlines several GenAI functions and use cases specific to the kinds of operations performed by state entities:
- Content generation → Developing and executing public awareness campaigns or improving the clarity and cohesiveness of data visualizations.
- Chatbots → Answering constituent queries, providing voice-activated assistance, making state call centers more efficient, and allowing users to navigate digital services in their native language.
- Data analysis → Running fraud and network analysis, understanding the root cause of certain problems, as well as organizing and interpreting casework.
- Explanations and tutoring → Explaining eligibility requirements for certain state-subsidized programs or providing interactive digital assistance.
- Personalized content → Streamlining documentation processes such as tax filing or applications for public programs via auto-population (automatically filling in a user’s data).
- Search and recommendation → Figuring out where and how certain state or national regulations may apply or recommending specific government services to constituents or state employees by reference to eligibility criteria.
- Software code generation → Converting policies into machine-readable code, streamlining data transformation, promoting human-centric digital content design, and supporting webpage development and maintenance.
- Summarization → Summarizing and interpreting stakeholder feedback, AI research, and administrative codes to identify GenAI trends/risks/opportunities and further policy development.
- Synthetic data generation → Generating training data for AI systems leveraged in high-impact domains such as healthcare and tax auditing, to avoid potential risks and vulnerabilities linked to sensitive data.
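To make the “converting policies into machine-readable code” use case above concrete, here is a minimal, entirely hypothetical sketch: an eligibility rule for a state-subsidized program expressed as a plain Python function, so it can be audited and tested directly. The program, thresholds, and names are illustrative assumptions, not drawn from the CGAIPG or any real California program.

```python
from dataclasses import dataclass


@dataclass
class Applicant:
    annual_income: float   # USD per year
    household_size: int    # number of household members, including the applicant
    is_ca_resident: bool


# Hypothetical thresholds for illustration only.
BASE_INCOME_LIMIT = 30_000       # limit for a one-person household
PER_MEMBER_ALLOWANCE = 10_000    # added for each additional household member


def is_eligible(applicant: Applicant) -> bool:
    """Return True if the applicant meets the illustrative program criteria."""
    income_limit = BASE_INCOME_LIMIT + PER_MEMBER_ALLOWANCE * (applicant.household_size - 1)
    return applicant.is_ca_resident and applicant.annual_income <= income_limit


print(is_eligible(Applicant(annual_income=35_000, household_size=2, is_ca_resident=True)))   # True
print(is_eligible(Applicant(annual_income=35_000, household_size=1, is_ca_resident=True)))   # False
```

Encoding policy rules this way, rather than leaving them in prose, is what makes them amenable to the kinds of automated checking, summarization, and code-generation support the CGAIPG anticipates.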
Along with their commitment to responsible and trustworthy GenAI integration, state entities must also account for and evaluate the potential array of GenAI impacts on the state workforce. The executive suite of a state entity, namely its CIO and their team, is responsible for ensuring that GenAI procurement processes are managed appropriately. Consequently, when conducting incidental GenAI purchases, a state entity’s leaders must:
- Determine who, in their team, will be responsible for the continual monitoring and evaluation of GenAI systems—most often, this will be the CIO.
- Take part in mandatory GenAI training and upskilling initiatives.
- Ensure the continued development of up-to-date and relevant GenAI skills and/or knowledge among staff members.
Alternatively, if a state entity’s leaders make intentional GenAI procurements, they must adhere to the three actions above in addition to the following:
- Conduct a pre-procurement analysis to uncover and evaluate GenAI business needs and implications.
- Foster clear and open collaboration with end users to ensure that GenAI impacts are adequately considered.
- Run GenAI risk and impact assessments to determine the scope and intensity of potential GenAI risks and impacts.
- Test and evaluate GenAI systems before deployment, to inform any additional corrective and/or safety measures that are necessary.
- Build a team with GenAI expertise, which will be responsible for the continued oversight and evaluation of operational GenAI systems and/or GenAI components that are part of a larger system.
Training
At a general level, GenAI training should align with trustworthy and RAI principles and best practices, help future-proof state employees' skills repertoire in terms of what’s required to succeed in a GenAI-driven economy, and explore how GenAI can be leveraged to promote equity and diminish potential biases and/or inaccuracies in AI outputs.
For GenAI training to be effective, it should be broken down into three stages, listed hierarchically below:
- Executive leadership, legal, labor, and privacy → Training executives first allows them to get a better idea of which key personnel should undergo AI training next. As for legal, labor, and privacy, AI training is intended to help relevant specialists uncover and mitigate potential GenAI risks and impacts across these areas.
- Program staff and technical experts → GenAI training targeting staff members should aim to enhance operational efficiency, the delivery of equitable government services, and the ability to spot possible GenAI use cases and risks. On the other hand, GenAI training for technical experts should ultimately equip them with the skills required to evaluate a state entity’s GenAI readiness levels before procurement.
- General workforce → To ensure that the state’s workforce can leverage GenAI and other emerging technologies effectively and responsibly before they’re deployed, general AI education and training must be provided.
GenAI training and education should be mandatory and form a core component of privacy and security training initiatives. Overall, GenAI training and education can be divided into three categories:
- General education → Sets the stage for what AI is, how it can be used, the different kinds of AI systems and tools that exist, and the potential risks and impacts that certain AI systems might generate.
- Risk intelligent GenAI competencies → GenAI-specific, targeting foundational functional principles, possible use cases, risk identification and management, and the state entity’s responsibilities regarding RAI integration procedures. This area also addresses the legal and privacy implications of GenAI procurement, namely across the domains of data and product ownership, as well as data privacy and security.
- Technical training → Addresses two main areas: model infrastructure and security. Infrastructure-related training should target how to train, manage, and monitor GenAI systems whereas security-related training should illustrate how to manage and operate GenAI models securely, while also explaining how to address possible GenAI-driven threats to cybersecurity protocols.
GenAI Procurement, Risk Assessment, and Management
State entities’ CIOs are responsible for evaluating any GenAI risks that emerge within their organizations. Importantly, GenAI risk assessment and mitigation strategies should draw from established industry-specific documentation, such as the NIST AI Risk Management Framework, alongside other resources like the California GenAI toolkit and additional CDT-provided guidance documents.
In a nutshell, risk assessment and mitigation procedures should:
- Evaluate and scrutinize GenAI training data for representativeness, accuracy, and integrity.
- Determine whether GenAI procurement can fuel discriminatory outcomes and whether GenAI models are easily accessible.
- Facilitate an understanding of the scope and timeline of certain risk interventions.
- Identify the key actors responsible for GenAI oversight and evaluation.
- Maintain and promote GenAI trustworthiness.
As for GenAI procurement, California state entities should begin by following the guidelines prescribed by the State Contracting Manual. However, they must also follow additional, CGAIPG-specific guidelines, described below:
- State entities must disclose the details of GenAI bids and offers made to GenAI vendors and also report any agreed-upon GenAI contracts. Conversely, vendors must fill out and submit a GenAI Disclosure and Fact Sheet to potential state buyers.
- When trying to attract GenAI bids and offers, state entities must provide a written solicitation describing their GenAI requirements and objectives to potential vendors.
- For any GenAI procurement made by a state entity, its CIO must conduct a risk evaluation, supplemented by a consultation with the CDT; the intended GenAI solution should then be scrutinized according to the results of that consultation.
- When state entities acquire GenAI, the acquisition must be accompanied by a CDT-administered GenAI assessment. If changes to a GenAI model’s risk profile occur, or if significant modifications to its architecture are made, they must be reported to the CDT for reassessment and consultation. If state entities find that a vendor has added GenAI technology to a contract without first obtaining their consent, they must bring it to the CDT’s attention.
- When determining whether to move forward with potential GenAI procurement contracts, state entities must engage a GenAI expert in the process, to ensure that contracts are managed responsibly and effectively.
Conclusion
Throughout this post, we’ve illustrated, at a granular level, the provisions described in the CGAIPG, aiming to make the information in this document easily accessible and digestible. Seeing as the CGAIPG isn’t expected to become official California state policy until 2025, government actors subject to its provisions should operate on the assumption that significant changes will occur in the near term, especially since the next 8-12 months will be shaped by stakeholder feedback.
Major developments in the California GenAI ecosystem could also have consequences that reverberate throughout the CGAIPG. Given the pace at which AI advances and proliferates, government actors can’t afford to let their guard down; otherwise, they risk GenAI integration causing widespread negative impacts. Monitoring the GenAI landscape to gain insights into the latest trends and research developments will be crucial to anticipating how the CGAIPG might evolve, namely what additional measures will emerge to maintain trustworthy and RAI procurement practices.
Fortunately, readers don’t need to embark on this journey alone and can turn to Lumenova AI’s blog for in-depth, current information on generative and RAI developments, as well as analysis and description of the most recent AI policy initiatives and risk management approaches.
For those wanting to get a foot in the AI risk management door, consider using Lumenova AI’s RAI platform and book a product demo today.