Artificial intelligence (AI) has rapidly integrated into many facets of public sector operations, transforming how state and local governments deliver services and manage resources. While AI technologies may help improve efficiency and decision making and offer more personalized services, rapid adoption also introduces risks such as bias, privacy concerns, and a lack of transparency in operations.
To help address these concerns, state governments are increasingly requiring their agencies to conduct thorough inventories of AI deployments and to administer overarching risk and governance programs. Below, we examine these initiatives and how they can help ensure AI is used responsibly and ethically.
AI Governance Mandates
State governments are requiring their agencies to create detailed inventories of where and how AI is being used. This step is crucial to understanding the scope of AI deployment and identifying areas that need governance and risk management. In addition, state governments are setting guidelines for the use, risk assessment, and governance requirements related to AI.
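In practice, each system in such an inventory can be captured as a structured record. The sketch below is a minimal, hypothetical schema; the field names are illustrative and are not drawn from any state's actual inventory template:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class AIInventoryEntry:
    """One row in a hypothetical statewide AI inventory."""
    system_name: str       # what the AI system is called
    agency: str            # which agency operates it
    purpose: str           # how the system is used
    data_categories: list = field(default_factory=list)  # kinds of data it touches
    risk_assessed: bool = False  # whether a risk assessment has been completed

# Example entry an agency might file
entry = AIInventoryEntry(
    system_name="Benefits Eligibility Screener",
    agency="Dept. of Social Services",
    purpose="Triage incoming benefits applications",
    data_categories=["PII", "income records"],
)

# Serializing to a plain dict makes entries easy to aggregate statewide
record = asdict(entry)
```

A schema like this also makes gaps visible: an entry filed with `risk_assessed` still `False` is an immediate candidate for follow-up under the state's governance requirements.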
Example: California
In 2023, California Gov. Gavin Newsom issued an executive order on generative AI that addresses risk management topics such as:1
- Establishing guidelines for AI use in government operations
- Conducting risk assessments for AI applications
- Ensuring transparency, privacy, and equity in AI deployment
- Providing training for state employees on AI-related issues
- Creating an advisory group on AI ethics and implementation
- Requiring periodic assessments on AI use and impact in state operations
California also passed Senate Bill 1047,2 which aims to prevent the misuse of large AI models to inflict “critical harms” on humanity. The bill is highly debated, and the technology world is watching to see whether Newsom will sign it.
Example: Virginia
In 2024, Virginia Gov. Glenn Youngkin issued a directive requiring all state agencies to identify and document their AI applications.3 This directive mandates the creation of an AI inventory and the implementation of risk management protocols to ensure AI systems are used ethically and transparently. In addition, the Virginia Information Technologies Agency (VITA) established an enterprise architecture standard on AI to guide state agencies in the responsible deployment and management of AI technologies.
Implementing Risk & Governance Programs
Alongside AI inventories, state governments are instituting risk and governance programs to help manage and mitigate potential risks associated with AI technologies. These programs are designed to ensure that AI systems are fair, transparent, and accountable. A key option to consider is the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF), which provides guidelines for assessing and managing AI-related risks.4
Aside from the NIST AI RMF, state governments may consider two other frameworks for AI risk and governance programs. ISO/IEC 23894:2023, developed by the International Organization for Standardization (ISO), details how organizations that develop, produce, deploy, or use AI can manage AI-specific risk.5 Google’s Secure AI Framework (SAIF) pursues similar governance goals; it is a conceptual framework designed to help mitigate risks specific to AI systems and to evolve alongside changing technologies.6
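The NIST AI RMF organizes risk management around four core functions: Govern, Map, Measure, and Manage. A simple way for an agency to self-check its program is to categorize planned activities under those functions and flag any function left uncovered. The activity descriptions below are illustrative examples, not text taken from the framework:

```python
# NIST AI RMF core functions (the activity lists are illustrative)
RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

activities = {
    "Govern": ["Adopt an agency AI policy", "Stand up an oversight committee"],
    "Map": ["Inventory AI systems", "Document intended use and context"],
    "Measure": ["Test models for bias", "Track accuracy over time"],
    "Manage": ["Prioritize and remediate identified risks"],
}

def coverage_gaps(plan: dict) -> list:
    """Return RMF functions the plan does not yet address."""
    return [f for f in RMF_FUNCTIONS if not plan.get(f)]

# A plan that only covers Govern and Map surfaces the missing functions
partial_plan = {"Govern": activities["Govern"], "Map": activities["Map"]}
gaps = coverage_gaps(partial_plan)  # ["Measure", "Manage"]
```

The same gap-check approach could be applied against ISO/IEC 23894 clauses or SAIF elements; only the reference list of functions would change.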
Key Components of AI Governance Programs in State & Local Government
Ethical Guidelines & Standards
- Bias and Fairness: Guidelines are developed to help identify and mitigate biases in AI systems.
- Transparency: AI decision-making processes should be transparent and explainable.
Privacy & Security
- Data Protection: Measures are put in place to help safeguard personal information used by AI.
- Cybersecurity: Safety protocols are set up to help protect AI systems from threats and vulnerabilities.
Accountability Mechanisms
- Audits and Assessments: Regular audits and assessments of AI systems are performed to confirm compliance with ethical guidelines and standards.
- Governance Bodies: Governance bodies or committees are established to oversee AI implementation and usage, as well as audits and assessments.
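A governance body can operationalize the components above as a pre-audit screen over the AI inventory, flagging systems that are missing required artifacts. The checklist keys below are hypothetical and not drawn from any statutory requirement:

```python
# Hypothetical required governance artifacts for each inventoried system
REQUIRED_ARTIFACTS = ["bias_review", "privacy_assessment", "last_audit_date"]

def flag_for_audit(entries: list) -> list:
    """Return names of systems missing any required artifact."""
    flagged = []
    for e in entries:
        if any(not e.get(k) for k in REQUIRED_ARTIFACTS):
            flagged.append(e["system_name"])
    return flagged

entries = [
    {"system_name": "Chatbot", "bias_review": True,
     "privacy_assessment": True, "last_audit_date": "2024-05-01"},
    {"system_name": "Fraud Scorer", "bias_review": True,
     "privacy_assessment": False, "last_audit_date": None},
]
# Only "Fraud Scorer" lacks artifacts and would be flagged for audit
```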
Challenges & Considerations
While AI inventories and governance programs are positive steps, they are not free of challenges, including the following:
Resource Allocation
Creating and maintaining AI inventories and governance programs requires significant resources, including funding, experienced staff, and time. State governments must secure these resources to support their initiatives, or the work will likely fall short.
Interdisciplinary Collaboration
Effective AI governance necessitates collaboration across various disciplines, including technology, data management, risk management, ethics, law, and social sciences. Connecting and building interdisciplinary teams can be difficult, but it is crucial for developing well-rounded and balanced AI policies.
Keeping Pace With Technological Advancements
AI technologies are rapidly evolving, and governance frameworks must be flexible and adaptive to keep pace. It is essential that the governing bodies focused on these programs continuously monitor and update policies to address emerging risks and opportunities.
Conclusion
As AI continues to play a crucial role in public services, robust governance frameworks such as NIST AI RMF, ISO/IEC 23894:2023, and Google’s SAIF will be essential to help safeguard citizens’ interests and make sure that AI serves as a force for social good.
By mandating AI inventories and implementing broad risk and governance programs, state governments are fostering transparency, accountability, and trust in AI systems. These actions will help to ensure the responsible and ethical use of AI technologies in their various operations and toward the communities they serve.
Our team of public sector professionals is here to help your organization tackle cybersecurity challenges and plan for the future. If you have any questions or need assistance, please reach out to a professional at Forvis Mazars.
- 1“Executive Order N-12-23,” gov.ca.gov, September 6, 2023.
- 2“Senator Wiener Introduces Legislation to Ensure Safe Development of Large-Scale AI Systems and Support AI Innovation in California,” sd11.senate.gov, February 8, 2024.
- 3“Executive Order Number 30 (2024),” governor.virginia.gov, January 18, 2024.
- 4“AI Risk Management Framework,” nist.gov, 2024.
- 5“ISO/IEC 23894:2023, Information Technology – Artificial Intelligence – Guidance on Risk Management,” iso.org, 2023.
- 6“Secure AI Framework (SAIF): A Conceptual Framework for Secure AI Systems,” developers.google.com, 2024.