The 2024 Stanford AI Index Report revealed a staggering 56.3% increase in AI-related regulations in the U.S. over the past year alone.1 As artificial intelligence (AI) continues to revolutionize industries—from accelerating scientific breakthroughs to raising the skill floor for employees—businesses are investing heavily in new AI models to stay competitive. Yet, amid this rapid progress, one thing remains conspicuously absent: a standardized framework for evaluating AI use within the enterprise.
With 66% of people now believing AI will dramatically affect their lives in the next three to five years (up from 60% the year before), societal awareness of this technology is clearly accelerating.2 However, while the pace of AI development surges forward, many organizations remain unprepared for the wave of regulation that is inevitably coming. Failing to address this now could mean not only falling behind the AI curve, but also facing costly compliance efforts in the future.
Are you ready to proactively manage the risks and opportunities of AI in your organization—or will you wait for regulation to force your hand?
Governments & Governance
In October 2023, President Joe Biden issued an executive order on AI, setting the stage for government standards for AI security in the United States. The order emphasized stronger controls, transparency, and safety for AI systems. It addressed critical areas such as standards development, AI-enabled fraud, algorithmic discrimination, third-party vendor responsibility, and innovation in the field of AI.
In the summer of 2024, California’s AI bill, Senate Bill (SB) 1047, became a hot topic of discussion. Critics argued that it introduced heavy-handed regulations targeting existential risks that don’t yet exist, while supporters saw it as a necessary step in governing AI’s societal impacts. More importantly, this bill signals the beginning of real, enforceable AI legislation making its way through the U.S. political system. Figures like former House Speaker Nancy Pelosi and members of Congress have weighed in, often with conflicting views. Even during the September 10 presidential debate, AI and quantum computing were briefly mentioned, underscoring their rising importance on the national stage.
Globally, countries are racing to develop advanced models and apply them in innovative ways, all while grappling with the ethical and privacy concerns these technologies raise. In July 2024, the European Union published the AI Act, which is being hailed as the world’s first comprehensive legal framework for AI. Canada’s Digital Charter Implementation Act (Bill C-27), introduced in June 2022, includes provisions for regulating AI systems through its proposed Artificial Intelligence and Data Act. Japan also addressed the risks associated with AI and automation, with AI governance and automated driving laws taking effect in April 2023.
These examples highlight a growing global movement toward AI regulation. Governments are increasingly pushing for AI system regulations that would begin to manage the unknown or unproven risks AI introduces. Inside organizations, employees may use AI tools to drive results—often without the company’s knowledge. While this can boost efficiency, it also introduces risks to the company. Without proper governance, unauthorized AI use can create a new form of shadow IT: shadow AI.
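To illustrate how a security team might begin surfacing shadow AI, the following minimal sketch scans an outbound proxy log export for traffic to well-known public AI service endpoints. It assumes a hypothetical CSV log with “user” and “destination_host” columns, and the domain list is an illustrative starting point rather than a complete inventory; both would need to be adapted to your environment.

```python
import csv
from collections import Counter

# Illustrative, incomplete list of public AI service hosts.
# A real deployment would maintain a vetted, regularly updated list.
AI_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def find_shadow_ai(log_path: str) -> Counter:
    """Count requests per (user, AI host) pair in a proxy log export.

    Assumes a CSV with 'user' and 'destination_host' columns; adjust
    the field names to match your proxy's actual export schema.
    """
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("destination_host", "")
            if host in AI_HOSTS:
                hits[(row.get("user", "unknown"), host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in find_shadow_ai("proxy_log.csv").most_common():
        print(f"{user} -> {host}: {count} requests")
```

A report like this is only a discovery aid; the goal is to open a conversation about sanctioned alternatives, not to punish experimentation.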
As with the early days of cloud computing, the internet, or smartphones, we’re entering an era where “we don’t know what we don’t know” about the risks of AI at scale—an unsettling prospect. Security wasn’t built into the internet’s foundation; for instance, HTTPS (which encrypts communications between websites and users) wasn’t widely adopted until around 2014. Before that, many popular websites communicated unencrypted over the public internet. If we had applied the threat modeling and risk management strategies we use today to these technologies at their inception, the current vulnerability landscape might look very different.
Fortunately, we have the opportunity to address the perceived risks of AI early and build systems to identify new risks as they emerge. Compared with the advent of past technologies, AI model developers are placing uncommon emphasis on limiting the potential for misuse as these systems are built. Risk management frameworks are in place and continuously updated to reflect our growing understanding of AI’s complexities. As responsible stakeholders, it is our duty to contribute to the solution, helping ensure that AI is used ethically, transparently, and in ways that benefit society as a whole.
How Can an Organization Prepare for Upcoming Regulatory Requirements?
Organizations are taking different approaches to prepare for the impending regulation of AI. Some opt for overcorrection, heavily regulating or even banning the use of AI tools within their enterprises or government operations. Others adopt a more open approach, allowing the benefits of AI tools to be harnessed and applied within their organizations.
Currently, no comprehensive regulations exist in the U.S. requiring private institutions to follow strict standards for AI use in decision making, customer interaction, or data governance. However, that is likely to change soon, especially if California’s SB 1047 is enacted. Preparing for these regulatory shifts will have a tangible impact on a company’s bottom line. Reactive efforts to control processes that already employ AI, or to fix issues around ethical and transparent AI use, will be far more costly to implement and enforce. If your organization has ever had to become PCI DSS or HIPAA compliant on a short timeline, you know the financial and operational strain this can cause.
To stay ahead, organizations will need to evolve their data governance principles, helping ensure AI systems are trained and operated using secure, high-quality data. Anti-discrimination controls will be essential to help ensure that AI-driven decision making does not inadvertently disadvantage minority populations. In addition, risk management related to AI performance, security, and ethics must become embedded in company culture, with the understanding that policies, procedures, and people will need to adapt as the technology evolves.
These foundational elements—data governance, risk management, and anti-discrimination—must be considered as early as possible in the AI lifecycle. One option for assessing your current posture and preparing your governance structure is to utilize the NIST Artificial Intelligence Risk Management Framework (AI RMF). By adopting such frameworks, your organization can begin laying the groundwork for responsible and compliant AI use, helping ensure that when regulations do arrive, you are already ahead of the curve.
NIST Artificial Intelligence Risk Management Framework
“The Framework is designed to equip organizations and individuals with approaches that increase the trustworthiness of AI systems, and to help foster the responsible design, development, deployment, and use of AI systems over time.” - NIST AI RMF Executive Summary, page 2
The NIST AI RMF was developed by professionals in the AI, risk management, and governance fields. If you have used the NIST Cybersecurity Framework before, the AI RMF’s layout and use will feel very familiar.
The Framework is split into four functions:
Govern
Establish a framework for organizational oversight, accountability, and policies related to AI systems. This helps ensure that governance structures, roles, and responsibilities are defined and that risk management practices align with ethical principles, regulatory requirements, and organizational objectives.
Map
Identify and understand the context, purpose, and potential impacts of an AI system. Map out the AI system’s lifecycle from development to deployment to better understand the specific risks associated with its intended use, the data it relies on, and the environment in which it operates.
Measure
Assess and quantify AI risks by measuring relevant metrics such as fairness, accuracy, robustness, and security. This function involves continuous monitoring and evaluation of the AI system’s performance, identifying gaps, and helping ensure compliance with defined risk tolerance levels.
Manage
Address and mitigate risks through operational controls, mitigation strategies, and response plans. This includes maintaining and updating AI systems, adapting to new risks, and helping ensure that corrective actions are taken to keep AI operations aligned with governance and risk management objectives.
Each of these four functions is segmented into categories and subcategories to assist with evaluation and provide guidance on the target future state. This structure allows an organization to approach AI risk management systematically, helping ensure responsible and secure AI deployment.
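To make this structure concrete, here is a minimal sketch of an AI risk register organized around the four functions. The findings, severities, and owners shown are hypothetical placeholders, and the grouping is a simplification; the framework’s actual categories and subcategories are defined in the AI RMF itself.

```python
from dataclasses import dataclass, field

AI_RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

@dataclass
class Finding:
    description: str  # observed gap or risk
    severity: str     # e.g., "low", "medium", "high"
    owner: str        # accountable role or team

@dataclass
class RiskRegister:
    """Groups findings under the four NIST AI RMF functions."""
    entries: dict = field(
        default_factory=lambda: {f: [] for f in AI_RMF_FUNCTIONS}
    )

    def add(self, function: str, finding: Finding) -> None:
        if function not in self.entries:
            raise ValueError(f"Unknown AI RMF function: {function}")
        self.entries[function].append(finding)

# Hypothetical findings from a discovery review:
register = RiskRegister()
register.add("Map", Finding("Undocumented chatbot use in customer support", "high", "CISO"))
register.add("Measure", Finding("No fairness metrics for loan-scoring model", "medium", "Data science lead"))
```

Even a simple register like this gives leadership a shared view of where AI risks sit and who owns them.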
A review of an organization’s AI risk management posture typically begins with a comprehensive discovery phase to understand the current state of AI systems and related processes, aligning with the Map function of the NIST AI RMF. This phase is crucial to identifying all AI use cases, data dependencies, and potential risks across the system lifecycle. Involving the right stakeholders—such as business leaders, data scientists, IT professionals, and legal experts—provides a holistic view of AI risks, from ethical concerns to regulatory compliance.
The Govern function emphasizes the need for clear oversight and defined roles, making it critical that senior leadership is engaged to set risk management policies and accountability in the appropriate areas. The next step, Measure, involves assessing AI system performance, fairness, security, and robustness through established metrics, helping ensure risks are quantified and can be studied to facilitate future improvement. Finally, in the Manage phase, the organization must implement risk mitigation strategies and continuously adapt them, keeping operations in line with governance goals, evolving business requirements, and rapid change in the technology ecosystem.
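As one example of the kind of metric the Measure function calls for, the sketch below computes a demographic parity gap, the difference in positive-outcome rates between groups, from a hypothetical model’s decisions. The sample data is invented, and the acceptable threshold for such a gap is a policy decision the framework leaves to the organization.

```python
def demographic_parity_gap(decisions: list[tuple[str, int]]) -> float:
    """Difference between the highest and lowest group approval rates.

    `decisions` pairs a group label with a binary outcome (1 = approved).
    A gap near 0 suggests similar treatment across groups.
    """
    rates = {}
    for group in {g for g, _ in decisions}:
        outcomes = [y for g, y in decisions if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    ordered = sorted(rates.values())
    return ordered[-1] - ordered[0]

# Hypothetical loan decisions: (group, approved)
sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
print(f"Demographic parity gap: {demographic_parity_gap(sample):.2f}")  # 0.33
```

In practice, metrics like this would be tracked continuously, alongside accuracy, robustness, and security measures, per the Measure function’s guidance.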
The NIST AI RMF provides a clear road map for managing AI risks in a structured and proactive manner. By following its guidelines, we can anticipate regulatory trends and build AI systems that are not only compliant with existing standards but also resilient to future legal and ethical requirements. This approach helps foster trust, accountability, and responsibility—qualities that can set you apart as leaders in your respective industries.
Act Now
Rather than waiting for regulation to force your hand, take control of your AI governance preparation now. Implementing and assessing your readiness with the NIST AI RMF can help you manage risks, enhance security, and ensure ethical AI deployment on your own terms. Around the world, this technology has created a wave of change—a wave increasing in size every day. By acting now, you can help position your organization as an innovator and responsible steward of AI technology, capable of navigating the complex, emerging regulatory landscape with confidence and agility.
If you have any questions or need assistance, please reach out to a professional at Forvis Mazars.