The Regulatory Wave
Regulations are the other major driver of the cybersecurity industry, which accounts for the fact that the single biggest category of vendors, as the vendor counts below show, is Governance, Risk, and Compliance (GRC).
| Category | Vendor Count |
|---|---|
| GRC | 576 |
| Data Security | 514 |
| IAM | 487 |
| Network Security | 368 |
| AI Security | 356 |
| MSSP | 280 |
| Operations | 263 |
| Endpoint Security | 238 |
| AppSec | 201 |
| IoT Security | 147 |
| Threat Intel | 128 |
| Fraud Prevention | 114 |
| Security Analytics | 86 |
| Email Security | 74 |
| Training | 36 |
| API Security | 35 |
| Deception | 19 |
| Testing | 13 |
From this table you can also see that AI Security has already become the fifth-largest category of security vendors.
AI Governance concerns itself with the controls required to ensure an organization is in compliance with the relevant regulations. The following laws and regulations are indicative of a rapid response on the part of governing bodies to the newly perceived threats of AI.
EU AI Act
The EU AI Act was conceived in 2018–2019 as the European Union recognized that artificial intelligence was advancing faster than existing regulatory frameworks could manage — particularly in areas affecting safety, privacy, discrimination, and fundamental rights. Following the GDPR model, the European Commission sought to create the world’s first comprehensive legal framework for AI. In April 2021, the Commission released the initial draft, built around a risk-based approach that classified AI systems into prohibited, high-risk, limited-risk, and minimal-risk categories. Over the next two and a half years, extensive negotiations took place among the European Parliament, the Council of the EU, industry stakeholders, civil-society groups, and AI researchers — triggered in part by the rapid rise of generative AI in 2022–2023. These developments led to late-stage revisions addressing foundation models, large language models (LLMs), general-purpose AI (GPAI), transparency obligations, and safeguards against misuse.
The Act ultimately covers a wide range of areas:
- Banned AI practices (e.g., social scoring)
- High-risk AI sectors (e.g., healthcare, transportation, critical infrastructure)
- Requirements for general-purpose AI models
- Biometric and emotion recognition systems
- AI governance and documentation
- Human oversight
- Conformity assessments
- Market surveillance mechanisms
- Significant transparency rules for AI-generated content
The law was formally approved in 2024, with obligations phasing in through 2025 and 2026. In practice, the EU AI Act establishes the first major legal benchmark for trustworthy and accountable AI, setting a global regulatory precedent similar to what GDPR did for data privacy.
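To make the Act's risk-based approach concrete, here is a minimal Python sketch that triages hypothetical use cases into the four tiers. The tier assignments and the `triage` function are assumptions for illustration, not legal guidance; real classification requires legal analysis of the Act's annexes.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # banned outright (e.g., social scoring)
    HIGH = "high-risk"          # regulated sectors, conformity assessment required
    LIMITED = "limited-risk"    # transparency obligations apply
    MINIMAL = "minimal-risk"    # no specific obligations

# Hypothetical mapping of use cases to tiers, loosely following the
# categories described above.
USE_CASE_TIERS = {
    "government social scoring": RiskTier.PROHIBITED,
    "medical diagnosis support": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Look up a use case's tier, defaulting to HIGH pending review."""
    # Defaulting unknown cases to HIGH is a conservative assumption
    # made for this sketch, not a rule from the Act.
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

for case in USE_CASE_TIERS:
    print(f"{case}: {triage(case).value}")
```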
NIST AI Risk Management Framework
The NIST AI Risk Management Framework (AI RMF) emerged from a growing recognition within the US government, industry, and research communities that artificial intelligence was advancing faster than existing safety, ethics, and regulatory structures could adapt. Between 2017 and 2020, incidents involving biased algorithms, opaque decision-making, security vulnerabilities in machine learning systems, and the increasing deployment of autonomous systems led policymakers to call for a comprehensive framework to address AI risks. In 2021, the US Congress formally directed the National Institute of Standards and Technology (NIST) to develop a voluntary but authoritative framework to help organizations identify and manage the risks associated with AI. This directive, codified in the National AI Initiative Act of 2020, signaled the need for a standardized approach to trustworthy AI development and deployment.
NIST launched an open, multi-stakeholder process involving hundreds of participants across government, academia, civil society, and the private sector. Through workshops, public comment periods, draft releases, and technical working groups, NIST gathered input on issues spanning bias, robustness, data quality, privacy, safety, interpretability, and governance. Importantly, this process coincided with the rapid rise of large language models (LLMs) and generative AI, which broadened the scope of risks NIST needed to address — including misinformation, emergent behaviors, and complex supply-chain vulnerabilities. After nearly two years of collaborative development, the AI RMF 1.0 was released in January 2023, accompanied by a detailed “AI RMF Playbook” containing actionable guidance.

The AI RMF is built around four core functions — Govern, Map, Measure, and Manage — designed to guide organizations through the full lifecycle of AI systems. The Govern function establishes organizational culture and cross-functional structures for accountability. Map helps teams understand the context, intended use, and socio-technical characteristics of an AI system. Measure supports evaluation of risks, performance, fairness, robustness, and security using quantitative and qualitative metrics. Manage focuses on responding to, monitoring, and mitigating risks as systems evolve. Across these functions, the framework promotes the principles of trustworthy AI, which include validity, reliability, safety, security, resilience, accountability, transparency, explainability, privacy enhancement, and fairness.

Unlike traditional regulations, the NIST AI RMF is voluntary, risk-based, flexible, and designed for both public and private sectors. It is internationally influential, serving as a reference point for enterprise AI governance programs, and often used in conjunction with standards such as ISO/IEC 42001 and regulatory regimes like the EU AI Act. Today, the AI RMF functions not only as a technical guideline but as a shared language for policymakers, developers, auditors, and risk officers — helping organizations bridge the gap between innovative AI adoption and responsible, trustworthy deployment.
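To make the four functions concrete, the following Python sketch shows what a single risk-register entry aligned to Govern, Map, Measure, and Manage might look like. The class and field names are illustrative assumptions, not part of the framework itself.

```python
from dataclasses import dataclass, field

# Hypothetical risk-register entry organized around the AI RMF's four
# core functions. Every field name here is an assumption for illustration.

@dataclass
class AIRiskRecord:
    system_name: str
    # Govern: who is accountable, and under which internal policy
    owner: str
    policy_reference: str
    # Map: context and intended use of the system
    intended_use: str
    affected_parties: list[str] = field(default_factory=list)
    # Measure: quantitative and qualitative risk metrics
    metrics: dict[str, float] = field(default_factory=dict)
    # Manage: mitigations in place and the monitoring cadence
    mitigations: list[str] = field(default_factory=list)
    review_interval_days: int = 90

record = AIRiskRecord(
    system_name="resume-screening-model",
    owner="hr-analytics-team",
    policy_reference="AI-POL-007",
    intended_use="Rank inbound job applications for recruiter review",
    affected_parties=["job applicants", "recruiters"],
    metrics={"false_negative_rate": 0.08, "demographic_parity_gap": 0.03},
    mitigations=["human review of all rejections", "quarterly bias audit"],
)
print(record.system_name, "reviewed every", record.review_interval_days, "days")
```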
ISO 42001
ISO/IEC 42001 is the first global, certifiable management system standard dedicated to artificial intelligence. The standard was created to help organizations design, deploy, and maintain AI systems in a trustworthy, safe, and auditable manner. Its origins trace back to growing international concern — beginning around 2019 to 2020 — that AI was becoming deeply embedded in business and government decision-making without consistent global norms for governance, risk control, and accountability. The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) responded by convening ISO/IEC JTC 1/SC 42, the global standards committee for AI, to draft a unified international standard. Their goal was to provide an operational blueprint for organizations needing more structure and rigor than voluntary frameworks alone could offer.
Over several years, SC 42 engaged a wide coalition of contributors across industry, academia, nonprofits, and national standards bodies. Early drafts focused heavily on transparency, risk management, and ethical guidance, but the scope expanded as the proliferation of deep learning, LLMs, and autonomous systems introduced new categories of safety, security, and lifecycle risk. By 2022 and 2023, the committee (driven in part by geopolitical interest in harmonizing AI regulation) accelerated its work. ISO/IEC 42001 was formally released in late 2023 as a certifiable management system standard, similar in structure to ISO 9001 for quality or ISO 27001 for information security.

ISO 42001 defines the requirements for an AI Management System (AIMS), providing a repeatable, auditable framework for organizations that develop, acquire, or deploy AI. Its core emphasis is on establishing governance, roles, and responsibilities; documenting AI systems and their intended uses; conducting structured risk assessments; ensuring data quality and model performance; embedding monitoring, oversight, and human controls; and maintaining transparency and accountability across the AI lifecycle. The standard is deliberately broad and technology-agnostic, enabling organizations of all sizes to implement measurable controls that scale with AI maturity and regulatory expectations.

Unlike frameworks such as the NIST AI RMF — meant as guidance — ISO 42001 is certifiable, allowing independent auditors to verify that an organization’s AI practices meet an internationally recognized threshold of responsibility and risk management. As global regulatory regimes emerge, including the EU AI Act, ISO 42001 is quickly becoming a foundational anchor for AI compliance programs, helping enterprises demonstrate due diligence, reduce operational risk, and align internal governance with global best practices. It is now a cornerstone standard for organizations building enterprise-wide AI governance programs and seeking to operationalize trustworthy AI.
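As a rough illustration of what an auditable AIMS check might look like in practice, the Python sketch below verifies that a simplified set of required artifacts exists for each AI system before an internal audit. The artifact names are assumptions chosen for illustration, not the standard's actual control set.

```python
# Illustrative pre-audit readiness check. The required-artifact list is
# a simplification for this sketch, not ISO/IEC 42001's control catalog.

REQUIRED_ARTIFACTS = [
    "system_documentation",  # documented purpose and intended use
    "risk_assessment",       # structured risk assessment on file
    "data_quality_report",   # evidence of data quality checks
    "monitoring_plan",       # ongoing monitoring and human oversight
]

def audit_readiness(artifacts: dict) -> list[str]:
    """Return the required artifacts still missing for one AI system."""
    return [a for a in REQUIRED_ARTIFACTS if not artifacts.get(a)]

systems = {
    "support-chatbot": {"system_documentation": True, "risk_assessment": True},
    "scoring-model": {a: True for a in REQUIRED_ARTIFACTS},
}

for name, artifacts in systems.items():
    missing = audit_readiness(artifacts)
    status = "audit-ready" if not missing else f"missing: {', '.join(missing)}"
    print(f"{name}: {status}")
```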
OWASP AI Exchange: A Practitioner-Led Foundation for AI Governance
This section was contributed by ChatGPT 5.2
The OWASP AI Exchange represents one of the most influential practitioner-driven efforts to define governance expectations for artificial intelligence systems. Developed by the Open Worldwide Application Security Project (OWASP), the AI Exchange is not a regulatory framework in the traditional sense. Instead, it serves as a living body of guidance that translates emerging AI risks into concrete governance, security, and operational controls that organizations can implement today.
At its core, the AI Exchange treats AI governance as an extension of enterprise risk management, rather than a separate ethical or compliance exercise. It emphasizes that AI systems introduce new categories of assets — models, training data, prompts, embeddings, and outputs — that must be governed with the same rigor applied to financial systems, personal data, and critical infrastructure. Governance, in this model, begins with clear accountability, requiring organizations to inventory AI systems, assign ownership, document intended use, and define acceptable risk thresholds.
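A minimal sketch of such an inventory, with assumed field names and a simple accountability gate, might look like this in Python; the structure is an illustration of the guidance, not a format defined by OWASP.

```python
# Sketch of an AI asset inventory in the spirit of the AI Exchange's
# accountability guidance: every system gets an owner, a documented
# intended use, and an explicit acceptable-risk threshold.
# All field names and the threshold scale are assumptions.

inventory = [
    {
        "system": "support-chatbot",
        "owner": "customer-success",
        "intended_use": "Answer product questions from documentation",
        "assets": ["model", "prompts", "embeddings", "chat logs"],
        "risk_threshold": "low",   # low acceptable residual risk
    },
    {
        "system": "credit-scoring-model",
        "owner": "risk-engineering",
        "intended_use": "Score consumer loan applications",
        "assets": ["model", "training data", "outputs"],
        "risk_threshold": "high",  # tolerated only with human review
    },
]

# A simple governance gate: refuse to register anything that lacks
# an owner or a documented intended use.
for entry in inventory:
    assert entry["owner"] and entry["intended_use"], \
        f"{entry['system']} fails the accountability check"
print(f"{len(inventory)} systems pass the accountability gate")
```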
A key contribution of the AI Exchange is its insistence that governance be embedded across the AI lifecycle. This includes design and procurement decisions, secure development practices, deployment controls, continuous monitoring, and retirement of systems. Rather than focusing narrowly on model accuracy or bias, the guidance highlights operational risks such as data leakage, prompt manipulation, model abuse, supply-chain dependencies, and unintended downstream use — areas increasingly addressed by regulators but often poorly understood by organizations.

Importantly, OWASP positions governance as a cross-functional responsibility. The AI Exchange explicitly calls for collaboration between executive leadership, security teams, engineering, legal, privacy, and compliance functions. This aligns closely with emerging regulatory expectations, such as the EU AI Act, which require demonstrable governance structures, risk assessments, and ongoing oversight rather than one-time compliance attestations.

While OWASP does not define legal obligations, the AI Exchange has become a de facto reference point for regulators, auditors, and enterprises seeking practical ways to operationalize high-level regulatory principles. Its guidance complements formal regulation by providing the “how” behind requirements related to risk management, transparency, and control effectiveness. As AI regulation continues to evolve globally, the OWASP AI Exchange stands out as a bridge between abstract policy goals and the realities of building, deploying, and governing AI systems at scale.
In the machine age, where regulation often lags innovation, the OWASP AI Exchange demonstrates how community-driven governance frameworks can shape responsible AI adoption long before laws are finalized. As such, the OWASP AI Exchange has become an essential resource for organizations seeking to act as true guardians of intelligent systems.
US State Regulations
As is to be expected, the US Congress is slow to enact any AI regulations, leaving it up to individual states to legislate. This creates the usual confusion for those hoping to be in compliance. Here is a summary of state AI laws at the time of publication:
1. Colorado – Colorado AI Act
Enacted: May 17, 2024
Effective: February 1, 2026
This is the first comprehensive state AI law in the US (Consumer Protections for AI). The law regulates high-risk AI systems used in areas like employment, healthcare, financial services, housing, insurance, and government services. Notably, the law places obligations on AI developers and deployers to mitigate algorithmic discrimination and increase transparency and accountability.
2. Utah – Utah Artificial Intelligence Policy Act (UAIP)
Enacted: March 13, 2024
Effective: May 1, 2024
The Utah Artificial Intelligence Policy Act addresses AI use in the private sector under consumer-protection principles, establishing baseline oversight for AI applications comparable to that applied to traditional business practices. The law took effect in stages: key provisions took effect in May 2024, and additional amendments followed one year later in May 2025.
3. Texas – Responsible Artificial Intelligence Governance Act
Enacted: June 22, 2025
Effective: January 1, 2026
The Responsible Artificial Intelligence Governance Act broadly regulates aspects of AI use and transparency in Texas. The law aims to encourage responsible deployment while protecting consumers and businesses.
4. California – AI Transparency & Frontier Models
Approved: September 29, 2025
Effective: January 1, 2026
California’s Transparency in Frontier Artificial Intelligence Act (SB 53) requires large AI companies (high revenue and compute thresholds) to publish safety frameworks and risk assessments for “frontier” AI models. The law includes reporting requirements for serious AI incidents and whistleblower protections.
Other California AI-related laws also target transparency obligations and disclosures for AI developers and online platforms.
5. Tennessee – ELVIS Act
Enacted: March 21, 2024
Effective: July 1, 2024
This unique law specifically regulates the use of AI to impersonate an individual's likeness, voice, or image without consent. It was designed to protect artists and individuals against unauthorized deepfakes and voice cloning.
6. Texas – AI and Obscene Material (SB 20)
Enacted: June 20, 2025
Effective: September 1, 2025
This law creates criminal offenses for the possession, production, or promotion of obscene images that appear to depict a child under 18, including images generated by AI as well as cartoons and animation.
Just as there are 50 state breach-disclosure laws, there will be many more AI regulations, potentially multiple laws in each state. The IAPP (International Association of Privacy Professionals) was tracking 45 such bills in process as of October 2025.
