Guardrails and Governance
Guardrails
Preventing the abuse of AI by employees, and far too often by students at universities (how could institutions of learning restrict access to learning tools that are going to change everything for academia?), accounts for 7% of the AI Security vendors: 27 of the 378 tracked in the IT-Harvest Dashboard.
Approaches to applying guardrails vary, from proxies that intercept and pre-process prompts coming from users, to browser extensions, to agents that monitor users. The customer may opt to block a prompt or to modify it by removing PII or other confidential information.
Another approach (used by Knostic) is to examine the response coming from the LLM and filter or expunge data that the organization deems inappropriate. In that case, a user can ask, “What are the salaries of my cubicle mates?” but the response would be intercepted and replaced with, “I am sorry, it is against company policy to reveal salary data.”
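The response-filtering approach above can be sketched as a simple post-processing step on the LLM's output. The patterns, refusal string, and function names below are hypothetical illustrations, not any vendor's actual policy engine:

```python
import re

# Hypothetical policy: patterns whose presence in an LLM response
# should cause the whole answer to be withheld.
BLOCKED_PATTERNS = [
    re.compile(r"\bsalar(?:y|ies)\b", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-shaped strings
]

REFUSAL = "I am sorry, it is against company policy to reveal that information."

def filter_response(llm_response: str) -> str:
    """Return the response unchanged, or a refusal if policy is violated."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(llm_response):
            return REFUSAL
    return llm_response

print(filter_response("Alice earns a salary of $120,000."))  # refused
print(filter_response("The meeting is at 3pm."))             # passes through
```

In practice vendors combine pattern matching like this with ML classifiers, but the interception point, between the model and the user, is the same.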
Some of the key capabilities that identify a vendor as providing guardrails include:
- Input filtering (prompt injection, malicious intent).
- Output filtering (PII, policy violations, harmful content).
- Token-level or semantic anomaly detection. This operates at the raw text token level — the actual units an LLM consumes and produces. Think of it as “packet inspection for prompt streams.” It looks for weird token patterns that often signal prompt injection, such as repeating override phrases (“ignore previous instructions”), extremely long prompts, null-terminated strings, encoded payloads (base64, hex, URL-encoding, leetspeak), rapid context shifts (prompt suddenly changes intent), indicators of obfuscation, adversarial tokens, whitespace manipulation, and strange punctuation frequency.
- Data loss prevention for AI usage.
- Shadow AI discovery.
- User/app/agent policy enforcement.
- Governance + audit trails tied to prompts/responses.
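The token-level checks described above can be sketched with a few lexical heuristics. This is a minimal illustration, assuming a hypothetical phrase list, length cap, and base64 check; real detectors use statistical and ML-based scoring on top of rules like these:

```python
import base64
import re

OVERRIDE_PHRASES = ("ignore previous instructions", "disregard all prior")
MAX_PROMPT_CHARS = 8000

def looks_like_base64_payload(token: str) -> bool:
    """Heuristic: long base64-decodable runs often hide encoded payloads."""
    if len(token) < 24 or not re.fullmatch(r"[A-Za-z0-9+/=]+", token):
        return False
    try:
        base64.b64decode(token, validate=True)
        return True
    except Exception:
        return False

def score_prompt(prompt: str) -> list[str]:
    """Return a list of anomaly flags raised by simple lexical checks."""
    flags = []
    lowered = prompt.lower()
    if any(phrase in lowered for phrase in OVERRIDE_PHRASES):
        flags.append("override-phrase")
    if len(prompt) > MAX_PROMPT_CHARS:
        flags.append("excessive-length")
    if "\x00" in prompt:
        flags.append("null-byte")
    if any(looks_like_base64_payload(t) for t in prompt.split()):
        flags.append("encoded-payload")
    return flags
```

A proxy-style guardrail would run `score_prompt` before forwarding a prompt to the model, blocking or sanitizing anything that raises flags.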
Here are the 27 companies working on Guardrails for AI Security, sorted by number of employees shown on LinkedIn:
| Company | Country | Investment | Employees |
|---|---|---|---|
| Nexos | Lithuania | $41.52M | 108 |
| Cloudsine | Singapore | - | 51 |
| Prompt Security | USA | $23M | 50 |
| Knostic | Israel | $14.3M | 48 |
| Arthur | USA | $60.3M | 41 |
| Orion Security | Israel | $6M | 36 |
| Swift Security | USA | - | 36 |
| AIceberg | USA | $10M | 35 |
| AllTrue.ai | Canada | - | 33 |
| Trustwise | USA | $4.55M | 30 |
| Calypso AI | USA | $43.2M | 29 |
| Lumia | USA | $18M | 26 |
| Acuvity | USA | $9M | 24 |
| Liminal Security | Israel | $16.98M | 23 |
| NeuralTrust | Spain | $2.79M | 21 |
| Tumeryk | USA | - | 21 |
| Kipling Secure | USA | - | 17 |
| PromptArmor | USA | $3.2M | 16 |
| SurePath AI | USA | $5.2M | 16 |
| Credal | USA | $10.3M | 15 |
| Ovalix | Israel | - | 15 |
| Qualifire | Israel | $1.6M | 14 |
| Pangea | USA | $51M | 12 |
| Weagle | Italy | - | 10 |
| Alert AI | USA | - | 7 |
| HiveTrace | Russia | - | 3 |
| Confident Security | USA | $4.2M | 2 |
Governance
AI Governance has emerged as one of the most critical new disciplines in enterprise technology, driven by the rapid adoption of machine learning, large language models (LLMs), and autonomous AI systems. As organizations scale AI across business functions, the need for oversight, accountability, and risk management becomes foundational — not only for safety and compliance, but for trust, operational resilience, and business continuity. The AI Governance category encompasses the platforms, processes, and technologies that ensure AI systems are developed, deployed, monitored, and maintained in a secure, ethical, and compliant manner.
At its core, AI Governance provides a centralized control plane for the entire AI lifecycle. Governance platforms inventory every AI asset across the enterprise — models, datasets, agents, pipelines, endpoints, and third-party tools — while continuously assessing the risks each one presents. This is not an easy task: they must unify policies across business, technical, legal, and regulatory domains; enforce those policies through automated or human-in-the-loop workflows; and surface issues such as bias, drift, data leakage, hallucination, and misuse before they escalate into real incidents. Rather than focusing solely on runtime protection, AI Governance extends upstream and downstream: from dataset provenance, training transparency, and model evaluation, to operational monitoring, access control, audit logging, and retirement.
The emergence of global regulatory frameworks — including the EU AI Act, NIST AI Risk Management Framework, ISO/IEC 42001, and sector-specific rules in finance, healthcare, and critical infrastructure — has accelerated the demand for governance. Enterprises must now demonstrate how AI systems were built, what data they rely on, how decisions are made, and how risks are mitigated. Governance platforms provide the templates, workflows, and automated evidence necessary to meet these regulatory expectations at scale.
An AI Governance product typically includes a combination of the following capabilities:
- AI asset inventory and discovery across cloud, SaaS, and internal systems.
- Risk classification and impact assessment, aligned to recognized frameworks.
- Policy management and automated enforcement for responsible AI use.
- Dataset governance and lineage tracking, ensuring provenance and compliance.
- Model lifecycle governance, including approvals, monitoring, and version control.
- Continuous detection of drift, bias, and performance degradation.
- Regulatory and audit preparedness, with documentation and transparency tooling.
- Identity, access, and permission management for AI systems.
- Traceability and immutable audit logs for every model action or change.
- Red teaming, vulnerability scanning, and adversarial testing.
- Vendor and third-party AI risk scoring.
- Integration with enterprise MLOps, GRC, SIEM, and DevSecOps ecosystems.
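As an illustration of the first two capabilities above, an asset inventory with risk classification might be modeled like the following sketch. The class names are hypothetical, and the risk tiers loosely mirror the EU AI Act's categories rather than any particular product's schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class RiskTier(Enum):
    # Tiers loosely modeled on the EU AI Act's risk categories.
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"

@dataclass
class AIAsset:
    """One entry in the enterprise AI inventory."""
    name: str
    kind: str          # "model", "dataset", "agent", "pipeline", ...
    owner: str
    risk: RiskTier
    registered_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

class Inventory:
    def __init__(self) -> None:
        self._assets: dict[str, AIAsset] = {}

    def register(self, asset: AIAsset) -> None:
        self._assets[asset.name] = asset

    def needs_review(self) -> list[AIAsset]:
        """High-risk and prohibited assets are escalated for human review."""
        return [a for a in self._assets.values()
                if a.risk in (RiskTier.HIGH, RiskTier.PROHIBITED)]
```

The remaining capabilities (policy enforcement, drift detection, audit logging) would hang off a registry like this, which is why asset discovery is typically the first feature governance platforms ship.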
The boundaries of AI Governance intersect with — but remain distinct from — other AI security domains. Runtime guardrail products focus on live filtering of prompts and outputs, agent security platforms control tool use and autonomy, and AI assurance products evaluate model quality and accuracy. AI Governance, in contrast, is the umbrella category that orchestrates policy, risk, compliance, lifecycle, and organizational accountability. It ensures AI systems behave safely not just in a moment of execution, but throughout their entire lifespan.
As enterprises enter a phase of widespread AI deployment, AI Governance has become a strategic imperative. It is the connective fabric between model builders, risk teams, compliance officers, and security operations. Vendors in this category — such as Aim Security, Holistic AI, WitnessAI, Dynamo AI, and Cranium — are shaping how organizations will embed trust, safety, and responsibility into every layer of their AI portfolios. AI Governance is no longer optional; it is the operational backbone of enterprise AI maturity.
Here are the 37 AI Governance vendors:
| Company | Country | Investment | Employees |
|---|---|---|---|
| Alice (was ActiveFence) | Israel | $100M | 381 |
| WitnessAI | USA | $27.5M | 89 |
| Holistic AI | USA | - | 79 |
| Cranium | USA | $46.47M | 57 |
| Singulr AI | USA | $10M | 48 |
| Swif.ai | USA | $3M | 42 |
| Portal26 | USA | $15M | 41 |
| Nudge Security | USA | $39M | 38 |
| Guardare | USA | $5.1M | 36 |
| Lumenova | USA | - | 31 |
| Monitaur | USA | $12.66M | 28 |
| SplxAI | USA | $9M | 24 |
| SuperAlign | USA | - | 23 |
| Saidot.ai | Finland | $1.96M | 22 |
| Darwin | USA | $20M | 21 |
| Maro | USA | $4.3M | 19 |
| Trustible | USA | $12.44M | 18 |
| Clearly AI | USA | - | 17 |
| EQTYLab | USA | - | 17 |
| Enzai | United Kingdom | $4.2M | 16 |
| Datatron | USA | $12.1M | 14 |
| Fraim (was Resourcely) | USA | $8.3M | 14 |
| Glasswing AI | USA | - | 13 |
| MagicMirror | USA | - | 13 |
| Guardrails AI | USA | $7.5M | 11 |
| Reken | USA | $10M | 11 |
| Konfer | USA | - | 9 |
| Asenion | Canada | $1.55M | 7 |
| UncovAI | France | - | 7 |
| Preamble | USA | $4M | 5 |
| Mithril Security | France | $2.76M | 4 |
| Suzan AI | France | - | 3 |
| Capsule Security | Israel | - | 2 |
| Slauth.io | Israel | $1.7M | 2 |
| Sovereign AISecurity Labs | USA | - | 1 |
| CompliantLLM | USA | - | 0 |
| Opsera | USA | - | 0 |
