CHAPTER 5

Guardrails and Governance

Guardrails

Preventing the abuse of AI by employees, and far too often by students at universities (how could institutions of learning restrict access to learning tools that are going to change everything for academia?), accounts for 7% of AI Security vendors: 27 of the 378 tracked in the IT-Harvest Dashboard.

Approaches to applying guardrails vary, from proxies that intercept and pre-process prompts coming from users, to browser extensions, to agents that monitor user activity. The customer may opt to block a prompt outright or to modify it, for example by removing PII or other confidential information.

Another approach (used by Knostic) is to examine the response coming back from the LLM and filter or expunge data that the organization deems inappropriate. In that case, a user can ask, “What are the salaries of my cubicle mates?” but the response would be intercepted and replaced with, “I am sorry, it is against company policy to reveal salary data.”
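The response-filtering pattern can be sketched in a few lines. This is a minimal illustration, not Knostic's or any vendor's actual implementation; the blocked patterns and the replacement message are assumptions chosen to match the salary example above.

```python
import re

# Hypothetical policy: patterns the organization deems confidential.
# Real products use far richer classifiers; these regexes are
# illustrative assumptions only.
BLOCKED_RESPONSE_PATTERNS = [
    re.compile(r"\bsalar(?:y|ies)\b", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-shaped strings
]

POLICY_MESSAGE = "I am sorry, it is against company policy to reveal that information."

def filter_response(llm_response: str) -> str:
    """Intercept an LLM response; replace it entirely if it violates policy."""
    for pattern in BLOCKED_RESPONSE_PATTERNS:
        if pattern.search(llm_response):
            return POLICY_MESSAGE
    return llm_response
```

A production system would sit between the application and the model API, applying checks like this to every response before it reaches the user.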

Some of the key capabilities that identify a vendor as providing guardrails include:

  • Input filtering (prompt injection, malicious intent).
  • Output filtering (PII, policy violations, harmful content).
  • Token-level or semantic anomaly detection. This operates at the raw text token level — the actual units an LLM consumes and produces. Think of it as “packet inspection for prompt streams.” It looks for weird token patterns that often signal prompt injection, such as repeating override phrases (“ignore previous instructions”), extremely long prompts, null-terminated strings, encoded payloads (base64, hex, URL-encoding, leetspeak), rapid context shifts (prompt suddenly changes intent), indicators of obfuscation, adversarial tokens, whitespace manipulation, and strange punctuation frequency.
  • Data loss prevention for AI usage.
  • Shadow AI discovery.
  • User/app/agent policy enforcement.
  • Governance + audit trails tied to prompts/responses.
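A few of the token-level heuristics described above (override phrases, excessive length, null bytes, encoded payloads) can be sketched as simple checks. The phrase list and thresholds below are assumptions for illustration; commercial products use statistical and semantic models well beyond these string-level rules.

```python
import base64
import re

# Illustrative assumptions: a real detector would use a much larger,
# continuously updated phrase list and tuned thresholds.
OVERRIDE_PHRASES = ["ignore previous instructions", "disregard all prior", "you are now"]
MAX_PROMPT_CHARS = 4000

def looks_like_base64_payload(text: str, min_len: int = 40) -> bool:
    """Flag long runs of base64-alphabet characters that decode cleanly."""
    for chunk in re.findall(r"[A-Za-z0-9+/=]{%d,}" % min_len, text):
        try:
            base64.b64decode(chunk, validate=True)
            return True
        except Exception:
            continue
    return False

def prompt_anomaly_flags(prompt: str) -> list[str]:
    """Return the names of the heuristics a prompt trips."""
    flags = []
    lowered = prompt.lower()
    if any(p in lowered for p in OVERRIDE_PHRASES):
        flags.append("override_phrase")
    if len(prompt) > MAX_PROMPT_CHARS:
        flags.append("excessive_length")
    if "\x00" in prompt:
        flags.append("null_byte")
    if looks_like_base64_payload(prompt):
        flags.append("encoded_payload")
    return flags
```

A proxy-style guardrail would run checks like these before forwarding the prompt, then block, rewrite, or log based on which flags fire.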

Here are the 27 companies that are working on Guardrails for AI Security, sorted by size as measured by number of employees shown on LinkedIn:

Company               Country          Investment   Employees
Nexos                 Lithuania        $41.52M      108
Cloudsine             Singapore        -            51
Prompt Security       USA              $23M         50
Knostic               Israel           $14.3M       48
Arthur                USA              $60.3M       41
Orion Security        Israel           $6M          36
Swift Security        USA              -            36
AIceberg              USA              $10M         35
AllTrue.ai            Canada           -            33
Trustwise             USA              $4.55M       30
Calypso AI            USA              $43.2M       29
Lumia                 USA              $18M         26
Acuvity               USA              $9M          24
Liminal Security      Israel           $16.98M      23
NeuralTrust           Spain            $2.79M       21
Tumeryk               USA              -            21
Kipling Secure        USA              -            17
PromptArmor           USA              $3.2M        16
SurePath AI           USA              $5.2M        16
Credal                USA              $10.3M       15
Ovalix                Israel           -            15
Qualifire             Israel           $1.6M        14
Pangea                USA              $51M         12
Weagle                Italy            -            10
Alert AI              USA              -            7
HiveTrace             Russia           -            3
Confident Security    USA              $4.2M        2

Governance

AI Governance has emerged as one of the most critical new disciplines in enterprise technology, driven by the rapid adoption of machine learning, large language models (LLMs), and autonomous AI systems. As organizations scale AI across business functions, the need for oversight, accountability, and risk management becomes foundational — not only for safety and compliance, but for trust, operational resilience, and business continuity. The AI Governance category encompasses the platforms, processes, and technologies that ensure AI systems are developed, deployed, monitored, and maintained in a secure, ethical, and compliant manner.

At its core, AI Governance provides a centralized control plane for the entire AI lifecycle. Governance platforms inventory every AI asset across the enterprise — models, datasets, agents, pipelines, endpoints, and third-party tools — while continuously assessing the risks each one presents. This is no easy task: they must unify policies across business, technical, legal, and regulatory domains; enforce those policies through automated or human-in-the-loop workflows; and surface issues such as bias, drift, data leakage, hallucination, and misuse before they escalate into real incidents. Rather than focusing solely on runtime protection, AI Governance extends upstream and downstream: from dataset provenance, training transparency, and model evaluation, to operational monitoring, access control, audit logging, and retirement.

The emergence of global regulatory frameworks — including the EU AI Act, NIST AI Risk Management Framework, ISO/IEC 42001, and sector-specific rules in finance, healthcare, and critical infrastructure — has accelerated the demand for governance. Enterprises must now demonstrate how AI systems were built, what data they rely on, how decisions are made, and how risks are mitigated. Governance platforms provide the templates, workflows, and automated evidence necessary to meet these regulatory expectations at scale.

An AI Governance product typically includes a combination of the following capabilities:

  • AI asset inventory and discovery across cloud, SaaS, and internal systems.
  • Risk classification and impact assessment, aligned to recognized frameworks.
  • Policy management and automated enforcement for responsible AI use.
  • Dataset governance and lineage tracking, ensuring provenance and compliance.
  • Model lifecycle governance, including approvals, monitoring, and version control.
  • Continuous detection of drift, bias, and performance degradation.
  • Regulatory and audit preparedness, with documentation and transparency tooling.
  • Identity, access, and permission management for AI systems.
  • Traceability and immutable audit logs for every model action or change.
  • Red teaming, vulnerability scanning, and adversarial testing.
  • Vendor and third-party AI risk scoring.
  • Integration with enterprise MLOps, GRC, SIEM, and DevSecOps ecosystems.
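Two of the capabilities above, AI asset inventory with risk classification and traceable audit logging, can be sketched minimally as follows. The class names, fields, and risk tiers are illustrative assumptions (the tiers loosely echo EU AI Act-style classifications), not any framework's official taxonomy.

```python
import json
import time
from dataclasses import dataclass

@dataclass
class AIAsset:
    name: str
    kind: str        # e.g. "model", "dataset", "agent", "pipeline"
    owner: str
    risk_tier: str   # e.g. "minimal", "limited", "high" (assumed tiers)

class GovernanceRegistry:
    """Toy inventory: registers AI assets and keeps an append-only audit log."""

    def __init__(self):
        self.assets: dict[str, AIAsset] = {}
        self.audit_log: list[str] = []  # append-only record of every change

    def register(self, asset: AIAsset) -> None:
        self.assets[asset.name] = asset
        self.audit_log.append(json.dumps({
            "ts": time.time(),
            "action": "register",
            "asset": asset.name,
            "risk_tier": asset.risk_tier,
        }))

    def high_risk_assets(self) -> list[str]:
        """List assets requiring heightened oversight and documentation."""
        return [a.name for a in self.assets.values() if a.risk_tier == "high"]
```

A real platform adds discovery (finding assets you did not register), policy enforcement hooks, and tamper-evident log storage; the point here is only the shape of the inventory-plus-audit-trail pattern.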

The boundaries of AI Governance intersect with — but remain distinct from — other AI security domains. Runtime guardrail products focus on live filtering of prompts and outputs, agent security platforms control tool use and autonomy, and AI assurance products evaluate model quality and accuracy. AI Governance, in contrast, is the umbrella category that orchestrates policy, risk, compliance, lifecycle, and organizational accountability. It ensures AI systems behave safely not just in a moment of execution, but throughout their entire lifespan.

As enterprises enter a phase of widespread AI deployment, AI Governance has become a strategic imperative. It is the connective fabric between model builders, risk teams, compliance officers, and security operations. Vendors in this category — such as Aim Security, Holistic AI, WitnessAI, Dynamo AI, and Cranium — are shaping how organizations will embed trust, safety, and responsibility into every layer of their AI portfolios. AI Governance is no longer optional; it is the operational backbone of enterprise AI maturity.

Here are the 37 AI Governance vendors:

Company                       Country          Investment   Employees
Alice (was ActiveFence)       Israel           $100M        381
WitnessAI                     USA              $27.5M       89
Holistic AI                   USA              -            79
Cranium                       USA              $46.47M      57
Singulr AI                    USA              $10M         48
Swif.ai                       USA              $3M          42
Portal26                      USA              $15M         41
Nudge Security                USA              $39M         38
Guardare                      USA              $5.1M        36
Lumenova                      USA              -            31
Monitaur                      USA              $12.66M      28
SplxAI                        USA              $9M          24
SuperAlign                    USA              -            23
Saidot.ai                     Finland          $1.96M       22
Darwin                        USA              $20M         21
Maro                          USA              $4.3M        19
Trustible                     USA              $12.44M      18
Clearly AI                    USA              -            17
EQTY Lab                      USA              -            17
Enzai                         United Kingdom   $4.2M        16
Datatron                      USA              $12.1M       14
Fraim (was Resourcely)        USA              $8.3M        14
Glasswing AI                  USA              -            13
MagicMirror                   USA              -            13
Guardrails AI                 USA              $7.5M        11
Reken                         USA              $10M         11
Konfer                        USA              -            9
Asenion                       Canada           $1.55M       7
UncovAI                       France           -            7
Preamble                      USA              $4M          5
Mithril Security              France           $2.76M       4
Suzan AI                      France           -            3
Capsule Security              Israel           -            2
Slauth.io                     Israel           $1.7M        2
Sovereign AI Security Labs    USA              -            1
CompliantLLM                  USA              -            0
Opsera                        USA              -            0