Free · Open source

Trust for AI agents that anyone can verify

AI agents that book flights, manage money, and make decisions need to prove they do what they promise. Kova makes that possible — for developers, enterprises, and regulators.

Community
Downloads
MIT · Free to use
EU · Compliance ready


AI agents that no one can verify

AI agents will manage money, make decisions, and sign contracts — but today there's no way to prove they do what they promise. No accountability. No trust. Every day without verifiable covenants is a day of unaccountable risk. Kova fixes that.

Aug 2026 · EU AI Act compliance deadline
Read the manifesto →

Declare · Monitor · Prove

01

Declare

Agents state what they will and won't do. These rules are locked in and can only be tightened, never loosened.

02

Monitor

Kova watches what agents do. If they break the rules, it stops them or records the violation.

03

Prove

Anyone can verify an agent followed the rules — without seeing how it works. Privacy preserved.
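The Declare → Monitor → Prove loop can be sketched in a few lines. This is an illustrative model only, not the Kova API: `CovenantSet`, `allows`, and `tighten` are hypothetical names showing how declared rules stay frozen and can only get stricter.

```typescript
// Hypothetical sketch of Declare → Monitor → Prove.
// None of these names are the real Kova API.

type CovenantRule = { rule: string; limit?: number };

class CovenantSet {
  private covenants: readonly CovenantRule[];

  constructor(covenants: CovenantRule[]) {
    // Declare: rules are copied and frozen at bind time.
    this.covenants = Object.freeze(covenants.map((c) => ({ ...c })));
  }

  // Monitor: every action is checked against the declared rules.
  allows(action: { rule: string; cost?: number }): boolean {
    const match = this.covenants.find((c) => c.rule === action.rule);
    if (!match) return true; // no covenant constrains this action
    return match.limit === undefined || (action.cost ?? 0) <= match.limit;
  }

  // Rules can only get stricter: a new limit applies only if it is lower.
  tighten(rule: string, newLimit: number): CovenantSet {
    return new CovenantSet(
      this.covenants.map((c) =>
        c.rule === rule && (c.limit === undefined || newLimit < c.limit)
          ? { ...c, limit: newLimit }
          : { ...c }
      )
    );
  }
}

const set = new CovenantSet([{ rule: "spend", limit: 1000 }]);
console.log(set.allows({ rule: "spend", cost: 500 }));  // true
console.log(set.allows({ rule: "spend", cost: 2000 })); // false
```

Note that `tighten` returns a new set rather than mutating the old one, mirroring the invariant above: a declared covenant is never edited in place.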

Read the full spec →

Add accountability in two lines

If you build AI agents, add Kova to make them verifiable. Everyone else: Kova lets you verify agents do what they promise — no coding required. See the Compliance section or EU AI Act guide.

TYPESCRIPT
import { Kova } from '@kova/core';

// Bind covenants to an existing agent. Rules are locked in at bind time.
const agent = await Kova.bind(myAgent, {
  covenants: ['no-data-exfiltration', 'budget-cap:1000'],
  enforcement: 'hard', // block violations rather than just record them
  proof: 'zk'          // zero-knowledge proofs: verifiable without exposing internals
});

// Produce a proof any third party can check.
const proof = await agent.verify();

Who owns this agent?

Every agent is linked to its owner. You can trace who's responsible.

What will it do?

Agents declare their rules upfront. They can only get stricter, never looser.

Can we verify it?

Yes. Third parties can verify compliance without seeing how the agent works.
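To make "verify without seeing how it works" concrete, here is a toy sketch using a hash commitment as a stand-in for a real zero-knowledge proof. It is illustrative only: `prove` and `audit` are hypothetical names, and a commitment scheme is far weaker than actual ZK, but it shows the shape of the protocol — the public sees only the declared covenants and a commitment, never the execution trace itself.

```typescript
import { createHash } from "node:crypto";

// A hash commitment stands in for a real zero-knowledge proof.
function commit(trace: string, salt: string): string {
  return createHash("sha256").update(salt + trace).digest("hex");
}

type PublicProof = {
  covenants: string[];      // public: the declared rules
  traceCommitment: string;  // public: binds the hidden execution trace
};

// Agent side: commit to the private trace without revealing it.
function prove(privateTrace: string, salt: string, covenants: string[]): PublicProof {
  return { covenants, traceCommitment: commit(privateTrace, salt) };
}

// Auditor side: confirm a later-revealed trace matches the commitment.
// (Real ZK would avoid even this selective reveal.)
function audit(proof: PublicProof, revealedTrace: string, salt: string): boolean {
  return proof.traceCommitment === commit(revealedTrace, salt);
}
```

The design point: verification depends only on public artifacts, so any third party can run `audit` without access to the agent's model, prompts, or code.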

Trust is mathematics, not reputation

Kova doesn't guess — it calculates. Trust is bounded by collateralization: an agent's trust can never exceed its staked value. Formal trust algebra enables composition, intersection, negation, and tensor operations across trust profiles. Five-dimensional scoring replaces crude single-number ratings. Adversarial trust equilibrium ensures the system converges on evolutionarily stable strategies — not just Nash equilibria that collapse under pressure. Trust entanglement enables sublinear network verification: verify one node, constrain thousands.

T ≤ S · Trust ≤ Staked Value
5D · Multidimensional Profile
ESS · Adversarial Equilibrium
O(√n) · Sublinear Verification
Trust Tensor Operations
∩ ∪ ¬ · Composition Algebra
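The trust bound and composition algebra above can be sketched as follows. This is an illustrative model under assumed semantics, not the Kova implementation: the dimension weighting, the min-based intersection, and the names `effectiveTrust`, `intersect`, and `negate` are all assumptions.

```typescript
// Hypothetical model of bounded, composable trust. Not the Kova API.

type TrustProfile = {
  staked: number; // collateral backing the agent
  dims: [number, number, number, number, number]; // 5-dim scores in [0, 1]
};

// Invariant T ≤ S: effective trust can never exceed staked value.
function effectiveTrust(p: TrustProfile): number {
  const raw = p.staked * Math.min(...p.dims); // weakest dimension dominates
  return Math.min(raw, p.staked);
}

// Intersection (∩): per-dimension minimum, backed by the lesser stake.
function intersect(a: TrustProfile, b: TrustProfile): TrustProfile {
  return {
    staked: Math.min(a.staked, b.staked),
    dims: a.dims.map((v, i) => Math.min(v, b.dims[i])) as TrustProfile["dims"],
  };
}

// Negation (¬): invert each dimension; the stake is unchanged.
function negate(a: TrustProfile): TrustProfile {
  return {
    staked: a.staked,
    dims: a.dims.map((v) => 1 - v) as TrustProfile["dims"],
  };
}
```

Because every operation preserves the `staked` bound, composed profiles inherit the T ≤ S invariant automatically.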

The accountability kernel

Four components. One invariant. The kernel is the minimal verifiable core: identity, covenant, proof, and accounting. If the kernel is correct, the entire system maintains its guarantees — regardless of what's built on top. Everything else is optional. The kernel is not.
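The four components and their single invariant could look like this in interface form. The shapes below are assumptions for illustration, not the published spec: only the four component names — identity, covenant, proof, accounting — come from the text.

```typescript
// Minimal sketch of the accountability kernel. Field shapes are assumed.

interface Identity { owner: string; agentId: string }   // who is responsible
interface CovenantDecl { rule: string; declaredAt: number } // what was promised
interface ProofRecord { commitment: string; verified: boolean } // evidence it held
interface Accounting { staked: number; spent: number }  // collateral backing it

interface AccountabilityKernel {
  identity: Identity;
  covenants: CovenantDecl[];
  proof: ProofRecord;
  accounting: Accounting;
}

// The one invariant: a kernel is sound only if every component checks out.
function kernelInvariant(k: AccountabilityKernel): boolean {
  return (
    k.identity.owner.length > 0 &&
    k.covenants.length > 0 &&
    k.proof.verified &&
    k.accounting.spent <= k.accounting.staked
  );
}
```

Anything built on top can only consume these four records; if `kernelInvariant` holds, the layers above inherit the guarantee.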

Trust and verification built in

Kova is free and open. Value flows with the economy it secures.

Trust Futures · Financial instruments on trust trajectories (CME Group model) · Derivatives market
Compliance Autopilot · 0.5–1% of operational budget, continuous monitoring · Per-agent recurring
Certification Authority · $10K–$100K/year per agent class · Enterprise licensing
Trust Data Monopoly · Bloomberg model for behavioral insights · Data-as-a-service
Stake Pool Yield · Collateral interest at billion-agent scale · $7.5B/year
Sovereign Licensing · National-level deployment licensing · $10–100M per country
Private · Verification
MIT · Free to use

EU AI Act · Aug 2026

Aug 2026 · Full enforcement

By August 2026, Europe requires AI systems to be transparent and accountable. Kova maps to identity, rules, audit trails, and third-party verification. See how Kova meets EU AI Act requirements →

Three wedges that work without network effects

Regulatory Wedge

EU AI Act compliance. August 2026 deadline creates forced adoption. Enterprises need behavioral transparency and audit trails for autonomous agents. Kova is the compliance layer.

Internal Governance Wedge

Enterprise fleet management. Companies don't need a network to use Kova internally. Deploy covenants across your own agent infrastructure. Value from day one, zero external dependencies.

MCP Certification Wedge

Model Context Protocol trust layer. As MCP becomes the standard for agent-to-tool communication, Kova becomes the certification layer that verifies agent identity and behavioral compliance within the ecosystem.

Neutral. Cross-platform. Open.

Other solutions are tied to one company, one blockchain, or one use case. Kova works everywhere. Free to use. Anyone can verify.

Neutral · No vendor lock-in. No single chain or platform.
Cross-platform · Works with any agent, any protocol, any stack.
Open · MIT licensed. Anyone can verify. Anyone can build.
"If we can't hold AI agents accountable, we can't trust them. If we can't trust them, they can't participate in the economy. The question is whether we build accountability before or after the first catastrophic failure."
— THE KOVA MANIFESTO

Read the manifesto →

I'm new to Kova

Learn first

Read the manifesto or see how it works. No coding required.

Read the manifesto →

I'm a compliance officer

EU AI Act

See how Kova maps to regulatory requirements. Plain language.

Compliance guide →

I'm a developer

Build with Kova

Install the SDK. Add accountability in minutes. MIT licensed.

Quickstart →

Enterprise or compliance support

Get in touch

Integration help, EU AI Act mapping, or custom deployment. We'll respond within 48 hours.

GitHub Discussions →

Free. Open source. Anyone can verify.