Every AI agent asserts. Every feature makes claims. Every product promises it works as described. What's missing is the chain from assertion to evidence, and a way to see where that chain is broken, stale, or never existed.
"Evidence-backed confidence. Not certainty theater."
BESPOKE LINT FOR AI AGENTS
Repo-local governance that tells AI agents exactly which changes violated your rules, before they finish work. Works with Claude Code, Cursor, Codex, and Copilot.
THE TRUST MAP FOR AI-ERA PRODUCTS
Maps what your product claims, what evidence supports it, where trust is missing or stale, and what humans or AI agents need to verify next.
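As a sketch of the idea, a trust-map entry pairs a claim with the evidence behind it and a freshness status. The field names below are illustrative only, not the tool's actual schema:

```json
{
  "claim": "API input is validated before persistence",
  "evidence": [
    {
      "type": "test",
      "path": "tests/api/test_validation.py",
      "lastVerified": "2025-01-12"
    }
  ],
  "status": "stale"
}
```

A `stale` status here would signal that the evidence predates the most recent change to the code the claim covers, so a human or agent should re-verify it.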
Bootstrap your adapter, define proof requirements, and get lint-style feedback straight into your AI agent's context.
$ npm install -g @kontourai/veritas
added 1 package in 0.8s
$ veritas init
✓ detected repo shape
✓ wrote .veritas/repo.adapter.json
✓ wrote .veritas/policy-packs/default.policy-pack.json
✓ governance block added to CLAUDE.md, AGENTS.md
$ veritas shadow run
[feedback] 2 rules checked, 1 pass, 1 warn
WARN if-changed: src/api/ changed — tests/api/ must also appear
evidence written → .veritas/evidence/
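A policy-pack rule like the `if-changed` check above could plausibly be expressed as follows. The transcript does not show the real schema, so every field name here is an assumption for illustration:

```json
{
  "rules": [
    {
      "id": "if-changed",
      "description": "API changes must ship with API tests",
      "when": { "pathsChanged": ["src/api/**"] },
      "require": { "pathsAlsoChanged": ["tests/api/**"] },
      "severity": "warn"
    }
  ]
}
```

A rule shaped like this would produce exactly the WARN line in the transcript: `src/api/` changed without a matching change under `tests/api/`.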