HREVN
Live runtime, public surfaces, and verifiable continuity for AI workflows
HREVN is no longer only a protocol hypothesis. It now operates as a live managed runtime with public surfaces for Anthropic, Codex, Google / Genkit, and OpenClaw, plus a public MCP server and a public npm package for the Google surface.
HREVN is verifiable workflow infrastructure for AI agents. Its role is not to replace the model layer or the orchestration layer, but to add a durable continuity layer around consequential execution: baseline diagnostics before important steps, verifiable evidence after execution, and structured outputs that make interruption and resumption more reliable.
In practical terms, HREVN helps answer three questions that ordinary logs and chat history answer poorly: what happened, what remains missing, and from which trusted point a workflow can safely continue.
A live runtime behind https://api.hrevn.com for baseline checks, bundle generation, verification and bundle download.
A public MCP server published through the MCP Registry and PyPI for structured tool access to the same runtime.
Thin public surfaces for Anthropic, Codex, Google / Genkit and OpenClaw, all pointing back to the same operational truth.
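As a concrete illustration of what calling the managed runtime could look like, the sketch below builds a baseline-check request against https://api.hrevn.com. The endpoint path, payload shape, and route version are assumptions for illustration only, not HREVN's documented API.

```python
import json
import urllib.request

API_BASE = "https://api.hrevn.com"

def build_baseline_request(workflow_id: str, payload: dict) -> urllib.request.Request:
    """Build (but do not send) a POST request for a baseline check.

    The "/v1/baseline/<id>" route is hypothetical; consult the HREVN
    documentation for the real endpoint names.
    """
    url = f"{API_BASE}/v1/baseline/{workflow_id}"
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Sending the request (for example with `urllib.request.urlopen`) would then return the structured baseline output described below; the same runtime backs the MCP server and the thin per-vendor surfaces.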
Agent workflows often fail in ambiguous ways. A long execution can stop after partial work, but the surrounding system may still struggle to tell what completed, what failed, or which checks were never reached. Without a verifiable continuity layer, teams rebuild context from memory, logs, or chat transcripts, repeating work and losing confidence.
HREVN addresses this by returning structured outputs such as profile detection, readiness level, missing required blocks, risk flags, recommended next step, and a remedy payload. That remedy path is important: the runtime does not only issue a verdict; it also explains what evidence is still missing.
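A minimal sketch of how an agent could act on that structured output is shown below. The field names mirror the outputs listed above but are assumptions about the schema, not HREVN's actual response format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BaselineResult:
    # Illustrative fields mirroring the prose above; the real HREVN
    # schema may use different keys and value types.
    profile: str
    readiness_level: str
    missing_required_blocks: List[str] = field(default_factory=list)
    risk_flags: List[str] = field(default_factory=list)
    recommended_next_step: str = ""

def can_proceed(result: BaselineResult) -> bool:
    """A consequential step is safe to run only when no required
    evidence blocks are missing and no risk flags were raised."""
    return not result.missing_required_blocks and not result.risk_flags
```

The point of the gate is that a blocked result still carries its own remedy: `missing_required_blocks` and `recommended_next_step` tell the agent exactly which evidence to produce before retrying, instead of leaving it to reconstruct state from logs.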
HREVN does not make a system legally compliant on its own. What it does provide is a structured evidence layer that governance, audit and compliance processes can use: human oversight references, risk documentation, evidence lifecycle hooks, technical documentation references, receipts and bundle verification.
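To make the receipt-and-verification idea concrete, here is a generic integrity check of the kind a bundle verification step could perform: recompute a digest over the downloaded bundle and compare it with the value recorded in a receipt. This is a standard SHA-256 comparison, not HREVN's actual receipt format.

```python
import hashlib

def verify_bundle(bundle_bytes: bytes, expected_sha256: str) -> bool:
    """Recompute the bundle digest and compare it to the receipt value.

    A mismatch means the evidence bundle was altered or corrupted
    after the receipt was issued.
    """
    return hashlib.sha256(bundle_bytes).hexdigest() == expected_sha256
```

An audit process can run this check long after execution, which is what makes the evidence layer useful to governance: the verdict does not depend on trusting whoever stored the bundle in the meantime.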
This makes HREVN particularly relevant in AI governance discussions where the challenge is not only model quality, but also whether a system can later prove what it did, under which authority, and with what missing controls still visible.
The technical overview explains the full system. These three pages frame the same product through continuity, evidence and verifiable state.
The operational angle: how to continue from the last trusted point.
The evidence angle: which outputs help review, audit and governance.
The category angle: why HREVN speaks about verifiable state rather than only logs.
Anthropic: Claude Code-facing surface using skills, a local runner, and an MCP path.
Codex: Helper-first technical alpha for verifiable workflow state in coding-oriented agent flows.
Google / Genkit: Thin wrapper plus public npm package for the live HREVN runtime.
OpenClaw: Agent-first, local-first surface with a public PyPI CLI and real end-to-end baseline validation.
HREVN is currently in technical alpha across its public surfaces. That means the runtime is live, the first test paths are real, and a technical user can already discover, install and run the public artifacts. It does not yet mean fully productized onboarding, marketplace-grade packaging across every surface, or final enterprise distribution.