H-REVN™
Verifiable state infrastructure
H-REVN helps agents and workflows avoid restarting blind when execution breaks. It signs verifiable state, structured evidence, and operational receipts so each execution can be reviewed and resumed with confidence.
SDK + API that signs agent workflow state so teams can resume without losing context or traceability.
It is built for teams that need usable traceability, verifiable receipts, and real workflow continuity: not only logs, but a dependable way to know what happened, what was left half-done, and where to continue.
H-REVN is in public alpha, but it already exposes technical artifacts that humans, crawlers, and agents can inspect.
5
public surfaces
Anthropic, Codex, Google / Genkit, OpenClaw, and Kiro.
4+
installable packages
PyPI and npm entry points for testing H-REVN from the developer stack.
1
live runtime
Managed API at api.hrevn.com for health checks and verified flows.
2
discovery routes
/openapi.json and /.well-known/ai-plugin.json.
Bundle structure, manifests, checksums, root hashes and auditable outputs already defined and publishable.
A public open verifier for inspecting, validating and summarizing Agent Evaluation Records.
A continuity layer for workflows where rebuilding context from scratch has technical, operational and financial cost.
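The public discovery routes and the managed API listed above can be probed directly. A minimal sketch using only the Python standard library; the base URL and the two paths come from this page, while the exact response shapes are not documented here:

```python
# Sketch: builds and fetches the discovery documents published by the runtime.
# The base URL and routes are the ones listed above; response contents may vary.
import json
import urllib.request

BASE = "https://api.hrevn.com"
DISCOVERY_ROUTES = ("/openapi.json", "/.well-known/ai-plugin.json")

def discovery_urls(base: str = BASE) -> list:
    """Full URLs of the published discovery documents."""
    return [base + route for route in DISCOVERY_ROUTES]

def fetch_discovery(url: str, timeout: float = 10.0) -> dict:
    """Fetch and parse one discovery document (requires network access)."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.loads(resp.read())
```

Crawlers and agents can start from `/.well-known/ai-plugin.json` and fall back to `/openapi.json` for the full operation list.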
| Need | No evidence layer | Manual audit | H-REVN |
|---|---|---|---|
| Reconstruct what happened | Scattered logs and incomplete context. | Slow case-by-case review. | Structured receipts, signed state, and exportable evidence. |
| Resume a workflow | Restart from scratch or rebuild manually. | Depends on human judgment and prior documentation. | A verifiable point to resume from without blind reconstruction. |
| Prepare external review | Evidence is assembled late and with friction. | Useful, but costly and hard to scale. | An audit-ready package produced from the operational flow. |
Teams that need to preserve state, decisions, approvals and receipts across long-running or sensitive executions.
Operations that need to reconstruct what happened, what was approved and what evidence remains available for human review.
Builders who want verifiable traceability without replacing their main framework or committing to a single model provider.
The website already explains the concept, but this is the minimum flow for readers who want the technical shape before opening a repo.
Before a consequential step, H-REVN runs a diagnosis and returns structured output with the detected profile, readiness, risk flags, and the recommended next step.
If required blocks are missing, the runtime does not stop at a verdict: it also returns a remedy_payload describing what still needs to be collected.
The result leaves a check_id, structured evidence, and a verifiable point from which the workflow can resume without blind reconstruction.
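The diagnosis flow above can be sketched in code. The class and function names below are illustrative stand-ins, not the real H-REVN SDK API, and the required blocks are assumed for the example:

```python
# Hypothetical sketch of the pre-step diagnosis; names are illustrative,
# not the real H-REVN SDK API.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DiagnosisResult:
    check_id: str              # verifiable reference the workflow can resume from
    profile: str               # detected profile
    ready: bool                # readiness verdict
    risk_flags: List[str]      # detected risk flags
    next_step: str             # recommended next step
    remedy_payload: Dict = field(default_factory=dict)  # what still needs collecting

REQUIRED_BLOCKS = ("approval", "evidence")  # assumed required blocks

def run_diagnosis(state: Dict) -> DiagnosisResult:
    """Toy stand-in: checks required blocks and emits a structured verdict."""
    missing = [k for k in REQUIRED_BLOCKS if k not in state]
    return DiagnosisResult(
        check_id="chk_0001",
        profile="default",
        ready=not missing,
        risk_flags=[f"missing:{k}" for k in missing],
        next_step="proceed" if not missing else "collect_missing_blocks",
        remedy_payload={"collect": missing} if missing else {},
    )
```

The point of the shape, rather than the names: a failed check carries its own remedy, so the caller knows what to collect instead of receiving a bare rejection.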
Inspections, technical dossiers, assets and interventions documented through structured, verifiable evidence.
AER outputs for recording proposal, human approval, execution and downstream verification of sensitive actions.
Verifiable restart points that reduce blind context reconstruction after failures, pauses or handoffs.
H-REVN does not require a single stack. It attaches to different agent environments as long as the verifiable outcome is preserved.
Recommended starting point
Skills, tool use and control layers where verifiable state matters more than model vendor choice.
pip install hrevn-anthropic-runner
Execution and review environments where verifiable records of proposals and outputs may become shared infrastructure.
pip install hrevn-codex-cli
Middleware and orchestration layers where H-REVN can operate as a truth layer, not just a secondary log.
Agent-first, local-first environments where H-REVN is now installable as a public CLI against the live runtime.
An experimental surface built around powers, steering and MCP, published as an honest alpha for spec-driven exploration.
If you prefer to enter through the concrete problem rather than the full architecture, these three pages frame H-REVN through continuity, evidence and verifiable state.
How to avoid blind restarts when an AI workflow is interrupted.
What it means to leave structured evidence that is usable beyond scattered logs.
H-REVN's native category: verifiable, resumable state that systems and people can read.
The runtime lives behind one common backend. The public surfaces are distinct entry points over the same operational truth.
Public MCP server for tool discovery and structured access to the H-REVN runtime.
Live common backend for surfaces, baseline checks, verification and bundle generation.
Surface repos, MCP server and the current public technical entry points.
Public installable package for the Google / Genkit surface as a developer alpha.
Public installable CLI for the OpenClaw surface as a local-first alpha.
Public installable runner for the Anthropic / Claude Code surface.
Public installable CLI for the Codex surface.
Public Kiro power alpha with MCP, steering and example hooks, without claiming a finished native integration yet.
Model A
H-REVN lives inside the operational flow. It captures state, generates verifiable artifacts and leaves a continuity point that the live system can use immediately.
Model B
The primary workflow can run on any stack. H-REVN receives, structures, verifies and issues the auditable output without forcing a single AI provider or middleware choice.
H-REVN is not only about avoiding restarts. It helps teams prepare AI workflows for later review, risk management and operational traceability, especially as organizations prepare for regulatory frameworks such as the EU AI Act, as well as internal controls and third-party reviews.
H-REVN does not replace a compliance policy. It turns actions, state and evidence into structured records that humans can review with less manual reconstruction.
Base example
Illustrative reference using ChatGPT Pro at $200 per month, 8 steps per workflow, interruption at 50%, 5 interruptions per week, 1,500 input + 500 output tokens per step, and 15% resume overhead with H-REVN.
Potential weekly savings
$342.00
28,500 tokens/week avoided
Realistic monthly savings
$1,470.60
Exceeds the monthly plan's reference budget while still pricing the excess usage
Theoretical monthly savings
$1,470.60
Unconstrained estimate based on token counts and marginal prices
Theoretical annual savings
$17,784.00
Indicative comparison for workflow continuity
This is an indicative reading, not a commercial promise. The full calculator lets you move every parameter and see how the outcome changes.
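The arithmetic behind a calculator like this can be sketched simply. The formula below is an assumption for illustration, not the hosted calculator's published formula, so it will not reproduce the figures above exactly; the per-token price is likewise an assumed marginal rate:

```python
# Illustrative only: the public calculator may weight input and output
# tokens differently and apply the resume overhead elsewhere.
def weekly_tokens_avoided(steps_redone, tokens_per_step,
                          interruptions_per_week, resume_overhead):
    """Tokens no longer re-spent rebuilding context, net of resume overhead."""
    rebuilt = steps_redone * tokens_per_step * interruptions_per_week
    return rebuilt * (1 - resume_overhead)

def weekly_savings_usd(tokens_avoided, usd_per_token):
    """Dollar value of the avoided tokens at an assumed marginal rate."""
    return round(tokens_avoided * usd_per_token, 2)

# Base-example parameters: 8-step workflow interrupted at 50% (4 steps redone),
# 1,500 input + 500 output = 2,000 tokens per step, 5 interruptions per week,
# 15% resume overhead with H-REVN.
tokens = weekly_tokens_avoided(steps_redone=4, tokens_per_step=2_000,
                               interruptions_per_week=5, resume_overhead=0.15)
```

Moving any parameter, as the full calculator allows, changes the outcome linearly except for the plan-budget cap applied in the realistic reading.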
We have published a dedicated landing page to visualize the aggregate cost of rebuilding context when an agentic workflow is interrupted without a verifiable state to resume from.
A second landing page provides a more realistic calculator to compare theoretical token savings against realistic savings under a monthly plan or reference budget.
For technical, institutional or collaboration inquiries, write to contact@hrevn.com.