HREVN

Technical Overview

Live runtime, public surfaces, and verifiable continuity for AI workflows

HREVN is no longer only a protocol hypothesis. It now operates as a live managed runtime with public surfaces for Anthropic, Codex, Google / Genkit, and OpenClaw, plus a public MCP server and a public npm package for the Google surface.


What HREVN is

HREVN is verifiable workflow infrastructure for AI agents. Its role is not to replace the model layer or the orchestration layer, but to add a durable continuity layer around consequential execution: baseline diagnostics before important steps, verifiable evidence after execution, and structured outputs that make interruption and resumption more reliable.

In practical terms, HREVN helps answer three questions that ordinary logs and chat history answer poorly: what happened, what remains missing, and from which trusted point a workflow can safely continue.

Current runtime shape

Managed API

A live runtime behind https://api.hrevn.com for baseline checks, bundle generation, verification, and bundle download.
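
As a rough sketch of how a client might call this runtime: only the base URL https://api.hrevn.com appears above, so the endpoint path, auth header, and payload fields below are all assumptions, not documented API surface.

```python
# Hypothetical sketch of preparing a request to the HREVN managed API.
# The /v1/baseline path, bearer-token auth, and payload shape are
# illustrative assumptions; only the base URL comes from the text.
import json
import urllib.request

BASE_URL = "https://api.hrevn.com"

def baseline_check_request(payload: dict, api_key: str) -> urllib.request.Request:
    """Build a POST request for a pre-execution baseline check."""
    return urllib.request.Request(
        url=f"{BASE_URL}/v1/baseline",  # hypothetical endpoint path
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
            "Content-Type": "application/json",
        },
        method="POST",
    )

# A real workflow would pass the request to urllib.request.urlopen()
# and inspect the structured result before the consequential step runs.
req = baseline_check_request({"workflow": "deploy"}, "example-key")
```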

MCP Server

A public MCP server published through the MCP Registry and PyPI for structured tool access to the same runtime.

Public Surfaces

Thin public surfaces for Anthropic, Codex, Google / Genkit, and OpenClaw, all pointing back to the same operational truth.

Why continuity matters

Agent workflows often fail in ambiguous ways. A long execution can stop after partial work, but the surrounding system may still struggle to tell what completed, what failed, or which checks were never reached. Without a verifiable continuity layer, teams rebuild context from memory, logs, or chat transcripts, repeating work and losing confidence.

HREVN addresses this by returning structured outputs such as profile detection, readiness level, missing required blocks, risk flags, recommended next step, and a remedy payload. That remedy path is important: the runtime does not merely issue a verdict; it also explains what evidence is still missing.
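
To make the continuity decision concrete, here is a minimal sketch of consuming such a structured output. The field concepts (readiness level, missing blocks, risk flags, remedy) come from the text above, but the exact JSON keys and values are assumptions.

```python
# Hypothetical sketch: deciding whether a workflow may safely resume
# from an HREVN-style structured result. Key names are assumed.
def can_resume(result: dict) -> tuple[bool, list[str]]:
    """Return (safe_to_continue, missing_required_blocks)."""
    missing = result.get("missing_required_blocks", [])  # assumed key
    risky = bool(result.get("risk_flags"))               # assumed key
    ready = result.get("readiness_level") == "ready"     # assumed value
    return (ready and not missing and not risky), missing

example = {
    "profile": "deployment",                  # detected profile (illustrative)
    "readiness_level": "partial",
    "missing_required_blocks": ["human_oversight_reference"],
    "risk_flags": [],
    "recommended_next_step": "attach oversight evidence",
    "remedy": {"add": ["human_oversight_reference"]},
}

ok, missing = can_resume(example)
# ok is False here: a required block is still missing, so the remedy
# payload tells the caller what evidence to attach before continuing.
```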

Governance and evidence

HREVN does not make a system legally compliant on its own. What it does provide is a structured evidence layer that governance, audit, and compliance processes can use: human oversight references, risk documentation, evidence lifecycle hooks, technical documentation references, receipts, and bundle verification.

This makes HREVN particularly relevant in AI governance discussions where the challenge is not only model quality, but also whether a system can later prove what it did, under which authority, and with what missing controls still visible.

Three pages for the conceptual core

The technical overview explains the full system. These three pages frame the same product through continuity, evidence, and verifiable state.

Public entry points

Anthropic

Claude Code-facing surface using skills, a local runner, and an MCP path.


Codex

Helper-first technical alpha for verifiable workflow state in coding-oriented agent flows.


OpenClaw

Agent-first, local-first surface with a public PyPI CLI and real end-to-end baseline validation.


Current status

HREVN is currently in technical alpha across its public surfaces. That means the runtime is live, the first test paths are real, and a technical user can already discover, install, and run the public artifacts. It does not yet mean fully productized onboarding, marketplace-grade packaging across every surface, or final enterprise distribution.
