EU AI Act compliance

EU AI Act Compliance Review for Companies Using AI Systems

Most companies already use AI, but cannot clearly explain how it works, what data it uses or what risks it creates. HREVN helps you assess real use, identify high-risk cases and generate structured documentation with a verifiable record.

If your AI system is used in the European Union, the EU AI Act may apply even if your company is based outside Europe. HREVN helps companies, law firms and consultancies move from scattered information to structured AI compliance documentation grounded in the real use of the system.

If someone asked you tomorrow to justify one real use of AI, could you show it clearly?

Preliminary triage. Not legal advice and not a certification of compliance.

The problem

Understand whether the EU AI Act may apply to your AI system

EU AI Act compliance is not only about having a policy. Many organisations already use AI systems but still cannot show clearly how those systems operate, what data they use, who reviews outputs or what evidence exists today.

The real problem is not only uncertainty about compliance. It is being unable to explain one concrete use of AI if a client, partner, auditor or regulator asks for it. That is why the first practical step is to assess real use, human oversight, user notices and the documentary gaps that still block a serious review.

  • Do you know which AI tools each team is actually using?
  • Do you know whether anyone relies on unapproved AI tools?
  • Could you reconstruct one recent real-world use case?
  • Do you know whether AI is used in HR, chatbots, education or scoring?
  • Do you have declared evidence, or only a policy stored in a folder?

Real cases

Identify high-risk AI uses and transparency gaps in your system

The point of the review is not to repeat the policy. It is to get specific about where the system may create higher regulatory exposure, what transparency duties may apply and which decisions still lack a serious documentary basis.

In practice, that means separating high-risk AI systems from transparency-led cases and seeing when human oversight, user notice or traceability becomes the real issue.

HR / hiring

Example: You use AI to summarise CVs, sort candidates or prepare interviews.

The uncomfortable question: Does it influence who gets interviewed or how candidates are ranked, is there human review, and is the candidate clearly informed?

Possible HREVN route: high-impact signals, human oversight, notices and documentary evidence.

Commercial chatbot / customer support

Example: Your website answers customers automatically or qualifies leads with AI.

The uncomfortable question: Does the user know they are interacting with AI, can the case escalate to a human and are records kept?

Possible HREVN route: transparency, sensitive-response control and a traceability base.

Education / learning

Example: You use AI to mark work, evaluate students or recommend pathways.

The uncomfortable question: Does it affect grading or access, is the student informed and is there human review?

Possible HREVN route: sensitivity review, named responsibilities and process evidence.

Scoring / sensitive services

Example: AI influences access, priority, eligibility, solvency or commercial conditions.

The uncomfortable question: What data is used, who supervises it and could you explain why a decision was taken?

Possible HREVN route: reinforced triage, professional review and stronger traceability.

General internal use

Example: Your team uses ChatGPT, Copilot or Gemini with documents, emails, minutes or budgets.

The uncomfortable question: Which tools are used, what data goes in and what internal instructions exist?

Possible HREVN route: tool inventory, usage guidance and AI literacy records.

Comparison

Generate structured AI compliance documentation from real system usage

HREVN does not replace training or legal judgement. It structures preliminary triage so the company, law firm or consultant can move from generic policy language to reviewable AI compliance documentation grounded in actual system use.

Training / AI policy                     | HREVN check
-----------------------------------------|---------------------------------------------------------------
Explains what should be done             | Checks what is actually being done
Delivers a policy                        | Checks whether declared evidence exists
Ends when the training ends              | Can be applied again after 60/90 days
Does not always inspect real use         | Surfaces tools, data, responsibilities and unresolved gaps
Does not create a follow-up file         | Creates a Flash Report, Internal Dossier and Preliminary File
Stays at training level                  | Opens routes for professional review when needed

What HREVN delivers

Add a verifiable record to every output you issue

HREVN delivers a preliminary documentary chain that shows whether the policy is alive, what gaps remain and what should escalate to professional review if needed. The result is not just a score: it is a documentary base recording what has been implemented, what is still missing and what needs professional escalation.

HREVN does not only generate AI compliance reports. It creates verifiable records that allow you to prove what was issued, when, and with which documented inputs.
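
To make that concrete, here is a minimal sketch of what such a record could look like, assuming a simple hash-and-timestamp design. The field names and structure are illustrative assumptions, not HREVN's actual format:

```python
# Illustrative sketch only. Shows one way a "verifiable record" could bind
# an issued output to a timestamp and to hashes of its documented inputs.
# All field names are assumptions, not HREVN's actual format.
import hashlib
import json
from datetime import datetime, timezone

def make_record(output_text: str, input_documents: list) -> dict:
    """Build a timestamped record linking an output to hashes of its inputs."""
    record = {
        "issued_at": datetime.now(timezone.utc).isoformat(),
        "output_sha256": hashlib.sha256(output_text.encode("utf-8")).hexdigest(),
        "input_sha256": [
            hashlib.sha256(doc.encode("utf-8")).hexdigest()
            for doc in input_documents
        ],
    }
    # Hash the record itself so any later edit to the stored entry is detectable.
    record["record_sha256"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode("utf-8")
    ).hexdigest()
    return record
```

Because each input is hashed rather than stored, a record built this way can show which documents were used, and when, without duplicating their content.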

  1. Initial mini-check: a fast way to see whether the policy is alive or only archived.
  2. HREVN Flash Triage Report: a quick reading of signals, gaps and next steps.
  3. HREVN Internal Review Dossier: a working base for blockers, evidence and documentary status.
  4. Preliminary HREVN AI File: a structured case document with scope, routes and clear limits.
  5. Action routes: additional training, policy work, notices, human oversight or professional review when needed.

Free check

Is your AI policy alive or just archived?

This check is the entry point to the HREVN workflow. It does not issue a pass/fail compliance answer. It helps locate signals, gaps and preliminary review routes. At the end, you can request an initial reading that you can use internally or share with counsel if relevant gaps appear.

  1. Can you show that your AI policy was communicated internally, with date and record?
  2. Do you maintain an updated list of the AI tools each department uses?
  3. Can you show what categories of data are entered into each AI tool?
  4. Have you checked whether AI is used in HR, hiring, education, scoring, customer support or decisions about individuals?
  5. Could you reconstruct a recent case where an AI output was reviewed by a person before being used?
  6. If you use chatbots, automated responses or AI-generated content for external users, can you point to where they are informed?
  7. If someone asked tomorrow about one concrete AI use, could you show the tool, purpose, data used, owner, human review and related evidence?

  • Green: basic signals are located.
  • Amber: a policy exists, but implementation is only partially evidenced.
  • Red: a formal policy with insufficient evidence. It is worth reviewing tools, data, responsibilities and evidence before relying on the policy alone.
  • Grey: insufficient information.
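
As a rough illustration only, the legend can be read as a simple mapping from the seven answers above to a status. The thresholds in this sketch are assumptions, not HREVN's scoring rules:

```python
# Illustrative sketch only: applies the Green/Amber/Red/Grey legend above to
# answers to the seven questions. Thresholds are assumptions, not HREVN's rules.
def triage(answers: list) -> str:
    """answers: True = evidenced, False = not evidenced, None = unknown."""
    known = [a for a in answers if a is not None]
    if len(known) < len(answers) / 2:
        return "Grey"    # insufficient information to read the signals
    evidenced = sum(known)
    if evidenced == len(answers):
        return "Green"   # basic signals are located
    if evidenced >= len(answers) / 2:
        return "Amber"   # a policy exists, but only partially evidenced
    return "Red"         # formal policy with insufficient evidence

print(triage([True, True, None, False, True, True, False]))  # prints "Amber"
```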
Short form

Request an initial review

We use this to understand whether you fit better with an initial check, a guided demo or a partner conversation.

Preliminary triage. Not legal advice and not a certification of compliance.

Partners

Designed for companies, law firms and AI consultancies

The same workflow can support internal review, external advisory work or partner-led delivery. What changes is the operating model, not the need for serious, reviewable documentation.

Proof

Three demo scenarios currently available

The current MVP already generates a full documentary chain from a structured case: initial declaration, triage, dossier, file, manifest and checksums. The point here is to show that HREVN already works across three different operating patterns.
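
To make the manifest-and-checksums idea concrete, the sketch below shows one way a generated case folder could be re-verified later, assuming a simple JSON manifest of filename-to-SHA-256 entries (an illustrative layout, not HREVN's actual file format):

```python
# Illustrative sketch only: re-verify that the documents in a generated case
# folder still match the checksums in its manifest. The manifest layout
# (a JSON map of filename -> SHA-256 hex digest) is an assumption, not
# HREVN's actual file format.
import hashlib
import json
from pathlib import Path

def verify_case(case_dir: Path) -> dict:
    """Return filename -> True if the file's current hash matches the manifest."""
    manifest = json.loads((case_dir / "manifest.json").read_text(encoding="utf-8"))
    return {
        name: hashlib.sha256((case_dir / name).read_bytes()).hexdigest() == expected
        for name, expected in manifest.items()
    }
```

A check like this is what turns a folder of PDFs into a documentary chain: anyone holding the manifest can later confirm that no file was altered after issue.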

TalentoIA / HR

Regulatory depth: hiring, possible high-risk posture and the need for stronger documentary evidence.

Shows how HREVN opens a case with possible Annex III relevance and incomplete human review.

  • AI System Initial Declaration Form (View PDF)
  • HREVN Internal Review Dossier (View PDF)
  • HREVN AI Documentation Review File (View PDF)

DemoChatBot / commercial chatbot

Commercial breadth: transparency, user notice, processed data, human escalation and operational control.

Shows that HREVN is not limited to HR and can open a real commercial chatbot case without defaulting to a high-risk classification.

  • AI System Initial Declaration Form (View PDF)
  • HREVN Internal Review Dossier (View PDF)
  • HREVN AI Documentation Review File (View PDF)

MicrocreditAI / financial scoring

Advanced depth: creditworthiness scoring, sensitive economic decisions, possible high-risk posture and reinforced legal review.

Shows how HREVN opens a financial-scoring case with human oversight, traceability, bias testing, applicant notice and required legal review.

  • AI System Initial Declaration Form (View PDF)
  • HREVN Internal Review Dossier (View PDF)
  • HREVN AI Documentation Review File (View PDF)

Quick FAQ

Fast questions before requesting a demo

Does this check replace legal review?

No. HREVN runs preliminary triage and organises an initial documentary base. Legal or compliance review remains a later human decision.

Is this only for companies that already completed AI training?

No. It also fits companies that are only starting to use AI and need to organise tools, data, responsibilities and evidence before they can defend a real case.

Does HREVN generate a file or only a score?

It generates a preliminary documentary chain: initial declaration, flash report, internal dossier and a structured review file.

Can a law firm or consultancy use this with clients?

Yes. This landing page is also aimed at law firms, consultancies, academies and advisers that want to use HREVN as a documentary intake and triage layer.

How does this help with AI Act compliance for companies?

It helps companies turn policy, tool lists, human oversight, notices and evidence into AI compliance documentation that is easier to review before legal escalation is needed.

Scope

What this check does and does not do

HREVN runs a preliminary implementation triage. It does not replace legal audit, does not certify compliance and does not formally validate evidence. It organises information, surfaces gaps and creates a documentary base for professional review when needed.

  • Preliminary triage.
  • Documented implementation.
  • Declared evidence.
  • Open gaps.
  • Professional review when needed.

Next step

Start by checking whether your AI policy is alive.

We are not launching another training course. We are validating whether there is demand for a second layer: checking whether AI policy and training have turned into documented real-world use.