
Understanding EU AI Act: Requirements for High-Risk AI Systems

The EU AI Act lists eight categories of high-risk AI with mandatory compliance from August 2026. Understand the requirements, documentation obligations, and enforcement penalties.

Truvara Team
February 12, 2026
10 min read

The EU AI Act (Regulation (EU) 2024/1689) treats high‑risk AI systems as the most heavily regulated category under the framework — triggering mandatory compliance obligations across technical documentation, risk management, data governance, and human oversight. Organizations deploying AI in hiring, credit scoring, biometric identification, education assessment, or critical infrastructure management must comply by August 2026, with a possible extension to December 2027 pending the Digital Omnibus. Non‑compliance with the high‑risk obligations carries fines of up to €15 million or 3 % of global annual turnover, whichever is higher; prohibited AI practices attract fines of up to €35 million or 7 %.

What Classifies as High‑Risk AI Under the EU AI Act

The Act defines high‑risk AI in two ways. Annex I covers AI that functions as a safety component within regulated products—medical devices, vehicles, machinery, and even toys. Annex III lists eight standalone AI applications that are high‑risk regardless of sector. The key point is that risk is use‑case dependent, not technology dependent. The same large language model that drafts customer‑service replies is low‑risk; the same model that screens job candidates or scores credit applications is high‑risk under Annex III.

The eight Annex III categories are:

  1. Biometric identification and categorisation
  2. Management and operation of critical infrastructure
  3. Education and vocational training
  4. Employment and worker management
  5. Access to essential private and public services
  6. Law enforcement
  7. Migration, asylum, and border control management
  8. Administration of justice and democratic processes

Employers often overlook the employment category. AI‑powered applicant‑tracking systems, CV‑screening tools, automated interview analysis, performance‑monitoring platforms, and workforce analytics all fall under Annex III if they influence hiring, promotion, assignment, or termination decisions. Companies that deploy these tools are “deployers” under the Act and must verify that their providers are compliant.

The Eight Annex III Categories: Detailed Breakdown

Biometric Identification and Categorisation

Remote biometric identification—matching faces, voices, irises, or gait patterns against a database—is high‑risk. This includes access‑control gates, retail analytics, and attendance‑tracking systems. Systems that infer sensitive attributes such as age or emotional state from biometric data also land here, and some uses go further: Article 5 outright prohibits biometric categorisation that infers race, along with emotion recognition in workplaces and educational settings. Because GDPR Article 9 already treats biometric data as special‑category, the overlap is significant.

Aurora Trust’s recent analysis notes that technical documentation must spell out false‑positive and false‑negative rates and demonstrate performance across demographic groups. Many firms underestimate how much data they need to collect for this proof.
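
To see what that proof looks like in practice, here is a minimal sketch of per‑group error‑rate reporting in Python, assuming a labelled evaluation set with boolean y_true/y_pred columns; the column and group names are illustrative, not prescribed by the Act.

```python
import pandas as pd

def rates_by_group(df: pd.DataFrame, group_col: str = "demographic_group") -> pd.DataFrame:
    """False-positive and false-negative rates per demographic group.

    Assumes boolean columns 'y_true' (ground truth) and 'y_pred' (model
    decision); all column names here are illustrative, not mandated.
    """
    rows = []
    for group, sub in df.groupby(group_col):
        negatives = sub[~sub["y_true"]]   # true non-matches
        positives = sub[sub["y_true"]]    # true matches
        fpr = negatives["y_pred"].mean() if len(negatives) else float("nan")
        fnr = (~positives["y_pred"]).mean() if len(positives) else float("nan")
        rows.append({"group": group, "n": len(sub), "fpr": fpr, "fnr": fnr})
    return pd.DataFrame(rows)

# Usage: report = rates_by_group(eval_df); sort by fpr to surface the
# groups whose error rates diverge most from the rest.
```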

Critical Infrastructure Management

AI that acts as a safety component in road‑traffic control, water supply, gas, electricity, heating, or digital infrastructure is high‑risk. A glitch in an AI‑driven power‑grid controller could cascade into widespread outages. Providers must document failure modes, run robustness tests under adversarial conditions, and prove that human operators can override the system at any time.
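
As one illustration of what a documented robustness test might record, the sketch below measures accuracy degradation under bounded random input noise. The predict interface, noise bound, and tolerance are assumptions chosen for the example, not values taken from Article 15.

```python
import numpy as np

def robustness_check(predict, X: np.ndarray, y: np.ndarray,
                     epsilon: float = 0.05, trials: int = 10,
                     max_drop: float = 0.02) -> dict:
    """Record accuracy degradation under bounded random input noise.

    `predict` is any callable mapping inputs to predicted labels; the
    noise bound and tolerated accuracy drop are illustrative values a
    provider would set and justify in its own technical documentation.
    """
    baseline = float((predict(X) == y).mean())
    worst = min(
        float((predict(X + np.random.uniform(-epsilon, epsilon, X.shape)) == y).mean())
        for _ in range(trials)
    )
    return {
        "baseline_accuracy": baseline,
        "worst_case_accuracy": worst,
        "within_tolerance": baseline - worst <= max_drop,
    }
```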

Education and Vocational Training

Systems that decide who gets into a university, assign students to programs, score essays, or proctor exams are high‑risk. If an algorithm consistently undervalues a particular demographic, it can shape life trajectories for an entire generation. Providers must show that the model does not encode bias and that affected individuals have a clear avenue for appeal.

Employment and Worker Management

This is the most commercially significant category. AI used for recruitment advertising, CV screening, candidate ranking, interview analysis, task allocation, performance monitoring, or workforce analytics triggers Annex III obligations. The test is whether the system makes or meaningfully influences an employment decision—not whether a human makes the final call. Even a scored shortlist handed to a recruiter counts as high‑risk.

Companies using commercial ATS or HR‑AI platforms are deployers with independent verification duties; they cannot simply hand the compliance burden to their vendors.

Essential Private and Public Services

AI that public authorities use—or commission on their behalf—to evaluate eligibility for benefits, healthcare, or social assistance, and to grant, reduce, revoke, or reclaim those benefits, is high‑risk. Credit‑worthiness scoring and life‑insurance risk pricing also fall under this banner, as do emergency‑dispatch systems that prioritize response. The Act demands strong human‑oversight mechanisms and transparent appeal procedures.

Law Enforcement, Migration, and Democratic Processes

AI that produces risk assessments in criminal proceedings, predicts re‑offending, performs lie detection, or reads emotions during interrogations is high‑risk. Migration‑related tools—document‑authenticity checks, asylum‑risk scoring, or polygraphs—are treated the same way. Systems that assist judges or influence elections round out the category. Because the stakes are so high, the Act imposes especially stringent human‑oversight requirements.

The Mandatory Compliance Package: Articles 8 Through 15

Every high‑risk AI system must implement a control architecture drawn from Articles 8‑15. This is not optional.

  • Risk Management System (Article 9): Ongoing identification, analysis, and mitigation of known and foreseeable risks throughout the system lifecycle.
  • Data Governance (Article 10): Full documentation of training datasets, validation approaches, and bias testing across demographic groups.
  • Technical Documentation (Article 11 / Annex IV): Nine specific document categories covering system architecture, design specifications, and intended use.
  • Transparency and Instructions for Use (Article 13): Explainable‑AI (XAI) reports, user‑facing documentation, and disclosure to affected individuals.
  • Human Oversight (Article 14): Documented procedures enabling meaningful human intervention in system decisions.
  • Robustness, Accuracy, and Security (Article 15): Performance benchmarks, adversarial testing, cybersecurity controls, and post‑market monitoring.
  • EU Declaration of Conformity (Article 47): Formal declaration that the system meets all applicable requirements.
  • EU AI Database Registration (Article 49): Entry in the EU‑wide database before the system is placed on the market or put into service.

Article 11 and Annex IV are especially demanding: providers must produce nine categories of documentation covering purpose, design, training data, testing, and known limitations. The files must stay current and be ready for inspection by market‑surveillance authorities at any time.
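
One pragmatic way to keep that file inspection‑ready is to track each documentation section as a versioned record and flag anything overdue for review. The section names, owners, and 180‑day cadence in this sketch are illustrative assumptions; the authoritative headings come from Annex IV itself.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class DocSection:
    name: str           # e.g. "General description of the AI system"
    last_reviewed: date
    owner: str

# Hypothetical manifest entries; take the authoritative section
# headings from Annex IV itself, not from this sketch.
manifest = [
    DocSection("General description", date(2026, 1, 10), "ml-platform"),
    DocSection("Design specs and development process", date(2025, 11, 2), "ml-platform"),
    DocSection("Risk management system", date(2026, 2, 1), "grc"),
]

STALE_AFTER = timedelta(days=180)  # assumed internal review cadence

def stale_sections(today: date | None = None) -> list[str]:
    """Names of sections overdue for review, empty if the file is current."""
    today = today or date.today()
    return [s.name for s in manifest if today - s.last_reviewed > STALE_AFTER]

print("Sections needing review:", stale_sections())
```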

The Article 6(3) Carve‑Out: Not a Casual Escape Hatch

Some organisations try to lean on Article 6(3) as a “get‑out” by arguing their Annex III system poses no significant risk. EU AI Compass warns against treating this as a casual escape hatch. The carve‑out requires a documented justification that analyses the specific risk pathways of the system—not a blanket claim that the technology is safe. Authorities can—and do—challenge such classifications.

Deadlines and Enforcement Reality

The core high‑risk obligations kick in from August 2026. The EU Council’s March 13, 2026 adoption of its Digital Omnibus position could push the deadline to December 2027, but the classification logic in Annex III is already law. Waiting for Omnibus certainty is risky; building the documentation, risk‑management system, and governance architecture takes months.

National market‑surveillance authorities enforce the rules and can levy fines of up to €15 million or 3 % of global turnover for breaches of the high‑risk obligations. They can also order the withdrawal or suspension of non‑compliant systems from the EU market.

AI‑Powered Compliance Evidence Collection: A Practical Path Forward

Collecting the evidence demanded by Articles 11 and 15 is a systematic exercise. You need to record training‑data provenance, bias‑testing results, performance benchmarks, and robustness‑testing outcomes, and you must keep those records up to date as models evolve. Doing this manually is error‑prone and costly.

Modern GRC platforms now embed AI‑driven evidence‑collection modules that plug directly into ML pipelines, CI/CD tools, and data warehouses. They automatically harvest metrics, generate audit trails, and assemble the structured Annex IV documentation package that regulators expect.
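
A minimal sketch of that evidence‑collection pattern, with hypothetical identifiers and metric names standing in for whatever your pipeline actually exposes:

```python
import hashlib
import json
from datetime import datetime, timezone

def build_evidence_record(model_id: str, metrics: dict, dataset_uri: str) -> dict:
    """Assemble one timestamped, tamper-evident evidence record.

    `metrics` would come from your evaluation pipeline and `dataset_uri`
    from your data catalogue; both are placeholders in this sketch.
    """
    record = {
        "model_id": model_id,
        "dataset": dataset_uri,
        "metrics": metrics,
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }
    # A content hash lets an auditor verify the record was not altered later.
    payload = json.dumps(record, sort_keys=True).encode()
    record["sha256"] = hashlib.sha256(payload).hexdigest()
    return record

record = build_evidence_record(
    model_id="cv-screener-v3",                        # hypothetical system
    metrics={"accuracy": 0.91, "fpr_gap_across_groups": 0.03},
    dataset_uri="s3://datasets/eval/2026-02",         # hypothetical location
)
print(json.dumps(record, indent=2))
```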

How Truvara Automates High‑Risk AI Compliance

For each Annex III category, the compliance workflow follows a familiar pattern: inventory the systems, classify the risk, document the controls, and maintain the evidence. Truvara automates the most time‑consuming steps.

  • AI inventory module – scans your environment, tags each system against Annex III categories, and flags those that need conformity assessment or EU‑database registration.
  • Automated evidence collection – pulls training‑data sheets, bias‑test logs, and performance dashboards from connected ML infrastructure, then formats them into the Annex IV package.
  • Post‑market monitoring – continuously tracks drift, accuracy, and security metrics against Article 15 thresholds, producing ready‑to‑share compliance records (see the sketch after this list).
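
For the monitoring step, here is a hedged sketch of a threshold check against documented Article 15 limits; the metric names and limits are assumptions a provider would set and justify, not values from the regulation.

```python
# Assumed thresholds a provider documents for Article 15; the metric
# names and limits are illustrative, not values from the regulation.
THRESHOLDS = {
    "accuracy": 0.90,     # minimum acceptable live accuracy
    "drift_score": 0.15,  # maximum acceptable population-drift score
}

def check_article15_metrics(live: dict) -> list[str]:
    """Return human-readable violations; an empty list means all clear."""
    violations = []
    if live["accuracy"] < THRESHOLDS["accuracy"]:
        violations.append(
            f"accuracy {live['accuracy']:.3f} below {THRESHOLDS['accuracy']}")
    if live["drift_score"] > THRESHOLDS["drift_score"]:
        violations.append(
            f"drift {live['drift_score']:.3f} above {THRESHOLDS['drift_score']}")
    return violations

for alert in check_article15_metrics({"accuracy": 0.87, "drift_score": 0.21}):
    print("ALERT:", alert)  # would feed an incident ticket in production
```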

If you’re juggling multiple high‑risk AI systems across different sectors, Truvara gives you a single control plane that maps each system to its specific Articles 8‑15 obligations—no need to maintain parallel compliance tracks.

FAQ

How do I determine if my AI system is high‑risk?
Start by mapping your AI use cases against the eight Annex III categories. If your system falls into one of them and makes or meaningfully influences a decision about a person, treat it as high‑risk unless you can document an Article 6(3) exemption.

Do I need a separate compliance program for each Annex III category?
No. While each category has its own nuances, the underlying Articles 8‑15 requirements are common. A unified risk‑management and documentation framework can cover them all.

Can I rely on my vendor’s compliance statement?
Only partially. As a deployer, you remain responsible for verifying that the provider’s documentation is complete and that the system meets EU requirements in your specific context.

What happens if I miss the August 2026 deadline?
National authorities can impose fines of up to €15 million or 3 % of global turnover for high‑risk violations, and they may order the system off the market until you become compliant.

Is the Article 6(3) carve‑out a safe bet?
Only if you can produce a robust, evidence‑backed justification that the system poses no significant risk. Most regulators treat this as an exception, not the rule.

Where can I find templates for Annex IV documentation?
The European Commission provides a non‑binding template in its guidance notes. Truvara’s platform includes pre‑filled templates that you can customise for each system.

How often should I update my risk‑management file?
Whenever there is a material change—new data, model retraining, a shift in deployment context, or a discovered vulnerability. Continuous monitoring helps you stay on top of these triggers.

Will the Digital Omnibus change the high‑risk definition?
The Omnibus may shift timelines, but the Annex III classifications are already law. Prepare now; the rules are unlikely to be rolled back.


Key Takeaways

  • Identify early: Map every AI use case to the eight Annex III categories; assume high‑risk if there’s any decision‑making influence.
  • Build a unified compliance framework: Use Articles 8‑15 as a single control tower rather than creating separate programs for each category.
  • Document everything: Technical files, data‑governance logs, risk‑management plans, and human‑oversight procedures must be up‑to‑date and audit‑ready.
  • Leverage automation: Deploy GRC tools that pull evidence directly from your ML pipelines to reduce manual effort and error risk.
  • Plan for post‑market monitoring: Ongoing drift detection, performance benchmarking, and security testing are mandatory under Article 15.
  • Don’t rely solely on vendors: As a deployer, you must verify your provider’s compliance claims and retain independent proof.
  • Prepare for the deadline: Start now—building inventories, evidence collection, and monitoring processes takes months, not weeks.
  • Stay alert to carve‑out risks: If you consider the Article 6(3) exemption, back it with a rigorous, documented risk analysis; expect scrutiny from regulators.

Conclusion

The EU AI Act is reshaping how companies think about AI risk, especially for the eight high‑risk categories outlined in Annex III. Compliance is not a one‑off checklist; it demands continuous governance, transparent documentation, and robust human oversight. By cataloguing every AI system, aligning it with Articles 8‑15, and automating evidence collection, organisations can meet the August 2026 deadline—and avoid the steep fines that accompany non‑compliance. Truvara’s platform offers a practical way to turn these regulatory obligations into an integrated, manageable process, letting you focus on innovation while staying on the right side of the law. Take the first step today: run an AI inventory, flag high‑risk use cases, and start building the documentation trail that regulators will soon expect.
