What the AI Act actually requires
The EU AI Act categorises AI systems into four risk tiers: prohibited, high-risk, limited-risk, and minimal-risk. The substantial obligations sit on high-risk systems: the use cases listed in Annex III, covering AI used in critical infrastructure, law enforcement, migration, justice, employment, education, access to essential services, and certain biometric applications, plus AI embedded as a safety component in products regulated under Annex I. (Military and defence AI is outside the Act's scope entirely.)
For a high-risk system, these obligations are split between the provider (vendor) and the deployer (operator); most sit with the provider, while the deployer must operate the system as documented:
- Risk-management system — documented and updated throughout the system's lifecycle.
- Data governance — training, validation, and test datasets must meet quality criteria, with documentation of data sources, possible biases, and pre-processing steps.
- Technical documentation — comprehensive specification of the system's design, capabilities, limitations, and intended purpose.
- Record-keeping (audit logs) — automatic logging of operational events, sufficient to trace the system's behaviour ex post (a minimal event sketch follows this list).
- Transparency to deployers — instructions for use, including capabilities, limitations, expected accuracy and robustness, human-oversight measures.
- Human oversight — interface and procedures that allow a human to effectively oversee operation and intervene.
- Accuracy, robustness, cybersecurity — appropriate levels for the intended purpose, with documented measurement.
- Conformity assessment — formal evaluation, including (for some categories) by a notified body. Result is a CE mark on the system.
- Post-market monitoring — vendor obligation to monitor real-world performance and report serious incidents.
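To make the record-keeping obligation concrete, below is a minimal sketch of what one audit-log event might look like, in Python. The field names and event types are illustrative assumptions on our part; the Act requires automatic logging sufficient for ex-post tracing, not any particular schema.

```python
import json
import uuid
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    """One operational event, logged automatically for ex-post traceability."""
    event_type: str   # e.g. "inference", "override", "escalation"
    system_id: str    # identifies the AI system and its version
    actor: str        # "system", or the role of the overseeing human
    input_ref: str    # reference to the input, not the data itself
    output_ref: str   # reference to the produced output
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        return json.dumps(asdict(self))

# Example: record a human override of a model decision.
print(AuditEvent(
    event_type="override",
    system_id="dispatch-assistant/2.3.1",
    actor="control-room-operator",
    input_ref="case-4812",
    output_ref="decision-4812-rev2",
).to_json())
```

Logging references rather than raw payloads keeps personal data out of the log itself, which matters once GDPR minimisation applies to the same deployment.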
What Wavenetic ships
Each Wavenetic deployment includes a vendor-side compliance pack designed to slot directly into the operator's conformity-assessment process:
| Artefact | What it covers |
|---|---|
| System architecture document | Components, data flows, model selection rationale, deployment topology, integration points. |
| Data governance summary | Training and evaluation data sources for any models we ship, pre-processing, known biases, exclusion criteria. |
| Audit-log specification | Schema and retention policy for every operational event the system produces. Operator wires this into their existing log infrastructure. |
| Human-oversight gate documentation | Where in the system flow a human is required to approve, override, or escalate. Default configurations and customisation points. A sketch of the gate pattern follows below. |
| Accuracy + robustness reports | Measured performance on representative tasks, including failure-mode catalogue and stress-test results. |
| Risk-management plan template | Operator-customisable risk register, with the system-level risks pre-populated from our analysis. |
| Instructions for use | Capabilities, limitations, expected operating range, procedures for the deployer's staff. |
| Post-market monitoring plan | What we monitor on the vendor side; what the operator monitors on their side; incident-reporting workflow. |
These artefacts are part of the product, not a separately priced consulting engagement. The operator's compliance team uses them as input to their own conformity-assessment file, which they own and sign, because final responsibility for the deployed system sits with the deployer under the Act.
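To make the human-oversight gate concrete, here is a hedged sketch of the pattern in Python: low-confidence outputs are held for a blocking human review step that can approve, override, or escalate. The names, the threshold, and the `ReviewResult` type are illustrative assumptions, not Wavenetic's actual interface.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable

class GateDecision(Enum):
    APPROVE = "approve"
    OVERRIDE = "override"
    ESCALATE = "escalate"

@dataclass
class ReviewResult:
    decision: GateDecision
    output: str  # possibly revised by the human reviewer

def oversight_gate(
    output: str,
    confidence: float,
    review: Callable[[str], ReviewResult],  # blocking human step
    threshold: float = 0.85,                # illustrative default
) -> str:
    """Hold low-confidence outputs for human review before release."""
    if confidence >= threshold:
        return output  # released without review; still audit-logged upstream
    result = review(output)
    if result.decision is GateDecision.ESCALATE:
        raise RuntimeError("escalated: output withheld pending senior review")
    return result.output

# Example: the reviewer overrides the model's draft decision.
final = oversight_gate(
    "deny claim 7741",
    confidence=0.62,
    review=lambda draft: ReviewResult(GateDecision.OVERRIDE, "approve claim 7741"),
)
print(final)  # -> approve claim 7741
```

The gate is deliberately a customisation point: where it sits in the flow and what triggers it are exactly the defaults and override points the documentation describes.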
Is your use case high-risk?
Use the following heuristic before reading the full Annex III text:
- Critical infrastructure — energy generation, transmission, distribution; water; gas; transport; digital infrastructure. Almost any AI in these systems is high-risk.
- Essential services — banking (credit-worthiness, fraud detection), insurance (pricing, claims), healthcare (clinical decision support, diagnostics), public benefits.
- Law enforcement, migration, justice — high-risk by default.
- Employment / HR — recruitment, performance evaluation, task allocation, hiring decisions.
- Education — admissions, exam evaluation, performance scoring.
- Biometric systems — categorisation, identification, emotion recognition. Some specific applications are prohibited outright.
If your use case falls into one of these, plan for high-risk obligations. If you're unsure, treat it as high-risk for procurement until you have legal sign-off otherwise — under-classifying is the expensive direction.
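If you want that heuristic as a pre-screening step in a procurement workflow, a minimal Python sketch might look like the following. The domain labels are our own shorthand for the Annex III areas above; this is a triage aid, not legal advice.

```python
# Shorthand labels for the Annex III areas listed above (illustrative).
HIGH_RISK_DOMAINS = {
    "critical_infrastructure", "essential_services", "law_enforcement",
    "migration", "justice", "employment", "education", "biometrics",
}

def triage(domain: str) -> str:
    """Coarse pre-screen: defaults to high-risk when the domain is unknown."""
    if domain in HIGH_RISK_DOMAINS:
        return "plan for high-risk obligations"
    return "treat as high-risk until legal sign-off says otherwise"

print(triage("employment"))            # plan for high-risk obligations
print(triage("internal_dev_tooling"))  # treat as high-risk until legal sign-off says otherwise
```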
Timeline + penalties
- February 2025 — general provisions and the prohibitions on certain AI practices apply.
- August 2025 — rules for general-purpose AI models apply.
- August 2026 — high-risk obligations apply; Annex III systems must comply.
- August 2027 — high-risk obligations apply to AI embedded in regulated products (Annex I).
Penalties scale by violation tier, each capped at the higher of a fixed amount and a share of global annual turnover: €35 million or 7% for prohibited-AI violations, €15 million or 3% for high-risk-system non-compliance, and €7.5 million or 1% for supplying incorrect or misleading information to authorities.
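The "whichever is higher" mechanics mean the effective cap grows with turnover. A quick worked sketch in Python, using a hypothetical €2 billion global annual turnover:

```python
def penalty_cap(fixed_eur: float, pct: float, turnover_eur: float) -> float:
    """Applicable maximum fine: the higher of the fixed cap and the turnover share."""
    return max(fixed_eur, pct * turnover_eur)

turnover = 2_000_000_000  # hypothetical: €2bn global annual turnover

print(penalty_cap(35e6, 0.07, turnover))   # prohibited-AI tier: 140000000.0 (€140M)
print(penalty_cap(15e6, 0.03, turnover))   # high-risk tier: 60000000.0 (€60M)
print(penalty_cap(7.5e6, 0.01, turnover))  # misleading-info tier: 20000000.0 (€20M)
```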
How the AI Act compounds with NIS2, CER, DORA, GDPR
The AI Act is one frame among several. A regulated-enterprise AI deployment in 2026 typically sits inside all of these:
- NIS2 — cybersecurity baseline + supply-chain risk management for essential and important entities. The AI Act says "the system must be secure"; NIS2 says "and your supplier must be too."
- CER (Critical Entities Resilience Directive) — operator resilience for essential services. AI Act governs the AI system; CER governs the operator's overall continuity, of which the AI system is now a load-bearing piece.
- DORA — financial services. An AI system bought from a vendor is ICT third-party risk; banks and insurers must have a tested exit strategy and rollback path. AI Act + DORA together: vendor-shipped exit documentation is procurement-grade.
- GDPR — pre-existing. AI processing of personal data must be lawful, data-minimised, with rights of access, rectification, and erasure. AI Act adds new transparency obligations on top.
Wavenetic's architecture is designed to satisfy all of these by default: EU-built supply chain, on-premise / air-gapped deployment, structured audit trail, EU data residency, and conformity-assessment documentation. See the full architecture in the on-premise pillar →