Compliance guide

EU AI Act–compliant AI.
Built into the products.

Conformity assessments, technical documentation, audit logs, human-oversight gates. Wavenetic ships the AI Act vendor-side artefacts with every deployment — not as a billable add-on, not as a checklist sprint before procurement, not as someone else's problem.

What the AI Act actually requires

The EU AI Act categorises AI systems into four risk tiers: prohibited, high-risk, limited-risk, and minimal-risk. The substantial obligations sit on high-risk systems — defined by Annex III, covering AI used in critical infrastructure, law enforcement, migration, justice, employment, education, access to essential services, and certain biometric applications. (Military and defence uses fall outside the Act's scope entirely.)

For a high-risk system, the provider (vendor) and the deployer (operator) jointly carry these obligations:

- Conformity assessment before the system enters service
- Technical documentation covering design, data, and performance
- Automatic event logging across the system's operation
- Human-oversight measures built into the workflow
- Accuracy and robustness testing, with documented results
- Transparency to deployers: instructions for use, capabilities, limitations
- Post-market monitoring and incident reporting
- CE marking

What Wavenetic ships

Each Wavenetic deployment includes a vendor-side compliance pack designed to slot directly into the operator's conformity-assessment process:

| Artefact | What it covers |
| --- | --- |
| System architecture document | Components, data flows, model selection rationale, deployment topology, integration points. |
| Data governance summary | Training and evaluation data sources for any models we ship, pre-processing, known biases, exclusion criteria. |
| Audit-log specification | Schema and retention policy for every operational event the system produces. Operator wires this into their existing log infrastructure. |
| Human-oversight gate documentation | Where in the system flow a human is required to approve, override, or escalate. Default configurations and customisation points. |
| Accuracy + robustness reports | Measured performance on representative tasks, including failure-mode catalogue and stress-test results. |
| Risk-management plan template | Operator-customisable risk register, with the system-level risks pre-populated from our analysis. |
| Instructions for use | Capabilities, limitations, expected operating range, procedures for the deployer's staff. |
| Post-market monitoring plan | What we monitor on the vendor side; what the operator monitors on their side; incident-reporting workflow. |
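As a rough illustration of the kind of structure an audit-log specification defines, here is a minimal sketch of one operational event as a typed record. The field names, values, and retention figure are illustrative assumptions for this example, not the actual Wavenetic schema.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    """One operational event, as an audit-log schema row might define it."""
    event_id: str        # unique identifier for this event
    timestamp: str       # ISO 8601, UTC
    actor: str           # system component or human user that acted
    action: str          # e.g. "model_inference", "human_override"
    outcome: str         # e.g. "completed", "escalated"
    retention_days: int  # how long the operator must keep this record

    def to_json(self) -> str:
        # Stable key order makes the log line diff-friendly downstream
        return json.dumps(asdict(self), sort_keys=True)

event = AuditEvent(
    event_id="evt-0001",
    timestamp=datetime.now(timezone.utc).isoformat(),
    actor="triage-model",
    action="model_inference",
    outcome="completed",
    retention_days=3650,
)
print(event.to_json())
```

A schema like this is what the operator maps onto their existing log infrastructure; the specification in the pack also fixes which events are mandatory and for how long each must be retained.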

These artefacts are part of the product, not a separately priced consulting engagement. The operator's compliance team uses them as input to their own conformity-assessment file — which they own and sign, because the deployer carries final responsibility under the Act.
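The human-oversight gates documented in the pack can be sketched as a simple control point: the system's proposed action is blocked until a human records one of three decisions. This is a minimal illustration of the approve / override / escalate pattern, not Wavenetic's actual gate API.

```python
from enum import Enum

class GateDecision(Enum):
    APPROVE = "approve"    # human confirms the system's proposed action
    OVERRIDE = "override"  # human substitutes their own action
    ESCALATE = "escalate"  # human routes the case to a senior reviewer

def oversight_gate(proposed_action: str, decision: GateDecision) -> str:
    """Hold the proposed action until a human decision is recorded."""
    if decision is GateDecision.APPROVE:
        return f"executing: {proposed_action}"
    if decision is GateDecision.OVERRIDE:
        return f"discarded: {proposed_action} (human substituted their own action)"
    return f"escalated: {proposed_action} (routed to a senior reviewer)"

print(oversight_gate("auto-reject loan application", GateDecision.ESCALATE))
```

The gate documentation specifies where in the flow each such control point sits and which ones the operator may reconfigure.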

Is your use case high-risk?

Use the following heuristic before reading the full Annex III text. Your use case is likely high-risk if it involves:

- Critical infrastructure: energy, water, transport, digital infrastructure
- Law enforcement, migration management, or justice administration
- Employment / HR decisions, or education
- Access to essential services: banking, insurance, healthcare
- Biometric identification or categorisation

If your use case falls into one of these, plan for high-risk obligations. If you're unsure, treat it as high-risk for procurement until you have legal sign-off otherwise — under-classifying is the expensive direction.
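The heuristic above, including its "when unsure, assume high-risk" default, can be expressed in a few lines. The domain labels are the example's own shorthand, not Annex III's legal wording.

```python
# Shorthand labels for the Annex III categories listed above (illustrative)
HIGH_RISK_DOMAINS = {
    "critical infrastructure", "law enforcement", "migration",
    "justice", "employment", "education", "essential services",
    "biometrics",
}

def plan_for_high_risk(domain: str, legal_signoff: bool = False) -> bool:
    """Return True when procurement should assume high-risk obligations."""
    if domain.lower() in HIGH_RISK_DOMAINS:
        return True
    # Not obviously high-risk: stay conservative until legal signs off
    return not legal_signoff

print(plan_for_high_risk("employment"))                          # True
print(plan_for_high_risk("spam filtering"))                      # True
print(plan_for_high_risk("spam filtering", legal_signoff=True))  # False
```

Note the asymmetry: only an explicit legal sign-off moves a case out of the high-risk planning track, which mirrors the "under-classifying is the expensive direction" rule.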

Timeline + penalties

Most AI Act provisions are in force as of February 2025; high-risk-system obligations entered into force in August 2026. Penalties for non-compliance reach €35 million or 7% of global annual revenue, whichever is higher (for prohibited-AI violations). For high-risk-system non-compliance, the cap is €15 million or 3%. For supplying incorrect or misleading information to authorities, €7.5 million or 1%.
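The "whichever is higher" rule means the exposure scales with revenue once a company is large enough. A quick sketch of the arithmetic, using the tiers above (the revenue figures are made up for illustration):

```python
def penalty_cap(tier_fixed_eur: float, tier_pct: float,
                global_revenue_eur: float) -> float:
    """The Act's cap is the fixed amount or the revenue share, whichever is higher."""
    return max(tier_fixed_eur, tier_pct * global_revenue_eur)

# Prohibited-AI tier at €1bn revenue: 7% = €70M, which exceeds the €35M floor
print(penalty_cap(35e6, 0.07, 1e9))
# High-risk tier at €200M revenue: 3% = €6M, so the €15M floor applies instead
print(penalty_cap(15e6, 0.03, 200e6))
```

For small firms the fixed floor dominates; for large ones the percentage does.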

The AI Act is one frame among several. A regulated-enterprise AI deployment in 2026 typically sits inside all of these:

- AI Act: how the AI system itself is built and operated
- NIS2: cybersecurity and supply-chain risk
- CER: operator resilience for essential services
- DORA: ICT third-party risk in financial services

plus GDPR for any personal data the system touches.

Wavenetic's architecture is designed to satisfy all four by default: EU-built supply chain, on-premise / air-gap deployment, structured audit trail, EU data residency, and conformity-assessment documentation. See the full architecture in the on-premise pillar →

Frequently asked questions

What does the EU AI Act actually require from an AI vendor?
For high-risk AI systems (Annex III: critical infrastructure, law enforcement, employment, education, essential services), the AI Act requires conformity assessments, technical documentation, automatic event logs, human-oversight measures, accuracy and robustness testing, transparency to deployers, post-market monitoring, and CE marking. The vendor must produce most of this; the deployer is jointly responsible for some of it. Wavenetic ships the vendor-side artefacts with every deployment.
Is my use case high-risk under the EU AI Act?
AI in critical infrastructure (energy, water, transport, digital infrastructure), law enforcement, migration management, justice administration, employment / HR, education, access to essential services (banking, insurance, healthcare), and biometric systems is typically high-risk. AI used purely for spam filtering, video games, or marketing copy is generally not high-risk. If you operate in regulated infrastructure or sit under a sector-specific regulator, assume high-risk and plan for it.
Does cloud-hosted AI satisfy the EU AI Act?
Cloud-hosted AI can satisfy the AI Act in principle, but in practice it creates structural problems: the audit log surface is owned by the cloud vendor, not you; the conformity-assessment documentation describes the cloud vendor's system, not yours; the human-oversight gate is a feature you have to bolt on; and the supply-chain story (NIS2, CER) becomes much harder to defend. For high-risk systems, on-premise or sovereign-cloud is the path that actually clears procurement.
What does Wavenetic ship to support a conformity assessment?
Each deployment includes: a technical-documentation pack (system architecture, data sources, model selection rationale, training and evaluation data summary), automatic event-log specifications, human-oversight gate documentation, accuracy and robustness test reports, a risk-management plan template, and post-market monitoring guidance. The customer's compliance team uses these to compile their own conformity-assessment file.
When does the EU AI Act apply to me?
Most AI Act provisions are in force as of February 2025; high-risk-system obligations entered into force in August 2026. If you're deploying or planning to deploy high-risk AI today, you're already in scope. The penalty for non-compliance reaches €35 million or 7% of global annual revenue, whichever is higher.
How does the EU AI Act interact with NIS2, CER, and DORA?
They're complementary frames: AI Act governs how the AI system itself is built and operated; NIS2 governs cybersecurity and supply-chain risk; CER governs operator resilience for essential services; DORA governs ICT third-party risk in financial services. A regulated-enterprise AI deployment in 2026 has to satisfy all four, plus GDPR. Wavenetic's architecture is designed to satisfy all four by default.
Can I use Wavenetic on Microsoft Foundry and still meet the AI Act?
Yes. Microsoft Foundry deployments inherit Microsoft's EU compliance frame. The AI Act vendor-side documentation Wavenetic ships is the same regardless of deployment target — Foundry, on-premise, or air-gapped. The choice between deployment models is driven by your specific compliance posture (NIS2 supply-chain controls, sector-specific regulator, internal data-residency policy), not by AI Act alone.
How long until we're audit-ready?
A typical pilot reaches "audit-ready" within 30 days of deployment. The compliance pack ships from day one; the customer's team needs time to integrate it into their internal compliance process. We support that integration; we do not produce the customer's conformity-assessment file for them, because that's the operator's legal responsibility under the Act.

Ready for an audit?

Talk to us about deploying AI inside your perimeter with the AI Act paperwork already in the box.