August 2, 2026. That's when the EU AI Act's high-risk obligations go live. Less than five months away.
If you're running a high-risk AI system in the EU — or selling one into the EU market — you'll need a completed conformity assessment before that date. No assessment, no CE marking. No CE marking, no legal market access.
Here's the problem: over 60% of EU SMEs say they're not ready. Only 15 of the 45 required harmonised technical standards have been published. And most compliance guides read like they were written for a legal department with a six-figure budget.
This one's different. I'm going to walk you through the conformity assessment process step by step, with real costs and real timelines. If you're an SME founder or CTO with an AI system that might be high-risk, this is your playbook.
First: What Is a Conformity Assessment?
A conformity assessment is the formal process of proving your AI system meets the EU AI Act requirements. Think of it like a CE marking for physical products — except for AI.
For most high-risk AI systems, it's a self-assessment. You verify compliance yourself, document everything, and sign a declaration. You don't need an external auditor.
There are two exceptions where you need a third-party assessment from a notified body:
- Remote biometric identification systems
- AI systems used as safety components in critical infrastructure already covered by EU harmonisation legislation
If your system falls into those categories, you'll need an accredited notified body to assess it. For everyone else, it's an internal process.
That's good news. It means you control the timeline and the cost.
Step 1: Classify Your AI System
Before you do anything else, figure out whether your AI system is actually high-risk. If it's not, you can stop here — no conformity assessment needed.
The EU AI Act lists high-risk categories in Annex III. Here's the practical breakdown:
Biometric identification and categorisation. Remote biometric identification systems (facial recognition for identifying people in public spaces). If you're not doing biometric ID, move on.
Critical infrastructure management. AI that manages or operates critical infrastructure — energy grids, water supply, transport networks. Your internal fleet routing tool probably doesn't qualify. An AI system controlling traffic signals does.
Education and vocational training. AI that determines access to education or evaluates students. An automated exam grading system is high-risk. A study recommendation chatbot isn't.
Employment and worker management. This is where lots of SMEs get caught. AI used for recruitment (CV screening, candidate ranking), performance evaluation, task allocation based on individual behaviour, or termination decisions. If your AI touches hiring or HR decisions, pay attention.
Access to essential services. Credit scoring, insurance risk assessment, eligibility for public benefits. If your AI decides whether someone gets a loan, a policy, or a service — it's high-risk.
Law enforcement. Risk assessment tools, polygraphs, profiling. Unlikely for most SMEs, but worth listing.
Immigration and border control. Application assessment, document verification. Again, niche.
Justice and democratic processes. AI influencing legal outcomes or election processes.
The simple test: Does your AI system make or directly influence a decision that significantly affects a person's rights, opportunities, or access to services? If yes, it's probably high-risk. If your AI chatbot just answers FAQs about your product, it's not. If it scores job applicants or decides loan eligibility, it is.
Not sure? That's normal. Classification is the part where professional input saves you time. Get it wrong and you either over-invest in compliance you don't need, or under-invest and face penalties later.
Step 2: Build Your Risk Management System (Article 9)
This is the foundation. The AI Act requires a risk management system that runs through the entire lifecycle of your AI system — not a one-off document you file and forget.
Here's what it needs to contain:
Risk identification and analysis. What can go wrong? Map out the risks your AI system poses. Think about: biased outputs, incorrect decisions, data breaches, system failures, adversarial attacks. Be specific to your system. A recruitment AI has different risks than an insurance pricing model.
Risk estimation and evaluation. For each risk you've identified, assess the likelihood and severity. How often could it happen? What's the impact when it does? Use a simple risk matrix — high/medium/low for both probability and impact.
Risk management measures. For every significant risk, document what you're doing about it. Mitigation measures, controls, fallback procedures. "We'll monitor it" isn't enough. You need specific actions: bias testing schedules, human review thresholds, data quality checks.
Continuous monitoring. How will you track whether your risk measures are actually working? Set up metrics, review cycles, and trigger points for reassessment. Quarterly reviews are a sensible minimum.
What the document looks like in practice: A 15-30 page report covering the above, with a risk register (spreadsheet or table) listing each risk, its assessment, and the corresponding mitigation measure. Update it when the system changes.
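In code form, the risk register described above reduces to a table of entries, each with a likelihood × severity score. Here's a minimal Python sketch — the risks, scoring bands, and mitigations are hypothetical placeholders; adapt them to your own risk policy:

```python
from dataclasses import dataclass

# Map high/medium/low ratings to numeric scores for a simple 3x3 risk matrix.
LEVELS = {"low": 1, "medium": 2, "high": 3}

@dataclass
class Risk:
    name: str
    likelihood: str   # "low" | "medium" | "high"
    severity: str     # "low" | "medium" | "high"
    mitigation: str

    def score(self) -> int:
        # Likelihood x severity gives a score between 1 and 9.
        return LEVELS[self.likelihood] * LEVELS[self.severity]

    def rating(self) -> str:
        # Illustrative bands; set your own thresholds in your risk policy.
        s = self.score()
        if s >= 6:
            return "high"
        if s >= 3:
            return "medium"
        return "low"

# Example entries for a hypothetical recruitment AI.
register = [
    Risk("Gender bias in CV ranking", "medium", "high",
         "Quarterly bias testing; human review of all rejections"),
    Risk("Training data drift", "high", "medium",
         "Monthly distribution checks; retraining trigger on shift"),
]

# Print highest-rated risks first.
for r in sorted(register, key=Risk.score, reverse=True):
    print(f"{r.rating():6} ({r.score()}) {r.name} -> {r.mitigation}")
```

In practice the same structure works equally well as a spreadsheet — the point is that every risk carries an assessment and a named mitigation, not a vague "we'll monitor it."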
Budget for this step: €1,500-€4,000 (£1,300-£3,500) if you're working with a specialist. Less if you're doing it internally and have the right template.
Step 3: Prepare Technical Documentation (Article 11)
This is the big one. Technical documentation is the most time-consuming part of the conformity assessment. It's also where regulators will look first if there's ever an inquiry.
Here's what you need to document:
General description of the AI system. What it does, its intended purpose, who it's for, and the provider's details. Include the version number and any hardware/software dependencies.
Detailed description of the development process. How was the system designed and built? What model architecture? What training methodology? What design choices were made and why? If you're using a third-party model (like GPT-4 or Claude via API), document the model provider, version, and how you've fine-tuned or configured it.
Information about monitoring, functioning, and control. How does the system operate in production? What are the input/output specifications? What logging and monitoring is in place? What human oversight mechanisms exist?
Risk management documentation. Cross-reference your risk management system from Step 2.
Description of data and data governance. Training data: what datasets were used, how were they collected, what preprocessing was applied, how was data quality ensured, what steps were taken to identify and address bias. If you're using a pre-trained model, you still need to document the data you used for fine-tuning or the data flowing through the system.
Testing and validation procedures. What metrics were used to evaluate performance? What test datasets were used? What were the results? Include accuracy, precision, recall, fairness metrics — whatever's appropriate for your system.
Post-market monitoring plan. How will you continue monitoring the system after deployment? What data will you collect? How will you detect performance degradation or emerging risks?
This documentation needs to be maintained and kept current. It's not a one-time writing exercise.
Budget for this step: €2,000-€6,000 (£1,700-£5,200) with a specialist. This is the costliest single step because it touches every aspect of the system.
Step 4: Testing and Validation (Article 15)
You need to test your AI system and document the results. Article 15 covers accuracy, robustness, and cybersecurity; bias and fairness testing belongs alongside them:
Accuracy. Does the system do what it claims? Define performance metrics appropriate to your use case and test against them. For a classification system, that's precision, recall, and F1 scores. For a generative system, it might be factual accuracy rates and hallucination benchmarks.
Robustness. Does the system hold up when inputs are unexpected, noisy, or adversarial? Test with edge cases. What happens when someone feeds it garbage data? What happens when the input distribution shifts from what the model was trained on?
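One concrete way to check for that kind of distribution shift is the population stability index (PSI), comparing binned input distributions at training time against production. A minimal sketch — the bin proportions are made up, and the 0.1/0.25 thresholds are conventional rules of thumb, not AI Act requirements:

```python
import math

def psi(expected: list[float], observed: list[float]) -> float:
    """Population Stability Index between two binned distributions.

    Both inputs are bin proportions that each sum to 1. Common rule of
    thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 significant.
    """
    eps = 1e-6  # avoid log(0) when a bin is empty
    return sum(
        (o - e) * math.log((o + eps) / (e + eps))
        for e, o in zip(expected, observed)
    )

# Training-time vs. live bin proportions for one input feature.
train = [0.25, 0.50, 0.25]
live = [0.10, 0.45, 0.45]
print(f"PSI = {psi(train, live):.3f}")
```

Run this per feature on a schedule, log the results, and you have evidence for both the robustness testing here and the post-market monitoring plan later.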
Cybersecurity. Is the system secure against attacks? This includes data poisoning (corrupted training data), model inversion (extracting training data from the model), adversarial examples (inputs designed to trick the model), and prompt injection (for LLM-based systems).
Bias and fairness. Test for discriminatory outcomes across protected characteristics — gender, race, age, disability. If your recruitment AI recommends male candidates at a higher rate, that's a problem you need to catch before a regulator does.
Document everything. Test results, methodologies, datasets used, pass/fail criteria, and any remedial actions taken.
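As a starting point, the classification metrics and a simple group-fairness check need nothing beyond the standard library. A sketch with made-up predictions — note the four-fifths ratio is a US employment-law heuristic, not an AI Act threshold, so treat it as one signal rather than pass/fail:

```python
def precision_recall_f1(y_true, y_pred):
    """Standard classification metrics from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def selection_rate_ratio(group_a, group_b):
    """Ratio of positive-outcome rates between two groups (closer to 1 is more even)."""
    rate_a = sum(group_a) / len(group_a)
    rate_b = sum(group_b) / len(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical shortlisting decisions from a recruitment screener (1 = shortlisted).
men = [1, 1, 0, 1, 1, 0, 1, 0]    # 5/8 shortlisted
women = [1, 0, 0, 1, 0, 0, 1, 0]  # 3/8 shortlisted

p, r, f1 = precision_recall_f1([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
print(f"selection rate ratio: {selection_rate_ratio(men, women):.2f}")
```

Whatever metrics you choose, record the test dataset, the date, and the numbers — a metric without its evaluation context won't satisfy a regulator.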
Budget: €1,500-€4,000 (£1,300-£3,500). More if your system is complex or you need external penetration testing.
Step 5: Quality Management System (Article 17)
The AI Act requires a quality management system (QMS) — a set of policies and procedures that ensure ongoing compliance. This isn't a separate software system. It's documented processes.
Your QMS should cover:
Design and development quality assurance. How do you ensure quality when building and updating the AI system? Code review processes, testing protocols, change management procedures.
Post-market monitoring. Systematic approach to collecting and analysing data about the system's performance after deployment. Tied to your post-market monitoring plan from the technical documentation.
Incident reporting. Procedures for identifying, documenting, and reporting serious incidents to the relevant market surveillance authority. What counts as a serious incident? Anything that leads to death or serious harm to a person's health, serious damage to property or the environment, or a serious breach of fundamental rights.
Record keeping. How you store and maintain all compliance documentation. Retention periods, access controls, version management.
Resource management. Who's responsible for compliance? What training have they received? You need named individuals, not vague references to "the team."
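An incident log can live in code or a spreadsheet; what matters is that every entry records what happened, when, whether it meets the "serious" bar, and the reporting deadline. A hedged Python sketch — the 15-day window below reflects the default in Article 73, but shorter deadlines apply in some cases, so verify against the text before relying on it:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

# Illustrative default window; check Article 73 for the binding deadlines.
DEFAULT_WINDOW_DAYS = 15

@dataclass
class Incident:
    summary: str
    occurred: date
    serious: bool            # death, serious harm, or breach of fundamental rights
    reported_to_authority: bool = False

    def report_deadline(self) -> Optional[date]:
        # Only serious incidents trigger mandatory reporting.
        if not self.serious:
            return None
        return self.occurred + timedelta(days=DEFAULT_WINDOW_DAYS)

# Example log entries for a hypothetical lending AI.
log = [
    Incident("Model outage, no user impact", date(2026, 3, 1), serious=False),
    Incident("Loan denials traced to corrupted feature pipeline",
             date(2026, 3, 3), serious=True),
]

for i in log:
    d = i.report_deadline()
    print(i.summary, "-> report by", d if d else "n/a")
```

Pair this with the named individual from your resource management section, so responsibility for each open incident is unambiguous.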
Budget: €1,000-€3,000 (£850-£2,600). Smaller if you already have ISO 9001 or similar frameworks — you can adapt rather than build from scratch.
Step 6: EU Declaration of Conformity and CE Marking
This is the finish line. Once you've completed steps 1-5, you:
- Draw up the EU Declaration of Conformity — a formal document stating your AI system complies with the relevant provisions of the EU AI Act. It must include: your name and address, the AI system identification, a statement that the system complies, references to the standards used, date and signature.
- Apply the CE marking — the CE mark goes on the AI system itself, or if that's not possible, on the packaging or accompanying documentation. For software-only AI systems, it goes in the documentation and (where applicable) the user interface.
- Register in the EU database — high-risk AI systems must be registered in the EU database (managed by the European Commission) before being placed on the market.
Keep the Declaration of Conformity and all supporting documentation for at least 10 years after the AI system is placed on the market. Regulators can request it at any time.
What It All Costs
Here's the honest breakdown for an SME with a single high-risk AI system:
| Approach | Cost (EUR) | Cost (GBP) |
|---|---|---|
| DIY with legal review | €5,000-€12,000 | £4,300-£10,300 |
| With specialist support | €8,000-€20,000 | £7,000-£17,000 |
| Third-party notified body | €15,000-€50,000+ | £13,000-£43,000+ |
A few things to note:
SME fee reductions. The AI Act's SME support measures (Article 62) require that fees charged by notified bodies for conformity assessment are proportionate and reduced for SMEs, including start-ups. The exact reductions will vary by notified body, but this is written into the law.
The combined approach saves money. If you work with someone who handles both the AI build and the compliance documentation, you cut the cost by 30-50% compared to hiring a developer and a compliance consultant separately. The person who built the system already understands how it works — they don't need weeks of discovery.
Ongoing costs. The conformity assessment isn't a one-and-done project. You'll need to maintain documentation, run periodic reviews, and update your risk management system when the AI changes. Budget around €2,000-€5,000 (£1,700-£4,300) per year for ongoing compliance maintenance.
The Timeline: What to Do and When
Working backwards from August 2, 2026:
| When | What |
|---|---|
| Now (March 2026) | Classify your AI system. Determine if it's high-risk. |
| April 2026 | Start risk management system and technical documentation. |
| May 2026 | Complete testing and validation. Begin QMS setup. |
| June 2026 | Finalise all documentation. Internal review. |
| July 2026 | Declaration of Conformity. CE marking. EU database registration. |
| August 2, 2026 | Deadline. System is compliant and legally operational. |
That's a tight but doable schedule. If you've already been operating responsibly — monitoring your AI, testing for bias, documenting how it works — you've got a head start. Most of this is formalising what good practice already looks like.
If you haven't started at all, don't panic. But don't wait until June either. The documentation alone takes 2-4 weeks, and that's if you know what you're doing.
What Happens If You Miss the Deadline?
Two things:
Legal consequences. You can't legally place a high-risk AI system on the EU market or put it into service without a completed conformity assessment. Fines for breaching the high-risk obligations run up to €15 million or 3% of global annual turnover, whichever is higher (the headline €35 million / 7% ceiling is reserved for prohibited AI practices). For SMEs, penalties should be proportionate — but proportionate to a regulator is still a number that hurts.
Commercial consequences. This might matter more in practice. Enterprise clients and public sector buyers will start requiring proof of conformity in procurement processes. If you can't show your CE marking and Declaration of Conformity, you won't make the shortlist. Your competitors who did the work will.
Getting It Done
The conformity assessment process is detailed, but it's not mysterious. It's paperwork backed by genuine technical rigour. If your AI system actually works properly and you can prove it, you're most of the way there.
The hard part is knowing what to document, how to structure it, and where the regulators will focus their attention. That's where specialist support earns its keep.
We build AI systems and handle the compliance documentation as a single service. One team, one process. The person who architects your system is the same person who writes the technical documentation — because they already understand how it works.
If you've got a high-risk AI system and the August deadline is keeping you up at night, get in touch. We'll start with a classification check — figure out if you actually need a conformity assessment at all — and take it from there.
Related Reading
- EU AI Act Compliance for SMEs: What You Actually Need to Do Before August 2026 — the broader compliance picture
- Do I Need a DPIA for My AI System? — data protection impact assessments alongside AI Act compliance
- AI for Small Business: Practical Guide 2026 — what AI can actually do for your business
Need help with your conformity assessment? View our services or contact us directly. We'll tell you where you stand and what it'll take to get compliant.
Frequently Asked Questions
What is an EU AI Act conformity assessment?
A conformity assessment is the process of verifying that your AI system meets the requirements of the EU AI Act before it can be placed on the market or put into service in the EU. For most high-risk AI systems, this is a self-assessment (internal conformity assessment) where you verify compliance against the requirements of Articles 8-15 and document it. Some categories (biometric identification, critical infrastructure) require third-party assessment by a notified body.
How much does a conformity assessment cost for an SME?
For an internal conformity assessment (most high-risk systems), expect to spend €8,000-€20,000 (£7,000-£17,000) on the full process including risk management documentation, technical documentation, testing, and quality management system setup. Third-party assessments by notified bodies cost €15,000-€50,000+. SMEs get reduced fees for notified body assessments under Article 84. Working with a specialist who handles both the AI build and compliance cuts the cost by 30-50% compared to hiring separate firms.
Do I need a conformity assessment if my AI system is not high-risk?
No. Only high-risk AI systems (Annex III) require conformity assessment. Limited-risk systems just need transparency obligations (tell users they're interacting with AI). Minimal-risk systems have no mandatory requirements. The first step is classifying your system — if it's not high-risk, you can skip the conformity assessment entirely, though voluntary compliance is encouraged.
How long does a conformity assessment take?
For an SME with a single AI system, allow 6-12 weeks for the full internal conformity assessment process. This includes risk classification (1-2 weeks), risk management system setup (2-3 weeks), technical documentation (2-4 weeks), testing and validation (1-2 weeks), and quality management system (1-2 weeks). Starting now gives you comfortable time before the August 2026 deadline. Leaving it until June is risky.
What happens if I don't complete the conformity assessment by August 2026?
You cannot legally place a high-risk AI system on the EU market or put it into service without a completed conformity assessment and CE marking. Fines for breaching the high-risk obligations are up to €15 million or 3% of global annual turnover, whichever is higher. For SMEs, the fines are proportionate but still significant. More practically, enterprise clients and public sector buyers will require proof of conformity before procuring your AI system.