Yes, you almost certainly need a Data Protection Impact Assessment (DPIA) if your AI system processes personal data. The ICO fined MediaLab.AI £247,590 in February 2026 partly for failing to conduct one. Don't make the same mistake.
When Is a DPIA Required?
Under GDPR Article 35, a DPIA is mandatory when processing is "likely to result in a high risk to the rights and freedoms of natural persons." The regulation specifically calls out:
- New technologies — AI qualifies
- Automated decision-making with legal or significant effects
- Large-scale processing of personal data
- Systematic monitoring of publicly accessible areas
If your AI system does any of these, you need a DPIA. Most AI chatbots, recommendation engines, and automation tools tick at least one box.
What a DPIA for an AI System Should Cover
A standard DPIA template won't cut it for AI. You need to address:
- What personal data does the AI process? — inputs, outputs, training data
- Where does the data go? — if you're using an LLM API (OpenAI, Anthropic, etc.), data crosses borders
- What decisions does the AI make? — and can humans override them?
- Is there a lawful basis? — legitimate interest, consent, or contractual necessity?
- Data minimisation — are you sending more data to the AI than necessary?
- Retention — how long are conversations/interactions stored?
- Third-party risks — does your LLM provider have a GDPR-compliant DPA?
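The data minimisation point above is the one most easily operationalised in code. As an illustrative sketch only (the pattern names and coverage are our assumptions, not a complete PII filter; production redaction needs a dedicated PII-detection tool reviewed against your own data), you can strip obvious identifiers before a prompt leaves your infrastructure for a third-party LLM API:

```python
import re

# Hypothetical minimal redaction pass. These two patterns are
# examples, not an exhaustive list of personal data.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "uk_phone": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholders before the
    text is sent to a cloud LLM provider."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(redact("Call Jo on 07911 123456 or email jo@example.com"))
# "Call Jo on [UK_PHONE] or email [EMAIL]"
```

Even a crude pass like this gives your DPIA something concrete to point at under "data minimisation": what is removed, where, and before which transfer.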
The LLM Provider Question
This is where most businesses get caught out. If you're using ChatGPT, Claude, or any cloud-based LLM:
- You need a Data Processing Agreement (DPA) with the provider
- Anthropic's DPA is governed by Irish law with SCCs included
- OpenAI's DPA uses an Irish entity with a UK Addendum available
- Data transfers to the US require appropriate safeguards (SCCs with the UK Addendum, an IDTA, or an adequacy route such as the UK-US Data Bridge)
Your DPIA must document this transfer mechanism and assess whether the safeguards are adequate.
What Happens If You Skip the DPIA
The ICO can fine you up to £8.7 million or 2% of global annual turnover, whichever is higher, for failing to carry out a required DPIA. More importantly, without a DPIA you don't actually know whether your AI system is lawful — you're flying blind.
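Under the UK GDPR this "standard maximum" penalty is the higher of the two figures, which is worth spelling out because 2% of turnover quickly overtakes the fixed cap. A quick arithmetic sketch (the function name is ours):

```python
def max_standard_fine(global_turnover_gbp: float) -> float:
    """UK GDPR 'standard maximum' penalty ceiling: the higher of
    £8.7m or 2% of total worldwide annual turnover."""
    return max(8_700_000, 0.02 * global_turnover_gbp)

print(max_standard_fine(100_000_000))    # £8.7m floor applies
print(max_standard_fine(1_000_000_000))  # 2% of £1bn = £20m
```

For any business turning over more than £435m, the percentage figure is the binding one.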
How to Get Started
- Download the ICO's DPIA template as a starting point
- Map every data flow in your AI system
- Identify which data is personal data (it's usually more than you think)
- Document your lawful basis for each processing activity
- Assess and mitigate risks
- Review the DPIA before launch, and again if the system changes
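The "map every data flow" step benefits from a structured register rather than freeform notes. A minimal sketch, assuming a simple in-house record (the field names are our invention; adapt them to the ICO template you're working from):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical DPIA data-flow register entry. One record per flow
# of personal data through the AI system.
@dataclass
class DataFlow:
    source: str                 # where the data originates
    destination: str            # where it goes, incl. third parties
    data_categories: str        # what personal data is involved
    lawful_basis: str           # Article 6 basis for this processing
    retention: str              # how long it is kept
    transfer_mechanism: Optional[str] = None  # SCCs, adequacy, etc.

flows = [
    DataFlow(
        source="website chat widget",
        destination="cloud LLM API (US)",
        data_categories="names, free-text messages",
        lawful_basis="legitimate interest",
        retention="30 days",
        transfer_mechanism="SCCs + UK Addendum",
    ),
]

# A register like this makes gaps mechanical to spot:
for f in flows:
    if "(US)" in f.destination and not f.transfer_mechanism:
        print(f"Flow to {f.destination} lacks a transfer safeguard")
```

Every field maps directly onto a section of the DPIA, so the register doubles as evidence that the assessment was actually done.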
If this sounds like more than you can handle internally, that's normal. We build AI systems with the DPIA done as part of the build — not as an afterthought. Contact us if you need help.
Frequently Asked Questions
Is a DPIA mandatory for AI chatbots?
If your AI chatbot processes personal data — which most do — a DPIA is likely required under GDPR Article 35. This applies whenever processing is likely to result in a high risk to individuals, which includes automated decision-making, large-scale data processing, and new technologies like AI.
How long does a DPIA take?
A DPIA for a straightforward AI chatbot typically takes 1-2 weeks. Complex systems with multiple data flows, third-party integrations, or special category data may take 3-4 weeks. The ICO fined MediaLab.AI £247,590 partly for failing to conduct one — the cost of not doing it is far higher than doing it.
Can I do a DPIA myself or do I need a consultant?
You can do a DPIA yourself using the ICO's template. However, AI systems have specific complexities — model training data, automated decisions, cross-border transfers to LLM providers — that most generic templates don't cover. A specialist who understands both AI and data protection will produce a more robust assessment.
Need help with this?
We build compliant AI systems and handle the documentation. Tell us what you need.
Get in Touch