What Are the GDPR Implications of Using AI in the UK?
1 April 2026
If your AI system touches personal data in any way, UK GDPR applies. The Data (Use and Access) Act 2025, which came into force in June 2025, updated some automated decision-making rules, but the core obligations remain. You need a lawful basis, you must be transparent about how AI uses personal data, and you carry liability for data protection failures even when using third-party AI tools.
The Legal Landscape in 2026
The UK's data protection framework for AI sits across two main pieces of legislation: the UK GDPR (retained from the EU after Brexit) and the Data (Use and Access) Act 2025 (DUAA), which became law on 19 June 2025.
The DUAA made the biggest change to UK data protection since leaving the EU. It relaxed some restrictions on automated decision-making under Article 22, giving businesses more freedom to use AI for decisions that affect individuals. However, it also gave the ICO (Information Commissioner's Office) stronger enforcement powers and required regulators across all sectors to publish plans for enabling safe AI innovation.
In January 2026, the government wrote to 19 regulators asking them to publish those plans and report annually on progress. The regulatory direction is clear: the UK wants businesses to adopt AI, but it expects them to do so responsibly.
Meanwhile, the EU's AI Act began to apply in stages from February 2025, with most of its obligations taking effect from August 2026. If your business serves EU customers or processes EU residents' data, you need to consider both UK GDPR and the EU AI Act.
Lawful Basis: The Foundation You Cannot Skip
Every AI system that processes personal data needs a lawful basis under UK GDPR. The six options are: consent, contract, legal obligation, vital interests, public task, and legitimate interests.
For most commercial AI applications, legitimate interests is the most practical basis. You are saying: "We have a legitimate business reason for processing this data, we have checked it does not override the individual's rights, and we can demonstrate this." This requires a documented Legitimate Interests Assessment (LIA).
Consent is another option but is harder to maintain. Consent must be freely given, specific, informed, and unambiguous. It must also be easy to withdraw. If your AI system's functionality depends on processing personal data, asking for consent creates a situation where withdrawing consent breaks the service, which arguably makes the consent not freely given.
Critical point: using a third-party AI tool (such as ChatGPT, Claude, or a SaaS platform with AI features) does not remove your obligation to have a lawful basis. You are the data controller. The AI provider is a processor. The responsibility sits with you.
Data Protection Impact Assessments for AI
A DPIA is mandatory when your AI processing is "likely to result in a high risk to the rights and freedoms of individuals." In practice, most AI systems that process personal data at any scale will trigger this requirement.
The ICO specifically flags these scenarios as requiring a DPIA:
- Automated decision-making that produces legal or similarly significant effects
- Large-scale processing of sensitive personal data
- Systematic monitoring of publicly accessible areas
- Innovative use of new technologies (a category AI almost always falls into)
Your DPIA should document:
- What personal data the AI processes and why
- The lawful basis for processing
- How you minimise the data used (data minimisation principle)
- Risks to individuals and how you mitigate them
- Whether you need to consult the ICO before proceeding
A DPIA is not a one-off exercise. If you change what data the AI processes, how it processes it, or what decisions it informs, you need to update the assessment.
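One practical way to keep a DPIA "living" is to hold it as structured data with a version and review date, so any change to the processing forces an update. The sketch below is purely illustrative: the field names and change-log mechanism are our own, not an ICO template.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical DPIA record; fields mirror the documentation points
# listed above, but names and structure are illustrative assumptions.
@dataclass
class DPIARecord:
    system_name: str
    personal_data: list[str]            # what the AI processes
    purpose: str                        # why it is processed
    lawful_basis: str                   # e.g. "legitimate interests"
    minimisation_notes: str             # how data use is minimised
    risks_and_mitigations: dict[str, str]
    ico_consultation_needed: bool
    last_reviewed: date
    version: int = 1

def record_change(dpia: DPIARecord, change_note: str, today: date) -> DPIARecord:
    """A DPIA is not one-off: any change to the data processed, how it is
    processed, or the decisions it informs bumps the version and review date."""
    log = dpia.risks_and_mitigations.get("change_log", "")
    dpia.risks_and_mitigations["change_log"] = log + f"{today}: {change_note}\n"
    dpia.version += 1
    dpia.last_reviewed = today
    return dpia
```

Keeping the record in version control alongside the system it describes makes it easy to show the ICO that the assessment was actually maintained, not written once and filed away.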
Automated Decision-Making After the DUAA
The original UK GDPR Article 22 gave individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects. The DUAA 2025 changed this.
Under the updated rules, businesses can now use automated decision-making more broadly, but individuals retain the right to:
- Be informed that automated decision-making is being used
- Request meaningful information about the logic involved
- Challenge the decision and request human review
What this means in practice: if your AI system automatically rejects a loan application, flags an employee for performance review, or decides which customers get a discount, you must be able to explain how that decision was made and provide a route for human review.
The ICO has stated it will monitor how businesses use this expanded freedom. Companies that abuse it by making significant automated decisions without proper safeguards should expect enforcement action.
Third-Party AI Tools and Data Processing Agreements
Using a cloud-based AI tool means sending data to a third party. Under UK GDPR, you need a Data Processing Agreement (DPA) with every AI provider that processes personal data on your behalf.
Key questions to ask your AI provider:
- Where is the data processed? If outside the UK, you need adequate safeguards (Standard Contractual Clauses or equivalent). The EU renewed its UK adequacy decision in early 2026, but transfers to the US, for example, require additional protections.
- Is the data used to train the model? Some AI providers use customer data for model training by default. If your data includes personal information, this creates a secondary processing purpose you need to account for. Many providers offer opt-out mechanisms, but you need to actively enable them.
- How long is the data retained? Prompts and responses may be logged for quality assurance or abuse prevention. Understand the retention period and ensure it aligns with your data minimisation obligations.
- Can you delete data on request? If an individual exercises their right to erasure, can your AI provider actually purge their data from all systems, including training datasets?
For businesses handling sensitive data, consider running AI models locally or in a private cloud. This removes the third-party data transfer issue entirely.
Practical Steps to Get Compliant
Here is a straightforward compliance checklist for UK businesses using AI:
- Audit your AI tools. List every AI system in use across your business, including tools individual employees may have adopted without formal approval. Shadow AI is a real GDPR risk.
- Map personal data flows. For each AI tool, document what personal data goes in, where it goes, who processes it, and what comes out.
- Establish lawful bases. Document the lawful basis for each AI processing activity. Legitimate interests with a documented LIA is the most common route.
- Conduct DPIAs. For any AI system that processes personal data at scale or makes decisions about individuals, complete a Data Protection Impact Assessment.
- Review DPAs. Ensure you have a signed Data Processing Agreement with every third-party AI provider. Check data location, retention, training opt-out, and deletion capabilities.
- Update privacy notices. Your privacy policy must explain how you use AI to process personal data, in plain language that your customers and employees can understand.
- Build a challenge route. Create a clear process for individuals to query and challenge AI-driven decisions that affect them.
- Train your team. Ensure everyone who uses AI tools understands what personal data they can and cannot input. A single employee pasting customer emails into ChatGPT without a DPA in place is a compliance breach.
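The first two checklist steps (audit your tools, map data flows) can be sketched as a simple register that flags the obvious gaps. This is a minimal illustration under our own assumptions; the tool names, fields, and gap rules are hypothetical, not a compliance standard.

```python
from dataclasses import dataclass

# Hypothetical AI tool register entry; fields are illustrative.
@dataclass
class AIToolEntry:
    tool: str
    approved: bool                  # formally approved, or shadow AI?
    personal_data_in: list[str]     # what personal data goes in
    processor_location: str         # where the data is processed
    dpa_signed: bool                # Data Processing Agreement in place?

def compliance_gaps(register: list[AIToolEntry]) -> list[str]:
    """Flag entries needing attention: unapproved (shadow) tools, and
    tools receiving personal data without a signed DPA."""
    gaps = []
    for entry in register:
        if not entry.approved:
            gaps.append(f"{entry.tool}: shadow AI - needs formal review")
        if entry.personal_data_in and not entry.dpa_signed:
            gaps.append(f"{entry.tool}: personal data sent without a DPA")
    return gaps
```

Even a spreadsheet with these columns achieves the same thing; the point is that the audit produces a record you can act on, not just a one-time survey.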
When This Is NOT Right for You
If your business only uses AI for tasks that involve no personal data whatsoever, such as generating marketing copy from product specifications, analysing publicly available market data, or writing code, your GDPR obligations related to AI are minimal.
However, be honest with yourself about whether personal data truly is absent. An AI that summarises meeting notes, drafts emails, or analyses customer feedback is almost certainly processing personal data, even if that was not your primary intention.
If you are unsure whether your AI use involves personal data, assume it does and take the compliance steps above. The cost of a DPIA and proper documentation is negligible compared to an ICO investigation.
Is This Right For You?
This information is relevant if you are a UK business using or planning to use AI tools that process any personal data, including customer names, email addresses, employee records, behavioural data, or any information that could identify an individual.
If your AI system only processes fully anonymised, non-personal data (such as aggregated market statistics or publicly available technical datasets), GDPR obligations are significantly reduced, though you should still verify that your anonymisation is genuine and irreversible.
Frequently Asked Questions
Do I need to tell customers I am using AI?
Yes, if the AI processes their personal data. Your privacy notice must explain how AI is used, what data it processes, and what decisions it informs. The ICO expects this to be written in plain language, not legal jargon.
Can I use ChatGPT or Claude with customer data?
Only if you have a Data Processing Agreement in place, a documented lawful basis for processing, and confidence that the data handling meets UK GDPR requirements. Both OpenAI and Anthropic offer enterprise plans with DPAs and data opt-out options.
What happens if my AI system breaches GDPR?
The ICO can issue fines of up to £17.5 million or 4% of annual global turnover, whichever is higher. Beyond fines, a data breach involving AI can severely damage customer trust and business reputation.
Does the EU AI Act apply to UK businesses?
If you serve EU customers or process EU residents' data, elements of the EU AI Act may apply to you. The Act classifies AI systems by risk level and imposes different requirements for each, with most of its requirements applying from August 2026.