What are the red flags I should look for in an AI agency contract?

21 April 2026

Most problematic AI agency contracts hide their risks in five areas: intellectual property ownership, data usage rights, liability limits, exit clauses, and deliverable definitions. Before you sign anything, these are the clauses you need to read twice - or get a solicitor to read for you. The consequences of getting this wrong range from losing ownership of tools your company paid to build, to having your confidential business data used to train AI models sold to your competitors.

Why AI Agency Contracts Are Uniquely Risky

The ONS reported that UK business AI adoption jumped from 9% in 2023 to an estimated 22% in 2024. With more businesses bringing in outside agencies to help, there has been an explosion of AI service contracts - and most buyers have no idea what they are signing.

A standard IT services contract from five years ago does not cover the specific risks of AI work: who owns a machine learning model your money trained? Can the agency use your customer data to improve their tools? What happens to your systems if you leave?

The legal framework is also catching up fast. The Data (Use and Access) Act 2025 tightened rules on automated decision-making and international data transfers. The Digital Markets, Competition and Consumers Act 2024 gives the CMA direct fining powers of up to 10% of global turnover for unfair commercial practices. An AI agency contract that was fine to sign in 2022 may now expose you to regulatory risk.

Here are the ten red flags to look for - in order of severity.

Red Flag 1: Vague or Missing IP Ownership Clauses

This is the most financially damaging red flag, and the most common one we see.

Under UK copyright law, AI-generated works are not automatically protected unless a human can be identified as the author. Kennedys Law noted in 2025 that 'IP ownership terms for AI outputs must be clearly defined by contract, as default legal protection is uncertain.'

What this means in practice: if a contract simply says 'the agency builds you an AI system,' you may not legally own that system once it is delivered. The agency could resell the same underlying model - or a lightly modified version of it - to every client in your industry.

What a good contract says: all AI outputs, trained models, code, workflows, and documentation produced during your engagement are assigned to you in full on payment. The agency retains no licence to reuse your trained models or your data for other clients.

What a bad contract says: 'the agency grants the client a non-exclusive licence to use deliverables.' That phrase - non-exclusive licence - means they own it and are renting it to you. Run.

Red Flag 2: Broad Data Usage Rights

Read this clause extremely carefully: 'the client grants the agency a licence to use client data for service improvement purposes.'

That sentence, which appears in many AI agency contracts, can mean the agency is legally entitled to feed your customer data, your pricing data, your sales pipeline, and your internal documents into their AI models - which are then sold to other clients, including your competitors.

The Data (Use and Access) Act 2025 and UK GDPR place strict limits on how personal data can be used, but they do not cover non-personal commercial data. Your proprietary business intelligence is not protected by data protection law. Only your contract protects it.

A clean contract limits the agency's data usage rights to: (1) delivering the services you have paid for, and (2) nothing else. It should explicitly prohibit using your data for model training, benchmarking, product development, or any purpose that benefits the agency or third parties.

Also check: where is your data stored? An agency using US-based AI infrastructure (AWS, Azure, Google Cloud) means your data transfers internationally. Post-Brexit, this triggers specific compliance obligations under UK GDPR.

Red Flag 3: No Measurable SLAs or Performance Commitments

AI consultancy is easy to sell with promises. 'We will transform your operations.' 'Expect significant efficiency gains.' '10x your output.' These phrases are meaningless unless they are tied to contractual commitments.

A contract without Service Level Agreements (SLAs) gives you no legal recourse if the work fails to deliver. You paid, they delivered something, and there is nothing in writing to say it needed to work.

At minimum, a good AI agency contract should specify measurable performance commitments: delivery deadlines for each phase, response and resolution times for defects, accuracy or quality thresholds for AI outputs, and the remedies (such as service credits or fee reductions) that apply if targets are missed.

For a typical AI implementation engagement costing between £15,000 and £100,000, you need contractual teeth. If the agency pushes back on putting SLAs in writing, that is telling you something important about their confidence in the work.

Red Flag 4: Liability Caps That Do Not Match Contract Value

Every professional services contract limits the supplier's liability. That is normal and reasonable. The red flag is a liability cap set so low it offers no real protection.

Common bad practice: a £50,000 AI implementation contract with a liability cap of £5,000 (or sometimes just 'one month's fees'). If the work causes regulatory penalties, data breaches, or business disruption, you have almost no financial recourse.

What to push for: liability capped at the total contract value as a minimum, with carve-outs for unlimited liability in cases of wilful misconduct, fraud, data breaches involving personal data (where ICO fines can reach £17.5 million or 4% of global turnover under UK GDPR), and IP infringement.

Also check whether the agency has Professional Indemnity (PI) insurance and Cyber Liability insurance, and at what level. A serious AI agency working on business-critical systems should carry at minimum £1 million PI cover. Ask for their certificate of insurance before signing.

Red Flag 5: Lock-In Without an Earned Exit

Some AI agency contracts make it extremely difficult to leave - either through long notice periods, high termination fees, or by engineering technical dependency that makes switching costly regardless of what the contract says.

Specific clauses to check: the notice period required to terminate (anything much beyond 90 days needs justifying), any early termination fees, automatic renewal terms, and whether you receive the source code, model files, credentials, and documentation you would need to move the work to another provider.

Red Flag 6: Subcontractor Clauses You Have Not Read

Many AI agencies do not build everything in-house. They use subcontractors, freelancers, and third-party AI platforms. That is not inherently a problem - but you need to know who has access to your data and what their obligations are.

A contract that lets the agency 'engage subcontractors as it sees fit' without notifying you means your confidential business data could be accessed by people or platforms you have never heard of, with no direct contractual relationship or accountability to you.

What good looks like: the contract lists approved subcontractors, requires notification before adding new ones, and makes the agency fully responsible for their subcontractors' compliance with all contract obligations including data protection and confidentiality.

This is especially important under UK GDPR, where you as a data controller remain responsible for how your data is processed, even by third parties your agency engages on your behalf. The ICO has made clear that 'I did not know about the subcontractor' is not a valid defence in a breach investigation.

Red Flag 7: Vague Deliverables and Scope Definitions

The most common source of AI project disputes is a mismatch between what the client thought they were buying and what the contract actually obligated the agency to deliver.

'AI strategy and implementation support' is not a deliverable. 'A trained chatbot integrated with your CRM, tested against an agreed set of 200 test cases, achieving at least 85% accurate responses, delivered by 30 June 2026' is a deliverable.

Before signing, every line item in the contract's scope of work should pass this test: could a neutral third party read this and know whether the work is complete? If the answer is no, push for more specificity.

Also watch for scope creep provisions: what happens when you ask for something slightly beyond the original spec? Some contracts allow the agency to charge for any work not explicitly listed, with no cap on what that additional work might cost. A well-drafted change control process - where scope changes are agreed in writing with a cost estimate before work starts - protects both parties.

Red Flag 8: No Transparency About AI Tools and Methods

If an agency is building AI systems using third-party AI platforms (OpenAI, Anthropic, Google Gemini, AWS Bedrock, and so on), their contract should say so explicitly. You need to know because your data passes through those platforms, their terms govern how it can be used (including whether it feeds model training), and changes to their pricing, availability, or policies directly affect the service you are paying for.

A transparent AI agency contract will list the key platforms and tools being used to deliver your services, confirm that any platforms have been configured to disable training on your data where applicable, and address what happens if the agency needs to switch platforms mid-engagement.

Refusing to disclose which AI platforms they use is a serious red flag. You are paying for a service that depends entirely on those platforms - you have every right to know what they are.

Red Flag 9: No Clear Confidentiality Obligations

AI agencies see a huge amount of sensitive business information: your processes, your customer data patterns, your competitive positioning, your pricing strategies. A weak confidentiality clause exposes all of it.

Watch for time-limited confidentiality obligations: some contracts only protect your information for two or three years. Trade secrets and competitive intelligence have no natural expiry date. Confidentiality obligations should survive termination of the contract indefinitely for genuinely sensitive business information.

Also check: does the confidentiality clause cover the agency's employees and subcontractors, not just the agency itself? A clause that binds only the legal entity but not the people who actually do the work is materially weaker than it appears.

Red Flag 10: No Post-Project Support or Knowledge Transfer Plan

Many AI projects are sold with a 'handover' phase that turns out to be a brief document and a 30-minute call. If the AI systems built for you require ongoing maintenance, updates, or specialist knowledge to operate, and that knowledge stays with the agency, you are creating permanent dependency.

A good contract specifies: what documentation will be delivered (technical specs, data dictionaries, user guides), what training will be provided to your team, what source code or model files will be handed over, and what the ongoing support arrangement looks like after the initial engagement ends.

If the agency is not willing to document their work in a way that would allow a competent third party to support it, that is a dependency trap. Intentional or not, it ensures you will keep paying them.

When This Does NOT Apply to You

If you are buying off-the-shelf AI software on standard subscription terms, this checklist is largely irrelevant. SaaS contracts are take-it-or-leave-it, and the relevant risks are governed by the provider's standard terms.

If your AI engagement is a small, low-stakes pilot (under £5,000, no sensitive data, no critical business processes), the time investment of a full contract review may not be proportionate. A clear statement of work and a basic non-disclosure agreement may be sufficient.

If you are working with an agency on a purely advisory basis (strategy advice, workshop facilitation, no actual implementation or data access), IP ownership and data rights are much lower-risk concerns.

The full checklist above is most critical for: custom AI development, model training on your data, AI system integration with your operations, and ongoing AI-managed services where the agency has sustained access to your systems and data.

Is This Right For You?

This checklist is most relevant if you are a UK business that is negotiating with an AI agency, AI consulting firm, or software development partner that will build, implement, or manage AI systems on your behalf.

If you are simply subscribing to off-the-shelf AI tools (like ChatGPT, Copilot, or HubSpot AI features), most of these concerns do not apply - those are governed by platform terms, not bespoke contracts.

If your engagement involves bespoke AI development, custom model training, data integration, or ongoing AI strategy work, every single item in this checklist deserves your attention before you sign.

Frequently Asked Questions

Should I get a solicitor to review an AI agency contract?

For any engagement over £10,000 or involving access to personal data or sensitive business information, yes - it is worth investing in at least an hour of a technology solicitor's time (typically £150 to £350 per hour for a specialist). The cost of reviewing a contract is almost always less than the cost of untangling a bad one. Many technology solicitors now offer fixed-fee AI contract reviews.

What happens to my data when an AI agency contract ends?

That depends entirely on what your contract says. Without a clear data return and deletion clause, your data may sit on the agency's systems indefinitely. A well-drafted contract requires the agency to return all your data in a portable format within 30 days of termination, certify deletion of all copies (including backups and any data used in model training), and provide written confirmation that this has been done. If your contract does not say this, negotiate it in before signing.

Can an AI agency use my data to train their AI models?

They can if your contract permits it - and many standard contracts do, often buried in a clause about 'service improvement.' Your business data (pricing, processes, customer patterns, sales data) is not protected by data protection law unless it contains personal information. Only your contract protects it. Ensure your contract explicitly prohibits the agency from using your data for any purpose beyond delivering your specific services.

What is a reasonable liability cap for an AI agency engagement?

A liability cap equal to the total contract value is a reasonable baseline for most AI agency work. For engagements involving personal data or business-critical systems, push for higher caps and ensure the agency has appropriate Professional Indemnity and Cyber Liability insurance. Under UK GDPR, ICO fines can reach £17.5 million or 4% of global turnover - make sure your liability clauses reflect that scale of exposure.

What does a 'non-exclusive licence' in a contract actually mean?

It means the agency owns the work and is granting you permission to use it - but they can also grant the same permission to anyone else, including your direct competitors. If you are paying for custom AI development, a non-exclusive licence is almost certainly not what you want. Push for full IP assignment on payment, which transfers ownership to you outright.

Are AI agency contracts regulated in the UK?

AI agency contracts are commercial contracts governed by general UK contract law (common law principles together with statutes such as the Unfair Contract Terms Act 1977 and the Supply of Goods and Services Act 1982). There is no specific AI agency contract regulation, but several overlapping regimes apply: UK GDPR and the Data (Use and Access) Act 2025 govern personal data handling; the Digital Markets, Competition and Consumers Act 2024 covers unfair commercial practices; and sector-specific regulation may apply depending on your industry. The UK AI regulatory landscape is evolving rapidly - contracts signed today may need reviewing as new obligations come into force.

What should I do if I have already signed a bad AI agency contract?

First, do not panic - most issues can be remedied through negotiation even after signing. Review what the contract actually says rather than assuming the worst. If there are specific clauses that concern you (especially around data usage or IP), raise them with the agency directly - many will agree to clarifications or side letters. If the relationship has broken down or there are active disputes, get legal advice before taking any action. Document everything, especially any verbal agreements or commitments made during the sales process.