What Are the Red Flags I Should Look for in an AI Agency Contract?
24 March 2026
The biggest red flags are vague deliverables, no IP ownership clause, missing exit terms, and black box pricing. A good AI agency contract clearly states what you own, what happens if things go wrong, and how you leave.
Red flag 1: Vague deliverables
This is the single most common problem. The contract says things like "AI strategy development," "implementation of AI solutions," or "ongoing AI optimisation" without specifying what that actually means in measurable terms.
What vague looks like:
- "We will develop and deploy an AI-powered customer service solution."
- "Ongoing optimisation and improvements to your AI systems."
- "Strategic AI consultancy and advisory services."
What specific looks like:
- "We will deploy a retrieval-augmented generation (RAG) chatbot trained on your support documentation, handling Tier 1 support queries via your website, with a target resolution rate of 60% without human escalation."
- "Monthly performance review including accuracy metrics, escalation rates, and a written report with specific improvement actions."
- "Four half-day workshops covering AI opportunity mapping, tool selection, governance framework, and implementation roadmap, each with written deliverables."
If your contract does not tell you exactly what you are getting, when you are getting it, and how you will know it is done, push back before signing.
Red flag 2: You do not own your IP
This catches more UK businesses than you might expect. Some AI agency contracts include clauses that give the agency ownership or co-ownership of the models, workflows, or data pipelines they build for you. Others retain a licence to reuse your work for other clients.
Check for these specific clauses:
- IP assignment: Does the contract explicitly assign all intellectual property created during the engagement to you? It should.
- Background IP: The agency may reasonably retain ownership of their pre-existing tools and frameworks. That is fine, but it should be clearly listed.
- Data ownership: Your business data, training data, and any model fine-tuned on your data should remain yours. Full stop.
- Right to reuse: Some agencies include a clause letting them use anonymised versions of your work in their portfolio or for other clients. If you are not comfortable with that, strike it.
We are transparent about this: when we build something for a client, the client owns it. Our pre-existing tools and frameworks remain ours (and are clearly listed), but everything custom belongs to you. That should be the standard.
Red flag 3: No exit clause or punitive termination
Morgan Lewis published a detailed analysis earlier this year on exit rights in AI contracts, and the core message is simple: if you cannot leave cleanly, you are locked in. A good contract includes:
- Termination for convenience: You should be able to end the engagement with reasonable notice (typically 30 to 90 days), not just for cause.
- Data portability: When the engagement ends, you get all your data, models, and documentation in usable formats. Not in a proprietary format that only works with their systems.
- Transition support: A reasonable handover period where the agency helps your team or your new provider take over.
- No punitive fees: Watch for early termination penalties that make leaving prohibitively expensive. Some contracts effectively trap you for 12 to 24 months.
If the only way out of the contract is to prove the agency failed (termination for cause), that is a red flag. Things change. Priorities shift. You need the flexibility to move on without a legal battle.
Red flag 4: Black box pricing
AI agency pricing is notoriously opaque. Some agencies quote a single large number without breaking down what that money actually buys. Others use consumption-based pricing that is impossible to predict.
What to look for:
- Itemised costs: You should see a breakdown of discovery, development, testing, deployment, and ongoing support costs separately.
- Rate cards: If the engagement is time-based, you should know the day rate or hourly rate for each team member involved.
- Infrastructure costs: Who pays for cloud compute, API calls, and hosting? This can add up quickly with AI workloads. Make sure it is clear.
- Change request process: What happens when scope changes (and it will)? Is there a clear process with agreed rates, or is it left vague?
A legitimate agency can explain every line item in their quote. If the answer to "what does this cover?" is "everything," that covers nothing.
Red flag 5: No mention of governance or compliance
UK businesses operate under GDPR, and depending on your sector, potentially FCA, ICO, or other regulatory oversight. Any AI agency working with your business data should address data protection and governance in the contract.
Specifically, look for:
- Data processing agreement: Required under GDPR if the agency processes personal data on your behalf.
- Data residency: Where is your data stored and processed? If the agency uses US-based cloud services, that has GDPR implications.
- Model training: Will your data be used to train models that serve other clients? It should not be, unless you explicitly agree.
- Audit rights: Can you audit how your data is handled? A good contract gives you that right.
If the contract does not mention GDPR, data processing, or governance at all, that tells you something important about how seriously the agency takes compliance.
Red flag 6: Unrealistic timelines and guarantees
Any agency that guarantees specific AI performance outcomes in the contract should be questioned carefully. AI systems are probabilistic. A responsible agency commits to best efforts, clear metrics, and remediation processes, not guaranteed results.
Be wary of:
- "Guaranteed 10x ROI within 6 months"
- "99.9% accuracy" (without specifying the task, dataset, and measurement methodology)
- "Full deployment in 2 weeks" for anything non-trivial
Realistic timelines for a meaningful AI deployment typically look like 4 to 8 weeks for discovery and design, 6 to 12 weeks for development and testing, and ongoing iteration after launch. If someone promises to transform your business in a fortnight, they are either oversimplifying or overpromising.
Red flag 7: No SLA for post-launch support
The launch is not the finish line. AI systems need monitoring, maintenance, and occasional retraining. Your contract should include clear service level agreements for post-launch support:
- Response times: How quickly will they respond to issues? (e.g., critical issues within 4 hours, standard within 24 hours)
- Monitoring: Who monitors model performance, and how often?
- Retraining: When and how will models be updated as your data changes?
- Cost: What does ongoing support cost, and is it included or extra?
If the contract covers the build but not what happens after, you are buying a system that will degrade over time with no plan for maintaining it.
The Contract Checklist
Before signing any AI agency contract, check for:
- Specific, measurable deliverables with clear acceptance criteria
- Full IP assignment for custom work (with background IP clearly listed)
- Clean exit terms with data portability and no punitive fees
- Itemised, transparent pricing with change request process
- GDPR-compliant data processing agreement
- Realistic timelines without performance guarantees
- Post-launch SLA with defined response times and responsibilities
Is this right for you?
This article is for you if you are evaluating AI agencies and want to protect your business. If you are a sole trader looking at off-the-shelf AI tools, most of this does not apply since you are typically working with standard SaaS terms. But if you are commissioning custom AI work from an agency or consultancy, every point above matters.
One more thing: a good agency will not be offended if you push back on contract terms. They will welcome it. It shows you are a serious client who understands what you are buying. If an agency bristles at questions about IP ownership or exit terms, that tells you everything you need to know.
Frequently Asked Questions
Should I get a lawyer to review an AI agency contract?
Yes, especially for engagements over £10,000. A technology-savvy solicitor can spot problematic IP clauses, data handling gaps, and exit terms that a non-specialist might miss. Many UK law firms now offer fixed-fee contract reviews for technology agreements, typically costing between £500 and £1,500.
What should an AI agency contract cost to set up?
Setup fees for AI consulting engagements in the UK typically range from £2,000 for a focused audit to £15,000 or more for a full implementation programme. Be wary of agencies that charge large upfront fees before any discovery work. A reasonable approach is a smaller paid discovery phase followed by a costed implementation proposal.
Can I negotiate AI agency contract terms?
Absolutely. Most agency contracts are starting points, not take-it-or-leave-it documents. Common negotiation points include IP ownership, termination notice periods, data portability, and payment milestones tied to deliverables rather than time elapsed. A good agency expects and welcomes negotiation.
What happens to my data if an AI agency goes out of business?
This is why data portability and IP ownership clauses matter so much. Your contract should specify that all data and custom models are returned to you in standard formats if the engagement ends for any reason, including the agency ceasing operations. Without this clause, your data could be tied up in insolvency proceedings.