How transparent should an AI agency be about its methods?
1 May 2026
An AI agency should show you how the work will be done, which tools and data are involved, what could go wrong, who approves outputs, what you own at the end and what it will cost to run. The right level of transparency depends on risk: a £3,000 prototype needs a clear scope, while a £30,000 operational workflow needs architecture, testing, governance and handover documentation.
What should the agency show you before you sign?
Before you sign, an AI agency should explain the proposed method in plain English, including the business problem, the data sources, the model or platform options, the expected workflow, the human approval points, the risks, the cost range and what will be handed over at the end. You do not need every line of code at proposal stage. You do need enough detail to know whether you are buying a maintainable business system or an impressive demo.
For a typical UK SME implementation, I would expect the agency to show a short technical approach, a delivery plan, a data protection view, a security view and a commercial breakdown. For a small proof of concept costing £2,000 to £5,000, that might be a three-page scope. For an operational implementation costing £8,000 to £35,000, it should be closer to a proper blueprint with process maps, risk notes and acceptance criteria. For anything above £50,000, vague methodology is not acceptable.
The minimum disclosure should include these items:
- Which systems will connect to the AI workflow, such as Microsoft 365, Google Workspace, HubSpot, Xero, SharePoint, Notion or a CRM.
- Whether the solution uses a public AI model, a private hosted model, retrieval augmented generation, automation tooling, fine tuning or a standard SaaS feature.
- What data will be sent to which suppliers, in which region where known, and under what processing terms.
- What humans approve, review or override before the system affects a customer, employee or financial outcome (see the approval-gate sketch after this list).
- What is included in handover: documentation, prompts, workflow diagrams, admin access, training, tests and support.
- What is not included, especially custom software ownership, source code, long-term monitoring and future model changes.
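To make the human approval point concrete, here is a minimal sketch of an approval gate in Python. It is illustrative only: the `draft_reply` placeholder and the reviewer flow are assumptions, not any agency's real implementation, but the shape is what the workflow diagrams should show you: AI output waits in a pending state until a named person approves it, and nothing unapproved can be sent.

```python
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class Draft:
    """An AI-generated output waiting for human sign-off."""
    customer_ref: str
    text: str
    status: Status = Status.PENDING
    reviewer: str = ""


def draft_reply(customer_ref: str, question: str) -> Draft:
    # Placeholder: a real system would call a model or platform here.
    return Draft(customer_ref, f"Suggested reply to: {question}")


def review(draft: Draft, reviewer: str, approved: bool) -> Draft:
    # A named human decision is recorded before anything leaves the business.
    draft.reviewer = reviewer
    draft.status = Status.APPROVED if approved else Status.REJECTED
    return draft


def send(draft: Draft) -> None:
    # The send step refuses anything that has not been explicitly approved.
    if draft.status is not Status.APPROVED:
        raise PermissionError("Draft has not been approved by a human reviewer")
    print(f"Sending to {draft.customer_ref}: {draft.text}")


d = draft_reply("CUST-042", "When will my order arrive?")
review(d, reviewer="j.smith", approved=True)
send(d)  # would raise PermissionError if review were skipped or rejected
```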
A transparent agency should also name the alternatives. If Microsoft Copilot, ChatGPT Team, Make, Zapier, Power Automate, an internal hire or a larger consultancy such as Accenture or PwC is a better option, they should say so. That does not weaken their offer. It proves they understand the market and are not trying to force every problem through their preferred tool.
What should stay confidential?
Transparency does not mean an agency must reveal everything. There are three fair limits: client confidentiality, security and separately priced intellectual property. A good agency can be open about its method without exposing another client or weakening your security.
For example, it is reasonable for an agency to say, 'we use a retrieval layer so the AI answers from your approved documents rather than from memory'. It is not reasonable for them to show you another client's private knowledge base, API keys, workflow credentials or commercially sensitive process documents. It is also reasonable for them to protect reusable internal templates, evaluation scripts and accelerators, provided you still receive the deliverables you paid for.
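To show what that retrieval claim can look like in code, here is a deliberately simplified sketch. The word-overlap scoring and the `approved_docs` content are invented stand-ins for a real embedding model and document store, but they demonstrate the property worth asking about: when no approved source is relevant, the system refuses rather than answering from memory.

```python
# A toy retrieval layer: answers are grounded in approved documents only.
# Real systems replace word overlap with vector embeddings and a language
# model, but the grounding principle is the same.

approved_docs = {
    "refunds.md": "Refunds are issued within 14 days of a returned item.",
    "delivery.md": "Standard delivery takes 3 to 5 working days in the UK.",
}


def score(question: str, text: str) -> int:
    """Crude relevance: count the words the question shares with a document."""
    return len(set(question.lower().split()) & set(text.lower().split()))


def answer(question: str, threshold: int = 2) -> str:
    name, text = max(approved_docs.items(), key=lambda d: score(question, d[1]))
    if score(question, text) < threshold:
        # No approved source is relevant, so refuse instead of guessing.
        return "I cannot answer that from the approved documents."
    return f"Based on {name}: {text}"


print(answer("How long does standard delivery take?"))  # grounded answer
print(answer("What is the chairman's salary?"))         # refusal
```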
The line should be written into the contract. If you are paying £10,000 for an implementation, you should know whether you own the workflow configuration, prompts, documentation and diagrams. If the agency is charging a lower monthly fee because it hosts the system on its own platform, you should know what happens if you leave. Can you export the knowledge base? Can you move the automation into your own Make, Zapier or Power Automate account? Can another supplier support it?
The red flag is not an agency protecting legitimate IP. The red flag is an agency using 'proprietary methodology' as a blanket excuse for not explaining data flow, security, cost, testing or handover. That is not sophistication. That is opacity.
What does UK regulation expect around transparency?
UK regulation does not give one simple rule that says every AI project must publish every method. It is more practical than that. The UK Government's AI regulation white paper sets out five cross-sector principles for regulators: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. In plain English, the direction of travel is clear: businesses using AI should be able to explain what it does, who is accountable and how risks are controlled.
Source: the UK Government's AI regulation white paper describes appropriate transparency and explainability as one of the core principles for regulators.
Data protection raises the bar further. The ICO and The Alan Turing Institute have guidance on explaining decisions made with AI. The ICO says the guidance gives organisations practical advice to help explain the processes, services and decisions delivered or assisted by AI to the individuals affected by them. If your AI use touches personal data, recruitment, credit, healthcare, customer profiling, complaints or eligibility decisions, this is not a nice-to-have. It becomes part of responsible governance.
Source: ICO guidance on explaining decisions made with AI is aimed at helping organisations explain AI-assisted processes and decisions to affected people.
There is also a cyber security reality. A 2024 DSIT survey of 350 UK businesses using or considering AI found that 68% were already using at least one AI technology, yet among those currently using AI, 47% had no cyber security practices specifically for AI and 13% were unsure. That is the market context. Many businesses are moving quickly, but governance is behind adoption. A transparent agency should help close that gap, not exploit it.
Source: DSIT's 2024 AI cyber security survey reported that 47% of current AI users had no AI-specific cyber security practices and 13% were unsure.
How much technical detail is enough?
Enough detail means your leadership team can answer six questions without the agency in the room: what does the system do, what data does it use, what could go wrong, who checks the output, what does it cost to run, and who can maintain it if the agency disappears. If you cannot answer those six questions, the agency has not been transparent enough.
| Project type | Expected transparency | Typical UK budget |
|---|---|---|
| AI opportunity audit | Process review, prioritised use cases, risks, estimated ROI and recommended tools | £2,000 to £5,000 |
| Prototype or proof of concept | Data flow, tool choices, prompt approach, test criteria and next-step costs | £3,000 to £10,000 |
| Operational workflow | Architecture, access model, monitoring, failure handling, handover pack and support terms | £8,000 to £35,000 |
| Regulated or high-risk deployment | Full governance pack, DPIA support, human oversight model, testing evidence and audit trail | £25,000 to £100,000+ |
Do not confuse technical detail with jargon. A good agency can explain retrieval augmented generation, model selection, vector search, orchestration and evaluation in business terms. If they cannot explain it simply, they may not understand it deeply enough. If they deliberately make it sound mystical, they are selling dependence.
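As one example of jargon explained simply: 'vector search' means turning text into lists of numbers and comparing their directions, so that similar meanings land close together. The three-number vectors below are made up for illustration; real embedding models produce hundreds of dimensions, but the comparison step is this same cosine calculation.

```python
import math

# Toy "embeddings": in reality an embedding model produces these numbers.
documents = {
    "refund policy": [0.9, 0.1, 0.0],
    "delivery times": [0.1, 0.8, 0.2],
    "staff handbook": [0.0, 0.2, 0.9],
}


def cosine(a: list[float], b: list[float]) -> float:
    """Similarity of direction: near 1.0 means closely related text."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))


# Imagine this is the embedded question "Can I get my money back?"
query = [0.85, 0.15, 0.05]

best = max(documents, key=lambda name: cosine(query, documents[name]))
print(best)  # "refund policy" is the closest document by direction
```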
There is one exception: if you are buying a productised service at a very low price, you should not expect bespoke architecture documents. A £300 monthly AI receptionist tool will not come with the same disclosure as a £30,000 workflow implementation. But even then, you should still be told what data is captured, where it goes, what the tool can and cannot do, and how to switch it off.
What are the warning signs of poor transparency?
The biggest warning sign is performance claims without method. If an agency promises 30% productivity gains, 24-hour automation or 'AI transformation' but will not show the assumptions, baseline, test approach or failure handling, treat the claim as marketing, not evidence.
Other red flags include refusing to name tools, hiding ongoing software costs, avoiding data protection questions, promising that no human review is needed, saying 'the AI learns your business' without explaining how, or making you dependent on their private account with no export route. Another common one is pretending every project needs custom AI when a standard tool would solve 80% of the problem for £20 to £40 per user per month.
A transparent agency should be comfortable saying: 'This part is simple, this part is risky, this part is uncertain, and this part may not be worth doing.' That honesty saves money. It also prevents the painful situation where a business buys a shiny pilot that nobody can safely run in production.
When this does NOT apply
You do not need deep methodological transparency for every AI interaction. If a staff member uses ChatGPT to summarise a public article, the practical risk is low. If a designer uses an AI image tool for early concept work, a light usage policy may be enough. If you are experimenting internally with non-sensitive data, you can keep the governance proportionate.
This also does not mean every business should demand source code ownership. Sometimes the right answer is to buy a managed service and accept that you are paying for outcomes, not code. The important point is informed choice. You should know what you own, what you rent, what you can export and what you would lose if you changed supplier.
The transparency requirement increases when the AI touches personal data, customer promises, employee decisions, regulated work, financial transactions, operational continuity or brand reputation. In those cases, 'trust us' is not a method. It is a risk transfer from the agency to you.
What should you ask an AI agency?
Ask direct questions. A good agency will welcome them because they separate serious buyers from people chasing novelty.
- What tools and models are you likely to use, and why?
- What data leaves our environment?
- Will our data be used to train any model?
- What are the main failure modes?
- Where does human review sit in the workflow?
- What documentation will we receive?
- What do we own at the end?
- What are the monthly running costs?
- What happens if we stop working with you?
- Which cheaper or simpler option should we consider before doing this?
If the answers are clear, specific and occasionally uncomfortable, that is a good sign. If the answers are vague, polished and always flattering, be careful.
If you want to explore whether an AI project is worth doing, book a free call. No pitch, no pressure, just an honest conversation about the method, the risks and whether the numbers make sense for your business.
Is This Right For You?
This is right for you if you are a UK business leader choosing an AI agency and you need enough detail to judge risk, cost, data protection and maintainability before you sign. It is especially relevant if the work touches customer data, HR data, regulated decisions, financial processes, legal documents or operational systems that staff rely on every day.
It is not right for you if you only want a one-off prototype, a generic ChatGPT training session or a cheap automation with no ongoing accountability. In those cases, a freelancer, internal power user or standard SaaS tool may be a better fit. Transparency still matters, but you probably do not need a 20-page technical design pack for a £500 experiment.
Our rule is simple: if the method affects your risk, cost or ability to maintain the system, you should see it. If disclosure would expose another client, reveal a private credential or hand over proprietary code you did not buy, the agency should say that clearly rather than hiding behind vague language.
Frequently Asked Questions
Should an AI agency tell me which model it uses?
Yes, at least at platform level. You should know whether the work uses OpenAI, Anthropic, Google, Microsoft, an open source model or a packaged SaaS tool. The agency may not need to disclose every internal configuration, but it should explain why the model is suitable for your risk, cost and data requirements.
Should I expect to own the prompts and workflows?
If you are paying for a bespoke implementation, usually yes. Prompts, workflow diagrams, configuration notes and handover documentation should be included unless the contract says otherwise. If the agency hosts everything as a managed service, ownership may be different, but that should be explicit before you sign.
Is it a red flag if an agency uses proprietary methods?
Not automatically. Many agencies have reusable frameworks, templates and accelerators. It becomes a red flag when 'proprietary method' is used as an excuse to avoid explaining data flow, testing, security, running costs, handover or supplier lock-in.
How much should transparency documentation cost?
For a small proof of concept, the documentation may be included in a £3,000 to £10,000 project. For a larger operational system, proper architecture, risk and handover documentation may represent 10% to 20% of the project effort, so on a £20,000 build that is roughly £2,000 to £4,000. If the system is important to your business, that is money well spent.
Do AI agencies need to explain AI decisions to customers?
If the system affects individuals and uses personal data, you should take ICO guidance seriously. The agency should help you understand what explanation may be needed, but your organisation remains responsible for how the system is used and communicated to customers, employees or applicants.
What should I do if an agency refuses to explain its method?
Ask for a written scope that covers data sources, suppliers, security, human oversight, handover and ongoing costs. If they still refuse, walk away or reduce the engagement to a low-risk discovery phase. Do not let an opaque supplier build a system your team will depend on.
Can too much transparency slow a project down?
Yes, if you demand enterprise-level documentation for a low-risk experiment. The answer is proportionate transparency. A £2,000 audit needs a clear method and useful outputs. A £50,000 AI workflow needs proper governance, testing evidence and handover materials.