How Do I Know Whether an AI Use Case Is Worth Automating?

14 May 2026

The best AI automation candidates are boring, repeated business workflows with clear inputs, clear outputs, and a visible cost. Good examples include invoice triage, quote preparation, inbox routing, sales follow-up, customer support drafting, compliance checking, and reporting. Weak examples are rare tasks, politically sensitive decisions, messy processes nobody owns, or anything where a wrong answer could harm a customer without human review.

What makes an AI use case worth automating?

A use case is worth automating when it passes five tests: volume, value, repeatability, data readiness and risk control. Volume means the task happens often enough to matter. Value means the task costs real money, slows revenue, creates errors, or consumes senior attention. Repeatability means there is a recognisable pattern. Data readiness means the information needed to do the work exists and can be accessed safely. Risk control means a wrong output can be reviewed, corrected or contained.

The mistake many businesses make is starting with the most exciting AI idea rather than the most economically sensible one. A chatbot that answers every customer question sounds attractive. But if support volume is low, knowledge is messy and one bad answer creates reputational risk, it may be a poor first project. A less glamorous workflow, such as drafting quote follow-ups from CRM notes, may pay back faster because the inputs are clearer and the value is easier to measure.

Use this blunt rule: if you cannot describe the current manual process, estimate the monthly cost of that process, name the person who owns it, and define what good output looks like, do not automate it yet. Discovery work is fine. A production build is premature.

The wider UK market is still early. The Office for National Statistics reported that 23% of businesses were using some form of AI in late September 2025, up from 9% when the question was introduced in September 2023. That means many firms are experimenting, but fewer have turned AI into dependable operating infrastructure. The businesses that benefit most are not necessarily the ones with the flashiest tools. They are the ones that choose use cases with a clear operational payback.

Source: ONS Business insights and impact on the UK economy, 2 October 2025.

How should you score an AI automation idea?

Score every candidate use case out of 25. Give 0 to 5 points for each of these five areas:

- Volume: how often the task happens each month
- Value: the money, time or senior attention the task consumes
- Repeatability: how recognisable and consistent the pattern is
- Data readiness: whether the inputs exist and can be accessed safely
- Risk control: whether a wrong output can be reviewed, corrected or contained

A score of 20 to 25 is a strong automation candidate. Build a pilot, measure it, and consider production if the numbers hold. A score of 15 to 19 is worth a short discovery or prototype. A score of 10 to 14 usually needs process clean-up first. Anything below 10 should stay manual for now unless there is a strategic reason to learn.
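
The scoring rubric and the threshold bands above can be sketched as a small helper. This is a minimal illustration: the five criteria and the bands come from this article, while the function and variable names are invented for the example.

```python
# Score an AI automation candidate out of 25 (0-5 per criterion) and map
# the total to the article's recommendation bands.
CRITERIA = ["volume", "value", "repeatability", "data_readiness", "risk_control"]

def recommend(scores: dict) -> tuple:
    """Return (total, recommendation) for a candidate use case."""
    if set(scores) != set(CRITERIA):
        raise ValueError(f"Expected a score for each of: {CRITERIA}")
    if any(not 0 <= s <= 5 for s in scores.values()):
        raise ValueError("Each criterion is scored 0 to 5")
    total = sum(scores.values())
    if total >= 20:
        advice = "Strong candidate: build a pilot and measure it"
    elif total >= 15:
        advice = "Worth a short discovery or prototype"
    elif total >= 10:
        advice = "Needs process clean-up first"
    else:
        advice = "Stay manual for now"
    return total, advice

# Hypothetical example: a high-volume, well-patterned inbox workflow
total, advice = recommend({"volume": 4, "value": 3, "repeatability": 5,
                           "data_readiness": 4, "risk_control": 4})
print(total, "-", advice)
```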

Here is a practical example. A finance inbox receives 400 supplier emails per month. Staff spend 3 minutes per email identifying invoices, checking for missing purchase order numbers and routing exceptions. That is roughly 20 hours per month. At a fully loaded admin cost of £25 per hour, the visible labour cost is £500 per month. If automation cuts that by 60%, the labour saving is £300 per month. On labour alone, a £6,000 build would take 20 months to pay back. That may not be good enough.

But if the same workflow also prevents late payment fees, improves supplier relationships, reduces month-end stress and removes work from a finance manager who costs £60 per hour, the case changes. The point is not to pretend every AI automation has a perfect spreadsheet ROI. The point is to make the assumptions visible before spending money.
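
The labour-only payback arithmetic in the invoice example reduces to a few lines. The figures below are the article's; the function itself is an illustrative sketch, not a complete business case, because it ignores the softer benefits described above.

```python
def simple_payback_months(volume_per_month, minutes_per_item,
                          hourly_cost, reduction, build_cost):
    """Months to recover build_cost from labour savings alone."""
    hours = volume_per_month * minutes_per_item / 60   # monthly manual hours
    monthly_saving = hours * hourly_cost * reduction   # £ saved per month
    return build_cost / monthly_saving

# Finance inbox: 400 emails, 3 min each, £25/hr, 60% reduction, £6,000 build
months = simple_payback_months(400, 3, 25, 0.60, 6000)
print(f"{months:.0f} months to pay back on labour alone")
```

Making the assumptions explicit like this is the point of the exercise: change the reduction percentage or the hourly cost and the payback moves visibly.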

For a small business, a sensible first AI automation project often costs between £3,000 and £12,000 for a focused workflow, depending on integrations, security, testing and documentation. If the project cannot plausibly return that within 6 to 18 months, reduce the scope or choose a different use case.

Which use cases usually produce the fastest return?

The fastest returns usually come from tasks that are repetitive, text-heavy and already semi-structured. Good examples include:

- Invoice and supplier email triage
- Inbox routing and prioritisation
- Drafting quote and sales follow-ups from CRM notes
- Customer support reply drafting for human review
- Compliance checking against a written policy
- Routine reporting and summarisation

These use cases work because the AI is not being asked to run the company. It is being asked to prepare work, summarise information, spot likely issues, and speed up human decisions. That is where most SMEs should start.

By contrast, fully autonomous customer-facing agents, pricing decisions, hiring decisions and regulated advice need much stronger controls. They may still be worth automating, but they should not be treated as quick wins. In the UK, data protection, fairness, accountability and auditability matter. The ICO provides an AI and data protection risk toolkit designed to help organisations reduce risks to individuals' rights and freedoms caused by their AI systems. If your use case touches personal data, special category data, vulnerable customers or meaningful decisions about people, build governance into the business case from day one.

Source: ICO AI and data protection risk toolkit.

The strongest first project is normally one where the AI prepares, checks or routes work, while a person remains accountable for the final decision. That balance gives you measurable efficiency without pretending the model is infallible.

What numbers should you calculate before approving the project?

Before approving an AI automation project, calculate six numbers. First, the current monthly volume. Second, the average handling time. Third, the fully loaded hourly cost of the people doing the work. Fourth, the expected percentage reduction in manual effort. Fifth, the cost to build and operate the automation. Sixth, the cost of errors, including rework, refunds, churn, complaints and compliance risk.

For example, a recruitment firm might process 300 candidate CVs per month. If each initial review takes 8 minutes, that is 40 hours per month. At £35 per hour fully loaded, the visible cost is £1,400 per month. If an AI-assisted screening workflow reduces manual review time by 40% while keeping a recruiter in control, the direct saving is £560 per month. A £7,500 project would have a simple payback of about 13 months, before counting faster response times or improved candidate experience.
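
The six numbers above can be gathered into a single summary. This is a sketch under the article's recruitment figures; the error-cost parameter is included because the article asks for it, but its value in any real case is an assumption you must supply yourself.

```python
def business_case(volume, minutes_per_item, hourly_cost,
                  reduction, build_cost, monthly_error_cost_avoided=0):
    """Summarise the six pre-approval numbers the article lists."""
    hours = volume * minutes_per_item / 60             # monthly manual hours
    labour_saving = hours * hourly_cost * reduction    # £ per month
    total_saving = labour_saving + monthly_error_cost_avoided
    return {
        "monthly_hours": hours,
        "monthly_labour_saving": labour_saving,
        "payback_months": build_cost / total_saving,
    }

# Recruitment screening: 300 CVs, 8 min each, £35/hr, 40% reduction, £7,500 build
case = business_case(300, 8, 35, 0.40, 7500)
print(f"{case['payback_months']:.1f} months simple payback")
```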

That may be worthwhile. But only if the screening rules are fair, documented and checked. If the data is biased, the role requirements are unclear, or the tool makes unexplained recommendations, the risk may outweigh the saving. In a people-related workflow, the business case needs to include governance, not just time saved.

DSIT's AI Adoption Research found that around 1 in 6 UK businesses were using at least one AI technology in 2025, while 80% were neither using nor planning to adopt AI. It also found that among AI adopters, 30% of staff used AI on average, and natural language processing and text generation were the most common uses, at 85% of adopters. That matters because many useful projects are not science fiction. They are practical language, document and workflow problems.

Source: DSIT AI Adoption Research.

When does a use case need simple automation instead of AI?

Not every automation needs AI. If the task is rule-based, predictable and does not require judgement, use simpler automation first. Zapier, Make, Microsoft Power Automate, CRM workflows, spreadsheet scripts and database rules are often cheaper, easier to maintain and more predictable than an AI agent.

Use rule-based automation when the instruction looks like this: if X happens, do Y. Send a Slack alert when a deal moves stage. Create an invoice when an order is approved. Add a tag when a form field equals a certain value. These are not AI problems. They are workflow problems.

Use AI when the instruction looks more like this: read this messy information, understand what it means, classify it, summarise it, draft a response, compare it to a policy, or suggest the next best action. AI earns its place when language, ambiguity or judgement is involved.

This distinction matters because AI adds cost and operational risk. You need prompts, test cases, monitoring, human review paths, model updates and data controls. If a basic workflow rule will solve 80% of the problem for £50 per month, that is probably the better business decision. A good AI consultant should tell you that. If every problem is being sold to you as an AI agent, be cautious.

A simple internal rule helps: automate with rules where the process is deterministic, assist with AI where the process is interpretive, and only delegate to AI where the risk is low or the oversight is strong. That framework prevents expensive over-engineering.

When this does NOT apply

This framework does not apply cleanly when the project is mainly strategic research, product experimentation or capability building. Sometimes a business funds an AI prototype because it wants to learn, not because the first version will pay back immediately. That can be valid, but call it what it is. It is research and development, not a guaranteed ROI project.

It also does not apply well where the workflow is rare but very high value. For example, automating part of a complex tender process may only affect 10 opportunities per year, but if one improved bid is worth £250,000, the business case may still be strong. In that situation, do not score only on frequency. Score on decision value and competitive advantage.

Finally, do not use this framework to justify replacing human judgement where trust is the product. If clients pay you for expert advice, relationship quality, negotiation, safeguarding or accountability, AI should support the professional, not impersonate them. The most valuable automation may be the one that gives your people more time for the parts of the job customers actually care about.

If you want a wider view on project risks before choosing a use case, read 'Why Do Most AI Projects Fail?'. The short version is simple: bad use case selection is one of the fastest ways to waste AI budget.

Is This Right For You?

This approach is right for you if you have several AI ideas but no clear way to rank them. It is especially useful for UK SMEs where budget, staff time and operational risk matter more than impressive demos.

It is not right for you if you are trying to automate a broken process before anyone has agreed how that process should work. In that case, fix the process first. AI will only make a bad workflow faster, louder and harder to manage.

Frequently Asked Questions

What is the quickest way to decide if an AI idea is worth testing?

Write down the current monthly volume, the manual time per item, the people cost and the error cost. If the total value is not meaningful, the idea may still be interesting, but it is not a priority automation project.

What ROI should I expect from a first AI automation project?

For a focused SME workflow, aim for a realistic payback within 6 to 18 months. Faster is possible, but only when the task is high-volume, expensive and already well understood.

Should I automate the most annoying task first?

Not always. Annoyance is a useful signal, but it is not a business case. Automate the annoying task first only if it is frequent, measurable, safe and valuable enough to justify the cost.

How much should I spend validating an AI use case?

A short discovery exercise or prototype might cost £1,500 to £5,000. A focused production workflow often starts around £3,000 to £12,000. Larger projects with integrations, sensitive data or compliance needs cost more.

Can AI automation work if our data is messy?

Sometimes, but messy data reduces accuracy and increases testing time. If the data is inconsistent, incomplete or spread across too many systems, budget for clean-up before expecting reliable automation.

When should a human stay in the loop?

Keep a human in the loop when the output affects customers, money, legal obligations, employment, safety, vulnerable people or your reputation. Removing review is a later decision, not the starting point.

Is a chatbot a good first AI automation project?

Usually not unless you have high support volume, a clean knowledge base and a clear escalation path. Internal assistants, triage tools and drafting workflows often produce safer early wins.

What is the biggest red flag in AI use case selection?

The biggest red flag is a project described only in technology terms, such as 'we need an AI agent', with no owner, metric, workflow map or definition of success.