AI Daily Brief: 14 May 2026


Quick Read: Sinch says 74% of enterprises have rolled back live AI customer communication agents, while UK cyber sector revenue reached £14.7bn and AI security firms grew 68%. The UK AI Security Institute says the length of cyber tasks frontier models can complete autonomously is doubling every few months, and Coupa has bought London-based Rossum to expand agentic spend management.

Today's AI news has a clear operational theme: production AI is becoming more useful, more measurable and more difficult to govern. Customer service agents are being rolled back, AI security capability is accelerating, and UK cyber firms are growing fast as businesses confront the practical costs of deployment.

Three quarters of AI customer service agents are being rolled back

Sinch research, reported by The Register, says 74% of enterprises that have deployed AI customer communication agents later rolled them back or shut them down. The figure refers to systems that reached live service, not pilot projects that failed before launch, which makes the number more uncomfortable for leadership teams investing in customer automation.

The study also found that rollback rates rise to 81% among organisations described as having fully mature guardrails. Sinch argues that better governance may be helping teams spot failure earlier, rather than preventing it altogether.

For UK businesses, the message is blunt: replacing a call centre with bots is not a simple cost-cutting exercise. The operating model, escalation path, measurement framework and human fallback all matter more than the model demo.

Our take: This is the clearest warning yet that AI customer service needs to be treated as operational change, not software procurement. If the board case only counts reduced headcount, it is probably wrong. The stronger case is faster routing, better agent assist, fewer repeat contacts and a controlled escalation model where AI handles low-risk work and humans handle exceptions.

UK cyber sector reaches £14.7bn as AI security firms grow 68%

Infosecurity Magazine reports that the UK cybersecurity sector generated £14.7bn in revenue last year, with gross value added up 17% to £9.1bn. The sector now employs nearly 70,000 people and includes an estimated 2,603 active cybersecurity firms.

The AI angle is especially significant. The number of UK firms offering cybersecurity products and services for AI grew by an estimated 68% annually to 111. Government ministers are pushing companies toward the Cyber Resilience Pledge, which includes commitments to board-level responsibility, NCSC Early Warning sign-up and Cyber Essentials across supply chains.

The update comes as the Cyber Security and Resilience Bill continues through Parliament, increasing pressure on essential services and managed service providers to improve incident reporting and resilience.

Our take: AI is turning cyber from a technical risk into a board governance issue. The growth in AI security suppliers is positive, but buying tools will not solve the underlying question: who owns AI risk, how is it measured, and what evidence proves the organisation can recover when systems fail or are attacked?

UK AI Security Institute says autonomous cyber capability is accelerating

The UK AI Security Institute says frontier models are becoming more efficient at some cybersecurity work. Its time window benchmark estimates the length of cyber task, measured in human expert working time, that a model can complete on its own, and the institute has repeatedly shortened its estimated doubling period for that capability.

The Register reports that AISI moved from an eight-month estimate in late 2025 to 4.7 months in February 2026, with Anthropic Mythos Preview and OpenAI GPT-5.5 outperforming that trend. AISI also cited results where a recent Mythos checkpoint solved a 32-step simulated corporate network attack in six of 10 attempts and completed a previously unsolved industrial control challenge in three of 10 attempts.
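
To make the doubling-period framing concrete, here is a minimal sketch of the arithmetic (illustrative only: the eight-month and 4.7-month figures come from the reporting above, while the one-hour starting task length and two-year horizon are hypothetical assumptions, not AISI numbers):

```python
# Illustrative extrapolation of a "time window" style metric.
# Hypothetical assumption: tasks worth ~1 hour of expert work today,
# with capability doubling every `doubling_months` months.

def task_horizon(start_hours: float, doubling_months: float, elapsed_months: float) -> float:
    """Length of task (in expert working hours) a model could complete after elapsed_months."""
    return start_hours * 2 ** (elapsed_months / doubling_months)

for doubling_months in (8.0, 4.7):  # the late-2025 and February 2026 doubling estimates cited above
    horizon = task_horizon(start_hours=1.0, doubling_months=doubling_months, elapsed_months=24)
    print(f"Doubling every {doubling_months} months -> ~{horizon:.0f} expert-hours per task in two years")
```

The point of the sketch is simply that shaving a few months off the doubling period compounds quickly, which is why AISI keeps revising the estimate rather than treating it as fixed.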

AISI is careful not to claim broad real-world takeover capability. Its warning is narrower but still important: the length of cyber tasks frontier models can complete autonomously is doubling on a timescale of months, not years.

Our take: This is not a reason for panic, but it is a reason to shorten security planning cycles. Annual reviews are too slow for a capability curve measured in months. UK businesses should prioritise patch cadence, asset visibility, access controls and incident rehearsal before buying another dashboard.

Coupa buys London-based Rossum to expand agentic spend management

BusinessCloud reports that Coupa has acquired London-based AI firm Rossum, which provides intelligent document processing technology. Rossum was founded in Prague by three AI students who dropped out of university to build the company, and raised £72m in 2021.

The deal builds on an existing partnership between Coupa and Rossum around complex invoicing for accounts payable teams. Coupa says Rossum's specialised transactional large language model will be extended across its autonomous spend management portfolio.

Coupa CEO Leagh Turner said the company has delivered more than $300bn in customer savings over the past 20 years and believes Rossum can help customers save the next $300bn in five years. Rossum CEO Tomáš Gogár said the combination pairs Rossum's transactional intelligence with Coupa's $10tn data set.

Our take: This is the agentic AI market moving into back-office workflows with clear financial owners. Invoices, spend controls and procurement approvals are ideal AI targets because the data is structured enough to automate, but sensitive enough to demand auditability. Expect more enterprise software vendors to buy specialist AI companies rather than build every capability themselves.

Microsoft study finds frontier AI models can corrupt 25% of document content

VentureBeat reports on a Microsoft Research study warning that large language models can silently corrupt documents during multi-step delegated workflows. The researchers built DELEGATE-52, a benchmark covering 310 work environments across 52 professional domains.

The headline result is uncomfortable: even top-tier frontier models corrupted an average of 25% of document content by the end of the tested workflows. The benchmark used reversible tasks and round-trip evaluation to test whether models could carry out edits and then reconstruct the original material faithfully.
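
As a rough illustration of what a round-trip check looks like in practice, the sketch below applies a reversible edit and measures how much of the original text survives. It is not the DELEGATE-52 implementation: the forward and reverse functions are placeholders where a real pipeline would call a model, and the similarity ratio is a simple stand-in for the study's evaluation.

```python
import difflib

def round_trip_score(original: str, forward_fn, reverse_fn) -> float:
    """Apply a reversible edit, ask for it to be undone, and score how much of the
    original content survives (1.0 = lossless, lower = silent corruption)."""
    edited = forward_fn(original)     # e.g. "split this report into sections"
    restored = reverse_fn(edited)     # e.g. "reassemble the original report"
    return difflib.SequenceMatcher(None, original, restored).ratio()

# Trivial stand-ins for model calls, just to show the mechanics:
score = round_trip_score(
    "Quarterly revenue was £14.7bn, up 17% year on year.",
    lambda text: text.upper(),
    lambda text: text.lower(),
)
print(f"Round-trip similarity: {score:.2f}")  # anything below 1.0 flags a lossy round trip
```

In a production document workflow the same idea becomes an exception check: anything that does not round-trip cleanly goes to a human review queue rather than straight back into the record.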

The finding matters because many businesses are now asking AI systems to summarise, split, rewrite, reconcile and reassemble professional documents. In those settings, a fluent answer can hide a material change to the underlying evidence.

Our take: This is why document automation needs version control, sampling and exception checks. The risk is not only hallucination. It is quiet alteration of source material that looks plausible enough to pass a casual review. For legal, finance, compliance and technical teams, AI should create a review queue, not become an invisible document processor.

Anthropic passes OpenAI in business adoption, according to Ramp data

VentureBeat reports that Anthropic has overtaken OpenAI in US business adoption for the first time, based on Ramp AI Index spending data from more than 50,000 businesses. Anthropic adoption rose to 34.4% in April, while OpenAI fell to 32.3%.

Ramp's data also shows overall AI adoption among businesses reaching 50.6%. VentureBeat notes that Anthropic has quadrupled its business adoption over the past year, while OpenAI's business adoption has barely moved by comparison.

The analysis points to Claude Code as a major driver of the shift, particularly among technical teams and early adopters in software, finance and professional services.

Our take: The practical point is not whether Claude or ChatGPT is winning this month. It is that enterprise AI buying is becoming workflow-specific. Coding, finance operations, support, research and compliance may all settle on different tools. Businesses should avoid one-model procurement and instead define model choice by task, risk and integration fit.

AI sustainability pressure moves from ethics debate to procurement question

WIRED reports that AI sustainability researcher Sasha Luccioni is launching Sustainable AI Group with former Salesforce sustainability chief Boris Gamazaychikov. The venture will help companies understand and reduce the environmental impact of their AI use.

Luccioni told WIRED that companies are facing employee, board and director pressure to quantify how AI use affects ESG goals. She argues businesses need to know where models run, which grids power the data centres and what supply chain emissions sit behind their AI tooling.

The article also notes that Europe is already moving toward transparency requirements through the EU AI Act, while international bodies are trying to improve data centre energy reporting.

Our take: Sustainable AI is becoming a procurement and reporting issue, not a branding exercise. UK firms do not need to stop using AI, but they do need to ask better questions: where does the workload run, what model size is actually required, and can suppliers provide credible energy and location data?

