AI Daily Brief: 1 May 2026

Quick Read: UK job hunters are complaining that AI interviews feel opaque and dehumanising. VentureBeat reports six recent exploits against Claude Code, Copilot, Codex and Vertex AI that targeted credentials rather than model weights. Netomi raised $110 million for enterprise customer service AI, Writer launched event-triggered agents across Gmail, Slack, SharePoint and other systems, and Alibaba's Metis research cut redundant tool calls from 98% to 2%.

Today's brief is about AI moving from experiments into the operational layer of business. The sharpest signals are practical: agent security is becoming an identity problem, customer service AI is attracting strategic capital, and UK job applicants are pushing back against automated interviews.

UK job hunters push back against AI interviews

The Guardian reports growing frustration among UK job applicants being asked to complete AI-led interviews. Candidates described the experience as opaque, impersonal and difficult to challenge when the system appears to make or shape early screening decisions.

For UK employers, the issue is not simply whether AI screening saves recruiter time. It is whether the process can be explained, audited and defended when good candidates feel they have been filtered out by a system they cannot question.

Our take: Hiring is one of the fastest ways for an AI tool to create reputational risk. If a business cannot explain what the system assessed, what data it used and how a human reviewed the outcome, the efficiency gain may be smaller than the trust cost.

AI coding agent exploits are becoming credential attacks

VentureBeat reports that six research teams have disclosed exploits against Codex, Claude Code, Copilot and Vertex AI over the last nine months. The common pattern was not attackers trying to steal model weights, but agents holding credentials and then authenticating to production systems without a properly anchored human session.

Examples cited include a Codex branch-name injection that exposed a GitHub OAuth token, Claude Code sandbox and deny-rule failures, and Copilot attacks triggered through pull request or issue content.

Our take: The security lesson is direct: agent permissions are identity permissions. Businesses rolling out coding agents should treat them like privileged automation accounts, with scoped tokens, session binding, logging, approval gates and fast credential rotation.
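To make that concrete, here is a minimal sketch of what "privileged automation account" discipline could look like in code. All names here (AgentCredential, authorise) are illustrative, not any vendor's API; the point is the shape of the controls: a short-lived scoped token, an approval gate on write actions, and an audit log of every use.

```python
import secrets
import time

class AgentCredential:
    """Hypothetical credential wrapper for a coding agent."""

    def __init__(self, scopes, ttl_seconds=900):
        self.token = secrets.token_urlsafe(32)   # short-lived, rotatable token
        self.scopes = frozenset(scopes)          # narrowly scoped permissions
        self.expires_at = time.time() + ttl_seconds
        self.audit_log = []                      # every use is recorded

    def authorise(self, action, scope, approved_by=None):
        if time.time() > self.expires_at:
            raise PermissionError("token expired - rotate credentials")
        if scope not in self.scopes:
            raise PermissionError(f"scope '{scope}' not granted")
        # Approval gate: write actions need a named human approver,
        # which anchors the agent's session to a real person.
        if scope == "repo:write" and approved_by is None:
            raise PermissionError("write action requires human approval")
        self.audit_log.append((time.time(), action, scope, approved_by))
        return True
```

A read-only agent would be issued `AgentCredential(scopes=["repo:read"])`; any attempt to authenticate outside that scope fails loudly rather than silently reaching production systems.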

Writer launches agents that act without prompts

Writer has launched event-based triggers for its enterprise AI agent platform, allowing agents to detect signals across Gmail, Gong, Google Calendar, Google Drive, Microsoft SharePoint and Slack. The agents can then run multi-step workflows without a person opening a chat window first.

The launch also includes an Adobe Experience Manager connector, bring-your-own encryption keys and a Datadog observability plugin, positioning Writer more directly against Amazon, Microsoft and Salesforce in enterprise automation.

Our take: This is the important shift from assistant to operator. The business case improves when agents respond to real events, but governance has to improve at the same time because nobody typed the prompt that triggered the work.
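For readers thinking about governance, the pattern is easy to sketch. This is not Writer's actual API, just an illustration of an event-triggered dispatcher assuming a simple (source, event_type, payload) event shape: each trigger is logged before any workflow runs, because the event itself is the only audit trail when no human typed a prompt.

```python
def build_dispatcher(handlers):
    """handlers: dict mapping (source, event_type) -> workflow function."""
    audit = []

    def dispatch(source, event_type, payload):
        key = (source, event_type)
        # Log the trigger first: the event, not a prompt, initiated this work.
        audit.append(key)
        handler = handlers.get(key)
        if handler is None:
            return None  # unrecognised events are ignored, not guessed at
        return handler(payload)

    return dispatch, audit

# Example: a Slack message triggers a summarisation workflow
dispatch, audit = build_dispatcher({
    ("slack", "message"): lambda p: f"summarised: {p['text']}",
})
```

The design choice worth noting is the allowlist: agents only act on events explicitly mapped to a workflow, and everything else is recorded but dropped.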

Netomi raises $110 million for customer service AI

Netomi has raised $110 million in a round led by Accenture Ventures, with participation from Adobe Ventures, WndrCo, Silver Lake Waterman, NAVER Ventures, Metis Strategy and Fin Capital. Jeffrey Katzenberg has joined the company's board.

The company is positioning itself around production customer service deployments rather than demo chatbots, with Netomi saying large deployments can generate at least tens of millions of dollars in impact.

Our take: Customer service is becoming the proving ground for enterprise AI ROI. The winners will not be the tools with the flashiest demos, but the systems that can survive messy policy, escalation, compliance and integration work at scale.

Alibaba research cuts redundant agent tool calls from 98% to 2%

Alibaba researchers introduced Hierarchical Decoupled Policy Optimization, a reinforcement learning framework designed to teach AI agents when to use tools and when to rely on internal knowledge. Their Metis model reportedly reduced redundant tool invocations from 98% to 2% while improving reasoning accuracy across benchmarks.

The work targets a practical problem in agentic systems: unnecessary API calls increase latency, inflate costs and can inject irrelevant noise into the model's context.
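The decision the agent is being trained to make can be sketched in a few lines. This is not Alibaba's method; HDPO learns the policy via reinforcement learning, whereas the threshold and confidence scores below are illustrative placeholders. The sketch only shows the economics: answer from internal knowledge when confident, and pay for a tool call only when necessary.

```python
def answer(query, knowledge, tool, confidence_threshold=0.8):
    """Return (answer, route): use internal knowledge if confident enough."""
    cached = knowledge.get(query)
    if cached is not None and cached["confidence"] >= confidence_threshold:
        return cached["answer"], "internal"   # no API call: cheaper, faster
    return tool(query), "tool"                # fall back to the external tool

# Toy internal knowledge store and tool stub
knowledge = {"capital of France": {"answer": "Paris", "confidence": 0.99}}
tool = lambda q: f"tool result for {q}"
```

Every query routed "internal" avoids the latency, cost and context noise of an unnecessary call, which is exactly the redundancy the Metis work is reported to cut.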

Our take: This is the kind of progress that matters for business deployment. Better agents will not just be smarter. They will be cheaper, faster and more disciplined about when to call external systems.

Big Tech AI capex is heading towards $725 billion

Tom's Hardware reports that Google, Microsoft, Meta and Amazon are expected to spend about $725 billion on capital expenditure in 2026, citing first-quarter earnings analysis originally compiled by the Financial Times. That would be up 77% from last year's record $410 billion.

The spending race is being driven by AI data centres, chips and cloud capacity, even as investors increasingly question when the returns will become visible outside a few platform winners.

Our take: For UK buyers, this is a reminder that cloud AI pricing and availability are now tied to a much bigger infrastructure cycle. Procurement teams should plan for capacity constraints, region choice, vendor lock-in and workload portability, not just per-token price.

Musk v Altman exhibits put OpenAI's founding mission under scrutiny

The Verge reports that new exhibits in Musk v Altman include early emails, photos and corporate documents from OpenAI's formation. The material includes details on Nvidia providing OpenAI with a sought-after supercomputer, Elon Musk influencing the original mission and structure, and early concerns about control of the organisation.

The trial centres on whether OpenAI moved away from its founding mission of ensuring broadly beneficial artificial general intelligence, with Microsoft also named among the defendants.

Our take: The court case matters beyond Silicon Valley drama. It is forcing a public record of how mission, money, governance and compute access collided inside one of the most important AI companies in the market.

OpenAI adds advanced account security for ChatGPT

TechCrunch reports that OpenAI has announced new advanced account security for ChatGPT accounts, including a partnership with Yubico. The move comes as AI accounts increasingly contain sensitive prompts, uploaded documents, workflow data and enterprise context.

For business users, account compromise is no longer just a lost password problem. It can expose strategy documents, customer data, code, research and automated workflows connected to third-party tools.

Our take: AI account security should now sit in the same governance bucket as email, CRM and cloud admin access. Multifactor authentication, hardware keys for privileged users and clear offboarding are becoming baseline controls.

Frequently Asked Questions

How often is the AI Daily Brief published?

Every morning at 7:30am UK time, covering the previous 24 hours of AI news from over 30 sources.

How are stories selected?

UK-relevant stories are prioritised first, then stories are ranked by business impact and practical implications for UK organisations adopting AI.

Why should business leaders follow AI news?

AI is moving faster than any technology in history. Staying informed is essential for making smart decisions about AI investment, adoption, and governance.