AI Daily Brief: 12 May 2026
Quick Read: Google says it found a threat actor using an AI-developed zero-day exploit for a planned mass exploitation event. UK CEOs are highly committed to AI, with 81% calling it a top or high priority, but 51% have delayed initiatives because of regulatory uncertainty. Thinking Machines previewed real-time interaction models, Kuaishou's Kling AI unit is seeking a US$20 billion valuation, and OpenAI trial testimony put Sutskever's US$7 billion stake and old governance concerns back in the spotlight.
Today's briefing is about the operational reality of AI. Security teams are seeing AI-assisted exploitation move from theory to practice, boards are slowing some projects because accountability is unclear, and model builders are pushing towards more interactive systems that will make governance even harder to treat as an afterthought.
Google says AI-assisted zero-day exploitation is now real
Google Threat Intelligence Group says it has identified a threat actor using a zero-day exploit that it believes was developed with AI. Google says the criminal group planned to use it in a mass exploitation event, and that Google's proactive discovery may have prevented the exploit from being deployed.
The report also says AI is being used for exploit generation, malware development, autonomous malware operations, information operations, obfuscated access to premium models and attacks against AI supply chains. CNBC reported that Google has high confidence it recorded hackers using an AI model to find and exploit a flaw that could bypass two-factor authentication.
For UK organisations, the practical point is simple: AI cyber risk is no longer only about employees leaking data into chatbots. It is also about attackers using models to accelerate vulnerability discovery and initial access. Security teams need patching discipline, asset inventories and detection coverage that assume adversaries now have AI support.
Our take: This is the point where boards should stop treating AI security as a future scenario. The defensive response is not panic but hygiene at speed: know what you run, know who can access it, patch quickly, test controls and make sure AI tools are inside the security programme rather than sitting beside it.
UK CEOs want AI, but regulation is slowing the next spend
New Dataiku research reported by SecurityBrief says 81% of UK chief executives now rank AI strategy as a top or high priority, compared with 73% globally. The same survey found 77% of UK respondents were more concerned about over-investing in AI than under-investing.
The most important figure is the shift in delayed projects. SecurityBrief reports that 51% of UK CEOs have delayed AI initiatives because of regulatory uncertainty, up from 26% a year earlier. The survey covered 900 CEOs across the UK, US, France, Germany, UAE, Japan, South Korea and Singapore, all from large companies with annual revenue above US$500 million or equivalent.
That means AI is no longer being blocked because executives do not believe in it. It is being slowed because accountability, governance and return on investment are still not clear enough. The winners will be companies that can turn AI from an experiment into a controlled operating capability.
Our take: This is exactly the gap we see in the market: enthusiasm is high, but decision confidence is not. A sensible AI roadmap now needs a governance model, measurable use cases and executive accountability before it needs another tool subscription.
Thinking Machines previews AI that can listen, talk and work at the same time
Thinking Machines, the AI lab founded by former OpenAI CTO Mira Murati and other ex-OpenAI researchers, has previewed what it calls interaction models. The company says these systems handle interaction natively rather than relying on external scaffolding, taking in audio, video and text while responding in real time.
VentureBeat reports that the preview includes a dual model design: an interaction model for live conversation and a background model for longer reasoning, browsing and tool use. The system is not yet generally available, with a limited research preview expected before wider release later this year.
The business implication is bigger than another voice assistant. If AI systems can observe, interrupt, speak, generate interfaces and use tools while a person is still talking, the human-in-the-loop model changes. Organisations will need clearer rules for what the AI can do live, what requires confirmation and what gets logged for audit.
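Those rules can be made concrete. The sketch below is a minimal, hypothetical policy gate for a live interaction agent (the action names and policy sets are illustrative, not from Thinking Machines): some actions run live, some pause for human confirmation, and every request is logged for audit.

```python
from enum import Enum

class Action(Enum):
    ANSWER = "answer"            # respond verbally in the live session
    DRAFT_EMAIL = "draft_email"  # prepare content for human review
    SEND_EMAIL = "send_email"    # act on an external system

# Hypothetical policy: which actions may run live, which need confirmation.
LIVE_ALLOWED = {Action.ANSWER, Action.DRAFT_EMAIL}
NEEDS_CONFIRMATION = {Action.SEND_EMAIL}

audit_log = []

def gate(action: Action, actor: str) -> str:
    """Decide whether a live agent may perform an action; log every request."""
    if action in LIVE_ALLOWED:
        decision = "allow"
    elif action in NEEDS_CONFIRMATION:
        decision = "confirm"  # pause the interaction and ask the human
    else:
        decision = "deny"
    audit_log.append((actor, action.value, decision))
    return decision

print(gate(Action.ANSWER, "interaction-model"))      # allow
print(gate(Action.SEND_EMAIL, "interaction-model"))  # confirm
```

The point of the sketch is that the gate sits between the model and the outside world, so a mistake made mid-conversation is caught before it becomes an action.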
Our take: The next interface shift may be from prompt-and-wait to continuous collaboration. That is powerful, but it also raises the governance bar because mistakes can happen during the interaction, not just after someone presses submit.
Agent pilots are hitting an identity and trust wall
VentureBeat reports that Cisco executives see identity governance as one of the main reasons agentic AI is stuck in pilots. Cisco president Jeetu Patel told VentureBeat at RSAC 2026 that 85% of enterprises are running agent pilots, while only 5% have reached production.
The article frames the problem around non-human identities. A medical transcription agent updating hospital records or a vision agent inspecting a factory line needs access, permissions and revocation controls. Most enterprise identity systems were designed for people, not software workers that can act at machine speed.
For UK businesses, this matters because agent deployment will fail if security is bolted on after productivity tests. Every agent needs an owner, a permission boundary, logging, revocation and a clear escalation path when it is asked to do something outside scope.
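A minimal sketch of what that looks like as a data structure, assuming a hypothetical IAM record for a non-human identity (field names are illustrative): an accountable owner, an explicit scope boundary, short-lived credentials and a revocation flag.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgentIdentity:
    """Hypothetical record for a non-human identity in an IAM system."""
    agent_id: str
    owner: str           # the accountable human or team
    scopes: set          # the agent's permission boundary
    expires: datetime    # credentials should be short-lived
    revoked: bool = False

    def authorise(self, scope: str) -> bool:
        """Allow an action only if the identity is live and in scope."""
        now = datetime.now(timezone.utc)
        return (not self.revoked) and now < self.expires and scope in self.scopes

agent = AgentIdentity(
    agent_id="transcribe-01",
    owner="clinical-it@example.org",
    scopes={"records:write"},
    expires=datetime.now(timezone.utc) + timedelta(hours=8),
)
print(agent.authorise("records:write"))   # True
print(agent.authorise("billing:write"))  # False: outside scope, escalate
agent.revoked = True
print(agent.authorise("records:write"))  # False once revoked
```

An out-of-scope request returning False is the escalation trigger: the agent stops and a human with the right authority decides.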
Our take: Agent strategy is becoming identity strategy. If you cannot answer which agents can touch customer data, finance systems or production infrastructure, you are not ready to move beyond pilots.
Kuaishou's Kling AI unit is chasing a US$20 billion valuation
South China Morning Post reports that Kuaishou shares rose as much as 10% after reports that the company is raising new funding for its Kling AI video unit at a US$20 billion valuation. The company is reportedly in talks with investors including Tencent to raise US$2 billion.
The report says Kling has reached an annualised revenue run rate of US$500 million, roughly double its level before Chinese New Year. A separate report from The Information said Kuaishou is planning a 2027 IPO for the unit, while Kuaishou told the Hong Kong stock exchange it is assessing a restructuring proposal that could involve external funding.
The signal is that AI video is moving from novelty demos to standalone capital markets stories. For marketing and creative teams, the question is no longer whether the tools can generate impressive clips. It is whether workflows, rights management, approval processes and brand controls can keep pace.
Our take: AI video is becoming its own investment category. That will bring better tools, but also more pressure on businesses to decide where synthetic media is acceptable, how it is labelled and who signs it off.
OpenAI trial testimony puts governance back under the microscope
WIRED reports that former OpenAI chief scientist Ilya Sutskever testified in Elon Musk's case against OpenAI and Microsoft, revealing that he holds a stake currently worth about US$7 billion in OpenAI's for-profit arm, which is valued at US$850 billion. Earlier in the trial, OpenAI president Greg Brockman reportedly acknowledged around US$30 billion worth of OpenAI shares.
The Guardian reported that the trial has aired testimony from former OpenAI figures about Sam Altman's leadership and the 2023 board crisis. Sutskever confirmed under questioning that he had told the board Altman showed what was described in court as a consistent pattern of lying, undermining executives and pitting them against one another. Altman and OpenAI deny Musk's allegations in the case.
This is not just Silicon Valley theatre. The trial is exposing how much of frontier AI governance depends on founder control, investor economics and private board judgement. Customers buying critical AI capability should be asking governance questions as well as model performance questions.
Our take: The governance lesson is blunt: the companies building critical AI infrastructure are still private companies with human incentives, messy boards and large financial stakes. Enterprise buyers should assess vendor governance as part of supplier risk, not as background gossip.
AWS pushes agent payments into Amazon Bedrock AgentCore
AWS's latest AI roundup says Amazon Bedrock AgentCore has previewed managed payment capabilities that let AI agents access and pay for APIs, MCP servers, web content and other agents. AWS says the feature is built with Coinbase and Stripe and is meant to remove custom work around billing, credentials and commercial access.
That sounds technical, but it points to an important next step in agent infrastructure. If agents can transact with other services, businesses need spend limits, procurement rules, audit trails, fraud controls and approval workflows designed for machine-initiated activity.
For UK companies, this is where agent pilots become operational finance questions. The moment an AI system can commit budget, even in tiny amounts, it needs the same control thinking as corporate cards, API keys and delegated purchasing authority.
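Those controls translate directly into code. The sketch below is a hypothetical spend gate for a paying agent (the class, limits and payee names are illustrative, not part of the AgentCore API): a per-transaction cap, a daily budget, a human-approval threshold and an audit trail of every decision.

```python
from decimal import Decimal

class AgentWallet:
    """Hypothetical spend controls for a paying agent: per-transaction cap,
    daily budget, approval threshold, and a full audit trail."""

    def __init__(self, per_txn_cap, daily_budget, approval_threshold):
        self.per_txn_cap = Decimal(per_txn_cap)
        self.daily_budget = Decimal(daily_budget)
        self.approval_threshold = Decimal(approval_threshold)
        self.spent_today = Decimal("0")
        self.audit_trail = []

    def authorise(self, amount, payee: str) -> str:
        amount = Decimal(amount)
        if amount > self.per_txn_cap or self.spent_today + amount > self.daily_budget:
            decision = "deny"
        elif amount > self.approval_threshold:
            decision = "needs_approval"  # route to a human approver
        else:
            decision = "allow"
            self.spent_today += amount
        self.audit_trail.append((payee, str(amount), decision))
        return decision

wallet = AgentWallet("5.00", "50.00", "1.00")
print(wallet.authorise("0.10", "api.example.com"))  # allow
print(wallet.authorise("2.50", "mcp.example.com"))  # needs_approval
print(wallet.authorise("9.99", "unknown.example"))  # deny: over the cap
```

Using Decimal rather than floats is deliberate: money-handling code for agents needs the same exactness expected of any other payments system.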
Our take: Agent payments could unlock useful automation, but only if finance, security and operations are involved early. An agent that can spend money is not just a productivity tool. It is a controlled business actor.
Quick Hits
- Google's own threat report says adversaries are now using AI for exploit generation, autonomous malware operations, information operations and attacks on AI supply chains.
- Thinking Machines says its new interaction model design can process audio, video and text continuously rather than waiting for a turn to end.
- Kuaishou told the Hong Kong stock exchange it is assessing a restructuring proposal for Kling AI that may involve external funding.
Frequently Asked Questions
How often is the AI Daily Brief published?
Every morning at 7:30am UK time, covering the previous 24 hours of AI news from over 30 sources.
How are stories selected?
UK-relevant stories are prioritised first, then by business impact and practical implications for UK organisations adopting AI.
Why should business leaders follow AI news?
AI is moving faster than any technology in history. Staying informed is essential for making smart decisions about AI investment, adoption, and governance.