AI Daily Brief: 13 May 2026
Quick Read: The UK Government's Sovereign AI fund has invested in Isomorphic Labs as the London AI drug discovery firm scales. Google says AI-assisted hacking has become an industrial-scale threat within three months, while a Shai-Hulud supply chain worm hit 172 npm and PyPI packages. Google is pushing Gemini deeper into Android and new Googlebook laptops, and xAI has added 19 more gas turbines at its Mississippi data centre site despite an ongoing lawsuit.
Today is about AI moving from novelty into infrastructure. The sharpest stories are not just model launches, but the systems around them: UK sovereign investment, AI-assisted cyber attacks, agentic supply chain risk, embedded Gemini devices, and the growing energy footprint of frontier compute.
UK Sovereign AI backs Isomorphic Labs for AI drug discovery
The UK Government's Sovereign AI fund has named Isomorphic Labs as its latest investment, backing the London-founded company as part of a new fundraise. Isomorphic, founded by Sir Demis Hassabis, is using frontier AI to design and develop medicines, building on the AlphaFold breakthrough from Google DeepMind.
For UK businesses, the important signal is that sovereign AI policy is now moving from statements into equity investment. The fund has backed three startups with direct investment and nine firms in total when compute support is included, which gives promising AI companies a clearer route to scale without leaving Britain too early.
Our take: This is the strongest version of sovereign AI: not trying to build everything inside government, but using state capital to keep strategic capability anchored in the UK. Drug discovery is also a sensible place to start because the commercial upside, research depth and national benefit are easier to explain than another generic chatbot.
Google says AI-powered hacking is now industrial scale
Google's threat intelligence group says AI-powered hacking has moved from an emerging problem to an industrial-scale threat in just three months. The Guardian reports that criminal and state-linked actors from China, North Korea and Russia are using commercial models including Gemini, Claude and OpenAI tools to refine and scale attacks.
The report also says one criminal group was close to using an AI-developed zero-day exploit for a mass exploitation campaign. That turns AI security from a future board risk into an immediate operational issue for software teams, managed service providers and regulated organisations.
Our take: The practical message is simple: patching windows are going to shrink. If attackers can find, test and adapt exploits faster, UK firms cannot keep treating vulnerability management as a quarterly hygiene exercise.
Bank of England regulator warns advanced AI could disrupt finance
Sam Woods, chief executive of the Prudential Regulation Authority, warned that it is reasonable to expect significant disruption as advanced AI systems become better at finding weaknesses in banking systems. The warning referenced powerful systems including Anthropic's Claude Mythos and ChatGPT 5.5 Instant.
Business Today also reported that Germany's BaFin is creating a new division to inspect banks and financial firms for cyber preparedness, while the IMF has warned that cyber risk is increasingly about correlated failures across shared software, cloud services and payment networks.
Our take: Finance is the early warning system for every other sector. If banks are being pushed to shorten patch cycles and prove AI cyber resilience, suppliers, insurers and professional services firms should expect the same questions to flow through procurement and audit processes.
Shai-Hulud worm hits 172 npm and PyPI packages
VentureBeat reports that the Mini Shai-Hulud supply chain campaign compromised 172 npm and PyPI packages across 403 malicious versions. The campaign first hit 84 versions of 42 TanStack packages between 19:20 and 19:26 UTC on 11 May. The worm harvests credentials from more than 100 file paths, including AWS keys, SSH private keys, GitHub tokens, Kubernetes service accounts, Docker configs and AI agent configuration files.
The worrying detail is that the malicious packages carried valid SLSA Build Level 3 provenance attestations. In other words, signed provenance did not stop poisoned packages from reaching developers because the publishing pipeline itself was abused.
Our take: This is a board-level software supply chain story, not just a developer incident. Agent credentials, MCP server tokens and CI secrets are now part of the attack surface, so organisations need to audit what their AI tools can reach before the next package incident makes that painfully obvious.
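As a starting point for that audit, here is a minimal sketch that checks a workstation or CI runner for the kinds of credential files the campaign reportedly harvests. The path list is an illustrative subset we have chosen, not the worm's actual list of more than 100 locations:

```python
# Minimal audit sketch: flag credential files of the kind the Shai-Hulud
# campaign reportedly harvests. Paths below are illustrative examples only.
from pathlib import Path

# Hypothetical subset of sensitive paths, relative to the home directory.
CANDIDATE_PATHS = [
    ".aws/credentials",        # AWS access keys
    ".ssh/id_rsa",             # SSH private key
    ".ssh/id_ed25519",
    ".docker/config.json",     # Docker registry auth
    ".kube/config",            # Kubernetes service credentials
    ".config/gh/hosts.yml",    # GitHub CLI tokens
]

def find_exposed_credentials(home: Path) -> list[Path]:
    """Return candidate credential files that exist under `home`."""
    return [home / rel for rel in CANDIDATE_PATHS if (home / rel).is_file()]

if __name__ == "__main__":
    hits = find_exposed_credentials(Path.home())
    for path in hits:
        print(f"review access to: {path}")
    print(f"{len(hits)} candidate credential file(s) found")
```

A script like this only tells you what a compromised package could have read; the follow-up work is deciding which of those credentials your AI agents and CI jobs genuinely need.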
Google pushes Gemini deeper into Android and laptops
Google used its Android Show: I/O Edition cycle to push Gemini further into everyday devices, including a new line of Android-based Googlebook laptops reported by The Register. Demonstrations included contextual suggestions from a Magic Pointer feature and drag-and-drop AI image composition built directly into the operating system.
The shift matters because it moves AI from a separate chatbot tab into the default interface layer. For businesses, that affects device management, data governance, staff training and procurement decisions, especially where employees use consumer-grade AI features on work devices.
Our take: The next AI adoption wave will not always arrive through a software licence. It will come preloaded in the operating system, browser and hardware stack, which means IT policies need to catch up with the devices people already want to buy.
Google Cloud customers dispute huge AI API bills after key leaks
The Register says several Google Cloud customers have seen compromised API keys used to run expensive image and video inference workloads, leaving some with bills of tens of thousands of dollars. One customer said a historically small Google Maps bill ballooned to more than $10,000 in charges within minutes, driven by calls to Veo 3 and charges for Gemini image output tokens.
Google told The Register that the issue is industry-wide and usually caused by leaked credentials, including API keys committed to public repositories. The dispute highlights a governance gap: many firms are turning on powerful AI services without spend limits, key restrictions and monitoring that match the cost profile of generative workloads.
Our take: AI cost control is now a security control. If one leaked key can trigger five-figure inference bills in minutes, finance teams need the same alerting and kill switches for model usage that security teams expect for suspicious login activity.
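One cheap control is scanning a repository for Google-style API keys before it is pushed anywhere public. The sketch below matches the widely documented "AIza..." key format; treat it as illustrative, not a substitute for a full secret scanner:

```python
# Minimal sketch of a repository scan for Google-style API keys before code
# reaches a public repo. The regex covers the common "AIza..." format only.
import re
from pathlib import Path

# Widely used pattern for Google API keys: "AIza" followed by 35 key characters.
GOOGLE_API_KEY = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def scan_for_keys(root: Path) -> list[tuple[Path, int]]:
    """Return (file, line_number) pairs where a key-like string appears."""
    findings = []
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binaries and unreadable files
        for lineno, line in enumerate(text.splitlines(), start=1):
            if GOOGLE_API_KEY.search(line):
                findings.append((path, lineno))
    return findings
```

Pairing a scan like this with per-key API restrictions and budget alerts addresses both halves of the problem: keys leaking, and leaked keys being able to spend freely.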
New RSL Media standard targets AI likeness and voice consent
A new public benefit non-profit is extending the Really Simple Licensing specification with the draft RSL Media Human Consent Standard, covering creative works as well as names, likenesses, voices and other identity attributes. The Register reports that supporters include Hollywood figures such as Cate Blanchett.
The planned registry would let people verify identities, set permissions for the use of their work and likeness, and encode those permissions for machine consumption. The difficult question is whether AI services that ignore the signal will face meaningful consequences.
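To make "encode those permissions for machine consumption" concrete, here is a purely hypothetical shape for such a record. The actual RSL Media Human Consent Standard is still a draft and its fields may look nothing like this:

```python
# Purely illustrative consent record; field names are our invention, not the
# draft RSL Media Human Consent Standard.
consent_record = {
    "subject": "Example Performer",
    "identity_attributes": ["name", "likeness", "voice"],
    "permissions": {
        "ai_training": "deny",
        "synthetic_voice": "allow-with-licence",
    },
    "verified": True,
}

def permission_for(record: dict, use: str) -> str:
    """Look up the declared permission for a use; default to deny."""
    return record.get("permissions", {}).get(use, "deny")
```

The deny-by-default lookup reflects the open question in the story: a signal like this only matters if consumers of the registry treat missing or ignored permissions as a refusal, not a free pass.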
Our take: Machine-readable consent will not solve AI copyright by itself, but it does give businesses a cleaner way to express rights and preferences. Any organisation using synthetic voice, avatars or creator content should watch this carefully because procurement teams will increasingly ask where consent is recorded.
xAI adds 19 gas turbines at Mississippi data centre site
WIRED reports that xAI has added 19 portable natural gas turbines to its second data centre campus in Southaven, Mississippi, over the past two months. Internal emails seen by WIRED suggest the additions bring the site to 46 turbines and more than 500 megawatts of added natural gas capacity since mid-March.
The expansion comes while xAI faces a lawsuit from the NAACP and environmental groups alleging Clean Air Act violations at the site. The story underlines the growing tension between rapid AI compute build-out, local permitting and the energy systems needed to support frontier training and inference.
Our take: AI infrastructure is becoming a planning, energy and public trust issue. For UK leaders, the lesson is that data centre capacity cannot be discussed separately from power, air quality, local consent and grid resilience.
Quick Hits
- Perceptron launched Mk1, a video analysis model priced at $0.15 per million input tokens and $1.50 per million output tokens.
- Sam Altman testified in the Musk v Altman trial as OpenAI's founding structure and charitable mission came under fresh scrutiny.
- The Guardian reported renewed concern over live facial recognition in retail, including shoppers wrongly accused by AI systems.
Frequently Asked Questions
How often is the AI Daily Brief published?
Every morning at 7:30am UK time, covering the previous 24 hours of AI news from over 30 sources.
How are stories selected?
UK-relevant stories are prioritised first, then by business impact and practical implications for UK organisations adopting AI.
Why should business leaders follow AI news?
AI is moving faster than any technology in history. Staying informed is essential for making smart decisions about AI investment, adoption, and governance.