Liz Kendall and the £500m AI bet: Why the minister says she doesn’t use AI at work

Liz Kendall is trying to sell the public on a technology she says she does not use in her own job. That tension sits at the centre of the latest debate around the minister, who unveiled the government’s first investment from a £500m sovereign AI fund while urging the UK to embrace artificial intelligence as a driver of security and growth. Her message is simple: the country should not hesitate. Yet her admission, made during an interview conducted in an AI-powered driverless car, exposed a more complicated reality about how government, workers and ministers are actually using the tools they want others to trust.
Why Liz Kendall’s AI message matters now
The timing is significant because the government has just committed public backing to British AI firms through the Sovereign AI unit, which is designed to act like a venture capital fund. Kendall called the investment “crucial to our national security and economic prosperity”, framing AI not as a future option but as an immediate policy priority. At the same time, she acknowledged public concern about jobs and cybersecurity, two pressures that continue to shape the political case for faster adoption.
That is where Kendall becomes more than a ministerial soundbite. Her own disclosure that she uses AI in private life, but not at work, underlines the distance between personal experimentation and institutional deployment. The government expects officials to use AI tools in specific tasks, including rewriting CVs for jobseekers and summarising consultation responses. But the political challenge is to convince the public that the same technology can improve state capacity without weakening accountability.
What lies beneath the headline about Liz Kendall
The deeper story is not whether a minister uses AI for work. It is about the state’s attempt to normalise AI while retaining control over where it is applied. Former minister Peter Kyle previously unveiled government tools powered by AI, signalling a broader push inside Whitehall. Kendall’s remarks suggest that the technology is now part of the policy conversation, but still not fully embedded in every decision-making process.
Her recent demonstration in a driverless car was meant to showcase what she sees as the practical promise of the technology. Her private-life example was notably ordinary: she used AI to help identify a possible ingredient behind an allergic reaction and then checked the information with the National Eczema Society and a pharmacist. That anecdote matters because it presents AI as a useful assistant rather than an authority. It also reflects the broader government posture: embrace the speed, but verify the output.
The investment itself is equally revealing. The first company to receive equity backing is Callosum, a London-based firm that helps computer chips work together efficiently to train and operate AI models. Six further startups will receive access to government-funded supercomputers, including firms focused on biological foundation models, autonomous AI agents and world models. In practical terms, the state is not just funding ideas; it is underwriting infrastructure.
Expert perspectives on jobs, security and scale
Kendall has acknowledged the risk of employment disruption, saying earlier this year that “some jobs will go” as AI automates certain tasks and roles, even as new roles emerge. That warning is important because it places her at the intersection of two competing truths: AI can raise productivity, but it can also unsettle labour markets before new opportunities arrive.
Rachel Reeves, the chancellor, argued that backing national AI champions can help ensure that companies “start, scale and stay here in Britain”. Danyal Akarca, co-founder of Callosum, said the UK was the “natural place” to build his company because of university talent and private AI labs. Those comments point to the same conclusion from different angles: Britain is trying to convert research strength into commercial leverage before talent and capital move elsewhere.
Regional and global impact of the sovereign AI strategy
The wider implications extend beyond one funding round. The government is effectively trying to position Britain as a place where AI companies can access both capital and computing power without leaving the country. That could matter for sectors tied to medicine, chips and model development, particularly if the startups backed through the fund succeed in attracting follow-on investment.
There is also a geopolitical layer. Kendall described the investment as tied to national security, while the government’s official framing presents homegrown AI capacity as a strategic asset. In that sense, Kendall is not just selling a tech policy; she is helping define how Britain intends to compete in an era where computing power, data and model development are becoming instruments of national strategy. The open question is whether the public will accept a government that promotes AI so strongly while its own minister says she prefers to keep it out of the workplace.