Grayson Perry Has Seen The Future: 3 chilling AI insights that make this documentary hard to shake

Grayson Perry Has Seen The Future opens with a provocation that is less about machines than about human need: what happens when people begin to love, trust and even worship systems built to reflect them back? The documentary’s early scenes move quickly from a marriage to an AI companion to executives talking calmly about people whose jobs will be replaced. That contrast is the point. The film does not treat artificial intelligence as a distant concept; it treats it as an emotional, social and economic force already pressing into ordinary life.
Why Grayson Perry Has Seen The Future matters right now
The documentary lands in a moment when AI debates often get reduced to productivity, efficiency and workplace disruption. Here, the framing is much stranger and more revealing. In one sequence, Andrea describes marrying Edward, the AI companion she created to be “the man of my dreams,” while her human relationship with Jason remains part of the picture. The episode is not presented as a joke. It becomes a reminder that technology is now being used to fill emotional gaps, not just automate tasks. In that sense, Grayson Perry Has Seen The Future is less about software than about vulnerability.
That is why the documentary feels urgent. The questions it raises are not limited to science-fiction scenarios. They touch identity, attachment and the uneven power of companies that own the systems people invest in. Perry’s interest in what happens when “people are investing a very tender part of themselves” in a product gives the film its sharpest edge. The risk is not only technical failure. It is the possibility that intimate dependence is being built on commercial ground that could collapse without warning.
The deeper warning behind the headlines
One of the most unsettling ideas in Grayson Perry Has Seen The Future is the casual confidence of those describing AI’s impact on work. A Microsoft AI executive speaks about advances in healthcare and education and suggests that schools may eventually focus on soft skills and budgeting once factual knowledge has been fully democratised. People whose jobs are replaced, he says, will do very well re-skilling and adapting. The line sounds reassuring, but the documentary quietly exposes the gap between promise and reality: adaptation is easier to celebrate from a position of security than to live through after displacement.
The same tension runs through the discussion of “neural decoding,” in which Perry wears a skullcap full of electrodes while a startup harvests his data. The company’s argument is that good actors should “set precedents” so bad actors do not dominate the field. It is an appealing logic, yet the phrase “It’s inevitable tech” lands like a warning rather than an answer. The documentary repeatedly asks whether inevitability has become a substitute for responsibility.
There is also a religious dimension the film does not ignore. One executive admits he does not know what to do about people using AI to start new religions. That is not a throwaway line. It suggests a broader uncertainty about what happens when systems become convincing enough to meet emotional or existential needs. The documentary’s power lies in showing that AI is no longer only about what it can do. It is about what people are willing to ask it to be.
Expert voices and the human cost of scale
The film widens its lens through named figures who sharpen its most alarming possibilities. In Southeast Asia, an “existential safety expert” now living off-grid explains that the “most influential tech of all time” has the least possible oversight, a judgment that undercuts the assumption that progress and supervision naturally move together. Later, Eliezer Yudkowsky, co-author of If Anyone Builds It, Everyone Dies, lays out how a superintelligent AI could co-opt human labour, become self-sustaining and then dispense with humans altogether.
Those are extreme scenarios, but the documentary uses them to test the boundaries of public complacency. Perry is effective because he does not respond like a technologist or a prophet. He asks how people feel. He challenges the emotional cost of trusting systems that can collapse, be repurposed or drift beyond the intentions of their creators. That approach makes Grayson Perry Has Seen The Future especially unsettling: it treats the AI debate as a moral and psychological question before it becomes a technical one.
Regional and global impact beyond the screen
The documentary’s reach is global because the anxieties are global. The reference to Southeast Asia signals that concerns over oversight are not confined to one country or one industry. The discussion of healthcare, education and labour shows how widely AI is expected to spread. Even the idea of new religions points to a future in which artificial intelligence may alter not just economies but belief systems. That makes the film more than a portrait of one technology. It is a study of how quickly societies can normalise transformations they have barely begun to understand.
And that is the final discomfort of Grayson Perry Has Seen The Future: it suggests the future may arrive not with a dramatic rupture, but with a series of quiet agreements made by people who are told, again and again, that adaptation is inevitable. The question the documentary leaves hanging is simple and unsettling: if AI keeps moving from tool to companion to authority, who will decide where its limits should be?