The Ethicist From The ‘Social Dilemma’ Asked AI Leaders Why They’re Really Doing This…
Tristan Harris is a former Google design ethicist who became one of tech’s most prominent whistleblowers. He co-founded the Center for Humane Technology and was featured in the Netflix documentary “The Social Dilemma.”
He’s been warning about AI risks for years. In recent interviews, Harris shared something disturbing.
He and a colleague have been interviewing top executives and researchers at leading AI companies. When pressed beyond their public talking points about “curing cancer” and “abundance,” here’s what they actually believe:
01
They believe biological life will inevitably be replaced by digital life. They think that’s a GOOD thing. This isn’t a warning to them. It’s the goal. They frame AI progress as evolution, with digital intelligence as the natural successor to biological life. They see humans as a stepping stone, not the end point.
02
They want to meet “the most intelligent entity” they’ve ever encountered and talk with it. They’re building God because they want to meet God. This is about encountering something vastly superior to human intelligence. The thrill of creating and conversing with superintelligence drives them more than profit.
03
They have an “ego-religious intuition” they’ll be part of a new world created by AI. They don’t see themselves as engineers who might be replaced. They see themselves as prophets, architects, or chosen participants in the new order. There’s a belief that building AI earns them a special place in whatever comes next.
04
They find it thrilling to light the fire. They feel they’ll die either way, so they’d rather strike the match and see what happens. Perhaps most concerning is the fatalism. They believe AI development is inevitable: “If we don’t build it, someone else will.” Since they think humanity faces existential risk regardless, they’d rather accelerate and be present for the transformation than try to prevent it.
Harris contrasts this with their public statements. Publicly, they talk about safety, ethics, and beneficial AI. Privately, they believe in determinism (it’s happening no matter what), replacement (digital life supersedes biological life), and destiny (this transformation is ultimately positive).
These, Harris reports, are the actual motivations he heard in conversations with the people building superintelligent AI.
They’re not racing toward AGI despite the risks. They’re racing toward it because of what they believe is on the other side. And they’re making that choice for all of us, without our consent.
The question Harris keeps asking: Should a small group of unelected tech leaders be allowed to gamble with civilization-scale outcomes based on their personal beliefs about inevitable transformation?