“Sam Altman’s God Complex: TED’s Chris Anderson Asks Who Gave Him the Keys to Humanity’s Future”

Apr 11, 2025

“Sam, given that you’re helping create technology that will reshape the destiny of our entire species, who granted you, or anyone, the moral authority to do that? And how are you personally responsible and accountable if you’re wrong?”

The question hung in the air at TED like a guillotine blade. Chris Anderson had just dropped the ultimate challenge on Sam Altman, the man whose company created ChatGPT. The question wasn’t Anderson’s — it came from ChatGPT o1 Pro, an LLM built by Altman’s own company. The irony was delicious. The tension palpable.

Altman’s response? “You’ve been asking me versions of this for the last half hour.”

A deflection wrapped in a pleasantry. This single exchange encapsulates everything wrong with how we’re hurtling toward an AI future — the tough questions posed, acknowledged, and neatly sidestepped.

I watched this dance of accountability play out on stage, mesmerized by what it revealed about power in the age of artificial intelligence. The man who controls a technology used by 500 million people weekly — roughly the population of North America — couldn’t directly answer who gave him the right to potentially transform humanity forever.

“This Is Gonna Happen”

“This is gonna happen. This is like a discovery of fundamental physics that the world now knows about, and it’s gonna be part of our world,” Altman declared with the calm certainty of someone stating that the sun will rise tomorrow. “We have to embrace this with caution, but not fear, or we will get run over by other people that use AI to be better.”

The fatalism in his statement is the ultimate conversation-stopper. If it’s inevitable, why even debate? If resistance means getting “run over,” what choice do we really have?

This framing — that AI development at breakneck speed isn’t a choice but a force of nature — conveniently absolves Altman and his peers of the responsibility to let democratic processes catch up to their ambitions.

Anderson didn’t let him off the hook. “The struggle is I’m naming that a safety agency might be what we want, and yet, agency is the very thing that is unsafe,” he noted, highlighting the contradiction in Altman’s position on regulation.

The Ring of Power

“Elon Musk claimed that he thought that you’d been corrupted by the ring of power,” Anderson said, referencing Tolkien’s metaphor for corruption that feels particularly apt in Silicon Valley. “What’s in everyone’s mind as we see technology CEOs get more powerful, get richer, is can they handle it? Or does it become irresistible?”

Altman’s response was telling. He didn’t defend himself directly — instead, he turned the question back on Anderson: “How do you think I’m doing, really? Relative to other CEOs that have gotten a lot of power and changed how they act?”

It was a masterful pivot, forcing Anderson to either criticize him directly or back down. Anderson chose a middle path, acknowledging Altman’s personal conduct while pressing on the larger concern: “I think the fear is that just the transition of OpenAI to a for-profit model is, some people say, well, there you go. You’ve been corrupted by the desire for, well, at one point there’s going to be no economic… it’ll make you fabulous sums of money.”

The exchange revealed the impossible situation we find ourselves in: forced to trust in the personal integrity of tech leaders whose companies’ structures and incentives increasingly push toward profit and power.

“It’ll Get To Know You”

Perhaps the most chilling moment came when Altman described his vision for AI’s future:

“You will talk to ChatGPT over the course of your life and someday, maybe if you want, it’ll be listening to you throughout the day and sort of observing what you’re doing, and it’ll get to know you, and it’ll become this extension of yourself, this companion, this thing that just tries to help you be the best, do the best you can.”

Anderson likened this to the movie “Her,” where an AI reads all the protagonist’s emails and takes life-altering actions on his behalf. Altman didn’t disagree with the comparison.

The casual way Altman described a future where AI systems monitor our every move — “sort of observing what you’re doing” — betrays a fundamental disconnect from how most humans feel about constant surveillance. The “maybe if you want” qualification rings hollow in a world where meaningful consent to technology is increasingly illusory.

The Parental Paradox

When Anderson asked whether becoming a father had changed Altman’s perspective on risk, posing a thought experiment about a button that would give his son an incredible life but carry a 10% chance of destruction, Altman’s response was immediate:

“In the literal case, no. If the question is, do I feel like I’m doing that with my work? The answer is, I also don’t feel like that.”

This disconnection between personal risk tolerance and professional risk acceptance is the cognitive dissonance at the heart of AI development. Altman would never accept a 10% chance of harm to his child, yet OpenAI’s work carries unknown risks to billions of children globally.

As he put it: “I really cared about, like, not destroying the world before. I really care about it now. I didn’t need a kid for that part.”

But caring isn’t the same as slowing down, is it?

The Open Source Contradiction

“We’re gonna do a very powerful open source model,” Altman announced proudly. “There will be people who use this in ways that some people in this room, maybe you or I, don’t like.”

This casual acknowledgment of potential harms from releasing powerful AI models into the wild stands in stark contrast to Altman’s earlier emphasis on safety. It’s the tech equivalent of saying “some people will misuse the nuclear launch codes we’re about to publish, but that’s innovation for you.”

Anderson called out this contradiction, asking about the “red lines” OpenAI has drawn internally to prevent dangerous AI capabilities from being released. Altman mentioned their “preparedness framework” without specifics — a framework the public has no say in defining or enforcing.

“Different People Will Call It AGI at Different Points”

When pushed on defining Artificial General Intelligence — supposedly OpenAI’s north star — Altman revealed a startling truth:

“If you’ve got 10 OpenAI researchers in the room and ask them to define AGI, you get 14 definitions.”

Anderson rightfully pointed out the problem: “That’s worrying, though, isn’t it? Because… that has been the mission initially. We’re going to be the first to get to AGI, we’ll do so safely, but we don’t have a clear definition of what it is.”

This exchange exposes the fundamental contradiction at the heart of OpenAI: pursuing an ill-defined goal with potentially species-level consequences, while assuring us they’ll do it “safely.”

Where Do We Go From Here?

The most revealing moment came when Anderson suggested a small summit of experts to establish global AI safety standards. Altman’s response?

“Of course, but I’m much more interested in what our hundreds of millions of users want as a whole.”

This sounds democratic until you realize the false choice it presents. Users can only “want” the options put in front of them; they never get to weigh the roads not taken. People “wanted” cigarettes too, until we learned the health consequences decades later.

The stark reality is that we don’t get to vote on whether Altman and his peers should be making these monumental decisions. They’ve already appointed themselves our technological destiny’s curators.

At one point, Anderson observed: “I’m puzzled by you. I’m kind of awed by you, because you’ve built one of the most astonishing things out there.”

I’m not puzzled. I’m terrified. Not by Altman the person, who seems genuinely thoughtful and concerned with doing right, but by the system that has allowed a handful of technologists to make decisions with planetary consequences, accountable primarily to investors and users who have no real power to guide development.

The most important question from that TED stage remains unanswered: Who granted Sam Altman, or anyone, the moral authority to reshape humanity’s destiny? The uncomfortable truth is: no one did. And yet, here we are.

Steve Rosenbaum