Sam Altman — TED 2025, transcript
Chris Anderson: Thank you so much for coming.
Sam Altman: Thank you.
Chris Anderson: Your company has been releasing crazy, insane new models pretty much every other week, it feels like. I’ve been playing with a couple of them. I want to show you what I’ve been playing with.
So — this is the image and video generator. I asked Sora: “What would it look like when you share some shocking revelations here at TED?” You want to see how it imagined it?
Sam Altman: No.
Chris Anderson: I mean — not bad, right? Five fingers on all hands?
Sam Altman: Very close to what I’m wearing, you know? I’ve never seen you quite that animated.
Chris Anderson: No, I’m not sure I’m quite that animated a person. So maybe a B+.
But this one genuinely astounded me. I asked it to come up with a diagram that shows the difference between intelligence and consciousness. Like — how would you do that? And this is what it did.
I mean — it’s so simple, but it’s incredible. What kind of process would allow this? This isn’t just image generation — it’s linking into the core intelligence that your overall model has?
Sam Altman: Yeah, the new image generation model is part of GPT-4o, so it’s got all of the intelligence in there. And I think that’s one of the reasons it’s been able to do these things that people really love.
Chris Anderson: I mean, if I’m a management consultant and I’m playing with some of this stuff, I’m thinking, “Uh-oh. What does my future look like?”
Sam Altman: Yeah. I mean, there are sort of two views you can take. You can say, “Oh man, it’s doing everything I do — what’s going to happen to me?” Or you can say, like through all the other technological revolutions in history, “Okay, now there’s this new tool. I can do a lot more — what am I going to be able to do?”
It is true that the expectation of what someone in a particular job can deliver will increase, but the capabilities will increase so dramatically that I think it’ll be easy to rise to that occasion.
Chris Anderson: So — this impressed me too. I asked it to imagine Charlie Brown thinking of himself as an AI. And it came up with this. I thought this was actually rather profound. What do you think?
Sam Altman: Um. I mean, the writing quality of some of the new models — not just here, but in general — is really reaching a new level. I mean, this is an incredible meta answer. But there’s really no way to know if it is thinking that, or it just saw that a lot of times in the training set.
And of course, like — if you can’t tell the difference, how much do you care?
Chris Anderson: So that’s what you’re saying — it doesn’t matter. But isn’t that, though, at first glance, just IP theft?
Sam Altman: You guys don’t want to deal with the lawsuits? You can clap about that all you want. Enjoy.
I will say — I think the creative spirit of humanity is an incredibly important thing. And we want to build tools that lift that up — that make it so that new people can create better art, better content, write better novels that we all enjoy.
I believe very deeply that humans will be at the center of that. I also believe that we probably do need to figure out some sort of new model around the economics of creative output. People have been building on the creativity of others for a long time.
Chris Anderson: People have taken inspiration from each other for a long time. But as access to creativity gets incredibly democratized and people are building off of each other’s ideas all the time, I think there are incredible new business models that I and others are excited to see explored. Exactly what that’s gonna look like, I’m not sure. Clearly, there’s some cut-and-dried stuff — you can’t copy someone else’s work. But how much inspiration can you take?
Sam Altman: If you say, I’m gonna generate art in the style of these seven people, all of whom have consented to that, how do you, like, divvy up how much money goes to each one? These are, like, big questions. But every time throughout history, we have put better and more powerful technology in the hands of creators. I think we collectively get better creative output, and people do just more amazing stuff.
Chris Anderson: I mean, an even bigger question is when they haven’t consented to it. In our opening session, Carole Cadwalladr showed ChatGPT giving her talk in the style of Carole Cadwalladr, and sure enough it gave a talk that wasn’t quite as good as the talk she gave. But it was pretty impressive. And she said, “Okay, it’s great, but I did not consent to this.” How are we gonna navigate this? Like, shouldn’t it just be people who have consented? Or should there be a model where any named individual whose work is used in a prompt gets something for that?
Sam Altman: So, right now, if you use our image generation tool and say, “I want something in the style of a living artist,” it won’t do that. But if you say it’s in the style of a particular vibe or studio or art movement or whatever — it will.
And obviously, if you ask for something that’s basically a copy of an existing song, it won’t do that.
The question of where that line should be, and how people get to say “this is too much” — we’ve sorted that out before with copyright law and what fair use looks like. Again, I think in the world of AI, there will be a new model that we figure out.
Chris Anderson: In the movie Her, the AI basically announces that she’s read all his emails and decided he’s a great writer — and persuades a publisher to publish his work. Is that something that might be coming sooner than we think?
Sam Altman: I don’t think it’ll happen exactly like that, but yeah, I think something in that direction — where AI doesn’t just respond to questions but proactively pushes things that help you, that make you better — that seems like it’s coming soon.
Chris Anderson: So what have you seen that’s coming up internally that you think is going to blow people’s minds? Just a hint of the next big jaw-dropper.
Sam Altman: The thing I’m personally most excited about at this point is AI-first science.
Chris Anderson: I think of it as a general intelligence — I can ask it about anything, and it comes back with an intelligent answer. Why isn’t that AGI?
Sam Altman: Well, first of all, you can’t ask it about anything. That’s nice of you to say, but there are still lots of things it’s embarrassingly bad at. But even if we fix those, it still doesn’t learn continuously. It can’t improve itself or discover new science in an autonomous way. It can’t, for example, do just any kind of knowledge work you could do in front of a computer. If you could say “Go do my job,” and it could actually go do it, autonomously? That might cross into AGI. But current systems still fall short of that.
Chris Anderson: Do you have a definition of AGI internally? And when do you think we’ll be there?
Sam Altman: It’s the old joke: ask 10 OpenAI researchers to define AGI, you’ll get 14 answers.
Chris Anderson: And that’s worrying, right? Because your founding mission is to safely reach AGI.
Sam Altman: Sure. But I think what really matters is not one “magic moment” when we declare AGI is done. What matters is that we’re on an exponential curve. These systems are going to get smarter and more capable over time. Different people will call it AGI at different points. Eventually it’ll go way beyond that. The important question is: how do we make each step safe? How do we build toward something beneficial and aligned as it exceeds human capabilities? That’s where the focus should be.
Chris Anderson: Well, one of the big themes this week is “agentic AI” — AI set free to take actions on your behalf. You’ve got something like this with Operator. I tried it — it’s amazing, but also kind of scary. It wants my credit card. It wants to book stuff for me. And Yoshua Bengio warned: this is where things could go wrong. How do you release agentic AI and keep guardrails in place?
Sam Altman: First off, obviously, people can choose not to use it. Some might say, “I’d rather just call the restaurant myself.” But others will say, “Let ChatGPT go online and do everything for me.”
I think people will be slow to get comfortable with agents. But the bigger challenge is: even if most people aren’t using it yet, some will. And when AIs are out there clicking around the internet, making decisions, it becomes the biggest safety challenge we’ve ever had. Trust becomes a gating function. People won’t use agents unless they really trust them. So safety becomes part of the product — completely integral.
Chris Anderson: But in a world where someone can say to an open model, “Go out and spread a meme that X people are evil,” and it executes on that task, maybe even replicates itself… Have you drawn clear red lines about what your systems will and won’t do?
Sam Altman: Yes. That’s what our preparedness framework is for. It outlines key categories of risk, how we measure them, how we mitigate them, and where we draw boundaries. We’ll update it over time, but the core idea is pre-emptive evaluation and intervention.
Chris Anderson: You’ve testified to Congress about this. At the time, you said there should be a safety agency to license these systems. Do you still believe in that?
Sam Altman: I’ve learned more about how government works since then. I’m not sure a federal safety agency is quite the right idea. But I still believe that for the most advanced models, there needs to be external safety testing and accountability. Maybe that’s industry-led, maybe hybrid. But yes, that principle still holds.
Chris Anderson: So I asked your new “reasoning” model what’s the most penetrating question I could ask you. It thought for two minutes. You want to hear the question?
Sam Altman: I do.
Chris Anderson: “Sam, given that you’re helping create technology that will reshape the destiny of our entire species, who granted you — or anyone — the moral authority to do that? And how are you personally responsible if you’re wrong?”
Sam Altman: That’s a good one. I feel like you’ve been asking versions of that all evening.
Chris Anderson: But what’s your answer?
Sam Altman: I don’t know.
Chris Anderson: There are two narratives about you. One: you’re the visionary who pulled off the impossible, leaping ahead of Google, reshaping the world. Two: you’ve shifted from idealistic openness to centralizing power, and you’ve lost some key people. Some say you’re not to be trusted. So: who are you? What’s your own narrative?
Sam Altman: Like everyone else, I’m complicated. Our goal at OpenAI is to make AGI and distribute its benefits widely and safely. That hasn’t changed. Tactics have evolved — we didn’t know what we were building at the start. We didn’t know we’d have to raise billions or build a for-profit entity. But overall, we’ve delivered very capable, very safe AI to hundreds of millions of people. We’ve made mistakes. We’ll make more. But we’re trying.
Chris Anderson: Okay, so — Lord of the Rings. Elon Musk said you’ve been corrupted by the “Ring of Power.” How do you respond to that?
Sam Altman: I mean, how do you think I’m doing — really — compared to other CEOs with that much power?
Chris Anderson: You’re not a rude, angry, aggressive billionaire.
Sam Altman: Thanks. I try. I think I’ve stayed pretty grounded. But I get it — people worry. Power can corrupt. And I think the best protection against that is openness, dialogue, and constantly reassessing the mission.
Chris Anderson: You recently became a parent. That changes things. So: if I gave you a red button, and pressing it gives your son an incredible life but with a 10% chance he’s destroyed — do you press it?
Sam Altman: In the literal case, no. If you’re asking, “Do I feel like that’s what I’m doing with my work?” — also no. Becoming a parent changed everything. But I already cared deeply about safety and the future. Now I think about it with even more intensity.
Chris Anderson: Tristan Harris argued here that one of the problems is that all of you — AI leaders — believe that rapid development is inevitable. That there’s no choice but to race. And that belief itself creates the danger.
Sam Altman: I think that’s a fair critique. But people do slow things down all the time — when it’s not safe, when it doesn’t work. There’s communication between most major players (well, not all). Most of us care deeply about safety. And we’ve held back features when needed.
One change we recently made: we’ve relaxed some of the “speech harm” guardrails in the new image model. People told us they wanted less censorship, more expressiveness. So we’re listening. But we’re still figuring out where to draw the line — especially as AI intersects with real-world harms.
Chris Anderson: If a group were to host a summit — with ethicists, technologists, and AI leaders — would you come? Would you be willing to attend?
Sam Altman: Of course. But I’m much more interested in what our hundreds of millions of users want as a whole. You know, a lot of these questions have historically been decided in small elite summits. One of the cool new things about AI is that it can talk to everybody on Earth, and we can learn the collective value preferences of what everybody wants — rather than have a bunch of people who are blessed by society sit in a room and make those decisions.
I think that’s very cool. And I think you’ll see us do more in that direction. And when we’ve gotten things wrong — because the elites in the room had a different opinion than the broader public about what people wanted, like with the image guardrails — we changed course. I’m proud of that.
Chris Anderson: There is a long trail of unintended consequences coming out of the actions of hundreds of millions of people. And a hundred people in a room may see things those millions don’t — hundreds of millions of people don’t necessarily see where the next step could lead.
Sam Altman: That’s totally accurate and totally right. I am hopeful that AI can help us be wiser, make better decisions. It can talk to us. If we say, “Hey, I want thing X,” AI can respond: “Totally understand, and if that’s still what you want by the end of this conversation, it’s your call — you’re in control. But have you considered it from this other perspective, or the impact it’ll have on these people?”
I think AI can help us make better, wiser collective governance decisions than we’ve ever made before.
Chris Anderson: We’re almost out of time. I’ll give you the last word. What kind of world — taking all things into consideration — do you believe your son will grow up in?
Sam Altman: I remember — it was so long ago now, maybe 15 years or so — when the first iPad came out. I remember watching a YouTube video of a toddler sitting in a doctor’s office, bored. There was a magazine — one of those glossy cover ones — and the toddler was tapping and swiping on it. He got frustrated. To him, it was a broken iPad. That toddler had never lived in a world without touchscreens.
To the adults watching, it was this incredible moment. It was like, “Wow, that’s how fast this new world is becoming normal.”
My kids, hopefully, will never be smarter than AI. They will never grow up in a world where products and services aren’t incredibly smart, incredibly capable. They’ll never grow up in a world where computers don’t just kind of understand them.
Whatever you can imagine, AI will help make it real. It’ll be a world of incredible material abundance. A world where the rate of change is unimaginably fast. A world where individual ability and impact will be far beyond what any person today can do.
I hope my kids — and all your kids — look back at us with some pity and nostalgia and think: Wow, they lived such limited lives. The world suffered so much. And I think that’s great.
Chris Anderson: It’s incredible what you’ve built. It really is — unbelievable. I think over the next few years, you’re going to face some of the biggest opportunities, the biggest moral challenges, the biggest decisions of any human in history.
You should know that everyone here is cheering you on to do the right thing.
Sam Altman: We’ll try our best.
Chris Anderson: Thank you very much.
Sam Altman: Thank you.
Chris Anderson: Thank you.
Sam Altman: Thank you.