Meeting

CEO Speaker Series With Dario Amodei of Anthropic

Monday, March 10, 2025
Speaker

Dario Amodei, Chief Executive Officer and Cofounder, Anthropic

Presider

Michael Froman, President, Council on Foreign Relations

Anthropic Chief Executive Officer and Cofounder Dario Amodei discusses the future of U.S. AI leadership, the role of innovation in an era of strategic competition, and the outlook for frontier model development.

The CEO Speaker series is a unique forum for leading global CEOs to share their insights on issues at the center of commerce and foreign policy, and to discuss the changing role of business globally.

FROMAN: Well, good evening, everybody. Welcome. My name is Mike Froman. I’m president of the Council. And it’s a great pleasure to have you here tonight for one of our CFR CEO Speaker Series, and to have the CEO and cofounder of Anthropic Dario Amodei with us tonight. Dario was vice president of research at OpenAI, where he helped develop GPT-2 and -3. And before joining OpenAI, he worked at Google Brain as a senior research scientist.

I’m going to talk with Dario for about thirty minutes. Then we’ll open it up to questions from people here in the hall. We have about 150 people here. We have about 350 online. And so we’ll try and get some of their questions in as well.

Welcome.

AMODEI: Thank you for having me.

FROMAN: So you left OpenAI to start Anthropic, a mission-first public benefit corporation. Why leave? What are Anthropic’s core values? And how do they manifest themselves in your work? And let me just say, a cynic would say, well, this mission first, this is all marketing. You know, how can you—can you give us some specific examples of how your product and strategy reflect your mission?

AMODEI: So, yeah, if I—if I were to, you know, just back up and kind of set the context. You know, we left at the end of 2020. I think in 2019 and 2020 something was happening which I think myself and a group within OpenAI, which eventually became my cofounders at Anthropic, were, I think, among the first to recognize. They’re called kind of scaling laws, or the scaling hypothesis today. And the basic hypothesis is simple. It says that—and it’s a really remarkable thing, and I can’t overemphasize how unlikely it seemed at the time—if you take more computation and more data to train AI systems with relatively simple algorithms, they get better at all kinds of cognitive tasks across the board.

And we were measuring these trends back when models cost $1,000 or $10,000 to train. So that’s kind of an academic grant budget level. And we forecast that these trends would continue, even when models cost $100 million, a billion, $10 billion to train, which now we’re getting to. And, indeed, that if the quality of the models and their level of intelligence continued to increase, they would have huge implications for the economy. That was even the first time we realized that they would likely have very serious national security implications. We generally felt that the leadership at OpenAI was on board with this general scaling hypothesis, although, you know, many people inside and outside were not.
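(For readers who want a more concrete picture of the scaling hypothesis described above, the sketch below extrapolates a toy power-law curve from academic-scale budgets to frontier-scale ones. The functional form and every constant in it are illustrative assumptions, not Anthropic’s actual measurements.)

```python
# Toy illustration of the scaling hypothesis: model quality (here, a generic
# "loss" score where lower is better) improves smoothly as a power law in
# training compute. The exponent, scale constant, and dollar-to-FLOP pairings
# below are made-up placeholders, not Anthropic's actual fits.

def loss_from_compute(compute_flops: float, scale: float = 1e10, exponent: float = 0.05) -> float:
    """Assumed power-law fit: loss falls as (scale / compute) ** exponent."""
    return (scale / compute_flops) ** exponent

# Extrapolating the same curve from grant-sized budgets to frontier-sized ones.
for dollars, flops in [(1e3, 1e18), (1e5, 1e20), (1e8, 1e23), (1e10, 1e25)]:
    print(f"~${dollars:,.0f} of training (~{flops:.0e} FLOPs): loss ~ {loss_from_compute(flops):.3f}")
```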

But the second realization we had was that, you know, if the technology was going to have this level of significance, we really needed to do a good job of building it. We really needed to get it right. In particular, on one hand, these models are very unpredictable. They’re inherently statistical systems. One thing I often say is we grow them more than we build them. They’re like a child’s brain developing. So controlling them, making them reliable, is very difficult. The process of training them is not straightforward. So just from a systems safety perspective making these things predictable and safe is very important. And then, of course, there’s the use of them—the use of them by people, the use of them by nation-states, the effect that they have when companies deploy them.

And so we really felt like we needed to build this technology in, you know, absolutely the right way. You know, OpenAI, a bit as you’ve alluded to, was founded with some claims about—you know, that they would do exactly this. But for a number of reasons, which I won’t get into in detail, we didn’t feel that the leadership there was taking these things seriously. And so we decided to go off and do this on our own. And the last four years have actually been a kind of, you know, almost a side-by-side experiment of, you know, what happens when you try and do things one way and what happens when you try and do things the other way, and how it has played out.

So, you know, I’ll give a few examples of how, you know, we’ve really, I think, displayed a commitment to these ideas. One is we invested very early in the science of what is called mechanistic interpretability, which is looking inside the AI models and trying to understand exactly why they do what they do. One of our seven cofounders, Chris Olah, is the founder of the field of mechanistic interpretability. This had no commercial value, or at least no commercial value for the—you know, the first, you know, four years that we worked on it. It’s just starting to be a little bit—a little bit in the distance. But nevertheless, we had a team working on this the whole time in the—you know, in the presence of fierce commercial competition, because we believe that understanding what is going on inside these models is a public good that benefits everyone. And we published all of our work on it so others could benefit from it as well.

You know, I think another example is we came up with this idea of constitutional AI, which is training AI systems to follow a set of principles. You know, instead of training them from data, or from—you know, mass data or human feedback. You know, this allows you to get up, say, you know, in front of, you know, Congress, and say: These are the principles according to which we trained our model. When we first came to—you know, when we had our first product, our—you know, our first version of Claude, which is our model, we actually delayed the release of that model roughly six months because this was such a new technology that, you know, we just—we weren’t sure of the safety properties. We weren’t sure we wanted to be the ones to kind of kick off a race. This was just before ChatGPT. So, you know, we arguably had the opportunity to—you know, to seize the ChatGPT moment and, you know, we chose to release a little later. Which I think had real commercial consequences, but set the culture of the company.

A final example I would give is we were the first to have something called a responsible scaling policy. So what this does is it measures categories of risk of models as they scale. And we have to take increasingly strict security and deployment measures as we meet these points. And so we were the first one to release this, the first one to commit to it. And then a few months—within a few months of when we did, the other companies all followed suit. And so we were able to set an example for the ecosystem. And, you know, when I look at what the other companies have done, we’ve often led the way on these issues, and often caused the other companies to follow us. Not always. Sometimes they do something great and we follow them. But I think, you know, there’s been a good—there’s been a good history of us, you know, sticking to our commitments. And, you know, I would contrast that with what we’ve seen from some of the other companies in their behavior. We now have several years of history. And, you know, so far, fingers crossed, I think our commitments have held up pretty well.

FROMAN: I want to talk about both the risks and the opportunities that you’ve cited around AI. But since you mentioned the responsible scaling issue, let’s go back to that. We’re at level two now.

AMODEI: Yeah, so—

FROMAN: And at what level is it existential? How will we know when we hit level three? And if you hit level three, can you go backwards, or does it only get worse?

AMODEI: Yeah. So the way our responsible scaling policy is set up is we basically said—you know, and the analogy was to biosafety levels. So, you know, the biosafety level system is, like, you know, how dangerous various pathogens are. And so we said, let’s have AI safety levels. And so AI safety level two is a level we’re currently at. And that’s, you know, systems that are powerful but the risks they pose are comparable to the risks that, you know, other kinds of technology pose. ASL-3, which, actually, I think our models are starting to approach—the last model we released, we said this model isn’t ASL-3 yet but it’s getting there. ASL-3 is characterized—and, you know, we focus very much on the national security side—by very serious risks that are out of proportion to the risks that normal technologies have. So an ASL-3 model is defined as one that—in the areas of, say, chemical, biological, or radiological weapons—could allow an unskilled person, simply by talking to the model and following its instructions, to do things that you would have to have, say, a PhD in virology to do today.

So once that is possible, if those risks aren’t mitigated then that would increase the number of people in the world who are able to do these highly destructive things from, say, the tens of thousands today to the tens of millions once the models are available. And so when the models are capable of this, we have to put in mitigations so that the models are not willing to actually, you know, provide this information, and security restrictions so that, you know, the models won’t be stolen. And, you know, I think we’re approaching that. We may actually hit that this year. And we believe we have a story for how to deploy those kinds of models safely by, you know, removing their ability to do this very narrow range of dangerous tasks without compromising their commercial viability.
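(To make the structure of that framework a bit more concrete, here is a minimal sketch of a threshold-and-mitigation table like the one being described. The level names follow the conversation, but the trigger wording and the specific measures are illustrative placeholders, not Anthropic’s actual responsible scaling policy.)

```python
# Toy sketch of an "AI safety level" table: each level pairs a capability
# threshold with the deployment and security measures it triggers. The wording
# below is a placeholder, not the text of Anthropic's actual policy.
AI_SAFETY_LEVELS = {
    "ASL-2": {
        "threshold": "risks comparable to other present-day technologies",
        "required_measures": ["baseline security", "standard misuse filtering"],
    },
    "ASL-3": {
        "threshold": "meaningful uplift for unskilled actors on CBRN-style tasks",
        "required_measures": [
            "refuse the narrow class of dangerous tasks",
            "hardened security against theft of model weights",
        ],
    },
}

def measures_required(level: str) -> list[str]:
    """Look up what a model assessed at `level` would need before deployment."""
    return AI_SAFETY_LEVELS[level]["required_measures"]

print(measures_required("ASL-3"))
```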

FROMAN: So this is a fairly narrow set of tasks. As you say, you’re just going to prevent the model from answering those questions—

AMODEI: Yeah, prevent the model from engaging in those kinds of tasks. Which is—it’s not straightforward, right? You can say, you know, I’m taking a virology class at Stanford University. I’m working on my coursework, like, can you tell me how to make this particular plasmid? And so the model has to be smart enough to not fall for that and say, hey, you know, actually, that isn’t the kind of thing you would ask—

FROMAN: You sound like a bioterrorist. I won’t answer your question.

AMODEI: You sound like you have bad intent.

FROMAN: Yeah. But it’s sort of limited to your own imagination, or our own imagination, as to what all the bad acting could be. There are a lot of things that we may not anticipate beyond those four categories.

AMODEI: Yeah. Yeah. I mean, you know, I think this is an issue where—just as every time we release a new model there are positive applications for it that people find that we weren’t expecting, I expect there will also be negative applications. We are—we always monitor the models for different use cases in order to discover this so that we have a continuous process where, you know, we don’t get taken by surprise. If, you know, we’re worried that someone will do something evil with model six, hopefully some early signs of that can be seen in model five. And we monitor it. But this is the fundamental problem of the models. You don’t really know what they’re capable of. You don’t truly know what they’re capable of until they’re deployed to a million people.

You can test ahead of time. You can—you know, you can, you know, have your researchers bash against them. You can have, you know, even the government—we collaborate with the government AI Safety Institutes (AISIs)—test them. But the hard truth is that there’s no way to be sure. They’re not like code, where you can do formal verification. What they can do is unpredictable. It’s just like, you know, if I think of you or me instead of the model. You know, if I’m, like, the quality assurance engineer for me or you, you know, can I—can I give a guarantee that, like, you know, there’s a particular kind of bad behavior that you are logically not capable of, that will never happen? People don’t work that way.

FROMAN: Let’s talk about the opportunities, the upside opportunities.

AMODEI: Absolutely.

FROMAN: End of last year you wrote an essay, Machines of Loving Grace, that talked about some of the upside. How one could achieve a decade’s worth of progress in biology, for example, in a year, how the machines were going to be as smart as all the Nobel Prize winners, which probably depresses some of them. Tell us the upside. Tell us your best-case scenario as to what AI is going to produce.

AMODEI: Yeah. So let me go back and start with the exponential. You know, if we go back to 2019, the models were barely able to give a coherent sentence or a coherent paragraph. People like me, of course, thought that was an amazing accomplishment—something models had not been capable of before. And, you know, we had these predictions that five years from now, you know, the models are going to be generating billions of dollars of revenue. They’re going to be helping us code. We can talk to them like they’re—like they’re human beings. They’ll know as much as human beings do. And there were all these unprincipled objections as to why that couldn’t happen.

You know, the same exponential trends, the same arguments that predicted that, predict that if we go forward another two years, three years, maybe four years, we will get to all of this. We will get to models that are as intelligent as Nobel Prize winners across a whole bunch of fields. You won’t just chat with them. They’ll be able to do anything you can do on a computer. Basically, any remote work that humans do, any modality, being able to, you know, do tasks that take days, weeks, months. The kind of evocative phrase that I used for it in Machines of Loving Grace was it’s like having a country of geniuses in a datacenter. Like, a country of genius remote workers, though they can’t do everything, right? There are restrictions in the physical world.

And I think this still sounds crazy to many people but, you know, look back on previous exponential trends. You know, look at—look at the early days of the internet and how wild the predictions seemed, and what actually came to pass. I’m not sure of this. I would say I’m maybe 70 or 80 percent confident. You know, it could very well be that the technology stops where it is, or stops in a few months and, you know, the essays that I’ve written and things I’ve said, you know, in events like this, people will spend the next ten years laughing at me. But that would not be my bet.

FROMAN: Let’s just build on that one, because on the issue of jobs, and the impact that AI is likely to have on employment, there’s a pretty big debate. Where are you on the spectrum—well, before I get there—how long will it take for AI, let’s say, to replace the head of a think tank? I’m asking for a friend. (Laughter.) Actually, how—we’ll get to that one. That’s too—where are you on the spectrum of everyone’s going to be able to do some really cool things, and they’re going to be able to do so many more things than they’re able to do now, versus everyone’s going to be sitting on their sofa collecting UBI?

AMODEI: Yeah. So I think it’s going to be a really complicated mix of those two things, that also depends on the policy choices that we make.

FROMAN: You can also answer the think-tank question if you like, but—(laughter).

AMODEI: Yeah. So, I mean, I guess I didn’t—I kind of, you know, ended my answer to the last question without saying all the great things that will happen. So honestly, the thing that makes me most optimistic, before I get to jobs, is things in the biological sciences—biology, health, neuroscience. You know, I think if we look at what’s happened in biology in the last hundred years, what we’ve solved are simple diseases. Solving viral and bacterial diseases is actually relatively easy because it’s the equivalent of repelling a foreign invader in your body. Dealing with things like cancer, Alzheimer’s, schizophrenia, major depression, these are system-level diseases. If we can solve these with AI at a baseline, regardless of kind of the job situation, we will have a much better world. And I think we will even—if we get to the mental illness side of it—have a world where it is at least easier for people to find meaning. So I’m very optimistic about that.

But now, getting to kind of the job side of this, I do have a fair amount of concern about this. On one hand, I think comparative advantage is a very powerful tool. If I look at coding, programming, which is one area where AI is making the most progress, what we are finding is we are not far from the world—I think we’ll be there in three to six months—where AI is writing 90 percent of the code. And then in twelve months, we may be in a world where AI is writing essentially all of the code. But the programmer still needs to specify, you know, what are—what are the conditions of what you’re doing, what—you know, what is the overall app you’re trying to make, what’s the overall design decision? How do we collaborate with other code that’s been written? You know, how do we have some common sense on whether this is a secure design or an insecure design?

So as long as there are these small pieces that a programmer, a human programmer, needs to do, that the AI isn’t good at, I think human productivity will actually be enhanced. But on the other hand, I think that eventually all those little islands will get picked off by AI systems. And then we will eventually reach the point where, you know, the AIs can do everything that humans can. And I think that will happen in every industry. I think it’s actually better that it happens to all of us than that it happens—you know, that it kind of picks people randomly. I actually think the most societally divisive outcome is if randomly 50 percent of the jobs are suddenly done by AI, because what that means—the societal message is we’re picking half—we’re randomly picking half of people and saying, you are useless, you are devalued, you are unnecessary.

FROMAN: And instead we’re going to say, you’re all useless? (Laughter.)

AMODEI: Well, we’re all going to have to have that conversation, right? Like, we’re going to—we’re going to have to—we’re going to have to look at what is technologically possible and say, we need to think about usefulness and uselessness in a different way than we have before, right? Our current way of thinking is not going to be tenable. I don’t know what the solution is, but it’s got to be—it’s got to be different than, we’re all useless, right? We’re all useless is a nihilistic answer. We’re not going to get anywhere with that answer. We’re going to have to come up with something else.

FROMAN: That’s not a very optimistic picture. (Laughter.) Is what it is?

AMODEI: I actually—I would actually challenge that. You know, I think about a lot of the things that I do—you know, I spend a lot of time, for example, swimming. I spend time playing video games. I look at, like, human chess champions. You know, you might think when Deep Blue beat Kasparov, and that was almost thirty years ago, that after that it would be, like, chess would be seen as a pointless activity. But exactly the opposite has happened. Human chess champions, like Magnus Carlsen, are celebrities. I think he’s even, like, a fashion model. He’s like this kind of hero. So I think there’s something there where we can build—we can build a world where, you know, human life is meaningful, and humans, perhaps with the help of AIs, perhaps working with AIs, build really great things. So I am not that—I am actually not that pessimistic. But if we handle it wrongly, I think there’s maybe not that much room for error.

FROMAN: A couple of months ago we had DeepSeek released. In this town there was a fair degree of panic, I would say, around that. People talked about it as a Sputnik moment. Was it a Sputnik moment? And what does it teach us about whether those scaling rules that you laid out, about needing more compute, more data, better algorithms, whether that’s—those rules still apply, or whether there are some shortcuts?

AMODEI: Yeah. So DeepSeek, I think, actually was—rather than refuting the scaling laws, I think DeepSeek was actually an example of the scaling laws. So two dynamics—I had a post about this—but two dynamics are going on at the same time. One is that the cost of producing a given level of model intelligence is falling, roughly by about 4X a year. This is because we are getting better and better at, you know, kind of, algorithmically producing the same results with less cost. In other words, we’re shifting the curve. A year later you can get, you know, as good a model as you could get a year ago while spending 4X less, or you can get a 4X better model by spending the same amount.

But what that means economically is that whatever economic value the current, you know, model of a given intelligence has, the fact that you can make it for 4X cheaper means we make a lot more of it, and in fact provides additional incentive to spend more money to produce smarter models, which have higher economic value. And so even as the cost of producing a given level of intelligence has gone down, the amount we’re willing to spend has gone up. In fact, it has gone up fast, something like 10X a year, despite that 4X-a-year cost reduction, right? That’s been more than eaten up, because society, the economy, wants more intelligence. It wants more intelligent models.
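(As a rough worked example of the arithmetic just described: only the 4X-a-year efficiency gain and the roughly 10X-a-year growth in spending come from the conversation above; the starting budget and the “units of intelligence” accounting are arbitrary assumptions for illustration.)

```python
# Rough worked example of the two curves described above: the cost of a given
# level of capability falls ~4X a year while frontier spending grows ~10X a
# year. Only those two multipliers come from the talk; the starting values and
# the "units of intelligence" bookkeeping are arbitrary placeholders.
EFFICIENCY_GAIN_PER_YEAR = 4    # same capability gets ~4X cheaper each year
SPEND_GROWTH_PER_YEAR = 10      # frontier spending grows ~10X each year

spend = 100e6        # hypothetical frontier training budget in year 0, in dollars
cost_per_unit = 1.0  # arbitrary unit: dollars per "unit of intelligence"

for year in range(4):
    capability = spend / cost_per_unit
    print(f"year {year}: spend ${spend:,.0f}, frontier capability ~{capability:,.0f} units")
    spend *= SPEND_GROWTH_PER_YEAR
    cost_per_unit /= EFFICIENCY_GAIN_PER_YEAR

# Under this toy accounting, frontier capability grows ~40X a year, which is why
# a cheaper model at last year's capability level is a point on the curve rather
# than a refutation of it.
```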

So that is kind of the backdrop for DeepSeek. And DeepSeek was literally just another data point on the cost reduction curve. It was nothing unusual. It wasn’t like these U.S. companies are spending billions and DeepSeek did it for a few million. The costs were not out of line. They spent, yes, a few million on the model. What U.S. companies spend is not out of line with that. They, like us, spent billions on all the R&D and effort around the model. If you look at how many chips they have, it’s roughly on par.

Now, I do think it’s concerning, because up until recently there were only three, four, maybe five companies that were part of this curve that could produce frontier models. And they were all in the U.S. DeepSeek is the first time—the thing that really is notable—it’s the first time a company in China has been able to go toe to toe and produce the same kind of engineering innovations as companies like Anthropic, or OpenAI, or Google. That is actually very significant. And that actually worries me.

FROMAN: Now, some argue that the emergence of DeepSeek means that export controls don’t work, can’t work, we should stop trying to control the export of our most advanced chips. Others say it means we should double down on export controls. Where do you stand on that?

AMODEI: Yeah. So, you know, I think it’s an implication of the framework I just gave that the export controls are actually quite essential, because, yes, there’s this cost reduction curve, but at every point along the curve, no matter how much the curve is shifted, it is always the case that the more chips you spend, the more money you spend, the better model you get, right? If it’s like, you know, OK, before I could spend a billion dollars and get a model that was OK, now I can spend a billion dollars and get a model that’s much better, and I can get an OK model for $10 million. That doesn’t mean the export controls failed. That means stopping your adversaries from getting a billion-dollar model just became a higher stakes thing, because you can get a smarter model for a billion dollars.

And, yes, DeepSeek was—you know, they had a relatively small—you know, relatively small amount of compute, consisting of chips that went around the export controls, some chips that were smuggled. But I think we’re heading for a world where we, OpenAI, Google are building clusters of millions, maybe tens of millions of chips, costing tens of billions of dollars, or more. It’s very hard for that to be smuggled. If we put in place export controls we actually may be able to stop that from happening in China. Whereas, if we—if we don’t, I think they may be at parity with us. And so, you know, I was a big supporter of the diffusion rule. I’ve been a big supporter of export controls for several years, even before DeepSeek came out, because we saw this dynamic coming. And so I think it’s actually one of the most essential things across not just AI, across all fields, for the United States national security, for us to prevent China from getting millions of these very powerful chips.

FROMAN: The diffusion rule, as I understand it—this is a Biden administration EO—divided the world into three camps as to who could get access to what, in terms of chips from us. Some worry that the countries that are not in the top tier are just going to be served by China, and that China is going to end up running the AI infrastructure for the vast majority of the world. Has this occurred to you?

AMODEI: Yeah. So, on my understanding of the diffusion rule—and, you know, my understanding is the new administration is looking at it, but there are many parts that they’re sympathetic to—the way it actually sets things up is with these tier-two countries. So, tier-one countries are, like, the majority of the developed world.

FROMAN: But not all.

AMODEI: Not all. Tier three is, you know, restricted countries like China or Russia. Tier two are, you know, countries in the middle. Actually, you can have a very large number of chips in those countries if the companies hosting them are able to provide security affidavits and guarantees, which basically say we are not a front company for China. We are not, you know, shipping the compute, or what is done with the compute, to China. And so there really is an opportunity to build a lot of U.S. chips, a lot of U.S. infrastructure in these countries, as long as they comply with the security restrictions.

I think the second piece of it is, yes, in theory, companies could switch to using Chinese chips. But Chinese chips are actually quite inferior. Nvidia is way ahead of Huawei, which is the main producer of—which is the main producer of chips for China. Like something like four years ahead. I think that gap is going to close eventually, over the course of, I don’t know, ten or twenty years. Probably the export controls may even have, you know, the impact of stimulating China. But the tech stack is so deep. And I think the next ten years, during which we will stay strongly ahead in hardware, are actually the critical period for establishing dominance in this technology, and, I would argue, whoever establishes dominance in this technology will have military and economic dominance everywhere.

FROMAN: The last administration launched a dialog with China about AI. What are the prospects for such a dialog? Where could we possibly agree with China? And do they care about responsible scaling?

AMODEI: Yeah. So, you know, I would describe myself—and, of course, I wasn’t part of any of these conversations, but I heard a little about them—I would describe myself as supportive of this dialog, but not especially optimistic that it would work. So, you know, the technology has so much economic and military potential that, you know, between companies in the U.S. or our democratic allies you can imagine passing laws that create some restraint. When it’s just, like, two sides are racing to build this technology that has so much economic and military value, perhaps more than everything else put together, it’s hard to imagine them slowing down significantly.

I do think there are a few things. One is this risk of the AI models autonomously acting in ways that are not in line with human interest, right? If you have a country of geniuses in a datacenter, a natural question—how could you not ask this question—is, well, what is their intent? What do they plan to do? You would certainly ask, well, is someone controlling them? Are they acting on someone’s behalf? But you would also ask, well, what is their intent? And because we grow these systems more than we build them, I don’t think it’s safe to assume they’ll do exactly what their human designers or users want them to do.

So I think there’s real risk of that. I think it could be a threat to kind of all of humanity. And, like, issues of nuclear safety or nuclear proliferation, there’s probably some opportunity to take limited measures to help address that risk. So I’m relatively optimistic that maybe something narrow could be done. The stronger the evidence is of that coming—you know, right now that’s a kind of speculative thing. But if strong evidence came that this was imminent, then maybe more collaboration with China would be possible. So, you know, I’m hopeful that we can try and do something in this space, but I don’t think we’re going to change the dynamic of national competition between the two.

FROMAN: Last question before we open it up. You recently presented to, I guess, OSTP an action plan—a proposed action plan for the new administration, what they should do in this area. What are the main elements of that plan?

AMODEI: Yeah. So I think there are three elements around kind of security and national security, and three elements around opportunity. So the first one is what we’ve been talking about, like, making sure we keep these export controls in place. Like, I honestly believe this is the—across all areas, not just AI—the most important policy for the national security of the United States. The second thing is something actually related to the responsible scaling plans, which is the U.S. government, through the AISI, has been basically testing models for national security risks, such as, you know, biological and nuclear risks. You know, the institute is probably misnamed. You call it the safety institute. It makes it sound like trust and safety. But it’s really about measuring national security risks. And we don’t have an opinion on exactly where that’s done or what it’s called, but I think some function that does that measurement seems very important.

It’s also important even for measuring the capabilities of our adversaries. Like, you know, they can also measure DeepSeek’s models to see what dangers they might present, particularly if those models are used in the U.S. Like, what are they capable of? What might they do that’s dangerous? So that’s number two. Number three, on the risk side, is something we haven’t talked about, which is I am concerned about industrial espionage of the companies in the U.S., companies like Anthropic. You know, China is known for large-scale industrial espionage. We’re doing various things. There are things in our responsible scaling plan about, like, better and better security measures. But, you know, many of these algorithmic secrets, there are $100 million secrets that are a few lines of code. And, you know, I’m sure that there are folks trying to steal them, and they may be succeeding. And so more help from the U.S. government in helping to defend our companies against this risk is very important.

So those are the three on the security side. On the opportunity side, you know, the—I think the main three there are—one is the potential for the technology in the application layer, in things like health care. I think we have an extraordinary opportunity, as I said, to cure major diseases, major complex diseases that have been with us for hundreds or thousands of years, and that we haven’t been able to do anything about yet. I think that will happen one way or another, but regulatory policy really could affect, you know, does it take five years for AI to help us produce all those cures and distribute them to the world, or does it take thirty years? And that’s a big difference for people who suffer from those diseases. So, you know, our view here is that the policies of today around health care, around FDA approval of drugs, may not be appropriate for the fast progress—for the fast progress we’re going to see. And we may want to clear away some blockers.

The second is energy provision. If we’re going to stay ahead of China, and other authoritarian adversaries, in this—in this technology, we need to build datacenters. And it’s better if we build those datacenters in the U.S. or its allies than if we build them in countries that have divided loyalties, where they could literally just abscond with a datacenter and say, oh, sorry, we’re on China’s side now. And so, you know, some of this was done during the latter days of the Biden admin. And I think it’s a bipartisan thing. I think that with the Trump admin, you know, this is one area of agreement. There’s interest in provisioning a lot more energy. We probably need, across the industry, maybe fifty gigawatts of additional power by 2027 to fully power, you know, AI that has all the properties we’ve been talking about. Fifty gigawatts, for those who don’t know, is about how much capacity was added in aggregate to the U.S. grid in 2024. So by that year, you know, we need as much as—you know, half as much as is being added in the next two years. So it’s really—it’s really going to take a lot.

And then the final thing is the economic side of things. You know, as we talked about, you know, I think the worries on the economic side are just as existential as the worries on the national security side. You know, in the short run we’re going to need to manage the disruption, even as the pie gets much larger. You know, in the long run, as I’ve said, we’re going to need to think about a world where AI—and I don’t want to lie about this. I really think where this is going is that AI is going to be better than almost all humans at almost all things. We have to reckon with that world as soon as possible. For now, I think we just need to—the best thing we can do is measure to understand what’s going on.

We released this thing called the Anthropic Economic Index that, in a privacy-preserving way, looks through, you know, and summarizes our usage to understand, you know, in what fields are people using it? Is it augmentative? Is it replacing? But in the long run, you know, we’re really going to have—you know, this is going to implicate questions about tax policy and distribution of wealth, right? There’s this—you know, there’s this kind of alluring world where if the pie grows enough there could be the resources to do a lot about this. You know, like, let’s say—and this will sound crazy to this audience—but let’s say AI causes the economic growth rate to be 10 percent a year. Then suddenly the tax base is growing so much that, like, you can erase the deficit and, you know, maybe have all this—all this left over to manage the probably enormous disruption that comes from the technology. So that will—that will sound like crazy town, but, like, I just invite you to consider the hypothetical and start considering the possibility of crazy things like that now.

FROMAN: Crazy town. You heard it here first. OK, let’s open it up to questions. Yes, right here in front.

Q: Thanks, Dario. This has been a really fascinating conversation. Should I stand?

FROMAN: You just stand.

Q: OK. I will stand. Get my steps in.

FROMAN: And just say who you are.

Q: I’m Adem Bunkeddeko.

So I had—so I enjoyed reading Machines—your essay last year, and then hearing you on Hard Fork, on the Times, but also hearing this. And so the question I have for you is you sort of outline sort of the political and economic sort of implications. But I’m curious to get a sense of, like, are you—how have you thought about the social and moral sort of kind of considerations that are going to effectively come? Especially because I think most of the general public sort of sees some of the chatbots, sees some of this, and says, oh, it’s an improved Google search, but doesn’t really think about sort of the sort of downstream effects of the disruption in the labor market and the like. And so I’m curious to get a sense of how do you sort of think about that in tension with sort of building a company trying to build a commercial product?

AMODEI: Yeah. So, first of all, I mean, you know, I think this stuff is super important. And perhaps the most—the thing that’s disturbing me the most right now is the lack of awareness of the scope of what the technology is likely to bring. I mean, I could just be wrong. I’m saying a bunch of crazy stuff. Like, you know, the answer could just be the general public is right and, like, I’m wrong. I’m high on my own supply. I acknowledge that is possible. But let’s say it’s not the case.

What I’m seeing is there are these concentric circles of people realizing how big the technology could be. There’s probably maybe a few million people—very concentrated in Silicon Valley, but a few people high in the policy world—who also hold these beliefs. Again, we don’t know yet whether we/they are right or wrong. But if we are right, the whole population, again, thinks of this stuff as chatbots. If we say this is dangerous, if we say this could replace all human work, it sounds crazy because what they’re looking at is something that, in some cases, seems pretty frivolous. But they don’t know what’s about to hit them. And so I think—I think that’s—that actually keeps me up at night a lot, and is why I’m kind of trying to spread the message to more people. So I think awareness is step one.

I think these questions around human labor and human work, in a world where it is technologically possible to replicate the effects of the human mind, I think these are very deep questions. I don’t feel like I have the answer to them. I feel like, you know, as you’ve said, these are—these are kind of moral questions, almost—you know, almost, like, questions about purpose, you may even say spiritual questions, right? And so we are all going to have to answer these questions together. I mean, I’ll give you kind of the embryo of an answer I have, which is that somehow the idea of humans’ self-worth, the tying of that to the ability to create economic value, there are aspects of that that are deeply embedded in our psychology, but there are aspects to that that are cultural.

You know, there are a lot of things about that that work well. It’s created a modern participatory economy. But technology, as it often does, may kind of lay bare that illusion. It may be another moment, like, you know, the moment we realized that the Earth revolves around the Sun, instead of the Sun revolving around the Earth. Or, you know, there are many, many solar systems. Or organic material is not made up of different molecules than inorganic material. So we just may have one of those moments. And there may be a reckoning. And, again, my answer is I am struck by how meaningful activities can be even when they are not generating economic value.

I am struck by how much I can enjoy things that I am not the best in the world at. If the requirement is you have to be the best in the world at something in order for it to be somehow spiritually meaningful for you, I feel like you’ve taken a wrong turn. Like, I feel like there’s something wrong embedded in that assumption. And I say that as someone who spends a lot of time trying to be the best in the world, you know, at something I think is really important. But somehow we’re—our source of meaning is going to have to be something other than that.

FROMAN: Yes, Cam.

Q: Thanks. Cam Kerry at the Brookings Institution.

One of the things that leapt out at me in the—from the U.K. AI Safety Report is the possibility that, come 2030 or so, scaling up may run out of data. How do you then scale? How do you make the models smarter? And what are the limitations of that data? I mean, there’s a tremendous amount of text and video information that’s digitized, a tremendous amount that resides in our minds and in the universe that is not. How do you deal with that?

AMODEI: Yeah. So a couple answers on this. One is that in the last six months there have been some innovations—the first, you know, actually came from OpenAI, not us, but there are others that we have made—that obviate the need for as much data as we needed before. These are the so-called reasoning models, where they basically—they have thoughts. They start to think through the answers to complex questions. And then they train on kind of their own thoughts.

You can think about how humans do this, where sometimes I can learn things by—you know, I’ll make a plan in my head and then I’ll think about it again and I’ll say, oh, actually, you know, on second thought, that doesn’t really make much sense. Like, what are you thinking, right? And then you kind of learn something from this. Of course, you also have to act in the world. You also have to act in the real world. But AIs have not been making use of that kind of cognition at all until recently. So far, that’s mostly applied to tasks like math and computer programming. But my view, without being too specific, is that it’s not going to be terribly difficult to extend that kind of thinking to a much wider range of tasks.

The second point is, even if we do run out of data in 2030, if the exponential continues for even two or three more years it may get us to a point where we’re kind of already at the—at the genius level. And, you know, that may be enough for a lot of these changes. And we may also be able to ask the models, hey, we have this problem. Human scientists weren’t able to solve it. Can you help us solve this problem? I do still give a small likelihood that, for whatever reason, both of those things won’t work out or aren’t as they appear, and data could be one of the plausible things that could—that could block us. I thought it was a very plausible blocker one or two years ago. I thought one or two years ago if something would stop the show, this was in the top three of the list of things that would. But I think my potential skepticism here has been not completely refuted, but reasonably well refuted.

FROMAN: What are the top three things that could stop the show?

AMODEI: So, actually, at this point I think the number-one thing that could stop it would be an interruption to the supply of GPUs. If, for instance, the small, disputed territory where all the GPUs are produced had some military conflict, that would certainly do it. I think another thing would be if there’s a large enough disruption to the stock market that messes with the capitalization of these companies. Basically, a kind of—a kind of belief that the technology will not, you know, move forward, and that kind of creates a self-fulfilling prophecy where there’s not enough capitalization. And, third, I would say, if I or we, the field, are kind of wrong about the promising-ness of this new paradigm of kind of learning from your own data. If somehow it’s not as broad as it seems, or there’s just more to getting it right than we think—there are some insights missing.

FROMAN: We’ll go to an online question.

OPERATOR: We’ll take the next question from Esther Dyson.

AMODEI: I recognize that name.

OPERATOR: Ms. Dyson, please unmute your line.

Q: Thank you. Apologies. Esther Dyson, writing a book called Term Limits on term limits for people, and for AIs, and so forth.

I have a question about this whole existential risk thing. It seems to me that the bigger risks, honestly, are humans, who are even more unexplainable than AIs, but humans and their business models using AIs. And then specifically there’s the famous paper clip problem, where you ask the AI to make paper clips and it does that to the exclusion of anything else. And this is slightly metaphorical, but the world seems to be going mad for datacenters. And it really is kind of draining resources from everything else to fund datacenters, AI, data pools, whatever. And so in a sense, AI is creating a fitness function for society that is, I think, harming the value of humans, which is not just their intellectual capacity. That’s the end of the question. Thank you.

AMODEI: So, you know, I would say, just as there are many different benefits of AI—and every time we produce a new AI—every time we produce a new AI model it has, you know, a long list of ten benefits that we anticipated and then a bunch more that we didn’t. Like, every time we release a new model there are, like, new use cases and customers are, like, I didn’t even think of doing that with an AI system. It is unfortunately also the case that, like, there is—you know, we shouldn’t say this risk is a distraction from that risk. It just unfortunately is the case that there are many different risks from the AI systems. And if we want to get through this, we somehow have to deal with them all.

So I think it is a big risk that humans will misuse the AI systems. I think it is a big risk that we may have difficulty controlling the AI systems themselves. Again, to use the analogy of a country of geniuses in a datacenter, we plop down a country of, you know, ten million geniuses in, you know, Antarctica or something. We’re going to have multiple questions about what that will do to humanity. You know, we’re going to ask, well, who’s—you know, is some existing—does some existing country own them? Is it—is it doing their bidding? And what will that do? You know, are the benefits—you know, is the outcome of that beneficial? We’ll say, you know, are there—are there individuals who could misuse it? And we’ll say, what are the intentions, you know, of that country of geniuses itself?

And then to get at the question you asked near the end, like, are there kind of more distributed societal things? Like, I certainly believe that if, you know, more and more of the world is—more and more of our energy is devoted to AI systems, like, you know, it’ll be great. They’ll do things really efficiently. But, like, you know, also could that make some of our existing environmental problems worse? Like, I think that’s a real risk. And then you can say, well, will the AIs be better at helping us to solve our environmental problems? So we spend a bunch of energy, and then with the AI systems, you know, it turns out we end up better than we started if we’re able to solve it. So I’m optimistic that that will be the case, but that’s, like, another risk. Like, a number of things have to be true for it to turn out that way.

So, you know, I just think we’re at a time of great change. And therefore, you know, we have to make extraordinarily wise choices to get through it. I mean, you know, I recognize the name of the person asking the question. And I might—I might get this wrong, but I think it was your—I think it was your father who said—I listened to a video of him, because I was—I was a physicist. And I listened to a video of him, where, you know, he said, we have all these problems today and we—you know, it seems like we can’t solve them. But, you know, I remember—I remember in my day, you know, it really seemed like we had all these, you know, severe, severe, you know, problem—you know, thinking of just World War II, or the Cold War, or nuclear annihilation. And somehow we made it through. So it doesn’t mean we will again, but. (Laughter.)

FROMAN: Yes. The woman in the back, there.

Q: Hi. My name is Carmem Domingues. I’m an AI specialist with a background in development, implementation, and, more recently, focusing a bit more on the policy side.

I hear loud and clear on the lack of awareness generally of what is AI and what is not AI, and what it can and cannot do. But I’m going to skip over that. I do some science communication around that too. But my question today is around how, a few months ago, you brought on Kyle Fish as an AI welfare researcher to look at, you know, sentience, or lack thereof, of future AI models, and whether they might deserve moral consideration and protections in the future. If you could talk a bit about that, the reasoning for that, and if you have an equivalent human welfare research team going. Thanks.

AMODEI: Yeah. So this is—this is another one of those topics that’s going to make me sound completely insane. So it is actually my view that, you know, if we build these systems and you know, they differ in many details from the way the human brain is built, but the count of neurons, the count of connections, is strikingly similar. Some of the concepts are strikingly similar. I have a—I have a functionalist view of, you know, moral welfare of the nature of experience, perhaps even of consciousness. And so I think we should at least consider the question of, if we are building these systems and they do all kinds of things like humans as well as humans, and seem to have a lot of the same cognitive capacities, if it quacks like a duck and it walks like a duck, maybe it’s a duck. And we should really think about, you know, do these things have, you know, real experience that’s meaningful in some way.

If we’re deploying millions of them and we’re not thinking about the experience that they have, and they may not have any. It is a very hard question to answer. It’s something we should think about very seriously. And this isn’t just a philosophical question. I was surprised to learn there are surprisingly practical things you can do. So, you know, something we’re thinking about starting to deploy is, you know, when we deploy—when we deploy our models in their deployment environments just giving the model a button that says, “I quit this job,” that the model can press, right? It’s just some kind of very basic, you know, preference framework where you say, if—hypothesizing the model did have experience, and that it hated the job enough, giving it the ability to press the button, “I quit this job.” If you find the models pressing this button a lot for things that are really unpleasant, you know, maybe you should pay some—it doesn’t mean you’re convinced, but maybe you should pay some attention to it. Sounds crazy, I know. It’s probably the craziest thing I’ve said so far.

FROMAN: Way in the back there. Trooper, yeah.

Q: Hi. Trooper Sanders.

You talked about the excitement of AI and medical science, biology, chemistry, et cetera. If you could say what—is there any excitement around the social sciences? So, you know, most of health care is done outside of the pill box and the exam room. Public health involves a number of other areas. Can you say anything about that side of things?

AMODEI: Yeah. I mean, if I think about epidemiology, you know, when I was in grad school there was a project being done by the Gates Foundation to use kind of, you know, mathematical and computational methods around epidemiology. I think they were, you know, planning to use it to help eradicate malaria, polio, other areas. The quantity of data that we get and the ability to pull all the pieces together and understand what’s going on in an epidemic, I bet that could benefit hugely from AI. The clinical trial process. We’ve already seen things like this.

So actually this is something Anthropic has done with Novo Nordisk, the maker of Ozempic and other drugs. At the end of a clinical trial you have to write a clinical study report. You know, it summarizes adverse incidents, does all the statistical analysis, to present to the FDA or other regulatory agencies for whether to approve the drug. Typically, this takes about ten weeks. They’ve started using our model for this. And the model takes about ten minutes to write the clinical study report, and humans take about three days to check it. And the quality, at least as we’ve seen in early studies—that doesn’t determine everything—has been deemed to be comparable to what—to what humans are able to do with the ten-week process.
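(For a sense of what wiring a model into a workflow like that looks like in practice, here is a minimal sketch using Anthropic’s public Python client. The model alias, file name, and prompt are illustrative assumptions; the actual pipeline described above is not public, and any draft produced this way would still need expert human review.)

```python
# Minimal sketch of asking a Claude model to draft a clinical-study-report
# section from trial data. The prompt, file name, and model string are
# illustrative assumptions; the real pipeline discussed above is not public,
# and a draft like this would still need expert human review.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("adverse_events.csv") as f:  # hypothetical trial-data export
    adverse_events = f.read()

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model alias
    max_tokens=4000,
    messages=[{
        "role": "user",
        "content": (
            "Draft the adverse-events section of a clinical study report "
            "from the following trial data, summarizing incidents by severity "
            "and flagging anything that needs statistical follow-up:\n\n"
            + adverse_events
        ),
    }],
)

print(message.content[0].text)  # draft text for human reviewers to check
```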

So we need to do clinical trials. There are a lot of social science problems around that. There are a lot of regulatory problems. I write about that a bit in the essay, that I think those things are going to—are going to be the thing that limits the rate of progress. But even within things like clinical trials, I think the AI systems will be able to help a lot in, if not dissolving those questions, at least radically simplifying them.

FROMAN: Yes, right here. There’s a microphone coming.

Q: I’m Louise Shelley. I’m an expert on illicit trade, from George Mason University.

Next week there is a global summit of the OECD on illicit trade. But what you’ve talked about is not what I expected to hear on this problem of smuggling of parts. And it’s on no one’s radar screen. What happens when you’re talking about it? Because it’s not reaching the community that is needed to protect against this illicit trade.

AMODEI: I didn’t hear the last part of the question.

FROMAN: So how come these issues aren’t on the agenda of those people concerned about illicit trade?

AMODEI: Yeah. You know, yeah. I think, my answer to that is, you know, it should be on the radar of those people. You know, again, I have a world view here that not everyone shares. And I may be right or I may be wrong. But all I can say is if this world view is correct then, you know, we should be worrying a lot more about smuggling these GPUs than, you know, we’re worried about smuggling, you know, guns, or even drones, or fentanyl, or whatever. Yeah, but—you know, if you were to smuggle five million of these to China—and, you know, to be clear, you know, that’s, like, $20 billion of value or something like that—you know, that would drastically change the national security balance of the world. I think it’s the—I think it’s the most important thing.

So, you know, again, this is—this is the dilemma of am I just crazy or does the world have a big, big awareness problem here? And if the world has a big, big awareness problem here, then a downstream consequence of that is we’re focusing on all these other things. And, you know, when you say illicit trade, there are certain things that people have been focusing on for a long time; this is a new thing. But that doesn’t mean it’s not the most important thing.

FROMAN: Ah, so many good questions, I’m sure. This gentleman right here.

Q: Thank you. Alan Raul. Practicing lawyer, lecturer at Harvard Law School, and future useless person. (Laughter.)

AMODEI: So are we all.

Q: So I’d like to follow up on your various comments on national security. You mentioned the Artificial Intelligence Security Institute and its testing. The Biden executive order on AI had mandatory reporting of the acquisition or development of super-capable dual-use foundation models, at the 10^26 FLOP threshold. But my question is, how do you engage? How does, you know, Anthropic, the AI community, the developers of these super-capable models, how do they engage with, let’s just say, the U.S. national security community, the intelligence community? And practically, you know, what does that mean for the development of AI? And if you tell me you’d have to kill me, I don’t need to know that badly. (Laughter.)

AMODEI: Yeah. So I think there are a few things here. One is typically Anthropic, in particular, although the other companies have started doing similar things, whenever we develop a new model we have a team within Anthropic called the Frontier Red Team. Some of this is happening, you know, testing with the AI Safety and Security Institutes but, you know, we work in collaboration. We develop some stuff. They develop some stuff. But the general flow has been, when we test the models for things like biological risk or cyber risk, or, you know, chemical or radiological risk, we’ll typically go to people in the national security community and say, hey, this is where the models are at in terms of these particular capabilities.

You know, you guys should know about this because, you know, you’re the ones who are responsible for detecting the bad actors who would—who would do this with the models. You know what they’re capable of now. You might therefore have a sense of what the models can do that is additive or augmentative to their current capabilities, right? The part of it we miss is, like, you know, we’re not counterterrorism experts. We’re not experts on all the bad guys in the world and what their—what their capabilities are, and what, you know, large language models would add to the picture. And so we’ve had a very productive dialog with them on these issues.

The other topic which we’ve talked to them about is the security of the—you know, the companies themselves. You know, this was one of the things in our kind of OSTP submission of, you know, making this more formal, making this something the U.S. government does as a matter of course. But, you know, if we’re worried that we’re going to be attacked digitally or, you know—or, you know, via a kind of, you know, human means, insider threat, then, you know, we’ll often talk to the national security community about that.

And I think the—you know, the third kind of interaction is, you know, about the national security implications of the models, right? We’ve been—you know, these things that I’m saying publicly now, I’ve been—you know, I’ve been—I’ve been saying them in some form to some people for quite a while. Then I think the fourth thing is there’s an opportunity to apply the models to enhance our national security. This is something that I, and Anthropic, have been supportive of, although we want to make sure that there’s the right guardrails, right? On one hand, I think, you know, if we don’t apply these technologies for our national security, we are going to be defenseless against our adversaries.

On the other hand, I think everyone believes that there should be limits. You know, I don’t think there’s anyone who thinks, you know, we should—we should, you know, hook up the—hook up AI systems to nuclear weapons and let them fire nuclear weapons without humans being in the loop. That’s the plot of Dr. Strangelove. Yeah, that is—that is literally the plot of Dr. Strangelove. So somewhere between there, there is some—like, you know, there’s some ground. And, you know, we’re kind of still working on defining that. It’s one of the things where we hope to kind of be leaders in defining what the appropriate use of AI for national security is. But that’s another area where we’ve had interaction with the national security community.

FROMAN: Your comment a couple minutes ago about trying to understand the experience of the AI models has been sort of sinking in for me. So let me just conclude with one final question. Which is, in the world that you envisage, what does it mean to be human?

AMODEI: Yeah. You know, I think my picture of it—the thing that seems—the thing that seems most—there are maybe two things that seem most human to me. The first thing that seems most human to me is, you know, struggling through our relationships with other humans, our obligations to them, you know, how we have to treat them, the difficulties we have in our relationships with other humans and how we overcome those difficulties. You know, when I—when I think of, you know, both things that people are proud of doing and the biggest mistakes people have made, they almost always relate—they almost always relate to that. And AI systems maybe can help us to do that better, but I think that will always be one of the quintessential challenges of being human.

And I think maybe the second challenge is, you know, the ambition to do very difficult things, which, again, I will repeat, I think will ultimately be unaffected by the existence of AI systems that are smarter than us and can do things that we cannot do. I, again, think of, like, human chess champions are still celebrities. You know, I can—you know, I can learn to swim or learn to play tennis, and the fact that I am not the world champion does not—does not negate the meaning of those activities. And you know, even things that I might do over fifty years, over a hundred years, you know, I want those things to retain their—to retain their meaning and, you know, the ability of humans to strive towards these things, not to—not to give up. You know, again, I think—I think those two things are maybe what I would identify.

FROMAN: Please join me in thanking Dario Amodei for spending time with us. (Applause.)

AMODEI: Thank you for having me.

(END)

This is an uncorrected transcript.
