
Eric Schmidt on AI, the Battle with China, and the Future of America
Transcript
[Music] I honestly believe that the AI revolution is underhyped. Now, why is this all important? >> Eric Schmidt is here. He's the former Google executive chairman and CEO. >> These agents are going to be really powerful, and they'll start to work together. We're soon going to be able to have computers running on their own, deciding what they want to do. Now we have the arrival of a new nonhuman intelligence, which is likely to have better reasoning skills than humans have. >> So if you were emperor of the world for one hour? >> The most important thing I'd do is make sure that the West wins. >> Ladies and gentlemen, please welcome Eric Schmidt. [Music] >> Hi. Looking good. Good to see you. >> Good to see you. >> You're looking very nice, Eric. >> Oh my god, David, good to see you. >> David Sacks is here as well. >> It's like a reunion of all of our former companies. David, why did you quit, after all? >> My old boss. >> What was it like working with young Friedberg? Take us back. >> Can I tell a story? We had to come down to Orange County, and they're like, "Hey, we're going to take the plane," and it was Eric's plane. We get on the plane, and then he goes up and flies the plane. I'm in the back of the plane by myself. >> Was it the King Air? >> I'm like, "The CEO of Google is flying me down to Orange County." It was incredible. That was my first time actually hanging out with Eric. >> It was my Gulfstream. >> That's right. >> He was way too smart. >> Way too smart. >> Was he focused? Did he contribute? Did he move the needle? >> But he was very smart. >> Okay, that's kind of our consensus off the pod as well. >> Look, you guys know this guy well. He's really that smart. He taught me more stuff than almost any of the employees at Google, and then you left. >> Well, tell us what you've been doing. >> No, no, wait.
Before that, I've got to ask you this question. There was a recently deleted video from Stanford. >> Oh, no. >> You had a moment of clarity where you said, "Hey, at Google, people have too much work-life balance. They need to commit. They need to work harder." We had Sergey at the last event. He's going back to work. So Sergey got the message. >> Predicting Sergey's behavior is something I can fail at. I tried for 20 years. I am not in favor of essentially working at home. Many of you guys work at home to some degree, but your careers are already established. But think about a 20-something who has to learn how the world works. They come out of Berkeley or Dartmouth and they're very well educated. When I think about how much I learned when I was at Sun just listening to these older people, who were five or ten years older than I was, argue with each other in person, I don't know how you recreate that in this new thing. And I'm in favor of work-life balance, and that's why people work for the government. Sorry, sorry. If you're going to be in tech and you're going to win, you're going to have to make some tradeoffs. And remember, we're up against the Chinese. The Chinese work-life balance consists of 996, which is 9:00 a.m. to 9:00 p.m., six days a week. By the way, the Chinese have clarified that this is illegal. However, they all do it. That's who you're competing against. >> I brought everybody back to the office. It's so much better. >> So let's just pick up on that theme. >> You don't need to defend the government. >> No, no. Believe me, I don't see the need to. I'm an unpaid part-time adviser to the government. But we are in this high-tech competition with China. They obviously care about AI, too. They're trying to race ahead. I understand that you recently made a trip there.
How do you handicap this competition? >> Well, you and I just talked about this as part of your incredibly important work in the White House. I had thought that China and the United States were competing at the peer level in AI, and that the good work that you and your predecessors did to restrict chips was slowing them down. They're really doing something more different than I thought. They're not pursuing crazy AGI strategies, partly because of the hardware limitations that you've put in place, but partly because the depth of their capital markets doesn't exist. They can't raise, based on a wing and a prayer, $100 million or maybe an equivalent to build the data centers. They just can't do it. And so the result is they're very focused on taking AI and applying it to everything. And so the concern I have is that while we're pursuing AGI, which is incredibly interesting, and we should talk about it, and all of us will be affected by it, we had better also be competing with the Chinese in day-to-day stuff. Consumer apps, and this is something you understand very well, Chamath, consumer apps, robots, and so forth. I saw all the Shanghai robotics companies, and these guys are attempting to do in robots what they've successfully done with electric vehicles. Their work ethic is incredible. They're well-funded. It's not the crazy valuations that we have in America. They can't raise the capital, but they can win across that. The other thing the Chinese are doing, and I want to emphasize this is a major geopolitical issue, is that my own background is open source. In the audience, you all know open source means open code; open weights means open training data. China is competing with open weights and open training data, and the US is largely focused on closed weights, closed data.
That means that the majority of the world, think of it as the Belt and Road Initiative, is going to use Chinese models and not American models. Now, I happen to think the West and democracies are correct, and I'd much rather have the proliferation of large language models, and that learning, be based on Western values. >> Eric, we had a major open-source initiative with Meta, you know, incredible balance sheet, tremendous technical firepower, but they seem to have misexecuted and now are taking a step back and reformulating something that, to your point, looks a little bit more closed source. >> It's not clear. You know, Alex Wang is a good friend. He's come in, he's taken over. He's obviously incredibly capable. I would not say that they're going fully closed. And I think they also got screwed up because the DeepSeek people, with R1, did such a good job. If you look at the reasoning model in DeepSeek, and in particular their ability to do reinforcement learning forward and back, forward and back, forward and back, this is a major achievement. And it appears that they're doing it with less numeric precision than the American models. As a bit of technical background, there's something called FP64, FP32, FP16. The American models are typically using 16-bit precision for their training. The Chinese are pushing eight and now even four. >> Is there something that the American, you know, bigger companies need to be doing in open source so that we can actually combat this? >> Well, a number of the large companies have said that they want to be leaders in open source as well. Sam Altman indicated that the smallest version of the o3 model would be released, I believe, open weights, and they have done so. And he told me, anyway, that this model is much smaller than the 10-to-the-26th-class models. It's much easier to train, and it can fit on your phone.
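To make the precision point concrete: the bits used per weight translate directly into memory footprint, which is why pushing from FP16 toward FP8 and FP4 matters for fitting models on small devices. A minimal sketch; the 70-billion-parameter count is a made-up illustrative figure, not one from the conversation:

```python
# Approximate memory needed just to hold model weights at different precisions.
# The parameter count is a hypothetical example, not a figure from the talk.
def weight_memory_gb(num_params: int, bits_per_param: int) -> float:
    """bytes = params * bits / 8; report in gigabytes (1e9 bytes)."""
    return num_params * bits_per_param / 8 / 1e9

params = 70_000_000_000  # 70B parameters (illustrative)
for bits in (16, 8, 4):  # FP16 vs FP8 vs FP4 weights
    print(f"FP{bits}: {weight_memory_gb(params, bits):.0f} GB")
# FP16: 140 GB, FP8: 70 GB, FP4: 35 GB
```

Halving the precision halves the footprint, which is the whole appeal: a model that needs 140 GB of weights at FP16 needs 35 GB at FP4, before even counting activations or optimizer state.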
So one path is to say that we'll have these supercomputers doing AGI, which will always be incredibly expensive, and so forth. But we also have to watch to make sure that the proliferation of these models for handheld devices is under American control, whether it's OpenAI or Meta or Gemini or what have you. >> Recently you took over Relativity Space, and I think, for the people that don't know, this is a business whose ambition, effectively, is to compete with SpaceX. I think you were the first investor, or the earliest investor, in it. >> I was. >> I'm sorry. >> It's okay. >> You lost some money in the first... what happened? Did you get crammed down? >> No, no, no. I mean, all of us did. >> I mean, look, I've been very happily an investor in SpaceX and Swarm and Starlink. Relativity was... >> And by the way, Swarm is a big deal. >> So thank you. >> Yeah, Swarm has been a really great success for them and, I think, for the world. But what I was going to ask you is: walk us through the evolution of the space market. Why did you decide, of all the companies, with the capital base to put your money anywhere, to pick that business? Why now? >> Rockets are really cool, and they're really hard. As you know, I'm a pilot, and I know lots about jets. And I had assumed that rockets were as mature as jet engines. They're not. It is an art and a science. These things are very hard to do. The amounts of power... I mean, in our case, the rocket is 4 million pounds of thrust. You have to hold the thing down to test it. And you can't even hold it with metal things; you have to have other things to hold it down as well. There's so much force, otherwise it will take off. Another interesting thing about rockets is that, as a rough number, 2% of the weight of the rocket is the payload, 18% is roughly the rocket, and 80% is the propellant.
And my reaction as a new person is: you're telling me you can't do any better? And the physicists say, after 60 years of physics, that's the best we can do to get out of the gravitation of the Earth. And so I think rockets are interesting and they're challenging. There's always an opportunity for competition. In Relativity Space's area, it's essentially a LEO competitor: low Earth orbit satellites, that sort of thing. The order book is full. We just have to launch the rocket. >> And this entry into space happened, and I'm not sure how well known this is, so you can go as far as you want into this, but you've done a lot in next-generation warfare as well. Do you want to talk about that, how you ended up there, and what role that plays, and just give us a landscape? David asked about the China question, but these things are all almost interrelated. >> Well, first of all, I'm a software person, not a hardware person. I explain to people that hardware people go to different schools than software people, and they think slightly differently. So I'm always at a limitation in these new industries. I had worked for the Secretary of Defense and have a top secret clearance and all that. I was given a medal, etc., for trying to help the Pentagon reorganize itself. And when the Ukraine war started, I was watching and I thought, well, here's an opportunity to see how a country that has no navy and no air force does this with automation. And indeed, it has been a spectacular success as a matter of innovation. Outnumbered three to one, with huge differences in kinetic strength, weapons, mobilization, and so forth, Ukraine has held on really quite well. And what's happening now is you're seeing essentially the birth of a completely new military national security structure.
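As an aside on the propellant fractions mentioned a moment ago: the roughly 80%-propellant figure is what the Tsiolkovsky rocket equation forces on you. A quick sketch of the arithmetic; the specific impulse value here is an assumed illustrative number, not one from the conversation:

```python
import math

# Tsiolkovsky rocket equation: delta_v = Isp * g0 * ln(m0 / mf),
# where m0/mf is liftoff mass over burnout mass. With ~80% of liftoff
# mass being propellant, the mass ratio is 1 / 0.20 = 5.
def delta_v(isp_s: float, mass_ratio: float, g0: float = 9.81) -> float:
    return isp_s * g0 * math.log(mass_ratio)

dv = delta_v(isp_s=350.0, mass_ratio=1 / 0.20)  # Isp of 350 s is assumed
print(f"{dv:.0f} m/s")  # ~5,500 m/s for a single stage
```

Even at 80% propellant, one stage yields only around 5.5 km/s, well short of the roughly 9.4 km/s needed to reach low Earth orbit once gravity and drag losses are included, which is why orbital rockets are staged and why "do better than 80%" is so hard.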
First of all, I've seen it live, and I will tell you that real war is much worse than the worst movies you have ever seen about war. And that's all I'll say. It's really horrific, and it's to be avoided at all costs. And to all these people with the warmongering talk: be careful what you wish for, because the other side gets a vote. When I started working on and trying to understand what Ukraine was doing, Russia was pushed back, and they've come back with a very, very strong second and third round. So the enemy gets a vote in this situation. But to go on: the rough way in which war will evolve is, first, things will have to be very, very mobile and very much not in fixed places. This takes out most of the military infrastructure that exists in the world. Things like tanks, of which we're now building a whole bunch more, even stronger tanks, here in America, don't make any sense in a world where a 2 kg payload from a well-armed drone can destroy the tank. It's called the kill ratio. And that drone costs, retail, $4,000 or $5,000. The American tank costs $30 million. You can send an awful lot of those drones to destroy those tanks. The likely evolution goes something like this. First, people learn that drones are like rifles and like artillery. So it's more efficient to use drones now than to use mortars, grenades, artillery. That's clear if you just look at the economics, cost-effectiveness as it's called. The next thing that happens is that both sides develop drone capabilities, which is what you're seeing now, and it then becomes a war of drone against drone. So you have drone against anti-drone. And then the shift moves to: how do you detect the enemy drone, and how do you destroy it before it destroys you?
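The cost asymmetry behind the kill-ratio argument works out starkly even from these rough spoken figures:

```python
# Cost asymmetry from the figures quoted above (both are rough spoken numbers).
drone_cost = 5_000        # retail cost of an armed small drone, USD
tank_cost = 30_000_000    # quoted cost of an American tank, USD

drones_per_tank = tank_cost // drone_cost
print(drones_per_tank)  # 6000: you can buy 6,000 drones for one tank
```

At that ratio, even if only a tiny fraction of drones get through, attacking the tank is a winning trade, which is the economic logic driving the shift he describes.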
So the doctrine ultimately is: the drones are forward and the people are behind. And I've seen operations, for example, sitting in Kyiv, where the Ukrainians are commanding things, over Starlink I might add, in the distant war, and they're very, very effective. So we've solved the latency problems, we've solved the timing problems, and so forth in that area. The ultimate state is very interesting, and I don't think anyone has foreseen this. Go back to our conversation about RL and planning, which is what you're seeing with AI. Let's say that we're on one side and we have a million drones, and there's another side over here that has another million drones. Each side will use reinforcement-learning AI strategies to do battle plans, but neither side can figure out what the other side's battle plan is. And therefore the deterrence against attacking each other will be very high. Today, the way military planners operate is that they count weapons. They say, "Well, you have this many and I have this many, and you can do this kind of a maneuver," and so forth. But in an AI world where you're doing reinforcement learning, you can't count what the other side is planning. You can't see it. You don't know it. And I believe that will deter what I view as one of the most horrendous things ever done by humans, which is war. >> Because unless there's a perfect balance between the two sides, there will be some mutual destruction of the drone supply, like there would be with any artillery stock in traditional warfare, and whoever's left ends up winning. >> Well, it's very important to understand that there are no winners in war. By the time you have a drone battle of the scale I'm describing, the entire infrastructure of your side will be destroyed. The entire infrastructure of the other side will be destroyed. These are lose-lose scenarios.
Isn't there an equilibrium, though, that that can also create, where because of that mutually assured destruction there's a deterrent? Or is that... >> Well, I'm arguing that it's not that kind of deterrence. Deterrence can be understood as: I want to hit you, which I don't, but if I do, the penalty is greater than the value of me hitting you. >> Right. >> And that's how deterrence works. >> But that seems like a great advantage and upside of this move to drones and automation that we don't have today. >> Well, there are many advantages to moving to drones and automation. One, they're much, much cheaper. >> Yeah. >> And two, you can stockpile algorithms. You can essentially learn and learn and learn. And remember, you can also build training data that's synthetic, so you can be even better than the others. The final question I've been asked by our military is: what's the role of a traditional land army? And I wish I could say that all of these human behaviors can occur without humans being at risk. I don't think so. I think that the way robot war, essentially drone war, will occur is that there will be these destructive waves, but eventually humans are going to have to cross a line. >> After we've depleted the drones. So, you're investing in this drone technology. Do you think Optimus and humanoid robots are the next, you know, volley in this new warfare? >> It's going to be a long time before we see humanoid robots, which is what we see in the movies, right? It'll be a very long time before we see that. What you're going to see is very, very fast mobility solutions.
Air-based solutions, and also hypersonics. >> Hypersonics. >> Also things underwater; there's a lot of that going on. It's a different domain. If you look at the Magura and some other boats that the Ukrainians used, they have essentially used USVs to destroy the Russian fleet in the Black Sea. This was crucial for them because they needed to be able to export the grain from Odessa, and it's like 6% or 10% of their economy. It's a very big deal, and they did that with drones. >> Eric, it seems like there's this overarching worldview that you have. You have this view on AI, and there's all the stuff you're doing now in drones, in warfare, in rocketry. It all converges, quite honestly, because in the next five or ten years these things will all come to pass. How do you view the world? What is the role of America? What is your role as a capitalist, as a technologist, as a statesman? >> I want America to win. I am here because of the American dream: the people who invested in me, in my case Berkeley and so forth, took a chance on me. I want the next generation to have that. I also want you all to remember... I was just in Honolulu as part of the World War II surrender ceremony, and they talked about fighting tyranny. We forget that our ancestors, our great-grandparents or whatever, fought the Great War to keep liberalism and democracy alive. I want us to do that. How do we do that as Americans? We use our strengths. What are our strengths? We're chaotic, confusing, loud, you know, but we're clever. We allocate capital smartly. We have very deep financial markets. We have this enormous industrial base of universities and entrepreneurs, which is represented here. We should celebrate this. We should stoke it. We should make it go faster and faster. I spend lots of time in Europe because of the Ukraine work. They are so envious of us. When you're in Asia, they are envious of us.
Don't screw it up, guys. That's what I want to work on. >> Can I ask you, outside of this external conflict: we had a conversation with Alex Karp today, and we actually had Tucker Carlson here yesterday, and some of the dialogue was around, I don't know if this is the right term, the erosion of the West; that there may be social issues brewing in the West that may be hurting us from the inside. How much do you observe or spend time on these issues? The metric that is often cited now is declining birth rates in the West. And we're going to talk with Elon in a few minutes about this. Oh, sorry. >> Oh, we just ruined my surprise. Oops. >> Sorry. Sorry. >> There's your surprise guest. >> Sorry. Sorry. >> Elon is a good friend, and he's addressing this issue of population directly himself. He's personally solving for it. Good for him. >> Is it a reflection of something going on? There's the rise of Mamdani getting elected in New York. Some of the historic values of the West seem to be under a state of transformation right now. >> One metric of the success of a society is its ability to reproduce. And so I think this is a legitimate concern of the West. It's much worse in Asia. The Chinese number is about 1.0 for two parents. In Korea, it's now down to 0.78 for two. So it's really important to recognize that we as humans are collectively choosing to depopulate. And the numbers are staggering. Imagine a situation where instead of having growth, you have shrinkage. And furthermore, they're getting older. And so, as a business, all of a sudden your revenue is declining, and there's nothing you can do, because you can't innovate with fewer and fewer customers. So if you just put it in a business context, ignoring the moral issues, which are all very real, it's just bad. So we have to solve that problem.
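For a sense of how staggering those fertility numbers are, they compound per generation: each generation is roughly TFR/2 the size of the previous one (two parents per child). A quick sketch, deliberately ignoring migration and mortality shifts:

```python
# Generation-over-generation shrinkage implied by a total fertility rate (TFR).
# Each generation is roughly TFR / 2 the size of the one before it
# (two parents per child; migration and mortality changes are ignored).
def generation_size(start: float, tfr: float, generations: int) -> float:
    return start * (tfr / 2) ** generations

# Korea's quoted TFR of 0.78: 100 people shrink to ~15 by the grandchildren.
print(round(generation_size(100, 0.78, 2), 1))
# China's quoted ~1.0: each generation halves.
print(round(generation_size(100, 1.0, 2), 1))  # 25.0
```

At replacement rate (TFR of 2.0) the population holds steady; at 0.78, two generations cut it to roughly a seventh of its starting size.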
I happen to be broadly in favor of immigration because I think immigration helps us solve that problem. But as a global mechanism, we have to address it. In any case, from my perspective, you're going to have these issues, but America is organized around the concept of American exceptionalism. And as long as we understand that the way we make progress is that we invest in the right people and the right businesses, we have a strong capital market, and we invest in the infrastructure that they need, we'll be fine. That is my actual opinion. >> Can we go back to AI for a second? Eric, I think you can help us get to, let's call it, a bipartisan understanding of these issues. I think you think really clearly about this. In the wake of ChatGPT launching at the end of 2022, I think the discourse in 2023 and 2024 was really dominated by this idea of AGI, and that AGI was imminent. And I think it created an almost panicky atmosphere in Washington among policymakers, and you saw things like "we've got to restrict open source, because then China will get it." And this was before DeepSeek launched, and then we saw that actually they're ahead of us on open source. But it feels like there's been a bit of a pullback from the AGI narrative, which I think is actually a good thing. I think it's more conducive to calm, rational policymaking. What's your perception of AGI right now? Where are we on that whole train? >> So, first of all, the speech that the president delivered about a month ago about AI strategy, which I think you probably wouldn't say it, but you kind of wrote for him, was exactly right. So thank you. >> David collaborated with an amazing leader who we all respect and admire so much, Eric. >> Yes. Nevertheless... >> Saying I wrote it was way too strong. >> If you didn't write it, then it must have been your twin.
But in any case, you got the emphasis right, which was that investment in research, investment in the kind of stuff that we do, is really, really important. I don't agree with you on this AGI thing, because there's this group, which I call the San Francisco narrative because they all live in San Francisco, and their narrative goes something like this. Today we're doing agents; the agentic revolution will change businesses, which I agree with. Then the systems will become recursively self-improving, with recursive self-improvement, as it's called. If you have a scale-free problem, and a scale-free problem, for example, is programming or math, where you can just keep doing it, you get these enormous, fast gains if you buy enough hardware, do enough software, and so forth. That is still underway. The collective of that says that in the next three-ish years, they believe, we will get forms of superintelligence, and the way they define it is basically a savant: a chemist savant, a physics savant, a mathematician savant. I don't agree with the three years, but I do agree that it'll be maybe six or seven years. >> But if it's a savant in a particular area, is that general intelligence? >> It's not general intelligence yet. General intelligence is when it can set its own objective function. >> Right. >> And there's no evidence of that. There's no evidence right now of the ability to set your own objective function. The thinking, and I'm writing a paper on this, so I've been studying it, is that the technical problem is non-stationarity of mathematical proofs. What you're doing is you're trying to solve against an objective function, but the objective function keeps changing, which is how humans operate. Your goal changes every day, whereas computers have trouble with that. As a math problem, we don't have an algorithm yet for LLMs that can do that. People are working on it.
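A toy way to see the non-stationarity problem he is pointing at: an optimizer that works fine against a fixed objective never settles when the objective drifts each step. This is a purely illustrative sketch with made-up numbers, not a claim about how any LLM is trained:

```python
# Toy illustration of non-stationarity: gradient descent on f(x) = (x - t)^2.
# With a fixed target t, the error vanishes; when t drifts every step,
# the optimizer perpetually lags behind the moving goal.
def descend(x: float, target: float, lr: float = 0.25) -> float:
    return x - lr * 2 * (x - target)  # gradient of (x - target)^2 is 2(x - t)

x_fixed, x_moving = 0.0, 0.0
for k in range(50):
    x_fixed = descend(x_fixed, 1.0)        # stationary objective
    x_moving = descend(x_moving, 0.1 * k)  # objective drifts each step

print(abs(1.0 - x_fixed))         # essentially zero: solved
print(abs(0.1 * 49 - x_moving))   # a persistent nonzero lag remains
```

The stationary case converges to machine precision; the drifting case settles into a steady-state error that never closes, a cartoon of optimizing against a goal that, like a human's, changes every day.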
And the test will be: using the information available in 1902, can you derive the same thing that Einstein did with special relativity, followed by general relativity? We cannot do that today. And most people believe that the way this will be solved is through analogy. So the theory of great geniuses is that they understand one area extremely well, and they're so brilliant that the lady or man can then take their ideas and apply them to a completely different domain. If we can solve that problem, then I think it's over. Then we get to AGI, and then it's a whole different world. >> I think one of the reasons why it's hard to replace a human, and J-Cal and I debate this, is that humans are end to end. We can do the whole job. You have sort of a complete understanding. You can pivot very easily. AI, at least as we know it today, is not end to end. It has to be prompted. You get an answer. That answer has to be validated. Then you have to ask a new question, because it never gives you exactly what you want. You have to apply more context. You have to go through an iterative loop. Finally, you get to an answer that has business value. The way I put it is that AI is not end to end; it's middle to middle. Humans are end to end. And so, as a result, instead of AI replacing all of us, AI will be very synergistic with humans, because we can define the objective function. We do the prompting, and we work with it to iterate, and it does a lot of the work in the middle. That seems to me like a very optimistic, less doomerist take on it.
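The prompt-validate-iterate loop described above can be sketched as follows. Every function name here is a hypothetical placeholder standing in for a person or a model, not a real API:

```python
# Sketch of the "middle to middle" pattern: the human sets the objective and
# validates; the model does the work in the middle; the loop iterates until
# the answer has value. All callables are hypothetical stand-ins.
def run_loop(objective, ask_model, validate, refine, max_iters=5):
    prompt = objective
    for _ in range(max_iters):
        answer = ask_model(prompt)       # the "middle" work
        if validate(answer):             # human judges business value
            return answer
        prompt = refine(prompt, answer)  # human adds context, re-prompts
    return None

# Toy usage: the "model" doubles a number until the validator is satisfied.
result = run_loop(
    objective=1,
    ask_model=lambda p: p * 2,
    validate=lambda a: a >= 8,
    refine=lambda p, a: a,
)
print(result)  # 8, reached after three iterations
```

The human supplies both ends (the objective and the acceptance test), which is exactly the division of labor the "middle to middle" framing captures.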
What you just said is exactly what's going to happen for the next few years: each of us will have assistants which, on our command and our prompting, will be incredibly helpful for whatever problem we have, personal or otherwise. You have people who are using these things for relationship advice, for talking to their kids. I mean, it's all crazy stuff. But the fact of the matter is, that's it. To me, the real question is: when does it cross over to having its own volition, its own ability to seek information and solve new problems? That's a different animal. >> But have we seen any evidence of recursive self-improvement yet? >> Not yet. I've funded a number of startups which claim to be close to it, but of course these are startups and you never know, which tells me it's five to ten years. >> What do you think Google's doing on this front? >> Well, I'm not at Google anymore. Every release of Gemini is top of the leaderboard. So 2.5 just overcame everybody, and I'm sure there's another one coming. Demis is working really hard on this question of scientific discovery. So that is a path to getting to AGI. >> Eric, we appreciate the work you're doing. We appreciate you being here with us. We appreciate what you've done, the impact you've had on Silicon Valley and on society. >> I am so happy to be part of this. You created this incredible community, and there are all of these smart people who spend all their time listening to you. >> Very concerning. >> Wow. [Music] Thanks, Eric. Appreciate you. Cheers. All right.