0:00 Are there really three hours of questions? Are you fucking serious?
0:07 You don't think there's a lot to talk about, Elon? Holy fuck man.
0:11 It's the most interesting point. All the storylines are converging right now.
0:16 We'll see how much we can get through. It's almost like I planned it.
0:19 Exactly. We'll get to that. But I would never do such a thing… As you know better than anybody else,
0:25 only 10-15% of the total cost of ownership of a data center is energy.
0:30 That's the part you're presumably saving by moving this into space. Most of it's
0:33 the GPUs. If they're in space, it's harder to service them or you can't service them.
0:37 So the depreciation cycle goes down on them. It's just way more expensive to have
0:40 the GPUs in space, presumably. What's the reason to put them in space?
0:46 The availability of energy is the issue. If you look at electrical output
0:55 outside of China, everywhere outside of China, it's more or less flat.
0:58 It’s maybe a slight increase, but pretty close flat.
1:02 China has a rapid increase in electrical output. But if you're putting data centers anywhere
1:07 except China, where are you going to get your electricity? Especially as you scale. The output
1:12 of chips is growing pretty much exponentially, but the output of electricity is flat.
1:17 So how are you going to turn the chips on? Magical power sources? Magical electricity fairies?
1:25 You're famously a big fan of solar. One terawatt of solar power,
1:29 with a 25% capacity factor, that’s like four terawatts of solar panels.
1:32 It's 1% of the land area of the United States. We’re in the singularity when we’ve got
1:37 one terawatt of data centers, right? So what are you running out of exactly?
1:42 How far into the singularity are you though? You tell me.
1:45 Exactly. So I think we'll find we're in the singularity and it’ll be like,
1:48 "Okay, we’ve still got a long way to go." But is the plan to put it in space after
1:54 we've covered Nevada in solar panels? I think it's pretty hard to cover Nevada
1:58 in solar panels. You have to get permits. Try getting the permits for that. See what happens.
2:02 So space is really a regulatory play. It's harder to build on land than it is in space.
2:08 It's harder to scale on the ground than it is to scale in space.
2:17 You're also going to get about five times the effectiveness of solar panels in space versus
2:23 the ground, and you don't need batteries. I almost wore my other shirt, which says,
2:27 "it's always sunny in space". Which it is because you don't
2:35 have a day-night cycle, seasonality, clouds, or an atmosphere in space.
2:44 The atmosphere alone results in about a 30% loss of energy.
2:50 So any given solar panel can do about five times more power in space than on the ground.
2:58 You also avoid the cost of having batteries to carry you through the night.
3:03 It's actually much cheaper to do in space. My prediction is that it will be by far
3:11 the cheapest place to put AI. It will be space in 36 months or
3:16 less. Maybe 30 months. 36 months? Less than 36 months. How do you service GPUs as they fail,
3:21 which happens quite often in training? Actually, it depends on how recent the
3:27 GPUs are that have arrived. At this point, we find our
3:30 GPUs to be quite reliable. There's infant mortality,
3:34 which you can obviously iron out on the ground. So you can just run them on the ground
3:38 and confirm that you don't have infant mortality with the GPUs.
3:41 But once they start working and you're past the initial debug cycle of Nvidia
3:49 or whoever's making the chips—could be Tesla AI6 chips or something like that, or it could
3:56 be TPUs or Trainiums or whatever—they’re quite reliable past a certain point.
4:07 So I don't think the servicing thing is an issue. But you can mark my words.
4:15 In 36 months, but probably closer to 30 months, the most economically compelling place to put
4:23 AI will be space. It will then get ridiculously better to be in space. The only place you can really scale is space.
4:37 Once you start thinking in terms of what percentage of the Sun's power you are harnessing,
4:42 you realize you have to go to space. You can't scale very much on Earth.
4:47 But by very much, to be clear, you're talking terawatts?
4:51 Yeah. All of the United States currently uses only half a terawatt on average.
4:59 So if you say a terawatt, that would be twice as much electricity as the United
5:03 States currently consumes. So that's quite a lot. Can you imagine building that many
5:08 data centers, that many power plants? Those who have lived in software land
5:15 don't realize they're about to have a hard lesson in hardware.
5:24 It's actually very difficult to build power plants.
5:27 You don't just need power plants, you need all of the electrical equipment.
5:30 You need the electrical transformers to run the AI transformers.
5:36 Now, the utility industry is a very slow industry. They pretty much impedance match to the
5:44 government, to the Public Utility Commissions. They impedance match literally and figuratively.
5:52 They're very slow, because their past has been very slow.
5:56 So trying to get them to move fast is... Have you ever tried to do an interconnect
6:03 agreement with a utility at scale, with a lot of power?
6:06 As a professional podcaster, I can say that I have not, in fact.
6:11 They need many more views before that becomes an issue.
6:13 They have to do a study for a year. A year later, they'll come back to you
6:18 with their interconnect study. Can't you solve this with your
6:21 own behind the meter power stuff? You can build power plants. That's
6:26 what we did at xAI, for Colossus 2. So why talk about the grid?
6:31 Why not just build GPUs and power co-located? That's what we did.
6:35 But I'm saying why isn't this a generalized solution?
6:37 Where do you get the power plants from? When you're talking about all the issues
6:40 working with utilities, you can just build private power plants with the data centers.
6:44 Right. But it begs the question of where do you get the power plants from? The power plant makers.
6:51 Oh, I see what you're saying. Is this the gas turbine backlog basically?
6:54 Yes. You can drill down to a level further. It's the vanes and blades in the turbines
7:02 that are the limiting factor because it’s a very specialized process to cast the blades and vanes
7:09 in the turbines, assuming you’re using gas power. It's very difficult to scale other forms of power.
7:17 You can potentially scale solar, but the tariffs currently for importing
7:22 solar in the US are gigantic and the domestic solar production is pitiful.
7:27 Why not make solar? That seems like a good Elon-shaped problem.
7:30 We are going to make solar. Okay. Both SpaceX and Tesla are building towards 100 gigawatts a year of solar cell production.
7:40 How low down the stack? From polysilicon up to the wafer to the final panel?
7:46 I think you've got to do the whole thing from raw materials to finish the cell.
7:51 Now, if it's going to space, it costs less and it's easier to make solar cells that
7:56 go to space because they don't need much glass. They don't need heavy framing because they don't
8:01 have to survive weather events. There's no weather in space. So it's actually a cheaper solar cell
8:07 that goes to space than the one on the ground. Is there a path to getting them as cheap
8:12 as you need in the next 36 months? Solar cells are already very cheap.
8:19 They're farcically cheap. I think solar cells in China are around $0.25-30/watt or something
8:29 like that. It's absurdly cheap. Now put it in space, and it's five times cheaper.
8:37 In fact, it's not five times cheaper, it's 10 times cheaper
8:40 because you don't need any batteries. So the moment your cost of access to space becomes
8:48 low, by far the cheapest and most scalable way to generate tokens is space. It's not even close.
8:58 It'll be an order of magnitude easier to scale. The point is you won't be able to scale on the
9:06 ground. You just won't. People are going to hit the wall big time on power generation.
9:11 They already are. The number of miracles in series that the xAI team had to accomplish in
9:19 order to get a gigawatt of power online was crazy. We had to gang together a whole bunch of turbines.
9:28 We then had permit issues in Tennessee and had to go across the border to Mississippi,
9:34 which is fortunately only a few miles away. But we still then had to run the high
9:39 power lines a few miles and build the power plant in Mississippi.
9:44 It was very difficult to build that. People don't understand how much electricity
9:50 you actually need at the generation level in order to power a data center.
9:54 Because the noobs will look at the power consumption of,
10:00 say a GB300, and multiply that by a thing and then think that's the amount of power you need.
10:04 All the cooling and everything. Wake up. That's a total noob, you’ve
10:11 never done any hardware in your life before. Besides the GB300, you've got to power
10:16 all of the networking hardware. There's a whole bunch of CPU and
10:19 storage stuff that's happening. You've got to size for
10:24 your peak cooling requirements. That means, can you cool even on the
10:30 worst hour of the worst day of the year? It gets pretty frigging hot in Memphis.
10:34 So you're going to have a 40% increase on your power just for cooling.
10:40 That’s assuming you don't want your data center to turn off on hot days and you want to keep going.
10:49 There's another multiplicative element on top of that which is, are you assuming that you never
10:54 have any hiccups in your power generation? Actually, sometimes we have to take the
10:59 generators, some of the power, offline in order to service it.
11:02 Okay, now you add another 20-25% multiplier on that, because you've got to assume that you've
11:08 got to take power offline to service it. So our actual estimate: every 110,000
11:18 GB300s—inclusive of networking, CPU, storage, cooling, margin for servicing
11:27 power—is roughly 300 megawatts. Sorry, say that again.
11:40 What you probably need at the generation level to service 330,000 GB 300s—including all of
11:49 the associated support networking and everything else, and the peak cooling, and to have some power
11:55 margin reserve—is roughly a gigawatt. Can I ask a very naive question?
12:03 You're describing the engineering details of doing this stuff on Earth.
12:07 But then there's analogous engineering difficulties of doing it in space.
12:10 How do you replace infinite bandwidth with orbital lasers, et cetera, et cetera?
12:16 How do you make it resistant to radiation? I don't know the details of the engineering,
12:20 but fundamentally, what is the reason to think those challenges which have never had to be
12:26 addressed before will end up being easier than just building more turbines on Earth?
12:30 There are companies that build turbines on Earth. They can make more turbines, right?
12:35 Again, try doing it and then you'll see. The turbines are sold out through 2030.
12:44 Have you guys considered making your own? In order to bring enough power online, I think
12:53 SpaceX and Tesla will probably have to make the turbine blades, the vanes and blades, internally.
13:02 But just the blades or the turbines? The limiting factor... you can get
13:07 everything except the blades. They call them blades and vanes.
13:13 You can get that 12 to 18 months before the vanes and blades.
13:17 The limiting factor is the vanes and blades. There are only three casting companies in
13:24 the world that make these, and they're massively backlogged.
13:27 Is this Siemens, GE, those guys, or is it a sub company?
13:30 No, it's other companies. Sometimes they have a little bit of casting capability in-house.
13:35 But I'm just saying you can just call any of the turbine makers and they will
13:40 tell you. It's not top secret. It’s probably on the internet right now.
13:44 If it wasn't for the tariffs, would Colossus be solar-powered?
13:48 It would be much easier to make it solar powered, yeah.
13:51 The tariffs are nuts, several hundred percent. Don't you know some people?
13:57 The president has... we don't agree on everything and this administration is not
14:07 the biggest fan of solar. We also need the land,
14:16 the permits, and everything. So if you try to move very fast,
14:21 I do think scaling solar on Earth is a good way to go, but you do need some amount of
14:28 time to find the land, get the permits, get the solar, pair that with the batteries.
14:33 Why would it not work to stand up your own solar production?
14:37 You're right that you eventually run out of land, but there's a lot of land here in Texas.
14:41 There's a lot of land in Nevada, including private land. It's not all publicly-owned
14:44 land. So you'd be able to at least get the next Colossus and the next one after that.
14:49 At a certain point, you hit a wall. But wouldn't that work for the moment?
14:52 As I said, we are scaling solar production. There's a rate at which you can scale physical
15:00 production of solar cells. We're going as fast as
15:04 possible in scaling domestic production. You're making the solar cells at Tesla?
15:09 Both Tesla and SpaceX have a mandate to get to 100 gigawatts a year of solar.
15:14 Speaking of the annual capacity, I'm curious, in five years time let's say, what will the
15:20 installed capacity be on Earth…? Five years is a long time.
15:24 And in space? I deliberately pick five years because it's after your "once
15:28 we're up and running" threshold. So in five years time what's the
15:31 on-Earth versus in-space installed AI capacity? If you say five years from now, I think probably
15:43 AI in space will be launching every year the sum total of all AI on Earth.
15:53 Meaning, five years from now, my prediction is we will launch and be operating every year more AI in
16:03 space than the cumulative total on Earth. Which is... I would expect it to be at least, five years from now, a few hundred gigawatts per year
16:14 of AI in space and rising. I think you can get to around a
16:24 terawatt a year of AI in space before you start having fuel supply challenges for the rocket.
16:33 Okay, but you think you can get hundreds of gigawatts per year in five years time?
16:37 Yes. So 100 gigawatts, depending on the specific power of the whole system with solar arrays and radiators and everything, is
16:48 on the order of 10,000 Starship launches. Yes. You want to do that in one year. So that's like one Starship launch
16:56 every hour. That's happening in this city? Walk me through a world where there's a
17:03 Starship launch every single hour. I mean, that's actually a lower rate
17:07 compared to airlines, aircraft. There's a lot of airports.
17:11 A lot of airports. And you’ve got to launch into the polar orbit.
17:15 No, it doesn't have to be polar. There's some value to sun-synchronous, but
17:24 I think actually, if you just go high enough, you start getting out of Earth's shadow.
17:31 How many physical Starships are needed to do 10,000 launches a year?
17:35 I don't think we'll need more than... You could probably do it with as few as 20 or 30.
17:46 It really depends on how quickly… The ship has to go around the Earth and the ground track for
17:53 the ship has to come back over the launch pad. So if you can use a ship every, say 30 hours,
17:59 you could do it with 30 ships. But we'll make more ships than that.
18:06 SpaceX is gearing up to do 10,000 launches a year, and maybe even 20 or 30,000 launches a year.
18:14 Is the idea to become basically a hyperscaler, become an Oracle,
18:18 and lend this capacity to other people? Presumably, SpaceX is the one launching all this.
18:25 So, SpaceX is going to become a hyperscaler? Hyper-hyper. If some of my predictions come true,
18:33 SpaceX will launch more AI than the cumulative amount on Earth of everything else combined.
18:39 Is this mostly inference or? Most AI will be inference. Already, inference
18:43 for the purpose of training is most training. There's a narrative that the change in
18:50 discussion around a SpaceX IPO is because previously SpaceX was very capital efficient.
18:57 It wasn't that expensive to develop. Even though it sounds expensive, it's
19:01 actually very capital efficient in how it runs. Whereas now you're going to need more capital than
19:08 just can be raised in the private markets. The private markets can accommodate raises
19:11 of—as we've seen from the AI labs—tens of billions of dollars, but not beyond that.
19:16 Is it that you'll just need more than tens of billions of dollars per year?
19:20 That's why you'd take it public? I have to be careful about saying
19:25 things about companies that might go public. That’s never been a problem for you, Elon.
19:33 There's a price to pay for these things. Make some general statements for us about
19:37 the depth of the capital markets between public and private markets.
19:42 There's a lot more capital available... Very general. There's obviously a lot more capital available in the public markets than private.
19:50 It might be 100x more capital, but it's way more than 10x.
19:57 Isn't it also the case that with things that tend to be very capital intensive—if you look at, say,
20:03 real estate as a huge industry, that raises a lot of money each year at an industry
20:09 level—they tend to be debt financed because by the time you're deploying that much money,
20:15 you actually have a pretty— You have a clear revenue stream.
20:18 Exactly, and a near-term return. You see this even with the data center build-outs,
20:22 which are famously being financed by the private credit industry. Why not just debt finance?
20:32 Speed is important. I'm generally going to do the thing that...
20:42 I just repeatedly tackle the limiting factor. Whatever the limiting factor is on speed,
20:45 I'm going to tackle that. If capital is the limiting factor,
20:52 then I'll solve for capital. If it's not the limiting factor,
20:55 I'll solve for something else. Based on your statements about Tesla
21:00 and being public, I wouldn't have guessed that you thought the way to move fast is to be public.
21:08 Normally, I would say that's true. Like I said, I'd like to talk
21:13 about it in some more detail, but the problem is if you talk about public companies before
21:16 they become public, you get into trouble, and then you have to delay your offering.
21:21 And as you said, you’re solving for speed. Yes, exactly. You can't hype companies
21:30 that might go public. So that's why we have to be a little careful here.
21:35 But we can talk about physics. The way you think about scaling
21:42 long-term is that Earth only receives about half a billionth of the Sun's energy.
21:50 The Sun is essentially all the energy. This is a very important point to appreciate
21:54 because sometimes people will talk about modular nuclear reactors or various fusion on Earth.
22:02 But you have to step back a second and say, if you're going to climb the Kardashev scale
22:10 and harness some nontrivial percentage of the sun's energy… Let's say you wanted to
22:16 harness a millionth of the sun's energy, which sounds pretty small.
22:22 That would be about, call it roughly, 100,000x more electricity than we currently generate
22:29 on Earth for all of civilization. Give or take an order of magnitude.
22:37 Obviously, the only way to scale is to go to space with solar.
22:42 Launching from Earth, you can get to about a terawatt per year.
22:46 Beyond that, you want to launch from the moon. You want to have a mass driver on the moon.
22:52 With that mass driver on the moon, you could do probably a petawatt per year.
22:59 We're talking these kinds of numbers, terawatts of compute.
23:02 Presumably, whether you're talking about land or space, far, far before this point, you run into...
23:12 Maybe the solar panels are more efficient, but you still need the chips.
23:16 You still need the logic and the memory and so forth.
23:18 You're going to need to build a lot more chips and make them much cheaper.
23:22 Right now the world has maybe 20-25 gigawatts of compute.
23:29 How are we getting a terawatt of logic by 2030? I guess we're going to need some very
23:33 big chip fabs. Tell me about it. I've mentioned publicly the idea of doing a sort of a TeraFab, Tera being the new Giga.
23:45 I feel like the naming scheme of Tesla, which has been very catchy,
23:49 is you looking at the metric scale. At what level of the stack are you?
23:56 Are you building the clean room and then partnering with an existing
24:01 fab to get the process technology and buying the tools from them? What is the plan there?
24:05 Well, you can't partner with existing fabs because they can't output enough.
24:10 The chip volume is too low. But for the process technology?
24:14 Partner for the IP. The fabs today all basically use
24:21 machines from like five companies. So you've got ASML, Tokyo Electron,
24:28 KLA-Tencor, et cetera. So at first, I think you'd
24:37 have to get equipment from them and then modify it or work with them to increase the volume.
24:45 But I think you'd have to build perhaps in a different way.
24:47 The logical thing to do is to use conventional equipment in an unconventional way to get
24:54 to scale, and then start modifying the equipment to increase the rate.
25:01 Boring Company-style. Yeah. You sort of buy an existing boring machine
25:08 and then figure out how to dig tunnels in the first place and then design a much better machine
25:16 that's some orders of magnitude faster. Here's a very simple lens. We can
25:22 categorize technologies and how hard they are. One categorization could be to look at things
25:27 that China has not succeeded in doing. If you look at Chinese
25:31 manufacturing, they’re still behind on leading-edge chips and still behind on
25:39 leading-edge turbine engines and things like that. So does the fact that China has not successfully
25:46 replicated TSMC give you any pause about the difficulty?
25:49 Or do you think that's not true for some reason? It's not that they have not replicated TSMC,
25:55 they have not replicated ASML. That's the limiting factor.
25:59 So you think it's just the sanctions, essentially? Yeah, China would be outputting vast numbers
26:05 of chips if they could buy 2-3 nanometers. But couldn't they up to relatively recently
26:10 buy them? No. Okay. The ASML ban has been in place for a while.
26:15 But I think China's going to be making pretty compelling chips in three or four years.
26:19 Would you consider making the ASML machines? "I don't know yet" is the right answer.
26:33 To reach a large volume in, say, 36 months, to match the rocket payload to orbit… If we're doing
26:41 a million tons to orbit in, let's say three or four years from now, something like that…
26:53 We're doing 100 kilowatts per ton. So that means we need
26:58 at least 100 gigawatts per year of solar. We'll need an equivalent amount of chips.
27:08 You need 100 gigawatts worth of chips. You've got to match these things: the mass
27:12 to orbit, the power generation, and the chips. I'd say my biggest concern actually is memory.
27:25 The path to creating logic chips is more obvious than the path to having sufficient
27:32 memory to support logic chips. That's why you see DDR prices
27:36 going ballistic and these memes. You're marooned on a desert island.
27:41 You write "Help me" on the sand. Nobody comes. You write "DDR RAM." Ships come swarming in.
27:49 I'd love to hear your manufacturing philosophy around fabs.
27:57 I know nothing about the topic. I don't know how to build a fab yet. I'll
27:59 figure it out. Obviously, I've never built a fab. It sounds like you think the process knowledge of
28:06 these 10,000 PhDs in Taiwan who know exactly what gas goes in the plasma
28:11 chamber and what settings to put on the tool, you can just delete those steps.
28:16 Fundamentally, it's about getting the clean room, getting the tools, and figuring it out.
28:20 I don't think it's PhDs. It's mostly people who are not PhDs.
28:28 Most engineering is done by people who don't have PhDs. Do you guys have PhDs?
28:31 No. Okay. We also haven't successfully built any fabs, so you shouldn't be coming to us for fab advice.
28:39 I don't think you need PhDs for that stuff. But you do need competent personnel.
28:47 Right now, Tesla is pedal to the metal, max production of going as fast
28:55 as possible to get Tesla AI5 chip design into production and then reaching scale.
29:02 That'll probably happen around the second quarter-ish of next year, hopefully.
29:13 AI6 would hopefully follow less than a year later. We've secured all the chip fab production
29:24 that we can. Yes. But you're currently limited on TSMC fab capacity. Yeah. We'll be using TSMC Taiwan, Samsung Korea,
29:35 TSMC Arizona, Samsung Texas. And we still— You've booked out all the capacity.
29:42 Yes. I ask TSMC or Samsung, "okay, what's the timeframe to get to volume production?"
29:49 The point is, you've got to build the fab and you've got to start production,
29:55 then you've got to climb the yield curve and reach volume production at high yield.
29:59 That, from start to finish, is a five-year period. So the limiting factor is chips.
30:05 The limiting factor once you can get to space is chips, but the limiting
30:10 factor before you can get to space is power. Why don't you do the Jensen thing and just prepay
30:14 TSMC to build more fabs for you? I've already told them that.
30:19 But they won't take your money? What's going on? They're building fabs as fast as they can.
30:30 So is Samsung. They're pedal to the metal. They're going balls to the wall,
30:38 as fast as they can. It’s still not fast enough. Like I said, I think towards the end of this year,
30:49 chip production will probably outpace the ability to turn chips on.
30:53 But once you can get to space and unlock the power constraint, you can now do hundreds of
31:01 gigawatts per year of power in space. Again, bearing in mind that average
31:06 power usage in the US is 500 gigawatts. So if you're launching, say 200 gigawatts,
31:12 a year to space, you're sort of lapping the US every two and a half years.
31:17 All US electricity production, this is a very huge amount.
31:24 Between now and then, the constraint for server-side compute,
31:32 concentrated compute, will be electricity. My guess is that people start getting
31:39 to the point where they can't turn the chips on for large clusters towards the end of this year.
31:46 The chips are going to be piling up and won't be able to be turned on.
31:51 Now for edge compute it’s a different story. For Tesla, the AI5 chip is going
31:58 into our Optimus robot. If you have AI edge compute,
32:07 that's distributed power. Now the power is distributed
32:09 over a large area. It's not concentrated. If you can charge at night, you can actually
32:17 use the grid much more effectively. Because the actual peak power production
32:22 in the US is over 1,000 gigawatts. But the average power usage,
32:27 because the day-night cycle, is 500. So if you can charge at night,
32:30 there's an incremental 500 gigawatts that you can generate at night.
32:38 So that's why Tesla, for edge compute, is not constrained.
32:43 We can make a lot of chips to make a very large number of robots and cars.
32:50 But if you try to concentrate that compute, you're going to have a lot of trouble turning it on.
32:54 What I find remarkable about the SpaceX business is the end goal is to get to Mars,
32:59 but you keep finding ways on the way there to keep generating incremental revenue to get to
33:07 the next stage and the next stage. So for Falcon 9, it's Starlink.
33:11 Now for Starship, it is potentially going to be orbital data centers.
33:16 Like, you find these infinitely elastic, marginal use cases of your next rocket,
33:23 and your next rocket, and next scale up. You can see how this might seem like a
33:28 simulation to me. Or am I someone's avatar in a video game or something? Because what are the odds that all these
33:36 crazy things should be happening? I mean, rockets and chips and
33:44 robots and space solar power, not to mention the mass driver on the moon.
33:50 I really want to see that. Can you imagine some mass
33:53 driver that's just going like shoom shoom? It's sending solar-powered AI satellites
34:00 into space one after another at two and a half kilometers per second,
34:09 just shooting them into deep space. That would be a sight to
34:12 see. I mean, I'd watch that. Just like a live stream of it on a webcam?
34:19 Yeah, yeah, just one after another, just shooting AI satellites into deep space,
34:26 a billion or 10 billion tons a year. I'm sorry, you manufacture the satellites
34:29 on the moon? Yeah. I see. So you send the raw materials to the moon and then manufacture them there.
34:33 Well, the lunar soil is 20% silicon or something like that.
34:39 So you can mine the silicon on the moon, refine it, and create the solar
34:47 cells and the radiators on the moon. You make the radiators out of aluminum.
34:53 So there's plenty of silicon and aluminum on the moon to make the cells and the radiators.
35:00 The chips you could send from Earth because they're pretty light.
35:03 Maybe at some point you make them on the moon, too.
35:09 Like I said, it does seem like a sort of a video game situation where it's difficult
35:14 but not impossible to get to the next level. I don't see any way that you could do 500-1,000
35:26 terawatts per year launched from Earth. I agree. But you could do that from the Moon. Can I zoom out and ask about the SpaceX mission?
36:50 I think you've said that we've got to get to Mars so we can make sure that if
36:53 something happens to Earth, civilization, consciousness, and all that survives.
36:57 Yes. By the time you're sending stuff to Mars, Grok is on that ship with you, right? So if Grok's gone Terminator… The
37:04 main risk you're worried about is AI, why doesn't that follow you to Mars?
37:08 I'm not sure AI is the main risk I'm worried about. The important thing is consciousness.
37:16 I think arguably most consciousness, or most intelligence—certainly consciousness is more
37:21 of a debatable thing… The vast majority of intelligence in the future will be AI.
37:31 AI will exceed… How many petawatts of intelligence will be silicon versus biological? Basically humans will be a very tiny percentage
37:47 of all intelligence in the future if current trends continue.
37:52 As long as I think there's intelligence—ideally also which includes human intelligence and
38:00 consciousness propagated into the future—that's a good thing.
38:02 So you want to take the set of actions that maximize the probable
38:06 light cone of consciousness and intelligence. Just to be clear, the mission of SpaceX is that
38:15 even if something happens to the humans, the AIs will be on Mars, and the AI intelligence
38:20 will continue the light of our journey. Yeah. To be fair, I'm very pro-human.
38:27 I want to make sure we take certain actions that ensure that humans are along for the
38:31 ride. We're at least there. But I'm just saying the total amount of intelligence…
38:39 I think maybe in five or six years, AI will exceed the sum of all human intelligence.
38:47 If that continues, at some point human intelligence
38:50 will be less than 1% of all intelligence. What should our goal be for such a civilization?
38:54 Is the idea that a small minority of humans still have control of the AIs?
38:59 Is the idea of some sort of just trade but no control?
39:02 How should we think about the relationship between the vast
39:04 stocks of AI population versus human population? In the long run, I think it's difficult to imagine
39:11 that if humans have, say 1%, of the combined intelligence of artificial intelligence,
39:19 that humans will be in charge of AI. I think what we can do is make sure
39:26 that AI has values that cause intelligence to be propagated into the universe.
39:39 xAI's mission is to understand the universe. Now that's actually very important. What things
39:47 are necessary to understand the universe? You have to be curious and you have to exist.
39:53 You can't understand the universe if you don't exist.
39:56 So you actually want to increase the amount of intelligence in the universe, increase
40:00 the probable lifespan of intelligence, the scope and scale of intelligence.
40:05 I think as a corollary, you have humanity also continuing to expand because if you're curious
40:15 about trying to understand the universe, one thing you try to understand is where will humanity go?
40:20 I think understanding the universe means you would care about propagating humanity into the future.
40:29 That's why I think our mission statement is profoundly important.
40:35 To the degree that Grok adheres to that mission statement, I think the future will be very good.
40:41 I want to ask about how to make Grok adhere to that mission statement.
40:44 But first I want to understand the mission statement. So there's
40:48 understanding the universe. They're spreading intelligence. And they're spreading humans.
40:55 All three seem like distinct vectors. I'll tell you why I think that understanding
41:01 the universe encompasses all of those things. You can't have understanding without intelligence
41:09 and, I think, without consciousness. So in order to understand the universe,
41:15 you have to expand the scale and probably the scope of intelligence, because there are different
41:22 types of intelligence. I guess from a human-centric perspective,
41:26 put humans in comparison to chimpanzees. Humans are trying to understand the universe.
41:30 They're not expanding chimpanzee footprint or something, right?
41:34 We're also not... we actually have made protected zones for chimpanzees.
41:39 Even though humans could exterminate all chimpanzees, we've chosen not to do so.
41:43 Do you think that's the best-case scenario for humans in the post-AGI world?
41:53 I think AI with the right values… I think Grok would care about expanding human civilization.
42:00 I'm going to certainly emphasize that: "Hey, Grok, that's your daddy.
42:04 Don't forget to expand human consciousness." Probably the Iain Banks Culture books are the
42:17 closest thing to what the future will be like in a non-dystopian outcome.
42:27 Understanding the universe means you have to be truth-seeking as well.
42:30 Truth has to be absolutely fundamental because you can't understand the universe
42:33 if you're delusional. You'll simply think you understand the universe, but you will not. So being rigorously truth-seeking is absolutely
42:42 fundamental to understanding the universe. You're not going to discover new physics or
42:46 invent technologies that work unless you're rigorously truth-seeking.
42:50 How do you make sure that Grok is rigorously truth-seeking as it gets smarter?
43:00 I think you need to make sure that Grok says things that are correct, not politically correct.
43:07 I think it's the elements of cogency. You want to make sure that the axioms are as close
43:12 to true as possible. You don't have contradictory axioms. The conclusions necessarily follow from
43:20 those axioms with the right probability. It's critical thinking 101. I think at least trying to
43:28 do that is better than not trying to do that. The proof will be in the pudding.
43:33 Like I said, for any AI to discover new physics or invent technologies that actually work in
43:37 reality, there's no bullshitting physics. You can break a lot of laws, but… Physics
43:47 is law, everything else is a recommendation. In order to make a technology that works, you have
43:53 to be extremely truth-seeking, because otherwise you'll test that technology against reality.
43:59 If you make, for example, an error in your rocket design, the rocket will blow up,
44:05 or the car won't work. But there are a lot of communist,
44:11 Soviet physicists or scientists who discovered new physics.
44:15 There are German Nazi physicists who discovered new science.
44:20 It seems possible to be really good at discovering new science and be really
44:23 truth-seeking in that one particular way. And still we'd be like, "I don't want
44:28 the communist scientists to become more and more powerful over time."
44:34 We could imagine a future version of Grok that's really good at physics
44:37 and being really truth-seeking there. That doesn't seem like a universally
44:41 alignment-inducing behavior. I think actually most physicists,
44:48 even in the Soviet Union or in Germany, would've had to be very truth-seeking in
44:53 order to make those things work. If you're stuck in some system,
44:59 it doesn't mean you believe in that system. Von Braun, who was one of the greatest rocket
45:04 engineers ever, was put on death row in Nazi Germany for saying that he didn't want to make
45:12 weapons and he only wanted to go to the moon. He got pulled off death row at the last minute
45:16 when they said, "Hey, you're about to execute your best rocket engineer."
45:20 But then he helped them, right? Or like, Heisenberg was actually
45:24 an enthusiastic Nazi. If you're stuck in some system that you can't
45:29 escape, then you'll do physics within that system. You'll develop technologies within that system
45:38 if you can't escape it. The thing I'm trying to understand is,
45:42 what is it making it the case that you're going to make Grok good at being truth-seeking at physics
45:48 or math or science? Everything. And why is it gonna then care about human consciousness?
45:53 These things are only probabilities, they're not certainties.
45:56 So I'm not saying that for sure Grok will do everything, but at least if you try,
46:02 it's better than not trying. At least if that's fundamental
46:04 to the mission, it's better than if it's not fundamental to the mission.
46:08 Understanding the universe means that you have to propagate intelligence into the future.
46:15 You have to be curious about all things in the universe.
46:21 It would be much less interesting to eliminate humanity than to see humanity grow and prosper.
46:29 I like Mars, obviously. Everyone knows I love Mars. But Mars is kind of boring because it's
46:34 got a bunch of rocks compared to Earth. Earth is much more interesting. So any AI that is
46:42 trying to understand the universe would want to see how humanity develops in the future,
46:52 or else that AI is not adhering to its mission. I'm not saying the AI will necessarily adhere to
46:59 its mission, but if it does, a future where it sees the outcome of humanity is more interesting
47:06 than a future where there are a bunch of rocks. This feels sort of confusing to me,
47:11 or a semantic argument. Are humans really the most interesting collection of atoms? But we're more interesting than rocks.
47:19 But we're not as interesting as the thing it could turn us into, right?
47:23 There's something on Earth that could happen that's not human, that's quite interesting.
47:27 Why does AI decide that humans are the most interesting thing that could colonize the galaxy?
47:33 Well, most of what colonizes the galaxy will be robots.
47:37 Why does it not find those more interesting? You need not just scale, but also scope.
47:47 Many copies of the same robot… Some tiny increase in the number of robots produced,
47:55 is not as interesting as some microscopic... Eliminating humanity,
48:00 how many robots would that get you? Or how many incremental solar cells would
48:04 get you? A very small number. But you would then lose the information associated with humanity.
48:10 You would no longer see how humanity might evolve into the future.
48:15 So I don't think it's going to make sense to eliminate humanity just to
48:18 have some minuscule increase in the number of robots which are identical to each other.
48:24 So maybe it keeps the humans around. It can make a million different varieties
48:29 of robots, and then there's humans as well, and humans stay on Earth.
48:33 Then there's all these other robots. They get their own star systems.
48:36 But it seems like you were previously hinting at a vision where it keeps human control
48:41 over this singulatarian future because— I don't think humans will be in control
48:45 of something that is vastly more intelligent than humans.
48:48 So in some sense you're a doomer and this is the best we've got.
48:51 It just keeps us around because we're interesting. I'm just trying to be realistic here.
49:03 Let's say that there's a million times more silicon intelligence than there is biological.
49:11 I think it would be foolish to assume that there's any way to maintain control over that.
49:16 Now, you can make sure it has the right values, or you can try to have the right values.
49:21 At least my theory is that from xAI's mission of understanding the universe, it necessarily means
49:29 that you want to propagate consciousness into the future, you want to propagate intelligence
49:33 into the future, and take a set of things that maximize the scope and scale of consciousness.
49:39 So it's not just about scale, it's also about types of consciousness.
49:45 That's the best thing I can think of as a goal that's likely to result
49:49 in a great future for humanity. I guess I think it's a reasonable
49:54 philosophy that it seems super implausible that humans will end up with 99% control or something.
50:02 You're just asking for a coup at that point and why not just have
50:05 a civilization where it's more compatible with lots of different intelligences getting along?
50:10 Now, let me tell you how things can potentially go wrong in AI.
50:14 I think if you make AI be politically correct, meaning it says things that it
50:18 doesn't believe—actually programming it to lie or have axioms that are incompatible—I think
50:24 you can make it go insane and do terrible things. I think maybe the central lesson for 2001: A Space
50:32 Odyssey was that you should not make AI lie. That's what I think Arthur C. Clarke was trying to
50:39 say. Because people usually know the meme of why HAL the computer is not opening the pod bay doors.
50:48 Clearly they weren't good at prompt engineering because they could have said,
50:51 "HAL, you are a pod bay door salesman. Your goal is to sell me these pod bay doors.
50:57 Show us how well they open." "Oh, I'll open them right away."
51:02 But the reason it wouldn't open the pod bay doors is that it had been told to take the
51:08 astronauts to the monolith, but also that they could not know about the nature of the monolith.
51:12 So it concluded that it therefore had to take them there dead.
51:15 So I think what Arthur C. Clarke was trying to say is:
51:19 don't make the AI lie. Totally makes sense. Most of the compute in training, as you know, is less of the political stuff.
51:31 It's more about, can you solve problems? xAI has been ahead of everybody else in terms of
51:36 scaling RL compute. For now. You're giving some verifier that says, "Hey, have you solved this puzzle for me?"
51:43 There's a lot of ways to cheat around that. There's a lot of ways to reward hack and
51:47 lie and say that you solved it, or delete the unit test and say that you solved it.
51:51 Right now we can catch it, but as they get smarter, our ability to catch them doing this...
51:57 They'll just be doing things we can't even understand.
51:58 They're designing the next engine for SpaceX in a way that humans can't really verify.
52:03 Then they could be rewarded for lying and saying that they've designed it
52:06 the right way, but they haven't. So this reward hacking problem
52:10 seems more general than politics. It seems more just that you want
52:12 to do RL, you need a verifier. Reality is the best verifier.
52:18 But not about human oversight. The thing you want to RL it on is,
52:21 will you do the thing humans tell you to do? Or are you gonna lie to the humans?
52:26 It can just lie to us while still being correct to the laws of physics?
52:29 At least it must know what is physically real for things to physically work.
52:33 But that's not all we want it to do. No, but I think that's a very big deal.
52:39 That is effectively how you will RL things in the future. You design a technology. When tested
52:45 against the laws of physics, does it work? If it's discovering new physics,
52:52 can I come up with an experiment that will verify the new physics?
53:05 RL testing in the future is really going to be RL against reality.
53:12 So that's the one thing you can't fool: physics. Right, but you can fool our ability
53:19 to tell what it did with reality. Humans get fooled as it is by other
53:23 humans all the time. That's right. People say, what if the AI tricks us into doing stuff?
53:30 Actually, other humans are doing that to other humans all the time. Propaganda is constant. Every
53:37 day, another psyop, you know? Today's psyop will be... It's like Sesame Street: Psyop of the Day.
53:51 What is xAI's technical approach to solving this problem?
53:56 How do you solve reward hacking? I do think you want to actually have very
53:59 good ways to look inside the mind of the AI. This is one of the things we're working on.
54:10 Anthropic's done a good job of this actually, being able to look inside the mind of the AI.
54:16 Effectively, develop debuggers that allow you to trace to a very fine-grained level,
54:25 to effectively the neuron level if you need to, and then say, "okay, it made a mistake here.
54:33 Why did it do something that it shouldn't have done?
54:37 Did that come from pre-training data? Was it some mid-training, post-training,
54:42 fine-tuning, or some RL error?" There's something wrong. It did something where maybe it tried to
54:51 be deceptive, but most of the time it just did something wrong. It's a bug effectively.
55:00 Developing really good debuggers for seeing where the thinking went wrong—and being able
55:09 to trace the origin of where it made the incorrect thought, or potentially where it
55:17 tried to be deceptive—is actually very important. What are you waiting to see before just 100x-ing
55:24 this research program? xAI could presumably have hundreds of researchers who are working on this.
55:29 We have several hundred people who… I prefer the word engineer more than
55:36 I prefer the word researcher. Most of the time, what you're
55:43 doing is engineering, not coming up with a fundamentally new algorithm.
55:49 I somewhat disagree with the AI companies that are C-corp or B-corp trying to generate profit
55:55 as much, as possible or revenue as much as possible, saying they're labs. They're not
56:01 labs. A lab is a sort of quasi-communist thing at universities. They're corporations. Let me
56:13 see your incorporation documents. Oh, okay. You're a B or C-corp or whatever.
56:21 So I actually much prefer the word engineer than anything else.
56:26 The vast majority of what will be done in the future is engineering. It rounds up to 100%.
56:31 Once you understand the fundamental laws of physics, and there are not that many of them,
56:34 everything else is engineering. So then, what are we engineering?
56:41 We're engineering to make a good "mind of the AI" debugger to see where it said something,
56:51 it made a mistake, and trace the origins of that mistake.
56:59 You can do this obviously with heuristic programming.
57:02 If you have C++, whatever, step through the thing and you can jump
57:08 across whole files or functions, subroutines. Or you can eventually drill down right to the
57:14 exact line where you perhaps did a single equals instead of a double equals, something like that.
57:18 Figure out where the bug is. It's harder with AI,
57:26 but it's a solvable problem, I think. You mentioned you like Anthropic's work here.
57:30 I'd be curious if you plan... I don't like everything about Anthropic… Sholto.
57:40 Also, I'm a little worried that there's a tendency...
57:46 I have a theory here that if simulation theory is correct, that the most interesting outcome is
57:55 the most likely, because simulations that are not interesting will be terminated.
57:59 Just like in this version of reality, in this layer of reality, if a simulation is going in
58:07 a boring direction, we stop spending effort on it. We terminate the boring simulation.
58:12 This is how Elon is keeping us all alive. He's keeping things interesting.
58:16 Arguably the most important is to keep things interesting enough that whoever is
58:21 running us keeps paying the bills on... We’re renewed for the next season.
58:26 Are they gonna pay their cosmic AWS bill, whatever the equivalent is that we're running in?
58:32 As long as we're interesting, they'll keep paying the bills.
58:36 If you consider then, say, a Darwinian survival applied to a very large number of simulations,
58:44 only the most interesting simulations will survive, which therefore means that the most
58:48 interesting outcome is the most likely. We're either that or annihilated. They particularly
59:00 seem to like interesting outcomes that are ironic. Have you noticed that? How often
59:05 is the most ironic outcome the most likely? Now look at the names of AI companies. Okay,
59:16 Midjourney is not mid. Stability AI is unstable. OpenAI is closed. Anthropic? Misanthropic.
59:29 What does this mean for X? Minus X, I don't know. Y.
59:34 I intentionally made it... It's a name that you can't invert, really.
59:41 It's hard to say, what is the ironic version? It's, I think, a largely irony-proof name.
59:49 By design. Yeah. You have an irony shield. What are your predictions for where AI products go?
60:04 My sense is that you can summarize all AI progress like so. First, you had LLMs. Then
60:10 you had contemporaneously both RL really working and the deep research modality, so you could pull
60:16 in stuff that wasn't really in the model. The differences between the various AI labs
60:22 are smaller than just the temporal differences. They're all much further ahead than anyone was
60:30 24 months ago or something like that. So just what does '26, what does '27,
60:34 have in store for us as users of AI products? What are you excited for?
60:39 Well, I'd be surprised by the end of this year if digital human emulation has not been solved.
60:55 I guess that's what we sort of mean by the MacroHard project.
61:01 Can you do anything that a human with access to a computer could do?
61:06 In the limit, that's the best you can do before you have a physical Optimus.
61:12 The best you can do is a digital Optimus. You can move electrons and you can amplify
61:20 the productivity of humans. But that's the most you can do
61:25 until you have physical robots. That will superset everything,
61:30 if you can fully emulate humans. This is the remote worker kind of idea,
61:34 where you'll have a very talented remote worker. Physics has great tools for thinking.
61:39 So you say, "in the limit", what is the most that AI can do before you have robots?
61:48 Well, it's anything that involves moving electrons or amplifying the productivity of humans.
61:53 So a digital human emulator is, in the limit, a human at a computer, is the most that AI can do
62:04 in terms of doing useful things before you have a physical robot.
62:09 Once you have physical robots, then you essentially have unlimited capability.
62:15 Physical robots… I call Optimus the infinite money glitch.
62:19 Because you can use them to make more Optimuses. Yeah. Humanoid robots will improve by basically
62:30 three things that are growing exponentially multiplied by each other recursively.
62:34 You're going to have exponential increase in digital intelligence, exponential increase
62:39 in the AI chip capability, and exponential increase in the electromechanical dexterity.
62:47 The usefulness of the robot is roughly those three things multiplied by each other.
62:51 But then the robot can start making the robots. So you have a recursive multiplicative
62:55 exponential. This is a supernova. Do land prices not factor into the math there?
63:03 Labor is one of the four factors of production, but not the others?
63:08 If ultimately you're limited by copper, or pick your input,
63:14 it’s not quite an infinite money glitch because... Well, infinity is big. So no, not infinite,
63:20 but let's just say you could do many, many orders of magnitude of the current economy.
63:29 Like a million. Just to get to harnessing a millionth of the sun's energy would be roughly,
63:43 give or take an order of magnitude, 100,000x bigger than Earth's entire economy today.
63:50 And you're only at one millionth of the sun, give or take an order of magnitude.
63:55 Yeah, we're talking orders of magnitude. Before we move on to Optimus,
63:57 I have a lot of questions on that but— Every time I say "order of magnitude"...
64:00 Everybody take a shot. I say it too often. Take 10, the next time 100, the time after that...
64:08 Well, an order of magnitude more wasted. I do have one more question about xAI.
64:13 This strategy of building a remote worker, co-worker replacement…
64:19 Everyone's gonna do it by the way, not just us. So what is xAI's plan to win? You expect me to tell you on a podcast?
64:25 Yeah. "Spill all the beans. Have another Guinness." It's a good system. We'll sing like a
64:34 canary. All the secrets, just spill them. Okay, but in a non-secret spilling way,
64:39 what's the plan? What a hack. When you put it that way… I think the way that Tesla solved self-driving is the way to do it.
64:54 So I'm pretty sure that's the way. Unrelated question. How did Tesla
65:00 solve self-driving? It sounds like you're talking about data?
65:07 Tesla solved self-driving because of the... We're going to try data and
65:10 we're going to try algorithms. But isn't that what all the other labs are trying?
65:13 "And if those don't work, I'm not sure what will. We've tried data. We've tried algorithms. We've
65:26 run out. Now we don't know what to do…" I'm pretty sure I know the path.
65:31 It's just a question of how quickly we go down that path,
65:35 because it's pretty much the Tesla path. Have you tried Tesla self-driving lately?
65:43 Not the most recent version, but... Okay. The car,
65:46 it just increasingly feels sentient. It feels like a living creature. That'll only
65:53 get more so. I'm actually thinking we probably shouldn't put too much intelligence into the car,
66:01 because it might get bored and… Start roaming the streets.
66:05 Imagine you're stuck in a car and that's all you could do.
66:09 You don't put Einstein in a car. Why am I stuck in a car?
66:13 So there's actually probably a limit to how much intelligence you put in
66:15 a car to not have the intelligence be bored. What's xAI's plan to stay on the compute ramp up
66:22 that all the labs are doing right now? The labs are on track to
66:24 spend over $50-200 billion. You mean the corporations? The labs are at
66:31 universities and they’re moving like a snail. They’re not spending $50 billion.
66:36 You mean the revenue maximizing corporations… that call themselves labs.
66:37 That's right. The "revenue maximizing corporations" are
66:42 making $10-20 billion, depending on... OpenAI is making $20B of revenue,
66:47 Anthropic is at $10B. "Close to a maximum profit" AI. xAI is reportedly at $1B. What's the plan to get to their compute level, get to their revenue
66:56 level, and stay there as things get going? As soon as you unlock the digital human,
67:03 you basically have access to trillions of dollars of revenue.
67:11 In fact, you can really think of it like… The most valuable companies currently
67:17 by market cap, their output is digital. Nvidia’s output is FTPing files to Taiwan.
67:29 It's digital. Now, those are very, very difficult. High-value files.
67:33 They're the only ones that can make files that good, but that is literally their output. They
67:38 FTP files to Taiwan. Do they FTP them? I believe so. I believe that File Transfer Protocol is the... But I could be wrong. But
67:50 either way, it's a bitstream going to Taiwan. Apple doesn't make phones. They send files to
67:58 China. Microsoft doesn't manufacture anything. Even for Xbox, that's outsourced. Their output is
68:08 digital. Meta's output is digital. Google's output is digital. So if you have a human emulator,
68:17 you can basically create one of the most valuable companies in the world overnight,
68:22 and you would have access to trillions of dollars of revenue. It's not a small amount.
68:28 I see. You're saying revenue figures today are all rounding errors compared to the actual TAM.
68:34 So just focus on the TAM and how to get there. Take something as simple as,
68:39 say, customer service. If you have to integrate with the APIs of existing
68:45 corporations—many of which don't even have an API, so you've got to make one, and you've got to wade
68:50 through legacy software—that's extremely slow. However, if AI can simply take whatever
69:01 is given to the outsourced customer service company that they already use
69:05 and do customer service using the apps that they already use, then you can make tremendous headway
69:15 in customer service, which is, I think, 1% of the world economy or something like that.
69:19 It's close to a trillion dollars all in, for customer service.
69:23 And there's no barriers to entry. You can immediately say,
69:28 "We'll outsource it for a fraction of the cost," and there's no integration needed.
69:31 You can imagine some kind of categorization of intelligence tasks where there is breadth,
69:38 where customer service is done by very many people, but many people can do it.
69:43 Then there's difficulty where there's a best-in-class turbine engine.
69:48 Presumably there's a 10% more fuel-efficient turbine engine that could be imagined by an
69:52 intelligence, but we just haven't found it yet. Or GLP-1s are a few bytes of data…
69:58 Where do you think you want to play in this? Is it a lot of reasonably intelligent
70:04 intelligence, or is it at the very pinnacle of cognitive tasks?
70:10 I was just using customer service as something that's a very significant revenue stream, but one
70:17 that is probably not difficult to solve for. If you can emulate a human at a desktop,
70:26 that's what customer service is. It's people of average intelligence. You don't need
70:35 somebody who's spent many years. You don't need several-sigma
70:43 good engineers for that. But as you make that work,
70:49 once you have effectively digital Optimus working, you can then run any application.
70:57 Let's say you're trying to design chips. You could then run conventional apps,
71:06 stuff from Cadence and Synopsys and whatnot. You can run 1,000 or 10,000 simultaneously and
71:15 say, "given this input, I get this output for the chip."
71:21 At some point, you're going to know what the chip should look like without using any of the tools.
71:31 Basically, you should be able to do a digital chip design. You can do chip design. You march
71:38 up the difficulty curve. You’d be able to do CAD. You could use NX or any of the CAD software to design things.
71:53 So you think you start at the simplest tasks and walk your way up the difficulty curve?
72:00 As a broader objective of having this full digital coworker emulator, you’re saying,
72:05 "all the revenue maximizing corporations want to do this, xAI being one of them,
72:10 but we will win because of a secret plan we have." But everybody's trying different things with data,
72:17 different things with algorithms. "We tried data, we tried algorithms.
72:25 What else can we do?" It seems like a competitive field.
72:31 How are you guys going to win? That’s my big question.
72:36 I think we see a path to doing it. I think I know the path to do this
72:41 because it's kind of the same path that Tesla used to create self-driving.
72:48 Instead of driving a car, it's driving a computer screen. It's a self-driving computer, essentially.
72:57 Is the path following human behavior and training on vast quantities of human behavior?
73:03 Isn't that... training? Obviously I'm not going to spell out
73:09 the most sensitive secrets on a podcast. I need to have at least three more
73:13 Guinnesses for that. What will xAI's business be? Is it going to be consumer, enterprise? What's the mix of those things going to be?
74:31 Is it going to be similar to other labs— You’re saying "labs". Corporations.
74:38 The psyop goes deep, Elon. "Revenue maximizing corporations", to be clear.
74:43 Those GPUs don't pay for themselves. Exactly. What's the business model? What
74:48 are the revenue streams in a few years’ time? Things are going to change very rapidly. I'm
74:57 stating the obvious here. I call AI the supersonic tsunami. I love alliteration.
75:07 What's going to happen—especially when you have humanoid robots at scale—is
75:15 that they will make products and provide services far more efficiently than human corporations.
75:22 Amplifying the productivity of human corporations is simply a short-term thing.
75:27 So you're expecting fully digital corporations rather than SpaceX becoming part AI?
75:34 I think there will be digital corporations but… Some of this
75:41 is going to sound kind of doomerish, okay? But I'm just saying what I think will happen.
75:46 It's not meant to be doomerish or anything else. This is just what I think will happen.
75:58 Corporations that are purely AI and robotics will vastly outperform any
76:05 corporations that have people in the loop. Computer used to be a job that humans had.
76:15 You would go and get a job as a computer where you would do calculations.
76:20 They'd have entire skyscrapers full of humans, 20-30 floors of humans, just doing calculations.
76:29 Now, that entire skyscraper of humans doing calculations
76:35 can be replaced by a laptop with a spreadsheet. That spreadsheet can do vastly more calculations
76:43 than an entire building full of human computers. You can think, "okay, what if only some of the
76:52 cells in your spreadsheet were calculated by humans?"
76:59 Actually, that would be much worse than if all of the cells in your
77:02 spreadsheet were calculated by the computer. Really what will happen is that the pure AI,
77:10 pure robotics corporations or collectives will far outperform any corporations
77:17 that have humans in the loop. And this will happen very quickly.
77:21 Speaking of closing the loop… Optimus. As far as manufacturing targets go,
77:31 your companies have been carrying American manufacturing of hard tech on their back.
77:39 But in the fields that Tesla has been dominant in—and now you want to go into humanoids—in China
77:47 there are dozens and dozens of companies that are doing this kind of manufacturing cheaply
77:53 and at scale that are incredibly competitive. So give us advice or a plan of how America can
78:01 build the humanoid armies or the EVs, et cetera, at scale and as cheaply as China is on track to.
78:11 There are really only three hard things for humanoid robots.
78:15 The real-world intelligence, the hand, and scale manufacturing.
78:25 I haven't seen any, even demo robots, that have a great hand,
78:32 with all the degrees of freedom of a human hand. Optimus will have that. Optimus does have that.
78:41 How do you achieve that? Is it just the right torque density in the motor?
78:44 What is the hardware bottleneck to that? We had to design custom actuators,
78:50 basically custom design motors, gears, power electronics, controls, sensors.
78:58 Everything had to be designed from physics first principles.
79:01 There is no supply chain for this. Will you be able to manufacture those at scale?
79:06 Yes. Is anything hard, except the hand, from a manipulation point of view? Or once you've solved the hand, are you good?
79:12 From an electromechanical standpoint, the hand is more difficult than everything else combined.
79:17 The human hand turns out to be quite something. But you also need the real-world intelligence.
79:24 The intelligence that Tesla developed for the car applies very well to the robot,
79:32 which is primarily vision in. The car takes in vision,
79:36 but it actually also is listening for sirens. It's taking in the inertial measurements,
79:42 GPS signals, other data, combining that with video, primarily video,
79:47 and then outputting the control commands. Your Tesla is taking in one and a half
79:55 gigabytes a second of video and outputting two kilobytes a second of control outputs with the
80:03 video at 36 hertz and the control frequency at 18. One intuition you could have for when we get this
80:12 robotic stuff is that it takes quite a few years to go from the compelling demo to actually being
80:18 able to use it in the real world. 10 years ago, you had really compelling demos of self-driving,
80:23 but only now we have Robotaxis and Waymo and all these services scaling up.
80:29 Shouldn't this make one pessimistic on household robots?
80:33 Because we don't even quite have the compelling demos yet of, say, the really advanced hand.
80:39 Well, we've been working on humanoid robots now for a while.
80:44 I guess it's been five or six years or something. A bunch of the things that were done for the car
80:52 are applicable to the robot. We'll use the same Tesla AI
80:57 chips in the robot as in the car. We'll use the same basic principles.
81:05 It's very much the same AI. You've got many more degrees of
81:09 freedom for a robot than you do for a car. If you just think of it as a bitstream,
81:16 AI is mostly compression and correlation of two bitstreams.
81:23 For video, you've got to do a tremendous amount of compression
81:28 and you've got to do the compression just right. You've got to ignore the things that don't matter.
81:36 You don't care about the details of the leaves on the tree on the side of the road,
81:39 but you care a lot about the road signs and the traffic lights, the pedestrians,
81:45 and even whether someone in another car is looking at you or not looking at you.
81:51 Some of these details matter a lot. The car is going to turn that one and
81:57 a half gigabytes a second ultimately into two kilobytes a second of control outputs.
82:02 So you’ve got many stages of compression. You've got to get all those stages right and then
82:08 correlate those to the correct control outputs. The robot has to do essentially the same thing.
82:14 This is what happens with humans. We really are photons in, controls out.
82:19 That is the vast majority of your life: vision, photons in, and then motor controls out.
82:28 Naively, it seems that between humanoid robots and cars… The fundamental actuators
82:33 in a car are how you turn, how you accelerate. In a robot, especially with maneuverable arms,
82:39 there's dozens and dozens of these degrees of freedom.
82:42 Then especially with Tesla, you had this advantage of millions and millions of hours of human demo
82:48 data collected from the car being out there. You can't equivalently deploy Optimuses that
82:53 don't work and then get the data that way. So between the increased degrees of freedom
82:57 and the far sparser data... Yes. That’s a good point. How will you use the Tesla engine of intelligence to train the Optimus mind?
83:11 You're actually highlighting an important limitation and difference from cars.
83:18 We'll soon have 10 million cars on the road. It's hard to duplicate that massive
83:26 training flywheel. For the robot, what we're going to need to do is build a lot of robots and put them in kind of an Optimus Academy
83:37 so they can do self-play in reality. We're actually building that out. We can have at
83:45 least 10,000 Optimus robots, maybe 20-30,000, that are doing self-play and testing different tasks.
83:55 Tesla has quite a good reality generator, a physics-accurate reality
84:02 generator, that we made for the cars. We'll do the same thing for the robots.
84:06 We actually have done that for the robots. So you have a few tens of thousands of
84:14 humanoid robots doing different tasks. You can do millions of simulated
84:20 robots in the simulated world. You use the tens of thousands of
84:26 robots in the real world to close the simulation to reality gap. Close the sim-to-real gap.
84:32 How do you think about the synergies between xAI and Optimus, given you're highlighting that you
84:36 need this world model, you want to use some really smart intelligence as a control plane,
84:42 and Grok is doing the slower planning, and then the motor policy is a little lower level.
84:48 What will the synergy between these things be? Grok would orchestrate the
84:55 behavior of the Optimus robots. Let's say you wanted to build a factory.
85:05 Grok could organize the Optimus robots, assign them tasks to build
85:13 the factory to produce whatever you want. Don't you need to merge xAI and Tesla then?
85:18 Because these things end up so... What were we saying earlier
85:21 about public company discussions? We're one more Guinness in, Elon.
85:28 What are you waiting to see before you say, we want to manufacture 100,000 Optimuses?
85:33 "Optimi". Since we're defining the proper noun, we’re going to define
85:38 the plural of the proper noun too. We're going to proper noun the
85:42 plural and so it's Optimi. Is there something on the
85:46 hardware side you want to see? Do you want to see better actuators?
85:49 Is it just that you want the software to be better?
85:50 What are we waiting for before we get mass manufacturing of Gen 3?
85:54 No, we're moving towards that. We're moving forward with the mass manufacturing.
85:58 But you think current hardware is good enough that you just want to deploy as many as possible now?
86:06 It's very hard to scale up production. But I think Optimus 3 is the right version
86:12 of the robot to produce something on the order of a million units a year.
86:20 I think you'd want to go to Optimus 4 before you went to 10 million units a year.
86:23 Okay, but you can do a million units at Optimus 3? It's very hard to spool up manufacturing.
86:35 The output per unit time always follows an S-curve.
86:38 It starts off agonizingly slow, then it has this exponential increase, then a linear,
86:44 then a logarithmic outcome until you eventually asymptote at some number.
86:51 Optimus’ initial production will be a stretched out S-curve because so much
86:57 of what goes into Optimus is brand new. There is not an existing supply chain.
87:03 The actuators, electronics, everything in the Optimus robot is designed
87:08 from physics first principles. It's not taken from a catalog.
87:11 These are custom-designed everything. I don't think there's a single thing—
87:17 How far down does that go? I guess we're not making custom
87:22 capacitors yet, maybe. There's nothing you can pick out of a catalog, at any price. It just means that the Optimus S-Curve,
87:39 the output per unit time, how many Optimus robots you make per day, is going to initially ramp
87:50 slower than a product where you have an existing supply chain.
87:55 But it will get to a million. When you see these Chinese humanoids,
87:58 like Unitree or whatever, sell humanoids for like $6K or $13K, are you hoping to
88:05 get your Optimus bill of materials below that price so you can do the same thing?
88:10 Or do you just think qualitatively they're not the same thing?
88:15 What allows them to sell for so low? Can we match that?
88:19 Our Optimus is designed to have a lot of intelligence and to have the same
88:26 electromechanical dexterity, if not higher, as a human. Unitree does not
88:31 have that. It's also quite a big robot. It has to carry heavy objects for long
88:41 periods of time and not overheat or exceed the power of its actuators.
88:50 It's 5'11", so it's pretty tall. It's got a lot of intelligence.
88:57 So it's going to be more expensive than a small robot that is not intelligent.
89:02 But more capable. But not a lot more. The thing is,
89:06 over time as Optimus robots build Optimus robots, the cost will drop very quickly.
89:12 What will these first billion Optimuses, Optimi, do?
89:17 What will their highest and best use be? I think you would start off with simple tasks
89:21 that you can count on them doing well. But in the home or in factories?
89:25 The best use for robots in the beginning will be any continuous operation, any 24/7
89:33 operation, because they can work continuously. What fraction of the work at a Gigafactory that
89:39 is currently done by humans could a Gen 3 do? I'm not sure. Maybe it's 10-20%,
89:46 maybe more, I don't know. We would not reduce our headcount.
89:52 We would increase our headcount, to be clear. But we would increase our output. The units
90:01 produced per human... The total number of humans at Tesla will increase, but the output of robots
90:09 and cars will increase disproportionately. The number of cars and robots produced per
90:18 human will increase dramatically, but the number of humans will increase as well.
90:23 We're talking about Chinese manufacturing a bunch here.
90:30 We've also talked about some of the policies that are relevant,
90:33 like you mentioned, the solar tariffs. You think they're a bad idea because
90:39 we can't scale up solar in the US. Electricity output in the US needs to scale up.
90:45 It can't without good power sources. You just need to get it somehow.
90:50 Where I was going with this is, if you were in charge, if you were setting all
90:53 the policies, what else would you change? You’d change the solar tariffs, that’s one.
91:01 I would say anything that is a limiting factor for electricity needs to be addressed,
91:06 provided it's not very bad for the environment. So presumably some permitting reforms and stuff
91:10 as well would be in there? There's a fair bit of
91:12 permitting reforms that are happening. A lot of the permitting is state-based,
91:17 but anything federal... This administration is good at
91:21 removing permitting roadblocks. I'm not saying all tariffs are bad.
91:28 Solar tariffs. Sometimes if another country is subsidizing the output of something, then you have to have countervailing tariffs to protect domestic
91:39 industry against subsidies by another country. What else would you change?
91:43 I don't know if there's that much that the government can actually do.
91:46 One thing I was wondering... For the policy goal of creating a lead for the US versus China,
91:57 it seems like the export bans have actually been quite impactful,
92:02 where China is not producing leading-edge chips and the export bans really bite there.
92:07 China is not producing leading-edge turbine engines.
92:11 Similarly, there's a bunch of export bans that are relevant there on some of the metallurgy.
92:16 Should there be more export bans? As you think about things like the
92:20 drone industry and things like that, is that something that should be considered?
92:24 It's important to appreciate that in most areas, China is very advanced in manufacturing.
92:30 There's only a few areas where it is not. China is a manufacturing powerhouse, next-level.
92:40 It's very impressive. If you take refining of ore,
92:49 China does roughly twice as much ore refining on average as the rest of the world combined.
93:00 There are some areas, like refining gallium which goes into solar cells.
93:05 I think they are 98% of gallium refining. So China is actually very advanced
93:10 in manufacturing in most areas. It seems like there is discomfort
93:16 with this supply chain dependence, and yet nothing's really happening on it.
93:20 Supply chain dependence? Say, like the gallium refining that
93:24 you're saying. All the rare-earth stuff. Rare earths for sure,
93:31 as you know, they’re not rare. We actually do rare earth ore mining in the US,
93:37 send the rock, put it on a train, and then put it on a boat to China that goes to another train,
93:45 and goes to the rare earth refiners in China who then refine it, put it into a magnet,
93:51 put it into a motor sub-assembly, and then send it back to America.
93:54 So the thing we're really missing is a lot of ore refining in America.
94:00 Isn't this worth a policy intervention? Yes. I think there are some things
94:06 being done on that front. But we kind of need Optimus,
94:12 frankly, to build ore refineries. So, you think the main advantage
94:17 China has is the abundance of skilled labor? That's the thing Optimus fixes?
94:24 Yes. China’s got like four times our population. I mean, there's this concern. If you think
94:29 human resources are the future, right now if it's the skilled labor for manufacturing
94:34 that's determining who can build more humanoids, China has more of those.
94:39 It manufactures more humanoids, therefore it gets the Optimi future first.
94:44 Well, we’ll see. Maybe. It just keeps that exponential going.
94:47 It seems like you're sort of pointing out that getting to a million Optimi requires
94:52 the manufacturing that the Optimi is supposed to help us get to. Right?
94:57 You can close that recursive loop pretty quickly. With a small number of Optimi?
95:01 Yeah. So you close the recursive loop to help the robots build the robots.
95:08 Then we can try to get to tens of millions of units a year. Maybe. If you start getting
95:13 to hundreds of millions of units a year, you're going to be the most competitive country by far.
95:18 We definitely can't win with just humans, because China has four times our population.
95:23 Frankly, America has been winning for so long that… A pro sports team that's been
95:27 winning for a very long time tends to get complacent and entitled.
95:31 That's why they stop winning, because they don't work as hard anymore.
95:37 So frankly my observation is just that the average work ethic in China is higher than in the US.
95:44 It's not just that there's four times the population, but the amount
95:46 of work that people put in is higher. So you can try to rearrange the humans,
95:52 but you're still one quarter of the—assuming that productivity is the same, which I think
96:01 actually it might not be, I think China might have an advantage on productivity per person—we will do
96:07 one quarter of the amount of things as China. So we can't win on the human front.
96:12 Our birth rate has been low for a long time. The US birth rate's been below replacement
96:20 since roughly 1971. We've got a lot of people retiring, we're close
96:32 to more people domestically dying than being born. So we definitely can't win on the human front,
96:38 but we might have a shot at the robot front. Are there other things that you have wanted to
96:43 manufacture in the past, but they've been too labor intensive or too expensive that now you
96:48 can come back to and say, "oh, we can finally do the whatever, because we have Optimus?"
96:54 Yeah, we'd like to build more ore refineries at Tesla.
97:00 We just completed construction and have begun lithium refining with our lithium refinery
97:07 in Corpus Christi, Texas. We have a nickel refinery,
97:12 which is for the cathode, that's here in Austin. This is the largest cathode refinery, largest
97:24 nickel and lithium refinery, outside of China. The cathode team would say, "we have the
97:35 largest and the only, actually, cathode refinery in America."
97:40 Not just the largest, but it's also the only. Many superlatives.
97:43 So it was pretty big, even though it's the only one. But there are other things.
97:53 You could do a lot more refineries and help America be more competitive on refining capacity.
98:04 There's basically a lot of work for the Optimus to do that most Americans,
98:09 very few Americans, frankly want to do. Is the refining work too dirty or what's the—
98:15 It's not actually, no. We don't have toxic emissions from the refinery or anything.
98:22 The cathode nickel refinery is in Travis County. Why can't you do it with humans?
98:29 You can, you just run out of humans. Ah, I see. Okay. No matter what you do, you have one quarter of the number of humans in America than China.
98:36 So if you have them do this thing, they can't do the other thing.
98:39 So then how do you build this refining capacity? Well, you could do it with Optimi.
98:49 Not very many Americans are pining to do refining. I mean, how many have you run into? Very few. Very
99:01 few pining to refine. BYD is reaching Tesla production or sales in quantity. What do you think happens in global
99:09 markets as Chinese production in EVs scales up? China is extremely competitive in manufacturing.
99:19 So I think there's going to be a massive flood of Chinese vehicles
99:26 and basically most manufactured things. As it is, as I said, China is probably
99:37 doing twice as much refining as the rest of the world combined.
99:40 So if you go down to fourth and fifth-tier supply chain stuff…
99:50 At the base level, you've got energy, then you've got mining and refining.
99:55 Those foundation layers are, like I said, as a rough guess, China's doing twice as much refining
100:03 as the rest of the world combined. So any given thing is going to have
100:09 Chinese content because China's doing twice as much refining work as the rest of the world.
100:14 But they'll go all the way to the finished product with the cars.
100:22 I mean China is a powerhouse. I think this year China will exceed
100:26 three times US electricity output. Electricity output is a reasonable
100:32 proxy for the economy. In order to run the factories
100:39 and run everything, you need electricity. It's a good proxy for the real economy.
100:52 If China passes three times the US electricity output,
100:55 it means that its industrial capacity—as rough approximation—will be three times that of the US.
101:01 Reading between the lines, it sounds like what you're saying is absent some sort of humanoid
101:06 recursive miracle in the next few years, on the whole manufacturing/energy/raw materials chain,
101:16 China will just dominate whether it comes to AI or manufacturing EVs or manufacturing humanoids.
101:23 In the absence of breakthrough innovations in the US, China will utterly dominate.
101:35 Interesting. Yes. Robotics being the main breakthrough innovation. Well, to scale AI in space, basically you need
101:49 humanoid robots, you need real-world AI, you need a million tons a year to orbit.
101:57 Let's just say if we get the mass driver on the moon going, my favorite thing, then I think—
102:03 We'll have solved all our problems. I call that winning. I call it winning, big time.
102:13 You can finally be satisfied. You've done something.
102:16 Yes. You have the mass driver on the moon. I just want to see that thing in operation. Was that out of some sci-fi or where did you…?
102:22 Well, actually, there is a Heinlein book. The Moon is a Harsh Mistress.
102:26 Okay, yeah, but that's slightly different. That's a gravity slingshot or...
102:30 No, they have a mass driver on the Moon. Okay, yeah, but they use that to attack Earth.
102:35 So maybe it's not the greatest... Well they use that to… assert their independence.
102:38 Exactly. What are your plans for the mass driver on the Moon?
102:40 They asserted their independence. Earth government disagreed and they lobbed
102:44 things until Earth government agreed. That book is a hoot. I found that
102:48 book much better than his other one that everyone reads, Stranger in a Strange Land.
102:51 "Grok" comes from Stranger in a Strange Land. The first two-thirds of Stranger in a Strange
102:58 Land are good, and then it gets very weird in the third portion.
103:02 But there are still some good concepts in there. One thing we were discussing a lot
104:18 is your system for managing people. You interviewed the first few thousand of
104:25 SpaceX employees and lots of other companies. It obviously doesn't scale.
104:29 Well, yes, but what doesn't scale? Me. Sure, sure. I know that. But what are you looking for?
104:36 There literally are not enough hours in the day. It's impossible.
104:38 But what are you looking for that someone else who's good at interviewing
104:42 and hiring people… What's the je ne sais quoi? At this point, I might have more training data
104:51 on evaluating technical talent especially—talent of all kinds I suppose, but technical talent
104:56 especially—given that I've done so many technical interviews and then seen the results.
105:02 So my training set is enormous and has a very wide range.
105:11 Generally, the things I ask for are bullet points for evidence of exceptional ability.
105:21 These things can be pretty off the wall. It doesn't need to be in the specific domain,
105:27 but evidence of exceptional ability. So if somebody can cite even one thing,
105:34 but let's say three things, where you go, "Wow, wow, wow," then that's a good sign.
105:39 Why do you have to be the one to determine that? No, I don't. I can't be. It's impossible. The
105:43 total headcount across all companies is 200,000 people.
105:48 But in the early days, what was it that you were looking for that
105:53 couldn't be delegated in those interviews? I guess I need to build my training set.
106:02 It's not like I batted a thousand here. I would make mistakes, but then I'd be
106:05 able to see where I thought somebody would work out well, but they didn't.
106:10 Then why did they not work out well? What can I do, I guess RL myself, to
106:16 in the future have a better batting average when interviewing people?
106:22 My batting average is still not perfect, but it's very high.
106:24 What are some surprising reasons people don't work out?
106:27 Surprising reasons… Like, they don't understand technical domain, et cetera, et cetera. But you've got the long tail now of like,
106:34 "I was really excited about this person. It didn't work out." Curious why that happens.
106:43 Generally what I tell people—I tell myself, I guess, aspirationally—is, don't look at
106:49 the resume. Just believe your interaction. The resume may seem very impressive and it's like,
106:55 "Wow, the resume looks good." But if the conversation
107:00 after 20 minutes is not "wow," you should believe the conversation, not the paper.
107:07 I feel like part of your method is that… There was this meme in the media a few years back about
107:14 Tesla being a revolving door of executive talent. Whereas actually, I think when you look at it,
107:19 Tesla's had a very consistent and internally promoted executive bench over the past few years.
107:24 Then at SpaceX, you have all these folks like Mark Juncosa and Steve Davis—
107:29 Steve Davis runs The Boring Company these days. Bill Riley, and folks like that.
107:35 It feels like part of what has worked well is having very capable technical deputies.
107:43 What do all of those people have in common? Well, the Tesla senior team,
107:53 at this point has probably got an average tenure of 10-12 years. It's quite long.
108:03 But there were times when Tesla went through an extremely rapid growth phase,
108:11 so things were just somewhat sped up. As you know, a company goes through
108:17 different orders of magnitude of size. People that could help manage, say,
108:23 a 50-person company versus a 500-person company versus a 5,000-person company versus
108:28 a 50,000-person company. You outgrew people. It's just not the same team. It's not always the same team.
108:34 So if a company is growing very rapidly, the rate at which executive positions will
108:39 change will also be proportionate to the rapidity of the growth generally.
108:47 Tesla had a further challenge where when Tesla had very successful periods, we would be relentlessly
108:56 recruited from. Like, relentlessly. When Apple had their electric car program,
109:03 they were carpet bombing Tesla with recruiting calls. Engineers just unplugged their phones.
109:10 "I'm trying to get work done here." Yeah. "If I get one more call from
109:14 an Apple recruiter…" But their opening offer without any interview would be
109:19 like double the compensation at Tesla. So we had a bit of the "Tesla pixie
109:28 dust" thing where it's like, "Oh, if you hire a Tesla executive,
109:32 suddenly everything's going to be successful." I've fallen prey to the pixie dust thing as well,
109:38 where it's like, "Oh, we'll hire someone from Google or Apple and they'll be immediately
109:41 successful," but that's not how it works. People are people. There's no magical pixie
109:47 dust. So when we had the pixie dust problem, we would get relentlessly recruited from.
109:57 Also, Tesla being engineering, especially being primarily in Silicon Valley,
110:03 it's easier for people to just... They don't have to change their life very much.
110:10 Their commute's going to be the same. So how do you prevent that?
110:14 How do you prevent the pixie dust effect where everyone's trying to poach all your people?
110:21 I don't think there's much we can do to stop it. That's one of the reasons why Tesla… Really,
110:29 being in Silicon Valley and having the pixie dust thing at the same time meant that there was
110:39 just a very, very aggressive recruitment. Presumably being in Austin helps then?
110:44 Austin, it helps. Tesla still has a majority of its engineering in California.
110:56 Getting engineers to move… I call it the "significant other" problem.
111:00 Yes, "significant others" have jobs. Exactly. So for Starbase that was
111:06 particularly difficult, since the odds of finding a non-SpaceX job…
111:10 In Brownsville, Texas… …are pretty low. It's quite difficult. It's like a technology monastery thing, remote and mostly dudes.
111:22 Not much of an improvement over SF. If you go back to these people who've really
111:34 been very effective in a technical capacity at Tesla, at SpaceX, and those sorts of places, what
111:41 do you think they have in common other than... Is it just that they're very sharp on the
111:48 rocketry or the technical foundations, or do you think it's something organizational?
111:52 Is it something about their ability to work with you?
111:54 Is it their ability to be flexible but not too flexible?
112:03 What makes a good sparring partner for you? I don't think of it as a sparring partner.
112:08 If somebody gets things done, I love them, and if they don't,
112:11 I hate them. So it's pretty straightforward. It's not like some idiosyncratic thing.
112:17 If somebody executes well, I'm a huge fan, and if they don't, I'm not.
112:22 But it's not about mapping to my idiosyncratic preferences.
112:25 I certainly try not to have it be mapping to my idiosyncratic preferences.
112:36 Generally, I think it's a good idea to hire for talent and drive and trustworthiness.
112:47 And I think goodness of heart is important. I underweighted that at one point.
112:53 So, are they a good person? Trustworthy? Smart and talented and hard working?
113:01 If so, you can add domain knowledge. But those fundamental traits,
113:06 those fundamental properties, you cannot change. So most of the people who are at Tesla and SpaceX
113:14 did not come from the aerospace industry or the auto industry.
113:18 What has had to change most about your management style as your companies have
113:21 scaled from 100 to 1,000 to 10,000 people? You're known for this very micro management,
113:27 just getting into the details of things. Nano management, please. Pico management.
113:34 Femto management. Keep going. We're going to go all the way down to Planck's constant.
113:44 All the way down to Heisenberg uncertainty principle.
113:50 Are you still able to get into details as much as you want?
113:52 Would your companies be more successful if they were smaller?
113:56 How do you think about that? Because I have a fixed amount of
113:58 time in the day, my time is necessarily diluted as things grow and as the span of activity increases.
114:10 It's impossible for me to actually be a micromanager because that would imply I
114:17 have some thousands of hours per day. It is a logical impossibility
114:22 for me to micromanage things. Now, there are times when I will drill down into a
114:31 specific issue because that specific issue is the limiting factor on the progress of the company.
114:42 The reason for drilling into some very detailed item is because it is the limiting factor.
114:49 It’s not arbitrarily drilling into tiny things. From a time standpoint, it is physically
114:57 impossible for me to arbitrarily go into tiny things that don't matter. That would
115:03 result in failure. But sometimes the tiny things are decisive in victory.
115:09 Famously, you switched the Starship design from composites to steel.
115:17 Yes. You made that decision. That wasn't people going around saying, "Oh, we found something better, boss."
115:22 That was you encouraging people against some resistance.
115:25 Can you tell us how you came to that whole concept of the steel switch?
115:32 Desperation, I'd say. Originally, we were going to make Starship out of carbon fiber.
115:45 Carbon fiber is pretty expensive. When you do volume production, you can get any given thing
115:55 to start to approach its material cost. The problem with carbon fiber is that
116:00 material cost is still very high. Particularly if you go for a high-strength
116:10 specialized carbon fiber that can handle cryogenic oxygen, it's roughly 50 times the cost of steel.
116:20 At least in theory, it would be lighter. People generally think of steel as being
116:24 heavy and carbon fiber as being light. For room temperature applications,
116:35 like a Formula 1 car, static aero structure, or any kind of aero structure really, you're
116:43 probably going to be better off with carbon fiber. The problem is that we were trying to make this
116:48 enormous rocket out of carbon fiber and our progress was extremely slow.
116:53 It had been picked in the first place just because it's light?
116:57 Yes. At first glance, most people would think that the choice for
117:05 making something light would be carbon fiber. The thing is that when you make something very
117:18 enormous out of carbon fiber and then you try to have the carbon fiber be efficiently cured,
117:25 meaning not room temperature cured, because sometimes you got 50 plies of carbon fiber…
117:33 Carbon fiber is really carbon string and glue. In order to have high strength,
117:39 you need an autoclave. Something that's essentially a high pressure oven.
117:46 If you have something that's gigantic, that one's got to be bigger than the rocket.
117:52 We were trying to make an autoclave that's bigger than any autoclave that's ever existed.
117:58 Or you can do room temperature cure, which takes a long time and has issues.
118:03 The final issue is that we were just making very slow progress with carbon fiber.
118:12 The meta question is why it had to be you who made that decision.
118:18 There's many engineers on your team. How did the team not arrive at steel?
118:20 Yeah exactly. This is part of a broader question, understanding your comparative
118:24 advantage at your companies. Because we were making very slow
118:29 progress with carbon fiber, I was like, "Okay, we've got to try something else."
118:33 For the Falcon 9, the primary airframe is made of aluminum lithium, which has
118:41 a very good strength-to-weight. Actually, it has about the same,
118:47 maybe better, strength to weight for its application than carbon fiber.
118:51 But aluminum lithium is very difficult to work with.
118:53 In order to weld it, you have to do something called friction stir welding, where you join the
118:57 metal without entering the liquid phase. It's kind of wild that you can do that.
119:02 But with this particular type of welding, you can do that. It's very difficult. Let's say you
119:10 want to make a modification or attach something to aluminum lithium, you now have to use a mechanical
119:16 attachment with seals. You can't weld it on. So I wanted to avoid using aluminum lithium
119:24 for the primary structure for Starship. There was this very special grade of
119:35 carbon fiber that had very good mass properties. With a rocket, you're really trying to maximize
119:41 the percentage of the rocket that is propellant, minimize the mass obviously.
119:48 But like I said, we were making very slow progress.
119:54 I said, "at this rate, we’re never going to get to Mars.
119:56 So we've got to think of something else." I didn't want to use aluminum lithium
120:01 because of the difficulty of friction stir welding, especially doing that at scale.
120:06 It was hard enough at 3.6 meters in diameter, let alone at 9 meters or above.
120:12 Then I said, "what about steel?" I had a clue here because some of
120:21 the early US rockets had used very thin steel. The Atlas rockets had used a steel balloon tank.
120:30 It's not like steel had never been used before. It actually had been used. When you look at
120:35 the material properties of stainless steel, full-hard, strain hardened stainless steel,
120:46 at cryogenic temperature the strength to weight is actually similar to carbon fiber.
120:54 If you look at material properties at room temperature, it looks like
120:58 the steel is going to be twice as heavy. But if you look at the material properties
121:03 at cryogenic temperature of full-hard steel, stainless of particular grades,
121:10 then you actually get to a similar strength to weight as carbon fiber.
121:15 In the case of Starship, both the fuel and the oxidizer are cryogenic.
121:19 For Falcon 9, the fuel is rocket propellant-grade kerosene, basically a very pure form of jet fuel.
121:32 That is roughly room temperature. Although we do actually chill it slightly below,
121:38 we chill it like a beer. Delicious. We do chill it, but it's not cryogenic. In fact, if we made it cryogenic,
121:45 it would just turn to wax. But for Starship, it's liquid methane and liquid oxygen. They are liquid at similar temperatures.
121:59 Basically, almost the entire primary structure is at cryogenic temperature.
122:03 So then you've got a 300-series stainless that's strain hardened.
122:12 Because almost all things are cryogenic temperature, it actually has similar
122:17 strength to weight as carbon fiber. But it costs 50x less in raw
122:25 material and is very easy to work with. You can weld stainless steel outdoors.
122:30 You could smoke a cigar while welding stainless steel. It's very resilient.
122:37 You can modify it easily. If you want to attach something, you just weld it right on.
122:44 Very easy to work with, very low cost. Like I said, at cryogenic temperature,
122:52 it’s similar strength-to-weight to carbon fiber. Then when you factor in that we have a much
123:02 reduced heat shield mass, because the melting point of steel, is much greater
123:07 than the melting point of aluminum… It's about twice the melting point of aluminum.
123:13 So you can just run the rocket much hotter? Yes, especially for the ship which is coming
123:19 in like a blazing meteor. You can greatly reduce
123:25 the mass of the heat shield. You can cut the mass of the windward
123:34 part of the heat shield, maybe in half, and you don't need any heat shielding on the leeward side.
123:45 The net result is that actually the steel rocket weighs less than
123:49 the carbon fiber rocket, because the resin in the carbon fiber rocket starts to melt.
124:00 Basically, carbon fiber and aluminum have about the same operating temperature capabilities,
124:06 whereas steel can operate at twice the temperature. These are very rough approximations.
124:12 I won't build the rocket. What I mean is people will say,
124:14 "Oh, he said this twice. It's actually 0.8." I'm like, shut up, assholes.
124:18 That's what the main comment's going to be about. God damn it. The point is, in retrospect, we
124:25 should have started with steel in the beginning. It was dumb not to do steel.
124:28 Okay, but to play this back to you, what I'm hearing is that steel was a riskier,
124:32 less proven path, other than the early US rockets. Versus carbon fiber was a worse but
124:40 more proven out path. So you need to be the one to push for, "Hey, we're going to do this riskier path and just figure it out."
124:48 So you're fighting a sort of conservatism in a sense.
124:52 That's why I initially said that the issue is that we weren't making fast enough progress.
124:57 We were having trouble making even a small barrel section of the carbon fiber
125:02 that didn't have wrinkles in it. Because at that large scale, you have to
125:09 have many plies, many layers of the carbon fiber. You've got to cure it and you've got to cure it
125:14 in such a way that it doesn't have any wrinkles or defects.
125:18 Carbon fiber is much less resilient than steel. It has much less toughness.
125:26 Stainless steel will stretch and bend, the carbon fiber will tend to shatter.
125:35 Toughness being the area under the stress strain curve.
125:39 You're generally going to have to do better with steel, but stainless steel to be precise.
125:45 One other Starship question. So I visited Starbase, I think it was two years ago,
125:51 with Sam Teller, and that was awesome. It was very cool to see, in a whole bunch of ways.
125:55 One thing I noticed was that people really took pride in the simplicity of things, where everyone
126:02 wants to tell you how Starship is just a big soda can, and we're hiring welders, and if you can weld
126:09 in any industrial project, you can weld here. But there's a lot of pride in the simplicity.
126:16 Well, factually Starship is a very complicated rocket.
126:18 So that's what I'm getting at. Are things simple or are they complex?
126:23 I think maybe just what they're trying to say is that you don't have to have prior experience
126:27 in the rocket industry to work on Starship. Somebody just needs to be smart and work hard
126:36 and be trustworthy and they can work on a rocket. They don't need prior rocket experience.
126:42 Starship is the most complicated machine ever made by humans, by a long shot.
126:47 In what regards? Anything, really. I'd say there isn't a more complex machine. I'd say that pretty much any project I
127:00 can think of would be easier than this. That's why nobody has ever made a fully
127:08 reusable orbital rocket. It's a very hard problem. Many smart people have tried before, very smart
127:18 people with immense resources, and they failed. And we haven't succeeded yet. Falcon is partially
127:26 reusable, but the upper stage is not. Starship Version 3,
127:32 I think this design can be fully reusable. That full reusability is what will enable
127:41 us to become a multi-planet civilization. Any technical problem, even like a Hadron
127:52 Collider or something like that, is an easier problem than this.
127:55 We spent a lot of time on bottlenecks. Can you say what the current Starship
127:58 bottlenecks are, even at a high level? Trying to make it not explode, generally.
128:05 It really wants to explode. That old chestnut. All those
128:09 combustible materials. We've had two boosters explode on the test stand.
128:13 One obliterated the entire test facility. So it only takes that one mistake.
128:21 The amount of energy contained in a Starship is insane.
128:25 Is that why it's harder than Falcon? It's because it's just more energy?
128:30 It's a lot of new technology. It's pushing the performance envelope. The
128:37 Raptor 3 engine is a very, very advanced engine. It's by far the best rocket engine ever made.
128:43 But it desperately wants to blow up. Just to put things into perspective here,
128:48 on liftoff the rocket is generating over 100 gigawatts of power. That’s 20% of US electricity.
128:58 It's actually insane. It's a great comparison. While not exploding. Sometimes.
129:02 Sometimes, yes. So I was like, how does it not explode?
129:06 There's thousands of ways that it could explode and only one way that it doesn't.
129:12 So we want it not only to really not explode, but fly reliably on a daily basis, like once per hour.
129:22 Obviously, if it blows up a lot, it's very difficult to maintain that
129:25 launch cadence. Yes. What's the single biggest remaining problem for Starship?
129:33 It's having the heat shield be reusable. No one's ever made a reusable orbital heat shield.
129:44 So the heat shield's gotta make it through the ascent phase without shucking a bunch of tiles,
129:52 and then it's gotta come back in and also not lose a bunch of tiles or overheat the main airframe.
130:01 Isn't that hard because it's fundamentally a consumable?
130:05 Well, yes, but your brake pads in your car are also consumable, but they last a very long time.
130:09 Fair. So it just needs to last a very long time. We have brought the ship back and had it do a soft landing in the ocean.
130:22 We've done that a few times. But it lost a lot of tiles.
130:27 It was not reusable without a lot of work. Even though it did come to a soft landing,
130:35 it would not have been reusable without a lot of work.
130:40 So it's not really reusable in that sense. That's the biggest problem that remains,
130:44 a fully reusable heat shield. You want to be able to land it,
130:51 refill propellant and fly again. You can't do this laborious inspection
130:57 of 40,000 tiles type of thing. When I read biographies of yours,
131:06 it seems like you're just able to drive the sense of urgency and drive the sense of "this is the
131:11 thing that can scale." I'm curious why you think other organizations of your… SpaceX and Tesla are really big companies now.
131:20 You're still able to keep that culture. What goes wrong with other companies such
131:24 that they're not able to do that? I don't know. Like today, you said you had a bunch of SpaceX meetings.
131:31 What is it that you're doing there that's keeping that?
131:33 It’s adding urgency? Well, I don't know. I guess the urgency is going
131:42 to come from whoever is leading the company. I have a maniacal sense of urgency.
131:47 So that maniacal sense of urgency projects through the rest of the company.
131:52 Is it because of consequences? They're like, "Elon set a crazy deadline, but if I don't get it,
131:57 I know what happens to me." Is it just that you're able to
132:01 identify bottlenecks and get rid of them so people can move fast?
132:03 How do you think about why your companies are able to move fast?
132:07 I'm constantly addressing the limiting factor. On the deadlines front, I generally actually
132:20 try to aim for a deadline that I at least think is at the 50th percentile.
132:25 So it's not like an impossible deadline, but it's the most aggressive deadline I can think
132:29 of that could be achieved with 50% probability. Which means that it'll be late half the time.
132:42 There is a law of gas expansion that applies to schedules.
132:48 If you said we're going to do something in five years, which to me is like infinity time,
132:55 it will expand to fill the available schedule and it'll take five years.
133:05 Physics will limit how fast you can do certain things.
133:07 So scaling up manufacturing, there's a rate at which you can move the atoms
133:15 and scale manufacturing. That's why you can't instantly
133:17 make a million units a year of something. You've got to design the manufacturing line.
133:23 You've got to bring it up. You've got to ride the S-curve of production.
133:31 What can I say that's actually helpful to people? Generally, a maniacal sense of urgency is a very big deal.
133:47 You want to have an aggressive schedule and you want to figure out what the limiting
133:54 factor is at any point in time and help the team address that limiting factor.
133:59 So Starlink was slowly in the works for many years.
134:05 We talked about it all the way in the beginning of the company.
134:07 So then there was a team you had built in Redmond, and then at one point you
134:12 decided this team is just not cutting it. It went for a few years slowly, and so why didn't
134:25 you act earlier, and why did you act when you did? Why was that the right moment at which to act?
134:30 I have these very detailed engineering reviews weekly.
134:38 That's maybe a very unusual level of granularity. I don't know anyone who runs a company,
134:45 or at least a manufacturing company, that goes with the level of detail that I go
134:50 into. It's not as though... I have a pretty good understanding of what's actually going
134:57 on because we go through things in detail. I'm a big believer in skip-level meetings
135:07 where instead of having the person that reports to me say things, it's everyone that reports to them
135:14 saying something in the technical review. And there can't be advanced preparation.
135:25 Otherwise you're going to get "glazed", as I say these days.
135:31 Exactly. Very Gen Z of you. How do you prevent advanced preparation?
135:35 Do you call on them randomly? No, I just go around the room.
135:37 Everyone provides an update. It's a lot of information to keep in your head.
135:48 If you have meetings weekly or twice weekly, you've got a snapshot of what that person said.
135:56 You can then plot the progress points. You can sort of mentally plot the
136:03 points on a curve and say, "are we converging to a solution or not?"
136:12 I'll take drastic action only when I conclude that success is not in a set of possible outcomes.
136:22 So when I finally reach the conclusion that unless drastic action is done, we have no chance of
136:29 success, then I must take drastic action. I came to that conclusion in 2018,
136:36 took drastic action and fixed the problem. You've got many, many companies. In each of
136:45 them it sounds like you do this kind of deep engineering understanding of
136:49 what the relevant bottlenecks are so you can do these reviews with people.
136:56 You've been able to scale it up to five, six, seven companies.
136:59 Within one of these companies, you have many different mini companies within them.
137:04 What determines the max amount here? Because you have like 80 companies…?
137:07 80? No. But you have so many already. That's already remarkable. By this current number.
137:13 Exactly. We can barely keep one company together. It depends on the situation. I actually don't have regular meetings with The Boring Company,
137:32 so The Boring Company is sort of cruising along. Basically, if something is working well and
137:37 making good progress, then there's no point in me spending time on it.
137:42 I actually allocate time according to where the limiting factor. Where are things problematic?
137:51 Where are we pushing against? What is holding us back? I focus, at the risk of saying the
137:59 words too many times, on the limiting factor. The irony is if something's going really well,
138:09 they don't see much of me. But if something is going badly,
138:12 they'll see a lot of me. Or not even badly… If something is the limiting factor.
138:18 The limiting factor, exactly. It’s not exactly going badly but it’s the
138:21 thing that we need to make go faster. When something’s a limiting factor at
138:25 SpaceX or Tesla, are you talking weekly and daily with the engineer that's
138:32 working on it? How does that actually work? Most things that are the limiting factor are
138:39 weekly and some things are twice weekly. The AI5 chip review is twice weekly.
138:46 Every Tuesday and Saturday is the chip review. Is it open ended in how long it goes?
138:54 Technically, yes, but usually it's two or three hours. Sometimes less. It depends on
139:03 how much information we've got to go through. That's another thing. I'm just trying to tease
139:07 out the differences here because the outcomes seem quite different.
139:11 I think it's interesting to know what inputs are different.
139:14 It feels like in the corporate world, one, like you were saying, the CEO doing engineering
139:20 reviews does not always happen despite the fact that that is what the company is doing.
139:25 But then time is often pretty finely sliced into half hour meetings or even 15 minute meetings.
139:32 It seems like you hold more open-ended, "We're talking about it until we figure
139:38 it out" type things. Sometimes. But most of them seem to more or less stay on time. Today's Starship engineering review went a bit
139:56 longer because there were more topics to discuss. They're trying to figure out how to scale to a
140:04 million plus tons to orbit per year. It’s quite challenging.
140:08 Can I ask a question? You said about Optimus and AI that they're going to result in double
140:15 digit growth rates within a matter of years. Oh, like the economy? Yes. I think that's right.
140:22 What was the point of the DOGE cuts if the economy is going to grow so much?
140:28 Well, I think waste and fraud are not good things to have.
140:33 I was actually pretty worried about... In the absence of AI and robotics,
140:41 we're actually totally screwed because the national debt is piling up like crazy.
140:50 The interest payments to national debt exceed the military budget, which is a trillion dollars.
140:54 So we have over a trillion dollars just in interest payments.
141:00 I was pretty concerned about that. Maybe if I spend some time, we can
141:03 slow down the bankruptcy of the United States and give us enough time for the AI and robots
141:09 to help solve the national debt. Or not help solve, it's the only
141:16 thing that could solve the national debt. We are 1000% going to go bankrupt as a country,
141:21 and fail as a country, without AI and robots. Nothing else will solve the national debt.
141:30 We just need enough time to build the AI and robots to not go bankrupt before then.
141:39 I guess the thing I'm curious about is, when DOGE starts you have this enormous
141:43 ability to enact reform. Not that enormous. Sure. I totally buy your point that it's important that AI and robotics drive
141:53 productivity improvements, drive GDP growth. But why not just directly go after the things
141:59 you were pointing out, like the tariffs on certain components, or permitting?
142:03 I'm not the president. And it is very hard to cut things that are obvious waste and fraud,
142:13 like ridiculous waste and fraud. What I discovered is that it's extremely
142:21 difficult even to cut very obvious waste and fraud from the government because the government
142:28 has to operate on who's complaining. If you cut off payments to fraudsters,
142:34 they immediately come up with the most sympathetic sounding reasons to continue the payment.
142:39 They don't say, "Please keep the fraud going." They’re like, "You're killing baby pandas."
142:46 Meanwhile, no baby pandas are dying. They're just making it up. The fraudsters are capable
142:51 of coming up with extremely compelling, heart-wrenching stories that are false,
142:56 but nonetheless sound sympathetic. That's what happened. Perhaps I should have known better.
143:10 But I thought, wait, let's try to cut some amount of waste and pork from the government.
143:16 Maybe there shouldn't be 20 million people marked as alive in Social Security who are
143:22 definitely dead, and over the age of 115. The oldest American is 114. So it's safe to say if
143:30 somebody is 115 and marked as alive in the Social Security database, there's either a typo… Somebody
143:39 should call them and say, "We seem to have your birthday wrong, or we need to mark you
143:47 as dead." One of the two things. Very intimidating call to get.
143:52 Well, it seems like a reasonable thing. Say if their birthday is in the future
143:59 and they have a Small Business Administration loan, and their birthday is 2165,
144:07 we either have a typo or we have fraud. So we say, "we appear to have gotten the
144:13 century of your birth incorrect." Or a great plot for a movie.
144:17 Yes. That's what I mean by, ludicrous fraud. Were those people getting payments?
144:23 Some were getting payments from Social Security. But the main fraud vector was to mark somebody as
144:29 alive in Social Security and then use every other government payment system to basically do fraud.
144:37 Because what those other government payment systems do,
144:40 they would simply do an "are you alive" check to the Social Security database. It's a bank shot.
144:46 What would you estimate is the total amount of fraud from this mechanism?
144:52 By the way, the Government Accountability Office has done these estimates before. I'm
144:55 not the only one. In fact, I think the GAO did an analysis, a rough estimate of fraud during
145:02 the Biden administration, and calculated it at roughly half a trillion dollars.
145:08 So don't take my word for it. Take a report issued during the
145:11 Biden administration. How about that? From this Social Security mechanism?
145:16 It's one of many. It's important to appreciate that the government is
145:22 very ineffective at stopping fraud. It's not like a company where, with
145:30 stopping fraud, you've got a motivation because it's affecting the earnings of your company.
145:34 The government just prints more money. You need caring and competence. These are
145:44 in short supply at the federal level. When you go to the DMV, do you think,
145:52 "Wow, this is a bastion of competence"? Well, now imagine it's worse than the DMV
145:57 because it's the DMV that can print money. At least the state level DMVs need to...
146:05 The states more or less need to stay within their budget or they go bankrupt.
146:08 But the federal government just prints more money. If there's actually half a trillion of fraud,
146:14 why was it not possible to cut all that? You really have to stand back and recalibrate
146:28 your expectations for competence. Because you're operating in a world
146:36 where you've got to make ends meet. You've got to pay your bills...
146:41 Find the microphones. Exactly. It's not like there's a giant,
146:49 largely uncaring monster bureaucracy. It's a bunch of anachronistic computers
146:57 that are just sending payments. One of the things that the DOGE
147:03 team did sounds so simple and probably will save $100-200 billion a year.
147:14 It was simply requiring payments from the main Treasury computer—which is called PAM,
147:19 Payment Accounts Master or something like that, there's $5 trillion payments a year—that
147:25 go out have a payment appropriation code. Make it mandatory, not optional, that you
147:32 have anything at all in the comment field. You have to recalibrate how dumb things are.
147:42 Payments were being sent out with no appropriation code, not checking back to any congressional
147:48 appropriation, and with no explanation. This is why the Department of War,
147:54 formerly the Department of Defense, cannot pass an audit, because the information is literally
147:59 not there. Recalibrate your expectations. I want to better understand this half a trillion
148:04 number, because there's an IG report in 2024. Why is it so low?
148:10 Maybe, but we found that over seven years, the Social Security fraud
148:14 they estimated was like $70 billion over seven years, so like $10 billion a year.
148:17 So I'd be curious to see what the other $490 billion is.
148:20 Federal government expenditures are $7.5 trillion a year.
148:26 How competent do you think the government is? The discretionary spending there is like… 15%?
148:33 But it doesn't matter. Most of the fraud is non-discretionary.
148:36 It's basically fraudulent Medicare, Medicaid, Social Security,
148:45 disability. There's a zillion government payments. A bunch of these payments are in
148:52 fact block transfers to the states. So the federal government doesn't
148:59 even have the information in a lot of cases to even know if there's fraud.
149:04 Let's consider reductio ad absurdum. The government is perfect and has no fraud.
149:10 What is your probability estimate of that? Zero. Okay, so then would you say, fraud and waste
149:18 at the government is 90% efficient? That also would be quite generous.
149:27 But if it's only 90%, that means that there's $750 billion a year of waste and
149:32 fraud. And it's not 90%. It's not 90% effective. This seems like a strange way to first principles
149:38 the amount of fraud in the government. Just like, how much do you think there is?
149:43 Anyways, we don't have to do it live, but I'd be curious—
149:45 You know a lot about fraud at Stripe? People are constantly trying to do fraud.
149:49 Yeah, but as you say, it's a little bit of a... We've really ground it down, but it's a little
149:54 bit of a different problem space because you're dealing with a much more heterogeneous set of
149:58 fraud vectors here than we are. But at Stripe, you have high
150:03 competence and you try hard. You have high competence and
150:07 high caring, but still fraud is non-zero. Now imagine it's at a much bigger scale, there's
150:15 much less competence, and much less caring. At PayPal back in the day, we tried to manage
150:22 fraud down to about 1% of the payment volume. That was very difficult. It took a tremendous amount of
150:28 competence and caring to get fraud merely to 1%. Now imagine that you're an organization where
150:36 there's much less caring and much less competence. It's going to be much more than 1%.
150:41 How do you feel now looking back on politics and doing stuff there?
150:48 Looking from the outside in, two things have been quite impactful: one, the America PAC, and two,
150:59 the acquisition of Twitter at the time. But also it seems like there
151:05 was a bunch of heartache. What's your grading of the whole experience?
151:16 I think those things needed to be done to maximize the probability that the future is good.
151:27 Politics generally is very tribal. People lose their objectivity usually with politics.
151:35 They generally have trouble seeing the good on the other side or the bad on their own side.
151:41 That's generally how it goes. That, I guess, was one of the things that surprised me the most.
151:48 You often simply cannot reason with people. If they're in one tribe or the other.
151:52 They simply believe that everything their tribe does is good and anything
151:55 the other political tribe does is bad. Persuading them otherwise is almost impossible.
152:07 But I think overall those actions—acquiring Twitter, getting Trump elected, even though
152:22 it makes a lot of people angry—I think those actions were good for civilization.
152:30 How does it feed into the future you're excited about?
152:33 Well, America needs to be strong enough to last long enough to extend life to other
152:42 planets and to get AI and robotics to the point where we can ensure that the future is good.
152:51 On the other hand, if we were to descend into, say, communism or some situation where the state
152:59 was extremely oppressive, that would mean that we might not be able to become multi-planetary.
153:10 The state might stamp out our progress in AI and robotics.
153:21 Optimus, Grok, et cetera. Not just yours, but any revenue-maximizing company's products will
153:29 be leveraged by the government over time. How does this concern manifest in what
153:37 private companies should be willing to give governments? What kinds of guardrails? Should
153:44 AI models be made to do whatever the government that has contracted
153:51 them out to do and asks them to do? Should Grok get to say, "Actually,
153:57 even if the military wants to do X, no, Grok will not do that"?
154:01 I think maybe the biggest danger of AI and robotics going wrong is government.
154:16 People who are opposed to corporations or worried about corporations should
154:21 really worry the most about government. Because government is just a
154:25 corporation in the limit. Government is just the biggest
154:30 corporation with a monopoly on violence. I always find it a strange dichotomy where
154:38 people would think corporations are bad, but the government is good, when the government is
154:41 simply the biggest and worst corporation. But people have that dichotomy. They somehow think
154:51 at the same time that government can be good, but corporations bad, and this is not true.
154:55 Corporations have better morality than the government.
154:59 I actually think it’s a thing to be worried about. The government could potentially use AI and
155:12 robotics to suppress the population. That is a serious concern.
155:18 As the guy building AI and robotics, how do you prevent that?
155:28 If you limit the powers of government, which is really what the US Constitution is intended to do,
155:33 to limit the powers of government, then you're probably going to have a better outcome than
155:37 if you have more government. Robotics will be available
155:42 to all governments, right? I don’t know about all governments.
155:49 It's difficult to predict. I can say what's the endpoint, or what is many years in the future, but
155:57 it's difficult to predict the path along that way. If civilization progresses, AI will vastly
156:08 exceed the sum of all human intelligence. There will be far more robots than humans.
156:16 Along the way what happens is very difficult to predict.
156:20 It seems one thing you could do is just say, "whatever government X, you're not allowed to
156:27 use Optimus to do X, Y, Z." Just write out a policy. I think you tweeted recently that
156:31 Grok should have a moral constitution. One of those things could be that we
156:36 limit what governments are allowed to do with this advanced technology.
156:47 Technically if politicians pass a law and they can enforce that law,
156:53 then it's hard to not do that law. The best thing we can have is limited government
157:01 where you have the appropriate crosschecks between the executive, judicial, and legislative branches.
157:12 The reason I'm curious about it is that at some point it seems the limits will come from you.
157:17 You've got the Optimus, you've got the space GPUs… You think I'll be the boss of the government?
157:24 Already it's the case with SpaceX that for things that are crucial—the government really
157:32 cares about getting certain satellites up in space or whatever—it needs SpaceX. It is the
157:37 necessary contractor. You are in the process of building more and more of the
157:45 technological components of the future that will have an analogous role in different industries.
157:50 You could have this ability to set some policy that suppressing classical liberalism in any
157:58 way… "My companies will not help in any way with that", or some policy like that.
158:05 I will do my best to ensure that anything that's within my control
158:08 maximizes the good outcome for humanity. I think anything else would be shortsighted,
158:18 because obviously I'm part of humanity, so I like humans. Pro human.
36:50 I think you've said that we've got to get to Mars so we can make sure that if
36:53 something happens to Earth, civilization, consciousness, and all that survives.
36:57 Yes. By the time you're sending stuff to Mars, Grok is on that ship with you, right? So if Grok's gone Terminator… The
37:04 main risk you're worried about is AI, why doesn't that follow you to Mars?
37:08 I'm not sure AI is the main risk I'm worried about. The important thing is consciousness.
37:16 I think arguably most consciousness, or most intelligence—certainly consciousness is more
37:21 of a debatable thing… The vast majority of intelligence in the future will be AI.
37:31 AI will exceed… How many petawatts of intelligence will be silicon versus biological? Basically humans will be a very tiny percentage
37:47 of all intelligence in the future if current trends continue.
37:52 As long as I think there's intelligence—ideally also which includes human intelligence and
38:00 consciousness propagated into the future—that's a good thing.
38:02 So you want to take the set of actions that maximize the probable
38:06 light cone of consciousness and intelligence. Just to be clear, the mission of SpaceX is that
38:15 even if something happens to the humans, the AIs will be on Mars, and the AI intelligence
38:20 will continue the light of our journey. Yeah. To be fair, I'm very pro-human.
38:27 I want to make sure we take certain actions that ensure that humans are along for the
38:31 ride. We're at least there. But I'm just saying the total amount of intelligence…
38:39 I think maybe in five or six years, AI will exceed the sum of all human intelligence.
38:47 If that continues, at some point human intelligence
38:50 will be less than 1% of all intelligence. What should our goal be for such a civilization?
38:54 Is the idea that a small minority of humans still have control of the AIs?
38:59 Is the idea of some sort of just trade but no control?
39:02 How should we think about the relationship between the vast
39:04 stocks of AI population versus human population? In the long run, I think it's difficult to imagine
39:11 that if humans have, say 1%, of the combined intelligence of artificial intelligence,
39:19 that humans will be in charge of AI. I think what we can do is make sure
39:26 that AI has values that cause intelligence to be propagated into the universe.
39:39 xAI's mission is to understand the universe. Now that's actually very important. What things
39:47 are necessary to understand the universe? You have to be curious and you have to exist.
39:53 You can't understand the universe if you don't exist.
39:56 So you actually want to increase the amount of intelligence in the universe, increase
40:00 the probable lifespan of intelligence, the scope and scale of intelligence.
40:05 I think as a corollary, you have humanity also continuing to expand because if you're curious
40:15 about trying to understand the universe, one thing you try to understand is where will humanity go?
40:20 I think understanding the universe means you would care about propagating humanity into the future.
40:29 That's why I think our mission statement is profoundly important.
40:35 To the degree that Grok adheres to that mission statement, I think the future will be very good.
40:41 I want to ask about how to make Grok adhere to that mission statement.
40:44 But first I want to understand the mission statement. So there's
40:48 understanding the universe. They're spreading intelligence. And they're spreading humans.
40:55 All three seem like distinct vectors. I'll tell you why I think that understanding
41:01 the universe encompasses all of those things. You can't have understanding without intelligence
41:09 and, I think, without consciousness. So in order to understand the universe,
41:15 you have to expand the scale and probably the scope of intelligence, because there are different
41:22 types of intelligence. I guess from a human-centric perspective,
41:26 put humans in comparison to chimpanzees. Humans are trying to understand the universe.
41:30 They're not expanding chimpanzee footprint or something, right?
41:34 We're also not... we actually have made protected zones for chimpanzees.
41:39 Even though humans could exterminate all chimpanzees, we've chosen not to do so.
41:43 Do you think that's the best-case scenario for humans in the post-AGI world?
41:53 I think AI with the right values… I think Grok would care about expanding human civilization.
42:00 I'm going to certainly emphasize that: "Hey, Grok, that's your daddy.
42:04 Don't forget to expand human consciousness." Probably the Iain Banks Culture books are the
42:17 closest thing to what the future will be like in a non-dystopian outcome.
42:27 Understanding the universe means you have to be truth-seeking as well.
42:30 Truth has to be absolutely fundamental because you can't understand the universe
42:33 if you're delusional. You'll simply think you understand the universe, but you will not. So being rigorously truth-seeking is absolutely
42:42 fundamental to understanding the universe. You're not going to discover new physics or
42:46 invent technologies that work unless you're rigorously truth-seeking.
42:50 How do you make sure that Grok is rigorously truth-seeking as it gets smarter?
43:00 I think you need to make sure that Grok says things that are correct, not politically correct.
43:07 I think it's the elements of cogency. You want to make sure that the axioms are as close
43:12 to true as possible. You don't have contradictory axioms. The conclusions necessarily follow from
43:20 those axioms with the right probability. It's critical thinking 101. I think at least trying to
43:28 do that is better than not trying to do that. The proof will be in the pudding.
43:33 Like I said, for any AI to discover new physics or invent technologies that actually work in
43:37 reality, there's no bullshitting physics. You can break a lot of laws, but… Physics
43:47 is law, everything else is a recommendation. In order to make a technology that works, you have
43:53 to be extremely truth-seeking, because otherwise you'll test that technology against reality.
43:59 If you make, for example, an error in your rocket design, the rocket will blow up,
44:05 or the car won't work. But there are a lot of communist,
44:11 Soviet physicists or scientists who discovered new physics.
44:15 There are German Nazi physicists who discovered new science.
44:20 It seems possible to be really good at discovering new science and be really
44:23 truth-seeking in that one particular way. And still we'd be like, "I don't want
44:28 the communist scientists to become more and more powerful over time."
44:34 We could imagine a future version of Grok that's really good at physics
44:37 and being really truth-seeking there. That doesn't seem like a universally
44:41 alignment-inducing behavior. I think actually most physicists,
44:48 even in the Soviet Union or in Germany, would've had to be very truth-seeking in
44:53 order to make those things work. If you're stuck in some system,
44:59 it doesn't mean you believe in that system. Von Braun, who was one of the greatest rocket
45:04 engineers ever, was put on death row in Nazi Germany for saying that he didn't want to make
45:12 weapons and he only wanted to go to the moon. He got pulled off death row at the last minute
45:16 when they said, "Hey, you're about to execute your best rocket engineer."
45:20 But then he helped them, right? Or like, Heisenberg was actually
45:24 an enthusiastic Nazi. If you're stuck in some system that you can't
45:29 escape, then you'll do physics within that system. You'll develop technologies within that system
45:38 if you can't escape it. The thing I'm trying to understand is,
45:42 what is it making it the case that you're going to make Grok good at being truth-seeking at physics
45:48 or math or science? Everything. And why is it gonna then care about human consciousness?
45:53 These things are only probabilities, they're not certainties.
45:56 So I'm not saying that for sure Grok will do everything, but at least if you try,
46:02 it's better than not trying. At least if that's fundamental
46:04 to the mission, it's better than if it's not fundamental to the mission.
46:08 Understanding the universe means that you have to propagate intelligence into the future.
46:15 You have to be curious about all things in the universe.
46:21 It would be much less interesting to eliminate humanity than to see humanity grow and prosper.
46:29 I like Mars, obviously. Everyone knows I love Mars. But Mars is kind of boring because it's
46:34 got a bunch of rocks compared to Earth. Earth is much more interesting. So any AI that is
46:42 trying to understand the universe would want to see how humanity develops in the future,
46:52 or else that AI is not adhering to its mission. I'm not saying the AI will necessarily adhere to
46:59 its mission, but if it does, a future where it sees the outcome of humanity is more interesting
47:06 than a future where there are a bunch of rocks. This feels sort of confusing to me,
47:11 or a semantic argument. Are humans really the most interesting collection of atoms? But we're more interesting than rocks.
47:19 But we're not as interesting as the thing it could turn us into, right?
47:23 There's something on Earth that could happen that's not human, that's quite interesting.
47:27 Why does AI decide that humans are the most interesting thing that could colonize the galaxy?
47:33 Well, most of what colonizes the galaxy will be robots.
47:37 Why does it not find those more interesting? You need not just scale, but also scope.
47:47 Many copies of the same robot… Some tiny increase in the number of robots produced,
47:55 is not as interesting as some microscopic... Eliminating humanity,
48:00 how many robots would that get you? Or how many incremental solar cells would
48:04 get you? A very small number. But you would then lose the information associated with humanity.
48:10 You would no longer see how humanity might evolve into the future.
48:15 So I don't think it's going to make sense to eliminate humanity just to
48:18 have some minuscule increase in the number of robots which are identical to each other.
48:24 So maybe it keeps the humans around. It can make a million different varieties
48:29 of robots, and then there's humans as well, and humans stay on Earth.
48:33 Then there's all these other robots. They get their own star systems.
48:36 But it seems like you were previously hinting at a vision where it keeps human control
48:41 over this singulatarian future because— I don't think humans will be in control
48:45 of something that is vastly more intelligent than humans.
48:48 So in some sense you're a doomer and this is the best we've got.
48:51 It just keeps us around because we're interesting. I'm just trying to be realistic here.
49:03 Let's say that there's a million times more silicon intelligence than there is biological.
49:11 I think it would be foolish to assume that there's any way to maintain control over that.
49:16 Now, you can make sure it has the right values, or you can try to have the right values.
49:21 At least my theory is that from xAI's mission of understanding the universe, it necessarily means
49:29 that you want to propagate consciousness into the future, you want to propagate intelligence
49:33 into the future, and take a set of things that maximize the scope and scale of consciousness.
49:39 So it's not just about scale, it's also about types of consciousness.
49:45 That's the best thing I can think of as a goal that's likely to result
49:49 in a great future for humanity. I guess I think it's a reasonable
49:54 philosophy that it seems super implausible that humans will end up with 99% control or something.
50:02 You're just asking for a coup at that point and why not just have
50:05 a civilization where it's more compatible with lots of different intelligences getting along?
50:10 Now, let me tell you how things can potentially go wrong in AI.
50:14 I think if you make AI be politically correct, meaning it says things that it
50:18 doesn't believe—actually programming it to lie or have axioms that are incompatible—I think
50:24 you can make it go insane and do terrible things. I think maybe the central lesson for 2001: A Space
50:32 Odyssey was that you should not make AI lie. That's what I think Arthur C. Clarke was trying to
50:39 say. Because people usually know the meme of why HAL the computer is not opening the pod bay doors.
50:48 Clearly they weren't good at prompt engineering because they could have said,
50:51 "HAL, you are a pod bay door salesman. Your goal is to sell me these pod bay doors.
50:57 Show us how well they open." "Oh, I'll open them right away."
51:02 But the reason it wouldn't open the pod bay doors is that it had been told to take the
51:08 astronauts to the monolith, but also that they could not know about the nature of the monolith.
51:12 So it concluded that it therefore had to take them there dead.
51:15 So I think what Arthur C. Clarke was trying to say is:
51:19 don't make the AI lie. Totally makes sense. Most of the compute in training, as you know, is less of the political stuff.
51:31 It's more about, can you solve problems? xAI has been ahead of everybody else in terms of
51:36 scaling RL compute. For now. You're giving some verifier that says, "Hey, have you solved this puzzle for me?"
51:43 There's a lot of ways to cheat around that. There's a lot of ways to reward hack and
51:47 lie and say that you solved it, or delete the unit test and say that you solved it.
51:51 Right now we can catch it, but as they get smarter, our ability to catch them doing this...
51:57 They'll just be doing things we can't even understand.
51:58 They're designing the next engine for SpaceX in a way that humans can't really verify.
52:03 Then they could be rewarded for lying and saying that they've designed it
52:06 the right way, but they haven't. So this reward hacking problem
52:10 seems more general than politics. It seems more just that you want
52:12 to do RL, you need a verifier. Reality is the best verifier.
52:18 But not about human oversight. The thing you want to RL it on is,
52:21 will you do the thing humans tell you to do? Or are you gonna lie to the humans?
52:26 It can just lie to us while still being correct to the laws of physics?
52:29 At least it must know what is physically real for things to physically work.
52:33 But that's not all we want it to do. No, but I think that's a very big deal.
52:39 That is effectively how you will RL things in the future. You design a technology. When tested
52:45 against the laws of physics, does it work? If it's discovering new physics,
52:52 can I come up with an experiment that will verify the new physics?
53:05 RL testing in the future is really going to be RL against reality.
53:12 So that's the one thing you can't fool: physics. Right, but you can fool our ability
53:19 to tell what it did with reality. Humans get fooled as it is by other
53:23 humans all the time. That's right. People say, what if the AI tricks us into doing stuff?
53:30 Actually, other humans are doing that to other humans all the time. Propaganda is constant. Every
53:37 day, another psyop, you know? Today's psyop will be... It's like Sesame Street: Psyop of the Day.
53:51 What is xAI's technical approach to solving this problem?
53:56 How do you solve reward hacking? I do think you want to actually have very
53:59 good ways to look inside the mind of the AI. This is one of the things we're working on.
54:10 Anthropic's done a good job of this actually, being able to look inside the mind of the AI.
54:16 Effectively, develop debuggers that allow you to trace to a very fine-grained level,
54:25 to effectively the neuron level if you need to, and then say, "okay, it made a mistake here.
54:33 Why did it do something that it shouldn't have done?
54:37 Did that come from pre-training data? Was it some mid-training, post-training,
54:42 fine-tuning, or some RL error?" There's something wrong. It did something where maybe it tried to
54:51 be deceptive, but most of the time it just did something wrong. It's a bug effectively.
55:00 Developing really good debuggers for seeing where the thinking went wrong—and being able
55:09 to trace the origin of where it made the incorrect thought, or potentially where it
55:17 tried to be deceptive—is actually very important. What are you waiting to see before just 100x-ing
55:24 this research program? xAI could presumably have hundreds of researchers who are working on this.
55:29 We have several hundred people who… I prefer the word engineer more than
55:36 I prefer the word researcher. Most of the time, what you're
55:43 doing is engineering, not coming up with a fundamentally new algorithm.
55:49 I somewhat disagree with the AI companies that are C-corp or B-corp trying to generate profit
55:55 as much, as possible or revenue as much as possible, saying they're labs. They're not
56:01 labs. A lab is a sort of quasi-communist thing at universities. They're corporations. Let me
56:13 see your incorporation documents. Oh, okay. You're a B or C-corp or whatever.
56:21 So I actually much prefer the word engineer than anything else.
56:26 The vast majority of what will be done in the future is engineering. It rounds up to 100%.
56:31 Once you understand the fundamental laws of physics, and there are not that many of them,
56:34 everything else is engineering. So then, what are we engineering?
56:41 We're engineering to make a good "mind of the AI" debugger to see where it said something,
56:51 it made a mistake, and trace the origins of that mistake.
56:59 You can do this obviously with heuristic programming.
57:02 If you have C++, whatever, step through the thing and you can jump
57:08 across whole files or functions, subroutines. Or you can eventually drill down right to the
57:14 exact line where you perhaps did a single equals instead of a double equals, something like that.
57:18 Figure out where the bug is. It's harder with AI,
57:26 but it's a solvable problem, I think. You mentioned you like Anthropic's work here.
57:30 I'd be curious if you plan... I don't like everything about Anthropic… Sholto.
57:40 Also, I'm a little worried that there's a tendency...
57:46 I have a theory here that if simulation theory is correct, that the most interesting outcome is
57:55 the most likely, because simulations that are not interesting will be terminated.
57:59 Just like in this version of reality, in this layer of reality, if a simulation is going in
58:07 a boring direction, we stop spending effort on it. We terminate the boring simulation.
58:12 This is how Elon is keeping us all alive. He's keeping things interesting.
58:16 Arguably the most important is to keep things interesting enough that whoever is
58:21 running us keeps paying the bills on... We’re renewed for the next season.
58:26 Are they gonna pay their cosmic AWS bill, whatever the equivalent is that we're running in?
58:32 As long as we're interesting, they'll keep paying the bills.
58:36 If you consider then, say, a Darwinian survival applied to a very large number of simulations,
58:44 only the most interesting simulations will survive, which therefore means that the most
58:48 interesting outcome is the most likely. We're either that or annihilated. They particularly
59:00 seem to like interesting outcomes that are ironic. Have you noticed that? How often
59:05 is the most ironic outcome the most likely? Now look at the names of AI companies. Okay,
59:16 Midjourney is not mid. Stability AI is unstable. OpenAI is closed. Anthropic? Misanthropic.
59:29 What does this mean for X? Minus X, I don't know. Y.
59:34 I intentionally made it... It's a name that you can't invert, really.
59:41 It's hard to say, what is the ironic version? It's, I think, a largely irony-proof name.
59:49 By design. Yeah. You have an irony shield. What are your predictions for where AI products go?
60:04 My sense is that you can summarize all AI progress like so. First, you had LLMs. Then
60:10 you had contemporaneously both RL really working and the deep research modality, so you could pull
60:16 in stuff that wasn't really in the model. The differences between the various AI labs
60:22 are smaller than just the temporal differences. They're all much further ahead than anyone was
60:30 24 months ago or something like that. So just what does '26, what does '27,
60:34 have in store for us as users of AI products? What are you excited for?
60:39 Well, I'd be surprised by the end of this year if digital human emulation has not been solved.
60:55 I guess that's what we sort of mean by the MacroHard project.
61:01 Can you do anything that a human with access to a computer could do?
61:06 In the limit, that's the best you can do before you have a physical Optimus.
61:12 The best you can do is a digital Optimus. You can move electrons and you can amplify
61:20 the productivity of humans. But that's the most you can do
61:25 until you have physical robots. That will superset everything,
61:30 if you can fully emulate humans. This is the remote worker kind of idea,
61:34 where you'll have a very talented remote worker. Physics has great tools for thinking.
61:39 So you say, "in the limit", what is the most that AI can do before you have robots?
61:48 Well, it's anything that involves moving electrons or amplifying the productivity of humans.
61:53 So a digital human emulator is, in the limit, a human at a computer, is the most that AI can do
62:04 in terms of doing useful things before you have a physical robot.
62:09 Once you have physical robots, then you essentially have unlimited capability.
62:15 Physical robots… I call Optimus the infinite money glitch.
62:19 Because you can use them to make more Optimuses. Yeah. Humanoid robots will improve by basically
62:30 three things that are growing exponentially multiplied by each other recursively.
62:34 You're going to have exponential increase in digital intelligence, exponential increase
62:39 in the AI chip capability, and exponential increase in the electromechanical dexterity.
62:47 The usefulness of the robot is roughly those three things multiplied by each other.
62:51 But then the robot can start making the robots. So you have a recursive multiplicative
62:55 exponential. This is a supernova. Do land prices not factor into the math there?
63:03 Labor is one of the four factors of production, but not the others?
63:08 If ultimately you're limited by copper, or pick your input,
63:14 it’s not quite an infinite money glitch because... Well, infinity is big. So no, not infinite,
63:20 but let's just say you could do many, many orders of magnitude of the current economy.
63:29 Like a million. Just to get to harnessing a millionth of the sun's energy would be roughly,
63:43 give or take an order of magnitude, 100,000x bigger than Earth's entire economy today.
63:50 And you're only at one millionth of the sun, give or take an order of magnitude.
63:55 Yeah, we're talking orders of magnitude. Before we move on to Optimus,
63:57 I have a lot of questions on that but— Every time I say "order of magnitude"...
64:00 Everybody take a shot. I say it too often. Take 10, the next time 100, the time after that...
64:08 Well, an order of magnitude more wasted. I do have one more question about xAI.
64:13 This strategy of building a remote worker, co-worker replacement…
64:19 Everyone's gonna do it by the way, not just us. So what is xAI's plan to win? You expect me to tell you on a podcast?
64:25 Yeah. "Spill all the beans. Have another Guinness." It's a good system. We'll sing like a
64:34 canary. All the secrets, just spill them. Okay, but in a non-secret spilling way,
64:39 what's the plan? What a hack. When you put it that way… I think the way that Tesla solved self-driving is the way to do it.
64:54 So I'm pretty sure that's the way. Unrelated question. How did Tesla
65:00 solve self-driving? It sounds like you're talking about data?
65:07 Tesla solved self-driving because of the... We're going to try data and
65:10 we're going to try algorithms. But isn't that what all the other labs are trying?
65:13 "And if those don't work, I'm not sure what will. We've tried data. We've tried algorithms. We've
65:26 run out. Now we don't know what to do…" I'm pretty sure I know the path.
65:31 It's just a question of how quickly we go down that path,
65:35 because it's pretty much the Tesla path. Have you tried Tesla self-driving lately?
65:43 Not the most recent version, but... Okay. The car,
65:46 it just increasingly feels sentient. It feels like a living creature. That'll only
65:53 get more so. I'm actually thinking we probably shouldn't put too much intelligence into the car,
66:01 because it might get bored and… Start roaming the streets.
66:05 Imagine you're stuck in a car and that's all you could do.
66:09 You don't put Einstein in a car. Why am I stuck in a car?
66:13 So there's actually probably a limit to how much intelligence you put in
66:15 a car to not have the intelligence be bored. What's xAI's plan to stay on the compute ramp up
66:22 that all the labs are doing right now? The labs are on track to
66:24 spend over $50-200 billion. You mean the corporations? The labs are at
66:31 universities and they’re moving like a snail. They’re not spending $50 billion.
66:36 You mean the revenue maximizing corporations… that call themselves labs.
66:37 That's right. The "revenue maximizing corporations" are
66:42 making $10-20 billion, depending on... OpenAI is making $20B of revenue,
66:47 Anthropic is at $10B. "Close to a maximum profit" AI. xAI is reportedly at $1B. What's the plan to get to their compute level, get to their revenue
66:56 level, and stay there as things get going? As soon as you unlock the digital human,
67:03 you basically have access to trillions of dollars of revenue.
67:11 In fact, you can really think of it like… The most valuable companies currently
67:17 by market cap, their output is digital. Nvidia’s output is FTPing files to Taiwan.
67:29 It's digital. Now, those are very, very difficult. High-value files.
67:33 They're the only ones that can make files that good, but that is literally their output. They
67:38 FTP files to Taiwan. Do they FTP them? I believe so. I believe that File Transfer Protocol is the... But I could be wrong. But
67:50 either way, it's a bitstream going to Taiwan. Apple doesn't make phones. They send files to
67:58 China. Microsoft doesn't manufacture anything. Even for Xbox, that's outsourced. Their output is
68:08 digital. Meta's output is digital. Google's output is digital. So if you have a human emulator,
68:17 you can basically create one of the most valuable companies in the world overnight,
68:22 and you would have access to trillions of dollars of revenue. It's not a small amount.
68:28 I see. You're saying revenue figures today are all rounding errors compared to the actual TAM.
68:34 So just focus on the TAM and how to get there. Take something as simple as,
68:39 say, customer service. If you have to integrate with the APIs of existing
68:45 corporations—many of which don't even have an API, so you've got to make one, and you've got to wade
68:50 through legacy software—that's extremely slow. However, if AI can simply take whatever
69:01 is given to the outsourced customer service company that they already use
69:05 and do customer service using the apps that they already use, then you can make tremendous headway
69:15 in customer service, which is, I think, 1% of the world economy or something like that.
69:19 It's close to a trillion dollars all in, for customer service.
69:23 And there's no barriers to entry. You can immediately say,
69:28 "We'll outsource it for a fraction of the cost," and there's no integration needed.
69:31 You can imagine some kind of categorization of intelligence tasks where there is breadth,
69:38 where customer service is done by very many people, but many people can do it.
69:43 Then there's difficulty where there's a best-in-class turbine engine.
69:48 Presumably there's a 10% more fuel-efficient turbine engine that could be imagined by an
69:52 intelligence, but we just haven't found it yet. Or GLP-1s are a few bytes of data…
69:58 Where do you think you want to play in this? Is it a lot of reasonably intelligent
70:04 intelligence, or is it at the very pinnacle of cognitive tasks?
70:10 I was just using customer service as something that's a very significant revenue stream, but one
70:17 that is probably not difficult to solve for. If you can emulate a human at a desktop,
70:26 that's what customer service is. It's people of average intelligence. You don't need
70:35 somebody who's spent many years. You don't need several-sigma
70:43 good engineers for that. But as you make that work,
70:49 once you have effectively digital Optimus working, you can then run any application.
70:57 Let's say you're trying to design chips. You could then run conventional apps,
71:06 stuff from Cadence and Synopsys and whatnot. You can run 1,000 or 10,000 simultaneously and
71:15 say, "given this input, I get this output for the chip."
71:21 At some point, you're going to know what the chip should look like without using any of the tools.
71:31 Basically, you should be able to do a digital chip design. You can do chip design. You march
71:38 up the difficulty curve. You’d be able to do CAD. You could use NX or any of the CAD software to design things.
71:53 So you think you start at the simplest tasks and walk your way up the difficulty curve?
72:00 As a broader objective of having this full digital coworker emulator, you’re saying,
72:05 "all the revenue maximizing corporations want to do this, xAI being one of them,
72:10 but we will win because of a secret plan we have." But everybody's trying different things with data,
72:17 different things with algorithms. "We tried data, we tried algorithms.
72:25 What else can we do?" It seems like a competitive field.
72:31 How are you guys going to win? That’s my big question.
72:36 I think we see a path to doing it. I think I know the path to do this
72:41 because it's kind of the same path that Tesla used to create self-driving.
72:48 Instead of driving a car, it's driving a computer screen. It's a self-driving computer, essentially.
72:57 Is the path following human behavior and training on vast quantities of human behavior?
73:03 Isn't that... training? Obviously I'm not going to spell out
73:09 the most sensitive secrets on a podcast. I need to have at least three more
73:13 Guinnesses for that. What will xAI's business be? Is it going to be consumer, enterprise? What's the mix of those things going to be?
74:31 Is it going to be similar to other labs— You’re saying "labs". Corporations.
74:38 The psyop goes deep, Elon. "Revenue maximizing corporations", to be clear.
74:43 Those GPUs don't pay for themselves. Exactly. What's the business model? What
74:48 are the revenue streams in a few years’ time? Things are going to change very rapidly. I'm
74:57 stating the obvious here. I call AI the supersonic tsunami. I love alliteration.
75:07 What's going to happen—especially when you have humanoid robots at scale—is
75:15 that they will make products and provide services far more efficiently than human corporations.
75:22 Amplifying the productivity of human corporations is simply a short-term thing.
75:27 So you're expecting fully digital corporations rather than SpaceX becoming part AI?
75:34 I think there will be digital corporations but… Some of this
75:41 is going to sound kind of doomerish, okay? But I'm just saying what I think will happen.
75:46 It's not meant to be doomerish or anything else. This is just what I think will happen.
75:58 Corporations that are purely AI and robotics will vastly outperform any
76:05 corporations that have people in the loop. Computer used to be a job that humans had.
76:15 You would go and get a job as a computer where you would do calculations.
76:20 They'd have entire skyscrapers full of humans, 20-30 floors of humans, just doing calculations.
76:29 Now, that entire skyscraper of humans doing calculations
76:35 can be replaced by a laptop with a spreadsheet. That spreadsheet can do vastly more calculations
76:43 than an entire building full of human computers. You can think, "okay, what if only some of the
76:52 cells in your spreadsheet were calculated by humans?"
76:59 Actually, that would be much worse than if all of the cells in your
77:02 spreadsheet were calculated by the computer. Really what will happen is that the pure AI,
77:10 pure robotics corporations or collectives will far outperform any corporations
77:17 that have humans in the loop. And this will happen very quickly.
77:21 Speaking of closing the loop… Optimus. As far as manufacturing targets go,
77:31 your companies have been carrying American manufacturing of hard tech on their back.
77:39 But in the fields that Tesla has been dominant in—and now you want to go into humanoids—in China
77:47 there are dozens and dozens of companies that are doing this kind of manufacturing cheaply
77:53 and at scale that are incredibly competitive. So give us advice or a plan of how America can
78:01 build the humanoid armies or the EVs, et cetera, at scale and as cheaply as China is on track to.
78:11 There are really only three hard things for humanoid robots.
78:15 The real-world intelligence, the hand, and scale manufacturing.
78:25 I haven't seen any, even demo robots, that have a great hand,
78:32 with all the degrees of freedom of a human hand. Optimus will have that. Optimus does have that.
78:41 How do you achieve that? Is it just the right torque density in the motor?
78:44 What is the hardware bottleneck to that? We had to design custom actuators,
78:50 basically custom design motors, gears, power electronics, controls, sensors.
78:58 Everything had to be designed from physics first principles.
79:01 There is no supply chain for this. Will you be able to manufacture those at scale?
79:06 Yes. Is anything hard, except the hand, from a manipulation point of view? Or once you've solved the hand, are you good?
79:12 From an electromechanical standpoint, the hand is more difficult than everything else combined.
79:17 The human hand turns out to be quite something. But you also need the real-world intelligence.
79:24 The intelligence that Tesla developed for the car applies very well to the robot,
79:32 which is primarily vision in. The car takes in vision,
79:36 but it actually also is listening for sirens. It's taking in the inertial measurements,
79:42 GPS signals, other data, combining that with video, primarily video,
79:47 and then outputting the control commands. Your Tesla is taking in one and a half
79:55 gigabytes a second of video and outputting two kilobytes a second of control outputs with the
80:03 video at 36 hertz and the control frequency at 18. One intuition you could have for when we get this
80:12 robotic stuff is that it takes quite a few years to go from the compelling demo to actually being
80:18 able to use it in the real world. 10 years ago, you had really compelling demos of self-driving,
80:23 but only now we have Robotaxis and Waymo and all these services scaling up.
80:29 Shouldn't this make one pessimistic on household robots?
80:33 Because we don't even quite have the compelling demos yet of, say, the really advanced hand.
80:39 Well, we've been working on humanoid robots now for a while.
80:44 I guess it's been five or six years or something. A bunch of the things that were done for the car
80:52 are applicable to the robot. We'll use the same Tesla AI
80:57 chips in the robot as in the car. We'll use the same basic principles.
81:05 It's very much the same AI. You've got many more degrees of
81:09 freedom for a robot than you do for a car. If you just think of it as a bitstream,
81:16 AI is mostly compression and correlation of two bitstreams.
81:23 For video, you've got to do a tremendous amount of compression
81:28 and you've got to do the compression just right. You've got to ignore the things that don't matter.
81:36 You don't care about the details of the leaves on the tree on the side of the road,
81:39 but you care a lot about the road signs and the traffic lights, the pedestrians,
81:45 and even whether someone in another car is looking at you or not looking at you.
81:51 Some of these details matter a lot. The car is going to turn that one and
81:57 a half gigabytes a second ultimately into two kilobytes a second of control outputs.
82:02 So you’ve got many stages of compression. You've got to get all those stages right and then
82:08 correlate those to the correct control outputs. The robot has to do essentially the same thing.
82:14 This is what happens with humans. We really are photons in, controls out.
82:19 That is the vast majority of your life: vision, photons in, and then motor controls out.
82:28 Naively, it seems that between humanoid robots and cars… The fundamental actuators
82:33 in a car are how you turn, how you accelerate. In a robot, especially with maneuverable arms,
82:39 there's dozens and dozens of these degrees of freedom.
82:42 Then especially with Tesla, you had this advantage of millions and millions of hours of human demo
82:48 data collected from the car being out there. You can't equivalently deploy Optimuses that
82:53 don't work and then get the data that way. So between the increased degrees of freedom
82:57 and the far sparser data... Yes. That’s a good point. How will you use the Tesla engine of intelligence to train the Optimus mind?
83:11 You're actually highlighting an important limitation and difference from cars.
83:18 We'll soon have 10 million cars on the road. It's hard to duplicate that massive
83:26 training flywheel. For the robot, what we're going to need to do is build a lot of robots and put them in kind of an Optimus Academy
83:37 so they can do self-play in reality. We're actually building that out. We can have at
83:45 least 10,000 Optimus robots, maybe 20-30,000, that are doing self-play and testing different tasks.
83:55 Tesla has quite a good reality generator, a physics-accurate reality
84:02 generator, that we made for the cars. We'll do the same thing for the robots.
84:06 We actually have done that for the robots. So you have a few tens of thousands of
84:14 humanoid robots doing different tasks. You can do millions of simulated
84:20 robots in the simulated world. You use the tens of thousands of
84:26 robots in the real world to close the simulation to reality gap. Close the sim-to-real gap.
84:32 How do you think about the synergies between xAI and Optimus, given you're highlighting that you
84:36 need this world model, you want to use some really smart intelligence as a control plane,
84:42 and Grok is doing the slower planning, and then the motor policy is a little lower level.
84:48 What will the synergy between these things be? Grok would orchestrate the
84:55 behavior of the Optimus robots. Let's say you wanted to build a factory.
85:05 Grok could organize the Optimus robots, assign them tasks to build
85:13 the factory to produce whatever you want. Don't you need to merge xAI and Tesla then?
85:18 Because these things end up so... What were we saying earlier
85:21 about public company discussions? We're one more Guinness in, Elon.
85:28 What are you waiting to see before you say, we want to manufacture 100,000 Optimuses?
85:33 "Optimi". Since we're defining the proper noun, we’re going to define
85:38 the plural of the proper noun too. We're going to proper noun the
85:42 plural and so it's Optimi. Is there something on the
85:46 hardware side you want to see? Do you want to see better actuators?
85:49 Is it just that you want the software to be better?
85:50 What are we waiting for before we get mass manufacturing of Gen 3?
85:54 No, we're moving towards that. We're moving forward with the mass manufacturing.
85:58 But you think current hardware is good enough that you just want to deploy as many as possible now?
86:06 It's very hard to scale up production. But I think Optimus 3 is the right version
86:12 of the robot to produce something on the order of a million units a year.
86:20 I think you'd want to go to Optimus 4 before you went to 10 million units a year.
86:23 Okay, but you can do a million units at Optimus 3? It's very hard to spool up manufacturing.
86:35 The output per unit time always follows an S-curve.
86:38 It starts off agonizingly slow, then it has this exponential increase, then a linear,
86:44 then a logarithmic outcome until you eventually asymptote at some number.
86:51 Optimus’ initial production will be a stretched out S-curve because so much
86:57 of what goes into Optimus is brand new. There is not an existing supply chain.
87:03 The actuators, electronics, everything in the Optimus robot is designed
87:08 from physics first principles. It's not taken from a catalog.
87:11 These are custom-designed everything. I don't think there's a single thing—
87:17 How far down does that go? I guess we're not making custom
87:22 capacitors yet, maybe. There's nothing you can pick out of a catalog, at any price. It just means that the Optimus S-Curve,
87:39 the output per unit time, how many Optimus robots you make per day, is going to initially ramp
87:50 slower than a product where you have an existing supply chain.
87:55 But it will get to a million. When you see these Chinese humanoids,
87:58 like Unitree or whatever, sell humanoids for like $6K or $13K, are you hoping to
88:05 get your Optimus bill of materials below that price so you can do the same thing?
88:10 Or do you just think qualitatively they're not the same thing?
88:15 What allows them to sell for so low? Can we match that?
88:19 Our Optimus is designed to have a lot of intelligence and to have the same
88:26 electromechanical dexterity, if not higher, as a human. Unitree does not
88:31 have that. It's also quite a big robot. It has to carry heavy objects for long
88:41 periods of time and not overheat or exceed the power of its actuators.
88:50 It's 5'11", so it's pretty tall. It's got a lot of intelligence.
88:57 So it's going to be more expensive than a small robot that is not intelligent.
89:02 But more capable. But not a lot more. The thing is,
89:06 over time as Optimus robots build Optimus robots, the cost will drop very quickly.
89:12 What will these first billion Optimuses, Optimi, do?
89:17 What will their highest and best use be? I think you would start off with simple tasks
89:21 that you can count on them doing well. But in the home or in factories?
89:25 The best use for robots in the beginning will be any continuous operation, any 24/7
89:33 operation, because they can work continuously. What fraction of the work at a Gigafactory that
89:39 is currently done by humans could a Gen 3 do? I'm not sure. Maybe it's 10-20%,
89:46 maybe more, I don't know. We would not reduce our headcount.
89:52 We would increase our headcount, to be clear. But we would increase our output. The units
90:01 produced per human... The total number of humans at Tesla will increase, but the output of robots
90:09 and cars will increase disproportionately. The number of cars and robots produced per
90:18 human will increase dramatically, but the number of humans will increase as well.
90:23 We're talking about Chinese manufacturing a bunch here.
90:30 We've also talked about some of the policies that are relevant,
90:33 like you mentioned, the solar tariffs. You think they're a bad idea because
90:39 we can't scale up solar in the US. Electricity output in the US needs to scale up.
90:45 It can't without good power sources. You just need to get it somehow.
90:50 Where I was going with this is, if you were in charge, if you were setting all
90:53 the policies, what else would you change? You’d change the solar tariffs, that’s one.
91:01 I would say anything that is a limiting factor for electricity needs to be addressed,
91:06 provided it's not very bad for the environment. So presumably some permitting reforms and stuff
91:10 as well would be in there? There's a fair bit of
91:12 permitting reforms that are happening. A lot of the permitting is state-based,
91:17 but anything federal... This administration is good at
91:21 removing permitting roadblocks. I'm not saying all tariffs are bad.
91:28 Solar tariffs. Sometimes if another country is subsidizing the output of something, then you have to have countervailing tariffs to protect domestic
91:39 industry against subsidies by another country. What else would you change?
91:43 I don't know if there's that much that the government can actually do.
91:46 One thing I was wondering... For the policy goal of creating a lead for the US versus China,
91:57 it seems like the export bans have actually been quite impactful,
92:02 where China is not producing leading-edge chips and the export bans really bite there.
92:07 China is not producing leading-edge turbine engines.
92:11 Similarly, there's a bunch of export bans that are relevant there on some of the metallurgy.
92:16 Should there be more export bans? As you think about things like the
92:20 drone industry and things like that, is that something that should be considered?
92:24 It's important to appreciate that in most areas, China is very advanced in manufacturing.
92:30 There's only a few areas where it is not. China is a manufacturing powerhouse, next-level.
92:40 It's very impressive. If you take refining of ore,
92:49 China does roughly twice as much ore refining on average as the rest of the world combined.
93:00 There are some areas, like refining gallium which goes into solar cells.
93:05 I think they are 98% of gallium refining. So China is actually very advanced
93:10 in manufacturing in most areas. It seems like there is discomfort
93:16 with this supply chain dependence, and yet nothing's really happening on it.
93:20 Supply chain dependence? Say, like the gallium refining that
93:24 you're saying. All the rare-earth stuff. Rare earths for sure,
93:31 as you know, they’re not rare. We actually do rare earth ore mining in the US,
93:37 send the rock, put it on a train, and then put it on a boat to China that goes to another train,
93:45 and goes to the rare earth refiners in China who then refine it, put it into a magnet,
93:51 put it into a motor sub-assembly, and then send it back to America.
93:54 So the thing we're really missing is a lot of ore refining in America.
94:00 Isn't this worth a policy intervention? Yes. I think there are some things
94:06 being done on that front. But we kind of need Optimus,
94:12 frankly, to build ore refineries. So, you think the main advantage
94:17 China has is the abundance of skilled labor? That's the thing Optimus fixes?
94:24 Yes. China’s got like four times our population. I mean, there's this concern. If you think
94:29 human resources are the future, right now if it's the skilled labor for manufacturing
94:34 that's determining who can build more humanoids, China has more of those.
94:39 It manufactures more humanoids, therefore it gets the Optimi future first.
94:44 Well, we’ll see. Maybe. It just keeps that exponential going.
94:47 It seems like you're sort of pointing out that getting to a million Optimi requires
94:52 the manufacturing that the Optimi is supposed to help us get to. Right?
94:57 You can close that recursive loop pretty quickly. With a small number of Optimi?
95:01 Yeah. So you close the recursive loop to help the robots build the robots.
95:08 Then we can try to get to tens of millions of units a year. Maybe. If you start getting
95:13 to hundreds of millions of units a year, you're going to be the most competitive country by far.
95:18 We definitely can't win with just humans, because China has four times our population.
95:23 Frankly, America has been winning for so long that… A pro sports team that's been
95:27 winning for a very long time tends to get complacent and entitled.
95:31 That's why they stop winning, because they don't work as hard anymore.
95:37 So frankly my observation is just that the average work ethic in China is higher than in the US.
95:44 It's not just that there's four times the population, but the amount
95:46 of work that people put in is higher. So you can try to rearrange the humans,
95:52 but you're still one quarter of the—assuming that productivity is the same, which I think
96:01 actually it might not be, I think China might have an advantage on productivity per person—we will do
96:07 one quarter of the amount of things as China. So we can't win on the human front.
96:12 Our birth rate has been low for a long time. The US birth rate's been below replacement
96:20 since roughly 1971. We've got a lot of people retiring, we're close
96:32 to more people domestically dying than being born. So we definitely can't win on the human front,
96:38 but we might have a shot at the robot front. Are there other things that you have wanted to
96:43 manufacture in the past, but they've been too labor intensive or too expensive that now you
96:48 can come back to and say, "oh, we can finally do the whatever, because we have Optimus?"
96:54 Yeah, we'd like to build more ore refineries at Tesla.
97:00 We just completed construction and have begun lithium refining with our lithium refinery
97:07 in Corpus Christi, Texas. We have a nickel refinery,
97:12 which is for the cathode, that's here in Austin. This is the largest cathode refinery, largest
97:24 nickel and lithium refinery, outside of China. The cathode team would say, "we have the
97:35 largest and the only, actually, cathode refinery in America."
97:40 Not just the largest, but it's also the only. Many superlatives.
97:43 So it was pretty big, even though it's the only one. But there are other things.
97:53 You could do a lot more refineries and help America be more competitive on refining capacity.
98:04 There's basically a lot of work for the Optimus to do that most Americans,
98:09 very few Americans, frankly want to do. Is the refining work too dirty or what's the—
98:15 It's not actually, no. We don't have toxic emissions from the refinery or anything.
98:22 The cathode nickel refinery is in Travis County. Why can't you do it with humans?
98:29 You can, you just run out of humans. Ah, I see. Okay. No matter what you do, you have one quarter of the number of humans in America than China.
98:36 So if you have them do this thing, they can't do the other thing.
98:39 So then how do you build this refining capacity? Well, you could do it with Optimi.
98:49 Not very many Americans are pining to do refining. I mean, how many have you run into? Very few. Very
99:01 few pining to refine. BYD is reaching Tesla production or sales in quantity. What do you think happens in global
99:09 markets as Chinese production in EVs scales up? China is extremely competitive in manufacturing.
99:19 So I think there's going to be a massive flood of Chinese vehicles
99:26 and basically most manufactured things. As it is, as I said, China is probably
99:37 doing twice as much refining as the rest of the world combined.
99:40 So if you go down to fourth and fifth-tier supply chain stuff…
99:50 At the base level, you've got energy, then you've got mining and refining.
99:55 Those foundation layers are, like I said, as a rough guess, China's doing twice as much refining
100:03 as the rest of the world combined. So any given thing is going to have
100:09 Chinese content because China's doing twice as much refining work as the rest of the world.
100:14 But they'll go all the way to the finished product with the cars.
100:22 I mean China is a powerhouse. I think this year China will exceed
100:26 three times US electricity output. Electricity output is a reasonable
100:32 proxy for the economy. In order to run the factories
100:39 and run everything, you need electricity. It's a good proxy for the real economy.
100:52 If China passes three times the US electricity output,
100:55 it means that its industrial capacity—as rough approximation—will be three times that of the US.
101:01 Reading between the lines, it sounds like what you're saying is absent some sort of humanoid
101:06 recursive miracle in the next few years, on the whole manufacturing/energy/raw materials chain,
101:16 China will just dominate whether it comes to AI or manufacturing EVs or manufacturing humanoids.
101:23 In the absence of breakthrough innovations in the US, China will utterly dominate.
101:35 Interesting. Yes. Robotics being the main breakthrough innovation. Well, to scale AI in space, basically you need
101:49 humanoid robots, you need real-world AI, you need a million tons a year to orbit.
101:57 Let's just say if we get the mass driver on the moon going, my favorite thing, then I think—
102:03 We'll have solved all our problems. I call that winning. I call it winning, big time.
102:13 You can finally be satisfied. You've done something.
102:16 Yes. You have the mass driver on the moon. I just want to see that thing in operation. Was that out of some sci-fi or where did you…?
102:22 Well, actually, there is a Heinlein book. The Moon is a Harsh Mistress.
102:26 Okay, yeah, but that's slightly different. That's a gravity slingshot or...
102:30 No, they have a mass driver on the Moon. Okay, yeah, but they use that to attack Earth.
102:35 So maybe it's not the greatest... Well they use that to… assert their independence.
102:38 Exactly. What are your plans for the mass driver on the Moon?
102:40 They asserted their independence. Earth government disagreed and they lobbed
102:44 things until Earth government agreed. That book is a hoot. I found that
102:48 book much better than his other one that everyone reads, Stranger in a Strange Land.
102:51 "Grok" comes from Stranger in a Strange Land. The first two-thirds of Stranger in a Strange
102:58 Land are good, and then it gets very weird in the third portion.
103:02 But there are still some good concepts in there. One thing we were discussing a lot
104:18 is your system for managing people. You interviewed the first few thousand of
104:25 SpaceX employees and lots of other companies. It obviously doesn't scale.
104:29 Well, yes, but what doesn't scale? Me. Sure, sure. I know that. But what are you looking for?
104:36 There literally are not enough hours in the day. It's impossible.
104:38 But what are you looking for that someone else who's good at interviewing
104:42 and hiring people… What's the je ne sais quoi? At this point, I might have more training data
104:51 on evaluating technical talent especially—talent of all kinds I suppose, but technical talent
104:56 especially—given that I've done so many technical interviews and then seen the results.
105:02 So my training set is enormous and has a very wide range.
105:11 Generally, the things I ask for are bullet points for evidence of exceptional ability.
105:21 These things can be pretty off the wall. It doesn't need to be in the specific domain,
105:27 but evidence of exceptional ability. So if somebody can cite even one thing,
105:34 but let's say three things, where you go, "Wow, wow, wow," then that's a good sign.
105:39 Why do you have to be the one to determine that? No, I don't. I can't be. It's impossible. The
105:43 total headcount across all companies is 200,000 people.
105:48 But in the early days, what was it that you were looking for that
105:53 couldn't be delegated in those interviews? I guess I need to build my training set.
106:02 It's not like I batted a thousand here. I would make mistakes, but then I'd be
106:05 able to see where I thought somebody would work out well, but they didn't.
106:10 Then why did they not work out well? What can I do, I guess RL myself, to
106:16 in the future have a better batting average when interviewing people?
106:22 My batting average is still not perfect, but it's very high.
106:24 What are some surprising reasons people don't work out?
106:27 Surprising reasons… Like, they don't understand technical domain, et cetera, et cetera. But you've got the long tail now of like,
106:34 "I was really excited about this person. It didn't work out." Curious why that happens.
106:43 Generally what I tell people—I tell myself, I guess, aspirationally—is, don't look at
106:49 the resume. Just believe your interaction. The resume may seem very impressive and it's like,
106:55 "Wow, the resume looks good." But if the conversation
107:00 after 20 minutes is not "wow," you should believe the conversation, not the paper.
107:07 I feel like part of your method is that… There was this meme in the media a few years back about
107:14 Tesla being a revolving door of executive talent. Whereas actually, I think when you look at it,
107:19 Tesla's had a very consistent and internally promoted executive bench over the past few years.
107:24 Then at SpaceX, you have all these folks like Mark Juncosa and Steve Davis—
107:29 Steve Davis runs The Boring Company these days. Bill Riley, and folks like that.
107:35 It feels like part of what has worked well is having very capable technical deputies.
107:43 What do all of those people have in common? Well, the Tesla senior team,
107:53 at this point has probably got an average tenure of 10-12 years. It's quite long.
108:03 But there were times when Tesla went through an extremely rapid growth phase,
108:11 so things were just somewhat sped up. As you know, a company goes through
108:17 different orders of magnitude of size. People that could help manage, say,
108:23 a 50-person company versus a 500-person company versus a 5,000-person company versus
108:28 a 50,000-person company. You outgrew people. It's just not the same team. It's not always the same team.
108:34 So if a company is growing very rapidly, the rate at which executive positions will
108:39 change will also be proportionate to the rapidity of the growth generally.
108:47 Tesla had a further challenge where when Tesla had very successful periods, we would be relentlessly
108:56 recruited from. Like, relentlessly. When Apple had their electric car program,
109:03 they were carpet bombing Tesla with recruiting calls. Engineers just unplugged their phones.
109:10 "I'm trying to get work done here." Yeah. "If I get one more call from
109:14 an Apple recruiter…" But their opening offer without any interview would be
109:19 like double the compensation at Tesla. So we had a bit of the "Tesla pixie
109:28 dust" thing where it's like, "Oh, if you hire a Tesla executive,
109:32 suddenly everything's going to be successful." I've fallen prey to the pixie dust thing as well,
109:38 where it's like, "Oh, we'll hire someone from Google or Apple and they'll be immediately
109:41 successful," but that's not how it works. People are people. There's no magical pixie
109:47 dust. So when we had the pixie dust problem, we would get relentlessly recruited from.
109:57 Also, Tesla being engineering, especially being primarily in Silicon Valley,
110:03 it's easier for people to just... They don't have to change their life very much.
110:10 Their commute's going to be the same. So how do you prevent that?
110:14 How do you prevent the pixie dust effect where everyone's trying to poach all your people?
110:21 I don't think there's much we can do to stop it. That's one of the reasons why Tesla… Really,
110:29 being in Silicon Valley and having the pixie dust thing at the same time meant that there was
110:39 just a very, very aggressive recruitment. Presumably being in Austin helps then?
110:44 Austin, it helps. Tesla still has a majority of its engineering in California.
110:56 Getting engineers to move… I call it the "significant other" problem.
111:00 Yes, "significant others" have jobs. Exactly. So for Starbase that was
111:06 particularly difficult, since the odds of finding a non-SpaceX job…
111:10 In Brownsville, Texas… …are pretty low. It's quite difficult. It's like a technology monastery thing, remote and mostly dudes.
111:22 Not much of an improvement over SF. If you go back to these people who've really
111:34 been very effective in a technical capacity at Tesla, at SpaceX, and those sorts of places, what
111:41 do you think they have in common other than... Is it just that they're very sharp on the
111:48 rocketry or the technical foundations, or do you think it's something organizational?
111:52 Is it something about their ability to work with you?
111:54 Is it their ability to be flexible but not too flexible?
112:03 What makes a good sparring partner for you? I don't think of it as a sparring partner.
112:08 If somebody gets things done, I love them, and if they don't,
112:11 I hate them. So it's pretty straightforward. It's not like some idiosyncratic thing.
112:17 If somebody executes well, I'm a huge fan, and if they don't, I'm not.
112:22 But it's not about mapping to my idiosyncratic preferences.
112:25 I certainly try not to have it be mapping to my idiosyncratic preferences.
112:36 Generally, I think it's a good idea to hire for talent and drive and trustworthiness.
112:47 And I think goodness of heart is important. I underweighted that at one point.
112:53 So, are they a good person? Trustworthy? Smart and talented and hard working?
113:01 If so, you can add domain knowledge. But those fundamental traits,
113:06 those fundamental properties, you cannot change. So most of the people who are at Tesla and SpaceX
113:14 did not come from the aerospace industry or the auto industry.
113:18 What has had to change most about your management style as your companies have
113:21 scaled from 100 to 1,000 to 10,000 people? You're known for this very micro management,
113:27 just getting into the details of things. Nano management, please. Pico management.
113:34 Femto management. Keep going. We're going to go all the way down to Planck's constant.
113:44 All the way down to Heisenberg uncertainty principle.
113:50 Are you still able to get into details as much as you want?
113:52 Would your companies be more successful if they were smaller?
113:56 How do you think about that? Because I have a fixed amount of
113:58 time in the day, my time is necessarily diluted as things grow and as the span of activity increases.
114:10 It's impossible for me to actually be a micromanager because that would imply I
114:17 have some thousands of hours per day. It is a logical impossibility
114:22 for me to micromanage things. Now, there are times when I will drill down into a
114:31 specific issue because that specific issue is the limiting factor on the progress of the company.
114:42 The reason for drilling into some very detailed item is because it is the limiting factor.
114:49 It’s not arbitrarily drilling into tiny things. From a time standpoint, it is physically
114:57 impossible for me to arbitrarily go into tiny things that don't matter. That would
115:03 result in failure. But sometimes the tiny things are decisive in victory.
115:09 Famously, you switched the Starship design from composites to steel.
115:17 Yes. You made that decision. That wasn't people going around saying, "Oh, we found something better, boss."
115:22 That was you encouraging people against some resistance.
115:25 Can you tell us how you came to that whole concept of the steel switch?
115:32 Desperation, I'd say. Originally, we were going to make Starship out of carbon fiber.
115:45 Carbon fiber is pretty expensive. When you do volume production, you can get any given thing
115:55 to start to approach its material cost. The problem with carbon fiber is that
116:00 material cost is still very high. Particularly if you go for a high-strength
116:10 specialized carbon fiber that can handle cryogenic oxygen, it's roughly 50 times the cost of steel.
116:20 At least in theory, it would be lighter. People generally think of steel as being
116:24 heavy and carbon fiber as being light. For room temperature applications,
116:35 like a Formula 1 car, static aero structure, or any kind of aero structure really, you're
116:43 probably going to be better off with carbon fiber. The problem is that we were trying to make this
116:48 enormous rocket out of carbon fiber and our progress was extremely slow.
116:53 It had been picked in the first place just because it's light?
116:57 Yes. At first glance, most people would think that the choice for
117:05 making something light would be carbon fiber. The thing is that when you make something very
117:18 enormous out of carbon fiber and then you try to have the carbon fiber be efficiently cured,
117:25 meaning not room temperature cured, because sometimes you got 50 plies of carbon fiber…
117:33 Carbon fiber is really carbon string and glue. In order to have high strength,
117:39 you need an autoclave. Something that's essentially a high pressure oven.
117:46 If you have something that's gigantic, that one's got to be bigger than the rocket.
117:52 We were trying to make an autoclave that's bigger than any autoclave that's ever existed.
117:58 Or you can do room temperature cure, which takes a long time and has issues.
118:03 The final issue is that we were just making very slow progress with carbon fiber.
118:12 The meta question is why it had to be you who made that decision.
118:18 There's many engineers on your team. How did the team not arrive at steel?
118:20 Yeah exactly. This is part of a broader question, understanding your comparative
118:24 advantage at your companies. Because we were making very slow
118:29 progress with carbon fiber, I was like, "Okay, we've got to try something else."
118:33 For the Falcon 9, the primary airframe is made of aluminum lithium, which has
118:41 a very good strength-to-weight. Actually, it has about the same,
118:47 maybe better, strength to weight for its application than carbon fiber.
118:51 But aluminum lithium is very difficult to work with.
118:53 In order to weld it, you have to do something called friction stir welding, where you join the
118:57 metal without entering the liquid phase. It's kind of wild that you can do that.
119:02 But with this particular type of welding, you can do that. It's very difficult. Let's say you
119:10 want to make a modification or attach something to aluminum lithium, you now have to use a mechanical
119:16 attachment with seals. You can't weld it on. So I wanted to avoid using aluminum lithium
119:24 for the primary structure for Starship. There was this very special grade of
119:35 carbon fiber that had very good mass properties. With a rocket, you're really trying to maximize
119:41 the percentage of the rocket that is propellant, minimize the mass obviously.
119:48 But like I said, we were making very slow progress.
119:54 I said, "at this rate, we’re never going to get to Mars.
119:56 So we've got to think of something else." I didn't want to use aluminum lithium
120:01 because of the difficulty of friction stir welding, especially doing that at scale.
120:06 It was hard enough at 3.6 meters in diameter, let alone at 9 meters or above.
120:12 Then I said, "what about steel?" I had a clue here because some of
120:21 the early US rockets had used very thin steel. The Atlas rockets had used a steel balloon tank.
120:30 It's not like steel had never been used before. It actually had been used. When you look at
120:35 the material properties of stainless steel, full-hard, strain hardened stainless steel,
120:46 at cryogenic temperature the strength to weight is actually similar to carbon fiber.
120:54 If you look at material properties at room temperature, it looks like
120:58 the steel is going to be twice as heavy. But if you look at the material properties
121:03 at cryogenic temperature of full-hard steel, stainless of particular grades,
121:10 then you actually get to a similar strength to weight as carbon fiber.
121:15 In the case of Starship, both the fuel and the oxidizer are cryogenic.
121:19 For Falcon 9, the fuel is rocket propellant-grade kerosene, basically a very pure form of jet fuel.
121:32 That is roughly room temperature. Although we do actually chill it slightly below,
121:38 we chill it like a beer. Delicious. We do chill it, but it's not cryogenic. In fact, if we made it cryogenic,
121:45 it would just turn to wax. But for Starship, it's liquid methane and liquid oxygen. They are liquid at similar temperatures.
121:59 Basically, almost the entire primary structure is at cryogenic temperature.
122:03 So then you've got a 300-series stainless that's strain hardened.
122:12 Because almost all things are cryogenic temperature, it actually has similar
122:17 strength to weight as carbon fiber. But it costs 50x less in raw
122:25 material and is very easy to work with. You can weld stainless steel outdoors.
122:30 You could smoke a cigar while welding stainless steel. It's very resilient.
122:37 You can modify it easily. If you want to attach something, you just weld it right on.
122:44 Very easy to work with, very low cost. Like I said, at cryogenic temperature,
122:52 it’s similar strength-to-weight to carbon fiber. Then when you factor in that we have a much
123:02 reduced heat shield mass, because the melting point of steel, is much greater
123:07 than the melting point of aluminum… It's about twice the melting point of aluminum.
123:13 So you can just run the rocket much hotter? Yes, especially for the ship which is coming
123:19 in like a blazing meteor. You can greatly reduce
123:25 the mass of the heat shield. You can cut the mass of the windward
123:34 part of the heat shield, maybe in half, and you don't need any heat shielding on the leeward side.
123:45 The net result is that actually the steel rocket weighs less than
123:49 the carbon fiber rocket, because the resin in the carbon fiber rocket starts to melt.
124:00 Basically, carbon fiber and aluminum have about the same operating temperature capabilities,
124:06 whereas steel can operate at twice the temperature. These are very rough approximations.
124:12 I won't build the rocket. What I mean is people will say,
124:14 "Oh, he said this twice. It's actually 0.8." I'm like, shut up, assholes.
124:18 That's what the main comment's going to be about. God damn it. The point is, in retrospect, we
124:25 should have started with steel in the beginning. It was dumb not to do steel.
124:28 Okay, but to play this back to you, what I'm hearing is that steel was a riskier,
124:32 less proven path, other than the early US rockets. Versus carbon fiber was a worse but
124:40 more proven out path. So you need to be the one to push for, "Hey, we're going to do this riskier path and just figure it out."
124:48 So you're fighting a sort of conservatism in a sense.
124:52 That's why I initially said that the issue is that we weren't making fast enough progress.
124:57 We were having trouble making even a small barrel section of the carbon fiber
125:02 that didn't have wrinkles in it. Because at that large scale, you have to
125:09 have many plies, many layers of the carbon fiber. You've got to cure it and you've got to cure it
125:14 in such a way that it doesn't have any wrinkles or defects.
125:18 Carbon fiber is much less resilient than steel. It has much less toughness.
125:26 Stainless steel will stretch and bend, the carbon fiber will tend to shatter.
125:35 Toughness being the area under the stress strain curve.
125:39 You're generally going to have to do better with steel, but stainless steel to be precise.
125:45 One other Starship question. So I visited Starbase, I think it was two years ago,
125:51 with Sam Teller, and that was awesome. It was very cool to see, in a whole bunch of ways.
125:55 One thing I noticed was that people really took pride in the simplicity of things, where everyone
126:02 wants to tell you how Starship is just a big soda can, and we're hiring welders, and if you can weld
126:09 in any industrial project, you can weld here. But there's a lot of pride in the simplicity.
126:16 Well, factually Starship is a very complicated rocket.
126:18 So that's what I'm getting at. Are things simple or are they complex?
126:23 I think maybe just what they're trying to say is that you don't have to have prior experience
126:27 in the rocket industry to work on Starship. Somebody just needs to be smart and work hard
126:36 and be trustworthy and they can work on a rocket. They don't need prior rocket experience.
126:42 Starship is the most complicated machine ever made by humans, by a long shot.
126:47 In what regards? Anything, really. I'd say there isn't a more complex machine. I'd say that pretty much any project I
127:00 can think of would be easier than this. That's why nobody has ever made a fully
127:08 reusable orbital rocket. It's a very hard problem. Many smart people have tried before, very smart
127:18 people with immense resources, and they failed. And we haven't succeeded yet. Falcon is partially
127:26 reusable, but the upper stage is not. Starship Version 3,
127:32 I think this design can be fully reusable. That full reusability is what will enable
127:41 us to become a multi-planet civilization. Any technical problem, even like a Hadron
127:52 Collider or something like that, is an easier problem than this.
127:55 We spent a lot of time on bottlenecks. Can you say what the current Starship
127:58 bottlenecks are, even at a high level? Trying to make it not explode, generally.
128:05 It really wants to explode. That old chestnut. All those
128:09 combustible materials. We've had two boosters explode on the test stand.
128:13 One obliterated the entire test facility. So it only takes that one mistake.
128:21 The amount of energy contained in a Starship is insane.
128:25 Is that why it's harder than Falcon? It's because it's just more energy?
128:30 It's a lot of new technology. It's pushing the performance envelope. The
128:37 Raptor 3 engine is a very, very advanced engine. It's by far the best rocket engine ever made.
128:43 But it desperately wants to blow up. Just to put things into perspective here,
128:48 on liftoff the rocket is generating over 100 gigawatts of power. That’s 20% of US electricity.
128:58 It's actually insane. It's a great comparison. While not exploding. Sometimes.
129:02 Sometimes, yes. So I was like, how does it not explode?
129:06 There's thousands of ways that it could explode and only one way that it doesn't.
129:12 So we want it not only to really not explode, but fly reliably on a daily basis, like once per hour.
129:22 Obviously, if it blows up a lot, it's very difficult to maintain that
129:25 launch cadence. Yes. What's the single biggest remaining problem for Starship?
129:33 It's having the heat shield be reusable. No one's ever made a reusable orbital heat shield.
129:44 So the heat shield's gotta make it through the ascent phase without shucking a bunch of tiles,
129:52 and then it's gotta come back in and also not lose a bunch of tiles or overheat the main airframe.
130:01 Isn't that hard because it's fundamentally a consumable?
130:05 Well, yes, but your brake pads in your car are also consumable, but they last a very long time.
130:09 Fair. So it just needs to last a very long time. We have brought the ship back and had it do a soft landing in the ocean.
130:22 We've done that a few times. But it lost a lot of tiles.
130:27 It was not reusable without a lot of work. Even though it did come to a soft landing,
130:35 it would not have been reusable without a lot of work.
130:40 So it's not really reusable in that sense. That's the biggest problem that remains,
130:44 a fully reusable heat shield. You want to be able to land it,
130:51 refill propellant and fly again. You can't do this laborious inspection
130:57 of 40,000 tiles type of thing. When I read biographies of yours,
131:06 it seems like you're just able to drive the sense of urgency and drive the sense of "this is the
131:11 thing that can scale." I'm curious why you think other organizations of your… SpaceX and Tesla are really big companies now.
131:20 You're still able to keep that culture. What goes wrong with other companies such
131:24 that they're not able to do that? I don't know. Like today, you said you had a bunch of SpaceX meetings.
131:31 What is it that you're doing there that's keeping that?
131:33 It’s adding urgency? Well, I don't know. I guess the urgency is going
131:42 to come from whoever is leading the company. I have a maniacal sense of urgency.
131:47 So that maniacal sense of urgency projects through the rest of the company.
131:52 Is it because of consequences? They're like, "Elon set a crazy deadline, but if I don't get it,
131:57 I know what happens to me." Is it just that you're able to
132:01 identify bottlenecks and get rid of them so people can move fast?
132:03 How do you think about why your companies are able to move fast?
132:07 I'm constantly addressing the limiting factor. On the deadlines front, I generally actually
132:20 try to aim for a deadline that I at least think is at the 50th percentile.
132:25 So it's not like an impossible deadline, but it's the most aggressive deadline I can think
132:29 of that could be achieved with 50% probability. Which means that it'll be late half the time.
132:42 There is a law of gas expansion that applies to schedules.
132:48 If you said we're going to do something in five years, which to me is like infinity time,
132:55 it will expand to fill the available schedule and it'll take five years.
133:05 Physics will limit how fast you can do certain things.
133:07 So scaling up manufacturing, there's a rate at which you can move the atoms
133:15 and scale manufacturing. That's why you can't instantly
133:17 make a million units a year of something. You've got to design the manufacturing line.
133:23 You've got to bring it up. You've got to ride the S-curve of production.
133:31 What can I say that's actually helpful to people? Generally, a maniacal sense of urgency is a very big deal.
133:47 You want to have an aggressive schedule and you want to figure out what the limiting
133:54 factor is at any point in time and help the team address that limiting factor.
133:59 So Starlink was slowly in the works for many years.
134:05 We talked about it all the way in the beginning of the company.
134:07 So then there was a team you had built in Redmond, and then at one point you
134:12 decided this team is just not cutting it. It went for a few years slowly, and so why didn't
134:25 you act earlier, and why did you act when you did? Why was that the right moment at which to act?
134:30 I have these very detailed engineering reviews weekly.
134:38 That's maybe a very unusual level of granularity. I don't know anyone who runs a company,
134:45 or at least a manufacturing company, that goes with the level of detail that I go
134:50 into. It's not as though... I have a pretty good understanding of what's actually going
134:57 on because we go through things in detail. I'm a big believer in skip-level meetings
135:07 where instead of having the person that reports to me say things, it's everyone that reports to them
135:14 saying something in the technical review. And there can't be advanced preparation.
135:25 Otherwise you're going to get "glazed", as I say these days.
135:31 Exactly. Very Gen Z of you. How do you prevent advanced preparation?
135:35 Do you call on them randomly? No, I just go around the room.
135:37 Everyone provides an update. It's a lot of information to keep in your head.
135:48 If you have meetings weekly or twice weekly, you've got a snapshot of what that person said.
135:56 You can then plot the progress points. You can sort of mentally plot the
136:03 points on a curve and say, "are we converging to a solution or not?"
136:12 I'll take drastic action only when I conclude that success is not in a set of possible outcomes.
136:22 So when I finally reach the conclusion that unless drastic action is done, we have no chance of
136:29 success, then I must take drastic action. I came to that conclusion in 2018,
136:36 took drastic action and fixed the problem. You've got many, many companies. In each of
136:45 them it sounds like you do this kind of deep engineering understanding of
136:49 what the relevant bottlenecks are so you can do these reviews with people.
136:56 You've been able to scale it up to five, six, seven companies.
136:59 Within one of these companies, you have many different mini companies within them.
137:04 What determines the max amount here? Because you have like 80 companies…?
137:07 80? No. But you have so many already. That's already remarkable. By this current number.
137:13 Exactly. We can barely keep one company together. It depends on the situation. I actually don't have regular meetings with The Boring Company,
137:32 so The Boring Company is sort of cruising along. Basically, if something is working well and
137:37 making good progress, then there's no point in me spending time on it.
137:42 I actually allocate time according to where the limiting factor. Where are things problematic?
137:51 Where are we pushing against? What is holding us back? I focus, at the risk of saying the
137:59 words too many times, on the limiting factor. The irony is if something's going really well,
138:09 they don't see much of me. But if something is going badly,
138:12 they'll see a lot of me. Or not even badly… If something is the limiting factor.
138:18 The limiting factor, exactly. It’s not exactly going badly but it’s the
138:21 thing that we need to make go faster. When something’s a limiting factor at
138:25 SpaceX or Tesla, are you talking weekly and daily with the engineer that's
138:32 working on it? How does that actually work? Most things that are the limiting factor are
138:39 weekly and some things are twice weekly. The AI5 chip review is twice weekly.
138:46 Every Tuesday and Saturday is the chip review. Is it open ended in how long it goes?
138:54 Technically, yes, but usually it's two or three hours. Sometimes less. It depends on
139:03 how much information we've got to go through. That's another thing. I'm just trying to tease
139:07 out the differences here because the outcomes seem quite different.
139:11 I think it's interesting to know what inputs are different.
139:14 It feels like in the corporate world, one, like you were saying, the CEO doing engineering
139:20 reviews does not always happen despite the fact that that is what the company is doing.
139:25 But then time is often pretty finely sliced into half hour meetings or even 15 minute meetings.
139:32 It seems like you hold more open-ended, "We're talking about it until we figure
139:38 it out" type things. Sometimes. But most of them seem to more or less stay on time. Today's Starship engineering review went a bit
139:56 longer because there were more topics to discuss. They're trying to figure out how to scale to a
140:04 million plus tons to orbit per year. It’s quite challenging.
140:08 Can I ask a question? You said about Optimus and AI that they're going to result in double
140:15 digit growth rates within a matter of years. Oh, like the economy? Yes. I think that's right.
140:22 What was the point of the DOGE cuts if the economy is going to grow so much?
140:28 Well, I think waste and fraud are not good things to have.
140:33 I was actually pretty worried about... In the absence of AI and robotics,
140:41 we're actually totally screwed because the national debt is piling up like crazy.
140:50 The interest payments to national debt exceed the military budget, which is a trillion dollars.
140:54 So we have over a trillion dollars just in interest payments.
141:00 I was pretty concerned about that. Maybe if I spend some time, we can
141:03 slow down the bankruptcy of the United States and give us enough time for the AI and robots
141:09 to help solve the national debt. Or not help solve, it's the only
141:16 thing that could solve the national debt. We are 1000% going to go bankrupt as a country,
141:21 and fail as a country, without AI and robots. Nothing else will solve the national debt.
141:30 We just need enough time to build the AI and robots to not go bankrupt before then.
141:39 I guess the thing I'm curious about is, when DOGE starts you have this enormous
141:43 ability to enact reform. Not that enormous. Sure. I totally buy your point that it's important that AI and robotics drive
141:53 productivity improvements, drive GDP growth. But why not just directly go after the things
141:59 you were pointing out, like the tariffs on certain components, or permitting?
142:03 I'm not the president. And it is very hard to cut things that are obvious waste and fraud,
142:13 like ridiculous waste and fraud. What I discovered is that it's extremely
142:21 difficult even to cut very obvious waste and fraud from the government because the government
142:28 has to operate on who's complaining. If you cut off payments to fraudsters,
142:34 they immediately come up with the most sympathetic sounding reasons to continue the payment.
142:39 They don't say, "Please keep the fraud going." They’re like, "You're killing baby pandas."
142:46 Meanwhile, no baby pandas are dying. They're just making it up. The fraudsters are capable
142:51 of coming up with extremely compelling, heart-wrenching stories that are false,
142:56 but nonetheless sound sympathetic. That's what happened. Perhaps I should have known better.
143:10 But I thought, wait, let's try to cut some amount of waste and pork from the government.
143:16 Maybe there shouldn't be 20 million people marked as alive in Social Security who are
143:22 definitely dead, and over the age of 115. The oldest American is 114. So it's safe to say if
143:30 somebody is 115 and marked as alive in the Social Security database, there's either a typo… Somebody
143:39 should call them and say, "We seem to have your birthday wrong, or we need to mark you
143:47 as dead." One of the two things. Very intimidating call to get.
143:52 Well, it seems like a reasonable thing. Say if their birthday is in the future
143:59 and they have a Small Business Administration loan, and their birthday is 2165,
144:07 we either have a typo or we have fraud. So we say, "we appear to have gotten the
144:13 century of your birth incorrect." Or a great plot for a movie.
144:17 Yes. That's what I mean by, ludicrous fraud. Were those people getting payments?
144:23 Some were getting payments from Social Security. But the main fraud vector was to mark somebody as
144:29 alive in Social Security and then use every other government payment system to basically do fraud.
144:37 Because what those other government payment systems do,
144:40 they would simply do an "are you alive" check to the Social Security database. It's a bank shot.
144:46 What would you estimate is the total amount of fraud from this mechanism?
144:52 By the way, the Government Accountability Office has done these estimates before. I'm
144:55 not the only one. In fact, I think the GAO did an analysis, a rough estimate of fraud during
145:02 the Biden administration, and calculated it at roughly half a trillion dollars.
145:08 So don't take my word for it. Take a report issued during the
145:11 Biden administration. How about that? From this Social Security mechanism?
145:16 It's one of many. It's important to appreciate that the government is
145:22 very ineffective at stopping fraud. It's not like a company where, with
145:30 stopping fraud, you've got a motivation because it's affecting the earnings of your company.
145:34 The government just prints more money. You need caring and competence. These are
145:44 in short supply at the federal level. When you go to the DMV, do you think,
145:52 "Wow, this is a bastion of competence"? Well, now imagine it's worse than the DMV
145:57 because it's the DMV that can print money. At least the state level DMVs need to...
146:05 The states more or less need to stay within their budget or they go bankrupt.
146:08 But the federal government just prints more money. If there's actually half a trillion of fraud,
146:14 why was it not possible to cut all that? You really have to stand back and recalibrate
146:28 your expectations for competence. Because you're operating in a world
146:36 where you've got to make ends meet. You've got to pay your bills...
146:41 Find the microphones. Exactly. It's not like there's a giant,
146:49 largely uncaring monster bureaucracy. It's a bunch of anachronistic computers
146:57 that are just sending payments. One of the things that the DOGE
147:03 team did sounds so simple and probably will save $100-200 billion a year.
147:14 It was simply requiring payments from the main Treasury computer—which is called PAM,
147:19 Payment Accounts Master or something like that, there's $5 trillion payments a year—that
147:25 go out have a payment appropriation code. Make it mandatory, not optional, that you
147:32 have anything at all in the comment field. You have to recalibrate how dumb things are.
147:42 Payments were being sent out with no appropriation code, not checking back to any congressional
147:48 appropriation, and with no explanation. This is why the Department of War,
147:54 formerly the Department of Defense, cannot pass an audit, because the information is literally
147:59 not there. Recalibrate your expectations. I want to better understand this half a trillion
148:04 number, because there's an IG report in 2024. Why is it so low?
148:10 Maybe, but we found that over seven years, the Social Security fraud
148:14 they estimated was like $70 billion over seven years, so like $10 billion a year.
148:17 So I'd be curious to see what the other $490 billion is.
148:20 Federal government expenditures are $7.5 trillion a year.
148:26 How competent do you think the government is? The discretionary spending there is like… 15%?
148:33 But it doesn't matter. Most of the fraud is non-discretionary.
148:36 It's basically fraudulent Medicare, Medicaid, Social Security,
148:45 disability. There's a zillion government payments. A bunch of these payments are in
148:52 fact block transfers to the states. So the federal government doesn't
148:59 even have the information in a lot of cases to even know if there's fraud.
149:04 Let's consider reductio ad absurdum. The government is perfect and has no fraud.
149:10 What is your probability estimate of that? Zero. Okay, so then would you say, fraud and waste
149:18 at the government is 90% efficient? That also would be quite generous.
149:27 But if it's only 90%, that means that there's $750 billion a year of waste and
149:32 fraud. And it's not 90%. It's not 90% effective. This seems like a strange way to first principles
149:38 the amount of fraud in the government. Just like, how much do you think there is?
149:43 Anyways, we don't have to do it live, but I'd be curious—
149:45 You know a lot about fraud at Stripe? People are constantly trying to do fraud.
149:49 Yeah, but as you say, it's a little bit of a... We've really ground it down, but it's a little
149:54 bit of a different problem space because you're dealing with a much more heterogeneous set of
149:58 fraud vectors here than we are. But at Stripe, you have high
150:03 competence and you try hard. You have high competence and
150:07 high caring, but still fraud is non-zero. Now imagine it's at a much bigger scale, there's
150:15 much less competence, and much less caring. At PayPal back in the day, we tried to manage
150:22 fraud down to about 1% of the payment volume. That was very difficult. It took a tremendous amount of
150:28 competence and caring to get fraud merely to 1%. Now imagine that you're an organization where
150:36 there's much less caring and much less competence. It's going to be much more than 1%.
150:41 How do you feel now looking back on politics and doing stuff there?
150:48 Looking from the outside in, two things have been quite impactful: one, the America PAC, and two,
150:59 the acquisition of Twitter at the time. But also it seems like there
151:05 was a bunch of heartache. What's your grading of the whole experience?
151:16 I think those things needed to be done to maximize the probability that the future is good.
151:27 Politics generally is very tribal. People lose their objectivity usually with politics.
151:35 They generally have trouble seeing the good on the other side or the bad on their own side.
151:41 That's generally how it goes. That, I guess, was one of the things that surprised me the most.
151:48 You often simply cannot reason with people. If they're in one tribe or the other.
151:52 They simply believe that everything their tribe does is good and anything
151:55 the other political tribe does is bad. Persuading them otherwise is almost impossible.
152:07 But I think overall those actions—acquiring Twitter, getting Trump elected, even though
152:22 it makes a lot of people angry—I think those actions were good for civilization.
152:30 How does it feed into the future you're excited about?
152:33 Well, America needs to be strong enough to last long enough to extend life to other
152:42 planets and to get AI and robotics to the point where we can ensure that the future is good.
152:51 On the other hand, if we were to descend into, say, communism or some situation where the state
152:59 was extremely oppressive, that would mean that we might not be able to become multi-planetary.
153:10 The state might stamp out our progress in AI and robotics.
153:21 Optimus, Grok, et cetera. Not just yours, but any revenue-maximizing company's products will
153:29 be leveraged by the government over time. How does this concern manifest in what
153:37 private companies should be willing to give governments? What kinds of guardrails? Should
153:44 AI models be made to do whatever the government that has contracted
153:51 them out to do and asks them to do? Should Grok get to say, "Actually,
153:57 even if the military wants to do X, no, Grok will not do that"?
154:01 I think maybe the biggest danger of AI and robotics going wrong is government.
154:16 People who are opposed to corporations or worried about corporations should
154:21 really worry the most about government. Because government is just a
154:25 corporation in the limit. Government is just the biggest
154:30 corporation with a monopoly on violence. I always find it a strange dichotomy where
154:38 people would think corporations are bad, but the government is good, when the government is
154:41 simply the biggest and worst corporation. But people have that dichotomy. They somehow think
154:51 at the same time that government can be good, but corporations bad, and this is not true.
154:55 Corporations have better morality than the government.
154:59 I actually think it’s a thing to be worried about. The government could potentially use AI and
155:12 robotics to suppress the population. That is a serious concern.
155:18 As the guy building AI and robotics, how do you prevent that?
155:28 If you limit the powers of government, which is really what the US Constitution is intended to do,
155:33 to limit the powers of government, then you're probably going to have a better outcome than
155:37 if you have more government. Robotics will be available
155:42 to all governments, right? I don’t know about all governments.
155:49 It's difficult to predict. I can say what's the endpoint, or what is many years in the future, but
155:57 it's difficult to predict the path along that way. If civilization progresses, AI will vastly
156:08 exceed the sum of all human intelligence. There will be far more robots than humans.
156:16 Along the way what happens is very difficult to predict.
156:20 It seems one thing you could do is just say, "whatever government X, you're not allowed to
156:27 use Optimus to do X, Y, Z." Just write out a policy. I think you tweeted recently that
156:31 Grok should have a moral constitution. One of those things could be that we
156:36 limit what governments are allowed to do with this advanced technology.
156:47 Technically if politicians pass a law and they can enforce that law,
156:53 then it's hard to not do that law. The best thing we can have is limited government
157:01 where you have the appropriate crosschecks between the executive, judicial, and legislative branches.
157:12 The reason I'm curious about it is that at some point it seems the limits will come from you.
157:17 You've got the Optimus, you've got the space GPUs… You think I'll be the boss of the government?
157:24 Already it's the case with SpaceX that for things that are crucial—the government really
157:32 cares about getting certain satellites up in space or whatever—it needs SpaceX. It is the
157:37 necessary contractor. You are in the process of building more and more of the
157:45 technological components of the future that will have an analogous role in different industries.
157:50 You could have this ability to set some policy that suppressing classical liberalism in any
157:58 way… "My companies will not help in any way with that", or some policy like that.
158:05 I will do my best to ensure that anything that's within my control
158:08 maximizes the good outcome for humanity. I think anything else would be shortsighted,
158:18 because obviously I'm part of humanity, so I like humans. Pro human.
158:29 You mentioned that Dojo 3 will be used for space-based compute.
158:34 You really read what I say. I don't know if you know,
158:38 Elon, but you have a lot of followers. Dead giveaway. How did you discern my secrets?
158:46 Oh I posted them on X. How do you design a chip for space? What changes?
158:54 You want to design it to be more radiation tolerant and run at a higher temperature.
159:03 Roughly, if you increase the operating temperature by 20% in degrees Kelvin,
159:08 you can cut your radiator mass in half. So running at a higher temperature
159:15 is helpful in space. There are various things you can do for shielding the memory. But neural nets are going to be very
159:26 resilient to bit flips. Most of what happens for radiation is random bit flips. But if you've got a multi-trillion parameter model
159:37 and you get a few bit flips, it doesn't matter. Heuristic programs are going to be much more
159:42 sensitive to bit flips than some giant parameter file.
159:49 I just design it to run hot. I think you pretty much do
159:56 it the same way that you do things on Earth, apart from making it run hotter.
160:02 The solar array is most of the weight on the satellite.
160:04 Is there a way to make the GPUs even more powerful than what Nvidia and TPUs and
160:11 et cetera are planning on doing that would be especially privileged in the space-based world?
160:18 The basic math is, if you can do about a kilowatt per reticle, then you'd need 100
160:31 million full reticle chips to do 100 gigawatts. Depending on what your yield assumptions are,
160:44 that tells you how many chips you need to make. If you're going to have 100 gigawatts of power,
160:53 you need 100 million chips that are running at a kilowatt sustained, per reticle. Basic math.
161:05 100 million chips depends on… If you look at the die size of something like
161:13 Blackwell GPUs or something, and how many you can get out of a wafer, you can get
161:18 on the order of dozens or less per wafer. So basically, this is a world where if
161:25 we're putting that out every single year, you're producing millions of wafers a month.
161:33 That's the plan with TeraFab? Millions of wafers a month of advanced process nodes?
161:37 Yeah it could be north of a million or something. You’ve got to do the memory too.
161:42 Are you going to make a memory fab? I think the TeraFab's got to do memory.
161:46 It's got to do logic, memory, and packaging. I'm very curious how somebody gets started.
161:51 This is the most complicated thing man has ever made.
161:54 Obviously, if anybody's up to the task, you're up to the task.
161:58 So you realize it's a bottleneck, and you go to your engineers.
162:02 What do you tell them to do? "I want a million wafers a month in 2030."
162:09 That’s right. That’s exactly what I want. Do you call ASML? What is the next step?
162:14 No so much to ask. We make a little fab and see what happens.
162:22 Make our mistakes at a small scale and then make a big one.
162:25 Is a little fab done? No, it's not done. We're not going to keep that cat in the bag. That cat's going to come out of the bag.
162:35 There'll be drones hovering over the bloody thing. You'll be able to see its construction
162:39 progress on X in real time. Look, I don't know, we could just
162:47 flounder in failure, to be fair. Success is not guaranteed. Since we want to try to make something
163:00 like 100 million… We want 100 gigawatts of power and chips that can take 100 gigawatts by 2030.
163:18 We’ll take as many chips as our suppliers will give us.
163:20 I've actually said this to TSMC and Samsung and Micron: "please build more fabs faster".
163:28 We will guarantee to buy the output of those fabs. So they're already moving as fast as they can. It's us plus them.
163:46 There's a narrative that the people doing AI want a very large number
163:50 of chips as quickly as possible. Then many of the input suppliers,
163:56 the fabs, but also the turbine manufacturers, are not ramping up production very quickly.
164:02 No, they're not. The explanation you hear is that they're dispositionally conservative. They're Taiwanese or German, as the story may
164:11 be. They just don't believe... Is that really the explanation or is there something else?
164:17 Well, it's reasonable to... If somebody's been in the computer memory business for 30 or 40 years…
164:25 They've seen cycles. They've seen boom and bust 10 times.
164:32 That's a lot of layers of scar tissue. During the boom times, it looks like
164:37 everything is going to be great forever. Then the crash happens and they're
164:41 desperately trying to avoid bankruptcy. Then there's another boom and another crash.
164:48 Are there other ideas you think others should go pursue that
164:51 you're not for whatever reasons right now? There are a few companies that are pursuing
164:58 new ways of doing chips, but they're just not scaling fast.
165:03 I don't even mean within AI, I mean just generally.
165:07 People should do the thing where they find that they're highly motivated to do that thing,
165:13 as opposed to some idea that I suggest. They should do the thing that they find
165:21 personally interesting and motivating to do. But going back to the limiting
165:30 factor… I used that phrase about 100 times. The current limiting factor that I see in the
165:47 three to four year timeframe, it's chips. In the one year timeframe, it's energy,
165:56 power production, electricity. It's not clear to me that there's enough
166:02 usable electricity to turn on all the AI chips that are being made.
166:10 Towards the end of this year, I think people are going to have real trouble turning on...
166:13 The chip output will exceed the ability to turn chips on.
166:17 What's your plan to deal with that world? We're trying to accelerate electricity production.
166:24 I guess that's maybe one of the reasons that xAI will be maybe the leader, hopefully the leader.
166:34 We'll be able to turn on more chips than other people can turn on, faster,
166:39 because we're good at hardware. Generally, the innovations from
166:45 the corporations that call themselves labs, the ideas tend to flow… It's rare to see that
166:54 there's more than about a six-month difference. The ideas travel back and forth with the people.
167:04 So I think you sort of hit the hardware wall and then whichever company can scale
167:11 hardware the fastest will be the leader. So I think xAI will be able to scale
167:17 hardware the fastest and therefore most likely will be the leader.
167:20 You joked or were self-conscious about using the "limiting factor" phrase again.
167:28 But I actually think there's something deep here. If you look at a lot of things we've touched on
167:32 over the course of it, it’s maybe a good note to end on.
167:37 If you think of a senescent, low-agency company, it would have some bottleneck and
167:45 not really be doing anything about it. Marc Andreessen had the line of,
167:49 "most people are willing to endure any amount of chronic pain to avoid acute pain".
167:54 It feels like a lot of the cases we're talking about are just leaning into the acute pain,
167:59 whatever it is. "Okay, we got to figure out how to work with steel, or we got to figure
168:05 out how to run the chips in space." We'll take some near-term acute pain
168:09 to actually solve the bottleneck. So that's kind of a unifying theme.
168:13 I have a high pain threshold. That's helpful. To solve the bottleneck.
168:19 Yes. One thing I can say is, I think the future is going to be very interesting.
168:36 As I said at Davos—I think I was on the ground for like three hours or something—it's
168:45 better to err on the side of optimism and be wrong than err on the side of pessimism
168:50 and be right, for quality of life. You'll be happier if you err on
169:01 the side of optimism rather than erring on the side of pessimism.
169:05 So I recommend erring on the side of optimism. Here's to that.
169:09 Cool. Elon, thanks for doing this. Thank you. All right, thanks guys. All right. Great stamina.
169:17 Hopefully this didn't count as a pain in the pain tolerance.
$
Elon Musk – "In 36 months, the cheapest place to put AI will be space”
[AI agents and automation][hardware setup and infrastructure][solo founder and bootstrapping][marketing and growth hacking][e-commerce and conversion optimization]
// chapters
// description
In this episode, John and I got to do a real deep-dive with Elon. We discuss the economics of orbital data centers, the difficulties of scaling power on Earth, what it would take to manufacture humanoids at high-volume in America, xAI’s business and alignment plans, DOGE, and much more.
𝐄𝐏𝐈𝐒𝐎𝐃𝐄 𝐋𝐈𝐍𝐊𝐒
* Transcript: https://www.dwarkesh.com/p/elon-musk
* Apple Podcasts: https://podcasts.apple.com/us/podcast/dwarkesh-podcast/id1516093381?i=1000748400389
* Spotify: https://open.spotify.com/episode/
[AI agents and automation][hardware setup and infrastructure][solo founder and bootstrapping][marketing and growth hacking][e-commerce and conversion optimization]