Hi, listeners, and welcome to No Priors. Today we're here again, one year since our last discussion, with the one and only Jensen Huang, founder and CEO of NVIDIA. Today, NVIDIA's market cap is over $3 trillion, and it's the one literally holding all the chips in the AI revolution. We're excited to hang out in NVIDIA's headquarters and talk all things frontier models, data center scale computing, and the bets NVIDIA is taking on a 10 year basis. Welcome back, Jensen. Thirty years into NVIDIA and looking 10 years out, what are the big bets you think are still to make? Is it all about scale-up from here? Are we running into limitations in terms of how we can squeeze more compute and memory out of the architectures we have? What are you focused on? Well, if we take a step back and think about what we've done, we went from coding to machine learning, from writing software tools to creating AIs, and all of that went from running on CPUs that were designed for human coding to now running on GPUs designed for AI coding, basically machine learning. And so the world has changed. The way we do computing, the whole stack, has changed. And as a result, the scale of the problems we could address has changed a lot, because if you can parallelize your software on one GPU, you've set the foundation to parallelize across a whole cluster, or maybe across multiple clusters or multiple data centers. And so I think we've set ourselves up to be able to scale computing, and develop software, at a level that nobody's ever imagined before. And so we're at the beginning of that. Over the next 10 years, our hope is that we could double or triple performance every year at scale, not at the chip level, at scale. And to be able therefore to drive the cost down by a factor of 2 or 3, drive the energy down by a factor of 2 or 3, every single year. When you double or triple every year, in just a few years it adds up. It compounds really, really aggressively. And so I wouldn't be surprised if, you know, the way people think about Moore's law, which is 2x every couple of years, we're gonna be on some kind of a hyper Moore's law curve. And I fully hope that we continue to do that. What do you think is the driver of making that happen even faster than Moore's law? Because I know Moore's law was sort of self-reflexive, right? It was something that he said, and then people kind of implemented it to make it happen. Yep. The two fundamental technical pillars, one of them was Dennard scaling and the other one was Carver Mead's VLSI scaling. And both of those were rigorous techniques, but those techniques have really run out of steam. And so now we need a new way of doing scaling. You know, obviously, the new way of doing scaling is all kinds of things associated with co-design: unless you can modify or change the algorithm to reflect the architecture of the system, and then change the system to reflect the architecture of the new software, and go back and forth, you have no hope. Mhmm. But if you can control both sides of it, you can do things like move from FP64 to FP32 to BF16 to FP8 to, you know, FP4 to who knows what, right? And so I think that co-design is a very big part of that. The second part of it, we call it full stack. The second part of it is data center scale.
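Before the data-center-scale point, a quick aside on the precision ladder Jensen just described. This is a minimal sketch, assuming PyTorch (the transcript names no framework), showing that the same matrix multiply run at narrower precision takes proportionally less memory, which is one half of the co-design bargain; the shapes here are arbitrary.

```python
# Hedged sketch: how stepping down the precision ladder shrinks memory and data movement.
# PyTorch is an assumption; matrix sizes are arbitrary.
import torch

x = torch.randn(2048, 2048)          # FP32 activations, 4 bytes per element
w = torch.randn(2048, 2048)          # FP32 weights

for dtype in (torch.float32, torch.bfloat16):
    xq, wq = x.to(dtype), w.to(dtype)
    y = xq @ wq                       # the same GEMM, just with narrower numbers
    mib = wq.numel() * wq.element_size() / 2**20
    print(f"{dtype}: {wq.element_size()} bytes/element, weights occupy {mib:.0f} MiB")

# FP8 (e.g. torch.float8_e4m3fn) and FP4 variants need recent hardware and scaled-matmul
# kernels, but the co-design idea is the same: halve the bits and you roughly halve the
# memory traffic and double effective throughput, provided the algorithm tolerates it.
```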
You know, unless you can treat the network as a compute fabric and push a lot of the work into the network, push a lot of the work into the fabric, and as a result do that compression at very large scale. And so that's the reason why we bought Mellanox and started fusing InfiniBand and NVLink in such an aggressive way. And now look where NVLink is gonna go. You know, the compute fabric is going to scale out what appears to be one incredible processor called a GPU. Now we've got hundreds of GPUs that are gonna be working together. You know, most of these computing challenges that we're dealing with now, one of the most exciting ones, of course, is inference-time scaling, which has to do with essentially generating tokens at incredibly low latency. Mhmm. Because you're self-reflecting, as you just mentioned. I mean, you're gonna be doing tree search, you're gonna be doing chain of thought, you're gonna be doing probably some amount of simulation in your head, you're gonna be reflecting on your own answers. You're gonna be prompting yourself and generating text to yourself, you know, silently, and still respond, hopefully, in a second. Well, the only way to do that is if your latency is extremely low. Meanwhile, the data center is still about producing high-throughput tokens, because, you know, you still want to keep the cost down, you want to keep the throughput high, you want to, right, generate a return. And so these two fundamental things about a factory, low latency and high throughput, are at odds with each other. And so in order for us to create something that is really great at both, we have to go invent something new, and NVLink is really our way of doing that. Now you have a virtual GPU that has an incredible amount of flops, because you need it for context, you need a huge amount of memory, working memory, and still have incredible bandwidth for token generation, all at the same time. I guess in parallel you also have all the people building the models optimizing things pretty dramatically. Like, David and my team pulled data where, over the last 18 months or so, the cost of a million tokens going into a GPT-4-equivalent model has basically dropped 240x. Yeah. And so there's just massive optimization and compression happening on that side as well. Just in our layer. Just on the layer that we work on. You know, one of the things that we care a lot about, of course, is the ecosystem of our stack and the productivity of our software. People forget that because you have CUDA as a foundation, and that's a solid foundation, everything above it can change. Mhmm. If the foundation's changing underneath you, it's hard to build a building on top. It's hard to create anything interesting on top. And so CUDA made it possible for us to iterate so quickly. Just in the last year, and we just went back and benchmarked it, since Llama first came out, we've improved the performance of Hopper by a factor of 5 without the layer on top ever changing. Now, a factor of 5 in one year is impossible using traditional computing approaches, but with accelerated computing and this way of co-design, we're able to integrate all kinds of new things. Yeah. How much are, you know, your biggest customers thinking about the interchangeability of their infrastructure between large-scale training and inference? Well, you know, infrastructure is disaggregated these days.
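As an aside, the latency-versus-throughput tension described above can be made concrete with a toy back-of-the-envelope model. Every number below is invented for illustration and does not describe any real system: batching more users raises the factory's aggregate tokens per second but slows down each individual user, and the only way to lift both is to shrink the fixed per-step cost, which is roughly the role Jensen assigns to NVLink-scale memory and bandwidth.

```python
# Toy model (illustrative numbers only) of the latency/throughput tension in token generation.
STEP_OVERHEAD_MS = 4.0      # assumed fixed cost per decode step (weight reads, sync)
PER_SEQ_COST_MS = 0.25      # assumed incremental cost per sequence in the batch

for batch in (1, 8, 64, 256):
    step_ms = STEP_OVERHEAD_MS + PER_SEQ_COST_MS * batch
    per_user_tok_s = 1000.0 / step_ms            # tokens/sec seen by one user
    factory_tok_s = per_user_tok_s * batch       # aggregate tokens/sec for the "factory"
    print(f"batch={batch:4d}  per-user {per_user_tok_s:6.1f} tok/s  "
          f"aggregate {factory_tok_s:8.0f} tok/s")
```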
Sam was just telling me that he decommissioned Volta just recently. They have Pascals, they have Amperes, and all different configurations of Blackwell coming. Some of it is optimized for air cooling, some of it's optimized for liquid cooling. Your services are gonna have to take advantage of all of this. The advantage that NVIDIA has, of course, is that the infrastructure that you build today for training will just be wonderful for inference tomorrow. And most of ChatGPT, I believe, is inferenced on the same type of systems that it was trained on just recently. And so if you can train on it, you can inference on it. And so you're leaving a trail of infrastructure that you know is gonna be incredibly good at inference, and you have complete confidence that you can then take the return on the investment that you've had and put it into new infrastructure to go scale with. You know you're gonna leave behind something of use, and you know that NVIDIA and the rest of the ecosystem are gonna be working on improving the algorithms so that the rest of your infrastructure improves by a factor of 5, you know, in just a year. And so that motion will never change. And so the way that people will think about the infrastructure is, yeah, even though I built it for training today, and it's gotta be great for training, we know it's gonna be great for inference. Inference is gonna be multi-scale. I mean, first of all, in order to distill smaller models, it's good to have a larger model to distill from. And so you're still gonna create these incredible frontier models. They're gonna be used, of course, for the groundbreaking work. You're gonna use them for synthetic data generation. You're gonna use the big models to teach smaller models and distill down to smaller models. And so there's a whole bunch of different things you could do, but in the end, you're gonna have giant models all the way down to little tiny models. The little tiny models are gonna be quite effective, you know, not as generalizable, but quite effective. And so they're gonna perform very specific tasks incredibly well, that one task. And we're gonna see superhuman performance on a task in one little tiny domain from a little tiny model. Maybe, you know, it's not a small language model but a tiny language model, TLMs or, you know, whatever. Yeah, so I think we're gonna see all kinds of sizes, and our hope is, right, it's just kind of like software today. I think in a lot of ways, artificial intelligence allows us to break new ground in how easy it is to create new applications, but everything about computing has largely remained the same. For example, the cost of maintaining software is extremely expensive. And once you build it, you would like it to run on as large of an installed base as possible. You would like not to write the same software twice. I mean, a lot of people still feel the same way. You'd like to take your engineers and move them forward. And so, to the extent that the architecture allows you, on the one hand, to create software today that runs even better tomorrow with new hardware, that's great. Or software that you create tomorrow, AI that you create tomorrow, runs on a large installed base, you think that's great. That way of thinking about software is not gonna change. NVIDIA has moved into larger and larger, let's say, units of support for customers. I think about it going from a single chip to, you know, a server to a rack, NVL72.
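A brief aside on the distillation idea mentioned above, big models teaching small ones. This is a minimal sketch, assuming PyTorch, of the standard temperature-softened distillation loss; the two Linear layers are placeholders standing in for real teacher and student networks, and the temperature value is an arbitrary assumption. Real frontier-model distillation also mixes in synthetic data generated by the teacher, as Jensen notes.

```python
# Minimal, hypothetical sketch of knowledge distillation (teacher -> student).
import torch
import torch.nn.functional as F

teacher = torch.nn.Linear(128, 1000)   # stand-in for a large, frozen model
student = torch.nn.Linear(128, 1000)   # stand-in for a small model being trained
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-3)
T = 2.0                                # softening temperature (assumed)

x = torch.randn(32, 128)               # a batch of inputs (synthetic here)
with torch.no_grad():
    teacher_logits = teacher(x)        # teacher provides soft targets

student_logits = student(x)
optimizer.zero_grad()
loss = F.kl_div(
    F.log_softmax(student_logits / T, dim=-1),
    F.softmax(teacher_logits / T, dim=-1),
    reduction="batchmean",
) * (T * T)                            # standard temperature scaling of the gradient
loss.backward()
optimizer.step()
```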
How do you think about that progression? Like, what's next? Should NVIDIA do full data centers? In fact, we do build full data centers. That's the way we build everything: if you're developing software, you need the computer in its full manifestation. We don't build PowerPoint slides and ship the chips. We build a whole data center. And until we get the whole data center built up, how do you know the software works? How do you know your fabric works, and all the things that you expected the efficiencies to be, how do you know it's gonna really work at scale? And that's the reason why it's not unusual to see somebody's actual performance be dramatically lower than their peak performance as shown in PowerPoint slides. Computing is just not what it used to be. You know, I say that the new unit of computing is the data center. So to us, that's what you have to deliver. That's what we build. We build a whole thing like that, and then, for every single thing, we have every combination: air cooled, x86, liquid cooled, Grace, Ethernet, InfiniBand, NVLink, no NVLink. You know what I'm saying? We build every single configuration. We have 5 supercomputers in our company today. Next year, we're gonna build easily 5 more. So if you're serious about software, you build your own computers. If you're serious about software, then you're gonna build your whole computer, and we build it all at scale. This is the part that is really interesting. We build it at scale and we build it vertically integrated. We optimize it full stack, and then we disaggregate everything and we sell it in parts. That's the part that is completely, utterly remarkable about what we do. The complexity of that is just insane. And the reason for that is we wanna be able to graft our infrastructure into GCP, AWS, Azure, OCI. All of their control planes and security planes are different, and all of the ways they think about their cluster sizing are different. And yet we make it possible for them all to accommodate NVIDIA's architecture, so that CUDA could be everywhere. That's really, in the end, the singular thought: we would like to have a computing platform that developers could use that's largely consistent, modulo, you know, 10% here and there, because people's infrastructures are optimized slightly differently, but everything they build will run everywhere. This is one of the principles of software that should never be given up, and we protect it quite dearly. It makes it possible for our software engineers to build once and run everywhere. And that's because we recognize that the investment in software is the most expensive investment, and it's easy to test: look at the size of the whole hardware industry and then look at the size of the world's industries. It's $100 trillion on top of this $1 trillion industry. And that tells you something. The software that you build, you basically have to maintain for as long as you shall live. We've never given up on a piece of software. The reason why CUDA is used is because, you know, I told everybody we will maintain this for as long as we shall live, and we're serious. And we still maintain it. I just saw a review the other day of NVIDIA Shield, our Android TV. It's the best Android TV in the world. We shipped it 7 years ago.
It is still the number one Android TV for anybody who enjoys TV. And we just updated the software just this last week, and people wrote a news story about it. GeForce, we have 300 million gamers around the world. We've never stranded a single one of them. And so the fact that our architecture is compatible across all of these different areas makes it possible for us to do it. Otherwise, we would have software teams a hundred times the size of our company today if not for this architectural compatibility. So we're very serious about that. And that translates to benefits to, you know, the developers. One impressive substantiation of that recently was how quickly you brought up a cluster for xAI. Yeah. Do you wanna talk about that? Because that was striking in terms of both the scale and the speed with which you did it. You know, a lot of that credit you gotta give to Elon. I think, first of all, to decide to do something, select the site, bring cooling to it, bring power to it, and then decide to build this 100,000-GPU supercluster, which is, you know, the largest of its kind in one unit. And then working backwards, we started planning together the date that he was gonna stand everything up, and the date that he was gonna stand everything up was determined quite a few months ago. And so all of the components, all the OEMs, all the systems, all the software integration we did with their team, all the network simulation. We simulated all the network configurations. I mean, it's like we pre-staged everything as a digital twin. We pre-staged all of the supply chain. We pre-staged all of the wiring of the networking. We even set up a small version of it, kind of, you know, a first instance of it, a ground truth, if you will, a reference zero, you know, a system zero, before everything else showed up. So by the time everything showed up, everything was staged, all the practicing was done, all the simulations were done, and then came the massive integration. Even then, the massive integration was a monument of gargantuan teams of humanity crawling over each other, wiring everything up 24/7. And within a few weeks, the clusters were up. I mean, it's really a testament to his willpower and how he's able to think through mechanical things, electrical things, and overcome what are apparently extraordinary obstacles. I mean, what was done there is the first time that a computer of that large a scale has ever been stood up at that speed. And then our two teams working together, the networking team, the compute team, the software team, the training team, the infrastructure team, from the electrical engineers to the software engineers, all working together. Yeah. It's really quite a feat to watch. Was there a challenge that felt most likely to be blocking from an engineering perspective? Just the tonnage of electronics that had to come together. I mean, it'd probably be worth just measuring it. I mean, it's, you know, tons and tons of equipment. It's just abnormal. Mhmm. You know, usually for a supercomputer system like that, you plan it for a couple of years, and from the moment that the first systems are delivered to the time that you've probably commissioned everything for some serious work, don't be surprised if it's a year, you know? I mean, that happens all the time. It's not abnormal. Now, we couldn't afford to do that.
So we created, you know, a few years ago, an initiative in our company called data center as a product. We don't sell it as a product, but we have to treat it like it's a product. Everything about planning for it, and then standing it up, optimizing it, tuning it, keeping it operational, the goal is that it should be, you know, kinda like opening up your beautiful new iPhone: you open it up and everything just kinda works. Now, of course, it's a miracle of technology to make it like that, but we now have the skills to do that. And so if you're interested in a data center, you just have to give me some space, some power, and some cooling, you know, and we'll help you set it up within, call it, 30 days. I mean, it's pretty extraordinary. That's wild. If you look ahead to 200,000, 500,000, a million GPUs in a supercluster, or whatever you call it at that point Mhmm. What do you think is the biggest blocker? Capital, energy, supply in one area? Everything. Nothing about the scales that you just talked about is normal. Yeah. But nothing is impossible. Nothing is, yeah. There are no laws-of-physics limits, but everything is gonna be hard. And of course, you know, is it worth it? Like you can't believe. You know, to get to something that we would recognize as a computer that is so easily and so able to do what we ask it to do, you know, otherwise known as general intelligence of some kind. And even, you know, even if we could argue about whether it's really general intelligence, just getting close to it is going to be a miracle. We know that. And so I think there are 5 or 6 endeavors to try to get there, right? I think, of course, OpenAI and Anthropic and X and, you know, of course, Google and Meta and Microsoft, and, you know, this frontier, the next couple of clicks up that mountain are just so vital. Mhmm. Who doesn't wanna be first up that mountain? I think the prize for reinventing intelligence altogether, it's just too consequential not to attempt it. And so I think that there are no laws of physics in the way. Everything is gonna be hard. A year ago, when we spoke together, we asked what applications you were most excited about that NVIDIA would serve next, in AI or otherwise, and you talked about how you let your most extreme customers sort of lead you there Yeah. And about some of the scientific applications. I think that's become, like, a much more mainstream view over the last year. Is it still, like, science and AI's application to science that most excites you? I love the fact that we have AI chip designers. Here at NVIDIA? Yeah. I love that we have AI software engineers. How effective are AI chip designers today? Super good. We couldn't build Hopper without them. And the reason for that is because they can explore a much larger space than we can, and because they have infinite time, they're running on a supercomputer. We have so little time using human engineers that we don't explore as much of the space as we should. And we also can't explore it combinatorially: I can't explore my space while including your exploration and your exploration. Our chips are so large, it's not like it's designed as one chip. It's designed almost like a thousand chips, and we have to optimize each one of them kind of in isolation. You really wanna optimize a lot of them together, and, you know, do cross-module co-design and optimize across a much larger space.
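As a hedged, toy illustration of what "exploring a much larger space" can look like in code: a search agent with a huge compute budget can simply evaluate far more candidate configurations than a human ever could. Everything below is made up, the configuration parameters, the cost function, and the plain random search; real EDA flows use far richer models and ML-guided optimization.

```python
# Toy design-space exploration sketch; all parameters and the cost model are fabricated.
import random

random.seed(0)
SEARCH_BUDGET = 10_000          # an agent with "infinite time" just raises this number

def toy_cost(cfg):
    """Pretend objective balancing area, power, and timing slack (arbitrary formula)."""
    area = cfg["macro_cols"] * cfg["macro_rows"] * 0.8
    power = cfg["vdd"] ** 2 * cfg["clock_ghz"] * 5.0
    slack = 1.0 / cfg["clock_ghz"] - 0.002 * cfg["macro_cols"]
    return area + power - 100.0 * max(slack, 0.0)

def random_cfg():
    return {
        "macro_cols": random.randint(4, 64),
        "macro_rows": random.randint(4, 64),
        "clock_ghz": random.uniform(1.0, 3.0),
        "vdd": random.uniform(0.6, 1.0),
    }

best = min((random_cfg() for _ in range(SEARCH_BUDGET)), key=toy_cost)
print("best toy configuration:", best, "cost:", round(toy_cost(best), 2))
```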
Obviously, we're gonna be able to find local maximums that are hidden behind local minimums somewhere. And so clearly, we can find better answers. You can't do that without AI engineers. We just simply can't do it. We just don't have enough time. One other thing that's changed since we last spoke, collectively, and I looked it up: at the time, NVIDIA's market cap was about $500 billion. It's now over $3 trillion. So over the last 18 months, you've added 2 and a half trillion plus of market cap, which effectively is $100 billion plus a month, or two and a half Snowflakes, or, you know, a Stripe plus a little bit, or however you wanna think about it. A country or two. A country or two. Obviously, a lot of things have stayed consistent in terms of focus on what you're building, etcetera. And, you know, walking through here earlier today, I felt the buzz. Like, when I was at Google 15 years ago, you felt the energy of the company and the vibe of excitement. What has changed during that period, if anything? What is different in terms of either how NVIDIA functions, or how you think about the world, or the size of bets you can take? Mhmm. Well, our company can't change as fast as the stock price. Let's just be clear about that. Yeah. And so in a lot of ways, we haven't changed that much. I think the thing to do is to take a step back and ask ourselves, what are we doing? I think that's really the big observation, realization, awakening for companies and countries: what's actually happening. I think, as we were talking about earlier, from our industry's perspective, we reinvented computing. It hasn't been reinvented for 60 years. That's how big of a deal it is. We've driven down the marginal cost of computing by probably a million x in the last 10 years, to the point that we just say, hey, let's just let the computer go exhaustively write the software. That's the big realization. And in a lot of ways, we were kinda saying the same thing about chip design. We would love for the computer to go discover something about our chips that we otherwise couldn't have done ourselves, explore our chips and optimize them in a way that we couldn't do ourselves, in the way that we would love for digital biology or, you know, any other field of science. And so I think people are starting to realize, one, we reinvented computing. But what does that even mean? All of a sudden we created this thing called intelligence, and what happened to computing? Well, we went from data centers, data centers that are multi-tenant and store our files. These new data centers we're creating are not data centers. They're not multi-tenant; they tend to be single-tenant. They're not storing any of our files. They're producing something. They're producing tokens. And these tokens are reconstituted into what appears to be intelligence. Isn't that right? Mhmm. And intelligence of all different kinds. You know, it could be articulation of robotic motion. It could be sequences of amino acids. It could be chemical chains. It could be all kinds of interesting things. Right? So what are we really doing? We've created a new instrument, a new machinery, that in a lot of ways is the noun of the adjective generative AI. You know, instead of generative AI, it's an AI factory. It's a factory that generates AI. And we're doing that at extremely large scale.
And what people are starting to realize is, you know, maybe this is a new industry. It generates tokens. It generates numbers. But these numbers reconstitute into something that is fairly valuable. And what industry would benefit from it? Then you take a step back and you ask yourself again, what's going on at NVIDIA? On the one hand, we reinvented computing as we know it, and so there's a $1 trillion of infrastructure that needs to be modernized. That's just one layer of it. The bigger layer of it is that this instrument we're building is not just for data centers, which we're modernizing, it's being used for producing some new commodity. And how big can this new commodity industry be? Hard to say, but it's probably worth trillions. And so, if you were to take a step back, that I think is kind of it: you know, we don't build computers anymore, we build factories. Mhmm. And every country is gonna need it. Every company is gonna need it. You know, give me an example of a company or industry that says, you know what? We don't need to produce intelligence. We've got plenty of it. Mhmm. And so that's the big idea, I think, and that's kind of an abstracted, industrial view. And, you know, someday people will realize that in a lot of ways, the semiconductor industry wasn't about building chips, it was about building the foundational fabric for society. And then all of a sudden everybody goes, ah, I get it. This is a big deal. It's not just about chips. How do you think about embodiment now? Well, the thing I'm super excited about is, in a lot of ways, we're close to artificial general intelligence, but we're also close to artificial general robotics. Tokens are tokens. I mean, the question is, can you tokenize it? You know, of course, tokenizing things is not easy, as you guys know. But if you were able to tokenize things, align it with large language models and other modalities, if I can generate a video that has Jensen reaching out to pick up the coffee cup, why can't I prompt a robot to generate the tokens to pick up the coffee cup, you know? And so intuitively, you would think that the problem statement is rather similar for a computer. And so I think that we're that close. That's incredibly exciting. Now, the two brownfield robotic systems, brownfield meaning that you don't have to change the environment for them, are self-driving cars, with digital chauffeurs, and embodied humanoid robots, right? Between the cars and the humanoid robots, we could literally bring robotics to the world without changing the world, because we built the world for those two things. It's probably not a coincidence that Elon is focused on those two forms of robotics, because they're likely to have the largest potential scale. And so I think that's exciting. But the digital version of it is equally exciting. You know, we're talking about digital or AI employees. There's no question we're gonna have AI employees of all kinds, and our workforce will be some biological and some artificial intelligence, and we will prompt them in the same way. Isn't that right? Mostly, I prompt my employees: you know, I provide them context, ask them to perform a mission. They go and recruit other team members. They come back, and we go back and forth. How's that gonna be any different with digital and AI employees of all kinds? So we're gonna have AI marketing people, AI designers, AI supply chain people, AI, you know, everything.
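Circling back to the "tokens are tokens" point about robotics: one simple reading of tokenizing actions is binning continuous joint commands into a discrete vocabulary that a language-model-style decoder could predict. This is a hedged sketch only; the bin count and joint range are arbitrary assumptions, and real systems use more sophisticated schemes such as action chunking or learned codebooks.

```python
# Illustrative sketch of discretizing robot actions into tokens; all constants are assumed.
import numpy as np

NUM_BINS = 256                             # vocabulary size per joint (assumed)
LOW, HIGH = -3.14, 3.14                    # assumed joint-angle range in radians

def actions_to_tokens(actions: np.ndarray) -> np.ndarray:
    """Map continuous joint angles to integer token ids in [0, NUM_BINS)."""
    clipped = np.clip(actions, LOW, HIGH)
    return np.round((clipped - LOW) / (HIGH - LOW) * (NUM_BINS - 1)).astype(np.int64)

def tokens_to_actions(tokens: np.ndarray) -> np.ndarray:
    """Invert the mapping (up to quantization error)."""
    return tokens / (NUM_BINS - 1) * (HIGH - LOW) + LOW

arm_command = np.array([0.10, -1.25, 2.0, 0.0])   # e.g. four hypothetical joint targets
tokens = actions_to_tokens(arm_command)
print(tokens, tokens_to_actions(tokens))          # round-trips within ~0.013 rad
```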
And I'm hoping that NVIDIA is someday biologically bigger, but also, from an artificial intelligence perspective, much, much bigger. That's our future company. If we came back and talked to you a year from now, what part of the company do you think would be most artificially intelligent? I'm hoping it's chip design. Okay, the most important part. That's right. Because I should start where it moves the needle most, and also where we can make the biggest impact. You know, it's such an insanely hard problem. I work with Sassine at Synopsys and Anirudh at Cadence. I can totally imagine them having Synopsys chip designers that I can rent. They know something about a particular module, their tool, and they've trained an AI to be incredibly good at it, and we'll just hire a whole bunch of them whenever we need them, when we're in that phase of chip design. You know, I might rent a million Synopsys engineers to come and help me out, and then go rent a million Cadence engineers to help me out. And what an exciting future for them, that they have all these agents that sit on top of their tools platform, that use the tools platform and collaborate with other platforms. And Christian will do that at SAP, and Bill will do that at ServiceNow. People say that these SaaS platforms are gonna be disrupted. I actually think the opposite: they're sitting on a gold mine, and there's gonna be this flourishing of agents that are gonna be specialized in Salesforce, specialized in, you know, I think they call it Lightning, and SAP has ABAP. Everybody's got their own language, is that right? And we've got CUDA, and we've got OpenUSD for Omniverse. And who's gonna create an AI agent that's awesome at OpenUSD? We are, you know, because nobody cares about it more than we do. And so I think in a lot of ways, these platforms are gonna be flourishing with agents, and we're gonna introduce them to each other, and they're gonna collaborate and solve problems. You see a wealth of different people working in every domain in AI. What do you think is under-noticed, or what do you want more entrepreneurs or engineers or business people to go work on? Well, first of all, I think what is misunderstood, and maybe underestimated, is the under-the-surface activity of groundbreaking science, from computer science to science and engineering, that is being affected by AI and machine learning. I think you just can't walk into a science department anywhere, a theoretical math department anywhere, where AI and machine learning and the type of work that we're talking about today isn't going to transform what they do tomorrow. If you take all of the engineers in the world, all of the scientists in the world, and you say that the way they're working today is an early indication of the future, because obviously it is, then you're gonna see a tidal wave of generative AI, a tidal wave of AI, a tidal wave of machine learning change everything that we do in some short period of time. Now remember, I saw the early indications of computer vision, the work with Alex and Ilya and Hinton in Toronto, and Yann LeCun, and of course Andrew Ng here at Stanford. You know, I saw the early indications of it, and we were fortunate to have extrapolated from what was observed to be detecting cats into a profound change in computer science, in computing altogether.
That extrapolation was fortunate for us. And of course, we were so excited by it, so inspired by it, that we changed everything about how we did things. But that took how long? It took literally 6 years from observing that toy, AlexNet, which I think by today's standards would be considered a toy, to superhuman levels of capability in object recognition. Well, that was only a few years. What is happening right now is a groundswell in all of the fields of science, not one field of science left behind. I mean, just to be very clear, okay? Everything from quantum computing to quantum chemistry, you know, every field of science is involved in the approaches that we're talking about. They've been at it for a couple, two, three years. If we give ourselves another couple, two, three years, the world's gonna change. There's not gonna be one paper, there's not gonna be one breakthrough in science, one breakthrough in engineering, where generative AI isn't at the foundation of it. I'm fairly certain of it. And so, you know, there are a lot of questions; every so often I hear about whether this is a fad. Mhmm. You just gotta go back to first principles and observe what is actually happening. The computing stack, the way we do computing, has changed. If the way you write software has changed, I mean, that is pretty core. Mhmm. Software is how humans encode knowledge. This is how we encode our algorithms. We encode it in a very different way now. That's gonna affect everything. Nothing else will ever be the same. And so I think I'm talking to the converted here, and we all see the same thing. And all the startups that, you know, you guys work with, and the scientists I work with, and the engineers I work with, nothing will be left behind. I mean, we're gonna take everybody with us. I think one of the most exciting things, coming from, like, the computer science world and looking at all these other fields of science, is, like, I can go to a robotics conference now Yeah. A materials science conference Oh, yeah. A biotech conference. And I'm like, oh, I understand this. You know? Not at every level of the science, but in the driving of discovery, it's the algorithms that are general. And there are some universal unifying concepts. Mhmm. Yeah. Yeah. And I think that's, like, incredibly exciting when you see how effective it is in every domain. Yep. Absolutely. Yeah. And I'm so excited that I'm using it myself every day. You know, I don't know about you guys, but it's my tutor now. I mean, I don't learn anything without first going to an AI. Mhmm. You know? Why learn the hard way? Just go directly to an AI. Yeah. I go directly to ChatGPT, or, you know, sometimes I do Perplexity, just depending on the formulation of my questions. And I just start learning from there. And then you can always fork off and go deeper if you like. But holy cow, it's just incredible. And almost everything I know, I double-check. Mhmm. Even though I know it to be a fact, you know, what I consider to be ground truth, I'm the expert, I'll still go to an AI and check. Let me double-check. Yeah. Yeah. It's so great. Almost everything I do, I involve it. Yeah. I think it's a great note to stop on. Yeah. Thanks so much for your time today. Yeah. I really enjoyed it. Nice to see you guys. Thanks, Jensen. Find us on Twitter at No Priors Pod.
Subscribe to our YouTube channel if you wanna see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.