Which, I'm not gonna be able to say much on that; I'm not quite up to speed on it. When we were training this model, we didn't expect it to come out nearly as powerful as it did or to have all the capabilities that it does. In fact, it was just part of the scaling ladder experiments. It was the next step.

But, you know, we definitely messed up on the image generation, and I think it was mostly due to just, like, not thorough testing. And it definitely, for good reasons, upset a lot of people, on the images you might have seen. I think the images prompted a lot of people to really deeply test the base text models. And the text models have two separate effects going on. One thing is, quite honestly, if you deeply test any text model out there, whether it's ours, ChatGPT, Grok, what have you, it'll say some pretty weird things that, you know, definitely feel far left, for example. And kind of any model, if you try hard enough, can be prompted into that. But also, just to be fair, there were definitely issues with that model where, once again, we haven't fully understood why it leans left in many cases, and that's not our intention. But if you try it starting over this last week, it should be at least 80% better on the test cases that we've covered. So I'm glad you're all going to try it; this shouldn't be a big effect. To follow up on that, Gemini 1.5 Pro, which isn't in the sort of public-facing app, the thing we used to call Bard, should not have that effect, except for the general effect that if you sort of red team any AI model, you're gonna get weird corner cases. But even though this one hasn't been sort of thoroughly tested that way, we don't expect it to have strong particular leanings. I suppose you could give it a go. But we're more excited for you to try it today for the long context and some of the other technical features.

But, no, I mean, the multimodal, both in and out, is very exciting, with video, audio. I mean, we've run early experiments, and it's an exciting field. You guys remember the duck video that kinda got us in trouble. To be fair, it was fully disclosed that the video wasn't real time. But that is something that we've actually done, embedded images and, you know, things like that. So, yeah, that's super exciting. I don't know if we have anything, like, real time to present right now today.

Yeah. Are you personally writing code for some projects? I have been actually writing code, to be perfectly honest. It's not, like, code that you would be very impressed by. But, yeah, every once in a while it's a little, like, kind of debugging, or just trying to understand for myself how a model works, or, you know, to analyze the performance in a slightly different way or something like that. Little bits and pieces that make me feel connected. I don't think you would be very technically impressed by it. But it's nice to be able to play with that. And sometimes I'll use the AI bots to write the code for me, because I'm rusty, and they actually do a pretty good job. So I'm very pleased with that.

What can I say about game engines? I think, obviously, like, on the graphics side, you can do new and interesting things with game engines.
But I think maybe the more interesting part is the interaction with the, you know, virtual players or things like that, whatever the characters are. I guess these days you can call them, like, NPCs or whatever. But in the future, maybe NPCs will be actually very powerful and interesting. So I think that's, like, a really rich possibility. I'm probably not enough of a gamer to think through all the possibilities.

Yeah, what kind of applications am I most excited about? I mean, I think, just testing right now, for the version we have out, 1.5 Pro, long context is something to really experiment with. Whether you dump a ton of code in there, or video. I mean, I've seen people do it, and I didn't think the model could do this, to be perfectly honest, but people, like, dump their code and a video of the app and say, hey, here's the bug, and the model will figure out where the bug is in the code, which is kind of mind blowing that that works at all. I honestly don't really understand how the model does that. I don't think you should necessarily do exactly that thing, but, you know, experiment with things that really require a lot more context.

Do we have the serving capacity for all these people here banging on that stuff? We have people on the phone about serving here as well. Okay, because my phone is buzzing.

You've stressed a couple of times that you're not sure how this model works, or you weren't sure that it could do the thing that it does. Do you think we can reach a point where we actually understand how these models work, or will they remain black boxes that we just trust? No, I think you can learn to understand them. I mean, the fact is that when we train these things, there are a thousand different capabilities you could try out. So on the one hand, it's very surprising that it can do it. On the other hand, for any particular one capability, you can go back and, you know, look at where the attention is going in each layer between, like, the code and the video, and deeply analyze it. Now, I haven't personally done that, and I don't know how far along the researchers have gone in doing that kind of thing. But, you know, it takes a huge amount of time and study to really slice apart why a model is able to do something. And, honestly, most of the time that I see that kind of slicing, it's about why it's not doing something. So I guess I would say it's not that we couldn't understand it; people probably could. But most of the effort is spent figuring out where it goes wrong, not where it goes right.

Yeah, I think it's very exciting to, you know, have these things actually improve themselves. I remember when I was, I think, in grad school, I wrote this game where it was like a maze of walls you're flying through, and you shot the walls. The walls corresponded to bits of memory, and it would just, like, flip those bits, and the goal was to crash it as quickly as possible. Which doesn't really answer your question, but that was an example of self-modifying code, I guess, not for a particularly useful purpose. But, anyway, I think, you know, open loop could work for certain very limited domains today. Like, without a human in the loop to guide it, I bet it could actually do some kind of continued improvement. But I don't think we're quite at the stage where it works for, I don't know, real serious code bases.
And, first of all, a million tokens of context is not actually enough for a big code base, to put in the entire code base. But you could do, like, retrieval and then do the editing on top of that. I guess I haven't personally played with it enough, but I haven't seen it be at the stage today where a complex sort of piece of code will just iteratively improve itself. But it's a great tool. And like I said, with human assistance, we for sure do that. I mean, people will use Gemini to, like, try to do something with the Gemini code, today. But not very open-loop, deeply sophisticated things, I guess.

I think the question was about, like, chip development or something like that. I'm not an expert in chip development, but I don't get the sense that it's just something where you can, like, sort of pour money, even huge amounts of money, in and get chips out. But again, I'm not an expert.

Oh, the training costs of models are super high? Yeah, the training costs are definitely high, and, you know, that's something companies like us have to cope with. But I think, you know, the long-term utility is incomparably higher. Like, if you kind of measure it at a human productivity level, you know, to save somebody an hour of work over the course of the week, that hour is a lot, and there are a lot of people using these things or who will be using them. But, yeah, it's a big bet on the future.

Model training on device? Model running on device. Yeah, model running on device, we've shipped that to, I think, Android, Chrome, and, yeah, Pixel phones. I think even Chrome runs a pretty decent model these days. We just open sourced Gemma, which is pretty small, I think a couple billion parameters, I can't remember. Yeah, I think that's really useful. You know, it can be low latency, you're not dependent on connectivity, and small models can call bigger models in the cloud too. So I think on-device is a really good idea.

Yes, what are some verticals slash industries that you feel like this is generally gonna have a big impact on? Oh, which industries do I still think have a big opportunity? I think that's just, like, very hard to predict. I mean, there are sort of the obvious industries that people think of, sort of customer service, or kind of just, like, you know, analyzing, I don't know, different legal documents, kind of workflow automation, I guess. Those are obvious, but I think there are gonna be non-obvious ones, which I can't predict, especially as you look at these multimodal models and the surprising capabilities that they have. And I feel like, I mean, that's why we have all of you here. You guys are the creative ones to figure that out.

There's this 10% idea, that 20% idea, and, like, month after month, that adds up. I think our TPUs are actually pretty damn good at inferencing, not to, like, diss the GPUs, but for certain inference workloads they're just configured really nicely. And the other big effect is, actually, we're able to make smaller models more and more effective just with new generations, just whatever architectural changes, training changes, all kinds of things like that. So the models are getting more powerful even at the same size. So I would not expect that to get worse.
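For anyone who wants to try the long-context, multimodal debugging trick mentioned earlier (dumping code plus a screen recording and asking where the bug is), here is a minimal sketch. It assumes the google-generativeai Python SDK, a GOOGLE_API_KEY environment variable, and placeholder file and model names (app_bug_recording.mp4, app/main.py, gemini-1.5-pro-latest); it illustrates the workflow described in the talk, not an official recipe from it.

```python
# Minimal sketch: "code + screen recording -> where's the bug?"
# Assumes: `pip install google-generativeai`, a GOOGLE_API_KEY env var,
# and placeholder paths/model name; adjust to your own project.
import os
import time
from pathlib import Path

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Upload the screen recording that shows the app misbehaving.
video = genai.upload_file(path="app_bug_recording.mp4")
while video.state.name == "PROCESSING":  # wait for the upload to finish processing
    time.sleep(5)
    video = genai.get_file(video.name)

# Dump the (small) codebase into the prompt as plain text.
code = Path("app/main.py").read_text()

model = genai.GenerativeModel("gemini-1.5-pro-latest")
response = model.generate_content([
    video,
    "Here is the app's source code:\n\n" + code,
    "The video shows a bug in this app. Point to the part of the code "
    "that most likely causes it and explain why.",
])
print(response.text)
```

For anything bigger than a file or two you would concatenate more of the repo into the prompt, and for truly large code bases the retrieval-then-edit approach mentioned above is the more realistic path.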