[00:00] (0.88s)
How impressive is Grok 4 for you?
[00:03] (3.12s)
If you look at the AIME benchmark, which
[00:05] (5.60s)
is an advanced math quiz, Grok 4 scored
[00:08] (8.64s)
100% on it.
[00:10] (10.72s)
You're literally running out of
[00:11] (11.84s)
benchmarks.
[00:13] (13.84s)
It's got to be driving uh Google nuts
[00:16] (16.08s)
that Elon got this done in 28 months
[00:18] (18.32s)
from a cold start.
[00:20] (20.00s)
When he said he was going to put this
[00:21] (21.36s)
huge cluster together, every AI expert
[00:24] (24.00s)
in the world said, "You cannot get power
[00:26] (26.24s)
laws and coherence at that scale. You
[00:28] (28.08s)
just can't do it." Every AI expert is
[00:30] (30.08s)
like, "Oh, god dang, he did it."
[00:32] (32.08s)
The amount of compute and resources
[00:34] (34.08s)
again are going exponential. Now, it's
[00:36] (36.16s)
the real quality that differentiates the
[00:38] (38.16s)
top models between each other.
[00:39] (39.76s)
My big question is, where do we go from here?
[00:46] (46.08s)
Now, that's a moonshot, ladies and
[00:47] (47.68s)
gentlemen.
[00:51] (51.20s)
Everybody, welcome to Moonshots. An
[00:53] (53.04s)
episode of WTF just happened in tech
[00:55] (55.52s)
this week. Special episode today
[00:57] (57.92s)
following the release of Grok 4. It is
[01:01] (61.04s)
large language model release month. An
[01:03] (63.76s)
extraordinary string of new models
[01:06] (66.24s)
coming up. I'm here with my moonshot
[01:08] (68.24s)
mates Dave Blundin, the head of Link
[01:11] (71.84s)
XPV, Salim Ismail, the CEO of OpenExO,
[01:16] (76.64s)
and a special guest to help us dissect
[01:18] (78.32s)
all of this is Emad Mostaque, the
[01:20] (80.24s)
founder of Intelligent Internet. Guys,
[01:24] (84.32s)
um it was a pretty epic day yesterday.
[01:26] (86.48s)
Good to see you all. Pleasure to have you.
[01:28] (88.64s)
Yeah, likewise.
[01:29] (89.76s)
Yeah. And this is our special Grok 4
[01:32] (92.00s)
edition. Um, Emad, you're in London.
[01:37] (97.20s)
Yep. Fantastic. And, uh, Salim, where
[01:42] (102.00s)
on the planet are you, buddy?
[01:43] (103.60s)
Uh New York.
[01:46] (106.64s)
Dave's in Boston. I'm in Santa Monica.
[01:49] (109.20s)
All right, let's get going. So, just to
[01:51] (111.52s)
jump in. The goal here is to dissect what
[01:53] (113.68s)
happened yesterday, blow-by-blow;
[01:56] (116.96s)
what's Grok 4 all about, and just to, you
[01:59] (119.84s)
know, foreshadow what's coming. We've got a
[02:02] (122.24s)
few new model releases coming with
[02:03] (123.92s)
Gemini 3, GPT-5,
[02:06] (126.88s)
you know and probably a few others so
[02:10] (130.56s)
let's kick it off with this video like
[02:12] (132.96s)
Grok 4 is postgraduate, like PhD level in
[02:18] (138.16s)
everything. Better than PhD, but like most
[02:20] (140.08s)
PhDs would fail
[02:21] (141.92s)
So it's better said I mean at least with
[02:24] (144.64s)
respect to academic questions it I want
[02:27] (147.44s)
to just emphasize this point with
[02:29] (149.12s)
respect to academic questions, Grok 4 is
[02:32] (152.00s)
better than PhD level in every subject
[02:34] (154.72s)
no exceptions
[02:37] (157.44s)
um now this doesn't mean that it's it
[02:40] (160.72s)
you know, at times it may lack common sense
[02:43] (163.20s)
and it has not yet invented new
[02:45] (165.52s)
technologies or discovered new physics
[02:49] (169.20s)
but that is just a matter of time.
[02:51] (171.60s)
Um, I think it may discover new
[02:54] (174.48s)
technologies
[02:56] (176.00s)
uh as soon as later this year. Um and I
[03:00] (180.24s)
I would be shocked if it has not done so
[03:01] (181.92s)
by next year.
[03:02] (182.64s)
All right, Dave, you want to take the
[03:04] (184.32s)
first bite of this?
[03:05] (185.44s)
Yeah, it's awesome. This is actually a
[03:06] (186.80s)
golden moment in time because uh it is
[03:09] (189.60s)
an absolutely brilliant assistant that
[03:11] (191.68s)
can do almost anything you want it to
[03:13] (193.12s)
do, but like Elon said, it's not
[03:15] (195.76s)
reasoning yet. So it's not coming up
[03:18] (198.00s)
with the fundamental this is what we
[03:20] (200.24s)
should build and this is why. So that's
[03:22] (202.88s)
still in the hands of the creator, the
[03:24] (204.56s)
human operator. And so this this moment
[03:27] (207.84s)
in time is actually really really
[03:29] (209.12s)
golden. It feels just like an Iron Man
[03:31] (211.28s)
movie where you've got Jarvis. Jarvis
[03:33] (213.44s)
will build the suit for you. You have to
[03:35] (215.20s)
decide how you're going to save the
[03:36] (216.40s)
world. Uh it's it's a really really fun
[03:40] (220.48s)
time to be using these brand new like
[03:43] (223.28s)
like you said, there'll be three of
[03:44] (224.40s)
these in the next month or so. This is
[03:46] (226.80s)
the first round and uh he's dead right
[03:49] (229.68s)
you know the PhD level solution it's all
[03:51] (231.76s)
measured in the in the benchmarks we'll
[03:53] (233.20s)
get into in a minute uh but it it does
[03:56] (236.48s)
virtually anything mind-blowing
[03:58] (238.16s)
capabilities but it doesn't decide what
[04:00] (240.48s)
to do and why
[04:01] (241.68s)
uh I would love your take on this you've
[04:04] (244.24s)
been plugged into this world you know
[04:08] (248.00s)
intimately for a while how impressive is
[04:10] (250.88s)
Grok 4 for you?
[04:12] (252.96s)
I think it is very impressive I think
[04:14] (254.56s)
you know picking up what Dave said I I
[04:16] (256.00s)
think it is reasoning but it's not
[04:17] (257.68s)
planning as yet. And there was a
[04:20] (260.16s)
question as to whether, when we got to this
[04:22] (262.00s)
ronnaflop level, I think that's the term, like
[04:24] (264.72s)
10 to the 28 I think, flops,
[04:30] (270.24s)
would we continue to see improvements
[04:32] (272.48s)
and part of that is the compute and part
[04:34] (274.00s)
of that is the data as we'll get to
[04:35] (275.60s)
later and the answer is yes and again
[04:38] (278.72s)
like Elon said getting above graduate
[04:41] (281.36s)
level in every sub postgraduate level in
[04:43] (283.84s)
every subject it can now execute and it
[04:46] (286.40s)
can reason it doesn't have planning yet.
[04:50] (290.32s)
So, I mean, isn't that AGI? Isn't
[04:52] (292.72s)
that the sort of like kind of definition
[04:54] (294.72s)
of AGI? We passed through the
[04:56] (296.88s)
Turing test without noticing. Are we
[04:59] (299.28s)
going to pass through AGI without
[05:00] (300.80s)
noticing, too?
[05:01] (301.68s)
It's like this hedonic adaptation.
[05:03] (303.28s)
You're like, of course, it's fine. You
[05:04] (304.88s)
know, but already again, if you want to
[05:07] (307.28s)
get a job done, it will do the job for
[05:09] (309.68s)
you of summarizing a book. Like, it will
[05:12] (312.16s)
do the job for you of like writing a
[05:15] (315.60s)
summary of something or translating,
[05:17] (317.60s)
etc. And life is just the same so far
[05:20] (320.88s)
because you haven't got that final step
[05:22] (322.56s)
that Dave said and there's a few extra
[05:24] (324.32s)
bits that we need for full agentic
[05:27] (327.04s)
above that. But we're nearly there
[05:29] (329.44s)
because we have that final building
[05:31] (331.20s)
block now with this next level of model.
[05:34] (334.40s)
Where it's reliable
[05:35] (335.44s)
distinction, by the way, that it is
[05:37] (337.20s)
reasoning. It has to be to solve these
[05:38] (338.96s)
really hard PhD-level problems, but
[05:40] (340.96s)
it's not planning. That's a great way to
[05:42] (342.88s)
phrase it. Uh, a ronnaflop is 10 to the
[05:45] (345.92s)
27th. So that's the scale of these
[05:48] (348.16s)
algorithms.
[05:49] (349.12s)
That was the level the
[05:50] (350.96s)
AI Act said they wanted to ban, by the
[05:52] (352.80s)
way. So this would be the first banned
[05:54] (354.56s)
model. Yeah, that's a great point. First
[05:56] (356.80s)
I think one of the things that's
[05:58] (358.00s)
happening is the absolute beauty of
[05:59] (359.68s)
capitalism where you've got big
[06:01] (361.76s)
juggernaut companies fighting it out for
[06:04] (364.32s)
supremacy and taking massive
[06:07] (367.36s)
risks um choosing design paths taking
[06:10] (370.56s)
huge gambles and really, really going for
[06:12] (372.72s)
it. I think it's really magical to
[06:15] (375.36s)
watch this happening.
[06:16] (376.56s)
Yeah, I love this tweet from Sawyer
[06:18] (378.24s)
Merritt. It says xAI was founded in
[06:20] (380.24s)
March of 2023. Just 28 months later,
[06:23] (383.92s)
it's now the number one model in the
[06:26] (386.16s)
world, verified by independent testing.
[06:28] (388.72s)
Incredible achievement. I mean, it is
[06:31] (391.60s)
insanely fast compared to everything
[06:34] (394.40s)
else that's being built. I remember when
[06:36] (396.88s)
uh in May two years ago when Elon was
[06:40] (400.72s)
first raising money and I had a chance
[06:42] (402.72s)
to sit in on an investor pitch in the
[06:45] (405.76s)
first round for xAI, and he said, I'm
[06:48] (408.24s)
going to have 100,000 GPUs H100s
[06:51] (411.20s)
operating by the end of the
[06:53] (413.04s)
summer and everybody's like no no
[06:55] (415.20s)
freaking way. Uh and he did just that.
[06:58] (418.64s)
Um and he's not slowed down. So here we
[07:02] (422.00s)
see in this image the Artificial Analysis
[07:05] (425.04s)
Intelligence Index:
[07:07] (427.36s)
Grok 3 was placing like fifth or sixth;
[07:11] (431.60s)
Grok 4 leaps to the front of the line. Uh,
[07:14] (434.88s)
are we going to continue seeing this
[07:16] (436.24s)
Emad, you know, this just leapfrogging each
[07:18] (438.72s)
other, leapfrogging each other. Is there
[07:20] (440.72s)
no end in sight?
[07:22] (442.72s)
it's getting very difficult because if
[07:24] (444.56s)
you look at the benchmarks they have
[07:25] (445.92s)
there. If you look at the AIME benchmark,
[07:28] (448.80s)
which is an advanced math quiz
[07:31] (451.28s)
Grok 4 scored 100% on it.
[07:36] (456.48s)
I mean, so you're literally running out
[07:38] (458.80s)
of benchmarks uh in order to do that.
[07:41] (461.60s)
And the amount of compute and resources
[07:44] (464.08s)
again are going exponential uh because
[07:47] (467.76s)
you need to to squeeze that out as well
[07:49] (469.52s)
as have good data as well as have good
[07:51] (471.20s)
algorithms. So before you could just
[07:53] (473.04s)
chuck everything into a pot, slush it
[07:54] (474.56s)
around. Now it's the real quality that
[07:57] (477.12s)
differentiates the top models between
[07:58] (478.80s)
each other. And it's become more of an
[08:00] (480.48s)
engineering and quality challenge than
[08:02] (482.64s)
just a brute force challenge.
[08:05] (485.36s)
Insane.
[08:06] (486.08s)
Can I pose a "so what?" for a second,
[08:08] (488.00s)
please?
[08:08] (488.48s)
Um, okay. So, I've got a problem. I
[08:11] (491.28s)
would suggest that if I'm trying to
[08:13] (493.12s)
answer that problem or get a solution to
[08:15] (495.12s)
it, I could go to any of these and
[08:16] (496.72s)
they're going to give me
[08:18] (498.32s)
roughly the same answer. Yes.
[08:20] (500.24s)
So, we're at a point where the new
[08:23] (503.52s)
step is I'd love to I want to get into
[08:25] (505.84s)
the details of Grok to figure out why
[08:27] (507.92s)
is it so radically different from any of
[08:30] (510.56s)
the others, right? And I that's where I
[08:32] (512.64s)
think the fun will come.
[08:34] (514.00s)
Every week I study the 10 major tech
[08:36] (516.48s)
meta trends that will transform
[08:38] (518.40s)
industries over the decade ahead. I
[08:40] (520.40s)
cover trends ranging from humanoid
[08:42] (522.16s)
robots, AGI, quantum computing,
[08:44] (524.40s)
transport, energy, longevity, and more.
[08:47] (527.60s)
No fluff, only the important stuff that
[08:50] (530.24s)
matters, that impacts our lives and our
[08:52] (532.56s)
careers. If you want me to share these
[08:54] (534.64s)
with you, I write a newsletter twice a
[08:56] (536.88s)
week, sending it out as a short
[08:58] (538.72s)
two-minute read via email. And if you
[09:00] (540.96s)
want to discover the most important
[09:02] (542.48s)
metatrends 10 years before anyone else,
[09:04] (544.88s)
these reports are for you. Readers
[09:06] (546.72s)
include founders and CEOs from the
[09:08] (548.80s)
world's most disruptive companies and
[09:11] (551.04s)
entrepreneurs building the world's most
[09:13] (553.12s)
disruptive companies. It's not for you
[09:15] (555.84s)
if you don't want to be informed of
[09:17] (557.60s)
what's coming, why it matters, and how
[09:20] (560.00s)
you can benefit from it. To subscribe
[09:21] (561.84s)
for free, go to diamandis.com/metatrends.
[09:25] (565.36s)
That's diamandis.com/metatrends
[09:28] (568.32s)
to gain access to trends 10 plus years
[09:31] (571.12s)
before anyone else. Well, the funny
[09:33] (573.76s)
thing is we're using, you know, we're
[09:35] (575.20s)
basically going to Einstein, you know,
[09:37] (577.60s)
and asking him to summarize a uh a poem
[09:40] (580.48s)
for us. I mean, it's like there's such
[09:42] (582.56s)
massive level intelligence and the
[09:44] (584.16s)
utilization for the general public is
[09:46] (586.88s)
de minimis. All right, let's look at
[09:48] (588.96s)
what's next on this. Uh, so Grok
[09:51] (591.52s)
outperforms the uh the highest level
[09:55] (595.60s)
test, Humanity's Last Exam. Uh, up until
[09:58] (598.24s)
now, we've seen, uh, o3 was at 21%,
[10:03] (603.36s)
Grok 4 was at 25.4%,
[10:06] (606.24s)
Gemini 2.5 at 26.9%.
[10:09] (609.44s)
And then Grok 4, and then Grok 4 Heavy,
[10:12] (612.40s)
comes in at 44.4%.
[10:15] (615.12s)
Uh we were talking about this a little
[10:17] (617.28s)
bit earlier you know can you speak to
[10:20] (620.16s)
Humanity's Last Exam for us?
[10:22] (622.08s)
Yeah, this was, um, come up with by Scale AI and
[10:25] (625.52s)
kind of a few others um to have an exam
[10:28] (628.56s)
that even the most polymathic people in
[10:31] (631.44s)
the world would find difficult. So they
[10:33] (633.28s)
estimated that like some of the smartest
[10:34] (634.88s)
people in the world would score maybe 5%
[10:36] (636.48s)
on it maximum 10% and the top models at
[10:39] (639.84s)
the time which was probably like half a
[10:42] (642.24s)
year ago 9 months ago scored 8%. Now you
[10:46] (646.16s)
have a qualitative leap above to that
[10:48] (648.24s)
44% level. And I think it's interesting
[10:50] (650.88s)
because as kind of Salim was referring
[10:53] (653.36s)
to like what are these models for?
[10:55] (655.20s)
They're at this super genius level. It's
[10:57] (657.04s)
like having a mega liberal arts program.
[10:59] (659.76s)
And then the next step is going to be to
[11:01] (661.68s)
have really useful people in the
[11:03] (663.12s)
workforce on one stream and then the
[11:05] (665.84s)
other stream will be to take the
[11:07] (667.52s)
subcomponents of this and just push up
[11:10] (670.24s)
to superhuman reasoning discovering new
[11:13] (673.12s)
things at a level that we could never
[11:14] (674.80s)
have before. And I think this is one of
[11:16] (676.64s)
the indications of that cuz again I I
[11:19] (679.12s)
tried to read some of the questions I
[11:20] (680.40s)
didn't even understand the questions.
[11:23] (683.60s)
Examples? I literally just gave a
[11:25] (685.36s)
presentation on this yesterday so I have
[11:26] (686.80s)
it right in front of me. Tell me
[11:28] (688.72s)
Humanity's Last Exam: 2,700 questions.
[11:31] (691.44s)
When the slide says for reference humans
[11:33] (693.68s)
can score 5%. That means the very best
[11:37] (697.36s)
humans in any given domain can score 5%
[11:40] (700.48s)
within just the domain they understand.
[11:42] (702.96s)
And I'll tell you why. Like here's
[11:44] (704.24s)
here's an example question. Compute the
[11:46] (706.64s)
reduced 12-dimensional spin bordism of
[11:49] (709.28s)
the classifying space of the Lie group
[11:51] (711.52s)
G2. And then it goes on from there.
[11:56] (716.00s)
Most people can't even understand one
[11:57] (717.44s)
word of that.
[11:58] (718.40s)
Exactly. Here's another one. Take a
[12:00] (720.88s)
five-dimensional gravitational theory
[12:02] (722.96s)
compactified on a circle down to a
[12:05] (725.12s)
four-dimensional vacuum.
[12:08] (728.80s)
So, yeah, these are the hardest
[12:11] (731.36s)
questions, and that's why this exam
[12:13] (733.44s)
is supposed to last for a long time. A
[12:15] (735.12s)
44% score is just way outside the range
[12:18] (738.64s)
of human ability because nobody has that
[12:20] (740.96s)
broad knowledge that spans all this all
[12:22] (742.96s)
these topics.
[12:24] (744.00s)
So, how long before we hit
[12:26] (746.00s)
100% here too any bets?
[12:28] (748.80s)
Uh two years max I would say probably
[12:31] (751.84s)
next year.
[12:32] (752.88s)
So you know there was a conversation
[12:34] (754.64s)
years ago about AI getting to a point
[12:36] (756.88s)
where um you can't understand the
[12:40] (760.24s)
questions it's asking and answering. Uh,
[12:43] (763.84s)
and we're not far from that. So, I mean,
[12:47] (767.52s)
at some point
[12:49] (769.44s)
we're unable to measure how rapidly it's
[12:52] (772.56s)
advancing. That becomes a little bit
[12:54] (774.56s)
frightening.
[12:55] (775.76s)
It's got to be driving uh Google nuts
[12:58] (778.08s)
that Elon got this done in 28
[13:00] (780.40s)
months from a cold start.
[13:02] (782.64s)
Absolutely. Largely because, you know,
[13:04] (784.88s)
Elon is phenomenal at large-scale
[13:06] (786.96s)
manufacturing, large-scale
[13:08] (788.56s)
organizational management. And, you
[13:10] (790.40s)
know, people working till four or five a.m.,
[13:12] (792.72s)
sleeping in tents on the on the factory
[13:14] (794.88s)
floor. That's his wheelhouse. And
[13:16] (796.72s)
that's Tesla, that's SpaceX, and
[13:19] (799.92s)
because all the intellectual property
[13:22] (802.40s)
was more or less open sourced by the
[13:24] (804.80s)
research community at Google and Meta,
[13:27] (807.44s)
he was able to pick up all that
[13:29] (809.12s)
brilliant thinking and just plow it into
[13:31] (811.68s)
implementation. It's also small teams,
[13:33] (813.92s)
right? It's not large. I mean, Google's
[13:36] (816.08s)
a massive organization.
[13:38] (818.48s)
I think there's something else here,
[13:39] (819.76s)
though. Remember, we talked about this
[13:41] (821.68s)
last time when Grok 3 came out, right?
[13:43] (823.92s)
But when he said he was going to put
[13:45] (825.20s)
this huge cluster together, every AI
[13:47] (827.60s)
expert in the world said, "You cannot
[13:50] (830.48s)
get power laws and coherence at that scale.
[13:52] (832.48s)
You just can't do it." And he went right
[13:54] (834.72s)
back to first principles, created new
[13:56] (836.72s)
kind of connections between the chips
[13:58] (838.56s)
and whatever, and did it. And every AI
[14:01] (841.36s)
expert is like, "Oh, god dang, he did
[14:03] (843.60s)
it." And so this is the this is the
[14:06] (846.08s)
incredible ability he has to go into a
[14:09] (849.36s)
domain with a beginner's mind, go to
[14:11] (851.84s)
first principles, and just re-engineer
[14:13] (853.92s)
the heck out of it to achieve massive
[14:15] (855.84s)
performance. And I think um uh this is
[14:19] (859.20s)
an indication of that. My big question
[14:20] (860.96s)
is, as you mentioned earlier, Dave,
[14:23] (863.60s)
where do we go from here, right? like
[14:25] (865.28s)
what what what does it mean to have a
[14:27] (867.20s)
50% versus 44% on this test?
[14:31] (871.12s)
Yeah. I think if I can just give it a
[14:32] (872.80s)
little bit of context, in 2022,
[14:35] (875.52s)
Amazon built us the 10th fastest public
[14:38] (878.16s)
supercomputer in the world. 4,000
[14:40] (880.80s)
A100s, you know,
[14:43] (883.44s)
And that was 2022. That was 10th fastest
[14:45] (885.76s)
in the world.
[14:47] (887.28s)
Of any supercomputer that we were
[14:48] (888.96s)
training on. And there was an instance
[14:50] (890.88s)
where literally hundreds of the chips
[14:52] (892.64s)
melted because of the scaling.
[14:55] (895.92s)
Now they've managed to by turning this
[14:57] (897.68s)
into an engineering problem, scale the
[14:59] (899.28s)
hardware but also the inside of the
[15:01] (901.68s)
model which I think is this really
[15:02] (902.88s)
important thing. The reason it's above
[15:04] (904.80s)
PhD level in each of these areas is that
[15:07] (907.12s)
was a computation scale problem.
[15:10] (910.96s)
And so what happens is that if you could
[15:12] (912.88s)
scale a liberal arts person all the way
[15:14] (914.56s)
up to postgrad in everything you would.
[15:16] (916.80s)
And then you specialize down and then
[15:18] (918.56s)
you look at some of these things. And
[15:20] (920.24s)
Sem's question there,
[15:21] (921.84s)
you've got just for reference everybody,
[15:24] (924.32s)
it's, uh, the xAI cluster now has 340,000
[15:29] (929.52s)
uh GPUs. Just
[15:31] (931.44s)
about $30,000 or more each.
[15:35] (935.04s)
Yeah, do the math.
[15:36] (936.40s)
10 billion.
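As a quick sanity check on that back-of-envelope math, here is a minimal sketch using the rough figures quoted in the conversation (~340,000 GPUs at $30,000 or more each; these are the speakers' estimates, not official numbers):

```python
# Back-of-envelope cluster cost from the figures quoted above.
gpus = 340_000
price_per_gpu = 30_000  # dollars; the low end of the "$30,000 or more each" estimate
total_cost = gpus * price_per_gpu
print(f"~${total_cost / 1e9:.1f}B")  # ~$10.2B, consistent with the "10 billion" quoted
```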
[15:37] (937.12s)
A lot. I mean, this is why, you know,
[15:38] (938.56s)
this is why we're seeing a billion
[15:40] (940.08s)
dollars a day going into AI and why
[15:42] (942.56s)
Jensen said, you know, there'll be a
[15:44] (944.16s)
trillion dollars a year by 2030. Uh, and
[15:47] (947.36s)
it's not slowing down. So, here's
[15:49] (949.44s)
another image from the uh from the
[15:52] (952.88s)
little conversation Elon had yesterday.
[15:54] (954.80s)
These are the benchmarks his team put
[15:56] (956.56s)
up. Um, I don't know if you want to hit
[15:59] (959.76s)
on any of these, Emad or Dave or
[16:02] (962.00s)
Salim. Any favorites for you?
[16:04] (964.32s)
Well, my favorite one is the AIME25
[16:07] (967.04s)
100%. You're done. You know, GPQA. These
[16:10] (970.72s)
are all hard benchmarks.
[16:12] (972.40s)
I think Elon would
[16:14] (974.32s)
want to go to uh 110%. He likes 11 as
[16:20] (980.96s)
But the only one I don't recognize is on
[16:22] (982.48s)
the bottom right. Do you know what that
[16:23] (983.68s)
is? The USAMO 25.
[16:25] (985.76s)
I think it's the um USA Mathematical
[16:27] (987.84s)
Olympiad.
[16:28] (988.64s)
Oh, right. Of course.
[16:29] (989.60s)
So, it's about to happen. Um but but
[16:32] (992.48s)
again like these are novel hard
[16:35] (995.12s)
benchmarks effectively all of them and
[16:38] (998.16s)
they're being saturated because
[16:39] (999.92s)
ultimately the AI can reason about mathematics
[16:43] (1003.04s)
and science better than we can. Again it
[16:45] (1005.92s)
can't plan just yet. It doesn't have the
[16:48] (1008.16s)
same memory capacity and the building
[16:50] (1010.32s)
blocks haven't been put together. But
[16:52] (1012.96s)
it already has superhuman
[16:54] (1014.80s)
capability in many narrow areas. So it's
[16:57] (1017.04s)
inevitable I think what happens next.
[17:00] (1020.56s)
You know, we glossed over his quote
[17:02] (1022.08s)
there. Discover new physics. Uh,
[17:04] (1024.32s)
wouldn't surprise me if it's this year,
[17:06] (1026.56s)
certainly no later than the end of next
[17:08] (1028.64s)
year. Uh, Alex Wissner-Gross has been
[17:10] (1030.96s)
having a field day with that all day,
[17:13] (1033.04s)
I bet. First of all, what does it mean
[17:14] (1034.96s)
to discover new physics? That's that's
[17:16] (1036.64s)
pretty interesting by itself.
[17:19] (1039.20s)
Well, I mean, you know, Alex has
[17:21] (1041.44s)
been saying we're going to solve all of
[17:22] (1042.80s)
math and then physics comes next.
[17:25] (1045.04s)
Chemistry and biology follow quickly. I
[17:27] (1047.28s)
mean, this is the most exciting for me.
[17:29] (1049.60s)
This is the most exciting thing about these
[17:31] (1051.76s)
models:
[17:33] (1053.20s)
will they literally unwrap the present
[17:36] (1056.80s)
of the universe before us right here
[17:39] (1059.04s)
right now during our lives in the next 5
[17:40] (1060.72s)
or 10 years.
[17:41] (1061.60s)
Well, there's a couple of specific
[17:42] (1062.96s)
applications that I think uh I've been
[17:45] (1065.60s)
watching. I want to see
[17:47] (1067.92s)
an AI break and solve the quandary of the
[17:51] (1071.20s)
wave-particle duality of light. That
[17:52] (1072.96s)
would be interesting
[17:54] (1074.08s)
and seeing what exactly is going on in
[17:56] (1076.48s)
this. Uh the second one would be
[17:58] (1078.56s)
molecular manufacturing, and how do we develop
[18:00] (1080.56s)
new techniques for doing molecular assembly,
[18:02] (1082.32s)
because if you crack that, then you crack
[18:04] (1084.80s)
all assembly and manufacturing of all
[18:07] (1087.12s)
kinds, right? And everything, the cost of
[18:09] (1089.84s)
anything becomes about a dollar a pound
[18:12] (1092.24s)
per weight. A computer, a dollar a pound?
[18:14] (1094.56s)
yeah now you're in an amazing
[18:16] (1096.88s)
I mean listen again going back to Ray
[18:18] (1098.80s)
Kurzweil's predictions, right? Uh, how
[18:21] (1101.76s)
he does it I still you know you know
[18:24] (1104.16s)
he's mentored you, he's mentored me. But,
[18:26] (1106.96s)
you know, these predictions that we're
[18:28] (1108.40s)
going to have nanotech in the early
[18:29] (1109.92s)
2030s,
[18:31] (1111.52s)
uh, where is it? Where is it? Well, this
[18:34] (1114.56s)
is probably its parents.
[18:37] (1117.20s)
Well, the one the one that's really fun
[18:38] (1118.88s)
to think about, you know, the quantum
[18:40] (1120.56s)
teleportation, Peter, that you brought
[18:42] (1122.16s)
up at one of our enterprise meetings.
[18:44] (1124.08s)
So, how do you reconcile the fact that
[18:46] (1126.40s)
two entangled particles can be
[18:48] (1128.48s)
infinitely far apart
[18:50] (1130.16s)
yet still communicating in real time
[18:52] (1132.80s)
with the fact that the speed of light
[18:54] (1134.32s)
can't be transcended? So, so Alex's
[18:57] (1137.20s)
speculation is if we can solve physics
[18:59] (1139.92s)
in the next year or two or three and it
[19:02] (1142.80s)
turns out that you can communicate using
[19:05] (1145.76s)
quantum teleportation that we instantly
[19:08] (1148.32s)
discover all these other intelligences
[19:10] (1150.16s)
around the universe.
[19:11] (1151.84s)
Yeah, we've just been listening at the
[19:13] (1153.36s)
wrong frequency with the with the wrong
[19:15] (1155.60s)
codecs.
[19:16] (1156.80s)
Uh these are the key takeaways. I'm
[19:18] (1158.40s)
going to just read these out loud and we
[19:20] (1160.40s)
can talk about them.
[19:22] (1162.32s)
They spent just as much on fine-tuning
[19:25] (1165.52s)
training the AI after the initial phase as
[19:28] (1168.00s)
they did on pre-training. So that's a
[19:31] (1171.20s)
big change. Emad, you want to dissect
[19:33] (1173.28s)
that for us?
[19:34] (1174.56s)
Yeah. So it used to be that everything
[19:36] (1176.64s)
was basically you take a snapshot of the
[19:38] (1178.32s)
internet and then you put it into this
[19:39] (1179.92s)
giant supercomputer mixer and it figures
[19:41] (1181.68s)
out all the connections, the latent
[19:43] (1183.52s)
spaces to guess the next word. Then you
[19:46] (1186.32s)
had this very weird AI that came out
[19:49] (1189.12s)
that was a little bit crazy. It's like a
[19:51] (1191.12s)
disheveled graduate student without his
[19:53] (1193.28s)
coffee and then you had to tidy him up
[19:55] (1195.20s)
with the reinforcement learning. That
[19:56] (1196.80s)
was the post training and that was 1% of
[19:59] (1199.04s)
the compute. Then with DeepSeek it was
[20:00] (1200.48s)
10% of the compute and now it's moved to
[20:02] (1202.96s)
equal because they figured out how to
[20:05] (1205.20s)
chain reasoning steps. And in fact I
[20:08] (1208.24s)
think part of what they did we've seen
[20:10] (1210.00s)
this with other labs is they used their
[20:11] (1211.92s)
frontier model to make data for the next
[20:14] (1214.48s)
frontier model. So having large amounts
[20:17] (1217.44s)
of compute to create your own training data
[20:20] (1220.96s)
in a structured manner allows you to
[20:23] (1223.84s)
take that latent space, the landscape, and
[20:28] (1228.32s)
make it smarter and smarter and smarter
[20:30] (1230.56s)
just like your brain adapts as you learn
[20:32] (1232.72s)
more and more reasoning as you see more
[20:34] (1234.24s)
and more things. And so rather than
[20:36] (1236.72s)
having to have these massive scrapes of
[20:38] (1238.24s)
the internet or whatever, it's more and
[20:40] (1240.24s)
more structured data making up these
[20:41] (1241.68s)
models which are making them smarter
[20:43] (1243.04s)
reasoners. So the 50% additional
[20:46] (1246.88s)
compute dedicated to, uh, the
[20:50] (1250.32s)
fine-tuning, does that mean we have a more
[20:52] (1252.72s)
sane version of Grok?
[20:58] (1258.00s)
Fingers crossed. Um, it doesn't
[21:00] (1260.00s)
necessarily mean that because you can
[21:01] (1261.44s)
still get all sorts of mode collapse
[21:03] (1263.12s)
within it in terms of if the latent
[21:05] (1265.12s)
space goes, but probably
[21:07] (1267.68s)
um because again you're training it just
[21:11] (1271.04s)
on a certain field of things as opposed
[21:13] (1273.44s)
to Reddit and other things. In terms of
[21:16] (1276.16s)
order, I'd say this is probably like a
[21:17] (1277.52s)
hundred million dollars each. So it probably adds
[21:19] (1279.84s)
up to one Meta AI researcher. You know,
[21:23] (1283.76s)
a new unit of measure in the
[21:26] (1286.64s)
AI world. That's funny. So, let's
[21:30] (1290.00s)
comment on the cost here: $3 per million input
[21:32] (1292.56s)
tokens. Um, $15 per million output
[21:36] (1296.48s)
tokens and can handle long context
[21:39] (1299.28s)
windows of 256,000 tokens. How does that
[21:42] (1302.64s)
measure up, Dave, in your mind?
[21:44] (1304.88s)
Uh, well, it's pretty normal these days.
[21:46] (1306.56s)
It's it's a longer context. You know, a
[21:48] (1308.88s)
lot of the claimed context windows
[21:50] (1310.72s)
aren't real.
[21:52] (1312.00s)
Under the covers, the dimension of the
[21:53] (1313.60s)
neural net is much smaller than the
[21:55] (1315.12s)
claimed context window. Um, so I
[21:57] (1317.84s)
suspect, you know, at this scale that
[22:00] (1320.00s)
this is the true dimension of the
[22:01] (1321.84s)
network, but I don't really know. We'll
[22:03] (1323.12s)
have to dig in over the next couple of
[22:04] (1324.32s)
days and and find out. But, you know,
[22:07] (1327.12s)
what it means is, you know, you can feed
[22:08] (1328.64s)
in 100 books' worth of information
[22:11] (1331.04s)
concurrently. It instantly digests all
[22:13] (1333.36s)
that knowledge and then gives you an
[22:14] (1334.64s)
intelligent answer based on all of that
[22:16] (1336.88s)
information in one pass.
[22:19] (1339.36s)
So, it's just it's just, you know, the
[22:20] (1340.96s)
next step in what's been going up
[22:22] (1342.56s)
sequentially from model to model to
[22:24] (1344.24s)
model. Iman, do you expect we're going
[22:26] (1346.00s)
to be constantly reducing the, uh, the
[22:29] (1349.12s)
price per token? Is this a is this a
[22:32] (1352.56s)
demonetizing curve for a while to come?
[22:36] (1356.72s)
100%. I mean, so the cost of this is
[22:39] (1359.20s)
about the same as the cost of Claude 4
[22:41] (1361.04s)
Sonnet, which is the second model of
[22:45] (1365.36s)
Anthropic, or o3's cost, but it's better
[22:47] (1367.52s)
than both. Uh it's about 0.7 words per
[22:50] (1370.32s)
token to give you an idea. And so the
[22:53] (1373.12s)
cost of a million very good words that
[22:57] (1377.12s)
are smart is $20.
[23:00] (1380.88s)
But next year with Vera Rubin, the next
[23:03] (1383.44s)
generation chip they're going to whack
[23:04] (1384.64s)
in there. Just by the hardware, it'll be
[23:07] (1387.12s)
three times to four times cheaper and
[23:08] (1388.88s)
they'll probably figure out some more
[23:10] (1390.16s)
stuff around that. So for equivalent intelligence,
[23:13] (1393.92s)
the cost probably drops by around five
[23:15] (1395.68s)
to 10 times a year. So it'll be a buck
[23:18] (1398.64s)
for a million amazing words. It's hard
[23:21] (1401.28s)
to believe the most powerful technology
[23:23] (1403.36s)
in the world is de minimis in cost.
[23:28] (1408.32s)
It's crazy.
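Emad's back-of-the-envelope numbers above can be sketched as follows. The 0.7 words-per-token figure and the 5-10x annual cost decline are his quoted estimates; the $15 per million output tokens price is an assumption chosen only to be consistent with his "a million very good words ... is $20" remark, not an official price sheet.

```python
# Sketch of the cost arithmetic from the conversation above.
# Assumed inputs: $15 per million output tokens (illustrative),
# 0.7 words per token, 5-10x annual equi-intelligence cost decline.

def cost_per_million_words(price_per_m_tokens: float,
                           words_per_token: float = 0.7) -> float:
    """Dollars to generate one million words of output."""
    tokens_needed = 1_000_000 / words_per_token
    return price_per_m_tokens * tokens_needed / 1_000_000

def projected_cost(cost_now: float, years: float,
                   annual_decline: float = 7.0) -> float:
    """Cost after `years`, assuming equi-intelligent output gets
    `annual_decline` times cheaper each year (5-10x is claimed)."""
    return cost_now / annual_decline ** years

today = cost_per_million_words(15.0)   # ~$21.4, i.e. "about $20"
next_year = projected_cost(today, 1)   # ~$3 at a 7x annual decline
```

At that rate, two to three years of decline lands near Emad's "a buck for a million amazing words."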
[23:30] (1410.00s)
I want to I want to I want to put a a
[23:32] (1412.00s)
comparator though here. Um you know we
[23:34] (1414.56s)
we this is amazing. like we could put
[23:37] (1417.68s)
hundreds of our books into the thing and
[23:39] (1419.68s)
it would hold all of that in real time
[23:41] (1421.12s)
as as Dave said, but let's note that a
[23:44] (1424.00s)
single human cell has several billion
[23:46] (1426.64s)
operations going on in it at at any time
[23:49] (1429.28s)
point in time, right? So, we're kind of
[23:52] (1432.48s)
several orders, multiple orders of
[23:54] (1434.64s)
magnitude from modeling one cell. Uh,
[23:57] (1437.68s)
and so we're we've got a long way to go
[23:59] (1439.28s)
to try and model life or get to really
[24:01] (1441.36s)
big big big things. There's a coming
[24:03] (1443.68s)
wave of technological convergence as AI,
[24:06] (1446.88s)
robots, and other exponential tech
[24:08] (1448.64s)
transform every company and industry.
[24:11] (1451.36s)
And in its wake, no job or career will
[24:13] (1453.84s)
be left untouched. The people who are going
[24:15] (1455.44s)
to win in the coming era won't be the
[24:17] (1457.60s)
strongest. It won't even be the
[24:19] (1459.04s)
smartest. It'll be the people who are
[24:20] (1460.80s)
fastest to spot trends and to adapt. A
[24:23] (1463.36s)
few weeks ago, I took everything I teach
[24:25] (1465.52s)
to executive teams about navigating
[24:27] (1467.36s)
disruption, spotting exponential trends
[24:30] (1470.00s)
a decade out, and put them into a course
[24:32] (1472.56s)
designed for one purpose, to futureproof
[24:34] (1474.88s)
your life, your career, and your company
[24:37] (1477.60s)
against this coming surge of AI,
[24:39] (1479.76s)
humanoids, and exponential tech. I'm
[24:41] (1481.84s)
giving the first lesson out for free.
[24:44] (1484.08s)
You can access this first lesson and
[24:45] (1485.84s)
more at diamandis.com/futureproof.
[24:49] (1489.60s)
That's diamandis.com/futureproof.
[24:52] (1492.56s)
The link is below.
[24:54] (1494.24s)
Let's talk about SuperGrok Heavy. You
[24:56] (1496.40s)
know, I gotta love Elon's terminology,
[24:58] (1498.24s)
right? It's we we've got Falcon Heavy,
[25:01] (1501.36s)
now we've got SuperGrok Heavy. Um, he
[25:04] (1504.64s)
loves his terms and I love them, too,
[25:06] (1506.32s)
actually. It makes me I smiled when I
[25:08] (1508.48s)
saw that.
[25:08] (1508.96s)
Why heavy, by the way? Is there a name
[25:10] (1510.72s)
reason for that?
[25:12] (1512.56s)
Falcons, the Elonverse.
[25:14] (1514.64s)
Yeah. No, I mean, like, you know, Falcon
[25:16] (1516.32s)
Heavy was able to have, you know, three
[25:18] (1518.16s)
boosters to launch a heavier payload to
[25:20] (1520.32s)
orbit. So why not why not uh talk about
[25:23] (1523.28s)
heavier capacity? So I mean uh in in
[25:25] (1525.68s)
reality right Falcon Heavy had multiple
[25:28] (1528.32s)
boosters and this has multiple agents.
[25:32] (1532.72s)
super next one will be heavier and the
[25:34] (1534.80s)
one that will have to
[25:36] (1536.08s)
next next one will be Grok Starship.
[25:40] (1540.16s)
It'll be it'll be it'll be BFG BFG. Yes.
[25:46] (1546.08s)
So the price point here sets a new high
[25:47] (1547.92s)
bar. Uh that's going to scare a lot of
[25:49] (1549.84s)
people. Um, I I say the same thing I
[25:53] (1553.12s)
said last time. You know, try it. Burn
[25:56] (1556.16s)
the 300 bucks for one month. You can
[25:57] (1557.68s)
turn off the subscription, but you got
[25:58] (1558.88s)
to try it to know what you're what
[26:00] (1560.40s)
you're missing or not missing. A lot of
[26:02] (1562.80s)
the use cases, you know, the day-to-day
[26:05] (1565.12s)
use cases, it won't matter much. But if
[26:07] (1567.60s)
you're building something complicated,
[26:09] (1569.20s)
writing code, uh, or or designing
[26:12] (1572.32s)
mechanical parts or whatever, you're
[26:14] (1574.72s)
going to get addicted to it. What I'm
[26:16] (1576.64s)
really curious about is the margin at
[26:18] (1578.08s)
300 bucks a month. Are they actually
[26:19] (1579.44s)
chewing up all that money on compute for
[26:21] (1581.60s)
you or do they have
[26:23] (1583.28s)
significant margin at that price point?
[26:25] (1585.12s)
Because one thing I've been predicting
[26:26] (1586.24s)
for a long time, it's inevitably going
[26:27] (1587.84s)
to happen soon is
[26:29] (1589.28s)
the use cases where you need that extra
[26:31] (1591.68s)
intelligence. Like when you're when
[26:32] (1592.96s)
you're building a software product and
[26:34] (1594.64s)
you're prompting it, you absolutely need
[26:37] (1597.12s)
that extra level of intelligence. It
[26:39] (1599.12s)
makes you dramatically more efficient in
[26:41] (1601.44s)
moving forward. And if you look at the
[26:43] (1603.92s)
cost of a software engineer's
[26:45] (1605.84s)
time, you can afford to go up another
[26:47] (1607.36s)
factor of 10 or or even more in price
[26:50] (1610.40s)
point for this and still be glad that
[26:52] (1612.56s)
you paid it. And so I think the
[26:54] (1614.48s)
escalation of pricing is is going to
[26:56] (1616.40s)
come soon. The counterargument is that
[26:58] (1618.32s)
the competing models will then
[26:59] (1619.60s)
commoditize it. But I think people will
[27:01] (1621.68s)
pay a lot for marginally better
[27:04] (1624.16s)
improvement because the the effective
[27:07] (1627.20s)
product you get out the other side. It
[27:09] (1629.44s)
it really accelerates your time to
[27:11] (1631.44s)
development or the quality of the design
[27:13] (1633.04s)
or whatever, so the solution to the math
[27:15] (1635.12s)
problem is right rather than wrong.
[27:17] (1637.20s)
Makes a big difference.
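Dave's affordability argument above can be made concrete with a tiny breakeven calculation. Only the $300/month tier comes from the episode; the $100/hour fully-loaded engineer cost is an illustrative assumption.

```python
# Breakeven sketch for Dave's point: how much engineer time a
# coding subscription must save to pay for itself.
# $300/month is the tier discussed; $100/hour is assumed.

def breakeven_hours_saved(subscription_per_month: float,
                          engineer_hourly_cost: float) -> float:
    """Hours of engineer time the tool must save per month
    to pay for itself."""
    return subscription_per_month / engineer_hourly_cost

# $300/month breaks even at 3 saved hours a month; even a 10x
# price hike ($3,000/month) needs only 30 saved hours, well under
# 20% of a typical work month.
base = breakeven_hours_saved(300.0, 100.0)
ten_x = breakeven_hours_saved(3000.0, 100.0)
```

This is why he expects pricing to escalate: the margin between tool cost and labor cost is enormous.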
[27:18] (1638.40s)
My guess is they're losing money.
[27:20] (1640.64s)
You think so?
[27:21] (1641.84s)
That's what that's what OpenAI said for
[27:23] (1643.60s)
their pro level whereas the level below
[27:27] (1647.20s)
they make money. So I think the way that
[27:28] (1648.96s)
I view this is a loss leader because if
[27:31] (1651.12s)
someone's paying 300 bucks, you
[27:32] (1652.56s)
enterprise sell them up.
[27:34] (1654.80s)
And then you do team things to get
[27:36] (1656.32s)
everyone doing it because basically
[27:38] (1658.08s)
right now what we have is a UI problem.
[27:40] (1660.88s)
The reasoner is there. The way to hook
[27:43] (1663.60s)
it up and make it usable for as many
[27:45] (1665.68s)
people on your team isn't there. You
[27:48] (1668.32s)
know, this is what Andrej Karpathy calls
[27:49] (1669.92s)
context engineering. You know, like what
[27:52] (1672.16s)
are the new UIs that will enable us to
[27:54] (1674.00s)
use this most efficiently and get our
[27:55] (1675.36s)
data in there? If you can crack that,
[27:57] (1677.36s)
then 300 bucks a month for a high level
[27:59] (1679.92s)
knowledge worker is nothing.
[28:01] (1681.44s)
Yeah. You know,
[28:02] (1682.00s)
zero, right?
[28:03] (1683.12s)
Just like we used to pay 1,000, 2,000
[28:05] (1685.36s)
bucks a month for Bloomberg when I was a
[28:07] (1687.60s)
hedge fund manager mostly for instant
[28:09] (1689.44s)
messaging. But, you know, like again,
[28:12] (1692.40s)
it's just not quite there, but it's
[28:13] (1693.92s)
about to flip there.
[28:15] (1695.20s)
Yeah. Well, like a lawyer will cost you
[28:17] (1697.44s)
that much per hour or or three, five
[28:20] (1700.40s)
times that per hour. Will this do the
[28:23] (1703.76s)
job of your legal document better? I I
[28:26] (1706.16s)
can't wait. That's the one profession I
[28:28] (1708.00s)
would love to replace is lawyers. All
[28:31] (1711.36s)
right. Uh you you mentioned enterprise
[28:33] (1713.52s)
level uh Emad let's go there right now.
[28:35] (1715.92s)
What else can Grok do? So we're actually
[28:37] (1717.36s)
releasing this Grok 4, if you want to try uh
[28:39] (1719.60s)
right now to evaluate, run the same
[28:41] (1721.28s)
benchmark as us. Uh it's on API um has
[28:47] (1727.12s)
context length. So we already actually
[28:49] (1729.28s)
see some of the early early adopters
[28:51] (1731.76s)
try the Grok 4 API. So uh our Palo Alto
[28:54] (1734.96s)
neighbor ARC Institute which is a
[28:56] (1736.72s)
leading uh biomedical research uh center
[28:59] (1739.68s)
is already seeing like how can
[29:01] (1741.76s)
they automate their research flows with
[29:04] (1744.08s)
Grok. Uh it turned out it is
[29:06] (1746.88s)
able to help the scientists to sift
[29:08] (1748.72s)
through you know millions of experiment
[29:10] (1750.72s)
logs and then you know just like pick
[29:13] (1753.28s)
the best hypothesis within a split
[29:15] (1755.36s)
second. Uh we see this is being used
[29:17] (1757.68s)
for their like the CRISPR uh research
[29:20] (1760.16s)
and also uh you know Grok 4,
[29:22] (1762.24s)
independently evaluated, scores as the
[29:24] (1764.48s)
best model to examine the chest X-ray uh
[29:27] (1767.28s)
who would know um and uh uh in the
[29:31] (1771.20s)
financial sector we also see you know
[29:32] (1772.96s)
Grok 4, with access to tools and realtime
[29:35] (1775.28s)
information, is actually one of the most
[29:37] (1777.44s)
popular AIs out there so uh you know our
[29:40] (1780.56s)
Grok is also going to be available on
[29:42] (1782.16s)
the hyperscalers so the xAI enterprise
[29:45] (1785.28s)
sector
[29:46] (1786.16s)
is only, you know, started two months
[29:48] (1788.48s)
ago and we're open for business.
[29:50] (1790.40s)
Open for business. So, Emad, you've been
[29:52] (1792.72s)
working on medical related um AI. Uh
[29:56] (1796.80s)
it's, you know, the block here isn't the
[29:59] (1799.92s)
tech. It's going to be the regulations.
[30:02] (1802.24s)
It's going to be when will an AI be able
[30:05] (1805.20s)
to fully replace a radiologist or fully
[30:07] (1807.76s)
replace a um you know, any profession of
[30:11] (1811.36s)
in the medical world. How you think
[30:12] (1812.88s)
about that? Well, I think it's the
[30:15] (1815.36s)
augmentation first. Reduce errors,
[30:17] (1817.60s)
increase outcomes, and then eventually
[30:19] (1819.68s)
it's replacement because Google had
[30:22] (1822.00s)
their AI medical expert study which
[30:24] (1824.08s)
showed that it was doctor, doctor plus
[30:26] (1826.16s)
Google search, doctor plus AI, and then AI
[30:28] (1828.56s)
by itself.
[30:30] (1830.24s)
But just like self-driving cars just
[30:32] (1832.96s)
I want I just want to touch on that
[30:34] (1834.64s)
because it was a really important
[30:36] (1836.48s)
article that came out. uh if you again
[30:39] (1839.52s)
the physician by themselves was getting
[30:42] (1842.40s)
something like 80% of the cases correct
[30:45] (1845.36s)
the centaur the physician plus the AI
[30:48] (1848.80s)
was getting like 87% the numbers are
[30:51] (1851.60s)
approximate and then the AI without the
[30:53] (1853.76s)
human bias without the human biasing the
[30:56] (1856.88s)
output the AI by itself was outdoing all
[30:59] (1859.52s)
of them at like the low 90%. Uh
[31:02] (1862.48s)
extraordinary
[31:03] (1863.60s)
well again it's what you said it's
[31:04] (1864.88s)
better than any postgrad at the moment
[31:07] (1867.04s)
but right now I think it's about the
[31:08] (1868.96s)
empowering and the acceleration in terms
[31:10] (1870.80s)
of the integration and you're way off
[31:13] (1873.12s)
the liability profile of replacement I
[31:15] (1875.44s)
don't think you need replacement right
[31:16] (1876.80s)
now what we need is less errors in
[31:19] (1879.68s)
something like medicine right I I think
[31:22] (1882.48s)
the doctor number by itself Peter was
[31:24] (1884.80s)
70% because I remember Daniel Craft
[31:27] (1887.36s)
saying when you go to the doctor you get
[31:28] (1888.64s)
the wrong diagnosis about 30% of the
[31:31] (1891.04s)
time, right? That's a staggering number
[31:33] (1893.60s)
of errors, by the way. That means out of
[31:36] (1896.40s)
four of us, one and a half got the wrong
[31:39] (1899.04s)
diagnosis the last time we went to the
[31:40] (1900.64s)
doctor. I mean, we need to figure out
[31:42] (1902.00s)
who that was. Uh, that's really
[31:43] (1903.76s)
ridiculous. And so, you need an AI to
[31:46] (1906.00s)
take over that whole field.
[31:47] (1907.52s)
Well, human bias and getting human bias
[31:50] (1910.40s)
out of that is also even more important
[31:51] (1911.84s)
where we can.
[31:52] (1912.80s)
The number of types of scans and sensors
[31:54] (1914.48s)
you can do is way way outstripping any
[31:56] (1916.64s)
human ability to look at all the data
[31:58] (1918.40s)
that comes out of it. So, a lot of a lot
[32:00] (1920.72s)
of it isn't trying to beat a doctor.
[32:02] (1922.16s)
It's trying to assimilate data that
[32:03] (1923.68s)
never could have gotten into the
[32:05] (1925.44s)
diagnosis before.
[32:06] (1926.48s)
That's a great point. That's a great
[32:07] (1927.92s)
point.
[32:08] (1928.48s)
Yeah. Just
[32:10] (1930.00s)
All right. Let's go on to Let's go on to
[32:11] (1931.60s)
our next uh
[32:13] (1933.04s)
next one. Uh so, available uh for an
[32:16] (1936.32s)
API. All right. Uh we've covered these
[32:19] (1939.12s)
areas already. Let's move on. A quick
[32:22] (1942.24s)
aside, you've probably heard me speaking
[32:23] (1943.84s)
about fountain life before and you're
[32:25] (1945.92s)
probably wishing, "Peter, would you
[32:27] (1947.44s)
please stop talking about fountain
[32:28] (1948.88s)
life?" And the answer is no, I won't
[32:30] (1950.80s)
because genuinely we're living through a
[32:33] (1953.12s)
healthcare crisis. You may not know
[32:34] (1954.80s)
this, but 70% of heart attacks have no
[32:36] (1956.96s)
precedent, no pain, no shortness of
[32:38] (1958.72s)
breath. And half of those people with a
[32:40] (1960.72s)
heart attack never wake up. You don't
[32:42] (1962.64s)
feel cancer until stage three or stage
[32:44] (1964.96s)
4, until it's too late. But we have all
[32:47] (1967.60s)
the technology required to detect and
[32:49] (1969.68s)
prevent these diseases early at scale.
[32:52] (1972.32s)
That's why a group of us including Tony
[32:54] (1974.08s)
Robbins, Bill Kapp, and Bob Hariri
[32:56] (1976.32s)
founded Fountain Life, a one-stop center
[32:58] (1978.56s)
to help people understand what's going
[33:00] (1980.32s)
on inside their bodies before it's too
[33:02] (1982.80s)
late and to gain access to the
[33:04] (1984.40s)
therapeutics to give them decades of
[33:06] (1986.24s)
extra health span. Learn more about
[33:08] (1988.00s)
what's going on inside your body from
[33:09] (1989.60s)
Fountain Life. Go to
[33:10] (1990.56s)
fountainlife.com/peter
[33:12] (1992.88s)
and tell them Peter sent you. Okay, back
[33:15] (1995.36s)
to the episode.
[33:16] (1996.96s)
All right. Uh, I love this. You know,
[33:19] (1999.52s)
Elon is a gamer and so it's not
[33:22] (2002.56s)
unreasonable for him to be talking about
[33:24] (2004.72s)
using Grok to make games. Take a
[33:26] (2006.96s)
listen.
[33:27] (2007.68s)
Yeah. So, uh, the other thing, uh, we
[33:29] (2009.76s)
talked a lot about, you know, having
[33:31] (2011.20s)
Grok to make games, uh, video games.
[33:33] (2013.20s)
Uh, so Denny is actually a, uh, video
[33:35] (2015.84s)
game designer on X. So uh you know we
[33:38] (2018.96s)
mentioned hey who want to try out some
[33:40] (2020.80s)
uh uh Grok 4 uh preview APIs uh to make
[33:44] (2024.00s)
games and then he answered the call. Uh
[33:46] (2026.40s)
so this was actually a first-
[33:48] (2028.32s)
person shooter game just made in a span of four
[33:50] (2030.48s)
hours. Uh so uh some of the actually the
[33:54] (2034.16s)
unappreciated hardest problem of making
[33:56] (2036.40s)
video games is not necessarily encoding
[33:58] (2038.48s)
the core logic of the game but actually
[34:01] (2041.36s)
go out and source all the assets, all the
[34:03] (2043.44s)
texture files and uh you know to
[34:06] (2046.56s)
create a visually appealing game.
[34:08] (2048.64s)
I think one of the challenges is what we
[34:11] (2051.76s)
do with all of our time in the future
[34:14] (2054.08s)
and we may be playing a lot of video
[34:16] (2056.00s)
games.
[34:18] (2058.96s)
You know, this could actually light up
[34:20] (2060.56s)
the entire metaverse world because
[34:23] (2063.84s)
building the metaverse world and
[34:25] (2065.76s)
building those environments was the big
[34:27] (2067.36s)
limiting factor and now you can do it at
[34:29] (2069.36s)
a very rich level. This could be really
[34:31] (2071.60s)
interesting to see what comes from this.
[34:34] (2074.16s)
When did you guys first hear that Grok 4
[34:36] (2076.48s)
was going to come out last night?
[34:40] (2080.32s)
Well, he said a few days ago, didn't he?
[34:42] (2082.24s)
I mean, a week ago. I mean, he was
[34:45] (2085.68s)
saying it was going to be this weekend
[34:47] (2087.44s)
and then it got pushed to to yesterday.
[34:50] (2090.64s)
Yeah. Because I feel like we had about
[34:52] (2092.40s)
48 hour notice plus or minus a day or
[34:55] (2095.92s)
But it was amazing the if you look at
[34:57] (2097.60s)
the presentation, the raw presentation
[34:59] (2099.20s)
from last night and compare it to Google
[35:01] (2101.84s)
Google IO was was scripted and staged
[35:04] (2104.40s)
with multiple presenters and, you know,
[35:06] (2106.80s)
clearly planned way in advance.
[35:09] (2109.04s)
Uh this last night was like, is it done
[35:11] (2111.36s)
yet, guys? Is it done? Does it work?
[35:13] (2113.36s)
Okay. If it works, we're launching
[35:15] (2115.04s)
tonight. Let's go. Get on stage. Let's
[35:17] (2117.12s)
go. And and I think that's the way it's
[35:18] (2118.88s)
going to be in the future because uh you
[35:20] (2120.96s)
know, it seems like getting to market
[35:22] (2122.96s)
one day, two days sooner actually
[35:24] (2124.80s)
matters a lot in this horse race. So
[35:27] (2127.04s)
this is kind of the dynamic we should
[35:28] (2128.72s)
expect going forward. But
[35:30] (2130.00s)
by the way, that narrator, that's the AI
[35:33] (2133.20s)
voice of a geek who is living and
[35:35] (2135.60s)
breathing it. And that's what you want
[35:36] (2136.88s)
in there.
[35:37] (2137.52s)
That's what you want. All right, let's
[35:38] (2138.56s)
let's take a listen uh on Elon on video
[35:41] (2141.92s)
games and and movie production
[35:44] (2144.96s)
for example for for video games you'd
[35:47] (2147.20s)
want to use, you know, Unreal Engine or
[35:49] (2149.36s)
Unity or one of the one of the the main
[35:52] (2152.00s)
graphics engines um and then gen
[35:55] (2155.28s)
generate the generate the art uh apply
[35:58] (2158.72s)
it to a 3D model uh and then create an
[36:01] (2161.20s)
executable that someone can run on a PC
[36:03] (2163.68s)
or or a console or or a phone. um like
[36:08] (2168.16s)
we we expect that to happen probably
[36:10] (2170.96s)
this year. Um and if not this year,
[36:14] (2174.00s)
certainly next year. U so that's uh it's
[36:19] (2179.20s)
going to be wild. I would expect the
[36:20] (2180.56s)
first really good AI video game to be
[36:24] (2184.80s)
next year.
[36:28] (2188.32s)
and probably the first uh
[36:31] (2191.44s)
half hour of watchable
[36:33] (2193.92s)
TV this year and probably the first
[36:38] (2198.16s)
watchable AI movie next year.
[36:40] (2200.72s)
Yeah, it's amazing with the the
[36:42] (2202.16s)
fragmentation of those industries is
[36:44] (2204.00s)
going to be incredible because, you
[36:45] (2205.44s)
know, normally we think of a video game
[36:47] (2207.04s)
coming out in a release, all of your
[36:48] (2208.64s)
friends get the exact same release. It's
[36:50] (2210.48s)
a release that's maybe good for a year
[36:52] (2212.16s)
or more and you're all on like FIFA 23
[36:54] (2214.88s)
now or whatever 25. Um, but here because
[36:58] (2218.64s)
it's only four hours to create the next
[37:00] (2220.08s)
iteration, then you can say, well, no, I
[37:01] (2221.76s)
want a customized version or I want
[37:03] (2223.52s)
there's going to be all this
[37:04] (2224.56s)
fragmentation and the version of the
[37:06] (2226.16s)
movie that I saw isn't the same ending
[37:07] (2227.76s)
that the one that Sem saw. So now we're
[37:09] (2229.68s)
debating on how it we're not even on the
[37:11] (2231.52s)
same page and how the movie ends because
[37:12] (2232.96s)
we saw a different a different AI
[37:14] (2234.48s)
generated version and it's going to be
[37:16] (2236.24s)
great. It's going to be it's going to be
[37:17] (2237.68s)
really really cool because everything's
[37:19] (2239.52s)
we're gonna have a lot to do with our
[37:20] (2240.88s)
time. I mean I listen you spent so much
[37:23] (2243.12s)
time uh as CEO of Stability in this
[37:26] (2246.64s)
market arena of entertainment and video
[37:29] (2249.12s)
production and such. Uh when I asked you
[37:31] (2251.52s)
earlier whether Hollywood is you know
[37:34] (2254.64s)
going to be disrupted you said no. Um
[37:38] (2258.00s)
can you can you explain that please?
[37:40] (2260.64s)
So I think the thing that won't grow is
[37:44] (2264.32s)
people's attention. So if you look at
[37:46] (2266.08s)
Netflix, their biggest competitor is
[37:48] (2268.00s)
video games, which is why they're going
[37:49] (2269.28s)
into video games. You only have so many
[37:51] (2271.04s)
hours in a day and you're a consumer.
[37:52] (2272.96s)
Video game sector right now, I think, is
[37:54] (2274.64s)
$450 billion. The movie sector is 70
[37:57] (2277.76s)
billion. That's how fast it's grown.
[37:59] (2279.52s)
Like education around the world is like
[38:01] (2281.76s)
10 times larger. So it's 10% of
[38:03] (2283.76s)
education in terms of size. So if you
[38:07] (2287.12s)
think about that, then for Hollywood
[38:09] (2289.44s)
Studios, this is great because the costs
[38:10] (2290.96s)
have come down and it's been a
[38:12] (2292.64s)
dramatic shift. To give you an idea, the
[38:14] (2294.40s)
first video models, stable video I think
[38:16] (2296.80s)
was pretty much the first. We released
[38:18] (2298.88s)
that in 2023.
[38:21] (2301.28s)
And now with Veo 3 from Google and others,
[38:23] (2303.76s)
you're pretty much a Hollywood level,
[38:25] (2305.92s)
close to it, but you need one more
[38:27] (2307.76s)
generation to get there. And the average
[38:30] (2310.08s)
Hollywood clip length is 2.5 seconds.
[38:32] (2312.80s)
It used to be 12 seconds. Now it's 2.5.
[38:35] (2315.36s)
And we can generate eight. And soon
[38:36] (2316.88s)
we'll be able to generate more.
[38:39] (2319.04s)
So you're getting to this point where
[38:40] (2320.24s)
you can make that. But again, people
[38:42] (2322.00s)
like having common stories to talk about
[38:43] (2323.92s)
Barbie, Oppenheimer and things like that.
[38:46] (2326.00s)
So these marquee things,
[38:48] (2328.64s)
they can get the license of Cary Grant
[38:51] (2331.04s)
from back in the day and make him a star
[38:52] (2332.80s)
again.
[38:54] (2334.48s)
you know,
[38:56] (2336.80s)
don't you think don't you think that uh
[38:59] (2339.84s)
there's going to be so much supply and
[39:02] (2342.64s)
if I have a chance to watch, you know, a
[39:04] (2344.80s)
new episode of classic Star Trek, but
[39:08] (2348.24s)
you know, I'm the character playing
[39:09] (2349.92s)
Captain Kirk and uh and you know, you're
[39:13] (2353.52s)
playing Spock and my friends are taking
[39:15] (2355.76s)
the roles. I I mean it I don't know why
[39:19] (2359.60s)
I would not be buying that entertainment
[39:23] (2363.20s)
uh from a source other than you know
[39:24] (2364.80s)
outside of Hollywood.
[39:26] (2366.24s)
Well, you'll buy that too, but I think
[39:27] (2367.92s)
one of the things we've seen in the AI
[39:29] (2369.20s)
world, what's it about? Distribution,
[39:30] (2370.96s)
distribution, distribution. So, you'll
[39:32] (2372.96s)
buy your interactive games and put
[39:35] (2375.04s)
yourself in the game, but you'll still
[39:36] (2376.40s)
have your marquee things and the cost of
[39:37] (2377.84s)
that will reduce dramatically and the
[39:39] (2379.84s)
distribution cost will decrease
[39:41] (2381.04s)
dramatically and the impact will
[39:42] (2382.24s)
increase. So again, for companies, this
[39:44] (2384.08s)
is all great. For the individuals
[39:46] (2386.48s)
working in the industry, this is
[39:47] (2387.84s)
terrible.
[39:49] (2389.20s)
And so I think this is the key thing.
[39:51] (2391.44s)
For the individual creators, this is
[39:53] (2393.20s)
great because you can finally tell the
[39:54] (2394.96s)
stories. So we'll see richer stories,
[39:56] (2396.56s)
but you've still got to distribute them.
[39:58] (2398.56s)
It's like one of the examples I had to
[39:59] (2399.84s)
give is, you know, Taylor Swift, bless
[40:01] (2401.44s)
her heart, it's not the best music in
[40:03] (2403.28s)
the world, but she still causes
[40:05] (2405.20s)
earthquakes, you know.
[40:08] (2408.80s)
Yeah. No. Uh your your point that uh I
[40:11] (2411.68s)
think the video game industry bypassed
[40:13] (2413.52s)
all other media combined. Uh I think I
[40:16] (2416.24s)
read that
[40:17] (2417.04s)
and it's on a much faster growth
[40:18] (2418.64s)
trajectory as well.
[40:20] (2420.56s)
But I think the video games are far more
[40:23] (2423.04s)
compelling with AI components, AI
[40:24] (2424.96s)
players, AI voices, voices that are
[40:27] (2427.20s)
talking directly to you. Uh and so that
[40:30] (2430.88s)
interactive media is going to get even
[40:32] (2432.96s)
more accelerated by this trend. So I
[40:35] (2435.68s)
whether you call it movies or video
[40:37] (2437.76s)
games or other the media is going to
[40:40] (2440.00s)
change, right? It always does. So it may
[40:42] (2442.00s)
not fit exactly in those swim lanes, but
[40:44] (2444.16s)
it's clearly the interactive talk to me
[40:46] (2446.48s)
part is going to grow much much faster
[40:48] (2448.72s)
than passive watching part.
[40:50] (2450.96s)
Yeah, I think it's the quality part and
[40:52] (2452.40s)
it's the feedback for you to find flow.
[40:54] (2454.40s)
So the movie industry's grown from like
[40:56] (2456.56s)
50 billion to 60 billion in the last
[40:58] (2458.80s)
10 years. Average IMDb score 6.3. Video
[41:02] (2462.48s)
game industry is like doubled inside,
[41:04] (2464.56s)
quadrupled. It was 170 billion, now it's
[41:06] (2466.40s)
like 500 billion. The average score has
[41:08] (2468.48s)
gone from 69% on Metacritic to 74%.
[41:11] (2471.68s)
Games are good now
[41:13] (2473.20s)
and you need to be good to compete. And
[41:15] (2475.52s)
again, I think what we can see from this
[41:17] (2477.44s)
technology is I as a creator can create
[41:22] (2482.16s)
the best things better because I can
[41:24] (2484.24s)
control every pixel. This is what Jensen
[41:26] (2486.88s)
has said. Every pixel will be generated
[41:29] (2489.04s)
exactly what's in your mind. Maybe you
[41:30] (2490.80s)
won't have to use a keyboard, it just
[41:32] (2492.16s)
comes straight from your mind can be on
[41:33] (2493.68s)
that screen you can tell the stories you
[41:34] (2494.96s)
want and on the other side you've got
[41:36] (2496.56s)
the fast food so you know the general
[41:39] (2499.28s)
content farms get even better
[41:41] (2501.28s)
so you got your gourmet and you've got
[41:42] (2502.64s)
your fast food and both of the quality
[41:44] (2504.48s)
of those will increase
[41:47] (2507.20s)
every day I get the strangest compliment
[41:49] (2509.36s)
someone will stop me and say Peter you
[41:51] (2511.52s)
have such nice skin honestly I never
[41:54] (2514.16s)
thought I'd hear that from anyone and
[41:56] (2516.08s)
honestly I can't take the full credit
[41:58] (2518.08s)
all I do is use something called OneSkin
[42:00] (2520.32s)
OS-01 twice a day every day. The company
[42:03] (2523.36s)
is built by four brilliant PhD women
[42:05] (2525.68s)
who've identified a peptide that
[42:07] (2527.68s)
effectively reverses the age of your
[42:09] (2529.76s)
skin. I love it and again I use this
[42:12] (2532.32s)
twice a day every day. You can go to
[42:14] (2534.72s)
oneskin.co and write peter at checkout
[42:17] (2537.52s)
for a discount on the same product I
[42:19] (2539.52s)
use. That's oneskin.co and use the code
[42:22] (2542.96s)
peter at checkout. All right, back to
[42:25] (2545.20s)
the episode.
[42:26] (2546.40s)
Uh, of course, Grok for coding. Let's
[42:28] (2548.64s)
take a quick listen.
[42:29] (2549.68s)
Right. So if you think about what are
[42:31] (2551.28s)
the applications out there that can
[42:32] (2552.88s)
really benefit from all those very
[42:34] (2554.72s)
intelligent, fast and smart models and
[42:36] (2556.96s)
coding is actually one of them.
[42:38] (2558.80s)
Yeah. So the team is currently working
[42:40] (2560.64s)
very heavily on coding models. Um I
[42:43] (2563.68s)
think uh right now the main focus is we
[42:46] (2566.32s)
actually trained recently a specialized
[42:48] (2568.64s)
coding model which is going to be both
[42:50] (2570.88s)
fast and smart. Um and I believe we can
[42:54] (2574.64s)
share with that model with you with all
[42:56] (2576.64s)
of you uh in a few weeks. Yeah. Yeah.
[42:59] (2579.68s)
I still remember Emad when you were on
[43:01] (2581.36s)
stage with me uh like three years ago at
[43:04] (2584.00s)
the abundance summit and you said no
[43:06] (2586.40s)
more coders in five years and it was it
[43:09] (2589.36s)
was front page throughout India.
[43:12] (2592.16s)
I I got I got hate mail about that you
[43:14] (2594.56s)
Oh my god. You scared the daylights out
[43:16] (2596.48s)
of and and it's true. I mean there's I
[43:18] (2598.56s)
mean it's a big issue. It's a big issue.
[43:21] (2601.44s)
Why would you be able to talk to a
[43:22] (2602.80s)
computer better than a computer can talk
[43:24] (2604.16s)
to a computer?
[43:26] (2606.32s)
you know,
[43:26] (2606.72s)
well, hold on. Let me drill into that
[43:28] (2608.96s)
just for a second. Don't you think we'll
[43:30] (2610.64s)
end up with really good coders just
[43:32] (2612.96s)
creating 100 times more code?
[43:35] (2615.68s)
No. Because what you'll have is really
[43:37] (2617.84s)
good context engineers directing to
[43:41] (2621.44s)
build things. code is an intermediate
[43:43] (2623.76s)
step of language because the computers
[43:46] (2626.32s)
and the compilers couldn't handle the
[43:48] (2628.80s)
complexity of what we wanted to talk
[43:50] (2630.40s)
about. Now you can talk to the AI all
[43:52] (2632.40s)
day long about anything and it
[43:53] (2633.68s)
understands to a reasonable degree what
[43:55] (2635.44s)
you actually want and once we get the
[43:57] (2637.12s)
feedback loops really going as we've
[43:58] (2638.88s)
seen with cursor and other things like
[44:00] (2640.32s)
that like there's a reason it's got to
[44:02] (2642.24s)
$500 million in revenue in a year you
[44:04] (2644.88s)
know there's a reason that Anthropic's
[44:06] (2646.16s)
got to $4 billion, probably two-thirds of
[44:08] (2648.16s)
that is code.
[44:10] (2650.56s)
Yeah. Crazy.
[44:12] (2652.16s)
All right.
[44:12] (2652.80s)
Disappointing that we won't have this
[44:14] (2654.00s)
for a couple weeks. We'll have to get
[44:15] (2655.04s)
back on the pod and and check it out
[44:16] (2656.56s)
when it's out. Somebody told me you can
[44:18] (2658.64s)
get to it through Cursor right now. I'm
[44:20] (2660.24s)
looking at Cursor as we speak, and I
[44:21] (2661.76s)
don't see it popping up as a as an
[44:23] (2663.76s)
option. But
[44:24] (2664.56s)
Cursor is very much linked to
[44:26] (2666.56s)
Anthropic, so they probably lobotomized
[44:28] (2668.72s)
it. But Grok 3 and Grok 4 Heavy are
[44:32] (2672.16s)
already pretty good coders. They write clean
[44:33] (2673.76s)
code and the coding model I think will
[44:35] (2675.76s)
be even better. But again, how much
[44:37] (2677.92s)
better are you going to get when you can
[44:39] (2679.20s)
output a 3D video game like that or just
[44:42] (2682.32s)
about anything?
[44:43] (2683.52s)
And I think this comes down to this:
[44:45] (2685.36s)
if you're trying to create content, the
[44:47] (2687.68s)
AI is good enough already for just about
[44:49] (2689.60s)
anything.
[44:50] (2690.64s)
If you're trying to create something
[44:52] (2692.08s)
creative, this is the final part that
[44:54] (2694.80s)
requires planning and coordination and
[44:56] (2696.64s)
multi-agent systems, and the UI/UX isn't
[44:59] (2699.20s)
there yet for the feedback loops, etc.
[45:01] (2701.84s)
Yeah. I can use all the horsepower they
[45:03] (2703.84s)
can give me though cuz like when you're
[45:05] (2705.20s)
writing a little code module it's all
[45:06] (2706.80s)
pretty much perfect already. But right
[45:08] (2708.88s)
now I can go to the best claude model
[45:10] (2710.96s)
and say build me a dashboard for this
[45:14] (2714.32s)
function and just give it that prompt
[45:16] (2716.40s)
and most of the time it comes back great
[45:18] (2718.80s)
and even thinks of things that I
[45:20] (2720.24s)
wouldn't have thought of for that
[45:21] (2721.60s)
dashboard and I can use another step up
[45:24] (2724.96s)
of capability in that area. So I'll use
[45:28] (2728.08s)
it up as quickly as it comes out.
[45:29] (2729.68s)
Believe me. All the tokens to Dave.
[45:34] (2734.00s)
Let's hear from Elon about, uh, his video
[45:36] (2736.72s)
model training. What's coming on input
[45:39] (2739.04s)
output?
[45:40] (2740.16s)
We expect to be training our video model
[45:42] (2742.00s)
with uh over 100,000 GB200s uh and uh to
[45:46] (2746.56s)
begin that training within the next
[45:48] (2748.96s)
three or four weeks. So, we're
[45:51] (2751.60s)
confident it's going to be pretty
[45:53] (2753.12s)
spectacular in video generation and
[45:55] (2755.20s)
video understanding. So, 100,000 GB200s,
[46:00] (2760.40s)
more than anybody's thrown at this.
[46:04] (2764.00s)
How does that
[46:06] (2766.16s)
hit you? So when we trained the
[46:08] (2768.56s)
state-of-the-art first video model two
[46:10] (2770.88s)
years ago... Two years ago?
[46:14] (2774.40s)
That's right.
[46:15] (2775.20s)
We used 700
[46:20] (2780.96s)
H100s. So like uh let's say they're
[46:23] (2783.84s)
three times slower. So the equivalent of
[46:25] (2785.76s)
200 of the chips that he's about to use
[46:27] (2787.76s)
cuz these are the integrated GB chips
[46:29] (2789.76s)
from um Nvidia. The top level models
[46:33] (2793.92s)
right now, if you look at the Lumas of
[46:35] (2795.76s)
the world, the ByteDance models of the
[46:37] (2797.60s)
world, the Veo 3s, use 2,000 to 4,000.
[46:41] (2801.76s)
He's about to use a 100,000
[46:44] (2804.96s)
of those. And the thing about video is
[46:47] (2807.44s)
when you train a video model, it
[46:48] (2808.88s)
actually learns a representation of the
[46:50] (2810.80s)
world through computation. So once we
[46:53] (2813.84s)
made a video model, we extended it to a
[46:55] (2815.44s)
3D model that could generate any 3D
[46:57] (2817.28s)
asset. It understands physics and more.
[47:00] (2820.40s)
So actually video models are world
[47:02] (2822.80s)
models that can be used to do all sorts
[47:05] (2825.12s)
of things like improve self-driving cars
[47:07] (2827.68s)
by creating whole worlds and other
[47:09] (2829.28s)
things like that as well. I think that's
[47:11] (2831.20s)
the reason why given they've got 300,000
[47:13] (2833.60s)
chips, they're putting a 100,000 of
[47:15] (2835.84s)
these to their video model.
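[Editor's note: the chip comparison in this exchange (700 H100s two years ago vs. 100,000 GB200s now, with an H100 treated as roughly a third of a GB200) can be sketched as back-of-envelope arithmetic. The 3x conversion factor is the speaker's rough estimate, not a benchmarked figure.]

```python
# Back-of-envelope version of the chip comparison above.
# Assumption (from the conversation, not a benchmark): 1 GB200 ~ 3 H100s.
H100_PER_GB200 = 3

# The 2023 state-of-the-art video model was trained on 700 H100s,
# i.e. roughly 200-odd GB200-equivalents.
first_run_gb200_equiv = 700 / H100_PER_GB200

# xAI's planned video training run: 100,000 GB200s.
planned_gb200 = 100_000

scale_up = planned_gb200 / first_run_gb200_equiv
print(f"{first_run_gb200_equiv:.0f} GB200-equivalents then, "
      f"{planned_gb200:,} now: ~{scale_up:.0f}x the chip-equivalent compute")
```

On these rough numbers the planned run is around 400x the chip-equivalent scale of the 2023 run, before counting two years of per-chip speedups.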
[47:17] (2837.44s)
Well, and they're planning a million
[47:18] (2838.88s)
GPUs by the end of this year. You know,
[47:22] (2842.96s)
let's you know, it's like it's like no
[47:25] (2845.12s)
small dreams here.
[47:27] (2847.28s)
Emad, when you pioneered this just a
[47:28] (2848.88s)
couple years ago, like you said, um the
[47:31] (2851.60s)
video model was trained completely
[47:33] (2853.20s)
separate from the large language model
[47:34] (2854.80s)
because, you know, it was just too much.
[47:36] (2856.08s)
You couldn't put everything into one
[47:37] (2857.44s)
mega model. Is he going to do um a
[47:39] (2859.84s)
monster retraining of this model with
[47:42] (2862.48s)
video data or is it a separate set of
[47:44] (2864.24s)
parameters and a separate model?
[47:45] (2865.84s)
This will be a separate model. So, we
[47:47] (2867.36s)
took the image model and then we created
[47:49] (2869.04s)
the video model from that and then we
[47:50] (2870.48s)
created the 3D model from that. Now
[47:52] (2872.48s)
they're doing from scratch training
[47:54] (2874.24s)
because the technology we developed for
[47:56] (2876.48s)
Stable Diffusion 3, the diffusion
[47:58] (2878.00s)
transformer with flow matching, is able to do
[48:00] (2880.16s)
that all at once. And this is similar to
[48:02] (2882.88s)
what Veo 3 and others use. And with
[48:05] (2885.60s)
optimizations, you can just pop that all
[48:07] (2887.68s)
straight in. Now the arch that they use,
[48:10] (2890.24s)
like the Grok model for the image, is
[48:14] (2894.08s)
actually the same architecture as for
[48:15] (2895.92s)
the language. And they may do the same
[48:17] (2897.60s)
thing. I'm not sure how they're going to
[48:18] (2898.72s)
train this model cuz again they're super
[48:20] (2900.40s)
smart. But it's a different model
[48:22] (2902.72s)
entirely. But they may all end up being
[48:25] (2905.60s)
the same model because if you want a
[48:27] (2907.60s)
model that understands physics and the
[48:29] (2909.84s)
wonders of the universe and what's the
[48:32] (2912.32s)
question to get to the answer 42,
[48:34] (2914.96s)
you probably want to train on everything
[48:36] (2916.48s)
that a human sees
[48:38] (2918.64s)
and more because it'll train on
[48:40] (2920.00s)
everything a million humans can see and
[48:41] (2921.92s)
understand and read and all sorts of
[48:44] (2924.16s)
stuff. I mean, you know, I'm excited
[48:46] (2926.32s)
about the idea of there's so many of my
[48:48] (2928.56s)
favorite science fiction books that have
[48:50] (2930.08s)
never been made into movies or TV
[48:51] (2931.76s)
series, right? I mean, the ability to
[48:54] (2934.16s)
just say, "Hey, uh, you know, like one
[48:56] (2936.64s)
of my favorite books is, uh, the
[48:58] (2938.16s)
Bobiverse series, uh, by Dennis E. Taylor.
[49:01] (2941.44s)
Uh, and you know, I love it. It's a four
[49:04] (2944.96s)
book series. It's it's extraordinary.
[49:08] (2948.96s)
Make it into a movie for me. Make it
[49:10] (2950.64s)
into a 20 part TV series for me. Um,
[49:13] (2953.68s)
here's a hundred bucks.
[49:15] (2955.84s)
100 bucks.
[49:17] (2957.68s)
Really fun, actually, if you took the
[49:19] (2959.44s)
best books that have ever been
[49:21] (2961.20s)
turned into movies already and use that
[49:22] (2962.88s)
as training data. So like this book
[49:24] (2964.96s)
turned into this killer movie. Make the
[49:27] (2967.28s)
changes necessary to get from point A to
[49:29] (2969.12s)
point B. Okay, now here's a book that
[49:31] (2971.04s)
never got made into a movie. From what
[49:32] (2972.88s)
you learned about those patterns, make
[49:34] (2974.64s)
the movie that's most compelling.
[49:36] (2976.72s)
The thing is you won't even have to do
[49:39] (2979.44s)
that. like just with the pace of chip
[49:42] (2982.16s)
improvements as we go through the
[49:43] (2983.52s)
generations, in two years you will have
[49:46] (2986.24s)
live 4K TV. You've already seen some
[49:49] (2989.68s)
people do live, low-resolution,
[49:52] (2992.32s)
interactive stuff. When Jensen says every
[49:54] (2994.96s)
pixel will be generated, he literally
[49:56] (2996.56s)
means it.
[49:58] (2998.16s)
Like, with the next-generation chips and
[50:00] (3000.32s)
a bit more improvement in the
[50:01] (3001.76s)
algorithms and optimization of the
[50:03] (3003.52s)
models, you can have live-streaming 3D or
[50:07] (3007.84s)
video where every single pixel is
[50:10] (3010.24s)
generated on your screen within a few
[50:11] (3011.84s)
years. And so you can just say, "Stop,
[50:14] (3014.88s)
try this, adjust this, and that'll be
[50:16] (3016.32s)
the feedback loop."
[50:17] (3017.60s)
It'd be fun to take some old movies and
[50:19] (3019.84s)
and make them way better. Like take the
[50:22] (3022.00s)
old Conan the Barbarian movie and
[50:23] (3023.92s)
really make it a proper movie.
[50:30] (3030.00s)
Oh my god. You know what hits me? We're
[50:32] (3032.00s)
sitting here having this conversation in
[50:34] (3034.40s)
four different cities around the world
[50:36] (3036.16s)
where, you know, we've taken so much for
[50:39] (3039.04s)
granted in this video channel and like,
[50:42] (3042.32s)
you know, 10 years ago, what did we have?
[50:44] (3044.16s)
We just barely had Skype.
[50:47] (3047.60s)
and now,
[50:48] (3048.88s)
you know, it's it's crazy. So, we humans
[50:52] (3052.80s)
adapt so rapidly to awesomeness
[50:56] (3056.72s)
and we take it, we take it...
[50:58] (3058.16s)
We normalize it very fast. It's like
[50:59] (3059.92s)
your second Waymo ride, right?
[51:02] (3062.32s)
Your first one's like, "Wow." And your
[51:03] (3063.68s)
second one was like, "Okay."
[51:07] (3067.44s)
Oh, for sure. So, any any closing
[51:11] (3071.36s)
thoughts on
[51:11] (3071.92s)
Grok? I have a question. I have a
[51:14] (3074.00s)
question for Emad.
[51:15] (3075.68s)
Uh, you've been in the space for a while
[51:17] (3077.44s)
now. We have Grok 4, right?
[51:20] (3080.00s)
What are the types of things that Grok
[51:22] (3082.00s)
5 will be able to do?
[51:24] (3084.00s)
So Grok 5 will be a multi-agentic
[51:26] (3086.48s)
system, but rather than having four
[51:27] (3087.68s)
boosters, it'll have 60 or 600 or 6,000
[51:31] (3091.44s)
depending on what you want. It'll
[51:33] (3093.20s)
probably have a world model plugged in
[51:34] (3094.96s)
and it'll have interconnectivity, and
[51:36] (3096.56s)
this is something that Elon mentioned
[51:37] (3097.60s)
yesterday, to every major type of
[51:39] (3099.92s)
system. So it knows how to use Maya, it
[51:41] (3101.92s)
knows how to use advanced physics
[51:43] (3103.12s)
simulators, it will write its own Lean
[51:45] (3105.84s)
code and optimize it for mathematics.
[51:48] (3108.72s)
And so it's just going to be like an
[51:50] (3110.32s)
incredibly versatile worker. And just
[51:53] (3113.12s)
like he's going to unleash millions of
[51:54] (3114.88s)
Optimus robots, he's going to unleash
[51:57] (3117.04s)
billions if not trillions of these
[51:58] (3118.88s)
things, GPU supply notwithstanding,
[52:02] (3122.16s)
into the economy and that's going to be
[52:04] (3124.72s)
a bit crazy. And I think the way that
[52:06] (3126.56s)
you'll interact with Grok 6, probably
[52:08] (3128.56s)
Grok 5, is you'll have a Zoom call with
[52:10] (3130.96s)
it just like you have now.
[52:13] (3133.68s)
Hey folks, Salim here. Hope you're
[52:15] (3135.04s)
enjoying these podcasts and this one in
[52:16] (3136.72s)
particular was amazing. Um, if you want
[52:18] (3138.96s)
to hear more from me or get involved in
[52:20] (3140.88s)
our EXO ecosystem on the 23rd of July,
[52:23] (3143.28s)
we're doing a once a month workshop.
[52:25] (3145.28s)
Tickets are $100. Uh, we limit it to a
[52:28] (3148.08s)
few people to make sure it's
[52:29] (3149.36s)
intimate and proper. And we go
[52:30] (3150.72s)
through the ExO model. What we do there
[52:32] (3152.96s)
is we basically show you how to take
[52:34] (3154.64s)
your organization and turn it into one
[52:37] (3157.04s)
of these hyper-growth, AI-type
[52:39] (3159.52s)
companies. And we've done this now for
[52:41] (3161.36s)
10 years with thousands of companies. Uh
[52:43] (3163.76s)
many of these use the model that we have
[52:45] (3165.60s)
called the Exponential Organizations
[52:47] (3167.36s)
model. Peter and I co-authored the
[52:49] (3169.28s)
second edition a couple of years ago. So
[52:51] (3171.12s)
it's a hundred bucks, July 23rd. Come
[52:53] (3173.84s)
along. It's the best $100 you'll spend.
[52:55] (3175.92s)
Link is below. See you there. Uh Gemini
[52:59] (3179.12s)
3 and, uh, GPT-5. Let's talk for one second
[53:03] (3183.92s)
about what you expect there. Are these
[53:06] (3186.48s)
going to just leapfrog Grok 4? Are they
[53:09] (3189.52s)
going to be, you know, sort of diverging
[53:11] (3191.44s)
in different directions? Emad, your
[53:13] (3193.20s)
thoughts?
[53:13] (3193.84s)
I think they'll probably all be kind of
[53:15] (3195.68s)
the same plateau. Now, it's really about
[53:17] (3197.60s)
the UI/UX, and then how you wrap these
[53:19] (3199.92s)
into agents and then multi-agent systems,
[53:22] (3202.40s)
and then how you make it so just easy
[53:24] (3204.08s)
for anyone to use like this.
[53:26] (3206.24s)
So, you know, Google in the work that
[53:28] (3208.96s)
they've done with their AR glasses,
[53:32] (3212.08s)
um, you know, enabling you to have a
[53:34] (3214.40s)
conversation with your AI and being able
[53:36] (3216.72s)
to have it see what you see. That's a
[53:39] (3219.28s)
great step forward. And, you know, OpenAI,
[53:42] (3222.72s)
with their voice
[53:45] (3225.20s)
mode, has been fantastic. Uh, are there
[53:48] (3228.24s)
any versions of, you know, user
[53:51] (3231.36s)
interface that we haven't seen yet? I
[53:54] (3234.00s)
mean BCI will be one of them for sure.
[53:57] (3237.28s)
I mean I personally think again the
[53:59] (3239.44s)
interface is just the interface that you
[54:01] (3241.04s)
have with a remote worker
[54:02] (3242.72s)
and all the technology is almost in
[54:04] (3244.40s)
place for that like
[54:05] (3245.52s)
get on a call, hit them up on Slack,
[54:07] (3247.44s)
pretty much and you just don't know.
[54:08] (3248.56s)
That's my AGI. My AGI is actually more
[54:11] (3251.44s)
like actually useful intelligence,
[54:12] (3252.88s)
right? Like this is I think probably
[54:14] (3254.64s)
what Salim would like, just
[54:17] (3257.76s)
I don't know it's an AI or not. It just
[54:19] (3259.28s)
gets the job done and it doesn't sleep.
[54:21] (3261.36s)
And this final part of it as well is
[54:22] (3262.88s)
that the task length of these AIs has
[54:25] (3265.04s)
gone to like 7 hours now. I think I've
[54:27] (3267.52s)
seen from various entities now they're
[54:29] (3269.44s)
getting that up to almost arbitrary
[54:31] (3271.52s)
length. So you can set teams away and
[54:34] (3274.00s)
they have organizing AIs and others.
[54:35] (3275.92s)
They get the job done. They check in
[54:37] (3277.36s)
whenever they're unsure about something.
[54:39] (3279.44s)
And then this is that next step up for
[54:41] (3281.60s)
all these technologies. But I think the
[54:43] (3283.52s)
10^27 FLOP models will, as you said,
[54:46] (3286.00s)
all be pretty much similar cuz they're
[54:49] (3289.04s)
already above PhD and everything. Now
[54:51] (3291.20s)
it's about making them super useful and
[54:52] (3292.80s)
getting them out there. And the demand
[54:54] (3294.88s)
for that is in the billions of agents.
[55:00] (3300.24s)
Dave, you know what I find interesting
[55:01] (3301.52s)
is Elon's got basically a limitless
[55:04] (3304.88s)
capital supply. Yeah,
[55:07] (3307.20s)
you know, it's every time he's gone to
[55:09] (3309.28s)
raise money, you know, I've asked, well,
[55:11] (3311.76s)
how much can I get in the next round and
[55:14] (3314.32s)
it's like, well, we're over subscribed
[55:16] (3316.40s)
already.
[55:18] (3318.80s)
Yeah. Yeah. No, the constraint isn't
[55:20] (3320.80s)
going to be the money. It's going to be
[55:21] (3321.92s)
the GPUs. I have a question for you,
[55:23] (3323.68s)
Emad, about that, actually, because if you
[55:25] (3325.84s)
say, okay, you know, GPT-5 will
[55:29] (3329.04s)
be out soon, couple weeks hopefully.
[55:30] (3330.80s)
It'll be on the same plane, probably
[55:32] (3332.56s)
leapfrog, but in the same genre, and
[55:35] (3335.84s)
then Gemini 3 will come out and it'll be
[55:38] (3338.80s)
somewhere similar, maybe a little
[55:40] (3340.80s)
better. Um, but the chip supply, you
[55:44] (3344.72s)
know, Google has huge amounts of GPUs
[55:49] (3349.12s)
and a massive cloud computing platform,
[55:51] (3351.92s)
plus they make their own TPUs.
[55:54] (3354.56s)
Then, um, you know, you got a million
[55:56] (3356.40s)
chips going to Elon. We just talked
[55:58] (3358.32s)
about that. Sam at OpenAI has had a
[56:01] (3361.44s)
little bit of trouble with Microsoft
[56:02] (3362.64s)
recently. There's there's definitely
[56:03] (3363.76s)
some kind of falling out there. And the
[56:05] (3365.60s)
way OpenAI got ahead of everyone in the
[56:07] (3367.20s)
first place is getting access to the
[56:08] (3368.64s)
compute from Microsoft. And so is he
[56:12] (3372.56s)
going to have a problem catching
[56:14] (3374.32s)
up to a million concurrent GPUs training
[56:16] (3376.88s)
a single massive model? I mean, I think
[56:20] (3380.00s)
Stargate is in that order of magnitude
[56:21] (3381.92s)
when you look at the kind of gigawatts
[56:23] (3383.60s)
and now Amazon's just announced, for
[56:26] (3386.00s)
Anthropic using Trainium, something
[56:27] (3387.92s)
that's even bigger than Stargate with
[56:29] (3389.44s)
their latest kind of chip supply.
[56:31] (3391.36s)
Google's the leader in this. So, they
[56:32] (3392.72s)
have 3 million odd. But the thing that I
[56:35] (3395.12s)
come back to is OpenAI basically slowed
[56:39] (3399.36s)
down when everyone was making Ghibli
[56:41] (3401.52s)
memes.
[56:43] (3403.36s)
And so if you think about order of
[56:44] (3404.88s)
compute of Ghibli memes compared to
[56:47] (3407.20s)
order of compute for useful work like
[56:50] (3410.00s)
it's that versus that, right? Google is
[56:53] (3413.76s)
okay because Google are actually landing
[56:55] (3415.44s)
millions of their own TPUs and they have
[56:57] (3417.12s)
the full stack and it has better
[56:58] (3418.24s)
interconnect for large context length.
[57:00] (3420.08s)
It's actually really good 7th generation
[57:02] (3422.24s)
hardware. Elon will get the supply
[57:04] (3424.40s)
because he's a beast. And I think again
[57:06] (3426.96s)
OpenAI have the capital, but they're
[57:09] (3429.68s)
moving more and more towards consumer
[57:11] (3431.60s)
with the Jony Ive acquisition and
[57:13] (3433.20s)
things like that.
[57:14] (3434.16s)
The dark horse here is probably again
[57:16] (3436.48s)
Meta, to be honest, because Zuck is going
[57:19] (3439.76s)
to drop a hundred billion.
[57:22] (3442.16s)
On this. He dropped 30 billion on the
[57:24] (3444.80s)
glasses on the metaverse.
[57:27] (3447.60s)
He thinks AGI is coming and Meta is a
[57:30] (3450.16s)
$1.7 trillion stock. He'll easily drop a
[57:33] (3453.28s)
hundred billion.
[57:33] (3453.28s)
Yeah, he's got $70 billion of free cash
[57:35] (3455.52s)
right now to to use and can pump it up.
[57:38] (3458.64s)
Well, I did an interview with Yann LeCun at
[57:40] (3460.88s)
MIT not super long ago, and they had
[57:43] (3463.04s)
committed and already bought a million
[57:44] (3464.56s)
GPUs for internal use at Meta. So, he
[57:47] (3467.44s)
had those on order already then. I'm
[57:50] (3470.40s)
sure they're in house now. So, he has
[57:52] (3472.00s)
the compute in-house.
[57:54] (3474.40s)
So, basically all the top guys can get a
[57:55] (3475.92s)
million. The next step is 10 million.
[57:58] (3478.88s)
Well, there's only 20 million in the
[58:00] (3480.08s)
world. This is where it runs into a
[58:01] (3481.36s)
bottleneck.
[58:01] (3481.76s)
You can't even keep a straight face, can you?
[58:04] (3484.08s)
Well, well, but again, think about every
[58:05] (3485.52s)
pixel being generated and think about
[58:07] (3487.20s)
again the economic activity of actually
[58:10] (3490.08s)
having a single useful teammate or
[58:13] (3493.44s)
accountant. I mean, we're talking about
[58:14] (3494.56s)
like accountants and lawyers and other
[58:16] (3496.48s)
things like that on the other side of
[58:17] (3497.92s)
the screen. We're not even talking about
[58:19] (3499.12s)
super genius PhDs.
[58:21] (3501.36s)
Is Nvidia just going to keep going,
[58:23] (3503.76s)
going, going? Is anybody going to
[58:25] (3505.68s)
displace their their production at all?
[58:29] (3509.20s)
Uh, all of the top chip manufacturers are
[58:32] (3512.72s)
good enough to run these models. The
[58:35] (3515.52s)
only question is who has enough gating
[58:37] (3517.36s)
supply. So the reason for the Hopper
[58:39] (3519.36s)
thing was actually the packaging of the
[58:40] (3520.96s)
chips. You know, the CoWoS.
[58:43] (3523.52s)
So you have different supply channel
[58:44] (3524.96s)
constraints just like robots. In two
[58:47] (3527.68s)
years, robots will be good enough to do,
[58:50] (3530.16s)
what, 90 to 95% of human labor. The only
[58:53] (3533.60s)
reason the entire global labor economy
[58:55] (3535.44s)
isn't going to flip over to
[58:57] (3537.04s)
$2 robots is supply chains. So what
[59:00] (3540.08s)
we've got is a complete replacement of
[59:02] (3542.40s)
the capital stock of the economy to
[59:05] (3545.04s)
GPUs for virtual workers and robots, and
[59:08] (3548.24s)
it's just supply constraint. So Nvidia
[59:11] (3551.12s)
number one, you don't go wrong. You
[59:12] (3552.64s)
don't get fired getting Nvidia, but
[59:14] (3554.48s)
you'll get chips from wherever you can
[59:16] (3556.08s)
get them because those chips are orders
[59:19] (3559.28s)
of magnitude cheaper than your team
[59:21] (3561.28s)
members.
[59:23] (3563.20s)
I just asked, actually, um, Gemini in the
[59:26] (3566.00s)
background here what it costs at today's
[59:27] (3567.68s)
market rate to train a ronnaflop. So one
[59:30] (3570.48s)
of these models, just the compute cost, is
[59:32] (3572.96s)
$312 million. So, like you said, Emad,
[59:36] (3576.40s)
it's like one signing bonus over at
[59:38] (3578.24s)
OpenAI these days. So the
[59:38] (3578.24s)
cost is not the issue. It's who has
[59:42] (3582.48s)
access to the compute. What's amazing to
[59:44] (3584.24s)
me in this entire conversation we
[59:45] (3585.60s)
haven't said the word Apple once.
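[Editor's note: the cost figure Dave quotes a moment earlier, roughly $312 million of compute for a 10^27 FLOP ("ronnaflop") training run, can be sanity-checked with rough arithmetic. Every input below is an illustrative assumption of ours (per-chip throughput, utilization, rental price), not a figure from the conversation.]

```python
# Rough cost model for a 10^27 FLOP ("ronnaflop") training run.
# All three inputs are illustrative assumptions.
TOTAL_FLOP = 1e27            # total training compute
PEAK_FLOPS = 5e15            # assumed ~5 PFLOP/s per accelerator (low precision)
UTILIZATION = 0.45           # assumed model-FLOPs utilization
DOLLARS_PER_GPU_HOUR = 2.50  # assumed market rental rate

effective_flops = PEAK_FLOPS * UTILIZATION       # usable FLOP/s per GPU
gpu_hours = TOTAL_FLOP / effective_flops / 3600  # total GPU-hours needed
cost = gpu_hours * DOLLARS_PER_GPU_HOUR

print(f"~{gpu_hours/1e6:,.0f}M GPU-hours, ~${cost/1e6:,.0f}M of compute")
```

With these assumed inputs the sketch lands around $300 million, the same ballpark as the quoted figure; halving or doubling any single input moves the answer proportionally.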
[59:47] (3587.20s)
Yeah. And Apple controls about a
[59:49] (3589.68s)
third of the manufacturing capacity at
[59:51] (3591.44s)
TSMC for their M3 line, M2 line chips.
[59:54] (3594.80s)
So, they could easily become a player in
[59:57] (3597.20s)
the get-a-big-data-center-up-and-running
[60:00] (3600.40s)
game. They'd have an incredible
[60:02] (3602.64s)
asset, having that manufacturing toehold
[60:05] (3605.60s)
with TSMC. It's just incredible that
[60:08] (3608.16s)
they haven't done that.
[60:09] (3609.20s)
Well, I think this comes down to the
[60:10] (3610.56s)
thing. These models have economies of
[60:12] (3612.40s)
scope in that once you train a model
[60:15] (3615.04s)
that's good enough, do you really need
[60:16] (3616.96s)
another one
[60:19] (3619.20s)
and then it becomes like electricity, it
[60:22] (3622.24s)
becomes a utility. So your genius models
[60:24] (3624.96s)
become utilities and then what matters
[60:26] (3626.48s)
is the model that runs on the M3 or
[60:29] (3629.28s)
whatever, you know, like Liquid AI just
[60:31] (3631.36s)
releasing edge models those things
[60:33] (3633.52s)
become even more important because the
[60:35] (3635.84s)
M3s and M4s have capacity.
[60:38] (3638.56s)
Yeah, that's a really big deal. By the
[60:41] (3641.04s)
way, Liquid is, uh, I didn't appreciate
[60:43] (3643.20s)
how big a deal it was until recently,
[60:44] (3644.80s)
but um people are going to want to use
[60:46] (3646.56s)
this stuff immediately. I mean, it's so
[60:49] (3649.04s)
addictive and the inference time compute
[60:52] (3652.00s)
is severely constrained. Uh, and Liquid,
[60:55] (3655.36s)
you know, runs fine on the edge on these
[60:57] (3657.12s)
M3s. It runs really, really fast. It
[60:59] (3659.04s)
runs on the chips in the cars and it's
[61:01] (3661.20s)
about, you know, they say about 100
[61:02] (3662.80s)
times more efficient than just trying to
[61:04] (3664.88s)
run a brute force transformer. So that
[61:07] (3667.52s)
could be a huge unlock for people having
[61:09] (3669.28s)
access to AI, you know, at least more
[61:11] (3671.28s)
access to keep up with the demand.
[61:13] (3673.12s)
Exactly. Because you'll have your gated
[61:15] (3675.04s)
stuff and then they might increase
[61:16] (3676.88s)
prices because they have to because
[61:17] (3677.92s)
there'll be so much competition for
[61:19] (3679.12s)
chips even as you get them cheaper. And
[61:21] (3681.20s)
then you just got this AI with you, but
[61:22] (3682.80s)
that AI will be smart enough to do your
[61:24] (3684.32s)
day-to-day. And so you'll have a whole
[61:26] (3686.32s)
curve of intelligence just like
[61:28] (3688.24s)
sometimes you need to have steady
[61:30] (3690.80s)
workers and sometimes you need your
[61:32] (3692.16s)
geniuses. I forgot you were
[61:34] (3694.64s)
actually the first guy to see Liquid
[61:36] (3696.64s)
when it was just a research project.
[61:38] (3698.24s)
Yeah, I gave them all the compute to
[61:39] (3699.68s)
get going.
[61:40] (3700.40s)
Yeah, that's right. That was
[61:41] (3701.76s)
amazing. And now they're at, what, a $2 billion
[61:43] (3703.36s)
valuation.
[61:44] (3704.48s)
So, listen, when you come back and join
[61:46] (3706.96s)
us next week, I think we have it
[61:48] (3708.96s)
scheduled. I want to hear all about the
[61:50] (3710.40s)
intelligent internet. I'd love you to
[61:52] (3712.96s)
break the news on what you've been
[61:54] (3714.48s)
working on in secret for the last, you
[61:57] (3717.52s)
know, year or so. Uh I I've seen pieces
[62:01] (3721.20s)
of it. It's awesome.
[62:02] (3722.96s)
But hopefully you'll spill the
[62:05] (3725.44s)
whole master plan for us. Uh Dave,
[62:08] (3728.40s)
Salem, my Moonshot mates, thank you
[62:10] (3730.48s)
guys. Uh, Grok 4 special edition.
[62:13] (3733.68s)
See you at Grok 5.
[62:15] (3735.12s)
Yeah, we got Gemini 3
[62:17] (3737.68s)
in like three weeks.
[62:19] (3739.44s)
We'll be back online soon.
[62:21] (3741.28s)
All right, see you all. Thank you for
[62:22] (3742.72s)
joining us.
[62:23] (3743.12s)
Take care, folks. Bye, guys.
[62:26] (3746.08s)
If you could have had a 10-year head
[62:27] (3747.76s)
start on the dot-com boom back in the 2000s,
[62:30] (3750.32s)
would you have taken it? Every week I
[62:32] (3752.32s)
track the major tech meta trends. These
[62:34] (3754.80s)
are massive game-changing shifts that
[62:37] (3757.12s)
will play out over the decade ahead.
[62:39] (3759.20s)
From humanoid robotics to AGI, quantum
[62:41] (3761.68s)
computing, energy breakthroughs and
[62:43] (3763.52s)
longevity. I cut through the noise and
[62:45] (3765.68s)
deliver only what matters to our lives
[62:48] (3768.56s)
and our careers. I send out a Metatrends
[62:50] (3770.80s)
newsletter twice a week as a quick
[62:53] (3773.04s)
two-minute read over email. It's entirely
[62:55] (3775.12s)
free. These insights are read by
[62:57] (3777.52s)
founders, CEOs, and investors behind
[62:59] (3779.76s)
some of the world's most disruptive
[63:01] (3781.20s)
companies. Why? Because acting early is
[63:04] (3784.72s)
everything. This is for you if you want
[63:06] (3786.88s)
to see the future before it arrives and
[63:09] (3789.52s)
profit from it. Sign up at
[63:11] (3791.12s)
diamandis.com/metatrends
[63:13] (3793.44s)
and be ahead of the next tech bubble.
[63:15] (3795.84s)
That's diamandis.com/metatrends.
[63:18] (3798.48s)
[Music]