[00:00] (0.08s)
Every week it seems like the new tools
[00:02] (2.72s)
that drop are just mind-blowing.
[00:05] (5.76s)
It's just non-stop now. That
[00:07] (7.76s)
rate of acceleration, you can feel it.
[00:10] (10.16s)
Welcome to the Talking AI podcast where
[00:12] (12.16s)
we talk AI with both experts in the
[00:14] (14.00s)
field and early adopters. I'm your host,
[00:16] (16.40s)
Matt Paige, and we're here to demystify
[00:18] (18.40s)
AI for you so you can get some value
[00:20] (20.16s)
from it. Let's talk some AI.
[00:24] (24.64s)
Today we got a special guest, Robert
[00:26] (26.24s)
Ransen, director and founding member of
[00:28] (28.16s)
the Agentics Foundation, also founding
[00:30] (30.48s)
partner of Human Race, an agentic
[00:33] (33.12s)
engineering firm. And he's an author,
[00:35] (35.28s)
too: The AI Dividend: Preparing for a
[00:37] (37.36s)
Post-Labor Economy. All kinds of things. You're
[00:39] (39.60s)
a renaissance man, Robert. But for those who
[00:42] (42.16s)
aren't familiar with the Agentics
[00:44] (44.00s)
Foundation, it's an open foundation
[00:45] (45.60s)
dedicated to building scalable,
[00:47] (47.52s)
intuitive, and human-centered agentic
[00:49] (49.60s)
AI systems and infrastructure.
[00:52] (52.16s)
And it was originally founded by Reuven
[00:54] (54.48s)
Cohen, who hands down is my favorite
[00:56] (56.96s)
person to follow in the AI space right
[00:59] (59.12s)
now. Robert, first of all, welcome to
[01:01] (61.12s)
the show. I think you'd agree there on
[01:02] (62.72s)
the Reuven side. The dude's just insane.
[01:05] (65.04s)
Yes. Yes. Absolutely. Thank you. I'm
[01:07] (67.20s)
happy to be here, Matt. And it's always
[01:09] (69.92s)
a lot of fun to have these
[01:10] (70.96s)
conversations. And Reuven Cohen is his
[01:13] (73.52s)
own singularity.
[01:16] (76.24s)
That's probably the best way to describe
[01:18] (78.00s)
it right there. And so Robert and I, we
[01:20] (80.40s)
were both kind of early members in this
[01:22] (82.96s)
small random community that Reuven had
[01:25] (85.60s)
started. And it began as like just this
[01:28] (88.00s)
Friday meetup online called the AI
[01:31] (91.92s)
hacker space. And essentially Reuven would
[01:34] (94.16s)
jump on, blow our minds with something
[01:36] (96.64s)
new he'd built that week, or maybe
[01:39] (99.52s)
just had AI building while he sleeps,
[01:41] (101.04s)
which seems to be the standard now.
[01:43] (103.12s)
And then he would open the floor and
[01:44] (104.56s)
then others would share their projects.
[01:46] (106.56s)
It was just this really cool community,
[01:49] (109.36s)
and what kind of started as
[01:51] (111.52s)
like this casual gathering has grown
[01:54] (114.00s)
like wildfire. So much so that it's now
[01:55] (115.84s)
an official nonprofit organization, and
[01:59] (119.52s)
we're going to deep dive more into the
[02:01] (121.68s)
community. I think it's the best one
[02:04] (124.48s)
anybody out there that's interested in
[02:06] (126.24s)
AI can be a part of. But I also want to
[02:09] (129.28s)
get into this central topic around open
[02:11] (131.76s)
source versus closed source, right?
[02:12] (132.96s)
Because it's one of the core principles
[02:15] (135.04s)
behind the Agentics Foundation, at the
[02:18] (138.32s)
core of the discussion. And I
[02:20] (140.00s)
want to start here: Robert, just give me
[02:22] (142.00s)
your definition of open source
[02:24] (144.48s)
versus closed source and your stance on
[02:26] (146.24s)
it. Let's start super high level and then
[02:28] (148.08s)
we'll rabbit-hole down into some
[02:29] (149.68s)
interesting parts.
[02:30] (150.64s)
Sure. I mean, at its core, I think it's not
[02:32] (152.96s)
trying to retain control over ideas,
[02:36] (156.64s)
and that the tools that we build out
[02:38] (158.88s)
around the concepts are meant to be
[02:42] (162.96s)
freely available to society to run with.
[02:46] (166.16s)
And I agree with you, Matt, that's been there
[02:48] (168.40s)
from the very beginning. And actually
[02:50] (170.80s)
next week, on the 26th of April, is our
[02:55] (175.68s)
one-year anniversary, our 52nd
[02:57] (177.84s)
live event.
[02:59] (179.28s)
Is it really? The actual calendar year,
[03:02] (182.88s)
a full year?
[03:03] (183.36s)
I'm trying to remember back how I found it. I
[03:06] (186.00s)
think it was maybe just some random
[03:08] (188.00s)
LinkedIn post.
[03:09] (189.36s)
I wasn't even connected with Reuven, and I
[03:11] (191.04s)
just randomly saw it. I was like,
[03:12] (192.32s)
"Oh, that seems interesting." And I
[03:15] (195.60s)
jumped in. It's just evolved so
[03:18] (198.40s)
much. Actually, let's stay there. I'm
[03:20] (200.00s)
curious, like, what is your story of
[03:22] (202.48s)
getting involved in what was the AI
[03:24] (204.72s)
hacker space and is now the Agentics
[03:26] (206.40s)
Foundation? Like how did you meet Reuven?
[03:29] (209.12s)
Kind of give me your story, your
[03:30] (210.48s)
perspective as this thing's grown. And
[03:32] (212.64s)
how many people are in it now?
[03:34] (214.96s)
Yeah, there's 1,200 members in
[03:37] (217.92s)
the WhatsApp space, and that's capped.
[03:40] (220.48s)
So, we can't invite any more folks in.
[03:42] (222.88s)
Occasionally, somebody will leave and
[03:44] (224.32s)
somebody else will come in. So, folks,
[03:46] (226.08s)
there is a little bit of rotation, but
[03:48] (228.00s)
it's capped. That's why we've also now
[03:49] (229.84s)
opened up the Discord side, where there will
[03:51] (231.92s)
be a place for the community to grow.
[03:53] (233.68s)
There's over 85,000, I think, followers
[03:57] (237.20s)
on LinkedIn. Or maybe no,
[04:00] (240.08s)
that's 120,000.
[04:01] (241.60s)
And there's at least 65,000 on
[04:03] (243.60s)
Reddit, across several subreddits.
[04:06] (246.08s)
So, it's an audience that's pretty much
[04:08] (248.00s)
closing in on a couple hundred thousand.
[04:09] (249.68s)
And then there's a huge longtail
[04:11] (251.36s)
audience for those live events. So,
[04:14] (254.32s)
Reuven does a coding session on Thursdays,
[04:16] (256.24s)
and then we have the open forum, demos, and
[04:19] (259.76s)
new-tool-drop integration conversations.
[04:23] (263.28s)
That's every Friday.
[04:24] (264.72s)
And then those get recorded. Our friends
[04:27] (267.04s)
over at Caloura have that archive
[04:28] (268.96s)
available. So you can go back over the
[04:31] (271.28s)
last year and look at all those different
[04:33] (273.60s)
coding sessions. But for me, uh,
[04:35] (275.76s)
That'd actually be a fun exercise,
[04:37] (277.28s)
Robert: go back to the first episode
[04:39] (279.28s)
just as like a throwback. Yeah.
[04:41] (281.12s)
And see what we were talking about
[04:42] (282.64s)
because I'm sure it'd blow our minds just
[04:44] (284.48s)
how far things have progressed.
[04:46] (286.32s)
It does, undoubtedly, because every
[04:48] (288.24s)
week it seems like the new tools that
[04:50] (290.72s)
drop are just mind-blowing.
[04:53] (293.68s)
And it's just non-stop now. That
[04:55] (295.52s)
rate of acceleration, you can feel it.
[04:58] (298.32s)
Like today for instance, I was eating
[05:00] (300.48s)
lunch so I jumped on a little bit late,
[05:03] (303.04s)
but Reuven was building something. What
[05:05] (305.36s)
was it? It was like being able to find
[05:07] (307.68s)
rare earth metals or something. I don't
[05:09] (309.60s)
know what it was, but something insane
[05:12] (312.08s)
leveraging an AI system.
[05:13] (313.92s)
Today was one of those moments, Matt,
[05:15] (315.28s)
where even his buddies are like, "Wait,
[05:18] (318.32s)
I think you're crazy."
[05:20] (320.16s)
You know what I mean? But it was a
[05:22] (322.56s)
concept, an idea run through deep research.
[05:24] (324.80s)
He came across an article: a lot
[05:27] (327.52s)
of scientific work has been going into
[05:30] (330.72s)
quantum signals of magnetic fields,
[05:32] (332.72s)
how do you even describe it,
[05:33] (333.92s)
magnetometers, magnetic fields,
[05:37] (337.04s)
with the science being that
[05:40] (340.24s)
each layer of material,
[05:43] (343.60s)
each item, concentrations of metals,
[05:46] (346.08s)
all of that kind all have fingerprints.
[05:49] (349.12s)
It's just that we have not looked
[05:51] (351.76s)
through the right set of tools to
[05:54] (354.64s)
find those fingerprints. It took
[05:56] (356.00s)
us a long time to figure out
[05:56] (356.88s)
gravitational waves. And this
[05:59] (359.44s)
speaks just so potently to the era that
[06:02] (362.48s)
we're in because now an incredibly
[06:05] (365.20s)
talented engineer like Reuven with these
[06:07] (367.52s)
tools can utilize a recursive
[06:10] (370.88s)
chain of deep research to really explore
[06:13] (373.44s)
the entire footprint of that scientific
[06:15] (375.68s)
body of work and then hypothesize about
[06:19] (379.76s)
what a codebase that utilizes the
[06:23] (383.12s)
science and tests for it would look like
[06:26] (386.48s)
and then build it and then run demos
[06:29] (389.28s)
within 24 hours.
[06:32] (392.00s)
That's crazy.
[06:32] (392.88s)
It is crazy. It means that frontier
[06:36] (396.00s)
scientific work around the earth, around
[06:39] (399.12s)
the entire planet can be identified and
[06:43] (403.04s)
have tools built around it very quickly.
[06:45] (405.84s)
That's acceleration. That's what it
[06:47] (407.92s)
feels like in civilization when those
[06:50] (410.64s)
kinds of advances can be made,
[06:53] (413.04s)
discovered, tested, validated, and
[06:55] (415.36s)
pushed forward so quickly.
[06:58] (418.08s)
Yeah. And you mentioned something, and
[07:00] (420.48s)
folks, if you want to check it out, the
[07:02] (422.56s)
recordings are out there,
[07:03] (423.68s)
and you can join the community as well.
[07:05] (425.36s)
We'll get into some of that later, but
[07:07] (427.44s)
you mentioned the Thursday sessions.
[07:09] (429.04s)
Those kind of came later.
[07:11] (431.04s)
That's right. Reuven calls them vibe
[07:12] (432.96s)
coding sessions, where you literally
[07:14] (434.96s)
can just be a fly on the wall and see
[07:16] (436.40s)
him vibe coding, and you can see how
[07:18] (438.72s)
he thinks about it, his stream of
[07:20] (440.08s)
consciousness as he approaches it. And
[07:22] (442.80s)
he said something today that I think
[07:24] (444.48s)
really resonated with me. He talked
[07:25] (445.92s)
about the idea of vibe almost being
[07:31] (451.12s)
on a spectrum. On one end of the
[07:32] (452.16s)
spectrum, you're
[07:34] (454.00s)
literally just starting from nothing, and
[07:36] (456.64s)
it truly is vibes
[07:38] (458.96s)
based, letting AI guide you. And then on
[07:41] (461.92s)
the other end of the spectrum, you truly
[07:44] (464.32s)
have a detailed plan of what you want
[07:46] (466.80s)
to do and build, and there's varying
[07:49] (469.84s)
degrees in between. And then he's
[07:51] (471.52s)
also built this SPARC framework too,
[07:52] (472.56s)
which I don't know how much you've
[07:54] (474.00s)
played with, but that's essentially
[07:58] (478.08s)
his methodology for using AI to build.
[07:58] (478.08s)
Yeah: specification, pseudocode,
[08:00] (480.64s)
architecture, refinement, and then
[08:03] (483.20s)
completion. So, uh, to
[08:07] (487.20s)
work with AI coding assistants in order
[08:09] (489.84s)
to give them an overview of what it is
[08:13] (493.44s)
they're about to build.
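For readers who want to see what that could look like in practice, here is a minimal sketch of walking a coding assistant through SPARC-style phases. The phase prompts and the `ask_assistant` helper are illustrative assumptions, not Reuven's actual implementation:

```python
# Minimal sketch of a SPARC-style pipeline: each phase prompt is fed to a
# coding assistant, and the output becomes context for the next phase.
# `ask_assistant` is any callable that sends a prompt to an LLM and
# returns its reply; it is a stand-in, not a real library function.
SPARC_PHASES = [
    ("Specification", "Write a detailed specification with user stories for: {goal}"),
    ("Pseudocode", "Turn the specification into high-level pseudocode."),
    ("Architecture", "Propose the architecture: modules, data flow, and services."),
    ("Refinement", "Review the design; list risks and needed revisions."),
    ("Completion", "Produce the final implementation plan and task list."),
]

def run_sparc(goal: str, ask_assistant) -> dict:
    """Run each SPARC phase in order, carrying prior output forward."""
    context, outputs = goal, {}
    for name, template in SPARC_PHASES:
        prompt = template.format(goal=goal) + "\n\nContext so far:\n" + context
        outputs[name] = ask_assistant(prompt)
        context += "\n\n" + outputs[name]
    return outputs
```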
[08:15] (495.36s)
And yeah, it's really
[08:17] (497.20s)
wonderful too, because I think Reuven
[08:19] (499.28s)
first really formulated that back about
[08:21] (501.44s)
eight months ago,
[08:22] (502.64s)
and it's been adopted now. Now when
[08:25] (505.04s)
you look up Roo boomerang
[08:27] (507.60s)
or custom Roo modes, it's
[08:30] (510.56s)
coming back with the SPARC
[08:32] (512.48s)
framework. So really, hats off to the
[08:34] (514.72s)
work that Reuven's been doing in the
[08:36] (516.40s)
space. Agentic engineering,
[08:40] (520.16s)
define that for us because I don't think
[08:41] (521.60s)
a lot of people either know what that is
[08:43] (523.92s)
or even the term agentic. I don't know
[08:46] (526.08s)
about you, but spell check still says
[08:47] (527.60s)
it's not a word for me.
[08:50] (530.40s)
They'll catch up. Agentic engineering is
[08:52] (532.88s)
not the process of building agents.
[08:55] (535.28s)
Agentic engineering is the process of
[08:57] (537.04s)
building systems that build things.
[09:00] (540.40s)
Yeah. Yeah.
[09:00] (540.64s)
So agentic means it can interact with
[09:02] (542.72s)
information and then, based on a goal, make
[09:06] (546.32s)
decisions about how it's going to
[09:07] (547.68s)
respond to that information.
[09:09] (549.28s)
So it takes action. It is agentic. It
[09:11] (551.52s)
has agency. The key point is that
[09:13] (553.36s)
there's a touch point with information
[09:15] (555.28s)
that's incoming in order for it to
[09:17] (557.20s)
formulate a response. It's not an A/B, not a
[09:19] (559.44s)
binary decision. We can utilize large
[09:22] (562.72s)
language models to give it a space of
[09:25] (565.28s)
logic and be much, much more powerful.
[09:28] (568.80s)
But agentic engineering is literally
[09:30] (570.80s)
building systems that build things.
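Reduced to code, that definition is just a loop. A bare-bones sketch, where `observe`, `llm`, and `act` are stand-in callables rather than any particular framework:

```python
# The essence of "agentic": observe incoming information, let an LLM
# decide against a goal, then act. All three callables are injected
# stand-ins; this is a conceptual sketch, not a specific framework.
def agent_loop(goal: str, observe, llm, act, max_steps: int = 10) -> None:
    for _ in range(max_steps):
        observation = observe()  # the touch point with incoming information
        decision = llm(
            f"Goal: {goal}\nObservation: {observation}\n"
            "Decide the next action, or reply DONE if the goal is met."
        )
        if decision.strip() == "DONE":
            break
        act(decision)  # it takes action: it has agency
```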
[09:34] (574.08s)
And that's what we've been doing for
[09:36] (576.16s)
the last year: really exploring the
[09:37] (577.92s)
space of what engineering looks like
[09:39] (579.68s)
with artificial intelligence and what
[09:41] (581.68s)
kinds of things you can do with
[09:44] (584.08s)
that. And you still hear an awful lot
[09:46] (586.88s)
online, on LinkedIn and everything:
[09:48] (588.40s)
"Oh, it's not..." This is really important. And
[09:50] (590.80s)
Matt, you put your finger right on it:
[09:52] (592.88s)
the vibe coding to agentic engineering
[09:55] (595.04s)
spectrum. What that looks like
[09:57] (597.20s)
is really whether or not you understand
[10:00] (600.32s)
what's being built, you've planned
[10:02] (602.00s)
for what's being built, and you know when
[10:04] (604.16s)
it's going off the rails or not. An agentic
[10:06] (606.72s)
engineer is somebody who's learned
[10:10] (610.40s)
the space, the architecture, or is a
[10:13] (613.92s)
former computer science engineer, a
[10:17] (617.12s)
developer, and they're not just having
[10:20] (620.64s)
the coding assistant go from scratch on
[10:22] (622.88s)
its own, figuring it out as it goes. That's the
[10:25] (625.44s)
vibe coding space,
[10:26] (626.72s)
which is beautiful, because literally
[10:29] (629.92s)
millions and millions of people around
[10:31] (631.28s)
the world and growing every single day
[10:33] (633.76s)
are diving back into software and what
[10:37] (637.28s)
they can create and discovering this new
[10:39] (639.28s)
tool, and the ease of developing
[10:42] (642.80s)
software is coming closer and closer to
[10:44] (644.56s)
everybody, making it universally
[10:46] (646.48s)
accessible. You can build great stuff.
[10:48] (648.00s)
We're not there yet, but it's coming
[10:49] (649.52s)
closer every day. And people are
[10:51] (651.04s)
learning more quickly. So, they're
[10:53] (653.60s)
approaching the technology every day.
[10:56] (656.00s)
And the impact of that is more
[10:57] (657.92s)
creativity, more discovery, faster
[11:00] (660.56s)
iteration. And that impact is a huge
[11:04] (664.80s)
boon to society, within the right
[11:07] (667.76s)
guardrails.
[11:09] (669.28s)
And to your point, there is this like
[11:11] (671.04s)
knowledge gap right now. It's almost
[11:12] (672.72s)
like this arbitrage of knowledge of
[11:14] (674.80s)
where the technology actually is. And I
[11:18] (678.08s)
think two key points. One is, even if AI and
[11:20] (680.32s)
LLMs did not progress one iota from
[11:22] (682.88s)
today, there's still so much untapped
[11:24] (684.96s)
potential. And I think that's a big
[11:26] (686.56s)
thing. And then B, y'all go check out,
[11:29] (689.84s)
for the listeners, the episode I did with
[11:31] (691.60s)
Michael Lou from Stripe.
[11:33] (693.84s)
The key phrase he said that sticks with
[11:35] (695.60s)
me almost every day. It's like this is
[11:37] (697.12s)
as bad as AI and LLMs are ever going to
[11:40] (700.08s)
be, right? It's only going to keep
[11:41] (701.76s)
getting better. You put those two things
[11:43] (703.36s)
together, it's insane. And I love how
[11:46] (706.24s)
you define the agentic side.
[11:48] (708.00s)
It's very much systems thinking.
[11:50] (710.40s)
You're building the systems to build the
[11:52] (712.08s)
systems, and whatnot. And we're a similar
[11:53] (713.92s)
type of firm at HatchWorks AI. The
[11:55] (715.92s)
way I like to describe it in simple
[11:57] (717.28s)
terms is: we build AI-native solutions,
[11:59] (719.76s)
and we use AI to build them.
[12:01] (721.52s)
You do those two
[12:03] (723.20s)
things, but the key thing is you're using
[12:04] (724.96s)
AI to build, in essence.
[12:08] (728.00s)
The other cool thing too: we mentioned
[12:09] (729.36s)
the SPARC framework earlier. Reuven open
[12:11] (731.60s)
sources everything he does, and we'll put
[12:13] (733.20s)
that in the show notes if anybody wants
[12:14] (734.80s)
to go deep into that. Like, it's almost
[12:17] (737.12s)
absurd how much stuff he open sources
[12:19] (739.92s)
and just puts out there. But that's
[12:22] (742.96s)
his foundational view and mindset:
[12:25] (745.84s)
things should be open. This shouldn't be
[12:28] (748.24s)
gatekept. But back to the original
[12:30] (750.32s)
kind of closed versus open discussion.
[12:34] (754.64s)
This is like a very interesting
[12:36] (756.08s)
debate to me, because I see
[12:38] (758.24s)
points on both sides that make sense,
[12:41] (761.36s)
right? There's the point on one side
[12:43] (763.36s)
around the transparency and trust
[12:46] (766.16s)
and, you know, the safety side of it, right?
[12:48] (768.88s)
Because some will say, oh, it needs to be
[12:50] (770.24s)
closed because this is so powerful and
[12:51] (771.92s)
it can't get into the hands of nefarious
[12:54] (774.80s)
actors and people with ill intent. And on
[12:57] (777.76s)
the other side: it should be completely
[12:59] (779.84s)
democratized and open, and that will most
[13:02] (782.64s)
benefit society, versus a small number of
[13:05] (785.28s)
folks at the corporate level dictating
[13:08] (788.00s)
how this grows. And both points, I'm like,
[13:11] (791.52s)
yes, that makes sense. What's your take
[13:14] (794.08s)
there? Especially the "in the wrong
[13:16] (796.40s)
hands" part. I think that's the scariest point
[13:18] (798.00s)
for people, when it gets in the
[13:20] (800.24s)
wrong hands. And can you stop that
[13:22] (802.32s)
anyway? I think is the point. But I'm
[13:23] (803.92s)
just curious like going into some of
[13:25] (805.20s)
those points around the debate.
[13:26] (806.56s)
Yeah. And all of those
[13:28] (808.72s)
concerns are completely legitimate. So
[13:31] (811.68s)
they are things that we need to
[13:33] (813.28s)
constructively and creatively solve for.
[13:36] (816.08s)
I think most people would agree it would
[13:38] (818.32s)
be uncomfortable to have a singular
[13:41] (821.68s)
entity or organization that had the sole
[13:45] (825.60s)
access to near-infinite, cheap, and
[13:49] (829.76s)
abundant intelligence, and that was not
[13:53] (833.92s)
to some degree ubiquitously understood
[13:56] (836.64s)
and accessible. And that's the
[13:59] (839.20s)
heart of it for the Agentics Foundation:
[14:01] (841.36s)
that these tools are so
[14:04] (844.56s)
important
[14:05] (845.76s)
that the awareness of how to use them,
[14:08] (848.40s)
what they can do, how to build them, how
[14:10] (850.80s)
to develop them responsibly, deploy them
[14:12] (852.80s)
responsibly, all of that
[14:15] (855.36s)
needs to be open. Now, transparency,
[14:19] (859.36s)
accountability, traceability, and how
[14:22] (862.72s)
we identify and respond to the potential
[14:25] (865.68s)
of bad actors. That's a tool set,
[14:28] (868.08s)
one of a number of tool sets that a
[14:30] (870.32s)
number of organizations are working on.
[14:32] (872.24s)
But also, let's be practical. Let's
[14:37] (877.76s)
be realistic. Governments and
[14:37] (877.76s)
leading institutions are going to retain
[14:40] (880.08s)
the most powerful models that they've
[14:42] (882.16s)
been able to build for a period of time
[14:43] (883.92s)
before they're public. So there will
[14:45] (885.92s)
always be a leading edge of governance.
[14:49] (889.36s)
And obviously as a civilization you want
[14:52] (892.80s)
to be able to have trust and faith in
[14:55] (895.12s)
the transparency and accountability of
[14:56] (896.80s)
your governance, but also ensure the
[14:59] (899.68s)
ability to protect all of us
[15:01] (901.28s)
collectively. Yeah. There's a whole
[15:03] (903.12s)
bunch of things there. The base level,
[15:05] (905.12s)
like you said before, of what we've
[15:06] (906.88s)
already got is so incredibly powerful,
[15:09] (909.52s)
and we've only scratched the surface in
[15:12] (912.32s)
terms of utilization and where that can
[15:14] (914.00s)
be applied, so we've got lots to work
[15:16] (916.32s)
with already.
[15:18] (918.24s)
Yeah. My view is I like to believe that
[15:20] (920.72s)
there's more good actors out in the
[15:22] (922.64s)
world than bad actors. When something's
[15:24] (924.64s)
truly open source, there are almost
[15:26] (926.48s)
these two forces combating each
[15:29] (929.28s)
other, and ultimately I think the good
[15:31] (931.04s)
will win. At least I like to believe that.
[15:33] (933.60s)
But there is power in the
[15:35] (935.12s)
decentralization
[15:36] (936.72s)
of this insanely powerful tool and
[15:40] (940.16s)
innovation, right?
[15:40] (940.96s)
Yes, yeah, absolutely. And the
[15:43] (943.92s)
collective input of human creativity,
[15:46] (946.80s)
because we have such great context from
[15:49] (949.12s)
the entirety of our lives, and our world
[15:50] (950.72s)
views are uniquely still leading, in terms of
[15:54] (954.56s)
working side by side, being able to direct our
[15:57] (957.20s)
creativity. Anyway, that's why I love
[15:59] (959.36s)
the Friday live events,
[16:01] (961.12s)
because you get to hear from so many.
[16:02] (962.64s)
And what Reuven's been doing is he's been
[16:04] (964.88s)
showing what's possible in the hands of
[16:07] (967.12s)
an amazing engineer,
[16:09] (969.36s)
right? Taking everything to its logical
[16:11] (971.68s)
conclusion
[16:12] (972.80s)
and so at agentics.org
[16:17] (977.44s)
we want to make sure that those tools,
[16:19] (979.84s)
the understanding, the repos members are
[16:22] (982.88s)
working on, all of that is available, and
[16:22] (982.88s)
it's really exciting. We've got I think
[16:25] (985.04s)
there's about 80 chapter requests around
[16:28] (988.00s)
the world, and one of the first in Asia
[16:30] (990.32s)
will be in Singapore. So
[16:32] (992.80s)
there's lots of really great work there,
[16:34] (994.16s)
because at the ground level people
[16:35] (995.36s)
want to know, in their communities,
[16:37] (997.68s)
who else is involved, who else is
[16:39] (999.28s)
engaged, what are the leading tool sets,
[16:41] (1001.36s)
and so those local chapters will be a
[16:43] (1003.68s)
great asset to the community.
[16:45] (1005.60s)
So I think the other point with open
[16:47] (1007.20s)
source versus closed source is how do
[16:49] (1009.36s)
you define it? Because you hear like
[16:51] (1011.12s)
companies and model makers coming out
[16:52] (1012.88s)
saying they're open source but they've
[16:54] (1014.40s)
just essentially given the weights to
[16:57] (1017.36s)
their model. What's your view on that?
[16:59] (1019.20s)
Do you view that as fully open
[17:01] (1021.12s)
source or is there a spectrum of open
[17:03] (1023.76s)
versus closed source as well?
[17:05] (1025.68s)
For sure. Yeah. If the weights of the
[17:07] (1027.20s)
models are open, that's
[17:09] (1029.20s)
an open-source model. Otherwise,
[17:11] (1031.36s)
it's just a variation on the
[17:13] (1033.68s)
theme, if you're not exposing those
[17:15] (1035.28s)
weights. The ability now for lighter
[17:19] (1039.28s)
open source models to train up on
[17:24] (1044.24s)
leading closed source models.
[17:26] (1046.64s)
Yes, means that the economics of
[17:29] (1049.44s)
that intelligence gap
[17:32] (1052.00s)
really significantly changed, right? Since
[17:34] (1054.64s)
Qwen,
[17:35] (1055.76s)
which led to
[17:37] (1057.84s)
DeepSeek.
[17:39] (1059.44s)
Qwen was Alibaba's model, and
[17:42] (1062.00s)
then was it DeepSeek that leveraged that?
[17:44] (1064.40s)
So you're almost saying they're building
[17:45] (1065.44s)
on the backs of these other models, in a
[17:47] (1067.68s)
sense, as an approach
[17:50] (1070.40s)
for the training of the new model.
[17:53] (1073.20s)
Yeah, that's right, that's right, because
[17:54] (1074.24s)
they can just tap right in, directly
[17:56] (1076.08s)
interact
[17:57] (1077.04s)
with the leading thinking models, and
[18:01] (1081.60s)
then benefit from that logic exchange. I
[18:03] (1083.44s)
think the repo that Reuven was talking about
[18:06] (1086.24s)
was DRP DPRO,
[18:06] (1086.24s)
which allows you to do that. And it's not
[18:08] (1088.16s)
expensive either. It's not what
[18:10] (1090.88s)
it was six months ago.
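To make the mechanics concrete, here is a schematic sketch of that distillation idea: collect a leading model's answers as a fine-tuning dataset for a smaller open-weights model. The model name and prompts are placeholders, and note that many providers' terms of service restrict this kind of use:

```python
# Schematic distillation sketch: query a "teacher" model and save its
# answers as chat-format training examples for a smaller open model.
# Requires OPENAI_API_KEY in the environment; model name is a placeholder.
import json
from openai import OpenAI

client = OpenAI()

prompts = [
    "Explain how boomerang task decomposition works.",
    "Plan a REST API for a note-taking app.",
]

with open("distill.jsonl", "w") as f:
    for p in prompts:
        answer = client.chat.completions.create(
            model="gpt-4o",  # stand-in for whatever leading model is used
            messages=[{"role": "user", "content": p}],
        ).choices[0].message.content
        # One training example per line, ready for a fine-tuning run.
        f.write(json.dumps({"messages": [
            {"role": "user", "content": p},
            {"role": "assistant", "content": answer},
        ]}) + "\n")
```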
[18:14] (1094.48s)
We talked a little bit about Reuven's SPARC framework
[18:17] (1097.28s)
that he leverages, but on the AI
[18:20] (1100.56s)
IDE coding front, curious your take on
[18:23] (1103.12s)
the different tools: which ones you
[18:24] (1104.24s)
like, which ones you use for different
[18:25] (1105.52s)
purposes.
[18:26] (1106.48s)
And then this may be old news by the
[18:28] (1108.08s)
time this launches, but I know you saw a
[18:30] (1110.48s)
couple days ago OpenAI's talking about
[18:33] (1113.20s)
acquiring Windsurf
[18:34] (1114.96s)
and what does that start to do, but
[18:36] (1116.64s)
maybe start on just the different AI IDEs
[18:40] (1120.48s)
that are out there? What's your go-to?
[18:42] (1122.72s)
Are there ones you use in different
[18:44] (1124.48s)
scenarios?
[18:46] (1126.00s)
I've been using Cline since it first
[18:48] (1128.80s)
came out, and that has been my preference
[18:51] (1131.52s)
from the very beginning. And now Roo
[18:54] (1134.96s)
Code, which is built on that. Yeah, which
[18:56] (1136.96s)
is a fork of Cline. That's right,
[18:58] (1138.40s)
the community, they're doing just
[19:00] (1140.64s)
great work.
[19:02] (1142.08s)
And so, yeah, there's Windsurf, Cursor.
[19:06] (1146.24s)
Really, for public-facing small
[19:09] (1149.60s)
projects, Lovable, you can get pretty
[19:12] (1152.56s)
far now, especially now that you can
[19:14] (1154.40s)
integrate Supabase with that. But as
[19:16] (1156.72s)
far as in the IDE, for me it is
[19:19] (1159.84s)
completely Roo Code with custom modes.
[19:22] (1162.00s)
Boomerang.
[19:23] (1163.44s)
Being able to create subtasks and custom
[19:26] (1166.24s)
modes, and the creativity of what we
[19:30] (1170.16s)
can do with those custom modes,
[19:32] (1172.32s)
it's a wonderful combination. And
[19:35] (1175.28s)
you do that well with the SPARC
[19:38] (1178.16s)
framework, which means all the lifting
[19:40] (1180.32s)
comes in the planning and the
[19:41] (1181.52s)
articulation of what it is that you want
[19:43] (1183.12s)
the software to do,
[19:44] (1184.80s)
and user stories and functional specs
[19:47] (1187.04s)
and everything else builds out from
[19:48] (1188.80s)
there. That's where the work is,
[19:51] (1191.92s)
and with Roo Code's custom modes and
[19:53] (1193.68s)
the SPARC framework, that gets us there.
[19:56] (1196.96s)
So, I want you to go deeper on those two
[19:59] (1199.12s)
things, custom modes and the boomerang.
[20:02] (1202.24s)
But just for people as well because I
[20:03] (1203.60s)
feel like a lot of people either aren't
[20:04] (1204.96s)
using these tools or they're new to
[20:06] (1206.40s)
them. But like Roo Code, for instance:
[20:08] (1208.48s)
you can be in just a regular old VS Code
[20:11] (1211.20s)
instance, and it's almost like
[20:13] (1213.28s)
a plugin, right? So you kind of have the
[20:15] (1215.20s)
chat window there. So that's the way you
[20:17] (1217.04s)
can access that. Technically, you can do
[20:19] (1219.04s)
that in Cursor as well, but do you
[20:22] (1222.00s)
need to, I guess, is the question. But go
[20:24] (1224.48s)
deeper on those two things. Custom
[20:26] (1226.00s)
modes, boomerang. What are those? How
[20:28] (1228.08s)
should you think about them? Yes.
[20:29] (1229.44s)
For folks listening.
[20:30] (1230.88s)
Okay, great. Yeah. So Roo Code: a VS Code
[20:33] (1233.20s)
extension, open source. You can set and
[20:37] (1237.20s)
select the models that you
[20:38] (1238.72s)
want it to work with,
[20:41] (1241.36s)
and it lives on the left side of your
[20:43] (1243.52s)
IDE, and you interact with it, and it can
[20:48] (1248.40s)
read, write, edit, do all kinds of things: create
[20:51] (1251.20s)
the code, or edit, or review. Or you
[20:54] (1254.00s)
can keep it in ask mode, and you just
[20:55] (1255.76s)
have conversations about your code.
[20:57] (1257.92s)
That's an underutilized thing, by the
[20:59] (1259.76s)
way: collaborating, aligning
[21:01] (1261.84s)
with
[21:03] (1263.36s)
whatever tool you're using, just to
[21:06] (1266.40s)
plan together, right?
[21:08] (1268.56s)
Yep. That's it. That's it. Because
[21:10] (1270.80s)
in that dialogue, all kinds of really
[21:13] (1273.44s)
neat insights and creative inspirations
[21:15] (1275.92s)
come. So the key is to have that fun and
[21:19] (1279.12s)
do that exploration before you really
[21:20] (1280.72s)
get into saying, okay, now this is what
[21:22] (1282.08s)
we're going to build. And so you've got
[21:23] (1283.60s)
that mapped out. So with Roo Code's modes, now,
[21:25] (1285.76s)
think about it for a second:
[21:27] (1287.68s)
I want to turn off the ability to edit,
[21:30] (1290.16s)
leave the ability to read, and we'll
[21:32] (1292.00s)
call that ask mode. So then I said,
[21:34] (1294.72s)
great, go through my project
[21:36] (1296.16s)
documentation. Let's talk about what
[21:38] (1298.56s)
kind of ideas we
[21:40] (1300.32s)
have for the UI, right?
[21:42] (1302.88s)
And have a conversation back and forth.
[21:44] (1304.96s)
Maybe, what are some interesting
[21:46] (1306.96s)
visualizations we could pull from the
[21:48] (1308.48s)
data? That kind of thing. That's ask mode.
[21:50] (1310.96s)
Architect mode: you
[21:53] (1313.60s)
can give it a specific
[21:56] (1316.64s)
set of custom instructions whenever it's
[21:59] (1319.04s)
in architect mode. You describe how it
[22:01] (1321.44s)
is that you want that documentation and
[22:03] (1323.76s)
those plans and the specs to be created.
[22:06] (1326.40s)
How do you want them written? What's the
[22:08] (1328.00s)
form? What's the flavor of it? What's your
[22:10] (1330.56s)
general approach to writing those kinds
[22:12] (1332.64s)
of documents. So different modes can
[22:16] (1336.08s)
have different custom instructions, so
[22:17] (1337.84s)
that when you enter that mode,
[22:20] (1340.32s)
it's wearing a specific hat. And Roo Code
[22:24] (1344.16s)
enables us to create custom modes. So we
[22:27] (1347.28s)
can create any mode that we want in a
[22:29] (1349.52s)
template in the Roo modes, and now we
[22:32] (1352.80s)
can move between them.
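For a sense of what a custom mode looks like on disk, here is a rough sketch of one definition written out to Roo Code's mode configuration file. The field names follow Roo Code's documentation as best as I recall, so treat this as a sketch and check the current docs:

```python
# Rough sketch of defining a custom "architect" mode for Roo Code by
# writing a .roomodes file. Field names are from memory of Roo Code's
# docs and may have drifted; verify against the current documentation.
import json

custom_modes = {
    "customModes": [{
        "slug": "architect",
        "name": "Architect",
        "roleDefinition": "You write specifications, plans, and design docs.",
        "customInstructions": "Prefer concise documents built around user stories.",
        "groups": ["read"],  # read-only: this hat cannot edit files
    }]
}

with open(".roomodes", "w") as f:
    json.dump(custom_modes, f, indent=2)
```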
[22:35] (1355.28s)
Okay, there's one really important mode, just like in any
[22:37] (1357.60s)
multi-agent system, interestingly enough,
[22:40] (1360.16s)
because this sort of
[22:41] (1361.84s)
transitions Roo Code from a coding
[22:44] (1364.32s)
assistant into more like a multi-agent
[22:47] (1367.28s)
framework
[22:48] (1368.16s)
specifically for coding.
[22:49] (1369.84s)
And that most important mode is the
[22:51] (1371.68s)
orchestrator mode.
[22:53] (1373.84s)
So that's the mode that it's going to be
[22:55] (1375.68s)
in when it's thinking about the project,
[22:57] (1377.44s)
or the part of the project that it is
[23:00] (1380.24s)
working on, and it's the one that
[23:03] (1383.20s)
is going to determine what the next task is
[23:08] (1388.32s)
and which mode it's going to assign it
[23:10] (1390.80s)
to. Fascinating. That's the boomerang.
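Conceptually, the orchestrator/boomerang pattern reduces to a small loop: think, delegate a subtask to a specialist mode, and let the result come back. A toy sketch, with `plan_next` and the `modes` callables as illustrative stand-ins rather than Roo Code internals:

```python
# Toy sketch of orchestrator mode with boomerang tasks: the orchestrator
# decides the next subtask and which mode gets it; each result returns
# ("boomerangs") into the shared context. Not Roo Code's actual internals.
def orchestrate(project_goal: str, plan_next, modes) -> list:
    results = []
    while True:
        task = plan_next(project_goal, results)  # orchestrator thinks
        if task is None:
            break  # project complete
        specialist = modes[task["mode"]]  # e.g. "architect", "code", "ask"
        results.append(specialist(task["brief"]))  # result boomerangs back
    return results
```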
[23:13] (1393.60s)
It's almost like having another human in
[23:15] (1395.60s)
the loop, in like a weird way. I get
[23:18] (1398.64s)
more into this mode of, as I'm
[23:21] (1401.28s)
working with AI, I intentionally
[23:24] (1404.96s)
pretend like I'm working with another person,
[23:27] (1407.92s)
in a sense.
[23:28] (1408.80s)
Yep. It is fascinating
[23:32] (1412.88s)
where you can go when you get down the
[23:35] (1415.76s)
rabbit hole of coding, when we can
[23:39] (1419.20s)
have a whole team of specialists in the
[23:42] (1422.40s)
development environment. Right. One of
[23:44] (1424.64s)
the first things I did when I saw the
[23:46] (1426.08s)
Roo modes is, I said, great, let me
[23:48] (1428.40s)
create a custom researcher mode. And I
[23:51] (1431.12s)
just laid out: use the OpenAI API
[23:53] (1433.84s)
connection and the GPT-4o search preview
[23:56] (1436.80s)
model, and anytime you need to look up
[23:58] (1438.88s)
API documents, just use that. And I've got
[24:01] (1441.68s)
my key set up, and it works. So
[24:04] (1444.56s)
now when the coding team is building a
[24:06] (1446.72s)
project and it runs into a problem
[24:08] (1448.32s)
with an API setup, it can just look it up
[24:11] (1451.68s)
and continue on.
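The core of that researcher mode is just one call. A minimal sketch using the OpenAI Python SDK and the search-preview chat model; the exact prompt and the wiring into Roo Code are assumptions on my part:

```python
# Minimal sketch of the researcher-mode lookup: ask OpenAI's
# web-search-enabled chat model for current API documentation.
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

def look_up_api_docs(question: str) -> str:
    """Fetch up-to-date API documentation via the search-preview model."""
    response = client.chat.completions.create(
        model="gpt-4o-search-preview",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

print(look_up_api_docs("What parameters does the Stripe checkout session API take?"))
```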
[24:14] (1454.16s)
Yeah. And then Chris Royce, on Reuven's suggestion,
[24:16] (1456.56s)
morphed it into a Perplexity MCP server.
[24:19] (1459.28s)
So now you can do deep research while
[24:21] (1461.84s)
coding. Phenomenal. Earlier you
[24:24] (1464.56s)
mentioned Lovable, which we glazed over,
[24:26] (1466.80s)
but I got to say, that's almost my
[24:29] (1469.92s)
favorite go-to tool for, just call it,
[24:32] (1472.64s)
vibe coding or whatever. And the beauty
[24:34] (1474.40s)
of it is you mentioned the connection
[24:35] (1475.84s)
with Supabase. So, I've got into this
[24:38] (1478.40s)
nice little method that I work,
[24:41] (1481.04s)
especially for like niche use cases
[24:43] (1483.36s)
and things, where I'll
[24:45] (1485.60s)
align with it on what you
[24:47] (1487.52s)
want to build, but I'll have it
[24:49] (1489.28s)
build out the landing page of the thing
[24:51] (1491.44s)
first. Something about that does
[24:53] (1493.44s)
this nice alignment on the
[24:55] (1495.28s)
positioning, what we're trying to
[24:56] (1496.40s)
build, the value prop. And then connect
[24:58] (1498.80s)
into Supabase, have it do
[25:00] (1500.24s)
authentication, set up the backend, and
[25:02] (1502.56s)
then you just start building feature by
[25:04] (1504.00s)
feature. But it's pretty insane how
[25:06] (1506.88s)
capable it is when you have it
[25:08] (1508.64s)
integrated with Supabase. And then the
[25:11] (1511.28s)
design is just very impressive in terms
[25:13] (1513.36s)
of what it can do from a design
[25:14] (1514.88s)
perspective.
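For those curious what the Supabase side of that workflow amounts to, here is a hedged sketch using the supabase-py client rather than Lovable's actual generated code; the URL, key, and table name are placeholders:

```python
# Hedged sketch of the Supabase wiring described above: auth plus a
# backend table insert, via the supabase-py client. Project URL, anon
# key, and the "features" table are placeholders, not real values.
from supabase import create_client

supabase = create_client("https://YOUR-PROJECT.supabase.co", "YOUR-ANON-KEY")

# Authentication: sign a user up with email and password.
supabase.auth.sign_up(
    {"email": "user@example.com", "password": "a-strong-password"}
)

# Backend: insert a row into a hypothetical "features" table.
supabase.table("features").insert(
    {"name": "landing-page", "status": "shipped"}
).execute()
```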
[25:15] (1515.84s)
Haven't they done a wonderful job on
[25:17] (1517.12s)
that? It's really nice that it puts it out
[25:20] (1520.08s)
in the sandbox environment, so that as
[25:22] (1522.16s)
you're asking... For example, the other
[25:24] (1524.40s)
day. I'm a sculptor; I wholesale.
[25:27] (1527.12s)
Did you sculpt that behind you? The
[25:28] (1528.40s)
thing behind you?
[25:29] (1529.12s)
Yes. Yeah.
[25:30] (1530.48s)
Who is that?
[25:31] (1531.36s)
Uh that is no one in particular. That's
[25:33] (1533.76s)
just a clay study.
[25:35] (1535.52s)
I didn't know if it was anyone in particular. For
[25:36] (1536.64s)
those listening, there's a beautiful
[25:38] (1538.48s)
clay sculpture, what do you call it, a
[25:40] (1540.56s)
bust, behind Robert. That's right.
[25:43] (1543.44s)
Thank you, Matt. Yeah. I actually
[25:45] (1545.92s)
thought I was retired from the business
[25:50] (1550.08s)
until... You're back in! AI brought you
[25:52] (1552.48s)
back in.
[25:53] (1553.44s)
I was completely immersed in the
[25:55] (1555.12s)
studio work. And
[25:58] (1558.00s)
when gen AI first was publicly
[26:01] (1561.60s)
available, when 3.5 came out... Because
[26:05] (1565.12s)
in the lifetime before the clay studio, I
[26:07] (1567.20s)
was an operations exec, and the impact on
[26:10] (1570.16s)
workflows, operations, organizations was
[26:12] (1572.24s)
just so obvious. It hit me like a 2x4, and
[26:16] (1576.40s)
I've been immersed ever since but I do
[26:18] (1578.08s)
still keep the clay work for my Zen
[26:20] (1580.08s)
counterbalance
[26:21] (1581.84s)
Nice. There you go. There you go.
[26:24] (1584.24s)
You do the vibe sculpting. It's the
[26:26] (1586.72s)
next thing. That's the thing that's
[26:27] (1587.76s)
caught on: there's vibe
[26:29] (1589.36s)
everything, which is funny, how that's
[26:30] (1590.88s)
become like a moniker.
[26:31] (1591.84s)
Yeah. Lovable is Anton and the crew.
[26:34] (1594.48s)
I think, just
[26:36] (1596.16s)
obviously, they've done a phenomenal job.
[26:38] (1598.00s)
Yeah.
[26:38] (1598.88s)
And the work that they do under the hood
[26:41] (1601.52s)
is just a wonderful user
[26:46] (1606.80s)
experience. So you can chat in natural
[26:49] (1609.20s)
language. You've got dev mode. I don't
[26:50] (1610.80s)
know if that's actually trademarkable or
[26:52] (1612.08s)
copyrightable, but...
[26:53] (1613.84s)
Yeah, they got into issues with Figma.
[26:56] (1616.24s)
The response back to Figma was funny.
[26:57] (1617.36s)
There's some back and forth.
[26:58] (1618.56s)
All right. I didn't see the response
[27:00] (1620.48s)
yet. I'm sure it's good. Yeah,
[27:02] (1622.72s)
that'll be good. But Lovable itself
[27:04] (1624.40s)
is a wonderful program. And what this
[27:08] (1628.16s)
means is, yeah, demos, value props, you
[27:10] (1630.32s)
can create a little app
[27:13] (1633.36s)
just for a meeting. It makes sense now.
[27:13] (1633.36s)
And for non-developers, like there is
[27:15] (1635.52s)
the side of it that's like you and Reuven
[27:17] (1637.44s)
and folks like that, that are deep
[27:19] (1639.36s)
into it,
[27:20] (1640.24s)
but anybody can now build, have that
[27:22] (1642.40s)
experience. And I feel like it's such a
[27:24] (1644.08s)
good inroad to then say, "Oh, how does
[27:27] (1647.04s)
this work? How does that work?" Then you
[27:28] (1648.56s)
leverage AI to teach you, and there's
[27:30] (1650.72s)
nothing stopping you from progressing to
[27:33] (1653.12s)
where a Reuven is, using AI as your teacher.
[27:35] (1655.84s)
That is the uncomfortable truth
[27:39] (1659.68s)
that literally the ability to access the
[27:42] (1662.00s)
information and create a self-learning
[27:43] (1663.76s)
program and expose yourself to an
[27:48] (1668.08s)
entirely new
[27:51] (1671.04s)
corpus of thought and knowledge is
[27:51] (1671.04s)
widely and freely available.
[27:54] (1674.08s)
this is a nice segue, last thing I want
[27:55] (1675.76s)
to hit on. You mentioned the folks at
[27:57] (1677.68s)
Lovable. They're a tiny team in
[28:00] (1680.08s)
comparison to what they've done. And
[28:01] (1681.84s)
that's a new pattern that's emerging.
[28:04] (1684.24s)
When you have AI, you just don't need as
[28:05] (1685.76s)
many humans. You've written this book
[28:07] (1687.60s)
called The AI Dividend: Preparing for a
[28:10] (1690.00s)
Post Labor Economy. I'm just curious,
[28:12] (1692.56s)
and I think folks can let us know where
[28:14] (1694.24s)
they can find it, but what's your thesis
[28:16] (1696.96s)
around this? I think this is
[28:18] (1698.64s)
one of the next interesting debates that
[28:20] (1700.96s)
we as a society will begin to have.
[28:24] (1704.64s)
And I think it can go a multitude of
[28:26] (1706.64s)
different ways
[28:28] (1708.16s)
entirely. Obviously, we're not there
[28:30] (1710.48s)
yet, but the writing is on the wall. We
[28:32] (1712.40s)
can feel the impact already. If you are
[28:35] (1715.28s)
a young computer science graduate, you
[28:37] (1717.68s)
know that we already are seeing the
[28:40] (1720.16s)
change in society and the impact as
[28:43] (1723.04s)
these tools become increasingly
[28:47] (1727.12s)
more capable. But it's the
[28:50] (1730.00s)
systems that are built with the tools,
[28:52] (1732.32s)
right? I suspect AGI won't be achieved
[28:55] (1735.52s)
by the base language model necessarily,
[28:58] (1738.72s)
but by some really clever engineering
[29:02] (1742.16s)
that is built on top of one of these.
[29:05] (1745.28s)
That's such a good point, right? Large
[29:06] (1746.40s)
language models, yeah. The impact on
[29:08] (1748.24s)
the workplace. Let's
[29:10] (1750.56s)
take the long term, five years, right?
[29:13] (1753.44s)
Which is almost impossible to conceive
[29:15] (1755.04s)
of, with that escalation and speed. I
[29:17] (1757.52s)
love how five years is now long-term,
[29:19] (1759.44s)
which is just hilarious. But it is.
[29:21] (1761.12s)
Like, investors used to want five-year
[29:22] (1762.80s)
financial forecasts. It's like,
[29:25] (1765.84s)
even a three-year forecast, what
[29:28] (1768.16s)
does that mean right now? Contracts that
[29:30] (1770.48s)
companies are entering into, big
[29:32] (1772.24s)
long-term ones, the implications are
[29:34] (1774.00s)
incredible. Anyways, so we know
[29:36] (1776.64s)
what's coming, and that is the
[29:39] (1779.12s)
ability for these systems to
[29:41] (1781.60s)
outperform the vast majority of humans
[29:44] (1784.08s)
in the vast majority of economically
[29:46] (1786.08s)
viable roles. That's how artificial
[29:49] (1789.04s)
general intelligence is currently
[29:50] (1790.40s)
defined by OpenAI, which most people
[29:52] (1792.48s)
don't realize.
[29:53] (1793.68s)
AGI is defined as being able to do your job,
[29:57] (1797.20s)
right? If it can do most jobs, then
[29:59] (1799.52s)
that's the
[30:01] (1801.52s)
definition of AGI as it stands with
[30:03] (1803.68s)
OpenAI right now. That's the course that
[30:05] (1805.84s)
we're on. Whether or not, if 3 to 5% of
[30:09] (1809.92s)
the working population gets displaced,
[30:11] (1811.76s)
that is a massive societal impact.
[30:15] (1815.36s)
And we haven't been having the
[30:17] (1817.76s)
conversations necessary to digest and
[30:21] (1821.76s)
really prepare for what that might look
[30:24] (1824.00s)
like, let alone the 30% mark, the 60%
[30:27] (1827.36s)
mark, etc., etc. There'll always be
[30:30] (1830.16s)
lots of opportunities, things
[30:32] (1832.16s)
for us to do as humans with each other,
[30:35] (1835.20s)
but the economically
[30:38] (1838.56s)
performing tasks for the majority of the
[30:41] (1841.04s)
economy will be capable of being
[30:42] (1842.96s)
executed by artificial intelligence
[30:46] (1846.00s)
soon. So now what? Here's the principle
[30:49] (1849.76s)
of the book and what I believe is best
[30:51] (1851.84s)
for society. And it's just my idea, and
[30:53] (1853.68s)
there are lots of others, and it's not
[30:56] (1856.64s)
novel. There's three things that went
[30:58] (1858.96s)
into the discovery and the validation of
[31:00] (1860.96s)
the theory of the current era of gen AI,
[31:05] (1865.76s)
and that is the science; the capital,
[31:08] (1868.80s)
right? The folks that backed it and
[31:11] (1871.12s)
paid for it and the data.
[31:13] (1873.68s)
And the science and the capital are
[31:15] (1875.76s)
recognized with equity, and the data is a
[31:19] (1879.44s)
complete mess. Right.
[31:21] (1881.60s)
Yeah. Pilfered. And now we're just
[31:23] (1883.28s)
talking about going back and saying, oh,
[31:24] (1884.64s)
let's just get rid of IP and forget
[31:26] (1886.96s)
copyright. And so the
[31:29] (1889.60s)
scientists get paid and the capitalists
[31:31] (1891.76s)
get paid, but the people that contributed
[31:33] (1893.76s)
the data
[31:34] (1894.64s)
to the achievement of the breakthrough
[31:36] (1896.48s)
don't. And I think that's a fatal flaw,
[31:39] (1899.36s)
and it's now made its
[31:44] (1904.64s)
way through the entirety of the large
[31:46] (1906.96s)
language model ecosystem, because you
[31:49] (1909.84s)
can't track where that stolen data
[31:51] (1911.60s)
went, but you know it's being utilized.
[31:53] (1913.92s)
That's the thing. It's very hard to
[31:56] (1916.56s)
trace from like an IP perspective. So
[31:59] (1919.04s)
it's like, even if you wanted to, good
[32:00] (1920.48s)
luck, you know what I mean? Because it's
[32:03] (1923.76s)
just so difficult to truly tell what
[32:03] (1923.76s)
it's leveraging in its response.
[32:05] (1925.36s)
That's right. Especially since
[32:06] (1926.56s)
constitutional AI will have layered
[32:08] (1928.40s)
protection systems to make sure that
[32:10] (1930.16s)
it's not responding in a way that does
[32:12] (1932.08s)
clearly display what it's aware of, what
[32:15] (1935.84s)
it's been trained on. So that leads me
[32:18] (1938.32s)
to the idea that data should be
[32:21] (1941.20s)
recognized. It should be recognized
[32:23] (1943.44s)
effectively as equity, and it should
[32:26] (1946.56s)
be paid a dividend,
[32:29] (1949.04s)
and that dividend should be available to
[32:30] (1950.88s)
everybody on earth,
[32:33] (1953.36s)
and it should come directly from the
[32:35] (1955.04s)
economic activity of artificial
[32:36] (1956.48s)
intelligence.
[32:38] (1958.16s)
And there is no reason that it ought to
[32:40] (1960.48s)
be restricted to something basic,
[32:44] (1964.08s)
because the economic implications
[32:46] (1966.16s)
for us will be
[32:50] (1970.80s)
profound.
[32:52] (1972.32s)
And the ability for all of us to benefit
[32:55] (1975.36s)
from that structure,
[33:00] (1980.32s)
specifically a dividend paid out by
[33:03] (1983.36s)
participation in equity: it takes it out
[33:05] (1985.68s)
of the hands of governance, because
[33:08] (1988.16s)
governments are led by parties, and
[33:09] (1989.76s)
parties are fickle, right? And this
[33:13] (1993.28s)
is much more foundational. I think that
[33:17] (1997.20s)
an option for how to address the
[33:20] (2000.24s)
increasing displacement of human labor
[33:22] (2002.64s)
would be for a universal participation
[33:25] (2005.04s)
in dividends of the economic activity of
[33:27] (2007.60s)
artificial intelligence. And that's the
[33:29] (2009.44s)
I like how you framed it...
[33:31] (2011.52s)
Yeah, I like how you framed it as
[33:32] (2012.72s)
dividends. It's almost like democratized
[33:34] (2014.80s)
capitalism.
[33:36] (2016.32s)
In a sense
[33:37] (2017.36s)
because all of our data has contributed.
[33:39] (2019.36s)
We can't say where, how much, when, but
[33:41] (2021.76s)
we know it's there.
[33:43] (2023.36s)
Yeah. No, it's definitely interesting.
[33:45] (2025.44s)
And where can folks find the book?
[33:48] (2028.32s)
It's up on Amazon.
[33:49] (2029.92s)
And then
[33:52] (2032.16s)
the Kindle version is titled Tokenized,
[33:56] (2036.64s)
and there's more coming.
[33:58] (2038.88s)
Human Race, that's your company on the
[34:01] (2041.60s)
agentic engineering side. That's right. I
[34:03] (2043.84s)
think folks can find that at
[34:05] (2045.76s)
humanrace.ai? Yeah. And we're largely
[34:08] (2048.72s)
experimental, but we also love joint
[34:10] (2050.72s)
ventures. So if there are projects or
[34:12] (2052.88s)
joint ventures where people have an idea and
[34:15] (2055.12s)
they want to get it built. That's the
[34:17] (2057.04s)
kind of stuff that we love to engage in.
[34:19] (2059.60s)
Yeah. We need to do something together
[34:21] (2061.04s)
there. What's the coolest thing you've
[34:22] (2062.56s)
built or been a part of or that you've
[34:24] (2064.64s)
seen, out of curiosity? Anything come to mind?
[34:28] (2068.00s)
The one we saw today by Reuven.
[34:29] (2069.28s)
Yeah, I know. Because you can just pull
[34:30] (2070.80s)
things out of the ether. That's really
[34:32] (2072.16s)
good for us, to stretch and exercise our
[34:34] (2074.96s)
elasticity of creativity. Yeah,
[34:37] (2077.04s)
For me, what I'm working on right now,
[34:38] (2078.56s)
which I really am enjoying, excuse me, and
[34:41] (2081.28s)
the first newsletter is about to come
[34:42] (2082.40s)
out in the next week or so, is
[34:44] (2084.00s)
the awareness layer. The awareness layer
[34:46] (2086.32s)
as a critical structural component of
[34:49] (2089.84s)
any organization or business,
[34:52] (2092.08s)
conceptually. And what I'm working on
[34:55] (2095.04s)
within that is that chain of deep
[34:57] (2097.68s)
research, and how to map out that
[35:01] (2101.44s)
creative space when you point a team of
[35:03] (2103.68s)
agents in a specific direction. But the
[35:06] (2106.16s)
awareness layer, I think, is one of
[35:08] (2108.64s)
these applications of agents and
[35:10] (2110.72s)
applications of artificial intelligence
[35:12] (2112.96s)
that really hasn't crystallized clearly
[35:15] (2115.92s)
yet, and I think that's a space that we
[35:18] (2118.80s)
can contribute to. Because, for example, a
[35:21] (2121.52s)
product owner: you take a startup, and in
[35:23] (2123.92s)
the old days a startup would come, they'd
[35:25] (2125.44s)
do a competitive analysis, right? And so,
[35:28] (2128.72s)
here's all the competitors and this is
[35:30] (2130.24s)
what the space looks like. They'd do it,
[35:33] (2133.68s)
maybe revisit it eight months later if
[35:35] (2135.84s)
something big happened. That's it. That's gone.
[35:38] (2138.32s)
Now, with agents, we can be aware
[35:41] (2141.52s)
of absolutely every
[35:44] (2144.96s)
competitor in our space, have all of
[35:47] (2147.20s)
their concepts mapped out. We know
[35:49] (2149.04s)
exactly what their feature set is. We
[35:51] (2151.28s)
know what their pricing is. We know what
[35:52] (2152.72s)
their sentiment in the
[35:54] (2154.08s)
marketplace is. And that's all
[35:55] (2155.68s)
dashboarded. And that is a
[35:57] (2157.76s)
tool that I can use as a product leader
[35:59] (2159.68s)
in order to ideate and A/B test into.
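As a sketch of what such an awareness layer might boil down to, here is a toy monitoring loop; the `fetch_page` and `extract_with_llm` helpers and the competitor URLs are hypothetical stand-ins:

```python
# Toy sketch of an awareness layer: agents periodically re-map every
# competitor's features, pricing, and sentiment into a dashboard file.
# fetch_page and extract_with_llm are hypothetical agent tools.
import json
import time

COMPETITORS = ["https://competitor-a.example", "https://competitor-b.example"]

def refresh_awareness(fetch_page, extract_with_llm, interval_days: int = 1) -> None:
    while True:
        snapshot = {}
        for url in COMPETITORS:
            page = fetch_page(url)  # e.g. a browser or web-search tool
            snapshot[url] = extract_with_llm(
                "From this page, summarize the feature set, pricing, and "
                "marketplace sentiment as JSON:\n" + page
            )
        with open("awareness_dashboard.json", "w") as f:
            json.dump(snapshot, f, indent=2)  # feeds the product dashboard
        time.sleep(interval_days * 86400)  # continuous, not an annual study
```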
[36:02] (2162.88s)
So that's an awareness layer in one
[36:04] (2164.96s)
specific really obvious case, but it
[36:07] (2167.60s)
doesn't exist for the vast majority of
[36:10] (2170.00s)
companies yet. There's an enormous
[36:12] (2172.16s)
amount of development work and
[36:13] (2173.60s)
opportunity that will come just from
[36:16] (2176.00s)
organizations opening up their eyes
[36:19] (2179.20s)
and expanding their field of view in the
[36:21] (2181.60s)
marketplace.
[36:23] (2183.60s)
It's funny on the competitive research.
[36:25] (2185.60s)
I was listening to a podcast. I think
[36:27] (2187.44s)
it was Greg Isenberg's, and he had
[36:29] (2189.12s)
the CEO from Lindy, which is one of
[36:31] (2191.44s)
these AI assistant platforms in that
[36:35] (2195.36s)
space, right? And he was getting
[36:38] (2198.00s)
overwhelmed on the competitive side,
[36:39] (2199.68s)
because he had an agent that was doing
[36:41] (2201.28s)
this research for him: here's
[36:42] (2202.88s)
another company. What he did is he set
[36:45] (2205.12s)
it up to say, "Okay, all those companies
[36:47] (2207.36s)
you researched, let me know in six
[36:50] (2210.64s)
months or whatever time frame where they
[36:53] (2213.12s)
are now."
[36:53] (2213.60s)
Right?
[36:54] (2214.32s)
And you see that 90% of them just
[36:56] (2216.00s)
fizzled out, and it allowed him to calm
[36:58] (2218.08s)
his nerves. How many people are actually
[37:00] (2220.88s)
progressing past just like a splashy
[37:03] (2223.36s)
launch? It's kind of an interesting
[37:05] (2225.12s)
angle. Matt, it's beautiful, because you
[37:06] (2226.80s)
know where this leads. Two years ago,
[37:08] (2228.96s)
or a year and a half ago, Reuven
[37:10] (2230.56s)
said this, actually, at the
[37:12] (2232.64s)
beginning, about a year ago: artificial
[37:14] (2234.24s)
intelligence in the longer run is going
[37:16] (2236.00s)
to be deflationary, obviously, because of what it
[37:18] (2238.56s)
enables things to be made for. It's pennies
[37:20] (2240.64s)
per millions of tokens
[37:22] (2242.88s)
for this intelligence. This is a
[37:24] (2244.40s)
really good case in point, because when you
[37:26] (2246.40s)
have the ability to have near-complete
[37:28] (2248.96s)
market visualization
[37:31] (2251.12s)
in a sector,
[37:32] (2252.64s)
then ultimately that's going to drive
[37:34] (2254.48s)
creativity and it's going to drive
[37:36] (2256.64s)
competition. And one of
[37:38] (2258.80s)
those buckets is price, and you have
[37:41] (2261.84s)
market triage, and it's more difficult
[37:44] (2264.88s)
for people to say, I'm going to sell this
[37:46] (2266.64s)
for $800 when I can buy it for 50, and I
[37:49] (2269.36s)
just make my living on ignorance,
[37:51] (2271.92s)
right?
[37:52] (2272.64s)
My competitive advantage is you don't
[37:55] (2275.04s)
know what I paid for it.
[37:56] (2276.80s)
That's coming to an end.
[37:59] (2279.28s)
Another fascinating area where there's
[38:01] (2281.20s)
going to be big implications.
[38:03] (2283.20s)
Yeah, so many areas. Let's give the
[38:05] (2285.44s)
Agentics Foundation plug for Reuven's
[38:08] (2288.72s)
community. Free for anybody to join, but
[38:11] (2291.60s)
give the breakdown. This turning into an
[38:13] (2293.52s)
actual not-for-profit foundation is
[38:16] (2296.56s)
relatively recent, in the past month or
[38:18] (2298.88s)
so, but how do people get involved?
[38:20] (2300.80s)
Where do they go? By the time this comes
[38:22] (2302.80s)
out, the Discord, it's out there now,
[38:24] (2304.40s)
but I think it'll be humming along. But
[38:26] (2306.16s)
give us the lowdown in terms of how
[38:27] (2307.92s)
people get engaged.
[38:28] (2308.80s)
Sure. The website is agentics.org,
[38:31] (2311.52s)
and on that you can find the
[38:34] (2314.00s)
Discord, you can find the WhatsApp. The
[38:35] (2315.68s)
WhatsApp you may have to try to jump into
[38:37] (2317.52s)
a couple times, because again it's capped
[38:39] (2319.60s)
out, but on there you can find everything
[38:42] (2322.16s)
about the organization. The website is
[38:45] (2325.68s)
in evolution. So there's some tools on
[38:48] (2328.48s)
there. Ultimately, with the Agentics Foundation,
[38:51] (2331.52s)
there's so much talent and experience
[38:54] (2334.88s)
in the 1,200, let alone the
[38:58] (2338.72s)
150,000, and so much active, engaged
[39:01] (2341.04s)
conversation taking place
[39:02] (2342.40s)
every single day in that WhatsApp group.
[39:05] (2345.68s)
Really, it's about giving a forum for the
[39:07] (2347.76s)
membership base to be able to put
[39:09] (2349.60s)
forward tools and ideas and best
[39:11] (2351.20s)
practices for each other. That is the
[39:13] (2353.68s)
foundation. And then the working local
[39:16] (2356.08s)
chapters are about making sure that
[39:17] (2357.92s)
people have networks in their local
[39:19] (2359.92s)
community to be able to explore the
[39:22] (2362.08s)
tools and help communities around the
[39:24] (2364.56s)
world put them into play.
[39:26] (2366.32s)
Yeah. At the Agentics Foundation, I'm
[39:28] (2368.32s)
temporarily leading the AI safety and
[39:31] (2371.28s)
responsible deployment committee, which
[39:33] (2373.36s)
is there to ensure that with these
[39:35] (2375.28s)
tools comes a framework and an
[39:36] (2376.96s)
understanding and another set of tools
[39:39] (2379.04s)
that help us do that responsibly and
[39:41] (2381.84s)
proactively, with positive social
[39:44] (2384.00s)
impact. And then there's a committee on
[39:46] (2386.72s)
research and of course member services,
[39:49] (2389.60s)
the operations side, the chapter
[39:51] (2391.36s)
management. So there's a little over 10
[39:53] (2393.84s)
different working groups or teams or
[39:55] (2395.60s)
committees within the organization and
[39:57] (2397.84s)
it's open. We welcome everybody to
[39:59] (2399.76s)
participate and hop in. They can
[40:02] (2402.00s)
sign up for that on the website. And
[40:03] (2403.84s)
then also chapters: if you're
[40:06] (2406.32s)
in Milan, or there's a city community
[40:09] (2409.28s)
where you would like to host live events
[40:11] (2411.76s)
and have the support of the Agentics
[40:14] (2414.08s)
Foundation there on content promotion
[40:16] (2416.24s)
and actual materials to work with,
[40:19] (2419.20s)
then you can sign up as an
[40:20] (2420.72s)
ambassador on agentics.org as well.
[40:23] (2423.92s)
So really excited.
[40:24] (2424.80s)
I'd say for everybody:
[40:27] (2427.28s)
come to a Friday hacker space session.
[40:30] (2430.40s)
Yes. And you'll understand why we're
[40:32] (2432.72s)
high on this community. It's
[40:35] (2435.04s)
the best community out there. There's a
[40:37] (2437.04s)
lot of AI communities. It's
[40:38] (2438.56s)
overwhelming. This is the one that I
[40:40] (2440.72s)
proactively recommend to people. But
[40:42] (2442.88s)
Robert, thanks for being on, talking some
[40:44] (2444.72s)
AI. Appreciate you being on.
[40:46] (2446.48s)
Yeah, Matt. Wonderful. It's always great
[40:48] (2448.32s)
fun. Good to see you again. And I look
[40:49] (2449.76s)
forward to hearing more. You've got a bunch
[40:51] (2451.12s)
of great projects going yourself, so I look
[40:53] (2453.28s)
forward to hearing about them.
[40:55] (2455.36s)
Awesome. Thanks for listening to the
[40:57] (2457.36s)
Talking AI podcast. If you enjoyed the
[40:59] (2459.28s)
show, give us a follow or subscribe on
[41:01] (2461.12s)
your favorite podcast platform. And
[41:02] (2462.96s)
don't forget to leave us a review. We
[41:04] (2464.64s)
love those. For more info on Talking AI,
[41:07] (2467.36s)
visit talkingaipodcast.com.
[41:16] (2476.56s)
[Music]
[41:16] (2476.64s)
[Applause]