[00:00] (0.16s)
Good morning. It's great to be back in London. I was supposed to be here five years ago, but we finally made it happen.
[00:07] (7.44s)
These days, when I look around, AI is all across the headlines. I collected a few examples when I searched for what's going on, and some of them, honestly, kind of triggered me.
[00:18] (18.40s)
So here's one from Microsoft's CEO, saying that 30% of all code is written by AI. At that point I was talking with people: what does that even mean? Is that big or small? Anyway, it's a CEO clearly talking up their own product; Microsoft is interested in selling it.
[00:36] (36.72s)
Then, a few months ago, we had Anthropic's CEO saying all code will be generated by AI in a year. He also said things like: in three to six months, 90% of all code will be written by AI. Again, an AI company founder, very much interested in this. And then we had Jeff Dean, an engineer, actually a chief scientist at Google, saying that AI could be at the level of a junior coder in a year. Again, all these headlines are from executives at large companies.
[01:05] (65.68s)
But on the other hand, when I look at the ground reality, there are some things that don't really match these really positive, really enthusiastic predictions. For example, this is from January: a software engineer at a startup saying they use this tool called Devin, the autonomous AI agent that costs $500 a month, and it added a bug that cost them an extra $700 because of the 6 million events it triggered. We know this, by the way: bugs will make it through. But it's a good example that, no, it's not that great.
[01:41] (101.52s)
And then there was this Reddit thread that went absolutely viral after Microsoft's Build conference. Some of you are laughing; you've read it. At Build, Microsoft showed how they let the Copilot agents loose on the .NET codebase. Microsoft engineers were trying really hard to help the agent land a fix in a production-grade, really complex codebase, and it failed spectacularly. For example, the agent would add tests that break, and engineers would prompt it again and again to fix them. There was a lot of laughing going around. Now, on one hand, I do appreciate that Microsoft was really transparent about this; no other startup has shown their agents fail like this. But again, we see that this thing is still really limited.
[02:25] (145.92s)
So there's a big disconnect: executives are saying one thing, and the ground reality shows another. Then I looked back at my own thinking, writing, and research, and in the last month or two, the last several deep dives on The Pragmatic Engineer have all been related to AI: how Cursor was built, what vibe coding means for us professional software engineers, how these Microsoft tools work or don't work, how ChatGPT Images was built and scaled.
[02:54] (174.32s)
So for this event, I wanted to pause a little and get a temperature check on what is really happening. There are extremes here: CEOs on one end, and on the other end claims that it doesn't work at all. What is really happening? I happen to talk to a lot of software engineers; that's the perk of being one and writing about them. I try to stay close to the ground. So I simply asked them: how are you using AI tools at your company? And I asked across different types of companies and categories.
[03:29] (209.76s)
I asked a couple of AI dev tool startups, who are selling this stuff, so you would expect them to be all in. I asked some big tech companies; some AI startups that are not selling AI tools, but are building with AI; and some independent software engineers.
[03:47] (227.12s)
So, let's start with the AI dev tool startups. First, I talked with the team at Anthropic, just over the last week, and asked them: what are you seeing? Let's keep in mind they will necessarily be biased. But this is what they told me: when they gave Claude Code access to their engineers, they all started using it every day, which is pretty surprising. This was months and months ago; Claude Code was released publicly about a month ago, but this was internal. They said they saw a really big pull immediately. Claude Code is a command line interface; it's not an IDE, it works in a terminal. They also told me that 90% of Claude Code, the product, is written with Claude Code, which seems obscenely high. You might think this is kind of an advertisement, but I did talk with engineers, and engineers aren't exactly the ones who make things up.
[04:33] (273.20s)
Now, Anthropic also told me something interesting. They launched Claude Code publicly less than a month ago, I think May 22nd, so about three weeks ago, and they said that on day one they saw a 40% increase in usage. Since the launch, in less than a month, there's been a 160% increase. This just means they're seeing a pull for this product, for whatever reason.
[04:51] (291.20s)
One more thing: Anthropic actually started this thing called MCP, the Model Context Protocol. I won't go into all the details; I have a deep dive on it, and you can find a lot of articles. The idea is that you have MCP clients, which can be your IDE or agents, and a protocol that lets you connect things like your database, GitHub, Google Drive, Puppeteer, or whatever you want. I actually used it to connect to my own database, one of my APIs, and I can now chat with it. I can ask, for example, how many people have claimed this promo code that I have an API for, and it generates the SQL. It's pretty neat; a fun, interesting way of doing it.
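To make that concrete, here is a minimal sketch of what an MCP server like that can look like, assuming Anthropic's official Python SDK (the mcp package); the promo-code tool and the SQLite schema are hypothetical stand-ins for my actual API:

```python
# Minimal MCP server sketch using the official Python SDK's FastMCP helper.
# The tool below is a hypothetical stand-in for a promo-code API: an MCP
# client (an IDE, an agent) discovers this tool, and the LLM decides when
# to call it based on the docstring.
import sqlite3

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("promo-stats")

@mcp.tool()
def count_promo_claims(promo_code: str) -> int:
    """Return how many people have claimed the given promo code."""
    conn = sqlite3.connect("promo.db")  # hypothetical local database
    try:
        row = conn.execute(
            "SELECT COUNT(*) FROM claims WHERE code = ?", (promo_code,)
        ).fetchone()
        return row[0]
    finally:
        conn.close()

if __name__ == "__main__":
    mcp.run()  # speaks the protocol over stdio by default
```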
[05:27] (327.60s)
Anthropic also told me that they open sourced this protocol in November. In December through February, a few smaller companies and scaleups adopted it. In March and April, the big guns, OpenAI, Google, and Microsoft, all added support. Today, they estimate there are thousands of MCP servers out there. We'll see a little later why this is relevant.
[05:45] (345.68s)
Now, I also talked with Windsurf, another AI IDE company. I asked their team what they're seeing, and they said 95% of their code is written using Windsurf, either via their agent or their passive tab completion. This sounds awfully high, and I'm a bit surprised, but again, this is what they report, and don't forget these companies will be dogfooding their own products.
[06:09] (369.52s)
Finally, I reached out to Cursor, and they told me it's about 40 or 50%. They didn't have an exact number, but that's roughly what it feels like: a bunch of it works, a bunch of it doesn't. Again, these are companies that want to get to 100%, because that's what they're selling, so it's not too surprising that the numbers are as high as they are. I do appreciate the honesty from Cursor, by the way. Now, on to big tech.
[06:35] (395.44s)
Here I talked with people anonymously. At Google, I talked with about five different engineers. The first thing to know about Google is that everything is custom there. They don't use Kubernetes; they open sourced Kubernetes, but internally they have something called Borg. They don't use GitHub; they have their own repository. They have their own code review tool, Critique, and so on. Their IDE is called Cider, an acronym for Cloud Integrated Development Environment and Repository. It used to be a web tool; today it's a VS Code fork, and it is integrated across the whole Google stack. All their internal services are integrated, and it works really nicely inside Google.
[07:22] (442.48s)
Now, engineers told me that AI is just everywhere. LLMs have been integrated into Cider, the VS Code fork they use, and into the web version, called Cider V. They have autocomplete and a chat-based IDE experience, and they said it works pretty well; maybe not as good as, say, Cursor, but pretty good. Critique, their code review tool, gives AI feedback, and they said it's sensible, it works. Code Search, which is apparently amazing inside Google, also has LLM support: you can ask about things and it surfaces the relevant parts of the codebase. And I've heard there's been a lot of progress.
[07:55] (475.28s)
A former Googler who left about six months ago said that a year ago it was just weird how little of this was actually used inside Google, but now it is; things have evolved pretty quickly. And a current software engineer told me they think Google is internally taking a deliberately slow, cautious approach: they want to get things right so that engineers stick with these tools and don't start to mistrust them.
[08:21] (501.04s)
Google also has a bunch of other tools; again, this is coming from engineers. NotebookLM, a product we can all use as well: you just upload docs and chat with them. An LLM prompt playground, similar to OpenAI's playground, which Google apparently built internally before OpenAI released theirs. They have the Moma search engine, an internal knowledge base using LLMs, which engineers use all the time. And a lot more is being built.
[08:43] (523.60s)
Now, this is a quote from a Googler who will definitely not be on the record with their name: "There's org-specific GenAI tooling happening everywhere, because that's what leadership likes to see. And honestly, that's how you get more funding these days." If you work in a large organization, you can see this is true, but it's probably also deliberate: this is how tools like NotebookLM were built inside Google, with a team just funding it and building it. So, that's Google.
[09:08] (548.88s)
that really got my attention, this is
[09:10] (550.80s)
from a former SR who is really good
[09:13] (553.20s)
friends with a bunch of Google S people.
[09:15] (555.60s)
They said, "What I'm hearing from my SR
[09:17] (557.20s)
friends at Google is they are prepared
[09:18] (558.88s)
for 10 times the lines of code making
[09:21] (561.20s)
their way into production. So they're
[09:23] (563.12s)
beefing up their infra, their deployment
[09:24] (564.96s)
pipelines, their code review tooling, uh
[09:27] (567.52s)
feature flagging, all of these things.
[09:29] (569.84s)
This was really, really interesting.
[09:31] (571.76s)
What is Google seeing that we might not
[09:33] (573.68s)
be aware of?" Amazon. I also talked with
[09:36] (576.72s)
Amazon. I also talked with engineers here. Amazon is not really known for AI, but apparently internally almost all devs are using a tool called Amazon Q Developer Pro. It's really good for AWS-related coding. In fact, the Amazon devs I talked to said they're surprised that people outside Amazon don't really know about it. Apparently, if you're doing anything with AWS, it's really good with the context, and they just like it. Six months ago, when I talked with people, they were not that enthusiastic, and a year ago they said Q didn't really work that well; but now it does.
[10:09] (609.12s)
Engineers also told me they use Claude for everything. One engineer was telling me how, when they have to write a PR FAQ, Amazon's six-pager or press-release-style document, they use it a lot. This engineer also did a lot of perf-season writing with it, and lots of other writing tasks. Amazon has a relationship with Anthropic, so they have an internal Claude.
[10:32] (632.16s)
And with Amazon, the most interesting thing is MCP servers; we mentioned how Anthropic came up with MCP. Let me take a little detour on how Amazon became this massive API company, starting back in 2002, based on how Steve Yegge, a former Amazon engineer and well-known person in the industry, summarized what happened there. Jeff Bezos issued a big mandate that went along these lines. One: all teams will expose their data and functionality through service interfaces, aka APIs. Two: teams must communicate with each other through these interfaces. Three: no other form of interprocess communication will be allowed. And I think number four was something like: if you don't do this, you're fired. Amazon has done this, and this is partially how AWS was born as well: all the services they use internally, they can expose externally, because they have all these APIs. They've been doing this for more than 20 years. And if you have a service with an API, it is trivial to bolt on an MCP server so your IDE or your AI agents can use it.
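As a hedged illustration of why that's trivial: if a service already has a clean HTTP API, the MCP layer is a thin adapter over it. The service URL, route, and parameters below are hypothetical, not Amazon's actual internals:

```python
# Sketch of bolting an MCP server onto an existing REST API. Everything
# service-specific here (URL, route, parameters) is a made-up placeholder;
# the point is that the adapter adds almost no logic of its own.
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ticketing")

BASE_URL = "https://tickets.internal.example.com"  # hypothetical internal service

@mcp.tool()
def open_tickets(team: str) -> list:
    """List a team's open tickets via the service's existing REST API."""
    resp = httpx.get(
        f"{BASE_URL}/api/tickets",
        params={"team": team, "status": "open"},
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    mcp.run()
```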
[11:37] (697.60s)
So what is Amazon doing with this? I had never heard this before I talked with this person, and you're probably the first to hear it: most internal tools and websites inside Amazon already have MCP support. Automation is happening everywhere. Devs told me they're automating the ticketing system, emails, internal systems, and they are loving it; some of them are automating a huge part of their workflow. Again, no one's talking about it, but it's happening. So I wonder: Amazon has been API-first since 2002, and they might be MCP-first starting in 2025.
[12:10] (730.88s)
With big tech out of the way, I wanted to talk to some smaller startups that have no real stake in selling AI dev tools themselves, though they do feel the pull of AI. I talked with a startup called incident.io. They didn't start as an AI startup; they started as an on-call platform, but with AI it's kind of an obvious place to integrate, with incident resolution and all that, so they're now turning pretty much AI-first. I talked with Lawrence Jones, who will later be doing a talk at LDX3, and he said their team is massively using AI to accelerate themselves, and they share tips and tricks in Slack. He was generous enough to share a few of these with me.
[12:49] (769.36s)
One of them is an engineer saying: "Hey, I just used another MCP server for the first time, and it works really well for well-defined tickets." This engineer realized that if you have a really well-defined ticket, you can pass it to an agent and it can come up with a first pass, and sometimes it's pretty good. They just shared it in the chat: this works for me, why don't you try it, see what you think? And there's a lot of chatter around these things. A second example is another engineer saying their new favorite trick is prompting to ask for options. For example: can you give me options for writing code that does this thing I need to do? Or: I'm seeing this error, can you give me possible explanations? And so on.
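As a rough sketch of that "ask for options" pattern (my wording, not the team's exact prompt), you can do the same thing directly against a model API; here with Anthropic's Python SDK and an assumed model name:

```python
# Hedged sketch of the "ask for options" prompt pattern using Anthropic's
# Python SDK. The model ID and the prompt wording are assumptions for
# illustration; the pattern is what matters: ask for options and trade-offs
# first, pick one, then ask for the implementation.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model ID; use whatever you have
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": (
                "Give me three options for adding retry logic around a flaky "
                "HTTP call, with the trade-offs of each. Don't write the final "
                "code yet; I want to pick an approach first."
            ),
        }
    ],
)
print(response.content[0].text)
```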
[13:29] (809.92s)
What I really love about this is that inside the company, they're experimenting. They're saying: this works for me, do you think it works for you? And you can see the reactions, the discussions, and so on. There are a lot more examples, but they're really coming around to it. And Lawrence closed with this: the biggest change has been Claude Code, again released just three weeks ago. I checked just yesterday, on Sunday, and their entire team are regular users. Again, there is no affiliation with any of the vendors here; it's just what these startups are starting to use.
[13:59] (839.60s)
Now, I also talked with a biotech AI startup that asked not to be named; I'll tell you why in a second. They do really cool stuff: they use AI and ML models to design proteins. They were founded three years ago, have a team of about 50 to 100 people, and run a lot of automated numerical pipelines built on Kubernetes, using Python and so on. An engineer told me this: "We've experimented with several LLMs, but none of it has really stuck. It's still faster for us to write correct code than to review the LLM code and fix all its problems." And this is even using the latest models, like Sonnet 3.7 or maybe even Sonnet 4. "Given the hype around LLMs, I think we might just be in a weird niche." This is why the engineer didn't want to give their name: they don't want to be the AI skeptic. But it's true: there are really fast-moving startups that are experimenting, and it's just not working for them. They try it, it doesn't work, they move on. They tried AI code review tools too, and they use them on and off, but it's just not a thing for them. Don't forget, though, that they're building novel software, something that has never been built before. Just keep that in mind.
[15:07] (907.44s)
Having gone through the startups, I wanted to turn to a few independent software engineers: people who were accomplished before AI, who have done a bunch of cool stuff and love coding; they love the craft. First I turned to Armin Ronacher, the creator of the Flask framework in Python. He was a founding engineer at Sentry, and he just recently left Sentry, maybe to do a startup. He's been coding for 17 years, a really good coder, and he got really excited about AI development recently.
[15:38] (938.72s)
He published an article a few weeks ago saying AI changes everything. He wrote, and I'm quoting the highlighted text: "If you would have told me even six months ago that I'd prefer being an engineering lead to a virtual programmer intern (aka an agent), I would not have believed it." So I asked him what changed: you love coding, why are you into this whole agent stuff? He told me a few things. First, Claude Code got really good; I don't know if you're seeing a trend here, by the way, and there's zero affiliation here, this is not an advert for anything. He also said that by using LLMs extensively, he got through the hurdle of not accepting them. And most importantly, he said the faults of the model, like hallucination, are avoided because the tool just runs the code itself, sees the results, and gets feedback.
[16:26] (986.40s)
I thought, okay, that's interesting; let me talk with Peter Steinberger. He is the creator of PSPDFKit. He's an iOS junkie who knows iOS internals inside out and has strong opinions about API changes. PSPDFKit was, and I think still is, the most popular PDF-related iOS toolkit, and he sold his startup a year and a half or so ago. He'd been tinkering on the side without really doing much, and then he published an article that caught my eye. In it he said: "The spark returns. I haven't been this excited, astounded and amazed by technology in a very long time." So I reached out and said: "Hey Pete, what has changed?" He told me he feels there's some inflection point where it just works. This is an iOS junkie who loves Objective-C and Swift, and he told me that languages and frameworks now matter less, because it's so easy to switch: he's coding in TypeScript and other languages I don't think he would have touched before these tools. And he's saying that a capable engineer can just have a lot more output.
[17:26] (1046.40s)
He then posted this on social media, and actually sent it over to me: all his tech friends are the same, they often have trouble going to sleep, because it's such mind-blowing technology. It's kind of ironic, because we exchanged messages at 5:00 a.m., when I was awake for some other reason and he was awake coding. And another engineer said he's seeing a lot of burnt-out developers come back into the field to create stuff.
[17:49] (1069.92s)
Now, a shout out to Birgitta Böckeler, who is also doing a talk here at LDX3. She's a distinguished engineer at Thoughtworks, and she's been very thoughtful about exploring and understanding what works and what doesn't in AI. She's methodical; I love her articles, she writes a bunch of them, and you should check them out. I asked for her take, and she said she feels that LLMs are a tool we can use at any abstraction level, and that is the difference: we can now generate code at a low level like assembly, in high-level languages, or even in human language if we want to. She thinks this is a lateral move: it's not just a new layer on top of the stack, it cuts across the stack, and that is what makes LLMs really exciting. Again, this is someone who has been thinking about LLMs for quite a while, and was a very accomplished engineer before LLMs.
[18:39] (1119.84s)
Finally, I turned to Simon Willison. He is a co-creator of Django, an independent software engineer, and he's been blogging on the side for some 23 years. Andrej Karpathy, a founding member of OpenAI, posted just a few days ago that he loves Simon's blog and reads almost everything on it. Simon's blog is known as the LLM blog, because he's been tinkering with these models since ChatGPT came out, documenting what works and what doesn't; really good writing. So I asked Simon: how would you summarize the state of GenAI tooling? And Simon is as independent as can be: he has an open source project, and he makes enough from that and from blog donations; that's his income stream. This is what he told me: coding agents actually work now. You can run them in a loop, with compilers and all that, and the model improvements in the last six months have been some sort of tipping point; now it's becoming useful.
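Here is a minimal sketch of the loop Simon is describing: the model writes code, the code actually runs, and the real error output goes back to the model. The model ID, prompts, and helper names are assumptions for illustration; real agents add sandboxing, tests, and tool calls on top of this:

```python
# Minimal coding-agent loop sketch: generate code, run it, feed errors back.
# The feedback step is why hallucinated APIs matter less in agent mode: they
# surface as concrete errors that the model then fixes.
import re
import subprocess

import anthropic

client = anthropic.Anthropic()

def ask_model(messages: list) -> str:
    resp = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model ID
        max_tokens=2048,
        messages=messages,
    )
    return resp.content[0].text

def extract_code(text: str) -> str:
    # Pull the first fenced code block, if any; otherwise use the raw text.
    fence = "`" * 3
    match = re.search(fence + r"(?:python)?\n(.*?)" + fence, text, re.DOTALL)
    return match.group(1) if match else text

def agent_loop(task: str, max_turns: int = 5):
    messages = [{"role": "user",
                 "content": f"Write a Python script that does this: {task}"}]
    for _ in range(max_turns):
        reply = ask_model(messages)
        code = extract_code(reply)
        with open("attempt.py", "w") as f:
            f.write(code)
        result = subprocess.run(
            ["python", "attempt.py"], capture_output=True, text=True, timeout=60
        )
        if result.returncode == 0:
            return code  # ran cleanly; a real agent would also run tests
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user",
                         "content": f"Running it failed with:\n{result.stderr}\nPlease fix it."})
    return None
```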
[19:27] (1167.84s)
So, to sum up, this is roughly what I've heard. AI dev tool startups: heavy usage, not too surprising. Big tech: very heavy investment and growing usage. AI startups: hit or miss; some are using it, some are not. Independent software engineers: a lot more enthusiastic than before. This is interesting.
[19:44] (1184.80s)
But there are still a bunch of questions left. As I was looking through all this, it doesn't feel like a slam dunk of "the future is here"; not at all. I'll give you four of my questions. Number one: why is it that founders and CEOs are far more excited than engineers? Some of the excited engineers we've seen, like Armin and Peter, will probably be founders themselves. And here is an example from Zach Lloyd, the founder of Warp, an AI terminal, so kind of an AI dev tool if you will. He asks whether anyone else is having a hard time because their most senior engineers are not really using AI, while the most enthusiastic adopters are the founder and the PM. And this is from an AI tooling company. It's an interesting question; I see this all the time. And if you remember the headlines, the CEOs of public companies are super enthusiastic about it. Why is that? I don't know.
[20:35] (1235.28s)
Number two: how mainstream or niche is AI usage across devs? Hands up if you're using any AI tools for coding or software engineering at least once a week. I'm seeing roughly 60 to 70% of the room go up. Here is data from DX, who recently ran a survey of 38,000 devs. They found that at the median organization, about 50% of devs, five out of ten, use these tools on a weekly basis; not daily, weekly. The very top companies are at six out of ten. On one hand, this is amazing, given that the technology didn't exist three years ago, but it's not really the story I've just told you, right? Most of the stories you've heard are from above the median, except maybe that unnamed AI biotech startup. So keep that in mind; and maybe there's a selection bias, maybe the ones who are using it are more willing to talk about it.
[21:31] (1291.36s)
Number three: how much time do we save? Pete told me he thinks his output is 10 to 20x more. But DX's survey found that, on a weekly basis, devs estimate saving maybe three to five hours; call it four. Four hours saved is pretty good, but that's not 10x, even on a 40-hour work week. And what do we do with that time? Do we produce anything more? I don't know.
[22:00] (1320.16s)
Finally, number four: why does it work so much better for individuals than for teams? We see this all the time, and Laura Tacho from DX told me the same thing: these tools are great for individual developers, but not yet good at the org level. So, in summary: I'm not really surprised to see CEOs and founders, especially at AI-related companies, being so enthusiastic; their financials are on the line. Big tech investing in AI makes sense, and startups experimenting with AI tools also makes sense. But what makes me pay the most attention is the experienced engineers who have been around for a long time: they are finding a lot more success, and they want to use these tools more.
[22:38] (1358.08s)
My sense is that we are seeing some sort of step change in how we build software. Looking ahead, I reached out to Martin Fowler and asked his take, on a piece that he reviewed, and this is what he said, in his words: he thinks the appearance of LLMs will change software development to a similar degree as the move from assembler to high-level programming languages. After high-level languages arrived, newer languages didn't really add another step change in productivity. But he thinks LLMs will give us the same kind of productivity boost that going from assembly to high-level languages did, except that these things are non-deterministic, for the first time in computing, and that is a big difference.
[23:24] (1404.64s)
And so I turned to a veteran software engineer who is still actively coding, and has been doing it for 52 years: Kent Beck. We had a long conversation on the podcast, and Kent told me something I had a hard time believing. He said: "I'm having more fun programming than I ever had in 52 years." My first question was: Kent, is someone telling you to say this? He said no. He's just doing his side projects, and he's having more fun because he had gotten a bit tired of learning new technologies again and again, and migrating to new frameworks. He said LLMs help him be really ambitious: he's now building a Smalltalk server he's always wanted to build, one that runs a bunch of parallel work and virtual computing, plus a Smalltalk language server to integrate into all these tools.
[24:16] (1456.40s)
I asked Kent how he compares LLMs to all the technology changes in his lifetime, and he said: "I've seen something like this before; in fact, a few things." One was microprocessors: going from mainframes to smaller computers was a huge shift, and developers apparently had a hard time wrapping their heads around it. Number two was the internet, which I think we can all agree changed the economy. And then smartphones, which changed things like live location and how much time people spend online. He's comparing it to those. And he closed with this: "The whole landscape of what is cheap and what is expensive has just shifted. Things that we didn't do because we assumed they were going to be expensive or hard just got ridiculously cheap. So we need to be trying things."
[24:54] (1494.72s)
So my takeaway is: things are changing, and we need to experiment more. I think we need to do more of what the startups are doing: try out what works and what doesn't, and understand what is cheap and what is expensive. I'm leaving you with this message. Thank you very much, and see you around the conference.