
Who's Coding Now? - AI and the Future of Software Development

a16z • 43:26 minutes • YouTube


🤖 AI-Generated Summary:

The Future of AI in Coding: Revolutionizing Software Development and Beyond

Artificial intelligence (AI) is rapidly transforming software development, evolving from a mere tool into a potentially revolutionary programming abstraction. In a recent discussion, industry experts explored several dimensions of AI-assisted coding, with profound implications for developers, enterprises, and the future of programming itself.


AI: More Than Just a Tool — A New Programming Paradigm?

Traditionally, programming languages and compiler designs have provided structured ways to instruct machines. But with the advent of Large Language Models (LLMs) and AI coding assistants, there’s a growing belief that AI could evolve into a higher-level language abstraction. Instead of writing code line-by-line, developers might soon express software specifications in natural language, which AI then translates into executable code.

This shift could redefine how compilers work, enabling humans to describe tasks in concise, sufficiently precise natural language that AI systems compile directly into executable software. While this vision is still emerging, it points to a potential paradigm shift in how software is created.


AI Coding: The Second Largest AI Market

The AI coding market is currently one of the largest sectors in AI, second only to consumer chatbots. The distinction is nuanced, however, since tools like ChatGPT serve both companionship and coding needs. Coding AI is unique because it builds on an existing user behavior: developers have long turned to resources like Stack Overflow to solve problems. AI-assisted coding enhances this with instant, context-aware help, effectively acting as a collaborative coding partner.


Why AI Coding is Thriving

Several factors contribute to the explosive growth and adoption of AI coding tools:

  • Existing Developer Behavior: Developers are accustomed to searching for solutions online. AI replaces or supplements this with faster, more integrated assistance.

  • Verifiable Outcomes: Coding has clear input-output relationships, making it easier to verify AI-generated solutions compared to subjective tasks.

  • Market Size: With approximately 30 million developers worldwide and an average annual value of $100,000 created per developer, the global developer productivity market is enormous—roughly $3 trillion annually (see the illustrative arithmetic after this list).

  • Productivity Gains: Early tools like GitHub Copilot have already shown productivity improvements of roughly 15% in large financial institutions' estimates, with the potential to double developer output over time, unlocking trillions in value.

  • Developer Adoption: Developers are natural early adopters due to their inclination to tinker, automate, and optimize.
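
A rough illustration of the arithmetic behind the market-size estimate above. The developer count, per-developer value, and uplift percentages are the discussion's ballpark figures, not measured data:

```python
# Back-of-the-envelope market sizing, using the panel's ballpark figures.
developers_worldwide = 30_000_000          # ~30 million developers
value_per_developer_per_year = 100_000     # ~$100k of value created annually

baseline_market = developers_worldwide * value_per_developer_per_year
print(f"Baseline developer value: ${baseline_market / 1e12:.1f}T per year")   # ~$3.0T

copilot_uplift = 0.15   # ~15% productivity gain reported by large financial institutions
doubled_output = 1.00   # the speculative "double productivity" scenario

print(f"Value unlocked at 15% uplift:     ${baseline_market * copilot_uplift / 1e12:.2f}T")
print(f"Value unlocked if output doubles: ${baseline_market * doubled_output / 1e12:.1f}T")
```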


Evolving AI Coding Workflows

The way developers interact with AI coding assistants has matured rapidly:

  • From Copy-Paste to Integrated Autocomplete: Initial usage involved asking AI models for code snippets to copy into editors, replacing Stack Overflow searches. Now, AI integrations like GitHub Copilot and Cursor provide inline autocomplete and deeper IDE integration.

  • Context-Rich Collaboration: Developers now engage AI in iterative dialogues—starting with high-level specifications, refining details collaboratively, and then generating code. This conversational approach helps clarify requirements and improves the quality of the generated software (a minimal sketch of this loop follows the list below).

  • Real-Time Documentation Access: Modern AI agents can fetch the latest documentation or API specs dynamically, improving accuracy and reducing manual lookups.

  • Handling Complexity: AI excels in managing repetitive or complex tasks such as CSS styling or boilerplate code, freeing developers to focus on unique logic.
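
The conversational, spec-first workflow described above can be sketched as a simple loop: draft a rough spec, let the model ask clarifying questions, answer them, then request a detailed spec and an implementation. The sketch below assumes a hypothetical `chat()` helper standing in for whichever model API or IDE agent is actually used; it illustrates the shape of the interaction, not any particular product.

```python
# Sketch of a spec-first coding loop. chat(messages) -> str is a placeholder
# for whatever LLM API or IDE agent you actually use.

def chat(messages: list[dict]) -> str:
    raise NotImplementedError("wire this to your model provider of choice")

def develop_feature(rough_spec: str, coding_guidelines: str) -> str:
    history = [
        {"role": "system", "content": f"You are a pair programmer.\n{coding_guidelines}"},
        {"role": "user", "content": (
            "Here is a rough spec. Ask me any clarifying questions before writing code:\n"
            + rough_spec
        )},
    ]
    questions = chat(history)                         # model asks about API keys, storage, etc.
    answers = input(f"Model asks:\n{questions}\n> ")  # human fills in the gaps
    history += [
        {"role": "assistant", "content": questions},
        {"role": "user", "content": answers + "\nNow write a detailed spec, then implement it."},
    ]
    return chat(history)                              # detailed spec + implementation to review
```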


Challenges and Limitations

Despite tremendous progress, AI-assisted coding faces notable challenges:

  • Hallucinations: AI sometimes confidently generates incorrect code or calls functions that don't exist, requiring vigilant review (a lightweight defensive check is sketched after this list).

  • Context Dependency: Highly novel or specialized problems with limited training data remain difficult for AI to solve autonomously.

  • Opaque Code Generation: AI-generated code can be hard to understand or modify, even for seasoned developers, creating a gap between AI assistance and manual coding.

  • Tool and Context Limits: Current AI models and IDE integrations have constraints on context length and the number of tools they can handle simultaneously.
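
Regarding the hallucination risk noted above, one lightweight and admittedly partial defense is to mechanically check that every attribute AI-generated code references on a library actually exists before running or merging it. The sketch below uses only the standard library; it catches invented function names, not incorrect logic.

```python
# Sketch: verify that library attributes referenced by AI-generated code really exist
# before trusting it. Catches hallucinated function names, not wrong behavior, and only
# checks direct `module.attr` references.
import ast
import importlib

def missing_attributes(generated_code: str, module_name: str) -> list[str]:
    module = importlib.import_module(module_name)
    tree = ast.parse(generated_code)
    missing = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Attribute)
                and isinstance(node.value, ast.Name)
                and node.value.id == module_name
                and not hasattr(module, node.attr)):
            missing.append(f"{module_name}.{node.attr}")
    return missing

snippet = "import json\nprint(json.dumps_pretty({'a': 1}))"   # dumps_pretty is hallucinated
print(missing_attributes(snippet, "json"))                    # -> ['json.dumps_pretty']
```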


The Role of Human Developers

The future developer role is evolving rather than disappearing. Developers may shift towards:

  • Specification and Design: Focusing on defining clear, detailed specifications and architectural decisions.

  • Review and Debugging: Acting as quality assurance experts, verifying AI-generated code meets requirements.

  • Optimization and Deep Expertise: Handling performance tuning, distributed systems, and specialized tasks beyond AI’s current reach.

Interestingly, AI may democratize coding by enabling non-developers ("vibe coders") to create useful software applications through natural language prompts, expanding the pool of software creators and fostering innovation.


AI and Legacy Code Modernization

AI also shows promise in assisting with legacy system modernization. For example, enterprises are using AI to:

  • Extract specifications from old codebases (like COBOL) where original intent is poorly documented.
  • Reimplement modern, compact, and maintainable versions of legacy software.

This approach can accelerate modernization projects that have historically been costly and error-prone.
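
A minimal sketch of that two-step approach (recover a specification first, then reimplement from the spec rather than transliterating line by line) might look like the following. The `chat()` helper is a placeholder for whatever model call is used, and the prompts are purely illustrative.

```python
# Sketch of spec-first legacy modernization: legacy source -> recovered spec -> reimplementation.
# chat(prompt) is a placeholder for whichever LLM call you use.

def chat(prompt: str) -> str:
    raise NotImplementedError("wire this to your model provider of choice")

def modernize(legacy_source: str, target_language: str = "Java") -> dict:
    spec = chat(
        "Read this legacy code and write a plain-language specification of its behavior, "
        "inputs, outputs, and business rules. Ignore language idiosyncrasies:\n\n"
        + legacy_source
    )
    reimplementation = chat(
        f"Implement the following specification in idiomatic {target_language}. "
        "Do not mimic the structure of the original code:\n\n" + spec
    )
    return {"spec": spec, "code": reimplementation}  # keep the spec as a reviewable artifact
```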


Broader Implications: Chaos, Uncertainty, and Software Architecture

Incorporating AI into software introduces new dimensions of uncertainty and non-determinism. At temperature zero a model is technically deterministic, but an infinitesimally small change in the input can produce an arbitrarily large change in the output, making AI components behave like chaotic systems. This demands new software architectures and design patterns capable of handling unpredictability, much as networking introduced timeouts and retries.

Developers and organizations must adjust expectations and adopt new metrics for AI reliability, especially in sensitive domains like finance.
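
By analogy with the timeouts and retries that networking introduced, one common partial remedy is to wrap model calls in a validate-and-retry loop so that unpredictable output is caught at the system's boundary. The `call_model()` function and the validator below are placeholders; this is a sketch of the pattern, not a complete reliability solution.

```python
# Sketch: retry-with-validation wrapper around a non-deterministic model call,
# analogous to timeouts/retries in networked systems. call_model is a placeholder.
import time

class ModelOutputError(RuntimeError):
    pass

def call_model(prompt: str) -> str:
    raise NotImplementedError("wire this to your model provider of choice")

def reliable_call(prompt: str, validate, max_attempts: int = 3, backoff_s: float = 1.0) -> str:
    last_error = None
    for attempt in range(max_attempts):
        output = call_model(prompt)
        try:
            validate(output)          # e.g. parses as JSON, passes unit tests, no forbidden advice
            return output
        except Exception as exc:      # validation failed: back off and retry
            last_error = exc
            time.sleep(backoff_s * (2 ** attempt))
    raise ModelOutputError(f"no valid output after {max_attempts} attempts: {last_error}")
```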


The Narrow Waist of AI: Prompting as the New API

An interesting analogy is the "narrow waist" concept from internet architecture—where a simple, universal interface (like IP) connects complex, diverse systems. In AI, prompting acts as this narrow waist, serving as the interface through which developers interact with powerful, complex models.

Currently, prompting is informal and varies between models, but the future may bring formalized prompting languages and structured frameworks that standardize and optimize human-AI communication.
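
An early, informal step toward such structure is constraining model output to JSON and validating its shape before use (many providers now offer native JSON or structured-output modes that handle part of this). The sketch below uses only the standard library; `call_model()` and the expected schema are illustrative assumptions.

```python
# Sketch: prompt for machine-readable JSON and validate the shape before using it.
# call_model is a placeholder for your LLM client; the schema is illustrative.
import json

EXPECTED_KEYS = {"fruits": list}

def call_model(prompt: str) -> str:
    raise NotImplementedError("wire this to your model provider of choice")

def get_fruits() -> list[str]:
    prompt = (
        "Return ONLY a JSON object of the form {\"fruits\": [\"...\", \"...\", \"...\"]} "
        "containing exactly three types of fruit. No prose, no extra keys."
    )
    raw = call_model(prompt)
    data = json.loads(raw)                     # fails loudly if the model added prose
    for key, expected_type in EXPECTED_KEYS.items():
        if key not in data or not isinstance(data[key], expected_type):
            raise ValueError(f"malformed model output: {raw!r}")
    return data["fruits"]
```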


Looking Ahead: Education and the Programming Spectrum

The rise of AI coding raises questions about future education and skill requirements:

  • Will traditional computer science education become obsolete, or will foundational knowledge remain essential for optimization and deep understanding?
  • What abstraction levels and languages will future developers learn?
  • How will AI-native programming tools evolve to balance ease of use with control and transparency?

Experts agree that while AI simplifies many tasks, formal languages and understanding underlying systems will remain critical for complex or large-scale projects.


Conclusion

AI-assisted coding is reshaping software development by augmenting human creativity, boosting productivity, and enabling new forms of collaboration between humans and machines. While challenges remain, the potential market impact is enormous, promising trillions in value creation and the democratization of software creation.

As AI matures, it may become a new programming abstraction, blending natural and formal languages, and ushering in a new era of software engineering—where humans focus on high-level intent and AI handles the intricate details.

Developers, enterprises, and educators alike must adapt to this evolving landscape, embracing AI as a powerful partner in building the software of tomorrow.


Stay tuned as we continue to explore the cutting edge of AI in coding and its transformative impact on technology and society.



📝 Transcript (1256 entries):

Today I think AI is not just a higher level language abstraction or something like that. But could it become one over time? That's the question. I don't do you do you think yes? I I think it could. If if I look at a classic say compiler design or or you know in in in in programming languages, if I would have LLMs as a tool, I would probably think very differently about how I would build a compiler. Yeah. And I don't think we've seen that work its way through yet. But if if I can basically define certain things in human language in efficient way and you know maybe in a sort of tight enough way that can use this directly as as input for a compiler that could change a lot of things. We're pretty sure it's the second biggest AI market right now. Correct me guys if I'm wrong, but you know consumer pure chatbot I think is number one and I think coding is number two. just purely purely looking at the numbers but consumer is the aggregation of a lot of different exactly right that's how I define market I think there's you can make an argument it's the number one actually if you look at really homogeneous markets is coding bigger than companions yes I think so yeah yes at this point it is you think so that'd be interesting it depends it probably depends how how you classify something like chat GPT which to some degree is used for companionship so a a large portion of chat GPT usage which I think now is companionship. I think that's right. Yeah. So, well, in the end, you know, is is is a person's motivation greater to build something or to or to to find love? I think, you know, you know, it may be neck and neck. One thing that's very unique about AI coding that's sometimes under underappreciated is this was actually an existing behavior in in a couple of ways. First, people were already going somewhere to look for help, which which we mentioned earlier is is Stack Overflow for for the most part. So there was already sort of this muscle people were exercising when they hit a problem they couldn't solve to go find the information on the internet and this is really just a much better form of that. you know, there are all these jokes that Stack Overflow is actually writing most of the code, you know, for the last, you know, x number of years. A lot of that may just shift to to AI models, right? It's not clear if that was a joke or not, but yeah. Yeah. Maybe maybe not a joke. Also, there was this thing called GitHub copilot, right? They did this really foundational work to start to transition people off of that sort of Stack Overflow use case to uh to, you know, using AI models. And I think companies like Cursor have just done a much better job with that now. and and and so you have this fairly unique thing that you're actually taking advantage of a an existing user behavior an existing market and like selling a great product into it. 
I think there's one other aspect which is look if you're a developer and you have access to the latest AI technology the first problem you solve is are your own problems right so I think it's uh developers just that's the problems they understand best that's the problems they they face every day and so they build infrastructure for themselves to use and developers are always early adopters for new technologies just because like naturally they're they like to tinker uh they like to configure net new tools and they're lazy so anything that actually you know increase the productivity they will adopt but I also think coding market is doing so well also because it's somewhat verifiable problem like you can verify if like a coding function like input output are very clear compared to like user preference you know uh and all the other problems and you can also reframe a lot of different uh problems into coding so I would even argue that some of the art generation is a coding problem people will not like it but like historically we always used machine learning learning before we call it AI in Adobe Photoshop and you know like that's somewhat coding you can really map the trajectory of brushes that's coding vector generation is also coding so I think the beauty of code is that you can really remodel a lot of the real world problems and make it into very machine consumable formats I think there's one other aspect which is it's it's a it's a massive market like if you if you think about this we have 30 million developers worldwide let's say average value created by developers is $100,000 a year. That's $3 trillion. Right? I think we if I look at the data we've seen from some of the large financial institutions, they're estimating that the increase in developer productivity from just a vanilla co-pilot deployment is something like 15%. My gut feeling is we can get that substantially higher, right? Let's assume we can double the productivity of a developer. For all developers in the world, that's $3 trillion worth of value, right? That's the value of Apple computer or something like that. It's an incredible amount of value that we unlock there. So it's it's it's a massive market and I mean there was I think it was last year when there was you know a good blog debate on on you know if we're overinvested in AI and I think back then the number was is $200 billion annual investment you know something would ever you know recuperate here we have a way to recuperate $3 trillion right so that makes the the 200 billion look like peanuts. I think what might also be at work is it's it's an easy market to capture because developers understand it and it's it's a very very big market something potentially it might be the first really large market for AI in terms of in terms of driver. Yeah. No, that's that's a great point. Software and software developers create a huge amount of value at every company and every organization around the planet now and this is sort of a shortcut you know into into this sort of core capability. So so that that makes a lot of sense. 
There's almost a bootstrapping effect to your point too Guido because if you count not only the productivity gains but like the brand new things that are being created with these with these models like you kind of I think can see a cycle starting where you're getting better and better AI coding models which allow you to kind of create better better absolutely software you know better new net new AI applications also once the AI revolution has run its course do we have any idea how the job of a software developer will look like right it look different from today I think we're seeing that when I'm writing code today I'm I'm writing specifications I'm having discussions with a with a model about how to implement something um you know I'm I'm I'm in for easy features I can actually ask it to implement a feature and just reviewing it how will the process be different will there still be the same stages will there will we all turn into basically product managers that write specifications and then the AI writes the code and you just step in occasionally to debug or what is the what's the and state yet. Do we have any idea at this point? Or we all become QA engineers and we test if it's to the spec. There we go. That is kind of ironic, right? We all got into this to avoid being QA engineers. I like what you were saying, Guido. Maybe we could each just talk a little bit about how we use AI models in coding right now. Like can you share a couple of stories about how this has changed your your coding workflows? to totally and I mean boy I'm not sure I'm even the I'm probably maybe the person coding least here but but the I think the the the most interesting insight is it has changed a lot over the last even 6 months how I use these models right it used to be that you take your favorite uh you know chatpt or something like that and you give it a prompt and out comes a program and you copy that into your editor and you know you you see if it works or not right that was that's sort of the stack overflow replacement thing is like when you inevitably hit a problem instead of going to Stack Overflow you go stat uh chatgbt and it actually gives you code back copy paste but from a different source right and this was like six months ago this was state-of-the-art this wasn't like that long ago maybe nine months nine months ago yeah 9 months but then uh so then the next step was you started having integrated things that are integrated in your IDE right GitHub copilot then cursor that basically allows you to use autocomplete which is a big step forward right it's no longer like monolithic questions but but it's sort of in the flow then this split up into autocomplete at a line level. I can ask questions about paragraphs or I can have a um you know sort of a separate chat interface where I can have longer discussions. Then the IDE started to be able to use command line tools. So suddenly I can say hey can you set up you know my new Python project with UV or something like that and and um it could could basically run commands to to do all that work. And I think where we are today is when I write want to write a new piece of software or you know this is this is not production code right? this is like but I want to try something out. The first thing I do is I start writing a spec, right? I'm start basically a very high level. Here's what I'd like to do. And it's, you know, still fairly abstract and not very well thought through. 
And then I basically ask, you know, the the model, maybe something a set 3.5 or 3.7 or Gemini, here's here's what I'd like to do. Does this make sense? Please ask any questions that are unclear. And then write me a more detailed spec. And then the model gets to work. And usually there's lots of questions for me. It's like, hey, you know, I need an API key for that. you know, very simple things or more complex things like, you know, how do you want to manage stage? Should we put this in a database? Should we just dump it into a file or something like that? And so it's it's basically a back and forth discussion that helps me clarify my thinking. And the the model is almost a sparring partner to think through the process, which is really weird in a way, but it it works, right? Uh and you know, and and so over time you basically get more detailed specs and only when you have them, then you ask the model to start implementing. And all of that comes with a fair amount of context. You know, you also together with the model I have my standard Python coding guidelines. This is how I like to do commenting. This is how I like to do, you know, more object-oriented versus more procedural is how I like to structure my classes. I'm an object-oriented guy. So, we're talking Java here or what? No, Python, right? Yeah. Do you want to have type Python or untype Python? All these things, right? So, it's it's a lot about context. It's a lot about explaining your general development methodology. It's a lot about a back and forth with a model now where you sort of together figure out something. That's how I'm how are you coding? I guess like compared to maybe six months ago uh how I use coding agent nowadays is um I give it more of the world knowledge. Before I was mostly relying on what's a foundational models knowledge and it's funny because when you ask the coding agent when do you think today is it's always like 2023 and then all the specs I will give you are from like 2024 at best. So depending on when the knowledge cut off is nowadays I think it's very natural for me to like uh reach out to like linear here's a ticket and I just give my idea uh pull my idea into cursor cursor agent will take a first step at implementing it. So that's one kind of workflow change. The other one is more user prompted like uh active queries. So before um I may need to copy paste documentation into my little cursor window. Now I just ask the cursor agent to like hey can you use fire crawl to go search for the most up-to-date uh you know uh like clerk documentation and it will actually fetch a page and it will you know read up that works. Yeah it actually works and then it will mp it uses MCP but it's likeation detail it could be a tool call or whatever but it's there's more integration to the real world now. You guys sound much more planful than than me. I always, you know, the scenario for me is like Saturday night I finally have an hour free and I have a weird idea for an app and I just dive right into it and like ask ask cursor to do everything. And I've always found it works really well for high complexity, high kind of annoyance factor things like front end like if anybody on earth can remember all of the CSS classes that people use now for margins and padding it's it's like it's you know I I don't think that person exists. I center a div yet? Yes. Oh, yeah. We should do a benchmark on centering a div. Totally. Yeah, we Yeah. tutorial on div center. 
I mean, it's one of these it's one of these hard problems for for no reason, right? There's just like five different ways to center text and elements and and I can never remember any of them, right? And and and you know, AI models are really good at this, right? Um and they now can do it. When you start going to more niche libraries and function calls, that's where I always run into trouble. So, so I love this fire crawl kind of idea because usually I'm hunting for docs and then putting them back in or something like that. Yeah, sometimes I also copy paste like a milify doc because they have the lm.ext on like the um developer tool docs. I just give the URL add doc and then enter the URL and ask cursor to implement that and that works too. Has anything gone really wrong for you guys yet doing doing sort of AI assisted coding? Not really wrong per se, but a lot of how we code is dependent on the agent behavior on how like the client implemented the agent. One example is there's this very cool tool that actually generate like very pretty pages and send back like a react component like a HTML page for the coding agent to refer to. So one time I asked cursor agent to like reach out to this tool implement based on whatever it told you. Cursor agent's reaction was very interesting. It look at the code, it says, "Oh, this looks great. Let me give you a new version." So, it didn't adopt whatever that was returned. Interesting. Yeah. Which is like a very interesting like agent to agent communication. Cursor agent is like, "I don't agree with this direction." So, you've done a bunch of work on MCP, Yoko. How do you think that plays into this? I think MCP to its essence is just a way to provide context, the most relevant context to LMS. So it helps that a longtail MCP servers nowadays can be leveraged whatever client you're using. So that's what's you know empowering the kind of experience I was just describing. I can use linear MCP I can use GitHub MCP to pull in the relevant context and tool calling is like a technical detail how they implemented fetching the context but the crux of the MCP is actually the context part. what is the most relevant thing I can provide to you as a model so you can help me better. And so do you think having these kinds of tools available in an IDE means AI coding is kind of more productive or a better fit for kind of senior developers? Because because a a a knock against this for a long time has been that you know vibe coders for lack of a better word are kind of producing great demos and and you know junior developers are kind of you know getting up to speed faster but the people I've always affectionately called neck beards right the the people who you know own the cluster and stop you from breaking things or or you know like own the overall architecture are sort of skeptics. Do do you think this is one one way to get you know the the the neck beards engaged? I think it depends on what the very senior engineers are optimizing for. There are very senior like application engineers who are just very good at you know fleshing out ideas. So like in this case it's like a more evenly distributed skill set. You just need to put the stack together. But there are very senior engineers who are say optimizing best thing in the world for optimizing for distributed systems that I think we're not quite there yet just because the coding agent first like it can't fetch any and all state of the distributed system. 
It's a lot of human you know like intervention when it comes to like how to solve certain problems. But I feel like we're on the way there given enough context window, enough tool calling capabilities to bring just the right knowledge into the model. Today I think most IDs have a limit on the number of tools it can handle. I I remember was like 40 or 50 or something. So it naturally limits what's the context and what's the tools that the coding agent can leverage. I think that there's sort of a a pattern that the more sort of esoteric the problem is, the more novel the problems you're trying to solve, the more context you have to provide. Right? If I'm like, hey, write me a blog or, you know, what is it? Write me a, you know, online um um store like the the simplified version, that's of a standard, I don't know, undergradu software development class problem. So, the amount of samples on the internet is more or less infinite. The models have seen this a gazillion times. incredibly good in in regurgitating um um this code. If you have something for which there's very little training code that typically all goes away and it's all sort of you have to specify exactly what you want. You need to provide the context. You need to provide the API specification. It's much much harder and it will very confidently give you a wrong answer too. I can't tell you the number of times I'm like oh my god this function exists. I had no idea it's exactly what I needed. It's like wait no it doesn't exist. Pure hallucination and it's once it does that it's very hard to get it off. Right. And if you're saying like the function doesn't exist, it hallucinates a new one. It's like, "Oh, I'm so sorry. Here's here's another function that doesn't exist, that might work." Yeah. I think what models today are very bad at is telling you if they don't know something. Yeah. Do you think RL would change that in a training process? If you theoretically you give it all the relevant environments in the world, it can do all the things it needs to do to simulate a distributed system um and debug it. Look, I I I think in the extreme case, if you are the first person on the planet to write code that solves a specific problem, there's just zero training data out there. I think it'll always be very hard, right? I think the models are not really creative so far. They can do a little bit of transfer, but but but not much. So, you know, if you're say there's a brand new chip which has a new architecture and you're the first one to write a driver for it, it's going to be a fairly manual task. I think the good news is that is 0.01% 01% of all software development right for the I don't know you know 100 thousands ERP system implementation or so right that we have tons of training data I think these tools can be very very powerful we haven't talked about vibe coding too much but right but there's this idea that that people who aren't developers can now kind of write code which is which is pretty cool right and and and sort of feels like something that should happen. You know, we're not like priests of the computer where, you know, we need to intervene between ordinary people and and the and the processor, right? It should be that indoctrinated before. Yeah. Exactly. There's no, you know, seminary of Well, maybe maybe there were seminaries of computers. I don't know. CS departments. Yeah. Exactly. 
Um but but it kind of makes sense that people should be able to control computers in direct ways, not just in sort of pre-baked, you know, programs that have that have been given to them. So, so this is, I think, a a super interesting and and super exciting thing. I think there's a question there is is that true at all scales or or is this a little bit like look everybody can build a shack but you cannot build a skyscraper right well so so this is this is exactly why why I bring it up right the demos that everybody does their first time they're they're trying you know a a website generator or or you know cursor or something like that probably are not you know doing that much for the rest of humanity right this sort of first weekend project but if you assume some portion of people who give that a I maybe start to climb the ladder and do increasingly sophisticated things and by the way in a totally different way than from you know the three of us would probably do it having learned sort of programming the the hard way. I just have a ton of optimism that that creates all sorts of kind of new things. You know you have a new pool of people writing software in a new way who may look at the world in a completely new way. I I just have a ton of optimism that gives you kind of new new stuff, new applications, new programs and new ways of kind of using computers and computing that that we haven't had before. You know, this actually reminds me a lot of of the 2000s, like when uh blog was the new word uh on the blog and everyone was like, I need a new blog and then, you know, we rush to create our own blog and there comes like WordPress. People are still using WordPress, by the way. I'm surprised by that. And this wave of vibe coding almost felt like everyone and my my mom and my mom's neighbor are like trying to, you know, use the models to create personal software. So like we came from personal static content to like personal CRM to like manage your relationship or something. How deep do the software go? Like I don't know. I don't think it's very deep, but it doesn't matter like as long as there's like personal utility. Um I I think Martin tweeted about this earlier. He was like you should still learn to code. There's a uh if you're operating on on one abstraction, you need to learn the abstraction lower than where you're operating from which is very fair. And I I keep coming back to that because I wonder what is the one level lower abstraction for VIP coders. Is that code? Is that the IDE? Is that something else? But curious about your you guys take. I think this is a super good question and and let me try to rephrase the question a little bit like what is the thing that future people that want to do software development need to learn, right? Is it is it is it one level deeper? Is it actually something that's sitting more to the side? I mean there's actually some people say look there's no point in in learning CS anymore. Yeah. It's all about social emotional learning and and the kind of things. I'm not sure I agree with that. Right. But it's I feel like that comes up every 20 years or so. Definitely is a cycle there. Yeah. Honestly, I have absolutely no idea how the equivalent of computer science education will look like in 5 years, right? 
when when when we're on the other side of this you're probably I mean historically what happened when we did similar things say with with calculation right when we went from adding numbers manually to Excel right it's not that the whole job category disappeared right it's more that bookkeepers became accountants or something like that right edit entering data and writing down numbers and adding them manually became less important and doing higher level more abstract concepts became more important so for a pattern match that one to one you The guess would be that you know explaining the problem statement, explaining the algorithmic foundations, explaining architecture and explaining data flow is getting more important. And the nitty-gritty coding, you know, what's the most clever way to unroll a for loop? That's a very specialized more niche um discipline. It does almost feel like we're waiting for something, doesn't it? Right? Like like if you think about a a sort of classical computer science undergraduate education, you you don't just learn kind of the latest thing, you know, at least in a lot of programs, you learn you may do a semester of sort of assembly, right? You actually learn all the oldest things. Yeah. You start with the old or you know, we even had to take like a processors course, right? And I'm the world's worst computer engineer, but like you know, I got in there and I was like connecting gates and you know, like that was fun. So you learn you learn like how processors work. You learn assembly. We did a course on lisp which was cool. You know we did file systems and some bits of operating systems and you know you learn Java like Java was state-of-the-art at the time. That's why I mentioned it before not not anymore obviously. So it's tempting to say this is this is like the next kind of thing that is kind of built on top of those things and that you would learn to code kind of only for historical reasons or or for educational reasons. I just don't know yet if that's actually true. like a lot of the kind of layers we've added on on top over over over the course of decades are are things that truly are a new programming interface. AI is not actually a programming interface, right? It's not actually a framework. It's it's sort of a tool that uses things you already helped you use things you already have. So I that just makes me wonder if we're waiting for kind of the the next iteration of this thing like the thing that AI actually can like change about the way computers are programmed. for instance, it could just be prompts that are somehow, you know, that are somehow translated to code in a more direct way, you know, like and and agents as we see them now are kind of a starting point there. So that that's what I'm sort of curious about today. I think AI is not just a higher level language abstraction or something like that, but could it become one over time? That's the question. I don't do you do you think yes? I I think it could. I mean, look, I think we really haven't figured that out yet. I mean it's if if I look at a classic say compiler design or or you know in in in in programming languages if I would have LLMs as a tool I would probably think about very differently about how I would build a compiler. Mhm. Yeah. And I don't think we've seen that work its way through yet. I have no idea how it's going to look like. Right. 
But if if I can basically define certain things in human language in efficient way and you know maybe in a sort of tight enough way I can use this directly as as input for a compiler that could change a lot of things. analogy here would be like because a lot of companies are building agent-based systems and then when you kind of take a look at that system when you see what the agents are building you're like oh this is what I learned in operating system class many years ago these are processes one process fork another one and then hence the task to another one and then something else manage the resource of the system I don't think we have the framework like this is why I think the CS education will not go away because it give you a way to compare the few things otherwise you wouldn't have known there's a thing called process in the first place but at the same time I don't think on the on top of the foundational model we have invented the paradigm to make that work as if it's an operating system formal languages exist for a reason right I guess is the is the one thing I would say whether that's a programming language or you know a specification language or something like that there has to be some high bandwidth expressive way for a person to design software or or anything but software in this case. So I just have a hard time seeing you know Python going away or or programming languages going away entirely. You know to Yoko your point about you have to understand at least one level of abstraction down. Um it is an interesting interesting question if some will be more popular than others because they're kind of more AI native in a way. We're you know we're sort of seeing Python and JavaScript are are kind of leading the pack right now but but you know it's not clear. Um, tooling I think is another really interesting thing. Like we're seeing a bunch of new Python tooling come out right now which is kind of cool because the uh the Python ecosystem is kind of more active than than ever and you can imagine that sort of has an impact on you know how well does it work with with kind of the AI you know add-ons to the language too. So so you know I I just don't think we can toss these things out toss these things out completely. I think the reason behind you need to know a level like a abstraction level deeper is if and when you need to do optimization on the system you're writing you just need to know how to optimize that. uh if you don't then you really don't need to know like there's a lot of people who back in the days coding Java and never heard of you know JVM or know how it worked just uh like creating a calculator using Java you don't need to know JVM but if you want to optimize for you know runtime threading you do need to know JVM it's very similar with the vibe coding uh use cases if you're just building a marketing website I I don't think you need to know the next level of optimization like unless you're serving something at scale then you probably need to know what CDNs are, you know, how to cache pages, things like that. But at the same time, like if you're someone who uh aspire to serve something at scale and then want to, you know, flesh out the real service one day, it's really hard to get away without knowing the underlying knobs because the essence is there are certain things computers can do and these things are, you know, defined by formal languages. One language is buried under the other. 
Uh and then to touch these knobs and then to know what to even do, you need to know these languages. Yeah. So curious about your take too, Gradle. No, and I agree. I think formal languages won't go away because ultimately they they seem complicated, but I think effectively a formal language often the simplest type representation you can you can find to specify intent, right? doing that in a a language like natural language is is often very imprecise and you need a lot more words to to get the to get the same result. I mean I think the the interesting question at the moment is are there cases where AI has enough context from understanding humans and enough context from you inserting clever ad symbols and pulling in additional pages um that it can take for a certain subset of problems natural language descriptions and and um translate it accurately. And I mean it seems like there are there are areas where that's possible, right? So that's what we're using every day when we use use AI for coding. So can you hybridize that somehow that you actually create a language of of that type? Right. I don't know yet. I mean your distinction is really interesting, right? Like uh complicated is this word complicated is sort of overloaded, right? And in one sense it can mean a highly, you know, complex system that has a lot of pieces and you never quite know how it's going to behave. On the other hand, it may mean just kind of hard to use, right? And I think people sometimes see programming languages complicated in the sense that they're hard to use or hard to learn, right? You need to learn this kind of new language to speak in. They're actually very simple, right? You can you can draw a a tree that sort of, you know, encapsulates the entire set of things that can be expressed in that language. So, it's it's funny. We're we're switching to this thing called AI coding that's easier to use, but actually much more complicated under the hood. You know, insert meme about giant green monster with a mask on, you know, like here. So like so so to your point it's like how do you how do you sort of handle that and is it some hybrid solution or or something else? Um I know the the guys at Cursor have always talked about kind of formal specifications which which I think you alluded to also Guido as as kind of like writing a a spec in a really clear way as kind of the task that people will be faced with more and more over time. It's it's almost like an annealing process between you and the AI to go from some loosely formed model that you have and loosely formed model that the AI has to to a tight spec that you can implement at the end of the day. This is so true. I talked to a classical vibe coder recently and then because like my question was do you really need the coding interface like you know how you enter a prompt it generates bunch of code and this voders's uh answer was so interesting uh he said uh I like that the AI is generating code and showing me it's very empowering for me to see that I generated all this code and but when I want to go in and actually change something myself I don't know where to start so it tell me that there's a gap between what the AI generated and where you know v coders uh operate. It does feel like there is a product somewhere between like you we want to um give them the power to actually change the underlying knobs too. 
I mean this is not restricted to people who are not experienced programmers by the way like if one of us tried to vibe code an app after four to five turns if you went in to try to edit the code it would be very difficult. It's very opaque what's going on. I ran into this when I was trying out uh the Blender MCP. I've never used Blender before kind of like it's just really hard piece of software to get into, but um so I installed the MCP server on my cursor uh IDE and then I was able to prompt uh like a mini statue of A6Z infra uh just very easily. But when it comes to uh modifying this 3D representation, like that's where things you know start to break apart. I don't even know where to start, why I need to model. Yeah. Like a flat surface has like 10,000 polygons. Yeah. But there's a lot of opportunities here like kind of existing in the gaps of AI and VIP coders and what the representation is today. What's really cool about this is is you're sort of creating a new layer of context and a new layer of intent in in software programming that that didn't exist before. So So for instance, can AI help port old code? Right? this is one of these like the banks have been trying to drop cobalt for you know for hundred years or something like that and personally I think the answer is kind of no right like it can definitely help but it doesn't solve the hard problem and and I mean that in the in the following way right like AI may be able to transpile you know cobalt to Java but there's a huge amount of context in what went into creating that cobalt code that's been totally lost right over the course of decades in many cases something that started as an airline booking system became and airline booking plus HR plus you know fetch the coffee system and many of the people who contributed to it and by the way didn't write a lot of documentation or comments may not be around anymore at the company or you know on this earth right so and and so but and and so this is a problem that AI I think can help and not solve but what's actually even more interesting about this to me we talk about specifications is like if they had been using AI at the time to create those systems there would be this whole other record of what their intention was um when they were creating the software that kind of comes for free, right? It's not something you have to go back and do. And I think that's something that's kind of cool now, like if we see this kind of take off more and more, you have this kind of other set of metadata that can kind of capture the the software intent in a slightly different way. It's almost like a higher level language abstraction, isn't it? But it's I think it's different, right? Like because it doesn't like you can't compile it down, you know what I mean? Sort of. You can feed it back. I agree. It's sort of But actually, you're raising a very interesting point there. I recently talked to some large enterprises that are using AI to basically take legacy code bases specifically mainframes. So cobalt and PL1 is the other other good one there and and move that to more modern languages. And the it's super interesting. They they have exactly the issue that you described, which is that if you just um look at the the old codebase, you often have no idea what the intent was. And if you just try to translate that, you you pick up many of the idiosyncrasies of that old programming language, right? I mean, Java has much more modern constructs that you didn't have in in in Forrron. 
Maybe you want to use some of Cobalt, maybe you want to use some of those. So what I've heard from from now multiple multiple organizations that they're saying the most efficient way for them is to actually go first and try to create a spec use to create a spec from that code right and once they have the spec then to reimplement the spec and that gets them much better results much more compact code much more modern code than than what they had originally and that is sort of an AI assisted problem for sure both of those problems I think. Yes it is. That's very cool. Yeah, that's interesting. Uh because I was actually just thinking about uh it's actually much easier to rewrite modern software like modern meaning something in the about past 10 years. It's like easier to implement from Angular to React especially both frameworks are well understood by the agent. It's much harder if the state PHP to Angular is a little PHP I mean Laravel is you know working out pretty well. So that one's uh easier depends on what kind of framework you're using. It's much harder if the state one is spanning across many software uh systems uh because like you just need to do some discovery or have an agent that can have access to this discovery. I can see that working out. And two, there's specific uh specificities on the hardware some of these things are running on. Uh like for example like uh for the runtime maybe I give it enough memory for this docker container I need to have specific configs to make this work. Uh sometimes like to your point all of that is lost until the day we can take a snapshot of the runtime like how is this run? What's the requirement of this? It's hard to migrate systems like that. I'm now getting like pre- nightmares of like something goes down in prod and you're digging through the chat GPT logs to like try to figure out what someone might have accidentally tried to do. Guido, I have sort of an interesting question for you. Like if you think of AI as a primitive in a in an application, not just a tool to write code, it does seem like it's kind of the pushing the frontier of the degree of kind of like uncertainty and and like non-deterministic behavior we can we can have in software, right? like like if you think like really old days kind of probably predating you know a lot of a lot of us and our our listeners um you know you just write software for like a local machine and you could have a pretty good expectation of of how it was going to execute. We had this new thing called the network, right? Which is which is very hard to predict how it's going to behave, but but you can kind of express it in the same terms, right? It feels it feels like a problem that you can wrap your arms around. It feels like AI is kind of an extension of that in a way where like you actually don't know what's going to happen when you when you add AI into your software or or use it to write code or whatever. Like how do you how do you think about that? Do you think that's a reasonable way to look at it? And are there any lessons from kind of the networking world to um you know that that would help us figure out what's going to happen in AI? Yeah, I mean I want to say probably because I don't think we have the lessons fully digested yet. When it went to network systems there were sort of new failure modes like timeouts and you know then new remedies for this like retries and you know and uh once you got to sort of distributed database you had to worry about automicity and roll backs in in a digital context. 
things got very complicated very quickly and I think for some of these design some of the design patterns today even today we don't have very good software architectures yet right there still and they may be kind of unsolvable some of these problems right that's I mean I think the fundamental problem is not solvable but you can at least make it as easy as possible for a developer right I mean everything is just tools for the developer to to cushion some of the blow models are funny because at temperature zero a model is technically deterministic right so it's it's not so much that different inputs that the same input would pay result in different outputs. That's that's something we do by choice. I think the bigger problem is that an infantessimally small change in the input can have an arbitrary large effect. So it's a chaotic system. You're saying chaotic system. Exactly. The user could put anything into a text box and the system is chaotic enough that you get you know like it used to be you just had to check for apostrophes and then you could execute a database statement. Like there's only a few things that could break a text box. Now, like kind of anything could happen when when someone enters text. That's right. Ignore all previous instructions. But that's a really interesting thing you're saying that it may it may be the case that we just need to expose the primitives and capabilities of the system in a way that developers can use, not necessarily tamp down all of the, you know, all the failure modes, you know, the the equivalent of a timeout, for instance. I think that's one part of it, but I think we also have to change our expectations. So there was I I talked to one large bank and they they implemented software and the the um you know they to basically generate text and you know one of the important things in financial institutions never give investment advice, right? And so you're trying to have an LM that is very helpful and never even implicitly gives investment advice. That's of an unsolvable problem, right? you can you can get better and better and better, but you can never completely rule it out. And you can add a second element that tries to catch it, but also will occasionally not catch something because it's it thinks it's helpful. And at the end of the day, they basically made a decision to say, you know, we can't build a software system that never does this. We have to change our metrics. We basically have to go and I think they ended up with something like it has to be whatever half the probability of a human of a well-trained human doing the same situation. If we were to zoom out a little bit, you were there for the whole inter rise of the internet history and then you were a pioneer on you know a lot of the networking research. So how the internet came to be is there is a narrow waste of the internet somehow that happened. Do you think it would be a similar dynamics playing out in AI at all like is there analogy? Maybe the waste is never narrow for AI like for the waste the narrow waste. I think it's the prompt. Oh interesting how why is that the case? 
I mean I mean look the typically these big tech cycles are built on abstractions that allow you to encapsulate the complexity underneath in a very narrow API for say a database it was sickle right in the early database or the the transaction databases where how does the database under the query work it's something with B star trees we learned that in grad school but that doesn't really matter anymore like I just need to be able to specify the query and I think that's the same thing that led to the rise of modern ML right you no longer need the overpaid Stanford PhD that that trains a model for you, but instead you can now express and steer the model with a prompt. And so, you know, a fairly uh say mediocre Python programmer can suddenly leverage a very powerful LLM just by by prompting it. Interesting. If you were to double click on the prompts, like do you think it's like a natural language representation of what you want to do or is it like because there's no standard there. It's like prompts can be anything and everything. It's like partly a narrow waist formal language, right? It's not a formal language. It's clearly not like English either, though, right? It's It makes me think latent. Yeah. I mean, we're all learning kind of a new language in order to prompt these things. And actually, it's a little different for each model. So, dialects, you know, we've got like a translation issue, all that kind of stuff. I mean, will we ever have a formal prompting language? Maybe. I think there are some, you know, some overpriced Stanford PhDs working on that problem. I'm I'm you know hopeful to see what they come up with. Are Asian frameworks formal prompting languages? I think a little bit. Yeah, a little bit. Yeah, I mean we're certainly seeing starting to see prompts with structure, right? Um where it's like I don't know user something agent response or something like that or you know think and think beginning of the answer end of the answer and you say like okay good it's like two tags and a lot of text that's not not very formal but I think the the first starting points are there. You could see future models getting trained and fine-tuned on a more structured representation. Oh yeah, we're already seeing this happening, right? There's uh models, every model has JSON mode now. Uh and then how you define what you want out of the JSON mode is like you give it a type system. It's like uh you can prompt like I want uh you to generate three fruits but only return it to fruits uh colon like you know apples uh like fruit uh like types of fruits and then you could define it in your code saying I only want your answer to be you know have the fruits key I don't want anything else I guess that's like a kind of formalization longterm I actually wonder if the like for for a reasoning model where a lot of the the thinking sort of happens internally If the model is generating the userfacing machine facing output, it's going to be a different model from the model doing the reasoning if that makes sense. Right? So the you know I I like a really a chatty chatty model or somebody else um you know wants a more tourist model or if you want to have generate JSON output you know one want have yet another model. So you could see sort of the the model output layer daminating at some point from the reasoning layer. In the future, do we think there's going to be different vibe coding models versus enterprise coding models? I actually don't think so. Oh. 
Well, I define vibe coding as you kind of let the model, you have a spec, you let the model generate whatever it needs to with the implementation detail. You don't care about the implementation, but you do care that what comes out of that implementation is what you wanted. So it's less formal, less constrained than classic coding. What is the difference between vibe coding and classic coding? I think for classical coding you have to make a lot more choices in what you want to put in the code. So I want to use this SDK and not the other one. For vibe coding you just don't care about the underlying technical details as long as the model drives. Yeah. As long as it gets things done, but you still care about the higher-level needs. Otherwise why are you writing this? Got it. So I can totally see enterprise users doing vibe coding, and that's a compliment.