
The Pragmatic Engineer

Software engineering with LLMs in 2025: reality check

Key Takeaways & Insights

  • Hype vs. Ground Reality: There’s a significant disconnect between the optimistic predictions of tech CEOs about AI’s role in software development and the day-to-day experiences of engineers. While executives tout AI tools as transformative, real-world adoption and efficacy are more nuanced.
  • Widespread, but Varied Adoption: AI coding tools are being rapidly adopted, especially within AI startups and big tech, but their effectiveness and integration vary widely across different organizations and use cases.
  • Step-Change in Productivity: Experienced engineers and industry veterans believe AI tools represent a fundamental shift in how software is built, comparable to the move from assembly to high-level languages. However, benefits are currently more evident at the individual developer level than at the organizational level.
  • Experimentation is Key: The most successful teams and engineers are those actively experimenting with AI tools, sharing tips, and iterating on what works.
  • Non-determinism as a New Challenge: Unlike past productivity leaps in programming, AI introduces non-determinism, requiring new approaches to verification and trust in code.
  • Long-Term Impact: The consensus among seasoned engineers is that AI will reshape software development, change cost dynamics (what’s easy or hard), and enable new kinds of ambitious projects.

Actionable Strategies

  • Leverage MCP (Model Context Protocol): Adopt protocols like MCP to connect various tools, databases, and APIs, enabling conversational and agent-driven workflows.
  • Encourage Knowledge Sharing: Teams should create channels (e.g., Slack) to exchange AI tool tips, effective prompts, and use cases, accelerating collective learning.
  • Automate Repetitive Workflows: Identify and automate well-defined and repetitive tasks (ticketing, documentation, code reviews) using AI agents and integrations.
  • Iterative Experimentation: Regularly test and evaluate different AI tools for coding, code review, and documentation. Share findings and adapt workflows accordingly.
  • Focus on Use-case Fit: Apply AI coding tools to areas where they excel—such as generating first-pass solutions for well-defined tickets—while being cautious in novel, complex, or highly specialized domains.
  • Integrate AI in Existing Pipelines: For organizations with established APIs and modular services, integrating AI agents (via MCP or similar) can accelerate automation and productivity.
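To make the MCP idea above concrete: MCP is built on JSON-RPC 2.0, where a server advertises its tools via a `tools/list` method and executes them via `tools/call`. The sketch below is a heavily simplified, stdlib-only illustration of that request/response shape, not the real MCP SDK; the tool name (`lookup_ticket`) and its stubbed behavior are hypothetical.

```python
import json

# Hypothetical tool registry in the shape an MCP server advertises:
# each tool has a name, a description, and a JSON Schema for its input.
TOOLS = {
    "lookup_ticket": {
        "description": "Fetch a ticket by id from the tracker (stubbed here)",
        "inputSchema": {
            "type": "object",
            "properties": {"id": {"type": "string"}},
        },
    },
}

def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC request the way an MCP server would."""
    if request["method"] == "tools/list":
        result = {"tools": [dict(name=n, **spec) for n, spec in TOOLS.items()]}
    elif request["method"] == "tools/call":
        args = request["params"]["arguments"]
        # A real server would call the ticketing API here; we stub the answer.
        result = {"content": [{"type": "text",
                               "text": f"ticket {args['id']}: open"}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

# An agent first discovers the tools, then invokes one conversationally.
listing = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list",
                  "params": {}})
call = handle({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
               "params": {"name": "lookup_ticket",
                          "arguments": {"id": "42"}}})
print(json.dumps(listing))
print(json.dumps(call))
```

The value of the protocol is exactly this uniformity: once a database, tracker, or internal API speaks these two methods, any MCP-aware agent can discover and drive it without bespoke glue code.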

Specific Details & Examples

  • Adoption Rates: At AI-focused startups like Anthropic and Windsurf, 90–95% of code is written with AI assistance; at Cursor, it's around 40–50%.
  • Big Tech Implementation: Google has integrated LLMs across many internal tools (autocomplete, code review, search), and is preparing for a 10x increase in code throughput. Amazon reports almost universal use of Amazon Q Developer Pro among internal devs, especially for AWS coding.
  • MCP Protocol: Open-sourced by Anthropic, adopted by OpenAI, Google, and Microsoft within months; thousands of servers now in use.
  • Case Study: Incident.io, a startup, found AI agents effective for well-defined tickets and shared practical prompting techniques internally.
  • Adoption Survey: A DX survey of 38,000 developers found about 50% use AI coding tools weekly; in leading companies, this rises to 60%.
  • Veteran Perspectives: Kent Beck (52 years in programming) finds AI tools have made programming more enjoyable and accessible, likening the change to the advent of microprocessors, the internet, and smartphones.

Warnings & Common Mistakes

  • Overreliance on Hype: Don’t assume AI coding tools are universally transformative just because of executive claims; actual utility varies.
  • Blind Trust in AI Output: Reviewing and validating AI-generated code is still crucial, especially for novel or critical codebases, as hallucinations and errors persist.
  • Poor Fit for Novel Domains: AI tools often underperform in highly specialized or cutting-edge areas (e.g., novel biotech pipelines), where human expertise and context are irreplaceable.
  • Neglecting Organizational Fit: AI tools often work better for individuals or small teams than at the org level. Rolling them out organization-wide without a clear strategy can lead to disappointment.
  • Ignoring Feedback Loops: Not iterating or adapting based on AI tool failures or successes can stifle potential productivity gains.
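One lightweight way to act on the "blind trust" warning above is to gate AI-generated code behind a small set of known input/output cases before accepting it. The sketch below is a minimal, hypothetical harness; `ai_generated_slug` stands in for code an LLM produced.

```python
def ai_generated_slug(title: str) -> str:
    """Stand-in for an AI-generated function under review."""
    return title.lower().strip().replace(" ", "-")

# Known-good cases a reviewer writes *before* trusting the output.
CASES = {
    "Hello World": "hello-world",
    "  Trim Me  ": "trim-me",
}

def verify(fn, cases: dict) -> list[str]:
    """Run fn against each case; return human-readable failures (empty = pass)."""
    failures = []
    for given, expected in cases.items():
        got = fn(given)
        if got != expected:
            failures.append(f"{given!r}: expected {expected!r}, got {got!r}")
    return failures

print(verify(ai_generated_slug, CASES))  # → [] when every case passes
```

This is deliberately simple: the point is that verification is now part of the workflow, compensating for the non-determinism of AI output rather than assuming correctness.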

Resources & Next Steps

  • Protocols & Tools: Explore MCP (Model Context Protocol), Amazon Q Developer Pro, Google NotebookLM, and other AI-integrated dev tools.
  • Further Reading: Articles by Armin Ronacher, Peter Steinberger, Simon Willison, and Birgitta Böckeler for practical insights and first-hand experiences.
  • Surveys & Benchmarks: Consult DX’s developer surveys for current adoption benchmarks.
  • Experimentation: Start small—trial AI tools on well-defined tasks, measure impact, and scale successful practices.
  • Community Engagement: Participate in discussions, share experiences, and learn from ongoing experimentation in the developer community.
  • Keep Updated: Follow blogs and talks by leading engineers and AI tool creators for the latest developments and best practices.

Main Topics

  • AI Coding Tools: Hype vs. Reality
  • Adoption Patterns Across Startups, Big Tech, and Independent Engineers
  • Protocols for Integration (MCP) and Automation
  • Case Studies and Real-World Examples
  • Survey Data on AI Tool Usage
  • Limitations and Effective Use Cases
  • Veteran Engineers’ Perspectives on AI’s Impact
  • Open Questions: Productivity, Adoption, and Organizational Challenges
  • Strategies for Experimentation and Integration