YouTube Deep Summary


Cursor vs. Claude Code - Which is the Best AI Coding Agent?

Greg + Code (Greg Baugues) • 11:59 • Published 2025-03-12 • YouTube

🤖 AI-Generated Summary:

Cursor vs. Claude Code: Which AI Coding Agent Reigns Supreme?

The emergence of AI-powered coding agents has been one of the most exciting recent developments for software developers. Within the same week, two major players, Cursor and Anthropic's Claude Code, released coding agents. Intrigued by which tool might better serve developers, I put both to the test on a real-world Rails application running in production. Here's a detailed breakdown of my experience, comparing their user experience, code quality, cost, autonomy, and integration with the software development lifecycle.


The Test Setup: A Real Rails App with Complex Needs

My project is a Rails app that acts as an email wrapper for GPTs: a set of bots, each with its own system message and personality, that receive emails and reply in character (one, for instance, roasts whatever you forward to it). The codebase is moderately complex and had been untouched for nine months, making it perfect for testing AI assistance on three tasks:

  1. Cleaning up test warnings and updating gem dependencies.
  2. Replacing LangChain calls with direct OpenAI API usage (a sketch of this swap follows the list).
  3. Adding support for Anthropic’s API.
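To make the second task concrete, here is a minimal sketch of what a direct OpenAI call looks like in Ruby, assuming the ruby-openai gem. The video doesn't show the app's actual code, so the model, prompt, and variable names here are illustrative:

    # Illustrative sketch: replacing a LangChain abstraction with a direct call.
    # Assumes the ruby-openai gem (gem "ruby-openai") and OPENAI_API_KEY in the environment.
    require "openai"

    client = OpenAI::Client.new(access_token: ENV.fetch("OPENAI_API_KEY"))

    incoming_email_body = "Please review my quarterly report." # stand-in for the parsed email

    response = client.chat(
      parameters: {
        model: "gpt-4o",
        messages: [
          { role: "system", content: "You are an email bot with a distinct personality." },
          { role: "user", content: incoming_email_body }
        ]
      }
    )

    puts response.dig("choices", 0, "message", "content")

The appeal of dropping LangChain here is that the request body maps one-to-one onto OpenAI's documented chat format, leaving one less abstraction for an agent (or a returning human) to reason about.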

Both agents used the same underlying model—Claude 3.7 Sonnet—to keep the comparison fair.


User Experience (UX): Terminal Simplicity vs. IDE Integration

Cursor:
Cursor’s agent is integrated into a fully featured IDE, and the recent 0.46 release made the agent the default way to interact with the LLM. While this offers powerful context and control, I found the interface occasionally clunky: two or three different places to click “accept,” terminal prompts cut off by a cramped agent pane, and spinners that looked like the model was thinking when it was actually waiting for me to click a button. With the agent doing so much of the work, the file editor’s two-thirds of the screen felt unnecessarily large; I rarely needed to tweak files mid-action.

Claude Code:
Claude Code operates as a CLI tool right in the terminal. You run commands from your project root, and it prompts you with simple yes/no questions to confirm each action. This single-pane approach felt clean, intuitive, and perfectly suited for delegating control to the agent. The lack of a GUI was a non-issue given the agent’s autonomy.

Winner: Claude Code for its streamlined, efficient command-line interaction.


Code Quality and Capability: Documentation Search Matters

Both agents produced similar code given the same model, but Cursor’s ability to search the web for documentation gave it a notable edge. When adding Anthropic support, Claude Code kept mimicking the existing OpenAI code, struggled with the anthropic gem’s parameters and syntax, and ultimately gave up and wrote its own HTTP implementation against Anthropic’s API (which worked). Cursor, by contrast, searched the web for the documentation, got the calls right, and rescued itself from dead ends once or twice.
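For context, the hand-rolled fallback the video describes might look roughly like this: a minimal Net::HTTP sketch against Anthropic's documented Messages endpoint, not the actual code Claude Code produced:

    # Illustrative sketch of a raw HTTP call to Anthropic's Messages API,
    # the kind of fallback Claude Code reportedly wrote when the gem fought back.
    require "net/http"
    require "json"

    uri = URI("https://api.anthropic.com/v1/messages")

    request = Net::HTTP::Post.new(uri)
    request["x-api-key"] = ENV.fetch("ANTHROPIC_API_KEY")
    request["anthropic-version"] = "2023-06-01"
    request["content-type"] = "application/json"
    request.body = {
      model: "claude-3-7-sonnet-20250219",
      max_tokens: 1024,
      messages: [{ role: "user", content: "Roast this email for me." }]
    }.to_json

    response = Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) do |http|
      http.request(request)
    end

    puts JSON.parse(response.body).dig("content", 0, "text")

Note how the response shape (a content array of typed blocks) differs from OpenAI's choices array; that asymmetry is exactly the kind of detail a model needs current documentation to get right.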

Winner: Cursor, thanks to its web search integration.


Cost: Subscription vs. Metered Pricing

  • Claude Code: about $8 for roughly 90 minutes of work on these tasks. Reasonable, but metered costs would add up quickly with daily, hours-long use.
  • Cursor: the $20/month subscription includes 500 premium model requests; this exercise used fewer than 50 of them, under a tenth of the allowance, which at an effective $0.04 per request works out to roughly $2.

Winner: Cursor, offering more usage for less money and a simpler subscription pricing model.


Autonomy: Earning Trust with Incremental Permissions

Claude Code shines here with a granular permission model. For each proposed command you get three options: yes; yes, and don’t ask again for this command; or no, do something else. Early on I was hesitant, but after approving the same commands a few times I granted standing permission, and by the end of my session it was acting almost entirely autonomously (I had allowed just about everything except rm).

Cursor, in contrast, lacks this earned-trust model. It repeatedly asks for confirmation, and its only alternative is an all-or-nothing “YOLO mode” that I didn’t trust it enough to enable. Given the nature of coding agents, I believe incremental permissioning is a feature Cursor should adopt soon.

Winner: Claude Code for smarter incremental permissioning.


Integration with Software Development Lifecycle

I emphasize test-driven development (TDD) and version control (Git), so how each agent handled these was crucial.

  • Claude Code: Excellent at a test-first loop: write tests for the feature, build it, make the tests pass, then commit (a sketch of such a test follows this list). Its commit messages were detailed and professional, better than any I’ve written myself. Being a CLI tool, it felt natural coordinating commands and reading their output.

  • Cursor: While it offers a nice Git UI within the IDE and can autogenerate commit messages, those were one-line and generic. Its handling of test output in a small terminal pane also felt awkward.
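As a rough illustration of that test-first loop, here is the kind of test the agent would be asked to write before implementing a feature. The video doesn't show the project's actual tests, so the AnthropicClient class and its complete method are hypothetical, and the sketch assumes Minitest with WebMock for stubbing:

    # test/services/anthropic_client_test.rb (hypothetical)
    require "test_helper"
    require "webmock/minitest"

    class AnthropicClientTest < ActiveSupport::TestCase
      test "returns the text of the model's reply" do
        # Stub the HTTP layer so the test runs without a live API key.
        stub_request(:post, "https://api.anthropic.com/v1/messages")
          .to_return(
            status: 200,
            body: { content: [{ type: "text", text: "Hello!" }] }.to_json,
            headers: { "Content-Type" => "application/json" }
          )

        assert_equal "Hello!", AnthropicClient.new.complete("Say hello")
      end
    end

Written first, a test like this fails until the feature exists, which gives the agent a concrete, checkable target and leaves you a regression guard after it commits.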

Winner: Claude Code, for superior test and version control workflow integration.


Final Verdict: Use Both, But Lean Towards Claude Code—for Now

Both agents completed all three complex tasks successfully—a testament to how far AI coding assistants have come. It’s remarkable to see agents not only write code but also tests and meaningful commit messages that improve project maintainability.

That said, this is not a binary choice. I recommend developers use both tools in tandem:

  • Use Cursor for day-to-day coding within your IDE, benefiting from its subscription model and web documentation search.
  • Use Claude Code for command-line driven tasks that require incremental permissions, superior test integration, and detailed commit management.

For now, I personally prefer Claude Code for its user experience, autonomy model, and lifecycle integration. But Cursor’s rapid iteration pace means it will likely close these gaps soon.


Takeaway for Developers

If you’re a software developer curious about AI coding agents:

  • Get the $20/month Cursor subscription to familiarize yourself with agent-assisted coding.
  • Experiment with Claude Code in your terminal to experience granular control and trust-building autonomy.
  • Use both to balance cost, control, and convenience.
  • Embrace AI coding agents as powerful collaborators that can help you break through stalled projects and increase productivity.

The future of software development is here—and these AI coding agents are just getting started.


Have you tried Cursor or Claude Code? Share your experiences and thoughts in the comments below!


📝 Transcript (313 entries):

Cursor and Anthropic both released coding agents the same week, and I wanted to learn which one's better, so I put them to work on a Rails app that I have running in production. I gave each of them the same three tasks to complete, and this is what I learned along the way.

Let's first talk about the UX. For Cursor, in 0.46 the big change is that they promoted the agent to be the default way of interacting with the LLM, but you're still operating inside of a fully featured IDE, and your interactions with the agent are really the primary way in which you're making changes to the code. I found that I didn't actually need to see the files open; there wasn't really much for me to do, and I wasn't going to tweak the files in the midst of an agent's actions. I also just thought that some elements of the agent design were a little bit clunky. At times there would be two or three different places where I could click accept. At times there would be terminal commands running in the agent pane, and the terminal command needed me to hit yes or no, but the prompt had gone off the right-hand side of the screen because the pane was so small. There were often times when I saw a spinning circle and thought I was waiting for an LLM response, but really it was waiting for me to click a button. And now that the agent has taken prominence and is doing so much of the action for you, I found myself asking, do I really need two-thirds of the screen taken up by the file editor?

By contrast, Claude Code is a CLI. You run the command in the root of your project, in the root of your codebase, and it will examine your project and then ask, hey, what do you want me to do? You tell it what to do, and then it just asks you a series of yes/no questions as it comes up with commands: should I do this, should I not do this? You just have that terminal window; that's all you're seeing, and at no point are you seeing files open up and close. I felt like, since you're abdicating so much control to the agent, that single pane with a single interface was the right way to do this. So when it comes to UX, I preferred Claude Code.

Next, let's talk about code quality, and let me tell you a little bit about the challenges I ran them through. I have a Rails app; you can think of it as an email wrapper for GPTs. For instance, if you want to try it out, you can email roast@haihai.ai: just forward an email to it, and it will roast your email and reply back to you. There's a whole bunch of these different email bots set up, and each one has its own system message and tracks conversations and so on. Now, I hadn't touched this thing for nine months, because there's enough complexity there that it's hard for me to load the context back into my brain, so this felt like a good opportunity to get some momentum on a project that had stalled. The first thing I needed to do was clean up my tests: I was getting warnings from some of my gems, and I just needed to update some gems and dependencies. Then I wanted to replace LangChain with direct calls to the OpenAI API. And finally, I wanted to add support for Anthropic as well.

It's worth noting that for both agents the underlying model I was using was Claude 3.7 Sonnet, so, as expected, a lot of the code was similar or the same on both approaches. I did find, though, that the one advantage Cursor had was the ability to search the web for documentation. Towards the end, when I was adding Anthropic support, it was kind of funny that Claude 3.7 Sonnet was struggling to add support for Anthropic to my Rails app, but whatever. What Claude 3.7 Sonnet wanted to do was mimic the syntax that was already present in the code for OpenAI, so it was having a hard time getting the anthropic gem to work and figuring out the right parameters and the right syntax to call. What Cursor was able to do was search the web, search for the documentation, and find the right answer. What Claude Code ended up doing was sort of giving up and writing its own implementation for the Anthropic API using HTTP, which worked. But the fact that it lacked the ability to search the web and look up documentation is really the only reason I would give the plus to Cursor here; I definitely saw Cursor use that ability to get itself out of a jam once or twice in this exercise.

Next, let's talk about cost. Claude Code can get expensive, though I guess expensive is relative when we're talking about software development. I believe I had about 90 minutes of working with Claude, all in, to implement these three changes to my codebase, and it ended up costing about $8. That's not a lot of money in the grand scheme of software development, but if I were doing this for three or four hours a day, every day, it would certainly add up. I do think there'd be a lot of value there, absolutely, but it is non-trivial. Cursor, on the other hand: I pay my 20 bucks a month, and with that I get 500 premium model requests. Going through these three coding tasks used less than 50 of my 500, so less than a tenth. Let's just naively say it cost me $2 to run this exercise on Cursor and $8 on Claude Code; Claude Code was about four times more expensive. That's super naive, but you can see that Claude Code is non-trivially more expensive. I do think the psychology of metered pricing versus subscription pricing is interesting here, but for most folks Claude Code is not going to be a replacement for Cursor; it's going to be something they use in addition to Cursor. So they're really going to have to ask themselves, even if Claude Code is better, whether it's worth the incremental cost over the subscription, when they're already getting so much use out of the Cursor agent included with the subscription they already have. So purely in terms of cost, the Cursor agent wins: the 20 bucks a month gets you a whole lot more usage, and Claude Code does seem to be about four times more expensive.

Next, let's talk about autonomy. I first did the exercise with Claude Code. Claude Code will propose a change to you, and you have three options: yes, you can run this command; yes, you can run this command and you don't need to ask again for it in the future; or no, I want you to do something else. What I found was that in the beginning I was hesitant, but after it had performed the same command a couple of times, I would finally just say, yes, okay, you can do this and you don't have to ask for permission. By the end of my session working with Claude Code, it was doing almost everything autonomously; it had earned my trust, and I had given it permission to do just about everything except for, like, rm. The Cursor agent, on the other hand, did not have that concept of gaining trust. It would ask, do you want to accept this command or accept this change, or do you want to turn on YOLO mode? And even though I'd already been through this experience with Claude Code, where I'd given it permission to do basically everything, I did not trust the Cursor agent enough to turn on YOLO mode. I hope the Cursor agent does roll out that sort of incremental permissioning, that earned trust; it feels like an easy enough change, and I suspect we'll see it in an update soon. But as of right now, as we all grapple with the question of how much we trust our coding agents and how much we want to let them do on our local machines, I think Claude Code really nailed that model with the earned trust, the incremental permissions.

Finally, let's talk about the whole software development lifecycle. I tried to embrace test-driven development, or at least having some good test coverage, with these agents. Since I'm giving up a lot of control over the code being written, I want to make sure I have a lot of tests, and I felt like Claude Code did a much better job both working with tests and working with version control. My workflow with Claude Code was to ask it to first write tests for the feature it was going to build, then build the feature, then make sure the tests pass, and then commit its changes. And I'll say that the best commit messages that have ever been written for code that I've, well, I guess I didn't really write it, were written by Claude Code. Its commit messages were beautiful, and it seemed to do a much better job of interacting with tests than Cursor did. I think part of this is just that it is a command-line tool; it lives in the terminal, so any time Claude Code was running terminal commands it felt much more natural. Any time the Cursor agent was doing this, it just didn't feel like it fit right. Again, back to some of the UX stuff: I had a small terminal window in that third of a pane on the right-hand side, and it just did not feel like the Cursor agent was as comfortable getting output from my tests and then updating files based on what it was seeing happen in the tests. And while I do like Cursor's Git repository UI, which lets you browse all the past commits and the branches, and I really do like having that baked into my IDE, the place where it fell short was the little button that autogenerates the commit message: it just does a one-liner. Basically, it writes commit messages like I would, whereas Claude Code's were just so detailed; you have to give it points for that. Between its use of tests and its very detailed, very verbose Git commit messages, I feel like Claude Code did a better job than the Cursor agent of making up for some of the concerns I would have about an agent.

Before I crown a winner here, let's step back and acknowledge two things. One: I gave both of these coding agents these three tasks on a project I was stalled on, and both of them completed the job. I sort of can't believe we're here. I did not expect these coding agents to work as well as they did. For the last couple of years I've thought that while LLMs worked really well for coding, it was essential to have a human in the loop orchestrating the changes, and this is one of the first times I've used a coding agent and been truly impressed with the results, felt like it did a better job than I could. Was it perfect? No. Is my codebase as complex as what you might be working on at work? Probably not. But this is a non-trivial codebase, and these things applied changes and wrote tests and wrote commit messages better than I would. Two: I don't want to set up a false dichotomy of "do you use Claude Code or do you use the Cursor agent." The truth is you should probably be using both. Actually, you can just open up Claude Code in a terminal inside Cursor and get the best of both worlds. Honestly, if you're a software developer these days and you have the ability, you should probably just get the $20-a-month Cursor subscription and get familiar with it. And as you use Claude Code, watch your costs, make sure you're compacting your conversation history often (that will help keep your costs down), and just use it and get familiar with both tools. This is not an either/or thing.

All that said, I preferred Claude Code. I did think the UX was better. I loved the way it had incremental permissions, I loved the way it earned my trust, and I thought it did a better job working with version control and with my tests. All that said, the Cursor team iterates and ships so fast that I'm sure they're going to be learning from Claude Code, and you're going to see a lot of these changes and improvements coming to Cursor very soon.