
📝 Greg Baugues Blog

Bringing Claude Code to Life with Custom Sounds: A Fun Introduction to Hooks

When Anthropic announced hooks for Claude Code, it opened a whole new world of possibilities for customizing how Claude interacts with your workflow. Inspired by a playful comment on the Claude AI subreddit—“I’m going to use this to make Claude meow. I’ve always wanted a cat, but I’m allergic”—I decided to dive in and create a unique sound-based notification system for Claude’s various events. The result? Claude meowing, beeping, and even speaking as it runs commands in the background.

Why Use Hooks in Claude Code?

Hooks are a powerful addition that let you run custom commands triggered by different Claude events. These events include:

  • Pre-tool use: Before Claude uses a tool like bash, file editing, or fetching data from the web.
  • Post-tool use: After Claude completes a task.
  • Notification: When Claude is waiting for your approval.
  • Stop: When Claude finishes its task.
  • Pre-compact: Before autocompacting your session history.

By assigning sounds or commands to these events, you get real-time feedback about what Claude is doing, making it easier to understand and control its behavior.

Setting Up Hooks: Where to Start

To set up hooks, you edit your settings.json file within your project. This ensures your hooks are version-controlled and consistent across different work trees. Although you can also configure hooks in user-level settings, keeping them project-specific is a good starting point.

Here’s the gist of the setup:

  • Create a hooks directory inside your .claude folder.
  • Define a list of hooks in your settings.json, specifying the event type and the command to run.
  • Use a single Python script (hook_handler.py) to handle all hook events. This centralizes your logic and simplifies debugging.

Example snippet from settings.json (simplified; check the Claude Code hooks documentation for the exact schema, which nests commands under each event name):

```json
"hooks": [
  { "event": "notification", "command": ["python3", "/path/to/hook_handler.py"] },
  { "event": "stop", "command": ["python3", "/path/to/hook_handler.py"] },
  { "event": "pre-tool-use", "command": ["python3", "/path/to/hook_handler.py"] }
]
```

The Magic Behind the Scenes: The Python Hook Handler

The hook_handler.py script receives JSON data from Claude via standard input whenever an event is triggered. This data includes session info, event name, tool name, and tool input, which you can log and analyze to determine what action to take.

I used Claude itself to generate and iteratively refine the Python script, making it as readable and maintainable as possible. The script’s responsibilities include:

  • Logging incoming event data for debugging.
  • Mapping specific events and commands to corresponding sounds or voice notifications.
  • Playing sounds using predefined audio files stored in categorized directories (beeps, voices, meows), as sketched below.
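Here’s a minimal sketch of what such a handler might look like. The payload field names (hook_event_name, tool_name, tool_input), the event name strings, and the sound directory layout are assumptions based on the hooks documentation at the time of writing, so check your own log output before relying on them; afplay is macOS-specific.

```python
#!/usr/bin/env python3
"""Minimal sketch of a single hook handler for Claude Code sound notifications."""
import json
import random
import subprocess
import sys
from pathlib import Path

SOUND_DIR = Path(__file__).parent / "sounds"      # e.g. .claude/hooks/sounds/{beeps,voices,meows}
LOG_FILE = Path(__file__).parent / "hook_events.log"


def play(category: str) -> None:
    """Play a random audio file from the given category directory (macOS afplay)."""
    files = list((SOUND_DIR / category).glob("*.mp3"))
    if files:
        subprocess.Popen(["afplay", str(random.choice(files))])


def main() -> None:
    event = json.load(sys.stdin)                   # Claude pipes the event to the hook as JSON on stdin
    with LOG_FILE.open("a") as log:                # log everything so you can see what each event contains
        log.write(json.dumps(event) + "\n")

    name = event.get("hook_event_name", "")
    if name == "Notification":
        play("beeps")                              # Claude is waiting for your approval
    elif name == "Stop":
        play("meows")                              # Claude finished its task
    elif name == "PreToolUse":
        play("voices")                             # Claude is about to run a tool


if __name__ == "__main__":
    main()
```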

Curating the Sounds: From Beeps to Meows

To keep things fun and engaging, I sourced sounds from Epidemic Sound, a platform I often use for my YouTube videos. They offer a wide range of sound effects, including beeps, cat meows, and even voice actors providing customizable voice clips.

Some tips for working with Epidemic Sound:

  • Sounds often include multiple effects in one file, so use their segment tool to extract just the part you want.
  • You can assign specific sounds to particular actions, for example:
      • A "committing" voice clip when Claude commits code.
      • Different beeps for bash commands, file edits, or pull request creation.
      • Sad meows or playful cat sounds for other events.

This mapping helped me instantly recognize what Claude was doing just by listening.

Lessons Learned and Practical Uses for Hooks

Building this sound notification system was more than just a fun experiment; it was a fantastic way to understand Claude’s inner workings and the power of hooks. Here are some insights and practical applications:

  • Understanding Claude Code behavior: Assigning sounds to events revealed how often Claude updates to-dos or runs bash commands.
  • Custom safeguards: You can create hooks that prevent dangerous commands like rm -rf from executing accidentally (see the sketch after this list).
  • Automation enforcement: Hooks can ensure tests run before opening pull requests or run linters automatically.
  • Better notifications: Replace or supplement default notifications with customized alerts that better fit your workflow.
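To make the “custom safeguards” idea concrete, here’s a minimal sketch of a pre-tool-use hook that refuses destructive shell commands. It assumes the payload fields shown earlier and that, per the hooks documentation at the time of writing, a hook exiting with status 2 blocks the tool call and surfaces its stderr back to Claude; verify that behavior against the current docs.

```python
#!/usr/bin/env python3
"""Sketch of a pre-tool-use hook that blocks obviously destructive Bash commands."""
import json
import sys

DANGEROUS_PATTERNS = ["rm -rf /", "rm -rf ~", "git push --force"]

event = json.load(sys.stdin)
command = event.get("tool_input", {}).get("command", "")

if event.get("tool_name") == "Bash" and any(p in command for p in DANGEROUS_PATTERNS):
    # Exit status 2 is treated as "block this tool call"; stderr explains why.
    print(f"Blocked potentially destructive command: {command}", file=sys.stderr)
    sys.exit(2)

sys.exit(0)  # everything else passes through untouched
```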

Getting Started Yourself

If you want to explore this yourself, check out the full code repository at hihigh.ai/hooks. The Python script and example sounds are all there for you to experiment with and customize.

I’d love to hear how you’re using hooks in Claude Code—whether for fun like me or to build more pragmatic workflows. Hooks unlock a new layer of control and creativity, and I hope this post inspires you to dive in!


Summary

  • Hooks allow running custom commands on Claude Code events.
  • Setting hooks in settings.json keeps them project-specific and version-controlled.
  • A Python script can handle multiple hook events for better maintainability.
  • Assigning sounds to events helps understand Claude’s behaviors.
  • Hooks can be used for both fun notifications and practical workflow controls.
  • Check out the full project and share your hook ideas!

Happy coding—and meowing—with Claude Code! 🐱🎶

📹 Video Information:

Title: I used HOOKS to make CLAUDE CODE Meow, Beep, and Talk
Duration: 14:28

How Custom Sounds Help You Understand Claude Code Hooks (and Why You Should Try It!)

When Anthropic announced hooks for Claude Code, one Reddit user joked, “I’m going to use this to make Claude meow. I’ve always wanted a cat, but I’m allergic.” That comment sparked an unexpectedly powerful way to get started with hooks: adding custom sounds to different Claude events. Not only is it fun and whimsical, but it’s also an educational deep-dive into how Claude Code operates under the hood.

Let’s walk through how this sound-based project works, why it’s useful, and how it can inspire more advanced, pragmatic uses of hooks.


Understanding the Power of Claude Code Hooks

Claude Code hooks let you assign custom commands to different events in the coding workflow. These events include things like:

  • Pre-tool use: Before Claude uses a tool (often Bash commands, editing files, reading files, web requests, etc.)
  • Post-tool use: After Claude finishes using a tool.
  • Notifications: When Claude is waiting for user input or approval.
  • Stop: When the current task is completed.
  • Pre-compact: Before Claude auto-compacts session history.

By assigning sounds to each event, you get immediate, sensory feedback as these events are triggered. This helps you see (or hear!) what’s happening in the background, making hooks less abstract and more tangible.


Setting Up Your Project: Where to Configure Hooks

To get started, you’ll edit your settings.json—specifically, the one in your project directory (not your user or local settings). This ensures your hook configuration is committed to your repository and applies across all work trees.

Within your project, create a hooks directory to store all the scripts and sound files. If you eventually want these hooks to work across all projects, you can migrate them to your user settings, but localizing them per project is best for experimentation.


Defining Hooks in settings.json

In your settings.json, hooks are defined as a list, where each hook specifies:

  • Type of event (e.g., pre-tool use, stop, notification)
  • Command to run (in this case, a Python script)

For simplicity and maintainability, it’s best to keep the JSON configuration minimal and put most of the logic inside your Python script. This allows for easier debugging and flexibility.


Building the Python Hook Handler

The Python script acts as the core logic center. Here’s how to approach it:

  1. Log Incoming Data: Whenever Claude triggers a hook, JSON data is piped into your script via standard input. This data contains session information, the event name, the tool being used, and any tool-specific input. Logging this is crucial for understanding what’s happening and for debugging.

  2. Map Events to Sounds: Create directories for different types of sounds (beeps, meows, voices, etc.). You can use sound effect services like Epidemic Sound to download fun beeps, cat meows, or even AI-generated voice snippets.

  3. Assign Sounds to Actions: Either assign random sounds or, more effectively, map specific sounds to specific events or Bash commands. For example, use a “meow” for file edits, a “beep” for notifications, or a British-accented “committing” for git actions.

  4. Optional Patterns: Fine-tune the sound mapping for more granular feedback. For example, distinguish between editing files, reading files, or running specific CLI commands by matching against the command name, as sketched below.
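As a rough illustration of steps 3 and 4, the mapping can be as simple as a function that inspects the tool name and, for Bash, the command text. The tool names and sound categories here are illustrative assumptions, not canonical values.

```python
# Sketch: choose a sound category from the tool name and (for Bash) the command text.
def pick_category(tool_name: str, tool_input: dict) -> str:
    command = tool_input.get("command", "")
    if tool_name in ("Edit", "Write"):
        return "meows"                         # file edits get a meow
    if tool_name == "Bash":
        if "git commit" in command:
            return "voices/committing"         # the British-accented "committing" clip
        if "gh pr create" in command:
            return "voices/pull_request"       # pull request creation
        return "beeps"                         # generic beep for other shell commands
    return "beeps"


# Usage inside the handler's main():
#   category = pick_category(event.get("tool_name", ""), event.get("tool_input", {}))
#   play(category)
```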


Why Start with Sounds?

Assigning sounds to hooks isn’t just playful—it’s surprisingly educational. You’ll quickly discover:

  • Which events are triggered most often (e.g., how many actions are actually Bash commands)
  • How Claude interacts with your files and tools
  • Opportunities for more advanced hook logic (like intercepting dangerous commands or ensuring tests run before a pull request)

Making abstract processes audible helps demystify Claude’s inner workings and gives you confidence to try more serious customizations.


Beyond Sounds: Unlocking the Full Potential of Hooks

Once you’re comfortable, hooks enable all sorts of productivity and safety improvements:

  • Preventing dangerous commands: Block risky Bash operations like rm -rf before they execute.
  • Running Linters: Automatically trigger code quality checks after edits.
  • Enforcing Test Runs: Ensure tests pass before allowing pull requests.
  • Custom Notifications: Replace unreliable system beeps with tailored sounds or even spoken messages.

Hooks give you deterministic control over Claude Code’s behavior—making your coding environment smarter, safer, and more responsive.
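As one more sketch in this direction, a post-tool-use hook can run a linter after Claude edits a file and feed any failures straight back. The field names and the exit-code convention are the same assumptions as in the earlier sketches; ruff is just an example linter.

```python
#!/usr/bin/env python3
"""Sketch of a post-tool-use hook that lints a file Claude just edited."""
import json
import subprocess
import sys

event = json.load(sys.stdin)

if event.get("tool_name") in ("Edit", "Write"):
    path = event.get("tool_input", {}).get("file_path", "")
    if path.endswith(".py"):
        # Lint only the edited file; report problems on stderr so Claude sees them.
        result = subprocess.run(["ruff", "check", path], capture_output=True, text=True)
        if result.returncode != 0:
            print(result.stdout, file=sys.stderr)
            sys.exit(2)   # non-zero exit surfaces the lint output back to Claude

sys.exit(0)
```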


Ready to Try? Resources and Next Steps

You can find all the code and examples from this project at hihigh.ai/hooks.

Whether you want to make Claude meow, beep, or speak in a proper British accent, starting with custom sounds is a delightful way to understand hooks. Once you grasp the basics, you’ll be well-equipped to use hooks for more complex, pragmatic workflows.

What creative uses have you found for Claude Code hooks? Share your ideas and let’s build smarter tools together!

Unlocking New Superpowers: AI-Assisted Coding Workflow with Claude Code and GitHub

In recent weeks, I’ve been experimenting with a powerful AI-assisted coding workflow using Claude Code and GitHub to build a new web application. This workflow has truly unlocked new superpowers for me as a developer, streamlining how I plan, create, test, and deploy software. In this blog post, I’ll walk you through the workflow, explain why it’s effective, and share practical tips on how you can implement it in your own projects.


The High-Level Workflow: Plan, Create, Test, Deploy

The workflow is elegantly simple yet powerful, revolving around the four classic phases of the software development life cycle:

  1. Plan: Create and refine GitHub issues to clearly define atomic, manageable tasks.
  2. Create: Use Claude Code’s custom slash commands to generate code that addresses the issues.
  3. Test: Run automated tests, including UI tests powered by Puppeteer, to ensure quality.
  4. Deploy: Commit and push changes to GitHub, open pull requests (PRs), and merge after review to deploy via platforms like Render.

By leveraging GitHub flow — a tried-and-true workflow designed for small teams — and integrating AI-powered coding assistants, this process makes it feasible for a “team” of one human and one AI to build complex applications efficiently.


Creating and Refining GitHub Issues: Your Project’s Backbone

The first step is to capture all work as GitHub issues. I started by dictating initial requirements and then worked with Claude Code to translate them into issues. However, I quickly learned the importance of granularity and specificity in these issues. The more atomic and well-defined the issues, the better Claude Code could handle them.

This phase reminded me of my managerial days, as I found myself writing detailed specs, reviewing code, and leaving feedback for improvements — essentially playing the role of an engineering manager. This approach ensures that the AI-generated code is aligned with your vision and standards.


Setting Up a Solid Foundation: Testing and Continuous Integration

Before diving into rapid development, it’s crucial to establish:

  • A robust test suite to verify that new changes don’t break existing functionality.
  • Continuous Integration (CI) using GitHub Actions to run tests and linters automatically on every commit.
  • Puppeteer integration to simulate user interactions and test UI changes in a real browser environment.

Using frameworks like Rails (with its MVC architecture and integrated testing) makes it easier for AI coding agents to work on modular code sections rather than sprawling, monolithic files.


Custom Slash Commands: Automating the Plan-Create-Test-Deploy Cycle

Claude Code slash commands are prompt templates with command-line arguments that instruct the AI on how to handle each issue. My main /process-issue command breaks down into:

  • Plan: The AI reviews the GitHub issue, searches previous related work and pull requests, and creates a detailed plan with atomic tasks using “scratchpads” (dedicated planning files).
  • Create: The AI writes code addressing the plan.
  • Test: The AI runs tests to verify its work.
  • Deploy: The AI commits changes, opens a PR, and optionally requests or performs code reviews.

This structured approach ensures clarity and accountability throughout the development cycle.


The Human-AI Partnership: Code Review and Responsibility

One common concern about AI-assisted coding is trust — how do you know what the AI wrote is correct? The answer remains the same as with any developer: you must review the code.

I’ve found it helpful to:

  • Read through pull requests carefully.
  • Optionally have Claude Code perform a PR review using a separate slash command, emulating expert styles like Sandi Metz’s principles for maintainable code.
  • Rely heavily on tests to catch regressions and unexpected issues.

While I sometimes let Claude commit code directly, I make sure tests pass and the changes look good before merging.


Managing Context: The Importance of /clear

After completing and merging an issue, I always run /clear in Claude Code to wipe the AI’s context window. This forces Claude to start fresh on the next issue, relying solely on the issue description, scratchpads, and repository history — no leftover “working memory.”

This practice helps:

  • Maintain focus on the current issue.
  • Reduce token usage.
  • Improve AI performance and accuracy.

Using Claude in GitHub Actions vs. the Claude Code Console

Anthropic recently launched Claude integration via GitHub Actions, allowing you to tag Claude directly on GitHub. While this is convenient for small tweaks and copy changes, I prefer using Claude Code in the console for more significant development work because:

  • GitHub Actions usage incurs metered billing, even on premium plans.
  • The console provides better insight and control.
  • For large code changes, the console-based approach is more efficient and manageable.

Running Parallel Agents with Work Trees

Work trees let you run multiple instances of Claude on different branches simultaneously, similar to multitabling poker. However, I encountered some challenges:

  • Permission approvals need to be repeated for each new Claude session.
  • Managing multiple work trees can feel clunky and increase babysitting overhead.
  • For my project, sequential work on a single instance sufficed.

Still, as projects grow or teams scale, work trees offer a way to increase parallelism in AI-assisted development.


Final Thoughts

This AI-assisted workflow combining Claude Code, GitHub, and Puppeteer has revolutionized how I build software. It marries the power of classic software development principles with cutting-edge AI coding assistance to create a cycle of continuous, manageable progress.

If you want to get started, focus on:

  • Writing clear, atomic GitHub issues.
  • Setting up a solid test suite and continuous integration.
  • Creating custom slash commands to automate planning, coding, testing, and deployment.
  • Embracing your role as the reviewer and planner to guide the AI effectively.

For more insights, I recommend checking out my related video on Claude Code pro tips and reading Thomas Ptacek’s excellent post on AI-assisted coding skepticism.


Harness the power of AI in your development process — it might just unlock new superpowers for you too!

Mastering Claude Code: Pro Tips from a Developer’s Experience

Hey there! I’m Greg, a developer who’s recently made Claude Code my go-to tool for writing code. Over the past few months, I’ve gathered some valuable insights and pro tips that can help you get the most out of Claude Code. These tips are mainly based on a fantastic post by Boris Cherny, the creator of Claude Code at Anthropic. Let’s dive into some of the best practices and features that will supercharge your coding workflow with Claude Code.


What is Claude Code?

Claude Code is a powerful command-line interface (CLI) tool that leverages AI to assist you in coding. It’s flexible, integrates well with other tools, and supports advanced workflows, making it a game-changer for developers.


Claude Code Pro Tips

1. Leveraging the CLI Nature of Claude Code

Claude Code operates like any other bash-based CLI, which means:

  • You can pass command-line arguments to customize behavior on startup.
  • Run it in headless mode using the -p flag.
  • Chain Claude Code with other command-line tools and pipe data in and out.
  • Run multiple instances simultaneously — even having Claude Code launch sub-agents for complex or parallel tasks.

This flexibility lets you build sophisticated automation and multi-agent workflows easily.
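As a quick sketch of that composability: you can call Claude Code from a script in headless mode and pipe data into it. This assumes the claude CLI is on your PATH and that the -p flag runs a single non-interactive prompt, as described above.

```python
# Sketch: drive Claude Code headlessly from Python, piping a git diff in.
import subprocess

diff = subprocess.run(["git", "diff", "--staged"], capture_output=True, text=True).stdout

result = subprocess.run(
    ["claude", "-p", "Write a one-paragraph summary of this diff for a commit message."],
    input=diff,              # equivalent to: git diff --staged | claude -p "..."
    capture_output=True,
    text=True,
)
print(result.stdout)
```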


2. Using Images in Claude Code

Images are a powerful input method in Claude Code:

  • On macOS, you can drag and drop an image directly into the terminal.
  • Alternatively, use Shift + Command + Control + 4 to take a screenshot and then paste it with Control + V (not Command + V).

Two main use cases for images:
- Mockups: Paste UI mockups and have Claude Code build the interface based on the design.
- Feedback loop: Take screenshots of what Claude Code generates, feed them back in, and ask for iterations. This manual feedback cycle can be automated using Puppeteer MCP servers to capture screenshots programmatically.


3. MCP Servers and Clients

Claude Code supports MCP (Model Context Protocol) servers and clients, allowing it to:

  • Act as an MCP server for other agents.
  • Connect to various MCP servers, such as:
  • Postgres server to interact directly with your database.
  • API wrappers from dev tool companies like Cloudflare, providing up-to-date documentation.

You can also fetch URLs for real-time knowledge. For example, I built a game for my daughter by feeding Claude Code the official Uno rules from a website, ensuring accurate gameplay logic rather than relying on generic training data.


4. The Power of claude.md

The claude.md file is a prompt loaded with every request to Claude Code and can include:

  • Project instructions.
  • Common bash commands.
  • Style and linting guidelines.
  • Testing instructions.
  • Repository etiquette.

Create it easily with the /init command, which scans your directory and summarizes its structure. You can also maintain:

  • A global claude.md in your home directory for universal instructions.
  • Subdirectory-specific claude.md files for finer control.

Keep these files concise and specific for better performance. Use Anthropic’s prompt optimizer tool to refine your prompts.


5. Slash Commands

Slash commands are customizable prompt templates stored in the .claude/commands folder. Examples include commands for:

  • Refactoring code.
  • Running lint checks.
  • Reviewing pull requests.

You can pass command-line arguments to these templates for dynamic and reusable workflows.


6. UI Tips for Efficient Interactions

  • Use Tab to autocomplete files and directories.
  • Be specific about files and directories to improve results.
  • Don’t hesitate to hit Escape to stop Claude Code if it goes off-track.
  • Double-press Escape to jump back in the conversation history, edit a previous prompt, and revert later turns.

7. Version Control Integration

One of the biggest pitfalls is letting Claude Code make breaking changes without proper tracking. To avoid headaches:

  • Use version control diligently.
  • Have Claude Code commit after every major change with well-written commit messages (likely better than most commit messages you’ve seen).
  • Revert changes often and clear conversation history to keep things clean.
  • Install the GitHub CLI for seamless GitHub interactions, or use the GitHub MCP server as an alternative.
  • Claude Code can file PRs and perform code reviews for you.

8. Managing Context and Cost

Managing context windows is crucial because Claude Code auto-compacts conversations to stay within token limits:

  • Monitor the auto-compact indicator carefully.
  • Compact conversations at natural breakpoints (e.g., after commits).
  • Consider clearing conversations periodically to start fresh.
  • Use scratchpads or GitHub issues to plan and organize work externally.

If you pay per token, monitoring usage is essential. For teams, you can track costs robustly using Claude Code’s OpenTelemetry support, integrating with tools like Datadog for dashboards.


9. Upgrading and Pricing

Claude Code can be expensive on metered API billing, but upgrading to a Claude Max plan ($100 or $200 per month) significantly improves value. I personally spent around $150 over three days of usage, which made the $100-per-month plan feel reasonable for the productivity gains.


Final Thoughts

Claude Code is a versatile and powerful tool that can revolutionize how you write and manage code. The tips above only scratch the surface of what’s possible. For a deeper dive, check out Boris Cherny’s original post and other linked resources.

Happy coding with Claude Code!


Resources:

  • Boris Cherny’s Post on Claude Code Pro Tips
  • Anthropic Prompt Optimizer Tool
  • Martin AMP’s Blog on Open Telemetry with Claude Code
  • Puppeteer MCP Server Setup Guide

Feel free to share your own tips or questions in the comments!

How to Build a Real-Time Web Research Bot Using Anthropic’s New Web Search API

In the rapidly evolving world of AI, staying updated with the latest information is crucial. Traditional AI models often rely on static training data, which can be months or even years old. That’s why Anthropic’s recent announcement of web search capabilities via the Claude API is a game changer. It enables developers to build AI-powered research bots that fetch real-time data from the web — without relying on external scraping tools or additional servers.

In this blog post, we’ll explore how to leverage Anthropic’s new web search API to build a real-time research assistant in Python. We’ll cover a demo example, key implementation details, useful options, and pricing considerations.


What’s New with Anthropic’s Web Search API?

Before this update, building a research bot that accessed up-to-date information meant integrating third-party tools or managing your own scraping infrastructure. Anthropic’s web search API simplifies this by allowing direct querying of the web through the Claude API, keeping everything streamlined in one place.

For example, imagine wanting to know about a breaking news event that happened just hours ago — such as the recent selection of the first American Pope (recorded May 8th, 2025). Since this information isn’t part of the AI’s training data, it needs to perform a live web search to generate an accurate and current report.


Building Your First Web Search Request: A “Hello World” Example

Getting started is straightforward. Here’s an overview of the basic steps using the Anthropic Python client:

  1. Set your Anthropic API Key as an environment variable. This allows the client to authenticate requests seamlessly.

  2. Instantiate the Anthropic client in Python.

  3. Send a message to the model with your question. For example, “Who is the new pope?”

  4. Add the tools parameter with web_search enabled. This tells the model to access live web data.

Here’s a snippet summarizing this:

```python
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-7-sonnet-latest",
    max_tokens=1000,
    messages=[{"role": "user", "content": "Who is the new pope?"}],
    # Enable the server-side web search tool
    tools=[{"type": "web_search_20250305", "name": "web_search", "max_uses": 5}],
)

# The response content is a list of blocks; print the text ones.
print("".join(block.text for block in response.content if block.type == "text"))
```

Without web search enabled, the model might respond with outdated information (e.g., “As of the last update, it was Pope Francis”). But with web search active, it fetches the latest details, complete with citations from recent news sources.


Understanding the Web Search Response

The response from the API when using web search is richer and more complex than standard completions. It includes:

  • Initial text from the model indicating it is performing a search.
  • Tool usage details showing search queries and pages found.
  • Encrypted content blocks representing scraped snippets (Anthropic encrypts these to avoid direct scraping exposure).
  • Summarized text with citations — a distilled answer referencing URLs, page titles, and quoted text snippets.

Parsing this response can be a bit challenging. The Python client lets you convert the response to a dictionary or JSON format for easier inspection.

For example, you can iterate over the response’s message blocks, extract the main text, and gather citations like URLs and titles. This lets you assemble a report with clickable sources, ideal for building research assistants or automated reporting tools.
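Here’s a rough sketch of that parsing, continuing from the response object in the earlier snippet. The field names (citations, url, title, cited_text) follow Anthropic’s web search documentation at the time of writing; inspect response.model_dump() if they differ in your SDK version.

```python
# Sketch: assemble a report plus a source list from a web-search response.
report_parts = []
sources = []

for block in response.content:
    if block.type == "text":
        report_parts.append(block.text)
        for citation in (getattr(block, "citations", None) or []):
            sources.append((citation.title, citation.url))

print("".join(report_parts))
print("\nSources:")
for title, url in sources:
    print(f"- {title}: {url}")
```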


Improving Performance with Streaming

Waiting 20 seconds for a full response might be too slow for some applications. Anthropic supports streaming responses through an asynchronous client.

Using the async client, you can receive partial results as they become available and display them in real-time, improving user experience in chatbots or interactive assistants.
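A minimal streaming sketch, assuming the AsyncAnthropic client and the messages.stream helper from the Anthropic Python SDK, plus the same web search tool definition as above:

```python
# Sketch: print partial text as it streams in, instead of waiting for the full answer.
import asyncio
from anthropic import AsyncAnthropic


async def main() -> None:
    client = AsyncAnthropic()  # reads ANTHROPIC_API_KEY from the environment
    async with client.messages.stream(
        model="claude-3-7-sonnet-latest",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Who is the new pope?"}],
        tools=[{"type": "web_search_20250305", "name": "web_search", "max_uses": 3}],
    ) as stream:
        async for text in stream.text_stream:
            print(text, end="", flush=True)   # display results as they arrive


asyncio.run(main())
```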


Customizing Search Domains: Allowed and Blocked Domains

Anthropic’s API offers parameters to restrict searches to certain domains (allowed_domains) or exclude others (blocked_domains). For example, if you only want information from Reuters, you can specify that in your request:

```python
tools=[{
    "type": "web_search_20250305",   # tool type string per the docs at the time of writing
    "name": "web_search",
    "allowed_domains": ["reuters.com"],
}]
```

However, note that some domains are off-limits due to scraping restrictions (e.g., BBC.co.uk, Reddit). Trying to search those will result in an error.

You can use either allowed_domains or blocked_domains in a single request, but not both simultaneously.


Pricing Overview: How Much Does It Cost?

Anthropic’s web search API pricing stands out as very competitive:

  • $10 per 1,000 searches plus token usage for the normal API calls.
  • Compared to OpenAI’s web search pricing of roughly $30 to $50 per 1,000 calls, Anthropic’s is more affordable.

The pricing difference might be due to different search context sizes or optimizations, but it makes Anthropic a cost-effective choice for integrating live web data.


Wrapping Up

Anthropic’s new web search API opens exciting possibilities for developers building AI applications that require fresh, real-time data from the web. With simple integration, customizable domain filters, streaming support, and competitive pricing, it’s a compelling option for research bots, news aggregators, and knowledge assistants.

If you want to try this out yourself, check out the Anthropic Python client, set your API key, and start experimenting with live web queries today!


Author: Greg
Recorded May 8th, 2025

Feel free to leave comments or questions below if you want help building your own web research bot!

How to Build a Remote MCP Server with Python and FastMCP: A Step-by-Step Guide

In the rapidly evolving world of AI and large language models (LLMs), new tools and integrations continue to push the boundaries of what’s possible. One of the most exciting recent developments is Anthropic’s announcement of remote MCP (Model Context Protocol) server support within Claude, an AI assistant platform. This breakthrough means that users can now connect to MCP servers simply by providing a URL—no complex local setup or developer-level skills required.

In this blog post, we’ll explore what remote MCP servers are, why they matter, and how you can build your own using Python and the FastMCP framework. Whether you’re a developer or an AI enthusiast, this guide will give you the knowledge and tools to create powerful, accessible AI integrations.


What is an MCP Server?

An MCP server serves as a bridge between large language models and external tools or data sources. This enables LLMs like Claude to perform tasks or fetch real-time information beyond their static knowledge base. Traditionally, MCP servers were local setups requiring developer expertise to configure and maintain, limiting their accessibility.


Why Remote MCP Servers are a Game-Changer

Remote MCP servers allow users to connect to MCP servers hosted anywhere via a simple URL. This innovation dramatically lowers the barrier to entry, making it easier for less technical users to enhance their AI assistants with custom tools. For example, a remote MCP server can provide up-to-the-minute data like the current time, weather, or stock prices—capabilities that standard LLMs often lack due to knowledge cutoffs.

Claude’s new integration support means you can now:

  • Add MCP servers by entering a URL in Claude’s settings.
  • Automatically discover available tools and their parameters.
  • Seamlessly invoke those tools during conversations with Claude.

This marks a significant step toward more interactive, capable AI assistants.


Demo: Adding a Current Time Tool to Claude

To illustrate the power of remote MCP servers, here’s a quick example:

  1. Problem: Claude cannot provide the current time because its knowledge is frozen at a cutoff date.
  2. Solution: Create a remote MCP server that returns the current date and time.
  3. Integration: Add the MCP server URL to Claude’s settings under “Integrations.”
  4. Usage: Ask Claude, “What is the current time?” Claude recognizes it has access to the time tool, invokes it with the correct parameters (like time zone), and returns an accurate, up-to-date answer.

This simple enhancement vastly improves Claude’s utility for real-world tasks.


Building Your Own Remote MCP Server in Python with FastMCP

Step 1: Set Up Your Environment

Begin by visiting gofastmcp.com or the FastMCP GitHub repository for documentation and code examples.

Install FastMCP via pip:

```bash
pip install fastmcp
```

Step 2: Create the MCP Server

Here’s a basic MCP server script that provides the current date and time:

```python
from datetime import datetime

import pytz
from fastmcp import FastMCP  # FastMCP is the server class exported by the fastmcp package

server = FastMCP(name="DateTime Server", instructions="Provides the current date and time.")


@server.tool(name="current_datetime", description="Returns the current date and time given a timezone.")
def current_datetime(time_zone: str = "UTC") -> str:
    try:
        tz = pytz.timezone(time_zone)
        now = datetime.now(tz)
        return now.strftime("%Y-%m-%d %H:%M:%S %Z")
    except Exception as e:
        return f"Error: {str(e)}"


if __name__ == "__main__":
    server.run()  # defaults to the local stdio transport
```

Step 3: Run Locally and Test with MCP Inspector

FastMCP offers an MCP Inspector tool for debugging and testing your server:

```bash
npx fastmcp inspector my_server.py
```

This GUI lets you invoke your tools and view responses directly, providing a deterministic way to debug interactions with your MCP server.

Step 4: Deploy as a Remote MCP Server

To make your MCP server accessible remotely, you need to use a transport protocol suitable for networking.

  • The default stdin/stdout transport works locally.
  • For remote access, use Server-Sent Events (SSE) transport or, soon, the more efficient streamable HTTP transport (currently in development).

Modify your server code to use SSE transport and deploy it on a cloud platform such as Render.com. Assign a custom domain (e.g., datetime.yourdomain.ai) pointing to your deployment.
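A minimal sketch of that change, assuming FastMCP’s run() accepts a transport name plus host and port (check the FastMCP docs for the exact signature in your installed version):

```python
# Sketch: expose the same server over SSE so Claude can reach it remotely.
if __name__ == "__main__":
    server.run(transport="sse", host="0.0.0.0", port=8000)
```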

Once deployed, add your server URL in Claude’s integrations, and it will be ready for use remotely.


The Future of MCP Servers

The adoption of remote MCP servers is poised to explode, as they become far easier to create and integrate with AI assistants like Claude. This will likely spur more companies to launch their own MCP servers, offering a diverse ecosystem of tools accessible via URLs.

For developers, this is an exciting time to dive into MCP and FastMCP development. Even those with limited coding experience can now build meaningful AI enhancements quickly.


Final Thoughts

  • MCP servers empower AI models to access real-time data and perform specialized tasks.
  • Remote MCP servers eliminate the technical hurdles of local setup.
  • FastMCP and Python provide a straightforward path to building your own MCP servers.
  • Claude’s new integrations make adding MCP servers as simple as entering a URL.
  • The future will see more widespread adoption and innovation in MCP technology.

If you want to stay ahead in AI tooling, start experimenting with FastMCP today. Build your own remote MCP server, connect it to Claude, and unlock new capabilities for your AI assistant.

Happy coding!


Have questions or want to share your MCP server projects? Drop a comment below or connect with me on Twitter!

Why Claude Code Outshines OpenAI’s Codex: A Developer’s Perspective

Hey there! I’m Greg, a developer who has spent hundreds of dollars experimenting with AI-powered coding assistants over the past few months. Lately, I’ve made Claude Code my go-to coding agent, especially when starting new projects or navigating large, older codebases. With the recent launch of OpenAI’s Codex, I was eager to give it a shot and pit it against Claude Code in a head-to-head comparison. Spoiler alert: Codex fell short in several key areas.

In this blog post, I’ll share my firsthand experience with both tools, highlight what OpenAI needs to improve in Codex, and explain why developer experience is crucial for AI coding assistants to truly shine.


First Impressions Matter: The Developer Experience

Right from the start, Codex’s developer experience felt frustrating. Although I have a Tier 5 OpenAI account—which is supposed to grant access to the latest GPT-4 models—Codex informed me that GPT-4 was unavailable. Instead of gracefully falling back to a supported model, the system simply failed when I tried to use GPT-4 Mini.

To make matters worse, the interface for switching models was confusing. I had to use a /help command to discover a /model command with a list of options ranging from GPT-3.5 to Babbage and even DALL·E (an image generation model that doesn’t belong here). Most of these options didn’t work with the product, so I was left guessing which model to pick. This was a baffling experience—why show options that don’t actually work? It felt like a basic user experience bug that should have been caught during testing.

For developers, the first interaction with a tool should be smooth and intuitive—no guesswork, no dead ends. Sadly, Codex made me jump through unnecessary hoops just to get started.


API Key Management: A Security and Usability Concern

Claude Code shines in how it manages API keys. It securely authenticates you via OAuth, then automatically stores your API key in a local config file. This seamless process means you can focus on coding without worrying about environment variables or security risks.

Codex, on the other hand, expects you to manually set your OpenAI API key as a global environment variable or in a .env file. This approach has several drawbacks:

  • Security Risk: Having a global API key in your environment exposes it to any local script or app, increasing the chances of accidental leaks.
  • Lack of Separation: You can’t easily dedicate a separate API key for Codex usage, which complicates cost tracking and project management.
  • Inconvenience: Managing environment variables across multiple projects can become tedious.

Claude Code’s approach is more secure, user-friendly, and better suited for developers juggling multiple projects.


Cost Management: Transparency and Control Matter

AI coding assistants can get expensive, and managing usage costs is critical. Claude Code offers helpful features to keep your spending in check:

  • /cost Command: View your session’s spend anytime.
  • /compact Command: Summarize and compress chat history to reduce token usage and lower costs.

Codex lacks these features entirely. There is no way to check how much you’ve spent during a session or to compact conversation history to reduce billing. This opacity can lead to unpleasant surprises on your bill and makes cost management stressful.


Project Context Awareness: Smarter by Design

One of Claude Code’s standout features is its ability to scan your project directory on startup, building an understanding of your codebase. It lets you save this context into a claude.md file, so it doesn’t have to reanalyze your project every time you launch the tool. You can even specify project-specific preferences and coding conventions.

Codex, by contrast, offers zero context-awareness upon startup. It simply opens a chat window with your chosen model and waits for input. This puts the burden on the developer to manually introduce project context, which is inefficient and time-consuming.

For a coding agent, understanding your existing codebase from the get-go is a game-changer that Codex currently misses.


User Interface: Polished vs. Minimal Viable

Claude Code’s command-line interface (CLI) is thoughtfully designed with clear separation between input and output areas, syntax highlighting, and even color schemes optimized for color-blind users. The UI feels intentional, refined, and comfortable for extended use.

Codex feels like a bare minimum implementation. Its output logs scroll continuously without clear visual breaks, it lacks syntax highlighting, and it provides only rudimentary feedback like elapsed wait time messages. This minimalism contributes to a frustrating user experience.


Stability and Reliability: Crashes Are a Dealbreaker

Claude Code has never crashed on me. Codex, unfortunately, has crashed multiple times, especially when switching models. Each crash means reconfiguring preferences and losing all previous session context—a major productivity killer.

Reliability is table stakes for developer tools, and Codex’s instability makes it feel unready for prime time.


Advanced Features: MCP Server Integration

Claude Code supports adding MCP (Model Context Protocol) servers, enabling advanced use cases like controlling a browser via Puppeteer to close the feedback loop by viewing changes in real-time. This kind of extensibility greatly expands what you can do with the tool.

Codex currently lacks support for MCP servers, limiting its potential for power users.


The Origin Story: Why Polished Tools Matter

During a recent Claude Code webinar, I learned that Claude Code began as an internal tool at Anthropic. It gained traction within the company, prompting the team to polish it extensively before releasing it publicly. This internal usage ensured a high-quality, battle-tested product.

In contrast, Codex feels like it was rushed to market with minimal internal adoption and testing. With just a couple of weeks of internal use and intentional polish, Codex could improve dramatically.


Final Thoughts: Potential vs. Reality

I have not even touched on the core coding ability or problem-solving skills of Codex’s underlying models. It’s possible that, once the bugs and UX issues are ironed out, Codex could outperform Claude Code at a lower cost.

But right now, the frustrating user experience, instability, poor key management, and lack of cost transparency prevent me from fully engaging with Codex. A well-designed developer experience isn’t just a nice-to-have; it’s essential to unlocking the true power of AI coding assistants.


What OpenAI Needs to Do to Bring Codex Up to Par

  1. Graceful Model Fallback: Automatically switch to a supported model if the default is unavailable.
  2. Clear and Accurate Model List: Only show models that actually work with the product.
  3. Secure and Convenient API Key Management: Implement OAuth or a dedicated API key setup for the tool.
  4. Cost Transparency: Add commands or UI elements to track session spending and manage token usage.
  5. Project Context Awareness: Automatically scan and remember project details to save time and costs.
  6. Stable, Polished UI: Improve the CLI interface with clear input/output zones, syntax highlighting, and accessibility options.
  7. Reliability: Fix crash bugs to ensure smooth, uninterrupted workflows.
  8. Advanced Feature Support: Enable MCP servers or equivalent extensibility to boost functionality.

Conclusion

AI coding assistants hold incredible promise to revolutionize software development, but only if they respect developers’ time, security, and workflows. Claude Code exemplifies how thoughtful design and polish can make a tool truly empowering.

OpenAI’s Codex has potential, but it needs significant improvements in developer experience and stability before it can compete. I look forward to seeing how it evolves and hope these insights help guide its growth.


Thanks for reading! If you’ve tried either Claude Code or Codex, I’d love to hear about your experiences in the comments below. Happy coding!

Cursor vs. Claude Code: Which AI Coding Agent Reigns Supreme?

The emergence of AI-powered coding agents has been one of the most exciting developments for software developers recently. Just this past week, two major players—Cursor and Anthropic’s Claude Code—launched their coding agents simultaneously. Intrigued by which tool might better serve developers, I decided to put both to the test on a real-world Rails application running in production. Here’s a detailed breakdown of my experience, comparing their user experience, code quality, cost, autonomy, and integration with the software development lifecycle.


The Test Setup: A Real Rails App with Complex Needs

My project is a Rails app acting as an email "roaster" for GPTs—essentially bots that process and respond to emails with unique personalities. The codebase is moderately complex and had been untouched for nine months, making it perfect for testing AI assistance on:

  1. Cleaning up test warnings and updating gem dependencies.
  2. Replacing LangChain calls with direct OpenAI API usage.
  3. Adding support for Anthropic’s API.

Both agents used the same underlying model—Claude 3.7 Sonnet—to keep the comparison fair.


User Experience (UX): Terminal Simplicity vs. IDE Integration

Cursor:
Cursor’s agent is integrated into a fully featured IDE and has recently made the agent the primary way to interact with the code. While this offers powerful context and control, I found the interface occasionally clunky—multiple “accept” buttons, cramped terminal panes, and confusing prompts requiring manual clicks. The file editor pane often felt unnecessarily large given that I rarely needed to manually tweak files mid-action.

Claude Code:
Claude Code operates as a CLI tool right in the terminal. You run commands from your project root, and it prompts you with simple yes/no questions to confirm each action. This single-pane approach felt clean, intuitive, and perfectly suited for delegating control to the agent. The lack of a GUI was a non-issue given the agent’s autonomy.

Winner: Claude Code for its streamlined, efficient command-line interaction.


Code Quality and Capability: Documentation Search Matters

Both agents produced similar code given the same model, but Cursor’s ability to search the web for documentation gave it a notable edge. When adding Anthropic support, Claude Code struggled with API syntax and ultimately wrote its own HTTP implementation. Cursor, however, seamlessly referenced web docs to get the calls right, rescuing itself from dead ends.

Winner: Cursor, thanks to its web search integration.


Cost: Subscription vs. Metered Pricing

  • Claude Code: Approximately $8 for 90 minutes of work on these tasks. While reasonable, costs could add up quickly for frequent use.
  • Cursor: $20/month subscription includes 500 premium model requests; I used less than 10% of that for this exercise, roughly costing $2.

Winner: Cursor, offering more usage for less money and a simpler subscription pricing model.


Autonomy: Earning Trust with Incremental Permissions

Claude Code shines here with a granular permission model. Initially, it asks for approval on commands; after repeated approvals, it earns trust to perform actions autonomously. By the end of my session, it was acting independently with minimal prompts.

Cursor, in contrast, lacks this “earned trust” feature. It repeatedly asks for confirmation without a way to grant blanket permissions. Given the nature of coding agents, I believe this is a feature Cursor should adopt soon.

Winner: Claude Code for smarter incremental permissioning.


Integration with Software Development Lifecycle

I emphasize test-driven development (TDD) and version control (Git), so how each agent handled these was crucial.

  • Claude Code: Excellent at generating and running tests before coding features, ensuring quality. Its commit messages were detailed and professional—better than any I’ve written myself. Being a CLI tool, it felt natural coordinating commands and output.

  • Cursor: While it offers a nice Git UI within the IDE and can autogenerate commit messages, these were more generic and less informative. Its handling of test outputs in a small terminal pane felt awkward.

Winner: Claude Code, for superior test and version control workflow integration.


Final Verdict: Use Both, But Lean Towards Claude Code—for Now

Both agents completed all three complex tasks successfully—a testament to how far AI coding assistants have come. It’s remarkable to see agents not only write code but also tests and meaningful commit messages that improve project maintainability.

That said, this is not a binary choice. I recommend developers use both tools in tandem:

  • Use Cursor for day-to-day coding within your IDE, benefiting from its subscription model and web documentation search.
  • Use Claude Code for command-line driven tasks that require incremental permissions, superior test integration, and detailed commit management.

For now, I personally prefer Claude Code for its user experience, autonomy model, and lifecycle integration. But Cursor’s rapid iteration pace means it will likely close these gaps soon.


Takeaway for Developers

If you’re a software developer curious about AI coding agents:

  • Get the $20/month Cursor subscription to familiarize yourself with agent-assisted coding.
  • Experiment with Claude Code in your terminal to experience granular control and trust-building autonomy.
  • Use both to balance cost, control, and convenience.
  • Embrace AI coding agents as powerful collaborators that can help you break through stalled projects and increase productivity.

The future of software development is here—and these AI coding agents are just getting started.


Have you tried Cursor or Claude Code? Share your experiences and thoughts in the comments below!

📹 Video Information:

Title: Obsidian + MCP + SuperWhisper: Write FASTER with AI
Duration: 05:25

Short Summary:

In the video, Greg explains how combining Claude Desktop, MCP servers, and Obsidian streamlines workflows, particularly for writing tasks. He describes using Claude Desktop (which can act as an MCP client) to interact with his Obsidian notes via a file system MCP server. By dictating responses to interview questions using Super Whisper, having Claude organize and clean up the responses, and then making final edits in Obsidian, Greg significantly reduced the effort and time needed to complete a large writing project. He suggests that this integration, possibly enhanced with tools like Git for version control, is highly effective for managing and editing notes or interview responses.