[00:00] (0.08s)
I'm going to show you a library that
[00:01] (1.84s)
lets you communicate with your MCP
[00:03] (3.68s)
servers directly in your code. You can
[00:06] (6.24s)
use it with any LLM you prefer.
[00:08] (8.08s)
Previously, communicating with MCP
[00:10] (10.40s)
servers required specific MCP clients.
[00:13] (13.20s)
We already have clients like Windsurf,
[00:15] (15.44s)
Cursor, and Claude Desktop. Now, you can
[00:17] (17.68s)
use this new MCP client library, mcp-use. It
[00:20] (20.32s)
works by using an agent and comes with
[00:22] (22.16s)
some pretty cool features. It's also
[00:23] (23.92s)
really easy to install. I'll show you
[00:25] (25.52s)
how to set it up. If you're not familiar
[00:27] (27.12s)
with code, keep in mind that this does
[00:29] (29.12s)
require some coding knowledge to use
[00:30] (30.88s)
properly. Even if you're new to it,
[00:32] (32.80s)
that's not a big issue. I'll also show
[00:34] (34.64s)
you how to vibe code with it. Let's get
[00:36] (36.56s)
into the video. Let's look at the
[00:38] (38.16s)
installation of the library. It's a
[00:40] (40.08s)
Python-based library. So, the first step
[00:42] (42.56s)
is to check whether Python is installed
[00:44] (44.64s)
on your system. Then, you'll create a
[00:46] (46.40s)
virtual environment. Once it's created,
[00:48] (48.48s)
you need to activate it. The commands
[00:50] (50.16s)
for both Windows and macOS are shown
[00:52] (52.24s)
here, and I'll paste them in the
[00:53] (53.84s)
description below as well.
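For reference, a typical sequence looks like this, assuming Python 3 is installed and using `.venv` as a placeholder environment name:

```bash
python3 --version          # confirm Python is installed
python3 -m venv .venv      # create a virtual environment

# Activate it:
source .venv/bin/activate  # macOS / Linux
.venv\Scripts\activate     # Windows
```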
[00:55] (55.68s)
If you don't know how to code, you can also get these
[00:57] (57.68s)
commands from ChatGPT. These are the
[00:59] (59.76s)
commands to install the library. They're
[01:01] (61.76s)
available on the GitHub repo, too. If
[01:03] (63.68s)
you're using Python 3, which
[01:05] (65.76s)
you'll know from the python --version
[01:07] (67.20s)
command mentioned earlier, then make
[01:09] (69.04s)
sure to run everything using pip3
[01:11] (71.52s)
instead of pip.
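The base install is a single package; mcp-use is the name on PyPI, assuming it hasn't changed since this recording:

```bash
pip3 install mcp-use
```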
[01:13] (73.84s)
If you're planning to use an OpenAI key or model to interact
[01:16] (76.16s)
with your MCPs, you need to install
[01:18] (78.32s)
langchain-openai. For Anthropic, you
[01:21] (81.04s)
need to install langchain-anthropic. For
[01:23] (83.28s)
other providers like Groq or Llama, you
[01:25] (85.68s)
can find the required libraries listed
[01:27] (87.68s)
at this link.
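The provider integrations install the same way; the two shown here are the standard LangChain package names:

```bash
pip3 install langchain-openai     # for OpenAI models
pip3 install langchain-anthropic  # for Anthropic / Claude models
```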
[01:29] (89.76s)
So once everything is installed, open up your terminal. You
[01:31] (91.92s)
can see that I'm in a Python
[01:33] (93.28s)
environment. I've installed the pip
[01:35] (95.04s)
packages. And now I'll show you how to
[01:37] (97.12s)
move forward. First, open this directory
[01:39] (99.60s)
in Cursor using this command. This will
[01:41] (101.76s)
launch your project directly in Cursor.
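The command in question is Cursor's shell launcher, assuming you've installed the `cursor` command from within Cursor:

```bash
cursor .    # open the current directory as a project in Cursor
```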
[01:44] (104.00s)
Once it opens in Cursor, create a new
[01:46] (106.48s)
file named .env. Your project will only
[01:49] (109.20s)
have the virtual environment folder. At
[01:51] (111.12s)
this point, you need to create the other
[01:52] (112.88s)
files yourself and add the code
[01:54] (114.56s)
manually. Create the .env file and paste
[01:57] (117.12s)
the following line with your API key.
[01:59] (119.36s)
You only need to paste the key for the
[02:01] (121.12s)
provider you're using. I'm using OpenAI
[02:03] (123.28s)
in this example. So, I pasted that key.
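With OpenAI, the line looks like this; the key shown is a placeholder, and with Anthropic you'd set ANTHROPIC_API_KEY instead:

```bash
OPENAI_API_KEY=sk-your-key-here
```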
[02:05] (125.44s)
Let me quickly explain the code and how
[02:07] (127.20s)
it works. At the top, you can see
[02:09] (129.20s)
different imports from mcp_use, LangChain,
[02:12] (132.08s)
and OpenAI. As we're using OpenAI for
[02:14] (134.56s)
the LLM, we define a function, and
[02:17] (137.04s)
load_dotenv() loads the environment
[02:19] (139.28s)
variables from the .env file. Then we
[02:21] (141.84s)
create an MCP client using the MCP
[02:24] (144.24s)
configuration from a separate file. I've
[02:26] (146.32s)
placed it here. You can see the Airbnb
[02:28] (148.24s)
MCP configuration is right here.
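For reference, an Airbnb MCP configuration typically looks like the snippet below; @openbnb/mcp-server-airbnb is the commonly used community server, so treat the exact package name as an assumption:

```json
{
  "mcpServers": {
    "airbnb": {
      "command": "npx",
      "args": ["-y", "@openbnb/mcp-server-airbnb"]
    }
  }
}
```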
[02:30] (150.76s)
Next, we define the LLM we want to
[02:33] (153.76s)
use. If you're using Anthropic, the
[02:35] (155.52s)
setup will be a bit different. Then we
[02:37] (157.44s)
choose our model and create an agent.
[02:39] (159.44s)
This agent takes the LLM, the client we
[02:41] (161.76s)
created, and defines the maximum number
[02:43] (163.92s)
of steps it can take. We also give it a
[02:46] (166.08s)
prompt, which the agent runs against the
[02:48] (168.00s)
MCP server. It then prints the result
[02:50] (170.16s)
and gives it to us.
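Putting the pieces together, a minimal sketch of the script just described looks like this; the config file name and the prompt are placeholders, not the exact ones from the video:

```python
import asyncio

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from mcp_use import MCPAgent, MCPClient


async def main():
    # Load OPENAI_API_KEY (and anything else) from the .env file
    load_dotenv()

    # Build the client from the JSON file holding the Airbnb MCP config
    client = MCPClient.from_config_file("airbnb_mcp.json")

    # Any LangChain chat model works here; use ChatAnthropic for Claude
    llm = ChatOpenAI(model="gpt-4o")

    # The agent binds the LLM to the client and caps how many steps it may take
    agent = MCPAgent(llm=llm, client=client, max_steps=30)

    result = await agent.run(
        "Find Airbnb listings in Barcelona with a pool and good ratings"
    )
    print(result)


if __name__ == "__main__":
    asyncio.run(main())
```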
[02:52] (172.48s)
This is a very basic example of using the Airbnb MCP. You can
[02:55] (175.28s)
modify it however you like and build
[02:57] (177.12s)
really interesting applications. You
[02:58] (178.96s)
don't need a separate client anymore.
[03:00] (180.56s)
You can bind an LLM to an MCP and create
[03:03] (183.04s)
modular applications. If you've seen our
[03:05] (185.28s)
WhatsApp MCP video, that same concept
[03:08] (188.00s)
can be used here to make fully
[03:09] (189.52s)
autonomous WhatsApp agents. Now, let me
[03:11] (191.52s)
run it for you.
[03:20] (200.00s)
The server has started and it's running.
[03:22] (202.00s)
It looks like there was some kind of
[03:23] (203.44s)
error, but we still got the output. We
[03:25] (205.44s)
received the listings from the Airbnb
[03:27] (207.36s)
MCP. It gave us the links because we
[03:29] (209.52s)
added a feature that filters listings by
[03:31] (211.60s)
our preferences, like having a pool and
[03:33] (213.84s)
good ratings. It handpicked listings based on
[03:36] (216.00s)
those conditions. This is a cool
[03:37] (217.56s)
implementation. It works, and the
[03:39] (219.68s)
possibilities for creating different
[03:41] (221.28s)
agents are endless. The code you just
[03:43] (223.20s)
saw is already in the GitHub repository,
[03:45] (225.84s)
so there's no need to include it in the
[03:47] (227.76s)
description. If you want to modify the
[03:49] (229.76s)
code, you can either write it yourself
[03:51] (231.84s)
or ask Cursor to do it. One issue you
[03:54] (234.16s)
might run into is that Cursor doesn't
[03:56] (236.00s)
have the context of this framework. To
[03:58] (238.16s)
give it that context, open Cursor's settings, scroll down to the
[04:00] (240.48s)
Features section, and go to
[04:03] (243.80s)
Docs. Add a new doc, and in the link
[04:06] (246.80s)
field, go back to the GitHub repo and
[04:09] (249.20s)
open the README file.
[04:11] (251.76s)
You don't need to provide the link to
[04:13] (253.28s)
the entire repository. Just use the
[04:15] (255.20s)
README file since it contains the full
[04:17] (257.32s)
documentation. Copy the link and paste
[04:19] (259.52s)
it into the doc section. Cursor will
[04:21] (261.44s)
read it, index it, and use it as
[04:26] (266.76s)
context. To use it in code, type the @
[04:29] (269.60s)
sign, go into Docs, and select the mcp-use
[04:32] (272.36s)
docs. It will reference that and
[04:34] (274.64s)
generate code based on the framework
[04:36] (276.40s)
properly. Another thing you can do is
[04:38] (278.16s)
convert the repo into an LLM-ingestible
[04:40] (280.64s)
format if you have any questions about
[04:42] (282.40s)
it. To do that, you can replace "hub" with
[04:45] (285.04s)
"ingest" in the GitHub
[04:47] (287.00s)
URL. This will open the repository in
[04:49] (289.68s)
GitIngest.
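For example, with placeholder owner and repo names:

```
https://github.com/<owner>/<repo>   ->   https://gitingest.com/<owner>/<repo>
```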
[04:52] (292.00s)
It will convert the entire repo into readable text that you can use
[04:54] (294.16s)
with any LLM. You can then ask questions
[04:56] (296.40s)
about it if you're ever confused or need
[04:58] (298.60s)
clarification. You've seen it in action
[05:00] (300.88s)
and you can check the repo for other
[05:02] (302.72s)
example use cases like using Playwright
[05:05] (305.04s)
and Airbnb. I used the Airbnb one, but
[05:07] (307.68s)
with OpenAI. The Blender MCP server can
[05:10] (310.40s)
also be used. This framework also
[05:12] (312.24s)
supports HTTP connections, which means
[05:14] (314.80s)
you can connect to servers running on
[05:16] (316.56s)
localhost.
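In that case, the server entry in the config points at a URL instead of a command. A minimal sketch, assuming an MCP server is already running locally and exposing an SSE endpoint (the port and path are placeholders):

```json
{
  "mcpServers": {
    "http_server": {
      "url": "http://localhost:8931/sse"
    }
  }
}
```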
[05:18] (318.80s)
It includes multi-server support too, allowing multiple servers
[05:21] (321.12s)
to be defined in a single file. If
[05:23] (323.12s)
you're working with multiple MCP
[05:24] (324.80s)
servers, you can either specify which
[05:27] (327.12s)
result should come from which server or
[05:29] (329.20s)
handle it dynamically. By setting
[05:31] (331.44s)
use_server_manager to true, the agent will
[05:33] (333.76s)
intelligently choose the right MCP
[05:35] (335.76s)
server. You can also control which tools
[05:37] (337.92s)
it has access to.
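A minimal sketch of that setup, with two illustrative servers in one config dict; disallowed_tools is the library's tool-restriction knob as I understand it, and the tool name shown is hypothetical:

```python
from langchain_openai import ChatOpenAI
from mcp_use import MCPAgent, MCPClient

# Multiple servers defined in a single config; the agent picks between them.
config = {
    "mcpServers": {
        "airbnb": {"command": "npx", "args": ["-y", "@openbnb/mcp-server-airbnb"]},
        "playwright": {"command": "npx", "args": ["@playwright/mcp@latest"]},
    }
}

client = MCPClient.from_dict(config)
agent = MCPAgent(
    llm=ChatOpenAI(model="gpt-4o"),
    client=client,
    max_steps=30,
    use_server_manager=True,     # let the agent choose the right server per step
    disallowed_tools=["shell"],  # hypothetical: block tools you don't want it using
)
```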
[05:40] (340.16s)
This is a solid framework, and I'm already thinking of
[05:42] (342.16s)
all the wild ways to build new
[05:44] (344.00s)
applications with the mcp-use library. You
[05:46] (346.32s)
should check it out too. I'm working on
[05:48] (348.08s)
a few projects with it right now. If you
[05:50] (350.08s)
don't fully understand it, I've already
[05:52] (352.08s)
shown how you can use an LLM to make
[05:54] (354.08s)
sense of everything. You can also ask
[05:56] (356.00s)
ChatGPT or let Cursor write the code
[05:58] (358.40s)
for you. If you like the video, consider
[06:00] (360.32s)
donating through the link in the
[06:01] (361.84s)
description and do subscribe. Thanks for
[06:04] (364.00s)
watching.