
We have reached a saturation point with basic chatbots. The novelty of asking a question and receiving a text-based answer has faded; the industry is now pivoting hard toward agency: systems that can calculate, read local files, and structure complex workflows.
This is where the Model Context Protocol (MCP) becomes the critical infrastructure of the new AI stack.
However, a word of caution before we write a single line of code: do not reinvent the wheel. If a server already exists for your specific database or API, use it. The true value for a senior developer lies not in building redundant connectors, but in crafting bespoke servers that bridge specific, proprietary gaps in your workflow.
While tools like n8n might offer a lower barrier to entry for simple automation, coding a custom MCP server using the Python SDK offers precision, particularly when integrating complex Tools, Resources, and Prompts into a single, cohesive unit.
This article details the construction of a comprehensive "Multitool" server. We will not just build a calculator; we will build a system that understands context through documentation resources and executes complex logic through dynamic prompt templates.
To build a server that actually feels "intelligent," we must move beyond simple function calling. We need to implement the three pillars of the protocol: Tools, which let the model act; Resources, which feed it data outside its training set; and Prompts, which standardize how it is instructed.
We will use FastMCP from the Python SDK, creating a server that offers mathematical precision, documentation access, and structured analysis simultaneously.
The ecosystem we are entering requires a specific stack. We are not coding like it’s 1999; we are utilizing modern tooling.
We will use `uv` for lightning-fast dependency management. We begin by initializing a project structure that is clean and modular.
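A sketch of that initialization, assuming the official MCP Python SDK package and a hypothetical project name `mcp-multitool`:

```bash
uv init mcp-multitool
cd mcp-multitool
uv add "mcp[cli]"
```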
Our entry point will be a file named `server.py`. The imports set the stage: we need FastMCP for the server architecture and standard libraries like `math` and `os` for our logic.

```python
import math
import os

from mcp.server.fastmcp import FastMCP
```
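With the imports in place, we instantiate the server object that every decorator below attaches to. A minimal sketch; the display name "Multitool" is our own choice:

```python
# The name is what MCP clients show in their server list.
mcp = FastMCP("Multitool")
```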
The most direct way an LLM affects the world is through tools. We will implement a robust calculator. While trivial in logic, it demonstrates the critical requirement of precise description.
If you do not describe your tool accurately, the LLM will not know when to call it. Ambiguity is the enemy of agency.
We define tools using the `@mcp.tool()` decorator. Note the docstrings; these are not just comments for developers, they are instructions for the AI.
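A minimal sketch of one such tool; the function name and docstring wording are our own:

```python
@mcp.tool()
def add(a: float, b: float) -> float:
    """Add two numbers and return the exact sum.

    Use this tool whenever the user needs precise arithmetic
    rather than a mental estimate.
    """
    return a + b
```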
We can expand this to include subtraction, division, square roots, and percentage calculations, as sketched below. The key takeaway is modularity: each mathematical operation is a discrete tool the model can invoke.
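Following the same pattern, square roots and percentages become discrete tools. A sketch, reusing the `math` module imported earlier:

```python
@mcp.tool()
def square_root(x: float) -> float:
    """Return the square root of a non-negative number."""
    if x < 0:
        raise ValueError("Cannot take the square root of a negative number.")
    return math.sqrt(x)


@mcp.tool()
def percentage(part: float, whole: float) -> float:
    """Return what percentage 'part' is of 'whole'."""
    if whole == 0:
        raise ValueError("'whole' must be non-zero.")
    return (part / whole) * 100
```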
Tools allow action, but Resources allow understanding. A common use case is feeding the model local documentation or file content that isn't part of its training data.
In our architecture, we will expose a Markdown file containing documentation (hypothetically, a TypeScript SDK manual) as a consumable resource.
Resources differ from tools; they are read-only data streams. We define a path to a local markdown file and expose it via the `@mcp.resource()` decorator.
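A sketch of the resource, assuming a hypothetical `docs/typescript-sdk.md` file next to the server and a `docs://` URI scheme of our own choosing:

```python
DOCS_PATH = os.path.join(os.path.dirname(__file__), "docs", "typescript-sdk.md")


@mcp.resource("docs://typescript-sdk")
def typescript_sdk_docs() -> str:
    """Expose the local TypeScript SDK manual as read-only context."""
    with open(DOCS_PATH, "r", encoding="utf-8") as f:
        return f.read()
```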
When the client (e.g., Claude Desktop) requests this resource, the server reads the file dynamically and pipes the text directly into the model's context window. This allows you to ask questions like "How do I use the TypeScript SDK?" and have the model answer based on your local file, not general training data.
This is often the most underutilized capability. Prompts allow you to standardize workflows. Instead of manually typing "You are an executive assistant, please summarize this transcript...", you codify that instruction into a template.
We will create a "Meeting Summary" prompt. This prompt will accept dynamic arguments: the date, the meeting title, and the transcript.
The prompt template itself is a Markdown file stored in your project (e.g., `templates/prompt.md`). It defines a persona and a structure:
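The original file is not reproduced here; a plausible sketch, assuming `{date}`, `{title}`, and `{transcript}` placeholders that the server will fill in:

```markdown
You are an executive assistant. Summarize the meeting below.

Meeting: {title}
Date: {date}

Produce:
1. A one-paragraph overview.
2. Key decisions made.
3. Action items with owners.

Transcript:
{transcript}
```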
There is a temptation to over-engineer this using complex `list_prompts` and `get_prompts` handlers. Avoid this. The SDK provides a cleaner abstraction using `@mcp.prompt()`.
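A sketch of the handler, assuming the template shape above and the hypothetical `templates/prompt.md` path:

```python
@mcp.prompt()
def meeting_summary(date: str, title: str, transcript: str) -> str:
    """Build a structured meeting-summary prompt from the template."""
    template_path = os.path.join(os.path.dirname(__file__), "templates", "prompt.md")
    with open(template_path, "r", encoding="utf-8") as f:
        template = f.read()
    return template.format(date=date, title=title, transcript=transcript)
```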
Because the arguments are defined in the function signature, the MCP client automatically generates a UI form for these inputs. You simply type the date and paste the transcript; the server constructs the engineered prompt.
A server running in a vacuum is useless. We must bridge our local Python process with the client, in this case Claude Desktop.
This requires the `claude_desktop_config.json` file. This file tells the client: "here is a server, here is how you launch it, and here is how you talk to it."
The Configuration Structure:
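A sketch of the relevant entry, assuming the project lives at the placeholder path `/absolute/path/to/project`; the key "multitool" is our own label:

```json
{
  "mcpServers": {
    "multitool": {
      "command": "uv",
      "args": [
        "run",
        "--directory",
        "/absolute/path/to/project",
        "server.py"
      ]
    }
  }
}
```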
Key Insight: Notice we aren't just calling `python`. We are invoking `uv run`. This ensures the server runs inside the correct virtual environment with all dependencies resolved. Breaking out of the environment is the number one cause of "module not found" errors in MCP deployment.
Debugging an MCP server by restarting Claude Desktop repeatedly is painful and inefficient. It is akin to recompiling a kernel to change a font size.
Use the MCP Inspector.
The Inspector acts as a web-based client for your server. It allows you to test your Tools with arbitrary arguments, read your Resources, and render your Prompts, all without restarting Claude Desktop.
To run the Inspector:
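Assuming Node.js is installed, the Inspector can wrap the server launch command directly:

```bash
npx @modelcontextprotocol/inspector uv run server.py
```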
Developer Warning: When the Inspector launches, it may generate an authorization token in the URL for security. If you try to connect via a generic `localhost:port` URL without the token, it will fail. Always use the full URL printed in the terminal output, or verify you are running with authorization disabled (only recommended for local testing).
By default, we utilize `stdio` (standard input/output) for communication. This is effective for local desktop integration. However, as you scale, you may wish to expose your server over a network.
Historically, SSE (Server-Sent Events) was the standard. However, the protocol is shifting: SSE as a standalone transport is being deprecated in favor of Streamable HTTP, which incorporates SSE as a streaming mechanism but offers a more robust HTTP wrapping.
If you wish to future-proof your server for remote access (e.g., hosting it on a VM), you can switch the transport mode in your main execution block:
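With the SDK's FastMCP, the transport is an argument to `run()`. A sketch of the switch:

```python
if __name__ == "__main__":
    # "stdio" is the default and is what Claude Desktop expects;
    # "streamable-http" exposes the server over the network instead.
    mcp.run(transport="streamable-http")
```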
When you inspect a server running over HTTP, remember to append `/mcp` to your endpoint URL (e.g., `http://localhost:8000/mcp`) to successfully handshake with the Inspector.
We have moved from an empty directory to a fully functional "Multitool" server. It possesses the ability to calculate (Tools), the ability to read and contextualize external data (Resources), and the ability to guide high-level analysis (Prompts).
You now possess the blueprint. The next step is not to add more calculator functions, but to look at your own proprietary data—your SQL databases, your internal documentation, your specific workflows—and build the bridges that allow your AI to access them. That is where the true engineering begins.