FastMCP v2 πŸš€

The fast, Pythonic way to build MCP servers and clients.


Note

FastMCP 2.0 & The Official MCP SDK

Recognize the FastMCP name? You might have used the version integrated into the official MCP Python SDK, which was based on FastMCP 1.0.

Welcome to FastMCP 2.0! This is the actively developed successor, and it significantly expands on 1.0 by introducing powerful client capabilities, server proxying & composition, OpenAPI/FastAPI integration, and more advanced features.

FastMCP 2.0 is the recommended path for building modern, powerful MCP applications. Ready to upgrade or get started? Follow the installation instructions, which include specific steps for upgrading from the official MCP SDK.


The Model Context Protocol (MCP) is a new, standardized way to provide context and tools to your LLMs, and FastMCP makes building MCP servers and clients simple and intuitive. Create tools, expose resources, define prompts, and connect components with clean, Pythonic code.

# server.py
from fastmcp import FastMCP

mcp = FastMCP("Demo πŸš€")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

if __name__ == "__main__":
    mcp.run()

Run the server locally:

fastmcp run server.py

πŸ“š Documentation

This README provides only a high-level overview. For detailed guides, API references, and advanced patterns, see the complete FastMCP documentation at gofastmcp.com.


What is MCP?

The Model Context Protocol (MCP) lets you build servers that expose data and functionality to LLM applications in a secure, standardized way. Think of it like a web API, but specifically designed for LLM interactions. MCP servers can:

  • Expose data through Resources (similar to GET requests; load info into context)
  • Provide functionality through Tools (similar to POST/PUT requests; execute actions)
  • Define interaction patterns through Prompts (reusable templates)
  • And more!

FastMCP provides a high-level, Pythonic interface for building and interacting with these servers.

Why FastMCP?

The MCP protocol is powerful, but implementing it involves a lot of boilerplate: server setup, protocol handlers, content types, and error management. FastMCP handles all the complex protocol details and server management so you can focus on building great tools. It's designed to be high-level and Pythonic; in most cases, decorating a function is all you need.

While the core server concepts of FastMCP 1.0 laid the groundwork and were contributed to the official MCP SDK, FastMCP 2.0 (this project) is the actively developed successor, adding significant enhancements and entirely new capabilities like a powerful client library, server proxying, composition patterns, OpenAPI/FastAPI integration, and much more.

FastMCP aims to be:

πŸš€ Fast: High-level interface means less code and faster development

πŸ€ Simple: Build MCP servers with minimal boilerplate

🐍 Pythonic: Feels natural to Python developers

πŸ” Complete: FastMCP aims to provide a full implementation of the core MCP specification for both servers and clients

Installation

We recommend installing FastMCP with uv:

uv pip install fastmcp

For full installation instructions, including verification, upgrading from the official MCP SDK, and developer setup, see the Installation Guide.

Core Concepts

These are the building blocks for creating MCP servers and clients with FastMCP.

The FastMCP Server

The central object representing your MCP application. It holds your tools, resources, and prompts, manages connections, and can be configured with settings like authentication providers.

from fastmcp import FastMCP

# Create a server instance
mcp = FastMCP(name="MyAssistantServer")

Learn more in the FastMCP Server Documentation.

Tools

Tools allow LLMs to perform actions by executing your Python functions (sync or async). Ideal for computations, API calls, or side effects (like POST/PUT). FastMCP handles schema generation from type hints and docstrings. Tools can return various types, including text, JSON-serializable objects, and even images using the fastmcp.Image helper.

@mcp.tool()
def multiply(a: float, b: float) -> float:
    """Multiplies two numbers."""
    return a * b

Learn more in the Tools Documentation.

Resources & Templates

Resources expose read-only data sources (like GET requests). Use @mcp.resource("your://uri"). Use {placeholders} in the URI to create dynamic templates that accept parameters, allowing clients to request specific data subsets.

# Static resource
@mcp.resource("config://version")
def get_version(): 
    return "2.0.1"

# Dynamic resource template
@mcp.resource("users://{user_id}/profile")
def get_profile(user_id: int):
    # Fetch profile for user_id...
    return {"name": f"User {user_id}", "status": "active"}

Learn more in the Resources & Templates Documentation.

Prompts

Prompts define reusable message templates to guide LLM interactions. Decorate functions with @mcp.prompt(). Return strings or Message objects.

@mcp.prompt()
def summarize_request(text: str) -> str:
    """Generate a prompt asking for a summary."""
    return f"Please summarize the following text:\n\n{text}"

Learn more in the Prompts Documentation.

Context

Access MCP session capabilities within your tools, resources, or prompts by adding a ctx: Context parameter. Context provides methods for:

  • Logging: Log messages to MCP clients with ctx.info(), ctx.error(), etc.
  • LLM Sampling: Use ctx.sample() to request completions from the client's LLM.
  • HTTP Request: Use ctx.http_request() to make HTTP requests to other servers.
  • Resource Access: Use ctx.read_resource() to access resources on the server
  • Progress Reporting: Use ctx.report_progress() to report progress to the client.
  • and more...

To access the context, add a parameter annotated as Context to any mcp-decorated function. FastMCP will automatically inject the correct context object when the function is called.

from fastmcp import FastMCP, Context

mcp = FastMCP("My MCP Server")

@mcp.tool()
async def process_data(uri: str, ctx: Context):
    # Log a message to the client
    await ctx.info(f"Processing {uri}...")

    # Read a resource from the server
    data = await ctx.read_resource(uri)

    # Ask client LLM to summarize the data
    summary = await ctx.sample(f"Summarize: {data.content[:500]}")

    # Return the summary
    return summary.text

Learn more in the Context Documentation.

MCP Clients

Interact with any MCP server programmatically using the fastmcp.Client. It supports various transports (Stdio, SSE, In-Memory) and often auto-detects the correct one. The client can also handle advanced patterns like server-initiated LLM sampling requests if you provide an appropriate handler.

Critically, the client allows for efficient in-memory testing of your servers by connecting directly to a FastMCP server instance via the FastMCPTransport, eliminating the need for process management or network calls during tests.

from fastmcp import Client

async def main():
    # Connect via stdio to a local script
    async with Client("my_server.py") as client:
        tools = await client.list_tools()
        print(f"Available tools: {tools}")
        result = await client.call_tool("add", {"a": 5, "b": 3})
        print(f"Result: {result.text}")

    # Connect via SSE
    async with Client("http://localhost:8000/sse") as client:
        # ... use the client
        pass

To test a server in memory, pass the server instance directly to the client:

from fastmcp import FastMCP, Client

mcp = FastMCP("My MCP Server")

async def main():
    # Connect via in-memory transport
    async with Client(mcp) as client:
        # ... use the client
        pass

Learn more in the Client Documentation and Transports Documentation.

Advanced Features

FastMCP introduces powerful ways to structure and deploy your MCP applications.

Proxy Servers

Create a FastMCP server that acts as an intermediary for another local or remote MCP server using FastMCP.from_client(). This is especially useful for bridging transports (e.g., remote SSE to local Stdio) or adding a layer of logic to a server you don't control.

Learn more in the Proxying Documentation.

Composing MCP Servers

Build modular applications by mounting multiple FastMCP instances onto a parent server using mcp.mount() (live link) or mcp.import_server() (static copy).

Learn more in the Composition Documentation.

OpenAPI & FastAPI Generation

Automatically generate FastMCP servers from existing OpenAPI specifications (FastMCP.from_openapi()) or FastAPI applications (FastMCP.from_fastapi()), instantly bringing your web APIs to the MCP ecosystem.

Learn more: OpenAPI Integration | FastAPI Integration.

Running Your Server

You can run your FastMCP server in several ways:

  1. Development (fastmcp dev): Recommended for building and testing. Provides an interactive testing environment with the MCP Inspector.

    fastmcp dev server.py
    # Optionally add temporary dependencies
    fastmcp dev server.py --with pandas numpy
  2. FastMCP CLI: Run your server with the FastMCP CLI, which can auto-detect and load your server object and run it with any transport configuration you need.

    fastmcp run path/to/server.py:server_object
    
    # Run as SSE on port 4200
    fastmcp run path/to/server.py:server_object --transport sse --port 4200

    FastMCP will auto-detect the server object if it's named mcp, app, or server. In these cases, you can omit the :server_object part unless you need to select a specific object.

  3. Direct Execution: For maximum compatibility with the MCP ecosystem, you can run your server directly as part of a Python script. You will typically do this within an if __name__ == "__main__": block in your script:

    # Add this to server.py
    if __name__ == "__main__":
        # Default: runs stdio transport
        mcp.run()
    
        # Example: Run with SSE transport on a specific port
        mcp.run(transport="sse", host="127.0.0.1", port=9000)

    Run your script:

    python server.py
    # or using uv to manage the environment
    uv run python server.py
  4. Claude Desktop Integration (fastmcp install): The easiest way to make your server persistently available in the Claude Desktop app. It handles creating an isolated environment using uv.

    fastmcp install server.py --name "My Analysis Tool"
    # Optionally add dependencies and environment variables
    fastmcp install server.py --with requests -v API_KEY=123 -f .env

See the Server Documentation for more details on transports and configuration.

Contributing

Contributions are the core of open source! We welcome improvements and features.

Prerequisites

  • Python 3.10+
  • uv (Recommended for environment management)

Setup

  1. Clone the repository:

    git clone https://github.com/jlowin/fastmcp.git 
    cd fastmcp
  2. Create and sync the environment:

    uv sync

    This installs all dependencies, including dev tools.

  3. Activate the virtual environment (e.g., source .venv/bin/activate or via your IDE).

Unit Tests

FastMCP has a comprehensive unit test suite. All PRs must introduce or update tests as appropriate and pass the full suite.

Run tests using pytest:

pytest

Static Checks

FastMCP uses pre-commit for code formatting, linting, and type-checking. All PRs must pass these checks (they run automatically in CI).

Install the hooks locally:

uv run pre-commit install

The hooks will now run automatically on git commit. You can also run them manually at any time:

pre-commit run --all-files
# or via uv
uv run pre-commit run --all-files

Pull Requests

  1. Fork the repository on GitHub.
  2. Create a feature branch from main.
  3. Make your changes, including tests and documentation updates.
  4. Ensure tests and pre-commit hooks pass.
  5. Commit your changes and push to your fork.
  6. Open a pull request against the main branch of jlowin/fastmcp.

Please open an issue or discussion for questions or suggestions before starting significant work!