Hugging Face Blog

How to Build an MCP Server in 5 Lines of Python

Published April 30, 2025 · Abubakar Abid (abidlabs) and Yuvraj Sharma (ysharma)

Updated! (September 2025) This post has been updated with the latest Gradio MCP features, including Resources, Prompts, enhanced authentication, and more.

Gradio is a Python library used by more than 1 million developers each month to build interfaces for machine learning models. Beyond just creating UIs, Gradio also exposes API capabilities and — now! — Gradio apps can be launched as Model Context Protocol (MCP) servers for LLMs. This means that your Gradio app, whether it's an image generator or a tax calculator or something else entirely, can be called as a tool by an LLM.

This guide will show you how to use Gradio to build an MCP server in just a few lines of Python.

Prerequisites

If not already installed, please install Gradio with the MCP extra:

pip install "gradio[mcp]"

This will install the necessary dependencies, including the mcp package. You'll also need an LLM application that supports tool calling using the MCP protocol, such as Claude Desktop, Cursor, or Cline (these are known as "MCP Clients").

Why Build an MCP Server?

An MCP server is a standardized way to expose tools so that they can be used by LLMs. An MCP server can provide LLMs with all kinds of additional capabilities, such as the ability to generate or edit images, synthesize audio, or perform specific calculations like prime-factorizing numbers.

Gradio makes it easy to build these MCP servers, turning any Python function into a tool that LLMs can use.
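To make this concrete, the prime-factorization example mentioned above could start as an ordinary Python function; here is a minimal sketch (the function alone — the Gradio wiring would be added exactly as in the letter-counter example below):

```python
def prime_factorize(n: int) -> list[int]:
    """Return the prime factors of n in ascending order.

    Args:
        n: The integer to factorize (must be >= 2)
    """
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:   # divide out each prime factor completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors.append(n)
    return factors
```

Wrapped in a `gr.Interface` and launched with `mcp_server=True`, this becomes a tool an LLM can call.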

Example: Counting Letters in a Word

LLMs are famously not great at counting the number of letters in a word (e.g., the number of "r"s in "strawberry"). But what if we equip them with a tool to help? Let's start by writing a simple Gradio app that counts the number of letters in a word or phrase:

import gradio as gr

def letter_counter(word, letter):
    """Count the occurrences of a specific letter in a word.
    
    Args:
        word: The word or phrase to analyze
        letter: The letter to count occurrences of
        
    Returns:
        The number of times the letter appears in the word
    """
    return word.lower().count(letter.lower())

demo = gr.Interface(
    fn=letter_counter,
    inputs=["text", "text"],
    outputs="number",
    title="Letter Counter",
    description="Count how many times a letter appears in a word"
)

demo.launch(mcp_server=True)

Notice that we have set mcp_server=True in .launch(). This is all that's needed for your Gradio app to serve as an MCP server! Now, when you run this app, it will:

  1. Start the regular Gradio web interface
  2. Start the MCP server
  3. Print the MCP server URL in the console

The MCP server will be accessible at:

http://your-server:port/gradio_api/mcp/sse

Gradio automatically converts the letter_counter function into an MCP tool that can be used by LLMs. The docstring of the function is used to generate the description of the tool and its parameters.
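To see how an Args-style docstring carries per-parameter descriptions, here is a tiny parser that extracts them — an illustrative approximation only, not Gradio's actual parsing code:

```python
DOCSTRING = """Count the occurrences of a specific letter in a word.

Args:
    word: The word or phrase to analyze
    letter: The letter to count occurrences of

Returns:
    The number of times the letter appears in the word
"""

def parse_google_args(doc: str) -> dict:
    """Extract name -> description pairs from the Args: section
    of a Google-style docstring (illustrative sketch only)."""
    args, in_args = {}, False
    for line in doc.splitlines():
        s = line.strip()
        if s == "Args:":
            in_args = True
        elif in_args:
            if not s or s.endswith(":"):  # blank line or next section ("Returns:") ends the block
                break
            name, _, desc = s.partition(":")
            args[name.strip()] = desc.strip()
    return args
```

Running `parse_google_args(DOCSTRING)` returns a dict mapping `word` and `letter` to their descriptions, which is the kind of information the generated tool schema exposes to the LLM.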

All you need to do is add this URL endpoint to your MCP Client (e.g., Cursor, Cline, or Tiny Agents), which typically means pasting this config in the settings:

{
  "mcpServers": {
    "gradio": {
      "url": "http://your-server:port/gradio_api/mcp/sse"
    }
  }
}

Some MCP Clients, notably Claude Desktop, do not yet support SSE-based MCP servers. In those cases, you can use a tool such as mcp-remote. First, install Node.js. Then add the following to your MCP Client config:

{
  "mcpServers": {
    "gradio": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "http://your-server:port/gradio_api/mcp/sse"
      ]
    }
  }
}

(By the way, you can find the exact config to copy-paste by going to the "View API" link in the footer of your Gradio app, and then clicking on "MCP").


Recent Major Improvements

Gradio has recently added several powerful features to MCP servers. For a detailed overview of five major improvements including seamless local file support, real-time progress notifications, OpenAPI to MCP transformation, enhanced authentication, and customizable tool descriptions, check out our dedicated blog post: Five Big Improvements to Gradio MCP Servers .

Advanced MCP Features

MCP Resources and Prompts

Beyond tools, MCP supports resources (for exposing data) and prompts (for defining reusable templates). Gradio provides decorators to easily create MCP servers with all three capabilities, as described in our dedicated guide:

import gradio as gr

@gr.mcp.tool()  # Not needed as functions are registered as tools by default
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

@gr.mcp.resource("greeting://{name}")
def get_greeting(name: str) -> str:
    """Get a personalized greeting"""
    return f"Hello, {name}!"

@gr.mcp.prompt()
def greet_user(name: str, style: str = "friendly") -> str:
    """Generate a greeting prompt"""
    styles = {
        "friendly": "Please write a warm, friendly greeting",
        "formal": "Please write a formal, professional greeting", 
        "casual": "Please write a casual, relaxed greeting",
    }
    return f"{styles.get(style, styles['friendly'])} for someone named {name}."

demo = gr.TabbedInterface(
    [
        gr.Interface(add, [gr.Number(value=1), gr.Number(value=2)], gr.Number()),
        gr.Interface(get_greeting, gr.Textbox("Abubakar"), gr.Textbox()),
        gr.Interface(greet_user, [gr.Textbox("Abubakar"), gr.Dropdown(choices=["friendly", "formal", "casual"])], gr.Textbox()),
    ],
    ["Add", "Get Greeting", "Greet User"]
)

demo.launch(mcp_server=True)

MCP-Only Functions

Gradio also allows you to create functions that only appear in the MCP server (not in the UI) using gr.api() :

import gradio as gr

def slice_list(lst: list, start: int, end: int) -> list:
    """
    A tool that slices a list given a start and end index.
    Args:
        lst: The list to slice.
        start: The start index.
        end: The end index.
    Returns:
        The sliced list.
    """
    return lst[start:end]

with gr.Blocks() as demo:
    gr.Markdown("This app includes MCP-only tools not visible in the UI.")
    gr.api(slice_list)

demo.launch(mcp_server=True)

Key features of the Gradio <> MCP Integration

  1. Tool Conversion: Each API endpoint in your Gradio app is automatically converted into an MCP tool with a corresponding name, description, and input schema. To view the tools and schemas, visit http://your-server:port/gradio_api/mcp/schema, or go to the "View API" link in the footer of your Gradio app and click on "MCP".
  2. Environment Variable Support: There are two ways to enable the MCP server functionality: setting the mcp_server parameter, as shown above (demo.launch(mcp_server=True)), or setting an environment variable (export GRADIO_MCP_SERVER=True).

File Handling: The server automatically handles file data conversions, including:

  • Converting base64-encoded strings to file data
  • Processing image files and returning them in the correct format
  • Managing temporary file storage
  • An automatic file-upload MCP server for seamless local file support
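The base64 round-trip behind the first bullet can be sketched with the standard library — an illustration of the idea, not Gradio's internal code:

```python
import base64
import os
import tempfile

def decode_base64_to_file(b64_string: str, suffix: str = ".png") -> str:
    """Decode a base64 payload (as an MCP client might send one)
    into a temporary file and return its path."""
    data = base64.b64decode(b64_string)
    fd, path = tempfile.mkstemp(suffix=suffix)
    with os.fdopen(fd, "wb") as f:
        f.write(data)
    return path

# Round-trip check with a fake payload
payload = base64.b64encode(b"fake image bytes").decode()
path = decode_base64_to_file(payload)
assert open(path, "rb").read() == b"fake image bytes"
```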


Performance Analytics: Gradio automatically tracks and displays performance metrics for all your MCP tools and API endpoints. View success rates, latency percentiles, and request counts directly on the "View API" page to help you and your users choose the most reliable and fastest tools. Metrics are color-coded: green for 100% success, red for 0% success, and orange for rates in between.
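The color-coding rule just described is simple enough to state as a function — an illustrative helper, not Gradio's implementation:

```python
def metric_color(success_rate: float) -> str:
    """Map a tool's success rate (0.0-1.0) to the display color
    described above: green at 100%, red at 0%, orange in between."""
    if success_rate >= 1.0:
        return "green"
    if success_rate <= 0.0:
        return "red"
    return "orange"
```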

Hosted MCP Servers on 🤗 Spaces: You can publish your Gradio application for free on Hugging Face Spaces, which gives you a free hosted MCP server.

Here's an example of such a Space: https://huggingface.co/spaces/abidlabs/mcp-tools . Notice that you can add this config to your MCP Client to start using the tools from this Space immediately:

{
  "mcpServers": {
    "gradio": {
      "url": "https://abidlabs-mcp-tools.hf.space/gradio_api/mcp/sse"
    }
  }
}

Private Spaces Authentication

You can also use private Hugging Face Spaces as MCP servers by providing authentication:

{
  "mcpServers": {
    "gradio": {
      "url": "https://your-private-space.hf.space/gradio_api/mcp/sse",
      "headers": {
        "Authorization": "Bearer <YOUR-HUGGING-FACE-TOKEN>"
      }
    }
  }
}
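If you script your client config, the header can be assembled from an environment variable. A minimal sketch, assuming the token is stored in HF_TOKEN (the variable name is an assumption, not a requirement):

```python
import os

def auth_headers() -> dict:
    """Build the Authorization header for a private Space.
    Assumes the Hugging Face token is exported as HF_TOKEN."""
    token = os.environ["HF_TOKEN"]  # assumption: token set in the environment beforehand
    return {"Authorization": f"Bearer {token}"}
```

Keeping the token in an environment variable avoids committing it to the config file itself.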

Conclusion

By using Gradio to build your MCP server, you can easily add many different kinds of custom functionality to your LLM. With the recent improvements including resources, prompts, better authentication, file handling, and performance metrics, Gradio provides a comprehensive platform for building sophisticated MCP servers.

Further Reading

If you want to dive deeper, here are some articles that we recommend:

  • An Introduction to the MCP Protocol
  • Gradio Guide: Building an MCP Server with Gradio
  • Five Big Improvements to Gradio MCP Servers
  • Upskill your LLMs with Gradio MCP Servers
  • Implementing MCP Servers in Python: An AI Shopping Assistant with Gradio
  • Bonus Guide: Building an MCP Client with Gradio
