
🦊 MCPBench: A Benchmark for Evaluating MCP Servers


MCPBench is an evaluation framework for MCP Servers. It supports two types of evaluation tasks, Web Search and Database Query, and is compatible with both local and remote MCP Servers. Under the same LLM and Agent configuration, the framework compares different MCP Servers (such as Brave Search and DuckDuckGo) on task completion accuracy, latency, and token consumption. Here is the evaluation report.

(Figure: MCPBench overview)

The implementation is based on LangProBe: a Language Programs Benchmark.


🔥 News

  • Apr. 14, 2025 🌟 We are proud to announce that MCPBench is now open-sourced.

🛠️ Installation

The framework requires Python >= 3.11, Node.js, and jq.

conda create -n mcpbench python=3.11 -y
conda activate mcpbench
pip install -r requirements.txt
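
If Node.js and jq are not already installed, one option is to add them to the same conda environment from conda-forge (an illustrative example; any system package manager works just as well):

conda install -c conda-forge nodejs jq -y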

🚀 Quick Start

LLM Configuration

Set the LLM API key and endpoint as environment variables:

export MODEL_KEY=your_api_key_here
export MODEL_ENDPOINT=your_model_endpoint_here

Launch MCP Server

Launch stdio MCP as SSE

If the MCP Server does not support SSE, write a config like:

{
  "name": "DuckDuckGo",
  "command": "uvx duckduckgo-mcp-server",
  "args": "",
  "port": 8001,
  "tool_name": "search",
  "tool_keyword": "query"
}

Save this config file in the configs folder and launch it using:

sh launch_mcp_as_sse.sh YOUR_CONFIG_FILE

For example, if the config file is duckduckgo.json, then run:

sh launch_mcp_as_sse.sh duckduckgo.json

Launch SSE MCP

If your server already supports SSE, you can use its URL directly. For a server launched with the script above on port 8001, the URL will be http://localhost:8001/sse

For an MCP Server that supports SSE, write a config like:

{
  "name": "Exa Search",
  "command": "",
  "args": "",
  "url": "https://mcp-xxxx.api-inference.modelscope.cn/sse",
  "port": 0,
  "tool_name": "web_search",
  "tool_keyword": "query"
}

where the url can be generated from the MCP market on ModelScope.
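
Before running an evaluation, it can be helpful to confirm that the SSE endpoint is reachable. A quick check (not part of the framework, shown only as an illustration) is to open the stream with curl; the server should respond with a text/event-stream, which you can interrupt with Ctrl-C. For a hosted server, substitute the url from your config:

curl -N http://localhost:8001/sse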

Launch Evaluation

To evaluate the MCP Server's performance on Web Search tasks:

sh evaluation_websearch.sh YOUR_CONFIG_FILE

To evaluate the MCP Server's performance on Database Query tasks:

sh evaluation_db.sh YOUR_CONFIG_FILE

🧂 Datasets and Experiments

Our framework provides two datasets for evaluation. For the WebSearch task, the dataset is located at MCPBench/langProBe/WebSearch/data/frames_test.jsonl and contains 200 QA pairs each from the Frames, news, and technology domains. The framework we use to automatically construct evaluation datasets will be open-sourced later.

For the Database Query task, the dataset is located at MCPBench/langProBe/DB/data/car_bi.jsonl. You can add your own dataset in the following format:

{
  "unique_id": "",
  "Prompt": "",
  "Answer": ""
}
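
For illustration, a single line of the JSONL file might look like the following; the values are hypothetical and only show how the fields are filled in:

{"unique_id": "car_bi_001", "Prompt": "How many cars were sold in 2023?", "Answer": "1245"}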

We have evaluated mainstream MCP Servers on both tasks. For detailed experimental results, please refer to the Documentation.

🚰 Cite

If you find this work useful, please consider citing our project:

@misc{mcpbench,
  title={MCPBench: A Benchmark for Evaluating MCP Servers},
  author={Zhiling Luo and Xiaorong Shi and Xuanrui Lin and Jinyang Gao},
  howpublished={\url{https://github.com/modelscope/MCPBench}},
  year={2025}
}

Alternatively, you may reference our report.

@article{mcpbench_report,
  title={Evaluation Report on MCP Servers},
  author={Zhiling Luo and Xiaorong Shi and Xuanrui Lin and Jinyang Gao},
  year={2025},
  journal={arXiv preprint arXiv:2504.11094},
  url={https://arxiv.org/abs/2504.11094},
  primaryClass={cs.AI}
}