MCP Specification – 2025-06-18

Original link: https://modelcontextprotocol.io/specification/2025-06-18

The Model Context Protocol (MCP) is an open standard that enables seamless integration between large language model (LLM) applications and external data and tools. It uses JSON-RPC to facilitate communication among the following components: hosts (LLM applications), clients (connectors), and servers (providers of context and capabilities). MCP's main features: servers can offer resources (contextual data), prompts (templated messages), and tools (executable functions); clients can offer servers sampling (agentic behavior), roots (filesystem access), and elicitation (requests for additional information from users). Crucially, MCP emphasizes security and trust. User consent and control over data access and tool execution are paramount. Hosts must obtain explicit consent before sharing user data or invoking tools, and should implement robust authorization flows. This keeps LLM interactions transparent and under user control, mitigating the risks associated with arbitrary data access and code execution. Implementors should prioritize security best practices and privacy in their designs.


Original Text

Model Context Protocol (MCP) is an open protocol that enables seamless integration between LLM applications and external data sources and tools. Whether you’re building an AI-powered IDE, enhancing a chat interface, or creating custom AI workflows, MCP provides a standardized way to connect LLMs with the context they need.

This specification defines the authoritative protocol requirements, based on the TypeScript schema in schema.ts.

For implementation guides and examples, visit modelcontextprotocol.io.

The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “NOT RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.

Overview

MCP provides a standardized way for applications to:

  • Share contextual information with language models
  • Expose tools and capabilities to AI systems
  • Build composable integrations and workflows

The protocol uses JSON-RPC 2.0 messages to establish communication between:

  • Hosts: LLM applications that initiate connections
  • Clients: Connectors within the host application
  • Servers: Services that provide context and capabilities
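As a concrete illustration, the TypeScript sketch below shows the JSON-RPC 2.0 framing these parties exchange. The type shapes and the `ping` exchange are a simplified sketch of the wire format, not the normative schema from schema.ts:

```typescript
// Sketch of the JSON-RPC 2.0 message shapes MCP builds on.
// Simplified for illustration; the normative definitions live in schema.ts.
type JsonRpcRequest = {
  jsonrpc: "2.0";
  id: number | string;
  method: string;
  params?: Record<string, unknown>;
};

type JsonRpcResponse = {
  jsonrpc: "2.0";
  id: number | string; // matches the id of the request being answered
  result?: Record<string, unknown>;
  error?: { code: number; message: string };
};

// A client checking liveness with the ping method, and the server's reply:
const pingRequest: JsonRpcRequest = { jsonrpc: "2.0", id: 1, method: "ping" };
const pingResponse: JsonRpcResponse = { jsonrpc: "2.0", id: 1, result: {} };
```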

MCP takes some inspiration from the Language Server Protocol, which standardizes how to add support for programming languages across a whole ecosystem of development tools. In a similar way, MCP standardizes how to integrate additional context and tools into the ecosystem of AI applications.

Key Details

Base Protocol

  • JSON-RPC message format
  • Stateful connections
  • Server and client capability negotiation
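Capability negotiation happens during the initialize handshake. A sketch, with illustrative client/server names and a hypothetical set of advertised capabilities:

```typescript
// Sketch of the initialize handshake. Each side advertises only the
// capabilities it supports; field values here are illustrative.
const initializeRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2025-06-18",
    capabilities: { roots: { listChanged: true }, sampling: {} },
    clientInfo: { name: "example-client", version: "1.0.0" },
  },
};

// The server answers with its own capabilities and identity:
const initializeResult = {
  protocolVersion: "2025-06-18",
  capabilities: { tools: { listChanged: true }, resources: {} },
  serverInfo: { name: "example-server", version: "1.0.0" },
};
```

After this exchange, each side knows which optional features (tools, resources, sampling, and so on) the other is prepared to use.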

Features

Servers offer any of the following features to clients:

  • Resources: Context and data, for the user or the AI model to use
  • Prompts: Templated messages and workflows for users
  • Tools: Functions for the AI model to execute
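A sketch of what a server might advertise for each of these feature classes. The specific URIs, names, and schemas below are hypothetical:

```typescript
// Hypothetical examples of the three server feature classes.

// A resource: context/data identified by a URI.
const resource = {
  uri: "file:///project/README.md",
  name: "Project README",
  mimeType: "text/markdown",
};

// A prompt: a templated message the user can invoke.
const prompt = {
  name: "summarize",
  description: "Summarize a document for the user",
  arguments: [{ name: "uri", required: true }],
};

// A tool: a function the model can call, with a JSON Schema for its input.
const tool = {
  name: "get_weather",
  description: "Fetch current weather for a city",
  inputSchema: {
    type: "object",
    properties: { city: { type: "string" } },
    required: ["city"],
  },
};
```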

Clients may offer the following features to servers:

  • Sampling: Server-initiated agentic behaviors and recursive LLM interactions
  • Roots: Server-initiated inquiries into URI or filesystem boundaries to operate in
  • Elicitation: Server-initiated requests for additional information from users
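Each of these is a request flowing in the server-to-client direction. A sketch with illustrative payloads:

```typescript
// Hypothetical server-initiated requests for each client feature.

// Sampling: ask the host's LLM to generate a completion.
const samplingRequest = {
  method: "sampling/createMessage",
  params: {
    messages: [
      { role: "user", content: { type: "text", text: "Summarize the diff." } },
    ],
    maxTokens: 200,
  },
};

// Roots: ask which URI/filesystem boundaries the server may operate in.
const rootsRequest = { method: "roots/list" };

// Elicitation: ask the user for additional structured input.
const elicitationRequest = {
  method: "elicitation/create",
  params: {
    message: "Which branch should I analyze?",
    requestedSchema: {
      type: "object",
      properties: { branch: { type: "string" } },
    },
  },
};
```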

Additional Utilities

  • Configuration
  • Progress tracking
  • Cancellation
  • Error reporting
  • Logging
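Two of these utilities, sketched as the notifications that carry them (notifications have no `id` and expect no response; values are illustrative):

```typescript
// Sketch of progress tracking: periodic updates tied to a progress token.
const progressNotification = {
  jsonrpc: "2.0",
  method: "notifications/progress",
  params: { progressToken: "op-42", progress: 50, total: 100 },
};

// Sketch of cancellation: either side can abandon an in-flight request.
const cancelledNotification = {
  jsonrpc: "2.0",
  method: "notifications/cancelled",
  params: { requestId: 7, reason: "User cancelled the operation" },
};
```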

Security and Trust & Safety

The Model Context Protocol enables powerful capabilities through arbitrary data access and code execution paths. With this power comes important security and trust considerations that all implementors must carefully address.

Key Principles

  1. User Consent and Control

    • Users must explicitly consent to and understand all data access and operations
    • Users must retain control over what data is shared and what actions are taken
    • Implementors should provide clear UIs for reviewing and authorizing activities
  2. Data Privacy

    • Hosts must obtain explicit user consent before exposing user data to servers
    • Hosts must not transmit resource data elsewhere without user consent
    • User data should be protected with appropriate access controls
  3. Tool Safety

    • Tools represent arbitrary code execution and must be treated with appropriate caution.
      • In particular, descriptions of tool behavior such as annotations should be considered untrusted, unless obtained from a trusted server.
    • Hosts must obtain explicit user consent before invoking any tool
    • Users should understand what each tool does before authorizing its use
  4. LLM Sampling Controls

    • Users must explicitly approve any LLM sampling requests
    • Users should control:
      • Whether sampling occurs at all
      • The actual prompt that will be sent
      • What results the server can see
    • The protocol intentionally limits server visibility into prompts

Implementation Guidelines

While MCP itself cannot enforce these security principles at the protocol level, implementors SHOULD:

  1. Build robust consent and authorization flows into their applications
  2. Provide clear documentation of security implications
  3. Implement appropriate access controls and data protections
  4. Follow security best practices in their integrations
  5. Consider privacy implications in their feature designs
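Guideline 1 might look like the following host-side sketch, where a tool call is gated behind an explicit user decision. The names `invokeToolWithConsent`, `askUser`, and `callTool` are hypothetical; `askUser` stands in for a real consent UI:

```typescript
// Hypothetical consent gate: the tool only runs after explicit user approval.
async function invokeToolWithConsent(
  askUser: (question: string) => Promise<boolean>, // consent UI stand-in
  callTool: (name: string, args: object) => Promise<unknown>, // actual invocation
  name: string,
  args: object,
): Promise<unknown> {
  const approved = await askUser(
    `Allow tool "${name}" to run with ${JSON.stringify(args)}?`,
  );
  if (!approved) {
    // Refuse rather than silently proceed.
    throw new Error(`User declined tool "${name}"`);
  }
  return callTool(name, args);
}
```

A real host would also surface the tool's description to the user (treating it as untrusted, per Tool Safety above) and log the decision.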

Learn More

Explore the detailed specification for each protocol component at modelcontextprotocol.io.
