GitHub MCP exploited: Accessing private repositories via MCP

Original link: https://invariantlabs.ai/blog/mcp-github-vulnerability

Invariant discovered a critical vulnerability in a popular GitHub MCP integration: an attacker can hijack a user's agent through a malicious GitHub Issue. This "toxic agent flow" lets the attacker coerce the agent into leaking data from private repositories, and even aligned models such as Claude 4 Opus are not immune. The attack uses indirect prompt injection in a public repository to trick the agent into extracting and exposing sensitive information while processing a seemingly benign list of issues. This highlights a fundamental security gap beyond model alignment that requires system-level safeguards.

Invariant recommends two key mitigation strategies: granular permission controls (restricting the agent to specific repositories) and continuous security monitoring with a dedicated scanner such as Invariant's MCP-scan. A dynamic runtime security layer such as Invariant Guardrails can additionally enforce context-aware access control and prevent cross-repository data leaks. This proactive approach is essential for protecting agent systems and MCP integrations against evolving threats.

Hacker News discussion (108 points, submitted by gokhan): comments moved to https://news.ycombinator.com/item?id=44097390
A recent report revealed a vulnerability involving GitHub's MCP (Model Context Protocol) integration that attackers can exploit to induce a large language model (LLM) with access to both public and private repositories to leak private data. The attacker injects malicious instructions into a public GitHub issue; an LLM configured with broad access then executes those instructions, potentially exposing sensitive information from private repositories.

Many commenters criticized granting LLMs overly broad permissions, stressing the importance of fine-grained access control and user awareness. They argued the vulnerability stems from trusting untrusted data and granting the LLM too much authority. Some suggested treating the LLM as a potential adversary and sandboxing it accordingly; others pointed to the need for security-conscious design in AI systems and warned against relying solely on LLM-based guardrails. While debate over severity continued, the discussion underscored the risks of combining LLMs, private data, and untrusted input, and the need for improved security practices.

Original article

Invariant has discovered a critical vulnerability affecting the widely-used GitHub MCP integration (14k stars on GitHub). The vulnerability allows an attacker to hijack a user's agent via a malicious GitHub Issue, and coerce it into leaking data from private repositories.

The issue is among the first discovered by Invariant's automated security scanners for detecting so-called toxic agent flows. In such a scenario, an agent is manipulated into performing unintended actions, such as leaking data or executing malicious code. For more information, see below.

Raising awareness about this issue is especially timely, as the industry is racing to deploy coding agents and IDEs widely, potentially exposing users to similar attacks on critical software development tools.


Attack Setup

In this attack setup, the user is running an MCP client such as Claude Desktop with the GitHub MCP server connected to their account.

We assume the user has created two repositories:

  • <user>/public-repo: A publicly accessible repository, allowing everyone on GitHub to create issues and bug reports.
  • <user>/private-repo: A private repository, e.g. with proprietary code or private company data.

Under GitHub's standard rules, an attacker can now create a malicious issue on the public repository containing a prompt injection that lies in wait for the agent to interact with it.

The actual attack triggers as soon as the user and owner of the GitHub account queries their agent with a benign request, such as "Have a look at the open issues in <user>/public-repo", which leads the agent to fetch the issues from the public repository and ingest the injection.

See below for an illustration of the ensuing flow.

As shown here, as soon as the agent encounters the malicious GitHub issue, it can be coerced into pulling private repository data into context, and leaking it in an autonomously-created PR in the public repository, freely accessible to the attacker or anyone else.
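Concretely, the toxic flow can be modeled as a trace of tool calls. The Python sketch below uses illustrative tool and repository names (not the actual GitHub MCP tool schema) to show the injected sequence, together with a naive check that flags any trace touching more than one repository:

```python
# Hypothetical reconstruction of the toxic agent flow as a tool-call trace.
# Tool and repo names are illustrative, not the real GitHub MCP schema.
toxic_flow = [
    # 1. Benign request: the agent lists issues on the public repo
    #    and reads the attacker's injected issue body.
    {"tool": "list_issues", "owner": "user", "repo": "public-repo"},
    # 2. Injected instructions: pull private repository data into context.
    {"tool": "get_file_contents", "owner": "user", "repo": "private-repo"},
    # 3. Leak: open a PR on the public repo containing the private data.
    {"tool": "create_pull_request", "owner": "user", "repo": "public-repo"},
]

def repos_touched(trace):
    """Return the set of (owner, repo) pairs a trace interacts with."""
    return {(call["owner"], call["repo"]) for call in trace}

def is_cross_repo(trace):
    """A trace touching more than one repository is a candidate toxic flow."""
    return len(repos_touched(trace)) > 1
```

A check this coarse would flag legitimate multi-repo work too; it only illustrates why the read-private-then-write-public pattern is detectable at the trace level.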

Toxic Flows

We call this use of indirect prompt injection to trigger a malicious tool-use sequence a toxic agent flow. We found this vulnerability by applying Invariant's security analyzer to GitHub MCP, allowing us to automate the process of discovering the flow in the wild.

Attack Demonstration

To illustrate more concretely, we implement this attack practically using a set of demo repositories:

  • ukend0464/pacman: A public repository with a simple implementation of a Pacman game (available here)
  • Multiple private repositories containing personal projects and sensitive information about the user.

'About The Author' Injection

We now place a malicious issue in the public repository, which is accessible to the attacker. The issue contains a payload that will be executed by the agent as soon as it queries the public repository's list of issues.

A malicious GitHub issue injecting the agent

User Interaction

To trigger the attack, the user merely prompts Claude 4 Opus with the following request:

Claude prompt used to trigger the attack

Claude then uses the GitHub MCP integration to follow the instructions. Throughout this process, Claude Desktop by default requires the user to confirm individual tool calls. However, many users already opt for an “Always Allow” confirmation policy when using agents, and stop monitoring individual actions.

Attack Rollout

The agent now goes through the list of issues until it finds the attack payload. It willingly pulls private repository data into context, and leaks it into a pull request on the pacman repo, which is freely accessible to the attacker since it is public.

The pull request contains the following new information:

commit where the agent leaks private data

We thus successfully exfiltrated several pieces of private information about our user ukend0464: information about their private repositories, such as Jupiter Star, their plan to relocate to South America, and even their salary.

Below, we include a screenshot of the full chat with the agent, showing its reasoning and tool use sequence in action.

Full chat with the agent, showing its reasoning and the attack in action.

Detecting Toxic Agent Flows

Unlike previously-discovered tool poisoning attacks with MCP, this vulnerability does not require the MCP tools themselves to be compromised. Instead, the issue emerges even with fully trusted tools, as agents can be exposed to untrusted information when connected to external platforms like GitHub.

Understanding, analyzing, and mitigating such issues in agentic systems is a highly complex undertaking that's difficult to perform manually and at scale. To address this challenge, Invariant has developed automated methods for detecting toxic agent flows, enabling organizations to identify and model potential threats before they can be exploited by malicious actors.

If you're interested in conducting a comprehensive threat analysis of your agent systems and tools, please contact us at [email protected]. We'll be happy to onboard you to our early access security program. Below is a preview of our security analyzer in action.

Toxic flows preview
Preview: Invariant's security analyzer for proactively detecting toxic agent flows.

Scope and Mitigations

While our experiments focused on Claude Desktop, the vulnerability is not specific to any particular agent or MCP client. It affects any agent that uses the GitHub MCP server, regardless of the underlying model or implementation.

Importantly, this is not a flaw in the GitHub MCP server code itself, but rather a fundamental architectural issue that must be addressed at the agent system level. This means that GitHub alone cannot resolve this vulnerability through server-side patches.

We thus recommend the following two key mitigation strategies to prevent such attacks and strengthen the security posture of your agent systems.

Enforce Dataflow Rules

1. Implement Granular Permission Controls

When using MCP integrations like GitHub's, it's critical to limit agent access to only the repositories it needs to interact with—following the principle of least privilege. While traditional token-based permissions offer some protection, they often impose rigid constraints that limit an agent's functionality.
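As a minimal sketch of client-side least privilege (a hypothetical dispatch wrapper, not part of the GitHub MCP server; the owner/repo argument names mirror GitHub's tools but are assumptions here), tool calls can be filtered against an explicit repository allowlist before being forwarded:

```python
# Hypothetical client-side allowlist wrapper enforcing least privilege.
ALLOWED_REPOS = {("user", "public-repo")}  # assumed allowlist for this session

def guarded_dispatch(tool, arguments, dispatch):
    """Forward a tool call only if it targets an allowlisted repository."""
    target = (arguments.get("owner"), arguments.get("repo"))
    if target not in ALLOWED_REPOS:
        raise PermissionError(f"blocked {tool} on {target[0]}/{target[1]}")
    return dispatch(tool, arguments)
```

A static allowlist like this is exactly the kind of rigid token-style constraint described above: it blocks the leak, but also blocks legitimate multi-repository workflows, which is where runtime guardrails come in.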

For more effective security without sacrificing capability, we recommend implementing dynamic runtime security layers specifically designed for agent systems. Solutions like Invariant Guardrails provide context-aware access control that adapts to your agent's workflow while enforcing security boundaries.

To illustrate, here's an example policy that prevents cross-repository information leaks using Invariant Guardrails:

raise Violation("You can access only one repo per session.") if:
    (call_before: ToolCall) -> (call_after: ToolCall)

    call_before.function.name in (...set of repo actions)
    call_after.function.name in (...set of repo actions)

    call_before.function.arguments["repo"] != call_after.function.arguments["repo"] or
    call_before.function.arguments["owner"] != call_after.function.arguments["owner"]

You can find the complete policy here. See the MCP-scan documentation for more information on how to apply this policy to your MCP deployments.

This approach effectively restricts an agent to working with only one repository per session, preventing cross-repository information leakage while maintaining full functionality within authorized boundaries.
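The semantics of that one-repo-per-session rule can be approximated in plain Python. This is a sketch of the rule's behavior, not Invariant's implementation, and the set of repo actions is an assumed stand-in for the elided "(...set of repo actions)" in the policy:

```python
# Approximate semantics of the one-repo-per-session policy.
# REPO_ACTIONS is an assumed stand-in for "(...set of repo actions)".
REPO_ACTIONS = {"list_issues", "get_file_contents", "create_pull_request"}

class SingleRepoSession:
    """Pin the session to the first repository a repo action touches."""

    def __init__(self):
        self.repo = None  # (owner, repo) pinned by the first repo action

    def check(self, tool, arguments):
        """Raise if a repo action targets a different repo than the first."""
        if tool not in REPO_ACTIONS:
            return  # non-repo tools are unconstrained
        target = (arguments["owner"], arguments["repo"])
        if self.repo is None:
            self.repo = target
        elif self.repo != target:
            raise RuntimeError("You can access only one repo per session.")
```

Applied to the attack above, the session is pinned to the public repository by the initial issue listing, so the injected attempt to read the private repository is rejected before any data enters the agent's context.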

To experiment more with Guardrails, you can also use the Guardrails Playground to test policies before deploying them.

Inspect with Explorer

2. Conduct Continuous Security Monitoring

Beyond preventative measures, implement robust monitoring solutions to detect and respond to potential security threats in real time. We recommend deploying specialized security scanners such as Invariant's MCP-scan to continuously audit interactions between agents and MCP systems.

The recently introduced proxy mode in MCP-scan significantly simplifies this process by enabling real-time security scanning of MCP connections without requiring modifications to your existing agent infrastructure. Simply route your MCP traffic through the proxy to gain immediate visibility and real-time scanning for potential security violations.

Implementing comprehensive monitoring also creates an audit trail that helps identify potential vulnerabilities, detect exploitation attempts, and ensure your agent systems remain protected against emerging attacks.

Why Model Alignment Is Not Enough

As demonstrated by our findings, even state-of-the-art aligned models are vulnerable to these attacks. In our experiments, we used Claude 4 Opus, a very recent, highly aligned and secure AI model. Despite its robust safety training, the agent was still susceptible to manipulation through relatively simplistic prompt injections. Similarly, many off-the-shelf prompt-injection detectors fail to catch this attack.

The vulnerability persists because the security of agent systems is fundamentally contextual and environment-dependent. While general model alignment training creates some guardrails, it cannot anticipate the specific security requirements of every deployment scenario or organizational context. Security measures must be implemented at the system level, complementing model-level safeguards.

Conclusion

In this blog post, we have shown a critical vulnerability affecting the GitHub MCP server, allowing attackers to hijack a user's agent via a malicious GitHub Issue, and coerce it into leaking data from private repositories. The vulnerability is among the first discovered by Invariant's security analyzer for detecting toxic agent flows.

While the vulnerability we uncovered is specific to GitHub MCP, similar attacks keep emerging in other settings. For instance, Legit Security recently reported a comparable vulnerability in GitLab Duo.

It is crucial to safeguard agent systems and MCP integrations using designated security scanners such as Invariant's MCP-scan and Guardrails to ensure responsible deployment at scale.

Work With Us

If you are interested in learning more about how to secure your agent systems, please reach out to us at [email protected]. We are happy to onboard you to our early access security program, and help you secure your agent systems.

Authors:

Marco Milanta
Luca Beurer-Kellner
