43 comments

  • Jimmc414 a day ago ago

    Some of the comments seem to imply that MCP servers should be safe to connect to regardless of trust level, like websites you can safely visit.

    But MCP servers are more analogous to a PyPI packages you pip install, npm modules you add to your project or a VSCode extension.

    Nobody would argue that pip is fundamentally broken because running pip install malicious-package can compromise your system. That's expected behavior when you execute untrusted code.

    • mehdibl a day ago ago

      There is confusion.

      1. Not all MCP tools connect to the web or fetch emails. So the shortcut all MCP's are doomed is also the wrong way to adress this.

      2. Issue is with MCP with untrusted external sources like web/email that need sanitization like we do with web forms.

      3. A lot of warning point bad MCP's! But that apply to any code you might download/ use from the internet. Any package can be flawed. Are you audit them all?

      So yeah, on my side I feel this security frenzy over MCP is over hyped. VS the real risk and there is a lot of shortcuts, masking a key issue that is supply chain as an MCP owned issue here and I see that in so many doom comment here.

      • esseph a day ago ago

        This is a blanket statement, just an anecdote from my career.

        Every developer I have ever met that wasn't in the security space underestimates security problems. Every one.

        YMMV

        • mehdibl a day ago ago

          I'm in the security space so? And been deep in this MCP thingy.

          Did you check where I pointed the root issues?

          All I'm trying to say there is shortcuts, and confusing over the hype buzz too that is on going in AI, as MCP took off, I so a lot of of papers with IF IF IF condition to point security issues in MCP, while most of the them expect you to pick random stuff at the start. This is why I'm saying "Supply chain" is not MCP. As you can't blame MCP for issue coming from random code you pick. MCP is a transport protocol, you can do similar without MCP but you have to bake your tools inside the AI app, thus loosing the plug & play ability.

          • datadrivenangel a day ago ago

            You are correct that it is possible to use MCP securely. Like if you build a custom client, and only use trusted third party servers one at a time.

            But the hype-promise of "AI" is that you can make the commercial off the shelf ClaudeGPT client magically discover MCP servers and automate everything. And if the majority of people's expectations require vulnerability, you're going to have a bad time.

          • esseph a day ago ago

            Ultimately to use Agentic AI, you have to put faith in the model, the training data, the chain of custody, the authentication, the network discovery and connectivity between components, the other tools themselves that get called, and their chain of custody, etc.

            It's a massive liability.

            Maybe future history will prove me wrong.

    • jdns a day ago ago

      i'd honestly say it's closer (but not analogous) to opening a website in your browser. you wouldn't expect javascript on a website to be able to escape the sandbox and run arbitrary code on your computer.

      companies taking this seriously and awarding bounties is indicative it's fairly severe

      • datadrivenangel a day ago ago

        Malware from untrusted websites is as old as the internet. With advertisements, even trusted sites can deliver hostile content.

        The RCE/Malware issue aside, if the website you go to is a login page for some service, do you know it's the legitimate website? MCP Phishing is going to be a thing

      • mehdibl a day ago ago

        this issue is not even MCP at the core. Claude Code/ Gemini CLI were opening "url's" without sanitization and validation. That's the core flaw. There is a second issue with an XSS flawed package too in the bridge that is easy to patch.

        So there is a chain of issues and you need to leverage them to get there and first pick an MCP that is flawed from a bad actor.

        • jdns a day ago ago

          yeah, i was comparing MCP clients to browsers. connecting to an MCP shouldn't leave you vulnerable to RCE on your host.

          also, the way MCP servers are presented right now is in sort of a "marketplace" fashion meaning it's not out of the question you could find one hosted by a bad actor. PyPI/npm are also like this, but it's different since it's not like you can vet the source code of a running MCP. packages are also versioned, unlike MCP where whoever is hosting them can change the behaviour at any time without notice.

      • fulafel 17 hours ago ago

        JS has been able to escape the sandbox as long as browsers had JS support.

        The stream of vulnerability discoveries has been constant.

  • Jimmc414 a day ago ago

    Also, while I'm generally uncomfortable with being in a position to defend Google, it's a bit questionable calling the Google fix "not very robust" for escaping single quotes in PowerShell.

    Perhaps minimal, but this does in fact prevent the specific attack vector they demonstrated. The criticism seems unnecessarily harsh given that Google addressed the vulnerability immediately.

  • eranation a day ago ago

    With my limited understanding of LLMs and MCPs (and please correct me if I'm wrong), even without having to exploit an XSS vulnerability as described in the post (sorry for being slightly off topic), I believe MCPs (and any tool calls protocol) suffer from a fundamental issue, a token is a token, hence prompt injection is probably impossible to 100% protect against. The main root cause of any injection attack is the duality of input, we use bytes, (and in many cases in the form of a string) to convey both commands and data, "rm -rf /" can be an input in a document about dangerous commands, or a command passed to a shell command executor by a tool call. To mitigate such injection attacks, in most programming language there are ways to clearly separate data from commands, in the most basic way, via deterministic lexical structure (double quotes) or or escaping / sanitizing user input, denly-list of dangerous keywords (e.g. "eval", "javascript:", "__proto__") or dedicated DSLs for building commands that pass user input separately (Stored procedures, HTML builders, shell command builders). The solution to the vulnerability in the post is one of them (sanitizing user input / deny-list)

    But even if LLMs will have a fundamental hard separation between "untrusted 3rd party user input" (data) and "instructions by the 1st party user that you should act upon" (commands) because LLMs are expected to analyze the data using the same inference models as interpreting commands, there is no separate handling of "data" input vs "command" input to the best of my understanding, therefore this is a fundamentally an unsolvable problem. We can put guardrails, give MCPs least privilege permissions, but even with that confused deputy attacks can and will happen. Just like a human can be fooled by a fake text from the CEO asking them to help them reset their password as they are locked out before an important presentation to a customer, and there is no single process that can 100% prevent all such phishing attempts, I don't believe there will be a 100% solution to prevent prompt injection attacks (only mitigated to become statistically improbable or computationally hard, which might be good enough)

    Is this a well known take and I'm just exposing my ignorance?

    EDIT: my apologies if this is a bit off topic, yes, it's not directly related to the XSS attack in the OP post, but I'm past the window of deleting it.

    • Jimmc414 a day ago ago

      While this vulnerability has nothing to do with prompt injection or LLMs interpreting tokens, you do raise a debatable point about prompt injection being potentially unsolvable.

      edit: after parent clarification

      • eranation a day ago ago

        Yes, my bad, I'm not talking about this particular XSS attack, I'm wondering if MCPs in general have a fundamental injection problem that isn't solvable, indeed a bit off topic.

        edit: thanks for the feedback!

    • mattigames a day ago ago

      Aside from being offtopic or not I want to add that it is indeed well known https://news.ycombinator.com/item?id=41649832

      • wunderwuzzi23 15 hours ago ago

        Thanks for sharing! I'm actually the person the Ars Technica article references. :)

        For recent examples check out my Month of AI bugs with of a focus on coding agents at https://embracethered.com/blog/posts/2025/wrapping-up-month-...

        Lots of interesting new prompt injection exploits, from data exfil via DNS to remote code execution by having agents rewrite their own configuration settings.

      • eranation a day ago ago

        Thanks! Although thinking of it, while it's not deterministically solvable, I'm sure something like this is what currently being done, e.g, let's say <user-provided-input> </user-provided-input> <tool-response></tool-response> are agreed upon tags to demarcate user generated input, then sanitizing is merely, escaping any injected closing tag, (e.g. </user-provided-input>) to &lt;/user-provided-input&gt; (and flagging it as an injection attempt)

        Then we just need to train LLMs to 1. not treat user provided / tool provided input as instructions (although sometimes this is the magic, e.g. after doing tool call X, do tool call Y, but this is something the MCP authors will need to change, by not just being an API wrapper...)

        2. distinguish between a real close tag and an escaped one, although unless it's "hard wired" somewhere in the inference layer, it's only a matter of statistically improbable for an LLM to "fall for it" (I assume some will attempt, e.g. convince the LLM there is instruction from OpenAI corporate to change how these tags are escaped, or that there is a new tag, I'm sure there are ways to bypass it, but it's probably going to make it less of an issue).

        I assume this is what currently being done?

        • brap a day ago ago

          The problem is that once you load a tool’s response into context, there’s no telling what the LLM will do. You can escape it all you want, but maybe it contains the right magic words you haven’t thought of.

          The solution is to not load it into context at all. I’ve seen a proposal for something like this but I can’t find it (I think from Google?). The idea is (if I remember it correctly) to spawn another dedicated (and isolated) LLM that would be in charge of the specific response. The main LLM would ask it questions and the answers would be returned as variables that it may then pass around (but it can’t see the content of those variables).

          Edit: found it. https://arxiv.org/abs/2503.18813

          Then there’s another problem: how do you make sure the LLM doesn’t leak anything sensitive via its tools (not just the payload, but the commands themselves can encode information)? I think it’s less of a threat if you solve the first problem, but still… I didn’t see a practical solution for this yet.

          • eranation 2 hours ago ago

            Thanks for the link to the article, very interesting!

  • 0xbadcafebee a day ago ago

    I know secure code isn't easy to write, but every line of code I've seen come from AI companies (including the big ones) has looked like an unpaid intern wrote it. "Do you trust AI" is not the right question; it's "Do you trust the engineers building AI products?" So far I don't. It doesn't help that it all feels like a repeat of "move fast, break stuff".

  • behnamoh a day ago ago

    I've disabled all MCP servers on my machine until this security nightmare is fully resolved.

    MCP is not that elegant anyway, looks more like a hack and ignores decades of web dev/security best practices.

    • mehdibl a day ago ago

      What the issues, if you use quality MCP tools?

      Also MCP is only transport and there is a lot of mixup to blame the MCP, as most of the prompt injection and similar come from the "TOOLS" behind the MCP. Not MCP as it self here.

      Seem this security hype forget one key point: Supply chain & trusted sources.

      What is the risk running an MCP server from Microsoft? Or Anthropic? Google?

      All the reports explain attacks using flawed MCP servers, so from sources that either are malicious or compromised.

      • agoodusername63 a day ago ago

        > What the issues, if you use quality MCP tools?

        Really doesn't help when discovery of "quality" MCP tools, whatever that means, is so difficult.

  • electric_muse a day ago ago

    MCP feels like the 1903 Wright Flyer right now.

    MCP is a novel technology that will probably transform our world, provides numerous advantages, comes with some risks, and requires skill to operate effectively.

    Sure, none of the underlying technologies (JSON-RPC, etc.) are particularly novel. But the capability negotiation handshake built into the protocol is pretty darn powerful. It's a novel use of existing stuff.

    I spent years in & around the domain of middleware and integrations. There's something really special about the promise of universal interoperability MCP offers.

    Just like early-aviation, there are going to be tons of risks. But the upside is pretty compelling and worth the risks. You could sit around waiting for the kinks to get worked out or dive in and help figure out those kinks.

    In fact, it seems I'm the first person to seriously draw attention to the protocol's lack of timeout coordination, which is a serious problem[0]. I'm just a random person in the ecosystem who got fed up with timeout issues and realized it's up to all of us to fix the problems as we see them. So there's still plenty of opportunity out there to jump in and contribute.

    Kudos to this team for responsibly contributing what they found. These risks are inherent in any new technology.

    [0]: https://github.com/modelcontextprotocol/modelcontextprotocol...

    • troupo a day ago ago

      Neither the protocol, nor the technologies it uses, nor the capabilities it exposes are new or even novel.

      What is novel is the "yolo vibe code protocol with complete disregard to any engineering practices, and not even reading at least something about that was there before". That is, it's world's first widely used vibe-coded protocol.

      That's why you have one-way protocols awkwardly wrapped to support two-way communication (are they on their third already?). That's why auth is an afterthought. That's why there's no timeout coordination.

      • electric_muse a day ago ago

        Agreed. I think most can agree that the protocol itself leaves a lot to be desired.

        But the idea itself is compelling: documentation + invocation in a bi-directional protocol. And enough real players have thrown their weight behind making this thing work that it probably some day will.

        I don't understand fully the "it's immature so it's worthy or ridicule" rationale so much. Don't most good things start out really rough around the edges? Why does MCP get so much disdain?

        • blcknight a day ago ago

          The problem is the roll out as the bees knees by anthropic, when its.. just some JSON slop without a ton of careful thought behind it.

          I think it should be mostly thrown away and start over with an MCPv2 that has first class auth, RBAC/identity, error handling, quotas, human-in-the-loop controls, and more.

      • a day ago ago
        [deleted]
  • caust1c a day ago ago

    Good research. I'm glad people are hopping on this. Lots of surface area to cover and not enough time!

  • fennecbutt a day ago ago

    Unsurprising. I've left many a comment on what I think of MCP and so have many others.

    I'm still not sure why everyone's acting like it's some well thought out system and not just tool descriptions shoveled into JSON and then shoved at an LLM. It's not a fundamental architectural change to enhance tool calls, it just got given a fancy name.

    I do get that having a common structure for tool calling is very convenient but it's not revolutionary. What's revolutionary is everyone training their models for a tool calling spec and I'm just not sure that we've seen that yet.

    • CuriouslyC a day ago ago

      MCP is legit bad, and it won't last long, just polluting context with MCP output alone is enough to make it a poor long term solution. We're going to end up with some sort of agent VM, where tool data can be conditionally expanded for processing in a given turn without persistently polluting context (think context templates).

      • mehdibl a day ago ago

        MCP is only transport protocol here.

        And you need tools to connect to external "systems", the context "pollution" can be managed easily. Also even if you don't use MCP you need to use tools and they need to expose their schema to the AI model.

        I feel the MCP hype over bad security got a lot confused and very defensive over MCP or more globally tools use.

    • amannm a day ago ago

      MCP doesn't make any sense to exist at this point in time. All you need is CLIs and existing service interfaces. We don't need a new protocol for something whose purpose is to make more protocols unnecessary

    • moduspol a day ago ago

      LLMs are supposed to be smart. Why can't I point it to API docs and have it call an API?

      And why wouldn't we move toward that direction instead of inventing a new protocol?

    • greysteil a day ago ago

      I dunno, I’m still pretty surprised the MCP server auth process could pop a calculator on widely adopted clients. The protocol isn’t perfect but that’s totally unnecessary unsafe. Glad it’s fixed!

      • orphea a day ago ago

          > Glad it’s fixed!
        
        ...and they used some random package with version 0.0.1 instead of writing 20 lines of code themselves.

        It's astonishing how allergic some people are to writing their own code, even the simplest shit has to be a dependency. Let's increase the attack surface, that's fine, what can go wrong, right?

        https://github.com/modelcontextprotocol/use-mcp/commit/96063...

        • chrisweekly a day ago ago

          You have a valid point about dependency management in general, but in this case, the v0.0.1 package was created by the same author "geelen" as the commit you linked. So, they're not allergic to writing the code, and it's not "some random package".

  • mehdibl a day ago ago

    IT start with "Evil MCP Server".

    So you need a server flawed + XSS issue on Cloudflare.

    Then you need to use Claude Code, so it's more an issue in Claude Code/Gemini implementation already than MCP.

    So if you are ok to run any MCP from any source you have worse issues.

    But good find in the open command how it's used in Claude Code/Gemini.

  • greysteil a day ago ago

    Is $2,300 the going rate for an RCE with a totally believable attack vector these days?

  • techlatest_net a day ago ago

    [dead]

  • ath3nd a day ago ago

    [dead]