How to Secure MCP Servers Posted in Security Kristopher Sandoval July 31, 2025 In November of 2023, Anthropic — the company behind Claude AI — released the Model Context Protocol (MCP), a new open standard designed to connect AI applications and external tools and data sources. The proposal was immediately met with acclaim, as AI interconnection stands as one of the major problems that still needs to be solved. While the MCP approach is great, it does come with some security considerations and concerns. Leading voices in the space have surfaced several possible vulnerabilities, and it has quickly become apparent that MCP servers, while offering significant evolution in connectivity, will require rethinking some core security approaches. Today, we’re going to dive deeper into the security implications of the MCP paradigm. We’ll look at some general risks inherent in the approach and consider some best practices that any provider can generally apply. Understanding the Security Implications of MCP To understand the specific vulnerabilities of the MCP approach, we should first clarify how MCP actually works. The protocol is designed to essentially sit between AI agents, APIs, orchestration layers, and data systems, facilitating interactions through context-defined behaviors. For anyone familiar with shim systems like the backend-for-frontend, this might sound familiar, and in many ways it is — essentially, you’re creating chained services which interact through a defined translation layer, a sort of meta API between models and interconnected services. Let’s say we have a user who is working with an AI model like Claude through an IDE connected to an MCP client. This MCP client can take the user requests from the host and route these requests via the MCP protocol to the MCP servers. The servers, in turn, can receive these requests, and, based upon their logical routing and established contracts, determine what needs to be connected to the client and how those connections can be established. In the inverse, the data from the data sources and remote services can then flow backwards to the MCP server and through the MCP protocol to the requesting MCP client, thereby completing the circuit. This approach unlocks some pretty powerful connections, but even in this basic example, we can start to see some potential vulnerabilities. The MCP approach is fundamentally one of connected systems, and as such, each step along the way can introduce complexity, risk, and attack vectors. Key Security Vulnerabilities in MCP Let’s take a look at just a few of the potential attack vectors in a generic MCP setup. Context Injection When it comes to LLM systems, context is king. Context gives the prompts and the data flow a basis of understanding to meet the need of the request, but it also carries a lot of information about how this request should be fulfilled. With MCP, context injection is a huge concern. In theory, when context is externally sourced via connected systems, an attacker could craft malicious payloads that inject behaviors into the connected models. This could then severely undermine the security of the system in a way that would be obfuscated by the “game of telephone” nature of communication in an MCP environment. These attacks could be everything from appending misleading instructions or altering configuration parameters to causing runaway processing with direct financial damages. This risk can be largely handled through iterative prompt validation and MCP server rules, but this kind of attack unfortunately does not have a perfect solution in place as of yet. Trust Chain Exploits MCP systems, by design, are meant to be a trust chain: agents are chained together with upstream and downstream dependencies, which have a certain level of trust (or at least a system to determine validity) baked in. This system has a lot of heavy lifting to do to ensure that the data and agentic connections throughout aren’t able to be exploited or manipulated, and the difficulty of ensuring this scales in complexity with the complexity of the system. The interconnects and rule-based flow controls within this system are best thought of as “trust boundaries.” The reality is that if these trust boundaries are not explicitly enforced or observed, compromised agents may be able to inject malicious code, misleading context, or malformations in a supply-chain man-in-the-middle attack. Some of these attacks may be obvious, but the real danger comes in the form of attacks that aren’t apparent. Some of these attack vectors may lead to foundational exposures that take place over days, weeks, or months, subtly undermining the value of the system and exposing critical systems. Sensitive Data Leakage In many ways, the security issues with MCP are the same as general security issues with APIs, just with an extra layer of obfuscation due to the nature and structure of agentic systems. In the case of sensitive data leakage, context payloads are the big worry point. Context payloads often carry user metadata, prior inputs, system configurations, prompt data, and much more. While these payloads can be stripped of much of this data, the amount of context removed could affect the quality of the system. You can encrypt a lot of it, but in this case, you start running into mounting slowdowns and code issues where every layer has to decode every prompt from every encrypted system. In practice, you often target a balance, and it is this balance that is the potential threat vector. If not properly filtered, secured, encrypted, and monitored, payload leakage may expose sensitive information, exposing critical data and flows to external attackers. This is a compounding issue: as more data gets leaked, more data is exposed, thereby resulting in an ever-increasing stream of leaky context and information. Contextual Drift While some issues are similar to API security concerns, some are very specific to agent-based connections. Contextual drift is one such issue. The idea of contextual drift is that when stale or inconsistent data is used — such as that data from outdated memory states, failed serializations, or out-of-step dependencies — this can lead to contextual drift, causing reduced model performance, unexpected outputs, or security policy violations. Getting around this problem requires making sure that all systems are updated and aligned, which is generally good practice, albeit a heavy process. With MCPs, the problem arises largely with the ease of connecting to external services and systems. APIs get around this problem by focusing on observability and monitoring. But with MCP, there’s a layer of semi-obfuscation at play, requiring significantly more effort to get this right. Build-Specific Issues Finally, many more issues are specific to MCP distributions and builds. While MCP proposes a standard communication protocol, actual MCP setups are likely to look similar but have very different underlying stacks, at least for the next few years. For this reason, every MCP build is going to have its own particular issues, caveats, and considerations. To be fair, this is true of any technical implementation, but the issues at hand could be exacerbated by the idea of just throwing MCP into the mix. The MCP paradigm is not a panacea to underlying issues, though it’s often framed as being one. Best Practices for Securing MCP Servers With all of this in mind, let’s look at some best practices you can adopt to ensure your MCP instances are secure. Payload Validation and Sanitization In an MCP instance, every context payload should be validated against a strict schema, such as JSON Schema, Protobufs, or others. These inputs should be sanitized to remove script injection attempts, extraneous fields, probing injections, etc. Your best bet to secure your payloads is to use schema enforcement on both the ingress and egress processes, allowing you stricter controls over the internal flow of data and requests. Related: How to Implement Input Validation for APIs Access Control and Context Scoping Not all agents or services should have the ability to read or write to all context fields. Giving agents total control could have significant impacts on your access control and your ability to scope access to contexts. Implement access policies based on source identity, task type, or sensitivity level, and be as strict as you can be while still ensuring regular operation and interoperability. Audit Logging and Replay Protection Every context transmission or transformation should be logged, and each payload should include a timestamp or unique ID. Consider using HMACs or signatures to verify authenticity. This will allow you to audit your systems and ensure that you’re preventing replays or other middleman attacks. Beyond this specific use, logging unlocks a bunch of other benefits, including observability, tamper detection, and much more. Context Minimization Avoid bloated or catch-all context payloads, and include only the minimum necessary information required for downstream agents or models. This is essentially the concept of least privilege applied to agentic communications and interactions. Minimizing context will have add-on benefits too, like improving efficiency and reducing costs. Encryption and Secure Transport Use TLS or mTLS for all context exchanges over the network, and where possible, encrypt context fields at rest and in transit. Context should be treated as a sensitive application state that requires strong security by default, both at rest and in transit. Leverage Solutions for the Cutting Edge It’s important to remember that although many of the systems in play here are cutting edge, that doesn’t mean that relatively mature solutions for testing and scanning don’t already exist. While current solutions are far from perfect, good security can’t wait for perfect. Solutions such as MCPSafetyScanner, MasterMCP, and the Docker MCP Toolkit offer substantial security solutions in the here and now that can help secure your MCP servers with repeatable scanning and testing. Also read: 10 Tools for Securing MCP Servers Real-World Considerations MCP is still an emerging paradigm, and as with any emerging solution, there’s still a lot of work to do, and the best practices we advise today might change over time. While the paradigm goes a long way towards solving some critical issues inherent to the agentic revolution, it lacks many secure defaults and offers few solutions for payload structure enforcement or field-level controls. Ultimately, the best approach for right now is to be suspicious of all traffic and contextual interactions. In multi-agent environments, it’s easy to assume all context is safe. This is dangerous considering that, in fact, any one agent may compromise the entire chain. Security-conscious developers should inspect and sanitize every context handoff. Just as microservices taught us to never blindly trust the network, the reality of MCP will show that we shouldn’t blindly trust upstream agents or context. Conclusion MCP introduces powerful new capabilities for agent-based AI systems, but it also opens up a broad set of novel or recontextualized attack surfaces. To responsibly adopt the MCP paradigm, developers and organizations must proactively identify the risks facing their agentic systems and implement rigorous security controls. By treating context as sensitive infrastructure, applying validation, scoping, and encryption, and embracing zero-trust principles, teams can protect the integrity of their AI systems and unlock the true potential of interoperable intelligence. That being said, this is going to be an ongoing process, and as much as providers should revisit their threat models often, the MCP reality is going to require a revisiting of security implementations as it evolves. The latest API insights straight to your inbox