What Is Model Context Protocol (MCP)?

Posted in Design by J Simpson, June 3, 2025

Model Context Protocol (MCP) has taken off rapidly since Anthropic released the protocol in November 2024. Since then, the ecosystem has grown to include thousands of servers that extend functionality for AI. But what is MCP, exactly, and how has it become so popular so quickly? To answer these questions, we've compiled a high-level guide to (almost) everything you need to know about MCP.

What Is MCP?

Generally speaking, MCP is a protocol designed to extend the functionality of an AI tool like a large language model (LLM). This has made MCP a compelling option for connecting AI agents with APIs and data sources. To put it succinctly, if an LLM were a smartphone, MCP would be its apps.

AI-driven applications, like Claude Desktop or Cursor, have begun incorporating MCP servers into their platforms. MCP servers connect hosts with external functionality, and can perform everything from pushing code to GitHub to generating images with AI.

Examples of MCP Servers

MCP is based on an open-source standard, similar to the OpenAPI specification. Like OpenAPI, MCP uses a JSON object to create and configure MCP components. However, unlike OpenAPI, MCP lacks a formal governing body, and the exact server configuration can vary depending on the host and client implementation.

For example, the configuration for the Brave Search MCP server, which allows AI tools to search the web as well as local files, looks like this:

{
  "mcpServers": {
    "brave-search": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "BRAVE_API_KEY",
        "mcp/brave-search"
      ],
      "env": {
        "BRAVE_API_KEY": "YOUR_API_KEY_HERE"
      }
    }
  }
}

This JSON object specifies the name of the MCP server, in this case brave-search. It then directs the system to run the server using Docker. The args section further configures the server. First, it tells Docker to start a container, which runs in interactive mode thanks to the -i flag.
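To make the mapping from configuration to process concrete, here is a minimal sketch of how a host might translate the JSON above into the command it actually spawns. The helper function is purely illustrative and not part of any MCP SDK:

```python
import json

# The Brave Search configuration from above, as a host would load it.
CONFIG = """
{
  "mcpServers": {
    "brave-search": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "-e", "BRAVE_API_KEY", "mcp/brave-search"],
      "env": {"BRAVE_API_KEY": "YOUR_API_KEY_HERE"}
    }
  }
}
"""

def build_launch_spec(config_text: str, name: str):
    """Return the (argv, env) a host would use to spawn the named server."""
    server = json.loads(config_text)["mcpServers"][name]
    argv = [server["command"], *server.get("args", [])]
    env = server.get("env", {})
    return argv, env

argv, env = build_launch_spec(CONFIG, "brave-search")
print(argv)  # the full docker command line the host would run
```

In other words, the host ends up executing `docker run -i --rm -e BRAVE_API_KEY mcp/brave-search`, with the API key supplied through the environment rather than on the command line.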
The --rm flag tells Docker to remove the container once it exits. The -e flag tells Docker to expect an environment variable, BRAVE_API_KEY, which is defined in the env object. Finally, mcp/brave-search names the image the container runs.

For another example, here's the GitHub MCP server, which allows AI to create, manage, and edit GitHub repositories:

{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-github"
      ],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<YOUR_TOKEN>"
      }
    }
  }
}

This configuration tells the system to create a server named github, which is run using the npx command. The -y flag tells npx to skip its installation confirmation prompt, so the @modelcontextprotocol/server-github package runs automatically. The env object creates an environment variable from your GitHub personal access token.

For another example, check out the Perplexity Ask MCP server, which allows tools to interact with the web in real time using the Perplexity API:

{
  "mcpServers": {
    "perplexity-ask": {
      "command": "npx",
      "args": [
        "-y",
        "@chatmcp/server-perplexity-ask"
      ],
      "env": {
        "PERPLEXITY_API_KEY": "YOUR_API_KEY_HERE"
      }
    }
  }
}

This configuration creates a server named perplexity-ask, which is also run using npx. As before, the -y flag tells npx to run the @chatmcp/server-perplexity-ask package without prompting for confirmation.

The Architecture of MCP

The architecture of MCP can be broken down into three main components: MCP servers, clients, and hosts. An MCP host is an application that uses MCP tools, like Claude for Desktop or Cursor. The MCP host contains the MCP client, the component inside the application that maintains the connection to an MCP server and handles the data exchange.

Let's consider a hypothetical scenario to get an idea of how MCP architecture works in practice. Imagine a user who wants to query an LLM like Claude about the current weather in New York City.
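Both npx-based examples above share the same pattern: run a package with npx -y and pass credentials through environment variables. That pattern is easy to generate programmatically, as this sketch shows (the helper function name is our own, not part of any MCP tooling):

```python
import json

def npx_server_config(name: str, package: str, env: dict) -> dict:
    """Build an mcpServers entry for an MCP server distributed as an npm package.

    Mirrors the pattern shared by the GitHub and Perplexity examples:
    `npx -y <package>`, with credentials passed via environment variables.
    """
    return {
        "mcpServers": {
            name: {
                "command": "npx",
                "args": ["-y", package],
                "env": env,
            }
        }
    }

config = npx_server_config(
    "github",
    "@modelcontextprotocol/server-github",
    {"GITHUB_PERSONAL_ACCESS_TOKEN": "<YOUR_TOKEN>"},
)
print(json.dumps(config, indent=2))
```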
To start, the prompt is presented to the host application, which passes it to the LLM. The request is then routed to the appropriate MCP server by the MCP client. Finally, the relevant information is retrieved from the MCP server and returned to the LLM, which formats the response appropriately.

The Benefits of MCP

MCP is an important step in the increasing sophistication of AI. It also extends the roles traditionally fulfilled by APIs, making MCP an important step forward for API developers as well. MCP servers and APIs are complementary, and the barrier to creating an MCP server is relatively low, so there are few reasons for API providers not to explore them.

MCP comes with the same advantages as APIs and brings unique strengths of its own. For instance, MCP supplies AI-driven tools with up-to-date data and information rather than leaving them to rely on training data. It also helps make data more secure, as an MCP server can act as a gateway to a data repository that is accessed only when needed. This helps prevent unnecessary data from being exposed via API calls. It can also help ensure data compliance, as MCP servers can be configured to prevent sensitive data from being shipped across national borders.

Finally, MCP can make tools more scalable and easier to deploy. Like APIs and microservices, MCP servers can be accessed as needed and scaled back when they're not. This can also make your digital tools more affordable by reducing expenses and helping get your products to market as quickly as possible.

Also read: How to Turn Any API Into an MCP Server

The Advantages of MCP Over Direct API Access

In other respects, MCP is the next evolution of APIs. It's easier to link services together using MCP servers than with APIs alone, as doing so requires less conversion and translation. MCP can also handle newer protocols like GraphQL. This can help make your products more accessible, as it reduces the need for unnecessary bandwidth.
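The round trip described above can be sketched as a toy simulation. Every function name and the canned weather data here are illustrative stand-ins, not part of any real MCP implementation:

```python
# A toy simulation of the MCP round trip: host -> client -> server -> host.
WEATHER_DATA = {"New York City": "72°F and sunny"}  # stand-in for a live source

def mcp_server_weather(city: str) -> str:
    """MCP server: exposes a piece of external functionality (a weather lookup)."""
    return WEATHER_DATA.get(city, "unknown")

def mcp_client(tool: str, arguments: dict) -> str:
    """MCP client: routes the host's request to the appropriate server."""
    if tool == "get_weather":
        return mcp_server_weather(arguments["city"])
    raise ValueError(f"no server registered for tool {tool!r}")

def host_handle_prompt(prompt: str) -> str:
    """Host: the model decides a tool is needed, the client fetches the data,
    and the model formats the final answer for the user."""
    result = mcp_client("get_weather", {"city": "New York City"})
    return f"The current weather in New York City is {result}."

print(host_handle_prompt("What's the weather in New York City?"))
```

The key point of the sketch is the separation of duties: the server owns the external capability, the client owns the routing, and the host owns the conversation with the user.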
MCP servers can also take on the burden of certain API management duties, like routing and authorization. Some of the advantages of MCP over APIs are similar to those of microservices. MCP servers are available as external services, making them ideal for testing environments. This prevents APIs from needing to be rebuilt and recompiled every time there's a change. MCP can even be invoked directly in code, making it ideal for CI/CD environments.

Comparison: Traditional APIs vs. Modern API Platforms Using MCP

| Aspect | Traditional APIs | Modern API Platforms with MCP Integration |
| --- | --- | --- |
| Architecture | Often monolithic: a single system may handle many responsibilities (e.g., login, data, payments). | Modular: MCP supports connecting LLMs to many independent services or tools that fulfill specific tasks. |
| Scalability | Scaling often requires scaling the full stack or entire service. | MCP-compatible services can be added or removed individually as needed, enabling more granular scaling. |
| Protocols | May use legacy protocols like SOAP or RPC. | MCP-compatible services often expose REST or GraphQL APIs, or custom tools wrapped as callable functions. |
| Management | Developers must manually integrate services, secure endpoints, and handle routing. | MCP servers abstract away some integration logic, and may offload tasks like routing or invocation to clients. |
| Flexibility | Updating or changing functionality may require coordinated system-wide changes. | New capabilities can be added or updated by integrating new MCP servers, without modifying core logic. |
| Deployment | Traditional deployments may require full app redeploys even for small changes. | New services can be made available via MCP configuration changes, with minimal impact on the host app. |
| Fault Isolation | A failure in one area can sometimes affect the entire API or app. | Failures in individual MCP servers typically do not crash the host, supporting more fault-tolerant execution. |
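Under the hood, MCP clients and servers exchange JSON-RPC 2.0 messages. As a rough sketch of what "invoking MCP directly in code" looks like at the wire level, here is the shape of a client request to call a tool. The tool name and arguments are hypothetical; only the envelope and the tools/call method come from the protocol:

```python
import json

def make_tools_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request for MCP's tools/call method."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical tool invocation, matching the weather scenario above.
msg = make_tools_call(1, "get_weather", {"city": "New York City"})
print(msg)
```

A real client would write this message to the server's stdin (for local servers like the Docker and npx examples earlier) and read the JSON-RPC response back, but the message shape is the same either way.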
The Future of Model Context Protocol

MCP is a major step forward for AI, LLMs, and APIs. It works well alongside microservices and can simplify access to distributed functionality. It can also make your products and services more secure thanks to its modular architecture, keeping errors and service outages contained.

MCP will likely play an important role in the future of AI, as well. Along with retrieval-augmented generation (RAG), MCP is an important component in giving AI-driven tools like LLMs access to real-time information and the ability to perform actions, such as making changes directly to code or looking things up in a web browser. MCP will likely also play an important role in the future of APIs, helping to encourage their use in new ways. As such, it's in your best interest to get acquainted with MCP and see what it can do.