Using MCP For API Documentation Discovery

Posted in Design, Strategy
Kristopher Sandoval
February 12, 2026

Model Context Protocol (MCP) has made huge waves in the industry as of late. Since MCP makes it incredibly easy to point agentic implementations towards tools and resources, it's been used for everything from context-driven customer service tools to order fulfillment backends. One of the most interesting use cases, and one that is currently emerging from some pretty big names, is the use of MCP in surfacing and supporting developer documentation.

It's a promising use case for a variety of reasons, and some big names — among them Google and Amazon — are starting to invest in MCP for documentation-related purposes. Today, we're going to take a look at why MCP is promising for discovering API documentation. We'll look at a handful of great examples in the market today, and we'll explain why this approach represents a sea change in how developers relate to documentation.

What Is MCP?

Before we dive into our examples, let's briefly discuss what MCP is and what benefits it brings to this particular use case. MCP is an open standard from agentic provider Anthropic, designed to connect AI assistants to resources and tools. Anthropic donated it to the Agentic AI Foundation, an arm of the non-profit Linux Foundation, creating a vendor-neutral methodology for universal connectivity that isn't limited to any one AI provider.

The core benefit of MCP is the ability to denote a tool or resource that the agent can use — in other words, it clarifies the operational reality of any AI-powered agent by creating a sort of universal index of functions, tools, and systems. This creates significant clarity around the AI implementation, and it gives developers a way to express their intent and service availability far more deterministically than was previously possible.
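To make the "universal index" idea concrete, here is a minimal toy sketch of the pattern. It does not use a real MCP SDK or transport; it simply models, in plain Python, the shape of the `tools/list` and `tools/call` JSON-RPC methods that the MCP specification defines, with a hypothetical `search_docs` tool standing in for a provider's documentation index.

```python
# Toy registry standing in for an MCP server's tool index.
# Real servers speak JSON-RPC over stdio or HTTP; the tool and
# documentation entries below are illustrative, not a real API.
TOOLS = {
    "search_docs": {
        "description": "Search the provider's API documentation by topic.",
        "inputSchema": {
            "type": "object",
            "properties": {"topic": {"type": "string"}},
            "required": ["topic"],
        },
    },
}

DOCS = {"payments": "POST /payments creates a payment. See the Payments guide."}


def handle(request: dict) -> dict:
    """Dispatch a JSON-RPC-style request the way an MCP server would."""
    if request["method"] == "tools/list":
        # The agent discovers what exists before it ever asks a question.
        result = {"tools": [{"name": n, **meta} for n, meta in TOOLS.items()]}
    elif request["method"] == "tools/call":
        topic = request["params"]["arguments"]["topic"]
        text = DOCS.get(topic, "No documentation found for that topic.")
        result = {"content": [{"type": "text", "text": text}]}
    else:
        result = {"error": "unknown method"}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}


listing = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
print(listing["result"]["tools"][0]["name"])  # search_docs
answer = handle({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
                 "params": {"name": "search_docs",
                            "arguments": {"topic": "payments"}}})
print(answer["result"]["content"][0]["text"])
```

The key property is that the agent never guesses what the server offers: it enumerates the index first, then calls a named tool with typed arguments.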
Documentation as a Bottleneck

With this in mind, you can see how MCP can play a huge role in the documentation discovery process. The limiting factor for most services and agents is no longer pure code generation — it is how effectively providers can surface the availability of their services, tools, and resources. MCP provides a methodology by which all of this can be made clear, and then made searchable, indexable, and digestible.

Previously, if a developer wanted to build a service using an API, they had to dig through page after page of API documentation. With a good MCP deployment, they can now connect agentically and ask in natural language how something can be connected or developed — and the MCP broker can return ample documentation and design information in just seconds.

Example MCP Implementations

With all of this said, let's look at three excellent MCP implementations to see how this works in practice.

Mastercard's MCP Server

Mastercard has implemented a service called the Agent Toolkit, offering documentation directly to AI agents. Instead of manually searching for documentation, users can simply query the MCP broker and get a bevy of APIs, guides, specs, and documentation pages programmatically and conversationally.

The toolkit gives agents live, structured access to the entire Mastercard API ecosystem under a specific service role, meaning that agents operate as a limited user to discover what products exist, navigate documentation hierarchies, pull up full documentation context, or even enumerate specific API operations and schemas. This process surfaces real, live information on demand, but it also gives Mastercard the ability to delineate expected form and function at scale.

To use this, developers can install the toolkit into any MCP-compliant environment — for example, Claude Desktop or VSCode — and then follow Mastercard's configuration guide.
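As a sketch of what such a configuration might look like, here is a hypothetical entry in the `mcpServers` map that MCP-aware clients such as Claude Desktop read from their config file. The command and package name below are illustrative assumptions, not Mastercard's actual values; consult their configuration guide for the real settings.

```json
{
  "mcpServers": {
    "mastercard-agent-toolkit": {
      "command": "npx",
      "args": ["-y", "@mastercard/agent-toolkit"]
    }
  }
}
```

Once the client restarts, the toolkit's tools appear in the agent's index automatically, with no per-conversation setup.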
With this in place, developers can use conversational requests like "list services" or "retrieve endpoints" to start surfacing service documentation.

AWS Knowledge MCP Server

AWS is as ubiquitous as it is large — part of the challenge of finding documentation or service data within AWS is navigating the almost oppressively large suite of offerings and services in the first place. The AWS Knowledge MCP Server seeks to resolve this issue by offering a fully-managed MCP solution that runs as a global AWS service.

Instead of a local MCP server run in client-hosted environments, the AWS Knowledge MCP Server is a fully-managed remote service that exposes official AWS documentation, service knowledge, and context over an HTTP API. The server gives agents real-time, structured access via an API, offering developers a very clear and clean way to integrate the endpoint and connect to this context. Developers simply add the MCP broker as a remote MCP endpoint in any compliant client and then feed their requests directly.

This does abstract some control away from the MCP user and centralize it inside AWS itself — a trade-off for developers who need a more fine-grained level of determinism. The API can return everything from AWS documentation to live metrics for APIs or specific services, resulting in an AI-assisted AWS development superservice.

Google Merchant API MCP Server

This last example is a very specific one — the Google Merchant API MCP Server is concerned solely with the Google Merchant API, exposing its documentation, migration resources, and code samples directly to any MCP-enabled IDE or coding assistant. This allows a very specific group of people using a very specific API to have agentic access to documentation and development tools without resorting to manual or user-limited processing.
Of note, this process uses retrieval-augmented generation (RAG) to fetch specifically relevant pieces of data from the official Merchant API documentation, enabling both natural-language access to the data and more general fuzzy matching for custom use cases or implementations. This service is built very specifically for developer workflows rather than more general contextual processes. Whereas the AWS service might be used as a general documentation or intelligence provider, the Google Merchant solution is aimed squarely at developers working with the Google Merchant API within an IDE.

Documentation as a "Safe" Use Case

Notably, this is also generally considered a "safe" MCP use case. MCP itself is quite powerful, and because it accesses resources so readily, there are many voices concerned with what that means for service security. Using MCP to surface documentation, however, has several properties that make it uniquely low-risk compared to other patterns.

First, this approach is read-only — agents are not mutating infrastructure, calling production APIs, or triggering knock-on processes. They are very cleanly retrieving and surfacing data that is already publicly accessible, just through a structured and clean interface.

Second, this approach keeps humans in the loop by default. The agent can propose answers, explanations, and code based on the relevant documentation, but the human developer is still the first step and the ultimate reviewer and implementer. By its very nature, this process is directed by the human in the loop, even if the underlying process is an agentic one.

Finally, this process does not require any re-engineered systems or resources. Organizations do not need to redesign their APIs, surface other external systems, or refactor offerings for transparency. They simply index documentation they already have and expose it through MCP — a low-cost and high-impact solution.

What About Search — and RAG?
At first glance, it might seem like this problem has already been solved. After all, we have advanced search functions and retrieval-augmented generation, a system designed to reference rich context when answering development and utilization questions. The key point of clarity here is the degree of availability and the interaction model itself. MCP offers a brand new interaction system.

In advanced search solutions, documentation is ranked and then returned based on keywords. This is a fuzzier and more opaque process than MCP because it requires guessing the intent of both the searcher and the provider in the process of returning an answer. For instance, if a user searches for "socket," what should we assume the proper response to be? Are they looking for WebSocket documentation? For what service? Inbound or outbound? Connecting to WebSocket solutions, or the WebSocket offering from the provider itself?

You might see the problem here, but what about RAG? These systems are meant to provide this context, so they should at least resolve the provider-clarity problem, right? Here the issue becomes one of control and structure. Typical RAG solutions treat documentation as a general corpus — a body of unstructured text that needs chunking, embedding, prompt construction, context injection, and more to actually deliver results to the user — and these results are even more opaque in their processing than those from search.

All of this is resolved with MCP: you get the ability to deterministically surface specific documentation and then control how that documentation is coordinated and organized. You can provide this data to the agentic solution and let the natural-language processing categorize and surface documentation, which removes a lot of guesswork and effort from the AI provider.
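The contrast can be sketched in a few lines. The corpus, page names, and function names below are invented for illustration: a bare keyword search forces the provider to guess intent, while an MCP-style tool call carries intent in named, typed parameters.

```python
# Two documentation pages that both mention "socket" -- a keyword
# search cannot tell which one the user actually wants.
CORPUS = {
    "websocket-guide": "How to connect to our WebSocket streaming API.",
    "socket-faq": "Troubleshooting low-level TCP socket errors.",
}


def keyword_search(query: str) -> list[str]:
    """Rank pages that merely mention the query term: ambiguous by design."""
    return [k for k, text in CORPUS.items() if query.lower() in text.lower()]


def get_docs(service: str, topic: str) -> str:
    """MCP-style lookup: intent arrives as explicit, named parameters,
    so the server resolves the request deterministically."""
    return CORPUS.get(f"{service}-{topic}", "not found")


print(keyword_search("socket"))        # both pages match: which did the user want?
print(get_docs("websocket", "guide"))  # exactly one, unambiguous answer
```

The second call needs no ranking heuristics at all, because the ambiguity was eliminated before the request ever reached the server.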
Ultimately, you get a better result with less effort, one that serves developers and agentic implementations far better — a nearly universal win-win.

Looking to the Future of MCP-Driven Discovery

The reality is that documentation is no longer just a piece of context or data found when an external developer runs into an issue — it's a first-class context object that needs to be treated with the same focus and intentionality as the API itself. Within this context, MCP offers something more than just putting all the documentation in a single store and hoping for the best — it provides a direct pathway between the developer and the provider, communicating intent and clarity like no other process currently on offer.

As we move towards a future focused on API discovery, we need to rethink how we look at documentation and its discovery — and solutions like MCP are going to play a huge part in making documentation and data clearer, more contextual, and more available.

AI Summary

This article examines how the Model Context Protocol (MCP) is reshaping API documentation discovery by making documentation a first-class, agent-accessible resource rather than a static reference artifact. MCP is an open, vendor-neutral standard that allows AI agents to discover, index, and retrieve tools and resources, including API documentation, in a structured and deterministic way. Using MCP for API documentation discovery addresses a growing bottleneck in modern development, where the challenge is no longer code generation but efficiently surfacing accurate, up-to-date service capabilities. Real-world implementations from Mastercard, AWS, and Google demonstrate different MCP deployment models, ranging from local toolkits to fully managed remote MCP services, each balancing control, scalability, and determinism.
Documentation-focused MCP servers are considered a low-risk use case because they are read-only, keep humans in the loop, and expose information that is already publicly available without mutating production systems. Compared to traditional search and retrieval-augmented generation (RAG), MCP offers clearer intent signaling, stronger structural control, and more predictable results by explicitly modeling documentation as callable, indexable resources.

Intended for API providers, API architects, platform engineers, and developer experience teams evaluating MCP as a foundation for scalable API documentation discovery in agentic development environments.