3 New Patterns for Connecting AI Agents to APIs

Posted in Design | Kristopher Sandoval | April 10, 2025

APIs are often reflective of the broader tech market. Since APIs facilitate connections, they tend to be a microcosm of the underlying technologies and trends in the broader market. For this reason, the movements and patterns adopted in the API space for novel or emerging technologies reflect how the industry at large is reacting to these technologies, for better or for worse.

In the realm of APIs, some of the most interesting patterns currently being adopted revolve around AI and large language models. These systems are powering many innovative solutions, but how they are adopted varies from implementation to implementation, and sometimes even from team to team.

Today, we're going to look at some of the patterns that are coming to the forefront in the API space around connecting AI agents to APIs. We'll examine how these patterns reflect different perspectives on AI and what they mean for the average AI developer.

1. AI Agent Platforms

The rise of the platform-as-a-service (PaaS) provider has been a hallmark of the last decade or so. With AI coming to the forefront of many API conversations, it's no wonder that platform solutions have come to mind. The idea behind this pattern is simple: connecting with a platform that has many pre-built API integrations. This abstracts away much of the complexity of other solutions, allowing for a lighter, streamlined solution on the developer side.

Postman's AI Agent Builder is a notable example of an API platform for AI agents. The idea is to allow users to connect AI agents to any API in the Postman API Network, enabling rapid prototyping and deployment without going through cumbersome agent coordination and connectivity audits.

The drawback of this approach is the same as the drawback of platforms more generally: with few exceptions, you are ultimately tying yourself intrinsically to the platform in question.
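To make the platform pattern concrete, the sketch below shows what the developer-side surface might look like: a single call against a hosted agent, with the platform resolving the named integration to one of its pre-built API connectors. This is a minimal illustration, not any real platform's API; the endpoint path, payload shape, and names are all hypothetical.

```python
import json


class PlatformAgentClient:
    """Hypothetical client for a platform-hosted AI agent.

    The platform owns the API integrations; the developer only names
    the integration and passes a natural-language task.
    """

    def __init__(self, base_url: str, api_key: str):
        self.base_url = base_url.rstrip("/")
        self.api_key = api_key

    def build_request(self, integration: str, task: str) -> dict:
        # The platform resolves `integration` to one of its pre-built
        # connectors, so no per-API auth or schema work is needed here.
        return {
            "url": f"{self.base_url}/agents/run",
            "headers": {"Authorization": f"Bearer {self.api_key}"},
            "body": json.dumps({"integration": integration, "task": task}),
        }


client = PlatformAgentClient("https://platform.example.com", "sk-demo")
req = client.build_request("github", "List my open pull requests")
```

The point of the sketch is how little surface area the developer touches: the lock-in described above comes precisely from everything that happens behind that one endpoint.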
While this can be beneficial, allowing for closer alignment with a larger governance organization or body of services, it can also be somewhat limiting, and it certainly makes moving away from that vendor more difficult in the future, should the need arise.

2. API-Driven AI Orchestration

Another approach is to use the AI itself to orchestrate the API calls. Instead of using a platform to connect APIs to AIs, the AI connects to a gateway or collective endpoint and orchestrates the API calls itself to coordinate responses, fulfill requests, retrieve data, and so forth.

For example, an AI agent may receive a request through its ingest endpoint and, based on this request, parse it into new requests that are then routed through GraphQL endpoints to fulfill each part before collecting the data and responding. For the end user, this might look like a simple endpoint request passing the body as text to an agent, but on the agent side, the request can then be transformed, remixed, or stored.

Arguably, the most mature form of this is Hasura's supergraph approach with PromptQL. This solution combines APIs through a unified supergraph endpoint that can then be queried. This endpoint can connect everything from APIs to data sources, databases, and even other graphs.

This is a fundamental inversion of the API-to-AI model that is currently most popular, but it does come with some significant downsides. First, it expects a level of quality from your agent that not all models and systems can deliver. It can also introduce significant complexity into your system, increasing the potential for faults. And when those faults occur, troubleshooting can be complex, as the underlying issues can quite readily be obscured by hallucinatory behavior.

3. Protocol-Driven Interactions

As demand for more connected AI systems has increased, some providers have adopted a protocol-based solution. One relevant example is the Model Context Protocol, or MCP, a standard developed by Anthropic.
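Before looking at MCP in detail, the orchestration flow described in the previous section — one inbound request fanned out into GraphQL sub-queries, then merged into a single response — can be sketched roughly as follows. The planner, endpoints, queries, and the stubbed transport are all hypothetical stand-ins; in a real system an LLM would produce the plan and the queries would actually be sent over the network.

```python
# Sketch of AI-driven orchestration: the agent decomposes one inbound
# request into several GraphQL queries, then merges the results.

def plan(user_request: str) -> list[dict]:
    # Stand-in for the LLM planning step: in a real agent, the model
    # would derive these sub-queries from the request text.
    return [
        {"endpoint": "https://api.example.com/graphql",
         "query": "{ user(id: 1) { name } }"},
        {"endpoint": "https://api.example.com/graphql",
         "query": "{ orders(userId: 1) { total } }"},
    ]


def execute(step: dict) -> dict:
    # Stubbed transport: a real agent would POST the query to the
    # GraphQL endpoint and return the response's `data` field.
    canned = {
        "{ user(id: 1) { name } }": {"user": {"name": "Ada"}},
        "{ orders(userId: 1) { total } }": {"orders": [{"total": 42}]},
    }
    return canned[step["query"]]


def orchestrate(user_request: str) -> dict:
    # Fan out, then merge: the caller sees one request and one response.
    results = {}
    for step in plan(user_request):
        results.update(execute(step))
    return results


merged = orchestrate("Show Ada's recent orders")
```

Note that the faults discussed above live in exactly these seams: a bad plan, a failed sub-query, or a mangled merge can each surface to the user as a plausible-looking but wrong answer.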
The idea of MCP is to standardize how applications provide context to large language models, codifying an ingest point that various systems can then use to connect with the AI models themselves. MCP offers specifications and SDKs for implementing the standard, and Anthropic additionally provides server support through both its Claude desktop application and an open-source repository of MCP servers.

In essence, the idea is that you can now use these systems to create your API connections using a standard ingest format. The hope is that, with broader acknowledgment and implementation of the standard, more models can be brought into the fray, allowing for more connected systems at scale.

Of course, the issue with this kind of approach is humorously captured in the famous XKCD comic on standards: much like the USB-C standard cited in the Anthropic blog post introducing the protocol, deviations and new standards tend to crop up quickly, drifting from the original intention over time and adding complexity that undermines the standard in the first place. While Anthropic has good intentions, it's not the only LLM provider on the market working on standards of this kind.

Choosing Your Solution

All of this comes down to an obvious question: which of these solutions is best, and are they complementary or competing? As with any technical implementation, the answer depends highly on your specific build, use case, environment, and AI agent library or platform. Ultimately, your build structure is going to define your best solution.

One thing to note, however, is that the approaches described here aren't necessarily competing. In many cases, they can be complementary. This is especially true in the modern microservice-centric approach to development, where different services, functions, and core offerings might have agent needs that are wildly divergent, or even incompatible.
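One way to picture this mix-and-match reality is a per-service configuration in which each microservice declares its own connectivity pattern, so the three patterns coexist in one system. The services, providers, and endpoints below are entirely hypothetical; the point is only that the choice is made per service, not once for the whole estate.

```python
# Hypothetical per-service agent configuration: each microservice picks
# the connection pattern that fits its needs, so the patterns described
# above are complementary rather than competing.
AGENT_CONFIG = {
    "billing": {          # heavyweight integration needs: hosted platform agent
        "pattern": "platform",
        "provider": "https://platform.example.com",
    },
    "search": {           # agent-led fan-out across internal GraphQL APIs
        "pattern": "orchestration",
        "gateway": "https://graph.internal.example.com/graphql",
    },
    "notifications": {    # simple, local tool access via a protocol server
        "pattern": "protocol",
        "server": "mcp://localhost:8080",
    },
}


def pattern_for(service: str) -> str:
    # Look up which connectivity pattern a given microservice uses.
    return AGENT_CONFIG[service]["pattern"]
```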
In such a reality, it might make sense, for instance, to have one service call a platform agent hosted by a library or framework provider it has already integrated with, while allowing another microservice that needs a much simpler agent to call a local service or library. These use cases may require a more dynamic solution, making a one-size-fits-all implementation less realistic.

APIs: Unlocking the Potential of AI Agents

AI agents need robust API connectivity to unlock their full potential, and APIs benefit from connecting to AI systems, which offer incredible value and new avenues for processing data and functions. By selecting the right approach and modality for your use case and specific needs, organizations can take this technology to substantial new heights.

However, it's important to remember that AI is still a young technology. As with any huge step forward, it will need deep thought and development to become as disruptive and powerful as it has the potential to be. Developers leveraging AI will undoubtedly find their best modalities for connecting to these services, and from that effort, other protocols, standards, and approaches will surely evolve.

While the paradigms noted in this piece are burgeoning and promising, the next decade will certainly see massive evolution and innovation. As AI evolves, so will the need for smarter, standardized API connectivity. As such, you should consider this a snapshot of current solutions and keep looking toward the horizon for more efficient methodologies as they emerge.