Are Microservices Still Relevant in the Age of AI?

Posted in Design, Platforms, Strategy
Janet Wagner | March 17, 2026

If you build distributed applications, you’re likely already familiar with microservices. While the definition varies across the tech industry, I prefer the one from Sam Newman’s book Building Microservices, in which he concisely describes microservices as “small, autonomous services that work together.”

The concept of microservices has been around for about fifteen years now. However, the rapid rise of AI has some wondering whether microservices are still relevant in the age of AI or destined to become obsolete. We reached out to four microservices experts — Mike Amundsen, Matt McLarty, Christian Posta, and Chris Richardson — to find out. Their conclusion? AI is far from rendering microservices obsolete. In fact, AI may make microservices more relevant. This article highlights a few reasons why.

How Microservices Architecture Supports AI Systems

Companies building production-grade AI systems and AI agents tend to have the same goals in mind: the ability to scale, strong security and safety, and high accuracy and reliability. The foundational principles of microservices align with these goals, making them highly relevant.

Matt McLarty, Chief Technology Officer at Boomi, explains to Nordic APIs that the original principles outlined by James Lewis and Martin Fowler — clear componentization, alignment to business capabilities, and keeping stateful operations within well-defined boundaries — have proven themselves at scale. Those principles will continue to hold as AI moves into the core of enterprise systems.

“While AI hype is peaking, most organizations are still early in operationalizing AI, and when they do, AI-infused components will need to coexist with an already complex digital landscape,” says McLarty.
“That coexistence will require new architectural patterns, but not a departure from these foundational ideas.”

He further explains that where microservices have fallen out of favor, it’s largely due to implementation mistakes rather than flawed principles: confusing containers with microservices, or pushing service granularity too far. Those approaches weren’t well-suited for any environment, including AI-driven ones.

“Organizations that have embraced the original microservices principles and paired them with API-first thinking will be best positioned for what comes next: agentic architectures, where AI systems communicate and act through well-defined interfaces,” McLarty says. “In that world, microservices done right enable greater accuracy, safety, and scalability than monolithic architectures ever could.”

Mike Amundsen, an internationally known technology author, speaker, and advisor, shares a similar view. He explains to Nordic APIs that when he thinks of services, he thinks of interfaces — whether remote (like HTTP APIs) or local (like sockets) — and that services always rely on well-defined interface descriptions and message formats, such as the JSON-RPC 2.0 messages used by MCP.

“I expect the options to change over time (new protocols, patterns), but I don’t see the need for interfaces ever going away,” Amundsen muses.

Since tool-calling protocols (MCP, A2A, UTCP) turn APIs into runtime building blocks, microservices are still relevant, maybe more so, because AI-driven clients favor legible systems.

“A service that declares what it does, what it needs, and what it will change is an easier tool for an agent to enlist with a high likelihood of success,” comments Amundsen. “We are moving from ‘developers compose workflows’ to ‘agents compose calls’ given a goal,” he says.
“That puts service advertising and discovery back in the critical path, along with clear contracts, safe side effects, and improved observability.”

Microservice platforms have been focused on those problems for years, and the rise of agentic programming means a wider audience now needs to deal with these same issues, too.

GenAI Accelerates Software Delivery

Generative AI is dramatically changing how companies design and build software. Development teams must now take into account the probabilistic nature of AI applications, also known as non-determinism, and the speed and scope at which AI can induce changes in applications and systems. Companies need to adopt architectures, like microservices, that support AI’s rapid pace of change.

Chris Richardson, founder of Eventuate and Microservices.io, and an internationally known technology author and consultant, recently published an article explaining why GenAI-based software delivery needs a fast flow architecture and how microservices play a critical part. For instance, developers can use a microservices architecture to constrain a coding agent to work within a single service. This constraint prevents it from making system-wide changes that disregard ownership boundaries.

“Of course, an agent can still make changes that affect other services — for example, by changing an API contract or adding collaborators,” writes Richardson. “But such changes are explicit and therefore visible, reviewable, and governable. As a result, architectural boundaries replace informal social cues as the primary mechanism for governing agent behavior at scale.”

With a microservices architecture, each coding agent can focus on a single service when making a change. A single service typically has a small code base, so it’s easier for agents to work with. In addition, agents have less code to send to the LLM context window, reducing the number of tokens required to process a change. Since more tokens mean higher LLM costs, smaller contexts also keep costs down.
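Richardson's constraint of keeping a coding agent inside a single service can be enforced mechanically. As a minimal sketch (the harness, function names, and paths here are hypothetical illustrations, not from the article), an agent wrapper might validate every proposed file edit against the service's directory before applying it:

```python
from pathlib import Path

class ServiceBoundaryError(Exception):
    """Raised when a proposed change crosses a service boundary."""

def check_agent_edits(service_root: str, proposed_paths: list[str]) -> list[Path]:
    """Reject any proposed file edit that falls outside the agent's
    assigned service directory; return the resolved, approved paths.
    (Requires Python 3.9+ for Path.is_relative_to.)"""
    root = Path(service_root).resolve()
    approved = []
    for p in proposed_paths:
        resolved = Path(p).resolve()
        if not resolved.is_relative_to(root):
            raise ServiceBoundaryError(
                f"{p} is outside the service boundary {root}"
            )
        approved.append(resolved)
    return approved
```

With a guard like this, an edit inside, say, a billing service's directory passes, while a proposed edit to a neighboring service raises an error, forcing the cross-service change to surface as an explicit, reviewable contract change instead.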
Richardson tells Nordic APIs that the acceleration of software delivery by GenAI-based coding agents makes the microservices architecture even more relevant.

“Because these agents dramatically increase the volume and speed of change, organizations need a fast flow architecture that can reliably deploy that stream of changes to production,” Richardson comments. “Without the microservice architecture — which enables fast flow at scale — deployment can easily become a brittle bottleneck.”

“Moreover, a microservice architecture’s clear boundaries provide essential guardrails for coding agents, reducing the risk of rapidly accumulating technical debt,” Richardson adds.

The Need for Deterministic Systems

AI agents are typically powered by large language models (LLMs), which behave probabilistically by design. This means that an AI agent can yield different outputs even when users provide the exact same inputs. The LLM’s output changes depending on factors such as context, sampling, and temperature. This unpredictability is problematic for business processes, which is why probabilistic models should be wrapped in deterministic infrastructure. Implementing determinism reduces the liability of unpredictability when building production-grade AI systems.

Christian Posta, author, international speaker, and VP, Global Field CTO at Solo.io, comments to Nordic APIs about the relevance of microservices for agentic AI and the continued need for determinism in systems.

“I think microservices and APIs are still going to be relevant,” Posta said. “Any moderately successful implementation of agentic AI that I see still includes some sort of unified data layer… and it’s served by APIs.”

Those APIs may also be served by Model Context Protocol (MCP), adds Posta, but the foundations are still what we see in APIs and microservices. The infrastructure to connect this layer to LLMs via MCP is evolving, but the foundations aren’t changing drastically.
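Wrapping a probabilistic model in deterministic infrastructure can start with something as simple as a strict output contract. The sketch below is illustrative only (the refund schema, action list, and stubbed call_llm are assumptions, not from the article): every model response must parse as JSON and name a whitelisted action before it can trigger a business process, and the system fails closed to a human otherwise.

```python
import json

# Deterministic "rails": a strict contract applied to every model response
# before it can trigger a business action. The action list and stub below
# are hypothetical, for illustration only.
ALLOWED_ACTIONS = {"approve_refund", "deny_refund", "escalate"}

def call_llm(prompt: str) -> str:
    # Stand-in for a real, probabilistic model call.
    return '{"action": "escalate", "order_id": "A-1001"}'

def run_with_rails(prompt: str, max_retries: int = 2) -> dict:
    """Accept the model's output only if it parses as JSON and names a
    whitelisted action; otherwise retry, then fail closed to a human."""
    for _ in range(max_retries + 1):
        raw = call_llm(prompt)
        try:
            result = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: retry deterministically
        if result.get("action") in ALLOWED_ACTIONS and "order_id" in result:
            return result
    return {"action": "escalate", "order_id": None}  # fail closed
```

The model remains free to vary its wording and reasoning, but nothing it produces can reach the business process without passing the same deterministic checks every time.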
“Once the initial excitement wears off, we’ll see more and more deterministic rails placed around AI agents, from things like MCP, APIs, agent skills, and of course, the underpinnings of microservices,” says Posta. “While natural language processing and non-deterministic agents are great for a certain class of use cases, they won’t replace everything,” he adds.

Another way to introduce determinism to LLMs and AI agents is to use Arazzo to orchestrate multiple APIs for MCP server tools. Arazzo provides a mechanism for defining sequences of API calls and their dependencies, creating deterministic API workflows. An MCP server that deterministically guides the AI agent through multi-step, API-driven workflows will be far more reliable than one that lets the agent guess at those workflows on its own, with unpredictable results.

The Future Looks Bright for Microservices

Ultimately, AI isn’t eliminating the need for microservices — it’s validating it. First, this architectural pattern rests on foundational principles that align with developer goals: developers want AI applications that are highly scalable and secure, with precise results and reliability. Microservices architecture helps with that. Second, these autonomous services enable the acceleration of software delivery by GenAI coding agents, allowing agents and human teams to keep up with the high volume and rapid speed of AI-driven change. Finally, microservices make it possible for developers to build deterministic infrastructure that serves as guardrails for LLM-driven applications like AI agents. These guardrails reduce business risk and liability from unpredictable AI applications.

Overall, the future of microservices in the age of AI looks bright.

AI Summary

This article examines whether microservices architecture remains relevant in the age of AI and concludes that it plays a critical role in enabling scalable, secure, and governable AI systems.
Microservices architecture — a distributed design pattern composed of independently deployable services communicating via APIs — aligns closely with AI system requirements such as scalability, isolation, and clear ownership boundaries. As generative AI accelerates software delivery, microservices provide structural constraints that limit system-wide disruption, allowing coding agents to operate within defined service boundaries. AI agents powered by large language models (LLMs) behave probabilistically, making deterministic infrastructure essential for production environments where predictable outcomes are required.

Tool-calling protocols such as the Model Context Protocol (MCP) reinforce the importance of well-defined interfaces, explicit contracts, and discoverable services in agentic architectures. Rather than replacing microservices, AI increases the need for modular systems that provide observability, governance, and controlled change at scale.

Intended for API architects, platform engineers, and technology leaders evaluating how microservices architecture supports AI agents, generative AI workflows, and distributed enterprise systems.