What Is Agent Experience (AX)?

Posted in Strategy | Art Anthony | December 24, 2025

For as long as most of us can remember, “developer experience” has been the umbrella term for measuring the usability, reliability, and effectiveness of APIs. A great developer experience, meaning one that makes things straightforward and reduces friction as much as possible, is the gold standard. Get your DX right, and word of your API will spread across the land. (Well, that’s the idea anyway.)

But those days may be coming to an end. Agentic data consumption is well and truly on the rise, with Gartner predicting that 33% of enterprise applications will include agentic AI by 2028, making 15% of day-to-day work decisions autonomously. At Dreamforce 2024, Salesforce CEO Marc Benioff shared his vision of one billion AI agents by the end of 2026. To misquote Paul Revere, the agents are coming.

In this post, we’ll look at what the rise of agentic consumption means for today’s digital services. We’ll highlight some of the key differences between designing technologies for humans versus agents, and cover a few ways to start thinking about (and improving) the AX of your products.

The Evolution of Agent Experience (Didn’t Happen Overnight)

Let’s kick off with a quick definition: agent experience (AX) is the practice of designing a product so that AI agents can “understand” it and reliably interact with it autonomously. No more, no less.

Rather than treating AX as the latest buzzword in the tech space, let’s consider it the next step in a natural progression that has been taking shape for some time. A 2017 Adobe blog post suggests that the term UX was popularized (if not coined) by Donald Norman when he joined Apple as a “user experience architect” in the early 1990s, although the concept itself has, of course, been around for much longer. Good user experience is nothing more than building a product that is as intuitive for end users as possible.
It is worth pointing out that when Jeremiah Lee Cohick introduced the term developer experience (DX) in his 2011 article “Effective Developer Experience,” no one accused DX of trying to replace or displace UX. (When it comes to APIs, where virtually all consumers are developers, DX and UX are synonymous.) It was just another way of viewing a certain type of user experience. Likewise, AX is just UX where the user happens to be an agent.

In a talk on building the ideal agent experience at our 2025 Platform Summit (well worth a watch for more on agentic AI best practices), Gravitee’s John Gren highlighted that “an agent needs to be a user within your ecosystem. It needs to be a first-class citizen, a first-class user just like humans are.”

There is no need for API developers to throw all their existing UX best practices out the window, not least because human developers will still be interacting with APIs for the foreseeable future. In other words, AX is not about ditching humans in favor of autonomous agents; it is about catering to both audiences simultaneously.

But even though we are still in the early days of autonomous API usage, we are already seeing significant differences in how humans and agents interact with APIs: agents, for example, handle decisions programmatically and are more likely to chain calls together. As agentic consumption increases, optimizing for it will become essential to a good overall experience.

AI Agents and Humans Do Things Differently

Autonomous AI agents are not just another API consumer. They cannot connect disparate dots the way a human user of your products might, they cannot use common sense to figure out what you really mean by a vague definition, and they definitely do not appreciate that Back to the Future reference in the intro to your documentation.
As Gravitee’s Gren puts it, “poor agent experience means that agents cannot understand what to do, and they will pick other tools instead.” Here are some things you can do to avoid that happening.

1. Agents Need Clear, Unambiguous Documentation

Artificial intelligence tools are bad at inferring context or reading between the lines. Present a gen AI tool with phrases like “a couple” or “try a few,” and it is likely to get confused about what you mean. The same applies to more complex terminology and API-related language. It has been said that content designed for consumption by AI should look less like a blog post and more like a legal contract.

Great agent experience is all about being as clear and predictable as possible. That means no marketing speak, no variance in your definitions, and rules and constraints that leave no room for interpretation. Because agentic “wiggle room” can be disastrous.

“We need to be able to trust agents to know that they are not stepping above their permissions,” Gren states. “We also need to have explainability, to understand what [actions] they actually performed.” Making processes as clear as possible is a good first step toward that.

2. Discoverability Should Be a Priority

If you don’t tell an AI agent that your API is capable of doing something, it is doubtful it will ever figure that out on its own. It might even turn to another API or service to fill a gap that an undocumented function of your API could easily handle. Beyond ensuring that all functions of your API are well documented, that means embracing well-defined schemas, established standards like OpenAPI, regular updates to your docs, and Model Context Protocol (MCP) servers. We will get into those below.

Word of mouth does not exist among AI agents.
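Discoverability starts with machine-readable descriptions that spell out the constraints a human reader might otherwise infer. As a sketch (the path, operation name, and limits here are invented for illustration, not taken from any real API), an agent-friendly OpenAPI operation might look like this:

```yaml
paths:
  /invoices:
    get:
      operationId: listInvoices
      summary: List invoices for the authenticated account.
      description: >
        Returns at most `limit` invoices, sorted by creation date, newest
        first. Never returns invoices belonging to other accounts.
      parameters:
        - name: limit
          in: query
          required: false
          schema:
            type: integer
            minimum: 1
            maximum: 100
            default: 20
          description: Maximum number of invoices to return (1 to 100).
      responses:
        "200":
          description: A JSON array of invoice objects.
        "429":
          description: >-
            Rate limit exceeded. Retry after the number of seconds given
            in the Retry-After header.
```

Note that nothing is left to interpretation: the sort order, the account boundary, and the valid range for `limit` are all stated explicitly, so an agent never has to guess.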
Agents are not standing around the water cooler talking about that incredible API they just discovered, so we need to make sure we are publishing information about our APIs where agents can find it.

3. Accounting for Authentication and Authorization

AI struggles with auth patterns and flows as we currently imagine them. A redirect here and a captcha there are no problem for human operators, but they can create dead ends for autonomous agents. There are, however, ways we can create new paths for agents. Effective agent experience requires non-interactive authentication, as well as clear paths for behavior when authorization or authentication fails.

Gren suggests that “as you give agents control, as the agent is acting on your behalf, it is very important to have short-lived tokens and small consents in terms of what they can do.” Clearly defined permission boundaries of what agents can and cannot access may also help inspire trust among human users who are AI skeptics.

When it comes to identity and access management for autonomous AI agents, it is essential to take a cautious approach. Experts recommend designing access control rules that target applications, adding relevant identity information to access tokens, and being able to identify (and optionally authenticate) clients.

4. Embrace the Rise of MCP

MCP, open-sourced and championed by Anthropic, is a standard for connecting AI applications to external platforms. It was a hot topic at our latest Platform Summit, with top billing in our editor Bill Doerrfeld’s keynote speech. Do not be surprised to read lots more about MCP from us in the near future. Until then, it is worth getting familiar with MCP, as its adoption is very likely to grow rapidly.
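To make the idea concrete, here is a sketch of a single tool definition as an MCP server might expose it in its tools list. The `name`, `description`, and `inputSchema` members are the fields MCP uses for tool definitions; the tool itself and its schema properties are invented for illustration:

```json
{
  "name": "get_invoice",
  "description": "Retrieve a single invoice by its ID. Returns an error if the ID does not exist.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "invoice_id": {
        "type": "string",
        "description": "Exact invoice identifier, e.g. 'INV-2024-0042'."
      }
    },
    "required": ["invoice_id"]
  }
}
```

The same clarity principles from earlier apply here: the description states exactly what the tool does and what happens on failure, and the JSON Schema tells the agent precisely which arguments are required and in what format.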
MCP’s website calls it “the USB-C for AI applications” and, as Gren puts it in his presentation, “if an agent can talk MCP, it can integrate with your service.” Our own J Simpson put together a great intro to Model Context Protocol that is well worth checking out if you have not had the opportunity to dabble with it yet.

One hurdle to be aware of when working with MCP is optimizing context windows for LLM-based agents to avoid token bloat. You may need to consider practices like schema slimming, context trimming, or code mode for compression to avoid performance degradation, reduce hallucinations, and ensure that agents continue to run reliably at scale.

5. Catering for Stability and Errors

When an API call fails and breaks an integration, you might get an email or a support request from a user along the lines of “Hey, this is not working. Any chance a fix is coming?” You will not get that from an agentic consumer: the integration will just break and stay broken until you notice it.

Designing error messages that actually explain the problem (and define a recovery path) and using API observability tools to monitor your products are two essential ingredients for increasing API usability for agents. Without them, an agent will just keep spinning its wheels until it ditches your service, likely for good.

We can mitigate that risk, Gren suggests, by “including agent acceptance criteria when building tools, and automating AI agent tests against tools and agents to ensure that they actually work.” In other words, to improve agent experience, give your own bots a crack at the process before the external agents arrive.
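As a minimal sketch of what an error with a recovery path can look like, the helper below builds a payload in the style of RFC 7807 (the `application/problem+json` format). The `recovery` and `retry_after_seconds` members are illustrative extensions we have added for agents, not standard RFC members (the RFC explicitly allows extension fields):

```python
import json

def agent_friendly_error(status, title, detail, recovery, retry_after=None):
    """Build an RFC 7807-style error body with an explicit recovery path."""
    body = {
        "title": title,        # short, stable summary of the problem class
        "status": status,      # HTTP status code, duplicated in the body
        "detail": detail,      # specifics of this occurrence of the problem
        "recovery": recovery,  # explicit next step, so the agent is never stuck
    }
    if retry_after is not None:
        body["retry_after_seconds"] = retry_after
    return json.dumps(body)

print(agent_friendly_error(
    429,
    "Rate limit exceeded",
    "Client exceeded 100 requests per minute.",
    "Wait and retry the same request with exponential backoff.",
    retry_after=60,
))
```

The key design choice is that every field is machine-actionable: an agent that cannot parse prose apologies can still branch on `status`, sleep for `retry_after_seconds`, and follow `recovery` instead of abandoning your service.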
Agent Experience Drives AI Agent Consumption

In our deep dive into Postman’s State of the API 2025, we highlighted a couple of key stats that are relevant here: even though 89% of those surveyed use gen AI tools in their daily work (to improve code, identify errors, generate documentation, and more), just 24% of respondents are designing APIs with AI agents in mind. Postman also finds that 60% of respondents design primarily “for humans only.” By those metrics, the vast majority of API providers are still woefully underprepared for this shift.

Because this practice is still emerging, there are not yet many stats on the pace of its adoption. The broader signals, however, make it clear we are heading toward a future of increased machine-driven API consumption:

- Forbes suggests that up to 70% of office work tasks will be automated using AI within the next decade.
- In an article on APIs in the workplace, Kin Lane suggests that the average enterprise uses between 0.5 and 2 APIs per employee.
- Back in 2018, Akamai estimated that as much as 83% of web traffic is API-related.

Stats like these underscore that we are almost inevitably going to see a massive amount of AI-driven API usage soon. The right time to start preparing for that wave is now, before it becomes a tsunami and makes landfall.