HATEOAS: The API Design Style That Was Waiting for AI

Posted in Design | Janet Wagner | September 25, 2025

You’ve heard about AI agents. You may also know about hypermedia as the engine of application state (HATEOAS), a well-established API design style and REST constraint. But have you ever thought about AI agents and HATEOAS together? Hypermedia provides ways to address several AI agent problems, specifically around tool calling, maintaining context, and managing runtime environments. Below, we’ll explore how HATEOAS can help AI agents overcome these challenges.

What is HATEOAS?

HATEOAS is an architectural constraint applied to hypermedia APIs, and the focus of ongoing debates about the true definition of a REST API. HATEOAS enables developers to create self-descriptive APIs with embedded hypermedia links that clients can discover dynamically, so clients don’t have to know hardcoded API paths beforehand.

HATEOAS’ structure makes for highly flexible and backwards-compatible APIs. However, it adds complexity to API design because developers must model link and resource relationships and ensure they include hyperlinks properly in API responses. That complexity adds obstacles and time to the API design process, something most developers would rather avoid.

Tool Calling and Maintaining Context: Critical to Autonomous AI Agents

Before we can discuss why HATEOAS is relevant to AI agents, we first need to explain two critical components of autonomous AI systems: tool calling and maintaining context.

AI agents act autonomously based on decisions formed from their own reasoning and contextual knowledge. The ability to “reason” comes from large language models (LLMs), and the contextual knowledge comes from multiple sources, including the underlying LLM’s pre-training and external tools like APIs and MCP servers. Tool calling enables autonomous AI systems to dynamically access external tools and perform actions with them.
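In practice, tool calling starts with the application handing the model a machine-readable description of each tool. A minimal sketch in Python, with the dictionary shape following the widely used OpenAI function-calling convention (the tool name and fields here are illustrative, not tied to any particular provider):

```python
# A tool description in the OpenAI-style function-calling format.
# The LLM reads the name, description, and parameter schema to decide
# when and how to call the tool. (Illustrative sketch only.)
search_articles_tool = {
    "type": "function",
    "function": {
        "name": "search_articles",
        "description": "Search published articles by keyword.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "Search keywords, e.g. 'hypermedia'",
                },
            },
            "required": ["query"],
        },
    },
}

# The application sends this list alongside the conversation; the model
# replies with the name and arguments of the tool it wants to invoke.
available_tools = [search_articles_tool]
```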
For example, an MCP server could provide an LLM with a list of tools that includes a weather API the model could call to answer weather-related questions based on location.

For an AI agent to be autonomous, the underlying LLM must carry context throughout entire conversations and across the actions it completes. The LLM must also understand the context of each available tool to know when, why, and how to use it. Existing AI systems struggle with tool selection when presented with an excessive list of tools or with tool descriptions that don’t provide enough context. These systems are also not adept at presenting tools as options based on resource state or past tool choices.

Hypermedia Solves Several Key AI Agent Problems

Two key problems AI agents face are tool calling and maintaining context. A third is managing the runtime environment: AI agents require runtime environments because they use tools in a loop. Hypermedia can help AI systems overcome all three challenges.

Tool Calling and Maintaining Context

Hypermedia helps provide better context about tools at runtime. “AI agents don’t solve the hypermedia problem, but hypermedia does solve the AI agent problem of tool selection and efficiently maintaining context,” Darrel Miller, partner API architect at Microsoft, commented on LinkedIn.

“Two of the things LLMs need to do in ‘agentic’ systems is select appropriate tools and construct machine readable parameters to send to those tools based on previous human to agent interactions,” he added. “Hypermedia is an effective way of accumulating the results of past choices and constraining the potential useful tools available for the next interaction.”

“The probabilistic behavior of LLMs has led us to recreate hypermedia patterns,” Kevin Swiber, API strategist and consultant, observed on LinkedIn. LLMs’ structured outputs, tool calling, and human-in-the-loop requirements already resemble hypermedia.
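The resemblance is easy to see side by side: a hypermedia response advertises the next valid actions as links, much as a tool list advertises callable functions. A minimal sketch, using hypothetical resource and link names in the HAL style of embedding a `_links` section:

```python
# A HAL-style hypermedia response for an order resource. The "_links"
# section plays the same role as a tool list: it tells the client which
# actions are valid from the current state. (Hypothetical resource and
# link names, following common HAL conventions.)
order_response = {
    "id": "order-123",
    "status": "pending",
    "_links": {
        "self":   {"href": "/orders/order-123"},
        "cancel": {"href": "/orders/order-123/cancel", "method": "POST"},
        "pay":    {"href": "/orders/order-123/payment", "method": "POST"},
    },
}

# Translating links into "tools" for the current turn is mechanical:
next_actions = [name for name in order_response["_links"] if name != "self"]
```

A client that only offers the model `next_actions` has, in effect, let the server's application state constrain tool selection.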
“I may be getting close to wrapping hypermedia APIs around LLM agents,” Swiber said.

Managing Runtime Environments

AI agents need to understand at runtime which APIs they could consume. Miller pointed out that runtime API calling already occurs in AI orchestrators: “The OpenAI function calling description is effectively a hypermedia affordance that is used by the LLM to do selection. The only missing piece is agents/tools using application state to determine the current set of relevant tools. Some of this tool discovery is being done by frameworks, but if it moves into application code then you have a hypermedia-driven application.”

Mike Amundsen has created a framework called GRAIL (Goal-Resolution through Affordance-Informed Logic) that demonstrates hypermedia for AI agents. He explained on LinkedIn that the GRAIL framework lets “agents try things, fail gracefully, learn what’s needed, and then continue on.” A GRAIL client can discover all of this at runtime without any prior knowledge.

“I’m always looking for new ways to automate machine-to-machine communications, and hypermedia has always been a powerful engine for those interactions,” Amundsen told Nordic APIs. GRAIL makes it possible for a client to navigate from start to finish by discovering and activating affordances in order to solve a problem. “My hope is that experiments like this will lead to useful M2M tools and inspire others to create even more powerful and reliable hypermedia environments.”

All in all, hypermedia can help developers address challenges involving tool calling, maintaining context, and the runtime environment. However, adopting HATEOAS requires developers to rethink API design.

APIs Designed with AI Agents in Mind

As more organizations deploy AI agents, API developers and providers will need to consider the agent experience (AX), and HATEOAS has a significant role to play in that regard.
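The runtime loop Miller and Amundsen describe can be sketched in a few lines. Everything below is illustrative: `fetch` stands in for a real HTTP client, and `choose_link` stands in for a real LLM call that picks the next affordance given a goal.

```python
def fetch(url):
    # Stand-in for an HTTP GET returning a hypermedia response.
    # Hardcoded here so the sketch is self-contained.
    responses = {
        "/orders/1": {"status": "pending",
                      "_links": {"pay": {"href": "/orders/1/payment"}}},
        "/orders/1/payment": {"status": "paid", "_links": {}},
    }
    return responses[url]

def choose_link(links, goal):
    # Stand-in for an LLM call: given the current links and a goal,
    # pick the next action. A real agent would prompt the model here.
    return next(iter(links), None)

def run_agent(entry_url, goal):
    """Hypermedia-driven loop: the server's links, not a hardcoded
    tool list, determine what the agent can do at each step."""
    url, visited = entry_url, []
    while url is not None:
        resource = fetch(url)
        visited.append(url)
        links = resource["_links"]
        choice = choose_link(links, goal)
        url = links[choice]["href"] if choice else None
    return visited
```

Starting from only an entry point, `run_agent("/orders/1", "pay for the order")` walks the order to its payment resource and stops when no links remain, with no prior knowledge of the API's paths.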
“HATEOAS is becoming relevant (again) in the age of AI and AI agents,” Ton Donker, systems architect at Enexis Groep, commented on LinkedIn. “As these agents take over as the primary users of APIs, the ability for APIs to guide and describe themselves using hypermedia links is becoming increasingly important.”

“Using Arazzo, or HATEOAS-style hypermedia links can improve the agent experience by improving context, reducing ambiguity, and helping navigate multi-step processes,” he added. Donker told Nordic APIs that he thinks of HATEOAS as “powering discovery-driven AI, moving agent state forward via hypermedia.”

Kevin Duffey, an independent consultant, expressed a similar sentiment: “I genuinely believe HATEOAS was just waiting for the right technology. Had we had powerful AI like LLMs and frameworks like the Model Context Protocol (MCP) two decades ago, the entire API design philosophy might be radically different today.”

Imagine if HATEOAS links were paired with natural language descriptions, payload details, and HTTP method hints: many possibilities would open up. “Suddenly, you could point an AI at an API’s entry point, give it a high-level goal, and it could autonomously navigate complex workflows. The AI becomes the ‘intelligent browser,’ interpreting context and making decisions,” Duffey remarked.

APIs with HATEOAS can offer AI agents the flexibility they require. However, developers will need to design their APIs so that they retain some control over what agents do with them.

More Control Through HATEOAS APIs

API providers and website owners face the prospect of many more AI agent consumers, and both want to prevent machine clients from overwhelming their resources and behaving in ways they can’t control. Most website owners also frown on AI-driven web scraping, putting countermeasures like bot detection tools in place. To control how AI agents consume resources, providers could offer AI-specific hypermedia APIs with adaptive rate limiting.
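On the server side, that kind of adaptive control can be expressed directly in link availability: the `_links` section is composed per client, so an agent only ever sees the operations it is currently allowed to perform. A hypothetical sketch, with made-up resource paths and client types:

```python
def links_for(client_type, remaining_quota):
    """Compose the _links section per client: adaptive rate limiting
    expressed as link availability. (Hypothetical resources and
    client types; not a real framework API.)"""
    links = {"self": {"href": "/articles"}}
    if remaining_quota > 0:
        # Pagination only appears while the client has quota left.
        links["next"] = {"href": "/articles?page=2"}
    if client_type != "ai-agent":
        # Bulk export stays hidden from machine clients entirely.
        links["export"] = {"href": "/articles/export"}
    return links
```

An agent that has exhausted its quota simply receives no `next` link, so a well-behaved hypermedia client stops paginating without ever hitting a 429.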
For example, API designers could use HATEOAS to guide AI agents through allowed operations, controlling the flow of interaction. They could also limit link exploration, since AI agents can only follow the links provided in API responses.

“HATEOAS is an API style analogous to the way humans interact with the web — fetch a resource, inspect the available links, actions and supporting information, decide what to do next, then do it,” Tom Akehurst, CTO and co-founder at WireMock, shared with Nordic APIs. “Repeat until the task is complete.”

“AI agents are very good at following this process via the human-optimized web, but this is inefficient and often inhibited by bot protection,” he added. “Providing HATEOAS APIs seems like a good solution – websites can control how AI agents interact with their data, and agent vendors can integrate with an efficient, reliable, LLM-optimized interface.”

API design philosophy could change radically because of AI, starting with more HATEOAS-driven APIs.

HATEOAS: Ahead of Its Time

AI agents demand more from APIs than other API consumers. They need every API to provide semantic context so the LLMs that power them can understand what the API does, when to use it, and why. They also need APIs that help them maintain context throughout interactions and support efficient tool calling at runtime. HATEOAS checks all these boxes. It’s an API design style that was just waiting for the right technology: AI.