# Is the AI Revolution Leaving APIs Behind?

Posted in Strategy by Art Anthony, January 28, 2025

Just in case you haven’t been paying attention, artificial intelligence is everywhere. Trying to read an online magazine or watch a Sunday Night Football game without encountering a mention of AI is now officially impossible. If you believe recent ads for AI-enhanced website builders, you no longer need a copywriter, designer, or developer.

The reality, of course, may be quite different. For every announcement about AI being integrated into a product, there’s another YouTube essay on the limitations of generative AI or an insightful op-ed arguing that it’s plateauing. The truth probably lies somewhere in the middle.

In recent months, we’ve written extensively about generative AI and APIs, which seem like natural bedfellows. Analysts like Paul Dumas paint a future in which AI will consume (and write) APIs, invoking them to retrieve data and expand their models. If this vision becomes a reality, pivoting to support that type of API consumption will be a necessity for API providers to stay relevant.

At our Platform Summit 2024, Superface.ai’s Zdenek “Z” Nemec identified the chasm between potential and reality. According to Nemec, when using AI agents, it’s often easier, faster, and cheaper to access data and perform actions via the UI than via APIs. “Often, it is the only way to do so,” he says.

Below, we’ll look at Nemec’s assertion that modern APIs are a bottleneck for autonomous agents, limiting the ability to innovate, and how (or indeed whether) we can resolve that.

## Limitations of LLM-Powered Agents

Despite what some people still think, AI agents are not all-seeing or all-knowing (yet). They are, as Nemec says, “essentially pieces of software that act on behalf of a user or another program. They can make decisions and do other useful stuff, but they’re only as good as the tools they have at their disposal.” And many agents still don’t have the right tools.
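Nemec’s point about tools is concrete in most agent frameworks: an agent can only perform actions for which it has been handed a function to call. Here’s a minimal sketch of that idea — the registry shape and tool names are hypothetical, not any particular framework’s API:

```python
# Hypothetical sketch of an agent's tool registry: the agent can only
# act through functions that have been registered as tools.
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {}

def tool(name: str):
    """Decorator that registers a function as a tool the agent may call."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        TOOLS[name] = fn
        return fn
    return register

@tool("search_contacts")
def search_contacts(query: str) -> str:
    return f"contacts matching {query!r}"

def agent_act(tool_name: str, argument: str) -> str:
    """The agent is 'only as good as its tools': unknown actions fail."""
    if tool_name not in TOOLS:
        return f"cannot do {tool_name!r}: no such tool"
    return TOOLS[tool_name](argument)

print(agent_act("search_contacts", "Alice"))  # contacts matching 'Alice'
print(agent_act("send_invoice", "Bob"))       # cannot do 'send_invoice': no such tool
```

The second call illustrates Nemec’s complaint exactly: if no one has wired up a tool for an action, the agent simply can’t do it, however capable the underlying model is.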
When agents can infer semantics, they can work like a human. In other words, they can use API documentation (especially in standardized formats like the OpenAPI Specification) to figure out which endpoints to call to get the data they need.

Unfortunately, there’s no guarantee they’ll actually be able to call those endpoints. Nemec gives the example of LinkedIn: although the service has an API, he says that “it is not easily accessible” and “its functionality is limited.” He suggests that successfully obtaining access to the API can take several weeks, then compares it with a third-party scraping service that can be connected to an agent in just one minute. There are similar examples for Instagram and Salesforce, where third-party screen scraping services are more readily available (and easier to use) than the official APIs.

Using AI agents to automate human input, for example to drive such services and retrieve data automatically, is fast becoming an appealing alternative to APIs. OpenAI’s “Operator”, due for imminent launch at the time of writing, is one of many such tools.

And that’s a win for end users because, in Nemec’s words, “LLMs perform miserably when finishing complex tasks with complicated APIs.” APIs typically have many methods, object IDs, fields, and data types, and this high granularity adds a great deal of abstraction that AI tools struggle with.

## Limitations of APIs (for AI agents)

As it stands, several factors limit the extent to which AI tools like chatbots can effectively interface with APIs.
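The granularity problem is one of the clearest: a task a human would describe in one sentence becomes a chain of dependent calls, each requiring the right ID from the previous step. A hypothetical illustration (every endpoint and field below is invented for the sketch):

```python
# Hypothetical illustration of API granularity: "email the invoice for
# my last order" becomes several coordinated calls, each threading an
# ID or field from the previous response into the next request.

def fake_api(endpoint: str, **params) -> dict:
    """Stand-in for HTTP calls to an imaginary commerce API; the
    parameters are accepted but ignored in this sketch."""
    responses = {
        "/customers/lookup": {"customer_id": "c-42"},
        "/orders/latest": {"order_id": "o-7", "invoice_id": "inv-99"},
        "/invoices/send": {"status": "sent"},
    }
    return responses[endpoint]

# One sentence for a human; three dependent calls for an agent:
customer = fake_api("/customers/lookup", email="a@example.com")
order = fake_api("/orders/latest", customer_id=customer["customer_id"])
result = fake_api("/invoices/send", invoice_id=order["invoice_id"])
print(result["status"])  # sent
```

Each hop is a chance for an LLM to pick the wrong endpoint, the wrong ID, or the wrong field — which is exactly why a single “click the button” UI action can outperform the official API.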
Nemec highlights the following:

- No, or limited, access to APIs for developers (and therefore agents)
- Legacy APIs (WS/RPC) and/or a lack of up-to-date documentation
- APIs that only cover a fraction of what can be accomplished using the UI
- Overcomplicated or bloated APIs that are difficult to call

Until recently (and even now), there has been limited appetite to expand the scope of APIs to make life easier for developers using AI tools to consume them. And there’s a reason for that. API consumption facilitates cherry-picking different functions, which can help users avoid vendor lock-in. AI agents are, in Nemec’s words, “challenging the stickiness of today’s products because they are essentially a different interface to those cloud and SaaS applications.”

This ties in, he suggests, to the shift from APIs as fun side projects to legitimate business offerings, making up what we call the API economy. The implication is that if companies don’t see a financial upside to changing the above, they have no incentive to do so.

Also read: How API Description Languages Can Empower AI

## Will the API Economy Adapt to AI?

In outlining some of the API economy’s key business drivers, like direct monetization, stickiness, value add, and building relationships with partners, Nemec acknowledges that innovation and efficiency, which enable what he calls “remix culture,” are significant. In isolation, however, they may not be seen as important enough to motivate change.

But they’re not in isolation. “B2B agents are coming,” Nemec says. “They’ll be checking out warehouses, stock availability, manufacturers, and so on.” In other words, AI doesn’t just add value through data aggregation and improved presentation: it has legitimate purchasing capabilities with a direct financial impact.

Where this argument falls down, however, is on the stickiness factor.
“Companies want us,” Nemec states, “to stick in front of the screen, using their UI.” (While that’s true right now, it underscores the importance of standardized backend APIs for staying relevant in the API economy if UIs do fade away.) But it explains, at least in part, why so many companies are rushing to embed white-labeled AI functionality into their products. They want to keep us locked into their branded experience rather than running off to ask ChatGPT, Gemini, or some other third-party tool.

Such moves may not be enough. Early adopters tend to fall into camps, and ChatGPT vs. Claude already feels like where the battle lines are drawn. It remains to be seen whether companies can integrate AI in smart enough ways to hold onto users and keep us away from the third-party services many are already using regularly.

## Will AI Adapt to APIs?

Despite the above, it would be unfair to portray this as a one-sided problem. Crucially, opening up APIs to tools like AI agents presents a set of governance problems. When users employ third-party services to scrape data, providers have plausible deniability: they can throw up their hands and say, “Well, we never intended to let people use it like that.” As soon as they start providing the data officially, that becomes a very different story. It places an onus on API providers to ensure they’re meeting their compliance obligations… because LLMs and AI tools won’t “think” twice about disseminating sensitive data improperly.

Nevertheless, Nemec asserts that agents are the future of API consumption. In the meantime, he posits that large action models (LAMs) could act as the bridge to that future. LAMs use web and screen actors, similar to robotic process automation, to interact with systems on your behalf.

So, will LAMs replace APIs? Probably not. (Good news for us, as ‘Nordic LAMs’ doesn’t have quite the same ring to it.) Echoing our own Bill Doerrfeld, Nemec suggests that we need APIs more than ever.
However, we must apply “more directed force” and “think about how we are building and for whom we are building.” We also need to ensure that the APIs we provide actually work effectively (no incomplete documentation, missing endpoints, or absent fields!) to avoid developers circumventing them using retrieval-augmented generation (RAG) and screen scraping software.

Nemec theorizes that we may be headed for an 80:20 AI-to-human split in API consumption. As consumption shifts from humans towards autonomous agents, we need to figure out ways to support it through streamlined access and machine-readable documentation. Otherwise, we risk missing out on a massive audience.

All hail the robot overlords!
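As a closing practical aside, “machine-readable documentation” is something providers can actually test for. Below is a minimal sketch (the spec fragment and the checks are hypothetical, not an official linter) that flags OpenAPI operations an agent couldn’t reason about:

```python
# Hypothetical doc-lint sketch: flag OpenAPI operations that lack the
# summaries and parameter descriptions an autonomous agent would need.
def find_undocumented(spec: dict) -> list[str]:
    """Return human-readable notes about documentation gaps."""
    problems = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            if not op.get("summary") and not op.get("description"):
                problems.append(f"{method.upper()} {path}: missing summary")
            for param in op.get("parameters", []):
                if not param.get("description"):
                    problems.append(
                        f"{method.upper()} {path}: "
                        f"parameter {param.get('name')!r} undocumented"
                    )
    return problems

# Invented spec fragment with two deliberate gaps:
SPEC = {
    "paths": {
        "/orders": {
            "get": {
                "summary": "List orders",
                "parameters": [{"name": "status"}],  # no description
            },
            "post": {},  # no summary at all
        }
    }
}

for issue in find_undocumented(SPEC):
    print(issue)
```

Running a check like this in CI is one low-cost way to keep a spec agent-ready before the 80:20 split arrives.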