3 Ways to String Multiple APIs Together

APIs are the composable building blocks of the modern web, but combining them into a secure, coherent, and resilient workflow has always required more than duct-taping the calls together and calling it good. Whether you’re coordinating microservices or linking third-party APIs into networked products, stringing APIs together is a complex process.

Today, we’re going to look at three common approaches to connecting multiple APIs together. We’ll dive into what these look like practically and see what the world of AI has to say about this process.

1. Serialized API Flows

The easiest — though least flexible — approach is to serialize API flows from one call to the next. In this model, you take the output of one API and directly feed it into another, transforming it slightly to match the expected input format.

Consider a simple serialization using a signup API and an enrichment API to build a more complete user profile. When a user signs up for our service, we want to enrich their profile using our data sources and then store the data as a user profile. This involves three APIs working in sequence.

First, consider the initial signup request:

import requests

signup_response = requests.post("https://api.example.com/signup", json={
    "name": "Jane Doe",
    "email": "jane@example.com",
    "password": "secure123"
})
signup_data = signup_response.json()
email = signup_data["email"]

From here, we can take the email and feed it into a separate enrichment endpoint:

enrich_response = requests.get(f"https://api.example.com/enrichment?email={email}", headers={
    "Authorization": "Bearer bearerkey"
})
enrich_data = enrich_response.json()

By passing the email returned from the first request to the /enrichment API, we’re chaining output from one call directly into the next. We can now take the results of the enrichment process and push them to another endpoint to update the internal user record:

requests.post("https://api.example.com/users", json={
    "user_id": signup_data["id"],
    "company": enrich_data.get("company"),
    "title": enrich_data.get("title"),
    "linkedin": enrich_data.get("linkedin")
})

This process is straightforward to implement. But it’s also limited: you’re simply piping data from step to step, with no built-in handling for errors, retries, or branching. It’s fine for quick workflows, but if you’re orchestrating multiple dependent APIs or handling long-lived state, you’ll need something more robust.
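Even within this simple model, you can harden the chain a little by failing fast on bad responses and validating the fields the next call depends on. Here’s a minimal sketch of that idea, reusing the same hypothetical endpoints as above:

import requests

def post_json(url, **kwargs):
    # POST and fail fast if the API returns an error status.
    response = requests.post(url, timeout=10, **kwargs)
    response.raise_for_status()  # surfaces 4xx/5xx instead of passing bad data on
    return response.json()

try:
    signup_data = post_json("https://api.example.com/signup", json={
        "name": "Jane Doe",
        "email": "jane@example.com",
        "password": "secure123"
    })
    # Validate the field the next call depends on before chaining it.
    email = signup_data["email"]
except (requests.RequestException, KeyError) as err:
    raise SystemExit(f"Signup step failed, aborting the chain: {err}")

Even with these guards, every retry policy, rollback, and branch remains yours to write by hand, which is exactly the gap the next two approaches aim to close.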

2. Specification-Driven API Sequences

Taking it a step further, we can look at implementations driven by specifications, such as the Arazzo Specification. Arazzo is a specification developed under the OpenAPI Initiative, part of the Linux Foundation’s effort to standardize specification design and implementation.

This approach provides a formal way to declare workflows, describing the sequencing of calls and the elements shared across those calls. Declaring the flow up front gives us predictability, better interoperability, and cleaner portability of Arazzo-based workflows.

Let’s look at our example from above, where we want to allow a user to sign up and then enrich their profile. Notably, we’ll include a verification stage in this flow to ensure the data we’ve collected is accurate.

arazzo: 1.0.1
info:
  title: User Onboarding Workflow
  version: 1.0.0
  description: >
    A workflow that handles user signup, enriches the user's profile,
    and verifies the enriched data.

sourceDescriptions:
  - name: userAPI
    url: https://api.example.com/openapi.yaml
    type: openapi

workflows:
  - workflowId: userOnboarding
    summary: Sign up a user, enrich their profile, and verify the data.
    inputs:
      type: object
      properties:
        name:
          type: string
        email:
          type: string
        password:
          type: string
      required:
        - name
        - email
        - password
    steps:
      - stepId: signUpUser
        operationId: createUser
        requestBody:
          contentType: application/json
          payload:
            name: $inputs.name
            email: $inputs.email
            password: $inputs.password
        outputs:
          userId: $response.body.id
          email: $response.body.email

      - stepId: enrichProfile
        operationId: enrichUser
        parameters:
          - name: email
            in: query
            value: $steps.signUpUser.outputs.email
        outputs:
          company: $response.body.company
          title: $response.body.title
          linkedin: $response.body.linkedin

      - stepId: verifyData
        operationId: verifyUserData
        requestBody:
          contentType: application/json
          payload:
            userId: $steps.signUpUser.outputs.userId
            company: $steps.enrichProfile.outputs.company
            title: $steps.enrichProfile.outputs.title

Let’s break this down piece by piece. First, we have the fixed required fields in Arazzo:

arazzo: 1.0.1
info:
  title: User Onboarding Workflow
  version: 1.0.0
  description: >
    A workflow that handles user signup, enriches the user's profile,
    and verifies the enriched data.

sourceDescriptions:
  - name: userAPI
    url: https://api.example.com/openapi.yaml
    type: openapi

workflows:

The arazzo field states the version number used to interpret the rest of the document. From here, the info object sets the metadata for the workflow: its title, version, and description. Next, we define the sourceDescriptions, pointing to the OpenAPI description that the workflow’s operations resolve against. Finally, we start defining the actual workflows.

First, the workflow for the user signup:

workflows:
  - workflowId: userOnboarding
    summary: Sign up a user, enrich their profile, and verify the data.
    inputs:
      type: object
      properties:
        name:
          type: string
        email:
          type: string
        password:
          type: string
      required:
        - name
        - email
        - password
    steps:
      - stepId: signUpUser
        operationId: createUser
        requestBody:
          contentType: application/json
          payload:
            name: $inputs.name
            email: $inputs.email
            password: $inputs.password
        outputs:
          userId: $response.body.id
          email: $response.body.email

This captures the basic input values for the user signup step and records the new user’s ID and email as outputs. Next, we need to set up a continued workflow around the enrichment process:

      - stepId: enrichProfile
        operationId: enrichUser
        parameters:
          - name: email
            in: query
            value: $steps.signUpUser.outputs.email
        outputs:
          company: $response.body.company
          title: $response.body.title
          linkedin: $response.body.linkedin

This step uses the email output from the signup step as a query parameter, pairing it with outputs from the enrichment database. These outputs will then be packaged into the overall request and used in the verification step:

      - stepId: verifyData
        operationId: verifyUserData
        requestBody:
          contentType: application/json
          payload:
            userId: $steps.signUpUser.outputs.userId
            company: $steps.enrichProfile.outputs.company
            title: $steps.enrichProfile.outputs.title

Here, the data provided so far is checked against a verification source, ensuring that the data attached to the user is indeed correctly attributed.
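Because Arazzo documents are declarative, something still has to execute them. Dedicated tooling for this is still maturing, so purely as an illustration, here is a heavily simplified, hypothetical runner that walks the steps in order and resolves the $inputs and $steps expressions. The OPERATIONS table, the onboarding.arazzo.yaml filename, and the stripped-down expression resolver are all assumptions of this sketch; a real runner would resolve each operationId against the referenced OpenAPI description and support the full runtime-expression grammar.

import re
import requests
import yaml  # pip install pyyaml

# Hypothetical table mapping each operationId to a concrete HTTP call;
# a real runner would derive this from the OpenAPI source description.
OPERATIONS = {
    "createUser":     ("POST", "https://api.example.com/signup"),
    "enrichUser":     ("GET",  "https://api.example.com/enrichment"),
    "verifyUserData": ("POST", "https://api.example.com/verify"),
}

def resolve(value, inputs, outputs):
    # Resolve $inputs.<name> and $steps.<stepId>.outputs.<name> expressions.
    if isinstance(value, str):
        if m := re.fullmatch(r"\$inputs\.(\w+)", value):
            return inputs[m.group(1)]
        if m := re.fullmatch(r"\$steps\.(\w+)\.outputs\.(\w+)", value):
            return outputs[m.group(1)][m.group(2)]
    return value

def run_workflow(doc, inputs):
    outputs = {}  # stepId -> {output name: value}
    for step in doc["workflows"][0]["steps"]:
        method, url = OPERATIONS[step["operationId"]]
        query = {p["name"]: resolve(p["value"], inputs, outputs)
                 for p in step.get("parameters", [])}
        payload = {key: resolve(val, inputs, outputs) for key, val in
                   step.get("requestBody", {}).get("payload", {}).items()}
        response = requests.request(method, url, params=query or None,
                                    json=payload or None)
        response.raise_for_status()
        body = response.json()
        # Capture each declared output ($response.body.<field>) for later steps.
        outputs[step["stepId"]] = {name: body.get(expr.rsplit(".", 1)[-1])
                                   for name, expr in step.get("outputs", {}).items()}
    return outputs

with open("onboarding.arazzo.yaml") as f:
    workflow_doc = yaml.safe_load(f)

print(run_workflow(workflow_doc, {
    "name": "Jane Doe", "email": "jane@example.com", "password": "secure123"
}))

The point is less the runner itself than the division of labor: the workflow logic lives in the document, and the executor stays generic.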

3. AI-Orchestrated API Connections

A more recent evolution of this process is the use of AI-driven agentic solutions for API sequences. AI agents represent the inverse of a specification-driven solution: rather than predefining a flow structure, you create a reasoning engine that determines the next API call based on the current context and the intent of the request. This is especially useful where hypermedia, the RESTful practice of connecting resources through navigable links, adds a layer of complexity that simple IF/THEN endpoint routing can’t easily resolve.

This can take many forms, but here’s a simple flow example:

import requests
import json
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # your OpenAI API key

Here we create the OpenAI client that will facilitate our requests throughout this process. To get started, we need to have somewhere for the user signup to occur. Let’s assume this happens in an external API call, and we get a JSON object like this as a response into the system:

signup_response = requests.post("https://api.example.com/signup", json={
    "name": "Jane Doe",
    "email": "jane@example.com",
    "password": "secure123"
}).json()

With this, we can pass the JSON object to GPT using the following code:

messages = [
    {
        "role": "system",
        "content": (
            "You are an API orchestration engine. Based on API responses you receive,"
            "decide what API should be called next, using structured JSON. "
            "Do not speculate - respond only using the data provided. Format:\n\n"
            "{\n"
            "  \"action\": \"next_api_to_call\",\n"
            "  \"payload\": { ...fields... }\n"
            "}\n\n"
            "If no further steps are needed, respond with { \"action\": \"complete\" }"
        )
    },
    {
        "role": "user",
        "content": f "Here is the response from the signup API:\n{json.dumps(signup_response)}"
    }
]

This sets the role and content of the request, letting the model know that it is an API orchestration engine determining actions based on the API flow in question. Next, GPT will make a decision, which is returned to us using this code:

decision_1 = client.chat.completions.create(
    model="gpt-4",
    messages=messages,
    temperature=0
).choices[0].message.content

print("GPT Decision 1:\n", decision_1)

With the response in hand, we can parse this output and take action, or alert the internal system that there’s an error that needs manual action:

try:
    decision_data = json.loads(decision_1)
except json.JSONDecodeError:
    print("Error - GPT response was not valid JSON. Cannot proceed.")
    exit(1)

action = decision_data.get("action")
payload = decision_data.get("payload", {})

if action == "call_enrichment_api":
    email = payload.get("email")
    enrich_response = requests.get(
        f"https://api.example.com/enrichment?email={email}",
        headers={"Authorization": "Bearer bearerkey"}
    )
    print("Enrichment Response:\n", enrich_response.json())

elif action == "call_verification_api":
    verify_response = requests.post(
        "https://api.example.com/verify",
        json=payload
    )
    print("Verification Response:\n", verify_response.json())

elif action == "complete":
    print("Workflow complete.")

else:
    print(f"Error - Unknown action '{action}'. Manual intervention required.")

This is a relatively simple implementation, but offloading this logic to GPT does quite a bit to enable complex interactions with minimal overhead. Where this becomes more complicated is in multi-stage interactions: for instance, if we need the model to make a decision based on both the initial decision and on data returned from the verification system. Persisting that state across calls involves stored arrays, database objects, and so forth, which is beyond the scope of this article, but the core decision loop is simple enough to sketch.
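As a rough illustration only, here is what that loop might look like: each API response is appended to the conversation so the next decision sees the full history, and a hard step cap keeps a confused model from looping forever. The call_api dispatcher below simply mirrors the if/elif block above; both it and the five-step cap are assumptions of this sketch.

def call_api(action, payload):
    # Dispatch to the appropriate endpoint; mirrors the if/elif block above.
    if action == "call_enrichment_api":
        return requests.get(
            f"https://api.example.com/enrichment?email={payload.get('email')}",
            headers={"Authorization": "Bearer bearerkey"}
        ).json()
    if action == "call_verification_api":
        return requests.post("https://api.example.com/verify", json=payload).json()
    raise ValueError(f"Unknown action: {action}")

for _ in range(5):  # hard cap so a confused model cannot loop forever
    decision = json.loads(client.chat.completions.create(
        model="gpt-4",
        messages=messages,
        temperature=0
    ).choices[0].message.content)
    if decision.get("action") == "complete":
        print("Workflow complete.")
        break
    result = call_api(decision["action"], decision.get("payload", {}))
    # Feed the latest response back so the next decision has full context.
    messages.append({
        "role": "user",
        "content": f"Here is the response from {decision['action']}:\n{json.dumps(result)}"
    })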

API Provider-Led Documentation

Sometimes, the API provider explicitly states how to link together a sequence of API calls. For example, an API provider might outline a multi-step authentication flow within their documentation.
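Such documented flows are usually straightforward to follow because the provider has already decided the sequencing for you. An OAuth 2.0 client-credentials flow, for example, chains a token request into every subsequent call. Here is a generic sketch of that pattern; the endpoint URLs and credentials are placeholders, not any particular provider’s API:

import requests

# Step one: exchange client credentials for an access token.
token_response = requests.post("https://auth.example.com/oauth/token", data={
    "grant_type": "client_credentials",
    "client_id": "my-client-id",
    "client_secret": "my-client-secret"
})
token_response.raise_for_status()
access_token = token_response.json()["access_token"]

# Step two: the token from step one authorizes every call that follows.
accounts = requests.get(
    "https://api.example.com/accounts",
    headers={"Authorization": f"Bearer {access_token}"}
).json()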

Yet this is rare in practice. Even providers like Stripe, which do advise specific methodologies for connecting multiple API calls together, keep that guidance fairly general, still requiring implementation-specific decisions for more complex use cases.

Where the Industry is Headed

This discussion is still a very active one. The API space has been grappling with how to link up APIs for some time, and novel developments in the AI space have turned recent conversations on their head. Interconnected agents, middleware layers connecting models to models for multimodal reasoning and conflict resolution, and even multi-agent chains for complex parsing have become much more common than they were a year or two ago, largely driven by advancements like the Model Context Protocol (MCP).

If the question is ‘how do I string multiple APIs together,’ the answer is that there’s no “right” way, absent a process determined or advised by the API provider themselves. The reality is that your API sequencing options are going to be primarily defined by the systems underpinning them. For instance, an LLM-native application will naturally gravitate towards AI-driven linking, as the tech is already there and ready to be used. For traditional applications without LLM integration that just need to pipe data from one API to another, an AI solution is going to be overkill.

What clouds this discussion is the fact that all of this advice may be obsolete a year from now. We are in a time of rapid evolution, and as LLM systems evolve and new solutions such as MCP promise to govern interconnections, the world of API connectivity is seeing the greatest shakeup it has seen since RESTful design overtook SOAP as the paradigm of choice in the 2000s and 2010s.

The advice, then, is simple: choose the best solution for now, but do not consider it set in stone or final. Chances are you’ll be changing your methodology in short order.