Comparing Tools For GraphQL Schema Stitching: Hasura vs. Apollo

In the world of microservices, a single API call can touch tens of services or more behind the scenes. This significantly expanded functionality has unfortunately brought greater complexity with it.

One way to deal with such complexity is schema stitching. Stitching is a process that ties many microservices together under a single, unified schema. This strategy can help provide a powerful, singular system that routes each query to the service responsible for it.

So how does stitching work, and what technology can we deploy to help? Let’s take a deeper look.

What is Schema Stitching?

Schema stitching is exactly what it sounds like – stitching multiple schemas together to create a single, seamless experience for the end user. In practice, doing this is a little complicated; it is more akin to taking a mass of squares, circles, and triangles and combining them into a single, understandable shape – each subcomponent must keep the same structural integrity in combination that it had on its own, while communicating with the outside world as a single entity.

In API terms, this essentially means taking a great many APIs, defining their internal relationships, schema endpoints, and general relations, and then forming a proxy layer to translate requests between them. End-user requests are typically made through a Master API, which serves as an omnibus of the underlying schemas.

Why Stitch at All?

Stitching is a solution to a core problem of the microservice model. While microservices boast many positive aspects, their complexity can be a significant issue.

Each additional module increases the need to understand the traffic flow, the implicit data handling, and the way data is queried. Many solutions have arisen to streamline this architecture, and stitching is simply an evolution of these, allowing for seamless querying.

Stitching offers the best of both worlds – enabling you to reference many microservices while acting as a singular API. This means our offering is less confusing, more effective, and ultimately provides a better user experience.

The 4 Steps of Stitching

The end goal of this process is a very specific workflow – the client makes a request to a Master API, which is then sent to a proxy layer for translation. At this layer, the request is split across several different APIs, with each responding with the portion of data it is responsible for. Next, these responses are combined, again in the proxy layer. This combined response is then sent to the client through the Master API output.

1. Introspect the API

In order to do this, there are four basic steps. First, we must Introspect the API. This process entails looking at the numerous APIs currently on offer, the way our data is handled, and the general flow of data within our API ecosystem. In effect, we’re learning what tools and data we have to work with – we can’t even begin to consider what the end result will look like until we know what our current state is.
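
As a hedged sketch of what this introspection step might look like in code (assuming the graphql-tools library and a hypothetical internal booking service running locally), a remote API can be introspected and wrapped as a locally executable schema:

const fetch = require('node-fetch');
const { HttpLink } = require('apollo-link-http');
const { introspectSchema, makeRemoteExecutableSchema } = require('graphql-tools');

// Hypothetical endpoint for one of our internal microservices.
const bookingLink = new HttpLink({ uri: 'http://localhost:4001/graphql', fetch });

async function getBookingSchema() {
  // Ask the remote API what types, fields, and operations it exposes...
  const schema = await introspectSchema(bookingLink);
  // ...then wrap it as a locally executable schema that proxies calls back to the service.
  return makeRemoteExecutableSchema({ schema, link: bookingLink });
}

Repeating this for each microservice gives us a full picture of the data we have to work with.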

2. Rename the API

Next, we need to Rename the API. At this stage, we’re looking at mitigating collisions between the various internal APIs, specifically to prevent two differently named development products from clashing. In most cases, the last API in wins a name collision, but that’s not what we want – we want each API to keep working internally, and we want to avoid collisions in external calls, internal function calls, and naming conventions, and generally as a point of data integrity.
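
As a hedged sketch of how such renaming might be handled (assuming graphql-tools and a hypothetical bookingSchema wrapped in the previous step), type and root field names can be namespaced before merging:

const { transformSchema, RenameTypes, RenameRootFields } = require('graphql-tools');

// Prefix every type and root field from the booking service so that, for example,
// its "User" type can never collide with another service's "User" type.
const namespacedBookingSchema = transformSchema(bookingSchema, [
  new RenameTypes((name) => `Booking_${name}`),
  new RenameRootFields((operation, name) => `booking_${name}`),
]);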

3. Connect the Fields

Now we move on to the third step of this process, which is to Connect the Fields. Now that we understand our APIs, and we’ve ensured that collisions are handled internally, we need to define which fields specifically connect to which other fields and data types. Since we’re essentially combining a great many calls into a singular client call, we need to know where each data path connects, how it connects, and what the expected output is. By doing this, we can begin to route these calls depending on their function and type, thereby facilitating the entirety of the internal conversation that occurs when a client request is made.
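
In Apollo’s documentation example (which we return to later in this article), these connections are declared as link type definitions: SDL that extends a type owned by one schema with a field resolved by another. A minimal sketch:

// Link type definitions: each "extend type" declares a field on one schema
// that will be resolved by delegating to a different schema.
const linkTypeDefs = `
  extend type User {
    chirps: [Chirp]
  }

  extend type Chirp {
    author: User
  }
`;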

4. Resolve the Types

Finally, we want to Resolve the Types. Here, we take everything that we’ve learned, all the mitigation measures we’ve implemented, and the larger field connections we’ve made, and specify which schema handles which data, and how this data is actually resolved internally between different endpoints. While this can be done in a wide variety of ways, the ultimate result is a system that clearly resolves data requests to the appropriate systems, and combines the outputs of each individual internal API into an output appropriate for external consumption. The Apollo example later in this article shows this step in code.

Caveat Emptor

This process is highly effective, but it comes with some major caveats that should be considered. Do note that not all of these caveats apply to every situation – as with any API implementation, the specific architecture, schema, and implementation of the codebase will dictate success or failure, so no blanket warning or statement is ever going to cover every single case. That being said, these caveats are generally applicable to most schema stitching solutions.

Pros

There are some major pros to this approach – their sum total can be described as “efficiency.” These include:

  • One API to Rule Them All – Since there’s a single API orchestrating all of the internal conversations occurring, you don’t need to orchestrate the round trips that would otherwise be needed to mirror the same functionality in a non-stitched API.
  • One API to Discover It All – In a typical environment, API introspection would require an introspection query to each involved process and API, which would take a lot of time and would have to be manually combined. Using the stitched method, a single introspection query at the top client level will expose all the data available through microservices via your stitched interface.
  • Surface Reduction – Your API presents a smaller apparent attack surface, as only a single endpoint is exposed to the client. The inverse, in which multiple microservices expose multiple endpoints, at least gives the appearance of a larger attack surface.

Cons

There are some major cons to this approach as well – these can be summarized as “points of failure”:

  • One API to Crash Them All – Because you have a single endpoint, if that endpoint goes down, it’s “game over”. A single point of failure to the external client can be a huge deal in certain applications and should be mitigated regardless of whether or not stitching is adopted.
  • Not Always a Great Idea – Stitching is a great implementation for many use cases, but in others, it’s far from ideal. Serving cached, single-source-of-truth content is not a great environment for stitching, and the fact that stitching itself introduces complexity into what might otherwise be a relatively simple system is another strong argument against the approach in such cases.
  • Just Because You Can – The reality is that, just because you can do something, doesn’t mean you should. There have been many instances of a strong technology being adopted simply because it is the standard du jour, and stitching very much invites this line of thinking – not every collection of microservices needs to have its schemas stitched together, and in some cases, this is simply complexity for complexity’s sake.
  • Proxy Requirements – This approach requires a proxy, and not all environments, standards, or requirements might support that. While GraphQL and other architectures suggest proxies as a best practice in many cases, there are certain situations in which a proxy is absolutely a poor choice and can cause more trouble than it fixes.

Use Cases

As an example of why stitching can be a good idea, let’s look at a possible use case. In essence, stitching API schemas allows you to present a unified experience to your user without making a multitude of clearly external or complicated round-trip calls. This sort of stitching is perfect for events – localized events especially.

Let’s assume you are running some sort of event for API professionals in Sweden. As the primary organizer, you’ve created an application for smartphones that leverages a stitched API to provide a wide variety of information. Because of stitching, you can have a unified experience that stays within the app at all times, seamlessly delivering data while only exposing a central endpoint. Your stitched API can:

  • Call a booking API to look for local vacancies, streamline check-in processes for partnered hotels, and report on partnered hotel utilization to justify business investment in future events;
  • Call a weather API to provide information on environmental conditions at all times – perhaps even collating, comparing, and contrasting differing sources of information to provide a more accurate estimated weather condition;
  • Organize a variety of microservice APIs utilized for pass management, exhibition registration, and workshop provisioning for guests attending specific development tracks in your seminars;
  • Provide several fault-tolerance and fallback endpoints to catch poor or broken API calls, or to back up critical endpoints that need redundancy (such as calling multiple endpoints for scheduling systems to ensure all schedules are updated and accurate).

The bottom line is that with a stitched API, you can deliver a ton of functionality with very little obvious crosstalk, leveraging a multitude of APIs through a single endpoint. You can do more with less.
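
As a hedged illustration (every field name here is hypothetical), a single client query against the stitched event API might fan out to several microservices at once:

query EventDashboard($city: String!) {
  vacancies(city: $city) {        # resolved by the booking microservice
    hotelName
    roomsAvailable
  }
  currentWeather(city: $city) {   # resolved by the weather microservice
    temperature
    conditions
  }
  mySchedule {                    # resolved by the pass-management microservice
    workshop
    startsAt
  }
}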

Stitching Options

Now that we’ve looked at stitching in a general sense, let’s take a brief look at two specific providers in the space – Hasura and Apollo.

Hasura and Apollo – A Brief Overview

Hasura and Apollo deal with stitching in generally the same way, serving as a great example for how this process might actually look.

Hasura

Under Hasura, all external APIs are considered “remote schemas” – as such, the first step in utilizing Hasura has you build a custom GraphQL instance that includes specific resolvers for each remote schema.

After these remote resources are named and appended to a server URL endpoint, Hasura can then resolve each request to each resource through the singular frontend, taking a request for a specific function and pushing it instead to the remote schema, but treating the data output returned as if it’s still part of the singular API solution.

Hasura requires that all top-level field names be unique (using a case-sensitive match) to prevent collisions. It also allows types with the exact same name and structure to be merged, rather than treated as separate entities, which may allow for greater clarity through design.

At this time, however, Hasura does have some limitations. Nodes from different GraphQL servers cannot be used together in the same query – all top-level fields must come from the same server in order to function correctly. Additionally, Hasura does not support subscriptions for remote schemas, which removes some possibilities in the design phase.

Remote schemas are handled through a GUI in the Hasura console. The use of a GUI is significant here and makes for a strong value proposition for developers who favor such implementations.
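
For teams that prefer configuration over clicking, Hasura also exposes a metadata API for the same purpose. A hedged sketch, assuming Hasura’s add_remote_schema metadata call and a hypothetical local endpoint (exact endpoint paths and fields vary by Hasura version), might look like this payload POSTed to the metadata endpoint:

{
  "type": "add_remote_schema",
  "args": {
    "name": "booking-service",
    "definition": {
      "url": "http://localhost:4001/graphql",
      "forward_client_headers": true,
      "timeout_seconds": 60
    }
  }
}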

Apollo

Apollo functions much the same, but instead exposes a mergeSchemas method to combine the schemas together. Because of this, Apollo can support either remote or local schemas, greatly expanding the possible schema permutations. Remote schemas are merged using a local proxy that calls the remote endpoint, which then returns the schema for local merging.
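
As a hedged sketch of that mixing of local and remote schemas (assuming graphql-tools, hypothetical local typeDefs and resolvers, and a remote schema wrapped as in the introspection step earlier), the merge itself is a single call:

const { makeExecutableSchema, mergeSchemas } = require('graphql-tools');

// A purely local schema (localTypeDefs and localResolvers are hypothetical placeholders)...
const localSchema = makeExecutableSchema({ typeDefs: localTypeDefs, resolvers: localResolvers });

// ...merged alongside a remote executable schema obtained via introspection.
const schema = mergeSchemas({
  schemas: [localSchema, remoteBookingSchema],
});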

Apollo also handles typing in an interesting way. Apollo allows custom fields that extend existing types to translate between content types – in their documentation example, this allows a user to bridge the gap between content and its author in a single query, using a custom field to relay this information without making separate contextual requests.

Additionally, Apollo supports transforms that change these schemas before merging, allowing for greater control over what is merged and for what purpose. Transforms allow new field delegation, translation between new and old field types and names, and further complex processes to fully control and customize the output of the merged schema.
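
A hedged sketch of such a transform (assuming graphql-tools and a hypothetical internal-only root field named internalAuditLog) might filter and rename a schema before it is merged:

const { transformSchema, FilterRootFields, RenameRootFields } = require('graphql-tools');

// Drop a hypothetical internal-only root field and rename what remains
// before handing the schema to mergeSchemas.
const publicChirpSchema = transformSchema(chirpSchema, [
  new FilterRootFields((operation, fieldName) => fieldName !== 'internalAuditLog'),
  new RenameRootFields((operation, fieldName) => `chirp_${fieldName}`),
]);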

Per the Apollo documentation, a completed stitching example looks as follows.

const { mergeSchemas } = require('graphql-tools');

// chirpSchema, authorSchema, and linkTypeDefs are defined earlier in the
// Apollo documentation example.
const mergedSchema = mergeSchemas({
  schemas: [
    chirpSchema,
    authorSchema,
    linkTypeDefs,
  ],
  resolvers: {
    User: {
      chirps: {
        fragment: `... on User { id }`,
        resolve(user, args, context, info) {
          return info.mergeInfo.delegateToSchema({
            schema: chirpSchema,
            operation: 'query',
            fieldName: 'chirpsByAuthorId',
            args: {
              authorId: user.id,
            },
            context,
            info,
          });
        },
      },
    },
    Chirp: {
      author: {
        fragment: `... on Chirp { authorId }`,
        resolve(chirp, args, context, info) {
          return info.mergeInfo.delegateToSchema({
            schema: authorSchema,
            operation: 'query',
            fieldName: 'userById',
            args: {
              id: chirp.authorId,
            },
            context,
            info,
          });
        },
      },
    },
  },
});

Conclusion

Schema stitching is a great way of removing needless complexity while still reaping the benefits of a complex underlying microservice architectural design. While it has its own caveats, as with any implementation, this process delivers on the promise of effective microservice design – greater flexibility and segmented design, but with a unified user experience.

What do you think about schema stitching? What are your thoughts on Hasura and Apollo – and do you have any preferred alternatives? Let us know in the comments below.