The Power of Relay; The Entry Point to GraphQL

In many ways, GraphQL is a futuristic approach to dealing with all the headaches surrounding high-volume, highly relational data transfer. As more is written about the technology and as its implementation is discussed, related components are drawing increasing interest as well.

One of these components, Relay, often falls by the wayside in the conversation, and that’s a shame, given that Relay is incredibly powerful, useful, and interesting in the right use cases.

Accordingly, this piece will focus on Relay as an extension of GraphQL per Facebook’s stated development guidelines and documentation. We’ll discuss how Relay does what it does, what specifically makes it special, and why pairing Relay with some — but not all — GraphQL implementations is a good idea.

What’s the Difference Between Relay and GraphQL?

While GraphQL and Relay are often mentioned in the same sentence (and are treated as a package by Facebook and many other advocates), they are actually two very different parts of a greater mechanism.

GraphQL is, fundamentally, a way to model and expose data from a native application. Put another way, GraphQL is the methodology by which data is described and prepared for fetching and interaction.

Relay, on the other hand, is the client-side data-fetching solution that ties into this stated model to render data efficiently for the end user. It ties into the GraphQL schema, it uses the GraphQL schema, and with further server-side additions, it augments the GraphQL schema, but to say they’re one and the same is like saying “gasoline” and “tires” are one and the same because they’re both used to power a car.

To further drive home the point, GraphQL can be used entirely independently of Relay, while Relay depends on GraphQL (or, at least, GraphQL-like schemas) to function. GraphQL can pair with any fetching technology designed to handle the query in question (which is easily done with most modern solutions).

Per Relay’s GraphQL server specification, Relay makes three core assumptions about what a GraphQL server provides (see the schema sketch after this list):

  1. A mechanism for refetching an object.
  2. A description of how to page through connections.
  3. Structure around mutations to make them predictable.
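
To make these assumptions concrete, here is a minimal sketch of what a Relay-compliant schema might look like; the User type, its fields, and the renameUser mutation are hypothetical stand-ins, not part of any real specification:

interface Node {
  # 1. Every object exposes a globally unique ID it can be refetched by.
  id: ID!
}

type User implements Node {
  id: ID!
  name: String
}

type PageInfo {
  hasNextPage: Boolean!
  hasPreviousPage: Boolean!
  startCursor: String
  endCursor: String
}

type UserEdge {
  cursor: String!
  node: User
}

type UserConnection {
  # 2. Lists are exposed as connections of edges with paging metadata.
  edges: [UserEdge]
  pageInfo: PageInfo!
}

input RenameUserInput {
  clientMutationId: String
  userId: ID!
  newName: String!
}

type RenameUserPayload {
  clientMutationId: String
  user: User
}

type Query {
  node(id: ID!): Node
  users(first: Int, after: String): UserConnection
}

type Mutation {
  # 3. Mutations follow a predictable single-input, typed-payload shape.
  renameUser(input: RenameUserInput!): RenameUserPayload
}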

What is Relay?

So what exactly is Relay? At its most basic level, Relay is a JavaScript framework crafted for building React.js applications. It was designed as a companion to Facebook’s GraphQL, expressly crafted to handle high data throughput and output the requested data in a dynamically stated way.

A big selling point of Relay is how it handles this data fetching. Relay handles data through declarative statements in GraphQL, composing data queries into efficient batches while keeping to the stated data structure. Because of this, Relay is very fast, very efficient, and, more importantly, extensible to application demands in a dynamic manner.

That’s not the only thing that makes Relay users sing its praises, of course. Relay supports colocation, allowing for aggregate queries and limited fetching. Mutations are widely supported as well, and Relay provides optimistic updates to create a more seamless user experience by presenting the expected result to the user even while the server is still managing the request.
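
As a sketch of how colocation works, each component declares its own data requirements as a GraphQL fragment, and Relay aggregates the colocated fragments into a single query. The fragment and field names here are hypothetical, loosely modeled on the User type from the schema sketch above:

# Each fragment lives alongside the component that renders it.
fragment Avatar_user on User {
  avatarUrl
}

fragment ProfileHeader_user on User {
  name
  title
  ...Avatar_user
}

# At runtime, the colocated fragments compose into one aggregate query.
query ProfileQuery {
  user(id: 1) {
    ...ProfileHeader_user
  }
}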

Essentially, Relay does what it does well, in very specific applications, and more efficiently than other solutions.

The Good

So with all this in mind, why use Relay at all? If GraphQL does so much and operates independently of Relay, why do we need Relay? Well, GraphQL isn’t perfect. It lacks the ability to poll and reactively update, and it has some built-in inefficiencies that make the system less than optimal.

Relay, on the other hand, fixes many of these issues, extending GraphQL’s usefulness to new heights. With Relay, data requirements are expressly stated and fetched much more efficiently than with standard fetching in GraphQL. This increase in efficiency stems largely from the data caching built into Relay, which allows existing data to be reused instead of forcing a new round trip to the server for each fetch.
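
A sketch of what such a cache-driven refetch might look like, using the node convention from the schema sketch above; the global ID and the lastSeenOnline field are hypothetical placeholders:

query RefetchUser {
  # Only the stale object is refetched; cached data elsewhere is reused.
  node(id: "VXNlcjox") {
    ... on User {
      name
      lastSeenOnline
    }
  }
}

Because every object carries a globally unique ID, the cache can refresh one object at a time instead of re-running the entire original query.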

Part of this boost in efficiency comes from the aggregation and colocation of queries into single, streamlined data requests. While this has a huge benefit in terms of logic, the main benefit is the reduction in network traffic and pure request volume.

The improvements don’t stop there. Relay offers efficient mutations, and provides for cataloguing, caching, and altering these mutations after the fact, including dynamic field/value alteration.
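
As an illustration, here is what a Relay-style call to the hypothetical renameUser mutation sketched earlier might look like; the clientMutationId is what lets the client match each payload back to the request that produced it:

mutation RenameUser {
  renameUser(input: {
    clientMutationId: "1"
    userId: "VXNlcjox"
    newName: "Ada"
  }) {
    clientMutationId
    # The payload returns the changed object so the cache can update in place.
    user {
      name
    }
  }
}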

A huge benefit here is the support for optimistic updates. Optimistic updates are an interesting approach to client mutations wherein the client simulates the mutation while the server commits it to the backend, allowing the user to see and interact with their changes without waiting on the server to finish.

As part of this support for optimistic updating, Relay provides a system for mutation updates, status reporting, and rollback, which allows for more seamless management of client and server states. Relay also supports rich pagination, easing the heavy burden of large data returns and making them easier to consume, further improving the user experience.
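
Pagination follows the connection pattern from the schema sketch above: the client requests a page of edges plus a pageInfo block, then feeds the returned endCursor back in as the after argument for the next page. The field names here mirror the Drift example below but are purely illustrative:

query ConversationPage {
  recentConversations(first: 10, after: "Y3Vyc29yOjEw") {
    edges {
      cursor
      node {
        lastMessage
        updatedAt
      }
    }
    pageInfo {
      # hasNextPage and endCursor drive "load more" behavior on the client.
      hasNextPage
      endCursor
    }
  }
}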

We can see the effectiveness of this approach by looking at a real implementation. Drift is a messaging application designed to provide real-time messaging natively on a provider’s website. From previous experience with multiple endpoints and large data requests, the team at Drift knew that speed would be affected, and dramatically so.

When they started Drift and saw themselves falling into the same hole as before, they made the decision to fix the issue early on and integrated GraphQL into their services. They were faced with the following complex data set:

Customer attributes come from a single request that returns name, title, location, time zone and avatar. But in order to display the account owner, we’ll need to query the organization’s team endpoint to fetch that name and avatar. To render those colorful tags, we need to fetch the tag definitions from another endpoint. The list of all contact avatars requires a search for all customers in the same company against our ElasticSearch backend. The chat count and last contact requires multiple calls to our Conversation API and one last call to fetch that user’s online status or last active timestamp.

They coded the following data query:

{
  user(id: 1) {
    name
    title
    avatarUrl
    timezone
    locale
    lastSeenOnline
    email
    phone
    location
    accountOwner {
      name
      avatarUrl
    }
    tags {
      edges {
        node {
          label
          color
        }
      }
    }
    accountUsers(first: 10) {
      edges {
        node {
          id
          avatarUrl
        }
      }
      pageInfo {
        totalAccountUsers
      }
    }
    recentConversations(first: 10) {
      edges {
        node {
          lastMessage
          updatedAt
          status
        }
      }
      pageInfo {
        totalConversationCount
      }
    }
  }
}

Their reaction?

We were able to expand our query based on the needs of the client and request a ton of information that usually would have taken multiple requests, a lot of boilerplate and unnecessary code written on both the client and the server. The payload now conforms to exactly what the customer wanted, and it gives the server the ability to optimize the resources necessary to compute the answer. Best of all, this was all done in a single request. Unbelievable. Welcome to the future.

Welcome to the future, indeed.

The Bad

There are a lot of good things about Relay, but there are some underlying issues behind each benefit. Take, for instance, mutation handling within Relay itself. When mutations occur, especially under an optimistic update paradigm, you run into some significant issues.

For instance, when a mutation touches multiple fields in the GraphQL schema, you’re essentially updating the client multiple times and the backend multiple times, and hoping that each related part of the graph updates properly. Nine times out of ten, they do, but even a single failure can have a rolling effect on the backend at large.

Further, the idea of optimistic updates is great in theory, but it adds logic responsibility for the client-side developer that may not be useful in every use case. While large edits definitely benefit from such an update scheme, simple updates and mutations do not require simulation. What this means is a ton of logic that must be implemented by the client-side team, with the server-side team bearing very little responsibility for ensuring cross-compatibility.

There is, of course, the concern over loss of data and validation in such a system as well. With each increasing level of complexity, you’re reducing the efficiency that makes Relay such a good sell in the first place.

The main issue raised against Relay, however, is that it’s not technically necessary. While the functionality of Relay is impressive, there is a bevy of solutions, both standalone and already integrated into common languages and architectures, that mirror its functionality closely enough that Relay might simply be reinventing the wheel.

Standard GraphQL can be used without Relay, especially on projects with a smaller scope than Facebook’s, without much loss of functionality. Other solutions, like Cashay, mirror the cache storage solution for domain states in a simpler, more user-friendly format.

A REST Replacement

There’s been some contention over exactly why we need Relay, or even GraphQL, for that matter. The development of GraphQL and Relay comes from the idea that there’s something fundamentally wrong or done poorly in REST, a premise that not everyone agrees with.

Let’s state upfront that almost everything GraphQL does can be done in REST, though perhaps less efficiently. Fetching complex object graphs is easier in GraphQL, but the same functionality can be replicated by constructing an endpoint around the given data set as a subset of the greater whole.
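
As a purely hypothetical illustration, the Drift query shown earlier could be approximated in REST with a purpose-built composite endpoint, though each new view of the data would demand another endpoint like it; the path and parameters here are invented:

GET /api/customer-card/1?include=accountOwner,tags,accountUsers,recentConversations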

These complex requests are made easier by the fact that HTTP can send parallel network requests, though with greater overhead. And where limitations remain in this solution, HTTP/2 is attempting to solve them, to (in some opinions) great effect.

There’s also the issue of just what Relay and GraphQL are designed to handle. The two were developed by Facebook for Facebook — a site that deals with thousands of metadata points and relations that the average developer may never come across. In this case, for many people, it’s a case of killing weeds with a flamethrower — yes, it’s a solution, but it’s an over-engineered one.

A good portion of the criticism toward GraphQL and Relay in principle is that many of the issues people have with REST aren’t issues with RESTful architecture, but with common implementations of it. REST supports content negotiation, is feature-rich with everything HTTP has to offer, and has basic solutions for preventing over- and under-fetching.

Essentially speaking, the problems that Facebook set out to fix are, in the eyes of many, issues of poor REST implementations and improper coding techniques, not of the REST architecture itself.

Regardless of which side you fall on, it basically boils down to this: is adopting GraphQL and Relay worth the effort when many of their features can be somewhat replicated in REST? What are your specific needs? Do you have Facebook-level (and Facebook-style) data to manage? If not, GraphQL, and thereby Relay, may not be the best choice.

Conclusion

Relay is powerful, but like GraphQL, it’s not a magic bullet. For high-volume data, Relay is the gold standard of efficient data handling; for standard data handling, you really have to question whether it is truly better than proper RESTful design.

That being said, Relay is still in its infancy. It came from Flux, and just like Flux, it is constantly being iterated upon and expanded into bigger, better things. As time goes on, much of the concern about Relay will likely be assuaged, as its functionality is expanded further and made more efficient.