What Are Over-Fetching and Under-Fetching?

The apps we use daily are commonly driven by APIs, and those APIs fetch data constantly. Because fetching is so routine, however, its mechanics are often underappreciated.

Due to this lack of attention, issues such as over- and under-fetching are more common than one might think. These terms refer to getting too much or too little data, respectively. In this piece, we’ll look at fetching as a core function of the API space. We’ll consider what exactly over- and under-fetching are and whether there are simple solutions to this common complaint.

What is Fetching?

“Fetching” is a term used quite widely in the API space. It simply refers to a request that results in a response. When we say something is fetched, it means that a user or a system made a request via a URI or other endpoint, and the service in question then performed a series of functions in order to return a response to that user according to the form and structure of the initial request.

Let’s take a very simple interaction. Imagine we are trying to request weather data for Stockholm from a simple weather API. We might make a request such as this:

GET https://weather.nordicapis.com/1.1/weather.json?location=StockholmSE

The web server would then fulfill such a request on the backend in a pre-determined format. The output might look something like this:

 {
     "country":"SE",
     "city":"Stockholm",
     "temp":"21.3",
     "tempFormat":"c"
 }

This request is relatively simple, and as such, one would assume the response would be simple as well, but that’s not always the case.
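
Before looking at what can go wrong, here is a minimal client-side sketch of the interaction above in TypeScript. It is a sketch only: the endpoint and response shape are the illustrative ones used in this article, not a real API.

// Minimal sketch of the fetch above, assuming the illustrative endpoint
// and response shape shown in this article.
interface WeatherResponse {
  country: string;
  city: string;
  temp: string;
  tempFormat: string;
}

async function getStockholmWeather(): Promise<WeatherResponse> {
  const response = await fetch(
    "https://weather.nordicapis.com/1.1/weather.json?location=StockholmSE"
  );
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  // The server decides the shape of this payload; the client simply consumes it.
  return (await response.json()) as WeatherResponse;
}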

What is Over-Fetching?

Let’s imagine we’re running the request again. What if the response were to look more like this?

 {
     "country":"SE",
     "city":"Stockholm",
     "temp":"21.3",
     "tempFormat":"c",
     "userStatus":"loggedOut",
     "userIP":"192.168.0.1",
     "timeAtRequest":"0:0:1UTC",
     "timeAtDestination":"0:0:1UTC",
     "timeSinceLastRequested":"1523ms",
     "cluster":"14029a67d",
     "authenticationState":"none",
     "forwardedInfo":"none",
     "appVerified":"false",
     "coords":"unknown"
 }

In this example, we have a relatively verbose response. Furthermore, much of the data in the response body is effectively empty, since no corresponding parameters were passed in the request. What we have generated in this response is a classic example of over-fetching.

Over-fetching is when a response is more verbose and contains more information than was initially requested. While over-fetching can be relatively minor, as in our case, there are other examples in which the amount of returned data is astronomical.

So what’s the issue? Why is over-fetching a problem? Well, while sending verbose data may not be an issue for many modern systems, it can run afoul of older systems or modern implementations in which memory and transit bandwidth are in short supply. The Internet of Things (IoT) is a great modern example in which low-power devices have less computational power and rely on lean data transmissions. In such a scenario, a lengthy response may not only be inefficient — it might break the network as a whole.

There is also the issue of efficiency. What if you only want a single point of data from that response? Every time you run a query, you are transferring and processing the same excess data for no additional value. Efficiency quickly suffers. Usability is impaired, too, since the data you actually want is buried in the rest of the payload. If a response contains a thousand entities’ worth of data, it will be challenging to parse.
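
To make that cost concrete, here is a hedged TypeScript sketch of a client that only needs the temperature but still receives the entire verbose payload from the illustrative endpoint above.

// Sketch: the client only needs one value, but the whole over-fetched payload
// (user status, cluster, timing data, etc.) still crosses the wire and gets
// parsed before a single field is picked out.
async function getTemperatureOnly(): Promise<string> {
  const response = await fetch(
    "https://weather.nordicapis.com/1.1/weather.json?location=StockholmSE"
  );
  const payload = await response.json(); // entire verbose body transferred and parsed
  return payload.temp;                   // everything else is thrown away
}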

Finally, there is an obvious security concern with over-fetching. Providing more data than necessary is a form of data overexposure, which goes against best practice. When it comes to security, it’s always best to provide only as much data and as many permissions as required, in keeping with the principle of least privilege.

What is Under-Fetching?

Under-fetching is the opposite problem, but it carries many of the same issues. Under-fetching is when a response provides less data than expected, and often less than is needed to be useful. Let’s imagine the output from our initial weather API request, but this time reflected as an under-fetching example.

 {
     "temp":"21.3"
 }

While one could argue that the output is precisely what was asked for — the weather in a specific location — there’s some missing data here that makes the response less usable and less extensible.

First, we’re missing a critical element in terms of geography:

 {
     "country":"SE",
     "city":"Stockholm"
 }

While this may seem a useless supplement to the response (after all, we know we made the request for Stockholm, so why do we need this data?), the reality is that much of a response is not consumed solely by a human reading it. The request may originate from a device or an application that wants to display relevant contextual information, such as the country and the city, enabling the app to serve additional data alongside the temperature.

Additionally, we are missing an extremely critical element from the original response:

 {
     "tempFormat":"c"
 }

While most of the world uses Celsius for temperature measurement, a handful of countries do not, and a user may well originate from one of them. In that case, we want our users to know exactly what unit their temperature is being served in. After all, there is a marked difference between 20 degrees Celsius (68 degrees Fahrenheit) and 20 degrees Fahrenheit (roughly -7 degrees Celsius).

While the security argument relevant to over-fetching is no longer an issue here, usability certainly is. Although this kind of lean response is excellent for low-power devices and low-memory use cases, the data provided is all but useless without context.
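
One way to picture the usability cost is the extra round trip an under-fetched response forces on the client. The sketch below is hypothetical: the /1.1/location.json endpoint is invented purely for illustration.

// Hypothetical sketch: because the first response carries only "temp",
// the client must issue a second request to learn the unit and location.
// The /1.1/location.json endpoint is illustrative, not a real API.
async function getUsableWeather(location: string) {
  const tempResponse = await fetch(
    `https://weather.nordicapis.com/1.1/weather.json?location=${location}`
  );
  const { temp } = await tempResponse.json(); // under-fetched: no unit, no geography

  const contextResponse = await fetch(
    `https://weather.nordicapis.com/1.1/location.json?location=${location}`
  );
  const { country, city, tempFormat } = await contextResponse.json();

  // Two round trips to assemble what one well-designed response could provide.
  return { country, city, temp, tempFormat };
}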

Best Practices For Avoiding Fetching Issues

Now that we have a firm understanding of over- and under-fetching, it’s natural to wonder how this problem occurs at all. The reality is that most fetching issues are really symptoms of poorly designed API systems and request-response couplings. When more or less data is provided than requested, it suggests the API design itself is to blame: the developer has implemented a response system that is out of sync with the needs of the user.

In some cases, this can arise from simple mistakes made during development. Endpoints that were appropriate during testing or initial development might become bloated at scale, and when such endpoints are replicated as reusable examples or defaults, the result can quickly be a series of endpoints that provide far too much data.

Of course, this can also come from a simple misunderstanding of the data’s purpose and function. Yes, you may have a lot of really cool data to show off, but if someone is asking for a very specific thing from a very specific endpoint, the developer should keep this in mind and alter the endpoint as appropriate. Sometimes the most useful response is not the response the developer desires but the one the user desires.

Under-fetching is a less severe concern than over-fetching, but it is still a valid one. It’s easy to oscillate too far to the other side once you have realized the issues inherent in over-fetching. In such situations, developers may find themselves locking responses down so tightly that the data they return becomes useless.

In either of these situations, the answer is to revisit the API design from the user’s perspective rather than the developer’s. What does the user want? What is the use case being fulfilled? Only once you can answer those questions can you begin to develop appropriate responses that meet the use case in question while ensuring that neither over- nor under-fetching occurs.
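
One common design answer, offered here as a hedged sketch rather than a prescription, is to let callers name the fields they need, for instance via a fields query parameter. The Express-style handler, the ?fields= parameter, and the buildWeatherData helper below are all assumptions made for illustration.

// Sketch of a "sparse fieldset" style handler, assuming an Express-like setup.
// The ?fields= parameter and the buildWeatherData helper are illustrative.
import express from "express";

const app = express();

app.get("/1.1/weather.json", (req, res) => {
  const fullResponse = buildWeatherData(String(req.query.location)); // hypothetical helper
  const fields =
    typeof req.query.fields === "string" ? req.query.fields.split(",") : null;

  if (!fields) {
    res.json(fullResponse); // no filter requested: return the designed default
    return;
  }

  // Return only the fields the caller asked for.
  const trimmed = Object.fromEntries(
    Object.entries(fullResponse).filter(([key]) => fields.includes(key))
  );
  res.json(trimmed);
});

// Hypothetical helper standing in for whatever backend lookup produces the data.
function buildWeatherData(location: string) {
  return { country: "SE", city: "Stockholm", temp: "21.3", tempFormat: "c" };
}

app.listen(3000); // e.g. GET /1.1/weather.json?location=StockholmSE&fields=temp,tempFormat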

A Paradigm Shift in Data Fetching: GraphQL

A major paradigm shift in API development has occurred in recent years. Technologies such as GraphQL have made it possible for front-end developers to define exactly what data they want in each request. This style eliminates the need for the provider to hand-craft a specific output for every combination of headers and request formats, effectively solving many of the issues we’ve discussed in this piece.
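
As a hedged illustration of that shift, a GraphQL client could ask for exactly the two fields it needs and nothing more. The /graphql endpoint and the weather field in the schema below are invented for this example.

// Illustrative GraphQL request: the client names exactly the fields it wants.
// The /graphql endpoint and the "weather" field are assumptions for this sketch.
const query = `
  query StockholmWeather {
    weather(location: "StockholmSE") {
      temp
      tempFormat
    }
  }
`;

async function getWeatherViaGraphQL() {
  const response = await fetch("https://weather.nordicapis.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  const { data } = await response.json();
  return data.weather; // only temp and tempFormat come back
}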

What is important to note, however, is that over-fetching is still very possible using GraphQL, given the greater degree of control the user has in crafting the response. It’s tempting to simply make all data available and requestable in whatever form the user chooses, but this only really solves the usability and efficiency issues. In many cases, this could actually exacerbate the security issue by making it possible to request more data in a single request. It can also raise additional issues with processing overload, memory overflow, and other operational concerns.

We’ve discussed this previously on Nordic APIs, and it bears repeating: GraphQL provides more granular control over data, but the underlying systems must be designed to support that control.

Conclusion

Ultimately, over- and under-fetching are easy problems to fix. API design that balances the user’s needs against the developer’s unique data set should provide a clear view of what is correct to serve in a response and what is not.

What do you think of our summary in this piece? Let us know in the comments below!