API.expert Exposes Common API Performance

API.expert ranks 200 top APIs to set performance standards for the industry.

Let’s say that you’re considering using a new API or trying to find a solution that might work for a particular problem you’re having. There are plenty of directories out there, like ProgrammableWeb and RapidAPI, that provide articles and changelogs.

While API catalogs certainly have their place, they won’t necessarily provide information about uptime, performance issues, security and compliance, latency, accountability, and so on. Trying to glean this information from Google, via reviews by others who have used the API, is an uphill battle. As a result, it’s tough to know if your service will meet your standards.

Released to the public in early 2020, API.expert, by APImetrics, is designed to provide insight into the performance of a range of different APIs to empower developers looking to monitor services they either use or might want to use. Beyond that, it seeks to encourage minimum industry standards in a space where performance varies hugely.

But can this third-party service really provide all of the information you need? Let’s find out.

Benefits of API.expert

API.expert collates a ton of information – pass rate, median latency, downtime, outliers, and more – into a single CASC (Cloud API Service Consistency) score. CASC scores run from 1 to 10; a score of 8 or above indicates that an API is performing “perfectly or pretty near to it.”

API.expert generates a CASC score for each API it tracks.
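APImetrics doesn’t publish the exact CASC formula, so the sketch below is purely illustrative: it shows how metrics like the ones listed above might be folded into a single 1–10 score. The weights, the normalization, and the 2-second latency ceiling are all our own assumptions, not APImetrics’ method.

```python
# Hypothetical illustration only: the real CASC formula is not public.
# This sketch folds the kinds of inputs API.expert lists (pass rate,
# latency, downtime, outliers) into a single 1-10 number.

def consistency_score(pass_rate: float, median_latency_ms: float,
                      downtime_hours: float, outlier_rate: float) -> float:
    """Blend raw quality metrics into an illustrative 1-10 score."""
    # Normalize each metric to 0..1, where 1 is best. The 2000 ms ceiling
    # and the 168-hour week are arbitrary choices for the sketch.
    latency_factor = max(0.0, 1.0 - median_latency_ms / 2000.0)
    downtime_factor = max(0.0, 1.0 - downtime_hours / 168.0)
    outlier_factor = max(0.0, 1.0 - outlier_rate)
    # Weighted blend; the weights are assumptions, not APImetrics' values.
    blended = (0.4 * pass_rate + 0.3 * latency_factor +
               0.2 * downtime_factor + 0.1 * outlier_factor)
    return round(1.0 + 9.0 * blended, 2)  # map 0..1 onto the 1-10 scale

# A healthy API: high pass rate, low latency, no downtime, few outliers.
print(consistency_score(0.999, 320, 0.0, 0.01))  # 9.56, comfortably above 8
```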

API.expert looks at APIs from various industries, such as banking, FinTech, government, and social networks. Pick a category, and you can see how some of the major players stack up against each other, either for the current week or over the past month. The idea of benchmarking APIs is certainly ambitious, but provided there’s transparency around how the figures are calculated, being able to compare APIs directly is genuinely helpful.

Stripe leads the FinTech API category. (Week of Oct 12, 2020, to Oct 19, 2020).

The breakdown of information comparing different clouds — AWS, Azure, Google, and IBM Cloud — alongside performance in different regions is potentially a game-changer. An API might have outstanding performance with a certain cloud in North America, but if it’s subpar in Europe, then developers may want to look elsewhere.

API.expert compares API latency among different cloud providers and regions. (Week of Oct 12, 2020, to Oct 19, 2020).

For example, at the time of writing, Google’s API performs best on Google Cloud in North America, with a median latency of 319 ms, and worst on IBM Cloud in Oceania, with a median latency of 567 ms. Anecdotally, we can’t help but observe that AWS and North America frequently appear in the top spot for cloud/region latency rankings.

Cloud and region performance for Google API. (Week of Oct 12, 2020, to Oct 19, 2020).
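For a sense of how rankings like these are derived, here’s a minimal sketch that groups raw call timings by cloud and region and sorts by median latency. The measurements are synthetic stand-ins, not real API.expert data.

```python
# Illustrative only: derive per-cloud, per-region median latencies from
# raw call timings, the way API.expert's tables present them.
from statistics import median
from collections import defaultdict

# (cloud, region, latency in ms) for individual test calls -- synthetic data
measurements = [
    ("Google Cloud", "North America", 310), ("Google Cloud", "North America", 328),
    ("IBM Cloud", "Oceania", 560), ("IBM Cloud", "Oceania", 574),
    ("AWS", "North America", 295), ("AWS", "North America", 301),
]

by_location: dict[tuple[str, str], list[int]] = defaultdict(list)
for cloud, region, latency_ms in measurements:
    by_location[(cloud, region)].append(latency_ms)

# Rank locations from fastest to slowest by median latency.
for (cloud, region), samples in sorted(by_location.items(),
                                       key=lambda kv: median(kv[1])):
    print(f"{cloud} / {region}: {median(samples):.0f} ms median")
```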

APImetrics has publicly stated that a basic version of API.expert — which currently ranks more than 200 APIs from the US, UK, and beyond — will remain free. That should be reassuring for folks who want to make this tool part of their regular rotation. The prospect of “other, related services” being built on top of it offers some intriguing possibilities of where they’ll go next.

There’s currently no way to compare all APIs in a single list, but regular blog posts by APImetrics do highlight some of the winners and losers of the week. For example, at the time of writing, GitHub has a market-leading CASC of 9.68 and 100% uptime. On the other hand, Barclays (Sandbox Auth) has a CASC of just 1.00, due in part to two days of downtime over the past week.

Limitations of API.expert

One issue with API.expert is that it doesn’t necessarily provide all the context users might want when evaluating possible problems with APIs they’re considering (or already consuming). For example, some downtime might look bad on the surface, but it’s less of a concern if the outage was planned and heavily publicized well in advance with outage notices.
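To make that concrete, here’s a small sketch, using hypothetical outage data, of how the same week of downtime yields very different availability figures depending on whether publicized maintenance windows are excluded.

```python
# Why raw downtime needs context: the same outage counts very differently
# if it was announced in advance. All windows below are hypothetical.
from datetime import datetime, timedelta

week = timedelta(days=7)

# Outages observed during the week: (start, end).
outages = [
    (datetime(2020, 10, 14, 2, 0), datetime(2020, 10, 14, 4, 0)),   # 2 h
    (datetime(2020, 10, 17, 9, 0), datetime(2020, 10, 17, 9, 30)),  # 30 min
]

# Maintenance windows the provider publicized ahead of time.
planned = [(datetime(2020, 10, 14, 2, 0), datetime(2020, 10, 14, 4, 0))]

unplanned = [o for o in outages if o not in planned]
unplanned_downtime = sum((end - start for start, end in unplanned), timedelta())
raw_downtime = sum((end - start for start, end in outages), timedelta())

print(f"raw availability:       {100 * (1 - raw_downtime / week):.2f}%")
print(f"unplanned availability: {100 * (1 - unplanned_downtime / week):.2f}%")
```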

Beyond that, in its quest to make the information on API.expert easily skimmable and digestible, APImetrics may have gone a little too far. For example, the “More Information” column sometimes reveals that warnings or failures occurred when testing an API, but provides no detail about the circumstances or severity of those issues.

Another minor issue with API.expert stems from the way rankings are calculated and displayed by default. At the time of writing, Corporate Infrastructure APIs (the default category) like GitHub, Salesforce, and Microsoft Office all sit in the top 5. We’d expect these services to rank highly. Still, there’s an argument to be made that featuring services that already have the resources to ensure perfect uptime and low latency limits the discovery of newer, innovative APIs that may be a little rougher around the edges.

Despite the promises by APImetrics of additional connected services to come, this tool stands alone for the time being. It’s not extensible, and there’s no way to plug it into your system or use it to generate downtime alerts. The product sort of feels like an MVP, albeit a very impressive one; the number of APIs it tracks is far more limited than that of typical API directories.
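Until such integrations arrive, anyone who wants downtime alerts has to roll their own. Here’s a minimal sketch of that kind of poller; the health URL and alert hook are placeholders, since API.expert doesn’t expose an endpoint for this.

```python
# A minimal DIY uptime poller, the sort of thing API.expert can't yet do
# for you. The URL and the alert channel are placeholders, not real
# API.expert endpoints.
import time
import urllib.error
import urllib.request

HEALTH_URL = "https://api.example.com/health"  # hypothetical endpoint
CHECK_INTERVAL_S = 60

def alert(message: str) -> None:
    # Stand-in for a real notification channel (email, Slack, pager...).
    print(f"ALERT: {message}")

while True:
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=10) as response:
            if response.status != 200:
                alert(f"{HEALTH_URL} returned HTTP {response.status}")
    except (urllib.error.URLError, TimeoutError) as exc:
        alert(f"{HEALTH_URL} unreachable: {exc}")
    time.sleep(CHECK_INTERVAL_S)
```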

Implications for the Future with API.expert

For the time being, the CASC benchmark isn’t even close to mandatory. APImetrics writes on their website that “your company can include in your Service Level Agreements that a given API must achieve a certain CASC score.” The emphasis on “can” is ours, highlighting the voluntary nature of embracing CASC.
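If a team did write a CASC threshold into an SLA, the enforcement side could be as simple as the sketch below, with the weekly scores transcribed by hand, since the free tool offers no programmatic access.

```python
# Sketch of enforcing an SLA clause like "the API must achieve a certain
# CASC score". Scores here are read off API.expert manually each week.
CASC_SLA_MINIMUM = 8.0  # "performing perfectly or pretty near to it"

weekly_scores = {"GitHub": 9.68, "Barclays (Sandbox Auth)": 1.00}

for api_name, score in weekly_scores.items():
    status = "meets SLA" if score >= CASC_SLA_MINIMUM else "BREACH"
    print(f"{api_name}: CASC {score:.2f} -> {status}")
```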

Requiring APIs to meet minimum standards before being approved into directories, à la Apple’s App Store, is a good idea in theory. However, in practice, universal API performance benchmarks might prove stifling for developers building side projects as a labor of love, who can’t afford to dedicate much time to them.

Perhaps the ideal situation lies somewhere in the middle, with directories displaying more detailed information about APIs and user reviews to help people formulate opinions. TeejLab’s API Discovery, for example, aims to help users combine the art of discovery (no surprises there, based on the app’s name) with an assessment of how suitable an API would be for their needs from a security and regulation perspective.

APIs are becoming more and more important to the general public, whether they realize it or not. Indeed, COVID-19 and Open Banking are two categories featured heavily on API.expert. Anything that leads to greater accountability on the part of API developers can only be a good thing in our book.