API gateways are nothing new. They’ve been helping API developers bring multiple APIs together for years, and we’ve previously written about their value in a microservices architecture.

But the ability to manage multiple APIs across different Cloud Service Providers (CSPs), as well as production and development environments, in one place? Now that’s something worth getting excited about.

Yaara Letz, from Tyk, joined us at our 2019 Platform Summit to talk about some of the advantages of using multiple cloud vendors for API management. There are a few different scenarios in which this might apply, so let’s start with some definitions:

  • Multicloud – Service running on more than one CSP, including on-premise
  • Intercloud – Transferring data between different CSPs
  • Hybrid – Mix of private and public clouds (a subgroup of intercloud or multicloud)

Multicloud might be suitable in cases where different cloud vendors offer integrations with, and support for, different platforms that are deemed essential. On the other hand, maintaining a hybrid environment allows businesses to keep security-focused workloads in a private cloud and use public cloud networks for less sensitive data.

Despite that, many organizations choose to lock themselves in with a single CSP and avoid the approaches above because they can increase the complexity of managing a service. As we’ll see below, however, it doesn’t necessarily have to be this way.


Every Cloud Has a Silver Lining

When working on APIs, most developers try to keep things as simple as they can and avoid complexity as much as possible. So, why do some of them dabble with multicloud and intercloud when doing so threatens to make things more complicated?

“According to Gartner,” Letz says, “80% of people who are using public clouds are using more than one Cloud Service Provider.” That number may seem high, but it’s not surprising because there are plenty of advantages to using multiple CSPs.

We’ve touched on some of these above, but Letz lists some of the pros of utilizing a multi-cloud strategy as follows:

  • Avoiding outages: Redundancy and high availability help prevent interruptions, and recovery is possible within seconds.
  • Traffic geo-distribution: Multicloud helps you get closer to your users, wherever they are, thus decreasing latency.
  • Release management failover: Blue-green deployment maintains two production environments that are as identical as possible, so traffic can be switched back to the standby environment if a release fails.
  • Disaster recovery: With workloads mirrored across providers, recovery can approach zero downtime.
  • Edge gateway at the point of presence: With multicloud, you can keep a server in one location and cache elsewhere.
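The first bullet above, avoiding outages through redundancy, usually boils down to trying the next provider when one fails. Here is a minimal client-side sketch in Python; the endpoint URLs are hypothetical, and a pluggable fetch function stands in for a real HTTP client:

```python
def fetch_with_failover(path, endpoints, fetch):
    """Try each endpoint in order and return the first successful response.

    `fetch` is any callable that takes a URL and either returns a
    response or raises OSError -- injected here so the failover logic
    stays independent of a particular HTTP client.
    """
    last_error = None
    for base in endpoints:
        try:
            return fetch(base + path)
        except OSError as err:  # timeout, DNS failure, connection refused...
            last_error = err
    raise RuntimeError(f"all endpoints failed: {last_error}")


# Simulate one provider being down: the first endpoint raises, the
# second answers, and the caller never notices the outage.
def fake_fetch(url):
    if "aws" in url:
        raise OSError("simulated provider outage")
    return f"ok from {url}"

result = fetch_with_failover(
    "/users",
    ["https://api.aws.example.com", "https://api.gcp.example.com"],
    fake_fetch,
)
```

In practice this retry logic would live in the gateway or a global load balancer rather than in each client, but the principle is the same: redundancy only prevents interruptions if something automatically routes around the failed provider.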

Beyond actively considering the above, there are a few different reasons why organizations might already be using (or considering using) more than one cloud provider across different regions:

  • Historic use: Organic growth from SaaS consumption means organizations already use different clouds.
  • Customization: Seeking out the best-of-breed platform for each application.
  • Hybrid requirements: Bare metal, private cloud, etc.
  • Geographical considerations: The first-choice provider isn’t the best in the region(s) being targeted.
  • Avoiding lock-in: The desire to avoid vendor lock-in in case service quality drops or prices change.
  • Development: Multi-environment setups that let developers work on their device of choice.

That list of reasons is not exhaustive, with things like changes to data regulation playing a part too. Azure, AWS and Google Cloud are all GDPR compliant, but Microsoft was the first to really play up this element of their product. In May of 2018, they highlighted that “Azure offers 11 privacy-focused compliance offerings, more than any other cloud provider” and called their array of GDPR compliance measures “unmatched” in the space.

With all of this in mind, the question for some developers is no longer “why should I use multiple Cloud Service Providers?” but “how can I use multiple CSPs to their full potential?”

Down, But Not Out

Letz highlights that, when it comes to downtime, major cloud providers “all have it.” Microsoft Azure was the worst offender in 2018–19, with almost 2,000 hours of downtime, compared with roughly 350 hours each for Google Cloud and Amazon Web Services.

Since high availability is always a concern for API developers, downtime is the enemy. If API consumers using your service don’t feel like they can rely on it to work 100% of the time, or very close to it, then they won’t stick with you for long.

Letz mentions the concept of “lift and shift” a couple of times in her talk, touting it as one of the advantages of using a product like Tyk’s Multi Data Centre Bridge: lift and shift is possible with zero downtime, with only a DNS switch required.
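To make that concrete: a lift-and-shift cutover works because clients follow a single DNS record rather than a fixed address. The toy model below (an illustration, not Tyk’s implementation) shows why repointing that record moves traffic without a downtime window:

```python
class DnsRecord:
    """Toy stand-in for a DNS entry that fronts an API deployment."""

    def __init__(self, target):
        self.target = target  # e.g. the load balancer of the current cloud

    def resolve(self):
        return self.target

    def cutover(self, new_target):
        """Repoint the record at the new environment.

        The old environment is left untouched and can keep draining
        in-flight requests, which is why the switch itself costs no
        downtime.
        """
        old = self.target
        self.target = new_target
        return old


# Lift-and-shift: deploy in the new cloud, verify it, then flip the record.
record = DnsRecord("lb.old-cloud.example.com")
previous = record.cutover("lb.new-cloud.example.com")
```

Real DNS adds caching and TTLs, so in practice the old environment stays up until resolvers have picked up the new record, but no request ever has nowhere to go.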

She lists some of the other advantages of using a data center bridge, including:

  • Makes it possible to avoid vendor lock-in
  • Single control plane for API management across regions and clouds
  • Designed to work in complex intercloud and multi-region environments
  • Lightweight to integrate, light on network usage, and no need to send analytics
  • No database replication, sync dependency or migration problems
  • Resilient and highly available by nature

Just as you might use data center bridging to improve the performance of an Ethernet protocol in data centers, the aim of using a data center bridge in this context is to maximize performance and uptime while keeping the experience as smooth as possible for end users.
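The “single control plane” idea from the list above can be sketched in a few lines: API definitions live in one place, and the bridge pushes them to every gateway, whatever cloud or region each runs in. The class and method names here are illustrative, not Tyk’s actual API:

```python
class Gateway:
    """Hypothetical edge gateway running in one cloud or region."""

    def __init__(self, name):
        self.name = name
        self.apis = {}  # local copy of the API definitions it serves

    def apply(self, api_id, definition):
        self.apis[api_id] = definition


class ControlPlane:
    """Define an API once; propagate it to every connected gateway."""

    def __init__(self, gateways):
        self.gateways = gateways

    def publish(self, api_id, definition):
        for gw in self.gateways:
            gw.apply(api_id, definition)


# One publish call configures gateways in two different clouds.
gateways = [Gateway("aws-eu-west"), Gateway("gcp-us-east")]
plane = ControlPlane(gateways)
plane.publish("orders-v1", {"upstream": "https://orders.internal.example.com"})
```

Because each gateway holds its own copy of the definitions, it can keep serving traffic even if its link to the control plane drops, which is where the “no database replication or sync dependency” advantage comes from.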

Think of it as an API gateway on steroids!

A (Data) Bridge to the Future

Letz paraphrases Gartner to suggest that “trying to restrict yourself to a single cloud is doomed to fail” because this approach “limits your users, as well as your multi-national business opportunities.” It’s a bold statement, but one that might stick in your mind next time you’re waiting for AWS or Microsoft to recover from downtime and bring your API back online.

Gartner expanded on this idea in January 2020, writing that “A cloud strategy must be able to accommodate the use of more and more cloud services. The organization needs to realize that it will be relatively impossible to get everything from one vendor. A single cloud strategy makes sense only if it uses a decision framework that allows for and expects multiple answers.”

Data bridges, as they relate to APIs, are still in their infancy but it’s easy to see their potential. Of course, there’s a certain irony in that, while plugging all of your APIs and environments into a single control plane makes a lot of things simpler, it might result in a different type of vendor lock-in further down the line. But perhaps that’s an issue best left for another time.

Multicloud and data center bridges may not eradicate issues like downtime and lock-in completely, but they give developers a better chance of mitigating them than fixating on a single cloud solution does. And if they can streamline and unify the management of different services in one place, so much the better.

About Art Anthony

Art is a copywriter/blogger/content creator who gave up the big city grind to go freelance and live out in the countryside. He writes about everything from financial services and software/technology to health and fitness for big corporations and startups alike. He started his own company, Copywriting Is Art, several years ago and tweets at @ArtCopywriter.