Serverless, GraphQL, Go, microservices… disruptive tech offers great benefits, but is “boring” tech better from a pragmatic viewpoint?
If it ain’t broke, don’t fix it. No need to reinvent the wheel. The KISS (Keep It Simple, Stupid!) principle. There are plenty of expressions and the like out there that advocate embracing the tried and tested over the complex and newfangled, but sometimes the siren song of cutting-edge technology is just too much to resist.
At the Nordic APIs 2018 Platform Summit, Martin Buhr – creator and CEO of the Tyk API Management Platform – stood up and made a case for embracing best practices and robust technology in the API space, even if it might initially seem outdated and boring.
We’ll let you be the judge of how he did…
Boring Is Dependable
Martin kicked off his talk by looking at some of the reasons why “boring” is often the best way to go. “We’ve seen a lot of best practice,” he says. “We’ve also seen a lot of people try to do some really cool things…and some really bad things.”
He points out that one of the issues with taking the road less traveled, particularly a road that is new and appears especially exciting, is the type of competition it pits you against. He gives the example of hiring Go developers, and how you're suddenly competing with the likes of Google and Facebook for a very small talent pool.
In addition to being able to offer substantial salaries, such companies have the financial backing to explore new technology whether or not it’s successful. And, if we’re being honest, we could fill several pages with Google and Facebook experiments that haven’t succeeded.
Contrast this with hiring, say, a Java or Python developer. Martin says that you immediately get a ton of advantages:
- Large talent pool
- Well distributed seniority
- Increased supply = more affordable
- Widespread best practice
- Good for team churn
Embracing unfamiliar technology, on the other hand, means you risk being unable to find great hires or (even worse) spending way too much to hire someone who turns out to be a bust.
Doc Browns Are Dangerous
Those of you who have seen Back to the Future and its sequels will already know who Doc Brown is. For those of you who haven’t, he’s a walking embodiment of the mad scientist trope. An eternal optimist and eccentric thinker, Doc Brown isn’t afraid to think outside the box.
“Having a Doc Brown on your team is great,” says Martin. “Even if you don’t have an R&D team, these guys are the ones who’ll do it for you.” The danger, he says, comes when you take their ideas and recommendations at face value.
He provides the example of serverless architecture, about which a Doc Brown would say "You just write it once and then you can deploy anywhere!" Except, says Martin, "you don't really. You're deploying it to AWS or Google Cloud Functions or the Azure service."
Before you know it, you need a framework like Serverless or Apex to actually let you deploy anywhere. Then you need a database like DynamoDB, and suddenly you're locked in. "That's how serverless actually becomes 'buy more services,'" Martin says.
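The lock-in Martin describes starts at the function signature itself. As a rough sketch (handler names and payloads are illustrative, not taken from the talk), here is the same trivial "hello" endpoint written for two providers; note that neither handler can be handed to the other platform without rewriting:

```python
import json

# AWS Lambda style: the handler receives (event, context), and an API Gateway
# proxy event carries the request payload as a JSON string under "body".
def aws_handler(event, context):
    name = json.loads(event["body"])["name"]
    return {"statusCode": 200, "body": json.dumps({"greeting": f"Hello, {name}"})}

# Google Cloud Functions (HTTP) style: the handler receives a single
# Flask-style request object, and the return value becomes the response body.
def gcf_handler(request):
    name = request.get_json()["name"]
    return {"greeting": f"Hello, {name}"}

# A stand-in for Flask's request object, just to exercise gcf_handler locally.
class FakeRequest:
    def __init__(self, payload):
        self._payload = payload

    def get_json(self):
        return self._payload
```

Papering over differences like these is exactly what frameworks such as Serverless exist to do, which is how one "simple" choice quietly grows into a stack of dependencies.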
New Tech Can Be Limiting
Martin also talks about other trends in development seen by many Doc Browns as "the next big thing" in their respective spaces, namely GraphQL and microservices.
When talking about GraphQL, Martin paraphrases Xzibit, rapper and star of MTV's Pimp My Ride (in what will likely be the only time he's ever referenced on this blog): "yo dawg, we put SQL in your SQL so you can SQL while you SQL."
We’ve been fans of GraphQL for certain scenarios, and have written elsewhere about Lee Byron’s master plan for the ubiquity of GraphQL. However, Martin makes some legitimate points about the limitations it poses:
- GraphQL introduces a single point of failure
- Duplicate data schemas
- If a service fails, the whole query will fail
- You can’t predict all code paths
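The first and third points above boil down to the same failure mode. As a minimal sketch (no GraphQL library, hypothetical service names) of a naive gateway resolver: one query fans out to two backend services, and because the result is assembled in a single step, an outage in either service fails the entire query, even though half the data was available.

```python
# Hypothetical backend calls, standing in for separate upstream services.
def fetch_user(user_id):
    return {"id": user_id, "name": "Ada"}

def fetch_orders(user_id):
    raise ConnectionError("orders service is down")

# A naive gateway resolver: assembling both fields in one step means a
# failure in either upstream service propagates to the whole result.
def resolve_user_with_orders(user_id):
    return {
        "user": fetch_user(user_id),
        "orders": fetch_orders(user_id),  # raises, taking the query down with it
    }

def run_query(user_id):
    try:
        return {"data": resolve_user_with_orders(user_id), "errors": None}
    except Exception as exc:
        # GraphQL-style error envelope: the user data is lost wholesale.
        return {"data": None, "errors": [str(exc)]}
```

Real GraphQL servers can mitigate this with nullable fields and partial results, but doing so takes deliberate schema design; the out-of-the-box experience of an aggregating gateway is closer to the sketch above.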
That said, we probably wouldn’t go as far as his cutting statement that “it’s great for interface developers, but nobody else”! He also has some scathing, but again valid, concerns about how microservices can increase the complexity of an application and runtime environment:
- Successfully offering microservices often requires the ability to scale extremely rapidly
- Most organizations don’t have availability problems severe enough to necessitate microservices
- Microservices require a dedicated team for each service, which is resource intensive
- Trying to navigate the world of microservices without someone who has experience in doing so is very difficult
“It’s OK to build a monolith,” Martin argues. “It’s not a bad word! And the domains that require microservices? We can do that later.”
API developers might like to think that they’re above the temptation of tech bandwagoning, but it’s commonplace in our industry. Consider, for example, how many jumped onto the REST bandwagon without giving a second thought to whether or not ditching SOAP was actually a smart move for their particular service.
The problem with this comparison is that, with the benefit of hindsight, making the transition from SOAP to REST actually looks like it’s been a very smart move for many API developers…which complicates Martin’s standpoint that “boring is best.”
It’s worth pointing out, however, that Martin actually does still advocate taking a look at emerging technologies, acknowledging that a Doc Brown is a valuable addition to any team. What he’s really advocating is to take a step back and consider, at length, whether the benefits of jumping into a new technology headfirst outweigh the risks.
Perhaps the least controversial point Martin makes is that “good application design yields good API design.” Aside from being something that we can all agree on, this is a nice reminder that the tech a product uses is rarely as important as how it functions.
As long as everything works as it should, it’s unlikely that anyone will call you out for using new/old tech or the “wrong” language…unless they have way too much time on their hands.