The number of Internet-connected devices is growing at an astonishing rate. According to Cisco, in 2012 there were about 8.7 billion connected devices! In 2013 alone, more than 280,000 new devices were connected every hour, on average. This year that number grew to more than 360,000 new devices per hour. Cisco predicts that six years from now there will be more than 50 billion connected devices in total, making this new mesh the largest network ever built.
This prediction creates interesting connectivity challenges and, more importantly, lots of API-related challenges. Irakli Nadareishvili, who represented CA Technologies at the Nordic APIs Platform Summit, believes that “you’ll be writing APIs that millions of devices depend on. If you break your API, millions of devices will break.”
While cloud-based software is easily updated if there is an API change, things could be different where devices are concerned. To start with, devices have much more limited storage and processing power. Just as an example, the Arduino Uno, one of the most popular DIY devices available, has only 32 KB of flash memory and operates at just 16 MHz. This means that most of the client logic has to be hardcoded into the firmware, and making changes later is extremely difficult.
Another thing to remember is that after devices are deployed, it will also be difficult to perform software updates. Updating devices remotely will almost certainly become prohibitive due to bandwidth costs. Furthermore, performing a manual update is only possible if devices are within easy reach, which is not always the case.
Designing APIs that Last
An interesting comparison can be drawn between software development and civil engineering. While civil engineering worries about the longevity of solutions, software development is more focused on how a solution can be evolved and reshaped over time. Eric S. Raymond, a well-known open source advocate and author of The Cathedral and the Bazaar, popularized the “release early, release often” motto followed by millions of developers today. This philosophy is perfect for cloud-based software development but proves disastrous in an environment where machines will be the main consumers.
According to Roy Fielding, one of the creators of the HTTP specification, most software is built on the assumption that a single entity controls the whole system. In the case of the Internet of Things, the system is far more distributed, and there is no single, central, controlling entity. This makes it hard for devices to consume APIs that haven’t been built for a distributed world. A possible solution is to retrace our steps and reuse a proven technology that got us where we are today. One part of the Web architecture that offers longevity and works well in a distributed world is hypermedia. In fact, “the Web as we know it is nothing more than millions of hypermedia entities interacting with each other”, says Nadareishvili.
Hypermedia offers better longevity than other solutions because it decouples server implementations from the way clients consume APIs. Jakob Mattsson, from FishBrain, believes that “the only thing clients really need is a generic understanding of hypermedia.” There will be no need to change client implementations due to changes on the server, because the clients will adapt themselves. To make that happen, API responses should include not just data but also controls that describe API affordances. Clients then read those controls and find their way through the list of possible affordances. Clients will, in fact, behave like we humans do when consuming a Web site. Whenever a Web site changes, we don’t need to read any documentation; we simply browse and find our way.
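To make this concrete, here is a minimal sketch of a hypermedia-driven client in Python. The response shape and the control names (`controls`, `rel`, `href`) are illustrative assumptions for this article, not any particular hypermedia format:

```python
def find_control(response, rel):
    """Locate an affordance by its relation name instead of a hardcoded URL."""
    for control in response.get("controls", []):
        if control.get("rel") == rel:
            return control
    return None

# The server is free to move resources around; the client only depends
# on the relation name, never on the URL itself.
response = {
    "temperature": 21.5,
    "controls": [
        {"rel": "self", "href": "/sensors/42"},
        {"rel": "update", "href": "/sensors/42/readings", "method": "POST"},
    ],
}

update = find_control(response, "update")
print(update["href"])  # the client discovered this URL at runtime
```

If the server later relocates the update endpoint, only the `href` in the response changes; the client code above keeps working untouched.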
Making Machines Think?
The challenge with hypermedia is that, while we humans are very good at adapting to changes and finding our way, machines are not. While designing APIs for humans should be about understanding how end users will interact with your API, designing APIs for machines should be all about making responses easy to process. The key to this challenge is finding familiarity among similar resources. By defining a set of similar affordances for similar resources, it is possible to create a vocabulary that machines are able to understand. This has been the strategy behind user interface design for decades.
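One way to picture such a shared vocabulary is as a fixed table of affordance names that a constrained client can dispatch on. The vocabulary terms below (`read`, `toggle`) and the device shape are invented for illustration:

```python
# A tiny shared vocabulary: any resource exposing these affordance names
# can be operated on by any client that knows the vocabulary.
VOCABULARY = {
    "read": lambda device: device["state"],
    "toggle": lambda device: device.update(state=not device["state"]) or device["state"],
}

def perform(device, affordance):
    """Execute an affordance only if it belongs to the shared vocabulary."""
    action = VOCABULARY.get(affordance)
    if action is None:
        raise ValueError(f"unknown affordance: {affordance}")
    return action(device)

lamp = {"state": False}
print(perform(lamp, "toggle"))  # True
```

A device that understands these few terms can interact with any resource that advertises them, which is exactly the familiarity-among-similar-resources idea described above.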
The main difference between Web sites and APIs, according to Nadareishvili, is that “most Web sites are consumed by real people that can understand semantic meaning and learn how to perform actions on it.” The fact that machines do not have the ability to interpret semantic meaning the way people do is defined as the semantic gap. One solution is to use Artificial Intelligence techniques. If machines are enabled with the ability to understand a limited vocabulary, they can go on to derive the appropriate actions from it.
RFC 6906 describes a way to accomplish this by defining profiles that specify how servers and clients communicate a set of semantics associated with resources. A good example of a profile is the podcast. Podcasts have a very specific list of semantics and associated affordances. While it should be possible to consume a podcast using any client, a client that is aware of these semantics is able to provide a more sophisticated experience. A growing number of profile-related standards exist to implement the required semantics. The following are two of the standards that deserve special attention:
- XMDP: the XHTML Meta Data Profiles format is used for defining HTML meta data profiles that are easy for both humans and machines to read.
- ALPS: the Application-Level Profile Semantics specification is a data format for defining simple descriptions of application-level semantics, similar in complexity to HTML microformats.
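As a loose sketch of the idea, a client could check a response against an ALPS-style profile before acting on it. The descriptor layout below is a deliberate simplification for this article, not the actual ALPS document format:

```python
# An illustrative, simplified "podcast" profile: semantic descriptors are
# data fields, while "unsafe" descriptors are state-changing affordances.
podcast_profile = {
    "descriptors": [
        {"id": "title", "type": "semantic"},
        {"id": "episode", "type": "semantic"},
        {"id": "play", "type": "unsafe"},
    ]
}

def semantic_fields(profile):
    return {d["id"] for d in profile["descriptors"] if d["type"] == "semantic"}

def understands(response, profile):
    """A client 'understands' a response if every semantic field is present."""
    return semantic_fields(profile) <= set(response)

response = {"title": "Episode 12", "episode": 12, "play": "/play/12"}
print(understands(response, podcast_profile))  # True
```

A generic client could still render the raw fields, while a profile-aware client uses this check to unlock the richer podcast experience described above.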
It’s All About the Standards
Although ALPS is becoming the standard for machine-readable profiles, Nadareishvili says that “there’s a lot of opportunity for collaboration.” The key is to use the appropriate media type so that clients can adapt accordingly. It’s expected that some clients will only understand certain media types, and will simply discard any profile information they don’t understand. Ideally, clients should parse, and possibly cache, all information about interesting profiles.
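In practice, a profile is often advertised as a parameter of the media type, which a client can extract with nothing more than the standard library. The profile URI below is a made-up example:

```python
from email.message import Message

def profile_of(content_type):
    """Extract the 'profile' parameter from a Content-Type header value."""
    msg = Message()
    msg["Content-Type"] = content_type
    return msg.get_param("profile")  # None when no profile is declared

header = 'application/json; profile="https://example.org/profiles/podcast"'
print(profile_of(header))
```

A client that finds no `profile` parameter it recognizes can simply fall back to treating the response as plain JSON, which is the discard-what-you-don’t-understand behavior described above.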
How this can be done by billions of low-powered machines with limited connectivity is still an open question. Nadareishvili believes that “devices can use MQTT and even lower-power communication.” Connectivity protocols like MQTT are designed to be abstracted from the communication layer, and are especially well suited to limited or intermittent connectivity. Another option is to make extensive use of local caching, and to implement communication protocols very efficiently.
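The local-caching option can be sketched in a few lines: a device keeps profile documents it has already fetched and only goes back on the wire when an entry expires. The class name, TTL, and profile URI are all invented for illustration:

```python
import time

class ProfileCache:
    """A tiny local cache a constrained device might keep for profiles."""

    def __init__(self, ttl_seconds=3600, fetch=None):
        self.ttl = ttl_seconds
        self.fetch = fetch      # callable that retrieves a profile by URI
        self.entries = {}       # uri -> (expires_at, document)

    def get(self, uri):
        entry = self.entries.get(uri)
        now = time.monotonic()
        if entry and entry[0] > now:
            return entry[1]             # cache hit: no network traffic
        document = self.fetch(uri)      # cache miss: fetch and remember
        self.entries[uri] = (now + self.ttl, document)
        return document

calls = []
cache = ProfileCache(fetch=lambda uri: calls.append(uri) or {"uri": uri})
cache.get("alps://podcast")
cache.get("alps://podcast")
print(len(calls))  # the profile was fetched only once
```

On a real device the `fetch` callable would wrap whatever transport is available (MQTT, HTTP, or something lower-power still), but the caching logic stays the same.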
Another obvious solution is one that allows API management providers to play a bigger role. API management services can provide a middle layer, and translate API calls between servers and machines. This will supply API endpoints that machines can consume — plus a guarantee that they will never change. These translation services can then parse the profiles themselves, and make the necessary adaptations for the consumer. Questioned about this, Nadareishvili expressed the belief that API management providers will eventually offer such a service as soon as the market demands it.
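The heart of such a middle layer is a translation table: device-facing endpoints are frozen forever, while the mapping to the real backend can change freely. All of the paths below are invented for illustration:

```python
# The device-facing contract on the left never changes; the backend paths
# on the right can be updated by the gateway operator at any time.
STABLE_ROUTES = {
    "/v1/telemetry": "/internal/2014/ingest/telemetry",
    "/v1/profile": "/internal/2014/profiles/device",
}

def translate(device_path):
    """Map a frozen device endpoint to the current backend endpoint."""
    backend_path = STABLE_ROUTES.get(device_path)
    if backend_path is None:
        raise KeyError(f"no mapping for {device_path}")
    return backend_path

print(translate("/v1/telemetry"))
```

When the backend is reorganized, only `STABLE_ROUTES` is updated at the gateway; the billions of deployed devices keep calling the same endpoints they were flashed with.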
There are huge differences between cloud-based and device-based API clients. The old “release early, release often” philosophy that makes so much sense in the startup world is not as workable when providing an API for mass consumption by billions of low-powered devices. Because it is so difficult to update code on deployed devices, APIs cannot change. This makes their maintenance an interesting challenge.
One of the possible solutions is to employ hypermedia-related tactics when designing APIs, so that device-based clients can easily adapt to any changes. By defining the appropriate media types and profiles, API providers can offer the client clues about what affordances they support, and the best way to consume them. Clients, on the other hand, will be required to parse and understand those hints in order to adapt accordingly. The big challenge in all this is that with limited processing power and bandwidth, how will all those billions of devices gain that capability?
How do you think this challenge can be solved? Leave a comment here or tweet us with your thoughts!
Here are the Nordic APIs Platform Summit videos of the talks referenced in this article, by order of appearance:
- “The Internet of Things Challenge: Building APIs that Last for Decades”, Irakli Nadareishvili
- “Your HTTP-based API is not RESTful”, Jakob Mattsson
[Editor’s note: Nordic APIs is an independent publication and has not been authorized, sponsored, or otherwise approved by any company mentioned in this article; however, CA Technologies was a sponsor of the Nordic APIs Platform Summit event where Irakli Nadareishvili presented.]