7 Protocols Good For Documenting With AsyncAPI

Documentation is arguably the most important part of any API strategy, as it’s often the direct route between developer and user, and a direct conduit through which the developer can inform, educate, and contextualize. Accordingly, finding good options for documenting your API is of prime importance.

AsyncAPI is one such option. AsyncAPI is essentially the messaging paradigm’s alternative to OpenAPI or RAML. While those solutions can still be used to describe a messaging API, AsyncAPI is simply better suited to the job. It enables documentation for messaging APIs, using a powerful specification appropriate for both machine and human interaction. To do this, though, AsyncAPI needs to leverage one or more protocols.

Today, we’re going to talk about exactly that. Keep in mind this is a very broad, general overview; more information can be found in the links at the end of this piece.

AsyncAPI creator Fran Méndez spoke at our last Platform Summit. You can watch his talk here:

First, What is AsyncAPI?

AsyncAPI is a specification that allows developers to define their APIs using a variety of machine-readable formats. It is built around the idea of a generally defined messaging system, in which each message contains a header and a payload, and messages are sorted into topics. These topics are broadly analogous to URLs in the classic HTTP stack.
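To make that structure a little more concrete, here is a rough, hypothetical sketch of the kind of definition AsyncAPI describes: a topic whose messages carry headers and a payload, expressed as a plain Python dictionary mirroring the JSON form. The field names are simplified for illustration and are not an exact copy of the specification.

```python
# Illustrative only: a simplified, AsyncAPI-style description of one topic
# whose messages carry headers and a payload. Field names are approximations.
async_api_sketch = {
    "info": {"title": "Device Events API", "version": "1.0.0"},
    "topics": {
        # The topic is broadly analogous to a URL path in the HTTP world.
        "device.temperature.measured": {
            "subscribe": {
                "message": {
                    "headers": {
                        "type": "object",
                        "properties": {"sentAt": {"type": "string"}},
                    },
                    "payload": {
                        "type": "object",
                        "properties": {
                            "deviceId": {"type": "string"},
                            "celsius": {"type": "number"},
                        },
                    },
                }
            }
        }
    },
}

if __name__ == "__main__":
    import json
    # Emit the machine-readable JSON form of the sketch above.
    print(json.dumps(async_api_sketch, indent=2))
```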

AsyncAPI is a method by which an API can be represented using a commonly defined and understood language. The main ethos here is that, with a common language, creating interoperable services and products becomes easier. This can be seen as well in the AsyncAPI approach to open source; AsyncAPI has been – and, ostensibly, will remain – open source.

AsyncAPI also places heavy emphasis on being both human- and machine-readable. By describing APIs using JSON and YAML, and offering a variety of hooks for GUI functionality, AsyncAPI has tried to position itself as an offering appropriate for both machine-to-machine functions and human-to-machine understanding.

Finally, a big “sell” for AsyncAPI is that it was designed for a given situation, rather than designed first and then used in various environments. AsyncAPI is entirely built around the concept of message-centric API interaction, which is typically found in IoT and similar platforms. In fact, AsyncAPI is built around the limitations found in these systems, limitations that themselves created the demand for such a standard. It is a master of one trade, rather than a jack of all trades attempting to handle a wide range of paradigms and development approaches.

AsyncAPI can work with a variety of protocols. Let’s take a look at a few of these. Keep in mind that some protocols are currently supported, while others are not yet supported but are being investigated. We present both here for completeness.

Read more: Tooling Review: AsyncAPI

Current Support

The following protocols are currently (as of November 2018) supported by AsyncAPI.

1. AMQP

AMQP stands for Advanced Message Queuing Protocol. It was originally designed in 2003 by John O’Hara at the London branch of JPMorgan Chase and was created to function as an open effort for a cooperative communication standard. Because of this focus, the working group has expanded dramatically to include some of the biggest movers in various industries, including Goldman Sachs, Microsoft, Red Hat, and VMware, not to mention JPMorgan Chase, Cisco Systems, and Barclays.

One of the biggest benefits of AMQP is that it allows typed data to be annotated, and this, combined with its “self-describing” encoding system, allows for greater understanding and compatibility between a wide variety of clients and users. AMQP treats each basic unit of data as a “frame”, and defines nine AMQP frame bodies (open, begin, attach, transfer, flow, disposition, detach, end, and close).
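As a minimal sketch of what publishing over AMQP looks like in practice, the following uses the third-party pika client for Python (which implements AMQP 0-9-1) and assumes a broker such as RabbitMQ is running locally. The queue name and payload are made up for illustration; the frame exchange itself is handled inside the client library.

```python
import pika  # third-party AMQP 0-9-1 client; assumes a broker (e.g. RabbitMQ) on localhost

# Open a connection and a channel to the broker.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Declare a queue (created if it does not already exist) and publish one message to it.
channel.queue_declare(queue="sensor.readings")
channel.basic_publish(
    exchange="",                    # the default exchange routes by queue name
    routing_key="sensor.readings",
    body=b'{"deviceId": "abc", "celsius": 21.5}',
)

connection.close()
```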

2. MQTT

MQTT, or Message Queuing Telemetry Transport, was first authored in 1999 by Andy Stanford-Clark and Arlen Nipper, of IBM and Cirrus Link respectively. The protocol was later adopted as an ISO standard under ISO/IEC 20922. It works as a layer on top of the TCP/IP stack. The protocol is publish/subscribe based, and is designed specifically for IoT devices and other systems that are low-bandwidth and high-latency, as well as for networks that are otherwise unreliable. It has been specifically designed for machine-to-machine communication.

MQTT has three general message types. Connect waits for a connection to be established with the server and facilitates the creation of this link. Disconnect waits for the client to finish any outstanding work, and then tears down the TCP/IP session. Publish hands a message to the MQTT client and returns control to the application thread.
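As a brief illustration of that connect/subscribe/publish flow, the following sketch uses the third-party Eclipse Paho client for Python (1.x-style API) and assumes an MQTT broker is listening on the default port; the topic name is purely illustrative.

```python
import paho.mqtt.client as mqtt  # third-party Eclipse Paho MQTT client

# Called once the CONNECT handshake with the broker completes.
def on_connect(client, userdata, flags, rc):
    print("Connected with result code", rc)
    client.subscribe("sensors/temperature")           # topic name is illustrative
    client.publish("sensors/temperature", b"21.5")    # publish once the link is up

# Called for every message received on a subscribed topic.
def on_message(client, userdata, msg):
    print(msg.topic, msg.payload)

client = mqtt.Client()  # note: paho-mqtt 2.x also requires a callback API version argument
client.on_connect = on_connect
client.on_message = on_message

client.connect("localhost", 1883, 60)  # assumes a broker on the default MQTT port
client.loop_forever()                  # network loop dispatches the callbacks above
```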

3. WebSocket

WebSocket is a protocol that has been standardized by the Internet Engineering Task Force as RFC 6455, alongside the WebSocket API standardized in Web IDL by the W3C. The protocol enables full-duplex, two-way communication in the client-server relationship, and is based largely upon TCP with additional HTTP functionality baked in.

WebSocket was designed from the ground up to support standardized communication with low overhead and optimized throughput. This is complemented by its use of the standard web traffic ports (80 and 443), which means it is often useful in environments where non-traditional, non-web-based communication systems are otherwise blocked or limited.
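As a small sketch of that low-overhead, two-way exchange, the following client uses the third-party websockets library for Python; the endpoint URL is hypothetical and simply stands in for any WebSocket server.

```python
import asyncio
import websockets  # third-party asyncio-based WebSocket library

async def echo_once():
    # The endpoint is hypothetical; substitute a real WebSocket server.
    async with websockets.connect("ws://localhost:8765") as ws:
        await ws.send("ping")     # frames travel over a single long-lived TCP connection
        reply = await ws.recv()   # and can be received full-duplex on the same socket
        print(reply)

asyncio.run(echo_once())
```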

Most importantly, it should be mentioned that WebSocket is supported by all major browsers natively – a big selling point for any Web API.

4. Kafka

Kafka is an open-source streaming platform that was released in 2011 and is now developed by the Apache Software Foundation. It is principally written in Scala and Java, and is designed to provide a high-throughput, high-efficiency, low-latency method of handling and integrating real-time data feeds. It was originally built at LinkedIn, but very quickly became open source.

Kafka is really four APIs under the guise of one. The Producer API publishes a stream of records to the Kafka cluster. The Consumer API allows consumers to subscribe to topics and process the records for utilization and distribution. The Connector API links topics to existing applications. The Streams API converts input streams into output streams and provides the result to the various APIs and their clients.
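A minimal sketch of the Producer and Consumer sides, using the third-party kafka-python client, might look like the following; the broker address and topic name are placeholders.

```python
from kafka import KafkaProducer, KafkaConsumer  # third-party kafka-python client

# Producer side: publish one record to a topic.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("device-readings", b'{"deviceId": "abc", "celsius": 21.5}')
producer.flush()  # block until the record is actually sent

# Consumer side: subscribe to the topic and process records as they arrive.
consumer = KafkaConsumer(
    "device-readings",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
)
for record in consumer:
    print(record.topic, record.value)
```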

Kafka’s main use case, per its documentation, is as a replacement for what it calls “more traditional message broker[s]”. By handling this communication methodology for what is essentially a glorified pub-sub relationship, Kafka can boast high throughput and effective communication. Unfortunately, this use case also (by Kafka’s own admission) often requires low end-to-end latency, and in some cases (especially IoT applications) this may be a limiting factor.

Others Include JMS, STOMP, and… HTTP

In addition to the above four, Fran also notes that the set of already supported protocols includes JMS, STOMP, and HTTP. As he describes, “One of the less known features of AsyncAPI is its capability to define HTTP streaming APIs, with support for Server-Sent Events and Chunked encoding.”
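As a rough idea of what consuming such an HTTP streaming endpoint looks like on the client side, the following sketch uses the third-party requests library to read a chunked or Server-Sent Events response line by line; the URL is hypothetical.

```python
import requests  # third-party HTTP client

# Stream an HTTP response (chunked transfer or Server-Sent Events) line by line.
# The URL is hypothetical; substitute a real streaming endpoint.
with requests.get("https://example.com/events", stream=True) as response:
    for line in response.iter_lines():
        if line.startswith(b"data:"):              # SSE data lines use a "data:" prefix
            print(line[len(b"data:"):].strip())
```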

Under Consideration

The following protocols are currently being investigated by AsyncAPI for inclusion. In addition to these, Fran notes that the team is also looking into protobufs and Avro, serialization formats that AsyncAPI is considering due to community requests.

5. NATS

NATS is a classic pub-sub application protocol. Originally developed as the messaging control system for Cloud Foundry, NATS was designed as a Ruby-based, messaging-centric system that distributes published messages to connected clients. It was later ported to Go and released to the open source community.

Of interest is the fact that NATS has been designed from the ground up to be what its developers consider cloud-native. This is a departure from more traditional systems, which are built for local deployment first and ported to the cloud later.
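A minimal publish/subscribe sketch, assuming the third-party nats-py client and a NATS server on the default port, might look like this; the subject name is illustrative.

```python
import asyncio
import nats  # third-party nats-py client

async def main():
    # Connect to a NATS server (assumed to be running on the default port).
    nc = await nats.connect("nats://localhost:4222")

    async def handler(msg):
        print(msg.subject, msg.data)

    await nc.subscribe("updates", cb=handler)  # subject name is illustrative
    await nc.publish("updates", b"hello")      # published messages fan out to subscribers
    await asyncio.sleep(0.1)                   # give the handler a moment to run
    await nc.drain()                           # flush and close the connection cleanly

asyncio.run(main())
```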

A big sell for NATS is that it is modular, and that these modules are open source. This means the protocol can be applied in a way that is highly scalable and tailored exactly to your given circumstances. The pedigree of companies who use NATS speaks to this, with Ericsson, HTC, Siemens, and Pivotal all utilizing it.

6. Google Cloud Pub/Sub

Formerly known simply as “Google Pubsub”, Google Cloud Pub/Sub is part of the Google Cloud Platform, Google’s cloud offering that was initially released in 2008. It boasts a wide variety of language support, with client libraries in Java, C++, Python, Go, Ruby, and more.

Google considers its Pub/Sub offering to be enterprise-level and message-oriented, with a system that provides “many-to-many, asynchronous messaging that decouples senders and receivers.” While Google’s offering is certainly tempting, especially when considering a use case such as pushing data from a single node to a server backend, there are of course concerns when using a portion of an entire suite. Pub/Sub can still be used independently, but for many, the argument for using a part of a suite as opposed to the suite itself is a hard sell.
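For a sense of what that decoupled publishing looks like in code, here is a minimal sketch using the official google-cloud-pubsub Python client; the project and topic IDs are placeholders, and credentials are assumed to be available in the environment.

```python
from google.cloud import pubsub_v1  # official google-cloud-pubsub client library

# Placeholders: substitute your own project and topic.
project_id = "my-project"
topic_id = "device-readings"

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(project_id, topic_id)

# publish() returns a future; result() blocks until the service acknowledges the message.
future = publisher.publish(topic_path, data=b'{"deviceId": "abc", "celsius": 21.5}')
print("Published message ID:", future.result())
```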

That being said, there are many open source projects that make utilizing this single component possible, with a Kafka Connector, Load Testing Framework, and more offering high functionality.

7. CoAP

CoAP, or the Constrained Application Protocol, is much like MQTT in that it was designed to function in the Internet of Things and other constrained network situations. It is specified as a web standard in RFC 7252, and utilizes calls that are very “HTTP-like”. For this reason, it is considered easier for many developers to begin using, as it feels very familiar.
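To illustrate how “HTTP-like” those calls feel, here is a minimal GET request sketched with the third-party aiocoap library for Python; the URI is purely illustrative.

```python
import asyncio
from aiocoap import Context, Message, GET  # third-party CoAP library

async def main():
    # Create a client context and issue an HTTP-like GET request over CoAP (UDP).
    protocol = await Context.create_client_context()
    request = Message(code=GET, uri="coap://localhost/sensors/temperature")  # illustrative URI
    response = await protocol.request(request).response
    print(response.code, response.payload)

asyncio.run(main())
```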

A big selling point for CoAP is the idea that, with lower “node” costs, more “nodes” can be created – in the Internet of Things, where thousands of nodes might make up a network segment, this lower data expense may allow for smaller devices, less resource-intensive computational requirements, and less waste.

CoAP also considers itself format-agnostic in terms of data integration; it boasts that the protocol “integrates with XML, JSON, CBOR, or any data format of your choice.”

Conclusion

There’s a wide variety of protocols that can be used here, but ultimately, the proper protocol is going to be highly dependent on your specific use case and your codebase. While we’ve only briefly discussed these protocols here, we will go into them in greater depth at a later date. For now, keep in mind that for use with AsyncAPI, these protocols fall into two broad categories: those currently supported, and those set to be investigated. In some cases, this investigation might mean adoption is right around the corner; in other cases, it might be very far in the future.

Additional Resources

For more reading, check the following links: