HTTP/3 is a robust protocol, offering significant gains with few adoption blockers. The promise of this protocol, however, is just that – a promise. While the protocol is, in theory, an attractive proposition, it still has quite a lot of iteration to go through, which has left many programmers with a simple question – what is the current state of HTTP/3 and QUIC, and what exactly does the protocol offer compared to HTTP/2 and HTTP/1?
Today, we’re going to answer those questions. We’ll dig a bit into the history of the protocol, and figure out what the current state of HTTP/3 is as of mid-2019.
What is HTTP/3?
HTTP/3 is the next iteration of the oft-used HTTP protocol family. It’s meant to be a replacement of sorts, though just as with HTTP/1, some level of co-existence is expected across the internet for the foreseeable future, given how slowly new protocols are adopted. HTTP/3 is very similar to HTTP/2 in what it does, but it makes significant changes to the underlying transport. Its mode of deployment makes it a strange beast – as we’ll expand on shortly, HTTP/2 is challenging to update in a way that alleviates its core failings, because it’s built upon TCP.
HTTP/3, on the other hand, abandons TCP entirely (clients simply fall back to HTTP/2 over TCP where HTTP/3 is unavailable). It is built upon QUIC, a transport protocol of Google origin now being standardized by the IETF, which itself runs on top of UDP. By building on UDP, QUIC manages to fix many of the core issues found in HTTP/2 while operating under a new implementation methodology. This adoption of UDP also allows significant increases in speed, not to mention reliability.
Why Upgrade From HTTP/2?
Before we discuss QUIC, we should look at why one would want to upgrade HTTP/2 in the first place. Because HTTP/2 is fundamentally an upgrade of HTTP/1, many of the core issues found in the first implementation have propagated forward. These core issues are primarily derived from the reality of TCP and the way in which TCP is implemented across networks and the internet at large.
Perhaps the most notable of these issues is that the single TCP connection used by HTTP/2 becomes a bottleneck on a poor network – as network quality degrades and packets are dropped, the entire connection stalls while the lost packets are retransmitted, and no other data can be delivered in the meantime. HTTP/1 sidestepped much of this in practice because browsers opened several parallel connections per host (typically around six), but both protocols were designed for a network and a time in which current latency, speed, and concurrency demands weren’t yet a reality.
QUIC, and thereby HTTP/3, solves this issue by moving multiplexing into the transport layer. HTTP/2 already multiplexes streams, but because they all share one ordered TCP byte stream, a single lost packet blocks all of them. QUIC enforces ordering per stream, so if a packet is lost, only the stream it belongs to has to wait for retransmission – the remaining streams keep delivering data. This reduces congestion quite handily, not to mention improves the general reliability of the protocol.
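The head-of-line blocking difference can be sketched with a toy delivery model. The packet sequence, stream names, and loss pattern below are invented purely for illustration; real TCP and QUIC loss recovery is far more involved:

```python
# Toy model: packets arrive for three streams ("A", "B", "C"), and one
# packet (seq 2, belonging to stream "B") was lost and is still in flight.
packets = [(0, "A"), (1, "A"), (2, "B"), (3, "C"), (4, "C"), (5, "B")]
lost = {2}  # sequence numbers awaiting retransmission

def deliverable_tcp(packets, lost):
    """HTTP/2 over TCP: one ordered byte stream. Everything after a lost
    packet waits, regardless of which HTTP/2 stream it belongs to."""
    delivered = []
    for seq, stream in packets:
        if seq in lost:
            break  # the whole connection stalls here
        delivered.append((seq, stream))
    return delivered

def deliverable_quic(packets, lost):
    """HTTP/3 over QUIC: ordering is enforced per stream, so only the
    stream missing data ("B") stalls; "A" and "C" keep delivering."""
    delivered, blocked = [], set()
    for seq, stream in packets:
        if seq in lost:
            blocked.add(stream)   # only this stream must wait
        elif stream not in blocked:
            delivered.append((seq, stream))
    return delivered

print(deliverable_tcp(packets, lost))   # [(0, 'A'), (1, 'A')]
print(deliverable_quic(packets, lost))  # [(0, 'A'), (1, 'A'), (3, 'C'), (4, 'C')]
```

With the same loss, the TCP-style model delivers nothing past the gap, while the QUIC-style model only holds back the one affected stream.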
An intrinsic issue with HTTP/2 is actually not an issue of HTTP/2 itself – rather, it’s an issue of how vendors have chosen to implement the stack beneath it. Because this handling is often “baked in” to routers, firewalls, and other network devices (not to mention middleboxes), any deviation from expected HTTP/2 traffic is often seen as invalid, or worse, as an attack. These devices are configured to accept TCP or UDP between contacted servers and their users only within a very strict, narrow definition of what expected traffic should look like – any deviation, such as new functionality introduced by a protocol update, is frequently rejected outright because the devices simply don’t know how to handle it.
This issue is known as protocol ossification and is a huge problem in resolving the underlying issues of HTTP/2. New TCP options are either severely limited or outright blocked, so fixing HTTP/2 becomes less an issue of “what do we fix,” and more an issue of “how do we implement the fix.”
To solve the underlying issues of HTTP/2, and more specifically the issue of protocol ossification, HTTP/3 is based around QUIC. QUIC, originally an acronym for “Quick UDP Internet Connections,” was built by Google as a solution to many of the issues intrinsic in the current network protocol stack. It is low-latency by design. The protocol has also been designed to be secure – because there is no cleartext version of the protocol (everything is encrypted via TLS 1.3), it’s both highly secure and resistant to ossification. Encrypted QUIC traffic is neither understandable nor “scannable” by standard middleboxes, and thus the traffic is simply routed along rather than held up.
QUIC is also designed to be very fast. By offering 0-RTT and 1-RTT (Round Trip Time) handshakes in place of the TCP three-way handshake followed by a separate TLS handshake, QUIC gets data flowing far sooner. QUIC is also highly reliable, thanks to the aforementioned independent streams, meaning that data transmission proceeds with greater speed and accuracy. This reliability, combined with speed, enables superior congestion control and stream retransmission. In fact, the main objection raised against HTTP/3 – that it utilizes UDP, a transport with no built-in reliability – is largely negated by these facets.
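The latency arithmetic can be sketched with a small helper. The 100 ms round-trip time is an assumed figure, and TLS 1.2 is used for the TCP baseline (TLS 1.3 over TCP would need one fewer round trip):

```python
def time_to_first_byte(rtt_ms, handshake_rtts):
    """Milliseconds until the first response byte arrives: round trips
    spent on connection setup, plus one round trip for request/response."""
    return (handshake_rtts + 1) * rtt_ms

RTT = 100  # assumed 100 ms round-trip time

# TCP three-way handshake (1 RTT) + separate TLS 1.2 handshake (2 RTTs):
print(time_to_first_byte(RTT, 3))  # 400 ms
# QUIC 1-RTT handshake (transport setup and TLS 1.3 combined):
print(time_to_first_byte(RTT, 1))  # 200 ms
# QUIC 0-RTT resumption (request sent alongside the very first flight):
print(time_to_first_byte(RTT, 0))  # 100 ms
```

On a high-latency link, collapsing handshake round trips is where most of QUIC’s perceived speed comes from.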
Additionally, it is notable that QUIC is designed to be implemented in user space. This means that, unlike protocols built into the OS kernel or firmware, QUIC can iterate quickly and effectively without having to deal with the entrenchment of each protocol version. This is a big deal for such a large protocol, and in many ways should be considered a core feature in its own right.
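As a rough illustration of why user-space deployment works: the only kernel facility a QUIC stack needs is plain UDP datagram delivery, which every operating system already exposes. The loopback sketch below (port and payload are arbitrary) shows that layer; everything QUIC adds on top of it (encryption, retransmission, congestion control) lives in ordinary application code that can be upgraded by shipping a new library rather than a new kernel:

```python
import socket

# The kernel's only job here is moving UDP datagrams. A QUIC stack
# builds its reliability and security machinery above this socket,
# entirely in user space.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))          # bind to any free local port
port = sock.getsockname()[1]

# Loop a datagram back to ourselves to show the primitive in action.
sock.sendto(b"datagram payload", ("127.0.0.1", port))
data, addr = sock.recvfrom(2048)
print(data)  # b'datagram payload'
sock.close()
```

This is also why new TCP options ossify while QUIC versions can evolve: middleboxes and kernels see only opaque UDP, while the interesting logic ships with the application.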
Implementation and Iteration
The history of QUIC is essentially the history of HTTP/3 up until a certain point, so to understand the current state of implementation and iteration, we must go back to the first days of QUIC. QUIC, as noted earlier, originally stood for “Quick UDP Internet Connections.” It was first designed at Google by Jim Roskind, and was formally implemented in 2012. QUIC was at this point strictly a Google product – it was implemented in Chrome, Google Search, YouTube, and other Google products, delivering excellent speed increases and improving user experience on low-quality networks.
In June of 2015, an initial QUIC Internet-Draft was submitted, proposing the adoption of QUIC as a formalized standard. In 2016, the QUIC Working Group was approved, and standardization began in earnest. By 2017, QUIC engineers at Google stated that almost 7% of all internet traffic was being carried by the Google variant of QUIC, and with this data, the push to turn it into a global standard gained weight.
During this time, the Google QUIC implementation essentially splintered. While Google continued its own QUIC development, the working group decided to base the standardized QUIC on TLS 1.3 rather than continuing with the custom encryption scheme Google had deployed. The working group also stated that QUIC should eventually carry more than just HTTP, which required separating the work into QUIC as a transport protocol and HTTP over QUIC as an application protocol. The HTTP-over-QUIC layer was renamed HTTP/3 in November of 2018, and it is now the primary focus of the working group.
It should be noted that, as of right now, nobody is running the IETF variant of QUIC – the only current variant in use is the Google version of QUIC, which Google is actively attempting to move towards the IETF standard with each iteration.
HTTP/3, APIs, and IoT
One of the most substantial gains promised by HTTP/3 is its impact on APIs and the Internet of Things (IoT). APIs and IoT devices, more often than not, find themselves operating on unpredictable networks. Network quality, the quality of the transmission media, and the security underlying it all are highly dynamic, and packet loss and transmission errors are behind most data failures in these environments.
With HTTP/3, however, many of these issues are, ostensibly at least, negated. Because packet-loss-related slowdowns are mostly fixed by independent streams, and handshakes are made fundamentally lighter and more efficient, APIs and IoT devices get a more stable, secure, and predictable transport profile. Unfortunately, these gains aren’t entirely easy to realize just yet. There is no standard, OS-level API for QUIC as there is for TCP sockets, so while HTTP/3 can be enabled, it requires specific libraries and implementations. This, in turn, leads to library lock-in and may lead to inflexibility in business and data logic for APIs attempting to implement HTTP/3 over QUIC.
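The lock-in concern can be made concrete with a side-by-side sketch. The TCP path below uses the standard socket and TLS APIs that every platform provides; the QUIC path has no such standard, so `quic_library` and everything called on it are hypothetical stand-ins, not a real package:

```python
def fetch_over_tcp(host: str, path: str) -> bytes:
    """HTTP/1 over TCP rides on the portable, OS-provided socket API."""
    import socket
    import ssl
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\n\r\n"
            tls.sendall(request.encode())
            return tls.recv(65536)

def fetch_over_quic(host: str, path: str) -> bytes:
    """No OS-level equivalent exists for QUIC, so the application binds
    to one specific user-space library and its bespoke interface."""
    # Hypothetical API for illustration only; each real HTTP/3 library
    # exposes its own, incompatible interface, so swapping libraries
    # means rewriting this code.
    import quic_library
    conn = quic_library.connect(host, 443, alpn=["h3"])
    stream = conn.open_stream()
    stream.send_headers({":method": "GET", ":path": path})
    return stream.read_response()
```

The TCP version would run unchanged against any standard library; the QUIC version is welded to whichever implementation was chosen, which is exactly the inflexibility the paragraph above describes.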
In concept, HTTP/3 is an excellent protocol. In its current application and implementation, however, it still has a lot to iterate upon. While the promise of QUIC and UDP in general has been borne out by excellent real-world results at Google, the IETF standard is still not quite within reach. Adopting QUIC as it stands under Google is certainly a defensible choice, but locking an API or application into a specific protocol with proprietary libraries is risky, and should be weighed against the benefits delivered to see if it’s appropriate for the given use case.
What do you think about QUIC and HTTP/3? Is this the protocol of the future, or is it simply an architecture improvement rather than a new standard? Let us know what you think below.