Exploring The gRPC Framework for Building Microservices

Kristopher Sandoval
June 27, 2017

There are very few names as ubiquitous as Google. While Google is best known for its search engine and its suite of productivity applications, some of its most powerful work is in the functional aspects of the world wide web and the protocols that drive it. To this end, Google's gRPC is a powerful protocol for microservices, designed to be efficient, fast, and lean.

In this piece, we're going to discuss gRPC, Google's open-source RPC framework. We'll dive a bit into the history of RPC as a protocol and its historical use. We'll also highlight some benefits of adopting gRPC (and indeed, RPC as a whole), and the potential impact gRPC will have on API design. Finally, we'll answer the question that has been on many lips since gRPC was announced: what does this mean for REST?

RPC – A Definition and History

gRPC is a new framework broadly based on a not-so-new mechanism called RPC, or Remote Procedure Call. When we talk about RPC, what we're actually talking about is a methodology for executing a procedure, or subroutine. Subroutines are simply chunks of code that we write and execute to carry out a function used in a larger function or process; where this execution happens is key to the concept of RPC.

In traditional computing, a subroutine might be executed on a local resource. For simplicity, we can think of these subroutines like simple math problems. Suppose you were given this problem and were expected to solve it using pen and paper:

1 + 1 + (2 + 2)

By the laws of mathematics, you would have to calculate the parenthetical elements first before continuing with the rest of the problem. Subroutines work the same way: calculations and requests are done in the order requested, as part of a larger system of functions or requests.
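To make the analogy concrete, here is a minimal sketch (the method names are illustrative, not from any real API) of that arithmetic expressed as a local subroutine: the parenthetical piece is computed by its own procedure, and the larger routine consumes the result in order.

```ruby
# A subroutine: a small, named chunk of work that a larger routine calls.
def parenthetical
  2 + 2
end

# The "larger process" invokes the subroutine locally, in order,
# exactly as pen-and-paper arithmetic resolves the parentheses first.
def solve
  1 + 1 + parenthetical
end

puts solve  # => 6
```

In RPC, the question becomes simply where `parenthetical` runs: on the local machine, or on a faster remote one.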
Now imagine that instead of solving a simple parenthetical argument, you have a much more complex problem with multiple subroutines. In this case, we might instead send an argument to a mathematician with a better calculator. In this situation we would need an agreed-upon means of framing the problem and an efficient method of transmitting the results. This is the basis of RPC: it is a protocol that allows one to relay problems in the same format regardless of whether they are being calculated locally or using better, faster, remote resources.

The earliest computing models utilizing Remote Procedure Calls to frame network operations date back to early ARPANET documents in the 1970s, with practical applications of the technology appearing in the 1980s. The term itself was coined by Bruce Jay Nelson in 1981, who described the system as follows:

"Remote procedure call is the synchronous language-level transfer of control between programs in disjoint address spaces whose primary communication medium is a narrow channel."

An RPC Workflow

RPC is easy to understand once you see how it's laid out. A basic RPC process can be broadly separated into six stages: origination, marshaling, broadcast, reception, processing, and response.

Origination: The client calls what is known as a "Client Stub." This stub is a local procedure or subroutine that is called as a normal, local process. Once this process has been called, it moves to the second stage, known as "marshaling."

Marshaling: The Client Stub, having been passed a set of parameters, packs the call and parameters and issues a system call to send the message.

Broadcast: With the packaged set of call and parameters, the local operating system sends the message from the client to the server.

Reception: The package of the call and parameters is received by the server from the client.
These packets of data are assembled and passed to what is called the "Server Stub," which functions just as the Client Stub does.

Processing: Now that the Server Stub has received the data packet, it is unpacked in a process called "unmarshaling." The call and its parameters are extracted, and the call is then handled as a local process or subroutine.

Response: The server calls the local, unpacked process or subroutine. The results are then sent back in the opposite fashion, moving from Response to Processing and so on, until they are delivered to the client at the Origination stage.

Why RPC?

With all of this said, what makes RPC such a promising concept to bring back from the depths of proto-internet protocols? While much has been made of REST and similar frameworks, that work has taken place in a space where computational power on both the client and the server has been rising dramatically. Since the advent of RPC, both sides of the equation have only grown more powerful, and so power-hungry, large-scale solutions have been acceptable tradeoffs for the capability they delivered.

There's a huge monkey wrench in the mechanism, however: the IoT, or Internet of Things. With so many small sensors and microdevices utilizing microservices to compute massive amounts of data, the need for small, power-efficient, yet still powerful protocols has given rise to the concept of RPC once again.

The rise of the IoT actually mimics a lot of what created RPC in the first place. RPC was initially conceived at a time when a mainframe provided more computing power than all the computers that tied into it combined. The same can be said of the IoT, where a device has just enough processing power to make a small, local call. This is where RPC shines: truly microscopic services work well with RPC, and that is what makes the concept poised for a strong comeback.
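The six stages above can be sketched end-to-end in plain Ruby. This is a toy illustration, not the gRPC wire format: it marshals the call with the standard library's Marshal module, and the "broadcast" happens through an in-memory string rather than a network socket.

```ruby
# Server side: the real procedure, plus a Server Stub that unmarshals,
# dispatches, and marshals the reply.
def add(a, b)
  a + b
end

def server_stub(packet)
  name, args = Marshal.load(packet)    # Processing: unmarshal call + params
  result = send(name, *args)           # invoke as an ordinary local subroutine
  Marshal.dump(result)                 # Response: marshal the result back
end

# Client side: the Client Stub looks like a normal local call to its caller.
def client_stub(name, *args)
  packet = Marshal.dump([name, args])  # Origination + Marshaling
  reply  = server_stub(packet)         # Broadcast + Reception (in-memory here)
  Marshal.load(reply)                  # unmarshal the response for the caller
end

puts client_stub(:add, 2, 3)  # => 5
```

In a real RPC system the two stubs live in different address spaces, and the packed bytes cross a network; the shape of the exchange is the same.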
Add this to the strong ability to internally connect cloud resources hosted by Google itself, and you've got a system of distributed computing that is greater than the sum of its parts, delivering effective, powerful computational weight to even the smallest of devices.

What Makes gRPC Awesome

Protobufs and Compiling

One of the best selling points of gRPC is the fact that it's built upon Protocol Buffers, also referred to as protobufs. Protobufs are Google's "language-neutral, platform-neutral, extensible mechanism for serializing structured data" (https://github.com/google/protobuf); the method is specifically designed to be a lightweight way of enabling communication and data storage in a predictable, parseable form.

Because protobufs are backed by one of the largest tech companies in the world, there are many resources to make adoption seamless. Google provides a set of code generators to create stubs for a wide variety of languages, and with third-party implementations, the number of supported languages grows dramatically. As of "proto3 2.0," the current beta version of the code generator, the following languages are either officially supported or supported through third-party applications:

C++
Java
Python
JavaNano
Go
Ruby
Objective-C
C#
JavaScript
Perl
PHP
Scala
Julia

Idiomatic and Open Source

gRPC was specifically designed from the ground up to automatically generate idiomatic client and server stubs. Idiomatic is just a fancy way of saying "natural and natively understood." Being understood in the native language in which it functions is supremely important, and can lead not only to dramatically increased adoption, but to better retention and understanding.

There is of course the fact that gRPC is also open source; having a resource that is open to inspection, modification, and further development often results in more secure, stable, and useful solutions.
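For a flavor of what those generators consume, here is a minimal protobuf service definition in the style of the canonical gRPC "Greeter" example; the code generators listed above turn a file like this into client and server stubs for each target language:

```protobuf
syntax = "proto3";

package helloworld;

// A single request/reply operation exposed over gRPC.
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply);
}

message HelloRequest {
  string name = 1;   // field numbers, not names, identify fields on the wire
}

message HelloReply {
  string message = 1;
}
```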
Since it's open source, as the protocol ages, third-party modifications will likely fill any implementation gaps that exist.

Efficient and Scalable

RPC is, by design, meant to be efficient. The structure of the protocol itself is lean, with the heavy lifting occurring at the marshaling and unmarshaling stages, requiring very little in the way of additional processing. Because of this, gRPC is inherently efficient, made only better by the streamlined transport Google has put in place by building upon HTTP/2.

HTTP/2, a modern revision of HTTP defined in RFC 7540, enables highly effective, efficient use of network resources. Its binary framing allows for decreased latency on the wire, header compression, and the multiplexing of many requests over a single connection. By building gRPC within the framing of HTTP/2, you get the benefit of RPC magnified by the gains of HTTP/2, meaning smaller payloads than ever before with equal functionality. Additionally, HTTP/2 enables bi-directional streaming in its transport specification, something that gRPC takes advantage of to minimize wasted data and decrease overall latency.

What you end up with is a lean platform using a lean transport system to deliver lean bits of code: an overall decrease in latency, size, and demand that is noticeable, and that gives smaller, less adept hardware the same effective processing power as larger, more powerful contemporaries.

Baked-In Authentication Support and Solutions

gRPC was designed from the ground up not only to have an effective built-in authentication system, but to support a wide array of authentication solutions. First and foremost, there's the mechanism baked into the protocol itself: SSL/TLS is supported with and without Google's token-based systems for authentication.
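To illustrate the kind of savings a compact binary encoding offers over a textual one, here is a small, stdlib-only Ruby comparison. It uses Array#pack purely as a stand-in for a binary wire format such as protobuf's; the payload and field names are invented for the example.

```ruby
require 'json'

reading = { sensor_id: 1_234_567, value: 42 }

# Textual encoding: the field names travel with every single message.
text_bytes = JSON.generate(reading).bytesize

# Binary encoding: two little-endian 32-bit integers, no field names at all.
binary_bytes = [reading[:sensor_id], reading[:value]].pack('l<l<').bytesize

puts text_bytes    # JSON payload size in bytes
puts binary_bytes  # => 8
```

For a constrained IoT device sending such readings continuously, a several-fold reduction per message adds up quickly, which is precisely the niche the article describes.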
There is in fact an Authentication API provided with gRPC that uses a Credentials object to grant, revoke, and control both channel credentials and call credentials. In addition to the obvious support structures for the Google token solution, this API also provides the MetadataCredentialsPlugin class and the MetadataCredentialsFromPlugin function to tie into external authentication solutions.

This authentication is relatively lightweight as well. The following is an official example of authentication in Ruby using Google authentication:

```ruby
require 'googleauth'  # from https://www.rubydoc.info/gems/googleauth/0.1.0
...
ssl_creds = GRPC::Core::ChannelCredentials.new(load_certs)  # load_certs typically loads a CA roots file
authentication = Google::Auth.get_application_default()
call_creds = GRPC::Core::CallCredentials.new(authentication.updater_proc)
combined_creds = ssl_creds.compose(call_creds)
stub = Helloworld::Greeter::Stub.new('greeter.googleapis.com', combined_creds)
```

Use Cases

Now that we understand gRPC, what are some specific use cases that would benefit from its implementation? The answer lies in the benefits noted above: lightweight, efficient, and scalable. Any system that demands these attributes would benefit dramatically from RPC, assuming the system in question routinely uses external resources in its functioning. Systems that require low latency and efficient, fast scaling, such as microservice-driven IoT devices, can use gRPC to great effect, and would likely see massive short-term gains and long-term benefits. Additionally, mobile clients, regardless of their local device power, can use gRPC to efficiently tie into external cloud systems, servers, and processes, leveraging these mechanics for co-processing to match local processing ability.
Having the ability to offload data crunching while using local resources to handle extreme low-latency local needs can do a lot to multiply the apparent power of a device, and could go a long way towards making mobile devices lighter and more power-efficient, yet more effective in their common functions.

REST vs. RPC — A Settled Argument?

So what does all of this mean for API design? And more specifically, what does this mean for REST? The RPC-versus-REST argument is age-old for many developers, but much of it misses the point: each exists in a specific space for a specific purpose. The difference between REST and RPC is really quite simple: REST exposes data as resources to be acted upon, whereas RPC exposes operations as methods for acting upon data.

While many early applications used REST and RPC in similar ways, their basic functionality is starkly contrasting, and how we build with each should accordingly be separate. The problem is that we've conflated their functionality. We can get a lot of RPC-like functionality using REST, and we can likewise get a lot of REST-like functionality using RPC. The better way, then, is to treat the choice between REST and RPC as a matter of scope and approach.

While REST is a standard methodology for dealing with microservices in the IoT, as these connected devices become smaller and require more functionality from the same resources, protocols like gRPC will steadily become a better option for many developers. This is really not an argument between REST and RPC; it is an argument between two solutions without context. The context of the problem will drive the solution. Using REST with HTTP, for instance, has the benefit of predictability that comes from its use of HTTP verbs. RPC does not have this benefit, but conversely, solutions like gRPC make better use of HTTP/2 and the efficiency it offers, beating REST in terms of raw speed and efficient communication.
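The resources-versus-operations distinction can be made concrete with two hypothetical request shapes (neither the endpoint nor the procedure name comes from any real API):

```ruby
# REST: a resource (a noun) is addressed; the HTTP verb supplies the action.
rest_request = { verb: "GET", path: "/users/42" }

# RPC: an operation (a verb) is invoked; the data rides along as parameters.
rpc_request = { procedure: "get_user", params: { id: 42 } }

puts rest_request[:path]      # the noun carries the meaning
puts rpc_request[:procedure]  # the verb carries the meaning
```

Both requests can fetch the same user; what differs is whether the interface is organized around the data or around the operation.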
The takeaway is simply this: your context will determine which is better, and as with any application in the API space, this will come down to your particular use case.

In Summary

gRPC represents a welcome evolution of the classic RPC structure, one that takes advantage of modern protocols for a new generation of highly efficient, lean functionality. As we move towards smaller devices with higher processing demands, solutions like gRPC, and the forks it generates, will likely become serious competitors to the REST-centric microservice framework paradigm that has, until this point, gone relatively unchallenged.