Using gRPC to Connect a Microservices Ecosystem

gRPC is a powerful framework, supporting a wide range of implementation styles and capabilities. While we’ve previously explored gRPC from a top-down view, growing interest in the technology makes concrete, contextual examples increasingly valuable. This piece will discuss some gRPC use cases from the official documentation. We’ll look at specific use cases and theoretical implementations, and discuss a larger microservices context that shows the real power of gRPC.

gRPC: A Brief Review

While we’ll assume that any reader of this piece has a passing familiarity with gRPC, it bears some repetition to ensure understanding. gRPC is based upon the concept of a Remote Procedure Call (RPC), or the action of calling a method on a server application as if it were an object local to the requester. In essence, you are requesting a non-local resource as if it were a local resource. To do this, a few things are needed to unify the call and the resource.

Firstly, some sort of Interface Definition Language (IDL) is needed, as well as an agreed-upon format for the message interchange. gRPC utilizes Protocol Buffers to this end. As gRPC originated at Google, it makes sense that it would use a standardized Google system in the form of Protocol Buffers. In essence, Protocol Buffers are a mechanism for serializing structured data in a way that lets that data move from language to language (and service to service). To quote Google’s documentation:

“Protocol buffers are Google’s language-neutral, platform-neutral, extensible mechanism for serializing structured data – think XML, but smaller, faster, and simpler. You define how you want your data to be structured once, then you can use special generated source code to easily write and read your structured data to and from a variety of data streams and using a variety of languages.”

Though gRPC works with Protocol Buffers by default, you can make it work with other data formats, such as JSON. gRPC’s broad support of data formats and languages means that it is interoperable across languages such as Go, Python, Ruby, and more.

Protocol Buffers Expanded

In this article, we’ll assume a gRPC instance is using the default Protocol Buffers. Protocol Buffers are based around the idea of defined structure. Understanding how data will be interpreted and shared allows developers to have this data operate interchangeably regardless of locale. To facilitate this, Protocol Buffers utilize a “proto file”. These .proto files define messages, structured pieces of data in which each field is a name-value pair. For example, the following protocol buffer defines an Article message containing a set of name-value pairs.

message Article {
 string name = 1;
 int32 id = 2;
}

Developers can then use this .proto data to generate data access classes through the included protocol buffer compiler (known as protoc). This creates accessors for each named field, allowing you to access, serialize, and parse the data structure and its contents. Taking it a step further, entire gRPC services can be specified in a .proto file, with RPC method parameters and return types defined as messages. Because the code is generated by the included compiler, the output (gRPC client code, server code, and protocol buffer code) can essentially form an entire remote-to-local service, as long as the methods are correctly mapped and named.
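As a quick illustration, if the Article message above were compiled into a Go package (here assumed to be named pb; the import path is hypothetical), the generated code exposes typed getters and works with the standard protobuf serialization helpers. This is a minimal sketch, not part of the official example:

package main

import (
  "fmt"
  "log"

  "google.golang.org/protobuf/proto"

  pb "example.com/articles/pb" // hypothetical package generated from the Article .proto
)

func main() {
  // Populate the generated struct using its exported fields.
  article := &pb.Article{Name: "Using gRPC", Id: 42}

  // Generated getters provide nil-safe access to each named field.
  fmt.Println(article.GetName(), article.GetId())

  // proto.Marshal serializes the message into the compact binary wire format.
  data, err := proto.Marshal(article)
  if err != nil {
    log.Fatalf("failed to marshal: %v", err)
  }
  fmt.Printf("serialized to %d bytes\n", len(data))
}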

A Primer in Go

As gRPC is a Google-affiliated project, it seems appropriate to dive in from the perspective of Go. Go (often referred to as “golang”) began as a project of several Google engineers and quickly grew into a user-friendly, extensible option for API development. Throughout this piece, we will refer to the official gRPC Go documentation.

Define the Service

Before we can do anything, we must first define the service. gRPC operates upon defined services in .proto files, so our first step will be to create said file. Creating our first definition is as simple as naming our service:

service RouteGuide {
}

Now that we have named our service, we can begin to set the specific RPC methods that we want to provide. gRPC supports four basic kinds of service method, each appropriate for particular use cases:

  • Simple RPC: As its name states, this is the simplest RPC form. A Simple RPC is a basic function call: the client makes a request, waits for that request to generate a response, and then ingests the response. This type is best for singular requests that result in simple outputs. Think of this as requesting a single library book and receiving that book.
  • Server-Side Streaming RPC: In this form, the client sends a request to the server and receives a stream of sequenced messages, which it reads until the end of the stream. This RPC type is declared by prepending “stream” to the response type. This is a more complex layer of interaction. To carry our library metaphor forward, this is as if you requested the entire Edgar Allan Poe collection, and the librarian returned with a box of all the books within that body of work.
  • Client-Side Streaming RPC: This form is the inverse of the Server-Side Streaming RPC. The client sends a sequence of messages, which the server reads in order before returning its response to the client. This is the inverse of our previous analogy: you list a series of books you like, and the librarian responds with what collection they belong to.
  • Bidirectional Streaming RPC: This is a combination of the former two types, in that both the client and server send a sequence of messages using a read-write system. Either side can read and write in any order it wants, as the two message streams are independent of one another. This is the most complex type, and as such, we’ll avoid the librarian metaphor; suffice it to say, this would be closer to a complex conversation than a simple request for a collection.

With these four types defined, let’s create the .proto file so far:

service RouteGuide {
  // Simple RPC – returns the feature at a given point
  rpc GetFeature(Point) returns (Feature) {}

  // Server-side streaming RPC – lists features within a region defined by a Rectangle
  rpc ListFeatures(Rectangle) returns (stream Feature) {}

  // Client-side streaming RPC – accepts a stream of Points on a route; once the route has finished, a RouteSummary response is returned
  rpc RecordRoute(stream Point) returns (RouteSummary) {}

  // Bidirectional streaming RPC – accepts a stream of RouteNotes sent while traversing a route, and streams RouteNotes back
  rpc RouteChat(stream RouteNote) returns (stream RouteNote) {}
}

In addition to the service methods themselves, we need to define the request and response types they use. Per the documentation, the service being created is geographic in nature. Accordingly, a Point is defined as a message with two fields of the int32 type:

message Point {
 int32 latitude = 1;
 int32 longitude = 2;
}

With all of this defined, we can finally issue a command to create the client and server code. The aforementioned protoc compiler will use the gRPC Go plugin to generate both the protocol buffer code and the interfaces for the client (referred to in the documentation as the “stub”) and server.
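For reference, the server-side interface generated for this service looks roughly like the following. This is a simplified sketch shown without package qualifiers; the exact output depends on the protoc-gen-go-grpc version in use:

// Simplified sketch of the generated server interface for RouteGuide.
type RouteGuideServer interface {
  // Simple RPC
  GetFeature(context.Context, *Point) (*Feature, error)
  // Server-side streaming RPC
  ListFeatures(*Rectangle, RouteGuide_ListFeaturesServer) error
  // Client-side streaming RPC
  RecordRoute(RouteGuide_RecordRouteServer) error
  // Bidirectional streaming RPC
  RouteChat(RouteGuide_RouteChatServer) error
}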

Creating the Server

Now that we have defined our service, we need to implement the interface. While the documentation is quite in-depth as to each service’s nature, we will only briefly summarize it here. We recommend reading deeper into the tutorial, as there are some interesting points and caveats that are outside of the scope of this piece.

There are two core aspects of this process that the documentation surfaces only briefly, but both are worth discussing at a little more length.

The first of these aspects is the actual service interface. This is the functional part of the service on the server side, and it forms the working logic for each request. While we named and defined the service earlier, we still need to specify how those interactions are handled and what they map to on the remote system. This is the core function of the interface: it bridges the definition and the work of carrying out the request.

The second of these aspects is the actual gRPC server. This server is essentially a listening service that waits for client requests and forwards them to the appropriate handlers. It is best thought of as the facilitator sitting in between the client and the service interface, as it is the system that actually routes requests to the right place.

Within the example project in the documentation, RouteGuide has a server (called, appropriately enough, routeGuideServer) which implements the generated interface like so:

type routeGuideServer struct {
        ...
}
...

func (s *routeGuideServer) GetFeature(ctx context.Context, point *pb.Point) (*pb.Feature, error) {
        ...
}
...

func (s *routeGuideServer) ListFeatures(rect *pb.Rectangle, stream pb.RouteGuide_ListFeaturesServer) error {
        ...
}
...

func (s *routeGuideServer) RecordRoute(stream pb.RouteGuide_RecordRouteServer) error {
        ...
}
...

func (s *routeGuideServer) RouteChat(stream pb.RouteGuide_RouteChatServer) error {
        ...
}
...

Here, we see all four of the response types represented. Let’s dive into one of the included RPC functions and dig down into the functionality and purpose.

func (s *routeGuideServer) RouteChat(stream pb.RouteGuide_RouteChatServer) error {
  for {
    in, err := stream.Recv()
    if err == io.EOF {
      return nil
    }
    if err != nil {
      return err
    }
    key := serialize(in.Location)
                ... // look for notes to be sent to client
    for _, note := range s.routeNotes[key] {
      if err := stream.Send(note); err != nil {
        return err
      }
    }
  }
}

This is our bidirectional streaming RPC, and as you can see, it has a few interesting methods. First off, note the error handling in the code as represented by err:

   if err == io.EOF {

In this case, io.EOF signals the end of the incoming stream, so the method returns nil and closes out the read loop. Any other error is returned directly, covering cases such as a note not being available.

Before we move on, we should discuss the Send method and some of its variations. Send is largely self-explanatory: it writes a single message to the stream.

A related method, although not present in this example, is CloseSend. Rather than sending content, CloseSend closes the sending side of the stream, signaling that the caller has finished writing. Another variation is CloseAndRecv, used on client-side streaming RPCs: it closes the send direction and then waits for the server’s single response.

The choice of method hinges largely on what kind of content is being sent and what response is expected. Because the example we’re examining is a bidirectional streaming RPC, the server uses Send rather than SendAndClose, as there is no fixed number of messages or interactions. In a client-side streaming RPC, by contrast, the server would use SendAndClose to return its single response and end the call.
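To make the distinction concrete, the documentation’s client-side code for the client-streaming RecordRoute method sends a sequence of Points and then calls CloseAndRecv to close the sending side and wait for the single RouteSummary response. Here, points is an assumed slice of *pb.Point:

// Client-side streaming: send each point, then close and wait for the summary.
stream, err := client.RecordRoute(context.Background())
if err != nil {
  log.Fatalf("%v.RecordRoute(_) = _, %v", client, err)
}
for _, point := range points {
  if err := stream.Send(point); err != nil {
    log.Fatalf("%v.Send(%v) = %v", stream, point, err)
  }
}
reply, err := stream.CloseAndRecv()
if err != nil {
  log.Fatalf("%v.CloseAndRecv() got error %v", stream, err)
}
log.Printf("Route summary: %v", reply)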

The final piece of example code in the documentation actually starts the service. It does this by stating the port and protocol being used (in this case, via the net.Listen method with tcp), creating the gRPC server using the NewServer method (grpc.NewServer), registering our service implementation, and finally calling grpcServer.Serve(lis), which performs a “blocking wait” and keeps listening until we shut the server down.
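Condensed from the documentation, that startup code looks roughly like this (the port flag, empty options slice, and newServer constructor are carried over from the tutorial):

lis, err := net.Listen("tcp", fmt.Sprintf("localhost:%d", *port))
if err != nil {
  log.Fatalf("failed to listen: %v", err)
}
var opts []grpc.ServerOption
grpcServer := grpc.NewServer(opts...)
// Register our routeGuideServer implementation with the gRPC server.
pb.RegisterRouteGuideServer(grpcServer, newServer())
// Serve blocks, handling requests until the server is stopped.
grpcServer.Serve(lis)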

Now that we have a server, let’s look at how we create a client.

Client-Side Integrations

In the gRPC paradigm of development, a non-server resource interacts with a remote server through the stub. The stub is sometimes referred to as a “client”, but in either case it simply exposes methods that call remote resources as if they were local. The first step in creating this stub is to open the gRPC channel it will use to communicate with the remote server. This is done through the grpc.Dial function, as follows (per the documentation):

var opts []grpc.DialOption
...
conn, err := grpc.Dial(*serverAddr, opts...)
if err != nil {
  ...
}
defer conn.Close()
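With the channel established, the stub itself is created with a single call to the client constructor generated from our .proto file:

// NewRouteGuideClient wraps the connection in the generated client stub.
client := pb.NewRouteGuideClient(conn)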

With the stub in hand, we’ll want to deviate from the documentation for a moment to discuss how the client can be leveraged within a microservices architecture. So far, we’ve covered some fairly standard methods as expressed in the documentation, but we haven’t discussed the internal machinations of microservices as they relate to RPCs. Let’s take our server as we’ve currently designed it and extrapolate some interesting interactions we could support.

One possible implementation would be to have our gRPC system function as a sort of shim layer between multiple internal APIs and an external, resource-driven API. In our example, we’ve been creating an API that delivers certain content based upon geographic location data derived from Points. In reality, the data served could be of almost any type, as long as the service definition describes it.

Let’s say we wanted to expand the API we’ve discussed, but this time we wanted our notes to return warning data. In theory, this data could be served by using a formatting helper such as fmt.Sprintf to append warning text based upon geographic proximity to the defined Points. If this connection is made over a bidirectional streaming RPC, the messaging could be ongoing, warning hikers of upcoming obstacles or interesting features.
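A minimal sketch of that idea, dropped into the RouteChat loop shown earlier, might look like the following. The nearHazard helper and the warning text are purely hypothetical additions layered on the RouteNote message from the example:

// Hypothetical: decorate an incoming RouteNote with a warning before echoing
// it back on the bidirectional stream. nearHazard is an assumed helper.
note := in
if nearHazard(in.Location) {
  note = &pb.RouteNote{
    Location: in.Location,
    Message:  fmt.Sprintf("%s (warning: obstacle reported ahead)", in.Message),
  }
}
if err := stream.Send(note); err != nil {
  return err
}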

What about users who aren’t actively traveling their intended route? This is where our microservice approach could especially come in handy. If we developed a secondary microservice that stored planned routes locally, we could store these points as a discrete plannedRoute and simulate the route traversal from the initial point onward. The documentation provides a server-side streaming RPC that would fit this function. We’ve appended it below:

rect := &pb.Rectangle{ ... }  // initialize a pb.Rectangle
stream, err := client.ListFeatures(context.Background(), rect)
if err != nil {
  ...
}
for {
    feature, err := stream.Recv()
    if err == io.EOF {
        break
    }
    if err != nil {
        log.Fatalf("%v.ListFeatures(_) = _, %v", client, err)
    }
    log.Println(feature)
}

It’s interesting to note here that, while a client-side streaming RPC may seem more sensible, a server-side streaming RPC serves us better. This is because we are collecting a stream of geographic features related to a single possible point. While we could use the bidirectional model to facilitate this, a server-side streaming RPC lets us simulate points along the route, as well as change that route and see the possible consequences of a single change in course.
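As a rough sketch of that simulation, a hypothetical plannedRoute service could replay each stored Point and request the features around it. Here plannedRoute (a slice of stored *pb.Point) and rectAround (which builds a small pb.Rectangle centered on a point) are assumptions for illustration; ListFeatures is the same RPC used above:

// Hypothetical: replay a stored route point by point, streaming back the
// features near each one.
for _, point := range plannedRoute {
  stream, err := client.ListFeatures(context.Background(), rectAround(point))
  if err != nil {
    log.Fatalf("%v.ListFeatures(_) = _, %v", client, err)
  }
  for {
    feature, err := stream.Recv()
    if err == io.EOF {
      break
    }
    if err != nil {
      log.Fatalf("receive error: %v", err)
    }
    log.Printf("near (%d, %d): %s", point.Latitude, point.Longitude, feature.GetName())
  }
}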

Much of the user experience built on this output would actually be shaped by the microservices architecture rather than by gRPC itself. Here we find the real power and beauty of gRPC: it can drive an enormous amount of backend functionality without requiring the resources to be local. Used correctly, this reduces payload size and increases efficiency. In our theoretical use case, users such as hikers would prefer a lightweight payload on a waterproof, low-power device that can deliver relatively high-quality interactions over a low-quality network.

Conclusion

While we covered much of the initial gRPC documentation in this piece, there is a vast world of potential implementations and possibilities. We suggest first digging into language-specific tutorials, and then into broader, language-agnostic tutorials, as gRPC’s flexibility and power are best understood in the context of multiple implementation paradigms.

What do you think of gRPC? Is it better to insist on local resources or remote? Let us know below.