Building APIs on the JVM Using Kotlin and Spark – Part 2


If you are building APIs or microservices on the Java Virtual Machine (JVM), you owe it to yourself to check out the micro-framework Spark. This tiny toolkit is designed after a similar Ruby framework called Sinatra, providing a library that makes it easy to create APIs and web sites. Much of this simplicity comes from its use of new language features introduced in Java 8, like lambdas, which give programmers an elegant way to define their APIs.

In this second post in our series on Kotlin, a new programming language from Jetbrains, we will show you how you can make the programming model even sweeter using this new JVM language. We’ll use it to build some additional components you will need to create truly great APIs using Spark. Building off the previous Spark intro we published, the new components we’ll create in this part of our series will give you a useful starting point to leverage Spark in your APIs, while demonstrating the potential and power of Kotlin.

Watch Travis Spencer present on this topic at the Stockholm Java Meetup

Recapped Intro to Spark

In our introduction to Spark, we explain that Spark is a toolkit that you link into your API to define and dispatch routes to functions that handle requests made to your API’s endpoints (i.e., a multiplexer or router). It is designed to make the definition of these routes quick and easy. Because Spark is written in Java and Java is its target language, it provides a simple means to do this using Java 8’s lambdas. It also does not rely on annotations or XML configuration files like some comparable frameworks do, making it easier to get going.

A typical Hello World example (which we also included in our Spark intro) is this:

import static spark.Spark.*;

public class HelloWorld {
    public static void main(String[] args) {
        get("/hello", (request, response) -> "Hello World");
    }
}

Note that this snippet is Java and not Kotlin; we’ll get to our Kotlin version in a bit.

When executed, Spark will start a Web server that serves up our API. Compile and run this, then surf to http://localhost:4567/hello. When hit, Spark will call the lambda mapped with the spark.Spark.get method, and you will get your dial tone: the Hello World greeting.
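Before we get to our fluent Kotlin version, it is worth noting that Spark’s Java API can be called directly from Kotlin as well. The following is a minimal sketch of the same Hello World, assuming Spark 2.x and relying on Kotlin’s SAM conversion to turn the trailing lambda into Spark’s Route interface:

import spark.Spark

fun main(args: Array<String>)
{
    // The trailing lambda is converted into Spark's Route functional interface
    Spark.get("/hello") { request, response -> "Hello World" }
}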

Check out our Spark intro for more about Spark’s history, routing capabilities, including wildcards in routes, processing request/responses, and templatizing output.

Building a Robust API with Spark

From our intro (which includes a number of links to additional docs and samples), you can see that Spark is a useful framework on its own. Purpose-built to perform routing, it can start a Web server and plug into a handful of templating engines. Those are the only bells and whistles you’ll get with it, though. When you begin using it to build a production-caliber API, you will need more.

Before going live with your API, you will probably write a lot of code. To make this evolvable, testable, and supportable, you will also need:

  • Controllers: Spark gives you a way to model your data and views to present it, but there’s no concept of controllers. You will need these if you’re going to follow the Model View Controller (MVC) pattern (which you should).
  • Dependency Injection (DI): To make your API more modular and robust, you will need a way to invert the resolution of dependencies, allowing them to be easily swapped in tests. Spark doesn’t provide out-of-the-box integration with any particular Dependency Injection (DI) framework.
  • Localization: Spark makes it easy to define views using any number of templating engines, but resolving message IDs is beyond what the framework provides. You will need this if you are targeting a global market with your API.
  • Server Independence: Spark starts Jetty by default. If you want to use another server or different version of Jetty, you will have to do some additional work. This is trivial if you are distributing your app as a WAR, but you will have to write some code if this isn’t the case.

If you take Spark, its support for various template languages, and add these things, you have a very complete toolset for building APIs. If you wrap all this up in a fluent API using Kotlin and utilize this language to build your controllers, you will be able to rapidly develop your service.

We won’t show you how to extend Spark with all of the above, but we will delve into the first two. Send a pull request with your ideas on how to support other features.

It should be noted that there are other frameworks for building APIs that already include these (e.g., Spring Boot). With these, however, the third-party libraries used (if any) and the way these features are provided are already decided for you. If you would like to use a different DI framework, for example, it may not be possible. If it is, you may incur bloat, since the provided DI framework is not needed in your case. With Spark, these decisions are yours to make (for better or worse).

Templating in Kotlin

We explain Spark’s templating support in our intro. We talk about how the toolkit comes with support for a half-dozen template engines that you can use to render responses. In this post, we want to show you how this can be done with the Kotlin-based sample we’ve been building. We use Spark’s same object model, but wrap it in our own fluent API that makes the syntax a lot cleaner. With this sugary version, rendering a response with a template is nearly the same as without. You can see this in the service’s entry point where we fluently define all the routes:

route(
    path("/login", to = LoginController::class, renderWith = "login.vm"),
    path("/authorize", to = AuthorizeController::class, renderWith = "authorize.vm"),
    path("/token", to = TokenController::class))

Here we are passing the name of the template in the renderWith named argument. You can read more about the path function in the first part of this series, but the important part to note here is that, in contrast to our simple Java-based templating sample, the data model is not mixed up in the definition of the routes — that is left to the controllers. At this point, we are only defining which template should be used with which route.
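If you have not read part 1 yet, the gist of the DSL is that path simply bundles its arguments into a small data class that the Application can iterate over later. Roughly, and only as a sketch (the exact definitions live in part 1 and in the sample project), it can look like this:

import kotlin.reflect.KClass

// A rough approximation of the helpers from part 1; names and signatures may differ slightly
data class RouteData(
    val path: String,
    val controllerClass: Class<out Controllable>,
    val template: String? = null)

fun path(path: String, to: KClass<out Controllable>, renderWith: String? = null): RouteData =
    RouteData(path, to.java, renderWith)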

You can also define this type of syntactic sugar in Java. The Kotlin sample was originally written in that language and converted using the Jetbrains-provided tools. Before it was Kotlinized, the Spark API was wrapped up in a more convenient API that fit our usage model better. You can see the old Java version in the GitHub history, but suffice it to say that the Kotlin version is a lot cleaner.

Adding DI Capabilities to a Spark-based API

To implement DI in your API, you can use various frameworks, including Guice, Pico, and the Kotlin-native Injekt. Whichever you decide on, you will need to integrate it with Spark. In this subsection, we will walk you through this using Pico.

It is beyond the scope of this article to introduce DI. If you are unaware of how this pattern works, refer to Jacob Jenkov’s introductory article on DI.

The Pico integration is handled in our Application class which inherits from the SparkApplication class. It uses another of our classes called Router; the two are what ties Spark and Pico together. The important parts of the Application class are shown in the following listing:

public class Application(
        var composer: Composable = Noncomposer(),
        var appContainer: MutablePicoContainer = DefaultPicoContainer(),
        var routes: () -> List<RouteData>) : SparkApplication
{
    private var router = Router()

    init
    {
        composer.composeApplication(appContainer)
    }    

    fun host()
    {
    	// Explained below...
    }

    // ...
}

The Application class’ constructor takes three arguments:

  1. An instance of type Composable which defaults to an object that won’t do any composition of dependencies (e.g., in simple cases where DI isn’t used by the API)
  2. A MutablePicoContainer that will house the API’s dependencies
  3. The lambda function that will produce the routes (as described in part 1 of this series).
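To make these three arguments concrete, here is one possible way the route function from the entry point could construct and start an Application. This is a sketch only; the actual wiring is in the sample project:

import org.picocontainer.DefaultPicoContainer

fun route(vararg paths: RouteData)
{
    val app = Application(
        composer = ContainerComposer(),         // 1. the Composable that registers dependencies
        appContainer = DefaultPicoContainer(),  // 2. the container housing application-wide dependencies
        routes = { paths.toList() })            // 3. the lambda that produces the routes

    app.host()
}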

To see more clearly how this class wires up Pico and Spark, we need to look at how we compose dependencies. Then we will talk about the Router in detail.

Composing Dependencies

With DI, an object does not instantiate its dependencies directly. Instead, object creation is inverted and dependent objects are provided — not created. In order to do this, a DI container must be populated with objects and object resolvers. In our sample API boilerplate, this is done through subtypes of the Composable interface. A composer registers the API’s dependencies, relating an interface to a concrete implementation that should be used by all objects that depend on that interface. Objects are resolved at various levels or scopes, resulting in a hierarchy of object resolution. We can also create factories and add these to the containers; these will produce objects that others depend on, allowing us to do complicated object creation outside of the composers.
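To make this concrete, here is a small, self-contained sketch of Pico’s registration and hierarchical resolution. The Clock and SystemClock types are made up purely for illustration:

import org.picocontainer.DefaultPicoContainer

interface Clock { fun now(): Long }
class SystemClock : Clock { override fun now() = System.currentTimeMillis() }

fun picoDemo()
{
    val appContainer = DefaultPicoContainer()
    appContainer.addComponent(Clock::class.java, SystemClock::class.java) // interface -> implementation

    // A child container: lookups fall back to the parent when nothing is registered locally
    val requestContainer = DefaultPicoContainer(appContainer)

    val clock = requestContainer.getComponent(Clock::class.java) // resolved via the parent
    println(clock.now())
}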

As the Application object comes to life and the init method is called, the first thing it will do is compose the application’s dependencies. It does this using the given Composable. This interface looks like this:

interface Composable {

    fun composeApplication(appContainer: MutablePicoContainer) { }

    fun composeRequest(container: MutablePicoContainer) { }
}

As you can see, composers do two things:

  1. Compose the API’s application-wide dependencies
  2. Compose dependencies that should have request-level scope (i.e. objects that only exist for the lifetime of an HTTP request/response)

The former method, composeApplication, is what is called by the Application class’ init method. This method is called once, as the API server is started. The latter method, composeRequest, is called per request by the Router class (described below).

In the composer, you can register dependencies in any way Pico supports. It offers a number of very useful mechanisms that make it a good tool to consider using in your API implementation. While we will not dive into Pico in this post, we will show you a simple implementation of a Composable subclass that is included in the sample project:

class ContainerComposer : Composable
{
    public override fun composeApplication(appContainer: MutablePicoContainer)
    {
        appContainer.addComponent(javaClass<AuthorizeController>())
        appContainer.addComponent(javaClass<LoginController>())
        appContainer.addComponent(javaClass<TokenController>())
    }

    public override fun composeRequest(container: MutablePicoContainer) { }
}

This particular composer is pretty dumb — yours will probably be much more complicated. The important things here are that:

  • The controllers are in the application container
  • All of their dependencies will be resolved from the application or request container when instances of them are fetched in the Router.

This will make it easy to create controllers because their dependencies will be given to them when they spring to life. (This will be made even easier using some reflection that automatically sets up routes, which we’ll explain below.)
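As a slightly more realistic illustration, a composer might also bind the interfaces a controller depends on and register request-scoped objects in composeRequest. The TokenIssuer, JwtTokenIssuer, and RequestContext types below are hypothetical, invented only for this sketch:

import org.picocontainer.MutablePicoContainer

// Hypothetical dependencies, defined here only so the example is self-contained
interface TokenIssuer { fun issue(subject: String): String }
class JwtTokenIssuer : TokenIssuer { override fun issue(subject: String) = "jwt-for-$subject" }
class RequestContext

class OAuthComposer : Composable
{
    override fun composeApplication(appContainer: MutablePicoContainer)
    {
        // Controllers live in the application container...
        appContainer.addComponent(TokenController::class.java)

        // ...together with the interface-to-implementation bindings they depend on
        appContainer.addComponent(TokenIssuer::class.java, JwtTokenIssuer::class.java)
    }

    override fun composeRequest(container: MutablePicoContainer)
    {
        // Objects that should only exist for the lifetime of a single HTTP request
        container.addComponent(RequestContext::class.java)
    }
}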

Resolving Dependencies as Requests are Routed

The Router class works with the Application class to glue all these frameworks together. As you add support for localization and more advanced things that are not covered in this post, you will find that your version becomes quite intricate. For the sake of this post we’ll keep our sample relatively simple.

Router inherits from SparkBase, so that it can gain access to its protected addRoute method. This low-level Spark API is what is called by the higher-level static methods, get, post, etc., which were discussed in our Spark intro and shown in the Hello World listing above. We don’t use those — instead we use our own fluent API that ends up invoking this lower-level Spark interface. Our Router exposes one public method, routeTo which you can see here:

class Router constructor() : SparkBase()
{
    public fun <T : Controllable> routeTo(
        path: String, container: PicoContainer, controllerClass: Class<T>,
        composer: Composable, template: String? = null)
    {
        // ...
    }
}

The routeTo method is called in the Application class for each route that is set up with our DSL. You can see this in the host method of Application (which was elided from the above listing of that class):

fun host()
{
    var routes = routes.invoke() // Invoke the lambda that produces all the routes

    for (routeData in routes)
    {
        val (path, controllerClass, template) = routeData

        router.routeTo(path, appContainer, controllerClass, composer, template)
    }
}

Refer to the previous installment of this series for information about the RouteData data class and how it’s being used to assign multiple values simultaneously.

When routeTo is called like this, it does two important things:

  1. Reflectively looks for the methods defined in the given controller to determine what HTTP methods should be routed; and
  2. Tests to see if a template has been assigned to the route, and, if so, calls different overloads of the SparkBase class to register the route appropriately.

You can see this more clearly in the source code, but the important part is shown below. Note how one of two private methods — addRoute or addTemplatizedRoute — is called for each controller method found using reflection.

if (template == null || template.isBlank()) {
    addRoute(methodName, path, container, controllerClass, composer)
}
else {
    addTemplatizedRoute(methodName, template, path, container, controllerClass, composer)
}

Note here that we used a smartcast to convert the nullable template variable to a string after first checking if it is null. This is one of Kotlin’s coolest features.
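If smart casts are new to you, here is the feature in isolation. After the null (or blank) check returns, the compiler narrows template from String? to String, so no explicit cast or !! operator is needed:

fun templateNameLength(template: String?): Int
{
    if (template == null || template.isBlank())
    {
        return 0
    }

    // The compiler has smart-cast template from String? to String here,
    // so its members can be used without ?. or !!
    return template.trim().length
}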

Regardless of whether or not a template should be used, both of these private methods create a Spark route implementation (a RouteImpl or a TemplateViewRouteImpl). To do this, a closure must be instantiated and passed to Spark’s addRoute method. In the case of a templatized route, the closure and route implementation are created like this:

val r = fun (request: Request, response: Response): ModelAndView
{
    var model = router(request, response, container, controllerClass, composer)

    return ModelAndView(model, template)
}

SparkBase.addRoute(httpMethod, TemplateViewRouteImpl.create(path, r, VelocityTemplateEngine()))

We do this similarly in the method that handles the non-templatized case (which can be seen in the source repository).

The part to note here is that the Router’s private router method is called in the closure, r. This function pointer, and thus router, gets called with every request. This is how we can integrate DI at the request level.

The router method starts by creating a new request-level container that has the application-wide container as its parent. This will cause dependencies to be resolved firstly from the request-level child container; only if they aren’t found there will the parent be investigated. (This is the hierarchical dependency resolution we alluded to above.) Then, we call the composer’s composeRequest method, passing in this new container. Once composition is done, we fetch the controller from the container and invoke it.

You can see this in the following snippet:

private fun <T : Controllable> router(request: Request, response: Response, appContainer: PicoContainer,
                                      controllerClass: Class<T>, composer: Composable) : Map<String, Any>
{
   val requestContainer = DefaultPicoContainer(appContainer)
   var model : Map<String, Any> = emptyMap()

   composer.composeRequest(requestContainer)

   try
   {
       val controller = requestContainer.getComponent(controllerClass)

       // ...
   }
   catch (e: Exception)
   {
       halt(500, "Server Error")
   }

   return model
}

We will return to this method a bit later when we discuss the controllers, but this gives you a good overview of how to integrate a DI framework like Pico with Spark. For more details, review the source or leave a comment below.


Implementing API Logic in Controllers

As you build out your API, you are very likely to have dozens, hundreds, or even thousands of endpoints. Each of these will have different logic — validating inputs, calling back-end services, looking up info in a data store — the list goes on. This processing has to be done in an orderly manner or else your code base will become unmaintainable. To avoid this, your API’s logic should be encapsulated in one controller per endpoint.

With Spark, you get a routing system that dispatches to functions. You can use lambdas as in the samples, but this becomes untenable as the size of your API grows; once you’re past the prototype phase, you’ll realize it is not enough. There are many ways to layer controllers on top of Spark, and this flexibility is part of what makes it such a great framework. As with DI, you are free to choose the way that works best for you (for better or worse). In this post, we will offer you one suggestion that satisfies these goals:

  • It should be fast and easy to create controllers.
  • Programmers should not need to focus on how routing is done as they build controllers.
  • All of a controller’s dependencies should be injected using constructor injection.
  • A controller should not be cluttered with a bunch of noisy annotations.

With these goals in mind, we start with the Controllable type (which we touched on in the last post). Every controller within our API will inherit from this class.

abstract class Controllable 
{
    public open fun before(request: Request, response: Response): Boolean = true
    public open fun get(request: Request, response: Response): ControllerResult = ControllerResult()
    public open fun post(request: Request, response: Response): ControllerResult = ControllerResult()
    public open fun put(request: Request, response: Response): ControllerResult = ControllerResult()
    public open fun delete(request: Request, response: Response): ControllerResult = ControllerResult()
    public open fun patch(request: Request, response: Response): ControllerResult = ControllerResult()
    public open fun head(request: Request, response: Response): ControllerResult = ControllerResult()
    public open fun trace(request: Request, response: Response): ControllerResult = ControllerResult()
    public open fun connect(request: Request, response: Response): ControllerResult = ControllerResult()
    public open fun options(request: Request, response: Response): ControllerResult = ControllerResult()
    public open fun after(request: Request, response: Response) { }
}

Extending this type does not require the subclass to override any methods. In practice, this wouldn’t happen, as that would mean that no routes would be set up and the server wouldn’t respond to requests. The point though is that controllers only need to implement the actions they require — no others, making it simple and fast to implement one.
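For example, a hypothetical health-check controller only needs to override get. StatusController is not part of the sample, and the ControllerResult constructor arguments are assumed here; it simply shows how little code a controller requires:

public class StatusController : Controllable()
{
    // Only get is overridden; every other HTTP method falls back to Controllable's defaults
    public override fun get(request: Request, response: Response): ControllerResult =
        ControllerResult(model = mapOf("status" to "up"))
}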

In the sample code, we have defined three controllers that simulate the logic of the OAuth code flow:

  1. AuthorizeController
  2. LoginController
  3. TokenController

If you are not familiar with how this message exchange works, we will briefly explain:

Firstly, a user accesses the authorize endpoint by making an HTTP GET to /authorize. If the user isn’t authenticated, they are redirected to /login. There they are presented with a view that allows them to identify themselves. Upon success, they are redirected back to /authorize. Now, finding an authenticated session, the authorize endpoint renders a consent screen (another view). If the user allows the client to act on their behalf, the client is given a one-time-usage code, which it submits to the token endpoint; that endpoint then issues an access token. There is a lot more to it, so dig deeper into OAuth if this is the logic your API must implement.

To achieve our goal of keeping it simple to create these controllers, and to not burden the programmer with routes and annotations, we use reflection to discover which of the Controllable class’ methods have been overridden. This is done in the Router just before it calls Spark’s addRoute method (described above):

public fun <T : Controllable> routeTo(path: String, container: PicoContainer, controllerClass: Class<T>,
                                      composer: Composable, template: String? = null)
{
    for (classMethod in controllerClass.getDeclaredMethods())
    {
        val methodName = classMethod.getName()

        for (interfaceMethod in javaClass<Controllable>().getMethods())
        {
            if (methodName == interfaceMethod.getName() && // method names match?
                    classMethod.getReturnType() == interfaceMethod.getReturnType() && // method return the same type?
                    Arrays.deepEquals(classMethod.getParameterTypes(), interfaceMethod.getParameterTypes())) // Params match?
            {
                // Call templatized or non-templatized version of Spark's addRoute method (shown above) to 
                // get route wired up

                break
            }
        }
    }
}

This takes the controller class we passed to the path method (in the API’s entry point) and checks each of its methods to see if the name, return type, and parameter types match any of those defined in the Controllable base class. If so, a route is set up, causing that method to be called when the path is requested with the corresponding HTTP method.

To use Java reflection from Kotlin like this, you need to ensure that you have kotlin-reflect.jar in your classpath (in addition to kotlin-runtime.jar). If you are using Maven, add this dependency to your POM like this:



<dependency>
    <groupId>org.jetbrains.kotlin</groupId>
    <artifactId>kotlin-reflect</artifactId>
    <version>${kotlin.version}</version>
</dependency>

To make this more concrete, let’s look at the AuthorizeController which is the first one called in our simplified OAuth flow:

public class AuthorizeController : Controllable()
{
    public override fun before(request: Request, response: Response): Boolean
    {
        if (request.session(false) == null)
        {
            // No session exists. Redirect to login
            response.redirect("/login")

            // Return false to abort any further processing
            return false
        }

        return true
    }

    // ...
}

The important part here is the before method, which is not routed. Spark has its own kind of pre- and post-processing filters, but we don’t use those because we want to abort the call to the routed method if before returns false. So, we have our own before/after filters that the Router class uses to implement this algorithm in the router method. This is done just after we create and compose the request container (described above):

if (controller.before(request, response))
{
   // Fire the controller's method depending on the HTTP method of the request
   val httpMethod = request.requestMethod().toLowerCase()
   val method = controllerClass.getMethod(httpMethod, javaClass<Request>(), javaClass<Response>())
   val result = method.invoke(controller, request, response)

   if (result is ControllerResult && result.continueProcessing)
   {
       controller.after(request, response)

       model = result.model
   }
}

This if condition will be false for the AuthorizeController when the user isn’t logged in. So, the GET made by the client will never be dispatched to the controller’s get method. Instead, the redirect in the before filter will cause the user to be sent to the login endpoint.

The LoginController handles the GET that the client makes as it follows the redirect. The user is then presented with the view associated with that endpoint, allowing them to enter their credentials and post them back to the same controller. To process this, the LoginController also overrides Controllable’s post method like this:

public override fun post(request: Request, response: Response): ControllerResult
{
    var session = request.session() // Create session

    // Save the username in the session, so that it can be used in the authorize endpoint (e.g., for consent)
    session.attribute("username", request.queryParams("username"))

    // Redirect back to the authorize endpoint now that "login" has been performed
    response.redirect("/authorize")

    return ControllerResult(continueProcessing = false)
}

Here, we create a session for the user using Spark’s Session class (which is a thin wrapper around javax.servlet.http.HttpSession), saving the username for later processing. Then, we redirect the user back to the AuthorizeController. (We also abort further processing in this method, causing the after method of the controller to not be called.) When the user follows this redirect, the before filter of the AuthorizeController will return true, allowing the flow to continue this time by ensuring that the overridden get method is called.

We admit that this OAuth example is contrived, but it shows how you can add controllers to Spark and how these can have their dependencies injected using DI. With the niceties of the Kotlin syntax, we can even make it easy to wire up all these components. Fork the sample, and see if you can prototype the logic of your API. If it doesn’t work, please let us know in a comment!

Conclusion and Next Steps

This series has been a long couple of posts with the Spark intro sandwiched in between. If you have read this far though, you now have three powerful tools in your API toolbelt:

  • The Java Virtual Machine: an open platform that prioritizes backward compatibility, ensuring the longevity of your code.
  • A flora of frameworks: Spark, Pico, and the others demonstrated in this series are only the tip of the iceberg of what’s available in the Java ecosystem.
  • Kotlin: an open source language being developed by Jetbrains, a forerunner in the Java community.

This triad will make you more productive and help you deliver higher quality APIs in a shorter amount of time. Using the boilerplate developed during this series, you now have a starting point to get going even faster. Twobo Technologies plans to adopt Kotlin now for non-shipping code, and to include it in product code as soon as version 1.0 of the language is released. We encourage you to use the sample and your new knowledge to formulate a similar roadmap.

Another place to apply this new know-how is at hackathons. When participating in such events, it is important to choose your tools ahead of time and to code quickly. Using Spark and Kotlin will help you do this. One such hackathon that is happening soon in Stockholm is PayPal’s upcoming BattleHack which is happening on November 14th and 15th. Tickets for that go on sale this week, so get them now and use your Kotlin/Spark experience to build an API that could win you an axe trophy and $100,000!

Also, if you’re in Stockholm, attend the upcoming Java user group meeting taking place on August 25th at SUP46. At this event, we will be talking about Spark and Kotlin integration in particular. We will also delve into Clojure and Groovy. If you can’t make it to Stockholm, we’ll record the Kotlin talk, so subscribe to our YouTube channel today and catch it as soon as it’s out.

We really hope you have enjoyed this series, and welcome your comments and corrections below, on Twitter, or Facebook.

[Disclosure: PayPal and JetBrains are sponsors of the Java Stockholm meetup being produced by Nordic APIs]