Rust Vs. Haskell: Which Language is Best for API Design?

When it comes to designing, building, and maintaining an API, it’s not immediately obvious which development tools and programming languages you should use. Seeing as how APIs are essentially the nervous system of mobile apps, it makes sense that there would be copious amounts of resources for programmers and developers.

Knowing which development tools to use to create your own API depends on your level of technical expertise. Some development environments offer a barebones command-line programming environment. Others function more like a full-fledged app, with a fancy GUI and lots of bells and whistles, such as code debuggers and copious built-in libraries.

Today, we’re going to compare two popular programming languages that might not immediately spring to mind when you think of designing an API. We’ll be doing a side-by-side comparison of Haskell vs. Rust to determine which language is best for API design.

Introducing Haskell

Haskell is one of the most powerful and reliable functional programming languages out there. Haskell’s emphasis on high-level, declarative programming lets developers focus on getting results rather than getting bogged down in endless minutiae.

Programming in Haskell also allows for fast prototyping, thanks to its excellent compiler, which helps get apps and software to market much more quickly than many other development languages. This makes Haskell a good fit for smaller startups or those looking to launch their first app.

Meet Rust

Mozilla is dedicated to developing tools for and evolving the web using open standards, starting with its flagship Internet browser, Firefox. Most major Internet browsers, including Firefox, are written largely in C++: Firefox comprises some 12,900,992 lines of code, Google Chrome about 4,490,488. While C++ makes these programs fast, some argue it makes them less safe. The memory manipulations of C and C++ are not checked for validity, and when something goes wrong, the result can be program crashes, memory leaks, buffer overflows, segmentation faults, and null pointer dereferences.

Rust defaults to writing “safe code”: memory allocated to an object stays owned, and it is not deallocated until its owner is done with it. This eliminates ‘dangling pointers’, which pose a security risk and lead to unreliable code.
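To make this concrete, here is a minimal, hypothetical sketch of Rust’s ownership model: memory lives exactly as long as its owner, borrows are checked at compile time, and the allocation is freed exactly once when the owner goes out of scope.

    fn main() {
        let owner = String::from("hello");       // `owner` owns this heap allocation
        {
            let borrowed = &owner;               // a borrow, checked at compile time
            println!("borrowed: {}", borrowed);  // fine: `owner` is still alive here
        }                                        // the borrow ends here
        println!("owner: {}", owner);            // still valid: ownership never moved
    }                                            // `owner` goes out of scope; its memory is freed exactly once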

This security and efficiency are among the reasons Rust consistently ranks as one of the most beloved programming languages among developers and programmers, as shown in this Stack Overflow survey.

Haskell Vs. Rust

According to this StackShare chart, Rust and Haskell have a number of similarities and a few notable differences. For starters, Rust is slightly more popular, with 416 developers using Rust as opposed to 347 developing with Haskell.

Due to its popularity, there’s a great deal more Rust content on the Internet than there is for Haskell. There are over 23,000 references to Rust on Hacker News, while Haskell only has 763. Haskell has more than three times as much content on Stack Overflow as Rust, though, thanks to its longevity.

The advantages of Rust, according to Stack Overflow programmers, include:

  • Guaranteed memory safety (75 votes)
  • Speed (64 votes)
  • Minimal runtime (46 votes)
  • Open source (46 votes)
  • Pattern matching (38 votes)
  • Type inference (36 votes)
  • Algebraic data types (34 votes)
  • Concurrent (34 votes)
  • Efficient C bindings (28 votes)
  • Practical (28 votes)

The advantages of Haskell, on the other hand, include:

  • Purely-functional programming (66 votes)
  • Statically typed (53 votes)
  • Type-safe (44 votes)
  • Great community (29 votes)
  • Open source (29 votes)
  • Composable (28 votes)
  • Built-in concurrency (24 votes)
  • Built-in parallelism (22 votes)
  • Referentially transparent (17 votes)
  • Generics (15 votes)

The cons of using Rust include:

  • Ownership learning curve
  • Variable shadowing
  • Hard to learn

The cons of using Haskell:

  • No good ABI
  • Unpredictable performance
  • Poor documentation for libraries
  • Poor packaging for apps
  • Confusing error messages
  • Slow compiling
  • No best practices
  • Too many distractions in language extensions

Programs that integrate with Rust:

  • Remacs
  • Sentry
  • Iron
  • Leaf
  • Pencil
  • Ruru
  • Sapper
  • Helix
  • Tokamak
  • Rocket
  • Airbrake
  • Yew Framework
  • Dependabot
  • Tower Web

Programs that interact with Haskell:

  • Eta
  • Yesod
  • Rollbar
  • Miso
  • Buddy

Finally, take a look at this Google Trends graph of interest over time in Rust vs. Haskell:

As you can see, while both programming languages have their ups and downs, Rust is considerably more popular than Haskell. This means there are more resources available for Rust, which makes it a better pick for building APIs if you want something that will work straight out of the gate.

Haskell is adept at fast prototyping and building frameworks, however, and as an added benefit, the prototype code you write in Haskell can become part of the finished product.

Haskell vs. Rust: Which Is Better For Designing APIs?

Now that we know a bit more about Haskell and Rust, let’s delve into the heart of the matter.

Which programming language is best for API design? That depends on what you’re trying to build as well as how comfortable you are with programming.

Let’s take a look at some specific instances, to help you figure out which approach is right for your API design.

Designing A RESTful API With Haskell

Designing an API with a functional programming language may seem like a lot to take on. It doesn’t have to be, however, as there are third-party tools that make web development with Haskell easy. For example, the SNAP framework handles the web layer for you, letting your Haskell code serve HTTP easily and painlessly.

Getting Started With Haskell and SNAP

You’re going to start by running a few commands that clone thoughtbot’s tutorial project and install its dependencies:

    git clone git@github.com:thoughtbot/snap-api-tutorial.git
    cd snap-api-tutorial
    git checkout baseline
    cabal sandbox init
    cabal install snap
    cabal install --dependencies-only

Creating An API Snaplet

Snaplets are composable pieces of a SNAP application; SNAP applications are built by nesting snaplets. Look at src/Site.hs and you’ll notice the application initializer ‘app’ is built with the makeSnaplet function.

We’re going to start by making a snaplet called Api. This snaplet is responsible for creating the top-level /api namespace. You’re going to enable a few language extensions, import the necessary modules, and define the Api data type. Then you’ll define the snaplet’s initializer.

    -- new file: src/api/Core.hs
    {-# LANGUAGE OverloadedStrings #-}
    module Api.Core where
    import Snap.Snaplet
    data Api = Api
    apiInit :: SnapletInit b Api
    apiInit = makeSnaplet "api" "Core Api" Nothing $ return Api

Notice the b in the apiInit :: SnapletInit b Api line, rather than App. This means the snaplet can be loaded into any base application, not just App. This is the basis of SNAP composability.

Now you’re going to tell the ‘App’ datatype to expect an API snaplet.

    -- src/Application.hs
    import Api.Core (Api(Api))
    data App = App { _api :: Snaplet Api }

Then, you’ll nest the Api snaplet within the App snaplet, using nestSnaplet:

    nestSnaplet :: ByteString -> Lens v (Snaplet v1) -> SnapletInit b v1 -> Initializer b v (Snaplet v1)

The first argument defines the root URL for the snaplet’s routes, /api in this instance. The second argument is a Lens identifying the snaplet, generated by the makeLenses call in src/Application.hs. The final argument is the snaplet initializer apiInit we defined previously.

    -- src/Site.hs
    import Api.Core (Api(Api), apiInit)

    app :: SnapletInit App App
    app = makeSnaplet "app" "An snaplet example application." Nothing $ do
      api <- nestSnaplet "api" api apiInit
      addRoutes routes
      return $ App api

Now you’ve nested your first Api snaplet. It doesn’t have any routes yet, however, so you can’t tell whether it’s working. Adding an /api/status route that always responds with 200 OK will let you see output from this snaplet.

Snap route handlers normally have a return type of Handler b v (). Handler is an instance of MonadSnap, which provides stateful access to the HTTP request and response.

All of the request and response modifications take place inside the Handler monad, so we’ll define respondOk :: Handler b Api ():

    -- src/api/Core.hs
    import           Snap.Core
    import qualified Data.ByteString.Char8 as B

    apiRoutes :: [(B.ByteString, Handler b Api ())]
    apiRoutes = [("status", method GET respondOk)]

    respondOk :: Handler b Api ()
    respondOk = modifyResponse $ setResponseCode 200

    apiInit :: SnapletInit b Api
    apiInit = makeSnaplet "api" "Core Api" Nothing $ do
        addRoutes apiRoutes
        return Api

Now look at the type signatures for modifyResponse and setResponseCode:

    modifyResponse :: (MonadSnap m) => (Response -> Response) -> m ()
    setResponseCode :: Int -> Response -> Response

This means setResponseCode takes an Int and returns a Response-modifying function that can be passed to modifyResponse, which applies that modification inside the Snap monad.
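Because setResponseCode yields a plain Response -> Response function, it also composes with other modifiers before being handed to modifyResponse. As a small, hypothetical illustration (the handler name and header are made up):

    -- hypothetical handler: compose two Response -> Response modifiers
    respondNoContent :: Handler b Api ()
    respondNoContent = modifyResponse $ setResponseCode 204 . setHeader "X-Handled-By" "api"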

Now run the following code:

    $ cabal run -- -p 9000
    $ curl -I -XGET "localhost:9000/api/status"

    HTTP/1.1 200 OK
    Server: Snap 0.9.4.6
    Date: ...
    Transfer-Encoding: chunked

This should give you your first response.

A Todo Snaplet

Now that we’ve seen how to get a simple response out of a snaplet, let’s make a Todo snaplet inside the Api snaplet. Then we’ll connect that Todo snaplet to a database and write GET and POST handlers for /api/todos, which let you create and fetch todo items.

We’ll start with some boilerplate code, which will define our snaplet, then nest it inside of the API snaplet.

    -- new file: src/api/services/TodoService.hs

    {-# LANGUAGE OverloadedStrings #-}

    module Api.Services.TodoService where

    import Api.Types (Todo(Todo))
    import Control.Lens (makeLenses)
    import Snap.Core
    import Snap.Snaplet

    data TodoService = TodoService

    todoServiceInit :: SnapletInit b TodoService
    todoServiceInit = makeSnaplet "todos" "Todo Service" Nothing $ return TodoService

    -- src/api/Core.hs

    {-# LANGUAGE TemplateHaskell #-}

    import Control.Lens (makeLenses)
    import Api.Services.TodoService(TodoService(TodoService), todoServiceInit)
    -- ...

    data Api = Api { _todoService :: Snaplet TodoService }

    makeLenses ''Api
    -- ...

    apiInit :: SnapletInit b Api
    apiInit = makeSnaplet "api" "Core Api" Nothing $ do
      ts <- nestSnaplet "todos" todoService todoServiceInit
      addRoutes apiRoutes
      return $ Api ts

Next, we’ll nest a PostgreSQL snaplet, provided by snaplet-postgresql-simple, inside the TodoService. This gives the TodoService a connection to the database and makes queries possible. Then you’re going to import Aeson, so that responses can be encoded into JSON using the Todo type’s ToJSON instance (sketched below).

    -- src/api/services/TodoService.hs

    {-# LANGUAGE TemplateHaskell #-}
    {-# LANGUAGE FlexibleInstances #-}

    import Control.Lens (makeLenses)
    import Control.Monad.State.Class (get)
    import Data.Aeson (encode)
    import Snap.Snaplet.PostgresqlSimple
    import qualified Data.ByteString.Char8 as B
    -- ...

    data TodoService = TodoService { _pg :: Snaplet Postgres }

    makeLenses ''TodoService
    -- ...

    todoServiceInit :: SnapletInit b TodoService
    todoServiceInit = makeSnaplet "todos" "Todo Service" Nothing $ do
      pg <- nestSnaplet "pg" pg pgsInit
      return $ TodoService pg

    instance HasPostgres (Handler b TodoService) where
      getPostgresState = with pg get
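
The Todo type imported from Api.Types, along with the ToJSON instance mentioned above, isn’t shown in this walkthrough. A minimal sketch of what that module might contain (the field names here are assumptions) looks like this:

    -- hypothetical sketch of src/api/Types.hs
    {-# LANGUAGE OverloadedStrings #-}

    module Api.Types where

    import           Data.Aeson
    import qualified Data.Text as T
    import           Database.PostgreSQL.Simple.FromRow

    data Todo = Todo
      { todoId   :: Int
      , todoText :: T.Text
      }

    -- lets Aeson's encode turn a Todo into JSON
    instance ToJSON Todo where
      toJSON (Todo tId tText) = object [ "id" .= tId, "text" .= tText ]

    -- lets postgresql-simple build a Todo from a row of the todos table
    instance FromRow Todo where
      fromRow = Todo <$> field <*> field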

A little bit of SQL sets up the database and inserts a few lines of test data:

    CREATE DATABASE snaptutorial;
    CREATE TABLE todos (id SERIAL, text TEXT);
    INSERT INTO todos (text) VALUES ('First todo');
    INSERT INTO todos (text) VALUES ('Second todo');

Finally, you’re going to configure the postgres snaplet by editing the following file:

snaplets/api/snaplets/todos/snaplets/postgresql-simple/devel.cfg

Now you’re ready to run your first GET to /api/todos. We’ll retrieve all of the rows of the todos table, convert them into Todo data, then serialize them as JSON to get your first response.

First, you’re going to use the query_ function, which takes a SQL query and returns a monadic list of values whose type implements the FromRow typeclass:

    query_ :: (HasPostgres m, FromRow r) => Query -> m [r]

Next, you’re going to use writeLBS together with Aeson’s encode function to write a JSON string to the response body:

    writeLBS :: MonadSnap m => ByteString -> m ()

This function calls the modifyResponse function mentioned earlier.
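The GET handler itself, getTodos, which the route table below refers to, isn’t spelled out in the text. A minimal sketch, assuming the imports shown above and the Todo instances sketched earlier, might look like this:

    -- hypothetical sketch, added to src/api/services/TodoService.hs
    getTodos :: Handler b TodoService ()
    getTodos = do
      todos <- query_ "SELECT * FROM todos"                        -- [Todo] via the FromRow instance
      modifyResponse $ setHeader "Content-Type" "application/json"
      writeLBS . encode $ (todos :: [Todo])                        -- JSON via the ToJSON instance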

Then you’re going to use the execute function (the modifying counterpart of query_) to insert the data gathered from getPostParam into the database:

    todoRoutes :: [(B.ByteString, Handler b TodoService ())]
    todoRoutes = [("/", method GET getTodos)
                 ,("/", method POST createTodo)]

    createTodo :: Handler b TodoService ()
    createTodo = do
      todoTextParam <- getPostParam "text"
      newTodo <- execute "INSERT INTO todos (text) VALUES (?)" (Only todoTextParam)
      modifyResponse $ setResponseCode 201

Here, Only is postgresql-simple’s wrapper for single-value parameter collections.

Here’s the finished version:

    $ cabal run -- -p 9000
    $ curl -i -XPOST --data "text=Third todo" "localhost:9000/api/todos"

    HTTP/1.1 201 Created
    Server: Snap 0.9.4.6
    Date: ...
    Transfer-Encoding: chunked

    $ psql snaptutorial
    SELECT * FROM todos;

     id |     text
    ----+--------------
      1 | First todo
      2 | Second todo
      3 | Third todo

Now you have a working REST API written in Haskell with SNAP. If you want to know more about the SNAP framework, you can read the SNAP documentation or visit #snapframework on freenode.

Designing a REST API in Rust

Now that we’ve learned how to set up an API in Haskell, let’s turn our attention to Rust. Seeing how to set up an API in Rust will help give you an idea of which language might be best for designing your API.

First off, we’re going to load some crates, which are Rust’s libraries. We’ll be using Rocket to create the API and Diesel to handle the database. Diesel works with Postgres, MySQL, and SQLite.

Define Your Dependencies

Before you begin, you’re going to define your dependencies:

    [dependencies]
    rocket = "0.3.6"
    rocket_codegen = "0.3.6"
    diesel = { version = "1.0.0", features = ["postgres"] }
    dotenv = "0.9.0"
    r2d2-diesel = "1.0"
    r2d2 = "0.8"
    serde = "1.0"
    serde_derive = "1.0"
    serde_json = "1.0"

    [dependencies.rocket_contrib]
    version = "*"
    default-features = false
    features = ["json"]

You’ll notice that a number of crates are being loaded. We’ve already mentioned Rocket and Diesel. rocket_codegen provides Rocket’s code-generation macros, while dotenv allows variables to be loaded from an external file. r2d2 and r2d2-diesel manage a pool of database connections for Diesel. Last but not least, serde, serde_derive, and serde_json are used for serializing and deserializing the data sent to and retrieved from the REST API.

In this instance, the postgres feature has been specified so that only the Postgres modules of the diesel crate are included. If you want to use another database, or multiple databases, you only need to list the corresponding features, or drop the features list altogether, as shown below.
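For example, a hypothetical dependency line enabling several backends at once would be:

    diesel = { version = "1.0.0", features = ["postgres", "mysql", "sqlite"] }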

One final note: to use Rocket, you need the nightly build of Rust, since it relies on features not included in the stable builds.

Accessing The Database With Diesel

We’re going to start by setting up Diesel. Once that’s done, you’ll have the schema defined that you’ll use to construct the application.

To set up Diesel, begin by installing the Diesel CLI. If you don’t know how to do that, here’s a guide on getting started with Diesel.

We’ll be using Postgres, since not all of Diesel’s features are available for MySQL. Postgres is fast and easy to set up and will give you all of the features you need to create a database.

Create A Table

Start off by setting the DATABASE_URL used to connect to Postgres, or simply add it to the .env file:

    echo DATABASE_URL=postgres://postgres:password@localhost/rust-web-with-rocket > .env

Now you’re going to run diesel setup to create a database and an empty migrations folder to use later.

You will be modeling people, who can be added to, retrieved from, modified in, or deleted from the database. You’re going to need a table to store them in, so you’re going to create your first migration.

    diesel migration generate create_people

This creates two new files in the migrations folder: up.sql is for upgrading and is where you’ll place the SQL that creates your table, while down.sql is for downgrading, so you can undo the upgrade if need be.

In this example, you’re going to create the people table.

    CREATE TABLE people(
        id SERIAL PRIMARY KEY,
        first_name VARCHAR NOT NULL,
        last_name VARCHAR NOT NULL,
        age INT NOT NULL,
        profession VARCHAR NOT NULL,
        salary INT NOT NULL
    )

To undo the table creation, you only have to use:

    DROP TABLE people 

To execute the migration, run:

    diesel migration run

If you need to revert and re-run the latest migration, use:

    diesel migration redo

Map To Structs

Now that your people table is created, you’re ready to start adding data to it. Since Diesel is an ORM, you’re going to need to map the table into something Rust can work with, and you’re going to use a struct to do that.

    use super::schema::people;

    #[derive(Queryable, AsChangeset, Serialize, Deserialize)]
    #[table_name = "people"]
    pub struct Person {
        pub id: i32,
        pub first_name: String,
        pub last_name: String,
        pub age: i32,
        pub profession: String,
        pub salary: i32,
    }

You’re going to write a struct that represents each record in the people table, otherwise known as a person. You’re going to use three attributes particular to Diesel: #[derive(Queryable)], #[derive(AsChangeset)] and #[table_name].

#[derive(Queryable)] generates the code that retrieves a Person from the database. #[derive(AsChangeset)] makes it possible to use update.set later on, if you so choose. #[table_name = "people"] names the table explicitly, since Diesel would otherwise infer the plural of Person as "persons". If the struct had a more regular plural, this step wouldn’t be necessary.

The other derives allow JSON data to flow in and out of the REST API: #[derive(Serialize)] and #[derive(Deserialize)] both come from the serde crate. We will delve into these more fully a little later on.

Now you’re going to create a schema, specifically a Rust schema that uses Diesel’s table! macro to handle the Rust-to-database mapping.

Run the following command:
    diesel print-schema > src/schema.rs

This generates the following file:

    table! {
        people (id) {
            id -> Int4,
            first_name -> Varchar,
            last_name -> Varchar,
            age -> Int4,
            profession -> Varchar,
            salary -> Int4,
        }
    }

Now you’re going to run SELECT and UPDATE queries, using the Person struct we created earlier. DELETE doesn’t require a struct as it only requires the record’s ID. You’re also going to use INSERT, but in a different way than what’s recommended in the Diesel documentation.

    #[derive(Insertable)]
    #[table_name = "people"]
    struct InsertablePerson {
        first_name: String,
        last_name: String,
        age: i32,
        profession: String,
        salary: i32,
    }

    impl InsertablePerson {
        fn from_person(person: Person) -> InsertablePerson {
            InsertablePerson {
                first_name: person.first_name,
                last_name: person.last_name,
                age: person.age,
                profession: person.profession,
                salary: person.salary,
            }
        }
    }

InsertablePerson is almost identical to the Person struct, with one key difference: there’s no id field. The id is generated automatically by the database when you insert a record, so it isn’t needed here.

Finally, #[derive(Insertable)] is added to generate the code to insert a new record.

Running Queries

Now that your table is created and the structs are mapped to it, you’re going to put them into action.

Here’s how you implement the basic REST API:

    use diesel;
    use diesel::prelude::*;
    use schema::people;
    use people::{Person, InsertablePerson};

    pub fn all(connection: &PgConnection) -> QueryResult<Vec<Person>> {
        people::table.load::<Person>(&*connection)
    }

    pub fn get(id: i32, connection: &PgConnection) -> QueryResult<Person> {
        people::table.find(id).get_result::<Person>(connection)
    }

    pub fn insert(person: Person, connection: &PgConnection) -> QueryResult<Person> {
        diesel::insert_into(people::table)
            .values(&InsertablePerson::from_person(person))
            .get_result(connection)
    }

    pub fn update(id: i32, person: Person, connection: &PgConnection) -> QueryResult<Person> {
        diesel::update(people::table.find(id))
            .set(&person)
            .get_result(connection)
    }

    pub fn delete(id: i32, connection: &PgConnection) -> QueryResult<usize> {
        diesel::delete(people::table.find(id))
            .execute(connection)
    }

The diesel crate gives access to the insert_into, update, and delete functions. diesel::prelude::* brings in a range of structs and traits that are useful when working with Diesel; in this example, we’re using PgConnection and QueryResult. We’re also including schema::people so we can access the people table from Rust and run queries against it.

Let’s look at one of these functions more closely:

    pub fn get(id: i32, connection: &PgConnection) -> QueryResult<Person> {
        people::table.find(id).get_result::<Person>(connection)
    }

In this example, a QueryResult is returned from the function. Diesel returns QueryResult<T> from each of its methods; it is simply an alias for Result<T, Error>, defined as:

    pub type QueryResult<T> = Result<T, Error>;

Using QueryResult lets us handle the case where the query fails for any reason. If you wanted to return a Person directly from a function instead, you’d call expect on the result, which panics with the error message immediately if the query fails.
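For instance, a hypothetical variant of get that unwraps the result with expect, rather than returning a QueryResult, would look like this:

    // hypothetical variant: panics with the error message if the query fails
    pub fn get_or_panic(id: i32, connection: &PgConnection) -> Person {
        people::table
            .find(id)
            .get_result::<Person>(connection)
            .expect("Error loading person")
    }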

Since we’re using Postgres, the PgConnection type is used. There are other connection types for different databases, such as MysqlConnection.

Here’s another example:

    pub fn insert(person: Person, connection: &PgConnection) -> QueryResult<Person> {
        diesel::insert_into(people::table)
            .values(&InsertablePerson::from_person(person))
            .get_result(connection)
    }

This is a little different from the earlier get function. Instead of querying people::table directly, the table is passed into Diesel’s insert_into function. Earlier, we created the InsertablePerson struct to receive new records; its values are built from a Person using the from_person function. Finally, get_result executes the statement and returns the inserted record.

Launching Rocket

At this point, your database should be up and running, and we just need to create the REST API and link it to the back end. In Rocket, this consists of routes for incoming requests and handler functions that deal with those requests, so you’ve got to create both the routes and the handler functions.

Handler Functions

It’s slightly easier to start with the handlers and work backwards, so you know what your routes are being mapped to. Here are all of the handlers you’ll need to implement the HTTP verbs GET, POST, PUT, DELETE:

    use connection::DbConn;
    use diesel::result::Error;
    use std::env;
    use people;
    use people::Person;
    use rocket::http::Status;
    use rocket::response::{Failure, status};
    use rocket_contrib::Json;

    #[get("/")]
    fn all(connection: DbConn) -> Result<Json<Vec<Person>>, Failure> {
        people::repository::all(&connection)
            .map(|people| Json(people))
            .map_err(|error| error_status(error))
    }

    fn error_status(error: Error) -> Failure {
        Failure(match error {
            Error::NotFound => Status::NotFound,
            _ => Status::InternalServerError
        })
    }

    #[get("/<id>")]
    fn get(id: i32, connection: DbConn) -> Result<Json<Person>, Failure> {
        people::repository::get(id, &connection)
            .map(|person| Json(person))
            .map_err(|error| error_status(error))
    }

    #[post("/", format = "application/json", data = "<person>")]
    fn post(person: Json<Person>, connection: DbConn) -> Result<status::Created<Json<Person>>, Failure> {
        people::repository::insert(person.into_inner(), &connection)
            .map(|person| person_created(person))
            .map_err(|error| error_status(error))
    }

    fn person_created(person: Person) -> status::Created<Json<Person>> {
        let host = env::var("ROCKET_ADDRESS").expect("ROCKET_ADDRESS must be set");
        let port = env::var("ROCKET_PORT").expect("ROCKET_PORT must be set");
        status::Created(
            format!("{host}:{port}/people/{id}", host = host, port = port, id = person.id).to_string(),
            Some(Json(person)))
    }

    #[put("/<id>", format = "application/json", data = "<person>")]
    fn put(id: i32, person: Json<Person>, connection: DbConn) -> Result<Json<Person>, Failure> {
        people::repository::update(id, person.into_inner(), &connection)
            .map(|person| Json(person))
            .map_err(|error| error_status(error))
    }

    #[delete("/<id>")]
    fn delete(id: i32, connection: DbConn) -> Result<status::NoContent, Failure> {
        match people::repository::get(id, &connection) {
            Ok(_) => people::repository::delete(id, &connection)
                .map(|_| status::NoContent)
                .map_err(|error| error_status(error)),
            Err(error) => Err(error_status(error))
        }
    }

Each of these functions specifies a REST verb and the path needed to reach it. Part of the path is still missing; it will be supplied when the routes are mounted.

Assume the handler methods are localhost:8000/people until we get into the routing.

Here’s one of the easier handlers:

    #[get("/")]
    fn all(connection: DbConn) -> Result<Json<Vec<Person>>, Failure> {
        people::repository::all(&connection)
            .map(|people| Json(people))
            .map_err(|error| error_status(error))
    }

    fn error_status(error: Error) -> Failure {
        Failure(match error {
            Error::NotFound => Status::NotFound,
            _ => Status::InternalServerError
        })
    }

To call this endpoint with curl, use:

    curl localhost:8000/people

Now let’s look at the PUT handler:

    #[put("/<id>", format = "application/json", data = "<person>")]
    fn put(id: i32, person: Json<Person>, connection: DbConn) -> Result<Json<Person>, Failure> {
        people::repository::update(id, person.into_inner(), &connection)
            .map(|person| Json(person))
            .map_err(|error| error_status(error))
    }

The differences between this function and the previous all example are the id and person variables. <id> in the path binds to the id argument, and data = "<person>" maps the request body onto the person argument. The format property specifies the content type the request body must use, in this case JSON, which is deserialized into Json<Person>.

We’re using serde again, via Json<Person>, to deserialize the JSON request body.

We’re going to use into_inner() to extract the Person from the Json wrapper. We’re also going to use update, which maps either the result or the error into the returned Result. Since we’re also using error_status, a 404 error is returned if the ID doesn’t match an existing record.

If you wanted PUT to insert a new record when no ID matches, you’d handle Error::NotFound and run code similar to what’s used in the POST function, as in the sketch below.
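A hypothetical sketch of that variant, replacing the put handler above (the upsert name is made up, and for brevity it returns 200 in both cases rather than a 201 on insert):

    #[put("/<id>", format = "application/json", data = "<person>")]
    fn upsert(id: i32, person: Json<Person>, connection: DbConn) -> Result<Json<Person>, Failure> {
        match people::repository::get(id, &connection) {
            // the id exists: update it as before
            Ok(_) => people::repository::update(id, person.into_inner(), &connection)
                .map(|person| Json(person))
                .map_err(|error| error_status(error)),
            // the id doesn't exist: fall back to an insert, as the POST handler does
            Err(Error::NotFound) => people::repository::insert(person.into_inner(), &connection)
                .map(|person| Json(person))
                .map_err(|error| error_status(error)),
            Err(error) => Err(error_status(error))
        }
    }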

On that note, the Post function looks like:

    #[post("/", format = "application/json", data = "<person>")]
    fn post(person: Json<Person>, connection: DbConn) -> Result<status::Created<Json<Person>>, Failure> {
        people::repository::insert(person.into_inner(), &connection)
            .map(|person| person_created(person))
            .map_err(|error| error_status(error))
    }

    fn person_created(person: Person) -> status::Created<Json<Person>> {
        status::Created(
            format!("{host}:{port}/people/{id}", host = host(), port = port(), id = person.id).to_string(),
            Some(Json(person)))
    }

    fn host() -> String {
        env::var("ROCKET_ADDRESS").expect("ROCKET_ADDRESS must be set")
    }

    fn port() -> String {
        env::var("ROCKET_PORT").expect("ROCKET_PORT must be set")
    }

This function uses pieces similar to the PUT handler we’ve already discussed. The main difference is that POST returns a 201 Created rather than a 200 OK. To yield that result, the Result uses status::Created<Json<Person>> instead of Json<Person>; this is what produces the 201 status code.

To construct status::Created, the created record and the path where it can be retrieved with a GET request are passed into the constructor.

Routing

The handlers have all been set up; now we need to route requests to them. Each of these handlers relates to people, so they’re all going to be mounted under /people.

    use people;
    use rocket;
    use connection;

    pub fn create_routes() {
        rocket::ignite()
            .manage(connection::init_pool())
            .mount("/people",
                   routes![people::handler::all,
                           people::handler::get,
                           people::handler::post,
                           people::handler::put,
                           people::handler::delete],
            ).launch();
    }

create_routes is called by the main function to get everything started. ignite() creates a new Rocket instance, the handler functions are mounted on the base request path /people by listing them inside the routes! macro, and launch() runs the application.
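One piece the walkthrough never shows is the connection module that create_routes and the handlers rely on (connection::init_pool() and the DbConn request guard). A sketch following the standard Rocket 0.3 managed-state pattern with r2d2 might look like this (the file name and helper names are assumptions):

    // hypothetical sketch of src/connection.rs
    use diesel::pg::PgConnection;
    use r2d2;
    use r2d2_diesel::ConnectionManager;
    use rocket::http::Status;
    use rocket::request::{self, FromRequest};
    use rocket::{Outcome, Request, State};
    use std::env;
    use std::ops::Deref;

    type Pool = r2d2::Pool<ConnectionManager<PgConnection>>;

    // build the r2d2 pool that rocket::ignite().manage(...) stores as application state
    pub fn init_pool() -> Pool {
        let manager = ConnectionManager::<PgConnection>::new(database_url());
        r2d2::Pool::new(manager).expect("db pool")
    }

    fn database_url() -> String {
        env::var("DATABASE_URL").expect("DATABASE_URL must be set")
    }

    // request guard: each handler that takes a DbConn checks a connection out of the pool
    pub struct DbConn(pub r2d2::PooledConnection<ConnectionManager<PgConnection>>);

    impl<'a, 'r> FromRequest<'a, 'r> for DbConn {
        type Error = ();

        fn from_request(request: &'a Request<'r>) -> request::Outcome<DbConn, ()> {
            let pool = request.guard::<State<Pool>>()?;
            match pool.get() {
                Ok(conn) => Outcome::Success(DbConn(conn)),
                Err(_) => Outcome::Failure((Status::ServiceUnavailable, ()))
            }
        }
    }

    // lets &DbConn deref to &PgConnection so handlers can pass &connection to the repository
    impl Deref for DbConn {
        type Target = PgConnection;

        fn deref(&self) -> &Self::Target {
            &self.0
        }
    }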

Configurations

Earlier, we read environment variables to retrieve the host and port of the running server. Here’s how to set them. There are multiple ways of going about this: you can use an .env file or a Rocket.toml file.

When using .env files, values must be named ROCKET_{PARAM}, where PARAM is the setting you’re trying to define: ADDRESS sets the host, while PORT sets the port.

The .env file would look something like this:

    ROCKET_ADDRESS=localhost
    ROCKET_PORT=8000

If you wanted to use Rocket.toml instead, it might look like:

    [development]
    address = "localhost"
    port = 8000

If you don’t include either of these, Rocket will fall back to its default configuration.

If you want to learn more about configuring Rocket, check out Rocket’s documentation.

Last Step: Create The Main Method

The final step is to create the main method so the application can run.

    #![feature(plugin, decl_macro, custom_derive)]
    #![plugin(rocket_codegen)]

    #[macro_use]
    extern crate diesel;
    extern crate dotenv;
    extern crate r2d2;
    extern crate r2d2_diesel;
    extern crate rocket;
    extern crate rocket_contrib;
    #[macro_use]
    extern crate serde_derive;

    use dotenv::dotenv;

    mod people;
    mod schema;
    mod connection;

    fn main() {
        dotenv().ok();
        people::router::create_routes();
    }

In this instance, all main does is load the environment variables and start Rocket by calling create_routes. The rest of the file simply declares the external crates and modules in one place so they aren’t littered throughout the code.

To see the complete code, you can check out LankyDan’s GitHub.

Conclusion: Rust Vs. Haskell: Which Language Is Best For Building APIs?

As we stated at the beginning, knowing which programming language will best suit your needs isn’t just following a step-by-step recipe or formula. It depends on numerous variables, including your technical proficiency and what you’re working on.

That being said, there are a few reasons why Rust has some advantages over Haskell for building APIs, most notably its popularity. Rust has been trending in recent years, so there are a ton of useful libraries and frameworks, not to mention a vibrant community to help answer any questions you might have.

Secondly, Rust is also preferable when size, speed, and security matter, which is most of the time, at this point in the Web’s evolution.

Haskell definitely requires more technical understanding, but there are real advantages to using functional programming for constructing APIs. The main advantage of using Haskell for your API design is its utility for rapid prototyping: the code you write while constructing your prototype can still be used in your finished product.

Haskell requires additional frameworks to connect easily to the web, however, while Rust seems to interact a bit more naturally with web services.

Unless you’re a seasoned API designer, or you’re trying to get an app to market as quickly as possible, Rust has a slight advantage over Haskell for API design, in our opinion.