Expert advice on strangling the monolith, and not the business

Breaking apart the monolith into specialized microservices is a trendy concept. The idea is to modernize legacy systems by creating smaller, isolated services around specific domains. This could bring incredible benefits — such as improved reusability, better partner integrations, and accelerated DevOps.

But, breaking apart a large monolithic system is a task that requires slow diligence. If attempted all at once, like a Big Bang, your IT transformation could cause catastrophic complexity. Instead, experts recommend The Strangler Pattern, in which specific domains are strategically exorcised from the monolith and transmuted into microservices one at a time through the use of API gateways. By doing so, businesses can smartly transform their assets in a manageable way.

There are many angles to cover in legacy modernization. So, at our Legacy Modernization LiveCast, we brought in Erik Wilde, Digital Catalyst at Axway, and Gibson Nascimento, Head of Solutions at Sensedia, to present tips and best practices for utilizing microservices as a tool to smartly expose legacy applications as APIs.

Below, we’ll provide a step-by-step walkthrough of The Strangler Pattern and demonstrate how to extend API-first patterns with Event Brokers to transform older monolithic systems progressively.


What’s Wrong With a Monolith?

First off, what’s a monolith? Are we talking about the mysterious space-age monuments popping up around the world? Well, no. Software monoliths are just a little less sci-fi than that.

In software theory, a monolith is a classical application architecture pattern. This approach consolidates code and dependencies into a single vertical stack. It’s self-contained and non-distributed.

Although having everything in a single silo may seem simpler, easier to manage, and easier to deploy, moving around this massive object is cumbersome. Scalability suffers, and deployments are difficult if you’re updating everything all at once. You may run into connectivity and security issues when attempting to integrate with partners. Also, since everything is connected, small bugs could have a disproportionately large impact.

Benefits of Microservices

Microservices architecture, on the other hand, breaks software into smaller segments that are more decoupled and distributed. Arguably, this pattern is more appropriate for the needs of today’s software environments: applications are more decomposed, made of many third-party components, and data is increasingly more open. Teams have different software stacks, releasing to various clients, on different timelines, and adopting continuous delivery and deployment for better agility.

Building niche, client-agnostic services means you could reuse them throughout an organization in different contexts — especially if built with a standard interface for externalization (i.e., The Bezos Mandate).

Caveats and Considerations

Breaking apart the monolith into microservices is a trendy concept, and a lot has already been said on the topic. But, as enterprises are realizing, microservices do bring unintended caveats.

For example, how can we manage the additional complexity of microservices? Should backend microservices match the API endpoints exposed to developer consumers? Should all monolithic systems be broken apart? What scenarios necessitate this sort of drastic change? How large should a single microservice be? What exactly constitutes a specific business domain?

As you can see, delineating microservices domains and implementing the pattern in practice opens a can of worms. To begin this journey, it helps to have a skeleton structure to implement this change.

The Strangler Pattern

Erik Wilde discussed how to iteratively transform monoliths with API-driven microservices.

The Strangler Pattern is one such structure that helps companies test modernization in a risk-averse and relatively inexpensive manner. It enables you to iteratively evolve only elements that need to be changed. According to Erik Wilde, The Strangler Pattern allows a step-by-step, repeatable, and controlled modernization process.

Reasons for Strangling the Monolith

First, it helps to understand motivations — why do you want to modernize with microservices in the first place? Well, Erik sees two main reasons. One driver is bottom-up, and the other is top-down.

  1. Operational Concerns: With a monolith, scaling a large codebase becomes demanding and expensive. A stack utilizing microservices can leverage tech like containers, Kubernetes, and serverless — a much better match for on-demand scalability.
  2. Digital Transformation: Perhaps the existing architecture reflects what a group has been doing in the past, and they need to transform it to enable innovation.

“It’s really important to think about the reasons,” said Erik. The first driver, operational concerns, is bottom-up; the second, digital transformation, is top-down. Other drivers may include business agility, designing for change, increasing operational flexibility, and compliance with regulatory requirements.

Strangler Pattern: Step-by-Step

Only once a company has a clear purpose should it begin modernization efforts. The Strangler Pattern is a way to strategically build replacement services around a pre-existing monolith. You grow parallel microservices around unique monolith domains until using the monolith for those functions is no longer necessary.

Erik outlined 13 steps to strangle the monolith:

  1. Start with a monolith: Understand why you want to transform.
  2. Identify components: Select what could be useful in a decoupled way, externalized outside of the monolith.
  3. Carve it out: Identify a specific domain you want to focus on and carve out this capability with well-defined boundaries.
  4. Understand how the domain functions: Understand how the domain works inside the monolith. In many cases, the microservice will share state or a database with the monolith. (This implicit coupling is what makes modernization difficult.)
  5. Expose an API: Design an API for this capability, independent of the monolith.
  6. Manage the API: Set up proper access control, throttling, and security. Introduce infrastructure to manage this process, like an API Gateway.
  7. Duplicate: Reimplement your capability in a standalone microservice.
  8. Make new API: Give your new implementation an API. This API should be identical for both instances, which will require governance.
  9. Replicate state: Replicate the shared state in the standalone service. This may be a bit tricky, involving database exports, imports, or other actions to move state from the monolith to the microservice.
  10. Switch: Expose the microservice and motivate its use. This could involve a hard switch, or exposing both implementations in parallel for some time. Regardless, hide the complexity with an API Gateway and ensure consumers see identical functionality.
  11. Deprecate: Stop using the monolith implementation. The Gateway now only exposes the new microservice.
  12. Retire: Retire the old implementation in the monolith.
  13. Carve out more: Keep repeating steps 1–12 for each service that may be useful as a standalone service.
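The switch-over at the heart of these steps can be sketched as prefix routing at the gateway layer. This is a minimal illustration, not a real gateway configuration; the service names and paths are hypothetical, and production gateways express the same idea as declarative route rules.

```python
# Hypothetical backends: one legacy monolith, plus domains already carved out.
MONOLITH = "https://monolith.internal"
MICROSERVICES = {
    "/payments": "https://payments.internal",
    "/accounts": "https://accounts.internal",
}

def route(path: str) -> str:
    """Return the backend that should serve a request path.

    Carved-out domains route to their standalone microservice; every other
    path still falls through to the monolith, so consumers see one unchanged
    API while migration proceeds domain by domain.
    """
    for prefix, backend in MICROSERVICES.items():
        if path == prefix or path.startswith(prefix + "/"):
            return backend
    return MONOLITH
```

Deprecating and retiring the monolith implementation then becomes a pure routing change: once every domain has an entry in the route table, the monolith simply receives no traffic.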

If you complete these steps, you will successfully replicate monolithic behavior in a standalone microservice! By repeating this process piece-by-piece, IT teams can slowly (and safely) retire reliance on a monolith.

Bringing Event-Driven Architecture into the Mix

Gibson Nascimento, Sensedia, explored the benefits of bringing EDA to legacy IT.

In addition to RESTful API-first microservices, what are some other ways we can empower our legacy applications? Gibson agrees that teams shouldn’t tackle monolith decomposition all at once. Another way to expose legacy without breaking the business, argues Gibson Nascimento, is through building an event-driven layer.

Behind most apps we use, there is a backend application delivering and processing information. This backend may be composed of multiple layers and components. A simple click to purchase a ticket, for example, could trigger many functions to occur, like payments, geolocation, security, backend information, and third-party data collection.

Lately, connected software architecture is adopting more asynchronous integrations. Many business moments, as Gibson calls them, are triggered by events. Transforming legacy IT with Event-Driven Architecture (EDA) could bring significant gains for internal use and partner integrations. One way to achieve this is by using an Event Broker.

Legacy Enablement with an Event Broker (Internal)

Microservices enable smaller deployments, reduced scope, and better scalability. An Event Broker extends these benefits, providing the ability to support volumes that legacy software can’t sustain.

According to Gibson, an Event Broker helps internal architecture by significantly reducing back-pressure. Take a 10-year-old application: it was likely not built to support thousands of transactions per second.

Event Broker platforms help organize requests by putting information into a queue to be processed. Defining a cadence of requests aids traffic management and is good for error management, Gibson says.
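The queueing idea can be sketched with a bounded in-memory queue: producers enqueue bursts of requests, while a single worker drains them at a pace the legacy backend can sustain. This is only an illustration using Python's standard library; a real broker such as Kafka or RabbitMQ adds persistence, acknowledgements, and retry semantics.

```python
import queue
import threading

# Bounded queue: when full, producers block instead of overloading the
# legacy backend -- this is the back-pressure relief Gibson describes.
requests = queue.Queue(maxsize=100)
processed = []

def legacy_worker():
    """Drain the queue one request at a time, standing in for a slow legacy call."""
    while True:
        item = requests.get()
        if item is None:          # sentinel: shut the worker down
            break
        processed.append(f"handled {item}")
        requests.task_done()

worker = threading.Thread(target=legacy_worker)
worker.start()

for i in range(5):                # a burst of incoming traffic
    requests.put(i)

requests.join()                   # wait until every request is processed
requests.put(None)
worker.join()
```

The cadence is set entirely by the consumer side, so the legacy system processes requests in order and at its own pace, and failed items can be requeued for error handling.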

Legacy Enablement with an Event Broker (External)

Whereas standard REST APIs enable synchronous communications (request-response), events enable asynchronous communications. These scenarios receive information when events are triggered — Gibson calls these business moments.
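The contrast with request-response can be sketched as a tiny publish-subscribe bus: consumers register interest in a business moment and are pushed information when it fires, rather than polling for it. The event name and handlers below are hypothetical.

```python
from collections import defaultdict

# Minimal in-process pub/sub: event name -> list of handler callables.
subscribers = defaultdict(list)

def subscribe(event: str, handler):
    """Register a handler to be notified when the event fires."""
    subscribers[event].append(handler)

def publish(event: str, payload: dict):
    """Fire a business moment: push the payload to every subscriber."""
    for handler in subscribers[event]:
        handler(payload)

# Two independent consumers react to the same business moment.
notifications = []
subscribe("ticket.purchased", lambda p: notifications.append(f"email {p['user']}"))
subscribe("ticket.purchased", lambda p: notifications.append(f"invoice {p['order']}"))

publish("ticket.purchased", {"user": "ana", "order": 42})
```

The publisher never knows who is listening, which is what lets new partners and experiences be added later without touching the legacy code that emits the event.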

In addition to streamlining internal transactions, Event Brokers bring external benefits to legacy architecture, said Gibson. They could help companies define new strategies, build new digital experiences, and reach new partners by tapping into new markets and new customers.

Strangler Pattern Tips

The Strangler Pattern is a great starting point for reducing risk. It enables an iterative and strategic process for reimagining monolithic capabilities as APIs. Teams could implement an Event Broker within the Strangler Pattern without changing the order of actions too much.

However, don’t expect modernization to be easy, reminded Erik. Teams should avoid reintroducing coupling. Also, carving out domains will be hard work. When delineating a domain, size and scope are variable.

For example, a single Payments API could technically have many smaller domains, such as deposit, funds check, batch processing, remittance, and so on. Things can get granular fast. A financial service for risk modeling, noted Gibson, could act as a standalone service that could evolve independently.

“One big failure of SOA was that it was all about exposing APIs,” said Erik. “SOA did not focus on how we manage the code that is exposing the APIs.”

Whether 1 or 1000 microservices serve an API, complexity must be hidden from the consumer. Now that modern management layers do focus on the code management aspect, you could essentially expose multiple microservices in a unified way, within the same API portal even. The same logic applies to business events, said Gibson.
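Hiding that complexity can be sketched as a single public route table that resolves each endpoint to whichever internal service happens to own it. The endpoint and service names here are hypothetical; the point is that the consumer-facing surface stays the same whether one backend or a thousand sit behind it.

```python
# One public API surface; the mapping to internal services is an
# implementation detail the consumer never sees.
PUBLIC_API = {
    "GET /orders":         "orders-service",
    "GET /orders/status":  "shipping-service",   # different backend,
    "POST /orders":        "orders-service",     # same public portal
}

def dispatch(method: str, path: str) -> str:
    """Resolve a public endpoint to the internal service that owns it."""
    key = f"{method} {path}"
    if key not in PUBLIC_API:
        raise KeyError(f"no such endpoint: {key}")
    return PUBLIC_API[key]
```

Splitting `shipping-service` out of `orders-service` later is invisible to consumers: only the right-hand side of the table changes, never the published endpoints.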

Lastly, start small and don’t over-engineer, reiterated Gibson. The Strangler Pattern avoids a Big Bang approach and privileges slow evolution. Similarly, while Kubernetes orchestration for microservices may seem appealing, you might not need a sophisticated deployment model right away. Let the microservices library grow first.

Incremental Legacy Transformation

Legacy infrastructure is typically pretty opaque. It’s not easy to externalize vital information for partners, customers, and internal use. However, by developing a second track around a specialized monolithic business capability and encouraging its use via APIs, any IT department can slowly and safely transition from a monolith to microservices. Do that enough, and you could shave off the monolithic dependency entirely. Furthermore, by utilizing new event-driven styles, architects are well-positioned to reimplement hidden IT value in a high-performance, optimized fashion.

Bill Doerrfeld

Bill Doerrfeld is a tech journalist and API specialist, focusing on API economy research and marketing strategy for developer programs. He is the Editor in Chief for Nordic APIs. He leads content direction and oversees the publishing schedule for the Nordic APIs blog. Bill personally reviews all submissions for the blog and is always on the hunt for API stories; you can pitch your article ideas on our Create With Us page. Follow him on Twitter, or visit his personal website.