From Monoliths To Microservices — And Beyond
Now that the hype around microservices is ending, what lessons have we learned? And what’s next?
The microservice architecture has reigned for many years. In its era, we’ve experienced the good, the bad, and the ugly. In this article, we’ll look at the lessons learned and explore new developments in the field of building future-proof applications.
This won’t be a history lesson but a practical journey through the present. We’ll look at the struggles of a development team facing rapid growth and uncertainty. Monoliths have a bad reputation, and microservices come with a cost. Migrating from one to the other often ends in a nightmare. What options do they have? Let’s dive in.
Suppose we need to build an application for a startup that sells cars using a unique formula. The MVP requirements are simple: keep a list of customers and manage orders. There’s limited time and money for the application’s development.
In this context, the monolithic architecture seems to be the best option because it’s the simplest and cheapest way to build an application. All features and components are placed in a single codebase, so there’s only one thing to worry about. It’s easy to develop, test, and deploy. Management and the development team unanimously decide to go for this option.
As time passes, the application gets frequently updated with new features, and the development team slowly grows. Business is doing great and is ready to expand its services with another new formula, but this time for renting cars. Of course, the application should fully support this new service. It’s up to the development team to make this happen. They have gathered to discuss the impact on the application and how to implement the new requirements in a maintainable fashion.
The monolithic architecture is simple but doesn’t scale well. Once the number of features grows and more developers are needed to keep up the pace, it becomes harder to avoid creating a Big Ball of Mud. Also, as the application grows, building, testing, and deploying it takes more and more time. Every change, even a simple one, has to go through the full process. CI/CD becomes a nightmare.
The development team wants to avoid this. But what are the options? Well… as many wise people would say, it depends. In any case, the application needs to be broken up into smaller, independent pieces. Each piece can then be developed, tested, and deployed by a separate team. Developers only have to focus on their piece of the puzzle, making it more manageable. Great stuff.
The biggest question, however, is how small the pieces need to be. The answer depends entirely on how much the sales and rental services overlap in terms of features and information. If there’s no overlap, the team can develop independent applications for both services. Besides having two codebases to worry about, there’s no additional complexity.
When there’s a little overlap (e.g., customer management), it’s still valuable to develop independent applications. A service-oriented architecture (SOA) will probably suffice here for coupling the applications. This architecture type allows (enterprise) organizations to compose a single system of multiple applications.
In this architecture style, applications are called (coarse-grained) services. For the implementation, the team can choose to make the rental service use the existing customer management implementation of the sales service. Alternatively, they can split off and move the customer management to its own service for better reusability.
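To make the first option concrete, here’s a minimal sketch of the rental service reusing the sales service’s customer management over a small HTTP API. The URL and endpoint shape are assumptions for illustration only:

```typescript
// Hypothetical sketch: the rental service reuses the sales service's
// customer API instead of duplicating customer management.

type Customer = { id: string; name: string; email: string };

const SALES_SERVICE_URL = 'http://sales.internal'; // assumed internal address

async function getCustomer(customerId: string): Promise<Customer> {
    // Coarse-grained SOA call: one service consuming another's API.
    const response = await fetch(`${SALES_SERVICE_URL}/customers/${customerId}`);

    if (!response.ok) {
        throw new Error(`Failed to fetch customer ${customerId}: ${response.status}`);
    }

    return await response.json() as Customer;
}

async function createRental(customerId: string, carId: string): Promise<void> {
    const customer = await getCustomer(customerId);
    console.log(`Renting car ${carId} to ${customer.name}`);
    // ...persist the rental in the rental service's own store
}
```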
Besides the two or three codebases, there’s also the interaction (coupling) to worry about. This makes it a bit more complex to set up and maintain the application. More on this later.
In case of a lot of overlap (e.g., users, customers, cars, orders, billing, etc.), the microservice architecture is likely a better fit. This architecture style is a specific form of SOA that maximizes flexibility and reusability by using smaller (fine-grained) services, allowing an application to be composed of feature-oriented services.
For the implementation, the team can build independent services for the user, customer, sales, and rental features, and more. Besides the many codebases and interactions, there’s also the team division (responsibilities) to worry about. This adds a lot of complexity to the setup and maintenance of the application.
A lot of the complexity comes with a characteristic bound to any service-oriented architecture: it’s a distributed system. This type of system is not known for being simple or cheap. It brings a lot of infrastructure overhead and additional services required for running the individual services and providing the interaction between them.
Also, managing data transactions between services is a tough nut to crack. The math is simple: more services equal more overhead and complexity.
The interaction between services can be organized in several ways. The simplest option is “orchestration,” where a process gets executed (conducted) step-by-step in a single thread by one of the services or a workflow engine. This centralizes the control of services, making it easy to build, maintain, and debug. On the other hand, it introduces a single point of failure and adds coupling between the services. When done wrong, the independence of services is broken, and a Big Ball of Mud lurks.
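As a rough sketch, orchestrating an order placement could look like the following, with hypothetical service clients standing in for the real ones:

```typescript
// Minimal orchestration sketch: one orchestrator drives the process
// step-by-step by calling the other services directly.
// The service interfaces below are hypothetical placeholders.

interface CustomerService { reserveCredit(customerId: string, amount: number): Promise<void>; }
interface BillingService  { charge(customerId: string, amount: number): Promise<string>; }
interface FleetService    { assignCar(orderId: string): Promise<string>; }

class OrderOrchestrator {
    constructor(
        private customers: CustomerService,
        private billing: BillingService,
        private fleet: FleetService
    ) {}

    // Central control: easy to follow and debug,
    // but every step couples the orchestrator to another service.
    async placeOrder(orderId: string, customerId: string, amount: number): Promise<string> {
        await this.customers.reserveCredit(customerId, amount);
        const invoiceId = await this.billing.charge(customerId, amount);
        const carId = await this.fleet.assignCar(orderId);

        console.log(`Order ${orderId} completed (invoice ${invoiceId}, car ${carId})`);
        return carId;
    }
}
```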
An alternative that overcomes these risks is “choreography.” It decentralizes the control of services by applying an event-driven approach. In this approach, each service makes its own decisions on when (not) to act. Like dancers, they listen to the music (event stream) and react to their cue. This strongly decouples the services, making the application highly adaptable. Events can be stored for later consumption, making processes repeatable and resumable.
This makes the application highly fault-tolerant. On the other hand, the strong decoupling makes the application harder to build, debug, understand, and monitor. When not implemented and documented properly, figuring out what’s going on becomes nearly impossible.
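A minimal sketch of the choreographed version, with a tiny in-memory event bus standing in for a real broker and all names purely illustrative:

```typescript
// Minimal choreography sketch: services react to events instead of
// being called directly. The in-memory bus stands in for a real
// message broker.

type Event = { type: string; payload: Record<string, unknown> };
type Handler = (event: Event) => void;

class EventBus {
    private handlers = new Map<string, Handler[]>();

    subscribe(type: string, handler: Handler): void {
        const list = this.handlers.get(type) ?? [];
        list.push(handler);
        this.handlers.set(type, list);
    }

    publish(event: Event): void {
        for (const handler of this.handlers.get(event.type) ?? []) {
            handler(event);
        }
    }
}

const bus = new EventBus();

// Each service decides for itself when to act on its cue.
bus.subscribe('order-placed', (event) => {
    console.log('Billing: charging customer for order', event.payload.orderId);
    bus.publish({ type: 'payment-received', payload: event.payload });
});

bus.subscribe('payment-received', (event) => {
    console.log('Fleet: assigning a car to order', event.payload.orderId);
});

// Kick off the process; no service knows about the others.
bus.publish({ type: 'order-placed', payload: { orderId: '42' } });
```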
With all options explored, the team can start weighing them to come to a well-thought-out solution. This solution should be future-proof and affordable. Keeping the monolith isn’t an option because it doesn’t scale well enough. An event-driven microservices approach does, but it is the most complex and expensive option.
So, the team wants to find the most cost-effective solution somewhere in between. They know that finding the right spot heavily depends on the business needs (overlap), and these needs might not be the same tomorrow. But how do you design for a spot that’s constantly moving?
In an ideal world, applications can start as a monolith and painlessly progress to a service architecture. But the world hasn’t been ideal, and this progression has been a big problem for many organizations. Breaking up a monolith involved a lot of heavy refactoring and, in more than a few cases, a complete rewrite of the application.
There are multiple causes at the root of the problem. For example, programming languages lacked support for strong modularization, allowing us to create a Big Ball of Mud. Also, architectures encouraged using a technical decomposition (e.g., MVC, layered), which is very different from a typical service decomposition (feature-oriented). Both make unraveling the features and translating them into services very hard. Adding multi-team development and high pressure to the mix completes the disaster recipe.
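To make the contrast concrete, here’s a simplified sketch of the two decompositions (file names are purely illustrative):

```
// Technical decomposition (layered/MVC): one feature is smeared
// across layers, so carving it out later means touching everything.
src/
  controllers/    customer-controller.ts    order-controller.ts
  services/       customer-service.ts       order-service.ts
  repositories/   customer-repository.ts    order-repository.ts

// Feature-oriented decomposition (vertical slices): everything a
// feature needs lives together and can be split off as one unit.
src/
  customers/      api.ts    logic.ts    data.ts
  orders/         api.ts    logic.ts    data.ts
```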
Luckily, time hasn’t been standing still, and technology has evolved. Languages have adopted support for strong modularization (e.g., Java modules, ES modules), architectures started to encourage feature-based decomposition (e.g., vertical slicing), and new tooling emerged to improve multi-team development (e.g., monorepos). This strong combination allows us to build evolvable modular monoliths.
In a modular monolithic architecture, an application comprises small, independent, feature-oriented modules. This might sound familiar because it’s almost a complete analog of the microservice architecture. The only difference is that local modules are used instead of distributed services.
Simply put, it’s a non-distributed version of the microservice architecture. It has all the flexibility and reusability advantages but without the complexity of a distributed system. Because both architectural styles are so similar, the same decomposition and interaction strategies can be used. This means that, when done properly, a module can be transformed into a service and the other way around.
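As a rough sketch, a feature-oriented customers module could expose a small public API that the rest of the monolith uses, just like a service would. The file layout and names are assumptions for illustration:

```typescript
// src/customers/index.ts — the module's only public surface.
// Internals (service.ts, model.ts) stay hidden behind this file.
export { registerCustomer, getCustomer } from './service';
export type { Customer } from './model';

// src/orders/place-order.ts — another module uses the customers
// module through its public API only. Today this is a local call;
// after a split-off it could become a remote one.
import { getCustomer } from '../customers';

export async function placeOrder(customerId: string, carId: string): Promise<void> {
    const customer = await getCustomer(customerId);

    if (customer === undefined) {
        throw new Error(`Unknown customer ${customerId}`);
    }

    console.log(`Placing order for ${customer.name}: car ${carId}`);
    // ...order-specific logic stays inside the orders module
}
```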
Now, let’s go back to the team’s question on how to design for the future while minimizing costs. By modularizing the application, they can break the big application into small independent pieces (services) while keeping a monolithic architecture for as long as possible.
Modules can be split off and transformed into services when required for the right reasons, like independent deployments, fault tolerance, etc. They can be split off per group (SOA), one by one (microservices), or a mix of both. This suits the team’s needs well because they’ve already concluded that they don’t need a service architecture yet.
Transforming a module into a service is fairly easy because its internal structure doesn’t change. But there’s still quite some work involved to make it happen, like setting up a new project, endpoints (or subscribers), requests (or publishers), operations, etc.
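As a rough illustration of that extra plumbing, here’s what a split-off of the hypothetical customers module could look like: a new endpoint on the service side and a thin remote client with the same signature on the caller side. It uses Node’s built-in http module, and all names are illustrative:

```typescript
// customer-service/server.ts — the split-off service exposes the
// module's existing function over HTTP. The module's internals
// don't change; only this endpoint is new.
import { createServer } from 'node:http';
import { getCustomer } from './customers'; // the unchanged module code

createServer(async (request, response) => {
    const match = request.url?.match(/^\/customers\/([^/]+)$/);

    if (request.method === 'GET' && match) {
        const customer = await getCustomer(match[1]);
        response.writeHead(customer ? 200 : 404, { 'Content-Type': 'application/json' });
        response.end(JSON.stringify(customer ?? { error: 'not found' }));
        return;
    }

    response.writeHead(404);
    response.end();
}).listen(3001);

// monolith/src/customers/index.ts — the monolith replaces the local
// implementation with a thin remote client that keeps the same
// signature, so callers like the orders module don't notice the split.
import type { Customer } from './model';

export async function getCustomer(id: string): Promise<Customer | undefined> {
    const response = await fetch(`http://localhost:3001/customers/${id}`);
    return response.ok ? (await response.json()) as Customer : undefined;
}
```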
Depending on the number of splits, this can still lead to a lot of overhead. Also, the number of codebases to worry about starts growing again. This concerns the team because the business isn’t sitting still, and new ideas pop up weekly. The team finds it very likely that they’ll need to do a lot of split-offs in the short term. So the question has been raised again: wouldn’t they be better off starting with a microservice architecture right away?
The team’s question is a difficult one to answer. Although the modular monolith makes the architecture more agile, the transformation process to a service architecture can still be a hurdle. They can go for the modular monolith if they can overcome this hurdle. Otherwise, they need to go for microservices to play it safe.
Overcoming this hurdle requires a solution that separates the development model from the deployment model in a way that the application can be built as a modular monolith and deployed as services without any development effort. The only problem is that such a solution doesn’t exist, leaving the team with a hard choice.
I have often struggled with this decision-making process, always striving to find the best possible solution. I adopted feature-oriented (rather than technical) decomposition strategies and monorepos quite early on and attempted to maintain a strong modularization pattern before languages even began to adopt it. But I’ve learned that this wasn’t enough, and it motivated me to do something about it.
For the past few years, I’ve been developing a distributed runtime solution that allows users to build a monolith and deploy it as services. It refracts the monolith into services at the runtime level, like a prism does with light.
The services are defined by configuration, and the communication between the services is fully automated. Although I started in Java, I quickly found that a backend-only solution wasn’t enough because I mostly build full-stack applications. So, I switched to TypeScript and extended the capabilities to include the frontend. This enabled me to combine the frontend and backend in a single codebase without worrying about the end-to-end communication and (future) deployment needs.
The project is called Jitar (an abbreviation of Just-In-Time-ArchitectuRe) and is fully open source. Although it’s still a work in progress, it’s already fully functional for most applications. More information can be found on its website and GitHub. I’ve also written an article about my view on just-in-time architecture. It overlaps with this article but is written from my personal experience.
With this, the article has come to an end. A big thanks for reading. I hope you’ve enjoyed it and maybe learned something. I’m always open to feedback, and I’m happy to answer any questions.
Bonus: Technology has also evolved on the infrastructure level. Service meshes, serverless, and edge computing have emerged to help bring down the complexity of distributed systems. This is outside the scope of this article, but it’s good to know that evolving an application into services has become less painful. Cheers!