Microservices solve a myriad of problems for developers, allowing them to deliver software solutions that solve complex problems for their organizations or clients. But as we’re fond of saying, “There’s no such thing as a free lunch.” Microservices don’t solve everything, and they come with their own set of issues developers have to work around or solve. Perhaps they’re not “bad,” but there are things to keep in mind if you want your microservice architecture to be a success.
Don’t do microservices just for the sake of doing microservices
First, remember that not every monolith is so large and complicated that it needs to be decomposed. Some are cohesive and well-architected, making them easy to code against. As the saying goes, if it’s not broken, don’t try to fix it. Further, remember that Netflix, Amazon, Google, and other “giants” took the lead in defining the early best practices for microservices. However, a lot of these practices only make sense if you’re doing billions of transactions and have an army of thousands of developers at your disposal. That’s why it’s important to frame and scale what you do with microservices to the needs and resources of your own company and the applications you are trying to build and maintain.
Coding may be simpler—but beware of multiple services interacting
When you introduce multiple services, you must account for the performance implications of their interactions. Let’s use an Order service as an example. Say you want to look for patterns in orders from male shoppers between 30 and 35. You might consider first going to the Customer service to get the list of all shoppers that match these criteria, then sending those to the Order service to get all their orders. But if there are a million customers in this age range and only 500 that have placed an order, that approach can be horribly inefficient and slow.
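The difference between the two query orders can be sketched with in-memory stand-ins for the two services. All class and method names here are hypothetical, not an actual API:

```python
class CustomerService:
    def __init__(self, customers):
        self.customers = customers  # {id: {"gender": ..., "age": ...}}

    def find_ids(self, gender, min_age, max_age):
        return [cid for cid, c in self.customers.items()
                if c["gender"] == gender and min_age <= c["age"] <= max_age]

    def filter_ids(self, ids, gender, min_age, max_age):
        # Check only the IDs handed to us, not the whole customer base.
        return [cid for cid in ids
                if (c := self.customers.get(cid))
                and c["gender"] == gender and min_age <= c["age"] <= max_age]


class OrderService:
    def __init__(self, orders):
        self.orders = orders  # list of {"customer_id": ..., "item": ...}

    def distinct_customer_ids(self):
        return {o["customer_id"] for o in self.orders}

    def orders_for(self, cid):
        return [o for o in self.orders if o["customer_id"] == cid]


def slow_report(cust, ords):
    # Fan out from the larger set: one Order-service call per matching
    # customer, even for the vast majority who never placed an order.
    return [o for cid in cust.find_ids("male", 30, 35)
            for o in ords.orders_for(cid)]


def fast_report(cust, ords):
    # Start from the far smaller set of customers who actually have
    # orders, then filter that short list with one Customer-service call.
    with_orders = ords.distinct_customer_ids()
    return [o for cid in cust.filter_ids(with_orders, "male", 30, 35)
            for o in ords.orders_for(cid)]
```

Both functions return the same orders; the difference is that the slow path makes an Order-service call per matching customer (potentially a million), while the fast path is bounded by the number of customers who have ever ordered.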
Be aware of de-normalized data and asynchronous updates
One way to address the above issue is to de-normalize the customer data into the order data. However, what happens when a key piece of customer data changes, such as someone getting married and changing their name? If your Order history service relies on de-normalized customer data, it will continue to show the old last name, even after the Customer name has been updated. At some point, the Customer service will replicate and update the de-normalized data used by the Order service, but that may take time. While that may be fine for reporting, it isn’t fine for real-time interactions with a customer.
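A minimal sketch of that lag, assuming an event-driven replication scheme (the event shape and store layout are illustrative, not a specific product’s API):

```python
class OrderStore:
    """The Order service's own copy of customer data, de-normalized per order."""

    def __init__(self):
        self.orders = {}  # order_id -> {"customer_id": ..., "customer_name": ...}

    def place_order(self, order_id, customer_id, customer_name):
        self.orders[order_id] = {"customer_id": customer_id,
                                 "customer_name": customer_name}

    def on_customer_name_changed(self, event):
        # Runs whenever the event eventually arrives; until then, every
        # order still shows the old name.
        for order in self.orders.values():
            if order["customer_id"] == event["customer_id"]:
                order["customer_name"] = event["new_name"]


store = OrderStore()
store.place_order("o-1", "c-42", "Jane Smith")

# The Customer service publishes the change; delivery may lag by
# seconds or hours depending on the messaging infrastructure.
event = {"customer_id": "c-42", "new_name": "Jane Jones"}

stale = store.orders["o-1"]["customer_name"]   # still the old name
store.on_customer_name_changed(event)          # replication catches up
fresh = store.orders["o-1"]["customer_name"]   # now the new name
```

The window between `stale` and `fresh` is exactly the eventual-consistency gap the paragraph describes: tolerable for a nightly report, not for a support agent looking at the customer’s screen.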
Deployments can be tricky if you’re not disciplined
If your team is used to deploying a single service (think monolith), you might now have 15 microservices to deploy. What if you deploy 14 services and the 15th one fails? That’s why backward compatibility, one of the key tenets of microservices patterns and continuous deployment, is so important. If you refactor a monolith, you don’t have to worry about backward compatibility: you can just deploy the whole thing. When you refactor a microservice, you can’t do it without maintaining backward compatibility, or the other services that rely on it can (and will) break.
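One common way to stay backward compatible is to make only additive schema changes and write consumers as "tolerant readers." The payload shapes below are illustrative:

```python
# Version 1 of an Order-service response, and version 2 with a new
# field added. Nothing was renamed or removed between versions.
V1_RESPONSE = {"order_id": "o-1", "total": 99.95}
V2_RESPONSE = {"order_id": "o-1", "total": 99.95,
               "currency": "USD"}  # additive change only


def parse_order(payload):
    # Tolerant reader: read only the fields you need, supply a default
    # for anything new, and ignore fields you don't recognize.
    return {
        "order_id": payload["order_id"],
        "total": payload["total"],
        "currency": payload.get("currency", "USD"),  # safe default
    }


# The same consumer works against the old and the new service version,
# so the 14 already-deployed services keep running even if the 15th
# deployment fails and rolls back.
assert parse_order(V1_RESPONSE) == parse_order(V2_RESPONSE)
```

Renaming or removing a field would have broken every consumer that still expected the old shape; adding with a default breaks nothing in either direction.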
It can be hard to know how big (or small) your microservices should be
Knowing how big (or how small) your microservices should be is really hard. There are a lot of factors you have to take into account, but there’s no one deterministic method you can use to decompose and set up your microservices architecture. If you do it right, it’s a great pattern. If you do it poorly, it’s worse than the monolith. Think of it this way. If a company the size of Netflix has 1,000 or even 2,000 software engineers, and they maintain say 1,000 services, they’ve got one or two people for every service. They’re in great shape. On the other hand, if you have five engineers, you can’t expect to decompose your business so finely that you have 300 services. Five people maintaining 300 services is impossible.
Service discovery is problematic
If you have a monolith such as an e-commerce application, you communicate with it at my-e-commerce.com or some such domain name. Easy. If you have a microservice that talks to seven other services, how do you know where they are? Are you going to use a domain name? What if you need to migrate a service elsewhere? What if more capacity is added to the cluster? How are you going to load balance it? What if one service is down? And the questions go on. The synchronization and orchestration of how you discover routes between services, where services are, and whether they’re up or down is significantly more complicated with microservices.
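The questions above are what a service registry answers. In practice this role is played by tools such as Consul, etcd, or Kubernetes DNS; the registry below is a simplified stand-in to show the moving parts (registration, health, and load balancing):

```python
class Registry:
    def __init__(self):
        self.instances = {}   # service name -> list of "host:port" strings
        self.healthy = set()  # instances that passed their last health check
        self._rr = {}         # round-robin cursor per service name

    def register(self, service, addr, healthy=True):
        self.instances.setdefault(service, []).append(addr)
        if healthy:
            self.healthy.add(addr)

    def mark_down(self, addr):
        # Called when a health check fails.
        self.healthy.discard(addr)

    def resolve(self, service):
        # Load balance across healthy instances only, round-robin.
        live = [a for a in self.instances.get(service, [])
                if a in self.healthy]
        if not live:
            raise LookupError(f"no healthy instance of {service}")
        cursor = self._rr.get(service, 0)
        self._rr[service] = cursor + 1
        return live[cursor % len(live)]


reg = Registry()
reg.register("orders", "10.0.0.5:8080")
reg.register("orders", "10.0.0.6:8080")
reg.mark_down("10.0.0.5:8080")   # failed a health check
addr = reg.resolve("orders")     # callers route around the dead instance
```

Every question in the paragraph maps onto a method here: migration and added capacity are re-registration, load balancing is the round-robin cursor, and a downed service is a `mark_down` followed by `resolve` skipping it. Keeping that registry itself consistent and available is the hard part real tools solve.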
Reporting and analytics can be problematic, too
Most analytics and business intelligence solutions use a central repository for all the data, such as an operational data store, a data warehouse, or a data lake. What often drives complexity in BI is how many data sources it has to pull from. If you take three monoliths and decompose them into 70 services, that’s a lot more data sources your BI has to pull from, covering a much wider variety of data.
For analytics, you should avoid using a de-normalized copy of the data, even though there are situations where your microservices may need it for efficiency. If you de-normalize your User or Customer data into your Order service, you should never have that service send such data to your analytics or BI component: it’s not the system of record. When you decompose your app into microservices, you have to be aware of which data is from the system of record as opposed to a copy being used for a particular service or use case.
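One lightweight way to enforce this is to tag, in each service, which fields it owns versus which it merely caches from another service, and export only the owned fields to BI. The field names and tagging scheme below are hypothetical:

```python
# Which fields the Order service is the system of record for ("owned"),
# versus de-normalized copies cached from the Customer service ("copy").
ORDER_SCHEMA = {
    "order_id":      "owned",
    "total":         "owned",
    "customer_id":   "owned",  # the reference itself belongs to orders
    "customer_name": "copy",   # cached from the Customer service
}


def to_analytics(order):
    # Export only the fields this service is authoritative for; BI should
    # get customer_name from the Customer service, the system of record.
    return {k: v for k, v in order.items()
            if ORDER_SCHEMA.get(k) == "owned"}


order = {"order_id": "o-1", "total": 99.95,
         "customer_id": "c-42", "customer_name": "Jane Smith"}
exported = to_analytics(order)  # the de-normalized name is dropped
```

The BI layer then joins on `customer_id` against the Customer service’s own export, so a stale cached name can never leak into reports.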