I have been building micro-service enterprise applications my entire career – 25 years as of this writing. Over the years, I have learned that there is a balance to strike between pure adherence to design patterns and practical reality.
Most micro-service architecture articles, such as this one about what they do at Netflix, consider enterprise-scale architectures. Enterprise architects must consider the enterprise as a whole, but it is independent, self-contained applications that make up an enterprise architecture. Each such application contributes APIs to the enterprise, but microservices drive its internal workings.
Loose coupling with stable shared contracts
A change in one micro-service should not require changing others. By declaring and adhering to API contracts, you balance continuous evolution and backward compatibility.
The API contracts are not merely human-readable documentation, though human readability is essential. The contracts must be machine-readable and usable for runtime and static validation of API requests and responses.
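To make the idea of a machine-readable contract concrete, here is a minimal sketch of runtime validation against a declared schema. The schema shape, field names, and `validate` helper are all hypothetical; in practice you would generate or load the schema from your contract tooling rather than hand-write it.

```python
# Minimal sketch of runtime contract validation. The schema format and
# the user_schema contract below are hypothetical illustrations, not a
# real contract standard.

def validate(payload: dict, schema: dict) -> list[str]:
    """Return a list of contract violations; an empty list means valid."""
    errors = []
    for field, expected_type in schema.items():
        if field not in payload:
            errors.append(f"missing required field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(payload[field]).__name__}"
            )
    return errors

# Hypothetical contract for a user-profile API response.
user_schema = {"id": int, "email": str, "active": bool}
```

The same declaration can drive static checks in CI: validate recorded request and response fixtures against the contract on every build, so a breaking change fails before deployment rather than in production.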
If you standardize your application ecosystem on a specific programming language and platform, then use that language to declare and reuse all interfaces. On the other hand, if you have a multi-lingual architecture in which components are written in different languages, you can utilize a cross-platform mechanism for declaring data structures and generating code, such as Apache Thrift, Protocol Buffers, or OpenAPI/Swagger.
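As a sketch of the cross-platform approach, here is what a shared data structure might look like as an OpenAPI 3 component schema. The `UserProfile` type and its fields are hypothetical; the point is that one declaration feeds code generators for every language in the ecosystem.

```yaml
# Hypothetical OpenAPI 3 fragment: a single shared schema from which
# typed models and clients can be generated per language.
components:
  schemas:
    UserProfile:
      type: object
      required: [id, email]
      properties:
        id:
          type: integer
        email:
          type: string
        active:
          type: boolean
```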
While a separate data store per micro-service may seem like a good idea from the micro-service perspective, it inevitably turns out to be a horrible idea from the standpoint of data integrity, transactions, and reporting.
I do not subscribe to the philosophy that each microservice should have its own datastore. Instead, I prefer an architecture in which the datastore is abstracted away from all microservices.
I highly recommend using GraphQL for queries and mutations as an abstraction layer. The abstraction layer can be a set of micro-services hidden behind a GraphQL URL endpoint. The underlying data store itself can be flexible and adapted as the project evolves without having to rebuild any of the business logic in the micro-services.
Moreover, GraphQL imposes a degree of discipline on managing the backward compatibility of the logical data model by providing tools for the continuous evolution of the schema:
While nothing prevents a GraphQL service from being versioned just like any other REST API, GraphQL takes a strong opinion on avoiding versioning by providing the tools for the continuous evolution of a GraphQL schema.
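In practice, this continuous evolution leans on GraphQL's built-in `@deprecated` directive: rather than cutting a `/v2` endpoint, the old field stays in the schema, marked deprecated, while clients migrate. A sketch, with hypothetical type and field names:

```graphql
# Old clients keep reading `name`; tooling warns them to migrate.
# New clients adopt `fullName`. No version bump required.
type User {
  id: ID!
  fullName: String
  name: String @deprecated(reason: "Use `fullName` instead.")
}
```

Once query logs show no remaining traffic on the deprecated field, it can be removed safely.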
Monorepo with dedicated micro-service build and deployment lifecycle
I recommend placing the entire application ecosystem into a single Git monorepo. I will discuss structuring such a monorepo in another post.
The reason for a monorepo is that it facilitates code reuse and marks a snapshot of the microservice ecosystem that is known to work together. That does not mean that all microservices are always built and deployed together.
Though all of your microservices will live in the same monorepo, each needs its own lifecycle. Following the loose coupling principle described above, changing one microservice should not require changes to others under most circumstances. Only modified microservices get deployed together.
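Selective deployment from a monorepo usually boils down to mapping changed files to the services that own them. A minimal sketch, assuming a hypothetical `services/<name>/...` directory layout and a change list such as the output of `git diff --name-only`:

```python
# Hypothetical sketch: decide which services to rebuild from a change
# set. Assumes each service lives under services/<service-name>/.

def changed_services(changed_files: list[str],
                     service_root: str = "services") -> set[str]:
    """Return the set of service directories touched by a change set."""
    services = set()
    for path in changed_files:
        parts = path.split("/")
        if len(parts) >= 2 and parts[0] == service_root:
            services.add(parts[1])
    return services

# Example change set: only billing and auth would be rebuilt and
# redeployed; the docs change triggers no deployment.
diff = ["services/billing/handler.py", "services/auth/api.py", "docs/README.md"]
```

CI systems typically express the same idea with per-path pipeline triggers; the function above is just the logic made explicit.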
A single microservice may perform more than one task. I think it is overkill to limit microservices to one individual function. The tasks should be related and share the following characteristics:
- The tasks are related and tightly coupled. Usually, such tasks are modified together. If you frequently find yourself changing multiple microservices at the same time, it is a good indication that they should either be merged into a single microservice or have their coupling rethought;
- The tasks have similar performance and scaling characteristics. Suppose your microservice serves 5 APIs, of which three always complete in 500 milliseconds and must serve thousands of requests per second. One requires 20 seconds to run but only runs once an hour, and another is a long-running asynchronous task that runs overnight. In this example, three tasks share code and have similar performance characteristics, while the other two do not. That is 3 independent microservices;
- The tasks have similar development lifecycles. Suppose your microservice serves 5 seemingly related APIs. Four of these rarely change, but one changes with every release. As a result, changes to that one API force you to rebuild and redeploy the other four. It is time to refactor;
- Periodic reviews of performance, scalability, and development lifecycle. You should periodically review the data from your cloud provider and your commit history to see whether you need to refactor or combine microservices. You do not need to stick to some permanent architecture. Microservice boundaries should stay fluid and easy to move, depending on performance, scalability, and development lifecycle characteristics.
Each microservice is its own deployable asset
Though the entire ecosystem lives in the same monorepo, each microservice is its own deployable asset. It can be a container or an AWS Lambda function. Only microservices that are modified should be rebuilt and redeployed.
A choice between a container or a Lambda is something I’d like to explore in another post.
Strive for stateless micro-services
Micro-services should be stateless. There may be state associated with interacting with a micro-service, but the micro-service itself should not be the one to maintain it.
My approach is to pass the state between interactions in the form of a session object. You could also use something like Redis, but I do not like putting anything in Redis that cannot be restored from elsewhere, at least not without enabling persistence; I use Redis exclusively as an LRU cache. Using Redis as durable storage is another topic to explore in a future post.
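The session-object pattern can be sketched as a pure handler: every interaction receives the current session and returns the updated one, so the service process holds nothing between requests. The cart example and field names below are hypothetical.

```python
# Minimal sketch of a stateless interaction: state travels with the
# request as a session object instead of living inside the service.
# The shopping-cart session shape is a hypothetical illustration.

def handle_add_to_cart(session: dict, item: str) -> tuple[dict, dict]:
    """Pure handler: (session, request) -> (response, new session)."""
    new_session = dict(session)  # never mutate the caller's state in place
    new_session["cart"] = new_session.get("cart", []) + [item]
    response = {"cart_size": len(new_session["cart"])}
    return response, new_session

session = {}  # an empty session accompanies the first request
resp, session = handle_add_to_cart(session, "book")
resp, session = handle_add_to_cart(session, "pen")
```

Because the handler is pure, any replica of the service can serve any request, which is what makes horizontal scaling and Lambda-style deployment trivial.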
What I described in this post is my philosophy for building microservice architectures. I do not consider myself a purist, and my views are very pragmatic. I do not like team silos, and I like architectures that are natural to create and evolve in practice and do not impose contrived constraints. The best practices I described above are based on years of practical hands-on experience.