Microservices and Data Integrity


  • Olivier Collard
  • Published: 29 March 2019

 

The microservice architectural style is seen by many businesses today as an effective approach for delivering scalable and modular technology applications. Unlike a traditional ‘monolithic’ application, however, it requires careful handling of distributed state. In this article, I will explain how the ‘event sourcing’ and CQRS patterns help address this challenge.

Ensuring that microservices maintain a consistent state despite asynchronous calls or service failures tends to be particularly complicated when reworking an existing ‘monolith’ that is backed by a single data store.

The primary benefit of the microservices model is its ability to organize development teams according to bounded contexts. Bounded contexts are defined based on the functional concepts (or sub-models) a system deals with, and as such provide a straightforward way to organize a system’s features and to support its development and deployment roadmaps or lifecycles.

However, when re-architecting legacy applications, distributing the logic alone is not enough. A shared database couples services tightly around the data model, creating a risk of inconsistencies from concurrent data access and requiring careful impact analysis and synchronization before any service upgrade. In practice, a shared-database approach also tends to require downtime for upgrades, which defeats much of the benefit of microservices.

To avoid this happening, and to ensure microservices have a truly autonomous lifecycle, it is essential that each microservice owns its data exclusively. This comes with significant constraints when designing new systems, and can have an even more significant impact when migrating or transforming an existing application.

While a traditional ‘monolithic’ three-tier architecture would rely on ACID transactions (Atomicity, Consistency, Isolation and Durability) and RDBMS (Relational Database Management System) normalization to ensure integrity and avoid data duplication, the microservice ‘share-nothing’ approach does the following:

  • It requires careful engineering of operations spanning multiple microservices to ensure coherence (in a shopping application, for example, the system must reliably withdraw items from the stock service once the order has been validated by the payment service).
  • It results in de-normalized, distributed schemas where part of the information is duplicated between services to keep data local and reduce service dependencies, but this in turn raises subtle consistency issues due to data propagation delays (illustrated below).
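
To illustrate the second point, here is a minimal sketch, assuming a hypothetical product catalogue service that publishes ProductUpdated events: the order service keeps its own de-normalized copy of product data, so order processing never reaches across service boundaries or into a shared database at read time.

  from dataclasses import dataclass

  # Hypothetical event published by a product catalogue service.
  @dataclass
  class ProductUpdated:
      product_id: str
      name: str
      unit_price: float

  class OrderService:
      def __init__(self):
          # The order service's own store: a duplicated, local copy of
          # product data, refreshed from catalogue events.
          self._product_cache = {}  # product_id -> ProductUpdated

      def on_product_updated(self, event: ProductUpdated) -> None:
          # The copy may lag behind the catalogue service until the
          # event propagates (hence the consistency issues mentioned above).
          self._product_cache[event.product_id] = event

      def create_order_line(self, product_id: str, quantity: int) -> dict:
          product = self._product_cache[product_id]
          # Name and price are read locally, not via a cross-service call
          # or a shared database.
          return {
              "product_id": product_id,
              "name": product.name,
              "quantity": quantity,
              "line_total": product.unit_price * quantity,
          }
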

The traditional approach to keeping all resources in a transaction consistent is to use distributed transactions. This is the purpose of frameworks implementing the XA standard (eXtended Architecture) and its two-phase commit protocol. However, distributed transactions limit scalability and availability, because they rely on a transaction co-ordinator and require all participants to be available in order to execute properly.
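
To make the availability constraint concrete, the following is a toy, in-memory sketch of the two-phase commit flow (the Participant class and the service names are hypothetical, not a real XA implementation). The co-ordinator can only commit when every participant is reachable and votes yes in the prepare phase; a single unavailable participant aborts the whole transaction.

  class Participant:
      """Hypothetical transaction participant (e.g. one service's database)."""
      def __init__(self, name, available=True):
          self.name = name
          self.available = available

      def prepare(self):
          # Phase 1: vote. An unreachable participant cannot vote yes.
          return self.available

      def commit(self):
          print(f"{self.name}: committed")

      def rollback(self):
          print(f"{self.name}: rolled back")

  def two_phase_commit(participants):
      # Phase 1 (prepare): every participant must be reachable and vote yes.
      if all(p.prepare() for p in participants):
          # Phase 2 (commit): only reached when all votes are yes.
          for p in participants:
              p.commit()
          return True
      # Any "no" vote, or an unreachable participant, aborts everything.
      for p in participants:
          p.rollback()
      return False

  # A single unavailable participant forces the whole transaction to abort.
  two_phase_commit([Participant("orders"), Participant("stock", available=False)])

The synchronous dependency on every participant, and on the co-ordinator itself, is precisely what limits scalability and availability.
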

A better approach is to implement the Saga pattern, which orchestrates a sequence of local transactions. If a step fails, the Saga implementation is responsible for handling the error explicitly, reverting or compensating the changes executed so far. Compared to the traditional RDBMS and monolith approach, implementations must account for the lack of isolation (the ‘I’ in ACID: concurrent transactions may conflict) and must design the compensation logic.
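
The following is a minimal sketch of an orchestration-style Saga, with hypothetical step and compensation functions (a production implementation would also persist the saga's progress). Each local transaction is paired with a compensating action, and when a step fails the orchestrator runs the compensations for the steps already completed, in reverse order.

  def charge_payment():
      # Hypothetical failing step, to demonstrate compensation.
      raise RuntimeError("payment service unavailable")

  # Each saga step pairs a local transaction with a compensating action.
  saga_steps = [
      ("create order",   lambda: print("order created"),
                         lambda: print("order cancelled")),
      ("reserve stock",  lambda: print("stock reserved"),
                         lambda: print("stock released")),
      ("charge payment", charge_payment,
                         lambda: print("payment refunded")),
  ]

  def run_saga(steps):
      completed = []
      for name, action, compensate in steps:
          try:
              action()                       # local transaction in one service
              completed.append((name, compensate))
          except Exception as error:
              print(f"step '{name}' failed: {error}; compensating")
              # Undo the steps that already committed, in reverse order.
              for done_name, undo in reversed(completed):
                  undo()
              return False
      return True

  run_saga(saga_steps)
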

Another challenge with the Saga pattern is the risk of partial execution at the service level. Partial execution happens when a service commits its local transaction but fails when calling the next service, or when it successfully calls the next service but then fails to commit its own local changes. Correct execution requires that either all operations occur or none do; this property is known as atomicity.
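
The hazard is easy to reproduce with a naive ‘dual write’, sketched below using in-memory stand-ins for the local database and the downstream call. The local commit and the remote call are two separate operations, so a crash between them leaves one side done and the other not.

  local_db = []        # stand-in for the service's own database
  remote_calls = []    # stand-in for calls made to the next service

  def place_order_naively(order_id, crash_between_steps=False):
      # Step 1: commit the local change.
      local_db.append(order_id)
      if crash_between_steps:
          # A crash here leaves the order committed locally while the
          # stock service was never told about it: partial execution.
          raise RuntimeError("process crashed after local commit")
      # Step 2: tell the stock service to withdraw items.
      remote_calls.append(("withdraw_stock", order_id))

  try:
      place_order_naively("order-42", crash_between_steps=True)
  except RuntimeError:
      pass

  print(local_db)      # ['order-42']  -> local state changed
  print(remote_calls)  # []            -> downstream never notified
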

Atomicity can be addressed by event sourcing and CQRS (Command Query Responsibility Segregation) as follows:

  • Use an event broker to publish state changes as a series of functional events
  • Implement each interested service, including the originating one, so that it consumes these events and processes the command as a local transaction or emits additional events

As long as the event broker guarantees ‘at-least-once’ delivery to all interested services, and the emitting service only updates its local database upon receiving its own notification, every service should eventually execute the command in the intended sequence; this is what is meant by ‘eventual consistency’.
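
Below is a toy, in-memory sketch of that flow (the broker and the two services are hypothetical simplifications, not a specific messaging product). The order service publishes an OrderValidated event when the command arrives, but only updates its own database once the broker delivers that event back to it; the stock service reacts to the same event with its own local transaction.

  from collections import defaultdict

  class EventBroker:
      """Toy broker: in-memory, delivers each event to every subscriber."""
      def __init__(self):
          self._subscribers = defaultdict(list)

      def subscribe(self, event_type, handler):
          self._subscribers[event_type].append(handler)

      def publish(self, event_type, payload):
          # A real broker would persist the event and guarantee
          # at-least-once delivery; this sketch just fans it out.
          for handler in self._subscribers[event_type]:
              handler(payload)

  class OrderService:
      def __init__(self, broker):
          self.orders = {}  # local database
          self._broker = broker
          broker.subscribe("OrderValidated", self.on_order_validated)

      def validate_order(self, order_id, items):
          # The command only emits an event; the local database is not
          # touched until the service consumes its own notification.
          self._broker.publish("OrderValidated",
                               {"order_id": order_id, "items": items})

      def on_order_validated(self, event):
          self.orders[event["order_id"]] = event["items"]  # local transaction

  class StockService:
      def __init__(self, broker):
          self.stock = {"book": 10}  # local database
          broker.subscribe("OrderValidated", self.on_order_validated)

      def on_order_validated(self, event):
          for item, quantity in event["items"].items():
              self.stock[item] -= quantity  # local transaction

  broker = EventBroker()
  orders, stock = OrderService(broker), StockService(broker)
  orders.validate_order("order-42", {"book": 2})
  print(orders.orders)  # {'order-42': {'book': 2}}
  print(stock.stock)    # {'book': 8}

Since delivery is ‘at least once’, handlers in a real system should also be prepared to receive the same event more than once.
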

When choosing a microservice architecture, architects should consider event sourcing and CQRS early in the design. These patterns help address the data integrity concerns typical of distributed systems and, at the same time, contribute to building highly decoupled applications.