Anti-Pattern: Optimistic Consistency

Ladies and gentlemen, behold a new consistency model: “Optimistic Consistency”. Implementing this pattern is quite simple: commit as many transactions as you want, against as many storage mechanisms as you need. That’s all. Let’s see an example.

Example

Say that, as a result of some user operation, the system has to:

  1. Modify data in MongoDB
  2. Publish some events to Kafka
  3. Update the search model in Elasticsearch; and
  4. Execute an operation on some remote system

You implement this logic by executing those operations one after another (see the sketch after the list below). Simple and elegant. After all, why over-engineer? Everything will be just fine, because:

  1. Servers will always be up
  2. The network is the most reliable thing on earth
  3. Your process will only ever be shut down gracefully… like, why would anyone stop it in the middle of execution, right?
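
In code, the “just do it” flow looks roughly like the sketch below. The interfaces and names (Order, OrdersCollection, EventProducer, completeOrder, and so on) are hypothetical stand-ins for real MongoDB, Kafka, Elasticsearch, and HTTP clients, not any specific library’s API:

```typescript
// Hypothetical client interfaces: stand-ins for MongoDB, Kafka, Elasticsearch,
// and a remote HTTP API. None of these mirror a specific library's real API.
interface Order { id: string; paymentId: string; }
interface OrdersCollection { updateStatus(id: string, status: string): Promise<void>; }
interface EventProducer { publish(topic: string, event: object): Promise<void>; }
interface SearchIndex { upsert(id: string, doc: object): Promise<void>; }
interface RemoteSystem { execute(paymentId: string): Promise<void>; }

// The "optimistic consistency" flow: four independent writes, one after another,
// with nothing tying them together if a step fails or the process dies mid-way.
async function completeOrder(
  order: Order,
  orders: OrdersCollection,
  events: EventProducer,
  search: SearchIndex,
  remote: RemoteSystem,
): Promise<void> {
  await orders.updateStatus(order.id, 'completed');                              // 1. MongoDB
  await events.publish('orders', { type: 'OrderCompleted', orderId: order.id }); // 2. Kafka
  // If the process crashes right here, the database already says "completed",
  // but the search model and the remote system never hear about it,
  // and nothing records that the gap exists.
  await search.upsert(order.id, { id: order.id, status: 'completed' });          // 3. Elasticsearch
  await remote.execute(order.paymentId);                                         // 4. remote call
}
```

Every await is both a commit point for the previous step and a potential last line the process ever executes.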

Besides its simplicity of implementation, this approach brings several other benefits.

Advantages of Optimistic Consistency

The main advantage is that you can always promise strong consistency, without the hassle of actually ensuring it. If anyone ever questions the system’s consistency, just use eventual consistency as an excuse. After all, we are living in the brave new world of distributed systems, right?

Optimistic consistency brings another huge benefit. In the *very unlikely* scenario of something actually going wrong, you can always point to the extremely low percentage of such issues. Like, who will ever notice if the system loses a transaction or two?

Ok, seriously now

Hope is not a strategy, and optimistic consistency is nothing but negligence. Things will go wrong: systems will crash, the network will be partitioned, and occasionally a server’s plug will be pulled. The result cannot be called “eventual consistency”; “eventual inconsistency” is more like it.

Claiming that no one will ever notice such corruption only emphasizes how big an issue this is: the system will be silently corrupting data, but “no one will ever notice”!

Please don’t do it.

How do you handle such scenarios? Read Jimmy Bogard’s awesome “Life Beyond Distributed Transactions” series.
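
The series covers this ground properly. Just as a taste of the kind of technique involved, here is a minimal, hypothetical sketch of the transactional outbox idea: the state change and the “event to publish” are committed in one local transaction, and a separate relay delivers pending messages afterwards. All types and names below are illustrative, not a concrete library:

```typescript
// Illustrative transactional-outbox sketch: the business change and the outgoing
// event are written in a single local transaction; a background relay later reads
// the outbox and publishes to the broker, retrying until acknowledged.
interface OutboxMessage { type: string; payload: object; }
interface LocalTransaction {
  updateOrderStatus(orderId: string, status: string): void;
  insertOutboxMessage(message: OutboxMessage): void;
  commit(): Promise<void>;
}

async function completeOrderWithOutbox(orderId: string, tx: LocalTransaction): Promise<void> {
  // Both writes commit or roll back together, so the event cannot be lost
  // while the state change survives (or vice versa).
  tx.updateOrderStatus(orderId, 'completed');
  tx.insertOutboxMessage({ type: 'OrderCompleted', payload: { orderId } });
  await tx.commit();
  // A separate relay drains the outbox and publishes each message, deleting it
  // only after the broker acknowledges it (at-least-once delivery, so consumers
  // must be idempotent), but nothing is silently lost.
}
```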

Check out my blog for my other posts on software architecture, domain-driven design, and microservices: vladikk.com

