From Crash to Cash: How Design Patterns Rescued My Systems (Ep 3)

Introduction: My Final Lap Around Design Patterns

Here we are, in episode three of my design patterns saga. If you’ve stuck with me through the first two rounds, you’ve already met eight rockstar patterns that make distributed systems tick: Ambassador, Circuit Breaker, CQRS, Event Sourcing, Leader Election, Pub-Sub, Sharding, and Strangler. I’ve shared how they’ve pulled me out of tech quicksand more times than I can count, and I hope you’ve picked up a trick or two. In those chats, we also defined a distributed system as a network of independent computers working as one cohesive, scalable, and reliable unit, which is exactly the kind of system these patterns are built for.

Today, we’re wrapping up by putting these patterns to work in a real-world scenario, plus unpacking their key characteristics. Think of this as the grand finale where theory meets practice, with a side of “I’ve been there” stories to keep it real.

From Theory to Reality

The Key Characteristics: What Makes a Design Pattern Tick?

Before we dive into a juicy scenario, let’s talk about what makes design patterns worth the hype. I’ve learned the hard way that a good distributed system needs four things: scalability, reliability, availability, and efficiency. Scalability is like stretching a rubber band—your app should expand (or shrink) with demand. I once watched a client’s e-commerce site crash during a Black Friday sale because it couldn’t scale up. Lesson learned! You can scale vertically (bigger servers, like upgrading from 2GB to 5GB) or horizontally (more servers, like adding clones).

Then there’s reliability, where your system keeps going even when a cog breaks. Availability is how often it’s open for business, measured in those fancy “nines” (99.99% uptime works out to under an hour of downtime per year). Efficiency ties it all together, keeping latency and throughput in check. A McKinsey report says that well-designed systems balancing these traits boost performance by 25%. But here’s the rub: the CAP theorem says you can only nail two out of three: consistency, availability, or partition tolerance. More on that later.
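
If the “nines” ever feel abstract, the math is quick to sanity-check. Here’s a tiny Python sketch (just back-of-the-envelope arithmetic, nothing more) showing what each availability target costs you in downtime per year:

```python
# Back-of-the-envelope downtime budget for common availability targets.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.3%} uptime -> ~{downtime:,.0f} minutes of downtime per year")
# 99.990% uptime works out to roughly 53 minutes a year.
```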

The Overloaded Server Conundrum

Picture this: I’m managing a web app with multiple servers, and one’s getting hammered with traffic like a lone barista during a coffee rush. What do we do? Enter the load balancer, a traffic cop directing requests to less-busy servers. I’ve used this trick to save a client’s app from choking during a product launch. Techniques like round-robin (even distribution) or geoproximity (send users to nearby servers) work wonders. AWS’s Elastic Load Balancer is a go-to here.
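
To make round-robin concrete, here’s a minimal Python sketch. The server names and the request loop are made up for illustration; in real life a managed service like AWS Elastic Load Balancer (or NGINX/HAProxy) does this for you:

```python
from itertools import cycle

# Hypothetical pool of backend servers sitting behind the load balancer.
SERVERS = ["app-server-1", "app-server-2", "app-server-3"]

# Round-robin: hand each incoming request to the next server in the rotation.
rotation = cycle(SERVERS)

def route_request(request_id: int) -> str:
    server = next(rotation)
    print(f"request {request_id} -> {server}")
    return server

for request_id in range(6):
    route_request(request_id)
# request 0 -> app-server-1, request 1 -> app-server-2, ... and back around.
```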

But what if one server still lags? That’s where Circuit Breaker steps in, pausing requests to let it recover, while Sharding splits the database load across nodes. A research study shows that load balancing with HAProxy significantly cuts latency in high-traffic setups. I’d layer in caching too, keeping static data (like product images) close to users via a Content Delivery Network (CDN) like Cloudflare.
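
And here’s roughly what a Circuit Breaker looks like under the hood. This is a bare-bones sketch with made-up thresholds; production-grade implementations in resilience libraries also handle a half-open state, concurrency, and metrics:

```python
import time

class CircuitBreaker:
    """Stops calling a flaky downstream service until it has had time to recover."""

    def __init__(self, failure_threshold=3, recovery_timeout=30):
        self.failure_threshold = failure_threshold  # trip after this many consecutive failures
        self.recovery_timeout = recovery_timeout    # seconds to wait before trying again
        self.failure_count = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        # If the breaker is open, fail fast until the recovery timeout has elapsed.
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.recovery_timeout:
                raise RuntimeError("Circuit open: giving the server a breather")
            self.opened_at = None  # timeout elapsed, allow a trial call

        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failure_count += 1
            if self.failure_count >= self.failure_threshold:
                self.opened_at = time.time()  # trip the breaker
            raise
        else:
            self.failure_count = 0  # a success resets the count
            return result
```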

The Speedy Sidekick

Caching is my secret weapon for speed. It’s like keeping snacks in your desk drawer; no need to trek to the kitchen or, in this case, the database. But consistency is the catch, as stale data is a buzzkill. Sync strategies like write-through (update the cache and the database together) or write-back (write to the cache first, update the database later) keep the two in step. When the cache fills up, eviction policies like Least Recently Used (LRU) kick out old data. Pair it with CQRS for read-heavy apps, and you’re golden. I once forgot to adjust a cache TTL and served outdated prices for an hour. Oops!
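
To see write-through and LRU eviction side by side, here’s a toy Python sketch. The dict standing in for the database and the tiny cache capacity are just for illustration; a real setup would use something like Redis or Memcached with a TTL (the setting I forgot about):

```python
from collections import OrderedDict

database = {}  # stand-in for the real database

class WriteThroughLRUCache:
    def __init__(self, capacity=2):
        self.capacity = capacity
        self.cache = OrderedDict()  # ordered by recency of use

    def write(self, key, value):
        # Write-through: update the cache and the database together.
        database[key] = value
        self.cache[key] = value
        self.cache.move_to_end(key)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used entry

    def read(self, key):
        if key in self.cache:               # cache hit: skip the database
            self.cache.move_to_end(key)
            return self.cache[key]
        value = database[key]               # cache miss: fetch from the database
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)
        return value
```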

Storage and Partitioning: Where Data Lives

Choosing storage is like picking a house. Structured data (think customer records) fits SQL databases following the ACID principles (Atomicity, Consistency, Isolation, Durability). I’ve enjoyed working with PostgreSQL for its reliability, and most devs I work with recommend it too. Unstructured stuff like JSON documents, key-value pairs, metadata, and XML usually lands in NoSQL, while media like pictures, videos, and PDFs find their way to object stores like AWS S3. But when relational data grows, Sharding (a.k.a. data partitioning) saves the day. Take note that rebalancing shards across servers can be a real headache, so treat sharding as a last resort.
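
The core idea of sharding fits in a few lines: hash a shard key and route the record to a partition. Here’s a minimal sketch, assuming customer ID as the shard key; the naive modulo approach is exactly why rebalancing hurts, since adding a shard remaps almost every key (consistent hashing softens that):

```python
import hashlib

SHARDS = ["shard-0", "shard-1", "shard-2", "shard-3"]

def shard_for(customer_id: str) -> str:
    # Hash the shard key and map it onto one of the available shards.
    digest = hashlib.md5(customer_id.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

print(shard_for("customer-42"))    # the same key always lands on the same shard
print(shard_for("customer-1001"))
```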

The CAP Trade-Off: Pick Your Poison

Back to CAP: Consistency (all nodes show the same data), Availability (you always get a response), and Partition Tolerance (the system survives network splits). You only get two. This has been a good guide for me on every project. Usually, I discuss with my team which two to prioritize for the project at hand, while we devise a way to make up for the third. For instance, we can achieve high availability with Multi-AZ auto scaling and consistency using infrastructure as code, while patching over partitions with microservices. It’s not so cool, but it’s manageable.

My team faced this on an ERP app where favoring consistency and partition tolerance meant occasional downtime, but losing money wasn’t an option. It’s a business call; know your priorities.

Tying It All Together: Patterns in Harmony

In this scenario, I’d blend patterns like a DJ mixing tracks. Load Balancer with Circuit Breaker handles traffic, Sharding scales the database, Caching speeds reads, and CQRS separates writes. If it’s a legacy app, Strangler could modernize it over time. It’s not one-size-fits-all—start small, then layer as needs grow.

My Love Letter to Distributed Systems

Wrapping up this series feels bittersweet. These patterns (eight in total) have been my lifeline, turning chaotic projects into manageable wins. Scalability, reliability, availability, and efficiency: they’re the North Stars, and CAP reminds me that perfection’s a myth. I’ve experienced enough server meltdowns to know these tools aren’t just theory; they’re survival gear. My team can bear witness. Whether it’s sharding a database or strangling a monolith, design patterns have made my job as a cloud architect less “scream into the void” and more “high-five the team.”

So, what’s next? Take these patterns, test them in your world, and tell me how it goes. I’d love to hear your war stories. For me, they’re proof that even in tech’s Wild West, there’s a method to the madness.
