Microservices for scale

A microservice is a small, independently deployable unit of software that focuses on a specific functionality within an application. Each microservice operates autonomously, allowing for easier scaling, flexibility, and development across distributed systems.

Why is it important to understand microservices? Microservices have become the go-to standard for building scalable web applications. In a typical monolithic setup, all application logic is centralized and deployed together in a single environment, running under a shared process. This structure limits scalability, as relying solely on vertical scaling (increasing RAM and CPU resources) is often insufficient when serving a large user base. Vertical scaling has its limits, and beyond those, microservices offer a more effective solution.

Example

Consider a typical web application that allows a customer to browse and buy products online. This involves customer, product, payment, and order management.

These four can be treated as separate concerns and managed as four independent services.


Why?

Huh? Why are we complicating something simple? Why four services?

  1. Each microservice can be scaled independently, allowing teams to allocate resources only to the parts of the application that need them (e.g., a high-traffic user service or payment service). For example, on a particular day there may be more demand for the product view service, and it can be scaled up separately without scaling the payment service, since not every product view converts into a paid order.
  2. Each service in a microservices architecture is independently developed, tested, and deployed, meaning teams can work on different services concurrently without waiting on each other.
  3. In a microservices architecture, if one service fails, it doesn’t necessarily bring down the entire application. This improves resilience and makes it easier to identify and address the cause of the failure. For example, if the payment service is down temporarily, users can still browse products and maintain their carts.
  4. With microservices, different services can be built using different programming languages, databases, or frameworks, based on what best suits the specific service’s requirements. This allows teams to leverage a wide range of technologies and choose the best tool for each function. For example, we can deploy Elasticsearch for product search without requiring it for the other services.
  5. Microservices enable a decentralized approach, where small, cross-functional teams can own and manage specific services. This encourages specialization and clear ownership within the team, enhancing collaboration and efficiency.
  6. Microservices break up a large codebase into smaller, more manageable units, which simplifies maintenance and reduces technical debt.
  7. Microservices architecture lends itself well to CI/CD practices, as services can be automatically built, tested, and deployed independently.
  8. Because microservices enable rapid deployment and updates, businesses can respond more quickly to customer feedback and market changes.

Principles

  • Independent Databases: Each microservice should have its own database to ensure data encapsulation and independence, avoiding direct connections to shared databases used by other services. There can be some duplication of shared entities such as customer, which is propagated to the order service through a message broker and maintained there as well; the source of truth for customer information still remains the customer microservice (see the sketch after this list).
  • Single Responsibility: Each microservice should focus on a single, specific business capability, making it easier to maintain, understand, and modify independently.
  • Loose Coupling: Microservices should be loosely coupled so that changes in one service do not impact others, allowing for independent deployment and easier updates.
  • Autonomous Deployment: Each service should be independently deployable, meaning that it can be released, scaled, or replaced without affecting the entire system.
  • Lightweight Communication: Services should communicate using lightweight protocols (like HTTP/REST or messaging queues) to reduce dependency on specific platforms and to ensure flexibility.
  • Fault Isolation: Microservices should be designed to handle failures gracefully, isolating faults so that issues in one service do not cascade to others, improving resilience.
  • Decentralized Governance: Allow teams to choose the best tools and technologies for each microservice, encouraging flexibility and innovation while aligning with business needs.
  • Observability: Each service should have comprehensive logging, monitoring, and tracing to ensure system health can be tracked and issues quickly diagnosed.
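
As a minimal sketch of the independent-databases principle (all names and the in-memory stores below are illustrative assumptions, not a specific framework), the order service can keep its own local copy of customer data, updated from events, while the customer service stays the source of truth:

```python
# Illustrative sketch: the order service maintains its own local copy of
# customer data, updated from CustomerCreated events published by the
# customer service (the source of truth). All names here are hypothetical.

order_db_customers = {}  # stand-in for the order service's own database


def on_customer_created(event: dict) -> None:
    """Handle a CustomerCreated event received from the message broker."""
    customer = {
        "customer_id": event["customer_id"],
        "name": event["name"],
        "email": event["email"],
    }
    # Keep a duplicated, read-only copy locally so the order service
    # never queries the customer service's database directly.
    order_db_customers[customer["customer_id"]] = customer


def create_order(customer_id: str, product_id: str) -> dict:
    """Create an order using only data the order service owns locally."""
    customer = order_db_customers.get(customer_id)
    if customer is None:
        raise ValueError("Unknown customer; wait for the CustomerCreated event")
    return {"customer_id": customer_id, "product_id": product_id, "status": "CREATED"}


if __name__ == "__main__":
    on_customer_created({"customer_id": "c1", "name": "Asha", "email": "asha@example.com"})
    print(create_order("c1", "p42"))
```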

Ways to deploy

AWS serverless

AWS Cognito is a service that provides user authentication, authorization, and user management for web and mobile applications, allowing developers to easily add sign-up, sign-in, and access control to applications. It also supports social logins (like Google and Facebook) and integrates with AWS IAM for secure API access.

AWS Lambda is a serverless computing service that enables developers to run code in response to events without managing servers, automatically scaling based on demand. It’s ideal for running short, event-driven functions, as you only pay for the compute time you use.
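
As a rough illustration of what such an event-driven function looks like in Python (the event fields below are assumptions for this example, not a fixed AWS schema), a Lambda handler is just a function that receives an event and a context:

```python
import json


def lambda_handler(event, context):
    """Minimal, illustrative AWS Lambda handler for an 'order placed' event.

    The fields used here (order_id, amount) are assumptions for the example.
    """
    order_id = event.get("order_id", "unknown")
    amount = event.get("amount", 0)

    # Short, event-driven work happens here; you pay only for this compute time.
    print(f"Processing order {order_id} for amount {amount}")

    return {
        "statusCode": 200,
        "body": json.dumps({"order_id": order_id, "status": "PROCESSED"}),
    }
```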

AWS API Gateway provides features like centralized authentication, rate limiting, load balancing, and request routing, which can simplify managing and securing multiple microservices. However, for simpler applications or internal systems where these features are not required, clients can communicate directly with microservices, though this may increase the complexity of managing cross-cutting concerns in each service individually.

For the "customer buys a product" use case, one can use AWS Cognito to manage users and their authentication, and then authorize different AWS Lambda functions for signing customers up or logging them in, browsing products, and paying for orders.
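
A minimal sketch of such a Lambda behind API Gateway with a Cognito user pool authorizer might look like the following; the location of the claims in the event is one common shape and should be verified against your gateway configuration, and the handler name and body fields are assumptions:

```python
import json


def buy_product_handler(event, context):
    """Sketch of a Lambda behind API Gateway with a Cognito user pool authorizer.

    The claims location below matches one common API Gateway event shape;
    treat it as an assumption and verify against your configuration.
    """
    claims = (
        event.get("requestContext", {})
        .get("authorizer", {})
        .get("claims", {})
    )
    customer_id = claims.get("sub")  # Cognito's unique user identifier
    if not customer_id:
        return {"statusCode": 401, "body": json.dumps({"error": "Unauthenticated"})}

    body = json.loads(event.get("body") or "{}")
    product_id = body.get("product_id")

    # Hand off to the order/payment services (omitted) on behalf of customer_id.
    return {
        "statusCode": 200,
        "body": json.dumps(
            {"customer_id": customer_id, "product_id": product_id, "status": "ORDERED"}
        ),
    }
```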

Containerized

By using a platform like Docker, which packages an application and its dependencies into a standardized unit called a container, we can deploy each microservice separately on EC2 instances.

Concepts to be applied as needed

Idempotent consumption refers to the ability of a system to handle the same request or message multiple times without changing the outcome, ensuring that repeated operations don’t cause unintended side effects or errors. Consider a customer-creation message processed by the order service: it should not create another customer record if the same event ID is received multiple times.
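
A minimal sketch of an idempotent consumer in the order service could look like this; the in-memory set stands in for what would normally be a database table or unique constraint, and all names are hypothetical:

```python
# Illustrative idempotent consumer in the order service. In practice the
# processed-event IDs would live in a database table with a unique
# constraint, not in memory.

processed_event_ids = set()
customers = {}


def handle_customer_created(event: dict) -> None:
    event_id = event["event_id"]
    if event_id in processed_event_ids:
        # Duplicate delivery: safely ignore, the outcome is unchanged.
        return
    customers[event["customer_id"]] = {"name": event["name"]}
    processed_event_ids.add(event_id)


# Delivering the same event twice creates only one customer record.
evt = {"event_id": "e-1", "customer_id": "c-1", "name": "Asha"}
handle_customer_created(evt)
handle_customer_created(evt)
assert len(customers) == 1
```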

Fanout is a messaging pattern where a single message or event is broadcast to multiple consumers or services, allowing parallel processing and distribution of tasks. Suppose a customer-creation message needs to be received by multiple microservices: the customer service can apply the fanout pattern and publish the message without explicitly referring to the target receivers, and the interested microservices subscribe to the topic or queue and consume the messages.
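
One way to sketch this with Amazon SNS (the topic ARN below is a placeholder, and the subscribing queues are assumed to be configured separately) is for the customer service to publish to a topic without knowing its consumers:

```python
import json

import boto3

# Illustrative fanout via Amazon SNS: the customer service publishes a
# CustomerCreated event to a topic without knowing who consumes it.
# Each interested microservice would have its own SQS queue subscribed
# to this topic (configured separately).

sns = boto3.client("sns")


def publish_customer_created(customer_id: str, name: str) -> None:
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:customer-events",  # placeholder
        Message=json.dumps(
            {"type": "CustomerCreated", "customer_id": customer_id, "name": name}
        ),
    )
```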

A circuit breaker is a design pattern used to detect failures in a system and prevent further attempts to execute an operation that is likely to fail, allowing the system to recover gracefully. Suppose a payment was processed by the payment service, but the order service was unavailable for the acknowledgement; this can either create data inconsistency or lead to aggressive, looping attempts to reach the order service, which can make the situation worse. We can avoid bombarding the order service with payment confirmations by applying a circuit breaker there, so the acknowledgement is completed once the order service recovers.
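
A minimal circuit breaker sketch, with thresholds and the wrapped call chosen purely for illustration, might look like this:

```python
import time

# Minimal, illustrative circuit breaker around calls to the order service.
# Thresholds and the wrapped call are assumptions for the example.


class CircuitBreaker:
    def __init__(self, failure_threshold: int = 3, reset_timeout: float = 30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                # Circuit is open: fail fast instead of bombarding the order service.
                raise RuntimeError("Circuit open; order service call skipped")
            self.opened_at = None  # half-open: allow one trial call

        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result


breaker = CircuitBreaker()
# breaker.call(send_payment_acknowledgement, payment_id)  # hypothetical call
```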

CQRS (Command Query Responsibility Segregation) is an architectural pattern that separates the logic for reading data (queries) from the logic for modifying data (commands). It encourages the use of different models for reading and writing, allowing them to be optimized independently. Since product viewing is accessed far more often than product creation, reads can be served from a read-replica database, where we can optimize search with indexing and configure resources for it separately.
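
A toy sketch of this CQRS split for the product service, using plain dictionaries as stand-ins for the primary store and the read replica or search index, could look like this:

```python
# Illustrative CQRS split for the product service: writes go to the primary
# store, reads are served from a separately provisioned read model
# (e.g. a read replica or a search index). Stores here are dicts for brevity.

write_store = {}  # stand-in for the primary database
read_store = {}   # stand-in for the read replica / search index


def create_product(product_id: str, name: str, price: float) -> None:
    """Command: modify state via the write model."""
    write_store[product_id] = {"name": name, "price": price}
    # In a real system this projection would be updated asynchronously
    # (replication or an event), not inline.
    read_store[product_id] = {"name": name, "price": price}


def view_product(product_id: str) -> dict:
    """Query: serve reads from the read-optimized model only."""
    return read_store[product_id]


create_product("p42", "Espresso machine", 199.0)
print(view_product("p42"))
```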

The Sidecar pattern is a design pattern commonly used in microservices architectures to handle cross-cutting concerns such as logging, monitoring, security, or configuration management without cluttering the core logic of the individual services. We can deploy an application logger as a sidecar that ships logs from all services to Amazon CloudWatch.

The Saga pattern is a design pattern used to manage long-running and distributed transactions in microservices, where a series of smaller, isolated transactions (or steps) are executed across multiple services. If one step fails, the pattern ensures consistency by triggering compensating actions to roll back or mitigate the effects of previous steps. Consider the sequence of creating an order and making a payment: instead of a distributed two-phase commit, each step is a local transaction that can be compensated, for example by cancelling the order if the payment fails.
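
A simplified saga sketch for this order-then-payment sequence, with the service calls stubbed out as hypothetical functions, could look like this:

```python
# Illustrative saga for "create order, then take payment": each step is a
# local transaction, and a failure triggers compensating actions for the
# steps already completed. Service calls are hypothetical stubs.

def create_order(order):
    """Step 1: local transaction in the order service."""
    print("order created")
    return "order-1"


def cancel_order(order_id):
    """Compensation for step 1."""
    print(f"order {order_id} cancelled")


def take_payment(order_id, amount):
    """Step 2: local transaction in the payment service (fails here)."""
    raise RuntimeError("payment declined")


def place_order_saga(order, amount):
    order_id = create_order(order)
    try:
        take_payment(order_id, amount)
    except Exception:
        # Payment failed: compensate the earlier step instead of a global rollback.
        cancel_order(order_id)
        raise


try:
    place_order_saga({"product_id": "p42"}, 199.0)
except RuntimeError:
    pass
```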

What's Next?

We went through a lot of theory, but it is essential to understand as we develop a practical use case.
