Microservices Architecture

Microservices architecture is a design approach in which a large application is built as a collection of small, independent services. Each service handles a specific business function, runs in its own process, and communicates with other services through APIs or message queues. Each service deploys, scales, and fails independently.

Compare this to a monolith — one large application that contains all features. A monolith is like a single large machine that does everything. If one part breaks, the whole machine stops. Microservices are like a factory with specialized machines — if one breaks, the others keep running.

Monolith vs Microservices

Monolithic Architecture

+-----------------------------------------------+
|                  MONOLITH APP                  |
|  +----------+ +---------+ +------------------+ |
|  |  User    | | Product | |    Order         | |
|  |  Module  | | Module  | |    Module        | |
|  +----------+ +---------+ +------------------+ |
|  +----------+ +---------+                      |
|  | Payment  | | Notify  |                      |
|  | Module   | | Module  |                      |
|  +----------+ +---------+                      |
|                ONE codebase, ONE deployment     |
+-----------------------------------------------+
              One Database

Microservices Architecture

+----------+  +----------+  +----------+
|  User    |  | Product  |  |  Order   |
| Service  |  | Service  |  | Service  |
|  Own DB  |  |  Own DB  |  |  Own DB  |
+----------+  +----------+  +----------+
     |              |             |
     +------+--------+------+-----+
            |               |
     +-----------+    +----------+
     | Payment   |    | Notify   |
     | Service   |    | Service  |
     |  Own DB   |    |  Own DB  |
     +-----------+    +----------+
Each service: independent deployment, independent scaling

Aspect         | Monolith                              | Microservices
---------------|---------------------------------------|------------------------------------------
Development    | Simple initially, complex as it grows | Complex from start, manageable long-term
Deployment     | Deploy entire app every time          | Deploy only the changed service
Scaling        | Scale the whole app                   | Scale only the bottleneck service
Technology     | One language for everything           | Different tech per service
Failure Impact | One bug can crash entire app          | One service failure is isolated
Team Structure | All teams work on same codebase       | Each team owns one service

Core Principles of Microservices

Single Responsibility

Each service does one thing well. The User Service manages users. The Payment Service manages payments. They do not overlap.

Independent Deployment

Updating the email template requires only deploying the Notification Service. The rest of the system keeps running uninterrupted.

Database Per Service

Each service owns its own database. No two services share a database directly. This prevents one service's schema changes from breaking another service.

User Service     → PostgreSQL (relational, users, auth)
Product Service  → MongoDB (flexible schema, catalog)
Session Service  → Redis (fast key-value, sessions)
Order Service    → MySQL (relational, order history)
Search Service   → Elasticsearch (full-text search)

Communication Via API or Events

Services never access each other's databases directly. They communicate through REST APIs (synchronous) or message queues (asynchronous).

Service Communication Patterns

Synchronous Communication (REST/gRPC)

Service A calls Service B and waits for a response. Simple, but creates coupling — if Service B is slow, Service A is also slow.

Order Service → HTTP POST → Payment Service → Payment Service responds
Order Service waits for response before continuing
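The blocking behavior above can be sketched in a few lines. This is a minimal in-process simulation, not a real HTTP call: `payment_service_charge` is a hypothetical stand-in for the Payment Service's charge endpoint, and the sleep stands in for network latency.

```python
import time

# Hypothetical stand-in for the Payment Service's /charge endpoint.
def payment_service_charge(order_id: str, amount: float) -> dict:
    time.sleep(0.05)  # simulate network + processing latency
    return {"order_id": order_id, "status": "charged", "amount": amount}

# The Order Service blocks until the Payment Service responds, so its
# latency is its own work PLUS the downstream call -- this is the coupling.
def place_order(order_id: str, amount: float) -> dict:
    payment = payment_service_charge(order_id, amount)  # synchronous call
    if payment["status"] != "charged":
        return {"order_id": order_id, "status": "failed"}
    return {"order_id": order_id, "status": "confirmed"}

result = place_order("o-42", 19.99)
print(result["status"])  # confirmed
```

If the downstream call is slow or fails, the caller feels it immediately; that is the trade-off the next section avoids.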

Asynchronous Communication (Events)

Service A publishes an event and moves on. Interested services consume the event independently.

Order Service → Publishes "order_placed" event → Message Queue
Payment Service:   Consumes event → Processes payment
Email Service:     Consumes event → Sends confirmation
Inventory Service: Consumes event → Reduces stock count
All happen concurrently, independently.
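The fan-out above can be sketched with a toy in-process event bus. A real broker (RabbitMQ, Kafka) delivers events asynchronously and durably; here delivery is an ordinary function call, and the topic name and handlers are illustrative.

```python
from collections import defaultdict

# Minimal in-process event bus standing in for a real message broker.
class EventBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)  # a real broker would deliver these asynchronously

bus = EventBus()
handled = []
bus.subscribe("order_placed", lambda e: handled.append(("payment", e["order_id"])))
bus.subscribe("order_placed", lambda e: handled.append(("email", e["order_id"])))
bus.subscribe("order_placed", lambda e: handled.append(("inventory", e["order_id"])))

# The Order Service publishes once and moves on; it never waits for,
# or even knows about, the three consumers.
bus.publish("order_placed", {"order_id": "o-42"})
print(handled)
```

Note the publisher's code never mentions Payment, Email, or Inventory: adding a fourth consumer requires no change to the Order Service.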

API Gateway

When a mobile app or browser needs data from multiple services, it should not make four separate network calls. An API Gateway is a single entry point that routes requests to the appropriate services, aggregates responses, and handles cross-cutting concerns.

Mobile App sends ONE request to API Gateway:

Request: GET /homepage

API Gateway internally calls:
→ User Service     (get user profile)
→ Product Service  (get recommendations)
→ Order Service    (get recent orders)
→ Notification Service (get unread count)

API Gateway aggregates all responses:
→ Returns ONE combined response to mobile app

Benefits:
- Client makes one request instead of four
- Authentication handled centrally at gateway
- Rate limiting applied once at gateway
- Services stay simple, not client-aware
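The aggregation step can be sketched as below. The four service functions are hypothetical in-process stand-ins (a real gateway would make HTTP calls, ideally in parallel); only the fan-out-and-merge shape is the point.

```python
# Hypothetical stand-ins for the four backend services.
def user_service(user_id):         return {"name": "Ada"}
def product_service(user_id):      return {"recommendations": ["p1", "p2"]}
def order_service(user_id):        return {"recent_orders": ["o-41"]}
def notification_service(user_id): return {"unread": 3}

# The gateway fans out to each service and merges the results,
# so the client makes one request instead of four.
def homepage_handler(user_id):
    return {
        "profile": user_service(user_id),
        "recommendations": product_service(user_id)["recommendations"],
        "orders": order_service(user_id)["recent_orders"],
        "unread_notifications": notification_service(user_id)["unread"],
    }

print(homepage_handler("u-1"))
```

In production the gateway would issue these calls concurrently and apply auth and rate limiting before fanning out.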

Service Discovery

In a microservices environment, services start and stop dynamically. IP addresses change constantly. Service discovery solves the problem of "how does Service A find Service B?"

Client-Side Discovery

Service A queries Service Registry: "Where is Payment Service?"
Registry responds: "Payment Service is at 10.0.0.5:8080"
Service A calls 10.0.0.5:8080 directly

Tools: Eureka (Netflix), Consul
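The client-side lookup can be sketched with a toy in-memory registry. Real registries like Eureka or Consul expire entries when a service stops sending heartbeats; the service name and addresses here are illustrative.

```python
import random

# Toy in-memory registry; a real one is populated by service heartbeats.
registry = {
    "payment-service": ["10.0.0.5:8080", "10.0.0.6:8080"],
}

# The client resolves a NAME to a live instance, then calls it directly.
def discover(service_name: str) -> str:
    instances = registry[service_name]
    return random.choice(instances)  # naive client-side load balancing

print(discover("payment-service"))
```

Picking randomly among instances is the simplest client-side load-balancing strategy; real clients track health and latency per instance.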

Server-Side Discovery

Service A calls Load Balancer: "I need Payment Service"
Load Balancer checks registry and routes to right instance
Service A does not need to know the IP

Tools: AWS ALB with ECS, Kubernetes Services

Circuit Breaker Pattern

In a chain of microservices, if one service fails, the failure can cascade — Service A calls Service B which calls Service C, and if C fails, B fails, then A fails. The Circuit Breaker pattern prevents this cascade.

Normal (CLOSED state):
Service A → Service B  (all requests pass through)

Service B starts failing:
Circuit Breaker OPEN:
Service A → Circuit Breaker → Fallback response (do not call B)
                             (returns cached data or error message)

Service B recovers after timeout:
Circuit Breaker HALF-OPEN:
Allows 1 test request → If success → Circuit CLOSES again → Normal flow resumes
States:
CLOSED  → Requests flow normally (monitoring failure rate)
OPEN    → All requests blocked (B is failing, protect system)
HALF-OPEN → Test requests allowed (is B recovered?)

Tools: Resilience4j (Java), Hystrix (Netflix), Polly (.NET)
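The three-state machine above can be sketched as a small class. This is a toy version (the thresholds and timeout are made up, and libraries like Resilience4j add sliding windows and metrics), but the state transitions match the diagram.

```python
import time

class CircuitBreaker:
    """Toy circuit breaker with the CLOSED / OPEN / HALF-OPEN states."""

    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.state = "CLOSED"
        self.opened_at = 0.0

    def call(self, fn, fallback):
        if self.state == "OPEN":
            if time.monotonic() - self.opened_at >= self.reset_timeout:
                self.state = "HALF-OPEN"   # allow one test request through
            else:
                return fallback()          # fail fast: do not call B at all
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.state == "HALF-OPEN" or self.failures >= self.failure_threshold:
                self.state = "OPEN"        # B is failing: protect the system
                self.opened_at = time.monotonic()
            return fallback()
        self.failures = 0
        self.state = "CLOSED"              # success closes the circuit again
        return result
```

A caller wraps every downstream call in `breaker.call(request_fn, fallback_fn)`; the fallback typically returns cached data or a friendly error.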

Saga Pattern for Distributed Transactions

A traditional database transaction is atomic — all steps succeed or all roll back. In microservices, a business transaction spans multiple services and databases. The Saga pattern handles this without a global transaction.

Place Order Saga:

Step 1: Order Service   → Creates order (status: PENDING)
Step 2: Payment Service → Charges card
Step 3: Inventory       → Reduces stock
Step 4: Shipping        → Creates shipment
Step 5: Order Service   → Updates status to CONFIRMED

If Step 3 (Inventory) fails:
Compensating transactions run in reverse:
Step 3 undo: Inventory releases reservation
Step 2 undo: Payment Service refunds charge
Step 1 undo: Order Service cancels order
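The orchestration above can be sketched as a list of (action, compensation) pairs: when a step fails, the completed steps are undone in reverse order. The step names and the failing Inventory step are illustrative.

```python
# Toy orchestrated saga: each step pairs an action with a compensating
# action; when a step fails, completed steps are undone in reverse order.
def run_saga(steps):
    completed = []
    for name, action, compensate in steps:
        try:
            action()
            completed.append(compensate)
        except Exception:
            for undo in reversed(completed):
                undo()                      # compensating transaction
            return "ROLLED_BACK"
    return "CONFIRMED"

log = []

def reduce_stock():
    raise RuntimeError("out of stock")      # Step 3 fails, as in the example

steps = [
    ("create_order", lambda: log.append("order PENDING"), lambda: log.append("order cancelled")),
    ("charge_card",  lambda: log.append("card charged"),  lambda: log.append("charge refunded")),
    ("reduce_stock", reduce_stock,                        lambda: log.append("reservation released")),
]

result = run_saga(steps)
print(result, log)
# ROLLED_BACK ['order PENDING', 'card charged', 'charge refunded', 'order cancelled']
```

Note the undo order: the refund runs before the cancellation, exactly mirroring the reverse sequence shown above. In a real system each step and compensation would be a call to a separate service, and the orchestrator would persist saga state so it can resume after a crash.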

Challenges of Microservices

  • Distributed system complexity: Network calls fail, services go down, timeouts occur.
  • Data consistency: No shared database means eventual consistency, not instant consistency.
  • Operational overhead: Dozens of services to deploy, monitor, and scale.
  • Service versioning: Multiple services must coordinate API changes.
  • Testing: Integration testing across multiple services is complex.
  • Latency: Network calls between services add up. 10 services each adding 5ms = 50ms extra overhead.

When to Use Microservices

Microservices are not always the right choice. Start with a monolith and migrate to microservices when clear boundaries and scaling needs emerge.

Use Microservices When...               | Stick With Monolith When...
----------------------------------------|------------------------------------------
Different parts scale differently       | Team is small (fewer than 10 developers)
Multiple teams work independently       | Product requirements are still evolving
Different parts need different tech     | Low to moderate traffic requirements
High availability per business function | Startup with tight deadlines

Summary

Microservices break a large application into small, independent services that each own their functionality, data, and deployment. This enables teams to work in parallel, scale individual components, and isolate failures. The trade-off is significant operational complexity — service discovery, circuit breakers, distributed tracing, and saga patterns all become necessary. Understanding microservices deeply requires understanding all the components that support them: load balancers, API gateways, message queues, and containerization.
