Cloud-Native Microservices Architecture
Exercise Overview
Design and implement a complete cloud-native microservices application that can be deployed to Kubernetes. You'll build a multi-service system with proper containerization, service discovery, load balancing, and observability.
Learning Objectives
- Containerize Go applications with Docker multi-stage builds
- Design microservices with proper separation of concerns
- Implement service discovery and load balancing
- Add health checks, graceful shutdown, and circuit breakers
- Configure Kubernetes deployments, services, and ingress
- Implement observability with metrics, logging, and tracing
- Apply cloud-native patterns and best practices
Architecture Overview
You'll build a simple e-commerce system with the following services:
- API Gateway - Routes requests and handles authentication
- User Service - Manages user accounts and authentication
- Product Service - Manages product catalog
- Order Service - Handles order processing
- Notification Service - Sends notifications
Initial Structure
```go
// Gateway service - main.go
package main

import (
	"net/http/httputil"
	// context, encoding/json, fmt, log, net/http, net/url, os, strings,
	// and time will be needed as you fill in the TODOs below.
)

// TODO: Implement service discovery
type ServiceRegistry struct {
	services map[string]string
	// Add service registry implementation
}

// TODO: Implement API gateway with routing
type APIGateway struct {
	registry *ServiceRegistry
	proxies  map[string]*httputil.ReverseProxy
}

// TODO: Add health check middleware
type HealthChecker struct {
	// Add health check implementation
}

func main() {
	// TODO: Implement API gateway
}
```
```go
// User service - main.go
package main

import (
	"time"
	// context, encoding/json, fmt, log, and net/http will be needed
	// as you fill in the TODOs below.
)

// TODO: Implement user service with database
type UserService struct {
	// Add user service implementation
}

type User struct {
	ID       string    `json:"id"`
	Username string    `json:"username"`
	Email    string    `json:"email"`
	Created  time.Time `json:"created"`
}

func main() {
	// TODO: Implement user service with cloud-native features
}
```
Tasks
Task 1: Containerize Services with Docker
Create optimized Docker multi-stage builds:
```dockerfile
# Dockerfile.gateway
FROM golang:1.21-alpine AS builder

WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download

COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o gateway ./cmd/gateway

FROM alpine:latest
RUN apk --no-cache add ca-certificates tzdata
WORKDIR /root/

COPY --from=builder /app/gateway .
COPY --from=builder /app/configs ./configs

EXPOSE 8080
CMD ["./gateway"]
```
```dockerfile
# Dockerfile.service
FROM golang:1.21-alpine AS builder

WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download

ARG SERVICE_NAME
COPY ./cmd/${SERVICE_NAME} ./cmd/${SERVICE_NAME}
COPY ./internal ./internal
COPY ./pkg ./pkg

RUN CGO_ENABLED=0 GOOS=linux go build -o ${SERVICE_NAME} ./cmd/${SERVICE_NAME}

FROM alpine:latest
# Build args do not carry across stages, so redeclare it here
ARG SERVICE_NAME
RUN apk --no-cache add ca-certificates tzdata
# Use /app rather than /root so the non-root user below can read it
WORKDIR /app

# Copy the binary under a fixed name so CMD does not depend on a build arg
COPY --from=builder /app/${SERVICE_NAME} ./service

# Add non-root user
RUN addgroup -g 1001 -S appgroup && \
    adduser -u 1001 -S appuser -G appgroup
USER appuser

EXPOSE 8080
CMD ["./service"]
```
Task 2: Implement Service Discovery
Build a service registry with health checking:
```go
package main

import (
	"fmt"
	"math/rand"
	"sync"
	"time"
)

type ServiceRegistry struct {
	services map[string][]ServiceInstance
	mutex    sync.RWMutex
	health   *HealthChecker
}

type ServiceInstance struct {
	ID       string    `json:"id"`
	Name     string    `json:"name"`
	Address  string    `json:"address"`
	Port     int       `json:"port"`
	Healthy  bool      `json:"healthy"`
	LastSeen time.Time `json:"last_seen"`
}

func (sr *ServiceRegistry) Register(instance ServiceInstance) error {
	sr.mutex.Lock()
	defer sr.mutex.Unlock()

	if sr.services == nil {
		sr.services = make(map[string][]ServiceInstance)
	}

	instances := sr.services[instance.Name]

	// Update in place if the instance is already registered
	for i, existing := range instances {
		if existing.ID == instance.ID {
			instances[i] = instance
			sr.services[instance.Name] = instances
			return nil
		}
	}

	// Add new instance
	sr.services[instance.Name] = append(instances, instance)
	return nil
}

func (sr *ServiceRegistry) Discover(serviceName string) (ServiceInstance, error) {
	sr.mutex.RLock()
	defer sr.mutex.RUnlock()

	instances, exists := sr.services[serviceName]
	if !exists || len(instances) == 0 {
		return ServiceInstance{}, fmt.Errorf("no instances found for service: %s", serviceName)
	}

	// Keep only healthy instances
	healthyInstances := make([]ServiceInstance, 0, len(instances))
	for _, instance := range instances {
		if instance.Healthy {
			healthyInstances = append(healthyInstances, instance)
		}
	}

	if len(healthyInstances) == 0 {
		return ServiceInstance{}, fmt.Errorf("no healthy instances for service: %s", serviceName)
	}

	// Simple load balancing: pick a random healthy instance
	return healthyInstances[rand.Intn(len(healthyInstances))], nil
}
```
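The registry's `health` field is left unimplemented above. Below is a minimal sketch of how a background health checker might probe each instance's `/health` endpoint and write the result back through `Register`. The probe interval, timeout, and field names are illustrative assumptions; it assumes it lives in the same package as the registry and that "context" and "net/http" are also imported there.

```go
// HealthChecker periodically probes registered instances and marks them
// healthy or unhealthy based on their /health endpoint.
type HealthChecker struct {
	client   *http.Client  // e.g. &http.Client{Timeout: 2 * time.Second}
	interval time.Duration // e.g. 10 * time.Second
}

// Run probes every known instance on each tick until ctx is cancelled.
func (hc *HealthChecker) Run(ctx context.Context, sr *ServiceRegistry) {
	ticker := time.NewTicker(hc.interval)
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			// Snapshot the registry so HTTP probes run without holding the lock.
			sr.mutex.RLock()
			snapshot := make(map[string][]ServiceInstance, len(sr.services))
			for name, instances := range sr.services {
				snapshot[name] = append([]ServiceInstance(nil), instances...)
			}
			sr.mutex.RUnlock()

			for _, instances := range snapshot {
				for _, inst := range instances {
					url := fmt.Sprintf("http://%s:%d/health", inst.Address, inst.Port)
					resp, err := hc.client.Get(url)
					inst.Healthy = err == nil && resp.StatusCode == http.StatusOK
					if resp != nil {
						resp.Body.Close()
					}
					if inst.Healthy {
						inst.LastSeen = time.Now()
					}
					// Register replaces the existing entry by ID, writing the
					// updated health status back under the registry lock.
					sr.Register(inst)
				}
			}
		}
	}
}
```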
Task 3: Implement Cloud-Native Service Features
Add health checks, graceful shutdown, and metrics:
```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os"
	"sync"
	"time"
)

type CloudNativeService struct {
	server       *http.Server
	registry     *ServiceRegistry
	instanceID   string
	serviceName  string
	port         int
	healthCheck  *HealthCheck
	metrics      *Metrics
	shutdownChan chan os.Signal
}

type HealthCheck struct {
	checks map[string]func() error
	mutex  sync.RWMutex
}

func (hc *HealthCheck) AddCheck(name string, check func() error) {
	hc.mutex.Lock()
	defer hc.mutex.Unlock()
	hc.checks[name] = check
}

func (hc *HealthCheck) CheckHealth() error {
	hc.mutex.RLock()
	defer hc.mutex.RUnlock()

	for name, check := range hc.checks {
		if err := check(); err != nil {
			return fmt.Errorf("health check '%s' failed: %w", name, err)
		}
	}
	return nil
}

func (cns *CloudNativeService) setupRoutes() {
	mux := http.NewServeMux()

	// Health check endpoint (liveness probe)
	mux.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		if err := cns.healthCheck.CheckHealth(); err != nil {
			w.WriteHeader(http.StatusServiceUnavailable)
			json.NewEncoder(w).Encode(map[string]string{
				"status": "unhealthy",
				"error":  err.Error(),
			})
			return
		}

		w.WriteHeader(http.StatusOK)
		json.NewEncoder(w).Encode(map[string]interface{}{
			"status":      "healthy",
			"service":     cns.serviceName,
			"instance_id": cns.instanceID,
			"timestamp":   time.Now(),
		})
	})

	// Readiness probe endpoint
	mux.HandleFunc("/ready", func(w http.ResponseWriter, r *http.Request) {
		// Check if the service is ready to accept traffic
		if cns.isReady() {
			w.WriteHeader(http.StatusOK)
			json.NewEncoder(w).Encode(map[string]string{"status": "ready"})
		} else {
			w.WriteHeader(http.StatusServiceUnavailable)
			json.NewEncoder(w).Encode(map[string]string{"status": "not ready"})
		}
	})

	// Metrics endpoint
	mux.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
		// Prometheus metrics format
		cns.metrics.WritePrometheus(w)
	})

	cns.server.Handler = mux
}

func (cns *CloudNativeService) gracefulShutdown() {
	<-cns.shutdownChan

	log.Println("Shutting down gracefully...")

	// Deregister from the service registry
	cns.registry.Deregister(cns.instanceID)

	// Shut down the HTTP server, allowing in-flight requests to finish
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	if err := cns.server.Shutdown(ctx); err != nil {
		log.Printf("Server shutdown error: %v", err)
	}

	log.Println("Graceful shutdown completed")
}
```
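The `shutdownChan` above only does its job once it is wired to OS signals and the server is actually started. Below is a minimal sketch of a `Start` method that does both, under the assumption that it sits alongside the types above; the method name, the `localhost` address, and the registration call are illustrative, and it additionally assumes "os/signal" and "syscall" are imported.

```go
// Start registers the instance, wires SIGINT/SIGTERM to the shutdown channel,
// and runs the HTTP server until a graceful shutdown completes.
func (cns *CloudNativeService) Start() error {
	cns.setupRoutes()

	// Register this instance so the gateway can discover it.
	if err := cns.registry.Register(ServiceInstance{
		ID:      cns.instanceID,
		Name:    cns.serviceName,
		Address: "localhost", // illustrative; in Kubernetes this is the pod IP
		Port:    cns.port,
		Healthy: true,
	}); err != nil {
		return err
	}

	// Forward the signals Kubernetes sends on pod termination.
	cns.shutdownChan = make(chan os.Signal, 1)
	signal.Notify(cns.shutdownChan, syscall.SIGINT, syscall.SIGTERM)

	done := make(chan struct{})
	go func() {
		cns.gracefulShutdown()
		close(done)
	}()

	cns.server.Addr = fmt.Sprintf(":%d", cns.port)
	log.Printf("%s listening on %s", cns.serviceName, cns.server.Addr)

	if err := cns.server.ListenAndServe(); err != nil && err != http.ErrServerClosed {
		return err
	}

	// Wait until deregistration and connection draining have finished.
	<-done
	return nil
}
```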
Task 4: Kubernetes Deployment Configuration
Create Kubernetes manifests:
```yaml
# k8s/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ecommerce
  labels:
    name: ecommerce

---
# k8s/user-service-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
  namespace: ecommerce
  labels:
    app: user-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
        - name: user-service
          image: ecommerce/user-service:v1.0.0
          ports:
            - containerPort: 8080
          env:
            - name: PORT
              value: "8080"
            - name: DB_HOST
              value: "postgres-service"
            - name: DB_PORT
              value: "5432"
            - name: DB_NAME
              value: "ecommerce"
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: password
          resources:
            requests:
              memory: "64Mi"
              cpu: "50m"
            limits:
              memory: "128Mi"
              cpu: "100m"
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "sleep 15"]

---
# k8s/user-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: user-service
  namespace: ecommerce
  labels:
    app: user-service
spec:
  selector:
    app: user-service
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: ClusterIP

---
# k8s/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ecommerce-ingress
  namespace: ecommerce
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/limit-rps: "100"  # requests per second per client IP
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
    - hosts:
        - api.ecommerce.example.com
      secretName: ecommerce-tls
  rules:
    - host: api.ecommerce.example.com
      http:
        paths:
          - path: /users
            pathType: Prefix
            backend:
              service:
                name: user-service
                port:
                  number: 80
          - path: /products
            pathType: Prefix
            backend:
              service:
                name: product-service
                port:
                  number: 80
```
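The Deployment above injects all configuration through environment variables. A minimal sketch of how a service might read them at startup follows; the `Config` struct and local-development defaults are illustrative assumptions.

```go
package main

import (
	"fmt"
	"log"
	"os"
)

// Config mirrors the environment variables set in the Deployment manifest.
type Config struct {
	Port       string
	DBHost     string
	DBPort     string
	DBName     string
	DBUser     string
	DBPassword string
}

// getenv returns the value of key, or a fallback for local development.
func getenv(key, fallback string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return fallback
}

func loadConfig() Config {
	return Config{
		Port:       getenv("PORT", "8080"),
		DBHost:     getenv("DB_HOST", "localhost"),
		DBPort:     getenv("DB_PORT", "5432"),
		DBName:     getenv("DB_NAME", "ecommerce"),
		DBUser:     getenv("DB_USER", "postgres"),
		DBPassword: os.Getenv("DB_PASSWORD"), // no default for secrets
	}
}

func main() {
	cfg := loadConfig()
	log.Printf("starting on :%s, database %s", cfg.Port,
		fmt.Sprintf("%s:%s/%s", cfg.DBHost, cfg.DBPort, cfg.DBName))
}
```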
Task 5: Observability Integration
Add metrics, logging, and tracing:
```go
package main

import (
	"net/http"
	"os"
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

type Metrics struct {
	requestsTotal     prometheus.Counter
	requestDuration   prometheus.Histogram
	errorRate         prometheus.Counter
	activeConnections prometheus.Gauge
}

func NewMetrics() *Metrics {
	m := &Metrics{
		requestsTotal: prometheus.NewCounter(prometheus.CounterOpts{
			Name: "http_requests_total",
			Help: "Total number of HTTP requests",
			ConstLabels: prometheus.Labels{
				"service": os.Getenv("SERVICE_NAME"),
			},
		}),
		requestDuration: prometheus.NewHistogram(prometheus.HistogramOpts{
			Name:    "http_request_duration_seconds",
			Help:    "HTTP request duration in seconds",
			Buckets: prometheus.DefBuckets,
		}),
		errorRate: prometheus.NewCounter(prometheus.CounterOpts{
			Name: "http_errors_total",
			Help: "Total number of HTTP errors",
		}),
		activeConnections: prometheus.NewGauge(prometheus.GaugeOpts{
			Name: "active_connections",
			Help: "Number of active connections",
		}),
	}

	// Register the collectors so they show up on the /metrics endpoint
	prometheus.MustRegister(m.requestsTotal, m.requestDuration, m.errorRate, m.activeConnections)
	return m
}

func (m *Metrics) InstrumentHandler(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()

		// Wrap the response writer to capture the status code
		wrapped := &responseWriter{ResponseWriter: w, statusCode: http.StatusOK}

		// Track active connections for the duration of the request
		m.activeConnections.Inc()
		defer m.activeConnections.Dec()

		// Call the next handler
		next.ServeHTTP(wrapped, r)

		// Record request metrics
		duration := time.Since(start).Seconds()
		m.requestsTotal.Inc()
		m.requestDuration.Observe(duration)

		if wrapped.statusCode >= 400 {
			m.errorRate.Inc()
		}
	})
}

type responseWriter struct {
	http.ResponseWriter
	statusCode int
}

func (rw *responseWriter) WriteHeader(code int) {
	rw.statusCode = code
	rw.ResponseWriter.WriteHeader(code)
}
```
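A short sketch of how these pieces might be wired together, assuming `NewMetrics` and `InstrumentHandler` from the block above live in the same package. It serves the registered metrics with the Prometheus client's `promhttp` handler, as an alternative to the `WritePrometheus` helper referenced in Task 3; the `/users` stub handler is purely illustrative.

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	metrics := NewMetrics()

	mux := http.NewServeMux()
	mux.HandleFunc("/users", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		w.Write([]byte(`{"status":"ok"}`))
	})

	// Expose everything registered with the default Prometheus registry.
	mux.Handle("/metrics", promhttp.Handler())

	// Wrap application routes with the instrumentation middleware.
	handler := metrics.InstrumentHandler(mux)

	log.Fatal(http.ListenAndServe(":8080", handler))
}
```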
Solution Approach
The full solution includes complete implementations of all services, the Docker configurations, the Kubernetes manifests, and the observability setup, demonstrating a production-ready cloud-native microservices architecture.
Deployment Instructions
1. Build and Push Docker Images
```bash
# Build Docker images
docker build -f Dockerfile.gateway -t ecommerce/gateway:v1.0.0 .
docker build -f Dockerfile.service --build-arg SERVICE_NAME=user-service -t ecommerce/user-service:v1.0.0 .
docker build -f Dockerfile.service --build-arg SERVICE_NAME=product-service -t ecommerce/product-service:v1.0.0 .
docker build -f Dockerfile.service --build-arg SERVICE_NAME=order-service -t ecommerce/order-service:v1.0.0 .

# Push to registry
docker push ecommerce/gateway:v1.0.0
docker push ecommerce/user-service:v1.0.0
docker push ecommerce/product-service:v1.0.0
docker push ecommerce/order-service:v1.0.0
```
2. Deploy to Kubernetes
```bash
# Create namespace
kubectl apply -f k8s/namespace.yaml

# Deploy database
kubectl apply -f k8s/postgres/
kubectl apply -f k8s/secrets/

# Deploy services
kubectl apply -f k8s/user-service-deployment.yaml
kubectl apply -f k8s/product-service-deployment.yaml
kubectl apply -f k8s/order-service-deployment.yaml

# Deploy gateway
kubectl apply -f k8s/gateway-deployment.yaml

# Configure ingress
kubectl apply -f k8s/ingress.yaml

# Check deployment status
kubectl get pods -n ecommerce
kubectl get services -n ecommerce
kubectl get ingress -n ecommerce
```
3. Monitor and Test
```bash
# Check logs
kubectl logs -f deployment/user-service -n ecommerce

# Port forward for testing
kubectl port-forward service/user-service 8080:80 -n ecommerce

# Test health checks
curl http://localhost:8080/health
curl http://localhost:8080/ready
curl http://localhost:8080/metrics

# Test through ingress
curl https://api.ecommerce.example.com/users
```
Testing Cloud-Native Features
1. Test Service Discovery
```bash
# Scale services
kubectl scale deployment user-service --replicas=5 -n ecommerce

# Verify load balancing
for i in {1..10}; do
  curl https://api.ecommerce.example.com/users/health
done
```
2. Test Health Checks and Self-Healing
```bash
# Simulate service failure
kubectl exec -it deployment/user-service -n ecommerce -- kill 1

# Watch Kubernetes restart the pod
kubectl get pods -n ecommerce -w
```
3. Test Graceful Shutdown
```bash
# Deploy new version
kubectl set image deployment/user-service user-service=ecommerce/user-service:v1.1.0 -n ecommerce

# Watch rolling update
kubectl rollout status deployment/user-service -n ecommerce
```
Extension Challenges
- Add service mesh - Implement Istio or Linkerd for advanced traffic management
- Implement autoscaling - Configure HPA and VPA based on metrics
- Add distributed tracing - Integrate Jaeger or Zipkin
- Implement circuit breakers - Add resilience patterns (see the sketch after this list)
- Add blue-green deployment - Implement zero-downtime deployments
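For the circuit-breaker challenge, here is a minimal sketch of the pattern; the threshold, cool-down, and type names are illustrative assumptions, and in practice you might reach for a library such as sony/gobreaker instead.

```go
package resilience

import (
	"errors"
	"sync"
	"time"
)

// CircuitBreaker opens after maxFailures consecutive failures and rejects
// calls until the cool-down has elapsed; a successful call closes it again.
type CircuitBreaker struct {
	mu          sync.Mutex
	failures    int
	maxFailures int
	cooldown    time.Duration
	openedAt    time.Time
}

var ErrCircuitOpen = errors.New("circuit breaker is open")

func NewCircuitBreaker(maxFailures int, cooldown time.Duration) *CircuitBreaker {
	return &CircuitBreaker{maxFailures: maxFailures, cooldown: cooldown}
}

// Call runs fn unless the breaker is currently open.
func (cb *CircuitBreaker) Call(fn func() error) error {
	cb.mu.Lock()
	if cb.failures >= cb.maxFailures && time.Since(cb.openedAt) < cb.cooldown {
		cb.mu.Unlock()
		return ErrCircuitOpen
	}
	cb.mu.Unlock()

	err := fn()

	cb.mu.Lock()
	defer cb.mu.Unlock()
	if err != nil {
		cb.failures++
		if cb.failures >= cb.maxFailures {
			cb.openedAt = time.Now() // (re)open the breaker
		}
		return err
	}
	cb.failures = 0 // any success closes the breaker
	return nil
}
```

In the gateway, each proxied backend call could be wrapped as `err := breaker.Call(func() error { return callBackend(r) })`, so a failing service is shed quickly instead of tying up gateway connections.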
Key Takeaways
- Multi-stage Docker builds create small, secure container images
- Health checks are essential for Kubernetes self-healing
- Service discovery enables dynamic scaling and load balancing
- Observability is crucial for debugging distributed systems
- Graceful shutdown prevents data loss during deployments
- Resource limits prevent resource starvation in multi-tenant environments
- Ingress controllers provide external access to cluster services
This exercise provides hands-on experience with building and deploying cloud-native microservices using Go and Kubernetes, covering essential production patterns and practices.