Consider a restaurant manager who needs to coordinate multiple chefs, waiters, and kitchen staff. When a customer cancels their order, you need to instantly notify everyone to stop working on that dish. This is exactly what Go's context package does for your programs - it coordinates cancellation, timeouts, and request information across different parts of your application.
The context package is essential for managing cancellation, deadlines, and request-scoped values across API boundaries and goroutines. It's a core pattern in production Go code, and understanding it deeply is crucial for building reliable, efficient server applications.
Why Context Matters
Think about the last time you clicked a link on a slow website and then hit the back button. If that website was well-built, all the database queries, API calls, and processing for that request should have stopped immediately. That's context in action - coordinating cancellation across your entire application stack.
The context package is mandatory in production Go. Every major library and framework expects you to use it:
- HTTP servers pass context to every handler
- Database drivers require context for queries
- gRPC services use context for RPC calls
- Cloud SDKs use context for API operations
Context solves critical problems:
- Cancellation Propagation - Stop operations when users disconnect or requests timeout
- Deadline Management - Prevent operations from running beyond their time budget
- Request-scoped Values - Pass request IDs, auth tokens, and metadata through call chains
- Resource Management - Clean up resources when operations are canceled
- Coordination - Synchronize complex operations across goroutines
Real Impact: Without context, a slow database query continues running even after the user closed their browser, wasting database connections and server resources. In high-traffic systems, this leads to connection pool exhaustion, cascading failures, and system outages.
Required by: net/http, database/sql, grpc, AWS SDK, Google Cloud SDK, and virtually every modern Go library.
💡 Key Takeaway: Context is not optional in modern Go applications. It's the standard way to handle cancellation and timeouts, making your applications more efficient, reliable, and production-ready. Learning context patterns is essential for building systems that scale.
⚠️ Important: Always propagate context through your function calls. A common mistake is to create a new context or use context.Background() when you should be passing down the existing context. This breaks the cancellation chain and defeats the entire purpose of the pattern.
Understanding Context Philosophy
Before diving into the API, it's important to understand the design philosophy behind the context package. Context was added to Go in version 1.7 after years of experience with production systems revealed common patterns and pain points.
The Problem Context Solves
In distributed systems and concurrent programs, you often have operations that spawn multiple sub-operations:
HTTP Request
├── Database Query 1
├── Database Query 2
└── External API Call
    ├── Retry 1
    ├── Retry 2
    └── Retry 3
Without context, if the HTTP request is canceled (user navigates away), all these sub-operations continue running, wasting resources. Context provides a mechanism to propagate cancellation signals throughout this operation tree.
Core Design Principles
1. Immutability - Contexts are immutable. Each modification creates a new context.
2. Tree Structure - Contexts form a tree. Parent contexts can cancel children, but not vice versa.
3. Goroutine-Safe - The same context can be safely passed to multiple goroutines.
4. Value Scope - Values are request-scoped, not global configuration.
5. No Storage - Context carries temporary state, not persistent data.
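A minimal sketch of principles 1 and 2 above (the demoKey type is illustrative): deriving a child context never modifies the parent, and canceling the parent cancels every descendant.

package main

import (
    "context"
    "fmt"
)

type demoKey struct{}

func main() {
    parent, cancel := context.WithCancel(context.Background())

    // Principle 1: deriving a child returns a NEW context; the parent is unchanged.
    child := context.WithValue(parent, demoKey{}, "request-scoped")
    fmt.Println(parent.Value(demoKey{})) // <nil> - the parent never sees the value

    // Principle 2: canceling the parent cancels every descendant.
    cancel()
    fmt.Println(parent.Err()) // context.Canceled
    fmt.Println(child.Err())  // context.Canceled (inherited from the parent)
}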
When to Use Context
Use context for:
- Cancellation signals across API boundaries
- Deadlines and timeouts for operations
- Request-scoped values (request IDs, auth tokens, tracing data)
- Coordinating goroutines in the same operation
Don't use context for:
- Optional parameters (use function arguments)
- Global configuration (use config structs)
- Mutable state (use proper synchronization)
- Long-lived application state (use databases or caches)
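To make the first two "don't" items concrete, here is a small sketch (ReportConfig and the function names are illustrative): configuration and required inputs travel as explicit arguments, while the context carries only cancellation, deadlines, and request-scoped metadata.

// ❌ Configuration smuggled through the context - a hidden, untyped dependency
func fetchReportBad(ctx context.Context) error {
    pageSize, _ := ctx.Value("page_size").(int)
    _ = pageSize
    // ...
    return nil
}

// ✅ Dependencies are explicit; the context is only for cancellation and deadlines
type ReportConfig struct {
    PageSize int
    Verbose  bool
}

func fetchReport(ctx context.Context, userID string, cfg ReportConfig) error {
    // ...
    return nil
}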
Basic Context Creation
Every context chain needs to start somewhere. Think of this like the foundation of a building - everything else builds on top of it. Go provides two root contexts that serve as the starting points for all context chains.
Background Context
The root context that's never canceled. Think of this as the main electrical panel of your building - it's always on and everything connects back to it.
context.Background() is typically used:
- At the top-level of your application (main function)
- In tests where you don't need cancellation
- When starting long-running background operations
- As the parent for all other contexts
package main

import (
    "context"
    "fmt"
)

func main() {
    ctx := context.Background()
    fmt.Printf("Context type: %T\n", ctx)
    fmt.Printf("Context error: %v\n", ctx.Err())
    fmt.Printf("Context done channel: %v\n", ctx.Done())
}
Characteristics:
- Never canceled
- No deadline
- No values (initially)
- Thread-safe
- Singleton (same instance returned every time)
TODO Context
Use when you're not sure which context to use yet. Think of this as a temporary placeholder, like marking a wall with "paint here later" - you know you need to do something, but you're not ready to decide what color.
context.TODO() should be used:
- During refactoring when context needs are unclear
- When designing a new API and context usage isn't finalized
- As a clear signal that context handling needs attention
- Temporarily during development (should never ship to production)
package main

import (
    "context"
    "fmt"
)

func main() {
    // Use when refactoring or context is unclear
    ctx := context.TODO()
    fmt.Printf("Context type: %T\n", ctx)

    // In real code, replace TODO with proper context
    // ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    // defer cancel()
}
💡 Key Takeaway: Use context.Background() at the entry points of your application (main, init, tests, long-running background jobs); inside HTTP handlers, derive from the request's context instead. Use context.TODO() only temporarily during development or refactoring - it should never appear in production code. If you see context.TODO() in code review, it's a signal that context handling needs more thought.
⚠️ Production Tip: Set up a linter rule to flag context.TODO() in your codebase. It's a useful development tool but indicates incomplete work that shouldn't make it to production.
Cancellation - The Heart of Context
Cancellation is context's most powerful feature. Imagine you're running a restaurant and a customer cancels their order. You immediately tell the chefs to stop cooking, the servers to cancel the table setup, and the bartender to stop preparing drinks. Context cancellation works the same way - it propagates cancellation signals across your entire application.
Understanding cancellation is crucial because it's the primary reason context exists. Without proper cancellation, your application will leak resources, waste CPU cycles, and eventually degrade under load.
Manual Cancellation
Manual cancellation gives you complete control over when to cancel operations. It's like having a stop button that you can press at any time.
🌍 Real-world Example: This is exactly what happens when you press Ctrl+C in a CLI tool. The signal handler calls the cancel function, and all ongoing operations gracefully stop. Similarly, in a web server, when a request context is canceled (user closes browser), all database queries and API calls for that request stop immediately.
package main

import (
    "context"
    "fmt"
    "time"
)

func doWork(ctx context.Context) {
    for {
        select {
        case <-ctx.Done():
            fmt.Println("Work canceled:", ctx.Err())
            return
        default:
            fmt.Println("Working...")
            time.Sleep(500 * time.Millisecond)
        }
    }
}

func main() {
    // Create cancellable context
    ctx, cancel := context.WithCancel(context.Background())

    // Start work in background
    go doWork(ctx)

    // Let it run for a bit
    time.Sleep(2 * time.Second)

    // Cancel the work
    fmt.Println("Canceling work...")
    cancel()

    // Give it time to clean up
    time.Sleep(1 * time.Second)
    fmt.Println("Main exiting")
}
How It Works:
- context.WithCancel(parent) creates a new context and a cancel function
- The child context inherits from the parent but can be independently canceled
- Calling cancel() closes the Done() channel
- All operations watching ctx.Done() receive the signal simultaneously
- ctx.Err() returns context.Canceled after cancellation
💡 Key Takeaway: The select statement with ctx.Done() is the fundamental pattern for making your operations cancellable. Always check ctx.Done() in loops and long-running operations. This pattern allows your code to respond immediately to cancellation requests.
Best Practices:
// ✅ Good: Always defer cancel
ctx, cancel := context.WithCancel(parent)
defer cancel() // Prevents resource leaks

// ❌ Bad: Forgetting to call cancel
ctx, _ := context.WithCancel(parent) // Resource leak!

// ✅ Good: Check Done() in loops
for {
    select {
    case <-ctx.Done():
        return ctx.Err()
    default:
        // Do work
    }
}

// ❌ Bad: No cancellation checking
for {
    // Work never stops, even if canceled
}
Cancellation Propagation
Think of cancellation like turning off the main power switch in a building - everything connected to that circuit turns off simultaneously. When you cancel a parent context, all its children are automatically canceled. This cascading cancellation is what makes context so powerful for coordinating complex operations.
🌍 Real-world Example: This is what happens when an HTTP request times out in a web server. The original request context is canceled, which cancels the database query, the cache lookup, and the external API calls all at once. This prevents resource waste and ensures clean shutdown.
package main

import (
    "context"
    "fmt"
    "time"
)

func worker(ctx context.Context, name string) {
    for {
        select {
        case <-ctx.Done():
            fmt.Printf("%s stopped: %v\n", name, ctx.Err())
            return
        default:
            fmt.Printf("%s working...\n", name)
            time.Sleep(500 * time.Millisecond)
        }
    }
}

func main() {
    // Create parent context
    ctx, cancel := context.WithCancel(context.Background())

    // Start multiple workers with the same context
    go worker(ctx, "Worker 1")
    go worker(ctx, "Worker 2")
    go worker(ctx, "Worker 3")

    time.Sleep(2 * time.Second)

    // One cancel stops all workers
    fmt.Println("Canceling all workers...")
    cancel()

    time.Sleep(1 * time.Second)
}
Cancellation Flow:
Parent Context (canceled)
├── Child Context 1 (auto-canceled)
│   └── Worker 1 (stops)
├── Child Context 2 (auto-canceled)
│   └── Worker 2 (stops)
└── Child Context 3 (auto-canceled)
    └── Worker 3 (stops)
⚠️ Important: Context cancellation is one-way - only parents can cancel children, not the other way around. This prevents child operations from accidentally canceling their parent operations. If you need bidirectional cancellation, you need separate contexts.
Advanced Pattern: Selective Cancellation
package main

import (
    "context"
    "fmt"
    "time"
)

func main() {
    // Parent context for all operations
    parentCtx := context.Background()

    // Separate contexts for different operation groups
    ctx1, cancel1 := context.WithCancel(parentCtx)
    ctx2, cancel2 := context.WithCancel(parentCtx)

    go worker(ctx1, "Group 1 Worker 1")
    go worker(ctx1, "Group 1 Worker 2")
    go worker(ctx2, "Group 2 Worker 1")
    go worker(ctx2, "Group 2 Worker 2")

    time.Sleep(2 * time.Second)

    // Cancel only Group 1
    fmt.Println("Canceling Group 1...")
    cancel1()

    time.Sleep(1 * time.Second)

    // Group 2 still running, cancel it now
    fmt.Println("Canceling Group 2...")
    cancel2()

    time.Sleep(1 * time.Second)
}

func worker(ctx context.Context, name string) {
    for {
        select {
        case <-ctx.Done():
            fmt.Printf("%s stopped\n", name)
            return
        default:
            fmt.Printf("%s working\n", name)
            time.Sleep(500 * time.Millisecond)
        }
    }
}
This pattern allows fine-grained control over which operations to cancel, while still maintaining the benefits of cascading cancellation within each group.
Timeouts - Automatic Cancellation
Timeouts are like having a kitchen timer for your operations. Just as you wouldn't want to wait forever for food to cook, you don't want your operations to run forever. Timeouts prevent your application from hanging when things go wrong.
Timeouts are critical for building resilient systems. Without them, a single slow operation can cascade into system-wide failure as resources are exhausted waiting for operations that will never complete.
WithTimeout
Think of WithTimeout as setting an egg timer - it automatically goes off after a specified duration, canceling whatever is happening.
🌍 Real-world Example: HTTP clients use timeouts everywhere. When your browser requests a webpage, it typically waits 30 seconds before giving up. Database drivers use timeouts to prevent queries from running forever and blocking other operations. Cloud APIs use timeouts to prevent cascading failures in distributed systems.
package main

import (
    "context"
    "fmt"
    "time"
)

func fetchData(ctx context.Context) error {
    // Simulate slow operation
    select {
    case <-time.After(3 * time.Second):
        fmt.Println("Data fetched successfully")
        return nil
    case <-ctx.Done():
        return ctx.Err()
    }
}

func main() {
    // Create context with 2-second timeout
    ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    defer cancel() // Always defer cancel

    fmt.Println("Starting operation with 2s timeout...")

    err := fetchData(ctx)
    if err != nil {
        fmt.Println("Error:", err) // context deadline exceeded
    }
}
Understanding the Timeout Flow:
Time 0s: Context created with 2s timeout
Time 0s: fetchData() starts
Time 2s: Timeout fires, ctx.Done() closes
Time 2s: fetchData() receives cancellation, returns error
Time 3s: (Operation would have completed, but was canceled)
💡 Key Takeaway: Always use defer cancel() immediately after creating a context with timeout. This ensures the timer resources are cleaned up, even if the operation completes successfully before the timeout. Forgetting defer cancel() causes resource leaks.
Why Defer Cancel?
// ✅ Good: Cancel called even on early return
ctx, cancel := context.WithTimeout(parent, 5*time.Second)
defer cancel()

if err := validate(); err != nil {
    return err // cancel() still called
}

// ❌ Bad: Cancel only called on success path
ctx, cancel := context.WithTimeout(parent, 5*time.Second)

if err := validate(); err != nil {
    return err // cancel() NOT called - resource leak!
}
cancel()
WithDeadline
Think of WithDeadline as scheduling a specific cutoff time - like a store that closes at exactly 9 PM, regardless of when customers arrive. Unlike WithTimeout which counts from now, WithDeadline specifies an absolute time.
🌍 Real-world Example: Batch processing systems often use deadlines. For example, a daily report generation might need to complete by 8 AM to be ready for the morning meeting, regardless of when it starts. Similarly, trading systems might have hard deadlines for order processing based on market close times.
package main

import (
    "context"
    "fmt"
    "time"
)

func processRequest(ctx context.Context) {
    deadline, ok := ctx.Deadline()
    if ok {
        fmt.Printf("Request must complete by: %s\n", deadline.Format(time.RFC3339))
        remaining := time.Until(deadline)
        fmt.Printf("Time remaining: %v\n", remaining)
    }

    select {
    case <-time.After(2 * time.Second):
        fmt.Println("Processing complete")
    case <-ctx.Done():
        fmt.Println("Deadline exceeded:", ctx.Err())
    }
}

func main() {
    // Set deadline to 1 second from now
    deadline := time.Now().Add(1 * time.Second)
    ctx, cancel := context.WithDeadline(context.Background(), deadline)
    defer cancel()

    processRequest(ctx)
}
Timeout vs Deadline - When to Use Each:
| Scenario | Use | Example |
|---|---|---|
| Operation should take max 5s | WithTimeout(5*time.Second) | API call with SLA |
| Must complete by 2 PM | WithDeadline(twoPM) | Batch job deadline |
| Relative to start time | WithTimeout | Database query |
| Absolute wall-clock time | WithDeadline | Trading cutoff |
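The two constructors are two views of the same mechanism: the context package documents WithTimeout as WithDeadline(parent, time.Now().Add(timeout)), so choose whichever reads more naturally for the scenario. A small snippet, with parent standing in for any existing context:

d := 5 * time.Second

// These two calls produce equivalent contexts:
ctx1, cancel1 := context.WithTimeout(parent, d)
ctx2, cancel2 := context.WithDeadline(parent, time.Now().Add(d))
defer cancel1()
defer cancel2()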
Advanced: Deadline Chaining
package main

import (
    "context"
    "fmt"
    "time"
)

func main() {
    // Parent context with 10-second deadline
    parent, cancel1 := context.WithDeadline(
        context.Background(),
        time.Now().Add(10*time.Second),
    )
    defer cancel1()

    // Child context with 5-second deadline
    // Will use the shorter (5s) deadline
    child, cancel2 := context.WithDeadline(
        parent,
        time.Now().Add(5*time.Second),
    )
    defer cancel2()

    // Check which deadline is active
    deadline, _ := child.Deadline()
    fmt.Printf("Effective deadline: %s\n", deadline.Format("15:04:05"))
    fmt.Printf("Time remaining: %v\n", time.Until(deadline))
}
⚠️ Important: When you create a child context with a deadline, the effective deadline is the minimum of the parent and child deadlines. You cannot extend a deadline beyond the parent's deadline.
Timezone Considerations:
// ✅ Good: Use time.Now() for relative deadlines
deadline := time.Now().Add(5 * time.Minute)

// ⚠️ Be careful with absolute times and timezones
deadline := time.Date(2024, 1, 15, 14, 0, 0, 0, time.UTC)

// ✅ Better: Use location-aware times when needed
loc, _ := time.LoadLocation("America/New_York")
deadline := time.Date(2024, 1, 15, 14, 0, 0, 0, loc)
⚠️ Important: Be careful with timezones when using WithDeadline! Always use time.Now() in the same timezone, or explicitly specify UTC time to avoid confusion. Mixing timezones can lead to unexpected deadline behavior.
Context Values - Carrying Request-Scoped Data
Context values are like the notes a waiter attaches to an order - they carry important information about the current request and follow it through the entire restaurant operation. Think of request IDs, authentication tokens, or user preferences.
⚠️ Important: Use context values sparingly! They're designed for request-scoped data, not for passing function parameters. Overusing context values can make your code hard to understand, test, and maintain. If you find yourself putting more than 3-4 values in context, you're probably misusing the pattern.
🌍 Real-world Example: This is how web frameworks track request IDs across multiple services. When a request comes into API Gateway, it gets a unique ID that travels through the auth service, database, and notification service - making it easy to trace the entire request flow in logs. Similarly, authentication tokens are passed through context to avoid passing them as explicit parameters to every function.
Basic Context Values
package main

import (
    "context"
    "fmt"
)

type key string

const userKey key = "user"

func processRequest(ctx context.Context) {
    if user, ok := ctx.Value(userKey).(string); ok {
        fmt.Printf("Processing request for user: %s\n", user)
    } else {
        fmt.Println("No user in context")
    }
}

func main() {
    // Create context with a value
    ctx := context.WithValue(context.Background(), userKey, "alice")
    processRequest(ctx)

    // Context without the value
    emptyCtx := context.Background()
    processRequest(emptyCtx)
}
Type Assertion Pattern:
// ✅ Good: Always use type assertion with ok check
if user, ok := ctx.Value(userKey).(string); ok {
    // Use user safely
}

// ❌ Bad: Direct assertion can panic
user := ctx.Value(userKey).(string) // Panics if value is missing or wrong type
💡 Key Takeaway: Context values should be immutable and request-scoped. Don't use them to pass configuration or mutable data that changes during the request lifecycle. If the data might change, it doesn't belong in context.
Custom Key Types - Avoiding Collisions
Think of context keys like apartment building numbers - you wouldn't want two families to have the same apartment number, or the mail might get delivered to the wrong place. Using custom key types prevents this kind of collision.
🌍 Real-world Example: Multiple libraries might both want to store a "user" value in the context. If they both use the string "user" as a key, they'll overwrite each other. Custom key types ensure each library's values remain separate, preventing subtle bugs in production.
package main

import (
    "context"
    "fmt"
)

// Custom key type prevents collisions
type contextKey int

const (
    userIDKey contextKey = iota
    requestIDKey
    sessionKey
    traceIDKey
)

func main() {
    ctx := context.Background()

    // Add multiple values with type-safe keys
    ctx = context.WithValue(ctx, userIDKey, 123)
    ctx = context.WithValue(ctx, requestIDKey, "req-456")
    ctx = context.WithValue(ctx, sessionKey, "sess-789")

    // Retrieve values with type safety
    if userID, ok := ctx.Value(userIDKey).(int); ok {
        fmt.Printf("User ID: %d\n", userID)
    }

    if reqID, ok := ctx.Value(requestIDKey).(string); ok {
        fmt.Printf("Request ID: %s\n", reqID)
    }

    // A plain string key from another package can't collide with our typed keys
    ctx = context.WithValue(ctx, "user", "string-key-user")
    if userID, ok := ctx.Value(userIDKey).(int); ok {
        fmt.Printf("User ID still safe: %d\n", userID)
    }
}
💡 Key Takeaway: Custom key types are Go's way of creating namespaces for context values. The iota constant ensures each key has a unique value, preventing collisions even when multiple packages are involved. This is the recommended pattern in the Go community.
Package-Level Context Keys:
package mypackage

import "context"

// Unexported key type
type contextKey string

// Exported constants for keys
const (
    UserKey    contextKey = "user"
    SessionKey contextKey = "session"
)

// Type-safe setters
func WithUser(ctx context.Context, user string) context.Context {
    return context.WithValue(ctx, UserKey, user)
}

// Type-safe getters
func GetUser(ctx context.Context) (string, bool) {
    user, ok := ctx.Value(UserKey).(string)
    return user, ok
}
This pattern provides:
- Type safety at package boundaries
- Clear API for context value access
- Protection against key collisions
- Easier testing and documentation
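A short usage sketch of the accessors above, called from another package (the HTTP handler itself is hypothetical):

func handle(w http.ResponseWriter, r *http.Request) {
    // Attach the user via the package's type-safe setter...
    ctx := mypackage.WithUser(r.Context(), "alice")

    // ...and read it back anywhere downstream without touching raw keys.
    if user, ok := mypackage.GetUser(ctx); ok {
        fmt.Fprintf(w, "hello, %s\n", user)
    }
}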
Request-Scoped Value Patterns
Pattern 1: Request Metadata
type RequestMetadata struct {
    RequestID  string
    UserID     string
    UserAgent  string
    RemoteAddr string
    StartTime  time.Time
}

type metadataKey struct{}

func WithMetadata(ctx context.Context, meta *RequestMetadata) context.Context {
    return context.WithValue(ctx, metadataKey{}, meta)
}

func GetMetadata(ctx context.Context) *RequestMetadata {
    meta, _ := ctx.Value(metadataKey{}).(*RequestMetadata)
    return meta
}
Pattern 2: Authentication Context
type AuthContext struct {
    Token    string
    UserID   string
    Roles    []string
    Expiry   time.Time
    Verified bool
}

type authKey struct{}

func WithAuth(ctx context.Context, auth *AuthContext) context.Context {
    return context.WithValue(ctx, authKey{}, auth)
}

func GetAuth(ctx context.Context) (*AuthContext, bool) {
    auth, ok := ctx.Value(authKey{}).(*AuthContext)
    return auth, ok
}

func IsAuthenticated(ctx context.Context) bool {
    auth, ok := GetAuth(ctx)
    return ok && auth.Verified
}
Pattern 3: Structured Logging Context
type LogFields map[string]interface{}

type logKey struct{}

func WithLogFields(ctx context.Context, fields LogFields) context.Context {
    existing, _ := ctx.Value(logKey{}).(LogFields)
    if existing == nil {
        return context.WithValue(ctx, logKey{}, fields)
    }

    // Merge fields
    merged := make(LogFields)
    for k, v := range existing {
        merged[k] = v
    }
    for k, v := range fields {
        merged[k] = v
    }

    return context.WithValue(ctx, logKey{}, merged)
}

func GetLogFields(ctx context.Context) LogFields {
    fields, _ := ctx.Value(logKey{}).(LogFields)
    return fields
}
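A brief usage sketch of Pattern 3 (handleOrder is hypothetical), showing how fields accumulate as a request moves through layers:

func handleOrder(ctx context.Context, orderID string) {
    // Each layer adds its own fields; earlier fields are preserved by the merge.
    ctx = WithLogFields(ctx, LogFields{"request_id": "req-123"})
    ctx = WithLogFields(ctx, LogFields{"order_id": orderID})

    // Prints both request_id and order_id, merged into one map.
    log.Printf("processing order: %v", GetLogFields(ctx))
}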
HTTP Server with Context
Now let's see how context works in real-world web applications. HTTP servers are where context truly shines - they handle thousands of concurrent requests, each with their own lifecycle and cancellation needs.
Every HTTP request in Go comes with its own context. Think of this like each restaurant table getting its own dedicated waiter - that waiter handles everything for that specific table, from taking orders to serving food to handling payment.
Request Context
🌍 Real-world Example: This pattern is used by every major web framework. When you make an API call and then navigate away before it completes, the request context gets canceled, automatically stopping any database queries or external API calls that were in progress. This prevents wasted resources and ensures clean shutdown.
package main

import (
    "context"
    "fmt"
    "net/http"
    "time"
)

func handler(w http.ResponseWriter, r *http.Request) {
    // Get request context
    ctx := r.Context()

    // Add timeout to request
    ctx, cancel := context.WithTimeout(ctx, 2*time.Second)
    defer cancel()

    // Simulate work
    select {
    case <-time.After(3 * time.Second):
        fmt.Fprint(w, "Done")
    case <-ctx.Done():
        err := ctx.Err()
        http.Error(w, err.Error(), http.StatusRequestTimeout)
        fmt.Println("Request canceled:", err)
    }
}

func main() {
    http.HandleFunc("/", handler)
    fmt.Println("Server starting on :8080")
    http.ListenAndServe(":8080", nil)
}
Request Context Lifecycle:
1. Client connects → HTTP server creates request context
2. Handler receives request → Context available via r.Context()
3. Handler spawns operations → Pass context to all operations
4. Client disconnects OR timeout → Context canceled
5. Operations check ctx.Done() → Clean up and return
💡 Key Takeaway: HTTP contexts are automatically canceled when the client disconnects. This means your expensive operations stop automatically when users navigate away, saving resources on your server. This is one of the most important performance optimizations in web applications.
Advanced: Request Timeout Middleware
func timeoutMiddleware(timeout time.Duration) func(http.Handler) http.Handler {
    return func(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            ctx, cancel := context.WithTimeout(r.Context(), timeout)
            defer cancel()

            // Create new request with timeout context
            r = r.WithContext(ctx)

            // Channel to signal completion
            done := make(chan struct{})

            go func() {
                next.ServeHTTP(w, r)
                close(done)
            }()

            select {
            case <-done:
                // Handler completed normally
                return
            case <-ctx.Done():
                // Timeout occurred. Note: the timed-out handler may still be
                // writing to w in its goroutine; the stdlib's http.TimeoutHandler
                // guards against this and is the safer choice in production.
                http.Error(w, "Request timeout", http.StatusGatewayTimeout)
            }
        })
    }
}

// Usage
func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/", handler)

    // Wrap with timeout middleware
    http.ListenAndServe(":8080", timeoutMiddleware(5*time.Second)(mux))
}
Middleware with Context
Middleware is like the security guards and receptionists at a building entrance - they check credentials, assign visitor badges, and direct people to the right places. In web applications, middleware uses context to pass this information through the request chain.
🌍 Real-world Example: This is how distributed tracing works in microservices. A request ID gets added at the API gateway and flows through authentication, business logic, database calls, and external API calls - making it possible to trace a single request across multiple services.
package main

import (
    "context"
    "fmt"
    "net/http"
    "time"
)

type contextKey string

const requestIDKey contextKey = "requestID"

func requestIDMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // Generate or extract request ID
        requestID := r.Header.Get("X-Request-ID")
        if requestID == "" {
            requestID = generateRequestID() // Implement this
        }

        // Add request ID to context
        ctx := context.WithValue(r.Context(), requestIDKey, requestID)

        // Add to response headers for client tracking
        w.Header().Set("X-Request-ID", requestID)

        // Pass modified context to next handler
        next.ServeHTTP(w, r.WithContext(ctx))
    })
}

func generateRequestID() string {
    // Simplified - use UUID in production
    return fmt.Sprintf("req-%d", time.Now().UnixNano())
}

func handler(w http.ResponseWriter, r *http.Request) {
    // Extract request ID from context
    requestID := r.Context().Value(requestIDKey)
    fmt.Fprintf(w, "Request ID: %v\n", requestID)
}

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/", handler)

    http.ListenAndServe(":8080", requestIDMiddleware(mux))
}
⚠️ Important: When using middleware with context, always call r.WithContext(ctx) and pass the new request to the next handler. Forgetting to do this is a common bug that causes context values to be lost.
Middleware Chaining:
func loggingMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        start := time.Now()

        // Add start time to context
        ctx := context.WithValue(r.Context(), "start_time", start)

        // Call next handler
        next.ServeHTTP(w, r.WithContext(ctx))

        // Log after request completes
        duration := time.Since(start)
        requestID := r.Context().Value(requestIDKey)
        log.Printf("[%v] %s %s - %v", requestID, r.Method, r.URL.Path, duration)
    })
}

func authMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        token := r.Header.Get("Authorization")

        // Validate token and get user
        user, err := validateToken(r.Context(), token)
        if err != nil {
            http.Error(w, "Unauthorized", http.StatusUnauthorized)
            return
        }

        // Add user to context
        ctx := context.WithValue(r.Context(), "user", user)
        next.ServeHTTP(w, r.WithContext(ctx))
    })
}

// Chain middlewares
func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/", handler)

    handler := requestIDMiddleware(
        loggingMiddleware(
            authMiddleware(mux),
        ),
    )

    http.ListenAndServe(":8080", handler)
}
Database Queries with Context
Database operations are where context becomes absolutely critical. Think of a busy restaurant kitchen - if a customer cancels their order, you need to immediately tell the chefs to stop cooking that dish and free up the burners for other orders.
🌍 Real-world Example: This is how connection pools work in high-traffic applications. When a request times out, the database context gets canceled, immediately returning the database connection to the pool instead of waiting for the query to complete. This prevents connection pool exhaustion and cascading failures.
package main

import (
    "context"
    "database/sql"
    "fmt"
    "time"
)

func queryWithTimeout(db *sql.DB) error {
    // Create context with timeout
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    // Query with context
    var name string
    query := "SELECT name FROM users WHERE id = ?"
    err := db.QueryRowContext(ctx, query, 1).Scan(&name)
    if err != nil {
        return fmt.Errorf("query failed: %w", err)
    }

    fmt.Println("User:", name)
    return nil
}

func transactionWithContext(db *sql.DB) error {
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    // Begin transaction with context
    tx, err := db.BeginTx(ctx, nil)
    if err != nil {
        return err
    }
    defer tx.Rollback() // Rolled back if not committed

    // Execute queries with transaction context
    _, err = tx.ExecContext(ctx, "UPDATE accounts SET balance = balance - 100 WHERE id = ?", 1)
    if err != nil {
        return err
    }

    _, err = tx.ExecContext(ctx, "UPDATE accounts SET balance = balance + 100 WHERE id = ?", 2)
    if err != nil {
        return err
    }

    // Commit with context
    return tx.Commit()
}
💡 Key Takeaway: Always pass context to database operations. The database driver will automatically cancel the query if the context is canceled, preventing long-running queries from blocking connection pools. This is essential for building resilient, high-performance database applications.
Connection Pool Management:
func queryWithRetry(ctx context.Context, db *sql.DB) error {
    maxRetries := 3

    for i := 0; i < maxRetries; i++ {
        // Check if context is still valid
        if ctx.Err() != nil {
            return ctx.Err()
        }

        // Try query with remaining time
        var name string
        err := db.QueryRowContext(ctx, "SELECT name FROM users WHERE id = ?", 1).Scan(&name)

        if err == nil {
            fmt.Println("Success:", name)
            return nil
        }

        // Check if error is temporary
        if isTemporary(err) && i < maxRetries-1 {
            time.Sleep(time.Second * time.Duration(i+1))
            continue
        }

        return err
    }

    return fmt.Errorf("max retries exceeded")
}
Goroutine Coordination
Context shines when coordinating multiple goroutines. Think of it like a symphony conductor - when the conductor raises their hands, all musicians stop playing simultaneously. Context provides this coordination mechanism for your concurrent code.
Wait for Multiple Workers
This pattern is incredibly common in concurrent programming. Imagine you're running a data processing pipeline with multiple workers - when you need to shut down, you want all workers to stop gracefully at the same time.
🌍 Real-world Example: This is how web servers handle graceful shutdown. When you press Ctrl+C, the server uses context to signal all HTTP handlers, background workers, and database connections to finish their current work and shut down together.
package main

import (
    "context"
    "fmt"
    "sync"
    "time"
)

func worker(ctx context.Context, id int, wg *sync.WaitGroup) {
    defer wg.Done()

    for {
        select {
        case <-ctx.Done():
            fmt.Printf("Worker %d stopped: %v\n", id, ctx.Err())
            return
        default:
            fmt.Printf("Worker %d working\n", id)
            time.Sleep(500 * time.Millisecond)
        }
    }
}

func main() {
    // Create context with timeout
    ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
    defer cancel()

    var wg sync.WaitGroup

    // Start multiple workers
    for i := 1; i <= 3; i++ {
        wg.Add(1)
        go worker(ctx, i, &wg)
    }

    // Wait for all workers to finish
    wg.Wait()
    fmt.Println("All workers stopped gracefully")
}
💡 Key Takeaway: Combine context with sync.WaitGroup for graceful shutdown patterns. The context signals when to stop, and the WaitGroup ensures all goroutines have finished before the main function exits. This prevents resource leaks and ensures clean shutdown.
Advanced: Worker Pool with Context
type Job struct {
    ID   int
    Data string
}

type Result struct {
    JobID  int
    Output string
    Error  error
}

func workerPool(ctx context.Context, jobs <-chan Job, results chan<- Result, numWorkers int) {
    var wg sync.WaitGroup

    for i := 0; i < numWorkers; i++ {
        wg.Add(1)
        go func(workerID int) {
            defer wg.Done()

            for {
                select {
                case <-ctx.Done():
                    return
                case job, ok := <-jobs:
                    if !ok {
                        return // Channel closed
                    }

                    // Process job with context
                    output, err := processJob(ctx, job)

                    // Send result
                    select {
                    case results <- Result{JobID: job.ID, Output: output, Error: err}:
                    case <-ctx.Done():
                        return
                    }
                }
            }
        }(i)
    }

    wg.Wait()
    close(results)
}

func processJob(ctx context.Context, job Job) (string, error) {
    // Simulate work that respects cancellation
    select {
    case <-time.After(100 * time.Millisecond):
        return fmt.Sprintf("Processed: %s", job.Data), nil
    case <-ctx.Done():
        return "", ctx.Err()
    }
}
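A sketch of how the pool above might be driven: create the channels, feed jobs, and read results until the pool closes the results channel (the job count and timeout are arbitrary):

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    defer cancel()

    jobs := make(chan Job)
    results := make(chan Result)

    // Start the pool; it closes results once all workers have finished.
    go workerPool(ctx, jobs, results, 4)

    // Feed jobs, then close the channel so workers can drain and exit.
    go func() {
        for i := 1; i <= 10; i++ {
            jobs <- Job{ID: i, Data: fmt.Sprintf("payload-%d", i)}
        }
        close(jobs)
    }()

    for r := range results {
        if r.Error != nil {
            fmt.Printf("job %d failed: %v\n", r.JobID, r.Error)
            continue
        }
        fmt.Println(r.Output)
    }
}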
Best Practices
These best practices come from years of production experience with Go applications. Following them will save you from common bugs and performance issues.
1. Pass context as the first parameter - func Do(ctx context.Context, ...)
   - Makes context usage obvious
   - Consistent with stdlib and community conventions
   - Easy to find in function signatures
   - Required by Go standard library design (net/http, database/sql, etc.)
2. Don't store context in structs - pass it explicitly
   - Contexts have lifetimes tied to operations, not objects
   - Storing context makes cancellation scope unclear
   - Exception: very short-lived structs (e.g., request builders)
   - Creates confusion about context lifecycle and ownership
3. Use context.Background() at the top level - main(), init(), tests
   - Clear entry points for context chains
   - Makes it obvious where contexts originate
   - Tests can create isolated contexts
   - Provides clean separation between application layers
4. Always call cancel() - use defer cancel() immediately
   - Prevents resource leaks
   - Stops timers even if operation completes early
   - Good practice even if you "know" it's not needed
   - Canceling twice is safe, not canceling causes leaks
5. Don't pass nil context - use context.TODO() if unsure
   - Nil contexts cause panics
   - context.TODO() signals that context handling needs attention
   - Makes refactoring easier
   - Provides clear signal in code reviews
6. Use context values sparingly - only for request-scoped data
   - Not for optional parameters
   - Not for dependencies or configuration
   - Only for data that crosses API boundaries
   - Keep values immutable and request-scoped
7. Make custom key types - avoid key collisions
   - Use unexported types for keys
   - Prevents collisions between packages
   - Provides type safety
   - Use struct{} as key type for zero memory footprint
8. Check ctx.Done() in loops - enable cancellation
   - Makes long-running operations cancellable
   - Prevents resource waste
   - Improves system responsiveness
   - Critical for scalable applications
Code Review Checklist:
// ✅ Good context usage
func ProcessRequest(ctx context.Context, data string) error {
    // Context as first parameter
    ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
    defer cancel() // Always defer

    // Pass context to all operations
    if err := validateData(ctx, data); err != nil {
        return err
    }

    return saveData(ctx, data)
}

// ❌ Bad context usage
type Processor struct {
    ctx context.Context // Don't store context
}

func (p *Processor) Process(data string) error {
    // Timeout but no defer cancel
    ctx, cancel := context.WithTimeout(p.ctx, 5*time.Second)

    // Forgot to check ctx.Done() in loop
    for i := 0; i < 1000; i++ {
        process(data[i])
    }

    cancel()
    return nil
}
Advanced Best Practices
1. Context Propagation Through Layers
Always propagate context through all layers of your application. Don't create new background contexts in the middle of a call chain - this breaks the cancellation signal.
// ✅ Good: Context propagates through all layers
func HandleRequest(w http.ResponseWriter, r *http.Request) {
    ctx := r.Context()
    data, err := fetchDataFromService(ctx)
    // ...
}

func fetchDataFromService(ctx context.Context) (*Data, error) {
    return queryDatabase(ctx, "SELECT * FROM data")
}

func queryDatabase(ctx context.Context, query string) (*Data, error) {
    // Context flows all the way to database layer
    return db.QueryContext(ctx, query)
}

// ❌ Bad: Creating new background context breaks chain
func fetchDataFromService(ctx context.Context) (*Data, error) {
    // This breaks the cancellation chain!
    return queryDatabase(context.Background(), "SELECT * FROM data")
}
2. Timeout Composition
When composing operations with different timeout requirements, use the parent context but add specific timeouts for each operation.
func processOrder(ctx context.Context, orderID string) error {
    // Parent context might have a 30-second timeout
    // Each operation gets its own shorter timeout

    // Payment processing: 5 seconds
    paymentCtx, cancel1 := context.WithTimeout(ctx, 5*time.Second)
    defer cancel1()
    if err := processPayment(paymentCtx, orderID); err != nil {
        return fmt.Errorf("payment failed: %w", err)
    }

    // Inventory update: 3 seconds
    inventoryCtx, cancel2 := context.WithTimeout(ctx, 3*time.Second)
    defer cancel2()
    if err := updateInventory(inventoryCtx, orderID); err != nil {
        return fmt.Errorf("inventory update failed: %w", err)
    }

    // Email notification: 2 seconds
    emailCtx, cancel3 := context.WithTimeout(ctx, 2*time.Second)
    defer cancel3()
    if err := sendConfirmationEmail(emailCtx, orderID); err != nil {
        // Email failure might be non-fatal
        log.Printf("email failed: %v", err)
    }

    return nil
}
3. Context Values with Type Safety
Create typed accessors for context values to prevent type assertion errors and improve code maintainability.
// Define custom key type
type contextKey int

const (
    userIDKey contextKey = iota
    requestIDKey
    sessionKey
)

// Type-safe setters
func WithUserID(ctx context.Context, userID int64) context.Context {
    return context.WithValue(ctx, userIDKey, userID)
}

func WithRequestID(ctx context.Context, requestID string) context.Context {
    return context.WithValue(ctx, requestIDKey, requestID)
}

// Type-safe getters with defaults
func GetUserID(ctx context.Context) (int64, bool) {
    userID, ok := ctx.Value(userIDKey).(int64)
    return userID, ok
}

func GetRequestID(ctx context.Context) string {
    if requestID, ok := ctx.Value(requestIDKey).(string); ok {
        return requestID
    }
    return "unknown"
}

// Usage
func handleRequest(w http.ResponseWriter, r *http.Request) {
    ctx := r.Context()
    ctx = WithUserID(ctx, 12345)
    ctx = WithRequestID(ctx, generateID())

    processRequest(ctx)
}

func processRequest(ctx context.Context) {
    userID, ok := GetUserID(ctx)
    if !ok {
        log.Println("No user ID in context")
        return
    }

    requestID := GetRequestID(ctx)
    log.Printf("[%s] Processing for user %d", requestID, userID)
}
4. Graceful Degradation with Context
Use context timeouts to implement graceful degradation when external services are slow.
func getRecommendations(ctx context.Context, userID string) []Recommendation {
    // Try to get personalized recommendations with a short timeout
    personalizedCtx, cancel := context.WithTimeout(ctx, 500*time.Millisecond)
    defer cancel()

    recommendations, err := fetchPersonalizedRecommendations(personalizedCtx, userID)
    if err == nil {
        return recommendations
    }

    // Fall back to popular recommendations if personalized fails
    log.Printf("Personalized recommendations failed, using fallback: %v", err)
    popularRecommendations, err := fetchPopularRecommendations(ctx)
    if err != nil {
        log.Printf("Popular recommendations also failed: %v", err)
        return getDefaultRecommendations()
    }

    return popularRecommendations
}
5. Context in Tests
Create test-specific contexts with appropriate timeouts to prevent tests from hanging.
func TestDataProcessing(t *testing.T) {
    // Create context with test timeout
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    // Use context in test
    result, err := processData(ctx, testData)
    if err != nil {
        t.Fatalf("processData failed: %v", err)
    }

    // Verify result
    if result != expected {
        t.Errorf("got %v, want %v", result, expected)
    }
}

// For table-driven tests
func TestWithTimeout(t *testing.T) {
    tests := []struct {
        name    string
        timeout time.Duration
        wantErr bool
    }{
        {"fast operation", 2 * time.Second, false},
        {"slow operation", 100 * time.Millisecond, true},
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            ctx, cancel := context.WithTimeout(context.Background(), tt.timeout)
            defer cancel()

            err := performOperation(ctx)
            if (err != nil) != tt.wantErr {
                t.Errorf("got error %v, wantErr %v", err, tt.wantErr)
            }
        })
    }
}
6. Monitoring Context Cancellation
Instrument your code to track context cancellation rates and sources for better observability.
type ContextMetrics struct {
    Canceled         int64
    DeadlineExceeded int64
    Completed        int64
}

var metrics ContextMetrics

func trackContextCompletion(ctx context.Context, operationName string) func() {
    start := time.Now()

    return func() {
        duration := time.Since(start)

        switch ctx.Err() {
        case nil:
            atomic.AddInt64(&metrics.Completed, 1)
            log.Printf("[%s] completed in %v", operationName, duration)
        case context.Canceled:
            atomic.AddInt64(&metrics.Canceled, 1)
            log.Printf("[%s] canceled after %v", operationName, duration)
        case context.DeadlineExceeded:
            atomic.AddInt64(&metrics.DeadlineExceeded, 1)
            log.Printf("[%s] deadline exceeded after %v", operationName, duration)
        }
    }
}

// Usage
func performOperation(ctx context.Context) error {
    defer trackContextCompletion(ctx, "performOperation")()

    // Do work...
    return doWork(ctx)
}
7. Context with Cleanup Functions
Use context cancellation to trigger cleanup operations automatically.
func processWithCleanup(ctx context.Context) error {
    // Set up resources
    tempFile, err := os.CreateTemp("", "process-*.tmp")
    if err != nil {
        return err
    }

    // Register cleanup on context cancellation
    cleanupCtx, cancel := context.WithCancel(ctx)
    defer cancel()

    go func() {
        <-cleanupCtx.Done()
        log.Println("Cleaning up temporary file")
        os.Remove(tempFile.Name())
    }()

    // Do work with context
    return doWorkWithFile(ctx, tempFile)
}
8. Detecting Context Leaks
Use the race detector and static analysis tools to detect context-related issues.
# Run with race detector
go test -race ./...

# Use golangci-lint with context checks
golangci-lint run --enable=contextcheck

# Check for context.TODO() in production code
grep -r "context.TODO()" --include="*.go" .
When to Use Different Context Types
Think of context types like different types of communication in an organization:
Use context.Background() when:
- Starting your application (main function, init)
- Running background jobs that have no parent context
- Writing tests that don't involve cancellation
- At the top of the stack in servers and daemons (inside HTTP handlers, derive from r.Context() instead)
Use context.WithCancel() when:
- You want to manually control cancellation timing
- Coordinating multiple goroutines with explicit shutdown
- Implementing graceful shutdown patterns
- Need to cancel operations based on application logic
Use context.WithTimeout() when:
- Calling external APIs or databases with max duration
- Preventing operations from running too long
- Setting maximum time limits relative to now
- HTTP client requests, RPC calls, database queries
Use context.WithDeadline() when:
- You have absolute time constraints (specific clock time)
- Running batch jobs that must complete by a specific time
- Coordinating time-sensitive operations (market close, scheduled events)
- Need to align multiple operations to the same deadline
Use context.WithValue() when:
- Passing request IDs for tracing/logging
- Carrying authentication/authorization data
- Propagating request metadata through layers
- Data needs to cross API boundaries but doesn't fit as a parameter
Common Pitfalls
Learning from others' mistakes is the fastest way to master context. Here are the most common issues that even experienced Go developers encounter:
1. Not checking ctx.Done() - operations can't be canceled

   // ❌ Bad
   for i := 0; i < 1000000; i++ {
       process(data[i]) // Can never be canceled
   }

   // ✅ Good
   for i := 0; i < 1000000; i++ {
       select {
       case <-ctx.Done():
           return ctx.Err()
       default:
           process(data[i])
       }
   }

2. Storing context in structs - contexts should flow through calls

   // ❌ Bad
   type Server struct {
       ctx context.Context
   }

   // ✅ Good
   type Server struct {
       // No context field
   }

   func (s *Server) Handle(ctx context.Context, req Request) {
       // Pass context as parameter
   }

3. Using string keys for values - can cause collisions

   // ❌ Bad
   ctx = context.WithValue(ctx, "user", user)

   // ✅ Good
   type userKey struct{}
   ctx = context.WithValue(ctx, userKey{}, user)

4. Forgetting to call cancel() - causes resource leaks

   // ❌ Bad
   ctx, cancel := context.WithTimeout(parent, 5*time.Second)
   // Missing defer cancel()

   // ✅ Good
   ctx, cancel := context.WithTimeout(parent, 5*time.Second)
   defer cancel()

5. Passing context.Background() everywhere - defeats the purpose

   // ❌ Bad
   func Handle(w http.ResponseWriter, r *http.Request) {
       doWork(context.Background()) // Ignores request context
   }

   // ✅ Good
   func Handle(w http.ResponseWriter, r *http.Request) {
       doWork(r.Context()) // Uses request context
   }

6. Not propagating context - breaks cancellation chain

   // ❌ Bad
   func doWork(ctx context.Context) {
       callDB(context.Background()) // New context breaks chain
   }

   // ✅ Good
   func doWork(ctx context.Context) {
       callDB(ctx) // Propagates context
   }
⚠️ Important: The most dangerous pitfall is forgetting to call cancel(). This can cause goroutine leaks that gradually consume memory and bring down your application in production. Always use defer cancel() immediately after creating a context.
💡 Key Takeaway: Context is not just a library - it's a design pattern. Think about the lifecycle of your operations and how they should be cancelled or timed out. Good context usage makes your applications more reliable and efficient.
Context Patterns
Understanding common context patterns helps you build more robust and maintainable applications. These patterns solve recurring problems in production systems and have been battle-tested across countless Go applications.
Timeout with Retry
Retrying operations with exponential backoff is critical for handling transient failures in distributed systems. This pattern combines context timeouts with retry logic to make operations resilient to temporary failures.
🌍 Real-world Example: Cloud APIs often experience temporary throttling or network issues. AWS SDK, Google Cloud SDK, and most HTTP clients use this pattern to automatically retry failed requests, improving reliability without requiring manual intervention.
func callWithRetry(ctx context.Context, maxRetries int) error {
    for i := 0; i < maxRetries; i++ {
        select {
        case <-ctx.Done():
            return ctx.Err()
        default:
            err := makeAPICall(ctx)
            if err == nil {
                return nil
            }

            // Don't retry on context errors
            if err == context.Canceled || err == context.DeadlineExceeded {
                return err
            }

            // Exponential backoff
            if i < maxRetries-1 {
                backoff := time.Duration(1<<uint(i)) * time.Second
                time.Sleep(backoff)
            }
        }
    }
    return fmt.Errorf("max retries exceeded")
}
Advanced Retry Pattern with Jitter:
func retryWithJitter(ctx context.Context, maxRetries int, fn func(context.Context) error) error {
    var lastErr error

    for attempt := 0; attempt < maxRetries; attempt++ {
        // Check context before attempting
        select {
        case <-ctx.Done():
            return ctx.Err()
        default:
        }

        // Attempt the operation
        err := fn(ctx)
        if err == nil {
            return nil
        }

        lastErr = err

        // Don't retry on context errors
        if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
            return err
        }

        // Calculate backoff with jitter to prevent thundering herd
        if attempt < maxRetries-1 {
            baseDelay := time.Duration(1<<uint(attempt)) * time.Second
            jitter := time.Duration(rand.Int63n(int64(baseDelay / 4)))
            delay := baseDelay + jitter

            log.Printf("Retry %d/%d failed: %v, waiting %v", attempt+1, maxRetries, err, delay)

            select {
            case <-time.After(delay):
                continue
            case <-ctx.Done():
                return ctx.Err()
            }
        }
    }

    return fmt.Errorf("exhausted retries (%d attempts): %w", maxRetries, lastErr)
}
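A usage sketch wrapping an HTTP call in the retry helper above (the URL and the fetchProfile wrapper are hypothetical):

func fetchProfile(ctx context.Context, url string) error {
    // Overall budget for all attempts combined.
    ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
    defer cancel()

    return retryWithJitter(ctx, 3, func(ctx context.Context) error {
        req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
        if err != nil {
            return err
        }
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode >= 500 {
            return fmt.Errorf("server error: %s", resp.Status)
        }
        return nil
    })
}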
Graceful Shutdown
Graceful shutdown ensures that ongoing operations complete before the application exits. This pattern is essential for preventing data loss and maintaining system consistency.
🌍 Real-world Example: Production web servers need to complete ongoing requests before shutting down. Kubernetes uses this pattern during rolling updates - the old pods gracefully finish their work before new pods take over, ensuring zero downtime.
func gracefulShutdown(server *http.Server) {
    // Create shutdown context with timeout
    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel()

    // Shutdown server gracefully
    if err := server.Shutdown(ctx); err != nil {
        log.Fatal("Server forced to shutdown:", err)
    }

    log.Println("Server exited gracefully")
}
Complete Graceful Shutdown Pattern:
func runServerWithGracefulShutdown() error {
    server := &http.Server{Addr: ":8080"}

    // Start server in goroutine
    serverErrors := make(chan error, 1)
    go func() {
        log.Println("Server starting on :8080")
        serverErrors <- server.ListenAndServe()
    }()

    // Listen for shutdown signals
    shutdown := make(chan os.Signal, 1)
    signal.Notify(shutdown, os.Interrupt, syscall.SIGTERM)

    // Block until shutdown signal or server error
    select {
    case err := <-serverErrors:
        return fmt.Errorf("server error: %w", err)

    case sig := <-shutdown:
        log.Printf("Received shutdown signal: %v", sig)

        // Give ongoing requests time to complete
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        // Gracefully shutdown server
        if err := server.Shutdown(ctx); err != nil {
            // Force close if graceful shutdown fails
            server.Close()
            return fmt.Errorf("forced shutdown: %w", err)
        }

        log.Println("Server stopped gracefully")
    }

    return nil
}
Fan-Out Fan-In with Context
Fan-out/fan-in patterns distribute work across multiple goroutines and collect results. Context enables cancellation of all workers when one fails or the parent context is canceled.
🌍 Real-world Example: This is how search engines work - when you search for something, the query fans out to multiple data centers or shards simultaneously. The first results come back quickly, and the system can cancel slower queries to maintain responsiveness.
func fanOut(ctx context.Context, tasks []Task) []Result {
    results := make(chan Result, len(tasks))
    var wg sync.WaitGroup

    for _, task := range tasks {
        wg.Add(1)
        go func(t Task) {
            defer wg.Done()

            // Process with context
            result := process(ctx, t)

            // Try to send result
            select {
            case results <- result:
            case <-ctx.Done():
            }
        }(task)
    }

    // Close results when all done
    go func() {
        wg.Wait()
        close(results)
    }()

    // Collect results
    var collected []Result
    for result := range results {
        collected = append(collected, result)
    }

    return collected
}
Production Fan-Out Pattern with Error Handling:
type Result struct {
    Value interface{}
    Error error
    Index int
}

func fanOutWithErrors(ctx context.Context, inputs []string, processFn func(context.Context, string) (interface{}, error)) ([]Result, error) {
    // Buffer results channel to prevent goroutine leaks
    results := make(chan Result, len(inputs))

    // Use errgroup for coordinated cancellation
    g, ctx := errgroup.WithContext(ctx)

    // Fan out work to goroutines
    for i, input := range inputs {
        i, input := i, input // Capture loop variables
        g.Go(func() error {
            value, err := processFn(ctx, input)

            select {
            case results <- Result{Value: value, Error: err, Index: i}:
                return err // Return error to errgroup
            case <-ctx.Done():
                return ctx.Err()
            }
        })
    }

    // Wait for all goroutines in separate goroutine
    go func() {
        g.Wait()
        close(results)
    }()

    // Collect all results
    collected := make([]Result, 0, len(inputs))
    for result := range results {
        collected = append(collected, result)
    }

    // Check if any goroutine failed
    if err := g.Wait(); err != nil {
        return collected, fmt.Errorf("fan-out error: %w", err)
    }

    return collected, nil
}
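A usage sketch that fans out over a list of IDs (fetchUser and loadUsers are hypothetical):

func loadUsers(ctx context.Context, ids []string) error {
    results, err := fanOutWithErrors(ctx, ids, func(ctx context.Context, id string) (interface{}, error) {
        // Each lookup respects the shared context; one failure cancels the rest.
        u, err := fetchUser(ctx, id)
        return u, err
    })
    if err != nil {
        return err
    }

    for _, r := range results {
        fmt.Printf("input #%d -> %v\n", r.Index, r.Value)
    }
    return nil
}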
Circuit Breaker with Context
Circuit breakers prevent cascading failures by failing fast when a dependency is unhealthy. Context integration allows the circuit breaker to respect parent timeouts and cancellation.
🌍 Real-world Example: Netflix's Hystrix library popularized this pattern. When a microservice starts failing, the circuit breaker trips and returns errors immediately instead of waiting for timeouts, preventing resource exhaustion and allowing the system to recover.
1type CircuitBreaker struct {
2 mu sync.Mutex
3 failureCount int
4 lastFailure time.Time
5 state string // "closed", "open", "half-open"
6 threshold int
7 timeout time.Duration
8}
9
10func NewCircuitBreaker(threshold int, timeout time.Duration) *CircuitBreaker {
11 return &CircuitBreaker{
12 threshold: threshold,
13 timeout: timeout,
14 state: "closed",
15 }
16}
17
18func (cb *CircuitBreaker) Call(ctx context.Context, fn func(context.Context) error) error {
19 cb.mu.Lock()
20
21 // Check if circuit is open
22 if cb.state == "open" {
23 if time.Since(cb.lastFailure) > cb.timeout {
24 cb.state = "half-open"
25 cb.failureCount = 0
26 } else {
27 cb.mu.Unlock()
28 return fmt.Errorf("circuit breaker is open")
29 }
30 }
31
32 cb.mu.Unlock()
33
34 // Execute function with context
35 err := fn(ctx)
36
37 cb.mu.Lock()
38 defer cb.mu.Unlock()
39
40 if err != nil {
41 cb.failureCount++
42 cb.lastFailure = time.Now()
43
44 if cb.failureCount >= cb.threshold {
45 cb.state = "open"
46 log.Printf("Circuit breaker opened after %d failures", cb.failureCount)
47 }
48
49 return err
50 }
51
52	// Success - reset the failure count so only consecutive failures trip the breaker
53	cb.failureCount = 0
54	if cb.state == "half-open" {
55		cb.state = "closed"
56		log.Println("Circuit breaker closed")
57	}
58
59 return nil
60}
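A usage sketch: checkInventory below is a hypothetical dependency call, wrapped so the breaker fails fast while still honoring the caller's deadline:
1var inventoryBreaker = NewCircuitBreaker(5, 30*time.Second)
2
3func checkInventoryGuarded(ctx context.Context, sku string) error {
4	// Bound each attempt so a hung dependency cannot hold the caller hostage
5	ctx, cancel := context.WithTimeout(ctx, 2*time.Second)
6	defer cancel()
7
8	return inventoryBreaker.Call(ctx, func(ctx context.Context) error {
9		return checkInventory(ctx, sku) // hypothetical downstream call
10	})
11}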
Rate Limiting with Context
Rate limiting controls how frequently operations can be performed. Context integration ensures rate-limited operations can still be canceled.
🌍 Real-world Example: API gateways like Kong and AWS API Gateway use rate limiting to protect backend services from overload. Context ensures that when a client cancels a request, it doesn't consume the rate limit quota.
1type RateLimiter struct {
2 tokens chan struct{}
3 rate time.Duration
4}
5
6func NewRateLimiter(requestsPerSecond int) *RateLimiter {
7 rl := &RateLimiter{
8 tokens: make(chan struct{}, requestsPerSecond),
9 rate: time.Second / time.Duration(requestsPerSecond),
10 }
11
12 // Fill initial tokens
13 for i := 0; i < requestsPerSecond; i++ {
14 rl.tokens <- struct{}{}
15 }
16
17 // Refill tokens periodically
18 go func() {
19 ticker := time.NewTicker(rl.rate)
20 defer ticker.Stop()
21
22 for range ticker.C {
23 select {
24 case rl.tokens <- struct{}{}:
25 default:
26 // Token bucket is full
27 }
28 }
29 }()
30
31 return rl
32}
33
34func (rl *RateLimiter) Wait(ctx context.Context) error {
35 select {
36 case <-rl.tokens:
37 return nil
38 case <-ctx.Done():
39 return ctx.Err()
40 }
41}
42
43// Usage
44func makeRateLimitedRequest(ctx context.Context, rl *RateLimiter, url string) error {
45 if err := rl.Wait(ctx); err != nil {
46 return fmt.Errorf("rate limit wait canceled: %w", err)
47 }
48
49 return makeHTTPRequest(ctx, url)
50}
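One caveat: the refill goroutine above runs for the life of the process. If the limiter itself should be tied to a lifecycle, a variant constructor can accept a context and stop refilling when it is canceled. This alternative signature is an assumption, not part of the code above:
1func NewRateLimiterWithContext(ctx context.Context, requestsPerSecond int) *RateLimiter {
2	rl := &RateLimiter{
3		tokens: make(chan struct{}, requestsPerSecond),
4		rate:   time.Second / time.Duration(requestsPerSecond),
5	}
6	for i := 0; i < requestsPerSecond; i++ {
7		rl.tokens <- struct{}{}
8	}
9
10	go func() {
11		ticker := time.NewTicker(rl.rate)
12		defer ticker.Stop()
13		for {
14			select {
15			case <-ticker.C:
16				select {
17				case rl.tokens <- struct{}{}:
18				default: // bucket is full
19				}
20			case <-ctx.Done():
21				return // stop refilling when the owning context ends
22			}
23		}
24	}()
25
26	return rl
27}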
Request Hedging Pattern
Request hedging sends duplicate requests to improve latency by racing multiple attempts. Context enables canceling slower requests once a fast one completes.
🌍 Real-world Example: Google and other large-scale systems use hedging for critical read operations. After a short delay, they send a duplicate request to a different replica. The first response wins, and the slower request is canceled, dramatically reducing tail latency.
1func hedgedRequest(ctx context.Context, urls []string, delay time.Duration) (string, error) {
2 ctx, cancel := context.WithCancel(ctx)
3 defer cancel()
4
5 type response struct {
6 result string
7 err error
8 }
9
10 responses := make(chan response, len(urls))
11
12 // Send first request immediately
13 go func() {
14 result, err := makeRequest(ctx, urls[0])
15 responses <- response{result, err}
16 }()
17
18 // Send additional hedged requests after delay
19 timer := time.After(delay)
20	hedgesSent, failures := 0, 0
21
22 for {
23 select {
24 case <-timer:
25 if hedgesSent < len(urls)-1 {
26 hedgesSent++
27 go func(url string) {
28 result, err := makeRequest(ctx, url)
29 responses <- response{result, err}
30 }(urls[hedgesSent])
31
32 // Schedule next hedge
33 timer = time.After(delay)
34 }
35
36 case resp := <-responses:
37 if resp.err == nil {
38 // Cancel remaining requests
39 cancel()
40 return resp.result, nil
41 }
42			failures++
43			// Give up only after every request has been sent and has failed
44			if failures == len(urls) {
45				return "", resp.err
46			}
47
48 case <-ctx.Done():
49 return "", ctx.Err()
50 }
51 }
52}
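The makeRequest helper is assumed above; a minimal sketch using net/http (the name and behavior are illustrative) could look like this:
1func makeRequest(ctx context.Context, url string) (string, error) {
2	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
3	if err != nil {
4		return "", err
5	}
6
7	resp, err := http.DefaultClient.Do(req)
8	if err != nil {
9		return "", err // includes context.Canceled once a faster replica has won
10	}
11	defer resp.Body.Close()
12
13	body, err := io.ReadAll(resp.Body)
14	if err != nil {
15		return "", err
16	}
17	return string(body), nil
18}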
Bulkhead Pattern with Context
The bulkhead pattern isolates resources to prevent failures in one area from affecting others. Context enables timeout and cancellation within resource-limited execution.
🌍 Real-world Example: Thread pools in application servers act as bulkheads. Each pool is limited in size, so a slow database query can't exhaust all threads and block HTTP request processing. This pattern prevents cascading failures across system components.
1type Bulkhead struct {
2 semaphore chan struct{}
3 timeout time.Duration
4}
5
6func NewBulkhead(maxConcurrent int, timeout time.Duration) *Bulkhead {
7 return &Bulkhead{
8 semaphore: make(chan struct{}, maxConcurrent),
9 timeout: timeout,
10 }
11}
12
13func (b *Bulkhead) Execute(ctx context.Context, fn func(context.Context) error) error {
14 // Try to acquire semaphore with timeout
15 select {
16 case b.semaphore <- struct{}{}:
17 defer func() { <-b.semaphore }()
18 case <-ctx.Done():
19 return ctx.Err()
20 case <-time.After(b.timeout):
21 return fmt.Errorf("bulkhead: timeout waiting for slot")
22 }
23
24 // Execute function with context
25 return fn(ctx)
26}
27
28// Usage example
29func queryDatabaseWithBulkhead(ctx context.Context, bulkhead *Bulkhead, db *sql.DB, query string) error {
30	return bulkhead.Execute(ctx, func(ctx context.Context) error {
31		// Concurrency here is capped by the bulkhead
32		rows, err := db.QueryContext(ctx, query)
33		if err != nil {
34			return err
35		}
36		return rows.Close()
37	})
38}
Pipeline with Backpressure
Pipelines process data through multiple stages. Buffered channels provide backpressure so fast stages cannot overwhelm slower downstream stages, while context lets the whole pipeline shut down cleanly.
🌍 Real-world Example: Data processing pipelines in systems like Apache Kafka use backpressure to prevent producers from overwhelming consumers. Context allows graceful shutdown of the entire pipeline when needed.
1type Pipeline struct {
2 ctx context.Context
3 cancel context.CancelFunc
4 wg sync.WaitGroup
5}
6
7func NewPipeline(ctx context.Context) *Pipeline {
8 ctx, cancel := context.WithCancel(ctx)
9 return &Pipeline{
10 ctx: ctx,
11 cancel: cancel,
12 }
13}
14
15func (p *Pipeline) Source(data []int) <-chan int {
16 out := make(chan int, 10) // Buffered for backpressure
17 p.wg.Add(1)
18
19 go func() {
20 defer p.wg.Done()
21 defer close(out)
22
23 for _, d := range data {
24 select {
25 case out <- d:
26 case <-p.ctx.Done():
27 return
28 }
29 }
30 }()
31
32 return out
33}
34
35func (p *Pipeline) Transform(in <-chan int, fn func(int) int) <-chan int {
36 out := make(chan int, 10)
37 p.wg.Add(1)
38
39 go func() {
40 defer p.wg.Done()
41 defer close(out)
42
43 for {
44 select {
45 case val, ok := <-in:
46 if !ok {
47 return
48 }
49
50 result := fn(val)
51
52 select {
53 case out <- result:
54 case <-p.ctx.Done():
55 return
56 }
57
58 case <-p.ctx.Done():
59 return
60 }
61 }
62 }()
63
64 return out
65}
66
67func (p *Pipeline) Sink(in <-chan int, fn func(int)) {
68 p.wg.Add(1)
69
70 go func() {
71 defer p.wg.Done()
72
73 for {
74 select {
75 case val, ok := <-in:
76 if !ok {
77 return
78 }
79 fn(val)
80
81 case <-p.ctx.Done():
82 return
83 }
84 }
85 }()
86}
87
88func (p *Pipeline) Wait() {
89 p.wg.Wait()
90}
91
92func (p *Pipeline) Cancel() {
93 p.cancel()
94}
95
96// Usage
97func runPipeline() {
98 pipeline := NewPipeline(context.Background())
99
100 source := pipeline.Source([]int{1, 2, 3, 4, 5})
101 doubled := pipeline.Transform(source, func(n int) int { return n * 2 })
102 squared := pipeline.Transform(doubled, func(n int) int { return n * n })
103
104 pipeline.Sink(squared, func(n int) {
105 log.Printf("Result: %d", n)
106 })
107
108 pipeline.Wait()
109}
💡 Key Takeaway: These patterns represent battle-tested solutions to common distributed systems problems. Context integration makes them robust to cancellation and timeouts, which is essential for building resilient production systems. Combine these patterns to build sophisticated concurrent applications that handle failures gracefully.
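As a sketch of how these pieces compose, the function below layers the RateLimiter, Bulkhead, and CircuitBreaker types defined earlier around a single dependency call; callBackend is a hypothetical function standing in for the real work:
1func guardedCall(ctx context.Context, rl *RateLimiter, bh *Bulkhead, cb *CircuitBreaker) error {
2	// Respect the rate limit first; the wait itself is cancellable.
3	if err := rl.Wait(ctx); err != nil {
4		return err
5	}
6
7	// Run inside the bulkhead so a slow dependency cannot exhaust every worker.
8	return bh.Execute(ctx, func(ctx context.Context) error {
9		// Let the circuit breaker fail fast if the dependency is unhealthy.
10		return cb.Call(ctx, func(ctx context.Context) error {
11			return callBackend(ctx) // hypothetical dependency call
12		})
13	})
14}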
Practice Exercises
Exercise 1: Timeout Function
Learning Objectives: Master context timeout management, implement proper cancellation handling, and understand context propagation patterns.
Real-World Context: Timeout handling is critical in distributed systems where operations might hang indefinitely. This pattern is used extensively in HTTP clients, database connections, and API calls to prevent system deadlocks and ensure responsive applications under varying network conditions.
Difficulty: Intermediate | Time Estimate: 25 minutes
Create a function that runs another function with a timeout while properly handling cancellation signals and resource cleanup.
Solution
1package main
2
3import (
4 "context"
5 "fmt"
6 "time"
7)
8
9func runWithTimeout(timeout time.Duration, fn func(context.Context) error) error {
10 ctx, cancel := context.WithTimeout(context.Background(), timeout)
11 defer cancel()
12
13 return fn(ctx)
14}
15
16func slowOperation(ctx context.Context) error {
17 select {
18 case <-time.After(2 * time.Second):
19 fmt.Println("Operation completed")
20 return nil
21 case <-ctx.Done():
22 return ctx.Err()
23 }
24}
25
26func main() {
27 err := runWithTimeout(1*time.Second, slowOperation)
28 if err != nil {
29 fmt.Println("Error:", err) // context deadline exceeded
30 }
31
32 err = runWithTimeout(3*time.Second, slowOperation)
33 if err != nil {
34 fmt.Println("Error:", err)
35 } else {
36 fmt.Println("Success!")
37 }
38}
Exercise 2: Cancellable Search
Learning Objectives: Implement graceful cancellation mechanisms, handle user-initiated interrupts, and build responsive search operations.
Real-World Context: Cancellable operations are essential for user-friendly applications. Whether users are searching large datasets, running database queries, or performing file system operations, they need the ability to interrupt long-running processes. This pattern is fundamental in CLI tools and web applications.
Difficulty: Intermediate | Time Estimate: 35 minutes
Implement a search that can be canceled mid-operation while properly cleaning up resources and providing feedback about the cancellation status.
Solution
1package main
2
3import (
4 "context"
5 "fmt"
6 "time"
7)
8
9func search(ctx context.Context, query string) ([]string, error) {
10 results := []string{}
11
12 // Simulate searching through items
13 items := []string{"apple", "apricot", "banana", "application", "avocado"}
14
15 for _, item := range items {
16 select {
17 case <-ctx.Done():
18 return results, ctx.Err()
19 default:
20 time.Sleep(500 * time.Millisecond) // Simulate slow search
21
22			if hasPrefix(item, query) {
23 results = append(results, item)
24 }
25 }
26 }
27
28 return results, nil
29}
30
31func hasPrefix(s, prefix string) bool {
32	return len(s) >= len(prefix) && s[:len(prefix)] == prefix
33}
34
35func main() {
36 ctx, cancel := context.WithTimeout(context.Background(), 1500*time.Millisecond)
37 defer cancel()
38
39 results, err := search(ctx, "app")
40 if err != nil {
41 fmt.Println("Search canceled:", err)
42 }
43
44 fmt.Println("Results:", results)
45}
Exercise 3: Pipeline with Cancellation
Learning Objectives: Construct multi-stage processing pipelines, propagate cancellation through goroutine chains, and implement resource-efficient data flow patterns.
Real-World Context: Data processing pipelines are the backbone of modern data engineering and ETL systems. From log processing to real-time analytics, these pipelines must handle graceful shutdown to prevent data loss and resource leaks. This pattern mirrors systems like Apache Kafka, AWS Lambda chains, and microservice architectures.
Difficulty: Advanced | Time Estimate: 50 minutes
Build a data processing pipeline that can be canceled at any stage while ensuring proper resource cleanup and preventing goroutine leaks.
Solution
1package main
2
3import (
4 "context"
5 "fmt"
6)
7
8func generator(ctx context.Context, nums ...int) <-chan int {
9 out := make(chan int)
10 go func() {
11 defer close(out)
12 for _, n := range nums {
13 select {
14 case out <- n:
15 case <-ctx.Done():
16 return
17 }
18 }
19 }()
20 return out
21}
22
23func square(ctx context.Context, in <-chan int) <-chan int {
24 out := make(chan int)
25 go func() {
26 defer close(out)
27 for n := range in {
28 select {
29 case out <- n * n:
30 case <-ctx.Done():
31 return
32 }
33 }
34 }()
35 return out
36}
37
38func main() {
39 ctx, cancel := context.WithCancel(context.Background())
40 defer cancel()
41
42 nums := generator(ctx, 1, 2, 3, 4, 5)
43 squared := square(ctx, nums)
44
45 // Cancel after receiving 3 results
46 count := 0
47 for n := range squared {
48 fmt.Println(n)
49 count++
50 if count == 3 {
51 cancel()
52 }
53 }
54}
Exercise 4: Request Logger
Learning Objectives: Implement middleware patterns, utilize context values for request tracking, and build observability into HTTP applications.
Real-World Context: Request logging and tracing is fundamental to observability in microservices. Systems like Jaeger, Zipkin, and AWS X-Ray use similar context propagation patterns to track requests across service boundaries. This exercise teaches you the foundation of distributed tracing and application monitoring.
Difficulty: Intermediate | Time Estimate: 30 minutes
Create middleware that logs request duration using context values while implementing proper request ID propagation and structured logging.
Solution
1package main
2
3import (
4 "context"
5 "fmt"
6 "net/http"
7 "time"
8)
9
10type contextKey string
11
12const startTimeKey contextKey = "startTime"
13
14func loggingMiddleware(next http.Handler) http.Handler {
15 return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
16 start := time.Now()
17 ctx := context.WithValue(r.Context(), startTimeKey, start)
18
19 next.ServeHTTP(w, r.WithContext(ctx))
20
21 duration := time.Since(start)
22 fmt.Printf("%s %s - %v\n", r.Method, r.URL.Path, duration)
23 })
24}
25
26func handler(w http.ResponseWriter, r *http.Request) {
27	// Downstream handlers can read the start time the middleware stored in the context
28	if start, ok := r.Context().Value(startTimeKey).(time.Time); ok {
29		fmt.Printf("handler reached %v after the request arrived\n", time.Since(start))
30	}
31
32	// Simulate some work
33	time.Sleep(100 * time.Millisecond)
34	fmt.Fprint(w, "Hello, World!")
35}
36
37func main() {
38	mux := http.NewServeMux()
39	mux.HandleFunc("/", handler)
40
41	http.ListenAndServe(":8080", loggingMiddleware(mux))
42}
Exercise 5: Batch Processor
Process items in batches, stopping early if the overall timeout expires.
Solution
1package main
2
3import (
4 "context"
5 "fmt"
6 "time"
7)
8
9func processBatch(ctx context.Context, items []int, batchSize int) error {
10 for i := 0; i < len(items); i += batchSize {
11 select {
12 case <-ctx.Done():
13 return ctx.Err()
14 default:
15 end := i + batchSize
16 if end > len(items) {
17 end = len(items)
18 }
19
20 batch := items[i:end]
21 fmt.Printf("Processing batch: %v\n", batch)
22 time.Sleep(500 * time.Millisecond) // Simulate processing
23 }
24 }
25
26 return nil
27}
28
29func main() {
30 items := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
31
32 ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
33 defer cancel()
34
35 err := processBatch(ctx, items, 3)
36 if err != nil {
37 fmt.Println("Processing stopped:", err)
38 } else {
39 fmt.Println("All batches processed")
40 }
41}
Exercise 6: Context-Aware Pipeline
Build a data processing pipeline that respects context cancellation at each stage.
Solution
1package main
2
3import (
4 "context"
5 "fmt"
6 "time"
7)
8
9func generate(ctx context.Context, nums ...int) <-chan int {
10 out := make(chan int)
11 go func() {
12 defer close(out)
13 for _, n := range nums {
14 select {
15 case out <- n:
16 case <-ctx.Done():
17 return
18 }
19 }
20 }()
21 return out
22}
23
24func square(ctx context.Context, in <-chan int) <-chan int {
25 out := make(chan int)
26 go func() {
27 defer close(out)
28 for n := range in {
29 select {
30 case out <- n * n:
31 case <-ctx.Done():
32 return
33 }
34 }
35 }()
36 return out
37}
38
39func filter(ctx context.Context, in <-chan int, predicate func(int) bool) <-chan int {
40 out := make(chan int)
41 go func() {
42 defer close(out)
43 for n := range in {
44 if predicate(n) {
45 select {
46 case out <- n:
47 case <-ctx.Done():
48 return
49 }
50 }
51 }
52 }()
53 return out
54}
55
56func main() {
57 ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
58 defer cancel()
59
60 nums := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
61
62 // Pipeline: generate -> square -> filter
63 pipeline := filter(ctx,
64 square(ctx,
65 generate(ctx, nums...)),
66 func(n int) bool { return n%2 == 0 })
67
68 for result := range pipeline {
69 fmt.Println(result)
70 time.Sleep(500 * time.Millisecond)
71 }
72
73 fmt.Println("Pipeline completed or canceled")
74}
Exercise 7: HTTP Request with Deadline
Create an HTTP client that enforces request deadlines using context.
Solution
1package main
2
3import (
4 "context"
5 "fmt"
6 "io"
7 "net/http"
8 "time"
9)
10
11type HTTPClient struct {
12 client *http.Client
13 timeout time.Duration
14}
15
16func NewHTTPClient(timeout time.Duration) *HTTPClient {
17 return &HTTPClient{
18 client: &http.Client{},
19 timeout: timeout,
20 }
21}
22
23func (c *HTTPClient) Get(ctx context.Context, url string) (string, error) {
24 ctx, cancel := context.WithTimeout(ctx, c.timeout)
25 defer cancel()
26
27 req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
28 if err != nil {
29 return "", fmt.Errorf("creating request: %w", err)
30 }
31
32 resp, err := c.client.Do(req)
33 if err != nil {
34 return "", fmt.Errorf("making request: %w", err)
35 }
36 defer resp.Body.Close()
37
38 if resp.StatusCode != http.StatusOK {
39 return "", fmt.Errorf("unexpected status: %d", resp.StatusCode)
40 }
41
42 body, err := io.ReadAll(resp.Body)
43 if err != nil {
44 return "", fmt.Errorf("reading response: %w", err)
45 }
46
47 return string(body), nil
48}
49
50func (c *HTTPClient) GetWithRetry(ctx context.Context, url string, maxRetries int) (string, error) {
51 var lastErr error
52
53 for i := 0; i < maxRetries; i++ {
54 select {
55 case <-ctx.Done():
56 return "", ctx.Err()
57 default:
58 }
59
60 body, err := c.Get(ctx, url)
61 if err == nil {
62 return body, nil
63 }
64
65 lastErr = err
66 fmt.Printf("Attempt %d failed: %v\n", i+1, err)
67
68 if i < maxRetries-1 {
69 backoff := time.Duration(i+1) * time.Second
70 select {
71 case <-time.After(backoff):
72 case <-ctx.Done():
73 return "", ctx.Err()
74 }
75 }
76 }
77
78 return "", fmt.Errorf("max retries exceeded: %w", lastErr)
79}
80
81func main() {
82 client := NewHTTPClient(5 * time.Second)
83 ctx := context.Background()
84
85 // Example with a fast endpoint
86 body, err := client.GetWithRetry(ctx, "https://httpbin.org/delay/1", 3)
87 if err != nil {
88 fmt.Printf("Error: %v\n", err)
89 } else {
90 fmt.Printf("Success! Body length: %d bytes\n", len(body))
91 }
92
93 // Example with timeout
94 ctx2, cancel := context.WithTimeout(context.Background(), 2*time.Second)
95 defer cancel()
96
97 _, err = client.Get(ctx2, "https://httpbin.org/delay/5")
98 if err != nil {
99 fmt.Printf("Expected timeout: %v\n", err)
100 }
101}
Exercise 8: Graceful Server Shutdown
Implement a server with graceful shutdown using context.
Solution
1package main
2
3import (
4 "context"
5 "fmt"
6 "log"
7 "net/http"
8 "os"
9 "os/signal"
10 "sync"
11 "syscall"
12 "time"
13)
14
15type Server struct {
16 http *http.Server
17 wg sync.WaitGroup
18 mu sync.Mutex
19 requests int
20}
21
22func NewServer(addr string) *Server {
23 s := &Server{}
24
25 mux := http.NewServeMux()
26 mux.HandleFunc("/", s.handleRequest)
27 mux.HandleFunc("/slow", s.handleSlowRequest)
28 mux.HandleFunc("/stats", s.handleStats)
29
30 s.http = &http.Server{
31 Addr: addr,
32 Handler: mux,
33 }
34
35 return s
36}
37
38func (s *Server) handleRequest(w http.ResponseWriter, r *http.Request) {
39 s.trackRequest()
40 fmt.Fprintf(w, "Hello from server!\n")
41}
42
43func (s *Server) handleSlowRequest(w http.ResponseWriter, r *http.Request) {
44 s.trackRequest()
45
46 // Simulate slow operation
47 select {
48 case <-time.After(5 * time.Second):
49 fmt.Fprintf(w, "Slow request completed\n")
50 case <-r.Context().Done():
51 log.Println("Request canceled by client")
52 return
53 }
54}
55
56func (s *Server) handleStats(w http.ResponseWriter, r *http.Request) {
57 s.mu.Lock()
58 count := s.requests
59 s.mu.Unlock()
60
61 fmt.Fprintf(w, "Total requests: %d\n", count)
62}
63
64func (s *Server) trackRequest() {
65 s.mu.Lock()
66 s.requests++
67 s.mu.Unlock()
68}
69
70func (s *Server) Start() error {
71 log.Printf("Server starting on %s\n", s.http.Addr)
72 return s.http.ListenAndServe()
73}
74
75func (s *Server) Shutdown(ctx context.Context) error {
76 log.Println("Server shutting down...")
77
78 if err := s.http.Shutdown(ctx); err != nil {
79 return fmt.Errorf("server shutdown failed: %w", err)
80 }
81
82 log.Println("Server stopped")
83 return nil
84}
85
86func main() {
87 server := NewServer(":8080")
88
89 // Start server in goroutine
90 go func() {
91 if err := server.Start(); err != nil && err != http.ErrServerClosed {
92 log.Fatal(err)
93 }
94 }()
95
96 // Wait for interrupt signal
97 quit := make(chan os.Signal, 1)
98 signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
99 <-quit
100
101 log.Println("Received shutdown signal")
102
103 // Create shutdown context with timeout
104 ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
105 defer cancel()
106
107 if err := server.Shutdown(ctx); err != nil {
108 log.Fatal("Server forced to shutdown:", err)
109 }
110
111 log.Println("Server exited gracefully")
112}
Exercise 9: Concurrent Task Execution with Timeout
Execute multiple tasks concurrently with overall timeout.
Solution
1package main
2
3import (
4 "context"
5 "fmt"
6 "math/rand"
7 "sync"
8 "time"
9)
10
11type Task struct {
12 ID int
13 Name string
14 Fn func(context.Context) (interface{}, error)
15}
16
17type TaskResult struct {
18 TaskID int
19 Result interface{}
20 Error error
21}
22
23type TaskExecutor struct {
24 timeout time.Duration
25}
26
27func NewTaskExecutor(timeout time.Duration) *TaskExecutor {
28 return &TaskExecutor{timeout: timeout}
29}
30
31func (te *TaskExecutor) Execute(ctx context.Context, tasks []Task) []TaskResult {
32 ctx, cancel := context.WithTimeout(ctx, te.timeout)
33 defer cancel()
34
35 results := make([]TaskResult, len(tasks))
36 var wg sync.WaitGroup
37
38 for i, task := range tasks {
39 wg.Add(1)
40 go func(index int, t Task) {
41 defer wg.Done()
42
43 resultChan := make(chan TaskResult, 1)
44
45 go func() {
46 result, err := t.Fn(ctx)
47 resultChan <- TaskResult{
48 TaskID: t.ID,
49 Result: result,
50 Error: err,
51 }
52 }()
53
54 select {
55 case r := <-resultChan:
56 results[index] = r
57 case <-ctx.Done():
58 results[index] = TaskResult{
59 TaskID: t.ID,
60 Error: ctx.Err(),
61 }
62 }
63 }(i, task)
64 }
65
66 wg.Wait()
67 return results
68}
69
70func simulateWork(ctx context.Context, duration time.Duration, id int) (interface{}, error) {
71 select {
72 case <-time.After(duration):
73 return fmt.Sprintf("Task %d completed", id), nil
74 case <-ctx.Done():
75 return nil, ctx.Err()
76 }
77}
78
79func main() {
80 executor := NewTaskExecutor(5 * time.Second)
81
82 tasks := []Task{
83 {
84 ID: 1,
85 Name: "Quick Task",
86 Fn: func(ctx context.Context) (interface{}, error) {
87 return simulateWork(ctx, 1*time.Second, 1)
88 },
89 },
90 {
91 ID: 2,
92 Name: "Medium Task",
93 Fn: func(ctx context.Context) (interface{}, error) {
94 return simulateWork(ctx, 3*time.Second, 2)
95 },
96 },
97 {
98 ID: 3,
99 Name: "Slow Task",
100 Fn: func(ctx context.Context) (interface{}, error) {
101 return simulateWork(ctx, 7*time.Second, 3)
102 },
103 },
104 {
105 ID: 4,
106 Name: "Random Task",
107 Fn: func(ctx context.Context) (interface{}, error) {
108 duration := time.Duration(rand.Intn(4)+1) * time.Second
109 return simulateWork(ctx, duration, 4)
110 },
111 },
112 }
113
114 fmt.Println("Executing tasks with 5-second timeout...")
115 start := time.Now()
116
117 results := executor.Execute(context.Background(), tasks)
118
119 elapsed := time.Since(start)
120
121 fmt.Printf("\nExecution completed in %.2fs\n\n", elapsed.Seconds())
122
123 for _, result := range results {
124 if result.Error != nil {
125 fmt.Printf("Task %d failed: %v\n", result.TaskID, result.Error)
126 } else {
127 fmt.Printf("Task %d: %v\n", result.TaskID, result.Result)
128 }
129 }
130}
Exercise 10: Context Value Propagation for Request Tracing
Implement request tracing using context values.
Solution
1package main
2
3import (
4 "context"
5 "fmt"
6 "log"
7 "math/rand"
8 "time"
9)
10
11type contextKey string
12
13const (
14 requestIDKey contextKey = "requestID"
15 userIDKey contextKey = "userID"
16)
17
18type RequestMetadata struct {
19 RequestID string
20 UserID string
21 StartTime time.Time
22}
23
24func WithRequestID(ctx context.Context, requestID string) context.Context {
25 return context.WithValue(ctx, requestIDKey, requestID)
26}
27
28func GetRequestID(ctx context.Context) string {
29 if id, ok := ctx.Value(requestIDKey).(string); ok {
30 return id
31 }
32 return "unknown"
33}
34
35func WithUserID(ctx context.Context, userID string) context.Context {
36 return context.WithValue(ctx, userIDKey, userID)
37}
38
39func GetUserID(ctx context.Context) string {
40 if id, ok := ctx.Value(userIDKey).(string); ok {
41 return id
42 }
43 return "anonymous"
44}
45
46func logWithContext(ctx context.Context, level, message string) {
47 requestID := GetRequestID(ctx)
48 userID := GetUserID(ctx)
49 log.Printf("[%s] [%s] [user:%s] %s", level, requestID, userID, message)
50}
51
52func fetchUser(ctx context.Context, userID string) error {
53 logWithContext(ctx, "INFO", fmt.Sprintf("Fetching user: %s", userID))
54 time.Sleep(100 * time.Millisecond)
55
56 select {
57 case <-ctx.Done():
58 return ctx.Err()
59 default:
60 logWithContext(ctx, "INFO", "User fetched successfully")
61 return nil
62 }
63}
64
65func fetchOrders(ctx context.Context) ([]string, error) {
66 userID := GetUserID(ctx)
67 logWithContext(ctx, "INFO", fmt.Sprintf("Fetching orders for user: %s", userID))
68 time.Sleep(200 * time.Millisecond)
69
70 select {
71 case <-ctx.Done():
72 return nil, ctx.Err()
73 default:
74 orders := []string{"order-1", "order-2", "order-3"}
75 logWithContext(ctx, "INFO", fmt.Sprintf("Found %d orders", len(orders)))
76 return orders, nil
77 }
78}
79
80func processPayment(ctx context.Context, orderID string) error {
81 logWithContext(ctx, "INFO", fmt.Sprintf("Processing payment for order: %s", orderID))
82 time.Sleep(150 * time.Millisecond)
83
84 select {
85 case <-ctx.Done():
86 return ctx.Err()
87 default:
88 logWithContext(ctx, "INFO", "Payment processed successfully")
89 return nil
90 }
91}
92
93func handleRequest(ctx context.Context, userID string) error {
94 // Add request metadata to context
95 ctx = WithUserID(ctx, userID)
96
97 logWithContext(ctx, "INFO", "Starting request processing")
98
99 // Fetch user
100 if err := fetchUser(ctx, userID); err != nil {
101 logWithContext(ctx, "ERROR", fmt.Sprintf("Failed to fetch user: %v", err))
102 return err
103 }
104
105 // Fetch orders
106 orders, err := fetchOrders(ctx)
107 if err != nil {
108 logWithContext(ctx, "ERROR", fmt.Sprintf("Failed to fetch orders: %v", err))
109 return err
110 }
111
112 // Process payment for first order
113 if len(orders) > 0 {
114 if err := processPayment(ctx, orders[0]); err != nil {
115 logWithContext(ctx, "ERROR", fmt.Sprintf("Failed to process payment: %v", err))
116 return err
117 }
118 }
119
120 logWithContext(ctx, "INFO", "Request completed successfully")
121 return nil
122}
123
124func generateRequestID() string {
125 return fmt.Sprintf("req-%d", rand.Intn(100000))
126}
127
128func main() {
129 rand.Seed(time.Now().UnixNano())
130
131 // Simulate multiple concurrent requests
132 for i := 1; i <= 3; i++ {
133 go func(reqNum int) {
134 requestID := generateRequestID()
135 userID := fmt.Sprintf("user-%d", reqNum)
136
137 ctx := context.Background()
138 ctx = WithRequestID(ctx, requestID)
139 ctx, cancel := context.WithTimeout(ctx, 2*time.Second)
140 defer cancel()
141
142 if err := handleRequest(ctx, userID); err != nil {
143 logWithContext(ctx, "ERROR", fmt.Sprintf("Request failed: %v", err))
144 }
145 }(i)
146 }
147
148 // Wait for all requests to complete
149 time.Sleep(3 * time.Second)
150}
Summary
- context.Background() for top-level contexts
- context.WithCancel() for manual cancellation
- context.WithTimeout() for time-limited operations
- context.WithDeadline() for deadline-based cancellation
- context.WithValue() for request-scoped values
- Always pass context as the first parameter
- Always defer cancel() to prevent leaks
- Check ctx.Done() in long-running operations
- Use custom key types for context values
- Don't store contexts in structs
- Propagate context through all function calls
Master context and you'll write robust, cancellable Go services that handle graceful shutdown, prevent resource leaks, and scale to production workloads!