Structured Logging

Think of structured logging like upgrading from scribbled notes to organized spreadsheets. Traditional logs are like freeform text notes - humans can read them, but computers struggle to understand them. Structured logs are like well-organized data with clear labels - both humans and machines can easily parse and query them.

Go 1.21+ includes the log/slog package for structured logging out of the box, making it the modern standard for Go applications.

💡 Key Takeaway: Structured logging transforms your application's output from unparseable text into queryable, machine-readable data that's perfect for modern monitoring and debugging tools.

Why Structured Logging Matters

Before we dive into the technical details, let's understand why structured logging has become the industry standard for modern applications.

The Problem with Traditional Logging

Traditional logging using fmt.Printf or the old log package creates unstructured text output:

2024-01-15 14:32:11 User john_doe logged in from IP 192.168.1.100
2024-01-15 14:32:15 Processing order #12345 for user john_doe total $299.99
2024-01-15 14:32:18 Error processing payment: card declined

This looks fine to humans, but creates major challenges:

  1. Hard to Parse: Each log message has a different format
  2. Impossible to Query: Try finding "all errors for user X" across millions of logs
  3. No Context: Related log entries aren't easily connected
  4. Performance Issues: String concatenation is expensive at high volumes
  5. Lost Information: Important data might be buried in free-form text

The Structured Logging Solution

With structured logging, the same events become queryable data:

{"time":"2024-01-15T14:32:11Z","level":"INFO","msg":"User logged in","user":"john_doe","ip":"192.168.1.100"}
{"time":"2024-01-15T14:32:15Z","level":"INFO","msg":"Processing order","user":"john_doe","order_id":12345,"total":299.99}
{"time":"2024-01-15T14:32:18Z","level":"ERROR","msg":"Payment failed","user":"john_doe","order_id":12345,"error":"card declined"}

Now you can easily:

  • Find all errors for a specific user
  • Calculate average order values
  • Track login patterns by IP address
  • Correlate related events across distributed systems
  • Build dashboards and alerts

This isn't just about prettier logs - it's about making your application observable and debuggable at scale.
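
To make "queryable" concrete, here is a small illustrative sketch - not tied to any particular log backend - that scans JSON log lines in Go and pulls out every error for one user. The field names match the example entries above:

package main

import (
    "bufio"
    "encoding/json"
    "fmt"
    "strings"
)

func main() {
    // Two of the structured entries from above, as they would sit in a log file.
    logs := `{"time":"2024-01-15T14:32:11Z","level":"INFO","msg":"User logged in","user":"john_doe"}
{"time":"2024-01-15T14:32:18Z","level":"ERROR","msg":"Payment failed","user":"john_doe"}`

    scanner := bufio.NewScanner(strings.NewReader(logs))
    for scanner.Scan() {
        var entry map[string]any
        if err := json.Unmarshal(scanner.Bytes(), &entry); err != nil {
            continue // skip malformed lines
        }
        // "All errors for user john_doe" becomes a two-field comparison.
        if entry["level"] == "ERROR" && entry["user"] == "john_doe" {
            fmt.Println("match:", entry["msg"])
        }
    }
}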

Introduction to slog

The log/slog package brings structured logging to Go's standard library with a clean, performant API designed for production use.

Basic Usage

Let's start with the fundamental difference between traditional and structured logging. Traditional logging uses string formatting: fmt.Sprintf("User %s logged in at %s", user, time). Structured logging uses key-value pairs: logger.Info("User logged in", "user", user, "time", time).

The key advantage? With structured logs, you can ask questions like "show me all errors for user X" or "what's the average response time for API Y" - something that's incredibly difficult with unstructured text logs.

package main

import (
    "log/slog"
    "os"
)

func main() {
    // Default logger
    slog.Info("Application starting")
    slog.Debug("Debug message") // Won't show
    slog.Warn("Warning message")
    slog.Error("Error message")

    // With attributes
    slog.Info("User logged in",
        "user_id", 12345,
        "username", "john_doe",
        "ip", "192.168.1.1")

    // JSON handler
    jsonHandler := slog.NewJSONHandler(os.Stdout, nil)
    jsonLogger := slog.New(jsonHandler)

    jsonLogger.Info("JSON formatted log",
        "service", "api",
        "version", "1.0.0")
}
// run

Understanding slog's Architecture

The slog package uses a clean separation of concerns through three main components:

  1. Logger: The API you interact with to create log entries
  2. Handler: Processes and outputs log records
  3. Record: Contains the actual log data (level, message, attributes, time)

This design makes slog both flexible and performant. You can easily swap handlers without changing your logging code, and handlers can optimize how they process records.
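
As a quick illustration of that separation, the sketch below sends the identical logging call through two different handlers - the output format changes while the call site stays the same:

package main

import (
    "log/slog"
    "os"
)

func main() {
    // Same Logger API, two interchangeable Handlers.
    textLogger := slog.New(slog.NewTextHandler(os.Stdout, nil))
    jsonLogger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

    textLogger.Info("cache miss", "key", "user:123")
    jsonLogger.Info("cache miss", "key", "user:123")
}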

Configuring Log Levels

Log levels are like volume controls for your application's output. In development, you want to hear everything. In production, you only want to hear important things. This prevents log spam while ensuring you capture critical information.

package main

import (
    "log/slog"
    "os"
)

func main() {
    // Create handler with specific level
    opts := &slog.HandlerOptions{
        Level: slog.LevelDebug,
    }
    handler := slog.NewJSONHandler(os.Stdout, opts)
    logger := slog.New(handler)

    // Now debug logs will appear
    logger.Debug("Debug information", "details", "verbose data")
    logger.Info("Normal operation", "status", "running")
    logger.Warn("Potential issue", "metric", 95)
    logger.Error("Something failed", "error", "connection timeout")
}
// run

⚠️ Important: The log level you choose significantly impacts performance and storage costs. Debug logging can generate 10x more log volume than info-level logging in production applications.

The Four Standard Log Levels

Understanding when to use each log level is crucial for effective logging:

DEBUG: Detailed information for diagnosing problems. Examples:

  • Variable values during execution
  • Entry/exit of functions
  • Query parameters and results
  • Internal state transitions

INFO: Confirmation that things are working as expected. Examples:

  • Application startup/shutdown
  • Configuration changes
  • Major business events (user registered, order placed)
  • External service calls

WARN: Something unexpected happened, but the application can continue. Examples:

  • Deprecated API usage
  • Configuration issues with fallbacks
  • Slow response times
  • Resource usage approaching limits

ERROR: A serious problem that prevented an operation from completing. Examples:

  • Failed database queries
  • Network connection errors
  • Invalid user input that crashed a handler
  • Unable to write to disk
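
Each named level is an integer under the hood: DEBUG is -4, INFO is 0, WARN is 4, and ERROR is 8. The four-step gaps are deliberate, leaving room for custom levels like the Trace and Fatal defined below. A quick check:

package main

import (
    "fmt"
    "log/slog"
)

func main() {
    // The named levels are spaced four apart, leaving room for
    // custom levels such as Trace (-8) or Fatal (12).
    fmt.Println(slog.LevelDebug, int(slog.LevelDebug)) // DEBUG -4
    fmt.Println(slog.LevelInfo, int(slog.LevelInfo))   // INFO 0
    fmt.Println(slog.LevelWarn, int(slog.LevelWarn))   // WARN 4
    fmt.Println(slog.LevelError, int(slog.LevelError)) // ERROR 8
}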

Custom Log Levels

While Go provides standard levels, sometimes you need more granularity. For example, you might want a Trace level for extremely detailed debugging or a Fatal level for critical errors that should halt the application.

Real-world Example: In a distributed system, you might use:

  • Trace: Individual network packets, detailed state transitions
  • Debug: Function entry/exit, variable values
  • Info: User actions, major operations
  • Warn: Recoverable errors, performance issues
  • Error: Failed operations, exceptions
  • Fatal: System failures that require immediate shutdown
package main

import (
    "context"
    "log/slog"
    "os"
)

const (
    LevelTrace = slog.Level(-8)
    LevelFatal = slog.Level(12)
)

func main() {
    // Replace default level names
    opts := &slog.HandlerOptions{
        Level: LevelTrace,
        ReplaceAttr: func(groups []string, a slog.Attr) slog.Attr {
            if a.Key == slog.LevelKey {
                level := a.Value.Any().(slog.Level)
                switch {
                case level < slog.LevelDebug:
                    a.Value = slog.StringValue("TRACE")
                case level >= LevelFatal:
                    a.Value = slog.StringValue("FATAL")
                }
            }
            return a
        },
    }

    logger := slog.New(slog.NewJSONHandler(os.Stdout, opts))

    // Logger.Log requires a context; context.Background is fine here
    logger.Log(context.Background(), LevelTrace, "Very detailed trace information", "packet_id", 12345)
    logger.Debug("Debug information")
    logger.Info("Normal information")
    logger.Warn("Warning message")
    logger.Error("Error occurred")
    logger.Log(context.Background(), LevelFatal, "Critical system failure")
}
// run

Dynamic Log Level Adjustment

In production, you often need to change log levels without restarting your application. This is crucial for debugging live issues without service disruption. The standard library ships slog.LevelVar for exactly this purpose; the example below builds an equivalent on sync/atomic to show how the mechanism works.

package main

import (
    "log/slog"
    "os"
    "sync/atomic"
)

type LevelVar struct {
    val atomic.Int64
}

func (v *LevelVar) Level() slog.Level {
    return slog.Level(v.val.Load())
}

func (v *LevelVar) Set(l slog.Level) {
    v.val.Store(int64(l))
}

func main() {
    // Create level variable
    levelVar := &LevelVar{}
    levelVar.Set(slog.LevelInfo)

    opts := &slog.HandlerOptions{
        Level: levelVar,
    }
    logger := slog.New(slog.NewJSONHandler(os.Stdout, opts))

    logger.Debug("This won't show initially")
    logger.Info("This will show")

    // Change level at runtime
    levelVar.Set(slog.LevelDebug)
    logger.Debug("Now this will show")
    logger.Info("This still shows")
}
// run
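
In practice you rarely need to hand-roll this type: the standard library already provides slog.LevelVar, which is safe for concurrent use and plugs into HandlerOptions the same way. A minimal sketch:

package main

import (
    "log/slog"
    "os"
)

func main() {
    // slog.LevelVar is the standard library's atomic level holder;
    // its zero value corresponds to LevelInfo.
    var level slog.LevelVar

    logger := slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
        Level: &level,
    }))

    logger.Debug("hidden at Info level")
    level.Set(slog.LevelDebug) // flip at runtime, e.g. from an admin endpoint
    logger.Debug("visible after the change")
}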

Structured Attributes

Attributes are the heart of structured logging - they turn your logs from text into data.

Adding Attributes

There are several ways to add attributes to your log entries, each with different performance characteristics and use cases:

package main

import (
    "log/slog"
    "os"
    "time"
)

func main() {
    logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

    // Simple key-value pairs
    logger.Info("User action",
        "user_id", 12345,
        "action", "login",
        "timestamp", time.Now())

    // Using slog.Attr for type safety
    logger.Info("Order processed",
        slog.Int("order_id", 67890),
        slog.String("status", "completed"),
        slog.Float64("total", 299.99))

    // Grouping related attributes
    logger.Info("HTTP request",
        slog.Group("request",
            slog.String("method", "POST"),
            slog.String("path", "/api/users"),
            slog.Int("status", 201)),
        slog.Group("client",
            slog.String("ip", "192.168.1.1"),
            slog.String("user_agent", "Mozilla/5.0")))
}
// run

Grouping Attributes

When you have many related pieces of information, grouping them makes your logs much more readable and queryable. Think of groups like folders in a file system - they organize related data together.

Real-world Example: When logging an HTTP request, you might group request details separately from client information. This lets you query "all requests with duration > 100ms" or "all requests from specific IP addresses" more easily.

package main

import (
    "log/slog"
    "os"
    "time"
)

func main() {
    logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

    // HTTP request logging with groups
    logger.Info("Request processed",
        slog.Group("http",
            slog.String("method", "GET"),
            slog.String("path", "/api/users/123"),
            slog.Int("status", 200),
            slog.Duration("duration", 45*time.Millisecond)),
        slog.Group("client",
            slog.String("ip", "203.0.113.42"),
            slog.String("country", "US"),
            slog.String("user_agent", "curl/7.68.0")),
        slog.Group("server",
            slog.String("hostname", "api-server-1"),
            slog.String("version", "1.2.3")))
}
// run

Using LogValuer Interface

The LogValuer interface is one of slog's most powerful features. It lets your custom types control exactly how they're logged, perfect for redacting sensitive information or formatting complex data structures.

⚠️ Security Alert: Never log sensitive data like passwords, credit card numbers, or personal information. The LogValuer interface is your first line of defense against accidentally exposing sensitive data in logs.

package main

import (
    "fmt"
    "log/slog"
    "os"
)

// User implements slog.LogValuer
type User struct {
    ID       int
    Username string
    Email    string
    Password string // Sensitive!
}

// LogValue controls how User is logged
func (u User) LogValue() slog.Value {
    return slog.GroupValue(
        slog.Int("id", u.ID),
        slog.String("username", u.Username),
        slog.String("email_domain", emailDomain(u.Email)),
        // Password is NOT logged!
    )
}

func emailDomain(email string) string {
    for i := len(email) - 1; i >= 0; i-- {
        if email[i] == '@' {
            return email[i+1:]
        }
    }
    return ""
}

// AppError with context
type AppError struct {
    Code    string
    Message string
    Cause   error
}

func (e AppError) LogValue() slog.Value {
    attrs := []slog.Attr{
        slog.String("code", e.Code),
        slog.String("message", e.Message),
    }
    if e.Cause != nil {
        attrs = append(attrs, slog.String("cause", e.Cause.Error()))
    }
    return slog.GroupValue(attrs...)
}

func main() {
    logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

    user := User{
        ID:       123,
        Username: "john_doe",
        Email:    "john@example.com",
        Password: "secret123",
    }

    // User's LogValue() is called automatically
    logger.Info("User action", "user", user, "action", "login")

    // Error logging
    err := AppError{
        Code:    "DB_ERROR",
        Message: "Failed to connect",
        Cause:   fmt.Errorf("timeout"),
    }

    logger.Error("Operation failed", "error", err)
}
// run

Pre-allocation and Performance

For high-performance logging, consider pre-allocating attributes to reduce allocations:

package main

import (
    "log/slog"
    "os"
    "time"
)

func main() {
    logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

    // Pre-allocate the argument slice. Logger.With takes ...any,
    // so the slice must be []any; slog.Attr values are accepted as-is.
    attrs := make([]any, 0, 8)
    attrs = append(attrs,
        slog.String("service", "api"),
        slog.String("version", "1.0.0"),
        slog.String("environment", "production"))

    // Use With to create logger with pre-set attributes
    serviceLogger := logger.With(attrs...)

    // These logs automatically include service, version, environment
    serviceLogger.Info("Server started", "port", 8080)
    serviceLogger.Info("Processing request", "request_id", "abc-123")

    // Measure performance impact
    start := time.Now()
    for i := 0; i < 1000; i++ {
        serviceLogger.Info("Performance test", "iteration", i)
    }
    duration := time.Since(start)

    logger.Info("Performance test completed",
        "iterations", 1000,
        "duration_ms", duration.Milliseconds())
}
// run
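
When even the []any boxing of the convenience methods matters, slog's LogAttrs method accepts slog.Attr values directly and is the most allocation-conscious entry point. A brief sketch:

package main

import (
    "context"
    "log/slog"
    "os"
)

func main() {
    logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

    // LogAttrs takes strongly typed attributes, skipping the
    // variadic ...any conversion that Info and With perform.
    logger.LogAttrs(context.Background(), slog.LevelInfo, "Request handled",
        slog.String("method", "GET"),
        slog.Int("status", 200),
    )
}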

Custom Handlers

Handlers are the engine of slog - they determine where and how logs are written. Custom handlers let you route logs to multiple destinations, filter sensitive data, or format output for specific monitoring systems.

Multi-Writer Handler

Sometimes you need to send logs to multiple destinations simultaneously - maybe JSON to stdout for development, text files for local debugging, and external services for production monitoring. Custom handlers let you create sophisticated logging pipelines.

Real-world Example: A production system might log:

  • JSON to stdout (captured by container orchestrator)
  • Text to local files (for emergency debugging)
  • Filtered errors to external monitoring (Datadog, New Relic)
  • Sampled debug logs to performance analysis tools
package main

import (
    "context"
    "log/slog"
    "os"
)

// MultiHandler writes to multiple handlers
type MultiHandler struct {
    handlers []slog.Handler
}

func NewMultiHandler(handlers ...slog.Handler) *MultiHandler {
    return &MultiHandler{handlers: handlers}
}

func (h *MultiHandler) Enabled(ctx context.Context, level slog.Level) bool {
    // Enabled if any handler is enabled
    for _, handler := range h.handlers {
        if handler.Enabled(ctx, level) {
            return true
        }
    }
    return false
}

func (h *MultiHandler) Handle(ctx context.Context, record slog.Record) error {
    // Write to all handlers
    for _, handler := range h.handlers {
        if handler.Enabled(ctx, record.Level) {
            if err := handler.Handle(ctx, record); err != nil {
                return err
            }
        }
    }
    return nil
}

func (h *MultiHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
    handlers := make([]slog.Handler, len(h.handlers))
    for i, handler := range h.handlers {
        handlers[i] = handler.WithAttrs(attrs)
    }
    return &MultiHandler{handlers: handlers}
}

func (h *MultiHandler) WithGroup(name string) slog.Handler {
    handlers := make([]slog.Handler, len(h.handlers))
    for i, handler := range h.handlers {
        handlers[i] = handler.WithGroup(name)
    }
    return &MultiHandler{handlers: handlers}
}

func main() {
    // Write to stdout and file
    file, err := os.Create("app.log")
    if err != nil {
        panic(err)
    }
    defer file.Close()

    multiHandler := NewMultiHandler(
        slog.NewJSONHandler(os.Stdout, nil),
        slog.NewTextHandler(file, nil),
    )

    logger := slog.New(multiHandler)

    logger.Info("This goes to both outputs",
        "service", "api",
        "version", "1.0.0")

    logger.Warn("Warning message", "component", "auth")
}
// run

Filtering Handler

In high-volume applications, you often need to filter logs to reduce noise and costs. A filtering handler can drop logs from noisy components, only keep errors above certain thresholds, or implement complex routing logic.

Common Filtering Scenarios:

  • Drop health check logs (they're just noise)
  • Only keep database queries slower than 100ms
  • Filter out sensitive information before logging
  • Route different log types to different destinations
  • Implement sampling for high-frequency debug logs
package main

import (
    "context"
    "log/slog"
    "os"
)

// FilterHandler filters logs based on attributes
type FilterHandler struct {
    handler    slog.Handler
    filterFunc func([]slog.Attr) bool
    attrs      []slog.Attr
}

func NewFilterHandler(handler slog.Handler, filterFunc func([]slog.Attr) bool) *FilterHandler {
    return &FilterHandler{
        handler:    handler,
        filterFunc: filterFunc,
        attrs:      []slog.Attr{},
    }
}

func (h *FilterHandler) Enabled(ctx context.Context, level slog.Level) bool {
    return h.handler.Enabled(ctx, level)
}

func (h *FilterHandler) Handle(ctx context.Context, record slog.Record) error {
    // Collect all attributes
    attrs := make([]slog.Attr, len(h.attrs))
    copy(attrs, h.attrs)

    record.Attrs(func(a slog.Attr) bool {
        attrs = append(attrs, a)
        return true
    })

    // Apply filter
    if !h.filterFunc(attrs) {
        return nil // Filtered out
    }

    return h.handler.Handle(ctx, record)
}

func (h *FilterHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
    newAttrs := make([]slog.Attr, len(h.attrs)+len(attrs))
    copy(newAttrs, h.attrs)
    copy(newAttrs[len(h.attrs):], attrs)

    return &FilterHandler{
        // Forward the attributes to the wrapped handler so they still
        // appear in the output; keep a copy for filtering decisions.
        handler:    h.handler.WithAttrs(attrs),
        filterFunc: h.filterFunc,
        attrs:      newAttrs,
    }
}

func (h *FilterHandler) WithGroup(name string) slog.Handler {
    return &FilterHandler{
        handler:    h.handler.WithGroup(name),
        filterFunc: h.filterFunc,
        attrs:      h.attrs,
    }
}

func main() {
    // Filter out logs from "noisy" component
    filterFunc := func(attrs []slog.Attr) bool {
        for _, attr := range attrs {
            if attr.Key == "component" && attr.Value.String() == "noisy" {
                return false
            }
        }
        return true
    }

    handler := NewFilterHandler(
        slog.NewJSONHandler(os.Stdout, nil),
        filterFunc,
    )

    logger := slog.New(handler)

    logger.Info("This will be logged", "component", "api")
    logger.Info("This will be filtered out", "component", "noisy")
    logger.Info("This will also be logged", "component", "db")
}
// run

Level-Based Routing Handler

Route different log levels to different destinations - a common pattern in production environments:

package main

import (
    "context"
    "log/slog"
    "os"
)

// RoutingHandler routes logs based on level
type RoutingHandler struct {
    defaultHandler slog.Handler
    errorHandler   slog.Handler
    minErrorLevel  slog.Level
}

func NewRoutingHandler(defaultHandler, errorHandler slog.Handler) *RoutingHandler {
    return &RoutingHandler{
        defaultHandler: defaultHandler,
        errorHandler:   errorHandler,
        minErrorLevel:  slog.LevelError,
    }
}

func (h *RoutingHandler) Enabled(ctx context.Context, level slog.Level) bool {
    if level >= h.minErrorLevel {
        return h.errorHandler.Enabled(ctx, level)
    }
    return h.defaultHandler.Enabled(ctx, level)
}

func (h *RoutingHandler) Handle(ctx context.Context, record slog.Record) error {
    if record.Level >= h.minErrorLevel {
        return h.errorHandler.Handle(ctx, record)
    }
    return h.defaultHandler.Handle(ctx, record)
}

func (h *RoutingHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
    return &RoutingHandler{
        defaultHandler: h.defaultHandler.WithAttrs(attrs),
        errorHandler:   h.errorHandler.WithAttrs(attrs),
        minErrorLevel:  h.minErrorLevel,
    }
}

func (h *RoutingHandler) WithGroup(name string) slog.Handler {
    return &RoutingHandler{
        defaultHandler: h.defaultHandler.WithGroup(name),
        errorHandler:   h.errorHandler.WithGroup(name),
        minErrorLevel:  h.minErrorLevel,
    }
}

func main() {
    // Normal logs to stdout, errors to stderr
    handler := NewRoutingHandler(
        slog.NewTextHandler(os.Stdout, nil),
        slog.NewJSONHandler(os.Stderr, nil),
    )

    logger := slog.New(handler)

    logger.Info("Normal log to stdout")
    logger.Warn("Warning to stdout")
    logger.Error("Error to stderr", "error", "something failed")
}
// run

Context Integration

In modern applications, especially web services and microservices, you need to carry request-specific information through your entire call chain. Context makes this possible, and when you combine it with structured logging, you get automatic correlation across all your services.

Passing Logger in Context

Real-world Pattern: Imagine a user request that flows through API gateway → auth service → database service → notification service. With context-based logging, every log entry automatically includes the request ID, user ID, and trace ID - no manual parameter passing required!

package main

import (
    "context"
    "log/slog"
    "os"
)

type contextKey string

const loggerKey = contextKey("logger")

// WithLogger adds logger to context
func WithLogger(ctx context.Context, logger *slog.Logger) context.Context {
    return context.WithValue(ctx, loggerKey, logger)
}

// FromContext retrieves logger from context
func FromContext(ctx context.Context) *slog.Logger {
    if logger, ok := ctx.Value(loggerKey).(*slog.Logger); ok {
        return logger
    }
    return slog.Default()
}

func processRequest(ctx context.Context, userID int) {
    logger := FromContext(ctx)

    logger.Info("Processing request", "user_id", userID)

    if userID == 0 {
        logger.Error("Invalid user ID", "user_id", userID)
        return
    }

    logger.Info("Request processed successfully", "user_id", userID)
}

func main() {
    // Create base logger
    logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

    // Add request-specific attributes
    requestLogger := logger.With(
        "request_id", "abc-123",
        "session_id", "xyz-789",
    )

    ctx := WithLogger(context.Background(), requestLogger)

    // All logs in this context will have request_id and session_id
    processRequest(ctx, 42)
    processRequest(ctx, 0)
}
// run

Context-Aware Logging Methods

slog's ...Context methods (InfoContext, ErrorContext, and friends) pass the context through to the handler, so request-scoped values can travel with every log call:

package main

import (
    "context"
    "log/slog"
    "os"
    "time"
)

func processWithContext(ctx context.Context, orderID int) {
    logger := slog.Default()

    logger.InfoContext(ctx, "Starting order processing", "order_id", orderID)

    // Simulate work
    time.Sleep(50 * time.Millisecond)

    if orderID%2 == 0 {
        logger.ErrorContext(ctx, "Order processing failed",
            "order_id", orderID,
            "error", "validation failed")
        return
    }

    logger.InfoContext(ctx, "Order processed successfully", "order_id", orderID)
}

func main() {
    logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
    slog.SetDefault(logger)

    ctx := context.Background()

    // Process multiple orders
    for i := 1; i <= 3; i++ {
        processWithContext(ctx, i)
    }
}
// run
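
Note that the ...Context variants only hand the context to the handler; nothing from it is logged automatically. To surface a context value, a handler has to read it in Handle, as in this sketch (the request_id key is hypothetical):

package main

import (
    "context"
    "log/slog"
    "os"
)

type ctxKey string

const requestIDKey = ctxKey("request_id") // hypothetical key, for illustration

// ContextHandler copies a request ID from the context onto every record.
// Enabled, WithAttrs, and WithGroup are promoted from the embedded handler,
// so With-derived loggers drop the wrapper; fine for a sketch.
type ContextHandler struct {
    slog.Handler
}

func (h ContextHandler) Handle(ctx context.Context, r slog.Record) error {
    if id, ok := ctx.Value(requestIDKey).(string); ok {
        r.AddAttrs(slog.String("request_id", id))
    }
    return h.Handler.Handle(ctx, r)
}

func main() {
    logger := slog.New(ContextHandler{slog.NewJSONHandler(os.Stdout, nil)})

    ctx := context.WithValue(context.Background(), requestIDKey, "abc-123")
    logger.InfoContext(ctx, "order processed") // the record gains request_id
}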

Sampling and Rate Limiting

In high-traffic applications, logging everything can be expensive and overwhelming. Smart sampling lets you keep a representative sample of logs while reducing volume. Think of it like polling - you don't need to ask every single person to get accurate results.

Sampling Handler

💡 Key Takeaway: Good sampling preserves important events (errors, warnings) while reducing noise from high-frequency, low-value events (debug logs, health checks). The goal is to keep the signal while reducing the noise.

package main

import (
    "context"
    "log/slog"
    "os"
    "sync/atomic"
)

// SamplingHandler samples logs at a fixed rate
type SamplingHandler struct {
    handler slog.Handler
    rate    int64
    counter atomic.Int64
}

func NewSamplingHandler(handler slog.Handler, rate int) *SamplingHandler {
    return &SamplingHandler{
        handler: handler,
        rate:    int64(rate),
    }
}

func (h *SamplingHandler) Enabled(ctx context.Context, level slog.Level) bool {
    return h.handler.Enabled(ctx, level)
}

func (h *SamplingHandler) Handle(ctx context.Context, record slog.Record) error {
    // Always log errors and warnings
    if record.Level >= slog.LevelWarn {
        return h.handler.Handle(ctx, record)
    }

    // Sample other logs
    count := h.counter.Add(1)
    if count%h.rate != 0 {
        return nil // Skip this log
    }

    return h.handler.Handle(ctx, record)
}

func (h *SamplingHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
    return &SamplingHandler{
        handler: h.handler.WithAttrs(attrs),
        rate:    h.rate,
    }
}

func (h *SamplingHandler) WithGroup(name string) slog.Handler {
    return &SamplingHandler{
        handler: h.handler.WithGroup(name),
        rate:    h.rate,
    }
}

func main() {
    // Keep every 5th log
    handler := NewSamplingHandler(
        slog.NewJSONHandler(os.Stdout, nil),
        5,
    )

    logger := slog.New(handler)

    // Generate 20 logs, only ~4 info logs should appear
    // But all errors/warnings will appear
    for i := 1; i <= 20; i++ {
        logger.Info("Log message", "number", i)
    }

    // These will always appear
    logger.Warn("Warning message")
    logger.Error("Error message")
}
// run

Advanced Sampling with Burst Protection

Implement more sophisticated sampling that allows bursts but maintains overall rate limits:

package main

import (
    "context"
    "log/slog"
    "os"
    "sync"
    "time"
)

// BurstSamplingHandler allows bursts but limits overall rate
type BurstSamplingHandler struct {
    handler      slog.Handler
    maxPerSecond int
    burstSize    int
    mu           sync.Mutex
    tokens       int
    lastRefill   time.Time
}

func NewBurstSamplingHandler(handler slog.Handler, maxPerSecond, burstSize int) *BurstSamplingHandler {
    return &BurstSamplingHandler{
        handler:      handler,
        maxPerSecond: maxPerSecond,
        burstSize:    burstSize,
        tokens:       burstSize,
        lastRefill:   time.Now(),
    }
}

func (h *BurstSamplingHandler) Enabled(ctx context.Context, level slog.Level) bool {
    return h.handler.Enabled(ctx, level)
}

func (h *BurstSamplingHandler) Handle(ctx context.Context, record slog.Record) error {
    // Always log errors
    if record.Level >= slog.LevelError {
        return h.handler.Handle(ctx, record)
    }

    h.mu.Lock()
    defer h.mu.Unlock()

    // Refill tokens based on time elapsed
    now := time.Now()
    elapsed := now.Sub(h.lastRefill)
    tokensToAdd := int(elapsed.Seconds() * float64(h.maxPerSecond))

    if tokensToAdd > 0 {
        h.tokens += tokensToAdd
        if h.tokens > h.burstSize {
            h.tokens = h.burstSize
        }
        h.lastRefill = now
    }

    // Check if we have tokens
    if h.tokens <= 0 {
        return nil // Rate limited
    }

    h.tokens--
    return h.handler.Handle(ctx, record)
}

func (h *BurstSamplingHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
    return &BurstSamplingHandler{
        handler:      h.handler.WithAttrs(attrs),
        maxPerSecond: h.maxPerSecond,
        burstSize:    h.burstSize,
        tokens:       h.tokens,
        lastRefill:   h.lastRefill,
    }
}

func (h *BurstSamplingHandler) WithGroup(name string) slog.Handler {
    return &BurstSamplingHandler{
        handler:      h.handler.WithGroup(name),
        maxPerSecond: h.maxPerSecond,
        burstSize:    h.burstSize,
        tokens:       h.tokens,
        lastRefill:   h.lastRefill,
    }
}

func main() {
    // Allow bursts of 10, but max 5 per second sustained
    handler := NewBurstSamplingHandler(
        slog.NewTextHandler(os.Stdout, nil),
        5,  // max per second
        10, // burst size
    )

    logger := slog.New(handler)

    // First 10 will all log (burst)
    for i := 1; i <= 15; i++ {
        logger.Info("Burst message", "number", i)
    }

    // Errors always log
    logger.Error("Critical error")
}
// run

Correlation IDs

Correlation IDs are the secret sauce that makes distributed systems debuggable. They're like putting a unique tracking number on every package - you can follow it through every step of its journey, even as it passes through different services and systems.

Request Tracing

Real-world Scenario: When a user reports "my request failed," you can search your logs for their correlation ID and see exactly what happened at every step - authentication, database queries, external API calls, caching, everything. Without correlation IDs, you're just guessing which logs belong to which request.

package main

import (
    "context"
    "crypto/rand"
    "encoding/hex"
    "log/slog"
    "os"
    "time"
)

type correlationKey string

const correlationIDKey = correlationKey("correlation_id")

func generateCorrelationID() string {
    b := make([]byte, 16)
    rand.Read(b)
    return hex.EncodeToString(b)
}

func WithCorrelationID(ctx context.Context, id string) context.Context {
    return context.WithValue(ctx, correlationIDKey, id)
}

func GetCorrelationID(ctx context.Context) string {
    if id, ok := ctx.Value(correlationIDKey).(string); ok {
        return id
    }
    return ""
}

// NewCorrelationLogger creates logger with correlation ID
func NewCorrelationLogger(logger *slog.Logger, ctx context.Context) *slog.Logger {
    if id := GetCorrelationID(ctx); id != "" {
        return logger.With("correlation_id", id)
    }
    return logger
}

func serviceA(ctx context.Context) {
    logger := NewCorrelationLogger(slog.Default(), ctx)
    logger.Info("Service A processing")
    time.Sleep(10 * time.Millisecond)

    serviceB(ctx)
}

func serviceB(ctx context.Context) {
    logger := NewCorrelationLogger(slog.Default(), ctx)
    logger.Info("Service B processing")
    time.Sleep(10 * time.Millisecond)

    serviceC(ctx)
}

func serviceC(ctx context.Context) {
    logger := NewCorrelationLogger(slog.Default(), ctx)
    logger.Info("Service C processing")
    time.Sleep(10 * time.Millisecond)
}

func main() {
    // Setup logger
    logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
    slog.SetDefault(logger)

    // Simulate two independent requests
    for i := 1; i <= 2; i++ {
        correlationID := generateCorrelationID()
        ctx := WithCorrelationID(context.Background(), correlationID)

        reqLogger := NewCorrelationLogger(logger, ctx)
        reqLogger.Info("Request started", "request_num", i)

        serviceA(ctx)

        reqLogger.Info("Request completed", "request_num", i)

        time.Sleep(100 * time.Millisecond)
    }
}
// run

Trace ID and Span ID

For distributed tracing systems like OpenTelemetry, you often need both trace IDs (for the entire request) and span IDs (for individual operations):

package main

import (
    "context"
    "crypto/rand"
    "encoding/hex"
    "log/slog"
    "os"
)

type traceKey string

const (
    traceIDKey = traceKey("trace_id")
    spanIDKey  = traceKey("span_id")
)

func generateID(length int) string {
    b := make([]byte, length)
    rand.Read(b)
    return hex.EncodeToString(b)
}

func WithTraceContext(ctx context.Context) context.Context {
    ctx = context.WithValue(ctx, traceIDKey, generateID(16))
    ctx = context.WithValue(ctx, spanIDKey, generateID(8))
    return ctx
}

func WithNewSpan(ctx context.Context) context.Context {
    return context.WithValue(ctx, spanIDKey, generateID(8))
}

func LoggerFromContext(ctx context.Context) *slog.Logger {
    logger := slog.Default()

    if traceID, ok := ctx.Value(traceIDKey).(string); ok {
        logger = logger.With("trace_id", traceID)
    }

    if spanID, ok := ctx.Value(spanIDKey).(string); ok {
        logger = logger.With("span_id", spanID)
    }

    return logger
}

func databaseQuery(ctx context.Context) {
    ctx = WithNewSpan(ctx) // New span for this operation
    logger := LoggerFromContext(ctx)

    logger.Info("Database query started", "query", "SELECT * FROM users")
    logger.Info("Database query completed", "rows", 42)
}

func cacheCheck(ctx context.Context) {
    ctx = WithNewSpan(ctx) // New span for this operation
    logger := LoggerFromContext(ctx)

    logger.Info("Cache check started", "key", "user:123")
    logger.Info("Cache hit", "value", "cached_data")
}

func main() {
    logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
    slog.SetDefault(logger)

    // Create request with trace context
    ctx := WithTraceContext(context.Background())

    reqLogger := LoggerFromContext(ctx)
    reqLogger.Info("Request started")

    // Each operation gets its own span
    cacheCheck(ctx)
    databaseQuery(ctx)

    reqLogger.Info("Request completed")
}
// run

Production Patterns

Let's bring everything together into production-ready logging patterns that you can use in real applications.

Complete Logging System

The following system combines the techniques above - the kind of setup you'd use in real applications where you need to balance performance, cost, and debugging capability.

⚠️ Important: In production, logging isn't free. Every log entry consumes CPU, memory, network bandwidth, and storage. A well-designed logging system provides maximum insight with minimum overhead.

package main

import (
    "context"
    "fmt"
    "log/slog"
    "os"
    "runtime"
    "time"
)

// LogConfig configures the logging system
type LogConfig struct {
    Level      slog.Level
    Format     string // "json" or "text"
    AddSource  bool
    TimeFormat string
}

// NewLogger creates a production logger
func NewLogger(config LogConfig) *slog.Logger {
    var handler slog.Handler

    opts := &slog.HandlerOptions{
        Level:     config.Level,
        AddSource: config.AddSource,
        ReplaceAttr: func(groups []string, a slog.Attr) slog.Attr {
            // Customize time format
            if a.Key == slog.TimeKey && config.TimeFormat != "" {
                if t, ok := a.Value.Any().(time.Time); ok {
                    a.Value = slog.StringValue(t.Format(config.TimeFormat))
                }
            }
            return a
        },
    }

    if config.Format == "json" {
        handler = slog.NewJSONHandler(os.Stdout, opts)
    } else {
        handler = slog.NewTextHandler(os.Stdout, opts)
    }

    return slog.New(handler)
}

// HTTPLog represents structured HTTP log data
type HTTPLog struct {
    Method     string
    Path       string
    Status     int
    Duration   time.Duration
    RemoteAddr string
    UserAgent  string
}

func (h HTTPLog) LogValue() slog.Value {
    return slog.GroupValue(
        slog.String("method", h.Method),
        slog.String("path", h.Path),
        slog.Int("status", h.Status),
        slog.Duration("duration", h.Duration),
        slog.String("remote_addr", h.RemoteAddr),
        slog.String("user_agent", h.UserAgent),
    )
}

func logWithMetrics(ctx context.Context, logger *slog.Logger, operation string, fn func() error) error {
    start := time.Now()

    var m1, m2 runtime.MemStats
    runtime.ReadMemStats(&m1)

    err := fn()

    runtime.ReadMemStats(&m2)
    duration := time.Since(start)

    logger.InfoContext(ctx, "Operation completed",
        "operation", operation,
        "duration_ms", duration.Milliseconds(),
        // Signed arithmetic: the delta can be negative if GC ran mid-operation
        "memory_alloc_kb", (int64(m2.Alloc)-int64(m1.Alloc))/1024,
        "success", err == nil)

    return err
}

func main() {
    // Configure logger
    config := LogConfig{
        Level:      slog.LevelDebug,
        Format:     "json",
        AddSource:  true,
        TimeFormat: time.RFC3339,
    }

    logger := NewLogger(config)
    ctx := context.Background()

    // Structured HTTP logging
    httpLog := HTTPLog{
        Method:     "GET",
        Path:       "/api/users/123",
        Status:     200,
        Duration:   45 * time.Millisecond,
        RemoteAddr: "192.168.1.1",
        UserAgent:  "Mozilla/5.0",
    }

    logger.Info("HTTP request", "request", httpLog)

    // Log with metrics
    err := logWithMetrics(ctx, logger, "database_query", func() error {
        // Simulate work
        time.Sleep(100 * time.Millisecond)
        data := make([]byte, 1024*1024) // Allocate 1MB
        _ = data
        return nil
    })

    if err != nil {
        logger.Error("Failed", "error", err)
    }

    // Error with stack trace; runtime.Stack fills a caller-supplied buffer
    stack := make([]byte, 4096)
    n := runtime.Stack(stack, false)
    logger.Error("Critical error occurred",
        "error", fmt.Errorf("database connection failed"),
        "stack", string(stack[:n]),
    )
}
// run

Application-Wide Logger Setup

Here's a complete pattern for setting up logging across an entire application:

package main

import (
    "log/slog"
    "os"
)

// Application holds application-wide dependencies
type Application struct {
    logger *slog.Logger
}

func NewApplication() *Application {
    // Determine environment
    env := os.Getenv("ENVIRONMENT")
    if env == "" {
        env = "development"
    }

    // Configure based on environment
    var handler slog.Handler
    if env == "production" {
        // JSON for production
        opts := &slog.HandlerOptions{
            Level: slog.LevelInfo,
        }
        handler = slog.NewJSONHandler(os.Stdout, opts)
    } else {
        // Text for development
        opts := &slog.HandlerOptions{
            Level:     slog.LevelDebug,
            AddSource: true,
        }
        handler = slog.NewTextHandler(os.Stdout, opts)
    }

    // Add application metadata
    logger := slog.New(handler).With(
        "environment", env,
        "version", "1.0.0",
    )

    // Set as default
    slog.SetDefault(logger)

    return &Application{
        logger: logger,
    }
}

func (app *Application) Run() {
    app.logger.Info("Application starting")

    // Create component-specific loggers
    apiLogger := app.logger.With("component", "api")
    dbLogger := app.logger.With("component", "database")

    apiLogger.Info("API server started", "port", 8080)
    dbLogger.Info("Database connected", "host", "localhost")

    app.logger.Info("Application ready")
}

func main() {
    app := NewApplication()
    app.Run()
}
// run

Practice Exercises

Exercise 1: Custom JSON Handler

Difficulty: Intermediate | Time: 30-45 minutes

Learning Objectives:

  • Master custom slog handler implementation
  • Understand JSON log formatting and metadata enrichment
  • Learn production-ready logging patterns with hostname and version tracking

Real-World Context: In production microservices, you need logs that include application metadata like hostname, version, and container IDs for proper log aggregation and debugging in distributed systems.

Create a custom slog handler that outputs logs in a custom JSON format with additional metadata. The handler should automatically enrich log entries with system information, application details, and structured formatting optimized for log aggregation tools like ELK stack or Splunk.

Solution with Explanation
package main

import (
	"context"
	"encoding/json"
	"log/slog"
	"os"
	"runtime"
	"strconv"
	"strings"
	"time"
)

// CustomJSONHandler outputs logs with additional metadata
type CustomJSONHandler struct {
	opts       *slog.HandlerOptions
	attrs      []slog.Attr
	groups     []string
	hostname   string
	appName    string
	appVersion string
}

func NewCustomJSONHandler(appName, appVersion string, opts *slog.HandlerOptions) *CustomJSONHandler {
	hostname, _ := os.Hostname()
	return &CustomJSONHandler{
		opts:       opts,
		attrs:      []slog.Attr{},
		groups:     []string{},
		hostname:   hostname,
		appName:    appName,
		appVersion: appVersion,
	}
}

func (h *CustomJSONHandler) Enabled(_ context.Context, level slog.Level) bool {
	minLevel := slog.LevelInfo
	if h.opts != nil && h.opts.Level != nil {
		minLevel = h.opts.Level.Level()
	}
	return level >= minLevel
}

func (h *CustomJSONHandler) Handle(ctx context.Context, record slog.Record) error {
	// Build JSON structure
	output := map[string]interface{}{
		"timestamp": record.Time.Format(time.RFC3339Nano),
		"level":     record.Level.String(),
		"message":   record.Message,
		"hostname":  h.hostname,
		"app":       h.appName,
		"version":   h.appVersion,
		"thread_id": getGoroutineID(),
	}

	// Add source location if enabled
	if h.opts != nil && h.opts.AddSource {
		fs := runtime.CallersFrames([]uintptr{record.PC})
		f, _ := fs.Next()
		output["source"] = map[string]interface{}{
			"file":     f.File,
			"line":     f.Line,
			"function": f.Function,
		}
	}

	// Add handler-level attributes
	for _, attr := range h.attrs {
		output[attr.Key] = attr.Value.Any()
	}

	// Add record attributes
	attrs := make(map[string]interface{})
	record.Attrs(func(a slog.Attr) bool {
		attrs[a.Key] = a.Value.Any()
		return true
	})
	if len(attrs) > 0 {
		output["attributes"] = attrs
	}

	// Add groups if any
	if len(h.groups) > 0 {
		output["groups"] = h.groups
	}

	// Encode and write
	encoder := json.NewEncoder(os.Stdout)
	encoder.SetIndent("", "  ")
	return encoder.Encode(output)
}

func (h *CustomJSONHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
	newHandler := *h
	// Copy so derived handlers never share a backing array with the parent
	newHandler.attrs = append(append([]slog.Attr{}, h.attrs...), attrs...)
	return &newHandler
}

func (h *CustomJSONHandler) WithGroup(name string) slog.Handler {
	newHandler := *h
	newHandler.groups = append(append([]string{}, h.groups...), name)
	return &newHandler
}

// getGoroutineID parses the goroutine ID from the first stack-trace line,
// which has the form "goroutine <id> [running]:".
func getGoroutineID() int {
	var buf [64]byte
	n := runtime.Stack(buf[:], false)
	fields := strings.Fields(string(buf[:n]))
	if len(fields) >= 2 {
		if id, err := strconv.Atoi(fields[1]); err == nil {
			return id
		}
	}
	return 0
}

func main() {
	// Create custom handler
	handler := NewCustomJSONHandler("my-app", "1.0.0", &slog.HandlerOptions{
		Level:     slog.LevelDebug,
		AddSource: true,
	})

	logger := slog.New(handler)

	// Test logging
	logger.Debug("Debug message", "debug_info", "some details")
	logger.Info("Application started",
		"port", 8080,
		"environment", "production")
	logger.Warn("High memory usage detected",
		"usage_percent", 85.5,
		"threshold", 80.0)
	logger.Error("Database connection failed",
		"error", "connection timeout",
		"retry_count", 3)

	// With additional context
	reqLogger := logger.With(
		"request_id", "abc-123",
		"user_id", 12345,
	)
	reqLogger.Info("Processing request")
}
// run

Explanation:

This custom handler demonstrates:

  • Metadata Enrichment: Automatically adds hostname, app name/version, and goroutine info
  • Custom JSON Structure: Creates a structured JSON format optimized for log aggregation
  • Source Location: Includes file, line, and function information
  • Attribute Handling: Properly collects and groups attributes
  • Handler Chaining: Implements WithAttrs and WithGroup for composability

Exercise 2: Log Level Filter

Difficulty: Intermediate | Time: 25-35 minutes

Learning Objectives:

  • Implement dynamic log level management at runtime
  • Master context-aware logging with module-specific configurations
  • Understand thread-safe logging configuration patterns

Real-World Context: Production applications often need to adjust logging levels without restarting - for example, increasing debug logging for a specific component during troubleshooting while keeping other components at normal levels to avoid log spam.

Implement a log filter that dynamically changes log levels based on context or conditions. Your filter should support global level changes as well as module-specific overrides, allowing fine-grained control over logging verbosity in different parts of your application. This is essential for production debugging where you need to increase logging for specific components without restarting the entire application.

Solution with Explanation
package main

import (
	"context"
	"fmt"
	"log/slog"
	"os"
	"sync"
)

// DynamicLevelHandler allows runtime log level changes. Level state lives
// in a shared levelState so handlers derived via WithAttrs/WithGroup see
// later level changes and share a single lock.
type DynamicLevelHandler struct {
	handler slog.Handler
	state   *levelState
}

type levelState struct {
	mu           sync.RWMutex
	globalLevel  slog.Level
	moduleLevels map[string]slog.Level
}

type contextKey string

const moduleKey = contextKey("module")

func NewDynamicLevelHandler(handler slog.Handler, defaultLevel slog.Level) *DynamicLevelHandler {
	return &DynamicLevelHandler{
		handler: handler,
		state: &levelState{
			globalLevel:  defaultLevel,
			moduleLevels: make(map[string]slog.Level),
		},
	}
}

func (h *DynamicLevelHandler) Enabled(ctx context.Context, level slog.Level) bool {
	h.state.mu.RLock()
	defer h.state.mu.RUnlock()

	// Check for module-specific level
	if module, ok := ctx.Value(moduleKey).(string); ok {
		if moduleLevel, exists := h.state.moduleLevels[module]; exists {
			return level >= moduleLevel
		}
	}

	// Fall back to global level
	return level >= h.state.globalLevel
}

func (h *DynamicLevelHandler) Handle(ctx context.Context, record slog.Record) error {
	if !h.Enabled(ctx, record.Level) {
		return nil
	}
	return h.handler.Handle(ctx, record)
}

func (h *DynamicLevelHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
	return &DynamicLevelHandler{
		handler: h.handler.WithAttrs(attrs),
		state:   h.state, // share level state with the parent handler
	}
}

func (h *DynamicLevelHandler) WithGroup(name string) slog.Handler {
	return &DynamicLevelHandler{
		handler: h.handler.WithGroup(name),
		state:   h.state,
	}
}

// SetGlobalLevel changes the global log level
func (h *DynamicLevelHandler) SetGlobalLevel(level slog.Level) {
	h.state.mu.Lock()
	defer h.state.mu.Unlock()
	h.state.globalLevel = level
	fmt.Printf("[Config] Global log level set to %s\n", level)
}

// SetModuleLevel sets log level for a specific module
func (h *DynamicLevelHandler) SetModuleLevel(module string, level slog.Level) {
	h.state.mu.Lock()
	defer h.state.mu.Unlock()
	h.state.moduleLevels[module] = level
	fmt.Printf("[Config] Module '%s' log level set to %s\n", module, level)
}

// WithModule adds module context
func WithModule(ctx context.Context, module string) context.Context {
	return context.WithValue(ctx, moduleKey, module)
}

func main() {
	// Create dynamic handler
	baseHandler := slog.NewTextHandler(os.Stdout, nil)
	handler := NewDynamicLevelHandler(baseHandler, slog.LevelInfo)
	logger := slog.New(handler)

	// Test with global level
	fmt.Println("=== Testing with Info level ===")
	logger.Debug("This won't show")
	logger.Info("This will show")
	logger.Warn("This will show")

	// Change global level
	fmt.Println("\n=== Changing to Debug level ===")
	handler.SetGlobalLevel(slog.LevelDebug)
	logger.Debug("Now this will show")
	logger.Info("This will show")

	// Module-specific levels
	fmt.Println("\n=== Module-specific levels ===")
	handler.SetGlobalLevel(slog.LevelWarn)
	handler.SetModuleLevel("database", slog.LevelDebug)
	handler.SetModuleLevel("auth", slog.LevelInfo)

	// Test with different modules
	ctx := context.Background()

	dbCtx := WithModule(ctx, "database")
	logger.DebugContext(dbCtx, "Database query executed", "module", "database")
	logger.InfoContext(dbCtx, "Connection pool status", "module", "database")

	authCtx := WithModule(ctx, "auth")
	logger.DebugContext(authCtx, "This won't show", "module", "auth")
	logger.InfoContext(authCtx, "User authenticated", "module", "auth")

	apiCtx := WithModule(ctx, "api")
	logger.InfoContext(apiCtx, "This won't show", "module", "api")
	logger.WarnContext(apiCtx, "Rate limit approaching", "module", "api")
}
// run

Explanation:

This dynamic filter demonstrates:

  • Runtime Level Changes: Adjust logging levels without restarting
  • Module-Specific Levels: Different log levels for different parts of your application
  • Context-Based Filtering: Uses context to determine which module is logging
  • Thread-Safe: Proper mutex protection for concurrent level changes
  • Production Pattern: Common in production where you want to increase logging for specific components during debugging

Exercise 3: Request Logger Middleware

Difficulty: Intermediate | Time: 20-30 minutes

Learning Objectives:

  • Build production-ready HTTP logging middleware
  • Implement request correlation and tracing
  • Master structured HTTP request/response logging patterns

Real-World Context: Every production web service needs comprehensive request logging for monitoring, debugging, and audit trails. Correlation IDs are crucial for tracing requests across microservices, making it possible to follow a single user request through multiple service calls.

Create HTTP middleware that logs requests with structured information and correlation IDs. The middleware should capture request details, response information, and generate unique request IDs for tracing. This middleware is essential for production web services where you need to monitor performance, debug issues, and maintain audit trails of all HTTP traffic.

Solution with Explanation
package main

import (
	"context"
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"log/slog"
	"net/http"
	"os"
	"time"
)

type contextKey string

const requestIDKey = contextKey("request_id")

// RequestLoggerMiddleware logs HTTP requests with detailed information
type RequestLoggerMiddleware struct {
	logger *slog.Logger
}

func NewRequestLoggerMiddleware(logger *slog.Logger) *RequestLoggerMiddleware {
	return &RequestLoggerMiddleware{logger: logger}
}

// responseWriter captures response status and size
type responseWriter struct {
	http.ResponseWriter
	status int
	size   int
}

func (rw *responseWriter) WriteHeader(status int) {
	rw.status = status
	rw.ResponseWriter.WriteHeader(status)
}

func (rw *responseWriter) Write(b []byte) (int, error) {
	size, err := rw.ResponseWriter.Write(b)
	rw.size += size
	return size, err
}

func (m *RequestLoggerMiddleware) Middleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()

		// Generate request ID
		requestID := generateRequestID()
		ctx := context.WithValue(r.Context(), requestIDKey, requestID)
		r = r.WithContext(ctx)

		// Create request-scoped logger
		requestLogger := m.logger.With(
			"request_id", requestID,
			"method", r.Method,
			"path", r.URL.Path,
			"remote_addr", r.RemoteAddr,
			"user_agent", r.UserAgent(),
		)

		// Log request start
		requestLogger.Info("Request started")

		// Wrap response writer
		rw := &responseWriter{ResponseWriter: w, status: http.StatusOK}

		// Handle request
		defer func() {
			duration := time.Since(start)

			// Determine log level based on status code
			level := slog.LevelInfo
			if rw.status >= 500 {
				level = slog.LevelError
			} else if rw.status >= 400 {
				level = slog.LevelWarn
			}

			requestLogger.Log(r.Context(), level, "Request completed",
				"status", rw.status,
				"duration_ms", duration.Milliseconds(),
				"response_size", rw.size,
			)
		}()

		next.ServeHTTP(rw, r)
	})
}

func generateRequestID() string {
	b := make([]byte, 16)
	rand.Read(b)
	return hex.EncodeToString(b)
}

// Example handlers
func helloHandler(w http.ResponseWriter, r *http.Request) {
	requestID := r.Context().Value(requestIDKey).(string)
	fmt.Fprintf(w, "Hello! Request ID: %s\n", requestID)
}

func slowHandler(w http.ResponseWriter, r *http.Request) {
	time.Sleep(100 * time.Millisecond)
	w.Write([]byte("Slow response\n"))
}

func errorHandler(w http.ResponseWriter, r *http.Request) {
	http.Error(w, "Internal Server Error", http.StatusInternalServerError)
}

func main() {
	// Create logger
	logger := slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
		Level: slog.LevelDebug,
	}))

	// Create middleware
	middleware := NewRequestLoggerMiddleware(logger)

	// Setup routes
	mux := http.NewServeMux()
	mux.HandleFunc("/hello", helloHandler)
	mux.HandleFunc("/slow", slowHandler)
	mux.HandleFunc("/error", errorHandler)

	// Wrap with logging middleware
	handler := middleware.Middleware(mux)

	// Simulate requests for demo
	fmt.Print("=== Simulating HTTP Requests ===\n\n")

	simulateRequest(handler, "GET", "/hello")
	simulateRequest(handler, "GET", "/slow")
	simulateRequest(handler, "GET", "/error")
	simulateRequest(handler, "POST", "/hello")
}

func simulateRequest(handler http.Handler, method, path string) {
	req, _ := http.NewRequest(method, path, nil)
	req.RemoteAddr = "192.168.1.100:54321"
	req.Header.Set("User-Agent", "TestClient/1.0")

	rw := &testResponseWriter{header: make(http.Header)}
	handler.ServeHTTP(rw, req)

	time.Sleep(50 * time.Millisecond)
}

type testResponseWriter struct {
	header http.Header
	body   []byte
	status int
}

func (w *testResponseWriter) Header() http.Header {
	return w.header
}

func (w *testResponseWriter) Write(b []byte) (int, error) {
	w.body = append(w.body, b...)
	return len(b), nil
}

func (w *testResponseWriter) WriteHeader(status int) {
	w.status = status
}
// run

Explanation:

This middleware demonstrates:

  • Request Tracing: Generates unique request IDs for correlation
  • Response Capture: Wraps ResponseWriter to capture status and size
  • Structured Logging: Records method, path, headers, and performance metrics
  • Context Propagation: Passes the request ID through context for downstream logging (see the sketch after this list)
  • Smart Log Levels: Uses Error for 5xx, Warn for 4xx, Info for success
  • Production Ready: Includes the core metrics needed for monitoring and debugging
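
The context propagation step is what lets downstream code log with the same correlation key. Here is a minimal sketch, assuming the middleware above (the helper name loggerFromContext is illustrative, not part of the exercise code):

// loggerFromContext derives a request-scoped logger from the request ID
// stored in the context, so any layer can log with the same correlation key.
func loggerFromContext(ctx context.Context, base *slog.Logger) *slog.Logger {
	if id, ok := ctx.Value(requestIDKey).(string); ok {
		return base.With("request_id", id)
	}
	return base
}

// Usage inside a handler or service layer:
//   logger := loggerFromContext(r.Context(), baseLogger)
//   logger.Info("Fetching user profile", "user_id", 42)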

Exercise 4: Performance Logger

Difficulty: Advanced | Time: 35-45 minutes

Learning Objectives:

  • Implement comprehensive performance monitoring with structured logging
  • Master memory usage tracking and garbage collection monitoring
  • Learn to build performance profiling tools with Go's runtime package

Real-World Context: Production applications need detailed performance monitoring to identify bottlenecks, memory leaks, and resource usage patterns. Performance logging helps developers understand how their application behaves under load and where optimization efforts should be focused.

Build a performance-monitoring logger that tracks operation timing and resource usage. It should measure execution time, memory allocation and deallocation, and garbage-collection activity, and report both per-operation metrics and batch summaries. This kind of tool is invaluable for finding performance bottlenecks and memory leaks in production, where you need to understand resource usage patterns without reaching for external profilers.

Solution with Explanation
  1package main
  2
  3import (
  4	"context"
  5	"fmt"
  6	"log/slog"
  7	"os"
  8	"runtime"
  9	"sync"
 10	"time"
 11)
 12
 13// PerformanceLogger tracks and logs performance metrics
 14type PerformanceLogger struct {
 15	logger *slog.Logger
 16}
 17
 18type PerfMetrics struct {
 19	Operation    string
 20	StartTime    time.Time
 21	Duration     time.Duration
 22	MemoryBefore uint64
 23	MemoryAfter  uint64
 24	MemoryDelta  int64
 25	GCCount      uint32
 26	Success      bool
 27	Error        error
 28}
 29
 30func NewPerformanceLogger(logger *slog.Logger) *PerformanceLogger {
 31	return &PerformanceLogger{logger: logger}
 32}
 33
 34// Track wraps an operation and logs performance metrics
 35func (pl *PerformanceLogger) Track(ctx context.Context, operation string, fn func() error) error {
 36	metrics := &PerfMetrics{
 37		Operation: operation,
 38		StartTime: time.Now(),
 39	}
 40
 41	// Capture initial memory stats
 42	var m1 runtime.MemStats
 43	runtime.ReadMemStats(&m1)
 44	metrics.MemoryBefore = m1.Alloc
 45	gcBefore := m1.NumGC
 46
 47	// Execute operation
 48	err := fn()
 49
 50	// Capture final metrics
 51	var m2 runtime.MemStats
 52	runtime.ReadMemStats(&m2)
 53	metrics.MemoryAfter = m2.Alloc
 54	metrics.MemoryDelta = int64(m2.Alloc) - int64(m1.Alloc)
 55	metrics.GCCount = m2.NumGC - gcBefore
 56	metrics.Duration = time.Since(metrics.StartTime)
 57	metrics.Success = err == nil
 58	metrics.Error = err
 59
 60	// Log metrics
 61	pl.logMetrics(ctx, metrics)
 62
 63	return err
 64}
 65
 66func (pl *PerformanceLogger) logMetrics(ctx context.Context, m *PerfMetrics) {
 67	attrs := []any{
 68		"operation", m.Operation,
 69		"duration_ms", m.Duration.Milliseconds(),
 70		"duration_us", m.Duration.Microseconds(),
 71		"memory_before_kb", m.MemoryBefore / 1024,
 72		"memory_after_kb", m.MemoryAfter / 1024,
 73		"memory_delta_kb", m.MemoryDelta / 1024,
 74		"gc_runs", m.GCCount,
 75		"success", m.Success,
 76	}
 77
 78	if m.Error != nil {
 79		attrs = append(attrs, "error", m.Error.Error())
 80	}
 81
 82	// Determine log level based on performance
 83	level := slog.LevelInfo
 84	if !m.Success {
 85		level = slog.LevelError
 86	} else if m.Duration > 1*time.Second {
 87		level = slog.LevelWarn
 88		attrs = append(attrs, "slow", true)
 89	}
 90
 91	pl.logger.Log(ctx, level, "Performance metrics", attrs...)
 92}
 93
 94// TrackBatch tracks multiple operations and provides summary
 95type BatchTracker struct {
 96	logger     *slog.Logger
 97	operations []PerfMetrics
 98	mu         sync.Mutex
 99}
100
101func NewBatchTracker(logger *slog.Logger) *BatchTracker {
102	return &BatchTracker{
103		logger:     logger,
104		operations: make([]PerfMetrics, 0),
105	}
106}
107
108func (bt *BatchTracker) Track(operation string, fn func() error) error {
109	start := time.Now()
110
111	var m1 runtime.MemStats
112	runtime.ReadMemStats(&m1)
113
114	err := fn()
115
116	var m2 runtime.MemStats
117	runtime.ReadMemStats(&m2)
118
119	metrics := PerfMetrics{
120		Operation:    operation,
121		StartTime:    start,
122		Duration:     time.Since(start),
123		MemoryBefore: m1.Alloc,
124		MemoryAfter:  m2.Alloc,
125		MemoryDelta:  int64(m2.Alloc) - int64(m1.Alloc),
126		GCCount:      m2.NumGC - m1.NumGC,
127		Success:      err == nil,
128		Error:        err,
129	}
130
131	bt.mu.Lock()
132	bt.operations = append(bt.operations, metrics)
133	bt.mu.Unlock()
134
135	return err
136}
137
138func (bt *BatchTracker) Summary() {
139	bt.mu.Lock()
140	defer bt.mu.Unlock()
141
142	if len(bt.operations) == 0 {
143		return
144	}
145
146	var totalDuration time.Duration
147	var totalMemory int64
148	successCount := 0
149
150	for _, op := range bt.operations {
151		totalDuration += op.Duration
152		totalMemory += op.MemoryDelta
153		if op.Success {
154			successCount++
155		}
156	}
157
158	avgDuration := totalDuration / time.Duration(len(bt.operations))
159
160	bt.logger.Info("Batch performance summary",
161		"total_operations", len(bt.operations),
162		"successful", successCount,
163		"failed", len(bt.operations)-successCount,
164		"total_duration_ms", totalDuration.Milliseconds(),
165		"avg_duration_ms", avgDuration.Milliseconds(),
166		"total_memory_delta_kb", totalMemory/1024,
167	)
168}
169
170func main() {
171	logger := slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
172		Level: slog.LevelDebug,
173	}))
174
175	perfLogger := NewPerformanceLogger(logger)
176	ctx := context.Background()
177
178	// Test 1: Fast operation
179	fmt.Println("=== Test 1: Fast Operation ===")
180	perfLogger.Track(ctx, "fast_operation", func() error {
181		time.Sleep(10 * time.Millisecond)
182		return nil
183	})
184
185	// Test 2: Slow operation
186	fmt.Println("\n=== Test 2: Slow Operation ===")
187	perfLogger.Track(ctx, "slow_operation", func() error {
188		time.Sleep(1100 * time.Millisecond)
189		return nil
190	})
191
192	// Test 3: Memory intensive operation
193	fmt.Println("\n=== Test 3: Memory Intensive ===")
194	perfLogger.Track(ctx, "memory_operation", func() error {
195		data := make([]byte, 10*1024*1024) // 10MB allocation
196		_ = data
197		return nil
198	})
199
200	// Test 4: Failed operation
201	fmt.Println("\n=== Test 4: Failed Operation ===")
202	perfLogger.Track(ctx, "failed_operation", func() error {
203		return fmt.Errorf("operation failed")
204	})
205
206	// Test 5: Batch tracking
207	fmt.Println("\n=== Test 5: Batch Tracking ===")
208	batchTracker := NewBatchTracker(logger)
209
210	for i := 0; i < 5; i++ {
211		operation := fmt.Sprintf("batch_op_%d", i)
212		batchTracker.Track(operation, func() error {
213			time.Sleep(time.Duration(10+i*10) * time.Millisecond)
214			return nil
215		})
216	}
217
218	batchTracker.Summary()
219}
220// run

Explanation:

This performance logger provides:

  • Detailed Metrics: Tracks execution time, memory usage, and GC activity
  • Before/After Comparison: Captures resource state before and after operations
  • Automatic Classification: Flags slow operations based on thresholds
  • Batch Tracking: Aggregates metrics across multiple operations
  • Context Support: Integrates with context for distributed tracing
  • Production Use: Useful for identifying performance bottlenecks and memory leaks
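
As a usage sketch, assuming the PerformanceLogger from the solution above (the operation name and file path are hypothetical), instrumenting real work is a small wrapper:

// Track accepts any func() error; metrics are logged automatically.
err := perfLogger.Track(ctx, "load_config", func() error {
	_, err := os.ReadFile("config.json") // hypothetical path
	return err
})
if err != nil {
	logger.Error("config load failed", "error", err)
}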

Exercise 5: Log Aggregation

Difficulty: Advanced | Time: 40-50 minutes

Learning Objectives:

  • Build a centralized log aggregation system
  • Implement flexible log querying and filtering mechanisms
  • Master in-memory log management with rolling windows and statistics

Real-World Context: In microservices architectures, logs from multiple services need to be aggregated for comprehensive monitoring and debugging. A log aggregator enables centralized log analysis, making it easier to trace issues across service boundaries and generate system-wide insights.

Implement a log aggregator that collects logs from multiple sources and provides querying capabilities. Your aggregator should handle multiple sources, support flexible filtering, maintain a rolling buffer for memory efficiency, and report aggregate statistics. This mirrors the patterns used in production tools like Fluentd and Logstash, where centralized collection makes it possible to trace issues across service boundaries.

Solution with Explanation
  1package main
  2
  3import (
  4	"context"
  5	"fmt"
  6	"log/slog"
  7	"os"
  8	"sync"
  9	"time"
 10)
 11
 12// LogEntry represents a stored log entry
 13type LogEntry struct {
 14	Timestamp time.Time
 15	Level     slog.Level
 16	Message   string
 17	Attrs     map[string]interface{}
 18	Source    string
 19}
 20
 21// LogAggregator collects and queries logs
 22type LogAggregator struct {
 23	entries []LogEntry
 24	mu      sync.RWMutex
 25	maxSize int
 26}
 27
 28func NewLogAggregator(maxSize int) *LogAggregator {
 29	return &LogAggregator{
 30		entries: make([]LogEntry, 0, maxSize),
 31		maxSize: maxSize,
 32	}
 33}
 34
 35// AggregatorHandler implements slog.Handler to collect logs
 36type AggregatorHandler struct {
 37	aggregator *LogAggregator
 38	source     string
 39	attrs      []slog.Attr
 40}
 41
 42func NewAggregatorHandler(aggregator *LogAggregator, source string) *AggregatorHandler {
 43	return &AggregatorHandler{
 44		aggregator: aggregator,
 45		source:     source,
 46		attrs:      []slog.Attr{},
 47	}
 48}
 49
 50func (h *AggregatorHandler) Enabled(_ context.Context, level slog.Level) bool {
 51	return true
 52}
 53
 54func (h *AggregatorHandler) Handle(_ context.Context, record slog.Record) error {
 55	attrs := make(map[string]interface{})
 56
 57	// Collect handler attributes
 58	for _, attr := range h.attrs {
 59		attrs[attr.Key] = attr.Value.Any()
 60	}
 61
 62	// Collect record attributes
 63	record.Attrs(func(a slog.Attr) bool {
 64		attrs[a.Key] = a.Value.Any()
 65		return true
 66	})
 67
 68	entry := LogEntry{
 69		Timestamp: record.Time,
 70		Level:     record.Level,
 71		Message:   record.Message,
 72		Attrs:     attrs,
 73		Source:    h.source,
 74	}
 75
 76	h.aggregator.Add(entry)
 77	return nil
 78}
 79
 80func (h *AggregatorHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
 81	newHandler := *h
 82	newHandler.attrs = append(append([]slog.Attr{}, h.attrs...), attrs...) // copy: derived handlers must not share a backing array
 83	return &newHandler
 84}
 85
 86func (h *AggregatorHandler) WithGroup(name string) slog.Handler {
 87	return h // Simplified - could implement grouping
 88}
 89
 90func (a *LogAggregator) Add(entry LogEntry) {
 91	a.mu.Lock()
 92	defer a.mu.Unlock()
 93
 94	// Add entry
 95	a.entries = append(a.entries, entry)
 96
 97	// Enforce max size
 98	if len(a.entries) > a.maxSize {
 99		a.entries = a.entries[1:]
100	}
101}
102
103// Query filters logs based on criteria
104type LogQuery struct {
105	MinLevel        *slog.Level
106	MaxLevel        *slog.Level
107	Source          string
108	TimeFrom        *time.Time
109	TimeTo          *time.Time
110	MessageContains string
111	Limit           int
112}
113
114func (a *LogAggregator) Query(q LogQuery) []LogEntry {
115	a.mu.RLock()
116	defer a.mu.RUnlock()
117
118	results := make([]LogEntry, 0)
119
120	for _, entry := range a.entries {
121		// Filter by level
122		if q.MinLevel != nil && entry.Level < *q.MinLevel {
123			continue
124		}
125		if q.MaxLevel != nil && entry.Level > *q.MaxLevel {
126			continue
127		}
128
129		// Filter by source
130		if q.Source != "" && entry.Source != q.Source {
131			continue
132		}
133
134		// Filter by time range
135		if q.TimeFrom != nil && entry.Timestamp.Before(*q.TimeFrom) {
136			continue
137		}
138		if q.TimeTo != nil && entry.Timestamp.After(*q.TimeTo) {
139			continue
140		}
141
142		// Filter by message
143		if q.MessageContains != "" {
144			if !contains(entry.Message, q.MessageContains) {
145				continue
146			}
147		}
148
149		results = append(results, entry)
150
151		// Apply limit
152		if q.Limit > 0 && len(results) >= q.Limit {
153			break
154		}
155	}
156
157	return results
158}
159
160// contains is a naive recursive substring check; strings.Contains is the idiomatic choice.
161func contains(s, substr string) bool {
162	return len(s) >= len(substr) && (s[:len(substr)] == substr || (len(s) > len(substr) && contains(s[1:], substr)))
163}
164
165// Stats provides aggregation statistics
166type LogStats struct {
167	TotalEntries int
168	ByLevel      map[slog.Level]int
169	BySource     map[string]int
170}
171
172func (a *LogAggregator) Stats() LogStats {
173	a.mu.RLock()
174	defer a.mu.RUnlock()
175
176	stats := LogStats{
177		TotalEntries: len(a.entries),
178		ByLevel:      make(map[slog.Level]int),
179		BySource:     make(map[string]int),
180	}
181
182	for _, entry := range a.entries {
183		stats.ByLevel[entry.Level]++
184		stats.BySource[entry.Source]++
185	}
186
187	return stats
188}
189
190func main() {
191	// Create aggregator
192	aggregator := NewLogAggregator(1000)
193
194	// Create multiple loggers with different sources
195	apiLogger := slog.New(NewAggregatorHandler(aggregator, "api"))
196	dbLogger := slog.New(NewAggregatorHandler(aggregator, "database"))
197	authLogger := slog.New(NewAggregatorHandler(aggregator, "auth"))
198
199	// Generate logs from different sources
200	fmt.Println("=== Generating Logs ===")
201	apiLogger.Info("API server started", "port", 8080)
202	dbLogger.Debug("Database connection established", "host", "localhost")
203	authLogger.Info("User logged in", "user_id", 123)
204	apiLogger.Warn("High request rate", "rate", 1000)
205	dbLogger.Error("Query failed", "error", "timeout")
206	authLogger.Info("User logged out", "user_id", 123)
207	apiLogger.Info("API request processed", "path", "/users")
208
209	time.Sleep(10 * time.Millisecond)
210
211	// Query examples
212	fmt.Println("\n=== Query 1: All ERROR logs ===")
213	errorLevel := slog.LevelError
214	errorLogs := aggregator.Query(LogQuery{
215		MinLevel: &errorLevel,
216	})
217	printLogs(errorLogs)
218
219	fmt.Println("\n=== Query 2: API source logs ===")
220	apiLogs := aggregator.Query(LogQuery{
221		Source: "api",
222	})
223	printLogs(apiLogs)
224
225	fmt.Println("\n=== Query 3: Logs containing 'User' ===")
226	userLogs := aggregator.Query(LogQuery{
227		MessageContains: "User",
228	})
229	printLogs(userLogs)
230
231	fmt.Println("\n=== Query 4: Recent WARNING+ logs ===")
232	warnLevel := slog.LevelWarn
233	recentWarnings := aggregator.Query(LogQuery{
234		MinLevel: &warnLevel,
235		Limit:    5,
236	})
237	printLogs(recentWarnings)
238
239	// Print statistics
240	fmt.Println("\n=== Log Statistics ===")
241	stats := aggregator.Stats()
242	fmt.Printf("Total entries: %d\n", stats.TotalEntries)
243	fmt.Println("\nBy Level:")
244	for level, count := range stats.ByLevel {
245		fmt.Printf("  %s: %d\n", level, count)
246	}
247	fmt.Println("\nBy Source:")
248	for source, count := range stats.BySource {
249		fmt.Printf("  %s: %d\n", source, count)
250	}
251}
252
253func printLogs(entries []LogEntry) {
254	for _, entry := range entries {
255		fmt.Printf("[%s] [%s] [%s] %s",
256			entry.Timestamp.Format("15:04:05"),
257			entry.Source,
258			entry.Level,
259			entry.Message)
260		if len(entry.Attrs) > 0 {
261			fmt.Printf(" %v", entry.Attrs)
262		}
263		fmt.Println()
264	}
265	if len(entries) == 0 {
266		fmt.Println("  (no matching logs)")
267	}
268}
269// run

Explanation:

This log aggregator provides:

  • Multi-Source Collection: Aggregates logs from different application components
  • Flexible Querying: Filter by level, source, time, message content
  • Statistics: Provides breakdown by level and source
  • Rolling Window: Maintains a fixed-size buffer of recent logs
  • Thread-Safe: Proper locking for concurrent access
  • Production Pattern: Useful for centralized logging before shipping to external systems
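
As a usage sketch built only from the solution's own API, a time-windowed query composes naturally with the level filter:

// Errors from the last five minutes, capped at 100 entries.
from := time.Now().Add(-5 * time.Minute)
errLevel := slog.LevelError
recent := aggregator.Query(LogQuery{
	MinLevel: &errLevel,
	TimeFrom: &from,
	Limit:    100,
})
printLogs(recent)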

Summary

Structured logging with slog represents a fundamental shift from traditional text-based logging to data-driven observability. It's not just about pretty formatting - it's about making your application's behavior queryable, analyzable, and debuggable at scale.

💡 Key Takeaways:

  • Structured beats formatted - Key-value pairs are far more queryable than formatted strings
  • Context is king - Request IDs, trace IDs, and user context make distributed debugging possible
  • Performance matters - Smart logging includes sampling, filtering, and efficient handlers
  • Security first - Never log sensitive data; use LogValuer for automatic redaction (see the sketch after this list)
  • Think in queries - Log data the way you'll want to search for it later
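
To make the redaction point concrete, here is a minimal sketch of the slog.LogValuer pattern: the type controls its own log representation, so the secret is redacted at every call site automatically.

package main

import (
	"log/slog"
	"os"
)

// Token implements slog.LogValuer, so slog substitutes the redacted
// form wherever a Token value is logged.
type Token string

func (t Token) LogValue() slog.Value {
	return slog.StringValue("REDACTED")
}

func main() {
	logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
	logger.Info("auth attempt", "token", Token("super-secret"))
	// Output contains "token":"REDACTED" - the secret never reaches the log.
}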

⚠️ Production Realities:

  • Logging costs money - Storage, bandwidth, and CPU all add up at scale
  • Volume is the enemy - Too many logs hide the important signals; sampling (sketched after this list) is a common mitigation
  • Correlation is essential - Without request tracing, microservices are impossible to debug
  • Format consistency matters - All services should log in the same structured format
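
Here is a hedged sketch of sampling (illustrative, not a standard-library feature): a handler wrapper that keeps everything at Warn and above but passes only one in n lower-level records.

package main

import (
	"context"
	"log/slog"
	"os"
	"sync/atomic"
)

// samplingHandler drops all but every nth record below Warn. Note that
// WithAttrs and WithGroup are promoted from the inner handler here, so
// loggers derived via With would bypass sampling; a production version
// should wrap those methods as well.
type samplingHandler struct {
	slog.Handler
	n       uint64 // sampling rate; must be > 0
	counter atomic.Uint64
}

func (h *samplingHandler) Handle(ctx context.Context, r slog.Record) error {
	if r.Level < slog.LevelWarn && h.counter.Add(1)%h.n != 0 {
		return nil // sampled out
	}
	return h.Handler.Handle(ctx, r)
}

func main() {
	logger := slog.New(&samplingHandler{
		Handler: slog.NewJSONHandler(os.Stdout, nil),
		n:       10,
	})
	for i := 0; i < 30; i++ {
		logger.Info("high-volume event", "i", i) // only every 10th survives
	}
	logger.Error("always kept", "reason", "errors are never sampled")
}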

When to use structured logging:

  • Microservices and distributed systems
  • Production applications with monitoring requirements
  • Applications that need detailed audit trails
  • Systems requiring automated log analysis
  • High-traffic applications needing efficient log processing

The bottom line: Modern applications generate massive amounts of log data. Structured logging transforms this from overwhelming noise into valuable, queryable insights that help you understand, debug, and optimize your systems effectively.

Remember: Good logging doesn't just help you find problems - it helps you understand your system's behavior, optimize performance, and make data-driven decisions about your application's architecture and scaling needs.