Introduction
These exercises synthesize the advanced concepts covered in Section 3, building upon the fundamentals from Sections 1-2. You'll work with generics, reflection, design patterns, and performance optimization to solve real-world programming challenges.
What You'll Practice:
- Implementing type-safe generic data structures and algorithms
- Using reflection for runtime type inspection and validation
- Applying Go design patterns idiomatically
- Profiling and optimizing code for performance
- Building concurrent systems with advanced synchronization
- Working with build tags, atomic operations, and memory optimization
Prerequisites:
- Completed Sections 1-2
- Understanding of generics, reflection, and concurrency
- Familiarity with profiling tools and benchmarking
Time Estimate: 4-6 hours for all exercises
Exercise 1 - Generic Stack
Problem: Implement a type-safe generic stack data structure with push, pop, and peek operations.
Requirements:
- Create a Stack[T any] generic type
- Implement Push(T), Pop(), and Peek()
- Handle empty stack cases gracefully
- Thread-safety NOT required
Function Signature:
type Stack[T any] struct {
    // Your implementation
}

func (s *Stack[T]) Push(item T)
func (s *Stack[T]) Pop() (T, bool)
func (s *Stack[T]) Peek() (T, bool)
func (s *Stack[T]) Len() int
Solution
// run
package main

import "fmt"

// Stack is a generic LIFO stack
type Stack[T any] struct {
    items []T
}

// Push adds an item to the top of the stack
func (s *Stack[T]) Push(item T) {
    s.items = append(s.items, item)
}

// Pop removes and returns the top item.
// Returns the zero value and false if the stack is empty.
func (s *Stack[T]) Pop() (T, bool) {
    if len(s.items) == 0 {
        var zero T
        return zero, false
    }

    lastIdx := len(s.items) - 1
    item := s.items[lastIdx]
    s.items = s.items[:lastIdx]
    return item, true
}

// Peek returns the top item without removing it.
// Returns the zero value and false if the stack is empty.
func (s *Stack[T]) Peek() (T, bool) {
    if len(s.items) == 0 {
        var zero T
        return zero, false
    }

    return s.items[len(s.items)-1], true
}

// Len returns the number of items in the stack
func (s *Stack[T]) Len() int {
    return len(s.items)
}

func main() {
    // Integer stack
    intStack := &Stack[int]{}
    intStack.Push(10)
    intStack.Push(20)
    intStack.Push(30)

    if val, ok := intStack.Peek(); ok {
        fmt.Printf("Peek: %d\n", val) // 30
    }

    for intStack.Len() > 0 {
        val, _ := intStack.Pop()
        fmt.Printf("Pop: %d\n", val)
    }

    // String stack
    strStack := &Stack[string]{}
    strStack.Push("hello")
    strStack.Push("world")

    val, ok := strStack.Pop()
    fmt.Printf("Popped: %s, OK: %v\n", val, ok) // world, true
}
Explanation:
- Uses a slice as the underlying storage mechanism
- The generic type parameter T any allows any element type
- Returns a (T, bool) pair to indicate success or failure
- The zero value (via var zero T) is returned when the stack is empty
- Time complexity: amortized O(1) for all operations (append may occasionally reallocate)
Key Takeaways:
- Generics enable type-safe collections without code duplication
- The any constraint allows maximum flexibility
- Boolean return values provide clear empty-stack semantics
- Slice-backed implementation is simple and efficient
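One pitfall worth calling out: the mutating methods need pointer receivers. With a value receiver, append would modify a copy of the struct and the pushed item would be lost. A minimal sketch of the failure mode (the badStack/goodStack names are mine, for illustration only):

```go
package main

import "fmt"

type badStack struct{ items []int }

// Value receiver: s is a copy, so the appended slice header is discarded.
func (s badStack) Push(item int) { s.items = append(s.items, item) }

type goodStack struct{ items []int }

// Pointer receiver: the caller's slice header is updated.
func (s *goodStack) Push(item int) { s.items = append(s.items, item) }

func main() {
	var b badStack
	b.Push(1)
	fmt.Println(len(b.items)) // 0 — the push was lost

	var g goodStack
	g.Push(1)
	fmt.Println(len(g.items)) // 1
}
```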
Exercise 2 - Reflection Validator
Problem: Build a struct field validator using reflection that validates struct fields based on struct tags.
Requirements:
- Support the validate:"required" tag for non-zero values
- Support validate:"min=N" for minimum numeric values
- Support validate:"max=N" for maximum numeric values
- Return all validation errors, not just the first
Function Signature:
type ValidationError struct {
    Field   string
    Message string
}

func Validate(v interface{}) []ValidationError
Solution
// run
package main

import (
    "fmt"
    "reflect"
    "strconv"
    "strings"
)

// ValidationError represents a field validation error
type ValidationError struct {
    Field   string
    Message string
}

// Validate performs struct field validation using reflection
func Validate(v interface{}) []ValidationError {
    var errors []ValidationError

    val := reflect.ValueOf(v)
    typ := reflect.TypeOf(v)

    // Dereference pointer if needed
    if val.Kind() == reflect.Ptr {
        val = val.Elem()
        typ = typ.Elem()
    }

    // Only validate structs
    if val.Kind() != reflect.Struct {
        return errors
    }

    // Iterate through struct fields
    for i := 0; i < val.NumField(); i++ {
        field := typ.Field(i)
        fieldVal := val.Field(i)

        // Get validation tag
        tag := field.Tag.Get("validate")
        if tag == "" {
            continue
        }

        // Parse validation rules
        rules := strings.Split(tag, ",")
        for _, rule := range rules {
            rule = strings.TrimSpace(rule)

            if rule == "required" {
                if isZero(fieldVal) {
                    errors = append(errors, ValidationError{
                        Field:   field.Name,
                        Message: "field is required",
                    })
                }
            } else if strings.HasPrefix(rule, "min=") {
                minStr := strings.TrimPrefix(rule, "min=")
                min, err := strconv.ParseFloat(minStr, 64)
                if err != nil {
                    continue
                }

                if fieldVal.Kind() >= reflect.Int && fieldVal.Kind() <= reflect.Int64 {
                    if float64(fieldVal.Int()) < min {
                        errors = append(errors, ValidationError{
                            Field:   field.Name,
                            Message: fmt.Sprintf("must be at least %v", min),
                        })
                    }
                }
            } else if strings.HasPrefix(rule, "max=") {
                maxStr := strings.TrimPrefix(rule, "max=")
                max, err := strconv.ParseFloat(maxStr, 64)
                if err != nil {
                    continue
                }

                if fieldVal.Kind() >= reflect.Int && fieldVal.Kind() <= reflect.Int64 {
                    if float64(fieldVal.Int()) > max {
                        errors = append(errors, ValidationError{
                            Field:   field.Name,
                            Message: fmt.Sprintf("must be at most %v", max),
                        })
                    }
                }
            }
        }
    }

    return errors
}

// isZero checks if a value is the zero value for its type
func isZero(v reflect.Value) bool {
    switch v.Kind() {
    case reflect.String:
        return v.String() == ""
    case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
        return v.Int() == 0
    case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
        return v.Uint() == 0
    case reflect.Float32, reflect.Float64:
        return v.Float() == 0
    case reflect.Bool:
        return !v.Bool()
    case reflect.Ptr, reflect.Interface:
        return v.IsNil()
    default:
        return false
    }
}

// Example usage
type User struct {
    Name  string `validate:"required"`
    Age   int    `validate:"required,min=18,max=100"`
    Email string `validate:"required"`
}

func main() {
    // Invalid user
    user1 := User{
        Name:  "",
        Age:   15,
        Email: "",
    }

    errors := Validate(user1)
    fmt.Printf("Validation errors for user1: %d\n", len(errors))
    for _, err := range errors {
        fmt.Printf("  %s: %s\n", err.Field, err.Message)
    }

    // Valid user
    user2 := User{
        Name:  "Alice",
        Age:   25,
        Email: "alice@example.com",
    }

    errors = Validate(user2)
    if len(errors) == 0 {
        fmt.Println("user2 is valid!")
    }
}
Explanation:
- Uses reflect.TypeOf() to get struct type information
- Uses reflect.ValueOf() to access field values
- Parses struct tags using field.Tag.Get("validate")
- Implements an isZero() helper covering the common kinds
- Accumulates all errors rather than failing fast
Key Takeaways:
- Reflection enables runtime type inspection and manipulation
- Struct tags provide metadata for validation rules
- Type switches handle different field types appropriately
- Collecting all errors provides better user experience
Exercise 3 - Factory Pattern
Problem: Implement a factory pattern with a type registry that can create different types of database connections.
Requirements:
- Create a Connection interface with Connect() and Close() methods
- Implement a factory that registers and creates connection types
- Support "postgres", "mysql", "sqlite" connection types
- Return error for unknown connection types
Function Signature:
type Connection interface {
    Connect() error
    Close() error
}

type ConnectionFactory interface {
    Register(name string, creator func() Connection)
    Create(name string) (Connection, error)
}
Solution
// run
package main

import "fmt"

// Connection interface for database connections
type Connection interface {
    Connect() error
    Close() error
}

// PostgresConnection implements Connection for PostgreSQL
type PostgresConnection struct {
    connected bool
}

func (p *PostgresConnection) Connect() error {
    fmt.Println("Connecting to PostgreSQL...")
    p.connected = true
    return nil
}

func (p *PostgresConnection) Close() error {
    fmt.Println("Closing PostgreSQL connection")
    p.connected = false
    return nil
}

// MySQLConnection implements Connection for MySQL
type MySQLConnection struct {
    connected bool
}

func (m *MySQLConnection) Connect() error {
    fmt.Println("Connecting to MySQL...")
    m.connected = true
    return nil
}

func (m *MySQLConnection) Close() error {
    fmt.Println("Closing MySQL connection")
    m.connected = false
    return nil
}

// SQLiteConnection implements Connection for SQLite
type SQLiteConnection struct {
    connected bool
}

func (s *SQLiteConnection) Connect() error {
    fmt.Println("Connecting to SQLite...")
    s.connected = true
    return nil
}

func (s *SQLiteConnection) Close() error {
    fmt.Println("Closing SQLite connection")
    s.connected = false
    return nil
}

// Factory for creating database connections
type Factory struct {
    creators map[string]func() Connection
}

// NewFactory creates a new connection factory
func NewFactory() *Factory {
    return &Factory{
        creators: make(map[string]func() Connection),
    }
}

// Register adds a new connection type to the factory
func (f *Factory) Register(name string, creator func() Connection) {
    f.creators[name] = creator
}

// Create instantiates a connection by name
func (f *Factory) Create(name string) (Connection, error) {
    creator, exists := f.creators[name]
    if !exists {
        return nil, fmt.Errorf("unknown connection type: %s", name)
    }
    return creator(), nil
}

func main() {
    // Create factory and register connection types
    factory := NewFactory()

    factory.Register("postgres", func() Connection {
        return &PostgresConnection{}
    })

    factory.Register("mysql", func() Connection {
        return &MySQLConnection{}
    })

    factory.Register("sqlite", func() Connection {
        return &SQLiteConnection{}
    })

    // Create connections
    connections := []string{"postgres", "mysql", "sqlite", "oracle"}

    for _, connType := range connections {
        conn, err := factory.Create(connType)
        if err != nil {
            fmt.Printf("Error creating %s: %v\n", connType, err)
            continue
        }

        conn.Connect()
        conn.Close()
        fmt.Println()
    }
}
Explanation:
- Factory pattern separates object creation from usage
- Registry maps connection names to creator functions
- Creator functions return interface types for flexibility
- Each connection type implements the Connection interface
- Factory validates the connection type before creation
Key Takeaways:
- Factory pattern centralizes object creation logic
- Interfaces enable polymorphic behavior
- Registration pattern makes factories extensible
- Error handling for invalid types improves robustness
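The standard library uses the same registration idea: database/sql drivers call sql.Register from an init function so that merely importing the driver package wires it into the registry. A minimal sketch of that self-registration style, using a hypothetical Greeter interface and registry of my own invention:

```go
package main

import "fmt"

// Greeter is a stand-in for an interface like Connection (illustrative only).
type Greeter interface{ Greet() string }

var registry = map[string]func() Greeter{}

// Register is typically called from a package's init function,
// mirroring how database/sql drivers self-register on import.
func Register(name string, creator func() Greeter) {
	registry[name] = creator
}

type english struct{}

func (english) Greet() string { return "hello" }

// init runs before main, so the type is available as soon as
// the package is linked in — no explicit wiring in main needed.
func init() {
	Register("en", func() Greeter { return english{} })
}

func main() {
	g := registry["en"]()
	fmt.Println(g.Greet()) // hello
}
```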
Exercise 4 - Functional Options
Problem: Implement a functional options pattern for configuring an HTTP server.
Requirements:
- Create a Server struct with host, port, timeout, and maxConnections
- Implement option functions that modify server configuration
- Provide a NewServer(opts ...Option) constructor
- Set sensible defaults for unspecified options
Function Signature:
type Server struct {
    host           string
    port           int
    timeout        time.Duration
    maxConnections int
}

type Option func(*Server)

func NewServer(opts ...Option) *Server
Solution
// run
package main

import (
    "fmt"
    "time"
)

// Server represents an HTTP server configuration
type Server struct {
    host           string
    port           int
    timeout        time.Duration
    maxConnections int
}

// Option is a functional option for configuring Server
type Option func(*Server)

// NewServer creates a new server with the given options
func NewServer(opts ...Option) *Server {
    // Default configuration
    server := &Server{
        host:           "localhost",
        port:           8080,
        timeout:        30 * time.Second,
        maxConnections: 100,
    }

    // Apply options
    for _, opt := range opts {
        opt(server)
    }

    return server
}

// WithHost sets the server host
func WithHost(host string) Option {
    return func(s *Server) {
        s.host = host
    }
}

// WithPort sets the server port
func WithPort(port int) Option {
    return func(s *Server) {
        s.port = port
    }
}

// WithTimeout sets the server timeout
func WithTimeout(timeout time.Duration) Option {
    return func(s *Server) {
        s.timeout = timeout
    }
}

// WithMaxConnections sets the maximum connections
func WithMaxConnections(max int) Option {
    return func(s *Server) {
        s.maxConnections = max
    }
}

// Start simulates starting the server
func (s *Server) Start() {
    fmt.Printf("Starting server on %s:%d\n", s.host, s.port)
    fmt.Printf("  Timeout: %v\n", s.timeout)
    fmt.Printf("  Max Connections: %d\n", s.maxConnections)
}

func main() {
    // Server with defaults
    server1 := NewServer()
    server1.Start()
    fmt.Println()

    // Server with custom configuration
    server2 := NewServer(
        WithHost("0.0.0.0"),
        WithPort(3000),
        WithTimeout(60*time.Second),
        WithMaxConnections(1000),
    )
    server2.Start()
    fmt.Println()

    // Server with partial configuration
    server3 := NewServer(
        WithPort(9000),
        WithTimeout(15*time.Second),
    )
    server3.Start()
}
Explanation:
- Option functions are closures that capture configuration values
- NewServer applies options to a default configuration
- Each WithX function returns an Option that modifies the server
- This pattern provides flexible, readable configuration
- Defaults ensure servers are always in a valid state
Key Takeaways:
- Functional options pattern is idiomatic in Go
- Provides clean API for optional parameters
- Defaults prevent invalid configurations
- Extensible without breaking existing code
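A common extension of this pattern (not required by the exercise) lets options validate their input by returning an error, so a bad value aborts construction instead of silently producing a misconfigured server. A minimal sketch under that assumption:

```go
package main

import "fmt"

type Server struct{ port int }

// Option both configures and validates; an error aborts construction.
type Option func(*Server) error

func WithPort(port int) Option {
	return func(s *Server) error {
		if port < 1 || port > 65535 {
			return fmt.Errorf("invalid port: %d", port)
		}
		s.port = port
		return nil
	}
}

func NewServer(opts ...Option) (*Server, error) {
	s := &Server{port: 8080} // default
	for _, opt := range opts {
		if err := opt(s); err != nil {
			return nil, err
		}
	}
	return s, nil
}

func main() {
	if _, err := NewServer(WithPort(70000)); err != nil {
		fmt.Println("config rejected:", err)
	}
	srv, _ := NewServer(WithPort(3000))
	fmt.Println("port:", srv.port) // port: 3000
}
```

The trade-off is a noisier call site (every construction must check an error), which is why many APIs stick with the plain func(*Server) form shown in the solution.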
Exercise 5 - CPU Profiling
Problem: Profile and optimize a function that concatenates strings in different ways.
Requirements:
- Implement three string concatenation approaches: +, strings.Builder, and bytes.Buffer
- Write benchmarks for each approach
- Identify the most efficient method
- Generate CPU profile for the slowest method
Function Signature:
func ConcatPlus(strs []string) string
func ConcatBuilder(strs []string) string
func ConcatBuffer(strs []string) string
Solution
// run
package main

import (
    "bytes"
    "fmt"
    "strings"
    "testing"
)

// ConcatPlus concatenates strings using the + operator
func ConcatPlus(strs []string) string {
    result := ""
    for _, s := range strs {
        result += s
    }
    return result
}

// ConcatBuilder concatenates strings using strings.Builder
func ConcatBuilder(strs []string) string {
    var builder strings.Builder

    // Pre-allocate capacity for better performance
    totalLen := 0
    for _, s := range strs {
        totalLen += len(s)
    }
    builder.Grow(totalLen)

    for _, s := range strs {
        builder.WriteString(s)
    }
    return builder.String()
}

// ConcatBuffer concatenates strings using bytes.Buffer
func ConcatBuffer(strs []string) string {
    var buffer bytes.Buffer

    // Pre-allocate capacity
    totalLen := 0
    for _, s := range strs {
        totalLen += len(s)
    }
    buffer.Grow(totalLen)

    for _, s := range strs {
        buffer.WriteString(s)
    }
    return buffer.String()
}

// Benchmark functions
func BenchmarkConcatPlus(b *testing.B) {
    strs := make([]string, 1000)
    for i := range strs {
        strs[i] = "test"
    }

    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        _ = ConcatPlus(strs)
    }
}

func BenchmarkConcatBuilder(b *testing.B) {
    strs := make([]string, 1000)
    for i := range strs {
        strs[i] = "test"
    }

    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        _ = ConcatBuilder(strs)
    }
}

func BenchmarkConcatBuffer(b *testing.B) {
    strs := make([]string, 1000)
    for i := range strs {
        strs[i] = "test"
    }

    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        _ = ConcatBuffer(strs)
    }
}

func main() {
    strs := []string{"Hello", " ", "World", "!", " ", "Go", " ", "is", " ", "awesome"}

    fmt.Println("ConcatPlus:", ConcatPlus(strs))
    fmt.Println("ConcatBuilder:", ConcatBuilder(strs))
    fmt.Println("ConcatBuffer:", ConcatBuffer(strs))

    fmt.Println("\nRun benchmarks with:")
    fmt.Println("  go test -bench=. -benchmem")
    fmt.Println("\nGenerate CPU profile with:")
    fmt.Println("  go test -bench=BenchmarkConcatPlus -cpuprofile=cpu.prof")
    fmt.Println("  go tool pprof cpu.prof")
}
Expected Benchmark Results:
BenchmarkConcatPlus-8 2000 800000 ns/op 2000000 B/op 999 allocs/op
BenchmarkConcatBuilder-8 50000 30000 ns/op 4096 B/op 1 allocs/op
BenchmarkConcatBuffer-8 50000 32000 ns/op 4096 B/op 1 allocs/op
Explanation:
- The + operator creates a new string on each iteration (many allocations)
- strings.Builder and bytes.Buffer use growable buffers (few allocations)
- Pre-allocation with Grow() minimizes reallocations
- strings.Builder is optimized specifically for string building
- CPU profiling with go test -cpuprofile identifies bottlenecks
Performance Notes:
- strings.Builder is fastest for string concatenation
- Pre-allocating capacity prevents multiple buffer resizes
- The + operator causes quadratic memory allocations
- Benchmarks show a 25-30x performance difference
Key Takeaways:
- Always benchmark before optimizing
- Use strings.Builder for efficient string concatenation
- Pre-allocation improves performance significantly
- CPU profiling identifies performance bottlenecks
Exercise 6 - Build Tags
Problem: Create a feature that behaves differently on different platforms using build tags.
Requirements:
- Create a GetTempDir() function that returns the OS-specific temp directory
- Use build tags for Linux, macOS, and Windows
- Provide a default fallback implementation
- Demonstrate build tag syntax
File Structure:
temp.go // Interface and default
temp_linux.go // Linux implementation
temp_darwin.go // macOS implementation
temp_windows.go // Windows implementation
Solution
File: temp.go
//go:build !linux && !darwin && !windows

package main

// GetTempDir returns the OS-specific temporary directory.
// This is the fallback for unsupported platforms; the negative
// build tag prevents it from clashing with the per-OS files below.
func GetTempDir() string {
    return "/tmp"
}
File: temp_linux.go
//go:build linux

package main

// GetTempDir returns the Linux temporary directory
func GetTempDir() string {
    return "/tmp"
}
File: temp_darwin.go
//go:build darwin

package main

// GetTempDir returns the macOS temporary directory
func GetTempDir() string {
    return "/var/tmp"
}
File: temp_windows.go
//go:build windows

package main

import "os"

// GetTempDir returns the Windows temporary directory
func GetTempDir() string {
    // Try environment variables
    if tmp := os.Getenv("TEMP"); tmp != "" {
        return tmp
    }
    if tmp := os.Getenv("TMP"); tmp != "" {
        return tmp
    }
    return "C:\\Temp"
}
File: main.go
// run
package main

import (
    "fmt"
    "runtime"
)

func main() {
    fmt.Printf("Platform: %s\n", runtime.GOOS)
    fmt.Printf("Temp Directory: %s\n", GetTempDir())

    // Demonstrate build tag usage
    fmt.Println("\nBuild tags enable platform-specific implementations")
    fmt.Println("without runtime checks or if/else branching.")
}
Explanation:
- Build tags (//go:build <tag>) control which files are compiled
- Only one temp_*.go file compiles per platform
- The Go compiler selects the appropriate file based on GOOS
- Fallback in temp.go handles unsupported platforms
- Zero runtime overhead compared to if/else checks
Build Tag Examples:
//go:build linux           // Only Linux
//go:build darwin          // Only macOS
//go:build windows         // Only Windows
//go:build linux || darwin // Linux OR macOS
//go:build !windows        // NOT Windows
//go:build cgo             // When cgo is enabled
Key Takeaways:
- Build tags enable compile-time platform selection
- More efficient than runtime platform checks
- Each platform gets optimized implementation
- Fallback implementations handle edge cases
Exercise 7 - Atomic Counter
Problem: Implement a thread-safe counter using atomic operations instead of mutexes.
Requirements:
- Create an AtomicCounter struct using atomic.Int64
- Implement Increment(), Decrement(), Get(), and Set() methods
- Benchmark against a mutex-based counter
- Demonstrate lock-free concurrency
Function Signature:
type AtomicCounter struct {
    // Your implementation
}

func (c *AtomicCounter) Increment() int64
func (c *AtomicCounter) Decrement() int64
func (c *AtomicCounter) Get() int64
func (c *AtomicCounter) Set(val int64)
Solution
// run
package main

import (
    "fmt"
    "sync"
    "sync/atomic"
    "testing"
)

// AtomicCounter is a lock-free thread-safe counter
type AtomicCounter struct {
    value atomic.Int64
}

// Increment atomically increments the counter and returns the new value
func (c *AtomicCounter) Increment() int64 {
    return c.value.Add(1)
}

// Decrement atomically decrements the counter and returns the new value
func (c *AtomicCounter) Decrement() int64 {
    return c.value.Add(-1)
}

// Get returns the current counter value
func (c *AtomicCounter) Get() int64 {
    return c.value.Load()
}

// Set sets the counter to a specific value
func (c *AtomicCounter) Set(val int64) {
    c.value.Store(val)
}

// MutexCounter is a mutex-based counter for comparison
type MutexCounter struct {
    mu    sync.Mutex
    value int64
}

func (c *MutexCounter) Increment() int64 {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.value++
    return c.value
}

func (c *MutexCounter) Get() int64 {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.value
}

// Benchmarks
func BenchmarkAtomicCounter(b *testing.B) {
    var counter AtomicCounter

    b.RunParallel(func(pb *testing.PB) {
        for pb.Next() {
            counter.Increment()
        }
    })
}

func BenchmarkMutexCounter(b *testing.B) {
    var counter MutexCounter

    b.RunParallel(func(pb *testing.PB) {
        for pb.Next() {
            counter.Increment()
        }
    })
}

func main() {
    var counter AtomicCounter

    // Concurrent increments
    var wg sync.WaitGroup
    workers := 10
    iterations := 1000

    for i := 0; i < workers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for j := 0; j < iterations; j++ {
                counter.Increment()
            }
        }()
    }

    wg.Wait()

    expected := int64(workers * iterations)
    actual := counter.Get()

    fmt.Printf("Expected: %d\n", expected)
    fmt.Printf("Actual: %d\n", actual)
    fmt.Printf("Match: %v\n", expected == actual)

    // Demonstrate all operations
    fmt.Println("\nOperations:")
    counter.Set(100)
    fmt.Printf("Set to 100: %d\n", counter.Get())

    counter.Increment()
    fmt.Printf("After increment: %d\n", counter.Get())

    counter.Decrement()
    fmt.Printf("After decrement: %d\n", counter.Get())

    fmt.Println("\nRun benchmarks with:")
    fmt.Println("  go test -bench=. -benchmem")
}
Expected Benchmark Results:
BenchmarkAtomicCounter-8 50000000 25 ns/op 0 B/op 0 allocs/op
BenchmarkMutexCounter-8 20000000 75 ns/op 0 B/op 0 allocs/op
Explanation:
- atomic.Int64 provides lock-free atomic operations
- Add(1) atomically increments, Add(-1) decrements
- Load() reads the current value, Store() sets it
- No mutex contention means better performance
- Atomic operations are 2-3x faster than mutex-based
Key Takeaways:
- Atomic operations avoid mutex overhead
- Lock-free algorithms scale better with concurrency
- Use atomics for simple counters and flags
- Mutexes needed for complex critical sections
Exercise 8 - Race-Free Cache
Problem: Implement a thread-safe cache using sync.RWMutex that allows concurrent reads.
Requirements:
- Create a Cache[K comparable, V any] generic type
- Implement Set(K, V), Get(K), and Delete(K) methods
- Use RWMutex for optimal read performance
- Support concurrent readers, exclusive writers
Function Signature:
type Cache[K comparable, V any] struct {
    // Your implementation
}

func (c *Cache[K, V]) Set(key K, value V)
func (c *Cache[K, V]) Get(key K) (V, bool)
func (c *Cache[K, V]) Delete(key K)
Solution
// run
package main

import (
    "fmt"
    "sync"
    "time"
)

// Cache is a thread-safe generic cache
type Cache[K comparable, V any] struct {
    mu    sync.RWMutex
    items map[K]V
}

// NewCache creates a new cache
func NewCache[K comparable, V any]() *Cache[K, V] {
    return &Cache[K, V]{
        items: make(map[K]V),
    }
}

// Set adds or updates a key-value pair
func (c *Cache[K, V]) Set(key K, value V) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.items[key] = value
}

// Get retrieves a value by key.
// Returns the zero value and false if the key doesn't exist.
func (c *Cache[K, V]) Get(key K) (V, bool) {
    c.mu.RLock()
    defer c.mu.RUnlock()
    val, exists := c.items[key]
    return val, exists
}

// Delete removes a key from the cache
func (c *Cache[K, V]) Delete(key K) {
    c.mu.Lock()
    defer c.mu.Unlock()
    delete(c.items, key)
}

// Len returns the number of items in the cache
func (c *Cache[K, V]) Len() int {
    c.mu.RLock()
    defer c.mu.RUnlock()
    return len(c.items)
}

// Clear removes all items from the cache
func (c *Cache[K, V]) Clear() {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.items = make(map[K]V)
}

func main() {
    // String cache
    cache := NewCache[string, string]()

    // Concurrent writes
    var wg sync.WaitGroup
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            cache.Set(fmt.Sprintf("key-%d", id), fmt.Sprintf("value-%d", id))
        }(i)
    }
    wg.Wait()

    fmt.Printf("Cache size after writes: %d\n", cache.Len())

    // Concurrent reads
    readers := 10
    for i := 0; i < readers; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            for j := 0; j < 10; j++ {
                key := fmt.Sprintf("key-%d", j)
                if val, ok := cache.Get(key); ok {
                    fmt.Printf("Reader %d: %s = %s\n", id, key, val)
                }
                time.Sleep(10 * time.Millisecond)
            }
        }(i)
    }
    wg.Wait()

    // Delete some keys
    for i := 0; i < 50; i++ {
        cache.Delete(fmt.Sprintf("key-%d", i))
    }

    fmt.Printf("Cache size after deletes: %d\n", cache.Len())

    // Clear cache
    cache.Clear()
    fmt.Printf("Cache size after clear: %d\n", cache.Len())
}
Explanation:
- RWMutex allows multiple concurrent readers
- Write operations take the exclusive lock (Lock)
- Read operations take the shared lock (RLock)
- Generics enable type-safe caching for any types
- Proper defer ensures locks are always released
Performance Characteristics:
- Multiple readers can access cache simultaneously
- Writers block all readers and other writers
- Much faster than a plain Mutex for read-heavy workloads
- Slightly more overhead than a plain Mutex for write-heavy workloads
Key Takeaways:
- RWMutex optimizes for concurrent reads
- Use Lock/Unlock for writes, RLock/RUnlock for reads
- Generics enable reusable, type-safe data structures
- Always defer unlock to prevent deadlocks
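One caveat worth illustrating: even though each method is individually safe, a caller-side "check Get, then Set" sequence is not atomic, so compound operations belong inside a single lock. A sketch of a GetOrSet method one might add (the method name and semantics are my own, modeled on sync.Map's LoadOrStore):

```go
package main

import (
	"fmt"
	"sync"
)

type Cache[K comparable, V any] struct {
	mu    sync.RWMutex
	items map[K]V
}

// GetOrSet returns the existing value for key, or stores and returns
// value if the key is absent. Doing both under one Lock makes the
// check-then-act sequence atomic; separate Get/Set calls could race.
func (c *Cache[K, V]) GetOrSet(key K, value V) (V, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if existing, ok := c.items[key]; ok {
		return existing, true
	}
	c.items[key] = value
	return value, false
}

func main() {
	c := &Cache[string, int]{items: make(map[string]int)}
	v, loaded := c.GetOrSet("a", 1)
	fmt.Println(v, loaded) // 1 false
	v, loaded = c.GetOrSet("a", 2)
	fmt.Println(v, loaded) // 1 true
}
```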
Exercise 9 - Dependency Injection
Problem: Build a simple dependency injection container that can register and resolve dependencies.
Requirements:
- Create a Container that stores service constructors
- Support Register(name, constructor) and Resolve(name) methods
- Handle circular dependency detection
- Support singleton and transient lifetimes
Function Signature:
type Container struct {
    // Your implementation
}

func (c *Container) Register(name string, constructor func() interface{})
func (c *Container) Resolve(name string) (interface{}, error)
Solution
// run
package main

import (
    "fmt"
    "sync"
)

// Container is a simple dependency injection container
type Container struct {
    mu           sync.RWMutex
    constructors map[string]func() interface{}
    singletons   map[string]interface{}
    resolving    map[string]bool // For circular dependency detection
}

// NewContainer creates a new DI container
func NewContainer() *Container {
    return &Container{
        constructors: make(map[string]func() interface{}),
        singletons:   make(map[string]interface{}),
        resolving:    make(map[string]bool),
    }
}

// Register registers a constructor function for a transient service
func (c *Container) Register(name string, constructor func() interface{}) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.constructors[name] = constructor
}

// RegisterSingleton registers a singleton service
func (c *Container) RegisterSingleton(name string, constructor func() interface{}) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.constructors[name] = constructor
    // Mark as singleton by pre-creating
    c.resolveSingleton(name, constructor)
}

// Resolve resolves a service by name
func (c *Container) Resolve(name string) (interface{}, error) {
    c.mu.Lock()
    defer c.mu.Unlock()

    // Check for circular dependency
    if c.resolving[name] {
        return nil, fmt.Errorf("circular dependency detected: %s", name)
    }

    // Check if singleton already exists
    if instance, exists := c.singletons[name]; exists {
        return instance, nil
    }

    // Get constructor
    constructor, exists := c.constructors[name]
    if !exists {
        return nil, fmt.Errorf("service not registered: %s", name)
    }

    // Mark as resolving
    c.resolving[name] = true
    defer delete(c.resolving, name)

    // Create instance
    instance := constructor()
    return instance, nil
}

// resolveSingleton creates and caches a singleton instance
func (c *Container) resolveSingleton(name string, constructor func() interface{}) {
    instance := constructor()
    c.singletons[name] = instance
}

// Example services
type Database struct {
    ConnectionString string
}

func NewDatabase() interface{} {
    return &Database{
        ConnectionString: "postgres://localhost:5432/mydb",
    }
}

type UserRepository struct {
    DB *Database
}

type EmailService struct {
    From string
}

func NewEmailService() interface{} {
    return &EmailService{
        From: "noreply@example.com",
    }
}

func main() {
    container := NewContainer()

    // Register services
    container.RegisterSingleton("database", NewDatabase)
    container.Register("email", NewEmailService)

    // Resolve database
    db1, err := container.Resolve("database")
    if err != nil {
        fmt.Printf("Error: %v\n", err)
        return
    }
    fmt.Printf("Database 1: %+v\n", db1)

    // Resolve again - should be same instance
    db2, err := container.Resolve("database")
    if err != nil {
        fmt.Printf("Error: %v\n", err)
        return
    }
    fmt.Printf("Database 2: %+v\n", db2)
    fmt.Printf("Same instance: %v\n", db1 == db2) // true

    // Resolve email service
    email1, err := container.Resolve("email")
    if err != nil {
        fmt.Printf("Error: %v\n", err)
        return
    }
    fmt.Printf("Email 1: %+v\n", email1)

    email2, err := container.Resolve("email")
    if err != nil {
        fmt.Printf("Error: %v\n", err)
        return
    }
    fmt.Printf("Email 2: %+v\n", email2)
    fmt.Printf("Same instance: %v\n", email1 == email2) // false

    // Try to resolve unregistered service
    _, err = container.Resolve("unknown")
    if err != nil {
        fmt.Printf("Expected error: %v\n", err)
    }
}
Explanation:
- Container stores constructor functions, not instances
- Register stores transient service constructors
- RegisterSingleton creates and caches the instance eagerly
- Resolve checks the singleton cache first, then constructs
- Circular dependency detection prevents infinite loops
Simplified Version:
type SimpleContainer struct {
    services map[string]interface{}
}

func (c *SimpleContainer) Register(name string, instance interface{}) {
    c.services[name] = instance
}

func (c *SimpleContainer) Get(name string) (interface{}, bool) {
    instance, exists := c.services[name]
    return instance, exists
}
Key Takeaways:
- DI containers decouple service creation from usage
- Singleton vs transient lifetimes affect instance reuse
- Circular dependency detection prevents infinite recursion
- Type assertions needed when resolving services
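The last point deserves a concrete line: Resolve returns interface{}, so callers recover the concrete type with a type assertion. A minimal sketch using the Database type from the solution:

```go
package main

import "fmt"

type Database struct{ ConnectionString string }

func main() {
	// What Resolve would hand back: a concrete value behind interface{}.
	var resolved interface{} = &Database{ConnectionString: "postgres://localhost:5432/mydb"}

	// The comma-ok form avoids a panic if the stored type is unexpected.
	db, ok := resolved.(*Database)
	if !ok {
		fmt.Println("service has unexpected type")
		return
	}
	fmt.Println(db.ConnectionString) // postgres://localhost:5432/mydb
}
```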
Exercise 10 - Plugin System
Problem: Create a simple plugin system that can dynamically load plugins implementing a common interface.
Requirements:
- Define a Plugin interface with Name() and Execute() methods
- Create a plugin registry that stores plugins
- Implement Register() and Execute() methods
- Support multiple plugin implementations
Function Signature:
```go
type Plugin interface {
    Name() string
    Execute(args map[string]string) (string, error)
}

type PluginRegistry struct {
    // Your implementation
}

func Register(plugin Plugin)
func Execute(name string, args map[string]string) (string, error)
```
Solution
```go
// run
package main

import (
    "errors"
    "fmt"
    "strings"
)

// Plugin is the interface that all plugins must implement
type Plugin interface {
    Name() string
    Execute(args map[string]string) (string, error)
}

// PluginRegistry manages registered plugins
type PluginRegistry struct {
    plugins map[string]Plugin
}

// NewPluginRegistry creates a new plugin registry
func NewPluginRegistry() *PluginRegistry {
    return &PluginRegistry{
        plugins: make(map[string]Plugin),
    }
}

// Register adds a plugin to the registry
func (r *PluginRegistry) Register(plugin Plugin) {
    r.plugins[plugin.Name()] = plugin
}

// Execute runs a plugin by name
func (r *PluginRegistry) Execute(name string, args map[string]string) (string, error) {
    plugin, exists := r.plugins[name]
    if !exists {
        return "", fmt.Errorf("plugin not found: %s", name)
    }
    return plugin.Execute(args)
}

// List returns all registered plugin names
func (r *PluginRegistry) List() []string {
    names := make([]string, 0, len(r.plugins))
    for name := range r.plugins {
        names = append(names, name)
    }
    return names
}

// Example plugins

// GreetPlugin greets a user
type GreetPlugin struct{}

func (p *GreetPlugin) Name() string {
    return "greet"
}

func (p *GreetPlugin) Execute(args map[string]string) (string, error) {
    name, ok := args["name"]
    if !ok {
        return "", errors.New("missing argument: name")
    }
    return fmt.Sprintf("Hello, %s!", name), nil
}

// UppercasePlugin converts text to uppercase
type UppercasePlugin struct{}

func (p *UppercasePlugin) Name() string {
    return "uppercase"
}

func (p *UppercasePlugin) Execute(args map[string]string) (string, error) {
    text, ok := args["text"]
    if !ok {
        return "", errors.New("missing argument: text")
    }
    return strings.ToUpper(text), nil
}

// ReversePlugin reverses a string
type ReversePlugin struct{}

func (p *ReversePlugin) Name() string {
    return "reverse"
}

func (p *ReversePlugin) Execute(args map[string]string) (string, error) {
    text, ok := args["text"]
    if !ok {
        return "", errors.New("missing argument: text")
    }

    runes := []rune(text)
    for i, j := 0, len(runes)-1; i < j; i, j = i+1, j-1 {
        runes[i], runes[j] = runes[j], runes[i]
    }
    return string(runes), nil
}

// CountPlugin counts words in text
type CountPlugin struct{}

func (p *CountPlugin) Name() string {
    return "count"
}

func (p *CountPlugin) Execute(args map[string]string) (string, error) {
    text, ok := args["text"]
    if !ok {
        return "", errors.New("missing argument: text")
    }

    words := strings.Fields(text)
    return fmt.Sprintf("Word count: %d", len(words)), nil
}

func main() {
    // Create registry and register plugins
    registry := NewPluginRegistry()
    registry.Register(&GreetPlugin{})
    registry.Register(&UppercasePlugin{})
    registry.Register(&ReversePlugin{})
    registry.Register(&CountPlugin{})

    // List available plugins
    fmt.Println("Available plugins:", registry.List())
    fmt.Println()

    // Execute plugins
    examples := []struct {
        plugin string
        args   map[string]string
    }{
        {"greet", map[string]string{"name": "Alice"}},
        {"uppercase", map[string]string{"text": "hello world"}},
        {"reverse", map[string]string{"text": "Go is awesome"}},
        {"count", map[string]string{"text": "The quick brown fox jumps"}},
    }

    for _, ex := range examples {
        result, err := registry.Execute(ex.plugin, ex.args)
        if err != nil {
            fmt.Printf("Error executing %s: %v\n", ex.plugin, err)
        } else {
            fmt.Printf("%s: %s\n", ex.plugin, result)
        }
    }

    // Try executing a non-existent plugin
    fmt.Println()
    _, err := registry.Execute("unknown", nil)
    if err != nil {
        fmt.Printf("Expected error: %v\n", err)
    }
}
```
Explanation:
- `Plugin` interface defines the contract for all plugins
- Registry stores plugins in a map keyed by name
- Plugins self-identify through the `Name()` method
- `Execute()` receives arguments as a map for flexibility
- Error handling covers missing plugins and missing arguments
Advanced Features:
- Plugin versioning and compatibility checks
- Plugin dependencies and loading order
- Hot-reloading plugins without restart
- Plugin isolation and sandboxing
Key Takeaways:
- Interfaces enable plugin architecture
- Registry pattern manages plugin lifecycle
- Map-based arguments provide flexibility
- Plugin pattern enables extensibility without recompilation
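A common Go idiom builds on these takeaways: plugins register themselves from `init()`, so enabling a plugin is just adding a blank import. A minimal single-file sketch (`RegisterFunc` and the registry layout are illustrative, not from the exercise):

```go
package main

import (
    "fmt"
    "strings"
)

// plugins is a package-level registry; in a real project it would live
// in its own package so plugin packages can import it.
var plugins = map[string]func(string) string{}

// RegisterFunc lets a plugin self-register under a name.
func RegisterFunc(name string, fn func(string) string) {
    plugins[name] = fn
}

// Each registration below stands in for an init() in a separate plugin
// package, pulled in with a blank import such as:
//   _ "example.com/app/plugins/upper"
func init() {
    RegisterFunc("upper", strings.ToUpper)
    RegisterFunc("trim", strings.TrimSpace)
}

func main() {
    for _, name := range []string{"upper", "trim"} {
        fmt.Printf("%s(%q) = %q\n", name, "  go  ", plugins[name]("  go  "))
    }
}
```

With this layout, removing a plugin from the build is just deleting its import line; no dispatch code changes.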
Exercise 11 - Memory Pool
Problem: Implement a memory pool using sync.Pool to reuse byte buffers and reduce allocations.
Requirements:
- Create `BufferPool` using `sync.Pool`
- Implement `Get()` and `Put()` methods
- Demonstrate allocation reduction in a benchmark
- Compare against non-pooled allocation
Function Signature:
```go
type BufferPool struct {
    pool *sync.Pool
}

func NewBufferPool() *BufferPool
func Get() *bytes.Buffer
func Put(buf *bytes.Buffer)
```
Solution
```go
// run
package main

import (
    "bytes"
    "fmt"
    "sync"
    "testing"
)

// BufferPool manages a pool of reusable byte buffers
type BufferPool struct {
    pool *sync.Pool
}

// NewBufferPool creates a new buffer pool
func NewBufferPool() *BufferPool {
    return &BufferPool{
        pool: &sync.Pool{
            New: func() interface{} {
                return new(bytes.Buffer)
            },
        },
    }
}

// Get retrieves a buffer from the pool
func (p *BufferPool) Get() *bytes.Buffer {
    return p.pool.Get().(*bytes.Buffer)
}

// Put returns a buffer to the pool after resetting it
func (p *BufferPool) Put(buf *bytes.Buffer) {
    buf.Reset() // Clear the buffer before returning it to the pool
    p.pool.Put(buf)
}

// ProcessWithPool processes data using a pooled buffer
func ProcessWithPool(pool *BufferPool, data string) string {
    buf := pool.Get()
    defer pool.Put(buf)

    buf.WriteString("Processed: ")
    buf.WriteString(data)
    return buf.String()
}

// ProcessWithoutPool processes data with a new buffer each time
func ProcessWithoutPool(data string) string {
    var buf bytes.Buffer
    buf.WriteString("Processed: ")
    buf.WriteString(data)
    return buf.String()
}

// Benchmarks
func BenchmarkWithPool(b *testing.B) {
    pool := NewBufferPool()
    data := "test data"

    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        _ = ProcessWithPool(pool, data)
    }
}

func BenchmarkWithoutPool(b *testing.B) {
    data := "test data"

    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        _ = ProcessWithoutPool(data)
    }
}

func main() {
    pool := NewBufferPool()

    // Demonstrate pool usage
    data := []string{"hello", "world", "go", "pooling"}

    fmt.Println("Processing with pool:")
    for _, d := range data {
        result := ProcessWithPool(pool, d)
        fmt.Println(result)
    }

    // Concurrent usage
    var wg sync.WaitGroup
    workers := 10

    fmt.Println("\nConcurrent processing:")
    for i := 0; i < workers; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            for j := 0; j < 5; j++ {
                data := fmt.Sprintf("worker-%d-item-%d", id, j)
                result := ProcessWithPool(pool, data)
                fmt.Println(result)
            }
        }(i)
    }
    wg.Wait()

    fmt.Println("\nRun benchmarks with:")
    fmt.Println("  go test -bench=. -benchmem")
}
```
Expected Benchmark Results:
BenchmarkWithPool-8 10000000 150 ns/op 16 B/op 1 allocs/op
BenchmarkWithoutPool-8 5000000 300 ns/op 64 B/op 2 allocs/op
Explanation:
- `sync.Pool` maintains a cache of reusable objects
- The `New` function creates objects when the pool is empty
- `Get()` retrieves from the pool or creates a new object if needed
- `Put()` returns objects to the pool for reuse
- `Reset()` clears buffer state before returning it to the pool
- Reduces GC pressure by reusing allocations
Performance Benefits:
- Fewer allocations reduce GC overhead
- Object reuse improves memory locality
- Particularly effective for temporary buffers
- Pool automatically clears unused objects during GC
Key Takeaways:
- `sync.Pool` reduces allocation overhead
- Always reset objects before returning them to the pool
- Best for temporary objects created frequently
- Pool is safe for concurrent access
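You can check the allocation claim without a full benchmark run using `testing.AllocsPerRun`. This is a sketch (exact averages vary by Go version and buffer sizes):

```go
package main

import (
    "bytes"
    "fmt"
    "sync"
    "testing"
)

var bufPool = sync.Pool{New: func() interface{} { return new(bytes.Buffer) }}

// withPool borrows, uses, resets, and returns a pooled buffer.
func withPool() {
    buf := bufPool.Get().(*bytes.Buffer)
    buf.WriteString("hello, pool")
    buf.Reset()
    bufPool.Put(buf)
}

// withoutPool allocates a fresh buffer (and its backing array) every call.
func withoutPool() {
    var buf bytes.Buffer
    buf.WriteString("hello, pool")
    _ = buf.String()
}

func main() {
    // AllocsPerRun reports the average allocations per invocation
    fmt.Println("allocs/op with pool:   ", testing.AllocsPerRun(1000, withPool))
    fmt.Println("allocs/op without pool:", testing.AllocsPerRun(1000, withoutPool))
}
```

The pooled path typically settles at zero allocations per call once the buffer has been warmed up, while the unpooled path pays for a new backing array and string copy every time.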
Exercise 12 - Benchmark Comparison
Problem: Write benchmarks comparing different slice allocation strategies.
Requirements:
- Compare: no preallocation, `make` with length, `make` with capacity
- Benchmark appending 10,000 integers to a slice
- Measure allocations per operation
- Identify most efficient approach
Function Signature:
```go
func NoPrealloc() []int
func PreallocLength() []int
func PreallocCapacity() []int
```
Solution
```go
// run
package main

import (
    "fmt"
    "testing"
)

const numItems = 10000

// NoPrealloc appends to a nil slice
func NoPrealloc() []int {
    var slice []int
    for i := 0; i < numItems; i++ {
        slice = append(slice, i)
    }
    return slice
}

// PreallocLength creates a slice with length
func PreallocLength() []int {
    slice := make([]int, numItems)
    for i := 0; i < numItems; i++ {
        slice[i] = i
    }
    return slice
}

// PreallocCapacity creates a slice with capacity
func PreallocCapacity() []int {
    slice := make([]int, 0, numItems)
    for i := 0; i < numItems; i++ {
        slice = append(slice, i)
    }
    return slice
}

// Benchmarks
func BenchmarkNoPrealloc(b *testing.B) {
    for i := 0; i < b.N; i++ {
        _ = NoPrealloc()
    }
}

func BenchmarkPreallocLength(b *testing.B) {
    for i := 0; i < b.N; i++ {
        _ = PreallocLength()
    }
}

func BenchmarkPreallocCapacity(b *testing.B) {
    for i := 0; i < b.N; i++ {
        _ = PreallocCapacity()
    }
}

func main() {
    fmt.Println("Slice Allocation Strategies Comparison")
    fmt.Println()

    // Demonstrate each approach
    slice1 := NoPrealloc()
    fmt.Printf("NoPrealloc: len=%d, cap=%d\n", len(slice1), cap(slice1))

    slice2 := PreallocLength()
    fmt.Printf("PreallocLength: len=%d, cap=%d\n", len(slice2), cap(slice2))

    slice3 := PreallocCapacity()
    fmt.Printf("PreallocCapacity: len=%d, cap=%d\n", len(slice3), cap(slice3))

    fmt.Println("\nRun benchmarks with:")
    fmt.Println("  go test -bench=. -benchmem")
    fmt.Println("\nExpected results:")
    fmt.Println("  NoPrealloc:       ~15 reallocations")
    fmt.Println("  PreallocLength:   0 reallocations")
    fmt.Println("  PreallocCapacity: 0 reallocations")
}
```
Expected Benchmark Results:
BenchmarkNoPrealloc-8 50000 35000 ns/op 386432 B/op 15 allocs/op
BenchmarkPreallocLength-8 100000 15000 ns/op 80896 B/op 1 allocs/op
BenchmarkPreallocCapacity-8 100000 16000 ns/op 80896 B/op 1 allocs/op
Analysis:
| Strategy | Allocations | Memory | Speed | Use Case |
|---|---|---|---|---|
| No Prealloc | 15 | 386 KB | Slowest | Unknown size |
| Preallocate Length | 1 | 81 KB | Fastest | Known size, indexing |
| Preallocate Capacity | 1 | 81 KB | Fast | Known size, appending |
Explanation:
- No preallocation causes multiple reallocations
- Preallocating length allows direct indexing
- Preallocating capacity uses `append` but avoids reallocations
- Both preallocation strategies use ~5x less memory
- Preallocation improves performance by 50-60%
Growth Pattern:
Capacity progression: 0 → 1 → 2 → 4 → 8 → 16 → 32 → ... → 16384
Total allocations: ~15 for 10,000 items
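The exact capacities depend on the Go version's growth policy, but you can observe the pattern directly with a short program:

```go
package main

import "fmt"

// capGrowth records each distinct capacity seen while appending n ints
// to an initially nil slice; each new capacity is one reallocation.
func capGrowth(n int) []int {
    var s []int
    var caps []int
    for i := 0; i < n; i++ {
        s = append(s, i)
        if len(caps) == 0 || cap(s) != caps[len(caps)-1] {
            caps = append(caps, cap(s))
        }
    }
    return caps
}

func main() {
    caps := capGrowth(10000)
    fmt.Println("growth steps:", len(caps))
    fmt.Println("capacities:  ", caps)
}
```

Small slices roughly double; larger ones grow by a smaller factor in recent Go versions, so the step count hovers in the mid-teens to mid-twenties for 10,000 elements.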
Key Takeaways:
- Always preallocate when size is known
- Use `make([]T, length)` when indexing
- Use `make([]T, 0, capacity)` when appending
- Preallocation dramatically reduces allocations and GC pressure
Exercise 13 - Fuzzing Test
Problem: Write a fuzz test for a JSON parser function to discover edge cases.
Requirements:
- Create a `ParseConfig(data string)` function
- Implement a fuzz test using `testing.F`
- Handle malformed JSON gracefully
- Seed fuzzer with valid and invalid inputs
Function Signature:
```go
func ParseConfig(data string) (map[string]string, error)
func FuzzParseConfig(f *testing.F)
```
Solution
File: config.go
```go
// run
package main

import (
    "encoding/json"
    "fmt"
)

// ParseConfig parses a JSON string into a flat string map
func ParseConfig(data string) (map[string]string, error) {
    var config map[string]string

    if err := json.Unmarshal([]byte(data), &config); err != nil {
        return nil, fmt.Errorf("invalid JSON: %w", err)
    }

    // JSON "null" unmarshals to a nil map with no error;
    // normalize to an empty map so callers always get a usable value
    if config == nil {
        config = map[string]string{}
    }

    return config, nil
}

func main() {
    // Valid JSON
    config1, err := ParseConfig(`{"name": "app", "version": "1.0"}`)
    if err != nil {
        fmt.Printf("Error: %v\n", err)
    } else {
        fmt.Printf("Config: %v\n", config1)
    }

    // Invalid JSON
    config2, err := ParseConfig(`{invalid}`)
    if err != nil {
        fmt.Printf("Expected error: %v\n", err)
    } else {
        fmt.Printf("Config: %v\n", config2)
    }
}
```
File: config_test.go
```go
package main

import (
    "testing"
)

// FuzzParseConfig fuzzes the ParseConfig function
func FuzzParseConfig(f *testing.F) {
    // Seed corpus with valid and invalid inputs
    f.Add(`{"name": "test"}`)
    f.Add(`{"key": "value", "foo": "bar"}`)
    f.Add(`{}`)
    f.Add(`{"": ""}`)
    f.Add(`{invalid}`)
    f.Add(``)
    f.Add(`null`)
    f.Add(`{"a":`)
    f.Add(`{"nested": {"key": "value"}}`) // Nested, but we expect a flat map

    f.Fuzz(func(t *testing.T, data string) {
        config, err := ParseConfig(data)

        // We don't require success, but if parsing succeeds
        // the config must be a usable (non-nil) map
        if err == nil && config == nil {
            t.Error("config is nil but no error returned")
        }

        // If an error occurred, config should be nil
        if err != nil && config != nil {
            t.Errorf("config is not nil when error occurred: %v", config)
        }

        // Reaching this point without panicking is the core property
    })
}

// TestParseConfig validates basic functionality
func TestParseConfig(t *testing.T) {
    tests := []struct {
        name    string
        input   string
        want    map[string]string
        wantErr bool
    }{
        {
            name:    "valid config",
            input:   `{"name": "app", "version": "1.0"}`,
            want:    map[string]string{"name": "app", "version": "1.0"},
            wantErr: false,
        },
        {
            name:    "empty config",
            input:   `{}`,
            want:    map[string]string{},
            wantErr: false,
        },
        {
            name:    "invalid JSON",
            input:   `{invalid}`,
            want:    nil,
            wantErr: true,
        },
        {
            name:    "empty string",
            input:   ``,
            want:    nil,
            wantErr: true,
        },
        {
            name:    "unclosed brace",
            input:   `{"key": "value"`,
            want:    nil,
            wantErr: true,
        },
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            got, err := ParseConfig(tt.input)

            if (err != nil) != tt.wantErr {
                t.Errorf("ParseConfig() error = %v, wantErr %v", err, tt.wantErr)
                return
            }

            if !tt.wantErr {
                if len(got) != len(tt.want) {
                    t.Errorf("ParseConfig() = %v, want %v", got, tt.want)
                    return
                }

                for k, v := range tt.want {
                    if got[k] != v {
                        t.Errorf("ParseConfig()[%s] = %v, want %v", k, got[k], v)
                    }
                }
            }
        })
    }
}
```
Running Fuzzing:
```shell
# Run the fuzz test for 30 seconds
go test -fuzz=FuzzParseConfig -fuzztime=30s

# Run until a failure is found (or interrupted with Ctrl-C)
go test -fuzz=FuzzParseConfig

# Replay the seed corpus and any saved failing inputs without fuzzing
go test -run=FuzzParseConfig
```
Expected Output:
fuzz: elapsed: 0s, gathering baseline coverage: 0/4 completed
fuzz: elapsed: 0s, gathering baseline coverage: 4/4 completed
fuzz: elapsed: 3s, execs: 125000, new interesting: 12
fuzz: elapsed: 6s, execs: 250000, new interesting: 15
...
Explanation:
- Fuzzing automatically generates test inputs
- Seed corpus provides initial interesting inputs
- Fuzzer mutates inputs to explore edge cases
- Function should handle all inputs without panicking
- Crashes and failures are automatically saved
What Fuzzing Discovers:
- Malformed JSON
- Unicode edge cases
- Very long strings
- Nested structures
- Special characters and escape sequences
Key Takeaways:
- Fuzzing discovers edge cases humans miss
- Seed corpus guides fuzzer to interesting inputs
- Functions should gracefully handle invalid input
- Fuzzing complements traditional unit tests
Comprehensive Section Takeaways
Generics
- Enable type-safe data structures without code duplication
- Type constraints control what operations are allowed
- `any` constraint provides maximum flexibility
- Generics eliminate the need for `interface{}` and type assertions
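A compact illustration of the constraint bullet (`Number` is a hand-rolled constraint; the `golang.org/x/exp/constraints` package offers standard ones such as `constraints.Ordered`):

```go
package main

import "fmt"

// Number is a type-set constraint: the ~ forms also admit named types
// whose underlying type is int or float64.
type Number interface {
    ~int | ~float64
}

// Sum works for any slice whose element type satisfies Number.
func Sum[T Number](xs []T) T {
    var total T
    for _, x := range xs {
        total += x // allowed because every type in the set supports +
    }
    return total
}

func main() {
    fmt.Println(Sum([]int{1, 2, 3}))      // 6
    fmt.Println(Sum([]float64{1.5, 2.5})) // 4
}
```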
Reflection
- Runtime type inspection and manipulation
- Essential for validation, serialization, and frameworks
- Performance overhead compared to static typing
- Use judiciously; prefer static types when possible
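As a small illustration of runtime inspection, struct tags, the basis of most validation and serialization frameworks, can be read through the `reflect` package (the `User` type and `validate` tag are illustrative):

```go
package main

import (
    "fmt"
    "reflect"
)

type User struct {
    Name string `validate:"required"`
    Age  int
}

// fieldTags collects the `validate` tag for every field of a struct value.
func fieldTags(v interface{}) map[string]string {
    t := reflect.TypeOf(v)
    tags := make(map[string]string, t.NumField())
    for i := 0; i < t.NumField(); i++ {
        f := t.Field(i)
        tags[f.Name] = f.Tag.Get("validate")
    }
    return tags
}

func main() {
    fmt.Println(fieldTags(User{})) // map[Age: Name:required]
}
```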
Design Patterns
- Factory pattern centralizes object creation
- Functional options provide clean configuration APIs
- Dependency injection decouples components
- Plugin systems enable extensibility
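The functional-options bullet is worth a concrete sketch (names like `WithHost` are illustrative):

```go
package main

import "fmt"

type Server struct {
    host string
    port int
}

// Option mutates a Server during construction.
type Option func(*Server)

func WithHost(h string) Option { return func(s *Server) { s.host = h } }
func WithPort(p int) Option    { return func(s *Server) { s.port = p } }

// NewServer applies options over sensible defaults, so callers only
// specify what they want to change.
func NewServer(opts ...Option) *Server {
    s := &Server{host: "localhost", port: 8080}
    for _, opt := range opts {
        opt(s)
    }
    return s
}

func main() {
    s := NewServer(WithPort(9090))
    fmt.Printf("%s:%d\n", s.host, s.port) // localhost:9090
}
```

Adding a new option later is backward compatible: existing call sites keep compiling unchanged.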
Performance Optimization
- Always measure before optimizing
- Use `strings.Builder` for string concatenation
- Preallocate slices when size is known
- `sync.Pool` reduces allocation overhead
- Atomic operations are faster than mutexes for simple cases
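The strings.Builder point in a few lines (`buildIDs` is an illustrative helper):

```go
package main

import (
    "fmt"
    "strings"
)

// buildIDs concatenates n small tokens with a single growing buffer
// instead of producing a new string per += step.
func buildIDs(n int) string {
    var b strings.Builder
    b.Grow(n * 4) // optional: preallocate when the rough final size is known
    for i := 0; i < n; i++ {
        fmt.Fprintf(&b, "x%d ", i)
    }
    return b.String()
}

func main() {
    fmt.Println(buildIDs(3)) // x0 x1 x2
}
```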
Advanced Concurrency
- `RWMutex` optimizes for concurrent reads
- Atomic operations provide lock-free primitives
- Memory pools reduce GC pressure
- Always protect shared state
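The atomic-counter idea behind these bullets, as a minimal sketch:

```go
package main

import (
    "fmt"
    "sync"
    "sync/atomic"
)

// countConcurrently increments a shared counter from several goroutines
// using atomic operations instead of a mutex.
func countConcurrently(workers, perWorker int) int64 {
    var counter int64
    var wg sync.WaitGroup
    for i := 0; i < workers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for j := 0; j < perWorker; j++ {
                atomic.AddInt64(&counter, 1) // lock-free increment
            }
        }()
    }
    wg.Wait()
    return atomic.LoadInt64(&counter)
}

func main() {
    fmt.Println(countConcurrently(8, 1000)) // always 8000, no data race
}
```

Replacing `atomic.AddInt64` with a plain `counter++` would be a data race; `go test -race` catches exactly this class of bug.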
Build Tags & Testing
- Build tags enable platform-specific code
- Fuzzing discovers edge cases automatically
- Benchmarks measure performance objectively
- Race detector catches concurrency bugs
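As a syntax reminder for the build-tags bullet, a platform-specific file is gated by a single build-constraint comment above the package clause (Go 1.17+ form shown; the file contents and package name are illustrative):

```go
//go:build linux && !arm64
// The pre-Go 1.17 equivalent line would be: +build linux,!arm64

package platform

// Everything in this file compiles only for Linux targets that are
// not arm64; a sibling file can carry the inverse constraint.
```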
Next Steps
Immediate Actions:
- Complete all 13 exercises above
- Run benchmarks to see performance differences
- Experiment with fuzzing to discover edge cases
- Review Section 3 tutorials for deeper understanding
Practice Projects:
Apply these concepts in the section project:
- Generic data structures
- Reflection-based validation framework
- Plugin-based application architecture
- Performance-optimized service
Production Readiness:
- Profile production code regularly
- Use build tags for feature flags
- Implement fuzzing in CI/CD
- Monitor allocation patterns
Further Learning:
- Advanced generics patterns
- Runtime code generation
- Custom build tools
- Compiler optimizations
Summary
You've now practiced the core advanced Go techniques:
- Generics - Type-safe, reusable code
- Reflection - Runtime type inspection
- Design Patterns - Idiomatic Go architectures
- Performance - Benchmarking and optimization
- Concurrency - Advanced synchronization primitives
- Build System - Platform-specific code
- Testing - Fuzzing and comprehensive validation
These skills prepare you for production Go development and complex system design.
Ready to move on? Proceed to Section 4 to learn cloud-native development, web frameworks, and testing strategies!