Welcome to the comprehensive practice exercises for Section 1: The Go Language! These exercises synthesize everything you've learned across the 14 tutorials in this section.
About These Exercises
These exercises are designed to:
- Reinforce Core Concepts: Apply what you learned in tutorials 01-14
- Build Integration Skills: Combine multiple concepts in single solutions
- Develop Idiomatic Style: Practice writing Go the way experienced developers do
- Prepare for Advanced Topics: Build a solid foundation for sections 2-4
Section 1 Tutorial Coverage
These exercises build on the following tutorials:
- Getting Started with Go
- Variables and Types
- Functions
- Structs and Methods
- Concurrency
- Error Handling
- Control Flow
- Slices and Arrays
- Maps
- Pointers
- Interfaces
- Go Packages
- Go Modules
- Code Quality and Best Practices
Let's begin!
Exercise 1 - Variable Manipulation and Type System
Build a type conversion utility that safely handles different numeric types and demonstrates understanding of Go's type system.
Function Signature
package solution

type ConversionResult struct {
    Success bool
    Value   interface{}
    Error   string
}

func SafeConvert(value interface{}, targetType string) ConversionResult
Requirements
- Support conversion between int, int64, float64, and string types
- Handle overflow/underflow cases safely
- Return detailed error messages for invalid conversions
- Demonstrate proper use of zero values and type assertions
Solution
package solution

import (
    "fmt"
    "math"
    "strconv"
)

type ConversionResult struct {
    Success bool
    Value   interface{}
    Error   string
}

func SafeConvert(value interface{}, targetType string) ConversionResult {
    switch v := value.(type) {
    case int:
        return convertFromInt(v, targetType)
    case int64:
        return convertFromInt64(v, targetType)
    case float64:
        return convertFromFloat64(v, targetType)
    case string:
        return convertFromString(v, targetType)
    default:
        return ConversionResult{
            Success: false,
            Error:   fmt.Sprintf("unsupported source type: %T", value),
        }
    }
}

func convertFromInt(v int, targetType string) ConversionResult {
    switch targetType {
    case "int":
        return ConversionResult{Success: true, Value: v}
    case "int64":
        return ConversionResult{Success: true, Value: int64(v)}
    case "float64":
        return ConversionResult{Success: true, Value: float64(v)}
    case "string":
        return ConversionResult{Success: true, Value: strconv.Itoa(v)}
    default:
        return ConversionResult{Success: false, Error: "unsupported target type"}
    }
}

func convertFromInt64(v int64, targetType string) ConversionResult {
    switch targetType {
    case "int":
        // Check against int32 bounds so the conversion is safe even on
        // platforms where int is only 32 bits wide
        if v > math.MaxInt32 || v < math.MinInt32 {
            return ConversionResult{
                Success: false,
                Error:   "int64 value overflows int",
            }
        }
        return ConversionResult{Success: true, Value: int(v)}
    case "int64":
        return ConversionResult{Success: true, Value: v}
    case "float64":
        return ConversionResult{Success: true, Value: float64(v)}
    case "string":
        return ConversionResult{Success: true, Value: strconv.FormatInt(v, 10)}
    default:
        return ConversionResult{Success: false, Error: "unsupported target type"}
    }
}

func convertFromFloat64(v float64, targetType string) ConversionResult {
    switch targetType {
    case "int":
        if v > math.MaxInt32 || v < math.MinInt32 {
            return ConversionResult{
                Success: false,
                Error:   "float64 value overflows int",
            }
        }
        // Note: int(v) truncates the fractional part toward zero
        return ConversionResult{Success: true, Value: int(v)}
    case "int64":
        if v > math.MaxInt64 || v < math.MinInt64 {
            return ConversionResult{
                Success: false,
                Error:   "float64 value overflows int64",
            }
        }
        return ConversionResult{Success: true, Value: int64(v)}
    case "float64":
        return ConversionResult{Success: true, Value: v}
    case "string":
        return ConversionResult{Success: true, Value: strconv.FormatFloat(v, 'f', -1, 64)}
    default:
        return ConversionResult{Success: false, Error: "unsupported target type"}
    }
}

func convertFromString(v string, targetType string) ConversionResult {
    switch targetType {
    case "int":
        val, err := strconv.Atoi(v)
        if err != nil {
            return ConversionResult{Success: false, Error: err.Error()}
        }
        return ConversionResult{Success: true, Value: val}
    case "int64":
        val, err := strconv.ParseInt(v, 10, 64)
        if err != nil {
            return ConversionResult{Success: false, Error: err.Error()}
        }
        return ConversionResult{Success: true, Value: val}
    case "float64":
        val, err := strconv.ParseFloat(v, 64)
        if err != nil {
            return ConversionResult{Success: false, Error: err.Error()}
        }
        return ConversionResult{Success: true, Value: val}
    case "string":
        return ConversionResult{Success: true, Value: v}
    default:
        return ConversionResult{Success: false, Error: "unsupported target type"}
    }
}
Explanation
Key Concepts:
- Type Assertions: Using value.(type) in switch statements for type-safe conversions
- Overflow Detection: Checking bounds before converting to smaller types
- Zero Values: Implicit zero value initialization in ConversionResult
- Error Handling: Returning detailed error messages instead of panicking
Why This Works:
The solution uses Go's type switch to handle different source types, validates ranges to prevent overflow, and returns a structured result that communicates both success and failure states clearly.
Key Takeaways
- Go requires explicit type conversions—no implicit casting
- Always validate numeric conversions to prevent overflow
- Type assertions with interface{} enable flexible APIs
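Usage Example
Here is a minimal driver to try SafeConvert end to end; the example.com/section1/solution import path and the sample inputs are illustrative assumptions, not part of the exercise:

package main

import (
    "fmt"

    "example.com/section1/solution" // hypothetical import path
)

func main() {
    // int64 -> int succeeds because the value fits within int32 bounds
    res := solution.SafeConvert(int64(42), "int")
    fmt.Println(res.Success, res.Value) // true 42

    // string -> float64 fails with a detailed parse error
    res = solution.SafeConvert("not-a-number", "float64")
    fmt.Println(res.Success, res.Error) // false strconv.ParseFloat: ...
}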
Exercise 2 - Function Composition with Higher-Order Functions
Create a pipeline builder that demonstrates closures, higher-order functions, and variadic parameters.
Function Signature
package solution

type Transform func(int) int

func Pipeline(transforms ...Transform) Transform
func Map(slice []int, fn Transform) []int
func Filter(slice []int, predicate func(int) bool) []int
Requirements
- Implement Pipeline to compose multiple transform functions
- Create Map to apply transformations to slices
- Build Filter to select elements matching a predicate
- Use closures to capture state where appropriate
Solution
package solution

// Transform is a function that transforms an integer
type Transform func(int) int

// Pipeline composes multiple transform functions into a single function
func Pipeline(transforms ...Transform) Transform {
    return func(value int) int {
        result := value
        for _, transform := range transforms {
            result = transform(result)
        }
        return result
    }
}

// Map applies a transform function to each element in a slice
func Map(slice []int, fn Transform) []int {
    result := make([]int, len(slice))
    for i, v := range slice {
        result[i] = fn(v)
    }
    return result
}

// Filter returns a new slice containing only elements matching the predicate
func Filter(slice []int, predicate func(int) bool) []int {
    result := make([]int, 0, len(slice))
    for _, v := range slice {
        if predicate(v) {
            result = append(result, v)
        }
    }
    return result
}

// Example usage functions demonstrating closures

// Multiplier returns a function that multiplies its input by factor
func Multiplier(factor int) Transform {
    return func(value int) int {
        return value * factor
    }
}

// Adder returns a function that adds offset to its input
func Adder(offset int) Transform {
    return func(value int) int {
        return value + offset
    }
}

// GreaterThan returns a predicate that checks if value > threshold
func GreaterThan(threshold int) func(int) bool {
    return func(value int) bool {
        return value > threshold
    }
}
Explanation
Key Concepts:
- Closures: Functions like Multiplier and Adder capture variables from their outer scope
- Higher-Order Functions: Pipeline accepts functions as parameters and returns a function
- Variadic Parameters: transforms ...Transform accepts any number of transform functions
- Function Composition: Pipeline chains functions by applying them sequentially
Why This Works:
By returning functions from functions and accepting functions as parameters, we create reusable, composable building blocks that can be combined in countless ways without modifying the original implementations.
Key Takeaways
- Closures capture variables from their enclosing scope
- Higher-order functions enable powerful abstractions
- Function composition creates reusable transformation pipelines
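Usage Example
A quick sketch of how the pieces compose (the import path is a hypothetical placeholder):

package main

import (
    "fmt"

    "example.com/section1/solution" // hypothetical import path
)

func main() {
    // Compose: double each value, then add one
    doubleThenInc := solution.Pipeline(solution.Multiplier(2), solution.Adder(1))

    nums := []int{1, 2, 3, 4}
    mapped := solution.Map(nums, doubleThenInc)
    fmt.Println(mapped) // [3 5 7 9]

    // Keep only results greater than 5
    fmt.Println(solution.Filter(mapped, solution.GreaterThan(5))) // [7 9]
}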
Exercise 3 - Struct Constructor with Builder Pattern
Implement a configuration builder demonstrating struct initialization, validation, and the builder pattern.
Function Signature
package solution

type ServerConfig struct {
    Host    string
    Port    int
    Timeout int
    TLS     bool
}

type ConfigBuilder struct {
    config ServerConfig
}

func NewConfigBuilder() *ConfigBuilder
func (b *ConfigBuilder) WithHost(host string) *ConfigBuilder
func (b *ConfigBuilder) WithPort(port int) *ConfigBuilder
func (b *ConfigBuilder) WithTimeout(timeout int) *ConfigBuilder
func (b *ConfigBuilder) WithTLS(enabled bool) *ConfigBuilder
func (b *ConfigBuilder) Build() (*ServerConfig, error)
Requirements
- Provide sensible defaults for all configuration fields
- Validate configuration on Build()
- Support method chaining for fluent API
- Return validation errors without panicking
Solution
package solution

import (
    "errors"
    "fmt"
)

type ServerConfig struct {
    Host    string
    Port    int
    Timeout int // seconds
    TLS     bool
}

type ConfigBuilder struct {
    config ServerConfig
    errors []error
}

// NewConfigBuilder creates a new builder with sensible defaults
func NewConfigBuilder() *ConfigBuilder {
    return &ConfigBuilder{
        config: ServerConfig{
            Host:    "localhost",
            Port:    8080,
            Timeout: 30,
            TLS:     false,
        },
        errors: make([]error, 0),
    }
}

// WithHost sets the host address
func (b *ConfigBuilder) WithHost(host string) *ConfigBuilder {
    if host == "" {
        b.errors = append(b.errors, errors.New("host cannot be empty"))
    } else {
        b.config.Host = host
    }
    return b
}

// WithPort sets the port number
func (b *ConfigBuilder) WithPort(port int) *ConfigBuilder {
    if port < 1 || port > 65535 {
        b.errors = append(b.errors, fmt.Errorf("port %d out of valid range", port))
    } else {
        b.config.Port = port
    }
    return b
}

// WithTimeout sets the timeout in seconds
func (b *ConfigBuilder) WithTimeout(timeout int) *ConfigBuilder {
    if timeout < 0 {
        b.errors = append(b.errors, errors.New("timeout cannot be negative"))
    } else {
        b.config.Timeout = timeout
    }
    return b
}

// WithTLS enables or disables TLS
func (b *ConfigBuilder) WithTLS(enabled bool) *ConfigBuilder {
    b.config.TLS = enabled
    return b
}

// Build constructs the final configuration or returns validation errors
func (b *ConfigBuilder) Build() (*ServerConfig, error) {
    if len(b.errors) > 0 {
        return nil, fmt.Errorf("configuration validation failed: %v", b.errors)
    }

    // Create a copy to prevent external modification
    result := b.config
    return &result, nil
}

// Validate performs additional cross-field validation
func (c *ServerConfig) Validate() error {
    if c.TLS && c.Port == 80 {
        return errors.New("TLS enabled but using non-secure port 80")
    }
    return nil
}
Explanation
Key Concepts:
- Builder Pattern: Separates construction from representation, enabling step-by-step configuration
- Method Chaining: Returning *ConfigBuilder from each setter enables fluent syntax
- Validation: Collecting errors during building and checking on Build()
- Defensive Copying: Build() returns a copy to prevent external mutation
Why This Works:
The builder pattern provides a clean API for constructing complex objects while maintaining invariants through validation. Method chaining makes the code readable and self-documenting.
Key Takeaways
- Builder pattern separates object construction from representation
- Method chaining creates fluent, readable APIs
- Validate inputs early and return errors rather than panicking
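Usage Example
The fluent API reads naturally at the call site. A small sketch (import path hypothetical, values illustrative):

package main

import (
    "fmt"
    "log"

    "example.com/section1/solution" // hypothetical import path
)

func main() {
    cfg, err := solution.NewConfigBuilder().
        WithHost("api.example.com").
        WithPort(8443).
        WithTLS(true).
        Build()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%s:%d (TLS: %v, timeout: %ds)\n", cfg.Host, cfg.Port, cfg.TLS, cfg.Timeout)
}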
Exercise 4 - Interface Implementation and Polymorphism
Create a notification system demonstrating interfaces, type assertions, and polymorphic behavior.
Function Signature
package solution

type Notifier interface {
    Send(message string) error
}

type EmailNotifier struct {
    Address string
}

type SMSNotifier struct {
    PhoneNumber string
}

type SlackNotifier struct {
    WebhookURL string
}

type NotificationBatch struct {
    notifiers []Notifier
}

func NewNotificationBatch() *NotificationBatch
func (nb *NotificationBatch) Add(notifier Notifier)
func (nb *NotificationBatch) SendToAll(message string) []error
Requirements
- Implement Notifier interface for Email, SMS, and Slack
- Support adding multiple notifiers to a batch
- SendToAll should attempt all notifications even if some fail
- Return all errors encountered
Solution
package solution

import (
    "fmt"
    "regexp"
    "strings"
)

// Notifier interface defines the contract for sending notifications
type Notifier interface {
    Send(message string) error
}

// EmailNotifier sends notifications via email
type EmailNotifier struct {
    Address string
}

func (e EmailNotifier) Send(message string) error {
    if !isValidEmail(e.Address) {
        return fmt.Errorf("invalid email address: %s", e.Address)
    }
    // Simulate sending email
    fmt.Printf("[EMAIL] Sending to %s: %s\n", e.Address, message)
    return nil
}

// SMSNotifier sends notifications via SMS
type SMSNotifier struct {
    PhoneNumber string
}

func (s SMSNotifier) Send(message string) error {
    if !isValidPhone(s.PhoneNumber) {
        return fmt.Errorf("invalid phone number: %s", s.PhoneNumber)
    }
    // Simulate sending SMS
    fmt.Printf("[SMS] Sending to %s: %s\n", s.PhoneNumber, message)
    return nil
}

// SlackNotifier sends notifications via Slack webhook
type SlackNotifier struct {
    WebhookURL string
}

func (sl SlackNotifier) Send(message string) error {
    if !strings.HasPrefix(sl.WebhookURL, "https://") {
        return fmt.Errorf("invalid webhook URL: %s", sl.WebhookURL)
    }
    // Simulate sending to Slack
    fmt.Printf("[SLACK] Posting to %s: %s\n", sl.WebhookURL, message)
    return nil
}

// NotificationBatch manages multiple notifiers
type NotificationBatch struct {
    notifiers []Notifier
}

// NewNotificationBatch creates a new notification batch
func NewNotificationBatch() *NotificationBatch {
    return &NotificationBatch{
        notifiers: make([]Notifier, 0),
    }
}

// Add adds a notifier to the batch
func (nb *NotificationBatch) Add(notifier Notifier) {
    nb.notifiers = append(nb.notifiers, notifier)
}

// SendToAll sends the message to all notifiers, collecting errors
func (nb *NotificationBatch) SendToAll(message string) []error {
    errors := make([]error, 0)

    for i, notifier := range nb.notifiers {
        if err := notifier.Send(message); err != nil {
            errors = append(errors, fmt.Errorf("notifier %d failed: %w", i, err))
        }
    }

    return errors
}

// Helper validation functions
func isValidEmail(email string) bool {
    pattern := `^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$`
    matched, _ := regexp.MatchString(pattern, email)
    return matched
}

func isValidPhone(phone string) bool {
    // Simple validation: at least 10 digits
    digits := regexp.MustCompile(`\d`).FindAllString(phone, -1)
    return len(digits) >= 10
}
Explanation
Key Concepts:
- Interface Implementation: All three notifiers implement the Notifier interface implicitly
- Polymorphism: NotificationBatch works with any type implementing Notifier
- Error Collection: SendToAll continues on error, collecting all failures
- Encapsulation: Each notifier encapsulates its own validation logic
Why This Works:
Interfaces in Go are satisfied implicitly—any type with a Send(string) error method is a Notifier. This enables polymorphism without inheritance and makes the code extensible.
Key Takeaways
- Interfaces are implemented implicitly in Go
- Polymorphism enables treating different types uniformly
- Collecting errors allows batch operations to complete fully
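Usage Example
A hypothetical driver mixing valid and invalid notifiers to show that failures are collected rather than fatal (import path and addresses are illustrative):

package main

import (
    "fmt"

    "example.com/section1/solution" // hypothetical import path
)

func main() {
    batch := solution.NewNotificationBatch()
    batch.Add(solution.EmailNotifier{Address: "dev@example.com"})
    batch.Add(solution.SMSNotifier{PhoneNumber: "555-0100"}) // only 7 digits: fails validation
    batch.Add(solution.SlackNotifier{WebhookURL: "https://hooks.example.com/T000"})

    // Every notifier is attempted; errors come back as a slice
    for _, err := range batch.SendToAll("deploy finished") {
        fmt.Println("error:", err)
    }
}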
Exercise 5 - Goroutine Coordination with WaitGroup
Build a parallel web scraper demonstrating goroutines, WaitGroup, and proper synchronization.
Function Signature
package solution

type ScrapResult struct {
    URL     string
    Content string
    Error   error
}

func ScrapeURLs(urls []string, concurrency int) []ScrapResult
Requirements
- Process multiple URLs concurrently with limited parallelism
- Use WaitGroup to wait for all goroutines to complete
- Collect results in a slice without data races
- Simulate scraping with a delay
Solution
package solution

import (
    "fmt"
    "sync"
    "time"
)

type ScrapResult struct {
    URL     string
    Content string
    Error   error
}

// ScrapeURLs scrapes multiple URLs concurrently with limited parallelism
func ScrapeURLs(urls []string, concurrency int) []ScrapResult {
    // Channel to limit concurrent workers
    semaphore := make(chan struct{}, concurrency)

    // Results collection with mutex for thread safety
    var mu sync.Mutex
    results := make([]ScrapResult, 0, len(urls))

    // WaitGroup to track completion
    var wg sync.WaitGroup

    for _, url := range urls {
        wg.Add(1)

        go func(u string) {
            defer wg.Done()

            // Acquire semaphore slot
            semaphore <- struct{}{}
            defer func() { <-semaphore }() // Release slot

            // Simulate scraping
            result := scrapeURL(u)

            // Safely append result
            mu.Lock()
            results = append(results, result)
            mu.Unlock()
        }(url)
    }

    // Wait for all goroutines to complete
    wg.Wait()

    return results
}

// scrapeURL simulates scraping a single URL
func scrapeURL(url string) ScrapResult {
    // Simulate network delay
    time.Sleep(time.Millisecond * 100)

    // Simulate occasional errors
    if len(url) > 30 {
        return ScrapResult{
            URL:   url,
            Error: fmt.Errorf("URL too long: %s", url),
        }
    }

    return ScrapResult{
        URL:     url,
        Content: fmt.Sprintf("Content from %s", url),
        Error:   nil,
    }
}
Explanation
Key Concepts:
- WaitGroup: Tracks goroutine completion with Add(), Done(), and Wait()
- Semaphore Pattern: Buffered channel limits concurrent goroutines
- Mutex: Protects shared results slice from concurrent writes
- Goroutine Parameter Capture: Passing URL as parameter prevents closure issues
Why This Works:
The semaphore limits concurrency to prevent resource exhaustion, WaitGroup ensures we wait for all work to complete, and the mutex prevents data races when appending results.
Key Takeaways
- WaitGroup coordinates goroutine completion
- Semaphore pattern limits concurrency
- Always protect shared data structures with mutexes
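Usage Example
To exercise the scraper, a minimal sketch with made-up URLs (import path hypothetical); note that because goroutines race to append, result order is not guaranteed:

package main

import (
    "fmt"

    "example.com/section1/solution" // hypothetical import path
)

func main() {
    urls := []string{"https://a.example", "https://b.example", "https://c.example"}

    // At most 2 scrapes run at a time
    for _, r := range solution.ScrapeURLs(urls, 2) {
        if r.Error != nil {
            fmt.Println("failed:", r.URL, r.Error)
            continue
        }
        fmt.Println("ok:", r.Content)
    }
}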
Exercise 6 - Channel Patterns and Select
Create a task scheduler demonstrating buffered channels, select statements, and channel closing.
Function Signature
package solution

type Task struct {
    ID       int
    Duration time.Duration
}

type Result struct {
    TaskID int
    Status string
}

func RunScheduler(tasks []Task, workers int, timeout time.Duration) []Result
Requirements
- Use buffered channel for task distribution
- Implement worker pool pattern with multiple goroutines
- Handle timeout using select and time.After
- Properly close channels and signal completion
Solution
package solution

import (
    "fmt"
    "sync"
    "time"
)

type Task struct {
    ID       int
    Duration time.Duration
}

type Result struct {
    TaskID int
    Status string
}

// RunScheduler executes tasks with a worker pool and timeout
func RunScheduler(tasks []Task, workers int, timeout time.Duration) []Result {
    // Buffered channels for task distribution and results
    taskChan := make(chan Task, len(tasks))
    resultChan := make(chan Result, len(tasks))

    // Fill task channel
    for _, task := range tasks {
        taskChan <- task
    }
    close(taskChan) // No more tasks will be added

    // Start worker pool
    var wg sync.WaitGroup
    for i := 0; i < workers; i++ {
        wg.Add(1)
        go worker(i, taskChan, resultChan, &wg)
    }

    // Close results channel after all workers finish
    go func() {
        wg.Wait()
        close(resultChan)
    }()

    // Collect results with timeout
    return collectResults(resultChan, timeout, len(tasks))
}

// worker processes tasks from the task channel
func worker(id int, tasks <-chan Task, results chan<- Result, wg *sync.WaitGroup) {
    defer wg.Done()

    for task := range tasks {
        // Simulate work
        time.Sleep(task.Duration)

        results <- Result{
            TaskID: task.ID,
            Status: fmt.Sprintf("completed by worker %d", id),
        }
    }
}

// collectResults gathers results with timeout handling
func collectResults(results <-chan Result, timeout time.Duration, expectedCount int) []Result {
    collected := make([]Result, 0, expectedCount)
    timeoutChan := time.After(timeout)

    for {
        select {
        case result, ok := <-results:
            if !ok {
                // Channel closed, all workers finished
                return collected
            }
            collected = append(collected, result)

        case <-timeoutChan:
            // Timeout occurred
            for i := len(collected); i < expectedCount; i++ {
                collected = append(collected, Result{
                    TaskID: -1,
                    Status: "timeout",
                })
            }
            return collected
        }
    }
}
Explanation
Key Concepts:
- Buffered Channels: Pre-allocate space for all tasks to avoid blocking
- Channel Closing: Closing taskChan signals workers to stop; checking ok on receive detects closure
- Select Statement: Multiplexes between result channel and timeout channel
- Worker Pool: Fixed number of goroutines process from shared task channel
Why This Works:
Closing the task channel signals completion to workers, select handles timeout elegantly, and the separate goroutine closes the result channel only after all workers finish.
Key Takeaways
- Close channels to signal no more values will be sent
- Select enables timeout and cancellation patterns
- Worker pool pattern efficiently processes task queues
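Usage Example
A small driver for the scheduler; the durations and one-second timeout are illustrative choices (import path hypothetical), generous enough that all three tasks should finish:

package main

import (
    "fmt"
    "time"

    "example.com/section1/solution" // hypothetical import path
)

func main() {
    tasks := []solution.Task{
        {ID: 1, Duration: 50 * time.Millisecond},
        {ID: 2, Duration: 80 * time.Millisecond},
        {ID: 3, Duration: 20 * time.Millisecond},
    }

    // Two workers share the queue; completion order may vary
    for _, r := range solution.RunScheduler(tasks, 2, time.Second) {
        fmt.Println(r.TaskID, r.Status)
    }
}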
Exercise 7 - Error Wrapping and Custom Error Types
Implement a file processor with comprehensive error handling using wrapping and custom error types.
Function Signature
package solution

type ValidationError struct {
    Field   string
    Message string
}

type ProcessingError struct {
    Stage string
    Err   error
}

func ProcessFile(filename string) error
func IsValidationError(err error) bool
func IsProcessingError(err error) bool
Requirements
- Create custom error types for different failure categories
- Use fmt.Errorf with %w for error wrapping
- Implement error inspection with errors.Is and errors.As
- Provide detailed error context at each layer
Solution
package solution

import (
    "errors"
    "fmt"
    "strings"
)

// ValidationError represents validation failures
type ValidationError struct {
    Field   string
    Message string
}

func (e *ValidationError) Error() string {
    return fmt.Sprintf("validation failed for %s: %s", e.Field, e.Message)
}

// ProcessingError wraps errors that occur during processing
type ProcessingError struct {
    Stage string
    Err   error
}

func (e *ProcessingError) Error() string {
    return fmt.Sprintf("processing failed at stage '%s': %v", e.Stage, e.Err)
}

func (e *ProcessingError) Unwrap() error {
    return e.Err
}

// ProcessFile simulates file processing with layered error handling
func ProcessFile(filename string) error {
    // Validation layer
    if err := validateFilename(filename); err != nil {
        return fmt.Errorf("file validation failed: %w", err)
    }

    // Reading layer
    content, err := readFile(filename)
    if err != nil {
        return &ProcessingError{
            Stage: "read",
            Err:   err,
        }
    }

    // Parsing layer
    if err := parseContent(content); err != nil {
        return &ProcessingError{
            Stage: "parse",
            Err:   err,
        }
    }

    // Processing layer
    if err := processContent(content); err != nil {
        return &ProcessingError{
            Stage: "process",
            Err:   err,
        }
    }

    return nil
}

func validateFilename(filename string) error {
    if filename == "" {
        return &ValidationError{
            Field:   "filename",
            Message: "cannot be empty",
        }
    }

    if !strings.HasSuffix(filename, ".txt") {
        return &ValidationError{
            Field:   "filename",
            Message: "must have .txt extension",
        }
    }

    return nil
}

func readFile(filename string) (string, error) {
    // Simulate file reading
    if strings.Contains(filename, "missing") {
        return "", fmt.Errorf("file not found: %s", filename)
    }
    return "file content", nil
}

func parseContent(content string) error {
    if content == "" {
        return errors.New("empty content")
    }
    return nil
}

func processContent(content string) error {
    if len(content) < 10 {
        return errors.New("content too short")
    }
    return nil
}

// IsValidationError checks if err is or wraps a ValidationError
func IsValidationError(err error) bool {
    var ve *ValidationError
    return errors.As(err, &ve)
}

// IsProcessingError checks if err is or wraps a ProcessingError
func IsProcessingError(err error) bool {
    var pe *ProcessingError
    return errors.As(err, &pe)
}

// GetValidationError extracts ValidationError from error chain
func GetValidationError(err error) (*ValidationError, bool) {
    var ve *ValidationError
    if errors.As(err, &ve) {
        return ve, true
    }
    return nil, false
}

// GetProcessingError extracts ProcessingError from error chain
func GetProcessingError(err error) (*ProcessingError, bool) {
    var pe *ProcessingError
    if errors.As(err, &pe) {
        return pe, true
    }
    return nil, false
}
Explanation
Key Concepts:
- Custom Error Types: Structs implementing the error interface carry additional context
- Error Wrapping: fmt.Errorf with %w preserves the error chain
- Unwrap Method: Enables errors.Is and errors.As to traverse the error chain
- Error Inspection: errors.As extracts specific error types from the chain
Why This Works:
Each layer adds context while preserving the underlying error. Callers can inspect the error chain to determine the root cause and handle different error types appropriately.
Key Takeaways
- Custom error types carry structured context
- Use %w in fmt.Errorf to wrap errors and preserve chains
- errors.As enables type-safe error inspection
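Usage Example
A sketch of inspecting the error chain from the caller's side (import path hypothetical); an empty filename fails validation, while a "missing" file fails at the read stage:

package main

import (
    "fmt"

    "example.com/section1/solution" // hypothetical import path
)

func main() {
    // errors.As (via the helpers) finds the wrapped ValidationError
    if err := solution.ProcessFile(""); solution.IsValidationError(err) {
        if ve, ok := solution.GetValidationError(err); ok {
            fmt.Println("field:", ve.Field, "-", ve.Message)
        }
    }

    // The read stage wraps its failure in a ProcessingError
    if err := solution.ProcessFile("missing.txt"); solution.IsProcessingError(err) {
        fmt.Println(err)
    }
}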
Exercise 8 - Slice Operations and Capacity Management
Build a dynamic buffer that efficiently manages slice capacity and demonstrates slice internals.
Function Signature
package solution

type DynamicBuffer struct {
    data []int
    size int
}

func NewDynamicBuffer(initialCapacity int) *DynamicBuffer
func (db *DynamicBuffer) Append(value int)
func (db *DynamicBuffer) Get(index int) (int, bool)
func (db *DynamicBuffer) Slice(start, end int) []int
func (db *DynamicBuffer) Capacity() int
func (db *DynamicBuffer) Size() int
Requirements
- Manage capacity growth efficiently
- Implement safe bounds checking for all operations
- Support slice operations that return new slices
- Demonstrate understanding of slice internals
Solution
package solution

// DynamicBuffer is a dynamically-sized integer buffer
type DynamicBuffer struct {
    data []int
    size int
}

// NewDynamicBuffer creates a buffer with specified initial capacity
func NewDynamicBuffer(initialCapacity int) *DynamicBuffer {
    if initialCapacity < 1 {
        initialCapacity = 8 // Minimum capacity
    }

    return &DynamicBuffer{
        data: make([]int, 0, initialCapacity),
        size: 0,
    }
}

// Append adds a value to the buffer, growing capacity if needed
func (db *DynamicBuffer) Append(value int) {
    // Check if we need to grow
    if db.size == cap(db.data) {
        db.grow()
    }

    // Append using slice operations
    db.data = append(db.data, value)
    db.size++
}

// grow doubles the buffer capacity
func (db *DynamicBuffer) grow() {
    newCapacity := cap(db.data) * 2
    if newCapacity == 0 {
        newCapacity = 8
    }

    newData := make([]int, db.size, newCapacity)
    copy(newData, db.data)
    db.data = newData
}

// Get retrieves a value at the specified index
func (db *DynamicBuffer) Get(index int) (int, bool) {
    if index < 0 || index >= db.size {
        return 0, false
    }
    return db.data[index], true
}

// Slice returns a copy of elements from start to end
func (db *DynamicBuffer) Slice(start, end int) []int {
    // Bounds checking
    if start < 0 {
        start = 0
    }
    if end > db.size {
        end = db.size
    }
    if start >= end {
        return []int{}
    }

    // Return a copy to prevent external modification
    result := make([]int, end-start)
    copy(result, db.data[start:end])
    return result
}

// Capacity returns the current capacity of the buffer
func (db *DynamicBuffer) Capacity() int {
    return cap(db.data)
}

// Size returns the number of elements in the buffer
func (db *DynamicBuffer) Size() int {
    return db.size
}

// Compact reduces capacity to match size, freeing unused memory
func (db *DynamicBuffer) Compact() {
    if db.size < cap(db.data) {
        newData := make([]int, db.size)
        copy(newData, db.data)
        db.data = newData
    }
}

// Clear removes all elements but keeps capacity
func (db *DynamicBuffer) Clear() {
    db.data = db.data[:0]
    db.size = 0
}
Explanation
Key Concepts:
- Length vs Capacity: Size tracks logical elements, capacity is allocated space
- Capacity Growth: Doubling strategy amortizes allocation cost over insertions
- Defensive Copying: Slice() returns a copy to prevent external mutation
- Bounds Checking: All index operations validate ranges before accessing
Why This Works:
By managing capacity explicitly and using copy() for safe transfers, we create an efficient, safe buffer. The doubling strategy ensures O(1) amortized append performance.
Key Takeaways
- Slices have separate length and capacity
- Doubling capacity amortizes growth cost
- Always copy slices when exposing internal data
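Usage Example
This short driver makes the doubling strategy visible; starting from capacity 2, five appends grow the buffer 2 -> 4 -> 8 (import path hypothetical):

package main

import (
    "fmt"

    "example.com/section1/solution" // hypothetical import path
)

func main() {
    buf := solution.NewDynamicBuffer(2)
    for i := 1; i <= 5; i++ {
        buf.Append(i * 10)
    }

    fmt.Println(buf.Size(), buf.Capacity()) // 5 8

    if v, ok := buf.Get(2); ok {
        fmt.Println(v) // 30
    }
    fmt.Println(buf.Slice(1, 4)) // [20 30 40]
}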
Exercise 9 - Concurrent-Safe Map Operations
Create a thread-safe cache with expiration using maps and proper synchronization.
Function Signature
package solution

type CacheItem struct {
    Value      interface{}
    Expiration time.Time
}

type Cache struct {
    items map[string]CacheItem
    mu    sync.RWMutex
}

func NewCache() *Cache
func (c *Cache) Set(key string, value interface{}, ttl time.Duration)
func (c *Cache) Get(key string) (interface{}, bool)
func (c *Cache) Delete(key string)
func (c *Cache) Cleanup()
Requirements
- Use sync.RWMutex for concurrent read/write safety
- Implement TTL expiration for cache entries
- Provide cleanup method to remove expired entries
- Support multiple concurrent readers and writers
Solution
package solution

import (
    "sync"
    "time"
)

// CacheItem represents a cached value with expiration
type CacheItem struct {
    Value      interface{}
    Expiration time.Time
}

// Cache is a thread-safe cache with TTL support
type Cache struct {
    items map[string]CacheItem
    mu    sync.RWMutex
}

// NewCache creates a new cache instance
func NewCache() *Cache {
    cache := &Cache{
        items: make(map[string]CacheItem),
    }

    // Start background cleanup goroutine
    go cache.cleanupLoop()

    return cache
}

// Set adds or updates a cache entry with TTL
func (c *Cache) Set(key string, value interface{}, ttl time.Duration) {
    c.mu.Lock()
    defer c.mu.Unlock()

    expiration := time.Now().Add(ttl)
    c.items[key] = CacheItem{
        Value:      value,
        Expiration: expiration,
    }
}

// Get retrieves a value from the cache if it exists and hasn't expired
func (c *Cache) Get(key string) (interface{}, bool) {
    c.mu.RLock()
    defer c.mu.RUnlock()

    item, exists := c.items[key]
    if !exists {
        return nil, false
    }

    // Check expiration
    if time.Now().After(item.Expiration) {
        return nil, false
    }

    return item.Value, true
}

// Delete removes a key from the cache
func (c *Cache) Delete(key string) {
    c.mu.Lock()
    defer c.mu.Unlock()

    delete(c.items, key)
}

// Cleanup removes all expired entries
func (c *Cache) Cleanup() {
    c.mu.Lock()
    defer c.mu.Unlock()

    now := time.Now()
    for key, item := range c.items {
        if now.After(item.Expiration) {
            delete(c.items, key)
        }
    }
}

// cleanupLoop periodically cleans up expired entries
func (c *Cache) cleanupLoop() {
    ticker := time.NewTicker(time.Minute)
    defer ticker.Stop()

    for range ticker.C {
        c.Cleanup()
    }
}

// Size returns the number of items in the cache
func (c *Cache) Size() int {
    c.mu.RLock()
    defer c.mu.RUnlock()

    return len(c.items)
}

// Keys returns all non-expired keys
func (c *Cache) Keys() []string {
    c.mu.RLock()
    defer c.mu.RUnlock()

    now := time.Now()
    keys := make([]string, 0, len(c.items))

    for key, item := range c.items {
        if now.Before(item.Expiration) {
            keys = append(keys, key)
        }
    }

    return keys
}
Explanation
Key Concepts:
- RWMutex: Allows multiple concurrent readers but exclusive writers
- Lock vs RLock: Use RLock for reads, Lock for writes
- Defer Unlock: Ensures locks are released even if panic occurs
- Background Cleanup: Separate goroutine periodically removes expired entries
Why This Works:
RWMutex enables high-performance concurrent reads while ensuring write safety. The background cleanup goroutine prevents unbounded memory growth from expired entries.
Key Takeaways
- Use RWMutex when reads vastly outnumber writes
- Always defer mutex unlocks to prevent deadlocks
- Background goroutines enable periodic maintenance tasks
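Usage Example
A minimal sketch of the TTL behavior (import path hypothetical); the sleep deliberately outlasts the 100ms TTL so the second lookup misses:

package main

import (
    "fmt"
    "time"

    "example.com/section1/solution" // hypothetical import path
)

func main() {
    cache := solution.NewCache()
    cache.Set("session", "abc123", 100*time.Millisecond)

    if v, ok := cache.Get("session"); ok {
        fmt.Println("hit:", v)
    }

    time.Sleep(150 * time.Millisecond)
    if _, ok := cache.Get("session"); !ok {
        fmt.Println("expired") // TTL elapsed, entry is treated as gone
    }
}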
Exercise 10 - Pointer Semantics and Method Receivers
Build a linked list demonstrating proper use of pointers and value vs pointer receivers.
Function Signature
package solution

type Node struct {
    Value int
    Next  *Node
}

type LinkedList struct {
    Head *Node
    Tail *Node
    size int
}

func NewLinkedList() *LinkedList
func (l *LinkedList) Append(value int)
func (l *LinkedList) Prepend(value int)
func (l *LinkedList) Remove(value int) bool
func (l LinkedList) Size() int
func (l LinkedList) ToSlice() []int
Requirements
- Use pointer receivers for methods that modify the list
- Use value receivers for methods that only read
- Properly manage pointers to avoid nil dereferences
- Maintain head, tail pointers for efficient operations
Solution
package solution

// Node represents a single element in the linked list
type Node struct {
    Value int
    Next  *Node
}

// LinkedList is a singly-linked list
type LinkedList struct {
    Head *Node
    Tail *Node
    size int
}

// NewLinkedList creates an empty linked list
func NewLinkedList() *LinkedList {
    return &LinkedList{
        Head: nil,
        Tail: nil,
        size: 0,
    }
}

// Append adds a value to the end of the list
func (l *LinkedList) Append(value int) {
    newNode := &Node{Value: value}

    if l.Head == nil {
        // Empty list
        l.Head = newNode
        l.Tail = newNode
    } else {
        // Non-empty list
        l.Tail.Next = newNode
        l.Tail = newNode
    }

    l.size++
}

// Prepend adds a value to the beginning of the list
func (l *LinkedList) Prepend(value int) {
    newNode := &Node{Value: value, Next: l.Head}
    l.Head = newNode

    if l.Tail == nil {
        // Was empty
        l.Tail = newNode
    }

    l.size++
}

// Remove deletes the first occurrence of value
func (l *LinkedList) Remove(value int) bool {
    if l.Head == nil {
        return false
    }

    // Special case: removing head
    if l.Head.Value == value {
        l.Head = l.Head.Next
        if l.Head == nil {
            l.Tail = nil
        }
        l.size--
        return true
    }

    // Find the node before the one to remove
    current := l.Head
    for current.Next != nil {
        if current.Next.Value == value {
            // Found it
            if current.Next == l.Tail {
                l.Tail = current
            }
            current.Next = current.Next.Next
            l.size--
            return true
        }
        current = current.Next
    }

    return false
}

// Size returns the number of elements
func (l LinkedList) Size() int {
    return l.size
}

// ToSlice converts the list to a slice
func (l LinkedList) ToSlice() []int {
    result := make([]int, 0, l.size)
    current := l.Head

    for current != nil {
        result = append(result, current.Value)
        current = current.Next
    }

    return result
}

// Contains checks if a value exists
func (l LinkedList) Contains(value int) bool {
    current := l.Head

    for current != nil {
        if current.Value == value {
            return true
        }
        current = current.Next
    }

    return false
}

// Clear removes all elements
func (l *LinkedList) Clear() {
    l.Head = nil
    l.Tail = nil
    l.size = 0
}
Explanation
Key Concepts:
- Pointer Receivers: Methods that modify the receiver use *LinkedList
- Value Receivers: Methods that only read use LinkedList
- Nil Handling: Careful checks prevent nil pointer dereferences
- Pointer Bookkeeping: Maintaining both head and tail enables efficient operations
Why This Works:
Pointer receivers allow methods to modify the list structure. Value receivers for read-only methods communicate intent and allow the compiler to optimize. Proper nil checks prevent runtime panics.
Key Takeaways
- Use pointer receivers when methods modify the receiver
- Use value receivers for read-only methods
- Always check for nil before dereferencing pointers
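Usage Example
A short driver exercising both mutating (pointer-receiver) and read-only (value-receiver) methods; the import path is a hypothetical placeholder:

package main

import (
    "fmt"

    "example.com/section1/solution" // hypothetical import path
)

func main() {
    list := solution.NewLinkedList()
    list.Append(2)
    list.Append(3)
    list.Prepend(1) // list is now 1 -> 2 -> 3

    list.Remove(2)
    fmt.Println(list.ToSlice(), list.Size()) // [1 3] 2
    fmt.Println(list.Contains(3))            // true
}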
Exercise 11 - Defer and Panic Recovery
Build a resource manager demonstrating defer for cleanup and panic recovery.
Function Signature
package solution

type Resource struct {
    Name   string
    Closed bool
}

type ResourceManager struct {
    resources []*Resource
}

func NewResourceManager() *ResourceManager
func (rm *ResourceManager) Acquire(name string) *Resource
func (rm *ResourceManager) Release(r *Resource)
func (rm *ResourceManager) Execute(fn func()) error
func (rm *ResourceManager) CleanupAll()
Requirements
- Use defer to ensure resources are always released
- Implement panic recovery to convert panics to errors
- Track all acquired resources for cleanup
- Demonstrate defer execution order
Solution
package solution

import (
    "fmt"
    "sync"
)

// Resource represents a managed resource
type Resource struct {
    Name   string
    Closed bool
    mu     sync.Mutex
}

// Close marks the resource as closed
func (r *Resource) Close() error {
    r.mu.Lock()
    defer r.mu.Unlock()

    if r.Closed {
        return fmt.Errorf("resource %s already closed", r.Name)
    }

    r.Closed = true
    fmt.Printf("Resource %s closed\n", r.Name)
    return nil
}

// ResourceManager manages multiple resources
type ResourceManager struct {
    resources []*Resource
    mu        sync.Mutex
}

// NewResourceManager creates a new resource manager
func NewResourceManager() *ResourceManager {
    return &ResourceManager{
        resources: make([]*Resource, 0),
    }
}

// Acquire creates and tracks a new resource
func (rm *ResourceManager) Acquire(name string) *Resource {
    rm.mu.Lock()
    defer rm.mu.Unlock()

    resource := &Resource{
        Name:   name,
        Closed: false,
    }

    rm.resources = append(rm.resources, resource)
    fmt.Printf("Resource %s acquired\n", name)

    return resource
}

// Release closes a resource
func (rm *ResourceManager) Release(r *Resource) {
    if err := r.Close(); err != nil {
        fmt.Printf("Error releasing resource: %v\n", err)
    }
}

// Execute runs a function, converting any panic to an error
func (rm *ResourceManager) Execute(fn func()) (err error) {
    // Panic recovery with defer; the named return value lets the
    // deferred function report the recovered panic as an error
    defer func() {
        if r := recover(); r != nil {
            err = fmt.Errorf("panic recovered: %v", r)
        }
    }()

    // Execute the function
    fn()

    return nil
}

// CleanupAll closes all resources in reverse order
func (rm *ResourceManager) CleanupAll() {
    rm.mu.Lock()
    defer rm.mu.Unlock()

    // Iterate in reverse order
    for i := len(rm.resources) - 1; i >= 0; i-- {
        resource := rm.resources[i]
        if !resource.Closed {
            rm.Release(resource)
        }
    }

    rm.resources = rm.resources[:0] // Clear the slice
}

// ExecuteWithResources demonstrates multiple defers for resource cleanup
func (rm *ResourceManager) ExecuteWithResources(fn func([]*Resource)) error {
    // Acquire resources; defers run in LIFO order, so the resource
    // acquired first is released last
    r1 := rm.Acquire("database")
    defer rm.Release(r1) // Released last

    r2 := rm.Acquire("file")
    defer rm.Release(r2) // Released second

    r3 := rm.Acquire("network")
    defer rm.Release(r3) // Released first of the three resources

    // Panic recovery (deferred last, so it runs first)
    defer func() {
        if r := recover(); r != nil {
            fmt.Printf("Panic in ExecuteWithResources: %v\n", r)
        }
    }()

    // Execute function with acquired resources
    fn([]*Resource{r1, r2, r3})

    return nil
}

// SafeExecute demonstrates named return values with defer
func (rm *ResourceManager) SafeExecute(fn func() error) (err error) {
    resource := rm.Acquire("temp")

    // Defer can modify the named return value: if fn succeeded but
    // closing the resource fails, report the close error instead
    defer func() {
        if closeErr := resource.Close(); err == nil {
            err = closeErr
        }
    }()

    return fn()
}
Explanation
Key Concepts:
- Defer Execution: Deferred functions execute in LIFO order when the surrounding function returns
- Panic Recovery: Using recover() in a deferred function catches panics
- Named Return Values: Deferred functions can modify named return values
- Resource Safety: Defer ensures cleanup happens even if panics occur
Why This Works:
Defer provides deterministic cleanup regardless of how a function exits. The LIFO order matches typical resource acquisition patterns.
Key Takeaways
- Defer executes in LIFO order
- Use defer for guaranteed cleanup
- Recover from panics only in deferred functions
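Usage Example
A hypothetical driver showing a panic converted to an error and reverse-order cleanup (import path and resource names are illustrative):

package main

import (
    "fmt"

    "example.com/section1/solution" // hypothetical import path
)

func main() {
    rm := solution.NewResourceManager()

    // The deferred recover catches the panic and returns it as an error
    err := rm.Execute(func() { panic("something broke") })
    fmt.Println(err) // panic recovered: something broke

    rm.Acquire("database")
    rm.Acquire("file")
    rm.CleanupAll() // closes in reverse acquisition order
}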
Exercise 12 - Package Organization and Visibility
Create a modular calculator package demonstrating proper package structure and visibility.
Function Signature
// File: calculator/calculator.go
package calculator

type Calculator struct {
    history []operation
}

func New() *Calculator
func (c *Calculator) Add(a, b float64) float64
func (c *Calculator) Subtract(a, b float64) float64
func (c *Calculator) Multiply(a, b float64) float64
func (c *Calculator) Divide(a, b float64) (float64, error)
func (c *Calculator) History() []string

// File: calculator/operation.go
package calculator

type operation struct {
    operator string
    operands [2]float64
    result   float64
}
Requirements
- Separate concerns across multiple files in the same package
- Use unexported types and functions for internal implementation
- Provide exported API with clear documentation
- Demonstrate proper encapsulation
Solution
// File: calculator/operation.go
package calculator

import "fmt"

// operation represents a calculation operation
type operation struct {
    operator string
    operands [2]float64
    result   float64
}

// String returns a string representation of the operation
func (op operation) String() string {
    return fmt.Sprintf("%.2f %s %.2f = %.2f",
        op.operands[0], op.operator, op.operands[1], op.result)
}

// newOperation creates a new operation
func newOperation(operator string, a, b, result float64) operation {
    return operation{
        operator: operator,
        operands: [2]float64{a, b},
        result:   result,
    }
}

// File: calculator/calculator.go
package calculator

import "errors"

// Calculator performs arithmetic operations with history tracking
type Calculator struct {
    history []operation
}

// New creates a new Calculator instance
func New() *Calculator {
    return &Calculator{
        history: make([]operation, 0),
    }
}

// Add performs addition and records the operation
func (c *Calculator) Add(a, b float64) float64 {
    result := a + b
    c.recordOperation("+", a, b, result)
    return result
}

// Subtract performs subtraction and records the operation
func (c *Calculator) Subtract(a, b float64) float64 {
    result := a - b
    c.recordOperation("-", a, b, result)
    return result
}

// Multiply performs multiplication and records the operation
func (c *Calculator) Multiply(a, b float64) float64 {
    result := a * b
    c.recordOperation("*", a, b, result)
    return result
}

// Divide performs division with error handling for divide-by-zero
func (c *Calculator) Divide(a, b float64) (float64, error) {
    if b == 0 {
        return 0, errors.New("division by zero")
    }
    result := a / b
    c.recordOperation("/", a, b, result)
    return result, nil
}

// History returns a copy of the operation history as strings
func (c *Calculator) History() []string {
    history := make([]string, len(c.history))
    for i, op := range c.history {
        history[i] = op.String()
    }
    return history
}

// ClearHistory removes all recorded operations
func (c *Calculator) ClearHistory() {
    c.history = c.history[:0]
}

// recordOperation is an unexported helper that adds operations to history
func (c *Calculator) recordOperation(operator string, a, b, result float64) {
    op := newOperation(operator, a, b, result)
    c.history = append(c.history, op)
}

// File: calculator/advanced.go
package calculator

import (
    "fmt"
    "math"
)

// Power calculates a raised to the power of b
func (c *Calculator) Power(a, b float64) float64 {
    result := math.Pow(a, b)
    c.recordOperation("^", a, b, result)
    return result
}

// SquareRoot calculates the square root of a
func (c *Calculator) SquareRoot(a float64) (float64, error) {
    if a < 0 {
        return 0, fmt.Errorf("cannot calculate square root of negative number: %.2f", a)
    }
    result := math.Sqrt(a)
    c.recordOperation("√", a, 0, result)
    return result, nil
}

// File: calculator/statistics.go
package calculator

// Statistics provides statistical calculations
type Statistics struct {
    calc *Calculator
}

// NewStatistics creates a Statistics calculator
func NewStatistics() *Statistics {
    return &Statistics{
        calc: New(),
    }
}

// Mean calculates the average of values
func (s *Statistics) Mean(values ...float64) float64 {
    if len(values) == 0 {
        return 0
    }

    sum := 0.0
    for _, v := range values {
        sum = s.calc.Add(sum, v)
    }

    return sum / float64(len(values))
}

// Sum calculates the total of values
func (s *Statistics) Sum(values ...float64) float64 {
    result := 0.0
    for _, v := range values {
        result = s.calc.Add(result, v)
    }
    return result
}
Explanation
Key Concepts:
- Package Organization: Related functionality grouped in the same package across multiple files
- Visibility: Exported types/functions vs unexported for internal use
- Encapsulation: Internal operation type hidden from external users
- Helper Functions: Unexported helpers like recordOperation keep the code DRY
Why This Works:
All files in the calculator package can access each other's unexported identifiers. External packages can only use exported identifiers. This enables strong encapsulation and clean APIs.
Key Takeaways
- Multiple files can share the same package
- Capitalization controls visibility
- Use unexported helpers for internal implementation details
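Usage Example
From outside the package, only the exported API is visible; a minimal sketch assuming the package is importable at a hypothetical example.com/section1/calculator path:

package main

import (
    "fmt"

    "example.com/section1/calculator" // hypothetical import path
)

func main() {
    calc := calculator.New()
    calc.Add(2, 3)
    calc.Multiply(4, 5)

    if _, err := calc.Divide(10, 0); err != nil {
        fmt.Println("error:", err) // division by zero
    }

    // History is the only window into the unexported operation type
    for _, entry := range calc.History() {
        fmt.Println(entry)
    }
}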
Final Summary - Key Takeaways from All Exercises
Congratulations on completing all 12 exercises! Here's what you've mastered:
Type System and Fundamentals
- Exercise 1: Explicit type conversions, overflow handling, type safety
- Exercise 8: Slice internals, capacity management, defensive copying
Functions and Composition
- Exercise 2: Closures, higher-order functions, function composition
- Exercise 7: Error wrapping, custom error types, error inspection
Structs and Interfaces
- Exercise 3: Builder pattern, method chaining, validation
- Exercise 4: Interface implementation, polymorphism, type assertions
- Exercise 10: Pointer vs value receivers, linked data structures
Concurrency Patterns
- Exercise 5: Goroutines, WaitGroup, semaphore pattern
- Exercise 6: Channels, select statements, worker pools
- Exercise 9: Concurrent-safe data structures, RWMutex
Error Handling and Safety
- Exercise 7: Custom errors, wrapping, error chains
- Exercise 11: Defer for cleanup, panic recovery
Package Design
- Exercise 12: Package organization, visibility, encapsulation
Next Steps
You've built a solid foundation in Go fundamentals! Ready to continue your journey?
Continue to: Section Project: Building a Production-Ready CLI Tool
The section project will combine all these concepts into a complete, production-ready application!
Or Explore:
- Section 2: Standard Library - Master Go's powerful standard library
- Practice Exercises - More hands-on coding challenges
Exercise Type: Section Review
Difficulty: Beginner
Estimated Time: 2-4 hours
Skills Practiced: All Go language fundamentals
Keep practicing, and happy coding!