Overview
These exercises consolidate everything you've learned in Sections 1 and 2. You'll apply fundamental Go concepts alongside standard library packages to solve real-world programming challenges.
Why These Exercises Matter:
The Go standard library is one of the language's greatest strengths. It provides production-ready packages for common tasks like HTTP handling, JSON encoding, database access, and file I/O. Mastering these packages means you can build complete applications without reaching for third-party dependencies.
What You'll Practice:
- File I/O operations and path handling
- JSON marshaling and unmarshaling
- HTTP servers and clients
- Database CRUD operations
- Context management and timeouts
- Text processing and validation
- Logging and error handling patterns
Recommended Approach:
- Complete exercises in order
- Read requirements carefully before implementing
- Run test cases to validate your solution
- Review the solution explanations to understand best practices
Exercise 1 - File Reading Methods
Objective: Implement three different approaches to reading files and understand when to use each.
Problem: Create a program that reads a text file using three different methods: os.ReadFile, bufio.Scanner, and io.Reader. Compare memory usage and performance characteristics.
Requirements:
- Implement ReadFileAll() using os.ReadFile - loads the entire file into memory
- Implement ReadFileLines() using bufio.Scanner - processes the file line by line
- Implement ReadFileChunks() using io.Reader - reads in 4KB chunks
- Handle errors appropriately for each method
- Return content as a string for comparison
Function Signatures:
func ReadFileAll(filename string) (string, error)
func ReadFileLines(filename string) (string, error)
func ReadFileChunks(filename string) (string, error)
Solution
// run
package main

import (
    "bufio"
    "bytes"
    "fmt"
    "io"
    "os"
)

// ReadFileAll reads the entire file into memory at once
func ReadFileAll(filename string) (string, error) {
    data, err := os.ReadFile(filename)
    if err != nil {
        return "", fmt.Errorf("read file: %w", err)
    }
    return string(data), nil
}

// ReadFileLines reads the file line by line using a buffered scanner
func ReadFileLines(filename string) (string, error) {
    file, err := os.Open(filename)
    if err != nil {
        return "", fmt.Errorf("open file: %w", err)
    }
    defer file.Close()

    var buf bytes.Buffer
    scanner := bufio.NewScanner(file)

    for scanner.Scan() {
        buf.WriteString(scanner.Text())
        buf.WriteByte('\n')
    }

    if err := scanner.Err(); err != nil {
        return "", fmt.Errorf("scan file: %w", err)
    }

    return buf.String(), nil
}

// ReadFileChunks reads the file in 4KB chunks
func ReadFileChunks(filename string) (string, error) {
    file, err := os.Open(filename)
    if err != nil {
        return "", fmt.Errorf("open file: %w", err)
    }
    defer file.Close()

    var buf bytes.Buffer
    chunk := make([]byte, 4096)

    for {
        n, err := file.Read(chunk)
        if n > 0 {
            buf.Write(chunk[:n]) // write any data read before checking for EOF
        }
        if err == io.EOF {
            break
        }
        if err != nil {
            return "", fmt.Errorf("read chunk: %w", err)
        }
    }

    return buf.String(), nil
}

func main() {
    filename := "example.txt"

    // Method 1: Read all at once
    content1, err := ReadFileAll(filename)
    if err != nil {
        fmt.Println("ReadFileAll error:", err)
    } else {
        fmt.Printf("Method 1: %d bytes\n", len(content1))
    }

    // Method 2: Line by line
    content2, err := ReadFileLines(filename)
    if err != nil {
        fmt.Println("ReadFileLines error:", err)
    } else {
        fmt.Printf("Method 2: %d bytes\n", len(content2))
    }

    // Method 3: Chunks
    content3, err := ReadFileChunks(filename)
    if err != nil {
        fmt.Println("ReadFileChunks error:", err)
    } else {
        fmt.Printf("Method 3: %d bytes\n", len(content3))
    }
}
Key Takeaways:
- os.ReadFile is simplest for small files
- bufio.Scanner is best for line-based processing of large files
- io.Reader with manual chunking gives the most control over memory usage
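To actually compare performance and memory usage as the problem asks, a Go benchmark is the usual tool. A minimal sketch, assuming the three functions above are in package main and an example.txt file sits next to a test file such as main_test.go (both file names are assumptions, not part of the exercise):

package main

import "testing"

// benchmarkRead times one of the read functions against example.txt.
func benchmarkRead(b *testing.B, read func(string) (string, error)) {
    b.ReportAllocs()
    for i := 0; i < b.N; i++ {
        if _, err := read("example.txt"); err != nil {
            b.Fatal(err)
        }
    }
}

func BenchmarkReadFileAll(b *testing.B)    { benchmarkRead(b, ReadFileAll) }
func BenchmarkReadFileLines(b *testing.B)  { benchmarkRead(b, ReadFileLines) }
func BenchmarkReadFileChunks(b *testing.B) { benchmarkRead(b, ReadFileChunks) }

Run it with go test -bench . -benchmem to see allocations per operation alongside timings.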
Exercise 2 - JSON Marshal/Unmarshal
Objective: Master JSON encoding and decoding with custom types and struct tags.
Problem: Create a user management system that reads/writes user data as JSON with proper validation.
Requirements:
- Define a User struct with JSON tags for name, email, age, and active status
- Implement MarshalUser() to convert a User to a JSON string
- Implement UnmarshalUser() to parse JSON into a User struct
- Handle nested structures
- Validate that email contains "@" and age is positive
Function Signatures:
type User struct {
    Name    string  `json:"name"`
    Email   string  `json:"email"`
    Age     int     `json:"age"`
    Active  bool    `json:"active"`
    Address Address `json:"address,omitempty"`
}

type Address struct {
    Street  string `json:"street"`
    City    string `json:"city"`
    ZipCode string `json:"zip_code"`
}

func MarshalUser(user User) (string, error)
func UnmarshalUser(jsonData string) (User, error)
func ValidateUser(user User) error
Solution
// run
package main

import (
    "encoding/json"
    "fmt"
    "strings"
)

type User struct {
    Name    string  `json:"name"`
    Email   string  `json:"email"`
    Age     int     `json:"age"`
    Active  bool    `json:"active"`
    Address Address `json:"address,omitempty"`
}

type Address struct {
    Street  string `json:"street"`
    City    string `json:"city"`
    ZipCode string `json:"zip_code"`
}

// MarshalUser converts a User to a JSON string
func MarshalUser(user User) (string, error) {
    if err := ValidateUser(user); err != nil {
        return "", fmt.Errorf("invalid user: %w", err)
    }

    data, err := json.Marshal(user)
    if err != nil {
        return "", fmt.Errorf("marshal user: %w", err)
    }

    return string(data), nil
}

// UnmarshalUser parses JSON into a User struct
func UnmarshalUser(jsonData string) (User, error) {
    var user User

    if err := json.Unmarshal([]byte(jsonData), &user); err != nil {
        return User{}, fmt.Errorf("unmarshal user: %w", err)
    }

    if err := ValidateUser(user); err != nil {
        return User{}, fmt.Errorf("invalid user: %w", err)
    }

    return user, nil
}

// ValidateUser validates user fields
func ValidateUser(user User) error {
    if user.Name == "" {
        return fmt.Errorf("name is required")
    }
    if !strings.Contains(user.Email, "@") {
        return fmt.Errorf("email must contain @")
    }
    if user.Age <= 0 {
        return fmt.Errorf("age must be positive")
    }
    return nil
}

func main() {
    // Create user
    user := User{
        Name:   "Alice Johnson",
        Email:  "alice@example.com",
        Age:    30,
        Active: true,
        Address: Address{
            Street:  "123 Main St",
            City:    "Springfield",
            ZipCode: "12345",
        },
    }

    // Marshal to JSON
    jsonStr, err := MarshalUser(user)
    if err != nil {
        fmt.Println("Marshal error:", err)
        return
    }
    fmt.Println("JSON:", jsonStr)

    // Unmarshal from JSON
    recovered, err := UnmarshalUser(jsonStr)
    if err != nil {
        fmt.Println("Unmarshal error:", err)
        return
    }
    fmt.Printf("Recovered: %+v\n", recovered)
}
Key Takeaways:
- Struct tags control JSON field names and behavior
- The omitempty option excludes zero-value fields from output; note it has no effect on a non-pointer struct field like Address (see the sketch below)
- Always validate data after unmarshaling from untrusted sources
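Because omitempty only skips "empty" values (false, 0, nil pointers/interfaces, and empty strings, slices, maps, and arrays), an all-zero Address struct is still emitted. A minimal sketch of the usual workaround, using a pointer field; the UserCompact name is just for illustration:

package main

import (
    "encoding/json"
    "fmt"
)

type Address struct {
    Street  string `json:"street"`
    City    string `json:"city"`
    ZipCode string `json:"zip_code"`
}

// UserCompact uses *Address so an absent address is dropped from the JSON.
type UserCompact struct {
    Name    string   `json:"name"`
    Address *Address `json:"address,omitempty"` // nil pointer -> field omitted
}

func main() {
    withAddr := UserCompact{Name: "Alice", Address: &Address{City: "Springfield"}}
    withoutAddr := UserCompact{Name: "Bob"} // Address is nil

    a, _ := json.Marshal(withAddr)
    b, _ := json.Marshal(withoutAddr)
    fmt.Println(string(a)) // address object is present
    fmt.Println(string(b)) // {"name":"Bob"} - address omitted
}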
Exercise 3 - HTTP Client with Retry
Objective: Build a resilient HTTP client with timeout and exponential backoff.
Problem: Create an HTTP client that automatically retries failed requests with increasing delays.
Requirements:
- Implement FetchWithRetry() that makes HTTP GET requests
- Retry up to 3 times on failure
- Use exponential backoff
- Set 5-second timeout per request
- Return response body as string
Function Signature:
func FetchWithRetry(url string, maxRetries int) (string, error)
Solution
// run
package main

import (
    "context"
    "fmt"
    "io"
    "net/http"
    "time"
)

// FetchWithRetry makes an HTTP GET with exponential backoff retry
func FetchWithRetry(url string, maxRetries int) (string, error) {
    var lastErr error

    for attempt := 0; attempt <= maxRetries; attempt++ {
        // Create context with a per-attempt timeout
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)

        // Make request
        req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
        if err != nil {
            cancel()
            return "", fmt.Errorf("create request: %w", err)
        }

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            cancel()
            lastErr = err

            // Exponential backoff
            if attempt < maxRetries {
                delay := time.Duration(1<<attempt) * time.Second
                fmt.Printf("Attempt %d failed, retrying in %v...\n", attempt+1, delay)
                time.Sleep(delay)
                continue
            }
            break
        }

        // Read response, then release the body and the context
        body, err := io.ReadAll(resp.Body)
        resp.Body.Close()
        cancel()
        if err != nil {
            return "", fmt.Errorf("read response: %w", err)
        }

        // Check status code
        if resp.StatusCode >= 200 && resp.StatusCode < 300 {
            return string(body), nil
        }

        lastErr = fmt.Errorf("HTTP %d: %s", resp.StatusCode, resp.Status)

        // Retry on 5xx errors
        if resp.StatusCode >= 500 && attempt < maxRetries {
            delay := time.Duration(1<<attempt) * time.Second
            fmt.Printf("Server error, retrying in %v...\n", delay)
            time.Sleep(delay)
            continue
        }

        break
    }

    return "", fmt.Errorf("max retries exceeded: %w", lastErr)
}

func main() {
    url := "https://httpbin.org/get"

    body, err := FetchWithRetry(url, 3)
    if err != nil {
        fmt.Println("Error:", err)
        return
    }

    fmt.Printf("Response (%d bytes):\n%s\n", len(body), body)
}
Key Takeaways:
- Use context.WithTimeout to enforce request timeouts
- Exponential backoff prevents overwhelming failing services
- Distinguish between retryable and non-retryable errors
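If you do not need a per-attempt context, a client-level timeout sets the same deadline in one place, and reusing one http.Client lets connections be pooled. A minimal sketch (the 5-second value just mirrors the exercise); note that Client.Timeout covers the whole exchange, including reading the body:

package main

import (
    "fmt"
    "io"
    "net/http"
    "time"
)

// A shared client with a request timeout covering dial, headers, and body.
var client = &http.Client{Timeout: 5 * time.Second}

func fetchOnce(url string) (string, error) {
    resp, err := client.Get(url)
    if err != nil {
        return "", fmt.Errorf("get %s: %w", url, err)
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        return "", fmt.Errorf("read body: %w", err)
    }
    return string(body), nil
}

func main() {
    body, err := fetchOnce("https://httpbin.org/get")
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    fmt.Printf("%d bytes\n", len(body))
}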
Exercise 4 - CSV Parser
Objective: Read CSV files, validate data, and handle errors gracefully.
Problem: Parse a CSV file of products and validate required fields.
Requirements:
- Read CSV file with headers
- Parse each row into a Product struct
- Validate that Price > 0 and Stock >= 0
- Return slice of valid products
- Report validation errors without stopping
Function Signature:
type Product struct {
    ID    int
    Name  string
    Price float64
    Stock int
}

func ParseProductsCSV(filename string) ([]Product, error)
Solution
// run
package main

import (
    "encoding/csv"
    "errors"
    "fmt"
    "io"
    "os"
    "strconv"
)

type Product struct {
    ID    int
    Name  string
    Price float64
    Stock int
}

// ParseProductsCSV reads and validates the product CSV
func ParseProductsCSV(filename string) ([]Product, error) {
    file, err := os.Open(filename)
    if err != nil {
        return nil, fmt.Errorf("open file: %w", err)
    }
    defer file.Close()

    reader := csv.NewReader(file)

    // Read header
    headers, err := reader.Read()
    if err != nil {
        return nil, fmt.Errorf("read headers: %w", err)
    }

    // Validate headers
    expected := []string{"ID", "Name", "Price", "Stock"}
    for i, h := range headers {
        if i >= len(expected) || h != expected[i] {
            return nil, fmt.Errorf("invalid headers: expected %v, got %v", expected, headers)
        }
    }

    var products []Product
    lineNum := 1

    // Read records
    for {
        record, err := reader.Read()
        if err != nil {
            if errors.Is(err, io.EOF) {
                break
            }
            return nil, fmt.Errorf("read row %d: %w", lineNum, err)
        }
        lineNum++

        if len(record) != 4 {
            fmt.Printf("Line %d: invalid column count\n", lineNum)
            continue
        }

        // Parse fields
        id, err := strconv.Atoi(record[0])
        if err != nil {
            fmt.Printf("Line %d: invalid ID: %v\n", lineNum, err)
            continue
        }

        price, err := strconv.ParseFloat(record[2], 64)
        if err != nil {
            fmt.Printf("Line %d: invalid price: %v\n", lineNum, err)
            continue
        }

        stock, err := strconv.Atoi(record[3])
        if err != nil {
            fmt.Printf("Line %d: invalid stock: %v\n", lineNum, err)
            continue
        }

        // Validate
        if price <= 0 {
            fmt.Printf("Line %d: price must be positive\n", lineNum)
            continue
        }
        if stock < 0 {
            fmt.Printf("Line %d: stock cannot be negative\n", lineNum)
            continue
        }

        products = append(products, Product{
            ID:    id,
            Name:  record[1],
            Price: price,
            Stock: stock,
        })
    }

    return products, nil
}

func main() {
    products, err := ParseProductsCSV("products.csv")
    if err != nil {
        fmt.Println("Error:", err)
        return
    }

    fmt.Printf("Loaded %d products:\n", len(products))
    for _, p := range products {
        fmt.Printf("  %d: %s - $%.2f (stock: %d)\n", p.ID, p.Name, p.Price, p.Stock)
    }
}
Key Takeaways:
- CSV files may have inconsistent data - validate each field
- Continue processing on individual row errors
- Use the strconv package for string-to-number conversions
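One subtlety: csv.Reader already enforces a consistent column count based on the first record it reads, so a short or long row surfaces as an error from Read (wrapping csv.ErrFieldCount) before any len(record) check runs. If you want to skip such rows instead of aborting, you can test for that error. A minimal sketch under that assumption; readRecords and the inline sample data are illustrative, not part of the exercise files:

package main

import (
    "encoding/csv"
    "errors"
    "fmt"
    "io"
    "strings"
)

// readRecords returns all well-formed rows, skipping rows whose column
// count does not match the header instead of aborting the whole parse.
func readRecords(r io.Reader) ([][]string, error) {
    reader := csv.NewReader(r)

    var rows [][]string
    for {
        record, err := reader.Read()
        if errors.Is(err, io.EOF) {
            return rows, nil
        }
        if errors.Is(err, csv.ErrFieldCount) {
            fmt.Println("skipping malformed row:", err)
            continue
        }
        if err != nil {
            return nil, err
        }
        rows = append(rows, record)
    }
}

func main() {
    data := "ID,Name,Price,Stock\n1,Widget,9.99,5\n2,Broken,1.50\n3,Gadget,19.99,2\n"
    rows, err := readRecords(strings.NewReader(data))
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    fmt.Println(len(rows), "rows kept") // header plus the two well-formed product rows
}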
Exercise 5 - Database CRUD
Objective: Implement Create, Read, Update, Delete operations using database/sql.
Problem: Build a user repository with prepared statements and transactions.
Requirements:
- Create a UserDB struct with a database connection
- Implement Create(), GetByID(), Update(), Delete()
- Use prepared statements to prevent SQL injection
- Use transactions for multi-step operations
- Handle database errors appropriately
Function Signatures:
type UserDB struct {
    db *sql.DB
}

func (udb *UserDB) Create(user User) (int64, error)
func (udb *UserDB) GetByID(id int64) (User, error)
func (udb *UserDB) Update(user User) error
func (udb *UserDB) Delete(id int64) error
Solution
// run
package main

import (
    "database/sql"
    "fmt"

    _ "github.com/mattn/go-sqlite3"
)

type User struct {
    ID     int64
    Name   string
    Email  string
    Age    int
    Active bool
}

type UserDB struct {
    db *sql.DB
}

// NewUserDB opens the database connection and creates the table
func NewUserDB(dbPath string) (*UserDB, error) {
    db, err := sql.Open("sqlite3", dbPath)
    if err != nil {
        return nil, fmt.Errorf("open database: %w", err)
    }

    // Create table
    query := `
    CREATE TABLE IF NOT EXISTS users (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name TEXT NOT NULL,
        email TEXT UNIQUE NOT NULL,
        age INTEGER NOT NULL,
        active BOOLEAN NOT NULL
    )
    `
    if _, err := db.Exec(query); err != nil {
        return nil, fmt.Errorf("create table: %w", err)
    }

    return &UserDB{db: db}, nil
}

// Create inserts a new user and returns its ID
func (udb *UserDB) Create(user User) (int64, error) {
    query := `INSERT INTO users (name, email, age, active) VALUES (?, ?, ?, ?)`

    result, err := udb.db.Exec(query, user.Name, user.Email, user.Age, user.Active)
    if err != nil {
        return 0, fmt.Errorf("insert user: %w", err)
    }

    id, err := result.LastInsertId()
    if err != nil {
        return 0, fmt.Errorf("get insert ID: %w", err)
    }

    return id, nil
}

// GetByID retrieves a user by ID
func (udb *UserDB) GetByID(id int64) (User, error) {
    query := `SELECT id, name, email, age, active FROM users WHERE id = ?`

    var user User
    err := udb.db.QueryRow(query, id).Scan(
        &user.ID, &user.Name, &user.Email, &user.Age, &user.Active,
    )

    if err == sql.ErrNoRows {
        return User{}, fmt.Errorf("user not found")
    }
    if err != nil {
        return User{}, fmt.Errorf("query user: %w", err)
    }

    return user, nil
}

// Update modifies an existing user
func (udb *UserDB) Update(user User) error {
    query := `UPDATE users SET name = ?, email = ?, age = ?, active = ? WHERE id = ?`

    result, err := udb.db.Exec(query, user.Name, user.Email, user.Age, user.Active, user.ID)
    if err != nil {
        return fmt.Errorf("update user: %w", err)
    }

    rows, err := result.RowsAffected()
    if err != nil {
        return fmt.Errorf("check rows affected: %w", err)
    }
    if rows == 0 {
        return fmt.Errorf("user not found")
    }

    return nil
}

// Delete removes a user by ID
func (udb *UserDB) Delete(id int64) error {
    query := `DELETE FROM users WHERE id = ?`

    result, err := udb.db.Exec(query, id)
    if err != nil {
        return fmt.Errorf("delete user: %w", err)
    }

    rows, err := result.RowsAffected()
    if err != nil {
        return fmt.Errorf("check rows affected: %w", err)
    }
    if rows == 0 {
        return fmt.Errorf("user not found")
    }

    return nil
}

func (udb *UserDB) Close() error {
    return udb.db.Close()
}

func main() {
    db, err := NewUserDB(":memory:")
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    defer db.Close()

    // Create
    user := User{Name: "Alice", Email: "alice@example.com", Age: 30, Active: true}
    id, err := db.Create(user)
    if err != nil {
        fmt.Println("Create error:", err)
        return
    }
    fmt.Printf("Created user ID: %d\n", id)

    // Read
    retrieved, err := db.GetByID(id)
    if err != nil {
        fmt.Println("Get error:", err)
        return
    }
    fmt.Printf("Retrieved: %+v\n", retrieved)

    // Update
    retrieved.Age = 31
    if err := db.Update(retrieved); err != nil {
        fmt.Println("Update error:", err)
        return
    }
    fmt.Println("Updated user age to 31")

    // Delete
    if err := db.Delete(id); err != nil {
        fmt.Println("Delete error:", err)
        return
    }
    fmt.Println("Deleted user")
}
Key Takeaways:
- Always use ? placeholders to prevent SQL injection
- Check RowsAffected() to verify operations succeeded
- Use QueryRow() for single results, Query() for multiple
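The requirements mention transactions, but the solution above never needs one because each operation is a single statement. For multi-statement work the usual pattern is Begin, Exec, Commit with a deferred Rollback. A minimal sketch reusing the UserDB and User types from the solution; the profiles table is hypothetical, not part of the exercise schema:

// CreateWithProfile inserts a user and a related profile row atomically.
func (udb *UserDB) CreateWithProfile(user User, bio string) (int64, error) {
    tx, err := udb.db.Begin()
    if err != nil {
        return 0, fmt.Errorf("begin tx: %w", err)
    }
    // After a successful Commit, the deferred Rollback returns sql.ErrTxDone and is harmless.
    defer tx.Rollback()

    res, err := tx.Exec(
        `INSERT INTO users (name, email, age, active) VALUES (?, ?, ?, ?)`,
        user.Name, user.Email, user.Age, user.Active,
    )
    if err != nil {
        return 0, fmt.Errorf("insert user: %w", err)
    }

    id, err := res.LastInsertId()
    if err != nil {
        return 0, fmt.Errorf("get insert ID: %w", err)
    }

    // Hypothetical second table; both inserts succeed or neither does.
    if _, err := tx.Exec(`INSERT INTO profiles (user_id, bio) VALUES (?, ?)`, id, bio); err != nil {
        return 0, fmt.Errorf("insert profile: %w", err)
    }

    if err := tx.Commit(); err != nil {
        return 0, fmt.Errorf("commit: %w", err)
    }
    return id, nil
}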
Exercise 6 - Context Timeout
Objective: Use context to manage timeouts and cancellation in function chains.
Problem: Implement a data processing pipeline with timeout control.
Requirements:
- Create ProcessData() that chains three operations
- Each operation checks context cancellation
- Pass context through the entire call chain
- Simulate work with time.Sleep
- Return an error if the context times out
Function Signature:
func ProcessData(ctx context.Context, data string) (string, error)
Solution
// run
package main

import (
    "context"
    "fmt"
    "time"
)

// step1 validates data
func step1(ctx context.Context, data string) (string, error) {
    select {
    case <-ctx.Done():
        return "", ctx.Err()
    case <-time.After(100 * time.Millisecond):
        return fmt.Sprintf("validated(%s)", data), nil
    }
}

// step2 transforms data
func step2(ctx context.Context, data string) (string, error) {
    select {
    case <-ctx.Done():
        return "", ctx.Err()
    case <-time.After(150 * time.Millisecond):
        return fmt.Sprintf("transformed(%s)", data), nil
    }
}

// step3 saves data
func step3(ctx context.Context, data string) (string, error) {
    select {
    case <-ctx.Done():
        return "", ctx.Err()
    case <-time.After(200 * time.Millisecond):
        return fmt.Sprintf("saved(%s)", data), nil
    }
}

// ProcessData chains the operations, passing the context through
func ProcessData(ctx context.Context, data string) (string, error) {
    result := data

    // Step 1: Validate
    validated, err := step1(ctx, result)
    if err != nil {
        return "", fmt.Errorf("step1: %w", err)
    }
    result = validated

    // Step 2: Transform
    transformed, err := step2(ctx, result)
    if err != nil {
        return "", fmt.Errorf("step2: %w", err)
    }
    result = transformed

    // Step 3: Save
    saved, err := step3(ctx, result)
    if err != nil {
        return "", fmt.Errorf("step3: %w", err)
    }

    return saved, nil
}

func main() {
    // Test with sufficient timeout
    ctx1, cancel1 := context.WithTimeout(context.Background(), 1*time.Second)
    defer cancel1()

    result, err := ProcessData(ctx1, "test-data")
    if err != nil {
        fmt.Println("Error:", err)
    } else {
        fmt.Println("Success:", result)
    }

    // Test with insufficient timeout
    ctx2, cancel2 := context.WithTimeout(context.Background(), 200*time.Millisecond)
    defer cancel2()

    result, err = ProcessData(ctx2, "test-data")
    if err != nil {
        fmt.Println("Timeout:", err)
    } else {
        fmt.Println("Success:", result)
    }
}
Key Takeaways:
- Always pass context.Context as the first parameter
- Use select with <-ctx.Done() to check cancellation
- Return ctx.Err() when the context is cancelled
Exercise 7 - Time Parsing
Objective: Parse dates in multiple formats and handle timezone conversions.
Problem: Parse log timestamps in various formats and convert to UTC.
Requirements:
- Support formats: RFC3339, "2006-01-02 15:04:05", Unix timestamp
- Implement ParseFlexibleTime() that tries all formats
- Convert the result to UTC
- Return error if all formats fail
- Handle timezone information correctly
Function Signature:
func ParseFlexibleTime(input string) (time.Time, error)
Solution
// run
package main

import (
    "fmt"
    "strconv"
    "time"
)

// ParseFlexibleTime parses a timestamp in multiple formats
func ParseFlexibleTime(input string) (time.Time, error) {
    formats := []string{
        time.RFC3339,          // "2006-01-02T15:04:05Z07:00"
        "2006-01-02 15:04:05", // Common log format
        "2006-01-02",          // Date only
        time.RFC1123,          // "Mon, 02 Jan 2006 15:04:05 MST"
    }

    // Try layout-based formats
    for _, format := range formats {
        t, err := time.Parse(format, input)
        if err == nil {
            return t.UTC(), nil
        }
    }

    // Try Unix timestamp
    if timestamp, err := strconv.ParseInt(input, 10, 64); err == nil {
        return time.Unix(timestamp, 0).UTC(), nil
    }

    return time.Time{}, fmt.Errorf("unable to parse time: %s", input)
}

func main() {
    tests := []string{
        "2024-10-21T14:30:00Z",
        "2024-10-21 14:30:00",
        "2024-10-21",
        "1729521000",
        "Mon, 21 Oct 2024 14:30:00 UTC",
    }

    for _, test := range tests {
        parsed, err := ParseFlexibleTime(test)
        if err != nil {
            fmt.Printf("Failed: %s - %v\n", test, err)
        } else {
            fmt.Printf("Parsed: %s -> %s\n", test, parsed.Format(time.RFC3339))
        }
    }
}
Key Takeaways:
- Go uses reference time "Mon Jan 2 15:04:05 MST 2006" for formats
- Always convert to UTC for consistent comparisons
- Try multiple formats for flexible parsing
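One timezone subtlety: layouts without zone information, such as "2006-01-02 15:04:05", are interpreted as UTC by time.Parse. If those log lines were actually written in a known local zone, time.ParseInLocation lets you say so before normalizing to UTC. A minimal sketch; the America/New_York zone is just an example:

package main

import (
    "fmt"
    "time"
)

func main() {
    loc, err := time.LoadLocation("America/New_York")
    if err != nil {
        fmt.Println("load location:", err)
        return
    }

    // Interpret the zone-less timestamp as New York local time...
    t, err := time.ParseInLocation("2006-01-02 15:04:05", "2024-10-21 14:30:00", loc)
    if err != nil {
        fmt.Println("parse:", err)
        return
    }

    // ...then convert to UTC for storage and comparison.
    fmt.Println(t.UTC().Format(time.RFC3339)) // 2024-10-21T18:30:00Z
}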
Exercise 8 - Regex Validator
Objective: Validate email addresses and phone numbers using regular expressions.
Problem: Create validators for common input formats.
Requirements:
- Implement ValidateEmail() - check basic email format
- Implement ValidatePhone() - validate US phone numbers
- Support phone formats: (555) 123-4567, 555-123-4567, 5551234567
- Return bool and error message
- Use the regexp package for validation
Function Signatures:
func ValidateEmail(email string) (bool, string)
func ValidatePhone(phone string) (bool, string)
Solution
// run
package main

import (
    "fmt"
    "regexp"
)

var (
    emailRegex = regexp.MustCompile(`^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$`)
    phoneRegex = regexp.MustCompile(`^(\(\d{3}\)\s?|\d{3}[-.]?)\d{3}[-.]?\d{4}$`)
)

// ValidateEmail validates email format
func ValidateEmail(email string) (bool, string) {
    if email == "" {
        return false, "email is required"
    }

    if !emailRegex.MatchString(email) {
        return false, "invalid email format"
    }

    return true, "valid email"
}

// ValidatePhone validates US phone numbers
func ValidatePhone(phone string) (bool, string) {
    if phone == "" {
        return false, "phone is required"
    }

    if !phoneRegex.MatchString(phone) {
        return false, "invalid phone format (expected (555) 123-4567, 555-123-4567, or 5551234567)"
    }

    return true, "valid phone"
}

func main() {
    // Test emails
    emails := []string{
        "user@example.com",
        "invalid.email",
        "test@domain",
        "",
    }

    fmt.Println("Email Validation:")
    for _, email := range emails {
        valid, msg := ValidateEmail(email)
        fmt.Printf("  %s: %v - %s\n", email, valid, msg)
    }

    // Test phones
    phones := []string{
        "(555) 123-4567",
        "555-123-4567",
        "5551234567",
        "123-456",
        "",
    }

    fmt.Println("\nPhone Validation:")
    for _, phone := range phones {
        valid, msg := ValidatePhone(phone)
        fmt.Printf("  %s: %v - %s\n", phone, valid, msg)
    }
}
Key Takeaways:
- Compile regex patterns once with regexp.MustCompile
- Use raw strings for regex patterns to avoid escaping
- Provide helpful error messages for invalid formats
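For email specifically, the standard library's net/mail parser accepts the full RFC 5322 address grammar and can complement a hand-rolled regex. A minimal sketch; validEmail is an illustrative helper name:

package main

import (
    "fmt"
    "net/mail"
)

// validEmail reports whether the input parses as an RFC 5322 address.
func validEmail(s string) bool {
    _, err := mail.ParseAddress(s)
    return err == nil
}

func main() {
    for _, s := range []string{"user@example.com", "invalid.email", "Alice <alice@example.com>"} {
        fmt.Printf("%-30s %v\n", s, validEmail(s))
    }
}

Note that mail.ParseAddress also accepts display-name forms like "Alice <alice@example.com>", so keep the regex if you only want bare addresses.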
Exercise 9 - Structured Logging
Objective: Use log/slog for leveled structured logging.
Problem: Create a logger that outputs structured JSON logs with different levels.
Requirements:
- Configure slog with a JSON handler
- Implement functions for Debug, Info, Warn, Error levels
- Include context fields
- Format timestamps in RFC3339
- Write logs to file
Function Signature:
func SetupLogger(filename string) (*slog.Logger, error)
func LogWithContext(logger *slog.Logger, level slog.Level, msg string, attrs ...any)
Solution
// run
package main

import (
    "context"
    "fmt"
    "log/slog"
    "os"
)

// SetupLogger creates a JSON logger writing to a file
func SetupLogger(filename string) (*slog.Logger, error) {
    file, err := os.OpenFile(filename, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0644)
    if err != nil {
        return nil, fmt.Errorf("open log file: %w", err)
    }

    handler := slog.NewJSONHandler(file, &slog.HandlerOptions{
        Level: slog.LevelDebug,
    })

    return slog.New(handler), nil
}

// LogWithContext logs with additional context attributes
func LogWithContext(logger *slog.Logger, level slog.Level, msg string, attrs ...any) {
    logger.Log(context.Background(), level, msg, attrs...)
}

func main() {
    logger, err := SetupLogger("app.log")
    if err != nil {
        fmt.Println("Setup error:", err)
        return
    }

    // Log various levels with context
    logger.Debug("Debug message", "component", "auth", "user_id", 123)
    logger.Info("User login", "user_id", 123, "ip", "192.168.1.1")
    logger.Warn("Rate limit approaching", "user_id", 123, "requests", 95)
    logger.Error("Database connection failed", "error", "timeout", "retries", 3)

    // Custom context logging
    LogWithContext(logger, slog.LevelInfo, "Request processed",
        "request_id", "abc-123",
        "user_id", 456,
        "duration_ms", 234,
        "status", 200,
    )

    fmt.Println("Logs written to app.log")
}
Key Takeaways:
- slog provides structured logging with key-value pairs
- Use the JSON handler for machine-readable logs
- Include context fields for better debugging
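When several log lines share the same fields (a request ID, a user ID), Logger.With attaches them once instead of repeating them at every call site. A minimal sketch writing to stdout; the attribute values are arbitrary:

package main

import (
    "log/slog"
    "os"
)

func main() {
    logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

    // Attach request-scoped attributes once; every line below includes them.
    reqLogger := logger.With("request_id", "abc-123", "user_id", 456)

    reqLogger.Info("Request received", "path", "/api/orders")
    reqLogger.Info("Request processed", "duration_ms", 234, "status", 200)
}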
Exercise 10 - Password Hashing
Objective: Securely hash and verify passwords using bcrypt.
Problem: Implement user authentication with secure password storage.
Requirements:
- Implement HashPassword() using bcrypt with cost 12
- Implement VerifyPassword() to check that a password matches a hash
- Never store passwords in plaintext
- Return appropriate errors for invalid inputs
- Handle bcrypt errors properly
Function Signatures:
func HashPassword(password string) (string, error)
func VerifyPassword(password, hash string) error
Solution
// run
package main

import (
    "fmt"

    "golang.org/x/crypto/bcrypt"
)

const bcryptCost = 12

// HashPassword creates a bcrypt hash of the password
func HashPassword(password string) (string, error) {
    if len(password) < 8 {
        return "", fmt.Errorf("password must be at least 8 characters")
    }

    hash, err := bcrypt.GenerateFromPassword([]byte(password), bcryptCost)
    if err != nil {
        return "", fmt.Errorf("hash password: %w", err)
    }

    return string(hash), nil
}

// VerifyPassword checks whether the password matches the hash
func VerifyPassword(password, hash string) error {
    err := bcrypt.CompareHashAndPassword([]byte(hash), []byte(password))
    if err == bcrypt.ErrMismatchedHashAndPassword {
        return fmt.Errorf("incorrect password")
    }
    if err != nil {
        return fmt.Errorf("verify password: %w", err)
    }

    return nil
}

func main() {
    password := "MySecurePassword123!"

    // Hash password
    hash, err := HashPassword(password)
    if err != nil {
        fmt.Println("Hash error:", err)
        return
    }
    fmt.Printf("Hash: %s\n", hash)

    // Verify correct password
    if err := VerifyPassword(password, hash); err != nil {
        fmt.Println("Verify error:", err)
    } else {
        fmt.Println("Password verified successfully")
    }

    // Verify wrong password
    if err := VerifyPassword("WrongPassword", hash); err != nil {
        fmt.Println("Wrong password:", err)
    }
}
Key Takeaways:
- Never store passwords in plaintext
- bcrypt automatically handles salt generation
- Use cost 10-12 for good security/performance balance
Exercise 11 - Template Rendering
Objective: Use text/template to generate formatted output.
Problem: Create email templates with custom functions.
Requirements:
- Define template for welcome email
- Add a custom ToUpper function for uppercase text
- Use template actions for conditionals and loops
- Execute template with data
- Write output to string
Function Signature:
func RenderWelcomeEmail(name string, isPremium bool, features []string) (string, error)
Solution
// run
package main

import (
    "bytes"
    "fmt"
    "strings"
    "text/template"
)

const welcomeTemplate = `
Welcome {{.Name | ToUpper}}!

Thank you for joining our service.

{{if .IsPremium}}
As a PREMIUM member, you have access to:
{{range .Features}}
  - {{.}}
{{end}}
{{else}}
You are currently on the FREE plan.
Upgrade to Premium for more features!
{{end}}

Best regards,
The Team
`

// RenderWelcomeEmail generates the welcome email from the template
func RenderWelcomeEmail(name string, isPremium bool, features []string) (string, error) {
    funcMap := template.FuncMap{
        "ToUpper": strings.ToUpper,
    }

    tmpl, err := template.New("welcome").Funcs(funcMap).Parse(welcomeTemplate)
    if err != nil {
        return "", fmt.Errorf("parse template: %w", err)
    }

    data := struct {
        Name      string
        IsPremium bool
        Features  []string
    }{
        Name:      name,
        IsPremium: isPremium,
        Features:  features,
    }

    var buf bytes.Buffer
    if err := tmpl.Execute(&buf, data); err != nil {
        return "", fmt.Errorf("execute template: %w", err)
    }

    return buf.String(), nil
}

func main() {
    // Premium user
    email1, err := RenderWelcomeEmail("Alice", true, []string{
        "Unlimited storage",
        "Priority support",
        "Advanced analytics",
    })
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    fmt.Println(email1)

    // Free user
    email2, err := RenderWelcomeEmail("Bob", false, nil)
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    fmt.Println(email2)
}
Key Takeaways:
- Use FuncMap to add custom template functions
- Templates support conditionals and loops
- The pipe operator | chains functions in templates
Exercise 12 - Flag Parsing
Objective: Build a CLI with subcommands using the flag package.
Problem: Create a tool with multiple subcommands.
Requirements:
- Implement subcommands: create, list, delete
- Parse flags specific to each subcommand
- Show usage help for invalid commands
- Validate required flags
- Execute appropriate action based on command
Function Signature:
func ParseAndExecute(args []string) error
Solution
// run
package main

import (
    "flag"
    "fmt"
    "os"
)

// ParseAndExecute handles CLI commands
func ParseAndExecute(args []string) error {
    if len(args) < 1 {
        return fmt.Errorf("usage: program <command> [flags]")
    }

    command := args[0]

    switch command {
    case "create":
        return executeCreate(args[1:])
    case "list":
        return executeList(args[1:])
    case "delete":
        return executeDelete(args[1:])
    default:
        return fmt.Errorf("unknown command: %s", command)
    }
}

func executeCreate(args []string) error {
    fs := flag.NewFlagSet("create", flag.ExitOnError)
    name := fs.String("name", "", "Item name")
    description := fs.String("desc", "", "Item description")

    fs.Parse(args)

    if *name == "" {
        return fmt.Errorf("--name is required")
    }

    fmt.Printf("Creating item: %s (%s)\n", *name, *description)
    return nil
}

func executeList(args []string) error {
    fs := flag.NewFlagSet("list", flag.ExitOnError)
    limit := fs.Int("limit", 10, "Number of items to list")

    fs.Parse(args)

    fmt.Printf("Listing %d items...\n", *limit)
    return nil
}

func executeDelete(args []string) error {
    fs := flag.NewFlagSet("delete", flag.ExitOnError)
    id := fs.Int("id", 0, "Item ID")

    fs.Parse(args)

    if *id == 0 {
        return fmt.Errorf("--id is required")
    }

    fmt.Printf("Deleting item ID: %d\n", *id)
    return nil
}

func main() {
    if err := ParseAndExecute(os.Args[1:]); err != nil {
        fmt.Fprintf(os.Stderr, "Error: %v\n", err)
        os.Exit(1)
    }
}
Key Takeaways:
- Use flag.NewFlagSet for subcommand-specific flags
- Validate required flags after parsing
- Provide clear usage messages for invalid input
Exercise 13 - Concurrent File Processing
Objective: Process multiple files concurrently using a worker pool pattern.
Problem: Count words in multiple files using goroutines and channels.
Requirements:
- Implement worker pool with N workers
- Send file paths through channel to workers
- Workers count words and send results
- Collect results and aggregate totals
- Handle errors from any worker
Function Signature:
func ProcessFilesConcurrently(filenames []string, workers int) (map[string]int, error)
Solution
// run
package main

import (
    "bufio"
    "fmt"
    "os"
    "sync"
)

type result struct {
    filename string
    count    int
    err      error
}

// ProcessFilesConcurrently counts words in files using a worker pool
func ProcessFilesConcurrently(filenames []string, workers int) (map[string]int, error) {
    jobs := make(chan string, len(filenames))
    results := make(chan result, len(filenames))

    // Start workers
    var wg sync.WaitGroup
    for i := 0; i < workers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for filename := range jobs {
                count, err := countWords(filename)
                results <- result{filename, count, err}
            }
        }()
    }

    // Send jobs
    for _, filename := range filenames {
        jobs <- filename
    }
    close(jobs)

    // Wait for completion
    go func() {
        wg.Wait()
        close(results)
    }()

    // Collect results
    counts := make(map[string]int)
    for res := range results {
        if res.err != nil {
            return nil, fmt.Errorf("%s: %w", res.filename, res.err)
        }
        counts[res.filename] = res.count
    }

    return counts, nil
}

func countWords(filename string) (int, error) {
    file, err := os.Open(filename)
    if err != nil {
        return 0, err
    }
    defer file.Close()

    scanner := bufio.NewScanner(file)
    scanner.Split(bufio.ScanWords)

    count := 0
    for scanner.Scan() {
        count++
    }

    return count, scanner.Err()
}

func main() {
    files := []string{"file1.txt", "file2.txt", "file3.txt"}

    counts, err := ProcessFilesConcurrently(files, 2)
    if err != nil {
        fmt.Println("Error:", err)
        return
    }

    total := 0
    for filename, count := range counts {
        fmt.Printf("%s: %d words\n", filename, count)
        total += count
    }
    fmt.Printf("Total: %d words\n", total)
}
Key Takeaways:
- Worker pool pattern limits concurrent goroutines
- Use channels to distribute work and collect results
- WaitGroup ensures all workers complete before closing results channel
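If a semi-official extension module is acceptable, golang.org/x/sync/errgroup handles the WaitGroup and first-error bookkeeping for you, and SetLimit caps concurrency the way the worker pool does. A minimal sketch under the assumption that it drops into the same package as the solution above so it can reuse countWords:

import (
    "fmt"
    "sync"

    "golang.org/x/sync/errgroup"
)

func processWithErrgroup(filenames []string, workers int) (map[string]int, error) {
    var (
        g      errgroup.Group
        mu     sync.Mutex
        counts = make(map[string]int)
    )
    g.SetLimit(workers) // at most `workers` goroutines run at once

    for _, filename := range filenames {
        filename := filename // capture loop variable (needed before Go 1.22)
        g.Go(func() error {
            count, err := countWords(filename) // countWords from the solution above
            if err != nil {
                return fmt.Errorf("%s: %w", filename, err)
            }
            mu.Lock()
            counts[filename] = count
            mu.Unlock()
            return nil
        })
    }

    if err := g.Wait(); err != nil {
        return nil, err
    }
    return counts, nil
}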
Exercise 14 - Middleware Chain
Objective: Implement HTTP middleware for logging, authentication, and recovery.
Problem: Build middleware stack for HTTP server.
Requirements:
- Implement LoggingMiddleware - log requests and response times
- Implement AuthMiddleware - check the Authorization header
- Implement RecoveryMiddleware - recover from panics
- Chain middleware in the correct order
- Use the http.Handler interface
Function Signatures:
func LoggingMiddleware(next http.Handler) http.Handler
func AuthMiddleware(next http.Handler) http.Handler
func RecoveryMiddleware(next http.Handler) http.Handler
Solution
// run
package main

import (
    "fmt"
    "log"
    "net/http"
    "time"
)

// LoggingMiddleware logs requests
func LoggingMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        start := time.Now()

        next.ServeHTTP(w, r)

        duration := time.Since(start)
        log.Printf("%s %s - %v", r.Method, r.URL.Path, duration)
    })
}

// AuthMiddleware checks authorization
func AuthMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        token := r.Header.Get("Authorization")

        if token == "" {
            http.Error(w, "Unauthorized", http.StatusUnauthorized)
            return
        }

        // Validate token
        if token != "Bearer valid-token" {
            http.Error(w, "Invalid token", http.StatusForbidden)
            return
        }

        next.ServeHTTP(w, r)
    })
}

// RecoveryMiddleware recovers from panics
func RecoveryMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        defer func() {
            if err := recover(); err != nil {
                log.Printf("Panic recovered: %v", err)
                http.Error(w, "Internal Server Error", http.StatusInternalServerError)
            }
        }()

        next.ServeHTTP(w, r)
    })
}

func main() {
    mux := http.NewServeMux()

    mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("Hello, World!"))
    })

    mux.HandleFunc("/panic", func(w http.ResponseWriter, r *http.Request) {
        panic("test panic")
    })

    // Chain middleware: Recovery -> Logging -> Auth -> Handler
    handler := RecoveryMiddleware(
        LoggingMiddleware(
            AuthMiddleware(mux),
        ),
    )

    fmt.Println("Server starting on :8080")
    log.Fatal(http.ListenAndServe(":8080", handler))
}
Key Takeaways:
- Middleware wraps handlers to add cross-cutting concerns
- Order matters: recovery should be outermost, auth before business logic
- Use defer in the recovery middleware to catch panics
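Once the stack grows beyond three middleware, the nested call syntax in main gets hard to read. A small helper that many projects define (not part of the exercise; it assumes the same package as the solution above) flattens the chain:

// Chain wraps h so that the first middleware listed becomes the outermost wrapper.
func Chain(h http.Handler, middleware ...func(http.Handler) http.Handler) http.Handler {
    // Apply in reverse so the list reads outermost-to-innermost.
    for i := len(middleware) - 1; i >= 0; i-- {
        h = middleware[i](h)
    }
    return h
}

With this helper, the chain in main becomes handler := Chain(mux, RecoveryMiddleware, LoggingMiddleware, AuthMiddleware), which is equivalent to the nested version above.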
Exercise 15 - WebSocket Echo Server
Objective: Build a basic WebSocket server using gorilla/websocket.
Problem: Create echo server that reflects messages back to clients.
Requirements:
- Upgrade HTTP connection to WebSocket
- Read messages from client
- Echo messages back to sender
- Handle connection errors gracefully
- Close connection when client disconnects
Function Signature:
func HandleWebSocket(w http.ResponseWriter, r *http.Request)
Solution
// run
package main

import (
    "fmt"
    "log"
    "net/http"

    "github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{
    CheckOrigin: func(r *http.Request) bool {
        return true // Allow all origins
    },
}

// HandleWebSocket upgrades and handles WebSocket connections
func HandleWebSocket(w http.ResponseWriter, r *http.Request) {
    conn, err := upgrader.Upgrade(w, r, nil)
    if err != nil {
        log.Printf("Upgrade error: %v", err)
        return
    }
    defer conn.Close()

    log.Printf("Client connected from %s", r.RemoteAddr)

    for {
        // Read message
        messageType, message, err := conn.ReadMessage()
        if err != nil {
            if websocket.IsUnexpectedCloseError(err, websocket.CloseGoingAway, websocket.CloseAbnormalClosure) {
                log.Printf("Read error: %v", err)
            }
            break
        }

        log.Printf("Received: %s", message)

        // Echo message back
        if err := conn.WriteMessage(messageType, message); err != nil {
            log.Printf("Write error: %v", err)
            break
        }
    }

    log.Printf("Client disconnected from %s", r.RemoteAddr)
}

func main() {
    http.HandleFunc("/ws", HandleWebSocket)

    fmt.Println("WebSocket server starting on :8080")
    fmt.Println("Connect with: ws://localhost:8080/ws")
    log.Fatal(http.ListenAndServe(":8080", nil))
}
Key Takeaways:
- Use websocket.Upgrader to upgrade HTTP to WebSocket
- Handle the connection lifecycle: read, write, close
- Check for expected close errors vs unexpected errors
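To try the echo server without a browser, gorilla/websocket also provides a client-side dialer. A minimal sketch of a client; run it in a second terminal while the server above is listening (the message text is arbitrary):

package main

import (
    "log"

    "github.com/gorilla/websocket"
)

func main() {
    // Connect to the echo server from the solution above.
    conn, _, err := websocket.DefaultDialer.Dial("ws://localhost:8080/ws", nil)
    if err != nil {
        log.Fatal("dial:", err)
    }
    defer conn.Close()

    // Send one message and print the echoed reply.
    if err := conn.WriteMessage(websocket.TextMessage, []byte("hello")); err != nil {
        log.Fatal("write:", err)
    }
    _, reply, err := conn.ReadMessage()
    if err != nil {
        log.Fatal("read:", err)
    }
    log.Printf("echoed: %s", reply)
}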
Summary
Congratulations on completing the Standard Library exercises! You've now practiced:
Core Skills Mastered:
- File I/O - Reading and writing files efficiently
- JSON Handling - Marshaling and unmarshaling structured data
- HTTP Operations - Building resilient clients and servers
- Database Access - CRUD operations with prepared statements
- Context Management - Timeouts and cancellation
- Time Handling - Parsing and formatting timestamps
- Validation - Using regex for input validation
- Logging - Structured logging with slog
- Security - Password hashing with bcrypt
- Templating - Generating formatted output
- CLI Building - Flag parsing and subcommands
- Concurrency - Worker pools and channels
- Middleware - HTTP middleware patterns
- WebSockets - Real-time bidirectional communication
Key Takeaways:
- The Go standard library is comprehensive and production-ready
- Always handle errors explicitly
- Use contexts for timeout and cancellation
- Validate all external input
- Prefer standard library over third-party dependencies when possible
- Use interfaces for flexibility
Next Steps:
Progress to Section 3: Advanced Topics to learn about:
- Generics and type parameters
- Reflection and metaprogramming
- Design patterns in Go
- Performance optimization techniques
Or dive into the Section 2 Project to build a complete application using everything you've learned!
Additional Practice:
If you want more challenge, try combining multiple exercises:
- Build a REST API with authentication, database access, and logging
- Create a CLI tool that processes files concurrently with progress tracking
- Implement a web service with middleware, WebSockets, and metrics
Keep practicing, and you'll become proficient with Go's powerful standard library!