Why This Matters - The Foundation of Data Storage
Arrays and slices are fundamental data structures that every Go programmer must master. They store ordered collections of elements - the building blocks for virtually every application.
Understanding these structures is crucial because:
- Data Processing: Most applications work with collections of items
- Performance: Choosing the right structure impacts memory usage and speed
- API Design: Function signatures often use slices for flexible data handling
- Memory Management: Understanding how Go manages memory under the hood prevents bugs and leaks
Real-World Impact: Web APIs return slices of data, database query results come back as slices, and configuration files are read into in-memory collections. Mastering slices and arrays means you can handle these day-to-day data processing tasks efficiently.
Learning Objectives
By the end of this article, you will:
- ✅ Understand the key differences between arrays and slices
- ✅ Master memory layout and performance implications
- ✅ Choose the right structure for your use case
- ✅ Use slicing operations efficiently and safely
- ✅ Implement common algorithms with optimal performance
- ✅ Avoid memory leaks and performance pitfalls
- ✅ Handle 2D and multi-dimensional data effectively
- ✅ Apply best practices for production code
Core Concepts - Understanding the Fundamentals
Arrays vs Slices: The Crucial Choice
Arrays are fixed-size containers stored contiguously in memory. Think of them as exact-sized parking lots - once built, they always have the same number of spaces.
1// Arrays: Fixed, value types
2var numbers [5]int // Exactly 5 integers
3var matrix [3][4]float64 // 3x4 matrix of floats
4var ipv4 [4]byte // Always 4 bytes
Slices are dynamic views into underlying arrays. Think of them as variable-size windows that can grow, shrink, and point to different parts of data.
1// Slices: Dynamic, reference types
2var data []int // Can grow/shrink
3var items []string // Variable length
4results := make([]Result, 0, 100) // Pre-allocated
💡 Key Insight: Slices are the everyday choice. Arrays are specialized tools for fixed-size data or performance-critical code.
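To make the relationship concrete, here is a minimal sketch (not part of the examples above) showing how the two interact: slicing an array gives a view over the same memory, and since Go 1.20 a slice can be converted back into an array value, which copies the data.

package main

import "fmt"

func main() {
	// An array: fixed size, value semantics
	arr := [4]byte{192, 168, 1, 1}

	// Slicing an array creates a slice header that points at the array's memory
	s := arr[:]
	s[0] = 10
	fmt.Println(arr) // [10 168 1 1] - the array sees the change

	// Go 1.20+: converting a slice to an array value copies the data
	// (it panics if the slice is shorter than the array type)
	back := [4]byte(s)
	back[1] = 99
	fmt.Println(arr, back) // modifying the copy does not touch arr
}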
Memory Layout: Understanding Performance
Array Memory Layout:
[10][20][30][40][50]            // Contiguous in memory
  ^    ^    ^    ^    ^
0x100 0x108 0x110 0x118 0x120   // 8-byte ints on a 64-bit system
Slice Memory Layout:
Slice Header:
┌─────────────┬─────────────┬─────────────┐
│ pointer │ length │ capacity │
└─────────────┴─────────────┴─────────────┘
Underlying Array:
[10][20][30][40][50]            // Contiguous memory the header points to
  ^    ^    ^    ^    ^
0x200 0x208 0x210 0x218 0x220   // 8-byte ints, as above
The Three Components of a Slice
Every slice in Go consists of three pieces of information:
- Pointer: Memory address of the first element
- Length: Number of elements currently in the slice
- Capacity: Maximum number of elements the slice can hold without reallocation
1// run
2package main
3
4import (
5 "fmt"
6 "unsafe"
7)
8
9func main() {
10 // Create a slice
11 s := make([]int, 5, 10)
12
13 fmt.Printf("Slice: %v\n", s)
14 fmt.Printf("Length: %d\n", len(s))
15 fmt.Printf("Capacity: %d\n", cap(s))
16 fmt.Printf("Size of slice header: %d bytes\n", unsafe.Sizeof(s))
17
18 // The slice header is always 24 bytes (on 64-bit systems)
19 // - 8 bytes for pointer
20 // - 8 bytes for length
21 // - 8 bytes for capacity
22
23 // Add elements
24 s = append(s, 1, 2, 3)
25 fmt.Printf("\nAfter append:\n")
26 fmt.Printf("Length: %d\n", len(s))
27 fmt.Printf("Capacity: %d\n", cap(s))
28}
Understanding Capacity Growth:
When a slice's capacity is exceeded, Go allocates a new underlying array with larger capacity and copies the existing elements.
1// run
2package main
3
4import "fmt"
5
6func main() {
7 var s []int
8 fmt.Printf("Initial: len=%d cap=%d\n", len(s), cap(s))
9
10 // Observe capacity growth pattern
11 for i := 0; i < 20; i++ {
12 oldCap := cap(s)
13 s = append(s, i)
14 newCap := cap(s)
15
16 if oldCap != newCap {
17 fmt.Printf("After %2d appends: len=%2d cap=%2d (grew from %2d)\n",
18 i+1, len(s), newCap, oldCap)
19 }
20 }
21
22 // Growth strategy (implementation detail; Go 1.18+):
23 // - Small slices (under 256 elements): capacity roughly doubles
24 // - Larger slices: growth tapers toward about 1.25x per reallocation
25}
Arrays: Fixed-Size Collections
Arrays have their size as part of their type. This means [3]int and [5]int are completely different types.
1// run
2package main
3
4import "fmt"
5
6func main() {
7 // Arrays are value types
8 a := [3]int{1, 2, 3}
9 b := a // Creates a copy
10
11 b[0] = 99
12
13 fmt.Printf("Array a: %v\n", a) // [1 2 3] - unchanged
14 fmt.Printf("Array b: %v\n", b) // [99 2 3] - modified
15
16 // Arrays passed to functions are copied
17 modifyArray(a)
18 fmt.Printf("After function: %v\n", a) // [1 2 3] - unchanged
19
20 // Size is part of the type
21 var x [3]int
22 var y [5]int
23 // x = y // Compile error: cannot assign [5]int to [3]int
24
25 fmt.Printf("Type of x: %T\n", x)
26 fmt.Printf("Type of y: %T\n", y)
27}
28
29func modifyArray(arr [3]int) {
30 arr[0] = 999 // Modifies the copy, not the original
31}
When to Use Arrays:
- Known fixed size at compile time - IPv4 addresses [4]byte, RGB colors [3]uint8
- Embedding in structs - when you want value semantics
- Performance critical code - Avoid heap allocations
- Cryptographic operations - Fixed-size buffers for keys, hashes
1// run
2package main
3
4import (
5 "fmt"
6 "crypto/sha256"
7)
8
9type IPv4 [4]byte
10
11type Color [3]uint8 // RGB
12
13func main() {
14 // IPv4 addresses are naturally fixed-size
15 ip := IPv4{192, 168, 1, 1}
16 fmt.Printf("IP Address: %d.%d.%d.%d\n", ip[0], ip[1], ip[2], ip[3])
17
18 // Colors are fixed RGB values
19 red := Color{255, 0, 0}
20 fmt.Printf("Red color: RGB(%d, %d, %d)\n", red[0], red[1], red[2])
21
22 // Cryptographic hashes are fixed size
23 data := []byte("Hello, World!")
24 hash := sha256.Sum256(data)
25 fmt.Printf("SHA-256 hash length: %d bytes\n", len(hash))
26 fmt.Printf("Hash (first 8 bytes): %x\n", hash[:8])
27}
Slices: Dynamic Views
Slices are the workhorse of Go collections. They provide a flexible, efficient way to work with sequences of data.
1// run
2package main
3
4import "fmt"
5
6func main() {
7 // Different ways to create slices
8
9 // 1. Slice literal
10 s1 := []int{1, 2, 3, 4, 5}
11 fmt.Printf("Literal: %v (len=%d cap=%d)\n", s1, len(s1), cap(s1))
12
13 // 2. Make with length
14 s2 := make([]int, 5)
15 fmt.Printf("Make with length: %v (len=%d cap=%d)\n", s2, len(s2), cap(s2))
16
17 // 3. Make with length and capacity
18 s3 := make([]int, 5, 10)
19 fmt.Printf("Make with cap: %v (len=%d cap=%d)\n", s3, len(s3), cap(s3))
20
21 // 4. From an array
22 arr := [5]int{10, 20, 30, 40, 50}
23 s4 := arr[1:4] // Slice from index 1 to 3 (4 not included)
24 fmt.Printf("From array: %v (len=%d cap=%d)\n", s4, len(s4), cap(s4))
25
26 // 5. Nil slice
27 var s5 []int
28 fmt.Printf("Nil slice: %v (len=%d cap=%d) isNil=%t\n",
29 s5, len(s5), cap(s5), s5 == nil)
30
31 // 6. Empty slice (not nil)
32 s6 := []int{}
33 fmt.Printf("Empty slice: %v (len=%d cap=%d) isNil=%t\n",
34 s6, len(s6), cap(s6), s6 == nil)
35}
Slice Internals - Deep Dive
Slicing Operations
Slicing creates a new slice header that points to the same underlying array.
1// run
2package main
3
4import "fmt"
5
6func main() {
7 original := []int{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
8
9 // Different slicing operations
10 fmt.Printf("original: %v\n", original)
11
12 // From start to index
13 fmt.Printf("[:5]: %v\n", original[:5]) // [0 1 2 3 4]
14
15 // From index to end
16 fmt.Printf("[5:]: %v\n", original[5:]) // [5 6 7 8 9]
17
18 // Between indices
19 fmt.Printf("[2:7]: %v\n", original[2:7]) // [2 3 4 5 6]
20
21 // Full slice (creates new header, same array)
22 fmt.Printf("[:]: %v\n", original[:]) // [0 1 2 3 4 5 6 7 8 9]
23
24 // Three-index slice: [low:high:max]
25 // Controls both length and capacity
26 s := original[2:5:7]
27 fmt.Printf("\n[2:5:7]: %v (len=%d cap=%d)\n", s, len(s), cap(s))
28 // Slice from index 2 to 5 (length = 3)
29 // With capacity limited to index 7 (capacity = 5)
30}
The Three-Index Slice Expression:
The form a[low:high:max] constructs a slice with:
- Length: high - low
- Capacity: max - low
This is useful for preventing slices from accessing more of the underlying array than intended.
1// run
2package main
3
4import "fmt"
5
6func main() {
7 data := []int{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
8
9 // Regular slice: shares full capacity with original
10 s1 := data[2:5]
11 fmt.Printf("s1: %v (len=%d cap=%d)\n", s1, len(s1), cap(s1))
12 // Can append and potentially modify elements beyond length
13 s1 = append(s1, 99)
14 fmt.Printf("After append: %v\n", data) // [0 1 2 3 4 99 6 7 8 9]
15
16 // Three-index slice: limits capacity
17 data = []int{0, 1, 2, 3, 4, 5, 6, 7, 8, 9} // Reset
18 s2 := data[2:5:5]
19 fmt.Printf("\ns2: %v (len=%d cap=%d)\n", s2, len(s2), cap(s2))
20 // Append triggers reallocation, doesn't affect original
21 s2 = append(s2, 99)
22 fmt.Printf("After append: data=%v\n", data) // Unchanged
23 fmt.Printf("After append: s2=%v\n", s2)
24}
Append and Growth
The append function is crucial for working with slices. Understanding its behavior is key to writing efficient code.
1// run
2package main
3
4import "fmt"
5
6func main() {
7 demonstrateAppend()
8 demonstrateGrowth()
9}
10
11func demonstrateAppend() {
12 fmt.Println("=== Append Behavior ===")
13
14 s := make([]int, 3, 5)
15 fmt.Printf("Initial: %v (len=%d cap=%d)\n", s, len(s), cap(s))
16
17 // Append within capacity
18 s = append(s, 10)
19 fmt.Printf("After append(10): %v (len=%d cap=%d)\n", s, len(s), cap(s))
20
21 s = append(s, 20)
22 fmt.Printf("After append(20): %v (len=%d cap=%d)\n", s, len(s), cap(s))
23
24 // Append beyond capacity - triggers reallocation
25 s = append(s, 30)
26 fmt.Printf("After append(30): %v (len=%d cap=%d)\n", s, len(s), cap(s))
27
28 // Append multiple elements
29 s = append(s, 40, 50, 60)
30 fmt.Printf("After append(40,50,60): %v (len=%d cap=%d)\n", s, len(s), cap(s))
31
32 // Append another slice
33 more := []int{70, 80, 90}
34 s = append(s, more...)
35 fmt.Printf("After append(more...): %v (len=%d cap=%d)\n", s, len(s), cap(s))
36}
37
38func demonstrateGrowth() {
39 fmt.Println("\n=== Growth Strategy ===")
40
41 s := []int{}
42 prevCap := 0
43
44 for i := 0; i < 2048; i++ {
45 s = append(s, i)
46 newCap := cap(s)
47
48 if newCap != prevCap {
49 growthFactor := float64(newCap) / float64(prevCap)
50 if prevCap == 0 {
51 growthFactor = 0
52 }
53 fmt.Printf("Cap %4d → %4d (%.2fx growth)\n", prevCap, newCap, growthFactor)
54 prevCap = newCap
55 }
56 }
57}
Copy: Creating Independent Copies
The copy function creates independent copies of slice data.
1// run
2package main
3
4import "fmt"
5
6func main() {
7 source := []int{1, 2, 3, 4, 5}
8
9 // Method 1: Using copy
10 dest1 := make([]int, len(source))
11 n := copy(dest1, source)
12 fmt.Printf("Copied %d elements: %v\n", n, dest1)
13
14 // Modify dest1
15 dest1[0] = 999
16 fmt.Printf("Source: %v (unchanged)\n", source)
17 fmt.Printf("Dest1: %v (modified)\n", dest1)
18
19 // Copy with different sizes
20 fmt.Println("\n=== Copy with different sizes ===")
21
22 small := make([]int, 3)
23 n = copy(small, source)
24 fmt.Printf("Copied %d elements to smaller slice: %v\n", n, small)
25
26 large := make([]int, 10)
27 n = copy(large, source)
28 fmt.Printf("Copied %d elements to larger slice: %v\n", n, large)
29
30 // Copy overlapping slices
31 fmt.Println("\n=== Overlapping copy ===")
32 data := []int{1, 2, 3, 4, 5}
33 copy(data[2:], data[:3]) // Shift elements right
34 fmt.Printf("After overlap copy: %v\n", data)
35}
Practical Examples - Working with Data Structures
Example 1: Array Fundamentals
1// run
2package main
3
4import "fmt"
5
6func main() {
7 // Declaration and initialization
8 var scores [5]int // Zero-initialized: [0 0 0 0 0]
9 grades := [3]string{"A", "B", "C"} // Literal: [A B C]
10 numbers := [...]int{1, 2, 3} // Compiler counts: [1 2 3]
11
12 fmt.Println("Scores:", scores)
13 fmt.Println("Grades:", grades)
14 fmt.Println("Numbers:", numbers)
15
16 // Access and modification
17 scores[0] = 95
18 scores[2] = 88
19 fmt.Println("Updated scores:", scores)
20
21 // Array length is part of type
22 // differentSizes() won't compile - [3]int vs [5]int are different types!
23 fmt.Printf("Array length: %d\n", len(scores))
24}
Example 2: Slice Creation and Operations
1// run
2package main
3
4import "fmt"
5
6func main() {
7 // Method 1: Slice literal
8 fruits := []string{"apple", "banana", "cherry"}
9 fmt.Println("Fruits:", fruits)
10
11 // Method 2: Make with length and capacity
12 data := make([]int, 3, 10) // len=3, cap=10
13 fmt.Printf("Data: len=%d, cap=%d\n", len(data), cap(data))
14
15 // Method 3: Growing slices with append
16 numbers := []int{1, 2, 3}
17 numbers = append(numbers, 4, 5)
18 fmt.Println("Appended:", numbers)
19
20 // Method 4: Slicing existing data
21 subset := fruits[1:3] // ["banana", "cherry"]
22 fmt.Println("Subset:", subset)
23
24 // Method 5: Copying slices
25 src := []int{10, 20, 30}
26 dst := make([]int, len(src))
27 n := copy(dst, src)
28 fmt.Printf("Copied %d elements: %v\n", n, dst)
29}
Example 3: Length vs Capacity in Action
1// run
2package main
3
4import "fmt"
5
6func main() {
7 // Understanding growth patterns
8 s := make([]int, 0, 2) // Start with capacity 2
9
10 fmt.Printf("Initial: len=%d, cap=%d, %v\n", len(s), cap(s), s)
11
12 // First append - fits in capacity
13 s = append(s, 1)
14 fmt.Printf("After append 1: len=%d, cap=%d, %v\n", len(s), cap(s), s)
15
16 // Second append - fits in capacity
17 s = append(s, 2)
18 fmt.Printf("After append 2: len=%d, cap=%d, %v\n", len(s), cap(s), s)
19
20 // Third append - triggers reallocation
21 s = append(s, 3)
22 fmt.Printf("After append 3: len=%d, cap=%d, %v\n", len(s), cap(s), s)
23
24 // Pattern: capacity roughly doubles for small slices, then growth tapers toward ~1.25x (exact thresholds are an implementation detail)
25 for i := 0; i < 20; i++ {
26 s = append(s, i)
27 }
28 fmt.Printf("After 20 appends: len=%d, cap=%d\n", len(s), cap(s))
29}
Example 4: Slice Sharing and Memory Safety
1// run
2package main
3
4import "fmt"
5
6func main() {
7 original := []int{1, 2, 3, 4, 5}
8
9 // Create slices that share underlying array
10 slice1 := original[1:4] // [2, 3, 4]
11 slice2 := original[2:5] // [3, 4, 5]
12
13 fmt.Println("Original:", original)
14 fmt.Println("Slice1:", slice1)
15 fmt.Println("Slice2:", slice2)
16
17 // Modify through slice1
18 slice1[1] = 99 // Changes original[2] to 99
19
20 fmt.Println("\nAfter modifying slice1[1] = 99:")
21 fmt.Println("Original:", original) // [1, 2, 99, 4, 5]
22 fmt.Println("Slice1:", slice1) // [2, 99, 4]
23 fmt.Println("Slice2:", slice2) // [99, 4, 5] - also changed!
24
25 // Create independent copy
26 independent := make([]int, len(original))
27 copy(independent, original)
28 independent[1] = 88
29
30 fmt.Println("\nIndependent copy:")
31 fmt.Println("Original:", original) // Unchanged
32 fmt.Println("Independent:", independent)
33}
Example 5: Efficient Slice Patterns
1// run
2package main
3
4import "fmt"
5
6func main() {
7 // Pattern 1: Filter with pre-allocation
8 numbers := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
9
10 // ❌ Inefficient: growing slice many times
11 var evensBad []int
12 for _, n := range numbers {
13 if n%2 == 0 {
14 evensBad = append(evensBad, n) // May reallocate
15 }
16 }
17
18 // ✅ Efficient: pre-allocate approximate size
19 evensGood := make([]int, 0, len(numbers)/2)
20 for _, n := range numbers {
21 if n%2 == 0 {
22 evensGood = append(evensGood, n) // Unlikely to reallocate
23 }
24 }
25
26 fmt.Printf("Bad method: %v\n", evensBad)
27 fmt.Printf("Good method: %v\n", evensGood)
28
29 // Pattern 2: In-place modification to avoid extra allocations
30 data := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
31
32 // Filter out odd numbers by shifting even ones left
33 n := 0
34 for i := 0; i < len(data); i++ {
35 if data[i]%2 == 0 {
36 data[n] = data[i]
37 n++
38 }
39 }
40 data = data[:n] // Truncate to only even numbers
41
42 fmt.Printf("In-place filtered: %v\n", data)
43
44 // Pattern 3: Two-pointer technique for palindrome check
45 s := "racecar"
46 isPalindrome := true
47
48 for i, j := 0, len(s)-1; i < j; i, j = i+1, j-1 {
49 if s[i] != s[j] {
50 isPalindrome = false
51 break
52 }
53 }
54
55 fmt.Printf("Is '%s' a palindrome? %t\n", s, isPalindrome)
56}
Example 6: Multi-dimensional Data
1// run
2package main
3
4import "fmt"
5
6func main() {
7 // 2D slice
8 matrix := [][]int{
9 {1, 2, 3}, // Row 1: 3 columns
10 {4, 5, 6}, // Row 2: 3 columns
11 {7, 8, 9, 10}, // Row 3: 4 columns
12 }
13
14 fmt.Println("Matrix:")
15 for i, row := range matrix {
16 fmt.Printf("Row %d: %v\n", i, row)
17 }
18
19 // Efficient 2D slice creation
20 rows, cols := 100, 50
21 efficient := make([][]int, rows)
22 for i := range efficient {
23 efficient[i] = make([]int, cols)
24 for j := range efficient[i] {
25 efficient[i][j] = i*cols + j
26 }
27 }
28
29 fmt.Printf("\nCreated %dx%d matrix\n", rows, cols)
30
31 // Access patterns
32 fmt.Printf("Element [0][0]: %d\n", efficient[0][0])
33 fmt.Printf("Element [99][49]: %d\n", efficient[99][49])
34}
Advanced Slice Internals
Memory Aliasing and Unintended Sharing
One of the most subtle bugs with slices comes from unintended sharing of underlying arrays.
1// run
2package main
3
4import "fmt"
5
6func main() {
7 // Problem: Unintended sharing
8 original := []int{1, 2, 3, 4, 5}
9
10 // Create a "small" slice
11 small := original[:2]
12 fmt.Printf("Small: %v (len=%d cap=%d)\n", small, len(small), cap(small))
13
14 // Append might modify original
15 small = append(small, 99)
16 fmt.Printf("After append to small:\n")
17 fmt.Printf(" Small: %v\n", small)
18 fmt.Printf(" Original: %v (modified!)\n", original)
19
20 // Solution: Use three-index slice to limit capacity
21 original = []int{1, 2, 3, 4, 5}
22 small = original[:2:2] // Limit capacity to prevent sharing
23 fmt.Printf("\nWith limited capacity: %v (len=%d cap=%d)\n",
24 small, len(small), cap(small))
25
26 small = append(small, 99)
27 fmt.Printf("After append:\n")
28 fmt.Printf(" Small: %v\n", small)
29 fmt.Printf(" Original: %v (unchanged)\n", original)
30}
Nil vs Empty Slices
Understanding the difference between nil and empty slices is important for correct API design.
1// run
2package main
3
4import (
5 "fmt"
6 "encoding/json"
7)
8
9func main() {
10 // Nil slice
11 var nilSlice []int
12 fmt.Printf("Nil slice: %v (len=%d cap=%d nil=%t)\n",
13 nilSlice, len(nilSlice), cap(nilSlice), nilSlice == nil)
14
15 // Empty slice
16 emptySlice := []int{}
17 fmt.Printf("Empty slice: %v (len=%d cap=%d nil=%t)\n",
18 emptySlice, len(emptySlice), cap(emptySlice), emptySlice == nil)
19
20 // Both can be appended to
21 nilSlice = append(nilSlice, 1)
22 emptySlice = append(emptySlice, 1)
23
24 // JSON encoding differs
25 nilJSON, _ := json.Marshal(nilSlice)
26 emptyJSON, _ := json.Marshal(emptySlice)
27 fmt.Printf("\nJSON encoding:\n")
28 fmt.Printf(" Nil slice: %s\n", nilJSON) // null
29 fmt.Printf(" Empty slice: %s\n", emptyJSON) // []
30
31 // Recommendation: Use nil for "no data", empty for "zero items"
32 type Response struct {
33 Items []string
34 }
35
36 noData := Response{} // nil slice
37 zeroItems := Response{Items: []string{}} // empty slice
38
39 fmt.Printf("\nNo data JSON: %s\n", mustMarshal(noData))
40 fmt.Printf("Zero items JSON: %s\n", mustMarshal(zeroItems))
41}
42
43func mustMarshal(v interface{}) string {
44 b, _ := json.Marshal(v)
45 return string(b)
46}
Slice Expressions and Bounds Checking
Go performs bounds checking on all slice operations at runtime.
1// run
2package main
3
4import "fmt"
5
6func main() {
7 s := []int{0, 1, 2, 3, 4}
8
9 // Valid slice operations
10 fmt.Println("s[0:3]:", s[0:3]) // [0 1 2]
11 fmt.Println("s[:3]:", s[:3]) // [0 1 2]
12 fmt.Println("s[2:]:", s[2:]) // [2 3 4]
13 fmt.Println("s[:]:", s[:]) // [0 1 2 3 4]
14
15 // Edge cases that work
16 fmt.Println("s[5:5]:", s[5:5]) // [] - empty slice
17 fmt.Println("s[2:2]:", s[2:2]) // [] - empty slice
18
19 // These would fail (commented out)
20 // fmt.Println(s[10]) // panic: index out of range
21 // fmt.Println(s[1:10]) // panic: slice bounds out of range
22 // fmt.Println(s[3:1]) // compile error: invalid slice index 3 > 1 (constant indices)
23
24 // Safe access pattern
25 index := 10
26 if index >= 0 && index < len(s) {
27 fmt.Println("Safe access:", s[index])
28 } else {
29 fmt.Printf("Index %d out of range [0:%d)\n", index, len(s))
30 }
31}
Common Patterns and Pitfalls
Pattern 1: Growing Slices Efficiently
1// ✅ Pre-allocate when final size is known
2func knownSize() []int {
3 result := make([]int, 1000) // Allocate exact size
4 for i := 0; i < 1000; i++ {
5 result[i] = i * 2
6 }
7 return result
8}
9
10// ✅ Estimate capacity when size is approximate
11func estimatedSize() []string {
12 result := make([]string, 0, 100) // Reasonable estimate
13 for i := 0; i < 100; i++ {
14 result = append(result, fmt.Sprintf("item_%d", i))
15 }
16 return result
17}
Pattern 2: Safe Slice Modification
1// ❌ WRONG: Modifying slice while iterating
2func removeEvenWrong(numbers []int) []int {
3 for i, num := range numbers {
4 if num%2 == 0 {
5 numbers = append(numbers[:i], numbers[i+1:]...) // Dangerous!
6 }
7 }
8 return numbers
9}
10
11// ✅ CORRECT: Collect indices first
12func removeEvenCorrect(numbers []int) []int {
13 var indices []int
14 for i, num := range numbers {
15 if num%2 == 0 {
16 indices = append(indices, i)
17 }
18 }
19
20 // Remove in reverse order to maintain indices
21 for i := len(indices) - 1; i >= 0; i-- {
22 idx := indices[i]
23 numbers = append(numbers[:idx], numbers[idx+1:]...)
24 }
25 return numbers
26}
Pattern 3: Memory Leak Prevention
1// ❌ WRONG: Keeping large array for small slice
2func memoryLeak() []byte {
3 hugeData := make([]byte, 1024*1024) // 1MB
4 return hugeData[:10] // Returns 10 bytes but keeps 1MB alive!
5}
6
7// ✅ CORRECT: Copy small data to new slice
8func memoryEfficient() []byte {
9 hugeData := make([]byte, 1024*1024) // 1MB
10 result := make([]byte, 10)
11 copy(result, hugeData[:10]) // Copy only what we need
12 return result // Large data can be garbage collected
13}
Pattern 4: Efficient Element Removal
1// run
2package main
3
4import "fmt"
5
6func main() {
7 // Remove element at index (unordered - swap with last)
8 removeUnordered := func(s []int, i int) []int {
9 s[i] = s[len(s)-1]
10 return s[:len(s)-1]
11 }
12
13 // Remove element at index (ordered - shift elements)
14 removeOrdered := func(s []int, i int) []int {
15 return append(s[:i], s[i+1:]...)
16 }
17
18 // Test unordered removal
19 s1 := []int{1, 2, 3, 4, 5}
20 fmt.Printf("Original: %v\n", s1)
21 s1 = removeUnordered(s1, 2)
22 fmt.Printf("Remove index 2 (unordered): %v\n", s1)
23
24 // Test ordered removal
25 s2 := []int{1, 2, 3, 4, 5}
26 s2 = removeOrdered(s2, 2)
27 fmt.Printf("Remove index 2 (ordered): %v\n", s2)
28}
Integration and Mastery - Building Real Applications
Example 1: Data Processing Pipeline
1// run
2package main
3
4import (
5 "fmt"
6 "math/rand"
7 "time"
8)
9
10type DataPoint struct {
11 ID int
12 Value float64
13 Category string
14 Valid bool
15}
16
17type Processor struct {
18 input []DataPoint
19 filtered []DataPoint
20 results map[string][]DataPoint
21 stats map[string]float64
22}
23
24func NewProcessor(input []DataPoint) *Processor {
25 // Pre-allocate for better performance
26 p := &Processor{
27 input: input,
28 filtered: make([]DataPoint, 0, len(input)/2), // Estimate 50% will be filtered
29 results: make(map[string][]DataPoint),
30 stats: make(map[string]float64),
31 }
32 return p
33}
34
35func (p *Processor) FilterByValue(minValue float64) {
36 p.filtered = p.filtered[:0] // Reset
37
38 for _, point := range p.input {
39 if point.Valid && point.Value >= minValue {
40 p.filtered = append(p.filtered, point)
41 }
42 }
43}
44
45func (p *Processor) GroupByCategory() {
46 // Clear previous results
47 for k := range p.results {
48 p.results[k] = p.results[k][:0]
49 }
50
51 // Group filtered data
52 for _, point := range p.filtered {
53 category := point.Category
54 p.results[category] = append(p.results[category], point)
55 }
56}
57
58func (p *Processor) CalculateStats() {
59 for category, points := range p.results {
60 if len(points) == 0 {
61 continue
62 }
63
64 var sum, min, max float64
65 min = points[0].Value
66 max = points[0].Value
67
68 for _, point := range points {
69 sum += point.Value
70 if point.Value < min {
71 min = point.Value
72 }
73 if point.Value > max {
74 max = point.Value
75 }
76 }
77
78 p.stats[category] = sum / float64(len(points))
79 p.stats[category+"_min"] = min
80 p.stats[category+"_max"] = max
81 }
82}
83
84func (p *Processor) PrintReport() {
85 fmt.Println("=== Data Processing Report ===")
86
87 for category, points := range p.results {
88 fmt.Printf("\n%s: %d points\n", category, len(points))
89 fmt.Printf(" Average: %.2f\n", p.stats[category])
90 fmt.Printf(" Min: %.2f\n", p.stats[category+"_min"])
91 fmt.Printf(" Max: %.2f\n", p.stats[category+"_max"])
92
93 // Show first 3 points
94 limit := len(points)
95 if limit > 3 {
96 limit = 3
97 }
98
99 fmt.Printf(" Sample: ")
100 for i := 0; i < limit; i++ {
101 fmt.Printf("%.1f ", points[i].Value)
102 }
103 fmt.Println()
104 }
105}
106
107func main() {
108 // Generate sample data
109 rand.Seed(time.Now().UnixNano())
110
111 categories := []string{"electronics", "clothing", "books", "food"}
112 var data []DataPoint
113
114 for i := 0; i < 1000; i++ {
115 data = append(data, DataPoint{
116 ID: i,
117 Value: rand.Float64() * 100,
118 Category: categories[rand.Intn(len(categories))],
119 Valid: rand.Float64() > 0.1, // 90% valid
120 })
121 }
122
123 // Process the data
124 processor := NewProcessor(data)
125
126 fmt.Printf("Generated %d data points\n", len(data))
127
128 processor.FilterByValue(25.0)
129 fmt.Printf("Filtered by value >= 25: %d points\n", len(processor.filtered))
130
131 processor.GroupByCategory()
132 processor.CalculateStats()
133 processor.PrintReport()
134}
Example 2: Efficient Buffer Management
1// run
2package main
3
4import (
5 "fmt"
6 "os"
7)
8
9type Buffer struct {
10 data []byte
11 pos int
12}
13
14func NewBuffer(size int) *Buffer {
15 return &Buffer{
16 data: make([]byte, size), // Allocate the full backing array so Write can copy into it
17 pos: 0,
18 }
19}
20
21func (b *Buffer) Write(data []byte) (int, error) {
22 available := len(b.data) - b.pos
23 toWrite := len(data)
24
25 if toWrite > available {
26 toWrite = available
27 }
28
29 copy(b.data[b.pos:], data[:toWrite])
30 b.pos += toWrite
31
32 return toWrite, nil
33}
34
35func (b *Buffer) Flush() error {
36 if b.pos == 0 {
37 return nil
38 }
39
40 // Write to stdout
41 _, err := os.Stdout.Write(b.data[:b.pos])
42 if err != nil {
43 return err
44 }
45
46 // Reset buffer for reuse
47 b.pos = 0
48 return nil
49}
50
51func (b *Buffer) String() string {
52 return string(b.data[:b.pos])
53}
54
55func main() {
56 buffer := NewBuffer(1024) // 1KB buffer
57
58 // Simulate writing different sized chunks
59 chunks := [][]byte{
60 []byte("Hello, "),
61 []byte("this is a test "),
62 []byte("of the buffer system. "),
63 []byte("It handles multiple "),
64 []byte("writes efficiently "),
65 []byte("with pre-allocation "),
66 []byte("and proper slice management."),
67 }
68
69 fmt.Println("=== Buffer Management Demo ===")
70
71 for i, chunk := range chunks {
72 written, err := buffer.Write(chunk)
73 if err != nil {
74 fmt.Printf("Error writing chunk %d: %v\n", i, err)
75 return
76 }
77
78 fmt.Printf("Chunk %d: wrote %d bytes, buffer size: %d\n",
79 i, written, buffer.pos)
80
81 // Flush every 2 chunks
82 if i%2 == 1 {
83 err := buffer.Flush()
84 if err != nil {
85 fmt.Printf("Error flushing: %v\n", err)
86 return
87 }
88 fmt.Println("Flushed buffer")
89 }
90 }
91
92 // Final flush
93 err := buffer.Flush()
94 if err != nil {
95 fmt.Printf("Error final flush: %v\n", err)
96 } else {
97 fmt.Println("\nFinal flush successful")
98 }
99}
Performance Considerations
Memory Layout and Cache Performance
1// run
2package main
3
4import (
5 "fmt"
6 "time"
7 "unsafe"
8)
9
10func main() {
11 size := 1000000
12 data := make([]int, size)
13 for i := range data {
14 data[i] = i
15 }
16
17 // Sequential access
18 start := time.Now()
19 sum1 := 0
20 for i := 0; i < size; i++ {
21 sum1 += data[i]
22 }
23 fmt.Printf("Sequential: %v, sum=%d\n", time.Since(start), sum1)
24
25 // Random access
26 start = time.Now()
27 sum2 := 0
28 for i := 0; i < size; i++ {
29 // Access random positions
30 idx := (i * 7919) % size
31 sum2 += data[idx]
32 }
33 fmt.Printf("Random: %v, sum=%d\n", time.Since(start), sum2)
34
35 // Array of structs: summing a single field
36 type Point struct {
37 X, Y, Z float64
38 }
39
40 points := make([]Point, size)
41 for i := range points {
42 points[i] = Point{float64(i), float64(i * 2), float64(i * 3)}
43 }
44
45 // Access the X field via pointer arithmetic (Go 1.17+ unsafe.Add)
46 start = time.Now()
47 sum3 := 0.0
48 if len(points) > 0 {
49 base := unsafe.Pointer(&points[0])
50 stride := unsafe.Sizeof(points[0])
51 for i := 0; i < size; i++ {
52 p := (*Point)(unsafe.Add(base, uintptr(i)*stride))
53 sum3 += p.X
54 }
55 }
56 fmt.Printf("Pointer access: %v, sum=%.1f\n", time.Since(start), sum3)
57}
Slice Growth Strategies
1// run
2package main
3
4import (
5 "fmt"
6 "time"
7)
8
9// Strategy 1: No pre-allocation
10func growSlow(n int) []int {
11 var result []int
12 for i := 0; i < n; i++ {
13 result = append(result, i) // Multiple reallocations
14 }
15 return result
16}
17
18// Strategy 2: Exact pre-allocation
19func growFast(n int) []int {
20 result := make([]int, n) // Exact size
21 for i := 0; i < n; i++ {
22 result[i] = i
23 }
24 return result
25}
26
27// Strategy 3: Over-allocate
28func growMedium(n int) []int {
29 result := make([]int, 0, n+n/10) // 10% buffer
30 for i := 0; i < n; i++ {
31 result = append(result, i)
32 }
33 return result
34}
35
36func benchmark() {
37 sizes := []int{1000, 10000, 100000}
38
39 for _, size := range sizes {
40 fmt.Printf("\nSize: %d\n", size)
41
42 start := time.Now()
43 growSlow(size)
44 slow := time.Since(start)
45
46 start = time.Now()
47 growFast(size)
48 fast := time.Since(start)
49
50 start = time.Now()
51 growMedium(size)
52 medium := time.Since(start)
53
54 fmt.Printf(" Slow: %v\n", slow)
55 fmt.Printf(" Fast: %v\n", fast)
56 fmt.Printf(" Medium: %v\n", medium)
57
58 speedup := float64(slow.Nanoseconds()) / float64(fast.Nanoseconds())
59 fmt.Printf(" Speedup: %.2fx faster\n", speedup)
60 }
61}
62
63func main() {
64 benchmark()
65}
Practice Exercises
Exercise 1: Slice Reversal
🎯 Learning Objectives: Master in-place slice manipulation and understand memory layout.
🌍 Real-World Context: Reversing data is essential in many applications from undo/redo functionality to processing data in reverse chronological order. Think about displaying chat messages from newest to oldest, or implementing a back button in a web browser.
⭐ Difficulty: Beginner | ⏱️ Time Estimate: 10 minutes
Write a function that reverses a slice in-place. This exercise teaches you how to efficiently modify slice elements without creating additional memory allocations.
Solution
1// run
2package main
3
4import "fmt"
5
6func reverse[T any](s []T) {
7 for i, j := 0, len(s)-1; i < j; i, j = i+1, j-1 {
8 s[i], s[j] = s[j], s[i]
9 }
10}
11
12func main() {
13 numbers := []int{1, 2, 3, 4, 5}
14 fmt.Println("Before:", numbers)
15
16 reverse(numbers)
17 fmt.Println("After:", numbers)
18
19 // Test with strings
20 text := []string{"hello", "world"}
21 reverse(text)
22 fmt.Println("Reversed text:", text)
23}
Exercise 2: Find Maximum Value
🎯 Learning Objectives: Practice slice iteration and conditional logic for data analysis.
🌍 Real-World Context: Finding maximum values is fundamental to data analysis across all domains. Financial applications use it to find the highest stock price, weather apps use it for record temperatures, and gaming systems use it for high scores.
⭐ Difficulty: Beginner | ⏱️ Time Estimate: 8 minutes
Write a function that finds the maximum value in a slice.
Solution
1// run
2package main
3
4import (
5 "cmp"
6 "fmt"
7)
8
9// maxValue returns the largest element; cmp.Ordered (Go 1.21+) permits the > comparison
10func maxValue[T cmp.Ordered](slice []T) T {
11 if len(slice) == 0 {
12 var zero T
13 return zero
14 }
15
16 maximum := slice[0]
17 for _, value := range slice[1:] {
18 if value > maximum {
19 maximum = value
20 }
21 }
22 return maximum
23}
24
25func main() {
26 numbers := []int{3, 7, 2, 9, 1, 5}
27 fmt.Printf("Max number: %d\n", maxValue(numbers))
28
29 floats := []float64{3.14, 2.71, 1.41, 4.20}
30 fmt.Printf("Max float: %.2f\n", maxValue(floats))
31
32 strings := []string{"apple", "zebra", "banana", "xylophone"}
33 fmt.Printf("Max string: %s\n", maxValue(strings))
34}
Exercise 3: Remove Duplicates
🎯 Learning Objectives: Implement efficient data deduplication and understand hash-based algorithms.
🌍 Real-World Context: Data deduplication is crucial in real-world applications. Social media platforms use it to prevent showing the same post multiple times, file management systems use it to identify duplicate files, and e-commerce platforms use it to clean product catalogs.
⭐ Difficulty: Intermediate | ⏱️ Time Estimate: 15 minutes
Write a function that removes duplicate values from a slice while preserving order.
Solution
1// run
2package main
3
4import "fmt"
5
6func removeDuplicates[T comparable](slice []T) []T {
7 seen := make(map[T]bool)
8 result := make([]T, 0, len(slice))
9
10 for _, value := range slice {
11 if !seen[value] {
12 seen[value] = true
13 result = append(result, value)
14 }
15 }
16
17 return result
18}
19
20func main() {
21 numbers := []int{1, 2, 2, 3, 4, 3, 5, 1, 6}
22 fmt.Printf("Original: %v\n", numbers)
23 fmt.Printf("Unique: %v\n", removeDuplicates(numbers))
24
25 strings := []string{"go", "is", "fun", "go", "is", "powerful"}
26 fmt.Printf("Original strings: %v\n", strings)
27 fmt.Printf("Unique strings: %v\n", removeDuplicates(strings))
28}
Exercise 4: Slice Chunking
🎯 Learning Objectives: Master slice slicing operations and understand boundary handling.
🌍 Real-World Context: Data chunking is essential for processing large datasets. Web APIs use pagination to send data in manageable chunks, file upload services break large files into smaller pieces, and data processing pipelines use chunking to handle big data without overwhelming memory.
⭐ Difficulty: Intermediate | ⏱️ Time Estimate: 12 minutes
Write a function that splits a slice into chunks of a specified size.
Solution
1// run
2package main
3
4import "fmt"
5
6func chunk[T any](slice []T, size int) [][]T {
7 if size <= 0 {
8 return nil
9 }
10
11 var chunks [][]T
12 for i := 0; i < len(slice); i += size {
13 end := i + size
14 if end > len(slice) {
15 end = len(slice)
16 }
17 chunks = append(chunks, slice[i:end])
18 }
19
20 return chunks
21}
22
23func main() {
24 numbers := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
25
26 chunks := chunk(numbers, 3)
27
28 for i, chunk := range chunks {
29 fmt.Printf("Chunk %d: %v\n", i+1, chunk)
30 }
31
32 // Test with different chunk sizes
33 fmt.Printf("\nChunk size 4: %v\n", chunk(numbers, 4))
34 fmt.Printf("Chunk size 7: %v\n", chunk(numbers, 7))
35}
Exercise 5: Merge Sorted Slices
🎯 Learning Objectives: Implement efficient merge algorithms and understand two-pointer techniques.
🌍 Real-World Context: Merging sorted data is fundamental to many algorithms and systems. Database systems merge sorted index ranges for efficient queries, search engines combine results from multiple sources, and log analysis systems merge sorted timestamp data.
⭐ Difficulty: Advanced | ⏱️ Time Estimate: 20 minutes
Write a function that merges two sorted slices into one sorted slice.
Solution
1// run
2package main
3
4import (
5 "cmp"
6 "fmt"
7)
8
9// cmp.Ordered (Go 1.21+) is needed because <= is not defined for the comparable constraint
10func mergeSorted[T cmp.Ordered](a, b []T) []T {
11 result := make([]T, 0, len(a)+len(b))
12 i, j := 0, 0
13
14 // Merge until one slice is exhausted
15 for i < len(a) && j < len(b) {
16 if a[i] <= b[j] {
17 result = append(result, a[i])
18 i++
19 } else {
20 result = append(result, b[j])
21 j++
22 }
23 }
24
25 // Append remaining elements
26 for i < len(a) {
27 result = append(result, a[i])
28 i++
29 }
30
31 for j < len(b) {
32 result = append(result, b[j])
33 j++
34 }
35
36 return result
37}
38
39func main() {
40 slice1 := []int{1, 3, 5, 7}
41 slice2 := []int{2, 4, 6, 8, 10}
42
43 merged := mergeSorted(slice1, slice2)
44 fmt.Printf("Merged: %v\n", merged)
45
46 // Test with strings
47 words1 := []string{"apple", "cherry"}
48 words2 := []string{"banana", "date"}
49 fmt.Printf("Merged strings: %v\n", mergeSorted(words1, words2))
50}
Exercise 6: Rotate Slice
🎯 Learning Objectives: Implement efficient rotation algorithms and understand in-place manipulation.
🌍 Real-World Context: Array rotation is used in many applications from circular buffers in audio processing to shift scheduling in resource management systems. Video editing software uses rotation for timeline operations, and queue implementations often use rotation to manage circular data structures.
⭐ Difficulty: Advanced | ⏱️ Time Estimate: 18 minutes
Write a function that rotates a slice to the right by k positions.
Solution
1// run
2package main
3
4import "fmt"
5
6func reverse[T any](slice []T) {
7 for i, j := 0, len(slice)-1; i < j; i, j = i+1, j-1 {
8 slice[i], slice[j] = slice[j], slice[i]
9 }
10}
11
12func rotate[T any](slice []T, k int) []T {
13 n := len(slice)
14 if n == 0 {
15 return slice
16 }
17
18 k = k % n // Handle k > n
19 if k < 0 {
20 k += n // Handle negative k
21 }
22
23 // Reverse entire slice
24 reverse(slice)
25
26 // Reverse first k elements
27 reverse(slice[:k])
28
29 // Reverse remaining elements
30 reverse(slice[k:])
31
32 return slice
33}
34
35func main() {
36 numbers := []int{1, 2, 3, 4, 5}
37 fmt.Println("Original:", numbers)
38
39 rotated := rotate(numbers, 2)
40 fmt.Println("Rotated by 2:", rotated)
41
42 // Test with different rotation amounts
43 numbers = []int{1, 2, 3, 4, 5}
44 fmt.Println("Rotated by 3:", rotate(numbers, 3))
45
46 numbers = []int{1, 2, 3, 4, 5}
47 fmt.Println("Rotated by -1:", rotate(numbers, -1))
48
49 numbers = []int{1, 2, 3, 4, 5}
50 fmt.Println("Rotated by 7:", rotate(numbers, 7))
51}
Exercise 7: Matrix Operations
🎯 Learning Objectives: Master 2D slice manipulation and implement fundamental linear algebra operations.
🌍 Real-World Context: Matrix operations are foundational to computer graphics, machine learning, and scientific computing. Image processing uses matrices for transformations, game engines use them for 3D graphics rendering, and data science uses them for statistical analysis.
⭐ Difficulty: Advanced | ⏱️ Time Estimate: 25 minutes
Implement basic matrix operations using 2D slices.
Solution
1// run
2package main
3
4import "fmt"
5
6type Matrix [][]int
7
8func createMatrix(rows, cols int) Matrix {
9 matrix := make(Matrix, rows)
10 for i := range matrix {
11 matrix[i] = make([]int, cols)
12 }
13 return matrix
14}
15
16func fillMatrix(matrix Matrix, start int) {
17 val := start
18 for i := range matrix {
19 for j := range matrix[i] {
20 matrix[i][j] = val
21 val++
22 }
23 }
24}
25
26func printMatrix(matrix Matrix) {
27 for i, row := range matrix {
28 fmt.Printf("Row %d: ", i+1)
29 for _, val := range row {
30 fmt.Printf("%6d ", val)
31 }
32 fmt.Println()
33 }
34}
35
36func transpose(matrix Matrix) Matrix {
37 if len(matrix) == 0 {
38 return matrix
39 }
40
41 rows := len(matrix)
42 cols := len(matrix[0])
43
44 result := createMatrix(cols, rows)
45
46 for i := 0; i < rows; i++ {
47 for j := 0; j < cols; j++ {
48 result[j][i] = matrix[i][j]
49 }
50 }
51
52 return result
53}
54
55func multiply(a, b Matrix) Matrix {
56 if len(a[0]) != len(b) {
57 panic("Cannot multiply: columns of A != rows of B")
58 }
59
60 rowsA, colsA := len(a), len(a[0])
61 colsB := len(b[0]) // len(b) was already validated against len(a[0]) above
62
63 result := createMatrix(rowsA, colsB)
64
65 for i := 0; i < rowsA; i++ {
66 for j := 0; j < colsB; j++ {
67 sum := 0
68 for k := 0; k < colsA; k++ {
69 sum += a[i][k] * b[k][j]
70 }
71 result[i][j] = sum
72 }
73 }
74
75 return result
76}
77
78func main() {
79 // Create 3x3 matrices
80 matrix1 := createMatrix(3, 3)
81 matrix2 := createMatrix(3, 3)
82
83 fillMatrix(matrix1, 1)
84 fillMatrix(matrix2, 10)
85
86 fmt.Println("Matrix 1:")
87 printMatrix(matrix1)
88
89 fmt.Println("\nMatrix 2:")
90 printMatrix(matrix2)
91
92 fmt.Println("\nTranspose of Matrix 1:")
93 printMatrix(transpose(matrix1))
94
95 fmt.Println("\nMatrix 1 × Matrix 2:")
96 result := multiply(matrix1, matrix2)
97 printMatrix(result)
98}
Exercise 8: Memory Pool Pattern
🎯 Learning Objectives: Implement object pooling for memory efficiency and understand garbage collection optimization.
🌍 Real-World Context: Object pooling is crucial for high-performance applications like web servers, game engines, and real-time systems. It reduces garbage collection pressure and improves latency by reusing objects instead of constantly allocating new ones.
⭐ Difficulty: Advanced | ⏱️ Time Estimate: 30 minutes
Implement a slice pool for reducing memory allocations in high-throughput scenarios.
Solution
1// run
2package main
3
4import (
5 "fmt"
6 "sync"
7 "time"
8)
9
10type SlicePool[T any] struct {
11 pool sync.Pool
12 maxSize int
13}
14
15func NewSlicePool[T any](maxSize int) *SlicePool[T] {
16 return &SlicePool[T]{
17 pool: sync.Pool{
18 New: func() interface{} {
19 s := make([]T, 0, maxSize)
20 return &s
21 },
22 },
23 maxSize: maxSize,
24 }
25}
26
27func (p *SlicePool[T]) Get() *[]T {
28 obj := p.pool.Get().(*[]T)
29 return obj
30}
31
32func (p *SlicePool[T]) Put(obj *[]T) {
33 *obj = (*obj)[:0] // Reset length
34 p.pool.Put(obj)
35}
36
37type Buffer struct {
38 data []byte
39}
40
41func NewBuffer() *Buffer {
42 return &Buffer{
43 data: make([]byte, 0, 1024), // 1KB buffer
44 }
45}
46
47func (b *Buffer) Reset() {
48 b.data = b.data[:0]
49}
50
51func (b *Buffer) Write(data []byte) int {
52 b.data = append(b.data, data...)
53 return len(data)
54}
55
56func (b *Buffer) Bytes() []byte {
57 return b.data
58}
59
60func main() {
61 // Create buffer pool
62 bufferPool := &sync.Pool{
63 New: func() interface{} {
64 return NewBuffer()
65 },
66 }
67
68 fmt.Println("=== Buffer Pool Demo ===")
69
70 // Simulate high-frequency buffer usage
71 start := time.Now()
72
73 for i := 0; i < 1000; i++ {
74 buffer := bufferPool.Get().(*Buffer)
75
76 // Simulate buffer usage
77 buffer.Write([]byte(fmt.Sprintf("data_%d", i)))
78
79 // Reset and return to pool
80 buffer.Reset()
81 bufferPool.Put(buffer)
82 }
83
84 duration := time.Since(start)
85 fmt.Printf("Processed 1000 buffers in %v\n", duration)
86 fmt.Printf("Average per buffer: %v\n", duration/1000)
87
88 // Without pooling for comparison
89 start = time.Now()
90
91 for i := 0; i < 1000; i++ {
92 buffer := NewBuffer()
93 buffer.Write([]byte(fmt.Sprintf("data_%d", i)))
94 // Buffer goes out of scope and gets garbage collected
95 }
96
97 duration = time.Since(start)
98 fmt.Printf("Without pooling: %v\n", duration)
99 fmt.Printf("Average without pool: %v\n", duration/1000)
100}
Exercise 9: Sliding Window
🎯 Learning Objectives: Master sliding window algorithms and understand O(n) optimization techniques.
🌍 Real-World Context: Sliding window algorithms are essential for time-series analysis and real-time data processing. Network monitoring uses them to detect anomalies in traffic patterns, financial applications use them for calculating moving averages, and video compression uses them for pattern matching.
⭐ Difficulty: Advanced | ⏱️ Time Estimate: 22 minutes
Implement a sliding window algorithm to find maximum sum of k consecutive elements.
Solution
1// run
2package main
3
4import "fmt"
5
6func maxSumSubarray(arr []int, k int) (int, []int) {
7 if len(arr) < k || k <= 0 {
8 return 0, nil
9 }
10
11 // Calculate sum of first window
12 windowSum := 0
13 for i := 0; i < k; i++ {
14 windowSum += arr[i]
15 }
16
17 maxSum := windowSum
18 maxStart := 0
19
20 // Slide window through array
21 for i := k; i < len(arr); i++ {
22 // Remove element leaving window, add element entering window
23 windowSum = windowSum - arr[i-k] + arr[i]
24
25 if windowSum > maxSum {
26 maxSum = windowSum
27 maxStart = i - k + 1
28 }
29 }
30
31 return maxSum, arr[maxStart : maxStart+k]
32}
33
34func movingAverage(arr []int, k int) []float64 {
35 if len(arr) < k || k <= 0 {
36 return nil
37 }
38
39 averages := make([]float64, 0, len(arr)-k+1)
40 windowSum := 0
41
42 // Calculate sum of first window
43 for i := 0; i < k; i++ {
44 windowSum += arr[i]
45 }
46
47 // First average
48 averages = append(averages, float64(windowSum)/float64(k))
49
50 // Slide window and calculate averages
51 for i := k; i < len(arr); i++ {
52 windowSum = windowSum - arr[i-k] + arr[i]
53 averages = append(averages, float64(windowSum)/float64(k))
54 }
55
56 return averages
57}
58
59func main() {
60 numbers := []int{2, 1, 5, 1, 3, 2, 7, 4, 6, 2, 8, 1, 4}
61 k := 3
62
63 fmt.Printf("Array: %v\n", numbers)
64 fmt.Printf("Window size: %d\n\n", k)
65
66 // Maximum sum subarray
67 maxSum, subarray := maxSumSubarray(numbers, k)
68 fmt.Printf("Maximum sum: %v\n", maxSum)
69 fmt.Printf("Subarray: %v\n", subarray)
70
71 // Moving averages
72 averages := movingAverage(numbers, k)
73 fmt.Printf("\nMoving averages:\n")
74 for i, avg := range averages {
75 fmt.Printf("Window %d: %.2f\n", i+1, avg)
76 }
77}
Exercise 10: Slice Filtering Pipeline
🎯 Learning Objectives: Build data processing pipelines with efficient filtering and transformation patterns.
🌍 Real-World Context: Data filtering pipelines are essential to modern applications. E-commerce platforms use them to filter products by criteria, social media uses them to filter content, and analytics systems use them to process streaming data efficiently.
⭐ Difficulty: Advanced | ⏱️ Time Estimate: 40 minutes
Create an efficient data processing pipeline that can filter and transform slices based on multiple criteria.
Solution
1// run
2package main
3
4import (
5 "fmt"
6 "time"
7)
8
9type Product struct {
10 ID int
11 Name string
12 Price float64
13 Category string
14 Rating int
15 Active bool
16}
17
18type FilterFunc[T any] func(T) bool
19
20type Pipeline[T any] struct {
21 data []T
22}
23
24func NewPipeline[T any](data []T) *Pipeline[T] {
25 copied := make([]T, len(data))
26 copy(copied, data)
27 return &Pipeline[T]{data: copied}
28}
29
30func (p *Pipeline[T]) Filter(predicate FilterFunc[T]) *Pipeline[T] {
31 var result []T
32 for _, item := range p.data {
33 if predicate(item) {
34 result = append(result, item)
35 }
36 }
37
38 p.data = result
39 return p
40}
41
42func (p *Pipeline[T]) Map(transformer func(T) T) *Pipeline[T] {
43 for i, item := range p.data {
44 p.data[i] = transformer(item)
45 }
46 return p
47}
48
49func (p *Pipeline[T]) Reduce(reducer func(T, T) T, initial T) T {
50 result := initial
51 for _, item := range p.data {
52 result = reducer(result, item)
53 }
54 return result
55}
56
57func (p *Pipeline[T]) Count() int {
58 return len(p.data)
59}
60
61func (p *Pipeline[T]) Collect() []T {
62 return p.data
63}
64
65func main() {
66 // Generate sample products
67 products := []Product{
68 {1, "Laptop", 999.99, "Electronics", 5, true},
69 {2, "Mouse", 29.99, "Electronics", 3, true},
70 {3, "Keyboard", 79.99, "Electronics", 4, true},
71 {4, "Monitor", 299.99, "Electronics", 4, true},
72 {5, "Book", 19.99, "Books", 2, true},
73 {6, "Chair", 89.99, "Furniture", 3, false}, // Inactive
74 {7, "Desk", 199.99, "Furniture", 5, false}, // Inactive
75 {8, "Phone", 699.99, "Electronics", 5, true},
76 }
77
78 start := time.Now()
79
80 // Create pipeline
81 pipeline := NewPipeline(products)
82
83 fmt.Printf("Initial products: %d\n", pipeline.Count())
84
85 // Filter active products
86 pipeline = NewPipeline(products)
87 activeProducts := pipeline.Filter(func(p Product) bool {
88 return p.Active
89 }).Collect()
90
91 fmt.Printf("Active products: %d\n", len(activeProducts))
92
93 // Filter by category
94 pipeline = NewPipeline(products)
95 electronics := pipeline.Filter(func(p Product) bool {
96 return p.Category == "Electronics" && p.Active
97 }).Collect()
98
99 fmt.Printf("Electronics: %d\n", len(electronics))
100
101 // Filter by price and category
102 pipeline = NewPipeline(products)
103 affordableElectronics := pipeline.Filter(func(p Product) bool {
104 return p.Category == "Electronics" && p.Price < 100 && p.Active
105 }).Collect()
106
107 fmt.Printf("Affordable electronics: %d\n", len(affordableElectronics))
108
109 // Calculate total value of active products (accumulate into a Product's Price field)
110 pipeline = NewPipeline(products)
111 totalValue := pipeline.Filter(func(p Product) bool {
112 return p.Active
113 }).Reduce(func(acc Product, p Product) Product {
114 acc.Price += p.Price
115 return acc
116 }, Product{}).Price
117 fmt.Printf("Total value of active products: $%.2f\n", totalValue)
118
119 // Find highest rated product
120 pipeline = NewPipeline(products)
121 highestRated := pipeline.Filter(func(p Product) bool {
122 return p.Active
123 }).Reduce(func(best Product, p Product) Product {
124 if p.Rating > best.Rating {
125 return p
126 }
127 return best
128 }, Product{})
129
130 fmt.Printf("Highest rated product: %s (rating: %d)\n",
131 highestRated.Name, highestRated.Rating)
132
133 // Complex pipeline chain
134 pipeline = NewPipeline(products)
135 results := pipeline.
136 Filter(func(p Product) bool {
137 return p.Active && p.Category == "Electronics"
138 }).
139 Filter(func(p Product) bool {
140 return p.Price >= 50 && p.Price <= 200
141 }).
142 Filter(func(p Product) bool {
143 return p.Rating >= 4
144 }).
145 Collect()
146
147 fmt.Printf("\nComplex pipeline results (%d items):\n", len(results))
148 for _, p := range results {
149 fmt.Printf(" %s: $%.2f (rating: %d)\n",
150 p.Name, p.Price, p.Rating)
151 }
152
153 fmt.Printf("\nProcessing completed in %v\n", time.Since(start))
154}
Summary
Key Takeaways
- Arrays vs Slices: Arrays are fixed-size, value types; slices are dynamic, reference types
- Memory Layout: Arrays store data contiguously; slices are headers + underlying arrays
- Performance: Pre-allocate when possible, understand growth patterns, avoid unnecessary allocations
- Sharing: Slices can share underlying arrays - be careful with modifications
- Capacity vs Length: Use make([]T, 0, cap) when you expect growth
- Safety: Check bounds, use copy() for independence, avoid modifying during iteration
Best Practices
- Choose slices over arrays for most use cases
- Pre-allocate capacity when you know approximate final size
- Use copy() for independence instead of assignment
- Avoid memory leaks by copying only what you need from large slices
- Use range for cleaner, safer iteration
- Cache len() in tight loops when it doesn't change
- Consider 2D slice patterns for matrix operations
- Profile when performance matters - different patterns have different trade-offs
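As a quick illustration, the sketch below ties several of these practices together - pre-allocating capacity, using a full slice expression to prevent accidental sharing, and copying when true independence is needed. It is illustrative only; the variable names are arbitrary.

package main

import "fmt"

func main() {
	// Pre-allocate when the approximate final size is known
	words := make([]string, 0, 3)
	words = append(words, "alpha", "beta", "gamma")

	// A full slice expression caps capacity, so a later append reallocates
	// instead of overwriting neighbouring elements of words
	head := words[:1:1]
	head = append(head, "delta")
	fmt.Println(words, head) // words is untouched

	// Use copy() when you need a fully independent slice
	snapshot := make([]string, len(words))
	copy(snapshot, words)
	snapshot[0] = "omega"
	fmt.Println(words, snapshot) // only snapshot changed
}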
When to Use Arrays vs Slices
Use Arrays When:
- Size is known at compile time and never changes
- You need maximum performance and want stack allocation
- Working with low-level protocols or hardware interfaces
- The size has semantic meaning
Use Slices When:
- Size is dynamic or unknown
- You need to grow or shrink the collection
- You're working with APIs or user input
- You need flexibility in function signatures
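A small, hypothetical illustration of these guidelines: the Checksum type below uses an array because its size has semantic meaning, while the sum function accepts a slice so callers can pass data of any length.

package main

import "fmt"

// Checksum has a fixed, semantically meaningful size, so an array fits
type Checksum [4]byte

// sum accepts a slice so callers can pass data of any length
func sum(data []byte) Checksum {
	var c Checksum
	for i, b := range data {
		c[i%len(c)] ^= b // fold each byte into one of the 4 positions
	}
	return c
}

func main() {
	fmt.Printf("%x\n", sum([]byte("hello")))
	fmt.Printf("%x\n", sum([]byte("a longer message")))
}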
Next Steps
- Practice: Work through the exercises above to reinforce these concepts
- Explore: Learn about Go's memory management and garbage collection
- Apply: Use these patterns in your own projects
- Profile: Use Go's profiling tools to optimize your slice usage
- Advanced: Study generics and how they work with slices
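If you want to follow up on the profiling suggestion, a standard go test benchmark is the usual starting point. The sketch below is a hypothetical slices_test.go file (the package and function names are assumptions) comparing append with and without pre-allocation; run it with go test -bench=. -benchmem.

package slices

import "testing"

// Hypothetical benchmarks: append without pre-allocation triggers repeated growth
func BenchmarkAppendNoPrealloc(b *testing.B) {
	for n := 0; n < b.N; n++ {
		var s []int
		for i := 0; i < 1000; i++ {
			s = append(s, i)
		}
		_ = s
	}
}

// Pre-allocating the capacity avoids reallocations inside the loop
func BenchmarkAppendPrealloc(b *testing.B) {
	for n := 0; n < b.N; n++ {
		s := make([]int, 0, 1000)
		for i := 0; i < 1000; i++ {
			s = append(s, i)
		}
		_ = s
	}
}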
Mastering slices and arrays gives you the foundation for efficient, high-performance Go programming!