Exercise: Map Concurrency
Difficulty - Beginner
Learning Objectives
- Master concurrent access patterns for maps in Go
- Understand race conditions and how to prevent them
- Learn to use sync.RWMutex for safe concurrent map operations
- Practice using sync.Map for built-in concurrent map support
- Understand when to use different concurrent map strategies
Problem Statement
Create a concurrentcache package that implements thread-safe caching mechanisms:
- MutexCache: A thread-safe cache using sync.RWMutex
- SyncMapCache: A cache using sync.Map for high concurrency
- ShardedCache: A sharded cache for better performance under heavy load
- GetOrCompute: Atomically get value or compute if missing
- BulkUpdate: Safely update multiple keys atomically
Function Signatures
package concurrentcache

import "sync"

// MutexCache provides thread-safe map access with RWMutex
type MutexCache struct {
    mu   sync.RWMutex
    data map[string]interface{}
}

func NewMutexCache() *MutexCache

func (c *MutexCache) Set(key string, value interface{})
func (c *MutexCache) Get(key string) (interface{}, bool)
func (c *MutexCache) Delete(key string)
func (c *MutexCache) Len() int

// SyncMapCache uses sync.Map for lock-free operations
type SyncMapCache struct {
    data sync.Map
}

func NewSyncMapCache() *SyncMapCache

func (c *SyncMapCache) Set(key string, value interface{})
func (c *SyncMapCache) Get(key string) (interface{}, bool)
func (c *SyncMapCache) Delete(key string)
func (c *SyncMapCache) Range(f func(key, value interface{}) bool)

// ShardedCache splits data across multiple shards for better concurrency
type ShardedCache struct {
    shards []*MutexCache
    count  int
}

func NewShardedCache(shardCount int) *ShardedCache

func (c *ShardedCache) Set(key string, value interface{})
func (c *ShardedCache) Get(key string) (interface{}, bool)

// ComputeFunc is a function that computes a value for a key
type ComputeFunc func(key string) (interface{}, error)

// GetOrCompute gets the cached value, or computes and stores it if missing
func (c *MutexCache) GetOrCompute(key string, compute ComputeFunc) (interface{}, error)

// BulkUpdate atomically updates multiple key-value pairs
func (c *MutexCache) BulkUpdate(updates map[string]interface{})
Example Usage
package main

import (
    "fmt"
    "sync"

    "concurrentcache"
)

func main() {
    // MutexCache example
    cache := concurrentcache.NewMutexCache()

    // Concurrent writes
    var wg sync.WaitGroup
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func(n int) {
            defer wg.Done()
            cache.Set(fmt.Sprintf("key%d", n), n*2)
        }(i)
    }
    wg.Wait()

    // Concurrent reads
    value, ok := cache.Get("key42")
    fmt.Println(value, ok) // 84 true

    fmt.Println(cache.Len()) // 100

    // SyncMapCache example
    syncCache := concurrentcache.NewSyncMapCache()
    syncCache.Set("user:1", "Alice")
    syncCache.Set("user:2", "Bob")

    syncCache.Range(func(key, value interface{}) bool {
        fmt.Printf("%v: %v\n", key, value)
        return true // continue iteration
    })

    // ShardedCache example
    sharded := concurrentcache.NewShardedCache(16)
    for i := 0; i < 1000; i++ {
        sharded.Set(fmt.Sprintf("item%d", i), i)
    }

    // GetOrCompute example
    compute := func(key string) (interface{}, error) {
        // Expensive computation
        return "computed_" + key, nil
    }

    result, _ := cache.GetOrCompute("expensive", compute)
    fmt.Println(result) // "computed_expensive"

    // Second call uses the cached value
    result2, _ := cache.GetOrCompute("expensive", compute)
    fmt.Println(result2) // "computed_expensive"

    // BulkUpdate example
    updates := map[string]interface{}{
        "status": "active",
        "count":  42,
        "name":   "service",
    }
    cache.BulkUpdate(updates)
}
Requirements
- MutexCache must use RWMutex for optimal read performance
- All operations must be thread-safe and pass the race detector
- ShardedCache must distribute keys evenly across shards
- GetOrCompute must ensure compute function is called only once per key
- BulkUpdate must be atomic
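All of these properties can be checked mechanically by running the test suite with the race detector enabled: go test -race ./...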
Test Cases
package concurrentcache

import (
    "fmt"
    "sync"
    "testing"
    "time"
)

func TestMutexCacheConcurrency(t *testing.T) {
    cache := NewMutexCache()
    var wg sync.WaitGroup

    // Concurrent writes
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func(n int) {
            defer wg.Done()
            key := fmt.Sprintf("key%d", n)
            cache.Set(key, n)
        }(i)
    }

    // Concurrent reads
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func(n int) {
            defer wg.Done()
            key := fmt.Sprintf("key%d", n)
            cache.Get(key)
        }(i)
    }

    wg.Wait()

    if cache.Len() != 100 {
        t.Errorf("Expected 100 entries, got %d", cache.Len())
    }
}

func TestMutexCacheBasicOps(t *testing.T) {
    cache := NewMutexCache()

    cache.Set("key1", "value1")
    cache.Set("key2", 42)

    if val, ok := cache.Get("key1"); !ok || val != "value1" {
        t.Errorf("Get failed: got %v, %v", val, ok)
    }

    cache.Delete("key1")
    if _, ok := cache.Get("key1"); ok {
        t.Error("Delete failed: key still exists")
    }

    if cache.Len() != 1 {
        t.Errorf("Expected length 1, got %d", cache.Len())
    }
}

func TestSyncMapCache(t *testing.T) {
    cache := NewSyncMapCache()

    cache.Set("a", 1)
    cache.Set("b", 2)
    cache.Set("c", 3)

    if val, ok := cache.Get("b"); !ok || val != 2 {
        t.Errorf("Get failed: got %v, %v", val, ok)
    }

    cache.Delete("b")
    if _, ok := cache.Get("b"); ok {
        t.Error("Delete failed")
    }

    count := 0
    cache.Range(func(key, value interface{}) bool {
        count++
        return true
    })

    if count != 2 {
        t.Errorf("Expected 2 items, got %d", count)
    }
}

func TestShardedCache(t *testing.T) {
    cache := NewShardedCache(4)

    // Insert many items
    for i := 0; i < 1000; i++ {
        key := fmt.Sprintf("key%d", i)
        cache.Set(key, i)
    }

    // Verify all items
    for i := 0; i < 1000; i++ {
        key := fmt.Sprintf("key%d", i)
        val, ok := cache.Get(key)
        if !ok || val != i {
            t.Errorf("Key %s: got %v, %v", key, val, ok)
        }
    }
}

func TestGetOrCompute(t *testing.T) {
    cache := NewMutexCache()

    // computeCount needs no extra locking here: GetOrCompute runs
    // compute while holding the cache's write lock.
    computeCount := 0
    compute := func(key string) (interface{}, error) {
        computeCount++
        time.Sleep(10 * time.Millisecond) // Simulate expensive operation
        return "computed_" + key, nil
    }

    // Call concurrently
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            cache.GetOrCompute("shared_key", compute)
        }()
    }
    wg.Wait()

    // Compute should be called only once
    if computeCount != 1 {
        t.Errorf("Expected compute to be called once, got %d", computeCount)
    }

    // Value should be cached
    val, _ := cache.Get("shared_key")
    if val != "computed_shared_key" {
        t.Errorf("Unexpected value: %v", val)
    }
}

func TestBulkUpdate(t *testing.T) {
    cache := NewMutexCache()

    updates := map[string]interface{}{
        "a": 1,
        "b": 2,
        "c": 3,
    }

    cache.BulkUpdate(updates)

    for key, expectedVal := range updates {
        val, ok := cache.Get(key)
        if !ok || val != expectedVal {
            t.Errorf("Key %s: got %v, want %v", key, val, expectedVal)
        }
    }
}
Hints
Hint 1: RWMutex vs Mutex
Use RWMutex when you have many reads and few writes. Multiple readers can hold the lock simultaneously, but writers need exclusive access.
// Read lock - allows concurrent readers
c.mu.RLock()
defer c.mu.RUnlock()

// Write lock - exclusive access
c.mu.Lock()
defer c.mu.Unlock()
Hint 2: ShardedCache Distribution
Use a hash function to distribute keys evenly across shards:
func (c *ShardedCache) getShard(key string) *MutexCache {
    h := fnv.New32a() // hash/fnv from the standard library
    h.Write([]byte(key))
    return c.shards[h.Sum32()%uint32(c.count)]
}
Hint 3: GetOrCompute Race-Free
Use double-checked locking pattern:
// 1. Check with read lock if value exists
// 2. If not, acquire write lock
// 3. Check again
// 4. If still missing, compute and store
Hint 4: sync.Map Use Cases
sync.Map is optimized for:
- Keys that are written once and read many times
- Disjoint key sets accessed by different goroutines
- Not for general use - benchmark first!
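If you have not used sync.Map before, here is a minimal standalone illustration of its core methods (Store, Load, and LoadOrStore are all part of the standard library API):

package main

import (
    "fmt"
    "sync"
)

func main() {
    var m sync.Map

    // Store writes a key; Load reads it back with a comma-ok result.
    m.Store("config", "v1")
    if v, ok := m.Load("config"); ok {
        fmt.Println(v) // v1
    }

    // LoadOrStore returns the existing value if the key is present,
    // otherwise it stores and returns the given one - handy for
    // write-once, read-many keys.
    actual, loaded := m.LoadOrStore("config", "v2")
    fmt.Println(actual, loaded) // v1 true
}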
Hint 5: Avoiding Deadlocks
Always lock in consistent order and use defer to ensure unlock:
c.mu.Lock()
defer c.mu.Unlock()
// Even if a panic occurs, the unlock will happen
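For example, if an operation ever needs to touch two shards of the ShardedCache at once, always lock the shard with the lower index first; a consistent order across all goroutines rules out the circular wait that causes deadlock.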
Solution
package concurrentcache

import (
    "hash/fnv"
    "sync"
)

// MutexCache provides thread-safe map access with RWMutex
type MutexCache struct {
    mu   sync.RWMutex
    data map[string]interface{}
}

func NewMutexCache() *MutexCache {
    return &MutexCache{
        data: make(map[string]interface{}),
    }
}

func (c *MutexCache) Set(key string, value interface{}) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.data[key] = value
}

func (c *MutexCache) Get(key string) (interface{}, bool) {
    c.mu.RLock()
    defer c.mu.RUnlock()
    val, ok := c.data[key]
    return val, ok
}

func (c *MutexCache) Delete(key string) {
    c.mu.Lock()
    defer c.mu.Unlock()
    delete(c.data, key)
}

func (c *MutexCache) Len() int {
    c.mu.RLock()
    defer c.mu.RUnlock()
    return len(c.data)
}

// SyncMapCache uses sync.Map for lock-free operations
type SyncMapCache struct {
    data sync.Map
}

func NewSyncMapCache() *SyncMapCache {
    return &SyncMapCache{}
}

func (c *SyncMapCache) Set(key string, value interface{}) {
    c.data.Store(key, value)
}

func (c *SyncMapCache) Get(key string) (interface{}, bool) {
    return c.data.Load(key)
}

func (c *SyncMapCache) Delete(key string) {
    c.data.Delete(key)
}

func (c *SyncMapCache) Range(f func(key, value interface{}) bool) {
    c.data.Range(f)
}

// ShardedCache splits data across multiple shards for better concurrency
type ShardedCache struct {
    shards []*MutexCache
    count  int
}

func NewShardedCache(shardCount int) *ShardedCache {
    if shardCount <= 0 {
        shardCount = 16 // default
    }

    shards := make([]*MutexCache, shardCount)
    for i := 0; i < shardCount; i++ {
        shards[i] = NewMutexCache()
    }

    return &ShardedCache{
        shards: shards,
        count:  shardCount,
    }
}

func (c *ShardedCache) getShard(key string) *MutexCache {
    h := fnv.New32a()
    h.Write([]byte(key))
    return c.shards[h.Sum32()%uint32(c.count)]
}

func (c *ShardedCache) Set(key string, value interface{}) {
    shard := c.getShard(key)
    shard.Set(key, value)
}

func (c *ShardedCache) Get(key string) (interface{}, bool) {
    shard := c.getShard(key)
    return shard.Get(key)
}

// ComputeFunc is a function that computes a value for a key
type ComputeFunc func(key string) (interface{}, error)

// GetOrCompute gets the cached value, or computes and stores it if missing
func (c *MutexCache) GetOrCompute(key string, compute ComputeFunc) (interface{}, error) {
    // First check with read lock
    c.mu.RLock()
    val, ok := c.data[key]
    c.mu.RUnlock()

    if ok {
        return val, nil
    }

    // Not found, acquire write lock
    c.mu.Lock()
    defer c.mu.Unlock()

    // Double-check: another goroutine might have computed it
    val, ok = c.data[key]
    if ok {
        return val, nil
    }

    // Still missing, compute it (note: the write lock is held
    // while compute runs, so other cache operations wait)
    computed, err := compute(key)
    if err != nil {
        return nil, err
    }

    c.data[key] = computed
    return computed, nil
}

// BulkUpdate atomically updates multiple key-value pairs
func (c *MutexCache) BulkUpdate(updates map[string]interface{}) {
    c.mu.Lock()
    defer c.mu.Unlock()

    for key, value := range updates {
        c.data[key] = value
    }
}
Explanation
MutexCache:
- Uses sync.RWMutex for read-write locking
- Read operations use RLock for concurrent reads
- Write operations use Lock for exclusive access
- defer ensures locks are always released, even on panic
SyncMapCache:
- Uses sync.Map for built-in concurrent access
- No explicit locking needed
- Optimized for specific use cases
- Range method allows safe iteration
ShardedCache:
- Distributes keys across multiple shards using FNV hash
- Reduces lock contention by splitting the data
- Each shard is an independent MutexCache
- Better performance under heavy concurrent load
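- Shard counts are often chosen as powers of two so the index can be computed with a cheap bit mask (hash & (count-1)) instead of a modulo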
GetOrCompute:
- Implements double-checked locking pattern
- First check with read lock
- If missing, acquire write lock and check again
- Ensures compute function is called only once per key
- Prevents race conditions when multiple goroutines request same key
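- Trade-off: in the solution above, compute runs while the write lock is held, so a slow computation temporarily blocks all other operations on the cache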
BulkUpdate:
- Acquires write lock once for all updates
- More efficient than individual Set calls
- Ensures atomicity - all updates happen together
Race Condition Example
// UNSAFE - Race condition!
type UnsafeCache struct {
    data map[string]interface{}
}

func (c *UnsafeCache) Set(key string, value interface{}) {
    c.data[key] = value // Multiple goroutines writing = race!
}

// Run with: go run -race main.go
// Will detect: WARNING: DATA RACE
Performance Comparison
func BenchmarkMutexCache(b *testing.B) {
    cache := NewMutexCache()
    b.RunParallel(func(pb *testing.PB) {
        for pb.Next() {
            cache.Set("key", "value")
            cache.Get("key")
        }
    })
}

func BenchmarkSyncMapCache(b *testing.B) {
    cache := NewSyncMapCache()
    b.RunParallel(func(pb *testing.PB) {
        for pb.Next() {
            cache.Set("key", "value")
            cache.Get("key")
        }
    })
}

func BenchmarkShardedCache(b *testing.B) {
    cache := NewShardedCache(16)
    b.RunParallel(func(pb *testing.PB) {
        for pb.Next() {
            cache.Set("key", "value")
            cache.Get("key")
        }
    })
}
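Run these with go test -bench=. -benchmem. Note that all three benchmarks hammer a single key, so they measure pure lock overhead; ShardedCache only shows its advantage when keys are spread across shards, so benchmark with a key distribution that resembles your real workload.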
Bonus Challenges
- TTL Cache: Implement a cache with time-to-live expiration (a starter sketch follows this list)

type TTLCache struct {
    cache *MutexCache
    ttls  map[string]time.Time
}

func (c *TTLCache) SetWithTTL(key string, value interface{}, ttl time.Duration)

- LRU Cache: Implement a Least Recently Used eviction policy

type LRUCache struct {
    capacity int
    cache    *MutexCache
    list     *list.List // doubly linked list from container/list
}

- Write-Through Cache: A cache that also writes to a backing store

type WriteThroughCache struct {
    cache   *MutexCache
    storage Storage // interface for persistent storage
}

- Cache Statistics: Track hits, misses, and evictions

type StatsCache struct {
    cache     *MutexCache
    hits      int64
    misses    int64
    evictions int64
}

func (c *StatsCache) GetStats() CacheStats
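To make the TTL challenge concrete, here is one possible starting point - a minimal sketch, not the only design. Unlike the outline above, it keeps its own RWMutex instead of wrapping MutexCache, so the expiry map is protected by the same lock as the data; expired entries are detected lazily on Get rather than by a background cleanup goroutine.

package concurrentcache

import (
    "sync"
    "time"
)

// TTLCache stores a deadline alongside each value. Expired entries
// are detected lazily on Get; a production version might also run a
// periodic cleanup goroutine to reclaim memory.
type TTLCache struct {
    mu      sync.RWMutex
    data    map[string]interface{}
    expires map[string]time.Time
}

func NewTTLCache() *TTLCache {
    return &TTLCache{
        data:    make(map[string]interface{}),
        expires: make(map[string]time.Time),
    }
}

func (c *TTLCache) SetWithTTL(key string, value interface{}, ttl time.Duration) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.data[key] = value
    c.expires[key] = time.Now().Add(ttl)
}

func (c *TTLCache) Get(key string) (interface{}, bool) {
    c.mu.RLock()
    val, ok := c.data[key]
    deadline := c.expires[key]
    c.mu.RUnlock()
    if !ok || time.Now().After(deadline) {
        return nil, false
    }
    return val, true
}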
Key Takeaways
- Maps are not thread-safe - concurrent access causes race conditions
- Use RWMutex when reads are much more frequent than writes
- sync.Map is specialized - not a general replacement for map + mutex
- Sharding reduces contention by splitting data across multiple locks
- Always use defer with Lock/Unlock to prevent deadlocks
- Race detector is your friend - always run tests with the -race flag
Concurrent map access is a common source of bugs in Go programs. Understanding these patterns and always protecting shared state with proper synchronization is crucial for writing correct concurrent code.