Why Database Operations Matter
Think of building a house and connecting it to the city's water system - you don't need to know exactly how the water treatment plant works, just the right pipes and connections to get clean water flowing. Go's database/sql package works exactly like this - it provides the standard pipes and connections to work with any SQL database, without getting bogged down in database-specific details.
Real-world Impact: Every modern application stores data - from user profiles and orders to logs and analytics. Database operations are the backbone of e-commerce platforms, social networks, financial systems, and virtually every software product you use daily. When you master Go's database patterns, you can build systems that scale to millions of users while maintaining data integrity and performance.
💡 Key Takeaway: Think of `database/sql` as a universal remote control for databases - learn it once, and you can control any SQL database with the same set of tools.
Learning Objectives
By the end of this article, you'll be able to:
- ✅ Set up database connections with proper driver management and connection pooling
- ✅ Perform CRUD operations safely and efficiently
- ✅ Implement transactions to ensure data consistency and handle complex operations
- ✅ Handle NULL values and edge cases with Go's special SQL types
- ✅ Use context for timeouts and cancellation in database operations
- ✅ Apply production patterns including connection pooling, batch operations, and error handling
- ✅ Work with different databases while writing portable Go code
Core Concepts - Understanding Go's Database Philosophy
The database/sql package is Go's standard approach to relational databases - a beautifully designed abstraction that works with any SQL database through a driver interface.
Real-world Example: When you build a web application that stores user data, you might start with SQLite for development, then switch to PostgreSQL for production. With Go's database/sql, you only need to change the driver import and connection string - all your SQL queries remain exactly the same.
Database Abstraction Layers
Go offers different approaches to database programming, each with trade-offs:
| Approach | Best For | Learning Curve | Performance | Dependencies |
|---|---|---|---|---|
| database/sql | Production systems, microservices | Medium | Excellent | Standard library only |
| GORM | Rapid prototyping, complex relationships | Easy | Good | External dependency |
| sqlx | Balance between control and convenience | Medium | Excellent | External dependency |
Why Go's database/sql is Special:
- Driver-Agnostic - Write once, switch databases. The same code works with PostgreSQL, MySQL, SQLite, and SQL Server.
- Connection Pooling Built-In - Production-ready connection management with configurable pool sizes.
- Thread-Safe - Safe for concurrent use across goroutines without external synchronization.
- Prepared Statements - Automatic SQL injection protection and performance optimization.
- Standard Library - No ORM dependencies needed for simple to moderate complexity.
When to Choose Each Approach:
```text
// ✅ Choose database/sql when:
- Building microservices
- Need maximum performance control
- Simple to moderate CRUD operations
- Want explicit SQL control and transparency
- Learning Go database patterns

// ✅ Choose GORM when:
- Rapid prototyping and development speed
- Complex associations
- Need automatic migrations
- Team prefers ORM patterns

// ✅ Choose sqlx when:
- Want struct mapping convenience
- Still need SQL control
- Middle ground between database/sql and GORM
- Production-ready with better ergonomics
```
⚠️ Important: Choosing the right database tool is critical to your application's success. While ORMs like GORM are great for rapid development, understanding `database/sql` gives you complete control and helps you debug performance issues when they arise.
Practical Examples - Getting Started with Database Operations
Database Drivers: The Universal Adapter
Drivers are like language interpreters - they translate Go's standard database commands into the specific language each database understands. The beauty is that you only need to learn one interface and the drivers handle all the translation work.
Real-world Example: Think of drivers like universal power adapters. Your Go code is the device, and each database is a different type of power outlet. The driver ensures your device works with any outlet.
Setting up your database driver is simple:
```go
// run
package main

import (
	"fmt"

	_ "github.com/mattn/go-sqlite3" // SQLite
	// _ "github.com/lib/pq"              // PostgreSQL
	// _ "github.com/go-sql-driver/mysql" // MySQL
)

func main() {
	fmt.Println("Database driver imported and ready!")
}
```
💡 Key Takeaway: The blank identifier `_` tells Go "I need this package's side effects, but I won't call it directly." The driver registers itself with `database/sql` in its `init` function.
Opening a Database Connection
Opening a database connection is like establishing a phone line to your database. You need to dial the right number and verify the line is working before you can start talking.
Basic Connection Setup:
```go
// run
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	// Open database connection
	db, err := sql.Open("sqlite3", "./test.db")
	if err != nil {
		log.Fatal("Failed to open database:", err)
	}
	defer db.Close() // Always close connections

	// Verify the connection actually works; sql.Open alone does not connect
	if err := db.Ping(); err != nil {
		log.Fatal("Database connection failed:", err)
	}

	fmt.Println("✅ Connected to database successfully!")
}
```
Production-Ready Connection Pooling:
Connection pooling is like having a team of ready workers instead of hiring a new person for every task. It's much more efficient to keep a few connections open and ready than to open and close connections for every query.
Real-world Example: A busy restaurant doesn't hire a new waiter for each customer. They have a team of waiters who can serve multiple customers throughout their shift. Connection pooling works the same way.
💡 Key Takeaway: Proper connection pool configuration is crucial for performance. Too few connections and requests wait; too many and you overwhelm the database.
```go
// run
package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/mattn/go-sqlite3"
)

func setupDatabase() *sql.DB {
	db, err := sql.Open("sqlite3", "./production.db")
	if err != nil {
		log.Fatal("Failed to open database:", err)
	}

	// Production-ready connection pool settings
	db.SetMaxOpenConns(25)                  // Maximum open connections to database
	db.SetMaxIdleConns(5)                   // Maximum idle connections to keep alive
	db.SetConnMaxLifetime(5 * time.Minute)  // Recycle connections after 5 minutes
	db.SetConnMaxIdleTime(10 * time.Minute) // Close idle connections after 10 minutes

	// Verify connection
	if err := db.Ping(); err != nil {
		log.Fatal("Database connection failed:", err)
	}

	log.Println("Database connection pool configured")
	return db
}

func main() {
	db := setupDatabase()
	defer db.Close()

	log.Println("✅ Production database ready")
}
```
Integration and Mastery - Building a Complete User Management System
Now let's put everything together by building a practical example that demonstrates real-world database patterns and best practices.
Creating Tables with Production Schema
Creating tables is like building the blueprint for your data house. You define the structure that will hold your data safely and efficiently.
⚠️ Important: Once you create a table structure, changing it can be difficult, especially with existing data. Plan your schema carefully!
```go
// run
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func setupDatabase(db *sql.DB) error {
	// Create users table with production-ready schema
	query := `
	CREATE TABLE IF NOT EXISTS users (
		id INTEGER PRIMARY KEY AUTOINCREMENT,
		username TEXT NOT NULL UNIQUE,
		email TEXT NOT NULL UNIQUE,
		password_hash TEXT NOT NULL,
		created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
		updated_at DATETIME DEFAULT CURRENT_TIMESTAMP,
		last_login DATETIME,
		is_active BOOLEAN DEFAULT 1
	)`

	_, err := db.Exec(query)
	if err != nil {
		return err
	}

	// Create indexes for query performance
	indexQuery := `
	CREATE INDEX IF NOT EXISTS idx_users_email ON users(email);
	CREATE INDEX IF NOT EXISTS idx_users_username ON users(username);
	CREATE INDEX IF NOT EXISTS idx_users_active ON users(is_active);`

	_, err = db.Exec(indexQuery)
	return err
}

func main() {
	db, err := sql.Open("sqlite3", "./production.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := setupDatabase(db); err != nil {
		log.Fatal("Failed to setup database:", err)
	}

	log.Println("✅ Database tables and indexes created successfully")
}
```
Complete CRUD Operations with Error Handling
Let's build a user repository that demonstrates all CRUD operations with proper error handling, parameterized queries, and production-ready patterns.
```go
// run
package main

import (
	"database/sql"
	"fmt"
	"log"
	"time"

	_ "github.com/mattn/go-sqlite3"
)

type User struct {
	ID        int        `json:"id"`
	Username  string     `json:"username"`
	Email     string     `json:"email"`
	CreatedAt time.Time  `json:"created_at"`
	UpdatedAt time.Time  `json:"updated_at"`
	LastLogin *time.Time `json:"last_login"`
	IsActive  bool       `json:"is_active"`
}

type UserRepository struct {
	db *sql.DB
}

func NewUserRepository(db *sql.DB) *UserRepository {
	return &UserRepository{db: db}
}

// CREATE: Insert a new user with proper error handling.
// Note: the RETURNING clause requires SQLite 3.35+ (PostgreSQL also supports it).
func (r *UserRepository) CreateUser(username, email, passwordHash string) (*User, error) {
	query := `
		INSERT INTO users (username, email, password_hash)
		VALUES (?, ?, ?)
		RETURNING id, username, email, created_at, updated_at, last_login, is_active
	`

	var user User
	err := r.db.QueryRow(query, username, email, passwordHash).Scan(
		&user.ID, &user.Username, &user.Email,
		&user.CreatedAt, &user.UpdatedAt, &user.LastLogin, &user.IsActive,
	)
	if err != nil {
		return nil, fmt.Errorf("failed to create user: %w", err)
	}

	return &user, nil
}

// READ: Get a user by ID with proper NULL handling
func (r *UserRepository) GetUserByID(id int) (*User, error) {
	query := `
		SELECT id, username, email, created_at, updated_at,
		       COALESCE(last_login, ''), is_active
		FROM users WHERE id = ?
	`

	var user User
	var lastLoginStr string

	err := r.db.QueryRow(query, id).Scan(
		&user.ID, &user.Username, &user.Email,
		&user.CreatedAt, &user.UpdatedAt, &lastLoginStr, &user.IsActive,
	)
	if err != nil {
		if err == sql.ErrNoRows {
			return nil, fmt.Errorf("user not found")
		}
		return nil, fmt.Errorf("failed to get user: %w", err)
	}

	// Handle nullable last_login
	if lastLoginStr != "" {
		if parsed, err := time.Parse("2006-01-02 15:04:05", lastLoginStr); err == nil {
			user.LastLogin = &parsed
		}
	}

	return &user, nil
}

// READ: Get multiple users with pagination
func (r *UserRepository) GetUsers(limit, offset int, activeOnly bool) ([]User, error) {
	// The first placeholder disables the is_active filter when activeOnly is false
	query := `
		SELECT id, username, email, created_at, updated_at,
		       COALESCE(last_login, ''), is_active
		FROM users
		WHERE (? OR is_active = ?)
		ORDER BY created_at DESC
		LIMIT ? OFFSET ?
	`

	rows, err := r.db.Query(query, !activeOnly, activeOnly, limit, offset)
	if err != nil {
		return nil, fmt.Errorf("failed to query users: %w", err)
	}
	defer rows.Close()

	var users []User
	for rows.Next() {
		var user User
		var lastLoginStr string

		if err := rows.Scan(
			&user.ID, &user.Username, &user.Email,
			&user.CreatedAt, &user.UpdatedAt, &lastLoginStr, &user.IsActive,
		); err != nil {
			return nil, fmt.Errorf("failed to scan user: %w", err)
		}

		// Handle nullable last_login
		if lastLoginStr != "" {
			if parsed, err := time.Parse("2006-01-02 15:04:05", lastLoginStr); err == nil {
				user.LastLogin = &parsed
			}
		}

		users = append(users, user)
	}

	// Check for iteration errors
	if err := rows.Err(); err != nil {
		return nil, fmt.Errorf("error during user iteration: %w", err)
	}

	return users, nil
}

// UPDATE: Update user with proper row count checking
func (r *UserRepository) UpdateUserLastLogin(id int) error {
	query := `
		UPDATE users SET last_login = CURRENT_TIMESTAMP, updated_at = CURRENT_TIMESTAMP
		WHERE id = ?
	`

	result, err := r.db.Exec(query, id)
	if err != nil {
		return fmt.Errorf("failed to update user: %w", err)
	}

	rowsAffected, err := result.RowsAffected()
	if err != nil {
		return fmt.Errorf("failed to get rows affected: %w", err)
	}

	if rowsAffected == 0 {
		return fmt.Errorf("user not found")
	}

	return nil
}

// DELETE: Soft delete user
func (r *UserRepository) SoftDeleteUser(id int) error {
	query := `UPDATE users SET is_active = 0 WHERE id = ?`

	result, err := r.db.Exec(query, id)
	if err != nil {
		return fmt.Errorf("failed to soft delete user: %w", err)
	}

	rowsAffected, err := result.RowsAffected()
	if err != nil {
		return fmt.Errorf("failed to get rows affected: %w", err)
	}

	if rowsAffected == 0 {
		return fmt.Errorf("user not found")
	}

	return nil
}

func main() {
	db, err := sql.Open("sqlite3", "./production.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	repo := NewUserRepository(db)

	// Example usage
	user, err := repo.CreateUser("john_doe", "john@example.com", "hashed_password")
	if err != nil {
		log.Fatal("Failed to create user:", err)
	}
	fmt.Printf("Created user: %+v\n", user)

	// Get user by ID
	retrieved, err := repo.GetUserByID(user.ID)
	if err != nil {
		log.Fatal("Failed to get user:", err)
	}
	fmt.Printf("Retrieved user: %+v\n", retrieved)

	// List users
	users, err := repo.GetUsers(10, 0, true)
	if err != nil {
		log.Fatal("Failed to list users:", err)
	}
	fmt.Printf("Found %d users\n", len(users))

	// Update last login
	if err := repo.UpdateUserLastLogin(user.ID); err != nil {
		log.Fatal("Failed to update login:", err)
	}
	fmt.Println("✅ Updated last login time")
}
```
Context-Aware Database Operations with Timeouts
Context is like having a timer and panic button for your database operations. It lets you set timeouts and cancel operations if the user disconnects or if something takes too long.
Real-world Example: When a user loads a webpage and clicks away before it finishes loading, the context is canceled. Your database query should automatically stop running instead of continuing to consume resources.
```go
// run
package main

import (
	"context"
	"database/sql"
	"errors"
	"fmt"
	"log"
	"strings"
	"time"

	_ "github.com/mattn/go-sqlite3"
)

type User struct {
	ID        int
	Username  string
	Email     string
	CreatedAt time.Time
	UpdatedAt time.Time
	LastLogin *time.Time
	IsActive  bool
}

func queryUserWithTimeout(db *sql.DB, userID int) (*User, error) {
	// Create context with a 5-second timeout
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel() // Always cancel to prevent context leaks

	query := `
		SELECT id, username, email, created_at, updated_at,
		       COALESCE(last_login, ''), is_active
		FROM users WHERE id = ?
	`

	var user User
	var lastLoginStr string

	err := db.QueryRowContext(ctx, query, userID).Scan(
		&user.ID, &user.Username, &user.Email,
		&user.CreatedAt, &user.UpdatedAt, &lastLoginStr, &user.IsActive,
	)
	if err != nil {
		if errors.Is(err, sql.ErrNoRows) {
			return nil, fmt.Errorf("user not found")
		}
		if errors.Is(err, context.DeadlineExceeded) {
			return nil, fmt.Errorf("database query timed out")
		}
		return nil, fmt.Errorf("failed to query user: %w", err)
	}

	// Handle nullable last_login
	if lastLoginStr != "" {
		if parsed, err := time.Parse("2006-01-02 15:04:05", lastLoginStr); err == nil {
			user.LastLogin = &parsed
		}
	}

	return &user, nil
}

func batchUsersWithTimeout(ctx context.Context, db *sql.DB, userIDs []int) ([]User, error) {
	if len(userIDs) == 0 {
		return []User{}, nil
	}

	// Build placeholders for the IN clause
	placeholders := make([]string, len(userIDs))
	args := make([]interface{}, len(userIDs))
	for i, id := range userIDs {
		placeholders[i] = "?"
		args[i] = id
	}

	query := fmt.Sprintf(`
		SELECT id, username, email, created_at, updated_at,
		       COALESCE(last_login, ''), is_active
		FROM users WHERE id IN (%s)
		ORDER BY created_at DESC
	`, strings.Join(placeholders, ","))

	rows, err := db.QueryContext(ctx, query, args...)
	if err != nil {
		return nil, fmt.Errorf("failed to query users: %w", err)
	}
	defer rows.Close()

	var users []User
	for rows.Next() {
		var user User
		var lastLoginStr string

		if err := rows.Scan(
			&user.ID, &user.Username, &user.Email,
			&user.CreatedAt, &user.UpdatedAt, &lastLoginStr, &user.IsActive,
		); err != nil {
			return nil, fmt.Errorf("failed to scan user: %w", err)
		}

		// Handle nullable last_login
		if lastLoginStr != "" {
			if parsed, err := time.Parse("2006-01-02 15:04:05", lastLoginStr); err == nil {
				user.LastLogin = &parsed
			}
		}

		users = append(users, user)
	}

	if err := rows.Err(); err != nil {
		return nil, fmt.Errorf("error during user iteration: %w", err)
	}

	return users, nil
}

func main() {
	db, err := sql.Open("sqlite3", "./production.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Example: Query a single user with a timeout
	user, err := queryUserWithTimeout(db, 1)
	if err != nil {
		log.Printf("User query failed: %v", err)
	} else {
		fmt.Printf("Found user: %+v\n", user)
	}

	// Example: Batch query with a caller-supplied context
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	userIDs := []int{1, 2, 3, 4, 5}
	users, err := batchUsersWithTimeout(ctx, db, userIDs)
	if err != nil {
		log.Printf("Batch query failed: %v", err)
	} else {
		fmt.Printf("Found %d users\n", len(users))
	}
}
```
Prepared Statements
Prepared statements are like reusable templates for your SQL queries. Instead of writing the same query over and over, you prepare it once and reuse it with different values. This is both faster and more secure.
Real-world Example: Think of a prepared statement like a form with blank fields. You create the form once, then fill in different information each time you use it. The database can also optimize the query plan once and reuse it.
```go
// run
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	db, err := sql.Open("sqlite3", "./test.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Prepare the statement once; the ? placeholders are filled in at Exec time
	stmt, err := db.Prepare("INSERT INTO users (username, email) VALUES (?, ?)")
	if err != nil {
		log.Fatal(err)
	}
	defer stmt.Close()

	// Execute it multiple times with different values
	users := []struct {
		username string
		email    string
	}{
		{"bob", "bob@example.com"},
		{"charlie", "charlie@example.com"},
		{"diana", "diana@example.com"},
	}

	for _, user := range users {
		result, err := stmt.Exec(user.username, user.email)
		if err != nil {
			log.Printf("Error inserting %s: %v\n", user.username, err)
			continue
		}

		id, _ := result.LastInsertId()
		log.Printf("Inserted %s with ID: %d\n", user.username, id)
	}
}
```
Querying Data
Querying data is like asking a librarian to find specific books - you need to be precise about what you're looking for, and the librarian will return exactly what matches your criteria.
Query Single Row
When you expect exactly one result, QueryRow is your most efficient choice. It's optimized for single-row results and has a simpler interface.
💡 Key Takeaway: Use `QueryRow` when you expect at most one result. It's more efficient than `Query` for single-row lookups.
```go
// run
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

type User struct {
	ID       int
	Username string
	Email    string
}

func main() {
	db, err := sql.Open("sqlite3", "./test.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var user User
	query := "SELECT id, username, email FROM users WHERE username = ?"

	err = db.QueryRow(query, "alice").Scan(&user.ID, &user.Username, &user.Email)
	if err != nil {
		if err == sql.ErrNoRows {
			log.Println("No user found")
			return
		}
		log.Fatal(err)
	}

	log.Printf("User: %+v\n", user)
}
```
Query Multiple Rows
When you need multiple results, Query returns a cursor that you can iterate through. Think of it like the librarian bringing you a whole shelf of books to browse through.
⚠️ Important: Always close the rows cursor with `defer rows.Close()`. Forgetting this causes connection leaks that will eventually exhaust your connection pool!
```go
// run
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

type User struct {
	ID       int
	Username string
	Email    string
}

func main() {
	db, err := sql.Open("sqlite3", "./test.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	query := "SELECT id, username, email FROM users"
	rows, err := db.Query(query)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	var users []User

	for rows.Next() {
		var user User
		err := rows.Scan(&user.ID, &user.Username, &user.Email)
		if err != nil {
			log.Fatal(err)
		}
		users = append(users, user)
	}

	// Check for errors during iteration
	if err = rows.Err(); err != nil {
		log.Fatal(err)
	}

	for _, user := range users {
		log.Printf("User: %+v\n", user)
	}
}
```
Updating Data
```go
// run
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	db, err := sql.Open("sqlite3", "./test.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	query := "UPDATE users SET email = ? WHERE username = ?"
	result, err := db.Exec(query, "newalice@example.com", "alice")
	if err != nil {
		log.Fatal(err)
	}

	rows, err := result.RowsAffected()
	if err != nil {
		log.Fatal(err)
	}

	if rows == 0 {
		log.Println("No rows updated")
	} else {
		log.Printf("Updated %d row(s)\n", rows)
	}
}
```
Deleting Data
```go
// run
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	db, err := sql.Open("sqlite3", "./test.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	query := "DELETE FROM users WHERE username = ?"
	result, err := db.Exec(query, "bob")
	if err != nil {
		log.Fatal(err)
	}

	rows, err := result.RowsAffected()
	if err != nil {
		log.Fatal(err)
	}

	log.Printf("Deleted %d row(s)\n", rows)
}
```
Transactions
Transactions are like banking transactions - either all the operations complete successfully, or none of them do. You wouldn't want money to be deducted from your account without being deposited to someone else's account. Transactions guarantee this "all or nothing" behavior.
Real-world Example: When you transfer money from one bank account to another, two operations must happen: money is removed from Account A AND added to Account B. If the system crashes after removing money but before adding it, you'd lose money. Transactions prevent this by ensuring both operations succeed or both fail together.
Basic Transaction
⚠️ Important: Always handle errors properly in transactions. If any operation fails, you must call `tx.Rollback()` to undo any changes made so far.
```go
// run
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	db, err := sql.Open("sqlite3", "./test.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Begin transaction
	tx, err := db.Begin()
	if err != nil {
		log.Fatal(err)
	}

	// Execute statements within the transaction
	_, err = tx.Exec("INSERT INTO users (username, email) VALUES (?, ?)", "eve", "eve@example.com")
	if err != nil {
		tx.Rollback()
		log.Fatal(err)
	}

	_, err = tx.Exec("UPDATE users SET email = ? WHERE username = ?", "newemail@example.com", "alice")
	if err != nil {
		tx.Rollback()
		log.Fatal(err)
	}

	// Commit transaction
	if err = tx.Commit(); err != nil {
		log.Fatal(err)
	}

	log.Println("Transaction completed successfully")
}
```
Transaction with Defer
Using defer for rollback is like having an insurance policy - if something goes wrong, you're automatically covered. This pattern ensures that if you return early or panic, the transaction gets rolled back properly.
💡 Key Takeaway: The defer rollback pattern is safer because it guarantees cleanup even if you forget to handle an error case or your function panics.
```go
// run
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func transferFunds(db *sql.DB, fromUser, toUser string, amount float64) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}

	// Rollback is a no-op after a successful Commit (it returns sql.ErrTxDone),
	// so deferring it unconditionally covers every early return and any panic.
	defer tx.Rollback()

	// Deduct from sender
	_, err = tx.Exec("UPDATE accounts SET balance = balance - ? WHERE username = ?", amount, fromUser)
	if err != nil {
		return err
	}

	// Add to receiver
	_, err = tx.Exec("UPDATE accounts SET balance = balance + ? WHERE username = ?", amount, toUser)
	if err != nil {
		return err
	}

	// Commit transaction
	return tx.Commit()
}

func main() {
	db, err := sql.Open("sqlite3", "./test.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	err = transferFunds(db, "alice", "bob", 100.50)
	if err != nil {
		log.Fatal(err)
	}

	log.Println("Transfer completed")
}
```
Advanced Database Patterns
Handling NULL Values
NULL values in databases are like empty boxes - they're different from a box with nothing inside or a box with zero in it. Go provides special types to handle these NULL values safely.
Real-world Example: A user might not have filled in their optional bio field. In the database, this is NULL, not an empty string. This distinction is important for business logic.
💡 Key Takeaway: Always use the `sql.Null*` types for database columns that can be NULL. Trying to scan NULL into a plain, non-pointer Go type will cause an error.
```go
// run
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

type User struct {
	ID       int
	Username string
	Email    string
	Bio      sql.NullString // Can be NULL
	Age      sql.NullInt64  // Can be NULL
}

func main() {
	db, err := sql.Open("sqlite3", "./test.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	query := "SELECT id, username, email, bio, age FROM users WHERE id = ?"
	var user User

	err = db.QueryRow(query, 1).Scan(&user.ID, &user.Username, &user.Email, &user.Bio, &user.Age)
	if err != nil {
		log.Fatal(err)
	}

	log.Printf("Username: %s\n", user.Username)

	if user.Bio.Valid {
		log.Printf("Bio: %s\n", user.Bio.String)
	} else {
		log.Println("Bio: NULL")
	}

	if user.Age.Valid {
		log.Printf("Age: %d\n", user.Age.Int64)
	} else {
		log.Println("Age: NULL")
	}
}
```
Context Support
As shown earlier, every query and exec method in `database/sql` has a `...Context` variant that accepts a `context.Context`, letting you bound how long an operation may run and stop work when a request is abandoned. The examples below show the two most common variants: queries and transactions.
Query with Timeout
```go
// run
package main

import (
	"context"
	"database/sql"
	"log"
	"time"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	db, err := sql.Open("sqlite3", "./test.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Create context with timeout
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Query with context
	var username string
	err = db.QueryRowContext(ctx, "SELECT username FROM users WHERE id = ?", 1).Scan(&username)
	if err != nil {
		log.Fatal(err)
	}

	log.Printf("Username: %s\n", username)
}
```
Transaction with Context
```go
// run
package main

import (
	"context"
	"database/sql"
	"log"
	"time"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	db, err := sql.Open("sqlite3", "./test.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}

	_, err = tx.ExecContext(ctx, "INSERT INTO users (username, email) VALUES (?, ?)", "frank", "frank@example.com")
	if err != nil {
		tx.Rollback()
		log.Fatal(err)
	}

	if err = tx.Commit(); err != nil {
		log.Fatal(err)
	}

	log.Println("Transaction completed")
}
```
SQL Query Builders
Think of query builders like building with LEGOs versus free-form drawing. With string concatenation, you're drawing SQL freehand - one mistake and everything crashes. With query builders, you're snapping together pre-approved LEGO blocks - you can't build something invalid.
Query builders provide type-safe, dynamic SQL query construction.
Real-world Example: At GitHub, when you search for "golang web framework stars:>100", they don't just append this to a SQL query. They use query builders that translate complex search syntax into safe, parameterized queries. This prevents malicious users from searching for 1=1; DROP TABLE repositories;--.
```go
// run
package main

import (
	"fmt"
	"strings"
)

type QueryBuilder struct {
	table   string
	columns []string
	where   []string
	args    []interface{}
	limit   int
	offset  int
}

func Select(columns ...string) *QueryBuilder {
	return &QueryBuilder{
		columns: columns,
	}
}

func (qb *QueryBuilder) From(table string) *QueryBuilder {
	qb.table = table
	return qb
}

func (qb *QueryBuilder) Where(condition string, args ...interface{}) *QueryBuilder {
	qb.where = append(qb.where, condition)
	qb.args = append(qb.args, args...)
	return qb
}

func (qb *QueryBuilder) Limit(limit int) *QueryBuilder {
	qb.limit = limit
	return qb
}

func (qb *QueryBuilder) Offset(offset int) *QueryBuilder {
	qb.offset = offset
	return qb
}

func (qb *QueryBuilder) Build() (string, []interface{}) {
	query := "SELECT "

	if len(qb.columns) == 0 {
		query += "*"
	} else {
		query += strings.Join(qb.columns, ", ")
	}

	query += " FROM " + qb.table

	if len(qb.where) > 0 {
		query += " WHERE " + strings.Join(qb.where, " AND ")
	}

	if qb.limit > 0 {
		query += fmt.Sprintf(" LIMIT %d", qb.limit)
	}

	if qb.offset > 0 {
		query += fmt.Sprintf(" OFFSET %d", qb.offset)
	}

	return query, qb.args
}

func main() {
	query, args := Select("id", "name", "email").
		From("users").
		Where("age > ?", 18).
		Where("status = ?", "active").
		Limit(10).
		Offset(0). // offset of 0 is the default, so no OFFSET clause is emitted
		Build()

	fmt.Println("Query:", query)
	fmt.Println("Args:", args)
	// Output:
	// Query: SELECT id, name, email FROM users WHERE age > ? AND status = ? LIMIT 10
	// Args: [18 active]
}
```
Database Migrations
Manage schema changes systematically with migration tools.
Using golang-migrate:
```bash
go install -tags 'postgres' github.com/golang-migrate/migrate/v4/cmd/migrate@latest
```
Migration Files:
```sql
-- 001_create_users.up.sql
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    username VARCHAR(50) UNIQUE NOT NULL,
    email VARCHAR(100) UNIQUE NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- 001_create_users.down.sql
DROP TABLE IF EXISTS users;
```
Programmatic Migrations:
```go
// run
package main

import (
	"database/sql"
	"fmt"

	"github.com/golang-migrate/migrate/v4"
	"github.com/golang-migrate/migrate/v4/database/postgres"
	_ "github.com/golang-migrate/migrate/v4/source/file"
	_ "github.com/lib/pq"
)

func RunMigrations(db *sql.DB) error {
	driver, err := postgres.WithInstance(db, &postgres.Config{})
	if err != nil {
		return fmt.Errorf("create driver: %w", err)
	}

	m, err := migrate.NewWithDatabaseInstance(
		"file://migrations",
		"postgres", driver)
	if err != nil {
		return fmt.Errorf("create migrate instance: %w", err)
	}

	// Run all up migrations
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		return fmt.Errorf("run migrations: %w", err)
	}

	return nil
}
```
NoSQL Integration
Think of MongoDB like a flexible filing system versus rigid library card catalogs. SQL databases are like libraries where every book must have an ISBN, author, and publication date in specific formats. MongoDB is like a digital document system where you can store any type of document - PDFs, images, text files - without worrying about strict schemas.
MongoDB Integration
Real-world Example: Uber stores trip data in MongoDB because each trip can have different information. A regular taxi ride needs pickup/dropoff locations and fare. A bike share trip needs bike ID and station information. A food delivery trip needs restaurant details and delivery instructions. MongoDB handles this variety naturally, whereas SQL would require complex table designs.
// run
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/bson/primitive"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

type User struct {
	ID        primitive.ObjectID `bson:"_id,omitempty"`
	Name      string             `bson:"name"`
	Email     string             `bson:"email"`
	Age       int                `bson:"age"`
	CreatedAt time.Time          `bson:"created_at"`
}

func main() {
	// Connect to MongoDB
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	db := client.Database("testdb")
	collection := db.Collection("users")

	// Insert a user
	user := User{
		Name:      "Alice",
		Email:     "alice@example.com",
		Age:       30,
		CreatedAt: time.Now(),
	}

	result, err := collection.InsertOne(ctx, user)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("Inserted user with ID: %v\n", result.InsertedID)

	// Find a user
	var foundUser User
	err = collection.FindOne(ctx, bson.M{"name": "Alice"}).Decode(&foundUser)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("Found user: %+v\n", foundUser)
}
Redis Integration
Real-world Example: Stack Overflow uses Redis for caching to serve 95% of requests from cache, dramatically reducing database load. When a user asks a question, the answers are cached in Redis with a TTL. Subsequent views hit Redis instead of the database.
// run
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/go-redis/redis/v8"
)

func main() {
	// Connect to Redis
	rdb := redis.NewClient(&redis.Options{
		Addr:     "localhost:6379",
		Password: "", // no password set
		DB:       0,  // use default DB
	})

	ctx := context.Background()

	// Test connection
	_, err := rdb.Ping(ctx).Result()
	if err != nil {
		log.Fatal(err)
	}

	// Set a key
	err = rdb.Set(ctx, "user:1001", "Alice", time.Hour).Err()
	if err != nil {
		log.Fatal(err)
	}

	// Get a key
	val, err := rdb.Get(ctx, "user:1001").Result()
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("User: %s\n", val)

	// Set with expiration
	rdb.Set(ctx, "session:abc123", "user_data", 30*time.Minute)

	// Check if key exists
	exists, err := rdb.Exists(ctx, "user:1001").Result()
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("Key exists: %d\n", exists)
}
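The Stack Overflow scenario described above is the cache-aside pattern: check the cache first, fall back to the database on a miss, then populate the cache with a TTL. A minimal sketch of that flow, using an in-memory map as a stand-in for Redis and a hypothetical `loadUserFromDB` function so the example stays self-contained:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// entry holds a cached value and its expiry time.
type entry struct {
	value     string
	expiresAt time.Time
}

// Cache is a tiny stand-in for Redis: Get/Set with a TTL.
type Cache struct {
	mu   sync.Mutex
	data map[string]entry
}

func NewCache() *Cache {
	return &Cache{data: make(map[string]entry)}
}

func (c *Cache) Get(key string) (string, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	e, ok := c.data[key]
	if !ok || time.Now().After(e.expiresAt) {
		return "", false
	}
	return e.value, true
}

func (c *Cache) Set(key, value string, ttl time.Duration) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.data[key] = entry{value: value, expiresAt: time.Now().Add(ttl)}
}

// loadUserFromDB is a hypothetical slow database lookup;
// dbHits counts how often the "database" is actually queried.
var dbHits = 0

func loadUserFromDB(id string) string {
	dbHits++
	return "user-" + id
}

// getUser implements cache-aside: hit returns cached data,
// miss falls through to the database and repopulates the cache.
func getUser(c *Cache, id string) string {
	key := "user:" + id
	if v, ok := c.Get(key); ok {
		return v // cache hit
	}
	v := loadUserFromDB(id)  // cache miss: query the database
	c.Set(key, v, time.Hour) // populate cache with TTL
	return v
}

func main() {
	c := NewCache()
	fmt.Println(getUser(c, "1001")) // miss → database
	fmt.Println(getUser(c, "1001")) // hit → cache
	fmt.Println("DB hits:", dbHits)
}
```

With a real Redis client the `Cache` methods map onto `rdb.Get`/`rdb.Set` with the same TTL argument; the surrounding logic is unchanged.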
Production Optimization
Connection Pool Monitoring
// run
package main

import (
	"database/sql"
	"fmt"
	"time"

	_ "github.com/mattn/go-sqlite3"
)

func monitorConnectionPool(db *sql.DB, interval time.Duration, duration time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	timeout := time.After(duration)

	for {
		select {
		case <-ticker.C:
			stats := db.Stats()
			fmt.Printf("[%s] Connection Pool Stats:\n", time.Now().Format("15:04:05"))
			fmt.Printf("  Open Connections: %d\n", stats.OpenConnections)
			fmt.Printf("  In Use: %d\n", stats.InUse)
			fmt.Printf("  Idle: %d\n", stats.Idle)
			fmt.Printf("  Wait Count: %d\n", stats.WaitCount)
			fmt.Printf("  Wait Duration: %v\n", stats.WaitDuration)
			fmt.Println()

		case <-timeout:
			return
		}
	}
}

func main() {
	db, err := sql.Open("sqlite3", "./test.db")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	db.SetMaxOpenConns(5)
	db.SetMaxIdleConns(2)

	// Monitor for 10 seconds
	monitorConnectionPool(db, 1*time.Second, 10*time.Second)
}
Batch Operations
// run
package main

import (
	"database/sql"
	"fmt"
	"strings"
	"time"

	_ "github.com/mattn/go-sqlite3"
)

type User struct {
	Username string
	Email    string
}

func batchInsert(db *sql.DB, users []User) error {
	if len(users) == 0 {
		return nil
	}

	tx, err := db.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback()

	stmt, err := tx.Prepare("INSERT INTO users (username, email) VALUES (?, ?)")
	if err != nil {
		return err
	}
	defer stmt.Close()

	for _, user := range users {
		_, err := stmt.Exec(user.Username, user.Email)
		if err != nil {
			return err
		}
	}

	return tx.Commit()
}

func batchInsertMultiRow(db *sql.DB, users []User) error {
	if len(users) == 0 {
		return nil
	}

	// Build multi-row insert query
	// Note: SQLite caps host parameters per statement (999 in older
	// versions), so chunk very large batches in production.
	valueStrings := make([]string, len(users))
	valueArgs := make([]interface{}, 0, len(users)*2)

	for i, user := range users {
		valueStrings[i] = "(?, ?)"
		valueArgs = append(valueArgs, user.Username, user.Email)
	}

	query := fmt.Sprintf("INSERT INTO users (username, email) VALUES %s",
		strings.Join(valueStrings, ", "))

	_, err := db.Exec(query, valueArgs...)
	return err
}

func main() {
	db, err := sql.Open("sqlite3", "./test.db")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	db.Exec("CREATE TABLE IF NOT EXISTS users (username TEXT, email TEXT)")

	users := make([]User, 1000)
	for i := 0; i < 1000; i++ {
		users[i] = User{
			Username: fmt.Sprintf("user%d", i),
			Email:    fmt.Sprintf("user%d@example.com", i),
		}
	}

	// Method 1: Transaction with prepared statement
	start := time.Now()
	err = batchInsert(db, users)
	if err != nil {
		panic(err)
	}
	fmt.Printf("Prepared-statement batch insert: %v\n", time.Since(start))

	// Clean up
	db.Exec("DELETE FROM users")

	// Method 2: Multi-row insert
	start = time.Now()
	err = batchInsertMultiRow(db, users)
	if err != nil {
		panic(err)
	}
	fmt.Printf("Multi-row batch insert: %v\n", time.Since(start))
}
Best Practices
These best practices will save you from the most common and dangerous database pitfalls. Think of them as the safety rules for operating heavy machinery - follow them religiously.
1. Always Use Prepared Statements
Why: Prevents SQL injection attacks and improves performance.
ā ļø Important: SQL injection is one of the most dangerous web vulnerabilities. Attackers can steal data, delete tables, or take over your entire database through this single mistake.
// ❌ DANGEROUS: SQL injection vulnerability!
username := "admin'; DROP TABLE users; --"
query := "SELECT * FROM users WHERE username = '" + username + "'"
db.Query(query) // Executes: DROP TABLE users!

// ✅ SAFE: Prepared statement with placeholder
db.Query("SELECT * FROM users WHERE username = ?", username)
// Driver escapes special characters automatically
2. Always Use Context for Timeouts
Why: Prevents queries from running forever and enables cancellation.
Real-world Example: Imagine a user closes their browser while a slow query is running. Without context cancellation, that query continues consuming database resources for no reason. With context, it automatically stops.
// ❌ BAD: No timeout, hangs forever on slow query
rows, err := db.Query("SELECT * FROM huge_table")

// ✅ GOOD: Timeout after 5 seconds
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

rows, err := db.QueryContext(ctx, "SELECT * FROM huge_table")
if errors.Is(err, context.DeadlineExceeded) {
	log.Println("Query timed out")
}
3. Always Close Rows
Why: Rows hold database connections. Not closing causes connection pool exhaustion.
// ❌ WRONG: Connection leak!
rows, _ := db.Query("SELECT * FROM users")
for rows.Next() {
	// ... process ...
	return // Forgot to close rows!
}

// ✅ CORRECT: Always defer close
rows, err := db.Query("SELECT * FROM users")
if err != nil {
	return err
}
defer rows.Close() // Guaranteed cleanup

for rows.Next() {
	// ... safe to return early ...
}
4. Check rows.Err() After Iteration
Why: Errors can occur during iteration, not just at Query time.
// ❌ WRONG: Missing iteration errors
rows, _ := db.Query("SELECT * FROM users")
defer rows.Close()

for rows.Next() {
	rows.Scan(&user)
}
// What if iteration failed mid-way?

// ✅ CORRECT: Check for errors
rows, err := db.Query("SELECT * FROM users")
if err != nil {
	return err
}
defer rows.Close()

for rows.Next() {
	if err := rows.Scan(&user); err != nil {
		return err
	}
}

// Check for iteration errors
if err := rows.Err(); err != nil {
	return err
}
5. Use Transactions for Multi-Step Operations
Why: Ensures all-or-nothing execution of related operations.
// ❌ WRONG: Can leave inconsistent state
db.Exec("UPDATE accounts SET balance = balance - 100 WHERE id = 1")
// Crash here → money disappeared!
db.Exec("UPDATE accounts SET balance = balance + 100 WHERE id = 2")

// ✅ CORRECT: Transaction ensures atomicity
tx, _ := db.Begin()

_, err := tx.Exec("UPDATE accounts SET balance = balance - 100 WHERE id = 1")
if err != nil {
	tx.Rollback()
	return err
}

_, err = tx.Exec("UPDATE accounts SET balance = balance + 100 WHERE id = 2")
if err != nil {
	tx.Rollback() // Undo first update
	return err
}

tx.Commit() // Both succeed or neither
6. Configure Connection Pool Properly
Why: Too few connections = slow, too many = database overload.
// ✅ Production configuration
db.SetMaxOpenConns(25)                  // Max connections to DB
db.SetMaxIdleConns(5)                   // Connections to keep open when idle
db.SetConnMaxLifetime(5 * time.Minute)  // Recycle old connections
db.SetConnMaxIdleTime(10 * time.Minute) // Close unused connections

// Rule of thumb:
// MaxOpenConns = <database connection limit> / <number of app instances>
// Example: 100 DB connections, 4 app instances → MaxOpenConns = 25
7. Handle NULL Values Correctly
Why: NULL in database doesn't map to Go zero values cleanly.
// ❌ WRONG: NULL causes scan errors
var email string
db.QueryRow("SELECT email FROM users WHERE id = ?", 1).Scan(&email)
// Error: sql: Scan error converting driver.Value type <nil> to string

// ✅ CORRECT: Use sql.NullString
var email sql.NullString
db.QueryRow("SELECT email FROM users WHERE id = ?", 1).Scan(&email)

if email.Valid {
	fmt.Println("Email:", email.String)
} else {
	fmt.Println("Email is NULL")
}
8. Use sql.ErrNoRows for "Not Found"
Why: Distinguish "no results" from actual errors.
var user User
err := db.QueryRow("SELECT * FROM users WHERE id = ?", 999).Scan(&user.ID, &user.Name)

if err == sql.ErrNoRows {
	return fmt.Errorf("user not found") // Expected case
} else if err != nil {
	return fmt.Errorf("database error: %w", err) // Actual error
}
Common Pitfalls
1. Forgetting to Close Rows
Problem: Most common cause of connection pool exhaustion.
// ❌ WRONG: Rows not closed, connection leak
for i := 0; i < 1000; i++ {
	rows, _ := db.Query("SELECT * FROM users")
	for rows.Next() {
		// process...
	}
	// Missing rows.Close()!
}
// After ~100 iterations: "too many connections" error

// ✅ CORRECT: Always defer close
rows, err := db.Query("SELECT * FROM users")
if err != nil {
	return err
}
defer rows.Close() // Connection returned to pool
2. SQL Injection via String Concatenation
Problem: Security vulnerability allows attackers to execute arbitrary SQL.
// ❌ DANGEROUS: SQL injection!
username := r.FormValue("username") // User input: "admin'; DROP TABLE users; --"
query := "SELECT * FROM users WHERE username = '" + username + "'"
db.Query(query)
// Executes: SELECT * FROM users WHERE username = 'admin'; DROP TABLE users; --'

// ✅ SAFE: Use placeholders
db.Query("SELECT * FROM users WHERE username = ?", username)
// Driver automatically escapes input
3. Opening Database in Hot Path
Problem: Creates new connection pool on every request.
// ❌ TERRIBLE: Opens DB on every request!
func handler(w http.ResponseWriter, r *http.Request) {
	db, _ := sql.Open("postgres", dsn) // New pool!
	defer db.Close()
	// Use db...
}
// Result: No connection pooling, terrible performance

// ✅ CORRECT: Open once at startup
var db *sql.DB

func init() {
	var err error
	db, err = sql.Open("postgres", dsn)
	if err != nil {
		log.Fatal(err)
	}
}

func handler(w http.ResponseWriter, r *http.Request) {
	// Reuse global db
}
Different Database Placeholders
Different databases use different placeholder syntax:
// PostgreSQL uses $1, $2, $3
query := "INSERT INTO users (username, email) VALUES ($1, $2)"

// MySQL and SQLite use ?
query = "INSERT INTO users (username, email) VALUES (?, ?)"

// Oracle uses :1, :2, :3
query = "INSERT INTO users (username, email) VALUES (:1, :2)"
Performance Tips
Use Query vs QueryRow
// Use QueryRow when expecting single result
var name string
db.QueryRow("SELECT name FROM users WHERE id = ?", 1).Scan(&name)

// Use Query for multiple results
rows, _ := db.Query("SELECT name FROM users")
defer rows.Close()
Limit Result Sets
// Always use LIMIT when you don't need all results
rows, _ := db.Query("SELECT * FROM users LIMIT 100")
Use Indexes
CREATE INDEX idx_users_username ON users(username);
CREATE INDEX idx_posts_user_id ON posts(user_id);
Testing Database Code
Testing database code properly is essential for reliable applications. Here are production-proven patterns for database testing.
Testing Strategies
Three approaches to database testing:
- In-Memory Database (SQLite) - Fast, isolated, good for unit tests
- Test Database - Real database instance for integration tests
- Mocking - Mock database interface for unit testing business logic
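The third strategy, mocking, works by having business logic depend on a small interface for data access, then substituting an in-memory implementation in tests. A minimal sketch (the `UserStore` and `MockUserStore` names are illustrative, not from any library):

```go
package main

import (
	"errors"
	"fmt"
)

type User struct {
	ID       int
	Username string
	Email    string
}

// UserStore is the interface business logic depends on;
// production code backs it with *sql.DB, tests with a mock.
type UserStore interface {
	Create(username, email string) (int, error)
	GetByID(id int) (*User, error)
}

// MockUserStore is an in-memory implementation for unit tests.
type MockUserStore struct {
	users  map[int]*User
	nextID int
}

func NewMockUserStore() *MockUserStore {
	return &MockUserStore{users: make(map[int]*User), nextID: 1}
}

func (m *MockUserStore) Create(username, email string) (int, error) {
	if username == "" || email == "" {
		return 0, errors.New("username and email are required")
	}
	id := m.nextID
	m.nextID++
	m.users[id] = &User{ID: id, Username: username, Email: email}
	return id, nil
}

func (m *MockUserStore) GetByID(id int) (*User, error) {
	u, ok := m.users[id]
	if !ok {
		return nil, errors.New("user not found")
	}
	return u, nil
}

// greetUser is business logic under test: it only sees the interface,
// so the test never touches a real database.
func greetUser(store UserStore, id int) (string, error) {
	u, err := store.GetByID(id)
	if err != nil {
		return "", err
	}
	return "Hello, " + u.Username, nil
}

func main() {
	var store UserStore = NewMockUserStore()
	id, _ := store.Create("alice", "alice@example.com")
	msg, _ := greetUser(store, id)
	fmt.Println(msg) // Hello, alice
}
```

The trade-off: mocks keep tests fast and deterministic, but they do not exercise real SQL, so pair them with the SQLite or test-database approaches for the data layer itself.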
Using SQLite for Testing
SQLite in-memory databases are perfect for fast, isolated tests:
// run
package main

import (
	"database/sql"
	"testing"

	_ "github.com/mattn/go-sqlite3"
)

func setupTestDB(t *testing.T) *sql.DB {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		t.Fatalf("Failed to open test database: %v", err)
	}

	// Create schema
	schema := `
	CREATE TABLE users (
		id INTEGER PRIMARY KEY AUTOINCREMENT,
		username TEXT NOT NULL,
		email TEXT NOT NULL,
		created_at DATETIME DEFAULT CURRENT_TIMESTAMP
	);`

	if _, err := db.Exec(schema); err != nil {
		t.Fatalf("Failed to create schema: %v", err)
	}

	return db
}

func TestUserRepository_Create(t *testing.T) {
	db := setupTestDB(t)
	defer db.Close()

	repo := NewUserRepository(db)

	// Test creating a user
	id, err := repo.Create("testuser", "test@example.com")
	if err != nil {
		t.Fatalf("Create failed: %v", err)
	}

	if id == 0 {
		t.Error("Expected non-zero ID")
	}

	// Verify user was created
	user, err := repo.GetByID(int(id))
	if err != nil {
		t.Fatalf("GetByID failed: %v", err)
	}

	if user.Username != "testuser" {
		t.Errorf("Username = %q, want %q", user.Username, "testuser")
	}
}

func TestUserRepository_Update(t *testing.T) {
	db := setupTestDB(t)
	defer db.Close()

	repo := NewUserRepository(db)

	// Create test user
	id, _ := repo.Create("testuser", "old@example.com")

	// Update email
	err := repo.Update(int(id), "new@example.com")
	if err != nil {
		t.Fatalf("Update failed: %v", err)
	}

	// Verify update
	user, _ := repo.GetByID(int(id))
	if user.Email != "new@example.com" {
		t.Errorf("Email = %q, want %q", user.Email, "new@example.com")
	}
}

func main() {
	println("Run with: go test")
}
Table-Driven Database Tests
Testing multiple scenarios efficiently:
func TestUserRepository_CRUD(t *testing.T) {
	tests := []struct {
		name      string
		username  string
		email     string
		wantError bool
	}{
		{"valid user", "alice", "alice@example.com", false},
		{"another valid user", "bob", "bob@example.com", false},
		{"empty username", "", "test@example.com", true},
		{"empty email", "charlie", "", true},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			db := setupTestDB(t)
			defer db.Close()

			repo := NewUserRepository(db)

			_, err := repo.Create(tt.username, tt.email)

			if tt.wantError && err == nil {
				t.Error("Expected error but got nil")
			}
			if !tt.wantError && err != nil {
				t.Errorf("Unexpected error: %v", err)
			}
		})
	}
}
Testing Transactions
Verify transaction rollback behavior:
func TestTransaction_Rollback(t *testing.T) {
	db := setupTestDB(t)
	defer db.Close()

	// Start transaction
	tx, err := db.Begin()
	if err != nil {
		t.Fatal(err)
	}

	// Insert data in transaction
	_, err = tx.Exec("INSERT INTO users (username, email) VALUES (?, ?)", "testuser", "test@example.com")
	if err != nil {
		t.Fatal(err)
	}

	// Rollback transaction
	tx.Rollback()

	// Verify data was NOT persisted
	var count int
	db.QueryRow("SELECT COUNT(*) FROM users").Scan(&count)

	if count != 0 {
		t.Errorf("Expected 0 users after rollback, got %d", count)
	}
}
Test Fixtures and Helpers
Reusable test data and setup:
type TestFixture struct {
	DB    *sql.DB
	Users []User
}

func setupFixture(t *testing.T) *TestFixture {
	db := setupTestDB(t)

	// Create test users
	users := []User{
		{Username: "alice", Email: "alice@example.com"},
		{Username: "bob", Email: "bob@example.com"},
		{Username: "carol", Email: "carol@example.com"},
	}

	for i := range users {
		result, _ := db.Exec("INSERT INTO users (username, email) VALUES (?, ?)",
			users[i].Username, users[i].Email)
		id, _ := result.LastInsertId()
		users[i].ID = int(id)
	}

	return &TestFixture{
		DB:    db,
		Users: users,
	}
}

func TestUserRepository_GetAll(t *testing.T) {
	fixture := setupFixture(t)
	defer fixture.DB.Close()

	repo := NewUserRepository(fixture.DB)

	users, err := repo.GetAll()
	if err != nil {
		t.Fatal(err)
	}

	if len(users) != len(fixture.Users) {
		t.Errorf("Got %d users, want %d", len(users), len(fixture.Users))
	}
}
Database Security
Database security is critical in production systems. Here are essential patterns for secure database operations.
SQL Injection Prevention
Always use parameterized queries:
// ❌ DANGEROUS: SQL injection vulnerability
func getUserByUsername(db *sql.DB, username string) (*User, error) {
	query := "SELECT * FROM users WHERE username = '" + username + "'"
	// If username = "admin'; DROP TABLE users; --"
	// Query becomes: SELECT * FROM users WHERE username = 'admin'; DROP TABLE users; --'
	// This will delete your entire users table!

	var user User
	err := db.QueryRow(query).Scan(&user.ID, &user.Username, &user.Email)
	return &user, err
}

// ✅ SAFE: Parameterized query
func getUserByUsernameSafe(db *sql.DB, username string) (*User, error) {
	query := "SELECT id, username, email FROM users WHERE username = ?"
	// Driver automatically escapes username - safe from injection

	var user User
	err := db.QueryRow(query, username).Scan(&user.ID, &user.Username, &user.Email)
	return &user, err
}
Real-World Attack Example:
// Attacker input: admin' OR '1'='1
// Vulnerable query builds:
SELECT * FROM users WHERE username = 'admin' OR '1'='1'
// This returns ALL users! Authentication bypassed.

// Safe parameterized query:
SELECT * FROM users WHERE username = ?
// Driver treats entire input as literal string
// Searches for user with username "admin' OR '1'='1" (no match, safe)
Credential Management
Never hardcode database credentials:
// ❌ TERRIBLE: Hardcoded credentials
db, err := sql.Open("postgres", "user=admin password=secret123 host=db")

// ✅ GOOD: Use environment variables
import "os"

dsn := fmt.Sprintf("user=%s password=%s host=%s dbname=%s",
	os.Getenv("DB_USER"),
	os.Getenv("DB_PASSWORD"),
	os.Getenv("DB_HOST"),
	os.Getenv("DB_NAME"),
)
db, err := sql.Open("postgres", dsn)

// ✅ BETTER: Use configuration file (not committed to git)
type DBConfig struct {
	User     string
	Password string
	Host     string
	DBName   string
}

func loadConfig() (*DBConfig, error) {
	// Load from config.json (in .gitignore)
	data, err := os.ReadFile("config.json")
	// ...
}
Least Privilege Principle
Create database users with minimal permissions:
-- ❌ BAD: Application uses root/admin user
-- Application has DROP DATABASE, CREATE USER, etc.
-- If compromised, attacker has full control

-- ✅ GOOD: Create limited user for application
CREATE USER app_user WITH PASSWORD 'secure_password';

-- Grant only necessary permissions
GRANT SELECT, INSERT, UPDATE, DELETE ON users TO app_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON orders TO app_user;

-- No ALTER TABLE, DROP TABLE, CREATE DATABASE, etc.
Row-Level Security
Filter data based on user context:
// Implement row-level security in application
func getUserOrders(db *sql.DB, userID int) ([]Order, error) {
	// Only return orders belonging to this user
	query := "SELECT * FROM orders WHERE user_id = ?"
	rows, err := db.Query(query, userID)
	// ...
}

// Multi-tenant applications
func getTenantData(db *sql.DB, tenantID string, userID int) ([]Data, error) {
	// Ensure user can only access their tenant's data
	query := `
	SELECT d.* FROM data d
	JOIN users u ON d.user_id = u.id
	WHERE u.tenant_id = ? AND d.user_id = ?
	`
	rows, err := db.Query(query, tenantID, userID)
	// ...
}
Sensitive Data Encryption
Encrypt sensitive data at rest:
import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"encoding/base64"
)

type SecureUser struct {
	ID       int
	Username string
	Email    string
	SSN      string // Sensitive - must encrypt
}

func encryptSSN(ssn string, key []byte) (string, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return "", err
	}

	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return "", err
	}

	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return "", err
	}

	ciphertext := gcm.Seal(nonce, nonce, []byte(ssn), nil)
	return base64.StdEncoding.EncodeToString(ciphertext), nil
}

func (u *SecureUser) SaveWithEncryption(db *sql.DB, encryptionKey []byte) error {
	encryptedSSN, err := encryptSSN(u.SSN, encryptionKey)
	if err != nil {
		return err
	}

	query := "INSERT INTO users (username, email, ssn) VALUES (?, ?, ?)"
	_, err = db.Exec(query, u.Username, u.Email, encryptedSSN)
	return err
}
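The helper above only encrypts; reading the value back needs the matching decrypt step, which splits off the nonce that `gcm.Seal` prepended to the ciphertext. A self-contained sketch of both directions (the key handling in `main` is illustrative only):

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"encoding/base64"
	"errors"
	"fmt"
)

// encryptSSN mirrors the article's helper: AES-GCM with the nonce
// prepended to the ciphertext, base64-encoded for storage in a TEXT column.
func encryptSSN(ssn string, key []byte) (string, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return "", err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return "", err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return "", err
	}
	ciphertext := gcm.Seal(nonce, nonce, []byte(ssn), nil)
	return base64.StdEncoding.EncodeToString(ciphertext), nil
}

// decryptSSN reverses encryptSSN: decode, split off the nonce, open.
// GCM authenticates the data, so tampering or a wrong key returns an error.
func decryptSSN(encoded string, key []byte) (string, error) {
	data, err := base64.StdEncoding.DecodeString(encoded)
	if err != nil {
		return "", err
	}
	block, err := aes.NewCipher(key)
	if err != nil {
		return "", err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return "", err
	}
	if len(data) < gcm.NonceSize() {
		return "", errors.New("ciphertext too short")
	}
	nonce, ciphertext := data[:gcm.NonceSize()], data[gcm.NonceSize():]
	plaintext, err := gcm.Open(nil, nonce, ciphertext, nil)
	if err != nil {
		return "", err
	}
	return string(plaintext), nil
}

func main() {
	key := make([]byte, 32) // AES-256; in production load from a KMS or env var
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	enc, _ := encryptSSN("123-45-6789", key)
	dec, _ := decryptSSN(enc, key)
	fmt.Println(dec) // 123-45-6789
}
```

Note the design choice: because GCM is authenticated, a wrong key fails loudly instead of silently returning garbage, which is exactly what you want for stored PII.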
Audit Logging
Track all database modifications:
type AuditLog struct {
	ID        int
	UserID    int
	Action    string // INSERT, UPDATE, DELETE
	TableName string
	RecordID  int
	Timestamp time.Time
	Changes   string // JSON of changes
}

func logAudit(db *sql.DB, userID int, action, tableName string, recordID int, changes string) error {
	query := `
	INSERT INTO audit_logs (user_id, action, table_name, record_id, changes)
	VALUES (?, ?, ?, ?, ?)
	`
	_, err := db.Exec(query, userID, action, tableName, recordID, changes)
	return err
}

// Usage in repository
func (r *UserRepository) Update(userID int, email string) error {
	// Get old value for audit
	oldUser, _ := r.GetByID(userID)

	// Perform update
	query := "UPDATE users SET email = ? WHERE id = ?"
	_, err := r.db.Exec(query, email, userID)
	if err != nil {
		return err
	}

	// Log the change
	changes := fmt.Sprintf(`{"old_email": "%s", "new_email": "%s"}`, oldUser.Email, email)
	logAudit(r.db, userID, "UPDATE", "users", userID, changes)

	return nil
}
Error Handling Strategies
Robust error handling prevents data corruption and improves application reliability.
Categorizing Database Errors
Different errors require different handling:
import (
	"database/sql"
	"errors"
	"fmt"

	"github.com/lib/pq" // PostgreSQL specific errors
)

func handleDBError(err error) error {
	// No error
	if err == nil {
		return nil
	}

	// Record not found (expected, not an error)
	if errors.Is(err, sql.ErrNoRows) {
		return ErrUserNotFound
	}

	// Connection errors (retry)
	if errors.Is(err, sql.ErrConnDone) {
		return ErrConnectionClosed
	}

	// PostgreSQL specific errors
	var pqErr *pq.Error
	if errors.As(err, &pqErr) {
		switch pqErr.Code {
		case "23505": // unique_violation
			return ErrDuplicateKey
		case "23503": // foreign_key_violation
			return ErrForeignKeyViolation
		case "23502": // not_null_violation
			return ErrNullConstraint
		}
	}

	// Unknown error
	return fmt.Errorf("database error: %w", err)
}
Custom Error Types
Create domain-specific errors:
// Define custom error types
var (
	ErrUserNotFound        = errors.New("user not found")
	ErrDuplicateEmail      = errors.New("email already exists")
	ErrInvalidUserID       = errors.New("invalid user ID")
	ErrConnectionClosed    = errors.New("database connection closed")
	ErrForeignKeyViolation = errors.New("foreign key constraint violated")
)

// Usage in repository
func (r *UserRepository) GetByID(id int) (*User, error) {
	if id <= 0 {
		return nil, ErrInvalidUserID
	}

	var user User
	query := "SELECT id, username, email FROM users WHERE id = ?"
	err := r.db.QueryRow(query, id).Scan(&user.ID, &user.Username, &user.Email)

	if errors.Is(err, sql.ErrNoRows) {
		return nil, ErrUserNotFound
	}
	if err != nil {
		return nil, fmt.Errorf("failed to get user: %w", err)
	}

	return &user, nil
}

// Caller can check specific errors
user, err := repo.GetByID(123)
if errors.Is(err, ErrUserNotFound) {
	// Handle not found case
	http.NotFound(w, r)
	return
}
if err != nil {
	// Handle other errors
	http.Error(w, "Internal server error", 500)
	return
}
Retry Logic with Exponential Backoff
Automatically retry transient failures:
import (
	"database/sql"
	"errors"
	"fmt"
	"math"
	"strings"
	"time"
)

func executeWithRetry(operation func() error, maxRetries int) error {
	var err error

	for i := 0; i < maxRetries; i++ {
		err = operation()
		if err == nil {
			return nil
		}

		// Don't retry application errors, only transient DB errors
		if !isTransientError(err) {
			return err
		}

		// Exponential backoff: 100ms, 200ms, 400ms, 800ms...
		backoff := time.Duration(math.Pow(2, float64(i))) * 100 * time.Millisecond
		time.Sleep(backoff)
	}

	return fmt.Errorf("operation failed after %d retries: %w", maxRetries, err)
}

func isTransientError(err error) bool {
	// These errors might succeed if retried
	return errors.Is(err, sql.ErrConnDone) ||
		strings.Contains(err.Error(), "connection refused") ||
		strings.Contains(err.Error(), "timeout")
}

// Usage
err := executeWithRetry(func() error {
	_, err := repo.Create("testuser", "test@example.com")
	return err
}, 3)
Circuit Breaker Pattern
Prevent cascading failures:
type CircuitBreaker struct {
	maxFailures int
	failures    int
	lastFailure time.Time
	timeout     time.Duration
	state       string // "closed", "open", "half-open"
}

func NewCircuitBreaker(maxFailures int, timeout time.Duration) *CircuitBreaker {
	return &CircuitBreaker{
		maxFailures: maxFailures,
		timeout:     timeout,
		state:       "closed",
	}
}

func (cb *CircuitBreaker) Execute(operation func() error) error {
	if cb.state == "open" {
		if time.Since(cb.lastFailure) > cb.timeout {
			cb.state = "half-open"
		} else {
			return errors.New("circuit breaker is open")
		}
	}

	err := operation()

	if err != nil {
		cb.failures++
		cb.lastFailure = time.Now()

		if cb.failures >= cb.maxFailures {
			cb.state = "open"
		}

		return err
	}

	// Success - reset
	cb.failures = 0
	cb.state = "closed"
	return nil
}

// Usage
var dbCircuitBreaker = NewCircuitBreaker(5, 30*time.Second)

func queryWithCircuitBreaker(db *sql.DB, query string) error {
	return dbCircuitBreaker.Execute(func() error {
		_, err := db.Exec(query)
		return err
	})
}
Migration Patterns
Database migrations manage schema changes over time in production systems.
Migration Basics
Migrations are versioned schema changes:
type Migration struct {
	Version int
	Name    string
	Up      string // SQL to apply migration
	Down    string // SQL to rollback migration
}

var migrations = []Migration{
	{
		Version: 1,
		Name:    "create_users_table",
		Up: `
		CREATE TABLE users (
			id INTEGER PRIMARY KEY AUTOINCREMENT,
			username TEXT NOT NULL UNIQUE,
			email TEXT NOT NULL UNIQUE,
			created_at DATETIME DEFAULT CURRENT_TIMESTAMP
		);
		`,
		Down: `DROP TABLE users;`,
	},
	{
		Version: 2,
		Name:    "add_users_bio_column",
		Up:      `ALTER TABLE users ADD COLUMN bio TEXT;`,
		Down:    `ALTER TABLE users DROP COLUMN bio;`,
	},
	{
		Version: 3,
		Name:    "create_posts_table",
		Up: `
		CREATE TABLE posts (
			id INTEGER PRIMARY KEY AUTOINCREMENT,
			user_id INTEGER NOT NULL,
			title TEXT NOT NULL,
			content TEXT,
			created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
			FOREIGN KEY (user_id) REFERENCES users(id)
		);
		`,
		Down: `DROP TABLE posts;`,
	},
}
Migration Tracking Table
Track which migrations have been applied:
func ensureMigrationTable(db *sql.DB) error {
	query := `
	CREATE TABLE IF NOT EXISTS schema_migrations (
		version INTEGER PRIMARY KEY,
		applied_at DATETIME DEFAULT CURRENT_TIMESTAMP
	);
	`
	_, err := db.Exec(query)
	return err
}

func getCurrentVersion(db *sql.DB) (int, error) {
	var version int
	// COALESCE covers the empty-table case: MAX() yields NULL, not ErrNoRows
	err := db.QueryRow("SELECT COALESCE(MAX(version), 0) FROM schema_migrations").Scan(&version)
	return version, err
}

func recordMigration(db *sql.DB, version int) error {
	_, err := db.Exec("INSERT INTO schema_migrations (version) VALUES (?)", version)
	return err
}
Running Migrations
Apply pending migrations:
func migrate(db *sql.DB) error {
	if err := ensureMigrationTable(db); err != nil {
		return fmt.Errorf("failed to create migrations table: %w", err)
	}

	currentVersion, err := getCurrentVersion(db)
	if err != nil {
		return fmt.Errorf("failed to get current version: %w", err)
	}

	for _, migration := range migrations {
		if migration.Version <= currentVersion {
			continue // Already applied
		}

		fmt.Printf("Applying migration %d: %s\n", migration.Version, migration.Name)

		// Run migration in transaction
		tx, err := db.Begin()
		if err != nil {
			return err
		}

		if _, err := tx.Exec(migration.Up); err != nil {
			tx.Rollback()
			return fmt.Errorf("migration %d failed: %w", migration.Version, err)
		}

		// Record the migration inside the same transaction
		if _, err := tx.Exec("INSERT INTO schema_migrations (version) VALUES (?)", migration.Version); err != nil {
			tx.Rollback()
			return fmt.Errorf("failed to record migration %d: %w", migration.Version, err)
		}

		if err := tx.Commit(); err != nil {
			return fmt.Errorf("failed to commit migration %d: %w", migration.Version, err)
		}

		fmt.Printf("✓ Migration %d applied successfully\n", migration.Version)
	}

	return nil
}
Migration Best Practices
Guidelines for safe migrations:
1// ā
GOOD PRACTICES:
2
3// 1. Always make migrations backward compatible when possible
4// Add nullable columns first, then add NOT NULL in separate migration
5ALTER TABLE users ADD COLUMN bio TEXT; // Migration 1 (safe)
6// Later, after deploying code that sets bio:
7ALTER TABLE users ALTER COLUMN bio SET NOT NULL; // Migration 2
8
9// 2. Use transactions for migrations
10tx, _ := db.Begin()
11tx.Exec(migration.Up)
12tx.Commit()
13
14// 3. Test migrations on production-like data
15// Create staging environment with production data copy
16
17// 4. Keep migrations small and focused
18// One logical change per migration
19
20// 5. Never modify existing migrations after they're deployed
21// Create new migration to fix issues
22
23// ❌ BAD PRACTICES:
24
25// Don't drop columns immediately (break old code)
26ALTER TABLE users DROP COLUMN deprecated_field;
27
28// Don't make breaking changes without deprecation period
29// 1. Mark column as deprecated
30// 2. Deploy code that doesn't use it
31// 3. Wait for all instances to update
32// 4. Then drop column in new migration
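The four-step column removal above maps onto two separate migrations shipped in different deploys. Here is a minimal sketch; the `Migration` struct is re-declared so the example is self-contained (it mirrors the fields used elsewhere in this article), and the version numbers and SQL are illustrative:

```go
package main

import "fmt"

// Migration mirrors the struct used elsewhere in this article
// (re-declared here so this sketch compiles on its own).
type Migration struct {
	Version int
	Name    string
	Up      string
	Down    string
}

// Phase 1 ships alongside code that stops reading the column;
// phase 2 ships only after every instance runs that code.
var dropDeprecatedField = []Migration{
	{
		Version: 7,
		Name:    "stop_using_deprecated_field",
		Up:      "-- no schema change: marks the point where code stops using the column",
		Down:    "",
	},
	{
		Version: 8,
		Name:    "drop_deprecated_field",
		Up:      "ALTER TABLE users DROP COLUMN deprecated_field",
		Down:    "ALTER TABLE users ADD COLUMN deprecated_field TEXT",
	},
}

func main() {
	for _, m := range dropDeprecatedField {
		fmt.Printf("migration %d: %s\n", m.Version, m.Name)
	}
}
```

Because the two phases are separate migrations, an instance that is one deploy behind never sees a schema it cannot read.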
Practice Exercises
Exercise 1: User Repository
Learning Objectives: Implement complete CRUD operations, build repository pattern with database abstraction, handle parameterized queries safely, and understand database connection management and error handling.
Difficulty: ⭐⭐⭐ Intermediate
Time Estimate: 30 minutes
Build a complete user repository that implements all CRUD operations with proper error handling, parameterized queries, and database connection management. This exercise teaches you to implement the repository pattern for data access, use prepared statements for security, handle different query types, and build robust database operations with proper error handling.
Real-World Context: Repository patterns are fundamental in application architecture for separating business logic from data access logic. Every web application, API, or service that works with user data needs a robust data access layer. Understanding CRUD operations and repository patterns is essential for building scalable, maintainable applications that can reliably store and retrieve data.
Solution
1// run
2package main
3
4import (
5 "database/sql"
6 "fmt"
7 "time"
8
9 _ "github.com/mattn/go-sqlite3"
10)
11
12type User struct {
13 ID int
14 Username string
15 Email string
16 CreatedAt time.Time
17}
18
19type UserRepository struct {
20 db *sql.DB
21}
22
23func NewUserRepository(db *sql.DB) *UserRepository {
24 return &UserRepository{db: db}
25}
26
27func (r *UserRepository) Create(username, email string) (int64, error) {
28 query := "INSERT INTO users (username, email) VALUES (?, ?)"
29 result, err := r.db.Exec(query, username, email)
30 if err != nil {
31 return 0, err
32 }
33 return result.LastInsertId()
34}
35
36func (r *UserRepository) GetByID(id int) (*User, error) {
37 query := "SELECT id, username, email, created_at FROM users WHERE id = ?"
38 user := &User{}
39
40 err := r.db.QueryRow(query, id).Scan(&user.ID, &user.Username, &user.Email, &user.CreatedAt)
41 if err != nil {
42 return nil, err
43 }
44
45 return user, nil
46}
47
48func (r *UserRepository) GetAll() ([]User, error) {
49 query := "SELECT id, username, email, created_at FROM users"
50 rows, err := r.db.Query(query)
51 if err != nil {
52 return nil, err
53 }
54 defer rows.Close()
55
56 var users []User
57 for rows.Next() {
58 var user User
59 err := rows.Scan(&user.ID, &user.Username, &user.Email, &user.CreatedAt)
60 if err != nil {
61 return nil, err
62 }
63 users = append(users, user)
64 }
65
66 return users, rows.Err()
67}
68
69func (r *UserRepository) Update(id int, email string) error {
70 query := "UPDATE users SET email = ? WHERE id = ?"
71 result, err := r.db.Exec(query, email, id)
72 if err != nil {
73 return err
74 }
75
76 rows, err := result.RowsAffected()
77 if err != nil {
78 return err
79 }
80
81 if rows == 0 {
82 return fmt.Errorf("user not found")
83 }
84
85 return nil
86}
87
88func (r *UserRepository) Delete(id int) error {
89 query := "DELETE FROM users WHERE id = ?"
90 result, err := r.db.Exec(query, id)
91 if err != nil {
92 return err
93 }
94
95 rows, err := result.RowsAffected()
96 if err != nil {
97 return err
98 }
99
100 if rows == 0 {
101 return fmt.Errorf("user not found")
102 }
103
104 return nil
105}
106
107func main() {
108 db, err := sql.Open("sqlite3", "./test.db")
109 if err != nil {
110 panic(err)
111 }
112 defer db.Close()
113
114 repo := NewUserRepository(db)
115
116 // Create
117 id, err := repo.Create("testuser", "test@example.com")
118 if err != nil {
119 panic(err)
120 }
121 fmt.Printf("Created user with ID: %d\n", id)
122
123 // Get by ID
124 user, err := repo.GetByID(int(id))
125 if err != nil {
126 panic(err)
127 }
128 fmt.Printf("Retrieved user: %+v\n", user)
129
130 // Update
131 err = repo.Update(int(id), "newemail@example.com")
132 if err != nil {
133 panic(err)
134 }
135 fmt.Println("Updated user email")
136
137 // Get all
138 users, err := repo.GetAll()
139 if err != nil {
140 panic(err)
141 }
142 fmt.Printf("All users: %d\n", len(users))
143
144 // Delete
145 err = repo.Delete(int(id))
146 if err != nil {
147 panic(err)
148 }
149 fmt.Println("Deleted user")
150}
Exercise 2: Connection Pool Monitor
Learning Objectives: Monitor database connection pool performance, understand connection pooling behavior, implement real-time statistics collection, and learn to optimize database connection management for production systems.
Difficulty: ⭐⭐⭐ Intermediate
Time Estimate: 25 minutes
Build a connection pool monitoring system that tracks database connection statistics in real-time, including open connections, idle connections, wait times, and pool utilization metrics. This exercise teaches you to understand database/sql.Stats, implement periodic monitoring with goroutines, visualize connection pool behavior under load, and identify performance bottlenecks in database access patterns.
Real-World Context: Connection pool monitoring is crucial for production applications to optimize performance and prevent connection exhaustion. Database connections are expensive resources, and proper pool management can make the difference between a scalable application and one that crashes under load. Understanding connection pool behavior is essential for DevOps engineers and backend developers working on high-traffic systems.
Solution
1// run
2package main
3
4import (
5 "database/sql"
6 "fmt"
7 "time"
8
9 _ "github.com/mattn/go-sqlite3"
10)
11
12func monitorConnectionPool(db *sql.DB, interval time.Duration, duration time.Duration) {
13 ticker := time.NewTicker(interval)
14 defer ticker.Stop()
15
16 timeout := time.After(duration)
17
18 for {
19 select {
20 case <-ticker.C:
21 stats := db.Stats()
22 fmt.Printf("[%s] Connection Pool Stats:\n", time.Now().Format("15:04:05"))
23 fmt.Printf(" Open Connections: %d\n", stats.OpenConnections)
24 fmt.Printf(" In Use: %d\n", stats.InUse)
25 fmt.Printf(" Idle: %d\n", stats.Idle)
26 fmt.Printf(" Wait Count: %d\n", stats.WaitCount)
27 fmt.Printf(" Wait Duration: %v\n", stats.WaitDuration)
28 fmt.Println()
29
30 case <-timeout:
31 return
32 }
33 }
34}
35
36func simulateLoad(db *sql.DB) {
37 for i := 0; i < 10; i++ {
38 go func(n int) {
39 var count int
40 err := db.QueryRow("SELECT COUNT(*) FROM users").Scan(&count)
41 if err != nil {
42 fmt.Printf("Query %d error: %v\n", n, err)
43 }
44 time.Sleep(2 * time.Second)
45 }(i)
46 }
47}
48
49func main() {
50 db, err := sql.Open("sqlite3", "./test.db")
51 if err != nil {
52 panic(err)
53 }
54 defer db.Close()
55
56 db.SetMaxOpenConns(5)
57 db.SetMaxIdleConns(2)
58
59 // Simulate load
60 simulateLoad(db)
61
62 // Monitor for 10 seconds
63 monitorConnectionPool(db, 1*time.Second, 10*time.Second)
64}
Exercise 3: Batch Insert with Error Recovery
Learning Objectives: Implement high-performance batch operations, handle partial failures gracefully, build error recovery mechanisms, and optimize database performance for bulk data operations.
Difficulty: ⭐⭐⭐⭐ Advanced
Time Estimate: 35 minutes
Build a robust batch insertion system that can efficiently insert large datasets while handling individual record failures without losing the entire operation. This exercise teaches you to implement transaction-based batch operations, create error recovery mechanisms for failed records, optimize database performance with bulk operations, and build retry logic for partial failures.
Real-World Context: Batch operations are critical for data processing systems, ETL pipelines, and applications that need to import large amounts of data. Data migration tools, analytics platforms, and backup systems all rely on efficient batch processing. Understanding batch operations with error recovery is essential for building data-intensive applications that can handle real-world data quality issues and network failures.
Solution
1// run
2package main
3
4import (
5 "database/sql"
6 "fmt"
7 "log"
8 "strings"
9
10 _ "github.com/mattn/go-sqlite3"
11)
12
13type BatchInsertResult struct {
14 SuccessCount int
15 FailedItems []FailedItem
16}
17
18type FailedItem struct {
19 Index int
20 Item interface{}
21 Error error
22}
23
24func batchInsertWithRecovery(db *sql.DB, users []User, batchSize int) (*BatchInsertResult, error) {
25 result := &BatchInsertResult{
26 FailedItems: make([]FailedItem, 0),
27 }
28
29 for i := 0; i < len(users); i += batchSize {
30 end := i + batchSize
31 if end > len(users) {
32 end = len(users)
33 }
34
35 batch := users[i:end]
36 err := insertBatch(db, batch)
37 if err != nil {
38 // Batch failed, try individual inserts
39 log.Printf("Batch %d-%d failed, trying individual inserts: %v", i, end-1, err)
40
41 for j, user := range batch {
42 err := insertIndividual(db, user)
43 if err != nil {
44 result.FailedItems = append(result.FailedItems, FailedItem{
45 Index: i + j,
46 Item: user,
47 Error: err,
48 })
49 } else {
50 result.SuccessCount++
51 }
52 }
53 } else {
54 result.SuccessCount += len(batch)
55 }
56 }
57
58 return result, nil
59}
60
61func insertBatch(db *sql.DB, users []User) error {
62 if len(users) == 0 {
63 return nil
64 }
65
66 // Build multi-row insert query
67 valueStrings := make([]string, len(users))
68 valueArgs := make([]interface{}, 0, len(users)*2)
69
70 for i, user := range users {
71 valueStrings[i] = "(?, ?)"
72 valueArgs = append(valueArgs, user.Username, user.Email)
73 }
74
75 query := fmt.Sprintf("INSERT INTO users (username, email) VALUES %s",
76 strings.Join(valueStrings, ", "))
77
78 _, err := db.Exec(query, valueArgs...)
79 return err
80}
81
82func insertIndividual(db *sql.DB, user User) error {
83 query := "INSERT INTO users (username, email) VALUES (?, ?)"
84 _, err := db.Exec(query, user.Username, user.Email)
85 return err
86}
87
88type User struct {
89 Username string
90 Email string
91}
92
93func main() {
94 db, err := sql.Open("sqlite3", "./test.db")
95 if err != nil {
96 panic(err)
97 }
98 defer db.Close()
99
100 // Create test data with some potential duplicates
101 users := make([]User, 100)
102 for i := 0; i < 100; i++ {
103 users[i] = User{
104 Username: fmt.Sprintf("user%d", i),
105 Email: fmt.Sprintf("user%d@example.com", i),
106 }
107 }
108
109 // Add a duplicate to test error recovery
110 users[50] = User{Username: "user0", Email: "duplicate@example.com"}
111
112 result, err := batchInsertWithRecovery(db, users, 20)
113 if err != nil {
114 panic(err)
115 }
116
117 fmt.Printf("Batch insert completed:\n")
118 fmt.Printf(" Successful: %d\n", result.SuccessCount)
119 fmt.Printf(" Failed: %d\n", len(result.FailedItems))
120
121 if len(result.FailedItems) > 0 {
122 fmt.Println("Failed items:")
123 for _, failed := range result.FailedItems {
124 fmt.Printf(" Index %d (%v): %v\n", failed.Index, failed.Item, failed.Error)
125 }
126 }
127}
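The multi-row INSERT above binds two placeholders per row, and every driver caps the total number of bind parameters per statement (older SQLite builds default to 999 variables via `SQLITE_MAX_VARIABLE_NUMBER`; newer builds and other databases allow far more). Rather than guessing a `batchSize`, you can derive it from the limit — a sketch, with the limit value as an assumption to verify against your driver version:

```go
package main

import "fmt"

// maxBatchRows caps a multi-row INSERT so rows*paramsPerRow never
// exceeds the driver's bind-parameter limit. 999 matches older
// SQLite defaults (SQLITE_MAX_VARIABLE_NUMBER); check your
// driver/version before relying on a specific number.
func maxBatchRows(paramLimit, paramsPerRow int) int {
	if paramsPerRow <= 0 {
		return 0
	}
	return paramLimit / paramsPerRow
}

func main() {
	const sqliteDefaultLimit = 999
	rows := maxBatchRows(sqliteDefaultLimit, 2) // username + email per row
	fmt.Println(rows)                           // 499
}
```

Passing the computed value as `batchSize` to `batchInsertWithRecovery` keeps the batch path from failing outright on wide tables.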
Exercise 4: Simple Migration System
Learning Objectives: Implement database schema migrations, build version control for database changes, create rollback mechanisms for failed migrations, and understand database evolution patterns for production deployments.
Difficulty: ⭐⭐⭐⭐ Advanced
Time Estimate: 40 minutes
Create a database migration system that can apply schema changes in version order, track migration history, and provide rollback capabilities for safe database evolution. This exercise teaches you to implement transaction-based schema changes, build migration tracking tables, create forward and rollback migration scripts, and understand database versioning patterns used in production environments.
Real-World Context: Database migrations are fundamental for evolving application schemas over time without losing data. Every production application needs a reliable way to manage database schema changes as features are added and modified. Understanding migration patterns is crucial for DevOps engineers, backend developers, and anyone involved in deploying database-backed applications to production environments.
Solution
1// run
2package main
3
4import (
5 "database/sql"
6 "fmt"
7 "sort"
9
10 _ "github.com/mattn/go-sqlite3"
11)
12
13type Migration struct {
14 Version int
15 Name string
16 Up string
17 Down string
18}
19
20type Migrator struct {
21 db *sql.DB
22 migrations []Migration
23}
24
25func NewMigrator(db *sql.DB) *Migrator {
26 return &Migrator{db: db}
27}
28
29func (m *Migrator) init() error {
30 query := `
31 CREATE TABLE IF NOT EXISTS schema_migrations (
32 version INTEGER PRIMARY KEY,
33 name TEXT NOT NULL,
34 applied_at DATETIME DEFAULT CURRENT_TIMESTAMP
35 )`
36 _, err := m.db.Exec(query)
37 return err
38}
39
40func (m *Migrator) AddMigration(version int, name, up, down string) {
41 m.migrations = append(m.migrations, Migration{
42 Version: version,
43 Name: name,
44 Up: up,
45 Down: down,
46 })
47}
48
49func (m *Migrator) getCurrentVersion() (int, error) {
50 var version int
51 err := m.db.QueryRow("SELECT COALESCE(MAX(version), 0) FROM schema_migrations").Scan(&version)
52 return version, err
53}
54
55func (m *Migrator) Up() error {
56 if err := m.init(); err != nil {
57 return err
58 }
59
60 currentVersion, err := m.getCurrentVersion()
61 if err != nil {
62 return err
63 }
64
65 // Sort migrations by version
66 sort.Slice(m.migrations, func(i, j int) bool {
67 return m.migrations[i].Version < m.migrations[j].Version
68 })
69
70 for _, migration := range m.migrations {
71 if migration.Version <= currentVersion {
72 continue
73 }
74
75 fmt.Printf("Applying migration %d: %s\n", migration.Version, migration.Name)
76
77 tx, err := m.db.Begin()
78 if err != nil {
79 return err
80 }
81
82 if _, err := tx.Exec(migration.Up); err != nil {
83 tx.Rollback()
84 return fmt.Errorf("migration %d failed: %v", migration.Version, err)
85 }
86
87 _, err = tx.Exec("INSERT INTO schema_migrations (version, name) VALUES (?, ?)",
88 migration.Version, migration.Name)
89 if err != nil {
90 tx.Rollback()
91 return err
92 }
93
94 if err := tx.Commit(); err != nil {
95 return err
96 }
97
98 fmt.Printf("Migration %d applied successfully\n", migration.Version)
99 }
100
101 return nil
102}
103
104func (m *Migrator) Down(steps int) error {
105 if err := m.init(); err != nil {
106 return err
107 }
108
109 currentVersion, err := m.getCurrentVersion()
110 if err != nil {
111 return err
112 }
113
114 if currentVersion == 0 {
115 fmt.Println("No migrations to rollback")
116 return nil
117 }
118
119 // Get applied migrations in reverse order
120 rows, err := m.db.Query("SELECT version, name FROM schema_migrations ORDER BY version DESC")
121 if err != nil {
122 return err
123 }
124 defer rows.Close()
125
126 var appliedMigrations []Migration
127 for rows.Next() {
128 var version int
129 var name string
130 if err := rows.Scan(&version, &name); err != nil {
131 return err
132 }
133
134 // Find the migration definition
135 for _, mig := range m.migrations {
136 if mig.Version == version {
137 appliedMigrations = append(appliedMigrations, mig)
138 break
139 }
140 }
141
142 if len(appliedMigrations) >= steps {
143 break
144 }
145 }
146
147 // Rollback migrations
148 for _, migration := range appliedMigrations {
149 if migration.Down == "" {
150 fmt.Printf("Migration %d (%s) cannot be rolled back\n", migration.Version, migration.Name)
151 continue
152 }
153
154 fmt.Printf("Rolling back migration %d: %s\n", migration.Version, migration.Name)
155
156 tx, err := m.db.Begin()
157 if err != nil {
158 return err
159 }
160
161 if _, err := tx.Exec(migration.Down); err != nil {
162 tx.Rollback()
163 return fmt.Errorf("rollback %d failed: %v", migration.Version, err)
164 }
165
166 _, err = tx.Exec("DELETE FROM schema_migrations WHERE version = ?", migration.Version)
167 if err != nil {
168 tx.Rollback()
169 return err
170 }
171
172 if err := tx.Commit(); err != nil {
173 return err
174 }
175
176 fmt.Printf("Migration %d rolled back successfully\n", migration.Version)
177 }
178
179 return nil
180}
181
182func main() {
183 db, err := sql.Open("sqlite3", "./test.db")
184 if err != nil {
185 panic(err)
186 }
187 defer db.Close()
188
189 migrator := NewMigrator(db)
190
191 migrator.AddMigration(1, "create_users_table", `
192 CREATE TABLE users (
193 id INTEGER PRIMARY KEY AUTOINCREMENT,
194 username TEXT NOT NULL UNIQUE,
195 email TEXT NOT NULL,
196 created_at DATETIME DEFAULT CURRENT_TIMESTAMP
197 )
198 `, "DROP TABLE users")
199
200 migrator.AddMigration(2, "add_users_bio", `
201 ALTER TABLE users ADD COLUMN bio TEXT
202 `, "")
203
204 migrator.AddMigration(3, "create_posts_table", `
205 CREATE TABLE posts (
206 id INTEGER PRIMARY KEY AUTOINCREMENT,
207 user_id INTEGER NOT NULL,
208 title TEXT NOT NULL,
209 content TEXT,
210 created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
211 FOREIGN KEY (user_id) REFERENCES users(id)
212 )
213 `, "DROP TABLE posts")
214
215 // Run migrations
216 if err := migrator.Up(); err != nil {
217 panic(err)
218 }
219
220 fmt.Println("All migrations applied successfully")
221
222 // Test rollback
223 fmt.Println("\nRolling back last migration...")
224 if err := migrator.Down(1); err != nil {
225 panic(err)
226 }
227
228 fmt.Println("Rollback completed")
229}
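The best-practices section warned never to modify a migration after it ships. One way to enforce that, rather than merely document it, is to record a checksum of each `Up` script alongside the version (this would require an extra column in `schema_migrations`, not shown above) and compare on every run. A hedged sketch of the hashing piece:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// migrationChecksum hashes a migration's Up SQL. Storing this hex
// digest with the migration record lets the migrator detect when
// an already-applied migration file was edited after deployment
// and refuse to continue.
func migrationChecksum(upSQL string) string {
	sum := sha256.Sum256([]byte(upSQL))
	return hex.EncodeToString(sum[:])
}

func main() {
	a := migrationChecksum("CREATE TABLE users (id INTEGER PRIMARY KEY)")
	b := migrationChecksum("CREATE TABLE users (id INTEGER PRIMARY KEY) -- edited")
	fmt.Println(len(a), a == b) // 64 false
}
```

On startup, a mismatch between the stored digest and the current file's digest is a deployment error worth failing loudly on.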
Exercise 5: Database Query Builder
Learning Objectives: Build a flexible query builder for constructing SQL queries programmatically, implement method chaining for fluent API design, handle dynamic WHERE clauses and query conditions, and create type-safe query construction patterns.
Difficulty: ⭐⭐⭐ Intermediate
Time Estimate: 35 minutes
Create a simple SQL query builder that allows you to construct SELECT queries using method chaining, with support for WHERE clauses, ORDER BY, LIMIT, and OFFSET. This exercise teaches you to design fluent APIs in Go, build dynamic SQL queries safely, implement method chaining patterns, and create reusable database utilities that improve code readability and maintainability.
Real-World Context: Query builders are essential in production applications for constructing complex queries dynamically based on user input, filters, and search criteria. Every modern ORM and database library uses query builders to provide type-safe, readable query construction. Understanding query builder patterns helps you build maintainable data access layers and create developer-friendly APIs for database operations.
Solution
1// run
2package main
3
4import (
5 "database/sql"
6 "fmt"
7 "strings"
8
9 _ "github.com/mattn/go-sqlite3"
10)
11
12type QueryBuilder struct {
13 table string
14 selectCols []string
15 whereConds []string
16 whereArgs []interface{}
17 orderBy string
18 limitVal int
19 offsetVal int
20}
21
22func NewQueryBuilder(table string) *QueryBuilder {
23 return &QueryBuilder{
24 table: table,
25 selectCols: []string{"*"},
26 }
27}
28
29func (qb *QueryBuilder) Select(cols ...string) *QueryBuilder {
30 if len(cols) > 0 {
31 qb.selectCols = cols
32 }
33 return qb
34}
35
36func (qb *QueryBuilder) Where(condition string, args ...interface{}) *QueryBuilder {
37 qb.whereConds = append(qb.whereConds, condition)
38 qb.whereArgs = append(qb.whereArgs, args...)
39 return qb
40}
41
42func (qb *QueryBuilder) OrderBy(col string) *QueryBuilder {
43 qb.orderBy = col
44 return qb
45}
46
47func (qb *QueryBuilder) Limit(n int) *QueryBuilder {
48 qb.limitVal = n
49 return qb
50}
51
52func (qb *QueryBuilder) Offset(n int) *QueryBuilder {
53 qb.offsetVal = n
54 return qb
55}
56
57func (qb *QueryBuilder) Build() (string, []interface{}) {
58 // Build SELECT clause
59 query := fmt.Sprintf("SELECT %s FROM %s", strings.Join(qb.selectCols, ", "), qb.table)
60
61 // Add WHERE clause
62 if len(qb.whereConds) > 0 {
63 query += " WHERE " + strings.Join(qb.whereConds, " AND ")
64 }
65
66 // Add ORDER BY
67 if qb.orderBy != "" {
68 query += " ORDER BY " + qb.orderBy
69 }
70
71 // Add LIMIT
72 if qb.limitVal > 0 {
73 query += fmt.Sprintf(" LIMIT %d", qb.limitVal)
74 }
75
76 // Add OFFSET
77 if qb.offsetVal > 0 {
78 query += fmt.Sprintf(" OFFSET %d", qb.offsetVal)
79 }
80
81 return query, qb.whereArgs
82}
83
84func (qb *QueryBuilder) Execute(db *sql.DB) (*sql.Rows, error) {
85 query, args := qb.Build()
86 return db.Query(query, args...)
87}
88
89func main() {
90 db, err := sql.Open("sqlite3", "./test.db")
91 if err != nil {
92 panic(err)
93 }
94 defer db.Close()
95
96 // Create test table
97 _, err = db.Exec(`
98 CREATE TABLE IF NOT EXISTS users (
99 id INTEGER PRIMARY KEY AUTOINCREMENT,
100 username TEXT NOT NULL,
101 email TEXT NOT NULL,
102 age INTEGER,
103 created_at DATETIME DEFAULT CURRENT_TIMESTAMP
104 )
105 `)
106 if err != nil {
107 panic(err)
108 }
109
110 // Insert test data
111 db.Exec("DELETE FROM users") // Clear existing data
112 db.Exec("INSERT INTO users (username, email, age) VALUES (?, ?, ?)", "alice", "alice@example.com", 25)
113 db.Exec("INSERT INTO users (username, email, age) VALUES (?, ?, ?)", "bob", "bob@example.com", 30)
114 db.Exec("INSERT INTO users (username, email, age) VALUES (?, ?, ?)", "carol", "carol@example.com", 35)
115 db.Exec("INSERT INTO users (username, email, age) VALUES (?, ?, ?)", "dave", "dave@example.com", 28)
116
117 // Example 1: Simple query
118 fmt.Println("Example 1: Select all users")
119 query1, args1 := NewQueryBuilder("users").Select("id", "username", "email", "age").Build()
120 fmt.Printf("Query: %s\nArgs: %v\n\n", query1, args1)
121
122 rows1, _ := db.Query(query1, args1...)
123 defer rows1.Close()
124 for rows1.Next() {
125 var id, age int
126 var username, email string
127 rows1.Scan(&id, &username, &email, &age)
128 fmt.Printf(" User %d: %s (%s), age %d\n", id, username, email, age)
129 }
130
131 // Example 2: Select specific columns with WHERE
132 fmt.Println("\nExample 2: Users with age > 28")
133 query2, args2 := NewQueryBuilder("users").
134 Select("username", "age").
135 Where("age > ?", 28).
136 Build()
137 fmt.Printf("Query: %s\nArgs: %v\n\n", query2, args2)
138
139 rows2, _ := db.Query(query2, args2...)
140 defer rows2.Close()
141 for rows2.Next() {
142 var age int
143 var username string
144 rows2.Scan(&username, &age)
145 fmt.Printf(" %s (age %d)\n", username, age)
146 }
147
148 // Example 3: Multiple conditions with ORDER BY and LIMIT
149 fmt.Println("\nExample 3: Users age 25-35, ordered by age, limit 2")
150 query3, args3 := NewQueryBuilder("users").
151 Select("username", "age").
152 Where("age >= ?", 25).
153 Where("age <= ?", 35).
154 OrderBy("age DESC").
155 Limit(2).
156 Build()
157 fmt.Printf("Query: %s\nArgs: %v\n\n", query3, args3)
158
159 rows3, _ := db.Query(query3, args3...)
160 defer rows3.Close()
161 for rows3.Next() {
162 var age int
163 var username string
164 rows3.Scan(&username, &age)
165 fmt.Printf(" %s (age %d)\n", username, age)
166 }
167
168 // Example 4: Pagination with OFFSET
169 fmt.Println("\nExample 4: Pagination (page 2, 2 per page)")
170 query4, args4 := NewQueryBuilder("users").
171 Select("username", "email").
172 OrderBy("username").
173 Limit(2).
174 Offset(2).
175 Build()
176 fmt.Printf("Query: %s\nArgs: %v\n\n", query4, args4)
177
178 rows4, _ := db.Query(query4, args4...)
179 defer rows4.Close()
180 for rows4.Next() {
181 var username, email string
182 rows4.Scan(&username, &email)
183 fmt.Printf(" %s (%s)\n", username, email)
184 }
185
186 // Example 5: Using Execute helper method
187 fmt.Println("\nExample 5: Using Execute() helper")
188 rows5, err := NewQueryBuilder("users").
189 Select("id", "username", "email", "age").
190 Where("age < ?", 30).
191 OrderBy("age").Execute(db)
192 if err != nil {
193 panic(err)
194 }
195 defer rows5.Close()
196
197 for rows5.Next() {
198 var id, age int
199 var username, email string
200 rows5.Scan(&id, &username, &email, &age)
201 fmt.Printf(" User: %s (age %d)\n", username, age)
202 }
203
204 fmt.Println("\n✅ Query builder examples completed successfully")
205}
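One safety caveat with the builder above: `Build()` splices the `OrderBy` string directly into the SQL, because identifiers (column names, sort direction) cannot be bound as placeholders. That is fine for hard-coded sort keys, but anything that originates from user input must be validated first. A sketch using a whitelist (the allowed column set is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// Identifiers can't be parameterized, so sort columns coming from
// user input are checked against an explicit whitelist. The set
// below is an example; list your real sortable columns.
var allowedSortCols = map[string]bool{
	"username":   true,
	"age":        true,
	"created_at": true,
}

// safeOrderBy accepts "col" or "col DESC" for whitelisted columns
// and rejects everything else before it can reach the query text.
func safeOrderBy(input string) (string, error) {
	col, dir := input, ""
	if rest, found := strings.CutSuffix(input, " DESC"); found {
		col, dir = rest, " DESC"
	}
	if !allowedSortCols[col] {
		return "", fmt.Errorf("disallowed sort column %q", col)
	}
	return col + dir, nil
}

func main() {
	ok, _ := safeOrderBy("age DESC")
	fmt.Println(ok) // age DESC
	_, err := safeOrderBy("age; DROP TABLE users")
	fmt.Println(err != nil) // true
}
```

Calling `qb.OrderBy(validated)` only after `safeOrderBy` succeeds closes the one injection path the builder's placeholder handling doesn't cover.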
Summary
- Use the database/sql package with an appropriate driver
- sql.Open() opens a connection; db.Ping() verifies it
- db.Exec() for INSERT, UPDATE, and DELETE
- db.QueryRow() for single-row queries; db.Query() for multi-row queries
- Always use placeholders to prevent SQL injection
- Use transactions for atomic operations
- Configure connection pool with SetMaxOpenConns, SetMaxIdleConns
- Handle NULL values with sql.Null* types
- Use context for timeouts and cancellation
- Always close rows with defer rows.Close()
- Check rows.Err() after iteration
- Use prepared statements for repeated queries
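The placeholder advice above has one portability wrinkle: MySQL and SQLite use `?`, while PostgreSQL uses `$1..$N`. When switching drivers, queries written with `?` can be mechanically rewritten. A deliberately naive sketch (it does not skip `?` characters inside quoted SQL string literals, so apply it only to queries without such literals):

```go
package main

import (
	"fmt"
	"strings"
)

// rebind converts '?' placeholders to PostgreSQL-style $1..$N.
// Simplified on purpose: a '?' inside a quoted SQL string literal
// would be rewritten too, so keep literals out of rebound queries.
func rebind(query string) string {
	var b strings.Builder
	n := 0
	for _, r := range query {
		if r == '?' {
			n++
			fmt.Fprintf(&b, "$%d", n)
		} else {
			b.WriteRune(r)
		}
	}
	return b.String()
}

func main() {
	q := "SELECT id FROM users WHERE age > ? AND email = ?"
	fmt.Println(rebind(q))
	// SELECT id FROM users WHERE age > $1 AND email = $2
}
```

With a helper like this at the data-access boundary, the rest of the codebase can stay on a single placeholder style.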
💡 Final Takeaway: Database programming in Go is about building reliable, efficient data persistence layers. Focus on safety, performance, and reliability. These patterns will serve you well whether you're building small tools or large-scale applications.
Master database operations and build robust, secure database applications in Go!