Why File I/O Matters
Consider building a web application that needs to handle user-uploaded photos, process configuration files, and generate daily reports. Every interaction with the file system represents a potential failure point that could crash your application or corrupt user data. File I/O is the foundation of data persistence - without it, your applications can't save settings, store user data, or process information between sessions.
Real-World Impact:
- Configuration Files: Every application needs to read/write config files
- Logging Systems: Processing millions of log entries efficiently without memory issues
- File Uploads: Securely handling user-submitted files in web applications
- Data Processing: Reading CSV files, processing images, handling large datasets
- Backup Operations: Creating atomic backups that won't corrupt on crashes
Why Go Excels at File I/O:
- Cross-Platform: Same code works on Windows, Linux, macOS without changes
- Memory Safe: Built-in protections against buffer overflows and memory leaks
- High Performance: Buffered I/O implementations compete with C/C++ performance
- Simple Error Handling: Explicit error returns instead of exceptions
- Production Ready: Built-in tools for atomic renames, temp files, and directory walking
Learning Objectives
After completing this article, you will be able to:
✅ Choose the Right Method - Select appropriate file reading/writing strategies for different scenarios
✅ Handle Large Files - Process gigabyte-sized files without running out of memory
✅ Write Atomic Operations - Ensure data integrity during crashes and power failures
✅ Master Performance - Use buffered I/O for 10-50x performance improvements
✅ Build Cross-Platform - Write code that works seamlessly across operating systems
✅ Implement Real Patterns - Create configuration managers, log processors, and file backup systems
✅ Avoid Common Pitfalls - Prevent resource leaks, race conditions, and data corruption
Core Concepts - Understanding File System Operations
Before diving into code, let's understand the fundamental concepts that make Go's file I/O both powerful and safe.
The File Ecosystem
Go provides a layered approach to file operations:
┌─────────────────────────────────────┐
│ Application Logic │
├─────────────────────────────────────┤
│ Go Packages │
│ ┌─────────────┬─────────────┐ │
│ │ os │ io │ │ ← Core packages
│ │ files │ buffers │ │
│ │ permissions │ streams │ │
│ └─────────────┴─────────────┘ │
├─────────────────────────────────────┤
│ Operating System │ ← File descriptors, paths
├─────────────────────────────────────┤
│ Hardware │ ← Actual disk storage
└─────────────────────────────────────┘
Key Packages Overview
| Package | Primary Role | Key Functions |
|---|---|---|
| os | File system operations | Open, Create, Remove, Stat, Mkdir |
| io | Basic I/O interfaces | Reader, Writer, Copy, ReadFull |
| bufio | Buffered I/O | Scanner, Reader, Writer |
| filepath | Path manipulation | Join, Clean, Walk, Base, Dir |
| ioutil | Convenience functions (deprecated since Go 1.16; use os and io instead) | ReadFile, WriteFile, TempFile |
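To make the table concrete, here is a minimal sketch that uses three of these packages together - filepath builds a portable path, os opens the file, and bufio streams it line by line (testdata/notes.txt is just an assumed example path):
// run
package main

import (
    "bufio"
    "fmt"
    "os"
    "path/filepath"
)

func main() {
    // filepath builds the path, os opens the file, bufio buffers the reads
    path := filepath.Join("testdata", "notes.txt") // assumed example path
    file, err := os.Open(path)
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    defer file.Close()

    scanner := bufio.NewScanner(file)
    for scanner.Scan() {
        fmt.Println(scanner.Text())
    }
    if err := scanner.Err(); err != nil {
        fmt.Println("Error:", err)
    }
}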
The Resource Management Problem
Every file operation consumes system resources:
// Each open file = 1 file descriptor
// Typical limits: 1024-4096 per process
// Consequence: Too many open files = crash!

// Memory usage:
// Small file: 1KB memory
// Large file: 1GB memory

// Solution: Use appropriate reading strategy
This is why choosing the right method matters - it's not just about convenience, but about system stability and performance.
Understanding File Descriptors
Every time you open a file, the operating system allocates a file descriptor - a unique identifier used to reference the open file. Understanding this concept is critical for managing system resources properly.
File Descriptor Lifecycle:
1. os.Open("file.txt") → OS allocates file descriptor #5
2. Read/Write operations → Uses file descriptor #5
3. file.Close() → OS releases file descriptor #5
4. Memory freed → Resources available for reuse
What Happens Without Close:
// ❌ WRONG: Resource leak
for i := 0; i < 10000; i++ {
    file, _ := os.Open("data.txt")
    // process(file)
    // Missing file.Close() - leaks one file descriptor per iteration!
}
// After ~1024 iterations: "too many open files" error
File Descriptor Limits:
# Check current limits on Unix systems
ulimit -n  # Usually 1024 or 4096

# Increase limits temporarily
ulimit -n 10000

# Check system-wide limits
cat /proc/sys/fs/file-max
File I/O Performance Model
Understanding the performance characteristics of different I/O methods helps you make informed decisions:
System Call Overhead:
User Space Kernel Space
│ │
│─── system call ──→│
│ (~1-10 μs) │
│ │─── Disk I/O
│ │ (~5-10 ms HDD)
│ │ (~100-500 μs SSD)
│←─── return ───────│
Buffered vs Unbuffered I/O:
Unbuffered (Direct System Calls):
Read line 1 → syscall → disk → 10ms
Read line 2 → syscall → disk → 10ms
Read line 3 → syscall → disk → 10ms
Total: 30ms for 3 lines
Buffered (Batch System Calls):
Read buffer → syscall → disk → 10ms (gets 1000 lines)
Read line 1 from buffer → memory → 0.001ms
Read line 2 from buffer → memory → 0.001ms
Read line 3 from buffer → memory → 0.001ms
Total: 10ms for 1000 lines!
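If you want to observe this gap yourself, the following is a rough, self-contained sketch (it assumes a large.log file exists in the working directory; the unbuffered version is deliberately pathological, and absolute timings will vary by machine and disk):
// run
package main

import (
    "bufio"
    "fmt"
    "io"
    "os"
    "time"
)

// readUnbuffered reads one byte per Read call - one system call each time.
func readUnbuffered(name string) (int, error) {
    f, err := os.Open(name)
    if err != nil {
        return 0, err
    }
    defer f.Close()

    buf := make([]byte, 1)
    total := 0
    for {
        n, err := f.Read(buf)
        total += n
        if err == io.EOF {
            return total, nil
        }
        if err != nil {
            return total, err
        }
    }
}

// readBuffered wraps the file in a bufio.Reader, so most 1-byte reads
// are served from the in-memory buffer instead of a system call.
func readBuffered(name string) (int, error) {
    f, err := os.Open(name)
    if err != nil {
        return 0, err
    }
    defer f.Close()

    r := bufio.NewReader(f)
    buf := make([]byte, 1)
    total := 0
    for {
        n, err := r.Read(buf)
        total += n
        if err == io.EOF {
            return total, nil
        }
        if err != nil {
            return total, err
        }
    }
}

func main() {
    start := time.Now()
    n, err := readUnbuffered("large.log") // assumes this file exists
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    fmt.Printf("unbuffered: %d bytes in %v\n", n, time.Since(start))

    start = time.Now()
    n, err = readBuffered("large.log")
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    fmt.Printf("buffered:   %d bytes in %v\n", n, time.Since(start))
}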
Practical Examples - Reading Files
Now let's explore different reading strategies with immediate code examples showing when and why to use each approach.
Strategy 1: Small Files - Read All at Once
Use Case: Configuration files, small JSON documents, settings
// run
package main

import (
    "fmt"
    "os"
)

func main() {
    // Perfect for config files
    data, err := os.ReadFile("config.json")
    if err != nil {
        fmt.Println("Error reading config:", err)
        return
    }

    fmt.Printf("Config loaded: %d bytes\n%s\n", len(data), string(data))
}
Why This Works:
- ✅ Simple: One function call
- ✅ Automatic cleanup: File closes automatically
- ✅ Fast: System optimizes small reads
- ⚠️ Memory: Entire file in RAM
When to Use:
- Configuration files (< 1 MB)
- JSON/YAML/TOML files
- Small templates
- License files
- README files
When NOT to Use:
- Log files (might be GBs)
- User-uploaded files (unpredictable size)
- Streaming data
- Files larger than available memory
Strategy 2: Large Files - Streaming Line by Line
Use Case: Log files, CSV processing, large text files
// run
package main

import (
    "bufio"
    "fmt"
    "log"
    "os"
)

func main() {
    file, err := os.Open("large.log")
    if err != nil {
        log.Fatal("Cannot open log:", err)
    }
    defer file.Close() // Critical: prevents resource leak

    scanner := bufio.NewScanner(file)
    lineCount := 0

    // Process one line at a time
    for scanner.Scan() {
        lineCount++
        line := scanner.Text()

        // Process line without loading entire file
        if lineCount <= 5 { // Show first 5 lines
            fmt.Printf("Line %d: %s\n", lineCount, line)
        }
    }

    if err := scanner.Err(); err != nil {
        log.Fatal("Error reading log:", err)
    }

    fmt.Printf("Total lines processed: %d\n", lineCount)
}
Performance Breakdown:
Processing 100MB log file:
Method | Memory Usage | System Calls | Time
--------------------|--------------|--------------|------
os.ReadFile | 100MB | ~200 | 150ms ❌ High memory
bufio.Scanner | 4KB | ~25,000 | 180ms ✅ Low memory
Memory Efficiency:
Without streaming:
┌──────────────────┐
│ 100 MB in RAM │ ❌ Entire file loaded
└──────────────────┘
With streaming:
┌────────┐
│ 4KB │ ✅ Only buffer in RAM
└────────┘
Process line → discard → read next
Strategy 3: Binary Files - Chunked Reading
Use Case: Images, PDFs, custom binary formats
// run
package main

import (
    "fmt"
    "io"
    "os"
)

func main() {
    file, err := os.Open("image.png")
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    defer file.Close()

    // Read in 4KB chunks
    buffer := make([]byte, 4096)
    totalBytes := 0

    for {
        bytesRead, err := file.Read(buffer)
        if err == io.EOF {
            break // End of file
        }
        if err != nil {
            fmt.Println("Read error:", err)
            return
        }

        totalBytes += bytesRead
        fmt.Printf("Read chunk: %d bytes (total: %d)\n", bytesRead, totalBytes)

        // Process chunk here (e.g., hash calculation, compression)
    }

    fmt.Printf("File completely read: %d bytes\n", totalBytes)
}
Chunk Size Optimization:
Chunk Size | System Calls | Memory | Best For
-----------|--------------|--------|----------
1 KB | Many | Low | Constrained environments
4 KB | Moderate | Low | General purpose (OS page size)
64 KB | Few | Medium | High throughput
1 MB | Very few | High | Bulk transfers
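The "process chunk here" comment above can be made concrete. Here is a hedged sketch that hashes a file in 64KB chunks - the high-throughput row from the table - without ever holding the whole file in memory (image.png is the assumed input from the example above):
// run
package main

import (
    "crypto/sha256"
    "fmt"
    "io"
    "os"
)

func main() {
    file, err := os.Open("image.png") // assumed input file
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    defer file.Close()

    hash := sha256.New()
    buffer := make([]byte, 64*1024) // 64KB: the high-throughput row from the table

    for {
        n, err := file.Read(buffer)
        if n > 0 {
            hash.Write(buffer[:n]) // feed each chunk to the hash as it arrives
        }
        if err == io.EOF {
            break
        }
        if err != nil {
            fmt.Println("Read error:", err)
            return
        }
    }

    fmt.Printf("SHA-256: %x\n", hash.Sum(nil))
}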
Strategy 4: Random Access Pattern
Use Case: Database files, indexed data, seeking to specific positions
// run
package main

import (
    "fmt"
    "io"
    "os"
)

func main() {
    file, err := os.Open("data.bin")
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    defer file.Close()

    // Read header at beginning (ReadFull guarantees all 16 bytes or an error)
    header := make([]byte, 16)
    _, err = io.ReadFull(file, header)
    if err != nil {
        fmt.Println("Error reading header:", err)
        return
    }
    fmt.Printf("Header: %x\n", header)

    // Seek to specific position (e.g., offset 1000)
    offset := int64(1000)
    _, err = file.Seek(offset, io.SeekStart)
    if err != nil {
        fmt.Println("Error seeking:", err)
        return
    }

    // Read data at that position
    data := make([]byte, 100)
    n, err := file.Read(data)
    if err != nil && err != io.EOF {
        fmt.Println("Error reading data:", err)
        return
    }
    fmt.Printf("Read %d bytes from offset %d\n", n, offset)

    // Seek relative to current position
    file.Seek(50, io.SeekCurrent)

    // Seek relative to end of file
    file.Seek(-100, io.SeekEnd) // Last 100 bytes
}
Seek Operations:
// Seek positions
io.SeekStart   // From beginning of file
io.SeekCurrent // From current position
io.SeekEnd     // From end of file (backwards)

// Examples
file.Seek(0, io.SeekStart)     // Go to start
file.Seek(100, io.SeekStart)   // Go to byte 100
file.Seek(50, io.SeekCurrent)  // Skip 50 bytes forward
file.Seek(-10, io.SeekCurrent) // Go back 10 bytes
file.Seek(0, io.SeekEnd)       // Go to end
file.Seek(-100, io.SeekEnd)    // Last 100 bytes
Read Entire File
The simplest and most convenient way to read a file - perfect for small to medium files where you need all the content at once.
When to Use:
- Configuration files
- Small text files, JSON, YAML, TOML
- Files you need to process as a whole
- When simplicity matters more than memory efficiency
Pros:
- ✅ Simplest API - one function call
- ✅ Automatically handles file closing
- ✅ Clean, readable code
Cons:
- ❌ Loads entire file into memory
- ❌ Can cause OOM for large files
- ❌ Less control over buffering
Under the Hood:
// os.ReadFile implementation (simplified)
func ReadFile(name string) ([]byte, error) {
    f, err := Open(name) // 1. Open file
    if err != nil {
        return nil, err
    }
    defer f.Close() // 2. Auto-close on return

    // 3. Get file size for optimal allocation
    var size int
    if info, err := f.Stat(); err == nil {
        size64 := info.Size()
        if int64(int(size64)) == size64 {
            size = int(size64)
        }
    }

    // 4. Read entire content with pre-allocated buffer
    data := make([]byte, 0, size+1)
    // ... read loop ...
    return data, nil
}
Memory Model:
Small file:
┌─────────────┐
│ config.json │ 5 KB
└──────┬──────┘
│ os.ReadFile()
▼
┌──────────────┐
│ []byte 5 KB │ ✅ Efficient
└──────────────┘
Large file:
┌──────────────┐
│ huge.log │ 2 GB
└──────┬───────┘
│ os.ReadFile()
▼
┌────────────────┐
│ []byte 2 GB!!! │ ❌ Memory pressure
└────────────────┘
// run
package main

import (
    "fmt"
    "os"
)

func main() {
    data, err := os.ReadFile("example.txt")
    if err != nil {
        fmt.Println("Error:", err)
        return
    }

    fmt.Println(string(data))
}
Read with os.Open
More control over file operations with explicit file handle management.
When to Use:
- Need to check file metadata before reading
- Want to read file in chunks
- Need to seek to different positions
- Combining read operations with other file operations
// run
package main

import (
    "fmt"
    "io"
    "os"
)

func main() {
    file, err := os.Open("example.txt")
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    defer file.Close()

    // Get file info
    info, err := file.Stat()
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    fmt.Printf("File size: %d bytes\n", info.Size())

    // Read content
    data, err := io.ReadAll(file)
    if err != nil {
        fmt.Println("Error:", err)
        return
    }

    fmt.Println(string(data))
}
File Operations Available:
file.Read(buffer)           // Read into byte slice
file.ReadAt(buffer, offset) // Read at specific position
file.Seek(offset, whence)   // Move file pointer
file.Stat()                 // Get file metadata
file.Sync()                 // Flush to disk
file.Truncate(size)         // Resize file
Buffered Reading
The most efficient way to read large files - uses an internal buffer to minimize system calls and memory usage. This is the go-to method for processing large log files, datasets, or any file that doesn't fit comfortably in memory.
Why Buffered I/O is Fast:
- Fewer System Calls - Instead of making a system call for every line, bufio reads large chunks at once
- Lower Memory Footprint - Only keeps the buffer in memory, not the entire file
- Streaming Processing - Process one line at a time, never loading the entire file
Performance Comparison:
Reading 100MB log file:
Method | Memory Usage | System Calls | Time
----------------|--------------|--------------|------
os.ReadFile | 100MB | ~200 | 150ms ❌ High memory
io.ReadAll | 100MB | ~200 | 150ms ❌ High memory
bufio.Scanner | 4KB | ~25,000 | 180ms ✅ Low memory
Under the Hood:
Without buffering:
┌──────────────┐
│ huge.log │
└──────┬───────┘
│ Read line 1 → system call #1
│ Read line 2 → system call #2
│ Read line 3 → system call #3
│ ... 1 million system calls! ❌
With bufio.Scanner:
┌──────────────┐
│ huge.log │
└──────┬───────┘
│ Read 4KB chunk → system call #1
├─→ line 1
├─→ line 2
├─→ line 3
├─→ ...
│ Read 4KB chunk → system call #2
├─→ more lines
│ ... only ~25,000 system calls ✅
System Call Cost:
Each system call involves:
- Context switch from user space → kernel space
- Kernel validates file descriptor, permissions
- Disk I/O
- Context switch back to user space
- Cost: ~1-10 microseconds per call
Buffered Reading Strategies:
| Scanner Method | Use Case | Line Length Limit |
|---|---|---|
| bufio.Scanner (default ScanLines) | Text files | 64KB per line (default) |
| scanner.Split(bufio.ScanWords) | Word-by-word parsing | Configurable |
| scanner.Split(bufio.ScanBytes) | Byte-by-byte processing | N/A |
| bufio.Reader | Binary files, custom parsing | No limit |
// run
package main

import (
    "bufio"
    "fmt"
    "os"
)

func main() {
    file, err := os.Open("example.txt")
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    defer file.Close()

    scanner := bufio.NewScanner(file)
    for scanner.Scan() {
        fmt.Println(scanner.Text())
    }

    if err := scanner.Err(); err != nil {
        fmt.Println("Error:", err)
    }
}
Custom Scanner Splitting:
// run
package main

import (
    "bufio"
    "fmt"
    "os"
)

func main() {
    file, err := os.Open("data.txt")
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    defer file.Close()

    scanner := bufio.NewScanner(file)

    // Scan by words instead of lines
    scanner.Split(bufio.ScanWords)
    wordCount := 0

    for scanner.Scan() {
        wordCount++
        if wordCount <= 10 {
            fmt.Printf("Word %d: %s\n", wordCount, scanner.Text())
        }
    }

    fmt.Printf("Total words: %d\n", wordCount)
}
Advanced: Custom Split Function:
// run
package main

import (
    "bufio"
    "bytes"
    "fmt"
    "os"
)

// Custom split function to read CSV-like data
func csvSplit(data []byte, atEOF bool) (advance int, token []byte, err error) {
    // Find comma or newline
    if i := bytes.IndexAny(data, ",\n"); i >= 0 {
        return i + 1, data[0:i], nil
    }

    // Request more data
    if !atEOF {
        return 0, nil, nil
    }

    // EOF: return remaining data
    if len(data) > 0 {
        return len(data), data, nil
    }

    return 0, nil, nil
}

func main() {
    file, err := os.Open("data.csv")
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    defer file.Close()

    scanner := bufio.NewScanner(file)
    scanner.Split(csvSplit)

    for scanner.Scan() {
        fmt.Printf("Field: %s\n", scanner.Text())
    }
}
Reading Line by Line
Perfect for processing log files, text files, and any line-based format where you need to handle each line individually.
// run
package main

import (
    "bufio"
    "fmt"
    "os"
)

func main() {
    file, err := os.Open("data.txt")
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    defer file.Close()

    scanner := bufio.NewScanner(file)
    lineNumber := 1

    for scanner.Scan() {
        fmt.Printf("%d: %s\n", lineNumber, scanner.Text())
        lineNumber++
    }

    if err := scanner.Err(); err != nil {
        fmt.Println("Error:", err)
    }
}
Scanner Buffer Size Tuning:
// run
package main

import (
    "bufio"
    "fmt"
    "os"
)

func main() {
    file, err := os.Open("large-lines.txt")
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    defer file.Close()

    scanner := bufio.NewScanner(file)

    // Increase buffer size for files with long lines
    const maxCapacity = 1024 * 1024 // 1MB
    buf := make([]byte, maxCapacity)
    scanner.Buffer(buf, maxCapacity)

    for scanner.Scan() {
        line := scanner.Text()
        fmt.Printf("Line length: %d\n", len(line))
    }

    if err := scanner.Err(); err != nil {
        fmt.Println("Error:", err)
    }
}
Writing Files
Now that you know how to read files, let's explore writing them back out. Writing files is like sending letters - you need to decide whether you're writing a quick note or composing a novel, and whether you're sending it all at once or in several installments.
Just like reading, Go provides multiple ways to write files. The key consideration is whether you're writing once or making many small writes.
💡 Key Takeaway: Buffering is your best friend when writing. Writing directly to a file for every small piece of data is like mailing individual postcards instead of a single letter - much slower and more expensive in terms of system resources.
Quick Decision Guide:
- Small files, write once: Use os.WriteFile
- Multiple writes: Use bufio.Writer
- Append to existing: Use os.OpenFile with O_APPEND
- Atomic writes: Write to temp file, then rename
- Need explicit control: Use os.Create + manual writes
⚠️ Important: Never forget to flush buffered writers! Buffered data stays in memory until you explicitly flush it to disk. A program crash before flushing means your data is lost forever.
File Permission Bits Explained:
In Unix-like systems, file permissions use octal notation:
0644:
│││└─ Others: read (4)
││└── Group: read (4)
│└─── User: read + write = 6
└──── Special bits
0755:
│││└─ Others: read + execute = 5
││└── Group: read + execute = 5
│└─── User: read + write + execute = 7
└──── Special bits
Permission bits: rwx rwx rwx
│ │ └─── Others
│ └─────── Group
└─────────── Owner
Value | Permission
------|------------
4 | Read (r--)
2 | Write (-w-)
1 | Execute (--x)
6 | Read + Write (rw-)
7 | Read + Write + Execute (rwx)
Common Permissions:
- 0644 - Files: Owner can read/write, others can read
- 0600 - Sensitive files: Only owner can read/write
- 0755 - Directories: Everyone can read/list, only owner can modify
- 0700 - Private directories: Only owner has access
- 0666 - Shared files: Everyone can read/write
- 0444 - Read-only for everyone
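These octal values map directly to os.FileMode, whose String method renders the familiar rwx notation; a quick sketch:
// run
package main

import (
    "fmt"
    "os"
)

func main() {
    // Render common permission values in rwx notation
    modes := []os.FileMode{0644, 0600, 0755, 0700, 0666, 0444}
    for _, m := range modes {
        fmt.Printf("%04o → %s\n", m, m) // %o prints the octal value, %s uses FileMode.String
    }
}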
Write Entire File
The simplest way to write a file - perfect for small files or one-time writes.
When to Use:
- Configuration files
- Small JSON/YAML/TOML output
- Simple text files
- Atomic updates
Pros:
- ✅ Simplest API - one function call
- ✅ Automatic file closing
- ✅ Sets permissions explicitly
Cons:
- ❌ Overwrites existing file
- ❌ Entire content must be in memory
- ❌ No buffering for multiple writes
- ❌ Not atomic
- ❌ Does not create parent directories
Atomicity Considerations:
Non-atomic write:
1. Open "config.json"
2. Write new data...
3. ❌ CRASH! → File corrupted
Atomic write:
1. Write to "config.json.tmp"
2. Flush and sync to disk
3. Rename "config.json.tmp" → "config.json"
4. ❌ CRASH here? → Original file still intact!
// run
package main

import (
    "os"
)

func main() {
    data := []byte("Hello, Go!\nWelcome to file I/O.\n")
    err := os.WriteFile("output.txt", data, 0644)
    if err != nil {
        panic(err)
    }
}
Permission Examples:
// run
package main

import (
    "os"
)

func main() {
    // World-readable file
    os.WriteFile("public.txt", []byte("public data"), 0644)

    // Owner-only file (passwords, keys)
    os.WriteFile("secret.key", []byte("secret"), 0600)

    // Everyone can read and write
    os.WriteFile("shared.txt", []byte("shared"), 0666)

    // Read-only file
    os.WriteFile("readonly.txt", []byte("readonly"), 0444)
}
Write with os.Create
More control over writing with explicit file handle management.
// run
package main

import (
    "fmt"
    "os"
)

func main() {
    file, err := os.Create("output.txt")
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    defer file.Close()

    _, err = file.WriteString("Hello, World!\n")
    if err != nil {
        fmt.Println("Error:", err)
        return
    }

    fmt.Println("File written successfully")
}
Multiple Write Operations:
// run
package main

import (
    "fmt"
    "os"
)

func main() {
    file, err := os.Create("output.txt")
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    defer file.Close()

    // Multiple writes
    file.WriteString("Line 1\n")
    file.WriteString("Line 2\n")
    file.WriteString("Line 3\n")

    // Write bytes
    data := []byte("Line 4\n")
    file.Write(data)

    // Formatted write
    fmt.Fprintf(file, "Line %d\n", 5)

    fmt.Println("File written successfully")
}
Buffered Writing
The most efficient way to write files when you have many small writes - dramatically reduces system calls and improves performance.
Why Buffered Writing is Critical:
Without buffering:
for i := 1; i <= 10000; i++ {
file.WriteString(fmt.Sprintf("Line %d\n", i)) // 10,000 system calls! ❌
}
With buffering:
writer := bufio.NewWriter(file)
for i := 1; i <= 10000; i++ {
writer.WriteString(fmt.Sprintf("Line %d\n", i)) // Buffered in memory
}
writer.Flush() // 1-2 system calls! ✅
Performance Impact:
Writing 100,000 small lines:
Method | System Calls | Time | Performance
--------------------|--------------|---------|-------------
Direct file.Write | 100,000 | 1500ms | ❌ Terrible
bufio.Writer | ~200 | 50ms | ✅ 30x faster!
Why Flushing is Critical:
// ❌ WRONG: Data may be lost!
func writeLog(filename, message string) {
    file, _ := os.Create(filename)
    defer file.Close()

    writer := bufio.NewWriter(file)
    writer.WriteString(message)
    // Missing Flush()! Data is in memory buffer, not on disk!
}

// ✅ CORRECT: Always flush before closing
func writeLog(filename, message string) {
    file, _ := os.Create(filename)
    defer file.Close()

    writer := bufio.NewWriter(file)
    defer writer.Flush() // Ensures data is written to disk
    writer.WriteString(message)
}
What Happens Without Flush:
1. writer.WriteString("data") → Writes to 4KB buffer in memory
2. file.Close() → Closes file descriptor
3. ❌ Buffer content LOST! Never written to disk!
With proper flush:
1. writer.WriteString("data") → Writes to 4KB buffer in memory
2. writer.Flush() → Forces buffer to disk
3. file.Close() → Safe to close now ✅
Buffer Sizes and Tuning:
// Default buffer size - good for most cases
writer := bufio.NewWriter(file)

// Custom buffer size - for high-throughput scenarios
writer := bufio.NewWriterSize(file, 64*1024) // 64KB buffer

// Smaller buffer - for real-time logs
writer := bufio.NewWriterSize(file, 1024) // 1KB buffer
When to Flush:
- Always: Before closing the file
- Periodically: For long-running processes
- After critical data: After writing important data that must persist
- Real-time logs: After each log entry for immediate visibility
// run
package main

import (
    "bufio"
    "fmt"
    "os"
)

func main() {
    file, err := os.Create("output.txt")
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    defer file.Close()

    writer := bufio.NewWriter(file)
    defer writer.Flush() // Critical: flush before file closes

    for i := 1; i <= 5; i++ {
        fmt.Fprintf(writer, "Line %d\n", i)
    }

    fmt.Println("File written successfully")
}
Advanced Buffering Patterns:
// run
package main

import (
    "bufio"
    "fmt"
    "os"
    "sync"
    "time"
)

func main() {
    file, err := os.Create("log.txt")
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    defer file.Close()

    writer := bufio.NewWriter(file)
    var mu sync.Mutex // bufio.Writer is not safe for concurrent use
    defer writer.Flush()

    // Periodic flushing for long-running processes
    ticker := time.NewTicker(5 * time.Second)
    defer ticker.Stop()

    done := make(chan bool)

    go func() {
        for {
            select {
            case <-ticker.C:
                mu.Lock()
                writer.Flush() // Flush every 5 seconds
                mu.Unlock()
                fmt.Println("Flushed buffer to disk")
            case <-done:
                return
            }
        }
    }()

    // Simulate writing logs
    for i := 0; i < 20; i++ {
        mu.Lock()
        fmt.Fprintf(writer, "[%s] Log entry %d\n", time.Now().Format(time.RFC3339), i)
        mu.Unlock()
        time.Sleep(500 * time.Millisecond)
    }

    done <- true
    fmt.Println("Logging complete")
}
Append to File
Add content to the end of an existing file without overwriting existing data - perfect for log files, audit trails, and incremental updates.
OpenFile Flags Explained:
os.OpenFile is the most flexible file opening function. It takes three parameters:
- filename - Path to file
- flags - How to open the file
- perm - Permissions to use if creating new file
Common Flags:
| Flag | Description | Use Case |
|---|---|---|
| O_RDONLY | Read-only | Reading files |
| O_WRONLY | Write-only | Writing files |
| O_RDWR | Read and write | Updating files in place |
| O_APPEND | Append to end | Log files, adding data |
| O_CREATE | Create if doesn't exist | New files |
| O_TRUNC | Truncate existing | Overwrite files |
| O_EXCL | Fail if file exists | Ensure new file |
| O_SYNC | Synchronous I/O | Critical data |
Flag Combinations:
// Read-only
os.OpenFile("file.txt", os.O_RDONLY, 0)

// Write, create if needed, truncate existing
os.Create("file.txt") // Shorthand for:
os.OpenFile("file.txt", os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0644)

// Append, create if needed, don't truncate
os.OpenFile("log.txt", os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0644)

// Read and write, create if needed
os.OpenFile("data.bin", os.O_RDWR|os.O_CREATE, 0644)

// Create new file, fail if exists
os.OpenFile("lock.pid", os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0644)
Why O_APPEND Matters:
Without O_APPEND:
Process A: Seek to end → position 1000
Process B: Seek to end → position 1000
Process A: Write "A" → overwrites at 1000
Process B: Write "B" → overwrites at 1000
❌ Result: Data loss! Both wrote to same position
With O_APPEND:
Process A: Write "A" with O_APPEND → atomic append
Process B: Write "B" with O_APPEND → atomic append
✅ Result: Both writes preserved, no data loss
Real-World Append Patterns:
// 1. Simple log appender
func appendLog(filename, message string) error {
    f, err := os.OpenFile(filename, os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0644)
    if err != nil {
        return err
    }
    defer f.Close()

    timestamp := time.Now().Format("2006-01-02 15:04:05")
    _, err = fmt.Fprintf(f, "[%s] %s\n", timestamp, message)
    return err
}

// 2. Buffered log appender
func appendLogBuffered(filename, message string) error {
    f, err := os.OpenFile(filename, os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0644)
    if err != nil {
        return err
    }
    defer f.Close()

    writer := bufio.NewWriter(f)
    defer writer.Flush()

    timestamp := time.Now().Format("2006-01-02 15:04:05")
    _, err = fmt.Fprintf(writer, "[%s] %s\n", timestamp, message)
    return err
}

// 3. Thread-safe log appender
type LogFile struct {
    mu   sync.Mutex
    file *os.File
}

func (lf *LogFile) Append(message string) error {
    lf.mu.Lock()
    defer lf.mu.Unlock()

    timestamp := time.Now().Format("2006-01-02 15:04:05")
    _, err := fmt.Fprintf(lf.file, "[%s] %s\n", timestamp, message)
    return err
}
// run
package main

import (
    "fmt"
    "os"
)

func main() {
    file, err := os.OpenFile("log.txt",
        os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    defer file.Close()

    if _, err := file.WriteString("New log entry\n"); err != nil {
        fmt.Println("Error:", err)
    }
}
Concurrent Appending:
// run
package main

import (
    "fmt"
    "os"
    "sync"
    "time"
)

func main() {
    filename := "concurrent.log"

    // Multiple goroutines appending concurrently
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()

            for j := 0; j < 5; j++ {
                f, err := os.OpenFile(filename, os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0644)
                if err != nil {
                    fmt.Println("Error:", err)
                    return
                }

                timestamp := time.Now().Format(time.RFC3339Nano)
                fmt.Fprintf(f, "[%s] Goroutine %d, Message %d\n", timestamp, id, j)
                f.Close()

                time.Sleep(10 * time.Millisecond)
            }
        }(i)
    }

    wg.Wait()
    fmt.Println("All goroutines finished appending")
}
File Information
Check if File Exists
// run
package main

import (
    "fmt"
    "os"
)

func fileExists(filename string) bool {
    _, err := os.Stat(filename)
    return err == nil
}

func main() {
    if fileExists("example.txt") {
        fmt.Println("File exists")
    } else {
        fmt.Println("File does not exist")
    }
}
Proper Error Checking:
// run
package main

import (
    "fmt"
    "os"
)

func checkFile(filename string) {
    info, err := os.Stat(filename)
    if err == nil {
        fmt.Printf("File exists: %s (%d bytes)\n", filename, info.Size())
    } else if os.IsNotExist(err) {
        fmt.Printf("File does not exist: %s\n", filename)
    } else {
        fmt.Printf("Error checking file: %v\n", err)
    }
}

func main() {
    checkFile("example.txt")
    checkFile("/nonexistent/path/file.txt")
    checkFile("/root/protected.txt") // May get permission error
}
Get File Info
// run
package main

import (
    "fmt"
    "os"
)

func main() {
    info, err := os.Stat("example.txt")
    if err != nil {
        fmt.Println("Error:", err)
        return
    }

    fmt.Println("Name:", info.Name())
    fmt.Println("Size:", info.Size(), "bytes")
    fmt.Println("Mode:", info.Mode())
    fmt.Println("Modified:", info.ModTime())
    fmt.Println("Is Directory:", info.IsDir())
}
Complete File Metadata:
// run
package main

import (
    "fmt"
    "os"
    "time"
)

func printFileInfo(filename string) {
    info, err := os.Stat(filename)
    if err != nil {
        fmt.Println("Error:", err)
        return
    }

    fmt.Printf("File: %s\n", filename)
    fmt.Printf("  Name:     %s\n", info.Name())
    fmt.Printf("  Size:     %d bytes\n", info.Size())
    fmt.Printf("  Mode:     %s\n", info.Mode())
    fmt.Printf("  Modified: %s\n", info.ModTime().Format(time.RFC3339))
    fmt.Printf("  IsDir:    %v\n", info.IsDir())

    // Permissions breakdown
    mode := info.Mode()
    fmt.Printf("  Permissions:\n")
    fmt.Printf("    Owner:  %s\n", mode.Perm()&0700)
    fmt.Printf("    Group:  %s\n", mode.Perm()&0070)
    fmt.Printf("    Others: %s\n", mode.Perm()&0007)

    // File type
    if mode.IsRegular() {
        fmt.Println("  Type: Regular file")
    } else if mode.IsDir() {
        fmt.Println("  Type: Directory")
    } else if mode&os.ModeSymlink != 0 {
        fmt.Println("  Type: Symbolic link")
    }
}

func main() {
    printFileInfo("example.txt")
}
Directory Operations
Create Directory
// run
package main

import (
    "fmt"
    "os"
)

func main() {
    // Create single directory
    err := os.Mkdir("testdir", 0755)
    if err != nil {
        fmt.Println("Error:", err)
        return
    }

    // Create nested directories
    err = os.MkdirAll("path/to/nested/dir", 0755)
    if err != nil {
        fmt.Println("Error:", err)
        return
    }

    fmt.Println("Directories created")
}
Error Handling:
// run
package main

import (
    "fmt"
    "os"
)

func createDirectory(path string) error {
    // Check if already exists
    if _, err := os.Stat(path); err == nil {
        return fmt.Errorf("directory already exists: %s", path)
    } else if !os.IsNotExist(err) {
        return fmt.Errorf("error checking directory: %w", err)
    }

    // Create directory
    if err := os.MkdirAll(path, 0755); err != nil {
        return fmt.Errorf("failed to create directory: %w", err)
    }

    fmt.Printf("Created directory: %s\n", path)
    return nil
}

func main() {
    if err := createDirectory("new/nested/path"); err != nil {
        fmt.Println("Error:", err)
    }
}
List Directory Contents
// run
package main

import (
    "fmt"
    "os"
)

func main() {
    entries, err := os.ReadDir(".")
    if err != nil {
        fmt.Println("Error:", err)
        return
    }

    for _, entry := range entries {
        if entry.IsDir() {
            fmt.Printf("[DIR]  %s\n", entry.Name())
        } else {
            fmt.Printf("[FILE] %s\n", entry.Name())
        }
    }
}
Detailed Directory Listing:
// run
package main

import (
    "fmt"
    "os"
)

func listDirectory(path string) error {
    entries, err := os.ReadDir(path)
    if err != nil {
        return err
    }

    fmt.Printf("Contents of %s:\n", path)
    fmt.Println("Type  | Size      | Modified            | Name")
    fmt.Println("------|-----------|---------------------|-----")

    for _, entry := range entries {
        info, err := entry.Info()
        if err != nil {
            continue
        }

        typeStr := "FILE"
        if entry.IsDir() {
            typeStr = "DIR " // assign, don't redeclare, or the outer value never changes
        }

        size := info.Size()
        modified := info.ModTime().Format("2006-01-02 15:04:05")

        fmt.Printf("%-5s | %9d | %19s | %s\n",
            typeStr, size, modified, entry.Name())
    }

    return nil
}

func main() {
    if err := listDirectory("."); err != nil {
        fmt.Println("Error:", err)
    }
}
Walk Directory Tree
// run
package main

import (
    "fmt"
    "os"
    "path/filepath"
)

func main() {
    err := filepath.Walk(".", func(path string, info os.FileInfo, err error) error {
        if err != nil {
            return err
        }

        if info.IsDir() {
            fmt.Printf("DIR:  %s\n", path)
        } else {
            fmt.Printf("FILE: %s (%d bytes)\n", path, info.Size())
        }

        return nil
    })

    if err != nil {
        fmt.Println("Error:", err)
    }
}
Advanced Directory Walking:
// run
package main

import (
    "fmt"
    "os"
    "path/filepath"
)

type DirStats struct {
    FileCount int
    DirCount  int
    TotalSize int64
}

func walkWithFilter(root string, extensions []string) (*DirStats, error) {
    stats := &DirStats{}

    err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
        if err != nil {
            return err
        }

        if info.IsDir() {
            stats.DirCount++
            return nil
        }

        // Filter by extension
        if len(extensions) > 0 {
            ext := filepath.Ext(path)
            matched := false
            for _, e := range extensions {
                if ext == e {
                    matched = true
                    break
                }
            }
            if !matched {
                return nil
            }
        }

        stats.FileCount++
        stats.TotalSize += info.Size()
        fmt.Printf("Found: %s (%d bytes)\n", path, info.Size())

        return nil
    })

    return stats, err
}

func main() {
    // Find all .txt and .md files
    stats, err := walkWithFilter(".", []string{".txt", ".md"})
    if err != nil {
        fmt.Println("Error:", err)
        return
    }

    fmt.Printf("\nStatistics:\n")
    fmt.Printf("  Directories: %d\n", stats.DirCount)
    fmt.Printf("  Files:       %d\n", stats.FileCount)
    fmt.Printf("  Total Size:  %d bytes\n", stats.TotalSize)
}
WalkDir (Faster):
// run
package main

import (
    "fmt"
    "io/fs"
    "path/filepath"
)

func main() {
    // WalkDir is faster than Walk for large directories
    err := filepath.WalkDir(".", func(path string, d fs.DirEntry, err error) error {
        if err != nil {
            return err
        }

        if d.IsDir() {
            fmt.Printf("DIR:  %s\n", path)
        } else {
            info, _ := d.Info()
            fmt.Printf("FILE: %s (%d bytes)\n", path, info.Size())
        }

        return nil
    })

    if err != nil {
        fmt.Println("Error:", err)
    }
}
File Operations
Copy File
// run
package main

import (
    "fmt"
    "io"
    "os"
)

func copyFile(src, dst string) error {
    sourceFile, err := os.Open(src)
    if err != nil {
        return err
    }
    defer sourceFile.Close()

    destFile, err := os.Create(dst)
    if err != nil {
        return err
    }
    defer destFile.Close()

    _, err = io.Copy(destFile, sourceFile)
    return err
}

func main() {
    err := copyFile("source.txt", "destination.txt")
    if err != nil {
        fmt.Println("Error:", err)
        return
    }

    fmt.Println("File copied successfully")
}
Copy with Permissions:
// run
package main

import (
    "fmt"
    "io"
    "os"
)

func copyFileWithPermissions(src, dst string) error {
    // Get source file info
    sourceInfo, err := os.Stat(src)
    if err != nil {
        return err
    }

    // Open source
    sourceFile, err := os.Open(src)
    if err != nil {
        return err
    }
    defer sourceFile.Close()

    // Create destination
    destFile, err := os.Create(dst)
    if err != nil {
        return err
    }
    defer destFile.Close()

    // Copy content
    _, err = io.Copy(destFile, sourceFile)
    if err != nil {
        return err
    }

    // Copy permissions
    return os.Chmod(dst, sourceInfo.Mode())
}

func main() {
    if err := copyFileWithPermissions("source.txt", "dest.txt"); err != nil {
        fmt.Println("Error:", err)
    } else {
        fmt.Println("File copied with permissions")
    }
}
Move/Rename File
// run
package main

import (
    "fmt"
    "os"
)

func main() {
    err := os.Rename("oldname.txt", "newname.txt")
    if err != nil {
        fmt.Println("Error:", err)
        return
    }

    fmt.Println("File renamed")
}
Cross-Device Move:
// run
package main

import (
    "fmt"
    "io"
    "os"
)

func moveFile(src, dst string) error {
    // Try rename first (fast, atomic)
    err := os.Rename(src, dst)
    if err == nil {
        return nil
    }

    // If rename fails (cross-device), copy then delete
    if err := copyFile(src, dst); err != nil {
        return err
    }

    return os.Remove(src)
}

func copyFile(src, dst string) error {
    sourceFile, err := os.Open(src)
    if err != nil {
        return err
    }
    defer sourceFile.Close()

    destFile, err := os.Create(dst)
    if err != nil {
        return err
    }
    defer destFile.Close()

    _, err = io.Copy(destFile, sourceFile)
    return err
}

func main() {
    if err := moveFile("source.txt", "/tmp/destination.txt"); err != nil {
        fmt.Println("Error:", err)
    } else {
        fmt.Println("File moved successfully")
    }
}
Delete File
// run
package main

import (
    "fmt"
    "os"
)

func main() {
    err := os.Remove("file.txt")
    if err != nil {
        fmt.Println("Error:", err)
        return
    }

    fmt.Println("File deleted")
}
Safe Delete with Confirmation:
// run
package main

import (
    "fmt"
    "os"
)

func deleteFile(filename string) error {
    // Check if file exists
    info, err := os.Stat(filename)
    if os.IsNotExist(err) {
        return fmt.Errorf("file does not exist: %s", filename)
    } else if err != nil {
        return err
    }

    // Confirm before deleting large files
    if info.Size() > 10*1024*1024 { // > 10MB
        fmt.Printf("Warning: Deleting large file (%d bytes): %s\n", info.Size(), filename)
    }

    // Delete
    if err := os.Remove(filename); err != nil {
        return fmt.Errorf("failed to delete: %w", err)
    }

    fmt.Printf("Deleted: %s\n", filename)
    return nil
}

func main() {
    if err := deleteFile("example.txt"); err != nil {
        fmt.Println("Error:", err)
    }
}
Delete Directory
// run
package main

import (
    "fmt"
    "os"
)

func main() {
    // Remove empty directory
    err := os.Remove("emptydir")
    if err != nil {
        fmt.Println("Error:", err)
    }

    // Remove directory and all contents
    err = os.RemoveAll("dirwithfiles")
    if err != nil {
        fmt.Println("Error:", err)
    }

    fmt.Println("Directories removed")
}
Working with Paths
One of Go's greatest strengths is cross-platform path handling. The filepath package automatically handles differences between Windows, Linux, and macOS.
Real-World Example:
Imagine you're building a photo organizing application that needs to:
- Store photos in the user's Documents folder
- Create thumbnails in a subfolder
- Work seamlessly on Windows, Mac, and Linux
Without proper path handling, your app would work on your development machine but crash on users' computers with different operating systems.
The Cross-Platform Path Problem:
Windows: C:\Users\John\Documents\file.txt
Linux/Mac: /home/john/documents/file.txt
Separator: Windows uses \ Linux/Mac uses /
Volumes: Windows has C:\ Linux/Mac has /
Case: Windows is case-insensitive, Unix is case-sensitive
💡 Key Takeaway: Never hardcode path separators or assume specific path structures. Always use filepath.Join() - it's like having a universal translator for file paths across all operating systems.
⚠️ Important: Path handling is one of the most common sources of cross-platform bugs. A path that works perfectly on your Linux development machine might be completely invalid on a user's Windows machine.
Why Never Use String Concatenation for Paths:
// ❌ WRONG: Breaks on Windows
path := dir + "/" + "file.txt"         // On Windows: dir/file.txt (invalid)

// ✅ CORRECT: Works everywhere
path := filepath.Join(dir, "file.txt") // Windows: dir\file.txt
                                       // Linux:   dir/file.txt
When to Use Which:
- Use filepath.Join() when combining path components
- Use filepath.Abs() when you need the full path from a relative one
- Use filepath.Base() to extract just the filename from a full path
- Use filepath.Dir() to get just the directory from a full path
filepath Package Philosophy:
- Separator Agnostic - Use filepath.Join, never hardcode / or \
- Clean Paths - Automatically removes . and .. references
- Normalized Slashes - Converts / to \ on Windows automatically
- Volume Aware - Handles C:\ on Windows, / on Unix
Key Filepath Functions:
| Function | Purpose | Example |
|---|---|---|
| filepath.Join | Concatenate path elements | filepath.Join("a", "b", "c") → a/b/c |
| filepath.Dir | Get directory part | filepath.Dir("/a/b/c.txt") → /a/b |
| filepath.Base | Get filename part | filepath.Base("/a/b/c.txt") → c.txt |
| filepath.Ext | Get file extension | filepath.Ext("file.txt") → .txt |
| filepath.Split | Split dir and file | Returns ("/a/b/", "c.txt") |
| filepath.Abs | Get absolute path | Resolves relative → absolute |
| filepath.Rel | Get relative path | Makes absolute → relative |
| filepath.Clean | Normalize path | Removes .., ., extra / |
| filepath.Match | Pattern matching | Like shell globs |
| filepath.Walk | Recursively walk tree | Visit every file/dir |
| filepath.WalkDir | Fast dir walking | Faster than Walk |
Join Paths
The fundamental cross-platform path operation - always use this instead of string concatenation.
// run
package main

import (
    "fmt"
    "path/filepath"
)

func main() {
    path := filepath.Join("dir", "subdir", "file.txt")
    fmt.Println(path) // dir/subdir/file.txt on Unix
                      // dir\subdir\file.txt on Windows
}
Complex Path Operations:
// run
package main

import (
    "fmt"
    "path/filepath"
)

func main() {
    // Join multiple components
    path := filepath.Join("/", "home", "user", "documents", "file.txt")
    fmt.Println("Joined:", path)

    // Clean path (remove . and ..)
    dirty := filepath.Join("/home/user", "..", "admin", ".", "config")
    clean := filepath.Clean(dirty)
    fmt.Println("Cleaned:", clean) // /home/admin/config

    // Absolute path
    rel := "docs/file.txt"
    abs, _ := filepath.Abs(rel)
    fmt.Println("Absolute:", abs)

    // Relative path
    from := "/home/user/projects"
    to := "/home/user/documents/file.txt"
    relPath, _ := filepath.Rel(from, to)
    fmt.Println("Relative:", relPath) // ../documents/file.txt
}
Extract Path Components
// run
package main

import (
    "fmt"
    "path/filepath"
)

func main() {
    path := "/home/user/documents/file.txt"

    fmt.Println("Dir: ", filepath.Dir(path))  // /home/user/documents
    fmt.Println("Base:", filepath.Base(path)) // file.txt
    fmt.Println("Ext: ", filepath.Ext(path))  // .txt

    // Split into dir and file
    dir, file := filepath.Split(path)
    fmt.Printf("Split: dir='%s', file='%s'\n", dir, file)
}
Advanced Path Parsing:
// run
package main

import (
    "fmt"
    "path/filepath"
    "strings"
)

type PathInfo struct {
    Full      string
    Dir       string
    Base      string
    Extension string
    Name      string // Base without extension
}

func parsePath(path string) PathInfo {
    abs, _ := filepath.Abs(path)
    ext := filepath.Ext(path)
    base := filepath.Base(path)
    name := strings.TrimSuffix(base, ext)

    return PathInfo{
        Full:      abs,
        Dir:       filepath.Dir(abs),
        Base:      base,
        Extension: ext,
        Name:      name,
    }
}

func main() {
    info := parsePath("docs/chapter1.md")

    fmt.Printf("Full:      %s\n", info.Full)
    fmt.Printf("Dir:       %s\n", info.Dir)
    fmt.Printf("Base:      %s\n", info.Base)
    fmt.Printf("Extension: %s\n", info.Extension)
    fmt.Printf("Name:      %s\n", info.Name)
}
Absolute Path
// run
package main

import (
    "fmt"
    "path/filepath"
)

func main() {
    abs, err := filepath.Abs("example.txt")
    if err != nil {
        fmt.Println("Error:", err)
        return
    }

    fmt.Println("Absolute path:", abs)
}
Working Directory Context:
// run
package main

import (
    "fmt"
    "os"
    "path/filepath"
)

func main() {
    // Get current working directory
    cwd, _ := os.Getwd()
    fmt.Println("Working directory:", cwd)

    // Resolve relative paths
    paths := []string{"file.txt", "../parent.txt", "./subdir/file.txt"}

    for _, p := range paths {
        abs, _ := filepath.Abs(p)
        fmt.Printf("  %s → %s\n", p, abs)
    }
}
CSV Files
Read CSV
// run
package main

import (
    "encoding/csv"
    "fmt"
    "os"
)

func main() {
    file, err := os.Open("data.csv")
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    defer file.Close()

    reader := csv.NewReader(file)
    records, err := reader.ReadAll()
    if err != nil {
        fmt.Println("Error:", err)
        return
    }

    for i, record := range records {
        fmt.Printf("Row %d: %v\n", i, record)
    }
}
Streaming CSV Reading:
// run
package main

import (
    "encoding/csv"
    "fmt"
    "io"
    "os"
)

func main() {
    file, err := os.Open("large.csv")
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    defer file.Close()

    reader := csv.NewReader(file)

    // Read header
    header, err := reader.Read()
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    fmt.Println("Header:", header)

    // Stream records one by one
    lineNum := 1
    for {
        record, err := reader.Read()
        if err == io.EOF {
            break // End of file
        }
        if err != nil {
            fmt.Println("Error:", err) // Report real errors instead of treating them as EOF
            return
        }

        lineNum++
        fmt.Printf("Line %d: %v\n", lineNum, record)
    }
}
Write CSV
// run
package main

import (
    "encoding/csv"
    "os"
)

func main() {
    file, err := os.Create("output.csv")
    if err != nil {
        panic(err)
    }
    defer file.Close()

    writer := csv.NewWriter(file)
    defer writer.Flush()

    records := [][]string{
        {"Name", "Age", "City"},
        {"Alice", "25", "NYC"},
        {"Bob", "30", "LA"},
    }

    for _, record := range records {
        writer.Write(record)
    }
}
Custom CSV Format:
// run
package main

import (
    "encoding/csv"
    "fmt"
    "os"
)

func main() {
    file, err := os.Create("custom.csv")
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    defer file.Close()

    writer := csv.NewWriter(file)
    defer writer.Flush()

    // Configure custom delimiter
    writer.Comma = ';' // Use semicolon instead of comma

    // Write with custom format
    records := [][]string{
        {"Name", "Email", "Phone"},
        {"John Doe", "john@example.com", "+1-555-1234"},
        {"Jane Smith", "jane@example.com", "+1-555-5678"},
    }

    for _, record := range records {
        if err := writer.Write(record); err != nil {
            fmt.Println("Error writing:", err)
        }
    }

    fmt.Println("Custom CSV written successfully")
}
Temporary Files
// run
package main

import (
    "fmt"
    "os"
)

func main() {
    // Create temp file
    tmpfile, err := os.CreateTemp("", "example-*.txt")
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    defer os.Remove(tmpfile.Name()) // Clean up

    fmt.Println("Temp file:", tmpfile.Name())

    // Write to temp file
    if _, err := tmpfile.Write([]byte("temporary data")); err != nil {
        fmt.Println("Error:", err)
    }

    tmpfile.Close()
}
Temporary Directory:
// run
package main

import (
    "fmt"
    "os"
    "path/filepath"
)

func main() {
    // Create temp directory
    tmpDir, err := os.MkdirTemp("", "myapp-*")
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    defer os.RemoveAll(tmpDir) // Clean up

    fmt.Println("Temp directory:", tmpDir)

    // Create files in temp directory
    file1 := filepath.Join(tmpDir, "file1.txt")
    file2 := filepath.Join(tmpDir, "file2.txt")

    os.WriteFile(file1, []byte("content1"), 0644)
    os.WriteFile(file2, []byte("content2"), 0644)

    fmt.Println("Created files in temp directory")
}
Common Patterns and Production-Ready Solutions
Pattern 1: Atomic Configuration Updates
Problem: Application crashes while writing config file → corrupted configuration
Solution: Write to temp file first, then rename atomically
// run
package main

import (
    "encoding/json"
    "fmt"
    "os"
)

type Config struct {
    Database string `json:"database"`
    Port     int    `json:"port"`
    Debug    bool   `json:"debug"`
}

func saveConfigAtomic(filename string, config Config) error {
    // Step 1: Marshal to JSON
    data, err := json.MarshalIndent(config, "", "  ")
    if err != nil {
        return fmt.Errorf("marshal config: %w", err)
    }

    // Step 2: Write to temp file in same directory
    tmpFile := filename + ".tmp"
    if err := os.WriteFile(tmpFile, data, 0644); err != nil {
        return fmt.Errorf("write temp: %w", err)
    }

    // Step 3: Atomic rename
    if err := os.Rename(tmpFile, filename); err != nil {
        os.Remove(tmpFile) // Cleanup on error
        return fmt.Errorf("atomic rename: %w", err)
    }

    return nil
}

func main() {
    config := Config{
        Database: "postgres://localhost/mydb",
        Port:     5432,
        Debug:    true,
    }

    // Save atomically
    if err := saveConfigAtomic("app.json", config); err != nil {
        fmt.Printf("Failed to save config: %v\n", err)
        return
    }

    fmt.Println("Config saved atomically!")
}
Pattern 2: Safe File Processing in Loops
Problem: Processing many files in a loop without proper cleanup → resource leaks
Solution: Extract file processing to separate function
// run
package main

import (
    "bufio"
    "fmt"
    "os"
    "path/filepath"
)

// ✅ CORRECT: Separate function ensures proper cleanup
func processFile(filename string) error {
    file, err := os.Open(filename)
    if err != nil {
        return fmt.Errorf("open %s: %w", filename, err)
    }
    defer file.Close() // Guaranteed cleanup

    scanner := bufio.NewScanner(file)
    lineCount := 0

    for scanner.Scan() {
        lineCount++
        // Process line...
    }

    if err := scanner.Err(); err != nil {
        return fmt.Errorf("scan %s: %w", filename, err)
    }

    fmt.Printf("Processed %s: %d lines\n", filename, lineCount)
    return nil
}

func main() {
    // Process all .txt files in directory
    files, err := filepath.Glob("*.txt")
    if err != nil {
        fmt.Printf("Error: %v\n", err)
        return
    }

    // Each file gets its own stack frame with proper defer
    for _, file := range files {
        if err := processFile(file); err != nil {
            fmt.Printf("Error processing %s: %v\n", file, err)
        }
    }

    fmt.Println("All files processed successfully!")
}
Pattern 3: High-Performance Log Writer
Problem: Frequent log writes without buffering → slow performance
Solution: Buffered writer with periodic flushing
// run
package main

import (
    "bufio"
    "fmt"
    "os"
    "sync"
    "time"
)

type SafeLogger struct {
    mu     sync.Mutex
    writer *bufio.Writer
    file   *os.File
}

func NewSafeLogger(filename string) (*SafeLogger, error) {
    file, err := os.OpenFile(filename, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
    if err != nil {
        return nil, err
    }

    return &SafeLogger{
        writer: bufio.NewWriter(file),
        file:   file,
    }, nil
}

func (l *SafeLogger) Write(message string) error {
    l.mu.Lock()
    defer l.mu.Unlock()

    timestamp := time.Now().Format("2006-01-02 15:04:05")
    logLine := fmt.Sprintf("[%s] %s\n", timestamp, message)

    if _, err := l.writer.WriteString(logLine); err != nil {
        return err
    }

    // Flush every write for critical logs
    return l.writer.Flush()
}

func (l *SafeLogger) Close() error {
    l.mu.Lock()
    defer l.mu.Unlock()

    // Flush remaining data before closing
    l.writer.Flush()
    return l.file.Close()
}

func main() {
    logger, err := NewSafeLogger("app.log")
    if err != nil {
        fmt.Printf("Failed to create logger: %v\n", err)
        return
    }
    defer logger.Close()

    // Simulate concurrent logging
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            msg := fmt.Sprintf("Log message %d from goroutine", id)
            if err := logger.Write(msg); err != nil {
                fmt.Printf("Failed to write log %d: %v\n", id, err)
            }
        }(i)
    }

    wg.Wait()
    fmt.Println("Logging demonstration complete!")
}
Best Practices
1. Check Every Error
Why it matters: File operations are the #1 source of runtime errors in production code.
// ❌ WRONG: Ignoring errors leads to silent failures
data, _ := os.ReadFile("config.json")
json.Unmarshal(data, &config) // Might unmarshal empty data!

// ✅ CORRECT: Handle each error
data, err := os.ReadFile("config.json")
if err != nil {
    return fmt.Errorf("failed to read config: %w", err)
}

if err := json.Unmarshal(data, &config); err != nil {
    return fmt.Errorf("failed to parse config: %w", err)
}
Common file errors:
- os.ErrNotExist - File doesn't exist
- os.ErrPermission - Permission denied
- os.ErrExist - File already exists
- io.EOF - End of file reached
- fs.ErrClosed - File already closed
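These sentinel errors are matched with errors.Is, which sees through the *PathError wrapping that os functions apply; a minimal sketch (missing-config.json is an assumed nonexistent file):
// run
package main

import (
    "errors"
    "fmt"
    "io/fs"
    "os"
)

func main() {
    _, err := os.ReadFile("missing-config.json") // assumed missing file
    switch {
    case err == nil:
        fmt.Println("read succeeded")
    case errors.Is(err, fs.ErrNotExist): // also matches os.ErrNotExist
        fmt.Println("config file does not exist")
    case errors.Is(err, fs.ErrPermission):
        fmt.Println("permission denied")
    default:
        fmt.Println("unexpected error:", err)
    }
}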
2. Use Buffered I/O for Performance
Impact: 10-50x performance improvement for many small reads/writes.
// ❌ SLOW: Unbuffered reads
file, _ := os.Open("large.log")
buf := make([]byte, 1)
for {
    if _, err := file.Read(buf); err != nil { // One system call per byte!
        break
    }
}

// ✅ FAST: Buffered line reading
file, _ := os.Open("large.log")
scanner := bufio.NewScanner(file)
for scanner.Scan() {
    line := scanner.Text() // No system call, reads from buffer
}
3. Use filepath.Join for Cross-Platform Paths
// ❌ WRONG: Hardcoded separator breaks on Windows
path := "/home/user/" + filename

// ✅ CORRECT: Works on Windows, Linux, macOS
path := filepath.Join("/home/user", filename)
4. Check File Existence Before Operations
// ✅ Proper existence check
if _, err := os.Stat(filename); err == nil {
    fmt.Println("File exists")
} else if os.IsNotExist(err) {
    fmt.Println("File does not exist")
} else {
    fmt.Println("Error checking file:", err)
}
5. Use os.CreateTemp for Temporary Files
// ✅ CORRECT: Automatic unique naming, proper cleanup
tmpfile, err := os.CreateTemp("", "prefix-*.txt")
if err != nil {
    log.Fatal(err)
}
defer os.Remove(tmpfile.Name()) // Clean up
defer tmpfile.Close()

// Write to tmpfile...
6. Set Proper Permissions
// Files readable by owner and group
os.WriteFile("data.txt", data, 0644)

// Sensitive files readable only by owner
os.WriteFile("secret.key", key, 0600)

// Executable scripts
os.WriteFile("script.sh", script, 0755)

// Private directories
os.MkdirAll("private", 0700)
7. Use Atomic Writes for Critical Files
// ✅ Atomic config file update
func atomicWriteConfig(filename string, data []byte) error {
    // 1. Write to temp file
    tmpfile, err := os.CreateTemp(filepath.Dir(filename), ".tmp-*")
    if err != nil {
        return err
    }
    defer os.Remove(tmpfile.Name()) // Clean up on error

    if _, err := tmpfile.Write(data); err != nil {
        return err
    }

    if err := tmpfile.Sync(); err != nil { // Force to disk
        return err
    }

    if err := tmpfile.Close(); err != nil {
        return err
    }

    // 2. Atomic rename
    return os.Rename(tmpfile.Name(), filename)
}
Common Pitfalls
These are the mistakes that even experienced Go developers make. Understanding them will save you hours of debugging time.
1. Forgetting to Close Files
Problem: File descriptor leak leads to "too many open files" errors.
💡 Key Takeaway: A file descriptor leak is like slowly filling a bathtub without a drain. At first you don't notice, but eventually it overflows and crashes your entire application.
1// ❌ WRONG: File never closed
2for _, filename := range files {
3 file, _ := os.Open(filename)
4 process(file) // Missing file.Close()!
5}
6// After ~1024 iterations: opens start failing with "too many open files"
7
8// ⚠️ STILL WRONG: defer in a loop delays every Close until the function returns
9for _, filename := range files {
10 file, err := os.Open(filename)
11 if err != nil {
12 continue
13 }
14 defer file.Close() // Wait! This is also wrong in a loop!
15 process(file)
16}
17
18// ✅✅ BEST: Use function to ensure proper cleanup
19for _, filename := range files {
20 processFile(filename)
21}
22
23func processFile(filename string) error {
24 file, err := os.Open(filename)
25 if err != nil {
26 return err
27 }
28 defer file.Close() // Now this works correctly!
29 return process(file)
30}
⚠️ Important: The "defer in loops" pitfall is particularly tricky because it works fine for small numbers of files. Your tests pass, but in production with thousands of files, suddenly your application crashes with "too many open files."
Why defer in loops is dangerous:
- defer executes when the function returns, not at the end of each loop iteration
- All file.Close() calls accumulate until the loop finishes
- Solution: extract the per-file work into a separate function
When to Use defer vs Explicit Close:
- Use defer when a function opens a file and returns soon after
- Use explicit file closing when processing many files in a loop (see the sketch below)
- Use defer for cleanup in error-prone code paths, where an early return would otherwise skip Close
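If extracting a function feels heavyweight, closing explicitly inside the loop also works. A minimal sketch, reusing the files slice and process function from above:
1// Explicit Close per iteration - no defers pile up
2for _, filename := range files {
3    file, err := os.Open(filename)
4    if err != nil {
5        fmt.Printf("open %s: %v\n", filename, err)
6        continue
7    }
8    err = process(file)
9    file.Close() // closed before the next iteration starts
10    if err != nil {
11        fmt.Printf("process %s: %v\n", filename, err)
12    }
13}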
2. Not Checking Errors
Problem: Silent failures, corrupted data, security vulnerabilities.
1// ❌ WRONG: What if ReadFile fails?
2data, _ := os.ReadFile("config.json")
3var config Config
4json.Unmarshal(data, &config) // Unmarshals empty data into empty config!
5useConfig(config) // Silent failure with wrong config
6
7// ✅ CORRECT: Every error checked
8data, err := os.ReadFile("config.json")
9if err != nil {
10 log.Fatal("Failed to read config:", err)
11}
12
13if err := json.Unmarshal(data, &config); err != nil {
14 log.Fatal("Failed to parse config:", err)
15}
16
17useConfig(config) // Safe!
3. Using Hardcoded Path Separators
Problem: Code breaks on different operating systems.
1// ❌ WRONG: Only works on Unix
2path := "/home/user/config/" + filename
3
4// ❌ WRONG: Only works on Windows
5path := "C:\\Users\\John\\config\\" + filename
6
7// ✅ CORRECT: Join uses the separator of the current OS
8path := filepath.Join("/home/user/config", filename)
9// Unix: /home/user/config/file.txt
10// Windows: \home\user\config\file.txt (prefer a configurable base dir over a hardcoded one)
4. Not Using Buffered I/O
Problem: 10-50x slower performance for line-by-line processing.
1// ❌ SLOW: Reading line by line without buffering
2file, _ := os.Open("large.log")
3for {
4    var line string
5    _, err := fmt.Fscanln(file, &line) // Unbuffered: every call issues its own small reads
6 if err != nil {
7 break
8 }
9 process(line)
10}
11
12// ✅ FAST: Buffered scanning
13file, _ := os.Open("large.log")
14scanner := bufio.NewScanner(file)
15for scanner.Scan() {
16 process(scanner.Text()) // Reads from buffer
17}
5. Forgetting to Flush bufio.Writer
Problem: Data loss! Buffered data never written to disk.
1// ❌ WRONG: Data lost!
2func writeLog(filename, message string) {
3 file, _ := os.Create(filename)
4 defer file.Close()
5
6 writer := bufio.NewWriter(file)
7 writer.WriteString(message)
8 // Missing Flush()! Buffer is in memory, never written to disk
9}
10
11// ✅ CORRECT: Always flush
12func writeLog(filename, message string) {
13 file, _ := os.Create(filename)
14 defer file.Close()
15
16 writer := bufio.NewWriter(file)
17 defer writer.Flush() // Guaranteed flush before file.Close()
18 writer.WriteString(message)
19}
6. Race Conditions with Concurrent Access
Problem: Data corruption, lost writes, inconsistent state.
1// ❌ WRONG: Multiple goroutines writing to same file
2var file *os.File
3
4func init() {
5 file, _ = os.Create("log.txt")
6}
7
8func logMessage(msg string) {
9 file.WriteString(msg + "\n") // Race condition!
10}
11
12// Called from multiple goroutines:
13go logMessage("Message 1")
14go logMessage("Message 2")
15// Result: Interleaved writes, corrupted file!
16
17// ✅ CORRECT: Use mutex for synchronization
18type SafeLogger struct {
19 mu sync.Mutex
20 file *os.File
21}
22
23func (sl *SafeLogger) Log(msg string) {
24 sl.mu.Lock()
25 defer sl.mu.Unlock()
26 sl.file.WriteString(msg + "\n")
27}
7. Reading Entire Large Files into Memory
Problem: Out-of-memory crashes.
1// ❌ WRONG: Loading 10GB log file
2data, _ := os.ReadFile("huge-log.txt") // OOM!
3
4// ✅ CORRECT: Stream processing
5file, _ := os.Open("huge-log.txt")
6defer file.Close()
7
8scanner := bufio.NewScanner(file)
9for scanner.Scan() {
10 processLine(scanner.Text()) // Only one line in memory
11}
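One hedge for real-world logs: bufio.Scanner refuses tokens longer than 64KB by default (bufio.MaxScanTokenSize) and stops with bufio.ErrTooLong. If your lines can be longer, give the scanner a bigger buffer:
1// Allow lines up to 1MB instead of the default 64KB cap
2scanner := bufio.NewScanner(file)
3scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024)
4for scanner.Scan() {
5    processLine(scanner.Text())
6}
7if err := scanner.Err(); err != nil {
8    log.Fatal(err)
9}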
8. Not Using Atomic Writes for Critical Files
Problem: Corrupted config files on crash/power loss.
1// ❌ WRONG: Partial write on crash
2func saveConfig(config Config) error {
3 data, _ := json.Marshal(config)
4 return os.WriteFile("config.json", data, 0644)
5 // If crash happens during write: corrupted config!
6}
7
8// ✅ CORRECT: Atomic write pattern
9func saveConfig(config Config) error {
10 data, _ := json.Marshal(config)
11
12    // Write to temp file in the same directory (rename only works within one filesystem)
13    tmp, _ := os.CreateTemp(".", "config-*.json")
14 defer os.Remove(tmp.Name())
15
16 tmp.Write(data)
17 tmp.Sync() // Force to disk
18 tmp.Close()
19
20 // Atomic rename
21 return os.Rename(tmp.Name(), "config.json")
22}
Advanced File Operations
File Watching
Monitor file system changes using the fsnotify package for real-time updates.
Installation:
1go get github.com/fsnotify/fsnotify
Basic File Watcher:
1// run
2package main
3
4import (
5 "fmt"
6 "log"
7
8 "github.com/fsnotify/fsnotify"
9)
10
11func main() {
12 watcher, err := fsnotify.NewWatcher()
13 if err != nil {
14 log.Fatal(err)
15 }
16 defer watcher.Close()
17
18 // Watch for events
19 go func() {
20 for {
21 select {
22 case event, ok := <-watcher.Events:
23 if !ok {
24 return
25 }
26 fmt.Printf("Event: %s - %s\n", event.Op, event.Name)
27
28 // Handle different event types
29 switch {
30 case event.Op&fsnotify.Write == fsnotify.Write:
31 fmt.Println("Modified:", event.Name)
32 case event.Op&fsnotify.Create == fsnotify.Create:
33 fmt.Println("Created:", event.Name)
34 case event.Op&fsnotify.Remove == fsnotify.Remove:
35 fmt.Println("Removed:", event.Name)
36 case event.Op&fsnotify.Rename == fsnotify.Rename:
37 fmt.Println("Renamed:", event.Name)
38 }
39
40 case err, ok := <-watcher.Errors:
41 if !ok {
42 return
43 }
44 log.Println("Error:", err)
45 }
46 }
47 }()
48
49 // Add paths to watch
50 err = watcher.Add("/path/to/watch")
51 if err != nil {
52 log.Fatal(err)
53 }
54
55 // Block forever
56 <-make(chan struct{})
57}
Production Configuration Watcher:
1// run
2package main
3
4import (
5 "encoding/json"
6 "fmt"
7 "log"
8 "os"
9 "sync"
10 "time"
11
12 "github.com/fsnotify/fsnotify"
13)
14
15type Config struct {
16 Host string `json:"host"`
17 Port int `json:"port"`
18 Timeout int `json:"timeout"`
19}
20
21type ConfigWatcher struct {
22 filename string
23 config Config
24 mu sync.RWMutex
25 watcher *fsnotify.Watcher
26 onChange func(Config)
27}
28
29func NewConfigWatcher(filename string, onChange func(Config)) (*ConfigWatcher, error) {
30 cw := &ConfigWatcher{
31 filename: filename,
32 onChange: onChange,
33 }
34
35 // Load initial config
36 if err := cw.reload(); err != nil {
37 return nil, fmt.Errorf("load initial config: %w", err)
38 }
39
40 // Set up file watcher
41 watcher, err := fsnotify.NewWatcher()
42 if err != nil {
43 return nil, fmt.Errorf("create watcher: %w", err)
44 }
45 cw.watcher = watcher
46
47 if err := watcher.Add(filename); err != nil {
48 return nil, fmt.Errorf("watch file: %w", err)
49 }
50
51 // Start watching
52 go cw.watch()
53
54 return cw, nil
55}
56
57func (cw *ConfigWatcher) reload() error {
58 data, err := os.ReadFile(cw.filename)
59 if err != nil {
60 return err
61 }
62
63 var config Config
64 if err := json.Unmarshal(data, &config); err != nil {
65 return err
66 }
67
68 cw.mu.Lock()
69 cw.config = config
70 cw.mu.Unlock()
71
72 return nil
73}
74
75func (cw *ConfigWatcher) watch() {
76 // Debounce rapid changes
77 debounce := time.NewTimer(0)
78 <-debounce.C
79
80 for {
81 select {
82 case event, ok := <-cw.watcher.Events:
83 if !ok {
84 return
85 }
86
87 if event.Op&fsnotify.Write == fsnotify.Write {
88 // Reset debounce timer
89 debounce.Reset(100 * time.Millisecond)
90 }
91
92 case <-debounce.C:
93 // Reload config
94 if err := cw.reload(); err != nil {
95 log.Printf("Failed to reload config: %v", err)
96 continue
97 }
98
99 // Notify callback
100 if cw.onChange != nil {
101 cw.mu.RLock()
102 config := cw.config
103 cw.mu.RUnlock()
104 cw.onChange(config)
105 }
106
107 case err, ok := <-cw.watcher.Errors:
108 if !ok {
109 return
110 }
111 log.Println("Watcher error:", err)
112 }
113 }
114}
115
116func (cw *ConfigWatcher) Get() Config {
117 cw.mu.RLock()
118 defer cw.mu.RUnlock()
119 return cw.config
120}
121
122func (cw *ConfigWatcher) Close() error {
123 return cw.watcher.Close()
124}
125
126// Usage
127func main() {
128 watcher, err := NewConfigWatcher("config.json", func(cfg Config) {
129 fmt.Printf("Config reloaded: %+v\n", cfg)
130 // Apply new configuration...
131 })
132 if err != nil {
133 log.Fatal(err)
134 }
135 defer watcher.Close()
136
137 // Use config
138 config := watcher.Get()
139 fmt.Printf("Current config: %+v\n", config)
140
141 // Keep running
142 select {}
143}
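One known fsnotify gotcha: editors and atomic writers (including the rename pattern from this article) replace the file rather than modify it, which silently drops it from the watch list. A hedged sketch of an extra case you could add inside watch(), reusing cw and debounce from above:
1// Re-add the file to the watch list after it is replaced via remove/rename
2if event.Op&(fsnotify.Remove|fsnotify.Rename) != 0 {
3    time.Sleep(50 * time.Millisecond) // give the writer a moment to recreate the file
4    if err := cw.watcher.Add(cw.filename); err != nil {
5        log.Printf("re-watch %s: %v", cw.filename, err)
6    }
7    debounce.Reset(100 * time.Millisecond)
8}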
Directory Watcher with Recursive Monitoring:
1// run
2package main
3
4import (
5 "fmt"
6 "log"
7 "os"
8 "path/filepath"
9
10 "github.com/fsnotify/fsnotify"
11)
12
13type DirectoryWatcher struct {
14 watcher *fsnotify.Watcher
15 root string
16}
17
18func NewDirectoryWatcher(root string) (*DirectoryWatcher, error) {
19 watcher, err := fsnotify.NewWatcher()
20 if err != nil {
21 return nil, err
22 }
23
24 dw := &DirectoryWatcher{
25 watcher: watcher,
26 root: root,
27 }
28
29 // Watch root and all subdirectories
30 if err := dw.watchRecursive(root); err != nil {
31 return nil, err
32 }
33
34 return dw, nil
35}
36
37func (dw *DirectoryWatcher) watchRecursive(path string) error {
38 return filepath.Walk(path, func(walkPath string, info os.FileInfo, err error) error {
39 if err != nil {
40 return err
41 }
42
43 if info.IsDir() {
44 if err := dw.watcher.Add(walkPath); err != nil {
45 return fmt.Errorf("watch directory %s: %w", walkPath, err)
46 }
47 }
48
49 return nil
50 })
51}
52
53func (dw *DirectoryWatcher) Watch() {
54 for {
55 select {
56 case event, ok := <-dw.watcher.Events:
57 if !ok {
58 return
59 }
60
61 fmt.Printf("Event: %s on %s\n", event.Op, event.Name)
62
63 // If new directory created, watch it too
64 if event.Op&fsnotify.Create == fsnotify.Create {
65 info, err := os.Stat(event.Name)
66 if err == nil && info.IsDir() {
67 dw.watchRecursive(event.Name)
68 fmt.Println("Now watching new directory:", event.Name)
69 }
70 }
71
72 case err, ok := <-dw.watcher.Errors:
73 if !ok {
74 return
75 }
76 log.Println("Error:", err)
77 }
78 }
79}
80
81func (dw *DirectoryWatcher) Close() error {
82 return dw.watcher.Close()
83}
Atomic File Writes
Comprehensive atomic write patterns to prevent data corruption.
Production-Ready Atomic Writer:
1package fileutil
2
3import (
4 "crypto/sha256"
5 "fmt"
6 "io"
7 "os"
8 "path/filepath"
9)
10
11// AtomicWriteFile writes data to filename atomically with checksums
12func AtomicWriteFile(filename string, data []byte, perm os.FileMode) error {
13 dir := filepath.Dir(filename)
14
15 // Create temp file in same directory
16 tmp, err := os.CreateTemp(dir, ".tmp-*")
17 if err != nil {
18 return fmt.Errorf("create temp file: %w", err)
19 }
20 tmpName := tmp.Name()
21
22 // Clean up on error
23 defer func() {
24 if tmp != nil {
25 tmp.Close()
26 os.Remove(tmpName)
27 }
28 }()
29
30 // Write data
31 if _, err := tmp.Write(data); err != nil {
32 return fmt.Errorf("write temp file: %w", err)
33 }
34
35 // Sync to disk
36 if err := tmp.Sync(); err != nil {
37 return fmt.Errorf("sync temp file: %w", err)
38 }
39
40 // Close before rename
41 if err := tmp.Close(); err != nil {
42 return fmt.Errorf("close temp file: %w", err)
43 }
44
45 // Set permissions
46 if err := os.Chmod(tmpName, perm); err != nil {
47 return fmt.Errorf("chmod temp file: %w", err)
48 }
49
50 // Atomic rename
51 if err := os.Rename(tmpName, filename); err != nil {
52 return fmt.Errorf("rename temp file: %w", err)
53 }
54
55 // Success - don't remove temp file
56 tmp = nil
57 return nil
58}
59
60// AtomicWriteFileWithChecksum writes with integrity verification
61func AtomicWriteFileWithChecksum(filename string, data []byte, perm os.FileMode) error {
62 // Calculate checksum
63 checksum := sha256.Sum256(data)
64 checksumFile := filename + ".sha256"
65
66 // Write data atomically
67 if err := AtomicWriteFile(filename, data, perm); err != nil {
68 return err
69 }
70
71 // Write checksum atomically
72 checksumData := []byte(fmt.Sprintf("%x %s\n", checksum, filepath.Base(filename)))
73 if err := AtomicWriteFile(checksumFile, checksumData, 0644); err != nil {
74 os.Remove(filename) // Rollback
75 return fmt.Errorf("write checksum: %w", err)
76 }
77
78 return nil
79}
80
81// VerifyChecksum verifies file integrity
82func VerifyChecksum(filename string) (bool, error) {
83 checksumFile := filename + ".sha256"
84
85 // Read checksum file
86 checksumData, err := os.ReadFile(checksumFile)
87 if err != nil {
88 return false, fmt.Errorf("read checksum file: %w", err)
89 }
90
91 var expectedChecksum string
92 fmt.Sscanf(string(checksumData), "%s", &expectedChecksum)
93
94 // Calculate actual checksum
95 data, err := os.ReadFile(filename)
96 if err != nil {
97 return false, fmt.Errorf("read file: %w", err)
98 }
99
100 actualChecksum := fmt.Sprintf("%x", sha256.Sum256(data))
101
102 return actualChecksum == expectedChecksum, nil
103}
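A quick usage sketch of the two helpers above (state.json and payload are placeholder names, and the log import is assumed):
1// Write with a checksum, then verify on the next startup
2if err := fileutil.AtomicWriteFileWithChecksum("state.json", payload, 0644); err != nil {
3    log.Fatal(err)
4}
5ok, err := fileutil.VerifyChecksum("state.json")
6if err != nil {
7    log.Fatal(err)
8}
9if !ok {
10    log.Fatal("state.json failed checksum verification")
11}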
Atomic Copy Operation:
1// AtomicCopy copies a file atomically with verification
2func AtomicCopy(src, dst string) error {
3 // Open source
4 srcFile, err := os.Open(src)
5 if err != nil {
6 return fmt.Errorf("open source: %w", err)
7 }
8 defer srcFile.Close()
9
10 // Get source file info
11 srcInfo, err := srcFile.Stat()
12 if err != nil {
13 return fmt.Errorf("stat source: %w", err)
14 }
15
16 // Create temp file
17 tmpFile, err := os.CreateTemp(filepath.Dir(dst), ".tmp-*")
18 if err != nil {
19 return fmt.Errorf("create temp file: %w", err)
20 }
21 tmpName := tmpFile.Name()
22 defer os.Remove(tmpName)
23
24 // Copy data with hash verification
25 srcHash := sha256.New()
26 dstHash := sha256.New()
27
28 srcReader := io.TeeReader(srcFile, srcHash)
29 dstWriter := io.MultiWriter(tmpFile, dstHash)
30
31 if _, err := io.Copy(dstWriter, srcReader); err != nil {
32 tmpFile.Close()
33 return fmt.Errorf("copy data: %w", err)
34 }
35
36    // Verify hashes match (both hashes see the same in-memory bytes, so this guards the copy pipeline; re-read the temp file if you must verify what actually hit disk)
37 if string(srcHash.Sum(nil)) != string(dstHash.Sum(nil)) {
38 tmpFile.Close()
39 return fmt.Errorf("checksum mismatch")
40 }
41
42 // Sync and close
43 if err := tmpFile.Sync(); err != nil {
44 tmpFile.Close()
45 return fmt.Errorf("sync: %w", err)
46 }
47
48 if err := tmpFile.Close(); err != nil {
49 return fmt.Errorf("close: %w", err)
50 }
51
52 // Preserve permissions
53 if err := os.Chmod(tmpName, srcInfo.Mode()); err != nil {
54 return fmt.Errorf("chmod: %w", err)
55 }
56
57 // Atomic rename
58 if err := os.Rename(tmpName, dst); err != nil {
59 return fmt.Errorf("rename: %w", err)
60 }
61
62 return nil
63}
File Locking
Prevent concurrent access conflicts with file locking mechanisms.
Advisory Locking (Unix-only - syscall.Flock does not exist on Windows; see the cross-platform version below):
1// run
2package main
3
4import (
5 "fmt"
6 "os"
7 "syscall"
8 "time"
9)
10
11// FileLock represents an advisory file lock
12type FileLock struct {
13 file *os.File
14}
15
16// NewFileLock creates a new file lock
17func NewFileLock(filename string) (*FileLock, error) {
18 file, err := os.OpenFile(filename, os.O_CREATE|os.O_RDWR, 0666)
19 if err != nil {
20 return nil, fmt.Errorf("open lock file: %w", err)
21 }
22
23 return &FileLock{file: file}, nil
24}
25
26// Lock acquires an exclusive lock
27func (fl *FileLock) Lock() error {
28 return syscall.Flock(int(fl.file.Fd()), syscall.LOCK_EX)
29}
30
31// TryLock tries to acquire lock without blocking
32func (fl *FileLock) TryLock() error {
33 return syscall.Flock(int(fl.file.Fd()), syscall.LOCK_EX|syscall.LOCK_NB)
34}
35
36// Unlock releases the lock
37func (fl *FileLock) Unlock() error {
38 return syscall.Flock(int(fl.file.Fd()), syscall.LOCK_UN)
39}
40
41// Close releases the lock and closes the file
42func (fl *FileLock) Close() error {
43 fl.Unlock()
44 return fl.file.Close()
45}
46
47// Example usage
48func main() {
49 lock, err := NewFileLock("/tmp/my.lock")
50 if err != nil {
51 panic(err)
52 }
53 defer lock.Close()
54
55 fmt.Println("Attempting to acquire lock...")
56 if err := lock.Lock(); err != nil {
57 panic(err)
58 }
59 fmt.Println("Lock acquired!")
60
61 // Do critical work
62 time.Sleep(5 * time.Second)
63
64 fmt.Println("Releasing lock...")
65}
Cross-Platform File Lock:
1go get github.com/gofrs/flock
1// run
2package main
3
4import (
5 "fmt"
6 "time"
7
8 "github.com/gofrs/flock"
9)
10
11func main() {
12 lock := flock.New("/tmp/my.lock")
13
14 // Try to acquire lock
15 locked, err := lock.TryLock()
16 if err != nil {
17 panic(err)
18 }
19
20 if !locked {
21 fmt.Println("Could not acquire lock - another process holds it")
22 return
23 }
24
25 fmt.Println("Lock acquired!")
26 defer lock.Unlock()
27
28 // Critical section
29 time.Sleep(5 * time.Second)
30}
Shared Lock Manager:
1package lockmanager
2
3import (
4 "context"
5 "fmt"
6 "sync"
7 "time"
8
9 "github.com/gofrs/flock"
10)
11
12type LockManager struct {
13 locks map[string]*flock.Flock
14 mu sync.Mutex
15}
16
17func NewLockManager() *LockManager {
18 return &LockManager{
19 locks: make(map[string]*flock.Flock),
20 }
21}
22
23 // AcquireLock gets or creates a file lock. Note: it holds the manager mutex while waiting, so concurrent acquisitions serialize; fine for low contention.
24func (lm *LockManager) AcquireLock(filename string, timeout time.Duration) error {
25 lm.mu.Lock()
26 defer lm.mu.Unlock()
27
28 // Get or create lock
29 lock, exists := lm.locks[filename]
30 if !exists {
31 lock = flock.New(filename + ".lock")
32 lm.locks[filename] = lock
33 }
34
35 // Try to acquire with timeout
36 ctx, cancel := context.WithTimeout(context.Background(), timeout)
37 defer cancel()
38
39 locked, err := lock.TryLockContext(ctx, 100*time.Millisecond)
40 if err != nil {
41 return fmt.Errorf("acquire lock: %w", err)
42 }
43
44 if !locked {
45 return fmt.Errorf("timeout acquiring lock for %s", filename)
46 }
47
48 return nil
49}
50
51// ReleaseLock releases a file lock
52func (lm *LockManager) ReleaseLock(filename string) error {
53 lm.mu.Lock()
54 defer lm.mu.Unlock()
55
56 lock, exists := lm.locks[filename]
57 if !exists {
58 return fmt.Errorf("no lock found for %s", filename)
59 }
60
61 return lock.Unlock()
62}
63
64// WithLock executes function with lock held
65func (lm *LockManager) WithLock(filename string, timeout time.Duration, fn func() error) error {
66 if err := lm.AcquireLock(filename, timeout); err != nil {
67 return err
68 }
69 defer lm.ReleaseLock(filename)
70
71 return fn()
72}
Database-Style Lock File with PID Tracking:
1// run
2package main
3
4import (
5 "encoding/json"
6 "fmt"
7 "os"
8 "syscall"
9 "time"
10)
11
12type LockInfo struct {
13 PID int `json:"pid"`
14 Hostname string `json:"hostname"`
15 StartTime time.Time `json:"start_time"`
16}
17
18func AcquireProcessLock(lockFile string) error {
19 // Check if lock file exists
20 if _, err := os.Stat(lockFile); err == nil {
21 // Read existing lock
22 data, err := os.ReadFile(lockFile)
23 if err != nil {
24 return fmt.Errorf("read lock file: %w", err)
25 }
26
27 var existing LockInfo
28 if err := json.Unmarshal(data, &existing); err != nil {
29 return fmt.Errorf("parse lock file: %w", err)
30 }
31
32 // Check if process still exists
33 if processExists(existing.PID) {
34 return fmt.Errorf("lock held by PID %d on %s since %s",
35 existing.PID, existing.Hostname, existing.StartTime)
36 }
37
38 // Stale lock - remove it
39 os.Remove(lockFile)
40 }
41
42 // Create new lock
43 hostname, _ := os.Hostname()
44 lock := LockInfo{
45 PID: os.Getpid(),
46 Hostname: hostname,
47 StartTime: time.Now(),
48 }
49
50 data, err := json.MarshalIndent(lock, "", " ")
51 if err != nil {
52 return fmt.Errorf("marshal lock info: %w", err)
53 }
54
55    // Write atomically, reusing the AtomicWriteFile helper from the Atomic File Writes section
56 return AtomicWriteFile(lockFile, data, 0644)
57}
58
59func ReleaseLock(lockFile string) error {
60 return os.Remove(lockFile)
61}
62
63func processExists(pid int) bool {
64 process, err := os.FindProcess(pid)
65 if err != nil {
66 return false
67 }
68
69 // Send signal 0 to check if process exists
70 err = process.Signal(syscall.Signal(0))
71 return err == nil
72}
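Note that the check-then-write sequence above still has a small race window: two processes can both see no lock file and both claim the lock. A sketch that closes the window with os.O_EXCL, which makes creation itself atomic (acquireExclusive is a hypothetical helper, not part of the code above):
1// O_EXCL guarantees exactly one process can create the lock file
2func acquireExclusive(lockFile string, data []byte) error {
3    f, err := os.OpenFile(lockFile, os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0644)
4    if err != nil {
5        return fmt.Errorf("lock already held: %w", err)
6    }
7    defer f.Close()
8    _, err = f.Write(data)
9    return err
10}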
Practice Exercises
Exercise 1: Word Counter
Learning Objectives: Practice file reading, string manipulation, and basic text processing using buffered I/O for efficient file handling.
Difficulty: ⭐⭐ Beginner
Time Estimate: 15 minutes
Build a word counter program that reads a text file and counts the total number of words it contains. This exercise teaches you to work with bufio.Scanner for word-based tokenization and process files efficiently without loading everything into memory.
Real-World Context: Word counters are used in text analysis tools, document processing systems, and content management applications to analyze text statistics, validate content requirements, or calculate reading metrics.
Solution
1// run
2package main
3
4import (
5 "bufio"
6 "fmt"
7 "os"
8 "strings"
9)
10
11func countWords(filename string) (int, error) {
12 file, err := os.Open(filename)
13 if err != nil {
14 return 0, err
15 }
16 defer file.Close()
17
18 scanner := bufio.NewScanner(file)
19 scanner.Split(bufio.ScanWords)
20
21 count := 0
22 for scanner.Scan() {
23 count++
24 }
25
26 return count, scanner.Err()
27}
28
29func main() {
30 count, err := countWords("example.txt")
31 if err != nil {
32 fmt.Println("Error:", err)
33 return
34 }
35
36 fmt.Printf("Word count: %d\n", count)
37}
Exercise 2: Log File Analyzer
Learning Objectives: Master pattern matching in text files, implement counting logic for different log levels, and handle real-world log file processing.
Difficulty: ⭐⭐ Beginner
Time Estimate: 20 minutes
Create a log file analyzer that reads through log entries and categorizes them by severity level. This exercise helps you understand how to process structured text files and implement filtering logic for system monitoring applications.
Real-World Context: Log analyzers are essential tools for system administrators and developers to monitor application health, identify issues, and track system performance. Production systems generate thousands of log entries daily that need automatic categorization and analysis.
Solution
1// run
2package main
3
4import (
5 "bufio"
6 "fmt"
7 "os"
8 "strings"
9)
10
11func analyzeLogs(filename string) (map[string]int, error) {
12 file, err := os.Open(filename)
13 if err != nil {
14 return nil, err
15 }
16 defer file.Close()
17
18 counts := map[string]int{
19 "ERROR": 0,
20 "WARNING": 0,
21 "INFO": 0,
22 }
23
24 scanner := bufio.NewScanner(file)
25 for scanner.Scan() {
26 line := scanner.Text()
27
28 for level := range counts {
29 if strings.Contains(line, level) {
30 counts[level]++
31 }
32 }
33 }
34
35 return counts, scanner.Err()
36}
37
38func main() {
39 counts, err := analyzeLogs("app.log")
40 if err != nil {
41 fmt.Println("Error:", err)
42 return
43 }
44
45 fmt.Println("Log Analysis:")
46 fmt.Printf(" ERROR: %d\n", counts["ERROR"])
47 fmt.Printf(" WARNING: %d\n", counts["WARNING"])
48 fmt.Printf(" INFO: %d\n", counts["INFO"])
49}
Exercise 3: Directory Size Calculator
Learning Objectives: Implement recursive directory traversal, work with file metadata, and format human-readable file sizes for practical applications.
Difficulty: ⭐⭐⭐ Intermediate
Time Estimate: 25 minutes
Build a directory size calculator that recursively walks through a directory tree and calculates the total size of all files. This exercise teaches you to work with the filepath.Walk function, handle file metadata, and implement helper functions for data formatting.
Real-World Context: Directory size calculators are used in disk space management tools, backup systems, and file organization utilities. System administrators use these tools to identify space-consuming directories, plan storage allocations, and monitor disk usage patterns across servers and workstations.
Solution
1// run
2package main
3
4import (
5 "fmt"
6 "os"
7 "path/filepath"
8)
9
10func dirSize(path string) (int64, error) {
11 var size int64
12
13 err := filepath.Walk(path, func(_ string, info os.FileInfo, err error) error {
14 if err != nil {
15 return err
16 }
17 if !info.IsDir() {
18 size += info.Size()
19 }
20 return nil
21 })
22
23 return size, err
24}
25
26func formatBytes(bytes int64) string {
27 const unit = 1024
28 if bytes < unit {
29 return fmt.Sprintf("%d B", bytes)
30 }
31
32 div, exp := int64(unit), 0
33 for n := bytes / unit; n >= unit; n /= unit {
34 div *= unit
35 exp++
36 }
37
38 return fmt.Sprintf("%.1f %cB", float64(bytes)/float64(div), "KMGTPE"[exp])
39}
40
41func main() {
42 size, err := dirSize(".")
43 if err != nil {
44 fmt.Println("Error:", err)
45 return
46 }
47
48 fmt.Printf("Directory size: %s\n", formatBytes(size))
49}
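Since Go 1.16, filepath.WalkDir is the lighter-weight alternative: it hands you fs.DirEntry values and avoids an extra Stat call per file. The same calculation as a sketch (add "io/fs" to the imports):
1// dirSize using filepath.WalkDir (Go 1.16+)
2func dirSizeWalkDir(path string) (int64, error) {
3    var size int64
4    err := filepath.WalkDir(path, func(_ string, d fs.DirEntry, err error) error {
5        if err != nil {
6            return err
7        }
8        if d.IsDir() {
9            return nil
10        }
11        info, err := d.Info() // cheaper than a separate os.Stat
12        if err != nil {
13            return err
14        }
15        size += info.Size()
16        return nil
17    })
18    return size, err
19}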
Exercise 4: File Backup Utility
Learning Objectives: Practice file copying operations, implement timestamp-based naming conventions, and understand atomic file operations for backup systems.
Difficulty: ⭐⭐⭐ Intermediate
Time Estimate: 20 minutes
Develop a file backup utility that creates timestamped copies of important files. This exercise teaches you to implement safe file copying, work with time formatting, and handle file path manipulation for creating backup versions of critical data.
Real-World Context: Backup utilities are fundamental components of data protection strategies. Every application that handles important user data needs automatic backup mechanisms to prevent data loss. This pattern is used in configuration management, document versioning, and disaster recovery systems across all industries.
Solution
1// run
2package main
3
4import (
5 "fmt"
6 "io"
7 "os"
8 "path/filepath"
9 "time"
10)
11
12func backupFile(filename string) error {
13 // Read source file
14 sourceFile, err := os.Open(filename)
15 if err != nil {
16 return err
17 }
18 defer sourceFile.Close()
19
20 // Create backup filename with timestamp
21 timestamp := time.Now().Format("20060102-150405")
22 ext := filepath.Ext(filename)
23 base := filename[:len(filename)-len(ext)]
24 backupName := fmt.Sprintf("%s.backup.%s%s", base, timestamp, ext)
25
26 // Create backup file
27 backupFile, err := os.Create(backupName)
28 if err != nil {
29 return err
30 }
31 defer backupFile.Close()
32
33 // Copy content
34 _, err = io.Copy(backupFile, sourceFile)
35 if err != nil {
36 return err
37 }
38
39 fmt.Printf("Backup created: %s\n", backupName)
40 return nil
41}
42
43func main() {
44 err := backupFile("important.txt")
45 if err != nil {
46 fmt.Println("Error:", err)
47 }
48}
Exercise 5: Configuration File Manager
Learning Objectives: Master configuration file parsing, implement key-value data structures, and handle file read/write operations for application settings management.
Difficulty: ⭐⭐⭐ Intermediate
Time Estimate: 30 minutes
Create a configuration file manager that can read, parse, modify, and save application settings in a simple text format. This exercise teaches you to work with structured text files, implement parsing logic, and handle configuration data persistence for real-world applications.
Real-World Context: Configuration managers are essential in virtually every software application. From web servers to desktop applications, all software needs to store and retrieve settings like database connections, API keys, feature flags, and user preferences. Understanding configuration management is crucial for building maintainable and configurable software systems.
Solution
1// run
2package main
3
4import (
5 "bufio"
6 "fmt"
7 "os"
8 "strings"
9)
10
11type Config map[string]string
12
13func LoadConfig(filename string) (Config, error) {
14 config := make(Config)
15
16 file, err := os.Open(filename)
17 if err != nil {
18 if os.IsNotExist(err) {
19 return config, nil // Return empty config if file doesn't exist
20 }
21 return nil, err
22 }
23 defer file.Close()
24
25 scanner := bufio.NewScanner(file)
26 for scanner.Scan() {
27 line := strings.TrimSpace(scanner.Text())
28
29 // Skip empty lines and comments
30 if line == "" || strings.HasPrefix(line, "#") {
31 continue
32 }
33
34 parts := strings.SplitN(line, "=", 2)
35 if len(parts) == 2 {
36 key := strings.TrimSpace(parts[0])
37 value := strings.TrimSpace(parts[1])
38 config[key] = value
39 }
40 }
41
42 return config, scanner.Err()
43}
44
45func SaveConfig(filename string, config Config) error {
46 file, err := os.Create(filename)
47 if err != nil {
48 return err
49 }
50 defer file.Close()
51
52 writer := bufio.NewWriter(file)
53 defer writer.Flush()
54
55 for key, value := range config {
56 fmt.Fprintf(writer, "%s=%s\n", key, value)
57 }
58
59 return nil
60}
61
62func main() {
63 // Load config
64 config, err := LoadConfig("app.conf")
65 if err != nil {
66 fmt.Println("Error loading:", err)
67 return
68 }
69
70 // Set some values
71 config["host"] = "localhost"
72 config["port"] = "8080"
73 config["debug"] = "true"
74
75 // Save config
76 err = SaveConfig("app.conf", config)
77 if err != nil {
78 fmt.Println("Error saving:", err)
79 return
80 }
81
82 fmt.Println("Configuration saved successfully")
83}
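One refinement worth knowing: Go maps iterate in random order, so SaveConfig writes the keys in a different order on every run. Sorting the keys first (add "sort" to the imports) keeps saved files stable and diff-friendly:
1// Deterministic output: drop-in replacement for the loop in SaveConfig
2keys := make([]string, 0, len(config))
3for k := range config {
4    keys = append(keys, k)
5}
6sort.Strings(keys)
7for _, k := range keys {
8    fmt.Fprintf(writer, "%s=%s\n", k, config[k])
9}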
Exercise 6: Find and Replace in Files
Learning Objectives: Implement in-place file modification, master string replacement operations, and handle file backup strategies for safe text processing.
Difficulty: ⭐⭐⭐⭐ Advanced
Time Estimate: 35 minutes
Build a find-and-replace utility that can search for specific text patterns in files and replace all occurrences. This exercise teaches you to implement safe file modification workflows, handle in-place text processing, and create robust error handling for file manipulation operations.
Real-World Context: Find-and-replace tools are fundamental in text editors, code refactoring utilities, and content management systems. Developers use these tools to update variable names, modify configuration values across multiple files, and perform bulk text transformations. This pattern is essential for code maintenance, content updates, and automated refactoring workflows.
Solution
1// run
2package main
3
4import (
5 "bufio"
6 "fmt"
7 "os"
8 "strings"
9)
10
11func findAndReplace(filename, find, replace string) error {
12 // Read file
13 file, err := os.Open(filename)
14 if err != nil {
15 return err
16 }
17
18 var lines []string
19 scanner := bufio.NewScanner(file)
20 replacements := 0
21
22 for scanner.Scan() {
23 line := scanner.Text()
24 newLine := strings.ReplaceAll(line, find, replace)
25 if newLine != line {
26 replacements++
27 }
28 lines = append(lines, newLine)
29 }
30 file.Close()
31
32 if err := scanner.Err(); err != nil {
33 return err
34 }
35
36 // Write back
37 outFile, err := os.Create(filename)
38 if err != nil {
39 return err
40 }
41 defer outFile.Close()
42
43 writer := bufio.NewWriter(outFile)
44 for _, line := range lines {
45 fmt.Fprintln(writer, line)
46 }
47 writer.Flush()
48
49 fmt.Printf("Made %d replacements in %s\n", replacements, filename)
50 return nil
51}
52
53func main() {
54 err := findAndReplace("document.txt", "old_term", "new_term")
55 if err != nil {
56 fmt.Println("Error:", err)
57 }
58}
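Caveat: the solution truncates the original with os.Create before writing, so a crash between Create and Flush loses the document. For important files, combine it with the atomic-write pattern from earlier. A sketch of the write-back half, assuming lines is already built (add "path/filepath" to the imports):
1// Write to a temp file in the same directory, then rename over the original
2tmp, err := os.CreateTemp(filepath.Dir(filename), ".tmp-*")
3if err != nil {
4    return err
5}
6defer os.Remove(tmp.Name()) // harmless no-op after a successful rename
7w := bufio.NewWriter(tmp)
8for _, line := range lines {
9    fmt.Fprintln(w, line)
10}
11if err := w.Flush(); err != nil {
12    return err
13}
14if err := tmp.Close(); err != nil {
15    return err
16}
17return os.Rename(tmp.Name(), filename)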
Summary
Core File Operations
Reading Files:
| Method | Best For | Memory Usage | Performance |
|---|---|---|---|
| os.ReadFile | Small files | Full file in memory | Fast |
| os.Open + io.ReadAll | When you need control | Full file in memory | Fast |
| bufio.Scanner | Large files, line-by-line | 4KB buffer only | Very Fast |
| bufio.Reader | Binary files, custom parsing | Configurable buffer | Very Fast |
Writing Files:
| Method | Best For | Performance | Risk |
|---|---|---|---|
| os.WriteFile | Small files, one-shot writes | Good | Overwrites, not atomic |
| os.Create + Write | Manual control | Good | Need explicit flush |
| bufio.Writer | Many small writes | Excellent | Must flush! |
| Temp + Rename | Critical config files | Good | Atomic, safe |
Key Takeaways
- Always Close Files - use defer file.Close() immediately after opening
- Check Every Error - file operations are the #1 source of runtime errors
- Use Buffered I/O - 10-50x performance improvement for many small reads/writes
- Use the filepath Package - cross-platform path handling; never hardcode / or \
- Flush Buffers - bufio.Writer must be flushed or data is lost
- Atomic Writes - write to a temp file, then rename, for critical data
- Proper Permissions - 0644 for files, 0755 for dirs, 0600 for secrets
- Watch for defer in Loops - extract per-file work into a function for proper cleanup
Performance Guidelines
| File Size | Method | System Calls | Memory |
|---|---|---|---|
| < 1 MB | os.ReadFile | ~2 | Full file |
| 1 MB - 100 MB | bufio.Scanner | ~200 | 4KB |
| > 100 MB | bufio.Scanner | ~25,000 | 4KB |
| Streaming | bufio.Reader/Writer | Depends | Configurable |
Common Patterns Cheat Sheet
1// Read small file
2data, err := os.ReadFile("file.txt")
3
4// Read large file line-by-line
5file, _ := os.Open("large.log")
6defer file.Close()
7scanner := bufio.NewScanner(file)
8for scanner.Scan() {
9 process(scanner.Text())
10}
11
12// Write small file
13os.WriteFile("output.txt", data, 0644)
14
15// Write many lines efficiently
16file, _ := os.Create("output.txt")
17defer file.Close()
18writer := bufio.NewWriter(file)
19defer writer.Flush()
20for _, line := range lines {
21 fmt.Fprintln(writer, line)
22}
23
24// Append to log file
25file, _ := os.OpenFile("log.txt", os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0644)
26defer file.Close()
27fmt.Fprintln(file, logMessage)
28
29// Atomic config update
30tmp, _ := os.CreateTemp(".", "config-*") // same directory/filesystem as the target
31tmp.Write(data)
32tmp.Sync()
33tmp.Close()
34os.Rename(tmp.Name(), "config.json")
35
36// Check file exists
37if _, err := os.Stat("file.txt"); err == nil {
38 // File exists
39} else if os.IsNotExist(err) {
40 // File doesn't exist
41}
42
43// Cross-platform path
44path := filepath.Join(dir, subdir, filename)
45
46// Walk directory tree
47filepath.Walk(".", func(path string, info fs.FileInfo, err error) error {
48 if err != nil {
49 return err
50 }
51 fmt.Println(path)
52 return nil
53})
What You've Learned
After mastering this article, you can:
✅ Read and write files efficiently in Go
✅ Choose the right I/O method for your use case
✅ Process large files without running out of memory
✅ Write cross-platform file handling code
✅ Avoid common pitfalls and resource leaks
✅ Implement atomic file updates for critical data
✅ Work with directories, paths, and file metadata
✅ Build production-ready file processing applications
Master file I/O and you'll be able to build powerful file-processing applications - from simple config readers to high-performance log processors!