Go Sync Package
The sync package in Go provides synchronization primitives for coordinating goroutines and protecting shared resources. While channels are Go's preferred method for communication between goroutines, the sync package provides essential tools for synchronization, mutual exclusion, and preventing race conditions. Understanding the sync package is crucial for writing safe concurrent programs that can handle shared resources and coordinate goroutine execution. This comprehensive guide will teach you everything you need to know about Go's sync package.
Understanding the Sync Package
What Is the Sync Package?
The sync package provides synchronization primitives for concurrent programming in Go. It includes:
- Mutexes - Mutual exclusion locks for protecting shared resources
- Wait Groups - Coordination mechanisms for waiting for goroutines to complete
- Atomic Operations - Lock-free operations for simple data types
- Condition Variables - Signaling mechanisms for goroutine coordination (see the sync.Cond sketch below)
- Once - Ensuring functions are executed only once
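Of these, sync.Once is the simplest: it guarantees that a function runs exactly once, no matter how many goroutines ask for it. Here is a minimal sketch (the loadConfig function is just an illustrative stand-in for any one-time initialization):
package main

import (
    "fmt"
    "sync"
)

func main() {
    var once sync.Once
    var wg sync.WaitGroup

    // loadConfig stands in for any expensive one-time initialization.
    loadConfig := func() {
        fmt.Println("config loaded")
    }

    // Ten goroutines race to initialize, but once.Do runs loadConfig only once.
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            once.Do(loadConfig)
        }()
    }
    wg.Wait()
    // Output: config loaded (printed exactly once)
}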
When to Use Sync Package vs Channels
Use Sync Package For:
- Protecting shared resources - Mutexes for critical sections
- Waiting for goroutines - WaitGroups for coordination
- Simple synchronization - Atomic operations for counters
- One-time initialization - Once for initialization patterns
Use Channels For:
- Communication - Passing data between goroutines
- Coordination - Signaling and synchronization
- Pipeline patterns - Data flow between goroutines
- Event handling - Asynchronous event processing
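Condition variables (sync.Cond, mentioned in the first list) pair a lock with Wait, Signal, and Broadcast so goroutines can sleep until some condition becomes true. A minimal sketch, assuming a simple ready flag guarded by the Cond's lock:
package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    var mu sync.Mutex
    cond := sync.NewCond(&mu)
    ready := false

    // Waiter: sleeps until ready becomes true.
    go func() {
        cond.L.Lock()
        for !ready { // always re-check the condition in a loop
            cond.Wait()
        }
        fmt.Println("condition met, proceeding")
        cond.L.Unlock()
    }()

    // Signaler: flips the flag and wakes the waiter.
    time.Sleep(50 * time.Millisecond)
    cond.L.Lock()
    ready = true
    cond.L.Unlock()
    cond.Signal()

    time.Sleep(50 * time.Millisecond) // give the waiter time to print
    // Output: condition met, proceeding
}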
Mutexes and Locks
Basic Mutex Usage
The sync.Mutex Type
Mutexes provide mutual exclusion for protecting shared resources.
Lock and Unlock Operations
Using the Lock() and Unlock() methods for critical sections.
package main
import (
"fmt"
"sync"
"time"
)
func main() {
// Basic mutex usage examples
fmt.Println("Basic mutex usage examples:")
// Shared resource without mutex (race condition)
var counter int
var wg sync.WaitGroup
// Without mutex - race condition
incrementWithoutMutex := func() {
for i := 0; i < 1000; i++ {
counter++
}
wg.Done()
}
wg.Add(2)
go incrementWithoutMutex()
go incrementWithoutMutex()
wg.Wait()
fmt.Printf("Counter without mutex: %d\n", counter)
// Output: Counter without mutex: often less than 2000 (increments are lost to the data race)
// Shared resource with mutex
var counterWithMutex int
var mutex sync.Mutex
incrementWithMutex := func() {
for i := 0; i < 1000; i++ {
mutex.Lock()
counterWithMutex++
mutex.Unlock()
}
wg.Done()
}
wg.Add(2)
go incrementWithMutex()
go incrementWithMutex()
wg.Wait()
fmt.Printf("Counter with mutex: %d\n", counterWithMutex)
// Output: Counter with mutex: 2000
// Mutex with defer for automatic unlocking
var counterWithDefer int
var mutexWithDefer sync.Mutex
incrementWithDefer := func() {
for i := 0; i < 1000; i++ {
// defer runs when the enclosing function returns, not at the end of each
// loop iteration, so deferring the Unlock directly inside the loop would
// deadlock on the second Lock; wrapping the critical section in a small
// anonymous function keeps the defer-based unlock safe.
func() {
mutexWithDefer.Lock()
defer mutexWithDefer.Unlock()
counterWithDefer++
}()
}
wg.Done()
}
wg.Add(2)
go incrementWithDefer()
go incrementWithDefer()
wg.Wait()
fmt.Printf("Counter with defer: %d\n", counterWithDefer)
// Output: Counter with defer: 2000
// Mutex with critical section
var sharedData map[string]int
var dataMutex sync.Mutex
updateSharedData := func(key string, value int) {
dataMutex.Lock()
defer dataMutex.Unlock()
if sharedData == nil {
sharedData = make(map[string]int)
}
sharedData[key] = value
fmt.Printf("Updated %s to %d\n", key, value)
}
readSharedData := func(key string) int {
dataMutex.Lock()
defer dataMutex.Unlock()
if sharedData == nil {
return 0
}
return sharedData[key]
}
// Test shared data access
wg.Add(3)
go func() {
updateSharedData("counter", 42)
wg.Done()
}()
go func() {
updateSharedData("value", 100)
wg.Done()
}()
go func() {
time.Sleep(50 * time.Millisecond)
value := readSharedData("counter")
fmt.Printf("Read counter: %d\n", value)
wg.Done()
}()
wg.Wait()
// Output (order may vary):
// Updated counter to 42
// Updated value to 100
// Read counter: 42
}
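In real programs, the idiomatic pattern is to keep the mutex in the same struct as the data it guards, so every access goes through methods that lock correctly. A minimal sketch (the SafeCounter type is illustrative, not part of the sync package):
package main

import (
    "fmt"
    "sync"
)

// SafeCounter bundles the mutex with the data it protects.
type SafeCounter struct {
    mu    sync.Mutex
    count int
}

func (c *SafeCounter) Increment() {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.count++
}

func (c *SafeCounter) Value() int {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.count
}

func main() {
    var c SafeCounter
    var wg sync.WaitGroup
    for i := 0; i < 2; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for j := 0; j < 1000; j++ {
                c.Increment()
            }
        }()
    }
    wg.Wait()
    fmt.Println(c.Value()) // Output: 2000
}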
Read-Write Mutexes
The sync.RWMutex Type
Read-write mutexes allow multiple readers or one writer.
Read and Write Locks
Using the RLock(), RUnlock(), Lock(), and Unlock() methods.
package main
import (
"fmt"
"sync"
)
func main() {
// Read-write mutex examples
fmt.Println("Read-write mutex examples:")
// Shared data with read-write mutex
var sharedMap map[string]int
var rwMutex sync.RWMutex
initializeMap := func() {
rwMutex.Lock()
defer rwMutex.Unlock()
sharedMap = make(map[string]int)
sharedMap["counter"] = 0
sharedMap["value"] = 100
fmt.Println("Map initialized")
}
readFromMap := func(key string) int {
rwMutex.RLock()
defer rwMutex.RUnlock()
if sharedMap == nil {
return 0
}
value := sharedMap[key]
fmt.Printf("Read %s: %d\n", key, value)
return value
}
writeToMap := func(key string, value int) {
rwMutex.Lock()
defer rwMutex.Unlock()
if sharedMap == nil {
sharedMap = make(map[string]int)
}
sharedMap[key] = value
fmt.Printf("Wrote %s: %d\n", key, value)
}
// Test read-write mutex
var wg sync.WaitGroup
// Initialize map
initializeMap()
// Multiple readers
wg.Add(3)
go func() {
readFromMap("counter")
wg.Done()
}()
go func() {
readFromMap("value")
wg.Done()
}()
go func() {
readFromMap("counter")
wg.Done()
}()
// One writer
wg.Add(1)
go func() {
writeToMap("counter", 42)
wg.Done()
}()
wg.Wait()
// Output (order may vary; the reads may see 0 or 42 for counter):
// Map initialized
// Read counter: 0
// Read value: 100
// Read counter: 0
// Wrote counter: 42
// Read-write mutex with multiple writers
multipleWriters := func() {
var counter int
var rwMutex sync.RWMutex
increment := func() {
rwMutex.Lock()
defer rwMutex.Unlock()
counter++
fmt.Printf("Incremented counter to: %d\n", counter)
}
read := func() int {
rwMutex.RLock()
defer rwMutex.RUnlock()
value := counter
fmt.Printf("Read counter: %d\n", value)
return value
}
var wg sync.WaitGroup
// Multiple writers
wg.Add(3)
go func() {
increment()
wg.Done()
}()
go func() {
increment()
wg.Done()
}()
go func() {
increment()
wg.Done()
}()
// Multiple readers
wg.Add(2)
go func() {
read()
wg.Done()
}()
go func() {
read()
wg.Done()
}()
wg.Wait()
}
multipleWriters()
// Output (order may vary; the readers may see any count from 0 to 3):
// Incremented counter to: 1
// Incremented counter to: 2
// Incremented counter to: 3
// Read counter: 3
// Read counter: 3
}
Wait Groups
Basic Wait Group Usage
The sync.WaitGroup Type
Wait groups coordinate goroutines and wait for them to complete.
Add, Done, and Wait Methods
Using the Add(), Done(), and Wait() methods for coordination.
package main
import (
"fmt"
"sync"
"time"
)
func main() {
// Basic wait group usage examples
fmt.Println("Basic wait group usage examples:")
// Simple wait group
var wg sync.WaitGroup
worker := func(id int) {
defer wg.Done()
fmt.Printf("Worker %d starting\n", id)
time.Sleep(100 * time.Millisecond)
fmt.Printf("Worker %d finished\n", id)
}
// Start multiple workers
for i := 1; i <= 3; i++ {
wg.Add(1)
go worker(i)
}
// Wait for all workers to complete
wg.Wait()
fmt.Println("All workers completed")
// Output (order may vary):
// Worker 1 starting
// Worker 2 starting
// Worker 3 starting
// Worker 1 finished
// Worker 2 finished
// Worker 3 finished
// All workers completed
// Wait group with different work loads
waitGroupWithDifferentWorkloads := func() {
var wg sync.WaitGroup
shortWorker := func(id int) {
defer wg.Done()
fmt.Printf("Short worker %d starting\n", id)
time.Sleep(50 * time.Millisecond)
fmt.Printf("Short worker %d finished\n", id)
}
longWorker := func(id int) {
defer wg.Done()
fmt.Printf("Long worker %d starting\n", id)
time.Sleep(200 * time.Millisecond)
fmt.Printf("Long worker %d finished\n", id)
}
// Start workers with different workloads
wg.Add(2)
go shortWorker(1)
go longWorker(1)
wg.Add(1)
go shortWorker(2)
wg.Wait()
fmt.Println("All workers with different workloads completed")
}
waitGroupWithDifferentWorkloads()
// Output (order may vary):
// Short worker 1 starting
// Long worker 1 starting
// Short worker 2 starting
// Short worker 1 finished
// Short worker 2 finished
// Long worker 1 finished
// All workers with different workloads completed
// Wait group with results
waitGroupWithResults := func() {
var wg sync.WaitGroup
results := make(chan int, 3)
workerWithResult := func(id int) {
defer wg.Done()
fmt.Printf("Worker %d starting\n", id)
time.Sleep(time.Duration(id) * 50 * time.Millisecond)
result := id * 10
results <- result
fmt.Printf("Worker %d finished with result: %d\n", id, result)
}
// Start workers
for i := 1; i <= 3; i++ {
wg.Add(1)
go workerWithResult(i)
}
// Wait for all workers to complete
wg.Wait()
close(results)
// Collect results
var total int
for result := range results {
total += result
fmt.Printf("Collected result: %d\n", result)
}
fmt.Printf("Total result: %d\n", total)
}
waitGroupWithResults()
// Output (order may vary):
// Worker 1 starting
// Worker 2 starting
// Worker 3 starting
// Worker 1 finished with result: 10
// Worker 2 finished with result: 20
// Worker 3 finished with result: 30
// Collected result: 10
// Collected result: 20
// Collected result: 30
// Total result: 60
}
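One gotcha worth knowing: a WaitGroup must not be copied after first use. The examples above capture wg in closures, which is safe; if a worker function takes the WaitGroup as a parameter instead, pass a pointer. A minimal sketch:
package main

import (
    "fmt"
    "sync"
)

// worker takes *sync.WaitGroup; passing sync.WaitGroup by value would copy it,
// and Done would decrement the copy rather than the WaitGroup main waits on.
func worker(id int, wg *sync.WaitGroup) {
    defer wg.Done()
    fmt.Printf("worker %d done\n", id)
}

func main() {
    var wg sync.WaitGroup
    for i := 1; i <= 3; i++ {
        wg.Add(1)
        go worker(i, &wg)
    }
    wg.Wait()
    fmt.Println("all workers done")
}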
Advanced Wait Group Patterns
Wait Group with Error Handling
Using wait groups with error channels.
Wait Group with Timeout
Implementing timeout patterns with wait groups.
package main
import (
"fmt"
"sync"
"time"
)
func main() {
// Advanced wait group patterns examples
fmt.Println("Advanced wait group patterns examples:")
// Wait group with error handling
waitGroupWithErrorHandling := func() {
var wg sync.WaitGroup
errorCh := make(chan error, 3)
workerWithError := func(id int) {
defer wg.Done()
fmt.Printf("Worker %d starting\n", id)
// Simulate error for worker 2
if id == 2 {
errorCh <- fmt.Errorf("worker %d failed", id)
return
}
time.Sleep(100 * time.Millisecond)
fmt.Printf("Worker %d finished successfully\n", id)
}
// Start workers
for i := 1; i <= 3; i++ {
wg.Add(1)
go workerWithError(i)
}
// Wait for all workers to complete
wg.Wait()
close(errorCh)
// Check for errors
var errors []error
for err := range errorCh {
errors = append(errors, err)
}
if len(errors) > 0 {
fmt.Printf("Errors occurred: %v\n", errors)
} else {
fmt.Println("All workers completed successfully")
}
}
waitGroupWithErrorHandling()
// Output (order may vary):
// Worker 1 starting
// Worker 2 starting
// Worker 3 starting
// Worker 1 finished successfully
// Worker 3 finished successfully
// Errors occurred: [worker 2 failed]
// Wait group with timeout
waitGroupWithTimeout := func() {
var wg sync.WaitGroup
done := make(chan bool, 1) // buffered so the waiting goroutine does not leak after a timeout
slowWorker := func(id int) {
defer wg.Done()
fmt.Printf("Slow worker %d starting\n", id)
time.Sleep(200 * time.Millisecond)
fmt.Printf("Slow worker %d finished\n", id)
}
// Start slow workers
for i := 1; i <= 3; i++ {
wg.Add(1)
go slowWorker(i)
}
// Wait for completion or timeout
go func() {
wg.Wait()
done <- true
}()
select {
case <-done:
fmt.Println("All workers completed")
case <-time.After(100 * time.Millisecond):
fmt.Println("Timeout: workers took too long")
}
}
waitGroupWithTimeout()
// Output (order may vary; the slow workers keep running and print their "finished" lines after the timeout message):
// Slow worker 1 starting
// Slow worker 2 starting
// Slow worker 3 starting
// Timeout: workers took too long
// Wait group with dynamic workers
waitGroupWithDynamicWorkers := func() {
var wg sync.WaitGroup
jobs := make(chan int, 5)
worker := func(jobChan <-chan int) {
defer wg.Done()
for job := range jobChan {
fmt.Printf("Processing job %d\n", job)
time.Sleep(50 * time.Millisecond)
fmt.Printf("Job %d completed\n", job)
}
}
// Start workers
numWorkers := 2
for i := 0; i < numWorkers; i++ {
wg.Add(1)
go worker(jobs)
}
// Send jobs
for i := 1; i <= 5; i++ {
jobs <- i
}
close(jobs)
// Wait for all workers to complete
wg.Wait()
fmt.Println("All jobs completed")
}
waitGroupWithDynamicWorkers()
// Output (order may vary):
// Processing job 1
// Processing job 2
// Processing job 3
// Job 1 completed
// Processing job 4
// Job 2 completed
// Processing job 5
// Job 3 completed
// Job 4 completed
// Job 5 completed
// All jobs completed
}
Atomic Operations
Basic Atomic Operations
The sync/atomic Package
Atomic operations provide lock-free operations for simple data types.
Atomic Operations for Integers
Using atomic operations for integer types.
package main
import (
"fmt"
"sync"
"sync/atomic"
)
func main() {
// Basic atomic operations examples
fmt.Println("Basic atomic operations examples:")
// Atomic counter
var counter int64
var wg sync.WaitGroup
incrementAtomic := func() {
defer wg.Done()
for i := 0; i < 1000; i++ {
atomic.AddInt64(&counter, 1)
}
}
// Start multiple goroutines
wg.Add(2)
go incrementAtomic()
go incrementAtomic()
wg.Wait()
fmt.Printf("Atomic counter: %d\n", atomic.LoadInt64(&counter))
// Output: Atomic counter: 2000
// Atomic operations comparison
var regularCounter int64
var atomicCounter int64
var mutexCounter int64
var mutex sync.Mutex
incrementRegular := func() {
defer wg.Done()
for i := 0; i < 1000; i++ {
regularCounter++
}
}
incrementMutex := func() {
defer wg.Done()
for i := 0; i < 1000; i++ {
mutex.Lock()
mutexCounter++
mutex.Unlock()
}
}
incrementAtomicCounter := func() {
defer wg.Done()
for i := 0; i < 1000; i++ {
atomic.AddInt64(&atomicCounter, 1)
}
}
// Test regular counter (race condition)
wg.Add(2)
go incrementRegular()
go incrementRegular()
wg.Wait()
fmt.Printf("Regular counter: %d\n", regularCounter)
// Test mutex counter
wg.Add(2)
go incrementMutex()
go incrementMutex()
wg.Wait()
fmt.Printf("Mutex counter: %d\n", mutexCounter)
// Test atomic counter
wg.Add(2)
go incrementAtomicCounter()
go incrementAtomicCounter()
wg.Wait()
fmt.Printf("Atomic counter: %d\n", atomicCounter)
// Output:
// Regular counter: often less than 2000 (increments are lost to the data race)
// Mutex counter: 2000
// Atomic counter: 2000
// Atomic operations with different types
var int32Counter int32
var int64Counter int64
var uint32Counter uint32
var uint64Counter uint64
incrementDifferentTypes := func() {
defer wg.Done()
for i := 0; i < 1000; i++ {
atomic.AddInt32(&int32Counter, 1)
atomic.AddInt64(&int64Counter, 1)
atomic.AddUint32(&uint32Counter, 1)
atomic.AddUint64(&uint64Counter, 1)
}
}
wg.Add(2)
go incrementDifferentTypes()
go incrementDifferentTypes()
wg.Wait()
fmt.Printf("Int32 counter: %d\n", atomic.LoadInt32(&int32Counter))
fmt.Printf("Int64 counter: %d\n", atomic.LoadInt64(&int64Counter))
fmt.Printf("Uint32 counter: %d\n", atomic.LoadUint32(&uint32Counter))
fmt.Printf("Uint64 counter: %d\n", atomic.LoadUint64(&uint64Counter))
// Output:
// Int32 counter: 2000
// Int64 counter: 2000
// Uint32 counter: 2000
// Uint64 counter: 2000
}
Advanced Atomic Operations
Atomic Compare and Swap
Using CompareAndSwap for atomic updates.
Atomic Load and Store
Using Load and Store for atomic read and write operations.
package main
import (
"fmt"
"sync"
"sync/atomic"
"unsafe"
)
func main() {
// Advanced atomic operations examples
fmt.Println("Advanced atomic operations examples:")
// Atomic compare and swap
var value int64 = 10
tryUpdateValue := func(oldVal, newVal int64) bool {
return atomic.CompareAndSwapInt64(&value, oldVal, newVal)
}
fmt.Printf("Initial value: %d\n", atomic.LoadInt64(&value))
// Try to update with correct old value
if tryUpdateValue(10, 20) {
fmt.Printf("Successfully updated value to: %d\n", atomic.LoadInt64(&value))
} else {
fmt.Println("Failed to update value")
}
// Try to update with incorrect old value
if tryUpdateValue(10, 30) {
fmt.Printf("Successfully updated value to: %d\n", atomic.LoadInt64(&value))
} else {
fmt.Println("Failed to update value (old value mismatch)")
}
// Output:
// Initial value: 10
// Successfully updated value to: 20
// Failed to update value (old value mismatch)
// Atomic load and store
var wg sync.WaitGroup
var data int64 = 42
atomicLoadAndStore := func() {
defer wg.Done()
// Atomic load
currentValue := atomic.LoadInt64(&data)
fmt.Printf("Loaded value: %d\n", currentValue)
// Atomic store
newValue := currentValue * 2
atomic.StoreInt64(&data, newValue)
fmt.Printf("Stored value: %d\n", newValue)
}
wg.Add(2)
go atomicLoadAndStore()
go atomicLoadAndStore()
wg.Wait()
fmt.Printf("Final value: %d\n", atomic.LoadInt64(&data))
// Possible output (interleaving varies; if one goroutine's store happens before
// the other's load, the final value is 168 instead of 84):
// Loaded value: 42
// Loaded value: 42
// Stored value: 84
// Stored value: 84
// Final value: 84
// Atomic operations with pointers
var ptr *int
var newInt int = 100
atomicPointerOperations := func() {
defer wg.Done()
// Atomic store pointer
atomic.StorePointer((*unsafe.Pointer)(unsafe.Pointer(&ptr)), unsafe.Pointer(&newInt))
// Atomic load pointer
loadedPtr := (*int)(atomic.LoadPointer((*unsafe.Pointer)(unsafe.Pointer(&ptr))))
fmt.Printf("Loaded pointer value: %d\n", *loadedPtr)
}
wg.Add(2)
go atomicPointerOperations()
go atomicPointerOperations()
wg.Wait()
// Atomic operations with boolean
var flag int32
atomicBooleanOperations := func() {
defer wg.Done()
// Atomic store boolean (using int32)
atomic.StoreInt32(&flag, 1)
// Atomic load boolean
if atomic.LoadInt32(&flag) == 1 {
fmt.Println("Flag is set")
} else {
fmt.Println("Flag is not set")
}
// Atomic compare and swap boolean
if atomic.CompareAndSwapInt32(&flag, 1, 0) {
fmt.Println("Flag was set, now cleared")
} else {
fmt.Println("Flag was not set")
}
}
wg.Add(2)
go atomicBooleanOperations()
go atomicBooleanOperations()
wg.Wait()
// Possible output (interleaving varies; one goroutine's CAS fails if the other cleared the flag first):
// Flag is set
// Flag was set, now cleared
// Flag is set
// Flag was set, now cleared
}
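Note that Go 1.19 and later also provide typed atomic values such as atomic.Int64, atomic.Bool, and atomic.Pointer[T], which avoid the explicit pointer arguments and unsafe.Pointer casts used above. A minimal sketch:
package main

import (
    "fmt"
    "sync"
    "sync/atomic"
)

func main() {
    var counter atomic.Int64
    var flag atomic.Bool
    var current atomic.Pointer[string]

    var wg sync.WaitGroup
    for i := 0; i < 2; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for j := 0; j < 1000; j++ {
                counter.Add(1) // no &counter needed
            }
            flag.Store(true)
        }()
    }
    wg.Wait()

    name := "ready"
    current.Store(&name)

    fmt.Println(counter.Load())  // Output: 2000
    fmt.Println(flag.Load())     // Output: true
    fmt.Println(*current.Load()) // Output: ready
}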
Race Conditions and Prevention
Understanding Race Conditions
What Are Race Conditions?
Race conditions occur when multiple goroutines access the same data concurrently, at least one of the accesses is a write, and there is no synchronization between them.
Detecting Race Conditions
Using the Go race detector to find race conditions: run your program or tests with the -race flag, for example go run -race main.go or go test -race ./...
package main
import (
"fmt"
"sync"
"sync/atomic"
)
func main() {
// Race conditions and prevention examples
fmt.Println("Race conditions and prevention examples:")
// Race condition example
var counter int
var wg sync.WaitGroup
incrementWithRace := func() {
defer wg.Done()
for i := 0; i < 1000; i++ {
// Race condition: multiple goroutines accessing counter
counter++
}
}
// Start multiple goroutines
wg.Add(2)
go incrementWithRace()
go incrementWithRace()
wg.Wait()
fmt.Printf("Counter with race condition: %d\n", counter)
// Output: Counter with race condition: often less than 2000 (increments are lost to the race)
// Preventing race conditions with mutex
var counterWithMutex int
var mutex sync.Mutex
incrementWithMutex := func() {
defer wg.Done()
for i := 0; i < 1000; i++ {
mutex.Lock()
counterWithMutex++
mutex.Unlock()
}
}
wg.Add(2)
go incrementWithMutex()
go incrementWithMutex()
wg.Wait()
fmt.Printf("Counter with mutex: %d\n", counterWithMutex)
// Output: Counter with mutex: 2000
// Preventing race conditions with atomic operations
var counterAtomic int64
incrementAtomic := func() {
defer wg.Done()
for i := 0; i < 1000; i++ {
atomic.AddInt64(&counterAtomic, 1)
}
}
wg.Add(2)
go incrementAtomic()
go incrementAtomic()
wg.Wait()
fmt.Printf("Counter with atomic: %d\n", atomic.LoadInt64(&counterAtomic))
// Output: Counter with atomic: 2000
// Race condition with shared slice
var sharedSlice []int
var sliceMutex sync.Mutex
appendToSlice := func() {
defer wg.Done()
for i := 0; i < 100; i++ {
sliceMutex.Lock()
sharedSlice = append(sharedSlice, i)
sliceMutex.Unlock()
}
}
wg.Add(2)
go appendToSlice()
go appendToSlice()
wg.Wait()
fmt.Printf("Shared slice length: %d\n", len(sharedSlice))
// Output: Shared slice length: 200
// Race condition with shared map
var sharedMap map[string]int
var mapMutex sync.RWMutex
updateMap := func() {
defer wg.Done()
for i := 0; i < 100; i++ {
key := fmt.Sprintf("key%d", i)
mapMutex.Lock()
if sharedMap == nil {
sharedMap = make(map[string]int)
}
sharedMap[key] = i
mapMutex.Unlock()
}
}
readMap := func() {
defer wg.Done()
for i := 0; i < 100; i++ {
key := fmt.Sprintf("key%d", i)
mapMutex.RLock()
if sharedMap != nil {
_ = sharedMap[key]
}
mapMutex.RUnlock()
}
}
wg.Add(3)
go updateMap()
go updateMap()
go readMap()
wg.Wait()
fmt.Printf("Shared map size: %d\n", len(sharedMap))
// Output: Shared map size: 100 (both writers store the same keys key0 through key99)
}
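The sync package also includes sync.Map, a concurrency-safe map that handles its own locking and works well for read-mostly data or when goroutines touch disjoint keys; for other workloads a plain map guarded by a mutex, as above, is usually the better choice. A minimal sketch:
package main

import (
    "fmt"
    "sync"
)

func main() {
    var m sync.Map
    var wg sync.WaitGroup

    // Concurrent writers; sync.Map needs no external locking.
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            m.Store(fmt.Sprintf("key%d", i), i)
        }(i)
    }
    wg.Wait()

    // Load returns the value and whether the key was present.
    if v, ok := m.Load("key42"); ok {
        fmt.Println("key42 =", v) // Output: key42 = 42
    }

    // Range iterates over all entries.
    count := 0
    m.Range(func(key, value any) bool {
        count++
        return true // keep iterating
    })
    fmt.Println("entries:", count) // Output: entries: 100
}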
What You've Learned
Congratulations! You now have a comprehensive understanding of Go's sync package:
Mutexes and Locks
- Understanding mutexes for protecting shared resources
- Using read-write mutexes for concurrent read access
- Implementing critical sections with mutexes
- Using defer for automatic mutex unlocking
Wait Groups
- Understanding wait groups for goroutine coordination
- Using Add, Done, and Wait methods for synchronization
- Implementing error handling with wait groups
- Creating timeout patterns with wait groups
Atomic Operations
- Understanding atomic operations for lock-free programming
- Using atomic operations for simple data types
- Implementing compare and swap operations
- Using atomic load and store operations
Race Conditions and Prevention
- Understanding what race conditions are and how they occur
- Preventing race conditions with mutexes and atomic operations
- Using the Go race detector to find race conditions
- Implementing safe concurrent access to shared resources
Key Concepts
- sync.Mutex - Mutual exclusion lock for protecting shared resources
- sync.RWMutex - Read-write mutex for concurrent read access
- sync.WaitGroup - Coordination mechanism for waiting for goroutines
- sync/atomic - Atomic operations for lock-free programming
- Race conditions - Concurrent access to shared data without synchronization
Next Steps
You now have a solid foundation in Go's sync package. In the next section, we'll explore advanced concurrency patterns, which combine all the concepts we've learned to create sophisticated concurrent applications.
Understanding the sync package is crucial for writing safe concurrent programs that can handle shared resources and coordinate goroutine execution. These concepts form the foundation for all the more advanced concurrency techniques we'll cover in the coming chapters.
Ready to learn about advanced concurrency patterns? Let's explore sophisticated concurrency patterns and learn how to build scalable, efficient concurrent applications!