Go Concurrency Patterns - Goroutines, Channels, and the Mistakes I Kept Making
The concurrency model that finally clicked for me, the pitfalls that didn't show up until production, and when to reach for sync.Mutex instead of channels.

First goroutine I ever wrote leaked. Didn't know it leaked. The program worked fine in development, passed all tests, deployed to production. Memory usage crept up over three days until the container got OOM-killed. Took me an embarrassing amount of time to connect "goroutine that nobody is reading from" to "memory leak." That was the moment I realized Go concurrency is deceptively easy to start and genuinely hard to get right.
The syntax is simple. go doSomething() and boom, concurrent execution. Channels for communication. Select statements for multiplexing. The language makes it look trivial. The bugs you create when you don't understand the underlying model are anything but.
Going to walk through the patterns I actually use in production, the mistakes I made learning them, and the decision framework that finally stopped me from reaching for the wrong tool.
Goroutines Are Not Threads
This distinction probably matters more than most tutorials acknowledge. A goroutine is a lightweight execution unit managed by the Go runtime, not by the operating system. The runtime multiplexes goroutines onto a smaller number of OS threads. Starting a goroutine costs about 2-4KB of stack space. Starting an OS thread costs about 1-8MB depending on the platform.
Practical implication: you can run thousands of goroutines without thinking twice. I've had services running 50,000+ concurrent goroutines handling WebSocket connections. Try that with OS threads and you'll probably hit limits fast.
But "lightweight" doesn't mean "free." Each goroutine consumes memory, and more importantly, each goroutine that's stuck (blocked on a channel nobody writes to, waiting for a lock that's never released, sleeping forever) is a resource leak. The Go runtime won't kill goroutines for you. If you start one, it runs until it returns or the program exits. No garbage collection for goroutines.
```go
// This leaks a goroutine every time it's called
func leakySearch(query string) string {
    ch := make(chan string)
    go func() {
        ch <- searchDatabaseA(query)
    }()
    go func() {
        ch <- searchDatabaseB(query)
    }()
    // Returns the first result, but the slower goroutine
    // is now blocked trying to send to a channel nobody reads
    return <-ch
}
```
That was essentially my first production goroutine leak. Two goroutines race, first result wins, second goroutine blocks forever on a send to an unbuffered channel. The fix is either a buffered channel (make(chan string, 2)) so both sends succeed regardless of whether anyone reads, or using context for cancellation.
Channels - The Happy Path and the Traps
Channels are Go's primary concurrency primitive. The idea: don't communicate by sharing memory; share memory by communicating. Send data through channels instead of protecting shared variables with locks.
The basic pattern works beautifully for pipeline-style processing:
```go
func processOrders(orders []Order) []Result {
    jobs := make(chan Order, len(orders))
    results := make(chan Result, len(orders))

    // Start 10 workers
    for i := 0; i < 10; i++ {
        go func() {
            for order := range jobs {
                results <- processOrder(order)
            }
        }()
    }

    // Send all jobs
    for _, order := range orders {
        jobs <- order
    }
    close(jobs)

    // Collect results
    var processed []Result
    for i := 0; i < len(orders); i++ {
        processed = append(processed, <-results)
    }
    return processed
}
```
Fan-out, fan-in. Ten workers pull from the jobs channel, process concurrently, push results to the results channel. Clean. Readable. Works well.
The traps are in the details.
Unbuffered channels block. ch := make(chan int) creates a channel where a send blocks until someone receives, and a receive blocks until someone sends. It's a synchronization point, not a queue. Forgetting this leads to deadlocks that are obvious in hindsight and invisible when you're writing the code.
```go
// Deadlock - main goroutine blocks on send, nobody to receive
func main() {
    ch := make(chan int)
    ch <- 42 // blocks forever
    fmt.Println(<-ch)
}
```
Sending to a closed channel panics. Not returns an error. Panics. Crashes the program. This means you need to be very careful about who closes channels and when.
```go
// Rule: only the sender should close a channel
// Never close from the receiver side
// Never close a channel more than once
```
Nil channels block forever. Both sends and receives on a nil channel block. Sounds useless, but it's actually a pattern: you can disable a case in a select statement by setting the channel to nil.
```go
func merge(ch1, ch2 <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for ch1 != nil || ch2 != nil {
            select {
            case v, ok := <-ch1:
                if !ok {
                    ch1 = nil // disable this case
                    continue
                }
                out <- v
            case v, ok := <-ch2:
                if !ok {
                    ch2 = nil
                    continue
                }
                out <- v
            }
        }
    }()
    return out
}
```
Took me a while to internalize that one. Setting a closed channel to nil makes the select statement skip that case entirely instead of busy-looping on the zero value.
Context - The Cancellation Pattern You Need Everywhere
Every goroutine you start should have a way to stop. context.Context is that way. I ignored this for months because it felt like boilerplate. Then I had a service where cancelled HTTP requests left goroutines running database queries for results that nobody wanted. Wasted database connections, wasted CPU, wasted memory.
```go
func handleRequest(w http.ResponseWriter, r *http.Request) {
    ctx := r.Context() // cancelled when client disconnects
    resultCh := make(chan QueryResult, 1)

    go func() {
        result, err := expensiveQuery(ctx)
        if err != nil {
            return
        }
        resultCh <- result
    }()

    select {
    case result := <-resultCh:
        json.NewEncoder(w).Encode(result)
    case <-ctx.Done():
        // Client disconnected, goroutine will exit
        // when expensiveQuery checks ctx
        http.Error(w, "cancelled", http.StatusRequestTimeout)
    }
}
```
The select statement waits for either the result or the context cancellation. If the client disconnects, ctx.Done() fires and we stop waiting. The goroutine running expensiveQuery should also check the context and bail out; if expensiveQuery passes ctx down to the database driver, the query itself gets cancelled.
Pattern I use for every goroutine now:
```go
func worker(ctx context.Context, jobs <-chan Job) {
    for {
        select {
        case <-ctx.Done():
            return // clean shutdown
        case job, ok := <-jobs:
            if !ok {
                return // channel closed
            }
            process(ctx, job)
        }
    }
}
```
Two exit conditions: context cancelled, or channel closed. Covers both graceful shutdown and normal completion. Every long-running goroutine in my code follows this shape.
sync.Mutex vs Channels - The Decision That Keeps Coming Up
Go's mantra is "share memory by communicating": use channels. But the standard library also includes sync.Mutex for traditional lock-based synchronization. When do you use which?
My rule after years of getting this wrong: channels for coordination, mutexes for state protection. From what I've seen, this distinction covers most cases.
If goroutines need to signal each other, pass data between stages, or coordinate who does what, use channels. If multiple goroutines need to read or write a shared data structure and you just need to prevent concurrent access, use a mutex.
```go
// Mutex: protecting shared state (simple, appropriate)
type Counter struct {
    mu    sync.Mutex
    count int
}

func (c *Counter) Increment() {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.count++
}

func (c *Counter) Value() int {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.count
}
```
You could build this with channels. I've seen people do it โ a goroutine that owns the counter, receives increment/read messages on a channel, sends back values. It works, but it's more code, harder to follow, and slower. A mutex is the right tool here. Quick lock, quick unlock, done.
```go
// Channels: coordinating work between goroutines
func fanOut(ctx context.Context, input <-chan Task, numWorkers int) <-chan Result {
    results := make(chan Result, numWorkers)
    var wg sync.WaitGroup

    for i := 0; i < numWorkers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for task := range input {
                select {
                case <-ctx.Done():
                    return
                case results <- process(task):
                }
            }
        }()
    }

    go func() {
        wg.Wait()
        close(results)
    }()
    return results
}
```
Here channels are the right call. Tasks flow in, results flow out, workers coordinate through the channel. Trying to do this with mutexes would probably be a mess of shared slices and condition variables.
Use sync.RWMutex when reads vastly outnumber writes. A regular mutex blocks everyone. An RWMutex allows multiple simultaneous readers, only blocking when a writer needs access. For a configuration cache that gets read thousands of times per second and updated once a minute, the difference can be significant, from what I've seen.
```go
type ConfigCache struct {
    mu     sync.RWMutex
    config map[string]string
}

func (c *ConfigCache) Get(key string) string {
    c.mu.RLock()
    defer c.mu.RUnlock()
    return c.config[key]
}

func (c *ConfigCache) Set(key, value string) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.config[key] = value
}
```
The Worker Pool Pattern
Probably the most common concurrency pattern in my Go code. A bounded number of goroutines processing a stream of work items. Prevents unbounded goroutine creation, controls resource usage, provides backpressure.
```go
type Pool struct {
    jobs    chan func()
    workers int
}

func NewPool(workers, queueSize int) *Pool {
    p := &Pool{
        jobs:    make(chan func(), queueSize),
        workers: workers,
    }
    for i := 0; i < workers; i++ {
        go func() {
            for job := range p.jobs {
                job()
            }
        }()
    }
    return p
}

func (p *Pool) Submit(job func()) {
    p.jobs <- job // blocks if queue is full: backpressure
}

func (p *Pool) Shutdown() {
    close(p.jobs) // workers exit when channel is drained
}
```
Simple, effective, handles most use cases. The buffered channel acts as a work queue. When the buffer is full, Submit blocks the caller: natural backpressure without explicit rate limiting.
For production use, I add context-based cancellation and error collection. The errgroup package from golang.org/x/sync handles the common case nicely:
```go
func processAllUsers(ctx context.Context, userIDs []string) error {
    g, ctx := errgroup.WithContext(ctx)
    g.SetLimit(20) // max 20 concurrent goroutines
    for _, id := range userIDs {
        id := id // capture loop variable (unnecessary since Go 1.22, harmless before)
        g.Go(func() error {
            return processUser(ctx, id)
        })
    }
    return g.Wait() // returns first error, cancels remaining
}
```
errgroup handles the WaitGroup, error propagation, goroutine limiting, and context cancellation in one package. I use this more than raw goroutines at this point.
The Pipeline Pattern
Chain stages together, each stage a goroutine reading from an input channel and writing to an output channel. Data flows through the pipeline.
```go
func generate(ctx context.Context, nums ...int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for _, n := range nums {
            select {
            case <-ctx.Done():
                return
            case out <- n:
            }
        }
    }()
    return out
}

func square(ctx context.Context, in <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for n := range in {
            select {
            case <-ctx.Done():
                return
            case out <- n * n:
            }
        }
    }()
    return out
}

func filter(ctx context.Context, in <-chan int, predicate func(int) bool) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for n := range in {
            if predicate(n) {
                select {
                case <-ctx.Done():
                    return
                case out <- n:
                }
            }
        }
    }()
    return out
}
```
Composing them:
```go
ctx, cancel := context.WithCancel(context.Background())
defer cancel()

nums := generate(ctx, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
squared := square(ctx, nums)
even := filter(ctx, squared, func(n int) bool { return n%2 == 0 })

for result := range even {
    fmt.Println(result) // 4, 16, 36, 64, 100
}
```
Each stage runs concurrently. Data flows through as it's produced โ no waiting for the entire dataset before processing begins. For I/O-heavy workloads where each stage involves network calls or disk reads, the concurrency is significant.
Used this pattern for a data ingestion service: stage 1 reads from Kafka, stage 2 enriches records by calling an external API, stage 3 writes to the database. Each stage at different speeds. The channels buffer differences naturally. Stage 2 is slow because of network calls? It applies backpressure to stage 1 via the full channel buffer. Stage 3 is slow? Same thing.
The Mistakes That Bit Me in Production
Beyond the goroutine leak I mentioned at the start, a few other patterns that seemed fine in development and failed in production.
Not handling panics in goroutines. A panic in a goroutine kills the entire program, not just that goroutine. If you have a worker pool processing user requests, one malformed input that causes a nil pointer dereference takes down the whole service.
```go
func safeGo(fn func()) {
    go func() {
        defer func() {
            if r := recover(); r != nil {
                log.Printf("goroutine panic recovered: %v\n%s", r, debug.Stack())
            }
        }()
        fn()
    }()
}
```
I wrap goroutine launches in a helper that recovers from panics. Not ideal: recover swallows the panic and the goroutine is gone. But the service stays up. The alternative, crashing on every unexpected nil pointer, was worse for our use case.
Race conditions that only appear under load. The Go race detector (go test -race) catches many but not all data races. Had a map that was read from multiple goroutines and occasionally written to. Worked fine for weeks. Then traffic spiked, concurrent reads and writes overlapped, and the runtime panicked with concurrent map read and map write. Maps in Go are not safe for concurrent access. Period.
```go
// This will eventually crash under concurrent access
var cache = make(map[string]string)

// Fix: use sync.Map or protect with a mutex
var cache sync.Map
cache.Store("key", "value")
val, ok := cache.Load("key")
```
sync.Map is optimized for two common cases: keys that are written once and read many times, or keys that are unique to each goroutine. For other access patterns, a regular map with a sync.RWMutex tends to perform better in my experience.
Forgetting sync.WaitGroup and exiting before goroutines finish. The main function returns, the program exits, all goroutines are killed mid-execution. Data partially written. Connections left open. Graceful shutdown requires waiting for goroutines to complete.
```go
func main() {
    ctx, cancel := signal.NotifyContext(context.Background(),
        syscall.SIGINT, syscall.SIGTERM)
    defer cancel()

    var wg sync.WaitGroup

    wg.Add(1)
    go func() {
        defer wg.Done()
        runServer(ctx)
    }()

    wg.Add(1)
    go func() {
        defer wg.Done()
        runWorker(ctx)
    }()

    <-ctx.Done()
    log.Println("shutting down...")
    // ctx is already cancelled by the signal at this point
    wg.Wait() // wait for all goroutines to finish
    log.Println("clean shutdown complete")
}
```
This pattern, context for the cancellation signal plus WaitGroup for completion tracking, is in every production service I write now.
Profiling Goroutines
When something is wrong and you suspect goroutine issues, runtime and pprof are your friends.
```go
import "runtime"

// How many goroutines are running right now?
fmt.Println(runtime.NumGoroutine())
```
If that number keeps climbing and never drops, you have a leak. Import net/http/pprof in your service, hit /debug/pprof/goroutine?debug=2, and you get a full stack trace of every goroutine. The stuck ones are usually obvious: blocked on channel send, blocked on channel receive, stuck in a select with no exit case.
I log runtime.NumGoroutine() as a metric every 30 seconds. When it trends upward over hours, something is leaking. Caught three leaks this way before they caused OOM kills. Much cheaper than waiting for the container to die and debugging from a restart.
What I'd Tell Past Me
Start with errgroup instead of raw goroutines. It handles roughly 80% of concurrent work patterns with built-in error handling and cancellation. Only drop down to raw goroutines and channels when you need a pattern errgroup can't express.
Always pass context.Context to goroutines. Always check it. The five seconds of extra typing per function saves hours of debugging leaked goroutines.
Use the race detector in CI. go test -race ./... on every PR. It won't catch everything, but it catches the obvious stuff before production does.
Channels are for communication. Mutexes are for state protection. The moment you find yourself building a complex channel-based state machine to protect a shared map, step back and use a mutex. The Go proverb about sharing memory by communicating is directional guidance, not an absolute rule. Sometimes a lock is just the right tool.
Concurrency in Go is genuinely powerful. The goroutine-and-channel model is one of the best implementations of CSP in a mainstream language. But "easy to start" and "easy to get right" are different things entirely. The patterns here took me a couple of years of production bugs to internalize. Hopefully reading about someone else's mistakes is faster than making all of them yourself.
Keep Reading
- System Design - Not Interview Prep, Real Decisions: Concurrency patterns are one piece; understanding how they fit into load balancers, queues, and caching layers is the bigger picture.
- Learning Docker - What I Wish Someone Had Told Me Earlier: Most Go services end up running in containers, so understanding Docker is the natural next step after writing the code.
Further Resources
- Go Documentation: Concurrency - The official Effective Go guide on goroutines, channels, and concurrency patterns with idiomatic examples.
- The Go Blog: Concurrency Patterns - Articles from the Go team covering pipelines, cancellation, fan-out/fan-in, and context-based patterns.
- Go by Example: Goroutines and Channels - Annotated code examples for every concurrency primitive in Go, from basic goroutines to select statements and worker pools.
Written by
Anurag Sinha
Full-stack developer specializing in React, Next.js, cloud infrastructure, and AI. Writing about web development, DevOps, and the tools I actually use in production.