How Docker Compose Actually Went for My Development Setup
Addressing misconceptions about containerized local setups, configuration pain points, and why volume mounting behaves the way it does.

Four developers. Four different Node versions. One person on an M1 Mac, two on Intel Macs, me on Windows. Every single week, something would work on one machine and break on another. The breaking point was a Monday standup where three out of four of us admitted we'd spent Friday afternoon debugging environment issues instead of writing features.
Someone brought up Docker Compose. I'd pulled images before, run one-off containers, but never used Docker as a full local development environment. If you're not familiar with the building blocks yet, my Docker for beginners post covers how images, containers, and volumes work at a fundamental level. Figured a containerized setup couldn't be worse than what we already had.
It was worse. At first.
## The First Compose File Was Embarrassing
Threw together a docker-compose.yml based on blog post examples that same evening:
```yaml
services:
  api:
    build: ./api
    ports:
      - "4000:4000"
    depends_on:
      - db
      - redis
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: password
  redis:
    image: redis:7
```
`docker compose up`. Everything pulled, built, started. Green text scrolling in the terminal. Felt great for about forty-five seconds.
Then the API threw a connection refused error trying to reach Postgres. Frontend couldn't talk to the API. And every code change required a full image rebuild: fifteen seconds of waiting each time. That's not a development workflow. That's a patience exercise.
## depends_on Doesn't Mean What You'd Expect
The connection refused error ate a solid hour of my evening, if I'm being honest.
depends_on controls startup order, not startup readiness. Docker starts containers in sequence, sure. But "started" and "ready to accept connections" are very different states for a database. Postgres takes a few seconds to initialize. Sometimes longer. The API container would boot, try to connect immediately, get refused, and crash.
Tried a sleep 10 in the API's start command. Worked but felt awful. Tried a shell script that looped and checked the port. Better but added complexity. Then found the actual answer:
```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: password
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 3s
      timeout: 3s
      retries: 10
  api:
    build:
      context: ./api
      dockerfile: Dockerfile.dev
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
```
`condition: service_healthy` is what actually makes Docker wait. Without it, `depends_on` is just ordering, not synchronization. Still a bit annoyed this isn't the default behavior. Or at least that the docs don't scream about the distinction louder.
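Even with the healthcheck in place, a small retry loop in the app is cheap insurance against races the healthcheck doesn't catch. A sketch, not from the post; `connectWithRetry` and its options are names I made up:

```javascript
// Retry an async connect function with a fixed delay between attempts.
// Wrap your real client setup, e.g. connectWithRetry(() => pgClient.connect()).
async function connectWithRetry(connect, { retries = 10, delayMs = 1000 } = {}) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await connect(); // success: hand back whatever connect() resolves to
    } catch (err) {
      if (attempt >= retries) throw err; // out of attempts, surface the real error
      await new Promise((resolve) => setTimeout(resolve, delayMs)); // back off, then retry
    }
  }
}
```

Ten retries at a second each comfortably covers a cold Postgres start, and the real error still surfaces if the database never comes up.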
## Volume Mounts, or: Why Code Changes Did Nothing
The bigger problem was that my Dockerfile copied source code into the image at build time with `COPY . .`, and that was it. No volume mounting. The code inside the container was a frozen snapshot. Edit a file on the host? Container doesn't notice. It has its own copy.
Fix: mount the project directory into the container at runtime. Override the copied files with a live bind mount.
```yaml
services:
  api:
    build:
      context: ./api
      dockerfile: Dockerfile.dev
    volumes:
      - ./api:/app
    ports:
      - "4000:4000"
```
Edit `./api/src/server.js` on the host and the change appears inside the container, because `/app` mirrors the host directory. Add nodemon or any file watcher and hot reload works inside the container.
Except.
## The node_modules Disaster
After setting up volume mounts, `docker compose up` produced a wall of errors. `bcrypt` failing to load. Native module compilation errors. The kind of output that makes you want to close the laptop.
The problem: my host machine runs Windows. Container runs Linux. Mounting `./api:/app` also mounts `./api/node_modules`, which contains native binaries compiled for Windows. Linux container tries to use them. Doesn't work.
Obvious fix: delete node_modules from the host before mounting. But then the IDE loses import resolution, autocomplete breaks, linting breaks. Not viable.
Actual fix: an anonymous volume trick:
```yaml
volumes:
  - ./api:/app
  - /app/node_modules
```
That second line creates an anonymous volume at /app/node_modules inside the container. Masks the host's node_modules. Container keeps its own Linux-compiled packages. Host keeps its Windows versions. Both coexist. Container never sees the host's incompatible binaries.
Would love to claim credit for figuring this out independently. Found it in a GitHub issue comment from 2019 with twelve thumbs-up reactions. The kind of tribal knowledge that apparently everyone knows but nobody documents properly.
One catch: adding a new npm dependency requires rebuilding the image or exec'ing into the container to run `npm install`. The anonymous volume persists between restarts, so new packages don't appear until you rebuild or install inside the container. I forget this regularly and spend five minutes confused about why `require('some-new-package')` throws a not-found error.
## Making the Frontend Work
API was the hard part. Frontend was easier but had quirks.
The Next.js or Vite dev server runs inside the container, and you forward the port to the host. Straightforward. But I ran into an issue where the dev server bound to `127.0.0.1` inside the container, making it inaccessible from outside even with port forwarding.
Had to bind to `0.0.0.0`:
```json
{
  "scripts": {
    "dev": "next dev -H 0.0.0.0"
  }
}
```
For Vite:
```javascript
export default defineConfig({
  server: {
    host: '0.0.0.0'
  }
})
```
Small thing. Lost twenty minutes to it.
Hot module replacement worked out of the box once the binding was right. HMR uses a websocket from the browser to the dev server, and some teams reportedly need to configure the `hmr` settings for specific ports or client URLs. Haven't hit that wall. Can't say much about it.
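For what it's worth, the knob in question on the Vite side is `server.hmr`; a hedged sketch (the `clientPort` value is an assumption tied to the `"3000:3000"` mapping, and plenty of setups never need it):

```javascript
// vite.config.js sketch. clientPort tells the browser-side HMR client which
// host port to dial; only needed when the websocket can't infer it.
// The value 3000 assumes the "3000:3000" port mapping used elsewhere in this post.
export default {
  server: {
    host: '0.0.0.0',
    hmr: {
      clientPort: 3000
    }
  }
}
```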
## Debugging Inside a Container
Expected this to be painful. Turned out to be... fine. For Node.js, expose the inspector port and pass the `--inspect` flag:
```yaml
api:
  ports:
    - "4000:4000"
    - "9229:9229"
  command: node --inspect=0.0.0.0:9229 src/index.js
```
VS Code attaches to `localhost:9229`. Breakpoints, step-through, watch variables: all of it. The `0.0.0.0` matters for the same reason as the dev server binding. A debugger listening on `127.0.0.1` inside the container is invisible to the host.
Launch config:
```json
{
  "type": "node",
  "request": "attach",
  "name": "Docker: Attach to Node",
  "port": 9229,
  "address": "localhost",
  "localRoot": "${workspaceFolder}/api",
  "remoteRoot": "/app",
  "restart": true
}
```
The `localRoot` and `remoteRoot` mapping is important. Without it, VS Code doesn't know that `/app/src/server.js` in the container corresponds to `./api/src/server.js` on the host. Breakpoints won't hit if paths don't match.
Minor annoyance: nodemon restarts drop the debug connection momentarily. "restart": true helps, but there's a brief disconnection each time. Small tax. Not a dealbreaker.
## Database Seeding and Migrations
Wanted new developers to run `docker compose up` and have a working database. Not an empty Postgres instance where they'd need to figure out which migration commands to execute.
Postgres images support entrypoint scripts. Mount a `.sql` file into `/docker-entrypoint-initdb.d/` and it runs automatically the first time the database volume is created:
```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: password
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U dev"]
      interval: 3s
      retries: 10
volumes:
  pgdata:
```
`init.sql` creates tables, inserts seed data. First time someone runs compose, they get a fully populated database.
But it only runs on first initialization. If the `pgdata` volume already exists with data, entrypoint scripts get skipped. Burned time on this when I updated the init script and nothing changed. Had to run `docker compose down -v` to destroy the volume and start fresh. That `-v` flag. Easy to forget. Important to remember.
For ongoing migrations, the init script approach isn't enough. We added migrations to the API startup:
```dockerfile
# Dockerfile.dev
CMD ["sh", "-c", "npx prisma migrate deploy && node src/index.js"]
```
Prisma runs pending migrations before the app starts. Works well enough. Seen it fail when migration ordering depends on seed data that gets created in a particular sequence. That was a fun afternoon.
## Environment Variables
Docker Compose reads `.env` files natively. Can also specify them per-service:
```yaml
api:
  env_file:
    - ./api/.env.docker
  environment:
    DATABASE_URL: postgres://dev:password@db:5432/myapp
    REDIS_URL: redis://redis:6379
```
Notice the hostnames. Inside the Docker network, services refer to each other by service name. Not localhost. Not 127.0.0.1. The service name `db` resolves to the Postgres container's IP. My original .env had `DATABASE_URL=postgres://localhost:5432/myapp`, and of course that didn't work inside the container, because nothing runs on the container's own localhost.
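One way to make the moving part visible: the hostname segment of the connection string is all that changes between environments. Node's built-in WHATWG `URL` parser shows exactly which segment Docker's DNS is resolving:

```javascript
// Parse the Compose-internal connection string. "db" here is the Compose
// service name, which Docker's embedded DNS resolves to the container's IP.
const url = new URL('postgres://dev:password@db:5432/myapp');
console.log(url.hostname); // "db"
console.log(url.port);     // "5432"
```

Swap that hostname for `localhost` and you have the host-side variant; everything else in the string stays identical, which is why keeping it in an environment variable is enough.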
Ended up with three `.env` variants:
- `.env` for running outside Docker on the host
- `.env.docker` for running inside Docker Compose
- `.env.production` for deployment
A bit messy. There are probably smarter approaches: variable substitution, conditional values. Three files works. Nobody has complained about it. Pick your battles.
## Where We Landed
The compose file grew to about 90 lines. API, frontend, Postgres, Redis, a background worker, and a mailcatcher for testing emails locally.
```yaml
services:
  api:
    build:
      context: ./api
      dockerfile: Dockerfile.dev
    volumes:
      - ./api:/app
      - /app/node_modules
    ports:
      - "4000:4000"
      - "9229:9229"
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
    env_file:
      - ./api/.env.docker
    command: npx nodemon --inspect=0.0.0.0:9229 src/index.js
  worker:
    build:
      context: ./api
      dockerfile: Dockerfile.dev
    volumes:
      - ./api:/app
      - /app/node_modules
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
    env_file:
      - ./api/.env.docker
    command: node src/worker.js
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile.dev
    volumes:
      - ./frontend:/app
      - /app/node_modules
    ports:
      - "3000:3000"
    env_file:
      - ./frontend/.env.docker
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: password
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U dev"]
      interval: 3s
      retries: 10
  redis:
    image: redis:7
    ports:
      - "6379:6379"
  mailcatcher:
    image: schickling/mailcatcher
    ports:
      - "1080:1080"
      - "1025:1025"
volumes:
  pgdata:
```
A new developer clones the repo. Runs `docker compose up`. Everything running in under two minutes. Compared to the old setup document (fourteen bullet points of "install this, configure that, make sure this version matches"), it's a different world.
## What Still Bothers Me
Performance on macOS. Volume mounts are noticeably slow. File watching triggers with a delay. Builds that take 2 seconds on the host take 5-6 through the Docker mount. Newer mount options help: `cached`, `delegated`, and the VirtioFS backend in Docker Desktop. We turned on VirtioFS and got maybe 70% of native speed. Still not native. On Linux, bind mounts are direct filesystem access, so the problem doesn't exist. Windows with WSL2 falls somewhere in between: decent if files live inside the WSL filesystem, bad if they're on the Windows side via `/mnt/c`.
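On the file-watching half of that, one workaround I've seen recommended (and haven't benchmarked myself) is forcing nodemon to poll instead of relying on filesystem events crossing the mount; the watch paths in this `nodemon.json` sketch are examples:

```json
{
  "watch": ["src"],
  "ext": "js,json",
  "legacyWatch": true
}
```

`legacyWatch` switches nodemon to polling, which trades some CPU for reliable change detection through bind mounts.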
Haven't found a clean answer. Some people recommend devcontainers where files live entirely inside the Linux VM. Probably the right move. Haven't made the switch because the current setup works well enough and introducing another layer of "how does my editor see these files" confusion doesn't appeal.
Image layer caching is another thing. Someone changes package.json and rebuilds, and Docker invalidates the cache from the `COPY package*.json` layer onward, so `npm install` runs from scratch. Optimizations exist: multi-stage builds, mounting the npm cache as a Docker build cache. Haven't bothered. Rebuilds happen once or twice a week per person. Not enough frequency to justify the optimization effort.
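For context, the layer ordering those optimizations build on looks roughly like this; a `Dockerfile.dev` sketch with an assumed base image and paths:

```dockerfile
# Manifests first: the npm install layer below stays cached
# until package*.json actually changes.
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
# Source-only changes invalidate layers from here down, so npm install is skipped.
COPY . .
CMD ["node", "src/index.js"]
```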
Docker networking beyond the basics also remains foggy for me. Each compose project gets its own network. Services talk by name. Exposed ports reach the host. Fine. But when I tried to have two compose projects communicate (main app plus a shared microservice), external networks were involved and I'm not confident the configuration was correct. It works. I couldn't explain it to someone else clearly.
If your team outgrows Compose and needs auto-scaling or zero-downtime deploys, I wrote about evaluating whether Kubernetes makes sense for that transition. That might be the nature of Docker Compose for development, though, as far as I can tell. You reach a point where it's working, the team is productive, and you stop tinkering. Things I'd still like to clean up: the three-env-file situation, the rebuild-on-new-dependency workflow, getting devcontainers running properly. But there's always something else to ship.
## Makefile Shortcuts
One small addition that improved the team's quality of life: a Makefile wrapping the most common Docker Compose commands.
```makefile
up:
	docker compose up -d

down:
	docker compose down

logs:
	docker compose logs -f $(service)

rebuild:
	docker compose build --no-cache $(service) && docker compose up -d $(service)

reset-db:
	docker compose down -v && docker compose up -d db && sleep 5 && docker compose up -d api

shell:
	docker compose exec api sh
```
`make up` instead of `docker compose up -d`. `make logs service=api` instead of `docker compose logs -f api`. `make reset-db` handles the entire sequence of destroying the database volume, restarting Postgres, waiting for it to initialize, and starting the API (which runs migrations on boot).
Not everyone on the team uses the Makefile. Some prefer typing the full commands. Doesn't matter; it's there for whoever wants it, and it serves as documentation of the common workflows even for people who don't use make.
The `reset-db` target was the most requested. Developers regularly need to start with a clean database: when testing migration sequences, when seed data gets corrupted by a failed experiment, when switching between branches that have incompatible schemas. Without the shortcut, the sequence was: remember to add `-v` to the down command, then up the database, then wait (how long?), then up the API. Easy to forget a step. The Makefile handles it reliably.
## Logs: The Unsung Debugging Tool
One thing that took longer than it should have to get right: useful log output from Docker Compose. By default, `docker compose up` interleaves stdout from every service into a single stream. API logs, database startup messages, Redis warnings, frontend compilation output, all mixed together in a wall of text. Finding the one error line that matters is like searching for a specific grain of sand.
Two fixes made this bearable. First, always use `docker compose logs -f api` to follow a single service when debugging. Second, configure structured JSON logging in the API so log lines are parseable by tools like `jq`. Piping `docker compose logs --no-log-prefix api | jq '.message'` gives you clean, readable output stripped of noise (without `--no-log-prefix`, Compose prepends the service name to every line, which trips up `jq`). Small setup cost, big payoff during those "why is this request returning 500" debugging sessions.
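A minimal sketch of what "structured" means here; the field names are my own, and a real app would more likely use a logging library like pino:

```javascript
// Emit one JSON object per line so jq (or any log pipeline) can parse it.
function formatLog(level, message, extra = {}) {
  return JSON.stringify({ ts: new Date().toISOString(), level, message, ...extra });
}

// Every log call produces a single parseable line on stdout.
console.log(formatLog('error', 'db query failed', { status: 500, path: '/orders' }));
```

The one-object-per-line discipline is the whole trick; as soon as a stack trace spans multiple raw lines, the `jq` pipeline stops working.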
## Dev Containers: The Path Not (Yet) Taken
I keep coming back to the idea of devcontainers. The concept: your entire development environment (editor, tools, shell config, project dependencies) runs inside a container. VS Code (or other supported editors) connects to the container and everything happens there. No "works on my machine" because everyone's machine is the container.
We haven't made this switch yet. The current Docker Compose setup works well enough that the pain of migration hasn't outweighed the inertia. But there are clear advantages I'm aware of: file system performance isn't an issue because the files live inside the Linux filesystem natively instead of being mounted from the host. New developer onboarding goes from "install Docker and run compose" to "install Docker and open VS Code." Editor extensions and tooling config are version-controlled alongside the project.
The thing holding me back: our frontend developers are split between VS Code and WebStorm. Devcontainer support in WebStorm exists but it's less mature than VS Code's. Asking half the team to switch editors feels like the wrong forcing function. If JetBrains catches up on devcontainer support โ or if our team standardizes on VS Code for other reasons โ that's probably when we make the move.
For now, Docker Compose for local development. The current setup is good enough. For now.
## Further Resources
- Docker Compose Documentation: the official reference for Compose file syntax, networking, volumes, and multi-service configuration.
- Docker Development Best Practices: Docker's guide to structuring development environments, managing images efficiently, and writing production-ready containers.
- Awesome Compose (GitHub): a curated collection of Docker Compose samples for common application stacks, maintained by Docker.
Written by
Anurag Sinha
Full-stack developer specializing in React, Next.js, cloud infrastructure, and AI. Writing about web development, DevOps, and the tools I actually use in production.