How Docker Compose Actually Went for My Development Setup
Addressing misconceptions about containerized local setups, configuration pain points, and why volume mounting behaves the way it does.

So here's how it started. We had a Node API, a React frontend, a Postgres database, and Redis for session caching. Four developers on the team. Everyone running different versions of Node. One person on an M1 Mac, two on Intel Macs, me on Windows. And every single week, something would work on one machine but not the others.
The breaking point was a Monday standup where three out of four of us had spent Friday afternoon debugging environment issues instead of writing actual features. Someone brought up Docker Compose. I'd used Docker before (pulling images, running one-off containers) but never as a full local development environment. Figured it couldn't be worse than what we already had.
It was worse. At first.
The first docker-compose.yml was embarrassingly bad
I wrote something that night. Just threw together a file based on examples from blog posts. Looked something like this:
services:
  api:
    build: ./api
    ports:
      - "4000:4000"
    depends_on:
      - db
      - redis
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: password
  redis:
    image: redis:7
Ran docker compose up. Everything pulled, built, started. Green text in the terminal. I felt great for about forty-five seconds.
Then the API threw a connection refused error trying to hit Postgres. The frontend couldn't talk to the API. And every time I changed a line of code, nothing happened; I had to rebuild the whole image. Fifteen seconds of waiting per code change. That's not development, that's punishment.
The depends_on lie
Let me talk about that connection refused thing first because it wasted a solid hour of my evening.
depends_on does not do what you think it does. Or at least it doesn't do what I thought it did. I assumed it meant "wait until this service is ready, then start the next one." That's not what happens. Docker starts the containers in order, sure. But "started" and "ready to accept connections" are very different things for a database.
Postgres takes a few seconds to initialize. Sometimes longer. The API container would boot, try to connect immediately, fail, and crash. Then Compose would restart it (if you had restart policies), but sometimes it'd crash again because Postgres still wasn't done.
I tried a bunch of stuff. Added a sleep 10 to the API's start command. That worked but felt disgusting. Tried a shell script that looped and checked the connection. That worked better but added complexity. Then I found the actual answer:
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: password
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 3s
      timeout: 3s
      retries: 10
  api:
    build:
      context: ./api
      dockerfile: Dockerfile.dev
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
The condition: service_healthy bit is what actually makes Docker wait. Without it, depends_on is just ordering, not waiting. I'm still kind of annoyed this isn't the default behavior. Or at least that the documentation doesn't scream about this louder.
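Redis is left on condition: service_started above; if you ever want to gate on it the same way, here's a sketch of the equivalent healthcheck (redis-cli ping answers once the server is accepting connections):
redis:
  image: redis:7
  healthcheck:
    test: ["CMD", "redis-cli", "ping"]
    interval: 3s
    timeout: 3s
    retries: 10
With that in place, the api entry can use condition: service_healthy for redis too, same as it does for db.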
Volume mounts, or: why my code changes did nothing
The bigger problem was that I'd built my images with COPY . . in the Dockerfile, and that was it. No volume mounting. So the code that ran inside the container was a frozen snapshot from build time. Change a file on your host machine? The container doesn't care. It has its own copy.
The fix is to mount your project directory into the container at runtime. Overwrite the copied files with a live bind mount.
services:
  api:
    build:
      context: ./api
      dockerfile: Dockerfile.dev
    volumes:
      - ./api:/app
    ports:
      - "4000:4000"
Now when I edit ./api/src/server.js on my machine, the file changes inside the container too, because /app is just a mirror of the host directory. Pair that with nodemon or any file-watching tool and you've got hot reload working inside a container.
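For the record, a minimal sketch of that pairing, with nodemon assumed to be a devDependency and the entry path borrowed from the example above:
services:
  api:
    build:
      context: ./api
      dockerfile: Dockerfile.dev
    volumes:
      - ./api:/app
    command: npx nodemon src/server.js
If file changes don't trigger a restart through the mount, nodemon's --legacy-watch flag (polling instead of filesystem events) usually sorts it out, at the cost of a bit of CPU.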
Except. Except except except.
The node_modules disaster
This was the worst one. After setting up volume mounts, I ran docker compose up and got a wall of errors. Something about bcrypt failing to load. Native module compilation errors. The kind of stuff that makes you want to close the laptop and go outside.
Here's what was happening. My host machine is Windows. The container runs Linux. When you mount ./api:/app, you're also mounting ./api/node_modules into the container. Those node_modules were installed on Windows. They contain native binaries compiled for Windows. The Linux container tries to use them and, yeah, it doesn't work.
The obvious fix is "just delete node_modules from the host before mounting." But then you lose your IDE's ability to resolve imports, autocomplete breaks, linting breaks. Not great.
The actual fix is a trick with anonymous volumes:
volumes:
  - ./api:/app
  - /app/node_modules
That second line creates an anonymous volume at /app/node_modules inside the container. It masks the host's node_modules directory. So the container keeps its own Linux-compiled node_modules, and your host keeps its own Windows or Mac version. Both coexist. The container doesn't see your host's broken binaries.
I'd love to say I figured this out myself. I didn't. I found it in a GitHub issue comment from 2019 with twelve thumbs-up reactions. The kind of knowledge that apparently everyone knows but nobody writes about properly.
One catch, though: if you add a new dependency, you need to rebuild the image (or exec into the container and run npm install there). The anonymous volume persists between restarts, so the new package won't appear until you either rebuild or install it inside the container. I keep forgetting this and spending five minutes wondering why require('some-new-package') throws a not-found error.
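Concretely, that's one of these two commands (some-new-package is a placeholder; the -V flag tells Compose to renew anonymous volumes when it recreates the container, so the rebuilt image's node_modules actually shows up):
docker compose exec api npm install some-new-package
docker compose up -d --build -V api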
Making the frontend work was its own thing
The API was the hard part, honestly. The frontend was easier but had its own quirks.
With Next.js or Vite, the dev server runs inside the container on some port, and you forward that port to your host. Simple enough. But I ran into an issue where the dev server would bind to 127.0.0.1 inside the container, which meant it wasn't accessible from outside the container even with port forwarding.
Had to make sure the dev server binds to 0.0.0.0:
{
  "scripts": {
    "dev": "next dev -H 0.0.0.0"
  }
}
Or for Vite:
import { defineConfig } from 'vite'

export default defineConfig({
  server: {
    host: '0.0.0.0'
  }
})
Small thing. Lost twenty minutes to it.
Hot module replacement was another story. HMR uses a websocket connection from the browser to the dev server. When the dev server is inside a container, the browser sometimes can't establish that websocket, depending on how your ports and hosts are configured. For our setup it worked out of the box once the host binding was right, but I've heard from other teams that they had to configure the hmr settings to use a specific port or client URL. Haven't hit that wall yet, so I can't say much about it.
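Purely for reference, since we haven't needed it ourselves: in Vite those knobs live under server.hmr, and an untested sketch looks like this:
import { defineConfig } from 'vite'

export default defineConfig({
  server: {
    host: '0.0.0.0',
    hmr: {
      // Port the browser uses for the HMR websocket; set this when the
      // published host port differs from the port inside the container.
      clientPort: 3000
    }
  }
})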
Debugging inside a container
This was something I assumed would be painful and it turned out to be... fine? For Node.js, you expose the inspector port and pass the --inspect flag:
api:
  ports:
    - "4000:4000"
    - "9229:9229"
  command: node --inspect=0.0.0.0:9229 src/index.js
VS Code attaches to localhost:9229 and you get breakpoints, step-through, watch variables, all of it. The 0.0.0.0 part matters, for the same reason as the dev server binding. If the debugger only listens on 127.0.0.1 inside the container, VS Code on the host can't reach it.
I set up a launch.json that connects automatically:
{
  "type": "node",
  "request": "attach",
  "name": "Docker: Attach to Node",
  "port": 9229,
  "address": "localhost",
  "localRoot": "${workspaceFolder}/api",
  "remoteRoot": "/app",
  "restart": true
}
The localRoot and remoteRoot mapping is important. Without it, VS Code doesn't know that /app/src/server.js in the container is actually ./api/src/server.js on your machine. Breakpoints won't hit if the paths don't line up.
One annoying thing: if nodemon restarts the process, the debug connection drops and VS Code has to reattach. The "restart": true option helps, but there's still a brief moment where you're disconnected. Not a dealbreaker. Just a small tax you pay.
Database seeding and migrations
I wanted new developers to run docker compose up and have a working database. Not an empty Postgres instance where they'd have to figure out which migration commands to run.
Postgres and MySQL images support entrypoint scripts. Mount a .sql file into /docker-entrypoint-initdb.d/ and it runs automatically the first time the database volume is created:
db:
  image: postgres:16
  environment:
    POSTGRES_USER: dev
    POSTGRES_PASSWORD: password
    POSTGRES_DB: myapp
  volumes:
    - pgdata:/var/lib/postgresql/data
    - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U dev"]
    interval: 3s
    retries: 10
volumes:
  pgdata:
That init.sql creates the tables, inserts some seed data, whatever you need. First time someone runs compose, they get a fully populated database.
But here's the thing: it only runs on first initialization. If the pgdata volume already exists with data in it, the entrypoint scripts are skipped. I burned time on this when I updated the init script and nothing changed. Had to run docker compose down -v to destroy the volume and start fresh. The -v flag. Easy to forget.
For ongoing migrations (like when we add a column or change a schema), the init script approach doesn't work. We ended up adding a migration command to the API's startup:
# Dockerfile.dev
CMD ["sh", "-c", "npx prisma migrate deploy && node src/index.js"]
Prisma runs any pending migrations before the app starts. Works well enough. Though I've seen it fail when the migration itself depends on data that the seed script creates in a certain order. That was a fun afternoon.
Environment variables and .env files
Docker Compose reads .env files natively if they're in the same directory as the compose file. You can also specify them per-service:
api:
  env_file:
    - ./api/.env.docker
  environment:
    DATABASE_URL: postgres://dev:password@db:5432/myapp
    REDIS_URL: redis://redis:6379
Notice the hostnames. Inside the Docker network, services refer to each other by their service name. Not localhost, not 127.0.0.1. The service name db resolves to the Postgres container's IP. This tripped me up early on. My .env file had DATABASE_URL=postgres://localhost:5432/myapp and of course that didn't work inside the container because there's no Postgres running on the container's localhost.
We ended up with three .env variants:
.env for running outside Docker on the host
.env.docker for running inside Docker Compose
.env.production for deployment
It's a bit messy. I don't love it. There are probably smarter ways to handle this (variable substitution in the compose file, or a single env file with conditional values), but honestly, three files works and nobody on the team has complained about it yet. Pick your battles.
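If we ever do consolidate, the variable substitution route would look roughly like this. Compose reads a .env file in the project root and interpolates ${VAR} references in the compose file; the names here are made up for illustration:
# .env in the project root, read automatically by Compose
POSTGRES_PASSWORD=password
API_PORT=4000
# docker-compose.yml
services:
  api:
    ports:
      - "${API_PORT:-4000}:4000"
    environment:
      DATABASE_URL: postgres://dev:${POSTGRES_PASSWORD}@db:5432/myapp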
Watching logs without losing your mind
When you run docker compose up, all the logs from all services interleave in one terminal. It gets noisy fast. Color-coded by service name, but still, when the API is spitting out request logs and Postgres is logging every query, good luck finding anything.
docker compose logs -f api follows just one service. That helps. I usually run compose in detached mode (docker compose up -d) and then tail specific services in separate terminal tabs.
For actual log searching, docker compose logs api | grep "ERROR" works but it's crude. We haven't set up anything fancier for local dev. In production we use proper log aggregation, but locally? Grep and scrolling. It's fine.
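Concretely, the commands that cover ninety percent of it for me (--tail and -t are just niceties: limit the scrollback, add timestamps):
docker compose up -d
docker compose logs -f --tail=100 api
docker compose logs -t api | grep "ERROR"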
The compose file grew. And grew.
Where we ended up, the compose file is around 90 lines. API, frontend, Postgres, Redis, a background worker service, and a mailcatcher for testing emails locally. Each with their own volumes, environment variables, health checks, port mappings.
services:
  api:
    build:
      context: ./api
      dockerfile: Dockerfile.dev
    volumes:
      - ./api:/app
      - /app/node_modules
    ports:
      - "4000:4000"
      - "9229:9229"
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
    env_file:
      - ./api/.env.docker
    command: npx nodemon --inspect=0.0.0.0:9229 src/index.js
  worker:
    build:
      context: ./api
      dockerfile: Dockerfile.dev
    volumes:
      - ./api:/app
      - /app/node_modules
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
    env_file:
      - ./api/.env.docker
    command: node src/worker.js
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile.dev
    volumes:
      - ./frontend:/app
      - /app/node_modules
    ports:
      - "3000:3000"
    env_file:
      - ./frontend/.env.docker
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: password
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U dev"]
      interval: 3s
      retries: 10
  redis:
    image: redis:7
    ports:
      - "6379:6379"
  mailcatcher:
    image: schickling/mailcatcher
    ports:
      - "1080:1080"
      - "1025:1025"
volumes:
  pgdata:
It works. The whole team runs this now. New developers clone the repo, run docker compose up, and have everything running in under two minutes. Compared to the old setup doc, fourteen bullet points of "install this, configure that, make sure this version matches," it's a big improvement.
What still bothers me
Performance. On macOS, volume mounts are slow. Noticeably slow. File watching triggers with a delay. Builds that take 2 seconds on the host take 5-6 seconds through the Docker mount. The mount consistency flags (cached, delegated) and the newer VirtioFS file-sharing backend in Docker Desktop help. We turned on VirtioFS and it got better, maybe 70% of native speed. Still not native though. On Linux this isn't an issue because bind mounts are direct filesystem access. On Windows with WSL2, it's somewhere in the middle: decent if your files live inside the WSL filesystem, awful if they're on the Windows side mounted via /mnt/c.
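For completeness, the consistency flags go on the mount itself in the short syntax. How much they still do depends on your Docker Desktop version; as far as I can tell, newer backends treat them as hints at best:
volumes:
  - ./api:/app:cached
  - /app/node_modules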
I haven't figured out a clean answer for this. Some people recommend running everything in a devcontainer where the files live entirely inside the Linux VM. That's probably the right move. Haven't made the switch yet because our current setup works well enough and I don't want to introduce another layer of "how does my editor see these files" confusion.
The other thing is image layer caching. When someone changes package.json and rebuilds, Docker invalidates the cache from the COPY package*.json layer onward, which means npm install runs again from scratch. There are ways to optimize this (multi-stage builds, mounting the npm cache as a Docker build cache), but we haven't bothered yet. Rebuilds happen maybe once or twice a week per person. Not often enough to justify the yak shaving.
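If we ever do bother, the shape of the fix is to copy the manifests before the rest of the source and let BuildKit keep a persistent npm cache. A sketch, not our actual Dockerfile, and the base image is a placeholder:
# syntax=docker/dockerfile:1
FROM node:20
WORKDIR /app
# Only this layer is invalidated when dependencies change
COPY package*.json ./
# BuildKit cache mount keeps the npm download cache across builds
RUN --mount=type=cache,target=/root/.npm npm ci
# Source changes from here on no longer trigger a reinstall
COPY . .
CMD ["node", "src/index.js"]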
I also still don't fully understand Docker networking beyond the basics. Each compose project gets its own network, services can talk to each other by name, exposed ports are available on the host. But when I tried to have two compose projects talk to each other (one for the main app, one for a shared microservice), I had to create external networks and I'm not confident I did it right. It works, but I couldn't explain the configuration to someone else confidently.
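For anyone in the same boat, the general shape of the external-network pattern is below. It's a sketch with made-up names, not a claim that this is the cleanest wiring:
# created once by hand: docker network create shared-dev
# then, in each project's docker-compose.yml:
networks:
  default: {}
  shared-dev:
    external: true
services:
  api:
    networks:
      - default
      - shared-dev
Services attached to shared-dev in either project can then reach each other by service name over that network.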
Maybe that's the thing about Docker Compose for development. You reach a point where it's working, the whole team is productive, and you stop poking at it. There are things I'd still like to clean up. The three-env-file situation. The rebuild-when-you-add-a-package workflow. Getting devcontainers working properly. But there's always something else to ship, and the current setup is... it's good enough. For now. I think. I'll probably revisit the networking thing next month when we split out another service, and I'm already not looking forward to it.
Written by
Anurag Sinha
Developer who writes about the stuff I actually use day-to-day. If I got something wrong, let me know.