docker · devops · containers

Docker: A Practical Guide for Full-Stack Developers

A practical guide to Docker — setup, core concepts, common mistakes, and production tips for full-stack developers.


Suhail Roushan

April 10, 2026 · 6 min read

Docker is a tool that packages your application and its dependencies into a single, portable container that runs the same everywhere.

If you've ever said "it works on my machine," you understand the problem Docker solves. As a full-stack developer, I use Docker to create identical environments from my local machine all the way to production. This guide cuts through the hype and shows you the practical parts you'll actually use. We'll cover setup, core concepts, and the common pitfalls I've seen teams stumble over.

Why Docker Matters (and When to Skip It)

Docker matters because it standardizes the "works on my machine" problem into a "works on any machine" solution. It encapsulates your app's OS, runtime, system tools, libraries, and code into a single image. This consistency is invaluable for team collaboration, CI/CD pipelines, and deploying microservices.

However, you can skip Docker for trivial, single-developer projects or when you're just learning a new framework. The overhead of learning Docker Compose and managing containers isn't worth it for a simple to-do app that will never leave your laptop. For anything beyond that—especially team projects or services you plan to deploy—Docker quickly becomes essential.

Getting Started with Docker

You need two files: a Dockerfile to build your image and a docker-compose.yml to orchestrate services. Let's containerize a basic Node.js API. First, create a Dockerfile in your project root.

# Use an official Node runtime as the base image
FROM node:18-alpine

# Set the working directory in the container
WORKDIR /usr/src/app

# Copy package files and install dependencies
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the rest of the application code
COPY . .

# Expose the port the app runs on
EXPOSE 3000

# Define the command to run the app
CMD ["node", "server.js"]

This is a minimal, production-oriented Dockerfile. It uses the small Alpine Linux base, installs only production dependencies, and defines how to start your app.
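The second file can be just as small. Here's a minimal docker-compose.yml sketch for the same API (the service name api is an assumption; the port matches the EXPOSE line above):

```yaml
# docker-compose.yml — minimal sketch for the API above
services:
  api:
    build: .               # build the image from the Dockerfile in this directory
    ports:
      - "3000:3000"        # map host port 3000 to the container's port 3000
    environment:
      NODE_ENV: production
```

With both files in place, docker compose up --build builds the image and starts the service in one command.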

Core Docker Concepts Every Developer Should Know

1. Images vs. Containers

An image is a read-only template with instructions for creating a container. A container is a runnable instance of an image. You build an image once and run many containers from it.

# Build an image from your Dockerfile and tag it
docker build -t my-app:1.0 .

# List your local images
docker images

# Run a container from that image, mapping port 3000
docker run -p 3000:3000 my-app:1.0
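Beyond build and run, a handful of commands cover the day-to-day container lifecycle. A sketch using the my-app:1.0 image built above (the container name is an assumption):

```shell
# Run detached with a name so later commands can reference it
docker run -d --name my-app -p 3000:3000 my-app:1.0

# List running containers
docker ps

# Tail the container's logs
docker logs -f my-app

# Open a shell inside the running container (Alpine ships sh, not bash)
docker exec -it my-app sh

# Stop and remove the container when you're done
docker stop my-app
docker rm my-app
```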

2. Volumes for Persistent Data

Containers are ephemeral. When they stop, all changes inside are lost. Volumes are the mechanism for persisting data generated by and used by Docker containers. This is critical for databases.

# docker-compose.yml snippet for a PostgreSQL service with a volume
services:
  db:
    image: postgres:15
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: secret

volumes:
  postgres_data: # Named volume persists data
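Named volumes are for data the container owns. For local development you'll often also want a bind mount, which maps your source tree into the container so code changes appear without a rebuild. A sketch (service name and paths are assumptions based on the Dockerfile earlier):

```yaml
services:
  api:
    build: .
    volumes:
      - ./src:/usr/src/app/src     # bind mount: host source mapped into the container
      - /usr/src/app/node_modules  # anonymous volume: keep the image's node_modules
```

The second entry stops the host's (possibly platform-mismatched) node_modules from shadowing the one installed inside the image.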

3. Multi-Stage Builds for Efficiency

Multi-stage builds allow you to use multiple FROM statements in one Dockerfile. You can use one stage for building and compiling your application and a second, much leaner stage for the final runtime image. This drastically reduces your final image size.

# Stage 1: Build
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: Runtime
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package*.json ./
RUN npm ci --omit=dev
EXPOSE 3000
CMD ["node", "dist/server.js"]

This is a game-changer for compiled languages and frontend frameworks, keeping your production images small and secure.
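A useful side effect of naming stages: docker build --target lets you build just one of them, for example a development image that still has devDependencies installed:

```shell
# Build only the "builder" stage from the multi-stage Dockerfile above
docker build --target builder -t my-app:dev .

# Build the full (final) runtime stage as usual
docker build -t my-app:1.0 .
```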

Common Docker Mistakes and How to Fix Them

Mistake 1: Running as Root. By default, containers run as the root user, which is a security risk. Fix: Create a non-root user in your Dockerfile and switch to it.

FROM node:18-alpine
RUN addgroup -g 1001 -S nodejs && adduser -S nodejs -u 1001
WORKDIR /app
# ... copy files and install dependencies (as root) ...
# Switch to the non-root user before starting the app.
# Note: the comment must be on its own line — Dockerfiles don't
# support inline comments after an instruction.
USER nodejs
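You can verify the fix by overriding the container's command to print the current user (assuming the image is tagged my-app:1.0):

```shell
# whoami is available in Alpine's busybox; this should print "nodejs"
docker run --rm my-app:1.0 whoami
```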

Mistake 2: Not Using .dockerignore. This leads to bloated images and slow builds as your local node_modules and log files get copied into the build context. Fix: Create a .dockerignore file in your project root.

node_modules
npm-debug.log
.git
.env
.DS_Store
dist

Mistake 3: Binding to Localhost in Code. If your app server listens only on 127.0.0.1 (localhost), it won't be accessible from outside the container. Fix: Bind to all interfaces (0.0.0.0).

// In your Node.js/Express server file
const PORT = process.env.PORT || 3000;
app.listen(PORT, '0.0.0.0', () => { // Listen on 0.0.0.0
  console.log(`Server running on port ${PORT}`);
});

When Should You Use Docker?

Use Docker when you need consistent environments across multiple stages (development, testing, production) or across a team of developers. It's ideal for microservices architectures, where each service can have its own isolated environment and dependencies. Use it when you want to simplify your deployment process by creating a single artifact—the container image—that runs anywhere Docker is installed.

Avoid Docker for extremely simple, static websites where platform differences are irrelevant, or when the operational complexity of managing a container orchestration system outweighs the benefits. For many full-stack projects, the sweet spot is using Docker Compose for local development and a container registry with a cloud provider for production.

Docker in Production

For production, your Docker setup needs more rigor. First, always use specific image tags (like my-app:1.2.3) instead of the mutable latest tag. This ensures you know exactly what version is running and can roll back predictably.

Second, integrate Docker into your CI/CD pipeline. Your pipeline should build the image, run tests against it, and push the validated image to a registry (like Docker Hub, AWS ECR, or Google Container Registry). Your production servers then pull and run this pre-tested image. This is the pattern I use for projects on suhailroushan.com.
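The core of that pipeline is only a few commands. A sketch using Docker Hub (the username and version number are placeholders):

```shell
# Build with an explicit version tag, never just "latest"
docker build -t yourname/my-app:1.2.3 .

# Smoke-test the freshly built image (in practice, run your test suite
# against a build stage that still has devDependencies installed)
docker run --rm yourname/my-app:1.2.3 node --version

# Push the validated image to the registry
docker push yourname/my-app:1.2.3
```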

Finally, don't run Docker in production alone. Use an orchestrator. For most projects, start with Docker Compose on a single server. As you scale, migrate to a managed service like AWS ECS or Google Cloud Run, which handle scheduling, health checks, and scaling for you.
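Even with plain Docker Compose on a single server, you can get basic health checking. A sketch assuming the API exposes a /health endpoint (an assumption; wget is used because Alpine's busybox includes it):

```yaml
services:
  api:
    build: .
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:3000/health"]
      interval: 30s
      timeout: 3s
      retries: 3
```

With this in place, docker ps reports the container as healthy or unhealthy, and orchestrators can restart it automatically.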

Start your next side project by writing the Dockerfile and docker-compose.yml first—it will force you to define your dependencies and service architecture clearly from day one.


Written by Suhail Roushan — Full-stack developer. More posts on AI, Next.js, and building products at suhailroushan.com/blog.
