Module 16 · Containers 60 min

Docker is one of the most important tools on Linux today. A container is a way to run an app and everything it needs, sealed in a little box, so it runs the same way on any computer. By the end of this module you'll be running them, building them, and wiring several together.

By the end of this module, you will:

  • Explain the difference between containers and virtual machines
  • Pull images, run containers, and manage their lifecycle
  • Write a Dockerfile and build a custom image
  • Use Docker Compose to run multi-service applications
  • Inspect a running container with logs, exec, and stats
  • Apply Dockerfile best practices: slim base images, layer caching, non-root users

Container vs virtual machine — what's the difference?

A virtual machine is like running a whole second computer inside your computer — its own pretend CPU, its own pretend memory, its own copy of Windows or Linux. A container is much smaller: it shares the Linux underneath and just wraps up the one app and the bits that app needs. Containers start in less than a second, use a few megabytes instead of gigabytes, and you can run hundreds at a time. Picture it like this: virtual machines are separate flats in a building. Containers are separate rooms that share the same plumbing.

Why Docker kills the "but it works on my machine" excuse

Have you ever helped someone with an app and they said "but it works on my laptop"? That happens because their laptop has the right versions of everything, and yours doesn't. A Docker image packs the app together with everything it needs to run — the right version of Linux bits, the right language, the right helper libraries. That same image runs exactly the same on your laptop, on your friend's Mac, on a test server, and on the live server. Nothing to forget, nothing to install. The container is the machine.

Docker Hub — the app store for containers

Docker Hub (at hub.docker.com) is a huge free shop where people share ready-made container images. Need Ubuntu? PostgreSQL? nginx? Node.js? Python? Redis? WordPress? They're all sitting there waiting. Instead of installing them with apt, you "pull" the image and run it a few seconds later. The "official" images are curated by Docker, usually together with the people who make the software, so they're a safe starting point.

Containers vs virtual machines, side by side

Thing being compared    | Container                   | Virtual machine
How long to start       | Less than a second          | 30 seconds to two minutes
How big on disk         | 10–500 MB                   | 5–50 GB
Memory it uses          | Almost none                 | Half a gig to 2 gig each
How it's separated      | Shares the Linux underneath | A whole second computer
How many on one machine | Hundreds                    | Tens
Moving it elsewhere     | Runs the same everywhere    | Depends on what's hosting it
What it's good at       | Single apps and services    | Running whole operating systems

Step 0: install Docker first

Docker doesn't come with Ubuntu out of the box. None of the commands in this module will work until you do these four things. Do them now, before you read any further.

Installing Docker on Ubuntu 24.04
# ── Step 1: Install Docker from Ubuntu's repos ────────────
user@ubuntu:~$ sudo apt update && sudo apt install docker.io

# ── Step 2: Enable and start the Docker daemon ────────────
user@ubuntu:~$ sudo systemctl enable --now docker

# ── Step 3: Add yourself to the docker group ──────────────
# (so you can run docker without sudo)
user@ubuntu:~$ sudo usermod -aG docker $USER
⚠ Log out and log back in for this to take effect

# ── Step 4: Verify the installation works ─────────────────
user@ubuntu:~$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.

docker.io or docker-ce — which one?

There are two versions of Docker you can install. docker.io is the one Ubuntu packages itself — a little older, but rock-solid and easy to install with one command. docker-ce (the "Community Edition") is Docker's own newer version. For learning, docker.io is fine. If you're running a real live service one day, follow Docker's official guide for docker-ce. Either way, every command in this module works exactly the same.

Clean up the disk every so often

Docker images pile up fast — every time you try something new, more bits sit on your disk. After playing around, run docker system prune and Docker will throw out the stopped containers, the leftover images, and the unused networks. Add -a to throw out every unused image: docker system prune -a. On a learning machine this can win you back several gigabytes.

The Docker commands you'll use every day

These are the ones you'll reach for over and over again. Learn them in this order — each one builds on the one before. Half an hour of poking around with them and they'll feel automatic.

Docker essentials — the daily workflow
# ── Images ────────────────────────────────────────────────
user@ubuntu:~$ docker pull ubuntu # download an image from Docker Hub
user@ubuntu:~$ docker images # list local images
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
ubuntu       latest   35a88802559d   2 weeks ago   78.1MB
user@ubuntu:~$ docker rmi ubuntu # remove an image

# ── Containers ────────────────────────────────────────────
user@ubuntu:~$ docker run -it ubuntu bash # run interactively (like SSH into it)
user@ubuntu:~$ docker run -d nginx # run detached (in background)
user@ubuntu:~$ docker ps # list running containers
CONTAINER ID   IMAGE   COMMAND   CREATED         STATUS         PORTS    NAMES
a1b2c3d4e5f6   nginx   ...       3 seconds ago   Up 2 seconds   80/tcp   silly_jones
user@ubuntu:~$ docker ps -a # all containers including stopped
user@ubuntu:~$ docker stop a1b2c3d4e5f6 # stop a running container
user@ubuntu:~$ docker rm a1b2c3d4e5f6 # remove a stopped container
user@ubuntu:~$ docker logs a1b2c3d4e5f6 # view container output
user@ubuntu:~$ docker exec -it a1b2c3d4e5f6 bash # shell inside running container
docker run -it IMAGE bash
Start a container and get a shell inside it — a bit like SSHing into a tiny computer. -i keeps the typing channel open; -t gives you a proper terminal. Type exit to leave; the container shuts down when you do.
docker run -d --name myapp IMAGE
Run it in the background and give it a friendly name. After that you can say docker stop myapp instead of remembering a long ID.
docker run --rm IMAGE
Throw the container away as soon as it finishes. Good for quick one-off jobs — nothing to clean up afterwards. Pair with -it for an interactive try-it-and-forget-it run.
docker inspect CONTAINER
Dumps out everything Docker knows about a container — its IP address, what's mounted, what variables are set, and so on. It's a big JSON blob. Pipe it through jq if you only want one field.

Building your own image — the Dockerfile

A Dockerfile is a recipe. Each line adds another step. Docker is clever: it remembers each step, and when you change a line it only redoes the steps from that line down. So put the lines that change a lot (like copying your code) at the bottom, and the lines that almost never change (like installing libraries) at the top. Your rebuilds will be much faster.

Dockerfile — a simple web app
# Start from an official base image (always pin a version in production)
FROM python:3.12-slim

# Set working directory inside the container
WORKDIR /app

# Copy requirements first — Docker caches this layer
# It won't re-run pip install unless requirements.txt changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code (changes most frequently — goes last)
COPY . .

# Document which port the app listens on (informational)
EXPOSE 8000

# Command to run when container starts
CMD ["python", "app.py"]

Building and running the image
# Build the image — tag it with a name
user@ubuntu:~$ docker build -t myapp .
Step 1/6 : FROM python:3.12-slim
Step 2/6 : WORKDIR /app
Step 3/6 : COPY requirements.txt .
Step 4/6 : RUN pip install --no-cache-dir -r requirements.txt
Step 5/6 : COPY . .
Step 6/6 : CMD ["python", "app.py"]
Successfully built d4a2e8f1c6b3
Successfully tagged myapp:latest

# Run the image — map host port 8080 to container port 8000
user@ubuntu:~$ docker run -p 8080:8000 myapp
Serving on http://0.0.0.0:8000
# Visit http://localhost:8080 in your browser

The Dockerfile words you'll see most

FROM — the image to start from. Always the first line. RUN — run a command while building (this is how you install things). COPY — copy files from your computer into the image. WORKDIR — set the folder Docker should work in inside the image. ENV — set an environment variable. EXPOSE — tell people which port the app listens on (it doesn't open it, it just documents it). CMD — the command that runs when the container starts. ENTRYPOINT — like CMD but harder to swap out — use ENTRYPOINT for the program and CMD for the arguments you pass it.
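To make the ENTRYPOINT/CMD split concrete, here's a tiny hypothetical sketch (the image name and --port flag are made up for illustration):

```dockerfile
# Hypothetical: ENTRYPOINT is the program, CMD is its default arguments
FROM python:3.12-slim
WORKDIR /app
COPY . .
ENTRYPOINT ["python", "app.py"]
CMD ["--port", "8000"]
```

With this setup, docker run myapp runs python app.py --port 8000, while docker run myapp --port 9000 swaps out only the arguments — the CMD part — and leaves the ENTRYPOINT program alone.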

Volumes and port mapping

By default, when you delete a container everything inside it disappears — files, database, the lot. A volume fixes that: it keeps the data on your real computer, so the container can be thrown away without losing anything. Port mapping is the other thing: it opens a door from inside the container out to your laptop, so you can actually visit the app in your web browser.

Volumes and port mapping
# ── Port mapping: -p host_port:container_port ─────────────
user@ubuntu:~$ docker run -d -p 8080:80 nginx
# nginx listens on port 80 inside, accessible at localhost:8080 outside

# ── Volume mount: persist data outside the container ──────
user@ubuntu:~$ docker run -d \
-v /home/user/pgdata:/var/lib/postgresql/data \
-e POSTGRES_PASSWORD=secret \
postgres:16
# Database files persist in /home/user/pgdata on the host
# Deleting the container doesn't lose the data

# ── Named volumes (Docker manages the path) ───────────────
user@ubuntu:~$ docker volume create pgdata
user@ubuntu:~$ docker run -d -v pgdata:/var/lib/postgresql/data postgres:16
user@ubuntu:~$ docker volume ls # list volumes
user@ubuntu:~$ docker volume rm pgdata # delete volume (fails if any container, even a stopped one, still uses it)

Docker Compose — apps made of several containers

Real apps are usually made of a few pieces — a website, a database, maybe a cache to make it fast. Starting each one by hand with a long docker run command is painful and easy to get wrong. Docker Compose lets you write down the whole setup in one file, and then anyone can start the entire app with a single command.

docker-compose.yml — web app + PostgreSQL + Redis
services:
  web:
    build: .                  # build from local Dockerfile
    ports:
      - "8000:8000"           # expose to host
    environment:
      - DATABASE_URL=postgresql://user:pass@db:5432/myapp
      - REDIS_URL=redis://cache:6379
    depends_on:
      - db
      - cache
    volumes:
      - ./:/app               # mount code for hot-reload in dev

  db:
    image: postgres:16
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    volumes:
      - pgdata:/var/lib/postgresql/data

  cache:
    image: redis:7-alpine

volumes:
  pgdata:                     # named volume — persists across restarts

Docker Compose commands
user@ubuntu:~$ docker compose up -d # start all services detached
user@ubuntu:~$ docker compose logs -f # stream logs from all services
user@ubuntu:~$ docker compose ps # status of all services
user@ubuntu:~$ docker compose exec web bash # shell inside the web service
user@ubuntu:~$ docker compose restart web # restart one service
user@ubuntu:~$ docker compose down # stop and remove containers (keeps volumes)
user@ubuntu:~$ docker compose down -v # also delete volumes (data gone!)

Services call each other by name

Inside Compose, the containers can talk to each other by their service name as if it were a website address. So the web container reaches the database at db:5432, not localhost:5432. That's why DATABASE_URL=postgresql://user:pass@db:5432/myapp works — Docker turns the word db into the database container's address for you, automatically.
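To see what that URL actually carries, here's a small shell sketch that pulls the host name back out of it (the URL is the one from the Compose file above; the parsing is just plain shell, nothing Docker-specific):

```shell
# The URL Compose hands to the web service. "db" is the service name,
# not an IP address; Docker's built-in DNS resolves it at connection time.
DATABASE_URL="postgresql://user:pass@db:5432/myapp"

DB_HOST="${DATABASE_URL#*@}"   # drop everything up to and including '@'
DB_HOST="${DB_HOST%%:*}"       # drop the port and database name
echo "$DB_HOST"                # prints: db
```

The point: the host buried in the URL is just the service name from docker-compose.yml. Rename the service and the URL has to change with it.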

Peeking inside running containers — logs, exec, stats

The most common Docker problem is a container that is running but doing the wrong thing. Three commands solve it nine times out of ten — they're the container versions of journalctl, ssh and htop from earlier modules.

Command                        | What it does                                                                 | The same idea on a normal machine
docker logs CONTAINER          | Show everything the app has printed so far (the normal output and the errors) | journalctl -u service
docker logs -f CONTAINER       | Follow the output live as new lines come in (Ctrl+C to stop)                | tail -f /var/log/...
docker exec -it CONTAINER bash | Open a shell inside the running container so you can poke around            | SSHing into the machine
docker stats                   | Live view of CPU, memory and disk use for every running container           | htop
docker inspect CONTAINER       | Everything Docker knows about the container, dumped as JSON                 | (no direct equivalent)
docker top CONTAINER           | Show the processes running inside the container                             | ps aux

The go-to routine when something's wrong: docker ps (is it even running?) → docker logs --tail 50 CONTAINER (what did it say went wrong?) → docker exec -it CONTAINER bash (climb inside and look around). Those three commands, in that order, will sort most "but it worked on my machine" mysteries in about a minute.

Dockerfile habits — smaller, faster, safer

Any Dockerfile that builds will run. But a Dockerfile that follows these habits will be smaller, faster to rebuild, safer, and easier to fix later. Once you're shipping images people actually depend on, none of these are optional.

  • Use "slim" or "alpine" base images (python:3.12-slim, not python:3.12). Smaller download, and fewer extra programs that could go wrong or get attacked. A slim image can be ten times smaller than the full one.
  • Put the lines that change rarely at the top, and the lines that change a lot at the bottom. Docker remembers each step. If you COPY requirements.txt and RUN pip install before COPY . ., Docker doesn't reinstall every library just because you changed one line of code.
  • Glue related RUN commands together with &&. Each RUN makes another layer in the image. Fewer layers means a smaller image and quicker builds.
  • Don't run as the admin user — add USER appuser. If someone breaks into your app, they're stuck as a normal user inside the container instead of admin. By default Docker runs as admin (root), which is risky.
  • Use a .dockerignore file. It stops things like node_modules, .git and old build files from being copied into your image. Faster builds, smaller image, and you won't accidentally bake a secret into it.
  • Use multi-stage builds (FROM ... AS builder, then COPY --from=builder in a later stage). Build the program in a big image, then copy only the finished program into a tiny one to ship. Standard trick for Go, Rust and other compiled languages: your final image can be a few MB instead of a few GB.
  • Always say which version: FROM python:3.12.5-slim, not python:latest. That way the build behaves the same every time. latest means "whatever's newest", which can quietly change and break things on a Tuesday morning months later.
  • Scan your images with docker scout or trivy. These free tools look at the libraries inside your image and warn you if any of them have known security problems before you ship.
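Here's a hypothetical Dockerfile that puts several of these habits together: a multi-stage build, chained RUN commands, a pinned slim base, and a non-root user. The app name, user name and paths are made up for illustration — adapt them to your project.

```dockerfile
# Stage 1: build in a full-size image (hypothetical Go app; paths are illustrative)
FROM golang:1.22 AS builder
WORKDIR /src
COPY . .
RUN go build -o /out/myapp .

# Stage 2: ship only the finished binary in a tiny, pinned base
FROM debian:12-slim
RUN useradd --create-home appuser && \
    apt-get update && \
    apt-get install -y --no-install-recommends ca-certificates && \
    rm -rf /var/lib/apt/lists/*   # chained RUN: one layer, no leftover apt cache
COPY --from=builder /out/myapp /usr/local/bin/myapp
USER appuser                       # don't run as root
CMD ["myapp"]
```

The builder stage can weigh a gigabyte; none of it ends up in the shipped image — only the one binary copied with --from=builder does.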

What to remember

  • Containers share the Linux underneath, so they start in less than a second and use far less memory than a virtual machine.
  • docker pull downloads an image. docker run turns it into a running container. docker ps shows what's running right now.
  • A Dockerfile is a written-down recipe you can rerun. Put the lines that change a lot at the bottom, so Docker can re-use the steps above.
  • Use -v to attach a volume. Without one, everything inside the container vanishes when you delete the container.
  • If your app has more than one piece, use Docker Compose. A single docker compose up -d starts everything.
  • When something breaks: docker ps → docker logs → docker exec -it CONTAINER bash. Almost every problem cracks open with that sequence.
  • Real images people depend on always use slim base images, ordered lines, a non-admin user inside, and pinned versions. Those aren't nice-to-haves — they're table stakes.