Module 16 · Containers
Docker is one of the most important tools on Linux today. A container is a way to run an app and everything it needs, sealed in a little box, so it runs the same way on any computer. In this module you'll run containers, build them, and wire several together.
By the end of this module, you will:
- Explain the difference between containers and virtual machines
- Pull images, run containers, and manage their lifecycle
- Write a Dockerfile and build a custom image
- Use Docker Compose to run multi-service applications
- Inspect a running container with logs, exec, and stats
- Apply Dockerfile best practices: slim base images, layer caching, non-root users
Container vs virtual machine — what's the difference?
A virtual machine is like running a whole second computer inside your computer — its own pretend CPU, its own pretend memory, its own copy of Windows or Linux. A container is much smaller: it shares the Linux underneath and just wraps up the one app and the bits that app needs. Containers start in less than a second, use a few megabytes instead of gigabytes, and you can run hundreds at a time. Picture it like this: virtual machines are separate flats in a building. Containers are separate rooms that share the same plumbing.
Why Docker kills the "but it works on my machine" excuse
Have you ever helped someone with an app and they said "but it works on my laptop"? That happens because their laptop has the right versions of everything, and yours doesn't. A Docker image packs the app together with everything it needs to run — the right version of Linux bits, the right language, the right helper libraries. That same image runs exactly the same on your laptop, on your friend's Mac, on a test server, and on the live server. Nothing to forget, nothing to install. The container is the machine.
Docker Hub — the app store for containers
Docker Hub (at hub.docker.com) is a huge free shop where people share ready-made container images. Need Ubuntu? PostgreSQL? nginx? Node.js? Python? Redis? WordPress? They're all sitting there waiting. Instead of installing them with apt, you "pull" the image and run it a few seconds later. The "official" images are made by the people who make the software, so they're safe to use as a starting point.
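To see how little is involved, try the official hello-world image, which exists purely for this first-run test:

```shell
docker pull hello-world   # download the image from Docker Hub
docker run hello-world    # run it; it prints a welcome message and exits
```

If you see the welcome text, Docker is installed and can reach Docker Hub.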
Containers vs virtual machines, side by side
| Thing being compared | Container | Virtual machine |
|---|---|---|
| How long to start | Less than a second | 30 seconds to two minutes |
| How big on disk | 10–500 MB | 5–50 GB |
| Memory it uses | Little beyond what the app itself needs | 512 MB to 2 GB each |
| How it's separated | Shares the Linux underneath | A whole second computer |
| How many on one machine | Hundreds | Tens |
| Moving it elsewhere | Runs the same everywhere | Depends on what's hosting it |
| What it's good at | Single apps and services | Running whole operating systems |
Step 0: install Docker first
Docker doesn't come with Ubuntu out of the box, and none of the commands in this module will work until it's installed. Install it now, before you read any further.
docker.io or docker-ce — which one?
There are two versions of Docker you can install. docker.io is the one Ubuntu packages itself — a little older, but rock-solid and easy to install with one command. docker-ce (the "Community Edition") is Docker's own newer version. For learning, docker.io is fine. If you're running a real live service one day, follow Docker's official guide for docker-ce. Either way, every command in this module works exactly the same.
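Going the docker.io route on Ubuntu, the install usually comes down to four commands. This is a sketch; package and group names can vary slightly between releases:

```shell
sudo apt update                      # refresh the package lists
sudo apt install docker.io           # install Docker from Ubuntu's own packages
sudo systemctl enable --now docker   # start the Docker service and keep it on after reboots
sudo usermod -aG docker $USER        # let your user run docker without sudo
```

The last command only takes effect after you log out and back in.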
Clean up the disk every so often
Docker images pile up fast — every time you try something new, more bits sit on your disk. After playing around, run docker system prune and Docker will throw out the stopped containers, the leftover images, and the unused networks. Add -a to throw out every unused image: docker system prune -a. On a learning machine this can win you back several gigabytes.
The Docker commands you'll use every day
These are the ones you'll reach for over and over again. Learn them in this order — each one builds on the one before. Half an hour of poking around with them and they'll feel automatic.
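Here's the whole lifecycle in one sitting, using the official ubuntu image (the container name box is just an example):

```shell
docker pull ubuntu                      # download the image from Docker Hub
docker run -it --name box ubuntu bash   # create a container and get a shell inside it
# ...poke around, then type exit; the container stops when bash does
docker ps -a                            # list all containers, including stopped ones
docker start -ai box                    # start the same container again and attach to it
docker rm box                           # after exiting again, delete the container
docker rmi ubuntu                       # delete the image too, if you're done with it
```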
A few habits worth picking up straight away:

- In docker run -it, the -i keeps the typing channel open and the -t gives you a proper terminal. Type exit to leave; the container shuts down when you do.
- Name your containers with --name myapp so you can type docker stop myapp instead of remembering a long ID.
- Add --rm alongside -it for an interactive try-it-and-forget-it run.
- Pipe docker inspect through jq if you only want one field.

Building your own image — the Dockerfile
A Dockerfile is a recipe. Each line adds another step. Docker is clever: it remembers each step, and when you change a line it only redoes the steps from that line down. So put the lines that change a lot (like copying your code) at the bottom, and the lines that almost never change (like installing libraries) at the top. Your rebuilds will be much faster.
The Dockerfile words you'll see most
- FROM — the image to start from. Always the first line.
- RUN — run a command while building (this is how you install things).
- COPY — copy files from your computer into the image.
- WORKDIR — set the folder Docker should work in inside the image.
- ENV — set an environment variable.
- EXPOSE — tell people which port the app listens on (it doesn't open it, it just documents it).
- CMD — the command that runs when the container starts.
- ENTRYPOINT — like CMD but harder to swap out. Use ENTRYPOINT for the program and CMD for the arguments you pass it.
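Putting those words together, a minimal Dockerfile for a small Python web app might look like this. The file names and the app itself are made up for illustration:

```dockerfile
# Start from a small official Python image
FROM python:3.12-slim

# Everything below happens inside /app
WORKDIR /app

# Install libraries first: this layer rarely changes, so Docker can reuse it
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the code last: it changes often, so only the steps from here down get redone
COPY . .

# Set a variable the app can read, and document the port it listens on
ENV PORT=8000
EXPOSE 8000

# What runs when the container starts
CMD ["python", "app.py"]
```

Build it with docker build -t myapp . from the folder that holds the Dockerfile.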
Volumes and port mapping
By default, when you delete a container everything inside it disappears — files, database, the lot. A volume fixes that: it keeps the data on your real computer, so the container can be thrown away without losing anything. Port mapping is the other thing: it opens a door from inside the container out to your laptop, so you can actually visit the app in your web browser.
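Both ideas fit in one command. Here's a sketch using the official postgres image; the volume name, container name and password are examples:

```shell
# -v keeps the database files in a named volume called pgdata;
# -p opens localhost:5432 on your machine and routes it into the container
docker run -d --name mydb \
  -e POSTGRES_PASSWORD=secret \
  -v pgdata:/var/lib/postgresql/data \
  -p 5432:5432 \
  postgres
```

Delete the container with docker rm mydb and the pgdata volume survives; a fresh container started with the same -v flag picks the data right back up.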
Docker Compose — apps made of several containers
Real apps are usually made of a few pieces — a website, a database, maybe a cache to make it fast. Starting each one by hand with a long docker run command is painful and easy to get wrong. Docker Compose lets you write down the whole setup in one file, and then anyone can start the entire app with a single command.
Services call each other by name
Inside Compose, the containers can talk to each other by their service name as if it were a website address. So the web container reaches the database at db:5432, not localhost:5432. That's why DATABASE_URL=postgresql://user:pass@db:5432/myapp works — Docker turns the word db into the database container's address for you, automatically.
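A minimal docker-compose.yml for that web-plus-database setup might look like this. Apart from the postgres image name, everything here (service names, passwords, ports) is an example:

```yaml
services:
  web:
    build: .                    # build the image from the Dockerfile in this folder
    ports:
      - "8000:8000"             # open the app to your browser
    environment:
      DATABASE_URL: postgresql://user:pass@db:5432/myapp
    depends_on:
      - db                      # start the database first
  db:
    image: postgres:16          # official PostgreSQL image from Docker Hub
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data   # keep the data when containers are recreated

volumes:
  pgdata:
```

From the same folder, docker compose up -d starts both containers and docker compose down stops them again.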
Peeking inside running containers — logs, exec, stats
The most common Docker problem is a container that is running but doing the wrong thing. A handful of commands solve it nine times out of ten — they're the container versions of journalctl, ssh and htop from earlier modules.
| Command | What it does | The same idea on a normal machine |
|---|---|---|
| docker logs CONTAINER | Show everything the app has printed so far (the normal output and the errors) | journalctl -u service |
| docker logs -f CONTAINER | Follow the output live as new lines come in (Ctrl+C to stop) | tail -f /var/log/... |
| docker exec -it CONTAINER bash | Open a shell inside the running container so you can poke around | SSHing into the machine |
| docker stats | Live view of CPU, memory, network and disk use for every running container | htop |
| docker inspect CONTAINER | Everything Docker knows about the container, dumped as JSON | — |
| docker top CONTAINER | Show the processes running inside the container | ps aux |
The go-to routine when something's wrong: docker ps (is it even running?) → docker logs --tail 50 CONTAINER (what did it say went wrong?) → docker exec -it CONTAINER bash (climb inside and look around). Those three commands, in that order, will sort most "but it worked on my machine" mysteries in about a minute.
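As a transcript, that routine looks like this (myapp is a stand-in for your container's name):

```shell
docker ps                     # is it even running?
docker logs --tail 50 myapp   # the last 50 lines it printed
docker exec -it myapp bash    # climb inside and look around
# inside: check files, environment variables, whether the port is listening...
```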
Dockerfile habits — smaller, faster, safer
Any Dockerfile that builds will run. But a Dockerfile that follows these habits will be smaller, faster to rebuild, safer, and easier to fix later. Once you're shipping images people actually depend on, none of these are optional.
| Habit | Why it matters |
|---|---|
| Use "slim" or "alpine" base images (python:3.12-slim, not python:3.12) | Smaller download. Fewer extra programs that could go wrong or get attacked. A slim image can be ten times smaller than the full one. |
| Put the lines that change rarely at the top, and the lines that change a lot at the bottom | Docker remembers each step. If you COPY requirements.txt and RUN pip install before COPY . ., Docker doesn't reinstall every library just because you changed one line of code. |
| Glue related RUN commands together with && | Each RUN makes another layer in the image. Fewer layers means a smaller image and quicker builds. |
| Don't run as the admin user — add USER appuser | If someone breaks into your app, they're stuck as a normal user inside the container instead of admin. By default Docker runs as admin (root), which is risky. |
| Use a .dockerignore file | Stops things like node_modules, .git and old build files from being copied into your image. Faster builds, smaller image, and you won't accidentally bake a secret into it. |
| Multi-stage builds (FROM ... AS builder ... FROM ... ... COPY --from=builder ...) | Build the program in a big image, then copy only the finished program into a tiny one to ship. Standard trick for Go, Rust and other compiled languages — your final image can be a few MB instead of a few GB. |
| Always say which version — FROM python:3.12.5-slim, not python:latest | So the build behaves the same way every time. latest means "whatever's newest", which can quietly change and break things on a Tuesday morning months later. |
| Scan your images with docker scout or trivy | Free tools that look at the libraries inside your image and warn you if any of them have known security problems — before you ship. |
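To make the multi-stage trick concrete, here's a sketch for a small Go program; the paths and names are illustrative:

```dockerfile
# Stage 1: build in a full-size image that has the Go toolchain
FROM golang:1.22 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Stage 2: ship only the finished binary in a tiny image
FROM alpine:3.20
# Run as a normal user, not root
RUN adduser -D appuser
USER appuser
COPY --from=builder /app /app
CMD ["/app"]
```

The golang toolchain never reaches the final image; only the compiled binary does.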
What to remember
- Containers share the Linux underneath, so they start in less than a second and use far less memory than a virtual machine.
- docker pull downloads an image. docker run turns it into a running container. docker ps shows what's running right now.
- A Dockerfile is a written-down recipe you can rerun. Put the lines that change a lot at the bottom, so Docker can re-use the steps above.
- Use -v to attach a volume. Without one, everything inside the container vanishes when you delete the container.
- If your app has more than one piece, use Docker Compose. A single docker compose up -d starts everything.
- When something breaks: docker ps → docker logs → docker exec -it CONTAINER bash. Almost every problem cracks open with that sequence.
- Real images people depend on always use slim base images, ordered lines, a non-admin user inside, and pinned versions. Those aren't nice-to-haves — they're table stakes.