Learn to use Docker quickly.
Backstory
Before discussing how to use Docker, it is important to understand what Docker is. Docker is a container platform that allows applications to run consistently across different machines and environments. Containers are lighter than virtual machines and isolate our applications from one another, meaning that if Service A goes down, it will not affect Service B.
- Requires fewer resources than VMs.
- Provides isolation for our applications.
- Ensures compatibility across environments.
- Deploys and starts much faster than VMs.
- Easy to scale.
- Integrates well with Kubernetes (K8s).
Commands
To use Docker, there are several stages:
In the image stage:
- Find your desired image using `docker search`.
- Download the image with `docker pull`.
- List all downloaded images by running `docker images`.
- Delete unwanted images with `docker rmi`.
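For instance, a minimal session (the `nginx` image is chosen purely for illustration) might look like this:

```bash
# Search Docker Hub for an image
docker search nginx

# Download (pull) the image; defaults to the :latest tag
docker pull nginx

# List all images stored locally
docker images

# Remove an image you no longer need
docker rmi nginx
```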
In the container stage:
- Create a container from an image using `docker run`.
- Check the status of containers with `docker ps`.
- Stop a running container with `docker stop`.
- Start a previously created container with `docker start`.
- Restart a container with `docker restart`.
- Monitor the resource usage of containers with `docker stats`.
- View the logs of a specific container with `docker logs`.
- Enter a running container with `docker exec -it <container-name> bash`.
- Remove one or more containers with `docker rm`, or force-delete all containers at once with `docker rm -f $(docker ps -aq)`.
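Putting these together, a typical container lifecycle (again using `nginx` as a stand-in image, with `web` as an arbitrary container name) could look like this:

```bash
# Create and start a container in the background,
# mapping port 8080 on the host to port 80 in the container
docker run -d --name web -p 8080:80 nginx

# Check running containers (add -a to include stopped ones)
docker ps

# View the container's logs
docker logs web

# Open an interactive shell inside the running container
docker exec -it web bash

# Take a one-shot snapshot of CPU/memory usage
docker stats --no-stream web

# Stop, start again, restart, and finally remove the container
docker stop web
docker start web
docker restart web
docker rm -f web
```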
In the persist-image stage:
- Create an image from a container using `docker commit`.
- Save the image to an archive with `docker save`.
- Load a saved image with `docker load`.
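As a sketch (the container name `web` and tag `my-nginx:v1` are made up for the example):

```bash
# Snapshot the current state of a container into a new image
docker commit web my-nginx:v1

# Export the image to a tar archive (useful for offline transfer)
docker save -o my-nginx.tar my-nginx:v1

# Re-import the archive, e.g. on another machine
docker load -i my-nginx.tar
```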
In the push stage:
- Log in to Docker Hub with `docker login`.
- Tag your image with `docker tag`.
- Upload the image to Docker Hub with `docker push`.
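For example, assuming a Docker Hub account named `yourname` (a placeholder):

```bash
# Authenticate against Docker Hub
docker login

# Tag the local image with your repository name
docker tag my-nginx:v1 yourname/my-nginx:v1

# Upload it to Docker Hub
docker push yourname/my-nginx:v1
```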
These are the basic commands for managing images and containers. However, it's also important to learn how to share files between the host and a container, manage volumes, and create custom networks.
Bind Mounts (Host Volumes) vs. Volumes (Named Volumes)
Bind Mounts allow you to mount a directory on the host machine directly into a directory inside the container. Changes made in the host directory are immediately reflected inside the container, and vice versa.
Volumes are similar to Bind Mounts in that they enable a container to persist data. However, unlike Bind Mounts, Volumes are managed by Docker. When you create a Named Volume, Docker sets up a dedicated directory on the host (typically within /var/lib/docker/volumes/<volume-name>), which is then mapped to the specified directory in the container. This keeps the container’s data separate from the host’s filesystem structure.
For example, if you attempt to use a Bind Mount with the path /usr/ngconf:/etc/nginx in an NGINX container, it might fail with a "No such file or directory" error if the required NGINX configuration files are not present in the host’s /usr/ngconf directory. Instead, using a Named Volume, like ngconfig:/etc/nginx, instructs Docker to initialize the volume with default files from the container’s setup, allowing the container to start properly.
Docker automatically populates Named Volumes with the container’s initial files if necessary, making them ideal for applications (such as NGINX) that require certain files and configurations at startup.
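The difference shows up directly in the `docker run` syntax. Here is a sketch using the NGINX example above (container names are illustrative):

```bash
# Bind mount: the host directory /usr/ngconf is used as-is.
# If it is empty, the container sees an empty /etc/nginx and may fail to start.
docker run -d --name nginx-bind -v /usr/ngconf:/etc/nginx nginx

# Named volume: Docker creates and manages "ngconfig" under
# /var/lib/docker/volumes/ and seeds it with the image's default /etc/nginx files.
docker run -d --name nginx-vol -v ngconfig:/etc/nginx nginx

# Inspect the named volume to see where Docker stores it on the host
docker volume inspect ngconfig
```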
Networking
What is a network?
When multiple services are running on the same Docker host, we prefer to use service names instead of IP addresses for communication. This is easily achieved with custom Docker networks. The default docker0 bridge does not allow containers to reach each other by name, and if we recreate the services, their container IP addresses will change and communication will fail.
- Use `docker network ls` to list all networks, including the defaults and any you have created.
- Use `docker network create <name>` to create a custom network.
- Use `docker run --network <name>` to attach a container to the custom network.
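For example, two containers on the same user-defined network can reach each other by name (`mynet` and `db` are illustrative names):

```bash
# Create a user-defined bridge network
docker network create mynet

# Start a database container attached to it
docker run -d --name db --network mynet mongo:4.4

# Any other container on the same network can now reach it by name
docker run --rm --network mynet busybox ping -c 1 db
```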
Dockerfile And Docker Compose
Dockerfile: Building an Image from Source Code
When deploying applications with Docker, creating a reproducible and consistent environment is crucial. A Dockerfile is a text file containing all the instructions required to build a Docker image. This image can then be deployed and run anywhere, ensuring consistency across development, testing, and production environments. The Dockerfile typically includes:
- Base Image: Defines the starting environment, often a minimal version of an OS or language runtime, such as `alpine` or `node`.
- Commands to Copy and Build Code: Copies the necessary files from your project and runs any required build commands.
- Environment Variables: Defines configuration settings that can be passed into the container.
- Entry Point: Specifies the default command to run when the container starts, such as launching a web server.
For example, here’s a simple Dockerfile for a Node.js application:
```dockerfile
# Use a lightweight Node.js base image
FROM node:14-alpine

# Set working directory
WORKDIR /app

# Copy package.json and install dependencies
COPY package.json ./
RUN npm install

# Copy the rest of the source code
COPY . .

# Expose port and define the entry point
EXPOSE 3000
CMD ["node", "index.js"]
```
This Dockerfile ensures that when we build our image, it includes the right Node.js environment, dependencies, and our application code, making deployment fast and consistent.
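Building and running it is then a two-step process (the image name `my-node-app` is arbitrary):

```bash
# Build the image from the Dockerfile in the current directory
docker build -t my-node-app .

# Run it, mapping the exposed port to the host
docker run -d -p 3000:3000 my-node-app
```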
Docker Compose: Orchestrating Multiple Services
For applications that depend on multiple components, Docker Compose simplifies managing and launching them together. Using a docker-compose.yml file, we can define several services and configure their dependencies, networking, and volumes. This allows us to spin up a complete, interconnected environment with one command (docker-compose up).
For example, here’s a docker-compose.yml file for a web application with a Node.js backend and a MongoDB database:
```yaml
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    depends_on:
      - db
    environment:
      - MONGO_URL=mongodb://db:27017/mydatabase
  db:
    image: mongo:4.4
    volumes:
      - db_data:/data/db
    ports:
      - "27017:27017"

volumes:
  db_data:
```
Key Points of Docker Compose:
- Services: Each service represents a component of your application, like a database or web server.
- Networking: Services can easily communicate with each other using their names (e.g., the `app` service can reach `db` by simply referring to `db`).
- Volumes: Define persistent storage, like `db_data`, which ensures database data persists even if the container restarts.
- Environment Variables: Pass configuration settings to each service, allowing for flexible and reusable setups.
With Docker Compose, you can manage multiple containers as a single unit, which is essential for applications that rely on multiple interconnected services.
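Day to day, the whole stack is managed with a handful of commands:

```bash
# Build (if needed) and start all services in the background
docker-compose up -d

# Check service status and follow one service's logs
docker-compose ps
docker-compose logs -f app

# Stop and remove containers and networks (named volumes are kept unless -v is passed)
docker-compose down
```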
When to Use Dockerfile vs. Docker Compose
- Use a Dockerfile when you want to create an isolated, self-contained image that can be run consistently across different environments.
- Use Docker Compose when you need to manage complex, multi-container applications where services need to communicate with each other, share data, or be scaled individually.
In summary, Dockerfile helps you package your application into a standalone image, while Docker Compose helps you launch and coordinate multiple containers with ease.
Conclusion
Docker makes our lives easier and is a lifesaver.