Docker, Dockerfiles and docker-compose

Working docker-compose and Dockerfile examples to complement this information
An interesting tool to analyze the size of a custom image's layers

Basic Definitions

Image Executable package that includes everything needed to run an application. It consists of read-only layers, each of which represents a Dockerfile instruction. The layers are stacked and each one is a delta of the changes from the previous layer.
Container Instance of an image.

Stack Defines the interaction of all the services.
Services Image for a microservice, which defines how containers behave in production.
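The image / container distinction is visible directly from the CLI; a small sketch, assuming the hello-world image:

```shell
# One read-only image...
docker pull hello-world
# ...can back any number of containers
docker run --name hello-1 hello-world
docker run --name hello-2 hello-world
docker images hello-world                    # lists one image
docker ps -a --filter ancestor=hello-world   # lists two containers
```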

Dockerfile File with instructions that allows us to build upon an already existing image. It defines:

  • the base image to build from
  • our own files to use or append
  • the commands to run

In the end, a Dockerfile forms a service, which we may call from docker-compose or build standalone with docker build.

Dockerfiles vs docker-compose A Dockerfile is used to manage a single individual container. docker-compose is used to manage an application, which may be formed by one or more Dockerfiles. docker-compose may also be used to hold large sets of customization options, which would otherwise be parameters in a really long command.

You can do everything docker-compose does with just docker commands and a lot of shell scripting
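As an illustration of that equivalence, the same single-service setup can be expressed either way (image, ports and volume name here are arbitrary examples):

```shell
# Plain docker: every option is a command-line flag
docker run -d --name web -p 8080:80 -v web_data:/usr/share/nginx/html nginx:latest

# docker-compose: the same options, kept in a versionable file
cat > docker-compose.yml <<'EOF'
version: '3.7'
services:
  web:
    image: nginx:latest
    container_name: web
    ports:
      - 8080:80
    volumes:
      - web_data:/usr/share/nginx/html
volumes:
  web_data:
EOF
docker-compose up -d
```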

Volumes They are the preferred mechanism to persist data: Docker manages them itself, they do not increase the size of the container using them, and the contents of a volume exist outside the life cycle of a container. They are not suitable for writing temporary information.

clusters and stacks (advanced)

docker-swarm is the cluster manager embedded in Docker. It manages containers running on multiple hosts and handles things like scaling, restarting a container when it crashes, networking, etc.
kubernetes Developed by Google. It has similar goals to Docker Swarm.

swarm multiple Docker hosts which run in swarm mode and act as managers and / or workers. When the user defines a service, they define its desired state. This is opposed to a standalone container.
swarm vs standalone containers One of swarm's main advantages is the possibility to modify a service's config, including networks and volumes, in real time without restarting the service. When Docker is running in swarm mode, it's still possible to run a standalone container. The key difference is that only swarm managers can manage a swarm, while standalone containers can be started on any daemon.
In the same way that it’s possible to run containers with docker-compose, it’s also possible to define and run swarm service stacks.
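A minimal sketch of that flow, assuming a docker-compose.yml in the current folder (the stack name mystack is arbitrary):

```shell
# Turn this host into a single-node swarm; it becomes a manager
docker swarm init

# Deploy the compose file as a swarm stack
docker stack deploy -c docker-compose.yml mystack

# List the stack's services and their replica counts
docker stack services mystack
```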

node instance of the docker engine participating in the swarm. To deploy an application to a swarm, a service definition is submitted to a manager node. This manager node then dispatches units of work called tasks to worker nodes. This manager node also runs all cluster management functions.

service (in swarm) is the image for a microservice, which will run as a part of a bigger application on any node. It’s the central structure of the swarm system and the root of user interaction with the swarm.
task it carries a docker container and the commands to run inside this container. It's the atomic scheduling unit of swarm. Manager nodes assign tasks to worker nodes according to the number of replicas set. Once a task is assigned to a node, it can only run there or fail; it cannot be moved to another node.
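For example, a replicated service and its tasks can be inspected like this (service name and image are illustrative):

```shell
# Declare a desired state of 3 replicas; the managers schedule 3 tasks
docker service create --name web --replicas 3 -p 8080:80 nginx:latest

# Each row is a task, pinned to the node it was assigned to
docker service ps web

# Changing the desired state makes the managers add or remove tasks
docker service scale web=5
```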


Check if correctly installed

docker version
docker run hello-world

Run without sudo

You need to add your user to the docker group:

sudo groupadd docker
sudo gpasswd -a "$USER" docker
# log out of your user and in again

Docker (local)


Example Dockerfile for a custom Python web app. The commands to run and the needs of every image will be unique; check the official image docs. It's best to write a docker-compose.yml file to set all the parameters. This way we avoid a really long start command.

# Use an official Python runtime as a parent image
FROM python:3.7.4-alpine3.10

# Set the working directory to /app, to be able to use relative paths from now on
WORKDIR /app

# Copy the current directory contents into the container at /app
ADD . .

# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.org -r app/requirements.txt

# Declare the intention to expose port 80 to the world. Needed to map it in the compose file.
EXPOSE 80

# Define environment variables the app needs, e.g.:
# ENV SOME_VAR some_value

# Run when the container launches
CMD ["python", "app/"]
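Assuming the Dockerfile above sits in the current folder, a typical build-and-run cycle would look like this (image name and host port are assumptions):

```shell
# Build an image from the Dockerfile in the current directory and tag it
docker build -t python-webapp .

# Run it, mapping the container's exposed port 80 to port 4000 on the host
docker run -d -p 4000:80 --name python-webapp python-webapp
```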

How to run them. Examples

Examples ready with the real, bound parameters the apps need. Run them only once; from then on, just start the container.

# Mongo admin client container with a persisted volume mounted at /data/db and a mapped port
docker run -d --name mongo_admin --mount source=mongo_admin,target=/data/db -p 27017:27017 mongoclient/mongoclient:latest
# Portainer
docker run -dp 9000:9000 -v "/var/run/docker.sock:/var/run/docker.sock" --name 'portainer' portainer/portainer

Create and specify a volume for a container to use

# Create a volume named "ohno"
docker volume create ohno
# Creates a container from an image, in interactive mode, which will have access
#   to the "ohno" volume every time it's started.
docker run -it --name=volumetest --mount source=ohno,target=/app ubuntu:latest
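A quick way to verify the persistence with throwaway containers:

```shell
# Write a file into the "ohno" volume from inside a container
docker run --rm --mount source=ohno,target=/app ubuntu:latest \
  sh -c 'echo "still here" > /app/proof.txt'

# That container is gone, but a brand-new one still sees the data
docker run --rm --mount source=ohno,target=/app ubuntu:latest cat /app/proof.txt
```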

Docker compose

Example docker-compose.yml with two services: a MySQL and a Mongo database. One of them is specified locally in a custom Dockerfile and the other uses a published image.

It specifies all the information that otherwise would be on our docker run command.

version: '3.7'

services:
  mysql:
    build:
      context: ./mysql-service
      dockerfile: ./DockerFile
    image: project.mysql
    container_name: project.mysql
    command: mysqld --user=root --verbose
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: "project"
      MYSQL_USER: "project_user"
      MYSQL_PASSWORD: "project_pass"

  mongo:
    image: mongo:latest
    container_name: project.mongo
    command: mongod --smallfiles --logpath=/dev/null
    volumes:
      - ./data/mongo:/data/db
    ports:
      - 27017:27017
    environment:
      - MONGO_DATA_DIR=/data/db
      - MONGO_LOG_DIR=/dev/null
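With that file in place, the whole application is managed with a handful of commands from the same folder:

```shell
# Build where needed, then create and start both services in the background
docker-compose up -d

# Check their state and follow the logs
docker-compose ps
docker-compose logs -f

# Stop and remove the containers; the ./data bind mounts stay on disk
docker-compose down
```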

Run interactive shell on a single service

Get into the docker-compose.yml folder and run the following commands.

docker-compose run -d ${SERVICE_NAME}
docker exec -it ${CONTAINER_NAME} /bin/bash

Versions and Tags

One way to control versions and hand them to docker-compose builds is to set them in the docker-compose.yml file, on the image key.

This is an example of a versioned image. If we want to change the version number before deploying, we just need to change it in this file and build again.

version: '3.7'

services:
  website:
    build:
      context: ./website
      dockerfile: ./DockerFile
    image: personal-website:0.0.1
    container_name: personal-website
    ports:
      - 4000:4000

After doing this, we may set tags on specific versions. For example, we want to build the previous version and assign the latest tag to it.

docker tag personal-website:0.0.1 personal-website:latest
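Both tags now point at the same image, which can be verified by listing them; a tag is just a named pointer, so removing one does not delete the image:

```shell
# Both tags are listed with the same IMAGE ID
docker images personal-website

# Removes only the extra tag, not the underlying image
docker rmi personal-website:latest
```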

DockerHub (Remote)

Online platform to push built images to and pull them from. We may push our images there, and then just pull them in our prod environment.

Third-party repositories

  • docker search ${thing_to_search} explore hub repositories
  • docker pull ${thing}:latest download it

Own repositories

Example of how to build my own image and push it to my own DockerHub.

docker login -u mariocodes
docker-compose build --compress --force-rm --no-cache

docker tag mariocodes/personal-website:version-number mariocodes/personal-website:latest

docker push mariocodes/personal-website:version-number

docker push mariocodes/personal-website:latest

Then it’s possible to directly download and run this image from another environment.

docker run mariocodes/personal-website:latest