A Complete Guide on Docker Compose


This guide walks you through everything there is to know about Docker Compose and how to use it to run multi-container applications.

 

As applications grow, they become more challenging to manage reliably. This is where Docker Compose comes in. Docker Compose lets developers describe all of an application's services in a single YAML configuration file and start them with a single command.

 

This post will show you how to use all the important Docker Compose commands and the structure of the configuration file. Later on, you can also use it as a reference for the available commands and options.

 

Why should you care about Docker Compose?

 

Before we get into the technical details, let's talk about why a programmer should care about Docker Compose in the first place. Here are some reasons developers should consider incorporating it into their work.

 

Portability

Docker Compose lets you set up a complete development environment with a single command, docker-compose up, and tear it down just as easily with docker-compose down. This lets developers keep the whole development environment in one place and makes applications easier to deploy.

 

Testing

Compose also makes unit and end-to-end tests fast and repeatable by running them in their own environments. That is, rather than testing the application on your local/host OS, you can run it in an environment that closely matches production.
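As a sketch of this idea, a hypothetical docker-compose.test.yml (the file and service names are illustrative, not from this project) could run a test suite against a throwaway database:

```yaml
version: '3'

services:
  sut:                 # "system under test" service that runs the tests
    build: .
    command: npm test
    depends_on:
      - db
  db:
    image: postgres    # a fresh database for every test run
```

Running docker-compose -f docker-compose.test.yml up creates the environment, and the matching down command removes it again, so every run starts from a clean state.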

 

Multiple isolated environments on a single host

Compose isolates environments using project names, which provides the following benefits:

  • You can run numerous instances of the same environment on the same system.
  • It prevents various projects and services from conflicting with one another.

 

Common use cases

Now that you understand why Compose is valuable and how it can improve a developer's workflow, let's look at some common use cases.

 

Single host deployments

Compose has traditionally been used for development and testing, but it can also be used to deploy and manage a complete set of containers on a single host machine.

 

Development environments

Compose lets you run your application in an isolated environment on any system with Docker installed. That makes testing very easy and lets you work as close to the production environment as possible.

The Compose file manages all of the application's dependencies (databases, queues, caches, and so on) and can create all of the containers with a single command.

 

Automated testing environments

An automated test suite, which needs an environment in which the tests can run, is a crucial part of continuous integration and delivery. Compose makes it simple to create and destroy isolated testing environments that stay close to your production environment.

 

Installation on Windows and Mac

Compose can run on almost any operating system and is relatively easy to install, so let’s get started.

Compose is included with the Windows and Mac Desktop installations and does not need to be installed separately.

The installation instructions can be found in the official Docker documentation.

 

Linux

You can install Compose on Linux by downloading its binary. Use the following command to download a release (1.24.1 shown here; substitute the most recent stable version):

 

sudo curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

 

You only need to give it executable permissions now.

 

sudo chmod +x /usr/local/bin/docker-compose

 

After that, run the following command to check your installation:

 

docker-compose --version

 

Structure of the Compose file

Compose enables developers to quickly manage numerous Docker containers simultaneously by enforcing a set of rules defined in a docker-compose.yml file.

It consists of multiple nesting levels indicated by spaces (YAML does not allow tabs) rather than the braces found in most programming languages. There are four major items almost every Compose file should have:

  • The compose file’s version
  • The services that will be developed
  • All used volumes
  • The networks that connect the different services

An example file may look something like this:

 

version: '3.3'

services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress

volumes:
  db_data: {}

 

As you can see, this file describes an entire WordPress application, complete with a MySQL database. Each of these services runs as a distinct container that can be swapped in and out as needed.

 

Now that we've established the fundamental structure of a Compose file, let's move on to the key concepts.

 

Keywords / Concepts

The Compose file's fundamental features are its concepts, which allow it to create and manage a network of containers. This section goes through these concepts in detail and shows how we can use them to adjust our Compose configuration.

 

Services

The services tag includes all of the containers in the Compose file and serves as their parent tag.

 

services:
  proxy:
    build: ./proxy
  app:
    build: ./app
  db:
    image: postgres

 

As you can see, the services tag contains all of the Compose configuration’s containers.

 

Build (base image)

A container's base image can be defined either by pulling a prebuilt image from Docker Hub or by building an image from a Dockerfile.

Here are a few simple examples:

 

version: '3.3'

services:
  alpine:
    image: alpine:latest
    stdin_open: true
    tty: true
    command: sh

 

Here, the image tag pulls a preconfigured image from Docker Hub.

 

version: '3.3'

services:
  app:
    container_name: website
    restart: always
    build: .
    ports:
      - '3000:3000'
    command: npm run start

 

This example defines our image with the build tag, which takes the location of the Dockerfile as input. The last way of determining the base image is to use a Dockerfile with a custom name:

 

build:
  context: ./dir
  dockerfile: Dockerfile.dev

 

Ports

Exposing ports in Compose works the same way as in the Dockerfile. We distinguish between two ways of specifying ports:

Exposing the port available to related services:

 

expose:
  - "3000"
  - "8000"

 

Here, we expose the ports to the container's linked services rather than to the host system.

 

Exposing the port available to the host system

ports:
  - "8000:80" # host:container

 

In this example, we define which container port we want to expose and which host port to publish it on. You may also define whether the port protocol is UDP or TCP:

 

ports:
  - "8000:80/udp"

 

Commands

Commands run once the container is started and replace the CMD instruction in your Dockerfile. The CMD instruction is the initial command executed when the container starts and is therefore commonly used to launch a process, such as starting your website with a CLI command like npm run start.

 

app:
  container_name: website
  restart: always
  build: ./
  ports:
    - '3000:3000'
  command: npm run start

 

Here, we create a service for a website and use the command tag to add its start command. This command is executed once the container has started and launches the website.


A detailed comparison of CMD, RUN, and ENTRYPOINT and their functionality can be found in the Docker documentation.

 

Volumes

Volumes are Docker’s preferred method of saving data created and used by Docker containers. Docker manages them entirely, and they may be used to communicate data between containers and the Host system.

 

They do not increase the size of the containers that use them, and their contents exist independently of a given container's lifecycle. Docker supports several types of volumes. They are all defined with the volumes keyword, but there are some subtle distinctions that we will discuss below.

 

Normal Volume

The most common approach to using volumes is to specify a path and let the Engine create a volume. This can be accomplished as follows:

 

volumes:
  # Just specify a path and let the Engine create a volume
  - /var/lib/mysql

 

Creating a path

You can also define absolute path mappings for your volumes by specifying the path on the host system and mapping it to a container destination with the : operator.

 

volumes:
  - /opt/data:/var/lib/mysql

 

Here, you specify the path on the host system first, followed by the path in the container.

 

Named volume

Another type of volume is the named volume, which is similar to the other volumes but has a specific name that can be used across multiple containers. As a result, named volumes are frequently used to share data between containers and services.

 

volumes:
  - datavolume:/var/lib/mysql
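For the volume to be shared, it must also be declared under the top-level volumes key. A minimal sketch (the backup service here is illustrative) might look like this:

```yaml
version: '3'

services:
  db:
    image: mysql:5.7
    volumes:
      - datavolume:/var/lib/mysql   # the same named volume...
  backup:
    image: alpine
    volumes:
      - datavolume:/data            # ...mounted in a second service

volumes:
  datavolume:
```

Both services now read and write the same data, and the volume outlives either container.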

 

Dependencies

In Docker, dependencies are used to make sure that a given service is available before the dependent container starts. They are frequently used when one service cannot function without another, such as a CMS (Content Management System) without its database.

 

ghost:
  container_name: ghost
  restart: always
  image: ghost
  ports:
    - 2368:2368
  depends_on: [db]

db:
  image: mysql
  command: --default-authentication-plugin=mysql_native_password
  restart: always
  environment:
    MYSQL_ROOT_PASSWORD: example

 

Here's a simple example of a Ghost CMS that relies on a MySQL database to function and declares this with the depends_on option. depends_on takes an array of the service names on which the service depends.

 

Environment variables

Environment variables are used to pass configuration data into your applications. This is frequently needed when your setup depends on the host operating system or other variables that might change.

In our Compose file, we have several options for sending environment variables, which we shall explore here.

 

Setting an environment variable

Like the docker container run --env option in the shell, you can set environment variables in a container using the environment keyword.

 

web:
  environment:
    - NODE_ENV=production

 

Here, we set an environment variable by giving both a key and a value.

 

Passing an environment variable

You can pass environment variables directly from your shell to a container by defining an environment key in your Compose file without assigning it a value. In that case, the value of NODE_ENV is retrieved from the variable of the same name in the shell that runs the Compose file.
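A minimal sketch of this pattern, assuming a NODE_ENV variable is set in the calling shell:

```yaml
web:
  environment:
    - NODE_ENV   # no value given: taken from the shell running docker-compose
```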

 

Using an .env file

When a few environment variables turn into many, handling them in the Compose file can become complicated. That is the purpose of .env files. They contain all of your container's environment variables and can be added to your Compose file with a single line.

 

web:
  env_file:
    - variables.env
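The referenced file is a plain list of KEY=value pairs, one per line. A hypothetical variables.env (variable names are illustrative) might contain:

```
NODE_ENV=production
DB_HOST=db
```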

 

Networking

Networks define the rules for communication between containers and between containers and the host system. They may be configured to offer total isolation for containers, allowing developers to create applications that work safely together.

 

By default, Compose creates a single network for the application. Each container automatically joins this default network, which makes it reachable by the other containers on the network and discoverable under the hostname defined in the Compose file.
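For example, with the following minimal file, the web container can reach the database simply at the hostname db, because both services join the default network automatically:

```yaml
version: '3'

services:
  web:
    build: .
  db:
    image: postgres
```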

 

Custom networks

In addition to the default network, you may define your own networks under the top-level networks key, allowing you to create more complex topologies and specify network drivers and options.

 

networks:
  frontend:
  backend:
    driver: custom-driver
    driver_opts:
      foo: "1"

 

Each service can then use the service-level networks keyword, which takes a list of names referring to entries under the top-level networks keyword. This allows each container to choose which networks to connect to.

 

services:
  proxy:
    build: ./proxy
    networks:
      - frontend
  app:
    build: ./app
    networks:
      - frontend
      - backend
  db:
    image: postgres
    networks:
      - backend

 

You may also give your network a custom name (since version 3.5):

 

version: "3.5"

networks:
  webapp:
    name: website
    driver: website-driver

 

See the following references for a complete list of network configuration options:

• Top-level networks key

• Service-level networks key

 

Existing (external) networks

Using the external option, you may use pre-existing networks with Docker Compose.

 

networks:
  default:
    external:
      name: pre-existing-network

 

In this case, Docker never creates the default network and instead uses the pre-existing network defined under the external tag.

 

Configure the default networks

Instead of defining your own networks, you can change the settings of the default network by defining an entry named default under the networks keyword.

 

version: "3"

services:
  web:
    build: .
    ports:
      - "8000:8000"
  db:
    image: postgres

networks:
  default:
    driver: custom-driver

 

Container linking

You may also define extra aliases that services can use to reach one another. Services on the same network can already reach each other; links simply define additional names by which a container can be accessed.

 

version: "3"

services:
  web:
    build: .
    links:
      - "db:database"
  db:
    image: mongo

 

In this case, the web container can reach the database container through either of two hostnames (db or database).

 

CLI

Docker Compose's whole functionality is carried out via its built-in CLI, which has a set of commands very similar to those provided by Docker.

build      Build or rebuild services
ps         List containers
pull       Pull service images
rm         Remove stopped containers
help       Get help on a command
kill       Kill containers
logs       View output from containers
port       Print the public port for a port binding
scale      Set the number of containers for a service
start      Start services
stop       Stop services
run        Run a one-off command
restart    Restart services
up         Create and start containers
down       Stop and remove containers

 

They are similar and act in the same way as their Docker equivalents. The main difference is that instead of affecting a single container, they affect the whole multi-container architecture defined in the docker-compose.yml file.

 

Some Docker commands are no longer available and have been replaced by others that make more sense in the context of a multi-container configuration.


The following are the essential new commands:

 

  • docker-compose up
  • docker-compose down

Making Use of Multiple Docker Compose Files

Using several Docker Compose files allows you to adapt your application to different environments (e.g., staging, development, and production) and to run administrative tasks or tests against your application.

 

By default, Docker Compose reads two files: a docker-compose.yml file and an optional docker-compose.override.yml file. The docker-compose.override file may be used to override existing services or define new services.

 

You can pass the -f argument to docker-compose to use multiple override files or an override file with a different name. The base Compose file must be supplied first:

 

docker-compose -f docker-compose.yml -f override.yml -f override2.yml up

 

When using multiple configuration files, ensure that all paths are relative to the base Compose file, i.e. the first file supplied with the -f parameter. Let's look at a basic example of what this method can do.

 

# original service
command: npm run dev

# new service
command: npm run start

 

Here, you override the existing run command with a new one that launches your website in production mode rather than dev mode. For options that take multiple values, such as ports, expose, dns, and tmpfs, Docker Compose concatenates the values rather than overriding them, as seen in the following example.

 

# original service
expose:
  - 3000

# new service
expose:
  - 8080
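Because expose is one of the concatenated options, the merged service ends up exposing both ports:

```yaml
expose:
  - 3000
  - 8080
```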

 

Compose in production

Docker Compose simplifies deployment by letting you run your whole configuration on a single server. You can also scale your app by running it on a Swarm cluster.

You will probably need to change a few things before deploying your app configuration to production. Among the modifications are:

 

  • Binding different ports to the host
  • Specifying a restart policy such as restart: always to reduce container downtime
  • Adding new services such as a logger
  • Removing any unnecessary volume bindings for application code
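A hypothetical docker-compose.prod.yml override applying some of these changes (the service name and ports are illustrative) might look like:

```yaml
version: '3'

services:
  web:
    ports:
      - "80:8000"      # bind a different host port for production
    restart: always    # reduce container downtime
```

You would then deploy with docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d.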

After you’ve completed these steps, you may use the following commands to deploy your changes:

 

docker-compose build
docker-compose up --no-deps -d

 

This rebuilds the images of the services defined in the Compose file and then recreates the services.

 

Example

Now that we've covered the ideas behind Compose, let's see some of the magic we just discussed in action. We'll build a simple Node.js application with a Vue.js frontend, which we'll deploy using the techniques we learned about previously.

 

Let’s get started by cloning the repository containing the finished Todo list application so we can go right into the Docker section.

 

git clone --single-branch --branch withoutDocker https://github.com/TannerGabriel/docker-node-mongodb.git

 

This should result in a project containing the backend and frontend in separate folders.

Now that we've set up the project, let's go on to creating our first Dockerfile, for the Node.js backend.

 

FROM node:latest

# Create app directory
WORKDIR /usr/src/app

# Install app dependencies
COPY package*.json ./
RUN npm install

# Bundle app source
COPY . .

EXPOSE 3000
CMD [ "node", "server.js" ]

 

Okay, let's walk through the file to figure out what's going on here:

  • We use the FROM keyword to define the base image.
  • Next, we set the working directory and copy our local package files into it.
  • After that, we install the necessary dependencies from the package.json file and declare port 3000 with the EXPOSE instruction.
  • The CMD keyword defines the command executed after the container boots up. In this case, we use it to start our Express server with the node server.js command.

 

Now that we've completed the Dockerfile for the backend, let's do the same for the frontend.

 

FROM node:lts-alpine

RUN npm install -g http-server

WORKDIR /app

COPY package*.json ./
COPY .env ./
RUN npm install

COPY . .
RUN npm run build

EXPOSE 8080
CMD [ "http-server", "dist" ]

 

This file is almost the same as the previous one, but it additionally installs an HTTP server that serves the static site we get when building a Vue.js application. I won't go into more detail about it because that is outside the scope of this lesson.

 

Now that we have the Dockerfiles in place, we can write the docker-compose.yml file that we learned so much about.

First, we define the version of our Compose file (in this case, version 3)

 

version: '3'

With that defined, we can start to define the services required for the project to work:

services:
  nodejs:
    build:
      context: ./backend/
      dockerfile: Dockerfile
    container_name: nodejs
    restart: always
    environment:
      - HOST=mongo
    ports:
      - '3000:3000'
    depends_on: [mongo]

 

The nodejs service uses the Dockerfile we prepared earlier for the backend and publishes port 3000 to the host computer. The service also depends on the mongo service, which means it waits for the database to start before starting.

 

Following that, we define a simple MongoDB service that uses the default image from Docker Hub.

 

mongo:
  container_name: mongo
  image: mongo
  ports:
    - '27017:27017'
  volumes:
    - ./data:/data/db

 

This service additionally publishes a port to the host system and saves database data to a local folder through a volume. The final service we need to define is the frontend, which uses the frontend Dockerfile to build the image and publish port 8080 to the host system.

 

frontend:
  build:
    context: ./frontend/
    dockerfile: Dockerfile
  container_name: frontend
  restart: always
  ports:
    - '8080:8080'

 

That's all! We have completed our Docker files and are now ready to run the application. This is done with the following two commands:

 

# builds the images from the Dockerfiles
docker-compose build

# starts the services defined in the docker-compose.yml file
# -d stands for detached
docker-compose up -d

 

Your services are now running, as indicated by the terminal output, and you are ready to access the finished website at localhost:8080.

 

You may now add todos to your list by clicking the add button.

 

The items should remain after you reload the page because they are persisted in our database. The last thing I'd like to show is how to get the logs of the running containers.

 

docker-compose logs

 

This command displays all logs from the currently running containers, which can help you debug issues or check the current state of your application. That concludes the project. The entire source code is available on GitHub.

 

You made it to the end! I hope this post has helped you understand Docker Compose and how you can use it to improve your development and deployment routine. If you found it helpful, please consider sharing it with other developers. If you have any questions or feedback, please leave them in the comments section below. Also, read our article on how to create your own private Docker registry.
