How to install Docker Swarm on Ubuntu 16.04/CentOS 7


Docker Swarm is a container orchestration and clustering tool for managing Docker hosts, and it has been part of Docker Engine since version 1.12. It’s not necessarily the easiest to install, which is why we’ll cover the key steps of how to install Docker Swarm in this tutorial.

The primary objective of Docker Swarm is to group multiple Docker hosts into a single logical virtual server—this ensures availability and high performance for your application by distributing it over a number of Docker hosts instead of just one.

Docker Swarm has a ton of benefits, like self-healing, load balancing, scaling containers up and down, service discovery, and rolling updates, so it’s worth going through the installation process if you want to combine Docker and high availability.

The “Swarm” concept:

To manage and orchestrate the cluster, a swarm is created from one or more Docker Engines using SwarmKit. You can either create a new swarm or join an existing one. There are two types of nodes in the cluster: manager nodes and worker nodes.

A manager node receives a service definition (when you create a service) and dispatches the resulting tasks to worker nodes. You can have more than one manager node, but only one of them acts as the leader at any given time. In a nutshell, the manager node manages the Docker Engine process on all other nodes, and it can run containers as well.

The worker nodes execute tasks dispatched by manager nodes. The agents on worker nodes report the state of their tasks to the manager node to keep the cluster running smoothly. Because of this, you can take a manager node out of the cluster temporarily, perhaps for maintenance, and promote a worker node to act as a manager in the meantime.
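
For example, assuming a worker with the hypothetical hostname worker-1, you could promote it to a manager, and later demote it again, from the current manager node:

[manager] $ docker node promote worker-1
[manager] $ docker node demote worker-1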

In this article, we will go through step-by-step instructions on how to install Docker Swarm and configure two nodes: the manager node on Ubuntu 16.04 and the worker node on CentOS 7.

Prerequisites to install Docker Swarm

  • Two VPSs: one running Ubuntu 16.04 and one running CentOS 7.
  • A non-root, sudo-enabled user. If you only have a root user, see our SSH tutorial for details on creating new users.

Notes

  • This tutorial uses variables to represent user-specific configurations, such as server IP addresses, passwords, domain names, and more. Whenever you see one of these variables, you should replace them with your specific details.

Step 1. Configure the manager node

To create the cluster, you need to install Docker on both nodes. This one-command installation process is the same for both nodes, but let’s start with the manager node, which is running Ubuntu 16.04.

Throughout this tutorial, we’ll prefix the terminal commands with [manager] or [worker] to help you remember on which node you’re meant to run a given command.

[manager] $ sudo curl -sS https://get.docker.com/ | sh

Once installed, make sure Docker is started now and enabled to start automatically at boot:

[manager] $ sudo systemctl start docker
[manager] $ sudo systemctl enable docker
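
If you’d like to confirm the installation succeeded before moving on, you can check the installed version and the service status:

[manager] $ docker --version
[manager] $ sudo systemctl status docker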

Be sure to check out our getting started with Docker tutorial for details on further Docker configuration.

Configure the firewall

For a Swarm cluster to work properly, you need to open the following ports on the firewall: 2377/tcp (cluster management traffic), 7946/tcp and 7946/udp (node-to-node communication), 4789/udp (overlay network traffic), 2376/tcp, and 80/tcp (for the web service we’ll deploy later), as well as 22/tcp so you don’t lock yourself out of SSH. Execute the following commands in the terminal:

[manager] $ sudo ufw allow 2376/tcp
[manager] $ sudo ufw allow 2377/tcp
[manager] $ sudo ufw allow 7946/tcp
[manager] $ sudo ufw allow 7946/udp
[manager] $ sudo ufw allow 4789/udp
[manager] $ sudo ufw allow 80/tcp
[manager] $ sudo ufw allow 22/tcp

Enable the firewall (this also makes it start at boot) and reload it so the rules take effect. Note that ufw reload only works once the firewall is enabled, so the order matters here.

[manager] $ sudo ufw enable && sudo ufw reload
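
You can verify that the rules are in place with:

[manager] $ sudo ufw status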

Restart the Docker service so that it picks up the new firewall rules.

[manager] $ sudo systemctl restart docker

Create the Docker Swarm cluster

Let’s create a cluster using docker swarm init. Run the following command on the manager node. The --advertise-addr option tells the manager node to publish its address so that worker nodes can join the cluster. Replace IP_ADDRESS_MANAGER with the IP address of your manager node.

[manager] $ docker swarm init --advertise-addr IP_ADDRESS_MANAGER

Output:
Swarm initialized: current node (m1z7bxujhc5gby8afas0jxbwb) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-0j4sep8zlfcse70subtoin8vm4iembtqtztcfu2qp9jzzljecl-axknutiuxps3l8xngv5fuqnpn 10.0.2.49:2377

The long string that appears after --token will be different for you—in the future, use the token that appears on your terminal, not this one.
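
If you misplace this join command, there’s no need to re-initialize the swarm; the manager can print a fresh copy of it at any time:

[manager] $ docker swarm join-token worker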

Double-check that the manager node is configured correctly with the following command:

[manager] $ docker node ls

ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
m1z7bxujhc5gby8afas0jxbwb *   ip-10-0-2-49        Ready               Active              Leader              18.03.1-ce

Check the current status of the swarm using the following command in the terminal.

[manager] $ docker info
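
The docker info output is long; if you only care about the swarm state, a Go template filter narrows it down (assuming your Docker release supports --format on docker info, which versions from this tutorial’s era do):

[manager] $ docker info --format '{{.Swarm.LocalNodeState}}'

This should print active.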

At this point, the manager node is ready to go! We’ll move on to the worker node now.

Step 2. Configure the worker node

Install Docker

We need to begin by installing Docker if it isn’t installed already. Run the same installation script:

[worker] $ sudo curl -sS https://get.docker.com/ | sh

Once installed, make sure Docker is started now and enabled to start automatically at boot:

[worker] $ sudo systemctl start docker
[worker] $ sudo systemctl enable docker

Configure the worker firewall

Just as with the manager node, we need to open ports 2376, 2377, 7946, 4789, 80, and 22 on the worker firewall. Since the worker runs CentOS 7, we use firewalld instead of ufw.

[worker] $ sudo firewall-cmd --permanent --add-port=2376/tcp
[worker] $ sudo firewall-cmd --permanent --add-port=2377/tcp
[worker] $ sudo firewall-cmd --permanent --add-port=7946/tcp
[worker] $ sudo firewall-cmd --permanent --add-port=80/tcp
[worker] $ sudo firewall-cmd --permanent --add-port=7946/udp
[worker] $ sudo firewall-cmd --permanent --add-port=4789/udp
[worker] $ sudo firewall-cmd --permanent --add-port=22/tcp

Reload the firewall and restart the Docker service to apply all the changes.

[worker] $ sudo firewall-cmd --reload
[worker] $ sudo systemctl restart docker
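
If you want to double-check that all the ports were opened, list them:

[worker] $ sudo firewall-cmd --list-ports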

Add the worker node to your swarm

Now we can join the worker node to the swarm. Run the docker swarm join command that appeared in the output of the swarm initialization step on the manager node.

[worker] $ docker swarm join --token SWMTKN-1-0j4sep8zlfcse70subtoin8vm4iembtqtztcfu2qp9jzzljecl-axknutiuxps3l8xngv5fuqnpn IP_ADDRESS_MANAGER:2377

Output:
This node joined a swarm as a worker.

You can run the same command from other nodes to join them to the cluster and scale it at later stages.

To see the node status in the Swarm cluster, and to check whether the worker nodes are ready for work, list all the nodes using the following command from the manager node. The value Active in the AVAILABILITY column means the node is ready to accept tasks from its manager.

[manager] $ docker node ls

ID                            HOSTNAME                                    STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
m1z7bxujhc5gby8afas0jxbwb *   ip-10-0-2-49                                Ready               Active              Leader              18.03.1-ce
tcbet11p6ox546ue8vkpm3ka3     ip-10-0-2-236.ap-south-1.compute.internal   Ready               Active                                  18.03.1-ce

At this point, no services have been defined to run in the swarm cluster. You can confirm this using the following command from the manager node.

[manager] $ docker service ls

Step 3. Deploy a service in the Swarm

Now that the Swarm cluster is up and running, let’s create an Nginx web server inside the Swarm. From the manager node, run the following command:

[manager] $ docker service create --name webservice --publish 80:80 nginx

The above command creates an Nginx service and publishes its port 80 so that you can access it.
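
If you’d like to confirm how the service was configured, including its image, replica count, and published port, you can inspect it:

[manager] $ docker service inspect --pretty webservice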

From the manager, check the running services:

[manager] $ docker service ls

ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
5x9f0kjztsl2        webservice          replicated          1/1                 nginx:latest        *:80->80/tcp

You can now scale the web service across two containers with the following command:

[manager] $ docker service scale webservice=2
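
Incidentally, docker service scale is shorthand for changing the replica count with docker service update; this command is equivalent:

[manager] $ docker service update --replicas 2 webservice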

Check the status of the webservice service with the following command. You’ll find that there are now two containers running webservice!

[manager] $ docker service ps webservice

ID                  NAME                IMAGE               NODE                                        DESIRED STATE       CURRENT STATE                ERROR               PORTS
euae177a9x6z        webservice.1        nginx:latest        ip-10-0-2-236.ap-south-1.compute.internal   Running             Running about a minute ago                       
p46ksfcdcd3b        webservice.2        nginx:latest        ip-10-0-2-49                                Running             Running 22 seconds ago                

To remove a node from the cluster, execute the following from that node.

[worker] $ docker swarm leave

Then list the nodes in the cluster from the manager node.

[manager] $ docker node ls

The output of the above command will show the status of the removed node as Down. Finally, remove the node from the manager’s list (replace worker-1 with the hostname of the node you removed):

[manager] $ docker node rm worker-1

Step 4. Test the cluster

To test the cluster, use curl and the IP address of any node (manager or worker) in the cluster. You should get an HTTP 200 response code, or, if you drop the -I flag, the default Nginx welcome page.

$ curl -I IP_ADDRESS_MANAGER
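
Thanks to Swarm’s ingress routing mesh, the published port answers on every node’s IP, not just the node where a container happens to be running. As a quick sketch, where IP_ADDRESS_WORKER stands for your worker’s IP following this tutorial’s variable convention, you can check both nodes in one go:

$ for ip in IP_ADDRESS_MANAGER IP_ADDRESS_WORKER; do curl -s -o /dev/null -w "$ip: %{http_code}\n" "http://$ip/"; done

Each line of output should end in 200.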

Congratulations! The Docker Swarm installation is complete, and you now have a functioning cluster.


Let’s take a minute to extend our Swarm and explain a few more features, like self-healing and draining.

Self-healing

Container self-healing is one of the prominent features of Docker Swarm. If anything goes wrong with a given container, the manager starts a replacement so that the number of running containers matches the service’s desired replica count (two, in our case).

Try removing the container from the worker node, then check whether a new container is launched in the cluster.

[worker] $ docker ps
[worker] $ docker rm -f <container-id>

If you hop over to the manager node, you’ll find that the manager launched another container.

[manager] $ docker service ps webservice

Scaling up and down

You can also scale the containers in the cluster up or down depending on the current load. To scale our webservice service up to 3 containers, execute the following command from the manager node.

[manager] $ docker service scale webservice=3
webservice scaled to 3

Now check the status of webservice with the following command.

[manager] $ docker service ps webservice

Draining a node

When a node in the cluster is ready to accept tasks from the manager, its availability is set to Active; we’ve seen that before in the docker node ls output. In some situations, you may want to perform planned maintenance on a particular node, or move its services/containers to other nodes.

Rather than removing that node from the cluster, you can set the node’s availability to Drain to empty it of containers. The Drain status prevents the node from receiving new tasks from the manager. Set a node’s availability to Drain using the following command (replace WORKER with the node’s hostname):

[manager] $ docker node update --availability drain WORKER
WORKER

But what happens to the containers running on that worker node? The Swarm manager reschedules them onto other available nodes.

Once you’ve drained the node, check where the service’s tasks are now running using the following command.

[manager] $ docker service ps webservice
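
When maintenance is finished, put the node back into rotation by setting its availability back to active:

[manager] $ docker node update --availability active WORKER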

Now that you have an idea of how to install Docker Swarm and set up a Swarm cluster, check the Docker Swarm documentation to learn more. You may want to configure your cluster with more than just two nodes, and, depending on your requirements, you can scale the cluster by adding more nodes at later stages.
