Docker Swarm
When you develop, manage, scale, and deploy a project with plain Docker commands, a small project that runs on a single host (VPS) poses no problem at all. However, once the project, for whatever reason, needs several hosts (VPSes), it becomes hard to manage and scale, and deploying by running commands on each host one by one is very painful. Knowing that feeling, Docker developed something called Docker Swarm.
Docker Swarm is a native clustering tool for Docker. It allows us to group several Docker hosts into a cluster that we can treat as a single virtual Docker host. A swarm is a cluster of one or more running Docker Engines, and Swarm mode provides the features to manage and coordinate that cluster.
- Cluster management integrated with Docker Engine: you manage the cluster with the same Docker CLI you already use, creating swarms without any extra orchestration software.
- Decentralized design: instead of handling differences between node roles at deployment time, Docker Engine handles any specialization at runtime. You can deploy both kinds of nodes, managers and workers, using the same Docker Engine.
- Declarative service model: Docker Engine uses a declarative approach that lets you define the desired state of the various services in your application stack. For example, you can describe an application consisting of a web front end, a message-queueing service, and a database back end.
- Scaling: for each service you can declare the number of tasks you want to run. When you scale up or down, the swarm manager automatically adds or removes tasks to maintain the desired state.
- Desired state reconciliation: imagine you set up a service running 10 replicas of a container, and a worker machine (host/VPS) holding 2 of those 10 replicas crashes. The swarm manager will create 2 new replicas to replace the crashed ones and schedule them on the workers that are still running.
- Multi-host networking: you can specify an overlay network for your services. The swarm manager automatically assigns IP addresses to the containers on the overlay network when it initializes or updates the application.
- Service discovery: the swarm manager assigns each service in the swarm a unique DNS name, and you can query services through this DNS name.
- Load balancing: you can expose service ports to an external load balancer so they can communicate with the outside world.
- Secure by default: services communicate with each other over TLS. You can use a self-signed root certificate or a certificate from a custom root CA.
- Rolling updates: Swarm lets you roll out service image updates automatically and incrementally. The swarm manager lets you control the delay between deployments to different nodes, and you can roll back at any time.
A swarm includes Managers and Workers. Users can declare the desired state of multiple services to run in the swarm using YAML files.
- Swarm: a cluster of one or more running Docker Engines (specifically, nodes) in Swarm mode. Instead of running containers with commands one by one, we define services that distribute replicas across the nodes.
- Node: a physical or virtual machine running a Docker Engine instance in Swarm mode. Nodes come in two types: Manager Nodes and Worker Nodes.
- Manager Node: the node that receives service definitions from users; it manages the cluster and dispatches tasks to worker nodes. By default, a manager node is also considered a worker node.
- Worker Node: a node that receives and executes tasks from a manager node.
- Service: a service defines the container image and the number of replicas desired to launch in the swarm.
- Task: a unit of work allocated by a manager node that a worker node must perform. A task carries a Docker container and the commands to run inside that container.
In this section, we will practice with Docker Swarm through a small demo. First we need 4 virtual machines (virtual VPSes). To create a virtual machine, we use the following command:
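A sketch of the command, assuming docker-machine with its default VirtualBox driver:

```bash
docker-machine create <machine-name>
```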
Where:
- <machine-name>: the name you want to give the virtual machine.
Create a machine (virtual machine) for the swarm manager:
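```bash
# create the VM that will act as the swarm manager
docker-machine create manager
```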
Next, the machines for the swarm workers: worker1, worker2, worker3.
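```bash
# create the VMs that will act as swarm workers
docker-machine create worker1
docker-machine create worker2
docker-machine create worker3
```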
After creating them, we check the list of machines:
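```bash
# list all machines with their state, driver and IP address
docker-machine ls
```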
Now we use the inspect command to view information about a machine:
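```bash
# show the machine's full configuration as JSON
docker-machine inspect manager
```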
It's easy to see some basic information about the machine, such as its IP address, MachineName (the name we gave it), the SSHKey used to access the machine over SSH, and information about its CPU (1 CPU), Memory (1 GB), and so on.
The setup of the machines is complete. Now we initialize the swarm on the manager. To access the manager or the workers, we SSH into them as follows:
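```bash
# open an SSH session into the machine
docker-machine ssh <name-machine>
```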
Here:
<name-machine> = manager
And to return to the local host:
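```bash
# leave the SSH session and return to the local host
exit
```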
Initialize the swarm:
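Presumably along these lines, using the manager's IP address shown by docker-machine ls:

```bash
# run on the manager; advertise its IP to the other swarm members
docker swarm init --advertise-addr <manager-ip>
```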
If you are using Docker Desktop for Mac or Docker Desktop for Windows, a plain docker swarm init is enough. But here the operating system on the machines is Boot2Docker, so the --advertise-addr flag is required to tell Docker which IP address to advertise to the other members of the swarm.
Check the list of nodes currently in the swarm:
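```bash
# run on a manager; lists every node in the swarm
docker node ls
```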
Only nodes (machines/VPSes) that are managers can see this list, and the * sign indicates which node of the swarm you are currently on. Here we have only one manager node, and it is in Ready status. OK! The work on the manager is complete.
Now let's move on to worker1. On worker1, we join it to the swarm as a worker:
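The join command has this shape (the token and address come from the manager, as described below):

```bash
# run on the worker
docker swarm join --token <token> <host>:<port>
```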
Where:
- host: the IP address of the manager.
- port: the port the manager listens on for swarm traffic (2377 by default).
To get the token, we use the following command on the swarm manager:
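```bash
# run on the manager
docker swarm join-token worker
```

This prints the complete docker swarm join command, token included, ready to paste on each worker.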
On the two remaining workers, worker2 and worker3, we do the same.
Note: a worker node can only join one swarm.
On the manager node, we check the node list again. It's easy to see that the other 3 worker nodes all have an empty MANAGER STATUS column, which tells us they are worker nodes.
So we have successfully created 3 workers and 1 manager and gathered them into a swarm.
A question arises here: why don't we take advantage of the swarm we created in Part 3 on the local host machine (Docker Desktop for Mac), treat it as the manager node, and join the other nodes to it? Why create yet another machine to act as the manager and pay that extra resource cost? The answer was spelled out in Part 3: the Docker Desktop for Mac version cannot route traffic from other machines to its Docker Engine, so trying to join other nodes (machines/VPSes) to a swarm whose manager is the local host is useless. This is also a weakness of Docker networking on OSX.
Now we continue by creating services and replicas and deploying them on the manager node.
To do this we need to configure the docker-compose.yml file:
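The exact contents depend on the images from Part 2; a minimal sketch for one of the two services (the service name, image, published port, and replica count here are illustrative):

```yaml
version: "3"
services:
  servergo:
    image: <username>/servergo:latest  # the image pushed to Docker Hub below
    ports:
      - "8080:8080"                    # assumed port; use the one your app listens on
    deploy:
      replicas: 6                      # desired number of replicas
```

The second image gets a service block of the same shape.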
and copy the configured docker-compose.yml file over to the manager:
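```bash
# copy the compose file from the local machine to the manager's home directory
docker-machine scp docker-compose.yml manager:~
```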
In this demo we reuse the two images from Part 2, so next we need to push those 2 images to a repository on hub.docker:
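Something like the following for each image (log in first with docker login):

```bash
# tag the local image with your Docker Hub repository name
docker tag <image> <username>/<repository-name>:<tag-name>
# push it to Docker Hub
docker push <username>/<repository-name>:<tag-name>
```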
Where:
- <image>: the ID of the image you want to push.
- <username>: your username on Docker Hub.
- <repository-name>: the name you want to give the repository.
- <tag-name>: the tag you want to put on the image being pushed.
On Docker Hub
So we have successfully pushed 2 images and now we need to deploy the stack:
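On the manager (the stack name is up to you):

```bash
docker stack deploy -c docker-compose.yml <stack-name>
```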
Check the list of services:
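```bash
# lists the services of the stack with their replica counts
docker service ls
```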
Let's try to see which nodes these replicas are running on:
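```bash
# shows each task of the service and the node it runs on
docker service ps <service-name>
```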
Additionally, you can create a service directly with a command of the following syntax:
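```bash
# run <command> in <task-number> replicas of <ID-Image>
docker service create --replicas <task-number> --name <service-name> <ID-Image> <command>
```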
Where:
- <task-number>: the number of tasks you want to create (in other words, the number of copies of the image/container).
- <service-name>: the name you want to give the service.
- <ID-Image>: the ID of the image/container.
- <command>: the command you want to run.
And we can quickly change the number of containers in the cluster with the following command:
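```bash
# set the service's replica count to <number>
docker service scale <service-name>=<number>
```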
Where:
- <service-name>: the name of the service whose container count we want to change.
- <number>: the desired number of containers.
Next, let's see how the load-balancing feature works.
We see that node worker3 holds no replicas of the service servergo_1. Let's send a test request to servergo_1 on worker3 and see how it goes!
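For example, assuming the service publishes port 8080 as in the compose sketch above:

```bash
# docker-machine ip prints the VM's IP address
curl http://$(docker-machine ip worker3):8080
```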
This means that when we send requests to nodes in the swarm, those nodes may hold one or more replicas of a service, or no replicas at all. The swarm's routing mesh forwards those requests through the ingress network to the swarm load balancer, which distributes them across the nodes. The containers of the services on all the machines (the hosts/VPSes of managers and workers) share the same swarm network. You can look at the following image to understand this better:
Try again with other requests:
Now let's try shutting down machine worker1 (as in reality, when a server dies) to see if anything new happens!
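```bash
# power off the VM (run from the local host)
docker-machine stop <machine-name>
```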
Where:
- <machine-name> = worker1
Check the list of nodes and services on the manager node. Nothing new so far, except that worker1 is Down.
Continue checking each service:
Here we see something new. When worker1 is Shutdown, the swarm manager creates a new replica to replace the lost one and moves this new replica to a worker that is still running (specifically worker3). This is exactly the Desired state reconciliation and Scaling behavior described in the Docker Swarm features section above.
This raises a question: what happens when all the worker nodes die?
In this case, the manager node will also add replicas to ensure there are as many replicas as we configured (the desired state) and run them itself (meaning the manager node also acts as a worker node). And if this manager dies too, everything is over!
In the opposite case, when the worker nodes are running but a manager node dies, the remaining manager nodes in the cluster detect this and elect one of themselves as the next Leader of the cluster (swarm mode managers share cluster state through the Raft consensus algorithm). Nowadays, alongside Docker Swarm, we also have another friend: Kubernetes (K8s), which is more widely deployed than Docker Swarm.