Introduction
In our previous article, we set up a cluster with two nodes. Now let's see how to add additional nodes to the Pacemaker cluster. Pacemaker supports a maximum of 16 nodes per cluster.
Adding more nodes scales resource availability across the cluster. The steps for adding a new node to an existing cluster are similar to those in the earlier guide.
Start by installing a minimal operating system on the new node and follow all the steps up to "Create a password for the Cluster user".
Authorizing the New Node with the Cluster
From here on, all the steps are performed from the first node of the cluster. To add a new node to an existing Pacemaker cluster, we follow similar steps to before, but authorize only the new node rather than all nodes.
# pcs cluster auth -u hacluster -p clusterpassword123 corcls3
Note that only the new node is specified here.
[root@corcls1 ~]# pcs cluster auth -u hacluster -p clusterpassword123 corcls3
corcls3: Authorized
[root@corcls1 ~]#
Once authorization completes successfully, we are ready to add the new node to our cluster.
Adding the New Node to the Cluster
It's time to add the new node to our cluster by running the command below. In our last guide, the "--start" and "--enable" options were not used while creating the cluster. Passing them to pcs here starts the cluster service on the new node and enables it persistently in a single step.
# pcs cluster node add corcls3 --start --enable
While running the command, pcs disables the SBD (fencing) service on the new node before adding it.
[root@corcls1 ~]# pcs cluster node add corcls3 --start --enable
Disabling SBD service…
corcls3: sbd disabled
Sending remote node configuration files to 'corcls3'
corcls3: successful distribution of the file 'pacemaker_remote authkey'
corcls1: Corosync updated
corcls2: Corosync updated
Setting up corosync…
corcls3: Succeeded
corcls3: Cluster Enabled
corcls3: Starting Cluster (corosync)…
Starting Cluster (pacemaker)…
Synchronizing pcsd certificates on nodes corcls3…
corcls3: Success
Restarting pcsd on the nodes in order to reload the certificates…
corcls3: Success
[root@corcls1 ~]#
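As a quick sanity check, the new node should now appear in corosync's node list on every existing member. A minimal sketch, assuming the stock RHEL/CentOS 7 configuration path; `list_ring_addrs` is a hypothetical helper, not part of pcs:

```shell
# list_ring_addrs: print every ring0_addr entry from a corosync.conf.
# Defaults to the standard RHEL/CentOS 7 path; pass another file to
# inspect a saved copy instead of the live configuration.
list_ring_addrs() {
  grep -o 'ring0_addr: *[^ ]*' "${1:-/etc/corosync/corosync.conf}"
}

# On a cluster node, run: list_ring_addrs
# corcls3 should now be listed alongside corcls1 and corcls2.
```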
In a single go, we have started the cluster service and enabled it. Alternatively, the cluster can be started and enabled with the commands below. With that, adding the new node to the Pacemaker cluster is almost complete.
# pcs cluster start corcls3
# pcs cluster enable corcls3
This starts and enables the cluster service on the newly added node.
[root@corcls1 ~]# pcs cluster start corcls3
corcls3: Starting Cluster (corosync)…
corcls3: Starting Cluster (pacemaker)…
[root@corcls1 ~]#
[root@corcls1 ~]# pcs cluster enable corcls3
corcls3: Cluster Enabled
[root@corcls1 ~]#
Check the Status from CLI
To verify that the newly added node "corcls3" has joined the cluster, run pcs with the status option.
[root@corcls1 ~]# pcs status cluster
Cluster Status:
Stack: corosync
Current DC: corcls2 (version 1.1.19-8.el7_6.4-c3c624ea3d) - partition with quorum
Last updated: Sat Aug 10 15:44:10 2019
Last change: Sat Aug 10 15:42:03 2019 by hacluster via crmd on corcls2
3 nodes configured
0 resources configured
PCSD Status:
corcls3: Online
corcls1: Online
corcls2: Online
[root@corcls1 ~]#
From the above output, we can see the configured node count is 3.
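The same check can be scripted; the sketch below parses the "N nodes configured" line from the status output. `count_nodes` is a hypothetical helper, and pcs itself is assumed to be installed when used live:

```shell
# count_nodes: read `pcs status` output on stdin and print the number of
# configured nodes, taken from the "N nodes configured" line.
count_nodes() {
  grep -oE '[0-9]+ nodes configured' | cut -d' ' -f1
}

# Live usage on a cluster node:
#   pcs status | count_nodes
# With the status output shown above, this prints: 3
```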
Verify from GUI
To confirm the same, we can view the change from the GUI as well. Log in to the GUI and check under the cluster.
We have confirmed the addition of a new node to the existing cluster.
Conclusion
In this guide, we have seen how to add an additional node to an existing Pacemaker cluster. Next, we will move on to creating resources, resource groups, fencing, and more. Subscribe to our newsletter to stay up to date; your feedback is most welcome in the comment section below.