Hello everyone,

The purpose of this article is to set up a failover cluster with Pacemaker and DRBD (Distributed Replicated Block Device), which works like RAID-1 over IP. The goal is high availability for an Apache web server on Debian 8.6.

Apache high availability architecture

 

Step 1 : Configure a static IP address

/etc/network/interfaces (configure the debian-slave NIC):

  • auto eth0
  • iface eth0 inet static
  • address 192.168.154.153
  • netmask 255.255.255.0
  • gateway 192.168.154.2

 

/etc/network/interfaces (configure the debian-master NIC):

  • auto eth0
  • iface eth0 inet static
  • address 192.168.154.152
  • netmask 255.255.255.0
  • gateway 192.168.154.2
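For the new addresses to take effect, restart networking on both nodes (or simply reboot); a minimal sketch, assuming the default Debian 8 networking service:

root@debian-master:~# systemctl restart networking
root@debian-master:~# ip addr show eth0

Run the same commands on debian-slave.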

 

Step 2 : Configure the hostname

/etc/hostname (configure debian-slave):

  • debian-slave

/etc/hosts (configure debian-slave):

 

/etc/hostname (configure debian-master):

  • debian-master

/etc/hosts (configure debian-master):
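The /etc/hosts file is identical on both nodes; a minimal sketch, assuming the addresses configured in Step 1:

127.0.0.1       localhost
192.168.154.152 debian-master
192.168.154.153 debian-slave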

 

Step 3 : Ping

Check that the ping is working from debian-master to debian-slave:
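For example (name resolution relies on the /etc/hosts entries from Step 2):

root@debian-master:~# ping -c 4 debian-slave
root@debian-slave:~# ping -c 4 debian-master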

 

Step 4 : Create a 6 GB disk partition

In this example, I added a second hard drive to each virtual machine. The following procedure must be applied on both virtual machines.
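A minimal sketch of the partitioning, assuming the new disk shows up as /dev/sdb (inside fdisk, answer n for a new primary partition, accept the default sectors, then w to write the partition table):

root@debian-master:~# fdisk /dev/sdb
root@debian-master:~# lsblk /dev/sdb

Repeat the same steps on debian-slave.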

 

We have created the /dev/sdb1 partition. The available space is 6 GB.

 

Step 5 : DRBD installation

root@debian-master:~# apt-get install drbd8-utils

root@debian-slave:~# apt-get install drbd8-utils

To check the installed version, use the command: modinfo drbd

root@debian-master:~# modinfo drbd

filename:       /lib/modules/3.16.0-4-amd64/kernel/drivers/block/drbd/drbd.ko

alias:         block-major-147-*

license:       GPL

version:       8.4.3

description:   drbd - Distributed Replicated Block Device v8.4.3

author:         Philipp Reisner, Lars Ellenberg

srcversion:     1A9F77B1CA5FF92235C2213

depends:       lru_cache,libcrc32c

intree:         Y

vermagic:       3.16.0-4-amd64 SMP mod_unload modversions

parm:           minor_count:Approximate number of drbd devices (1-255) (uint)

parm:           disable_sendpage:bool

parm:           allow_oos:DONT USE! (bool)

parm:           proc_details:int

parm:           usermode_helper:string

root@debian-master:~#

 

 

Step 6 : Define a DRBD resource

I will create the file /etc/drbd.d/drbd1.res, with the same content, on both the master and the slave:

resource webserver {
    # Transfer rate
    syncer {
        rate 100M; # 100M for a 1 Gbps link
    }
    on debian-master {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 192.168.154.152:7788;
        meta-disk internal;
    }
    on debian-slave {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 192.168.154.153:7788;
        meta-disk internal;
    }
}

 

Explanation:

  • The resource name is webserver
  • The DRBD device is /dev/drbd0, which will appear once the resource is brought up
  • The backing partition is /dev/sdb1
  • The DRBD metadata is stored internally on /dev/sdb1 (meta-disk internal)
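Before creating the metadata, you can let drbdadm parse and print the resource definition to catch syntax errors early (drbdadm dump is part of drbd8-utils):

root@debian-master:~# drbdadm dump webserver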

 

Step 7 : Create the metadata on the Master and the Slave

root@debian-master:~# drbdadm create-md webserver

root@debian-master:~# drbdadm up webserver

 

root@debian-slave:~# drbdadm create-md webserver

root@debian-slave:~# drbdadm up webserver

 

Check that the Slave and the Master see each other, using the command “cat /proc/drbd”:

root@debian-slave:~# cat /proc/drbd

version: 8.4.3 (api:1/proto:86-101)

srcversion: 1A9F77B1CA5FF92235C2213

0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----

   ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:6291228

root@debian-slave:~#

 

root@debian-master:~# cat /proc/drbd

version: 8.4.3 (api:1/proto:86-101)

srcversion: 1A9F77B1CA5FF92235C2213

0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----

   ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:6291228

root@debian-master:~#

 

Step 8 : Define the primary and the secondary node

As you can see, our nodes are connected, but they are both Secondary and Inconsistent. We therefore need to define the primary and the secondary node:

root@debian-master:~# drbdadm -- --overwrite-data-of-peer primary webserver

root@debian-slave:~# drbdadm secondary webserver

 

To follow the synchronization, you can use either “cat /proc/drbd” or “watch cat /proc/drbd”. With “watch cat /proc/drbd”, the status is refreshed automatically every 2 seconds. To exit the watch command, use CTRL+C.

After the synchronization, the master should be Primary and, once the filesystem is created and mounted (see Step 9), hold /mnt/www. Both nodes should be in the “UpToDate” state.

root@debian-master:~# drbd-overview

0:webserver/0 Connected Primary/Secondary UpToDate/UpToDate /mnt/www ext4 5.8G 12M 5.5G 1%

root@debian-master:~#

 

root@debian-master:~# df -h

Filesystem Size Used Avail Use% Mounted on

/dev/sda1 14G 1.3G 12G 10% /

udev 10M 0 10M 0% /dev

tmpfs 198M 4.7M 193M 3% /run

tmpfs 494M 39M 456M 8% /dev/shm

tmpfs 5.0M 4.0K 5.0M 1% /run/lock

tmpfs 494M 0 494M 0% /sys/fs/cgroup

tmpfs 99M 0 99M 0% /run/user/1000

/dev/drbd0 5.8G 12M 5.5G 1% /mnt/www

 

root@debian-slave:~# drbd-overview

0:webserver/0 Connected Secondary/Primary UpToDate/UpToDate

root@debian-slave:~#

 

root@debian-slave:~# df -h

Filesystem Size Used Avail Use% Mounted on

/dev/sda1 14G 1.3G 12G 10% /

udev 10M 0 10M 0% /dev

tmpfs 198M 4.7M 193M 3% /run

tmpfs 494M 54M 441M 11% /dev/shm

tmpfs 5.0M 0 5.0M 0% /run/lock

tmpfs 494M 0 494M 0% /sys/fs/cgroup

tmpfs 99M 0 99M 0% /run/user/1000

root@debian-slave:~#

 

Step 9 : Add data to the DRBD resource

DRBD is working. Now we need to create a filesystem on /dev/drbd0, on the primary node only. This DRBD device will contain our Apache files. Let’s create an ext4 filesystem:

root@debian-master:~# mkfs.ext4 /dev/drbd0

mke2fs 1.42.12 (29-Aug-2014)

Creating filesystem with 1572807 4k blocks and 393216 inodes

Filesystem UUID: 48930373-b7e6-487d-b4c7-cf3948ccfab5

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

root@debian-master:~#

 

We will mount this filesystem on /mnt/www, so create the mount point (the slave also needs a /mnt/www directory so that Pacemaker can mount the filesystem there after a failover):

root@debian-master:~# mkdir /mnt/www

 

Back up the /var/www directory, then remove it on both nodes:

root@debian-master:~# cp -r /var/www/ /root/www/

root@debian-master:~# rm -rvf /var/www/

root@debian-slave:~# rm -rvf /var/www/

 

Create a symbolic link from /var/www to /mnt/www

root@debian-master:~# ln -s /mnt/www/ /var/

root@debian-slave:~# ln -s /mnt/www/ /var/

 

Check your configuration

root@debian-master:~# ls -la /var/www

lrwxrwxrwx 1 root root 9 Nov 26 14:15 /var/www -> /mnt/www/

root@debian-master:~#

 

root@debian-slave:~# ls -la /var/www

lrwxrwxrwx 1 root root 9 Nov 26 14:17 /var/www -> /mnt/www/

root@debian-slave:~#

 

Mount your partition /dev/drbd0 on /mnt/www

root@debian-master:~# mount /dev/drbd0 /mnt/www/

If you get the following error, maybe you are on the secondary node:

mount: /dev/drbd0 is write-protected, mounting read-only

mount: mount /dev/drbd0 on /mnt/www failed: Wrong medium type
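To check which node currently holds the Primary role for the resource, you can query drbdadm (it prints the local role followed by the peer’s role):

root@debian-master:~# drbdadm role webserver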

 

Install PHP on the Master and the Slave:
root@debian-master:~# apt-get install php5
root@debian-slave:~# apt-get install php5

 

Create the test file /mnt/www/test.php:

<?php echo "The hostname of this server is " .gethostname(). "\n" ?>


Beware: in this example, the DocumentRoot is set to /var/www in /etc/apache2/sites-enabled/000-default.conf.
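As a quick local check on the master, assuming Apache is already running there and curl is installed (wget -qO- works just as well):

root@debian-master:~# curl http://localhost/test.php

It should print the hostname of the node serving the request, i.e. debian-master here.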

 

You can then see the DRBD resource via the drbd-overview command:
root@debian-master:~# drbd-overview
 0:webserver/0  Connected Primary/Secondary UpToDate/UpToDate /mnt/webserver ext4 3.9G 8.0M 3.7G 1%
root@debian-master:~#


The mount point information is only visible on the primary node; on the secondary node, you will get this result:
root@debian-slave:~# drbd-overview
 0:r0/0  Connected Secondary/Primary UpToDate/UpToDate
root@debian-slave:~#

 

Step 10 : Pacemaker/Corosync

To manage the DRBD resource, we use the cluster resource manager Pacemaker, so Pacemaker has to be installed on both nodes (run the commands below on debian-master and debian-slave).


Firstly, add the download source:


root@debian-master:~# cat > /etc/apt/sources.list.d/jessie-backports.list << "EOF"
> deb http://http.debian.net/debian jessie-backports main
> EOF
root@debian-master:~# apt-get update
root@debian-master:~# apt-get install -t jessie-backports pacemaker crmsh

 

Disable the DRBD startup script on both nodes (the resource will be managed by Pacemaker):

root@debian-master:~# update-rc.d -f drbd remove
root@debian-slave:~# update-rc.d -f drbd remove

 

Customize /etc/corosync/corosync.conf on both nodes (a sketch of the resulting sections follows this list):
Totem section
     crypto_cipher: aes256
     crypto_hash: sha256
     bindnetaddr: 192.168.154.0


Logging section
     to_logfile: yes
     logfile: /var/log/corosync/corosync.log


Quorum section
     two_node: 1
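For reference, a minimal sketch of what the corresponding sections look like, assuming the default Debian corosync.conf as a starting point (identical on both nodes; only the relevant keys are shown):

totem {
        crypto_cipher: aes256
        crypto_hash: sha256
        interface {
                bindnetaddr: 192.168.154.0
        }
}

logging {
        to_logfile: yes
        logfile: /var/log/corosync/corosync.log
}

quorum {
        provider: corosync_votequorum
        two_node: 1
}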

 

Define a communication key

To authenticate the nodes to each other, we have to define a common key which will be used for private communication. Run the following command on the primary node:
root@debian-master:~# corosync-keygen
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy.
Press keys on your keyboard to generate entropy (bits = 920).
Press keys on your keyboard to generate entropy (bits = 1000).
Writing corosync key to /etc/corosync/authkey.
root@debian-master:~#


Finally, copy this key to the second node. I disabled root login on debian-slave, so I’ll use my standard account “svc-bkp”:

root@debian-master:~# scp /etc/corosync/authkey svc-bkp@debian-slave:/tmp

 

Then, on the second node, as root, move “authkey” to /etc/corosync/authkey:

root@debian-slave:~# mv /tmp/authkey /etc/corosync/authkey
root@debian-slave:~# chown root:root /etc/corosync/authkey
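Depending on your umask, the copied key may have lost its restrictive permissions, so it does not hurt to tighten them explicitly (corosync-keygen creates the original as mode 0400):

root@debian-slave:~# chmod 400 /etc/corosync/authkey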

 

Start the cluster

Enable the cluster service at boot time on the Master and the Slave: add “START=yes” in “/etc/default/corosync”.
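For example, a one-line edit on each node, assuming the file still contains the default “START=no” line:

root@debian-master:~# sed -i 's/^START=no/START=yes/' /etc/default/corosync
root@debian-slave:~# sed -i 's/^START=no/START=yes/' /etc/default/corosync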

 

Start the corosync service:
root@debian-master:~# /etc/init.d/corosync start
[ ok ] Starting corosync (via systemctl): corosync.service.
root@debian-master:~#

root@debian-slave:~# /etc/init.d/corosync start
[ ok ] Starting corosync (via systemctl): corosync.service.
root@debian-slave:~#

Check the cluster status: “crm status” or “crm_mon --one-shot -v”.

As you can see, our nodes see each other and we haven’t added resources yet. For more details on the crm shell, use “crm help”.

 

Step 11 : Add resources

First of all, we need to ignore the loss of quorum, so that the cluster keeps working when one node is down. We also disable STONITH, the fencing mechanism that can remotely power off a node.

root@debian-master:~# crm configure
crm(live)configure# property stonith-enabled=no
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# commit

 

Check your configuration with the "show" command if you are in the crm command-line interface; otherwise, use "crm configure show".

crm(live)configure# show

node 1084791448: debian-master

node 1084791449: debian-slave

property cib-bootstrap-options: \

       have-watchdog=false \

       dc-version=1.1.15-e174ec8 \

       cluster-infrastructure=corosync \

       cluster-name=debian \

       stonith-enabled=no \

       no-quorum-policy=ignore

crm(live)configure#

 

Resource 1: Virtual IP Address

We’ll use 192.168.154.151/24 as the virtual IP address. To add a resource to the cluster, the keyword “primitive” is used:

root@debian-master:~# crm configure
crm(live)configure# primitive virtual_ip ocf:heartbeat:IPaddr2 params ip="192.168.154.151" \
broadcast="192.168.154.255" nic="eth0" cidr_netmask="24" iflabel="vip1" \
op monitor interval="25s" timeout="20s"
crm(live)configure# commit

Explanation:

  • Resource name: virtual_ip
  • The script used to manage the resource: ocf:heartbeat:IPaddr2
  • The virtual IP appears in the “ifconfig” output thanks to “iflabel”
  • This resource is monitored every 25 seconds by the OCF script.
  • If the agent returns “0” within the 20-second timeout, Pacemaker considers the resource up.
  • Validate your configuration with commit

 

Check your configuration :

root@debian-master:~# crm status

Stack: corosync

Current DC: debian-master (version 1.1.15-e174ec8) - partition with quorum

Last updated: Sat Nov 5 18:56:13 2016         Last change: Sat Nov 5 18:51:45 2016 by root via cibadmin on debian-master

2 nodes and 1 resource configured

Online: [ debian-master debian-slave ]

Full list of resources:

virtual_ip     (ocf::heartbeat:IPaddr2):       Started debian-master

root@debian-master:~#

 

Check your NIC configuration : ifconfig extract

eth0:vip1 Link encap:Ethernet HWaddr 00:0c:29:1f:0b:7b

         inet addr:192.168.154.151 Bcast:192.168.154.255 Mask:255.255.255.0

         UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

 

 

Resource 2: Apache

root@debian-master:~# crm configure
crm(live)configure# primitive APACHE ocf:heartbeat:apache \
    params configfile="/etc/apache2/apache2.conf" \
    op monitor interval="30s" timeout="20s" \
    op start interval="0" timeout="40s" \
    op stop interval="0" timeout="60s"
crm(live)configure# commit

 

Check your configuration:

root@debian-master:~# crm status
Stack: corosync
Current DC: debian-master (version 1.1.15-e174ec8) - partition with quorum
Last updated: Sat Nov 26 13:06:15 2016          Last change: Sat Nov 26 13:03:20 2016 by root via cibadmin on debian-master
2 nodes and 2 resources configured
Online: [ debian-master debian-slave ]

Full list of resources:

 virtual_ip     (ocf::heartbeat:IPaddr2):       Started debian-master
 APACHE (ocf::heartbeat:apache):        Started debian-slave

root@debian-master:~#

The APACHE resource has been successfully added to the cluster. As you can see, APACHE is running on debian-slave while virtual_ip is on debian-master. We will see later how to force these resources onto the same node.

 

Resource 3: DRBD

root@debian-master:~# crm configure
crm(live)configure# primitive drbd_rsrce ocf:linbit:drbd params drbd_resource="webserver" \
op start interval="0" timeout="240" op stop interval="0" timeout="100" \
op monitor interval="59s" role="Master" timeout="30s" \
op monitor interval="60s" role="Slave" timeout="30s"
crm(live)configure# commit

 

Resource 4: Filesystem

root@debian-master:~# crm configure

crm(live)configure# primitive fsys_apache ocf:heartbeat:Filesystem params device="/dev/drbd0" \
directory="/mnt/www" fstype="ext4" op start interval="0" \
timeout="60" op stop interval="0" timeout="120"
crm(live)configure# commit

 

Step 12 : Resources and order

Now we need the cluster to start and stop the resources in a specific order. Firstly, I created two groups:

apache_grp (containing the primitives virtual_ip and APACHE) and filesystem_grp (containing the primitive fsys_apache).

crm(live)configure# group apache_grp virtual_ip APACHE
crm(live)configure# group filesystem_grp fsys_apache
crm(live)configure# commit

 

Secondly, some constraints are needed. We want only one DRBD master node (the maximum number of nodes is 2). A master/slave resource masla_drbd is created with the “ms” command.

crm(live)configure# ms masla_drbd drbd_rsrce meta master-node-max="1" clone-max="2" \
clone-node-max="1" globally-unique="false" notify="true" target-role="Master"
crm(live)configure# commit

For more information on ms, use “crm configure help ms”.

 

Then, we specify that masla_drbd should preferably run as master on debian-master, through the “location” command:

crm(live)configure# location drbd_on_masternode masla_drbd rule role="master" \
100: #uname eq debian-master
crm(live)configure# commit

 

Furthermore, apache_grp and filesystem_grp must run on the node where DRBD is master:

crm(live)configure# colocation apache-deps inf: masla_drbd:Master filesystem_grp apache_grp
crm(live)configure# commit

 

Finally, we specify the start order: promote the DRBD master, then start filesystem_grp, then apache_grp:

crm(live)configure# order order_on_drbd inf: masla_drbd:promote filesystem_grp:start apache_grp:start
crm(live)configure# commit

I recommend shutting down and booting your nodes in this order: shut down debian-slave, shut down debian-master, boot debian-master, then boot debian-slave.
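Before rebooting, you can also sanity-check the live configuration with crm_verify, which ships with Pacemaker:

root@debian-master:~# crm_verify -L -V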

 

Check your cluster status

root@debian-master:~# crm status
Stack: corosync
Current DC: debian-master (version 1.1.15-e174ec8) - partition with quorum
Last updated: Sat Nov 26 16:10:54 2016         Last change: Sat Nov 26 15:48:54 2016 by root via cibadmin on debian-master
2 nodes and 5 resources configured
Online: [ debian-master debian-slave ]
Full list of resources:

Resource Group: apache_grp

     virtual_ip (ocf::heartbeat:IPaddr2):       Started debian-master
     APACHE    (ocf::heartbeat:apache):       Started debian-master

Resource Group: filesystem_grp

     fsys_apache       (ocf::heartbeat:Filesystem):   Started debian-master

Master/Slave Set: masla_drbd [drbd_rsrce]

     Masters: [ debian-master ]
     Slaves: [ debian-slave ]

root@debian-master:~#

All the resources are on debian-master, which is the primary node; debian-slave is the secondary node.

 

Final configuration

This is the resulting configuration of the cluster and its resources, as reported by “crm configure show”:

root@debian-master:~# crm configure show

node 1084791448: debian-master

node 1084791449: debian-slave

primitive APACHE apache \

       params configfile="/etc/apache2/apache2.conf" \

       op monitor interval=30s timeout=20s \

       op start interval=0 timeout=40s \

       op stop interval=0 timeout=60s

primitive drbd_rsrce ocf:linbit:drbd \

       params drbd_resource=webserver \

       op start interval=0 timeout=240 \

       op stop interval=0 timeout=100 \

       op monitor interval=59s role=Master timeout=30s \

       op monitor interval=60s role=Slave timeout=30s

primitive fsys_apache Filesystem \

       params device="/dev/drbd0" directory="/mnt/www" fstype=ext4 \

       op start interval=0 timeout=60 \

       op stop interval=0 timeout=120

primitive virtual_ip IPaddr2 \

       params ip=192.168.154.151 broadcast=192.168.154.255 nic=eth0 cidr_netmask=24 iflabel=vip1 \

       op monitor interval=25s timeout=20s

group apache_grp virtual_ip APACHE

group filesystem_grp fsys_apache

ms masla_drbd drbd_rsrce \

       meta master-node-max=1 clone-max=2 clone-node-max=1 globally-unique=false notify=true target-role=Master

colocation apache-deps inf: masla_drbd:Master filesystem_grp apache_grp

location drbd_on_masternode masla_drbd \

       rule $role=master 100: #uname eq debian-master

order order_on_drbd inf: masla_drbd:promote filesystem_grp:start apache_grp:start

property cib-bootstrap-options: \

       have-watchdog=false \

       dc-version=1.1.15-e174ec8 \

       cluster-infrastructure=corosync \

       cluster-name=debian \

       stonith-enabled=no \

     no-quorum-policy=ignore \

       last-lrm-refresh=1480162738

root@debian-master:~#

 

Tests

The primary node

Reminder: the script test.php should print out the name of the current primary node.

The primary node is down

In this scenario, I assume that the first node is down. In principle, debian-slave should become the primary node.

This is the state of our cluster before the test:

root@debian-master:~# crm status

Stack: corosync

Current DC: debian-master (version 1.1.15-e174ec8) - partition with quorum

Last updated: Sun Nov 27 04:24:14 2016         Last change: Sun Nov 27 04:13:55 2016 by root via cibadmin on debian-master

2 nodes and 5 resources configured

Online: [ debian-master debian-slave ]

Full list of resources:

Resource Group: apache_grp

     virtual_ip (ocf::heartbeat:IPaddr2):     Started debian-master

     APACHE     (ocf::heartbeat:apache):       Started debian-master

Resource Group: filesystem_grp

     fsys_apache       (ocf::heartbeat:Filesystem):   Started debian-master

Master/Slave Set: masla_drbd [drbd_rsrce]

     Masters: [ debian-master ]

     Slaves: [ debian-slave ]

root@debian-master:~#

 

Shutdown the primary node:

root@debian-master:~# shutdown

Shutdown scheduled for Sun 2016-11-27 04:29:43 CET, use 'shutdown -c' to cancel.

root@debian-master:~#

Broadcast message from root@debian-master (Sun 2016-11-27 04:28:43 CET):

The system is going down for power-off at Sun 2016-11-27 04:29:43 CET!

root@debian-master:~#

 

Check the state of debian-slave:

root@debian-slave:~# crm status

Stack: corosync

Current DC: debian-slave (version 1.1.15-e174ec8) - partition with quorum

Last updated: Sun Nov 27 04:31:00 2016         Last change: Sun Nov 27 04:13:55 2016 by root via cibadmin on debian-master

2 nodes and 5 resources configured

Online: [ debian-slave ]

OFFLINE: [ debian-master ]

Full list of resources:

Resource Group: apache_grp

     virtual_ip (ocf::heartbeat:IPaddr2):       Started debian-slave

     APACHE     (ocf::heartbeat:apache):       Started debian-slave

Resource Group: filesystem_grp

     fsys_apache       (ocf::heartbeat:Filesystem):   Started debian-slave

Master/Slave Set: masla_drbd [drbd_rsrce]

     Masters: [ debian-slave ]

     Stopped: [ debian-master ]

root@debian-slave:~#

As you can see, the resources are now on debian-slave, which has become the primary node. We get the same result when requesting test.php through the virtual IP address.
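For example, assuming curl is available (here run from debian-slave itself):

root@debian-slave:~# curl http://192.168.154.151/test.php

The reply should now contain “debian-slave”, since test.php prints the hostname of the node that serves the request.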

 

Some useful commands

Delete a resource

root@debian-master:~# crm resource stop drbd
root@debian-master:~# crm configure delete drbd

 

Cleanup a resource

Sometimes, the crm status output shows failed actions. Most of the time, a cleanup is enough to clear the error:

2 nodes and 5 resources configured

Online: [ debian-master debian-slave ]

Full list of resources:

Resource Group: apache_grp

     virtual_ip (ocf::heartbeat:IPaddr2):       Started debian-master

     APACHE     (ocf::heartbeat:apache):       Started debian-master

Resource Group: filesystem_grp

     fsys_apache       (ocf::heartbeat:Filesystem):   Started debian-master

Master/Slave Set: masla_drbd [drbd_rsrce]

     Masters: [ debian-master ]

     Slaves: [ debian-slave ]

Failed Actions:

* APACHE_monitor_30000 on debian-slave 'invalid parameter' (2): call=34, status=complete, exitreason='none',

last-rc-change='Sun Nov 27 04:40:49 2016', queued=0ms, exec=0ms

 

root@debian-master:~# crm resource cleanup APACHE

Cleaning up virtual_ip on debian-master, removing fail-count-virtual_ip

Cleaning up virtual_ip on debian-slave, removing fail-count-virtual_ip

Cleaning up APACHE on debian-master, removing fail-count-APACHE

Cleaning up APACHE on debian-slave, removing fail-count-APACHE

Waiting for 4 replies from the CRMd.... OK

root@debian-master:~#

 

