
            Load-Balanced High-Availability Apache Cluster Using UltraMonkey

This tutorial shows how to set up a two-node Apache web server cluster that provides high availability. In front of the Apache cluster we create a load balancer that splits incoming requests between the two Apache nodes. Because we do not want the load balancer to become another single point of failure, we must provide high availability for the load balancer, too. Therefore our load balancer will in fact consist of two load balancer nodes that monitor each other using heartbeat; if one load balancer fails, the other takes over silently.

We need four nodes (two Apache nodes and two load balancer nodes) and five IP addresses: one for each node and one virtual IP address that is shared by the load balancer nodes and used for incoming HTTP requests.

We will use the following setup here:

Apache node 1: webserver1.example.com (webserver1) - IP address: 192.168.0.101; Apache document root: /var/www
Apache node 2: webserver2.example.com (webserver2) - IP address: 192.168.0.102; Apache document root: /var/www
Load Balancer node 1: loadb1.example.com (loadb1) - IP address: 192.168.0.103
Load Balancer node 2: loadb2.example.com (loadb2) - IP address: 192.168.0.104
Virtual IP Address: 192.168.0.105 (used for incoming requests)

In this tutorial we will use Debian Sarge for all four nodes. We assume that you have installed a basic Debian system on all four nodes and that Apache is installed on webserver1 and webserver2, with /var/www being the document root of the main web site.

Enable IPVS On The Load Balancers

First we must enable IPVS on our load balancers. IPVS (IP Virtual Server) implements transport-layer load balancing inside the Linux kernel, so-called Layer-4 switching.

loadb1/loadb2

echo ip_vs_dh >> /etc/modules
echo ip_vs_ftp >> /etc/modules
echo ip_vs >> /etc/modules
echo ip_vs_lblc >> /etc/modules
echo ip_vs_lblcr >> /etc/modules
echo ip_vs_lc >> /etc/modules
echo ip_vs_nq >> /etc/modules
echo ip_vs_rr >> /etc/modules
echo ip_vs_sed >> /etc/modules
echo ip_vs_sh >> /etc/modules
echo ip_vs_wlc >> /etc/modules
echo ip_vs_wrr >> /etc/modules

Then we load the modules immediately, without rebooting:

loadb1/loadb2

modprobe ip_vs_dh
modprobe ip_vs_ftp
modprobe ip_vs
modprobe ip_vs_lblc
modprobe ip_vs_lblcr
modprobe ip_vs_lc
modprobe ip_vs_nq
modprobe ip_vs_rr
modprobe ip_vs_sed
modprobe ip_vs_sh
modprobe ip_vs_wlc
modprobe ip_vs_wrr

If you get errors, then most probably your kernel wasn't compiled with IPVS support, and you need to compile a new kernel with IPVS support (or install a kernel image with IPVS support) now.
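If you want to verify that IPVS is actually available before continuing (this is just a sanity check, not part of the Ultra Monkey setup), you can look at the corresponding /proc entry:

loadb1/loadb2:

#cat /proc/net/ip_vs

On a kernel with IPVS support and the modules loaded, this prints the "IP Virtual Server version ..." header followed by an (initially empty) table of virtual services.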

Install UltraMonkey On The Load Balancers in Debian

What is UltraMonkey?

Ultra Monkey is a project to create load balanced and highly available network services, for example a cluster of web servers that appears as a single web server to end users. The service may be for end users across the world connected via the internet, or for enterprise users connected via an intranet.

Ultra Monkey makes use of the Linux operating system to provide a flexible solution that can be tailored to a wide range of needs, from small clusters of only two nodes to large systems serving thousands of connections per second.

UltraMonkey Features

Fast Load Balancing using The Linux Virtual Server
Flexible High Availability provided by the Linux-HA framework
Service level monitoring using ldirectord
Supports Highly Available and/or Load Balanced topologies with worked configuration examples
Easily expandable to a large number of IP based virtual services using fwmarks
Pre-built packages for Debian Sarge and Red Hat Enterprise Linux 3.
All Code is Open Source

Download UltraMonkey

http://www.ultramonkey.org/download/3/

UltraMonkey Documentation

http://www.ultramonkey.org/3/

Install UltraMonkey in Debian

To install Ultra Monkey, we must edit /etc/apt/sources.list now and add these two lines (don't remove the other repositories):

loadb1/loadb2:

#vi /etc/apt/sources.list

deb http://www.ultramonkey.org/download/3/ sarge main
deb-src http://www.ultramonkey.org/download/3 sarge main

Afterwards we do this:

loadb1/loadb2:

#apt-get update

and install Ultra Monkey:

loadb1/loadb2:

#apt-get install ultramonkey

If you see this warning:

libsensors3 not functional
It appears that your kernel is not compiled with sensors support. As a
result, libsensors3 will not be functional on your system.
If you want to enable it, have a look at "I2C Hardware Sensors Chip
support" in your kernel configuration.

you can ignore it.

During the Ultra Monkey installation you will be asked a few questions. Answer as follows:

Do you want to automatically load IPVS rules on boot?
<-- No

Select a daemon method.
<-- none

Enable Packet Forwarding On The Load Balancers

The load balancers must be able to route traffic to the Apache nodes. Therefore we must enable packet forwarding on the load balancers. Add the following lines to /etc/sysctl.conf:

loadb1/loadb2:

#vi /etc/sysctl.conf

# Enables packet forwarding
net.ipv4.ip_forward = 1

Then do this:

loadb1/loadb2:

#sysctl -p
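To double-check that forwarding is really enabled (this simply reads the kernel setting back), you can do this:

loadb1/loadb2:

#cat /proc/sys/net/ipv4/ip_forward

This should print 1.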

Configure heartbeat And ldirectord

Now we have to create three configuration files for heartbeat. They must be identical on loadb1 and loadb2!

loadb1/loadb2:

#vi /etc/ha.d/ha.cf

logfacility local0
bcast eth0 # Linux
mcast eth0 225.0.0.1 694 1 0
auto_failback off
node loadb1
node loadb2
respawn hacluster /usr/lib/heartbeat/ipfail
apiauth ipfail gid=haclient uid=hacluster

Important: As nodenames we must use the output of

#uname -n

on loadb1 and loadb2.

loadb1/loadb2:

#vi /etc/ha.d/haresources

loadb1 \
ldirectord::ldirectord.cf \
LVSSyncDaemonSwap::master \
IPaddr2::192.168.0.105/24/eth0/192.168.0.255

The first word is the output of

#uname -n

on loadb1, no matter if you create the file on loadb1 or loadb2! After IPaddr2 we put our virtual IP address 192.168.0.105.

loadb1/loadb2:

#vi /etc/ha.d/authkeys

auth 3
3 md5 somerandomstring

somerandomstring is a password which the two heartbeat daemons on loadb1 and loadb2 use to authenticate against each other. Use your own string here. You can choose between three authentication mechanisms (crc, md5, and sha1); we use md5 here because it is cryptographically stronger than crc.
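If you prefer a generated password over an invented one, one simple way (just a sketch; any sufficiently random string will do) is to hash a few blocks from /dev/urandom and use the resulting hex string as the password:

loadb1/loadb2:

#dd if=/dev/urandom count=4 2>/dev/null | md5sum

Put the same string into /etc/ha.d/authkeys on both load balancers.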

/etc/ha.d/authkeys should be readable by root only, therefore we do this:

loadb1/loadb2:

#chmod 600 /etc/ha.d/authkeys

ldirectord is the actual load balancer. We are going to configure our two load balancers (loadb1.example.com and loadb2.example.com) in an active/passive setup, which means we have one active load balancer, and the other one is a hot-standby and becomes active if the active one fails. To make it work, we must create the ldirectord configuration file /etc/ha.d/ldirectord.cf which again must be identical on loadb1 and loadb2.

loadb1/loadb2:

#vi /etc/ha.d/ldirectord.cf

checktimeout=10
checkinterval=2
autoreload=no
logfile="local0"
quiescent=yes
virtual=192.168.0.105:80
        real=192.168.0.101:80 gate
        real=192.168.0.102:80 gate
        fallback=127.0.0.1:80 gate
        service=http
        request="ldirector.html"
        receive="Test Page"
        scheduler=rr
        protocol=tcp
        checktype=negotiate

In the virtual= line we put our virtual IP address (192.168.0.105 in this example), and in the real= lines we list the IP addresses of our Apache nodes (192.168.0.101 and 192.168.0.102 in this example). In the request= line we list the name of a file on webserver1 and webserver2 that ldirectord will request repeatedly to see if webserver1 and webserver2 are still alive. That file (which we are going to create later on) must contain the string listed in the receive= line. Note that the lines belonging to the virtual service (from real= down to checktype=) must be indented; only the global options at the top of the file start in the first column.
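Once ldirector.html exists on the web servers (we create it in the last chapter), you can reproduce ldirectord's health check manually from a load balancer to make sure the request= and receive= values fit together. This assumes wget is installed on the load balancers:

loadb1/loadb2:

#wget -qO- http://192.168.0.101/ldirector.html
#wget -qO- http://192.168.0.102/ldirector.html

Both commands should print Test Page.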

Afterwards we create the system startup links for heartbeat and remove those of ldirectord because ldirectord will be started by the heartbeat daemon:

loadb1/loadb2:

update-rc.d heartbeat start 75 2 3 4 5 . stop 05 0 1 6 .
update-rc.d -f ldirectord remove

Finally we start heartbeat (and with it ldirectord):

loadb1/loadb2:

#/etc/init.d/ldirectord stop
#/etc/init.d/heartbeat start

Test The Load Balancers

Let's check if both load balancers work as expected:

loadb1/loadb2:

#ip addr sh eth0

The active load balancer should list the virtual IP address (192.168.0.105):

2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:16:3e:40:18:e5 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.103/24 brd 192.168.0.255 scope global eth0
inet 192.168.0.105/24 brd 192.168.0.255 scope global secondary eth0

The hot-standby should show this:

2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:16:3e:50:e3:3a brd ff:ff:ff:ff:ff:ff
inet 192.168.0.104/24 brd 192.168.0.255 scope global eth0

loadb1/loadb2:

ldirectord ldirectord.cf status

Output on the active load balancer:

ldirectord for /etc/ha.d/ldirectord.cf is running with pid: 1455

Output on the hot-standby:

ldirectord is stopped for /etc/ha.d/ldirectord.cf

loadb1/loadb2:

ipvsadm -L -n

Output on the active load balancer:

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.0.105:80 rr
-> 192.168.0.101:80 Route 0 0 0
-> 192.168.0.102:80 Route 0 0 0
-> 127.0.0.1:80 Local 1 0 0

Output on the hot-standby:

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

loadb1/loadb2:

/etc/ha.d/resource.d/LVSSyncDaemonSwap master status

Output on the active load balancer:

master running(ipvs_syncmaster pid: 1591)

Output on the hot-standby:

master stopped

If your tests went fine, you can now go on and configure the two Apache nodes.

Configure The Two Apache Nodes

Finally we must configure our Apache cluster nodes webserver1.example.com and webserver2.example.com to accept requests on the virtual IP address 192.168.0.105.

webserver1/webserver2:

#apt-get install iproute

Add the following to /etc/sysctl.conf:

webserver1/webserver2:

#vi /etc/sysctl.conf

# Enable configuration of arp_ignore option
net.ipv4.conf.all.arp_ignore = 1
# When an arp request is received on eth0, only respond if that address is
# configured on eth0. In particular, do not respond if the address is
# configured on lo
net.ipv4.conf.eth0.arp_ignore = 1
# Ditto for eth1, add for all ARPing interfaces
#net.ipv4.conf.eth1.arp_ignore = 1
# Enable configuration of arp_announce option
net.ipv4.conf.all.arp_announce = 2
# When making an ARP request sent through eth0 Always use an address that
# is configured on eth0 as the source address of the ARP request. If this
# is not set, and packets are being sent out eth0 for an address that is on
# lo, and an arp request is required, then the address on lo will be used.
# As the source IP address of arp requests is entered into the ARP cache on
# the destination, it has the effect of announcing this address. This is
# not desirable in this case as addresses on lo on the real-servers should
# be announced only by the linux-director.
net.ipv4.conf.eth0.arp_announce = 2
# Ditto for eth1, add for all ARPing interfaces
#net.ipv4.conf.eth1.arp_announce = 2

Then run this:

webserver1/webserver2:

#sysctl -p
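To confirm that the kernel has accepted the ARP settings (again simply reading the values back), check the corresponding /proc entries:

webserver1/webserver2:

#cat /proc/sys/net/ipv4/conf/eth0/arp_ignore
#cat /proc/sys/net/ipv4/conf/eth0/arp_announce

The first command should print 1, the second 2.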

Add this section for the virtual IP address to /etc/network/interfaces:

webserver1/webserver2:

#vi /etc/network/interfaces

auto lo:0
iface lo:0 inet static
  address 192.168.0.105
  netmask 255.255.255.255
  pre-up sysctl -p > /dev/null

Then run this:

webserver1/webserver2:

ifup lo:0
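You can check that the virtual IP address is now configured on the loopback device (the exact output depends on your iproute version):

webserver1/webserver2:

#ip addr sh lo

In addition to the usual 127.0.0.1/8 entry, the output should contain a line similar to this:

inet 192.168.0.105/32 scope global lo:0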

Finally we must create the file ldirector.html. This file is requested by the two load balancer nodes repeatedly so that they can see if the two Apache nodes are still running. I assume that the document root of the main apache web site on webserver1 and webserver2 is /var/www, therefore we create the file /var/www/ldirector.html:

webserver1/webserver2:

#vi /var/www/ldirector.html

Test Page
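To make sure Apache really serves this file, you can fetch it locally on each web server (assuming wget is installed there):

webserver1/webserver2:

#wget -qO- http://localhost/ldirector.html

This should print Test Page.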

Further Testing

You can now access the web site that is hosted by the two Apache nodes by typing http://192.168.0.105 in your browser.

Now stop the Apache on either webserver1 or webserver2. You should then still see the web site on http://192.168.0.105 because the load balancer directs requests to the working Apache node. Of course, if you stop both Apaches, then your request will fail.
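You can watch what happens on the active load balancer while you do this. Because we set quiescent=yes, ldirectord does not remove a failed real server from the IPVS table but sets its weight to 0, so no new connections are sent to it. A rough way to observe this (the exact connection counters will differ on your system):

loadb1/loadb2 (on the active load balancer):

#ipvsadm -L -n

While both Apaches are running, both real servers should be listed with a weight of 1; shortly after you stop one of them, its weight should drop back to 0.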

Now let's assume that loadb1 is our active load balancer, and loadb2 is the hot-standby. Now stop heartbeat on loadb1:

loadb1:

#/etc/init.d/heartbeat stop

Wait a few seconds, and then try http://192.168.0.105 again in your browser. You should still see your web site because loadb2 has taken the active role now.

Now start heartbeat again on loadb1:

loadb1:

#/etc/init.d/heartbeat start

loadb2 should still hold the active role. Repeat the tests from the Test The Load Balancers section on loadb1 and loadb2, and you should see the results reversed, with loadb2 now acting as the active load balancer.

If you have also passed these tests, then your load-balanced Apache cluster is working as expected.