Monday, August 17, 2009

Zimbra+DRBD+Heartbeat High Availability Server Solution

I had been testing high availability server solutions for a long time, and recently I succeeded. This is a very reliable approach for an enterprise server. In this scenario I used Zimbra as the MTA; I tried hard to integrate plain Postfix as the MTA, but it was not as reliable. I was also able to run Squid and iptables on the same server. The OS used here is Ubuntu 8.04 Server Edition, so this is a mail + proxy + firewall solution with high availability. I hope this will be helpful to you all. I have listed the major steps below; if you need further help, don't hesitate to leave a comment with your email ID.


DRBD & Heartbeat Configuration for a High Availability System


1. Installing Ubuntu 8.04

 Install Ubuntu on two identical servers with the following instructions.

The partition table should look like this:

/dev/sda1 * (boot flag on)   ext3, mounted on /            12 GB
/dev/sda2                    swap                           4 GB
/dev/sda3                    extended
/dev/sda5                    no file system, not mounted   63.9 GB (DRBD data disk)
/dev/sda6                    no file system, not mounted   150 MB (DRBD metadata)

Use the "Do not mount this partition" and "Do not use this partition" options when creating sda5 and sda6.
NOTE – The metadata partition (150 MB) must be created at the end of the disk.
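To see why 150 MB is more than enough for the metadata partition, the DRBD user guide gives an estimate of roughly M = ceil(C/262144) × 8 + 72 sectors, where C is the data device capacity in 512-byte sectors. A quick sketch of the arithmetic (treat the formula as an approximation for your DRBD version; the 64 GB figure stands in for the sda5 size above):

```shell
# Rough size check for the external DRBD metadata partition.
data_gb=64                                   # approximate size of /dev/sda5
c=$(( data_gb * 1024 * 1024 * 2 ))           # capacity in 512-byte sectors
m=$(( (c + 262143) / 262144 * 8 + 72 ))      # metadata size in sectors
echo "metadata needed: $(( m / 2 )) KB"      # ~2 MB for a 64 GB data disk
```

Even for the full 63.9 GB data partition this comes to only about 2 MB, so 150 MB leaves plenty of headroom.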


2. Edit host files

On server 1
/etc/hostname
zimbra-1

/etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.0.134 zimbra-1.jwcgloves.com mail.jwcgloves.com
192.168.0.162 zimbra-2.jwcgloves.com

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

On server 2
/etc/hostname
zimbra-2

/etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.0.162 zimbra-2.jwcgloves.com mail.jwcgloves.com
192.168.0.134 zimbra-1.jwcgloves.com

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
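DRBD and heartbeat both refer to the nodes by these names, so each server must be able to resolve both. A quick sanity check, shown here against an inline copy of the server1 entries so it is self-contained (on a real node, point grep at /etc/hosts instead):

```shell
# Check that both node names appear in the hosts data.
# Replace the inline string with "$(cat /etc/hosts)" on a real server.
hosts='192.168.0.134 zimbra-1.jwcgloves.com mail.jwcgloves.com
192.168.0.162 zimbra-2.jwcgloves.com'
for n in zimbra-1 zimbra-2; do
  echo "$hosts" | grep -q "$n" && echo "$n: ok" || echo "$n: MISSING"
done
```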


3. Installing DRBD


• Before installing DRBD, install the build tools and kernel headers:
apt-get install flex build-essential linux-headers-`uname -r`

• Download the latest DRBD 8.2 source, then build and install it:
wget http://oss.linbit.com/drbd/8.2/drbd-8.2.7.tar.gz
tar -xzvf drbd-8.2.7.tar.gz

cd drbd-8.2.7
make && make install

cd ..
update-rc.d drbd defaults 70
modprobe drbd
cp /etc/drbd.conf /etc/drbd.conf.org


4. Configuring DRBD on both servers

NOTE – The DRBD configuration must be identical on server1 and server2.

/etc/drbd.conf

resource r0 {
  protocol C;
  startup {
    degr-wfc-timeout 120; # 2 minutes
  }
  disk {
    on-io-error detach;
  }
  net {
  }
  syncer {
    rate 110M;
    # group 1;
    al-extents 257;
  }
  on zimbra-1 {
    device    /dev/drbd0;
    disk      /dev/sda5;
    address   192.168.0.134:7788;
    meta-disk /dev/sda6[0];
  }
  on zimbra-2 {
    device    /dev/drbd0;
    disk      /dev/sda5;
    address   192.168.0.162:7788;
    meta-disk /dev/sda6[0];
  }
}

5. Prepare the DRBD disk

On server1 & server2

drbdadm create-md r0
/etc/init.d/drbd restart

The output of cat /proc/drbd should look like this:

version: 8.0.11 (api:86/proto:86)
GIT-hash: b3fe2bdfd3b9f7c2f923186883eb9e2a0d3a5b1b build by phil@mescal, 2008-02-12 11:56:43
0: cs:Connected st:Secondary/Secondary ds:UpToDate/UpToDate C r---
ns:40752 nr:1054380 dw:1095132 dr:218077 al:22 bm:265 lo:0 pe:0 ua:0 ap:0
resync: used:0/31 hits:65713 misses:143 starving:0 dirty:0 changed:143
act_log: used:0/257 hits:9482 misses:22 starving:0 dirty:0 changed:22
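The fields to watch in that output are cs: (connection state) and ds: (local/peer disk state). A small parse of the status line is handy for monitoring scripts; it is shown here against a captured sample so the result is reproducible (read the real /proc/drbd on a node):

```shell
# Pull the connection state and local disk state out of a /proc/drbd
# status line. Healthy when they read Connected and UpToDate.
sample='0: cs:Connected st:Secondary/Secondary ds:UpToDate/UpToDate C r---'
cs=$(echo "$sample" | sed -n 's/.*cs:\([A-Za-z]*\).*/\1/p')
ds=$(echo "$sample" | sed -n 's/.*ds:\([A-Za-z]*\)\/.*/\1/p')
echo "connection=$cs disk=$ds"
```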



On server1

drbdadm -- --overwrite-data-of-peer primary r0
mkfs -t ext3 /dev/drbd0
mount -t ext3 /dev/drbd0 /opt

Now the output of cat /proc/drbd should look like this.

Let the initial sync run until it finishes:

root@zimbra-1:~# cat /proc/drbd
version: 8.0.11 (api:86/proto:86)
GIT-hash: b3fe2bdfd3b9f7c2f923186883eb9e2a0d3a5b1b build by phil@mescal, 2008-02-12 11:56:43
0: cs:Connected st:Primary/Secondary ds:UpToDate/UpToDate C r---
[>...................] sync'ed: 2.4% (3083548/3148572) K
ns:40752 nr:1054380 dw:1095132 dr:218077 al:22 bm:265 lo:0 pe:0 ua:0 ap:0
resync: used:0/31 hits:65713 misses:143 starving:0 dirty:0 changed:143
act_log: used:0/257 hits:9482 misses:22 starving:0 dirty:0 changed:22
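The initial sync time is easy to estimate from those progress figures: kilobytes remaining divided by the syncer rate (110M in the drbd.conf above). A rough sketch, using the numbers from the sample output:

```shell
# Estimate remaining sync time from the /proc/drbd progress figures.
left_k=3083548            # K still to sync, from the progress line
rate_k=$(( 110 * 1024 ))  # syncer rate of 110M expressed in KB/s
echo "about $(( left_k / rate_k )) seconds of sync remaining"
```

Real-world throughput will usually be lower than the configured rate, so treat this as a lower bound.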



6. Installing Zimbra


* Install Zimbra on server1
Get the latest Zimbra tarball from zimbra.com and install Zimbra.
Then unmount the DRBD disk and restart DRBD:
umount /opt
/etc/init.d/drbd restart (server1 now drops back to Secondary)

* Install Zimbra on server2
Before installing Zimbra on server2, the DRBD disk has to be mounted on /opt.
First make server2 the primary:

drbdadm -- --overwrite-data-of-peer primary r0
mount -t ext3 /dev/drbd0 /opt

Now get the Zimbra file and install Zimbra on server2. After the installation, unmount the disk:
umount /opt
/etc/init.d/drbd restart

At the end of the installation, make sure to set server1 back to primary using:
drbdadm -- --overwrite-data-of-peer primary r0


7. Install and configure heartbeat

apt-get install heartbeat

Create ha.cf, haresources and authkeys on both server1 and server2:

/etc/heartbeat/ha.cf
logfacility local0
keepalive 2
deadtime 20 # timeout before the other server takes over
bcast eth0
node zimbra-1 zimbra-2 #node host names
auto_failback on # very important or auto failover won't happen

/etc/heartbeat/haresources
zimbra-1 IPaddr::192.168.0.190/24/eth0 203.143.39.244/28/eth1 drbddisk::r0 Filesystem::/dev/drbd0::/opt::ext3 zimbra
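The haresources line reads left to right: the preferred node first, then the resources heartbeat starts in order (and stops in reverse on failover). A bare IP address is heartbeat shorthand for an IPaddr resource. Splitting the line makes the start order explicit:

```shell
# Heartbeat starts these left to right and stops them right to left on
# failover: the floating IPs, then the DRBD disk, the mount, then Zimbra.
line='zimbra-1 IPaddr::192.168.0.190/24/eth0 203.143.39.244/28/eth1 drbddisk::r0 Filesystem::/dev/drbd0::/opt::ext3 zimbra'
set -- $line
echo "preferred node: $1"; shift
for r in "$@"; do echo "start: ${r%%::*}"; done
```

Keeping the zimbra init script last matters: it must not start before the DRBD disk is promoted and mounted on /opt.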

/etc/heartbeat/authkeys
auth 3
3 md5 123456
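The md5 secret in authkeys is a shared password between the two nodes, so something stronger than 123456 is advisable in production. One way to generate a random one (copy the same two output lines into the authkeys file on both servers):

```shell
# Generate a random 32-hex-character shared secret for authkeys.
secret=$(dd if=/dev/urandom bs=64 count=1 2>/dev/null | md5sum | cut -d' ' -f1)
printf 'auth 3\n3 md5 %s\n' "$secret"
```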

Set permission to authkey file on both servers
chmod 600 /etc/heartbeat/authkeys

Now start heartbeat on both servers
/etc/init.d/heartbeat start


Now all services will start on server1 within a few seconds, and you'll be able to ping the virtual IPs. The virtual IPs bind to sub-interfaces like this (output of ip addr):


1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0e:2e:eb:dd:7c brd ff:ff:ff:ff:ff:ff
inet 203.143.39.245/28 brd 203.143.39.255 scope global eth1
inet 203.143.39.244/28 brd 203.143.39.255 scope global secondary eth1:0
inet6 fe80::20e:2eff:feeb:dd7c/64 scope link
valid_lft forever preferred_lft forever
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:1d:92:ed:01:74 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.134/24 brd 192.168.0.255 scope global eth0
inet 192.168.0.190/24 brd 192.168.0.255 scope global secondary eth0:0
inet6 fe80::21d:92ff:feed:174/64 scope link
valid_lft forever preferred_lft forever

To test the failover, stop heartbeat on server1:
/etc/init.d/heartbeat stop

All services will now be taken over by server2.
(To monitor the process: tail -f /var/log/messages)
