Lab environment:
Backend servers: vm3 172.25.3.3, vm4 172.25.3.4
HAProxy hosts: vm1 172.25.3.1, vm2 172.25.3.2
First, install keepalived on vm1 and vm2.
Note: the high-availability yum repository must be configured in advance.
vm1 will act as the MASTER, vm2 as the BACKUP.

yum install keepalived -y
Configure keepalived on vm1 to use the external check script /opt/check_haproxy.sh:
[root@vm1 ~]# cat /opt/check_haproxy.sh
#!/bin/bash
# If haproxy is not running, try to restart it
systemctl status haproxy &> /dev/null || systemctl restart haproxy &> /dev/null
# killall -0 sends no signal; it only tests whether a haproxy process exists
killall -0 haproxy &> /dev/null
if [ $? -ne 0 ]; then
    # The restart also failed: stop keepalived so the VIP fails over to the backup
    systemctl stop keepalived
fi
Script explanation: first, systemctl status haproxy checks the state of the haproxy service. If the service is running, nothing further is done; if it is not, systemctl restart haproxy is executed to bring it back up. killall -0 haproxy then tests whether a haproxy process actually exists, and [ $? -ne 0 ] evaluates the result: if haproxy is alive, nothing happens; if it is not (meaning the restart also failed), keepalived is stopped, and the VIP migrates to the backup node (vm2).
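The whole decision hinges on the exit status of killall -0, which delivers no signal and only reports whether a matching process exists. The same semantics can be demonstrated off-cluster with kill -0 on a single PID (a self-contained sketch, not part of the check script itself):

```shell
# kill -0 (like killall -0) sends no signal; it only tests process existence.
kill -0 $$                        # the current shell certainly exists
echo "live process: exit $?"      # prints: live process: exit 0

sleep 0.1 &                       # start and reap a short-lived child
pid=$!
wait "$pid"
kill -0 "$pid" 2>/dev/null        # the PID is gone now
echo "dead process: exit $?"      # non-zero exit status
```

This is exactly why the script can use [ $? -ne 0 ] as a "haproxy is really gone" test after the restart attempt.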
keepalived configuration file on vm1:
! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from keepalived
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

## Define the external check script
vrrp_script check_haproxy {
    script "/opt/check_haproxy.sh"
    interval 2
    weight 0
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 56
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        check_haproxy
    }
    virtual_ipaddress {
        172.25.3.100
    }
}
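After starting keepalived on vm1 (systemctl enable --now keepalived), the VIP should appear as an additional address on eth0, which you can confirm with ip addr show eth0. A sketch of that check, run here against captured sample output so it works off-cluster (the sample addresses are the ones assumed in this lab):

```shell
# On the real master you would pipe:  ip addr show eth0
# Here a sample capture stands in for it, to show what a successful
# VIP claim looks like.
sample='inet 172.25.3.1/24 scope global eth0
    inet 172.25.3.100/32 scope global eth0'
if printf '%s\n' "$sample" | grep -q '172\.25\.3\.100'; then
    echo "VIP 172.25.3.100 is held by this node"
fi
```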
Configuration file on vm2 (note the differences):
! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from keepalived
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

## Define the external check script
vrrp_script check_haproxy {
    script "/opt/check_haproxy.sh"
    interval 2
    weight 0
}

vrrp_instance VI_1 {
    state BACKUP          # changed from MASTER to BACKUP
    interface eth0
    virtual_router_id 56
    priority 50           # must be lower than the master's priority (100)
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        check_haproxy
    }
    virtual_ipaddress {
        172.25.3.100
    }
}
Restart the services on both hosts. With keepalived + haproxy we now have a layer-7 load balancer with high availability.
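To convince yourself the failover path behaves as described, the decision made by /opt/check_haproxy.sh can be exercised off-cluster with the systemctl/killall calls stubbed out (a sketch; the check_path function and its argument are illustrative, not part of keepalived):

```shell
# Stub of the check script's decision: $1 stands in for the exit
# status of 'killall -0 haproxy' (0 = a haproxy process exists).
check_path() {
    if [ "$1" -ne 0 ]; then
        echo "haproxy gone: stop keepalived, VIP 172.25.3.100 moves to vm2"
    else
        echo "haproxy alive: vm1 keeps the VIP"
    fi
}
check_path 0   # healthy master
check_path 1   # haproxy dead even after the restart attempt
```

On the real hosts the equivalent drill is: stop haproxy on vm1 and prevent it from restarting, wait for the check interval (2 seconds), then confirm with ip addr show eth0 on vm2 that the VIP has moved.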