Keepalived Overview
- Keepalived is a lightweight high-availability solution for Linux
- It implements high availability mainly through VRRP (Virtual Router Redundancy Protocol)
- It was originally designed to complement LVS, monitoring the state of the real servers behind an LVS cluster
- VRRP support was added later; VRRP itself was created to eliminate the single point of failure inherent in static routing
Features
- Manages LVS rules
- Health-checks the real servers in an LVS cluster
- Manages the VIP (virtual IP address)
How Keepalived Works
- Keepalived monitors the state of every server node
- When a node fails or misbehaves, Keepalived removes it from the cluster
- Once the failed node recovers, Keepalived adds it back
- All of this happens automatically, with no manual intervention
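The behaviour above boils down to a VRRP priority election: among the nodes still alive, the one advertising the highest priority becomes MASTER and holds the VIP. A toy sketch in shell (`elect_master` is a hypothetical illustration, not part of Keepalived):

```shell
#!/bin/sh
# Toy VRRP-style election: among the live nodes, the one with the
# highest priority becomes MASTER and holds the VIP.
# Arguments are "name:priority" pairs for every node still alive.
elect_master() {
    printf '%s\n' "$@" | sort -t: -k2 -rn | head -n1 | cut -d: -f1
}

elect_master web1:100 web2:50    # both alive: web1 wins
elect_master web2:50             # web1 is down: web2 takes over
```

When the failed high-priority node comes back, rerunning the election with it included picks it again, which mirrors the VIP preemption seen later in this section.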
Example: Web High Availability with Keepalived
- Environment: web1: eth0 -> 192.168.88.100/24
- web2: eth0 -> 192.168.88.200/24
- Configure a Keepalived HA cluster
### Configure the Keepalived HA cluster
### Install the Keepalived package
[root@pubserver cluster]# vim 08_inst_web_kp.yml
---
- name: install keepalive on webservers
  hosts: webservers
  tasks:
    - name: install keepalived      # install the package
      yum:
        name: keepalived
        state: present
[root@pubserver cluster]# ansible-playbook 08_inst_web_kp.yml    # run the playbook
# Configure the Keepalived cluster
[root@web1 ~]# vim /etc/keepalived/keepalived.conf
1 ! Configuration File for keepalived
2
...
11 smtp_connect_timeout 30
12 router_id web1                  # unique identifier of this cluster node
13 vrrp_iptables                   # with vrrp_strict, auto-adds iptables accept rules
14 vrrp_skip_check_adv_addr
15 vrrp_strict                     # strict VRRP compliance; sets the default iptables policy to drop
16 vrrp_garp_interval 0
17 vrrp_gna_interval 0
18 }
19
20 vrrp_instance VI_1 {
21 state MASTER                    # role
22 interface eth0                  # interface to listen on
23 virtual_router_id 51            # unique virtual router ID, range 0-255
24 priority 100                    # priority
25 advert_int 1                    # advertisement (heartbeat) interval
26 authentication {
27 auth_type PASS                  # authentication type: shared password
28 auth_pass 1111                  # must be identical on all cluster nodes
29 }
30 virtual_ipaddress {
31 192.168.88.80/24 dev eth0 label eth0:1    # VIP, bound interface, and interface label
32 }
33 }
[root@web2 ~]# vim /etc/keepalived/keepalived.conf
1 ! Configuration File for keepalived
2
3 global_defs {
...
11 smtp_connect_timeout 30
12 router_id web2                  # unique identifier of this cluster node
13 vrrp_iptables                   # auto-adds iptables accept rules
14 vrrp_skip_check_adv_addr
15 vrrp_strict
16 vrrp_garp_interval 0
17 vrrp_gna_interval 0
18 }
19
20 vrrp_instance VI_1 {
21 state BACKUP                    # role
22 interface eth0
23 virtual_router_id 51
24 priority 50                     # lower priority than the master
25 advert_int 1
26 authentication {
27 auth_type PASS
28 auth_pass 1111
29 }
30 virtual_ipaddress {
31 192.168.88.80/24 dev eth0 label eth0:1
32 }
33 }
# Start the services
[root@web1 ~]# systemctl start keepalived.service
[root@web2 ~]# systemctl start keepalived.service
# Verify the VIP binding
[root@web1 ~]# ip a s | grep 192.168          # web1 holds the VIP
    inet 192.168.88.15/32 brd 192.168.88.15 scope global lo:0
    inet 192.168.88.100/24 brd 192.168.88.255 scope global noprefixroute eth0
    inet 192.168.88.80/24 scope global secondary eth0:1
[root@web2 ~]# ip a s | grep 192.168          # web2 does not hold the VIP
    inet 192.168.88.15/32 brd 192.168.88.15 scope global lo:0
    inet 192.168.88.200/24 brd 192.168.88.255 scope global noprefixroute eth0
# Test failover
[root@client ~]# curl http://192.168.88.80    # requests to the VIP reach web1
Welcome to web1
[root@web1 ~]# systemctl stop keepalived.service    # simulate a web1 failure
[root@client ~]# curl http://192.168.88.80    # requests to the VIP now reach web2
Welcome to web2
[root@web2 ~]# ip a s | grep 192.168          # confirm web2 now holds the VIP
    inet 192.168.88.15/32 brd 192.168.88.15 scope global lo:0
    inet 192.168.88.200/24 brd 192.168.88.255 scope global noprefixroute eth0
    inet 192.168.88.80/24 scope global secondary eth0:1
[root@web1 ~]# systemctl start keepalived.service    # simulate web1 recovery
[root@web1 ~]# ip a s | grep 192.168          # confirm web1 preempts the VIP
    inet 192.168.88.15/32 brd 192.168.88.15 scope global lo:0
    inet 192.168.88.100/24 brd 192.168.88.255 scope global noprefixroute eth0
    inet 192.168.88.80/24 scope global secondary eth0:1
[root@web2 ~]# ip a s | grep 192.168          # confirm web2 released the VIP
    inet 192.168.88.15/32 brd 192.168.88.15 scope global lo:0
    inet 192.168.88.200/24 brd 192.168.88.255 scope global noprefixroute eth0
Linking Keepalived to a Node Service
- In the HA web cluster above, Keepalived only provides the VIP
- Keepalived has no idea which services are running on the server
- With a tracking script, the master can monitor its own port 80; if port 80 stops responding, the VIP fails over to the BACKUP server
- Keepalived's contract for such scripts: exit code 0 means the check succeeded, exit code 1 means it failed
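That exit-code contract can be sketched with a generic wrapper (`check_service` is a hypothetical name; Keepalived only cares about the exit status of whatever script you point `vrrp_script` at):

```shell
#!/bin/sh
# Run any check command; exit status 0 on success, 1 on failure,
# which is exactly what Keepalived expects from a vrrp_script.
check_service() {
    if "$@" >/dev/null 2>&1; then
        return 0        # check passed: node stays eligible for MASTER
    else
        return 1        # check failed: Keepalived triggers failover
    fi
}

check_service true  && echo "service up"
check_service false || echo "service down"
```

The actual script used below follows the same contract, with the check command being an `ss | grep` for the nginx socket.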
## Linking Keepalived to a node service
# Write the service-check script
[root@web1 ~]# vim /etc/keepalived/check_http.sh
#!/bin/bash
ss -antpul | grep -q nginx && exit 0 || exit 1    # exit 0 if nginx has a socket open, else exit 1
[root@web1 ~]# chmod +x /etc/keepalived/check_http.sh
# Link Keepalived to the service
[root@web1 ~]# vim /etc/keepalived/keepalived.conf
[root@web1 ~]# cat -n /etc/keepalived/keepalived.conf
1 ! Configuration File for keepalived
2
...
18 }
19
20 vrrp_script chk_http_port {    # define the monitoring script; add this block by hand
21 script "/etc/keepalived/check_http.sh"    # path of the check script
22 interval 2                     # how often the script runs, in seconds
23 }
24
25 vrrp_instance VI_1 {
26 state MASTER
27 interface eth0
28 virtual_router_id 51
29 priority 100
30 advert_int 1
31 authentication {
32 auth_type PASS
33 auth_pass 1111
34 }
35 virtual_ipaddress {
36 192.168.88.80/24 dev eth0 label eth0:1
37 }
38 track_script {                 # reference the script; add this block by hand
39 chk_http_port
40 }
41 }
[root@web1 ~]# systemctl restart keepalived.service
[root@web2 ~]# scp root@192.168.88.100:/etc/keepalived/check_http.sh /etc/keepalived/
[root@web2 ~]# chmod +x /etc/keepalived/check_http.sh
[root@web2 ~]# vim /etc/keepalived/keepalived.conf
[root@web2 ~]# cat -n /etc/keepalived/keepalived.conf
1 ! Configuration File for keepalived
2
...
18 }
19
20 vrrp_script chk_http_port {    # define the monitoring script
21 script "/etc/keepalived/check_http.sh"
22 interval 2
23 }
24
25 vrrp_instance VI_1 {
...
35 virtual_ipaddress {
36 192.168.88.80/24 dev eth0 label eth0:1
37 }
38 track_script {                 # reference the monitoring script
39 chk_http_port
40 }
41 }
[root@web2 ~]# systemctl restart keepalived.service
# Test the HA setup
[root@web1 ~]# ip a s | grep 88.80            # confirm the VIP is bound on web1
    inet 192.168.88.80/24 scope global secondary eth0:1
[root@client ~]# curl http://192.168.88.80    # the VIP serves web1's index page
Welcome to web1
[root@web1 ~]# systemctl stop nginx.service   # simulate a web1 failure
[root@web1 ~]# ip a s | grep 88.80            # confirm web1 released the VIP
[root@client ~]# curl http://192.168.88.80    # the VIP now serves web2's index page
Welcome to web2
[root@web2 ~]# ip a s | grep 88.80            # confirm the VIP is bound on web2
    inet 192.168.88.80/24 scope global secondary eth0:1
[root@web1 ~]# systemctl start nginx.service  # simulate web1 recovery
[root@web1 ~]# ip a s | grep 88.80            # confirm the VIP is back on web1
    inet 192.168.88.80/24 scope global secondary eth0:1
[root@web1 ~]#
LVS + Keepalived High-Availability Load-Balancing Cluster
- Use Keepalived to extend the LVS-DR cluster and make the LVS director highly available
- Example environment:
- client: eth0 -> 192.168.88.10
- lvs1: eth0 -> 192.168.88.5
- lvs2: eth0 -> 192.168.88.6
- web1: eth0 -> 192.168.88.100
- web2: eth0 -> 192.168.88.200
## Prepare the lab environment
# Remove the Keepalived service from the web nodes
[root@pubserver cluster]# vim 09_rm_web_kp.yml
---
- name: remove keepalived on web servers
  hosts: webservers
  tasks:
    - name: stop service            # stop the Keepalived service
      service:
        name: keepalived
        state: stopped
        enabled: false
    - name: remove soft             # uninstall the Keepalived package
      yum:
        name: keepalived
        state: absent
[root@pubserver cluster]# ansible-playbook 09_rm_web_kp.yml
# Remove the old rules and VIP configuration from the lvs1 host
[root@pubserver cluster]# vim 10_rm_dr_manual.yml
---
- name: clean lvs1 manual config    # remove the hand-made LVS-DR rules; Keepalived will manage them from now on
  hosts: lvs1
  tasks:
    - name: clean ipvs rule
      shell: "ipvsadm -C"
    - name: rm vip file             # remove the hand-made eth0:0 config; Keepalived will manage the VIP
      file:
        path: /etc/sysconfig/network-scripts/ifcfg-eth0:0
        state: absent
      notify: deactive vip
  handlers:
    - name: deactive vip            # bring the eth0:0 interface down
      shell: ifconfig eth0:0 down
[root@pubserver cluster]# ansible-playbook 10_rm_dr_manual.yml
[root@pubserver cluster]# ansible lbs -m shell -a "ip a s | grep 192.168"
lvs1 | CHANGED | rc=0 >>
    inet 192.168.88.5/24 brd 192.168.88.255 scope global noprefixroute eth0
[root@pubserver cluster]#
# Create the lvs2 host
[root@server1 ~]# vm clone lvs2
Domain 'lvs2' clone [SUCCESS]
[root@server1 ~]# vm setip lvs2 192.168.88.6
[root@server1 ~]#
# Adjust the Ansible inventory
[root@pubserver cluster]# vim inventory
[clients]
client ansible_ssh_host=192.168.88.10
[webservers]
web1 ansible_ssh_host=192.168.88.100
web2 ansible_ssh_host=192.168.88.200
[lbs]
lvs1 ansible_ssh_host=192.168.88.5
lvs2 ansible_ssh_host=192.168.88.6
[all:vars]
ansible_ssh_port=22
ansible_ssh_user=root
ansible_ssh_pass=a
[root@pubserver cluster]# ansible all -m ping
[root@pubserver cluster]#
# Refresh the yum repositories on all hosts
[root@pubserver cluster]# ansible-playbook 05_config_yum.yml
Configure and Test the HA Load-Balancing Cluster
## Configure the HA load-balancing cluster
# Install the LVS and Keepalived packages on the lvs1 and lvs2 nodes
[root@pubserver cluster]# vim 11_inst_lvs_kp.yml
---
- name: install soft
  hosts: lbs
  tasks:
    - name: install pkgs            # install the packages
      yum:
        name: ipvsadm,keepalived
        state: present
[root@pubserver cluster]# ansible-playbook 11_inst_lvs_kp.yml
# Configure Keepalived on the lvs1 node
[root@lvs1 ~]# vim /etc/keepalived/keepalived.conf
[root@lvs1 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
    router_id lvs1                  # unique identifier of this cluster node
    vrrp_iptables                   # auto-adds firewall accept rules
    vrrp_strict                     # strict VRRP compliance
}

vrrp_instance VI_1 {
    state MASTER                    # role
    interface eth0                  # interface
    virtual_router_id 51            # unique virtual router ID
    priority 100                    # priority
    advert_int 1                    # heartbeat interval
    authentication {                # authentication method
        auth_type PASS              # password authentication
        auth_pass 1111              # cluster password
    }
    virtual_ipaddress {             # define the VIP
        192.168.88.15/24 dev eth0 label eth0:0
    }
}

virtual_server 192.168.88.15 80 {   # define the LVS virtual server
    delay_loop 6                    # wait 6s before the first health check
    lb_algo wrr                     # scheduling algorithm
    lb_kind DR                      # LVS forwarding mode
    persistence_timeout 50          # requests from the same client within 50s go to the same real server
    protocol TCP                    # virtual server protocol
    real_server 192.168.88.100 80 { # define a real server
        weight 1                    # weight
        TCP_CHECK {                 # health-check method
            connect_timeout 3       # connection timeout: 3s
            nb_get_retry 3          # 3 consecutive failures mark the real server as down
            delay_before_retry 3    # interval between health-check probes
        }
    }
    real_server 192.168.88.200 80 {
        weight 2
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
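TCP_CHECK above is essentially a TCP connect probe with a timeout. A rough shell sketch of the same idea (`tcp_check` is a hypothetical helper, not Keepalived code; assumes bash's /dev/tcp and the coreutils `timeout` command):

```shell
#!/bin/sh
# Rough shell equivalent of Keepalived's TCP_CHECK: try to open a TCP
# connection within a timeout; success means the real server is healthy,
# failure means it should be removed from the IPVS rule set.
tcp_check() {
    host="$1" port="$2" timeout_s="${3:-3}"
    timeout "$timeout_s" bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null
}

# Example (against the lab's real server):
# tcp_check 192.168.88.100 80 && echo healthy || echo down
```

Keepalived repeats this probe every `delay_before_retry` seconds and only marks the server down after `nb_get_retry` consecutive failures.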
[root@lvs1 ~]#
# Start the service and test
[root@lvs1 ~]# ipvsadm -Ln      # no LVS rules before the service starts
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@lvs1 ~]# systemctl start keepalived.service     # start the service
[root@lvs1 ~]# systemctl enable keepalived.service    # enable it at boot
[root@lvs1 ~]# ipvsadm -Ln      # the LVS rules exist after the service starts
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.88.15:80 wrr persistent 50
  -> 192.168.88.100:80            Route   1      0          0
  -> 192.168.88.200:80            Route   2      0          0
[root@lvs1 ~]# ip a s | grep 192.168          # the VIP is bound
    inet 192.168.88.5/24 brd 192.168.88.255 scope global noprefixroute eth0
    inet 192.168.88.15/24 scope global secondary eth0:0
[root@lvs1 ~]#
[root@client ~]# for i in {1..6}; do curl http://192.168.88.15; done    # persistence: all requests go to the same server
Welcome to web2
Welcome to web2
Welcome to web2
Welcome to web2
Welcome to web2
Welcome to web2
[root@client ~]#
[root@lvs1 ~]# vim +26 /etc/keepalived/keepalived.conf    # comment out the persistence timeout
...
    #persistence_timeout 50
...
[root@lvs1 ~]# systemctl restart keepalived.service
[root@client ~]# for i in {1..6}; do curl http://192.168.88.15; done
Welcome to web2
Welcome to web1
Welcome to web2
Welcome to web2
Welcome to web1
Welcome to web2
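The roughly 1:2 web1-to-web2 split in the output above is what wrr produces with weights 1 and 2. A toy sketch of one scheduling cycle (`wrr_cycle` is a hypothetical illustration; the real wrr algorithm interleaves more smoothly, but the per-cycle ratio is the same):

```shell
#!/bin/sh
# Toy weighted round-robin: emit each backend `weight` times per cycle,
# so a backend with weight 2 serves twice as many requests as weight 1.
wrr_cycle() {
    # args: "name:weight" pairs
    for s in "$@"; do
        name=${s%%:*}; weight=${s##*:}
        i=0
        while [ "$i" -lt "$weight" ]; do
            echo "$name"
            i=$((i + 1))
        done
    done
}

wrr_cycle web1:1 web2:2       # one cycle: web1 web2 web2
```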
[root@client ~]#
# Configure Keepalived on the lvs2 node
[root@lvs1 ~]# scp /etc/keepalived/keepalived.conf root@192.168.88.6:/etc/keepalived/
[root@lvs2 ~]# vim /etc/keepalived/keepalived.conf
2 router_id lvs2                  # unique identifier of this cluster node
8 state BACKUP                    # role
11 priority 50                    # lower priority than the master
[root@lvs2 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@lvs2 ~]# systemctl start keepalived.service
[root@lvs2 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.88.15:80 wrr
  -> 192.168.88.100:80            Route   1      0          0
  -> 192.168.88.200:80            Route   2      0          0
[root@lvs2 ~]#
# Verify real-server health checks
[root@web1 ~]# systemctl stop nginx     # simulate a web1 failure
[root@lvs1 ~]# ipvsadm -Ln              # web1 is removed from the LVS rules
TCP  192.168.88.15:80 wrr
  -> 192.168.88.200:80            Route   2      0          0
[root@lvs2 ~]# ipvsadm -Ln
TCP  192.168.88.15:80 wrr
  -> 192.168.88.200:80            Route   2      0          0
[root@lvs2 ~]#
[root@web1 ~]# systemctl start nginx    # simulate web1 recovery
[root@lvs2 ~]# ipvsadm -Ln              # web1 is added back to the LVS rules
TCP  192.168.88.15:80 wrr
  -> 192.168.88.100:80            Route   1      0          0
  -> 192.168.88.200:80            Route   2      0          0
[root@lvs2 ~]# ipvsadm -Ln
TCP  192.168.88.15:80 wrr
  -> 192.168.88.100:80            Route   1      0          0
  -> 192.168.88.200:80            Route   2      0          0
# Verify HA load balancing
[root@lvs1 ~]# ip a s | grep 88.15      # the VIP is bound on lvs1
    inet 192.168.88.15/24 scope global secondary eth0:0
[root@lvs1 ~]# systemctl stop keepalived    # simulate an lvs1 failure
[root@lvs1 ~]# ip a s | grep 88.15      # the VIP has been released
[root@lvs1 ~]# ipvsadm -Ln              # the LVS rules have been flushed
[root@lvs2 ~]# ip a s | grep 88.15      # the VIP is now bound on lvs2
    inet 192.168.88.15/24 scope global secondary eth0:0
[root@lvs2 ~]# ipvsadm -Ln
TCP  192.168.88.15:80 wrr
  -> 192.168.88.100:80            Route   1      0          0
  -> 192.168.88.200:80            Route   2      0          0
[root@lvs2 ~]#
[root@client ~]# for i in {1..6}; do curl http://192.168.88.15; done    # clients are unaffected
Welcome to web1
Welcome to web2
Welcome to web2
Welcome to web1
Welcome to web2
Welcome to web2
[root@client ~]#
HAProxy Load-Balancing Cluster
- Concepts
- HAProxy is a proxy that provides high availability and load balancing for TCP- and HTTP-based applications
- It is a free, fast, and reliable solution
- HAProxy is well suited to heavily loaded web sites (10,000+ concurrent connections) that need session persistence or layer-7 processing
- It integrates easily and safely into an existing architecture while keeping the web servers off the public network
- Working modes
- mode http: HTTP services only
- mode tcp: any TCP-based service
- mode health: health checks only; rarely used
- Scheduling algorithms
- roundrobin: round robin
- static-rr: weighted round robin with static weights
- leastconn: the server with the fewest connections is chosen first
- source: hash of the client IP, similar to nginx's ip_hash
- uri: hash of the request URI
- url_param: hash of a URL parameter ('balance url_param')
- rdp-cookie(name): pin and hash each TCP request by the named cookie
- hdr(name): pin each HTTP request by the named request header
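The `source` algorithm can be pictured as hashing the client IP onto the backend list, so each client consistently lands on the same server. A toy sketch (`pick_backend` is a hypothetical illustration, not HAProxy's actual hash function):

```shell
#!/bin/sh
# Toy "balance source": hash the client IP and use it to index into the
# backend list, so a given client is always sent to the same backend.
pick_backend() {
    ip="$1"; shift
    hash=$(printf '%s' "$ip" | cksum | cut -d' ' -f1)
    idx=$(( hash % $# + 1 ))
    eval "echo \"\${$idx}\""
}

pick_backend 192.168.88.10 web1 web2   # same IP maps to the same backend every time
```

This is why `source` gives session stickiness without cookies: the mapping only changes when the backend list itself changes.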
HAProxy Load-Balancing Example
- Environment:
- client1: eth0 -> 192.168.88.10
- HAProxy: eth0 -> 192.168.88.5
- web1: eth0 -> 192.168.88.100
- web2: eth0 -> 192.168.88.200
## Prepare the lab environment
# Shut down the lvs2 node
[root@lvs2 ~]# shutdown -h now
# Install HAProxy on the lvs1 node
[root@pubserver cluster]# vim 12_install_haproxy.yml
---
- name: config haproxy
  hosts: lvs1
  tasks:
    - name: stop keepalived         # stop the Keepalived service
      service:
        name: keepalived
        state: stopped
        enabled: false
    - name: remove softs            # uninstall the packages
      yum:
        name: ipvsadm,keepalived
        state: absent
    - name: modify hostname         # set the hostname
      shell: "hostnamectl set-hostname haproxy"
    - name: install haproxy         # install the package
      yum:
        name: haproxy
        state: present
[root@pubserver cluster]# ansible-playbook 12_install_haproxy.yml
Build the HAProxy Load-Balancing Cluster
## Configure the HAProxy load-balancing cluster
# Configuration file layout:
#   global   - process-wide settings; the defaults are usually fine
#   defaults - fallback values, overridden by later sections that set the same option
#   frontend - describes the client-facing side of haproxy
#   backend  - describes how haproxy talks to the real servers
#   frontend/backend pairs suit complex URL-based proxying and are not used here;
#   we use listen, which combines both sides - think of it as a virtual host
# Configure HAProxy
[root@haproxy ~]# sed -ri '64,$s/^/#/' /etc/haproxy/haproxy.cfg #注释无用配置
[root@haproxy ~]# vim /etc/haproxy/haproxy.cfg
...
listen webservers                   # define a virtual server
    bind 0.0.0.0:80                 # listening address and port
    mode http                       # working mode
    balance roundrobin              # scheduling algorithm
    server web1 192.168.88.100:80 check inter 2000 rise 2 fall 5
    server web2 192.168.88.200:80 check inter 2000 rise 2 fall 5
    # check: enable health checks on this backend server
    # inter: interval between health-check probes, in milliseconds
    # rise:  N consecutive successful checks mark the server as up
    # fall:  N consecutive failed checks mark the server as down
[root@haproxy ~]# systemctl start haproxy.service
[root@haproxy ~]# ss -antpul | grep haproxy
udp UNCONN 0 0 0.0.0.0:35521 0.0.0.0:* users:(("haproxy",pid=33351,fd=6),("haproxy",pid=33349,fd=6))
tcp LISTEN 0 128 0.0.0.0:80 0.0.0.0:* users:(("haproxy",pid=33351,fd=5))
[root@haproxy ~]#
# Access test
[root@client ~]# for i in {1..6};do curl http://192.168.88.5; done
Welcome to web1
Welcome to web2
Welcome to web1
Welcome to web2
Welcome to web1
Welcome to web2
[root@client ~]#
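The `rise 2` / `fall 5` options on the `server` lines can be read as two counters over consecutive check results. A sketch of that state machine (`health_state` is a hypothetical illustration of the semantics, not HAProxy code):

```shell
#!/bin/sh
# Toy model of HAProxy's rise/fall health tracking: feed in a sequence of
# check results (1 = success, 0 = failure) and print the final state.
# A DOWN server needs `rise` consecutive successes to go UP;
# an UP server needs `fall` consecutive failures to go DOWN.
health_state() {
    rise=$1 fall=$2; shift 2
    state=UP ok=0 bad=0
    for r in "$@"; do
        if [ "$r" -eq 1 ]; then
            ok=$((ok + 1)); bad=0
            if [ "$state" = DOWN ] && [ "$ok" -ge "$rise" ]; then state=UP; fi
        else
            bad=$((bad + 1)); ok=0
            if [ "$state" = UP ] && [ "$bad" -ge "$fall" ]; then state=DOWN; fi
        fi
    done
    echo "$state"
}

health_state 2 5 1 1 0 0 0 0 0       # five straight failures: DOWN
health_state 2 5 0 0 0 0 0 1 1       # down, then two successes: UP
```

The asymmetry (5 failures to go down, only 2 successes to come back) damps flapping: a briefly slow server is not ejected, and a recovered one returns quickly.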
Enable the HAProxy Stats Page
## HAProxy stats page
# Configure HAProxy
[root@haproxy ~]# vim /etc/haproxy/haproxy.cfg
...
listen stats                        # define a virtual server
    bind 0.0.0.0:1080               # listening port
    stats refresh 30s               # page refresh interval
    stats uri /stats                # request path
    stats auth admin:admin          # username:password
[root@haproxy ~]# systemctl restart haproxy.service
[root@haproxy ~]# ss -antpul | grep haproxy
udp UNCONN 0 0 0.0.0.0:32821 0.0.0.0:* users:(("haproxy",pid=33771,fd=6),("haproxy",pid=33767,fd=6))
tcp LISTEN 0 128 0.0.0.0:1080 0.0.0.0:* users:(("haproxy",pid=33771,fd=7))
tcp LISTEN 0 128 0.0.0.0:80 0.0.0.0:* users:(("haproxy",pid=33771,fd=5))
[root@haproxy ~]#
# Test in a browser
## Field reference:
# Queue: queue statistics (current queue size, max, queue limit)
# Session rate: sessions per second (current, max, limit)
# Sessions: total sessions (current, max, total)
# Bytes: traffic counters (bytes in, bytes out)
# Denied: denied requests and denied responses
# Errors: request errors, connection errors, response errors
# Warnings: retry warnings and reconnect warnings
# Server: backend server state (status, last check time, weight, number of backup servers, number of servers down, downtime)
[root@haproxy ~]# curl http://admin:admin@192.168.88.5:1080/stats
# Load test to observe how requests are distributed
[root@client ~]# yum -y install httpd-tools.x86_64
[root@client ~]# ab -c 100 -n 1000 http://192.168.88.5/
Load-Balancer Comparison
Typical use cases
- LVS suits scenarios that demand high concurrency and stability
- Nginx suits application-layer load balancing such as static file serving and reverse proxying
- HAProxy offers a rich feature set and great flexibility, fitting a wide range of load-balancing scenarios
Comparison
LVS (Linux Virtual Server)
- Pros:
- High performance: LVS uses the IP load-balancing code in the Linux kernel and sustains very high concurrency
- Stability: LVS is mature and battle-tested, with wide production use
- Availability: supports highly available configurations with automatic failover for uninterrupted service
- Flexibility: offers multiple scheduling algorithms, such as round robin, weighted round robin, and hashing
- Cons:
- Complex configuration: compared with the other two, LVS takes more knowledge and effort to configure
- Limited scope: LVS balances at the transport layer and cannot process application-layer protocols the way Nginx and HAProxy can
Nginx
- Pros:
- High performance: an event-driven, asynchronous, non-blocking architecture handles large numbers of concurrent connections
- Load balancing: built-in load balancing with configurable request forwarding
- Rich features: reverse proxying, static file serving, caching, SSL, and more, with very wide use as a web server
- Cons:
- Fewer balancing features: compared with LVS and HAProxy, Nginx offers fewer scheduling algorithms and health-check options
- Application-layer only: Nginx processes application-layer protocols such as HTTP and HTTPS and cannot handle other protocols
HAProxy
- Pros:
- Flexibility: rich scheduling algorithms and session-persistence options that can be configured to fit most needs
- Complete feature set: highly available configurations, health checks, failure recovery, SSL, and more, with wide use in load balancing
- High performance: handles large numbers of concurrent connections and supports asynchronous I/O
- Cons:
- Memory usage: somewhat higher than Nginx and LVS when handling many connections
- High availability: HAProxy needs an external tool such as Keepalived to be highly available