
k8s Cluster


Table of Contents

  • Project Description
  • Project Environment
    • System and Software Version Overview
    • Project Steps
  • Environment Preparation
    • IP Address Plan
    • Disable SELinux and firewalld
    • Configure static IP addresses
    • Set hostnames
    • Add hosts entries
  • Project Steps
    • I. Use kubeadm to install a single-master k8s cluster (1 master + 2 worker nodes)
      • 1. Set up passwordless SSH between the nodes
      • 2. Disable swap (kubeadm checks this during initialization)
      • 3. Load the required kernel modules
      • 4. Configure the Alibaba Cloud repo
      • 5. Configure the Alibaba Cloud repo needed for the k8s components
      • 6. Configure time synchronization
      • 7. Install Docker, start it, and enable it at boot
      • 8. Configure the Docker registry mirrors and cgroup driver
      • 9. Reload the configuration and restart Docker
      • 10. Install the packages needed to initialize k8s
      • 11. Enable kubelet at boot
      • 12. Load the offline images needed by kubeadm init
      • 13. Generate the kubeadm.yaml configuration file
      • 14. Initialize k8s from the kubeadm.yaml file
      • 15. Join the worker nodes to the cluster
      • 16. Check the node status from k8smaster
      • 17. The ROLES of k8snode1 and k8snode2 are empty; label them as worker
      • 18. Install the Kubernetes network add-on Calico
      • 19. Check the cluster status again
    • II. Deploy Ansible for automated operations, deploy the firewall server, and deploy the bastion host
      • 1. Deploy Ansible
        • 1. Install Ansible on the control node
        • 2. Set up passwordless SSH: generate a key pair on the Ansible host
        • 3. Copy the public key to the root home directory on every server
        • 4. Write the inventory
      • 2. Deploy the bastion host
      • 3. Deploy the firewall server
    • III. Deploy the NFS server to provide data for the whole web cluster; all web business pods access it through PV, PVC, and volume mounts
      • 1. Set up the NFS server
      • 2. Configure the exported directory
      • 3. Create the shared directory and index.html
      • 4. Re-export the shared directories
      • 5. Restart NFS and enable it at boot
      • 6. Test mounting the NFS export from any node in the k8s cluster
      • 7. Unmount
      • 8. Create a PV backed by the NFS export
      • 9. Create a PVC that uses the PV
      • 10. Create pods that use the PVC, mounting it at a path inside the containers
      • 11. Test access
    • IV. Build the CI/CD environment: deploy GitLab, Jenkins, and Harbor for code release, image builds, data backup, and other pipeline work
      • 1. Deploy GitLab
        • 1. Problems encountered
      • 2. Deploy Jenkins
      • 3. Deploy Harbor
        • 1. Test pushing an image
    • V. Build an image from the self-developed Go web API system and deploy it to k8s as the web application; use HPA to scale horizontally when CPU usage reaches 50%, with a minimum of 1 and a maximum of 10 business pods
      • 1. Configure Docker on the k8s cluster to use the private registry
      • 2. Log the k8s cluster nodes in to the private Harbor registry
      • 3. Test pulling an image from the private registry on the k8s cluster
      • 4. Build the go-web image and push it to the private registry
      • 5. Pull the image on node1
      • 6. Pull the image on node2
      • 7. Use HPA for automatic scaling
        • 7.1 Install Metrics Server
        • 7.2 Verify the Metrics Server installation
        • 7.3 Using HPA: create the Deployment and Service, start the web app, and expose it
        • 7.4 Using HPA: create an HPA for the Deployment myweb so that when CPU usage reaches 50% the pod count adjusts automatically between 1 and 10
        • 7.5 Access
        • 7.6 Delete the HPA
    • VI. Start a MySQL pod to provide database service for the web application
      • 1. Attempt: deploying stateful MySQL on k8s
    • VII. Use probes (liveness, readiness, startup) with httpGet and exec methods to monitor the web pods and restart them as soon as a problem appears, improving pod reliability
    • VIII. Use Ingress to load balance the web service and use the Dashboard to oversee the cluster's resources
      • 1. Use Ingress to load balance the web service
        • Step 1: install the ingress controller
        • Step 2: create the pods and the Service that exposes them
        • Step 3: enable the Ingress that links the ingress controller and the Service
        • Step 4: check whether nginx.conf inside the ingress controller contains the rules for the Ingress
        • Step 5: start a second service and pods using PV + PVC + NFS
          • 1. Prepare the NFS server and create the PV and PVC in advance
          • 2. Create the Deployment and Service
      • 2. Use the Kubernetes Dashboard to oversee the cluster's resources
    • IX. Install Zabbix and Prometheus to monitor cluster resources (CPU, memory, network bandwidth, the web service, the database service, disk I/O, etc.)
      • 1. Deploy Zabbix to monitor Kubernetes
      • 2. Use Prometheus to monitor Kubernetes
        • 1. Pull the required images on all nodes for a fast deployment
        • 2. Deploy Node Exporter as a DaemonSet so every node reports metrics
        • 3. Deploy Prometheus
          • 1. Create the RBAC resources for Prometheus
          • 2. Create the Prometheus ConfigMap
          • 3. Deploy Prometheus
          • 4. Create the Prometheus Service
        • 4. Deploy Grafana
          • 1. Deploy Grafana
          • 2. Create the Grafana Service
          • 3. Create the Grafana Ingress
        • 5. Check and test
    • X. Use ab to load test the whole k8s cluster and the related servers
      • 1. Run the php-apache server and expose it
        • 1. Deploy and verify
      • 2. Create the HPA
        • 1. Create the HPA
        • 2. Verify the HPA
      • 3. Test HPA autoscaling
        • 1. Create the load-generator pod
      • 4. Use the ab load-testing tool against the web service and watch Prometheus and the Dashboard
        • 1. Install httpd-tools
        • 2. Run the ab load test
        • 3. Load test results
        • 4. Observe the metrics with Prometheus and Grafana (4 ways)
        • 5. Summary
    • XI. Project takeaways

Project Description

Simulate a company's web business: deploy k8s, web, MySQL, NFS, Harbor, Zabbix, Prometheus, GitLab, Jenkins, and Ansible, ensuring high availability of the web service and approximating a heavily loaded production environment.

Project Environment

System and Software Version Overview

  • Operating system: CentOS 7.9
  • Configuration management:
    • Ansible 2.9.27
  • Container technology:
    • Docker 20.10.6
    • Docker Compose 2.18.1
  • Cluster management:
    • Kubernetes 1.20.6
    • Calico 3.23
  • Image registry:
    • Harbor 2.4.1
  • Storage:
    • NFS v4
  • Monitoring and logging:
    • Metrics Server 0.6.0
    • Prometheus 2.34.0
    • Zabbix 5.0
    • Grafana 10.0.0
  • Continuous integration/continuous deployment (CI/CD):
    • Jenkins (jenkinsci/blueocean)
    • GitLab 16.0.4-jh
  • Security and certificates:
    • Kube-webhook-certgen v1.1.0
  • Ingress:
    • Ingress Nginx Controller v1.1.0
  • Database:
    • MySQL 5.7.42
  • Kubernetes Dashboard:
    • Dashboard v2.5.0

Project Steps

1. Designed the overall cluster architecture with ProcessOn, planned the server IP addresses, and used kubeadm to install a single-master k8s cluster (1 master + 2 worker nodes).
2. Deployed Ansible for automated operations, deployed the firewall server, and deployed the bastion host.
3. Deployed the NFS server to provide data for the whole web cluster; all web business pods access it through PV, PVC, and volume mounts.
4. Built the CI/CD environment: deployed GitLab, Jenkins, and Harbor for code release, image builds, data backup, and other pipeline work.
5. Built an image from the self-developed Go web API system and deployed it to k8s as the web application; used HPA to scale horizontally when CPU usage reaches 50%, with a minimum of 1 and a maximum of 10 business pods.
6. Started a MySQL pod to provide database service for the web application.
7. Used probes (liveness, readiness, startup) with httpGet and exec methods to monitor the web pods and restart them as soon as a problem appears, improving pod reliability.
8. Used Ingress to load balance the web service and the Dashboard to oversee the cluster's resources.
9. Installed Zabbix and Prometheus to monitor cluster resources (CPU, memory, network bandwidth, the web service, the database service, disk I/O, etc.).
10. Used ab to load test the whole k8s cluster and the related servers.

Environment Preparation

10 fresh Linux servers; disable firewalld and SELinux, configure static IP addresses, set the hostnames, and add hosts entries.

IP Address Plan

server          ip
k8smaster       192.168.2.104
k8snode1        192.168.2.111
k8snode2        192.168.2.112
ansible         192.168.2.119
nfs             192.168.2.121
gitlab          192.168.2.124
harbor          192.168.2.106
zabbix          192.168.2.117
firewalld       192.168.2.141
Bastionhost     192.168.2.140

Disable SELinux and firewalld

# Stop the firewall and keep it from starting at boot
service firewalld stop && systemctl disable firewalld
# Temporarily disable SELinux
setenforce 0
# Permanently disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

[root@k8smaster ~]# service firewalld stop
Redirecting to /bin/systemctl stop firewalld.service
[root@k8smaster ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@k8smaster ~]# reboot
[root@k8smaster ~]# getenforce 
Disabled

Configure static IP addresses

cd /etc/sysconfig/network-scripts/
vim ifcfg-ens33

# k8smaster
TYPE="Ethernet"
BOOTPROTO="static"
DEVICE="ens33"
NAME="ens33"
ONBOOT="yes"
IPADDR="192.168.2.104"
PREFIX=24
GATEWAY="192.168.2.1"
DNS1=114.114.114.114

# k8snode1
TYPE="Ethernet"
BOOTPROTO="static"
DEVICE="ens33"
NAME="ens33"
ONBOOT="yes"
IPADDR="192.168.2.111"
PREFIX=24
GATEWAY="192.168.2.1"
DNS1=114.114.114.114

# k8snode2
TYPE="Ethernet"
BOOTPROTO="static"
DEVICE="ens33"
NAME="ens33"
ONBOOT="yes"
IPADDR="192.168.2.112"
PREFIX=24
GATEWAY="192.168.2.1"
DNS1=114.114.114.114

Set hostnames

hostnamectl set-hostname k8smaster
hostnamectl set-hostname k8snode1
hostnamectl set-hostname k8snode2

# Switch user to reload the environment
su - root
[root@k8smaster ~]# 
[root@k8snode1 ~]#
[root@k8snode2 ~]#

Add hosts entries

vim /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.2.104 k8smaster
192.168.2.111 k8snode1
192.168.2.112 k8snode2
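
The same /etc/hosts entries are needed on all three cluster machines. A small loop like the one below (a convenience sketch, not part of the original steps; run from k8smaster with the IPs from the plan above) can distribute the file; until the passwordless SSH channel from step 1 below exists it will simply prompt for passwords:

# Copy the hosts file from k8smaster to both worker nodes
for ip in 192.168.2.111 192.168.2.112; do
    scp /etc/hosts root@"$ip":/etc/hosts
done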

Project Steps

I. Use kubeadm to install a single-master k8s cluster (1 master + 2 worker nodes)

1. Set up passwordless SSH between the nodes

# 1. Set up passwordless SSH between the nodes
ssh-keygen      # press Enter through every prompt
ssh-copy-id k8smaster
ssh-copy-id k8snode1
ssh-copy-id k8snode2

2. Disable swap (kubeadm checks this during initialization)

# Temporarily disable: swapoff -a
# Permanently disable: comment out the swap line in /etc/fstab (add a leading #)
[root@k8smaster ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Thu Mar 23 15:22:20 2023
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=00236222-82bd-4c15-9c97-e55643144ff3 /boot                   xfs     defaults        0 0
/dev/mapper/centos-home /home                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0

3. Load the required kernel modules

Add bridge filtering and address forwarding: forward IPv4 and let iptables see bridged traffic.

Why run modprobe br_netfilter?
The "modprobe br_netfilter" command loads the br_netfilter kernel module on a Linux system. It is the kernel's bridge-netfilter module, which lets administrators use tools such as iptables to filter and manage traffic bridged across the same interface.
It is needed because the Linux system acts as a router or firewall and must filter, forward, or NAT packets coming from different interfaces.

Why set net.ipv4.ip_forward = 1?
For a Linux system to forward packets as a router, the kernel parameter net.ipv4.ip_forward must be configured. It indicates whether the system currently supports routing: 0 means IP forwarding is disabled, 1 means IP forwarding is enabled.

modprobe br_netfilter
echo "modprobe br_netfilter" >> /etc/profile

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Reload so the configuration takes effect
sysctl -p /etc/sysctl.d/k8s.conf
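
To confirm the module is actually loaded and the sysctl values are in effect, standard checks (not part of the original write-up) are:

# The module should appear in the loaded-module list
lsmod | grep br_netfilter

# Both values should print as 1
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward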

4. Configure the Alibaba Cloud repo

yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet ipvsadm

5. Configure the Alibaba Cloud repo needed for the k8s components

[root@k8smaster ~]# vim  /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0

6. Configure time synchronization

[root@k8smaster ~]# crontab -e
* */1 * * * /usr/sbin/ntpdate   cn.pool.ntp.org

# Restart the crond service
[root@k8smaster ~]# service crond restart

7. Install Docker, start it, and enable it at boot

yum install docker-ce-20.10.6 -y
systemctl start docker && systemctl enable docker.service

8. Configure the Docker registry mirrors and cgroup driver

vim /etc/docker/daemon.json

{
  "registry-mirrors": ["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}

9. Reload the configuration and restart Docker

systemctl daemon-reload  && systemctl restart docker

10. Install the packages needed to initialize k8s

yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6

11. Enable kubelet at boot

systemctl enable kubelet

# Note: what each package does
# kubeadm:  the tool used to initialize the k8s cluster
# kubelet:  installed on every node in the cluster; it is what starts the Pods
# kubectl:  used to deploy and manage applications, inspect resources, and create, delete, and update components

12. Load the offline images needed by kubeadm init

# Upload the offline image bundle needed to initialize the k8s cluster to k8smaster, k8snode1, and k8snode2, then load it
docker load -i k8simage-1-20-6.tar.gz

# Copy the file to the node machines
[root@k8smaster ~]# scp k8simage-1-20-6.tar.gz root@k8snode1:/root
[root@k8smaster ~]# scp k8simage-1-20-6.tar.gz root@k8snode2:/root

# Check the images
[root@k8snode1 ~]# docker images
REPOSITORY                                                        TAG        IMAGE ID       CREATED       SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.20.6    9a1ebfd8124d   2 years ago   118MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.20.6    b93ab2ec4475   2 years ago   47.3MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.20.6    560dd11d4550   2 years ago   116MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.20.6    b05d611c1af9   2 years ago   122MB
calico/pod2daemon-flexvol                                         v3.18.0    2a22066e9588   2 years ago   21.7MB
calico/node                                                       v3.18.0    5a7c4970fbc2   2 years ago   172MB
calico/cni                                                        v3.18.0    727de170e4ce   2 years ago   131MB
calico/kube-controllers                                           v3.18.0    9a154323fbf7   2 years ago   53.4MB
registry.aliyuncs.com/google_containers/etcd                      3.4.13-0   0369cf4303ff   2 years ago   253MB
registry.aliyuncs.com/google_containers/coredns                   1.7.0      bfe3a36ebd25   3 years ago   45.2MB
registry.aliyuncs.com/google_containers/pause                     3.2        80d28bedfe5d   3 years ago   683kB

13. Generate the kubeadm.yaml configuration file

kubeadm config print init-defaults > kubeadm.yaml

[root@k8smaster ~]# vim kubeadm.yaml 
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.2.104          # IP of the control-plane node
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8smaster                          # hostname of the control-plane node
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers  # change to the Alibaba Cloud repository
kind: ClusterConfiguration
kubernetesVersion: v1.20.6
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16         # pod network CIDR; this line is added
scheduler: {}
# Append the following lines
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd

14. Initialize k8s from the kubeadm.yaml file

[root@k8smaster ~]# kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubeadm join 192.168.2.104:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:83421a7d1baa62269508259b33e6563e45fbeb9139a9c214cbe9fc107f07cb4c 

15. Join the worker nodes to the cluster

[root@k8snode1 ~]# kubeadm join 192.168.2.104:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:83421a7d1baa62269508259b33e6563e45fbeb9139a9c214cbe9fc107f07cb4c 

[root@k8snode2 ~]# kubeadm join 192.168.2.104:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:83421a7d1baa62269508259b33e6563e45fbeb9139a9c214cbe9fc107f07cb4c 
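
The bootstrap token in kubeadm.yaml has ttl: 24h0m0s, so the join command above stops working after a day. If another node needs to be added later, a fresh join command can be printed on the master; this is standard kubeadm usage rather than a step recorded in the original procedure:

# Run on k8smaster: prints a new, valid "kubeadm join ..." command
kubeadm token create --print-join-command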

16. Check the node status from k8smaster

[root@k8smaster ~]# kubectl get nodes
NAME        STATUS     ROLES                  AGE     VERSION
k8smaster   NotReady   control-plane,master   2m49s   v1.20.6
k8snode1    NotReady   <none>                 19s     v1.20.6
k8snode2    NotReady   <none>                 14s     v1.20.6

17. The ROLES of k8snode1 and k8snode2 are empty; label them as worker

# Give k8snode1 and k8snode2 the worker role
[root@k8smaster ~]# kubectl label node k8snode1 node-role.kubernetes.io/worker=worker
node/k8snode1 labeled

[root@k8smaster ~]# kubectl label node k8snode2 node-role.kubernetes.io/worker=worker
node/k8snode2 labeled
[root@k8smaster ~]# kubectl get nodes
NAME        STATUS     ROLES                  AGE     VERSION
k8smaster   NotReady   control-plane,master   2m43s   v1.20.6
k8snode1    NotReady   worker                 2m15s   v1.20.6
k8snode2    NotReady   worker                 2m11s   v1.20.6
# Note: every node is still NotReady because no network plugin has been installed yet

18. Install the Kubernetes network add-on Calico

# Upload calico.yaml to k8smaster and install the Calico network plugin from the YAML file.
wget https://docs.projectcalico.org/v3.23/manifests/calico.yaml --no-check-certificate

[root@k8smaster ~]# kubectl apply -f calico.yaml

19. Check the cluster status again

[root@k8smaster ~]# kubectl get nodes
NAME        STATUS   ROLES                  AGE     VERSION
k8smaster   Ready    control-plane,master   5m57s   v1.20.6
k8snode1    Ready    worker                 3m27s   v1.20.6
k8snode2    Ready    worker                 3m22s   v1.20.6
# STATUS is Ready, so the k8s cluster is running normally
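
A further sanity check at this point (ordinary kubectl usage, not shown in the original) is to confirm that the Calico and core system pods are all Running:

# All calico-*, coredns-* and kube-* pods should show STATUS Running
kubectl get pod -n kube-system -o wide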

II. Deploy Ansible for automated operations, deploy the firewall server, and deploy the bastion host

1. Deploy Ansible

1. Install Ansible on the control node
# Any machine with Python 2.6 or Python 2.7 installed can run Ansible (a Windows system cannot be the control host).
[root@ansible .ssh]# yum install epel-release -y
[root@ansible .ssh]# yum  install ansible -y
[root@ansible ~]# ansible --version
ansible 2.9.27
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Oct 14 2020, 14:45:30) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
2. Set up passwordless SSH: generate a key pair on the Ansible host
[root@ansible ~]# ssh-keygen -t ecdsa
Generating public/private ecdsa key pair.
Enter file in which to save the key (/root/.ssh/id_ecdsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_ecdsa.
Your public key has been saved in /root/.ssh/id_ecdsa.pub.
The key fingerprint is:
SHA256:FNgCSDVk6i3foP88MfekA2UzwNn6x3kyi7V+mLdoxYE root@ansible
The key's randomart image is:
+---[ECDSA 256]---+
| ..+*o =.        |
|  .o .* o.       |
|  .    +.  .     |
| . .  ..= E .    |
|  o o  +S+ o .   |
|   + o+ o O +    |
|  . . .= B X     |
|   . .. + B.o    |
|    ..o. +oo..   |
+----[SHA256]-----+
[root@ansible ~]# cd /root/.ssh
[root@ansible .ssh]# ls
id_ecdsa  id_ecdsa.pub
3. Copy the public key to the root home directory on every server
Make sure sshd is running on every server, port 22 is open, and root login is allowed.
# Copy the public key to k8smaster
[root@ansible .ssh]# ssh-copy-id -i id_ecdsa.pub root@192.168.2.104
# Copy the public key to the k8s nodes
[root@ansible .ssh]# ssh-copy-id -i id_ecdsa.pub root@192.168.2.111
[root@ansible .ssh]# ssh-copy-id -i id_ecdsa.pub root@192.168.2.112

# Verify that passwordless key-based authentication works
[root@ansible .ssh]# ssh root@192.168.2.121
[root@ansible .ssh]# ssh root@192.168.2.121
Last login: Tue Jun 20 10:33:33 2023 from 192.168.2.240
[root@nfs ~]# exit
logout
Connection to 192.168.2.121 closed.
[root@ansible .ssh]# ssh root@192.168.2.112
Last login: Tue Jun 20 10:34:18 2023 from 192.168.2.240
[root@k8snode2 ~]# exit
logout
Connection to 192.168.2.112 closed.
[root@ansible .ssh]# 
4. Write the inventory
[root@ansible .ssh]# cd /etc/ansible
[root@ansible ansible]# ls
ansible.cfg  hosts  roles
[root@ansible ansible]# vim hosts 
## 192.168.1.110
[k8smaster]
192.168.2.104
[k8snode]
192.168.2.111
192.168.2.112
[nfs]
192.168.2.121
[gitlab]
192.168.2.124
[harbor]
192.168.2.106
[zabbix]
192.168.2.117
# Test
[root@ansible ansible]# ansible all -m shell -a "ip add"
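
Ad-hoc commands like the one above are handy for spot checks, but the automation work described in this step would normally be captured in playbooks. The playbook below is a hypothetical illustration (the file name and task are invented here, not taken from the original project), showing how nfs-utils could be installed on every managed host:

# Write a small playbook on the ansible host (illustrative example)
cat > /etc/ansible/install_nfs_utils.yml <<'EOF'
---
- hosts: all
  remote_user: root
  tasks:
    - name: install nfs-utils so every node can mount NFS volumes
      yum:
        name: nfs-utils
        state: present
EOF

# Run it against the inventory written above
ansible-playbook /etc/ansible/install_nfs_utils.yml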

2. Deploy the bastion host

JumpServer can be installed quickly in just two steps:
prepare a 64-bit Linux host with at least 2 cores and 4 GB of RAM and Internet access,
then run the following command as root for a one-line installation of JumpServer.
curl -sSL https://resource.fit2cloud.com/jumpserver/jumpserver/releases/latest/download/quick_start.sh | bash

# 1. Start the service, then access it
cd /opt/jumpserver-installer-v3.10.7
./jmsctl.sh start

# 2. Other management commands
./jmsctl.sh stop
./jmsctl.sh restart
./jmsctl.sh backup
./jmsctl.sh upgrade

# 3. Web access
http://192.168.2.140:80
Default user: admin   Default password: admin

# 4. SSH/SFTP access
ssh -p2222 admin@192.168.2.140
sftp -P2222 admin@192.168.2.140


3. Deploy the firewall server

Shut down the VM and add a second network interface (ens37).

# Script implementing SNAT and DNAT
[root@firewalld ~]# cat snat_dnat.sh 
#!/bin/bash

# open route: enable IP forwarding in the Linux kernel
# (this can also be made permanent in /etc/sysctl.conf with net.ipv4.ip_forward = 1)
echo 1 >/proc/sys/net/ipv4/ip_forward

# stop firewall: stop and disable CentOS's default firewalld
systemctl stop firewalld
systemctl disable firewalld

# clear iptables rules so configuration starts from a clean state
iptables -F
iptables -t nat -F

# enable SNAT: packets from the internal network 192.168.2.0/24 leaving through the external
# interface ens33 get their source address replaced by that interface's address (MASQUERADE),
# so the rule keeps working no matter which public IP ens33 currently holds
iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o ens33 -j MASQUERADE

# enable DNAT: TCP traffic arriving on ens33 for 192.168.0.169:2233 is redirected to the
# internal server 192.168.2.104:22, and HTTP traffic on port 80 is redirected to 192.168.2.104:80
iptables -t nat -A PREROUTING -d 192.168.0.169 -i ens33 -p tcp --dport 2233 -j DNAT --to-destination 192.168.2.104:22
iptables -t nat -A PREROUTING -d 192.168.0.169 -i ens33 -p tcp --dport 80 -j DNAT --to-destination 192.168.2.104:80

On the web server:
open the necessary service ports and set the default policy to drop any traffic that is not explicitly allowed.
Open the SSH, DNS, DHCP, HTTP/HTTPS, and MySQL ports: for each service an ACCEPT rule is added to the INPUT chain so the corresponding traffic is let in.

[root@k8smaster ~]# cat open_app.sh 
#!/bin/bash

# open ssh
iptables -t filter -A INPUT -p tcp --dport 22 -j ACCEPT
# open dns
iptables -t filter -A INPUT -p udp --dport 53 -s 192.168.2.0/24 -j ACCEPT
# open dhcp 
iptables -t filter -A INPUT -p udp --dport 67 -j ACCEPT
# open http/https
iptables -t filter -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -t filter -A INPUT -p tcp --dport 443 -j ACCEPT
# open mysql
iptables -t filter -A INPUT -p tcp --dport 3306 -j ACCEPT
# default policy DROP: any traffic not matched by the ACCEPT rules above is dropped
iptables -t filter -P INPUT DROP
# drop icmp echo requests (type 8) to reduce probing of the server
iptables -t filter -A INPUT -p icmp --icmp-type 8 -j DROP
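
These iptables rules live only in memory and disappear on reboot. One common way to persist them on CentOS 7 (an optional addition, not covered in the original steps) is the iptables-services package:

# Save the current rule set so it survives reboots (CentOS 7)
yum install -y iptables-services
service iptables save        # writes the rules to /etc/sysconfig/iptables
systemctl enable iptables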

III. Deploy the NFS server to provide data for the whole web cluster; all web business pods access it through PV, PVC, and volume mounts

1. Set up the NFS server

# 1. Set up the NFS server
[root@nfs ~]# yum install nfs-utils -y

# It is recommended to install nfs-utils on every node in the k8s cluster,
# because the node servers need NFS support to create the volumes
[root@k8smaster ~]# yum install nfs-utils -y
[root@k8snode1 ~]# yum install nfs-utils -y
[root@k8snode2 ~]# yum install nfs-utils -y

[root@k8smaster ~]# service nfs restart
Redirecting to /bin/systemctl restart nfs.service

2. Configure the exported directory

# 2. Configure the exported directory
[root@nfs ~]# vim /etc/exports
[root@nfs ~]# cat /etc/exports
/web   192.168.2.0/24(rw,no_root_squash,sync)

3. Create the shared directory and index.html

# 3. Create the shared directory and index.html
[root@nfs ~]# mkdir /web
[root@nfs ~]# cd /web
[root@nfs web]# echo "welcome to changsha" >index.html
[root@nfs web]# ls
index.html
[root@nfs web]# ll -d /web
drwxr-xr-x. 2 root root 24 618 16:46 /web

4. Re-export the shared directories

# 4. Re-export the shared directories
[root@nfs ~]# exportfs -r   # re-export all shared directories
[root@nfs ~]# exportfs -v   # show the exported directories
/web            192.168.2.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)

5. Restart NFS and enable it at boot

# 5. Restart NFS and enable it at boot
[root@nfs web]# systemctl restart nfs && systemctl enable nfs
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.

6. Test mounting the NFS export from any node in the k8s cluster

# 6. Test mounting the NFS export from any node in the k8s cluster
[root@k8snode1 ~]# mkdir /node1_nfs
[root@k8snode1 ~]# mount 192.168.2.121:/web /node1_nfs

[root@k8snode1 ~]# df -Th|grep nfs
192.168.2.121:/web      nfs4       17G  1.5G   16G    9% /node1_nfs

7. Unmount

# 7. Unmount
[root@k8snode1 ~]# umount  /node1_nfs

8. Create a PV backed by the NFS export

# 8. Create a PV that uses the shared directory on the NFS server
[root@k8smaster pv]# vim nfs-pv.yml
[root@k8smaster pv]# cat nfs-pv.yml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-web
  labels:
    type: pv-web
spec:
  capacity:
    storage: 10Gi 
  accessModes:
  - ReadWriteMany
  storageClassName: nfs        # storage class name the PVC will refer to
  nfs:
    path: "/web"               # directory exported by the NFS server
    server: 192.168.2.121      # IP address of the NFS server
    readOnly: false            # access mode

[root@k8smaster pv]# kubectl apply -f nfs-pv.yml 
persistentvolume/pv-web created
[root@k8smaster pv]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-web   10Gi       RWX            Retain           Available           nfs                     5s

9. Create a PVC that uses the PV

# 9. Create a PVC that uses the PV
[root@k8smaster pv]# vim nfs-pvc.yml
[root@k8smaster pv]# cat nfs-pvc.yml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-web
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs   # use the nfs-class PV created above

[root@k8smaster pv]# kubectl apply -f nfs-pvc.yml 
persistentvolumeclaim/pvc-web created

[root@k8smaster pv]# kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-web   Bound    pv-web   10Gi       RWX            nfs            6s

10. Create pods that use the PVC, mounting it at a path inside the containers

# 10. Create pods that use the PVC, mounting it at a path inside the containers
[root@k8smaster pv]# vim nginx-deployment.yaml 
[root@k8smaster pv]# cat nginx-deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
      - name: sc-pv-storage-nfs
        persistentVolumeClaim:
          claimName: pvc-web
      containers:
      - name: sc-pv-container-nfs
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: "http-server"
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: sc-pv-storage-nfs

[root@k8smaster pv]# kubectl apply -f nginx-deployment.yaml 
deployment.apps/nginx-deployment created

[root@k8smaster pv]# kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP               NODE       NOMINATED NODE   READINESS GATES
nginx-deployment-76855d4d79-2q4vh   1/1     Running   0          42s   10.244.185.194   k8snode2   <none>           <none>
nginx-deployment-76855d4d79-mvgq7   1/1     Running   0          42s   10.244.185.195   k8snode2   <none>           <none>
nginx-deployment-76855d4d79-zm8v4   1/1     Running   0          42s   10.244.249.3     k8snode1   <none>           <none>

11. Test access

# 11. Test access
[root@k8smaster pv]# curl 10.244.185.194
welcome to changsha
[root@k8smaster pv]# curl 10.244.185.195
welcome to changsha
[root@k8smaster pv]# curl 10.244.249.3
welcome to changsha

[root@k8snode1 ~]# curl 10.244.185.194
welcome to changsha
[root@k8snode1 ~]# curl 10.244.185.195
welcome to changsha
[root@k8snode1 ~]# curl 10.244.249.3
welcome to changsha

[root@k8snode2 ~]# curl 10.244.185.194
welcome to changsha
[root@k8snode2 ~]# curl 10.244.185.195
welcome to changsha
[root@k8snode2 ~]# curl 10.244.249.3
welcome to changsha
# 12. Modify the content
[root@nfs web]# echo "hello,world" >> index.html
[root@nfs web]# cat index.html 
welcome to changsha
hello,world
# 13. Access it again
[root@k8snode1 ~]# curl 10.244.249.3
welcome to changsha
hello,world

IV. Build the CI/CD environment: deploy GitLab, Jenkins, and Harbor for code release, image builds, data backup, and other pipeline work

1. Deploy GitLab

GitLab is a comprehensive DevOps platform offering code hosting, CI/CD, and project management, well suited to in-house enterprise use.

# Deploy GitLab
https://gitlab.cn/install/

[root@localhost ~]# hostnamectl set-hostname gitlab 	# set the hostname to gitlab
[root@localhost ~]# su - root
su - root
Last login: Sun Jun 18 18:28:08 CST 2023 from 192.168.2.240 on pts/0
[root@gitlab ~]# cd /etc/sysconfig/network-scripts/		# enter the network configuration directory
[root@gitlab network-scripts]# vim ifcfg-ens33 			# edit the network configuration file
[root@gitlab network-scripts]# service network restart		# restart the network service
Restarting network (via systemctl):                        [  OK  ]
[root@gitlab network-scripts]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config		# disable SELinux
[root@gitlab network-scripts]# service firewalld stop && systemctl disable firewalld  # stop and disable the firewall
Redirecting to /bin/systemctl stop firewalld.service
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@gitlab network-scripts]# reboot		# reboot the system
[root@gitlab ~]# getenforce		# check that SELinux is now disabled
Disabled

# 1. Install and configure the required dependencies
yum install -y curl policycoreutils-python openssh-server perl

# 2. Configure the JiHu GitLab package repository mirror
[root@gitlab ~]# curl -fsSL https://packages.gitlab.cn/repository/raw/scripts/setup.sh | /bin/bash
==> Detected OS centos
==> Add yum repo file to /etc/yum.repos.d/gitlab-jh.repo

[gitlab-jh]
name=JiHu GitLab
baseurl=https://packages.gitlab.cn/repository/el/$releasever/
gpgcheck=0
gpgkey=https://packages.gitlab.cn/repository/raw/gpg/public.gpg.key
priority=1
enabled=1

==> Generate yum cache for gitlab-jh
==> Successfully added gitlab-jh repo. To install JiHu GitLab, run "sudo yum/dnf install gitlab-jh".

[root@gitlab ~]# yum install gitlab-jh -y		# install JiHu GitLab
Thank you for installing JiHu GitLab!
GitLab was unable to detect a valid hostname for your instance.
Please configure a URL for your JiHu GitLab instance by setting `external_url`
configuration in /etc/gitlab/gitlab.rb file.
Then, you can start your JiHu GitLab instance by running the following command:

  sudo gitlab-ctl reconfigure

For a comprehensive list of configuration options please see the Omnibus GitLab readme
https://jihulab.com/gitlab-cn/omnibus-gitlab/-/blob/main-jh/README.md

Help us improve the installation experience, let us know how we did with a 1 minute survey:
https://wj.qq.com/s2/10068464/dc66

[root@gitlab ~]# vim /etc/gitlab/gitlab.rb 		# edit the GitLab configuration file
external_url 'http://myweb.first.com'

[root@gitlab ~]# gitlab-ctl reconfigure		# initialize GitLab
Notes:
Default admin account has been configured with following details:
Username: root
Password: You didn't opt-in to print initial root password to STDOUT.
Password stored to /etc/gitlab/initial_root_password. This file will be cleaned up in first reconfigure run after 24 hours.
NOTE: Because these credentials might be present in your log files in plain text, it is highly recommended to reset the password following https://docs.gitlab.com/ee/security/reset_user_password.html#reset-your-root-password.
gitlab Reconfigured!
# View the password
[root@gitlab ~]# cat /etc/gitlab/initial_root_password 
# WARNING: This value is valid only in the following conditions
#          1. If provided manually (either via `GITLAB_ROOT_PASSWORD` environment variable or via `gitlab_rails['initial_root_password']` setting in `gitlab.rb`, it was provided before database was seeded for the first time (usually, the first reconfigure run).
#          2. Password hasn't been changed manually, either via UI or via command line.
#
#          If the password shown here doesn't work, you must reset the admin password following https://docs.gitlab.com/ee/security/reset_user_password.html#reset-your-root-password.

Password: Al5rgYomhXDz5kNfDl3y8qunrSX334aZZxX5vONJ05s=

# NOTE: This file will be automatically deleted in the first reconfigure run after 24 hours.

# After logging in, the UI language can be switched (under the user's profile/preferences)
# Change the password

[root@gitlab ~]# gitlab-rake gitlab:env:info

System information
System:     
Proxy:      no
Current User:   git
Using RVM:  no
Ruby Version:   3.0.6p216
Gem Version:    3.4.13
Bundler Version:2.4.13
Rake Version:   13.0.6
Redis Version:  6.2.11
Sidekiq Version:6.5.7
Go Version: unknown

GitLab information
Version:    16.0.4-jh
Revision:   c2ed99db36f
Directory:  /opt/gitlab/embedded/service/gitlab-rails
DB Adapter: PostgreSQL
DB Version: 13.11
URL:        http://myweb.first.com
HTTP Clone URL: http://myweb.first.com/some-group/some-project.git
SSH Clone URL:  git@myweb.first.com:some-group/some-project.git
Elasticsearch:  no
Geo:        no
Using LDAP: no
Using Omniauth: yes
Omniauth Providers: GitLab Shell
Version:    14.20.0
Repository storages:
- default:  unix:/var/opt/gitlab/gitaly/gitaly.socket
GitLab Shell path:      /opt/gitlab/embedded/service/gitlab-shell
1. Problems encountered
Problem: early on, GitLab kept returning 502 errors and the local address 192.168.2.124:9091 could not be logged in to.
Solution: after some searching, checking CPU and memory usage with top showed that the resources were exhausted.
The VM was shut down, its memory was increased in the VM settings, and after logging in again everything worked.
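
When diagnosing this kind of failure, the usual checks are the host's free memory and the state of the individual GitLab services. The commands below are standard free/gitlab-ctl usage, offered as a hint rather than the exact commands recorded in the original troubleshooting:

# Check available memory on the GitLab host
free -h

# See which GitLab components are down, then restart them
gitlab-ctl status
gitlab-ctl restart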

2. Deploy Jenkins

Jenkins is deployed into k8s.

# 1. Install git
[root@k8smaster jenkins]# yum install git -y

# 2. Clone the repository that contains the YAML files needed to deploy Jenkins
[root@k8smaster jenkins]# git clone https://github.com/scriptcamp/kubernetes-jenkins
Cloning into 'kubernetes-jenkins'...
remote: Enumerating objects: 16, done.
remote: Counting objects: 100% (7/7), done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 16 (delta 1), reused 0 (delta 0), pack-reused 9
Unpacking objects: 100% (16/16), done.
[root@k8smaster jenkins]# ls
kubernetes-jenkins
[root@k8smaster jenkins]# cd kubernetes-jenkins/
[root@k8smaster kubernetes-jenkins]# ls
deployment.yaml  namespace.yaml  README.md  serviceAccount.yaml  service.yaml  volume.yaml

# 3. Create the namespace used to isolate the Jenkins resources
[root@k8smaster kubernetes-jenkins]# cat namespace.yaml 
apiVersion: v1
kind: Namespace
metadata:
  name: devops-tools
[root@k8smaster kubernetes-jenkins]# kubectl apply -f namespace.yaml 
namespace/devops-tools created

[root@k8smaster kubernetes-jenkins]# kubectl get ns
NAME                   STATUS   AGE
default                Active   22h
devops-tools           Active   19s
ingress-nginx          Active   139m
kube-node-lease        Active   22h
kube-public            Active   22h
kube-system            Active   22h

# 4. Create the service account, cluster role, and binding
# A ServiceAccount is created and bound to the ClusterRole below
# resources: ["*"] with verbs: ["*"] allows every operation on every resource
[root@k8smaster kubernetes-jenkins]# cat serviceAccount.yaml 
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jenkins-admin
rules:
- apiGroups: [""]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-admin
  namespace: devops-tools
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins-admin
subjects:
- kind: ServiceAccount
  name: jenkins-admin
  namespace: devops-tools

[root@k8smaster kubernetes-jenkins]# kubectl apply -f serviceAccount.yaml 
clusterrole.rbac.authorization.k8s.io/jenkins-admin created
serviceaccount/jenkins-admin created
clusterrole.rbac.authorization.k8s.io/jenkins-admin created
serviceaccount/jenkins-admin created
clusterrolebinding.rbac.authorization.k8s.io/jenkins-admin created

# 5. Create the volume used to store the data
# Defines a StorageClass, a PersistentVolume (PV), and a PersistentVolumeClaim (PVC)
# so that the Jenkins data can be persisted
[root@k8smaster kubernetes-jenkins]# cat volume.yaml 
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv-volume
  labels:
    type: local
spec:
  storageClassName: local-storage
  claimRef:
    name: jenkins-pv-claim
    namespace: devops-tools
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  local:
    path: /mnt
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k8snode1   # change to the name of a node in the k8s cluster
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
  namespace: devops-tools
spec:
  storageClassName: local-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

[root@k8smaster kubernetes-jenkins]# kubectl apply -f volume.yaml 
storageclass.storage.k8s.io/local-storage created
persistentvolume/jenkins-pv-volume created
persistentvolumeclaim/jenkins-pv-claim created

[root@k8smaster kubernetes-jenkins]# kubectl get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                           STORAGECLASS    REASON   AGE
jenkins-pv-volume   10Gi       RWO            Retain           Bound    devops-tools/jenkins-pv-claim   local-storage            33s
pv-web              10Gi       RWX            Retain           Bound    default/pvc-web                 nfs                      21h

[root@k8smaster kubernetes-jenkins]# kubectl describe pv jenkins-pv-volume
Name:              jenkins-pv-volume
Labels:            type=local
Annotations:       <none>
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      local-storage
Status:            Bound
Claim:             devops-tools/jenkins-pv-claim
Reclaim Policy:    Retain
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          10Gi
Node Affinity:     Required Terms:  Term 0:        kubernetes.io/hostname in [k8snode1]
Message:           
Source:Type:  LocalVolume (a persistent volume backed by local storage on a node)Path:  /mnt
Events:    <none>

# 6. Deploy Jenkins
# A Deployment runs the Jenkins container; it specifies the image, resource requests and limits, health probes, etc.
# securityContext: sets the user and group IDs the container runs as, so the security context is correct
# serviceAccountName: the service account created above, giving the Jenkins pod the required permissions
# volumeMounts: mounts the PVC created above at /var/jenkins_home inside the container so the data is persisted
[root@k8smaster kubernetes-jenkins]# cat deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: devops-tools
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins-server
  template:
    metadata:
      labels:
        app: jenkins-server
    spec:
      securityContext:
        fsGroup: 1000
        runAsUser: 1000
      serviceAccountName: jenkins-admin
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: "2Gi"
            cpu: "1000m"
          requests:
            memory: "500Mi"
            cpu: "500m"
        ports:
        - name: httpport
          containerPort: 8080
        - name: jnlpport
          containerPort: 50000
        livenessProbe:
          httpGet:
            path: "/login"
            port: 8080
          initialDelaySeconds: 90
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: "/login"
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 3
        volumeMounts:
        - name: jenkins-data
          mountPath: /var/jenkins_home
      volumes:
      - name: jenkins-data
        persistentVolumeClaim:
          claimName: jenkins-pv-claim

[root@k8smaster kubernetes-jenkins]# kubectl apply -f deployment.yaml 
deployment.apps/jenkins created

[root@k8smaster kubernetes-jenkins]# kubectl get deploy -n devops-tools
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
jenkins   1/1     1            1           5m36s

[root@k8smaster kubernetes-jenkins]# kubectl get pod -n devops-tools
NAME                       READY   STATUS    RESTARTS   AGE
jenkins-7fdc8dd5fd-bg66q   1/1     Running   0          19s

# 7. Create the Service that publishes the Jenkins pod
# A Service of type NodePort makes Jenkins reachable from outside the cluster
# Host port 32000 is mapped to port 8080 of the Jenkins container so it can be opened in a browser
[root@k8smaster kubernetes-jenkins]# cat service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: jenkins-service
  namespace: devops-tools
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path:   /
    prometheus.io/port:   '8080'
spec:
  selector: 
    app: jenkins-server
  type: NodePort  
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 32000

[root@k8smaster kubernetes-jenkins]# kubectl apply -f service.yaml 
service/jenkins-service created

[root@k8smaster kubernetes-jenkins]# kubectl get svc -n devops-tools
NAME              TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
jenkins-service   NodePort   10.104.76.252   <none>        8080:32000/TCP   24s

# 8. Access Jenkins from a Windows machine: host IP + port
http://192.168.2.104:32000/login?from=%2F

# 9. Get the initial login password from inside the pod
[root@k8smaster kubernetes-jenkins]# kubectl exec -it jenkins-7fdc8dd5fd-bg66q  -n devops-tools -- bash
bash-5.1$ cat /var/jenkins_home/secrets/initialAdminPassword
b0232e2dad164f89ad2221e4c46b0d46

# Change the password

[root@k8smaster kubernetes-jenkins]# kubectl get pod -n devops-tools
NAME                       READY   STATUS    RESTARTS   AGE
jenkins-7fdc8dd5fd-5nn7m   1/1     Running   0          91s


3. Deploy Harbor

# Prerequisite: Docker and Docker Compose are already installed
# 1. Configure the Alibaba Cloud repo
yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# 2. Install Docker
yum install docker-ce-20.10.6 -y

# Start Docker and enable it at boot
systemctl start docker && systemctl enable docker.service

# 3. Check the Docker and Docker Compose versions
[root@harbor ~]# docker version
Client: Docker Engine - Community
 Version:           24.0.2
 API version:       1.41 (downgraded from 1.43)
 Go version:        go1.20.4
 Git commit:        cb74dfc
 Built:             Thu May 25 21:55:21 2023
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          20.10.6
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       8728dd2
  Built:            Fri Apr  9 22:43:57 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.21
  GitCommit:        3dce8eb055cbb6872793272b4f20ed16117344f8
 runc:
  Version:          1.1.7
  GitCommit:        v1.1.7-0-g860f061
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

[root@harbor ~]# docker compose version
Docker Compose version v2.18.1

# 4. Install docker-compose
[root@harbor ~]# ls
anaconda-ks.cfg  docker-compose-linux-x86_64  harbor
[root@harbor ~]# chmod +x docker-compose-linux-x86_64 
[root@harbor ~]# mv docker-compose-linux-x86_64 /usr/local/sbin/docker-compose

# 5. Install Harbor: download the Harbor offline installer from the official site or GitHub
[root@harbor harbor]# ls
harbor-offline-installer-v2.4.1.tgz

# 6. Unpack it
[root@harbor harbor]# tar xf harbor-offline-installer-v2.4.1.tgz 
[root@harbor harbor]# ls
harbor  harbor-offline-installer-v2.4.1.tgz
[root@harbor harbor]# cd harbor
[root@harbor harbor]# ls
common.sh  harbor.v2.4.1.tar.gz  harbor.yml.tmpl  install.sh  LICENSE  prepare
[root@harbor harbor]# pwd
/root/harbor/harbor

# 7. Edit the configuration file
[root@harbor harbor]# cat harbor.yml
# Configuration file of Harbor

# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: 192.168.2.106  # change to the host's IP address

# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 5000  # change to another port

# https can be disabled entirely
# https related config
#https:
  # https port for harbor, default is 443
  #port: 443
  # The path of cert and key files for nginx
  #certificate: /your/certificate/path
  #private_key: /your/private/key/path

# # Uncomment following will enable tls communication between all harbor components
# internal_tls:
#   # set enabled to true means internal tls is enabled
#   enabled: true
#   # put your cert and key files on dir
#   dir: /etc/harbor/tls/internal

# Uncomment external_url if you want to enable external proxy
# And when it enabled the hostname will no longer used
# external_url: https://reg.mydomain.com:8433

# The initial password of Harbor admin
# It only works in first time to install harbor
# Remember Change the admin password from UI after launching Harbor.
harbor_admin_password: Harbor12345  # login password

# Harbor DB configuration
database:
  # The password for the root user of Harbor DB. Change this before any production use.
  password: root123
  # The maximum number of connections in the idle connection pool. If it <=0, no idle connections are retained.
  max_idle_conns: 100
  # The maximum number of open connections to the database. If it <= 0, then there is no limit on the number of open connections.
  # Note: the default number of connections is 1024 for postgres of harbor.
  max_open_conns: 900

# The default data volume
data_volume: /data

# 8. Run the install script; it starts all the required services based on docker-compose.yml
[root@harbor harbor]# ./install.sh

[Step 0]: checking if docker is installed ...
Note: docker version: 24.0.2

[Step 1]: checking docker-compose is installed ...
✖ Need to install docker-compose(1.18.0+) by yourself first and run this script again.

[root@harbor harbor]# ./install.sh
[+] Running 10/10
 ⠿ Network harbor_harbor        Created
 ⠿ Container harbor-log         Started
 ⠿ Container registry           Started
 ⠿ Container harbor-db          Started
 ⠿ Container harbor-portal      Started
 ⠿ Container registryctl        Started
 ⠿ Container redis              Started
 ⠿ Container harbor-core        Started
 ⠿ Container harbor-jobservice  Started
 ⠿ Container nginx              Started
✔ ----Harbor has been installed and started successfully.----

# 9. Configure start at boot by editing /etc/rc.local and adding the startup command
[root@harbor harbor]# vim /etc/rc.local
[root@harbor harbor]# cat /etc/rc.local 
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.

touch /var/lock/subsys/local
/usr/local/sbin/docker-compose -f /root/harbor/harbor/docker-compose.yml up -d

# 10. Set permissions
[root@harbor harbor]# chmod +x /etc/rc.local /etc/rc.d/rc.local

# 11. Log in to Harbor with the account and password configured earlier
http://192.168.2.106:5000/
# Account:  admin
# Password: Harbor12345


1. Test pushing an image
# Test (push nginx to Harbor as an example)
[root@harbor harbor]# docker image ls | grep nginx
nginx                           latest    605c77e624dd   17 months ago   141MB
goharbor/nginx-photon           v2.4.1    78aad8c8ef41   18 months ago   45.7MB

[root@harbor harbor]# docker tag nginx:latest 192.168.2.106:5000/test/nginx1:v1

[root@harbor harbor]# docker image ls | grep nginx
192.168.2.106:5000/test/nginx1   v1        605c77e624dd   17 months ago   141MB
nginx                            latest    605c77e624dd   17 months ago   141MB
goharbor/nginx-photon            v2.4.1    78aad8c8ef41   18 months ago   45.7MB
[root@harbor harbor]# docker push 192.168.2.106:5000/test/nginx1:v1
The push refers to repository [192.168.2.106:5000/test/nginx1]
Get https://192.168.2.106:5000/v2/: http: server gave HTTP response to HTTPS client

# By default the Docker client only trusts HTTPS registries, so /etc/docker/daemon.json must be
# configured to allow this insecure registry
[root@harbor harbor]# vim /etc/docker/daemon.json 
{
  "insecure-registries": ["192.168.2.106:5000"]
}

[root@harbor harbor]# docker login 192.168.2.106:5000
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

[root@harbor harbor]# docker push 192.168.2.106:5000/test/nginx1:v1
The push refers to repository [192.168.2.106:5000/test/nginx1]
d874fd2bc83b: Pushed 
32ce5f6a5106: Pushed 
f1db227348d0: Pushed 
b8d6e692a25e: Pushed 
e379e8aedd4d: Pushed 
2edcec3590a4: Pushed 
v1: digest: sha256:ee89b00528ff4f02f2405e4ee221743ebc3f8e8dd0bfd5c4c20a2fa2aaa7ede3 size: 1570
[root@harbor harbor]# cat /etc/docker/daemon.json 
{
  "insecure-registries": ["192.168.2.106:5000"]
}

V. Build an image from the self-developed Go web API system and deploy it to k8s as the web application; use HPA to scale horizontally when CPU usage reaches 50%, with a minimum of 1 and a maximum of 10 business pods

1. Configure Docker on the k8s cluster to use the private registry

# Log every node of the k8s cluster in to Harbor so images can be pulled back from it.

# 1. Configure Docker to use the private image registry
# 1.1 Edit the Docker configuration file
# registry-mirrors: public registry mirror addresses that act as Docker pull accelerators
# exec-opts: use systemd as the cgroup driver
# insecure-registries: the internal private image registry
[root@k8snode1 ~]# cat /etc/docker/daemon.json 
{
  "registry-mirrors": ["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"],
  "insecure-registries": ["192.168.2.106:5000"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}

[root@k8snode2 ~]# cat /etc/docker/daemon.json 
{
  "registry-mirrors": ["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"],
  "insecure-registries": ["192.168.2.106:5000"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}

# 1.2 Reload the configuration and restart Docker
systemctl daemon-reload  && systemctl restart docker

2. Log the k8s cluster nodes in to the private Harbor registry

# 2.1 Log in on the master node
[root@k8smaster mysql]# docker login 192.168.2.106:5000
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

# 2.2 Log in on node 1
[root@k8snode1 ~]# docker login 192.168.2.106:5000
Username: admin   
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

# 2.3 Log in on node 2
[root@k8snode2 ~]# docker login 192.168.2.106:5000
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
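
Logging in with docker on every node works here because images are pulled by each node's Docker daemon. An alternative that avoids per-node logins is a Kubernetes image-pull secret referenced from the pod spec; the snippet below is a standard kubectl pattern offered as an optional aside, not a step from the original project:

# Create a registry credential secret once on the master...
kubectl create secret docker-registry harbor-login \
    --docker-server=192.168.2.106:5000 \
    --docker-username=admin \
    --docker-password=Harbor12345

# ...then reference it from a pod/deployment spec:
#   spec:
#     imagePullSecrets:
#     - name: harbor-login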

3. Test pulling an image from the private registry on the k8s cluster

# 3.1 Pull the image on node 1
[root@k8snode1 ~]# docker pull 192.168.2.106:5000/test/nginx1:v1

# 3.2 Check the pulled image
[root@k8snode1 ~]# docker images
REPOSITORY                                                                     TAG        IMAGE ID       CREATED         SIZE
mysql                                                                          5.7.42     2be84dd575ee   5 days ago      569MB
nginx                                                                          latest     605c77e624dd   17 months ago   141MB
192.168.2.106:5000/test/nginx1                                                 v1         605c77e624dd   17 months ago   141MB

4. Build the go-web image and push it to the private registry

Install the Go language environment:
[root@harbor yum.repos.d]# yum install epel-release -y
[root@harbor yum.repos.d]# yum install golang -y

# 4.1 The Dockerfile used to build the go-web image
[root@harbor ~]# cd go
[root@harbor go]# ls
scweb  Dockerfile
[root@harbor go]# cat Dockerfile 
# Base image: centos:7
FROM centos:7
# Set the working directory to /go; the following steps run there
WORKDIR /go
# Copy everything in the build context into /go inside the image
COPY . /go
# Runs during the build: list /go and print the working directory
RUN ls /go && pwd
# Default entrypoint: when the container starts it runs /go/scweb
ENTRYPOINT ["/go/scweb"]

# 4.2 Build the image
[root@harbor go]# docker build -t scweb:1.1 .

# 4.3 Check the built image
[root@harbor go]# docker image ls | grep scweb
scweb                            1.1       f845e97e9dfd   4 hours ago      214MB

# 4.4 Tag the image
# Tag scweb:1.1 as 192.168.2.106:5000/test/web:v2
[root@harbor go]#  docker tag scweb:1.1 192.168.2.106:5000/test/web:v2

# 4.5 Check the tagged image
[root@harbor go]# docker image ls | grep web
192.168.2.106:5000/test/web      v2        00900ace4935   4 minutes ago   214MB
scweb                            1.1       00900ace4935   4 minutes ago   214MB

# 4.6 Push the image to the private registry
[root@harbor go]# docker push 192.168.2.106:5000/test/web:v2
The push refers to repository [192.168.2.106:5000/test/web]
3e252407b5c2: Pushed 
193a27e04097: Pushed 
b13a87e7576f: Pushed 
174f56854903: Pushed 
v1: digest: sha256:a723c83407c49e6fcf9aa67a041a4b6241cf9856170c1703014a61dec3726b29 size: 1153
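
The source of the Go web API system itself is not included in the write-up. As a rough, hypothetical sketch of what a comparable service might look like (listening on port 8000 to match the containerPort used by the Deployment later, and compiled into the scweb binary the Dockerfile expects), it could be prepared like this:

# Hypothetical minimal Go web service; the real project's code is not shown here
cat > main.go <<'EOF'
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Respond to every request with a simple message
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from scweb")
	})
	// Listen on port 8000, matching the containerPort in the k8s Deployment
	http.ListenAndServe(":8000", nil)
}
EOF

# Build a static binary named scweb so the centos:7-based image can run it
CGO_ENABLED=0 go build -o scweb main.go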

5. Pull the image on node1

# Log in to the private registry with the existing credentials
[root@k8snode1 ~]# docker login 192.168.2.106:5000
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

# 5.1 Pull the image test/web:v2 from the private registry
[root@k8snode1 ~]# docker pull 192.168.2.106:5000/test/web:v2
v1: Pulling from test/web
2d473b07cdd5: Pull complete 
bc5e56dd1476: Pull complete 
694440c745ce: Pull complete 
78694d1cffbb: Pull complete 
Digest: sha256:a723c83407c49e6fcf9aa67a041a4b6241cf9856170c1703014a61dec3726b29
Status: Downloaded newer image for 192.168.2.106:5000/test/web:v2
192.168.2.106:5000/test/web:v1

# 5.2 Check the pulled image
[root@k8snode1 ~]# docker images
REPOSITORY                                                                     TAG        IMAGE ID       CREATED         SIZE
192.168.2.106:5000/test/web                                                    v2         f845e97e9dfd   4 hours ago     214MB

6. Pull the image on node2

# Log in to the private registry with the existing credentials
[root@k8snode2 ~]# docker login 192.168.2.106:5000
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

# 6.1 Pull the image
[root@k8snode2 ~]# docker pull 192.168.2.106:5000/test/web:v2
v1: Pulling from test/web
2d473b07cdd5: Pull complete 
bc5e56dd1476: Pull complete 
694440c745ce: Pull complete 
78694d1cffbb: Pull complete 
Digest: sha256:a723c83407c49e6fcf9aa67a041a4b6241cf9856170c1703014a61dec3726b29
Status: Downloaded newer image for 192.168.2.106:5000/test/web:v2
192.168.2.106:5000/test/web:v1

# 6.2 Check the pulled image
[root@k8snode2 ~]# docker images
REPOSITORY                                                                     TAG        IMAGE ID       CREATED         SIZE
192.168.2.106:5000/test/web                                                    v2         f845e97e9dfd   4 hours ago     214MB

7. Use HPA for automatic scaling

# Use HPA: when CPU usage reaches 50%, scale horizontally between a minimum of 1 and a maximum of 10 pods.
# A HorizontalPodAutoscaler (HPA) automatically updates a workload resource (such as a Deployment)
# so that the workload scales to meet demand.
https://kubernetes.io/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/

7.1 Install Metrics Server
# 7.1 Install Metrics Server
# Download the components.yaml manifest
wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Replace the image and add two extra arguments:
#         image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.0
#         imagePullPolicy: IfNotPresent
#         args:
#         # add the following two lines
#         - --kubelet-insecure-tls
#         - --kubelet-preferred-address-types=InternalDNS,InternalIP,ExternalDNS,ExternalIP,Hostname

# The modified section of components.yaml
[root@k8smaster ~]# cat components.yaml
    spec:
      containers:
      - args:
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalDNS,InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.0
        imagePullPolicy: IfNotPresent

# Apply the manifest
[root@k8smaster metrics]# kubectl apply -f components.yaml 
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

# Check the result
[root@k8smaster metrics]# kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6949477b58-xdk88   1/1     Running   1          22h
calico-node-4knc8                          1/1     Running   4          22h
calico-node-8jzrn                          1/1     Running   1          22h
calico-node-9d7pt                          1/1     Running   2          22h
coredns-7f89b7bc75-52c4x                   1/1     Running   2          22h
coredns-7f89b7bc75-82jrx                   1/1     Running   1          22h
etcd-k8smaster                             1/1     Running   1          22h
kube-apiserver-k8smaster                   1/1     Running   1          22h
kube-controller-manager-k8smaster          1/1     Running   1          22h
kube-proxy-8wp9c                           1/1     Running   2          22h
kube-proxy-d46jp                           1/1     Running   1          22h
kube-proxy-whg4f                           1/1     Running   1          22h
kube-scheduler-k8smaster                   1/1     Running   1          22h
metrics-server-6c75959ddf-hw7cs            1/1     Running   0          61s

# If the command below returns node metrics, metrics-server is installed successfully
[root@k8smaster metrics]# kubectl top node
NAME        CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8smaster   322m         16%    1226Mi          71%       
k8snode1    215m         10%    874Mi           50%       
k8snode2    190m         9%     711Mi           41% 
7.2 Verify the Metrics Server installation
# 7.2 Verify the Metrics Server installation
# Check the pod and the apiservice to confirm metrics-server is installed
[root@k8smaster HPA]# kubectl get pod -n kube-system|grep metrics
metrics-server-6c75959ddf-hw7cs            1/1     Running   4          6h35m

[root@k8smaster HPA]# kubectl get apiservice |grep metrics
v1beta1.metrics.k8s.io                 kube-system/metrics-server   True        6h35m

[root@k8smaster HPA]# kubectl top node
NAME        CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8smaster   349m         17%    1160Mi          67%       
k8snode1    271m         13%    1074Mi          62%       
k8snode2    226m         11%    1224Mi          71%  

[root@k8snode1 ~]# docker images|grep metrics
registry.aliyuncs.com/google_containers/metrics-server            v0.6.0     5787924fe1d8   14 months ago   68.8MB
You have new mail in /var/spool/mail/root

# Check on the node machines
[root@k8snode1 ~]# docker images|grep metrics
registry.aliyuncs.com/google_containers/metrics-server                         v0.6.0     5787924fe1d8   17 months ago   68.8MB
kubernetesui/metrics-scraper                                                   v1.0.7     7801cfc6d5c0   2 years ago     34.4MB
7.3 Using HPA: create the Deployment and Service, start the web app, and expose it
# 1. Create the Deployment and Service, start the web app, and expose it
[root@k8smaster hpa]# cat my-web.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myweb
  name: myweb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: 192.168.2.106:5000/test/web:v2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8000
        resources:
          limits:
            cpu: 300m
          requests:
            cpu: 100m
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myweb-svc
  name: myweb-svc
spec:
  selector:
    app: myweb
  type: NodePort
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 8000
    nodePort: 30001

# Apply the manifest to create the Deployment and Service
[root@k8smaster HPA]# kubectl apply -f my-web.yaml 
deployment.apps/myweb created
service/myweb-svc created
7.4.使用 HPA ——为 Deployment myweb 创建 HPA,当 CPU 使用率达到 50% 时,Pod 数量在 1 到 10 之间自动调整
[root@k8smaster HPA]# kubectl autoscale deployment myweb --cpu-percent=50 --min=1 --max=10
horizontalpodautoscaler.autoscaling/myweb autoscaled

1. 初始状态,低于 50%,Pod 数量为 3
[root@k8smaster HPA]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
myweb-6dc7b4dfcb-9q85g   1/1     Running   0          9s
myweb-6dc7b4dfcb-ddq82   1/1     Running   0          9s
myweb-6dc7b4dfcb-l7sw7   1/1     Running   0          9s

[root@k8smaster HPA]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          3d2h
myweb-svc    NodePort    10.102.83.168   <none>        8000:30001/TCP   15s

[root@k8smaster HPA]# kubectl get hpa
NAME    REFERENCE          TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
myweb   Deployment/myweb   <unknown>/50%   1         10        3          16s
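kubectl autoscale 本质上是创建了一个 HorizontalPodAutoscaler 对象;如果想用 yaml 的方式管理,可以参考下面这份与上面命令等价的写法(autoscaling/v1,仅作示意):

# 与 kubectl autoscale deployment myweb --cpu-percent=50 --min=1 --max=10 等价的声明式写法
cat <<EOF | kubectl apply -f -
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: myweb
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myweb
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
EOF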
增加负载
通过创建一个客户端 Pod,向目标服务发送持续的 HTTP 请求,模拟高负载场景
kubectl run -i --tty load-generator --rm --image=busybox:1.28 --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://192.168.2.112:30001; done"
2. 增加负载后
在增加负载后,CPU 使用率会逐渐上升,HPA 会自动增加 Pod 的数量。
增加负载后,CPU 使用率上升到 55%,HPA 将 Pod 数量从 3 增加到 5

[root@k8smaster hpa]# kubectl get hpa
NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
myweb   Deployment/myweb   55%/50%   1         10        5          12m

[root@k8smaster hpa]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
load-generator           1/1     Running   0          43s
myweb-6dc7b4dfcb-9q85g   1/1     Running   0          12m
myweb-6dc7b4dfcb-ddq82   1/1     Running   0          12m
myweb-6dc7b4dfcb-l7sw7   1/1     Running   0          12m
myweb-6dc7b4dfcb-mn3k4   1/1     Running   0          1m
myweb-6dc7b4dfcb-np2s6   1/1     Running   0          1m
3. 负载减少后
负载减少后,CPU 使用率下降到 20%,HPA 将 Pod 数量从 5 减少到 2

[root@k8smaster hpa]# kubectl get hpa
NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
myweb   Deployment/myweb   20%/50%   1         10        2          15m

[root@k8smaster hpa]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
load-generator           1/1     Running   0          43s
myweb-6dc7b4dfcb-9q85g   1/1     Running   0          15m
myweb-6dc7b4dfcb-ddq82   1/1     Running   0          15m
7.5访问
http://192.168.2.112:30001/

[root@k8smaster HPA]# kubectl get hpa
NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
myweb   Deployment/myweb   1%/50%    1         10        1          11m

[root@k8smaster HPA]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
myweb-6dc7b4dfcb-ddq82   1/1     Running   0          10m
7.6删除hpa
[root@k8smaster HPA]# kubectl delete hpa myweb

六、启动mysql的pod,为web业务提供数据库服务

[root@k8smaster mysql]# cat mysql-deployment.yaml 
# 定义mysql的Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mysql
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.7.42
        name: mysql
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456"
        ports:
        - containerPort: 3306
---
# 定义mysql的Service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: svc-mysql
  name: svc-mysql
spec:
  selector:
    app: mysql
  type: NodePort
  ports:
  - port: 3306
    protocol: TCP
    targetPort: 3306
    nodePort: 30007

[root@k8smaster mysql]# kubectl apply -f mysql-deployment.yaml 
deployment.apps/mysql created
service/svc-mysql created

[root@k8smaster mysql]# kubectl get svc
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP          28h
svc-mysql        NodePort    10.105.96.217   <none>        3306:30007/TCP   10m

[root@k8smaster mysql]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
mysql-5f9bccd855-6kglf              1/1     Running   0          8m59s

[root@k8smaster mysql]# kubectl exec -it mysql-5f9bccd855-6kglf -- bash
bash-4.2# mysql -uroot -p123456
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.42 MySQL Community Server (GPL)Copyright (c) 2000, 2023, Oracle and/or its affiliates.Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.01 sec)mysql> exit
Bye
bash-4.2# exit
exit
# Web服务和MySQL数据库结合起来
# 在mysql的service中增加以下内容
    ports:
    - name: mysql
      protocol: TCP
      port: 3306
      targetPort: 3306

# 在web的pod中增加以下内容(web 通过这两个环境变量找到数据库)
    env:
    - name: MYSQL_HOST
      value: mysql
    - name: MYSQL_PORT
      value: "3306"
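把 web 和 MySQL 串起来之前,可以先用一个临时的 mysql 客户端 Pod 验证数据库通过 Service 名称是否可达(svc-mysql 和密码 123456 都来自上面的 yaml,命令仅作连通性测试):

# 起一个一次性客户端 Pod,通过 Service 名称连接 MySQL,能查到版本号说明网络和账号都正常
kubectl run mysql-client --rm -it --image=mysql:5.7.42 --restart=Never -- \
  mysql -h svc-mysql -P 3306 -uroot -p123456 -e "select version();"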

1、尝试:k8s部署有状态的MySQL

# 1.创建 ConfigMap
[root@k8smaster mysql]# cat mysql-configmap.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  primary.cnf: |
    # 仅在主服务器上应用此配置
    [mysqld]
    log-bin
  replica.cnf: |
    # 仅在副本服务器上应用此配置
    [mysqld]
    super-read-only

[root@k8smaster mysql]# kubectl apply -f mysql-configmap.yaml 
configmap/mysql created

[root@k8smaster mysql]# kubectl get cm
NAME               DATA   AGE
kube-root-ca.crt   1      6d22h
mysql              2      5s

# 2.创建服务
[root@k8smaster mysql]# cat mysql-services.yaml 
# 为 StatefulSet 成员提供稳定的 DNS 表项的无头服务(Headless Service)
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
    app.kubernetes.io/name: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  clusterIP: None
  selector:
    app: mysql
---
# 用于连接到任一 MySQL 实例执行读操作的客户端服务
# 对于写操作,你必须连接到主服务器:mysql-0.mysql
apiVersion: v1
kind: Service
metadata:
  name: mysql-read
  labels:
    app: mysql
    app.kubernetes.io/name: mysql
    readonly: "true"
spec:
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql

[root@k8smaster mysql]# kubectl apply -f mysql-services.yaml 
service/mysql created
service/mysql-read created

[root@k8smaster mysql]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    6d22h
mysql        ClusterIP   None            <none>        3306/TCP   7s
mysql-read   ClusterIP   10.102.31.144   <none>        3306/TCP   7s

# 3.创建 StatefulSet
[root@k8smaster mysql]# cat mysql-statefulset.yaml 
apiVersion: apps/v1
kind: StatefulSet
metadata:name: mysql
spec:selector:matchLabels:app: mysqlapp.kubernetes.io/name: mysqlserviceName: mysqlreplicas: 3template:metadata:labels:app: mysqlapp.kubernetes.io/name: mysqlspec:initContainers:- name: init-mysqlimage: mysql:5.7.42imagePullPolicy: IfNotPresentcommand:- bash- "-c"- |set -ex# 基于 Pod 序号生成 MySQL 服务器的 ID。[[ $HOSTNAME =~ -([0-9]+)$ ]] || exit 1ordinal=${BASH_REMATCH[1]}echo [mysqld] > /mnt/conf.d/server-id.cnf# 添加偏移量以避免使用 server-id=0 这一保留值。echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf# 将合适的 conf.d 文件从 config-map 复制到 emptyDir。if [[ $ordinal -eq 0 ]]; thencp /mnt/config-map/primary.cnf /mnt/conf.d/elsecp /mnt/config-map/replica.cnf /mnt/conf.d/fi         volumeMounts:- name: confmountPath: /mnt/conf.d- name: config-mapmountPath: /mnt/config-map- name: clone-mysqlimage: registry.cn-hangzhou.aliyuncs.com/google_samples_thepoy/xtrabackup:1.0command:- bash- "-c"- |set -ex# 如果已有数据,则跳过克隆。[[ -d /var/lib/mysql/mysql ]] && exit 0# 跳过主实例(序号索引 0)的克隆。[[ `hostname` =~ -([0-9]+)$ ]] || exit 1ordinal=${BASH_REMATCH[1]}[[ $ordinal -eq 0 ]] && exit 0# 从原来的对等节点克隆数据。ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql# 准备备份。xtrabackup --prepare --target-dir=/var/lib/mysql               volumeMounts:- name: datamountPath: /var/lib/mysqlsubPath: mysql- name: confmountPath: /etc/mysql/conf.dcontainers:- name: mysqlimage: mysql:5.7.42imagePullPolicy: IfNotPresentenv:- name: MYSQL_ALLOW_EMPTY_PASSWORDvalue: "1"ports:- name: mysqlcontainerPort: 3306volumeMounts:- name: datamountPath: /var/lib/mysqlsubPath: mysql- name: confmountPath: /etc/mysql/conf.dresources:requests:cpu: 500mmemory: 1GilivenessProbe:exec:command: ["mysqladmin", "ping"]initialDelaySeconds: 30periodSeconds: 10timeoutSeconds: 5readinessProbe:exec:# 检查我们是否可以通过 TCP 执行查询(skip-networking 是关闭的)。command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]initialDelaySeconds: 5periodSeconds: 2timeoutSeconds: 1- name: xtrabackupimage: registry.cn-hangzhou.aliyuncs.com/google_samples_thepoy/xtrabackup:1.0ports:- name: xtrabackupcontainerPort: 3307command:- bash- "-c"- |set -excd /var/lib/mysql# 确定克隆数据的 binlog 位置(如果有的话)。if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then# XtraBackup 已经生成了部分的 “CHANGE MASTER TO” 查询# 因为我们从一个现有副本进行克隆。(需要删除末尾的分号!)cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in# 在这里要忽略 xtrabackup_binlog_info (它是没用的)。rm -f xtrabackup_slave_info xtrabackup_binlog_infoelif [[ -f xtrabackup_binlog_info ]]; then# 我们直接从主实例进行克隆。解析 binlog 位置。[[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1rm -f xtrabackup_binlog_info xtrabackup_slave_infoecho "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.infi# 检查我们是否需要通过启动复制来完成克隆。if [[ -f change_master_to.sql.in ]]; thenecho "Waiting for mysqld to be ready (accepting connections)"until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; doneecho "Initializing replication from clone position"mysql -h 127.0.0.1 \-e "$(<change_master_to.sql.in), \MASTER_HOST='mysql-0.mysql', \MASTER_USER='root', \MASTER_PASSWORD='', \MASTER_CONNECT_RETRY=10; \START SLAVE;" || exit 1# 如果容器重新启动,最多尝试一次。mv change_master_to.sql.in change_master_to.sql.origfi# 当对等点请求时,启动服务器发送备份。exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \"xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"         volumeMounts:- name: datamountPath: /var/lib/mysqlsubPath: mysql- name: confmountPath: /etc/mysql/conf.dresources:requests:cpu: 100mmemory: 
100Mivolumes:- name: confemptyDir: {}- name: config-mapconfigMap:name: mysqlvolumeClaimTemplates:- metadata:name: dataspec:accessModes: ["ReadWriteOnce"]resources:requests:storage: 1Gi[root@k8smaster mysql]# kubectl apply -f mysql-statefulset.yaml 
statefulset.apps/mysql created

[root@k8smaster mysql]# kubectl get pod
NAME      READY   STATUS    RESTARTS   AGE
mysql-0   0/2     Pending   0          3s

[root@k8smaster mysql]# kubectl describe pod mysql-0
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  16s (x2 over 16s)  default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.

[root@k8smaster mysql]# kubectl get pvc
NAME           STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-mysql-0   Pending                                                     3m27s

[root@k8smaster mysql]# kubectl get pvc data-mysql-0 -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: "2023-06-25T06:17:36Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: mysql
    app.kubernetes.io/name: mysql

# pvc 一直 Pending,是因为集群里还没有可用的 pv,手动创建一个使用 nfs 的 pv
[root@k8smaster mysql]# cat mysql-pv.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  capacity:
    storage: 1Gi 
  accessModes:
  - ReadWriteOnce
  nfs:
    path: "/data/db"       # nfs共享的目录
    server: 192.168.2.121   # nfs服务器的ip地址

[root@k8smaster mysql]# kubectl apply -f mysql-pv.yaml 
persistentvolume/mysql-pv created

[root@k8smaster mysql]# kubectl get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM                           STORAGECLASS    REASON   AGE
jenkins-pv-volume   10Gi       RWO            Retain           Terminating   devops-tools/jenkins-pv-claim   local-storage            5d23h
mysql-pv            1Gi        RWO            Retain           Terminating   default/data-mysql-0                                     15m

# 两个 pv 都卡在 Terminating,去掉 finalizers 强制删除
[root@k8smaster mysql]# kubectl patch pv jenkins-pv-volume -p '{"metadata":{"finalizers":null}}'
persistentvolume/jenkins-pv-volume patched

[root@k8smaster mysql]# kubectl patch pv mysql-pv -p '{"metadata":{"finalizers":null}}'
persistentvolume/mysql-pv patched

[root@k8smaster mysql]# kubectl get pv
No resources found

[root@k8smaster mysql]# kubectl get pod
NAME      READY   STATUS     RESTARTS   AGE
mysql-0   0/2     Init:0/2   0          7m20s

[root@k8smaster mysql]# kubectl describe pod mysql-0
Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  10m (x3 over 10m)     default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 pvc(s) bound to non-existent pv(s).
  Normal   Scheduled         10m                   default-scheduler  Successfully assigned default/mysql-0 to k8snode2
  Warning  FailedMount       10m                   kubelet            Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data conf config-map default-token-24tkk]: error processing PVC default/data-mysql-0: PVC is not bound
  Warning  FailedMount       5m15s                 kubelet            Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data conf config-map default-token-24tkk]: timed out waiting for the condition
  Warning  FailedMount       74s (x12 over 9m31s)  kubelet            MountVolume.SetUp failed for volume "mysql-pv" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs 192.168.2.121:/data/db /var/lib/kubelet/pods/424bb72d-8bf5-400f-b954-7fa3666ca0b3/volumes/kubernetes.io~nfs/mysql-pv
Output: mount.nfs: mounting 192.168.2.121:/data/db failed, reason given by server: No such file or directory

# 挂载失败的原因是 nfs 服务器上还没有 /data/db 这个目录,到 nfs 服务器上把目录建出来
[root@nfs data]# pwd
/data
[root@nfs data]# mkdir db replica  replica-3
[root@nfs data]# ls
db  replica  replica-3
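目录建好之后,可以在 k8s 节点上先确认 NFS 的共享目录确实已经导出,再看 Pod 能否正常挂载(假设节点上已安装 nfs-utils):

# 在任一 k8s 节点上查看 NFS 服务器导出的目录,确认 /data/db、/data/replica 等目录在列表里
showmount -e 192.168.2.121
# 如果目录是新建的,可在 NFS 服务器上重新输出一次共享目录
exportfs -rv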
[root@k8smaster mysql]# kubectl get pod
NAME      READY   STATUS    RESTARTS   AGE
mysql-0   2/2     Running   0          21m
mysql-1   0/2     Pending   0          2m34s
[root@k8smaster mysql]# kubectl describe  pod mysql-1
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  58s (x4 over 3m22s)  default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
[root@k8smaster mysql]# cat mysql-pv-2.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-2
spec:
  capacity:
    storage: 1Gi 
  accessModes:
  - ReadWriteOnce
  nfs:
    path: "/data/replica"       # nfs共享的目录
    server: 192.168.2.121   # nfs服务器的ip地址
[root@k8smaster mysql]# kubectl apply -f mysql-pv-2.yaml 
persistentvolume/mysql-pv-2 created
[root@k8smaster mysql]# kubectl get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS   REASON   AGE
mysql-pv     1Gi        RWO            Retain           Bound    default/data-mysql-0                           24m
mysql-pv-2   1Gi        RWO            Retain           Bound    default/data-mysql-1                           7s
[root@k8smaster mysql]# kubectl get pod
NAME      READY   STATUS    RESTARTS   AGE
mysql-0   2/2     Running   0          25m
mysql-1   1/2     Running   0          7m20s
[root@k8smaster mysql]# cat mysql-pv-3.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-3
spec:
  capacity:
    storage: 1Gi 
  accessModes:
  - ReadWriteOnce
  nfs:
    path: "/data/replica-3"       # nfs共享的目录,与 nfs 服务器上创建的 replica-3 目录保持一致
    server: 192.168.2.121   # nfs服务器的ip地址
[root@k8smaster mysql]# kubectl apply -f mysql-pv-3.yaml 
persistentvolume/mysql-pv-3 created
[root@k8smaster mysql]# kubectl get pod
NAME      READY   STATUS    RESTARTS   AGE
mysql-0   2/2     Running   0          29m
mysql-1   2/2     Running   0          11m
mysql-2   0/2     Pending   0          3m46s
[root@k8smaster mysql]# kubectl describe pod mysql-2
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  2m13s (x4 over 4m16s)  default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling  47s (x2 over 2m5s)     default-scheduler  0/3 nodes are available: 1 Insufficient cpu, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient memory.

# mysql-2 一直 Pending,是因为节点剩余的 CPU/内存不够调度第三个副本,属于资源规划问题,不影响前两个副本的主从复制
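等副本都 Running 之后,可以按下面的思路验证主从复制是否生效:往主实例 mysql-0 写数据,再从只读服务 mysql-read 读出来(库名 test 只是演示用的假设):

# 向主实例写入测试数据(StatefulSet 里设置了 MYSQL_ALLOW_EMPTY_PASSWORD,免密登录)
kubectl run mysql-write-test --rm -it --image=mysql:5.7.42 --restart=Never -- \
  mysql -h mysql-0.mysql -e "create database if not exists test; create table if not exists test.t (msg text); insert into test.t values ('hello');"
# 通过 mysql-read 查询,能读到刚写入的数据说明复制链路正常
kubectl run mysql-read-test --rm -it --image=mysql:5.7.42 --restart=Never -- \
  mysql -h mysql-read -e "select * from test.t;"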

七、使用探针(liveness、readiness、startup)的(httpget、exec)方法对web业务pod进行监控,一旦出现问题马上重启,增强业务pod的可靠性

# 部署原来的应用
[root@k8smaster probe]# vim my-web.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myweb
  name: myweb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: 192.168.2.106:5000/test/web:v2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8000
        resources:
          limits:
            cpu: 300m
          requests:
            cpu: 100m
        # livenessProbe: 存活性探针,用于检测容器是否存活
        livenessProbe:
          exec:                        # 在容器中执行命令 ls /tmp
            command:
            - ls
            - /tmp
          initialDelaySeconds: 5       # 探针首次执行的延迟时间为 5 秒
          periodSeconds: 5             # 探针执行的周期为 5 秒
        # readinessProbe: 就绪探针,用于检测容器是否准备好接收流量
        readinessProbe:
          exec:
            command:
            - ls
            - /tmp
          initialDelaySeconds: 5
          periodSeconds: 5
        # startupProbe: 启动探针,用于检测容器是否启动完成
        startupProbe:
          httpGet:                     # 向容器的 / 路径发送 HTTP GET 请求
            path: /
            port: 8000                 # 容器的端口为 8000
          failureThreshold: 30         # 允许失败的次数为 30 次
          periodSeconds: 10            # 探针执行的周期为 10 秒
---
# 创建了一个 Service,将应用暴露在节点端口 30001 上
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myweb-svc
  name: myweb-svc
spec:
  selector:
    app: myweb
  type: NodePort
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 8000
    nodePort: 30001

# 应用配置文件,创建 Deployment 和 Service
[root@k8smaster probe]# kubectl apply -f my-web.yaml 
deployment.apps/myweb created
service/myweb-svc created[root@k8smaster probe]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
myweb-6b89fb9c7b-4cdh9   1/1     Running   0          53s
myweb-6b89fb9c7b-dh87w   1/1     Running   0          53s
myweb-6b89fb9c7b-zvc52   1/1     Running   0          53s# 查看 Pod 详情
[root@k8smaster probe]# kubectl describe pod myweb-6b89fb9c7b-4cdh9
Name:         myweb-6b89fb9c7b-4cdh9
Namespace:    default
Priority:     0
Node:         k8snode2/192.168.2.112
Start Time:   Thu, 22 Jun 2023 16:47:20 +0800
Labels:       app=mywebpod-template-hash=6b89fb9c7b
Annotations:  cni.projectcalico.org/podIP: 10.244.185.219/32cni.projectcalico.org/podIPs: 10.244.185.219/32
Status:       Running
IP:           10.244.185.219
IPs:IP:           10.244.185.219
Controlled By:  ReplicaSet/myweb-6b89fb9c7b
Containers:myweb:Container ID:   docker://8c55c0c825483f86e4b3c87413984415b2ccf5cad78ed005eed8bedb4252c130Image:          192.168.2.106:5000/test/web:v2Image ID:       docker-pullable://192.168.2.106:5000/test/web@sha256:3bef039aa5c13103365a6868c9f052a000de376a45eaffcbad27d6ddb1f6e354Port:           8000/TCPHost Port:      0/TCPState:          RunningStarted:      Thu, 22 Jun 2023 16:47:23 +0800Ready:          TrueRestart Count:  0Limits:cpu:  300mRequests:cpu:        100mLiveness:     exec [ls /tmp] delay=5s timeout=1s period=5s #success=1 #failure=3Readiness:    exec [ls /tmp] delay=5s timeout=1s period=5s #success=1 #failure=3Startup:      http-get http://:8000/ delay=0s timeout=1s period=10s #success=1 #failure=30Environment:  <none>Mounts:/var/run/secrets/kubernetes.io/serviceaccount from default-token-24tkk (ro)
Conditions:Type              StatusInitialized       True Ready             True ContainersReady   True PodScheduled      True 
Volumes:default-token-24tkk:Type:        Secret (a volume populated by a Secret)SecretName:  default-token-24tkkOptional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300snode.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:Type    Reason     Age   From               Message----    ------     ----  ----               -------Normal  Scheduled  55s   default-scheduler  Successfully assigned default/myweb-6b89fb9c7b-4cdh9 to k8snode2Normal  Pulled     52s   kubelet            Container image "192.168.2.106:5000/test/web:v2" already present on machineNormal  Created    52s   kubelet            Created container mywebNormal  Started    52s   kubelet            Started container myweb
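如果想验证 livenessProbe 真的会触发重启,可以手动制造一次探测失败,比如把探针检查的 /tmp 目录删掉,再观察 RESTARTS 计数(仅建议在测试环境操作,且要求镜像内有删除权限):

# 删除容器里的 /tmp,让 ls /tmp 探测命令失败
kubectl exec myweb-6b89fb9c7b-4cdh9 -- rm -rf /tmp
# 过几个探测周期后,RESTARTS 应该会加 1,事件里能看到 Liveness probe failed
kubectl get pod myweb-6b89fb9c7b-4cdh9
kubectl describe pod myweb-6b89fb9c7b-4cdh9 | tail -n 20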

八、使用ingress给web业务做负载均衡,使用dashboard对整个集群资源进行掌控。

1、使用ingress给web业务做负载均衡

ingress controller 类似于一个负载均衡器,通常基于 Nginx 或其他软件实现,负责真正接收并转发流量。
ingress 是 k8s 里定义路由规则的 API 对象(域名、路径到 Service 的映射),ingress controller 会读取这些规则并渲染进自己的 nginx.conf,相当于给 ingress controller 传参。

[root@k8smaster ingress]# ls
ingress-controller-deploy.yaml         kube-webhook-certgen-v1.1.0.tar.gz  sc-nginx-svc-1.yaml
ingress-nginx-controllerv1.1.0.tar.gz  sc-ingress.yaml

ingress-nginx-controllerv1.1.0.tar.gz    ingress-nginx-controller镜像
kube-webhook-certgen-v1.1.0.tar.gz       kube-webhook-certgen镜像
ingress-controller-deploy.yaml   是部署ingress controller使用的yaml文件
sc-ingress.yaml 创建ingress的配置文件
sc-nginx-svc-1.yaml  启动sc-nginx-svc-1服务和相关pod的yaml
nginx-deployment-nginx-svc-2.yaml  启动nginx-deployment-nginx-svc-2服务和相关pod的yaml
第1大步骤:安装ingress controller
# 1.将镜像scp到所有的node节点服务器上
[root@k8smaster ingress]# scp ingress-nginx-controllerv1.1.0.tar.gz k8snode1:/root
ingress-nginx-controllerv1.1.0.tar.gz                                                  100%  276MB 101.1MB/s   00:02    
[root@k8smaster ingress]# scp ingress-nginx-controllerv1.1.0.tar.gz k8snode2:/root
ingress-nginx-controllerv1.1.0.tar.gz                                                  100%  276MB  98.1MB/s   00:02    
[root@k8smaster ingress]# scp kube-webhook-certgen-v1.1.0.tar.gz k8snode1:/root
kube-webhook-certgen-v1.1.0.tar.gz                                                     100%   47MB  93.3MB/s   00:00    
[root@k8smaster ingress]# scp kube-webhook-certgen-v1.1.0.tar.gz k8snode2:/root
kube-webhook-certgen-v1.1.0.tar.gz                                                     100%   47MB  39.3MB/s   00:01    # 2.导入镜像,在所有的节点服务器上进行
[root@k8snode1 ~]# docker load -i ingress-nginx-controllerv1.1.0.tar.gz
[root@k8snode1 ~]# docker load -i kube-webhook-certgen-v1.1.0.tar.gz
[root@k8snode2 ~]# docker load -i ingress-nginx-controllerv1.1.0.tar.gz
[root@k8snode2 ~]# docker load -i kube-webhook-certgen-v1.1.0.tar.gz[root@k8snode1 ~]# docker images
REPOSITORY                                                                     TAG        IMAGE ID       CREATED         SIZE
nginx                                                                          latest     605c77e624dd   17 months ago   141MB
registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller   v1.1.0     ae1a7201ec95   19 months ago   285MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen       v1.1.1     c41e9fcadf5a   20 months ago   47.7MB[root@k8snode2 ~]# docker images
REPOSITORY                                                                     TAG        IMAGE ID       CREATED         SIZE
nginx                                                                          latest     605c77e624dd   17 months ago   141MB
registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller   v1.1.0     ae1a7201ec95   19 months ago   285MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen       v1.1.1     c41e9fcadf5a   20 months ago   47.7MB# 3.执行yaml文件去创建ingres controller
[root@k8smaster ingress]# kubectl apply -f ingress-controller-deploy.yaml 
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created# 4.查看ingress controller的相关命名空间
[root@k8smaster ingress]# kubectl get ns
NAME                   STATUS   AGE
default                Active   20h
ingress-nginx          Active   30s
kube-node-lease        Active   20h
kube-public            Active   20h
kube-system            Active   20h# 5.查看ingress controller的相关service
[root@k8smaster ingress]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.105.213.95   <none>        80:31457/TCP,443:32569/TCP   64s
ingress-nginx-controller-admission   ClusterIP   10.98.225.196   <none>        443/TCP                      64s# 6.查看ingress controller的相关pod
[root@k8smaster ingress]# kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-9sg56        0/1     Completed   0          80s
ingress-nginx-admission-patch-8sctb         0/1     Completed   1          80s
ingress-nginx-controller-6c8ffbbfcf-bmdj9   1/1     Running     0          80s
ingress-nginx-controller-6c8ffbbfcf-j576v   1/1     Running     0          80s
第2大步骤:创建pod和暴露pod的service
[root@k8smaster new]# cat sc-nginx-svc-1.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sc-nginx-deploy
  labels:
    app: sc-nginx-feng            # 为 Deployment 添加标签 app: sc-nginx-feng
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sc-nginx-feng          # Deployment 会选择所有具有 app: sc-nginx-feng 标签的 Pod
  template:
    metadata:
      labels:
        app: sc-nginx-feng        # 每个由 Deployment 创建的 Pod 都会有一个标签 app: sc-nginx-feng
    spec:
      containers:
      - name: sc-nginx-feng
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80       # 容器的端口为 80
---
apiVersion: v1
kind: Service
metadata:
  name: sc-nginx-svc
  labels:
    app: sc-nginx-svc             # 为 Service 添加标签 app: sc-nginx-svc
spec:
  selector:
    app: sc-nginx-feng            # Service 会选择所有具有 app: sc-nginx-feng 标签的 Pod
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80
    targetPort: 80

[root@k8smaster new]# kubectl apply -f sc-nginx-svc-1.yaml 
deployment.apps/sc-nginx-deploy created
service/sc-nginx-svc created[root@k8smaster ingress]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
sc-nginx-deploy-7bb895f9f5-hmf2n    1/1     Running   0          7s
sc-nginx-deploy-7bb895f9f5-mczzg    1/1     Running   0          7s
sc-nginx-deploy-7bb895f9f5-zzndv    1/1     Running   0          7s[root@k8smaster ingress]# kubectl get svc
NAME           TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes     ClusterIP   10.96.0.1     <none>        443/TCP   20h
sc-nginx-svc   ClusterIP   10.96.76.55   <none>        80/TCP    26s# 查看服务器的详细信息,查看Endpoints对应的pod的ip和端口是否正常
[root@k8smaster ingress]# kubectl describe svc sc-nginx-svc
Name:              sc-nginx-svc
Namespace:         default
Labels:            app=sc-nginx-svc
Annotations:       <none>
Selector:          app=sc-nginx-feng
Type:              ClusterIP
IP Families:       <none>
IP:                10.96.76.55
IPs:               10.96.76.55
Port:              name-of-service-port  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.185.209:80,10.244.185.210:80,10.244.249.16:80
Session Affinity:  None
Events:            <none># 访问服务暴露的ip
[root@k8smaster ingress]# curl 10.96.76.55
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p><p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p><p><em>Thank you for using nginx.</em></p>
</body>
</html>
第3大步骤:启用ingress关联ingress controller 和service
# 创建一个yaml文件,去启动ingress
# 分别将 www.feng.com 和 www.zhang.com 的流量路由到不同的 Service
[root@k8smaster ingress]# cat sc-ingress.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sc-ingress
  annotations:                           # 使用注解指定 Ingress Controller 为 Nginx
    kubernetes.io/ingress.class: nginx
spec:
  ingressClassName: nginx                # 关联 Ingress Controller,指定使用 Nginx Ingress Controller
  rules:                                 # 定义 Ingress 规则
  - host: www.feng.com                   # 定义主机名为 www.feng.com 的规则
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: sc-nginx-svc           # 服务名称为 sc-nginx-svc,www.feng.com 的流量路由到 sc-nginx-svc
            port:
              number: 80
  - host: www.zhang.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: sc-nginx-svc-2         # 服务名称为 sc-nginx-svc-2,www.zhang.com 的流量路由到 sc-nginx-svc-2
            port:
              number: 80

[root@k8smaster ingress]# kubectl apply -f sc-ingress.yaml 
ingress.networking.k8s.io/sc-ingress created

# 查看ingress
[root@k8smaster ingress]# kubectl get ingress
NAME         CLASS   HOSTS                        ADDRESS                       PORTS   AGE
sc-ingress   nginx   www.feng.com,www.zhang.com   192.168.2.111,192.168.2.112   80      52s
第4大步骤:查看ingress controller 里的nginx.conf 文件里是否有ingress对应的规则
#  获取 Ingress Controller 的 Pod
[root@k8smaster ingress]# kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-9sg56        0/1     Completed   0          6m53s
ingress-nginx-admission-patch-8sctb         0/1     Completed   1          6m53s
ingress-nginx-controller-6c8ffbbfcf-bmdj9   1/1     Running     0          6m53s
ingress-nginx-controller-6c8ffbbfcf-j576v   1/1     Running     0          6m53s# 进入 Ingress Controller 的 Pod
[root@k8smaster ingress]# kubectl exec -n ingress-nginx -it ingress-nginx-controller-6c8ffbbfcf-bmdj9 -- bash
# 查看 Nginx 配置文件
bash-5.1$ cat nginx.conf |grep feng.com
	## start server www.feng.com
		server_name www.feng.com ;
	## end server www.feng.com
bash-5.1$ cat nginx.conf |grep zhang.com
	## start server www.zhang.com
		server_name www.zhang.com ;
	## end server www.zhang.com
bash-5.1$ cat nginx.conf|grep -C3 upstream_balancer
	error_log  /var/log/nginx/error.log notice;
	upstream upstream_balancer {
		server 0.0.0.1:1234; # placeholder

# 获取ingress controller对应的service暴露宿主机的端口,访问宿主机和相关端口,就可以验证ingress controller是否能进行负载均衡
[root@k8smaster ingress]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.105.213.95   <none>        80:31457/TCP,443:32569/TCP   8m12s
ingress-nginx-controller-admission   ClusterIP   10.98.225.196   <none>        443/TCP                      8m12s# 修改 /etc/hosts 文件,这样可以通过域名访问 Ingress Controller
[root@zabbix ~]# vim /etc/hosts
[root@zabbix ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.2.111 www.feng.com
192.168.2.112 www.zhang.com

# 因为基于域名做的负载均衡的配置,所以必须要在浏览器里使用域名去访问,不能使用ip地址
# 同时ingress controller做负载均衡的时候是基于http协议的,7层负载均衡。

[root@zabbix ~]# curl www.feng.com
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p><p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p><p><em>Thank you for using nginx.</em></p>
</body>
</html># 访问www.zhang.com出现异常,503错误,是nginx内部错误
[root@zabbix ~]# curl www.zhang.com
<html>
<head><title>503 Service Temporarily Unavailable</title></head>
<body>
<center><h1>503 Service Temporarily Unavailable</h1></center>
<hr><center>nginx</center>
</body>
</html>
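如果不方便修改 /etc/hosts,也可以直接带 Host 头去访问节点 IP 来验证 ingress 的转发规则,效果等同于通过域名访问(IP 和端口以实际环境为准):

# 手动指定 Host 头访问节点的 80 端口
curl -H "Host: www.feng.com" http://192.168.2.111/
# 或者走 ingress-nginx-controller 的 NodePort(上面查到的是 31457)
curl -H "Host: www.zhang.com" http://192.168.2.111:31457/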
第5大步骤:启动第2个服务和pod,使用了pv+pvc+nfs
1、需要提前准备好nfs服务器+创建pv和pvc
# 需要提前准备好nfs服务器+创建pv和pvc
[root@k8smaster pv]# pwd
/root/pv
[root@k8smaster pv]# ls
nfs-pvc.yml  nfs-pv.yml  nginx-deployment.yml

# pv 和 pvc 的内容分别放在 nfs-pv.yml 和 nfs-pvc.yml 里,下面一起列出
[root@k8smaster pv]# cat nfs-pv.yml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-web
  labels:                       # 为 PV 添加标签 type: pv-web
    type: pv-web
spec:
  capacity:
    storage: 10Gi 
  accessModes:
  - ReadWriteMany               # 定义访问模式为 ReadWriteMany,表示多个 Pod 可以同时读写
  storageClassName: nfs         # pv对应的存储类名字
  nfs:                          # 定义 NFS 服务器的配置
    path: "/web"                # NFS 服务器上的共享目录路径
    server: 192.168.2.121       # NFS 服务器的 IP 地址
    readOnly: false             # 访问模式
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-web                 # PVC 的名称为 pvc-web
  labels:
    type: pvc-web               # 为 PVC 添加标签 type: pvc-web
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: nfs         # 定义存储类为 nfs
  resources:
    requests:
      storage: 10Gi

[root@k8smaster pv]# kubectl apply -f nfs-pv.yml
[root@k8smaster pv]# kubectl apply -f nfs-pvc.yml

[root@k8smaster pv]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
pv-web   10Gi       RWX            Retain           Bound    default/pvc-web   nfs                     19h
[root@k8smaster pv]# kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-web   Bound    pv-web   10Gi       RWX            nfs            19h
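pv、pvc 绑定好之后,可以先在 NFS 服务器的 /web 共享目录里放一个首页,后面通过 www.zhang.com 访问到的就是这里的内容(内容仅为示例):

# 在 NFS 服务器(192.168.2.121)上准备测试页面
mkdir -p /web
echo -e "welcome to changsha\nhello,world" > /web/index.html
cat /web/index.html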
2、创建 Deployment 和 Service
[root@k8smaster ingress]# cat nginx-deployment-nginx-svc-2.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sc-nginx-feng-2
  template:
    metadata:
      labels:
        app: sc-nginx-feng-2
    spec:
      volumes:
      - name: sc-pv-storage-nfs                 # 定义卷的名称为 sc-pv-storage-nfs
        persistentVolumeClaim:
          claimName: pvc-web                    # 使用 PVC pvc-web
      containers:
      - name: sc-pv-container-nfs               # 容器的名称为 sc-pv-container-nfs
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: "http-server"
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"    # 将卷挂载到容器的 /usr/share/nginx/html 路径
          name: sc-pv-storage-nfs               # 卷的名称为 sc-pv-storage-nfs
---
apiVersion: v1
kind: Service
metadata:
  name: sc-nginx-svc-2
  labels:
    app: sc-nginx-svc-2
spec:
  selector:
    app: sc-nginx-feng-2                        # 选择器匹配标签 app: sc-nginx-feng-2
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80
    targetPort: 80

[root@k8smaster ingress]# kubectl apply -f nginx-deployment-nginx-svc-2.yaml 
deployment.apps/nginx-deployment created
service/sc-nginx-svc-2 created[root@k8smaster ingress]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.105.213.95   <none>        80:31457/TCP,443:32569/TCP   24m
ingress-nginx-controller-admission   ClusterIP   10.98.225.196   <none>        443/TCP                      24m[root@k8smaster ingress]# kubectl get ingress
NAME         CLASS   HOSTS                        ADDRESS                       PORTS   AGE
sc-ingress   nginx   www.feng.com,www.zhang.com   192.168.2.111,192.168.2.112   80      18m

# 访问宿主机暴露的端口号(上面查到的 NodePort 31457)或者 80 都可以
NodePort:在 Kubernetes 中,Service 的类型为 NodePort 时,会在每个节点的 IP 地址上暴露一个端口(通常是 30000-32767 范围内的端口)。
Ingress:Ingress 是 Kubernetes 中的网络入口,用于管理外部流量到集群内部服务的路由。Ingress 通常会使用 80(HTTP)或 443(HTTPS)端口。
Ingress 比直接使用 Service 暴露服务还是有一些优势。

[root@zabbix ~]# curl www.zhang.com
welcome to changsha
hello,world
[root@zabbix ~]# curl www.feng.com
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p><p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p><p><em>Thank you for using nginx.</em></p>
</body>
</html>

2、使用Kubernetes Dashboard对整个集群资源进行掌控

# 1.先下载recommended.yaml文件
[root@k8smaster dashboard]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
--2023-06-19 10:18:50--  https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
正在解析主机 raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.108.133, 185.199.111.133, ...
正在连接 raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... 已连接。
已发出 HTTP 请求,正在等待回应... 200 OK
长度:7621 (7.4K) [text/plain]
正在保存至: “recommended.yaml”100%[=============================================================================>] 7,621       --.-K/s 用时 0s      2023-06-19 10:18:52 (23.6 MB/s) - 已保存 “recommended.yaml” [7621/7621])[root@k8smaster dashboard]# ls
recommended.yaml# 2.启动
[root@k8smaster dashboard]# kubectl apply -f recommended.yaml 
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created# 3.查看是否启动dashboard的pod
[root@k8smaster dashboard]# kubectl get ns
NAME                   STATUS   AGE
default                Active   18h
ingress-nginx          Active   13h
kube-node-lease        Active   18h
kube-public            Active   18h
kube-system            Active   18h
kubernetes-dashboard   Active   9s# kubernetes-dashboard 是dashboard自己的命名空间[root@k8smaster dashboard]# kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-5b8896d7fc-6kjlr   1/1     Running   0          4m56s
kubernetes-dashboard-cb988587b-s2f6z         1/1     Running   0          4m57s# 4.查看dashboard对应的服务,因为发布服务的类型是ClusterIP ,外面的机器不能访问,不便于我们通过浏览器访问,因此需要改成NodePort
[root@k8smaster dashboard]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.110.32.41     <none>        8000/TCP   4m24s
kubernetes-dashboard        ClusterIP   10.106.104.124   <none>        443/TCP    4m24s# 5.删除已经创建的dashboard 的服务
[root@k8smaster dashboard]# kubectl delete svc kubernetes-dashboard -n kubernetes-dashboard
service "kubernetes-dashboard" deleted
[root@k8smaster dashboard]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.110.32.41   <none>        8000/TCP   5m39s# 6.创建一个nodeport的service
[root@k8smaster dashboard]# vim dashboard-svc.yml
[root@k8smaster dashboard]# cat dashboard-svc.yml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

[root@k8smaster dashboard]# kubectl apply -f dashboard-svc.yml
service/kubernetes-dashboard created[root@k8smaster dashboard]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.110.32.41     <none>        8000/TCP        8m11s
kubernetes-dashboard        NodePort    10.103.185.254   <none>        443:32571/TCP   37s# 7.想要访问dashboard服务,就要有访问权限,创建kubernetes-dashboard管理员角色
# 创建一个名为 dashboard-admin 的服务账号。
# 创建一个 ClusterRoleBinding,将 dashboard-admin 服务账号绑定到 cluster-admin 角色,授予管理员权限
[root@k8smaster dashboard]# vim dashboard-svc-account.yaml
[root@k8smaster dashboard]# cat dashboard-svc-account.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dashboard-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

[root@k8smaster dashboard]# kubectl apply -f dashboard-svc-account.yaml 
serviceaccount/dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created#获取 Kubernetes Dashboard 的认证 Token,以便用户可以通过浏览器访问 Dashboard
# 8.获取dashboard的secret对象的名字
[root@k8smaster dashboard]# kubectl get secret -n kube-system|grep admin|awk '{print $1}'
dashboard-admin-token-hd2nl# 2. 查看 Secret 的详细信息
[root@k8smaster dashboard]# kubectl describe secret dashboard-admin-token-hd2nl -n kube-system
Name:         dashboard-admin-token-hd2nl
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-adminkubernetes.io/service-account.uid: 4e42ca6a-e5eb-4672-bf3e-ae22935417efType:  kubernetes.io/service-account-tokenData
====
ca.crt:     1066 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InBBckJ2U051Y3J4NjVPY2VxOVZzRjBIdzdjNzgycFppcVZ5WWFnQlNsS00ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4taGQybmwiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNGU0MmNhNmEtZTVlYi00NjcyLWJmM2UtYWUyMjkzNTQxN2VmIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.EAVV-s6OnS4htu4kvv3UvlZpqzg5Ei1_tNiBLr08GquUxKX09JGvQhsZQYgluNmS2yqad_lxK_Ie_RgwayqfBdXYtugQPM8m9gZHScsUdo_3b8b4ZEUz7KlDzJVBdBvDFSJjz-7cJhtj-HtazRuLluJbeoQV4zXMXvfhDhYt0k126eiqKzvbHhJmNM8U5XViAUmpUPCUjqFHm8tS1Su7aW75R-qXH6aGjGOv7kTpQdOjFeVO-AbFRIcbDOcqYRrKMyZu0yuH9QZGL35L1Lj3HgePsDbwd3jm2ZS05BjuacSFGle6CdZTOB0b5haeUlFrZ6FWsU-2qoQ67ysOwB0xKQ
[root@k8smaster dashboard]# # 9.获取secret里的token的内容--》token理解为认证的密码
[root@k8smaster dashboard]# kubectl describe secret dashboard-admin-token-hd2nl -n kube-system|awk '/^token/ {print $2}'
eyJhbGciOiJSUzI1NiIsImtpZCI6InBBckJ2U051Y3J4NjVPY2VxOVZzRjBIdzdjNzgycFppcVZ5WWFnQlNsS00ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4taGQybmwiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNGU0MmNhNmEtZTVlYi00NjcyLWJmM2UtYWUyMjkzNTQxN2VmIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.EAVV-s6OnS4htu4kvv3UvlZpqzg5Ei1_tNiBLr08GquUxKX09JGvQhsZQYgluNmS2yqad_lxK_Ie_RgwayqfBdXYtugQPM8m9gZHScsUdo_3b8b4ZEUz7KlDzJVBdBvDFSJjz-7cJhtj-HtazRuLluJbeoQV4zXMXvfhDhYt0k126eiqKzvbHhJmNM8U5XViAUmpUPCUjqFHm8tS1Su7aW75R-qXH6aGjGOv7kTpQdOjFeVO-AbFRIcbDOcqYRrKMyZu0yuH9QZGL35L1Lj3HgePsDbwd3jm2ZS05BjuacSFGle6CdZTOB0b5haeUlFrZ6FWsU-2qoQ67ysOwB0xKQ# 10.浏览器里访问
[root@k8smaster dashboard]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.110.32.41     <none>        8000/TCP        11m
kubernetes-dashboard        NodePort    10.103.185.254   <none>        443:32571/TCP   4m4s# 访问宿主机的ip+端口号
https://192.168.2.104:32571/#/login# 11.输入上面获得的token,登录。
thisisunsafe
https://192.168.2.104:32571/#/workloads?namespace=default

九、安装zabbix和promethues对整个集群资源(cpu,内存,网络带宽,web服务,数据库服务,磁盘IO等)进行监控。

1、部署zabbix监控Kubernetes

安装 Zabbix 的官方软件源。
安装 Zabbix 服务器和代理软件。
安装并配置 MariaDB 数据库。
导入 Zabbix 的数据库架构。
配置 Zabbix 服务器和前端。
启动相关服务并设置开机自启动。
提供 Zabbix 的默认登录信息。
# 1.安装zabbix服务器的源
源:repository 软件仓库,用来找到zabbix官方网站提供的软件,可以下载软件的地方
[root@zabbix ~]# rpm -Uvh https://repo.zabbix.com/zabbix/5.0/rhel/7/x86_64/zabbix-release-5.0-1.el7.noarch.rpm
获取https://repo.zabbix.com/zabbix/5.0/rhel/7/x86_64/zabbix-release-5.0-1.el7.noarch.rpm
警告:/var/tmp/rpm-tmp.lL96Rw: 头V4 RSA/SHA512 Signature, 密钥 ID a14fe591: NOKEY
准备中...                          ################################# [100%]
正在升级/安装...1:zabbix-release-5.0-1.el7         ################################# [100%][root@zabbix ~]# cd /etc/yum.repos.d/
[root@zabbix yum.repos.d]# ls
CentOS-Base.repo  CentOS-Debuginfo.repo  CentOS-Media.repo    CentOS-Vault.repo          zabbix.repo
CentOS-CR.repo    CentOS-fasttrack.repo  CentOS-Sources.repo  CentOS-x86_64-kernel.repoCentOS-Base.repo 仓库文件: 用来找到centos官方提供的下载软件的地方的文件
Base 存放centos官方基本软件的仓库zabbix.repo 帮助我们找到zabbix官方提供的软件下载地方的文件[root@zabbix yum.repos.d]# cat zabbix.repo
[zabbix]   源的名字
name=Zabbix Official Repository - $basearch  对这个源的介绍
baseurl=http://repo.zabbix.com/zabbix/5.0/rhel/7/$basearch/   具体源的位置
enabled=1   表示这个源可以使用
gpgcheck=1  操作系统会对下载的软件进行gpg检验码的检查,防止软件不是正版的
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-ZABBIX-A14FE591   --》防伪码 # 2.安装zabbix相关的软件
[root@zabbix yum.repos.d]# yum install zabbix-server-mysql zabbix-agent -yzabbix-server-mysql 安装zabbix server和连接mysql功能的软件
zabbix-agent zabbix的代理软件# 3.安装Zabbix前端
[root@zabbix yum.repos.d]# yum install centos-release-scl -y # 修改仓库文件,启用前端的源
[root@zabbix yum.repos.d]# vim zabbix.repo
[zabbix-frontend]
name=Zabbix Official Repository frontend - $basearch
baseurl=http://repo.zabbix.com/zabbix/5.0/rhel/7/$basearch/frontend
enabled=1  # 修改为1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-ZABBIX-A14FE591# 安装web相关的软件
[root@zabbix yum.repos.d]# yum install zabbix-web-mysql-scl zabbix-nginx-conf-scl -y# 4.安装mariadb数据库
[root@zabbix yum.repos.d]# yum  install mariadb mariadb-server -y  
mariadb-server 服务器端的软件包
mariadb 提供客户端命令的软件包# 注意:如果已经安装过mysql的centos系统,就不需要安装mariadb[root@zabbix yum.repos.d]# service mariadb start  # 启动mariadb
Redirecting to /bin/systemctl start mariadb.service
[root@zabbix yum.repos.d]# systemctl enable mariadb   # 设置开机启动mariadb数据库
Created symlink from /etc/systemd/system/multi-user.target.wants/mariadb.service to /usr/lib/systemd/system/mariadb.service.# 查看mysqld进程运行
[root@zabbix yum.repos.d]# ps aux|grep mysqld
mysql     11940  0.1  0.0 113412  1596 ?        Ss   15:09   0:00 /bin/sh /usr/bin/mysqld_safe --basedir=/usr
mysql     12105  1.1  4.3 968920 80820 ?        Sl   15:09   0:00 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --log-error=/var/log/mariadb/mariadb.log --pid-file=/var/run/mariadb/mariadb.pid --socket=/var/lib/mysql/mysql.sock
root      12159  0.0  0.0 112824   980 pts/0    S+   15:09   0:00 grep --color=auto mysqld[root@zabbix yum.repos.d]# netstat -anplut|grep 3306
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN      12105/mysqld # 5.在数据库主机上运行以下命令
[root@zabbix yum.repos.d]# mysql -uroot -p
Enter password: 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 2
Server version: 5.5.68-MariaDB MariaDB ServerCopyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.01 sec)MariaDB [(none)]> create database zabbix character set utf8 collate utf8_bin;
Query OK, 1 row affected (0.00 sec)MariaDB [(none)]> create user zabbix@localhost identified by 'sc123456';  # 创建用户zabbix@localhost 密码是sc123456
Query OK, 0 rows affected (0.00 sec)MariaDB [(none)]> grant all privileges on zabbix.* to zabbix@localhost;  #授权zabbix@localhost用户对zabbix.*库里的表有所有的权限(insert,delete,update,select等)
Query OK, 0 rows affected (0.00 sec)MariaDB [(none)]> set global log_bin_trust_function_creators = 1;
Query OK, 0 rows affected (0.00 sec)MariaDB [(none)]> exit
Bye# 导入初始化数据,会在zabbix库里新建很多的表
[root@zabbix yum.repos.d]# cd /usr/share/doc/zabbix-server-mysql-5.0.35/
[root@zabbix zabbix-server-mysql-5.0.35]# ls
AUTHORS  ChangeLog  COPYING  create.sql.gz  double.sql  NEWS  README[root@zabbix zabbix-server-mysql-5.0.33]# zcat create.sql.gz |mysql -uzabbix -p'sc123456' zabbix[root@zabbix zabbix-server-mysql-5.0.33]# mysql -uzabbix -psc123456
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 4
Server version: 5.5.68-MariaDB MariaDB ServerCopyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| test               |
| zabbix             |
+--------------------+
3 rows in set (0.00 sec)MariaDB [(none)]> use zabbix;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -ADatabase changed
MariaDB [zabbix]> show tables;
+----------------------------+
| Tables_in_zabbix           |
+----------------------------+
| acknowledges               |
| actions                    |
| alerts                     |
| application_discovery      |
| application_prototype      |# 导入数据库架构后禁用log_bin_trust_function_creators选项
[root@zabbix zabbix-server-mysql-5.0.33]# mysql -uroot -p
Enter password: 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 5
Server version: 5.5.68-MariaDB MariaDB ServerCopyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.MariaDB [(none)]> set global log_bin_trust_function_creators = 0;
Query OK, 0 rows affected (0.00 sec)MariaDB [(none)]> exit
Bye# 6.为 Zabbix 服务器配置数据库
# 编辑文件 /etc/zabbix/zabbix_server.conf
[root@zabbix zabbix-server-mysql-5.0.33]# cd /etc/zabbix/
[root@zabbix zabbix]# vim zabbix_server.conf 
# DBPassword=
DBPassword=sc123456# 7.为 Zabbix 前端配置 PHP
# 编辑文件 /etc/opt/rh/rh-nginx116/nginx/conf.d/zabbix.conf 取消注释
[root@zabbix conf.d]# cd /etc/opt/rh/rh-nginx116/nginx/conf.d/
[root@zabbix conf.d]# ls
zabbix.conf
[root@zabbix conf.d]# vim zabbix.conf 
server {listen          8080;server_name     zabbix.com;# 编辑/etc/opt/rh/rh-nginx116/nginx/nginx.conf
[root@zabbix conf.d]# cd /etc/opt/rh/rh-nginx116/nginx/ 
[root@zabbix nginx]# vim nginx.conf  server {listen       80 default_server;  #修改80为8080listen       [::]:80 default_server;# 避免zabbix和nginx监听同一个端口,导致zabbix启动不起来。
# 编辑文件 /etc/opt/rh/rh-php72/php-fpm.d/zabbix.conf
[root@zabbix nginx]# cd /etc/opt/rh/rh-php72/php-fpm.d
[root@zabbix php-fpm.d]# ls
www.conf  zabbix.conf[root@zabbix php-fpm.d]# vim zabbix.conf 
listen.acl_users = apache,nginx
php_value[date.timezone] = Asia/Shanghai# 建议一定要关闭selinux,不然会导致zabbix_server启动不了# 8.启动Zabbix服务器和代理进程并且设置开机启动
[root@zabbix php-fpm.d]# systemctl restart zabbix-server zabbix-agent rh-nginx116-nginx rh-php72-php-fpm
[root@zabbix php-fpm.d]# systemctl enable zabbix-server zabbix-agent rh-nginx116-nginx rh-php72-php-fpm
Created symlink from /etc/systemd/system/multi-user.target.wants/zabbix-server.service to /usr/lib/systemd/system/zabbix-server.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/zabbix-agent.service to /usr/lib/systemd/system/zabbix-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/rh-nginx116-nginx.service to /usr/lib/systemd/system/rh-nginx116-nginx.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/rh-php72-php-fpm.service to /usr/lib/systemd/system/rh-php72-php-fpm.service.# 9.浏览器里访问
http://192.168.2.117:8080# 默认登录的账号和密码
username:  Admin
password:  zabbix
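server 端装好之后,还需要在被监控的 k8s 节点上安装并配置 zabbix-agent,把节点的数据上报给 zabbix server(下面是一个思路示例,192.168.2.117 是本项目里 zabbix 服务器的 IP,以实际环境为准):

# 在每台 k8s 节点上安装同样的 zabbix 源和 zabbix-agent
rpm -Uvh https://repo.zabbix.com/zabbix/5.0/rhel/7/x86_64/zabbix-release-5.0-1.el7.noarch.rpm
yum install zabbix-agent -y
# 把默认指向 127.0.0.1 的 Server/ServerActive 改成 zabbix 服务器的地址
sed -i 's/^Server=127.0.0.1/Server=192.168.2.117/' /etc/zabbix/zabbix_agentd.conf
sed -i 's/^ServerActive=127.0.0.1/ServerActive=192.168.2.117/' /etc/zabbix/zabbix_agentd.conf
systemctl restart zabbix-agent && systemctl enable zabbix-agent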

2、使用Prometheus监控Kubernetes

在 Kubernetes 集群中部署 Node Exporter,用于收集节点级别的指标。
部署 Prometheus,用于收集和存储指标。
部署 Grafana,用于可视化指标。
配置 Ingress,以便通过域名访问 Grafana。在所有节点下载必要的镜像,确保快速部署。
使用 DaemonSet 部署 Node Exporter,确保每个节点都能收集指标。
通过 RBAC 授权 Prometheus 访问 Kubernetes 资源。
配置 Prometheus 的 ConfigMap,定义其抓取任务和规则。
部署 Prometheus 的主服务器,并通过 Service 暴露其端口。
部署 Grafana,并配置 Ingress 以方便外部访问。
1、在所有节点下载必要的镜像,确保快速部署。
# 1.在所有节点提前下载镜像
docker pull prom/node-exporter 
docker pull prom/prometheus:v2.0.0
docker pull grafana/grafana:6.1.4[root@k8smaster ~]# docker images
REPOSITORY                                                        TAG        IMAGE ID       CREATED         SIZE
prom/node-exporter                                                latest     1dbe0e931976   18 months ago   20.9MB
grafana/grafana                                                   6.1.4      d9bdb6044027   4 years ago     245MB
prom/prometheus                                                   v2.0.0     67141fa03496   5 years ago     80.2MB

[root@k8snode1 ~]# docker images
REPOSITORY                                                                     TAG        IMAGE ID       CREATED         SIZE
prom/node-exporter                                                             latest     1dbe0e931976   18 months ago   20.9MB
grafana/grafana                                                                6.1.4      d9bdb6044027   4 years ago     245MB
prom/prometheus                                                                v2.0.0     67141fa03496   5 years ago     80.2MB

[root@k8snode2 ~]# docker images
REPOSITORY                                                                     TAG        IMAGE ID       CREATED         SIZE
prom/node-exporter                                                             latest     1dbe0e931976   18 months ago   20.9MB
grafana/grafana                                                                6.1.4      d9bdb6044027   4 years ago     245MB
prom/prometheus                                                                v2.0.0     67141fa03496   5 years ago     80.2MB
2、使用 DaemonSet 部署 Node Exporter,确保每个节点都能收集指标。

DaemonSet确保每个节点上都运行一个 Node Exporter 容器,用于收集节点级别的指标。
在这里,Node Exporter 的 Pod 被部署到 kube-system 命名空间中。
定义了一个 NodePort类型的服务,将 Node Exporter 容器的端口 9100 暴露到节点端口 31672。

[root@k8smaster prometheus]# ll
总用量 36
-rw-r--r-- 1 root root 5632 6月 25 16:23 configmap.yaml
-rw-r--r-- 1 root root 1515 6月 25 16:26 grafana-deploy.yaml
-rw-r--r-- 1 root root  256 6月 25 16:27 grafana-ing.yaml
-rw-r--r-- 1 root root  225 6月 25 16:27 grafana-svc.yaml
-rw-r--r-- 1 root root  716 6月 25 16:22 node-exporter.yaml
-rw-r--r-- 1 root root 1104 6月 25 16:25 prometheus.deploy.yml
-rw-r--r-- 1 root root  233 6月 25 16:25 prometheus.svc.yml
-rw-r--r-- 1 root root  716 6月 25 16:23 rbac-setup.yaml
[root@k8smaster prometheus]# cat node-exporter.yaml 
---
apiVersion: apps/v1
kind: DaemonSet
metadata:name: node-exporternamespace: kube-systemlabels:k8s-app: node-exporter
spec:selector:matchLabels:k8s-app: node-exportertemplate:metadata:labels:k8s-app: node-exporterspec:containers:- image: prom/node-exporter		# 容器镜像为 prom/node-exportername: node-exporterports:- containerPort: 9100		# 容器公开端口 9100,用于暴露 Node Exporter 的指标protocol: TCPname: http
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: node-exporter
  name: node-exporter
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 9100          # expose Node Exporter's port 9100 through the Service
    nodePort: 31672     # NodePort Service: map Pod port 9100 to node port 31672
    protocol: TCP
  type: NodePort
  selector:
    k8s-app: node-exporter    # bind the Service to Pods labeled k8s-app=node-exporter

[root@k8smaster prometheus]# kubectl apply -f node-exporter.yaml
daemonset.apps/node-exporter created
service/node-exporter created

[root@k8smaster prometheus]# kubectl get pods -A
NAMESPACE              NAME                                         READY   STATUS      RESTARTS   AGE
kube-system            node-exporter-fcmx5                          1/1     Running     0          47s
kube-system            node-exporter-qccwb                          1/1     Running     0          47s

[root@k8smaster prometheus]# kubectl get daemonset -A
NAMESPACE     NAME            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   calico-node     3         3         3       3            3           kubernetes.io/os=linux   7d
kube-system   kube-proxy      3         3         3       3            3           kubernetes.io/os=linux   7d
kube-system   node-exporter   2         2         2       2            2           <none>                   2m29s

[root@k8smaster prometheus]# kubectl get service -A
NAMESPACE              NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
kube-system            node-exporter                        NodePort    10.111.247.142   <none>        9100:31672/TCP               3m24s
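As a quick sanity check (my own addition, not part of the original steps; any node IP in this cluster can stand in for 192.168.2.104), confirm that metrics are actually being served through the NodePort:

# Fetch a few lines of Node Exporter metrics via the NodePort
curl -s http://192.168.2.104:31672/metrics | head -n 5
# Confirm the DaemonSet placed one Pod per worker node
kubectl get pods -n kube-system -o wide -l k8s-app=node-exporter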
3、Deploy Prometheus
1、Create the RBAC resources for Prometheus

ClusterRole: defines Prometheus's permissions, allowing it to read metrics for nodes, Pods, Services, and Ingresses.
ServiceAccount: creates a service account named prometheus.
ClusterRoleBinding: binds the prometheus service account to the ClusterRole.

[root@k8smaster prometheus]# cat rbac-setup.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus        # defines the scope of Prometheus's permissions: metrics of nodes, Pods, Services, Ingresses, etc.
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount      # a service account named prometheus that the Prometheus Pod runs as
metadata:
  name: prometheus
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding  # bind the prometheus service account to the ClusterRole so it receives the permissions defined above
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: kube-system
[root@k8smaster prometheus]# kubectl apply -f rbac-setup.yaml
clusterrole.rbac.authorization.k8s.io/prometheus created
serviceaccount/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
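An optional check of my own (not from the original write-up): kubectl can impersonate the new service account to confirm the ClusterRoleBinding really grants the read access Prometheus needs.

# Both commands should print "yes" if the binding is effective
kubectl auth can-i list pods --all-namespaces --as=system:serviceaccount:kube-system:prometheus
kubectl auth can-i get nodes --as=system:serviceaccount:kube-system:prometheus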
2、Create the Prometheus ConfigMap

ConfigMap: holds the Prometheus configuration file prometheus.yml.
scrape_interval: how often metrics are scraped.
evaluation_interval: how often rules are evaluated.
scrape_configs: defines the scrape jobs, covering the Kubernetes API server, nodes, Pods, and more.

[root@k8smaster prometheus]# cat configmap.yaml 
apiVersion: v1
kind: ConfigMap        # holds the Prometheus configuration file prometheus.yml
metadata:
  name: prometheus-config
  namespace: kube-system
data:
  prometheus.yml: |
    global:
      scrape_interval:     15s    # how often metrics are scraped (15 s)
      evaluation_interval: 15s    # how often rules are evaluated (15 s)
    scrape_configs:

    - job_name: 'kubernetes-apiservers'    # scrape the Kubernetes API server metrics
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https

    - job_name: 'kubernetes-nodes'    # scrape the Kubernetes node metrics
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics

    - job_name: 'kubernetes-cadvisor'    # scrape cAdvisor metrics on each node
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor

    - job_name: 'kubernetes-service-endpoints'    # scrape Service endpoint metrics
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name

    - job_name: 'kubernetes-services'
      kubernetes_sd_configs:
      - role: service
      metrics_path: /probe
      params:
        module: [http_2xx]
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__address__]
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        target_label: kubernetes_name

    - job_name: 'kubernetes-ingresses'
      kubernetes_sd_configs:
      - role: ingress
      relabel_configs:
      - source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
        regex: (.+);(.+);(.+)
        replacement: ${1}://${2}${3}
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_ingress_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_ingress_name]
        target_label: kubernetes_name

    - job_name: 'kubernetes-pods'    # scrape Pod metrics
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name

[root@k8smaster prometheus]# kubectl apply -f configmap.yaml
configmap/prometheus-config created
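The kubernetes-service-endpoints and kubernetes-pods jobs above only keep targets that opt in through annotations. As an illustration of how that opt-in looks (the Service name, port, and path below are hypothetical, not something from this project), a Service is picked up automatically once it is annotated like this:

apiVersion: v1
kind: Service
metadata:
  name: some-exporter              # hypothetical Service name, for illustration only
  annotations:
    prometheus.io/scrape: "true"   # matched by the keep rule on ..._prometheus_io_scrape
    prometheus.io/port: "9100"     # rewritten into __address__ by the relabel rule
    prometheus.io/path: "/metrics" # rewritten into __metrics_path__
spec:
  ports:
  - port: 9100
  selector:
    app: some-exporter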
3、Deploy the Prometheus server
Deployment: runs the Prometheus server itself.
command and args: the command and flags used to start Prometheus.
volumeMounts: mounts the configuration file and the data storage volume.
resources: defines the resource requests and limits.
[root@k8smaster prometheus]# cat prometheus.deploy.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: prometheus-deployment
  name: prometheus
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - image: prom/prometheus:v2.0.0
        name: prometheus
        command:
        - "/bin/prometheus"
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"   # load the configuration from /etc/prometheus/prometheus.yml
        - "--storage.tsdb.path=/prometheus"
        - "--storage.tsdb.retention=24h"
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: "/prometheus"
          name: data
        - mountPath: "/etc/prometheus"      # the configuration file is mounted at /etc/prometheus inside the container
          name: config-volume
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 2500Mi
      serviceAccountName: prometheus        # run with the prometheus service account created earlier
      volumes:
      - name: data
        emptyDir: {}                        # emptyDir is a temporary volume created with the Pod and removed with it; Prometheus stores its monitoring data here
      - name: config-volume                 # the config-volume volume
        configMap:
          name: prometheus-config           # points at the ConfigMap created above

[root@k8smaster prometheus]# kubectl apply -f prometheus.deploy.yml
deployment.apps/prometheus created
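Before moving on to the Service, I find it worth confirming that the rollout completed and the configuration parsed cleanly (my own habit, not a step from the original write-up):

kubectl rollout status deployment/prometheus -n kube-system
# Any error loading /etc/prometheus/prometheus.yml would show up in the container log
kubectl logs deployment/prometheus -n kube-system | tail -n 20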
4、Create the Prometheus Service
Service: a NodePort Service that exposes Prometheus's port 9090 on node port 30003.
[root@k8smaster prometheus]# cat prometheus.svc.yml 
kind: Service
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 30003
  selector:
    app: prometheus       # bind the Service to Pods labeled app=prometheus
[root@k8smaster prometheus]# kubectl apply -f prometheus.svc.yml
service/prometheus created
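Once the Service is up, the Prometheus HTTP API answers on the NodePort, which is an easy way to verify that target discovery works before opening the UI (the node IP is one of this cluster's nodes; adjust as needed):

# Lists the discovered scrape targets (API server, nodes, cAdvisor, annotated endpoints, ...)
curl -s http://192.168.2.104:30003/api/v1/targets | head -c 400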
4. Deploy Grafana
1、Deploy Grafana
Deployment: runs the Grafana server.
image: the image version in use.
env: environment variables that enable basic auth and disable anonymous access.
readinessProbe: the health check for the Grafana container.
[root@k8smaster prometheus]# cat grafana-deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana-core
  namespace: kube-system
  labels:
    app: grafana
    component: core
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
        component: core
    spec:
      containers:
      - image: grafana/grafana:6.1.4
        name: grafana-core
        imagePullPolicy: IfNotPresent
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
        env:
          # The following env variables set up basic auth with the default admin user and admin password.
          - name: GF_AUTH_BASIC_ENABLED
            value: "true"
          - name: GF_AUTH_ANONYMOUS_ENABLED
            value: "false"
          # - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          #   value: Admin
          # does not really work, because of template variables in exported dashboards:
          # - name: GF_DASHBOARDS_JSON_ENABLED
          #   value: "true"
        readinessProbe:
          httpGet:
            path: /login
            port: 3000
          # initialDelaySeconds: 30
          # timeoutSeconds: 1
        #volumeMounts:      # no persistent volume mounted for now
        #- name: grafana-persistent-storage
        #  mountPath: /var
      #volumes:
      #- name: grafana-persistent-storage
      #  emptyDir: {}

[root@k8smaster prometheus]# kubectl apply -f grafana-deploy.yaml
deployment.apps/grafana-core created
2、Create the Grafana Service
Service: a NodePort Service that exposes Grafana's port 3000 on a randomly assigned node port (31267 in this run).
[root@k8smaster prometheus]# cat grafana-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: kube-system
  labels:
    app: grafana
    component: core
spec:
  type: NodePort
  ports:
  - port: 3000
  selector:
    app: grafana
    component: core
[root@k8smaster prometheus]# kubectl apply -f grafana-svc.yaml 
service/grafana created
3、Create the Grafana Ingress
Ingress: defines an Ingress rule so that Grafana can be accessed through the hostname k8s.grafana.
[root@k8smaster prometheus]# cat grafana-ing.yaml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana
  namespace: kube-system
  annotations:
    # the annotation selects Nginx as the Ingress Controller
    kubernetes.io/ingress.class: nginx
spec:
  ingressClassName: nginx   # added later to associate the Ingress with the Ingress Controller
  rules:
  - host: k8s.grafana
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana     # the backing Service name
          servicePort: 3000        # the Service port

[root@k8smaster prometheus]# kubectl apply -f grafana-ing.yaml
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
ingress.extensions/grafana created
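Because the rule matches on the hostname k8s.grafana, that name has to resolve to a node that fronts the ingress-nginx controller before a browser can use it. A minimal sketch, assuming the controller installed earlier is reachable on 192.168.2.112 (adjust to whichever node and port your controller actually exposes):

# On the client machine, map the hostname to a node fronting ingress-nginx
echo '192.168.2.112  k8s.grafana' >> /etc/hosts
# Then browse to http://k8s.grafana/login (append the controller's NodePort if it is not published on port 80)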
5. Check and test
[root@k8smaster prometheus]# kubectl get pods -A
NAMESPACE              NAME                                         READY   STATUS      RESTARTS   AGE
kube-system            grafana-core-78958d6d67-49c56                1/1     Running     0          31m
kube-system            node-exporter-fcmx5                          1/1     Running     0          9m33s
kube-system            node-exporter-qccwb                          1/1     Running     0          9m33s
kube-system            prometheus-68546b8d9-qxsm7                   1/1     Running     0          2m47s

[root@k8smaster mysql]# kubectl get svc -A
NAMESPACE              NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
kube-system            grafana                              NodePort    10.110.87.158    <none>        3000:31267/TCP               31m
kube-system            node-exporter                        NodePort    10.111.247.142   <none>        9100:31672/TCP               39m
kube-system            prometheus                           NodePort    10.102.0.186     <none>        9090:30003/TCP               32m

# Access in a browser
# Metrics collected by node-exporter
http://192.168.2.104:31672/metrics
# The Prometheus web UI
http://192.168.2.104:30003
# The Grafana web UI
http://192.168.2.104:31267
# account: admin; password: *******
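With the UI reachable, node-exporter data can also be queried straight through the Prometheus HTTP API. The two queries below are generic examples of mine built on standard node-exporter metrics, not something from the original write-up:

# Per-node CPU usage in percent over the last 5 minutes
curl -s 'http://192.168.2.104:30003/api/v1/query' --data-urlencode 'query=100 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100'
# Per-node available memory in MiB
curl -s 'http://192.168.2.104:30003/api/v1/query' --data-urlencode 'query=node_memory_MemAvailable_bytes / 1024 / 1024'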

十、Use the ab testing tool to stress-test the whole k8s cluster and related servers

1、Run the php-apache server and expose it as a Service

Deployment:
Defines a Deployment named php-apache that runs the PHP-Apache demo application.
matchLabels and labels keep the Deployment selector and the Pod labels in sync.
Uses the k8s.gcr.io/hpa-example image (a sample image built for demonstrating HPA).
containerPort is set to 80, the port the container listens on.
Resource settings: CPU request 200m, CPU limit 500m.

Service:
Defines a Service named php-apache that routes traffic to the Pods.
The selector matches Pods labeled run: php-apache.
The Service port is 80, matching the Pod port.

# 1. Run the php-apache server and expose it
[root@k8smaster hpa]# ls
php-apache.yaml
[root@k8smaster hpa]# cat php-apache.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-apache
spec:
  selector:
    matchLabels:
      run: php-apache
  template:
    metadata:
      labels:
        run: php-apache           # Pod label matched by the Service selector
    spec:
      containers:
      - name: php-apache
        image: k8s.gcr.io/hpa-example
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 500m
          requests:
            cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: php-apache
  labels:
    run: php-apache
spec:
  ports:
  - port: 80
  selector:                       # select Pods labeled run: php-apache
    run: php-apache

[root@k8smaster hpa]# kubectl apply -f php-apache.yaml 
deployment.apps/php-apache created
service/php-apache created
1、Deploy and verify
[root@k8smaster hpa]# kubectl get deploy
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
php-apache   1/1     1            1           93s
[root@k8smaster hpa]# kubectl get pod
NAME                         READY   STATUS    RESTARTS   AGE
php-apache-567d9f79d-mhfsp   1/1     Running   0          44s
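Before wiring up the HPA, it is also worth confirming the Service answers from inside the cluster. A throwaway busybox Pod (the same image used for the load generator later) is enough; this is my own check, not one of the original steps:

# One-off connectivity check; the Pod is removed when the command exits
kubectl run curl-test --rm -it --restart=Never --image=busybox:1.28 -- wget -qO- http://php-apache
# The hpa-example image simply answers "OK!"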

2、Create the HPA

The HPA automatically scales the number of Pods up or down based on CPU utilization.
--cpu-percent=10: scaling is triggered when CPU utilization reaches 10%.
--min=1: minimum of 1 Pod.
--max=10: maximum of 10 Pods.
1、Create the HPA
[root@k8smaster hpa]# kubectl autoscale deployment php-apache --cpu-percent=10 --min=1 --max=10
horizontalpodautoscaler.autoscaling/php-apache autoscaled
2、Verify the HPA
[root@k8smaster hpa]# kubectl get hpa
NAME         REFERENCE               TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   <unknown>/10%   1         10        0          7s
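The TARGETS column shows <unknown> until metrics-server has reported the first CPU samples for the Pod. For reference, the one-liner above is equivalent to this declarative manifest (a sketch of what kubectl autoscale creates, using the autoscaling/v1 API):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 10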

3、Test the HPA's automatic scaling

Create a load-generator Pod that keeps sending requests to the php-apache Service to simulate a high-load scenario.
It uses the busybox:1.28 image.
wget runs in a loop, continuously sending HTTP requests to the php-apache Service.
1、Create the load-generator Pod
[root@k8smaster hpa]# kubectl run -i --tty load-generator --rm --image=busybox:1.28 --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"
If you don't see a command prompt, try pressing enter.
OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK
2、Observe the HPA scaling behavior
[root@k8smaster hpa]# kubectl get hpa
NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   0%/10%    1         10        1          3m24s
[root@k8smaster hpa]# kubectl get hpa
NAME         REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   238%/10%   1         10        1          3m41s
[root@k8smaster hpa]# kubectl get hpa
NAME         REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   250%/10%   1         10        4          3m57s
# Once CPU utilization drops back to 0%, the HPA scales the Deployment back down to 1 replica. The desired replica
# count follows ceil(currentReplicas × currentUtilization / targetUtilization), so 250% against a 10% target asks for
# far more than the cap of 10, and it can take a few minutes for the autoscaler to finish adjusting the replica count.

4、Stress-test the web service with ab while watching Prometheus and the dashboard

# Run ab against the web service at 192.168.2.112:30001 and watch the Pods in Prometheus and the dashboard at the same time
1、Install httpd-tools
[root@nfs ~]# yum install httpd-tools -y
2、Run the stress test with ab (-n 6000 total requests, -c 100 concurrent requests, -g writes per-request timing data to output.dat)
[root@nfs data]# ab -n 6000 -c 100 -g output.dat http://192.168.2.112:30001/
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
3、Stress test results
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ 
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.2.112 (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Completed 5000 requests
Completed 5500 requests
Completed 6000 requests
Finished 6000 requests

Server Software:        
Server Hostname:        192.168.2.112
Server Port:            30001

Document Path:          /
Document Length:        146 bytes

Concurrency Level:      100
Time taken for tests:   2.504 seconds
Complete requests:      6000
Failed requests:        0
Write errors:           0
Non-2xx responses:      6000
Total transferred:      1644000 bytes
HTML transferred:       876000 bytes
Requests per second:    2396.00 [#/sec] (mean)
Time per request:       41.733 [ms] (mean)
Time per request:       0.417 [ms] (mean, across all concurrent requests)
Transfer rate:          644.00 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    3   4.5      2      25
Processing:     1   40  32.1     39     180
Waiting:        0   39  31.9     37     178
Total:          1   43  32.6     42     185

Percentage of the requests served within a certain time (ms)
  50%     42
  66%     55
  75%     64
  80%     70
  90%     85
  95%    105
  98%    120
  99%    135
 100%    185 (longest request)
4、Observe the metrics with Prometheus and Grafana (four ways to watch)
kubectl top pod: shows Pod resource usage (CPU and memory).
http://192.168.2.117:3000/: the Grafana address, for viewing live monitoring data.
http://192.168.2.117:9090/targets: the Prometheus address, for checking the state of the scrape targets.
https://192.168.2.104:32571/: likely a custom monitoring dashboard.
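While ab is running, I also keep a couple of terminals open to watch the scaling from the kubectl side (plain commands, nothing project-specific):

kubectl top pod        # point-in-time CPU/memory per Pod (requires metrics-server)
kubectl get hpa -w     # watch target utilization and replica counts update live
kubectl get pods -w    # see new replicas being created and terminated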
5、Summary

A total of 6000 requests were sent, and all of them completed successfully.
Requests per second: 2396.00 (as a sanity check, 6000 requests / 2.504 s ≈ 2396 req/s).
Total time for the test: 2.504 seconds.
Mean time per request: 41.733 ms.
Transfer rate: 644.00 KB/s.
This walkthrough showed how to deploy the PHP-Apache service on Kubernetes and scale it automatically with HPA,
and how the ab load-testing tool combined with the monitoring stack (Prometheus and Grafana)
can be used to test and observe the service's performance.

十一、Project takeaways

Through this project I gained a deeper command of Kubernetes, became familiar with services such as Prometheus and NFS, improved my troubleshooting skills, developed a practical understanding of load balancing, high availability, and autoscaling, and deepened my appreciation of operations work.
