1 ConfigMap
1.1 ConfigMap features
- A ConfigMap stores configuration data as key-value pairs.
- The ConfigMap resource provides a way to inject configuration data into Pods.
- It decouples configuration files from images, which keeps images portable and reusable.
- Because ConfigMap data is stored in etcd, its size is limited: a ConfigMap cannot exceed 1MiB.
1.2 ConfigMap use cases
- Populating the values of environment variables
- Setting command-line arguments inside a container
- Populating configuration files in a volume
1.3 Ways to create a ConfigMap
1.3.1 Create from literal values
[root@k8s-master ~]# kubectl create cm lee-config --from-literal fname=timing --from-literal lname=lee
configmap/lee-config created
[root@k8s-master ~]# kubectl describe cm lee-config
Name:         lee-config
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data          # the key-value data is shown here
====
fname:
----
timing
lname:
----
lee

BinaryData
====

Events:  <none>
1.3.2 Create from a file
[root@k8s-master ~]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 114.114.114.114
[root@k8s-master ~]# kubectl create cm lee2-config --from-file /etc/resolv.conf
configmap/lee2-config created
[root@k8s-master ~]# kubectl describe cm lee2-config
Name:         lee2-config
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
resolv.conf:
----
# Generated by NetworkManager
nameserver 114.114.114.114

BinaryData
====

Events:  <none>
1.3.3 Create from a directory
[root@k8s-master ~]# mkdir leeconfig
[root@k8s-master ~]# cp /etc/fstab /etc/rc.d/rc.local leeconfig/
[root@k8s-master ~]# kubectl create cm lee3-config --from-file leeconfig/
configmap/lee3-config created
[root@k8s-master ~]# kubectl describe cm lee3-config
Name:         lee3-config
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
fstab:
----
#
# /etc/fstab
# Created by anaconda on Fri Jul 26 13:04:22 2024
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
UUID=6577c44f-9c1c-44f9-af56-6d6b505fcfa8 /          xfs   defaults   0 0
UUID=eec689b4-73d5-4f47-b999-9a585bb6da1d /boot      xfs   defaults   0 0
UUID=ED00-0E42                            /boot/efi  vfat  umask=0077,shortname=winnt 0 2
#UUID=be2f2006-6072-4c77-83d4-f2ff5e237f9f none      swap  defaults   0 0

rc.local:
----
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.

touch /var/lock/subsys/local
mount /dev/cdrom /rhel9

BinaryData
====

Events:  <none>
1.3.4 Create from a YAML file
[root@k8s-master ~]# kubectl create cm lee4-config --from-literal db_host=172.25.254.100 --from-literal db_port=3306 --dry-run=client -o yaml > lee-config.yaml
[root@k8s-master ~]# vim lee-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: lee4-config
data:
  db_host: 172.25.254.100
  db_port: "3306"
[root@k8s-master ~]# kubectl describe cm lee4-config
Name: lee4-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
db_host:
----
172.25.254.100
db_port:
----
3306
BinaryData
====
Events: <none>
1.3.5 Ways to use a ConfigMap
- Passed to the pod directly as environment variables
- Used in the pod's command line
- Mounted into the pod as a volume
1.3.5.1 Populating environment variables from a ConfigMap
# Map keys from the cm to named variables
[root@k8s-master ~]# vim testpod1.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  containers:
  - image: busyboxplus:latest
    name: testpod
    command:
    - /bin/sh
    - -c
    - env
    env:
    - name: key1
      valueFrom:
        configMapKeyRef:
          name: lee4-config
          key: db_host
    - name: key2
      valueFrom:
        configMapKeyRef:
          name: lee4-config
          key: db_port
  restartPolicy: Never
[root@k8s-master ~]# kubectl apply -f testpod1.yml
pod/testpod created
[root@k8s-master ~]# kubectl logs pods/testpod
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
MYAPP_V1_SERVICE_HOST=10.104.84.65
HOSTNAME=testpod
SHLVL=1
MYAPP_V2_SERVICE_HOST=10.105.246.219
HOME=/
MYAPP_V1_PORT=tcp://10.104.84.65:80
MYAPP_V1_SERVICE_PORT=80
MYAPP_V2_SERVICE_PORT=80
MYAPP_V2_PORT=tcp://10.105.246.219:80
MYAPP_V1_PORT_80_TCP_ADDR=10.104.84.65
MYAPP_V2_PORT_80_TCP_ADDR=10.105.246.219
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
MYAPP_V1_PORT_80_TCP_PORT=80
MYAPP_V2_PORT_80_TCP_PORT=80
MYAPP_V1_PORT_80_TCP_PROTO=tcp
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MYAPP_V2_PORT_80_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
key1=172.25.254.100
key2=3306
MYAPP_V1_PORT_80_TCP=tcp://10.104.84.65:80
MYAPP_V2_PORT_80_TCP=tcp://10.105.246.219:80
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
PWD=/
KUBERNETES_SERVICE_HOST=10.96.0.1

# Map all values in the cm directly to variables
[root@k8s-master ~]# vim testpod2.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  containers:
  - image: busyboxplus:latest
    name: testpod
    command:
    - /bin/sh
    - -c
    - env
    envFrom:
    - configMapRef:
        name: lee4-config
  restartPolicy: Never
# Check the logs
[root@k8s-master ~]# kubectl logs pods/testpod
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
MYAPP_V1_SERVICE_HOST=10.104.84.65
HOSTNAME=testpod
SHLVL=1
MYAPP_V2_SERVICE_HOST=10.105.246.219
HOME=/
db_port=3306
MYAPP_V1_SERVICE_PORT=80
MYAPP_V1_PORT=tcp://10.104.84.65:80
MYAPP_V2_SERVICE_PORT=80
MYAPP_V2_PORT=tcp://10.105.246.219:80
MYAPP_V1_PORT_80_TCP_ADDR=10.104.84.65
MYAPP_V2_PORT_80_TCP_ADDR=10.105.246.219
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
MYAPP_V1_PORT_80_TCP_PORT=80
age=18
MYAPP_V2_PORT_80_TCP_PORT=80
MYAPP_V1_PORT_80_TCP_PROTO=tcp
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
MYAPP_V2_PORT_80_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PROTO=tcp
MYAPP_V1_PORT_80_TCP=tcp://10.104.84.65:80
MYAPP_V2_PORT_80_TCP=tcp://10.105.246.219:80
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
name=lee
PWD=/
KUBERNETES_SERVICE_HOST=10.96.0.1
db_host=172.25.254.100

# Use the variables on the pod's command line
[root@k8s-master ~]# vim testpod3.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  containers:
  - image: busyboxplus:latest
    name: testpod
    command:
    - /bin/sh
    - -c
    - echo ${db_host} ${db_port}    # variables must be referenced with the ${VAR} form
    envFrom:
    - configMapRef:
        name: lee4-config
  restartPolicy: Never
# Check the logs
[root@k8s-master ~]# kubectl logs pods/testpod
172.25.254.100 3306
1.3.5.2 Using a ConfigMap through a volume
[root@k8s-master ~]# vim testpod4.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  containers:
  - image: busyboxplus:latest
    name: testpod
    command:
    - /bin/sh
    - -c
    - cat /config/db_host
    volumeMounts:              # how the volume is mounted
    - name: config-volume      # volume name
      mountPath: /config
  volumes:                     # volume definition
  - name: config-volume        # volume name
    configMap:
      name: lee4-config
  restartPolicy: Never
# Check the logs
[root@k8s-master ~]# kubectl logs testpod
172.25.254.100
1.3.5.3 Populating a Pod's configuration file from a ConfigMap
# Create the configuration file template
[root@k8s-master ~]# vim nginx.conf
server {
    listen 8000;
    server_name _;
    root /usr/share/nginx/html;
    index index.html;
}
# Generate the cm from the template
[root@k8s-master ~]# kubectl create cm nginx-conf --from-file nginx.conf
configmap/nginx-conf created
[root@k8s-master ~]# kubectl describe cm nginx-conf
Name: nginx-conf
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
nginx.conf:
----
server {
    listen 8000;
    server_name _;
    root /usr/share/nginx/html;
    index index.html;
}
BinaryData
====
Events: <none>
# Create the nginx deployment file
[root@k8s-master ~]# kubectl create deployment nginx --image nginx:latest --replicas 1 --dry-run=client -o yaml > nginx.yml

# Configure the volume in nginx.yml
[root@k8s-master ~]# vim nginx.yml
[root@k8s-master ~]# cat nginx.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        volumeMounts:
        - name: config-volume
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: config-volume
        configMap:
          name: nginx-conf
# Test
[root@k8s-master ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE    IP            NODE        NOMINATED NODE   READINESS GATES
nginx-8487c65cfc-cz5hd   1/1     Running   0          3m7s   10.244.2.38   k8s-node2   <none>           <none>
[root@k8s-master ~]# curl 10.244.2.38:8000
1.3.5.4 Changing configuration by hot-updating the cm
[root@k8s-master ~]# kubectl edit cm nginx-conf
apiVersion: v1
data:
  nginx.conf: |
    server {
      listen 8080;                 # port changed to 8080
      server_name _;
      root /usr/share/nginx/html;
      index index.html;
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2024-09-07T02:49:20Z"
  name: nginx-conf
  namespace: default
  resourceVersion: "153055"
  uid: 20bee584-2dab-4bd5-9bcb-78318404fa7a
# Check the configuration file inside the pod
[root@k8s-master ~]# kubectl exec pods/nginx-8487c65cfc-cz5hd -- cat /etc/nginx/conf.d/nginx.conf
server {
    listen 8080;
    server_name _;
    root /usr/share/nginx/html;
    index index.html;
}
[!NOTE]
The updated configuration does not take effect on its own: the mounted file changes, but nginx does not reload it. Delete the pod, the controller rebuilds it, and the new configuration takes effect.
[root@k8s-master ~]# kubectl delete pods nginx-8487c65cfc-cz5hd
pod "nginx-8487c65cfc-cz5hd" deleted
[root@k8s-master ~]# curl 10.244.2.41:8080
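Deleting the pod by hand is one way to force the rebuild; since this pod is managed by a Deployment, a rollout restart is an alternative (standard kubectl, not part of the original transcript) that recreates every pod the deployment owns:

# restart all pods of the deployment so they pick up the updated cm
[root@k8s-master ~]# kubectl rollout restart deployment nginx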
2 Secrets configuration management
2.1 Introduction to Secrets
- The Secret object type holds sensitive information such as passwords, OAuth tokens, and ssh keys.
- Putting sensitive information in a Secret is safer and more flexible than putting it in a Pod definition or a container image.
- A Pod can use a Secret in two ways:
  - as files in a volume mounted into one or more of the pod's containers;
  - by the kubelet, when pulling images for the pod.
- Secret types:
  - Service Account: Kubernetes automatically creates Secrets containing credentials for accessing the API and automatically modifies pods to use them.
  - Opaque: data is stored base64-encoded and can be recovered with base64 --decode (as shown below), so it offers only weak protection.
  - kubernetes.io/dockerconfigjson: stores authentication information for a docker registry.
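How little protection Opaque encoding provides can be demonstrated with nothing but the shell; a minimal sketch (the value bGVl matches the userlist Secret built in 2.2 below):

# base64 is an encoding, not encryption: anyone who can read the Secret can decode it
[root@k8s-master ~]# echo -n lee | base64
bGVl
[root@k8s-master ~]# echo -n bGVl | base64 --decode
lee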
2.2 Creating Secrets
Secrets can be created either from the command line or from a YAML file.
2.2.1 Create from files
[root@k8s-master secrets]# echo -n timinglee > username.txt
[root@k8s-master secrets]# echo -n lee > password.txt
[root@k8s-master secrets]# kubectl create secret generic userlist --from-file username.txt --from-file password.txt
secret/userlist created
[root@k8s-master secrets]# kubectl get secrets userlist -o yaml
Writing the YAML file:
[root@k8s-master secrets]# echo -n timinglee | base64
dGltaW5nbGVl
[root@k8s-master secrets]# echo -n lee | base64
bGVl
[root@k8s-master secrets]# kubectl create secret generic userlist --dry-run=client -o yaml > userlist.yml
[root@k8s-master secrets]# vim userlist.yml
apiVersion: v1
kind: Secret
metadata:
  creationTimestamp: null
  name: userlist
type: Opaque
data:
  username: dGltaW5nbGVl
  password: bGVl
[root@k8s-master secrets]# kubectl apply -f userlist.yml
secret/userlist created
[root@k8s-master secrets]# kubectl describe secrets userlist
Name:         userlist
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
password:  3 bytes
username:  9 bytes
2.3 How to use a Secret
2.3.1 Mounting a Secret into a volume
[root@k8s-master secrets]# kubectl run nginx --image nginx --dry-run=client -o yaml > pod1.yaml

# Map to a fixed path
[root@k8s-master secrets]# vim pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - name: secrets
      mountPath: /secret
      readOnly: true
  volumes:
  - name: secrets
    secret:
      secretName: userlist
[root@k8s-master secrets]# kubectl apply -f pod1.yaml
pod/nginx created
[root@k8s-master secrets]# kubectl exec pods/nginx -it -- /bin/bash
root@nginx:/# cat /secret/
cat: /secret/: Is a directory
root@nginx:/# cd /secret/
root@nginx:/secret# ls
password  username
root@nginx:/secret# cat password
lee
root@nginx:/secret# cat username
timinglee
2.3.2 Mapping secret keys to a specified path
# Map to a specified path
[root@k8s-master secrets]# vim pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx1
  name: nginx1
spec:
  containers:
  - image: nginx
    name: nginx1
    volumeMounts:
    - name: secrets
      mountPath: /secret
      readOnly: true
  volumes:
  - name: secrets
    secret:
      secretName: userlist
      items:
      - key: username
        path: my-users/username
[root@k8s-master secrets]# kubectl apply -f pod2.yaml
pod/nginx1 created
[root@k8s-master secrets]# kubectl exec pods/nginx1 -it -- /bin/bash
root@nginx1:/# cd secret/
root@nginx1:/secret# ls
my-users
root@nginx1:/secret# cd my-users
root@nginx1:/secret/my-users# ls
username
root@nginx1:/secret/my-users# cat username
2.3.3 Setting a Secret as environment variables
[root@k8s-master secrets]# vim pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - image: busybox
    name: busybox
    command:
    - /bin/sh
    - -c
    - env
    env:
    - name: USERNAME
      valueFrom:
        secretKeyRef:
          name: userlist
          key: username
    - name: PASS
      valueFrom:
        secretKeyRef:
          name: userlist
          key: password
  restartPolicy: Never
[root@k8s-master secrets]# kubectl apply -f pod3.yaml
pod/busybox created
[root@k8s-master secrets]# kubectl logs pods/busybox
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
HOSTNAME=busybox
MYAPP_V1_SERVICE_HOST=10.104.84.65
MYAPP_V2_SERVICE_HOST=10.105.246.219
SHLVL=1
HOME=/root
MYAPP_V1_SERVICE_PORT=80
MYAPP_V1_PORT=tcp://10.104.84.65:80
MYAPP_V2_SERVICE_PORT=80
MYAPP_V2_PORT=tcp://10.105.246.219:80
MYAPP_V1_PORT_80_TCP_ADDR=10.104.84.65
USERNAME=timinglee
MYAPP_V2_PORT_80_TCP_ADDR=10.105.246.219
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
MYAPP_V1_PORT_80_TCP_PORT=80
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MYAPP_V2_PORT_80_TCP_PORT=80
MYAPP_V1_PORT_80_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
MYAPP_V2_PORT_80_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PROTO=tcp
MYAPP_V1_PORT_80_TCP=tcp://10.104.84.65:80
MYAPP_V2_PORT_80_TCP=tcp://10.105.246.219:80
PASS=lee
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_HOST=10.96.0.1
PWD=/
2.3.4 Storing docker registry credentials
Set up a private registry and push an image:
# Log in to the registry
[root@k8s-master secrets]# docker login reg.timinglee.org
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credential-stores

Login Succeeded

# Push the image
[root@k8s-master secrets]# docker tag timinglee/game2048:latest reg.timinglee.org/timinglee/game2048:latest
[root@k8s-master secrets]# docker push reg.timinglee.org/timinglee/game2048:latest
The push refers to repository [reg.timinglee.org/timinglee/game2048]
88fca8ae768a: Pushed
6d7504772167: Pushed
192e9fad2abc: Pushed
36e9226e74f8: Pushed
011b303988d2: Pushed
latest: digest: sha256:8a34fb9cb168c420604b6e5d32ca6d412cb0d533a826b313b190535c03fe9390 size: 1364

# Create the secret used for docker authentication
[root@k8s-master secrets]# kubectl create secret docker-registry docker-auth --docker-server reg.timinglee.org --docker-username admin --docker-password lee --docker-email timinglee@timinglee.org
secret/docker-auth created
[root@k8s-master secrets]# vim pod3.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: game2048
  name: game2048
spec:
  containers:
  - image: reg.timinglee.org/timinglee/game2048:latest
    name: game2048
  imagePullSecrets:        # without docker authentication the image cannot be pulled
  - name: docker-auth
[root@k8s-master secrets]# kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
game2048   1/1     Running   0          4s
3 Volumes configuration management
- Files in a container are stored on disk only temporarily, which causes problems for applications running in containers:
  - When a container crashes, the kubelet restarts it, but the files in the container are lost because the container is rebuilt in a clean state.
  - When multiple containers run in the same Pod, they often need to share files.
- A Kubernetes volume has an explicit lifetime, the same as the Pod that uses it:
  - A volume outlives any individual container running in the Pod, so its data is preserved across container restarts.
  - When the Pod ceases to exist, the volume ceases to exist as well.
- Kubernetes supports many types of volumes, and a Pod can use any number of them simultaneously.
- Volumes cannot be mounted onto other volumes, nor can they have hard links to other volumes. Each container in the Pod must independently specify where to mount each volume.
3.1 Volume types supported by Kubernetes
Official docs: Volumes | Kubernetes
The volume types supported by k8s include:
- awsElasticBlockStore, azureDisk, azureFile, cephfs, cinder, configMap, csi
- downwardAPI, emptyDir, fc (fibre channel), flexVolume, flocker
- gcePersistentDisk, gitRepo (deprecated), glusterfs, hostPath, iscsi, local
- nfs, persistentVolumeClaim, projected, portworxVolume, quobyte, rbd
- scaleIO, secret, storageos, vsphereVolume
3.2 emptyDir volumes
Function:
When a Pod is assigned to a node, an emptyDir volume is created first, and it exists as long as the Pod runs on that node. The volume is initially empty. The containers in the Pod may mount the emptyDir volume at the same or at different paths, but they can all read and write the same files in it. When the Pod is removed from the node for any reason, the data in the emptyDir volume is deleted permanently.
Use cases for emptyDir:
- Scratch space, such as for a disk-based merge sort.
- Checkpointing a long computation so it can resume from its pre-crash state.
- Holding files fetched by a content-manager container while a web server container serves the data.
Example:
[root@k8s-master volumes]# vim pod1.yml
apiVersion: v1
kind: Pod
metadata:
  name: vol1
spec:
  containers:
  - image: busyboxplus:latest
    name: vm1
    command:
    - /bin/sh
    - -c
    - sleep 30000000
    volumeMounts:
    - mountPath: /cache
      name: cache-vol
  - image: nginx:latest
    name: vm2
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: cache-vol
  volumes:
  - name: cache-vol
    emptyDir:
      medium: Memory
      sizeLimit: 100Mi
[root@k8s-master volumes]# kubectl apply -f pod1.yml

# Check how the volume is used in the pod
[root@k8s-master volumes]# kubectl describe pods vol1

# Test the effect
[root@k8s-master volumes]# kubectl exec -it pods/vol1 -c vm1 -- /bin/sh
/ # cd /cache/
/cache # ls
/cache # curl localhost
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.27.1</center>
</body>
</html>
/cache # echo timinglee > index.html
/cache # curl localhost
timinglee
/cache # dd if=/dev/zero of=bigfile bs=1M count=101
dd: writing 'bigfile': No space left on device
101+0 records in
99+1 records out
3.3 hostPath volumes
Function:
A hostPath volume mounts a file or directory from the host node's filesystem into your Pod, and it is not deleted when the pod shuts down.
Some uses of hostPath:
- Running a container that needs access to Docker internals: mount /var/lib/docker.
- Running cAdvisor (monitoring) in a container: mount /sys via hostPath.
- Letting a Pod specify whether a given hostPath should exist before the Pod runs, whether it should be created, and in what form it should exist.
Security risks of hostPath:
- Pods with identical configuration (for example created from a podTemplate) can behave differently on different nodes because the files on each node differ.
- When Kubernetes adds resource-aware scheduling as planned, that scheduling will not be able to account for the resources a hostPath uses.
- Files or directories created on the underlying host are writable only by root, so you either run the process as root in a privileged container or adjust the file permissions on the host so the container can write to the hostPath volume.
Example:
[root@k8s-master volumes]# vim pod2.yml
apiVersion: v1
kind: Pod
metadata:
  name: vol1
spec:
  containers:
  - image: nginx:latest
    name: vm1
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: cache-vol
  volumes:
  - name: cache-vol
    hostPath:
      path: /data
      type: DirectoryOrCreate    # create the /data directory automatically if it does not exist
# Test:
[root@k8s-master volumes]# kubectl apply -f pod2.yml
pod/vol1 created
[root@k8s-master volumes]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
vol1   1/1     Running   0          10s   10.244.2.48   k8s-node2   <none>           <none>
[root@k8s-master volumes]# curl 10.244.2.48
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.27.1</center>
</body>
</html>
[root@k8s-node2 ~]# echo timinglee > /data/index.html
[root@k8s-master volumes]# curl 10.244.2.48
timinglee

# The hostPath is not cleaned up after the pod is deleted
[root@k8s-master volumes]# kubectl delete -f pod2.yml
pod "vol1" deleted
[root@k8s-node2 ~]# ls /data/
index.html
3.4 NFS volumes
An NFS volume mounts a directory from an existing NFS server into a Pod in Kubernetes. This is very useful for sharing data between multiple Pods or for persisting data.
For example, when several containers need access to the same dataset, or data from containers must persist to external storage, an NFS volume offers a convenient solution.
3.4.1 Deploy an NFS server and install nfs-utils on all k8s nodes
# Deploy the NFS server
[root@reg ~]# dnf install nfs-utils -y
[root@reg ~]# systemctl enable --now nfs-server.service
[root@reg ~]# vim /etc/exports
/nfsdata   *(rw,sync,no_root_squash)
[root@reg ~]# exportfs -rv
exporting *:/nfsdata
[root@reg ~]# showmount -e
Export list for reg.timinglee.org:
/nfsdata *

# Install nfs-utils on all k8s nodes
[root@k8s-master & node1 & node2 ~]# dnf install nfs-utils -y
3.4.2 Deploying an NFS volume
[root@k8s-master volumes]# vim pod3.yml
apiVersion: v1
kind: Pod
metadata:
  name: vol1
spec:
  containers:
  - image: nginx:latest
    name: vm1
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: cache-vol
  volumes:
  - name: cache-vol
    nfs:
      server: 172.25.254.250
      path: /nfsdata
[root@k8s-master volumes]# kubectl apply -f pod3.yml
pod/vol1 created

# Test
[root@k8s-master volumes]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE    IP            NODE        NOMINATED NODE   READINESS GATES
vol1   1/1     Running   0          100s   10.244.2.50   k8s-node2   <none>           <none>
[root@k8s-master volumes]# curl 10.244.2.50
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.27.1</center>
</body>
</html>

# On the NFS server
[root@reg ~]# echo timinglee > /nfsdata/index.html
[root@k8s-master volumes]# curl 10.244.2.50
timinglee
3.5 PersistentVolume
3.5.1 Static persistent volumes (pv) and persistent volume claims (pvc)
PersistentVolume (PV)
- A pv is a piece of network storage in the cluster provided by an administrator.
- A PV is also a resource in the cluster, a kind of volume plugin, but its lifecycle is independent of the Pods that use it.
- The PV API object captures the implementation details of NFS, iSCSI, or other cloud storage systems.
- pvs are provisioned in two ways, statically and dynamically:
  - Static PVs: the cluster administrator creates a number of PVs that carry the details of the real storage; they exist in the Kubernetes API and are available for consumption.
  - Dynamic PVs: when none of the administrator's static PVs match a user's PVC, the cluster may try to provision a volume specifically for that PVC. This provisioning is based on StorageClass.
PersistentVolumeClaim (PVC)
- A PVC is a user's request for storage.
- It is analogous to a Pod: Pods consume node resources, while PVCs consume PV resources.
- Just as a Pod can request specific resources (such as CPU and memory), a PVC can request a specific size and access modes for persistent storage.
- The binding between a PVC and a PV is a one-to-one mapping; if no matching PV is found, the PVC remains unbound indefinitely.
Volume access modes
- ReadWriteOnce -- the volume can be mounted read-write by a single node
- ReadOnlyMany -- the volume can be mounted read-only by many nodes
- ReadWriteMany -- the volume can be mounted read-write by many nodes
- On the command line the access modes are abbreviated as:
  - RWO - ReadWriteOnce
  - ROX - ReadOnlyMany
  - RWX - ReadWriteMany
Volume reclaim policies
- Retain: keep the volume; it must be reclaimed manually
- Recycle: recycle the volume by automatically deleting its data (deprecated in current versions)
- Delete: delete the associated storage asset, such as an AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume
Note:
[!NOTE]
Only NFS and HostPath support recycling.
AWS EBS, GCE PD, Azure Disk, and OpenStack Cinder volumes support the delete operation.
Volume states
- Available: the volume is a free resource and is not yet bound to any claim
- Bound: the volume is bound to a claim
- Released: the bound claim has been deleted, but the associated storage resource has not yet been reclaimed by the cluster
- Failed: the volume's automatic reclamation failed
Static pv example:
# Create the test directories on the NFS server
[root@reg ~]# mkdir /data/pv{1..3}

# Write the yml file that creates the pvs; a pv is a cluster-level resource and does not belong to any namespace
[root@k8s-master pvc]# vim pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv1
    server: 172.25.254.250
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv2
spec:
  capacity:
    storage: 15Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv2
    server: 172.25.254.250
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv3
spec:
  capacity:
    storage: 25Gi
  volumeMode: Filesystem
  accessModes:
  - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv3
    server: 172.25.254.250
[root@k8s-master pvc]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pv1    5Gi        RWO            Retain           Available           nfs            <unset>                          4m50s
pv2    15Gi       RWX            Retain           Available           nfs            <unset>                          4m50s
pv3    25Gi       ROX            Retain           Available           nfs            <unset>                          4m50s

# Create the pvcs; a pvc is a request to use a pv and must be in the same namespace as the pod
[root@k8s-master pvc]# vim pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  storageClassName: nfs
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc2
spec:
  storageClassName: nfs
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc3
spec:
  storageClassName: nfs
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 15Gi
[root@k8s-master pvc]# kubectl get pvc
NAME   STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
pvc1   Bound    pv1      5Gi        RWO            nfs            <unset>                 5s
pvc2   Bound    pv2      15Gi       RWX            nfs            <unset>                 4s
pvc3   Bound    pv3      25Gi       ROX            nfs            <unset>                 4s

# The pvcs cannot be used from another namespace
[root@k8s-master pvc]# kubectl -n kube-system get pvc
No resources found in kube-system namespace.
Using a pvc in a pod
[root@k8s-master pvc]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: timinglee
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: vol1
  volumes:
  - name: vol1
    persistentVolumeClaim:
      claimName: pvc1
[root@k8s-master pvc]# kubectl get pods -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
timinglee   1/1     Running   0          83s   10.244.2.54   k8s-node2   <none>           <none>
[root@k8s-master pvc]# kubectl exec -it pods/timinglee -- /bin/bash
root@timinglee:/# curl localhost
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.27.1</center>
</body>
</html>
root@timinglee:/# cd /usr/share/nginx/
root@timinglee:/usr/share/nginx# ls
html
root@timinglee:/usr/share/nginx# cd html/
root@timinglee:/usr/share/nginx/html# ls

[root@reg ~]# echo timinglee > /data/pv1/index.html

[root@k8s-master pvc]# kubectl exec -it pods/timinglee -- /bin/bash
root@timinglee:/# cd /usr/share/nginx/html/
root@timinglee:/usr/share/nginx/html# ls
index.html
4 StorageClass
Official site: GitHub - kubernetes-sigs/nfs-subdir-external-provisioner: Dynamic sub-dir volume provisioner on a remote NFS server.
4.1 StorageClass overview
- A StorageClass provides a way to describe classes of storage; different classes may map to different quality-of-service levels, backup policies, or other arbitrary policies.
- Each StorageClass contains the provisioner, parameters, and reclaimPolicy fields, which are used when the StorageClass needs to dynamically provision a PersistentVolume.
4.2 StorageClass attributes
Attribute reference: Storage Classes | Kubernetes
Provisioner: decides which volume plugin is used to provision PVs; this field is required. Either an internal or an external provisioner can be specified. External provisioner code lives at kubernetes-incubator/external-storage and covers NFS, Ceph, and more.
Reclaim Policy: the reclaimPolicy field sets the reclaim policy of the PersistentVolumes the class creates; it is either Delete or Retain, and defaults to Delete when unspecified (a sketch combining the three fields follows).
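A minimal sketch of how the three fields fit together. The provisioner name matches the NFS external provisioner deployed in 4.4 below; the class name example-sc and the Retain policy are hypothetical, chosen only to illustrate overriding the Delete default:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-sc                 # hypothetical class name
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
reclaimPolicy: Retain              # Delete would be the default if omitted
parameters:
  archiveOnDelete: "true"          # provisioner-specific parameter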
4.3 The NFS Client Provisioner
Source: GitHub - kubernetes-sigs/nfs-subdir-external-provisioner: Dynamic sub-dir volume provisioner on a remote NFS server.
- NFS Client Provisioner is an automatic provisioner that uses NFS as its storage and automatically creates PVs for PVCs. It does not provide NFS storage itself; an existing NFS server is required.
- PVs are provisioned on the NFS server as directories named ${namespace}-${pvcName}-${pvName}.
- When a PV is reclaimed, the directory on the NFS server is renamed archived-${namespace}-${pvcName}-${pvName}.
4.4 Deploying the NFS Client Provisioner
4.4.1 Create the sa and grant permissions
[root@k8s-master storageclass]# vim rbac.yml
apiVersion: v1
kind: Namespace
metadata:
  name: nfs-client-provisioner
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: nfs-client-provisioner
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs-client-provisioner
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: nfs-client-provisioner
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
# Apply the rbac config and check the result
[root@k8s-master storageclass]# kubectl apply -f rbac.yml
namespace/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
[root@k8s-master storageclass]# kubectl -n nfs-client-provisioner get sa
NAME                     SECRETS   AGE
default                  0         14s
nfs-client-provisioner   0         14s
4.4.2 Deploy the application
[root@k8s-master storageclass]# vim deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: sig-storage/nfs-subdir-external-provisioner:v4.0.2
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: k8s-sigs.io/nfs-subdir-external-provisioner
        - name: NFS_SERVER
          value: 172.25.254.250
        - name: NFS_PATH
          value: /nfsdata
      volumes:
      - name: nfs-client-root
        nfs:
          server: 172.25.254.250
          path: /nfsdata
[root@k8s-master storageclass]# kubectl -n nfs-client-provisioner get deployments.apps nfs-client-provisioner
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
nfs-client-provisioner   1/1     1            1           86s
4.4.3 Create the storage class
[root@k8s-master storageclass]# vim class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"

[root@k8s-master storageclass]# kubectl apply -f class.yaml
storageclass.storage.k8s.io/nfs-client created
[root@k8s-master storageclass]# kubectl get storageclasses.storage.k8s.io
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-client k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 9s
4.4.4 Create a pvc
[root@k8s-master storageclass]# vim pvc.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1G
[root@k8s-master storageclass]# kubectl apply -f pvc.yml
persistentvolumeclaim/test-claim created
[root@k8s-master storageclass]# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
test-claim   Bound    pvc-7782a006-381a-440a-addb-e9d659b8fe0b   1Gi        RWX            nfs-client     <unset>                 21m
4.4.5 Create a test pod
[root@k8s-master storageclass]# vim pod.yml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox
    command:
    - "/bin/sh"
    args:
    - "-c"
    - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
    - name: nfs-pvc
      mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
  - name: nfs-pvc
    persistentVolumeClaim:
      claimName: test-claim
[root@k8s-master storageclass]# kubectl apply -f pod.yml
[root@reg ~]# ls /data/default-test-claim-pvc-b1aef9cc-4be9-4d2a-8c5e-0fe7716247e2/
SUCCESS
4.4.6 Set the default storage class
- When no default storage class is set, every pvc must name the class it uses.
- Once a default storage class is set, pvcs can be created without specifying storageClassName.
# Define several pvcs at once
[root@k8s-master pvc]# vim pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  storageClassName: nfs-client
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc2
spec:
  storageClassName: nfs-client
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc3
spec:
  storageClassName: nfs-client
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 15Gi
[root@k8s-master pvc]# kubectl apply -f pvc.yml
persistentvolumeclaim/pvc1 created
persistentvolumeclaim/pvc2 created
persistentvolumeclaim/pvc3 created
[root@k8s-master pvc]# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
pvc1         Bound    pvc-25a3c8c5-2797-4240-9270-5c51caa211b8   1Gi        RWO            nfs-client     <unset>                 4s
pvc2         Bound    pvc-c7f34d1c-c8d3-4e7f-b255-e29297865353   10Gi       RWX            nfs-client     <unset>                 4s
pvc3         Bound    pvc-5f1086ad-2999-487d-88d2-7104e3e9b221   15Gi       ROX            nfs-client     <unset>                 4s
test-claim   Bound    pvc-b1aef9cc-4be9-4d2a-8c5e-0fe7716247e2   1Gi        RWX            nfs-client     <unset>                 9m9s
Setting the default storage class
[root@k8s-master storageclass]# kubectl edit sc nfs-client
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"nfs-client"},"parameters":{"archiveOnDelete":"false"},"provisioner":"k8s-sigs.io/nfs-subdir-external-provisioner"}
    storageclass.kubernetes.io/is-default-class: "true"    # mark as the default storage class
  creationTimestamp: "2024-09-07T13:49:10Z"
  name: nfs-client
  resourceVersion: "218198"
  uid: 9eb1e144-3051-4f16-bdec-30c472358028
parameters:
  archiveOnDelete: "false"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate
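The same annotation can also be set non-interactively with kubectl patch instead of kubectl edit; an equivalent one-liner, assuming the class is still named nfs-client:

[root@k8s-master storageclass]# kubectl patch storageclass nfs-client -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'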
# Test: the storageClassName parameter is not specified
[root@k8s-master storageclass]# vim pvc.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

[root@k8s-master storageclass]# kubectl apply -f pvc.yml
persistentvolumeclaim/test-claim created
[root@k8s-master storageclass]# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
test-claim   Bound    pvc-b96c6983-5a4f-440d-99ec-45c99637f9b5   1Gi        RWX            nfs-client     <unset>                 7s
5 StatefulSet controller
5.1 Features
- StatefulSet is designed to solve the problem of managing stateful services.
- StatefulSet abstracts application state into two cases:
  - Topology state: application instances must start in a certain order, and a newly created Pod must have the same network identity as the Pod it replaces.
  - Storage state: multiple instances of the application are each bound to different storage data.
- StatefulSet numbers all of its Pods with the rule $(statefulset name)-$(ordinal), starting from 0.
- When a Pod is deleted and rebuilt, its network identity does not change: the Pod's topology state is pinned down by "name + ordinal", and each Pod gets a fixed, unique access point through its own DNS record (see the sketch after this list).
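Those per-Pod DNS records can be checked directly. A minimal sketch, assuming the web StatefulSet and nginx-svc headless Service built in 5.3, the default namespace, and the default cluster domain cluster.local:

# resolve one replica by its stable short record, $(pod).$(headless service)
/ # nslookup web-0.nginx-svc
# or by the fully qualified name
/ # nslookup web-0.nginx-svc.default.svc.cluster.local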
5.2 Components of a StatefulSet
- Headless Service: defines the pods' network identity and generates resolvable DNS records.
- volumeClaimTemplates: creates the pvcs, specifying pvc name and size; the pvcs are created automatically and supplied by a storage class.
- StatefulSet: manages the pods.
5.3 How to build one
# Create the headless service
[root@k8s-master statefulset]# vim headless.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
[root@k8s-master statefulset]# kubectl apply -f headless.yml

# Create the statefulset
[root@k8s-master statefulset]# vim statefulset.yml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx-svc"
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      storageClassName: nfs-client
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
[root@k8s-master statefulset]# kubectl apply -f statefulset.yml
statefulset.apps/web configured
[root@k8s-master statefulset]# kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          3m26s
web-1   1/1     Running   0          3m22s
web-2   1/1     Running   0          3m18s
[root@reg nfsdata]# ls /nfsdata/
default-test-claim-pvc-34b3d968-6c2b-42f9-bbc3-d7a7a02dcbac
default-www-web-0-pvc-0390b736-477b-4263-9373-a53d20cc8f9f
default-www-web-1-pvc-a5ff1a7b-fea5-4e77-afd4-cdccedbc278c
default-www-web-2-pvc-83eff88b-4ae1-4a8a-b042-8899677ae854
5.4 Testing:
# Create an index.html for each pod
[root@reg nfsdata]# echo web-0 > default-www-web-0-pvc-0390b736-477b-4263-9373-a53d20cc8f9f/index.html
[root@reg nfsdata]# echo web-1 > default-www-web-1-pvc-a5ff1a7b-fea5-4e77-afd4-cdccedbc278c/index.html
[root@reg nfsdata]# echo web-2 > default-www-web-2-pvc-83eff88b-4ae1-4a8a-b042-8899677ae854/index.html

# Create a test pod and access web-0 through web-2
[root@k8s-master statefulset]# kubectl run -it testpod --image busyboxplus
/ # curl web-0.nginx-svc
web-0
/ # curl web-1.nginx-svc
web-1
/ # curl web-2.nginx-svc
web-2

# Delete and recreate the statefulset
[root@k8s-master statefulset]# kubectl delete -f statefulset.yml
statefulset.apps "web" deleted
[root@k8s-master statefulset]# kubectl apply -f statefulset.yml
statefulset.apps/web created

# Access is unchanged
[root@k8s-master statefulset]# kubectl attach testpod -c testpod -i -t
If you don't see a command prompt, try pressing enter.
/ # curl web-0.nginx-svc
web-0
/ # curl web-1.nginx-svc
web-1
/ # curl web-2.nginx-svc
web-2
5.5 Scaling a statefulset
First, before scaling a StatefulSet, make sure the application can actually be scaled.
Change the replica count with a command:
$ kubectl scale statefulsets <stateful-set-name> --replicas=<new-replicas>
Change the replica count by editing the configuration:
$ kubectl edit statefulsets.apps <stateful-set-name>
Ordered reclamation of a statefulset:
[root@k8s-master statefulset]# kubectl scale statefulset web --replicas 0
statefulset.apps/web scaled
[root@k8s-master statefulset]# kubectl delete -f statefulset.yml
statefulset.apps "web" deleted
[root@k8s-master statefulset]# kubectl delete pvc --all
persistentvolumeclaim "test-claim" deleted
persistentvolumeclaim "www-web-0" deleted
persistentvolumeclaim "www-web-1" deleted
persistentvolumeclaim "www-web-2" deleted
persistentvolumeclaim "www-web-3" deleted
persistentvolumeclaim "www-web-4" deleted
persistentvolumeclaim "www-web-5" deleted