
Pod Lifecycle in Kubernetes: Restart Policies

2025/2/5 11:55:52 · Source: https://blog.csdn.net/m0_66011019/article/details/145154100

The three policies

A Pod in Kubernetes supports the following three restart policies:

  • Always

    • Description: restart the container automatically, no matter why it exited.

    • Default: if no restartPolicy is specified, Kubernetes uses Always.

  • OnFailure

    • Description: restart the container only when it terminates with a non-zero exit code.

    • Condition: any non-zero exit code triggers a restart; a container that exits with code 0 is left stopped.

  • Never

    • Description: never restart the container, no matter why it exited.
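In the manifest, the policy is the Pod-level spec.restartPolicy field and applies to every container in the Pod. A minimal sketch (the Pod name, image, and command here are illustrative, not from the demo below):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restart-demo            # illustrative name
spec:
  restartPolicy: OnFailure      # Always (default) | OnFailure | Never
  containers:
  - name: app
    image: busybox:1.36
    # the container exits non-zero after 5s, so OnFailure restarts it
    command: ["sh", "-c", "sleep 5; exit 1"]
```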

Restart back-off

  • First restart: a container that needs restarting is restarted immediately.

  • Subsequent restarts: for each further restart the kubelet introduces a delay, starting at 10 seconds and doubling each time.

  • Delay sequence: 10s, 20s, 40s, 80s, 160s, after which the cap is reached.

  • Maximum delay: 300s (5 minutes); once a container has run for 10 minutes without problems, the kubelet resets the back-off timer.
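The capped doubling described above can be sketched with shell arithmetic (a sketch assuming the documented base of 10s, factor of 2, and 300s cap; this timing is internal to the kubelet, not something you configure in the manifest):

```shell
# Reproduce the kubelet-style restart back-off sequence:
# start at 10s, double after each restart, cap at 300s.
delay=10
for attempt in 1 2 3 4 5 6 7; do
  echo "restart ${attempt}: wait ${delay}s"   # prints 10s 20s 40s 80s 160s 300s 300s
  delay=$((delay * 2))
  if [ "$delay" -gt 300 ]; then delay=300; fi
done
```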

Never

With restartPolicy: Never, the container is not restarted: the liveness probe on /hello fails (the path returns 404), the kubelet stops the container, and the Pod ends up Completed with a restart count of 0.

[root@k8s-master ~]# vim pod-restartpolicy.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-restartpolicy
  namespace: test
spec:
  restartPolicy: Never   # never restart the container, no matter why it exited
  containers:
  - name: nginx
    image: nginx:1.17.1
    ports:
    - containerPort: 80
      name: nginx-port
    livenessProbe:
      httpGet:
        scheme: HTTP
        port: 80
        path: /hello
[root@k8s-master ~]# kubectl apply -f pod-restartpolicy.yaml 
Error from server (NotFound): error when creating "pod-restartpolicy.yaml": namespaces "test" not found
[root@k8s-master ~]# kubectl create ns test
namespace/test created
[root@k8s-master ~]# kubectl apply -f pod-restartpolicy.yaml 
pod/pod-restartpolicy created
[root@k8s-master ~]# kubectl describe pod pod-restartpolicy -n test
Name:         pod-restartpolicy
Namespace:    test
Priority:     0
Node:         k8s-node2/192.168.58.233
Start Time:   Tue, 14 Jan 2025 20:45:51 -0500
Labels:       <none>
Annotations:  cni.projectcalico.org/containerID: 77f405fd3543f24391d29b1f878fff24dda621f6583dd7df8e7020da258b9f4d
              cni.projectcalico.org/podIP: 10.244.169.129/32
              cni.projectcalico.org/podIPs: 10.244.169.129/32
Status:       Pending
IP:
IPs:          <none>
Containers:
  nginx:
    Container ID:
    Image:          nginx:1.17.1
    Image ID:
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:80/hello delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sf6xn (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-sf6xn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  22s   default-scheduler  Successfully assigned test/pod-restartpolicy to k8s-node2
  Normal  Pulling    20s   kubelet            Pulling image "nginx:1.17.1"
[root@k8s-master ~]# kubectl describe pod pod-restartpolicy -n test
Name:         pod-restartpolicy
Namespace:    test
Priority:     0
Node:         k8s-node2/192.168.58.233
Start Time:   Tue, 14 Jan 2025 20:45:51 -0500
Labels:       <none>
Annotations:  cni.projectcalico.org/containerID: 77f405fd3543f24391d29b1f878fff24dda621f6583dd7df8e7020da258b9f4d
              cni.projectcalico.org/podIP:
              cni.projectcalico.org/podIPs:
Status:       Running
IP:           10.244.169.129
IPs:
  IP:  10.244.169.129
Containers:
  nginx:
    Container ID:   docker://19f7e2ca6a7f4a9487b75fc5dee7d85cf2baef4547ae8b6f1d68f8dfd5d7bb1a
    Image:          nginx:1.17.1
    Image ID:       docker-pullable://nginx@sha256:b4b9b3eee194703fc2fa8afa5b7510c77ae70cfba567af1376a573a967c03dbb
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Tue, 14 Jan 2025 20:46:18 -0500
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:80/hello delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sf6xn (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-sf6xn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  50s               default-scheduler  Successfully assigned test/pod-restartpolicy to k8s-node2
  Normal   Pulling    48s               kubelet            Pulling image "nginx:1.17.1"
  Normal   Pulled     24s               kubelet            Successfully pulled image "nginx:1.17.1" in 24.531177953s
  Normal   Created    23s               kubelet            Created container nginx
  Normal   Started    23s               kubelet            Started container nginx
  Warning  Unhealthy  0s (x3 over 20s)  kubelet            Liveness probe failed: HTTP probe failed with statuscode: 404
  Normal   Killing    0s                kubelet            Stopping container nginx
[root@k8s-master ~]# kubectl get pod pod-restartpolicy -n test
NAME                READY   STATUS      RESTARTS   AGE
pod-restartpolicy   0/1     Completed   0          53s

Always

With restartPolicy: Always, the container is restarted every time the liveness probe kills it, so the restart count keeps climbing.

[root@k8s-master ~]# vim pod-restartpolicy.yaml
^C[root@k8s-master ~]# cat pod-restartpolicy.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-restartpolicy
  namespace: test
spec:
  restartPolicy: Always   # always restart the container, no matter why it exited
  containers:
  - name: nginx
    image: nginx:1.17.1
    ports:
    - containerPort: 80
      name: nginx-port
    livenessProbe:
      httpGet:
        scheme: HTTP
        port: 80
        path: /hello
[root@k8s-master ~]# kubectl delete pod-restartpolicy.yaml
error: the server doesn't have a resource type "pod-restartpolicy"
[root@k8s-master ~]# kubectl delete -f pod-restartpolicy.yaml 
pod "pod-restartpolicy" deleted
[root@k8s-master ~]# kubectl apply -f pod-restartpolicy.yaml 
pod/pod-restartpolicy created
[root@k8s-master ~]# kubectl get pod pod-restartpolicy -n test -w
NAME                READY   STATUS    RESTARTS   AGE
pod-restartpolicy   1/1     Running   0          5s
pod-restartpolicy   1/1     Running   1          31s
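For completeness, OnFailure, the one policy not demoed above, can be exercised with the same manifest by changing a single field (a sketch; the probe still points at the deliberately failing /hello path):

```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-restartpolicy
  namespace: test
spec:
  restartPolicy: OnFailure   # restart only on a non-zero exit code
  containers:
  - name: nginx
    image: nginx:1.17.1
    livenessProbe:
      httpGet:
        scheme: HTTP
        port: 80
        path: /hello
```

A container killed because its liveness probe failed terminates with a non-zero exit code, so under OnFailure the kubelet restarts it just as Always would; the observable difference between the two appears only when a container exits with code 0.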

 

Common Pod phase transitions

| Containers in Pod | Pod phase | Event                            | Always  | OnFailure | Never     |
|-------------------|-----------|----------------------------------|---------|-----------|-----------|
| one container     | Running   | container exits successfully     | Running | Succeeded | Succeeded |
| one container     | Running   | container exits with failure     | Running | Running   | Failed    |
| two containers    | Running   | one container exits with failure | Running | Running   | Running   |
| two containers    | Running   | a container is OOM-killed        | Running | Running   | Failed    |

Notes:

  • Container exits successfully (exit code 0): Always restarts it immediately, so the Pod stays Running; under OnFailure and Never the container is not restarted and the Pod phase becomes Succeeded.

  • Container exits with failure (non-zero exit code): Always and OnFailure restart it, so the Pod stays Running; under Never it is not restarted and the Pod phase becomes Failed.

  • One of two containers exits with failure: the Pod phase stays Running under all three policies; under Never the failed container remains Terminated while the other container keeps running.

  • A container is killed by the OOM killer: this counts as a failure (non-zero exit), so Always and OnFailure restart it; under Never the container is not restarted and the Pod phase becomes Failed.
