Kubernetes ingress-controller CrashLoopBackOff Error

I have set up a Kubernetes (1.17.11) cluster on Azure, and I have installed the nginx-ingress-controller with:

helm install nginx-ingress --namespace z1 stable/nginx-ingress --set controller.publishService.enabled=true

The setup seems fine and it is running, but every now and then it fails, and when I check the running pods (kubectl get pod -n z1) I see multiple restarts for the ingress controller pod.

I thought there might be a lot of load, so increasing the replicas seemed like a good idea, and I ran helm upgrade --namespace z1 stable/ingress --set controller.replicasCount=3, but still only one pod (out of 3) seems to be in use, and it occasionally fails with CrashLoopBackOff (not very often).
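As an aside, the upgrade command above refers to a chart called stable/ingress and a value called controller.replicasCount, neither of which matches what was installed, and it is also missing the release name that helm upgrade expects; in the stable/nginx-ingress chart the replica count is exposed as controller.replicaCount. A hedged sketch of what the scale-up could look like instead (release name and namespace taken from the install command above, --reuse-values assumed so the existing settings are kept):

# Sketch only: scale the existing nginx-ingress release to 3 controller replicas.
helm upgrade nginx-ingress stable/nginx-ingress --namespace z1 --reuse-values --set controller.replicaCount=3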

It is worth mentioning that the installed nginx-ingress version is 0.34.1, while 0.41.2 is also available. Do you think upgrading would help, and how can I upgrade the installed version to the newer one? (AFAIK helm upgrade does not replace the chart with a newer version, but I may be wrong.)
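For what it's worth, helm upgrade does move a release to a newer chart version once the local repository cache has been refreshed; a hedged sketch of the mechanics (whether a newer controller is actually published to this particular repository is a separate question, addressed in the answer below):

# Refresh the local chart cache, then upgrade to the latest chart version in the repo.
# A specific version can be pinned with --version <chart-version> (placeholder).
helm repo update
helm upgrade nginx-ingress stable/nginx-ingress --namespace z1 --reuse-values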

Any ideas?

kubectl describe pod output:

Name:         nginx-ingress-controller-58467bccf7-jhzlx
Namespace:    z1
Priority:     0
Node:         aks-agentpool-41415378-vmss000000/10.240.0.4
Start Time:   Thu, 19 Nov 2020 09:01:30 +0100
Labels:       app=nginx-ingress
              app.kubernetes.io/component=controller
              component=controller
              pod-template-hash=58467bccf7
              release=nginx-ingress
Annotations:  <none>
Status:       Running
IP:           10.244.1.18
IPs:
  IP:           10.244.1.18
Controlled By:  ReplicaSet/nginx-ingress-controller-58467bccf7
Containers:
  nginx-ingress-controller:
    Container ID:  docker://719655d41c1c8cdb8c9e88c21adad7643a44d17acbb11075a1a60beb7553e2cf
    Image:         us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1
    Image ID:      docker-pullable://us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller@sha256:0e072dddd1f7f8fc8909a2ca6f65e76c5f0d2fcfb8be47935ae3457e8bbceb20
    Ports:         80/TCP, 443/TCP
    Host Ports:    0/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --default-backend-service=z1/nginx-ingress-default-backend
      --election-id=ingress-controller-leader
      --ingress-class=nginx
      --configmap=z1/nginx-ingress-controller
    State:          Running
      Started:      Thu, 19 Nov 2020 09:54:07 +0100
    Last State:     Terminated
      Reason:       Error
      Exit Code:    143
      Started:      Thu, 19 Nov 2020 09:50:41 +0100
      Finished:     Thu, 19 Nov 2020 09:51:12 +0100
    Ready:          True
    Restart Count:  8
    Liveness:       http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       nginx-ingress-controller-58467bccf7-jhzlx (v1:metadata.name)
      POD_NAMESPACE:  z1 (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from nginx-ingress-token-7rmtk (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  nginx-ingress-token-7rmtk:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  nginx-ingress-token-7rmtk
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                   From                                        Message
  ----     ------     ----                  ----                                        -------
  Normal   Scheduled  <unknown>             default-scheduler                           Successfully assigned z1/nginx-ingress-controller-58467bccf7-jhzlx to aks-agentpool-41415378-vmss000000
  Normal   Killing    58m                   kubelet, aks-agentpool-41415378-vmss000000  Container nginx-ingress-controller failed liveness probe, will be restarted
  Warning  Unhealthy  57m (x4 over 58m)     kubelet, aks-agentpool-41415378-vmss000000  Readiness probe failed: HTTP probe failed with statuscode: 500
  Warning  Unhealthy  57m                   kubelet, aks-agentpool-41415378-vmss000000  Readiness probe failed: Get http://10.244.1.18:10254/healthz: read tcp 10.244.1.1:54126->10.244.1.18:10254: read: connection reset by peer
  Normal   Pulled     57m (x2 over 59m)     kubelet, aks-agentpool-41415378-vmss000000  Container image "us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1" already present on machine
  Normal   Created    57m (x2 over 59m)     kubelet, aks-agentpool-41415378-vmss000000  Created container nginx-ingress-controller
  Normal   Started    57m (x2 over 59m)     kubelet, aks-agentpool-41415378-vmss000000  Started container nginx-ingress-controller
  Warning  Unhealthy  57m                   kubelet, aks-agentpool-41415378-vmss000000  Liveness probe failed: Get http://10.244.1.18:10254/healthz: dial tcp 10.244.1.18:10254: connect: connection refused
  Warning  Unhealthy  56m                   kubelet, aks-agentpool-41415378-vmss000000  Liveness probe failed: HTTP probe failed with statuscode: 500
  Warning  Unhealthy  23m (x10 over 58m)    kubelet, aks-agentpool-41415378-vmss000000  Liveness probe failed: Get http://10.244.1.18:10254/healthz: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  14m (x6 over 57m)     kubelet, aks-agentpool-41415378-vmss000000  Readiness probe failed: Get http://10.244.1.18:10254/healthz: dial tcp 10.244.1.18:10254: connect: connection refused
  Warning  BackOff    9m28s (x12 over 12m)  kubelet, aks-agentpool-41415378-vmss000000  Back-off restarting failed container
  Warning  Unhealthy  3m51s (x24 over 58m)  kubelet, aks-agentpool-41415378-vmss000000  Readiness probe failed: Get http://10.244.1.18:10254/healthz: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
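The Last State above shows exit code 143 (SIGTERM), i.e. the kubelet killed the container after the failed liveness probes rather than the process crashing on its own. A hedged sketch of how to pull the logs of the previous, killed container instance (pod name taken from the describe output above):

kubectl logs -n z1 nginx-ingress-controller-58467bccf7-jhzlx --previous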

Some logs from the controller:

  NGINX Ingress controller
  Release:       v0.34.1
  Build:         v20200715-ingress-nginx-2.11.0-8-gda5fa45e2
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.19.1

-------------------------------------------------------------------------------

I1119 08:54:07.267185       6 main.go:275] Running in Kubernetes cluster version v1.17 (v1.17.11) - git (clean) commit 3a3612132641768edd7f7e73d07772225817f630 - platform linux/amd64
I1119 08:54:07.276120       6 main.go:87] Validated z1/nginx-ingress-default-backend as the default backend.
I1119 08:54:07.430459       6 main.go:105] SSL fake certificate created /etc/ingress-controller/ssl/default-fake-certificate.pem
W1119 08:54:07.497816       6 store.go:659] Unexpected error reading configuration configmap: configmaps "nginx-ingress-controller" not found
I1119 08:54:07.617458       6 nginx.go:263] Starting NGINX Ingress controller
I1119 08:54:08.748938       6 backend_ssl.go:66] Adding Secret "z1/z1-tls-secret" to the local store
I1119 08:54:08.801385       6 event.go:278] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"z2", Name:"zalenium", UID:"8d395a18-811b-4852-8dd5-3cdd682e2e6e", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"13667218", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress z2/zalenium
I1119 08:54:08.801908       6 backend_ssl.go:66] Adding Secret "z2/z2-tls-secret" to the local store
I1119 08:54:08.802837       6 event.go:278] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"z1", Name:"zalenium", UID:"244ae6f5-897e-432e-8ec3-fd142f0255dc", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"13667219", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress z1/zalenium
I1119 08:54:08.839946       6 nginx.go:307] Starting NGINX process
I1119 08:54:08.840375       6 leaderelection.go:242] attempting to acquire leader lease  z1/ingress-controller-leader-nginx...
I1119 08:54:08.845041       6 controller.go:141] Configuration changes detected, backend reload required.
I1119 08:54:08.919965       6 status.go:86] new leader elected: nginx-ingress-controller-58467bccf7-5thwb
I1119 08:54:09.084800       6 controller.go:157] Backend successfully reloaded.
I1119 08:54:09.096999       6 controller.go:166] Initial sync, sleeping for 1 second.
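Note the warning near the top of the log: the controller could not read the ConfigMap referenced by its --configmap=z1/nginx-ingress-controller argument. A hedged sketch of how to check whether that ConfigMap actually exists in the namespace:

# List the ConfigMaps in the namespace and inspect the one the controller expects.
kubectl get configmap -n z1
kubectl get configmap nginx-ingress-controller -n z1 -o yaml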

As the OP confirmed in the comments section, I am posting the solution to this issue.

Yes I tried and I replaced the deprecated version with the latest version, it completely solved the nginx issue.

In this setup the OP used the Helm chart from the stable repository. On the GitHub page dedicated to stable/nginx-ingress there is a notice that this particular chart is deprecated. It was updated 12 days ago, so this is a very recent change.

This chart is deprecated as we have moved to the upstream repo ingress-nginx. The chart source can be found here: https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx

In the Nginx Ingress Controller deployment guide, the installation-with-Helm option already uses the new repository.

To list the current repositories on the cluster, use the command $ helm repo list:

$ helm repo list
NAME            URL
stable          https://kubernetes-charts.storage.googleapis.com
ingress-nginx   https://kubernetes.github.io/ingress-nginx

If you don't have the new ingress-nginx repository, you have to:

  • Add the new repository:
    • $ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
  • Update it:
    • $ helm repo update
  • Deploy the Nginx Ingress Controller:
    • $ helm install my-release ingress-nginx/ingress-nginx
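In the OP's situation the existing release also needs to be replaced, not just supplemented; a hedged sketch of what that migration could look like, reusing the release name, namespace, and publishService setting from the question (double-check your own values before deleting anything):

# Remove the release installed from the deprecated stable/nginx-ingress chart.
$ helm uninstall nginx-ingress --namespace z1
# Reinstall it from the new ingress-nginx repository with the same setting as before.
$ helm install nginx-ingress ingress-nginx/ingress-nginx --namespace z1 --set controller.publishService.enabled=true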

Disclaimer!

The above commands are specific to Helm v3.
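If you are still on Helm v2, only the install step differs; a minimal sketch of the v2 equivalent (release name is just an example):

$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm repo update
$ helm install --name my-release ingress-nginx/ingress-nginx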