EKS - Pod has unbound immediate PersistentVolumeClaims on t2.large instances (Bottlerocket OS)

I've looked at several solutions but couldn't find an answer. I'm trying to run a StatefulSet on the cluster, but the pod won't run because of an unbound claim. I'm running t2.large machines with Bottlerocket as the host OS.

kubectl get events:

28m         Warning   FailedScheduling         pod/carabbitmq-0                              pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
28m         Normal    Scheduled                pod/carabbitmq-0                              Successfully assigned default/carabbitmq-0 to ip-x.compute.internal
28m         Normal    SuccessfulAttachVolume   pod/carabbitmq-0                              AttachVolume.Attach succeeded for volume "pvc-f6e8ec20-4bc1-4539-8d11-2dd1b3dbd4d7"
28m         Normal    Pulled                   pod/carabbitmq-0                              Container image "busybox:1.30.1" already present on machine

kubectl get pv,pvc + describe:

NAME                                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/data-carabbitmq-0   Bound    pvc-f6e8ec20-4bc1-4539-8d11-2dd1b3dbd4d7   30Gi       RWO            gp2            12m

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS   REASON   AGE
persistentvolume/pvc-f6e8ec20-4bc1-4539-8d11-2dd1b3dbd4d7   30Gi       RWO            Retain           Bound    rabbitmq/data-carabbitmq-0   gp2                     12m

Describe PV:

Name:              pvc-f6e8ec20-4bc1-4539-8d11-2dd1b3dbd4d7
Labels:            failure-domain.beta.kubernetes.io/region=eu-west-1
                   failure-domain.beta.kubernetes.io/zone=eu-west-1b
Annotations:       kubernetes.io/createdby: aws-ebs-dynamic-provisioner
                   pv.kubernetes.io/bound-by-controller: yes
                   pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      gp2
Status:            Bound
Claim:             rabbitmq/data-carabbitmq-0
Reclaim Policy:    Retain
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          30Gi
Node Affinity:     
  Required Terms:  
    Term 0:        failure-domain.beta.kubernetes.io/zone in [eu-west-1b]
                   failure-domain.beta.kubernetes.io/region in [eu-west-1]
Message:           
Source:
    Type:       AWSElasticBlockStore (a Persistent Disk resource in AWS)
    VolumeID:   aws://eu-west-1b/vol-xx
    FSType:     ext4
    Partition:  0
    ReadOnly:   false
Events:         <none>

Describe PVC:

Name:          data-carabbitmq-0
Namespace:     rabbitmq
StorageClass:  gp2
Status:        Bound
Volume:        pvc-f6e8ec20-4bc1-4539-8d11-2dd1b3dbd4d7
Labels:        app=rabbitmq-ha
               release=rabbit-mq
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      30Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Mounted By:    carabbitmq-0
Events:
  Type    Reason                 Age   From                         Message
  ----    ------                 ----  ----                         -------
  Normal  ProvisioningSucceeded  36m   persistentvolume-controller  Successfully provisioned volume pvc-f6e8ec20-4bc1-4539-8d11-2dd1b3dbd4d7 using kubernetes.io/aws-ebs

The StorageClass is gp2:

Name:                  gp2
IsDefaultClass:        Yes
Annotations:           storageclass.kubernetes.io/is-default-class=true
Provisioner:           kubernetes.io/aws-ebs
Parameters:            encrypted=true,type=gp2
AllowVolumeExpansion:  <unset>
MountOptions:
  debug
ReclaimPolicy:      Retain
VolumeBindingMode:  Immediate
Events:             <none>
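
For reference, here is the same StorageClass reconstructed as a manifest from the describe output above (a sketch; the live object may differ in details such as annotation form or field order):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  encrypted: "true"
  type: gp2
reclaimPolicy: Retain
volumeBindingMode: Immediate
mountOptions:
  - debug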

I'm not sure what I'm missing; the same configuration worked until I switched to the "t" type of EC2 instances.

So, this is weird, but I had some readiness probes failing their health checks, and I assumed it was because the volume wasn't mounted properly.

The health check basically made some requests to localhost, and that was failing (not sure why). Changing it to 127.0.0.1 made the check pass, and the volume error then disappeared.

So, if you run into this strange issue (the volume is attached, but you still get that error), check the pod's probes.
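
For illustration, a minimal sketch of the kind of probe change that fixed it. The exec command, port, and endpoint below are placeholders (a RabbitMQ management API endpoint is assumed), not the exact probe from my chart:

readinessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      # "localhost" resolution misbehaved on these nodes; the explicit
      # loopback address did not.
      - wget -q -O /dev/null http://127.0.0.1:15672/api/healthchecks/node
  initialDelaySeconds: 10
  periodSeconds: 30

The same swap applies to httpGet probes: if a probe specifies host: localhost, switching it to 127.0.0.1 avoids relying on name resolution inside the container.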