How to configure the 'efs-provider' for Kubernetes?
I have followed the steps in this guide to deploy the efs-provider for Kubernetes and bind an EFS file system, without success.
I am running Kubernetes on Amazon EKS, with EC2 instances as worker nodes, all deployed with eksctl.
After I apply the adjusted manifest file, the result is:
$ kubectl get pods
NAME READY STATUS RESTARTS
efs-provisioner-#########-##### 1/1 Running 0
$ kubectl get pvc
NAME STATUS VOLUME
test-pvc Pending efs-storage
No matter how long I wait, the status of my PVC stays at Pending.
After creating the Kubernetes cluster and worker nodes and configuring the EFS file system, I apply the efs-provider manifest with all of its variables pointing to the EFS file system. In the StorageClass configuration file, the spec.AccessModes field is set to ReadWriteMany.
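For reference, this is roughly what the StorageClass and PVC sections of such a manifest look like; the names and sizes here are illustrative placeholders, and note that the accessModes field actually belongs to the PVC spec rather than the StorageClass:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: aws-efs              # must match provisioner.name in the efs-provisioner ConfigMap
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany               # ReadWriteMany is set here, on the PVC
  resources:
    requests:
      storage: 1Mi                # EFS is elastic; this value is nominal
```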
At this point my efs-provider pod is running without errors, but the status of the PVC is Pending. What could be wrong? How do I configure the efs-provider to use the EFS file system? How long should I wait before the PVC reaches the Bound status?
Update
Regarding the Amazon Web Services configuration, this is what I have:
- After creating the EFS file system, I created a mount target in each subnet where my nodes are located.
- Each mount target has a security group attached, with an inbound rule that grants access to the NFS port (2049) from the security group of each node group.
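Assuming the setup above, the mount targets and the NFS ingress rule can be created with the AWS CLI roughly like this (all IDs are placeholders):

```shell
# Create one mount target per subnet where the worker nodes run.
aws efs create-mount-target \
  --file-system-id fs-12345678 \
  --subnet-id subnet-aaaabbbb \
  --security-groups sg-efs00000

# Allow NFS (TCP 2049) into the EFS security group
# from the node group's security group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-efs00000 \
  --protocol tcp \
  --port 2049 \
  --source-group sg-nodes0000
```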
The description of my EFS security group is:
{
    "Description": "Communication between the control plane and worker nodes in cluster",
    "GroupName": "##################",
    "IpPermissions": [
        {
            "FromPort": 2049,
            "IpProtocol": "tcp",
            "IpRanges": [],
            "Ipv6Ranges": [],
            "PrefixListIds": [],
            "ToPort": 2049,
            "UserIdGroupPairs": [
                {
                    "GroupId": "sg-##################",
                    "UserId": "##################"
                }
            ]
        }
    ],
    "OwnerId": "##################",
    "GroupId": "sg-##################",
    "IpPermissionsEgress": [
        {
            "IpProtocol": "-1",
            "IpRanges": [
                {
                    "CidrIp": "0.0.0.0/0"
                }
            ],
            "Ipv6Ranges": [],
            "PrefixListIds": [],
            "UserIdGroupPairs": []
        }
    ],
    "VpcId": "vpc-##################"
}
Deployment
The output of the kubectl describe deploy ${DEPLOY_NAME} command is:
$ DEPLOY_NAME=efs-provisioner; \
> kubectl describe deploy ${DEPLOY_NAME}
Name: efs-provisioner
Namespace: default
CreationTimestamp: ####################
Labels: app=efs-provisioner
Annotations: deployment.kubernetes.io/revision: 1
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{},"name":"efs-provisioner","namespace":"default"},"spec"...
Selector: app=efs-provisioner
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: Recreate
MinReadySeconds: 0
Pod Template:
Labels: app=efs-provisioner
Service Account: efs-provisioner
Containers:
efs-provisioner:
Image: quay.io/external_storage/efs-provisioner:latest
Port: <none>
Host Port: <none>
Environment:
FILE_SYSTEM_ID: <set to the key 'file.system.id' of config map 'efs-provisioner'> Optional: false
AWS_REGION: <set to the key 'aws.region' of config map 'efs-provisioner'> Optional: false
DNS_NAME: <set to the key 'dns.name' of config map 'efs-provisioner'> Optional: true
PROVISIONER_NAME: <set to the key 'provisioner.name' of config map 'efs-provisioner'> Optional: false
Mounts:
/persistentvolumes from pv-volume (rw)
Volumes:
pv-volume:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: fs-#########.efs.##########.amazonaws.com
Path: /
ReadOnly: false
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: efs-provisioner-576c67cf7b (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 106s deployment-controller Scaled up replica set efs-provisioner-576c67cf7b to 1
Pod logs
The output of the kubectl logs ${POD_NAME} command is:
$ POD_NAME=efs-provisioner-576c67cf7b-5jm95; \
> kubectl logs ${POD_NAME}
E0708 16:03:46.841229 1 efs-provisioner.go:69] fs-#########.efs.##########.amazonaws.com
I0708 16:03:47.049194 1 leaderelection.go:187] attempting to acquire leader lease default/kubernetes.io-aws-efs...
I0708 16:03:47.061830 1 leaderelection.go:196] successfully acquired lease default/kubernetes.io-aws-efs
I0708 16:03:47.062791 1 controller.go:571] Starting provisioner controller kubernetes.io/aws-efs_efs-provisioner-576c67cf7b-5jm95_f7c5689f-a199-11e9-a152-def1285e1be5!
I0708 16:03:47.062877 1 event.go:221] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"default", Name:"kubernetes.io-aws-efs", UID:"f7c682cd-a199-11e9-80bd-1640944916e4", APIVersion:"v1", ResourceVersion:"3914", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' efs-provisioner-576c67cf7b-5jm95_f7c5689f-a199-11e9-a152-def1285e1be5 became leader
I0708 16:03:47.162998 1 controller.go:620] Started provisioner controller kubernetes.io/aws-efs_efs-provisioner-576c67cf7b-5jm95_f7c5689f-a199-11e9-a152-def1285e1be5!
StorageClass
The output of the kubectl describe sc ${STORAGE_CLASS_NAME} command is:
$ STORAGE_CLASS_NAME=aws-efs; \
> kubectl describe sc ${STORAGE_CLASS_NAME}
Name: aws-efs
IsDefaultClass: No
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"aws-efs"},"provisioner":"aws-efs"}
Provisioner: aws-efs
Parameters: <none>
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
PersistentVolumeClaim
The output of the kubectl describe pvc ${PVC_NAME} command is:
$ PVC_NAME=efs; \
> kubectl describe pvc ${PVC_NAME}
Name: efs
Namespace: default
StorageClass: aws-efs
Status: Pending
Volume:
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"volume.beta.kubernetes.io/storage-class":"aws-efs"},"name":"...
volume.beta.kubernetes.io/storage-class: aws-efs
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 43s (x12 over 11m) persistentvolume-controller no volume plugin matched
Mounted By: <none>
About the questions
Do you have the EFS filesystem id properly configured for your efs-provisioner?
- Yes, the ID from the file system and the configured one match.
Do you have the proper IAM credentials to access this EFS?
- Yes, my user has them, and the eksctl tool also configures them.
Does that EFS path specified for your provisioner exist?
- Yes, it is only the root (/) path.
Did you add an EFS endpoint to the subnet that your worker node(s) are running on, or ensure that your EFS subnets have an Internet Gateway attached?
- Yes, I have added the EFS endpoints to the subnets that my worker node(s) are running on.
Did you set your security group to allow inbound traffic on the NFS port(s)?
- Yes.
I have solved my problem by replacing the provisioner name of the StorageClass from kubernetes.io/aws-efs with just aws-efs.
As we can see in this issue comment on GitHub posted by wongma7:
The issue is that provisioner is kubernetes.io/aws-efs. It can't begin with kubernetes.io as that is reserved by kubernetes.
This resolved the ProvisioningFailed event produced by the persistentvolume-controller on the PersistentVolumeClaim.
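In manifest terms, the change amounts to a single field in the StorageClass; the sketch below shows the before and after (the metadata name is illustrative):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
# provisioner: kubernetes.io/aws-efs  # rejected: the kubernetes.io/ prefix is reserved
provisioner: aws-efs                  # must also match PROVISIONER_NAME in the efs-provisioner ConfigMap
```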