Kubernetes: how to change accessModes of auto scaled pod to ReadOnlyMany?

I'm experimenting with HPA: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/

PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: api-orientdb-pv
  labels:
    app: api-orientdb
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: api-orientdb-{{ .Values.cluster.name | default "testing" }}
    fsType: ext4

PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: api-orientdb-pv-claim
  labels:
    app: api
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      app: api-orientdb
  storageClassName: ""

HPA:

Name:                           api-orientdb-deployment
Namespace:                      default
Labels:                         <none>
Annotations:                        <none>
CreationTimestamp:                  Thu, 08 Jun 2017 10:37:06 +0700
Reference:                      Deployment/api-orientdb-deployment
Metrics:                        ( current / target )
  resource cpu on pods  (as a percentage of request):   17% (8m) / 10%
Min replicas:                       1
Max replicas:                       2
Events:                         <none>
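
For reference, an equivalent autoscaling/v1 manifest (reconstructed from the describe output above, not necessarily the exact one that was applied) would look roughly like this:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: api-orientdb-deployment
  namespace: default
spec:
  scaleTargetRef:
    kind: Deployment
    name: api-orientdb-deployment
  minReplicas: 1
  maxReplicas: 2
  # 10% matches the "(as a percentage of request) ... / 10%" target above
  targetCPUUtilizationPercentage: 10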

And a new pod has been created:

NAME                                       READY     STATUS    RESTARTS   AGE
api-orientdb-deployment-2506639415-n8nbt   1/1       Running   0          7h
api-orientdb-deployment-2506639415-x8nvm   1/1       Running   0          6h

As you can see, I'm using gcePersistentDisk, which does not support the ReadWriteMany access mode.

The newly created pod also mounts the volume in rw mode:

Name:        api-orientdb-deployment-2506639415-x8nvm
Containers:
    Mounts:
      /orientdb/databases from api-orientdb-persistent-storage (rw)
Volumes:
  api-orientdb-persistent-storage:
    Type:   PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  api-orientdb-pv-claim
    ReadOnly:   false

Question: how does this work in that case? Is there a way to configure the main pod (n8nbt) to use the PV with the ReadWriteOnce access mode, while all other scaled pods (x8nvm) use ReadOnlyMany? And how can that be done automatically?

The only way I can think of is to create another PVC that mounts the same disk but with different accessModes, but then the question becomes: how do I configure the newly scaled pods to use that PVC?
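
For illustration, that second, read-only claim could be a separate PV/PVC pair pointing at the same disk. The -ro names below are made up; readOnly: true on the gcePersistentDisk keeps the attachment read-only:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: api-orientdb-pv-ro          # hypothetical name
  labels:
    app: api-orientdb-ro
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: api-orientdb-{{ .Values.cluster.name | default "testing" }}
    fsType: ext4
    readOnly: true                  # attach the same disk read-only
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: api-orientdb-pv-claim-ro    # hypothetical name
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      app: api-orientdb-ro
  storageClassName: ""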


Fri Jun 9 11:29:34 ICT 2017

I found something: nothing guarantees that a newly scaled pod will run on the same node as the first pod. So, if the volume plugin does not support ReadWriteMany and the scaled pod is scheduled onto another node, the mount fails:

Failed to attach volume "api-orientdb-pv" on node "gke-testing-default-pool-7711f782-4p6f" with: googleapi: Error 400: The disk resource 'projects/xx/zones/us-central1-a/disks/api-orientdb-testing' is already being used by 'projects/xx/zones/us-central1-a/instances/gke-testing-default-pool-7711f782-h7xv'

https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes

Important! A volume can only be mounted using one access mode at a time, even if it supports many. For example, a GCEPersistentDisk can be mounted as ReadWriteOnce by a single node or ReadOnlyMany by many nodes, but not at the same time.

If so, is the only way to make HPA work reliably to use a volume plugin that does support the ReadWriteMany access mode?
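
For context, a PV backed by a plugin that does support ReadWriteMany (NFS is one example) would be declared like this; the name, server and path below are made up:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: api-orientdb-pv-rwx         # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.2                # made-up NFS server
    path: /exports/orientdb         # made-up export path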


Fri Jun 9 14:28:30 ICT 2017

If you want only one Pod to be able to write then create two Deployments. One with replicas: 1 and the other one that has the autoscaler attached (and has readOnly: true in it)

OK.

Do note that a GCE PD can only be mounted by a single node if any of the Pods are accessing it readWrite.

Then I'd have to use label selectors to make sure all pods end up on the same node, right?

Your question is not clear to me

Let me explain: in the autoscaling case, suppose that by using label selectors I can make sure the newly scaled pod ends up on the same node. But since the volume is mounted as rw, would that break the GCE PD, given that we'd then have 2 pods mounting the volume as rw?

First of all, generally, if you have a Deployment with replicas: 1 you won't have 2 Pod running at the same time (most of the time!!)

I know.

On the other hand if a PVC specifies ReadWriteOnce then after the first Pod is scheduled any other Pods will need to be scheduled on the same node or not be scheduled at all (most common case: there aren't enough resources on the Node)

Not in the case of HPA. See my update above for more details.

If for any reason you do have 2 Pods accessing the same mount readWrite then it's completely up to the application what will happen and is not kubernetes specific

What confuses me most is this:

ReadWriteOnce – the volume can be mounted as read-write by a single node

OK, so it's the node, not the pod. But in the autoscaling case, if 2 pods are running on the same node and both mount the volume as rw, does GCE PD support that? If so, how does it work?

It works fine. The Once in ReadWriteOnce refers to the number of Nodes that can use the PVC, not to the number of Pods (HPA or no HPA).

If you want only one Pod to be able to write, create two Deployments: one with replicas: 1, and another with the autoscaler attached (and readOnly: true in it). Note that a GCE PD can only be mounted by a single node if any of the Pods are accessing it readWrite.
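
A sketch of what the autoscaled (read-only) Deployment could look like under that suggestion; the -ro names and the image are placeholders, while the claim name and mount path come from the output above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-orientdb-deployment-ro   # hypothetical name for the autoscaled copy
spec:
  replicas: 1                        # the HPA adjusts this
  selector:
    matchLabels:
      app: api-orientdb-ro
  template:
    metadata:
      labels:
        app: api-orientdb-ro
    spec:
      containers:
        - name: api-orientdb
          image: orientdb            # placeholder image
          volumeMounts:
            - name: api-orientdb-persistent-storage
              mountPath: /orientdb/databases
              readOnly: true
      volumes:
        - name: api-orientdb-persistent-storage
          persistentVolumeClaim:
            claimName: api-orientdb-pv-claim
            readOnly: true           # mount the claim read-only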

I think we can use a StatefulSet instead, so that each replica gets its own PV (see the sketch after the quoted docs below).

https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#deployments_vs_statefulsets

Even Deployments with one replica using a ReadWriteOnce Volume are not recommended. This is because the default Deployment strategy will create a second Pod before bringing down the first pod on a recreate. The Deployment may fail in deadlock as the second Pod can't start because the ReadWriteOnce Volume is already in use, and the first Pod won't be removed because the second Pod has not yet started. Instead, use a StatefulSet with ReadWriteOnce volumes.

StatefulSets are the recommended method of deploying stateful applications that require a unique volume per replica. By using StatefulSets with Persistent Volume Claim Templates you can have applications that can scale up automatically with unique Persistent Volume Claims associated to each replica Pod.
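
A minimal sketch of that StatefulSet approach (the names and the "standard" StorageClass are assumptions); volumeClaimTemplates gives every replica its own ReadWriteOnce PVC and PV:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: api-orientdb                 # illustrative name
spec:
  serviceName: api-orientdb          # needs a matching headless Service
  replicas: 2
  selector:
    matchLabels:
      app: api-orientdb
  template:
    metadata:
      labels:
        app: api-orientdb
    spec:
      containers:
        - name: api-orientdb
          image: orientdb            # placeholder image
          volumeMounts:
            - name: databases
              mountPath: /orientdb/databases
  volumeClaimTemplates:
    - metadata:
        name: databases
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: standard   # assumed GCE PD StorageClass
        resources:
          requests:
            storage: 10Gi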