Container keeps crashing for Pod in minikube after the creation of PV and PVC
I have a REST application integrated with Kubernetes for testing REST queries. When I execute a POST query on the client side, the status of the automatically created job remains PENDING indefinitely, and the same goes for the automatically created Pod.
When I dig deeper into the events in the dashboard, I see that it attaches the volume but is unable to mount it, giving this error:
Unable to mount volumes for pod "ingestion-88dhg_default(4a8dd589-e3d3-4424-bc11-27d51822d85b)": timeout expired waiting for volumes to attach or mount for pod "default"/"ingestion-88dhg". list of unmounted volumes=[cdiworkspace-volume]. list of unattached volumes=[cdiworkspace-volume default-token-qz2nb]
I have defined the persistent volume and persistent volume claim manually, using the code below, but did not connect them to any pods. Should I do that?
PV
{
  "kind": "PersistentVolume",
  "apiVersion": "v1",
  "metadata": {
    "name": "cdiworkspace",
    "selfLink": "/api/v1/persistentvolumes/cdiworkspace",
    "uid": "92252f76-fe51-4225-9b63-4d6228d9e5ea",
    "resourceVersion": "100026",
    "creationTimestamp": "2019-07-10T09:49:04Z",
    "annotations": {
      "pv.kubernetes.io/bound-by-controller": "yes"
    },
    "finalizers": [
      "kubernetes.io/pv-protection"
    ]
  },
  "spec": {
    "capacity": {
      "storage": "10Gi"
    },
    "fc": {
      "targetWWNs": [
        "50060e801049cfd1"
      ],
      "lun": 0
    },
    "accessModes": [
      "ReadWriteOnce"
    ],
    "claimRef": {
      "kind": "PersistentVolumeClaim",
      "namespace": "default",
      "name": "cdiworkspace",
      "uid": "0ce96c77-9e0d-4b1f-88bb-ad8b84072000",
      "apiVersion": "v1",
      "resourceVersion": "98688"
    },
    "persistentVolumeReclaimPolicy": "Retain",
    "storageClassName": "standard",
    "volumeMode": "Block"
  },
  "status": {
    "phase": "Bound"
  }
}
PVC
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "cdiworkspace",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/persistentvolumeclaims/cdiworkspace",
    "uid": "0ce96c77-9e0d-4b1f-88bb-ad8b84072000",
    "resourceVersion": "100028",
    "creationTimestamp": "2019-07-10T09:32:16Z",
    "annotations": {
      "pv.kubernetes.io/bind-completed": "yes",
      "pv.kubernetes.io/bound-by-controller": "yes",
      "volume.beta.kubernetes.io/storage-provisioner": "k8s.io/minikube-hostpath"
    },
    "finalizers": [
      "kubernetes.io/pvc-protection"
    ]
  },
  "spec": {
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "10Gi"
      }
    },
    "volumeName": "cdiworkspace",
    "storageClassName": "standard",
    "volumeMode": "Block"
  },
  "status": {
    "phase": "Bound",
    "accessModes": [
      "ReadWriteOnce"
    ],
    "capacity": {
      "storage": "10Gi"
    }
  }
}
Result of journalctl -xe _SYSTEMD_UNIT=kubelet.service
Jul 01 09:47:26 rehan-B85M-HD3 kubelet[22759]: E0701 09:47:26.979098 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:47:40 rehan-B85M-HD3 kubelet[22759]: E0701 09:47:40.979722 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:47:55 rehan-B85M-HD3 kubelet[22759]: E0701 09:47:55.978806 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:48:08 rehan-B85M-HD3 kubelet[22759]: E0701 09:48:08.979375 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:48:23 rehan-B85M-HD3 kubelet[22759]: E0701 09:48:23.979463 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:48:37 rehan-B85M-HD3 kubelet[22759]: E0701 09:48:37.979005 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:48:48 rehan-B85M-HD3 kubelet[22759]: E0701 09:48:48.977686 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:49:02 rehan-B85M-HD3 kubelet[22759]: E0701 09:49:02.979125 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:49:17 rehan-B85M-HD3 kubelet[22759]: E0701 09:49:17.979408 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:49:28 rehan-B85M-HD3 kubelet[22759]: E0701 09:49:28.977499 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:49:41 rehan-B85M-HD3 kubelet[22759]: E0701 09:49:41.977771 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:49:53 rehan-B85M-HD3 kubelet[22759]: E0701 09:49:53.978605 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:50:05 rehan-B85M-HD3 kubelet[22759]: E0701 09:50:05.980251 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:50:16 rehan-B85M-HD3 kubelet[22759]: E0701 09:50:16.979292 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:50:31 rehan-B85M-HD3 kubelet[22759]: E0701 09:50:31.978346 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:50:42 rehan-B85M-HD3 kubelet[22759]: E0701 09:50:42.979302 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:50:55 rehan-B85M-HD3 kubelet[22759]: E0701 09:50:55.978043 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:51:08 rehan-B85M-HD3 kubelet[22759]: E0701 09:51:08.977540 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:51:24 rehan-B85M-HD3 kubelet[22759]: E0701 09:51:24.190929 22759 remote_image.go:113] PullImage "friendly/myplanet:0.0.1-SNAPSHOT" from image service failed: rpc error: code = Unknown desc = E
Jul 01 09:51:24 rehan-B85M-HD3 kubelet[22759]: E0701 09:51:24.190971 22759 kuberuntime_image.go:51] Pull image "friendly/myplanet:0.0.1-SNAPSHOT" failed: rpc error: code = Unknown desc = Error response
Jul 01 09:51:24 rehan-B85M-HD3 kubelet[22759]: E0701 09:51:24.191024 22759 kuberuntime_manager.go:775] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon:
Deployment YAML
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: back
spec:
  replicas: 1
  selector:
    matchLabels:
      app: back
  template:
    metadata:
      labels:
        app: back
    spec:
      containers:
      - name: back
        image: back:latest
        ports:
        - containerPort: 8081
          protocol: TCP
        volumeMounts:
        - mountPath: /data
          name: back
      volumes:
      - name: back
        hostPath:
          # directory location on host
          path: /back
          # this field is optional
          type: Directory
Dockerfile
FROM python:3.7-stretch
COPY . /code
WORKDIR /code
CMD exec /bin/bash -c "trap : TERM INT; sleep infinity & wait"
RUN pip install -r requirements.txt
ENTRYPOINT ["python", "ingestion.py"]
Python file 1
import os
import shutil
import logging

logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(name)s - %(message)s')
logger = logging.getLogger("ingestion")

import requests

import datahub

scihub_username = os.environ["scihub_username"]
scihub_password = os.environ["scihub_password"]
result_url = "http://" + os.environ["CDINRW_BASE_URL"] + "/jobs/" + os.environ["CDINRW_JOB_ID"] + "/results"

logger.info("Searching the Copernicus Open Access Hub")
scenes = datahub.search(username=scihub_username,
                        password=scihub_password,
                        producttype=os.getenv("producttype"),
                        platformname=os.getenv("platformname"),
                        days_back=os.getenv("days_back", 2),
                        footprint=os.getenv("footprint"),
                        max_cloud_cover_percentage=os.getenv("max_cloud_cover_percentage"),
                        start_date=os.getenv("start_date"),
                        end_date=os.getenv("end_date"))
logger.info("Found {} relevant scenes".format(len(scenes)))

job_results = []
for scene in scenes:
    # do not download a scene that has already been ingested
    if os.path.exists(os.path.join("/out_data", scene["title"] + ".SAFE")):
        logger.info("The scene {} already exists in /out_data and will not be downloaded again.".format(scene["title"]))
        filename = scene["title"] + ".SAFE"
    else:
        logger.info("Starting the download of scene {}".format(scene["title"]))
        filename = datahub.download(scene, "/tmp", scihub_username, scihub_password, unpack=True)
        logger.info("The download was successful.")
        shutil.move(filename, "/out_data")
    result_message = {"description": "test",
                      "type": "Raster",
                      "format": "SAFE",
                      "filename": os.path.basename(filename)}
    job_results.append(result_message)

res = requests.put(result_url, json=job_results, timeout=60)
res.raise_for_status()
**Python file 2**
import logging
import os
import urllib.parse
import zipfile

import requests

# constructing URLs for querying the data hub
_BASE_URL = "https://scihub.copernicus.eu/dhus/"
SITE = {}
SITE["SEARCH"] = _BASE_URL + "search?format=xml&sortedby=beginposition&order=desc&rows=100&start={offset}&q="
_PRODUCT_URL = _BASE_URL + "odata/v1/Products('{uuid}')/"
SITE["CHECKSUM"] = _PRODUCT_URL + "Checksum/Value/$value"
SITE["SAFEZIP"] = _PRODUCT_URL + "$value"

logger = logging.getLogger(__name__)


def _build_search_url(producttype=None, platformname=None, days_back=2, footprint=None, max_cloud_cover_percentage=None, start_date=None, end_date=None):
    search_terms = []
    if producttype:
        search_terms.append("producttype:{}".format(producttype))
    if platformname:
        search_terms.append("platformname:{}".format(platformname))
    if start_date and end_date:
        search_terms.append(
            "beginPosition:[{}+TO+{}]".format(start_date, end_date))
    elif days_back:
        search_terms.append(
            "beginPosition:[NOW-{}DAYS+TO+NOW]".format(days_back))
    if footprint:
        search_terms.append("footprint:%22Intersects({})%22".format(
            footprint.replace(" ", "+")))
    if max_cloud_cover_percentage:
        search_terms.append("cloudcoverpercentage:[0+TO+{}]".format(max_cloud_cover_percentage))
    url = SITE["SEARCH"] + "+AND+".join(search_terms)
    return url


def _unpack(zip_file, directory, remove_after=False):
    with zipfile.ZipFile(zip_file) as zf:
        # This assumes that the zipfile only contains the .SAFE directory at root level
        safe_path = zf.namelist()[0]
        zf.extractall(path=directory)
    if remove_after:
        os.remove(zip_file)
    return os.path.normpath(os.path.join(directory, safe_path))


def search(username, password, producttype=None, platformname=None, days_back=2, footprint=None, max_cloud_cover_percentage=None, start_date=None, end_date=None):
    """ Search the Copernicus SciHub

    Parameters
    ----------
    username : str
        user name for the Copernicus SciHub
    password : str
        password for the Copernicus SciHub
    producttype : str, optional
        product type to filter for in the query (see https://scihub.copernicus.eu/userguide/FullTextSearch#Search_Keywords for allowed values)
    platformname : str, optional
        platform name to filter for in the query (see https://scihub.copernicus.eu/userguide/FullTextSearch#Search_Keywords for allowed values)
    days_back : int, optional
        number of days before today that will be searched. Default are the last 2 days. If start and end date are set the days_back parameter is ignored
    footprint : str, optional
        well-known-text representation of the footprint
    max_cloud_cover_percentage: str, optional
        percentage of cloud cover per scene. Can only be used in combination with Sentinel-2 imagery.
        (see https://scihub.copernicus.eu/userguide/FullTextSearch#Search_Keywords for allowed values)
    start_date: str, optional
        start point of the search extent; has to be used in combination with end_date
    end_date: str, optional
        end point of the search extent; has to be used in combination with start_date

    Returns
    -------
    list
        a list of scenes that match the search parameters
    """
    import xml.etree.cElementTree as ET
    scenes = []
    search_url = _build_search_url(producttype, platformname, days_back, footprint, max_cloud_cover_percentage, start_date, end_date)
    logger.info("Search URL: {}".format(search_url))
    offset = 0
    rowsBreak = 5000
    name_space = {"atom": "http://www.w3.org/2005/Atom",
                  "opensearch": "http://a9.com/-/spec/opensearch/1.1/"}
    while offset < rowsBreak:  # Next pagination page:
        response = requests.get(search_url.format(offset=offset), auth=(username, password))
        root = ET.fromstring(response.content)
        if offset == 0:
            rowsBreak = int(
                root.find("opensearch:totalResults", name_space).text)
        for e in root.iterfind("atom:entry", name_space):
            uuid = e.find("atom:id", name_space).text
            title = e.find("atom:title", name_space).text
            begin_position = e.find(
                "atom:date[@name='beginposition']", name_space).text
            end_position = e.find(
                "atom:date[@name='endposition']", name_space).text
            footprint = e.find("atom:str[@name='footprint']", name_space).text
            scenes.append({
                "id": uuid,
                "title": title,
                "begin_position": begin_position,
                "end_position": end_position,
                "footprint": footprint})
        # Ultimate DHuS pagination page size limit (rows per page).
        offset += 100
    return scenes


def download(scene, directory, username, password, unpack=True):
    """ Download a Sentinel scene based on its uuid

    Parameters
    ----------
    scene : dict
        the scene to be downloaded
    directory : str
        the path where the file will be downloaded to
    username : str
        username for the Copernicus SciHub
    password : str
        password for the Copernicus SciHub
    unpack: boolean, optional
        flag that defines whether the downloaded product should be unpacked after download. defaults to true

    Raises
    ------
    ValueError
        if the size of the downloaded file does not match the Content-Length header
    ValueError
        if the checksum of the downloaded file does not match the checksum provided by the Copernicus SciHub

    Returns
    -------
    str
        path to the downloaded file
    """
    import hashlib
    md5hash = hashlib.md5()
    md5sum = requests.get(SITE["CHECKSUM"].format(
        uuid=scene["id"]), auth=(username, password)).text
    download_path = os.path.join(directory, scene["title"] + ".zip")
    # overwrite if path already exists
    if os.path.exists(download_path):
        os.remove(download_path)
    url = SITE["SAFEZIP"].format(uuid=scene["id"])
    rsp = requests.get(url, auth=(username, password), stream=True)
    cl = rsp.headers.get("Content-Length")
    size = int(cl) if cl else -1
    # Actually fetch now:
    with open(download_path, "wb") as f:  # Do not read as a whole into memory:
        written = 0
        for block in rsp.iter_content(8192):
            f.write(block)
            written += len(block)
            md5hash.update(block)
    written = os.path.getsize(download_path)
    if size > -1 and written != size:
        raise ValueError("{}: size mismatch, {} bytes written but expected {} bytes to write!".format(
            download_path, written, size))
    elif md5sum:
        calculated = md5hash.hexdigest()
        expected = md5sum.lower()
        if calculated != expected:
            raise ValueError("{}: MD5 mismatch, calculated {} but expected {}!".format(
                download_path, calculated, expected))
    if unpack:
        return _unpack(download_path, directory, remove_after=False)
    else:
        return download_path
How can I mount the volume properly and automatically onto the pod? I do not want to create pods manually for each REST service and assign volumes to them.
i have defined the persistent volume and persistent volume claim manually using following codes but did not connect to any pods. Should i do that?
So you haven't referenced it in any way in your Pod definition so far, right? At least I can't see it anywhere in your Deployment. If so, the answer is: yes, you have to do that so that it can be used by Pods in your cluster.
Let's start from the beginning. Basically, the whole process of configuring a Pod (this also applies to the Pod template in a Deployment definition) to use a PersistentVolume for storage consists of 3 steps [source]:
1. A cluster administrator creates a PersistentVolume that is backed by physical storage. The administrator does not associate the volume with any Pod.
2. A cluster user creates a PersistentVolumeClaim, which gets automatically bound to a suitable PersistentVolume.
3. The user creates a Pod (it can also be a Deployment in which you define a certain Pod template specification) that uses the PersistentVolumeClaim as storage.
There is no need to describe all of the above steps in detail here, as it has already been done very well here.
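As a quick illustration of steps 1 and 2, such a pair of objects could look like the sketch below (only a sketch: the names, the 1Gi size and the /mnt/data path are placeholders, and a hostPath-backed PV is suitable only for single-node clusters like minikube):
# Step 1: a PersistentVolume backed by a directory on the node (placeholder values)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data
---
# Step 2: a PersistentVolumeClaim that gets automatically bound to a matching PV
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi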
You can verify PV/PVC availability with the following commands:
At this stage kubectl get pv volume-name should show the status of your volume as Bound.
The same goes for kubectl get pvc task-pv-claim (in your case kubectl get pvc cdiworkspace; however, I would recommend using a different name for the PersistentVolumeClaim, e.g. cdiworkspace-claim, so that it can easily be distinguished from the PersistentVolume itself) - this command should also show a status of Bound.
Note that the Pod's configuration file specifies only the PersistentVolumeClaim, not the PersistentVolume itself. From the Pod's point of view, the claim is a volume. Here is a good description that clearly marks out the difference between these two objects [source]:
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., can be mounted once read/write or many times read-only).
The example spec below, in a Pod / Deployment definition, refers to an existing PersistentVolumeClaim:
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
Regarding your question:
How can i mount the volume properly and automatically onto the pod? i do not want to create the pods manually for each REST service and assign volumes to them
You don't have to create them manually. You can specify the PersistentVolumeClaim that they use in the Pod template specification of your Deployment definition.
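For example, a sketch of such a Deployment for your case could look like this (it assumes your existing PVC named cdiworkspace and keeps your container name and port; note that your PVC currently sets volumeMode: Block, so to mount it as a filesystem under /data it would need volumeMode: Filesystem, since a raw block volume is consumed via volumeDevices rather than volumeMounts):
kind: Deployment
apiVersion: apps/v1
metadata:
  name: back
spec:
  replicas: 1
  selector:
    matchLabels:
      app: back
  template:
    metadata:
      labels:
        app: back
    spec:
      containers:
      - name: back
        image: back:latest
        ports:
        - containerPort: 8081
          protocol: TCP
        volumeMounts:
        - mountPath: /data
          name: cdiworkspace-storage
      volumes:
      - name: cdiworkspace-storage
        persistentVolumeClaim:
          claimName: cdiworkspace   # the PVC you already created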
Documentation resources:
Detailed step-by-step instructions on how to configure a Pod to use a PersistentVolumeClaim for storage can be found here.
More information on the concept of persistent volumes in Kubernetes can be found in this article.
If you want to share some data available on your minikube host with every Pod in your cluster, there is a much simpler way than a PersistentVolume. It is called hostPath. You can find a detailed description here, and below is an example that may be useful in your particular case:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /data
          name: test-volume
      volumes:
      - name: test-volume
        hostPath:
          # directory location on host
          path: /directory/with/python/files
          # this field is optional
          type: Directory
The examples you posted are actually in json, not yaml format. You should be able to convert them easily to the required format on this page. You should place your files in /directory/with/python/files on your minikube host, and they will be available in the /data directory of every Pod created by the Deployment.
Below is the Deployment in yaml format, in which the /directory/with/python/files directory on the host is mounted at /data using hostPath:
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: back
  namespace: default
  selfLink: "/apis/extensions/v1beta1/namespaces/default/deployments/back"
  uid: 9f21717c-2c04-459f-b47a-95fd8e11728d
  resourceVersion: '298987'
  generation: 1
  creationTimestamp: '2019-07-16T13:16:15Z'
  labels:
    run: back
  annotations:
    deployment.kubernetes.io/revision: '1'
spec:
  replicas: 1
  selector:
    matchLabels:
      run: back
  template:
    metadata:
      creationTimestamp:
      labels:
        run: back
    spec:
      containers:
      - name: back
        image: back:latest
        ports:
        - containerPort: 8080
          protocol: TCP
        volumeMounts:
        - mountPath: /data
          name: test-volume
        resources: {}
        terminationMessagePath: "/dev/termination-log"
        terminationMessagePolicy: File
        imagePullPolicy: Never
      volumes:
      - name: test-volume
        hostPath:
          # directory location on host
          path: /directory/with/python/files
          # this field is optional
          type: Directory
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
status:
  observedGeneration: 1
  replicas: 1
  updatedReplicas: 1
  unavailableReplicas: 1
  conditions:
  - type: Progressing
    status: 'True'
    lastUpdateTime: '2019-07-16T13:16:34Z'
    lastTransitionTime: '2019-07-16T13:16:15Z'
    reason: NewReplicaSetAvailable
    message: ReplicaSet "back-7fd9995747" has successfully progressed.
  - type: Available
    status: 'False'
    lastUpdateTime: '2019-07-19T08:32:49Z'
    lastTransitionTime: '2019-07-19T08:32:49Z'
    reason: MinimumReplicasUnavailable
    message: Deployment does not have minimum availability.
I looked at the pod's logs again and found that the parameters required by Python file 1 were not being provided, which caused the container to crash. I tested this by supplying all the missing parameters pointed out in the logs to the pod through the deployment.yaml, which now looks like this:
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: back
spec:
  replicas: 1
  selector:
    matchLabels:
      app: back
  template:
    metadata:
      creationTimestamp:
      labels:
        app: back
    spec:
      containers:
      - name: back
        image: back:latest
        imagePullPolicy: Never
        env:
        - name: scihub_username
          value: test
        - name: scihub_password
          value: test
        - name: CDINRW_BASE_URL
          value: 10.1.40.11:8081/swagger-ui.html
        - name: CDINRW_JOB_ID
          value: 3fa85f64-5717-4562-b3fc-2c963f66afa6
        ports:
        - containerPort: 8081
          protocol: TCP
        volumeMounts:
        - mountPath: /data
          name: test-volume
      volumes:
      - name: test-volume
        hostPath:
          # directory location on host
          path: /back
          # this field is optional
          type: Directory
This started downloading the data and solved the problem for now, but it is not the way I want it to run: I want it to be triggered through a REST API that provides all the parameters and starts and stops this container. I will create a separate question for that and link it below for anyone to follow up.