Helm appears to parse my chart differently depending on whether I use --dry-run --debug?

So I deployed a new cronjob today and got the following error:

Error: release acs-export-cronjob failed: CronJob.batch "acs-export-cronjob" is invalid: [spec.jobTemplate.spec.template.spec.containers: Required value, spec.jobTemplate.spec.template.spec.restartPolicy: Unsupported value: "Always": supported values: "OnFailure", "Never"]

Here is some output from running helm on the same chart, with no changes, but with the --debug --dry-run flags:

 NAME:   acs-export-cronjob
REVISION: 1
RELEASED: Wed Oct 17 14:12:02 2018
CHART: generic-job-0.1.0
USER-SUPPLIED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
schedule: 0 * * * *

COMPUTED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
resources:
cpu: 100m
memory: 1Gi
schedule: 0 * * * *
sonicNodeGroup: api
springProfiles: export-job

HOOKS:
MANIFEST:

---
# Source: generic-job/templates/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: acs-export-cronjob-sa
---
# Source: generic-job/templates/rbac.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: acs-export-cronjob-manager
rules:
- apiGroups: ["extensions"]
resources: ["deployments"]
verbs: ["get"]
---
# Source: generic-job/templates/rbac.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: acs-export-cronjob-binding
subjects:
- kind: ServiceAccount
name: acs-export-cronjob-sa
roleRef:
kind: Role
name: acs-export-cronjob-manager
apiGroup: rbac.authorization.k8s.io
---
# Source: generic-job/templates/generic-job.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: acs-export-cronjob
labels:
    app: generic-job
    chart: "generic-job-0.1.0"
    release: "acs-export-cronjob"
    heritage: "Tiller"
spec:
schedule: 0 * * * *
successfulJobsHistoryLimit: 5
failedJobsHistoryLimit: 5
concurrencyPolicy: Forbid
startingDeadlineSeconds: 120
jobTemplate:
    spec:
    metadata:
        name: acs-export-cronjob
        labels:
        jobgroup: acs-export-jobs
        app: generic-job
        chart: "generic-job-0.1.0"
        release: "acs-export-cronjob"
        heritage: "Tiller"
    spec:
        template:
        metadata:
            labels:
            jobgroup: acs-export-jobs
            app: generic-job
            chart: "generic-job-0.1.0"
            release: "acs-export-cronjob"
            heritage: "Tiller"
            annotations:
            iam.amazonaws.com/role: arn:aws:iam::<censored>:role/k8s-service-role
        spec:
            restartPolicy: Never   #<----------this is not 'Always'!!
            serviceAccountName: acs-export-cronjob-sa
            tolerations:
            - key: sonic-node-group
            operator: Equal
            value: api
            effect: NoSchedule
            nodeSelector:
            sonic-node-group: api
            volumes:
            - name: config
            emptyDir: {}
            initContainers:
            - name: "get-users-vmargs-from-deployment"
            image: <censored>.amazonaws.com/utils/kubectl-helm:latest
            command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(@.name=='APP_SPECIFIC_VM_ARGS')].value}\" > /config/users-vmargs && cat /config/users-vmargs"]
            volumeMounts:
            - mountPath: /config
                name: config
            - name: "get-users-yaml-appconfig-from-deployment"
            image: <censored>.amazonaws.com/utils/kubectl-helm:latest
            command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(@.name=='APP_YAML_CONFIG')].value}\" > /config/users-appconfig && cat /config/users-appconfig"]
            volumeMounts:
            - mountPath: /config
                name: config
            containers:     #<--------this field is not missing!
            - image: <censored>.amazonaws.com/sonic/acs-export:latest
            imagePullPolicy: Always
            name: "users-batch"
            command:
            - "bash"
            - "-c"
            - 'APP_SPECIFIC_VM_ARGS="$(cat /config/users-vmargs) " APP_YAML_CONFIG="$(cat /config/users-appconfig)" /vertx-app/startvertx.sh'
            env:
            - name: FRENV
                value: "batch"
            - name: STACKNAME
                value: eu1-test
            - name: SPRING_PROFILES
                value: "export-job"
            - name: NAMESPACE
                valueFrom:
                fieldRef:
                    fieldPath: metadata.namespace
            volumeMounts:
            - mountPath: /config
                name: config
            resources:
                limit:
                cpu: 100m
                memory: 1Gi

If you were paying attention, you may have noticed the line in the debug output that I flagged with a comment (added after the fact): it sets restartPolicy to Never, the exact opposite of the Always the error message claims.

You may also have noticed the other flagged line in the debug output (again, comment added after the fact), where the supposedly required containers field is clearly specified, in direct contradiction of the error message.

What is going on here?

This may be due to a formatting (indentation) error. Check the examples here and here. The structure should be:

jobTemplate:  
    spec:  
      template:  
        spec:  
          restartPolicy: Never

Based on the output you provided, your template: and the inner spec: sit at the same indentation level, so restartPolicy ends up nested in the wrong place:

jobTemplate:
       spec:
        template:
        spec:
            restartPolicy: Never   #<----------this is not 'Always'!!
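You can see what that mis-indentation does by feeding both variants to a YAML parser. A minimal sketch using PyYAML (an assumed dependency, not something your chart uses):

```python
import yaml  # PyYAML, assumed to be installed

# Mis-indented: 'template:' and the inner 'spec:' sit at the same level,
# so 'spec' becomes a *sibling* of 'template' instead of its child.
bad = yaml.safe_load("""\
jobTemplate:
  spec:
    template:
    spec:
      restartPolicy: Never
""")
print(bad["jobTemplate"]["spec"]["template"])  # None -- the pod template is empty
print(bad["jobTemplate"]["spec"]["spec"])      # {'restartPolicy': 'Never'}

# Correctly indented: restartPolicy lands where the API expects it.
good = yaml.safe_load("""\
jobTemplate:
  spec:
    template:
      spec:
        restartPolicy: Never
""")
print(good["jobTemplate"]["spec"]["template"]["spec"])  # {'restartPolicy': 'Never'}
```

In the bad variant the API server sees an empty pod template (hence "containers: Required value") and never sees your restartPolicy at all, so it falls back to defaults.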

As for spec.jobTemplate.spec.template.spec.containers, I assume Helm falls back to some default values instead of using yours. You could also try generating the YAML file, converting it to JSON, and applying that.
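The YAML-to-JSON step can be sketched like this (assuming PyYAML; in practice you would feed it the output of a dry-run render rather than an inline string). JSON's explicit braces make nesting mistakes much easier to spot than YAML indentation:

```python
import json
import yaml  # PyYAML, assumed to be installed

# Stand-in for a rendered manifest; replace with your actual chart output.
manifest = """\
jobTemplate:
  spec:
    template:
      spec:
        restartPolicy: Never
"""
# Every level of nesting becomes an explicit pair of braces.
print(json.dumps(yaml.safe_load(manifest), indent=2))
```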

Ha! Found it! It was actually a simple mistake: I had an extra spec:metadata section under jobTemplate that was duplicated. Removing one of the dupes fixed my problem.

I really wish Helm's error messages were more helpful.

The corrected chart looks like this:

 NAME:   acs-export-cronjob
REVISION: 1
RELEASED: Wed Oct 17 14:12:02 2018
CHART: generic-job-0.1.0
USER-SUPPLIED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
schedule: 0 * * * *

COMPUTED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
resources:
cpu: 100m
memory: 1Gi
schedule: 0 * * * *
sonicNodeGroup: api
springProfiles: export-job

HOOKS:
MANIFEST:

---
# Source: generic-job/templates/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: acs-export-cronjob-sa
---
# Source: generic-job/templates/rbac.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: acs-export-cronjob-manager
rules:
- apiGroups: ["extensions"]
resources: ["deployments"]
verbs: ["get"]
---
# Source: generic-job/templates/rbac.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: acs-export-cronjob-binding
subjects:
- kind: ServiceAccount
name: acs-export-cronjob-sa
roleRef:
kind: Role
name: acs-export-cronjob-manager
apiGroup: rbac.authorization.k8s.io
---
# Source: generic-job/templates/generic-job.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: acs-export-cronjob
labels:
    app: generic-job
    chart: "generic-job-0.1.0"
    release: "acs-export-cronjob"
    heritage: "Tiller"
spec:
schedule: 0 * * * *
successfulJobsHistoryLimit: 5
failedJobsHistoryLimit: 5
concurrencyPolicy: Forbid
startingDeadlineSeconds: 120
jobTemplate:
   spec:
      template:
         metadata:
            labels:
            jobgroup: acs-export-jobs
            app: generic-job
            chart: "generic-job-0.1.0"
            release: "acs-export-cronjob"
            heritage: "Tiller"
            annotations:
            iam.amazonaws.com/role: arn:aws:iam::<censored>:role/k8s-service-role
        spec:
            restartPolicy: Never   
            serviceAccountName: acs-export-cronjob-sa
            tolerations:
            - key: sonic-node-group
            operator: Equal
            value: api
            effect: NoSchedule
            nodeSelector:
            sonic-node-group: api
            volumes:
            - name: config
            emptyDir: {}
            initContainers:
            - name: "get-users-vmargs-from-deployment"
            image: <censored>.amazonaws.com/utils/kubectl-helm:latest
            command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(@.name=='APP_SPECIFIC_VM_ARGS')].value}\" > /config/users-vmargs && cat /config/users-vmargs"]
            volumeMounts:
            - mountPath: /config
                name: config
            - name: "get-users-yaml-appconfig-from-deployment"
            image: <censored>.amazonaws.com/utils/kubectl-helm:latest
            command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(@.name=='APP_YAML_CONFIG')].value}\" > /config/users-appconfig && cat /config/users-appconfig"]
            volumeMounts:
            - mountPath: /config
                name: config
            containers:     
            - image: <censored>.amazonaws.com/sonic/acs-export:latest
            imagePullPolicy: Always
            name: "users-batch"
            command:
            - "bash"
            - "-c"
            - 'APP_SPECIFIC_VM_ARGS="$(cat /config/users-vmargs) " APP_YAML_CONFIG="$(cat /config/users-appconfig)" /vertx-app/startvertx.sh'
            env:
            - name: FRENV
                value: "batch"
            - name: STACKNAME
                value: eu1-test
            - name: SPRING_PROFILES
                value: "export-job"
            - name: NAMESPACE
                valueFrom:
                fieldRef:
                    fieldPath: metadata.namespace
            volumeMounts:
            - mountPath: /config
                name: config
            resources:
                limit:
                cpu: 100m
                memory: 1Gi