Re-deploying certificates after expiry in kubernetes cluster
Certificates in my kubernetes cluster have expired. What are the steps to redeploy the certificates? After redeployment, pod health is affected. How do I overcome this?
[mdupaguntla@iacap067 K8S_HA_Setup_Post_RPM_Installation_With_RBAC]$ sudo kubectl logs elasticsearch-logging-0
+ export NODE_NAME=elasticsearch-logging-0
+ NODE_NAME=elasticsearch-logging-0
+ export NODE_MASTER=true
+ NODE_MASTER=true
+ export NODE_DATA=true
+ NODE_DATA=true
+ export HTTP_PORT=9200
+ HTTP_PORT=9200
+ export TRANSPORT_PORT=9300
+ TRANSPORT_PORT=9300
+ export MINIMUM_MASTER_NODES=2
+ MINIMUM_MASTER_NODES=2
+ chown -R elasticsearch:elasticsearch /data
+ ./bin/elasticsearch_logging_discovery
F0323 07:18:25.043962 8 elasticsearch_logging_discovery.go:78] kube-system namespace doesn't exist: Unauthorized
goroutine 1 [running]:
k8s.io/kubernetes/vendor/github.com/golang/glog.stacks(0xc4202b1200, 0xc42020a000, 0x77, 0x85)
/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:766 +0xcf
k8s.io/kubernetes/vendor/github.com/golang/glog.(*loggingT).output(0x1a38100, 0xc400000003, 0xc4200ba2c0, 0x1994cf4, 0x22, 0x4e, 0x0)
/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:717 +0x322
k8s.io/kubernetes/vendor/github.com/golang/glog.(*loggingT).printf(0x1a38100, 0x3, 0x121acfe, 0x1e, 0xc4206aff50, 0x2, 0x2)
/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:655 +0x14c
k8s.io/kubernetes/vendor/github.com/golang/glog.Fatalf(0x121acfe, 0x1e, 0xc4206aff50, 0x2, 0x2)
/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:1145 +0x67
main.main()
/go/src/k8s.io/kubernetes/cluster/addons/fluentd-elasticsearch/es-image/elasticsearch_logging_dis
...
F0323 07:18:25.043962 8 elasticsearch_logging_discovery.go:78] kube-system namespace doesn't exist: Unauthorized
It appears you will have to regenerate the private keys for the certificates, and not just issue new certificates from a CSR generated with the cluster's existing keys.
If that's true, then you will need to do (at least) one of the following two things:
Either dig the old private key files out of a backup, generate a CSR from them, re-issue the API certificates, and chalk this up to a valuable lesson about never deleting private keys without careful deliberation.
Or: delete every ServiceAccount named in any Pod's serviceAccountName, in every namespace, followed by a deletion of those Pods themselves so their volumeMounts get rebound. Additional information is in the admin guide.
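As a sketch of the first option, the CSR-and-re-issue flow with openssl might look like the following. The file names (`ca.key`, `ca.crt`, `apiserver.key`) are placeholders; in a real cluster you would use the key and CA recovered from your backups, e.g. under `/etc/kubernetes/pki`:

```shell
# Stand-ins for the files you would recover from backup: the cluster CA
# key/cert pair and the API server's *existing* private key.
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -subj "/CN=kubernetes-ca" -days 365 -out ca.crt
openssl genrsa -out apiserver.key 2048

# Generate a CSR from the existing private key...
openssl req -new -key apiserver.key -subj "/CN=kube-apiserver" -out apiserver.csr

# ...and re-issue the certificate from it, signed by the cluster CA.
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 -out apiserver.crt

# Confirm the new certificate chains to the CA.
openssl verify -CAfile ca.crt apiserver.crt
```

Because the CSR is generated from the original key, clients that pin or reuse that key material keep working; only the certificate's validity period changes.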
If everything goes well, the ServiceAccountController will re-create those ServiceAccount secrets, those Pods will restart, and you will be back in business.
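The ServiceAccount-and-Pod cleanup in the second option could be sketched like this. This is untested against a live cluster, requires cluster-admin access, and deleting all Pods per namespace is a simplification (the answer above only requires deleting the Pods that reference the deleted ServiceAccounts):

```shell
# For every namespace, collect the serviceAccountName of each Pod,
# delete those ServiceAccounts (their token Secrets get re-created by
# the controller), then delete the Pods so their volumeMounts rebind.
for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  sas=$(kubectl get pods -n "$ns" \
        -o jsonpath='{.items[*].spec.serviceAccountName}' | tr ' ' '\n' | sort -u)
  for sa in $sas; do
    kubectl delete serviceaccount "$sa" -n "$ns"
  done
  # Restarting the Pods picks up the re-created token Secrets.
  kubectl delete pods --all -n "$ns"
done
```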
The specific steps for managing X.509 certificates for a cluster are far too numerous to fit in one answer box, but this is a high-level overview of what needs to happen.
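As a first diagnostic step, it may help to confirm which certificates have actually expired. openssl can report the expiry of any PEM certificate; the block below generates a throwaway certificate purely to demonstrate the check, so substitute your real files (e.g. `/etc/kubernetes/pki/apiserver.crt`):

```shell
# Throwaway certificate for demonstration only.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key \
  -subj "/CN=demo" -days 30 -out demo.crt

# Print the expiry date...
openssl x509 -enddate -noout -in demo.crt

# ...and branch on whether it expires within the next day (86400 s).
if openssl x509 -checkend 86400 -noout -in demo.crt; then
  echo "certificate is still valid"
else
  echo "certificate has expired (or will within a day)"
fi
```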