Staging and production routes conflicting with one another
Yesterday I was testing with the production and staging deployments running at the same time. The results were flaky: sometimes you'd go to the staging URL and it would load production, and if you refreshed the page it would switch between the two at random.
I won't have time to test this again until the weekend, because QA is in progress and this issue was disrupting it. I killed the production deployment and removed its routes from my ingress.yaml so QA could continue without problems.
Anyway, my ingress.yaml configuration is probably the cause, so I wanted to share it to see what's producing this behavior:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.org/client-max-body-size: "500m"
    nginx.ingress.kubernetes.io/use-regex: "true"
  name: ingress-service
  namespace: default
spec:
  tls:
    - hosts:
        - domain.com
        - www.domain.com
        - staging.domain.com
      secretName: tls-domain-com
  rules:
    - host: domain.com
      http:
        paths:
          - path: /?(.*)
            backend:
              serviceName: client-cluster-ip-service-prod
              servicePort: 3000
          - path: /admin/?(.*)
            backend:
              serviceName: admin-cluster-ip-service-prod
              servicePort: 4000
          - path: /api/?(.*)
            backend:
              serviceName: api-cluster-ip-service-prod
              servicePort: 5000
    - host: www.domain.com
      http:
        paths:
          - path: /?(.*)
            backend:
              serviceName: client-cluster-ip-service-prod
              servicePort: 3000
          - path: /admin/?(.*)
            backend:
              serviceName: admin-cluster-ip-service-prod
              servicePort: 4000
          - path: /api/?(.*)
            backend:
              serviceName: api-cluster-ip-service-prod
              servicePort: 5000
    - host: staging.domain.com
      http:
        paths:
          - path: /?(.*)
            backend:
              serviceName: client-cluster-ip-service-staging
              servicePort: 3000
          - path: /admin/?(.*)
            backend:
              serviceName: admin-cluster-ip-service-staging
              servicePort: 4000
          - path: /api/?(.*)
            backend:
              serviceName: api-cluster-ip-service-staging
              servicePort: 5000
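(An aside: the networking.k8s.io/v1beta1 Ingress API was removed in Kubernetes 1.22, so on newer clusters this manifest will not apply. A sketch of one of the rules above migrated to the v1 API, where the Service reference moves under backend.service and pathType is required:

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
spec:
  rules:
    - host: domain.com
      http:
        paths:
          # ImplementationSpecific lets the nginx controller keep interpreting
          # the regex path (together with the use-regex annotation).
          - path: /?(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: client-cluster-ip-service-prod
                port:
                  number: 3000
```

The annotations and TLS section carry over unchanged.)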
My gut says it's one of the following:
- domain.com comes before the other hosts and should be last
- Looking at it again now, they share the same IP and use the same ports, so the ports need to change
- If I want to keep the ports the same, I'd need to deploy a second ingress controller, one for staging and one for production, following these instructions
Anyway, can anyone confirm?
EDIT
Adding the .yaml files:
# client-staging.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-deployment-staging
spec:
  replicas: 3
  selector:
    matchLabels:
      component: client
  template:
    metadata:
      labels:
        component: client
    spec:
      containers:
        - name: client
          image: testappacr.azurecr.io/test-app-client
          ports:
            - containerPort: 3000
          env:
            - name: DOMAIN
              valueFrom:
                secretKeyRef:
                  name: test-app-staging-secrets
                  key: DOMAIN
---
apiVersion: v1
kind: Service
metadata:
  name: client-cluster-ip-service-staging
spec:
  type: ClusterIP
  selector:
    component: client
  ports:
    - port: 3000
      targetPort: 3000
# client-prod.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-deployment-prod
spec:
  replicas: 3
  selector:
    matchLabels:
      component: client
  template:
    metadata:
      labels:
        component: client
    spec:
      containers:
        - name: client
          image: testappacr.azurecr.io/test-app-client
          ports:
            - containerPort: 3000
          env:
            - name: DOMAIN
              valueFrom:
                secretKeyRef:
                  name: test-app-prod-secrets
                  key: DOMAIN
---
apiVersion: v1
kind: Service
metadata:
  name: client-cluster-ip-service-prod
spec:
  type: ClusterIP
  selector:
    component: client
  ports:
    - port: 3000
      targetPort: 3000
Even without seeing the deployment yaml descriptors, from the problem description the most likely cause is that the pods of both deployments land behind the same Service endpoints. You need to differentiate the selectors: add something like env: prod and env: staging to each Service's selector, and add the matching labels to each deployment.
To check whether this is the problem, run kubectl describe service for each service and inspect the Endpoints it lists.
If that's not it, post the output and I can help you debug further.
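The overlap comes from the rule Kubernetes uses to match a Service to pods: the Service's selector must be a subset of a pod's labels. A minimal illustrative sketch (this is not the real kube-proxy code, just the matching rule) of why both environments' pods matched both Services before the fix:

```python
def selector_matches(selector: dict, pod_labels: dict) -> bool:
    """A Service selects a pod when every selector key/value pair
    appears in the pod's labels (subset match)."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

# Before the fix: both deployments label pods only with component=client,
# so either Service's selector matches *both* pod sets, and traffic is
# load-balanced across prod and staging replicas at random.
svc_selector = {"component": "client"}
prod_pod = {"component": "client"}
staging_pod = {"component": "client"}
assert selector_matches(svc_selector, prod_pod)
assert selector_matches(svc_selector, staging_pod)

# After adding an environment label, the selectors become disjoint:
prod_svc = {"component": "client", "environment": "production"}
prod_pod = {"component": "client", "environment": "production"}
staging_pod = {"component": "client", "environment": "staging"}
assert selector_matches(prod_svc, prod_pod)
assert not selector_matches(prod_svc, staging_pod)
```

This is why the symptom is a random flip between environments rather than a hard error: the shared Service endpoint list simply contains pods from both deployments.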
EDIT: changes after the files were posted:
Production
Service:
spec:
  type: ClusterIP
  selector:
    component: client
    environment: production
Deployment:
replicas: 3
selector:
  matchLabels:
    component: client
    environment: production
template:
  metadata:
    labels:
      component: client
      environment: production
Staging
Service:
spec:
  type: ClusterIP
  selector:
    component: client
    environment: staging
Deployment:
replicas: 3
selector:
  matchLabels:
    component: client
    environment: staging
template:
  metadata:
    labels:
      component: client
      environment: staging
Note that spec.selector on an existing Deployment is immutable, so applying this change means deleting and recreating the Deployments.