Ingress Resource getting address from wrong Ingress Controller when using multiple ingress-nginx Controllers
We have a Kubernetes cluster in AWS (EKS). In our setup we need two ingress-nginx controllers so that we can enforce different security policies. To achieve that, I am making use of
kubernetes.io/ingress.class and --ingress-class
Following the recommendation in the ingress-nginx repository here, I created one standard Ingress Controller with the default 'mandatory.yaml'.
To create the second ingress controller, I customized the ingress Deployment from 'mandatory.yaml' a little. I essentially added the label
'env: internal'
to the Deployment definition.
I also created another Service accordingly, specifying the 'env: internal' label in order to bind this new Service to my new ingress controller. Please take a look at my yaml definitions:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller-internal
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    env: internal
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
      env: internal
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        env: internal
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller-internal
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
            - --ingress-class=nginx-internal
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
---
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx-internal
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    env: internal
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    env: internal
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
After applying these definitions, my Ingress Controller is created together with the new LoadBalancer Service:
$ kubectl get deployments -n ingress-nginx
NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
nginx-ingress-controller            1/1     1            1           10d
nginx-ingress-controller-internal   1/1     1            1           95m

$ kubectl get service -n ingress-nginx
NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP               PORT(S)                      AGE
ingress-nginx            LoadBalancer   172.20.6.67      xxxx.elb.amazonaws.com    80:30857/TCP,443:31863/TCP   10d
ingress-nginx-internal   LoadBalancer   172.20.115.244   yyyyy.elb.amazonaws.com   80:30036/TCP,443:30495/TCP   97m
So far so good, everything is working fine.
However, when I create two Ingress resources, each bound to a different ingress controller (note the 'kubernetes.io/ingress.class:' annotation):
External ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: accounting-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec: ...
Internal ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: internal-ingress
  annotations:
    kubernetes.io/ingress.class: nginx-internal
spec: ...
I see that both of them contain the same ADDRESS, the address of the first Ingress Controller:

$ kg ingress
NAME               HOSTS          ADDRESS                  PORTS     AGE
external-ingress   bbb.aaaa.com   xxxx.elb.amazonaws.com   80, 443   10d
internal-ingress   ccc.aaaa.com   xxxx.elb.amazonaws.com   80        88m

I would expect the ingress bound to 'ingress-class=nginx-internal' to contain this address: 'yyyyy.elb.amazonaws.com'. Although everything seems to be working fine, this bothers me and I feel that something is wrong.
Where should I start troubleshooting this to understand what is happening behind the scenes?
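One place worth looking (an assumption based on how ingress-nginx populates Ingress status, not something confirmed above): the ADDRESS column is copied from the Service named in the controller's `--publish-service` flag, and the internal Deployment above still points at the external `ingress-nginx` Service. A hedged sketch of the corrected flag for the internal controller:

```yaml
# Assumption: pointing the internal controller at its own Service should make
# its Ingress resources report yyyyy.elb.amazonaws.com instead of the
# external controller's address.
- --publish-service=$(POD_NAMESPACE)/ingress-nginx-internal
```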
####---UPDATE---####
In addition to the above, I also added the line "ingress-controller-leader-nginx-internal" to mandatory.yaml, as shown below. I did this based on a comment I found in the mandatory.yaml file:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
      - "ingress-controller-leader-nginx-internal"
Unfortunately the nginx documentation only mentions 'kubernetes.io/ingress.class and --ingress-class' for defining a new controller. It is possible that I messed up some small detail.
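For reference, the leader-election lock ConfigMap that the extra resourceName above has to match is derived from two flags: its name defaults to `<election-id>-<ingress-class>`, where `--election-id` itself defaults to `ingress-controller-leader`. A sketch of the relevant args for the internal controller (the values shown are the default election id plus the class used above):

```yaml
# With the default election id, combined with --ingress-class=nginx-internal,
# the controller requests the ConfigMap
# "ingress-controller-leader-nginx-internal" -- matching the Role above.
- --election-id=ingress-controller-leader
- --ingress-class=nginx-internal
```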
Try changing this line:
- --configmap=$(POD_NAMESPACE)/nginx-configuration
in your code to something like this:
- --configmap=$(POD_NAMESPACE)/internal-nginx-configuration
This way each of your nginx controllers will have its own configuration; otherwise both will share the same one. It may look like it works, but you will run into errors when updating... (been there...)
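Putting that suggestion into the internal Deployment, the args might look like the sketch below. Note that `internal-nginx-configuration` is an assumed name, and the matching ConfigMap would also have to be created, since the stock mandatory.yaml only creates `nginx-configuration`:

```yaml
args:
  - /nginx-ingress-controller
  # assumed, separately created ConfigMap so that the two controllers
  # no longer share one nginx configuration
  - --configmap=$(POD_NAMESPACE)/internal-nginx-configuration
  - --annotations-prefix=nginx.ingress.kubernetes.io
  - --ingress-class=nginx-internal
```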