How to enable subdomain with GKE

I have several Kubernetes deployments in GKE and I would like to access each of them from a different external subdomain.

I tried creating two deployments with subdomain "sub1" and "sub2" and hostname "app", plus another deployment with hostname "app" and a Service exposing it on IP XXX.XXX.XXX.XXX, which is configured in the DNS for app.mydomain.com.

I want to reach the two sub-deployments at sub1.app.mydomain.com and sub2.app.mydomain.com.

This should be automatic: as new deployments are added, I cannot change the DNS records every time. Maybe I am approaching this the wrong way; I am new to GKE, any suggestions?

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-host
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-host
  template:
    metadata:
      labels:
        name: my-host
        type: proxy
    spec:
      hostname: app
      containers:
        - image: nginx:alpine
          name: nginx
          ports:
            - name: nginx
              containerPort: 80
              hostPort: 80
      restartPolicy: Always
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-subdomain-1
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-subdomain-1
  template:
    metadata:
      labels:
        name: my-subdomain-1
        type: app
    spec:
      hostname: app
      subdomain: sub1
      containers:
        - image: nginx:alpine
          name: nginx
          ports:
            - name: nginx
              containerPort: 80
              hostPort: 80
      restartPolicy: Always
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-subdomain-2
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-subdomain-2
  template:
    metadata:
      labels:
        name: my-subdomain-2
        type: app
    spec:
      hostname: app
      subdomain: sub2
      containers:
        - image: nginx:alpine
          name: nginx
          ports:
            - name: nginx
              containerPort: 80
              hostPort: 80
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: my-expose-dns
spec:
  ports:
    - port: 80
  selector:
    name: my-host
  type: LoadBalancer

You want an Ingress. There are several options available (Istio, nginx, traefik, etc.). I like using nginx, and it is really easy to install and work with. Installation steps can be found at kubernetes.github.io.

Once the Ingress Controller is installed, make sure it is exposed with a Service of type=LoadBalancer. Next, if you are using Google Cloud DNS, set up a wildcard entry for your domain with an A record pointing to the external IP address of that ingress controller Service. In your case it would be *.app.mydomain.com.
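For reference, a LoadBalancer Service in front of an nginx ingress controller looks roughly like this (the names and labels below are assumptions; the standard ingress-nginx install already creates an equivalent Service for you):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```

Once this Service gets an external IP, a single wildcard A record (*.app.mydomain.com pointing at that IP) covers every current and future subdomain, so no per-deployment DNS changes are needed.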

All traffic to app.mydomain.com will now go to that load balancer and be handled by your Ingress Controller, so all you need to do is add a Service and an Ingress entry for each service you want to expose.

apiVersion: v1
kind: Service
metadata:
  name: my-service1
spec:
  selector:
    app: my-app-1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP

apiVersion: v1
kind: Service
metadata:
  name: my-service2
spec:
  selector:
    app: my-app2
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  rules:
  - host: sub1.app.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service1
            port:
              number: 80
  - host: sub2.app.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service2
            port:
              number: 80

The routing shown is host based, but you could just as easily handle these services path based, so that all traffic to app.mydomain.com/service1 goes to one of your deployments.
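Path-based routing would look something like this (the paths are hypothetical; with ingress-nginx you typically also add a rewrite annotation so the /service1 prefix is stripped before the request reaches the backend):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: path-based-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: app.mydomain.com
    http:
      paths:
      - path: /service1
        pathType: Prefix
        backend:
          service:
            name: my-service1
            port:
              number: 80
      - path: /service2
        pathType: Prefix
        backend:
          service:
            name: my-service2
            port:
              number: 80
```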

This could be a solution, but in my case I need something more dynamic: I do not want to update the Ingress every time I add a subdomain.

I have almost solved it with an nginx proxy like this:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-subdomain-1
    spec:
      replicas: 1
      selector:
        matchLabels:
          name: my-subdomain-1
      template:
        metadata:
          labels:
            name: my-subdomain-1
            type: app
        spec:
          hostname: sub1
          subdomain: my-internal-host
          containers:
            - image: nginx:alpine
              name: nginx
              ports:
                - name: nginx
                  containerPort: 80
                  hostPort: 80
          restartPolicy: Always
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-subdomain-2
    spec:
      replicas: 1
      selector:
        matchLabels:
          name: my-subdomain-2
      template:
        metadata:
          labels:
            name: my-subdomain-2
            type: app
        spec:
          hostname: sub2
          subdomain: my-internal-host
          containers:
            - image: nginx:alpine
              name: nginx
              ports:
                - name: nginx
                  containerPort: 80
                  hostPort: 80
          restartPolicy: Always
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-config-dns-file
    data:
      nginx.conf: |
        server {
          listen       80;
          server_name ~^(?<subdomain>.*?)\.;

          location / {
              proxy_pass         http://$subdomain.my-internal-host;
              root   /usr/share/nginx/html;
              index  index.html index.htm;
          }

          error_page   500 502 503 504  /50x.html;
          location = /50x.html {
              root   /usr/share/nginx/html;
          }
        }
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-proxy
    spec:
      replicas: 1
      selector:
        matchLabels:
          name: my-proxy
      template:
        metadata:
          labels:
            name: my-proxy
            type: app
        spec:
          subdomain: my-internal-host
          containers:
            - image: nginx:alpine
              name: nginx
              volumeMounts:
                - name: nginx-config-dns-file
                  mountPath: /etc/nginx/conf.d/default.conf
                  subPath: nginx.conf
              ports:
                - name: nginx
                  containerPort: 80
                  hostPort: 80
          volumes:
            - name: nginx-config-dns-file
              configMap:
                name: nginx-config-dns-file
          restartPolicy: Always
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: my-internal-host
    spec:
      selector:
        type: app
      clusterIP: None
      ports:
        - name: sk-port
          port: 80
          targetPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: sk-expose-dns
    spec:
      ports:
        - port: 80
      selector:
        name: my-proxy
      type: LoadBalancer

I do understand that I need the 'my-internal-host' Service so that all the deployments can see each other internally. The remaining problem was just nginx's proxy_pass: if I hard-code it as 'proxy_pass http://sub1.my-internal-host;' it works, but not with the regexp variable.

The problem was related to the nginx resolver.

Solved!

This is the correct nginx config:

server {
  listen       80;
  server_name ~^(?<subdomain>.*?)\.;
  resolver kube-dns.kube-system.svc.cluster.local valid=5s;

  location / {
      proxy_pass         http://$subdomain.my-internal-host.default.svc.cluster.local;
      root   /usr/share/nginx/html;
      index  index.html index.htm;
  }

  error_page   500 502 503 504  /50x.html;
  location = /50x.html {
      root   /usr/share/nginx/html;
  }
}
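To see what the server_name regex actually captures, here is a quick check of the same pattern in Python (Python's re module writes named groups as `(?P<name>…)` where PCRE/nginx accept `(?<name>…)`):

```python
import re

# Same pattern as nginx's `server_name ~^(?<subdomain>.*?)\.;`:
# lazily match everything up to the first dot of the Host header.
pattern = re.compile(r"^(?P<subdomain>.*?)\.")

for host in ("sub1.app.mydomain.com", "sub2.app.mydomain.com"):
    match = pattern.match(host)
    print(match.group("subdomain"))  # sub1, then sub2
```

nginx then expands $subdomain inside proxy_pass; because the upstream hostname contains a variable, nginx defers the DNS lookup to runtime, which is why the resolver directive pointing at kube-dns is required.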