Basic Networking through Kubernetes Services not working in Minikube

I'm running a cluster in Minikube, and basic networking through Kubernetes Services is not working.

Here are some example commands that show the problem.

Shelling into the mongo pod:

HOST$ kubectl exec -it my-system-mongo-54b8c75798-lptzq /bin/bash

Once inside, I connect to mongo using the docker network IP:

MONGO-POD# mongo mongodb://172.17.0.6
Welcome to the MongoDB shell.
> exit
bye

Now I try using the K8s service instead (DNS is working, since the name gets translated to 10.96.154.36 as seen below):

MONGO-POD# mongo mongodb://my-system-mongo
MongoDB shell version v3.6.3
connecting to: mongodb://my-system-mongo
2020-01-03T02:39:55.883+0000 W NETWORK  [thread1] Failed to connect to 10.96.154.36:27017 after 5000ms milliseconds, giving up.
2020-01-03T02:39:55.903+0000 E QUERY    [thread1] Error: couldn't connect to server my-system-mongo:27017, connection attempt failed :
connect@src/mongo/shell/mongo.js:251:13
@(connect):1:6
exception: connect failed

Ping doesn't work either:

MONGO-POD# ping my-system-mongo
PING my-system-mongo.default.svc.cluster.local (10.96.154.36) 56(84) bytes of data.
--- my-system-mongo.default.svc.cluster.local ping statistics ---
112 packets transmitted, 0 received, 100% packet loss, time 125365ms
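
For reference, this is roughly how I would check whether the Service has any endpoints behind it (standard kubectl commands run from the host; my-system-mongo is the Service name from the manifests below):

HOST$ kubectl get svc my-system-mongo
HOST$ kubectl get endpoints my-system-mongo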

My setup is Minikube 1.6.2 running Kubernetes 1.17, with Helm 3.0.2. Here is my full (Helm-generated) dry-run YAML:

NAME: mysystem-1578018793
LAST DEPLOYED: Thu Jan  2 18:33:13 2020
NAMESPACE: default
STATUS: pending-install
REVISION: 1
HOOKS:
---
# Source: mysystem/templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "my-system-test-connection"
  labels:
    helm.sh/chart: mysystem-0.1.0
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args:  ['my-system:']
  restartPolicy: Never
MANIFEST:
---
# Source: mysystem/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-system-configmap
  labels:
    helm.sh/chart: mysystem-0.1.0
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
data:
  _lots_of_key_value_pairs: here-I-shortened-it
---
# Source: mysystem/templates/my-system-mongo-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-system-mongo
  labels:
    helm.sh/chart: mysystem-0.1.0
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: mongo
spec:
  type: ClusterIP
  ports:
  - port: 27017
    targetPort: 27017
    protocol: TCP
    name: mongo
  selector:
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/component: mongo
---
# Source: mysystem/templates/my-system-pg-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-system-postgres
  labels:
    helm.sh/chart: mysystem-0.1.0
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: postgres
spec:
  type: ClusterIP
  ports:
  - port: 5432
    targetPort: 5432
    protocol: TCP
    name: postgres
  selector:
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/component: postgres
---
# Source: mysystem/templates/my-system-restsrv-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-system-rest-server
  labels:
    helm.sh/chart: mysystem-0.1.0
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: rest-server
spec:
  type: NodePort
  ports:
  #- port: 8009
  #  targetPort: 8009
  #  protocol: TCP
  #  name: jpda
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/component: rest-server
---
# Source: mysystem/templates/my-system-mongo-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-system-mongo
  labels:
    helm.sh/chart: mysystem-0.1.0
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: mysystem
      app.kubernetes.io/instance: mysystem-1578018793
      app.kubernetes.io/component: mongo
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mysystem
        app.kubernetes.io/instance: mysystem-1578018793
        app.kubernetes.io/component: mongo
    spec:
      imagePullSecrets:
        - name: regcred
      serviceAccountName: default
      securityContext:
        {}
      containers:
      - name: my-system-mongo-pod
        securityContext:
            {}
        image: private.hub.net/my-system-mongo:latest
        imagePullPolicy: Always
        envFrom:
          - configMapRef:
              name: my-system-configmap
        ports:
        - name: "mongo"
          containerPort: 27017
          protocol: TCP
        resources:
            {}
---
# Source: mysystem/templates/my-system-pg-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-system-postgres
  labels:
    helm.sh/chart: mysystem-0.1.0
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: mysystem
      app.kubernetes.io/instance: mysystem-1578018793
      app.kubernetes.io/component: postgres
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mysystem
        app.kubernetes.io/instance: mysystem-1578018793
        app.kubernetes.io/component: postgres
    spec:
      imagePullSecrets:
        - name: regcred
      serviceAccountName: default
      securityContext:
        {}
      containers:
      - name: mysystem
        securityContext:
            {}
        image: private.hub.net/my-system-pg:latest
        imagePullPolicy: Always
        envFrom:
          - configMapRef:
              name: my-system-configmap
        ports:
        - name: postgres
          containerPort: 5432
          protocol: TCP
        resources:
            {}
---
# Source: mysystem/templates/my-system-restsrv-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-system-rest-server
  labels:
    helm.sh/chart: mysystem-0.1.0
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: rest-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: mysystem
      app.kubernetes.io/instance: mysystem-1578018793
      app.kubernetes.io/component: rest-server
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mysystem
        app.kubernetes.io/instance: mysystem-1578018793
        app.kubernetes.io/component: rest-server
    spec:
      imagePullSecrets:
        - name: regcred
      serviceAccountName: default
      securityContext:
        {}
      containers:
      - name: mysystem
        securityContext:
            {}
        image: private.hub.net/my-system-restsrv:latest
        imagePullPolicy: Always
        envFrom:
          - configMapRef:
              name: my-system-configmap
        ports:
        - name: rest-server
          containerPort: 8080
          protocol: TCP
        #- name: "jpda"
        #  containerPort: 8009
        #  protocol: TCP
        resources:
            {}

NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=mysystem,app.kubernetes.io/instance=mysystem-1578018793" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:80

My best theory (partly after working through this) is that kube-proxy is not working correctly in minikube, but I'm not sure how to fix that. When I shell into minikube and grep the journal via journalctl for proxy, I get this:

# grep proxy journal.log
Jan 03 02:16:02 minikube sudo[2780]:   docker : TTY=unknown ; PWD=/home/docker ; USER=root ; COMMAND=/bin/touch -d 2020-01-02 18:16:03.05808666 -0800 /var/lib/minikube/certs/proxy-client.crt
Jan 03 02:16:02 minikube sudo[2784]:   docker : TTY=unknown ; PWD=/home/docker ; USER=root ; COMMAND=/bin/touch -d 2020-01-02 18:16:03.05908666 -0800 /var/lib/minikube/certs/proxy-client.key
Jan 03 02:16:15 minikube kubelet[2821]: E0103 02:16:15.423027    2821 reflector.go:156] object-"kube-system"/"kube-proxy": Failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
Jan 03 02:16:15 minikube kubelet[2821]: I0103 02:16:15.503466    2821 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-n78g9" (UniqueName: "kubernetes.io/secret/50fbf70b-724a-4b76-af7f-5f4b91735c84-kube-proxy-token-n78g9") pod "kube-proxy-pbs6s" (UID: "50fbf70b-724a-4b76-af7f-5f4b91735c84")
Jan 03 02:16:15 minikube kubelet[2821]: I0103 02:16:15.503965    2821 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/50fbf70b-724a-4b76-af7f-5f4b91735c84-xtables-lock") pod "kube-proxy-pbs6s" (UID: "50fbf70b-724a-4b76-af7f-5f4b91735c84")
Jan 03 02:16:15 minikube kubelet[2821]: I0103 02:16:15.530948    2821 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/50fbf70b-724a-4b76-af7f-5f4b91735c84-lib-modules") pod "kube-proxy-pbs6s" (UID: "50fbf70b-724a-4b76-af7f-5f4b91735c84")
Jan 03 02:16:15 minikube kubelet[2821]: I0103 02:16:15.538938    2821 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/50fbf70b-724a-4b76-af7f-5f4b91735c84-kube-proxy") pod "kube-proxy-pbs6s" (UID: "50fbf70b-724a-4b76-af7f-5f4b91735c84")
Jan 03 02:16:15 minikube systemd[1]: Started Kubernetes transient mount for /var/lib/kubelet/pods/50fbf70b-724a-4b76-af7f-5f4b91735c84/volumes/kubernetes.io~secret/kube-proxy-token-n78g9.
Jan 03 02:16:16 minikube kubelet[2821]: E0103 02:16:16.670527    2821 configmap.go:200] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Jan 03 02:16:16 minikube kubelet[2821]: E0103 02:16:16.670670    2821 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/configmap/50fbf70b-724a-4b76-af7f-5f4b91735c84-kube-proxy\" (\"50fbf70b-724a-4b76-af7f-5f4b91735c84\")" failed. No retries permitted until 2020-01-03 02:16:17.170632812 +0000 UTC m=+13.192986021 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/50fbf70b-724a-4b76-af7f-5f4b91735c84-kube-proxy\") pod \"kube-proxy-pbs6s\" (UID: \"50fbf70b-724a-4b76-af7f-5f4b91735c84\") : failed to sync configmap cache: timed out waiting for the condition"

While this does show some problems, I'm not sure how to act on them or correct them.
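
In case it helps, this is how I would check the kube-proxy pod itself rather than the journal (standard kubectl commands; I'm assuming the kube-proxy DaemonSet pods carry the default k8s-app=kube-proxy label):

HOST$ kubectl -n kube-system get pods -l k8s-app=kube-proxy
HOST$ kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=50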

Update:

I found this while digging through the logs:

# grep conntrack journal.log
Jan 03 02:16:04 minikube kubelet[2821]: W0103 02:16:04.286682    2821 hostport_manager.go:69] The binary conntrack is not installed, this can cause failures in network connection cleanup.

Looking into conntrack now, although the minikube VM has neither yum nor apt!

There is a typo in your mongodb service definition.

 - port: 27107
   targetPort: 27017

Change the port to 27017.
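
For clarity, the corrected ports block of the Service would look like this (everything else unchanged):

spec:
  type: ClusterIP
  ports:
  - port: 27017
    targetPort: 27017
    protocol: TCP
    name: mongo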

Let's look at the relevant Service:

apiVersion: v1
kind: Service
metadata:
  name: my-system-mongo
spec:
  ports:
  - port: 27017       # note typo here, see @aviator's answer
    targetPort: 27017
    protocol: TCP
    name: mongo
  selector:
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793

Note the selector: in particular; it will route traffic to any pod that has those two labels. For example, this is a valid target:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-system-postgres
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: mysystem
      app.kubernetes.io/instance: mysystem-1578018793
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mysystem
        app.kubernetes.io/instance: mysystem-1578018793

Since every pod has the same pair of labels, any Service can send traffic to any pod; your "MongoDB" Service will not necessarily target the actual MongoDB pod. Your Deployment specs have the same problem, and I wouldn't be surprised if the kubectl get pods output is a little confusing.
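
One way to see which pods a Service has actually selected is to list its endpoints (standard kubectl; the Service name comes from your manifests). If the list includes the postgres or rest-server pod IPs, the selector is too broad:

kubectl get endpoints my-system-mongo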

The right answer here is to add another label that distinguishes the different parts of the application. The Helm docs recommend

app.kubernetes.io/component: mongodb

This needs to be present in the labels of the pod spec embedded in the Deployment, in the matching Deployment selector, and in the matching Service selector; it makes sense to simply set it on all of the related objects, including the Deployment and Service labels.
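
As a sketch of where the label has to appear, using the names from the chart above (the component value mongodb follows the Helm docs' example; the image line is copied from your manifest):

apiVersion: v1
kind: Service
metadata:
  name: my-system-mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
    protocol: TCP
    name: mongo
  selector:
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/component: mongodb        # Service selector
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-system-mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: mysystem
      app.kubernetes.io/instance: mysystem-1578018793
      app.kubernetes.io/component: mongodb      # Deployment selector
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mysystem
        app.kubernetes.io/instance: mysystem-1578018793
        app.kubernetes.io/component: mongodb    # pod labels the Service will match
    spec:
      containers:
      - name: my-system-mongo-pod
        image: private.hub.net/my-system-mongo:latest
        ports:
        - name: mongo
          containerPort: 27017
          protocol: TCP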