Why is the docker0 bridge showing DOWN in a Kubernetes cluster with Flannel?

I have a Kubernetes cluster created with kubeadm, with 1 master node and 2 worker nodes. Flannel is used as the network plugin. I noticed that the docker0 bridge is down on the master and on all worker nodes, yet the cluster network works fine. Is the docker0 bridge down by design when a network plugin such as Flannel is used in a Kubernetes cluster?

docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:ad:8f:3a:99 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0

I'm posting a community wiki answer from this SO thread, as I believe it answers your question.


There are two networking models involved here: Docker's and Kubernetes'.

Docker model

By default, Docker uses host-private networking. It creates a virtual bridge, called docker0 by default, and allocates a subnet from one of the private address blocks defined in RFC1918 for that bridge. For each container that Docker creates, it allocates a virtual Ethernet device (called veth) which is attached to the bridge. The veth is mapped to appear as eth0 in the container, using Linux namespaces. The in-container eth0 interface is given an IP address from the bridge’s address range.
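A quick way to observe this on a plain Docker host (a sketch; the container name demo is arbitrary, and interface details will differ on your machine):

    # Start a throwaway container on Docker's default bridge network
    docker run -d --name demo busybox sleep 3600

    # Each bridged container gets a host-side veth device attached to docker0
    ip -br link show type veth

    # Inside the container, eth0 has an address from docker0's subnet
    docker exec demo ip addr show eth0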

The result is that Docker containers can talk to other containers only if they are on the same machine (and thus the same virtual bridge). Containers on different machines can not reach each other - in fact they may end up with the exact same network ranges and IP addresses.
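You can see the collision risk directly: with default settings, two unrelated Docker hosts will typically both report the same bridge subnet (a sketch; run it on each host):

    # Print the subnet Docker allocated for the default bridge;
    # separate hosts commonly both answer 172.17.0.0/16
    docker network inspect bridge -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}'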

Kubernetes model

Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies):

  • all containers can communicate with all other containers without NAT
  • all nodes can communicate with all containers (and vice versa) without NAT
  • the IP a container sees itself as is the same IP others see it as (a quick check of the first two points is sketched after this list)
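A minimal sketch of checking the first two requirements from a node (the pod IP below is a placeholder; substitute one from your own cluster):

    # Pod IPs come from the cluster's pod CIDR (10.244.0.0/16 is
    # Flannel's conventional default), not from docker0's range
    kubectl get pods -o wide

    # From any node, a pod IP should answer directly, with no NAT in between
    ping -c 3 10.244.1.12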

Kubernetes applies IP addresses at the Pod scope - containers within a Pod share their network namespaces - including their IP address. This means that containers within a Pod can all reach each other’s ports on localhost. This does imply that containers within a Pod must coordinate port usage, but this is no different than processes in a VM. This is called the “IP-per-pod” model. This is implemented, using Docker, as a “pod container” which holds the network namespace open while “app containers” (the things the user specified) join that namespace with Docker’s --net=container:<id> function.
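A rough sketch of that trick using plain Docker (the names pod-sandbox and app are illustrative, not what kubelet actually uses, and the pause image tag may differ in your cluster):

    # Hold a network namespace open with a pause-style container
    docker run -d --name pod-sandbox k8s.gcr.io/pause:3.2

    # Join an "app container" to that namespace; both now share
    # eth0, the IP address, and localhost
    docker run -d --name app --net=container:pod-sandbox busybox sleep 3600

    # The app container reports the sandbox's interface and address
    docker exec app ip addr show eth0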

As with Docker, it is possible to request host ports, but this is reduced to a very niche operation. In this case a port will be allocated on the host Node and traffic will be forwarded to the Pod. The Pod itself is blind to the existence or non-existence of host ports.
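For illustration, a hedged sketch of a Pod spec that requests a host port (applied inline; the name, image, and port numbers are arbitrary):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: hostport-demo
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
          hostPort: 8080   # traffic to <node-ip>:8080 is forwarded to the Pod
    EOF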

To integrate the platform with the underlying network infrastructure, Kubernetes provides a plugin specification called the Container Network Interface (CNI). As long as the fundamental Kubernetes requirements are met, vendors are free to implement the network stack as they like, typically using overlay networks to support multi-subnet and multi-AZ clusters. This is also the direct answer to your question: with a CNI plugin such as Flannel, kubelet attaches pod interfaces to the CNI-managed bridge (cni0) rather than to docker0, so docker0 has no attached ports, reports NO-CARRIER, and shows state DOWN by design.
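On a node you can usually see what kubelet is using instead of Docker's networking (the paths are conventional defaults and may differ by installation):

    # CNI configuration installed by Flannel's manifest
    cat /etc/cni/net.d/10-flannel.conflist

    # The CNI-managed bridge that pods actually attach to
    ip addr show cni0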

Flannel, a popular CNI plugin, is one implementation of such an overlay network.
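If you want to see the overlay pieces on a node (a sketch, assuming Flannel's default VXLAN backend):

    # VXLAN device Flannel creates to encapsulate pod traffic between nodes
    ip -d link show flannel.1

    # Per-node subnet Flannel leased from the cluster's pod CIDR
    cat /run/flannel/subnet.env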

You can read more about other CNI plugins here. The Kubernetes approach is explained in the Cluster Networking docs. I also recommend reading Kubernetes Is Hard: Why EKS Makes It Easier for Network and Security Architects, which explains how Flannel works, as well as another article from Medium.

I hope this answers your question.