How do I get one pod to network to another pod in Kubernetes? (SIMPLE)
I've been banging my head against this wall for a while now. There is a ton of information about Kubernetes on the web, but it all assumes so much prior knowledge that a newbie like me doesn't really have much to go on.
So, can anyone share a simple example (as a yaml file) of just:
- two pods
- let's say one pod has a backend (I don't know - node.js) and one has a frontend (say React)
- a way to network between them
And then an example of making an API call from the back to the front.
I start looking into this sort of thing and all of a sudden I hit this page - https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-achieve-this. This is super unhelpful. I don't want or need advanced network policies, nor do I have the time to go through several different service layers that are mapped on top of Kubernetes. I just want to figure out a trivial example of a network request.
Hopefully if this example exists on Stack Overflow it will serve other people as well.
Any help would be appreciated. Thanks.
EDIT: It looks like the simplest example may be to use an Ingress controller.
EDIT EDIT:
I'm attempting to deploy a minimal example - I'll walk through some of my steps here and point out where my issues come up.
Here is my yaml file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/frontend_example
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/backend_example
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: LoadBalancer
  selector:
    app: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
spec:
  rules:
  - host: www.kubeplaytime.example
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
      - path: /api
        backend:
          serviceName: backend
          servicePort: 80
What I think this is doing:
- Deploying my frontend and backend apps - I pushed patientplatypus/frontend_example and patientplatypus/backend_example to dockerhub and then the images get pulled down. One open question I have: what if I don't want to pull the images from Docker Hub and rather would just like to load them from my localhost - is that possible? (One possible way is sketched right after this list.) In that case I would push my code to the production server, build the docker images on the server and then upload to Kubernetes. The benefit is that I don't have to rely on dockerhub if I want my images to be private.
- Creating two service endpoints that route outside traffic from a web browser to each of the deployments. These services are of type loadBalancer because they are balancing the traffic among the (in this case 3) replica sets that I have in the deployments.
- Finally, I have an ingress controller which is supposed to allow my services to be routed through www.kubeplaytime.example and its /api path. However, this is not working.
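On that open question about skipping Docker Hub: one option I've seen (untested here, and it assumes a single-node dev cluster such as minikube rather than the cloud cluster I'm using above) is to build the image inside the cluster's own Docker daemon and tell Kubernetes not to pull at all:
# sketch only - assumes minikube; image names and paths are placeholders
eval $(minikube docker-env)                   # point the docker CLI at the cluster's daemon
docker build -t frontend_example:local ./frontend
# then in the Deployment spec use:
#   image: frontend_example:local
#   imagePullPolicy: Never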
What happens when I run this?
patientplatypus:~/Documents/kubePlay:09:17:50$kubectl create -f kube-deploy.yaml
deployment.apps "frontend" created
service "frontend" created
deployment.apps "backend" created
service "backend" created
ingress.extensions "frontend" created
So first, it appears to create all the parts that I need with no errors.
patientplatypus:~/Documents/kubePlay:09:22:30$kubectl get --watch services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend LoadBalancer 10.0.18.174 <pending> 80:31649/TCP 1m
frontend LoadBalancer 10.0.100.65 <pending> 80:32635/TCP 1m
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 10d
frontend LoadBalancer 10.0.100.65 138.91.126.178 80:32635/TCP 2m
backend LoadBalancer 10.0.18.174 138.91.121.182 80:31649/TCP 2m
Second, if I watch the services I eventually get IP addresses that I can use to navigate in my browser to these sites. Each of the IP addresses above routes me to the frontend and backend respectively.
HOWEVER
I have an issue when I try to use the ingress controller - it seemingly deployed, but I don't know how to get to it.
patientplatypus:~/Documents/kubePlay:09:24:44$kubectl get ingresses
NAME HOSTS ADDRESS PORTS AGE
frontend www.kubeplaytime.example 80 16m
- so there is no address I can use, and www.kubeplaytime.example does not appear to work.
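My guess at this point is that the empty ADDRESS column means nothing is actually serving this Ingress yet. A couple of standard commands that might help confirm that (just a sketch):
kubectl describe ingress frontend                       # the Events section shows whether a controller picked it up
kubectl get pods --all-namespaces | grep -i ingress     # is any ingress controller actually running?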
What it appears I have to do in order to route to the ingress extension I just created is to use a service and deployment on it in order to get an IP address, but this very quickly starts to look insanely complicated.
For example, take a look at this Medium article: https://medium.com/@cashisclay/kubernetes-ingress-82aa960f658e.
It would seem that the necessary code to add just for the service routing to the Ingress (i.e. what he calls the Ingress Controller) is this:
---
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: ingress-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
        name: ingress-nginx
        imagePullPolicy: Always
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        - name: https
          containerPort: 443
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-default-backend
spec:
  ports:
  - port: 80
    targetPort: http
  selector:
    app: nginx-default-backend
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nginx-default-backend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-default-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
This, it would seem, needs to be appended to my other yaml code above in order to get a service entry point for my ingress routing, and it does appear to give an ip:
patientplatypus:~/Documents/kubePlay:09:54:12$kubectl get --watch services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend LoadBalancer 10.0.31.209 <pending> 80:32428/TCP 4m
frontend LoadBalancer 10.0.222.47 <pending> 80:32482/TCP 4m
ingress-nginx LoadBalancer 10.0.28.157 <pending> 80:30573/TCP,443:30802/TCP 4m
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 10d
nginx-default-backend ClusterIP 10.0.71.121 <none> 80/TCP 4m
frontend LoadBalancer 10.0.222.47 40.121.7.66 80:32482/TCP 5m
ingress-nginx LoadBalancer 10.0.28.157 40.121.6.179 80:30573/TCP,443:30802/TCP 6m
backend LoadBalancer 10.0.31.209 40.117.248.73 80:32428/TCP 7m
So ingress-nginx appears to be the site I want to hit. Navigating to 40.121.6.179 returns the default 404 message (default backend - 404) - it does not go to frontend as the / rule should route. /api returns the same. Navigating to my host namespace www.kubeplaytime.example from the browser returns a 404 - no error handling.
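One thing that might be worth trying here (a sketch, using the IPs from the output above): the Ingress rule matches on the host name, so hitting the bare IP will always fall through to the default backend; sending the Host header explicitly should exercise the / and /api rules without needing real DNS:
curl -v -H 'Host: www.kubeplaytime.example' http://40.121.6.179/
curl -v -H 'Host: www.kubeplaytime.example' http://40.121.6.179/api
# or point the host name at the controller locally:
echo '40.121.6.179 www.kubeplaytime.example' | sudo tee -a /etc/hosts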
QUESTIONS
- Is an Ingress Controller strictly necessary, and if so is there a less complicated version of this?
- I feel I am close, so what am I doing wrong?
FULL YAML available here: https://gist.github.com/patientplatypus/fa07648339ee6538616cb69282a84938
Thanks for the help!
EDIT EDIT EDIT
I've attempted to use HELM. On the surface it appears to be a simple interface, so I tried spinning it up:
patientplatypus:~/Documents/kubePlay:12:13:00$helm install stable/nginx-ingress
NAME: erstwhile-beetle
LAST DEPLOYED: Sun May 6 12:13:30 2018
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
erstwhile-beetle-nginx-ingress-controller 1 1s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
erstwhile-beetle-nginx-ingress-controller LoadBalancer 10.0.216.38 <pending> 80:31494/TCP,443:32118/TCP 1s
erstwhile-beetle-nginx-ingress-default-backend ClusterIP 10.0.55.224 <none> 80/TCP 1s
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
erstwhile-beetle-nginx-ingress-controller 1 1 1 0 1s
erstwhile-beetle-nginx-ingress-default-backend 1 1 1 0 1s
==> v1beta1/PodDisruptionBudget
NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
erstwhile-beetle-nginx-ingress-controller 1 N/A 0 1s
erstwhile-beetle-nginx-ingress-default-backend 1 N/A 0 1s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
erstwhile-beetle-nginx-ingress-controller-7df9b78b64-24hwz 0/1 ContainerCreating 0 1s
erstwhile-beetle-nginx-ingress-default-backend-849b8df477-gzv8w 0/1 ContainerCreating 0 1s
NOTES:
The nginx-ingress controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w erstwhile-beetle-nginx-ingress-controller'
An example Ingress that makes use of the controller:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: example
  namespace: foo
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - backend:
          serviceName: exampleService
          servicePort: 80
        path: /
  # This section is only required if TLS is to be enabled for the Ingress
  tls:
  - hosts:
    - www.example.com
    secretName: example-tls
If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
apiVersion: v1
kind: Secret
metadata:
  name: example-tls
  namespace: foo
data:
  tls.crt: <base64 encoded cert>
  tls.key: <base64 encoded key>
type: kubernetes.io/tls
This looks really great - it spins everything up and gives an example of how to add an ingress. Since I spun up helm on a blank kubectl cluster, I used the following yaml file to add in what I thought would be required.
The file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/frontend_example
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/backend_example
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: LoadBalancer
  selector:
    app: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /api
        backend:
          serviceName: backend
          servicePort: 80
      - path: /
        frontend:
          serviceName: frontend
          servicePort: 80
Deploying this to the cluster, however, runs into this error:
patientplatypus:~/Documents/kubePlay:11:44:20$kubectl create -f kube-deploy.yaml
deployment.apps "frontend" created
service "frontend" created
deployment.apps "backend" created
service "backend" created
error: error validating "kube-deploy.yaml": error validating data: [ValidationError(Ingress.spec.rules[0].http.paths[1]): unknown field "frontend" in io.k8s.api.extensions.v1beta1.HTTPIngressPath, ValidationError(Ingress.spec.rules[0].http.paths[1]): missing required field "backend" in io.k8s.api.extensions.v1beta1.HTTPIngressPath]; if you choose to ignore these errors, turn validation off with --validate=false
So, the question then becomes, well crap, how do I debug this? (The validation error itself points at the fix: the second path entry uses frontend: where the field has to be named backend:.) If you spit out the code that helm produces, it's basically non-human-readable - there's no way to go in there and figure out what's going on.
Check it out: https://gist.github.com/patientplatypus/0e281bf61307f02e16e0091397a1d863 - over 1000 lines!
If anyone has a better way of debugging a helm deploy, add it to the list of open questions.
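A few standard Helm/kubectl commands that might help with that debugging question (just a sketch, using the release name from the output above):
helm get manifest erstwhile-beetle     # dumps exactly the manifests the release installed
helm status erstwhile-beetle           # resource summary for the release
kubectl describe ingress               # the Events section often says why an Ingress isn't being served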
EDIT EDIT EDIT EDIT
To simplify to the extreme, I'm attempting to make a call from one pod to another using only the namespace.
Here is my React code where I make the http request:
axios.get('http://backend/test')
  .then(response => {
    console.log('return from backend and response: ', response);
  })
  .catch(error => {
    console.log('return from backend and error: ', error);
  })
I've also tried using http://backend.exampledeploy.svc.cluster.local/test without luck.
Here is my node code handling the GET:
router.get('/test', function(req, res, next) {
  res.json({"test":"test"})
});
Here is the yaml file I'm uploading to the cluster with kubectl:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend
  namespace: exampledeploy
  labels:
    app: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/frontend_example
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: exampledeploy
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  namespace: exampledeploy
  labels:
    app: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/backend_example
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: exampledeploy
spec:
  type: LoadBalancer
  selector:
    app: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
Uploading to the cluster appears to work, as I can see in my terminal:
patientplatypus:~/Documents/kubePlay:14:33:20$kubectl get all --namespace=exampledeploy
NAME READY STATUS RESTARTS AGE
pod/backend-584c5c59bc-5wkb4 1/1 Running 0 15m
pod/backend-584c5c59bc-jsr4m 1/1 Running 0 15m
pod/backend-584c5c59bc-txgw5 1/1 Running 0 15m
pod/frontend-647c99cdcf-2mmvn 1/1 Running 0 15m
pod/frontend-647c99cdcf-79sq5 1/1 Running 0 15m
pod/frontend-647c99cdcf-r5bvg 1/1 Running 0 15m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/backend LoadBalancer 10.0.112.160 168.62.175.155 80:31498/TCP 15m
service/frontend LoadBalancer 10.0.246.212 168.62.37.100 80:31139/TCP 15m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.extensions/backend 3 3 3 3 15m
deployment.extensions/frontend 3 3 3 3 15m
NAME DESIRED CURRENT READY AGE
replicaset.extensions/backend-584c5c59bc 3 3 3 15m
replicaset.extensions/frontend-647c99cdcf 3 3 3 15m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/backend 3 3 3 3 15m
deployment.apps/frontend 3 3 3 3 15m
NAME DESIRED CURRENT READY AGE
replicaset.apps/backend-584c5c59bc 3 3 3 15m
replicaset.apps/frontend-647c99cdcf 3 3 3 15m
However, when I attempt to make the request I get the following error:
return from backend and error:
Error: Network Error
Stack trace:
createError@http://168.62.37.100/static/js/bundle.js:1555:15
handleError@http://168.62.37.100/static/js/bundle.js:1091:14
App.js:14
Since the axios call is being made from the browser, I'm wondering if it is simply not possible to use this method to call the backend, even though the backend and frontend are in different pods. I'm a little lost, since I thought this was the simplest possible way to network pods together.
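One sanity check that follows from the kubectl output above: since the backend Service is type LoadBalancer it already has its own external IP, so it should be reachable from outside the cluster directly - which is something a browser could use, even though the in-cluster name backend is not:
# external IP taken from the `kubectl get all --namespace=exampledeploy` output above
curl http://168.62.175.155/test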
EDIT X5
I have determined that it is possible to curl the backend from the command line by exec'ing into the pod like this:
patientplatypus:~/Documents/kubePlay:15:25:25$kubectl exec -ti frontend-647c99cdcf-5mfz4 --namespace=exampledeploy -- curl -v http://backend/test
* Hostname was NOT found in DNS cache
* Trying 10.0.249.147...
* Connected to backend (10.0.249.147) port 80 (#0)
> GET /test HTTP/1.1
> User-Agent: curl/7.38.0
> Host: backend
> Accept: */*
>
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Content-Type: application/json; charset=utf-8
< Content-Length: 15
< ETag: W/"f-SzkCEKs7NV6rxiz4/VbpzPnLKEM"
< Date: Sun, 06 May 2018 20:25:49 GMT
< Connection: keep-alive
<
* Connection #0 to host backend left intact
{"test":"test"}
What this means, without a doubt, is that because the frontend code is executed in the browser it needs Ingress to gain access to the pod: http requests from the frontend break simple pod networking. I wasn't sure about this, but it means Ingress is necessary.
First of all, let's clarify some apparent misconceptions. You mention your front-end being a React application, which will presumably run in the user's browser. For this to work, your actual problem is not your back-end and front-end pods communicating with each other, but the browser needing to be able to connect to both these pods (to the front-end pod in order to load the React application, and to the back-end pod for the React application to make API calls).
To visualize:
+---------+
+---| Browser |---+
| +---------+ |
V V
+-----------+ +----------+ +-----------+ +----------+
| Front-end |---->| Back-end | | Front-end | | Back-end |
+-----------+ +----------+ +-----------+ +----------+
(what you asked for) (what you need)
As already stated, the easiest solution for this would be to use an Ingress controller. I won't go into detail on how to set up an Ingress controller here; in some cloud environments (like GKE) you will be able to use an Ingress controller provided to you by the cloud provider. Otherwise, you can set up the NGINX Ingress controller. Have a look at the NGINX Ingress controller's deployment guide for more information.
Defining services
First, start by defining Service resources for both your front-end and back-end application (these would also allow your Pods to communicate with each other). A service definition might look like this:
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
Make sure that your Pods have labels that can be selected by the Service resource (in this example, I'm using app=backend and app=frontend as labels).
If you want to establish Pod-to-Pod communication, you're done now. In each Pod, you can now use backend.<namespace>.svc.cluster.local (or backend as a shorthand) and frontend as host names to connect to that Pod.
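A quick way to verify that from inside the cluster might look like this (just a sketch; substitute one of your actual frontend pod names):
kubectl exec -ti <frontend-pod-name> -- curl http://backend/test
# the fully qualified form, assuming the 'default' namespace:
kubectl exec -ti <frontend-pod-name> -- curl http://backend.default.svc.cluster.local/test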
Defining Ingresses
Next, you can define the Ingress resources; since both services need connectivity from outside the cluster (the user's browser), you will need Ingress definitions for both services.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
spec:
  rules:
  - host: www.your-application.example
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: backend
spec:
  rules:
  - host: api.your-application.example
    http:
      paths:
      - path: /
        backend:
          serviceName: backend
          servicePort: 80
Alternatively, you could also aggregate the front-end and back-end with a single Ingress resource (there is no "right" answer here, it's just a matter of preference):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
spec:
  rules:
  - host: www.your-application.example
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
      - path: /api
        backend:
          serviceName: backend
          servicePort: 80
After that, make sure that both www.your-application.example and api.your-application.example point to your Ingress controller's external IP address, and you should be done.
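If you don't control real DNS for those host names yet, a hosts-file entry pointing both of them at the controller's external IP is usually enough for testing (sketch; the IP is a placeholder for whatever your controller's Service reports):
echo '<INGRESS-CONTROLLER-IP> www.your-application.example api.your-application.example' | sudo tee -a /etc/hosts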
To use an ingress controller you need to have a valid domain (with a DNS server configured to point at your ingress controller's IP). This is not due to any Kubernetes "magic" but due to how vhosts work (here is an example for nginx - very often used as an ingress server, but any other ingress implementation works the same way under the hood).
If you can't configure your domain, the easiest way for development is to create Kubernetes Services. There is a nice shortcut for doing this using kubectl expose:
kubectl expose pod frontend-pod --port=444 --name=frontend
kubectl expose pod backend-pod --port=888 --name=backend
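After that, the pods can reach each other through DNS names matching the service names, for example (sketch; ports taken from the expose commands above):
# from inside any pod in the same namespace
curl http://frontend:444
curl http://backend:888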
It turns out I was overcomplicating things. Here is the Kubernetes file that does what I want. You can do this using two deployments (frontend and backend) and one service entrypoint. As far as I can tell, a service can load-balance across many (not just 2) different deployments, which means for practical development this should be a good start for microservices. One of the benefits of the ingress method is that it allows the use of path names rather than port numbers, but given the difficulty it doesn't seem practical for development.
Here is the yaml file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: exampleapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: exampleapp
  template:
    metadata:
      labels:
        app: exampleapp
    spec:
      containers:
      - name: nginx
        image: patientplatypus/kubeplayfrontend
        ports:
        - containerPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  labels:
    app: exampleapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: exampleapp
  template:
    metadata:
      labels:
        app: exampleapp
    spec:
      containers:
      - name: nginx
        image: patientplatypus/kubeplaybackend
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: entrypt
spec:
  type: LoadBalancer
  ports:
  - name: backend
    port: 8080
    targetPort: 5000
  - name: frontend
    port: 81
    targetPort: 3000
  selector:
    app: exampleapp
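With that single entrypt service, both apps should end up behind one external IP on different ports; a quick way to check (sketch - the IP is whatever the service eventually reports):
kubectl get service entrypt             # note the EXTERNAL-IP once it is assigned
curl http://<EXTERNAL-IP>:81/           # frontend (forwards to containerPort 3000)
curl http://<EXTERNAL-IP>:8080/test     # backend (forwards to containerPort 5000)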
Here is the bash script I use to get it up and running (you may have to add a login command - docker login - in order to push to dockerhub):
#!/bin/bash
# stop all containers
echo stopping all containers
docker stop $(docker ps -aq)
# remove all containers
echo removing all containers
docker rm $(docker ps -aq)
# remove all images
echo removing all images
docker rmi $(docker images -q)
echo building backend
cd ./backend
docker build -t patientplatypus/kubeplaybackend .
echo push backend to dockerhub
docker push patientplatypus/kubeplaybackend:latest
echo building frontend
cd ../frontend
docker build -t patientplatypus/kubeplayfrontend .
echo push frontend to dockerhub
docker push patientplatypus/kubeplayfrontend:latest
echo now working on kubectl
cd ..
echo deleting previous variables
kubectl delete pods,deployments,services entrypt backend frontend
echo creating deployment
kubectl create -f kube-deploy.yaml
echo watching services spin up
kubectl get services --watch
The actual code is just a frontend React app making an axios http call to a backend node route in componentDidMount of the starting app page.
You can also see a working version here: https://github.com/patientplatypus/KubernetesMultiPodCommunication
Thanks again everyone for your help.