Creating pod and service for custom kafka connect image with kubernetes

I have successfully created a custom Kafka Connect image that includes connectors from Confluent Hub.

I am trying to create a pod and a service to run it on GCP with Kubernetes.

How should I configure the YAML files? I took the sections below from the quick start guide. This is what I tried. Dockerfile:

FROM confluentinc/cp-kafka-connect-base:latest
ENV CONNECT_PLUGIN_PATH="/usr/share/java,/usr/share/confluent-hub-components,/usr/share/java/kafka-connect-jdbc"
RUN confluent-hub install --no-prompt confluentinc/kafka-connect-jdbc:10.2.6
RUN confluent-hub install --no-prompt debezium/debezium-connector-mysql:1.7.1
RUN confluent-hub install --no-prompt debezium/debezium-connector-postgresql:1.7.1
RUN confluent-hub install --no-prompt confluentinc/kafka-connect-oracle-cdc:1.5.0
RUN wget -O /usr/share/confluent-hub-components/confluentinc-kafka-connect-jdbc/lib/mysql-connector-java-8.0.26.jar https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.26/mysql-connector-java-8.0.26.jar

Modified section of confluent-platform.yaml:

apiVersion: platform.confluent.io/v1beta1
kind: Connect
metadata:
  name: connect
  namespace: confluent
spec:
  replicas: 1
  image:
    application: maxprimeaery/kafka-connect-jdbc:latest   #confluentinc/cp-server-connect:7.0.1
    init: confluentinc/confluent-init-container:2.2.0-1
  configOverrides:
    server:
      - config.storage.replication.factor=1
      - offset.storage.replication.factor=1
      - status.storage.replication.factor=1
  podTemplate:
    resources:
      requests:
        cpu: 200m
        memory: 512Mi
    probe:
      liveness:
        periodSeconds: 10
        failureThreshold: 5
        timeoutSeconds: 500
    podSecurityContext:
      fsGroup: 1000
      runAsUser: 1000
      runAsNonRoot: true

These are the events I get for the connect-0 pod:

Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  45m                 default-scheduler  Successfully assigned confluent/connect-0 to gke-my-kafka-cluster-default-pool-6ee97fb9-fh9w
  Normal   Pulling    45m                 kubelet            Pulling image "confluentinc/confluent-init-container:2.2.0-1"
  Normal   Pulled     45m                 kubelet            Successfully pulled image "confluentinc/confluent-init-container:2.2.0-1" in 17.447881861s
  Normal   Created    45m                 kubelet            Created container config-init-container
  Normal   Started    45m                 kubelet            Started container config-init-container
  Normal   Pulling    45m                 kubelet            Pulling image "maxprimeaery/kafka-connect-jdbc:latest"
  Normal   Pulled     44m                 kubelet            Successfully pulled image "maxprimeaery/kafka-connect-jdbc:latest" in 23.387676944s
  Normal   Created    44m                 kubelet            Created container connect
  Normal   Started    44m                 kubelet            Started container connect
  Warning  Unhealthy  41m (x5 over 42m)   kubelet            Liveness probe failed: HTTP probe failed with statuscode: 404
  Normal   Killing    41m                 kubelet            Container connect failed liveness probe, will be restarted
  Warning  Unhealthy  5m (x111 over 43m)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 404
  Warning  BackOff    17s (x53 over 22m)  kubelet            Back-off restarting failed container

Should I create a separate pod and service for the custom Kafka connector, or do I have to configure it in the resource above?
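For what it's worth, my understanding is that Confluent for Kubernetes creates a ClusterIP service for the Connect REST endpoint on its own, so a separate hand-written pod should not be needed. If you do want an explicitly defined service in front of the Connect REST API (port 8083), a minimal sketch might look like the following; the `app: connect` selector is an assumption about how the operator labels its pods and should be checked against the actual pod labels:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: connect-rest        # hypothetical name, to avoid clashing with the operator's own service
  namespace: confluent
spec:
  selector:
    app: connect            # assumption: verify with `kubectl get pod connect-0 -n confluent --show-labels`
  ports:
    - name: rest
      port: 8083
      targetPort: 8083
```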

UPDATE to my question

I have figured out how to configure this in Kubernetes, by adding the following to the Connect resource:

apiVersion: platform.confluent.io/v1beta1
kind: Connect
metadata:
  name: connect
  namespace: confluent
spec:
  replicas: 1
  image:
    application: confluentinc/cp-server-connect:7.0.1
    init: confluentinc/confluent-init-container:2.2.0-1
  configOverrides:
    server:
      - config.storage.replication.factor=1
      - offset.storage.replication.factor=1
      - status.storage.replication.factor=1
  build:
    type: onDemand
    onDemand:
      plugins:
        locationType: confluentHub
        confluentHub:
          - name: kafka-connect-jdbc
            owner: confluentinc
            version: 10.2.6
          - name: kafka-connect-oracle-cdc
            owner: confluentinc
            version: 1.5.0
          - name: debezium-connector-mysql
            owner: debezium
            version: 1.7.1
          - name: debezium-connector-postgresql
            owner: debezium
            version: 1.7.1
      storageLimit: 4Gi
  podTemplate:
    resources:
      requests:
        cpu: 200m
        memory: 1024Mi
    probe:
      liveness:
        periodSeconds: 180 #DONT CHANGE THIS
        failureThreshold: 5
        timeoutSeconds: 500
    podSecurityContext:
      fsGroup: 1000
      runAsUser: 1000
      runAsNonRoot: true

But I still cannot add the mysql-connector JAR from the Maven repo.

I also tried building a new Docker image with it, but that did not work. I also tried this new section:

locationType: url #NOT WORKING. NO IDEA HOW TO CONFIGURE THAT
url:
  - name: mysql-connector-java
    archivePath: https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.26/mysql-connector-java-8.0.26.jar
    checksum: sha512sum #definitely wrong

After some retries, I found that I just had to wait a bit longer.

probe:
      liveness:
        periodSeconds: 180 #DONT CHANGE THIS
        failureThreshold: 5
        timeoutSeconds: 500

The periodSeconds: 180 setting gives the pod more time to reach the Running state, and then I can just use my own image:

image:
    application: maxprimeaery/kafka-connect-jdbc:5.0
    init: confluentinc/confluent-init-container:2.2.0-1

And the build section can be removed after these changes.
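Once the Connect pod is Running, the connectors themselves can also be created declaratively through the operator's Connector resource instead of calling the REST API by hand. A minimal sketch for the JDBC source plugin baked into the image above; all connection values are placeholders, and the exact field names should be double-checked against the Confluent for Kubernetes reference:

```yaml
apiVersion: platform.confluent.io/v1beta1
kind: Connector
metadata:
  name: jdbc-source                 # hypothetical connector name
  namespace: confluent
spec:
  class: io.confluent.connect.jdbc.JdbcSourceConnector
  taskMax: 1
  configs:
    connection.url: "jdbc:mysql://my-mysql:3306/mydb"   # placeholder
    connection.user: "user"                             # placeholder
    topic.prefix: "jdbc-"
    mode: "bulk"
```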