How to fix "Forbidden!Configured service account doesn't have access" with Spark on Kubernetes?

I am trying to run the basic example of submitting a Spark application to a Kubernetes cluster.

I created the Docker images using the script in the Spark folder:

sudo ./bin/docker-image-tool.sh -mt spark-docker build

sudo docker image ls 

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
spark-r             spark-docker        793527583e00        17 minutes ago      740MB
spark-py            spark-docker        c984e15fe747        18 minutes ago      446MB
spark               spark-docker        71950de529b3        18 minutes ago      355MB
openjdk             8-alpine            88d1c219f815        15 hours ago        105MB
hello-world         latest              fce289e99eb9        3 months ago        1.84kB

Then I tried to submit the SparkPi example (as in the official documentation).

./bin/spark-submit \
        --master k8s://[MY_IP]:8443 \
        --deploy-mode cluster \
        --name spark-pi --class org.apache.spark.examples.SparkPi \
        --driver-memory 1g \
        --executor-memory 3g \
        --conf spark.executor.instances=2 \
        --conf spark.kubernetes.container.image=spark:spark-docker \
        local:///opt/spark/examples/jars/spark-examples_2.11-2.4.0.jar

But the run fails with the following exception:

io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/pods/spark-pi-1554304245069-driver. 
Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods "spark-pi-1554304245069-driver" is forbidden: User "system:serviceaccount:default:default" cannot get resource "pods" in API group "" in the namespace "default".

Here is the full log of the pod from the Kubernetes dashboard:

2019-04-03 15:10:50 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@49096b06{/executors/threadDump,null,AVAILABLE,@Spark}
2019-04-03 15:10:50 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@4a183d02{/executors/threadDump/json,null,AVAILABLE,@Spark}
2019-04-03 15:10:50 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5d05ef57{/static,null,AVAILABLE,@Spark}
2019-04-03 15:10:50 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@34237b90{/,null,AVAILABLE,@Spark}
2019-04-03 15:10:50 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@1d01dfa5{/api,null,AVAILABLE,@Spark}
2019-04-03 15:10:50 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@31ff1390{/jobs/job/kill,null,AVAILABLE,@Spark}
2019-04-03 15:10:50 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@759d81f3{/stages/stage/kill,null,AVAILABLE,@Spark}
2019-04-03 15:10:50 INFO  SparkUI:54 - Bound SparkUI to 0.0.0.0, and started at http://spark-pi-1554304245069-driver-svc.default.svc:4040
2019-04-03 15:10:50 INFO  SparkContext:54 - Added JAR file:///opt/spark/examples/jars/spark-examples_2.11-2.4.0.jar at spark://spark-pi-1554304245069-driver-svc.default.svc:7078/jars/spark-examples_2.11-2.4.0.jar with timestamp 1554304250157
2019-04-03 15:10:51 ERROR SparkContext:91 - Error initializing SparkContext.
org.apache.spark.SparkException: External scheduler cannot be instantiated
    at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2794)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:493)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun.apply(SparkSession.scala:935)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun.apply(SparkSession.scala:926)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:926)
    at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:31)
    at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
    at org.apache.spark.deploy.SparkSubmit.doRunMain(SparkSubmit.scala:167)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
    at org.apache.spark.deploy.SparkSubmit$$anon.doSubmit(SparkSubmit.scala:924)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/pods/spark-pi-1554304245069-driver. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods "spark-pi-1554304245069-driver" is forbidden: User "system:serviceaccount:default:default" cannot get resource "pods" in API group "" in the namespace "default".
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:470)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:407)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:379)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:343)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:312) 

Answers:

Hi, I had the same problem. Then I found this GitHub issue: https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes/issues/113

That made me understand what the problem was. I fixed it by following the Spark guide for clusters with RBAC, as described here: https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes/issues/113

Create the service account:

kubectl create serviceaccount spark

Give the service account the edit role on the cluster:

kubectl create clusterrolebinding spark-role --clusterrole=edit --serviceaccount=default:spark --namespace=default
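
If you want to double-check that the binding took effect, you can ask the API server directly whether the new service account may get pods (this verification step is my own suggestion, not part of the original answer):

kubectl auth can-i get pods \
        --namespace=default \
        --as=system:serviceaccount:default:spark
# should print "yes"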

Run spark-submit with the following flag, so that it runs with the (just created) service account:

--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark

Hope this helps!

Simone's solution worked perfectly for me. A few more hints for newcomers.

--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark

The conf above should be added as one of the first arguments. Appending it at the end of the spark-submit command does not work, because anything after the application jar is treated as an argument to the application itself rather than as a spark-submit option.
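
For example, the submit command from the question would look like this with the service account conf placed first (the IP, image tag, and jar path are the ones from the question; adjust them for your own cluster):

./bin/spark-submit \
        --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
        --master k8s://[MY_IP]:8443 \
        --deploy-mode cluster \
        --name spark-pi --class org.apache.spark.examples.SparkPi \
        --driver-memory 1g \
        --executor-memory 3g \
        --conf spark.executor.instances=2 \
        --conf spark.kubernetes.container.image=spark:spark-docker \
        local:///opt/spark/examples/jars/spark-examples_2.11-2.4.0.jar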