Apache Spark error: Cloned Python environment not found

I am trying to upgrade huggingface transformers to a version newer than the 2.11 we currently have. When I install any newer version of transformers in an Azure Databricks notebook via pip install transformers=={any version}, I get the following error during execution. I am new to this, and would greatly appreciate any feedback on how to troubleshoot it. Thank you.

org.apache.spark.SparkException: Cloned Python environment not found at /local_disk0/.ephemeral_nfs/envs/pythonEnv-89bc8046-d7ae-4968-b280-fc233a9bf3e4
at org.apache.spark.api.python.PythonWorkerFactory.waitForPythonEnvironment(PythonWorkerFactory.scala:190)
at org.apache.spark.api.python.PythonWorkerFactory.startDaemon(PythonWorkerFactory.scala:313)
at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:222)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:119)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:192)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:184)
at org.apache.spark.sql.execution.python.BatchEvalPythonExec.evaluate(BatchEvalPythonExec.scala:70)
at org.apache.spark.sql.execution.python.EvalPythonExec.$anonfun$doExecute(EvalPythonExec.scala:129)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitions(RDD.scala:844)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$adapted(RDD.scala:844)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:60)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:356)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:320)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:60)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:356)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:320)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:60)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:356)
at org.apache.spark.rdd.RDD.$anonfun$getOrCompute(RDD.scala:369)
at org.apache.spark.storage.BlockManager.$anonfun$doPutIterator(BlockManager.scala:1376)
at org.apache.spark.storage.BlockManager.org$apache$spark$storage$BlockManager$$doPut(BlockManager.scala:1303)
at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1367)
at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:1187)
at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:367)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:318)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:60)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:356)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:320)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:60)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:356)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:320)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.doRunTask(Task.scala:144)
at org.apache.spark.scheduler.Task.run(Task.scala:117)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run(Executor.scala:642)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1581)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:645)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
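As a first check before retrying the upgrade, it can help to confirm which transformers version the notebook session currently sees. A minimal sketch (`importlib.metadata` is standard library on Python 3.8+; note this only inspects the driver's environment, while the stack trace above shows an executor failing to locate its cloned copy of that environment):

```python
# Report the version of a distribution installed in the current
# environment, or None when it is absent. Useful before and after the
# pip upgrade to confirm which transformers build the driver sees.
from importlib.metadata import version, PackageNotFoundError
from typing import Optional

def installed_version(package: str) -> Optional[str]:
    """Return the installed version string, or None if not installed."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

print(installed_version("transformers"))
```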

Follow these steps to install the huggingface library on Azure Databricks:

Step 1: First install transformers with this command - pip install transformers

Step 2: To test the installation, try this - from transformers import pipeline

Step 3: Next, install PyTorch support with this command - pip install transformers[torch]

Step 4: To install TensorFlow support, use - pip install transformers[tf-cpu]

Step 5: To test the installation, I used - print(pipeline('sentiment-analysis')('we love you'))

This is how I successfully installed the huggingface library.
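The steps above can be sanity-checked in a single cell before running the pipeline test from step 5. A minimal sketch (the package names simply match the extras installed in steps 1-4):

```python
# Check that the libraries installed in steps 1-4 are importable in the
# current (cloned) notebook environment. find_spec returns None for a
# missing package without actually importing it, so this stays cheap.
import importlib.util

def has_package(name: str) -> bool:
    """True if `name` is importable in this environment."""
    return importlib.util.find_spec(name) is not None

for pkg in ("transformers", "torch", "tensorflow"):
    print(f"{pkg}: {'ok' if has_package(pkg) else 'MISSING'}")
```

If any package reports MISSING here, rerun the corresponding pip install from the steps above before attempting the pipeline test.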

参考 - Installation