How do you read a file from Azure Blob with Apache Spark, without Databricks but with wasbs, on Windows 10?

I have azure-storage-8.6.0.jar and hadoop-azure-3.0.1.jar. I keep reading on other forums that I have to modify the core-site.xml file in Hadoop's etc folder, as described in https://github.com/hning86/articles/blob/master/hadoopAndWasb.md. I didn't know I even needed to download all of Hadoop just to run Spark. I thought all I needed was winutils.exe in hadoop/bin.

    spark.read.load(f"wasbs://{container_name}@{storage_account_name}.blob.core.windows.net/{container_name}/myfile.txt")



Py4JJavaError: An error occurred while calling o53.load.
: java.io.IOException: No FileSystem for scheme: wasbs
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2660)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
    at org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:46)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:366)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:297)
    at org.apache.spark.sql.DataFrameReader.$anonfun$load$1(DataFrameReader.scala:286)
    at scala.Option.getOrElse(Option.scala:189)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:286)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:232)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
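A side note on the URI in the failing call above: in a wasbs URI the container name belongs only before the `@`, and the path after the host is resolved relative to the container root, so repeating `{container_name}` in the path is likely unintended. A minimal sketch of the expected shape (the helper name is made up for illustration):

```python
# Hypothetical helper to illustrate the shape of a wasbs URI:
# container once before the '@', then the path within the container.
def wasbs_url(container, account, path):
    return f"wasbs://{container}@{account}.blob.core.windows.net/{path.lstrip('/')}"

print(wasbs_url("mycontainer", "myaccount", "myfile.txt"))
# wasbs://mycontainer@myaccount.blob.core.windows.net/myfile.txt
```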

If you want to read a CSV file from Azure Blob Storage with PySpark on Windows 10, follow these steps:

  1. Install pyspark:
pip install pyspark
  2. Write the code (create a .py file):
from pyspark.sql import SparkSession
import traceback

try:

    spark = SparkSession.builder.getOrCreate()

    # Register the wasbs:// scheme with the Hadoop configuration Spark uses
    conf = spark.sparkContext._jsc.hadoopConfiguration()
    conf.set("fs.wasbs.impl", "org.apache.hadoop.fs.azure.NativeAzureFileSystem")

    # Storage account access key (from the Azure portal)
    spark.conf.set('fs.azure.account.key.<account name>.blob.core.windows.net',
                   '<account key>')
    df = spark.read.option("header", True).csv(
        'wasbs://<container name>@<account name>.blob.core.windows.net/<directory name>/<file name>')
    df.show()

except Exception as exp:
    print("Exception occurred")
    print(traceback.format_exc())

  3. Run the code:
cd <your python or env path>\Scripts
spark-submit --packages org.apache.hadoop:hadoop-azure:3.2.1,com.microsoft.azure:azure-storage:8.6.5 <your py file path>
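If you would rather not pass `--packages` on every run, the same Maven coordinates can be listed once in Spark's `conf/spark-defaults.conf` (a standard Spark configuration file; the coordinates below simply mirror the spark-submit command above):

```
spark.jars.packages  org.apache.hadoop:hadoop-azure:3.2.1,com.microsoft.azure:azure-storage:8.6.5
```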