Failing to read data from s3 to a spark dataframe in Sagemaker
I am trying to read a csv file from an s3 bucket (to which the SageMaker notebook has full access) into a Spark dataframe, but I am hitting the issue below, where sagemaker-spark_2.11-spark_2.2.0-1.1.1.jar cannot be found. Any tips on how to resolve this would be much appreciated!
bucket = "mybucket"
prefix = "folder/file.csv"
df = spark.read.csv("s3://{}/{}/".format(bucket,prefix))
Py4JJavaError: An error occurred while calling o388.csv.
: java.util.ServiceConfigurationError: org.apache.spark.sql.sources.DataSourceRegister: Error reading configuration file
at java.util.ServiceLoader.fail(ServiceLoader.java:232)
at java.util.ServiceLoader.parse(ServiceLoader.java:309)
at java.util.ServiceLoader.access$200(ServiceLoader.java:185)
at java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
at java.util.ServiceLoader.hasNext(ServiceLoader.java:474)
at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:42)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.filterImpl(TraversableLike.scala:247)
at scala.collection.TraversableLike$class.filter(TraversableLike.scala:259)
at scala.collection.AbstractTraversable.filter(Traversable.scala:104)
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:614)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:190)
at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:596)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.FileNotFoundException: /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/sagemaker_pyspark/jars/sagemaker-spark_2.11-spark_2.2.0-1.1.1.jar (No such file or directory)
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(ZipFile.java:219)
at java.util.zip.ZipFile.<init>(ZipFile.java:149)
at java.util.jar.JarFile.<init>(JarFile.java:166)
at java.util.jar.JarFile.<init>(JarFile.java:103)
at sun.net.www.protocol.jar.URLJarFile.<init>(URLJarFile.java:93)
at sun.net.www.protocol.jar.URLJarFile.getJarFile(URLJarFile.java:69)
at sun.net.www.protocol.jar.JarFileFactory.get(JarFileFactory.java:84)
at sun.net.www.protocol.jar.JarURLConnection.connect(JarURLConnection.java:122)
at sun.net.www.protocol.jar.JarURLConnection.getInputStream(JarURLConnection.java:150)
at java.net.URL.openStream(URL.java:1045)
at java.util.ServiceLoader.parse(ServiceLoader.java:304)
... 26 more
(Comment on the original question, posted as an answer)
Looks like a Jupyter kernel issue. I had a similar problem: I used the Sparkmagic (pyspark) kernel instead of Sparkmagic (pyspark3) and it worked fine. Follow the instructions in this blog and see if it helps.
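For reference, a minimal sketch of an alternative fix, assuming the sagemaker_pyspark package is importable in the notebook's kernel but its jars are not on the driver classpath (if the jar file is genuinely missing from site-packages, reinstalling the package is the likelier cure). It wires the shipped jars into the Spark session explicitly; note also that the trailing slash in the original snippet makes Spark treat file.csv as a directory, so it is dropped here:

import sagemaker_pyspark
from pyspark.sql import SparkSession

# classpath_jars() returns the absolute paths of the jars bundled with
# the sagemaker_pyspark package (including sagemaker-spark_2.11-*.jar)
jars = sagemaker_pyspark.classpath_jars()

spark = (SparkSession.builder
         .config("spark.driver.extraClassPath", ":".join(jars))
         .getOrCreate())

bucket = "mybucket"
prefix = "folder/file.csv"
# no trailing slash, so Spark reads the object itself rather than a "directory"
df = spark.read.csv("s3://{}/{}".format(bucket, prefix))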