Read a file in Spark Scala whose filename contains the special characters '{' and '}'
I want to read a file named monthlyPurchaseFile{202205}-May.TXT in Spark Scala.
I am using the following code:
val df = spark.read.text("handel_special_ch/monthlyPurchaseFile{202205}-May.TXT")
But I get the following exception:
org.apache.spark.sql.AnalysisException: Path does not exist: file:/home/hdp_batch_datalake_dev/handel_special_ch/monthlyPurchaseFile{202205}-May.TXT
at org.apache.spark.sql.execution.datasources.DataSource$.$anonfun$checkAndGlobPathIfNecessary(DataSource.scala:792)
at org.apache.spark.util.ThreadUtils$.$anonfun$parmap(ThreadUtils.scala:372)
at scala.concurrent.Future$.$anonfun$apply(Future.scala:659)
at scala.util.Success.$anonfun$map(Try.scala:255)
at scala.util.Success.map(Try.scala:213)
at scala.concurrent.Future.$anonfun$map(Future.scala:292)
at scala.concurrent.impl.Promise.liftedTree1(Promise.scala:33)
at scala.concurrent.impl.Promise.$anonfun$transform(Promise.scala:33)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1402)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
Please suggest how I can read a file whose name contains the { and } characters.
The path you pass to spark.read.text is not treated as a literal file name but as a glob pattern. Since { and } are glob metacharacters (they introduce an alternation like {a,b}), Spark tries to expand the path as a pattern and finds no matching file. You can use the ? wildcard, which matches any single character, so the following should work:
val df = spark.read.text("handel_special_ch/monthlyPurchaseFile?202205?-May.TXT")
The \ character acts as an escape in the glob pattern. So the following code works as expected and solves the problem (note the doubled backslash: a Scala string literal needs \\ to produce the single \ that the glob parser sees):
val df = spark.read.text("handel_special_ch/monthlyPurchaseFile\\{202205\\}-May.TXT")
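If such file names occur regularly, escaping each character by hand gets error-prone. Below is a small helper (a sketch, not part of the original answer; the name escapeGlob is my own) that backslash-escapes the characters Hadoop-style globs treat as special, so an arbitrary literal file name can be passed safely:

```scala
// Hypothetical helper: escape glob metacharacters so a literal file name
// survives Spark's glob expansion. The set of special characters below
// covers the common Hadoop glob syntax: \ { } [ ] ? *
object GlobEscape {
  private val special = "\\{}[]?*".toSet

  def escapeGlob(path: String): String =
    path.flatMap { c =>
      if (special.contains(c)) s"\\$c" // prefix each metacharacter with \
      else c.toString
    }
}

// Usage (assumes an active SparkSession named spark):
// val df = spark.read.text(
//   "handel_special_ch/" + GlobEscape.escapeGlob("monthlyPurchaseFile{202205}-May.TXT"))
```

Escaping only the file-name component, as in the usage line above, keeps any intentional wildcards in the directory part of the path working.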