Downloading spark-csv in Windows

I am a beginner in the Spark world and want to use SparkR for my machine learning algorithms.

I have Spark (1.6.1) installed in standalone mode on my laptop (Win 7, 64-bit), and I can run Spark, PySpark, and start SparkR on Windows thanks to this working guide: link. Once SparkR is up, I start with the well-known flights example:

#Set proxy
Sys.setenv(http_proxy="http://user:password@proxy.companyname.es:8080/")
#Set SPARK_HOME
Sys.setenv(SPARK_HOME="C:/Users/amartinezsistac/spark-1.6.1-bin-hadoop2.4")
#Load SparkR and its library
.libPaths(c(file.path(Sys.getenv("SPARK_HOME"),"R", "lib"), .libPaths()))
library(SparkR)
#Set Spark Context and SQL Context
sc <- sparkR.init(master="local")
sqlContext <- sparkRSQL.init(sc)
#Read Data
link <- "s3n://mortar-example-data/airline-data"
flights <- read.df(sqlContext, link, source = "com.databricks.spark.csv", header= "true")

However, I get the following error message after that last line:

Error in invokeJava(isStatic = TRUE, className, methodName, ...) : 
  java.lang.ClassNotFoundException: Failed to find data source: com.databricks.spark.csv. Please find packages at http://spark-packages.org
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.lookupDataSource(ResolvedDataSource.scala:77)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:102)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
    at org.apache.spark.sql.api.r.SQLUtils$.loadDF(SQLUtils.scala:160)
    at org.apache.spark.sql.api.r.SQLUtils.loadDF(SQLUtils.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:141)
    at org.apache.spark.api.r.RBackendHandler.ch

It looks like the spark-csv package is not installed; it can be downloaded from this page (GitHub link). Both here on Stack Overflow and on the spark-packages.org website (link), the suggestion for a Linux installation is:

$SPARK_HOME/bin/spark-shell --packages com.databricks:spark-csv_2.11:1.4.0

My question is: how can I run this line from the Windows 7 cmd so that the package gets downloaded?

I have also tried, without success, the alternative solution suggested for my error message (GitHub):

#In master you don't need spark-csv. 
#CSV data source is built into SparkSQL. Just use it as follows:
flights <- read.df(sqlContext, "out/data.txt", source = "com.databricks.spark.csv", delimiter="\t", header="true", inferSchema="true")
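
If I understand that note correctly, it only applies to Spark 2.x, where the CSV reader ships with Spark under the short source name "csv" and read.df no longer takes a sqlContext argument. A sketch of what that call would presumably look like there (I cannot verify it, since I am on 1.6.1; "delimiter" was spark-csv's option name, which Spark 2.x reportedly still accepts alongside "sep"):

#Spark 2.x only: CSV support is built in, no external package needed
flights <- read.df("out/data.txt", source = "csv", delimiter = "\t", header = "true", inferSchema = "true")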

Thanks in advance, everyone.

It is the same on Windows. When you launch spark-shell from the bin directory, start it like this:

spark-shell --packages com.databricks:spark-csv_2.11:1.4.0
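
The same flag works from the Windows cmd prompt; only the path separators change. With the install path from your question, it would be something like:

C:\Users\amartinezsistac\spark-1.6.1-bin-hadoop2.4\bin\spark-shell --packages com.databricks:spark-csv_2.11:1.4.0

(If your Spark build runs on Scala 2.10, which the prebuilt 1.x downloads do, use com.databricks:spark-csv_2.10:1.4.0 instead.)

Since you are launching Spark from R rather than from spark-shell, note that sparkR.init() in Spark 1.6 also accepts a sparkPackages argument, so you can request the package without touching cmd at all. A sketch adapted to your snippet:

#Ask the launcher to resolve spark-csv from the Maven/spark-packages repositories
sc <- sparkR.init(master="local", sparkPackages="com.databricks:spark-csv_2.11:1.4.0")
sqlContext <- sparkRSQL.init(sc)
#read.df should now find com.databricks.spark.csv
flights <- read.df(sqlContext, link, source = "com.databricks.spark.csv", header = "true")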