spark python script not writing to hbase

I am trying to run the script below, taken from this blog:
import sys
import json
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

def SaveRecord(rdd):
    host = 'sparkmaster.example.com'
    table = 'cats'
    # Converters that turn the Python key/value pair into HBase's
    # ImmutableBytesWritable and Put types
    keyConv = "org.apache.spark.examples.pythonconverters.StringToImmutableBytesWritableConverter"
    valueConv = "org.apache.spark.examples.pythonconverters.StringListToPutConverter"
    conf = {"hbase.zookeeper.quorum": host,
        "hbase.mapred.outputtable": table,
        "mapreduce.outputformat.class": "org.apache.hadoop.hbase.mapreduce.TableOutputFormat",
        "mapreduce.job.output.key.class": "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
        "mapreduce.job.output.value.class": "org.apache.hadoop.io.Writable"}
    # Map each JSON line to (row key, [row key, column family, qualifier, value])
    datamap = rdd.map(lambda x: (str(json.loads(x)["id"]),
                                 [str(json.loads(x)["id"]), "cfamily", "cats_json", x]))
    datamap.saveAsNewAPIHadoopDataset(conf=conf, keyConverter=keyConv, valueConverter=valueConv)

if __name__ == "__main__":
    if len(sys.argv) != 3:
        print("Usage: StreamCatsToHBase.py <hostname> <port>")
        sys.exit(-1)

    sc = SparkContext(appName="StreamCatsToHBase")
    ssc = StreamingContext(sc, 1)  # 1-second batch interval
    lines = ssc.socketTextStream(sys.argv[1], int(sys.argv[2]))
    lines.foreachRDD(SaveRecord)

    ssc.start()             # Start the computation
    ssc.awaitTermination()  # Wait for the computation to terminate
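One thing worth checking when a job like this produces no output at all: socketTextStream connects out to the given host and port, so something must already be listening there and sending newline-terminated JSON before any batch contains data. Below is a minimal test feeder, just a sketch; the port and record shape are assumptions matching the usage string above:

import json
import socket
import time

# Hypothetical feeder: listens on the port the Spark receiver connects to
# and writes one JSON record per line, one record per second.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("localhost", 2389))
server.listen(1)
conn, _ = server.accept()  # the Spark receiver connects as a client
for i in range(10):
    record = {"id": "cat-%d" % i, "name": "cat number %d" % i}
    conn.sendall((json.dumps(record) + "\n").encode("utf-8"))
    time.sleep(1)
conn.close()
server.close()

Start the feeder first, then submit the streaming job pointed at the same host and port.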

I am unable to get it to work. I tried three different command-line options, but none of them produced any output, and no data was written to the HBase table.

Here are the command-line options I tried:

spark-submit --jars /usr/local/spark/lib/spark-examples-1.5.2-hadoop2.4.0.jar --jars /usr/local/hbase/lib/hbase-examples-1.1.2.jar sp_json.py localhost 2389 > sp_json.log

spark-submit --driver-class-path /usr/local/spark/lib/spark-examples-1.5.2-hadoop2.4.0.jar sp_json.py localhost 2389 > sp_json.log

spark-submit --driver-class-path /usr/local/spark/lib/spark-examples-1.5.2-hadoop2.4.0.jar --jars /usr/local/hbase/lib/hbase-examples-1.1.2.jar sp_json.py localhost 2389 > sp_json.log

Here is the logfile. It is extremely verbose; one reason Apache Spark is hard to debug is that it spits out so much information.
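To cut through the noise, it can help to take streaming out of the picture entirely and test only the HBase write path with a one-off batch job. Here is a minimal sketch reusing the same converters and configuration; it assumes the 'cats' table with column family 'cfamily' already exists:

import json
from pyspark import SparkContext

sc = SparkContext(appName="HBaseWriteTest")
host = 'sparkmaster.example.com'
keyConv = "org.apache.spark.examples.pythonconverters.StringToImmutableBytesWritableConverter"
valueConv = "org.apache.spark.examples.pythonconverters.StringListToPutConverter"
conf = {"hbase.zookeeper.quorum": host,
    "hbase.mapred.outputtable": "cats",
    "mapreduce.outputformat.class": "org.apache.hadoop.hbase.mapreduce.TableOutputFormat",
    "mapreduce.job.output.key.class": "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
    "mapreduce.job.output.value.class": "org.apache.hadoop.io.Writable"}

# A single hand-written record; if this row does not show up in HBase,
# the problem is the classpath/converters, not the streaming pipeline.
rdd = sc.parallelize(['{"id": "test1"}'])
datamap = rdd.map(lambda x: (str(json.loads(x)["id"]),
                             [str(json.loads(x)["id"]), "cfamily", "cats_json", x]))
datamap.saveAsNewAPIHadoopDataset(conf=conf, keyConverter=keyConv, valueConverter=valueConv)
sc.stop()

Submit it with the same --jars list as the streaming job; if the row appears, the converters and classpath are fine and attention can shift to the streaming side.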

Finally got it working with the following command syntax:

spark-submit --jars /usr/local/spark/lib/spark-examples-1.5.2-hadoop2.4.0.jar,/usr/local/hbase/lib/hbase-examples-1.1.2.jar sp_json.py localhost 2399 > sp_json.log

The key difference from the failed attempts: passing --jars twice does not work, because spark-submit keeps only the last occurrence of the flag, so every jar has to go into a single comma-separated --jars list.
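To confirm that rows actually landed, a quick scan of the table helps. Here is a sketch using the happybase client; it assumes the HBase Thrift server is running on its default port on sparkmaster.example.com and that happybase is installed:

import happybase

# Connects to the HBase Thrift gateway (default port 9090)
connection = happybase.Connection('sparkmaster.example.com')
table = connection.table('cats')
for key, data in table.scan():
    # data maps b'cfamily:cats_json' to the raw JSON string that was written
    print(key, data)
connection.close()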