HBase region servers keep crashing on TSV import

I am trying to load a tab-delimited HDFS file (3.5G, about 45 million records) into HBase using the command below:

hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=HBASE_ROW_KEY,description:part_description part /user/sw/spark_search/part_description_data

A snippet of the file:

45-573  Conn Circular Adapter F/M 11 POS ST 1 Port
CA3100E14S-4P-B-03  Conn Circular PIN 1 POS Crimp ST Wall Mount 1 Terminal 1 Port Automotive

I can see the MapReduce job start and reach 5%, but then the region servers crash and the job times out, throwing:

19/06/26 14:56:31 INFO mapreduce.Job:  map 0% reduce 0%
19/06/26 15:06:59 INFO mapreduce.Job: Task Id : attempt_1561551541629_0001_m_000010_0, Status : FAILED
AttemptID:attempt_1561551541629_0001_m_000010_0 Timed out after 600 secs
19/06/26 15:06:59 INFO mapreduce.Job: Task Id : attempt_1561551541629_0001_m_000004_0, Status : FAILED
AttemptID:attempt_1561551541629_0001_m_000004_0 Timed out after 600 secs
19/06/26 15:06:59 INFO mapreduce.Job: Task Id : attempt_1561551541629_0001_m_000003_0, Status : FAILED
AttemptID:attempt_1561551541629_0001_m_000003_0 Timed out after 600 secs

After restarting the servers I can see that part of the data was loaded. How can I track down the cause of the crash?

After checking the region server logs, the only error I can see is:

2019-06-27 15:43:05,361 ERROR org.apache.hadoop.hbase.ipc.RpcServer: Unexpected throwable object 
java.lang.OutOfMemoryError: Java heap space
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ResultOrException$Builder.buildPartial(ClientProtos.java:29885)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ResultOrException$Builder.build(ClientProtos.java:29877)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.getResultOrException(RSRpcServices.java:328)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.getResultOrException(RSRpcServices.java:319)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:789)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:716)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2146)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService.callBlockingMethod(ClientProtos.java:33656)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
2019-06-27 15:43:08,120 INFO org.apache.zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-cdh5.14.4--1, built on 06/12/2018 10:49 GMT

But I can see that I have plenty of free RAM.

The problem is that your mappers are taking longer than 600 seconds to run, so they time out and are killed. Set mapreduce.task.timeout to 0. Normally this isn't an issue, but in your case the job writes to HBase rather than reporting progress through the normal MapReduce context.write(...), so MapReduce assumes nothing is happening.

See https://hadoop.apache.org/docs/r2.8.0/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml
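
For reference, a minimal sketch of how to apply this: since ImportTsv is run through ToolRunner, the property can be passed with -D on the same command line you already use:

hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dmapreduce.task.timeout=0 -Dimporttsv.columns=HBASE_ROW_KEY,description:part_description part /user/sw/spark_search/part_description_data

The value is in milliseconds; 0 disables the timeout entirely, while a large finite value such as -Dmapreduce.task.timeout=1800000 (30 minutes) keeps a safety net against genuinely stuck tasks.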

The problem was caused by the Java heap running out of memory (the OutOfMemoryError above). The default region server heap configured by Cloudera appears to be on the low side; after increasing the heap to 4 GB, the file loaded successfully.
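
In case it helps others, a sketch of one way to raise the region server heap outside of Cloudera Manager is via HBASE_HEAPSIZE in hbase-env.sh (on CDH the same change is normally made through the region server's Java heap size setting in Cloudera Manager; the exact UI label varies by version):

# conf/hbase-env.sh on each region server
export HBASE_HEAPSIZE=4096    # heap size in MB (4 GB); recent HBase versions also accept values like 4G

A rolling restart of the region servers is needed for the change to take effect.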