Serializing Java object using Hadoop libraries
I am trying to serialize an object in Java and write it to a file so that my Map function can pick it up from the file and deserialize it to get the object back.
I don't think plain Java serialization is ideal for this, so I would like to use the Hadoop libraries instead.
Could someone share some syntax for how to do this? I am very new to Hadoop and don't know where to start. Some explanation of the code would be great too!
Serialization in Hadoop is done using Avro, Parquet, SequenceFile, ORC, etc. You can use any of these in your Java program; you just need to understand that Hadoop works with one of these serialization formats, so you don't need any Hadoop-specific libraries beyond them.
You should use Hadoop serialization so that any InputFormat of your MapReduce job can read the input records.
I would suggest you follow the steps below:
1. Try to convert your Java object to one of the existing Writable implementations in MapReduce, such as Text, IntWritable, or ArrayWritable. If that is not feasible, make your Java object implement Hadoop's Writable interface and provide custom read/write methods (see the sketch after this list).
2. Use SequenceFile as the storage format, and create a Java program that uses SequenceFile.Writer to write your Writable objects into a sequence file. Refer to the SequenceFile topic in the Hadoop I/O chapter of the Hadoop: The Definitive Guide book.
3. Now write a MapReduce program with SequenceFileInputFormat as the input format, and your mapper will receive the serialized Java objects, deserialized one by one.
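As an illustration of step 1, here is a minimal sketch of a custom Writable. The class name PersonWritable and its fields are assumptions invented for this example, not something from the answer above. A Writable needs a no-argument constructor, and readFields() must read the fields back in exactly the order write() wrote them:

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.Writable;

    // Hypothetical example object: a person with a name and an age.
    public class PersonWritable implements Writable {
        private final Text name = new Text();
        private final IntWritable age = new IntWritable();

        // Hadoop instantiates Writables reflectively, so a no-arg constructor is required.
        public PersonWritable() {}

        public PersonWritable(String name, int age) {
            this.name.set(name);
            this.age.set(age);
        }

        @Override
        public void write(DataOutput out) throws IOException {
            // Serialize the fields in a fixed order.
            name.write(out);
            age.write(out);
        }

        @Override
        public void readFields(DataInput in) throws IOException {
            // Deserialize in exactly the same order as write().
            name.readFields(in);
            age.readFields(in);
        }

        @Override
        public String toString() {
            return name + "\t" + age.get();
        }
    }

Note that if the object is used as a map key rather than a value, it has to implement WritableComparable instead, because keys are sorted during the shuffle.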
To create a custom Writable, refer to:
http://www.ashishpaliwal.com/blog/2012/05/hadoop-recipe-implementing-custom-writable/
To write the Writable objects into a sequence file, refer to:
How to convert .txt file to Hadoop's sequence file format
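As a sketch of steps 2 and 3, and assuming the hypothetical PersonWritable class above, writing objects into a sequence file with the Option-based SequenceFile.Writer API (Hadoop 2.x and later) might look like this; the output path and the IntWritable key type are illustrative assumptions:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.SequenceFile;

    public class WritePeopleSeqFile {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Illustrative output path; point it at HDFS or the local file system as needed.
            Path path = new Path("/tmp/people.seq");

            try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                    SequenceFile.Writer.file(path),
                    SequenceFile.Writer.keyClass(IntWritable.class),
                    SequenceFile.Writer.valueClass(PersonWritable.class))) {
                // Each record in a sequence file is a (key, value) pair of Writables.
                writer.append(new IntWritable(1), new PersonWritable("alice", 30));
                writer.append(new IntWritable(2), new PersonWritable("bob", 25));
            }
        }
    }

In the MapReduce driver you would then call job.setInputFormatClass(SequenceFileInputFormat.class) and declare the mapper as Mapper<IntWritable, PersonWritable, ...>, so each call to map() receives one already-deserialized PersonWritable.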