reducer not called by driver in mapreduce wordcount program

package com.delhi;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class UppercaseDriver extends Configured implements Tool {

    public int run(String[] args) throws Exception {
        if (args.length != 2) {
            System.out.printf("Two parameters are required- <input dir> <output dir>\n");
            return -1;
        }

        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);
        job.setJobName("uppercase");
        job.setJarByClass(UppercaseDriver.class);
        job.setMapperClass(UpperCaseMapper.class);
        job.setReducerClass(UpperCaseReduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        //job.setNumReduceTasks(1);
        boolean success = job.waitForCompletion(true);
        return success ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        int exitcode = ToolRunner.run(new UppercaseDriver(), args);
        System.exit(exitcode);
    }
}

This is the driver program.
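
For reference, a driver like this would be launched with the hadoop jar command; the jar name and HDFS directories below are hypothetical:

hadoop jar wordcount.jar com.delhi.UppercaseDriver /user/hadoop/input /user/hadoop/output

ToolRunner strips any generic options (such as -D settings) and passes the two remaining arguments on to run(), where they become the input and output directories checked by the args.length test.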

Next is the reducer program:

package com.delhi;

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class UpperCaseReduce extends Reducer<Text, LongWritable, Text, LongWritable> {

    public void reduce(Text key, Iterable<LongWritable> value,
            org.apache.hadoop.mapreduce.Reducer.Context context)
            throws IOException, InterruptedException {

        int sum = 0;
        System.out.println("how +++++++++++++++++" + key);
        for (LongWritable st : value) {
            sum = (int) (sum + st.get());
        }
        System.out.println("how +++++++++++++++++" + key);
        context.write(key, new LongWritable(sum));
    }
}

Next is the mapper program:

package com.delhi;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class UpperCaseMapper extends Mapper<Object, Text, Text, LongWritable> {

    @Override
    protected void map(Object key, Text value,
            org.apache.hadoop.mapreduce.Mapper.Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        String[] arr = line.split(" ");
        System.out.println("hello++++++++++++++++++++++++++++");
        for (String st : arr) {
            //context.write(new Text(st.toUpperCase().trim()), new LongWritable(1));
            context.write(new Text(st), new LongWritable(1));
        }
    }
}

From existing solutions to this type of problem, I found that the output key class and output value class should match the reducer. I think I have taken care of that part properly. In my case, the @Override on reduce is not working. I am using Hadoop 2.7.3. I also tried using the trim function. The problem is that the word count is not happening: for every word, the output file just gives me "word 1". I started from a different problem and finally ended up with this one. Please help me. Thanks.
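
To make the symptom concrete, here is a hypothetical example (the input line is made up for illustration). When the identity reduce runs instead of UpperCaseReduce, duplicate keys are passed through without being summed:

input line:  hello world hello

expected output:        actual output:
hello   2               hello   1
world   1               hello   1
                        world   1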

So, if you add the @Override annotation to your reduce method, you get this error:

Method does not override method from its superclass

So the problem is a mismatch between your method signature and the one declared in Reducer: the fully qualified org.apache.hadoop.mapreduce.Reducer.Context names the raw inner class, not the Context of your parameterized Reducer<Text, LongWritable, Text, LongWritable>, so the compiler treats your method as a separate overload rather than an override.

You have:

public void reduce(Text key, Iterable<LongWritable> value,
                   org.apache.hadoop.mapreduce.Reducer.Context context)

If you change it to:

public void reduce(Text key, Iterable<LongWritable> value, Context context)

the error goes away. Since your reduce method does not override anything, it is never called; the identity reduce, whose output types match yours, is used instead. That is why every word comes out with a count of 1.
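
For completeness, here is a minimal sketch of the corrected reducer. The class and types are the ones from the question; the only changes beyond the Context fix are the @Override annotation and summing in a long, which avoids the int cast and matches LongWritable:

package com.delhi;

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class UpperCaseReduce extends Reducer<Text, LongWritable, Text, LongWritable> {

    @Override
    public void reduce(Text key, Iterable<LongWritable> values, Context context)
            throws IOException, InterruptedException {
        // Context now resolves to the inner class of the parameterized Reducer,
        // so this signature overrides reduce and actually gets called.
        long sum = 0;
        for (LongWritable count : values) {
            sum += count.get();
        }
        context.write(key, new LongWritable(sum));
    }
}

Once the override is in place, the System.out.println debugging lines from the original reducer would also start appearing in the task logs, which is a quick way to confirm the reducer is being invoked.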