[Hadoop] Setting Up an Eclipse Development Environment for Hadoop
Install Eclipse
Download Eclipse and extract it to install; I installed it under /usr/local/software/.
Install the Hadoop plugin in Eclipse
Download the Hadoop Eclipse plugin and copy it into the eclipse/plugins directory.
Restart Eclipse and configure the Hadoop installation directory
If the plugin was installed successfully, open Window -> Preferences and you will see a Hadoop Map/Reduce entry. There, set Hadoop installation directory to the root directory of your Hadoop installation, then close the dialog.
Configure Map/Reduce Locations
Open the Map/Reduce Locations view via Window -> Show View.
In the Map/Reduce Locations view, create a new Hadoop Location: right-click -> New Hadoop Location. In the dialog, set a Location name (for example Hadoop1.0) and fill in the Map/Reduce Master and DFS Master. The Host and Port fields are the address and port you configured in mapred-site.xml and core-site.xml respectively. For example:
Map/Reduce Master: 192.168.239.130, port 9001
DFS Master: 192.168.239.130, port 9000
When you are done, close the dialog. Expand DFS Locations and the location you just created: if the HDFS folder tree is displayed (for example a folder count such as "(2)"), the configuration is correct; if it shows "Connection refused", recheck your configuration.
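If you want to double-check the connection outside the GUI, a minimal sketch like the one below lists the HDFS root directory using the same address as the DFS Master above. It assumes the Hadoop 1.x configuration key fs.default.name from core-site.xml (mapred.job.tracker in mapred-site.xml plays the same role for the Map/Reduce Master); the class name HdfsCheck is only an illustration and not part of the tutorial project.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Same address and port as the DFS Master above (fs.default.name in core-site.xml).
        conf.set("fs.default.name", "hdfs://192.168.239.130:9000");
        FileSystem fs = FileSystem.get(conf);
        // List the HDFS root; "Connection refused" here means the NameNode is unreachable.
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}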
Create the WordCount project
Choose File -> New -> Project, select Map/Reduce Project, and enter WordCount as the project name.
In the WordCount project, create a new class named WordCount with the following code:
package WordCount;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCount extends Configured implements Tool {

    /**
     * Mapper: splits each input line into tokens and emits (word, 1) for each token.
     */
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    /**
     * Reducer (also used as the combiner): sums the counts for each word.
     */
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public int run(String[] args) throws Exception {
        // ToolRunner has already parsed generic options and injected the configuration,
        // so use getConf() instead of building a new Configuration here.
        Configuration conf = getConf();
        if (args.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            return 2;
        }
        // job name
        Job job = new Job(conf, "word count");
        // jar containing this driver class
        job.setJarByClass(WordCount.class);
        // mapper
        job.setMapperClass(TokenizerMapper.class);
        // combiner (safe here because summing is associative and commutative)
        job.setCombinerClass(IntSumReducer.class);
        // reducer
        job.setReducerClass(IntSumReducer.class);
        // output key type
        job.setOutputKeyClass(Text.class);
        // output value type
        job.setOutputValueClass(IntWritable.class);
        // input path
        FileInputFormat.addInputPath(job, new Path(args[0]));
        // output path (must not already exist)
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        int res = ToolRunner.run(new Configuration(), new WordCount(), args);
        System.exit(res);
    }
}
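To run the job, right-click WordCount.java and choose Run As -> Run on Hadoop (an option contributed by the plugin), or package the project as a jar and launch it with the hadoop jar command. Either way the program expects two arguments: an existing HDFS input directory and an output directory that does not yet exist (the job fails if the output path already exists).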