Hadoop Learning Notes 0004: Installing the Hadoop Plugin for Eclipse
1. Download hadoop-1.2.1.tar.gz and extract it to a hadoop-1.2.1 directory on Windows 7.
2. If the hadoop-1.2.1 package does not contain hadoop-eclipse-plugin-1.2.1.jar, download it from the web.
3. Close Eclipse, copy hadoop-eclipse-plugin-1.2.1.jar into the eclipse-x.x\plugins folder under the Eclipse installation directory, then restart Eclipse.
4. In Eclipse, open Window -> Preferences and point the Hadoop installation directory setting at the hadoop-1.2.1 directory extracted in step 1.
5. Open the Map/Reduce Locations view.
6. Create a new location in the Map/Reduce Locations view and set its parameters: the location name, the Map/Reduce Master host and port, and the DFS Master host and port (here hdfs://192.168.0.134:9000, matching the cluster configuration).
Click the "Finish" button to close the dialog.
7. In the left pane, expand DFS Locations -> Hadoop (the location name configured in the previous step). If you can see the user directory, the installation succeeded.
Note: if there is no DFS Locations entry, create a new Map/Reduce Project first.
8. Testing
(1) Create an input directory on HDFS:
hadoop fs -mkdir input
(2) Copy the local README.txt into the HDFS input directory:
hadoop fs -put /usr/hadoop/README.txt input
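The same two steps can also be done from Java through the FileSystem API. A minimal sketch, assuming the NameNode address hdfs://192.168.0.134:9000 and the paths used above (the class name HdfsInputSetup is hypothetical):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsInputSetup {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://192.168.0.134:9000"), conf);
        // Equivalent of: hadoop fs -mkdir input
        fs.mkdirs(new Path("/user/root/input"));
        // Equivalent of: hadoop fs -put /usr/hadoop/README.txt input
        fs.copyFromLocalFile(new Path("/usr/hadoop/README.txt"),
                new Path("/user/root/input/README.txt"));
        fs.close();
    }
}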
(3) Create a WordCount project.
Choose File -> New -> Project, select Map/Reduce Project, and enter the project name WordCount.
In the WordCount project, create a new class named WordCount with the following code:
package com.hadoop.test;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    // Emits (word, 1) for every token in the input.
    public static class TokenizerMapper extends
            Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Sums the counts for each word; also reused as the combiner.
    public static class IntSumReducer extends
            Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values,
                Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args)
                .getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
(4) Right-click WordCount.java and choose Run As -> Run Configurations. Set the program arguments to the input and output directories: hdfs://192.168.0.134:9000/user/root/input hdfs://192.168.0.134:9000/user/root/output
Click the Run button to run the program.
Expand DFS Locations and double-click part-r-00000 under the output directory to view the results.
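The result can also be read programmatically instead of through the DFS Locations view. A minimal sketch, assuming the output path from the run configuration above (the class name PrintWordCountOutput is hypothetical):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PrintWordCountOutput {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://192.168.0.134:9000"), conf);
        Path result = new Path("/user/root/output/part-r-00000");
        BufferedReader reader = new BufferedReader(new InputStreamReader(fs.open(result)));
        String line;
        while ((line = reader.readLine()) != null) {
            System.out.println(line); // each line is: word <TAB> count
        }
        reader.close();
        fs.close();
    }
}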
Appendix: the following error came up during testing
Exception in thread "main" java.io.IOException: Failed to set permissions of path: \tmp\hadoop-Administrator\mapred\staging\Administrator-519341271\.staging to 0700
Solutions:
Method 1: replace hadoop-core-1.2.1.jar.
Download hadoop-core-1.2.1-modified.jar and use it to replace the hadoop-core-1.2.1.jar file in the Hadoop installation directory. Download address: http://download.csdn.net/detail/m_star_jy_sy/7376283
Method 2: modify org.apache.hadoop.fs.FileUtil and recompile it.
The steps are as follows:
1. Create a new Java project in Eclipse.
2. Import all of the Hadoop-related jars into the project.
3. From the Hadoop source tree, copy src/core/org/apache/hadoop/fs/FileUtil.java into the project's src directory.
4. Locate the checkReturnValue method and comment out its body (see the sketch after this list).
5. In the project's output directory, find the compiled class files; there will be two of them, because FileUtil.java contains an inner class.
6. Add those class files to the corresponding directory inside hadoop-core-1.2.1.jar, overwriting the originals.
7. Copy the updated hadoop-core-1.2.1.jar to the Hadoop cluster, overwrite the original file, and restart the cluster.
8. Add the updated hadoop-core-1.2.1.jar to the project.
9. Run the program. Success!
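For reference, the change in step 4 looks roughly like this. This is a sketch based on the Hadoop 1.x source of org.apache.hadoop.fs.FileUtil; the exact body in your copy may differ slightly. Commenting the body out simply skips the local-filesystem permission check that fails on Windows:

private static void checkReturnValue(boolean rv, File p,
                                     FsPermission permission
                                     ) throws IOException {
    // Commented out so that a failed chmod on the local (Windows)
    // filesystem no longer aborts job submission:
    // if (!rv) {
    //     throw new IOException("Failed to set permissions of path: " + p +
    //                           " to " +
    //                           String.format("%04o", permission.toShort()));
    // }
}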