Pseudo-Distributed Installation of Hadoop 2.2 on Linux
1. Verify that Java is installed
[root@carefree ~]# java -version
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)
Hadoop 2.2 officially recommends JDK 1.6 (Sun/Oracle) or later; we use 1.7 here. JDK installation is straightforward and not demonstrated: download the package, extract it, and configure the environment variables.
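As a sketch of that last step, the environment variables might be set like this (the JDK path matches the JAVA_HOME used later in this article; adjust it to wherever you extracted the JDK):

```shell
# Append to /etc/profile (system-wide) or the user's ~/.bash_profile,
# then reload with `source`. The path below is this article's layout.
export JAVA_HOME=/u01/app/jdk1.7.0_51
export PATH=$JAVA_HOME/bin:$PATH
```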
2. Add a hadoop administrative user
Create a dedicated hadoop user and group:
[root@carefree ~]# groupadd hadoop
[root@carefree ~]# useradd -g hadoop hadoop
[root@carefree ~]# passwd hadoop
Changing password for user hadoop.
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
3. Set up passwordless SSH
[root@carefree ~]# su - hadoop
[hadoop@carefree ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
.....
.....
+-----------------+
[hadoop@carefree ~]$ cd .ssh/
[hadoop@carefree .ssh]$ ll
total 8
-rw-------. 1 hadoop hadoop 1675 Sep  2 12:51 id_rsa
-rw-r--r--. 1 hadoop hadoop  397 Sep  2 12:51 id_rsa.pub
[hadoop@carefree .ssh]$ cp id_rsa.pub authorized_keys
[hadoop@carefree .ssh]$ ll
total 12
-rw-r--r--. 1 hadoop hadoop  397 Sep  2 12:51 authorized_keys
-rw-------. 1 hadoop hadoop 1675 Sep  2 12:51 id_rsa
-rw-r--r--. 1 hadoop hadoop  397 Sep  2 12:51 id_rsa.pub
[hadoop@carefree .ssh]$ ssh localhost
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 15:09:cf:b4:94:df:a4:6b:65:69:3f:d4:c3:fc:8b:2a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
[hadoop@carefree ~]$ ssh localhost
Last login: Tue Sep  2 12:51:41 2014 from localhost
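One common pitfall not shown in the transcript above: sshd ignores keys whose files or containing directory are too permissive, so if `ssh localhost` still prompts for a password, tighten the permissions:

```shell
mkdir -p ~/.ssh                    # no-op if the directory already exists
chmod 700 ~/.ssh                   # sshd requires the directory be private to the owner
touch ~/.ssh/authorized_keys       # no-op if the file already exists
chmod 600 ~/.ssh/authorized_keys   # key file readable/writable only by the owner
```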
4. Extract the installation package and configure the relevant parameters
tar -zxvf hadoop-2.2.0.tar.gz
In hadoop-env.sh, yarn-env.sh, and mapred-env.sh (all under etc/hadoop/ in the extracted directory), set JAVA_HOME as follows:
export JAVA_HOME=/u01/app/jdk1.7.0_51
Configure yarn-site.xml as follows:
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>localhost:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>localhost:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>localhost:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>localhost:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>localhost:8088</value>
  </property>
</configuration>
Configure mapred-site.xml as follows:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>localhost:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>localhost:19888</value>
  </property>
</configuration>
Configure core-site.xml as follows (note: in Hadoop 2.x, fs.default.name is deprecated in favor of fs.defaultFS, though the old name still works):
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/data/tmp</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
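This article leaves hdfs-site.xml untouched, but on a single-node setup it is common to also drop the block replication factor to 1 (the default is 3, which a lone DataNode can never satisfy); a suggested fragment:

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```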
5. Format the NameNode, start the daemons, and verify
[hadoop@carefree app]$ hdfs namenode -format
-bash: hdfs: command not found
[hadoop@carefree app]$ vim /home/hadoop/.bash_profile
[hadoop@carefree app]$ source /home/hadoop/.bash_profile
[hadoop@carefree app]$ hdfs namenode -format
14/09/02 13:28:26 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = carefree/192.168.2.111
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.2.0
STARTUP_MSG:   classpath = /u01/app/hadoop-2.2.0/etc/hadoop:/u01/app/hadoop-2.2.0/share/hadoo
........
.......
/u01/app/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.2.0.jar:/u01/app/hadoop-2.2.0/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = Unknown -r Unknown; compiled by 'root' on 2014-09-02T02:29Z
STARTUP_MSG:   java = 1.7.0_51
14/09/02 13:28:28 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
........
........
14/09/02 13:28:31 INFO namenode.FSImage: Image file /u01/app/data/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 198 bytes saved in 0 seconds.
14/09/02 13:28:31 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
14/09/02 13:28:31 INFO util.ExitUtil: Exiting with status 0
14/09/02 13:28:31 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at carefree/192.168.2.111
************************************************************/
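The "hdfs: command not found" error above was fixed by editing ~/.bash_profile; the transcript does not show what was added, but it would have been along these lines (paths assume the /u01/app/hadoop-2.2.0 layout used in this article):

```shell
# Append to /home/hadoop/.bash_profile, then reload with `source ~/.bash_profile`.
# bin/ holds client commands (hdfs, hadoop); sbin/ holds the start/stop scripts.
export HADOOP_HOME=/u01/app/hadoop-2.2.0
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
```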
Start the daemons:
[hadoop@carefree app]$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /u01/app/hadoop-2.2.0/logs/hadoop-hadoop-namenode-carefree.out
localhost: starting datanode, logging to /u01/app/hadoop-2.2.0/logs/hadoop-hadoop-datanode-carefree.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /u01/app/hadoop-2.2.0/logs/hadoop-hadoop-secondarynamenode-carefree.out
starting yarn daemons
starting resourcemanager, logging to /u01/app/hadoop-2.2.0/logs/yarn-hadoop-resourcemanager-carefree.out
localhost: starting nodemanager, logging to /u01/app/hadoop-2.2.0/logs/yarn-hadoop-nodemanager-carefree.out
Check that all five daemons are running:
[hadoop@carefree ~]$ jps
5826 NodeManager
5319 NameNode
5726 ResourceManager
5565 SecondaryNameNode
5413 DataNode
6337 Jps
Verify HDFS (you can also browse the NameNode web UI at http://localhost:50070 and the ResourceManager web UI at http://localhost:8088, the latter configured in yarn-site.xml above):
[hadoop@carefree app]$ hadoop fs -ls /
[hadoop@carefree app]$ hadoop fs -mkdir /input
[hadoop@carefree app]$ hadoop fs -ls /
Found 1 items
drwxr-xr-x   - hadoop supergroup          0 2014-09-02 13:39 /input
This article originally appeared on the "阿布" (Abu) blog; please retain this source: http://carefree.blog.51cto.com/5771371/1557813