Prerequisites: configure the network interface and hostname on every machine, add host mappings for all cluster nodes in /etc/hosts, disable the firewall on every machine in the cluster, set up passwordless SSH login, and install the JDK. (All of these are covered in my earlier articles; just follow the configurations there.)
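As a quick sanity check before continuing, the host mappings and passwordless SSH from the prerequisites might look like the sketch below. The IP addresses are placeholders; substitute the addresses of your own machines:

```shell
# /etc/hosts on every node -- the IPs below are examples, use your own
192.168.1.101 hadoop01   # master node (NameNode / ResourceManager)
192.168.1.102 hadoop02   # slave node
192.168.1.103 hadoop03   # slave node

# verify passwordless SSH from the master to each slave:
# each command should print the slave's hostname without asking for a password
ssh hadoop02 hostname
ssh hadoop03 hostname
```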
Unpack Hadoop into /usr/local (adjust the tarball path to wherever you downloaded it):

tar -zxvf /hadoop-2.7.3.tar.gz -C /usr/local
The configuration files below all live in /usr/local/hadoop-2.7.3/etc/hadoop/.

vi hadoop-env.sh

Change the JAVA_HOME line to point at your JDK:

# The java implementation to use.
export JAVA_HOME=/usr/local/jdk1.8.0_102

(Replace /usr/local/jdk1.8.0_102 with the path of your own JDK.)
vi core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop01:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop-2.7.3/tmp</value>
  </property>
</configuration>

Here hadoop01 is the hostname of your master node.
vi hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/local/hadoop-2.7.3/data/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/local/hadoop-2.7.3/data/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.secondary.http.address</name>
    <value>hadoop01:50090</value>
  </property>
</configuration>

Here hadoop01:50090 is the web address of the SecondaryNameNode. (In Hadoop 2.x the current name for this key is dfs.namenode.secondary.http-address; the old name still works as a deprecated alias.)
vi mapred-site.xml

(In Hadoop 2.7.3 this file does not exist by default; create it from the template first with cp mapred-site.xml.template mapred-site.xml.)

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

vi yarn-site.xml

<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop01</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

vi slaves
hadoop02
hadoop03

Here hadoop02 and hadoop03 are the hostnames of the slave nodes, one per line.
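The configured Hadoop directory also has to reach every slave node. A common way to do that (assuming passwordless SSH is already set up, and assuming you are working as root; adjust the user if not) is scp:

```shell
# copy the whole configured Hadoop install from the master to each slave
scp -r /usr/local/hadoop-2.7.3 root@hadoop02:/usr/local/
scp -r /usr/local/hadoop-2.7.3 root@hadoop03:/usr/local/
```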
Add Hadoop to the environment:

vi /etc/profile

Then reload the profile so the changes take effect:

source /etc/profile
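The lines to append to /etc/profile are not shown above; a minimal sketch, assuming Hadoop was unpacked to /usr/local/hadoop-2.7.3 as in the earlier steps, is:

```shell
# appended to /etc/profile: put the hadoop commands and startup scripts on PATH
export HADOOP_HOME=/usr/local/hadoop-2.7.3
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```

With this in place, commands like hadoop and start-all.sh can be run from any directory.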
Initialize HDFS (run this on the master node, and only once):

hadoop namenode -format

(In Hadoop 2.x this form prints a deprecation warning; hdfs namenode -format does the same thing.)

One-command startup:
start-all.sh

Next, on your Windows machine, update the hosts mapping:
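After start-all.sh finishes, a quick way to check that every daemon came up is to run jps on each node. Given the configuration above (SecondaryNameNode on hadoop01), one would expect roughly:

```shell
# on the master (hadoop01)
jps
# expect: NameNode, SecondaryNameNode, ResourceManager (plus Jps itself)

# on each slave (hadoop02 / hadoop03)
jps
# expect: DataNode, NodeManager (plus Jps itself)
```

If a daemon is missing, its log file under $HADOOP_HOME/logs is the place to look.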
C:\Windows\System32\drivers\etc\hosts

Add the same hadoop01/hadoop02/hadoop03 mappings there.
Then visit HDFS in the browser: the NameNode web UI is at http://hadoop01:50070 (the default port in Hadoop 2.x).
