The minimal configuration is as follows:
vi hadoop-env.sh
Set the JDK environment variable:
export JAVA_HOME=/usr/local/jdk1.8.0_211

vi core-site.xml
Specify where the NameNode runs and where temporary files are stored:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop01:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop-2.7.3/tmp</value>
  </property>
</configuration>

vi hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/local/hadoop-2.7.3/data/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/local/hadoop-2.7.3/data/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop01:50090</value>
  </property>
</configuration>

Note: dfs.namenode.secondary.http-address is the Hadoop 2.x name for the old dfs.secondary.http.address key. Also, slaves (below) lists only two DataNodes, so a replication factor of 2 would avoid permanently under-replicated blocks.

cp mapred-site.xml.template mapred-site.xml
vi mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

vi yarn-site.xml
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop01</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

vi slaves
hadoop02
hadoop03

Set the Hadoop PATH (edit /etc/profile):
export JAVA_HOME=/usr/local/jdk1.8.0_211
export HADOOP_HOME=/usr/local/hadoop-2.7.3
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Run source /etc/profile (or log in again) so the new PATH takes effect.

Send the JDK, Hadoop, and the configuration files installed on the first machine to the other two:
- the hosts file
- the folder where the JDK was installed
- the folder where Hadoop was installed
- the /etc/profile file

scp -r /usr/local/jdk1.8.0_211 hadoop02:/usr/local/
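The scp above copies only the JDK to hadoop02; every item in the list has to go to both workers. A minimal sketch, assuming the hostnames resolve via /etc/hosts (the IP addresses below are placeholders) and SSH access between the machines is already set up:

cat /etc/hosts
192.168.1.101 hadoop01   # placeholder addresses; use your own
192.168.1.102 hadoop02
192.168.1.103 hadoop03

for host in hadoop02 hadoop03; do
  scp -r /usr/local/jdk1.8.0_211 $host:/usr/local/
  scp -r /usr/local/hadoop-2.7.3 $host:/usr/local/
  scp /etc/hosts $host:/etc/
  scp /etc/profile $host:/etc/
done

Afterwards, run source /etc/profile on each worker as well.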
4.1.7 Starting the Cluster

Initialize HDFS (run on hadoop01; this only needs to be done once):

hadoop namenode -format

(In Hadoop 2.x the preferred form is hdfs namenode -format; the old command still works but prints a deprecation warning.)
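A quick way to confirm the format succeeded (a sketch, using the name directory configured in hdfs-site.xml above):

ls /usr/local/hadoop-2.7.3/data/name/current
# a successful format creates a VERSION file and an initial fsimage here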
Start HDFS:

start-dfs.sh
Start YARN:
start-yarn.sh
I prefer the one-command startup below:

start-all.sh

(In Hadoop 2.x this script is deprecated; it simply runs start-dfs.sh followed by start-yarn.sh.)
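To confirm the daemons came up, jps on each node should match the roles configured above, and hdfs dfsadmin -report verifies that the DataNodes registered with the NameNode. A quick sketch:

jps   # on hadoop01: NameNode, SecondaryNameNode, ResourceManager
jps   # on hadoop02/hadoop03: DataNode, NodeManager
hdfs dfsadmin -report   # lists live DataNodes and their capacity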
4.1.9 Viewing the Cluster in a Browser
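The section does not list the addresses; with Hadoop 2.7.3 defaults and the hostnames used above, the web UIs should be reachable at:

http://hadoop01:50070   HDFS NameNode
http://hadoop01:8088    YARN ResourceManager
http://hadoop01:50090   SecondaryNameNode (the port set in hdfs-site.xml)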