Hadoop Installation


    1 Hadoop Installation and Deployment

    Upload the Hadoop installation package, plan the installation directory (/usr/local/hadoop-2.7.3), extract the package, and then modify the configuration files under /usr/local/hadoop-2.7.3/etc/hadoop/ (a sketch of the extract step follows below).
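    A minimal sketch of the extract step, assuming the tarball is named hadoop-2.7.3.tar.gz and was uploaded to /root (both names are placeholders):

    tar -zxvf /root/hadoop-2.7.3.tar.gz -C /usr/local/
    cd /usr/local/hadoop-2.7.3/etc/hadoop/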

    The minimal configuration is as follows:

    vi hadoop-env.sh

    Set the JDK environment variable:

    export JAVA_HOME=/usr/local/jdk1.8.0_211

    vi core-site.xml

    Specify where the NameNode runs and where temporary files are stored:

    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop01:9000</value>
      </property>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop-2.7.3/tmp</value>
      </property>
    </configuration>
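    The hadoop.tmp.dir directory does not exist after extraction; it can be created up front (Hadoop will otherwise create it on first start):

    mkdir -p /usr/local/hadoop-2.7.3/tmp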

    vi hdfs-site.xml

    <configuration>
      <property>
        <name>dfs.namenode.name.dir</name>
        <value>/usr/local/hadoop-2.7.3/data/name</value>
      </property>
      <property>
        <name>dfs.datanode.data.dir</name>
        <value>/usr/local/hadoop-2.7.3/data/data</value>
      </property>
      <property>
        <name>dfs.replication</name>
        <value>3</value>
      </property>
      <property>
        <name>dfs.secondary.http.address</name>
        <value>hadoop01:50090</value>
      </property>
    </configuration>
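    Note that dfs.secondary.http.address is a deprecated alias for dfs.namenode.secondary.http-address; it still works in 2.7.3. The name and data directories above can optionally be pre-created:

    mkdir -p /usr/local/hadoop-2.7.3/data/name
    mkdir -p /usr/local/hadoop-2.7.3/data/data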

    cp mapred-site.xml.template mapred-site.xml
    vi mapred-site.xml

    <configuration>
      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
      </property>
    </configuration>

    vi yarn-site.xml

    <configuration>
      <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop01</value>
      </property>
      <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
      </property>
    </configuration>

    vi slaves

    hadoop02
    hadoop03
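    Each hostname must resolve on every node, which is why the hosts file is distributed in a later step. A sketch of /etc/hosts with placeholder IP addresses (the real addresses depend on your network):

    192.168.1.101   hadoop01
    192.168.1.102   hadoop02
    192.168.1.103   hadoop03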

    Set the PATH for Hadoop (in /etc/profile):

    export JAVA_HOME=/usr/local/jdk1.8.0_211
    export HADOOP_HOME=/usr/local/hadoop-2.7.3
    export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
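    Reload the profile so the new variables take effect in the current shell:

    source /etc/profile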

    Send the JDK, Hadoop, and configuration files installed on the first machine to the other two machines.

    What to copy: the hosts file, the JDK installation directory, the Hadoop installation directory, and the /etc/profile file.

    scp -r /usr/local/jdk1.8.0_211 hadoop02:/usr/local/
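    The command above only copies the JDK to hadoop02. The Hadoop directory, /etc/profile, and /etc/hosts need the same treatment, and everything must also go to hadoop03; a sketch:

    scp -r /usr/local/hadoop-2.7.3 hadoop02:/usr/local/
    scp /etc/profile hadoop02:/etc/profile
    scp /etc/hosts hadoop02:/etc/hosts
    scp -r /usr/local/jdk1.8.0_211 hadoop03:/usr/local/
    scp -r /usr/local/hadoop-2.7.3 hadoop03:/usr/local/
    scp /etc/profile hadoop03:/etc/profile
    scp /etc/hosts hadoop03:/etc/hosts

    Remember to run source /etc/profile on hadoop02 and hadoop03 afterwards.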

    4.1.7 Start the Cluster

    Format HDFS (run this on hadoop01, and only once):

    hadoop namenode -format
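    The hadoop namenode command is deprecated in 2.x; the equivalent current form is:

    hdfs namenode -format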

    Start HDFS:

    start-dfs.sh

    Start YARN:

    start-yarn.sh

    I prefer the one-command startup: start-all.sh (it is marked as deprecated in 2.x, but simply calls start-dfs.sh and start-yarn.sh).
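    Whichever way the daemons are started, jps is a quick sanity check on each node. With the configuration above (hadoop01 not listed in slaves), hadoop01 should show NameNode, SecondaryNameNode, and ResourceManager, while hadoop02 and hadoop03 should show DataNode and NodeManager:

    jps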

    4.1.9 View the Cluster via the Web UI
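    With the default ports in Hadoop 2.7.3, the web interfaces are:

    HDFS NameNode UI:        http://hadoop01:50070
    YARN ResourceManager UI: http://hadoop01:8088
    SecondaryNameNode UI:    http://hadoop01:50090 (as configured in hdfs-site.xml)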
