Clean up the previous environment (continuing from the previous post)
[hadoop@server1 hadoop]$ sbin/stop-yarn.sh
[hadoop@server1 hadoop]$ sbin/stop-dfs.sh
server1, 2, 3 and 4 all need to be cleaned:
[hadoop@server1 hadoop]$ cd /tmp/
[hadoop@server1 tmp]$ rm -rf *
[hadoop@server2 ~]$ rm -rf /tmp/*
[hadoop@server3 ~]$ rm -rf /tmp/*
[hadoop@server4 ~]$ rm -rf /tmp/*
Set up ZooKeeper (on any one node; here server2 is used)
[hadoop@server2 ~]$ ls
hadoop               java                        zookeeper-3.4.9.tar.gz
hadoop-3.0.3         jdk1.8.0_181
hadoop-3.0.3.tar.gz  jdk-8u181-linux-x64.tar.gz
[hadoop@server2 ~]$ tar zxf zookeeper-3.4.9.tar.gz
[hadoop@server2 ~]$ cd zookeeper-3.4.9
[hadoop@server2 zookeeper-3.4.9]$ cd conf/
[hadoop@server2 conf]$ ls
configuration.xsl  log4j.properties  zoo_sample.cfg
Add the cluster node information to the configuration:
[hadoop@server2 conf]$ cp zoo_sample.cfg zoo.cfg
[hadoop@server2 conf]$ vim zoo.cfg
# append the following lines to the end of the file
server.1=172.25.70.2:2888:3888
server.2=172.25.70.3:2888:3888
server.3=172.25.70.4:2888:3888
The configuration file is identical on every node. In addition, a myid file must be created in the /tmp/zookeeper directory on each node, containing a unique number in the range 1-255.
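For reference, the full zoo.cfg would look roughly like the sketch below; the first five values are simply the defaults shipped in zoo_sample.cfg. Note that dataDir=/tmp/zookeeper is the reason the myid files in the next step go under /tmp/zookeeper.

# zoo.cfg — minimal sketch, assuming the zoo_sample.cfg defaults of 3.4.9
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper    # myid must live in this directory
clientPort=2181
server.1=172.25.70.2:2888:3888
server.2=172.25.70.3:2888:3888
server.3=172.25.70.4:2888:3888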
[hadoop@server2 ~]$ mkdir /tmp/zookeeper
[hadoop@server3 ~]$ mkdir /tmp/zookeeper
[hadoop@server4 ~]$ mkdir /tmp/zookeeper
[hadoop@server2 ~]$ echo 1 > /tmp/zookeeper/myid
[hadoop@server3 ~]$ echo 2 > /tmp/zookeeper/myid
[hadoop@server4 ~]$ echo 3 > /tmp/zookeeper/myid
Start the service on server2, 3 and 4:
[hadoop@server2 zookeeper-3.4.9]$ pwd
/home/hadoop/zookeeper-3.4.9
[hadoop@server2 zookeeper-3.4.9]$ bin/zkServer.sh start
[hadoop@server3 zookeeper-3.4.9]$ bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.9/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@server4 hadoop]$ cd /home/hadoop/zookeeper-3.4.9
[hadoop@server4 zookeeper-3.4.9]$ bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.9/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
Then check the status of each node: server3 is the leader, server2 and server4 are followers.
[hadoop@server2 zookeeper-3.4.9]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: follower
[hadoop@server3 zookeeper-3.4.9]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: leader
[hadoop@server4 zookeeper-3.4.9]$ bin/zkServer.sh status
# make sure the Java environment is correct (check with java -version); if it is broken, reload it with: [hadoop@server4 ~]$ source .bash_profile
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: follower
Open the ZooKeeper command line on server2:
[hadoop@server2 bin]$ ls
README.txt    zkCli.cmd  zkEnv.cmd  zkServer.cmd
zkCleanup.sh  zkCli.sh   zkEnv.sh   zkServer.sh
[hadoop@server2 bin]$ pwd
/home/hadoop/zookeeper-3.4.9/bin
[hadoop@server2 bin]$ ./zkCli.sh    # connect to ZooKeeper; press Enter to get the CLI prompt
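Once inside the CLI you can look around the znode tree; a minimal, hedged example (on a brand-new cluster the root typically contains only the built-in /zookeeper znode):

[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper]
[zk: localhost:2181(CONNECTED) 1] quit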
Detailed Hadoop configuration on server1:
/home/hadoop/hadoop/etc/hadoop
[hadoop@server1 hadoop]$ vim core-site.xml
<configuration>
    <!-- point HDFS at the nameservice "masters" (the name can be customized) -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://masters</value>
    </property>
    <!-- the addresses of the ZooKeeper cluster -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>172.25.70.2:2181,172.25.70.3:2181,172.25.70.4:2181</value>
    </property>
</configuration>
[hadoop@server1 hadoop]$ vim hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <!-- the HDFS nameservice is "masters"; it must match the setting in core-site.xml -->
    <property>
        <name>dfs.nameservices</name>
        <value>masters</value>
    </property>
    <!-- "masters" contains two namenodes, h1 and h2 -->
    <property>
        <name>dfs.ha.namenodes.masters</name>
        <value>h1,h2</value>
    </property>
    <!-- RPC address of namenode h1 -->
    <property>
        <name>dfs.namenode.rpc-address.masters.h1</name>
        <value>172.25.70.1:9000</value>
    </property>
    <!-- HTTP address of namenode h1 -->
    <property>
        <name>dfs.namenode.http-address.masters.h1</name>
        <value>172.25.70.1:9870</value>
    </property>
    <!-- RPC address of namenode h2 -->
    <property>
        <name>dfs.namenode.rpc-address.masters.h2</name>
        <value>172.25.70.5:9000</value>
    </property>
    <!-- HTTP address of namenode h2 -->
    <property>
        <name>dfs.namenode.http-address.masters.h2</name>
        <value>172.25.70.5:9870</value>
    </property>
    <!-- where the NameNode edit log is stored on the JournalNodes -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://172.25.70.2:8485;172.25.70.3:8485;172.25.70.4:8485/masters</value>
    </property>
    <!-- where each JournalNode keeps its data on local disk -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/tmp/journaldata</value>
    </property>
    <!-- enable automatic NameNode failover -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <!-- the failover proxy provider used by clients -->
    <property>
        <name>dfs.client.failover.proxy.provider.masters</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- fencing methods, one per line -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>
        sshfence
        shell(/bin/true)
        </value>
    </property>
    <!-- sshfence requires passwordless ssh -->
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    <!-- ssh connect timeout for sshfence -->
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>
</configuration>
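As an optional sanity check, hdfs getconf can echo individual keys back, confirming the file is being picked up. A hedged sketch, assuming only the configuration above:

[hadoop@server1 hadoop]$ cd /home/hadoop/hadoop
[hadoop@server1 hadoop]$ bin/hdfs getconf -confKey dfs.nameservices          # expect: masters
[hadoop@server1 hadoop]$ bin/hdfs getconf -confKey dfs.ha.namenodes.masters  # expect: h1,h2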
Start the HDFS cluster (the services must be started in order). The ZooKeeper cluster is already running on the three DNs; next, start a journalnode on each of them (on the first startup the journalnodes must be running before the namenode is formatted).
server2:
[hadoop@server2 bin]$ cd /home/hadoop/hadoop
[hadoop@server2 hadoop]$ bin/hdfs --daemon start journalnode
[hadoop@server2 hadoop]$ jps
11652 Jps
11301 QuorumPeerMain
11612 JournalNode
[hadoop@server3 bin]$ cd /home/hadoop/hadoop
[hadoop@server3 hadoop]$ bin/hdfs --daemon start journalnode
[hadoop@server3 hadoop]$ jps
11107 QuorumPeerMain
11399 JournalNode
11448 Jps
[hadoop@server4 zookeeper-3.4.9]$ cd /home/hadoop/hadoop
[hadoop@server4 hadoop]$ bin/hdfs --daemon start journalnode
[hadoop@server4 hadoop]$ jps
12018 Jps
11797 QuorumPeerMain
11977 JournalNode
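Besides jps, a hedged way to confirm each journalnode is up is to check that its ports are listening (8485 is the RPC port from the config above; 8480 is the default journalnode HTTP port):

[hadoop@server2 hadoop]$ ss -tln | grep -E '8485|8480'    # both ports should show LISTEN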
Format HDFS, then copy the namenode data over to set up high availability:
[hadoop@server1 hadoop]$ cd /home/hadoop/hadoop
[hadoop@server1 hadoop]$ bin/hdfs namenode -format
[hadoop@server1 hadoop]$ scp -r /tmp/hadoop-hadoop 172.25.70.5:/tmp
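As a side note, instead of copying /tmp/hadoop-hadoop by hand, the standby namenode can usually pull the same metadata itself with bootstrapStandby; this is only a hedged alternative to the scp above, not what this walkthrough does:

# run on server5 (h2), with the journalnodes up and the same config in place
[hadoop@server5 hadoop]$ bin/hdfs namenode -bootstrapStandby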
Format ZooKeeper (only needs to be executed on h1):
[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop
[hadoop@server1 hadoop]$ bin/hdfs zkfc -formatZK
[zk: localhost:2181(CONNECTED) 1] get /hadoop-ha/masters/ActiveBreadCrumb
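The formatZK step registers the nameservice in ZooKeeper; from the zkCli session opened on server2 earlier, the new znode can be inspected. A minimal, hedged example (the exact contents depend on how far the cluster startup has progressed):

[zk: localhost:2181(CONNECTED) 2] ls /hadoop-ha
[masters]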
Start the HDFS cluster (only needs to be executed on h1):
[hadoop@server1 hadoop]$ sbin/start-dfs.sh
Starting namenodes on [server1 server5]
server5: Warning: Permanently added 'server5' (ECDSA) to the list of known hosts.
Starting datanodes
Starting journal nodes [172.25.70.2 172.25.70.3 172.25.70.4]
172.25.70.2: journalnode is running as process 11612. Stop it first.
172.25.70.3: journalnode is running as process 11399. Stop it first.
172.25.70.4: journalnode is running as process 11977. Stop it first.
Starting ZK Failover Controllers on NN hosts [server1 server5]
[hadoop@server1 hadoop]$ jps
17074 DFSZKFailoverController
16725 NameNode
17125 Jps
Testing in the browser shows that server1 is active and server5 is standby. If server1 is shut down, server5's state becomes active, and file uploads then go through server5.
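The NameNode states can also be checked from the command line instead of the browser; a hedged sketch using the namenode IDs h1 and h2 defined in hdfs-site.xml above:

[hadoop@server1 hadoop]$ bin/hdfs haadmin -getServiceState h1    # expect: active
[hadoop@server1 hadoop]$ bin/hdfs haadmin -getServiceState h2    # expect: standby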
[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir -p /user/hadoop
[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir input
[hadoop@server1 hadoop]$ bin/hdfs dfs -put etc/hadoop/* input
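To confirm that the upload went through the currently active namenode, a hedged quick check:

[hadoop@server1 hadoop]$ bin/hdfs dfs -ls input | head    # the files from etc/hadoop should be listed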
Bring server1 back online:
[hadoop@server1 hadoop]$ bin/hdfs --daemon start namenode
[hadoop@server1 hadoop]$ jps
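After the restart, the namenode on server1 should rejoin the cluster as standby while server5 stays active; this can be verified the same way as before (hedged):

[hadoop@server1 hadoop]$ bin/hdfs haadmin -getServiceState h1    # expect: standby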