Hadoop + ZooKeeper High Availability


    Preface: the environment is based on NFS file sharing, and Hadoop is installed on server1. For the services deployed beforehand, see my previous post.

    Environment: RHEL 7.3
    h1   172.25.61.1 (server1)
    h2   172.25.61.5 (server5)
    DN1  172.25.61.2 (server2)
    DN2  172.25.61.3 (server3)
    DN3  172.25.61.4 (server4)

    1. Reset the environment

    [hadoop@server1 hadoop]$ sbin/stop-yarn.sh
    [hadoop@server1 hadoop]$ sbin/stop-dfs.sh
    ## run the following on server1 through server4
    [hadoop@server1 hadoop]$ rm -fr /tmp/*
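    Since the cleanup has to happen on several nodes, it can also be driven from server1 over SSH. A minimal sketch, assuming passwordless SSH for the hadoop user (which the cluster already relies on) and the node list from the table above:

    # Hypothetical helper: clear /tmp on the remaining nodes from server1.
    for host in 172.25.61.2 172.25.61.3 172.25.61.4; do
        ssh "$host" 'rm -rf /tmp/*'
    done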

    2. Set up ZooKeeper on server2

    [hadoop@server2 ~]$ ls
    hadoop        hadoop-3.0.3.tar.gz  jdk1.8.0_181                zookeeper-3.4.9
    hadoop-3.0.3  java                 jdk-8u181-linux-x64.tar.gz  zookeeper-3.4.9.tar.gz
    [hadoop@server2 ~]$ cd zookeeper-3.4.9/conf
    [hadoop@server2 conf]$ cp zoo_sample.cfg zoo.cfg

    Append the ensemble members to zoo.cfg:

    server.1=172.25.61.2:2888:3888
    server.2=172.25.61.3:2888:3888
    server.3=172.25.61.4:2888:3888
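    For reference, a complete zoo.cfg for this ensemble could look like the sketch below. The timing values are the zoo_sample.cfg defaults; dataDir and clientPort are assumptions that are consistent with the myid location in step 3 and with the 2181 client port used later in the Hadoop configuration.

    # zoo.cfg sketch (assumed values, not copied from the original post)
    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/tmp/zookeeper      # must match the directory holding myid (step 3)
    clientPort=2181             # matches ha.zookeeper.quorum in core-site.xml
    server.1=172.25.61.2:2888:3888
    server.2=172.25.61.3:2888:3888
    server.3=172.25.61.4:2888:3888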

    3. Every node uses the same configuration file, and each one needs a myid file created in /tmp/zookeeper containing a unique number in the range 1-255

    [hadoop@server2 conf]$ mkdir /tmp/zookeeper
    [hadoop@server3 conf]$ mkdir /tmp/zookeeper
    [hadoop@server4 conf]$ mkdir /tmp/zookeeper
    [hadoop@server2 conf]$ echo 1 > /tmp/zookeeper/myid
    [hadoop@server3 conf]$ echo 2 > /tmp/zookeeper/myid
    [hadoop@server4 conf]$ echo 3 > /tmp/zookeeper/myid
    [hadoop@server2 zookeeper-3.4.9]$ bin/zkServer.sh start
    [hadoop@server3 zookeeper-3.4.9]$ bin/zkServer.sh start
    [hadoop@server4 zookeeper-3.4.9]$ bin/zkServer.sh start
    [hadoop@server2 zookeeper-3.4.9]$ bin/zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /home/hadoop/zookeeper-3.4.9/bin/../conf/zoo.cfg
    Mode: follower
    [hadoop@server3 zookeeper-3.4.9]$ bin/zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /home/hadoop/zookeeper-3.4.9/bin/../conf/zoo.cfg
    Mode: leader
    [hadoop@server4 zookeeper-3.4.9]$ bin/zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /home/hadoop/zookeeper-3.4.9/bin/../conf/zoo.cfg
    Mode: follower
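    Besides zkServer.sh status, ZooKeeper 3.4 also answers the four-letter-word commands on the client port, so a quick liveness check from any node could look like this sketch (assuming nc is installed; not part of the original steps):

    # Hypothetical check with the "ruok" four-letter word; each live server
    # should answer "imok" on its client port.
    for host in 172.25.61.2 172.25.61.3 172.25.61.4; do
        echo "$host: $(echo ruok | nc "$host" 2181)"
    done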

    4. Enter the ZooKeeper command-line client

    [hadoop@server2 bin]$ pwd
    /home/hadoop/zookeeper-3.4.9/bin
    [hadoop@server2 bin]$ ls
    README.txt    zkCli.cmd  zkEnv.cmd  zkServer.cmd
    zkCleanup.sh  zkCli.sh   zkEnv.sh   zkServer.sh
    [hadoop@server2 bin]$ ./zkCli.sh
    WatchedEvent state:SyncConnected type:None path:null
    ls /
    [zookeeper]
    [zk: localhost:2181(CONNECTED) 1] ls /
    [zookeeper]
    [zk: localhost:2181(CONNECTED) 2] ls /zookeeper
    [quota]
    [zk: localhost:2181(CONNECTED) 3] ls /zookeeper/quota
    []
    [zk: localhost:2181(CONNECTED) 4] get /zookeeper/quota
    cZxid = 0x0
    ctime = Thu Jan 01 08:00:00 CST 1970
    mZxid = 0x0
    mtime = Thu Jan 01 08:00:00 CST 1970
    pZxid = 0x0
    cversion = 0
    dataVersion = 0
    aclVersion = 0
    ephemeralOwner = 0x0
    dataLength = 0
    numChildren = 0
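    As an optional smoke test of the ensemble (not in the original post), zkCli.sh can create and read back a throwaway znode; the data should also be visible when connecting from server3 or server4:

    # Hypothetical smoke test inside zkCli.sh (commands only, output omitted)
    create /test "hello"     # create a test znode with some data
    get /test                # read it back
    delete /test             # clean up
    quit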

    5. Configure Hadoop for HA (core-site.xml and hdfs-site.xml)

    core-site.xml:

    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://masters</value>
        </property>
        <property>
            <name>ha.zookeeper.quorum</name>
            <value>172.25.61.2:2181,172.25.61.3:2181,172.25.61.4:2181</value>
        </property>
    </configuration>

    hdfs-site.xml:

    <configuration>
        <property>
            <name>dfs.replication</name>
            <value>3</value>
        </property>
        <property>
            <name>dfs.nameservices</name>
            <value>masters</value>
        </property>
        <property>
            <name>dfs.ha.namenodes.masters</name>
            <value>h1,h2</value>
        </property>
        <property>
            <name>dfs.namenode.rpc-address.masters.h1</name>
            <value>172.25.61.1:9000</value>
        </property>
        <property>
            <name>dfs.namenode.http-address.masters.h1</name>
            <value>172.25.61.1:9870</value>
        </property>
        <property>
            <name>dfs.namenode.shared.edits.dir</name>
            <value>qjournal://172.25.61.2:8485;172.25.61.3:8485;172.25.61.4:8485/masters</value>
        </property>
        <property>
            <name>dfs.journalnode.edits.dir</name>
            <value>/tmp/journaldata</value>
        </property>
        <property>
            <name>dfs.ha.automatic-failover.enabled</name>
            <value>true</value>
        </property>
        <property>
            <name>dfs.client.failover.proxy.provider.masters</name>
            <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
        </property>
        <property>
            <name>dfs.ha.fencing.methods</name>
            <value>
                sshfence
                shell(/bin/true)
            </value>
        </property>
        <property>
            <name>dfs.ha.fencing.ssh.private-key-files</name>
            <value>/home/hadoop/.ssh/id_rsa</value>
        </property>
        <property>
            <name>dfs.ha.fencing.ssh.connect-timeout</name>
            <value>30000</value>
        </property>
    </configuration>
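    As pasted, hdfs-site.xml only defines the RPC/HTTP addresses for h1, yet start-dfs.sh later brings up NameNodes on both server1 and server5, so matching h2 entries are presumably part of the real file. By symmetry with the h1 entries they would look like the following (an assumption, not shown in the original):

    <!-- Assumed h2 entries, mirroring the h1 addresses above (h2 = 172.25.61.5) -->
    <property>
        <name>dfs.namenode.rpc-address.masters.h2</name>
        <value>172.25.61.5:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.masters.h2</name>
        <value>172.25.61.5:9870</value>
    </property>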

    6. Start the HDFS cluster (start the components in order)

    Start a JournalNode on each of the three DNs in turn (the JournalNodes must be running before HDFS is started for the first time):

    $ sbin/hadoop-daemon.sh start journalnode
    [hadoop@server2 hadoop]$ jps
    1382 DataNode
    1097 QuorumPeerMain
    1836 JournalNode
    1854 Jps
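    Since the same command has to be run on server2, server3 and server4, it can also be fired from server1 over SSH. A sketch, assuming passwordless SSH and that the Hadoop install lives at /home/hadoop/hadoop on every DN (that path is an assumption based on the directory listing in step 2):

    # Hypothetical one-liner from server1: start a JournalNode on each DN.
    for host in 172.25.61.2 172.25.61.3 172.25.61.4; do
        ssh "$host" '/home/hadoop/hadoop/sbin/hadoop-daemon.sh start journalnode'
    done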

    7. Format the HDFS cluster

    $ bin/hdfs namenode -format

    The NameNode metadata is stored under /tmp by default, so it has to be copied to h2:

    $ scp -r /tmp/hadoop-hadoop 172.25.61.5:/tmp
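    With the default hadoop.tmp.dir, the NameNode metadata ends up under /tmp/hadoop-hadoop/dfs/name. One way to confirm the copy is consistent is to compare the clusterID recorded in the VERSION file on both masters; a sketch assuming that default layout:

    # Hypothetical consistency check: the clusterID must match on h1 and h2.
    grep clusterID /tmp/hadoop-hadoop/dfs/name/current/VERSION
    ssh 172.25.61.5 'grep clusterID /tmp/hadoop-hadoop/dfs/name/current/VERSION'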

    8. Format ZooKeeper (only needs to be run on h1)

    $ bin/hdfs zkfc -formatZK
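    If the format succeeded, the ZKFC's parent znode for the "masters" nameservice should now exist in ZooKeeper. It can be checked from any ZooKeeper node; the listing below is the expected result rather than output copied from this cluster:

    # Hypothetical check from server2: zkfc -formatZK should have created
    # the HA znode for the "masters" nameservice.
    [hadoop@server2 zookeeper-3.4.9]$ bin/zkCli.sh
    [zk: localhost:2181(CONNECTED) 0] ls /hadoop-ha
    [masters]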

    9. Start the HDFS cluster (only needs to be run on h1)

    [hadoop@server1 hadoop]$ sbin/start-dfs.sh
    Starting namenodes on [server1 server5]
    server1: namenode is running as process 1867. Stop it first.
    server5: namenode is running as process 1468. Stop it first.
    Starting datanodes
    172.25.61.2: datanode is running as process 1382. Stop it first.
    172.25.61.3: datanode is running as process 1320. Stop it first.
    Starting journal nodes [172.25.61.2 172.25.61.3 172.25.61.4]
    172.25.61.2: journalnode is running as process 1836. Stop it first.
    172.25.61.3: journalnode is running as process 1686. Stop it first.
    172.25.61.4: journalnode is running as process 1527. Stop it first.
    Starting ZK Failover Controllers on NN hosts [server1 server5]
    [hadoop@server1 hadoop]$ jps
    2901 DFSZKFailoverController
    2949 Jps
    1867 NameNode

    10. Check the node status

    [hadoop@server5 ~]$ jps
    1959 Jps
    1929 DFSZKFailoverController
    1468 NameNode
    [hadoop@server1 hadoop]$ jps
    2901 DFSZKFailoverController
    2949 Jps
    1867 NameNode
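    jps only shows that both NameNode processes are alive; which one is active and which is standby can be queried with hdfs haadmin. A sketch, run from h1:

    # Hypothetical state query: one NameNode should report "active",
    # the other "standby".
    [hadoop@server1 hadoop]$ bin/hdfs haadmin -getServiceState h1
    [hadoop@server1 hadoop]$ bin/hdfs haadmin -getServiceState h2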

    11. Test in the browser
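    The NameNode web UIs configured above listen on port 9870 (http://172.25.61.1:9870 and http://172.25.61.5:9870); one should report active and the other standby. A rough command-line equivalent, shown as an illustration rather than output from this cluster:

    # Hypothetical reachability check of the two NameNode web UIs.
    curl -s -o /dev/null -w '%{http_code}\n' http://172.25.61.1:9870
    curl -s -o /dev/null -w '%{http_code}\n' http://172.25.61.5:9870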

    12. Test automatic failover

    [hadoop@server1 hadoop]$ jps
    3011 Jps
    2901 DFSZKFailoverController
    1867 NameNode
    [hadoop@server1 hadoop]$ kill -9 1867
    [hadoop@server1 hadoop]$ jps
    2901 DFSZKFailoverController
    3036 Jps
    [hadoop@server1 hadoop]$ bin/hdfs dfs -ls

    After the NameNode process on h1 is killed, the data is still readable because the failover controller switches the active role to h2. After h1's NameNode is restarted, h1 rejoins as standby.
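    A sketch of that restart and verification, using the same hadoop-daemon.sh convention as step 6:

    # Hypothetical restart of h1's NameNode after the failover test; once it
    # rejoins, h1 should report "standby" while h2 stays "active".
    [hadoop@server1 hadoop]$ sbin/hadoop-daemon.sh start namenode
    [hadoop@server1 hadoop]$ bin/hdfs haadmin -getServiceState h1
    [hadoop@server1 hadoop]$ bin/hdfs haadmin -getServiceState h2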
