Linux Enterprise Operations — The Three Hadoop Running Modes (Standalone, Pseudo-Distributed, Fully Distributed Cluster)


How Hadoop works:

Main HDFS modules: NameNode, DataNode. Main YARN modules: ResourceManager, NodeManager.

Main HDFS modules and how they work:

    1)NameNode:

Function: the management node of the entire file system. It maintains the file system's directory tree, the metadata of every file/directory, and the list of data blocks belonging to each file. It also receives client requests.

    2)DataNode:

Function: stores the actual data blocks of files and serves client read/write requests, reporting its block information to the NameNode. (The description of a "standby mirror that does not support hot standby" applies to the SecondaryNameNode, which appears later, not to the DataNode.)

Make sure host name resolution is configured; this must be done on every server:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.6.1  server1
172.25.6.2  server2
172.25.6.3  server3
172.25.6.4  server4
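A quick, optional way to confirm that resolution works on each node (a sketch assuming the /etc/hosts entries above):

[root@server1 ~]# getent hosts server2      # should print 172.25.6.2 server2
[root@server1 ~]# ping -c1 server2          # should reach 172.25.6.2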

1. Standalone Mode (Local or Standalone Mode)

- By default Hadoop runs in this mode; it is used for development and debugging.
- No configuration files are modified.
- The local file system is used instead of a distributed file system.
- Hadoop does not start the NameNode, DataNode, JobTracker, TaskTracker or other daemons; the Map() and Reduce() tasks run as different parts of the same process.
- It is used to debug the logic of MapReduce programs and confirm that they are correct.

server1: create the hadoop user, move the installation packages into the hadoop user's home directory, and switch to the hadoop user.

[root@server1 ~]# useradd hadoop
[root@server1 ~]# id hadoop
uid=1000(hadoop) gid=1000(hadoop) groups=1000(hadoop)
[root@server1 ~]# ls
hadoop-3.0.3.tar.gz  jdk-8u181-linux-x64.tar.gz
[root@server1 ~]# mv * /home/hadoop/
[root@server1 ~]# su - hadoop
[hadoop@server1 ~]$ ls
hadoop-3.0.3.tar.gz  jdk-8u181-linux-x64.tar.gz

Unpack the archives and create symbolic links:

[hadoop@server1 ~]$ tar zxf jdk-8u181-linux-x64.tar.gz      # unpack the JDK
[hadoop@server1 ~]$ ls
hadoop-3.0.3.tar.gz  jdk1.8.0_181  jdk-8u181-linux-x64.tar.gz
[hadoop@server1 ~]$ ln -s jdk1.8.0_181/ java                # create a symlink
[hadoop@server1 ~]$ ls
hadoop-3.0.3.tar.gz  java  jdk1.8.0_181  jdk-8u181-linux-x64.tar.gz
[hadoop@server1 ~]$ tar zxf hadoop-3.0.3.tar.gz
[hadoop@server1 ~]$ ls
hadoop-3.0.3  hadoop-3.0.3.tar.gz  java  jdk1.8.0_181  jdk-8u181-linux-x64.tar.gz
[hadoop@server1 ~]$ ln -s hadoop-3.0.3 hadoop               # create a symlink
[hadoop@server1 ~]$ ls
hadoop  hadoop-3.0.3  hadoop-3.0.3.tar.gz  java  jdk1.8.0_181  jdk-8u181-linux-x64.tar.gz

Configure the environment:

[hadoop@server1 ~]$ cd hadoop
[hadoop@server1 hadoop]$ cd etc/
[hadoop@server1 etc]$ ls
hadoop
[hadoop@server1 etc]$ cd hadoop/
[hadoop@server1 hadoop]$ ls

[hadoop@server1 hadoop]$ vim hadoop-env.sh      # set the Java environment
 54 export JAVA_HOME=/home/hadoop/java

[hadoop@server1 hadoop]$ cd
[hadoop@server1 ~]$ ls
hadoop  hadoop-3.0.3  hadoop-3.0.3.tar.gz  java  jdk1.8.0_181  jdk-8u181-linux-x64.tar.gz
[hadoop@server1 ~]$ vim .bash_profile           # add java/bin to the PATH
PATH=$PATH:$HOME/.local/bin:$HOME/bin:$HOME/java/bin
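The new PATH only takes effect in a new login shell; a minimal way to apply and verify it in the current session (assuming the .bash_profile edit above) is:

[hadoop@server1 ~]$ source ~/.bash_profile      # reload the profile in the current shell
[hadoop@server1 ~]$ which java jps              # both should resolve under /home/hadoop/java/bin/
[hadoop@server1 ~]$ java -version               # should report 1.8.0_181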

Once this is configured, the jps command is available; it lists the PIDs of all Java processes on the machine.

[hadoop@server1 ~]$ jps
1069 Jps

Test:

[hadoop@server1 ~]$ ls
hadoop  hadoop-3.0.3  hadoop-3.0.3.tar.gz  java  jdk1.8.0_181  jdk-8u181-linux-x64.tar.gz
[hadoop@server1 ~]$ cd hadoop
[hadoop@server1 hadoop]$ ls
bin  etc  include  lib  libexec  LICENSE.txt  NOTICE.txt  README.txt  sbin  share
[hadoop@server1 hadoop]$ mkdir input            # create a new directory
[hadoop@server1 hadoop]$ cd input/
[hadoop@server1 input]$ ls
[hadoop@server1 input]$ cd ..
[hadoop@server1 hadoop]$ cp etc/hadoop/*.xml input      # copy some files into input; here, all of Hadoop's xml files
[hadoop@server1 hadoop]$ ls input/
capacity-scheduler.xml  core-site.xml  hadoop-policy.xml  hdfs-site.xml  httpfs-site.xml  kms-acls.xml  kms-site.xml  mapred-site.xml  yarn-site.xml
[hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar grep input output 'dfs[a-z.]+'      # "output" is an arbitrary name for the output directory

An output directory has now appeared:

[hadoop@server1 hadoop]$ ls
bin  etc  include  input  lib  libexec  LICENSE.txt  NOTICE.txt  output  README.txt  sbin  share
[hadoop@server1 hadoop]$ ls input/
capacity-scheduler.xml  core-site.xml  hadoop-policy.xml  hdfs-site.xml  httpfs-site.xml  kms-acls.xml  kms-site.xml  mapred-site.xml  yarn-site.xml
[hadoop@server1 hadoop]$ cd output/
[hadoop@server1 output]$ ls
part-r-00000  _SUCCESS
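The result of the grep example can be read straight from the part file; a minimal check (assuming the job above finished with _SUCCESS) would look like:

[hadoop@server1 output]$ cat part-r-00000       # one line per matched pattern, typically something like "1  dfsadmin"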

2. Pseudo-Distributed Mode

- The Hadoop daemons run on the local machine, simulating a small cluster: multiple hosts are simulated on a single host.
- Hadoop starts the NameNode, DataNode, JobTracker and TaskTracker daemons on the same machine, each as an independent Java process.
- In this mode Hadoop uses the distributed file system, and jobs are managed as independent processes by the JobTracker service. On top of standalone mode it adds debugging capabilities, allowing you to inspect memory usage, HDFS input/output, and interaction with the other daemons. Because it resembles fully distributed mode, it is commonly used to develop and test whether Hadoop programs execute correctly.
- Modify 3 configuration files: core-site.xml (cluster-wide properties, applied to all processes and clients), hdfs-site.xml (HDFS cluster properties), mapred-site.xml (MapReduce cluster properties).
- Format the file system.

server1: edit the configuration files.

[hadoop@server1 output]$ cd ..
[hadoop@server1 hadoop]$ cd etc/hadoop/
[hadoop@server1 hadoop]$ vim core-site.xml
 20 <property>
 21     <name>fs.defaultFS</name>
 22     <value>hdfs://172.25.6.1:9000</value>
 23 </property>

[hadoop@server1 hadoop]$ vim hdfs-site.xml
 19 <configuration>
 20 <property>
 21     <name>dfs.replication</name>
 22     <value>1</value>        ## this node itself acts as the only DataNode
 23 </property>
 24
 25 </configuration>
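If you want to confirm that the settings were picked up, hdfs getconf can read the effective values; this check is not part of the original walkthrough, just a quick sketch:

[hadoop@server1 hadoop]$ cd /home/hadoop/hadoop
[hadoop@server1 hadoop]$ bin/hdfs getconf -confKey fs.defaultFS       # expect hdfs://172.25.6.1:9000
[hadoop@server1 hadoop]$ bin/hdfs getconf -confKey dfs.replication    # expect 1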

Generate an SSH key pair for passwordless login:

    [hadoop@server1 ~]$ ssh-keygen

Set a password for the hadoop user; otherwise the key cannot be distributed for passwordless login:

[hadoop@server1 ~]$ logout
[root@server1 ~]# passwd hadoop
[root@server1 ~]# su - hadoop
Last login: Sun May 19 13:42:37 CST 2019 on pts/1
[hadoop@server1 ~]$ ssh-copy-id 172.25.6.1

    [hadoop@server1 ~]$ ssh-copy-id localhost

    [hadoop@server1 ~]$ ssh-copy-id server1
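A quick way to confirm that the keys were accepted (hostnames assume the /etc/hosts entries from earlier) is to check that no password prompt appears:

[hadoop@server1 ~]$ ssh server1 hostname        # should print "server1" without asking for a password
[hadoop@server1 ~]$ ssh localhost hostname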

Format the file system and start the services:

[hadoop@server1 ~]$ cd /home/hadoop/hadoop
[hadoop@server1 hadoop]$ bin/hdfs namenode -format

[hadoop@server1 hadoop]$ sbin/start-dfs.sh      # wait for it to finish
Starting namenodes on [server1]
Starting datanodes
Starting secondary namenodes [server1]
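A sanity check after start-dfs.sh is to run jps; in this single-node setup all three HDFS daemons would be expected (PIDs omitted here, they will differ):

[hadoop@server1 hadoop]$ jps
NameNode
DataNode
SecondaryNameNode
Jps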

Open 172.25.6.1:9870 in a browser. Test: create directories and upload files.

[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir /user              # create the /user directory
[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir /user/hadoop       # create the hadoop directory under /user
[hadoop@server1 hadoop]$ bin/hdfs dfs -ls                       # the hadoop home directory is still empty
[hadoop@server1 hadoop]$ ls
bin  etc  include  input  lib  libexec  LICENSE.txt  logs  NOTICE.txt  output  README.txt  sbin  share
[hadoop@server1 hadoop]$ bin/hdfs dfs -put input/               # upload the input directory into /user/hadoop
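To confirm the upload actually landed in HDFS rather than on the local disk, a quick check (not in the original session) is:

[hadoop@server1 hadoop]$ bin/hdfs dfs -ls                       # should now list the "input" directory
[hadoop@server1 hadoop]$ bin/hdfs dfs -ls input | head          # the uploaded xml files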

To avoid confusion, delete the local input and output directories created earlier under the hadoop directory:

[hadoop@server1 hadoop]$ rm -rf input/
[hadoop@server1 hadoop]$ rm -rf output/

In the browser, click Utilities, choose Browse the file system, and click GO!; the /user/hadoop directory created a moment ago is visible.

[hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar wordcount input output      # generates an output directory with the word counts of the files in input

Viewing option 1: the generated output files can be downloaded from the web UI and inspected.
Viewing option 2: read them directly with a command:

    [hadoop@server1 hadoop]$ bin/hdfs dfs -cat output/*

Viewing option 3: download them with a command and inspect them locally:

[hadoop@server1 hadoop]$ bin/hdfs dfs -get output
[hadoop@server1 hadoop]$ ls
bin  etc  include  lib  libexec  LICENSE.txt  logs  NOTICE.txt  output  README.txt  sbin  share
[hadoop@server1 hadoop]$ cd output/
[hadoop@server1 output]$ ls
part-r-00000  _SUCCESS
[hadoop@server1 output]$ cat part-r-00000

3. Fully Distributed Cluster Mode (Full-Distributed Mode)

- The Hadoop daemons run on a cluster built from multiple hosts; this is the real production environment.
- Install the JDK and Hadoop on all hosts and connect them in one network.
- Set up passwordless SSH between the hosts, adding each worker node's public key to the master node's trusted list.
- Modify 3 configuration files: core-site.xml, hdfs-site.xml, mapred-site.xml, specifying the location and port of the NameNode and JobTracker, the file replication factor, and other parameters.
- Format the file system.

Stop the services on server1 and clear the earlier data:

[hadoop@server1 output]$ cd
[hadoop@server1 ~]$ cd hadoop
[hadoop@server1 hadoop]$ sbin/stop-dfs.sh
Stopping namenodes on [server1]
Stopping datanodes
Stopping secondary namenodes [server1]
[hadoop@server1 hadoop]$ cd /tmp/
[hadoop@server1 tmp]$ ls
hadoop  hadoop-hadoop  hsperfdata_hadoop
[hadoop@server1 tmp]$ ll
total 0
drwxr-xr-x 3 hadoop hadoop 20 May 19 13:18 hadoop
drwxr-xr-x 4 hadoop hadoop 31 May 19 13:56 hadoop-hadoop
drwxr-xr-x 2 hadoop hadoop  6 May 19 14:42 hsperfdata_hadoop
[hadoop@server1 tmp]$ rm -rf *
[hadoop@server1 ~]$ logout

Install nfs-utils:

    [root@server1 ~]# yum install -y nfs-utils

Also install nfs-utils on server2 and server3, add the hadoop user, and start the rpcbind service. (If the uid/gid might differ, create the user with useradd -u <uid> hadoop so that it matches.)

[root@server2 ~]# yum install -y nfs-utils
[root@server2 ~]# useradd hadoop        # these are three fresh virtual machines, so the user is simply added
[root@server2 ~]# id hadoop
uid=1000(hadoop) gid=1000(hadoop) groups=1000(hadoop)
[root@server2 ~]# systemctl start rpcbind

[root@server3 ~]# yum install -y nfs-utils
[root@server3 ~]# useradd hadoop
[root@server3 ~]# id hadoop
uid=1000(hadoop) gid=1000(hadoop) groups=1000(hadoop)
[root@server3 ~]# systemctl start rpcbind

On server1, start the services and check that they start at boot:

[root@server1 ~]# systemctl start rpcbind
[root@server1 ~]# systemctl is-enabled rpcbind      # check whether rpcbind starts at boot; rpcbind is enabled automatically
indirect
[root@server1 ~]# vim /etc/exports
/home/hadoop    *(rw,anonuid=1000,anongid=1000)     # make sure the time and the hadoop uid/gid are identical on all nodes

[root@server1 ~]# systemctl start nfs       # start the NFS service
[root@server1 ~]# exportfs -v

    [root@server1 ~]# showmount -e

Mount the export on server2 and server3:

[root@server2 ~]# mount 172.25.6.1:/home/hadoop/ /home/hadoop/
[root@server3 ~]# mount 172.25.6.1:/home/hadoop/ /home/hadoop/
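To confirm the NFS mount on a worker node, a quick check (not in the original session) would be:

[root@server2 ~]# df -h /home/hadoop        # the filesystem column should show 172.25.6.1:/home/hadoop
[root@server2 ~]# ls /home/hadoop           # the files unpacked on server1 should be visible here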

server1: set up passwordless SSH to server2 and server3.

[root@server1 ~]# su - hadoop
Last login: Sun May 19 13:44:36 CST 2019 on pts/1
[hadoop@server1 ~]$ ssh-copy-id 172.25.6.2
[hadoop@server1 ~]$ ssh-copy-id 172.25.6.3

Passwordless switching between the nodes now works:

[hadoop@server3 ~]$ logout      # back on server1
[hadoop@server1 ~]$ cd hadoop
[hadoop@server1 hadoop]$ cd etc/hadoop/     # note the path
[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop/etc/hadoop
[hadoop@server1 hadoop]$ vim workers
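Based on the workers file shown on server2 and server3 further down, the file lists the two DataNode hosts, one IP per line:

172.25.6.2
172.25.6.3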

[hadoop@server1 hadoop]$ vim hdfs-site.xml      # change the replication factor to the two worker nodes
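The edited file is not shown in the original; following the hdfs-site.xml from the pseudo-distributed section, the replication value would presumably be changed from 1 to 2, roughly:

<configuration>
<property>
    <name>dfs.replication</name>
    <value>2</value>        ## two DataNodes: server2 and server3
</property>
</configuration>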

On server2 and server3:

[root@server2 ~]# su - hadoop
Last login: Sun May 19 15:37:51 CST 2019 from 172.25.6.1 on pts/1
[hadoop@server2 ~]$ cd hadoop/etc/hadoop/
[hadoop@server2 hadoop]$ cat workers
172.25.6.2
172.25.6.3

[root@server3 ~]# su - hadoop
Last login: Sun May 19 15:38:23 CST 2019 from 172.25.6.2 on pts/1
[hadoop@server3 ~]$ cd hadoop/etc/hadoop/
[hadoop@server3 hadoop]$ cat workers
172.25.6.2
172.25.6.3

Format the file system and start the services:

[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop
[hadoop@server1 hadoop]$ bin/hdfs namenode -format

[hadoop@server1 hadoop]$ sbin/start-dfs.sh
Starting namenodes on [server1]
Starting datanodes
Starting secondary namenodes [server1]
[hadoop@server1 hadoop]$ jps
4051 SecondaryNameNode      # the SecondaryNameNode now appears
4227 Jps
3877 NameNode

On server2:

[hadoop@server2 ~]$ jps
1542 Jps
1432 DataNode

On server3:

[hadoop@server3 ~]$ jps
1536 Jps
1422 DataNode

At this point HDFS contains no data yet.

Test: create directories and upload files.

[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir /user
[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir /user/hadoop
[hadoop@server1 hadoop]$ ls
bin  etc  include  lib  libexec  LICENSE.txt  logs  NOTICE.txt  output  README.txt  sbin  share
[hadoop@server1 hadoop]$ bin/hdfs dfs -put etc/hadoop/ input

In the web UI the two DataNodes are visible, and the data in input has been uploaded successfully.
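The same information can also be read from the command line; a quick check (not in the original session, output abbreviated) would be:

[hadoop@server1 hadoop]$ bin/hdfs dfsadmin -report | grep 'Live datanodes'      # expect "Live datanodes (2)"
[hadoop@server1 hadoop]$ bin/hdfs dfs -ls input | head                          # the uploaded configuration files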

[hadoop@server1 hadoop]$ bin/hdfs dfs -rm -r input      # delete input and check again in the web UI
Deleted input

Adding a new node:

[root@server4 ~]# yum install -y nfs-utils
[root@server4 ~]# useradd hadoop
[root@server4 ~]# id hadoop
uid=1000(hadoop) gid=1000(hadoop) groups=1000(hadoop)
[root@server4 ~]# systemctl start rpcbind
[root@server4 ~]# mount 172.25.6.1:/home/hadoop/ /home/hadoop/
[root@server4 ~]# su - hadoop
[hadoop@server4 ~]$ cd hadoop/etc/hadoop/
[hadoop@server4 hadoop]$ pwd
/home/hadoop/hadoop/etc/hadoop
[hadoop@server4 hadoop]$ vim workers
172.25.6.2
172.25.6.3
172.25.6.4
[hadoop@server4 hadoop]$ cd /home/hadoop/hadoop
[hadoop@server4 hadoop]$ pwd
/home/hadoop/hadoop
[hadoop@server4 hadoop]$ sbin/hadoop-daemon.sh start datanode       # start the DataNode on the new node; it can then be seen in the browser
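As a final check (not shown in the original session), jps on server4 and a dfsadmin report on server1 would be expected to confirm the new DataNode:

[hadoop@server4 hadoop]$ jps        # PIDs omitted; a DataNode process should now be running
DataNode
Jps
[hadoop@server1 hadoop]$ bin/hdfs dfsadmin -report | grep 'Live datanodes'      # expect "Live datanodes (3)"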
