Hadoop local mode, pseudo-distributed (single-node) and fully distributed deployment, plus cluster setup


    Preface:

    I. About Hadoop

    Hadoop is a distributed system infrastructure developed by the Apache Software Foundation. Users can develop distributed programs without knowing the low-level details of the distributed system, and make full use of a cluster's power for high-speed computation and storage.

    Hadoop implements a distributed file system, the Hadoop Distributed File System, abbreviated HDFS.

    HDFS is highly fault-tolerant and is designed to run on low-cost hardware. It provides high-throughput access to application data and suits applications with very large data sets. HDFS relaxes some POSIX requirements in order to allow streaming access to file system data.

    The core of the Hadoop framework is HDFS and MapReduce: HDFS provides storage for massive amounts of data, and MapReduce provides computation over that data.

    Which problems does Hadoop address?

    Massive data needs to be analyzed and processed in a timely manner

    Massive data needs in-depth analysis and mining

    Data needs to be stored for the long term

    Problems with storing massive data:

    Disk I/O, rather than CPU, becomes the bottleneck

    Network bandwidth is a scarce resource

    Hardware failure becomes a major factor affecting stability

    HDFS uses a master/slave architecture

    Hadoop's three running modes:

    1. Standalone (local) mode: no daemons are needed and everything runs inside a single JVM, which makes debugging MapReduce programs very efficient, so this mode is mainly used for learning and for development/debugging.
    2. Pseudo-distributed mode: the Hadoop daemons all run on the local machine, simulating a small cluster; in other words, a single machine is configured as a Hadoop cluster. Pseudo-distributed mode is a special case of fully distributed mode.
    3. Fully distributed mode: the Hadoop daemons run on a cluster of machines.
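    A quick way to confirm which mode a given installation will actually run in is to look at fs.defaultFS: it resolves to the local file system (file:///) in standalone mode and to an hdfs:// URI once the (pseudo-)distributed configuration shown later is in place. A minimal check, assuming an unpacked Hadoop 3.0.3 tree and a still-empty core-site.xml:

    [hadoop@server1 hadoop]$ bin/hdfs getconf -confKey fs.defaultFS
    file:///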

    The main HDFS components

    1.NameNode:

    Function: the management node of the entire file system. It maintains the file system's directory tree, the metadata of files/directories and the block list of each file, and it receives client requests.

    2.DataNode:

    Function: stores the actual data blocks on its local disks, serves read/write requests from clients, and regularly reports its blocks to the NameNode.

    3.SecondaryNameNode:

    Commonly presented as a partial HA measure: it maintains a backup image of the namespace by periodically merging the fsimage with the edit log, but it is not a hot standby for the NameNode.

    Experiment environment:

    RHEL 7.3, Hadoop 3.0.3, JDK 8

    Experiment host: server1 172.25.1.1

    Note: in local mode the local file system and the local MapReduce runner are used. In distributed mode the HDFS and YARN daemons are started.

    I. Hadoop Deployment

    1. Create the hadoop user and set its password

    [root@server1 ~]# useradd hadoop
    [root@server1 ~]# id hadoop
    uid=1000(hadoop) gid=1000(hadoop) groups=1000(hadoop)
    [root@server1 ~]# passwd hadoop    # I set the password to redhat here

    2. Install Hadoop and the JDK and create symbolic links

    [root@server1 ~]# ls
    hadoop-3.0.3.tar.gz  jdk-8u181-linux-x64.tar.gz
    [root@server1 ~]# mv * /home/hadoop/
    [root@server1 ~]# su - hadoop
    [hadoop@server1 ~]$ ls
    hadoop-3.0.3.tar.gz  jdk-8u181-linux-x64.tar.gz
    [hadoop@server1 ~]$ tar zxf hadoop-3.0.3.tar.gz
    [hadoop@server1 ~]$ ln -s hadoop-3.0.3 hadoop
    [hadoop@server1 ~]$ tar zxf jdk-8u181-linux-x64.tar.gz
    [hadoop@server1 ~]$ ln -s jdk1.8.0_181/ java
    [hadoop@server1 ~]$ ls
    hadoop        hadoop-3.0.3.tar.gz  jdk1.8.0_181
    hadoop-3.0.3  java                 jdk-8u181-linux-x64.tar.gz

    3. Configure the Java environment variables

    [hadoop@server1 ~]$ cd /home/hadoop/hadoop/etc/hadoop/
    [hadoop@server1 hadoop]$ vim hadoop-env.sh
    export JAVA_HOME=/home/hadoop/java
    [hadoop@server1 hadoop]$ cd
    [hadoop@server1 ~]$ vim .bash_profile
    PATH=$PATH:$HOME/.local/bin:$HOME/bin:$HOME/java/bin
    [hadoop@server1 ~]$ source .bash_profile
    [hadoop@server1 ~]$ jps    # verify that the Java environment works
    10508 Jps

    4. Test

    [hadoop@server1 ~]$ cd hadoop
    [hadoop@server1 hadoop]$ mkdir input
    [hadoop@server1 hadoop]$ cd input/
    [hadoop@server1 input]$ ls
    [hadoop@server1 input]$ cd ..
    [hadoop@server1 hadoop]$ cp etc/hadoop/*.xml input
    [hadoop@server1 hadoop]$ ls input/
    capacity-scheduler.xml  hdfs-site.xml    kms-site.xml
    core-site.xml           httpfs-site.xml  mapred-site.xml
    hadoop-policy.xml       kms-acls.xml     yarn-site.xml
    [hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar grep input output 'dfs[a-z.]+'

    [hadoop@server1 hadoop]$ ls
    bin  include  lib      LICENSE.txt  output      sbin
    etc  input    libexec  NOTICE.txt   README.txt  share
    [hadoop@server1 hadoop]$ cd output/
    [hadoop@server1 output]$ ls
    part-r-00000  _SUCCESS
    [hadoop@server1 output]$ cat *
    1       dfsadmin

    II. Pseudo-Distributed Hadoop

    1. Edit the Hadoop configuration files

    [hadoop@server1 ~]$ cd hadoop
    [hadoop@server1 hadoop]$ cd etc/hadoop/
    [hadoop@server1 hadoop]$ vim core-site.xml
    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://172.25.1.1:9000</value>
        </property>
    </configuration>
    [hadoop@server1 hadoop]$ vim hdfs-site.xml
    <configuration>
        <property>
            <name>dfs.replication</name>
            <value>1</value>
        </property>
    </configuration>

    2. Generate SSH keys for passwordless login

    [hadoop@server1 hadoop]$ pwd
    /home/hadoop/hadoop
    [hadoop@server1 hadoop]$ ssh-keygen
    [hadoop@server1 hadoop]$ ssh-copy-id localhost

    3. Format the NameNode and start the services

    [hadoop@server1 hadoop]$ bin/hdfs namenode -format
    [hadoop@server1 hadoop]$ cd sbin/
    [hadoop@server1 sbin]$ ./start-dfs.sh
    Starting namenodes on [server1]
    Starting datanodes
    Starting secondary namenodes [server1]
    [hadoop@server1 sbin]$ jps
    12513 DataNode
    12838 Jps
    12695 SecondaryNameNode
    12409 NameNode

    4. Access port 9870 on server1 with a browser (the NameNode web UI)
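    If a browser is not handy, a quick reachability check can be done from the shell. This is a sketch that is not part of the original transcript; it only verifies that the NameNode web UI answers on port 9870:

    [hadoop@server1 ~]$ curl -s -o /dev/null -w "%{http_code}\n" http://172.25.1.1:9870/
    200    # expected when the NameNode web UI is up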

    5. Test: create a directory and upload files

    [hadoop@server1 hadoop]$ pwd
    /home/hadoop/hadoop
    [hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir -p /user/hadoop
    [hadoop@server1 hadoop]$ bin/hdfs dfs -ls
    [hadoop@server1 hadoop]$ bin/hdfs dfs -put input
    [hadoop@server1 hadoop]$ bin/hdfs dfs -ls
    Found 1 items
    drwxr-xr-x   - hadoop supergroup          0 2019-05-22 22:53 input

    Delete the local input and output directories (left over from the standalone test), then run the wordcount example against the input directory already uploaded to HDFS:

    [hadoop@server1 hadoop]$ rm -rf input/
    [hadoop@server1 hadoop]$ rm -rf output/
    [hadoop@server1 hadoop]$ pwd
    /home/hadoop/hadoop
    [hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar wordcount input output

    [hadoop@server1 hadoop]$ bin/hdfs dfs -cat output/*

    [hadoop@server1 hadoop]$ bin/hdfs dfs -get output
    [hadoop@server1 hadoop]$ ls
    bin  include  libexec      logs        output      sbin   y
    etc  lib      LICENSE.txt  NOTICE.txt  README.txt  share  y.pub
    [hadoop@server1 hadoop]$ cd output/
    [hadoop@server1 output]$ ls
    part-r-00000  _SUCCESS

    We can see that after the local input and output directories are deleted the local copies are gone, but the data still exists in the Hadoop distributed file system and can be seen in the web UI; the output directory can also be downloaded again with the get command, as done above.
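    A quick confirmation from the command line (these two commands are not in the original transcript, they are only a sanity check): the directories are still listed in HDFS even though the local copies were removed.

    [hadoop@server1 hadoop]$ bin/hdfs dfs -ls          # input and output are still present in HDFS
    [hadoop@server1 hadoop]$ bin/hdfs dfs -ls input    # the uploaded *.xml files are intact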

    III. Fully Distributed Hadoop

    1. First stop the services and clean up the data left over from the pseudo-distributed setup
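    The original post does not show the commands for this step; the following is a minimal sketch, assuming the default hadoop.tmp.dir of /tmp/hadoop-${user.name} (where the NameNode and DataNode keep their data when no data directories are configured):

    [hadoop@server1 hadoop]$ sbin/stop-dfs.sh        # stop NameNode, DataNode and SecondaryNameNode
    [hadoop@server1 hadoop]$ rm -rf /tmp/hadoop-*    # remove the old HDFS data so the next format starts clean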

    2. Start two more virtual machines, server2 and server3, to act as DataNodes

    The Hadoop installation and configuration must be identical on every DataNode. A simple way to keep them fully in sync is to share the hadoop user's home directory over NFS.

    [root@server2 ~]# useradd hadoop
    [root@server3 ~]# useradd hadoop

    Install nfs-utils on server1 through server3:

    [root@server1 ~]# yum install -y nfs-utils
    [root@server1 ~]# systemctl start rpcbind
    [root@server2 ~]# yum install -y nfs-utils
    [root@server2 ~]# systemctl start rpcbind
    [root@server3 ~]# yum install -y nfs-utils
    [root@server3 ~]# systemctl start rpcbind

    3. Start the NFS service on server1 and configure the export

    [root@server1 ~]# systemctl start nfs-server
    [root@server1 ~]# vim /etc/exports
    /home/hadoop *(rw,anonuid=1000,anongid=1000)
    [root@server1 ~]# exportfs -rv
    exporting *:/home/hadoop
    [root@server1 ~]# showmount -e
    Export list for server1:
    /home/hadoop *

    4. Mount the directory exported by server1 on server2 and server3

    [root@server2 ~]# mount 172.25.1.1:/home/hadoop /home/hadoop/
    [root@server2 ~]# df
    Filesystem                           1K-blocks    Used Available Use% Mounted on
    /dev/mapper/rhel_foundation168-root   17811456 1098376  16713080   7% /
    devtmpfs                                497300       0    497300   0% /dev
    tmpfs                                   508264       0    508264   0% /dev/shm
    tmpfs                                   508264   13076    495188   3% /run
    tmpfs                                   508264       0    508264   0% /sys/fs/cgroup
    /dev/sda1                              1038336  123364    914972  12% /boot
    tmpfs                                   101656       0    101656   0% /run/user/0
    172.25.1.1:/home/hadoop               17811456 2796544  15014912  16% /home/hadoop

    5. Test passwordless SSH between the nodes (since the home directory, including the SSH keys, is shared over NFS, no password is needed)

    [root@server1 ~]# su - hadoop
    Last login: Wed May 22 20:44:42 CST 2019 on pts/0
    [hadoop@server1 ~]$ ssh 172.25.1.2
    The authenticity of host '172.25.1.2 (172.25.1.2)' can't be established.
    ECDSA key fingerprint is cc:cf:a2:8e:bc:5e:92:d9:5b:52:5d:66:04:ac:de:c4.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '172.25.1.2' (ECDSA) to the list of known hosts.
    [hadoop@server2 ~]$ logout
    Connection to 172.25.1.2 closed.
    [hadoop@server1 ~]$ ssh 172.25.1.3
    The authenticity of host '172.25.1.3 (172.25.1.3)' can't be established.
    ECDSA key fingerprint is b1:f5:42:d9:dc:11:53:1f:56:55:99:63:b7:63:7a:2e.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '172.25.1.3' (ECDSA) to the list of known hosts.
    [hadoop@server3 ~]$ logout
    Connection to 172.25.1.3 closed.

    6. Edit the Hadoop configuration files

    [hadoop@server1 ~]$ cd hadoop/etc/hadoop/
    [hadoop@server1 hadoop]$ vim core-site.xml
    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://172.25.1.1:9000</value>
        </property>
    </configuration>

    [hadoop@server1 hadoop]$ vim hdfs-site.xml
    <configuration>
        <property>
            <name>dfs.replication</name>
            <value>2</value>    <!-- two DataNodes now, so use a replication factor of 2 -->
        </property>
    </configuration>

    [hadoop@server1 hadoop]$ pwd
    /home/hadoop/hadoop/etc/hadoop
    [hadoop@server1 hadoop]$ vim workers
    [hadoop@server1 hadoop]$ cat workers
    172.25.1.2
    172.25.1.3

    7. Format the NameNode and start the services

    [hadoop@server1 hadoop]$ pwd
    /home/hadoop/hadoop
    [hadoop@server1 hadoop]$ bin/hdfs namenode -format
    [hadoop@server1 hadoop]$ sbin/start-dfs.sh
    Starting namenodes on [server1]
    Starting datanodes
    Starting secondary namenodes [server1]
    [hadoop@server1 hadoop]$ jps    # the SecondaryNameNode appears; no DataNode runs on server1 any more
    14848 NameNode
    15034 SecondaryNameNode
    15166 Jps

    Check on the worker nodes:

    [root@server2 hadoop]# su - hadoop
    Last login: Wed May 22 23:29:55 CST 2019 from server1 on pts/1
    [hadoop@server2 ~]$ ls
    hadoop        hadoop-3.0.3.tar.gz  jdk1.8.0_181
    hadoop-3.0.3  java                 jdk-8u181-linux-x64.tar.gz
    [hadoop@server2 ~]$ jps
    10754 DataNode
    10845 Jps

    8. Test

    In the NameNode web UI you can see that there are 2 live nodes.
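    The same information is available from the command line (this check is not part of the original transcript; the full report also lists each DataNode in detail):

    [hadoop@server1 hadoop]$ bin/hdfs dfsadmin -report | grep "Live datanodes"
    Live datanodes (2):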

    [hadoop@server1 hadoop]$ pwd
    /home/hadoop/hadoop
    [hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir -p /user/hadoop
    [hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir input
    [hadoop@server1 hadoop]$ bin/hdfs dfs -ls
    Found 1 items
    drwxr-xr-x   - hadoop supergroup          0 2019-05-22 23:46 input
    [hadoop@server1 hadoop]$ bin/hdfs dfs -put etc/hadoop/*.xml input

    IV. Cluster Setup (adding and removing nodes)

    1. Adding node server4

    [root@server4 ~]# useradd hadoop
    [root@server4 ~]# passwd hadoop
    Changing password for user hadoop.
    New password:
    BAD PASSWORD: The password is shorter than 8 characters
    Retype new password:
    passwd: all authentication tokens updated successfully.
    [root@server4 ~]# yum install -y nfs-utils
    [root@server4 ~]# systemctl start rpcbind
    [root@server4 ~]# mount 172.25.1.1:/home/hadoop/ /home/hadoop/
    [root@server4 ~]# su - hadoop
    [hadoop@server4 ~]$ cd hadoop/etc/hadoop/
    [hadoop@server4 hadoop]$ vim workers
    [hadoop@server4 hadoop]$ cat workers
    172.25.1.2
    172.25.1.3
    172.25.1.4
    [hadoop@server4 hadoop]$ cd ../..
    [hadoop@server4 hadoop]$ sbin/hadoop-daemon.sh start datanode
    WARNING: Use of this script to start HDFS daemons is deprecated.
    WARNING: Attempting to execute replacement "hdfs --daemon start" instead.
    [hadoop@server4 hadoop]$ jps
    3672 Jps
    3610 DataNode

    2. Removing a node

    To make the effect visible, add a large file to the file system from server1:

    [hadoop@server1 hadoop]$ pwd
    /home/hadoop/hadoop
    [hadoop@server1 hadoop]$ dd if=/dev/zero of=Bigfile bs=1M count=300
    300+0 records in
    300+0 records out
    314572800 bytes (315 MB) copied, 1.62678 s, 193 MB/s
    [hadoop@server1 hadoop]$ bin/hdfs dfs -put Bigfile

    In the browser you can see that the file has been added to the file system:
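    An equivalent check from the command line (not in the original transcript):

    [hadoop@server1 hadoop]$ bin/hdfs dfs -ls Bigfile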

    Check the node information:

    [hadoop@server4 hadoop]$ pwd
    /home/hadoop/hadoop
    [hadoop@server4 hadoop]$ bin/hdfs dfsadmin -report    # only server4's section of the report is shown
    Name: 172.25.1.4:9866 (server4)
    Hostname: server4
    Decommission Status : Normal
    Configured Capacity: 18238930944 (16.99 GB)
    DFS Used: 270540814 (258.01 MB)    # server4 already holds 258.01 MB of data
    Non DFS Used: 1124823026 (1.05 GB)
    DFS Remaining: 16843567104 (15.69 GB)
    DFS Used%: 1.48%
    DFS Remaining%: 92.35%
    Configured Cache Capacity: 0 (0 B)
    Cache Used: 0 (0 B)
    Cache Remaining: 0 (0 B)
    Cache Used%: 100.00%
    Cache Remaining%: 0.00%
    Xceivers: 1
    Last contact: Thu May 23 00:42:29 CST 2019
    Last Block Report: Thu May 23 00:33:22 CST 2019

    Edit the workers file on the master and remove server4:

    [hadoop@server1 hadoop]$ pwd
    /home/hadoop/hadoop/etc/hadoop
    [hadoop@server1 hadoop]$ vim workers

    Create a new file hosts-exclude and put the IP of the node to be removed into it:

    [hadoop@server4 hadoop]$ vim hosts-exclude
    [hadoop@server4 hadoop]$ cat hosts-exclude
    172.25.1.4

    Edit etc/hadoop/hdfs-site.xml:

    [hadoop@server1 hadoop]$ pwd
    /home/hadoop/hadoop
    [hadoop@server1 hadoop]$ vim etc/hadoop/hdfs-site.xml
    <configuration>
        <property>
            <name>dfs.replication</name>
            <value>2</value>
        </property>

        <property>
            <name>dfs.hosts.exclude</name>
            <value>/home/hadoop/hadoop/etc/hadoop/hosts-exclude</value>
        </property>
    </configuration>

    Refresh the node list:

    [hadoop@server1 hadoop]$ bin/hdfs dfsadmin -refreshNodes
    Refresh nodes successful

    Check the cluster status:

    [hadoop@server1 hadoop]$ bin/hdfs dfsadmin -report
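    Decommissioning is not instantaneous: the report will show server4 as "Decommission in progress" while its blocks are re-replicated to the remaining DataNodes, and "Decommissioned" once that has finished. At that point the DataNode process on server4 can be stopped; a sketch (not from the original transcript):

    [hadoop@server4 hadoop]$ bin/hdfs --daemon stop datanode
    [hadoop@server4 hadoop]$ jps    # the DataNode process should be gone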