Hadoop Single-Node Testing and Cluster Node Setup


    I. Hadoop architecture:

    Main HDFS modules: NameNode, DataNode
    Main YARN modules: ResourceManager, NodeManager
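
    In this walkthrough only the HDFS daemons are ever started; the YARN daemons have their own start script. A minimal sketch of how the modules map onto the stock start scripts (start-yarn.sh is standard Hadoop, but it is not used below):

    [hadoop@server1 hadoop]$ sbin/start-dfs.sh     ## starts NameNode, DataNode(s), SecondaryNameNode
    [hadoop@server1 hadoop]$ sbin/start-yarn.sh    ## would start ResourceManager and NodeManager(s)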

    II. Hadoop single-node test

    1. Install Hadoop and create the hadoop user

    [hadoop@server1 ~]$ tar zxf jdk-8u181-linux-x64.tar.gz
    [hadoop@server1 ~]$ tar zxf hadoop-3.0.3.tar.gz
    [hadoop@server1 ~]$ ln -s jdk1.8.0_181/ java    ## create symlinks to keep startup paths short
    [hadoop@server1 ~]$ ln -s hadoop-3.0.3 hadoop
    [hadoop@server1 ~]$ ls
    hadoop-3.0.3  hadoop-3.0.3.tar.gz  jdk1.8.0_181  jdk-8u181-linux-x64.tar.gz
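
    A quick sanity check that the unpacked trees and symlinks are usable (not part of the original transcript; version output abbreviated):

    [hadoop@server1 ~]$ ~/java/bin/java -version
    java version "1.8.0_181"
    [hadoop@server1 ~]$ ~/hadoop/bin/hadoop version
    Hadoop 3.0.3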

    2. Configure environment variables

    [hadoop@server1 hadoop]$ pwd
    /home/hadoop/hadoop/etc/hadoop
    [hadoop@server1 hadoop]$ vim hadoop-env.sh
     54 export JAVA_HOME=/home/hadoop/java
    [hadoop@server1 ~]$ vim .bash_profile
    PATH=$PATH:$HOME/.local/bin:$HOME/bin:$HOME/java/bin
    [hadoop@server1 ~]$ source .bash_profile
    [hadoop@server1 ~]$ jps    ## if the variables are set correctly, jps can now be invoked

    3. Test

    [hadoop@server1 hadoop]$ pwd
    /home/hadoop/hadoop
    [hadoop@server1 hadoop]$ mkdir input
    [hadoop@server1 hadoop]$ cp etc/hadoop/*.xml input/
    [hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar grep input output 'dfs[a-z.]+'
    [hadoop@server1 hadoop]$ ls input/
    capacity-scheduler.xml  core-site.xml  hadoop-policy.xml  hdfs-site.xml  httpfs-site.xml
    kms-acls.xml  kms-site.xml  mapred-site.xml  yarn-site.xml
    [hadoop@server1 hadoop]$ cd output/
    [hadoop@server1 output]$ ls
    part-r-00000  _SUCCESS
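
    To see what the grep example actually matched, print the result file; on the stock configuration files it typically finds a single dfsadmin occurrence (your counts may differ):

    [hadoop@server1 output]$ cat part-r-00000
    1       dfsadmin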

    III. Pseudo-distributed mode

    1. Edit the configuration files

    [hadoop@server1 hadoop]$ pwd
    /home/hadoop/hadoop/etc/hadoop
    [hadoop@server1 hadoop]$ vim core-site.xml
    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://172.25.61.1:9000</value>
        </property>
    </configuration>
    [hadoop@server1 hadoop]$ vim hdfs-site.xml
    <configuration>
        <property>
            <name>dfs.replication</name>
            <value>1</value>    <!-- this single node acts as its own DataNode -->
        </property>
    </configuration>
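
    A quick way to confirm HDFS picked up the new settings (a sanity check, not in the original walkthrough):

    [hadoop@server1 hadoop]$ cd /home/hadoop/hadoop
    [hadoop@server1 hadoop]$ bin/hdfs getconf -confKey fs.defaultFS
    hdfs://172.25.61.1:9000
    [hadoop@server1 hadoop]$ bin/hdfs getconf -confKey dfs.replication
    1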

    2. Set up passwordless SSH

    [hadoop@server1 hadoop]$ ssh-keygen
    [hadoop@server1 hadoop]$ logout
    [root@server1 ~]# passwd hadoop
    [root@server1 ~]# su - hadoop
    [hadoop@server1 ~]$ ssh-copy-id 172.25.61.1
    [hadoop@server1 ~]$ ssh-copy-id localhost
    [hadoop@server1 ~]$ ssh-copy-id server1
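
    Each of the following should now return without prompting for a password; if any still asks, the key copy did not take (a verification step, not in the original transcript):

    [hadoop@server1 ~]$ ssh localhost hostname
    server1
    [hadoop@server1 ~]$ ssh server1 hostname
    server1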

    3. Format the NameNode and start the services

    [hadoop@server1 hadoop]$ pwd
    /home/hadoop/hadoop
    [hadoop@server1 hadoop]$ bin/hdfs namenode -format
    [hadoop@server1 hadoop]$ cd sbin/
    [hadoop@server1 sbin]$ ./start-dfs.sh
    Starting namenodes on [server1]
    Starting datanodes
    localhost: datanode is running as process 13435.  Stop it first.
    Starting secondary namenodes [server1]
    server1: secondarynamenode is running as process 13617.  Stop it first.
    [hadoop@server1 sbin]$ jps
    13617 SecondaryNameNode
    14329 Jps
    13435 DataNode
    13964 NameNode
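
    The "Stop it first" messages mean a DataNode and SecondaryNameNode were still running from an earlier attempt. For a clean restart you would stop everything before starting again (a suggestion, not part of the original transcript):

    [hadoop@server1 sbin]$ ./stop-dfs.sh
    [hadoop@server1 sbin]$ ./start-dfs.sh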

    In a browser, open http://172.25.61.1:9870 (the NameNode web UI).

    4. Test: create directories and upload files

    [hadoop@server1 hadoop]$ pwd
    /home/hadoop/hadoop
    [hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir /user
    [hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir /user/hadoop
    [hadoop@server1 hadoop]$ bin/hdfs dfs -ls
    [hadoop@server1 hadoop]$ bin/hdfs dfs -put input
    [hadoop@server1 hadoop]$ bin/hdfs dfs -ls
    Found 1 items
    drwxr-xr-x   - hadoop supergroup          0 2019-05-23 03:11 input
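
    Relative paths in dfs commands resolve to the user's HDFS home directory, /user/<username>, which is why /user/hadoop had to be created before the put. The same listing via an absolute path (illustrative output):

    [hadoop@server1 hadoop]$ bin/hdfs dfs -ls /user/hadoop
    Found 1 items
    drwxr-xr-x   - hadoop supergroup          0 2019-05-23 03:11 /user/hadoop/input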

    The upload is also visible in the web UI.

    [hadoop@server1 hadoop]$ rm -rf input/
    [hadoop@server1 hadoop]$ rm -rf output
    [hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar wordcount input output
    [hadoop@server1 hadoop]$ bin/hdfs dfs -cat output/*
    [hadoop@server1 hadoop]$ bin/hdfs dfs -get output    ## the output can also be fetched locally for inspection
    [hadoop@server1 hadoop]$ ls
    [hadoop@server1 hadoop]$ cd output/
    [hadoop@server1 output]$ ls
    part-r-00000  _SUCCESS

    IV. Fully distributed mode

    1. Reset the environment

    [hadoop@server1 hadoop]$ pwd
    /home/hadoop/hadoop
    [hadoop@server1 hadoop]$ sbin/stop-dfs.sh
    Stopping namenodes on [server1]
    Stopping datanodes
    Stopping secondary namenodes [server1]
    [hadoop@server1 hadoop]$ cd /tmp/
    [hadoop@server1 tmp]$ ls
    hadoop  hadoop-hadoop  hsperfdata_hadoop
    [hadoop@server1 tmp]$ rm -rf *
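
    Wiping /tmp resets HDFS here because metadata and block data default to hadoop.tmp.dir, which is /tmp/hadoop-${user.name}. For anything longer-lived you would point it at persistent storage instead; a sketch for core-site.xml (the path is a hypothetical example):

    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/data</value>    <!-- hypothetical persistent location -->
    </property>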

    2. Bring up two new virtual machines, server2 and server3, as worker nodes

    Create the user:
    [root@server2 ~]# useradd -u 1000 hadoop
    [root@server3 ~]# useradd -u 1000 hadoop
    Install nfs-utils and start rpcbind:
    [root@server1 ~]# yum install -y nfs-utils
    [root@server2 ~]# yum install -y nfs-utils
    [root@server3 ~]# yum install -y nfs-utils
    [root@server1 ~]# systemctl start rpcbind
    [root@server2 ~]# systemctl start rpcbind
    [root@server3 ~]# systemctl start rpcbind
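
    The fixed UID matters: NFS maps ownership by numeric UID, so hadoop must be UID 1000 on every node for the shared home directory to stay writable. A quick check (illustrative output):

    [root@server2 ~]# id hadoop
    uid=1000(hadoop) gid=1000(hadoop) groups=1000(hadoop)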

    3. Start and configure the NFS server on server1

    [root@server1 ~]# systemctl start nfs-server
    [root@server1 ~]# vim /etc/exports
    /home/hadoop    *(rw,anonuid=1000,anongid=1000)
    [root@server1 ~]# exportfs -rv
    exporting *:/home/hadoop
    [root@server1 ~]# showmount -e
    Export list for server1:
    /home/hadoop *

    4. Mount the share on server2 and server3

    [root@server2 ~]# mount 172.25.61.1:/home/hadoop /home/hadoop
    [root@server2 ~]# df
    Filesystem                1K-blocks    Used Available Use% Mounted on
    /dev/mapper/rhel-root      17811456 1095856  16715600   7% /
    devtmpfs                     497292       0    497292   0% /dev
    tmpfs                        508264       0    508264   0% /dev/shm
    tmpfs                        508264   13072    495192   3% /run
    tmpfs                        508264       0    508264   0% /sys/fs/cgroup
    /dev/sda1                   1038336  123376    914960  12% /boot
    tmpfs                        101656       0    101656   0% /run/user/0
    172.25.61.1:/home/hadoop   17811456 2794496  15016960  16% /home/hadoop
    [root@server3 ~]# mount 172.25.61.1:/home/hadoop /home/hadoop
    [root@server3 ~]# df
    Filesystem                1K-blocks    Used Available Use% Mounted on
    /dev/mapper/rhel-root      17811456 1095808  16715648   7% /
    devtmpfs                     497292       0    497292   0% /dev
    tmpfs                        508264       0    508264   0% /dev/shm
    tmpfs                        508264   13060    495204   3% /run
    tmpfs                        508264       0    508264   0% /sys/fs/cgroup
    /dev/sda1                   1038336  123376    914960  12% /boot
    tmpfs                        101656       0    101656   0% /run/user/0
    172.25.61.1:/home/hadoop   17811456 2794496  15016960  16% /home/hadoop

    Because the hadoop home directory (and with it ~/.ssh) is now shared over NFS, server1, server2, and server3 can all log in to each other as hadoop without a password.
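
    A quick confirmation from server1 (not in the original transcript):

    [hadoop@server1 ~]$ ssh server2 hostname
    server2
    [hadoop@server1 ~]$ ssh server3 hostname
    server3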

    5. Update the configuration files

    [hadoop@server1 hadoop]$ pwd
    /home/hadoop/hadoop/etc/hadoop
    [hadoop@server1 hadoop]$ vim core-site.xml
    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://172.25.61.1:9000</value>
        </property>
    </configuration>
    [hadoop@server1 hadoop]$ vim hdfs-site.xml
    <configuration>
        <property>
            <name>dfs.replication</name>
            <value>2</value>    <!-- changed to two, one copy per DataNode -->
        </property>
    </configuration>
    [hadoop@server1 hadoop]$ vim workers
    [hadoop@server1 hadoop]$ cat workers
    172.25.61.2
    172.25.61.3
    ## edited in one place, yet every node sees it, because the directory is shared over NFS:
    [root@server2 ~]# su - hadoop
    [hadoop@server2 ~]$ cd hadoop/etc/hadoop/
    [hadoop@server2 hadoop]$ cat workers
    172.25.61.2
    172.25.61.3
    [root@server3 ~]# su - hadoop
    [hadoop@server3 ~]$ cd hadoop/etc/hadoop/
    [hadoop@server3 hadoop]$ cat workers
    172.25.61.2
    172.25.61.3

    6. Reformat the NameNode and start the services

    [hadoop@server1 hadoop]$ bin/hdfs namenode -format
    [hadoop@server1 hadoop]$ sbin/start-dfs.sh
    Starting namenodes on [server1]
    Starting datanodes
    Starting secondary namenodes [server1]
    ## the DataNode process is now visible on the worker nodes:
    [hadoop@server2 ~]$ jps
    11959 DataNode
    12046 Jps
    [hadoop@server3 ~]$ jps
    10774 Jps
    10713 DataNode
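
    From the NameNode side you can also confirm that both workers registered; hdfs dfsadmin -report lists every live DataNode with its capacity and usage (output abbreviated and illustrative):

    [hadoop@server1 hadoop]$ bin/hdfs dfsadmin -report
    ...
    Live datanodes (2):
    Name: 172.25.61.2:9866 (server2)
    ...
    Name: 172.25.61.3:9866 (server3)
    ...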

    7. Test

    [hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir /user
    [hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir /user/hadoop
    [hadoop@server1 hadoop]$ ls
    bin  etc  include  lib  libexec  LICENSE.txt  logs  NOTICE.txt  output  README.txt  sbin  share
    [hadoop@server1 hadoop]$ bin/hdfs dfs -put etc/hadoop/ input

    The uploaded data and the DataNode information can be viewed in the web UI.

    8. Upload a large file

    [hadoop@server2 ~]$ cd /home/hadoop/hadoop
    [hadoop@server2 hadoop]$ dd if=/dev/zero of=bigfile bs=1M count=500
    500+0 records in
    500+0 records out
    524288000 bytes (524 MB) copied, 16.1338 s, 32.5 MB/s
    [hadoop@server2 hadoop]$ bin/hdfs dfs -put bigfile
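
    With the default 128 MB block size, the 500 MB file is split into four blocks, and dfs.replication=2 stores a copy of each block on both DataNodes. The block layout can be checked with fsck (output abbreviated and illustrative):

    [hadoop@server2 hadoop]$ bin/hdfs fsck /user/hadoop/bigfile -files -blocks
    /user/hadoop/bigfile 524288000 bytes, 4 block(s):  OK
    ...
    Status: HEALTHY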
