Hadoop standalone test, pseudo-distributed, and fully distributed modes


    I. Prerequisites

    Three RHEL 7.3 virtual machines:

    server1

    server2

    server3

    II. Hadoop standalone test

    Standalone mode is Hadoop's default mode. It runs on a single machine with no distributed filesystem, reading and writing directly against the local operating system's filesystem.

    1. Create the hadoop user:

    [root@server1 ~]# useradd hadoop
    [root@server1 ~]# id hadoop
    uid=1000(hadoop) gid=1000(hadoop) groups=1000(hadoop)
    [root@server1 ~]# passwd hadoop
    Changing password for user hadoop.
    New password:
    BAD PASSWORD: The password is shorter than 8 characters
    Retype new password:
    passwd: all authentication tokens updated successfully.

    2. Install Hadoop and the JDK

    [root@server1 ~]# mv * /home/hadoop/
    [root@server1 ~]# su - hadoop
    [hadoop@server1 ~]$ tar zxf hadoop-3.0.3.tar.gz
    [hadoop@server1 ~]$ tar zxf jdk-8u181-linux-x64.tar.gz
    [hadoop@server1 ~]$ ln -s hadoop-3.0.3 hadoop
    [hadoop@server1 ~]$ ln -s jdk1.8.0_181/ java
    [hadoop@server1 ~]$ cd hadoop/etc/hadoop/
    [hadoop@server1 hadoop]$ vim hadoop-env.sh
    export JAVA_HOME=/home/hadoop/java    # line 54

    Configure the environment variables:

    [hadoop@server1 hadoop]$ cd
    [hadoop@server1 ~]$ cd java/
    [hadoop@server1 java]$ vim ~/.bash_profile    # add java to the PATH
    PATH=$PATH:$HOME/.local/bin:$HOME/bin:$HOME/java/bin
    [hadoop@server1 java]$ source ~/.bash_profile

    [hadoop@server1 java]$ jps    # list running Java processes
    10452 Jps

    3. Test:

    [hadoop@server1 ~]$ cd hadoop
    [hadoop@server1 hadoop]$ pwd
    /home/hadoop/hadoop
    [hadoop@server1 hadoop]$ mkdir input
    [hadoop@server1 hadoop]$ cp etc/hadoop/*.xml input/
    [hadoop@server1 hadoop]$ ls input/
    capacity-scheduler.xml  hdfs-site.xml    kms-site.xml
    core-site.xml           httpfs-site.xml  mapred-site.xml
    hadoop-policy.xml       kms-acls.xml     yarn-site.xml
    [hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar grep input output 'dfs[a-z.]+'

    [hadoop@server1 hadoop]$ cd output/
    [hadoop@server1 output]$ ls
    part-r-00000  _SUCCESS
    [hadoop@server1 output]$ cat part-r-00000
    1	dfsadmin

    III. Pseudo-distributed mode

    This mode also runs on a single machine, but it uses separate Java processes to simulate the various node types of a distributed deployment.

    There is no real distributed computation across multiple machines, hence the name "pseudo-distributed".

    Pseudo-distributed mode runs Hadoop on a "single-node cluster", with all daemons running on the same machine. On top of standalone mode it adds debugging capabilities, letting you inspect memory usage, HDFS input and output, and interactions between the daemons.

    1. Edit the configuration files

    [hadoop@server1 ~]$ cd hadoop/etc/hadoop/
    [hadoop@server1 hadoop]$ vim core-site.xml
    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://172.25.60.1:9000</value>
        </property>
    </configuration>
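    A quick way to confirm the setting took effect is the stock getconf subcommand of the hdfs CLI (a minimal check, not part of the original walkthrough):

    [hadoop@server1 hadoop]$ cd ~/hadoop
    [hadoop@server1 hadoop]$ bin/hdfs getconf -confKey fs.defaultFS
    hdfs://172.25.60.1:9000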

    [hadoop@server1 hadoop]$ vim hdfs-site.xml
    <property>
        <name>dfs.replication</name>
        <value>1</value>    <!-- this single node holds the only replica -->
    </property>

    2. Set up passwordless SSH (for convenience)

    [hadoop@server1 hadoop]$ cd
    [hadoop@server1 ~]$ ssh-keygen
    [hadoop@server1 ~]$ ssh-copy-id 172.25.60.1
    [hadoop@server1 ~]$ ssh-copy-id localhost
    [hadoop@server1 ~]$ ssh-copy-id server1
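    To verify the keys work, a remote command should now complete without a password prompt (a quick sanity check, not in the original steps):

    [hadoop@server1 ~]$ ssh localhost hostname
    server1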

    3. Format the NameNode and start the services

    [hadoop@server1 ~]$ cd hadoop
    [hadoop@server1 hadoop]$ bin/hdfs namenode -format

    [hadoop@server1 hadoop]$ cd sbin/
    [hadoop@server1 sbin]$ ./start-dfs.sh
    Starting namenodes on [server1]
    Starting datanodes
    Starting secondary namenodes [server1]
    [hadoop@server1 sbin]$ jps
    4417 DataNode
    4314 NameNode
    4602 SecondaryNameNode
    4746 Jps
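    Besides jps, the standard hdfs dfsadmin report gives a cluster-wide view (output omitted here, as it varies by machine):

    [hadoop@server1 sbin]$ cd ..
    [hadoop@server1 hadoop]$ bin/hdfs dfsadmin -report
    # reports configured/used capacity and, for this setup, one live datanode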

    4. Open the web UI in a browser. In Hadoop 3.x the NameNode web UI listens on port 9870 (port 9000 from fs.defaultFS is the HDFS RPC port, not a web page):

    http://172.25.60.1:9870
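    From the command line, a quick reachability check (assuming curl is installed):

    [hadoop@server1 ~]$ curl -s -o /dev/null -w '%{http_code}\n' http://172.25.60.1:9870
    200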

    5. Test: create a directory and upload files

    [hadoop@server1 hadoop]$ pwd
    /home/hadoop/hadoop
    [hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir -p /user/hadoop
    [hadoop@server1 hadoop]$ bin/hdfs dfs -ls
    [hadoop@server1 hadoop]$ bin/hdfs dfs -put input/
    [hadoop@server1 hadoop]$ bin/hdfs dfs -ls
    Found 1 items
    drwxr-xr-x   - hadoop supergroup          0 2019-05-23 02:41 input
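    Relative HDFS paths resolve against the user's home directory, /user/<username>, which is why /user/hadoop had to be created first. The following two listings are therefore equivalent (a small illustration, not from the original):

    [hadoop@server1 hadoop]$ bin/hdfs dfs -ls input
    [hadoop@server1 hadoop]$ bin/hdfs dfs -ls /user/hadoop/input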

    Refresh the browser:

    Click "Browse the file system"

    Click user --> hadoop --> input

    6. Delete the local input and output directories and rerun the job

    [hadoop@server1 hadoop]$ rm -fr input/ output/
    [hadoop@server1 hadoop]$ ls
    bin  etc  include  lib  libexec  LICENSE.txt  logs  NOTICE.txt  README.txt  sbin  share
    [hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar wordcount input output    # count how many times each word appears in the files

    This time input and output do not appear in the local working directory; the job read its input from and wrote its output to the distributed filesystem, which you can see in the web UI.

    Refresh the browser:

    Click output --> _SUCCESS (to view the results, click a file and download it)

    Viewing from the command line:

    [hadoop@server1 hadoop]$ bin/hdfs dfs -cat output/*
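    To see the most frequent words first, you can pipe the result through sort (plain shell, nothing Hadoop-specific; wordcount output is tab-separated word/count pairs):

    [hadoop@server1 hadoop]$ bin/hdfs dfs -cat output/part-r-00000 | sort -k2 -nr | head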

    [hadoop@server1 hadoop]$ bin/hdfs dfs -get output    # fetch the output directory from the distributed filesystem
    [hadoop@server1 hadoop]$ cd output/
    [hadoop@server1 output]$ ls
    part-r-00000  _SUCCESS
    [hadoop@server1 output]$ cat part-r-00000

    IV. Fully distributed mode

    A real distributed deployment: a cluster built from three or more physical or virtual machines, with the Hadoop daemons running across the cluster.

    1. Clear the old data

    [hadoop@server1 hadoop]$ sbin/stop-dfs.sh
    Stopping namenodes on [server1]
    Stopping datanodes
    Stopping secondary namenodes [server1]
    [hadoop@server1 hadoop]$ cd /tmp/
    [hadoop@server1 tmp]$ ls
    hadoop  hadoop-hadoop  hsperfdata_hadoop
    [hadoop@server1 tmp]$ rm -fr *
    [hadoop@server1 tmp]$ logout

    2. Bring up two new virtual machines as new nodes (the hadoop uid must match server1's uid 1000 so ownership of the NFS-shared files maps to the same user)

    [root@server2 ~]# useradd -u 1000 hadoop
    [root@server3 ~]# useradd -u 1000 hadoop

    3. Install NFS

    [root@server1 ~]# yum install -y nfs-utils
    [root@server2 ~]# yum install -y nfs-utils
    [root@server3 ~]# yum install -y nfs-utils
    [root@server1 ~]# systemctl start rpcbind
    [root@server1 ~]# systemctl is-enabled rpcbind
    indirect
    [root@server2 ~]# systemctl start rpcbind
    [root@server3 ~]# systemctl start rpcbind

    4. Start and configure the NFS service on server1

    [root@server1 ~]# systemctl start nfs-server
    [root@server1 ~]# vim /etc/exports
    /home/hadoop    *(rw,anonuid=1000,anongid=1000)
    [root@server1 ~]# exportfs -rv
    exporting *:/home/hadoop
    [root@server1 ~]# exportfs -v
    /home/hadoop    <world>(rw,wdelay,root_squash,no_subtree_check,anonuid=1000,anongid=1000,sec=sys,rw,secure,root_squash,no_all_squash)

    5. Mount the NFS share on server2 and server3

    [root@server2 ~]# showmount -e server1
    Export list for server1:
    /home/hadoop *
    [root@server2 ~]# mount 172.25.60.1:/home/hadoop/ /home/hadoop/
    [root@server2 ~]# df
    172.25.60.1:/home/hadoop  17811456  2817792  14993664  16%  /home/hadoop

    [root@server3 ~]# showmount -e server1
    Export list for server1:
    /home/hadoop *
    [root@server3 ~]# mount 172.25.60.1:/home/hadoop/ /home/hadoop/
    [root@server3 ~]# df

    6. Passwordless login now works between all nodes (the home directory, including ~/.ssh, is the same NFS-mounted directory everywhere)

    [hadoop@server1 ~]$ ssh 172.25.60.2
    Last login: Thu May 23 06:00:34 2019 from server1
    [hadoop@server2 ~]$ ssh 172.25.60.3
    Last login: Thu May 23 06:07:42 2019 from server1
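    Because every node mounts the same home directory, the key pair and authorized_keys are literally the same files on every host; you can confirm from any node (the exact file list depends on your ssh-keygen defaults):

    [hadoop@server2 ~]$ ls ~/.ssh
    authorized_keys  id_rsa  id_rsa.pub  known_hosts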

    7. Re-edit the configuration files

    [hadoop@server1 ~]$ cd hadoop
    [hadoop@server1 hadoop]$ ls
    bin  etc  include  lib  libexec  LICENSE.txt  logs  NOTICE.txt  output  README.txt  sbin  share
    [hadoop@server1 hadoop]$ cd etc/hadoop/
    [hadoop@server1 hadoop]$ vim workers
    172.25.60.2
    172.25.60.3

    Update the replica count to match the two DataNodes:

    [hadoop@server3 hadoop]$ vim hdfs-site.xml
    <value>2</value>
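    For context, the full property in hdfs-site.xml now reads (the surrounding tags are unchanged from the pseudo-distributed step):

    <property>
        <name>dfs.replication</name>
        <value>2</value>    <!-- one replica on each of the two DataNodes -->
    </property>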

    Because the directory is shared over NFS, editing the files in one place makes them visible on every node:

    8. Format the NameNode and start the services

    [hadoop@server1 hadoop]$ bin/hdfs namenode -format
    [hadoop@server1 hadoop]$ sbin/start-dfs.sh
    Starting namenodes on [server1]
    Starting datanodes
    Starting secondary namenodes [server1]

    The DataNode process is visible on the worker nodes:

    [hadoop@server2 ~]$ jps
    10834 Jps
    10772 DataNode

    [hadoop@server3 ~]$ jps
    10842 Jps
    10780 DataNode

    9. Test: refresh the browser and click Datanodes; two nodes are listed.

    [hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir -p /user/hadoop
    [hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir input
    [hadoop@server1 hadoop]$ bin/hdfs dfs -put etc/hadoop/*.xml input
    [hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar grep input output 'dfs[a-z.]+'
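    To confirm the uploaded blocks are actually replicated across both DataNodes, the standard hdfs fsck tool reports block locations (output varies by cluster, so it is only described here):

    [hadoop@server1 hadoop]$ bin/hdfs fsck /user/hadoop/input -files -blocks -locations
    # each block should report repl=2, with locations on 172.25.60.2 and 172.25.60.3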

     

