
Installing Fully Distributed Hadoop 3 on openEuler 24.03 LTS

2025/3/20 — Source: https://blog.csdn.net/qq_42881421/article/details/146381806

Table of Contents

Linux Preparation
About openEuler 24.03 LTS
Download openEuler 24.03 LTS
Install openEuler 24.03 LTS
Basic Linux Setup
Stop and Disable the Firewall
Change the Hostname
Static IP
Map Hostnames
Create a Regular User
Prepare Directories
Clone the Host
Configure Passwordless SSH Between Machines
Write a Distribution Script
Install Java
Download Java
Extract
Set Environment Variables
Distribute to the Other Machines
Install Hadoop
Hadoop Cluster Plan
Download Hadoop
Extract
Set Environment Variables
Check the Version
Configure Hadoop
Configure core-site.xml
Configure hdfs-site.xml
Configure mapred-site.xml
Configure yarn-site.xml
Configure workers
Distribute to the Other Machines
Format the File System
Start the Cluster
Start HDFS
Start YARN
Check jps Processes
Access the Web UI
Test Hadoop
Compute pi
Run wordcount
Handy Cluster Scripts
Script to Run jps on All Nodes
Hadoop Start/Stop Script
Script to Run the Same Command on All Nodes
One-Click Cluster Shutdown Script


Linux Preparation

About openEuler 24.03 LTS

For Linux, this guide uses the Chinese-developed openEuler 24.03 LTS.

openEuler 24.03 LTS is the long-term-support release of openEuler, the open-source operating system Huawei donated to the OpenAtom Foundation; it was officially released on June 6, 2024. Billed as the first AI-native open-source operating system, it targets digital infrastructure such as servers, cloud computing, edge computing, and embedded devices.

Download openEuler 24.03 LTS

https://www.openeuler.org/en/download/

Download the Offline Standard ISO of openEuler 24.03 LTS SP1: openEuler-24.03-LTS-SP1-x86_64-dvd.iso

 

Install openEuler 24.03 LTS

Create a virtual machine named node2 and install openEuler 24.03 LTS SP1 on it; for details see: Installing openEuler 24.03 LTS under VMware.

Basic Linux Setup

Stop and Disable the Firewall

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
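
As an optional check (not part of the original steps), you can confirm the firewall is really stopped and disabled; the two commands below should report inactive and disabled respectively:

# optional verification
[root@localhost ~]# systemctl is-active firewalld
[root@localhost ~]# systemctl is-enabled firewalld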

Change the Hostname

Change the hostname to node2.

# change the hostname
[root@localhost ~]# hostnamectl set-hostname node2

# reboot
[root@localhost ~]# reboot

After the reboot, reconnect with your remote terminal tool; the prompt now shows the hostname node2.

[root@node2 ~]# 

Static IP

By default the machine gets its address via DHCP, so the IP may change, which causes unnecessary trouble; pin it to a static address instead.

[root@node2 ~]# cd /etc/sysconfig/network-scripts/
[root@node2 network-scripts]# ls
ifcfg-ens33
[root@node2 network-scripts]# vim ifcfg-ens33

Edit the file as follows:

# change
BOOTPROTO=static

# add
IPADDR=192.168.193.132
NETMASK=255.255.255.0
GATEWAY=192.168.193.2
DNS1=192.168.193.2
DNS2=114.114.114.114

The static IP set here is 192.168.193.132. Note: IPADDR, GATEWAY, and DNS must use the same 192.168.193.* subnet as the NAT network reported by VMware; adjust the subnet to your environment. To look it up, open VMware and choose File --> Virtual Network Editor.

Reboot for the change to take effect

reboot
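
After the reboot, an optional way to confirm the static address took effect (ens33 is the interface edited above):

# optional verification: should show inet 192.168.193.132
[root@node2 ~]# ip addr show ens33 | grep "inet "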

Map Hostnames

Edit /etc/hosts

[root@node2 ~]# vim /etc/hosts

Append the following at the end:

192.168.193.132 node2
192.168.193.133 node3
192.168.193.134 node4

Note: adjust the IPs and hostnames to your environment. The cluster plan uses node3 and node4, so their mappings are written in advance; a quick resolution check follows.
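
As an optional sanity check, the mappings can already be resolved straight from /etc/hosts, even though node3 and node4 only become reachable after they are cloned later:

# optional verification: prints the IP recorded for each name
[root@node2 ~]# getent hosts node2 node3 node4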

Create a Regular User

The root user is too powerful and a slip can cause irreparable damage, so create a regular user for the rest of the big-data setup. Here a user named liang is created, with password liang; change the username and password to suit your needs. The commands are:

useradd liang
passwd liang

The session looks like this:

[root@node2 ~]# useradd liang
[root@node2 ~]# passwd liang
Changing password for user liang.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: all authentication tokens updated successfully.

Although passwd flags the password as weak, it is still updated successfully.

Grant the new user sudo privileges

Edit the /etc/sudoers file

vim /etc/sudoers

Below the %wheel line, add the following line:

liang   ALL=(ALL)     NOPASSWD:ALL

Note: liang is the username; change it to match yours.

To save: press Esc to leave insert mode, then type :wq! (the trailing ! forces the write because sudoers is read-only). A quick verification follows.
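
A minimal way to verify the sudo rule, assuming the user is liang: switch to the new user and run a harmless command with sudo; it should print root without asking for a password.

# optional verification
[root@node2 ~]# su - liang
[liang@node2 ~]$ sudo whoami
root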

Prepare Directories

Directory plan:

1. Put software installation packages in /opt/software;

2. Install software that allows a custom install location under /opt/module.

Note: adjust the planned directories as needed.

Create the directories and change their ownership

[root@node2 ~]# mkdir /opt/module
[root@node2 ~]# mkdir /opt/software
[root@node2 ~]# chown liang:liang /opt/module
[root@node2 ~]# chown liang:liang /opt/software

Note: if your regular user is not liang, change liang in the chown commands accordingly. An ownership check follows.
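
An optional check that the ownership is what the later steps expect:

# optional verification: both directories should be owned by liang:liang
[root@node2 ~]# ls -ld /opt/module /opt/software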

Clone the Host

Clone node2 to get node3 and node4.

Clone node2 to create node3

How to clone: with node2 powered off, choose VM --> Manage --> Clone, select Create a full clone as the clone type, and follow the wizard to finish.

Set the static IP

Power on the node3 machine (its prompt still shows node2 because the hostname has not been changed yet):

[root@node2 ~]# cd /etc/sysconfig/network-scripts/
[root@node2 network-scripts]# ls
ifcfg-ens33
[root@node2 network-scripts]# vim ifcfg-ens33

Change the IP address to

192.168.193.133

Change the hostname to node3

# change the hostname
[root@node2 ~]# hostnamectl set-hostname node3

# check the hostname
[root@node2 ~]# hostname
node3

# reboot the machine
[root@node2 ~]# reboot

Log in as the regular user liang and verify the hostname and IP address; the machine is now node3.

[liang@node3 ~]$ hostname
node3
[liang@node3 ~]$ ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.193.133  netmask 255.255.255.0  broadcast 192.168.193.255
        inet6 fe80::20c:29ff:feaa:b060  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:aa:b0:60  txqueuelen 1000  (Ethernet)
        RX packets 100  bytes 12934 (12.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 106  bytes 15512 (15.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[liang@node3 ~]$

Clone node2 to create node4

In the same way, clone node2 to create node4, set its static IP to 192.168.193.134, and change its hostname to node4.

Configure Passwordless SSH Between Machines

The remaining installation steps are all done as the regular user, so passwordless SSH must be set up for that user.

On node2:

Log in to node2 as the regular user (liang) and generate a key pair:

ssh-keygen -t rsa

After running the command, press Enter three times to accept the defaults.

Copy the public key

ssh-copy-id node2
ssh-copy-id node3
ssh-copy-id node4

When ssh-copy-id prompts, type yes and then enter the login password of the target machine.

Verify

SSH from node2 to node3; if no password is requested, the setup succeeded. Use exit to leave the remote session.

ssh node3
exit

Repeat the same steps on node3 and node4. A quick cross-check is sketched below.
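
Once all three machines have exchanged keys, a small loop (run as liang on each node in turn) confirms every hop is password-free; this is just a convenience check, not part of the original steps:

# each ssh should print the remote hostname without prompting for a password
for host in node2 node3 node4
do
    ssh $host hostname
done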

Write a Distribution Script

Distribution is done with rsync, which copies incrementally and is therefore fast.

Create a bin directory under the home directory

[liang@node2 ~]$ mkdir ~/bin

Create the distribution script file xsync

[liang@node2 ~]$ vim ~/bin/xsync

with the following content:

#!/bin/bash
# 1. check the number of arguments
if [ $# -lt 1 ]
then
    echo Not Enough Arguments!
    exit;
fi
# 2. loop over every machine in the cluster
for host in node2 node3 node4
do
    echo ====================  $host  ====================
    # 3. loop over all files/directories given and send them one by one
    for file in $@
    do
        # 4. check that the file exists
        if [ -e $file ]
        then
            # 5. get the parent directory
            pdir=$(cd -P $(dirname $file); pwd)
            # 6. get the file name
            fname=$(basename $file)
            ssh $host "mkdir -p $pdir"
            rsync -av $pdir/$fname $host:$pdir
        else
            echo $file does not exist!
        fi
    done
done

Make it executable

[liang@node2 ~]$ chmod +x ~/bin/xsync

Add it to the PATH via an environment variable file

[liang@node2 ~]$ sudo vim /etc/profile.d/my_env.sh

Add the following content:

#MyShellCommand
export PATH=$PATH:/home/liang/bin

Apply the environment variables

[liang@node2 ~]$ source /etc/profile

Test

Send the bin directory to node3 and node4 with xsync

xsync /home/liang/bin

Check whether node3 and node4 received the xsync script.

[liang@node3 ~]$ ls bin/
xsync
[liang@node4 ~]$ ls bin/
xsync

Install Java

Java is a prerequisite; first check which Java versions Hadoop supports:

Supported Java Versions
Apache Hadoop 3.3 and upper supports Java 8 and Java 11 (runtime only)
Please compile Hadoop with Java 8. Compiling Hadoop with Java 11 is not supported: 
Apache Hadoop from 3.0.x to 3.2.x now supports only Java 8
Apache Hadoop from 2.7.x to 2.10.x support both Java 7 and 8

This shows that Hadoop 3.3 and later support only Java 8 and Java 11 (runtime only), and compiling is supported only with Java 8. A newer Java would require extra adaptation work, so Java 8 is chosen here.

Install Java on node2 first, then distribute it to the other machines.

Download Java

Download Java 8, version jdk-8u271-linux-x64.tar.gz; open the page below in a browser and locate the required version:

https://www.oracle.com/java/technologies/javase/javase8u211-later-archive-downloads.html

Log in to node2 as the regular user.

Upload jdk-8u271-linux-x64.tar.gz to /opt/software on the Linux machine.

[liang@node2 opt]$ ls /opt/software/
jdk-8u271-linux-x64.tar.gz

Extract

[liang@node2 opt]$ cd /opt/software/
[liang@node2 software]$ ls
jdk-8u271-linux-x64.tar.gz
[liang@node2 software]$ tar -zxvf jdk-8u271-linux-x64.tar.gz -C /opt/module/

Set Environment Variables

[liang@node2 software]$ sudo vim /etc/profile.d/my_env.sh

Append the following at the end:

#JAVA_HOME
export JAVA_HOME=/opt/module/jdk1.8.0_271
export PATH=$PATH:$JAVA_HOME/bin

Apply the environment variables

[liang@node2 software]$ source /etc/profile

Check the version

[liang@node2 module]$ java -version
java version "1.8.0_271"
Java(TM) SE Runtime Environment (build 1.8.0_271-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.271-b09, mixed mode)

You should see the version string java version "1.8.0_271" in the output; if not, recheck the previous steps.

Distribute to the Other Machines

Distribute the installation files

/home/liang/bin/xsync /opt/module/jdk1.8.0_271

Distribute the environment variable file

sudo /home/liang/bin/xsync /etc/profile.d/my_env.sh

Because my_env.sh is owned by root, the command must be prefixed with sudo; when prompted, type yes and enter the root account's login password (the machines are full clones of node2, so the password is the same everywhere).

To make the environment variables take effect immediately, run the following on node3 and node4 (a verification sketch follows the command):

source /etc/profile
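
To double-check Java on the other nodes without depending on how ssh sources the profile, the distributed JDK can be called by its full path from node2 (an optional check):

# optional verification from node2
for host in node3 node4
do
    ssh $host /opt/module/jdk1.8.0_271/bin/java -version
done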

Install Hadoop

Install and configure Hadoop in fully distributed mode.

Hadoop Cluster Plan

Component    node2                   node3                            node4
HDFS         NameNode, DataNode      DataNode                         DataNode, SecondaryNameNode
YARN         NodeManager             ResourceManager, NodeManager     NodeManager

Download Hadoop

Download the Hadoop tarball (version hadoop-3.3.4) in a browser:

https://archive.apache.org/dist/hadoop/common/hadoop-3.3.4/hadoop-3.3.4.tar.gz

Upload the Hadoop tarball to /opt/software on the Linux machine.

[liang@node2 opt]$ ls /opt/software/ | grep hadoop
hadoop-3.3.4.tar.gz

Extract

[liang@node2 opt]$ cd /opt/software/
[liang@node2 software]$ tar -zxvf hadoop-3.3.4.tar.gz -C /opt/module/

Set Environment Variables

[liang@node2 software]$ sudo vim /etc/profile.d/my_env.sh

Append the following at the end of the file:

#HADOOP_HOME
export HADOOP_HOME=/opt/module/hadoop-3.3.4
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Apply the environment variables immediately

[liang@node2 software]$ source /etc/profile

Check the version

[liang@node2 software]$ hadoop version
Hadoop 3.3.4
Source code repository https://github.com/apache/hadoop.git -r a585a73c3e02ac62350c136643a5e7f6095a3dbb
Compiled by stevel on 2022-07-29T12:32Z
Compiled with protoc 3.7.1
From source with checksum fb9dd8918a7b8a5b430d61af858f6ec
This command was run using /opt/module/hadoop-3.3.4/share/hadoop/common/hadoop-common-3.3.4.jar

Configure Hadoop

Configure Hadoop for fully distributed operation.

Change to the directory that holds the configuration files and list them:

[liang@node2 software]$ cd $HADOOP_HOME/etc/hadoop/
[liang@node2 hadoop]$ ls
capacity-scheduler.xml            httpfs-env.sh               mapred-site.xml
configuration.xsl                 httpfs-log4j.properties     shellprofile.d
container-executor.cfg            httpfs-site.xml             ssl-client.xml.example
core-site.xml                     kms-acls.xml                ssl-server.xml.example
hadoop-env.cmd                    kms-env.sh                  user_ec_policies.xml.template
hadoop-env.sh                     kms-log4j.properties        workers
hadoop-metrics2.properties        kms-site.xml                yarn-env.cmd
hadoop-policy.xml                 log4j.properties            yarn-env.sh
hadoop-user-functions.sh.example  mapred-env.cmd              yarnservice-log4j.properties
hdfs-rbf-site.xml                 mapred-env.sh               yarn-site.xml
hdfs-site.xml                     mapred-queues.xml.template
​

Configure core-site.xml
[liang@node2 hadoop]$ vim core-site.xml

Add the following between <configuration> and </configuration>:

    <!-- NameNode address -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node2:8020</value>
    </property>
    <!-- Hadoop data storage directory -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/module/hadoop-3.3.4/data</value>
    </property>
    <!-- Static user for the HDFS web UI: liang -->
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>liang</value>
    </property>
    <!-- Hosts from which the liang (superuser) proxy user may connect -->
    <property>
        <name>hadoop.proxyuser.liang.hosts</name>
        <value>*</value>
    </property>
    <!-- Groups the liang (superuser) proxy user may impersonate -->
    <property>
        <name>hadoop.proxyuser.liang.groups</name>
        <value>*</value>
    </property>
    <!-- Users the liang (superuser) proxy user may impersonate -->
    <property>
        <name>hadoop.proxyuser.liang.users</name>
        <value>*</value>
    </property>

Note: if your hostname is not node2 or your username is not liang, change them here and in all later configuration.

Configure hdfs-site.xml
[liang@node2 hadoop]$ vim hdfs-site.xml

Add the following between <configuration> and </configuration>:

    <!-- NameNode web UI address -->
    <property>
        <name>dfs.namenode.http-address</name>
        <value>node2:9870</value>
    </property>
    <!-- SecondaryNameNode web UI address -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>node4:9868</value>
    </property>
    <!-- Test environment: HDFS replication factor of 1 -->
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>

Note: choose the replication factor to suit your needs; in production it should be greater than 1, e.g. 3.

Configure mapred-site.xml
[liang@node2 hadoop]$ vim mapred-site.xml

Likewise, add the following between <configuration> and </configuration>:

    <!-- Run MapReduce on the YARN framework -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- JobHistory server address -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>node2:10020</value>
    </property>
    <!-- JobHistory server web UI address -->
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>node2:19888</value>
    </property>

    

Configure yarn-site.xml
[liang@node2 hadoop]$ vim yarn-site.xml

Likewise, add the following between <configuration> and </configuration>:

    <!-- ResourceManager address -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>node3</value>
    </property>
    <!-- MapReduce uses the shuffle auxiliary service -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- Environment variables inherited by containers -->
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
    <!-- Minimum and maximum memory a single YARN container may be allocated -->
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>512</value>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>4096</value>
    </property>
    <!-- Physical memory the NodeManager may manage -->
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>4096</value>
    </property>
    <!-- Disable YARN's physical and virtual memory limit checks -->
    <property>
        <name>yarn.nodemanager.pmem-check-enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>
    <!-- Enable log aggregation -->
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <!-- Log server address -->
    <property>
        <name>yarn.log.server.url</name>
        <value>http://node2:19888/jobhistory/logs</value>
    </property>
    <!-- Keep aggregated logs for 7 days -->
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
    </property>

   

Configure workers

List the machines that run the worker daemons

[liang@node2 hadoop]$ vim workers

Replace localhost with the following hostnames (a whitespace check follows the list):

node2
node3
node4
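
Stray spaces or blank lines in workers can confuse the start scripts, so an optional way to inspect the file is to print it with line endings made visible (each line should end in $ and contain nothing but a hostname):

# optional check for hidden whitespace
[liang@node2 hadoop]$ cat -A workers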

Distribute to the Other Machines

Distribute the installation files to the other machines

/home/liang/bin/xsync /opt/module/hadoop-3.3.4

Distribute the environment variable file

sudo /home/liang/bin/xsync /etc/profile.d/my_env.sh

Because my_env.sh is owned by root, the command must be prefixed with sudo; enter the root account's login password when prompted.

Apply the environment variables on node3 and node4:

[liang@node3 ~]$ source /etc/profile
[liang@node4 ~]$ source /etc/profile

Format the File System

On node2:

[liang@node2 hadoop]$ hdfs namenode -format

Seeing "successfully formatted" in the output means the format succeeded.

Note: format only once; after a successful format, do not format again. If a re-format is ever unavoidable, see the sketch below.
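
If a re-format is ever truly needed (for example after a failed first attempt), the usual approach, which is not part of the original steps, is to stop the cluster and remove the data and logs directories on every node before formatting again. A hedged sketch, assuming the install paths used in this guide:

# WARNING: destroys all HDFS data; only for a deliberate re-format
for host in node2 node3 node4
do
    ssh $host "rm -rf /opt/module/hadoop-3.3.4/data /opt/module/hadoop-3.3.4/logs"
done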

Start the Cluster

Start HDFS

Start HDFS on the node2 machine

[liang@node2 hadoop]$ start-dfs.sh

Start YARN

Start YARN on the node3 machine (where the ResourceManager runs)

[liang@node3 hadoop]$ start-yarn.sh

Check jps Processes

Run the jps command on each machine:

[liang@node2 hadoop]$ jps
3767 DataNode
4199 NodeManager
4407 Jps
3566 NameNode
​
[liang@node3 ~]$ jps
3555 NodeManager
3205 DataNode
3417 ResourceManager
3996 Jps
​
[liang@node4 ~]$ jps
3555 NodeManager
3332 SecondaryNameNode
3765 Jps
3166 DataNode

Access the Web UI

To be able to use hostnames in the browser, edit the C:\Windows\System32\drivers\etc\hosts file on Windows and add the following mappings:

192.168.193.132 node2
192.168.193.133 node3
192.168.193.134 node4

Note: adjust the IPs and hostnames to your environment. An optional check from Windows is shown below.
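
An optional check from a Windows command prompt that the new mappings are picked up:

ping node2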

Open the HDFS NameNode web UI in a browser:

node2:9870

Open the YARN ResourceManager web UI in a browser:

node3:8088
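
Because mapred-site.xml points the JobHistory web UI at node2:19888, that page also becomes available once the historyserver daemon is running; the hdp.sh script later in this guide starts it, or it can be started by hand:

# optional: start the JobHistory server on node2, then browse to node2:19888
[liang@node2 ~]$ mapred --daemon start historyserver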

Test Hadoop

Compute pi
[liang@node2 hadoop]$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.4.jar pi 2 4
Number of Maps  = 2
Samples per Map = 4
Wrote input for Map #0
Wrote input for Map #1
Starting Job
2025-03-18 23:15:49,010 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at node3/192.168.193.133:8032
2025-03-18 23:15:49,696 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/liang/.staging/job_1742310641710_0001
2025-03-18 23:15:50,236 INFO input.FileInputFormat: Total input files to process : 2
2025-03-18 23:15:51,045 INFO mapreduce.JobSubmitter: number of splits:2
2025-03-18 23:15:51,599 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1742310641710_0001
2025-03-18 23:15:51,599 INFO mapreduce.JobSubmitter: Executing with tokens: []
2025-03-18 23:15:51,782 INFO conf.Configuration: resource-types.xml not found
2025-03-18 23:15:51,782 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2025-03-18 23:15:52,460 INFO impl.YarnClientImpl: Submitted application application_1742310641710_0001
2025-03-18 23:15:52,555 INFO mapreduce.Job: The url to track the job: http://node3:8088/proxy/application_1742310641710_0001/
2025-03-18 23:15:52,556 INFO mapreduce.Job: Running job: job_1742310641710_0001
2025-03-18 23:16:04,788 INFO mapreduce.Job: Job job_1742310641710_0001 running in uber mode : false
2025-03-18 23:16:04,789 INFO mapreduce.Job:  map 0% reduce 0%
2025-03-18 23:16:13,970 INFO mapreduce.Job:  map 100% reduce 0%
2025-03-18 23:16:20,025 INFO mapreduce.Job:  map 100% reduce 100%
2025-03-18 23:16:21,100 INFO mapreduce.Job: Job job_1742310641710_0001 completed successfully
2025-03-18 23:16:21,262 INFO mapreduce.Job: Counters: 55
    File System Counters
        FILE: Number of bytes read=50
        FILE: Number of bytes written=829296
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=522
        HDFS: Number of bytes written=215
        HDFS: Number of read operations=13
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=3
        HDFS: Number of bytes read erasure-coded=0
    Job Counters
        Launched map tasks=2
        Launched reduce tasks=1
        Data-local map tasks=1
        Rack-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=26878
        Total time spent by all reduces in occupied slots (ms)=6476
        Total time spent by all map tasks (ms)=13439
        Total time spent by all reduce tasks (ms)=3238
        Total vcore-milliseconds taken by all map tasks=13439
        Total vcore-milliseconds taken by all reduce tasks=3238
        Total megabyte-milliseconds taken by all map tasks=13761536
        Total megabyte-milliseconds taken by all reduce tasks=3315712
    Map-Reduce Framework
        Map input records=2
        Map output records=4
        Map output bytes=36
        Map output materialized bytes=56
        Input split bytes=286
        Combine input records=0
        Combine output records=0
        Reduce input groups=2
        Reduce shuffle bytes=56
        Reduce input records=4
        Reduce output records=0
        Spilled Records=8
        Shuffled Maps =2
        Failed Shuffles=0
        Merged Map outputs=2
        GC time elapsed (ms)=222
        CPU time spent (ms)=2910
        Physical memory (bytes) snapshot=835469312
        Virtual memory (bytes) snapshot=7758372864
        Total committed heap usage (bytes)=621281280
        Peak Map Physical memory (bytes)=307945472
        Peak Map Virtual memory (bytes)=2587164672
        Peak Reduce Physical memory (bytes)=226463744
        Peak Reduce Virtual memory (bytes)=2590654464
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=236
    File Output Format Counters
        Bytes Written=97
Job Finished in 32.328 seconds
Estimated value of Pi is 3.50000000000000000000
[liang@node2 hadoop]$
​

Run wordcount

Prepare the input data

[liang@node2 ~]$ vim 1.txt
[liang@node2 ~]$ cat 1.txt
hello world
hello hadoop
[liang@node2 ~]$ hdfs dfs -put 1.txt /
[liang@node2 ~]$ hdfs dfs -ls /
Found 3 items
-rw-r--r--   1 liang supergroup         25 2025-03-18 23:17 /1.txt
drwx------   - liang supergroup          0 2025-03-18 23:15 /tmp
drwxr-xr-x   - liang supergroup          0 2025-03-18 23:15 /user

Run the wordcount program

[liang@node2 ~]$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.4.jar wordcount /1.txt /out
2025-03-18 23:18:10,177 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at node3/192.168.193.133:8032
2025-03-18 23:18:11,025 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/liang/.staging/job_1742310641710_0002
2025-03-18 23:18:11,462 INFO input.FileInputFormat: Total input files to process : 1
2025-03-18 23:18:11,631 INFO mapreduce.JobSubmitter: number of splits:1
2025-03-18 23:18:11,821 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1742310641710_0002
2025-03-18 23:18:11,821 INFO mapreduce.JobSubmitter: Executing with tokens: []
2025-03-18 23:18:12,091 INFO conf.Configuration: resource-types.xml not found
2025-03-18 23:18:12,091 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2025-03-18 23:18:12,213 INFO impl.YarnClientImpl: Submitted application application_1742310641710_0002
2025-03-18 23:18:12,299 INFO mapreduce.Job: The url to track the job: http://node3:8088/proxy/application_1742310641710_0002/
2025-03-18 23:18:12,301 INFO mapreduce.Job: Running job: job_1742310641710_0002
2025-03-18 23:18:19,456 INFO mapreduce.Job: Job job_1742310641710_0002 running in uber mode : false
2025-03-18 23:18:19,457 INFO mapreduce.Job:  map 0% reduce 0%
2025-03-18 23:18:24,551 INFO mapreduce.Job:  map 100% reduce 0%
2025-03-18 23:18:29,602 INFO mapreduce.Job:  map 100% reduce 100%
2025-03-18 23:18:30,617 INFO mapreduce.Job: Job job_1742310641710_0002 completed successfully
2025-03-18 23:18:30,703 INFO mapreduce.Job: Counters: 54
    File System Counters
        FILE: Number of bytes read=43
        FILE: Number of bytes written=552145
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=113
        HDFS: Number of bytes written=25
        HDFS: Number of read operations=8
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
        HDFS: Number of bytes read erasure-coded=0
    Job Counters
        Launched map tasks=1
        Launched reduce tasks=1
        Rack-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=5490
        Total time spent by all reduces in occupied slots (ms)=4870
        Total time spent by all map tasks (ms)=2745
        Total time spent by all reduce tasks (ms)=2435
        Total vcore-milliseconds taken by all map tasks=2745
        Total vcore-milliseconds taken by all reduce tasks=2435
        Total megabyte-milliseconds taken by all map tasks=2810880
        Total megabyte-milliseconds taken by all reduce tasks=2493440
    Map-Reduce Framework
        Map input records=2
        Map output records=4
        Map output bytes=41
        Map output materialized bytes=43
        Input split bytes=88
        Combine input records=4
        Combine output records=3
        Reduce input groups=3
        Reduce shuffle bytes=43
        Reduce input records=3
        Reduce output records=3
        Spilled Records=6
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=100
        CPU time spent (ms)=1470
        Physical memory (bytes) snapshot=524570624
        Virtual memory (bytes) snapshot=5171003392
        Total committed heap usage (bytes)=391643136
        Peak Map Physical memory (bytes)=300306432
        Peak Map Virtual memory (bytes)=2581856256
        Peak Reduce Physical memory (bytes)=224264192
        Peak Reduce Virtual memory (bytes)=2589147136
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=25
    File Output Format Counters
        Bytes Written=25
[liang@node2 ~]$

View the result. A tip on retrieving the job's logs follows the output.

[liang@node2 ~]$ hdfs dfs -cat /out/part-r-00000
hadoop  1
hello   2
world   1
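
With log aggregation enabled in yarn-site.xml, the container logs of a finished job can also be fetched from the command line; substitute the application id reported for your own run:

# optional: pull the aggregated logs of the wordcount job shown above
[liang@node2 ~]$ yarn logs -applicationId application_1742310641710_0002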

Handy Cluster Scripts

General steps for writing a script:

1. Create the script under ~/bin on node2

2. Make the script executable

chmod +x ~/bin/<script-name>

3. Test it

Script to Run jps on All Nodes

jpsall

vim ~/bin/jpsall

with the following content:

#!/bin/bash

for host in node2 node3 node4
do
    echo =============== $host ===============
    ssh $host jps
done

Test

jpsall

Hadoop Start/Stop Script

hdp.sh

vim ~/bin/hdp.sh

with the following content:

#!/bin/bash

if [ $# -lt 1 ]
then
    echo "No Args Input..."
    exit ;
fi

case $1 in
"start")
    echo " =================== Starting the Hadoop cluster ==================="
    echo " --------------- starting hdfs ---------------"
    ssh node2 "/opt/module/hadoop-3.3.4/sbin/start-dfs.sh"
    echo " --------------- starting yarn ---------------"
    ssh node3 "/opt/module/hadoop-3.3.4/sbin/start-yarn.sh"
    echo " --------------- starting historyserver ---------------"
    ssh node2 "/opt/module/hadoop-3.3.4/bin/mapred --daemon start historyserver"
;;
"stop")
    echo " =================== Stopping the Hadoop cluster ==================="
    echo " --------------- stopping historyserver ---------------"
    ssh node2 "/opt/module/hadoop-3.3.4/bin/mapred --daemon stop historyserver"
    echo " --------------- stopping yarn ---------------"
    ssh node3 "/opt/module/hadoop-3.3.4/sbin/stop-yarn.sh"
    echo " --------------- stopping hdfs ---------------"
    ssh node2 "/opt/module/hadoop-3.3.4/sbin/stop-dfs.sh"
;;
*)
    echo "Input Args Error..."
;;
esac

Make it executable

chmod +x ~/bin/hdp.sh

Test

hdp.sh start
hdp.sh stop

Script to Run the Same Command on All Nodes

same.sh

vim ~/bin/same.sh

with the following content:

#!/bin/bash

# 1. check the number of arguments; fewer than 1 is an error
if [ $# -lt 1 ]
then
    echo "No Args command Input..."
    exit ;
fi

# 2. get the current directory on this machine
currDir=$(pwd)

# 3. ssh to every machine, change to the same directory, and run the command;
#    only 3 arguments are supported here (extend as needed); mainly used to
#    inspect paths or file contents
for host in node2 node3 node4
do
    echo =============== $host ===============
    ssh $host "cd $currDir;$1 $2 $3;"
done

Make it executable

chmod +x ~/bin/same.sh

Test: use ls to view the /home directory on all three machines

same.sh ls /home

One-Click Cluster Shutdown Script

gj.sh

vim ~/bin/gj.sh

with the following content:

#!/bin/bash

for host in node4 node3 node2
do
    echo =============== $host ===============
    ssh $host sudo init 0;
done

Make it executable

chmod +x ~/bin/gj.sh

Test

gj.sh

Done. Enjoy!
