【Newbie Notes: CEPH File Cluster Deployment in Practice】

2024/12/31  Source: https://blog.csdn.net/FWT975959/article/details/144734909

Building a Ceph distributed storage cluster:

192.168.1.180 ceph-node1 (admin, osd, mon: management and monitor node)
192.168.1.179 ceph-node2
192.168.1.178 ceph-node3
192.168.1.172 client1
192.168.1.173 client2

1. Add a 20 GB disk to each of ceph-node1, ceph-node2 and ceph-node3

2. Set the hostnames

[root@localhost ~]# hostnamectl set-hostname ceph-node1
[root@localhost ~]# hostnamectl set-hostname ceph-node2
[root@localhost ~]# hostnamectl set-hostname ceph-node3
[root@localhost ~]# hostnamectl set-hostname client1
[root@localhost ~]# hostnamectl set-hostname client2

3. Format the disks

[root@ceph-node1 ~]# mkfs.xfs /dev/sdb
[root@ceph-node2 ~]# mkfs.xfs /dev/sdb
[root@ceph-node3 ~]# mkfs.xfs /dev/sdb
Discarding blocks...Done.
meta-data=/dev/sdb               isize=512    agcount=4, agsize=1310720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=5242880, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
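Before moving on, it can be worth a quick check that the XFS file system really is on the disk (an optional step, not part of the original write-up):

[root@ceph-node1 ~]# blkid /dev/sdb          # should report TYPE="xfs"
[root@ceph-node1 ~]# lsblk -f /dev/sdb       # shows the filesystem type and any current mount point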

4. Create the directories osd0, osd1 and osd2

[root@ceph-node1 ~]# mkdir /var/local/osd{0,1,2}
[root@ceph-node2 ~]# mkdir /var/local/osd{0,1,2}
[root@ceph-node3 ~]# mkdir /var/local/osd{0,1,2}

5. Mount the disks on the newly created directories

[root@ceph-node1 ~]# mount /dev/sdb /var/local/osd0/
[root@ceph-node2 ~]# mount /dev/sdb /var/local/osd1/
[root@ceph-node3 ~]# mount /dev/sdb /var/local/osd2/
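These mount commands do not survive a reboot. A minimal sketch of making one of them persistent (not in the original article; adjust the directory per node):

[root@ceph-node1 ~]# echo '/dev/sdb /var/local/osd0 xfs defaults 0 0' >> /etc/fstab
[root@ceph-node1 ~]# mount -a                # re-reads /etc/fstab and should return without errors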

6. Adjust the directory permissions

[root@ceph-node1 ~]# chmod 777 -R /var/local/osd0
[root@ceph-node2 ~]# chmod 777 -R /var/local/osd1
[root@ceph-node3 ~]# chmod 777 -R /var/local/osd2
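chmod 777 is the quick fix used here; since the Jewel release the Ceph daemons run as the ceph user, so an alternative sketch (not from the original article) is to hand the directory to that user instead:

[root@ceph-node1 ~]# chown -R ceph:ceph /var/local/osd0      # repeat with osd1/osd2 on the other nodes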

7. Edit /etc/hosts on every VM and add the hostname-to-IP mappings

[root@ceph-node1 ~]# vim /etc/hosts
[root@ceph-node2 ~]# vim /etc/hosts
[root@ceph-node3 ~]# vim /etc/hosts
[root@client1 ~]# vim /etc/hosts
[root@client2 ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.180 ceph-node1
192.168.1.179 ceph-node2
192.168.1.178 ceph-node3
192.168.1.172 client1
192.168.1.173 client2
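An optional sanity check that every host resolves and answers (assumes ICMP is allowed between the VMs):

[root@ceph-node1 ~]# for h in ceph-node1 ceph-node2 ceph-node3 client1 client2; do
>   ping -c 1 -W 1 $h > /dev/null && echo "$h OK" || echo "$h unreachable"
> done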

8. Generate a root SSH key on ceph-node1 and copy it to the other nodes

[root@ceph-node1 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): (press Enter)
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): (press Enter)
Enter same passphrase again: (press Enter)
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:D9VwitoJ8Ac8M00VWZZHF1O9d7c/xFbfNVIBK2uZYAU root@ceph-node1
The key's randomart image is:
+---[RSA 2048]----+
|    ...o.E==++o+B|
|     o=...o*. ooo|
|      o+ooo..o. .|
|       =.o. =. o*|
|      . S  =  o @|
|         o.    =o|
|          .   o .|
|               ..|
|                .|
+----[SHA256]-----+
[root@ceph-node1 ~]# ssh-copy-id ceph-node1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'ceph-node1 (192.168.1.180)' can't be established.
ECDSA key fingerprint is SHA256:wYjcu/pQkoX10MFoSZKg+0Mog67KxMCscQ95IJZ9URY.
ECDSA key fingerprint is MD5:3e:03:bf:f4:70:77:0b:3a:b4:84:97:48:56:de:ad:0b.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@ceph-node1's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'ceph-node1'"
and check to make sure that only the key(s) you wanted were added.
[root@ceph-node1 ~]# ssh-copy-id ceph-node2
[root@ceph-node1 ~]# ssh-copy-id ceph-node3
[root@ceph-node1 ~]# ssh-copy-id client1
[root@ceph-node1 ~]# ssh-copy-id client2
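Optionally confirm that passwordless login now works from ceph-node1 (each command should print the remote hostname without asking for a password):

[root@ceph-node1 ~]# for h in ceph-node2 ceph-node3 client1 client2; do ssh $h hostname; done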

9. Configure the Aliyun YUM mirror (on every VM)

[root@ceph-node1 ~]# mv /etc/yum.repos.d/* /media/ 
[root@ceph-node2 ~]# mv /etc/yum.repos.d/* /media/ 
[root@ceph-node3 ~]# mv /etc/yum.repos.d/* /media/ 
[root@client1 ~]# mv /etc/yum.repos.d/* /media/ 
[root@client2 ~]# mv /etc/yum.repos.d/* /media/ 
This moves every file under /etc/yum.repos.d/ into /media/, setting the original YUM repo configuration aside.
[root@ceph-node1 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2523  100  2523    0     0   1551      0  0:00:01  0:00:01 --:--:--  1551

10. Install the NTP service and sync against a public NTP server so the three hosts keep the same time (the hosts need working Internet access)

[root@ceph-node1 ~]# yum -y install ntp
[root@ceph-node1 ~]# ntpdate ntp1.aliyun.com
20 Dec 10:24:16 ntpdate[13005]: adjust time server 120.25.115.20 offset -0.000328 sec
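ntpdate only performs a one-off correction; to keep the clocks in sync continuously you can also enable the ntpd service (optional, not shown in the original):

[root@ceph-node1 ~]# systemctl enable ntpd
[root@ceph-node1 ~]# systemctl start ntpd
[root@ceph-node1 ~]# ntpq -p                 # lists the upstream servers being polled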

11. Add the Ceph YUM repo file (the Ceph repo must be configured on each of the VMs)

[root@ceph-node1 ~]# yum -y install wget
[root@ceph-node1 ~]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
[root@ceph-node1 ~]# vim /etc/yum.repos.d/ceph.repo                                    
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
priority=1
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
gpgcheck=0
priority=1
[root@ceph-node1 ~]# yum clean all
[root@ceph-node1 ~]# yum makecache
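Before continuing, a quick optional check that the new repos are actually visible to yum:

[root@ceph-node1 ~]# yum repolist | grep -Ei 'ceph|epel'
[root@ceph-node1 ~]# yum info ceph-deploy    # the Repo field should point at the Ceph noarch repo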

12. Install Ceph

(1) Install the ceph-deploy tool on ceph-node1
[root@ceph-node1 ~]# yum -y install ceph-deploy
(2) Enter the configuration directory and use ceph-deploy to create a Ceph cluster, then edit the configuration file so that placement groups can reach active+clean with only two OSD replicas
The new subcommand of ceph-deploy bootstraps a new cluster with the default cluster name "ceph"
[root@ceph-node1 ~]# mkdir /etc/ceph && cd /etc/ceph
[root@ceph-node1 ceph]# ceph-deploy new ceph-node1
[root@ceph-node1 ceph]# vim ceph.conf
[global]
fsid = bafd6b88-c929-439c-9824-60e2a12d2e45
mon_initial_members = ceph-node1
mon_host = 192.168.1.180
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 2
(3) From ceph-node1, use ceph-deploy to install the Ceph binary packages on all nodes
[root@ceph-node1 ceph]# ceph-deploy install ceph-node1 ceph-node2 ceph-node3 client1 client2
[root@ceph-node1 ceph]# ceph -v
ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)
(4) On ceph-node1, create the Ceph monitor and check the mon status
[root@ceph-node1 ceph]# ceph-deploy mon create ceph-node1
[root@ceph-node1 ceph]# ceph-deploy gatherkeys ceph-node1
[root@ceph-node1 ceph]# ceph mon stat
e1: 1 mons at {ceph-node1=192.168.1.180:6789/0}, election epoch 3, quorum 0 ceph-node1
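With a single monitor the quorum is trivially just ceph-node1, but an optional deeper look is available:

[root@ceph-node1 ceph]# ceph quorum_status --format json-pretty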

13. Deploy the OSDs and MDS

(1) Disable the firewall on the first three server nodes
[root@ceph-node1 ceph]# systemctl stop firewalld
[root@ceph-node1 ceph]# systemctl disable firewalld
(2) Create the OSDs from ceph-node1
[root@ceph-node1 ceph]# ceph-deploy osd prepare ceph-node1:/var/local/osd0 ceph-node2:/var/local/osd1 ceph-node3:/var/local/osd2
(3) From ceph-node1, use ceph-deploy to activate the OSDs
[root@ceph-node1 ceph]# ceph-deploy osd activate ceph-node1:/var/local/osd0 ceph-node2:/var/local/osd1 ceph-node3:/var/local/osd2
(4) From ceph-node1, use ceph-deploy to list the state of the three OSD nodes
[root@ceph-node1 ceph]# ceph-deploy osd list ceph-node1 ceph-node2 ceph-node3
(5) On ceph-node1, view the CRUSH tree with ceph osd tree
[root@ceph-node1 ceph]# ceph osd tree
ID WEIGHT  TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.05846 root default
-2 0.01949     host ceph-node1
 0 0.01949         osd.0            up  1.00000          1.00000
-3 0.01949     host ceph-node2
 1 0.01949         osd.1            up  1.00000          1.00000
-4 0.01949     host ceph-node3
 2 0.01949         osd.2            up  1.00000          1.00000
(6) From ceph-node1, use ceph-deploy to push the configuration file and admin keyring to all nodes
[root@ceph-node1 ceph]# ceph-deploy admin ceph-node1 ceph-node2 ceph-node3
[root@ceph-node1 ceph]# chmod +r /etc/ceph/ceph.client.admin.keyring
(7) Check the OSD status with ceph health or ceph -s
[root@ceph-node1 ceph]# ceph health
HEALTH_OK
[root@ceph-node1 ceph]# ceph -s
    cluster 272cb484-c334-4f8e-8192-fc172048b4cf
     health HEALTH_OK
     monmap e1: 1 mons at {ceph-node1=192.168.1.180:6789/0}
            election epoch 3, quorum 0 ceph-node1
     osdmap e15: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v31: 64 pgs, 1 pools, 0 bytes data, 0 objects
            15681 MB used, 45728 MB / 61410 MB avail
                  64 active+clean
(8) Deploy the MDS service: from ceph-node1, use ceph-deploy to create two MDS daemons
[root@ceph-node1 ceph]# ceph-deploy mds create ceph-node2 ceph-node3
(9) Check the MDS status and the cluster status
[root@ceph-node1 ceph]# ceph mds stat
e3:, 2 up:standby
[root@ceph-node1 ceph]# ceph -s
    cluster 272cb484-c334-4f8e-8192-fc172048b4cf
     health HEALTH_OK
     monmap e1: 1 mons at {ceph-node1=192.168.1.180:6789/0}
            election epoch 3, quorum 0 ceph-node1
     osdmap e15: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v31: 64 pgs, 1 pools, 0 bytes data, 0 objects
            15681 MB used, 45728 MB / 61410 MB avail
                  64 active+clean
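An optional extra look at capacity before creating pools (not in the original article):

[root@ceph-node1 ceph]# ceph df              # cluster-wide and per-pool usage
[root@ceph-node1 ceph]# ceph osd df          # usage and weight per OSD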

14. Create a Ceph file system and mount it

(1) On ceph-node1, create the storage pools for the Ceph file system; before creating them, list the existing file systems
[root@ceph-node1 ceph]# ceph fs ls
No filesystems enabled
(2) Create the two storage pools
[root@ceph-node1 ceph]# ceph osd pool create cephfs_data 128
pool 'cephfs_data' created
[root@ceph-node1 ceph]# ceph osd pool create cephfs_metadata 128
pool 'cephfs_metadata' created
(3) With the pools in place, create the file system with the fs new command
[root@ceph-node1 ceph]# ceph fs new 128 cephfs_metadata cephfs_data
new fs with metadata pool 2 and data pool 1
(4) Check the newly created cephfs and the MDS status
[root@ceph-node1 ceph]# ceph fs ls
name: 128, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
[root@ceph-node1 ceph]# ceph mds stat
e6: 1/1/1 up {0=ceph-node3=up:active}, 1 up:standby
(5) To mount the Ceph file system on a client, copy the storage key from ceph-node1 into the client's Ceph configuration directory (the admin.secret file has to be created by hand)
[root@ceph-node1 ceph]# cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
	key = AQCCQ2VnWrvmMxAAFN0eK0Dg6AftohKCnGXqWA==
[root@client1 ~]# vi /etc/ceph/admin.secret
[root@client2 ~]# vi /etc/ceph/admin.secret
AQCCQ2VnWrvmMxAAFN0eK0Dg6AftohKCnGXqWA==
(6) Log in to the client node, create a mount point, change into the relevant directory and mount the Ceph file system (note that the mount command below uses the web server's uploads directory as the mount target, not /opt/ceph)
[root@client1 ~]# mkdir /opt/ceph
[root@client1 ~]# cd /opt/ceph/
[root@client1 ceph]# mount -t ceph 192.168.1.180:6789:/ /home/im_user/im_server/im_webserver/www/public/uploads/ -o name=admin,secretfile=/etc/ceph/admin.secret
(7) On the client, check the mount with df -h
[root@client1 ceph]#  df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 2.0G     0  2.0G    0% /dev
tmpfs                    2.0G     0  2.0G    0% /dev/shm
tmpfs                    2.0G  9.6M  2.0G    1% /run
tmpfs                    2.0G     0  2.0G    0% /sys/fs/cgroup
/dev/mapper/centos-root   50G  4.7G   46G   10% /
/dev/sda1               1014M  172M  843M   17% /boot
/dev/mapper/centos-home   75G  4.0G   72G    6% /home
tmpfs                    396M   12K  396M    1% /run/user/42
tmpfs                    396M     0  396M    0% /run/user/0
192.168.1.180:6789:/      60G   16G   45G   26% /home/im_user/im_server/im_webserver/www/public/uploads/
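Like the OSD mounts earlier, this mount is lost on reboot. A minimal /etc/fstab sketch for a persistent kernel-client mount (not in the original; the _netdev option delays the mount until the network is up):

[root@client1 ~]# echo '192.168.1.180:6789:/ /home/im_user/im_server/im_webserver/www/public/uploads ceph name=admin,secretfile=/etc/ceph/admin.secret,_netdev,noatime 0 0' >> /etc/fstab
[root@client1 ~]# mount -a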
