Purpose: separate the location of Ceph data and its journal.
Environment
[root@ceph-gw-209214 ~]# ceph osd tree
# id    weight  type name               up/down reweight
-1      12      root default
-2      3               host ceph-gw-209214
0       1                       osd.0   up      1
1       1                       osd.1   up      1
2       1                       osd.2   up      1
-4      3               host ceph-gw-209216
6       1                       osd.6   up      1
7       1                       osd.7   up      1
8       1                       osd.8   up      1
-5      3               host ceph-gw-209217
9       1                       osd.9   up      1
10      1                       osd.10  up      1
11      1                       osd.11  up      1
-6      3               host ceph-gw-209219
3       1                       osd.3   up      1
4       1                       osd.4   up      1
5       1                       osd.5   up      1

Corresponding partitions:
192.168.209.214
/dev/sdb1        50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-0
/dev/sdc1        50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-1
/dev/sdd1        50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-2

192.168.209.219
/dev/sdb1        50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-3
/dev/sdc1        50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-4
/dev/sdd1        50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-5

192.168.209.216
/dev/sdc1        50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-7
/dev/sdd1        50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-8
/dev/sdb1        50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-6

192.168.209.217
/dev/sdc1        50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-10
/dev/sdb1        50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-9
/dev/sdd1        50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-11
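A listing like the one above can be gathered from a single admin host with a small SSH loop; a minimal sketch, assuming passwordless SSH to every node and that all OSD data partitions are mounted under /var/lib/ceph/osd (this helper is not part of the original write-up):

#!/bin/bash
# Print the OSD data mounts of every node.
for ip in 192.168.209.214 192.168.209.219 192.168.209.216 192.168.209.217
do
    echo "$ip"
    ssh "$ip" "df -h | grep /var/lib/ceph/osd"
done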
How to query the current journal location:

[root@ceph-gw-209214 ~]# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep osd_journal
  "osd_journal": "\/var\/lib\/ceph\/osd\/ceph-0\/journal",
[root@ceph-gw-209214 ~]# ceph --admin-daemon /var/run/ceph/ceph-osd.1.asok config show | grep osd_journal
  "osd_journal": "\/var\/lib\/ceph\/osd\/ceph-1\/journal",

By default, each OSD keeps its journal inside its own data directory, at /var/lib/ceph/osd/ceph-$id/journal, as the output above shows.
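To check every OSD running on a host in one pass, the same admin-socket query can be wrapped in a loop; a minimal sketch, assuming the default /var/run/ceph socket layout shown above:

#!/bin/bash
# Query the journal path of every OSD that has an admin socket on this host.
for sock in /var/run/ceph/ceph-osd.*.asok
do
    echo "$sock"
    ceph --admin-daemon "$sock" config show | grep osd_journal
done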
How to change it:
First, create a journal directory for each OSD. Script:
#!/bin/bash
# Force English output so the fdisk parsing below is locale-safe
export LANG=en_US
num=0
# Host order must match the intended OSD numbering:
# .214 -> osd.0-2, .219 -> osd.3-5, .216 -> osd.6-8, .217 -> osd.9-11
ips="192.168.209.214 192.168.209.219 192.168.209.216 192.168.209.217"
for ip in $ips
do
    # List the Linux data partitions on the remote host, skipping the system disk (sda)
    diskpart=`ssh $ip "fdisk -l | grep Linux | grep -v sda" | awk '{print $1}' | sort`
    for partition in $diskpart
    do
        # One journal directory per OSD data partition
        ssh $ip "mkdir /var/log/ceph-$num"
        let num++
    done
done

The result:
192.168.209.214
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-0
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-1
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-2

192.168.209.219
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-3
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-4
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-5

192.168.209.216
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-6
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-7
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-8

192.168.209.217
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-10
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-11
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-9
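The listing above can be reproduced from the admin host with a one-liner (same passwordless-SSH assumption as before):

for ip in 192.168.209.214 192.168.209.219 192.168.209.216 192.168.209.217; do echo $ip; ssh $ip "ls -ld /var/log/ceph-*"; done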
Now edit the configuration file /etc/ceph/ceph.conf and add:

[osd]
osd journal = /var/log/$cluster-$id/journal

Ceph expands $cluster to the cluster name (here "ceph") and $id to the OSD number, so osd.1 will look for its journal at /var/log/ceph-1/journal.
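To confirm the setting renders as intended before restarting anything, the daemon binary can be asked to evaluate the config file directly; a quick check, assuming a ceph-osd build of this era that supports --show-config-value:

# Evaluate ceph.conf for osd.1 without touching the running daemon;
# it should print /var/log/ceph-1/journal once the [osd] section above is in place.
ceph-osd -i 1 --show-config-value osd_journal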
Switching each OSD over, step by step

Verify the current journal location:
[root@ceph-gw-209214 ceph]# ceph --admin-daemon /var/run/ceph/ceph-osd.1.asok config show | grep osd_journal
  "osd_journal": "\/var\/lib\/ceph\/osd\/ceph-1\/journal",

Set noout so the cluster does not start rebalancing while the OSD is down, then stop the OSD:
[root@ceph-gw-209214 ceph]# ceph osd set noout
set noout
[root@ceph-gw-209214 ceph]# /etc/init.d/ceph stop osd.1
=== osd.1 ===
Stopping Ceph osd.1 on ceph-gw-209214...kill 2744...kill 2744...done
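Before relocating the journal it does not hurt to flush it explicitly once the daemon is down. This is an extra safety step not in the original walkthrough; ceph-osd's --flush-journal writes any pending journal entries into the object store:

# Optional safety step: drain osd.1's journal into its data store
ceph-osd -i 1 --flush-journal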
Move the journal by hand:

[root@ceph-gw-209214 ceph]# mv /var/lib/ceph/osd/ceph-1/journal /var/log/ceph-1/

Start the OSD again:
[root@ceph-gw-209214 ceph]# /etc/init.d/ceph start osd.1
=== osd.1 ===
create-or-move updated item name 'osd.1' weight 0.05 at location {host=ceph-gw-209214,root=default} to crush map
Starting Ceph osd.1 on ceph-gw-209214...
Running as unit run-13260.service.
[root@ceph-gw-209214 ceph]# ceph osd unset noout
unset noout

Verify:
[root@ceph-gw-209214 ceph]# ceph --admin-daemon /var/run/ceph/ceph-osd.1.asok config show | grep osd_journal
  "osd_journal": "\/var\/log\/ceph-1\/journal",
  "osd_journal_size": "1024",
[root@ceph-gw-209214 ceph]# ceph -s
    cluster 1237dd6a-a4f6-43e0-8fed-d9bcc8084bf1
     health HEALTH_OK
     monmap e1: 2 mons at {ceph-gw-209214=192.168.209.214:6789/0,ceph-gw-209216=192.168.209.216:6789/0}, election epoch 8, quorum 0,1 ceph-gw-209214,ceph-gw-209216
     osdmap e189: 12 osds: 12 up, 12 in
      pgmap v6474: 4560 pgs, 10 pools, 1755 bytes data, 51 objects
            10725 MB used, 589 GB / 599 GB avail
                4560 active+clean
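osd.1 now journals from /var/log/ceph-1 and the cluster reports HEALTH_OK; the same five steps (set noout, stop, move, start, unset noout) repeat for every other OSD. A hypothetical helper that automates them for all OSDs on the local host, under the same layout assumptions as above, might look like this sketch (not from the original write-up; review before use):

#!/bin/bash
# Sketch: migrate the journal of every OSD on this host.
ceph osd set noout
for dir in /var/lib/ceph/osd/ceph-*
do
    id=${dir##*-}                      # OSD number from the directory name
    /etc/init.d/ceph stop osd.$id
    ceph-osd -i $id --flush-journal    # optional safety step, see above
    mv $dir/journal /var/log/ceph-$id/
    /etc/init.d/ceph start osd.$id
done
ceph osd unset noout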