Reference documentation: https://docs.ceph.com/docs/master/rados/operations/pools/
List the pools in the cluster
[ceph@ceph02 ~]$ ceph osd lspools
Create a pool
# Syntax:
ceph osd pool create {pool-name} [{pg-num} [{pgp-num}]] [replicated] \
[crush-rule-name] [expected-num-objects]
ceph osd pool create {pool-name} [{pg-num} [{pgp-num}]] erasure \
[erasure-code-profile] [crush-rule-name] [expected_num_objects] [--autoscale-mode=<on,off,warn>]
# Example:
[ceph@ceph02 ~]$ ceph osd pool create test11 100
pool 'test11' created
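The new pool's parameters can be checked right away (a quick verification; when pgp_num is omitted as above, it defaults to pg_num):
# Show the full definition of every pool, including pg_num and replica size
ceph osd pool ls detail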
Associate a pool with an application
# Syntax:
ceph osd pool application enable {pool-name} {application-name}
# Example:
[ceph@ceph02 ~]$ ceph osd pool application enable test11 rbd
enabled application 'rbd' on pool 'test11'
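To confirm the association, the applications enabled on a pool can be listed (a quick check against the test11 pool created above):
# Show which applications are enabled on the pool
ceph osd pool application get test11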
Set pool quotas
# Syntax:
ceph osd pool set-quota {pool-name} [max_objects {obj-count}] [max_bytes {bytes}]
# Example:
[ceph@ceph02 ~]$ ceph osd pool set-quota test11 max_objects 100
set-quota max_objects = 100 for pool test11
# Note: to remove a quota, set its value to 0.
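For example, the quota set above could be cleared and the current quotas displayed as follows (a sketch against the test11 pool from this walkthrough):
# Clear the object quota by setting it back to 0
ceph osd pool set-quota test11 max_objects 0
# Show the quotas currently configured on the pool
ceph osd pool get-quota test11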
Delete a pool
# Syntax:
ceph osd pool delete {pool-name} [{pool-name} --yes-i-really-really-mean-it]
# Example:
[ceph@ceph02 ~]$ ceph osd pool delete mytest mytest --yes-i-really-really-mean-it
Error EPERM: pool deletion is disabled; you must first set the mon_allow_pool_delete config option to true before you can destroy a pool
#Per the error message, edit ceph.conf on the ceph-deploy node, then push the config file to the other nodes
[ceph@ceph01 ceph-cluster]$ vi ceph.conf
[mon]
mon allow pool delete = true
[ceph@ceph01 ceph-cluster]$ ceph-deploy --overwrite-conf config push ceph{01,02,03,04,05,06}
#Restart the mon service on the mon nodes
[ceph@ceph01 ceph-cluster]$ sudo systemctl restart ceph-mon.target
[ceph@ceph02 ~]$ sudo systemctl restart ceph-mon.target
[ceph@ceph03 ~]$ sudo systemctl restart ceph-mon.target
#Delete the pool again
[ceph@ceph02 ~]$ ceph osd pool delete mytest mytest --yes-i-really-really-mean-it
pool 'mytest' removed
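Depending on the Ceph release, the same option can often be toggled at runtime instead of editing ceph.conf and restarting the monitors; the following is a sketch (the config subcommand requires Mimic or later, and injectargs behaviour may vary between versions):
# Set the option cluster-wide via the monitor config database (Mimic and later)
ceph config set mon mon_allow_pool_delete true
# Or inject it into the running monitors
ceph tell mon.* injectargs '--mon-allow-pool-delete=true'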
Rename a pool
# Syntax:
ceph osd pool rename {current-pool-name} {new-pool-name}
# Example:
[ceph@ceph02 ~]$ ceph osd pool rename test11 test22
pool 'test11' renamed to 'test22'
Show pool statistics
[ceph@ceph02 ~]$ rados df
#To get I/O information for a specific pool, or for all pools:
ceph osd pool stats [{pool-name}]
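Cluster-wide and per-pool usage can also be checked with ceph df (a general-purpose command; the output columns vary slightly between releases):
# Summary of raw capacity and per-pool usage
ceph df
# More detailed per-pool statistics
ceph df detail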
Pool snapshot management
[ceph@ceph02 ~]$ rados -p test put testfile /etc/hosts #write the hosts file into an object named testfile
[ceph@ceph02 ~]$ rados -p test ls #list the objects
testfile
[ceph@ceph02 ~]$ rados -p test mksnap snaphost #create a snapshot named snaphost
created pool test snap snaphost
[ceph@ceph02 ~]$ rados -p test lssnap #list the snapshots
1 snaphost 2020.04.22 04:14:47
1 snaps
[ceph@ceph02 ~]$ rados -p test ls
testfile
[ceph@ceph02 ~]$ rados -p test rm testfile #delete the object
[ceph@ceph02 ~]$ rados -p test ls #oddly, ls still shows this object after it was deleted
testfile
[ceph@ceph02 ~]$ rados -p test get testfile host #verify whether the object still exists by trying to get it; the error shows it is gone
error getting test/testfile: (2) No such file or directory
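The object name still appears in ls because the pool snapshot retains a clone of it; this can be inspected, and the snapshotted contents read directly, as sketched below (using the test pool and snaphost snapshot from above; exact output varies):
# Show the snapshot clones that exist for the object
rados -p test listsnaps testfile
# Read the object from the snapshot instead of the (deleted) head
rados -p test -s snaphost get testfile testfile.from-snap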
[ceph@ceph02 ~]$ rados rollback -p test testfile snaphost #roll the object back to the snapshot
rolled back pool test to snapshot snaphost
[ceph@ceph02 ~]$ rados -p test ls #list the objects in the pool again; testfile is back
testfile
[ceph@ceph02 ~]$ rados -p test get testfile host #get the object to verify the rollback succeeded
[ceph@ceph02 ~]$ ls
host
Get the object replica count
[ceph@ceph01 ceph-cluster]$ ceph osd dump | grep 'replicated size'
pool 2 'test22' replicated size 2 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode warn last_change 47 lfor 0/0/43 flags hashpspool,pool_snaps stripe_width 0
pool 3 'test' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode warn last_change 48 flags hashpspool,pool_snaps stripe_width 0
GET POOL VALUES
# Syntax:
ceph osd pool get {pool-name} {key}
{key} can be one of the following: size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeepscrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|erasure_code_profile|min_read_recency_for_promote|all|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites|fingerprint_algorithm|pg_autoscale_mode|pg_autoscale_bias|pg_num_min|target_size_bytes|target_size_ratio
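For example, the replica count or PG count of a pool can be read back like this (using the test22 pool from this walkthrough; any key from the list above works the same way):
# Read the replica count of the pool
ceph osd pool get test22 size
# Read the PG count of the pool
ceph osd pool get test22 pg_num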
SET POOL VALUES
# Syntax:
ceph osd pool set {pool-name} {key} {value}
# Example:
[ceph@ceph03 ~]$ ceph osd pool set test size 2 #set the object replica count
set pool 3 size to 2
ERASURE CODE
Newly created pools are replicated by default. Erasure-coded pools can be used instead to save space; the simplest erasure-coded pool is equivalent to RAID5 and requires at least three hosts.
Let's look at the default erasure code profile:
[ceph@ceph06 ~]$ ceph osd erasure-code-profile ls #list the erasure code profiles that exist in the cluster
default
[ceph@ceph06 ~]$ ceph osd erasure-code-profile get default
k=2 #K: the number of data chunks, i.e. how many chunks are needed to recover the data
m=1 #M: the number of coding (parity) chunks, i.e. how many chunks may fail
#The encoding algorithm turns K original chunks into K+M chunks; any K of them can reconstruct the original data, so up to M chunks can be lost without losing data
plugin=jerasure #the jerasure plugin is the most generic and flexible plugin; it wraps the Jerasure library
technique=reed_sol_van #the encoding technique used is reed_sol_van
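A quick worked comparison of the space overhead (simple arithmetic based on the profile above): with k=2 and m=1, the usable fraction of raw capacity is k/(k+m) = 2/3, roughly 67%, whereas a size-3 replicated pool yields only 1/3, roughly 33%, which is why erasure-coded pools save space.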
Create a new erasure code profile
[ceph@ceph06 ~]$ ceph osd erasure-code-profile set myprofile \
k=2 \
m=1 \
crush-failure-domain=rack #creates a CRUSH rule that ensures no two chunks are stored in the same rack
[ceph@ceph06 ~]$ ceph osd erasure-code-profile ls
default
myprofile #the custom erasure code profile has been created
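To check what the new profile actually contains, it can be read back (a quick verification against the myprofile profile created above):
# Show the settings stored in the profile
ceph osd erasure-code-profile get myprofile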
Create an erasure-coded pool
[ceph@ceph06 ~]$ ceph osd pool create ecpool 12 12 erasure myprofile
pool 'ecpool' created
[ceph@ceph06 ~]$ echo ABCDEFGHI | rados --pool ecpool put NYAN -
[ceph@ceph06 ~]$ rados --pool ecpool get NYAN -
ABCDEFGHI
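To see how the object was placed across OSDs, its mapping can be queried (a sketch; the actual PG and OSD IDs depend on the cluster):
# Show which PG and OSDs hold the NYAN object in the erasure-coded pool
ceph osd map ecpool NYAN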
Delete an erasure code profile
[ceph@ceph05 ~]$ ceph osd erasure-code-profile ls
default
myprofile
[ceph@ceph05 ~]$ ceph osd erasure-code-profile rm myprofile #the removal fails because ecpool is still using the myprofile profile
Error EBUSY: ecpool pool(s) are using the erasure code profile 'myprofile'
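To actually remove the profile, the pool that references it has to be deleted first, then the rm retried (a sketch reusing the pool-deletion steps from earlier in this document, so mon_allow_pool_delete must already be enabled):
# Delete the pool that uses the profile
ceph osd pool delete ecpool ecpool --yes-i-really-really-mean-it
# Now the profile can be removed
ceph osd erasure-code-profile rm myprofile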
Document last updated: 2020-05-07 16:41 Author: 子木