BlueStore research notes
References:
1. 深入浅出BlueStore的OSD创建与启动 (an introduction to BlueStore OSD creation and startup)
2. Ceph Bluestore 部署实践 (Ceph BlueStore deployment practice)
The read_meta and write_meta calls seen throughout the code actually read and write under /var/lib/ceph/osd/ceph-x/ (the block file there);
/var/lib/ceph/osd/ceph-0/ contains the files block, block.db, block.wal, ceph_fsid, fsid, keyring, ready, type, whoami;
block, block.db and block.wal are symlinks: block points to the data space, block.db to the space where RocksDB keeps its database, and block.wal to the space where the log is kept.
Opening, reading/writing, and closing block, block.db and block.wal are ultimately implemented on top of the underlying xfs.
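A minimal hedged sketch of that metadata interface (signatures assumed to follow ObjectStore::read_meta()/write_meta(); error handling elided):

store->write_meta("whoami", "0");         // at creation time: persist a value
std::string id;
int r = store->read_meta("whoami", &id);  // at startup: read it back ("0")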
After BlueFS was introduced, BlueStore divides its space into three tiers:
1. slow space: used to store object data; may consist of ordinary HDDs; managed by BlueStore;
2. high-speed (DB) space: ordinary SSDs, used for the metadata BlueStore generates internally. That metadata is stored by RocksDB, and RocksDB in turn stores its data in BlueFS, so this space is managed by BlueFS;
3. ultra-high-speed (WAL) space: stores RocksDB's internal .log files, on NVMe or other devices with lower latency than an ordinary SSD. The .log files are likewise stored through BlueFS, so this space is also managed by BlueFS;
Naturally, when BlueFS's free space falls below a certain fraction of BlueStore's free space, BlueFS is given a share of BlueStore's space; by the same token, that space can later be reclaimed.
BlueFS is itself a journaling file system, so it generates journal data of its own. Its journal and the RocksDB .log files are stored preferentially in the WAL space; when WAL runs out they fall back to the DB space, and when DB runs out, to the slow space. The slow space is managed by BlueStore, which records what it has handed over in the bluefs_extents structure: a set whose members each describe one extent of slow space. Every update (an Allocator update) is persisted as BlueStore metadata; during a later startup this information is read back so the Allocator can be initialized correctly. A sketch of the fallback order follows.
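A hedged sketch of the WAL -> DB -> slow fallback described above. BDEV_WAL/BDEV_DB/BDEV_SLOW are the real BlueFS device ids; try_allocate() is a hypothetical helper standing in for the per-device allocator call, so this is not the real BlueFS::_allocate():

int allocate_fallback(uint64_t len, bluefs_fnode_t* node) {
  for (unsigned id : {BDEV_WAL, BDEV_DB, BDEV_SLOW}) {
    if (alloc[id] &&                       // is this tier present at all?
        try_allocate(id, len, node) == 0)  // hypothetical helper
      return 0;
  }
  return -ENOSPC;  // every tier, including slow, is exhausted
}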
Reserved space: 0~8192
0~4096: reserved for the label:
osd_uuid    | OSD this block device belongs to
size        | size of the block device
btime       | creation time of the label
description | descriptive text for the label
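For reference, these fields correspond to bluestore_bdev_label_t. The layout below is paraphrased from memory of the Ceph source, so treat it as approximate:

struct bluestore_bdev_label_t {
  uuid_d osd_uuid;                          // OSD this device belongs to
  uint64_t size = 0;                        // size of the block device
  utime_t btime;                            // label creation time
  std::string description;                  // e.g. "main", "bluefs db"
  std::map<std::string, std::string> meta;  // extra label metadata
};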
4096~8192: reserved for BlueFS to store its own superblock.
The BlueFS superblock is kept in the second 4K block of the DB space and contains:
uuid       | uuid associated with this BlueFS instance (the fsid?)
osd_uuid   | OSD this BlueFS belongs to
block_size | block size of the devices backing DB/WAL
log_fnode  | fnode of the journal file
version    | current version of the superblock
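The corresponding on-disk structure is bluefs_super_t; paraphrased from memory of the Ceph source, so treat it as approximate:

struct bluefs_super_t {
  uuid_d uuid;               // uuid of this bluefs instance
  uuid_d osd_uuid;           // OSD this bluefs belongs to
  uint64_t version = 0;      // current superblock version
  uint32_t block_size = 0;   // block size of the DB/WAL devices
  bluefs_fnode_t log_fnode;  // fnode of the bluefs journal file
};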
BlueFS management:
Block device management:
1. mkfs: persists certain config options to disk. Reason: different BlueStore config values imply different on-disk data organization (for example, data is organized differently on SSD and NVMe); to keep a restart with changed configuration from corrupting data, the options are persisted at mkfs time and read back from disk on every later startup so they stay consistent:
os_type       | ObjectStore type: filestore or bluestore
fsid          | uniquely identifies a BlueStore instance
freelist_type | type of the FreelistManager; BlueStore persists the whole free list in the kv store, so if the FreelistManager type could change dynamically, the free-list information could not be read back correctly at startup
kv_backend    | which kind of kv store to use, currently leveldb or rocksdb
bluefs        | whether, when the kv store is rocksdb, BlueFS replaces the local filesystem interface
2. mount: when the osd process starts, mount performs the pre-start checks and preparation:
2.1 check the ObjectStore type: it was written to disk at mkfs time, so mount reads it back and verifies that it matches (see the sketch after this list);
2.2 fsck: check for corruption;
2.3 load and lock the fsid;
2.4 load the main block device;
2.5 open the database and fetch the metadata:
nid_max        | smallest nid not yet allocated by this BlueStore; new objects get nids starting from the current nid_max
blobid_max     | globally unique blob-id watermark, analogous to nid_max (the original note was unsure of its exact use)
freelist_type  | type of the FreelistManager
min_alloc_size | the minimum allocation unit BlueStore configured for itself
bluefs_extents | extra space shared from the main device to BlueFS
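A hedged sketch of the consistency check in steps 2.1 and 2.5 (the kv access is pseudocode in comments; PREFIX_SUPER is the real kv namespace BlueStore uses for this metadata):

std::string type;
if (store->read_meta("type", &type) < 0 || type != "bluestore")
  return -EIO;  // 2.1: not the ObjectStore type this disk was mkfs'ed with
// 2.5: counters such as nid_max come back out of the kv store's super
// namespace and re-seed the in-memory state, roughly:
//   bufferlist bl;
//   db->get(PREFIX_SUPER, "nid_max", &bl);  // then decode into nid_max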
Data structures
Basic data structures:
/// an in-memory object: data, xattrs, omap header, omap entries
struct Onode {
  MEMPOOL_CLASS_HELPERS();
  // Not persisted and updated on cache insertion/removal
  OnodeCacheShard *s;
  bool pinned = false;  // Only to be used by the onode cache shard
  std::atomic_int nref;  ///< reference count
  Collection *c;         // the PG (collection) this object belongs to
  ghobject_t oid;
  /// key under PREFIX_OBJ where we are stored
  mempool::bluestore_cache_other::string key;
  boost::intrusive::list_member_hook<> lru_item, pin_item;
  // on-disk onode structure
  bluestore_onode_t onode;  ///< metadata stored as value in kv store
  bool exists;              ///< true if object logically exists
  // Ordered set of logical extents, persisted in RocksDB: lextent --> blob.
  // Sparse writes are supported, so the extents may be discontiguous (holes):
  // one extent can end before the next one starts.
  // Too many extents in a single object makes the ExtentMap large and badly
  // hurts RocksDB access, so shard_info was introduced and adjacent small
  // segments are merged; shards can then be loaded on demand, reducing
  // memory usage.
  // Space management: holds multiple Extents, each managing one logical
  // segment of the object and referencing a Blob; a Blob contains multiple
  // pextents, which finally map the object's data onto disk.
  ExtentMap extent_map;
  // track txc's that have not been committed to kv store (and whose
  // effects cannot be read via the kvdb read methods)
  std::atomic<int> flushing_count = {0};
  std::atomic<int> waiting_count = {0};
  /// protect flush_txns
  ceph::mutex flush_lock = ceph::make_mutex("BlueStore::Onode::flush_lock");
  ceph::condition_variable flush_cond;  ///< wait here for uncommitted txns
  ......
};
// ExtentMap structure
/// a sharded extent map, mapping offsets to lextents to blobs
struct ExtentMap {
Onode *onode;
extent_map_t extent_map; ///< map of Extents to Blobs
blob_map_t spanning_blob_map; ///< blobs that span shards
typedef boost::intrusive_ptr<Onode> OnodeRef;
......
}
// Extent structure: essentially a collection of offset/length mappings
struct Extent : public ExtentBase {
MEMPOOL_CLASS_HELPERS();
uint32_t logical_offset = 0; ///< logical offset
uint32_t blob_offset = 0; ///< blob offset
uint32_t length = 0; ///< length
BlobRef blob; ///< the blob with our data
......
}
// on-disk data structures
/// onode: per-object metadata
struct bluestore_onode_t {
  // logical id, unique within a single BlueStore instance
  uint64_t nid = 0;   ///< numeric id (locally unique)
  // object size
  uint64_t size = 0;  ///< object size
  // extended attributes of the object
  map<mempool::bluestore_cache_other::string, bufferptr> attrs;  ///< attrs
  ......
};
/// pextent: physical extent
// offset: physical offset on disk, aligned to the block size
// length: length of the data segment, aligned to the block size
struct bluestore_pextent_t : public bluestore_interval_t<uint64_t, uint32_t> {
  bluestore_pextent_t() {}
  bluestore_pextent_t(uint64_t o, uint64_t l) : bluestore_interval_t(o, l) {}
  bluestore_pextent_t(const bluestore_interval_t &ext)
    : bluestore_interval_t(ext.offset, ext.length) {}
  DENC(bluestore_pextent_t, v, p) {
    denc_lba(v.offset, p);
    denc_varint_lowz(v.length, p);
  }
  void dump(Formatter *f) const;
  static void generate_test_instances(list<bluestore_pextent_t*>& ls);
};
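Putting the pieces together, a hedged pseudo-walk of how a logical range resolves to disk. fault_range() is the real shard-loading call on ExtentMap; the loop body is illustrative only:

// Onode -> ExtentMap -> Extent -> Blob -> bluestore_pextent_t
void map_logical_range(Onode& o, KeyValueDB* db, uint32_t off, uint32_t len) {
  o.extent_map.fault_range(db, off, len);    // load only the shards we need
  for (auto& e : o.extent_map.extent_map) {  // Extents are sorted by offset
    if (e.logical_offset >= off + len)
      break;
    // e maps [e.logical_offset, e.logical_offset + e.length) of the object
    // into its Blob at e.blob_offset; the Blob's pextents then give the
    // physical (disk offset, length) pairs.
  }
}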
// Each Collection owns an OpSequencer, reached through the collection handle:
CollectionHandle& ch
Collection *c = static_cast<Collection*>(ch.get());
OpSequencer *osr = c->osr.get();
/**
 * a collection also orders transactions
 *
 * Any transactions queued under a given collection will be applied in
 * sequence.  Transactions queued under different collections may run
 * in parallel.
 *
 * ObjectStore users may get collection handles with open_collection() (or,
 * for bootstrapping a new collection, create_new_collection()).
 */
struct CollectionImpl : public RefCountedObject {
  const coll_t cid;

  /// wait for any queued transactions to apply
  // block until any previous transactions are visible.  specifically,
  // collection_list and collection_empty need to reflect prior operations.
  virtual void flush() = 0;

  /**
   * Async flush_commit
   *
   * There are two cases:
   * 1) collection is currently idle: the method returns true.  c is
   *    not touched.
   * 2) collection is not idle: the method returns false and c is
   *    called asynchronously with a value of 0 once all transactions
   *    queued on this collection prior to the call have been applied
   *    and committed.
   */
  virtual bool flush_commit(Context *c) = 0;

  const coll_t &get_cid() {
    return cid;
  }

protected:
  CollectionImpl() = delete;
  CollectionImpl(CephContext* cct, const coll_t& c)
    : RefCountedObject(cct), cid(c) {}
  ~CollectionImpl() = default;
};
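A hedged usage sketch of this ordering contract (API per ObjectStore; cid, oid, and bl are assumed to exist): transactions queued on the same handle apply in order, while one queued on another collection may run in parallel.

ObjectStore::CollectionHandle ch = store->open_collection(cid);
ObjectStore::Transaction t1, t2;
t1.write(cid, oid, 0, bl.length(), bl);       // applied first
t2.setattr(cid, oid, "user.tag", bl);         // applied after t1
store->queue_transaction(ch, std::move(t1));
store->queue_transaction(ch, std::move(t2));  // same OpSequencer => ordered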
In STATE_PREPARE the state machine then checks whether the transaction still has unsubmitted IO. If it does, it sets state to STATE_AIO_WAIT, calls _txc_aio_submit to submit the IO, and leaves the state machine; when the aio completes, the callback txc_aio_finish re-enters it. Otherwise execution falls straight through into the STATE_AIO_WAIT case.
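For orientation, the TransContext state machine walked through below uses these states (names as in BlueStore.h; listed from memory, so treat the exact set as approximate):

enum state_t {
  STATE_PREPARE,           // building the transaction; may issue aio
  STATE_AIO_WAIT,          // waiting for aio to complete
  STATE_IO_DONE,           // all aio finished; ready for kv submission
  STATE_KV_QUEUED,         // queued for the kv_sync_thread
  STATE_KV_SUBMITTED,      // submitted to the kv store, not yet synced
  STATE_KV_DONE,           // kv commit is durable
  STATE_DEFERRED_QUEUED,   // deferred (journaled) writes still pending
  STATE_DEFERRED_CLEANUP,  // deferred writes applied; cleaning up
  STATE_DEFERRED_DONE,
  STATE_FINISHING,
  STATE_DONE,
};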
STATE_PREPARE stage:
case TransContext::STATE_PREPARE:
  throttle.log_state_latency(*txc, logger, l_bluestore_state_prepare_lat);
  if (txc->ioc.has_pending_aios()) {
    // unsubmitted IO remains: move to STATE_AIO_WAIT and submit it
    txc->state = TransContext::STATE_AIO_WAIT;
    txc->had_ios = true;   // update txc: it has issued IO
    _txc_aio_submit(txc);  // submit the IO
    return;
  }
void BlueStore::_txc_aio_submit(TransContext *txc)
{
  dout(10) << __func__ << " txc " << txc << dendl;
  bdev->aio_submit(&txc->ioc);
}
// BlueStore registers aio_cb (and discard_cb) when creating its BlockDevice:
bdev = BlockDevice::create(cct, p, aio_cb, static_cast<void*>(this),
                           discard_cb, static_cast<void*>(this));
static void aio_cb(void *priv, void *priv2)
{
  BlueStore *store = static_cast<BlueStore*>(priv);
  BlueStore::AioContext *c = static_cast<BlueStore::AioContext*>(priv2);
  c->aio_finish(store);
}
void aio_finish(BlueStore *store) override {
  store->txc_aio_finish(this);
}
void txc_aio_finish(void *p) {
  _txc_state_proc(static_cast<TransContext*>(p));
}
STATE_AIO_WAIT stage (entered again via the aio_cb -> aio_finish -> txc_aio_finish callback chain above):
case TransContext::STATE_AIO_WAIT:
  // wait here for the AIOs to complete; _txc_finish_io keeps IO ordered
  {
    mono_clock::duration lat = throttle.log_state_latency(
      *txc, logger, l_bluestore_state_aio_wait_lat);
    if (ceph::to_seconds<double>(lat) >= cct->_conf->bluestore_log_op_age) {
      dout(0) << __func__ << " slow aio_wait, txc = " << txc
              << ", latency = " << lat << dendl;
    }
  }
  _txc_finish_io(txc);  // may trigger blocked txc's too
  return;
void BlueStore::_txc_finish_io(TransContext *txc)
{
  dout(20) << __func__ << " " << txc << dendl;
  /*
   * we need to preserve the order of kv transactions,
   * even though aio will complete in any order.
   */
  OpSequencer *osr = txc->osr.get();
  std::lock_guard l(osr->qlock);
  txc->state = TransContext::STATE_IO_DONE;  // update the state
  txc->ioc.release_running_aios();
  OpSequencer::q_list_t::iterator p = osr->q.iterator_to(*txc);
  while (p != osr->q.begin()) {
    --p;
    // Every txc queued before us must already have reached STATE_IO_DONE;
    // this is what keeps completion ordered, since each completion comes
    // back through _txc_finish_io.
    if (p->state < TransContext::STATE_IO_DONE) {
      dout(20) << __func__ << " " << txc << " blocked by " << &*p << " "
               << p->get_state_name() << dendl;
      return;
    }
    if (p->state > TransContext::STATE_IO_DONE) {
      ++p;
      break;
    }
  }
  do {
    _txc_state_proc(&*p++);  // re-enter the state machine
  } while (p != osr->q.end() && p->state == TransContext::STATE_IO_DONE);
  if (osr->kv_submitted_waiters) {
    osr->qcond.notify_all();
  }
}