Obstacles for OpenStack: cinder-volume tears down control node

I’d like to share another finding from my work with OpenStack.
I was asked to assist with a small private cloud based on Ocata, with a single control node, a handful of compute nodes, and Ceph as the storage backend.

During tests with a Heat template (containing 6 instances, 7 volumes and a small network infrastructure), the control node became unresponsive due to the load caused by cinder-volume. The reason was that some of the volumes had to be created from (large) images; in that case Cinder has to download the Glance images, convert them to raw format, and upload them back to Ceph as volumes.
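Roughly, for an RBD-backed volume created from an image, cinder-volume fetches the image into its conversion directory, converts it to raw and only then writes it into the Ceph pool. A minimal sketch of that step, assuming a qcow2 source image and default paths (the temporary file names below are made up for illustration):

    # cinder-volume fetches the Glance image, then converts it to raw
    # before importing it into the Ceph pool (file names are illustrative):
    qemu-img convert -f qcow2 -O raw \
        /var/lib/cinder/conversion/tmp_image_fetch \
        /var/lib/cinder/conversion/tmp_volume.raw
    # The resulting raw file is then uploaded into Ceph as the new volume.

Both the download and the conversion are heavy on disk I/O, which is why several parallel volume creations can bring the node to its knees.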

The conversion happens on the local disk of the control node. This was known, so the directory /var/lib/cinder had been placed on a separate logical volume with enough disk space. That setup was adequate for creating single volumes, but this was the first real performance test for the environment, and it failed! While the stack was being created (which took ages!), the control node was almost inoperable.

So we decided to put the conversion directory on an SSD. Not only did this lead to a much faster stack creation, it also kept the control node “alive” and responsive.
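In case you are facing the same problem: the change itself is small. A sketch, with all device names and paths being placeholders:

    # Device, filesystem and mount point are examples only:
    mkfs.xfs /dev/sdX1
    mkdir -p /srv/cinder-conversion
    mount /dev/sdX1 /srv/cinder-conversion
    # Point cinder-volume at it in /etc/cinder/cinder.conf, section [DEFAULT]:
    #   image_conversion_dir = /srv/cinder-conversion
    # ...and restart the service (its name varies by distribution):
    systemctl restart openstack-cinder-volume

By default image_conversion_dir points to $state_path/conversion, i.e. /var/lib/cinder/conversion, so mounting the SSD directly there works as well and needs no configuration change at all (the fstab entry is left out here).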

This should be considered when planning a cloud infrastructure, although it can be fixed quickly if you have an empty drive slot available in your server. Until there is a way for qemu-img convert to avoid the local conversion step altogether, it is a good idea to move the conversion directory onto a faster device.


Migrating BlueStore’s block.db

Ceph’s BlueStore storage engine is rather new, so the big wave of migrations due to failing block devices is still ahead. On the other hand, non-optimal device selection caused by a lack of experience or by “heritage environments” may have left you with a setup you’d like to change.

One such issue can be the location of the OSDs’ RocksDB devices. As a recap: BlueStore allows you to separate the storage for the write-ahead log (WAL), its metadata store (RocksDB) and the actual content. When using spinning disks for content, the most common setup is probably to split off RocksDB onto an SSD. If you have the money, you may have put the WAL onto NVMe storage; if not, it will automatically end up on the SSD (if you have one) or otherwise on the main block device.
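With ceph-volume (available since Luminous), this split is declared when the OSD is created. A minimal sketch, where the device and VG/LV names are just placeholders:

    # HDD for the data, an SSD logical volume for RocksDB,
    # optionally an NVMe logical volume for the WAL:
    ceph-volume lvm create --bluestore \
        --data /dev/sdb \
        --block.db ceph-db/db-osd0 \
        --block.wal ceph-wal/wal-osd0
    # Without --block.wal, the WAL lives on the block.db device;
    # without both, everything stays on the data device.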

So when setting up that “HDD plus RocksDB on SSD” OSD, you’ll have had to decide how to set up the RocksDB block device. Since about 10 GB of RocksDB per terabyte of main storage is recommended, assigning a full SSD to a single OSD is a waste of resources. You end up with basically two options: partition the SSD, or turn it into a PV and create an LVM volume group from it. But whatever you decide: once set up, there’s no documented way to move the RocksDB to a different block device; you’d need to recreate the OSD.
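For the volume group variant, the preparation could look like this, assuming 4 TB HDDs and one shared SSD showing up as /dev/sdk (all names are placeholders):

    # One PV/VG on the shared SSD, one ~40 GB LV per 4 TB OSD
    # (following the 10 GB per terabyte rule of thumb):
    pvcreate /dev/sdk
    vgcreate ceph-db /dev/sdk
    lvcreate -L 40G -n db-osd0 ceph-db
    lvcreate -L 40G -n db-osd1 ceph-db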


Resetting an existing BlueStore OSD

During an attempt to migrate some OSDs’ BlueStore RocksDB to a different block device, we noticed (previously undetected) fatal read errors on the existing RocksDB. The only way to recover from this situation is to remove the OSD and rebuild its content from the other copies.

There are standard procedures to delete and to create OSDs, both BlueStore and FileStore. But during our transition from FileStore to BlueStore we ran into a problem where we could not specify the new OSD’s id, and we hit other minor difficulties. This time we also wanted to cause as little data movement as possible, all while replacing the RocksDB block device.
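For reference, the standard way to throw away and rebuild such an OSD looks roughly like this (Luminous-style commands; <id> is just a placeholder for the OSD in question):

    ceph osd out <id>
    systemctl stop ceph-osd@<id>              # on the OSD's host
    ceph osd purge <id> --yes-i-really-mean-it
    # ...then recreate the OSD, e.g. with ceph-volume, and let recovery refill it.

This rebuilds everything from scratch, including a new OSD id, which is exactly where we ran into the difficulties mentioned above.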

To make a long story short: We were looking for a “mkfs”-style approach.
