OpenStack with Ceph: Clean up orphaned instances

Working with Ceph and OpenStack can make your life as a cloud administrator really easy, but from time to time you also discover the downsides. Every now and then I share such findings on this blog; it serves as documentation for me and hopefully helps you avoid the mistakes I made.

I discovered an orphaned instance in a user’s project; fortunately it was not an important one. The instance’s disk was not a volume but a clone of the Glance image (<INSTANCE_ID>_disk), so it depended on that base image. The problem: the base image no longer existed in the backend – somehow it must have been deleted even though clones of it still existed. I assume this was related to a cache tier incident a couple of months earlier; something must have destroyed the relationship between the image and its clones.
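If you suspect such an orphan, you can check whether a clone’s parent (the Glance base image) still exists in the backend. Here is a minimal Python sketch using the rbd CLI; the pool name “vms” and the <INSTANCE_ID>_disk naming are assumptions based on a default Nova-on-Ceph setup and may differ in your environment:

```python
#!/usr/bin/env python3
"""Check whether an RBD clone's parent (the Glance base image) still exists."""
import json
import subprocess

# Assumed pool/image names for a default Nova/Glance-on-Ceph setup.
VMS_POOL = "vms"
INSTANCE_DISK = "<INSTANCE_ID>_disk"

def rbd_json(*args):
    """Run an rbd command and return its JSON output."""
    out = subprocess.check_output(["rbd", "--format", "json", *args])
    return json.loads(out)

# "rbd info" shows the parent (pool/image@snapshot) of a cloned image.
info = rbd_json("info", f"{VMS_POOL}/{INSTANCE_DISK}")
parent = info.get("parent")
if parent is None:
    print("Disk is not a clone, nothing to check.")
else:
    spec = f"{parent['pool']}/{parent['image']}@{parent['snapshot']}"
    # If the base image was deleted, "rbd info" on it will fail.
    try:
        rbd_json("info", f"{parent['pool']}/{parent['image']}")
        print(f"Parent {spec} still exists.")
    except subprocess.CalledProcessError:
        print(f"Parent {spec} is gone -- the clone is orphaned.")
```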



Obstacles for OpenStack: cinder-volume tears down control node

I’d like to share another finding from my work with OpenStack. I was asked to assist with a small private cloud based on Ocata, with a single control node, a handful of compute nodes, and Ceph as the storage backend.

During tests with a Heat template (containing 6 instances, 7 volumes and a small network infrastructure) the control node became unresponsive due to the load cinder-volume caused. The reason was that some of the volumes had to be created from (large) images, in which case Cinder has to convert the Glance images and upload them back to Ceph as volumes.

The conversion happens on the local disk of the control node. This was known, so the directory /var/lib/cinder had been placed on a separate logical volume with enough disk space. That setup was fine for creating single volumes, but this was the first real performance test for the environment, and it failed! While the stack was being created – which took ages – the control node was almost inoperable.
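For context, this is roughly what happens when Cinder creates an RBD volume from a qcow2 Glance image: the image is staged on the control node’s local disk (the directory is controlled by Cinder’s image_conversion_dir option), converted to raw with qemu-img, and then written into the volumes pool. The following Python sketch only illustrates that flow with the plain CLI tools – it is not Cinder’s actual implementation, and the file names, pool name and volume name are made up:

```python
#!/usr/bin/env python3
"""Simplified illustration of Cinder's image-to-volume flow on the control node.

Not Cinder's actual code; it only shows why the conversion directory sees
heavy I/O. Paths and names are assumptions.
"""
import subprocess

CONVERSION_DIR = "/var/lib/cinder/conversion"   # cinder.conf: image_conversion_dir
SRC_QCOW2 = f"{CONVERSION_DIR}/image.qcow2"     # Glance image fetched to local disk
DST_RAW = f"{CONVERSION_DIR}/image.raw"         # converted copy, also on local disk
VOLUMES_POOL = "volumes"
VOLUME_NAME = "volume-<UUID>"

# Step 1: qemu-img reads the qcow2 and writes a raw file -- both on the
# control node's local disk, so a slow disk stalls every volume creation.
subprocess.check_call(["qemu-img", "convert", "-O", "raw", SRC_QCOW2, DST_RAW])

# Step 2: the raw file is uploaded into the Ceph volumes pool.
subprocess.check_call(["rbd", "import", DST_RAW, f"{VOLUMES_POOL}/{VOLUME_NAME}"])
```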

So we decided to put the conversion directory on an SSD. Not only did this lead to a much faster stack creation, it also kept the control node “alive” and responsive.

This should be considered when planning a cloud infrastructure, although it can be fixed quickly if you have an empty slot available in your server. As long as qemu-img convert needs this intermediate copy on local disk, it’s a good idea to move the conversion directory onto a faster device.


Migrating BlueStore’s block.db

Ceph’s BlueStore storage engine is rather new, so the big wave of migrations due to failing block devices is still ahead of us – on the other hand, non-optimal device selection due to a lack of experience or “inherited environments” may have left you with a setup you’d like to change.

Such an issue can be the location of the OSDs’ RocksDB devices. As a recap: BlueStore allows you to separate the storage for the write-ahead log (WAL), its metadata store (RocksDB) and the actual content. When using spinning disks for the content, the most common setup is probably to split off RocksDB onto an SSD. If you have the money, you may have put the WAL onto NVMe storage; if not, it will automatically end up on the SSD (if you have one) or on the main block device if that is all you have.
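If you are unsure how an existing OSD is laid out, the symlinks in its data directory tell you which devices are in use. A minimal Python sketch, assuming the default data path /var/lib/ceph/osd/ceph-<id>:

```python
#!/usr/bin/env python3
"""Show which devices a BlueStore OSD uses for data, RocksDB and WAL."""
import os

OSD_ID = 0  # adjust to the OSD you want to inspect
osd_dir = f"/var/lib/ceph/osd/ceph-{OSD_ID}"

# A BlueStore OSD keeps symlinks to its block devices in the data directory:
#   block     -> main data device
#   block.db  -> RocksDB device (only present if split off)
#   block.wal -> WAL device (only present if split off)
for link in ("block", "block.db", "block.wal"):
    path = os.path.join(osd_dir, link)
    if os.path.lexists(path):
        print(f"{link:9s} -> {os.path.realpath(path)}")
    else:
        print(f"{link:9s} -> no separate device")
```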

So when setting up such an “HDD plus RocksDB on SSD” OSD, you had to decide how to provide the RocksDB block device. Since roughly 10 GB of RocksDB per terabyte of main storage is recommended, assigning a whole SSD would be a waste of resources. You basically end up with two options: partition the SSD, or turn it into a PV and create an LVM volume group from it. But whatever you decide: once it is set up, there is no documented way to move the RocksDB to a different block device – you’d have to recreate the OSD.
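To make the sizing rule a bit more concrete, here is a small sketch applying the “10 GB per terabyte” recommendation to a few HDD-backed OSDs sharing one SSD for their RocksDB devices; the disk sizes are made-up example values:

```python
#!/usr/bin/env python3
"""Apply the 'about 10 GB RocksDB per TB of main storage' rule of thumb."""

DB_GB_PER_TB = 10            # recommended RocksDB size per TB of OSD capacity
hdd_sizes_tb = [4, 4, 6, 6]  # example: four HDD-backed OSDs sharing one SSD
ssd_size_gb = 240            # example SSD used for the RocksDB partitions/LVs

db_sizes = [tb * DB_GB_PER_TB for tb in hdd_sizes_tb]

for tb, db in zip(hdd_sizes_tb, db_sizes):
    print(f"{tb} TB OSD -> {db} GB block.db")
print(f"Total DB space needed: {sum(db_sizes)} GB of {ssd_size_gb} GB on the SSD")
```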
