OpenStack with Ceph: slow instance boot

This is another article about OpenStack with Ceph as the storage backend. Like my other posts on this topic, it is not about how to install and configure your private cloud, but rather a collection of obstacles you could be facing. For me it also serves as online documentation in case I forget what I did weeks, months or even years ago.

Now let’s get to it. We’ve been working with Ceph for quite a while now, and it’s really comfortable to launch instances within seconds. But from time to time we noticed that some instances took several minutes to boot, with nothing obvious happening on the compute nodes or in the Ceph cluster. So we didn’t really bother to debug it further; waiting a minute or so for a 6 GB instance to start isn’t too bad.

OpenStack with Ceph: Clean up orphaned instances

Working with Ceph and OpenStack can make your life as a cloud administrator really easy, but sometimes you discover its downsides. From time to time I share some findings in this blog; it serves as documentation for me, and hopefully it helps you avoid the same mistakes I made.

I discovered an orphaned instance in a user’s project; fortunately it was not an important one. The instance’s disk was not a volume but a clone of the Glance image (<INSTANCE_ID>_disk), so it depended on that base image. But the base image no longer existed in the backend; somehow it must have been deleted even though clones of it still existed. I assume this had to do with a cache tier incident a couple of months earlier, which must have destroyed the relationship between the image and its clones.
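
To illustrate that relationship, here is a minimal Python sketch (using the python-rados and python-rbd bindings) of how you could check whether an instance disk still has a valid parent image. The pool name "vms" and the config path are assumptions based on a typical OpenStack/Ceph setup, not necessarily the names used in this cloud:

    # Minimal sketch: check whether an instance disk (an RBD clone) still has
    # a valid parent image. Assumes the Nova disks live in a pool called "vms".
    import rados
    import rbd

    INSTANCE_DISK = "<INSTANCE_ID>_disk"  # the clone to inspect

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        with cluster.open_ioctx("vms") as ioctx:
            with rbd.Image(ioctx, INSTANCE_DISK, read_only=True) as disk:
                try:
                    pool, image, snap = disk.parent_info()
                    print(f"parent: {pool}/{image}@{snap}")
                except rbd.ImageNotFound:
                    raise SystemExit("disk is flat, it has no parent image")

        # Does the parent (Glance base) image still exist in its pool?
        with cluster.open_ioctx(pool) as parent_ioctx:
            if image in rbd.RBD().list(parent_ioctx):
                print("base image still exists")
            else:
                print("base image is gone, this clone is orphaned")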

Obstacles for OpenStack: cinder-volume tears down control node

I’d like to share another finding from my work with OpenStack.
I was asked to assist with a small private cloud based on Ocata, with a single control node, a handful of compute nodes, and Ceph as the storage backend.

During tests with a Heat template (containing 6 instances, 7 volumes and a small network infrastructure), the control node became unresponsive due to the load caused by cinder-volume. The reason was that some of the volumes had to be created from (large) images, in which case Cinder has to download the Glance images, convert them, and upload the result back to Ceph as volumes.
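
Roughly, it boils down to the following steps for each image-backed volume. This is a simplified Python sketch of the equivalent manual procedure, not Cinder’s actual code; the placeholder IDs, the "volumes" pool and the conversion path are assumptions:

    # Simplified sketch of what effectively happens for each volume created
    # from a non-raw Glance image on an RBD backend (placeholder names only).
    import subprocess

    IMAGE_ID = "<IMAGE_ID>"                 # Glance image the volume is based on
    VOLUME = "volume-<VOLUME_ID>"           # target RBD volume name
    WORKDIR = "/var/lib/cinder/conversion"  # Cinder's conversion directory

    src = f"{WORKDIR}/{IMAGE_ID}"
    dst = f"{WORKDIR}/{IMAGE_ID}.raw"

    # 1. Fetch the image from Glance onto the control node's local disk.
    subprocess.run(["openstack", "image", "save", "--file", src, IMAGE_ID], check=True)

    # 2. Convert it to raw - the CPU- and I/O-heavy part.
    subprocess.run(["qemu-img", "convert", "-O", "raw", src, dst], check=True)

    # 3. Import the raw file into the Ceph pool as the new volume.
    subprocess.run(["rbd", "import", dst, f"volumes/{VOLUME}"], check=True)

With a multi-gigabyte image this means the same data is written to and read from the control node’s local disk several times per volume, which explains the load.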

The conversion happens on the local disk of the control node. This was known, so the directory /var/lib/cinder had been placed on a separate logical volume with enough disk space. That setup was fine for creating single volumes, but this was the first real performance test for this environment, and it failed! While the stack was being created – which took ages! – the control node was almost inoperable.

So we decided to put the conversion directory on an SSD. Not only did this lead to a much faster stack creation, it also kept the control node “alive” and responsive.
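
If you don’t want to move the whole /var/lib/cinder volume, Cinder also lets you point only the conversion scratch space somewhere else via image_conversion_dir in cinder.conf; the mount path below is just an example:

    [DEFAULT]
    # Scratch space for image conversion, pointed at the SSD-backed mount
    # (example path, use whatever mount point you created).
    image_conversion_dir = /mnt/cinder-ssd/conversion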

This should be considered when planning a cloud infrastructure, although it can be fixed quickly if you have an empty drive slot available in your server. Until the conversion with qemu-img can avoid the local disk entirely, it’s a good idea to move the conversion directory onto a faster device.
