OpenStack: Upgrade to high availability (Part II)

This article is the second part (find the previous post here) of our series about our OpenStack upgrade process. It covers the preparation steps required to reach that goal; the key elements of the preparation were (not necessarily in that order):

  • Upgrade the OpenStack database
  • Create AutoYaST profiles
  • Create salt states
  • Prepare PXE installation


Posted in Ceph, High Availability, OpenStack

OpenStack: Upgrade to high availability (Part I)

This article is the beginning of a little blog series about our journey to upgrade a single-control-node environment to a highly available OpenStack cloud. We started using OpenStack as an experiment while our company was running on a different environment, but as sometimes happens, the experiment suddenly became a production environment without any redundancy. The one exception came when we migrated to Ceph as our storage back-end, so at least the (Glance) images, (Nova) ephemeral disks, and (Cinder) volumes were redundant. The (single) control node, however, didn’t even have a RAID configuration, just a regular backup of the database, config files, etc.


Posted in Ceph, High Availability, OpenStack

Obstacles for OpenStack: disk_format is not the same as disk_format

I bet you’re wondering about the title and whether it contains a typo or some other mistake. I promise, there’s nothing wrong with it. The title is the result of some research I did in two different environments (Ocata and Rocky); I already wrote an article about some of the findings.


Posted in Ceph, OpenStack, SUSE Cloud