SUSE Cloud 5 repositories

Currently, I am working in a test environment for SUSE Cloud 5 consisting of three nodes: one admin, one control and one compute node. The compute node runs SLES12, the other nodes run SLES11-SP3.
I had some trouble setting up the repositories correctly, which is why I want to share my experience and my current settings. This post is divided into two parts: in my first attempts I used a setup with a remote SMT-Server; meanwhile, I use SUSE Manager to provide the update repositories.

First of all, here is a complete list of all repositories, extracted from the SUSE Cloud Deployment Guide (Table 4.3):

Repository Locations on the Administration Server

Channel                                 Directory on the Administration Server

Mandatory Repositories
SLES11-SP3-Pool                         /srv/tftpboot/suse-11.3/repos/SLES11-SP3-Pool/
SLES11-SP3-Updates                      /srv/tftpboot/suse-11.3/repos/SLES11-SP3-Updates/
SUSE-Cloud-5-Pool                       /srv/tftpboot/suse-11.3/repos/SUSE-Cloud-5-Pool/
SUSE-Cloud-5-Updates                    /srv/tftpboot/suse-11.3/repos/SUSE-Cloud-5-Updates/

Optional Repositories
SLES12-Pool                             /srv/tftpboot/suse-12.0/repos/SLES12-Pool/
SLES12-Updates                          /srv/tftpboot/suse-12.0/repos/SLES12-Updates/
SLE-12-Cloud-Compute5-Pool              /srv/tftpboot/suse-12.0/repos/SLE-12-Cloud-Compute5-Pool/
SLE-12-Cloud-Compute5-Updates           /srv/tftpboot/suse-12.0/repos/SLE-12-Cloud-Compute5-Updates/
SLE11-HAE-SP3-Pool                      /srv/tftpboot/suse-11.3/repos/SLE11-HAE-SP3-Pool/
SLE11-HAE-SP3-Updates                   /srv/tftpboot/suse-11.3/repos/SLE11-HAE-SP3-Updates/
SUSE-Enterprise-Storage-1.0-Pool        /srv/tftpboot/suse-12.0/repos/SUSE-Enterprise-Storage-1.0-Pool/
SUSE-Enterprise-Storage-1.0-Updates     /srv/tftpboot/suse-12.0/repos/SUSE-Enterprise-Storage-1.0-Updates/

If you decide to use SLES12, the corresponding repositories are no longer optional, of course.
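
Once the repositories are in place, a quick check like the following helps to spot anything that is still missing. This is a minimal sketch of my own; it only covers the mandatory repositories and assumes they provide the usual repodata/repomd.xml metadata:

# check that the mandatory repositories contain valid metadata
for repo in SLES11-SP3-Pool SLES11-SP3-Updates SUSE-Cloud-5-Pool SUSE-Cloud-5-Updates; do
  if [ -e /srv/tftpboot/suse-11.3/repos/$repo/repodata/repomd.xml ]; then
    echo "$repo: ok"
  else
    echo "$repo: missing"
  fi
done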

Remote SMT-Server

Before I used SUSE Manager (SUMA) to provide updates, I just had to mount our SMT-Server on the admin node and create soft links to the right directories. I have an autoinst.xml file so that I can quickly set up the admin node from scratch; in that XML file there is a simple script that creates the required mount points and soft links. Here is what my script does:

# create mandatory directories
mkdir -p /srv/smt
mkdir -p /opt/iso
# mount points for the Cloud product ISOs
mkdir -p /srv/tftpboot/suse-11.3/repos/Cloud
mkdir -p /srv/tftpboot/suse-12.0/repos/SLE12-Cloud-Compute
# SLES12 directories: create empty placeholder repos
mkdir -p /srv/tftpboot/suse-12.0/repos/SLES12-Pool/
createrepo /srv/tftpboot/suse-12.0/repos/SLES12-Pool/
mkdir -p /srv/tftpboot/suse-12.0/repos/SLES12-Updates/
createrepo /srv/tftpboot/suse-12.0/repos/SLES12-Updates/
mkdir -p /srv/tftpboot/suse-12.0/repos/SLE-12-Cloud-Compute5-Pool
createrepo /srv/tftpboot/suse-12.0/repos/SLE-12-Cloud-Compute5-Pool
mkdir -p /srv/tftpboot/suse-12.0/repos/SLE-12-Cloud-Compute5-Updates
createrepo /srv/tftpboot/suse-12.0/repos/SLE-12-Cloud-Compute5-Updates

# add the NFS and ISO mount entries to fstab
echo "<YOUR-SERVER>:/iso /opt/iso nfs defaults 0 0" >> /etc/fstab
echo "<YOUR-SERVER>:/repo/\$RCE /srv/smt nfs defaults 0 0" >> /etc/fstab
echo "/opt/iso/suse/sles11sp3/SLES-11-SP3-DVD-x86_64-GM-DVD1.iso /srv/tftpboot/suse-11.3/install iso9660 defaults,_netdev 0 0" >> /etc/fstab
echo "/opt/iso/suse/sles12/SLE-12-Server-DVD-x86_64-GM-DVD1.iso /srv/tftpboot/suse-12.0/install iso9660 defaults,_netdev 0 0" >> /etc/fstab
echo "/opt/iso/suse/cloud_5/SUSE-CLOUD-5-x86_64-GM-DVD1.iso /srv/tftpboot/suse-11.3/repos/Cloud iso9660 defaults,_netdev 0 0" >> /etc/fstab
echo "/opt/iso/suse/cloud_5/SUSE-SLE12-CLOUD-5-COMPUTE-x86_64-GM-DVD1.iso /srv/tftpboot/suse-12.0/repos/SLE12-Cloud-Compute iso9660 defaults,_netdev 0 0" >> /etc/fstab

# mount everything
mount /opt/iso
mount /srv/smt
# give the NFS mounts a few seconds to settle before mounting the ISOs
sleep 5
mount /srv/tftpboot/suse-11.3/install
mount /srv/tftpboot/suse-12.0/install
mount /srv/tftpboot/suse-11.3/repos/Cloud
mount /srv/tftpboot/suse-12.0/repos/SLE12-Cloud-Compute

# create symlinks to the SMT repositories
cd /srv/tftpboot/suse-11.3/repos
ln -s /srv/smt/SLE11-HAE-SP3-Pool/sle-11-x86_64/ SLE11-HAE-SP3-Pool
ln -s /srv/smt/SLE11-HAE-SP3-Updates/sle-11-x86_64/ SLE11-HAE-SP3-Updates
ln -s /srv/smt/SLES11-SP3-Pool/sle-11-x86_64/ SLES11-SP3-Pool
ln -s /srv/smt/SLES11-SP3-Updates/sle-11-x86_64/ SLES11-SP3-Updates
ln -s /srv/smt/SUSE-Cloud-5-Pool/sle-11-x86_64/ SUSE-Cloud-5-Pool
ln -s /srv/smt/SUSE-Cloud-5-Updates/sle-11-x86_64/ SUSE-Cloud-5-Updates
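
After the script has run, it is easy to verify that the links really resolve into the mounted SMT tree. This check is my own addition and not part of the script:

# the symlinks should point into /srv/smt and the targets should contain metadata
cd /srv/tftpboot/suse-11.3/repos
ls -l SLES11-SP3-Pool SLES11-SP3-Updates SUSE-Cloud-5-Pool SUSE-Cloud-5-Updates
ls SLES11-SP3-Updates/repodata/repomd.xml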

Although I don’t have an HA environment, I included the HAE repositories to get rid of these warnings:

Optional repo SLE11-HAE-SP3-Pool (11.3) is missing.
Optional repo SLE11-HAE-SP3-Updates (11.3) is missing.

I did not consider the Enterprise Storage repositories.

SUSE Manager

The difference between the two scripts is that with SUMA you neither have to mount an SMT-Server nor create any soft links. But you do have to create all the necessary repositories yourself:

# create mandatory directories
mkdir -p /opt/iso
# mount points for the Cloud product ISOs
mkdir -p /srv/tftpboot/suse-11.3/repos/Cloud
mkdir -p /srv/tftpboot/suse-12.0/repos/SLE12-Cloud-Compute
# SLES11 directories and repos
mkdir -p /srv/tftpboot/suse-11.3/repos/SLES11-SP3-Pool/
createrepo /srv/tftpboot/suse-11.3/repos/SLES11-SP3-Pool/
mkdir -p /srv/tftpboot/suse-11.3/repos/SLES11-SP3-Updates
createrepo /srv/tftpboot/suse-11.3/repos/SLES11-SP3-Updates

# SLES12 directories and repos
mkdir -p /srv/tftpboot/suse-12.0/repos/SLES12-Pool/
createrepo /srv/tftpboot/suse-12.0/repos/SLES12-Pool/
mkdir -p /srv/tftpboot/suse-12.0/repos/SLES12-Updates/
createrepo /srv/tftpboot/suse-12.0/repos/SLES12-Updates/
mkdir -p /srv/tftpboot/suse-12.0/repos/SLE-12-Cloud-Compute5-Pool
createrepo /srv/tftpboot/suse-12.0/repos/SLE-12-Cloud-Compute5-Pool
mkdir -p /srv/tftpboot/suse-12.0/repos/SLE-12-Cloud-Compute5-Updates
createrepo /srv/tftpboot/suse-12.0/repos/SLE-12-Cloud-Compute5-Updates

# add the NFS and ISO mount entries to fstab
echo "<YOUR-SERVER>:/iso /opt/iso nfs defaults 0 0" >> /etc/fstab
echo "/opt/iso/suse/sles11sp3/SLES-11-SP3-DVD-x86_64-GM-DVD1.iso /srv/tftpboot/suse-11.3/install iso9660 defaults,_netdev 0 0" >> /etc/fstab
echo "/opt/iso/suse/sles12/SLE-12-Server-DVD-x86_64-GM-DVD1.iso /srv/tftpboot/suse-12.0/install iso9660 defaults,_netdev 0 0" >> /etc/fstab
echo "/opt/iso/suse/cloud_5/SUSE-CLOUD-5-x86_64-GM-DVD1.iso /srv/tftpboot/suse-11.3/repos/Cloud iso9660 defaults,_netdev 0 0" >> /etc/fstab
echo "/opt/iso/suse/cloud_5/SUSE-SLE12-CLOUD-5-COMPUTE-x86_64-GM-DVD1.iso /srv/tftpboot/suse-12.0/repos/SLE12-Cloud-Compute iso9660 defaults,_netdev 0 0" >> /etc/fstab

# mount everything
mount /opt/iso
# give the NFS mount a few seconds to settle before mounting the ISOs
sleep 5
mount /srv/tftpboot/suse-11.3/install
mount /srv/tftpboot/suse-12.0/install
mount /srv/tftpboot/suse-11.3/repos/Cloud
mount /srv/tftpboot/suse-12.0/repos/SLE12-Cloud-Compute
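
The repositories created above stay empty on the admin node; the actual packages come from SUSE Manager later on (see the provisioner.json section below). I still check that createrepo has written its metadata everywhere:

# every repository directory should now contain a repodata/repomd.xml
find /srv/tftpboot/suse-11.3/repos /srv/tftpboot/suse-12.0/repos -name repomd.xml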

Provisioner.json

If you use yast crowbar to edit or add your SUSE Manager repositories (before you run the cloud installation script), your changes are written to /etc/crowbar/provisioner.json. If you already have such a file, just copy it to /etc/crowbar/ and the installation script will apply that configuration automatically. Here is my provisioner.json:

{
   "attributes" : {
      "provisioner" : {
         "suse" : {
            "autoyast" : {
               "repos" : {
                  "suse-12.0" : {
                     "SLES12-Updates" : {
                        "ask_on_error" : false,
                        "url" : "http://<YOUR-MANAGER>/ks/dist/child/sles12-updates-x86_64/sles12-x86_64/"
                     },
                     "SLE-12-Cloud-Compute5-Pool" : {
                        "ask_on_error" : false,
                        "url" : "http://<YOUR-MANAGER>/ks/dist/child/sle-12-cloud-compute5-pool-x86_64/sles12-x86_64"
                     },
                     "SLE-12-Cloud-Compute5-Updates" : {
                        "ask_on_error" : false,
                        "url" : "http://<YOUR-MANAGER>/ks/dist/child/sle-12-cloud-compute5-updates-x86_64/sles12-x86_64"
                     }
                  },
                  "suse-11.3" : {
                     "SLE11-HAE-SP3-Updates" : {
                        "ask_on_error" : false,
                        "url" : "http://<YOUR-MANAGER>/ks/dist/child/sle11-hae-sp3-updates-x86_64/sles11-sp3-x86_64"
                     },
                     "SLES11-SP3-Updates" : {
                        "ask_on_error" : false,
                        "url" : "http://<YOUR-MANAGER>/ks/dist/child/sles11-sp3-updates-x86_64/sles11-sp3-x86_64/"
                     },
                     "SLE11-HAE-SP3-Pool" : {
                        "ask_on_error" : false,
                        "url" : "http://<YOUR-MANAGER>/ks/dist/child/sle11-hae-sp3-pool-x86_64/sles11-sp3-x86_64"
                     }
                  },
                  "common" : {
                     "SUSE-Cloud-5-Pool" : {
                        "ask_on_error" : false,
                        "url" : "http://<YOUR-MANAGER>/ks/dist/child/suse-cloud-5-pool-x86_64/sles11-sp3-x86_64/"
                     },
                     "SUSE-Cloud-5-Updates" : {
                        "ask_on_error" : false,
                        "url" : "http://<YOUR-MANAGER>/ks/dist/child/suse-cloud-5-updates-x86_64/sles11-sp3-x86_64/"
                     }
                  }
               }
            }
         }
      }
   }
}
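
Before running the installation script it is worth checking that the file is still valid JSON; a missing comma is easy to overlook. Any JSON parser will do, for example the one that ships with Python:

# quick syntax check of the provisioner configuration
python -m json.tool /etc/crowbar/provisioner.json > /dev/null && echo "provisioner.json is valid"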

You’ll have to replace “<YOUR-SERVER>” and “<YOUR-MANAGER>” with your own server names, of course. The URLs for the channels “SLE-12-Cloud-Compute5-Pool” and “SLE-12-Cloud-Compute5-Updates” still haven’t been updated in the official documentation for SUSE Cloud 5 (they still have the status “to be announced”), so I figured them out myself.
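
A quick way to test whether a guessed channel URL really serves repository metadata is to request its repomd.xml. This is a simple curl check of my own, assuming SUSE Manager exposes the channels under /ks/dist/child/ exactly as in the JSON above; replace <YOUR-MANAGER> accordingly:

# each channel URL should answer with HTTP 200 for its repomd.xml
for url in \
    "http://<YOUR-MANAGER>/ks/dist/child/sle-12-cloud-compute5-pool-x86_64/sles12-x86_64" \
    "http://<YOUR-MANAGER>/ks/dist/child/sle-12-cloud-compute5-updates-x86_64/sles12-x86_64"; do
  curl -s -o /dev/null -w "%{http_code} $url\n" "$url/repodata/repomd.xml"
done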

The move from SMT to SUMA happened only recently, so there may be problems with this configuration that I simply haven’t hit yet, but at this point my cloud is working fine: I have no trouble updating my nodes and I don’t receive any error messages. The HAE repositories in particular are an assumption; I’m not sure whether they would actually work in an HA setup. I’ll update this post when I find out. If you have any comments, please let me know.
