Automatic Virtual Machine deployment using the RHEV API and Python

Author: Daniele Mazzocchio
Last update: Sep 8, 2014

The purpose of this document is to give a quick overview of the Python RHEV API, by creating a basic script for deploying new RHEL virtual machines; such a script could be used for faster VM creation (by bypassing the web interface completely) or for automating the deployment process.

So let's talk prerequisites. The first choice is whether to create new virtual machines from templates or with kickstart (i.e. network installation); though templates may look like the most natural choice, there are a number of reasons why I prefer the kickstart installation method.

Though we'll be installing over the network, another item on my wishlist was avoiding PXE/DHCP for network boot. Besides the potential security and reliability issues of both protocols, DHCP can increase the administrative overhead on segmented networks (where each segment may require its own DHCP server, or a DHCP relay agent) and on multi-OS networks, not to mention the hassle of maintaining static DHCP reservations.

The great thing is that RHEV allows you to benefit from the advantages of PXE/DHCP, without any of the drawbacks, by passing the kernel (vmlinuz), the initial ramdisk image (initrd.img) and the boot parameters directly to the virtual machine. The required files can be found in the /images/pxeboot directory of the install media (boot.iso), which can be downloaded from the Red Hat website; the files must be uploaded to the ISO domain with the rhevm-iso-uploader(8) command:

# rhevm-iso-uploader --iso-domain=<ISODomain> upload initrd.img vmlinuz
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort): <password>
Uploading, please wait...
INFO: Start uploading initrd.img
INFO: initrd.img uploaded successfully
INFO: Start uploading vmlinuz
INFO: vmlinuz uploaded successfully

These files will not appear when listing the contents of the ISO domain (unless you add a ".iso" extension to their names), but they will be available nonetheless.
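
If you want to double-check that the engine actually sees the uploaded files, you can query the ISO domain's files collection through the SDK (using the api object we're about to create below); a minimal sketch, assuming an ISO domain named "my_iso_domain":

# List the files the engine knows about on the ISO domain;
# "my_iso_domain" is a placeholder for your actual ISO domain name.
for iso_file in api.storagedomains.get("my_iso_domain").files.list():
    print iso_file.get_name()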

And here we come to the RHEV API (make sure you have installed the rhevm-sdk package); the first step in a script is importing the necessary modules and creating an instance of the API class, which is the entry point for accessing the entire RHEV configuration:

#!/usr/bin/env python

from ovirtsdk.api import API
from ovirtsdk.xml import params

URL      = "https://rhevm.my.domain/api"
USERNAME = "admin@internal"
PASSWORD = "password"
CA_FILE  = "/etc/pki/ovirt-engine/ca.pem"

api = API(url=URL, username=USERNAME, password=PASSWORD, ca_file=CA_FILE)

The certificate (CA_FILE) can be downloaded from the RHEV-manager at the URL https://<rhevm-server>/ca.crt.
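
For example, with curl (the -k flag is needed because, at this point, the CA certificate isn't trusted yet), saving it to the path used by CA_FILE above:

# curl -k -o /etc/pki/ovirt-engine/ca.pem https://<rhevm-server>/ca.crt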

So we're ready to create our first virtual machine; we're going to set it up with two dual-core virtual CPUs (two sockets with two cores each), 2GB of RAM and a SPICE console.

VM_NAME = "my_vm"
CLUSTER_NAME = "my_cluster"
SOCKETS = 2
CORES = 2
GB = 1024**3

cpu_params = params.CPU(topology=params.CpuTopology(sockets=SOCKETS,
                                                    cores=CORES))
api.vms.add(params.VM(name=VM_NAME,
                      cluster=api.clusters.get(CLUSTER_NAME),
                      template=api.templates.get("Blank"),
                      cpu=cpu_params,
                      memory=2*GB,
                      display=params.Display(type_="SPICE")))

Before proceeding to the next steps, we have to wait for the virtual machine to reach the down state; a simple polling function like the following will do:

import time

def wait_vm_state(vm_name, state):
    while api.vms.get(vm_name).status.state != state:
        time.sleep(1)

wait_vm_state(VM_NAME, "down")

A similar function can be used to monitor the state of virtual disks:

def wait_disk_state(disk_name, state):
    while api.disks.get(disk_name).status.state != state:
        time.sleep(1)
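
Both helpers will poll forever if something goes wrong; in a real deployment script you may want them to give up after a while. A possible variant (a sketch of my own, not part of the original functions), with an arbitrary ten-minute timeout:

def wait_state(getter, name, state, timeout=600):
    """Generic poller: 'getter' is e.g. api.vms.get or api.disks.get."""
    deadline = time.time() + timeout
    while getter(name).status.state != state:
        if time.time() > deadline:
            raise RuntimeError("Timed out waiting for {0} to reach "
                               "state '{1}'".format(name, state))
        time.sleep(1)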

Once the machine has been created and is in the down state, we can add one or more disks and network cards; in this example, we will add a 20GB thin-provisioned (copy-on-write) disk and one NIC:

STG_DOMAIN = "my_stg_domain"
DSK_NAME = "disk1"
NIC_NAME = "nic1"
NET_NAME = "my_network"

vm = api.vms.get(VM_NAME)
stg_domain = api.storagedomains.get(STG_DOMAIN)
stg_parms = params.StorageDomains(storage_domain=[stg_domain])
# Boot disk
vm.disks.add(params.Disk(name=DSK_NAME,
                         storage_domains=stg_parms,
                         size=20*GB,
                         status=None,
                         interface='virtio',
                         format='cow',
                         sparse=True,   # a thin-provisioned (COW) disk must be sparse
                         bootable=True))
wait_disk_state(DSK_NAME, "ok")

# Boot NIC
vm.nics.add(params.NIC(name=NIC_NAME,
                       network=params.Network(name=NET_NAME),
                       interface='virtio'))
boot_if = vm.nics.get(NIC_NAME).mac.address

# Add more disks and NICs to your liking...

As you can see, I have stored the MAC address of the boot interface in the boot_if variable, so that it can later be passed to the kernel to identify the boot NIC (the ksdevice boot parameter); of course, this only matters if the virtual machine has multiple NICs configured.

Now we can set the boot parameters of the VM (kernel, initial ramdisk and command line); this could have been done at VM-creation time, if we didn't need to save the MAC address for the ksdevice parameter first (see below for how to use the boot parameters to fully customize the post-installation).

boot_params = {"ks": "http://<satellite-server>/ks",
               "ksdevice": boot_if,
               "dns": "1.2.3.4,1.2.3.5",
               "ip": "10.9.8.7",
               "netmask": "255.255.255.0",
               "gateway": "10.9.8.1",
               "hostname": "{0}.my.domain".format(VM_NAME)}
cmdline = " ".join(map("{0[0]}={0[1]}".format, boot_params.iteritems()))
vm.set_os(params.OperatingSystem(kernel="iso://vmlinuz",
                                 initrd="iso://initrd.img",
                                 cmdline=cmdline))
vm.update()
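
For reference, the resulting command line will look something like this (parameter order may vary, since it depends on the dictionary's iteration order, and the MAC address here is made up for the example):

ks=http://<satellite-server>/ks ksdevice=00:1a:4a:01:23:45 dns=1.2.3.4,1.2.3.5 ip=10.9.8.7 netmask=255.255.255.0 gateway=10.9.8.1 hostname=my_vm.my.domain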

To start the installation, we only have to power on the machine:

vm.start()

Finally, to prevent the virtual machine from network-booting into the installer again after installation, we need to remove the boot parameters from its configuration. However, if the kickstart file contains the reboot directive, the machine won't re-read its configuration before rebooting, and will start the installation all over again; therefore, I prefer using the poweroff directive in the kickstart file and letting the script power the machine back on, via the API, as soon as it reaches the down state.

# Wait for machine to power off
wait_vm_state(VM_NAME, "down")
# Remove boot parameters
vm.set_os(params.OperatingSystem(kernel="", initrd="", cmdline=""))
vm.update()
# Start the VM after the installation
vm.start()
api.disconnect()
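
One final defensive touch, which I have left out of the snippets above for brevity: wrapping the whole deployment in a try/finally block guarantees that the API session gets closed even if one of the steps fails. A minimal sketch:

try:
    # ... VM creation, disks, NICs, boot parameters, start/stop logic ...
    pass
finally:
    api.disconnect()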

Just a final note on boot parameters: one trick I like to use for customizing the OS post-installation is passing arbitrary parameters on the boot line, so that the post-install script(s) can take different actions based on them. For example, say you add the parameter "do_stuff=y" to the boot line: the kernel will simply ignore it, since it means nothing to it, but in the post-install section of the kickstart file you can parse the boot line and perform different actions based on these "special" parameters, e.g.:

%post --log=/root/post-ks.log --interpreter=/bin/bash
    # Turn each name=value pair from the kernel command line into a shell variable
    eval $(cat /proc/cmdline)
    if [[ "$do_stuff" = "y" ]]; then
        :   # ... do stuff here ...
    fi
%end
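
On the Python side, passing such a parameter is just a matter of adding one more entry to the boot_params dictionary defined earlier:

boot_params["do_stuff"] = "y"   # picked up by the %post script above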
