There are many ways to create a KVM+QEMU VM remotely. This article explains how to create a VM remotely on your Linux hypervisor.

1- Installation of the libvirt plugin:

First, install the libvirt Python bindings with the following command:

$ pip3 install libvirt-python

Create and edit the file requirements.yml:

collections:
  - name: community.libvirt

Then, run the following command to install the collection:

$ ansible-galaxy collection install -r requirements.yml
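To verify the collection is now visible to Ansible, a quick optional check (assumes ansible-core 2.10 or later):

```shell
# List the installed collections and filter for the libvirt one.
# If the install succeeded, this prints the community.libvirt version.
ansible-galaxy collection list | grep community.libvirt
```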

2- Configuration of libvirt plugin:

Create and edit the file libvirt.yml:

plugin: community.libvirt.libvirt
uri: 'qemu+ssh://<LOGIN>@<HOSTNAME>/system'
ansible_connection: community.libvirt.libvirt
ansible_libvirt_uri: 'qemu+ssh://<LOGIN>@<HOSTNAME>/system'
Name       Value
--------   ------------------------------------------------------------
LOGIN      Your SSH username on the hypervisor
HOSTNAME   The IP (optionally with port) or SSH alias of the hypervisor

The hostname can be an alias defined in ~/.ssh/config:

Host <HOSTNAME>
        Hostname <IP>
        Port <PORT>
        User <LOGIN>
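Before going further, it is worth checking that libvirt is reachable over SSH. A minimal sketch, assuming virsh is installed on your workstation and <HOSTNAME> matches the Host entry above:

```shell
# Should print the hypervisor's hostname if the qemu+ssh URI works end to end
virsh -c 'qemu+ssh://<LOGIN>@<HOSTNAME>/system' hostname
```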

3- Configuration of the ansible client:

Create and edit the file ansible.cfg:

[defaults]
inventory=libvirt.yml
interpreter_python=auto_silent
[inventory]
enable_plugins = community.libvirt.libvirt, auto, host_list, yaml, ini, toml, script

This configuration defines the file libvirt.yml as the default inventory and enables the libvirt plugin.
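You can verify that the inventory plugin is wired up correctly; run this from the directory containing ansible.cfg (it assumes the hypervisor is reachable):

```shell
# Prints the inventory tree built by the libvirt plugin;
# any existing guests on the HV should appear under "all"
ansible-inventory --graph
```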

4- Definition of the default variables for the playbook:

Before writing the Ansible playbook, we have to define some variables required to build our VMs: the number of vCPUs, the amount of RAM, and so on.

Create and edit the file hosts.yml:

all:
  vars:
    # If the SSH port of your hypervisor is different from 22, overwrite
    # the ansible_port variable with the port you have chosen
    ansible_port: <PORT>
    # name of ubuntu release
    ubuntu_release_name: jammy
    # settings for downloading the image of server distribution
    image_name: "{{ ubuntu_release_name }}-server-cloudimg-amd64.img"
    image_url: https://cloud-images.ubuntu.com/{{ ubuntu_release_name }}/current/{{ image_name }}
    image_sha_url: https://cloud-images.ubuntu.com/{{ ubuntu_release_name }}/current/SHA256SUMS
    # where the image will be saved
    libvirt_pool_dir: "/var/lib/libvirt/images"
    # name of my VMS
    vm_name: u22.04-dev
    # settings of the VM to create
    vm_vcpus: 2
    vm_ram_mb: 2048
    # this is the name of the libvirt network to use
    vm_net: default
    # VM password encrypted with ansible-vault
    vm_root_pass: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          61393135613561303138373433656134613734636466326464616636353234373064326661366136
          6530323038383361343232373133336163383862383263310a396139666462376432396537376138
          66623864656638373332373832323138626337663166353161643233373833383363363866303331
          6464326564363439390a653833386432626233333663366561303263306462343262343065343633
          63653461333134303037613066613666383263326637656264333835363434343539
    vm_disk_size: 15G
    vm_user: micmik
    cleanup_tmp: no
    # Path to your public SSH key; it will be added to authorized_keys on the VM
    ssh_key: /home/me/.ssh/id_rsa.pub
  hosts:
    # Replace HOSTNAME with the hostname of your HV
    HOSTNAME:

As you can see, the root password for the VM is encrypted. It was generated with the following command:

# For example, here our root password will be "mysecurepassword", which is not secure at all!
# use pwgen instead :)
# Note: echo -n avoids encrypting a trailing newline along with the password
$ echo -n "mysecurepassword" | ansible-vault encrypt_string
New Vault password: 
Confirm New Vault password:
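Alternatively, ansible-vault can emit the whole variable block for you. A sketch using the --stdin-name option (printf '%s' guarantees no trailing newline is fed into the vault):

```shell
# Prints a ready-to-paste "vm_root_pass: !vault |" block for hosts.yml
printf '%s' 'mysecurepassword' | ansible-vault encrypt_string --stdin-name vm_root_pass
```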

Save the vault password you have chosen. It will be required at each run of the playbook to decrypt the value of the variable vm_root_pass.

OK, now let's start writing our playbook to create our tiny Ubuntu Jammy VM with 2 vCPUs and 2 GB of memory!

5- It’s time to write the ansible playbook:

Create and edit the file virt-create-ubuntu-guest.yml:

- name: install ubuntu jammy guest
  # Replace HOSTNAME with the actual hostname of your HV
  hosts: HOSTNAME
  become: true
  tasks:
    - name: Get VMs list
      community.libvirt.virt:
        command: list_vms
      register: existing_vms
      changed_when: false

    - name: Create VM if not exists
      block:
        - name: Download base image
          ansible.builtin.get_url:
            url: "{{ image_url }}"
            dest: "/tmp/{{ image_name }}"
            checksum: "sha256:{{ image_sha_url }}"
            mode: 0440

        - name: Copy base image to libvirt directory
          ansible.builtin.copy:
            dest: "{{ libvirt_pool_dir }}/{{ vm_name }}.qcow2"
            src: "/tmp/{{ image_name }}"
            force: false
            remote_src: true
            mode: 0660
          become: true
          register: copy_results

        - name: Resize qcow2 file
          ansible.builtin.command: |
            qemu-img resize {{ libvirt_pool_dir }}/{{ vm_name }}.qcow2 {{ vm_disk_size }}
          become: true
          register: resize_results

        - name: Copy netplan.yaml to tmp
          ansible.builtin.copy:
            dest: /tmp/netcfg.yaml
            src: files/01-netcfg.yaml
            mode: 0440
          become: true

        # Customize the VM by choosing the french layout for the keyboard
        # and creating the user micmik and some other stuffs like enabling SSH
        - name: Configure the image
          ansible.builtin.command: |
            virt-customize -a {{ libvirt_pool_dir }}/{{ vm_name }}.qcow2 \
            --update \
            --install qemu-guest-agent,molly-guard,openssh-server \
            --hostname {{ vm_name }} \
            --root-password password:{{ vm_root_pass }} \
            --ssh-inject 'root:file:{{ ssh_key }}' \
            --edit '/etc/default/keyboard: s/^XKBLAYOUT=.*/XKBLAYOUT="fr"/' \
            --copy-in /tmp/netcfg.yaml:/etc/netplan \
            --run-command "resize2fs /dev/vda1" \
            --run-command "sed -i 's/GRUB_CMDLINE_LINUX/GRUB_CMDLINE_LINUX=\"net.ifnames=0 biosdevname=0 console=tty0 console=ttyS0,115200n8\"/g' /etc/default/grub" \
            --run-command "update-grub" \
            --run-command "systemctl mask apt-daily.service apt-daily-upgrade.service" \
            --run-command 'useradd -m -p "" {{ vm_user }} ; chage -d 0 {{ vm_user }}' \
            --run-command 'systemctl enable serial-getty@ttyS0.service ; systemctl start serial-getty@ttyS0.service' \
            --firstboot-command "netplan generate && netplan apply" \
            --firstboot-command "dpkg-reconfigure openssh-server" \
            --firstboot-command "sync" \
            --timezone UTC
          when: copy_results is changed

        - name: Define vm
          community.libvirt.virt:
            command: define
            xml: "{{ lookup('template', 'vm-template.xml.j2') }}"

      when: "vm_name not in existing_vms.list_vms"

    - name: Ensure VM is started
      community.libvirt.virt:
        name: "{{ vm_name }}"
        state: running
      register: vm_start_results
      until: "vm_start_results is success"
      retries: 15
      delay: 2

    - name: Ensure temporary files are deleted
      ansible.builtin.file:
        path: "{{ item }}"
        state: absent
      when: cleanup_tmp | bool
      with_items:
        - "/tmp/{{ image_name }}"
        - "/tmp/netcfg.yaml"
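Optionally, you can append a couple of tasks to the playbook to wait for the guest to obtain an address and display it. This is a sketch, not part of the original playbook; it assumes the VM is attached to a libvirt network that serves DHCP leases and that virsh is available on the HV:

```yaml
    - name: Wait for the VM to obtain an IP address
      ansible.builtin.command: virsh domifaddr {{ vm_name }}
      register: vm_ifaddr
      until: "'ipv4' in vm_ifaddr.stdout"
      retries: 30
      delay: 5
      changed_when: false

    - name: Show the VM network addresses
      ansible.builtin.debug:
        var: vm_ifaddr.stdout_lines
```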

Our playbook requires a Jinja template for generating the libvirt configuration of our new VM, plus a static file holding the netplan configuration of the network interfaces.

First, let’s make the new directory templates. In this directory, create and edit the file vm-template.xml.j2. You don’t have to change anything in this file:

<domain type='kvm'>
  <name>{{ vm_name }}</name>
  <memory unit='MiB'>{{ vm_ram_mb }}</memory>
  <vcpu placement='static'>{{ vm_vcpus }}</vcpu>
  <os>
    <type arch='x86_64' machine='pc-q35-5.2'>hvm</type>
    <boot dev='hd'/>
  </os>
  <cpu mode='host-model' check='none'/>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='{{ libvirt_pool_dir }}/{{ vm_name }}.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </disk>
    <interface type='network'>
      <source network='{{ vm_net }}'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='spice' autoport='yes'>
      <listen type='address'/>
      <image compression='off'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </memballoon>
    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </rng>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
  </devices>
</domain>

As you can guess, this template specifies the hardware of our new VM: the number of vCPUs, the amount of memory, the disk, the network interface, and so on.

To finish, make the new directory files. In this directory, create and edit the file 01-netcfg.yaml. You don’t have to change anything in this file:

network:
  version: 2
  renderer: networkd

  ethernets:
    enp1s0:
      dhcp4: true
      dhcp6: true
      optional: true

This file tells netplan to configure the network interface enp1s0. The IPv4 and IPv6 addresses will be assigned dynamically via DHCP.
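If you prefer a fixed address over DHCP, netplan supports static configuration as well. A hedged variant of 01-netcfg.yaml (the address, gateway, and nameserver below are placeholders matching libvirt's default 192.168.122.0/24 network):

```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    enp1s0:
      dhcp4: false
      addresses: [192.168.122.50/24]
      routes:
        - to: default
          via: 192.168.122.1
      nameservers:
        addresses: [192.168.122.1]
```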

6- Run the playbook:

Now our playbook is ready to be applied !!

Run the following command to apply the playbook:

$ ansible-playbook -i hosts.yml --ask-vault-pass --ask-become-pass virt-create-ubuntu-guest.yml

The option --ask-vault-pass will prompt you for the password you used to encrypt the root password.

The option --ask-become-pass will prompt you for the sudo password needed to run commands on the HV with root privileges.
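Once the playbook finishes, you can check the result and log into the guest. A sketch reusing the qemu+ssh URI from earlier (u22.04-dev is the vm_name defined in hosts.yml; press Ctrl+] to leave the console):

```shell
# The new VM should show up as "running"
virsh -c 'qemu+ssh://<LOGIN>@<HOSTNAME>/system' list --all
# Attach to its serial console (enabled by the serial-getty run-command above)
virsh -c 'qemu+ssh://<LOGIN>@<HOSTNAME>/system' console u22.04-dev
```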