public inbox for kdevops@lists.linux.dev
From: Jeff Layton <jlayton@kernel.org>
To: Luis Chamberlain <mcgrof@kernel.org>
Cc: kdevops@lists.linux.dev, Jeff Layton <jlayton@kernel.org>
Subject: [PATCH kdevops v2 4/4] guestfs: add a new local guest management variant
Date: Fri, 08 Dec 2023 10:01:35 -0500	[thread overview]
Message-ID: <20231208-guestfs-v2-4-082c11a3a0af@kernel.org> (raw)
In-Reply-To: <20231208-guestfs-v2-0-082c11a3a0af@kernel.org>

The future of Vagrant is somewhat uncertain, so we need to replace it.

This patch adds a new way to manage local guests, using libguestfs-tools:

    https://libguestfs.org/

At a high level, this new option mostly affects the "make" and "make
bringup" stages. Instead of using Vagrant, it will:

- use gen_nodes to build a nodelist and a libvirt xml file for each
  guest (instead of building a Vagrantfile)
- build a base image using virt-builder for a given distro (see
  virt-builder -l for the list of options)
- clone that image (using reflink) to a base root image for each guest
- create extra disks for each guest
- define the new guests using the xml file and start them
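The steps above can be sketched roughly as the following shell flow. This is
a hedged illustration only: the distro string, pool path, and guest names are
placeholders rather than the exact kdevops defaults, and commands are recorded
and echoed instead of executed so the flow is visible without touching libvirt:

```shell
#!/bin/bash
# Dry-run sketch of the guestfs bringup flow. Nothing is executed;
# each command line is appended to CMDS and printed.
CMDS=""
run() { CMDS="${CMDS}$*
"; echo "$*"; }

OS_VERSION="debian-12"                     # see `virt-builder -l` for options
STORAGEDIR="/var/lib/kdevops/guestfs"      # placeholder storage pool path
BASE_IMAGE="$STORAGEDIR/base_images/$OS_VERSION.qcow2"

# 1. Build a base image for the chosen distro
run virt-builder "$OS_VERSION" -o "$BASE_IMAGE" --format qcow2

for name in kdevops-guest0 kdevops-guest1; do
    ROOTIMG="$STORAGEDIR/$name/root.qcow2"
    # 2. Clone the base image; --reflink=auto is cheap on btrfs/XFS
    run cp --reflink=auto "$BASE_IMAGE" "$ROOTIMG"
    # 3. Create an extra data disk for the guest
    run qemu-img create -f qcow2 "$STORAGEDIR/$name/extra0.qcow2" 100G
    # 4. Define the guest from its generated XML and start it
    run virsh define "$STORAGEDIR/$name/$name.xml"
    run virsh start "$name"
done
```

The real logic lives in scripts/bringup_guestfs.sh below, which additionally
injects an ssh key with virt-sysprep and aborts if a domain is already defined.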

While no one really loves XML, I think it turns out to be a lot simpler
than trying to template out a Ruby script (like we do with the
Vagrantfile).

Not all storage and hardware configurations are yet supported. I'm
hoping others that are using those configurations will help flesh that
out in later patches.

Signed-off-by: Jeff Layton <jlayton@kernel.org>
---
 Makefile                                           |   4 +
 kconfigs/Kconfig.bringup                           |   6 +
 kconfigs/workflows/Kconfig.data_partition          |   2 +
 playbooks/roles/gen_nodes/defaults/main.yml        |   1 +
 playbooks/roles/gen_nodes/tasks/main.yml           |  30 +++
 .../roles/gen_nodes/templates/guestfs_nodes.j2     |   3 +
 .../roles/gen_nodes/templates/guestfs_q35.j2.xml   | 228 +++++++++++++++++++++
 scripts/bringup.Makefile                           |   1 +
 scripts/bringup_guestfs.sh                         |  92 +++++++++
 scripts/destroy_guestfs.sh                         |  29 +++
 scripts/guestfs.Makefile                           |  88 ++++++++
 scripts/update_ssh_config_guestfs.py               |  94 +++++++++
 vagrant/Kconfig                                    |  51 +++--
 13 files changed, 613 insertions(+), 16 deletions(-)

diff --git a/Makefile b/Makefile
index 895a0a67c705..ef30c3af80d7 100644
--- a/Makefile
+++ b/Makefile
@@ -110,6 +110,10 @@ ifeq (y,$(CONFIG_VAGRANT))
 include scripts/vagrant.Makefile
 endif
 
+ifeq (y,$(CONFIG_GUESTFS))
+include scripts/guestfs.Makefile
+endif
+
 ifeq (y,$(CONFIG_WORKFLOWS))
 include workflows/Makefile
 endif # CONFIG_WORKFLOWS
diff --git a/kconfigs/Kconfig.bringup b/kconfigs/Kconfig.bringup
index 1eaa2c99dd10..cc3d3eb573ab 100644
--- a/kconfigs/Kconfig.bringup
+++ b/kconfigs/Kconfig.bringup
@@ -15,6 +15,12 @@ config VAGRANT
 
 	    make deps
 
+config GUESTFS
+	bool "Use guestfs-tools for local virtualization via KVM and libvirt (EXPERIMENTAL)"
+	help
+	  This option will use libguestfs utilities instead of Vagrant to build
+	  guest images and spin them up using libvirt with KVM.
+
 config TERRAFORM
 	bool "Terraform for cloud environments"
 	select EXTRA_STORAGE_SUPPORTS_512
diff --git a/kconfigs/workflows/Kconfig.data_partition b/kconfigs/workflows/Kconfig.data_partition
index 8dd16ae78698..3a6b0ac6251e 100644
--- a/kconfigs/workflows/Kconfig.data_partition
+++ b/kconfigs/workflows/Kconfig.data_partition
@@ -71,6 +71,7 @@ if !WORKFLOW_INFER_USER_AND_GROUP
 
 config WORKFLOW_DATA_USER
 	string "The username to use to chown on the target data workflow directory"
+	default "kdevops" if GUESTFS
 	default "vagrant" if VAGRANT
 	default TERRAFORM_SSH_CONFIG_USER if TERRAFORM
 	help
@@ -78,6 +79,7 @@ config WORKFLOW_DATA_USER
 
 config WORKFLOW_DATA_GROUP
 	string "The group to use to chown on the target data workflow directory"
+	default "kdevops" if GUESTFS
 	default "vagrant" if VAGRANT
 	default TERRAFORM_SSH_CONFIG_USER if TERRAFORM_AWS_AMI_DEBIAN || TERRAFORM_AZURE_IMAGE_PUBLISHER_DEBIAN
 	default "users" if !TERRAFORM_AWS_AMI_DEBIAN && !TERRAFORM_AZURE_IMAGE_PUBLISHER_DEBIAN
diff --git a/playbooks/roles/gen_nodes/defaults/main.yml b/playbooks/roles/gen_nodes/defaults/main.yml
index e1e675cb38d0..6ed914f479c7 100644
--- a/playbooks/roles/gen_nodes/defaults/main.yml
+++ b/playbooks/roles/gen_nodes/defaults/main.yml
@@ -1,5 +1,6 @@
 # SPDX-License-Identifier GPL-2.0+
 ---
+kdevops_enable_guestfs: False
 kdevops_enable_terraform: False
 kdevops_enable_vagrant: False
 kdevops_vagrant: '/dev/null'
diff --git a/playbooks/roles/gen_nodes/tasks/main.yml b/playbooks/roles/gen_nodes/tasks/main.yml
index 7beea670f0e3..0271fce34900 100644
--- a/playbooks/roles/gen_nodes/tasks/main.yml
+++ b/playbooks/roles/gen_nodes/tasks/main.yml
@@ -18,6 +18,12 @@
   command: "id -g -n"
   register: my_group
 
+- name: Create guestfs directory
+  ansible.builtin.file:
+    path: "{{ guestfs_path }}"
+    state: directory
+  when: kdevops_enable_guestfs
+
 - name: Verify Ansible nodes template file exists {{ kdevops_nodes_template_full_path }}
   stat:
     path: "{{ kdevops_nodes_template_full_path }}"
@@ -350,3 +356,27 @@
     - kdevops_enable_vagrant|bool
     - vagrant_template.stat.exists
     - not vagrant_dest.stat.exists
+
+- name: Import list of guest nodes
+  include_vars: "{{ topdir_path }}/{{ kdevops_nodes }}"
+  ignore_errors: yes
+  when:
+    - kdevops_enable_guestfs
+
+- name: Create local directories for each of the guests
+  ansible.builtin.file:
+    path: "{{ guestfs_path }}/{{ item.name }}"
+    state: directory
+  with_items: "{{ guestfs_nodes }}"
+  when: kdevops_enable_guestfs
+
+- name: Generate XML files for the libvirt guests
+  vars:
+    cur_hostname: "{{ item.name }}"
+  template:
+    src: "guestfs_{{ libvirt_machine_type }}.j2.xml"
+    dest: "{{ topdir_path }}/guestfs/{{ cur_hostname }}/{{ cur_hostname }}.xml"
+    force: yes
+  with_items: "{{ guestfs_nodes }}"
+  when:
+    - kdevops_enable_guestfs
diff --git a/playbooks/roles/gen_nodes/templates/guestfs_nodes.j2 b/playbooks/roles/gen_nodes/templates/guestfs_nodes.j2
new file mode 100644
index 000000000000..5e87df3bafff
--- /dev/null
+++ b/playbooks/roles/gen_nodes/templates/guestfs_nodes.j2
@@ -0,0 +1,3 @@
+---
+guestfs_nodes:
+{% include './templates/hosts.j2' %}
diff --git a/playbooks/roles/gen_nodes/templates/guestfs_q35.j2.xml b/playbooks/roles/gen_nodes/templates/guestfs_q35.j2.xml
new file mode 100644
index 000000000000..527b44b90e89
--- /dev/null
+++ b/playbooks/roles/gen_nodes/templates/guestfs_q35.j2.xml
@@ -0,0 +1,228 @@
+<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
+  <name>{{ cur_hostname }}</name>
+  <memory unit='MiB'>{{ vagrant_mem_mb }}</memory>
+  <currentMemory unit='MiB'>{{ vagrant_mem_mb }}</currentMemory>
+  <vcpu placement='static'>{{ vagrant_vcpus_count }}</vcpu>
+  <os>
+    <type arch='x86_64' machine='pc-q35-8.1'>hvm</type>
+    <boot dev='hd'/>
+  </os>
+  <features>
+    <acpi/>
+    <apic/>
+  </features>
+  <cpu mode='host-passthrough' check='none' migratable='on'/>
+  <clock offset='utc'>
+    <timer name='rtc' tickpolicy='catchup'/>
+    <timer name='pit' tickpolicy='delay'/>
+    <timer name='hpet' present='no'/>
+  </clock>
+  <on_poweroff>destroy</on_poweroff>
+  <on_reboot>restart</on_reboot>
+  <on_crash>destroy</on_crash>
+  <pm>
+    <suspend-to-mem enabled='no'/>
+    <suspend-to-disk enabled='no'/>
+  </pm>
+  <devices>
+    <emulator>/usr/bin/qemu-system-x86_64</emulator>
+    <disk type='file' device='disk'>
+      <driver name='qemu' type='qcow2'/>
+      <source file='{{ kdevops_storage_pool_path }}/guestfs/{{ cur_hostname }}/root.qcow2' index='1'/>
+      <backingStore/>
+      <target dev='vda' bus='virtio'/>
+      <alias name='virtio-disk0'/>
+      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
+    </disk>
+    <controller type='usb' index='0' model='qemu-xhci' ports='15'>
+      <alias name='usb'/>
+      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
+    </controller>
+    <controller type='pci' index='0' model='pcie-root'>
+      <alias name='pcie.0'/>
+    </controller>
+    <controller type='pci' index='1' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='1' port='0x8'/>
+      <alias name='pci.1'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
+    </controller>
+    <controller type='pci' index='2' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='2' port='0x9'/>
+      <alias name='pci.2'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
+    </controller>
+    <controller type='pci' index='3' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='3' port='0xa'/>
+      <alias name='pci.3'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
+    </controller>
+    <controller type='pci' index='4' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='4' port='0xb'/>
+      <alias name='pci.4'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
+    </controller>
+    <controller type='pci' index='5' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='5' port='0xc'/>
+      <alias name='pci.5'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
+    </controller>
+    <controller type='pci' index='6' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='6' port='0xd'/>
+      <alias name='pci.6'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
+    </controller>
+    <controller type='pci' index='7' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='7' port='0xe'/>
+      <alias name='pci.7'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
+    </controller>
+    <controller type='pci' index='8' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='8' port='0xf'/>
+      <alias name='pci.8'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x7'/>
+    </controller>
+    <controller type='pci' index='9' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='9' port='0x10'/>
+      <alias name='pci.9'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
+    </controller>
+    <controller type='pci' index='10' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='10' port='0x11'/>
+      <alias name='pci.10'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
+    </controller>
+    <controller type='pci' index='11' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='11' port='0x12'/>
+      <alias name='pci.11'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
+    </controller>
+    <controller type='pci' index='12' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='12' port='0x13'/>
+      <alias name='pci.12'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
+    </controller>
+    <controller type='pci' index='13' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='13' port='0x14'/>
+      <alias name='pci.13'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
+    </controller>
+    <controller type='pci' index='14' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='14' port='0x15'/>
+      <alias name='pci.14'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
+    </controller>
+    <controller type='sata' index='0'>
+      <alias name='ide'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
+    </controller>
+    <controller type='virtio-serial' index='0'>
+      <alias name='virtio-serial0'/>
+      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
+    </controller>
+    <interface type='bridge'>
+      <source bridge='virbr0'/>
+      <target dev='tap0'/>
+      <model type='virtio'/>
+      <alias name='net0'/>
+      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
+    </interface>
+    <serial type='pty'>
+      <source path='/dev/pts/2'/>
+      <target type='isa-serial' port='0'>
+        <model name='isa-serial'/>
+      </target>
+      <alias name='serial0'/>
+    </serial>
+    <console type='pty' tty='/dev/pts/2'>
+      <source path='/dev/pts/2'/>
+      <target type='serial' port='0'/>
+      <alias name='serial0'/>
+    </console>
+    <channel type='unix'>
+      <source mode='bind'/>
+      <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
+      <alias name='channel0'/>
+      <address type='virtio-serial' controller='0' bus='0' port='1'/>
+    </channel>
+    <input type='mouse' bus='ps2'>
+      <alias name='input0'/>
+    </input>
+    <input type='keyboard' bus='ps2'>
+      <alias name='input1'/>
+    </input>
+    <audio id='1' type='none'/>
+    <watchdog model='itco' action='reset'>
+      <alias name='watchdog0'/>
+    </watchdog>
+    <memballoon model='virtio'>
+      <alias name='balloon0'/>
+      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
+    </memballoon>
+    <rng model='virtio'>
+      <backend model='random'>/dev/urandom</backend>
+      <alias name='rng0'/>
+      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
+    </rng>
+  </devices>
+  <seclabel type='dynamic' model='selinux' relabel='yes'>
+    <label>unconfined_u:unconfined_r:svirt_t:s0:c279,c814</label>
+    <imagelabel>unconfined_u:object_r:svirt_image_t:s0:c279,c814</imagelabel>
+  </seclabel>
+  <qemu:commandline>
+    <qemu:arg value='-global'/>
+    <qemu:arg value='ICH9-LPC.disable_s3=0'/>
+    <qemu:arg value='-global'/>
+    <qemu:arg value='ICH9-LPC.disable_s4=0'/>
+    <qemu:arg value='-device'/>
+    <qemu:arg value='pxb-pcie,id=pcie.1,bus_nr=32,bus=pcie.0,addr=0x8'/>
+{% if libvirt_extra_storage_drive_ide %}
+{% for n in range(0,4) %}
+    <qemu:arg value='-drive'/>
+    <qemu:arg value='file={{ kdevops_storage_pool_path }}/guestfs/{{ cur_hostname }}/extra{{ n }}.{{ vagrant_extra_drive_format }},format={{ vagrant_extra_drive_format }},aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},if=ide,serial=kdevops{{ n }}'/>
+{% endfor %}
+{% elif libvirt_extra_storage_drive_virtio %}
+{% for n in range(0,4) %}
+    <qemu:arg value='-device'/>
+    <qemu:arg value='pcie-root-port,id=pcie-port-for-virtio-{{ n }},multifunction=on,bus=pcie.1,addr=0x{{ n }},chassis=5{{ n }}'/>
+    <qemu:arg value="-object"/>
+    <qemu:arg value="iothread,id=kdevops-virtio-iothread-{{ n }}"/>
+    <qemu:arg value="-drive"/>
+    <qemu:arg value="file={{ kdevops_storage_pool_path }}/guestfs/{{ cur_hostname }}/extra{{ n }}.{{ vagrant_extra_drive_format }},format={{ vagrant_extra_drive_format }},if=none,aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},id=drv{{ n }}"/>
+    <qemu:arg value="-device"/>
+    <qemu:arg value="virtio-blk-pci,scsi=off,drive=drv{{ n }},id=virtio-drv{{ n }},serial=kdevops{{ n }},bus=pcie-port-for-virtio-{{ n }},addr=0x0,iothread=kdevops-virtio-iothread-{{ n }},logical_block_size={{ libvirt_extra_storage_virtio_logical_block_size }},physical_block_size={{ libvirt_extra_storage_virtio_physical_block_size }}"/>
+{% endfor %}
+{% elif libvirt_extra_storage_drive_nvme  %}
+{% for n in range(0,4) %}
+    <qemu:arg value='-device'/>
+    <qemu:arg value='pcie-root-port,id=pcie-port-for-nvme-{{ n }},multifunction=on,bus=pcie.1,addr=0x{{ n }},chassis=5{{ n }}'/>
+    <qemu:arg value='-drive'/>
+    <qemu:arg value='file={{ kdevops_storage_pool_path }}/guestfs/{{ cur_hostname }}/extra{{ n }}.{{ vagrant_extra_drive_format }},format={{ vagrant_extra_drive_format }},if=none,id=drv{{ n }}'/>
+    <qemu:arg value='-device'/>
+    <qemu:arg value='nvme,id=nvme{{ n }},serial=kdevops{{ n }},bus=pcie-port-for-nvme-{{ n }},addr=0x0'/>
+    <qemu:arg value='-device'/>
+    <qemu:arg value='nvme-ns,drive=drv{{ n }},bus=nvme{{ n }},nsid=1,logical_block_size=512,physical_block_size=512'/>
+{% endfor %}
+{% endif %}
+{% if bootlinux_9p %}
+    <qemu:arg value='-device'/>
+    <qemu:arg value='{{ bootlinux_9p_driver }},fsdev={{ bootlinux_9p_fsdev }},mount_tag={{ bootlinux_9p_mount_tag }},bus=pcie.0,addr=0x10'/>
+    <qemu:arg value='-fsdev'/>
+    <qemu:arg value='local,id={{ bootlinux_9p_fsdev }},path={{ bootlinux_9p_host_path }},security_model={{ bootlinux_9p_security_model }}'/>
+{% endif %}
+  </qemu:commandline>
+</domain>
+
diff --git a/scripts/bringup.Makefile b/scripts/bringup.Makefile
index 2219cb3f4342..d7a07f0becb7 100644
--- a/scripts/bringup.Makefile
+++ b/scripts/bringup.Makefile
@@ -33,6 +33,7 @@ bringup-help-menu:
 	@echo "Bringup targets:"
 	@echo "bringup            - Brings up target hosts"
 	@echo "destroy            - Destroy all target hosts"
+	@echo "cleancache	  - Remove all cached images"
 	@echo ""
 
 HELP_TARGETS+=bringup-help-menu
diff --git a/scripts/bringup_guestfs.sh b/scripts/bringup_guestfs.sh
new file mode 100755
index 000000000000..2f2406dc054d
--- /dev/null
+++ b/scripts/bringup_guestfs.sh
@@ -0,0 +1,92 @@
+#!/bin/bash
+# SPDX-License-Identifier: copyleft-next-0.3.1
+
+[ -z "${TOPDIR}" ] && TOPDIR='.'
+source ${TOPDIR}/.config
+source ${TOPDIR}/scripts/lib.sh
+
+#
+# We use the NVMe setting for virtio too (go figure), but IDE
+# requires qcow2
+#
+IMG_FMT="qcow2"
+if [ "${CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_IDE}" != "y" -a \
+     "${CONFIG_LIBVIRT_NVME_DRIVE_FORMAT_RAW}" = "y" ]; then
+	IMG_FMT="raw"
+fi
+STORAGEDIR="${CONFIG_KDEVOPS_STORAGE_POOL_PATH}/kdevops/guestfs"
+GUESTFSDIR="${TOPDIR}/guestfs"
+OS_VERSION=${CONFIG_VIRT_BUILDER_OS_VERSION}
+BASE_IMAGE_DIR="${STORAGEDIR}/base_images"
+BASE_IMAGE="${BASE_IMAGE_DIR}/${OS_VERSION}.qcow2"
+mkdir -p $STORAGEDIR
+mkdir -p $BASE_IMAGE_DIR
+
+cmdfile=$(mktemp)
+
+if [ ! -f $BASE_IMAGE ]; then
+
+# basic pre-install customization
+	cat <<_EOT >>$cmdfile
+install sudo,qemu-guest-agent
+run-command useradd -m kdevops
+append-line /etc/sudoers.d/kdevops:kdevops   ALL=(ALL)       NOPASSWD: ALL
+_EOT
+
+# Ugh, debian has to be told to bring up the network and regenerate ssh keys
+# Hope we get that interface name right!
+	if echo $OS_VERSION | grep -q '^debian'; then
+		cat <<_EOT >>$cmdfile
+append-line /etc/network/interfaces.d/enp1s0:auto enp1s0
+append-line /etc/network/interfaces.d/enp1s0:allow-hotplug enp1s0
+append-line /etc/network/interfaces.d/enp1s0:iface enp1s0 inet dhcp
+firstboot-command dpkg-reconfigure openssh-server
+_EOT
+	fi
+
+	#
+	# Note that we always use qcow2 for the base image.
+	#
+	echo "Generating new base image for ${OS_VERSION}"
+	virt-builder ${OS_VERSION} -o $BASE_IMAGE --format qcow2 --commands-from-file $cmdfile
+fi
+
+# FIXME: is there a yaml equivalent of jq?
+grep -e '^  - name: ' ${TOPDIR}/guestfs/kdevops_nodes.yaml | sed 's/^  - name: //' | while read name
+do
+	#
+	# If the guest is already defined, then just stop what we're doing
+	# and plead to the developer to clean things up.
+	#
+	virsh domstate $name 1>/dev/null 2>&1
+	if [ $? -eq 0 ]; then
+		echo "Domain $name is already defined. Aborting!"
+		exit 1
+	fi
+
+	SSH_KEY_DIR="${GUESTFSDIR}/$name/ssh"
+	SSH_KEY="${SSH_KEY_DIR}/id_ed25519"
+
+	# Generate a new ssh key
+	mkdir -p "$SSH_KEY_DIR"
+	chmod 0700 "$SSH_KEY_DIR"
+	rm -f $SSH_KEY $SSH_KEY.pub
+	ssh-keygen -q -t ed25519 -f $SSH_KEY -N ""
+
+	mkdir -p "$STORAGEDIR/$name"
+
+	# Copy the base image and prep it
+	ROOTIMG="$STORAGEDIR/$name/root.qcow2"
+	cp --reflink=auto $BASE_IMAGE $ROOTIMG
+	virt-sysprep -a $ROOTIMG --hostname $name --ssh-inject "kdevops:file:$SSH_KEY.pub"
+
+	# build some extra disks
+	for i in $(seq 0 3); do
+		diskimg="$STORAGEDIR/$name/extra${i}.${IMG_FMT}"
+		rm -f $diskimg
+		qemu-img create -f $IMG_FMT "$STORAGEDIR/$name/extra${i}.$IMG_FMT" 100G
+	done
+
+	virsh define $GUESTFSDIR/$name/$name.xml
+	virsh start $name
+done
diff --git a/scripts/destroy_guestfs.sh b/scripts/destroy_guestfs.sh
new file mode 100755
index 000000000000..4512dc07246b
--- /dev/null
+++ b/scripts/destroy_guestfs.sh
@@ -0,0 +1,29 @@
+#!/bin/bash
+# SPDX-License-Identifier: copyleft-next-0.3.1
+
+[ -z "${TOPDIR}" ] && TOPDIR='.'
+source ${TOPDIR}/.config
+source ${TOPDIR}/scripts/lib.sh
+
+STORAGEDIR="${CONFIG_KDEVOPS_STORAGE_POOL_PATH}/kdevops/guestfs"
+GUESTFSDIR="${TOPDIR}/guestfs"
+
+if [ -f "$GUESTFSDIR/kdevops_nodes.yaml" ]; then
+	# FIXME: is there a yaml equivalent to jq ?
+	grep -e '^  - name: ' "${GUESTFSDIR}/kdevops_nodes.yaml"  | sed 's/^  - name: //' | while read name
+	do
+		domstate=$(virsh domstate $name 2>/dev/null)
+		if [ $? -eq 0 ]; then
+			if [ "$domstate" = 'running' ]; then
+				virsh destroy $name
+			fi
+			virsh undefine $name
+		fi
+		rm -rf "$GUESTFSDIR/$name"
+		rm -rf "$STORAGEDIR/$name"
+	done
+fi
+
+rm -f ~/.ssh/config_kdevops_$CONFIG_KDEVOPS_HOSTS_PREFIX
+rm -f $GUESTFSDIR/.provisioned_once
+rm -f $GUESTFSDIR/kdevops_nodes.yaml
diff --git a/scripts/guestfs.Makefile b/scripts/guestfs.Makefile
new file mode 100644
index 000000000000..958063d5eeef
--- /dev/null
+++ b/scripts/guestfs.Makefile
@@ -0,0 +1,88 @@
+# SPDX-License-Identifier: copyleft-next-0.3.1
+
+GUESTFS_ARGS :=
+
+KDEVOPS_NODES_TEMPLATE :=	$(KDEVOPS_NODES_ROLE_TEMPLATE_DIR)/guestfs_nodes.j2
+KDEVOPS_NODES :=		guestfs/kdevops_nodes.yaml
+
+export KDEVOPS_GUESTFS_PROVISIONED :=	guestfs/.provisioned_once
+
+KDEVOPS_MRPROPER +=		$(KDEVOPS_GUESTFS_PROVISIONED)
+
+GUESTFS_ARGS += kdevops_enable_guestfs=True
+GUESTFS_ARGS += guestfs_path='$(TOPDIR_PATH)/guestfs'
+GUESTFS_ARGS += data_home_dir=/home/kdevops
+GUESTFS_ARGS += virtbuilder_os_version=$(CONFIG_VIRT_BUILDER_OS_VERSION)
+GUESTFS_ARGS += kdevops_storage_pool_user='$(USER)'
+
+GUESTFS_ARGS += libvirt_provider=True
+
+QEMU_GROUP:=$(subst ",,$(CONFIG_LIBVIRT_QEMU_GROUP))
+GUESTFS_ARGS += kdevops_storage_pool_group='$(QEMU_GROUP)'
+GUESTFS_ARGS += storage_pool_group='$(QEMU_GROUP)'
+
+STORAGE_POOL_PATH:=$(subst ",,$(CONFIG_KDEVOPS_STORAGE_POOL_PATH))
+KDEVOPS_STORAGE_POOL_PATH:=$(STORAGE_POOL_PATH)/kdevops
+GUESTFS_ARGS += storage_pool_path=$(STORAGE_POOL_PATH)
+GUESTFS_ARGS += kdevops_storage_pool_path=$(KDEVOPS_STORAGE_POOL_PATH)
+
+9P_HOST_CLONE :=
+ifeq (y,$(CONFIG_BOOTLINUX_9P))
+9P_HOST_CLONE := 9p_linux_clone
+endif
+
+LIBVIRT_PCIE_PASSTHROUGH :=
+ifeq (y,$(CONFIG_KDEVOPS_LIBVIRT_PCIE_PASSTHROUGH))
+LIBVIRT_PCIE_PASSTHROUGH := libvirt_pcie_passthrough_permissions
+endif
+
+ifneq ($(strip $(CONFIG_RHEL_ORG_ID)),)
+ifneq ($(strip $(CONFIG_RHEL_ACTIVATION_KEY)),)
+RHEL_ORG_ID:=$(subst ",,$(CONFIG_RHEL_ORG_ID))
+RHEL_ACTIVATION_KEY:=$(subst ",,$(CONFIG_RHEL_ACTIVATION_KEY))
+GUESTFS_ARGS += rhel_org_id="$(RHEL_ORG_ID)"
+GUESTFS_ARGS += rhel_activation_key="$(RHEL_ACTIVATION_KEY)"
+endif
+endif
+
+ANSIBLE_EXTRA_ARGS += $(GUESTFS_ARGS)
+
+GUESTFS_BRINGUP_DEPS :=
+GUESTFS_BRINGUP_DEPS +=  $(9P_HOST_CLONE)
+GUESTFS_BRINGUP_DEPS +=  $(LIBVIRT_PCIE_PASSTHROUGH)
+
+KDEVOPS_BRING_UP_DEPS := bringup_guestfs
+KDEVOPS_DESTROY_DEPS := destroy_guestfs
+
+# Provisioning goes last
+KDEVOPS_BRING_UP_DEPS += $(KDEVOPS_GUESTFS_PROVISIONED)
+
+9p_linux_clone:
+	$(Q)make linux-clone
+
+libvirt_pcie_passthrough_permissions:
+	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) --connection=local \
+		--inventory localhost, \
+		playbooks/libvirt_pcie_passthrough.yml \
+		-e 'ansible_python_interpreter=/usr/bin/python3'
+
+$(KDEVOPS_GUESTFS_PROVISIONED):
+	$(Q)if [[ "$(CONFIG_KDEVOPS_SSH_CONFIG_UPDATE)" == "y" ]]; then \
+		$(TOPDIR)/scripts/update_ssh_config_guestfs.py; \
+	fi
+	$(Q)if [[ "$(CONFIG_KDEVOPS_ANSIBLE_PROVISION_PLAYBOOK)" != "" ]]; then \
+		ansible-playbook $(ANSIBLE_VERBOSE) -i \
+			$(KDEVOPS_HOSTFILE) $(KDEVOPS_PLAYBOOKS_DIR)/$(KDEVOPS_ANSIBLE_PROVISION_PLAYBOOK) ; \
+	fi
+	$(Q)touch $(KDEVOPS_GUESTFS_PROVISIONED)
+
+bringup_guestfs: $(GUESTFS_BRINGUP_DEPS)
+	$(Q)$(TOPDIR)/scripts/bringup_guestfs.sh
+PHONY += bringup_guestfs
+
+destroy_guestfs:
+	$(Q)$(TOPDIR)/scripts/destroy_guestfs.sh
+PHONY += destroy_guestfs
+
+cleancache:
+	$(Q)rm -f $(subst ",,$(CONFIG_KDEVOPS_STORAGE_POOL_PATH))/kdevops/guestfs/base_images/*
diff --git a/scripts/update_ssh_config_guestfs.py b/scripts/update_ssh_config_guestfs.py
new file mode 100755
index 000000000000..ebf49b8b09b4
--- /dev/null
+++ b/scripts/update_ssh_config_guestfs.py
@@ -0,0 +1,94 @@
+#!/usr/bin/python3
+#
+# update_ssh_config_guestfs
+#
+# For each kdevops guest, determine the IP address and write a ssh_config
+# entry for it to ~/.ssh/config_kdevops_$prefix. Users can then just add a
+# line like this to ~/.ssh/config:
+#
+# Include ~/.ssh/config_kdevops_*
+#
+
+import yaml
+import json
+import sys
+import pprint
+import subprocess
+import time
+import os
+from pathlib import Path
+
+ssh_template = """Host {name} {addr}
+	HostName {addr}
+	User kdevops
+	Port 22
+	IdentityFile {sshkey}
+	UserKnownHostsFile /dev/null
+	StrictHostKeyChecking no
+	PasswordAuthentication no
+	IdentitiesOnly yes
+	LogLevel FATAL
+"""
+
+# We take the first IPv4 address on the first non-loopback interface.
+def get_addr(name):
+    attempt = 0
+    while True:
+        attempt += 1
+        if attempt > 60:
+            raise Exception(f"Unable to get an address for {name} after 60s")
+
+        result = subprocess.run(['/usr/bin/virsh','qemu-agent-command',name,'{"execute":"guest-network-get-interfaces"}'], capture_output=True)
+        # Did it error out? Sleep and try again.
+        if result.returncode != 0:
+            time.sleep(1)
+            continue
+
+        # slurp the output into a dict
+        netinfo = json.loads(result.stdout)
+
+        ret = None
+        for iface in netinfo['return']:
+            if iface['name'] == 'lo':
+                continue
+            if 'ip-addresses' not in iface:
+                continue
+            for addr in iface['ip-addresses']:
+                if addr['ip-address-type'] != 'ipv4':
+                    continue
+                ret = addr['ip-address']
+                break
+
+        # If we didn't get an address, try again
+        if ret:
+            return ret
+        time.sleep(1)
+
+def main():
+    topdir = os.environ.get('TOPDIR', '.')
+
+    # load extra_vars
+    with open(f'{topdir}/extra_vars.yaml') as stream:
+        extra_vars = yaml.safe_load(stream)
+
+    # slurp in the guestfs_nodes list
+    with open(f'{topdir}/{extra_vars["kdevops_nodes"]}') as stream:
+        nodes = yaml.safe_load(stream)
+
+    ssh_config = f'{Path.home()}/.ssh/config_kdevops_{extra_vars["kdevops_host_prefix"]}'
+
+    # make a stanza for each node
+    sshconf = open(ssh_config, 'w')
+    for node in nodes['guestfs_nodes']:
+        name = node['name']
+        addr = get_addr(name)
+        context = {
+            "name" : name,
+            "addr" : addr,
+            "sshkey" : f"{extra_vars['guestfs_path']}/{name}/ssh/id_ed25519"
+        }
+        sshconf.write(ssh_template.format(**context))
+    sshconf.close()
+
+if __name__ == "__main__":
+    main()
diff --git a/vagrant/Kconfig b/vagrant/Kconfig
index 3b069d098500..04c8556a5bff 100644
--- a/vagrant/Kconfig
+++ b/vagrant/Kconfig
@@ -1,10 +1,11 @@
-if VAGRANT
+if VAGRANT || GUESTFS
 
+if ! GUESTFS
 choice
 	prompt "Vagrant virtualization technology to use"
-	default LIBVIRT
+	default VAGRANT_LIBVIRT_SELECT
 
-config LIBVIRT
+config VAGRANT_LIBVIRT_SELECT
 	bool "Libvirt"
 	help
 	  Select this option if you want to use KVM / libvirt for
@@ -21,6 +22,12 @@ config VAGRANT_VIRTUALBOX
 	  local virtualization.
 
 endchoice
+endif # !GUESTFS
+
+config LIBVIRT
+	bool
+	depends on GUESTFS || VAGRANT_LIBVIRT_SELECT
+	default y
 
 config USE_LIBVIRT_MIRROR
 	bool
@@ -1103,6 +1110,18 @@ config LIBVIRT_SESSION_PUBLIC_NETWORK_DEV
 
 endif # LIBVIRT_SESSION
 
+if GUESTFS
+config VIRT_BUILDER_OS_VERSION
+	string "virt-builder os-version"
+	default "fedora-39"
+	help
+	  Have virt-builder use this os-version string to
+	  build a root image for the guest. Run "virt-builder -l"
+	  to get a list of operating systems and versions supported
+	  by guestfs.
+endif # GUESTFS
+
+if ! GUESTFS
 config HAVE_SUSE_VAGRANT
 	bool
 	default $(shell, scripts/check_distro_kconfig.sh suse)
@@ -1183,7 +1202,6 @@ config VAGRANT_KDEVOPS
 	help
 	  This option will let you select custom kernel builds by the
 	  kdevops project. The distributions may vary and are are specified.
-
 endchoice
 
 config HAVE_VAGRANT_BOX_VERSION
@@ -1264,7 +1282,6 @@ config VAGRANT_PREFERRED_KERNEL_CI_SUBJECT_TOPIC
 	default VAGRANT_BOX if VAGRANT_DEBIAN_BUSTER64
 	default VAGRANT_BOX if VAGRANT_DEBIAN_BULLSEYE64
 
-
 config HAVE_VAGRANT_BOX_URL
 	bool
 
@@ -1304,6 +1321,19 @@ config VAGRANT_BOX_VERSION
 
 endif # !HAVE_VAGRANT_BOX_VERSION
 
+config VAGRANT_INSTALL_PRIVATE_BOXES
+	bool "Install private Vagrant boxes"
+	default y
+	help
+	  If this option is enabled then the Ansible role which installs
+	  additional Vagrant boxes will be run. This is useful if for example,
+	  you have private Vagrant boxes available and you want to use them.
+	  You can safely disable this option if you are using only public
+	  Vagrant boxes. Enabling this option is safe as well, given no
+	  private boxes would be defined, and so nothing is done.
+
+endif # !GUESTFS
+
 config LIBVIRT_INSTALL
 	bool "Install libvirt"
 	default y if KDEVOPS_FIRST_RUN
@@ -1341,17 +1371,6 @@ config LIBVIRT_VERIFY
 	  verify and ensure that your user is already part of these groups.
 	  You can safely say yes here.
 
-config VAGRANT_INSTALL_PRIVATE_BOXES
-	bool "Install private Vagrant boxes"
-	default y
-	help
-	  If this option is enabled then the Ansible role which installs
-	  additional Vagrant boxes will be run. This is useful if for example,
-	  you have private Vagrant boxes available and you want to use them.
-	  You can safely disable this option if you are using only public
-	  Vagrant boxes. Enabling this option is safe as well, given no
-	  private boxes would be defined, and so nothing is done.
-
 choice
 	prompt "Libvirt NVMe drive file format"
 	depends on LIBVIRT

-- 
2.43.0



Thread overview: 12+ messages
2023-12-08 15:01 [PATCH kdevops v2 0/4] guestfs: replacement for vagrant in kdevops Jeff Layton
2023-12-08 15:01 ` [PATCH kdevops v2 1/4] Kconfig: s/VAGRANT_LIBVIRT/LIBVIRT/ Jeff Layton
2023-12-09  2:05   ` Luis Chamberlain
2023-12-09  2:06     ` Luis Chamberlain
2023-12-09  2:08       ` Luis Chamberlain
2023-12-09 12:07     ` Jeff Layton
2023-12-08 15:01 ` [PATCH kdevops v2 2/4] nfsd: key off of LIBVIRT instead of VAGRANT for storage config Jeff Layton
2023-12-08 15:01 ` [PATCH kdevops v2 3/4] vagrant: rename the RHEL registration settings Jeff Layton
2023-12-08 15:23   ` Jeff Layton
2023-12-08 15:01 ` Jeff Layton [this message]
2023-12-09  2:14   ` [PATCH kdevops v2 4/4] guestfs: add a new local guest management variant Luis Chamberlain
2023-12-09 12:25     ` Jeff Layton
