public inbox for kdevops@lists.linux.dev
* [PATCH v2 0/4] Enable aarch64 with guestfs
@ 2024-03-06 15:03 Chuck Lever
  2024-03-06 15:03 ` [PATCH v2 1/4] guestfs: Specify host ISA to virt-builder Chuck Lever
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: Chuck Lever @ 2024-03-06 15:03 UTC (permalink / raw)
  To: kdevops

Hi-

These patches add aarch64 platform support when running kdevops
with guestfs (i.e., local virtualization). I decided not to add
aarch64 support for Vagrant, since Vagrant seems to be going away
soon.

I haven't made substantive changes to this series. I think once
these patches are merged, subsequent modifications can bring these
into better focus. For example, a subsequent patch can de-duplicate
the guestfs XML files via %include directives.
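
For example (a sketch, assuming the Jinja2 templating these files
already use; the common fragment's name is hypothetical), the shared
device stanzas could move into one file that each per-architecture
template pulls in:

```
{# guestfs_q35.j2.xml / guestfs_virt.j2.xml: keep only the        #}
{# architecture-specific sections and include the shared devices  #}
{# from a common fragment (hypothetical file name).               #}
<domain type='kvm'>
  <name>{{ hostname }}</name>
  <!-- arch-specific <os>, <features>, and <cpu> elements here -->
{% include 'guestfs_common_devices.j2.xml' %}
</domain>
```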

The one area of continued (mild) controversy seems to be the use of
CONFIG_TARGET_ARCHITECTURE. This option appears to be a blanket
selection of which tests are available to run, though it is
conflated with the host and/or guest ISA in some areas of the
configuration. (Though perhaps that is only my own confusion.)

For local virtualization, guest ISA should match host ISA, which is
what 1/4 does. For cloud virtualization, some other (unspecified)
mechanism is used to select guest ISA, and this behavior is not
changed by this patch series.


Changes since v1:
- Checked with users@lists.libvirt.org about "virsh undefine --nvram"
- Sharpened patch descriptions based on review comments

---

Chuck Lever (4):
      guestfs: Specify host ISA to virt-builder
      guestfs: Enable destruction of guests with NVRAM
      gen_nodes: Instructions for adding a new guestfs architecture
      libvirt: Support aarch64 guests


 kconfigs/Kconfig.libvirt                      |   8 +-
 playbooks/roles/gen_nodes/templates/README.md |  65 ++++++
 .../gen_nodes/templates/guestfs_virt.j2.xml   | 215 ++++++++++++++++++
 scripts/bringup_guestfs.sh                    |   2 +-
 scripts/destroy_guestfs.sh                    |   2 +-
 scripts/gen-nodes.Makefile                    |   5 +
 workflows/linux/Kconfig                       |   3 +-
 7 files changed, 295 insertions(+), 5 deletions(-)
 create mode 100644 playbooks/roles/gen_nodes/templates/README.md
 create mode 100644 playbooks/roles/gen_nodes/templates/guestfs_virt.j2.xml

--
Chuck Lever


^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH v2 1/4] guestfs: Specify host ISA to virt-builder
  2024-03-06 15:03 [PATCH v2 0/4] Enable aarch64 with guestfs Chuck Lever
@ 2024-03-06 15:03 ` Chuck Lever
  2024-03-06 15:03 ` [PATCH v2 2/4] guestfs: Enable destruction of guests with NVRAM Chuck Lever
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 7+ messages in thread
From: Chuck Lever @ 2024-03-06 15:03 UTC (permalink / raw)
  To: kdevops

From: Chuck Lever <chuck.lever@oracle.com>

According to its documentation, the default architecture for
virt-builder(1) is not always the same as the host's ISA.

For guestfs/libvirt specifically, though, we want the host and guest
ISAs to match in order to avoid ISA emulation, which is slow.

It shouldn't be difficult to customize this later on, if someone
wants to run libvirt guests under emulation.
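
Such a customization could be as small as honoring an override
variable (a sketch only; the GUEST_ARCH name is hypothetical, not an
existing kdevops knob):

```shell
# Default the guest ISA to the host's, but allow an explicit
# override (e.g. GUEST_ARCH=aarch64) for anyone who wants to
# run guests under emulation.
GUEST_ARCH="${GUEST_ARCH:-$(uname -m)}"
echo "--arch $GUEST_ARCH"
```

virt-builder would then receive --arch "$GUEST_ARCH" rather than
--arch `uname -m` directly.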

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 scripts/bringup_guestfs.sh |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/bringup_guestfs.sh b/scripts/bringup_guestfs.sh
index 34ad48cbe81f..51b5a07218ac 100755
--- a/scripts/bringup_guestfs.sh
+++ b/scripts/bringup_guestfs.sh
@@ -67,7 +67,7 @@ _EOT
 	fi
 
 	echo "Generating new base image for ${OS_VERSION}"
-	virt-builder ${OS_VERSION} -o $BASE_IMAGE --size 20G --format raw --commands-from-file $cmdfile
+	virt-builder ${OS_VERSION} --arch `uname -m` -o $BASE_IMAGE --size 20G --format raw --commands-from-file $cmdfile
 fi
 
 # FIXME: is there a yaml equivalent of jq?



^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH v2 2/4] guestfs: Enable destruction of guests with NVRAM
  2024-03-06 15:03 [PATCH v2 0/4] Enable aarch64 with guestfs Chuck Lever
  2024-03-06 15:03 ` [PATCH v2 1/4] guestfs: Specify host ISA to virt-builder Chuck Lever
@ 2024-03-06 15:03 ` Chuck Lever
  2024-03-06 15:03 ` [PATCH v2 3/4] gen_nodes: Instructions for adding a new guestfs architecture Chuck Lever
  2024-03-06 15:03 ` [PATCH v2 4/4] libvirt: Support aarch64 guests Chuck Lever
  3 siblings, 0 replies; 7+ messages in thread
From: Chuck Lever @ 2024-03-06 15:03 UTC (permalink / raw)
  To: kdevops

From: Chuck Lever <chuck.lever@oracle.com>

The default guest configuration on ARM systems includes a virtual
NVRAM device. However, "virsh undefine" won't touch such guests
without an explicit command-line option that enables destruction of
that device and its contents.

Note that virsh(1) says:

> --nvram and --keep-nvram specify accordingly to delete or keep
> nvram (/domain/os/nvram/) file.

In other words, "virsh undefine" is supposed to delete that file,
but currently does not. This is a known bug.
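
If one wanted to be defensive about guests that lack NVRAM, a sketch
like the following (the helper name and the string probe are
hypothetical, not kdevops code) would add --nvram only when the
domain XML declares one:

```shell
# Decide the undefine arguments based on whether the domain's
# XML (as produced by "virsh dumpxml") contains an <nvram> element.
undefine_args() {
    name="$1"
    domxml="$2"
    case "$domxml" in
    *"<nvram"*) echo "undefine --nvram $name" ;;
    *)          echo "undefine $name" ;;
    esac
}

undefine_args demo-node '<os><nvram>/var/lib/demo.fd</nvram></os>'
```

This patch instead passes --nvram unconditionally, which keeps the
script simple.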

Link: https://lists.libvirt.org/archives/list/users@lists.libvirt.org/thread/MG3BFEOO7DUX6APTL7NXHOVZE4K2TO22/
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 scripts/destroy_guestfs.sh |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/destroy_guestfs.sh b/scripts/destroy_guestfs.sh
index 125890dc34dc..9c627f231cc7 100755
--- a/scripts/destroy_guestfs.sh
+++ b/scripts/destroy_guestfs.sh
@@ -19,7 +19,7 @@ if [ -f "$GUESTFSDIR/kdevops_nodes.yaml" ]; then
 			if [ "$domstate" = 'running' ]; then
 				virsh destroy $name
 			fi
-			virsh undefine $name
+			virsh undefine --nvram $name
 		fi
 		rm -rf "$GUESTFSDIR/$name"
 		rm -rf "$STORAGEDIR/$name"



^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH v2 3/4] gen_nodes: Instructions for adding a new guestfs architecture
  2024-03-06 15:03 [PATCH v2 0/4] Enable aarch64 with guestfs Chuck Lever
  2024-03-06 15:03 ` [PATCH v2 1/4] guestfs: Specify host ISA to virt-builder Chuck Lever
  2024-03-06 15:03 ` [PATCH v2 2/4] guestfs: Enable destruction of guests with NVRAM Chuck Lever
@ 2024-03-06 15:03 ` Chuck Lever
  2024-03-06 15:03 ` [PATCH v2 4/4] libvirt: Support aarch64 guests Chuck Lever
  3 siblings, 0 replies; 7+ messages in thread
From: Chuck Lever @ 2024-03-06 15:03 UTC (permalink / raw)
  To: kdevops

From: Chuck Lever <chuck.lever@oracle.com>

Write down what I did to build a new guestfs_splat.j2.xml file.
These notes guide the addition of support for a new guest type.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 playbooks/roles/gen_nodes/templates/README.md |   65 +++++++++++++++++++++++++
 1 file changed, 65 insertions(+)
 create mode 100644 playbooks/roles/gen_nodes/templates/README.md

diff --git a/playbooks/roles/gen_nodes/templates/README.md b/playbooks/roles/gen_nodes/templates/README.md
new file mode 100644
index 000000000000..67c91368654d
--- /dev/null
+++ b/playbooks/roles/gen_nodes/templates/README.md
@@ -0,0 +1,65 @@
+Constructing Node XML Files
+===========================
+
+Here are some basic recipes for constructing a guestfs_nnnn.j2.xml
+file. This will be necessary only when bringing up a previously
+unsupported guest ISA for use as a target guest.
+
+There are already a few guestfs_nnnn.j2.xml files in this directory
+to review for guidance.
+
+Requirements
+------------
+
+These recipes assume you have already installed the virt-* tools
+on your host.
+
+Build a virtual machine image
+-----------------------------
+
+Use virt-builder to download and build a sample disk image for the
+new guest. The following example builds a guest image with the same
+ISA as the host.
+
+  $ virt-builder fedora-38 --arch `uname -m` --size 20G --format raw
+
+Provision a virtual machine
+---------------------------
+
+Use virt-install to start up a guest on the disk image you built.
+
+  $ virt-install --disk path=./fedora-38.img --osinfo detect=on,require=off \
+        --install no_install=yes --memory=8000
+
+Extract node XML
+----------------
+
+Extract the guest's machine description into a file.
+
+  $ virsh dumpxml xxx > guestfs_nnnn.xml
+  $ virsh destroy xxx
+
+
+Hand-edit XML
+-------------
+
+kdevops wants a jinja2 file that can be used to substitute configured
+values into the XML. So:
+
+  $ cp guestfs_nnnn.xml guestfs_nnnn.j2.xml
+  $ edit guestfs_q35.j2.xml guestfs_nnnn.j2.xml
+
+Find instances of "{{" and copy those lines, as appropriate, to the
+new XML file.
+
+Test the new file with "make && make bringup". Adjust the .j2.xml
+file as needed.
+
+When you are satisfied with guestfs_nnnn.j2.xml, delete guestfs_nnnn.xml,
+then commit the guestfs_nnnn.j2.xml file to the kdevops repo.
+
+
+License
+-------
+
+copyleft-next-0.3.1



^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH v2 4/4] libvirt: Support aarch64 guests
  2024-03-06 15:03 [PATCH v2 0/4] Enable aarch64 with guestfs Chuck Lever
                   ` (2 preceding siblings ...)
  2024-03-06 15:03 ` [PATCH v2 3/4] gen_nodes: Instructions for adding a new guestfs architecture Chuck Lever
@ 2024-03-06 15:03 ` Chuck Lever
  2024-03-06 21:05   ` Luis Chamberlain
  3 siblings, 1 reply; 7+ messages in thread
From: Chuck Lever @ 2024-03-06 15:03 UTC (permalink / raw)
  To: kdevops

From: Chuck Lever <chuck.lever@oracle.com>

I've added aarch64 support only under guestfs, since I figure
Vagrant support in kdevops is not long for this world.

- Add a .j2.xml file for building aarch64 nodes
- Remove some hard dependencies on the q35 guest definition

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 kconfigs/Kconfig.libvirt                           |    8 +
 .../roles/gen_nodes/templates/guestfs_virt.j2.xml  |  215 ++++++++++++++++++++
 scripts/gen-nodes.Makefile                         |    5 
 workflows/linux/Kconfig                            |    3 
 4 files changed, 228 insertions(+), 3 deletions(-)
 create mode 100644 playbooks/roles/gen_nodes/templates/guestfs_virt.j2.xml

diff --git a/kconfigs/Kconfig.libvirt b/kconfigs/Kconfig.libvirt
index fa39120450fd..6d51a1c26604 100644
--- a/kconfigs/Kconfig.libvirt
+++ b/kconfigs/Kconfig.libvirt
@@ -466,7 +466,8 @@ endif # HAVE_LIBVIRT_PCIE_PASSTHROUGH
 
 choice
 	prompt "Machine type to use"
-	default LIBVIRT_MACHINE_TYPE_Q35
+	default LIBVIRT_MACHINE_TYPE_Q35 if TARGET_ARCH_X86_64
+	default LIBVIRT_MACHINE_TYPE_VIRT if TARGET_ARCH_ARM64
 
 config LIBVIRT_MACHINE_TYPE_DEFAULT
 	bool "Use the default machine type"
@@ -487,6 +488,11 @@ config LIBVIRT_MACHINE_TYPE_Q35
 	  Use q35 for the machine type. This will be required for things like
 	  CXL or PCIe passthrough.
 
+config LIBVIRT_MACHINE_TYPE_VIRT
+	bool "virt"
+	help
+	  Use virt for the machine type. This is the default on aarch64 hosts.
+
 endchoice
 
 config LIBVIRT_HOST_PASSTHROUGH
diff --git a/playbooks/roles/gen_nodes/templates/guestfs_virt.j2.xml b/playbooks/roles/gen_nodes/templates/guestfs_virt.j2.xml
new file mode 100644
index 000000000000..9a7f004dcc1c
--- /dev/null
+++ b/playbooks/roles/gen_nodes/templates/guestfs_virt.j2.xml
@@ -0,0 +1,215 @@
+<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
+  <name>{{ hostname }}</name>
+  <memory unit='MiB'>{{ libvirt_mem_mb }}</memory>
+  <currentMemory unit='MiB'>{{ libvirt_mem_mb }}</currentMemory>
+  <vcpu placement='static'>{{ libvirt_vcpus_count }}</vcpu>
+  <os firmware='efi'>
+    <type arch='aarch64' machine='virt-8.1'>hvm</type>
+    <firmware>
+      <feature enabled='no' name='enrolled-keys'/>
+      <feature enabled='no' name='secure-boot'/>
+    </firmware>
+    <loader readonly='yes' type='pflash' format='qcow2'>/usr/share/edk2/aarch64/QEMU_EFI-silent-pflash.qcow2</loader>
+    <boot dev='hd'/>
+  </os>
+  <features>
+    <acpi/>
+    <gic version='3'/>
+  </features>
+  <cpu mode='{{ 'host-passthrough' if libvirt_host_passthrough else 'host-model' }}' migratable='off'/>
+  <clock offset='utc'/>
+  <on_poweroff>destroy</on_poweroff>
+  <on_reboot>restart</on_reboot>
+  <on_crash>destroy</on_crash>
+  <devices>
+    <emulator>{{ qemu_bin_path }}</emulator>
+    <disk type='file' device='disk'>
+      <driver name='qemu' type='raw'/>
+      <source file='{{ kdevops_storage_pool_path }}/guestfs/{{ hostname }}/root.raw' index='1'/>
+      <backingStore/>
+      <target dev='vda' bus='virtio'/>
+      <alias name='virtio-disk0'/>
+      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
+    </disk>
+    <controller type='usb' index='0' model='qemu-xhci' ports='15'>
+      <alias name='usb'/>
+      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
+    </controller>
+    <controller type='pci' index='0' model='pcie-root'>
+      <alias name='pcie.0'/>
+    </controller>
+    <controller type='pci' index='1' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='1' port='0x8'/>
+      <alias name='pci.1'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
+    </controller>
+    <controller type='pci' index='2' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='2' port='0x9'/>
+      <alias name='pci.2'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
+    </controller>
+    <controller type='pci' index='3' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='3' port='0xa'/>
+      <alias name='pci.3'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
+    </controller>
+    <controller type='pci' index='4' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='4' port='0xb'/>
+      <alias name='pci.4'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
+    </controller>
+    <controller type='pci' index='5' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='5' port='0xc'/>
+      <alias name='pci.5'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
+    </controller>
+    <controller type='pci' index='6' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='6' port='0xd'/>
+      <alias name='pci.6'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
+    </controller>
+    <controller type='pci' index='7' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='7' port='0xe'/>
+      <alias name='pci.7'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
+    </controller>
+    <controller type='pci' index='8' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='8' port='0xf'/>
+      <alias name='pci.8'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x7'/>
+    </controller>
+    <controller type='pci' index='9' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='9' port='0x10'/>
+      <alias name='pci.9'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
+    </controller>
+    <controller type='pci' index='10' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='10' port='0x11'/>
+      <alias name='pci.10'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
+    </controller>
+    <controller type='pci' index='11' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='11' port='0x12'/>
+      <alias name='pci.11'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
+    </controller>
+    <controller type='pci' index='12' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='12' port='0x13'/>
+      <alias name='pci.12'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
+    </controller>
+    <controller type='pci' index='13' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='13' port='0x14'/>
+      <alias name='pci.13'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
+    </controller>
+    <controller type='pci' index='14' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='14' port='0x15'/>
+      <alias name='pci.14'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
+    </controller>
+    <controller type='virtio-serial' index='0'>
+      <alias name='virtio-serial0'/>
+      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
+    </controller>
+    <interface type='bridge'>
+      <source bridge='{{ libvirt_session_public_network_dev }}'/>
+      <target dev='tap0'/>
+      <model type='virtio'/>
+      <alias name='net0'/>
+      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
+    </interface>
+    <serial type='pty'>
+      <source path='/dev/pts/2'/>
+      <target type='system-serial' port='0'>
+        <model name='pl011'/>
+      </target>
+      <alias name='serial0'/>
+      <log file='{{ guestfs_path }}/{{ hostname }}/console.log' append='on'/>
+    </serial>
+    <console type='pty' tty='/dev/pts/2'>
+      <source path='/dev/pts/1'/>
+      <target type='serial' port='0'/>
+      <alias name='serial0'/>
+    </console>
+    <channel type='unix'>
+      <source mode='bind'/>
+      <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
+      <alias name='channel0'/>
+      <address type='virtio-serial' controller='0' bus='0' port='1'/>
+    </channel>
+    <tpm model='tpm-tis'>
+      <backend type='emulator' version='2.0'/>
+      <alias name='tpm0'/>
+    </tpm>
+    <audio id='1' type='none'/>
+    <memballoon model='virtio'>
+      <alias name='balloon0'/>
+      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
+    </memballoon>
+    <rng model='virtio'>
+      <backend model='random'>/dev/urandom</backend>
+      <alias name='rng0'/>
+      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
+    </rng>
+  </devices>
+  <qemu:commandline>
+    <qemu:arg value='-global'/>
+    <qemu:arg value='ICH9-LPC.disable_s3=0'/>
+    <qemu:arg value='-global'/>
+    <qemu:arg value='ICH9-LPC.disable_s4=0'/>
+    <qemu:arg value='-device'/>
+    <qemu:arg value='pxb-pcie,id=pcie.1,bus_nr=32,bus=pcie.0,addr=0x8'/>
+{% if libvirt_extra_storage_drive_ide %}
+{% for n in range(0,4) %}
+    <qemu:arg value='-drive'/>
+    <qemu:arg value='file={{ kdevops_storage_pool_path }}/guestfs/{{ hostname }}/extra{{ n }}.{{ libvirt_extra_drive_format }},format={{ libvirt_extra_drive_format }},aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},if=none,id=drv{{ n }}'/>
+    <qemu:arg value='-device'/>
+    <qemu:arg value="ide-hd,drive=drv{{ n }},bus=ide.{{ n }},serial=kdevops{{ n }}"/>
+{% endfor %}
+{% elif libvirt_extra_storage_drive_virtio %}
+{% for n in range(0,4) %}
+    <qemu:arg value='-device'/>
+    <qemu:arg value='pcie-root-port,id=pcie-port-for-virtio-{{ n }},multifunction=on,bus=pcie.1,addr=0x{{ n }},chassis=5{{ n }}'/>
+    <qemu:arg value="-object"/>
+    <qemu:arg value="iothread,id=kdevops-virtio-iothread-{{ n }}"/>
+    <qemu:arg value="-drive"/>
+    <qemu:arg value="file={{ kdevops_storage_pool_path }}/guestfs/{{ hostname }}/extra{{ n }}.{{ libvirt_extra_drive_format }},format={{ libvirt_extra_drive_format }},if=none,aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},id=drv{{ n }}"/>
+    <qemu:arg value="-device"/>
+    <qemu:arg value="virtio-blk-pci,scsi=off,drive=drv{{ n }},id=virtio-drv{{ n }},serial=kdevops{{ n }},bus=pcie-port-for-virtio-{{ n }},addr=0x0,iothread=kdevops-virtio-iothread-{{ n }},logical_block_size={{ libvirt_extra_storage_virtio_logical_block_size }},physical_block_size={{ libvirt_extra_storage_virtio_physical_block_size }}"/>
+{% endfor %}
+{% elif libvirt_extra_storage_drive_nvme  %}
+{% for n in range(0,4) %}
+    <qemu:arg value='-device'/>
+    <qemu:arg value='pcie-root-port,id=pcie-port-for-nvme-{{ n }},multifunction=on,bus=pcie.1,addr=0x{{ n }},chassis=5{{ n }}'/>
+    <qemu:arg value='-drive'/>
+    <qemu:arg value='file={{ kdevops_storage_pool_path }}/guestfs/{{ hostname }}/extra{{ n }}.{{ libvirt_extra_drive_format }},format={{ libvirt_extra_drive_format }},if=none,id=drv{{ n }}'/>
+    <qemu:arg value='-device'/>
+    <qemu:arg value='nvme,id=nvme{{ n }},serial=kdevops{{ n }},bus=pcie-port-for-nvme-{{ n }},addr=0x0'/>
+    <qemu:arg value='-device'/>
+    <qemu:arg value='nvme-ns,drive=drv{{ n }},bus=nvme{{ n }},nsid=1,logical_block_size=512,physical_block_size=512'/>
+{% endfor %}
+{% endif %}
+{% if bootlinux_9p %}
+    <qemu:arg value='-device'/>
+    <qemu:arg value='{{ bootlinux_9p_driver }},fsdev={{ bootlinux_9p_fsdev }},mount_tag={{ bootlinux_9p_mount_tag }},bus=pcie.0,addr=0x10'/>
+    <qemu:arg value='-fsdev'/>
+    <qemu:arg value='local,id={{ bootlinux_9p_fsdev }},path={{ bootlinux_9p_host_path }},security_model={{ bootlinux_9p_security_model }}'/>
+{% endif %}
+  </qemu:commandline>
+</domain>
+
diff --git a/scripts/gen-nodes.Makefile b/scripts/gen-nodes.Makefile
index 657f64496309..ce6b794f1fb1 100644
--- a/scripts/gen-nodes.Makefile
+++ b/scripts/gen-nodes.Makefile
@@ -216,4 +216,9 @@ endif # CONFIG_QEMU_ENABLE_CXL
 
 endif # CONFIG_LIBVIRT_MACHINE_TYPE_Q35
 
+ifeq (y,$(CONFIG_LIBVIRT_MACHINE_TYPE_VIRT))
+GEN_NODES_EXTRA_ARGS += libvirt_override_machine_type='True'
+GEN_NODES_EXTRA_ARGS += libvirt_machine_type='virt'
+endif # CONFIG_LIBVIRT_MACHINE_TYPE_VIRT
+
 ANSIBLE_EXTRA_ARGS += $(GEN_NODES_EXTRA_ARGS)
diff --git a/workflows/linux/Kconfig b/workflows/linux/Kconfig
index d4dd3abe953b..41ca740dcca0 100644
--- a/workflows/linux/Kconfig
+++ b/workflows/linux/Kconfig
@@ -29,8 +29,7 @@ endif # HAVE_SUPPORTS_PURE_IOMAP
 config BOOTLINUX_9P
 	bool "Use 9p to build Linux"
 	depends on LIBVIRT
-	depends on LIBVIRT_MACHINE_TYPE_Q35
-	default LIBVIRT_MACHINE_TYPE_Q35
+	default LIBVIRT
 	help
 	  This will let you choose use 9p to build Linux. What this does is
 	  use your localhost to git clone Linux under the assumption your



^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: [PATCH v2 4/4] libvirt: Support aarch64 guests
  2024-03-06 15:03 ` [PATCH v2 4/4] libvirt: Support aarch64 guests Chuck Lever
@ 2024-03-06 21:05   ` Luis Chamberlain
  2024-03-07 14:09     ` Chuck Lever III
  0 siblings, 1 reply; 7+ messages in thread
From: Luis Chamberlain @ 2024-03-06 21:05 UTC (permalink / raw)
  To: Chuck Lever; +Cc: kdevops

On Wed, Mar 06, 2024 at 10:03:36AM -0500, Chuck Lever wrote:
> From: Chuck Lever <chuck.lever@oracle.com>
> 
> I've added aarch64 support only under guestfs, since I figure
> Vagrant support in kdevops is not long for this world.
> 
> - Add a .j2.xml file for building aarch64 nodes
> - Remove some hard dependencies on the q35 guest definition
> 
> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
> ---
>  kconfigs/Kconfig.libvirt                           |    8 +
>  .../roles/gen_nodes/templates/guestfs_virt.j2.xml  |  215 ++++++++++++++++++++
>  scripts/gen-nodes.Makefile                         |    5 
>  workflows/linux/Kconfig                            |    3 
>  4 files changed, 228 insertions(+), 3 deletions(-)
>  create mode 100644 playbooks/roles/gen_nodes/templates/guestfs_virt.j2.xml
> 
> diff --git a/kconfigs/Kconfig.libvirt b/kconfigs/Kconfig.libvirt
> index fa39120450fd..6d51a1c26604 100644
> --- a/kconfigs/Kconfig.libvirt
> +++ b/kconfigs/Kconfig.libvirt
> @@ -466,7 +466,8 @@ endif # HAVE_LIBVIRT_PCIE_PASSTHROUGH
>  
>  choice
>  	prompt "Machine type to use"
> -	default LIBVIRT_MACHINE_TYPE_Q35
> +	default LIBVIRT_MACHINE_TYPE_Q35 if TARGET_ARCH_X86_64
> +	default LIBVIRT_MACHINE_TYPE_VIRT if TARGET_ARCH_ARM64

Picking q35 when one has selected TARGET_ARCH_ARM64 would likely fail
at bringup; likewise, picking LIBVIRT_MACHINE_TYPE_VIRT when one has
selected TARGET_ARCH_X86_64 would fail. So how about just hiding that
complexity and presenting q35 as an option only if TARGET_ARCH_X86_64
is enabled?

So LIBVIRT_MACHINE_TYPE_Q35 should then depend on TARGET_ARCH_X86_64,
and LIBVIRT_MACHINE_TYPE_VIRT should depend on TARGET_ARCH_ARM64?

The defaults above can be kept as-is; q35 simply would not appear in
the menu when one has selected TARGET_ARCH_ARM64.
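
Expressed as Kconfig, that suggestion might look like this (a sketch
extending the choice block quoted above, help texts elided):

```
choice
	prompt "Machine type to use"
	default LIBVIRT_MACHINE_TYPE_Q35 if TARGET_ARCH_X86_64
	default LIBVIRT_MACHINE_TYPE_VIRT if TARGET_ARCH_ARM64

config LIBVIRT_MACHINE_TYPE_Q35
	bool "q35"
	depends on TARGET_ARCH_X86_64

config LIBVIRT_MACHINE_TYPE_VIRT
	bool "virt"
	depends on TARGET_ARCH_ARM64

endchoice
```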

Other than that:

Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>

  Luis

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH v2 4/4] libvirt: Support aarch64 guests
  2024-03-06 21:05   ` Luis Chamberlain
@ 2024-03-07 14:09     ` Chuck Lever III
  0 siblings, 0 replies; 7+ messages in thread
From: Chuck Lever III @ 2024-03-07 14:09 UTC (permalink / raw)
  To: Luis Chamberlain; +Cc: Chuck Lever, kdevops@lists.linux.dev



> On Mar 6, 2024, at 4:05 PM, Luis Chamberlain <mcgrof@kernel.org> wrote:
> 
> On Wed, Mar 06, 2024 at 10:03:36AM -0500, Chuck Lever wrote:
>> From: Chuck Lever <chuck.lever@oracle.com>
>> 
>> I've added aarch64 support only under guestfs, since I figure
>> Vagrant support in kdevops is not long for this world.
>> 
>> - Add a .j2.xml file for building aarch64 nodes
>> - Remove some hard dependencies on the q35 guest definition
>> 
>> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
>> ---
>> kconfigs/Kconfig.libvirt                           |    8 +
>> .../roles/gen_nodes/templates/guestfs_virt.j2.xml  |  215 ++++++++++++++++++++
>> scripts/gen-nodes.Makefile                         |    5 
>> workflows/linux/Kconfig                            |    3 
>> 4 files changed, 228 insertions(+), 3 deletions(-)
>> create mode 100644 playbooks/roles/gen_nodes/templates/guestfs_virt.j2.xml
>> 
>> diff --git a/kconfigs/Kconfig.libvirt b/kconfigs/Kconfig.libvirt
>> index fa39120450fd..6d51a1c26604 100644
>> --- a/kconfigs/Kconfig.libvirt
>> +++ b/kconfigs/Kconfig.libvirt
>> @@ -466,7 +466,8 @@ endif # HAVE_LIBVIRT_PCIE_PASSTHROUGH
>> 
>> choice
>> prompt "Machine type to use"
>> - default LIBVIRT_MACHINE_TYPE_Q35
>> + default LIBVIRT_MACHINE_TYPE_Q35 if TARGET_ARCH_X86_64
>> + default LIBVIRT_MACHINE_TYPE_VIRT if TARGET_ARCH_ARM64
> 
> Picking q35 when one selected TARGET_ARCH_ARM64 would likely fail at
> bringup, likewise picking LIBVIRT_MACHINE_TYPE_VIRT when one selected
> TARGET_ARCH_X86_64 would fail. So how about just hiding that complexity
> and only presenting q35 as an option if TARGET_ARCH_X86_64 is enabled?
> 
> So LIBVIRT_MACHINE_TYPE_Q35 should then depend on TARGET_ARCH_X86_64 and
>   LIBVIRT_MACHINE_TYPE_VIRT depend on TARGET_ARCH_ARM64 ?
> 
> The above can be kept as is, we'd just make q35 not appear as an option
> if one selected TARGET_ARCH_ARM64 and if one selected TARGET_ARCH_ARM64
> one would not see q35 in the drop down menu as well.
> 
> Other than that:
> 
> Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>

I split these up a little bit and added your suggestion to
hide inappropriate machine types based on the TARGET_ARCH.

I pushed them but forgot to add your R-b. Apologies!


--
Chuck Lever



^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2024-03-07 14:09 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-03-06 15:03 [PATCH v2 0/4] Enable aarch64 with guestfs Chuck Lever
2024-03-06 15:03 ` [PATCH v2 1/4] guestfs: Specify host ISA to virt-builder Chuck Lever
2024-03-06 15:03 ` [PATCH v2 2/4] guestfs: Enable destruction of guests with NVRAM Chuck Lever
2024-03-06 15:03 ` [PATCH v2 3/4] gen_nodes: Instructions for adding a new guestfs architecture Chuck Lever
2024-03-06 15:03 ` [PATCH v2 4/4] libvirt: Support aarch64 guests Chuck Lever
2024-03-06 21:05   ` Luis Chamberlain
2024-03-07 14:09     ` Chuck Lever III
