* [PATCH 1/8] guestfs: use macros for drives for aarch64
2024-03-08 0:03 [PATCH 0/8] guestfs: fixes and enhancements Luis Chamberlain
@ 2024-03-08 0:03 ` Luis Chamberlain
2024-03-08 0:03 ` [PATCH 2/8] bringup: disable ZNS and CXL for guestfs Luis Chamberlain
` (7 subsequent siblings)
8 siblings, 0 replies; 15+ messages in thread
From: Luis Chamberlain @ 2024-03-08 0:03 UTC (permalink / raw)
To: kdevops; +Cc: Luis Chamberlain
Now that we have macros for drive generation for the XML files, we can
share them with aarch64. Move all the code that generates drives into a
shared file and include it for both q35 and aarch64.
This also lets aarch64 support large IO experimentation
(CONFIG_QEMU_ENABLE_EXTRA_DRIVE_LARGEIO).
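The drive-type dispatch in the shared gen_drives.j2 can be sketched in
Python (illustrative only; the function and flag names below mirror the
template variables, they are not part of kdevops):

```python
# Illustrative sketch of the drive-type dispatch in gen_drives.j2.
# The template checks the extra-storage flags in order and invokes the
# matching drive-generation macro; the largeio variants take precedence
# when libvirt_largeio_enable is set.
def pick_drive_macro(ide=False, virtio=False, nvme=False, largeio=False):
    if ide:
        return "gen_drive_ide"
    if virtio:
        return "gen_drive_large_io_virtio" if largeio else "gen_drive_virtio"
    if nvme:
        return "gen_drive_large_io_nvme" if largeio else "gen_drive_nvme"
    return None  # no extra storage drive configured
```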
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
.../roles/gen_nodes/templates/gen_drives.j2 | 57 ++++++++++++++++++
.../gen_nodes/templates/guestfs_q35.j2.xml | 58 +------------------
.../gen_nodes/templates/guestfs_virt.j2.xml | 38 +-----------
3 files changed, 59 insertions(+), 94 deletions(-)
create mode 100644 playbooks/roles/gen_nodes/templates/gen_drives.j2
diff --git a/playbooks/roles/gen_nodes/templates/gen_drives.j2 b/playbooks/roles/gen_nodes/templates/gen_drives.j2
new file mode 100644
index 00000000..105e2cf0
--- /dev/null
+++ b/playbooks/roles/gen_nodes/templates/gen_drives.j2
@@ -0,0 +1,57 @@
+{% if libvirt_extra_storage_drive_ide %}
+{{ drives.gen_drive_ide(4,
+ kdevops_storage_pool_path,
+ hostname,
+ libvirt_extra_drive_format,
+ libvirt_extra_storage_aio_mode,
+ libvirt_extra_storage_aio_cache_mode) }}
+{% elif libvirt_extra_storage_drive_virtio %}
+{% if libvirt_largeio_enable %}
+{{ drives.gen_drive_large_io_virtio(libvirt_largeio_logical_compat,
+ libvirt_largeio_logical_compat_size,
+ libvirt_largeio_pow_limit,
+ libvirt_largeio_drives_per_space,
+ hostname,
+ libvirt_extra_drive_format,
+ libvirt_extra_storage_aio_mode,
+ libvirt_extra_storage_aio_cache_mode,
+ kdevops_storage_pool_path) }}
+{% else %}
+{{ drives.gen_drive_virtio(4,
+ kdevops_storage_pool_path,
+ hostname,
+ libvirt_extra_drive_format,
+ libvirt_extra_storage_aio_mode,
+ libvirt_extra_storage_aio_cache_mode,
+ libvirt_extra_storage_virtio_logical_block_size,
+ libvirt_extra_storage_virtio_physical_block_size) }}
+{% endif %}
+{% elif libvirt_extra_storage_drive_nvme %}
+{% if libvirt_largeio_enable %}
+{{ drives.gen_drive_large_io_nvme(libvirt_largeio_logical_compat,
+ libvirt_largeio_logical_compat_size,
+ libvirt_largeio_pow_limit,
+ libvirt_largeio_drives_per_space,
+ hostname,
+ libvirt_extra_drive_format,
+ libvirt_extra_storage_aio_mode,
+ libvirt_extra_storage_aio_cache_mode,
+ kdevops_storage_pool_path) }}
+{% else %}
+{{ drives.gen_drive_nvme(4,
+ kdevops_storage_pool_path,
+ hostname,
+ libvirt_extra_drive_format,
+ libvirt_extra_storage_aio_mode,
+ libvirt_extra_storage_aio_cache_mode,
+ libvirt_extra_storage_nvme_logical_block_size) }}
+{% endif %}
+{% endif %}
+{% if bootlinux_9p %}
+ {{ drives.gen_9p_mount(bootlinux_9p_driver,
+ bootlinux_9p_fsdev,
+ bootlinux_9p_host_path,
+ bootlinux_9p_mount_tag,
+ bootlinux_9p_security_model,
+ 10) }}
+{% endif %}
diff --git a/playbooks/roles/gen_nodes/templates/guestfs_q35.j2.xml b/playbooks/roles/gen_nodes/templates/guestfs_q35.j2.xml
index fe8be827..ce160490 100644
--- a/playbooks/roles/gen_nodes/templates/guestfs_q35.j2.xml
+++ b/playbooks/roles/gen_nodes/templates/guestfs_q35.j2.xml
@@ -178,62 +178,6 @@
<qemu:arg value='ICH9-LPC.disable_s4=0'/>
<qemu:arg value='-device'/>
<qemu:arg value='pxb-pcie,id=pcie.1,bus_nr=32,bus=pcie.0,addr=0x8'/>
-{% if libvirt_extra_storage_drive_ide %}
-{{ drives.gen_drive_ide(4,
- kdevops_storage_pool_path,
- hostname,
- libvirt_extra_drive_format,
- libvirt_extra_storage_aio_mode,
- libvirt_extra_storage_aio_cache_mode) }}
-{% elif libvirt_extra_storage_drive_virtio %}
-{% if libvirt_largeio_enable %}
-{{ drives.gen_drive_large_io_virtio(libvirt_largeio_logical_compat,
- libvirt_largeio_logical_compat_size,
- libvirt_largeio_pow_limit,
- libvirt_largeio_drives_per_space,
- hostname,
- libvirt_extra_drive_format,
- libvirt_extra_storage_aio_mode,
- libvirt_extra_storage_aio_cache_mode,
- kdevops_storage_pool_path) }}
-{% else %}
-{{ drives.gen_drive_virtio(4,
- kdevops_storage_pool_path,
- hostname,
- libvirt_extra_drive_format,
- libvirt_extra_storage_aio_mode,
- libvirt_extra_storage_aio_cache_mode,
- libvirt_extra_storage_virtio_logical_block_size,
- libvirt_extra_storage_virtio_physical_block_size) }}
-{% endif %}
-{% elif libvirt_extra_storage_drive_nvme %}
-{% if libvirt_largeio_enable %}
-{{ drives.gen_drive_large_io_nvme(libvirt_largeio_logical_compat,
- libvirt_largeio_logical_compat_size,
- libvirt_largeio_pow_limit,
- libvirt_largeio_drives_per_space,
- hostname,
- libvirt_extra_drive_format,
- libvirt_extra_storage_aio_mode,
- libvirt_extra_storage_aio_cache_mode,
- kdevops_storage_pool_path) }}
-{% else %}
-{{ drives.gen_drive_nvme(4,
- kdevops_storage_pool_path,
- hostname,
- libvirt_extra_drive_format,
- libvirt_extra_storage_aio_mode,
- libvirt_extra_storage_aio_cache_mode,
- libvirt_extra_storage_nvme_logical_block_size) }}
-{% endif %}
-{% endif %}
-{% if bootlinux_9p %}
- {{ drives.gen_9p_mount(bootlinux_9p_driver,
- bootlinux_9p_fsdev,
- bootlinux_9p_host_path,
- bootlinux_9p_mount_tag,
- bootlinux_9p_security_model,
- 10) }}
-{% endif %}
+{% include './templates/gen_drives.j2' %}
</qemu:commandline>
</domain>
diff --git a/playbooks/roles/gen_nodes/templates/guestfs_virt.j2.xml b/playbooks/roles/gen_nodes/templates/guestfs_virt.j2.xml
index 9a7f004d..29dc0951 100644
--- a/playbooks/roles/gen_nodes/templates/guestfs_virt.j2.xml
+++ b/playbooks/roles/gen_nodes/templates/guestfs_virt.j2.xml
@@ -174,42 +174,6 @@
<qemu:arg value='ICH9-LPC.disable_s4=0'/>
<qemu:arg value='-device'/>
<qemu:arg value='pxb-pcie,id=pcie.1,bus_nr=32,bus=pcie.0,addr=0x8'/>
-{% if libvirt_extra_storage_drive_ide %}
-{% for n in range(0,4) %}
- <qemu:arg value='-drive'/>
- <qemu:arg value='file={{ kdevops_storage_pool_path }}/guestfs/{{ hostname }}/extra{{ n }}.{{ libvirt_extra_drive_format }},format={{ libvirt_extra_drive_format }},aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},if=none,id=drv{{ n }}'/>
- <qemu:arg value='-device'/>
- <qemu:arg value="ide-hd,drive=drv{{ n }},bus=ide.{{ n }},serial=kdevops{{ n }}"/>
-{% endfor %}
-{% elif libvirt_extra_storage_drive_virtio %}
-{% for n in range(0,4) %}
- <qemu:arg value='-device'/>
- <qemu:arg value='pcie-root-port,id=pcie-port-for-virtio-{{ n }},multifunction=on,bus=pcie.1,addr=0x{{ n }},chassis=5{{ n }}'/>
- <qemu:arg value="-object"/>
- <qemu:arg value="iothread,id=kdevops-virtio-iothread-{{ n }}"/>
- <qemu:arg value="-drive"/>
- <qemu:arg value="file={{ kdevops_storage_pool_path }}/guestfs/{{ hostname }}/extra{{ n }}.{{ libvirt_extra_drive_format }},format={{ libvirt_extra_drive_format }},if=none,aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},id=drv{{ n }}"/>
- <qemu:arg value="-device"/>
- <qemu:arg value="virtio-blk-pci,scsi=off,drive=drv{{ n }},id=virtio-drv{{ n }},serial=kdevops{{ n }},bus=pcie-port-for-virtio-{{ n }},addr=0x0,iothread=kdevops-virtio-iothread-{{ n }},logical_block_size={{ libvirt_extra_storage_virtio_logical_block_size }},physical_block_size={{ libvirt_extra_storage_virtio_physical_block_size }}"/>
-{% endfor %}
-{% elif libvirt_extra_storage_drive_nvme %}
-{% for n in range(0,4) %}
- <qemu:arg value='-device'/>
- <qemu:arg value='pcie-root-port,id=pcie-port-for-nvme-{{ n }},multifunction=on,bus=pcie.1,addr=0x{{ n }},chassis=5{{ n }}'/>
- <qemu:arg value='-drive'/>
- <qemu:arg value='file={{ kdevops_storage_pool_path }}/guestfs/{{ hostname }}/extra{{ n }}.{{ libvirt_extra_drive_format }},format={{ libvirt_extra_drive_format }},if=none,id=drv{{ n }}'/>
- <qemu:arg value='-device'/>
- <qemu:arg value='nvme,id=nvme{{ n }},serial=kdevops{{ n }},bus=pcie-port-for-nvme-{{ n }},addr=0x0'/>
- <qemu:arg value='-device'/>
- <qemu:arg value='nvme-ns,drive=drv{{ n }},bus=nvme{{ n }},nsid=1,logical_block_size=512,physical_block_size=512'/>
-{% endfor %}
-{% endif %}
-{% if bootlinux_9p %}
- <qemu:arg value='-device'/>
- <qemu:arg value='{{ bootlinux_9p_driver }},fsdev={{ bootlinux_9p_fsdev }},mount_tag={{ bootlinux_9p_mount_tag }},bus=pcie.0,addr=0x10'/>
- <qemu:arg value='-fsdev'/>
- <qemu:arg value='local,id={{ bootlinux_9p_fsdev }},path={{ bootlinux_9p_host_path }},security_model={{ bootlinux_9p_security_model }}'/>
-{% endif %}
+{% include './templates/gen_drives.j2' %}
</qemu:commandline>
</domain>
-
--
2.43.0
* [PATCH 2/8] bringup: disable ZNS and CXL for guestfs
2024-03-08 0:03 [PATCH 0/8] guestfs: fixes and enhancements Luis Chamberlain
2024-03-08 0:03 ` [PATCH 1/8] guestfs: use macros for drives for aarch64 Luis Chamberlain
@ 2024-03-08 0:03 ` Luis Chamberlain
2024-03-08 0:03 ` [PATCH 3/8] libvirt: move zns, largio and cxl to its own files Luis Chamberlain
` (6 subsequent siblings)
8 siblings, 0 replies; 15+ messages in thread
From: Luis Chamberlain @ 2024-03-08 0:03 UTC (permalink / raw)
To: kdevops; +Cc: Luis Chamberlain
Support for features like ZNS and CXL requires libvirt XML macros to be
developed for guestfs, and these are not ready yet. So hide them behind
options which make it clear these features are still missing for
guestfs.
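The gating pattern used here can be sketched in Python (illustrative
only, not real Kconfig tooling): each bring-up method selects capability
symbols, and a feature prompt is only visible when its capability has
been selected.

```python
# Illustrative model of the Kconfig gating in this patch: bring-up
# methods select capability symbols, and feature prompts depend on them.
SELECTS = {
    "VAGRANT": {
        "EXTRA_STORAGE_SUPPORTS_ZNS",
        "EXTRA_STORAGE_SUPPORTS_LARGEIO",
        "BRINGUP_SUPPORTS_CXL",
    },
    "GUESTFS": {
        "EXTRA_STORAGE_SUPPORTS_LARGEIO",
    },
}

def feature_visible(capability: str, bringup: str) -> bool:
    """A feature prompt is shown only if the bring-up method selected
    the capability symbol it depends on."""
    return capability in SELECTS.get(bringup, set())
```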
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
kconfigs/Kconfig.bringup | 7 +++++++
kconfigs/Kconfig.extra_storage | 6 ++++++
kconfigs/Kconfig.libvirt | 9 +++++++++
3 files changed, 22 insertions(+)
diff --git a/kconfigs/Kconfig.bringup b/kconfigs/Kconfig.bringup
index ba7b5430..de4128ae 100644
--- a/kconfigs/Kconfig.bringup
+++ b/kconfigs/Kconfig.bringup
@@ -1,3 +1,6 @@
+config BRINGUP_SUPPORTS_CXL
+ bool
+
choice
prompt "Node bring up method"
default VAGRANT
@@ -5,6 +8,9 @@ choice
config VAGRANT
bool "Vagrant for local virtualization (KVM / VirtualBox)"
select KDEVOPS_SSH_CONFIG_UPDATE_STRICT
+ select EXTRA_STORAGE_SUPPORTS_ZNS
+ select EXTRA_STORAGE_SUPPORTS_LARGEIO
+ select BRINGUP_SUPPORTS_CXL
depends on TARGET_ARCH_X86_64
help
This option will enable use of Vagrant. Enable this if you want to
@@ -17,6 +23,7 @@ config VAGRANT
config GUESTFS
bool "Use guestfs-tools for local virtualization via KVM and libvirt (EXPERIMENTAL)"
+ select EXTRA_STORAGE_SUPPORTS_LARGEIO
help
This option will use libguestfs utilities instead of Vagrant to build
guest images and spin them up using libvirt with KVM.
diff --git a/kconfigs/Kconfig.extra_storage b/kconfigs/Kconfig.extra_storage
index 12bb4206..7b0df9a1 100644
--- a/kconfigs/Kconfig.extra_storage
+++ b/kconfigs/Kconfig.extra_storage
@@ -13,3 +13,9 @@ config EXTRA_STORAGE_SUPPORTS_2K
config EXTRA_STORAGE_SUPPORTS_4K
bool
default n
+config EXTRA_STORAGE_SUPPORTS_ZNS
+ bool
+ default n
+config EXTRA_STORAGE_SUPPORTS_LARGEIO
+ bool
+ default n
diff --git a/kconfigs/Kconfig.libvirt b/kconfigs/Kconfig.libvirt
index d8b972c1..7486be49 100644
--- a/kconfigs/Kconfig.libvirt
+++ b/kconfigs/Kconfig.libvirt
@@ -1091,6 +1091,8 @@ config LIBVIRT_STORAGE_POOL_NAME
For instance you may want to use a volume name of "data2" for a path
on a partition on /data2/ or something like that.
+if EXTRA_STORAGE_SUPPORTS_ZNS
+
config QEMU_ENABLE_NVME_ZNS
bool "Enable QEMU NVMe ZNS drives"
depends on LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_NVME
@@ -1238,6 +1240,10 @@ config QEMU_NVME_ZONE_LOGICAL_BLOCK_SIZE
default 4096 if !QEMU_CUSTOM_NVME_ZNS
default QEMU_CUSTOM_NVME_ZONE_LOGICAL_BLOCK_SIZE if QEMU_CUSTOM_NVME_ZNS
+endif # EXTRA_STORAGE_SUPPORTS_ZNS
+
+if EXTRA_STORAGE_SUPPORTS_LARGEIO
+
config QEMU_ENABLE_EXTRA_DRIVE_LARGEIO
bool "Enable QEMU drives for large IO experimentation"
depends on LIBVIRT
@@ -1369,10 +1375,13 @@ config QEMU_LARGEIO_MAX_POW_LIMIT
default 12 if !QEMU_ENABLE_EXTRA_DRIVE_LARGEIO
default QEMU_EXTRA_DRIVE_LARGEIO_MAX_POW_LIMIT if QEMU_ENABLE_EXTRA_DRIVE_LARGEIO
+endif # EXTRA_STORAGE_SUPPORTS_LARGEIO
+
config QEMU_ENABLE_CXL
bool "Enable QEMU CXL devices"
depends on LIBVIRT
depends on LIBVIRT_MACHINE_TYPE_Q35
+ depends on BRINGUP_SUPPORTS_CXL
depends on QEMU_USE_DEVELOPMENT_VERSION
default n
help
--
2.43.0
* [PATCH 3/8] libvirt: move zns, largio and cxl to its own files
2024-03-08 0:03 [PATCH 0/8] guestfs: fixes and enhancements Luis Chamberlain
2024-03-08 0:03 ` [PATCH 1/8] guestfs: use macros for drives for aarch64 Luis Chamberlain
2024-03-08 0:03 ` [PATCH 2/8] bringup: disable ZNS and CXL for guestfs Luis Chamberlain
@ 2024-03-08 0:03 ` Luis Chamberlain
2024-03-08 0:03 ` [PATCH 4/8] guestfs: move options to its own file Luis Chamberlain
` (5 subsequent siblings)
8 siblings, 0 replies; 15+ messages in thread
From: Luis Chamberlain @ 2024-03-08 0:03 UTC (permalink / raw)
To: kdevops; +Cc: Luis Chamberlain
The features for ZNS, large IO support and CXL are pretty large now;
move them to their own Kconfig files to reduce clutter and make things
easier to scale and read.
No functional changes.
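As a side note, the large IO drive sizing described in the moved help
text (QEMU_EXTRA_DRIVE_LARGEIO_COMPAT_SIZE) follows the formula
compat_size * (2 ** n); a quick sketch, assuming that formula:

```python
def largeio_physical_block_size(compat_size: int, n: int) -> int:
    # Physical block size of the n-th large IO drive, per the help text:
    #   libvirt_largeio_logical_compat_size * (2 ** n)
    return compat_size * (2 ** n)
```

With the compat size of 512 and the default power limit of 7, this
reaches the 64k physical block size mentioned in the help text.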
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
kconfigs/Kconfig.libvirt | 362 +------------------------------
kconfigs/Kconfig.libvirt.cxl | 73 +++++++
kconfigs/Kconfig.libvirt.largeio | 134 ++++++++++++
kconfigs/Kconfig.libvirt.zns | 150 +++++++++++++
4 files changed, 360 insertions(+), 359 deletions(-)
create mode 100644 kconfigs/Kconfig.libvirt.cxl
create mode 100644 kconfigs/Kconfig.libvirt.largeio
create mode 100644 kconfigs/Kconfig.libvirt.zns
diff --git a/kconfigs/Kconfig.libvirt b/kconfigs/Kconfig.libvirt
index 7486be49..f6f7d134 100644
--- a/kconfigs/Kconfig.libvirt
+++ b/kconfigs/Kconfig.libvirt
@@ -1091,362 +1091,6 @@ config LIBVIRT_STORAGE_POOL_NAME
For instance you may want to use a volume name of "data2" for a path
on a partition on /data2/ or something like that.
-if EXTRA_STORAGE_SUPPORTS_ZNS
-
-config QEMU_ENABLE_NVME_ZNS
- bool "Enable QEMU NVMe ZNS drives"
- depends on LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_NVME
- default n
- help
- If this option is enabled then you can enable NVMe ZNS drives on the
- guests.
-
-config QEMU_CUSTOM_NVME_ZNS
- bool "Customize QEMU NVMe ZNS settings"
- depends on QEMU_ENABLE_NVME_ZNS
- default n
- help
- If this option is enabled then you will be able to modify the defaults
- used for the 2 NVMe ZNS drives we create for you. By default we create
- two NVMe ZNS drives with 100 GiB of total size, each zone being
- 128 MiB, and so you end up with 800 total zones. The zone capacity
- equals the zone size. The default zone size append limit is also
- set to 0, which means the zone append size limit will equal to the
- maximum data transfer size (MDTS). The default logical and physical
- block size of 4096 bytes is also used. If you want to customize any
- of these ZNS settings for the drives we bring up enable this option.
-
- If unsure say N.
-
-if QEMU_CUSTOM_NVME_ZNS
-
-config QEMU_CUSTOM_NVME_ZONE_DRIVE_SIZE
- int "QEMU ZNS storage NVMe drive size"
- default 102400
- help
- The size of the QEMU NVMe ZNS drive to expose. We expose 2 NVMe
- ZNS drives of 100 GiB by default. This value chagnes its size.
- 100 GiB is a sensible default given most full fstests require about
- 50 GiB of data writes.
-
-config QEMU_CUSTOM_NVME_ZONE_ZASL
- int "QEMU ZNS zasl - zone append size limit power of 2"
- default 0
- help
- This is the zone append size limit. If left at 0 QEMU will use
- the maximum data transfer size (MDTS) for the zone size append limit.
- Otherwise if this value is set to something other than 0, then the
- zone size append limit will equal to 2 to the power of the value set
- here multiplied by the minimum memory page size (4096 bytes) but the
- QEMU promises this value cannot exceed the maximum data transfer size.
-
-config QEMU_CUSTOM_NVME_ZONE_SIZE
- string "QEMU ZNS storage NVMe zone size"
- default "128M"
- help
- The size the the QEMU NVMe ZNS zone size. The number of zones are
- implied by the driver size / zone size. If there is a remainder
- technically that should go into another zone with a smaller zone
- capacity.
-
-config QEMU_CUSTOM_NVME_ZONE_CAPACITY
- string "QEMU ZNS storage NVMe zone capacity"
- default "0M"
- help
- The size to use for the zone capacity. This may be smaller or equal
- to the zone size. If set to 0 then this will ensure the zone
- capacity is equal to the zone size.
-
-config QEMU_CUSTOM_NVME_ZONE_MAX_ACTIVE
- int "QEMU ZNS storage NVMe zone max active"
- default 0
- help
- The max numbe of active zones. The default of 0 means all zones
- can be active at all times.
-
-config QEMU_CUSTOM_NVME_ZONE_MAX_OPEN
- int "QEMU ZNS storage NVMe zone max open"
- default 0
- help
- The max numbe of open zones. The default of 0 means all zones
- can be opened at all times. If the number of active zones is
- specified this value must be less than or equal to that value.
-
-config QEMU_CUSTOM_NVME_ZONE_PHYSICAL_BLOCK_SIZE
- int "QEMU ZNS storage NVMe physical block size"
- default 4096
- help
- The physical block size to use for ZNS drives. This ends up
- what is put into the /sys/block/<disk>/queue/physical_block_size
- and is the smallest unit a physical storage device can write
- atomically. It is usually the same as the logical block size but may
- be bigger. One example is SATA drives with 4KB sectors that expose a
- 512-byte logical block size to the operating system. For stacked
- block devices the physical_block_size variable contains the maximum
- physical_block_size of the component devices.
-
-config QEMU_CUSTOM_NVME_ZONE_LOGICAL_BLOCK_SIZE
- int "QEMU ZNS storage NVMe logical block size"
- default 4096
- help
- The logical block size to use for ZNS drives. This ends up what is
- put into the /sys/block/<disk>/queue/logical_block_size and the
- smallest unit the storage device can address. It is typically 512
- bytes.
-
-endif # QEMU_CUSTOM_NVME_ZNS
-
-config LIBVIRT_ENABLE_ZNS
- bool
- default y if QEMU_ENABLE_NVME_ZNS
-
-config QEMU_NVME_ZONE_DRIVE_SIZE
- int
- default 102400 if !QEMU_CUSTOM_NVME_ZNS
- default QEMU_CUSTOM_NVME_ZONE_DRIVE_SIZE if QEMU_CUSTOM_NVME_ZNS
-
-config QEMU_NVME_ZONE_ZASL
- int
- default 0 if !QEMU_CUSTOM_NVME_ZNS
- default QEMU_CUSTOM_NVME_ZONE_ZASL if QEMU_CUSTOM_NVME_ZNS
-
-config QEMU_NVME_ZONE_SIZE
- string
- default "128M" if !QEMU_CUSTOM_NVME_ZNS
- default QEMU_CUSTOM_NVME_ZONE_SIZE if QEMU_CUSTOM_NVME_ZNS
-
-config QEMU_NVME_ZONE_CAPACITY
- string
- default "0M" if !QEMU_CUSTOM_NVME_ZNS
- default QEMU_CUSTOM_NVME_ZONE_CAPACITY if QEMU_CUSTOM_NVME_ZNS
-
-config QEMU_NVME_ZONE_MAX_ACTIVE
- int
- default 0 if !QEMU_CUSTOM_NVME_ZNS
- default QEMU_CUSTOM_NVME_ZONE_MAX_ACTIVE if QEMU_CUSTOM_NVME_ZNS
-
-config QEMU_NVME_ZONE_MAX_OPEN
- int
- default 0 if !QEMU_CUSTOM_NVME_ZNS
- default QEMU_CUSTOM_NVME_ZONE_MAX_OPEN if QEMU_CUSTOM_NVME_ZNS
-
-config QEMU_NVME_ZONE_PHYSICAL_BLOCK_SIZE
- int
- default 4096 if !QEMU_CUSTOM_NVME_ZNS
- default QEMU_CUSTOM_NVME_ZONE_PHYSICAL_BLOCK_SIZE if QEMU_CUSTOM_NVME_ZNS
-
-config QEMU_NVME_ZONE_LOGICAL_BLOCK_SIZE
- int
- default 4096 if !QEMU_CUSTOM_NVME_ZNS
- default QEMU_CUSTOM_NVME_ZONE_LOGICAL_BLOCK_SIZE if QEMU_CUSTOM_NVME_ZNS
-
-endif # EXTRA_STORAGE_SUPPORTS_ZNS
-
-if EXTRA_STORAGE_SUPPORTS_LARGEIO
-
-config QEMU_ENABLE_EXTRA_DRIVE_LARGEIO
- bool "Enable QEMU drives for large IO experimentation"
- depends on LIBVIRT
- default n
- help
- If you want to experiment with large IO either with NVMe or virtio
- you can enable this option. This will create a few additional drives
- which are dedicated for largio experimentation testing.
-
- For now you will need a distribution with a root filesystem on XFS
- or btrfs, and so you will want to enable the kdevops distribution and
- VAGRANT_KDEVOPS_DEBIAN_TESTING64_XFS_20230427. This is a requirement
- given all block devices must use iomap and that is the only current
- way to disable buffer-heads. Eventually this limitation is expected
- You can also use large-block-20230525 with Amazon Linux 2023 on AWS.
-
- If unsure say N.
-
-if QEMU_ENABLE_EXTRA_DRIVE_LARGEIO
-
-config QEMU_EXTRA_DRIVE_LARGEIO_NUM_DRIVES_PER_SPACE
- int "How many qemu drives to create per each target size"
- default 4
- help
- If you are going to try to mess with LBS on 4k LBA you can experiment
- with:
-
- - 4k block size
- - 8k block size
- - 16k block size
- - 32k block size
- - 64k block size
-
- So in total 4 drives. For a drive with an LBA format of 16k, you can
- only experiment with block sizes:
-
- - 16k block size
- - 32k block size
- - 64k block size
-
- In theory you can experiment up to MAX_PAGECACHE_ORDER and to make
- things worse some filesystems can use block sizes which are not power
- of two. For now filesystems only support up to max block size 64k, so
- we can just keep the max drive sizes down a bit. Likewise twice the
- PAGE_SIZE is not supported as we require at least order 2 so 16k as
- folios use the 3rd page for the deferred list. So you really only need
- for 4k today:
-
- - 4k block size
- - 16k block size
- - 32k block size
- - 64k block size
-
- If we create 4 drives per space you can have 4 for basic baseline
- coverage testing. It seems the max limit is about 20 drives per
- qemu pcie port today, if you enable more than the default 4, good
- luck!
-
-config QEMU_EXTRA_DRIVE_LARGEIO_BASE_SIZE
- int "QEMU extra drive drive base size"
- default 10240
- help
- The base size of the QEMU extra storage drive to expose. The
- size is increased by 1 MiB as we go down the list of extra large IO
- drives.
-
-config QEMU_EXTRA_DRIVE_LARGEIO_COMPAT
- bool "Use a compatibility logical block size"
- default n
- help
- Since older spindle drives used to work with 512 bytes some drives
- exist with support to handle 512 writes even if they physically store
- more data on their drives for that one 512 byte write. Enable this if
- you want to ensure your large IO drives always have a logical block
- size restrained by the compatibility size you want to support.
-
- By default this is not enabled, and therefore the logical block size
- for the large IO drives will be equal to the physical block size.
-
-config QEMU_EXTRA_DRIVE_LARGEIO_COMPAT_SIZE
- int "Large IO compat size"
- default 512
- help
- This is the compatibility base block size to use for older drives.
- Even if you disable QEMU_EXTRA_DRIVE_LARGEIO_COMPAT, this value will
- be used as the base for the computation for the physical block size
- for the large IO drives we create for you using the formula:
-
- libvirt_largeio_logical_compat_size * (2 ** n)
-
- where n is the index of the large IO drive.
-
-config QEMU_EXTRA_DRIVE_LARGEIO_MAX_POW_LIMIT
- int "Large IO - number of drives - power"
- default 7
- help
- We use an iterator to create the number of large drives on the
- guest system using:
-
- for n in range(0,libvirt_largeio_pow_limit)
- pbs = compat_size * (2 ** n)
-
- Using a compat_size of 512 means we go up to 64k physical block
- size by using the default of 7.
-
- This provides the value for the libvirt_largeio_pow_limit. By
- default we set this to 12 so we get drives of different physical
- sizes in powers of 2 ranging from 512 up to 1 GiB. You can reduce
- this if you want less drives to experiment with.
-
-endif # QEMU_ENABLE_EXTRA_DRIVE_LARGEIO
-
-config LIBVIRT_ENABLE_LARGEIO
- bool
- default y if QEMU_ENABLE_EXTRA_DRIVE_LARGEIO
-
-config QEMU_LARGEIO_DRIVE_BASE_SIZE
- int
- default 10240 if !QEMU_ENABLE_EXTRA_DRIVE_LARGEIO
- default QEMU_EXTRA_DRIVE_LARGEIO_BASE_SIZE if QEMU_ENABLE_EXTRA_DRIVE_LARGEIO
-
-config QEMU_LARGEIO_COMPAT_SIZE
- int
- default 512 if !QEMU_ENABLE_EXTRA_DRIVE_LARGEIO
- default QEMU_EXTRA_DRIVE_LARGEIO_COMPAT_SIZE if QEMU_ENABLE_EXTRA_DRIVE_LARGEIO
-
-config QEMU_LARGEIO_MAX_POW_LIMIT
- int
- default 12 if !QEMU_ENABLE_EXTRA_DRIVE_LARGEIO
- default QEMU_EXTRA_DRIVE_LARGEIO_MAX_POW_LIMIT if QEMU_ENABLE_EXTRA_DRIVE_LARGEIO
-
-endif # EXTRA_STORAGE_SUPPORTS_LARGEIO
-
-config QEMU_ENABLE_CXL
- bool "Enable QEMU CXL devices"
- depends on LIBVIRT
- depends on LIBVIRT_MACHINE_TYPE_Q35
- depends on BRINGUP_SUPPORTS_CXL
- depends on QEMU_USE_DEVELOPMENT_VERSION
- default n
- help
- If this option is enabled then you can enable different types of
- CXL devices which we will emulate for you.
-
-if QEMU_ENABLE_CXL
-
-config QEMU_START_QMP_ON_TCP_SOCKET
- bool "Start QMP on a TCP socket"
- default n
-
-if QEMU_START_QMP_ON_TCP_SOCKET
-
-config QEMU_QMP_COMMAND_LINE_STRING
- string "Qemu command line string for qmp"
- default "tcp:localhost:4444,server"
- help
- Option for qmp interface (from https://wiki.qemu.org/Documentation/QMP).
-
-config QEMU_QMP_WAIT_ON
- bool "Let Qemu instance wait for qmp connection"
- default n
-
-endif # QEMU_START_QMP_ON_TCP_SOCKET
-
-choice
- prompt "CXL topology to enable"
- default QEMU_ENABLE_CXL_DEMO_TOPOLOGY_1
-
-config QEMU_ENABLE_CXL_DEMO_TOPOLOGY_1
- bool "Basic CXL demo topology with a CXL Type 3 device"
- help
- This is a basic CXL demo topology. It consists of single host bridge that
- has one root port. A Type 3 persistent memory device is attached to the
- root port. This topology is referred to as a passthrough decoder in
- kernel terminology. The kernel CXL core will consume the resource exposed
- in the ACPI CXL memory layout description, such as Host Managed
- Device memory (HDM), CXL Early Discovery Table (CEDT), and the
- CXL Fixed Memory Window Structures to publish the root of a
- cxl_port decode hierarchy to map regions that represent System RAM,
- or Persistent Memory regions to be managed by LIBNVDIMM.
-
-config QEMU_ENABLE_CXL_DEMO_TOPOLOGY_2
- bool "Host bridge with two root ports"
- help
- This topology extends the first demo topology by placing two root ports
- in the host bridge. This ensures that the decoder associated with the
- host bridge is not a passthrough decoder.
-
-config QEMU_ENABLE_CXL_SWITCH_TOPOLOGY_1
- bool "CXL switch connected to root port with two down stream ports"
- help
- This topology adds a CXL switch in the topology. A memory device
- is connected to one of the down stream ports. The upstream port
- is connected to a root port on the host bridge.
-
-config QEMU_ENABLE_CXL_DEMO_DCD_TOPOLOGY_1
- bool "CXL DCD demo directly attached to a single-port HB"
- help
- This topology adds a DCD device in the topology, directly attached to
- a host bridge with only one root port.
- The device has zero (volatile or non-volatile) static capacity
- and 2 dynamic capacity regions where dynamic extents can be added.
-
-endchoice
-
-endif # QEMU_ENABLE_CXL
+source "kconfigs/Kconfig.libvirt.zns"
+source "kconfigs/Kconfig.libvirt.largeio"
+source "kconfigs/Kconfig.libvirt.cxl"
diff --git a/kconfigs/Kconfig.libvirt.cxl b/kconfigs/Kconfig.libvirt.cxl
new file mode 100644
index 00000000..bac83a57
--- /dev/null
+++ b/kconfigs/Kconfig.libvirt.cxl
@@ -0,0 +1,73 @@
+config QEMU_ENABLE_CXL
+ bool "Enable QEMU CXL devices"
+ depends on LIBVIRT
+ depends on LIBVIRT_MACHINE_TYPE_Q35
+ depends on BRINGUP_SUPPORTS_CXL
+ depends on QEMU_USE_DEVELOPMENT_VERSION
+ default n
+ help
+ If this option is enabled then you can enable different types of
+ CXL devices which we will emulate for you.
+
+if QEMU_ENABLE_CXL
+
+config QEMU_START_QMP_ON_TCP_SOCKET
+ bool "Start QMP on a TCP socket"
+ default n
+
+if QEMU_START_QMP_ON_TCP_SOCKET
+
+config QEMU_QMP_COMMAND_LINE_STRING
+ string "Qemu command line string for qmp"
+ default "tcp:localhost:4444,server"
+ help
+ Option for qmp interface (from https://wiki.qemu.org/Documentation/QMP).
+
+config QEMU_QMP_WAIT_ON
+ bool "Let Qemu instance wait for qmp connection"
+ default n
+
+endif # QEMU_START_QMP_ON_TCP_SOCKET
+
+choice
+ prompt "CXL topology to enable"
+ default QEMU_ENABLE_CXL_DEMO_TOPOLOGY_1
+
+config QEMU_ENABLE_CXL_DEMO_TOPOLOGY_1
+ bool "Basic CXL demo topology with a CXL Type 3 device"
+ help
+ This is a basic CXL demo topology. It consists of single host bridge that
+ has one root port. A Type 3 persistent memory device is attached to the
+ root port. This topology is referred to as a passthrough decoder in
+ kernel terminology. The kernel CXL core will consume the resource exposed
+ in the ACPI CXL memory layout description, such as Host Managed
+ Device memory (HDM), CXL Early Discovery Table (CEDT), and the
+ CXL Fixed Memory Window Structures to publish the root of a
+ cxl_port decode hierarchy to map regions that represent System RAM,
+ or Persistent Memory regions to be managed by LIBNVDIMM.
+
+config QEMU_ENABLE_CXL_DEMO_TOPOLOGY_2
+ bool "Host bridge with two root ports"
+ help
+ This topology extends the first demo topology by placing two root ports
+ in the host bridge. This ensures that the decoder associated with the
+ host bridge is not a passthrough decoder.
+
+config QEMU_ENABLE_CXL_SWITCH_TOPOLOGY_1
+ bool "CXL switch connected to root port with two down stream ports"
+ help
+ This topology adds a CXL switch in the topology. A memory device
+ is connected to one of the down stream ports. The upstream port
+ is connected to a root port on the host bridge.
+
+config QEMU_ENABLE_CXL_DEMO_DCD_TOPOLOGY_1
+ bool "CXL DCD demo directly attached to a single-port HB"
+ help
+ This topology adds a DCD device in the topology, directly attached to
+ a host bridge with only one root port.
+ The device has zero (volatile or non-volatile) static capacity
+ and 2 dynamic capacity regions where dynamic extents can be added.
+
+endchoice
+
+endif # QEMU_ENABLE_CXL
diff --git a/kconfigs/Kconfig.libvirt.largeio b/kconfigs/Kconfig.libvirt.largeio
new file mode 100644
index 00000000..0d9e5973
--- /dev/null
+++ b/kconfigs/Kconfig.libvirt.largeio
@@ -0,0 +1,134 @@
+if EXTRA_STORAGE_SUPPORTS_LARGEIO
+
+config QEMU_ENABLE_EXTRA_DRIVE_LARGEIO
+ bool "Enable QEMU drives for large IO experimentation"
+ depends on LIBVIRT
+ default n
+ help
+ If you want to experiment with large IO either with NVMe or virtio
+ you can enable this option. This will create a few additional drives
+ which are dedicated for largio experimentation testing.
+
+ For now you will need a distribution with a root filesystem on XFS
+ or btrfs, and so you will want to enable the kdevops distribution and
+ VAGRANT_KDEVOPS_DEBIAN_TESTING64_XFS_20230427. This is a requirement
+ given all block devices must use iomap and that is the only current
+ way to disable buffer-heads. Eventually this limitation is expected
+ You can also use large-block-20230525 with Amazon Linux 2023 on AWS.
+
+ If unsure say N.
+
+if QEMU_ENABLE_EXTRA_DRIVE_LARGEIO
+
+config QEMU_EXTRA_DRIVE_LARGEIO_NUM_DRIVES_PER_SPACE
+ int "How many qemu drives to create per each target size"
+ default 4
+ help
+ If you are going to try to mess with LBS on 4k LBA you can experiment
+ with:
+
+ - 4k block size
+ - 8k block size
+ - 16k block size
+ - 32k block size
+ - 64k block size
+
+ So in total 4 drives. For a drive with an LBA format of 16k, you can
+ only experiment with block sizes:
+
+ - 16k block size
+ - 32k block size
+ - 64k block size
+
+ In theory you can experiment up to MAX_PAGECACHE_ORDER and to make
+ things worse some filesystems can use block sizes which are not power
+ of two. For now filesystems only support up to max block size 64k, so
+ we can just keep the max drive sizes down a bit. Likewise twice the
+ PAGE_SIZE is not supported as we require at least order 2 so 16k as
+ folios use the 3rd page for the deferred list. So you really only need
+ for 4k today:
+
+ - 4k block size
+ - 16k block size
+ - 32k block size
+ - 64k block size
+
+ If we create 4 drives per space you can have 4 for basic baseline
+ coverage testing. It seems the max limit is about 20 drives per
+ qemu pcie port today, if you enable more than the default 4, good
+ luck!
+
+config QEMU_EXTRA_DRIVE_LARGEIO_BASE_SIZE
+ int "QEMU extra drive drive base size"
+ default 10240
+ help
+ The base size of the QEMU extra storage drive to expose. The
+ size is increased by 1 MiB as we go down the list of extra large IO
+ drives.
+
+config QEMU_EXTRA_DRIVE_LARGEIO_COMPAT
+ bool "Use a compatibility logical block size"
+ default n
+ help
+ Since older spindle drives worked with 512-byte sectors, some drives
+ can still handle 512-byte writes even if they physically store more
+ data for that one 512-byte write. Enable this if you want to ensure
+ your large IO drives always have a logical block size constrained to
+ the compatibility size you want to support.
+
+ By default this is not enabled, and therefore the logical block size
+ for the large IO drives will be equal to the physical block size.
+
+config QEMU_EXTRA_DRIVE_LARGEIO_COMPAT_SIZE
+ int "Large IO compat size"
+ default 512
+ help
+ This is the compatibility base block size to use for older drives.
+ Even if you disable QEMU_EXTRA_DRIVE_LARGEIO_COMPAT, this value will
+ be used as the base for computing the physical block size of the
+ large IO drives we create for you, using the formula:
+
+ libvirt_largeio_logical_compat_size * (2 ** n)
+
+ where n is the index of the large IO drive.
+
+config QEMU_EXTRA_DRIVE_LARGEIO_MAX_POW_LIMIT
+ int "Large IO - number of drives - power"
+ default 7
+ help
+ We use an iterator to create the number of large drives on the
+ guest system using:
+
+ for n in range(0,libvirt_largeio_pow_limit)
+ pbs = compat_size * (2 ** n)
+
+ Using a compat_size of 512 means we go up to 64k physical block
+ size by using the default of 7.
+
+ This provides the value for libvirt_largeio_pow_limit. By default
+ we set this to 7 so we get drives of different physical block sizes
+ in powers of 2 starting at the compat size of 512. You can reduce
+ this if you want fewer drives to experiment with.
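The iterator above can be sketched in plain Python. This is illustrative, not kdevops code; the function name and defaults mirror libvirt_largeio_logical_compat_size (512) and libvirt_largeio_pow_limit (7):

```python
def largeio_physical_block_sizes(compat_size=512, pow_limit=7):
    """Physical block size for each generated large IO drive,
    following pbs = compat_size * (2 ** n) for n in range(0, pow_limit)."""
    return [compat_size * (2 ** n) for n in range(0, pow_limit)]

sizes = largeio_physical_block_sizes()
# [512, 1024, 2048, 4096, 8192, 16384, 32768]
```

Note that the bringup script iterates pow_limit + 1 times, which would extend this list by one more entry, up to 64k.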
+
+endif # QEMU_ENABLE_EXTRA_DRIVE_LARGEIO
+
+config LIBVIRT_ENABLE_LARGEIO
+ bool
+ default y if QEMU_ENABLE_EXTRA_DRIVE_LARGEIO
+
+config QEMU_LARGEIO_DRIVE_BASE_SIZE
+ int
+ default 10240 if !QEMU_ENABLE_EXTRA_DRIVE_LARGEIO
+ default QEMU_EXTRA_DRIVE_LARGEIO_BASE_SIZE if QEMU_ENABLE_EXTRA_DRIVE_LARGEIO
+
+config QEMU_LARGEIO_COMPAT_SIZE
+ int
+ default 512 if !QEMU_ENABLE_EXTRA_DRIVE_LARGEIO
+ default QEMU_EXTRA_DRIVE_LARGEIO_COMPAT_SIZE if QEMU_ENABLE_EXTRA_DRIVE_LARGEIO
+
+config QEMU_LARGEIO_MAX_POW_LIMIT
+ int
+ default 12 if !QEMU_ENABLE_EXTRA_DRIVE_LARGEIO
+ default QEMU_EXTRA_DRIVE_LARGEIO_MAX_POW_LIMIT if QEMU_ENABLE_EXTRA_DRIVE_LARGEIO
+
+endif # EXTRA_STORAGE_SUPPORTS_LARGEIO
diff --git a/kconfigs/Kconfig.libvirt.zns b/kconfigs/Kconfig.libvirt.zns
new file mode 100644
index 00000000..1b1b7090
--- /dev/null
+++ b/kconfigs/Kconfig.libvirt.zns
@@ -0,0 +1,150 @@
+if EXTRA_STORAGE_SUPPORTS_ZNS
+
+config QEMU_ENABLE_NVME_ZNS
+ bool "Enable QEMU NVMe ZNS drives"
+ depends on LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_NVME
+ default n
+ help
+ If this option is enabled then you can enable NVMe ZNS drives on the
+ guests.
+
+config QEMU_CUSTOM_NVME_ZNS
+ bool "Customize QEMU NVMe ZNS settings"
+ depends on QEMU_ENABLE_NVME_ZNS
+ default n
+ help
+ If this option is enabled then you will be able to modify the defaults
+ used for the 2 NVMe ZNS drives we create for you. By default we create
+ two NVMe ZNS drives with 100 GiB of total size, each zone being
+ 128 MiB, and so you end up with 800 total zones. The zone capacity
+ equals the zone size. The default zone size append limit is also
+ set to 0, which means the zone append size limit will equal the
+ maximum data transfer size (MDTS). The default logical and physical
+ block size of 4096 bytes is also used. If you want to customize any
+ of these ZNS settings for the drives we bring up, enable this option.
+
+ If unsure say N.
+
+if QEMU_CUSTOM_NVME_ZNS
+
+config QEMU_CUSTOM_NVME_ZONE_DRIVE_SIZE
+ int "QEMU ZNS storage NVMe drive size"
+ default 102400
+ help
+ The size of the QEMU NVMe ZNS drive to expose. We expose 2 NVMe
+ ZNS drives of 100 GiB by default. This value changes their size.
+ 100 GiB is a sensible default given most full fstests require about
+ 50 GiB of data writes.
+
+config QEMU_CUSTOM_NVME_ZONE_ZASL
+ int "QEMU ZNS zasl - zone append size limit power of 2"
+ default 0
+ help
+ This is the zone append size limit. If left at 0, QEMU will use
+ the maximum data transfer size (MDTS) as the zone append size limit.
+ Otherwise, if this value is set to something other than 0, the
+ zone append size limit will equal 2 to the power of the value set
+ here multiplied by the minimum memory page size (4096 bytes); QEMU
+ promises this value cannot exceed the maximum data transfer size.
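The rule above can be sketched as follows. This is a rough illustration, not QEMU source; the minimum page size and the example MDTS value are assumptions:

```python
MIN_PAGE_SIZE = 4096  # minimum memory page size assumed by QEMU

def zone_append_size_limit(zasl_pow, mdts_bytes):
    """Effective zone append size limit for a given zasl power value."""
    if zasl_pow == 0:
        # 0 means: fall back to the maximum data transfer size (MDTS)
        return mdts_bytes
    # otherwise 2^zasl * minimum page size, which cannot exceed MDTS
    return min((2 ** zasl_pow) * MIN_PAGE_SIZE, mdts_bytes)
```

For example, with an MDTS of 512 KiB, a zasl of 3 yields a 32 KiB limit (2^3 * 4096).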
+
+config QEMU_CUSTOM_NVME_ZONE_SIZE
+ string "QEMU ZNS storage NVMe zone size"
+ default "128M"
+ help
+ The size of the QEMU NVMe ZNS zone. The number of zones is
+ implied by the drive size / zone size. If there is a remainder,
+ technically that should go into another zone with a smaller zone
+ capacity.
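The zone count math above can be sketched as (illustrative Python, using the default 100 GiB drive and 128 MiB zones):

```python
def zone_count(drive_mib, zone_mib):
    """Zones implied by drive size / zone size; a remainder would
    notionally become one extra zone with a smaller zone capacity."""
    full, remainder = divmod(drive_mib, zone_mib)
    return full + (1 if remainder else 0)

zone_count(102400, 128)  # 800 zones for the default drive size
```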
+
+config QEMU_CUSTOM_NVME_ZONE_CAPACITY
+ string "QEMU ZNS storage NVMe zone capacity"
+ default "0M"
+ help
+ The size to use for the zone capacity. This may be smaller than
+ or equal to the zone size. If set to 0 the zone capacity will
+ equal the zone size.
+
+config QEMU_CUSTOM_NVME_ZONE_MAX_ACTIVE
+ int "QEMU ZNS storage NVMe zone max active"
+ default 0
+ help
+ The max number of active zones. The default of 0 means all zones
+ can be active at all times.
+
+config QEMU_CUSTOM_NVME_ZONE_MAX_OPEN
+ int "QEMU ZNS storage NVMe zone max open"
+ default 0
+ help
+ The max number of open zones. The default of 0 means all zones
+ can be opened at all times. If the number of active zones is
+ specified this value must be less than or equal to that value.
+
+config QEMU_CUSTOM_NVME_ZONE_PHYSICAL_BLOCK_SIZE
+ int "QEMU ZNS storage NVMe physical block size"
+ default 4096
+ help
+ The physical block size to use for ZNS drives. This ends up in
+ /sys/block/<disk>/queue/physical_block_size
+ and is the smallest unit a physical storage device can write
+ atomically. It is usually the same as the logical block size but may
+ be bigger. One example is SATA drives with 4KB sectors that expose a
+ 512-byte logical block size to the operating system. For stacked
+ block devices the physical_block_size variable contains the maximum
+ physical_block_size of the component devices.
+
+config QEMU_CUSTOM_NVME_ZONE_LOGICAL_BLOCK_SIZE
+ int "QEMU ZNS storage NVMe logical block size"
+ default 4096
+ help
+ The logical block size to use for ZNS drives. This ends up in
+ /sys/block/<disk>/queue/logical_block_size and is the smallest
+ unit the storage device can address. It is typically 512 bytes.
+
+endif # QEMU_CUSTOM_NVME_ZNS
+
+config LIBVIRT_ENABLE_ZNS
+ bool
+ default y if QEMU_ENABLE_NVME_ZNS
+
+config QEMU_NVME_ZONE_DRIVE_SIZE
+ int
+ default 102400 if !QEMU_CUSTOM_NVME_ZNS
+ default QEMU_CUSTOM_NVME_ZONE_DRIVE_SIZE if QEMU_CUSTOM_NVME_ZNS
+
+config QEMU_NVME_ZONE_ZASL
+ int
+ default 0 if !QEMU_CUSTOM_NVME_ZNS
+ default QEMU_CUSTOM_NVME_ZONE_ZASL if QEMU_CUSTOM_NVME_ZNS
+
+config QEMU_NVME_ZONE_SIZE
+ string
+ default "128M" if !QEMU_CUSTOM_NVME_ZNS
+ default QEMU_CUSTOM_NVME_ZONE_SIZE if QEMU_CUSTOM_NVME_ZNS
+
+config QEMU_NVME_ZONE_CAPACITY
+ string
+ default "0M" if !QEMU_CUSTOM_NVME_ZNS
+ default QEMU_CUSTOM_NVME_ZONE_CAPACITY if QEMU_CUSTOM_NVME_ZNS
+
+config QEMU_NVME_ZONE_MAX_ACTIVE
+ int
+ default 0 if !QEMU_CUSTOM_NVME_ZNS
+ default QEMU_CUSTOM_NVME_ZONE_MAX_ACTIVE if QEMU_CUSTOM_NVME_ZNS
+
+config QEMU_NVME_ZONE_MAX_OPEN
+ int
+ default 0 if !QEMU_CUSTOM_NVME_ZNS
+ default QEMU_CUSTOM_NVME_ZONE_MAX_OPEN if QEMU_CUSTOM_NVME_ZNS
+
+config QEMU_NVME_ZONE_PHYSICAL_BLOCK_SIZE
+ int
+ default 4096 if !QEMU_CUSTOM_NVME_ZNS
+ default QEMU_CUSTOM_NVME_ZONE_PHYSICAL_BLOCK_SIZE if QEMU_CUSTOM_NVME_ZNS
+
+config QEMU_NVME_ZONE_LOGICAL_BLOCK_SIZE
+ int
+ default 4096 if !QEMU_CUSTOM_NVME_ZNS
+ default QEMU_CUSTOM_NVME_ZONE_LOGICAL_BLOCK_SIZE if QEMU_CUSTOM_NVME_ZNS
+
+endif # EXTRA_STORAGE_SUPPORTS_ZNS
--
2.43.0
* [PATCH 4/8] guestfs: move options to its own file
2024-03-08 0:03 [PATCH 0/8] guestfs: fixes and enhancements Luis Chamberlain
` (2 preceding siblings ...)
2024-03-08 0:03 ` [PATCH 3/8] libvirt: move zns, largio and cxl to its own files Luis Chamberlain
@ 2024-03-08 0:03 ` Luis Chamberlain
2024-03-08 0:03 ` [PATCH 5/8] bringup: match default distro to user's distro Luis Chamberlain
` (4 subsequent siblings)
8 siblings, 0 replies; 15+ messages in thread
From: Luis Chamberlain @ 2024-03-08 0:03 UTC (permalink / raw)
To: kdevops; +Cc: Luis Chamberlain
Move the guestfs config options into their own file. This makes it
easier to find the guestfs options.
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
kconfigs/Kconfig.bringup | 52 +---------------------------------------
kconfigs/Kconfig.guestfs | 50 ++++++++++++++++++++++++++++++++++++++
2 files changed, 51 insertions(+), 51 deletions(-)
create mode 100644 kconfigs/Kconfig.guestfs
diff --git a/kconfigs/Kconfig.bringup b/kconfigs/Kconfig.bringup
index de4128ae..2d913ea6 100644
--- a/kconfigs/Kconfig.bringup
+++ b/kconfigs/Kconfig.bringup
@@ -64,62 +64,12 @@ config SKIP_BRINGUP
endchoice
-if GUESTFS
-
-choice
- prompt "Guestfs Linux distribution to use"
- default GUESTFS_FEDORA
-
-config GUESTFS_FEDORA
- bool "Fedora (or derived distro)"
- select HAVE_DISTRO_XFS_PREFERS_MANUAL if FSTESTS_XFS
- select HAVE_DISTRO_BTRFS_PREFERS_MANUAL if FSTESTS_BTRFS
- select HAVE_DISTRO_EXT4_PREFERS_MANUAL if FSTESTS_EXT4
- select HAVE_DISTRO_PREFERS_FSTESTS_WATCHDOG if KDEVOPS_WORKFLOW_ENABLE_FSTESTS
- select HAVE_DISTRO_PREFERS_FSTESTS_WATCHDOG_KILL if KDEVOPS_WORKFLOW_ENABLE_FSTESTS
- help
- This option will set the target guest to be a distro in the Fedora family.
- For example, Fedora, RHEL, etc.
-
-config GUESTFS_DEBIAN
- bool "Debian"
- select HAVE_CUSTOM_DISTRO_HOST_PREFIX
- select HAVE_DISTRO_XFS_PREFERS_MANUAL if FSTESTS_XFS
- select HAVE_DISTRO_BTRFS_PREFERS_MANUAL if FSTESTS_BTRFS
- select HAVE_DISTRO_EXT4_PREFERS_MANUAL if FSTESTS_EXT4
- select HAVE_DISTRO_PREFERS_CUSTOM_HOST_PREFIX
- select HAVE_DISTRO_PREFERS_FSTESTS_WATCHDOG if KDEVOPS_WORKFLOW_ENABLE_FSTESTS
- select HAVE_DISTRO_PREFERS_FSTESTS_WATCHDOG_KILL if KDEVOPS_WORKFLOW_ENABLE_FSTESTS
- help
- This option will set the target guest to Debian.
-
-endchoice
-
-config VIRT_BUILDER_OS_VERSION
- string "virt-builder os-version"
- default "fedora-39" if GUESTFS_FEDORA
- default "debian-12" if GUESTFS_DEBIAN
- help
- Have virt-builder use this os-version string to
- build a root image for the guest. Run "virt-builder -l"
- to get a list of operating systems and versions supported
- by guestfs.
-
-if GUESTFS_DEBIAN
-
-config GUESTFS_DEBIAN_BOX_SHORT
- string
- default "debian12" if GUESTFS_DEBIAN
-
-endif
-
-endif # GUESTFS
-
config LIBVIRT
bool
depends on VAGRANT_LIBVIRT_SELECT || GUESTFS
default y
+source "kconfigs/Kconfig.guestfs"
source "vagrant/Kconfig"
source "terraform/Kconfig"
if LIBVIRT
diff --git a/kconfigs/Kconfig.guestfs b/kconfigs/Kconfig.guestfs
new file mode 100644
index 00000000..58c0c69a
--- /dev/null
+++ b/kconfigs/Kconfig.guestfs
@@ -0,0 +1,50 @@
+if GUESTFS
+
+choice
+ prompt "Guestfs Linux distribution to use"
+ default GUESTFS_FEDORA
+
+config GUESTFS_FEDORA
+ bool "Fedora (or derived distro)"
+ select HAVE_DISTRO_XFS_PREFERS_MANUAL if FSTESTS_XFS
+ select HAVE_DISTRO_BTRFS_PREFERS_MANUAL if FSTESTS_BTRFS
+ select HAVE_DISTRO_EXT4_PREFERS_MANUAL if FSTESTS_EXT4
+ select HAVE_DISTRO_PREFERS_FSTESTS_WATCHDOG if KDEVOPS_WORKFLOW_ENABLE_FSTESTS
+ select HAVE_DISTRO_PREFERS_FSTESTS_WATCHDOG_KILL if KDEVOPS_WORKFLOW_ENABLE_FSTESTS
+ help
+ This option will set the target guest to be a distro in the Fedora family.
+ For example, Fedora, RHEL, etc.
+
+config GUESTFS_DEBIAN
+ bool "Debian"
+ select HAVE_CUSTOM_DISTRO_HOST_PREFIX
+ select HAVE_DISTRO_XFS_PREFERS_MANUAL if FSTESTS_XFS
+ select HAVE_DISTRO_BTRFS_PREFERS_MANUAL if FSTESTS_BTRFS
+ select HAVE_DISTRO_EXT4_PREFERS_MANUAL if FSTESTS_EXT4
+ select HAVE_DISTRO_PREFERS_CUSTOM_HOST_PREFIX
+ select HAVE_DISTRO_PREFERS_FSTESTS_WATCHDOG if KDEVOPS_WORKFLOW_ENABLE_FSTESTS
+ select HAVE_DISTRO_PREFERS_FSTESTS_WATCHDOG_KILL if KDEVOPS_WORKFLOW_ENABLE_FSTESTS
+ help
+ This option will set the target guest to Debian.
+
+endchoice
+
+config VIRT_BUILDER_OS_VERSION
+ string "virt-builder os-version"
+ default "fedora-39" if GUESTFS_FEDORA
+ default "debian-12" if GUESTFS_DEBIAN
+ help
+ Have virt-builder use this os-version string to
+ build a root image for the guest. Run "virt-builder -l"
+ to get a list of operating systems and versions supported
+ by guestfs.
+
+if GUESTFS_DEBIAN
+
+config GUESTFS_DEBIAN_BOX_SHORT
+ string
+ default "debian12" if GUESTFS_DEBIAN
+
+endif
+
+endif # GUESTFS
--
2.43.0
* [PATCH 5/8] bringup: match default distro to user's distro
2024-03-08 0:03 [PATCH 0/8] guestfs: fixes and enhancements Luis Chamberlain
` (3 preceding siblings ...)
2024-03-08 0:03 ` [PATCH 4/8] guestfs: move options to its own file Luis Chamberlain
@ 2024-03-08 0:03 ` Luis Chamberlain
2024-03-08 0:03 ` [PATCH 6/8] guestfs: remove explicit tap0 device name Luis Chamberlain
` (3 subsequent siblings)
8 siblings, 0 replies; 15+ messages in thread
From: Luis Chamberlain @ 2024-03-08 0:03 UTC (permalink / raw)
To: kdevops; +Cc: Luis Chamberlain
We've had kconfigs/Kconfig.distro for a while, which lets us pick
up on the distro the kdevops user is running, but we haven't used it
extensively. Heavy users of kdevops want sensible defaults so that
less configuration is needed; one of the things we can do to make
things smoother is match the target distro to the user's distro. So
do that, be consistent across vagrant / guestfs, and promote the same
best practice when using terraform to use the cloud.
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
kconfigs/Kconfig.guestfs | 4 +++-
terraform/aws/Kconfig | 1 +
terraform/azure/Kconfig | 2 +-
terraform/gce/Kconfig | 3 ++-
terraform/openstack/Kconfig | 2 +-
vagrant/Kconfig | 4 +++-
6 files changed, 11 insertions(+), 5 deletions(-)
diff --git a/kconfigs/Kconfig.guestfs b/kconfigs/Kconfig.guestfs
index 58c0c69a..03e0fb86 100644
--- a/kconfigs/Kconfig.guestfs
+++ b/kconfigs/Kconfig.guestfs
@@ -2,7 +2,9 @@ if GUESTFS
choice
prompt "Guestfs Linux distribution to use"
- default GUESTFS_FEDORA
+ default GUESTFS_FEDORA if DISTRO_FEDORA || DISTRO_REDHAT
+ default GUESTFS_FEDORA if DISTRO_OPENSUSE || DISTRO_SUSE
+ default GUESTFS_DEBIAN if DISTRO_DEBIAN || DISTRO_UBUNTU
config GUESTFS_FEDORA
bool "Fedora (or derived distro)"
diff --git a/terraform/aws/Kconfig b/terraform/aws/Kconfig
index 9f4f9070..db8a5f76 100644
--- a/terraform/aws/Kconfig
+++ b/terraform/aws/Kconfig
@@ -96,6 +96,7 @@ config TERRAFORM_AWS_AV_REGION
choice
prompt "AWS AMI owner"
+ default TERRAFORM_AWS_AMI_DEBIAN if DISTRO_DEBIAN
default TERRAFORM_AWS_AMI_AMAZON_X86_64 if TARGET_ARCH_X86_64
default TERRAFORM_AWS_AMI_AMAZON_ARM64 if TARGET_ARCH_ARM64
diff --git a/terraform/azure/Kconfig b/terraform/azure/Kconfig
index 97513c7a..30acefd3 100644
--- a/terraform/azure/Kconfig
+++ b/terraform/azure/Kconfig
@@ -80,7 +80,7 @@ if TERRAFORM_AZURE_IMAGE_PUBLISHER_DEBIAN
choice
prompt "Azure image offer"
- default TERRAFORM_AZURE_IMAGE_OFFER_DEBIAN_10
+ default TERRAFORM_AZURE_IMAGE_OFFER_DEBIAN_10 if DISTRO_DEBIAN
config TERRAFORM_AZURE_IMAGE_OFFER_DEBIAN_10
bool "debian-10"
diff --git a/terraform/gce/Kconfig b/terraform/gce/Kconfig
index df17078d..6fedb2bc 100644
--- a/terraform/gce/Kconfig
+++ b/terraform/gce/Kconfig
@@ -57,7 +57,8 @@ config TERRAFORM_GCE_SCRATCH_DISK_INTERFACE
config TERRAFORM_GCE_IMAGE
string "GCE image to use"
- default "debian-cloud/debian-10"
+ default "debian-cloud/debian-10" if DISTRO_DEBIAN
+ default "debian-cloud/debian-10" if !DISTRO_DEBIAN
help
This option will set GCE image to debian-cloud/debian-10.
diff --git a/terraform/openstack/Kconfig b/terraform/openstack/Kconfig
index 9b1f324d..61167ad1 100644
--- a/terraform/openstack/Kconfig
+++ b/terraform/openstack/Kconfig
@@ -47,7 +47,7 @@ endchoice
config TERRAFORM_OPENSTACK_IMAGE_NAME
string "OpenStack image name"
- default "Debian 10 ppc64le" if TERRAFORM_OPENSTACK_IMAGE_DEBIAN_10_PPC64LE
+ default "Debian 10 ppc64le" if TERRAFORM_OPENSTACK_IMAGE_DEBIAN_10_PPC64LE && DISTRO_DEBIAN
help
This option will set OpenStack image name to use.
diff --git a/vagrant/Kconfig b/vagrant/Kconfig
index fc322438..b5abba76 100644
--- a/vagrant/Kconfig
+++ b/vagrant/Kconfig
@@ -97,7 +97,9 @@ config HAVE_SUSE_VAGRANT
choice
prompt "Vagrant guest Linux distribution to use"
- default VAGRANT_DEBIAN if !HAVE_SUSE_VAGRANT
+ default VAGRANT_DEBIAN if DISTRO_DEBIAN || DISTRO_UBUNTU
+ default VAGRANT_FEDORA if DISTRO_FEDORA
+ default VAGRANT_OPENSUSE if DISTRO_OPENSUSE
default VAGRANT_SUSE if HAVE_SUSE_VAGRANT
config VAGRANT_DEBIAN
--
2.43.0
* [PATCH 6/8] guestfs: remove explicit tap0 device name
2024-03-08 0:03 [PATCH 0/8] guestfs: fixes and enhancements Luis Chamberlain
` (4 preceding siblings ...)
2024-03-08 0:03 ` [PATCH 5/8] bringup: match default distro to user's distro Luis Chamberlain
@ 2024-03-08 0:03 ` Luis Chamberlain
2024-03-08 0:03 ` [PATCH 7/8] destroy_guestfs.sh: remove known ssh key Luis Chamberlain
` (2 subsequent siblings)
8 siblings, 0 replies; 15+ messages in thread
From: Luis Chamberlain @ 2024-03-08 0:03 UTC (permalink / raw)
To: kdevops; +Cc: Luis Chamberlain
If you create more than one guest each guest will have the same tap0
device name, and when you try to start the second guest it will fail
due to the clash. We can avoid this by just not specifying the target
host name and letting libvirt figure it out. This does mean a slight
functional change in that the host will get something like vnet+1 id
instead of tap0. This fixes networking when using multiple guests,
such as when testing a filesystem with fstests.
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
playbooks/roles/gen_nodes/templates/guestfs_q35.j2.xml | 1 -
playbooks/roles/gen_nodes/templates/guestfs_virt.j2.xml | 1 -
2 files changed, 2 deletions(-)
diff --git a/playbooks/roles/gen_nodes/templates/guestfs_q35.j2.xml b/playbooks/roles/gen_nodes/templates/guestfs_q35.j2.xml
index ce160490..9af98313 100644
--- a/playbooks/roles/gen_nodes/templates/guestfs_q35.j2.xml
+++ b/playbooks/roles/gen_nodes/templates/guestfs_q35.j2.xml
@@ -136,7 +136,6 @@
</controller>
<interface type='bridge'>
<source bridge='{{ libvirt_session_public_network_dev }}'/>
- <target dev='tap0'/>
<model type='virtio'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
diff --git a/playbooks/roles/gen_nodes/templates/guestfs_virt.j2.xml b/playbooks/roles/gen_nodes/templates/guestfs_virt.j2.xml
index 29dc0951..ee669a0b 100644
--- a/playbooks/roles/gen_nodes/templates/guestfs_virt.j2.xml
+++ b/playbooks/roles/gen_nodes/templates/guestfs_virt.j2.xml
@@ -128,7 +128,6 @@
</controller>
<interface type='bridge'>
<source bridge='{{ libvirt_session_public_network_dev }}'/>
- <target dev='tap0'/>
<model type='virtio'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
--
2.43.0
* [PATCH 7/8] destroy_guestfs.sh: remove known ssh key
2024-03-08 0:03 [PATCH 0/8] guestfs: fixes and enhancements Luis Chamberlain
` (5 preceding siblings ...)
2024-03-08 0:03 ` [PATCH 6/8] guestfs: remove explicit tap0 device name Luis Chamberlain
@ 2024-03-08 0:03 ` Luis Chamberlain
2024-03-08 0:03 ` [PATCH 8/8] guestfs: verify new line on ssh include directive Luis Chamberlain
2024-03-08 9:55 ` [PATCH 0/8] guestfs: fixes and enhancements Luis Chamberlain
8 siblings, 0 replies; 15+ messages in thread
From: Luis Chamberlain @ 2024-03-08 0:03 UTC (permalink / raw)
To: kdevops; +Cc: Luis Chamberlain
When destroying a guest we should remove the known key so that if
we re-create the guest we won't be asked about a new key.
Without this the ssh-injection will fail on a guest if an old key was
already used in the bringup process with:
virt-sysprep -a $ROOTIMG --hostname $name --ssh-inject <...>
Fix this.
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
scripts/destroy_guestfs.sh | 1 +
1 file changed, 1 insertion(+)
diff --git a/scripts/destroy_guestfs.sh b/scripts/destroy_guestfs.sh
index 9c627f23..3cdd21a8 100755
--- a/scripts/destroy_guestfs.sh
+++ b/scripts/destroy_guestfs.sh
@@ -23,6 +23,7 @@ if [ -f "$GUESTFSDIR/kdevops_nodes.yaml" ]; then
fi
rm -rf "$GUESTFSDIR/$name"
rm -rf "$STORAGEDIR/$name"
+ ssh-keygen -q -f ~/.ssh/known_hosts -R $name 1> /dev/null 2>&1
done
fi
--
2.43.0
* [PATCH 8/8] guestfs: verify new line on ssh include directive
2024-03-08 0:03 [PATCH 0/8] guestfs: fixes and enhancements Luis Chamberlain
` (6 preceding siblings ...)
2024-03-08 0:03 ` [PATCH 7/8] destroy_guestfs.sh: remove known ssh key Luis Chamberlain
@ 2024-03-08 0:03 ` Luis Chamberlain
2024-03-08 9:55 ` [PATCH 0/8] guestfs: fixes and enhancements Luis Chamberlain
8 siblings, 0 replies; 15+ messages in thread
From: Luis Chamberlain @ 2024-03-08 0:03 UTC (permalink / raw)
To: kdevops; +Cc: Luis Chamberlain
If the ansible task added the include directive for kdevops and later a
new host entry was added (say with Vagrant), the Include directive ends
up followed by an entry without a new line in between. This means ssh
won't use that include file.
So we need to be a bit paranoid with this effort. We first check
whether this sanity check was already done by looking for a special
new tag we're adding now; if that exists we know we've done our job
and can bail. Otherwise we remove the old stale line and re-add the
directive at the top of the file. To ensure a new line follows the
directive we use the ansible blockinfile module, and we take advantage
of this by adding the kdevops version in a comment. That's our marker
that the include directive is OK.
Fixes: e9390b898f98 ("guestfs: add the Include directive to ~/.ssh/config")
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
.../update_ssh_config_guestfs/tasks/main.yml | 58 ++++++++++++++++++-
scripts/bringup_guestfs.sh | 1 -
scripts/guestfs.Makefile | 1 +
3 files changed, 57 insertions(+), 3 deletions(-)
diff --git a/playbooks/roles/update_ssh_config_guestfs/tasks/main.yml b/playbooks/roles/update_ssh_config_guestfs/tasks/main.yml
index 368f9941..4ac1ce44 100644
--- a/playbooks/roles/update_ssh_config_guestfs/tasks/main.yml
+++ b/playbooks/roles/update_ssh_config_guestfs/tasks/main.yml
@@ -1,6 +1,60 @@
-- name: Add Include directive to ~/.ssh/config
+# Check if the include directive is already present
+- name: Check if the kdevops include directive was used
+ lineinfile:
+ path: ~/.ssh/config
+ regexp: "Include ~/.ssh/config_kdevops_*"
+ state: absent
+ check_mode: yes
+ changed_when: false
+ register: kdevops_ssh_include
+
+# Check if the kdevops_version was added in a comment
+- name: Check if the new include directive was used with a kdevops_version comment
+ lineinfile:
+ path: ~/.ssh/config
+ regexp: "^#(.*)kdevops_version(.*)"
+ state: absent
+ check_mode: yes
+ changed_when: false
+ register: fixed_ssh_entry
+
+# If both the include directive and the kdevops version comment were found
+# we bail right away to avoid rewriting the ssh config file on every run.
+- name: Check if the new fixed include directive was used
+ meta: end_play
+ when:
+ - kdevops_ssh_include.found
+ - fixed_ssh_entry.found
+
+# If we're still running it means the correct include directive following a new
+# line was not found. So remove old stale include directives which may be
+# buggy.
+- name: Remove stale include directive from ~/.ssh/config which lacked a new line
lineinfile:
path: ~/.ssh/config
line: "Include ~/.ssh/config_kdevops_*"
- insertbefore: "BOF"
+ state: absent
+
+- name: Remove any stale kdevops comments
+ lineinfile:
+ path: ~/.ssh/config
+ regexp: "^#(.*)kdevops(.*)"
+ state: absent
+
+- name: Remove any extra new lines
+ replace:
+ path: ~/.ssh/config
+ regexp: '(^\s*$)'
+ replace: ''
+
+# ssh include directives must follow a new line.
+- name: Add Include directive to ~/.ssh/config
+ blockinfile:
+ path: ~/.ssh/config
+ insertbefore: BOF
+ marker: "{mark}"
+ marker_begin: "# Automatically added by kdevops\n# kdevops_version: {{ kdevops_version }}"
+ marker_end: ""
create: true
+ block: |
+ Include ~/.ssh/config_kdevops_*
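For reference, with the blockinfile task above the top of ~/.ssh/config would end up looking roughly like this (the version string is illustrative):

```text
# Automatically added by kdevops
# kdevops_version: <version>
Include ~/.ssh/config_kdevops_*

```

The trailing blank line comes from the empty marker_end, which is exactly what guarantees a new line between the Include directive and any later host entry.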
diff --git a/scripts/bringup_guestfs.sh b/scripts/bringup_guestfs.sh
index b55b6a92..2b5b3857 100755
--- a/scripts/bringup_guestfs.sh
+++ b/scripts/bringup_guestfs.sh
@@ -109,7 +109,6 @@ do
cp --reflink=auto $BASE_IMAGE $ROOTIMG
virt-sysprep -a $ROOTIMG --hostname $name --ssh-inject "kdevops:file:$SSH_KEY.pub"
-
if [[ "$CONFIG_LIBVIRT_ENABLE_LARGEIO" == "y" ]]; then
lbs_idx=1
for i in $(seq 1 $(($CONFIG_QEMU_LARGEIO_MAX_POW_LIMIT+1))); do
diff --git a/scripts/guestfs.Makefile b/scripts/guestfs.Makefile
index 6328cfd5..cfa59cc6 100644
--- a/scripts/guestfs.Makefile
+++ b/scripts/guestfs.Makefile
@@ -66,6 +66,7 @@ $(KDEVOPS_PROVISIONED_SSH):
ansible-playbook $(ANSIBLE_VERBOSE) --connection=local \
--inventory localhost, \
playbooks/update_ssh_config_guestfs.yml \
+ --extra-vars=@./extra_vars.yaml \
-e 'ansible_python_interpreter=/usr/bin/python3' ;\
LIBVIRT_DEFAULT_URI=$(CONFIG_LIBVIRT_URI) $(TOPDIR)/scripts/update_ssh_config_guestfs.py; \
fi
--
2.43.0
* Re: [PATCH 0/8] guestfs: fixes and enhancements
2024-03-08 0:03 [PATCH 0/8] guestfs: fixes and enhancements Luis Chamberlain
` (7 preceding siblings ...)
2024-03-08 0:03 ` [PATCH 8/8] guestfs: verify new line on ssh include directive Luis Chamberlain
@ 2024-03-08 9:55 ` Luis Chamberlain
2024-03-08 14:14 ` Chuck Lever III
8 siblings, 1 reply; 15+ messages in thread
From: Luis Chamberlain @ 2024-03-08 9:55 UTC (permalink / raw)
To: kdevops
On Thu, Mar 07, 2024 at 04:03:51PM -0800, Luis Chamberlain wrote:
> guestfs didn't work with multiple guests for me and it took me a while
> to figure out why. The issue was a stupid bug where if you have the
> ssh include directive without a new line it won't be processed. So the
> last patch fixes that.
>
> The rest is general cleanup and sanity stuff.
>
> I tested with vagrant too to ensure the Kconfig changes don't break it.
>
> I'd like to wait to push until this is also tested on aarch64 guests
I didn't wait and pushed but I was confident it wouldn't break aarch64.
Luis
* Re: [PATCH 0/8] guestfs: fixes and enhancements
2024-03-08 9:55 ` [PATCH 0/8] guestfs: fixes and enhancements Luis Chamberlain
@ 2024-03-08 14:14 ` Chuck Lever III
2024-03-08 14:26 ` Chuck Lever III
0 siblings, 1 reply; 15+ messages in thread
From: Chuck Lever III @ 2024-03-08 14:14 UTC (permalink / raw)
To: Luis Chamberlain; +Cc: kdevops@lists.linux.dev
> On Mar 8, 2024, at 4:55 AM, Luis Chamberlain <mcgrof@kernel.org> wrote:
>
> On Thu, Mar 07, 2024 at 04:03:51PM -0800, Luis Chamberlain wrote:
>> guestfs didn't work with multiple guests for me and it took me a while
>> to figure out why. The issue was a stupid bug where if you have the
>> ssh include directive without a new line it won't be processed. So the
>> last patch fixes that.
>>
>> The rest is general cleanup and sanity stuff.
>>
>> I tested with vagrant too to ensure the Kconfig changes don't break it.
>>
>> I'd like to wait to push until this is also tested on aarch64 guests
>
> I didn't wait and pushed but I was confident it wouldn't break aarch64.
I can pull it onto my aarch64 system today to try it out.
--
Chuck Lever
* Re: [PATCH 0/8] guestfs: fixes and enhancements
2024-03-08 14:14 ` Chuck Lever III
@ 2024-03-08 14:26 ` Chuck Lever III
2024-03-08 15:44 ` Luis Chamberlain
0 siblings, 1 reply; 15+ messages in thread
From: Chuck Lever III @ 2024-03-08 14:26 UTC (permalink / raw)
To: Luis Chamberlain; +Cc: kdevops@lists.linux.dev
> On Mar 8, 2024, at 9:14 AM, Chuck Lever III <chuck.lever@oracle.com> wrote:
>
>
>
>> On Mar 8, 2024, at 4:55 AM, Luis Chamberlain <mcgrof@kernel.org> wrote:
>>
>> On Thu, Mar 07, 2024 at 04:03:51PM -0800, Luis Chamberlain wrote:
>>> guestfs didn't work with multiple guests for me and it took me a while
>>> to figure out why. The issue was a stupid bug where if you have the
>>> ssh include directive without a new line it won't be processed. So the
>>> last patch fixes that.
>>>
>>> The rest is general cleanup and sanity stuff.
>>>
>>> I tested with vagrant too to ensure the Kconfig changes don't break it.
>>>
>>> I'd like to wait to push until this is also tested on aarch64 guests
>>
>> I didn't wait and pushed but I was confident it wouldn't break aarch64.
>
> I can pull it onto my aarch64 system today to try it out.
During "make deps" :
TASK [gen_nodes : Generate XML files for the libvirt guests] *****************************************************************************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.errors.AnsibleUndefinedVariable: 'drives' is undefined. 'drives' is undefined
failed: [localhost] (item={'name': 'kdevops-nfsd'}) => {
"ansible_loop_var": "item",
"changed": false,
"item": {
"name": "kdevops-nfsd"
}
}
MSG:
AnsibleUndefinedVariable: 'drives' is undefined. 'drives' is undefined
PLAY RECAP *******************************************************************************************************************************************************************************************************************************************************************
localhost : ok=13 changed=4 unreachable=0 failed=1 skipped=26 rescued=0 ignored=0
make: *** [Makefile:233: guestfs/kdevops_nodes.yaml] Error 2
--
Chuck Lever
* Re: [PATCH 0/8] guestfs: fixes and enhancements
2024-03-08 14:26 ` Chuck Lever III
@ 2024-03-08 15:44 ` Luis Chamberlain
2024-03-08 15:46 ` Chuck Lever III
0 siblings, 1 reply; 15+ messages in thread
From: Luis Chamberlain @ 2024-03-08 15:44 UTC (permalink / raw)
To: Chuck Lever III; +Cc: kdevops@lists.linux.dev
On Fri, Mar 08, 2024 at 02:26:46PM +0000, Chuck Lever III wrote:
>
> > On Mar 8, 2024, at 9:14 AM, Chuck Lever III <chuck.lever@oracle.com> wrote:
> >
> >
> >
> >> On Mar 8, 2024, at 4:55 AM, Luis Chamberlain <mcgrof@kernel.org> wrote:
> >>
> >> On Thu, Mar 07, 2024 at 04:03:51PM -0800, Luis Chamberlain wrote:
> >>> guestfs didn't work with multiple guests for me and it took me a while
> >>> to figure out why. The issue was a stupid bug where if you have the
> >>> ssh include directive without a new line it won't be processed. So the
> >>> last patch fixes that.
> >>>
> >>> The rest is general cleanup and sanity stuff.
> >>>
> >>> I tested with vagrant too to ensure the Kconfig changes don't break it.
> >>>
> >>> I'd like to wait to push until this is also tested on aarch64 guests
> >>
> >> I didn't wait and pushed but I was confident it wouldn't break aarch64.
> >
> > I can pull it onto my aarch64 system today to try it out.
>
> During "make deps" :
>
> TASK [gen_nodes : Generate XML files for the libvirt guests] *****************************************************************************************************************************************************************************************************************
> An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.errors.AnsibleUndefinedVariable: 'drives' is undefined. 'drives' is undefined
Could you try this patch?
diff --git a/playbooks/roles/gen_nodes/templates/gen_drives.j2 b/playbooks/roles/gen_nodes/templates/gen_drives.j2
index 105e2cf03913..874e5b0623b9 100644
--- a/playbooks/roles/gen_nodes/templates/gen_drives.j2
+++ b/playbooks/roles/gen_nodes/templates/gen_drives.j2
@@ -1,3 +1,4 @@
+{% import './templates/drives.j2' as drives %}
{% if libvirt_extra_storage_drive_ide %}
{{ drives.gen_drive_ide(4,
kdevops_storage_pool_path,
diff --git a/playbooks/roles/gen_nodes/templates/guestfs_q35.j2.xml b/playbooks/roles/gen_nodes/templates/guestfs_q35.j2.xml
index 9af9831399d2..a5f60858e402 100644
--- a/playbooks/roles/gen_nodes/templates/guestfs_q35.j2.xml
+++ b/playbooks/roles/gen_nodes/templates/guestfs_q35.j2.xml
@@ -1,4 +1,3 @@
-{% import './templates/drives.j2' as drives %}
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<name>{{ hostname }}</name>
<memory unit='MiB'>{{ libvirt_mem_mb }}</memory>
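The "'drives' is undefined" failure above comes down to Jinja2 macro scoping: macros brought in with {% import %} in the including template are not reliably visible inside a template pulled in via {% include %} (Ansible's templating in particular does not propagate them), so the included template has to import the macro file itself, which is exactly what the patch does. A minimal sketch of the fixed pattern, assuming the Python `jinja2` package; the template names and macro here are illustrative stand-ins, not the actual kdevops files:

```python
# Sketch of the import-inside-the-include pattern (illustrative template
# names and macro, not the real kdevops templates).
from jinja2 import Environment, DictLoader

templates = {
    # Macro library, standing in for drives.j2.
    "drives.j2": "{% macro gen_drive(n) %}drive-{{ n }}{% endmacro %}",
    # Included template, standing in for gen_drives.j2: it imports the
    # macros it calls instead of relying on the including template.
    "gen_drives.j2": (
        "{% import 'drives.j2' as drives %}"
        "{{ drives.gen_drive(4) }}"
    ),
    # Including template, standing in for guestfs_q35.j2.xml: no import
    # needed here any more.
    "guestfs.j2": "{% include 'gen_drives.j2' %}",
}

env = Environment(loader=DictLoader(templates))
rendered = env.get_template("guestfs.j2").render().strip()
print(rendered)  # drive-4
```

With the import inside gen_drives.j2, both the q35 and aarch64 XML templates can include it without each carrying their own import line.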
^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: [PATCH 0/8] guestfs: fixes and enhancements
2024-03-08 15:44 ` Luis Chamberlain
@ 2024-03-08 15:46 ` Chuck Lever III
2024-03-08 15:56 ` Luis Chamberlain
0 siblings, 1 reply; 15+ messages in thread
From: Chuck Lever III @ 2024-03-08 15:46 UTC (permalink / raw)
To: Luis Chamberlain; +Cc: kdevops@lists.linux.dev
> On Mar 8, 2024, at 10:44 AM, Luis Chamberlain <mcgrof@kernel.org> wrote:
>
> On Fri, Mar 08, 2024 at 02:26:46PM +0000, Chuck Lever III wrote:
>>
>>> On Mar 8, 2024, at 9:14 AM, Chuck Lever III <chuck.lever@oracle.com> wrote:
>>>
>>>
>>>
>>>> On Mar 8, 2024, at 4:55 AM, Luis Chamberlain <mcgrof@kernel.org> wrote:
>>>>
>>>> On Thu, Mar 07, 2024 at 04:03:51PM -0800, Luis Chamberlain wrote:
>>>>> guestfs didn't work with multiple guests for me and it took me a while
>>>>> to figure out why. The issue was a stupid bug where if you have the
>>>>> ssh include directive without a new line it won't be processed. So the
>>>>> last patch fixes that.
>>>>>
>>>>> The rest is general cleanup and sanity stuff.
>>>>>
>>>>> I tested with vagrant too to ensure the Kconfig changes don't break it.
>>>>>
> >>>>> I'd like to wait to push until this is also tested on aarch64 guests
>>>>
>>>> I didn't wait and pushed but I was confident it wouldn't break aarch64.
>>>
>>> I can pull it onto my aarch64 system today to try it out.
>>
>> During "make deps" :
>>
>> TASK [gen_nodes : Generate XML files for the libvirt guests] *****************************************************************************************************************************************************************************************************************
>> An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.errors.AnsibleUndefinedVariable: 'drives' is undefined. 'drives' is undefined
>
> Could you try this patch?
>
> diff --git a/playbooks/roles/gen_nodes/templates/gen_drives.j2 b/playbooks/roles/gen_nodes/templates/gen_drives.j2
> index 105e2cf03913..874e5b0623b9 100644
> --- a/playbooks/roles/gen_nodes/templates/gen_drives.j2
> +++ b/playbooks/roles/gen_nodes/templates/gen_drives.j2
> @@ -1,3 +1,4 @@
> +{% import './templates/drives.j2' as drives %}
> {% if libvirt_extra_storage_drive_ide %}
> {{ drives.gen_drive_ide(4,
> kdevops_storage_pool_path,
> diff --git a/playbooks/roles/gen_nodes/templates/guestfs_q35.j2.xml b/playbooks/roles/gen_nodes/templates/guestfs_q35.j2.xml
> index 9af9831399d2..a5f60858e402 100644
> --- a/playbooks/roles/gen_nodes/templates/guestfs_q35.j2.xml
> +++ b/playbooks/roles/gen_nodes/templates/guestfs_q35.j2.xml
> @@ -1,4 +1,3 @@
> -{% import './templates/drives.j2' as drives %}
> <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
> <name>{{ hostname }}</name>
> <memory unit='MiB'>{{ libvirt_mem_mb }}</memory>
That WFM.
--
Chuck Lever
^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH 0/8] guestfs: fixes and enhancements
2024-03-08 15:46 ` Chuck Lever III
@ 2024-03-08 15:56 ` Luis Chamberlain
0 siblings, 0 replies; 15+ messages in thread
From: Luis Chamberlain @ 2024-03-08 15:56 UTC (permalink / raw)
To: Chuck Lever III; +Cc: kdevops@lists.linux.dev
On Fri, Mar 8, 2024 at 7:46 AM Chuck Lever III <chuck.lever@oracle.com> wrote:
> > On Mar 8, 2024, at 10:44 AM, Luis Chamberlain <mcgrof@kernel.org> wrote:
> > diff --git a/playbooks/roles/gen_nodes/templates/gen_drives.j2 b/playbooks/roles/gen_nodes/templates/gen_drives.j2
> > index 105e2cf03913..874e5b0623b9 100644
> > --- a/playbooks/roles/gen_nodes/templates/gen_drives.j2
> > +++ b/playbooks/roles/gen_nodes/templates/gen_drives.j2
> > @@ -1,3 +1,4 @@
> > +{% import './templates/drives.j2' as drives %}
> > {% if libvirt_extra_storage_drive_ide %}
> > {{ drives.gen_drive_ide(4,
> > kdevops_storage_pool_path,
> > diff --git a/playbooks/roles/gen_nodes/templates/guestfs_q35.j2.xml b/playbooks/roles/gen_nodes/templates/guestfs_q35.j2.xml
> > index 9af9831399d2..a5f60858e402 100644
> > --- a/playbooks/roles/gen_nodes/templates/guestfs_q35.j2.xml
> > +++ b/playbooks/roles/gen_nodes/templates/guestfs_q35.j2.xml
> > @@ -1,4 +1,3 @@
> > -{% import './templates/drives.j2' as drives %}
> > <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
> > <name>{{ hostname }}</name>
> > <memory unit='MiB'>{{ libvirt_mem_mb }}</memory>
>
> That WFM.
Fix pushed, thanks for testing!
Luis
^ permalink raw reply [flat|nested] 15+ messages in thread