* [PATCH] guestfs: add scsi extra storage option
@ 2025-07-10 17:35 Luis Chamberlain
2025-07-10 22:53 ` Chuck Lever
2025-07-12 7:47 ` Daniel Gomez
0 siblings, 2 replies; 9+ messages in thread
From: Luis Chamberlain @ 2025-07-10 17:35 UTC (permalink / raw)
To: Chuck Lever, Daniel Gomez, Swarna Prabhu, swarnassp6, kdevops
Cc: Luis Chamberlain, Luis Chamberlain
From: Luis Chamberlain <mcgrof@gmail.com>
Add scsi device support for virtualization.
Generated-by: ChatGPT Codex
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
defconfigs/blktests_scsi | 2 +-
kconfigs/Kconfig.libvirt | 25 ++++++++++++-------
playbooks/roles/gen_nodes/defaults/main.yml | 1 +
.../roles/gen_nodes/templates/Vagrantfile.j2 | 11 ++++++--
playbooks/roles/gen_nodes/templates/drives.j2 | 24 ++++++++++++++++--
.../roles/gen_nodes/templates/gen_drives.j2 | 21 ++++++++++------
.../kdevops_nodes_split_start.j2.yaml | 10 ++++++++
workflows/steady_state/Kconfig | 1 +
8 files changed, 74 insertions(+), 21 deletions(-)
diff --git a/defconfigs/blktests_scsi b/defconfigs/blktests_scsi
index d49e72893690..28462939e11e 100644
--- a/defconfigs/blktests_scsi
+++ b/defconfigs/blktests_scsi
@@ -9,7 +9,7 @@ CONFIG_GUESTFS_BRINGUP_DEBUG_1=y
CONFIG_BOOTLINUX=y
CONFIG_BOOTLINUX_9P=y
-CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_NVME=y
+CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_SCSI=y
CONFIG_WORKFLOWS_TESTS=y
CONFIG_WORKFLOWS_LINUX_TESTS=y
diff --git a/kconfigs/Kconfig.libvirt b/kconfigs/Kconfig.libvirt
index 8654eb9b1a06..f737b79faea4 100644
--- a/kconfigs/Kconfig.libvirt
+++ b/kconfigs/Kconfig.libvirt
@@ -487,7 +487,7 @@ config LIBVIRT_MACHINE_TYPE_DEFAULT
is the old i440x, so you will not get PCIe support.
We only want to support PCI-E capable guests with libguest so the
- deafult is not allowed on libguest.
+ default is not allowed on libguest.
config LIBVIRT_MACHINE_TYPE_Q35
bool "q35"
@@ -548,11 +548,18 @@ config LIBVIRT_EXTRA_STORAGE_DRIVE_VIRTIO
you won't be able to test ZNS.
config LIBVIRT_EXTRA_STORAGE_DRIVE_IDE
- bool "ide"
- output yaml
- help
- Use the QEMU ide driver for extra storage drives. This is useful for
- really old Linux distributions that lack the virtio backend driver.
+ bool "ide"
+ output yaml
+ help
+ Use the QEMU ide driver for extra storage drives. This is useful for
+ really old Linux distributions that lack the virtio backend driver.
+
+config LIBVIRT_EXTRA_STORAGE_DRIVE_SCSI
+ bool "scsi"
+ output yaml
+ help
+ Use the QEMU SCSI driver for extra storage drives. This relies on a
+ virtio-scsi controller with scsi-hd devices attached.
endchoice
@@ -817,7 +824,7 @@ config LIBVIRT_AIO_MODE_NATIVE
help
Use the aio=native mode. For some older kernels it is known that
native will cause corruption if used on ext4 or xfs filesystem if
- you also use cace=none. This corruption is documented for RHEL:
+ you also use cache=none. This corruption is documented for RHEL:
https://access.redhat.com/articles/41313
https://bugzilla.redhat.com/show_bug.cgi?id=615309
@@ -1104,7 +1111,7 @@ config LIBVIRT_ENABLE_GDB
output yaml
help
Select this option if you want to enable debugging support for GDB.
- By default , it is assumed that gdb is disabled since we dont want
+ By default , it is assumed that gdb is disabled since we don't want
to complicate this for the CI runs. If enabled then libvirt guest
xml for each guest will be configured to use gdb on a specific
tcp port.
@@ -1116,7 +1123,7 @@ config LIBVIRT_GDB_BASEPORT
depends on LIBVIRT_ENABLE_GDB
help
This option defines the base port to be used for the GDB.
- Esentially we need to make QEMU listen for an incoming connection from
+ Essentially we need to make QEMU listen for an incoming connection from
gdb on a TCP port. The default port is chosen to be 1234. However we
introduce variability for assigning the port to each guest by defining
a base port and adding an index to it based on the number of libvrt guest
diff --git a/playbooks/roles/gen_nodes/defaults/main.yml b/playbooks/roles/gen_nodes/defaults/main.yml
index 6a899be45128..2cf450b84281 100644
--- a/playbooks/roles/gen_nodes/defaults/main.yml
+++ b/playbooks/roles/gen_nodes/defaults/main.yml
@@ -70,6 +70,7 @@ libvirt_extra_drive_id_prefix: 'drv'
libvirt_extra_storage_drive_nvme: False
libvirt_extra_storage_drive_virtio: False
libvirt_extra_storage_drive_ide: False
+libvirt_extra_storage_drive_scsi: False
libvirt_extra_storage_aio_mode: "native"
libvirt_extra_storage_aio_cache_mode: "none"
# Note that NVMe on qemu does not allow the physical block size
diff --git a/playbooks/roles/gen_nodes/templates/Vagrantfile.j2 b/playbooks/roles/gen_nodes/templates/Vagrantfile.j2
index 2298c3455251..a52f5566280e 100644
--- a/playbooks/roles/gen_nodes/templates/Vagrantfile.j2
+++ b/playbooks/roles/gen_nodes/templates/Vagrantfile.j2
@@ -244,7 +244,7 @@ Vagrant.configure("2") do |config|
#
# Just create a PCI or PCI-E root bus dedicated for extra drives. We
# use 0x08 to place this PCI / PCI-E root bus as we know this is
- # avilable on modern x86-64 systems. Eventually we may need to bump
+ # available on modern x86-64 systems. Eventually we may need to bump
# this to 0x9, but it would be better instead to have vagant-libvirt
# speak "add a new PCI or PCI-E root bus" and "add extra drives" whether
# that is nvme or virtio.
@@ -252,7 +252,7 @@ Vagrant.configure("2") do |config|
{% if not libvirt_override_machine_type %}
# For i440x on x86_64 (default on libvirt as of today) we use PCI, not
# PCI-E. Below assumes i440x. i440x cannot support CXL as it does not
- # suport PCI-E.
+ # support PCI-E.
libvirt.qemuargs :value => "-device"
libvirt.qemuargs :value => "pci-bridge,id=custom-pci-for-{{ extra_disk_driver }},chassis_nr=1,bus=pci.0,addr=0x8"
{% else %}
@@ -417,6 +417,13 @@ Vagrant.configure("2") do |config|
libvirt.qemuargs :value => "file=#{extra_disk},format={{ libvirt_extra_drive_format }},if=none,aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},id=#{disk_id}"
libvirt.qemuargs :value => "-device"
libvirt.qemuargs :value => "virtio-blk-pci,drive=#{disk_id},id=virtio-#{disk_id},serial=#{serial_id},bus=#{bus_for_extra_drives},addr=#{pci_function},iothread=kdevops-virtio-iothread-#{port}#{extra_drive_largio_args},logical_block_size=#{virtio_lbs},physical_block_size=#{virtio_pbs}"
+{% elif libvirt_extra_storage_drive_scsi %}
+ libvirt.qemuargs :value => "-device"
+ libvirt.qemuargs :value => "virtio-scsi-pci,id=scsi#{port},bus=#{bus_for_extra_drives},addr=#{pci_function}"
+ libvirt.qemuargs :value => "-drive"
+ libvirt.qemuargs :value => "file=#{extra_disk},format={{ libvirt_extra_drive_format }},if=none,aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},id=#{disk_id}"
+ libvirt.qemuargs :value => "-device"
+ libvirt.qemuargs :value => "scsi-hd,drive=#{disk_id},bus=scsi#{port}.0"
{% elif libvirt_extra_storage_drive_nvme %}
if ! largio
nvme_lbs = "{{ libvirt_extra_storage_nvme_logical_block_size }}"
diff --git a/playbooks/roles/gen_nodes/templates/drives.j2 b/playbooks/roles/gen_nodes/templates/drives.j2
index 8327cd430981..1241b88a9379 100644
--- a/playbooks/roles/gen_nodes/templates/drives.j2
+++ b/playbooks/roles/gen_nodes/templates/drives.j2
@@ -40,6 +40,26 @@ the drives can vary by type, so we have one macro by type of drive.
{% endfor %}
<!-- End of virtio drives-->
{%- endmacro -%}
+{%- macro gen_drive_scsi(num_drives,
+ kdevops_storage_pool_path,
+ hostname,
+ libvirt_extra_drive_format,
+ libvirt_extra_storage_aio_mode,
+ libvirt_extra_storage_aio_cache_mode) -%}
+<!-- We generated {{ num_drives }} scsi dives -->
+{% for n in range(0,num_drives) %}
+ <!-- This is scsi drive # {{ n + 1 }} -->
+ <qemu:arg value='-device'/>
+ <qemu:arg value='pcie-root-port,id=pcie-port-for-scsi-{{ n }},multifunction=on,bus=pcie.1,addr=0x{{ "%0x" | format( n | int) }},chassis=5{{ n }}'/>
+ <qemu:arg value='-device'/>
+ <qemu:arg value='virtio-scsi-pci,id=scsi{{ n }},bus=pcie-port-for-scsi-{{ n }},addr=0x0'/>
+ <qemu:arg value='-drive'/>
+ <qemu:arg value='file={{ kdevops_storage_pool_path }}/guestfs/{{ hostname }}/extra{{ n }}.{{ libvirt_extra_drive_format }},format={{ libvirt_extra_drive_format }},if=none,aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},id=drv{{ n }}'/>
+ <qemu:arg value='-device'/>
+ <qemu:arg value='scsi-hd,drive=drv{{ n }},bus=scsi{{ n }}.0'/>
+{% endfor %}
+<!-- End of scsi drives-->
+{%- endmacro -%}
{%- macro gen_drive_large_io_virtio(libvirt_largeio_logical_compat,
libvirt_largeio_logical_compat_size,
libvirt_largeio_pow_limit,
@@ -49,7 +69,7 @@ the drives can vary by type, so we have one macro by type of drive.
libvirt_extra_storage_aio_mode,
libvirt_extra_storage_aio_cache_mode,
kdevops_storage_pool_path) -%}
-<!-- These are virtio drives used for large IO experimentaiton, with LBS support -->
+<!-- These are virtio drives used for large IO experimentation, with LBS support -->
{% set ns = namespace(lbs_idx=0) %}
{% set max_pbs = libvirt_largeio_logical_compat_size * (2 ** libvirt_largeio_pow_limit) %}
{% for n in range(0,libvirt_largeio_pow_limit+1) %}
@@ -106,7 +126,7 @@ the drives can vary by type, so we have one macro by type of drive.
libvirt_extra_storage_aio_mode,
libvirt_extra_storage_aio_cache_mode,
kdevops_storage_pool_path) -%}
-<!-- These are NVMe drives used for large IO experimentaiton, with LBS support -->
+<!-- These are NVMe drives used for large IO experimentation, with LBS support -->
{% set ns = namespace(lbs_idx=0) %}
{% set max_pbs = libvirt_largeio_logical_compat_size * (2 ** libvirt_largeio_pow_limit) %}
{% for n in range(0,libvirt_largeio_pow_limit+1) %}
diff --git a/playbooks/roles/gen_nodes/templates/gen_drives.j2 b/playbooks/roles/gen_nodes/templates/gen_drives.j2
index 874e5b0623b9..2de13da4ab8e 100644
--- a/playbooks/roles/gen_nodes/templates/gen_drives.j2
+++ b/playbooks/roles/gen_nodes/templates/gen_drives.j2
@@ -19,14 +19,21 @@
kdevops_storage_pool_path) }}
{% else %}
{{ drives.gen_drive_virtio(4,
- kdevops_storage_pool_path,
- hostname,
- libvirt_extra_drive_format,
- libvirt_extra_storage_aio_mode,
- libvirt_extra_storage_aio_cache_mode,
- libvirt_extra_storage_virtio_logical_block_size,
- libvirt_extra_storage_virtio_physical_block_size) }}
+ kdevops_storage_pool_path,
+ hostname,
+ libvirt_extra_drive_format,
+ libvirt_extra_storage_aio_mode,
+ libvirt_extra_storage_aio_cache_mode,
+ libvirt_extra_storage_virtio_logical_block_size,
+ libvirt_extra_storage_virtio_physical_block_size) }}
{% endif %}
+{% elif libvirt_extra_storage_drive_scsi %}
+{{ drives.gen_drive_scsi(4,
+ kdevops_storage_pool_path,
+ hostname,
+ libvirt_extra_drive_format,
+ libvirt_extra_storage_aio_mode,
+ libvirt_extra_storage_aio_cache_mode) }}
{% elif libvirt_extra_storage_drive_nvme %}
{% if libvirt_largeio_enable %}
{{ drives.gen_drive_large_io_nvme(libvirt_largeio_logical_compat,
diff --git a/playbooks/roles/gen_nodes/templates/kdevops_nodes_split_start.j2.yaml b/playbooks/roles/gen_nodes/templates/kdevops_nodes_split_start.j2.yaml
index e5b88efefc63..91e105cd3bff 100644
--- a/playbooks/roles/gen_nodes/templates/kdevops_nodes_split_start.j2.yaml
+++ b/playbooks/roles/gen_nodes/templates/kdevops_nodes_split_start.j2.yaml
@@ -76,6 +76,16 @@ vagrant_global:
physical_block_size: {{ libvirt_extra_storage_virtio_physical_block_size }}
logical_block_size: {{ libvirt_extra_storage_virtio_logical_block_size }}
{% endif %}
+{% elif libvirt_extra_storage_drive_scsi %}
+ extra_disks:
+ data:
+ size: 102400
+ scratch:
+ size: 102400
+ extra1:
+ size: 102400
+ extra2:
+ size: 102400
{% elif libvirt_extra_storage_drive_nvme %}
extra_disks:
data:
diff --git a/workflows/steady_state/Kconfig b/workflows/steady_state/Kconfig
index 335b833cfbcc..f7064d078c0a 100644
--- a/workflows/steady_state/Kconfig
+++ b/workflows/steady_state/Kconfig
@@ -6,6 +6,7 @@ config SSD_STEADY_STATE_DEVICE
default "/dev/disk/by-id/nvme-QEMU_NVMe_Ctrl_kdevops1" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_NVME
default "/dev/disk/by-id/virtio-kdevops1" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_VIRTIO
default "/dev/disk/by-id/ata-QEMU_HARDDISK_kdevops1" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_IDE
+ default "/dev/sdc" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_SCSI
default "/dev/nvme2n1" if TERRAFORM_AWS_INSTANCE_M5AD_4XLARGE
default "/dev/nvme1n1" if TERRAFORM_GCE
default "/dev/sdd" if TERRAFORM_AZURE
--
2.45.2
^ permalink raw reply related [flat|nested] 9+ messages in thread
* Re: [PATCH] guestfs: add scsi extra storage option
2025-07-10 17:35 [PATCH] guestfs: add scsi extra storage option Luis Chamberlain
@ 2025-07-10 22:53 ` Chuck Lever
2025-07-11 0:45 ` Luis Chamberlain
2025-07-12 7:47 ` Daniel Gomez
1 sibling, 1 reply; 9+ messages in thread
From: Chuck Lever @ 2025-07-10 22:53 UTC (permalink / raw)
To: Luis Chamberlain, Daniel Gomez, Swarna Prabhu, swarnassp6,
kdevops
Cc: Luis Chamberlain
On 7/10/25 1:35 PM, Luis Chamberlain wrote:
> From: Luis Chamberlain <mcgrof@gmail.com>
>
> Add scsi device support for virtualization.
>
> Generated-by: ChatGPT Codex
> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
> ---
> defconfigs/blktests_scsi | 2 +-
> kconfigs/Kconfig.libvirt | 25 ++++++++++++-------
> playbooks/roles/gen_nodes/defaults/main.yml | 1 +
> .../roles/gen_nodes/templates/Vagrantfile.j2 | 11 ++++++--
> playbooks/roles/gen_nodes/templates/drives.j2 | 24 ++++++++++++++++--
> .../roles/gen_nodes/templates/gen_drives.j2 | 21 ++++++++++------
> .../kdevops_nodes_split_start.j2.yaml | 10 ++++++++
> workflows/steady_state/Kconfig | 1 +
> 8 files changed, 74 insertions(+), 21 deletions(-)
>
> diff --git a/defconfigs/blktests_scsi b/defconfigs/blktests_scsi
> index d49e72893690..28462939e11e 100644
> --- a/defconfigs/blktests_scsi
> +++ b/defconfigs/blktests_scsi
> @@ -9,7 +9,7 @@ CONFIG_GUESTFS_BRINGUP_DEBUG_1=y
> CONFIG_BOOTLINUX=y
> CONFIG_BOOTLINUX_9P=y
>
> -CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_NVME=y
> +CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_SCSI=y
>
> CONFIG_WORKFLOWS_TESTS=y
> CONFIG_WORKFLOWS_LINUX_TESTS=y
> diff --git a/kconfigs/Kconfig.libvirt b/kconfigs/Kconfig.libvirt
> index 8654eb9b1a06..f737b79faea4 100644
> --- a/kconfigs/Kconfig.libvirt
> +++ b/kconfigs/Kconfig.libvirt
> @@ -487,7 +487,7 @@ config LIBVIRT_MACHINE_TYPE_DEFAULT
> is the old i440x, so you will not get PCIe support.
>
> We only want to support PCI-E capable guests with libguest so the
> - deafult is not allowed on libguest.
> + default is not allowed on libguest.
>
> config LIBVIRT_MACHINE_TYPE_Q35
> bool "q35"
> @@ -548,11 +548,18 @@ config LIBVIRT_EXTRA_STORAGE_DRIVE_VIRTIO
> you won't be able to test ZNS.
>
> config LIBVIRT_EXTRA_STORAGE_DRIVE_IDE
> - bool "ide"
> - output yaml
> - help
> - Use the QEMU ide driver for extra storage drives. This is useful for
> - really old Linux distributions that lack the virtio backend driver.
> + bool "ide"
> + output yaml
> + help
> + Use the QEMU ide driver for extra storage drives. This is useful for
> + really old Linux distributions that lack the virtio backend driver.
> +
> +config LIBVIRT_EXTRA_STORAGE_DRIVE_SCSI
> + bool "scsi"
> + output yaml
> + help
> + Use the QEMU SCSI driver for extra storage drives. This relies on a
> + virtio-scsi controller with scsi-hd devices attached.
>
> endchoice
>
> @@ -817,7 +824,7 @@ config LIBVIRT_AIO_MODE_NATIVE
> help
> Use the aio=native mode. For some older kernels it is known that
> native will cause corruption if used on ext4 or xfs filesystem if
> - you also use cace=none. This corruption is documented for RHEL:
> + you also use cache=none. This corruption is documented for RHEL:
>
> https://access.redhat.com/articles/41313
> https://bugzilla.redhat.com/show_bug.cgi?id=615309
> @@ -1104,7 +1111,7 @@ config LIBVIRT_ENABLE_GDB
> output yaml
> help
> Select this option if you want to enable debugging support for GDB.
> - By default , it is assumed that gdb is disabled since we dont want
> + By default , it is assumed that gdb is disabled since we don't want
> to complicate this for the CI runs. If enabled then libvirt guest
> xml for each guest will be configured to use gdb on a specific
> tcp port.
> @@ -1116,7 +1123,7 @@ config LIBVIRT_GDB_BASEPORT
> depends on LIBVIRT_ENABLE_GDB
> help
> This option defines the base port to be used for the GDB.
> - Esentially we need to make QEMU listen for an incoming connection from
> + Essentially we need to make QEMU listen for an incoming connection from
> gdb on a TCP port. The default port is chosen to be 1234. However we
> introduce variability for assigning the port to each guest by defining
> a base port and adding an index to it based on the number of libvrt guest
> diff --git a/playbooks/roles/gen_nodes/defaults/main.yml b/playbooks/roles/gen_nodes/defaults/main.yml
> index 6a899be45128..2cf450b84281 100644
> --- a/playbooks/roles/gen_nodes/defaults/main.yml
> +++ b/playbooks/roles/gen_nodes/defaults/main.yml
> @@ -70,6 +70,7 @@ libvirt_extra_drive_id_prefix: 'drv'
> libvirt_extra_storage_drive_nvme: False
> libvirt_extra_storage_drive_virtio: False
> libvirt_extra_storage_drive_ide: False
> +libvirt_extra_storage_drive_scsi: False
> libvirt_extra_storage_aio_mode: "native"
> libvirt_extra_storage_aio_cache_mode: "none"
> # Note that NVMe on qemu does not allow the physical block size
> diff --git a/playbooks/roles/gen_nodes/templates/Vagrantfile.j2 b/playbooks/roles/gen_nodes/templates/Vagrantfile.j2
> index 2298c3455251..a52f5566280e 100644
> --- a/playbooks/roles/gen_nodes/templates/Vagrantfile.j2
> +++ b/playbooks/roles/gen_nodes/templates/Vagrantfile.j2
> @@ -244,7 +244,7 @@ Vagrant.configure("2") do |config|
> #
> # Just create a PCI or PCI-E root bus dedicated for extra drives. We
> # use 0x08 to place this PCI / PCI-E root bus as we know this is
> - # avilable on modern x86-64 systems. Eventually we may need to bump
> + # available on modern x86-64 systems. Eventually we may need to bump
> # this to 0x9, but it would be better instead to have vagant-libvirt
> # speak "add a new PCI or PCI-E root bus" and "add extra drives" whether
> # that is nvme or virtio.
> @@ -252,7 +252,7 @@ Vagrant.configure("2") do |config|
> {% if not libvirt_override_machine_type %}
> # For i440x on x86_64 (default on libvirt as of today) we use PCI, not
> # PCI-E. Below assumes i440x. i440x cannot support CXL as it does not
> - # suport PCI-E.
> + # support PCI-E.
> libvirt.qemuargs :value => "-device"
> libvirt.qemuargs :value => "pci-bridge,id=custom-pci-for-{{ extra_disk_driver }},chassis_nr=1,bus=pci.0,addr=0x8"
> {% else %}
> @@ -417,6 +417,13 @@ Vagrant.configure("2") do |config|
> libvirt.qemuargs :value => "file=#{extra_disk},format={{ libvirt_extra_drive_format }},if=none,aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},id=#{disk_id}"
> libvirt.qemuargs :value => "-device"
> libvirt.qemuargs :value => "virtio-blk-pci,drive=#{disk_id},id=virtio-#{disk_id},serial=#{serial_id},bus=#{bus_for_extra_drives},addr=#{pci_function},iothread=kdevops-virtio-iothread-#{port}#{extra_drive_largio_args},logical_block_size=#{virtio_lbs},physical_block_size=#{virtio_pbs}"
> +{% elif libvirt_extra_storage_drive_scsi %}
> + libvirt.qemuargs :value => "-device"
> + libvirt.qemuargs :value => "virtio-scsi-pci,id=scsi#{port},bus=#{bus_for_extra_drives},addr=#{pci_function}"
> + libvirt.qemuargs :value => "-drive"
> + libvirt.qemuargs :value => "file=#{extra_disk},format={{ libvirt_extra_drive_format }},if=none,aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},id=#{disk_id}"
> + libvirt.qemuargs :value => "-device"
> + libvirt.qemuargs :value => "scsi-hd,drive=#{disk_id},bus=scsi#{port}.0"
> {% elif libvirt_extra_storage_drive_nvme %}
> if ! largio
> nvme_lbs = "{{ libvirt_extra_storage_nvme_logical_block_size }}"
> diff --git a/playbooks/roles/gen_nodes/templates/drives.j2 b/playbooks/roles/gen_nodes/templates/drives.j2
> index 8327cd430981..1241b88a9379 100644
> --- a/playbooks/roles/gen_nodes/templates/drives.j2
> +++ b/playbooks/roles/gen_nodes/templates/drives.j2
> @@ -40,6 +40,26 @@ the drives can vary by type, so we have one macro by type of drive.
> {% endfor %}
> <!-- End of virtio drives-->
> {%- endmacro -%}
> +{%- macro gen_drive_scsi(num_drives,
> + kdevops_storage_pool_path,
> + hostname,
> + libvirt_extra_drive_format,
> + libvirt_extra_storage_aio_mode,
> + libvirt_extra_storage_aio_cache_mode) -%}
> +<!-- We generated {{ num_drives }} scsi dives -->
> +{% for n in range(0,num_drives) %}
> + <!-- This is scsi drive # {{ n + 1 }} -->
> + <qemu:arg value='-device'/>
> + <qemu:arg value='pcie-root-port,id=pcie-port-for-scsi-{{ n }},multifunction=on,bus=pcie.1,addr=0x{{ "%0x" | format( n | int) }},chassis=5{{ n }}'/>
> + <qemu:arg value='-device'/>
> + <qemu:arg value='virtio-scsi-pci,id=scsi{{ n }},bus=pcie-port-for-scsi-{{ n }},addr=0x0'/>
> + <qemu:arg value='-drive'/>
> + <qemu:arg value='file={{ kdevops_storage_pool_path }}/guestfs/{{ hostname }}/extra{{ n }}.{{ libvirt_extra_drive_format }},format={{ libvirt_extra_drive_format }},if=none,aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},id=drv{{ n }}'/>
> + <qemu:arg value='-device'/>
> + <qemu:arg value='scsi-hd,drive=drv{{ n }},bus=scsi{{ n }}.0'/>
> +{% endfor %}
> +<!-- End of scsi drives-->
> +{%- endmacro -%}
> {%- macro gen_drive_large_io_virtio(libvirt_largeio_logical_compat,
> libvirt_largeio_logical_compat_size,
> libvirt_largeio_pow_limit,
> @@ -49,7 +69,7 @@ the drives can vary by type, so we have one macro by type of drive.
> libvirt_extra_storage_aio_mode,
> libvirt_extra_storage_aio_cache_mode,
> kdevops_storage_pool_path) -%}
> -<!-- These are virtio drives used for large IO experimentaiton, with LBS support -->
> +<!-- These are virtio drives used for large IO experimentation, with LBS support -->
> {% set ns = namespace(lbs_idx=0) %}
> {% set max_pbs = libvirt_largeio_logical_compat_size * (2 ** libvirt_largeio_pow_limit) %}
> {% for n in range(0,libvirt_largeio_pow_limit+1) %}
> @@ -106,7 +126,7 @@ the drives can vary by type, so we have one macro by type of drive.
> libvirt_extra_storage_aio_mode,
> libvirt_extra_storage_aio_cache_mode,
> kdevops_storage_pool_path) -%}
> -<!-- These are NVMe drives used for large IO experimentaiton, with LBS support -->
> +<!-- These are NVMe drives used for large IO experimentation, with LBS support -->
> {% set ns = namespace(lbs_idx=0) %}
> {% set max_pbs = libvirt_largeio_logical_compat_size * (2 ** libvirt_largeio_pow_limit) %}
> {% for n in range(0,libvirt_largeio_pow_limit+1) %}
> diff --git a/playbooks/roles/gen_nodes/templates/gen_drives.j2 b/playbooks/roles/gen_nodes/templates/gen_drives.j2
> index 874e5b0623b9..2de13da4ab8e 100644
> --- a/playbooks/roles/gen_nodes/templates/gen_drives.j2
> +++ b/playbooks/roles/gen_nodes/templates/gen_drives.j2
> @@ -19,14 +19,21 @@
> kdevops_storage_pool_path) }}
> {% else %}
> {{ drives.gen_drive_virtio(4,
> - kdevops_storage_pool_path,
> - hostname,
> - libvirt_extra_drive_format,
> - libvirt_extra_storage_aio_mode,
> - libvirt_extra_storage_aio_cache_mode,
> - libvirt_extra_storage_virtio_logical_block_size,
> - libvirt_extra_storage_virtio_physical_block_size) }}
> + kdevops_storage_pool_path,
> + hostname,
> + libvirt_extra_drive_format,
> + libvirt_extra_storage_aio_mode,
> + libvirt_extra_storage_aio_cache_mode,
> + libvirt_extra_storage_virtio_logical_block_size,
> + libvirt_extra_storage_virtio_physical_block_size) }}
> {% endif %}
> +{% elif libvirt_extra_storage_drive_scsi %}
> +{{ drives.gen_drive_scsi(4,
> + kdevops_storage_pool_path,
> + hostname,
> + libvirt_extra_drive_format,
> + libvirt_extra_storage_aio_mode,
> + libvirt_extra_storage_aio_cache_mode) }}
> {% elif libvirt_extra_storage_drive_nvme %}
> {% if libvirt_largeio_enable %}
> {{ drives.gen_drive_large_io_nvme(libvirt_largeio_logical_compat,
> diff --git a/playbooks/roles/gen_nodes/templates/kdevops_nodes_split_start.j2.yaml b/playbooks/roles/gen_nodes/templates/kdevops_nodes_split_start.j2.yaml
> index e5b88efefc63..91e105cd3bff 100644
> --- a/playbooks/roles/gen_nodes/templates/kdevops_nodes_split_start.j2.yaml
> +++ b/playbooks/roles/gen_nodes/templates/kdevops_nodes_split_start.j2.yaml
> @@ -76,6 +76,16 @@ vagrant_global:
> physical_block_size: {{ libvirt_extra_storage_virtio_physical_block_size }}
> logical_block_size: {{ libvirt_extra_storage_virtio_logical_block_size }}
> {% endif %}
> +{% elif libvirt_extra_storage_drive_scsi %}
> + extra_disks:
> + data:
> + size: 102400
> + scratch:
> + size: 102400
> + extra1:
> + size: 102400
> + extra2:
> + size: 102400
> {% elif libvirt_extra_storage_drive_nvme %}
> extra_disks:
> data:
> diff --git a/workflows/steady_state/Kconfig b/workflows/steady_state/Kconfig
> index 335b833cfbcc..f7064d078c0a 100644
> --- a/workflows/steady_state/Kconfig
> +++ b/workflows/steady_state/Kconfig
> @@ -6,6 +6,7 @@ config SSD_STEADY_STATE_DEVICE
> default "/dev/disk/by-id/nvme-QEMU_NVMe_Ctrl_kdevops1" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_NVME
> default "/dev/disk/by-id/virtio-kdevops1" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_VIRTIO
> default "/dev/disk/by-id/ata-QEMU_HARDDISK_kdevops1" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_IDE
> + default "/dev/sdc" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_SCSI
> default "/dev/nvme2n1" if TERRAFORM_AWS_INSTANCE_M5AD_4XLARGE
> default "/dev/nvme1n1" if TERRAFORM_GCE
> default "/dev/sdd" if TERRAFORM_AZURE
I would split the spelling corrections into a separate patch. That might
even be helpful for LLM training.
This patch looks like it is missing a change to
playbooks/roles/volume_group/tasks/guestfs.yml
Add a task specific to guestfs SCSI drives after the IDE task. You can
test by running the pynfs workflow, which will try to set up a single
guest with NFSD and an LVM group made from these disk devices.
--
Chuck Lever
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH] guestfs: add scsi extra storage option
2025-07-10 22:53 ` Chuck Lever
@ 2025-07-11 0:45 ` Luis Chamberlain
2025-07-11 12:53 ` Chuck Lever
0 siblings, 1 reply; 9+ messages in thread
From: Luis Chamberlain @ 2025-07-11 0:45 UTC (permalink / raw)
To: Chuck Lever
Cc: Daniel Gomez, Swarna Prabhu, swarnassp6, kdevops,
Luis Chamberlain
On Thu, Jul 10, 2025 at 06:53:37PM -0400, Chuck Lever wrote:
> On 7/10/25 1:35 PM, Luis Chamberlain wrote:
> > From: Luis Chamberlain <mcgrof@gmail.com>
> >
> > Add scsi device support for virtualization.
> >
> > Generated-by: ChatGPT Codex
> > Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
> > ---
> > defconfigs/blktests_scsi | 2 +-
> > kconfigs/Kconfig.libvirt | 25 ++++++++++++-------
> > playbooks/roles/gen_nodes/defaults/main.yml | 1 +
> > .../roles/gen_nodes/templates/Vagrantfile.j2 | 11 ++++++--
> > playbooks/roles/gen_nodes/templates/drives.j2 | 24 ++++++++++++++++--
> > .../roles/gen_nodes/templates/gen_drives.j2 | 21 ++++++++++------
> > .../kdevops_nodes_split_start.j2.yaml | 10 ++++++++
> > workflows/steady_state/Kconfig | 1 +
> > 8 files changed, 74 insertions(+), 21 deletions(-)
> >
> > diff --git a/defconfigs/blktests_scsi b/defconfigs/blktests_scsi
> > index d49e72893690..28462939e11e 100644
> > --- a/defconfigs/blktests_scsi
> > +++ b/defconfigs/blktests_scsi
> > @@ -9,7 +9,7 @@ CONFIG_GUESTFS_BRINGUP_DEBUG_1=y
> > CONFIG_BOOTLINUX=y
> > CONFIG_BOOTLINUX_9P=y
> >
> > -CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_NVME=y
> > +CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_SCSI=y
> >
> > CONFIG_WORKFLOWS_TESTS=y
> > CONFIG_WORKFLOWS_LINUX_TESTS=y
> > diff --git a/kconfigs/Kconfig.libvirt b/kconfigs/Kconfig.libvirt
> > index 8654eb9b1a06..f737b79faea4 100644
> > --- a/kconfigs/Kconfig.libvirt
> > +++ b/kconfigs/Kconfig.libvirt
> > @@ -487,7 +487,7 @@ config LIBVIRT_MACHINE_TYPE_DEFAULT
> > is the old i440x, so you will not get PCIe support.
> >
> > We only want to support PCI-E capable guests with libguest so the
> > - deafult is not allowed on libguest.
> > + default is not allowed on libguest.
> >
> > config LIBVIRT_MACHINE_TYPE_Q35
> > bool "q35"
> > @@ -548,11 +548,18 @@ config LIBVIRT_EXTRA_STORAGE_DRIVE_VIRTIO
> > you won't be able to test ZNS.
> >
> > config LIBVIRT_EXTRA_STORAGE_DRIVE_IDE
> > - bool "ide"
> > - output yaml
> > - help
> > - Use the QEMU ide driver for extra storage drives. This is useful for
> > - really old Linux distributions that lack the virtio backend driver.
> > + bool "ide"
> > + output yaml
> > + help
> > + Use the QEMU ide driver for extra storage drives. This is useful for
> > + really old Linux distributions that lack the virtio backend driver.
> > +
> > +config LIBVIRT_EXTRA_STORAGE_DRIVE_SCSI
> > + bool "scsi"
> > + output yaml
> > + help
> > + Use the QEMU SCSI driver for extra storage drives. This relies on a
> > + virtio-scsi controller with scsi-hd devices attached.
> >
> > endchoice
> >
> > @@ -817,7 +824,7 @@ config LIBVIRT_AIO_MODE_NATIVE
> > help
> > Use the aio=native mode. For some older kernels it is known that
> > native will cause corruption if used on ext4 or xfs filesystem if
> > - you also use cace=none. This corruption is documented for RHEL:
> > + you also use cache=none. This corruption is documented for RHEL:
> >
> > https://access.redhat.com/articles/41313
> > https://bugzilla.redhat.com/show_bug.cgi?id=615309
> > @@ -1104,7 +1111,7 @@ config LIBVIRT_ENABLE_GDB
> > output yaml
> > help
> > Select this option if you want to enable debugging support for GDB.
> > - By default , it is assumed that gdb is disabled since we dont want
> > + By default , it is assumed that gdb is disabled since we don't want
> > to complicate this for the CI runs. If enabled then libvirt guest
> > xml for each guest will be configured to use gdb on a specific
> > tcp port.
> > @@ -1116,7 +1123,7 @@ config LIBVIRT_GDB_BASEPORT
> > depends on LIBVIRT_ENABLE_GDB
> > help
> > This option defines the base port to be used for the GDB.
> > - Esentially we need to make QEMU listen for an incoming connection from
> > + Essentially we need to make QEMU listen for an incoming connection from
> > gdb on a TCP port. The default port is chosen to be 1234. However we
> > introduce variability for assigning the port to each guest by defining
> > a base port and adding an index to it based on the number of libvrt guest
> > diff --git a/playbooks/roles/gen_nodes/defaults/main.yml b/playbooks/roles/gen_nodes/defaults/main.yml
> > index 6a899be45128..2cf450b84281 100644
> > --- a/playbooks/roles/gen_nodes/defaults/main.yml
> > +++ b/playbooks/roles/gen_nodes/defaults/main.yml
> > @@ -70,6 +70,7 @@ libvirt_extra_drive_id_prefix: 'drv'
> > libvirt_extra_storage_drive_nvme: False
> > libvirt_extra_storage_drive_virtio: False
> > libvirt_extra_storage_drive_ide: False
> > +libvirt_extra_storage_drive_scsi: False
> > libvirt_extra_storage_aio_mode: "native"
> > libvirt_extra_storage_aio_cache_mode: "none"
> > # Note that NVMe on qemu does not allow the physical block size
> > diff --git a/playbooks/roles/gen_nodes/templates/Vagrantfile.j2 b/playbooks/roles/gen_nodes/templates/Vagrantfile.j2
> > index 2298c3455251..a52f5566280e 100644
> > --- a/playbooks/roles/gen_nodes/templates/Vagrantfile.j2
> > +++ b/playbooks/roles/gen_nodes/templates/Vagrantfile.j2
> > @@ -244,7 +244,7 @@ Vagrant.configure("2") do |config|
> > #
> > # Just create a PCI or PCI-E root bus dedicated for extra drives. We
> > # use 0x08 to place this PCI / PCI-E root bus as we know this is
> > - # avilable on modern x86-64 systems. Eventually we may need to bump
> > + # available on modern x86-64 systems. Eventually we may need to bump
> > # this to 0x9, but it would be better instead to have vagant-libvirt
> > # speak "add a new PCI or PCI-E root bus" and "add extra drives" whether
> > # that is nvme or virtio.
> > @@ -252,7 +252,7 @@ Vagrant.configure("2") do |config|
> > {% if not libvirt_override_machine_type %}
> > # For i440x on x86_64 (default on libvirt as of today) we use PCI, not
> > # PCI-E. Below assumes i440x. i440x cannot support CXL as it does not
> > - # suport PCI-E.
> > + # support PCI-E.
> > libvirt.qemuargs :value => "-device"
> > libvirt.qemuargs :value => "pci-bridge,id=custom-pci-for-{{ extra_disk_driver }},chassis_nr=1,bus=pci.0,addr=0x8"
> > {% else %}
> > @@ -417,6 +417,13 @@ Vagrant.configure("2") do |config|
> > libvirt.qemuargs :value => "file=#{extra_disk},format={{ libvirt_extra_drive_format }},if=none,aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},id=#{disk_id}"
> > libvirt.qemuargs :value => "-device"
> > libvirt.qemuargs :value => "virtio-blk-pci,drive=#{disk_id},id=virtio-#{disk_id},serial=#{serial_id},bus=#{bus_for_extra_drives},addr=#{pci_function},iothread=kdevops-virtio-iothread-#{port}#{extra_drive_largio_args},logical_block_size=#{virtio_lbs},physical_block_size=#{virtio_pbs}"
> > +{% elif libvirt_extra_storage_drive_scsi %}
> > + libvirt.qemuargs :value => "-device"
> > + libvirt.qemuargs :value => "virtio-scsi-pci,id=scsi#{port},bus=#{bus_for_extra_drives},addr=#{pci_function}"
> > + libvirt.qemuargs :value => "-drive"
> > + libvirt.qemuargs :value => "file=#{extra_disk},format={{ libvirt_extra_drive_format }},if=none,aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},id=#{disk_id}"
> > + libvirt.qemuargs :value => "-device"
> > + libvirt.qemuargs :value => "scsi-hd,drive=#{disk_id},bus=scsi#{port}.0"
> > {% elif libvirt_extra_storage_drive_nvme %}
> > if ! largio
> > nvme_lbs = "{{ libvirt_extra_storage_nvme_logical_block_size }}"
> > diff --git a/playbooks/roles/gen_nodes/templates/drives.j2 b/playbooks/roles/gen_nodes/templates/drives.j2
> > index 8327cd430981..1241b88a9379 100644
> > --- a/playbooks/roles/gen_nodes/templates/drives.j2
> > +++ b/playbooks/roles/gen_nodes/templates/drives.j2
> > @@ -40,6 +40,26 @@ the drives can vary by type, so we have one macro by type of drive.
> > {% endfor %}
> > <!-- End of virtio drives-->
> > {%- endmacro -%}
> > +{%- macro gen_drive_scsi(num_drives,
> > + kdevops_storage_pool_path,
> > + hostname,
> > + libvirt_extra_drive_format,
> > + libvirt_extra_storage_aio_mode,
> > + libvirt_extra_storage_aio_cache_mode) -%}
> > +<!-- We generated {{ num_drives }} scsi dives -->
> > +{% for n in range(0,num_drives) %}
> > + <!-- This is scsi drive # {{ n + 1 }} -->
> > + <qemu:arg value='-device'/>
> > + <qemu:arg value='pcie-root-port,id=pcie-port-for-scsi-{{ n }},multifunction=on,bus=pcie.1,addr=0x{{ "%0x" | format( n | int) }},chassis=5{{ n }}'/>
> > + <qemu:arg value='-device'/>
> > + <qemu:arg value='virtio-scsi-pci,id=scsi{{ n }},bus=pcie-port-for-scsi-{{ n }},addr=0x0'/>
> > + <qemu:arg value='-drive'/>
> > + <qemu:arg value='file={{ kdevops_storage_pool_path }}/guestfs/{{ hostname }}/extra{{ n }}.{{ libvirt_extra_drive_format }},format={{ libvirt_extra_drive_format }},if=none,aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},id=drv{{ n }}'/>
> > + <qemu:arg value='-device'/>
> > + <qemu:arg value='scsi-hd,drive=drv{{ n }},bus=scsi{{ n }}.0'/>
> > +{% endfor %}
> > +<!-- End of scsi drives-->
> > +{%- endmacro -%}
> > {%- macro gen_drive_large_io_virtio(libvirt_largeio_logical_compat,
> > libvirt_largeio_logical_compat_size,
> > libvirt_largeio_pow_limit,
> > @@ -49,7 +69,7 @@ the drives can vary by type, so we have one macro by type of drive.
> > libvirt_extra_storage_aio_mode,
> > libvirt_extra_storage_aio_cache_mode,
> > kdevops_storage_pool_path) -%}
> > -<!-- These are virtio drives used for large IO experimentaiton, with LBS support -->
> > +<!-- These are virtio drives used for large IO experimentation, with LBS support -->
> > {% set ns = namespace(lbs_idx=0) %}
> > {% set max_pbs = libvirt_largeio_logical_compat_size * (2 ** libvirt_largeio_pow_limit) %}
> > {% for n in range(0,libvirt_largeio_pow_limit+1) %}
> > @@ -106,7 +126,7 @@ the drives can vary by type, so we have one macro by type of drive.
> > libvirt_extra_storage_aio_mode,
> > libvirt_extra_storage_aio_cache_mode,
> > kdevops_storage_pool_path) -%}
> > -<!-- These are NVMe drives used for large IO experimentaiton, with LBS support -->
> > +<!-- These are NVMe drives used for large IO experimentation, with LBS support -->
> > {% set ns = namespace(lbs_idx=0) %}
> > {% set max_pbs = libvirt_largeio_logical_compat_size * (2 ** libvirt_largeio_pow_limit) %}
> > {% for n in range(0,libvirt_largeio_pow_limit+1) %}
> > diff --git a/playbooks/roles/gen_nodes/templates/gen_drives.j2 b/playbooks/roles/gen_nodes/templates/gen_drives.j2
> > index 874e5b0623b9..2de13da4ab8e 100644
> > --- a/playbooks/roles/gen_nodes/templates/gen_drives.j2
> > +++ b/playbooks/roles/gen_nodes/templates/gen_drives.j2
> > @@ -19,14 +19,21 @@
> > kdevops_storage_pool_path) }}
> > {% else %}
> > {{ drives.gen_drive_virtio(4,
> > - kdevops_storage_pool_path,
> > - hostname,
> > - libvirt_extra_drive_format,
> > - libvirt_extra_storage_aio_mode,
> > - libvirt_extra_storage_aio_cache_mode,
> > - libvirt_extra_storage_virtio_logical_block_size,
> > - libvirt_extra_storage_virtio_physical_block_size) }}
> > + kdevops_storage_pool_path,
> > + hostname,
> > + libvirt_extra_drive_format,
> > + libvirt_extra_storage_aio_mode,
> > + libvirt_extra_storage_aio_cache_mode,
> > + libvirt_extra_storage_virtio_logical_block_size,
> > + libvirt_extra_storage_virtio_physical_block_size) }}
> > {% endif %}
> > +{% elif libvirt_extra_storage_drive_scsi %}
> > +{{ drives.gen_drive_scsi(4,
> > + kdevops_storage_pool_path,
> > + hostname,
> > + libvirt_extra_drive_format,
> > + libvirt_extra_storage_aio_mode,
> > + libvirt_extra_storage_aio_cache_mode) }}
> > {% elif libvirt_extra_storage_drive_nvme %}
> > {% if libvirt_largeio_enable %}
> > {{ drives.gen_drive_large_io_nvme(libvirt_largeio_logical_compat,
> > diff --git a/playbooks/roles/gen_nodes/templates/kdevops_nodes_split_start.j2.yaml b/playbooks/roles/gen_nodes/templates/kdevops_nodes_split_start.j2.yaml
> > index e5b88efefc63..91e105cd3bff 100644
> > --- a/playbooks/roles/gen_nodes/templates/kdevops_nodes_split_start.j2.yaml
> > +++ b/playbooks/roles/gen_nodes/templates/kdevops_nodes_split_start.j2.yaml
> > @@ -76,6 +76,16 @@ vagrant_global:
> > physical_block_size: {{ libvirt_extra_storage_virtio_physical_block_size }}
> > logical_block_size: {{ libvirt_extra_storage_virtio_logical_block_size }}
> > {% endif %}
> > +{% elif libvirt_extra_storage_drive_scsi %}
> > + extra_disks:
> > + data:
> > + size: 102400
> > + scratch:
> > + size: 102400
> > + extra1:
> > + size: 102400
> > + extra2:
> > + size: 102400
> > {% elif libvirt_extra_storage_drive_nvme %}
> > extra_disks:
> > data:
> > diff --git a/workflows/steady_state/Kconfig b/workflows/steady_state/Kconfig
> > index 335b833cfbcc..f7064d078c0a 100644
> > --- a/workflows/steady_state/Kconfig
> > +++ b/workflows/steady_state/Kconfig
> > @@ -6,6 +6,7 @@ config SSD_STEADY_STATE_DEVICE
> > default "/dev/disk/by-id/nvme-QEMU_NVMe_Ctrl_kdevops1" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_NVME
> > default "/dev/disk/by-id/virtio-kdevops1" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_VIRTIO
> > default "/dev/disk/by-id/ata-QEMU_HARDDISK_kdevops1" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_IDE
> > + default "/dev/sdc" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_SCSI
> > default "/dev/nvme2n1" if TERRAFORM_AWS_INSTANCE_M5AD_4XLARGE
> > default "/dev/nvme1n1" if TERRAFORM_GCE
> > default "/dev/sdd" if TERRAFORM_AZURE
>
> I would split the spelling corrections into a separate patch. That might
> even be helpful for LLM training.
Sure I can do that.
> This patch looks like it is missing a change to
>
> playbooks/roles/volume_group/tasks/guestfs.yml
>
> Add a task specific to guestfs SCSI drives after the IDE task. You can
> test by running the pynfs workflow, which will try to set up a single
> guest with NFSD and an LVM group made from these disk devices.
Are you OK if I add this after this patch, I'd like to test codex to see
if I can just input your requirement to do that. It used to be AI for
kdevops was gettind D grading, I am now grading it at B with Codex.
Luis
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH] guestfs: add scsi extra storage option
2025-07-11 0:45 ` Luis Chamberlain
@ 2025-07-11 12:53 ` Chuck Lever
2025-07-18 4:48 ` Luis Chamberlain
0 siblings, 1 reply; 9+ messages in thread
From: Chuck Lever @ 2025-07-11 12:53 UTC (permalink / raw)
To: Luis Chamberlain
Cc: Daniel Gomez, Swarna Prabhu, swarnassp6, kdevops,
Luis Chamberlain
On 7/10/25 8:45 PM, Luis Chamberlain wrote:
> On Thu, Jul 10, 2025 at 06:53:37PM -0400, Chuck Lever wrote:
>> On 7/10/25 1:35 PM, Luis Chamberlain wrote:
>>> From: Luis Chamberlain <mcgrof@gmail.com>
>>>
>>> Add scsi device support for virtualization.
>>>
>>> Generated-by: ChatGPT Codex
>>> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
>>> ---
>>> defconfigs/blktests_scsi | 2 +-
>>> kconfigs/Kconfig.libvirt | 25 ++++++++++++-------
>>> playbooks/roles/gen_nodes/defaults/main.yml | 1 +
>>> .../roles/gen_nodes/templates/Vagrantfile.j2 | 11 ++++++--
>>> playbooks/roles/gen_nodes/templates/drives.j2 | 24 ++++++++++++++++--
>>> .../roles/gen_nodes/templates/gen_drives.j2 | 21 ++++++++++------
>>> .../kdevops_nodes_split_start.j2.yaml | 10 ++++++++
>>> workflows/steady_state/Kconfig | 1 +
>>> 8 files changed, 74 insertions(+), 21 deletions(-)
>>>
>>> diff --git a/defconfigs/blktests_scsi b/defconfigs/blktests_scsi
>>> index d49e72893690..28462939e11e 100644
>>> --- a/defconfigs/blktests_scsi
>>> +++ b/defconfigs/blktests_scsi
>>> @@ -9,7 +9,7 @@ CONFIG_GUESTFS_BRINGUP_DEBUG_1=y
>>> CONFIG_BOOTLINUX=y
>>> CONFIG_BOOTLINUX_9P=y
>>>
>>> -CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_NVME=y
>>> +CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_SCSI=y
>>>
>>> CONFIG_WORKFLOWS_TESTS=y
>>> CONFIG_WORKFLOWS_LINUX_TESTS=y
>>> diff --git a/kconfigs/Kconfig.libvirt b/kconfigs/Kconfig.libvirt
>>> index 8654eb9b1a06..f737b79faea4 100644
>>> --- a/kconfigs/Kconfig.libvirt
>>> +++ b/kconfigs/Kconfig.libvirt
>>> @@ -487,7 +487,7 @@ config LIBVIRT_MACHINE_TYPE_DEFAULT
>>> is the old i440x, so you will not get PCIe support.
>>>
>>> We only want to support PCI-E capable guests with libguest so the
>>> - deafult is not allowed on libguest.
>>> + default is not allowed on libguest.
>>>
>>> config LIBVIRT_MACHINE_TYPE_Q35
>>> bool "q35"
>>> @@ -548,11 +548,18 @@ config LIBVIRT_EXTRA_STORAGE_DRIVE_VIRTIO
>>> you won't be able to test ZNS.
>>>
>>> config LIBVIRT_EXTRA_STORAGE_DRIVE_IDE
>>> - bool "ide"
>>> - output yaml
>>> - help
>>> - Use the QEMU ide driver for extra storage drives. This is useful for
>>> - really old Linux distributions that lack the virtio backend driver.
>>> + bool "ide"
>>> + output yaml
>>> + help
>>> + Use the QEMU ide driver for extra storage drives. This is useful for
>>> + really old Linux distributions that lack the virtio backend driver.
>>> +
>>> +config LIBVIRT_EXTRA_STORAGE_DRIVE_SCSI
>>> + bool "scsi"
>>> + output yaml
>>> + help
>>> + Use the QEMU SCSI driver for extra storage drives. This relies on a
>>> + virtio-scsi controller with scsi-hd devices attached.
>>>
>>> endchoice
>>>
>>> @@ -817,7 +824,7 @@ config LIBVIRT_AIO_MODE_NATIVE
>>> help
>>> Use the aio=native mode. For some older kernels it is known that
>>> native will cause corruption if used on ext4 or xfs filesystem if
>>> - you also use cace=none. This corruption is documented for RHEL:
>>> + you also use cache=none. This corruption is documented for RHEL:
>>>
>>> https://access.redhat.com/articles/41313
>>> https://bugzilla.redhat.com/show_bug.cgi?id=615309
>>> @@ -1104,7 +1111,7 @@ config LIBVIRT_ENABLE_GDB
>>> output yaml
>>> help
>>> Select this option if you want to enable debugging support for GDB.
>>> - By default , it is assumed that gdb is disabled since we dont want
>>> + By default , it is assumed that gdb is disabled since we don't want
>>> to complicate this for the CI runs. If enabled then libvirt guest
>>> xml for each guest will be configured to use gdb on a specific
>>> tcp port.
>>> @@ -1116,7 +1123,7 @@ config LIBVIRT_GDB_BASEPORT
>>> depends on LIBVIRT_ENABLE_GDB
>>> help
>>> This option defines the base port to be used for the GDB.
>>> - Esentially we need to make QEMU listen for an incoming connection from
>>> + Essentially we need to make QEMU listen for an incoming connection from
>>> gdb on a TCP port. The default port is chosen to be 1234. However we
>>> introduce variability for assigning the port to each guest by defining
>>> a base port and adding an index to it based on the number of libvrt guest
>>> diff --git a/playbooks/roles/gen_nodes/defaults/main.yml b/playbooks/roles/gen_nodes/defaults/main.yml
>>> index 6a899be45128..2cf450b84281 100644
>>> --- a/playbooks/roles/gen_nodes/defaults/main.yml
>>> +++ b/playbooks/roles/gen_nodes/defaults/main.yml
>>> @@ -70,6 +70,7 @@ libvirt_extra_drive_id_prefix: 'drv'
>>> libvirt_extra_storage_drive_nvme: False
>>> libvirt_extra_storage_drive_virtio: False
>>> libvirt_extra_storage_drive_ide: False
>>> +libvirt_extra_storage_drive_scsi: False
>>> libvirt_extra_storage_aio_mode: "native"
>>> libvirt_extra_storage_aio_cache_mode: "none"
>>> # Note that NVMe on qemu does not allow the physical block size
>>> diff --git a/playbooks/roles/gen_nodes/templates/Vagrantfile.j2 b/playbooks/roles/gen_nodes/templates/Vagrantfile.j2
>>> index 2298c3455251..a52f5566280e 100644
>>> --- a/playbooks/roles/gen_nodes/templates/Vagrantfile.j2
>>> +++ b/playbooks/roles/gen_nodes/templates/Vagrantfile.j2
>>> @@ -244,7 +244,7 @@ Vagrant.configure("2") do |config|
>>> #
>>> # Just create a PCI or PCI-E root bus dedicated for extra drives. We
>>> # use 0x08 to place this PCI / PCI-E root bus as we know this is
>>> - # avilable on modern x86-64 systems. Eventually we may need to bump
>>> + # available on modern x86-64 systems. Eventually we may need to bump
>>> # this to 0x9, but it would be better instead to have vagant-libvirt
>>> # speak "add a new PCI or PCI-E root bus" and "add extra drives" whether
>>> # that is nvme or virtio.
>>> @@ -252,7 +252,7 @@ Vagrant.configure("2") do |config|
>>> {% if not libvirt_override_machine_type %}
>>> # For i440x on x86_64 (default on libvirt as of today) we use PCI, not
>>> # PCI-E. Below assumes i440x. i440x cannot support CXL as it does not
>>> - # suport PCI-E.
>>> + # support PCI-E.
>>> libvirt.qemuargs :value => "-device"
>>> libvirt.qemuargs :value => "pci-bridge,id=custom-pci-for-{{ extra_disk_driver }},chassis_nr=1,bus=pci.0,addr=0x8"
>>> {% else %}
>>> @@ -417,6 +417,13 @@ Vagrant.configure("2") do |config|
>>> libvirt.qemuargs :value => "file=#{extra_disk},format={{ libvirt_extra_drive_format }},if=none,aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},id=#{disk_id}"
>>> libvirt.qemuargs :value => "-device"
>>> libvirt.qemuargs :value => "virtio-blk-pci,drive=#{disk_id},id=virtio-#{disk_id},serial=#{serial_id},bus=#{bus_for_extra_drives},addr=#{pci_function},iothread=kdevops-virtio-iothread-#{port}#{extra_drive_largio_args},logical_block_size=#{virtio_lbs},physical_block_size=#{virtio_pbs}"
>>> +{% elif libvirt_extra_storage_drive_scsi %}
>>> + libvirt.qemuargs :value => "-device"
>>> + libvirt.qemuargs :value => "virtio-scsi-pci,id=scsi#{port},bus=#{bus_for_extra_drives},addr=#{pci_function}"
>>> + libvirt.qemuargs :value => "-drive"
>>> + libvirt.qemuargs :value => "file=#{extra_disk},format={{ libvirt_extra_drive_format }},if=none,aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},id=#{disk_id}"
>>> + libvirt.qemuargs :value => "-device"
>>> + libvirt.qemuargs :value => "scsi-hd,drive=#{disk_id},bus=scsi#{port}.0"
>>> {% elif libvirt_extra_storage_drive_nvme %}
>>> if ! largio
>>> nvme_lbs = "{{ libvirt_extra_storage_nvme_logical_block_size }}"
>>> diff --git a/playbooks/roles/gen_nodes/templates/drives.j2 b/playbooks/roles/gen_nodes/templates/drives.j2
>>> index 8327cd430981..1241b88a9379 100644
>>> --- a/playbooks/roles/gen_nodes/templates/drives.j2
>>> +++ b/playbooks/roles/gen_nodes/templates/drives.j2
>>> @@ -40,6 +40,26 @@ the drives can vary by type, so we have one macro by type of drive.
>>> {% endfor %}
>>> <!-- End of virtio drives-->
>>> {%- endmacro -%}
>>> +{%- macro gen_drive_scsi(num_drives,
>>> + kdevops_storage_pool_path,
>>> + hostname,
>>> + libvirt_extra_drive_format,
>>> + libvirt_extra_storage_aio_mode,
>>> + libvirt_extra_storage_aio_cache_mode) -%}
>>> +<!-- We generated {{ num_drives }} scsi dives -->
>>> +{% for n in range(0,num_drives) %}
>>> + <!-- This is scsi drive # {{ n + 1 }} -->
>>> + <qemu:arg value='-device'/>
>>> + <qemu:arg value='pcie-root-port,id=pcie-port-for-scsi-{{ n }},multifunction=on,bus=pcie.1,addr=0x{{ "%0x" | format( n | int) }},chassis=5{{ n }}'/>
>>> + <qemu:arg value='-device'/>
>>> + <qemu:arg value='virtio-scsi-pci,id=scsi{{ n }},bus=pcie-port-for-scsi-{{ n }},addr=0x0'/>
>>> + <qemu:arg value='-drive'/>
>>> + <qemu:arg value='file={{ kdevops_storage_pool_path }}/guestfs/{{ hostname }}/extra{{ n }}.{{ libvirt_extra_drive_format }},format={{ libvirt_extra_drive_format }},if=none,aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},id=drv{{ n }}'/>
>>> + <qemu:arg value='-device'/>
>>> + <qemu:arg value='scsi-hd,drive=drv{{ n }},bus=scsi{{ n }}.0'/>
>>> +{% endfor %}
>>> +<!-- End of scsi drives-->
>>> +{%- endmacro -%}
>>> {%- macro gen_drive_large_io_virtio(libvirt_largeio_logical_compat,
>>> libvirt_largeio_logical_compat_size,
>>> libvirt_largeio_pow_limit,
>>> @@ -49,7 +69,7 @@ the drives can vary by type, so we have one macro by type of drive.
>>> libvirt_extra_storage_aio_mode,
>>> libvirt_extra_storage_aio_cache_mode,
>>> kdevops_storage_pool_path) -%}
>>> -<!-- These are virtio drives used for large IO experimentaiton, with LBS support -->
>>> +<!-- These are virtio drives used for large IO experimentation, with LBS support -->
>>> {% set ns = namespace(lbs_idx=0) %}
>>> {% set max_pbs = libvirt_largeio_logical_compat_size * (2 ** libvirt_largeio_pow_limit) %}
>>> {% for n in range(0,libvirt_largeio_pow_limit+1) %}
>>> @@ -106,7 +126,7 @@ the drives can vary by type, so we have one macro by type of drive.
>>> libvirt_extra_storage_aio_mode,
>>> libvirt_extra_storage_aio_cache_mode,
>>> kdevops_storage_pool_path) -%}
>>> -<!-- These are NVMe drives used for large IO experimentaiton, with LBS support -->
>>> +<!-- These are NVMe drives used for large IO experimentation, with LBS support -->
>>> {% set ns = namespace(lbs_idx=0) %}
>>> {% set max_pbs = libvirt_largeio_logical_compat_size * (2 ** libvirt_largeio_pow_limit) %}
>>> {% for n in range(0,libvirt_largeio_pow_limit+1) %}
>>> diff --git a/playbooks/roles/gen_nodes/templates/gen_drives.j2 b/playbooks/roles/gen_nodes/templates/gen_drives.j2
>>> index 874e5b0623b9..2de13da4ab8e 100644
>>> --- a/playbooks/roles/gen_nodes/templates/gen_drives.j2
>>> +++ b/playbooks/roles/gen_nodes/templates/gen_drives.j2
>>> @@ -19,14 +19,21 @@
>>> kdevops_storage_pool_path) }}
>>> {% else %}
>>> {{ drives.gen_drive_virtio(4,
>>> - kdevops_storage_pool_path,
>>> - hostname,
>>> - libvirt_extra_drive_format,
>>> - libvirt_extra_storage_aio_mode,
>>> - libvirt_extra_storage_aio_cache_mode,
>>> - libvirt_extra_storage_virtio_logical_block_size,
>>> - libvirt_extra_storage_virtio_physical_block_size) }}
>>> + kdevops_storage_pool_path,
>>> + hostname,
>>> + libvirt_extra_drive_format,
>>> + libvirt_extra_storage_aio_mode,
>>> + libvirt_extra_storage_aio_cache_mode,
>>> + libvirt_extra_storage_virtio_logical_block_size,
>>> + libvirt_extra_storage_virtio_physical_block_size) }}
>>> {% endif %}
>>> +{% elif libvirt_extra_storage_drive_scsi %}
>>> +{{ drives.gen_drive_scsi(4,
>>> + kdevops_storage_pool_path,
>>> + hostname,
>>> + libvirt_extra_drive_format,
>>> + libvirt_extra_storage_aio_mode,
>>> + libvirt_extra_storage_aio_cache_mode) }}
>>> {% elif libvirt_extra_storage_drive_nvme %}
>>> {% if libvirt_largeio_enable %}
>>> {{ drives.gen_drive_large_io_nvme(libvirt_largeio_logical_compat,
>>> diff --git a/playbooks/roles/gen_nodes/templates/kdevops_nodes_split_start.j2.yaml b/playbooks/roles/gen_nodes/templates/kdevops_nodes_split_start.j2.yaml
>>> index e5b88efefc63..91e105cd3bff 100644
>>> --- a/playbooks/roles/gen_nodes/templates/kdevops_nodes_split_start.j2.yaml
>>> +++ b/playbooks/roles/gen_nodes/templates/kdevops_nodes_split_start.j2.yaml
>>> @@ -76,6 +76,16 @@ vagrant_global:
>>> physical_block_size: {{ libvirt_extra_storage_virtio_physical_block_size }}
>>> logical_block_size: {{ libvirt_extra_storage_virtio_logical_block_size }}
>>> {% endif %}
>>> +{% elif libvirt_extra_storage_drive_scsi %}
>>> + extra_disks:
>>> + data:
>>> + size: 102400
>>> + scratch:
>>> + size: 102400
>>> + extra1:
>>> + size: 102400
>>> + extra2:
>>> + size: 102400
>>> {% elif libvirt_extra_storage_drive_nvme %}
>>> extra_disks:
>>> data:
>>> diff --git a/workflows/steady_state/Kconfig b/workflows/steady_state/Kconfig
>>> index 335b833cfbcc..f7064d078c0a 100644
>>> --- a/workflows/steady_state/Kconfig
>>> +++ b/workflows/steady_state/Kconfig
>>> @@ -6,6 +6,7 @@ config SSD_STEADY_STATE_DEVICE
>>> default "/dev/disk/by-id/nvme-QEMU_NVMe_Ctrl_kdevops1" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_NVME
>>> default "/dev/disk/by-id/virtio-kdevops1" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_VIRTIO
>>> default "/dev/disk/by-id/ata-QEMU_HARDDISK_kdevops1" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_IDE
>>> + default "/dev/sdc" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_SCSI
>>> default "/dev/nvme2n1" if TERRAFORM_AWS_INSTANCE_M5AD_4XLARGE
>>> default "/dev/nvme1n1" if TERRAFORM_GCE
>>> default "/dev/sdd" if TERRAFORM_AZURE
>>
>> I would split the spelling corrections into a separate patch. That might
>> even be helpful for LLM training.
>
> Sure I can do that.
>
>> This patch looks like it is missing a change to
>>
>> playbooks/roles/volume_group/tasks/guestfs.yml
>>
>> Add a task specific to guestfs SCSI drives after the IDE task. You can
>> test by running the pynfs workflow, which will try to set up a single
>> guest with NFSD and an LVM group made from these disk devices.
>
> Are you OK if I add this after this patch, I'd like to test codex to see
> if I can just input your requirement to do that. It used to be AI for
> kdevops was gettind D grading, I am now grading it at B with Codex.
I don't think there's a strong concern about bisect-ability here, so
it should be OK as a separate patch.
--
Chuck Lever
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH] guestfs: add scsi extra storage option
2025-07-10 17:35 [PATCH] guestfs: add scsi extra storage option Luis Chamberlain
2025-07-10 22:53 ` Chuck Lever
@ 2025-07-12 7:47 ` Daniel Gomez
2025-07-13 16:30 ` Chuck Lever
1 sibling, 1 reply; 9+ messages in thread
From: Daniel Gomez @ 2025-07-12 7:47 UTC (permalink / raw)
To: Luis Chamberlain, Chuck Lever, Daniel Gomez, Swarna Prabhu,
swarnassp6, kdevops
Cc: Luis Chamberlain
On 10/07/2025 19.35, Luis Chamberlain wrote:
> From: Luis Chamberlain <mcgrof@gmail.com>
>
> Add scsi device support for virtualization.
>
> Generated-by: ChatGPT Codex
> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
I know this has been merged already. Still, here some minor comments:
> ---
> defconfigs/blktests_scsi | 2 +-
> kconfigs/Kconfig.libvirt | 25 ++++++++++++-------
> playbooks/roles/gen_nodes/defaults/main.yml | 1 +
> .../roles/gen_nodes/templates/Vagrantfile.j2 | 11 ++++++--
> playbooks/roles/gen_nodes/templates/drives.j2 | 24 ++++++++++++++++--
> .../roles/gen_nodes/templates/gen_drives.j2 | 21 ++++++++++------
> .../kdevops_nodes_split_start.j2.yaml | 10 ++++++++
> workflows/steady_state/Kconfig | 1 +
> 8 files changed, 74 insertions(+), 21 deletions(-)
>
> diff --git a/defconfigs/blktests_scsi b/defconfigs/blktests_scsi
> index d49e72893690..28462939e11e 100644
> --- a/defconfigs/blktests_scsi
> +++ b/defconfigs/blktests_scsi
> @@ -9,7 +9,7 @@ CONFIG_GUESTFS_BRINGUP_DEBUG_1=y
> CONFIG_BOOTLINUX=y
> CONFIG_BOOTLINUX_9P=y
>
> -CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_NVME=y
> +CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_SCSI=y
>
> CONFIG_WORKFLOWS_TESTS=y
> CONFIG_WORKFLOWS_LINUX_TESTS=y
> diff --git a/kconfigs/Kconfig.libvirt b/kconfigs/Kconfig.libvirt
> index 8654eb9b1a06..f737b79faea4 100644
> --- a/kconfigs/Kconfig.libvirt
> +++ b/kconfigs/Kconfig.libvirt
> @@ -487,7 +487,7 @@ config LIBVIRT_MACHINE_TYPE_DEFAULT
> is the old i440x, so you will not get PCIe support.
>
> We only want to support PCI-E capable guests with libguest so the
> - deafult is not allowed on libguest.
> + default is not allowed on libguest.
>
> config LIBVIRT_MACHINE_TYPE_Q35
> bool "q35"
> @@ -548,11 +548,18 @@ config LIBVIRT_EXTRA_STORAGE_DRIVE_VIRTIO
> you won't be able to test ZNS.
>
> config LIBVIRT_EXTRA_STORAGE_DRIVE_IDE
> - bool "ide"
> - output yaml
> - help
> - Use the QEMU ide driver for extra storage drives. This is useful for
> - really old Linux distributions that lack the virtio backend driver.
> + bool "ide"
> + output yaml
> + help
> + Use the QEMU ide driver for extra storage drives. This is useful for
> + really old Linux distributions that lack the virtio backend driver.
> +
This has the wrong indentation type (spaces instead of tabs).
> +config LIBVIRT_EXTRA_STORAGE_DRIVE_SCSI
> + bool "scsi"
> + output yaml
> + help
> + Use the QEMU SCSI driver for extra storage drives. This relies on a
> + virtio-scsi controller with scsi-hd devices attached.
>
> endchoice
>
> @@ -817,7 +824,7 @@ config LIBVIRT_AIO_MODE_NATIVE
> help
> Use the aio=native mode. For some older kernels it is known that
> native will cause corruption if used on ext4 or xfs filesystem if
> - you also use cace=none. This corruption is documented for RHEL:
> + you also use cache=none. This corruption is documented for RHEL:
>
> https://access.redhat.com/articles/41313
> https://bugzilla.redhat.com/show_bug.cgi?id=615309
> @@ -1104,7 +1111,7 @@ config LIBVIRT_ENABLE_GDB
> output yaml
> help
> Select this option if you want to enable debugging support for GDB.
> - By default , it is assumed that gdb is disabled since we dont want
> + By default , it is assumed that gdb is disabled since we don't want
> to complicate this for the CI runs. If enabled then libvirt guest
> xml for each guest will be configured to use gdb on a specific
> tcp port.
> @@ -1116,7 +1123,7 @@ config LIBVIRT_GDB_BASEPORT
> depends on LIBVIRT_ENABLE_GDB
> help
> This option defines the base port to be used for the GDB.
> - Esentially we need to make QEMU listen for an incoming connection from
> + Essentially we need to make QEMU listen for an incoming connection from
> gdb on a TCP port. The default port is chosen to be 1234. However we
> introduce variability for assigning the port to each guest by defining
> a base port and adding an index to it based on the number of libvrt guest
> diff --git a/playbooks/roles/gen_nodes/defaults/main.yml b/playbooks/roles/gen_nodes/defaults/main.yml
> index 6a899be45128..2cf450b84281 100644
> --- a/playbooks/roles/gen_nodes/defaults/main.yml
> +++ b/playbooks/roles/gen_nodes/defaults/main.yml
> @@ -70,6 +70,7 @@ libvirt_extra_drive_id_prefix: 'drv'
> libvirt_extra_storage_drive_nvme: False
> libvirt_extra_storage_drive_virtio: False
> libvirt_extra_storage_drive_ide: False
> +libvirt_extra_storage_drive_scsi: False
> libvirt_extra_storage_aio_mode: "native"
> libvirt_extra_storage_aio_cache_mode: "none"
> # Note that NVMe on qemu does not allow the physical block size
> diff --git a/playbooks/roles/gen_nodes/templates/Vagrantfile.j2 b/playbooks/roles/gen_nodes/templates/Vagrantfile.j2
> index 2298c3455251..a52f5566280e 100644
> --- a/playbooks/roles/gen_nodes/templates/Vagrantfile.j2
> +++ b/playbooks/roles/gen_nodes/templates/Vagrantfile.j2
We need to nuke this Vagrant stuff...
> @@ -244,7 +244,7 @@ Vagrant.configure("2") do |config|
> #
> # Just create a PCI or PCI-E root bus dedicated for extra drives. We
> # use 0x08 to place this PCI / PCI-E root bus as we know this is
> - # avilable on modern x86-64 systems. Eventually we may need to bump
> + # available on modern x86-64 systems. Eventually we may need to bump
> # this to 0x9, but it would be better instead to have vagant-libvirt
> # speak "add a new PCI or PCI-E root bus" and "add extra drives" whether
> # that is nvme or virtio.
> @@ -252,7 +252,7 @@ Vagrant.configure("2") do |config|
> {% if not libvirt_override_machine_type %}
> # For i440x on x86_64 (default on libvirt as of today) we use PCI, not
> # PCI-E. Below assumes i440x. i440x cannot support CXL as it does not
> - # suport PCI-E.
> + # support PCI-E.
> libvirt.qemuargs :value => "-device"
> libvirt.qemuargs :value => "pci-bridge,id=custom-pci-for-{{ extra_disk_driver }},chassis_nr=1,bus=pci.0,addr=0x8"
> {% else %}
> @@ -417,6 +417,13 @@ Vagrant.configure("2") do |config|
> libvirt.qemuargs :value => "file=#{extra_disk},format={{ libvirt_extra_drive_format }},if=none,aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},id=#{disk_id}"
> libvirt.qemuargs :value => "-device"
> libvirt.qemuargs :value => "virtio-blk-pci,drive=#{disk_id},id=virtio-#{disk_id},serial=#{serial_id},bus=#{bus_for_extra_drives},addr=#{pci_function},iothread=kdevops-virtio-iothread-#{port}#{extra_drive_largio_args},logical_block_size=#{virtio_lbs},physical_block_size=#{virtio_pbs}"
> +{% elif libvirt_extra_storage_drive_scsi %}
> + libvirt.qemuargs :value => "-device"
> + libvirt.qemuargs :value => "virtio-scsi-pci,id=scsi#{port},bus=#{bus_for_extra_drives},addr=#{pci_function}"
> + libvirt.qemuargs :value => "-drive"
> + libvirt.qemuargs :value => "file=#{extra_disk},format={{ libvirt_extra_drive_format }},if=none,aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},id=#{disk_id}"
> + libvirt.qemuargs :value => "-device"
> + libvirt.qemuargs :value => "scsi-hd,drive=#{disk_id},bus=scsi#{port}.0"
> {% elif libvirt_extra_storage_drive_nvme %}
> if ! largio
> nvme_lbs = "{{ libvirt_extra_storage_nvme_logical_block_size }}"
> diff --git a/playbooks/roles/gen_nodes/templates/drives.j2 b/playbooks/roles/gen_nodes/templates/drives.j2
> index 8327cd430981..1241b88a9379 100644
> --- a/playbooks/roles/gen_nodes/templates/drives.j2
> +++ b/playbooks/roles/gen_nodes/templates/drives.j2
> @@ -40,6 +40,26 @@ the drives can vary by type, so we have one macro by type of drive.
> {% endfor %}
> <!-- End of virtio drives-->
> {%- endmacro -%}
> +{%- macro gen_drive_scsi(num_drives,
> + kdevops_storage_pool_path,
> + hostname,
> + libvirt_extra_drive_format,
> + libvirt_extra_storage_aio_mode,
> + libvirt_extra_storage_aio_cache_mode) -%}
> +<!-- We generated {{ num_drives }} scsi dives -->
> +{% for n in range(0,num_drives) %}
> + <!-- This is scsi drive # {{ n + 1 }} -->
> + <qemu:arg value='-device'/>
> + <qemu:arg value='pcie-root-port,id=pcie-port-for-scsi-{{ n }},multifunction=on,bus=pcie.1,addr=0x{{ "%0x" | format( n | int) }},chassis=5{{ n }}'/>
> + <qemu:arg value='-device'/>
> + <qemu:arg value='virtio-scsi-pci,id=scsi{{ n }},bus=pcie-port-for-scsi-{{ n }},addr=0x0'/>
> + <qemu:arg value='-drive'/>
> + <qemu:arg value='file={{ kdevops_storage_pool_path }}/guestfs/{{ hostname }}/extra{{ n }}.{{ libvirt_extra_drive_format }},format={{ libvirt_extra_drive_format }},if=none,aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},id=drv{{ n }}'/>
> + <qemu:arg value='-device'/>
> + <qemu:arg value='scsi-hd,drive=drv{{ n }},bus=scsi{{ n }}.0'/>
> +{% endfor %}
> +<!-- End of scsi drives-->
> +{%- endmacro -%}
> {%- macro gen_drive_large_io_virtio(libvirt_largeio_logical_compat,
> libvirt_largeio_logical_compat_size,
> libvirt_largeio_pow_limit,
> @@ -49,7 +69,7 @@ the drives can vary by type, so we have one macro by type of drive.
> libvirt_extra_storage_aio_mode,
> libvirt_extra_storage_aio_cache_mode,
> kdevops_storage_pool_path) -%}
> -<!-- These are virtio drives used for large IO experimentaiton, with LBS support -->
> +<!-- These are virtio drives used for large IO experimentation, with LBS support -->
> {% set ns = namespace(lbs_idx=0) %}
> {% set max_pbs = libvirt_largeio_logical_compat_size * (2 ** libvirt_largeio_pow_limit) %}
> {% for n in range(0,libvirt_largeio_pow_limit+1) %}
> @@ -106,7 +126,7 @@ the drives can vary by type, so we have one macro by type of drive.
> libvirt_extra_storage_aio_mode,
> libvirt_extra_storage_aio_cache_mode,
> kdevops_storage_pool_path) -%}
> -<!-- These are NVMe drives used for large IO experimentaiton, with LBS support -->
> +<!-- These are NVMe drives used for large IO experimentation, with LBS support -->
> {% set ns = namespace(lbs_idx=0) %}
> {% set max_pbs = libvirt_largeio_logical_compat_size * (2 ** libvirt_largeio_pow_limit) %}
> {% for n in range(0,libvirt_largeio_pow_limit+1) %}
> diff --git a/playbooks/roles/gen_nodes/templates/gen_drives.j2 b/playbooks/roles/gen_nodes/templates/gen_drives.j2
> index 874e5b0623b9..2de13da4ab8e 100644
> --- a/playbooks/roles/gen_nodes/templates/gen_drives.j2
> +++ b/playbooks/roles/gen_nodes/templates/gen_drives.j2
> @@ -19,14 +19,21 @@
> kdevops_storage_pool_path) }}
> {% else %}
> {{ drives.gen_drive_virtio(4,
> - kdevops_storage_pool_path,
> - hostname,
> - libvirt_extra_drive_format,
> - libvirt_extra_storage_aio_mode,
> - libvirt_extra_storage_aio_cache_mode,
> - libvirt_extra_storage_virtio_logical_block_size,
> - libvirt_extra_storage_virtio_physical_block_size) }}
> + kdevops_storage_pool_path,
> + hostname,
> + libvirt_extra_drive_format,
> + libvirt_extra_storage_aio_mode,
> + libvirt_extra_storage_aio_cache_mode,
> + libvirt_extra_storage_virtio_logical_block_size,
> + libvirt_extra_storage_virtio_physical_block_size) }}
> {% endif %}
> +{% elif libvirt_extra_storage_drive_scsi %}
> +{{ drives.gen_drive_scsi(4,
> + kdevops_storage_pool_path,
> + hostname,
> + libvirt_extra_drive_format,
> + libvirt_extra_storage_aio_mode,
> + libvirt_extra_storage_aio_cache_mode) }}
> {% elif libvirt_extra_storage_drive_nvme %}
> {% if libvirt_largeio_enable %}
> {{ drives.gen_drive_large_io_nvme(libvirt_largeio_logical_compat,
Same for this hunk.
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH] guestfs: add scsi extra storage option
2025-07-12 7:47 ` Daniel Gomez
@ 2025-07-13 16:30 ` Chuck Lever
2025-07-13 16:35 ` Chuck Lever
0 siblings, 1 reply; 9+ messages in thread
From: Chuck Lever @ 2025-07-13 16:30 UTC (permalink / raw)
To: Daniel Gomez, Luis Chamberlain, Daniel Gomez, Swarna Prabhu,
swarnassp6, kdevops
Cc: Luis Chamberlain
On 7/12/25 3:47 AM, Daniel Gomez wrote:
> On 10/07/2025 19.35, Luis Chamberlain wrote:
>> From: Luis Chamberlain <mcgrof@gmail.com>
>> diff --git a/playbooks/roles/gen_nodes/templates/Vagrantfile.j2 b/playbooks/roles/gen_nodes/templates/Vagrantfile.j2
>> index 2298c3455251..a52f5566280e 100644
>> --- a/playbooks/roles/gen_nodes/templates/Vagrantfile.j2
>> +++ b/playbooks/roles/gen_nodes/templates/Vagrantfile.j2
>
> We need to nuke this Vagrant stuff...
>
Agreed. I would like to see it gone before I move forward with
integrating ssh config management between terraform and guestfs.
I've been looking at ways of chipping away at removing Vagrant
but the implementation is integrated into a number of subsystems
that kdevops still uses. It doesn't seem to be a matter of just
ripping it out.
--
Chuck Lever
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH] guestfs: add scsi extra storage option
2025-07-13 16:30 ` Chuck Lever
@ 2025-07-13 16:35 ` Chuck Lever
0 siblings, 0 replies; 9+ messages in thread
From: Chuck Lever @ 2025-07-13 16:35 UTC (permalink / raw)
To: Daniel Gomez, Luis Chamberlain, Daniel Gomez, Swarna Prabhu,
swarnassp6, kdevops
Cc: Luis Chamberlain
On 7/13/25 12:30 PM, Chuck Lever wrote:
> On 7/12/25 3:47 AM, Daniel Gomez wrote:
>> On 10/07/2025 19.35, Luis Chamberlain wrote:
>>> From: Luis Chamberlain <mcgrof@gmail.com>
>
>>> diff --git a/playbooks/roles/gen_nodes/templates/Vagrantfile.j2 b/playbooks/roles/gen_nodes/templates/Vagrantfile.j2
>>> index 2298c3455251..a52f5566280e 100644
>>> --- a/playbooks/roles/gen_nodes/templates/Vagrantfile.j2
>>> +++ b/playbooks/roles/gen_nodes/templates/Vagrantfile.j2
>>
>> We need to nuke this Vagrant stuff...
>>
>
> Agreed. I would like to see it gone before I move forward with
> integrating ssh config management between terraform and guestfs.
>
> I've been looking at ways of chipping away at removing Vagrant
> but the implementation is integrated into a number of subsystems
> that kdevops still uses. It doesn't seem to be a matter of just
> ripping it out.
I guess Luis disagrees about how challenging it would be to rip it out
;-)
--
Chuck Lever
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH] guestfs: add scsi extra storage option
2025-07-11 12:53 ` Chuck Lever
@ 2025-07-18 4:48 ` Luis Chamberlain
2025-07-18 13:22 ` Chuck Lever
0 siblings, 1 reply; 9+ messages in thread
From: Luis Chamberlain @ 2025-07-18 4:48 UTC (permalink / raw)
To: Chuck Lever
Cc: Daniel Gomez, Swarna Prabhu, swarnassp6, kdevops,
Luis Chamberlain
On Fri, Jul 11, 2025 at 08:53:59AM -0400, Chuck Lever wrote:
> On 7/10/25 8:45 PM, Luis Chamberlain wrote:
> > On Thu, Jul 10, 2025 at 06:53:37PM -0400, Chuck Lever wrote:
> >> On 7/10/25 1:35 PM, Luis Chamberlain wrote:
> >>> From: Luis Chamberlain <mcgrof@gmail.com>
> >>>
> >>> Add scsi device support for virtualization.
> >>>
> >>> Generated-by: ChatGPT Codex
> >>> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
> >>> ---
> >>> defconfigs/blktests_scsi | 2 +-
> >>> kconfigs/Kconfig.libvirt | 25 ++++++++++++-------
> >>> playbooks/roles/gen_nodes/defaults/main.yml | 1 +
> >>> .../roles/gen_nodes/templates/Vagrantfile.j2 | 11 ++++++--
> >>> playbooks/roles/gen_nodes/templates/drives.j2 | 24 ++++++++++++++++--
> >>> .../roles/gen_nodes/templates/gen_drives.j2 | 21 ++++++++++------
> >>> .../kdevops_nodes_split_start.j2.yaml | 10 ++++++++
> >>> workflows/steady_state/Kconfig | 1 +
> >>> 8 files changed, 74 insertions(+), 21 deletions(-)
> >>>
> >>> diff --git a/defconfigs/blktests_scsi b/defconfigs/blktests_scsi
> >>> index d49e72893690..28462939e11e 100644
> >>> --- a/defconfigs/blktests_scsi
> >>> +++ b/defconfigs/blktests_scsi
> >>> @@ -9,7 +9,7 @@ CONFIG_GUESTFS_BRINGUP_DEBUG_1=y
> >>> CONFIG_BOOTLINUX=y
> >>> CONFIG_BOOTLINUX_9P=y
> >>>
> >>> -CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_NVME=y
> >>> +CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_SCSI=y
> >>>
> >>> CONFIG_WORKFLOWS_TESTS=y
> >>> CONFIG_WORKFLOWS_LINUX_TESTS=y
> >>> diff --git a/kconfigs/Kconfig.libvirt b/kconfigs/Kconfig.libvirt
> >>> index 8654eb9b1a06..f737b79faea4 100644
> >>> --- a/kconfigs/Kconfig.libvirt
> >>> +++ b/kconfigs/Kconfig.libvirt
> >>> @@ -487,7 +487,7 @@ config LIBVIRT_MACHINE_TYPE_DEFAULT
> >>> is the old i440x, so you will not get PCIe support.
> >>>
> >>> We only want to support PCI-E capable guests with libguest so the
> >>> - deafult is not allowed on libguest.
> >>> + default is not allowed on libguest.
> >>>
> >>> config LIBVIRT_MACHINE_TYPE_Q35
> >>> bool "q35"
> >>> @@ -548,11 +548,18 @@ config LIBVIRT_EXTRA_STORAGE_DRIVE_VIRTIO
> >>> you won't be able to test ZNS.
> >>>
> >>> config LIBVIRT_EXTRA_STORAGE_DRIVE_IDE
> >>> - bool "ide"
> >>> - output yaml
> >>> - help
> >>> - Use the QEMU ide driver for extra storage drives. This is useful for
> >>> - really old Linux distributions that lack the virtio backend driver.
> >>> + bool "ide"
> >>> + output yaml
> >>> + help
> >>> + Use the QEMU ide driver for extra storage drives. This is useful for
> >>> + really old Linux distributions that lack the virtio backend driver.
> >>> +
> >>> +config LIBVIRT_EXTRA_STORAGE_DRIVE_SCSI
> >>> + bool "scsi"
> >>> + output yaml
> >>> + help
> >>> + Use the QEMU SCSI driver for extra storage drives. This relies on a
> >>> + virtio-scsi controller with scsi-hd devices attached.
> >>>
> >>> endchoice
> >>>
> >>> @@ -817,7 +824,7 @@ config LIBVIRT_AIO_MODE_NATIVE
> >>> help
> >>> Use the aio=native mode. For some older kernels it is known that
> >>> native will cause corruption if used on ext4 or xfs filesystem if
> >>> - you also use cace=none. This corruption is documented for RHEL:
> >>> + you also use cache=none. This corruption is documented for RHEL:
> >>>
> >>> https://access.redhat.com/articles/41313
> >>> https://bugzilla.redhat.com/show_bug.cgi?id=615309
> >>> @@ -1104,7 +1111,7 @@ config LIBVIRT_ENABLE_GDB
> >>> output yaml
> >>> help
> >>> Select this option if you want to enable debugging support for GDB.
> >>> - By default , it is assumed that gdb is disabled since we dont want
> >>> + By default , it is assumed that gdb is disabled since we don't want
> >>> to complicate this for the CI runs. If enabled then libvirt guest
> >>> xml for each guest will be configured to use gdb on a specific
> >>> tcp port.
> >>> @@ -1116,7 +1123,7 @@ config LIBVIRT_GDB_BASEPORT
> >>> depends on LIBVIRT_ENABLE_GDB
> >>> help
> >>> This option defines the base port to be used for the GDB.
> >>> - Esentially we need to make QEMU listen for an incoming connection from
> >>> + Essentially we need to make QEMU listen for an incoming connection from
> >>> gdb on a TCP port. The default port is chosen to be 1234. However we
> >>> introduce variability for assigning the port to each guest by defining
> >>> a base port and adding an index to it based on the number of libvrt guest
> >>> diff --git a/playbooks/roles/gen_nodes/defaults/main.yml b/playbooks/roles/gen_nodes/defaults/main.yml
> >>> index 6a899be45128..2cf450b84281 100644
> >>> --- a/playbooks/roles/gen_nodes/defaults/main.yml
> >>> +++ b/playbooks/roles/gen_nodes/defaults/main.yml
> >>> @@ -70,6 +70,7 @@ libvirt_extra_drive_id_prefix: 'drv'
> >>> libvirt_extra_storage_drive_nvme: False
> >>> libvirt_extra_storage_drive_virtio: False
> >>> libvirt_extra_storage_drive_ide: False
> >>> +libvirt_extra_storage_drive_scsi: False
> >>> libvirt_extra_storage_aio_mode: "native"
> >>> libvirt_extra_storage_aio_cache_mode: "none"
> >>> # Note that NVMe on qemu does not allow the physical block size
> >>> diff --git a/playbooks/roles/gen_nodes/templates/Vagrantfile.j2 b/playbooks/roles/gen_nodes/templates/Vagrantfile.j2
> >>> index 2298c3455251..a52f5566280e 100644
> >>> --- a/playbooks/roles/gen_nodes/templates/Vagrantfile.j2
> >>> +++ b/playbooks/roles/gen_nodes/templates/Vagrantfile.j2
> >>> @@ -244,7 +244,7 @@ Vagrant.configure("2") do |config|
> >>> #
> >>> # Just create a PCI or PCI-E root bus dedicated for extra drives. We
> >>> # use 0x08 to place this PCI / PCI-E root bus as we know this is
> >>> - # avilable on modern x86-64 systems. Eventually we may need to bump
> >>> + # available on modern x86-64 systems. Eventually we may need to bump
> >>> # this to 0x9, but it would be better instead to have vagant-libvirt
> >>> # speak "add a new PCI or PCI-E root bus" and "add extra drives" whether
> >>> # that is nvme or virtio.
> >>> @@ -252,7 +252,7 @@ Vagrant.configure("2") do |config|
> >>> {% if not libvirt_override_machine_type %}
> >>> # For i440x on x86_64 (default on libvirt as of today) we use PCI, not
> >>> # PCI-E. Below assumes i440x. i440x cannot support CXL as it does not
> >>> - # suport PCI-E.
> >>> + # support PCI-E.
> >>> libvirt.qemuargs :value => "-device"
> >>> libvirt.qemuargs :value => "pci-bridge,id=custom-pci-for-{{ extra_disk_driver }},chassis_nr=1,bus=pci.0,addr=0x8"
> >>> {% else %}
> >>> @@ -417,6 +417,13 @@ Vagrant.configure("2") do |config|
> >>> libvirt.qemuargs :value => "file=#{extra_disk},format={{ libvirt_extra_drive_format }},if=none,aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},id=#{disk_id}"
> >>> libvirt.qemuargs :value => "-device"
> >>> libvirt.qemuargs :value => "virtio-blk-pci,drive=#{disk_id},id=virtio-#{disk_id},serial=#{serial_id},bus=#{bus_for_extra_drives},addr=#{pci_function},iothread=kdevops-virtio-iothread-#{port}#{extra_drive_largio_args},logical_block_size=#{virtio_lbs},physical_block_size=#{virtio_pbs}"
> >>> +{% elif libvirt_extra_storage_drive_scsi %}
> >>> + libvirt.qemuargs :value => "-device"
> >>> + libvirt.qemuargs :value => "virtio-scsi-pci,id=scsi#{port},bus=#{bus_for_extra_drives},addr=#{pci_function}"
> >>> + libvirt.qemuargs :value => "-drive"
> >>> + libvirt.qemuargs :value => "file=#{extra_disk},format={{ libvirt_extra_drive_format }},if=none,aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},id=#{disk_id}"
> >>> + libvirt.qemuargs :value => "-device"
> >>> + libvirt.qemuargs :value => "scsi-hd,drive=#{disk_id},bus=scsi#{port}.0"
> >>> {% elif libvirt_extra_storage_drive_nvme %}
> >>> if ! largio
> >>> nvme_lbs = "{{ libvirt_extra_storage_nvme_logical_block_size }}"
> >>> diff --git a/playbooks/roles/gen_nodes/templates/drives.j2 b/playbooks/roles/gen_nodes/templates/drives.j2
> >>> index 8327cd430981..1241b88a9379 100644
> >>> --- a/playbooks/roles/gen_nodes/templates/drives.j2
> >>> +++ b/playbooks/roles/gen_nodes/templates/drives.j2
> >>> @@ -40,6 +40,26 @@ the drives can vary by type, so we have one macro by type of drive.
> >>> {% endfor %}
> >>> <!-- End of virtio drives-->
> >>> {%- endmacro -%}
> >>> +{%- macro gen_drive_scsi(num_drives,
> >>> + kdevops_storage_pool_path,
> >>> + hostname,
> >>> + libvirt_extra_drive_format,
> >>> + libvirt_extra_storage_aio_mode,
> >>> + libvirt_extra_storage_aio_cache_mode) -%}
> >>> +<!-- We generated {{ num_drives }} scsi dives -->
> >>> +{% for n in range(0,num_drives) %}
> >>> + <!-- This is scsi drive # {{ n + 1 }} -->
> >>> + <qemu:arg value='-device'/>
> >>> + <qemu:arg value='pcie-root-port,id=pcie-port-for-scsi-{{ n }},multifunction=on,bus=pcie.1,addr=0x{{ "%0x" | format( n | int) }},chassis=5{{ n }}'/>
> >>> + <qemu:arg value='-device'/>
> >>> + <qemu:arg value='virtio-scsi-pci,id=scsi{{ n }},bus=pcie-port-for-scsi-{{ n }},addr=0x0'/>
> >>> + <qemu:arg value='-drive'/>
> >>> + <qemu:arg value='file={{ kdevops_storage_pool_path }}/guestfs/{{ hostname }}/extra{{ n }}.{{ libvirt_extra_drive_format }},format={{ libvirt_extra_drive_format }},if=none,aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},id=drv{{ n }}'/>
> >>> + <qemu:arg value='-device'/>
> >>> + <qemu:arg value='scsi-hd,drive=drv{{ n }},bus=scsi{{ n }}.0'/>
> >>> +{% endfor %}
> >>> +<!-- End of scsi drives-->
> >>> +{%- endmacro -%}
> >>> {%- macro gen_drive_large_io_virtio(libvirt_largeio_logical_compat,
> >>> libvirt_largeio_logical_compat_size,
> >>> libvirt_largeio_pow_limit,
> >>> @@ -49,7 +69,7 @@ the drives can vary by type, so we have one macro by type of drive.
> >>> libvirt_extra_storage_aio_mode,
> >>> libvirt_extra_storage_aio_cache_mode,
> >>> kdevops_storage_pool_path) -%}
> >>> -<!-- These are virtio drives used for large IO experimentaiton, with LBS support -->
> >>> +<!-- These are virtio drives used for large IO experimentation, with LBS support -->
> >>> {% set ns = namespace(lbs_idx=0) %}
> >>> {% set max_pbs = libvirt_largeio_logical_compat_size * (2 ** libvirt_largeio_pow_limit) %}
> >>> {% for n in range(0,libvirt_largeio_pow_limit+1) %}
> >>> @@ -106,7 +126,7 @@ the drives can vary by type, so we have one macro by type of drive.
> >>> libvirt_extra_storage_aio_mode,
> >>> libvirt_extra_storage_aio_cache_mode,
> >>> kdevops_storage_pool_path) -%}
> >>> -<!-- These are NVMe drives used for large IO experimentaiton, with LBS support -->
> >>> +<!-- These are NVMe drives used for large IO experimentation, with LBS support -->
> >>> {% set ns = namespace(lbs_idx=0) %}
> >>> {% set max_pbs = libvirt_largeio_logical_compat_size * (2 ** libvirt_largeio_pow_limit) %}
> >>> {% for n in range(0,libvirt_largeio_pow_limit+1) %}
> >>> diff --git a/playbooks/roles/gen_nodes/templates/gen_drives.j2 b/playbooks/roles/gen_nodes/templates/gen_drives.j2
> >>> index 874e5b0623b9..2de13da4ab8e 100644
> >>> --- a/playbooks/roles/gen_nodes/templates/gen_drives.j2
> >>> +++ b/playbooks/roles/gen_nodes/templates/gen_drives.j2
> >>> @@ -19,14 +19,21 @@
> >>> kdevops_storage_pool_path) }}
> >>> {% else %}
> >>> {{ drives.gen_drive_virtio(4,
> >>> - kdevops_storage_pool_path,
> >>> - hostname,
> >>> - libvirt_extra_drive_format,
> >>> - libvirt_extra_storage_aio_mode,
> >>> - libvirt_extra_storage_aio_cache_mode,
> >>> - libvirt_extra_storage_virtio_logical_block_size,
> >>> - libvirt_extra_storage_virtio_physical_block_size) }}
> >>> + kdevops_storage_pool_path,
> >>> + hostname,
> >>> + libvirt_extra_drive_format,
> >>> + libvirt_extra_storage_aio_mode,
> >>> + libvirt_extra_storage_aio_cache_mode,
> >>> + libvirt_extra_storage_virtio_logical_block_size,
> >>> + libvirt_extra_storage_virtio_physical_block_size) }}
> >>> {% endif %}
> >>> +{% elif libvirt_extra_storage_drive_scsi %}
> >>> +{{ drives.gen_drive_scsi(4,
> >>> + kdevops_storage_pool_path,
> >>> + hostname,
> >>> + libvirt_extra_drive_format,
> >>> + libvirt_extra_storage_aio_mode,
> >>> + libvirt_extra_storage_aio_cache_mode) }}
> >>> {% elif libvirt_extra_storage_drive_nvme %}
> >>> {% if libvirt_largeio_enable %}
> >>> {{ drives.gen_drive_large_io_nvme(libvirt_largeio_logical_compat,
> >>> diff --git a/playbooks/roles/gen_nodes/templates/kdevops_nodes_split_start.j2.yaml b/playbooks/roles/gen_nodes/templates/kdevops_nodes_split_start.j2.yaml
> >>> index e5b88efefc63..91e105cd3bff 100644
> >>> --- a/playbooks/roles/gen_nodes/templates/kdevops_nodes_split_start.j2.yaml
> >>> +++ b/playbooks/roles/gen_nodes/templates/kdevops_nodes_split_start.j2.yaml
> >>> @@ -76,6 +76,16 @@ vagrant_global:
> >>> physical_block_size: {{ libvirt_extra_storage_virtio_physical_block_size }}
> >>> logical_block_size: {{ libvirt_extra_storage_virtio_logical_block_size }}
> >>> {% endif %}
> >>> +{% elif libvirt_extra_storage_drive_scsi %}
> >>> + extra_disks:
> >>> + data:
> >>> + size: 102400
> >>> + scratch:
> >>> + size: 102400
> >>> + extra1:
> >>> + size: 102400
> >>> + extra2:
> >>> + size: 102400
> >>> {% elif libvirt_extra_storage_drive_nvme %}
> >>> extra_disks:
> >>> data:
> >>> diff --git a/workflows/steady_state/Kconfig b/workflows/steady_state/Kconfig
> >>> index 335b833cfbcc..f7064d078c0a 100644
> >>> --- a/workflows/steady_state/Kconfig
> >>> +++ b/workflows/steady_state/Kconfig
> >>> @@ -6,6 +6,7 @@ config SSD_STEADY_STATE_DEVICE
> >>> default "/dev/disk/by-id/nvme-QEMU_NVMe_Ctrl_kdevops1" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_NVME
> >>> default "/dev/disk/by-id/virtio-kdevops1" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_VIRTIO
> >>> default "/dev/disk/by-id/ata-QEMU_HARDDISK_kdevops1" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_IDE
> >>> + default "/dev/sdc" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_SCSI
> >>> default "/dev/nvme2n1" if TERRAFORM_AWS_INSTANCE_M5AD_4XLARGE
> >>> default "/dev/nvme1n1" if TERRAFORM_GCE
> >>> default "/dev/sdd" if TERRAFORM_AZURE
> >>
> >> I would split the spelling corrections into a separate patch. That might
> >> even be helpful for LLM training.
> >
> > Sure I can do that.
> >
> >> This patch looks like it is missing a change to
> >>
> >> playbooks/roles/volume_group/tasks/guestfs.yml
> >>
> >> Add a task specific to guestfs SCSI drives after the IDE task. You can
> >> test by running the pynfs workflow, which will try to set up a single
> >> guest with NFSD and an LVM group made from these disk devices.
> >
> > Are you OK if I add this after this patch, I'd like to test codex to see
> > if I can just input your requirement to do that. It used to be AI for
> > kdevops was gettind D grading, I am now grading it at B with Codex.
> I don't think there's a strong concern about bisect-ability here, so
> it should be OK as a separate patch.
You mean like this? (written by claude)
diff --git a/playbooks/roles/volume_group/tasks/guestfs.yml b/playbooks/roles/volume_group/tasks/guestfs.yml
index a5536159..0a285f16 100644
--- a/playbooks/roles/volume_group/tasks/guestfs.yml
+++ b/playbooks/roles/volume_group/tasks/guestfs.yml
@@ -31,6 +31,13 @@
- libvirt_extra_storage_drive_ide is defined
- libvirt_extra_storage_drive_ide|bool
+- name: Set the SCSI device search pattern
+ ansible.builtin.set_fact:
+ by_id_pattern: "scsi-0QEMU_QEMU_HARDDISK_kdevops*"
+ when:
+ - libvirt_extra_storage_drive_scsi is defined
+ - libvirt_extra_storage_drive_scsi|bool
+
- name: Verify there are block devices to search
ansible.builtin.fail:
msg: No supported block devices are available for NFSD.
Luis
^ permalink raw reply related [flat|nested] 9+ messages in thread
* Re: [PATCH] guestfs: add scsi extra storage option
2025-07-18 4:48 ` Luis Chamberlain
@ 2025-07-18 13:22 ` Chuck Lever
0 siblings, 0 replies; 9+ messages in thread
From: Chuck Lever @ 2025-07-18 13:22 UTC (permalink / raw)
To: Luis Chamberlain
Cc: Daniel Gomez, Swarna Prabhu, swarnassp6, kdevops,
Luis Chamberlain
On 7/18/25 12:48 AM, Luis Chamberlain wrote:
> On Fri, Jul 11, 2025 at 08:53:59AM -0400, Chuck Lever wrote:
>> On 7/10/25 8:45 PM, Luis Chamberlain wrote:
>>> On Thu, Jul 10, 2025 at 06:53:37PM -0400, Chuck Lever wrote:
>>>> On 7/10/25 1:35 PM, Luis Chamberlain wrote:
>>>>> From: Luis Chamberlain <mcgrof@gmail.com>
>>>>>
>>>>> Add scsi device support for virtualization.
>>>>>
>>>>> Generated-by: ChatGPT Codex
>>>>> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
>>>>> ---
>>>>> defconfigs/blktests_scsi | 2 +-
>>>>> kconfigs/Kconfig.libvirt | 25 ++++++++++++-------
>>>>> playbooks/roles/gen_nodes/defaults/main.yml | 1 +
>>>>> .../roles/gen_nodes/templates/Vagrantfile.j2 | 11 ++++++--
>>>>> playbooks/roles/gen_nodes/templates/drives.j2 | 24 ++++++++++++++++--
>>>>> .../roles/gen_nodes/templates/gen_drives.j2 | 21 ++++++++++------
>>>>> .../kdevops_nodes_split_start.j2.yaml | 10 ++++++++
>>>>> workflows/steady_state/Kconfig | 1 +
>>>>> 8 files changed, 74 insertions(+), 21 deletions(-)
>>>>>
>>>>> diff --git a/defconfigs/blktests_scsi b/defconfigs/blktests_scsi
>>>>> index d49e72893690..28462939e11e 100644
>>>>> --- a/defconfigs/blktests_scsi
>>>>> +++ b/defconfigs/blktests_scsi
>>>>> @@ -9,7 +9,7 @@ CONFIG_GUESTFS_BRINGUP_DEBUG_1=y
>>>>> CONFIG_BOOTLINUX=y
>>>>> CONFIG_BOOTLINUX_9P=y
>>>>>
>>>>> -CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_NVME=y
>>>>> +CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_SCSI=y
>>>>>
>>>>> CONFIG_WORKFLOWS_TESTS=y
>>>>> CONFIG_WORKFLOWS_LINUX_TESTS=y
>>>>> diff --git a/kconfigs/Kconfig.libvirt b/kconfigs/Kconfig.libvirt
>>>>> index 8654eb9b1a06..f737b79faea4 100644
>>>>> --- a/kconfigs/Kconfig.libvirt
>>>>> +++ b/kconfigs/Kconfig.libvirt
>>>>> @@ -487,7 +487,7 @@ config LIBVIRT_MACHINE_TYPE_DEFAULT
>>>>> is the old i440x, so you will not get PCIe support.
>>>>>
>>>>> We only want to support PCI-E capable guests with libguest so the
>>>>> - deafult is not allowed on libguest.
>>>>> + default is not allowed on libguest.
>>>>>
>>>>> config LIBVIRT_MACHINE_TYPE_Q35
>>>>> bool "q35"
>>>>> @@ -548,11 +548,18 @@ config LIBVIRT_EXTRA_STORAGE_DRIVE_VIRTIO
>>>>> you won't be able to test ZNS.
>>>>>
>>>>> config LIBVIRT_EXTRA_STORAGE_DRIVE_IDE
>>>>> - bool "ide"
>>>>> - output yaml
>>>>> - help
>>>>> - Use the QEMU ide driver for extra storage drives. This is useful for
>>>>> - really old Linux distributions that lack the virtio backend driver.
>>>>> + bool "ide"
>>>>> + output yaml
>>>>> + help
>>>>> + Use the QEMU ide driver for extra storage drives. This is useful for
>>>>> + really old Linux distributions that lack the virtio backend driver.
>>>>> +
>>>>> +config LIBVIRT_EXTRA_STORAGE_DRIVE_SCSI
>>>>> + bool "scsi"
>>>>> + output yaml
>>>>> + help
>>>>> + Use the QEMU SCSI driver for extra storage drives. This relies on a
>>>>> + virtio-scsi controller with scsi-hd devices attached.
>>>>>
>>>>> endchoice
>>>>>
>>>>> @@ -817,7 +824,7 @@ config LIBVIRT_AIO_MODE_NATIVE
>>>>> help
>>>>> Use the aio=native mode. For some older kernels it is known that
>>>>> native will cause corruption if used on ext4 or xfs filesystem if
>>>>> - you also use cace=none. This corruption is documented for RHEL:
>>>>> + you also use cache=none. This corruption is documented for RHEL:
>>>>>
>>>>> https://access.redhat.com/articles/41313
>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=615309
>>>>> @@ -1104,7 +1111,7 @@ config LIBVIRT_ENABLE_GDB
>>>>> output yaml
>>>>> help
>>>>> Select this option if you want to enable debugging support for GDB.
>>>>> - By default , it is assumed that gdb is disabled since we dont want
>>>>> + By default , it is assumed that gdb is disabled since we don't want
>>>>> to complicate this for the CI runs. If enabled then libvirt guest
>>>>> xml for each guest will be configured to use gdb on a specific
>>>>> tcp port.
>>>>> @@ -1116,7 +1123,7 @@ config LIBVIRT_GDB_BASEPORT
>>>>> depends on LIBVIRT_ENABLE_GDB
>>>>> help
>>>>> This option defines the base port to be used for the GDB.
>>>>> - Esentially we need to make QEMU listen for an incoming connection from
>>>>> + Essentially we need to make QEMU listen for an incoming connection from
>>>>> gdb on a TCP port. The default port is chosen to be 1234. However we
>>>>> introduce variability for assigning the port to each guest by defining
>>>>> a base port and adding an index to it based on the number of libvrt guest
>>>>> diff --git a/playbooks/roles/gen_nodes/defaults/main.yml b/playbooks/roles/gen_nodes/defaults/main.yml
>>>>> index 6a899be45128..2cf450b84281 100644
>>>>> --- a/playbooks/roles/gen_nodes/defaults/main.yml
>>>>> +++ b/playbooks/roles/gen_nodes/defaults/main.yml
>>>>> @@ -70,6 +70,7 @@ libvirt_extra_drive_id_prefix: 'drv'
>>>>> libvirt_extra_storage_drive_nvme: False
>>>>> libvirt_extra_storage_drive_virtio: False
>>>>> libvirt_extra_storage_drive_ide: False
>>>>> +libvirt_extra_storage_drive_scsi: False
>>>>> libvirt_extra_storage_aio_mode: "native"
>>>>> libvirt_extra_storage_aio_cache_mode: "none"
>>>>> # Note that NVMe on qemu does not allow the physical block size
>>>>> diff --git a/playbooks/roles/gen_nodes/templates/Vagrantfile.j2 b/playbooks/roles/gen_nodes/templates/Vagrantfile.j2
>>>>> index 2298c3455251..a52f5566280e 100644
>>>>> --- a/playbooks/roles/gen_nodes/templates/Vagrantfile.j2
>>>>> +++ b/playbooks/roles/gen_nodes/templates/Vagrantfile.j2
>>>>> @@ -244,7 +244,7 @@ Vagrant.configure("2") do |config|
>>>>> #
>>>>> # Just create a PCI or PCI-E root bus dedicated for extra drives. We
>>>>> # use 0x08 to place this PCI / PCI-E root bus as we know this is
>>>>> - # avilable on modern x86-64 systems. Eventually we may need to bump
>>>>> + # available on modern x86-64 systems. Eventually we may need to bump
>>>>> # this to 0x9, but it would be better instead to have vagant-libvirt
>>>>> # speak "add a new PCI or PCI-E root bus" and "add extra drives" whether
>>>>> # that is nvme or virtio.
>>>>> @@ -252,7 +252,7 @@ Vagrant.configure("2") do |config|
>>>>> {% if not libvirt_override_machine_type %}
>>>>> # For i440x on x86_64 (default on libvirt as of today) we use PCI, not
>>>>> # PCI-E. Below assumes i440x. i440x cannot support CXL as it does not
>>>>> - # suport PCI-E.
>>>>> + # support PCI-E.
>>>>> libvirt.qemuargs :value => "-device"
>>>>> libvirt.qemuargs :value => "pci-bridge,id=custom-pci-for-{{ extra_disk_driver }},chassis_nr=1,bus=pci.0,addr=0x8"
>>>>> {% else %}
>>>>> @@ -417,6 +417,13 @@ Vagrant.configure("2") do |config|
>>>>> libvirt.qemuargs :value => "file=#{extra_disk},format={{ libvirt_extra_drive_format }},if=none,aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},id=#{disk_id}"
>>>>> libvirt.qemuargs :value => "-device"
>>>>> libvirt.qemuargs :value => "virtio-blk-pci,drive=#{disk_id},id=virtio-#{disk_id},serial=#{serial_id},bus=#{bus_for_extra_drives},addr=#{pci_function},iothread=kdevops-virtio-iothread-#{port}#{extra_drive_largio_args},logical_block_size=#{virtio_lbs},physical_block_size=#{virtio_pbs}"
>>>>> +{% elif libvirt_extra_storage_drive_scsi %}
>>>>> + libvirt.qemuargs :value => "-device"
>>>>> + libvirt.qemuargs :value => "virtio-scsi-pci,id=scsi#{port},bus=#{bus_for_extra_drives},addr=#{pci_function}"
>>>>> + libvirt.qemuargs :value => "-drive"
>>>>> + libvirt.qemuargs :value => "file=#{extra_disk},format={{ libvirt_extra_drive_format }},if=none,aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},id=#{disk_id}"
>>>>> + libvirt.qemuargs :value => "-device"
>>>>> + libvirt.qemuargs :value => "scsi-hd,drive=#{disk_id},bus=scsi#{port}.0"
>>>>> {% elif libvirt_extra_storage_drive_nvme %}
>>>>> if ! largio
>>>>> nvme_lbs = "{{ libvirt_extra_storage_nvme_logical_block_size }}"
>>>>> diff --git a/playbooks/roles/gen_nodes/templates/drives.j2 b/playbooks/roles/gen_nodes/templates/drives.j2
>>>>> index 8327cd430981..1241b88a9379 100644
>>>>> --- a/playbooks/roles/gen_nodes/templates/drives.j2
>>>>> +++ b/playbooks/roles/gen_nodes/templates/drives.j2
>>>>> @@ -40,6 +40,26 @@ the drives can vary by type, so we have one macro by type of drive.
>>>>> {% endfor %}
>>>>> <!-- End of virtio drives-->
>>>>> {%- endmacro -%}
>>>>> +{%- macro gen_drive_scsi(num_drives,
>>>>> + kdevops_storage_pool_path,
>>>>> + hostname,
>>>>> + libvirt_extra_drive_format,
>>>>> + libvirt_extra_storage_aio_mode,
>>>>> + libvirt_extra_storage_aio_cache_mode) -%}
>>>>> +<!-- We generated {{ num_drives }} scsi dives -->
>>>>> +{% for n in range(0,num_drives) %}
>>>>> + <!-- This is scsi drive # {{ n + 1 }} -->
>>>>> + <qemu:arg value='-device'/>
>>>>> + <qemu:arg value='pcie-root-port,id=pcie-port-for-scsi-{{ n }},multifunction=on,bus=pcie.1,addr=0x{{ "%0x" | format( n | int) }},chassis=5{{ n }}'/>
>>>>> + <qemu:arg value='-device'/>
>>>>> + <qemu:arg value='virtio-scsi-pci,id=scsi{{ n }},bus=pcie-port-for-scsi-{{ n }},addr=0x0'/>
>>>>> + <qemu:arg value='-drive'/>
>>>>> + <qemu:arg value='file={{ kdevops_storage_pool_path }}/guestfs/{{ hostname }}/extra{{ n }}.{{ libvirt_extra_drive_format }},format={{ libvirt_extra_drive_format }},if=none,aio={{ libvirt_extra_storage_aio_mode }},cache={{ libvirt_extra_storage_aio_cache_mode }},id=drv{{ n }}'/>
>>>>> + <qemu:arg value='-device'/>
>>>>> + <qemu:arg value='scsi-hd,drive=drv{{ n }},bus=scsi{{ n }}.0'/>
>>>>> +{% endfor %}
>>>>> +<!-- End of scsi drives-->
>>>>> +{%- endmacro -%}
>>>>> {%- macro gen_drive_large_io_virtio(libvirt_largeio_logical_compat,
>>>>> libvirt_largeio_logical_compat_size,
>>>>> libvirt_largeio_pow_limit,
>>>>> @@ -49,7 +69,7 @@ the drives can vary by type, so we have one macro by type of drive.
>>>>> libvirt_extra_storage_aio_mode,
>>>>> libvirt_extra_storage_aio_cache_mode,
>>>>> kdevops_storage_pool_path) -%}
>>>>> -<!-- These are virtio drives used for large IO experimentaiton, with LBS support -->
>>>>> +<!-- These are virtio drives used for large IO experimentation, with LBS support -->
>>>>> {% set ns = namespace(lbs_idx=0) %}
>>>>> {% set max_pbs = libvirt_largeio_logical_compat_size * (2 ** libvirt_largeio_pow_limit) %}
>>>>> {% for n in range(0,libvirt_largeio_pow_limit+1) %}
>>>>> @@ -106,7 +126,7 @@ the drives can vary by type, so we have one macro by type of drive.
>>>>> libvirt_extra_storage_aio_mode,
>>>>> libvirt_extra_storage_aio_cache_mode,
>>>>> kdevops_storage_pool_path) -%}
>>>>> -<!-- These are NVMe drives used for large IO experimentaiton, with LBS support -->
>>>>> +<!-- These are NVMe drives used for large IO experimentation, with LBS support -->
>>>>> {% set ns = namespace(lbs_idx=0) %}
>>>>> {% set max_pbs = libvirt_largeio_logical_compat_size * (2 ** libvirt_largeio_pow_limit) %}
>>>>> {% for n in range(0,libvirt_largeio_pow_limit+1) %}
>>>>> diff --git a/playbooks/roles/gen_nodes/templates/gen_drives.j2 b/playbooks/roles/gen_nodes/templates/gen_drives.j2
>>>>> index 874e5b0623b9..2de13da4ab8e 100644
>>>>> --- a/playbooks/roles/gen_nodes/templates/gen_drives.j2
>>>>> +++ b/playbooks/roles/gen_nodes/templates/gen_drives.j2
>>>>> @@ -19,14 +19,21 @@
>>>>> kdevops_storage_pool_path) }}
>>>>> {% else %}
>>>>> {{ drives.gen_drive_virtio(4,
>>>>> - kdevops_storage_pool_path,
>>>>> - hostname,
>>>>> - libvirt_extra_drive_format,
>>>>> - libvirt_extra_storage_aio_mode,
>>>>> - libvirt_extra_storage_aio_cache_mode,
>>>>> - libvirt_extra_storage_virtio_logical_block_size,
>>>>> - libvirt_extra_storage_virtio_physical_block_size) }}
>>>>> + kdevops_storage_pool_path,
>>>>> + hostname,
>>>>> + libvirt_extra_drive_format,
>>>>> + libvirt_extra_storage_aio_mode,
>>>>> + libvirt_extra_storage_aio_cache_mode,
>>>>> + libvirt_extra_storage_virtio_logical_block_size,
>>>>> + libvirt_extra_storage_virtio_physical_block_size) }}
>>>>> {% endif %}
>>>>> +{% elif libvirt_extra_storage_drive_scsi %}
>>>>> +{{ drives.gen_drive_scsi(4,
>>>>> + kdevops_storage_pool_path,
>>>>> + hostname,
>>>>> + libvirt_extra_drive_format,
>>>>> + libvirt_extra_storage_aio_mode,
>>>>> + libvirt_extra_storage_aio_cache_mode) }}
>>>>> {% elif libvirt_extra_storage_drive_nvme %}
>>>>> {% if libvirt_largeio_enable %}
>>>>> {{ drives.gen_drive_large_io_nvme(libvirt_largeio_logical_compat,
>>>>> diff --git a/playbooks/roles/gen_nodes/templates/kdevops_nodes_split_start.j2.yaml b/playbooks/roles/gen_nodes/templates/kdevops_nodes_split_start.j2.yaml
>>>>> index e5b88efefc63..91e105cd3bff 100644
>>>>> --- a/playbooks/roles/gen_nodes/templates/kdevops_nodes_split_start.j2.yaml
>>>>> +++ b/playbooks/roles/gen_nodes/templates/kdevops_nodes_split_start.j2.yaml
>>>>> @@ -76,6 +76,16 @@ vagrant_global:
>>>>> physical_block_size: {{ libvirt_extra_storage_virtio_physical_block_size }}
>>>>> logical_block_size: {{ libvirt_extra_storage_virtio_logical_block_size }}
>>>>> {% endif %}
>>>>> +{% elif libvirt_extra_storage_drive_scsi %}
>>>>> + extra_disks:
>>>>> + data:
>>>>> + size: 102400
>>>>> + scratch:
>>>>> + size: 102400
>>>>> + extra1:
>>>>> + size: 102400
>>>>> + extra2:
>>>>> + size: 102400
>>>>> {% elif libvirt_extra_storage_drive_nvme %}
>>>>> extra_disks:
>>>>> data:
>>>>> diff --git a/workflows/steady_state/Kconfig b/workflows/steady_state/Kconfig
>>>>> index 335b833cfbcc..f7064d078c0a 100644
>>>>> --- a/workflows/steady_state/Kconfig
>>>>> +++ b/workflows/steady_state/Kconfig
>>>>> @@ -6,6 +6,7 @@ config SSD_STEADY_STATE_DEVICE
>>>>> default "/dev/disk/by-id/nvme-QEMU_NVMe_Ctrl_kdevops1" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_NVME
>>>>> default "/dev/disk/by-id/virtio-kdevops1" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_VIRTIO
>>>>> default "/dev/disk/by-id/ata-QEMU_HARDDISK_kdevops1" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_IDE
>>>>> + default "/dev/sdc" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_SCSI
>>>>> default "/dev/nvme2n1" if TERRAFORM_AWS_INSTANCE_M5AD_4XLARGE
>>>>> default "/dev/nvme1n1" if TERRAFORM_GCE
>>>>> default "/dev/sdd" if TERRAFORM_AZURE
>>>>
>>>> I would split the spelling corrections into a separate patch. That might
>>>> even be helpful for LLM training.
>>>
>>> Sure I can do that.
>>>
>>>> This patch looks like it is missing a change to
>>>>
>>>> playbooks/roles/volume_group/tasks/guestfs.yml
>>>>
>>>> Add a task specific to guestfs SCSI drives after the IDE task. You can
>>>> test by running the pynfs workflow, which will try to set up a single
>>>> guest with NFSD and an LVM group made from these disk devices.
>>>
>>> Are you OK if I add this after this patch, I'd like to test codex to see
>>> if I can just input your requirement to do that. It used to be AI for
>>> kdevops was gettind D grading, I am now grading it at B with Codex.
>> I don't think there's a strong concern about bisect-ability here, so
>> it should be OK as a separate patch.
>
> You mean like this? (written by claude)
>
> diff --git a/playbooks/roles/volume_group/tasks/guestfs.yml b/playbooks/roles/volume_group/tasks/guestfs.yml
> index a5536159..0a285f16 100644
> --- a/playbooks/roles/volume_group/tasks/guestfs.yml
> +++ b/playbooks/roles/volume_group/tasks/guestfs.yml
> @@ -31,6 +31,13 @@
> - libvirt_extra_storage_drive_ide is defined
> - libvirt_extra_storage_drive_ide|bool
>
> +- name: Set the SCSI device search pattern
> + ansible.builtin.set_fact:
> + by_id_pattern: "scsi-0QEMU_QEMU_HARDDISK_kdevops*"
> + when:
> + - libvirt_extra_storage_drive_scsi is defined
> + - libvirt_extra_storage_drive_scsi|bool
> +
> - name: Verify there are block devices to search
> ansible.builtin.fail:
> msg: No supported block devices are available for NFSD.
>
> Luis
That looks like the right ballpark, yes.
You might want to add something like this for the fstests spare
media drive too, come to think of it. Up to you.
--
Chuck Lever
^ permalink raw reply [flat|nested] 9+ messages in thread
end of thread, other threads:[~2025-07-18 13:22 UTC | newest]
Thread overview: 9+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2025-07-10 17:35 [PATCH] guestfs: add scsi extra storage option Luis Chamberlain
2025-07-10 22:53 ` Chuck Lever
2025-07-11 0:45 ` Luis Chamberlain
2025-07-11 12:53 ` Chuck Lever
2025-07-18 4:48 ` Luis Chamberlain
2025-07-18 13:22 ` Chuck Lever
2025-07-12 7:47 ` Daniel Gomez
2025-07-13 16:30 ` Chuck Lever
2025-07-13 16:35 ` Chuck Lever
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox