From: Thomas Huth <thuth@redhat.com>
To: "John Levon" <john.levon@nutanix.com>,
qemu-devel@nongnu.org, "Daniel P. Berrangé" <berrange@redhat.com>
Cc: "Thanos Makatos" <thanos.makatos@nutanix.com>,
"Cédric Le Goater" <clg@redhat.com>,
"Zhao Liu" <zhao1.liu@intel.com>,
"Philippe Mathieu-Daudé" <philmd@linaro.org>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Mark Cave-Ayland" <mark.caveayland@nutanix.com>
Subject: Re: [PATCH v5] tests/functional: add a vfio-user smoke test
Date: Thu, 11 Sep 2025 17:27:24 +0200
Message-ID: <0d71919b-d5f9-47cd-9979-a692f3cf6a8d@redhat.com>
In-Reply-To: <20250911142228.1955529-1-john.levon@nutanix.com>
Hi!
On 11/09/2025 16.22, John Levon wrote:
> From: Mark Cave-Ayland <mark.caveayland@nutanix.com>
>
> Add a basic test of the vfio-user PCI client implementation.
>
> Co-authored-by: John Levon <john.levon@nutanix.com>
> Signed-off-by: Mark Cave-Ayland <mark.caveayland@nutanix.com>
> Signed-off-by: John Levon <john.levon@nutanix.com>
> ---
...
> diff --git a/tests/functional/x86_64/test_vfio_user_client.py b/tests/functional/x86_64/test_vfio_user_client.py
> new file mode 100755
> index 0000000000..1e4c5bc875
> --- /dev/null
> +++ b/tests/functional/x86_64/test_vfio_user_client.py
> @@ -0,0 +1,197 @@
> +#!/usr/bin/env python3
> +#
> +# Copyright (c) 2025 Nutanix, Inc.
> +#
> +# Author:
> +# Mark Cave-Ayland <mark.caveayland@nutanix.com>
> +# John Levon <john.levon@nutanix.com>
> +#
> +# SPDX-License-Identifier: GPL-2.0-or-later
> +"""
> +Check basic vfio-user-pci client functionality. The test starts two VMs:
> +
> + - the server VM runs the libvfio-user "gpio" example server inside it,
> + piping vfio-user traffic between a local UNIX socket and a virtio-serial
> + port. On the host, the virtio-serial port is backed by a local socket.
> +
> + - the client VM loads the gpio-pci-idio-16 kernel module, with the
> + vfio-user client connecting to the above local UNIX socket.
> +
> +This way, we don't depend on trying to run a vfio-user server on the host
> +itself.
> +
> +Once both VMs are running, we run some basic configuration on the gpio device
> +and verify that the server is logging the expected output. As this is consistent
> +given the same VM images, we just do a simple direct comparison.
> +"""
I'm not a python expert, but I guess it would make sense to move that
description block next to the "class VfioUserClient(QemuSystemTest):" line
so that it's the description for the class? (that would fix the "Missing
class docstring" that you get when using "pylint" on your code)
> +import difflib
> +import logging
> +import os
> +import select
> +import shutil
> +import socket
> +import subprocess
> +import time
pylint complains:
tests/functional/x86_64/test_vfio_user_client.py:28:0: W0611: Unused import difflib (unused-import)
tests/functional/x86_64/test_vfio_user_client.py:29:0: W0611: Unused import logging (unused-import)
tests/functional/x86_64/test_vfio_user_client.py:30:0: W0611: Unused import os (unused-import)
tests/functional/x86_64/test_vfio_user_client.py:31:0: W0611: Unused import select (unused-import)
tests/functional/x86_64/test_vfio_user_client.py:32:0: W0611: Unused import shutil (unused-import)
tests/functional/x86_64/test_vfio_user_client.py:33:0: W0611: Unused import socket (unused-import)
tests/functional/x86_64/test_vfio_user_client.py:34:0: W0611: Unused import subprocess (unused-import)
tests/functional/x86_64/test_vfio_user_client.py:35:0: W0611: Unused import time (unused-import)
... so I think you can remove those.
> +from qemu_test import Asset
> +from qemu_test import QemuSystemTest
> +from qemu_test import exec_command
Same for "exec_command" ... you don't use it in your test here.
> +from qemu_test import exec_command_and_wait_for_pattern
> +from qemu_test import wait_for_console_pattern
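With the unused ones dropped, I think the imports should boil down to
something like:

    from qemu_test import Asset
    from qemu_test import QemuSystemTest
    from qemu_test import exec_command_and_wait_for_pattern
    from qemu_test import wait_for_console_pattern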
> +
> +# Exact output can vary, so we just sample for some expected lines.
> +EXPECTED_SERVER_LINES = [
> + "gpio: adding DMA region [0, 0xc0000) offset=0 flags=0x3",
> + "gpio: devinfo flags 0x3, num_regions 9, num_irqs 5",
> + "gpio: region_info[0] offset 0 flags 0 size 0 argsz 32",
> + "gpio: region_info[1] offset 0 flags 0 size 0 argsz 32",
> + "gpio: region_info[2] offset 0 flags 0x3 size 256 argsz 32",
> + "gpio: region_info[3] offset 0 flags 0 size 0 argsz 32",
> + "gpio: region_info[4] offset 0 flags 0 size 0 argsz 32",
> + "gpio: region_info[5] offset 0 flags 0 size 0 argsz 32",
> + "gpio: region_info[7] offset 0 flags 0x3 size 256 argsz 32",
> + "gpio: region7: read 256 bytes at 0",
> + "gpio: region7: read 0 from (0x30:4)",
> + "gpio: cleared EROM",
> + "gpio: I/O space enabled",
> + "gpio: memory space enabled",
> + "gpio: SERR# enabled",
> + "gpio: region7: wrote 0x103 to (0x4:2)",
> + "gpio: I/O space enabled",
> + "gpio: memory space enabled",
> +]
> +
> +class VfioUserClient(QemuSystemTest):
> +
> + ASSET_REPO = 'https://github.com/mcayland-ntx/libvfio-user-test'
Not sure whether that indirection works with the asset pre-caching
mechanism? Daniel, could you comment on that?
> + ASSET_KERNEL = Asset(
> + f'{ASSET_REPO}/raw/refs/heads/main/images/bzImage',
> + '40292fa6ce95d516e26bccf5974e138d0db65a6de0bc540cabae060fe9dea605'
> + )
> +
> + ASSET_ROOTFS = Asset(
> + f'{ASSET_REPO}/raw/refs/heads/main/images/rootfs.ext2',
> + 'e1e3abae8aebb8e6e77f08b1c531caeacf46250c94c815655c6bbea59fc3d1c1'
> + )
> +
> +
> + def prepare_images(self):
> + """Download the images for the VMs."""
> + self.kernel_path = self.ASSET_KERNEL.fetch()
> + self.rootfs_path = self.ASSET_ROOTFS.fetch()
> +
> + def configure_server_vm_args(self, server_vm, sock_path):
> + """
> + Configuration for the server VM. Set up virtio-serial device backed by
> + the given socket path.
> + """
> + server_vm.add_args('-kernel', self.kernel_path)
> + server_vm.add_args('-append', 'console=ttyS0 root=/dev/sda')
> + server_vm.add_args('-drive',
> + f"file={self.rootfs_path},if=ide,format=raw,id=drv0")
> + server_vm.add_args('-snapshot')
> + server_vm.add_args('-chardev',
> + f"socket,id=sock0,path={sock_path},telnet=off,server=on,wait=off")
> + server_vm.add_args('-device', 'virtio-serial')
> + server_vm.add_args('-device',
> + 'virtserialport,chardev=sock0,name=org.fedoraproject.port.0')
> +
> + def configure_client_vm_args(self, client_vm, sock_path):
> + """
> + Configuration for the client VM. Point the vfio-user-pci device to the
> + socket path configured above.
> + """
> +
> + client_vm.add_args('-kernel', self.kernel_path)
> + client_vm.add_args('-append', 'console=ttyS0 root=/dev/sda')
> + client_vm.add_args('-drive',
> + f'file={self.rootfs_path},if=ide,format=raw,id=drv0')
> + client_vm.add_args('-snapshot')
> + client_vm.add_args('-device',
> + '{"driver":"vfio-user-pci",' +
> + '"socket":{"path": "%s", "type": "unix"}}' % sock_path)
> +
> + def setup_vfio_user_pci_server(self, server_vm):
> + """
> + Start the libvfio-user server within the server VM, and arrange
> + for data to shuttle between its socket and the virtio serial port.
> + """
> + wait_for_console_pattern(self, 'login:', None, server_vm)
> + exec_command_and_wait_for_pattern(self, 'root', '#', None, server_vm)
> +
> + exec_command_and_wait_for_pattern(self,
> + 'gpio-pci-idio-16 -v /tmp/vfio-user.sock >/var/tmp/gpio.out 2>&1 &',
> + '#', None, server_vm)
> + # wait for libvfio-user to initialize properly
> + exec_command_and_wait_for_pattern(self, 'sleep 5', '#', None, server_vm)
Could the sleep be avoided? ... it's still a race condition (even if it's
unlikely when you wait for 5 seconds), and always sleeping 5 seconds slows
down the test quite a bit ...
Could you maybe poll for something instead, e.g. output of "dmesg" or something
in the file system? (sorry, I don't have any clue about vfio-user, so I'm not
sure what exactly to wait for here)
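One naive idea (completely untested, and I don't know whether the socket
showing up really means libvfio-user is fully initialized) would be to wait
for the server's socket to appear instead of sleeping a fixed time:

    exec_command_and_wait_for_pattern(self,
        'while ! test -S /tmp/vfio-user.sock; do sleep 1; done',
        '#', None, server_vm)

That way the test only waits as long as it actually has to.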
> + exec_command_and_wait_for_pattern(self,
> + 'socat UNIX-CONNECT:/tmp/vfio-user.sock /dev/vport0p1,ignoreeof ' +
> + ' &', '#', None, server_vm)
> +
> + def test_vfio_user_pci(self):
> + self.prepare_images()
Please move the "prepare_images" after the set_machine() and
require_device() calls. Reason: set_machine() and require_device() could
skip the test if it's not available in the qemu binary, so in that case you
don't want to try to fetch the assets first.
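I.e. something like (untested):

    def test_vfio_user_pci(self):
        self.set_machine('pc')
        self.require_device('virtio-serial')
        self.require_device('vfio-user-pci')
        # only fetch the assets once we know the test won't be skipped
        self.prepare_images()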
> + self.set_machine('pc')
> + self.require_device('virtio-serial')
> + self.require_device('vfio-user-pci')
> +
> + sock_dir = self.socket_dir()
> + socket_path = sock_dir.name + '/vfio-user.sock'
Better use os.path.join() instead of hard-coding slashes.
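E.g.:

    socket_path = os.path.join(sock_dir.name, 'vfio-user.sock')

(that also means the "import os" at the top would not be unused anymore)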
Thanks,
Thomas