From: Stefan Hajnoczi <stefanha@redhat.com>
To: Albert Esteve <aesteve@redhat.com>
Cc: qemu-devel@nongnu.org, david@redhat.com,
	"Michael S. Tsirkin" <mst@redhat.com>,
	hi@alyssa.is, jasowang@redhat.com,
	"Laurent Vivier" <lvivier@redhat.com>,
	dbassey@redhat.com, "Stefano Garzarella" <sgarzare@redhat.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>,
	stevensd@chromium.org, "Fabiano Rosas" <farosas@suse.de>,
	"Alex Bennée" <alex.bennee@linaro.org>,
	slp@redhat.com
Subject: Re: [PATCH v7 6/8] tests/qtest: Add GET_SHMEM validation test
Date: Wed, 20 Aug 2025 16:33:48 -0400
Message-ID: <20250820203348.GA131468@fedora>
In-Reply-To: <CADSE00J61r4Wt94s6OfCqt9V8sVaisgDajvKEYFmG1FJKdVfng@mail.gmail.com>


On Tue, Aug 19, 2025 at 02:16:47PM +0200, Albert Esteve wrote:
> On Tue, Aug 19, 2025 at 12:42 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
> >
> > On Mon, Aug 18, 2025 at 12:03:51PM +0200, Albert Esteve wrote:
> > > Improve vhost-user-test to properly validate
> > > VHOST_USER_GET_SHMEM_CONFIG message handling by
> > > directly simulating the message exchange.
> > >
> > > The test manually triggers the
> > > VHOST_USER_GET_SHMEM_CONFIG message by calling
> > > chr_read() with a crafted VhostUserMsg, allowing direct
> > > validation of the shmem configuration response handler.
> >
> > It looks like this test case invokes its own chr_read() function without
> > going through QEMU, so I don't understand what this is testing?
> 
> I spent some time trying to test it, but in the end I could not
> instantiate vhost-user-device because it is not user_creatable. I did
> not find any test for vhost-user-device anywhere else either. But I
> had already added most of the infrastructure here, so I fell back to
> chr_read() communication to avoid having to delete everything. My
> thought was that once we have other devices that use shared memory,
> they could tweak the test to instantiate the proper device and test
> this and the map/unmap operations.
> 
> Although, after writing this, I think other devices will actually need
> a specific layout for their shared memory. So
> VHOST_USER_GET_SHMEM_CONFIG is only ever going to be used by
> vhost-user-device.
> 
> In general, testing this patch series has been a headache, short of
> trying it with external device code I have. If you have an idea of how
> I could test this, I can try it. Otherwise, it is probably best to
> remove this commit from the series and wait for another vhost-user
> device that uses map/unmap to land before testing it.

Alex Bennée has renamed vhost-user-device to vhost-user-test-device and
set user_creatable = true:
https://lore.kernel.org/qemu-devel/20250820195632.1956795-1-alex.bennee@linaro.org/T/#t
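
With that applied, the qtest should be able to instantiate the device
on its command line instead of short-circuiting into chr_read(). A
rough sketch (untested; the exact device and property names depend on
the final version of Alex's series):

  -chardev socket,id=vus0,path=/tmp/vhost-user-test.sock \
  -device vhost-user-test-device-pci,chardev=vus0,virtio-id=<ID>,num_vqs=1

where <ID> is the virtio device ID that the test's backend implements.
That would exercise VHOST_USER_GET_SHMEM_CONFIG through QEMU's real
vhost-user frontend.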

> >
> > >
> > > Added TestServerShmem structure to track shmem
> > > configuration state, including nregions_sent and
> > > sizes_sent arrays for comprehensive validation.
> > > The test verifies that the response contains the expected
> > > number of shared memory regions and their corresponding
> > > sizes.
> > >
> > > Signed-off-by: Albert Esteve <aesteve@redhat.com>
> > > ---
> > >  tests/qtest/vhost-user-test.c | 91 +++++++++++++++++++++++++++++++++++
> > >  1 file changed, 91 insertions(+)
> > >
> > > diff --git a/tests/qtest/vhost-user-test.c b/tests/qtest/vhost-user-test.c
> > > index 75cb3e44b2..44a5e90b2e 100644
> > > --- a/tests/qtest/vhost-user-test.c
> > > +++ b/tests/qtest/vhost-user-test.c
> > > @@ -88,6 +88,7 @@ typedef enum VhostUserRequest {
> > >      VHOST_USER_SET_VRING_ENABLE = 18,
> > >      VHOST_USER_GET_CONFIG = 24,
> > >      VHOST_USER_SET_CONFIG = 25,
> > > +    VHOST_USER_GET_SHMEM_CONFIG = 44,
> > >      VHOST_USER_MAX
> > >  } VhostUserRequest;
> > >
> > > @@ -109,6 +110,20 @@ typedef struct VhostUserLog {
> > >      uint64_t mmap_offset;
> > >  } VhostUserLog;
> > >
> > > +#define VIRTIO_MAX_SHMEM_REGIONS 256
> > > +
> > > +typedef struct VhostUserShMemConfig {
> > > +    uint32_t nregions;
> > > +    uint32_t padding;
> > > +    uint64_t memory_sizes[VIRTIO_MAX_SHMEM_REGIONS];
> > > +} VhostUserShMemConfig;
> > > +
> > > +typedef struct TestServerShmem {
> > > +    bool test_enabled;
> > > +    uint32_t nregions_sent;
> > > +    uint64_t sizes_sent[VIRTIO_MAX_SHMEM_REGIONS];
> > > +} TestServerShmem;
> > > +
> > >  typedef struct VhostUserMsg {
> > >      VhostUserRequest request;
> > >
> > > @@ -124,6 +139,7 @@ typedef struct VhostUserMsg {
> > >          struct vhost_vring_addr addr;
> > >          VhostUserMemory memory;
> > >          VhostUserLog log;
> > > +        VhostUserShMemConfig shmem;
> > >      } payload;
> > >  } QEMU_PACKED VhostUserMsg;
> > >
> > > @@ -170,6 +186,7 @@ typedef struct TestServer {
> > >      bool test_fail;
> > >      int test_flags;
> > >      int queues;
> > > +    TestServerShmem shmem;
> > >      struct vhost_user_ops *vu_ops;
> > >  } TestServer;
> > >
> > > @@ -513,6 +530,31 @@ static void chr_read(void *opaque, const uint8_t *buf, int size)
> > >          qos_printf("set_vring(%d)=%s\n", msg.payload.state.index,
> > >                     msg.payload.state.num ? "enabled" : "disabled");
> > >          break;
> > > +
> > > +    case VHOST_USER_GET_SHMEM_CONFIG:
> > > +        if (!s->shmem.test_enabled) {
> > > +            /* Reply with error if shmem feature not enabled */
> > > +            msg.flags |= VHOST_USER_REPLY_MASK;
> > > +            msg.size = sizeof(uint64_t);
> > > +            msg.payload.u64 = -1; /* Error */
> > > +            qemu_chr_fe_write_all(chr, (uint8_t *) &msg, VHOST_USER_HDR_SIZE + msg.size);
> > > +        } else {
> > > +            /* Reply with test shmem configuration */
> > > +            msg.flags |= VHOST_USER_REPLY_MASK;
> > > +            msg.size = sizeof(VhostUserShMemConfig);
> > > +            msg.payload.shmem.nregions = 2; /* Test with 2 regions */
> > > +            msg.payload.shmem.padding = 0;
> > > +            msg.payload.shmem.memory_sizes[0] = 0x100000; /* 1MB */
> > > +            msg.payload.shmem.memory_sizes[1] = 0x200000; /* 2MB */
> > > +
> > > +            /* Record what we're sending for test validation */
> > > +            s->shmem.nregions_sent = msg.payload.shmem.nregions;
> > > +            s->shmem.sizes_sent[0] = msg.payload.shmem.memory_sizes[0];
> > > +            s->shmem.sizes_sent[1] = msg.payload.shmem.memory_sizes[1];
> > > +
> > > +            qemu_chr_fe_write_all(chr, (uint8_t *) &msg, VHOST_USER_HDR_SIZE + msg.size);
> > > +        }
> > > +        break;
> > >
> > >      default:
> > >          qos_printf("vhost-user: un-handled message: %d\n", msg.request);
> > > @@ -809,6 +851,22 @@ static void *vhost_user_test_setup_shm(GString *cmd_line, void *arg)
> > >      return server;
> > >  }
> > >
> > > +static void *vhost_user_test_setup_shmem_config(GString *cmd_line, void *arg)
> > > +{
> > > +    TestServer *server = test_server_new("vhost-user-test", arg);
> > > +    test_server_listen(server);
> > > +
> > > +    /* Enable shmem testing for this server */
> > > +    server->shmem.test_enabled = true;
> > > +
> > > +    append_mem_opts(server, cmd_line, 256, TEST_MEMFD_SHM);
> > > +    server->vu_ops->append_opts(server, cmd_line, "");
> > > +
> > > +    g_test_queue_destroy(vhost_user_test_cleanup, server);
> > > +
> > > +    return server;
> > > +}
> > > +
> > >  static void test_read_guest_mem(void *obj, void *arg, QGuestAllocator *alloc)
> > >  {
> > >      TestServer *server = arg;
> > > @@ -1089,6 +1147,33 @@ static struct vhost_user_ops g_vu_net_ops = {
> > >      .get_protocol_features = vu_net_get_protocol_features,
> > >  };
> > >
> > > +/* Test function for VHOST_USER_GET_SHMEM_CONFIG message */
> > > +static void test_shmem_config(void *obj, void *arg, QGuestAllocator *alloc)
> > > +{
> > > +    TestServer *s = arg;
> > > +
> > > +    g_assert_true(s->shmem.test_enabled);
> > > +
> > > +    g_mutex_lock(&s->data_mutex);
> > > +    s->shmem.nregions_sent = 0;
> > > +    s->shmem.sizes_sent[0] = 0;
> > > +    s->shmem.sizes_sent[1] = 0;
> > > +    g_mutex_unlock(&s->data_mutex);
> > > +
> > > +    VhostUserMsg msg = {
> > > +        .request = VHOST_USER_GET_SHMEM_CONFIG,
> > > +        .flags = VHOST_USER_VERSION,
> > > +        .size = 0,
> > > +    };
> > > +    chr_read(s, (uint8_t *) &msg, VHOST_USER_HDR_SIZE);
> > > +
> > > +    g_mutex_lock(&s->data_mutex);
> > > +    g_assert_cmpint(s->shmem.nregions_sent, ==, 2);
> > > +    g_assert_cmpint(s->shmem.sizes_sent[0], ==, 0x100000); /* 1MB */
> > > +    g_assert_cmpint(s->shmem.sizes_sent[1], ==, 0x200000); /* 2MB */
> > > +    g_mutex_unlock(&s->data_mutex);
> > > +}
> > > +
> > >  static void register_vhost_user_test(void)
> > >  {
> > >      QOSGraphTestOptions opts = {
> > > @@ -1136,6 +1221,12 @@ static void register_vhost_user_test(void)
> > >      qos_add_test("vhost-user/multiqueue",
> > >                   "virtio-net",
> > >                   test_multiqueue, &opts);
> > > +
> > > +    opts.before = vhost_user_test_setup_shmem_config;
> > > +    opts.edge.extra_device_opts = "";
> > > +    qos_add_test("vhost-user/shmem-config",
> > > +                 "virtio-net",
> > > +                 test_shmem_config, &opts);
> > >  }
> > >  libqos_init(register_vhost_user_test);
> > >
> > > --
> > > 2.49.0
> > >
> 
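
Once the test device is user-creatable, it would be worth re-running
this new case against a real build rather than via the simulated
message exchange. As a reminder (a sketch; adjust the paths for your
build tree), the qtest binary can be invoked directly with:

  QTEST_QEMU_BINARY=./qemu-system-x86_64 ./tests/qtest/vhost-user-test

Stefan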
