From: "Michael S. Tsirkin" <mst@redhat.com>
To: Albert Esteve <aesteve@redhat.com>
Cc: qemu-devel@nongnu.org, Stefano Garzarella <sgarzare@redhat.com>,
	qemu-stable@nongnu.org
Subject: Re: [PATCH v4 1/1] vhost-user: fix shared object lookup handler logic
Date: Tue, 11 Nov 2025 03:58:51 -0500	[thread overview]
Message-ID: <20251111035843-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <CADSE00LjbhB5sgjFnph8U8u31Htn=UJ9_Nit+Z2AOBY+eT8qKA@mail.gmail.com>

On Tue, Nov 11, 2025 at 09:48:04AM +0100, Albert Esteve wrote:
> On Fri, Nov 7, 2025 at 3:58 PM Albert Esteve <aesteve@redhat.com> wrote:
> >
> > On Fri, Nov 7, 2025 at 2:26 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> > >
> > > On Fri, Oct 17, 2025 at 09:20:11AM +0200, Albert Esteve wrote:
> > > > Refactor the backend_read() function and add a reply_ack variable
> > > > so that handlers have the option to override whether a reply is
> > > > sent, without depending on the VHOST_USER_NEED_REPLY_MASK flag.
> > > >
> > > > This fixes an issue in the
> > > > vhost_user_backend_handle_shared_object_lookup() logic, where the
> > > > error path did not close the backend channel correctly. We can
> > > > therefore remove the reply call from within the handler, make it
> > > > return early on errors as the other handlers do, and set the
> > > > reply_ack variable in backend_read() to true so that a response is
> > > > always sent, preserving the original intent.
> > > >
> > > > Fixes: 160947666276c5b7f6bca4d746bcac2966635d79
> > >
> > >
> > > I fixed this up; the Fixes: tag should include the commit subject as well.
> > >
> > > > Cc: qemu-stable@nongnu.org
> > > > Signed-off-by: Albert Esteve <aesteve@redhat.com>
> > >
> > >
> > > So I picked this for now, but I worry that we are opening ourselves
> > > up to buggy backends that forget to set the flag and still expect
> > > the frontend to behave properly.
> > >
> > > I think a better fix would be, instead of forcing the flag, to check
> > > reply_ack and fail if it is not set correctly.
> >
> > Fair point. I will send a follow-up patch that adds this check and fails early.
> 
> So I was going to do this follow-up patch, but then, reading the spec
> in vhost-user.rst, I saw:
> ```For the message types that already solicit a reply from the back-end,
> the presence of ``VHOST_USER_PROTOCOL_F_REPLY_ACK`` or need_reply bit
> being set brings no behavioural change.```
> 
> Since this message type (SHARED_OBJECT_LOOKUP) explicitly solicits a
> reply, I understand that the correct behaviour is what we did in this
> patch: forcing it to true.

ok


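For reference, a minimal sketch of the check-and-fail variant discussed
above (hypothetical, not what the applied patch does), reusing the names
from backend_read() in the patch quoted below:

    case VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP:
        /* Hypothetical stricter variant: treat a missing need_reply bit
         * on a lookup request as a backend bug and close the channel,
         * instead of forcing reply_ack to true. */
        if (!(hdr.flags & VHOST_USER_NEED_REPLY_MASK)) {
            error_report("vhost-user: SHARED_OBJECT_LOOKUP without need_reply");
            goto err;
        }
        ret = vhost_user_backend_handle_shared_object_lookup(dev->opaque,
                                                             &payload.object);
        break;

Given the spec wording, forcing the reply as the patch does is the right
call for message types that always solicit a response.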
> >
> > >
> > > > ---
> > > >  hw/virtio/vhost-user.c | 42 +++++++++++++++---------------------------
> > > >  1 file changed, 15 insertions(+), 27 deletions(-)
> > > >
> > > > diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> > > > index 890be55937..762d7218d3 100644
> > > > --- a/hw/virtio/vhost-user.c
> > > > +++ b/hw/virtio/vhost-user.c
> > > > @@ -1696,14 +1696,6 @@ static bool vhost_user_send_resp(QIOChannel *ioc, VhostUserHeader *hdr,
> > > >      return !qio_channel_writev_all(ioc, iov, ARRAY_SIZE(iov), errp);
> > > >  }
> > > >
> > > > -static bool
> > > > -vhost_user_backend_send_dmabuf_fd(QIOChannel *ioc, VhostUserHeader *hdr,
> > > > -                                  VhostUserPayload *payload, Error **errp)
> > > > -{
> > > > -    hdr->size = sizeof(payload->u64);
> > > > -    return vhost_user_send_resp(ioc, hdr, payload, errp);
> > > > -}
> > > > -
> > > >  int vhost_user_get_shared_object(struct vhost_dev *dev, unsigned char *uuid,
> > > >                                   int *dmabuf_fd)
> > > >  {
> > > > @@ -1744,19 +1736,15 @@ int vhost_user_get_shared_object(struct vhost_dev *dev, unsigned char *uuid,
> > > >
> > > >  static int
> > > >  vhost_user_backend_handle_shared_object_lookup(struct vhost_user *u,
> > > > -                                               QIOChannel *ioc,
> > > > -                                               VhostUserHeader *hdr,
> > > > -                                               VhostUserPayload *payload)
> > > > +                                               VhostUserShared *object)
> > > >  {
> > > >      QemuUUID uuid;
> > > >      CharBackend *chr = u->user->chr;
> > > > -    Error *local_err = NULL;
> > > >      int dmabuf_fd = -1;
> > > >      int fd_num = 0;
> > > >
> > > > -    memcpy(uuid.data, payload->object.uuid, sizeof(payload->object.uuid));
> > > > +    memcpy(uuid.data, object->uuid, sizeof(object->uuid));
> > > >
> > > > -    payload->u64 = 0;
> > > >      switch (virtio_object_type(&uuid)) {
> > > >      case TYPE_DMABUF:
> > > >          dmabuf_fd = virtio_lookup_dmabuf(&uuid);
> > > > @@ -1765,18 +1753,16 @@ vhost_user_backend_handle_shared_object_lookup(struct vhost_user *u,
> > > >      {
> > > >          struct vhost_dev *dev = virtio_lookup_vhost_device(&uuid);
> > > >          if (dev == NULL) {
> > > > -            payload->u64 = -EINVAL;
> > > > -            break;
> > > > +            return -EINVAL;
> > > >          }
> > > >          int ret = vhost_user_get_shared_object(dev, uuid.data, &dmabuf_fd);
> > > >          if (ret < 0) {
> > > > -            payload->u64 = ret;
> > > > +            return ret;
> > > >          }
> > > >          break;
> > > >      }
> > > >      case TYPE_INVALID:
> > > > -        payload->u64 = -EINVAL;
> > > > -        break;
> > > > +        return -EINVAL;
> > > >      }
> > > >
> > > >      if (dmabuf_fd != -1) {
> > > > @@ -1785,11 +1771,6 @@ vhost_user_backend_handle_shared_object_lookup(struct vhost_user *u,
> > > >
> > > >      if (qemu_chr_fe_set_msgfds(chr, &dmabuf_fd, fd_num) < 0) {
> > > >          error_report("Failed to set msg fds.");
> > > > -        payload->u64 = -EINVAL;
> > > > -    }
> > > > -
> > > > -    if (!vhost_user_backend_send_dmabuf_fd(ioc, hdr, payload, &local_err)) {
> > > > -        error_report_err(local_err);
> > > >          return -EINVAL;
> > > >      }
> > > >
> > > > @@ -2008,6 +1989,7 @@ static gboolean backend_read(QIOChannel *ioc, GIOCondition condition,
> > > >      struct iovec iov;
> > > >      g_autofree int *fd = NULL;
> > > >      size_t fdsize = 0;
> > > > +    bool reply_ack;
> > > >      int i;
> > > >
> > > >      /* Read header */
> > > > @@ -2026,6 +2008,8 @@ static gboolean backend_read(QIOChannel *ioc, GIOCondition condition,
> > > >          goto err;
> > > >      }
> > > >
> > > > +    reply_ack = hdr.flags & VHOST_USER_NEED_REPLY_MASK;
> > > > +
> > > >      /* Read payload */
> > > >      if (qio_channel_read_all(ioc, (char *) &payload, hdr.size, &local_err)) {
> > > >          error_report_err(local_err);
> > > > @@ -2051,11 +2035,14 @@ static gboolean backend_read(QIOChannel *ioc, GIOCondition condition,
> > > >                                                               &payload.object);
> > > >          break;
> > > >      case VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP:
> > > > -        ret = vhost_user_backend_handle_shared_object_lookup(dev->opaque, ioc,
> > > > -                                                             &hdr, &payload);
> > > > +        /* The backend always expects a response */
> > > > +        reply_ack = true;
> > > > +        ret = vhost_user_backend_handle_shared_object_lookup(dev->opaque,
> > > > +                                                             &payload.object);
> > > >          break;
> > > >      case VHOST_USER_BACKEND_SHMEM_MAP:
> > > >          /* Handler manages its own response, check error and close connection */
> > > > +        reply_ack = false;
> > > >          if (vhost_user_backend_handle_shmem_map(dev, ioc, &hdr, &payload,
> > > >                                                  fd ? fd[0] : -1) < 0) {
> > > >              goto err;
> > > > @@ -2063,6 +2050,7 @@ static gboolean backend_read(QIOChannel *ioc, GIOCondition condition,
> > > >          break;
> > > >      case VHOST_USER_BACKEND_SHMEM_UNMAP:
> > > >          /* Handler manages its own response, check error and close connection */
> > > > +        reply_ack = false;
> > > >          if (vhost_user_backend_handle_shmem_unmap(dev, ioc, &hdr, &payload) < 0) {
> > > >              goto err;
> > > >          }
> > > > @@ -2076,7 +2064,7 @@ static gboolean backend_read(QIOChannel *ioc, GIOCondition condition,
> > > >       * REPLY_ACK feature handling. Other reply types has to be managed
> > > >       * directly in their request handlers.
> > > >       */
> > > > -    if (hdr.flags & VHOST_USER_NEED_REPLY_MASK) {
> > > > +    if (reply_ack) {
> > > >          payload.u64 = !!ret;
> > > >          hdr.size = sizeof(payload.u64);
> > > >
> > > > --
> > > > 2.49.0
> > >




Thread overview: 7+ messages
2025-10-17  7:20 [PATCH v4 0/1] vhost-user: fix shared object lookup handler logic Albert Esteve
2025-10-17  7:20 ` [PATCH v4 1/1] " Albert Esteve
2025-10-17  7:52   ` Stefano Garzarella
2025-11-07 13:25   ` Michael S. Tsirkin
2025-11-07 14:58     ` Albert Esteve
2025-11-11  8:48       ` Albert Esteve
2025-11-11  8:58         ` Michael S. Tsirkin [this message]
