From: Kevin Wolf <kwolf@redhat.com>
To: Eugenio Perez Martin <eperezma@redhat.com>
Cc: qemu-block@nongnu.org, mst@redhat.com, sgarzare@redhat.com,
	qemu-devel@nongnu.org, qemu-stable@nongnu.org
Subject: Re: [PATCH] vdpa-dev: Fix initialisation order to restore VDUSE compatibility
Date: Mon, 5 Feb 2024 14:49:18 +0100
Message-ID: <ZcDnXpE_IGkSVzTu@redhat.com>
In-Reply-To: <CAJaqyWfMETO=C6oBj2VVo+CZ=kNe-aJSYja0WpLDLbFQV5sTnA@mail.gmail.com>

Am 05.02.2024 um 13:22 hat Eugenio Perez Martin geschrieben:
> On Fri, Feb 2, 2024 at 2:25 PM Kevin Wolf <kwolf@redhat.com> wrote:
> >
> > VDUSE requires that virtqueues are first enabled before the DRIVER_OK
> > status flag is set; with the current API of the kernel module, it is
> > impossible to use the opposite order in our block export code because
> > userspace is not notified when a virtqueue is enabled.
> >
> > This requirement also matches the normal initialisation order as done by
> > the generic vhost code in QEMU. However, commit 6c482547 accidentally
> > changed the order for vdpa-dev and broke access to VDUSE devices with
> > this.
> >
> > This changes vdpa-dev to use the normal order again and use the standard
> > vhost callback .vhost_set_vring_enable for this. VDUSE devices can be
> > used with vdpa-dev again after this fix.
> >
> > Cc: qemu-stable@nongnu.org
> > Fixes: 6c4825476a4351530bcac17abab72295b75ffe98
> > Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> > ---
> >  hw/virtio/vdpa-dev.c   |  5 +----
> >  hw/virtio/vhost-vdpa.c | 17 +++++++++++++++++
> >  2 files changed, 18 insertions(+), 4 deletions(-)
> >
> > diff --git a/hw/virtio/vdpa-dev.c b/hw/virtio/vdpa-dev.c
> > index eb9ecea83b..13e87f06f6 100644
> > --- a/hw/virtio/vdpa-dev.c
> > +++ b/hw/virtio/vdpa-dev.c
> > @@ -253,14 +253,11 @@ static int vhost_vdpa_device_start(VirtIODevice *vdev, Error **errp)
> >
> >      s->dev.acked_features = vdev->guest_features;
> >
> > -    ret = vhost_dev_start(&s->dev, vdev, false);
> > +    ret = vhost_dev_start(&s->dev, vdev, true);
> >      if (ret < 0) {
> >          error_setg_errno(errp, -ret, "Error starting vhost");
> >          goto err_guest_notifiers;
> >      }
> > -    for (i = 0; i < s->dev.nvqs; ++i) {
> > -        vhost_vdpa_set_vring_ready(&s->vdpa, i);
> > -    }
> >      s->started = true;
> >
> >      /*
> > diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> > index 3a43beb312..c4574d56c5 100644
> > --- a/hw/virtio/vhost-vdpa.c
> > +++ b/hw/virtio/vhost-vdpa.c
> > @@ -904,6 +904,22 @@ int vhost_vdpa_set_vring_ready(struct vhost_vdpa *v, unsigned idx)
> >      return r;
> >  }
> >
> > +static int vhost_vdpa_set_vring_enable(struct vhost_dev *dev, int enable)
> > +{
> > +    struct vhost_vdpa *v = dev->opaque;
> > +    unsigned int i;
> > +    int ret;
> > +
> > +    for (i = 0; i < dev->nvqs; ++i) {
> > +        ret = vhost_vdpa_set_vring_ready(v, i);
> > +        if (ret < 0) {
> > +            return ret;
> > +        }
> > +    }
> > +
> > +    return 0;
> > +}
> > +
> >  static int vhost_vdpa_set_config_call(struct vhost_dev *dev,
> >                                         int fd)
> >  {
> > @@ -1524,6 +1540,7 @@ const VhostOps vdpa_ops = {
> >          .vhost_set_features = vhost_vdpa_set_features,
> >          .vhost_reset_device = vhost_vdpa_reset_device,
> >          .vhost_get_vq_index = vhost_vdpa_get_vq_index,
> > +        .vhost_set_vring_enable = vhost_vdpa_set_vring_enable,
> >          .vhost_get_config  = vhost_vdpa_get_config,
> >          .vhost_set_config = vhost_vdpa_set_config,
> >          .vhost_requires_shm_log = NULL,
> 
> vhost-vdpa net enables the CVQ before the dataplane queues in order to
> configure the whole device at the destination of a live migration.
> Going back to this callback would cause the device to enable the
> dataplane before the virtqueues are configured again.

Not that it makes a difference, but I don't think it would actually be
going back. Even before your commit 6c482547, we were not making use of
the generic callback but just called the function in a slightly
different place (but less different than after commit 6c482547).

But anyway... Why don't the other vhost backends need the same for
vhost-net then? Do they just not support live migration?

I don't know the code well enough to say where the problem is, but if
vhost-vdpa networking code relies on the usual vhost operations not
being implemented and bypasses VhostOps to replace it, that sounds like
a design problem to me. Maybe VhostOps needs a new operation to enable
just a single virtqueue that can be used by the networking code instead?
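
Just to make that more concrete, something along these lines is what I
have in mind (untested sketch; the callback name and signature are made
up here and are not an existing VhostOps member):

/* Hypothetical per-virtqueue enable operation for VhostOps, for
 * illustration only. */
static int vhost_vdpa_set_single_vring_enable(struct vhost_dev *dev,
                                              int index, int enable)
{
    struct vhost_vdpa *v = dev->opaque;

    /* vhost-vdpa has no way to disable a single queue short of a reset */
    if (!enable) {
        return -ENOTSUP;
    }

    return vhost_vdpa_set_vring_ready(v, index);
}

The networking code could then go through such an operation for the CVQ
first and for the dataplane queues later, instead of calling into
vhost-vdpa internals directly.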

> How does the VDUSE userspace know how many queues it should enable?
> Can't the kernel perform the necessary actions after DRIVER_OK, like
> configuring the kick etc.?

Not sure if I understand the question. The vdpa kernel interface always
enables individual queues, so the VDUSE userspace will enable whatever
queues it was asked to enable. The only restriction is that the queues
need to be enabled before setting DRIVER_OK.

The interface that enables all virtqueues at once seems to be just
.vhost_set_vring_enable in QEMU.
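
For illustration, the ordering that this patch restores looks roughly
like this (heavily simplified sketch, not the actual vhost code):

/*
 * Simplified sketch of the vdpa-dev startup ordering after this patch;
 * only the steps relevant to VDUSE are shown.
 */
static int vdpa_dev_start_order_sketch(struct vhost_dev *dev)
{
    struct vhost_vdpa *v = dev->opaque;
    unsigned int i;
    int r;

    /* features, memory table and vring addresses are programmed first */

    /*
     * vhost_dev_start(dev, vdev, true) then calls the backend's
     * .vhost_set_vring_enable, which for vhost-vdpa enables every queue:
     */
    for (i = 0; i < dev->nvqs; i++) {
        r = vhost_vdpa_set_vring_ready(v, i);
        if (r < 0) {
            return r;
        }
    }

    /*
     * Only after all queues are enabled does the backend's
     * .vhost_dev_start set DRIVER_OK, which is the order VDUSE needs.
     */
    return 0;
}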

Kevin


