From: Eugenio Perez Martin <eperezma@redhat.com>
To: Jason Wang <jasowang@redhat.com>
Cc: Parav Pandit <parav@mellanox.com>,
	Juan Quintela <quintela@redhat.com>,
	Markus Armbruster <armbru@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	qemu-level <qemu-devel@nongnu.org>,
	virtualization <virtualization@lists.linux-foundation.org>,
	Harpreet Singh Anand <hanand@xilinx.com>,
	Xiao W Wang <xiao.w.wang@intel.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Eli Cohen <eli@mellanox.com>, Eric Blake <eblake@redhat.com>,
	Stefano Garzarella <sgarzare@redhat.com>
Subject: Re: [RFC PATCH v4 11/20] vhost: Route host->guest notification through shadow virtqueue
Date: Wed, 20 Oct 2021 08:36:58 +0200
Message-ID: <CAJaqyWcwBmZupND3KCPOW3Hxbau1eUX6SPXE4mU9heKGCOT2rw@mail.gmail.com>
In-Reply-To: <CACGkMEtPAR6qwMN5++Q+e5aJGtzMgzo59_+Jf7=Ra=rtdLYS8g@mail.gmail.com>

On Wed, Oct 20, 2021 at 4:01 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Tue, Oct 19, 2021 at 4:40 PM Eugenio Perez Martin
> <eperezma@redhat.com> wrote:
> >
> > On Fri, Oct 15, 2021 at 6:42 AM Jason Wang <jasowang@redhat.com> wrote:
> > >
> > >
> > > On 2021/10/15 12:39 AM, Eugenio Perez Martin wrote:
> > > > On Wed, Oct 13, 2021 at 5:47 AM Jason Wang <jasowang@redhat.com> wrote:
> > > >>
> > > >> On 2021/10/1 3:05 PM, Eugenio Pérez wrote:
> > > >>> This will make qemu aware of the device used buffers, allowing it to
> > > >>> write the guest memory with its contents if needed.
> > > >>>
> > > >>> Since the use of vhost_virtqueue_start can unmask and discard call
> > > >>> events, vhost_virtqueue_start should be modified in one of these ways:
> > > >>> * Split in two: One of them uses all logic to start a queue with no
> > > >>>     side effects for the guest, and another one that actually assumes that
> > > >>>     the guest has just started the device. Vdpa should use just the
> > > >>>     former.
> > > >>> * Actually store and check if the guest notifier is masked, and do it
> > > >>>     conditionally.
> > > >>> * Leave it as is, and duplicate all the logic in vhost-vdpa.
> > > >>>
> > > >>> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > > >>> ---
> > > >>>    hw/virtio/vhost-shadow-virtqueue.c | 19 +++++++++++++++
> > > >>>    hw/virtio/vhost-vdpa.c             | 38 +++++++++++++++++++++++++++++-
> > > >>>    2 files changed, 56 insertions(+), 1 deletion(-)
> > > >>>
> > > >>> diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
> > > >>> index 21dc99ab5d..3fe129cf63 100644
> > > >>> --- a/hw/virtio/vhost-shadow-virtqueue.c
> > > >>> +++ b/hw/virtio/vhost-shadow-virtqueue.c
> > > >>> @@ -53,6 +53,22 @@ static void vhost_handle_guest_kick(EventNotifier *n)
> > > >>>        event_notifier_set(&svq->kick_notifier);
> > > >>>    }
> > > >>>
> > > >>> +/* Forward vhost notifications */
> > > >>> +static void vhost_svq_handle_call_no_test(EventNotifier *n)
> > > >>> +{
> > > >>> +    VhostShadowVirtqueue *svq = container_of(n, VhostShadowVirtqueue,
> > > >>> +                                             call_notifier);
> > > >>> +
> > > >>> +    event_notifier_set(&svq->guest_call_notifier);
> > > >>> +}
> > > >>> +
> > > >>> +static void vhost_svq_handle_call(EventNotifier *n)
> > > >>> +{
> > > >>> +    if (likely(event_notifier_test_and_clear(n))) {
> > > >>> +        vhost_svq_handle_call_no_test(n);
> > > >>> +    }
> > > >>> +}
> > > >>> +
> > > >>>    /*
> > > >>>     * Obtain the SVQ call notifier, where vhost device notifies SVQ that there
> > > >>>     * exists pending used buffers.
> > > >>> @@ -180,6 +196,8 @@ VhostShadowVirtqueue *vhost_svq_new(struct vhost_dev *dev, int idx)
> > > >>>        }
> > > >>>
> > > >>>        svq->vq = virtio_get_queue(dev->vdev, vq_idx);
> > > >>> +    event_notifier_set_handler(&svq->call_notifier,
> > > >>> +                               vhost_svq_handle_call);
> > > >>>        return g_steal_pointer(&svq);
> > > >>>
> > > >>>    err_init_call_notifier:
> > > >>> @@ -195,6 +213,7 @@ err_init_kick_notifier:
> > > >>>    void vhost_svq_free(VhostShadowVirtqueue *vq)
> > > >>>    {
> > > >>>        event_notifier_cleanup(&vq->kick_notifier);
> > > >>> +    event_notifier_set_handler(&vq->call_notifier, NULL);
> > > >>>        event_notifier_cleanup(&vq->call_notifier);
> > > >>>        g_free(vq);
> > > >>>    }
> > > >>> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> > > >>> index bc34de2439..6c5f4c98b8 100644
> > > >>> --- a/hw/virtio/vhost-vdpa.c
> > > >>> +++ b/hw/virtio/vhost-vdpa.c
> > > >>> @@ -712,13 +712,40 @@ static bool vhost_vdpa_svq_start_vq(struct vhost_dev *dev, unsigned idx)
> > > >>>    {
> > > >>>        struct vhost_vdpa *v = dev->opaque;
> > > >>>        VhostShadowVirtqueue *svq = g_ptr_array_index(v->shadow_vqs, idx);
> > > >>> -    return vhost_svq_start(dev, idx, svq);
> > > >>> +    EventNotifier *vhost_call_notifier = vhost_svq_get_svq_call_notifier(svq);
> > > >>> +    struct vhost_vring_file vhost_call_file = {
> > > >>> +        .index = idx + dev->vq_index,
> > > >>> +        .fd = event_notifier_get_fd(vhost_call_notifier),
> > > >>> +    };
> > > >>> +    int r;
> > > >>> +    bool b;
> > > >>> +
> > > >>> +    /* Set shadow vq -> guest notifier */
> > > >>> +    assert(v->call_fd[idx]);
> > > >>
> > > >> We need to avoid the assert() here. In which case can we hit this?
> > > >>
> > > > I would say that there is no way we can actually hit it, so let's remove it.
> > > >
> > > >>> +    vhost_svq_set_guest_call_notifier(svq, v->call_fd[idx]);
> > > >>> +
> > > >>> +    b = vhost_svq_start(dev, idx, svq);
> > > >>> +    if (unlikely(!b)) {
> > > >>> +        return false;
> > > >>> +    }
> > > >>> +
> > > >>> +    /* Set device -> SVQ notifier */
> > > >>> +    r = vhost_vdpa_set_vring_dev_call(dev, &vhost_call_file);
> > > >>> +    if (unlikely(r)) {
> > > >>> +        error_report("vhost_vdpa_set_vring_call for shadow vq failed");
> > > >>> +        return false;
> > > >>> +    }
> > > >>
> > > >> Similar to kick, do we need to set_vring_call() before vhost_svq_start()?
> > > >>
> > > > It should not matter, because the device is not started at this
> > > > point and device calls will not run vhost_svq_handle_call until the
> > > > BQL is released.
> > >
> > >
> > > Yes, we stop virtqueue before.
> > >
> > >
> > > >
> > > > The "logic" of doing it after is to make clear that svq must be fully
> > > > initialized before processing device calls, even in the case that we
> > > > extract SVQ in its own iothread or similar. But this could be done
> > > > before vhost_svq_start for sure.
> > > >
> > > >>> +
> > > >>> +    /* Check for pending calls */
> > > >>> +    event_notifier_set(vhost_call_notifier);
> > > >>
> > > >> Interesting, can this result in a spurious interrupt?
> > > >>
> > > > This actually "queues" a vhost_svq_handle_call after the BQL release,
> > > > where the device should be fully reset. In that regard, if there are
> > > > no used descriptors, no IRQ will be raised to the guest. Does that
> > > > answer the question? Or have I missed something?
> > >
> > >
> > > Yes, please explain this in the comment.
> > >
> >
> > I'm reviewing this again, and I actually think I was wrong about how to solve the issue.
> >
> > Since the device is still being configured at this point, there is no
> > chance that we missed a call notification here: a previous kick is
> > needed for the device to generate any calls, and no kick can have been
> > processed yet.
> >
> > What is not solved in this series is that we could have pending used
> > buffers in the vdpa device when stopping SVQ, but queuing a check for
> > them is not going to solve anything, since the SVQ vring would already
> > be destroyed:
> >
> > * The vdpa device marks N > 0 buffers as used and sends a call.
> > * Before SVQ processes them, SVQ stop is called. SVQ has not processed
> > these buffers yet and cleans them up, making this event_notifier_set
> > useless.
> >
> > So this would require a few changes. Mainly, instead of queuing a
> > check for used buffers, they need to be checked before SVQ cleanup.
> > After that, obtain the VQ state (it is not obtained on stop at the
> > moment; we trust the guest's used idx) and run a last
> > vhost_svq_handle_call_no_test while the device is paused.
>
> It looks to me that what's really important is that SVQ needs to
> drain/forward used buffers after vdpa is stopped. Then we should be
> fine.
>

Right. I think I picked the wrong place to raise the concern, but the
next revision will include draining of the pending buffers.
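
As a rough sketch of what I have in mind (the helper name and its exact
call site in vhost_svq_stop are assumptions at this point, not part of
this series), running after the device has been paused:

    /*
     * Sketch: drain used buffers pending in the SVQ call notifier
     * before the vring is destroyed. The device is already paused,
     * so the used idx can no longer move under us.
     */
    static void vhost_svq_drain_call(VhostShadowVirtqueue *svq)
    {
        /* Consume the event the device may have signalled already */
        event_notifier_test_and_clear(&svq->call_notifier);
        /* Forward any remaining used buffers to the guest */
        vhost_svq_handle_call_no_test(&svq->call_notifier);
    }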

Thanks!

> >
> > Thanks!
> >
> > >
> > > >
> > > >>> +    return true;
> > > >>>    }
> > > >>>
> > > >>>    static unsigned vhost_vdpa_enable_svq(struct vhost_vdpa *v, bool enable)
> > > >>>    {
> > > >>>        struct vhost_dev *hdev = v->dev;
> > > >>>        unsigned n;
> > > >>> +    int r;
> > > >>>
> > > >>>        if (enable == v->shadow_vqs_enabled) {
> > > >>>            return hdev->nvqs;
> > > >>> @@ -752,9 +779,18 @@ static unsigned vhost_vdpa_enable_svq(struct vhost_vdpa *v, bool enable)
> > > >>>        if (!enable) {
> > > >>>            /* Disable all queues or clean up failed start */
> > > >>>            for (n = 0; n < v->shadow_vqs->len; ++n) {
> > > >>> +            struct vhost_vring_file file = {
> > > >>> +                .index = vhost_vdpa_get_vq_index(hdev, n),
> > > >>> +                .fd = v->call_fd[n],
> > > >>> +            };
> > > >>> +
> > > >>> +            r = vhost_vdpa_set_vring_call(hdev, &file);
> > > >>> +            assert(r == 0);
> > > >>> +
> > > >>>                unsigned vq_idx = vhost_vdpa_get_vq_index(hdev, n);
> > > >>>                VhostShadowVirtqueue *svq = g_ptr_array_index(v->shadow_vqs, n);
> > > >>>                vhost_svq_stop(hdev, n, svq);
> > > >>> +            /* TODO: This can unmask or override call fd! */
> > > >>
> > > >> I don't get this comment. Does this mean the current code can't work
> > > >> with mask_notifiers? If yes, this is something we need to fix.
> > > >>
> > > > Yes, but it will be addressed in the next series. I should have
> > > > explained it better here, sorry :).
> > >
> > >
> > > Ok.
> > >
> > > Thanks
> > >
> > >
> > > >
> > > > Thanks!
> > > >
> > > >> Thanks
> > > >>
> > > >>
> > > >>>                vhost_virtqueue_start(hdev, hdev->vdev, &hdev->vqs[n], vq_idx);
> > > >>>            }
> > > >>>
> > >
> >
>


