From: Jason Wang <jasowang@redhat.com>
To: "Eugenio Pérez" <eperezma@redhat.com>, qemu-devel@nongnu.org
Cc: Parav Pandit <parav@mellanox.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
Markus Armbruster <armbru@redhat.com>,
virtualization@lists.linux-foundation.org,
Harpreet Singh Anand <hanand@xilinx.com>,
Xiao W Wang <xiao.w.wang@intel.com>,
Stefan Hajnoczi <stefanha@redhat.com>,
Eli Cohen <eli@mellanox.com>, Eric Blake <eblake@redhat.com>,
Michael Lilja <ml@napatech.com>
Subject: Re: [RFC PATCH v4 11/20] vhost: Route host->guest notification through shadow virtqueue
Date: Wed, 13 Oct 2021 11:47:44 +0800
Message-ID: <ab9a7771-5f9b-6413-3e38-bd3dc7373256@redhat.com>
In-Reply-To: <20211001070603.307037-12-eperezma@redhat.com>
On 2021/10/1 3:05 PM, Eugenio Pérez wrote:
> This will make qemu aware of the device used buffers, allowing it to
> write the guest memory with its contents if needed.
>
> Since the use of vhost_virtqueue_start can unmask and discard call
> events, vhost_virtqueue_start should be modified in one of these ways:
> * Split it in two: one that uses all the logic to start a queue with no
> side effects for the guest, and another that actually assumes that
> the guest has just started the device. vDPA should use just the
> former.
> * Actually store and check whether the guest notifier is masked, and do
> it conditionally.
> * Leave it as it is, and duplicate all the logic in vhost-vdpa.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
> hw/virtio/vhost-shadow-virtqueue.c | 19 +++++++++++++++
> hw/virtio/vhost-vdpa.c | 38 +++++++++++++++++++++++++++++-
> 2 files changed, 56 insertions(+), 1 deletion(-)
>
> diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
> index 21dc99ab5d..3fe129cf63 100644
> --- a/hw/virtio/vhost-shadow-virtqueue.c
> +++ b/hw/virtio/vhost-shadow-virtqueue.c
> @@ -53,6 +53,22 @@ static void vhost_handle_guest_kick(EventNotifier *n)
> event_notifier_set(&svq->kick_notifier);
> }
>
> +/* Forward vhost notifications */
> +static void vhost_svq_handle_call_no_test(EventNotifier *n)
> +{
> + VhostShadowVirtqueue *svq = container_of(n, VhostShadowVirtqueue,
> + call_notifier);
> +
> + event_notifier_set(&svq->guest_call_notifier);
> +}
> +
> +static void vhost_svq_handle_call(EventNotifier *n)
> +{
> + if (likely(event_notifier_test_and_clear(n))) {
> + vhost_svq_handle_call_no_test(n);
> + }
> +}
> +
> /*
> * Obtain the SVQ call notifier, where vhost device notifies SVQ that there
> * exists pending used buffers.
> @@ -180,6 +196,8 @@ VhostShadowVirtqueue *vhost_svq_new(struct vhost_dev *dev, int idx)
> }
>
> svq->vq = virtio_get_queue(dev->vdev, vq_idx);
> + event_notifier_set_handler(&svq->call_notifier,
> + vhost_svq_handle_call);
> return g_steal_pointer(&svq);
>
> err_init_call_notifier:
> @@ -195,6 +213,7 @@ err_init_kick_notifier:
> void vhost_svq_free(VhostShadowVirtqueue *vq)
> {
> event_notifier_cleanup(&vq->kick_notifier);
> + event_notifier_set_handler(&vq->call_notifier, NULL);
> event_notifier_cleanup(&vq->call_notifier);
> g_free(vq);
> }
> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> index bc34de2439..6c5f4c98b8 100644
> --- a/hw/virtio/vhost-vdpa.c
> +++ b/hw/virtio/vhost-vdpa.c
> @@ -712,13 +712,40 @@ static bool vhost_vdpa_svq_start_vq(struct vhost_dev *dev, unsigned idx)
> {
> struct vhost_vdpa *v = dev->opaque;
> VhostShadowVirtqueue *svq = g_ptr_array_index(v->shadow_vqs, idx);
> - return vhost_svq_start(dev, idx, svq);
> + EventNotifier *vhost_call_notifier = vhost_svq_get_svq_call_notifier(svq);
> + struct vhost_vring_file vhost_call_file = {
> + .index = idx + dev->vq_index,
> + .fd = event_notifier_get_fd(vhost_call_notifier),
> + };
> + int r;
> + bool b;
> +
> + /* Set shadow vq -> guest notifier */
> + assert(v->call_fd[idx]);
We need to avoid the assert() here. In which case can we hit this?
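If it is reachable at all, failing gracefully looks better than aborting,
e.g. something like this (untested sketch):

    if (unlikely(!v->call_fd[idx])) {
        error_report("vhost-vdpa: no guest call fd for vq %u", idx);
        return false;
    }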
> + vhost_svq_set_guest_call_notifier(svq, v->call_fd[idx]);
> +
> + b = vhost_svq_start(dev, idx, svq);
> + if (unlikely(!b)) {
> + return false;
> + }
> +
> + /* Set device -> SVQ notifier */
> + r = vhost_vdpa_set_vring_dev_call(dev, &vhost_call_file);
> + if (unlikely(r)) {
> + error_report("vhost_vdpa_set_vring_call for shadow vq failed");
> + return false;
> + }
Similar to kick, do we need to set_vring_call() before vhost_svq_start()?
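If so, the start sequence would become something like this (untested,
just reordering the code in this patch):

    /* Set device -> SVQ notifier before exposing the ring to the
     * device, so no used buffer notification can be missed */
    r = vhost_vdpa_set_vring_dev_call(dev, &vhost_call_file);
    if (unlikely(r)) {
        error_report("vhost_vdpa_set_vring_call for shadow vq failed");
        return false;
    }

    b = vhost_svq_start(dev, idx, svq);
    if (unlikely(!b)) {
        return false;
    }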
> +
> + /* Check for pending calls */
> + event_notifier_set(vhost_call_notifier);
Interesting, can this result in a spurious interrupt? The path seems to
be event_notifier_set(vhost_call_notifier) -> vhost_svq_handle_call() ->
event_notifier_set(&svq->guest_call_notifier), so the guest may be
notified with no used buffer. Drivers must tolerate spurious
notifications, so it is probably harmless, but it deserves a comment.
> + return true;
> }
>
> static unsigned vhost_vdpa_enable_svq(struct vhost_vdpa *v, bool enable)
> {
> struct vhost_dev *hdev = v->dev;
> unsigned n;
> + int r;
>
> if (enable == v->shadow_vqs_enabled) {
> return hdev->nvqs;
> @@ -752,9 +779,18 @@ static unsigned vhost_vdpa_enable_svq(struct vhost_vdpa *v, bool enable)
> if (!enable) {
> /* Disable all queues or clean up failed start */
> for (n = 0; n < v->shadow_vqs->len; ++n) {
> + struct vhost_vring_file file = {
> + .index = vhost_vdpa_get_vq_index(hdev, n),
> + .fd = v->call_fd[n],
> + };
> +
> + r = vhost_vdpa_set_vring_call(hdev, &file);
> + assert(r == 0);
> +
> unsigned vq_idx = vhost_vdpa_get_vq_index(hdev, n);
> VhostShadowVirtqueue *svq = g_ptr_array_index(v->shadow_vqs, n);
> vhost_svq_stop(hdev, n, svq);
> + /* TODO: This can unmask or override call fd! */
I don't get this comment. Does this mean the current code can't work
with mask_notifiers? If yes, this is something we need to fix.
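For the record, I guess option 2 of the commit log ("store and check if
the guest notifier is masked") would look roughly like the sketch below.
Note that v->masked[] is a made-up field just for illustration; it would
need to be updated from the mask/unmask path:

    /* Only restore the guest call fd if the guest expects interrupts */
    if (!v->masked[n]) {
        struct vhost_vring_file file = {
            .index = vhost_vdpa_get_vq_index(hdev, n),
            .fd = v->call_fd[n],
        };

        r = vhost_vdpa_set_vring_call(hdev, &file);
        if (unlikely(r)) {
            error_report("Couldn't restore vq call fd");
        }
    }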
Thanks
> vhost_virtqueue_start(hdev, hdev->vdev, &hdev->vqs[n], vq_idx);
> }
>