From: Stefan Hajnoczi <stefanha@redhat.com>
To: Hanna Czenczek <hreitz@redhat.com>
Cc: qemu-block@nongnu.org, qemu-devel@nongnu.org,
Fiona Ebner <f.ebner@proxmox.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Kevin Wolf <kwolf@redhat.com>,
"Michael S . Tsirkin" <mst@redhat.com>,
Fam Zheng <fam@euphon.net>
Subject: Re: [PATCH 2/2] virtio: Keep notifications disabled during drain
Date: Thu, 25 Jan 2024 13:03:26 -0500
Message-ID: <20240125180326.GA36016@fedora>
In-Reply-To: <20240124173834.66320-3-hreitz@redhat.com>
On Wed, Jan 24, 2024 at 06:38:30PM +0100, Hanna Czenczek wrote:
> During drain, we do not care about virtqueue notifications, which is why
> we remove the handlers from the virtqueue. Whether notifications are
> enabled at the point of removal depends on whether we were in polling
> mode: if not, they are enabled (the default); if so, they have been
> disabled by the io_poll_begin callback.
>
> Because we do not care about those notifications after removing the
> handlers, this is fine. However, we have to explicitly ensure they are
> enabled when re-attaching the handlers, so we will resume receiving
> notifications. We do this in virtio_queue_aio_attach_host_notifier*().
> If such a function is called while we are in a polling section,
> attaching the notifiers will then invoke the io_poll_begin callback,
> re-disabling notifications.
>
> Because we will always miss virtqueue updates in the drained section, we
> also need to poll the virtqueue once after attaching the notifiers.
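Just to check that I follow the intended call sequence, here is a rough
sketch with a made-up device (the struct and callback names are invented
for illustration, not actual QEMU code; iterating over multiple queues is
omitted):

    /* hypothetical device state */
    typedef struct {
        VirtQueue *vq;
        AioContext *ctx;
    } MyDev;

    static void my_dev_drained_begin(void *opaque)
    {
        MyDev *s = opaque;

        /*
         * Stop servicing the queue. If we happened to be inside a
         * polling section, io_poll_end() is never called for this
         * notifier, so guest notifications may still be disabled here.
         */
        virtio_queue_aio_detach_host_notifier(s->vq, s->ctx);
    }

    static void my_dev_drained_end(void *opaque)
    {
        MyDev *s = opaque;

        /*
         * With this patch, the attach helper re-enables notifications
         * and polls the queue once, so requests the guest queued while
         * we were drained are picked up.
         */
        virtio_queue_aio_attach_host_notifier(s->vq, s->ctx);
    }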
>
> Buglink: https://issues.redhat.com/browse/RHEL-3934
> Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
> ---
> include/block/aio.h | 7 ++++++-
> hw/virtio/virtio.c | 42 ++++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 48 insertions(+), 1 deletion(-)
>
> diff --git a/include/block/aio.h b/include/block/aio.h
> index 5d0a114988..8378553eb9 100644
> --- a/include/block/aio.h
> +++ b/include/block/aio.h
> @@ -480,9 +480,14 @@ void aio_set_event_notifier(AioContext *ctx,
> AioPollFn *io_poll,
> EventNotifierHandler *io_poll_ready);
>
> -/* Set polling begin/end callbacks for an event notifier that has already been
> +/*
> + * Set polling begin/end callbacks for an event notifier that has already been
> * registered with aio_set_event_notifier. Do nothing if the event notifier is
> * not registered.
> + *
> + * Note that if the io_poll_end() callback (or the entire notifier) is removed
> + * during polling, it will not be called, so an io_poll_begin() is not
> + * necessarily always followed by an io_poll_end().
> */
> void aio_set_event_notifier_poll(AioContext *ctx,
> EventNotifier *notifier,
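The added comment is helpful. For readers who come across this later, the
virtio handlers touched below are the concrete case -- roughly (from
memory, so treat this as a paraphrase of hw/virtio/virtio.c rather than
the exact code):

    static void virtio_queue_host_notifier_aio_poll_begin(EventNotifier *n)
    {
        VirtQueue *vq = container_of(n, VirtQueue, host_notifier);

        /* suppress guest notifications while we busy-poll the ring */
        virtio_queue_set_notification(vq, 0);
    }

    static void virtio_queue_host_notifier_aio_poll_end(EventNotifier *n)
    {
        VirtQueue *vq = container_of(n, VirtQueue, host_notifier);

        /* not guaranteed to run if the notifier is removed mid-poll */
        virtio_queue_set_notification(vq, 1);
    }

So if the notifier is detached while a polling section is active, only the
begin callback has run and notifications stay disabled, which is what the
virtio.c changes below compensate for.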
> diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> index 7549094154..4166da9e97 100644
> --- a/hw/virtio/virtio.c
> +++ b/hw/virtio/virtio.c
> @@ -3556,6 +3556,17 @@ static void virtio_queue_host_notifier_aio_poll_end(EventNotifier *n)
>
> void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
> {
> + /*
> + * virtio_queue_aio_detach_host_notifier() can leave notifications disabled.
> + * Re-enable them. (And if detach has not been used before, notifications
> + * being enabled is still the default state while a notifier is attached;
> + * see virtio_queue_host_notifier_aio_poll_end(), which will always leave
> + * notifications enabled once the polling section is left.)
> + */
> + if (!virtio_queue_get_notification(vq)) {
> + virtio_queue_set_notification(vq, 1);
> + }
> +
> aio_set_event_notifier(ctx, &vq->host_notifier,
> virtio_queue_host_notifier_read,
> virtio_queue_host_notifier_aio_poll,
> @@ -3563,6 +3574,13 @@ void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
> aio_set_event_notifier_poll(ctx, &vq->host_notifier,
> virtio_queue_host_notifier_aio_poll_begin,
> virtio_queue_host_notifier_aio_poll_end);
> +
> + /*
> + * We will have ignored notifications about new requests from the guest
> + * during the drain, so "kick" the virt queue to process those requests
> + * now.
> + */
> + virtio_queue_notify(vq->vdev, vq->queue_index);
event_notifier_set(&vq->host_notifier) is easier to understand because
it doesn't contain a non-host_notifier code path that we must not take.
Is there a reason why you used virtio_queue_notify() instead?
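To be concrete, I would have expected something along the lines of
(untested):

    /*
     * Kick the host notifier directly; the ioeventfd read handler we
     * just attached will process the vring in the AioContext.
     */
    event_notifier_set(&vq->host_notifier);

whereas virtio_queue_notify() can, if I remember correctly, also fall back
to calling the device's handle_output callback directly when the host
notifier is not in use.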
Otherwise:
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> }
>
> /*
> @@ -3573,14 +3591,38 @@ void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
> */
> void virtio_queue_aio_attach_host_notifier_no_poll(VirtQueue *vq, AioContext *ctx)
> {
> + /* See virtio_queue_aio_attach_host_notifier() */
> + if (!virtio_queue_get_notification(vq)) {
> + virtio_queue_set_notification(vq, 1);
> + }
> +
> aio_set_event_notifier(ctx, &vq->host_notifier,
> virtio_queue_host_notifier_read,
> NULL, NULL);
> +
> + /*
> + * See virtio_queue_aio_attach_host_notifier().
> + * Note that this may be unnecessary for the type of virtqueues this
> + * function is used for. Still, it will not hurt to have a quick look into
> + * whether we can/should process any of the virtqueue elements.
> + */
> + virtio_queue_notify(vq->vdev, vq->queue_index);
> }
>
> void virtio_queue_aio_detach_host_notifier(VirtQueue *vq, AioContext *ctx)
> {
> aio_set_event_notifier(ctx, &vq->host_notifier, NULL, NULL, NULL);
> +
> + /*
> + * aio_set_event_notifier_poll() does not guarantee whether io_poll_end()
> + * will run after io_poll_begin(), so by removing the notifier, we do not
> + * know whether virtio_queue_host_notifier_aio_poll_end() has run after a
> + * previous virtio_queue_host_notifier_aio_poll_begin(), i.e. whether
> + * notifications are enabled or disabled. It does not really matter anyway;
> + * we just removed the notifier, so we do not care about notifications until
> + * we potentially re-attach it. The attach_host_notifier functions will
> + * ensure that notifications are enabled again when they are needed.
> + */
> }
>
> void virtio_queue_host_notifier_read(EventNotifier *n)
> --
> 2.43.0
>
Thread overview: 11+ messages
2024-01-24 17:38 [PATCH 0/2] virtio: Keep notifications disabled during drain Hanna Czenczek
2024-01-24 17:38 ` [PATCH 1/2] virtio-scsi: Attach event vq notifier with no_poll Hanna Czenczek
2024-01-24 22:00 ` Stefan Hajnoczi
2024-01-25 9:43 ` Fiona Ebner
2024-01-24 17:38 ` [PATCH 2/2] virtio: Keep notifications disabled during drain Hanna Czenczek
2024-01-25 9:43 ` Fiona Ebner
2024-01-25 18:03 ` Stefan Hajnoczi [this message]
2024-01-25 18:18 ` Hanna Czenczek
2024-01-25 18:32 ` Hanna Czenczek
2024-01-25 21:32 ` Stefan Hajnoczi
2024-01-25 18:05 ` [PATCH 0/2] " Stefan Hajnoczi