qemu-devel.nongnu.org archive mirror
From: Paul Durrant <xadimgnik@gmail.com>
To: mark.syms@citrix.com, qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2 RESEND] xen: Fix SEGV on domain disconnect
Date: Mon, 24 Apr 2023 11:32:41 +0100	[thread overview]
Message-ID: <54a37172-cad5-3b27-36fc-3b7768e39df8@xen.org> (raw)
In-Reply-To: <20230420110205.688689-1-mark.syms@citrix.com>

On 20/04/2023 12:02, mark.syms@citrix.com wrote:
> From: Mark Syms <mark.syms@citrix.com>
> 
> Ensure the PV ring is drained on disconnect. Also ensure all pending
> AIO is complete, otherwise AIO tries to complete into a mapping of the
> ring which has been torn down.
> 
> Signed-off-by: Mark Syms <mark.syms@citrix.com>
> ---
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Anthony Perard <anthony.perard@citrix.com>
> CC: Paul Durrant <paul@xen.org>
> CC: xen-devel@lists.xenproject.org
> 
> v2:
>   * Ensure all inflight requests are completed before teardown
>   * RESEND to fix formatting
> ---
>   hw/block/dataplane/xen-block.c | 31 +++++++++++++++++++++++++------
>   1 file changed, 25 insertions(+), 6 deletions(-)
> 
> diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
> index 734da42ea7..d9da4090bf 100644
> --- a/hw/block/dataplane/xen-block.c
> +++ b/hw/block/dataplane/xen-block.c
> @@ -523,6 +523,10 @@ static bool xen_block_handle_requests(XenBlockDataPlane *dataplane)
>   
>       dataplane->more_work = 0;
>   
> +    if (dataplane->sring == 0) {
> +        return done_something;
> +    }
> +

I think you could just return false here... Nothing is ever going to be 
done if there's no ring :-)

>       rc = dataplane->rings.common.req_cons;
>       rp = dataplane->rings.common.sring->req_prod;
>       xen_rmb(); /* Ensure we see queued requests up to 'rp'. */
> @@ -666,14 +670,35 @@ void xen_block_dataplane_destroy(XenBlockDataPlane *dataplane
>   void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
>   {
>       XenDevice *xendev;
> +    XenBlockRequest *request, *next;
>   
>       if (!dataplane) {
>           return;
>       }
>   
> +    /* We're about to drain the ring. We can cancel the scheduling of any
> +     * bottom half now */
> +    qemu_bh_cancel(dataplane->bh);
> +
> +    /* Ensure we have drained the ring */
> +    aio_context_acquire(dataplane->ctx);
> +    do {
> +        xen_block_handle_requests(dataplane);
> +    } while (dataplane->more_work);
> +    aio_context_release(dataplane->ctx);
> +

I don't think we want to be taking new requests, do we?

> +    /* Now ensure that all inflight requests are complete */
> +    while (!QLIST_EMPTY(&dataplane->inflight)) {
> +        QLIST_FOREACH_SAFE(request, &dataplane->inflight, list, next) {
> +            blk_aio_flush(request->dataplane->blk, xen_block_complete_aio,
> +                        request);
> +        }
> +    }
> +

I think this could possibly be simplified by doing the drain after the 
call to blk_set_aio_context(), as long as we set dataplane->ctx to 
qemu_get_aio_context(). Also, as long as more_work is not set, it 
should still be safe to cancel the bh before the drain AFAICT.
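If I read the suggestion right, the reordered stop path would look roughly like this (a non-compilable sketch using the function names from the patch; the ordering is the suggestion under discussion, not a tested implementation):

```
void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
{
    ...
    /* safe provided more_work is not set */
    qemu_bh_cancel(dataplane->bh);

    /* move the block backend back to the main context first */
    aio_context_acquire(dataplane->ctx);
    blk_set_aio_context(dataplane->blk, qemu_get_aio_context(), &error_abort);
    aio_context_release(dataplane->ctx);
    dataplane->ctx = qemu_get_aio_context();

    /* then drain the ring and inflight AIO from the main context */
    do {
        xen_block_handle_requests(dataplane);
    } while (dataplane->more_work);
    ...
}
```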

   Paul

>       xendev = dataplane->xendev;
>   
>       aio_context_acquire(dataplane->ctx);
> +
>       if (dataplane->event_channel) {
>           /* Only reason for failure is a NULL channel */
>           xen_device_set_event_channel_context(xendev, dataplane->event_channel,
> @@ -684,12 +709,6 @@ void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
>       blk_set_aio_context(dataplane->blk, qemu_get_aio_context(), &error_abort);
>       aio_context_release(dataplane->ctx);
>   
> -    /*
> -     * Now that the context has been moved onto the main thread, cancel
> -     * further processing.
> -     */
> -    qemu_bh_cancel(dataplane->bh);
> -
>       if (dataplane->event_channel) {
>           Error *local_err = NULL;
>   




Thread overview: 6+ messages
2023-04-20 11:02 [PATCH v2 RESEND] xen: Fix SEGV on domain disconnect mark.syms@citrix.com
2023-04-24 10:32 ` Paul Durrant [this message]
2023-04-24 12:07   ` Mark Syms
2023-04-24 13:17     ` Tim Smith
2023-04-24 13:51       ` Paul Durrant
2023-04-26  8:32         ` Tim Smith
