From: Anthony PERARD via <qemu-devel@nongnu.org>
To: <qemu-devel@nongnu.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>
Cc: <xen-devel@lists.xenproject.org>, <qemu-block@nongnu.org>
Subject: Re: [PATCH] xen-block: Fix removal of backend instance via xenstore
Date: Mon, 22 Mar 2021 14:31:54 +0000
Message-ID: <YFiqWsUC2q+01xQD@perard>
In-Reply-To: <20210308143232.83388-1-anthony.perard@citrix.com>

Hi Paul, Stefano,

Could one of you give an Ack to this patch?

Thanks,


On Mon, Mar 08, 2021 at 02:32:32PM +0000, Anthony PERARD wrote:
> From: Anthony PERARD <anthony.perard@citrix.com>
> 
> Whenever a Xen block device is detached via xenstore, the image
> associated with it remains open by the backend QEMU and an error is
> logged:
>     qemu-system-i386: failed to destroy drive: Node xvdz-qcow2 is in use
> 
> This happens because object_unparent() doesn't immediately free the
> object and thus keeps a reference to the node we are trying to free.
> The reference is held by the "drive" property, so the call to
> xen_block_drive_destroy() fails.
> 
> In order to fix that, we call drain_call_rcu() to run the callback
> set up by bus_remove_child() via object_unparent().
> 
> Fixes: 2d24a6466154 ("device-core: use RCU for list of children of a bus")
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> ---
> CCing the people who introduced/reviewed the change to use RCU, to
> give them a chance to say whether this change is fine.
> ---
>  hw/block/xen-block.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
> index a3b69e27096f..fe5f828e2d25 100644
> --- a/hw/block/xen-block.c
> +++ b/hw/block/xen-block.c
> @@ -972,6 +972,15 @@ static void xen_block_device_destroy(XenBackendInstance *backend,
>  
>      object_unparent(OBJECT(xendev));
>  
> +    /*
> +     * Drain all pending RCU callbacks, as object_unparent() frees `xendev'
> +     * in an RCU callback.
> +     * Due to the property "drive" still existing in `xendev', we
> +     * can't destroy the XenBlockDrive associated with `xendev' with
> +     * xen_block_drive_destroy() below.
> +     */
> +    drain_call_rcu();
> +
>      if (iothread) {
>          xen_block_iothread_destroy(iothread, errp);
>          if (*errp) {
> -- 
> Anthony PERARD
> 

-- 
Anthony PERARD
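
For readers following along, here is a minimal sketch of the deferred-free
pattern the patch works around. The QEMU RCU primitives (call_rcu1() and
drain_call_rcu() from "qemu/rcu.h") are real; the Child type and the helper
names are hypothetical, for illustration only.

    #include "qemu/osdep.h"
    #include "qemu/rcu.h"

    typedef struct Child {
        struct rcu_head rcu;
        /* ... payload ... */
    } Child;

    /* RCU callback: runs only once no reader can still see the child. */
    static void child_free(struct rcu_head *rcu)
    {
        g_free(container_of(rcu, Child, rcu));
    }

    static void remove_child(Child *c)
    {
        /* Removal is deferred, like bus_remove_child() after commit
         * 2d24a6466154: the object stays alive, and keeps whatever it
         * references, until the RCU callback has run. */
        call_rcu1(&c->rcu, child_free);
    }

    static void teardown(Child *c)
    {
        remove_child(c);
        drain_call_rcu();  /* block until child_free() has run */
        /* Only now is it safe to destroy resources the child was still
         * pinning, which is what xen_block_drive_destroy() needs. */
    }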



Thread overview: 8+ messages
2021-03-08 14:32 [PATCH] xen-block: Fix removal of backend instance via xenstore Anthony PERARD via
2021-03-08 14:38 ` Paolo Bonzini
2021-03-08 17:29   ` Anthony PERARD via
2021-03-08 17:37     ` Paolo Bonzini
2021-03-08 18:14       ` Anthony PERARD via
2021-03-08 18:23         ` Paolo Bonzini
2021-03-22 14:31 ` Anthony PERARD via [this message]
2021-03-22 15:03 ` Paul Durrant
