From: Paolo Bonzini <pbonzini@redhat.com>
To: l00284672 <lizhengui@huawei.com>, kwolf@redhat.com
Cc: jiangyiwen@huawei.com, wangjie88@huawei.com,
qemu-devel@nongnu.org, qemu-block@nongnu.org,
eric.fangyi@huawei.com
Subject: Re: [Qemu-devel] [PATCH v2] virtio-scsi: fixed virtio_scsi_ctx_check failed when detaching scsi disk
Date: Mon, 22 Jul 2019 14:01:47 +0200 [thread overview]
Message-ID: <3d900e19-5025-c882-627b-8217a8ba1542@redhat.com> (raw)
In-Reply-To: <1563696502-7972-1-git-send-email-lizhengui@huawei.com>
On 21/07/19 10:08, l00284672 wrote:
> Commit a6f230c moved the BlockBackend back to the main AioContext on unplug.
> It sets the AioContext of the SCSIDevice to the main AioContext, but s->ctx
> is still the iothread AioContext (if the SCSI controller is configured with
> an iothread). So if there are in-flight requests during unplug, the
> assertion fails. The backtrace is below:
> (gdb) bt
> #0 0x0000ffff86aacbd0 in raise () from /lib64/libc.so.6
> #1 0x0000ffff86aadf7c in abort () from /lib64/libc.so.6
> #2 0x0000ffff86aa6124 in __assert_fail_base () from /lib64/libc.so.6
> #3 0x0000ffff86aa61a4 in __assert_fail () from /lib64/libc.so.6
> #4 0x0000000000529118 in virtio_scsi_ctx_check (d=<optimized out>, s=<optimized out>, s=<optimized out>) at /home/qemu-4.0.0/hw/scsi/virtio-scsi.c:246
> #5 0x0000000000529ec4 in virtio_scsi_handle_cmd_req_prepare (s=0x2779ec00, req=0xffff740397d0) at /home/qemu-4.0.0/hw/scsi/virtio-scsi.c:559
> #6 0x000000000052a228 in virtio_scsi_handle_cmd_vq (s=0x2779ec00, vq=0xffff7c6d7110) at /home/qemu-4.0.0/hw/scsi/virtio-scsi.c:603
> #7 0x000000000052afa8 in virtio_scsi_data_plane_handle_cmd (vdev=<optimized out>, vq=0xffff7c6d7110) at /home/qemu-4.0.0/hw/scsi/virtio-scsi-dataplane.c:59
> #8 0x000000000054d94c in virtio_queue_host_notifier_aio_poll (opaque=<optimized out>) at /home/qemu-4.0.0/hw/virtio/virtio.c:2452
>
> assert(blk_get_aio_context(d->conf.blk) == s->ctx) failed.
>
> To avoid the assertion failure, move the "if" block after
> qdev_simple_device_unplug_cb.
>
> In addition, to avoid the other QEMU crash below, add aio_disable_external
> before qdev_simple_device_unplug_cb, which disables further processing of
> external clients while qdev_simple_device_unplug_cb runs.
> (gdb) bt
> #0 scsi_req_unref (req=0xffff6802c6f0) at hw/scsi/scsi-bus.c:1283
> #1 0x00000000005294a4 in virtio_scsi_handle_cmd_req_submit (req=<optimized out>,
> s=<optimized out>) at /home/qemu-4.0.0/hw/scsi/virtio-scsi.c:589
> #2 0x000000000052a2a8 in virtio_scsi_handle_cmd_vq (s=s@entry=0x9c90e90,
> vq=vq@entry=0xffff7c05f110) at /home/qemu-4.0.0/hw/scsi/virtio-scsi.c:625
> #3 0x000000000052afd8 in virtio_scsi_data_plane_handle_cmd (vdev=<optimized out>,
> vq=0xffff7c05f110) at /home/qemu-4.0.0/hw/scsi/virtio-scsi-dataplane.c:60
> #4 0x000000000054d97c in virtio_queue_host_notifier_aio_poll (opaque=<optimized out>)
> at /home/qemu-4.0.0/hw/virtio/virtio.c:2447
> #5 0x00000000009b204c in run_poll_handlers_once (ctx=ctx@entry=0x6efea40,
> timeout=timeout@entry=0xffff7d7f7308) at util/aio-posix.c:521
> #6 0x00000000009b2b64 in run_poll_handlers (ctx=ctx@entry=0x6efea40,
> max_ns=max_ns@entry=4000, timeout=timeout@entry=0xffff7d7f7308) at util/aio-posix.c:559
> #7 0x00000000009b2ca0 in try_poll_mode (ctx=ctx@entry=0x6efea40, timeout=0xffff7d7f7308,
> timeout@entry=0xffff7d7f7348) at util/aio-posix.c:594
> #8 0x00000000009b31b8 in aio_poll (ctx=0x6efea40, blocking=blocking@entry=true)
> at util/aio-posix.c:636
> #9 0x00000000006973cc in iothread_run (opaque=0x6ebd800) at iothread.c:75
> #10 0x00000000009b592c in qemu_thread_start (args=0x6efef60) at util/qemu-thread-posix.c:502
> #11 0x0000ffff8057f8bc in start_thread () from /lib64/libpthread.so.0
> #12 0x0000ffff804e5f8c in thread_start () from /lib64/libc.so.6
> (gdb) p bus
> $1 = (SCSIBus *) 0x0
>
> Signed-off-by: Zhengui li <lizhengui@huawei.com>
> ---
> hw/scsi/virtio-scsi.c | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
> index 839f120..79e555f 100644
> --- a/hw/scsi/virtio-scsi.c
> +++ b/hw/scsi/virtio-scsi.c
> @@ -837,13 +837,15 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
> virtio_scsi_release(s);
> }
>
> + aio_disable_external(s->ctx);
> + qdev_simple_device_unplug_cb(hotplug_dev, dev, errp);
> + aio_enable_external(s->ctx);
> +
> if (s->ctx) {
> virtio_scsi_acquire(s);
> blk_set_aio_context(sd->conf.blk, qemu_get_aio_context());
> virtio_scsi_release(s);
> }
> -
> - qdev_simple_device_unplug_cb(hotplug_dev, dev, errp);
> }
>
> static struct SCSIBusInfo virtio_scsi_scsi_info = {
>
Queued, thanks.
Paolo