qemu-devel.nongnu.org archive mirror
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Farhan Ali <alifm@linux.vnet.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cohuck@redhat.com>, Thomas Huth <thuth@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	mreitz@redhat.com, famz@redhat.com,
	QEMU Developers <qemu-devel@nongnu.org>,
	"open list:virtio-ccw" <qemu-s390x@nongnu.org>
Subject: Re: [Qemu-devel] [BUG] I/O thread segfault for QEMU on s390x
Date: Fri, 2 Mar 2018 09:23:18 +0000	[thread overview]
Message-ID: <20180302092318.GA6026@stefanha-x1.localdomain> (raw)
In-Reply-To: <079a5da7-6586-b974-6b99-e5de055b1bd1@linux.vnet.ibm.com>


On Thu, Mar 01, 2018 at 09:33:35AM -0500, Farhan Ali wrote:
> Hi,
> 
> I have been noticing some segfaults for QEMU on s390x, and I have been
> hitting this issue quite reliably (at least once in 10 runs of a test case).
> The QEMU version is 2.11.50, and I have systemd-created coredumps
> from when this happens.
> 
> Here is a back trace of the segfaulting thread:

The backtrace looks normal.

Please post the QEMU command-line and the details of the segfault (which
memory access faulted?).

> #0  0x000003ffafed202c in swapcontext () from /lib64/libc.so.6
> #1  0x000002aa355c02ee in qemu_coroutine_new () at util/coroutine-ucontext.c:164
> #2  0x000002aa355bec34 in qemu_coroutine_create (entry=entry@entry=0x2aa3550f7a8 <blk_aio_read_entry>, opaque=opaque@entry=0x3ffa002afa0) at util/qemu-coroutine.c:76
> #3  0x000002aa35510262 in blk_aio_prwv (blk=0x2aa65fbefa0, offset=<optimized out>, bytes=<optimized out>, qiov=0x3ffa002a9c0, co_entry=co_entry@entry=0x2aa3550f7a8 <blk_aio_read_entry>, flags=0, cb=0x2aa35340a50 <virtio_blk_rw_complete>, opaque=0x3ffa002a960) at block/block-backend.c:1299
> #4  0x000002aa35510376 in blk_aio_preadv (blk=<optimized out>, offset=<optimized out>, qiov=<optimized out>, flags=<optimized out>, cb=<optimized out>, opaque=0x3ffa002a960) at block/block-backend.c:1392
> #5  0x000002aa3534114e in submit_requests (niov=<optimized out>, num_reqs=<optimized out>, start=<optimized out>, mrb=<optimized out>, blk=<optimized out>) at /usr/src/debug/qemu-2.11.50/hw/block/virtio-blk.c:372
> #6  virtio_blk_submit_multireq (blk=<optimized out>, mrb=mrb@entry=0x3ffad77e640) at /usr/src/debug/qemu-2.11.50/hw/block/virtio-blk.c:402
> #7  0x000002aa353422e0 in virtio_blk_handle_vq (s=0x2aa6611e7d8, vq=0x3ffb0f5f010) at /usr/src/debug/qemu-2.11.50/hw/block/virtio-blk.c:620
> #8  0x000002aa3536655a in virtio_queue_notify_aio_vq (vq=vq@entry=0x3ffb0f5f010) at /usr/src/debug/qemu-2.11.50/hw/virtio/virtio.c:1515
> #9  0x000002aa35366cd6 in virtio_queue_notify_aio_vq (vq=0x3ffb0f5f010) at /usr/src/debug/qemu-2.11.50/hw/virtio/virtio.c:1511
> #10 virtio_queue_host_notifier_aio_poll (opaque=0x3ffb0f5f078) at /usr/src/debug/qemu-2.11.50/hw/virtio/virtio.c:2409
> #11 0x000002aa355a8ba4 in run_poll_handlers_once (ctx=ctx@entry=0x2aa65f99310) at util/aio-posix.c:497
> #12 0x000002aa355a9b74 in run_poll_handlers (max_ns=<optimized out>, ctx=0x2aa65f99310) at util/aio-posix.c:534
> #13 try_poll_mode (blocking=true, ctx=0x2aa65f99310) at util/aio-posix.c:562
> #14 aio_poll (ctx=0x2aa65f99310, blocking=blocking@entry=true) at util/aio-posix.c:602
> #15 0x000002aa353d2d0a in iothread_run (opaque=0x2aa65f990f0) at iothread.c:60
> #16 0x000003ffb0f07e82 in start_thread () from /lib64/libpthread.so.0
> #17 0x000003ffaff91596 in thread_start () from /lib64/libc.so.6
> 
> 
> I don't have much knowledge about I/O threads or the block layer code in
> QEMU, so I am reporting this issue to the community.
> I believe this is very similar to the bug that I reported upstream a
> couple of days ago
> (https://lists.gnu.org/archive/html/qemu-devel/2018-02/msg04452.html).
> 
> Any help would be greatly appreciated.
> 
> Thanks
> Farhan
> 


Thread overview: 13+ messages
2018-03-01 14:33 [Qemu-devel] [BUG] I/O thread segfault for QEMU on s390x Farhan Ali
2018-03-02  6:13 ` Fam Zheng
2018-03-02 15:35   ` Farhan Ali
2018-03-02  9:23 ` Stefan Hajnoczi [this message]
2018-03-02 15:30   ` Farhan Ali
2018-03-05 11:03     ` Stefan Hajnoczi
2018-03-05 18:45       ` Farhan Ali
2018-03-05 18:54         ` Christian Borntraeger
2018-03-05 19:07           ` Peter Maydell
2018-03-05 19:08           ` Christian Borntraeger
2018-03-05 19:43             ` Farhan Ali
2018-03-06  6:34             ` Martin Schwidefsky
2018-03-07 12:52               ` Farhan Ali
