From: tu bo <tubo@linux.vnet.ibm.com>
To: Christian Borntraeger <borntraeger@de.ibm.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Cornelia Huck <cornelia.huck@de.ibm.com>,
qemu-devel@nongnu.org
Cc: famz@redhat.com, stefanha@redhat.com, mst@redhat.com
Subject: Re: [Qemu-devel] [PATCH 0/6] virtio: refactor host notifiers
Date: Thu, 31 Mar 2016 10:47:20 +0800 [thread overview]
Message-ID: <56FC8FB8.9060701@linux.vnet.ibm.com> (raw)
In-Reply-To: <56FA6CF2.2030907@de.ibm.com>
Hi Christian:
On 03/29/2016 07:54 PM, Christian Borntraeger wrote:
> On 03/29/2016 11:14 AM, tu bo wrote:
>> Hi Paolo:
>>
>> On 03/29/2016 02:11 AM, Paolo Bonzini wrote:
>>> On 28/03/2016 05:55, TU BO wrote:
>>>> Hi Cornelia:
>>>>
>>>> I got two crashes with qemu master + "[PATCH 0/6] virtio: refactor host
>>>> notifiers",
>>>
>>> Hi Tu Bo,
>>>
>>> please always include the assertion patch at
>>> https://lists.gnu.org/archive/html/qemu-block/2016-03/msg00546.html in
>>> your tests. Can you include the backtrace from all threads with that patch?
>>>
>> Thanks for the reminder about the assertion patch. Here is the backtrace with qemu master + assertion patch + "[PATCH 0/6] virtio: refactor host notifiers".
>>
>> I got two crashes.
>>
>> 1. First crash:
>> (gdb) thread apply all bt
>>
>> Thread 8 (Thread 0x3ff8daf1910 (LWP 52859)):
>> #0 0x000003ff9718ec62 in do_futex_timed_wait () from /lib64/libpthread.so.0
>> #1 0x000003ff9718ed76 in sem_timedwait () from /lib64/libpthread.so.0
>> #2 0x000002aa2d755868 in qemu_sem_timedwait (sem=0x3ff88000fa8, ms=<optimized out>) at util/qemu-thread-posix.c:245
>> #3 0x000002aa2d6803e4 in worker_thread (opaque=0x3ff88000f40) at thread-pool.c:92
>> #4 0x000003ff971884c6 in start_thread () from /lib64/libpthread.so.0
>> #5 0x000003ff96802ec2 in thread_start () from /lib64/libc.so.6
>>
>> Thread 7 (Thread 0x3ff8e679910 (LWP 52856)):
>> #0 0x000003ff9718ec62 in do_futex_timed_wait () from /lib64/libpthread.so.0
>> #1 0x000003ff9718ed76 in sem_timedwait () from /lib64/libpthread.so.0
>> #2 0x000002aa2d755868 in qemu_sem_timedwait (sem=0x2aa2e1fbfa8, ms=<optimized out>) at util/qemu-thread-posix.c:245
>> #3 0x000002aa2d6803e4 in worker_thread (opaque=0x2aa2e1fbf40) at thread-pool.c:92
>> #4 0x000003ff971884c6 in start_thread () from /lib64/libpthread.so.0
>> #5 0x000003ff96802ec2 in thread_start () from /lib64/libc.so.6
>>
>> Thread 6 (Thread 0x3ff9497f910 (LWP 52850)):
>> #0 0x000003ff9718c50e in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
>> #1 0x000003ff96d19792 in g_cond_wait () from /lib64/libglib-2.0.so.0
>> #2 0x000002aa2d7165d2 in wait_for_trace_records_available () at trace/simple.c:147
>> #3 writeout_thread (opaque=<optimized out>) at trace/simple.c:165
>> #4 0x000003ff96cfa44c in g_thread_proxy () from /lib64/libglib-2.0.so.0
>> #5 0x000003ff971884c6 in start_thread () from /lib64/libpthread.so.0
>> #6 0x000003ff96802ec2 in thread_start () from /lib64/libc.so.6
>>
>> Thread 5 (Thread 0x3ff8efff910 (LWP 52855)):
>> #0 0x000003ff967f819a in ioctl () from /lib64/libc.so.6
>> #1 0x000002aa2d546f3e in kvm_vcpu_ioctl (cpu=cpu@entry=0x2aa2e239030, type=type@entry=44672)
>> at /usr/src/debug/qemu-2.5.50/kvm-all.c:1984
>> #2 0x000002aa2d54701e in kvm_cpu_exec (cpu=0x2aa2e239030) at /usr/src/debug/qemu-2.5.50/kvm-all.c:1834
>> #3 0x000002aa2d533cd6 in qemu_kvm_cpu_thread_fn (arg=<optimized out>) at /usr/src/debug/qemu-2.5.50/cpus.c:1056
>> #4 0x000003ff971884c6 in start_thread () from /lib64/libpthread.so.0
>> #5 0x000003ff96802ec2 in thread_start () from /lib64/libc.so.6
>>
>> Thread 4 (Thread 0x3ff951ff910 (LWP 52849)):
>> #0 0x000003ff967fcf56 in syscall () from /lib64/libc.so.6
>> #1 0x000002aa2d755a36 in futex_wait (val=<optimized out>, ev=<optimized out>) at util/qemu-thread-posix.c:292
>> #2 qemu_event_wait (ev=0x2aa2ddb5914 <rcu_call_ready_event>) at util/qemu-thread-posix.c:399
>> #3 0x000002aa2d765002 in call_rcu_thread (opaque=<optimized out>) at util/rcu.c:250
>> #4 0x000003ff971884c6 in start_thread () from /lib64/libpthread.so.0
>> #5 0x000003ff96802ec2 in thread_start () from /lib64/libc.so.6
>>
>> Thread 3 (Thread 0x3ff978e0bf0 (LWP 52845)):
>> #0 0x000003ff967f66e6 in ppoll () from /lib64/libc.so.6
>> #1 0x000002aa2d68928e in ppoll (__ss=0x0, __timeout=0x0, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/bits/poll2.h:77
>> #2 qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=-1) at qemu-timer.c:313
>> #3 0x000002aa2d688b02 in os_host_main_loop_wait (timeout=-1) at main-loop.c:251
>> #4 main_loop_wait (nonblocking=<optimized out>) at main-loop.c:505
>> #5 0x000002aa2d4faade in main_loop () at vl.c:1933
>> #6 main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4646
>>
>> Thread 2 (Thread 0x3ff8ffff910 (LWP 52851)):
>> #0 0x000003ff967f66e6 in ppoll () from /lib64/libc.so.6
>> #1 0x000002aa2d68928e in ppoll (__ss=0x0, __timeout=0x0, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/bits/poll2.h:77
>> #2 qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=-1) at qemu-timer.c:313
>> #3 0x000002aa2d68a788 in aio_poll (ctx=0x2aa2de77e00, blocking=<optimized out>) at aio-posix.c:453
>> #4 0x000002aa2d5b909c in iothread_run (opaque=0x2aa2de77220) at iothread.c:46
>> #5 0x000003ff971884c6 in start_thread () from /lib64/libpthread.so.0
>> #6 0x000003ff96802ec2 in thread_start () from /lib64/libc.so.6
>>
>> Thread 1 (Thread 0x3ff8f7ff910 (LWP 52854)):
>> #0 0x000003ff9673b650 in raise () from /lib64/libc.so.6
>> #1 0x000003ff9673ced8 in abort () from /lib64/libc.so.6
>> #2 0x000003ff96733666 in __assert_fail_base () from /lib64/libc.so.6
>> #3 0x000003ff967336f4 in __assert_fail () from /lib64/libc.so.6
>> #4 0x000002aa2d562608 in virtio_blk_handle_output (vdev=<optimized out>, vq=<optimized out>)
>> at /usr/src/debug/qemu-2.5.50/hw/block/virtio-blk.c:595
>
> Hmm, are you sure you used the newly compiled qemu, and not the one from
> our internal daily rpms?
>
Yes, I got the latest qemu master from tuxmaker, then built and
installed it on my box. Thanks.
Thread overview: 23+ messages
2016-03-24 16:15 [Qemu-devel] [PATCH 0/6] virtio: refactor host notifiers Cornelia Huck
2016-03-24 16:15 ` [Qemu-devel] [PATCH 1/6] virtio-bus: common ioeventfd infrastructure Cornelia Huck
2016-03-24 16:15 ` [Qemu-devel] [PATCH 2/6] virtio-bus: have callers tolerate new host notifier api Cornelia Huck
2016-03-24 16:15 ` [Qemu-devel] [PATCH 3/6] virtio-ccw: convert to ioeventfd callbacks Cornelia Huck
2016-03-24 16:15 ` [Qemu-devel] [PATCH 4/6] virtio-pci: " Cornelia Huck
2016-03-24 16:15 ` [Qemu-devel] [PATCH 5/6] virtio-mmio: " Cornelia Huck
2016-03-24 16:15 ` [Qemu-devel] [PATCH 6/6] virtio-bus: remove old set_host_notifier callback Cornelia Huck
2016-03-24 17:06 ` [Qemu-devel] [PATCH 0/6] virtio: refactor host notifiers Paolo Bonzini
2016-03-29 8:18 ` Cornelia Huck
2016-03-29 9:15 ` Paolo Bonzini
2016-03-25 9:52 ` Fam Zheng
2016-03-28 3:55 ` TU BO
2016-03-28 18:11 ` Paolo Bonzini
2016-03-29 9:14 ` tu bo
2016-03-29 11:45 ` Cornelia Huck
2016-03-29 13:50 ` Paolo Bonzini
2016-03-29 16:27 ` Christian Borntraeger
2016-03-31 2:37 ` tu bo
2016-03-31 5:47 ` tu bo
2016-03-29 11:54 ` Christian Borntraeger
2016-03-31 2:47 ` tu bo [this message]
2016-03-29 13:23 ` Christian Borntraeger
2016-03-29 13:38 ` Michael S. Tsirkin