From: Yonit Halperin <yhalperi@redhat.com>
To: "Daniel P. Berrange" <berrange@redhat.com>
Cc: spice-devel@lists.freedesktop.org, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [Spice-devel] Possible SPICE/QEMU deadlock on fast client reconnect
Date: Sun, 23 Oct 2011 09:37:59 +0200
Message-ID: <4EA3C457.2060500@redhat.com>
In-Reply-To: <20111021152627.GF23627@redhat.com>
Hi,
Sounds like https://bugzilla.redhat.com/show_bug.cgi?id=746950 and
https://bugs.freedesktop.org/show_bug.cgi?id=41858
Alon's patch should solve it:
http://lists.freedesktop.org/archives/spice-devel/2011-September/005369.html
Cheers,
Yonit.
On 10/21/2011 05:26 PM, Daniel P. Berrange wrote:
> In testing my patches for 'add_client' support with SPICE, I noticed
> a deadlock in the QEMU/SPICE code. It only happens if I close a SPICE
> client and then immediately reconnect within about 1 second. If I
> wait a couple of seconds before reconnecting the SPICE client I don't
> see the deadlock.
>
> I'm using QEMU & SPICE git master, and GDB has this to say:
>
> (gdb) thread apply all bt
>
> Thread 3 (Thread 0x7f269e142700 (LWP 17233)):
> #0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:165
> #1 0x00000000004e80e9 in qemu_cond_wait (cond=<optimized out>, mutex=<optimized out>)
> at qemu-thread-posix.c:113
> #2 0x0000000000556469 in qemu_kvm_wait_io_event (env=<optimized out>)
> at /home/berrange/src/virt/qemu/cpus.c:626
> #3 qemu_kvm_cpu_thread_fn (arg=0x29217a0) at /home/berrange/src/virt/qemu/cpus.c:661
> #4 0x0000003a83207b31 in start_thread (arg=0x7f269e142700) at pthread_create.c:305
> #5 0x0000003a82edfd2d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115
>
> Thread 2 (Thread 0x7f26822fc700 (LWP 17234)):
> #0 __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:140
> #1 0x0000003a83209ad6 in _L_lock_752 () from /lib64/libpthread.so.0
> #2 0x0000003a832099d7 in __pthread_mutex_lock (mutex=0x1182f60) at pthread_mutex_lock.c:65
> #3 0x00000000004e7ec9 in qemu_mutex_lock (mutex=<optimized out>) at qemu-thread-posix.c:54
> #4 0x0000000000507df5 in channel_event (event=3, info=0x2a3c610) at ui/spice-core.c:234
> #5 0x00007f269f77be87 in reds_stream_free (s=0x2a3c590) at reds.c:4073
> #6 0x00007f269f75b64f in red_channel_client_disconnect (rcc=0x7f267c069ec0) at red_channel.c:961
> #7 0x00007f269f75a131 in red_peer_handle_incoming (handler=0x7f267c06a5a0, stream=0x2a3c590)
> at red_channel.c:150
> #8 red_channel_client_receive (rcc=0x7f267c069ec0) at red_channel.c:158
> #9 0x00007f269f761fbc in handle_channel_events (in_listener=0x7f267c06a5f8, events=<optimized out>)
> at red_worker.c:9566
> #10 0x00007f269f776672 in red_worker_main (arg=<optimized out>) at red_worker.c:10813
> #11 0x0000003a83207b31 in start_thread (arg=0x7f26822fc700) at pthread_create.c:305
> #12 0x0000003a82edfd2d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115
>
> Thread 1 (Thread 0x7f269f72e9c0 (LWP 17232)):
> #0 0x0000003a8320e01d in read () at ../sysdeps/unix/syscall-template.S:82
> #1 0x00007f269f75daaa in receive_data (n=4, in_buf=0x7fffe7a5a02c, fd=18) at red_worker.h:150
> #2 read_message (message=0x7fffe7a5a02c, fd=18) at red_worker.h:163
> #3 red_dispatcher_disconnect_cursor_peer (rcc=0x7f267c0c0f60) at red_dispatcher.c:157
> #4 0x00007f269f75c20d in red_client_destroy (client=0x2a35400) at red_channel.c:1189
> #5 0x00007f269f778335 in reds_client_disconnect (client=0x2a35400) at reds.c:624
> #6 0x00007f269f75a131 in red_peer_handle_incoming (handler=0x2a35b50, stream=0x2b037d0) at red_channel.c:150
> #7 red_channel_client_receive (rcc=0x2a35470) at red_channel.c:158
> #8 0x00007f269f75b1e8 in red_channel_client_event (fd=<optimized out>, event=<optimized out>,
> data=<optimized out>) at red_channel.c:774
> #9 0x000000000045e561 in fd_trampoline (chan=<optimized out>, cond=<optimized out>, opaque=<optimized out>)
> at iohandler.c:97
> #10 0x0000003a84e427ed in g_main_dispatch (context=0x27fc510) at gmain.c:2441
> #11 g_main_context_dispatch (context=0x27fc510) at gmain.c:3014
> #12 0x00000000004c3da3 in glib_select_poll (err=false, xfds=0x7fffe7a5a2e0, wfds=0x7fffe7a5a260, rfds=
> 0x7fffe7a5a1e0) at /home/berrange/src/virt/qemu/vl.c:1495
> #13 main_loop_wait (nonblocking=<optimized out>) at /home/berrange/src/virt/qemu/vl.c:1541
> #14 0x000000000040fdd1 in main_loop () at /home/berrange/src/virt/qemu/vl.c:1570
> #15 main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>)
> at /home/berrange/src/virt/qemu/vl.c:3563
> (gdb)
>
>
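Reading the two traces together, the hang looks like a classic two-thread
deadlock: thread 1 (the main loop) appears to be holding the global QEMU
mutex while red_dispatcher_disconnect_cursor_peer() sits in read_message(),
waiting for the red_worker to acknowledge the disconnect over the dispatcher
pipe; thread 2 (red_worker_main) cannot send that acknowledgement, because
channel_event() in ui/spice-core.c first tries to take the very mutex the
main thread is holding. Below is a minimal, self-contained C sketch of that
pattern; the names (global_mutex, to_worker, from_worker, worker_thread) are
invented for illustration, and this is not the actual QEMU/SPICE code.

/*
 * deadlock-sketch.c: the lock-ordering pattern the traces appear to show.
 * Thread A takes a global mutex, then blocks on a synchronous read from a
 * worker pipe; the worker needs the same mutex before it can write the
 * reply, so both threads block forever.
 *
 * Build: gcc -pthread deadlock-sketch.c -o deadlock-sketch
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t global_mutex = PTHREAD_MUTEX_INITIALIZER;
static int to_worker[2];    /* main -> worker: "disconnect" request */
static int from_worker[2];  /* worker -> main: completion ack       */

static void *worker_thread(void *arg)
{
    char req;
    (void)arg;

    /* Wait for a request from the main thread (cf. red_worker_main). */
    read(to_worker[0], &req, 1);

    /*
     * Before acking, report a "channel event", which needs the global
     * mutex (cf. channel_event() in ui/spice-core.c).  The main thread
     * still holds it, so the worker blocks here.
     */
    pthread_mutex_lock(&global_mutex);
    write(from_worker[1], "A", 1);          /* never reached */
    pthread_mutex_unlock(&global_mutex);
    return NULL;
}

int main(void)
{
    pthread_t worker;
    char ack;

    pipe(to_worker);
    pipe(from_worker);
    pthread_create(&worker, NULL, worker_thread, NULL);

    /* The main loop dispatches events with the global mutex held ... */
    pthread_mutex_lock(&global_mutex);

    /*
     * ... and then issues a synchronous "disconnect cursor peer" style
     * request, waiting for the ack (cf. read_message() in red_worker.h).
     * The ack never comes, because the worker is waiting for the mutex
     * this thread holds.
     */
    write(to_worker[1], "D", 1);
    alarm(5);                               /* exit via SIGALRM instead of hanging forever */
    read(from_worker[0], &ack, 1);          /* deadlock: blocks until the alarm fires */

    pthread_mutex_unlock(&global_mutex);
    pthread_join(worker, NULL);
    puts("no deadlock (not expected with this ordering)");
    return 0;
}

Run it and the process stalls for five seconds and is then killed by
SIGALRM; that is the same shape of hang the backtraces show, minus the
watchdog.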