qemu-devel.nongnu.org archive mirror
* [Qemu-devel] [PATCH] iothread: fix iothread hang when stop too soon
@ 2019-01-29  5:14 Peter Xu
  2019-01-29 14:44 ` Thomas Huth
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Peter Xu @ 2019-01-29  5:14 UTC (permalink / raw)
  To: qemu-devel
  Cc: peterx, Thomas Huth, Dr . David Alan Gilbert, Stefan Hajnoczi,
	Lukáš Doktor, Markus Armbruster, Eric Blake,
	Paolo Bonzini

Lukáš reported a hard-to-reproduce QMP iothread hang on s390 where
QEMU might hang at pthread_join() of the QMP monitor iothread before
quitting:

  Thread 1
  #0  0x000003ffad10932c in pthread_join
  #1  0x0000000109e95750 in qemu_thread_join
      at /home/thuth/devel/qemu/util/qemu-thread-posix.c:570
  #2  0x0000000109c95a1c in iothread_stop
  #3  0x0000000109bb0874 in monitor_cleanup
  #4  0x0000000109b55042 in main

While the iothread is still in the main loop:

  Thread 4
  #0  0x000003ffad0010e4 in ??
  #1  0x000003ffad553958 in g_main_context_iterate.isra.19
  #2  0x000003ffad553d90 in g_main_loop_run
  #3  0x0000000109c9585a in iothread_run
      at /home/thuth/devel/qemu/iothread.c:74
  #4  0x0000000109e94752 in qemu_thread_start
      at /home/thuth/devel/qemu/util/qemu-thread-posix.c:502
  #5  0x000003ffad10825a in start_thread
  #6  0x000003ffad00dcf2 in thread_start

IMHO it's because there's a race between the main thread and the
iothread when stopping the thread, in the following sequence:

    main thread                       iothread
    ===========                       ==============
                                      aio_poll()
    iothread_get_g_main_context
      set iothread->worker_context
    iothread_stop
      schedule iothread_stop_bh
                                        execute iothread_stop_bh [1]
                                          set iothread->running=false
                                          (main_loop is still NULL, so
                                           quitting the main loop is
                                           skipped.  Note: main_loop is
                                           NULL but worker_context is
                                           not!)
                                      atomic_read(&iothread->worker_context) [2]
                                        create main_loop object
                                        g_main_loop_run() [3]
    pthread_join() [4]

We can see that when iothread_stop_bh() executes at [1], main_loop may
still be NULL because it is only created after the first check of
worker_context later at [2].  The iothread will then hang in the main
loop [3], which in turn blocks the main thread at [4].
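
For reference, the two code paths involved look roughly like below (a
simplified sketch based on iothread.c; the thread-default context
handling, cleanup and ordering are elided and may not match the real
code exactly):

    /* Runs in the iothread */
    static void *iothread_run(void *opaque)
    {
        IOThread *iothread = opaque;

        while (iothread->running) {
            aio_poll(iothread->ctx, true);

            /* iothread_stop_bh() may have just cleared "running"... */
            if (atomic_read(&iothread->worker_context)) {
                /* ...but we still enter the GLib main loop here */
                iothread->main_loop =
                    g_main_loop_new(iothread->worker_context, TRUE);
                g_main_loop_run(iothread->main_loop); /* nobody quits it */
            }
        }
        return NULL;
    }

    /* Scheduled by iothread_stop() in the main thread; runs as a BH
     * inside the aio_poll() above */
    static void iothread_stop_bh(void *opaque)
    {
        IOThread *iothread = opaque;

        iothread->running = false;
        if (iothread->main_loop) {   /* still NULL at [1], so no quit */
            g_main_loop_quit(iothread->main_loop);
        }
    }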

The simple solution is to check the "running" variable again before
checking worker_context.

CC: Thomas Huth <thuth@redhat.com>
CC: Dr. David Alan Gilbert <dgilbert@redhat.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
CC: Lukáš Doktor <ldoktor@redhat.com>
CC: Markus Armbruster <armbru@redhat.com>
CC: Eric Blake <eblake@redhat.com>
CC: Paolo Bonzini <pbonzini@redhat.com>
Reported-by: Lukáš Doktor <ldoktor@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---

This hasn't been verified on the original s390 systems yet, but I can
reproduce the hang locally with this code snippet:

        IOThread *iothread = iothread_create("test", NULL);
        iothread_get_g_main_context(iothread);
        iothread_stop(iothread);

so I'm posting it for review now in case other users are hitting the
same issue.
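
In case it helps review, here is a rough sketch of how the snippet
could be wrapped into a standalone GLib test.  It is not part of this
patch; the includes, the init calls and the test name are my
assumptions and extra QEMU setup may be needed:

    #include "qemu/osdep.h"
    #include "qapi/error.h"
    #include "qemu/module.h"
    #include "qemu/main-loop.h"
    #include "sysemu/iothread.h"

    static void test_iothread_stop_early(void)
    {
        IOThread *iothread = iothread_create("test", &error_abort);

        /* Create worker_context first, then stop the iothread right
         * away; without the fix this can hang in pthread_join(). */
        iothread_get_g_main_context(iothread);
        iothread_stop(iothread);
        iothread_destroy(iothread);
    }

    int main(int argc, char **argv)
    {
        /* iothread_create() instantiates a QOM object */
        module_call_init(MODULE_INIT_QOM);
        qemu_init_main_loop(&error_abort);

        g_test_init(&argc, &argv, NULL);
        g_test_add_func("/iothread/stop-early", test_iothread_stop_early);
        return g_test_run();
    }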
---
 iothread.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/iothread.c b/iothread.c
index 2fb1cdf55d..e615b7ae52 100644
--- a/iothread.c
+++ b/iothread.c
@@ -63,7 +63,11 @@ static void *iothread_run(void *opaque)
     while (iothread->running) {
         aio_poll(iothread->ctx, true);
 
-        if (atomic_read(&iothread->worker_context)) {
+        /*
+         * We must check the running state again in case it was
+         * changed in the previous aio_poll()
+         */
+        if (iothread->running && atomic_read(&iothread->worker_context)) {
             GMainLoop *loop;
 
             g_main_context_push_thread_default(iothread->worker_context);
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 6+ messages in thread

* Re: [Qemu-devel] [PATCH] iothread: fix iothread hang when stop too soon
  2019-01-29  5:14 [Qemu-devel] [PATCH] iothread: fix iothread hang when stop too soon Peter Xu
@ 2019-01-29 14:44 ` Thomas Huth
  2019-01-30  2:50   ` Peter Xu
  2019-01-29 16:20 ` Markus Armbruster
  2019-01-30  3:28 ` Stefan Hajnoczi
  2 siblings, 1 reply; 6+ messages in thread
From: Thomas Huth @ 2019-01-29 14:44 UTC (permalink / raw)
  To: Peter Xu, qemu-devel
  Cc: Dr . David Alan Gilbert, Stefan Hajnoczi, Lukáš Doktor,
	Markus Armbruster, Eric Blake, Paolo Bonzini

On 2019-01-29 06:14, Peter Xu wrote:
> Lukáš reported a hard-to-reproduce QMP iothread hang on s390 where
> QEMU might hang at pthread_join() of the QMP monitor iothread before
> quitting:
> 
>   Thread 1
>   #0  0x000003ffad10932c in pthread_join
>   #1  0x0000000109e95750 in qemu_thread_join
>       at /home/thuth/devel/qemu/util/qemu-thread-posix.c:570
>   #2  0x0000000109c95a1c in iothread_stop
>   #3  0x0000000109bb0874 in monitor_cleanup
>   #4  0x0000000109b55042 in main
> 
> While the iothread is still in the main loop:
> 
>   Thread 4
>   #0  0x000003ffad0010e4 in ??
>   #1  0x000003ffad553958 in g_main_context_iterate.isra.19
>   #2  0x000003ffad553d90 in g_main_loop_run
>   #3  0x0000000109c9585a in iothread_run
>       at /home/thuth/devel/qemu/iothread.c:74
>   #4  0x0000000109e94752 in qemu_thread_start
>       at /home/thuth/devel/qemu/util/qemu-thread-posix.c:502
>   #5  0x000003ffad10825a in start_thread
>   #6  0x000003ffad00dcf2 in thread_start
> 
> IMHO it's because there's a race between the main thread and the
> iothread when stopping the thread, in the following sequence:
> 
>     main thread                       iothread
>     ===========                       ==============
>                                       aio_poll()
>     iothread_get_g_main_context
>       set iothread->worker_context
>     iothread_stop
>       schedule iothread_stop_bh
>                                         execute iothread_stop_bh [1]
>                                           set iothread->running=false
>                                           (main_loop is still NULL, so
>                                            quitting the main loop is
>                                            skipped.  Note: main_loop is
>                                            NULL but worker_context is
>                                            not!)
>                                       atomic_read(&iothread->worker_context) [2]
>                                         create main_loop object
>                                         g_main_loop_run() [3]
>     pthread_join() [4]
> 
> We can see that when iothread_stop_bh() executes at [1], main_loop may
> still be NULL because it is only created after the first check of
> worker_context later at [2].  The iothread will then hang in the main
> loop [3], which in turn blocks the main thread at [4].
> 
> The simple solution is to check the "running" variable again before
> checking worker_context.
> 
> CC: Thomas Huth <thuth@redhat.com>
> CC: Dr. David Alan Gilbert <dgilbert@redhat.com>
> CC: Stefan Hajnoczi <stefanha@redhat.com>
> CC: Lukáš Doktor <ldoktor@redhat.com>
> CC: Markus Armbruster <armbru@redhat.com>
> CC: Eric Blake <eblake@redhat.com>
> CC: Paolo Bonzini <pbonzini@redhat.com>
> Reported-by: Lukáš Doktor <ldoktor@redhat.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
> 
> This hasn't been verified on the original s390 systems yet, but I can
> reproduce the hang locally with this code snippet:
> 
>         IOThread *iothread = iothread_create("test", NULL);
>         iothread_get_g_main_context(iothread);
>         iothread_stop(iothread);
> 
> so I'm posting it for review now in case other users are hitting the
> same issue.
> ---
>  iothread.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/iothread.c b/iothread.c
> index 2fb1cdf55d..e615b7ae52 100644
> --- a/iothread.c
> +++ b/iothread.c
> @@ -63,7 +63,11 @@ static void *iothread_run(void *opaque)
>      while (iothread->running) {
>          aio_poll(iothread->ctx, true);
>  
> -        if (atomic_read(&iothread->worker_context)) {
> +        /*
> +         * We must check the running state again in case it was
> +         * changed in the previous aio_poll()
> +         */
> +        if (iothread->running && atomic_read(&iothread->worker_context)) {
>              GMainLoop *loop;
>  
>              g_main_context_push_thread_default(iothread->worker_context);
> 

I've been running this on s390x with Lukáš' reproducer for a while now,
and so far I haven't seen any hangs. Thus this seems to fix the issue
as far as I can tell, thanks!

Tested-by: Thomas Huth <thuth@redhat.com>

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [Qemu-devel] [PATCH] iothread: fix iothread hang when stop too soon
  2019-01-29  5:14 [Qemu-devel] [PATCH] iothread: fix iothread hang when stop too soon Peter Xu
  2019-01-29 14:44 ` Thomas Huth
@ 2019-01-29 16:20 ` Markus Armbruster
  2019-01-30  3:01   ` Peter Xu
  2019-01-30  3:28 ` Stefan Hajnoczi
  2 siblings, 1 reply; 6+ messages in thread
From: Markus Armbruster @ 2019-01-29 16:20 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Lukáš Doktor, Thomas Huth,
	Dr . David Alan Gilbert, Stefan Hajnoczi, Paolo Bonzini,
	Marc-André Lureau

Marc-André (cc'ed) recently fixed a deadlock (commit 34f1f3e06d8
"monitor: avoid potential dead-lock when cleaning up").  Looks like we
got more.

Peter Xu <peterx@redhat.com> writes:

> Lukáš reported a hard-to-reproduce QMP iothread hang on s390 where
> QEMU might hang at pthread_join() of the QMP monitor iothread before
> quitting:
>
>   Thread 1
>   #0  0x000003ffad10932c in pthread_join
>   #1  0x0000000109e95750 in qemu_thread_join
>       at /home/thuth/devel/qemu/util/qemu-thread-posix.c:570
>   #2  0x0000000109c95a1c in iothread_stop
>   #3  0x0000000109bb0874 in monitor_cleanup
>   #4  0x0000000109b55042 in main
>
> While the iothread is still in the main loop:
>
>   Thread 4
>   #0  0x000003ffad0010e4 in ??
>   #1  0x000003ffad553958 in g_main_context_iterate.isra.19
>   #2  0x000003ffad553d90 in g_main_loop_run
>   #3  0x0000000109c9585a in iothread_run
>       at /home/thuth/devel/qemu/iothread.c:74
>   #4  0x0000000109e94752 in qemu_thread_start
>       at /home/thuth/devel/qemu/util/qemu-thread-posix.c:502
>   #5  0x000003ffad10825a in start_thread
>   #6  0x000003ffad00dcf2 in thread_start
>
> IMHO it's because there's a race between the main thread and the
> iothread when stopping the thread, in the following sequence:
>
>     main thread                       iothread
>     ===========                       ==============
>                                       aio_poll()
>     iothread_get_g_main_context
>       set iothread->worker_context
>     iothread_stop
>       schedule iothread_stop_bh
>                                         execute iothread_stop_bh [1]
>                                           set iothread->running=false
>                                           (main_loop is still NULL, so
>                                            quitting the main loop is
>                                            skipped.  Note: main_loop is
>                                            NULL but worker_context is
>                                            not!)
>                                       atomic_read(&iothread->worker_context) [2]
>                                         create main_loop object
>                                         g_main_loop_run() [3]
>     pthread_join() [4]
>
> We can see that when iothread_stop_bh() executes at [1], main_loop may
> still be NULL because it is only created after the first check of
> worker_context later at [2].  The iothread will then hang in the main
> loop [3], which in turn blocks the main thread at [4].
>
> The simple solution is to check the "running" variable again before
> checking worker_context.
>
> CC: Thomas Huth <thuth@redhat.com>
> CC: Dr. David Alan Gilbert <dgilbert@redhat.com>
> CC: Stefan Hajnoczi <stefanha@redhat.com>
> CC: Lukáš Doktor <ldoktor@redhat.com>
> CC: Markus Armbruster <armbru@redhat.com>
> CC: Eric Blake <eblake@redhat.com>
> CC: Paolo Bonzini <pbonzini@redhat.com>
> Reported-by: Lukáš Doktor <ldoktor@redhat.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>
> This hasn't been verified on the original s390 systems yet, but I can
> reproduce the hang locally with this code snippet:
>
>         IOThread *iothread = iothread_create("test", NULL);
>         iothread_get_g_main_context(iothread);
>         iothread_stop(iothread);
>
> so I'm posting it for review now in case other users are hitting the
> same issue.
> ---
>  iothread.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/iothread.c b/iothread.c
> index 2fb1cdf55d..e615b7ae52 100644
> --- a/iothread.c
> +++ b/iothread.c
> @@ -63,7 +63,11 @@ static void *iothread_run(void *opaque)
>      while (iothread->running) {
>          aio_poll(iothread->ctx, true);
>  
> -        if (atomic_read(&iothread->worker_context)) {
> +        /*
> +         * We must check the running state again in case it was
> +         * changed in the previous aio_poll()
> +         */
> +        if (iothread->running && atomic_read(&iothread->worker_context)) {
>              GMainLoop *loop;
>  
>              g_main_context_push_thread_default(iothread->worker_context);

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [Qemu-devel] [PATCH] iothread: fix iothread hang when stop too soon
  2019-01-29 14:44 ` Thomas Huth
@ 2019-01-30  2:50   ` Peter Xu
  0 siblings, 0 replies; 6+ messages in thread
From: Peter Xu @ 2019-01-30  2:50 UTC (permalink / raw)
  To: Thomas Huth
  Cc: qemu-devel, Dr . David Alan Gilbert, Stefan Hajnoczi,
	Lukáš Doktor, Markus Armbruster, Eric Blake,
	Paolo Bonzini

On Tue, Jan 29, 2019 at 03:44:05PM +0100, Thomas Huth wrote:

[...]

> I've been running this on s390x with Lukáš' reproducer for a while now,
> and so far I haven't seen any hangs. Thus this seems to fix the issue
> as far as I can tell, thanks!
> 
> Tested-by: Thomas Huth <thuth@redhat.com>

Thanks for the quick follow up, Thomas!

-- 
Peter Xu

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [Qemu-devel] [PATCH] iothread: fix iothread hang when stop too soon
  2019-01-29 16:20 ` Markus Armbruster
@ 2019-01-30  3:01   ` Peter Xu
  0 siblings, 0 replies; 6+ messages in thread
From: Peter Xu @ 2019-01-30  3:01 UTC (permalink / raw)
  To: Markus Armbruster
  Cc: qemu-devel, Lukáš Doktor, Thomas Huth,
	Dr . David Alan Gilbert, Stefan Hajnoczi, Paolo Bonzini,
	Marc-André Lureau

On Tue, Jan 29, 2019 at 05:20:40PM +0100, Markus Armbruster wrote:
> Marc-André (cc'ed) recently fixed a deadlock (commit 34f1f3e06d8
> "monitor: avoid potential dead-lock when cleaning up").  Looks like we
> got more.

Yes, and a few tests too.  Hopefully this fallout can settle soon,
before the RCs.  Looks like there's still plenty of time for it to be
ready (soft freeze is March 19th).

Thanks,

-- 
Peter Xu

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [Qemu-devel] [PATCH] iothread: fix iothread hang when stop too soon
  2019-01-29  5:14 [Qemu-devel] [PATCH] iothread: fix iothread hang when stop too soon Peter Xu
  2019-01-29 14:44 ` Thomas Huth
  2019-01-29 16:20 ` Markus Armbruster
@ 2019-01-30  3:28 ` Stefan Hajnoczi
  2 siblings, 0 replies; 6+ messages in thread
From: Stefan Hajnoczi @ 2019-01-30  3:28 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Lukáš Doktor, Thomas Huth,
	Dr . David Alan Gilbert, Markus Armbruster, Stefan Hajnoczi,
	Paolo Bonzini

[-- Attachment #1: Type: text/plain, Size: 3550 bytes --]

On Tue, Jan 29, 2019 at 01:14:32PM +0800, Peter Xu wrote:
> Lukáš reported a hard-to-reproduce QMP iothread hang on s390 where
> QEMU might hang at pthread_join() of the QMP monitor iothread before
> quitting:
> 
>   Thread 1
>   #0  0x000003ffad10932c in pthread_join
>   #1  0x0000000109e95750 in qemu_thread_join
>       at /home/thuth/devel/qemu/util/qemu-thread-posix.c:570
>   #2  0x0000000109c95a1c in iothread_stop
>   #3  0x0000000109bb0874 in monitor_cleanup
>   #4  0x0000000109b55042 in main
> 
> While the iothread is still in the main loop:
> 
>   Thread 4
>   #0  0x000003ffad0010e4 in ??
>   #1  0x000003ffad553958 in g_main_context_iterate.isra.19
>   #2  0x000003ffad553d90 in g_main_loop_run
>   #3  0x0000000109c9585a in iothread_run
>       at /home/thuth/devel/qemu/iothread.c:74
>   #4  0x0000000109e94752 in qemu_thread_start
>       at /home/thuth/devel/qemu/util/qemu-thread-posix.c:502
>   #5  0x000003ffad10825a in start_thread
>   #6  0x000003ffad00dcf2 in thread_start
> 
> IMHO it's because there's a race between the main thread and the
> iothread when stopping the thread, in the following sequence:
> 
>     main thread                       iothread
>     ===========                       ==============
>                                       aio_poll()
>     iothread_get_g_main_context
>       set iothread->worker_context
>     iothread_stop
>       schedule iothread_stop_bh
>                                         execute iothread_stop_bh [1]
>                                           set iothread->running=false
>                                           (main_loop is still NULL, so
>                                            quitting the main loop is
>                                            skipped.  Note: main_loop is
>                                            NULL but worker_context is
>                                            not!)
>                                       atomic_read(&iothread->worker_context) [2]
>                                         create main_loop object
>                                         g_main_loop_run() [3]
>     pthread_join() [4]
> 
> We can see that when iothread_stop_bh() executes at [1], main_loop may
> still be NULL because it is only created after the first check of
> worker_context later at [2].  The iothread will then hang in the main
> loop [3], which in turn blocks the main thread at [4].
> 
> The simple solution is to check the "running" variable again before
> checking worker_context.
> 
> CC: Thomas Huth <thuth@redhat.com>
> CC: Dr. David Alan Gilbert <dgilbert@redhat.com>
> CC: Stefan Hajnoczi <stefanha@redhat.com>
> CC: Lukáš Doktor <ldoktor@redhat.com>
> CC: Markus Armbruster <armbru@redhat.com>
> CC: Eric Blake <eblake@redhat.com>
> CC: Paolo Bonzini <pbonzini@redhat.com>
> Reported-by: Lukáš Doktor <ldoktor@redhat.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
> 
> This hasn't been verified on the original s390 systems yet, but I can
> reproduce the hang locally with this code snippet:
> 
>         IOThread *iothread = iothread_create("test", NULL);
>         iothread_get_g_main_context(iothread);
>         iothread_stop(iothread);
> 
> so I'm posting it for review now in case other users are hitting the
> same issue.
> ---
>  iothread.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)

Thanks, applied to my block tree:
https://github.com/stefanha/qemu/commits/block

Stefan

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 455 bytes --]

^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2019-01-30  3:28 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-01-29  5:14 [Qemu-devel] [PATCH] iothread: fix iothread hang when stop too soon Peter Xu
2019-01-29 14:44 ` Thomas Huth
2019-01-30  2:50   ` Peter Xu
2019-01-29 16:20 ` Markus Armbruster
2019-01-30  3:01   ` Peter Xu
2019-01-30  3:28 ` Stefan Hajnoczi

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).