qemu-devel.nongnu.org archive mirror
* [Qemu-devel] [PATCH v3 0/4] block: fix 'savevm' hang with -object iothread
@ 2017-05-22 13:57 Stefan Hajnoczi
  2017-05-22 13:57 ` [Qemu-devel] [PATCH v3 1/4] block: count bdrv_co_rw_vmstate() requests Stefan Hajnoczi
                   ` (5 more replies)
  0 siblings, 6 replies; 13+ messages in thread
From: Stefan Hajnoczi @ 2017-05-22 13:57 UTC (permalink / raw)
  To: qemu-devel
  Cc: qemu-block, Paolo Bonzini, Kevin Wolf, Fam Zheng, Stefan Hajnoczi

v3:
 * Add missing bdrv_drain_all_end() in error code paths [Kevin]
v2:
 * New patch to use bdrv_drain_all_begin/end() in savevm/loadvm [Kevin]
   (All other patches unchanged)

The 'savevm' command hangs when -object iothread is used.  See patches for
details, but basically the vmstate read/write code didn't conform to the latest
block layer locking rules.

Stefan Hajnoczi (4):
  block: count bdrv_co_rw_vmstate() requests
  block: use BDRV_POLL_WHILE() in bdrv_rw_vmstate()
  migration: avoid recursive AioContext locking in save_vmstate()
  migration: use bdrv_drain_all_begin/end() instead bdrv_drain_all()

 block/io.c         | 21 +++++++++++++--------
 migration/savevm.c | 30 ++++++++++++++++++++++++++----
 2 files changed, 39 insertions(+), 12 deletions(-)

-- 
2.9.3


* [Qemu-devel] [PATCH v3 1/4] block: count bdrv_co_rw_vmstate() requests
  2017-05-22 13:57 [Qemu-devel] [PATCH v3 0/4] block: fix 'savevm' hang with -object iothread Stefan Hajnoczi
@ 2017-05-22 13:57 ` Stefan Hajnoczi
  2017-05-22 13:57 ` [Qemu-devel] [PATCH v3 2/4] block: use BDRV_POLL_WHILE() in bdrv_rw_vmstate() Stefan Hajnoczi
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Stefan Hajnoczi @ 2017-05-22 13:57 UTC (permalink / raw)
  To: qemu-devel
  Cc: qemu-block, Paolo Bonzini, Kevin Wolf, Fam Zheng, Stefan Hajnoczi

Call bdrv_inc/dec_in_flight() for vmstate reads/writes.  This seems
unnecessary at first glance because vmstate reads/writes are done
synchronously while the guest is stopped.  But we need the bdrv_wakeup()
in bdrv_dec_in_flight() so the main loop sees request completion.
Besides, it's cleaner to count vmstate reads/writes like ordinary
read/write requests.

The bdrv_wakeup() partially fixes a 'savevm' hang with -object iothread.
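
For illustration, here is a rough sketch of why the counter matters.  This is
simplified pseudo-C based on the helpers in block/io.c, not the verbatim
implementation:

    /* Simplified sketch: decrementing the in-flight counter is what wakes
     * up a main loop that is blocked in BDRV_POLL_WHILE(). */
    void bdrv_inc_in_flight(BlockDriverState *bs)
    {
        atomic_inc(&bs->in_flight);
    }

    void bdrv_dec_in_flight(BlockDriverState *bs)
    {
        atomic_dec(&bs->in_flight);
        bdrv_wakeup(bs);   /* kick BDRV_POLL_WHILE() so it re-checks its condition */
    }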

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 block/io.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/block/io.c b/block/io.c
index fdd7485..cc56e90 100644
--- a/block/io.c
+++ b/block/io.c
@@ -1988,17 +1988,24 @@ bdrv_co_rw_vmstate(BlockDriverState *bs, QEMUIOVector *qiov, int64_t pos,
                    bool is_read)
 {
     BlockDriver *drv = bs->drv;
+    int ret = -ENOTSUP;
+
+    bdrv_inc_in_flight(bs);
 
     if (!drv) {
-        return -ENOMEDIUM;
+        ret = -ENOMEDIUM;
     } else if (drv->bdrv_load_vmstate) {
-        return is_read ? drv->bdrv_load_vmstate(bs, qiov, pos)
-                       : drv->bdrv_save_vmstate(bs, qiov, pos);
+        if (is_read) {
+            ret = drv->bdrv_load_vmstate(bs, qiov, pos);
+        } else {
+            ret = drv->bdrv_save_vmstate(bs, qiov, pos);
+        }
     } else if (bs->file) {
-        return bdrv_co_rw_vmstate(bs->file->bs, qiov, pos, is_read);
+        ret = bdrv_co_rw_vmstate(bs->file->bs, qiov, pos, is_read);
     }
 
-    return -ENOTSUP;
+    bdrv_dec_in_flight(bs);
+    return ret;
 }
 
 static void coroutine_fn bdrv_co_rw_vmstate_entry(void *opaque)
-- 
2.9.3


* [Qemu-devel] [PATCH v3 2/4] block: use BDRV_POLL_WHILE() in bdrv_rw_vmstate()
  2017-05-22 13:57 [Qemu-devel] [PATCH v3 0/4] block: fix 'savevm' hang with -object iothread Stefan Hajnoczi
  2017-05-22 13:57 ` [Qemu-devel] [PATCH v3 1/4] block: count bdrv_co_rw_vmstate() requests Stefan Hajnoczi
@ 2017-05-22 13:57 ` Stefan Hajnoczi
  2017-05-22 13:57 ` [Qemu-devel] [PATCH v3 3/4] migration: avoid recursive AioContext locking in save_vmstate() Stefan Hajnoczi
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Stefan Hajnoczi @ 2017-05-22 13:57 UTC (permalink / raw)
  To: qemu-devel
  Cc: qemu-block, Paolo Bonzini, Kevin Wolf, Fam Zheng, Stefan Hajnoczi

Calling aio_poll() directly may have been fine previously, but this is
the future, man!  The difference between an aio_poll() loop and
BDRV_POLL_WHILE() is that BDRV_POLL_WHILE() releases the AioContext
around aio_poll().

This allows the IOThread to run fd handlers or BHs to complete the
request.  Failure to release the AioContext causes deadlocks.

Using BDRV_POLL_WHILE() partially fixes a 'savevm' hang with -object
iothread.
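
To make the difference concrete, a simplified sketch (the real macro in
include/block/block.h has more cases, e.g. when the caller already runs in
the IOThread):

    /* Old pattern: the AioContext stays locked across aio_poll(), so the
     * IOThread can never acquire it to complete the request. */
    while (data.ret == -EINPROGRESS) {
        aio_poll(bdrv_get_aio_context(bs), true);
    }

    /* Roughly what BDRV_POLL_WHILE(bs, cond) does when called from the main
     * loop for a bs that lives in an IOThread AioContext: */
    AioContext *ctx = bdrv_get_aio_context(bs);
    while (data.ret == -EINPROGRESS) {
        aio_context_release(ctx);                /* let the IOThread in      */
        aio_poll(qemu_get_aio_context(), true);  /* woken by bdrv_wakeup()   */
        aio_context_acquire(ctx);
    }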

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 block/io.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/block/io.c b/block/io.c
index cc56e90..f0041cd 100644
--- a/block/io.c
+++ b/block/io.c
@@ -2031,9 +2031,7 @@ bdrv_rw_vmstate(BlockDriverState *bs, QEMUIOVector *qiov, int64_t pos,
         Coroutine *co = qemu_coroutine_create(bdrv_co_rw_vmstate_entry, &data);
 
         bdrv_coroutine_enter(bs, co);
-        while (data.ret == -EINPROGRESS) {
-            aio_poll(bdrv_get_aio_context(bs), true);
-        }
+        BDRV_POLL_WHILE(bs, data.ret == -EINPROGRESS);
         return data.ret;
     }
 }
-- 
2.9.3


* [Qemu-devel] [PATCH v3 3/4] migration: avoid recursive AioContext locking in save_vmstate()
  2017-05-22 13:57 [Qemu-devel] [PATCH v3 0/4] block: fix 'savevm' hang with -object iothread Stefan Hajnoczi
  2017-05-22 13:57 ` [Qemu-devel] [PATCH v3 1/4] block: count bdrv_co_rw_vmstate() requests Stefan Hajnoczi
  2017-05-22 13:57 ` [Qemu-devel] [PATCH v3 2/4] block: use BDRV_POLL_WHILE() in bdrv_rw_vmstate() Stefan Hajnoczi
@ 2017-05-22 13:57 ` Stefan Hajnoczi
  2017-06-14 10:10   ` Pavel Butsykin
  2017-05-22 13:57 ` [Qemu-devel] [PATCH v3 4/4] migration: use bdrv_drain_all_begin/end() instead bdrv_drain_all() Stefan Hajnoczi
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 13+ messages in thread
From: Stefan Hajnoczi @ 2017-05-22 13:57 UTC (permalink / raw)
  To: qemu-devel
  Cc: qemu-block, Paolo Bonzini, Kevin Wolf, Fam Zheng, Stefan Hajnoczi

AioContext was designed to allow nested acquire/release calls.  It uses
a recursive mutex so callers don't need to worry about nesting...or so
we thought.

BDRV_POLL_WHILE() is used to wait for block I/O requests.  It releases
the AioContext temporarily around aio_poll().  This gives IOThreads a
chance to acquire the AioContext to process I/O completions.

It turns out that recursive locking and BDRV_POLL_WHILE() don't mix.
BDRV_POLL_WHILE() only releases the AioContext once, so the IOThread
will not be able to acquire the AioContext if it was acquired
multiple times.

Instead of trying to release AioContext n times in BDRV_POLL_WHILE(),
this patch simply avoids nested locking in save_vmstate().  It's the
simplest fix and we should step back to consider the big picture with
all the recent changes to block layer threading.

This patch is the final fix to solve 'savevm' hanging with -object
iothread.
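
A hypothetical illustration of the failure mode (the condition name below is
made up for the example; this is not code from the series):

    AioContext *ctx = bdrv_get_aio_context(bs);

    aio_context_acquire(ctx);    /* depth 1: e.g. taken by save_vmstate()   */
    aio_context_acquire(ctx);    /* depth 2: taken again by a callee        */

    /* BDRV_POLL_WHILE() releases ctx exactly once, so the recursive mutex
     * is still held at depth 1.  The IOThread cannot acquire ctx to finish
     * the request, and the condition never becomes false. */
    BDRV_POLL_WHILE(bs, vmstate_request_in_flight);    /* hangs */

    aio_context_release(ctx);
    aio_context_release(ctx);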

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 migration/savevm.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/migration/savevm.c b/migration/savevm.c
index f5e8194..3ca319f 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -2150,6 +2150,14 @@ int save_vmstate(const char *name, Error **errp)
         goto the_end;
     }
 
+    /* The bdrv_all_create_snapshot() call that follows acquires the AioContext
+     * for itself.  BDRV_POLL_WHILE() does not support nested locking because
+     * it only releases the lock once.  Therefore synchronous I/O will deadlock
+     * unless we release the AioContext before bdrv_all_create_snapshot().
+     */
+    aio_context_release(aio_context);
+    aio_context = NULL;
+
     ret = bdrv_all_create_snapshot(sn, bs, vm_state_size, &bs);
     if (ret < 0) {
         error_setg(errp, "Error while creating snapshot on '%s'",
@@ -2160,7 +2168,9 @@ int save_vmstate(const char *name, Error **errp)
     ret = 0;
 
  the_end:
-    aio_context_release(aio_context);
+    if (aio_context) {
+        aio_context_release(aio_context);
+    }
     if (saved_vm_running) {
         vm_start();
     }
-- 
2.9.3


* [Qemu-devel] [PATCH v3 4/4] migration: use bdrv_drain_all_begin/end() instead bdrv_drain_all()
  2017-05-22 13:57 [Qemu-devel] [PATCH v3 0/4] block: fix 'savevm' hang with -object iothread Stefan Hajnoczi
                   ` (2 preceding siblings ...)
  2017-05-22 13:57 ` [Qemu-devel] [PATCH v3 3/4] migration: avoid recursive AioContext locking in save_vmstate() Stefan Hajnoczi
@ 2017-05-22 13:57 ` Stefan Hajnoczi
  2017-05-22 17:40   ` Eric Blake
  2017-05-30 13:52 ` [Qemu-devel] [Qemu-block] [PATCH v3 0/4] block: fix 'savevm' hang with -object iothread Stefan Hajnoczi
  2017-06-12 13:47 ` Stefan Hajnoczi
  5 siblings, 1 reply; 13+ messages in thread
From: Stefan Hajnoczi @ 2017-05-22 13:57 UTC (permalink / raw)
  To: qemu-devel
  Cc: qemu-block, Paolo Bonzini, Kevin Wolf, Fam Zheng, Stefan Hajnoczi

blk/bdrv_drain_all() only takes effect for a single instant and then
resumes block jobs, guest devices, and other external clients like the
NBD server.  This can be handy when performing a synchronous drain
before terminating the program, for example.

Monitor commands usually need to quiesce I/O across an entire code
region so blk/bdrv_drain_all() is not suitable.  They must use
bdrv_drain_all_begin/end() to mark the region.  This prevents new I/O
requests from slipping in or worse - block jobs completing and modifying
the graph.

I audited other blk/bdrv_drain_all() callers but did not find anything
that needs a similar fix.  This patch fixes the savevm/loadvm commands.
Although I haven't encountered a real-world issue, this makes the code
safer.
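
As a minimal sketch of the pattern applied to savevm/loadvm (the helper name
is illustrative, not the actual code):

    int monitor_command_sketch(Error **errp)
    {
        int ret;

        bdrv_drain_all_begin();         /* quiesce guest devices, jobs, NBD */

        ret = do_the_actual_work(errp); /* hypothetical helper */
        if (ret < 0) {
            goto out;                   /* error paths must end the drain too */
        }

        ret = 0;
    out:
        bdrv_drain_all_end();           /* resume external I/O sources */
        return ret;
    }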

Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 migration/savevm.c | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/migration/savevm.c b/migration/savevm.c
index 3ca319f..c7c5ea5 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -2113,6 +2113,8 @@ int save_vmstate(const char *name, Error **errp)
     }
     vm_stop(RUN_STATE_SAVE_VM);
 
+    bdrv_drain_all_begin();
+
     aio_context_acquire(aio_context);
 
     memset(sn, 0, sizeof(*sn));
@@ -2171,6 +2173,9 @@ int save_vmstate(const char *name, Error **errp)
     if (aio_context) {
         aio_context_release(aio_context);
     }
+
+    bdrv_drain_all_end();
+
     if (saved_vm_running) {
         vm_start();
     }
@@ -2279,20 +2284,21 @@ int load_vmstate(const char *name, Error **errp)
     }
 
     /* Flush all IO requests so they don't interfere with the new state.  */
-    bdrv_drain_all();
+    bdrv_drain_all_begin();
 
     ret = bdrv_all_goto_snapshot(name, &bs);
     if (ret < 0) {
         error_setg(errp, "Error %d while activating snapshot '%s' on '%s'",
                      ret, name, bdrv_get_device_name(bs));
-        return ret;
+        goto err_drain;
     }
 
     /* restore the VM state */
     f = qemu_fopen_bdrv(bs_vm_state, 0);
     if (!f) {
         error_setg(errp, "Could not open VM state file");
-        return -EINVAL;
+        ret = -EINVAL;
+        goto err_drain;
     }
 
     qemu_system_reset(VMRESET_SILENT);
@@ -2303,6 +2309,8 @@ int load_vmstate(const char *name, Error **errp)
     qemu_fclose(f);
     aio_context_release(aio_context);
 
+    bdrv_drain_all_end();
+
     migration_incoming_state_destroy();
     if (ret < 0) {
         error_setg(errp, "Error %d while loading VM state", ret);
@@ -2310,6 +2318,10 @@ int load_vmstate(const char *name, Error **errp)
     }
 
     return 0;
+
+err_drain:
+    bdrv_drain_all_end();
+    return ret;
 }
 
 void vmstate_register_ram(MemoryRegion *mr, DeviceState *dev)
-- 
2.9.3


* Re: [Qemu-devel] [PATCH v3 4/4] migration: use bdrv_drain_all_begin/end() instead bdrv_drain_all()
  2017-05-22 13:57 ` [Qemu-devel] [PATCH v3 4/4] migration: use bdrv_drain_all_begin/end() instead bdrv_drain_all() Stefan Hajnoczi
@ 2017-05-22 17:40   ` Eric Blake
  0 siblings, 0 replies; 13+ messages in thread
From: Eric Blake @ 2017-05-22 17:40 UTC (permalink / raw)
  To: Stefan Hajnoczi, qemu-devel
  Cc: Kevin Wolf, Paolo Bonzini, Fam Zheng, qemu-block

On 05/22/2017 08:57 AM, Stefan Hajnoczi wrote:
> blk/bdrv_drain_all() only takes effect for a single instant and then
> resumes block jobs, guest devices, and other external clients like the
> NBD server.  This can be handy when performing a synchronous drain
> before terminating the program, for example.
> 
> Monitor commands usually need to quiesce I/O across an entire code
> region so blk/bdrv_drain_all() is not suitable.  They must use
> bdrv_drain_all_begin/end() to mark the region.  This prevents new I/O
> requests from slipping in or worse - block jobs completing and modifying
> the graph.
> 
> I audited other blk/bdrv_drain_all() callers but did not find anything
> that needs a similar fix.  This patch fixes the savevm/loadvm commands.
> Although I haven't encountered a real-world issue, this makes the code
> safer.
> 
> Suggested-by: Kevin Wolf <kwolf@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  migration/savevm.c | 18 +++++++++++++++---
>  1 file changed, 15 insertions(+), 3 deletions(-)
> 

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



* Re: [Qemu-devel] [Qemu-block] [PATCH v3 0/4] block: fix 'savevm' hang with -object iothread
  2017-05-22 13:57 [Qemu-devel] [PATCH v3 0/4] block: fix 'savevm' hang with -object iothread Stefan Hajnoczi
                   ` (3 preceding siblings ...)
  2017-05-22 13:57 ` [Qemu-devel] [PATCH v3 4/4] migration: use bdrv_drain_all_begin/end() instead bdrv_drain_all() Stefan Hajnoczi
@ 2017-05-30 13:52 ` Stefan Hajnoczi
  2017-06-12 13:47 ` Stefan Hajnoczi
  5 siblings, 0 replies; 13+ messages in thread
From: Stefan Hajnoczi @ 2017-05-30 13:52 UTC (permalink / raw)
  To: Kevin Wolf
  Cc: qemu-devel, Paolo Bonzini, Fam Zheng, qemu-block, Stefan Hajnoczi

On Mon, May 22, 2017 at 02:57:00PM +0100, Stefan Hajnoczi wrote:
> v3:
>  * Add missing bdrv_drain_all_end() in error code paths [Kevin]
> v2:
>  * New patch to use bdrv_drain_all_begin/end() in savevm/loadvm [Kevin]
>    (All other patches unchanged)
> 
> The 'savevm' command hangs when -object iothread is used.  See patches for
> details, but basically the vmstate read/write code didn't conform to the latest
> block layer locking rules.
> 
> Stefan Hajnoczi (4):
>   block: count bdrv_co_rw_vmstate() requests
>   block: use BDRV_POLL_WHILE() in bdrv_rw_vmstate()
>   migration: avoid recursive AioContext locking in save_vmstate()
>   migration: use bdrv_drain_all_begin/end() instead bdrv_drain_all()
> 
>  block/io.c         | 21 +++++++++++++--------
>  migration/savevm.c | 30 ++++++++++++++++++++++++++----
>  2 files changed, 39 insertions(+), 12 deletions(-)

Ping


* Re: [Qemu-devel] [Qemu-block] [PATCH v3 0/4] block: fix 'savevm' hang with -object iothread
  2017-05-22 13:57 [Qemu-devel] [PATCH v3 0/4] block: fix 'savevm' hang with -object iothread Stefan Hajnoczi
                   ` (4 preceding siblings ...)
  2017-05-30 13:52 ` [Qemu-devel] [Qemu-block] [PATCH v3 0/4] block: fix 'savevm' hang with -object iothread Stefan Hajnoczi
@ 2017-06-12 13:47 ` Stefan Hajnoczi
  2017-06-13 12:14   ` Kevin Wolf
  5 siblings, 1 reply; 13+ messages in thread
From: Stefan Hajnoczi @ 2017-06-12 13:47 UTC (permalink / raw)
  To: Kevin Wolf; +Cc: qemu-devel, Paolo Bonzini, Fam Zheng, qemu-block, stefanha

On Mon, May 22, 2017 at 02:57:00PM +0100, Stefan Hajnoczi wrote:
> v3:
>  * Add missing bdrv_drain_all_end() in error code paths [Kevin]
> v2:
>  * New patch to use bdrv_drain_all_begin/end() in savevm/loadvm [Kevin]
>    (All other patches unchanged)
> 
> The 'savevm' command hangs when -object iothread is used.  See patches for
> details, but basically the vmstate read/write code didn't conform to the latest
> block layer locking rules.
> 
> Stefan Hajnoczi (4):
>   block: count bdrv_co_rw_vmstate() requests
>   block: use BDRV_POLL_WHILE() in bdrv_rw_vmstate()
>   migration: avoid recursive AioContext locking in save_vmstate()
>   migration: use bdrv_drain_all_begin/end() instead bdrv_drain_all()
> 
>  block/io.c         | 21 +++++++++++++--------
>  migration/savevm.c | 30 ++++++++++++++++++++++++++----
>  2 files changed, 39 insertions(+), 12 deletions(-)

Ping ^ 2

Stefan


* Re: [Qemu-devel] [Qemu-block] [PATCH v3 0/4] block: fix 'savevm' hang with -object iothread
  2017-06-12 13:47 ` Stefan Hajnoczi
@ 2017-06-13 12:14   ` Kevin Wolf
  0 siblings, 0 replies; 13+ messages in thread
From: Kevin Wolf @ 2017-06-13 12:14 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: qemu-devel, Paolo Bonzini, Fam Zheng, qemu-block, stefanha

On 12.06.2017 at 15:47, Stefan Hajnoczi wrote:
> On Mon, May 22, 2017 at 02:57:00PM +0100, Stefan Hajnoczi wrote:
> > v3:
> >  * Add missing bdrv_drain_all_end() in error code paths [Kevin]
> > v2:
> >  * New patch to use bdrv_drain_all_begin/end() in savevm/loadvm [Kevin]
> >    (All other patches unchanged)
> > 
> > The 'savevm' command hangs when -object iothread is used.  See patches for
> > details, but basically the vmstate read/write code didn't conform to the latest
> > block layer locking rules.
> > 
> > Stefan Hajnoczi (4):
> >   block: count bdrv_co_rw_vmstate() requests
> >   block: use BDRV_POLL_WHILE() in bdrv_rw_vmstate()
> >   migration: avoid recursive AioContext locking in save_vmstate()
> >   migration: use bdrv_drain_all_begin/end() instead bdrv_drain_all()
> > 
> >  block/io.c         | 21 +++++++++++++--------
> >  migration/savevm.c | 30 ++++++++++++++++++++++++++----
> >  2 files changed, 39 insertions(+), 12 deletions(-)
> 
> Ping ^ 2

Thanks, applied to the block branch.

Kevin


* Re: [Qemu-devel] [PATCH v3 3/4] migration: avoid recursive AioContext locking in save_vmstate()
  2017-05-22 13:57 ` [Qemu-devel] [PATCH v3 3/4] migration: avoid recursive AioContext locking in save_vmstate() Stefan Hajnoczi
@ 2017-06-14 10:10   ` Pavel Butsykin
  2017-06-14 13:15     ` Pavel Butsykin
  0 siblings, 1 reply; 13+ messages in thread
From: Pavel Butsykin @ 2017-06-14 10:10 UTC (permalink / raw)
  To: Stefan Hajnoczi, qemu-devel
  Cc: Kevin Wolf, Paolo Bonzini, Fam Zheng, qemu-block


On 22.05.2017 16:57, Stefan Hajnoczi wrote:
> AioContext was designed to allow nested acquire/release calls.  It uses
> a recursive mutex so callers don't need to worry about nesting...or so
> we thought.
> 
> BDRV_POLL_WHILE() is used to wait for block I/O requests.  It releases
> the AioContext temporarily around aio_poll().  This gives IOThreads a
> chance to acquire the AioContext to process I/O completions.
> 
> It turns out that recursive locking and BDRV_POLL_WHILE() don't mix.
> BDRV_POLL_WHILE() only releases the AioContext once, so the IOThread
> will not be able to acquire the AioContext if it was acquired
> multiple times.
> 
> Instead of trying to release AioContext n times in BDRV_POLL_WHILE(),
> this patch simply avoids nested locking in save_vmstate().  It's the
> simplest fix and we should step back to consider the big picture with
> all the recent changes to block layer threading.
> 
> This patch is the final fix to solve 'savevm' hanging with -object
> iothread.

I see the same issue in external_snapshot_prepare():
     /* Acquire AioContext now so any threads operating on old_bs stop */
     state->aio_context = bdrv_get_aio_context(state->old_bs);
     aio_context_acquire(state->aio_context);
     bdrv_drained_begin(state->old_bs);

     if (!bdrv_is_inserted(state->old_bs)) {
         error_setg(errp, QERR_DEVICE_HAS_NO_MEDIUM, device);
         return;
     }

     if (bdrv_op_is_blocked(state->old_bs,
                            BLOCK_OP_TYPE_EXTERNAL_SNAPSHOT, errp)) {
         return;
     }

     if (!bdrv_is_read_only(state->old_bs)) {
         if (bdrv_flush(state->old_bs)) {      <---!!!

and at the moment of BDRV_POLL_WHILE(bs, flush_co.ret == NOT_DONE),
we hold at least two locks, so here is another deadlock.

> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> Reviewed-by: Eric Blake <eblake@redhat.com>
> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>   migration/savevm.c | 12 +++++++++++-
>   1 file changed, 11 insertions(+), 1 deletion(-)
> 
> diff --git a/migration/savevm.c b/migration/savevm.c
> index f5e8194..3ca319f 100644
> --- a/migration/savevm.c
> +++ b/migration/savevm.c
> @@ -2150,6 +2150,14 @@ int save_vmstate(const char *name, Error **errp)
>           goto the_end;
>       }
>   
> +    /* The bdrv_all_create_snapshot() call that follows acquires the AioContext
> +     * for itself.  BDRV_POLL_WHILE() does not support nested locking because
> +     * it only releases the lock once.  Therefore synchronous I/O will deadlock
> +     * unless we release the AioContext before bdrv_all_create_snapshot().
> +     */
> +    aio_context_release(aio_context);
> +    aio_context = NULL;
> +
>       ret = bdrv_all_create_snapshot(sn, bs, vm_state_size, &bs);
>       if (ret < 0) {
>           error_setg(errp, "Error while creating snapshot on '%s'",
> @@ -2160,7 +2168,9 @@ int save_vmstate(const char *name, Error **errp)
>       ret = 0;
>   
>    the_end:
> -    aio_context_release(aio_context);
> +    if (aio_context) {
> +        aio_context_release(aio_context);
> +    }
>       if (saved_vm_running) {
>           vm_start();
>       }
> 


* Re: [Qemu-devel] [PATCH v3 3/4] migration: avoid recursive AioContext locking in save_vmstate()
  2017-06-14 10:10   ` Pavel Butsykin
@ 2017-06-14 13:15     ` Pavel Butsykin
  2017-06-14 14:43       ` Kevin Wolf
  0 siblings, 1 reply; 13+ messages in thread
From: Pavel Butsykin @ 2017-06-14 13:15 UTC (permalink / raw)
  To: Stefan Hajnoczi, qemu-devel
  Cc: Kevin Wolf, Paolo Bonzini, Fam Zheng, qemu-block

On 14.06.2017 13:10, Pavel Butsykin wrote:
> 
> On 22.05.2017 16:57, Stefan Hajnoczi wrote:
>> AioContext was designed to allow nested acquire/release calls.  It uses
>> a recursive mutex so callers don't need to worry about nesting...or so
>> we thought.
>>
>> BDRV_POLL_WHILE() is used to wait for block I/O requests.  It releases
>> the AioContext temporarily around aio_poll().  This gives IOThreads a
>> chance to acquire the AioContext to process I/O completions.
>>
>> It turns out that recursive locking and BDRV_POLL_WHILE() don't mix.
>> BDRV_POLL_WHILE() only releases the AioContext once, so the IOThread
>> will not be able to acquire the AioContext if it was acquired
>> multiple times.
>>
>> Instead of trying to release AioContext n times in BDRV_POLL_WHILE(),
>> this patch simply avoids nested locking in save_vmstate().  It's the
>> simplest fix and we should step back to consider the big picture with
>> all the recent changes to block layer threading.
>>
>> This patch is the final fix to solve 'savevm' hanging with -object
>> iothread.
> 
> The same I see in external_snapshot_prepare():
>      /* Acquire AioContext now so any threads operating on old_bs stop */
>      state->aio_context = bdrv_get_aio_context(state->old_bs);
>      aio_context_acquire(state->aio_context);
>      bdrv_drained_begin(state->old_bs);
> 
>      if (!bdrv_is_inserted(state->old_bs)) {
>          error_setg(errp, QERR_DEVICE_HAS_NO_MEDIUM, device);
>          return;
>      }
> 
>      if (bdrv_op_is_blocked(state->old_bs,
>                             BLOCK_OP_TYPE_EXTERNAL_SNAPSHOT, errp)) {
>          return;
>      }
> 
>      if (!bdrv_is_read_only(state->old_bs)) {
>          if (bdrv_flush(state->old_bs)) {      <---!!!
> 
> and at the moment BDRV_POLL_WHILE(bs, flush_co.ret == NOT_DONE),
> we have at least two locks.. So here is another deadlock.

Sorry, this is a different kind of deadlock. In the external_snapshot case,
the deadlock can happen only if state->old_bs->aio_context == my_iothread->ctx,
because in that case aio_co_enter() always calls aio_co_schedule():

void bdrv_coroutine_enter(BlockDriverState *bs, Coroutine *co)
{
     aio_co_enter(bdrv_get_aio_context(bs), co);
}
...
void aio_co_enter(AioContext *ctx, struct Coroutine *co)
{
     if (ctx != qemu_get_current_aio_context()) {
         aio_co_schedule(ctx, co);
         return;
     }
...

But the iothread will not be able to run the coroutine, because
my_iothread->ctx is already locked here:

static void external_snapshot_prepare(BlkActionState *common,
...
     /* Acquire AioContext now so any threads operating on old_bs stop */
     state->aio_context = bdrv_get_aio_context(state->old_bs);
     aio_context_acquire(state->aio_context);
     bdrv_drained_begin(state->old_bs);
...

As a result, we can see the following deadlock:

Thread 1 (Thread 0x7f0da6af9bc0 (LWP 973294)):
#0  0x00007f0da49b5ebf in __GI_ppoll (fds=0x7f0da87434b0, nfds=1, timeout=<optimized out>, sigmask=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:56
#1  0x00007f0da71e8234 in qemu_poll_ns (fds=0x7f0da87434b0, nfds=1, timeout=-1) at util/qemu-timer.c:322
#2  0x00007f0da71eaf32 in aio_poll (ctx=0x7f0da870e7c0, blocking=true) at util/aio-posix.c:622
#3  0x00007f0da715ff9c in bdrv_flush (bs=0x7f0da876c270) at block/io.c:2397
#4  0x00007f0da6e98344 in external_snapshot_prepare (common=0x7f0da9d727b0, errp=0x7ffec3893f38) at blockdev.c:1686
#5  0x00007f0da6e99537 in qmp_transaction (dev_list=0x7f0da98abb40, has_props=false, props=0x7f0daa2788c0, errp=0x7ffec3893fc0) at blockdev.c:2205
#6  0x00007f0da6ebee21 in qmp_marshal_transaction (args=0x7f0da9e7b700, ret=0x7ffec3894030, errp=0x7ffec3894028) at qmp-marshal.c:5952
#7  0x00007f0da71d9783 in do_qmp_dispatch (cmds=0x7f0da785f940 <qmp_commands>, request=0x7f0da87b8400, errp=0x7ffec3894090) at qapi/qmp-dispatch.c:104
#8  0x00007f0da71d98bb in qmp_dispatch (cmds=0x7f0da785f940 <qmp_commands>, request=0x7f0da87b8400) at qapi/qmp-dispatch.c:131
#9  0x00007f0da6d6a0a1 in handle_qmp_command (parser=0x7f0da874c150, tokens=0x7f0da870e2e0) at /root/qemu/monitor.c:3834
#10 0x00007f0da71e0c62 in json_message_process_token (lexer=0x7f0da874c158, input=0x7f0da870e200, type=JSON_RCURLY, x=422, y=35) at qobject/json-streamer.c:105
#11 0x00007f0da720ba27 in json_lexer_feed_char (lexer=0x7f0da874c158, ch=125 '}', flush=false) at qobject/json-lexer.c:319
#12 0x00007f0da720bb67 in json_lexer_feed (lexer=0x7f0da874c158, buffer=0x7ffec3894320 "}", size=1) at qobject/json-lexer.c:369
#13 0x00007f0da71e0d02 in json_message_parser_feed (parser=0x7f0da874c150, buffer=0x7ffec3894320 "}", size=1) at qobject/json-streamer.c:124
#14 0x00007f0da6d6a263 in monitor_qmp_read (opaque=0x7f0da874c0d0, buf=0x7ffec3894320 "}", size=1) at /root/qemu/monitor.c:3876
#15 0x00007f0da717af53 in qemu_chr_be_write_impl (s=0x7f0da871c3f0, buf=0x7ffec3894320 "}", len=1) at chardev/char.c:167
#16 0x00007f0da717afbb in qemu_chr_be_write (s=0x7f0da871c3f0, buf=0x7ffec3894320 "}", len=1) at chardev/char.c:179
#17 0x00007f0da7183204 in tcp_chr_read (chan=0x7f0daa211680, cond=G_IO_IN, opaque=0x7f0da871c3f0) at chardev/char-socket.c:441
#18 0x00007f0da7196e93 in qio_channel_fd_source_dispatch (source=0x7f0da871cb80, callback=0x7f0da7183061 <tcp_chr_read>, user_data=0x7f0da871c3f0) at io/channel-watch.c:84
#19 0x00007f0da55fbd7a in g_main_dispatch (context=0x7f0da870eb90) at gmain.c:3152
#20 g_main_context_dispatch (context=0x7f0da870eb90) at gmain.c:3767
#21 0x00007f0da71e9224 in glib_pollfds_poll () at util/main-loop.c:213
#22 0x00007f0da71e9322 in os_host_main_loop_wait (timeout=811021610) at util/main-loop.c:261
#23 0x00007f0da71e93de in main_loop_wait (nonblocking=0) at util/main-loop.c:517
#24 0x00007f0da6ea6786 in main_loop () at vl.c:1918
#25 0x00007f0da6eae43d in main (argc=65, argv=0x7ffec38958b8, envp=0x7ffec3895ac8) at vl.c:4752

Thread 7 (Thread 0x7f0da04f6700 (LWP 973297)):
#0  __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
#1  0x00007f0da4c93d1d in _L_lock_840 () from /lib64/libpthread.so.0
#2  0x00007f0da4c93c3a in __GI___pthread_mutex_lock (mutex=0x7f0da871dc80) at pthread_mutex_lock.c:85
#3  0x00007f0da71ed62a in qemu_mutex_lock (mutex=0x7f0da871dc80) at util/qemu-thread-posix.c:61
#4  0x00007f0da71e6d1d in aio_context_acquire (ctx=0x7f0da871dc20) at util/async.c:489
#5  0x00007f0da71e6945 in co_schedule_bh_cb (opaque=0x7f0da871dc20) at util/async.c:390
#6  0x00007f0da71e60ee in aio_bh_call (bh=0x7f0da871dd50) at util/async.c:90
#7  0x00007f0da71e6185 in aio_bh_poll (ctx=0x7f0da871dc20) at util/async.c:118
#8  0x00007f0da71eb212 in aio_poll (ctx=0x7f0da871dc20, blocking=true) at util/aio-posix.c:682
#9  0x00007f0da6e9e1ec in iothread_run (opaque=0x7f0da871da90) at iothread.c:59
#10 0x00007f0da4c91dc5 in start_thread (arg=0x7f0da04f6700) at pthread_create.c:308
#11 0x00007f0da49c073d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Thread 1 is waiting for the iothread (Thread 7) to execute the coroutine
(bdrv_flush_co_entry), but the iothread is blocked on the lock held by
Thread 1.

Maybe we should take the lock only if state->aio_context ==
qemu_get_current_aio_context().

>> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
>> Reviewed-by: Eric Blake <eblake@redhat.com>
>> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
>> ---
>>   migration/savevm.c | 12 +++++++++++-
>>   1 file changed, 11 insertions(+), 1 deletion(-)
>>
>> diff --git a/migration/savevm.c b/migration/savevm.c
>> index f5e8194..3ca319f 100644
>> --- a/migration/savevm.c
>> +++ b/migration/savevm.c
>> @@ -2150,6 +2150,14 @@ int save_vmstate(const char *name, Error **errp)
>>           goto the_end;
>>       }
>> +    /* The bdrv_all_create_snapshot() call that follows acquires the 
>> AioContext
>> +     * for itself.  BDRV_POLL_WHILE() does not support nested locking 
>> because
>> +     * it only releases the lock once.  Therefore synchronous I/O 
>> will deadlock
>> +     * unless we release the AioContext before 
>> bdrv_all_create_snapshot().
>> +     */
>> +    aio_context_release(aio_context);
>> +    aio_context = NULL;
>> +
>>       ret = bdrv_all_create_snapshot(sn, bs, vm_state_size, &bs);
>>       if (ret < 0) {
>>           error_setg(errp, "Error while creating snapshot on '%s'",
>> @@ -2160,7 +2168,9 @@ int save_vmstate(const char *name, Error **errp)
>>       ret = 0;
>>    the_end:
>> -    aio_context_release(aio_context);
>> +    if (aio_context) {
>> +        aio_context_release(aio_context);
>> +    }
>>       if (saved_vm_running) {
>>           vm_start();
>>       }
>>
> 


* Re: [Qemu-devel] [PATCH v3 3/4] migration: avoid recursive AioContext locking in save_vmstate()
  2017-06-14 13:15     ` Pavel Butsykin
@ 2017-06-14 14:43       ` Kevin Wolf
  2017-06-15 13:33         ` Pavel Butsykin
  0 siblings, 1 reply; 13+ messages in thread
From: Kevin Wolf @ 2017-06-14 14:43 UTC (permalink / raw)
  To: Pavel Butsykin
  Cc: Stefan Hajnoczi, qemu-devel, Paolo Bonzini, Fam Zheng, qemu-block

On 14.06.2017 at 15:15, Pavel Butsykin wrote:
> On 14.06.2017 13:10, Pavel Butsykin wrote:
> >
> >On 22.05.2017 16:57, Stefan Hajnoczi wrote:
> >>AioContext was designed to allow nested acquire/release calls.  It uses
> >>a recursive mutex so callers don't need to worry about nesting...or so
> >>we thought.
> >>
> >>BDRV_POLL_WHILE() is used to wait for block I/O requests.  It releases
> >>the AioContext temporarily around aio_poll().  This gives IOThreads a
> >>chance to acquire the AioContext to process I/O completions.
> >>
> >>It turns out that recursive locking and BDRV_POLL_WHILE() don't mix.
> >>BDRV_POLL_WHILE() only releases the AioContext once, so the IOThread
> >>will not be able to acquire the AioContext if it was acquired
> >>multiple times.
> >>
> >>Instead of trying to release AioContext n times in BDRV_POLL_WHILE(),
> >>this patch simply avoids nested locking in save_vmstate().  It's the
> >>simplest fix and we should step back to consider the big picture with
> >>all the recent changes to block layer threading.
> >>
> >>This patch is the final fix to solve 'savevm' hanging with -object
> >>iothread.
> >
> >The same I see in external_snapshot_prepare():
> >[...]
> >and at the moment BDRV_POLL_WHILE(bs, flush_co.ret == NOT_DONE),
> >we have at least two locks.. So here is another deadlock.
> 
> Sorry, here different kind of deadlock. In external_snapshot case, the
> deadlock can happen only if state->old_bs->aio_context == my_iothread->ctx,
> because in this case the aio_co_enter() always calls aio_co_schedule():

Can you please write a qemu-iotests case for any of the deadlocks that
we're seeing? Stefan, we could also use one for the bug fixed in this
series.

Kevin


* Re: [Qemu-devel] [PATCH v3 3/4] migration: avoid recursive AioContext locking in save_vmstate()
  2017-06-14 14:43       ` Kevin Wolf
@ 2017-06-15 13:33         ` Pavel Butsykin
  0 siblings, 0 replies; 13+ messages in thread
From: Pavel Butsykin @ 2017-06-15 13:33 UTC (permalink / raw)
  To: Kevin Wolf
  Cc: Stefan Hajnoczi, qemu-devel, Paolo Bonzini, Fam Zheng, qemu-block

On 14.06.2017 17:43, Kevin Wolf wrote:
> Am 14.06.2017 um 15:15 hat Pavel Butsykin geschrieben:
>> On 14.06.2017 13:10, Pavel Butsykin wrote:
>>>
>>> On 22.05.2017 16:57, Stefan Hajnoczi wrote:
>>>> AioContext was designed to allow nested acquire/release calls.  It uses
>>>> a recursive mutex so callers don't need to worry about nesting...or so
>>>> we thought.
>>>>
>>>> BDRV_POLL_WHILE() is used to wait for block I/O requests.  It releases
>>>> the AioContext temporarily around aio_poll().  This gives IOThreads a
>>>> chance to acquire the AioContext to process I/O completions.
>>>>
>>>> It turns out that recursive locking and BDRV_POLL_WHILE() don't mix.
>>>> BDRV_POLL_WHILE() only releases the AioContext once, so the IOThread
>>>> will not be able to acquire the AioContext if it was acquired
>>>> multiple times.
>>>>
>>>> Instead of trying to release AioContext n times in BDRV_POLL_WHILE(),
>>>> this patch simply avoids nested locking in save_vmstate().  It's the
>>>> simplest fix and we should step back to consider the big picture with
>>>> all the recent changes to block layer threading.
>>>>
>>>> This patch is the final fix to solve 'savevm' hanging with -object
>>>> iothread.
>>>
>>> The same I see in external_snapshot_prepare():
>>> [...]
>>> and at the moment BDRV_POLL_WHILE(bs, flush_co.ret == NOT_DONE),
>>> we have at least two locks.. So here is another deadlock.
>>
>> Sorry, here different kind of deadlock. In external_snapshot case, the
>> deadlock can happen only if state->old_bs->aio_context == my_iothread->ctx,
>> because in this case the aio_co_enter() always calls aio_co_schedule():
> 
> Can you please write qemu-iotests case for any deadlock case that we're
> seeing? Stefan, we could also use one for the bug fixed in this series.

It's test 085; you only need to enable the iothread (patch attached).
# ./check -qcow2 085 -iothread
...
+Timeout waiting for return on handle 0
Failures: 085
Failed 1 of 1 tests

The timeout is due to the deadlock.


Actually the deadlock happens for the same reason :D

A recursive lock occurs when the transaction contains more than one action:
void qmp_transaction(TransactionActionList *dev_list,
...
     /* We don't do anything in this loop that commits us to the operations */
     while (NULL != dev_entry) {
...
         state->ops->prepare(state, &local_err);
         if (local_err) {
             error_propagate(errp, local_err);
             goto delete_and_fail;
         }
     }

     QSIMPLEQ_FOREACH(state, &snap_bdrv_states, entry) {
         if (state->ops->commit) {
             state->ops->commit(state);

There the context lock is acquired in state->ops->prepare() but released in
state->ops->commit() (at bdrv_reopen_multiple()). And when a transaction has
two or more actions, we will see a recursive lock.
Unfortunately I have no idea how cheap it is to fix this.

> Kevin
> 

[-- Attachment #2: 0001-qemu-iotests-add-iothread-option.patch --]
[-- Type: text/x-patch, Size: 3713 bytes --]

From 674132232f94c1db8015ed780ba84f49fb0fd2bc Mon Sep 17 00:00:00 2001
From: Pavel Butsykin <pbutsykin@virtuozzo.com>
Date: Thu, 15 Jun 2017 15:42:26 +0300
Subject: [PATCH] qemu-iotests: add -iothread option

Signed-off-by: Pavel Butsykin <pbutsykin@virtuozzo.com>
---
 tests/qemu-iotests/085         | 9 ++++++++-
 tests/qemu-iotests/common      | 7 +++++++
 tests/qemu-iotests/common.qemu | 9 +++++++--
 3 files changed, 22 insertions(+), 3 deletions(-)

diff --git a/tests/qemu-iotests/085 b/tests/qemu-iotests/085
index b97adcd8db..7b4419deeb 100755
--- a/tests/qemu-iotests/085
+++ b/tests/qemu-iotests/085
@@ -131,7 +131,14 @@ echo === Running QEMU ===
 echo
 
 qemu_comm_method="qmp"
-_launch_qemu -drive file="${TEST_IMG}.1",if=virtio -drive file="${TEST_IMG}.2",if=virtio
+if [ "${IOTHREAD_QEMU}" = "y" ]; then
+    _launch_qemu -drive file="${TEST_IMG}.1",if=none,id=virtio0 \
+    -device virtio-blk-pci,iothread=iothread1,drive=virtio0 \
+    -drive file="${TEST_IMG}.2",if=none,id=virtio1 \
+    -device virtio-blk-pci,iothread=iothread1,drive=virtio1
+else
+    _launch_qemu -drive file="${TEST_IMG}.1",if=virtio -drive file="${TEST_IMG}.2",if=virtio
+fi
 h=$QEMU_HANDLE
 
 echo
diff --git a/tests/qemu-iotests/common b/tests/qemu-iotests/common
index f2a7199c4b..fffbf39d55 100644
--- a/tests/qemu-iotests/common
+++ b/tests/qemu-iotests/common
@@ -53,6 +53,7 @@ export QEMU_IO_OPTIONS=""
 export CACHEMODE_IS_DEFAULT=true
 export QEMU_OPTIONS="-nodefaults -machine accel=qtest"
 export VALGRIND_QEMU=
+export IOTHREAD_QEMU=
 export IMGKEYSECRET=
 export IMGOPTSSYNTAX=false
 
@@ -165,6 +166,7 @@ image protocol options
 other options
     -xdiff              graphical mode diff
     -nocache            use O_DIRECT on backing file
+    -iothread           enable iothread
     -misalign           misalign memory allocations
     -n                  show me, do not run tests
     -o options          -o options to pass to qemu-img create/convert
@@ -297,6 +299,11 @@ testlist options
             xpand=false
             ;;
 
+        -iothread)
+            IOTHREAD_QEMU='y'
+            xpand=false
+            ;;
+
         -g)        # -g group ... pick from group file
             group=true
             xpand=false
diff --git a/tests/qemu-iotests/common.qemu b/tests/qemu-iotests/common.qemu
index 7a78a00999..230898ab43 100644
--- a/tests/qemu-iotests/common.qemu
+++ b/tests/qemu-iotests/common.qemu
@@ -139,6 +139,7 @@ function _launch_qemu()
     local comm=
     local fifo_out=
     local fifo_in=
+    local iothread=
 
     if (shopt -s nocasematch; [[ "${qemu_comm_method}" == "monitor" ]])
     then
@@ -148,6 +149,10 @@ function _launch_qemu()
         comm="-monitor none -qmp stdio"
     fi
 
+    if [ "${IOTHREAD_QEMU}" = "y" ]; then
+        iothread="--enable-kvm -object iothread,id=iothread1"
+    fi
+
     fifo_out=${QEMU_FIFO_OUT}_${_QEMU_HANDLE}
     fifo_in=${QEMU_FIFO_IN}_${_QEMU_HANDLE}
     mkfifo "${fifo_out}"
@@ -155,12 +160,12 @@ function _launch_qemu()
 
     if [ -z "$keep_stderr" ]; then
         QEMU_NEED_PID='y'\
-        ${QEMU} -nographic -serial none ${comm} "${@}" >"${fifo_out}" \
+        ${QEMU} ${iothread} -nographic -serial none ${comm} "${@}" >"${fifo_out}" \
                                                        2>&1 \
                                                        <"${fifo_in}" &
     elif [ "$keep_stderr" = "y" ]; then
         QEMU_NEED_PID='y'\
-        ${QEMU} -nographic -serial none ${comm} "${@}" >"${fifo_out}" \
+        ${QEMU} ${iothread} -nographic -serial none ${comm} "${@}" >"${fifo_out}" \
                                                        <"${fifo_in}" &
     else
         exit 1
-- 
2.13.0


