From: Paolo Bonzini <pbonzini@redhat.com>
To: "Denis V. Lunev" <den@openvz.org>
Cc: Amit Shah <amit.shah@redhat.com>,
Juan Quintela <quintela@redhat.com>,
qemu-devel@nongnu.org, Anna Melekhova <annam@virtuozzo.com>
Subject: Re: [Qemu-devel] [PATCH 1/1] migration: fix deadlock
Date: Mon, 28 Sep 2015 13:55:10 +0200
Message-ID: <56092A9E.30200@redhat.com>
In-Reply-To: <1443440518-4384-1-git-send-email-den@openvz.org>

On 28/09/2015 13:41, Denis V. Lunev wrote:
> Release the QEMU global mutex before calling synchronize_rcu().
> synchronize_rcu() waits for all readers to finish their critical
> sections. There is at least one critical section in which we try
> to take the QGM: it is in address_space_rw(), where
> prepare_mmio_access() tries to acquire the QGM.
>
> Both functions (migration_end() and migration_bitmap_extend())
> are called from the main thread, which holds the QGM.
>
> Thus there is a race condition that ends up in a deadlock:
>
>   main thread                           working thread
>   Lock QGM                                    |
>      |                            Call KVM_EXIT_IO handler
>      |                                        |
>      |                      Open RCU reader's critical section
>   Migration cleanup bh                        |
>      |                                        |
>   synchronize_rcu() is                        |
>   waiting for readers                         |
>      |                  prepare_mmio_access() is waiting for the QGM
>       \                                      /
>                        deadlock
>
> The patch changes the bitmap freeing from a direct g_free() after
> synchronize_rcu() to a deferred free via call_rcu().
>
> Signed-off-by: Denis V. Lunev <den@openvz.org>
> Reported-by: Igor Redko <redkoi@virtuozzo.com>
> Tested-by: Igor Redko <redkoi@virtuozzo.com>
> CC: Anna Melekhova <annam@virtuozzo.com>
> CC: Juan Quintela <quintela@redhat.com>
> CC: Amit Shah <amit.shah@redhat.com>
> CC: Paolo Bonzini <pbonzini@redhat.com>
> CC: Wen Congyang <wency@cn.fujitsu.com>
> ---
> migration/ram.c | 44 +++++++++++++++++++++++++++-----------------
> 1 file changed, 27 insertions(+), 17 deletions(-)
>
> diff --git a/migration/ram.c b/migration/ram.c
> index 7f007e6..e7c5bcf 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -221,12 +221,16 @@ static RAMBlock *last_seen_block;
> /* This is the last block from where we have sent data */
> static RAMBlock *last_sent_block;
> static ram_addr_t last_offset;
> -static unsigned long *migration_bitmap;
> static QemuMutex migration_bitmap_mutex;
> static uint64_t migration_dirty_pages;
> static uint32_t last_version;
> static bool ram_bulk_stage;
>
> +static struct BitmapRcu {
> + struct rcu_head rcu;
> + unsigned long *bmap;
> +} *migration_bitmap_rcu;
> +
> struct CompressParam {
> bool start;
> bool done;
> @@ -508,7 +512,7 @@ ram_addr_t migration_bitmap_find_and_reset_dirty(MemoryRegion *mr,
>
> unsigned long next;
>
> - bitmap = atomic_rcu_read(&migration_bitmap);
> + bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
> if (ram_bulk_stage && nr > base) {
> next = nr + 1;
> } else {
> @@ -526,7 +530,7 @@ ram_addr_t migration_bitmap_find_and_reset_dirty(MemoryRegion *mr,
> static void migration_bitmap_sync_range(ram_addr_t start, ram_addr_t length)
> {
> unsigned long *bitmap;
> - bitmap = atomic_rcu_read(&migration_bitmap);
> + bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
> migration_dirty_pages +=
> cpu_physical_memory_sync_dirty_bitmap(bitmap, start, length);
> }
> @@ -1024,17 +1028,22 @@ void free_xbzrle_decoded_buf(void)
> xbzrle_decoded_buf = NULL;
> }
>
> +static void migration_bitmap_free(struct BitmapRcu *bmap)
> +{
> + g_free(bmap->bmap);
> + g_free(bmap);
> +}
> +
> static void migration_end(void)
> {
> /* caller have hold iothread lock or is in a bh, so there is
> * no writing race against this migration_bitmap
> */
> - unsigned long *bitmap = migration_bitmap;
> - atomic_rcu_set(&migration_bitmap, NULL);
> + struct BitmapRcu *bitmap = migration_bitmap_rcu;
> + atomic_rcu_set(&migration_bitmap_rcu, NULL);
> if (bitmap) {
> memory_global_dirty_log_stop();
> - synchronize_rcu();
> - g_free(bitmap);
> + call_rcu(bitmap, migration_bitmap_free, rcu);
> }
>
> XBZRLE_cache_lock();
> @@ -1070,9 +1079,10 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
> /* called in qemu main thread, so there is
> * no writing race against this migration_bitmap
> */
> - if (migration_bitmap) {
> - unsigned long *old_bitmap = migration_bitmap, *bitmap;
> - bitmap = bitmap_new(new);
> + if (migration_bitmap_rcu) {
> + struct BitmapRcu *old_bitmap = migration_bitmap_rcu, *bitmap;
> + bitmap = g_new(struct BitmapRcu, 1);
> + bitmap->bmap = bitmap_new(new);
>
> /* prevent migration_bitmap content from being set bit
> * by migration_bitmap_sync_range() at the same time.
> @@ -1080,13 +1090,12 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
> * at the same time.
> */
> qemu_mutex_lock(&migration_bitmap_mutex);
> - bitmap_copy(bitmap, old_bitmap, old);
> - bitmap_set(bitmap, old, new - old);
> - atomic_rcu_set(&migration_bitmap, bitmap);
> + bitmap_copy(bitmap->bmap, old_bitmap->bmap, old);
> + bitmap_set(bitmap->bmap, old, new - old);
> + atomic_rcu_set(&migration_bitmap_rcu, bitmap);
> qemu_mutex_unlock(&migration_bitmap_mutex);
> migration_dirty_pages += new - old;
> - synchronize_rcu();
> - g_free(old_bitmap);
> + call_rcu(old_bitmap, migration_bitmap_free, rcu);
> }
> }
>
> @@ -1145,8 +1154,9 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
> reset_ram_globals();
>
> ram_bitmap_pages = last_ram_offset() >> TARGET_PAGE_BITS;
> - migration_bitmap = bitmap_new(ram_bitmap_pages);
> - bitmap_set(migration_bitmap, 0, ram_bitmap_pages);
> + migration_bitmap_rcu = g_new(struct BitmapRcu, 1);
> + migration_bitmap_rcu->bmap = bitmap_new(ram_bitmap_pages);
> + bitmap_set(migration_bitmap_rcu->bmap, 0, ram_bitmap_pages);
>
> /*
> * Count the total number of pages used by ram blocks not including any
>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
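
For reference, the pattern the patch adopts is the usual RCU deferred-free
idiom (see docs/rcu.txt in the QEMU tree). The sketch below is illustrative
only: it assumes QEMU's RCU helpers (rcu_read_lock()/rcu_read_unlock(),
atomic_rcu_read()/atomic_rcu_set(), call_rcu(), struct rcu_head), and the
DirtyState structure and dirty_state_* names are hypothetical, not code
from the patch. The point it shows is that the writer publishes the new
pointer and lets call_rcu() reclaim the old one after the grace period,
so it never has to wait for readers while holding the QEMU global mutex.

/* Sketch only -- assumes it is built inside a QEMU source tree so that
 * the RCU and bitmap helpers are available.  Names below (DirtyState,
 * dirty_state_*) are hypothetical, not part of the patch. */
#include <glib.h>
#include "qemu/rcu.h"      /* rcu_read_lock(), call_rcu(), struct rcu_head */
#include "qemu/atomic.h"   /* atomic_rcu_read(), atomic_rcu_set() */
#include "qemu/bitmap.h"   /* bitmap_new() */
#include "qemu/bitops.h"   /* test_bit() */

/* RCU-managed wrapper, mirroring struct BitmapRcu in the patch.  The
 * rcu_head must stay at offset 0 for QEMU's call_rcu() macro. */
struct DirtyState {
    struct rcu_head rcu;
    unsigned long *bmap;
};

static struct DirtyState *dirty_state;   /* updated only under the QGM */

/* Reader side: runs inside an RCU read-side critical section.  Such a
 * reader may later block on the QGM (as prepare_mmio_access() does in
 * the deadlock above), so the writer must never wait for readers while
 * holding that mutex. */
static bool dirty_state_test(unsigned long page)
{
    bool dirty = false;
    struct DirtyState *s;

    rcu_read_lock();
    s = atomic_rcu_read(&dirty_state);
    if (s) {
        dirty = test_bit(page, s->bmap);
    }
    rcu_read_unlock();
    return dirty;
}

/* Reclamation callback, invoked by the RCU thread once all readers that
 * could still see the old pointer have left their critical sections. */
static void dirty_state_free(struct DirtyState *s)
{
    g_free(s->bmap);
    g_free(s);
}

/* Writer side: publish a replacement and defer the free.  call_rcu()
 * returns immediately, so this is safe with the QGM held, unlike the
 * old synchronize_rcu() + g_free() sequence. */
static void dirty_state_replace(unsigned long npages)
{
    struct DirtyState *new_state = g_new(struct DirtyState, 1);
    struct DirtyState *old = dirty_state;

    new_state->bmap = bitmap_new(npages);
    atomic_rcu_set(&dirty_state, new_state);
    if (old) {
        call_rcu(old, dirty_state_free, rcu);
    }
}

Compared with synchronize_rcu() + g_free(), the only cost is that the old
bitmap lingers until the grace period ends, so peak memory use is slightly
higher; in exchange the writer never blocks, which is what removes the
deadlock.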