qemu-devel.nongnu.org archive mirror
* [Qemu-devel] [PATCH 1/1] migration: fix deadlock
@ 2015-09-24 12:53 Denis V. Lunev
  2015-09-25  1:21 ` Wen Congyang
  0 siblings, 1 reply; 11+ messages in thread
From: Denis V. Lunev @ 2015-09-24 12:53 UTC (permalink / raw)
  Cc: Amit Shah, Igor Redko, Juan Quintela, qemu-devel, Denis V. Lunev

From: Igor Redko <redkoi@virtuozzo.com>

Release the QEMU global mutex before calling synchronize_rcu().
synchronize_rcu() waits for all readers to finish their critical
sections. There is at least one critical section in which we try
to take the QGM (the critical section is in address_space_rw(),
where prepare_mmio_access() tries to acquire the QGM).

Both functions (migration_end() and migration_bitmap_extend())
are called from the main thread, which holds the QGM.

Thus there is a race condition that ends up in deadlock:
main thread             working thread
Lock QGM                |
|             Call KVM_EXIT_IO handler
|                       |
|        Open RCU reader's critical section
Migration cleanup bh    |
|                       |
synchronize_rcu() is    |
waiting for readers     |
|            prepare_mmio_access() is waiting for the QGM
  \                   /
         deadlock

The patch simply releases the QGM before calling synchronize_rcu().
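
For illustration, the two sides of the deadlock look roughly like this
(a minimal sketch, not the actual QEMU code paths; the helper names are
real, the control flow is simplified and 'mr' is a placeholder):

    /* main thread: runs the migration cleanup bh with the QGM held */
    qemu_mutex_lock_iothread();   /* QGM taken                        */
    synchronize_rcu();            /* blocks until every RCU reader    */
                                  /* has left its critical section    */

    /* vCPU (working) thread: handling KVM_EXIT_IO */
    rcu_read_lock();              /* RCU critical section opened      */
    prepare_mmio_access(mr);      /* tries to take the QGM and blocks */
                                  /* behind the main thread: deadlock */
    rcu_read_unlock();            /* never reached                    */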

Signed-off-by: Igor Redko <redkoi@virtuozzo.com>
Reviewed-by: Anna Melekhova <annam@virtuozzo.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Juan Quintela <quintela@redhat.com>
CC: Amit Shah <amit.shah@redhat.com>
---
 migration/ram.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/migration/ram.c b/migration/ram.c
index 7f007e6..d01febc 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1028,12 +1028,16 @@ static void migration_end(void)
 {
     /* the caller has to hold the iothread lock or be in a bh, so there
      * is no writing race against this migration_bitmap,
+     * but RCU is used not only for migration_bitmap, so we should
+     * release the QGM or we get a deadlock.
      */
     unsigned long *bitmap = migration_bitmap;
     atomic_rcu_set(&migration_bitmap, NULL);
     if (bitmap) {
         memory_global_dirty_log_stop();
+        qemu_mutex_unlock_iothread();
         synchronize_rcu();
+        qemu_mutex_lock_iothread();
         g_free(bitmap);
     }
 
@@ -1085,7 +1089,9 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
         atomic_rcu_set(&migration_bitmap, bitmap);
         qemu_mutex_unlock(&migration_bitmap_mutex);
         migration_dirty_pages += new - old;
+        qemu_mutex_unlock_iothread();
         synchronize_rcu();
+        qemu_mutex_lock_iothread();
         g_free(old_bitmap);
     }
 }
-- 
2.1.4


* Re: [Qemu-devel] [PATCH 1/1] migration: fix deadlock
  2015-09-24 12:53 Denis V. Lunev
@ 2015-09-25  1:21 ` Wen Congyang
  2015-09-25  8:03   ` Denis V. Lunev
  0 siblings, 1 reply; 11+ messages in thread
From: Wen Congyang @ 2015-09-25  1:21 UTC (permalink / raw)
  To: Denis V. Lunev; +Cc: Amit Shah, Igor Redko, qemu-devel, Juan Quintela

On 09/24/2015 08:53 PM, Denis V. Lunev wrote:
> From: Igor Redko <redkoi@virtuozzo.com>
> 
> Release the QEMU global mutex before calling synchronize_rcu().
> synchronize_rcu() waits for all readers to finish their critical
> sections. There is at least one critical section in which we try
> to take the QGM (the critical section is in address_space_rw(),
> where prepare_mmio_access() tries to acquire the QGM).
>
> Both functions (migration_end() and migration_bitmap_extend())
> are called from the main thread, which holds the QGM.
>
> Thus there is a race condition that ends up in deadlock:
> main thread             working thread
> Lock QGM                |
> |             Call KVM_EXIT_IO handler
> |                       |
> |        Open RCU reader's critical section
> Migration cleanup bh    |
> |                       |
> synchronize_rcu() is    |
> waiting for readers     |
> |            prepare_mmio_access() is waiting for the QGM
>   \                   /
>          deadlock
>
> The patch simply releases the QGM before calling synchronize_rcu().
> 
> Signed-off-by: Igor Redko <redkoi@virtuozzo.com>
> Reviewed-by: Anna Melekhova <annam@virtuozzo.com>
> Signed-off-by: Denis V. Lunev <den@openvz.org>
> CC: Juan Quintela <quintela@redhat.com>
> CC: Amit Shah <amit.shah@redhat.com>
> ---
>  migration/ram.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 7f007e6..d01febc 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -1028,12 +1028,16 @@ static void migration_end(void)
>  {
>      /* the caller has to hold the iothread lock or be in a bh, so there
>       * is no writing race against this migration_bitmap,
> +     * but RCU is used not only for migration_bitmap, so we should
> +     * release the QGM or we get a deadlock.
>       */
>      unsigned long *bitmap = migration_bitmap;
>      atomic_rcu_set(&migration_bitmap, NULL);
>      if (bitmap) {
>          memory_global_dirty_log_stop();
> +        qemu_mutex_unlock_iothread();
>          synchronize_rcu();
> +        qemu_mutex_lock_iothread();

migration_end() can be called in two cases:
1. migration completed
2. migration is cancelled

In case 1, you should not unlock the iothread, otherwise the VM's state
may be changed unexpectedly.

>          g_free(bitmap);
>      }
>  
> @@ -1085,7 +1089,9 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
>          atomic_rcu_set(&migration_bitmap, bitmap);
>          qemu_mutex_unlock(&migration_bitmap_mutex);
>          migration_dirty_pages += new - old;
> +        qemu_mutex_unlock_iothread();
>          synchronize_rcu();
> +        qemu_mutex_lock_iothread();

Hmm, I think it is OK to unlock iothread here

>          g_free(old_bitmap);
>      }
>  }
> 


* Re: [Qemu-devel] [PATCH 1/1] migration: fix deadlock
  2015-09-25  1:21 ` Wen Congyang
@ 2015-09-25  8:03   ` Denis V. Lunev
  2015-09-25  8:23     ` Wen Congyang
  0 siblings, 1 reply; 11+ messages in thread
From: Denis V. Lunev @ 2015-09-25  8:03 UTC (permalink / raw)
  To: Wen Congyang; +Cc: Amit Shah, Igor Redko, qemu-devel, Juan Quintela

On 09/25/2015 04:21 AM, Wen Congyang wrote:
> On 09/24/2015 08:53 PM, Denis V. Lunev wrote:
>> From: Igor Redko <redkoi@virtuozzo.com>
>>
>> Release the QEMU global mutex before calling synchronize_rcu().
>> synchronize_rcu() waits for all readers to finish their critical
>> sections. There is at least one critical section in which we try
>> to take the QGM (the critical section is in address_space_rw(),
>> where prepare_mmio_access() tries to acquire the QGM).
>>
>> Both functions (migration_end() and migration_bitmap_extend())
>> are called from the main thread, which holds the QGM.
>>
>> Thus there is a race condition that ends up in deadlock:
>> main thread             working thread
>> Lock QGM                |
>> |             Call KVM_EXIT_IO handler
>> |                       |
>> |        Open RCU reader's critical section
>> Migration cleanup bh    |
>> |                       |
>> synchronize_rcu() is    |
>> waiting for readers     |
>> |            prepare_mmio_access() is waiting for the QGM
>>   \                   /
>>          deadlock
>>
>> The patch simply releases the QGM before calling synchronize_rcu().
>>
>> Signed-off-by: Igor Redko <redkoi@virtuozzo.com>
>> Reviewed-by: Anna Melekhova <annam@virtuozzo.com>
>> Signed-off-by: Denis V. Lunev <den@openvz.org>
>> CC: Juan Quintela <quintela@redhat.com>
>> CC: Amit Shah <amit.shah@redhat.com>
>> ---
>>   migration/ram.c | 6 ++++++
>>   1 file changed, 6 insertions(+)
>>
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 7f007e6..d01febc 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -1028,12 +1028,16 @@ static void migration_end(void)
>>   {
>>       /* the caller has to hold the iothread lock or be in a bh, so there
>>        * is no writing race against this migration_bitmap,
>> +     * but RCU is used not only for migration_bitmap, so we should
>> +     * release the QGM or we get a deadlock.
>>        */
>>       unsigned long *bitmap = migration_bitmap;
>>       atomic_rcu_set(&migration_bitmap, NULL);
>>       if (bitmap) {
>>           memory_global_dirty_log_stop();
>> +        qemu_mutex_unlock_iothread();
>>           synchronize_rcu();
>> +        qemu_mutex_lock_iothread();
> migration_end() can be called in two cases:
> 1. migration completed
> 2. migration is cancelled
>
> In case 1, you should not unlock the iothread, otherwise the VM's state
> may be changed unexpectedly.

Sorry, but there is no very good choice here. We should either
unlock, or not call synchronize_rcu(), which is also an option.

Otherwise the rework would have to be much more substantial.

Den

>>           g_free(bitmap);
>>       }
>>   
>> @@ -1085,7 +1089,9 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
>>           atomic_rcu_set(&migration_bitmap, bitmap);
>>           qemu_mutex_unlock(&migration_bitmap_mutex);
>>           migration_dirty_pages += new - old;
>> +        qemu_mutex_unlock_iothread();
>>           synchronize_rcu();
>> +        qemu_mutex_lock_iothread();
> Hmm, I think it is OK to unlock iothread here
>
>>           g_free(old_bitmap);
>>       }
>>   }
>>


* Re: [Qemu-devel] [PATCH 1/1] migration: fix deadlock
  2015-09-25  8:03   ` Denis V. Lunev
@ 2015-09-25  8:23     ` Wen Congyang
  2015-09-29 15:32       ` Igor Redko
  0 siblings, 1 reply; 11+ messages in thread
From: Wen Congyang @ 2015-09-25  8:23 UTC (permalink / raw)
  To: Denis V. Lunev; +Cc: Amit Shah, Igor Redko, qemu-devel, Juan Quintela

On 09/25/2015 04:03 PM, Denis V. Lunev wrote:
> On 09/25/2015 04:21 AM, Wen Congyang wrote:
>> On 09/24/2015 08:53 PM, Denis V. Lunev wrote:
>>> From: Igor Redko <redkoi@virtuozzo.com>
>>>
>>> Release the QEMU global mutex before calling synchronize_rcu().
>>> synchronize_rcu() waits for all readers to finish their critical
>>> sections. There is at least one critical section in which we try
>>> to take the QGM (the critical section is in address_space_rw(),
>>> where prepare_mmio_access() tries to acquire the QGM).
>>>
>>> Both functions (migration_end() and migration_bitmap_extend())
>>> are called from the main thread, which holds the QGM.
>>>
>>> Thus there is a race condition that ends up in deadlock:
>>> main thread             working thread
>>> Lock QGM                |
>>> |             Call KVM_EXIT_IO handler
>>> |                       |
>>> |        Open RCU reader's critical section
>>> Migration cleanup bh    |
>>> |                       |
>>> synchronize_rcu() is    |
>>> waiting for readers     |
>>> |            prepare_mmio_access() is waiting for the QGM
>>>   \                   /
>>>          deadlock
>>>
>>> The patch simply releases the QGM before calling synchronize_rcu().
>>>
>>> Signed-off-by: Igor Redko <redkoi@virtuozzo.com>
>>> Reviewed-by: Anna Melekhova <annam@virtuozzo.com>
>>> Signed-off-by: Denis V. Lunev <den@openvz.org>
>>> CC: Juan Quintela <quintela@redhat.com>
>>> CC: Amit Shah <amit.shah@redhat.com>
>>> ---
>>>   migration/ram.c | 6 ++++++
>>>   1 file changed, 6 insertions(+)
>>>
>>> diff --git a/migration/ram.c b/migration/ram.c
>>> index 7f007e6..d01febc 100644
>>> --- a/migration/ram.c
>>> +++ b/migration/ram.c
>>> @@ -1028,12 +1028,16 @@ static void migration_end(void)
>>>   {
>>>       /* the caller has to hold the iothread lock or be in a bh, so there
>>>        * is no writing race against this migration_bitmap,
>>> +     * but RCU is used not only for migration_bitmap, so we should
>>> +     * release the QGM or we get a deadlock.
>>>        */
>>>       unsigned long *bitmap = migration_bitmap;
>>>       atomic_rcu_set(&migration_bitmap, NULL);
>>>       if (bitmap) {
>>>           memory_global_dirty_log_stop();
>>> +        qemu_mutex_unlock_iothread();
>>>           synchronize_rcu();
>>> +        qemu_mutex_lock_iothread();
>> migration_end() can be called in two cases:
>> 1. migration completed
>> 2. migration is cancelled
>>
>> In case 1, you should not unlock the iothread, otherwise the VM's state
>> may be changed unexpectedly.
> 
> Sorry, but there is no very good choice here. We should either
> unlock, or not call synchronize_rcu(), which is also an option.
> 
> Otherwise the rework would have to be much more substantial.

I can't reproduce this bug, but according to your description, the bug
only exists in case 2. Is that right?

> 
> Den
> 
>>>           g_free(bitmap);
>>>       }
>>>   @@ -1085,7 +1089,9 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
>>>           atomic_rcu_set(&migration_bitmap, bitmap);
>>>           qemu_mutex_unlock(&migration_bitmap_mutex);
>>>           migration_dirty_pages += new - old;
>>> +        qemu_mutex_unlock_iothread();
>>>           synchronize_rcu();
>>> +        qemu_mutex_lock_iothread();
>> Hmm, I think it is OK to unlock iothread here
>>
>>>           g_free(old_bitmap);
>>>       }
>>>   }
>>>
> 
> .
> 


* [Qemu-devel] [PATCH 1/1] migration: fix deadlock
@ 2015-09-28 11:41 Denis V. Lunev
  2015-09-28 11:55 ` Paolo Bonzini
                   ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Denis V. Lunev @ 2015-09-28 11:41 UTC (permalink / raw)
  Cc: Juan Quintela, qemu-devel, Anna Melekhova, Paolo Bonzini,
	Amit Shah, Denis V. Lunev

Release the QEMU global mutex before calling synchronize_rcu().
synchronize_rcu() waits for all readers to finish their critical
sections. There is at least one critical section in which we try
to take the QGM (the critical section is in address_space_rw(),
where prepare_mmio_access() tries to acquire the QGM).

Both functions (migration_end() and migration_bitmap_extend())
are called from the main thread, which holds the QGM.

Thus there is a race condition that ends up in deadlock:
main thread             working thread
Lock QGM                |
|             Call KVM_EXIT_IO handler
|                       |
|        Open RCU reader's critical section
Migration cleanup bh    |
|                       |
synchronize_rcu() is    |
waiting for readers     |
|            prepare_mmio_access() is waiting for the QGM
  \                   /
         deadlock

The patch changes the bitmap freeing from a direct g_free() after
synchronize_rcu() to freeing inside call_rcu().
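
The resulting idiom (shown in full in the diff below) embeds an
rcu_head in a small wrapper so that freeing can be deferred to an RCU
callback instead of blocking the caller:

    static struct BitmapRcu {
        struct rcu_head rcu;      /* required by call_rcu()           */
        unsigned long *bmap;
    } *migration_bitmap_rcu;

    static void migration_bitmap_free(struct BitmapRcu *bmap)
    {
        g_free(bmap->bmap);
        g_free(bmap);
    }

    /* non-blocking: the callback runs after the next grace period */
    call_rcu(bitmap, migration_bitmap_free, rcu);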

Signed-off-by: Denis V. Lunev <den@openvz.org>
Reported-by: Igor Redko <redkoi@virtuozzo.com>
Tested-by: Igor Redko <redkoi@virtuozzo.com>
CC: Anna Melekhova <annam@virtuozzo.com>
CC: Juan Quintela <quintela@redhat.com>
CC: Amit Shah <amit.shah@redhat.com>
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Wen Congyang <wency@cn.fujitsu.com>
---
 migration/ram.c | 44 +++++++++++++++++++++++++++-----------------
 1 file changed, 27 insertions(+), 17 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 7f007e6..e7c5bcf 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -221,12 +221,16 @@ static RAMBlock *last_seen_block;
 /* This is the last block from where we have sent data */
 static RAMBlock *last_sent_block;
 static ram_addr_t last_offset;
-static unsigned long *migration_bitmap;
 static QemuMutex migration_bitmap_mutex;
 static uint64_t migration_dirty_pages;
 static uint32_t last_version;
 static bool ram_bulk_stage;
 
+static struct BitmapRcu {
+    struct rcu_head rcu;
+    unsigned long *bmap;
+} *migration_bitmap_rcu;
+
 struct CompressParam {
     bool start;
     bool done;
@@ -508,7 +512,7 @@ ram_addr_t migration_bitmap_find_and_reset_dirty(MemoryRegion *mr,
 
     unsigned long next;
 
-    bitmap = atomic_rcu_read(&migration_bitmap);
+    bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
     if (ram_bulk_stage && nr > base) {
         next = nr + 1;
     } else {
@@ -526,7 +530,7 @@ ram_addr_t migration_bitmap_find_and_reset_dirty(MemoryRegion *mr,
 static void migration_bitmap_sync_range(ram_addr_t start, ram_addr_t length)
 {
     unsigned long *bitmap;
-    bitmap = atomic_rcu_read(&migration_bitmap);
+    bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
     migration_dirty_pages +=
         cpu_physical_memory_sync_dirty_bitmap(bitmap, start, length);
 }
@@ -1024,17 +1028,22 @@ void free_xbzrle_decoded_buf(void)
     xbzrle_decoded_buf = NULL;
 }
 
+static void migration_bitmap_free(struct BitmapRcu *bmap)
+{
+    g_free(bmap->bmap);
+    g_free(bmap);
+}
+
 static void migration_end(void)
 {
     /* the caller has to hold the iothread lock or be in a bh, so there
      * is no writing race against this migration_bitmap
      */
-    unsigned long *bitmap = migration_bitmap;
-    atomic_rcu_set(&migration_bitmap, NULL);
+    struct BitmapRcu *bitmap = migration_bitmap_rcu;
+    atomic_rcu_set(&migration_bitmap_rcu, NULL);
     if (bitmap) {
         memory_global_dirty_log_stop();
-        synchronize_rcu();
-        g_free(bitmap);
+        call_rcu(bitmap, migration_bitmap_free, rcu);
     }
 
     XBZRLE_cache_lock();
@@ -1070,9 +1079,10 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
     /* called in the QEMU main thread, so there is
      * no writing race against this migration_bitmap
      */
-    if (migration_bitmap) {
-        unsigned long *old_bitmap = migration_bitmap, *bitmap;
-        bitmap = bitmap_new(new);
+    if (migration_bitmap_rcu) {
+        struct BitmapRcu *old_bitmap = migration_bitmap_rcu, *bitmap;
+        bitmap = g_new(struct BitmapRcu, 1);
+        bitmap->bmap = bitmap_new(new);
 
         /* prevent migration_bitmap content from being set bit
          * by migration_bitmap_sync_range() at the same time.
@@ -1080,13 +1090,12 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
          * at the same time.
          */
         qemu_mutex_lock(&migration_bitmap_mutex);
-        bitmap_copy(bitmap, old_bitmap, old);
-        bitmap_set(bitmap, old, new - old);
-        atomic_rcu_set(&migration_bitmap, bitmap);
+        bitmap_copy(bitmap->bmap, old_bitmap->bmap, old);
+        bitmap_set(bitmap->bmap, old, new - old);
+        atomic_rcu_set(&migration_bitmap_rcu, bitmap);
         qemu_mutex_unlock(&migration_bitmap_mutex);
         migration_dirty_pages += new - old;
-        synchronize_rcu();
-        g_free(old_bitmap);
+        call_rcu(old_bitmap, migration_bitmap_free, rcu);
     }
 }
 
@@ -1145,8 +1154,9 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
     reset_ram_globals();
 
     ram_bitmap_pages = last_ram_offset() >> TARGET_PAGE_BITS;
-    migration_bitmap = bitmap_new(ram_bitmap_pages);
-    bitmap_set(migration_bitmap, 0, ram_bitmap_pages);
+    migration_bitmap_rcu = g_new(struct BitmapRcu, 1);
+    migration_bitmap_rcu->bmap = bitmap_new(ram_bitmap_pages);
+    bitmap_set(migration_bitmap_rcu->bmap, 0, ram_bitmap_pages);
 
     /*
      * Count the total number of pages used by ram blocks not including any
-- 
2.1.4
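
For completeness, the reader side is structurally unchanged: readers
still run under rcu_read_lock() and simply dereference the wrapper,
along the lines of this sketch derived from the hunks above:

    rcu_read_lock();
    bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
    /* ... scan or sync the dirty bitmap ... */
    rcu_read_unlock();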


* Re: [Qemu-devel] [PATCH 1/1] migration: fix deadlock
  2015-09-28 11:41 [Qemu-devel] [PATCH 1/1] migration: fix deadlock Denis V. Lunev
@ 2015-09-28 11:55 ` Paolo Bonzini
  2015-09-29  5:13 ` Amit Shah
  2015-09-30 16:16 ` Juan Quintela
  2 siblings, 0 replies; 11+ messages in thread
From: Paolo Bonzini @ 2015-09-28 11:55 UTC (permalink / raw)
  To: Denis V. Lunev; +Cc: Amit Shah, Juan Quintela, qemu-devel, Anna Melekhova



On 28/09/2015 13:41, Denis V. Lunev wrote:
> Release the QEMU global mutex before calling synchronize_rcu().
> synchronize_rcu() waits for all readers to finish their critical
> sections. There is at least one critical section in which we try
> to take the QGM (the critical section is in address_space_rw(),
> where prepare_mmio_access() tries to acquire the QGM).
>
> Both functions (migration_end() and migration_bitmap_extend())
> are called from the main thread, which holds the QGM.
>
> Thus there is a race condition that ends up in deadlock:
> main thread             working thread
> Lock QGM                |
> |             Call KVM_EXIT_IO handler
> |                       |
> |        Open RCU reader's critical section
> Migration cleanup bh    |
> |                       |
> synchronize_rcu() is    |
> waiting for readers     |
> |            prepare_mmio_access() is waiting for the QGM
>   \                   /
>          deadlock
>
> The patch changes the bitmap freeing from a direct g_free() after
> synchronize_rcu() to freeing inside call_rcu().
> 
> Signed-off-by: Denis V. Lunev <den@openvz.org>
> Reported-by: Igor Redko <redkoi@virtuozzo.com>
> Tested-by: Igor Redko <redkoi@virtuozzo.com>
> CC: Anna Melekhova <annam@virtuozzo.com>
> CC: Juan Quintela <quintela@redhat.com>
> CC: Amit Shah <amit.shah@redhat.com>
> CC: Paolo Bonzini <pbonzini@redhat.com>
> CC: Wen Congyang <wency@cn.fujitsu.com>
> ---
>  migration/ram.c | 44 +++++++++++++++++++++++++++-----------------
>  1 file changed, 27 insertions(+), 17 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 7f007e6..e7c5bcf 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -221,12 +221,16 @@ static RAMBlock *last_seen_block;
>  /* This is the last block from where we have sent data */
>  static RAMBlock *last_sent_block;
>  static ram_addr_t last_offset;
> -static unsigned long *migration_bitmap;
>  static QemuMutex migration_bitmap_mutex;
>  static uint64_t migration_dirty_pages;
>  static uint32_t last_version;
>  static bool ram_bulk_stage;
>  
> +static struct BitmapRcu {
> +    struct rcu_head rcu;
> +    unsigned long *bmap;
> +} *migration_bitmap_rcu;
> +
>  struct CompressParam {
>      bool start;
>      bool done;
> @@ -508,7 +512,7 @@ ram_addr_t migration_bitmap_find_and_reset_dirty(MemoryRegion *mr,
>  
>      unsigned long next;
>  
> -    bitmap = atomic_rcu_read(&migration_bitmap);
> +    bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
>      if (ram_bulk_stage && nr > base) {
>          next = nr + 1;
>      } else {
> @@ -526,7 +530,7 @@ ram_addr_t migration_bitmap_find_and_reset_dirty(MemoryRegion *mr,
>  static void migration_bitmap_sync_range(ram_addr_t start, ram_addr_t length)
>  {
>      unsigned long *bitmap;
> -    bitmap = atomic_rcu_read(&migration_bitmap);
> +    bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
>      migration_dirty_pages +=
>          cpu_physical_memory_sync_dirty_bitmap(bitmap, start, length);
>  }
> @@ -1024,17 +1028,22 @@ void free_xbzrle_decoded_buf(void)
>      xbzrle_decoded_buf = NULL;
>  }
>  
> +static void migration_bitmap_free(struct BitmapRcu *bmap)
> +{
> +    g_free(bmap->bmap);
> +    g_free(bmap);
> +}
> +
>  static void migration_end(void)
>  {
>      /* the caller has to hold the iothread lock or be in a bh, so there
>       * is no writing race against this migration_bitmap
>       */
> -    unsigned long *bitmap = migration_bitmap;
> -    atomic_rcu_set(&migration_bitmap, NULL);
> +    struct BitmapRcu *bitmap = migration_bitmap_rcu;
> +    atomic_rcu_set(&migration_bitmap_rcu, NULL);
>      if (bitmap) {
>          memory_global_dirty_log_stop();
> -        synchronize_rcu();
> -        g_free(bitmap);
> +        call_rcu(bitmap, migration_bitmap_free, rcu);
>      }
>  
>      XBZRLE_cache_lock();
> @@ -1070,9 +1079,10 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
>      /* called in the QEMU main thread, so there is
>       * no writing race against this migration_bitmap
>       */
> -    if (migration_bitmap) {
> -        unsigned long *old_bitmap = migration_bitmap, *bitmap;
> -        bitmap = bitmap_new(new);
> +    if (migration_bitmap_rcu) {
> +        struct BitmapRcu *old_bitmap = migration_bitmap_rcu, *bitmap;
> +        bitmap = g_new(struct BitmapRcu, 1);
> +        bitmap->bmap = bitmap_new(new);
>  
>          /* prevent migration_bitmap content from being set bit
>           * by migration_bitmap_sync_range() at the same time.
> @@ -1080,13 +1090,12 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
>           * at the same time.
>           */
>          qemu_mutex_lock(&migration_bitmap_mutex);
> -        bitmap_copy(bitmap, old_bitmap, old);
> -        bitmap_set(bitmap, old, new - old);
> -        atomic_rcu_set(&migration_bitmap, bitmap);
> +        bitmap_copy(bitmap->bmap, old_bitmap->bmap, old);
> +        bitmap_set(bitmap->bmap, old, new - old);
> +        atomic_rcu_set(&migration_bitmap_rcu, bitmap);
>          qemu_mutex_unlock(&migration_bitmap_mutex);
>          migration_dirty_pages += new - old;
> -        synchronize_rcu();
> -        g_free(old_bitmap);
> +        call_rcu(old_bitmap, migration_bitmap_free, rcu);
>      }
>  }
>  
> @@ -1145,8 +1154,9 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
>      reset_ram_globals();
>  
>      ram_bitmap_pages = last_ram_offset() >> TARGET_PAGE_BITS;
> -    migration_bitmap = bitmap_new(ram_bitmap_pages);
> -    bitmap_set(migration_bitmap, 0, ram_bitmap_pages);
> +    migration_bitmap_rcu = g_new(struct BitmapRcu, 1);
> +    migration_bitmap_rcu->bmap = bitmap_new(ram_bitmap_pages);
> +    bitmap_set(migration_bitmap_rcu->bmap, 0, ram_bitmap_pages);
>  
>      /*
>       * Count the total number of pages used by ram blocks not including any
> 

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>


* Re: [Qemu-devel] [PATCH 1/1] migration: fix deadlock
  2015-09-28 11:41 [Qemu-devel] [PATCH 1/1] migration: fix deadlock Denis V. Lunev
  2015-09-28 11:55 ` Paolo Bonzini
@ 2015-09-29  5:13 ` Amit Shah
  2015-09-29  5:43   ` Denis V. Lunev
  2015-09-29  5:46   ` Denis V. Lunev
  2015-09-30 16:16 ` Juan Quintela
  2 siblings, 2 replies; 11+ messages in thread
From: Amit Shah @ 2015-09-29  5:13 UTC (permalink / raw)
  To: Denis V. Lunev; +Cc: Paolo Bonzini, Juan Quintela, qemu-devel, Anna Melekhova

There have been multiple versions of this patch on the list; can you
please annotate this one as v3 so it supersedes the earlier v2?

Also, please include a changelog in the description in patch 0 so we
know what happened between the various versions.

Thanks,

On (Mon) 28 Sep 2015 [14:41:58], Denis V. Lunev wrote:
> Release the QEMU global mutex before calling synchronize_rcu().
> synchronize_rcu() waits for all readers to finish their critical
> sections. There is at least one critical section in which we try
> to take the QGM (the critical section is in address_space_rw(),
> where prepare_mmio_access() tries to acquire the QGM).
>
> Both functions (migration_end() and migration_bitmap_extend())
> are called from the main thread, which holds the QGM.
>
> Thus there is a race condition that ends up in deadlock:
> main thread             working thread
> Lock QGM                |
> |             Call KVM_EXIT_IO handler
> |                       |
> |        Open RCU reader's critical section
> Migration cleanup bh    |
> |                       |
> synchronize_rcu() is    |
> waiting for readers     |
> |            prepare_mmio_access() is waiting for the QGM
>   \                   /
>          deadlock
>
> The patch changes the bitmap freeing from a direct g_free() after
> synchronize_rcu() to freeing inside call_rcu().
> 
> Signed-off-by: Denis V. Lunev <den@openvz.org>
> Reported-by: Igor Redko <redkoi@virtuozzo.com>
> Tested-by: Igor Redko <redkoi@virtuozzo.com>
> CC: Anna Melekhova <annam@virtuozzo.com>
> CC: Juan Quintela <quintela@redhat.com>
> CC: Amit Shah <amit.shah@redhat.com>
> CC: Paolo Bonzini <pbonzini@redhat.com>
> CC: Wen Congyang <wency@cn.fujitsu.com>
> ---
>  migration/ram.c | 44 +++++++++++++++++++++++++++-----------------
>  1 file changed, 27 insertions(+), 17 deletions(-)

		Amit


* Re: [Qemu-devel] [PATCH 1/1] migration: fix deadlock
  2015-09-29  5:13 ` Amit Shah
@ 2015-09-29  5:43   ` Denis V. Lunev
  2015-09-29  5:46   ` Denis V. Lunev
  1 sibling, 0 replies; 11+ messages in thread
From: Denis V. Lunev @ 2015-09-29  5:43 UTC (permalink / raw)
  To: Amit Shah; +Cc: Paolo Bonzini, Juan Quintela, qemu-devel, Anna Melekhova

On 09/29/2015 08:13 AM, Amit Shah wrote:
> There have been multiple versions of this patch on the list; can you
> please annotate this one as v3 so it supersedes the earlier v2?
>
> Also, please include a changelog in the description in patch 0 so we
> know what happened between the various versions.
>
> Thanks,
>
> On (Mon) 28 Sep 2015 [14:41:58], Denis V. Lunev wrote:
>> Release the QEMU global mutex before calling synchronize_rcu().
>> synchronize_rcu() waits for all readers to finish their critical
>> sections. There is at least one critical section in which we try
>> to take the QGM (the critical section is in address_space_rw(),
>> where prepare_mmio_access() tries to acquire the QGM).
>>
>> Both functions (migration_end() and migration_bitmap_extend())
>> are called from the main thread, which holds the QGM.
>>
>> Thus there is a race condition that ends up in deadlock:
>> main thread             working thread
>> Lock QGM                |
>> |             Call KVM_EXIT_IO handler
>> |                       |
>> |        Open RCU reader's critical section
>> Migration cleanup bh    |
>> |                       |
>> synchronize_rcu() is    |
>> waiting for readers     |
>> |            prepare_mmio_access() is waiting for the QGM
>>   \                   /
>>          deadlock
>>
>> The patch changes the bitmap freeing from a direct g_free() after
>> synchronize_rcu() to freeing inside call_rcu().
>>
>> Signed-off-by: Denis V. Lunev <den@openvz.org>
>> Reported-by: Igor Redko <redkoi@virtuozzo.com>
>> Tested-by: Igor Redko <redkoi@virtuozzo.com>
>> CC: Anna Melekhova <annam@virtuozzo.com>
>> CC: Juan Quintela <quintela@redhat.com>
>> CC: Amit Shah <amit.shah@redhat.com>
>> CC: Paolo Bonzini <pbonzini@redhat.com>
>> CC: Wen Congyang <wency@cn.fujitsu.com>
>> ---
>>   migration/ram.c | 44 +++++++++++++++++++++++++++-----------------
>>   1 file changed, 27 insertions(+), 17 deletions(-)
> 		Amit
This one is correct. I am sorry, I missed the v3 tag in the subject.

Den


* Re: [Qemu-devel] [PATCH 1/1] migration: fix deadlock
  2015-09-29  5:13 ` Amit Shah
  2015-09-29  5:43   ` Denis V. Lunev
@ 2015-09-29  5:46   ` Denis V. Lunev
  1 sibling, 0 replies; 11+ messages in thread
From: Denis V. Lunev @ 2015-09-29  5:46 UTC (permalink / raw)
  To: Amit Shah; +Cc: Paolo Bonzini, Juan Quintela, qemu-devel, Anna Melekhova

On 09/29/2015 08:13 AM, Amit Shah wrote:
> There have been multiple versions of this patch on the list; can you
> please annotate this one as v3 so it supersedes the earlier v2?
>
> Also, please include a changelog in the description in patch 0 so we
> know what happened between the various versions.
>
> Thanks,
Changes from v2:
- switched from a single allocation with a bitmap_alloc_rcu() helper and
  g_free_rcu() to separate allocations of the bitmap and the RCU object,
  freed via call_rcu()

Changes from v1:
- the unlocking is replaced with g_free_rcu()

Den

P.S. Sorry for inconvenience.


* Re: [Qemu-devel] [PATCH 1/1] migration: fix deadlock
  2015-09-25  8:23     ` Wen Congyang
@ 2015-09-29 15:32       ` Igor Redko
  0 siblings, 0 replies; 11+ messages in thread
From: Igor Redko @ 2015-09-29 15:32 UTC (permalink / raw)
  To: Wen Congyang, Denis V. Lunev; +Cc: Amit Shah, qemu-devel, Juan Quintela

On 25.09.2015 11:23, Wen Congyang wrote:
> On 09/25/2015 04:03 PM, Denis V. Lunev wrote:
>> On 09/25/2015 04:21 AM, Wen Congyang wrote:
>>> On 09/24/2015 08:53 PM, Denis V. Lunev wrote:
>>>> From: Igor Redko <redkoi@virtuozzo.com>
>>>>
>>>> Release the QEMU global mutex before calling synchronize_rcu().
>>>> synchronize_rcu() waits for all readers to finish their critical
>>>> sections. There is at least one critical section in which we try
>>>> to take the QGM (the critical section is in address_space_rw(),
>>>> where prepare_mmio_access() tries to acquire the QGM).
>>>>
>>>> Both functions (migration_end() and migration_bitmap_extend())
>>>> are called from the main thread, which holds the QGM.
>>>>
>>>> Thus there is a race condition that ends up in deadlock:
>>>> main thread             working thread
>>>> Lock QGM                |
>>>> |             Call KVM_EXIT_IO handler
>>>> |                       |
>>>> |        Open RCU reader's critical section
>>>> Migration cleanup bh    |
>>>> |                       |
>>>> synchronize_rcu() is    |
>>>> waiting for readers     |
>>>> |            prepare_mmio_access() is waiting for the QGM
>>>>   \                   /
>>>>          deadlock
>>>>
>>>> The patch simply releases the QGM before calling synchronize_rcu().
>>>>
>>>> Signed-off-by: Igor Redko <redkoi@virtuozzo.com>
>>>> Reviewed-by: Anna Melekhova <annam@virtuozzo.com>
>>>> Signed-off-by: Denis V. Lunev <den@openvz.org>
>>>> CC: Juan Quintela <quintela@redhat.com>
>>>> CC: Amit Shah <amit.shah@redhat.com>
>>>> ---
>>>>    migration/ram.c | 6 ++++++
>>>>    1 file changed, 6 insertions(+)
>>>>
>>>> diff --git a/migration/ram.c b/migration/ram.c
>>>> index 7f007e6..d01febc 100644
>>>> --- a/migration/ram.c
>>>> +++ b/migration/ram.c
>>>> @@ -1028,12 +1028,16 @@ static void migration_end(void)
>>>>    {
>>>>        /* the caller has to hold the iothread lock or be in a bh, so there
>>>>         * is no writing race against this migration_bitmap,
>>>> +     * but RCU is used not only for migration_bitmap, so we should
>>>> +     * release the QGM or we get a deadlock.
>>>>         */
>>>>        unsigned long *bitmap = migration_bitmap;
>>>>        atomic_rcu_set(&migration_bitmap, NULL);
>>>>        if (bitmap) {
>>>>            memory_global_dirty_log_stop();
>>>> +        qemu_mutex_unlock_iothread();
>>>>            synchronize_rcu();
>>>> +        qemu_mutex_lock_iothread();
>>> migration_end() can be called in two cases:
>>> 1. migration completed
>>> 2. migration is cancelled
>>>
>>> In case 1, you should not unlock the iothread, otherwise the VM's state
>>> may be changed unexpectedly.
>>
>> Sorry, but there is no very good choice here. We should either
>> unlock, or not call synchronize_rcu(), which is also an option.
>>
>> Otherwise the rework would have to be much more substantial.
>
> I can't reproduce this bug, but according to your description, the bug
> only exists in case 2. Is that right?
>
When migration completes successfully, the VM has already been stopped
before migration_end() is called. The VM must be running to reproduce
this bug, so yes, the bug exists only in case 2.

FYI, to reproduce this bug you need two hosts with qemu+libvirt (host0
and host1) configured for migration:
0. Create a VM on host0 and install CentOS 7.
1. Shut down the VM.
2. Start the VM (virsh start <VM_name>) and right after that start the
migration to host1 (something like 'virsh migrate --live --verbose
<VM_name> "qemu+ssh://host1/system"').
3. Stop the migration after ~1 sec (after the migration process has
started, but before it completes; for example, when you see
"Migration: [  5 %]").
This works for me 9 times out of 10. The deadlock shows up as no
response from the VM and no response from the qemu monitor (for
example, 'virsh qemu-monitor-command --hmp <VM_NAME> "info migrate"'
will hang indefinitely).

Another way:
0. Create a VM with an e1000 network card on host0 and install CentOS 7.
1. Run iperf in the VM (or put any other load on the network).
2. Start the migration.
3. Stop the migration before it completes.
For this approach the e1000 network card is essential because it
generates KVM_EXIT_MMIO.

>>
>> Den
>>
>>>>            g_free(bitmap);
>>>>        }
>>>>    @@ -1085,7 +1089,9 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
>>>>            atomic_rcu_set(&migration_bitmap, bitmap);
>>>>            qemu_mutex_unlock(&migration_bitmap_mutex);
>>>>            migration_dirty_pages += new - old;
>>>> +        qemu_mutex_unlock_iothread();
>>>>            synchronize_rcu();
>>>> +        qemu_mutex_lock_iothread();
>>> Hmm, I think it is OK to unlock iothread here
>>>
>>>>            g_free(old_bitmap);
>>>>        }
>>>>    }
>>>>
>>
>> .
>>
>
>


* Re: [Qemu-devel] [PATCH 1/1] migration: fix deadlock
  2015-09-28 11:41 [Qemu-devel] [PATCH 1/1] migration: fix deadlock Denis V. Lunev
  2015-09-28 11:55 ` Paolo Bonzini
  2015-09-29  5:13 ` Amit Shah
@ 2015-09-30 16:16 ` Juan Quintela
  2 siblings, 0 replies; 11+ messages in thread
From: Juan Quintela @ 2015-09-30 16:16 UTC (permalink / raw)
  To: Denis V. Lunev; +Cc: Amit Shah, Paolo Bonzini, qemu-devel, Anna Melekhova

"Denis V. Lunev" <den@openvz.org> wrote:
> Release the QEMU global mutex before calling synchronize_rcu().
> synchronize_rcu() waits for all readers to finish their critical
> sections. There is at least one critical section in which we try
> to take the QGM (the critical section is in address_space_rw(),
> where prepare_mmio_access() tries to acquire the QGM).
>
> Both functions (migration_end() and migration_bitmap_extend())
> are called from the main thread, which holds the QGM.
>
> Thus there is a race condition that ends up in deadlock:
> main thread             working thread
> Lock QGM                |
> |             Call KVM_EXIT_IO handler
> |                       |
> |        Open RCU reader's critical section
> Migration cleanup bh    |
> |                       |
> synchronize_rcu() is    |
> waiting for readers     |
> |            prepare_mmio_access() is waiting for the QGM
>   \                   /
>          deadlock
>
> The patch changes the bitmap freeing from a direct g_free() after
> synchronize_rcu() to freeing inside call_rcu().
>
> Signed-off-by: Denis V. Lunev <den@openvz.org>
> Reported-by: Igor Redko <redkoi@virtuozzo.com>
> Tested-by: Igor Redko <redkoi@virtuozzo.com>
> CC: Anna Melekhova <annam@virtuozzo.com>
> CC: Juan Quintela <quintela@redhat.com>
> CC: Amit Shah <amit.shah@redhat.com>
> CC: Paolo Bonzini <pbonzini@redhat.com>
> CC: Wen Congyang <wency@cn.fujitsu.com>

Reviewed-by: Juan Quintela <quintela@redhat.com>

Applied to my tree.

PS: no, I still don't understand how RCU gave us so many wrong corner cases.


end of thread

Thread overview: 11+ messages
-- links below jump to the message on this page --
2015-09-28 11:41 [Qemu-devel] [PATCH 1/1] migration: fix deadlock Denis V. Lunev
2015-09-28 11:55 ` Paolo Bonzini
2015-09-29  5:13 ` Amit Shah
2015-09-29  5:43   ` Denis V. Lunev
2015-09-29  5:46   ` Denis V. Lunev
2015-09-30 16:16 ` Juan Quintela
  -- strict thread matches above, loose matches on Subject: below --
2015-09-24 12:53 Denis V. Lunev
2015-09-25  1:21 ` Wen Congyang
2015-09-25  8:03   ` Denis V. Lunev
2015-09-25  8:23     ` Wen Congyang
2015-09-29 15:32       ` Igor Redko
