* [Qemu-devel] [PATCH 1/1] migration: fix deadlock
@ 2015-09-24 12:53 Denis V. Lunev
2015-09-25 1:21 ` Wen Congyang
0 siblings, 1 reply; 18+ messages in thread
From: Denis V. Lunev @ 2015-09-24 12:53 UTC (permalink / raw)
Cc: Amit Shah, Igor Redko, Juan Quintela, qemu-devel, Denis V. Lunev
From: Igor Redko <redkoi@virtuozzo.com>
Release the QEMU global mutex (QGM) before calling synchronize_rcu().
synchronize_rcu() waits for all readers to finish their critical
sections. There is at least one critical section that tries to
acquire the QGM: address_space_rw() opens an RCU read-side critical
section inside which prepare_mmio_access() tries to acquire the QGM.
Both migration_end() and migration_bitmap_extend() are called from
the main thread, which holds the QGM.
Thus there is a race condition that ends up in deadlock:
main thread                               working thread
Lock QGM                                        |
    |                               Call KVM_EXIT_IO handler
    |                                           |
    |                        Open RCU read-side critical section
Migration cleanup bh                            |
    |                                           |
synchronize_rcu() is                            |
waiting for readers                             |
    |                   prepare_mmio_access() is waiting for QGM
     \                                         /
                       deadlock
The patch simply releases the QGM before calling synchronize_rcu().
Signed-off-by: Igor Redko <redkoi@virtuozzo.com>
Reviewed-by: Anna Melekhova <annam@virtuozzo.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Juan Quintela <quintela@redhat.com>
CC: Amit Shah <amit.shah@redhat.com>
---
migration/ram.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/migration/ram.c b/migration/ram.c
index 7f007e6..d01febc 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1028,12 +1028,16 @@ static void migration_end(void)
{
/* caller have hold iothread lock or is in a bh, so there is
* no writing race against this migration_bitmap
+ * but RCU is used not only for migration_bitmap, so we must
+ * release the QGM or we will deadlock.
*/
unsigned long *bitmap = migration_bitmap;
atomic_rcu_set(&migration_bitmap, NULL);
if (bitmap) {
memory_global_dirty_log_stop();
+ qemu_mutex_unlock_iothread();
synchronize_rcu();
+ qemu_mutex_lock_iothread();
g_free(bitmap);
}
@@ -1085,7 +1089,9 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
atomic_rcu_set(&migration_bitmap, bitmap);
qemu_mutex_unlock(&migration_bitmap_mutex);
migration_dirty_pages += new - old;
+ qemu_mutex_unlock_iothread();
synchronize_rcu();
+ qemu_mutex_lock_iothread();
g_free(old_bitmap);
}
}
--
2.1.4
^ permalink raw reply related [flat|nested] 18+ messages in thread
* Re: [Qemu-devel] [PATCH 1/1] migration: fix deadlock
2015-09-24 12:53 [Qemu-devel] [PATCH 1/1] migration: fix deadlock Denis V. Lunev
@ 2015-09-25 1:21 ` Wen Congyang
2015-09-25 8:03 ` Denis V. Lunev
0 siblings, 1 reply; 18+ messages in thread
From: Wen Congyang @ 2015-09-25 1:21 UTC (permalink / raw)
To: Denis V. Lunev; +Cc: Amit Shah, Igor Redko, qemu-devel, Juan Quintela
On 09/24/2015 08:53 PM, Denis V. Lunev wrote:
> From: Igor Redko <redkoi@virtuozzo.com>
>
> Release qemu global mutex before call synchronize_rcu().
> synchronize_rcu() waiting for all readers to finish their critical
> sections. There is at least one critical section in which we try
> to get QGM (critical section is in address_space_rw() and
> prepare_mmio_access() is trying to aquire QGM).
>
> Both functions (migration_end() and migration_bitmap_extend())
> are called from main thread which is holding QGM.
>
> Thus there is a race condition that ends up with deadlock:
> main thread working thread
> Lock QGA |
> | Call KVM_EXIT_IO handler
> | |
> | Open rcu reader's critical section
> Migration cleanup bh |
> | |
> synchronize_rcu() is |
> waiting for readers |
> | prepare_mmio_access() is waiting for QGM
> \ /
> deadlock
>
> The patch just releases QGM before calling synchronize_rcu().
>
> Signed-off-by: Igor Redko <redkoi@virtuozzo.com>
> Reviewed-by: Anna Melekhova <annam@virtuozzo.com>
> Signed-off-by: Denis V. Lunev <den@openvz.org>
> CC: Juan Quintela <quintela@redhat.com>
> CC: Amit Shah <amit.shah@redhat.com>
> ---
> migration/ram.c | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/migration/ram.c b/migration/ram.c
> index 7f007e6..d01febc 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -1028,12 +1028,16 @@ static void migration_end(void)
> {
> /* caller have hold iothread lock or is in a bh, so there is
> * no writing race against this migration_bitmap
> + * but rcu used not only for migration_bitmap, so we should
> + * release QGM or we get in deadlock.
> */
> unsigned long *bitmap = migration_bitmap;
> atomic_rcu_set(&migration_bitmap, NULL);
> if (bitmap) {
> memory_global_dirty_log_stop();
> + qemu_mutex_unlock_iothread();
> synchronize_rcu();
> + qemu_mutex_lock_iothread();
migration_end() can be called in two cases:
1. migration is completed
2. migration is cancelled
In case 1, you should not unlock the iothread; otherwise the VM's state may be
changed unexpectedly.
> g_free(bitmap);
> }
>
> @@ -1085,7 +1089,9 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
> atomic_rcu_set(&migration_bitmap, bitmap);
> qemu_mutex_unlock(&migration_bitmap_mutex);
> migration_dirty_pages += new - old;
> + qemu_mutex_unlock_iothread();
> synchronize_rcu();
> + qemu_mutex_lock_iothread();
Hmm, I think it is OK to unlock the iothread here
> g_free(old_bitmap);
> }
> }
>
* Re: [Qemu-devel] [PATCH 1/1] migration: fix deadlock
2015-09-25 1:21 ` Wen Congyang
@ 2015-09-25 8:03 ` Denis V. Lunev
2015-09-25 8:23 ` Wen Congyang
0 siblings, 1 reply; 18+ messages in thread
From: Denis V. Lunev @ 2015-09-25 8:03 UTC (permalink / raw)
To: Wen Congyang; +Cc: Amit Shah, Igor Redko, qemu-devel, Juan Quintela
On 09/25/2015 04:21 AM, Wen Congyang wrote:
> On 09/24/2015 08:53 PM, Denis V. Lunev wrote:
>> From: Igor Redko <redkoi@virtuozzo.com>
>>
>> Release qemu global mutex before call synchronize_rcu().
>> synchronize_rcu() waiting for all readers to finish their critical
>> sections. There is at least one critical section in which we try
>> to get QGM (critical section is in address_space_rw() and
>> prepare_mmio_access() is trying to aquire QGM).
>>
>> Both functions (migration_end() and migration_bitmap_extend())
>> are called from main thread which is holding QGM.
>>
>> Thus there is a race condition that ends up with deadlock:
>> main thread working thread
>> Lock QGA |
>> | Call KVM_EXIT_IO handler
>> | |
>> | Open rcu reader's critical section
>> Migration cleanup bh |
>> | |
>> synchronize_rcu() is |
>> waiting for readers |
>> | prepare_mmio_access() is waiting for QGM
>> \ /
>> deadlock
>>
>> The patch just releases QGM before calling synchronize_rcu().
>>
>> Signed-off-by: Igor Redko <redkoi@virtuozzo.com>
>> Reviewed-by: Anna Melekhova <annam@virtuozzo.com>
>> Signed-off-by: Denis V. Lunev <den@openvz.org>
>> CC: Juan Quintela <quintela@redhat.com>
>> CC: Amit Shah <amit.shah@redhat.com>
>> ---
>> migration/ram.c | 6 ++++++
>> 1 file changed, 6 insertions(+)
>>
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 7f007e6..d01febc 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -1028,12 +1028,16 @@ static void migration_end(void)
>> {
>> /* caller have hold iothread lock or is in a bh, so there is
>> * no writing race against this migration_bitmap
>> + * but rcu used not only for migration_bitmap, so we should
>> + * release QGM or we get in deadlock.
>> */
>> unsigned long *bitmap = migration_bitmap;
>> atomic_rcu_set(&migration_bitmap, NULL);
>> if (bitmap) {
>> memory_global_dirty_log_stop();
>> + qemu_mutex_unlock_iothread();
>> synchronize_rcu();
>> + qemu_mutex_lock_iothread();
> migration_end() can called in two cases:
> 1. migration_completed
> 2. migration is cancelled
>
> In case 1, you should not unlock iothread, otherwise, the vm's state may be changed
> unexpectedly.
Sorry, but there is no very good choice here. We should either
unlock, or not call synchronize_rcu(), which is also an option.
Otherwise the rework would have to be much more substantial.
Den
>> g_free(bitmap);
>> }
>>
>> @@ -1085,7 +1089,9 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
>> atomic_rcu_set(&migration_bitmap, bitmap);
>> qemu_mutex_unlock(&migration_bitmap_mutex);
>> migration_dirty_pages += new - old;
>> + qemu_mutex_unlock_iothread();
>> synchronize_rcu();
>> + qemu_mutex_lock_iothread();
> Hmm, I think it is OK to unlock iothread here
>
>> g_free(old_bitmap);
>> }
>> }
>>
* Re: [Qemu-devel] [PATCH 1/1] migration: fix deadlock
2015-09-25 8:03 ` Denis V. Lunev
@ 2015-09-25 8:23 ` Wen Congyang
2015-09-25 9:09 ` [Qemu-devel] [PATCH v2 0/2] " Denis V. Lunev
2015-09-29 15:32 ` [Qemu-devel] [PATCH 1/1] " Igor Redko
0 siblings, 2 replies; 18+ messages in thread
From: Wen Congyang @ 2015-09-25 8:23 UTC (permalink / raw)
To: Denis V. Lunev; +Cc: Amit Shah, Igor Redko, qemu-devel, Juan Quintela
On 09/25/2015 04:03 PM, Denis V. Lunev wrote:
> On 09/25/2015 04:21 AM, Wen Congyang wrote:
>> On 09/24/2015 08:53 PM, Denis V. Lunev wrote:
>>> From: Igor Redko <redkoi@virtuozzo.com>
>>>
>>> Release qemu global mutex before call synchronize_rcu().
>>> synchronize_rcu() waiting for all readers to finish their critical
>>> sections. There is at least one critical section in which we try
>>> to get QGM (critical section is in address_space_rw() and
>>> prepare_mmio_access() is trying to aquire QGM).
>>>
>>> Both functions (migration_end() and migration_bitmap_extend())
>>> are called from main thread which is holding QGM.
>>>
>>> Thus there is a race condition that ends up with deadlock:
>>> main thread working thread
>>> Lock QGA |
>>> | Call KVM_EXIT_IO handler
>>> | |
>>> | Open rcu reader's critical section
>>> Migration cleanup bh |
>>> | |
>>> synchronize_rcu() is |
>>> waiting for readers |
>>> | prepare_mmio_access() is waiting for QGM
>>> \ /
>>> deadlock
>>>
>>> The patch just releases QGM before calling synchronize_rcu().
>>>
>>> Signed-off-by: Igor Redko <redkoi@virtuozzo.com>
>>> Reviewed-by: Anna Melekhova <annam@virtuozzo.com>
>>> Signed-off-by: Denis V. Lunev <den@openvz.org>
>>> CC: Juan Quintela <quintela@redhat.com>
>>> CC: Amit Shah <amit.shah@redhat.com>
>>> ---
>>> migration/ram.c | 6 ++++++
>>> 1 file changed, 6 insertions(+)
>>>
>>> diff --git a/migration/ram.c b/migration/ram.c
>>> index 7f007e6..d01febc 100644
>>> --- a/migration/ram.c
>>> +++ b/migration/ram.c
>>> @@ -1028,12 +1028,16 @@ static void migration_end(void)
>>> {
>>> /* caller have hold iothread lock or is in a bh, so there is
>>> * no writing race against this migration_bitmap
>>> + * but rcu used not only for migration_bitmap, so we should
>>> + * release QGM or we get in deadlock.
>>> */
>>> unsigned long *bitmap = migration_bitmap;
>>> atomic_rcu_set(&migration_bitmap, NULL);
>>> if (bitmap) {
>>> memory_global_dirty_log_stop();
>>> + qemu_mutex_unlock_iothread();
>>> synchronize_rcu();
>>> + qemu_mutex_lock_iothread();
>> migration_end() can called in two cases:
>> 1. migration_completed
>> 2. migration is cancelled
>>
>> In case 1, you should not unlock iothread, otherwise, the vm's state may be changed
>> unexpectedly.
>
> sorry, but there is now very good choice here. We should either
> unlock or not call synchronize_rcu which is also an option.
>
> In the other case the rework should be much more sufficient.
I can't reproduce this bug. But according to your description, the bug only exists
in case 2. Is that right?
>
> Den
>
>>> g_free(bitmap);
>>> }
>>> @@ -1085,7 +1089,9 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
>>> atomic_rcu_set(&migration_bitmap, bitmap);
>>> qemu_mutex_unlock(&migration_bitmap_mutex);
>>> migration_dirty_pages += new - old;
>>> + qemu_mutex_unlock_iothread();
>>> synchronize_rcu();
>>> + qemu_mutex_lock_iothread();
>> Hmm, I think it is OK to unlock iothread here
>>
>>> g_free(old_bitmap);
>>> }
>>> }
>>>
>
> .
>
* [Qemu-devel] [PATCH v2 0/2] migration: fix deadlock
2015-09-25 8:23 ` Wen Congyang
@ 2015-09-25 9:09 ` Denis V. Lunev
2015-09-25 9:09 ` [Qemu-devel] [PATCH 1/2] migration: bitmap_set is unnecessary as bitmap_new uses g_try_malloc0 Denis V. Lunev
` (2 more replies)
2015-09-29 15:32 ` [Qemu-devel] [PATCH 1/1] " Igor Redko
1 sibling, 3 replies; 18+ messages in thread
From: Denis V. Lunev @ 2015-09-25 9:09 UTC (permalink / raw)
Cc: Igor Redko, Juan Quintela, Anna Melekhova, qemu-devel, Amit Shah,
Denis V. Lunev
Release the QEMU global mutex (QGM) before calling synchronize_rcu().
synchronize_rcu() waits for all readers to finish their critical
sections. There is at least one critical section that tries to
acquire the QGM: address_space_rw() opens an RCU read-side critical
section inside which prepare_mmio_access() tries to acquire the QGM.
Both migration_end() and migration_bitmap_extend() are called from
the main thread, which holds the QGM.
Thus there is a race condition that ends up in deadlock:
main thread                               working thread
Lock QGM                                        |
    |                               Call KVM_EXIT_IO handler
    |                                           |
    |                        Open RCU read-side critical section
Migration cleanup bh                            |
    |                                           |
synchronize_rcu() is                            |
waiting for readers                             |
    |                   prepare_mmio_access() is waiting for QGM
     \                                         /
                       deadlock
The patches here are quick and dirty, compile-tested only, to validate the
architectural approach.
Igor, Anna, could you please run your tests with these patches instead of your
original one? Thank you.
Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Igor Redko <redkoi@virtuozzo.com>
CC: Anna Melekhova <annam@virtuozzo.com>
CC: Juan Quintela <quintela@redhat.com>
CC: Amit Shah <amit.shah@redhat.com>
Denis V. Lunev (2):
migration: bitmap_set is unnecessary as bitmap_new uses g_try_malloc0
migration: fix deadlock
migration/ram.c | 45 ++++++++++++++++++++++++++++-----------------
1 file changed, 28 insertions(+), 17 deletions(-)
--
2.1.4
* [Qemu-devel] [PATCH 1/2] migration: bitmap_set is unnecessary as bitmap_new uses g_try_malloc0
2015-09-25 9:09 ` [Qemu-devel] [PATCH v2 0/2] " Denis V. Lunev
@ 2015-09-25 9:09 ` Denis V. Lunev
2015-09-25 9:24 ` Wen Congyang
2015-09-25 9:09 ` [Qemu-devel] [PATCH 2/2] migration: fix deadlock Denis V. Lunev
2015-09-25 9:46 ` [Qemu-devel] [PATCH v2 0/2] " Wen Congyang
2 siblings, 1 reply; 18+ messages in thread
From: Denis V. Lunev @ 2015-09-25 9:09 UTC (permalink / raw)
Cc: Igor Redko, Juan Quintela, Anna Melekhova, qemu-devel, Amit Shah,
Denis V. Lunev
We can omit the bitmap_set() calls in migration_bitmap_extend() and
ram_save_setup() made just after bitmap_new(), since bitmap_new()
already zeroes the memory it allocates.
Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Igor Redko <redkoi@virtuozzo.com>
CC: Anna Melekhova <annam@virtuozzo.com>
CC: Juan Quintela <quintela@redhat.com>
CC: Amit Shah <amit.shah@redhat.com>
CC: Wen Congyang <wency@cn.fujitsu.com>
---
migration/ram.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 7f007e6..a712c68 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1081,7 +1081,6 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
*/
qemu_mutex_lock(&migration_bitmap_mutex);
bitmap_copy(bitmap, old_bitmap, old);
- bitmap_set(bitmap, old, new - old);
atomic_rcu_set(&migration_bitmap, bitmap);
qemu_mutex_unlock(&migration_bitmap_mutex);
migration_dirty_pages += new - old;
@@ -1146,7 +1145,6 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
ram_bitmap_pages = last_ram_offset() >> TARGET_PAGE_BITS;
migration_bitmap = bitmap_new(ram_bitmap_pages);
- bitmap_set(migration_bitmap, 0, ram_bitmap_pages);
/*
* Count the total number of pages used by ram blocks not including any
--
2.1.4
* [Qemu-devel] [PATCH 2/2] migration: fix deadlock
2015-09-25 9:09 ` [Qemu-devel] [PATCH v2 0/2] " Denis V. Lunev
2015-09-25 9:09 ` [Qemu-devel] [PATCH 1/2] migration: bitmap_set is unnecessary as bitmap_new uses g_try_malloc0 Denis V. Lunev
@ 2015-09-25 9:09 ` Denis V. Lunev
2015-09-25 9:35 ` Wen Congyang
2015-09-25 9:46 ` [Qemu-devel] [PATCH v2 0/2] " Wen Congyang
2 siblings, 1 reply; 18+ messages in thread
From: Denis V. Lunev @ 2015-09-25 9:09 UTC (permalink / raw)
Cc: Igor Redko, Juan Quintela, Anna Melekhova, qemu-devel,
Paolo Bonzini, Amit Shah, Denis V. Lunev
Release the QEMU global mutex (QGM) before calling synchronize_rcu().
synchronize_rcu() waits for all readers to finish their critical
sections. There is at least one critical section that tries to
acquire the QGM: address_space_rw() opens an RCU read-side critical
section inside which prepare_mmio_access() tries to acquire the QGM.
Both migration_end() and migration_bitmap_extend() are called from
the main thread, which holds the QGM.
Thus there is a race condition that ends up in deadlock:
main thread                               working thread
Lock QGM                                        |
    |                               Call KVM_EXIT_IO handler
    |                                           |
    |                        Open RCU read-side critical section
Migration cleanup bh                            |
    |                                           |
synchronize_rcu() is                            |
waiting for readers                             |
    |                   prepare_mmio_access() is waiting for QGM
     \                                         /
                       deadlock
The patch changes the bitmap freeing from a direct g_free() after
synchronize_rcu() to g_free_rcu().
Signed-off-by: Denis V. Lunev <den@openvz.org>
Reported-by: Igor Redko <redkoi@virtuozzo.com>
CC: Igor Redko <redkoi@virtuozzo.com>
CC: Anna Melekhova <annam@virtuozzo.com>
CC: Juan Quintela <quintela@redhat.com>
CC: Amit Shah <amit.shah@redhat.com>
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Wen Congyang <wency@cn.fujitsu.com>
---
migration/ram.c | 43 ++++++++++++++++++++++++++++---------------
1 file changed, 28 insertions(+), 15 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index a712c68..56b6fce 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -221,12 +221,27 @@ static RAMBlock *last_seen_block;
/* This is the last block from where we have sent data */
static RAMBlock *last_sent_block;
static ram_addr_t last_offset;
-static unsigned long *migration_bitmap;
static QemuMutex migration_bitmap_mutex;
static uint64_t migration_dirty_pages;
static uint32_t last_version;
static bool ram_bulk_stage;
+static struct BitmapRcu {
+ struct rcu_head rcu;
+ unsigned long bmap[0];
+} *migration_bitmap_rcu;
+
+static inline struct BitmapRcu *bitmap_new_rcu(long nbits)
+{
+ long len = BITS_TO_LONGS(nbits) * sizeof(unsigned long);
+ struct BitmapRcu *ptr = g_try_malloc0(len + sizeof(struct BitmapRcu));
+ if (ptr == NULL) {
+ abort();
+ }
+ return ptr;
+}
+
+
struct CompressParam {
bool start;
bool done;
@@ -508,7 +523,7 @@ ram_addr_t migration_bitmap_find_and_reset_dirty(MemoryRegion *mr,
unsigned long next;
- bitmap = atomic_rcu_read(&migration_bitmap);
+ bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
if (ram_bulk_stage && nr > base) {
next = nr + 1;
} else {
@@ -526,7 +541,7 @@ ram_addr_t migration_bitmap_find_and_reset_dirty(MemoryRegion *mr,
static void migration_bitmap_sync_range(ram_addr_t start, ram_addr_t length)
{
unsigned long *bitmap;
- bitmap = atomic_rcu_read(&migration_bitmap);
+ bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
migration_dirty_pages +=
cpu_physical_memory_sync_dirty_bitmap(bitmap, start, length);
}
@@ -1029,12 +1044,11 @@ static void migration_end(void)
/* caller have hold iothread lock or is in a bh, so there is
* no writing race against this migration_bitmap
*/
- unsigned long *bitmap = migration_bitmap;
- atomic_rcu_set(&migration_bitmap, NULL);
+ struct BitmapRcu *bitmap = migration_bitmap_rcu;
+ atomic_rcu_set(&migration_bitmap_rcu, NULL);
if (bitmap) {
memory_global_dirty_log_stop();
- synchronize_rcu();
- g_free(bitmap);
+ g_free_rcu(bitmap, rcu);
}
XBZRLE_cache_lock();
@@ -1070,9 +1084,9 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
/* called in qemu main thread, so there is
* no writing race against this migration_bitmap
*/
- if (migration_bitmap) {
- unsigned long *old_bitmap = migration_bitmap, *bitmap;
- bitmap = bitmap_new(new);
+ if (migration_bitmap_rcu) {
+ struct BitmapRcu *old_bitmap = migration_bitmap_rcu, *bitmap;
+ bitmap = bitmap_new_rcu(new);
/* prevent migration_bitmap content from being set bit
* by migration_bitmap_sync_range() at the same time.
@@ -1080,12 +1094,11 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
* at the same time.
*/
qemu_mutex_lock(&migration_bitmap_mutex);
- bitmap_copy(bitmap, old_bitmap, old);
- atomic_rcu_set(&migration_bitmap, bitmap);
+ bitmap_copy(bitmap->bmap, old_bitmap->bmap, old);
+ atomic_rcu_set(&migration_bitmap_rcu, bitmap);
qemu_mutex_unlock(&migration_bitmap_mutex);
migration_dirty_pages += new - old;
- synchronize_rcu();
- g_free(old_bitmap);
+ g_free_rcu(old_bitmap, rcu);
}
}
@@ -1144,7 +1157,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
reset_ram_globals();
ram_bitmap_pages = last_ram_offset() >> TARGET_PAGE_BITS;
- migration_bitmap = bitmap_new(ram_bitmap_pages);
+ migration_bitmap_rcu = bitmap_new_rcu(ram_bitmap_pages);
/*
* Count the total number of pages used by ram blocks not including any
--
2.1.4
* Re: [Qemu-devel] [PATCH 1/2] migration: bitmap_set is unnecessary as bitmap_new uses g_try_malloc0
2015-09-25 9:09 ` [Qemu-devel] [PATCH 1/2] migration: bitmap_set is unnecessary as bitmap_new uses g_try_malloc0 Denis V. Lunev
@ 2015-09-25 9:24 ` Wen Congyang
2015-09-25 9:31 ` Denis V. Lunev
0 siblings, 1 reply; 18+ messages in thread
From: Wen Congyang @ 2015-09-25 9:24 UTC (permalink / raw)
To: Denis V. Lunev
Cc: Amit Shah, Igor Redko, Juan Quintela, qemu-devel, Anna Melekhova
On 09/25/2015 05:09 PM, Denis V. Lunev wrote:
> we can omit calling of bitmap_set in migration_bitmap_extend and
> ram_save_setup just after bitmap_new, which properly zeroes memory
> inside.
This patch is wrong. bitmap_set() sets all the specified bits to 1,
not 0.
>
> Signed-off-by: Denis V. Lunev <den@openvz.org>
> CC: Igor Redko <redkoi@virtuozzo.com>
> CC: Anna Melekhova <annam@virtuozzo.com>
> CC: Juan Quintela <quintela@redhat.com>
> CC: Amit Shah <amit.shah@redhat.com>
> CC: Wen Congyang <wency@cn.fujitsu.com>
> ---
> migration/ram.c | 2 --
> 1 file changed, 2 deletions(-)
>
> diff --git a/migration/ram.c b/migration/ram.c
> index 7f007e6..a712c68 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -1081,7 +1081,6 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
> */
> qemu_mutex_lock(&migration_bitmap_mutex);
> bitmap_copy(bitmap, old_bitmap, old);
> - bitmap_set(bitmap, old, new - old);
> atomic_rcu_set(&migration_bitmap, bitmap);
> qemu_mutex_unlock(&migration_bitmap_mutex);
> migration_dirty_pages += new - old;
> @@ -1146,7 +1145,6 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
>
> ram_bitmap_pages = last_ram_offset() >> TARGET_PAGE_BITS;
> migration_bitmap = bitmap_new(ram_bitmap_pages);
> - bitmap_set(migration_bitmap, 0, ram_bitmap_pages);
>
> /*
> * Count the total number of pages used by ram blocks not including any
>
* Re: [Qemu-devel] [PATCH 1/2] migration: bitmap_set is unnecessary as bitmap_new uses g_try_malloc0
2015-09-25 9:24 ` Wen Congyang
@ 2015-09-25 9:31 ` Denis V. Lunev
2015-09-25 9:37 ` Wen Congyang
0 siblings, 1 reply; 18+ messages in thread
From: Denis V. Lunev @ 2015-09-25 9:31 UTC (permalink / raw)
To: Wen Congyang
Cc: Amit Shah, Igor Redko, Juan Quintela, qemu-devel, Anna Melekhova
On 09/25/2015 12:24 PM, Wen Congyang wrote:
> On 09/25/2015 05:09 PM, Denis V. Lunev wrote:
>> we can omit calling of bitmap_set in migration_bitmap_extend and
>> ram_save_setup just after bitmap_new, which properly zeroes memory
>> inside.
> This patch is wrong. bitmap_set() is set all bits of the memory to 1,
> not 0.
>
OK, then I'll replace g_try_malloc0 with g_try_malloc in the next patch
to avoid the double memset.
>> Signed-off-by: Denis V. Lunev <den@openvz.org>
>> CC: Igor Redko <redkoi@virtuozzo.com>
>> CC: Anna Melekhova <annam@virtuozzo.com>
>> CC: Juan Quintela <quintela@redhat.com>
>> CC: Amit Shah <amit.shah@redhat.com>
>> CC: Wen Congyang <wency@cn.fujitsu.com>
>> ---
>> migration/ram.c | 2 --
>> 1 file changed, 2 deletions(-)
>>
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 7f007e6..a712c68 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -1081,7 +1081,6 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
>> */
>> qemu_mutex_lock(&migration_bitmap_mutex);
>> bitmap_copy(bitmap, old_bitmap, old);
>> - bitmap_set(bitmap, old, new - old);
>> atomic_rcu_set(&migration_bitmap, bitmap);
>> qemu_mutex_unlock(&migration_bitmap_mutex);
>> migration_dirty_pages += new - old;
>> @@ -1146,7 +1145,6 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
>>
>> ram_bitmap_pages = last_ram_offset() >> TARGET_PAGE_BITS;
>> migration_bitmap = bitmap_new(ram_bitmap_pages);
>> - bitmap_set(migration_bitmap, 0, ram_bitmap_pages);
>>
>> /*
>> * Count the total number of pages used by ram blocks not including any
>>
* Re: [Qemu-devel] [PATCH 2/2] migration: fix deadlock
2015-09-25 9:09 ` [Qemu-devel] [PATCH 2/2] migration: fix deadlock Denis V. Lunev
@ 2015-09-25 9:35 ` Wen Congyang
0 siblings, 0 replies; 18+ messages in thread
From: Wen Congyang @ 2015-09-25 9:35 UTC (permalink / raw)
To: Denis V. Lunev
Cc: Igor Redko, Juan Quintela, qemu-devel, Anna Melekhova, Amit Shah,
Paolo Bonzini
On 09/25/2015 05:09 PM, Denis V. Lunev wrote:
> Release qemu global mutex before call synchronize_rcu().
> synchronize_rcu() waiting for all readers to finish their critical
> sections. There is at least one critical section in which we try
> to get QGM (critical section is in address_space_rw() and
> prepare_mmio_access() is trying to aquire QGM).
>
> Both functions (migration_end() and migration_bitmap_extend())
> are called from main thread which is holding QGM.
>
> Thus there is a race condition that ends up with deadlock:
> main thread working thread
> Lock QGA |
> | Call KVM_EXIT_IO handler
> | |
> | Open rcu reader's critical section
> Migration cleanup bh |
> | |
> synchronize_rcu() is |
> waiting for readers |
> | prepare_mmio_access() is waiting for QGM
> \ /
> deadlock
>
> The patch changes bitmap freeing from direct g_free after synchronize_rcu
> to g_free_rcu.
>
> Signed-off-by: Denis V. Lunev <den@openvz.org>
> Reported-by: Igor Redko <redkoi@virtuozzo.com>
> CC: Igor Redko <redkoi@virtuozzo.com>
> CC: Anna Melekhova <annam@virtuozzo.com>
> CC: Juan Quintela <quintela@redhat.com>
> CC: Amit Shah <amit.shah@redhat.com>
> CC: Paolo Bonzini <pbonzini@redhat.com>
> CC: Wen Congyang <wency@cn.fujitsu.com>
> ---
> migration/ram.c | 43 ++++++++++++++++++++++++++++---------------
> 1 file changed, 28 insertions(+), 15 deletions(-)
>
> diff --git a/migration/ram.c b/migration/ram.c
> index a712c68..56b6fce 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -221,12 +221,27 @@ static RAMBlock *last_seen_block;
> /* This is the last block from where we have sent data */
> static RAMBlock *last_sent_block;
> static ram_addr_t last_offset;
> -static unsigned long *migration_bitmap;
> static QemuMutex migration_bitmap_mutex;
> static uint64_t migration_dirty_pages;
> static uint32_t last_version;
> static bool ram_bulk_stage;
>
> +static struct BitmapRcu {
> + struct rcu_head rcu;
> + unsigned long bmap[0];
> +} *migration_bitmap_rcu;
> +
> +static inline struct BitmapRcu *bitmap_new_rcu(long nbits)
> +{
> + long len = BITS_TO_LONGS(nbits) * sizeof(unsigned long);
> + struct BitmapRcu *ptr = g_try_malloc0(len + sizeof(struct BitmapRcu));
It would be better to allocate the memory in two steps: one allocation for
BitmapRcu, and another via bitmap_new(). The caller doesn't need to know how
the bitmap is implemented.
> + if (ptr == NULL) {
> + abort();
> + }
> + return ptr;
> +}
> +
> +
> struct CompressParam {
> bool start;
> bool done;
> @@ -508,7 +523,7 @@ ram_addr_t migration_bitmap_find_and_reset_dirty(MemoryRegion *mr,
>
> unsigned long next;
>
> - bitmap = atomic_rcu_read(&migration_bitmap);
> + bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
> if (ram_bulk_stage && nr > base) {
> next = nr + 1;
> } else {
> @@ -526,7 +541,7 @@ ram_addr_t migration_bitmap_find_and_reset_dirty(MemoryRegion *mr,
> static void migration_bitmap_sync_range(ram_addr_t start, ram_addr_t length)
> {
> unsigned long *bitmap;
> - bitmap = atomic_rcu_read(&migration_bitmap);
> + bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
> migration_dirty_pages +=
> cpu_physical_memory_sync_dirty_bitmap(bitmap, start, length);
> }
> @@ -1029,12 +1044,11 @@ static void migration_end(void)
> /* caller have hold iothread lock or is in a bh, so there is
> * no writing race against this migration_bitmap
> */
> - unsigned long *bitmap = migration_bitmap;
> - atomic_rcu_set(&migration_bitmap, NULL);
> + struct BitmapRcu *bitmap = migration_bitmap_rcu;
> + atomic_rcu_set(&migration_bitmap_rcu, NULL);
> if (bitmap) {
> memory_global_dirty_log_stop();
> - synchronize_rcu();
> - g_free(bitmap);
> + g_free_rcu(bitmap, rcu);
> }
>
> XBZRLE_cache_lock();
> @@ -1070,9 +1084,9 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
> /* called in qemu main thread, so there is
> * no writing race against this migration_bitmap
> */
> - if (migration_bitmap) {
> - unsigned long *old_bitmap = migration_bitmap, *bitmap;
> - bitmap = bitmap_new(new);
> + if (migration_bitmap_rcu) {
> + struct BitmapRcu *old_bitmap = migration_bitmap_rcu, *bitmap;
> + bitmap = bitmap_new_rcu(new);
>
> /* prevent migration_bitmap content from being set bit
> * by migration_bitmap_sync_range() at the same time.
> @@ -1080,12 +1094,11 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
> * at the same time.
> */
> qemu_mutex_lock(&migration_bitmap_mutex);
> - bitmap_copy(bitmap, old_bitmap, old);
> - atomic_rcu_set(&migration_bitmap, bitmap);
> + bitmap_copy(bitmap->bmap, old_bitmap->bmap, old);
> + atomic_rcu_set(&migration_bitmap_rcu, bitmap);
> qemu_mutex_unlock(&migration_bitmap_mutex);
> migration_dirty_pages += new - old;
> - synchronize_rcu();
> - g_free(old_bitmap);
> + g_free_rcu(old_bitmap, rcu);
> }
> }
>
> @@ -1144,7 +1157,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
> reset_ram_globals();
>
> ram_bitmap_pages = last_ram_offset() >> TARGET_PAGE_BITS;
> - migration_bitmap = bitmap_new(ram_bitmap_pages);
> + migration_bitmap_rcu = bitmap_new_rcu(ram_bitmap_pages);
>
> /*
> * Count the total number of pages used by ram blocks not including any
>
* Re: [Qemu-devel] [PATCH 1/2] migration: bitmap_set is unnecessary as bitmap_new uses g_try_malloc0
2015-09-25 9:31 ` Denis V. Lunev
@ 2015-09-25 9:37 ` Wen Congyang
2015-09-25 10:05 ` Denis V. Lunev
0 siblings, 1 reply; 18+ messages in thread
From: Wen Congyang @ 2015-09-25 9:37 UTC (permalink / raw)
To: Denis V. Lunev
Cc: Amit Shah, Igor Redko, Juan Quintela, qemu-devel, Anna Melekhova
On 09/25/2015 05:31 PM, Denis V. Lunev wrote:
> On 09/25/2015 12:24 PM, Wen Congyang wrote:
>> On 09/25/2015 05:09 PM, Denis V. Lunev wrote:
>>> we can omit calling of bitmap_set in migration_bitmap_extend and
>>> ram_save_setup just after bitmap_new, which properly zeroes memory
>>> inside.
>> This patch is wrong. bitmap_set() is set all bits of the memory to 1,
>> not 0.
>>
>
> OK, then I'll replace g_try_malloc0 with g_try_malloc in the next patch to avoid
> double memset
No, bitmap_new() is called in many places. Some callers need zeroed memory,
and some don't...
>
>>> Signed-off-by: Denis V. Lunev <den@openvz.org>
>>> CC: Igor Redko <redkoi@virtuozzo.com>
>>> CC: Anna Melekhova <annam@virtuozzo.com>
>>> CC: Juan Quintela <quintela@redhat.com>
>>> CC: Amit Shah <amit.shah@redhat.com>
>>> CC: Wen Congyang <wency@cn.fujitsu.com>
>>> ---
>>> migration/ram.c | 2 --
>>> 1 file changed, 2 deletions(-)
>>>
>>> diff --git a/migration/ram.c b/migration/ram.c
>>> index 7f007e6..a712c68 100644
>>> --- a/migration/ram.c
>>> +++ b/migration/ram.c
>>> @@ -1081,7 +1081,6 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
>>> */
>>> qemu_mutex_lock(&migration_bitmap_mutex);
>>> bitmap_copy(bitmap, old_bitmap, old);
>>> - bitmap_set(bitmap, old, new - old);
>>> atomic_rcu_set(&migration_bitmap, bitmap);
>>> qemu_mutex_unlock(&migration_bitmap_mutex);
>>> migration_dirty_pages += new - old;
>>> @@ -1146,7 +1145,6 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
>>> ram_bitmap_pages = last_ram_offset() >> TARGET_PAGE_BITS;
>>> migration_bitmap = bitmap_new(ram_bitmap_pages);
>>> - bitmap_set(migration_bitmap, 0, ram_bitmap_pages);
>>> /*
>>> * Count the total number of pages used by ram blocks not including any
>>>
>
> .
>
* Re: [Qemu-devel] [PATCH v2 0/2] migration: fix deadlock
2015-09-25 9:09 ` [Qemu-devel] [PATCH v2 0/2] " Denis V. Lunev
2015-09-25 9:09 ` [Qemu-devel] [PATCH 1/2] migration: bitmap_set is unnecessary as bitmap_new uses g_try_malloc0 Denis V. Lunev
2015-09-25 9:09 ` [Qemu-devel] [PATCH 2/2] migration: fix deadlock Denis V. Lunev
@ 2015-09-25 9:46 ` Wen Congyang
2015-09-28 10:55 ` Igor Redko
2 siblings, 1 reply; 18+ messages in thread
From: Wen Congyang @ 2015-09-25 9:46 UTC (permalink / raw)
To: Denis V. Lunev
Cc: Igor Redko, Juan Quintela, qemu-devel, Anna Melekhova, Amit Shah,
Paolo Bonzini
On 09/25/2015 05:09 PM, Denis V. Lunev wrote:
> Release the QEMU global mutex before calling synchronize_rcu().
> synchronize_rcu() waits for all readers to finish their critical
> sections. There is at least one critical section in which we try
> to take the QGM (the critical section is in address_space_rw(), where
> prepare_mmio_access() tries to acquire the QGM).
>
> Both functions (migration_end() and migration_bitmap_extend())
> are called from the main thread, which holds the QGM.
>
> Thus there is a race condition that ends up in a deadlock:
> main thread working thread
> Lock QGM |
> | Call KVM_EXIT_IO handler
> | |
> | Open rcu reader's critical section
> Migration cleanup bh |
> | |
> synchronize_rcu() is |
> waiting for readers |
> | prepare_mmio_access() is waiting for QGM
> \ /
> deadlock
>
> Patches here are quick and dirty, compile-tested only to validate the
> architectural approach.
>
> Igor, Anna, can you pls start your tests with these patches instead of your
> original one. Thank you.
Can you give me the backtrace of the working thread?
I think it is very bad to wait for a lock inside an RCU reader's critical section.
To Paolo:
Do we allow this in rcu critical section?
Thanks
Wen Congyang
>
> Signed-off-by: Denis V. Lunev <den@openvz.org>
> CC: Igor Redko <redkoi@virtuozzo.com>
> CC: Anna Melekhova <annam@virtuozzo.com>
> CC: Juan Quintela <quintela@redhat.com>
> CC: Amit Shah <amit.shah@redhat.com>
>
> Denis V. Lunev (2):
> migration: bitmap_set is unnecessary as bitmap_new uses g_try_malloc0
> migration: fix deadlock
>
> migration/ram.c | 45 ++++++++++++++++++++++++++++-----------------
> 1 file changed, 28 insertions(+), 17 deletions(-)
>
* Re: [Qemu-devel] [PATCH 1/2] migration: bitmap_set is unnecessary as bitmap_new uses g_try_malloc0
2015-09-25 9:37 ` Wen Congyang
@ 2015-09-25 10:05 ` Denis V. Lunev
0 siblings, 0 replies; 18+ messages in thread
From: Denis V. Lunev @ 2015-09-25 10:05 UTC (permalink / raw)
To: Wen Congyang
Cc: Amit Shah, Igor Redko, Juan Quintela, qemu-devel, Anna Melekhova
On 09/25/2015 12:37 PM, Wen Congyang wrote:
> On 09/25/2015 05:31 PM, Denis V. Lunev wrote:
>> On 09/25/2015 12:24 PM, Wen Congyang wrote:
>>> On 09/25/2015 05:09 PM, Denis V. Lunev wrote:
>>>> We can omit the call to bitmap_set() in migration_bitmap_extend() and
>>>> ram_save_setup() right after bitmap_new(), since bitmap_new() already
>>>> zeroes the memory.
>>> This patch is wrong. bitmap_set() sets all bits of the memory to 1,
>>> not 0.
>>>
>> OK, then I'll replace g_try_malloc0 with g_try_malloc in the next patch
>> to avoid the double memset.
> No, bitmap_new() is called in many places. Some callers need zeroed
> memory, and some don't...
>
You are correct; I'll follow your proposal about the double allocation for
the next patch.
With a specific function like the one I did in v2, g_try_malloc0 can be
replaced.
Anyway, I am fine with your proposal; this is not a (very) big deal.
EINPROGRESS
Den
* Re: [Qemu-devel] [PATCH v2 0/2] migration: fix deadlock
2015-09-25 9:46 ` [Qemu-devel] [PATCH v2 0/2] " Wen Congyang
@ 2015-09-28 10:55 ` Igor Redko
2015-09-28 15:12 ` Igor Redko
2015-09-29 8:47 ` Dr. David Alan Gilbert
0 siblings, 2 replies; 18+ messages in thread
From: Igor Redko @ 2015-09-28 10:55 UTC (permalink / raw)
To: Wen Congyang
Cc: Juan Quintela, qemu-devel, Anna Melekhova, Paolo Bonzini,
Amit Shah, Denis V. Lunev
On Fri., 2015-09-25 at 17:46 +0800, Wen Congyang wrote:
> On 09/25/2015 05:09 PM, Denis V. Lunev wrote:
> > Release the QEMU global mutex before calling synchronize_rcu().
> > synchronize_rcu() waits for all readers to finish their critical
> > sections. There is at least one critical section in which we try
> > to take the QGM (the critical section is in address_space_rw(), where
> > prepare_mmio_access() tries to acquire the QGM).
> >
> > Both functions (migration_end() and migration_bitmap_extend())
> > are called from the main thread, which holds the QGM.
> >
> > Thus there is a race condition that ends up in a deadlock:
> > main thread working thread
> > Lock QGM |
> > | Call KVM_EXIT_IO handler
> > | |
> > | Open rcu reader's critical section
> > Migration cleanup bh |
> > | |
> > synchronize_rcu() is |
> > waiting for readers |
> > | prepare_mmio_access() is waiting for QGM
> > \ /
> > deadlock
> >
> > Patches here are quick and dirty, compile-tested only to validate the
> > architectural approach.
> >
> > Igor, Anna, can you pls start your tests with these patches instead of your
> > original one. Thank you.
>
> Can you give me the backtrace of the working thread?
>
> I think it is very bad to wait for a lock inside an RCU reader's critical section.
#0 __lll_lock_wait () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
#1 0x00007f1ef113ccfd in __GI___pthread_mutex_lock (mutex=0x7f1ef4145ce0 <qemu_global_mutex>) at ../nptl/pthread_mutex_lock.c:80
#2 0x00007f1ef3c36546 in qemu_mutex_lock (mutex=0x7f1ef4145ce0 <qemu_global_mutex>) at util/qemu-thread-posix.c:73
#3 0x00007f1ef387ff46 in qemu_mutex_lock_iothread () at /home/user/my_qemu/qemu/cpus.c:1170
#4 0x00007f1ef38514a2 in prepare_mmio_access (mr=0x7f1ef612f200) at /home/user/my_qemu/qemu/exec.c:2390
#5 0x00007f1ef385157e in address_space_rw (as=0x7f1ef40ec940 <address_space_io>, addr=49402, attrs=..., buf=0x7f1ef3f97000 "\001", len=1, is_write=true)
at /home/user/my_qemu/qemu/exec.c:2425
#6 0x00007f1ef3897c53 in kvm_handle_io (port=49402, attrs=..., data=0x7f1ef3f97000, direction=1, size=1, count=1) at /home/user/my_qemu/qemu/kvm-all.c:1680
#7 0x00007f1ef3898144 in kvm_cpu_exec (cpu=0x7f1ef5010fc0) at /home/user/my_qemu/qemu/kvm-all.c:1849
#8 0x00007f1ef387fa91 in qemu_kvm_cpu_thread_fn (arg=0x7f1ef5010fc0) at /home/user/my_qemu/qemu/cpus.c:979
#9 0x00007f1ef113a6aa in start_thread (arg=0x7f1eef0b9700) at pthread_create.c:333
#10 0x00007f1ef0e6feed in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
>
> >
> > Signed-off-by: Denis V. Lunev <den@openvz.org>
> > CC: Igor Redko <redkoi@virtuozzo.com>
> > CC: Anna Melekhova <annam@virtuozzo.com>
> > CC: Juan Quintela <quintela@redhat.com>
> > CC: Amit Shah <amit.shah@redhat.com>
> >
> > Denis V. Lunev (2):
> > migration: bitmap_set is unnecessary as bitmap_new uses g_try_malloc0
> > migration: fix deadlock
> >
> > migration/ram.c | 45 ++++++++++++++++++++++++++++-----------------
> > 1 file changed, 28 insertions(+), 17 deletions(-)
> >
>
* Re: [Qemu-devel] [PATCH v2 0/2] migration: fix deadlock
2015-09-28 10:55 ` Igor Redko
@ 2015-09-28 15:12 ` Igor Redko
2015-09-29 8:47 ` Dr. David Alan Gilbert
1 sibling, 0 replies; 18+ messages in thread
From: Igor Redko @ 2015-09-28 15:12 UTC (permalink / raw)
To: Wen Congyang
Cc: Juan Quintela, qemu-devel, Anna Melekhova, Paolo Bonzini,
Amit Shah, Denis V. Lunev
On Mon., 2015-09-28 at 13:55 +0300, Igor Redko wrote:
> On Fri., 2015-09-25 at 17:46 +0800, Wen Congyang wrote:
> > On 09/25/2015 05:09 PM, Denis V. Lunev wrote:
> > > Release the QEMU global mutex before calling synchronize_rcu().
> > > synchronize_rcu() waits for all readers to finish their critical
> > > sections. There is at least one critical section in which we try
> > > to take the QGM (the critical section is in address_space_rw(), where
> > > prepare_mmio_access() tries to acquire the QGM).
> > >
> > > Both functions (migration_end() and migration_bitmap_extend())
> > > are called from the main thread, which holds the QGM.
> > >
> > > Thus there is a race condition that ends up in a deadlock:
> > > main thread working thread
> > > Lock QGM |
> > > | Call KVM_EXIT_IO handler
> > > | |
> > > | Open rcu reader's critical section
> > > Migration cleanup bh |
> > > | |
> > > synchronize_rcu() is |
> > > waiting for readers |
> > > | prepare_mmio_access() is waiting for QGM
> > > \ /
> > > deadlock
> > >
> > > Patches here are quick and dirty, compile-tested only to validate the
> > > architectural approach.
> > >
> > > Igor, Anna, can you pls start your tests with these patches instead of your
> > > original one. Thank you.
> >
> > Can you give me the backtrace of the working thread?
> >
> > I think it is very bad to wait for a lock inside an RCU reader's critical section.
>
> #0 __lll_lock_wait () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
> #1 0x00007f1ef113ccfd in __GI___pthread_mutex_lock (mutex=0x7f1ef4145ce0 <qemu_global_mutex>) at ../nptl/pthread_mutex_lock.c:80
> #2 0x00007f1ef3c36546 in qemu_mutex_lock (mutex=0x7f1ef4145ce0 <qemu_global_mutex>) at util/qemu-thread-posix.c:73
> #3 0x00007f1ef387ff46 in qemu_mutex_lock_iothread () at /home/user/my_qemu/qemu/cpus.c:1170
> #4 0x00007f1ef38514a2 in prepare_mmio_access (mr=0x7f1ef612f200) at /home/user/my_qemu/qemu/exec.c:2390
> #5 0x00007f1ef385157e in address_space_rw (as=0x7f1ef40ec940 <address_space_io>, addr=49402, attrs=..., buf=0x7f1ef3f97000 "\001", len=1, is_write=true)
> at /home/user/my_qemu/qemu/exec.c:2425
> #6 0x00007f1ef3897c53 in kvm_handle_io (port=49402, attrs=..., data=0x7f1ef3f97000, direction=1, size=1, count=1) at /home/user/my_qemu/qemu/kvm-all.c:1680
> #7 0x00007f1ef3898144 in kvm_cpu_exec (cpu=0x7f1ef5010fc0) at /home/user/my_qemu/qemu/kvm-all.c:1849
> #8 0x00007f1ef387fa91 in qemu_kvm_cpu_thread_fn (arg=0x7f1ef5010fc0) at /home/user/my_qemu/qemu/cpus.c:979
> #9 0x00007f1ef113a6aa in start_thread (arg=0x7f1eef0b9700) at pthread_create.c:333
> #10 0x00007f1ef0e6feed in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
Backtrace of the main thread:
#0 syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1 0x00007f6ade8e9a17 in futex_wait (ev=0x7f6adf26e120 <rcu_gp_event>, val=4294967295) at util/qemu-thread-posix.c:301
#2 0x00007f6ade8e9b2d in qemu_event_wait (ev=0x7f6adf26e120 <rcu_gp_event>) at util/qemu-thread-posix.c:399
#3 0x00007f6ade8fd7ec in wait_for_readers () at util/rcu.c:120
#4 0x00007f6ade8fd875 in synchronize_rcu () at util/rcu.c:149
#5 0x00007f6ade5640fd in migration_end () at /home/user/my_qemu/qemu/migration/ram.c:1036
#6 0x00007f6ade564194 in ram_migration_cancel (opaque=0x0) at /home/user/my_qemu/qemu/migration/ram.c:1054
#7 0x00007f6ade567d5a in qemu_savevm_state_cancel () at /home/user/my_qemu/qemu/migration/savevm.c:915
#8 0x00007f6ade7b4bdf in migrate_fd_cleanup (opaque=0x7f6aded8fd40 <current_migration>) at migration/migration.c:582
#9 0x00007f6ade804d15 in aio_bh_poll (ctx=0x7f6adf895e50) at async.c:87
#10 0x00007f6ade814dcb in aio_dispatch (ctx=0x7f6adf895e50) at aio-posix.c:135
#11 0x00007f6ade8050b5 in aio_ctx_dispatch (source=0x7f6adf895e50, callback=0x0, user_data=0x0) at async.c:226
#12 0x00007f6adc9a3c3d in g_main_context_dispatch () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
#13 0x00007f6ade813274 in glib_pollfds_poll () at main-loop.c:208
#14 0x00007f6ade813351 in os_host_main_loop_wait (timeout=422420000) at main-loop.c:253
#15 0x00007f6ade813410 in main_loop_wait (nonblocking=0) at main-loop.c:502
#16 0x00007f6ade64ae6a in main_loop () at vl.c:1902
#17 0x00007f6ade652c32 in main (argc=70, argv=0x7ffcc3b674e8, envp=0x7ffcc3b67720) at vl.c:4653
>
> >
> > >
> > > Signed-off-by: Denis V. Lunev <den@openvz.org>
> > > CC: Igor Redko <redkoi@virtuozzo.com>
> > > CC: Anna Melekhova <annam@virtuozzo.com>
> > > CC: Juan Quintela <quintela@redhat.com>
> > > CC: Amit Shah <amit.shah@redhat.com>
> > >
> > > Denis V. Lunev (2):
> > > migration: bitmap_set is unnecessary as bitmap_new uses g_try_malloc0
> > > migration: fix deadlock
> > >
> > > migration/ram.c | 45 ++++++++++++++++++++++++++++-----------------
> > > 1 file changed, 28 insertions(+), 17 deletions(-)
> > >
> >
>
* Re: [Qemu-devel] [PATCH v2 0/2] migration: fix deadlock
2015-09-28 10:55 ` Igor Redko
2015-09-28 15:12 ` Igor Redko
@ 2015-09-29 8:47 ` Dr. David Alan Gilbert
2015-09-30 14:28 ` Igor Redko
1 sibling, 1 reply; 18+ messages in thread
From: Dr. David Alan Gilbert @ 2015-09-29 8:47 UTC (permalink / raw)
To: Igor Redko
Cc: Juan Quintela, Anna Melekhova, qemu-devel, Denis V. Lunev,
Amit Shah, Paolo Bonzini
* Igor Redko (redkoi@virtuozzo.com) wrote:
> On Fri., 2015-09-25 at 17:46 +0800, Wen Congyang wrote:
> > On 09/25/2015 05:09 PM, Denis V. Lunev wrote:
> > > Release the QEMU global mutex before calling synchronize_rcu().
> > > synchronize_rcu() waits for all readers to finish their critical
> > > sections. There is at least one critical section in which we try
> > > to take the QGM (the critical section is in address_space_rw(), where
> > > prepare_mmio_access() tries to acquire the QGM).
> > >
> > > Both functions (migration_end() and migration_bitmap_extend())
> > > are called from the main thread, which holds the QGM.
> > >
> > > Thus there is a race condition that ends up in a deadlock:
> > > main thread working thread
> > > Lock QGM |
> > > | Call KVM_EXIT_IO handler
> > > | |
> > > | Open rcu reader's critical section
> > > Migration cleanup bh |
> > > | |
> > > synchronize_rcu() is |
> > > waiting for readers |
> > > | prepare_mmio_access() is waiting for QGM
> > > \ /
> > > deadlock
> > >
> > > Patches here are quick and dirty, compile-tested only to validate the
> > > architectural approach.
> > >
> > > Igor, Anna, can you pls start your tests with these patches instead of your
> > > original one. Thank you.
> >
> > Can you give me the backtrace of the working thread?
> >
> > I think it is very bad to wait for a lock inside an RCU reader's critical section.
>
> #0 __lll_lock_wait () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
> #1 0x00007f1ef113ccfd in __GI___pthread_mutex_lock (mutex=0x7f1ef4145ce0 <qemu_global_mutex>) at ../nptl/pthread_mutex_lock.c:80
> #2 0x00007f1ef3c36546 in qemu_mutex_lock (mutex=0x7f1ef4145ce0 <qemu_global_mutex>) at util/qemu-thread-posix.c:73
> #3 0x00007f1ef387ff46 in qemu_mutex_lock_iothread () at /home/user/my_qemu/qemu/cpus.c:1170
> #4 0x00007f1ef38514a2 in prepare_mmio_access (mr=0x7f1ef612f200) at /home/user/my_qemu/qemu/exec.c:2390
> #5 0x00007f1ef385157e in address_space_rw (as=0x7f1ef40ec940 <address_space_io>, addr=49402, attrs=..., buf=0x7f1ef3f97000 "\001", len=1, is_write=true)
> at /home/user/my_qemu/qemu/exec.c:2425
> #6 0x00007f1ef3897c53 in kvm_handle_io (port=49402, attrs=..., data=0x7f1ef3f97000, direction=1, size=1, count=1) at /home/user/my_qemu/qemu/kvm-all.c:1680
> #7 0x00007f1ef3898144 in kvm_cpu_exec (cpu=0x7f1ef5010fc0) at /home/user/my_qemu/qemu/kvm-all.c:1849
> #8 0x00007f1ef387fa91 in qemu_kvm_cpu_thread_fn (arg=0x7f1ef5010fc0) at /home/user/my_qemu/qemu/cpus.c:979
> #9 0x00007f1ef113a6aa in start_thread (arg=0x7f1eef0b9700) at pthread_create.c:333
> #10 0x00007f1ef0e6feed in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
Do you have a test to run in the guest that easily triggers this?
Dave
> >
> > >
> > > Signed-off-by: Denis V. Lunev <den@openvz.org>
> > > CC: Igor Redko <redkoi@virtuozzo.com>
> > > CC: Anna Melekhova <annam@virtuozzo.com>
> > > CC: Juan Quintela <quintela@redhat.com>
> > > CC: Amit Shah <amit.shah@redhat.com>
> > >
> > > Denis V. Lunev (2):
> > > migration: bitmap_set is unnecessary as bitmap_new uses g_try_malloc0
> > > migration: fix deadlock
> > >
> > > migration/ram.c | 45 ++++++++++++++++++++++++++++-----------------
> > > 1 file changed, 28 insertions(+), 17 deletions(-)
> > >
> >
>
>
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
* Re: [Qemu-devel] [PATCH 1/1] migration: fix deadlock
2015-09-25 8:23 ` Wen Congyang
2015-09-25 9:09 ` [Qemu-devel] [PATCH v2 0/2] " Denis V. Lunev
@ 2015-09-29 15:32 ` Igor Redko
1 sibling, 0 replies; 18+ messages in thread
From: Igor Redko @ 2015-09-29 15:32 UTC (permalink / raw)
To: Wen Congyang, Denis V. Lunev; +Cc: Amit Shah, qemu-devel, Juan Quintela
On 25.09.2015 11:23, Wen Congyang wrote:
> On 09/25/2015 04:03 PM, Denis V. Lunev wrote:
>> On 09/25/2015 04:21 AM, Wen Congyang wrote:
>>> On 09/24/2015 08:53 PM, Denis V. Lunev wrote:
>>>> From: Igor Redko <redkoi@virtuozzo.com>
>>>>
>>>> Release the QEMU global mutex before calling synchronize_rcu().
>>>> synchronize_rcu() waits for all readers to finish their critical
>>>> sections. There is at least one critical section in which we try
>>>> to take the QGM (the critical section is in address_space_rw(), where
>>>> prepare_mmio_access() tries to acquire the QGM).
>>>>
>>>> Both functions (migration_end() and migration_bitmap_extend())
>>>> are called from the main thread, which holds the QGM.
>>>>
>>>> Thus there is a race condition that ends up in a deadlock:
>>>> main thread working thread
>>>> Lock QGM |
>>>> | Call KVM_EXIT_IO handler
>>>> | |
>>>> | Open rcu reader's critical section
>>>> Migration cleanup bh |
>>>> | |
>>>> synchronize_rcu() is |
>>>> waiting for readers |
>>>> | prepare_mmio_access() is waiting for QGM
>>>> \ /
>>>> deadlock
>>>>
>>>> The patch just releases QGM before calling synchronize_rcu().
>>>>
>>>> Signed-off-by: Igor Redko <redkoi@virtuozzo.com>
>>>> Reviewed-by: Anna Melekhova <annam@virtuozzo.com>
>>>> Signed-off-by: Denis V. Lunev <den@openvz.org>
>>>> CC: Juan Quintela <quintela@redhat.com>
>>>> CC: Amit Shah <amit.shah@redhat.com>
>>>> ---
>>>> migration/ram.c | 6 ++++++
>>>> 1 file changed, 6 insertions(+)
>>>>
>>>> diff --git a/migration/ram.c b/migration/ram.c
>>>> index 7f007e6..d01febc 100644
>>>> --- a/migration/ram.c
>>>> +++ b/migration/ram.c
>>>> @@ -1028,12 +1028,16 @@ static void migration_end(void)
>>>> {
>>>> /* caller have hold iothread lock or is in a bh, so there is
>>>> * no writing race against this migration_bitmap
>>>> + * but rcu used not only for migration_bitmap, so we should
>>>> + * release QGM or we get in deadlock.
>>>> */
>>>> unsigned long *bitmap = migration_bitmap;
>>>> atomic_rcu_set(&migration_bitmap, NULL);
>>>> if (bitmap) {
>>>> memory_global_dirty_log_stop();
>>>> + qemu_mutex_unlock_iothread();
>>>> synchronize_rcu();
>>>> + qemu_mutex_lock_iothread();
>>> migration_end() can be called in two cases:
>>> 1. migration_completed
>>> 2. migration is cancelled
>>>
>>> In case 1, you should not unlock the iothread mutex; otherwise, the VM's state may be changed
>>> unexpectedly.
>>
>> sorry, but there is no good choice here. We should either
>> unlock or not call synchronize_rcu(), which is also an option.
>>
>> Otherwise the rework would have to be much more substantial.
>
> I can't reproduce this bug. But according to your description, the bug only exists
> in case 2. Is that right?
>
When migration completes successfully, the VM has already been stopped
before migration_end() is called. The VM must be running to reproduce
this bug. So yes, the bug exists only in case 2.
FYI
To reproduce this bug you need 2 hosts with qemu+libvirt (host0 and
host1) configured for migration.
0. Create VM on host0 and install centos7
1. Shutdown VM.
2. Start the VM (virsh start <VM_name>) and right after that start migration
to host1 (something like 'virsh migrate --live --verbose <VM_name>
"qemu+ssh://host1/system"')
3. Stop the migration after ~1 sec (after the migration process has
started, but before it completes; for example, when you see "Migration: [
5 %]")
This works for me 9 times out of 10.
deadlock: no response from the VM and no response from the qemu monitor (for
example, 'virsh qemu-monitor-command --hmp <VM_NAME> "info migrate"' will
hang indefinitely)
Another way:
0. Create VM with e1000 network card on host0 and install centos7
1. Run iperf on VM (or any other load on network)
2. Start migration
3. Stop migration before it completed.
For this approach the e1000 network card is essential because it generates
KVM_EXIT_MMIO.
>>
>> Den
>>
>>>> g_free(bitmap);
>>>> }
>>>> @@ -1085,7 +1089,9 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
>>>> atomic_rcu_set(&migration_bitmap, bitmap);
>>>> qemu_mutex_unlock(&migration_bitmap_mutex);
>>>> migration_dirty_pages += new - old;
>>>> + qemu_mutex_unlock_iothread();
>>>> synchronize_rcu();
>>>> + qemu_mutex_lock_iothread();
>>> Hmm, I think it is OK to unlock iothread here
>>>
>>>> g_free(old_bitmap);
>>>> }
>>>> }
>>>>
>>
>> .
>>
>
>
* Re: [Qemu-devel] [PATCH v2 0/2] migration: fix deadlock
2015-09-29 8:47 ` Dr. David Alan Gilbert
@ 2015-09-30 14:28 ` Igor Redko
0 siblings, 0 replies; 18+ messages in thread
From: Igor Redko @ 2015-09-30 14:28 UTC (permalink / raw)
To: Dr. David Alan Gilbert
Cc: Juan Quintela, Anna Melekhova, qemu-devel, Denis V. Lunev,
Amit Shah, Paolo Bonzini
On 29.09.2015 11:47, Dr. David Alan Gilbert wrote:
> * Igor Redko (redkoi@virtuozzo.com) wrote:
>> On Fri., 2015-09-25 at 17:46 +0800, Wen Congyang wrote:
>>> On 09/25/2015 05:09 PM, Denis V. Lunev wrote:
>>>> Release the QEMU global mutex before calling synchronize_rcu().
>>>> synchronize_rcu() waits for all readers to finish their critical
>>>> sections. There is at least one critical section in which we try
>>>> to take the QGM (the critical section is in address_space_rw(), where
>>>> prepare_mmio_access() tries to acquire the QGM).
>>>>
>>>> Both functions (migration_end() and migration_bitmap_extend())
>>>> are called from the main thread, which holds the QGM.
>>>>
>>>> Thus there is a race condition that ends up in a deadlock:
>>>> main thread working thread
>>>> Lock QGM |
>>>> | Call KVM_EXIT_IO handler
>>>> | |
>>>> | Open rcu reader's critical section
>>>> Migration cleanup bh |
>>>> | |
>>>> synchronize_rcu() is |
>>>> waiting for readers |
>>>> | prepare_mmio_access() is waiting for QGM
>>>> \ /
>>>> deadlock
>>>>
>>>> Patches here are quick and dirty, compile-tested only to validate the
>>>> architectural approach.
>>>>
>>>> Igor, Anna, can you pls start your tests with these patches instead of your
>>>> original one. Thank you.
>>>
>>> Can you give me the backtrace of the working thread?
>>>
>>> I think it is very bad to wait for a lock inside an RCU reader's critical section.
>>
>> #0 __lll_lock_wait () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
>> #1 0x00007f1ef113ccfd in __GI___pthread_mutex_lock (mutex=0x7f1ef4145ce0 <qemu_global_mutex>) at ../nptl/pthread_mutex_lock.c:80
>> #2 0x00007f1ef3c36546 in qemu_mutex_lock (mutex=0x7f1ef4145ce0 <qemu_global_mutex>) at util/qemu-thread-posix.c:73
>> #3 0x00007f1ef387ff46 in qemu_mutex_lock_iothread () at /home/user/my_qemu/qemu/cpus.c:1170
>> #4 0x00007f1ef38514a2 in prepare_mmio_access (mr=0x7f1ef612f200) at /home/user/my_qemu/qemu/exec.c:2390
>> #5 0x00007f1ef385157e in address_space_rw (as=0x7f1ef40ec940 <address_space_io>, addr=49402, attrs=..., buf=0x7f1ef3f97000 "\001", len=1, is_write=true)
>> at /home/user/my_qemu/qemu/exec.c:2425
>> #6 0x00007f1ef3897c53 in kvm_handle_io (port=49402, attrs=..., data=0x7f1ef3f97000, direction=1, size=1, count=1) at /home/user/my_qemu/qemu/kvm-all.c:1680
>> #7 0x00007f1ef3898144 in kvm_cpu_exec (cpu=0x7f1ef5010fc0) at /home/user/my_qemu/qemu/kvm-all.c:1849
>> #8 0x00007f1ef387fa91 in qemu_kvm_cpu_thread_fn (arg=0x7f1ef5010fc0) at /home/user/my_qemu/qemu/cpus.c:979
>> #9 0x00007f1ef113a6aa in start_thread (arg=0x7f1eef0b9700) at pthread_create.c:333
>> #10 0x00007f1ef0e6feed in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
>
> Do you have a test to run in the guest that easily triggers this?
>
> Dave
There are two ways to trigger this. Both of them need 2 hosts with
qemu+libvirt (host0 and host1) configured for migration.
First way:
0. Create VM on host0 and install centos7
1. Shutdown VM.
2. Start the VM (virsh start <VM_name>) and right after that start migration
to host1 (something like 'virsh migrate --live --verbose <VM_name>
"qemu+ssh://host1/system"')
3. Stop the migration after ~1 sec (after the migration process has
started, but before it completes; for example, when you see "Migration: [
5 %]")
deadlock: no response from the VM and no response from the qemu monitor (for
example, 'virsh qemu-monitor-command --hmp <VM_NAME> "info migrate"' will
hang indefinitely). This happens 9 times out of 10.
Second way:
0. Create VM with e1000 network card on host0 and install centos7
1. Run iperf on VM (or any other load on network)
2. Start migration
3. Stop migration before it completed.
For this approach the e1000 network card is essential because it generates
KVM_EXIT_MMIO.
Igor
>>>
>>>>
>>>> Signed-off-by: Denis V. Lunev <den@openvz.org>
>>>> CC: Igor Redko <redkoi@virtuozzo.com>
>>>> CC: Anna Melekhova <annam@virtuozzo.com>
>>>> CC: Juan Quintela <quintela@redhat.com>
>>>> CC: Amit Shah <amit.shah@redhat.com>
>>>>
>>>> Denis V. Lunev (2):
>>>> migration: bitmap_set is unnecessary as bitmap_new uses g_try_malloc0
>>>> migration: fix deadlock
>>>>
>>>> migration/ram.c | 45 ++++++++++++++++++++++++++++-----------------
>>>> 1 file changed, 28 insertions(+), 17 deletions(-)
>>>>
>>>
>>
>>
>>
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>
end of thread, other threads:[~2015-09-30 14:28 UTC | newest]
Thread overview: 18+ messages
2015-09-24 12:53 [Qemu-devel] [PATCH 1/1] migration: fix deadlock Denis V. Lunev
2015-09-25 1:21 ` Wen Congyang
2015-09-25 8:03 ` Denis V. Lunev
2015-09-25 8:23 ` Wen Congyang
2015-09-25 9:09 ` [Qemu-devel] [PATCH v2 0/2] " Denis V. Lunev
2015-09-25 9:09 ` [Qemu-devel] [PATCH 1/2] migration: bitmap_set is unnecessary as bitmap_new uses g_try_malloc0 Denis V. Lunev
2015-09-25 9:24 ` Wen Congyang
2015-09-25 9:31 ` Denis V. Lunev
2015-09-25 9:37 ` Wen Congyang
2015-09-25 10:05 ` Denis V. Lunev
2015-09-25 9:09 ` [Qemu-devel] [PATCH 2/2] migration: fix deadlock Denis V. Lunev
2015-09-25 9:35 ` Wen Congyang
2015-09-25 9:46 ` [Qemu-devel] [PATCH v2 0/2] " Wen Congyang
2015-09-28 10:55 ` Igor Redko
2015-09-28 15:12 ` Igor Redko
2015-09-29 8:47 ` Dr. David Alan Gilbert
2015-09-30 14:28 ` Igor Redko
2015-09-29 15:32 ` [Qemu-devel] [PATCH 1/1] " Igor Redko