From: Jianjun Duan <duanj@linux.vnet.ibm.com>
To: Lai Jiangshan <jiangshanlai@gmail.com>
Cc: Juan Quintela <quintela@redhat.com>,
Peter Crosthwaite <crosthwaite.peter@gmail.com>,
qemu-devel@nongnu.org, Markus Armbruster <armbru@redhat.com>,
Amit Shah <amit.shah@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Richard Henderson <rth@twiddle.net>
Subject: Re: [Qemu-devel] [PATCH] add migration capability to bypass the shared memory
Date: Mon, 16 Jan 2017 09:38:04 -0800
Message-ID: <199ab4e3-9c30-6275-41e6-c7eab77d5e31@linux.vnet.ibm.com>
In-Reply-To: <CAJhGHyC0Jfjd8yzVc3qCoafX_yPFT3+pNkQrgvms3eKfCVGefA@mail.gmail.com>
On 01/12/2017 08:44 PM, Lai Jiangshan wrote:
> On Fri, Jan 13, 2017 at 3:19 AM, Jianjun Duan <duanj@linux.vnet.ibm.com> wrote:
>> I have a question about the interplay between bypassing shared memory in
>> migration and memory hotplug. If a big chunk of memory is plugged into the
>> source guest, will the shared memory still be mapped the same way on the
>> guest? I.e., will the mapping from guest physical address to host virtual
>> address stay the same?
>
> I don't understand the question; the patch doesn't change
> memory hotplug, nor the way the pages are mapped
> in guest physical address space.
>
Let me try to rephrase it. Is there a scenario in which the mapping
could change? If the mapping could change, will bypassing shared memory
still work?
Thanks,
Jianjun
>>
>> Thanks,
>> Jianjun
>>
>>
>> On 08/29/2016 09:11 PM, Lai Jiangshan wrote:
>>> On Wed, Aug 10, 2016 at 5:03 PM, Juan Quintela <quintela@redhat.com> wrote:
>>>> Lai Jiangshan <jiangshanlai@gmail.com> wrote:
>>>>
>>>> Hi
>>>>
>>>> First of all, I like the patchset a lot, but I would prefer to split it
>>>> to make it easier to find "possible" bugs along the way, especially in postcopy, but not only there.
>>>
>>> Hello, thanks for review and comments
>>>
>>> I tried to make the patch sane and tight.
>>> I don't see a strong reason to split it, and doing so would only complicate the patch.
>>>
>>>>
>>>> [very nice description of the patch]
>>>>
>>>> Nothing to say about the QMP and shared memory detection, looks correct
>>>> to me.
>>>>
>>>>> diff --git a/migration/ram.c b/migration/ram.c
>>>>> index 815bc0e..880972d 100644
>>>>> --- a/migration/ram.c
>>>>> +++ b/migration/ram.c
>>>>> @@ -605,6 +605,28 @@ static void migration_bitmap_sync_init(void)
>>>>>      num_dirty_pages_period = 0;
>>>>>      xbzrle_cache_miss_prev = 0;
>>>>>      iterations_prev = 0;
>>>>> +    migration_dirty_pages = 0;
>>>>> +}
>>>>> +
>>>>> +static void migration_bitmap_init(unsigned long *bitmap)
>>>>> +{
>>>>> +    RAMBlock *block;
>>>>> +
>>>>> +    bitmap_clear(bitmap, 0, last_ram_offset() >> TARGET_PAGE_BITS);
>>>>> +    rcu_read_lock();
>>>>> +    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
>>>>> +        if (!migrate_bypass_shared_memory() || !qemu_ram_is_shared(block)) {
>>>>> +            bitmap_set(bitmap, block->offset >> TARGET_PAGE_BITS,
>>>>> +                       block->used_length >> TARGET_PAGE_BITS);
>>>>> +
>>>>> +            /*
>>>>> +             * Count the total number of pages used by ram blocks not including
>>>>> +             * any gaps due to alignment or unplugs.
>>>>> +             */
>>>>> +            migration_dirty_pages += block->used_length >> TARGET_PAGE_BITS;
>>>>> +        }
>>>>> +    }
>>>>> +    rcu_read_unlock();
>>>>>  }
>>>>
>>>> We could split this function out into a separate patch.
>>>
>>> It calls the new function migrate_bypass_shared_memory(),
>>> so it is not a good idea to split it out.
>>>
>>>> I haven't fully checked
>>>> whether we care about taking the RCU lock here. What I am more
>>>> interested in is knowing what happens when we don't set
>>>> migration_dirty_pages to the full "possible" number of memory pages.
>>>
>>> I haven't tested it with postcopy; I don't know how to use postcopy.
>>> From my review I can't see any obvious bugs in it.
>>>
>>> I don't think there is any good reason to use migrate_bypass
>>> and postcopy together. I can disable migrate_bypass
>>> when postcopy==true if you want.
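
A minimal sketch of what such a guard could look like, assuming it sits early in
the migration setup path with an Error **errp in scope; the placement and the
message text are illustrative only, not part of the posted patch:

/* Hypothetical guard: refuse the capability combination up front. */
if (migrate_bypass_shared_memory() && migrate_postcopy_ram()) {
    error_setg(errp, "bypass-shared-memory is not compatible with postcopy");
    return;
}
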
>>>
>>>>
>>>> While we are here, should we also check for ROM regions?
>>>>
>>>> BTW, couldn't we use:
>>>>
>>>> int qemu_ram_foreach_block(RAMBlockIterFunc func, void *opaque)
>>>> {
>>>>     RAMBlock *block;
>>>>     int ret = 0;
>>>>
>>>>     rcu_read_lock();
>>>>     QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
>>>>         ret = func(block->idstr, block->host, block->offset,
>>>>                    block->used_length, opaque);
>>>>         if (ret) {
>>>>             break;
>>>>         }
>>>>     }
>>>>     rcu_read_unlock();
>>>>     return ret;
>>>> }
>>>>
>>>
>>> The patch introduces only one "QLIST_FOREACH_RCU(ram_list.blocks)",
>>> but:
>>> # git grep 'QLIST_FOREACH_RCU.*ram_list' | wc -l
>>> # 16
>>>
>>> I don't want to convert this loop to qemu_ram_foreach_block()
>>> and then have to touch another 15 places as well.
>>> I hope someone does that conversion after this is merged.
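
For illustration only, a rough sketch of what a qemu_ram_foreach_block()
callback could look like for this loop; the name bitmap_init_func and the
lookup via qemu_ram_block_by_name() are assumptions made for the sketch, not
part of the posted patch:

static int bitmap_init_func(const char *block_name, void *host_addr,
                            ram_addr_t offset, ram_addr_t length,
                            void *opaque)
{
    unsigned long *bitmap = opaque;
    /* The iterator only passes the block name, so look the block up
     * again to query its "shared" flag. */
    RAMBlock *block = qemu_ram_block_by_name(block_name);

    if (!migrate_bypass_shared_memory() || !qemu_ram_is_shared(block)) {
        bitmap_set(bitmap, offset >> TARGET_PAGE_BITS,
                   length >> TARGET_PAGE_BITS);
        /* Count only pages that will actually be migrated. */
        migration_dirty_pages += length >> TARGET_PAGE_BITS;
    }
    return 0;
}

migration_bitmap_init() would then reduce to the bitmap_clear() plus a single
qemu_ram_foreach_block(bitmap_init_func, bitmap) call.
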
>>>
>>>
>>>>
>>>>
>>>>>
>>>>>  static void migration_bitmap_sync(void)
>>>>> @@ -631,7 +653,9 @@ static void migration_bitmap_sync(void)
>>>>>      qemu_mutex_lock(&migration_bitmap_mutex);
>>>>>      rcu_read_lock();
>>>>>      QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
>>>>> -        migration_bitmap_sync_range(block->offset, block->used_length);
>>>>> +        if (!migrate_bypass_shared_memory() || !qemu_ram_is_shared(block)) {
>>>>> +            migration_bitmap_sync_range(block->offset, block->used_length);
>>>>> +        }
>>>>>      }
>>>>>      rcu_read_unlock();
>>>>>      qemu_mutex_unlock(&migration_bitmap_mutex);
>>>>
>>>> Oops, another place where we were not using qemu_ram_foreach_block :p
>>>>
>>>>
>>>>> @@ -1926,19 +1950,14 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
>>>>>      ram_bitmap_pages = last_ram_offset() >> TARGET_PAGE_BITS;
>>>>>      migration_bitmap_rcu = g_new0(struct BitmapRcu, 1);
>>>>>      migration_bitmap_rcu->bmap = bitmap_new(ram_bitmap_pages);
>>>>> -    bitmap_set(migration_bitmap_rcu->bmap, 0, ram_bitmap_pages);
>>>>> +    migration_bitmap_init(migration_bitmap_rcu->bmap);
>>>>>
>>>>>      if (migrate_postcopy_ram()) {
>>>>>          migration_bitmap_rcu->unsentmap = bitmap_new(ram_bitmap_pages);
>>>>> -        bitmap_set(migration_bitmap_rcu->unsentmap, 0, ram_bitmap_pages);
>>>>> +        bitmap_copy(migration_bitmap_rcu->unsentmap,
>>>>> +                    migration_bitmap_rcu->bmap, ram_bitmap_pages);
>>>>>      }
>>>>
>>>> I think that if we go this route, we should move the whole if block inside
>>>> migration_bitmap_init?
>>>
>>> good! I will do it when I update the patch.
>>>
>>> Thanks,
>>> Lai
>>>
>>>>
>>>>>
>>>>> -    /*
>>>>> -     * Count the total number of pages used by ram blocks not including any
>>>>> -     * gaps due to alignment or unplugs.
>>>>> -     */
>>>>> -    migration_dirty_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
>>>>> -
>>>>>      memory_global_dirty_log_start();
>>>>>      migration_bitmap_sync();
>>>>>      qemu_mutex_unlock_ramlist();
>>>>
>>>>
>>>> As I said, I am very happy with the patch. And it turned out much simpler than I
>>>> would have expected.
>>>>
>>>> Thanks, Juan.
>>>
>>
>