From: David Hildenbrand <david@redhat.com>
To: Peter Xu <peterx@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
Pankaj Gupta <pankaj.gupta@cloud.ionos.com>,
Juan Quintela <quintela@redhat.com>,
teawater <teawaterz@linux.alibaba.com>,
"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
qemu-devel@nongnu.org,
Alex Williamson <alex.williamson@redhat.com>,
Marek Kedzierski <mkedzier@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Andrey Gruzdev <andrey.gruzdev@virtuozzo.com>,
Wei Yang <richard.weiyang@linux.alibaba.com>
Subject: Re: [PATCH v2 0/6] migration/ram: Optimize for virtio-mem via RamDiscardManager
Date: Thu, 29 Jul 2021 10:14:47 +0200
Message-ID: <a1c80a40-2828-3373-c906-870f0dbb6db8@redhat.com>
In-Reply-To: <YQG74AsEBE0uaN4U@t490s>
>>>>> The thing is, I still think this extra operation during sync() can be avoided by
>>>>> simply clearing the dirty log during bitmap init, then.. why not? :)
>>>>
>>>> I guess clearing the dirty log (especially in KVM) might be more expensive.
>>>
>>> If we send one ioctl per cb that'll be expensive for sure. I think it'll be
>>> fine if we send one clear ioctl to kvm, summarizing the whole bitmap to clear.
>>>
>>> The other thing is that, imho, having the overhead during bitmap init is always
>>> better than having it during sync(). :)
>>
>> Oh, right, so you're saying that after we set the dirty bmap to all ones and
>> exclude the discarded parts by setting the respective bits to 0, we simply
>> issue a clear of the whole area?
>>
>> For now I assumed we would have to clear per cb.
>
> Hmm, when I replied I thought we could pass a bitmap to ->log_clear(), but I
> just remembered the memory API actually hides the bitmap interface..
>
> Resetting the whole region works, but it'll slow down migration start; more
> importantly, that'll happen under the mmu write lock, so we'd lose most of
> the clear-log benefit for the initial round of migration and stall guest #pf
> handling in the meantime...
>
> Let's try to do that in the cb()s as you mentioned; I think that'll still be
> okay, because the clear-log block size is much larger (1 GiB) so far: 1 TiB is
> worst case ~1000 ioctls during bitmap init, slightly better than 250k calls
> during sync(), maybe? :)
Just to make sure I get it right: what you propose is calling
migration_clear_memory_region_dirty_bitmap_range() from each cb(). Due
to the clear_bmap, we will end up clearing each chunk (e.g., 1 GiB) at
most once.
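
FWIW, a minimal toy sketch of that chunk-deduplication idea (not the actual
QEMU code; clear_bmap[], log_clear_chunk() and clear_discarded_range() are
placeholders for the real clear_bmap handling and the per-chunk
clear-dirty-log ioctl, and 1 GiB is the chunk size assumed above):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CLEAR_CHUNK_SIZE  (1ULL << 30)   /* assumed 1 GiB clear-log granularity */
#define REGION_SIZE       (1ULL << 40)   /* 1 TiB region, as in the example */
#define NR_CHUNKS         (REGION_SIZE / CLEAR_CHUNK_SIZE)

static bool clear_bmap[NR_CHUNKS];       /* stand-in for the real clear_bmap */
static uint64_t nr_clear_ioctls;         /* per-chunk clears issued so far */

/* Placeholder for the per-chunk clear-dirty-log ioctl to KVM. */
static void log_clear_chunk(uint64_t chunk_idx)
{
    (void)chunk_idx;                     /* the real ioctl would target this chunk */
    nr_clear_ioctls++;
}

/*
 * Called once per discarded range (i.e., from each cb()): clear the dirty log
 * chunk-wise, but only for chunks that have not been cleared yet.
 */
static void clear_discarded_range(uint64_t start, uint64_t length)
{
    uint64_t first = start / CLEAR_CHUNK_SIZE;
    uint64_t last = (start + length - 1) / CLEAR_CHUNK_SIZE;

    for (uint64_t idx = first; idx <= last; idx++) {
        if (clear_bmap[idx]) {           /* each chunk is cleared at most once */
            clear_bmap[idx] = false;
            log_clear_chunk(idx);
        }
    }
}

int main(void)
{
    for (uint64_t idx = 0; idx < NR_CHUNKS; idx++) {
        clear_bmap[idx] = true;          /* every chunk still needs its first clear */
    }

    /* Two discarded ranges within the same 1 GiB chunk -> a single clear. */
    clear_discarded_range(0, 4ULL << 20);
    clear_discarded_range(512ULL << 20, 4ULL << 20);
    printf("clear ioctls: %llu\n", (unsigned long long)nr_clear_ioctls);  /* 1 */
    return 0;
}

So many small discarded ranges falling into the same chunk collapse into a
single clear call, which is where the "at most once per chunk" property comes
from.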
But if our layout is fragmented, we can actually end up clearing all
chunks (1024 ioctls for 1 TiB), resulting in a slower migration start.
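
To make that worst case explicit, building on the toy sketch above (same
placeholder names, same assumed 1 GiB granularity):

/* Fully fragmented layout: at least one discarded range per 1 GiB chunk, so
 * every chunk gets cleared exactly once -> 1 TiB / 1 GiB = 1024 clear ioctls
 * at migration start. */
static void simulate_fragmented_worst_case(void)
{
    for (uint64_t idx = 0; idx < NR_CHUNKS; idx++) {
        clear_bmap[idx] = true;                  /* reset the toy state */
    }
    nr_clear_ioctls = 0;

    for (uint64_t idx = 0; idx < NR_CHUNKS; idx++) {
        /* one small (e.g., 4 MiB) discarded range inside each chunk */
        clear_discarded_range(idx * CLEAR_CHUNK_SIZE, 4ULL << 20);
    }

    printf("worst-case clear ioctls: %llu\n",
           (unsigned long long)nr_clear_ioctls);  /* prints 1024 */
}

Calling that instead of the two-range example in main() prints 1024, i.e.,
one clear ioctl per 1 GiB chunk of the 1 TiB region: the cost scales with the
number of touched chunks, not with the number of discarded ranges, but a
sufficiently fragmented layout still touches every chunk.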
Any gut feeling for how much slower migration start could be with largish
(e.g., 1 TiB) regions?
--
Thanks,
David / dhildenb