From: David Hildenbrand <david@redhat.com>
To: Peter Xu <peterx@redhat.com>, "Wang, Wei W" <wei.w.wang@intel.com>
Cc: Hailiang Zhang <zhang.zhanghailiang@huawei.com>,
Juan Quintela <quintela@redhat.com>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
Leonardo Bras Soares Passos <lsoaresp@redhat.com>
Subject: Re: [PATCH] migration: Move bitmap_mutex out of migration_bitmap_clear_dirty()
Date: Thu, 1 Jul 2021 16:21:38 +0200
Message-ID: <304fc749-03a0-b58d-05cc-f0d78350e015@redhat.com>
In-Reply-To: <YN26SDxZS1aShbHi@t490s>

On 01.07.21 14:51, Peter Xu wrote:
> On Thu, Jul 01, 2021 at 04:42:38AM +0000, Wang, Wei W wrote:
>> On Thursday, July 1, 2021 4:08 AM, Peter Xu wrote:
>>> Taking the mutex for every dirty bit we clear is too slow, especially since
>>> we take/release it even when the bit is already clear. So far the lock is
>>> only used to synchronize the special case of qemu_guest_free_page_hint()
>>> against the migration thread, nothing really serious yet. Let's move the
>>> lock upwards.
>>>
>>> There are two callers of migration_bitmap_clear_dirty().
>>>
>>> For migration, move it into ram_save_iterate(). With the help of the
>>> MAX_WAIT logic, a single ram_save_iterate() call runs for no more than
>>> roughly 50ms, so we take the lock once at its entry. It also means any
>>> caller of qemu_guest_free_page_hint() can be delayed by up to that long;
>>> but that should be very rare, only during migration, and I don't see a
>>> problem with it.
>>>
>>> For COLO, move it up to colo_flush_ram_cache(). I think COLO forgot to
>>> take that lock even when calling ramblock_sync_dirty_bitmap(), whereas
>>> migration_bitmap_sync(), for example, takes it correctly. So let the mutex
>>> cover both the ramblock_sync_dirty_bitmap() and
>>> migration_bitmap_clear_dirty() calls.
>>>
>>> It would even be possible to drop the lock entirely and use atomic
>>> operations on rb->bmap and the variable migration_dirty_pages. I didn't do
>>> that, partly to stay safe, and partly because it's hard to predict whether
>>> the frequent atomic ops would bring overhead of their own, e.g. on huge
>>> VMs where this happens very often. If that ever becomes a problem, we can
>>> keep a local counter and flush it with atomic ops periodically. Keep it
>>> simple for now.
>>>
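To illustrate the movement described above, a rough sketch only (clear_bmap
handling omitted; not the actual diff):

    /* Before: the guard is taken and released once per bit cleared. */
    static inline bool migration_bitmap_clear_dirty(RAMState *rs,
                                                    RAMBlock *rb,
                                                    unsigned long page)
    {
        bool ret;

        QEMU_LOCK_GUARD(&rs->bitmap_mutex);

        ret = test_and_clear_bit(page, rb->bmap);
        if (ret) {
            rs->migration_dirty_pages--;
        }
        return ret;
    }

    /* After: the per-bit guard is gone; instead the caller takes
     * rs->bitmap_mutex once, e.g. around the send loop in
     * ram_save_iterate(), which MAX_WAIT already bounds to ~50ms per
     * call. The lock-free alternative mentioned above would instead
     * update the counter with e.g.
     * qatomic_dec(&rs->migration_dirty_pages). */
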
>>
>> If free page opt is enabled, a 50ms wait might be too long for handling
>> just one hint (via qemu_guest_free_page_hint)? How about taking the lock
>> conditionally?
>> e.g.
>> #define QEMU_LOCK_GUARD_COND(lock, cond) { \
>>     if (cond)                              \
>>         QEMU_LOCK_GUARD(lock);             \
>> }
>> Then in migration_bitmap_clear_dirty:
>> QEMU_LOCK_GUARD_COND(&rs->bitmap_mutex, rs->fpo_enabled);
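Note that as sketched, the scoped guard would go out of scope (and drop the
lock) right at the macro's closing brace; expanded inline, the idea is
presumably something like this (rs->fpo_enabled being the proposed flag, not
an existing field):

    /* Only pay for the lock when free page hinting is in use. */
    if (rs->fpo_enabled) {
        qemu_mutex_lock(&rs->bitmap_mutex);
    }
    ret = test_and_clear_bit(page, rb->bmap);
    if (ret) {
        rs->migration_dirty_pages--;
    }
    if (rs->fpo_enabled) {
        qemu_mutex_unlock(&rs->bitmap_mutex);
    }
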
>
> Yeah that's indeed some kind of comment I'd like to get from either you or
> David when I add the cc list.. :)
>
> I was curious how that would affect the guest when the free page hint helper
> can get stuck for a while. Per my understanding the blocked thread here runs
> asynchronously with respect to the guest, since both virtio-balloon and
> virtio-mem are fully async. If so, would it really affect the guest much? Is
> it still tolerable if it only happens during migration?
For virtio-mem, we call qemu_guest_free_page_hint() synchronously from
the migration thread, directly via the precopy notifier. I recently sent
patches that stop using qemu_guest_free_page_hint() from virtio-mem
code. Until then, virtio-mem shouldn't care too much about the change
here, I guess, as it doesn't interact with guest requests:
https://lkml.kernel.org/r/20210616162940.28630-1-david@redhat.com

For virtio-balloon, it's called via the (asynchronous) iothread.
>
> Taking that mutex for each dirty bit is still overkill to me, regardless of
> whether it's "conditional" or not. If I were the cloud admin, I would prefer
> that migration finishes earlier, imho, rather than freeing a few more pages
> on the host (after migration, all pages will be gone anyway!). If it still
> blocks the guest in some unhealthy way, I'd still prefer to take the lock
> here, but maybe hold it for less than 50ms.
Spoiler alert: the introduction of clean bitmaps already partially broke
free page hinting (clearing is deferred -- and might never happen at all
if we don't migrate *any* page within a clean bitmap chunk, so such
pages actually remain dirty ...). "Broke" here means that pages still
get migrated even though the guest reported them as free. We'd actually
not want to use clean bmaps with free page hinting ... long story short,
free page hinting is a very fragile beast already, and some of the hints
are basically ignored and thus pure overhead ...
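
Roughly the mechanism, as a condensed sketch (names match migration/ram.c,
but the real flow is more involved):

    /* A hint clears the bit in rb->bmap immediately ... */
    qemu_guest_free_page_hint(addr, len);

    /* ... but the KVM dirty log is only cleared lazily, per clean
     * bitmap chunk, when some page of that chunk gets migrated: */
    if (rb->clear_bmap && clear_bmap_test_and_clear(rb, page)) {
        memory_region_clear_dirty_bitmap(rb->mr, start, size);
    }

    /* If no page in the chunk is ever migrated, that clear never
     * happens, the page stays dirty in KVM, and the next bitmap sync
     * re-sets its bit in rb->bmap -- so the hinted page gets migrated
     * after all. */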
--
Thanks,
David / dhildenb