From: Peter Xu <peterx@redhat.com>
To: "Wang, Wei W" <wei.w.wang@intel.com>
Cc: Hailiang Zhang <zhang.zhanghailiang@huawei.com>,
Juan Quintela <quintela@redhat.com>,
David Hildenbrand <david@redhat.com>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
Leonardo Bras Soares Passos <lsoaresp@redhat.com>
Subject: Re: [PATCH] migration: Move bitmap_mutex out of migration_bitmap_clear_dirty()
Date: Tue, 13 Jul 2021 11:59:12 -0400 [thread overview]
Message-ID: <YO24UM1oWQqLMtvZ@t490s> (raw)
In-Reply-To: <9a8224c9a02b4d9395f6581b24deaa54@intel.com>
On Tue, Jul 13, 2021 at 08:40:21AM +0000, Wang, Wei W wrote:
> On Thursday, July 1, 2021 4:08 AM, Peter Xu wrote:
> > Taking the mutex for every single dirty bit to clear is too slow, especially since
> > we take/release it even when the dirty bit is already cleared. So far it's only used
> > to sync the special case of qemu_guest_free_page_hint() against the migration
> > thread, nothing really that critical yet. Let's move the lock up to the callers.
> >
> > There're two callers of migration_bitmap_clear_dirty().
> >
> > For migration, move it into ram_save_iterate(). With the help of the MAX_WAIT
> > logic, each ram_save_iterate() call runs for roughly 50ms at most, so we take the
> > lock once at its entry. It also means any caller of qemu_guest_free_page_hint()
> > can be delayed by up to that long; but that should be very rare, only during
> > migration, and I don't see a problem with it.
> >
> > For COLO, move it up to colo_flush_ram_cache(). I think COLO forgot to take
> > that lock even when calling ramblock_sync_dirty_bitmap(), whereas
> > migration_bitmap_sync() is an example that takes it correctly. So let the mutex
> > cover both the ramblock_sync_dirty_bitmap() and migration_bitmap_clear_dirty()
> > calls.
> >
> > It's even possible to drop the lock entirely and use atomic operations on rb->bmap
> > and the variable migration_dirty_pages. I didn't do that, partly to stay safe and
> > partly because it's hard to predict whether the frequent atomic ops would bring
> > overhead of their own, e.g. on huge VMs where this happens very often. If that
> > really becomes a problem, we can keep a local counter and only call the atomic
> > ops periodically. Keep it simple for now.
> >
> > Cc: Wei Wang <wei.w.wang@intel.com>
> > Cc: David Hildenbrand <david@redhat.com>
> > Cc: Hailiang Zhang <zhang.zhanghailiang@huawei.com>
> > Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
> > Cc: Juan Quintela <quintela@redhat.com>
> > Cc: Leonardo Bras Soares Passos <lsoaresp@redhat.com>
> > Signed-off-by: Peter Xu <peterx@redhat.com>
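
(To illustrate the locking movement described above: below is a self-contained toy
model of the before/after shape, not the real migration/ram.c code. Names like
bitmap_mutex, migration_bitmap_clear_dirty() and qemu_guest_free_page_hint()
mirror QEMU's; everything else is simplified for illustration.)

/* Toy model of the locking change; not QEMU code. */
#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

#define NPAGES 1024

static uint64_t bmap[NPAGES / 64];             /* stand-in for rb->bmap */
static uint64_t migration_dirty_pages;
static pthread_mutex_t bitmap_mutex = PTHREAD_MUTEX_INITIALIZER;

/* After the patch: the caller holds bitmap_mutex, so no locking here. */
static bool migration_bitmap_clear_dirty(unsigned long page)
{
    uint64_t mask = 1ULL << (page % 64);
    bool was_dirty = bmap[page / 64] & mask;

    if (was_dirty) {
        bmap[page / 64] &= ~mask;
        migration_dirty_pages--;
    }
    return was_dirty;
}

/* Analogue of ram_save_iterate(): take the lock once around the whole loop
 * instead of once per bit. */
static void ram_save_iterate_model(void)
{
    pthread_mutex_lock(&bitmap_mutex);
    for (unsigned long page = 0; page < NPAGES; page++) {
        if (migration_bitmap_clear_dirty(page)) {
            /* ... send the page ... */
        }
    }
    pthread_mutex_unlock(&bitmap_mutex);
}

/* Analogue of qemu_guest_free_page_hint(): still serialized by the same
 * lock, so it may now wait for one full iteration instead of one bit. */
static void free_page_hint_model(unsigned long page)
{
    pthread_mutex_lock(&bitmap_mutex);
    migration_bitmap_clear_dirty(page);
    pthread_mutex_unlock(&bitmap_mutex);
}

int main(void)
{
    bmap[0] = ~0ULL;
    migration_dirty_pages = 64;
    free_page_hint_model(3);
    ram_save_iterate_model();
    printf("dirty pages left: %" PRIu64 "\n", migration_dirty_pages);
    return 0;
}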
>
> FWIW
> Reviewed-by: Wei Wang <wei.w.wang@intel.com>
Thanks, Wei.
>
> If no one can help run a regression test of free page hints, please document something like this in the patch:
> Locking at the coarser granularity may reduce the improvement brought by free page hints, but it does not seem to cause critical issues.
> We will let users of free page hints report back any requirements, and come up with a better solution later.
I didn't get a chance to document it, as it's in a pull now; but as long as you're
okay with the no-per-page-lock approach (which I still don't agree with), I can
follow this up.
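
(For the record, if we ever revisit the lockless idea mentioned in the commit
message, i.e. atomics on rb->bmap plus a batched counter for
migration_dirty_pages, the per-bit path could look roughly like the standalone
sketch below. It uses plain C11 atomics rather than QEMU's qatomic helpers, and
the flush threshold is made up for illustration.)

/* Standalone sketch of the "atomic bit clear + batched counter" idea. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define FLUSH_EVERY 256                 /* arbitrary batching threshold */

static _Atomic uint64_t bmap[16];       /* stand-in for rb->bmap */
static _Atomic uint64_t migration_dirty_pages;

/* Clear one dirty bit atomically; return whether it was previously set. */
static bool clear_dirty_atomic(unsigned long page)
{
    uint64_t mask = 1ULL << (page % 64);
    uint64_t old = atomic_fetch_and(&bmap[page / 64], ~mask);
    return old & mask;
}

/* Walk the bitmap keeping a local count of cleared pages, flushing it to
 * the shared counter with one atomic subtraction per FLUSH_EVERY pages. */
static void scan_and_clear(unsigned long npages)
{
    uint64_t local_cleared = 0;

    for (unsigned long page = 0; page < npages; page++) {
        if (clear_dirty_atomic(page) && ++local_cleared == FLUSH_EVERY) {
            atomic_fetch_sub(&migration_dirty_pages, local_cleared);
            local_cleared = 0;
        }
    }
    if (local_cleared) {
        atomic_fetch_sub(&migration_dirty_pages, local_cleared);
    }
}

int main(void)
{
    atomic_store(&bmap[0], ~0ULL);
    atomic_store(&migration_dirty_pages, 64);
    scan_and_clear(64);
    return atomic_load(&migration_dirty_pages) != 0;   /* expect 0 */
}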
Are the parameters below enough for me to enable free-page-hint?
-object iothread,id=io1 \
-device virtio-balloon,free-page-hint=on,iothread=io1 \
I tried to verify that it's running by tracing inside the guest with a kprobe on
report_free_page_func(), but it never triggered. The guest runs kernel
5.11.12-300.fc34.x86_64, which should be new enough to have that supported. Do
you know what I'm missing?
P.S. This also reminded me that maybe we want an entry in qemu-options.hx for
the balloon device, since it has lots of options and some of them may not be easy
to set up correctly.
--
Peter Xu
Thread overview: 37+ messages
2021-06-30 20:08 [PATCH] migration: Move bitmap_mutex out of migration_bitmap_clear_dirty() Peter Xu
2021-07-01 4:42 ` Wang, Wei W
2021-07-01 12:51 ` Peter Xu
2021-07-01 14:21 ` David Hildenbrand
2021-07-02 2:48 ` Wang, Wei W
2021-07-02 7:06 ` David Hildenbrand
2021-07-03 2:53 ` Wang, Wei W
2021-07-05 13:41 ` David Hildenbrand
2021-07-06 9:41 ` Wang, Wei W
2021-07-06 10:05 ` David Hildenbrand
2021-07-06 17:39 ` Peter Xu
2021-07-07 12:45 ` Wang, Wei W
2021-07-07 16:45 ` Peter Xu
2021-07-07 23:25 ` Wang, Wei W
2021-07-08 0:21 ` Peter Xu
2021-07-06 17:47 ` Peter Xu
2021-07-07 8:34 ` Wang, Wei W
2021-07-07 16:54 ` Peter Xu
2021-07-08 2:55 ` Wang, Wei W
2021-07-08 18:10 ` Peter Xu
2021-07-02 2:29 ` Wang, Wei W
2021-07-06 17:59 ` Peter Xu
2021-07-07 8:33 ` Wang, Wei W
2021-07-07 16:44 ` Peter Xu
2021-07-08 2:49 ` Wang, Wei W
2021-07-08 18:30 ` Peter Xu
2021-07-09 8:58 ` Wang, Wei W
2021-07-09 14:48 ` Peter Xu
2021-07-13 8:20 ` Wang, Wei W
2021-07-03 16:31 ` Lukas Straub
2021-07-04 14:14 ` Lukas Straub
2021-07-06 18:37 ` Peter Xu
2021-07-13 8:40 ` Wang, Wei W
2021-07-13 10:22 ` David Hildenbrand
2021-07-14 5:03 ` Wang, Wei W
2021-07-13 15:59 ` Peter Xu [this message]
2021-07-14 5:04 ` Wang, Wei W