qemu-devel.nongnu.org archive mirror
From: "Wang, Wei W" <wei.w.wang@intel.com>
To: Peter Xu <peterx@redhat.com>
Cc: Hailiang Zhang <zhang.zhanghailiang@huawei.com>,
	Juan Quintela <quintela@redhat.com>,
	David Hildenbrand <david@redhat.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
	Leonardo Bras Soares Passos <lsoaresp@redhat.com>
Subject: RE: [PATCH] migration: Move bitmap_mutex out of migration_bitmap_clear_dirty()
Date: Fri, 2 Jul 2021 02:29:41 +0000	[thread overview]
Message-ID: <27cb8a0141fa493a8d4bb6bb918e8a82@intel.com> (raw)
In-Reply-To: <YN26SDxZS1aShbHi@t490s>

On Thursday, July 1, 2021 8:51 PM, Peter Xu wrote:
> On Thu, Jul 01, 2021 at 04:42:38AM +0000, Wang, Wei W wrote:
> > On Thursday, July 1, 2021 4:08 AM, Peter Xu wrote:
> > > Taking the mutex every time for each dirty bit to clear is too slow,
> > > especially since we'll take/release it even when the dirty bit is
> > > already cleared.  So far it's only used to sync the special case of
> > > qemu_guest_free_page_hint() against the migration thread, nothing really
> > > that serious yet.  Let's move the lock upwards.
> > >
> > > There're two callers of migration_bitmap_clear_dirty().
> > >
> > > For migration, move it into ram_save_iterate().  With the help of the
> > > MAX_WAIT logic, we'll only run ram_save_iterate() for no more than
> > > ~50ms at a time, so we take the lock once there at the entry.  It also
> > > means any call sites of qemu_guest_free_page_hint() can be delayed;
> > > but that should be very rare, only during migration, and I don't see
> > > a problem with it.
> > >
> > > For COLO, move it up to colo_flush_ram_cache().  I think COLO forgot
> > > to take that lock when calling ramblock_sync_dirty_bitmap(), whereas
> > > migration_bitmap_sync() takes it correctly.  So let the mutex cover
> > > both the ramblock_sync_dirty_bitmap() and
> > > migration_bitmap_clear_dirty() calls.
> > >
> > > It's even possible to drop the lock and use atomic operations upon
> > > rb->bmap and the variable migration_dirty_pages.  I didn't do it,
> > > just to stay safe; it's also not predictable whether the frequent
> > > atomic ops could bring overhead too, e.g. on huge VMs where it
> > > happens very often.  When that really becomes a problem, we can keep
> > > a local counter and periodically call atomic ops.  Keep it simple
> > > for now.
> > >
> >
> > If free page opt is enabled, a 50ms wait might be too long for handling
> > just one hint (via qemu_guest_free_page_hint)?
> > How about making the lock conditional?
> > e.g.
> > #define QEMU_LOCK_GUARD_COND(lock, cond) { \
> > 	if (cond) \
> > 		QEMU_LOCK_GUARD(lock); \
> > }
> > Then in migration_bitmap_clear_dirty:
> > QEMU_LOCK_GUARD_COND(&rs->bitmap_mutex, rs->fpo_enabled);
> 
> Yeah that's indeed some kind of comment I'd like to get from either you or David
> when I add the cc list.. :)
> 
> I was curious how that would affect the guest when the free page hint helper can
> get stuck for a while.  Per my understanding it's fully async, as the blocked thread
> here runs asynchronously with the guest, since both virtio-balloon and virtio-mem
> are fully async. If so, would it really affect the guest a lot?  Is it still tolerable if it
> only happens during migration?

Yes, it is async and won't block the guest. But it will keep the optimization from working as expected.
The intention is to have the migration thread skip the transfer of free pages, but now the migration
thread holds the lock for up to 50ms, blocking the clearing of free pages while it is likely sending those very free pages inside the lock.
(The reported free pages had better be cleared from the bitmap promptly, before they get sent.)

> 
> Taking that mutex for each dirty bit is still overkill to me, regardless of whether
> it's "conditional" or not.

With that, if free page opt is off, the mutex is skipped, isn't it?

> If I'm the cloud admin, I would prefer, imho, that migration
> finishes earlier rather than freeing some more pages on the host (after
> migration all pages will be gone!).  If it still blocks the guest in some unhealthy
> way I'd still prefer to take the lock here, but maybe make it shorter than
> 50ms.
> 

Yes, with the optimization, migration will finish earlier.
Why would it need to free pages on the host?
(We just skip sending the page.)

Best,
Wei



