From: "Wang, Wei W" <wei.w.wang@intel.com>
To: Peter Xu <peterx@redhat.com>, David Hildenbrand <david@redhat.com>
Cc: Hailiang Zhang <zhang.zhanghailiang@huawei.com>,
Juan Quintela <quintela@redhat.com>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
Leonardo Bras Soares Passos <lsoaresp@redhat.com>
Subject: RE: [PATCH] migration: Move bitmap_mutex out of migration_bitmap_clear_dirty()
Date: Wed, 7 Jul 2021 12:45:32 +0000
Message-ID: <b94e02b7e7bd4f2a8cbed73cb7756a68@intel.com>
In-Reply-To: <YOSVZLwZzY/rZ0db@t490s>
On Wednesday, July 7, 2021 1:40 AM, Peter Xu wrote:
> On Tue, Jul 06, 2021 at 12:05:49PM +0200, David Hildenbrand wrote:
> > On 06.07.21 11:41, Wang, Wei W wrote:
> > > On Monday, July 5, 2021 9:42 PM, David Hildenbrand wrote:
> > > > On 03.07.21 04:53, Wang, Wei W wrote:
> > > > > On Friday, July 2, 2021 3:07 PM, David Hildenbrand wrote:
> > > > > > On 02.07.21 04:48, Wang, Wei W wrote:
> > > > > > > On Thursday, July 1, 2021 10:22 PM, David Hildenbrand wrote:
> > > > > > > > On 01.07.21 14:51, Peter Xu wrote:
> > > > > >
> > > > > > I think that clearly shows the issue.
> > > > > >
> > > > > > My theory (which I did not verify yet): Assume we have 1GB chunks
> > > > > > in the clear bmap. Assume the VM reports all pages within a 1GB
> > > > > > chunk as free (easy with a fresh VM). While processing hints, we
> > > > > > will clear the bits from the dirty bmap in the RAMBlock. As we will
> > > > > > never migrate any page of that 1GB chunk, we will not actually
> > > > > > clear the dirty bitmap of the memory region. When re-syncing, we
> > > > > > will set all bits in the dirty bmap again from the dirty bitmap in
> > > > > > the memory region. Thus, many of our hints end up being mostly
> > > > > > ignored. The smaller the clear bmap chunk, the more extreme the
> > > > > > issue.
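To make the flow above concrete, here is a toy model (plain C, not QEMU code;
bmap[] and dirty_log[] only stand in for rb->bmap and the memory region's
dirty bitmap) showing how a hint that clears only rb->bmap is undone by the
next sync:

/*
 * Toy model, not QEMU code: bmap[] stands in for rb->bmap (what migration
 * will send), dirty_log[] for the memory region's dirty bitmap. A free-page
 * hint that clears only bmap[] is undone by the next sync unless the dirty
 * log is cleared as well.
 */
#include <stdio.h>
#include <string.h>

#define NPAGES 8

static unsigned char bmap[NPAGES];
static unsigned char dirty_log[NPAGES];

/* roughly what the sync does to rb->bmap: OR in the dirty log */
static void sync_bitmap(void)
{
    for (int i = 0; i < NPAGES; i++) {
        bmap[i] |= dirty_log[i];
    }
}

int main(void)
{
    memset(bmap, 1, sizeof(bmap));        /* initial sync: everything dirty */
    memset(dirty_log, 1, sizeof(dirty_log));

    /* free-page hint: clear rb->bmap only, dirty log untouched */
    memset(bmap, 0, sizeof(bmap));

    /*
     * The pages are never sent, so the per-chunk dirty-log clear that the
     * send path would normally trigger never happens. The next sync then
     * re-populates bmap from the dirty log:
     */
    sync_bitmap();

    int resent = 0;
    for (int i = 0; i < NPAGES; i++) {
        resent += bmap[i];
    }
    printf("pages dirty again after re-sync: %d of %d\n", resent, NPAGES);
    return 0;
}

If the hint path also clears the memory region's dirty bitmap for the range,
as discussed below, the re-sync no longer brings those bits back.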
> > > > >
> > > > > OK, that looks possible. We need to clear the related bits from
> > > > > the memory region bitmap before skipping the free pages. Could you
> > > > > try with the below patch:
> > > >
> > > > I did a quick test (with the memhog example) and it seems like it
> > > > mostly works.
> > > > However, we're now doing the bitmap clearing from another thread,
> > > > racing with the migration thread. In particular:
> > > >
> > > > 1. Racing with clear_bmap_set() via
> > > > cpu_physical_memory_sync_dirty_bitmap()
> > > > 2. Racing with migration_bitmap_clear_dirty()
> > > >
> > > > So that might need some thought, if I'm not wrong.
> > >
> > > I think this is similar to the non-clear_bmap case, where rb->bmap is
> > > set or cleared by the migration thread and qemu_guest_free_page_hint.
> > > For example, the migration thread could find a bit set in rb->bmap
> > > before qemu_guest_free_page_hint gets that bit cleared in time. The
> > > result is that the free page is transferred, which isn't necessary but
> > > won't cause any issue. This is very rare in practice.
> >
> > Here are my concerns regarding races:
> >
> > a) We now end up calling migration_clear_memory_region_dirty_bitmap()
> > without holding the bitmap_mutex. We have to clarify if that is ok. At
> > least migration_bitmap_clear_dirty() holds it *while* clearing the log,
> > and migration_bitmap_sync() holds it while setting bits in the clear_bmap.
> > I think we also have to hold the mutex on the new path.
>
> Makes sense; I think we can let bitmap_mutex protect both the dirty and
> clear bitmaps, and also the dirty pages counter. I'll comment on Wei's
> patch later, too.
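Right. For the hint path, a rough sketch of the idea inside
qemu_guest_free_page_hint() (not the actual patch; the range-clear helper's
name and signature, and bitmap_count_one_with_offset(), are assumptions on my
side):

    /* Sketch only: do the whole update under bitmap_mutex so it cannot race
     * with migration_bitmap_sync() setting clear_bmap bits or with
     * migration_bitmap_clear_dirty(). */
    qemu_mutex_lock(&ram_state->bitmap_mutex);

    /* Clear the memory region's dirty log for the hinted range first, so a
     * later sync does not re-dirty these pages (helper name and signature
     * are assumed here). */
    migration_clear_memory_region_dirty_bitmap_range(block, start, npages);

    ram_state->migration_dirty_pages -=
        bitmap_count_one_with_offset(block->bmap, start, npages);
    bitmap_clear(block->bmap, start, npages);

    qemu_mutex_unlock(&ram_state->bitmap_mutex);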
Btw, what would you think if we changed the mutex to a QemuSpin? I think that
would also reduce the overhead.
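For illustration only, a minimal sketch of what that might look like in
migration_bitmap_clear_dirty(), assuming RAMState grew a QemuSpin field (here
called bitmap_lock) in place of the QemuMutex; this is not a real patch:

static inline bool migration_bitmap_clear_dirty(RAMState *rs,
                                                RAMBlock *rb,
                                                unsigned long page)
{
    bool ret;

    /* was: qemu_mutex_lock(&rs->bitmap_mutex) */
    qemu_spin_lock(&rs->bitmap_lock);

    /* ... clear_bmap handling / memory-region dirty-log clear elided ... */

    ret = test_and_clear_bit(page, rb->bmap);
    if (ret) {
        rs->migration_dirty_pages--;  /* counter protected by the same lock */
    }

    qemu_spin_unlock(&rs->bitmap_lock);
    return ret;
}

A matching qemu_spin_init() would go wherever bitmap_mutex is initialized
today. Of course a spinlock only pays off if the critical section stays short
and never sleeps, so this assumes the dirty-log clearing does not end up
blocking in the kernel.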
Best,
Wei
Thread overview: 37+ messages
2021-06-30 20:08 [PATCH] migration: Move bitmap_mutex out of migration_bitmap_clear_dirty() Peter Xu
2021-07-01 4:42 ` Wang, Wei W
2021-07-01 12:51 ` Peter Xu
2021-07-01 14:21 ` David Hildenbrand
2021-07-02 2:48 ` Wang, Wei W
2021-07-02 7:06 ` David Hildenbrand
2021-07-03 2:53 ` Wang, Wei W
2021-07-05 13:41 ` David Hildenbrand
2021-07-06 9:41 ` Wang, Wei W
2021-07-06 10:05 ` David Hildenbrand
2021-07-06 17:39 ` Peter Xu
2021-07-07 12:45 ` Wang, Wei W [this message]
2021-07-07 16:45 ` Peter Xu
2021-07-07 23:25 ` Wang, Wei W
2021-07-08 0:21 ` Peter Xu
2021-07-06 17:47 ` Peter Xu
2021-07-07 8:34 ` Wang, Wei W
2021-07-07 16:54 ` Peter Xu
2021-07-08 2:55 ` Wang, Wei W
2021-07-08 18:10 ` Peter Xu
2021-07-02 2:29 ` Wang, Wei W
2021-07-06 17:59 ` Peter Xu
2021-07-07 8:33 ` Wang, Wei W
2021-07-07 16:44 ` Peter Xu
2021-07-08 2:49 ` Wang, Wei W
2021-07-08 18:30 ` Peter Xu
2021-07-09 8:58 ` Wang, Wei W
2021-07-09 14:48 ` Peter Xu
2021-07-13 8:20 ` Wang, Wei W
2021-07-03 16:31 ` Lukas Straub
2021-07-04 14:14 ` Lukas Straub
2021-07-06 18:37 ` Peter Xu
2021-07-13 8:40 ` Wang, Wei W
2021-07-13 10:22 ` David Hildenbrand
2021-07-14 5:03 ` Wang, Wei W
2021-07-13 15:59 ` Peter Xu
2021-07-14 5:04 ` Wang, Wei W