From: "Dr. David Alan Gilbert (git)" <dgilbert@redhat.com>
To: qemu-devel@nongnu.org, wei.w.wang@intel.com, peterx@redhat.com
Cc: berrange@redhat.com, leobras@redhat.com, david@redhat.com,
quintela@redhat.com
Subject: [PULL 7/7] migration: clear the memory region dirty bitmap when skipping free pages
Date: Mon, 26 Jul 2021 13:43:31 +0100 [thread overview]
Message-ID: <20210726124331.124710-8-dgilbert@redhat.com> (raw)
In-Reply-To: <20210726124331.124710-1-dgilbert@redhat.com>
From: Wei Wang <wei.w.wang@intel.com>
When free pages are skipped during sending, their corresponding dirty bits in
the memory region dirty bitmap need to be cleared. Otherwise the skipped
pages will be sent in the next round after the migration thread syncs
dirty bits from the memory region dirty bitmap.
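As background for readers unfamiliar with clear_bmap: the memory region dirty
log can only be cleared in chunks of 2^clear_bmap_shift pages, so a hinted
free-page range first has to be widened to chunk boundaries before the
per-chunk clear requests are issued. The following minimal standalone sketch
(not QEMU code; the page size, shift value and example range are illustrative
assumptions) shows that arithmetic, mirroring the new helpers in the diff below:

/*
 * Minimal sketch of the chunk-alignment arithmetic; values are assumptions
 * chosen for illustration only.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define TARGET_PAGE_BITS 12              /* assume 4 KiB target pages */
#define CLEAR_BMAP_SHIFT 18              /* assume one clear_bmap bit covers 2^18 pages */

#define ALIGN_DOWN(n, m) ((n) / (m) * (m))
#define ALIGN_UP(n, m)   ALIGN_DOWN((n) + (m) - 1, (m))

int main(void)
{
    unsigned long start = 300000, npages = 700000;   /* hypothetical hinted free-page range */
    unsigned long chunk_pages = 1UL << CLEAR_BMAP_SHIFT;
    unsigned long chunk_start = ALIGN_DOWN(start, chunk_pages);
    unsigned long chunk_end = ALIGN_UP(start + npages, chunk_pages);

    for (unsigned long page = chunk_start; page < chunk_end; page += chunk_pages) {
        /*
         * Mirrors migration_clear_memory_region_dirty_bitmap(): one clear
         * request per chunk, with start and size aligned to the chunk.
         */
        uint64_t size = 1ULL << (TARGET_PAGE_BITS + CLEAR_BMAP_SHIFT);
        uint64_t addr = ((uint64_t)page << TARGET_PAGE_BITS) & -size;
        printf("clear dirty log: start 0x%" PRIx64 " size 0x%" PRIx64 "\n", addr, size);
    }
    return 0;
}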
Cc: David Hildenbrand <david@redhat.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Reported-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
Message-Id: <20210722083055.23352-1-wei.w.wang@intel.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
migration/ram.c | 74 +++++++++++++++++++++++++++++++++++++------------
1 file changed, 56 insertions(+), 18 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 08b3cb7a4a..7a43bfd7af 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -789,6 +789,53 @@ unsigned long migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
return find_next_bit(bitmap, size, start);
}
+static void migration_clear_memory_region_dirty_bitmap(RAMState *rs,
+ RAMBlock *rb,
+ unsigned long page)
+{
+ uint8_t shift;
+ hwaddr size, start;
+
+ if (!rb->clear_bmap || !clear_bmap_test_and_clear(rb, page)) {
+ return;
+ }
+
+ shift = rb->clear_bmap_shift;
+ /*
+ * CLEAR_BITMAP_SHIFT_MIN should always guarantee this... this
+ * can make things easier sometimes since then start address
+ * of the small chunk will always be 64 pages aligned so the
+ * bitmap will always be aligned to unsigned long. We should
+ * even be able to remove this restriction but I'm simply
+ * keeping it.
+ */
+ assert(shift >= 6);
+
+ size = 1ULL << (TARGET_PAGE_BITS + shift);
+ start = (((ram_addr_t)page) << TARGET_PAGE_BITS) & (-size);
+ trace_migration_bitmap_clear_dirty(rb->idstr, start, size, page);
+ memory_region_clear_dirty_bitmap(rb->mr, start, size);
+}
+
+static void
+migration_clear_memory_region_dirty_bitmap_range(RAMState *rs,
+ RAMBlock *rb,
+ unsigned long start,
+ unsigned long npages)
+{
+ unsigned long i, chunk_pages = 1UL << rb->clear_bmap_shift;
+ unsigned long chunk_start = QEMU_ALIGN_DOWN(start, chunk_pages);
+ unsigned long chunk_end = QEMU_ALIGN_UP(start + npages, chunk_pages);
+
+ /*
+ * Clear pages from start to start + npages - 1, so the end boundary is
+ * exclusive.
+ */
+ for (i = chunk_start; i < chunk_end; i += chunk_pages) {
+ migration_clear_memory_region_dirty_bitmap(rs, rb, i);
+ }
+}
+
static inline bool migration_bitmap_clear_dirty(RAMState *rs,
RAMBlock *rb,
unsigned long page)
@@ -803,26 +850,9 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs,
* the page in the chunk we clear the remote dirty bitmap for all.
* Clearing it earlier won't be a problem, but too late will.
*/
- if (rb->clear_bmap && clear_bmap_test_and_clear(rb, page)) {
- uint8_t shift = rb->clear_bmap_shift;
- hwaddr size = 1ULL << (TARGET_PAGE_BITS + shift);
- hwaddr start = (((ram_addr_t)page) << TARGET_PAGE_BITS) & (-size);
-
- /*
- * CLEAR_BITMAP_SHIFT_MIN should always guarantee this... this
- * can make things easier sometimes since then start address
- * of the small chunk will always be 64 pages aligned so the
- * bitmap will always be aligned to unsigned long. We should
- * even be able to remove this restriction but I'm simply
- * keeping it.
- */
- assert(shift >= 6);
- trace_migration_bitmap_clear_dirty(rb->idstr, start, size, page);
- memory_region_clear_dirty_bitmap(rb->mr, start, size);
- }
+ migration_clear_memory_region_dirty_bitmap(rs, rb, page);
ret = test_and_clear_bit(page, rb->bmap);
-
if (ret) {
rs->migration_dirty_pages--;
}
@@ -2741,6 +2771,14 @@ void qemu_guest_free_page_hint(void *addr, size_t len)
npages = used_len >> TARGET_PAGE_BITS;
qemu_mutex_lock(&ram_state->bitmap_mutex);
+ /*
+ * The skipped free pages are equivalent to being sent from clear_bmap's
+ * perspective, so clear the bits from the memory region bitmap which
+ * are initially set. Otherwise those skipped pages will be sent in
+ * the next round after syncing from the memory region bitmap.
+ */
+ migration_clear_memory_region_dirty_bitmap_range(ram_state, block,
+ start, npages);
ram_state->migration_dirty_pages -=
bitmap_count_one_with_offset(block->bmap, start, npages);
bitmap_clear(block->bmap, start, npages);
--
2.31.1
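For completeness, the reason it is safe to route both the free-page-hint path
and the normal dirty-page send path through the same
migration_clear_memory_region_dirty_bitmap() helper is the test-and-clear
semantics of clear_bmap: each chunk triggers at most one clear request per
bitmap sync, whichever path reaches it first. The toy model below (a sketch
with made-up names and sizes, not QEMU's actual data structures) illustrates
that behaviour:

/*
 * Toy model of the clear_bmap test-and-clear behaviour the patch reuses:
 * each bit covers a chunk of pages, and whichever path touches the chunk
 * first issues the single clear request for it.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_CHUNKS 8

static bool clear_bmap[NR_CHUNKS];     /* true: chunk still needs a clear request */

/* Mirrors clear_bmap_test_and_clear(): returns whether a clear was pending. */
static bool chunk_test_and_clear(unsigned chunk)
{
    bool pending = clear_bmap[chunk];
    clear_bmap[chunk] = false;
    return pending;
}

static void clear_chunk(unsigned chunk, const char *who)
{
    if (!chunk_test_and_clear(chunk)) {
        return;                        /* already cleared by the other path */
    }
    printf("%s clears the memory-region dirty log for chunk %u\n", who, chunk);
}

int main(void)
{
    for (unsigned i = 0; i < NR_CHUNKS; i++) {
        clear_bmap[i] = true;          /* set at bitmap sync time */
    }

    clear_chunk(2, "free page hint");  /* hinted range covering chunk 2 */
    clear_chunk(2, "dirty page send"); /* later send path: no second request */
    clear_chunk(3, "dirty page send");
    return 0;
}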
Thread overview: 9+ messages
2021-07-26 12:43 [PULL 0/7] migration queue Dr. David Alan Gilbert (git)
2021-07-26 12:43 ` [PULL 1/7] tests/qtest/migration-test.c: use 127.0.0.1 instead of 0 Dr. David Alan Gilbert (git)
2021-07-26 12:43 ` [PULL 2/7] migration: Fix missing join() of rp_thread Dr. David Alan Gilbert (git)
2021-07-26 12:43 ` [PULL 3/7] migration: Make from_dst_file accesses thread-safe Dr. David Alan Gilbert (git)
2021-07-26 12:43 ` [PULL 4/7] migration: Introduce migration_ioc_[un]register_yank() Dr. David Alan Gilbert (git)
2021-07-26 12:43 ` [PULL 5/7] migration: Teach QEMUFile to be QIOChannel-aware Dr. David Alan Gilbert (git)
2021-07-26 12:43 ` [PULL 6/7] migration: Move the yank unregister of channel_close out Dr. David Alan Gilbert (git)
2021-07-26 12:43 ` Dr. David Alan Gilbert (git) [this message]
2021-07-27 12:24 ` [PULL 0/7] migration queue Peter Maydell