From: David Hildenbrand <david@redhat.com>
To: Wei Wang <wei.w.wang@intel.com>, qemu-devel@nongnu.org
Cc: mst@redhat.com, dgilbert@redhat.com, peterx@redhat.com,
quintela@redhat.com
Subject: Re: [PATCH v2] migration: clear the memory region dirty bitmap when skipping free pages
Date: Thu, 15 Jul 2021 11:28:34 +0200
Message-ID: <2581d2a2-de9d-7937-4d71-25a33cfbce3e@redhat.com>
In-Reply-To: <20210715075326.421977-1-wei.w.wang@intel.com>
On 15.07.21 09:53, Wei Wang wrote:
> When skipping free pages to send, their corresponding dirty bits in the
> memory region dirty bitmap need to be cleared. Otherwise the skipped
> pages will be sent in the next round after the migration thread syncs
> dirty bits from the memory region dirty bitmap.
>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Peter Xu <peterx@redhat.com>
> Cc: Michael S. Tsirkin <mst@redhat.com>
> Reported-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Wei Wang <wei.w.wang@intel.com>
> ---
> migration/ram.c | 72 ++++++++++++++++++++++++++++++++++++-------------
> 1 file changed, 54 insertions(+), 18 deletions(-)
>
> v1->v2 changelog:
> - move migration_clear_memory_region_dirty_bitmap under bitmap_mutex as
> we lack confidence to have it outside the lock for now.
> - clean the unnecessary subproject commit.
>
> diff --git a/migration/ram.c b/migration/ram.c
> index b5fc454b2f..69e06b55ec 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -789,6 +789,51 @@ unsigned long migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
> return find_next_bit(bitmap, size, start);
> }
>
> +static void migration_clear_memory_region_dirty_bitmap(RAMState *rs,
> + RAMBlock *rb,
> + unsigned long page)
> +{
> + uint8_t shift;
> + hwaddr size, start;
> +
> + if (!rb->clear_bmap || !clear_bmap_test_and_clear(rb, page)) {
> + return;
> + }
> +
> + shift = rb->clear_bmap_shift;
You could initialize this right at the beginning of the function without doing any harm.
> + /*
> + * CLEAR_BITMAP_SHIFT_MIN should always guarantee this... this
> + * can make things easier sometimes since then start address
> + * of the small chunk will always be 64 pages aligned so the
> + * bitmap will always be aligned to unsigned long. We should
> + * even be able to remove this restriction but I'm simply
> + * keeping it.
> + */
> + assert(shift >= 6);
> +
> + size = 1ULL << (TARGET_PAGE_BITS + shift);
> + start = (((ram_addr_t)page) << TARGET_PAGE_BITS) & (-size);
These could be initialized right at the beginning as well.
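Roughly something like this (untested, just moving the initializations up; the
early return, the comment and the assert stay as they are):

    uint8_t shift = rb->clear_bmap_shift;
    hwaddr size = 1ULL << (TARGET_PAGE_BITS + shift);
    hwaddr start = (((ram_addr_t)page) << TARGET_PAGE_BITS) & (-size);

    if (!rb->clear_bmap || !clear_bmap_test_and_clear(rb, page)) {
        return;
    }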
> + trace_migration_bitmap_clear_dirty(rb->idstr, start, size, page);
> + memory_region_clear_dirty_bitmap(rb->mr, start, size);
> +}
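(Side note for readers: assuming the usual TARGET_PAGE_BITS = 12, the minimum
shift of 6 gives size = 1 << 18 = 256 KiB, i.e. exactly 64 target pages as the
comment says; if I remember correctly the default clear_bmap_shift is 18, which
makes a single chunk 1 GiB.)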
> +
> +static void
> +migration_clear_memory_region_dirty_bitmap_range(RAMState *rs,
> + RAMBlock *rb,
> + unsigned long start,
> + unsigned long npages)
> +{
> + unsigned long page_to_clear, i, nchunks;
> + unsigned long chunk_pages = 1UL << rb->clear_bmap_shift;
> +
> + nchunks = (start + npages) / chunk_pages - start / chunk_pages + 1;
Wouldn't you have to align the start and the end range up/down
to properly calculate the number of chunks?
The following might be better and a little easier to grasp:
unsigned long chunk_pages = 1ULL << rb->clear_bmap_shift;
unsigned long aligned_start = QEMU_ALIGN_DOWN(start, chunk_pages);
unsigned long aligned_end = QEMU_ALIGN_UP(start + npages, chunk_pages);

/*
 * Clear the clear_bmap of all covered chunks. It's sufficient to call it
 * for one page within a chunk.
 */
for (start = aligned_start; start != aligned_end; start += chunk_pages) {
    migration_clear_memory_region_dirty_bitmap(rs, rb, start);
}
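For example, assuming clear_bmap_shift = 18 (chunk_pages = 262144): a hint
covering exactly the first chunk (start = 0, npages = 262144) makes the loop
above visit only that single chunk (aligned_start = 0, aligned_end = 262144),
while "(start + npages) / chunk_pages - start / chunk_pages + 1" evaluates to
2, so the second iteration would act on a chunk that contains no hinted page
at all.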
> +
> + for (i = 0; i < nchunks; i++) {
> + page_to_clear = start + i * chunk_pages;
> + migration_clear_memory_region_dirty_bitmap(rs, rb, page_to_clear);
> + }
> +}
> +
> static inline bool migration_bitmap_clear_dirty(RAMState *rs,
> RAMBlock *rb,
> unsigned long page)
> @@ -803,26 +848,9 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs,
> * the page in the chunk we clear the remote dirty bitmap for all.
> * Clearing it earlier won't be a problem, but too late will.
> */
> - if (rb->clear_bmap && clear_bmap_test_and_clear(rb, page)) {
> - uint8_t shift = rb->clear_bmap_shift;
> - hwaddr size = 1ULL << (TARGET_PAGE_BITS + shift);
> - hwaddr start = (((ram_addr_t)page) << TARGET_PAGE_BITS) & (-size);
> -
> - /*
> - * CLEAR_BITMAP_SHIFT_MIN should always guarantee this... this
> - * can make things easier sometimes since then start address
> - * of the small chunk will always be 64 pages aligned so the
> - * bitmap will always be aligned to unsigned long. We should
> - * even be able to remove this restriction but I'm simply
> - * keeping it.
> - */
> - assert(shift >= 6);
> - trace_migration_bitmap_clear_dirty(rb->idstr, start, size, page);
> - memory_region_clear_dirty_bitmap(rb->mr, start, size);
> - }
> + migration_clear_memory_region_dirty_bitmap(rs, rb, page);
>
> ret = test_and_clear_bit(page, rb->bmap);
> -
Unrelated change, but ok for me.
> if (ret) {
> rs->migration_dirty_pages--;
> }
> @@ -2741,6 +2769,14 @@ void qemu_guest_free_page_hint(void *addr, size_t len)
> npages = used_len >> TARGET_PAGE_BITS;
>
> qemu_mutex_lock(&ram_state->bitmap_mutex);
> + /*
> + * The skipped free pages are equavelent to be sent from clear_bmap's
s/equavelent/equivalent/
> + * perspective, so clear the bits from the memory region bitmap which
> + * are initially set. Otherwise those skipped pages will be sent in
> + * the next round after syncing from the memory region bitmap.
> + */
> + migration_clear_memory_region_dirty_bitmap_range(ram_state, block,
> + start, npages);
> ram_state->migration_dirty_pages -=
> bitmap_count_one_with_offset(block->bmap, start, npages);
> bitmap_clear(block->bmap, start, npages);
>
Apart from that, lgtm.
(Although I find the use of "start" to describe a PFN rather than an address
quite confusing, it's already like that in the current code ... start_pfn, or
just pfn as used in the kernel, would be much clearer.)
--
Thanks,
David / dhildenb