From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Peter Xu <peterx@redhat.com>
Cc: qemu-devel@nongnu.org, Manish Mishra <manish.mishra@nutanix.com>,
Juan Quintela <quintela@redhat.com>,
ani@anisinha.ca,
Leonardo Bras Soares Passos <lsoaresp@redhat.com>,
"Daniel P . Berrange" <berrange@redhat.com>
Subject: Re: [PATCH 07/14] migration: Teach PSS about host page
Date: Wed, 5 Oct 2022 12:12:12 +0100 [thread overview]
Message-ID: <Yz1mjOijY/nuKM7w@work-vm> (raw)
In-Reply-To: <20220920225215.48840-1-peterx@redhat.com>
* Peter Xu (peterx@redhat.com) wrote:
> Migration code has a lot to do with host pages. Teaching the PSS core
> about the idea of the host page helps a lot and makes the code cleaner.
> Meanwhile, this prepares for future changes that can leverage the new
> PSS helpers that this patch introduces to send host pages in another
> thread.
>
> Three more fields are introduced for this:
>
> (1) host_page_sending: this is set to true when QEMU is sending a host
> page, false otherwise.
>
> (2) host_page_{start|end}: these point to the start/end of the host
> page we're sending, and they're only valid when host_page_sending==true.
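
Just to make the new fields concrete (not part of the patch, and the
numbers are made up): with 4k target pages on a 2M hugetlb-backed
ramblock, i.e. 512 target pages per host page, and pss->page at, say,
1234, the helpers introduced below would record:

    size_t guest_pfns = (2 * 1024 * 1024) >> TARGET_PAGE_BITS;       /* 512 */
    unsigned long page = 1234;

    /* What pss_host_page_prepare() records: */
    unsigned long host_page_start = ROUND_DOWN(page, guest_pfns);    /* 1024 */
    unsigned long host_page_end   = ROUND_UP(page + 1, guest_pfns);  /* 1536 */
    /* host_page_sending stays true until pss_host_page_finish() */
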
>
> For example, when we look up the next dirty page on the ramblock, with
> host_page_sending==true we will not look for anything beyond the
> current host page boundary. This can be slightly more efficient than
> the current code, because currently we set pss->page to the next dirty
> bit (which can be past the current host page boundary) and reset it to
> the host page boundary if we find it goes beyond that.
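
Roughly, the before/after search flow looks like this (paraphrasing the
hunks below rather than quoting them exactly; hostpage_boundary is the
old local variable that goes away):

    /* Before: search the whole block, then clamp afterwards */
    pss->page = find_next_bit(bitmap, size, pss->page);
    pss->page = MIN(pss->page, hostpage_boundary);

    /* After: clamp the search limit up front while sending a host page */
    if (pss->host_page_sending) {
        size = MIN(size, pss->host_page_end);
    }
    pss->page = find_next_bit(bitmap, size, pss->page);
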
>
> With the above, we can easily make migration_bitmap_find_dirty()
> self-contained by updating pss->page properly. The rs parameter is
> removed because it was not even used in the old code.
>
> When sending a host page, we should use the pss helpers like this:
>
> - pss_host_page_prepare(pss): called before sending host page
> - pss_within_range(pss): whether we're still within the current host page
> - pss_host_page_finish(pss): called after sending a host page
>
> Then we can use ram_save_target_page() to save one small page.
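
So a host-page sender ends up shaped roughly like this (mirroring
ram_save_host_page() below, with the preempt handling and accounting
details left out):

    pss_host_page_prepare(pss);
    do {
        tmppages = ram_save_target_page(rs, pss);   /* one small page */
        if (tmppages < 0) {
            pss_host_page_finish(pss);
            return tmppages;
        }
        pages += tmppages;
        pss_find_next_dirty(pss);     /* stays within the host page */
    } while (pss_within_range(pss));
    pss_host_page_finish(pss);
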
>
> Currently ram_save_host_page() is still the only user. If another
> function is added in the future to send host pages (e.g. in the return
> path thread), it should follow the same style.
>
> Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> ---
> migration/ram.c | 95 +++++++++++++++++++++++++++++++++++++++----------
> 1 file changed, 76 insertions(+), 19 deletions(-)
>
> diff --git a/migration/ram.c b/migration/ram.c
> index 5bd3d76bf0..3f720b6de2 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -474,6 +474,11 @@ struct PageSearchStatus {
> * postcopy pages via postcopy preempt channel.
> */
> bool postcopy_target_channel;
> + /* Whether we're sending a host page */
> + bool host_page_sending;
> + /* The start/end of current host page. Only valid if host_page_sending==true */
> + unsigned long host_page_start;
> + unsigned long host_page_end;
> };
> typedef struct PageSearchStatus PageSearchStatus;
>
> @@ -851,26 +856,38 @@ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
> }
>
> /**
> - * migration_bitmap_find_dirty: find the next dirty page from start
> + * pss_find_next_dirty: find the next dirty page of current ramblock
> *
> - * Returns the page offset within memory region of the start of a dirty page
> + * This function updates pss->page to point to the next dirty page index
> + * within the ramblock to migrate, or the end of the ramblock when
> + * nothing is found. Note that when pss->host_page_sending==true we are
> + * in the middle of sending a host page, so we won't look for dirty
> + * pages outside the host page boundary.
> *
> - * @rs: current RAM state
> - * @rb: RAMBlock where to search for dirty pages
> - * @start: page where we start the search
> + * @pss: the current page search status
> */
> -static inline
> -unsigned long migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
> - unsigned long start)
> +static void pss_find_next_dirty(PageSearchStatus *pss)
> {
> + RAMBlock *rb = pss->block;
> unsigned long size = rb->used_length >> TARGET_PAGE_BITS;
> unsigned long *bitmap = rb->bmap;
>
> if (ramblock_is_ignored(rb)) {
> - return size;
> + /* Points directly to the end, so we know there is no dirty page */
> + pss->page = size;
> + return;
> + }
> +
> + /*
> + * While sending a host page, only look for dirty pages within the
> + * current host page being sent.
> + */
> + if (pss->host_page_sending) {
> + assert(pss->host_page_end);
> + size = MIN(size, pss->host_page_end);
> }
>
> - return find_next_bit(bitmap, size, start);
> + pss->page = find_next_bit(bitmap, size, pss->page);
> }
>
> static void migration_clear_memory_region_dirty_bitmap(RAMBlock *rb,
> @@ -1556,7 +1573,9 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss, bool *again)
> pss->postcopy_requested = false;
> pss->postcopy_target_channel = RAM_CHANNEL_PRECOPY;
>
> - pss->page = migration_bitmap_find_dirty(rs, pss->block, pss->page);
> + /* Update pss->page for the next dirty bit in ramblock */
> + pss_find_next_dirty(pss);
> +
> if (pss->complete_round && pss->block == rs->last_seen_block &&
> pss->page >= rs->last_page) {
> /*
> @@ -2446,6 +2465,44 @@ static void postcopy_preempt_reset_channel(RAMState *rs)
> }
> }
>
> +/* Should be called before sending a host page */
> +static void pss_host_page_prepare(PageSearchStatus *pss)
> +{
> + /* How many guest pages are there in one host page? */
> + size_t guest_pfns = qemu_ram_pagesize(pss->block) >> TARGET_PAGE_BITS;
> +
> + pss->host_page_sending = true;
> + pss->host_page_start = ROUND_DOWN(pss->page, guest_pfns);
> + pss->host_page_end = ROUND_UP(pss->page + 1, guest_pfns);
> +}
> +
> +/*
> + * Whether the page pointed to by PSS is within the host page being sent.
> + * Must be called after a previous pss_host_page_prepare().
> + */
> +static bool pss_within_range(PageSearchStatus *pss)
> +{
> + ram_addr_t ram_addr;
> +
> + assert(pss->host_page_sending);
> +
> + /* Over host-page boundary? */
> + if (pss->page >= pss->host_page_end) {
> + return false;
> + }
> +
> + ram_addr = ((ram_addr_t)pss->page) << TARGET_PAGE_BITS;
> +
> + return offset_in_ramblock(pss->block, ram_addr);
> +}
> +
> +static void pss_host_page_finish(PageSearchStatus *pss)
> +{
> + pss->host_page_sending = false;
> + /* This is not needed, but just to reset it */
> + pss->host_page_start = pss->host_page_end = 0;
> +}
> +
> /**
> * ram_save_host_page: save a whole host page
> *
> @@ -2468,8 +2525,6 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
> int tmppages, pages = 0;
> size_t pagesize_bits =
> qemu_ram_pagesize(pss->block) >> TARGET_PAGE_BITS;
> - unsigned long hostpage_boundary =
> - QEMU_ALIGN_UP(pss->page + 1, pagesize_bits);
> unsigned long start_page = pss->page;
> int res;
>
> @@ -2482,6 +2537,9 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
> postcopy_preempt_choose_channel(rs, pss);
> }
>
> + /* Update host page boundary information */
> + pss_host_page_prepare(pss);
> +
> do {
> if (postcopy_needs_preempt(rs, pss)) {
> postcopy_do_preempt(rs, pss);
> @@ -2520,15 +2578,14 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
> }
>
> if (tmppages < 0) {
> + pss_host_page_finish(pss);
> return tmppages;
> }
>
> - pss->page = migration_bitmap_find_dirty(rs, pss->block, pss->page);
> - } while ((pss->page < hostpage_boundary) &&
> - offset_in_ramblock(pss->block,
> - ((ram_addr_t)pss->page) << TARGET_PAGE_BITS));
> - /* The offset we leave with is the min boundary of host page and block */
> - pss->page = MIN(pss->page, hostpage_boundary);
> + pss_find_next_dirty(pss);
> + } while (pss_within_range(pss));
> +
> + pss_host_page_finish(pss);
>
> /*
> * When with postcopy preempt mode, flush the data as soon as possible for
> --
> 2.32.0
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK