From: Peter Xu <peterx@redhat.com>
To: Kunkun Jiang <jiangkunkun@huawei.com>
Cc: "Juan Quintela" <quintela@redhat.com>,
"David Edmondson" <dme@dme.org>,
"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
"open list:All patches CC here" <qemu-devel@nongnu.org>,
"Cédric Le Goater" <clg@kaod.org>,
"Alexey Romko" <nevilad@yahoo.com>,
"Zenghui Yu" <yuzenghui@huawei.com>,
wanghaibin.wang@huawei.com, "Keqian Zhu" <zhukeqian1@huawei.com>,
"Andrey Gruzdev" <andrey.gruzdev@virtuozzo.com>
Subject: Re: [PATCH v3 3/3] migration/ram: Optimize ram_save_host_page()
Date: Fri, 5 Mar 2021 09:30:33 -0500 [thread overview]
Message-ID: <20210305143033.GF397383@xz-x1> (raw)
In-Reply-To: <20210305075035.1852-4-jiangkunkun@huawei.com>
On Fri, Mar 05, 2021 at 03:50:35PM +0800, Kunkun Jiang wrote:
> Starting from pss->page, ram_save_host_page() checks every page
> and sends the dirty pages up to the end of the current host page or
> the boundary of the block's used_length. If the host page size is
> a huge page, this page-by-page check takes a lot of time.
>
> Using migration_bitmap_find_dirty() will improve performance.
Is there any measurement of the improvement?
This looks like an optimization, but to me it seems to change more context
than it needs to... Do you think it would also work to just look up the next
dirty page and update pss->page properly when migration_bitmap_clear_dirty()
returns zero?
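Something like below (totally untested, just to show the idea;
"hostpage_boundary" would be the QEMU_ALIGN_UP(pss->page + 1, pagesize_bits)
value your patch already computes):

        /* Check if the page is dirty and send it if it is */
        if (!migration_bitmap_clear_dirty(rs, pss->block, pss->page)) {
            /*
             * Skip straight to the next dirty page rather than stepping
             * one page at a time, but never past the current host page.
             */
            pss->page = MIN(migration_bitmap_find_dirty(rs, pss->block,
                                                        pss->page),
                            hostpage_boundary);
            continue;
        }

Then the rest of the loop (including the trailing pss->page--) could stay
as it is.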
Thanks,
>
> Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
> Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
> ---
> migration/ram.c | 39 +++++++++++++++++++--------------------
> 1 file changed, 19 insertions(+), 20 deletions(-)
>
> diff --git a/migration/ram.c b/migration/ram.c
> index 9fc5b2997c..28215aefe4 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -1991,6 +1991,8 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
> int pages = 0;
> size_t pagesize_bits =
> qemu_ram_pagesize(pss->block) >> TARGET_PAGE_BITS;
> + unsigned long hostpage_boundary =
> + QEMU_ALIGN_UP(pss->page + 1, pagesize_bits);
> unsigned long start_page = pss->page;
> int res;
>
> @@ -2003,30 +2005,27 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
> int pages_this_iteration = 0;
>
> /* Check if the page is dirty and send it if it is */
> - if (!migration_bitmap_clear_dirty(rs, pss->block, pss->page)) {
> - pss->page++;
> - continue;
> - }
> -
> - pages_this_iteration = ram_save_target_page(rs, pss, last_stage);
> - if (pages_this_iteration < 0) {
> - return pages_this_iteration;
> - }
> + if (migration_bitmap_clear_dirty(rs, pss->block, pss->page)) {
> + pages_this_iteration = ram_save_target_page(rs, pss, last_stage);
> + if (pages_this_iteration < 0) {
> + return pages_this_iteration;
> + }
>
> - pages += pages_this_iteration;
> - pss->page++;
> - /*
> - * Allow rate limiting to happen in the middle of huge pages if
> - * something is sent in the current iteration.
> - */
> - if (pagesize_bits > 1 && pages_this_iteration > 0) {
> - migration_rate_limit();
> + pages += pages_this_iteration;
> + /*
> + * Allow rate limiting to happen in the middle of huge pages if
> + * something is sent in the current iteration.
> + */
> + if (pagesize_bits > 1 && pages_this_iteration > 0) {
> + migration_rate_limit();
> + }
> }
> - } while ((pss->page & (pagesize_bits - 1)) &&
> + pss->page = migration_bitmap_find_dirty(rs, pss->block, pss->page);
> + } while ((pss->page < hostpage_boundary) &&
> offset_in_ramblock(pss->block,
> ((ram_addr_t)pss->page) << TARGET_PAGE_BITS));
> - /* The offset we leave with is the last one we looked at */
> - pss->page--;
> + /* The offset we leave with is the min boundary of host page and block */
> + pss->page = MIN(pss->page, hostpage_boundary) - 1;
>
> res = ram_save_release_protection(rs, pss, start_page);
> return (res < 0 ? res : pages);
> --
> 2.23.0
>
--
Peter Xu