From: Jitendra Kolhe <jitendra.kolhe@hpe.com>
To: "Li, Liang Z" <liang.z.li@intel.com>,
Roman Kagan <rkagan@virtuozzo.com>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
"dgilbert@redhat.com" <dgilbert@redhat.com>,
"simhan@hpe.com" <simhan@hpe.com>,
"mohan_parthasarathy@hpe.com" <mohan_parthasarathy@hpe.com>
Subject: Re: [Qemu-devel] [PATCH v1] migration: skip sending ram pages released by virtio-balloon driver.
Date: Tue, 15 Mar 2016 18:50:45 +0530
Message-ID: <56E80C2D.9020304@hpe.com>
In-Reply-To: <56E2D88D.2060702@hpe.com>
On 3/11/2016 8:09 PM, Jitendra Kolhe wrote:
>> You mean the total live migration time for the unmodified qemu and the
>> 'you modified for test' qemu
>> are almost the same?
>>
>
> Not sure I understand the question, but if 'you modified for test' means
> the modifications to save_zero_page() below, then the answer is no. Here
> is what I tried; let's say we have 3 versions of qemu (the timings below
> are for a 16GB idle guest with 12GB ballooned out):
>
> v1. Unmodified qemu (absolutely no code change): Total Migration time
> = ~7600ms (I rounded this one to ~8000ms)
> v2. Modified qemu 1, with the proposed patch set (which skips both the
> zero page scan and migrating control information for ballooned-out
> pages): Total Migration time = ~5700ms
> v3. Modified qemu 2, only with the changes to save_zero_page() as
> discussed in the previous mail (and of course using the proposed patch
> set only to maintain the bitmap for ballooned-out pages): Total migration
> time is irrelevant in this case.
> Total Zero page scan time = ~1789ms
> Total (save_page_header + qemu_put_byte(f, 0)) = ~556ms
> Everything seems to add up here (may not be exact): 5700 + 1789 + 559 =
> ~8000ms
>
> I see 2 factors that we have not considered in this add-up: a. the
> overhead of migrating the balloon bitmap to the target, and b. as you
> mentioned below, the overhead of qemu_clock_get_ns().
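For anyone trying to reproduce the breakdown above, something along the
following lines is enough to see where the two zero-page components come
from. This is only a standalone stand-in I put together for illustration
(clock_gettime() standing in for qemu_clock_get_ns(), /dev/null standing in
for the migration stream); it is not the instrumentation behind the numbers
quoted above:

/* Stand-in for the zero-page path: time (a) the per-page zero scan
 * (save_zero_page() in QEMU) and (b) the small header + zero byte
 * written for each zero page (save_page_header() + qemu_put_byte()). */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define PAGE_SIZE 4096
#define NPAGES    (1 << 18)                     /* 1 GiB of 4 KiB pages */

static int64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

static bool page_is_zero(const uint8_t *p)
{
    for (size_t i = 0; i < PAGE_SIZE; i++) {
        if (p[i]) {
            return false;
        }
    }
    return true;
}

int main(void)
{
    uint8_t *ram = calloc(NPAGES, PAGE_SIZE);   /* all-zero "guest RAM" */
    FILE *stream = fopen("/dev/null", "wb");    /* fake migration stream */
    int64_t scan_ns = 0, emit_ns = 0;

    for (size_t i = 0; i < NPAGES; i++) {
        /* note: two clock reads per page - this is exactly the
         * qemu_clock_get_ns() overhead (factor b above) */
        int64_t t0 = now_ns();
        bool zero = page_is_zero(ram + i * PAGE_SIZE);
        int64_t t1 = now_ns();
        scan_ns += t1 - t0;

        if (zero) {
            uint64_t hdr = i;                   /* fake page header */
            fwrite(&hdr, sizeof(hdr), 1, stream);
            fputc(0, stream);
        }
        emit_ns += now_ns() - t1;
    }

    printf("scan: %.1f ms, header+byte: %.1f ms\n",
           scan_ns / 1e6, emit_ns / 1e6);
    fclose(stream);
    free(ram);
    return 0;
}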
Besides the two factors above, I missed one more: testing each page
against the balloon bitmap during migration, which consumes around ~320ms
for the same configuration. If we remove this overhead, which is
introduced by the proposed patch set, from the above calculation, we
almost get the total migration time for unmodified qemu
(5700 - 320 + 1789 + 559 = ~7700ms).
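The bitmap test itself is a one-liner; the ~320ms simply comes from
repeating it for every page (a 16GB guest is about 4.2 million 4KiB pages,
so even a few tens of nanoseconds per lookup adds up to the
hundreds-of-milliseconds range). Below is a standalone sketch of that kind
of per-page check; the names are my own for illustration, not the ones used
in the patch:

/* Stand-in for the per-page balloon-bitmap test: one bit per guest
 * page, set when the balloon driver has released the page.  QEMU
 * would use test_bit() on an unsigned long array; plain C below. */
#include <inttypes.h>
#include <limits.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SHIFT    12
#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

static bool balloon_bitmap_test(const unsigned long *bmap, uint64_t pfn)
{
    return bmap[pfn / BITS_PER_LONG] & (1UL << (pfn % BITS_PER_LONG));
}

int main(void)
{
    uint64_t npages = (16ULL << 30) >> PAGE_SHIFT;   /* 16 GiB guest */
    unsigned long *bmap = calloc(npages / BITS_PER_LONG + 1,
                                 sizeof(unsigned long));
    uint64_t skipped = 0;

    /* pretend the first 12 GiB were ballooned out */
    for (uint64_t pfn = 0; pfn < (12ULL << 30) >> PAGE_SHIFT; pfn++) {
        bmap[pfn / BITS_PER_LONG] |= 1UL << (pfn % BITS_PER_LONG);
    }

    /* migration loop: one lookup per page; this per-page cost is the
     * ~320ms overhead discussed above */
    for (uint64_t pfn = 0; pfn < npages; pfn++) {
        if (balloon_bitmap_test(bmap, pfn)) {
            skipped++;           /* would skip both zero scan and send */
            continue;
        }
        /* ... normal page migration path ... */
    }

    printf("skipped %" PRIu64 " of %" PRIu64 " pages\n", skipped, npages);
    free(bmap);
    return 0;
}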
Thanks,
- Jitendra
Thread overview: 14+ messages
2016-03-04 9:02 [Qemu-devel] [PATCH v1] migration: skip sending ram pages released by virtio-balloon driver Jitendra Kolhe
2016-03-07 17:05 ` Eric Blake
2016-03-10 9:49 ` Roman Kagan
2016-03-11 5:59 ` Jitendra Kolhe
2016-03-11 7:25 ` Li, Liang Z
2016-03-11 10:20 ` Jitendra Kolhe
2016-03-11 10:54 ` Li, Liang Z
2016-03-11 14:39 ` Jitendra Kolhe
2016-03-15 13:20 ` Jitendra Kolhe [this message]
2016-03-18 11:27 ` Roman Kagan
2016-03-22 5:47 ` Jitendra Kolhe