From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Peter Xu <peterx@redhat.com>
Cc: Xiao Guangrong <guangrong.xiao@gmail.com>,
pbonzini@redhat.com, mst@redhat.com, mtosatti@redhat.com,
qemu-devel@nongnu.org, kvm@vger.kernel.org, wei.w.wang@intel.com,
jiang.biao2@zte.com.cn, eblake@redhat.com,
Xiao Guangrong <xiaoguangrong@tencent.com>
Subject: Re: [Qemu-devel] [PATCH v2 3/8] migration: show the statistics of compression
Date: Wed, 25 Jul 2018 17:44:02 +0100 [thread overview]
Message-ID: <20180725164401.GD2365@work-vm> (raw)
In-Reply-To: <20180723080559.GI2491@xz-mi>
* Peter Xu (peterx@redhat.com) wrote:
> On Mon, Jul 23, 2018 at 03:39:18PM +0800, Xiao Guangrong wrote:
> >
> >
> > On 07/23/2018 12:36 PM, Peter Xu wrote:
> > > On Thu, Jul 19, 2018 at 08:15:15PM +0800, guangrong.xiao@gmail.com wrote:
> > > > @@ -1597,6 +1608,24 @@ static void migration_update_rates(RAMState *rs, int64_t end_time)
> > > > rs->xbzrle_cache_miss_prev) / iter_count;
> > > > rs->xbzrle_cache_miss_prev = xbzrle_counters.cache_miss;
> > > > }
> > > > +
> > > > + if (migrate_use_compression()) {
> > > > + uint64_t comp_pages;
> > > > +
> > > > + compression_counters.busy_rate = (double)(compression_counters.busy -
> > > > + rs->compress_thread_busy_prev) / iter_count;
> > >
> > > Here I'm not sure it's correct...
> > >
> > > "iter_count" stands for ramstate.iterations. It's increased per
> > > ram_find_and_save_block(), so IMHO it might contain multiple guest
> >
> > ram_find_and_save_block() returns once it has successfully posted a
> > page, and it posts only one page at a time.
>
> ram_find_and_save_block() calls ram_save_host_page(), and we should be
> sending multiple guest pages in ram_save_host_page() if the host page
> is a huge page?
>
> >
> > > pages. However compression_counters.busy should be per guest page.
> > >
> >
> > Actually, it's derived from xbzrle_counters.cache_miss_rate:
> > xbzrle_counters.cache_miss_rate = (double)(xbzrle_counters.cache_miss -
> > rs->xbzrle_cache_miss_prev) / iter_count;
>
> Then this is suspicious to me too...
Actually, I think this isn't totally wrong; iter_count is the *difference*
in iterations since the last time it was updated:

    uint64_t iter_count = rs->iterations - rs->iterations_prev;
    xbzrle_counters.cache_miss_rate = (double)(xbzrle_counters.cache_miss -
                                      rs->xbzrle_cache_miss_prev) / iter_count;

so this is:

    cache misses since last update
    ------------------------------
    iterations since last update

so the 'miss_rate' is ~misses / iteration.
Although that doesn't really correspond to time.
Dave
> >
> > > > + rs->compress_thread_busy_prev = compression_counters.busy;
> > > > +
> > > > + comp_pages = compression_counters.pages - rs->compress_pages_prev;
> > > > + if (comp_pages) {
> > > > + compression_counters.compression_rate =
> > > > + (double)(compression_counters.reduced_size -
> > > > + rs->compress_reduced_size_prev) /
> > > > + (comp_pages * TARGET_PAGE_SIZE);
> > > > + rs->compress_pages_prev = compression_counters.pages;
> > > > + rs->compress_reduced_size_prev = compression_counters.reduced_size;
> > > > + }
> > > > + }
> > > > }
> > > > static void migration_bitmap_sync(RAMState *rs)
> > > > @@ -1872,6 +1901,9 @@ static void flush_compressed_data(RAMState *rs)
> > > > qemu_mutex_lock(&comp_param[idx].mutex);
> > > > if (!comp_param[idx].quit) {
> > > > len = qemu_put_qemu_file(rs->f, comp_param[idx].file);
> > > > + /* 8 means a header with RAM_SAVE_FLAG_CONTINUE. */
> > > > + compression_counters.reduced_size += TARGET_PAGE_SIZE - len + 8;
> > >
> > > I would agree with Dave here - why do we store the "reduced size" instead
> > > of the size of the compressed data (which I think should be len - 8)?
> > >
> >
> > len - 8 is the size of the data after compression, rather than the amount
> > saved by compression; the raw compressed size is less straightforward for
> > the user who wants to see how much improvement compression brings.
> >
> > Hmm... but it is not a big deal to me... :)
>
> Yeah it might be a personal preference indeed. :)
>
> It's just natural for me to do it this way, since AFAIU the
> compression ratio is defined as:
>
>                         compressed data size
>   compression ratio = ------------------------
>                          original data size
>
> >
> > > Meanwhile, would a helper be nicer? Like:
> >
> > Yup, that's nicer indeed.
>
> Regards,
>
> --
> Peter Xu
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
2018-07-19 12:15 [Qemu-devel] [PATCH v2 0/8] migration: compression optimization guangrong.xiao
2018-07-19 12:15 ` [Qemu-devel] [PATCH v2 1/8] migration: do not wait for free thread guangrong.xiao
2018-07-23 3:25 ` Peter Xu
2018-07-23 7:16 ` Xiao Guangrong
2018-07-23 18:36 ` Eric Blake
2018-07-24 7:40 ` Xiao Guangrong
2018-07-19 12:15 ` [Qemu-devel] [PATCH v2 2/8] migration: fix counting normal page for compression guangrong.xiao
2018-07-23 3:33 ` Peter Xu
2018-07-19 12:15 ` [Qemu-devel] [PATCH v2 3/8] migration: show the statistics of compression guangrong.xiao
2018-07-23 4:36 ` Peter Xu
2018-07-23 7:39 ` Xiao Guangrong
2018-07-23 8:05 ` Peter Xu
2018-07-23 8:40 ` Xiao Guangrong
2018-07-23 9:15 ` Peter Xu
2018-07-24 7:37 ` Xiao Guangrong
2018-07-25 16:44 ` Dr. David Alan Gilbert [this message]
2018-07-26 5:29 ` Peter Xu
2018-07-19 12:15 ` [Qemu-devel] [PATCH v2 4/8] migration: introduce save_zero_page_to_file guangrong.xiao
2018-07-23 4:40 ` Peter Xu
2018-07-19 12:15 ` [Qemu-devel] [PATCH v2 5/8] migration: drop the return value of do_compress_ram_page guangrong.xiao
2018-07-23 4:48 ` Peter Xu
2018-07-19 12:15 ` [Qemu-devel] [PATCH v2 6/8] migration: move handle of zero page to the thread guangrong.xiao
2018-07-23 5:03 ` Peter Xu
2018-07-23 7:56 ` Xiao Guangrong
2018-07-23 8:28 ` Peter Xu
2018-07-23 8:44 ` Xiao Guangrong
2018-07-23 9:40 ` Peter Xu
2018-07-24 7:39 ` Xiao Guangrong
2018-07-19 12:15 ` [Qemu-devel] [PATCH v2 7/8] migration: hold the lock only if it is really needed guangrong.xiao
2018-07-23 5:36 ` Peter Xu
2018-07-19 12:15 ` [Qemu-devel] [PATCH v2 8/8] migration: do not flush_compressed_data at the end of each iteration guangrong.xiao
2018-07-23 5:49 ` Peter Xu
2018-07-23 8:05 ` Xiao Guangrong
2018-07-23 8:35 ` Peter Xu
2018-07-23 8:53 ` Xiao Guangrong
2018-07-23 9:01 ` Peter Xu
2018-07-24 7:29 ` Xiao Guangrong