qemu-devel.nongnu.org archive mirror
From: Peter Xu <peterx@redhat.com>
To: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: Xiao Guangrong <guangrong.xiao@gmail.com>,
	pbonzini@redhat.com, mst@redhat.com, mtosatti@redhat.com,
	qemu-devel@nongnu.org, kvm@vger.kernel.org, wei.w.wang@intel.com,
	jiang.biao2@zte.com.cn, eblake@redhat.com,
	Xiao Guangrong <xiaoguangrong@tencent.com>
Subject: Re: [Qemu-devel] [PATCH v2 3/8] migration: show the statistics of compression
Date: Thu, 26 Jul 2018 13:29:15 +0800	[thread overview]
Message-ID: <20180726052915.GK2479@xz-mi> (raw)
In-Reply-To: <20180725164401.GD2365@work-vm>

On Wed, Jul 25, 2018 at 05:44:02PM +0100, Dr. David Alan Gilbert wrote:
> * Peter Xu (peterx@redhat.com) wrote:
> > On Mon, Jul 23, 2018 at 03:39:18PM +0800, Xiao Guangrong wrote:
> > > 
> > > 
> > > On 07/23/2018 12:36 PM, Peter Xu wrote:
> > > > On Thu, Jul 19, 2018 at 08:15:15PM +0800, guangrong.xiao@gmail.com wrote:
> > > > > @@ -1597,6 +1608,24 @@ static void migration_update_rates(RAMState *rs, int64_t end_time)
> > > > >               rs->xbzrle_cache_miss_prev) / iter_count;
> > > > >           rs->xbzrle_cache_miss_prev = xbzrle_counters.cache_miss;
> > > > >       }
> > > > > +
> > > > > +    if (migrate_use_compression()) {
> > > > > +        uint64_t comp_pages;
> > > > > +
> > > > > +        compression_counters.busy_rate = (double)(compression_counters.busy -
> > > > > +            rs->compress_thread_busy_prev) / iter_count;
> > > > 
> > > > Here I'm not sure it's correct...
> > > > 
> > > > "iter_count" stands for ramstate.iterations.  It's increased per
> > > > ram_find_and_save_block(), so IMHO it might contain multiple guest
> > > 
> > > ram_find_and_save_block() returns once a page has been posted, and
> > > it only posts one page at a time.
> > 
> > ram_find_and_save_block() calls ram_save_host_page(), and we should be
> > sending multiple guest pages in ram_save_host_page() if the host page
> > is a huge page?
> > 
> > > 
> > > > pages.  However compression_counters.busy should be per guest page.
> > > > 
> > > 
> > > Actually, it's derived from xbzrle_counters.cache_miss_rate:
> > >         xbzrle_counters.cache_miss_rate = (double)(xbzrle_counters.cache_miss -
> > >             rs->xbzrle_cache_miss_prev) / iter_count;
> > 
> > Then this is suspicious to me too...
> 
> Actually, I think this isn't totally wrong;  iter_count is the *difference* in
> iterations since the last time it was updated:
> 
>    uint64_t iter_count = rs->iterations - rs->iterations_prev;
> 
>         xbzrle_counters.cache_miss_rate = (double)(xbzrle_counters.cache_miss -
>             rs->xbzrle_cache_miss_prev) / iter_count;
> 
> so this is:
>       cache-misses-since-last-update
>       ------------------------------
>         iterations since last-update
> 
> so the 'miss_rate' is ~misses / iteration.
> Although that doesn't really correspond to time.

I'm not sure I follow here; the thing is that I think the two
counters have different granularities, which is problematic:

- xbzrle_counters.cache_miss is done in save_xbzrle_page(), so it's
  per-guest-page granularity

- RAMState.iterations is done for each ram_find_and_save_block(), so
  it's per-host-page granularity
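
To make the granularity difference concrete, here is a toy Python model
(hypothetical, not QEMU code; the function name just mirrors the real one)
of where each counter is bumped when one 2M huge host page, backed by 4K
guest pages, gets migrated:

```python
# Toy model (not QEMU code) of the two counters' granularities.
HOST_PAGE = 2 * 1024 * 1024   # 2M huge host page
GUEST_PAGE = 4 * 1024         # 4K guest (target) page

counters = {"iterations": 0, "cache_miss": 0}

def ram_find_and_save_block(counters):
    # per-host-page granularity: RAMState.iterations is bumped once per call
    counters["iterations"] += 1
    # save_xbzrle_page() runs once per 4K guest page inside the host page;
    # assume every lookup misses the cache
    for _ in range(HOST_PAGE // GUEST_PAGE):
        counters["cache_miss"] += 1

ram_find_and_save_block(counters)
print(counters)  # {'iterations': 1, 'cache_miss': 512}
```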

An example: when we migrate a 2M huge page in the guest, we only
increase RAMState.iterations by 1 (since ram_find_and_save_block()
is called once), but we might increase xbzrle_counters.cache_miss
2M/4K=512 times (we call save_xbzrle_page() that many times) if
every page misses the cache.  In that case IMHO the reported cache
miss rate will be 512/1=51200%, while it should really be just 100%.
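
Plugging those deltas into the same ratio that migration_update_rates()
computes gives (a quick sanity check; the values are from the hypothetical
2M huge-page scenario):

```python
# Deltas since the last update, from the 2M huge-page scenario:
cache_miss_delta = 512  # xbzrle_counters.cache_miss - rs->xbzrle_cache_miss_prev
iter_delta = 1          # rs->iterations - rs->iterations_prev

cache_miss_rate = cache_miss_delta / iter_delta
print(f"{cache_miss_rate:.0%}")  # 51200% -- although every page missed exactly once
```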

Regards,

-- 
Peter Xu


Thread overview: 37+ messages
2018-07-19 12:15 [Qemu-devel] [PATCH v2 0/8] migration: compression optimization guangrong.xiao
2018-07-19 12:15 ` [Qemu-devel] [PATCH v2 1/8] migration: do not wait for free thread guangrong.xiao
2018-07-23  3:25   ` Peter Xu
2018-07-23  7:16     ` Xiao Guangrong
2018-07-23 18:36   ` Eric Blake
2018-07-24  7:40     ` Xiao Guangrong
2018-07-19 12:15 ` [Qemu-devel] [PATCH v2 2/8] migration: fix counting normal page for compression guangrong.xiao
2018-07-23  3:33   ` Peter Xu
2018-07-19 12:15 ` [Qemu-devel] [PATCH v2 3/8] migration: show the statistics of compression guangrong.xiao
2018-07-23  4:36   ` Peter Xu
2018-07-23  7:39     ` Xiao Guangrong
2018-07-23  8:05       ` Peter Xu
2018-07-23  8:40         ` Xiao Guangrong
2018-07-23  9:15           ` Peter Xu
2018-07-24  7:37             ` Xiao Guangrong
2018-07-25 16:44         ` Dr. David Alan Gilbert
2018-07-26  5:29           ` Peter Xu [this message]
2018-07-19 12:15 ` [Qemu-devel] [PATCH v2 4/8] migration: introduce save_zero_page_to_file guangrong.xiao
2018-07-23  4:40   ` Peter Xu
2018-07-19 12:15 ` [Qemu-devel] [PATCH v2 5/8] migration: drop the return value of do_compress_ram_page guangrong.xiao
2018-07-23  4:48   ` Peter Xu
2018-07-19 12:15 ` [Qemu-devel] [PATCH v2 6/8] migration: move handle of zero page to the thread guangrong.xiao
2018-07-23  5:03   ` Peter Xu
2018-07-23  7:56     ` Xiao Guangrong
2018-07-23  8:28       ` Peter Xu
2018-07-23  8:44         ` Xiao Guangrong
2018-07-23  9:40           ` Peter Xu
2018-07-24  7:39             ` Xiao Guangrong
2018-07-19 12:15 ` [Qemu-devel] [PATCH v2 7/8] migration: hold the lock only if it is really needed guangrong.xiao
2018-07-23  5:36   ` Peter Xu
2018-07-19 12:15 ` [Qemu-devel] [PATCH v2 8/8] migration: do not flush_compressed_data at the end of each iteration guangrong.xiao
2018-07-23  5:49   ` Peter Xu
2018-07-23  8:05     ` Xiao Guangrong
2018-07-23  8:35       ` Peter Xu
2018-07-23  8:53         ` Xiao Guangrong
2018-07-23  9:01           ` Peter Xu
2018-07-24  7:29             ` Xiao Guangrong
