qemu-devel.nongnu.org archive mirror
From: Chao Fan <fanc.fnst@cn.fujitsu.com>
To: Juan Quintela <quintela@redhat.com>
Cc: pbonzini@redhat.com, dgilbert@redhat.com, qemu-devel@nongnu.org,
	berrange@redhat.com, caoj.fnst@cn.fujitsu.com,
	douly.fnst@cn.fujitsu.com, maozy.fnst@cn.fujitsu.com,
	Li Zhijian <lizhijian@cn.fujitsu.com>
Subject: Re: [Qemu-devel] [PATCH v2] Change the method to calculate dirty-pages-rate
Date: Tue, 14 Mar 2017 17:35:34 +0800	[thread overview]
Message-ID: <20170314093534.GA13034@localhost.localdomain> (raw)
In-Reply-To: <8760jcuuax.fsf@secure.mitica>

On Tue, Mar 14, 2017 at 09:38:46AM +0100, Juan Quintela wrote:
>Chao Fan <fanc.fnst@cn.fujitsu.com> wrote:
>> In function cpu_physical_memory_sync_dirty_bitmap, file
>> include/exec/ram_addr.h:
>>
>> if (src[idx][offset]) {
>>     unsigned long bits = atomic_xchg(&src[idx][offset], 0);
>>     unsigned long new_dirty;
>>     new_dirty = ~dest[k];
>>     dest[k] |= bits;
>>     new_dirty &= bits;
>>     num_dirty += ctpopl(new_dirty);
>> }
>>
>> After this code executes, only the pages that are not yet dirty in the
>> bitmap (dest) but are dirty in dirty_memory[DIRTY_MEMORY_MIGRATION]
>> are counted. For example:
>> When ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION] = 0b00001111,
>> and atomic_rcu_read(&migration_bitmap_rcu)->bmap = 0b00000011,
>> new_dirty will be 0b00001100, so this function returns 2 instead of
>> the expected 4. All the dirty pages in
>> dirty_memory[DIRTY_MEMORY_MIGRATION] are new, so they should all be
>> counted.
>> Signed-off-by: Chao Fan <fanc.fnst@cn.fujitsu.com>
>> Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
>>
>> ---
>> v2: Remove the parameter 'num_dirty_pages_init'
>>     Fix incoming parameters of trace_migration_bitmap_sync_end
>
>Reviewed-by: Juan Quintela <quintela@redhat.com>
Hi Juan,

Thank you for your review!

>
>Just curious, does this change show any difference in any load?
I think this method counts the new dirty pages more precisely than
before, so it is helpful for determining the CPU throttle value.

You can see this mail:
https://lists.gnu.org/archive/html/qemu-devel/2017-01/msg03479.html

And according to Daniel's suggestion, 'inst-dirty-pages-rate' in my
old patch isn't needed anymore after this patch:
https://www.mail-archive.com/qemu-devel@nongnu.org/msg436183.html

Thanks,
Chao Fan
>
>Later, Juan.
>
>


Thread overview: 6+ messages
2017-03-14  1:55 [Qemu-devel] [PATCH v2] Change the method to calculate dirty-pages-rate Chao Fan
2017-03-14  8:38 ` Juan Quintela
2017-03-14  8:38 ` Juan Quintela
2017-03-14  9:35   ` Chao Fan [this message]
2017-03-15 16:49 ` Juan Quintela
2017-03-16  7:58 ` Juan Quintela
