From: Chao Fan <fanc.fnst@cn.fujitsu.com>
To: qemu-devel@nongnu.org, quintela@redhat.com, amit.shah@redhat.com
Cc: caoj.fnst@cn.fujitsu.com, douly.fnst@cn.fujitsu.com,
lizhijian@cn.fujitsu.com, izumi.taku@jp.fujitsu.com
Subject: Re: [Qemu-devel] [PATCH RFC] migration: set cpu throttle value by workload
Date: Thu, 29 Dec 2016 18:38:43 +0800 [thread overview]
Message-ID: <20161229103843.GA11831@localhost.localdomain> (raw)
In-Reply-To: <20161229091619.31049-1-fanc.fnst@cn.fujitsu.com>
Hi all,
There are a few things in this RFC patch that need explaining, inline below.
On Thu, Dec 29, 2016 at 05:16:19PM +0800, Chao Fan wrote:
>This RFC PATCH is a demo of the new feature; here is my POC mail:
>https://lists.gnu.org/archive/html/qemu-devel/2016-12/msg00646.html
>
>When migration_bitmap_sync() is executed, get the current time and read
>the bitmap to calculate how many dirty pages were born between the two
>syncs. Use inst_dirty_pages / (time_now - time_prev) / ram_size to get
>inst_dirty_pages_rate, then map inst_dirty_pages_rate to a cpu throttle
>value. I have no idea how best to do this mapping, so for now I do it
>in a simple way: the mapping is just a guess and should be improved.
>
>This is just a demo; there are other possible methods:
>1.In another file, calculate inst_dirty_pages_rate every second,
>  every two seconds, or at some other fixed interval, then set the
>  cpu throttle value according to inst_dirty_pages_rate.
>2.When inst_dirty_pages_rate reaches a threshold, begin cpu
>  throttling and set the throttle value.
>
>Any comments will be welcome.
>
>Signed-off-by: Chao Fan <fanc.fnst@cn.fujitsu.com>
>---
> include/qemu/bitmap.h | 17 +++++++++++++++++
> migration/ram.c | 49 +++++++++++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 66 insertions(+)
>
>diff --git a/include/qemu/bitmap.h b/include/qemu/bitmap.h
>index 63ea2d0..dc99f9b 100644
>--- a/include/qemu/bitmap.h
>+++ b/include/qemu/bitmap.h
>@@ -235,4 +235,21 @@ static inline unsigned long *bitmap_zero_extend(unsigned long *old,
> return new;
> }
>
>+static inline unsigned long bitmap_weight(const unsigned long *src, long nbits)
This function is imported from the Linux kernel; it counts the set bits
in a bitmap, which here gives the number of dirty pages.
>+{
>+ unsigned long i, count = 0, nlong = nbits / BITS_PER_LONG;
>+
>+ if (small_nbits(nbits)) {
>+ return hweight_long(*src & BITMAP_LAST_WORD_MASK(nbits));
>+ }
>+ for (i = 0; i < nlong; i++) {
>+ count += hweight_long(src[i]);
>+ }
>+ if (nbits % BITS_PER_LONG) {
>+ count += hweight_long(src[i] & BITMAP_LAST_WORD_MASK(nbits));
>+ }
>+
>+ return count;
>+}
>+
> #endif /* BITMAP_H */
>diff --git a/migration/ram.c b/migration/ram.c
>index a1c8089..f96e3e3 100644
>--- a/migration/ram.c
>+++ b/migration/ram.c
>@@ -44,6 +44,7 @@
> #include "exec/ram_addr.h"
> #include "qemu/rcu_queue.h"
> #include "migration/colo.h"
>+#include "hw/boards.h"
>
> #ifdef DEBUG_MIGRATION_RAM
> #define DPRINTF(fmt, ...) \
>@@ -599,6 +600,9 @@ static int64_t num_dirty_pages_period;
> static uint64_t xbzrle_cache_miss_prev;
> static uint64_t iterations_prev;
>
>+static int64_t dirty_pages_time_prev;
>+static int64_t dirty_pages_time_now;
>+
> static void migration_bitmap_sync_init(void)
> {
> start_time = 0;
>@@ -606,6 +610,49 @@ static void migration_bitmap_sync_init(void)
> num_dirty_pages_period = 0;
> xbzrle_cache_miss_prev = 0;
> iterations_prev = 0;
>+
>+ dirty_pages_time_prev = 0;
>+ dirty_pages_time_now = 0;
>+}
>+
>+static void migration_inst_rate(void)
>+{
>+ RAMBlock *block;
>+ MigrationState *s = migrate_get_current();
>+ int64_t inst_dirty_pages_rate, inst_dirty_pages = 0;
>+ int64_t i;
>+ unsigned long *num;
>+ unsigned long len = 0;
>+
>+ dirty_pages_time_now = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
This runs every time migration_bitmap_sync() executes. Sampling the page
count and time at a fixed interval instead (every second or two) might
also work, but I have no idea which is better.
>+ if (dirty_pages_time_prev != 0) {
>+ rcu_read_lock();
>+ DirtyMemoryBlocks *blocks = atomic_rcu_read(
>+ &ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION]);
>+ QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
>+ if (len == 0) {
>+ len = block->offset;
>+ }
>+ len += block->used_length;
>+ }
>+ ram_addr_t idx = (len >> TARGET_PAGE_BITS) / DIRTY_MEMORY_BLOCK_SIZE;
>+ if (((len >> TARGET_PAGE_BITS) % DIRTY_MEMORY_BLOCK_SIZE) != 0) {
>+ idx++;
>+ }
>+ for (i = 0; i < idx; i++) {
>+ num = blocks->blocks[i];
>+ inst_dirty_pages += bitmap_weight(num, DIRTY_MEMORY_BLOCK_SIZE);
>+ }
>+ rcu_read_unlock();
>+
>+ inst_dirty_pages_rate = inst_dirty_pages * TARGET_PAGE_SIZE *
>+ 1024 * 1024 * 1000 /
The time we get is in milliseconds, so the page count is multiplied by
1000 to convert the rate to per-second. The two *1024 factors exist only
to keep the magnitude: without them inst_dirty_pages is so small relative
to ram_size that integer division would round the rate down to 0.
>+ (dirty_pages_time_now - dirty_pages_time_prev) /
>+ current_machine->ram_size;
>+ s->parameters.cpu_throttle_initial = inst_dirty_pages_rate / 200;
>+ s->parameters.cpu_throttle_increment = inst_dirty_pages_rate / 200;
The 200 here is just a guess: I don't know how to map
inst_dirty_pages_rate to a throttle value, so I filled in a number.
There must be better ways to do this mapping, and with one of them the
throttle value could be set better than the current defaults of 20/10.
Thanks,
Chao Fan
>+ }
>+ dirty_pages_time_prev = dirty_pages_time_now;
> }
>
> static void migration_bitmap_sync(void)
>@@ -629,6 +676,8 @@ static void migration_bitmap_sync(void)
> trace_migration_bitmap_sync_start();
> memory_global_dirty_log_sync();
>
>+ migration_inst_rate();
>+
> qemu_mutex_lock(&migration_bitmap_mutex);
> rcu_read_lock();
> QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
>--
>2.9.3
>
Thread overview: 7+ messages
2016-12-29 9:16 [Qemu-devel] [PATCH RFC] migration: set cpu throttle value by workload Chao Fan
2016-12-29 10:38 ` Chao Fan [this message]
2017-01-12 2:17 ` Cao jin
2017-01-18 5:10 ` Chao Fan
2017-01-27 12:07 ` Dr. David Alan Gilbert
2017-02-06 6:25 ` Chao Fan
2017-02-24 13:01 ` Dr. David Alan Gilbert