public inbox for kvm@vger.kernel.org
From: Marcelo Tosatti <mtosatti@redhat.com>
To: Izik Eidus <ieidus@redhat.com>
Cc: avi@redhat.com, kvm@vger.kernel.org
Subject: Re: [PATCH] fix migration with big mem guests
Date: Mon, 5 Apr 2010 17:50:40 -0300	[thread overview]
Message-ID: <20100405205040.GA15814@amt.cnet> (raw)
In-Reply-To: <20100405022637.75dbdc73@redhat.com>

On Mon, Apr 05, 2010 at 02:26:37AM +0300, Izik Eidus wrote:
> Hi,
> 
> (Below is an explanation of the bug, for anyone not familiar with it.)
> 
> Initially I tried to make this code run with
> qemu_bh(), but the result was a performance catastrophe.
> 
> The reason is that the migration code just isn't built
> to run at such fine granularity; for example, stuff like:
> 
> static ram_addr_t ram_save_remaining(void)
> {
>     ram_addr_t addr;
>     ram_addr_t count = 0;
> 
>     for (addr = 0; addr < last_ram_offset; addr += TARGET_PAGE_SIZE) {
>         if (cpu_physical_memory_get_dirty(addr, MIGRATION_DIRTY_FLAG))
>             count++;
>     }
> 
>     return count;
> }
> 
> That gets called from ram_save_live(), and it was taking way too much
> time... (Keep in mind that I tried to read only a small amount of data
>  each time, and to run it every time main_loop_wait() finishes, from
>  qemu_bh_poll().)
> 
> Then I thought, OK - let's add a timer so that the bh code runs only
> once in a while - but the migration code already has a timer set, so
> it seemed to make the most sense to use that...
> 
> If anyone has a better idea of how to solve this issue, I will be very
> happy to hear it.

Izik,

Looks good to me. Please send to qemu-devel.

> 
> Thanks.
> 
> >From 2d9c25f1fee61f50cb130769c3779707a6ef90d9 Mon Sep 17 00:00:00 2001
> From: Izik Eidus <ieidus@redhat.com>
> Date: Mon, 5 Apr 2010 02:05:09 +0300
> Subject: [PATCH] qemu-kvm: fix migration with large mem
> 
> For guests with large memory that have many pages
> whose bytes all share the same value, we spend
> a lot of time reading the memory from the guest
> (is_dup_page()).
> 
> This happens because ram_save_live() limits how much
> we can send to the destination, but not how much we
> read from the guest; when there are many is_dup_page()
> hits, we might read a huge amount of data without updating
> important stuff like the timers...
> 
> The guest loses all responsiveness and hits many soft
> lockups internally.
> 
> This patch adds a limit on how much we can read from the
> guest in each iteration.
> 
> Thanks.
> 
> Signed-off-by: Izik Eidus <ieidus@redhat.com>
> ---
>  vl.c |    6 +++++-
>  1 files changed, 5 insertions(+), 1 deletions(-)
> 
> diff --git a/vl.c b/vl.c
> index d959fdb..777988d 100644
> --- a/vl.c
> +++ b/vl.c
> @@ -174,6 +174,8 @@ int main(int argc, char **argv)
>  
>  #define DEFAULT_RAM_SIZE 128
>  
> +#define MAX_SAVE_BLOCK_READ 10 * 1024 * 1024
> +
>  #define MAX_VIRTIO_CONSOLES 1
>  
>  static const char *data_dir;
> @@ -2854,6 +2856,7 @@ static int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
>      uint64_t bytes_transferred_last;
>      double bwidth = 0;
>      uint64_t expected_time = 0;
> +    int data_read = 0;
>  
>      if (stage < 0) {
>          cpu_physical_memory_set_dirty_tracking(0);
> @@ -2883,10 +2886,11 @@ static int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
>      bytes_transferred_last = bytes_transferred;
>      bwidth = qemu_get_clock_ns(rt_clock);
>  
> -    while (!qemu_file_rate_limit(f)) {
> +    while (!qemu_file_rate_limit(f) && data_read < MAX_SAVE_BLOCK_READ) {
>          int ret;
>  
>          ret = ram_save_block(f);
> +        data_read += ret * TARGET_PAGE_SIZE;
>          bytes_transferred += ret * TARGET_PAGE_SIZE;
>          if (ret == 0) /* no more blocks */
>              break;
> -- 
> 1.6.6.1
> 

