From: Marcelo Tosatti <mtosatti@redhat.com>
To: Umesh Deshpande <udeshpan@redhat.com>
Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org
Subject: Re: [Qemu-devel] [RFC 3/4] A separate thread for the VM migration
Date: Wed, 20 Jul 2011 16:02:46 -0300	[thread overview]
Message-ID: <20110720190246.GB20170@amt.cnet> (raw)
In-Reply-To: <812725771.1447271.1311134444174.JavaMail.root@zmail01.collab.prod.int.phx2.redhat.com>

On Wed, Jul 20, 2011 at 12:00:44AM -0400, Umesh Deshpande wrote:
> This patch creates a separate thread for the guest migration on the source side. The migration routine is called from the migration clock.
> 
> Signed-off-by: Umesh Deshpande <udeshpan@redhat.com>
> ---
>  arch_init.c      |    8 +++++++
>  buffered_file.c  |   10 ++++-----
>  migration-tcp.c  |   18 ++++++++---------
>  migration-unix.c |    7 ++----
>  migration.c      |   56 +++++++++++++++++++++++++++++--------------------------
>  migration.h      |    4 +--
>  6 files changed, 57 insertions(+), 46 deletions(-)
> 
> diff --git a/arch_init.c b/arch_init.c
> index f81a729..6d44b72 100644
> --- a/arch_init.c
> +++ b/arch_init.c
> @@ -260,6 +260,10 @@ int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
>          return 0;
>      }
>  
> +    if (stage != 3) {
> +        qemu_mutex_lock_iothread();
> +    }
> +
>      if (cpu_physical_sync_dirty_bitmap(0, TARGET_PHYS_ADDR_MAX) != 0) {
>          qemu_file_set_error(f);
>          return 0;
> @@ -267,6 +271,10 @@ int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
>  
>      sync_migration_bitmap(0, TARGET_PHYS_ADDR_MAX);
>  
> +    if (stage != 3) {
> +        qemu_mutex_unlock_iothread();
> +    }
> +

Many data structures shared by the vcpus/iothread and the migration thread are
accessed simultaneously without protection. Instead of simply moving
the entire migration routine to a thread, I'd suggest moving only the
time-consuming work in ram_save_block (dup_page and put_buffer), after
a proper audit for shared access. And send more than one page at a time, of
course.

A separate lock for ram_list is probably necessary, so that it can
be accessed from the migration thread.

Thread overview:
2011-07-20  4:00 ` [Qemu-devel] [RFC 3/4] A separate thread for the VM migration Umesh Deshpande
2011-07-20 19:02   ` Marcelo Tosatti [this message]
2011-07-21 23:28     ` Umesh Deshpande
