From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 20 Jul 2011 16:02:46 -0300
From: Marcelo Tosatti
Message-ID: <20110720190246.GB20170@amt.cnet>
References: <464342277.1446996.1311132896813.JavaMail.root@zmail01.collab.prod.int.phx2.redhat.com>
 <812725771.1447271.1311134444174.JavaMail.root@zmail01.collab.prod.int.phx2.redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <812725771.1447271.1311134444174.JavaMail.root@zmail01.collab.prod.int.phx2.redhat.com>
Subject: Re: [Qemu-devel] [RFC 3/4] A separate thread for the VM migration
To: Umesh Deshpande
Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org

On Wed, Jul 20, 2011 at 12:00:44AM -0400, Umesh Deshpande wrote:
> This patch creates a separate thread for the guest migration on the source side. The migration routine is called from the migration clock.
>
> Signed-off-by: Umesh Deshpande
> ---
>  arch_init.c      |    8 +++++++
>  buffered_file.c  |   10 ++++-----
>  migration-tcp.c  |   18 ++++++++---------
>  migration-unix.c |    7 ++----
>  migration.c      |   56 +++++++++++++++++++++++++++++--------------------------
>  migration.h      |    4 +--
>  6 files changed, 57 insertions(+), 46 deletions(-)
>
> diff --git a/arch_init.c b/arch_init.c
> index f81a729..6d44b72 100644
> --- a/arch_init.c
> +++ b/arch_init.c
> @@ -260,6 +260,10 @@ int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
>          return 0;
>      }
>
> +    if (stage != 3) {
> +        qemu_mutex_lock_iothread();
> +    }
> +
>      if (cpu_physical_sync_dirty_bitmap(0, TARGET_PHYS_ADDR_MAX) != 0) {
>          qemu_file_set_error(f);
>          return 0;
> @@ -267,6 +271,10 @@ int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
>
>      sync_migration_bitmap(0, TARGET_PHYS_ADDR_MAX);
>
> +    if (stage != 3) {
> +        qemu_mutex_unlock_iothread();
> +    }
> +

Many data structures shared by the vcpus/iothread and the migration
thread are accessed simultaneously without protection.

Instead of simply moving the entire migration routine to a thread, I'd
suggest moving only the time-consuming work in ram_save_block (dup_page
and put_buffer) there, after a proper audit of the shared accesses. And
sending more than one page at a time, of course.

A separate lock for ram_list is probably necessary, so that it can be
accessed from the migration thread.
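
For concreteness, a minimal sketch of the kind of dedicated ram_list
lock meant above. The QemuMutex API (qemu_mutex_init/lock/unlock from
qemu-thread.h) is what QEMU already provides; the ram_list_mutex
variable and the ram_list_lock/ram_list_unlock helpers are made-up
names for illustration, not part of this patch:

/* Sketch only: a dedicated lock protecting ram_list, so the migration
 * thread can walk the block list without holding the global iothread
 * mutex.  ram_list_mutex and the helpers below are hypothetical. */

#include "qemu-thread.h"

static QemuMutex ram_list_mutex;

void ram_list_lock_init(void)
{
    qemu_mutex_init(&ram_list_mutex);
}

void ram_list_lock(void)
{
    qemu_mutex_lock(&ram_list_mutex);
}

void ram_list_unlock(void)
{
    qemu_mutex_unlock(&ram_list_mutex);
}

The migration thread would then take ram_list_lock() around its
traversal of the block list in ram_save_block, and any vcpu/iothread
path that modifies ram_list would take the same lock, so the expensive
dup_page test and qemu_put_buffer call could run without the iothread
mutex held.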