From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 14 Sep 2015 19:53:35 +0100
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Message-ID: <20150914185335.GH25616@work-vm>
References: <20150811134826.GI4520@redhat.com>
 <20150812052346.GC4587@in.ibm.com>
 <1441692486.14597.17.camel@ellerman.id.au>
 <20150908063948.GB678@in.ibm.com>
 <20150908085946.GC2246@work-vm>
 <20150908095915.GC678@in.ibm.com>
 <20150908124652.GK2246@work-vm>
 <20150908133647.GA17433@in.ibm.com>
 <20150908141356.GM2246@work-vm>
 <20150910122431.GL17433@in.ibm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20150910122431.GL17433@in.ibm.com>
Subject: Re: [Qemu-devel] [PATCH 19/23] userfaultfd: activate syscall
To: Bharata B Rao <bharata@linux.vnet.ibm.com>
Cc: qemu-devel@nongnu.org, David Gibson

* Bharata B Rao (bharata@linux.vnet.ibm.com) wrote:
> (cc trimmed since this looks like an issue that is contained within QEMU)
>
> On Tue, Sep 08, 2015 at 03:13:56PM +0100, Dr. David Alan Gilbert wrote:
> > * Bharata B Rao (bharata@linux.vnet.ibm.com) wrote:
> > > On Tue, Sep 08, 2015 at 01:46:52PM +0100, Dr. David Alan Gilbert wrote:
> > > > * Bharata B Rao (bharata@linux.vnet.ibm.com) wrote:
> > > > > On Tue, Sep 08, 2015 at 09:59:47AM +0100, Dr. David Alan Gilbert wrote:
> > > > > > * Bharata B Rao (bharata@linux.vnet.ibm.com) wrote:
> > > > > > > In fact I had successfully done postcopy migration of sPAPR guest with
> > > > > > > this setup.
> > > > > >
> > > > > > Interesting - I'd not got that far myself on power; I was hitting a problem
> > > > > > loading htab ( htab_load() bad index 2113929216 (14848+0 entries) in htab stream (htab_shift=25) )
> > > > > >
> > > > > > Did you have to make any changes to the qemu code to get that happy?
> > > > >
> > > > > I should have mentioned that I tried only QEMU-driven migration within
> > > > > the same host using the wp3-postcopy branch of your tree. I don't see the
> > > > > above issue.
> > > > >
> > > > > (qemu) info migrate
> > > > > capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off compress: off x-postcopy-ram: on
> > > > > Migration status: completed
> > > > > total time: 39432 milliseconds
> > > > > downtime: 162 milliseconds
> > > > > setup: 14 milliseconds
> > > > > transferred ram: 1297209 kbytes
> > > > > throughput: 270.72 mbps
> > > > > remaining ram: 0 kbytes
> > > > > total ram: 4194560 kbytes
> > > > > duplicate: 734015 pages
> > > > > skipped: 0 pages
> > > > > normal: 318469 pages
> > > > > normal bytes: 1273876 kbytes
> > > > > dirty sync count: 4
> > > > >
> > > > > I will try migration between different hosts soon and check.
> > > >
> > > > I hit that on the same host; are you sure you've switched into postcopy mode,
> > > > i.e. issued a migrate_start_postcopy before the end of migration?
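
For reference, the full source-side sequence needed to drive a postcopy
migration with this series is roughly the following (HMP syntax; the
destination host/port here are hypothetical, and the destination QEMU is
assumed to have been started with a matching -incoming option):

    (qemu) migrate_set_capability x-postcopy-ram on
    (qemu) migrate -d tcp:desthost:4444
    (qemu) migrate_start_postcopy

Turning on the capability only arms postcopy; the actual switchover
happens when migrate_start_postcopy is issued while the migration is
still running.
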
> > >
> > > Sorry, I was following your discussion with Li in this thread
> > >
> > > https://www.marc.info/?l=qemu-devel&m=143035620026744&w=4
> > >
> > > and it wasn't obvious to me that anything apart from turning on the
> > > x-postcopy-ram capability was required :(
> >
> > OK.
> >
> > > So I do see the problem now.
> > >
> > > At the source
> > > -------------
> > > Error reading data from KVM HTAB fd: Bad file descriptor
> > > Segmentation fault
> > >
> > > At the target
> > > -------------
> > > htab_load() bad index 2113929216 (14336+0 entries) in htab stream (htab_shift=25)
> > > qemu-system-ppc64: error while loading state section id 56(spapr/htab)
> > > qemu-system-ppc64: postcopy_ram_listen_thread: loadvm failed: -22
> > > qemu-system-ppc64: VQ 0 size 0x100 Guest index 0x0 inconsistent with Host index 0x1f: delta 0xffe1
> > > qemu-system-ppc64: error while loading state for instance 0x0 of device 'pci@800000020000000:00.0/virtio-net'
> > > *** Error in `./ppc64-softmmu/qemu-system-ppc64': corrupted double-linked list: 0x00000100241234a0 ***
> > > ======= Backtrace: =========
> > > /lib64/power8/libc.so.6Segmentation fault
> >
> > Good - my current world has got rid of the segfaults/corruption in the cleanup on power - but those
> > only show up after it has stumbled over the htab problem.
> >
> > I don't know the innards of power/htab, so if you've got any pointers
> > on what upset it I'd be grateful.
>
> When migrate_start_postcopy is issued, the SaveStateEntry
> save_live_iterate call for HTAB comes after save_live_complete. In the
> case of HTAB, spapr->htab_fd is closed when HTAB saving finishes in the
> save_live_complete handler. When a save_live_iterate call arrives after
> this, we end up accessing an invalid fd, resulting in the migration
> failure we are seeing here.
>
> - With postcopy migration, is it expected to get a save_live_iterate
> call after save_live_complete? IIUC, save_live_complete signals the
> completion of the saving. Is the save_live_iterate handler expected to
> handle this condition?
>
> I am able to get past this failure and get the migration to complete
> successfully with the hack below, where I teach the save_live_iterate
> handler to ignore requests after save_live_complete has been called.
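
(Bharata's actual hack isn't quoted here; as a purely illustrative
sketch, a guard of that shape in hw/ppc/spapr.c might look like the
following, assuming htab_save_complete() sets spapr->htab_fd to -1 once
it has closed the kernel HTAB fd:

    static int htab_save_iterate(QEMUFile *f, void *opaque)
    {
        sPAPRMachineState *spapr = opaque;

        if (!spapr->htab && spapr->htab_fd < 0) {
            /* The kernel HTAB fd was already closed by the
             * save_live_complete handler; report "finished" instead
             * of reading from a dead descriptor.
             */
            return 1;
        }

        /* ... iteration path continues unchanged ... */
    }

That only papers over the symptom in one device, though.)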

The fix I'm going with is included below; only smoke tested on x86 so far,
I'll grab a Power box to test it on before I republish this set.
(and this is on my working tree rather than the version I last published,
but it should be reasonably close)

From c51e5f8e8cef4ca5a47c1446803a9b35aa7d738d Mon Sep 17 00:00:00 2001
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Date: Mon, 14 Sep 2015 19:27:45 +0100
Subject: [PATCH] Don't iterate on precopy-only devices during postcopy

During the postcopy phase we must not call the iterate method on
precopy-only devices, since they may have done some cleanup during
the _complete call at the end of the precopy phase.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 include/sysemu/sysemu.h |  2 +-
 migration/migration.c   |  2 +-
 migration/savevm.c      | 13 +++++++++++--
 3 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/include/sysemu/sysemu.h b/include/sysemu/sysemu.h
index ccf278e..018a628 100644
--- a/include/sysemu/sysemu.h
+++ b/include/sysemu/sysemu.h
@@ -108,7 +108,7 @@ bool qemu_savevm_state_blocked(Error **errp);
 void qemu_savevm_state_begin(QEMUFile *f,
                              const MigrationParams *params);
 void qemu_savevm_state_header(QEMUFile *f);
-int qemu_savevm_state_iterate(QEMUFile *f);
+int qemu_savevm_state_iterate(QEMUFile *f, bool postcopy);
 void qemu_savevm_state_complete_postcopy(QEMUFile *f);
 void qemu_savevm_state_complete_precopy(QEMUFile *f);
 void qemu_savevm_state_cancel(void);
diff --git a/migration/migration.c b/migration/migration.c
index 0468bc4..e9e8f6a 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1589,7 +1589,7 @@ static void *migration_thread(void *opaque)
                 continue;
             }
             /* Just another iteration step */
-            qemu_savevm_state_iterate(s->file);
+            qemu_savevm_state_iterate(s->file, entered_postcopy);
         } else {
             trace_migration_thread_low_pending(pending_size);
 
diff --git a/migration/savevm.c b/migration/savevm.c
index 42f67a6..9ae9841 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -931,7 +931,7 @@ void qemu_savevm_state_begin(QEMUFile *f,
  * 0 : We haven't finished, caller have to go again
  * 1 : We have finished, we can go to complete phase
  */
-int qemu_savevm_state_iterate(QEMUFile *f)
+int qemu_savevm_state_iterate(QEMUFile *f, bool postcopy)
 {
     SaveStateEntry *se;
     int ret = 1;
@@ -946,6 +946,15 @@ int qemu_savevm_state_iterate(QEMUFile *f)
                 continue;
             }
         }
+        /*
+         * In the postcopy phase, any device that doesn't know how to
+         * do postcopy should have saved its state in the _complete
+         * call that's already run; it might get confused if we call
+         * iterate afterwards.
+         */
+        if (postcopy && !se->ops->save_live_complete_postcopy) {
+            return 0;
+        }
         if (qemu_file_rate_limit(f)) {
             return 0;
         }
@@ -1160,7 +1169,7 @@ static int qemu_savevm_state(QEMUFile *f, Error **errp)
     qemu_mutex_lock_iothread();
 
     while (qemu_file_get_error(f) == 0) {
-        if (qemu_savevm_state_iterate(f) > 0) {
+        if (qemu_savevm_state_iterate(f, false) > 0) {
             break;
         }
     }
-- 
2.4.3

-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK