Date: Thu, 10 Sep 2015 17:54:31 +0530
From: Bharata B Rao
Reply-To: bharata@linux.vnet.ibm.com
Message-ID: <20150910122431.GL17433@in.ibm.com>
In-Reply-To: <20150908141356.GM2246@work-vm>
Subject: Re: [Qemu-devel] [PATCH 19/23] userfaultfd: activate syscall
To: "Dr. David Alan Gilbert"
Cc: qemu-devel@nongnu.org, David Gibson

(cc trimmed since this looks like an issue that is contained within QEMU)

On Tue, Sep 08, 2015 at 03:13:56PM +0100, Dr. David Alan Gilbert wrote:
> * Bharata B Rao (bharata@linux.vnet.ibm.com) wrote:
> > On Tue, Sep 08, 2015 at 01:46:52PM +0100, Dr. David Alan Gilbert wrote:
> > > * Bharata B Rao (bharata@linux.vnet.ibm.com) wrote:
> > > > On Tue, Sep 08, 2015 at 09:59:47AM +0100, Dr. David Alan Gilbert wrote:
> > > > > * Bharata B Rao (bharata@linux.vnet.ibm.com) wrote:
> > > > > > In fact I had successfully done postcopy migration of sPAPR guest with
> > > > > > this setup.
> > > > >
> > > > > Interesting - I'd not got that far myself on power; I was hitting a problem
> > > > > loading htab ( htab_load() bad index 2113929216 (14848+0 entries) in htab stream (htab_shift=25) )
> > > > >
> > > > > Did you have to make any changes to the qemu code to get that happy?
> > > >
> > > > I should have mentioned that I tried only QEMU driven migration within
> > > > the same host using wp3-postcopy branch of your tree. I don't see the
> > > > above issue.
> > > >
> > > > (qemu) info migrate
> > > > capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off compress: off x-postcopy-ram: on
> > > > Migration status: completed
> > > > total time: 39432 milliseconds
> > > > downtime: 162 milliseconds
> > > > setup: 14 milliseconds
> > > > transferred ram: 1297209 kbytes
> > > > throughput: 270.72 mbps
> > > > remaining ram: 0 kbytes
> > > > total ram: 4194560 kbytes
> > > > duplicate: 734015 pages
> > > > skipped: 0 pages
> > > > normal: 318469 pages
> > > > normal bytes: 1273876 kbytes
> > > > dirty sync count: 4
> > > >
> > > > I will try migration between different hosts soon and check.
> > >
> > > I hit that on the same host; are you sure you've switched into postcopy mode;
> > > i.e. issued a migrate_start_postcopy before the end of migration?
> >
> > Sorry I was following your discussion with Li in this thread
> >
> > https://www.marc.info/?l=qemu-devel&m=143035620026744&w=4
> >
> > and it wasn't obvious to me that anything apart from turning on the
> > x-postcopy-ram capability was required :(
>
> OK.
>
> > So I do see the problem now.
> >
> > At the source
> > -------------
> > Error reading data from KVM HTAB fd: Bad file descriptor
> > Segmentation fault
> >
> > At the target
> > -------------
> > htab_load() bad index 2113929216 (14336+0 entries) in htab stream (htab_shift=25)
> > qemu-system-ppc64: error while loading state section id 56(spapr/htab)
> > qemu-system-ppc64: postcopy_ram_listen_thread: loadvm failed: -22
> > qemu-system-ppc64: VQ 0 size 0x100 Guest index 0x0 inconsistent with Host index 0x1f: delta 0xffe1
> > qemu-system-ppc64: error while loading state for instance 0x0 of device 'pci@800000020000000:00.0/virtio-net'
> > *** Error in `./ppc64-softmmu/qemu-system-ppc64': corrupted double-linked list: 0x00000100241234a0 ***
> > ======= Backtrace: =========
> > /lib64/power8/libc.so.6Segmentation fault
>
> Good - my current world has got rid of the segfaults/corruption in the cleanup on power - but those
> are only after it stumbled over the htab problem.
>
> I don't know the innards of power/htab, so if you've got any pointers on what upset it
> I'd be happy for some pointers.

When migrate_start_postcopy is issued, the HTAB SaveStateEntry's save_live_iterate
call arrives after save_live_complete. For HTAB, spapr->htab_fd is closed when
saving finishes in the save_live_complete handler, so a save_live_iterate call that
comes after it ends up accessing an invalid fd, which is the migration failure we
are seeing here.

- With postcopy migration, is it expected to get a save_live_iterate call after
  save_live_complete? IIUC, save_live_complete signals that saving has finished.
- Is the save_live_iterate handler expected to handle this condition?

I am able to get past this failure and complete the migration successfully with the
hack below, which teaches the save_live_iterate handler to ignore requests that
arrive after save_live_complete has been called.
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 2f8155d..550e234 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -1236,6 +1236,11 @@ static int htab_save_iterate(QEMUFile *f, void *opaque)
             return rc;
         }
 
+        if (spapr->htab_fd == -1) {
+            rc = 1;
+            goto out;
+        }
+
         rc = kvmppc_save_htab(f, spapr->htab_fd,
                               MAX_KVM_BUF_SIZE, MAX_ITERATION_NS);
         if (rc < 0) {
@@ -1247,6 +1252,7 @@ static int htab_save_iterate(QEMUFile *f, void *opaque)
         rc = htab_save_later_pass(f, spapr, MAX_ITERATION_NS);
     }
 
+out:
     /* End marker */
     qemu_put_be32(f, 0);
     qemu_put_be16(f, 0);

(qemu) migrate_set_capability x-postcopy-ram on
(qemu) migrate -d tcp:localhost:4444
(qemu) migrate_start_postcopy
(qemu) info migrate
capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off compress: off x-postcopy-ram: on
Migration status: completed
total time: 3801 milliseconds
downtime: 147 milliseconds
setup: 17 milliseconds
transferred ram: 1091652 kbytes
throughput: 2365.71 mbps
remaining ram: 0 kbytes
total ram: 4194560 kbytes
duplicate: 781969 pages
skipped: 0 pages
normal: 267087 pages
normal bytes: 1068348 kbytes
dirty sync count: 2
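
For context on why the htab_fd == -1 check is enough: the save_live_complete
handler closes and invalidates the HTAB fd once the full table has been streamed,
so any save_live_iterate call that sneaks in afterwards would otherwise read from a
stale descriptor. A rough sketch of that completion path is below; apart from
spapr->htab_fd, kvmppc_save_htab, MAX_KVM_BUF_SIZE and the end-marker puts (which
all appear in the diff above), the names and the exact body are assumptions rather
than the actual spapr.c code.

/* Sketch only: presumed shape of the HTAB completion handler in
 * hw/ppc/spapr.c, not the verbatim QEMU code. */
static int htab_save_complete(QEMUFile *f, void *opaque)
{
    sPAPRMachineState *spapr = opaque;   /* machine-state type assumed */
    int rc;

    /* Drain the remaining HTAB entries; -1 here is assumed to mean
     * "no time bound", unlike the bounded iterate path. */
    rc = kvmppc_save_htab(f, spapr->htab_fd, MAX_KVM_BUF_SIZE, -1);
    if (rc < 0) {
        return rc;
    }

    /* End marker, same framing as the iterate path above. */
    qemu_put_be32(f, 0);
    qemu_put_be16(f, 0);

    /* Closing and invalidating the fd here is what a later
     * save_live_iterate call trips over unless it bails out early
     * on htab_fd == -1, as the hack above does. */
    close(spapr->htab_fd);
    spapr->htab_fd = -1;

    return 0;
}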