From: Bharata B Rao <bharata@linux.vnet.ibm.com>
To: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: qemu-devel@nongnu.org, David Gibson <david@gibson.dropbear.id.au>
Subject: Re: [Qemu-devel] [PATCH 19/23] userfaultfd: activate syscall
Date: Thu, 10 Sep 2015 17:54:31 +0530
Message-ID: <20150910122431.GL17433@in.ibm.com>
In-Reply-To: <20150908141356.GM2246@work-vm>
(cc trimmed since this looks like an issue that is contained within QEMU)
On Tue, Sep 08, 2015 at 03:13:56PM +0100, Dr. David Alan Gilbert wrote:
> * Bharata B Rao (bharata@linux.vnet.ibm.com) wrote:
> > On Tue, Sep 08, 2015 at 01:46:52PM +0100, Dr. David Alan Gilbert wrote:
> > > * Bharata B Rao (bharata@linux.vnet.ibm.com) wrote:
> > > > On Tue, Sep 08, 2015 at 09:59:47AM +0100, Dr. David Alan Gilbert wrote:
> > > > > * Bharata B Rao (bharata@linux.vnet.ibm.com) wrote:
> > > > > > In fact I had successfully done postcopy migration of sPAPR guest with
> > > > > > this setup.
> > > > >
> > > > > Interesting - I'd not got that far myself on power; I was hitting a problem
> > > > > loading htab ( htab_load() bad index 2113929216 (14848+0 entries) in htab stream (htab_shift=25) )
> > > > >
> > > > > Did you have to make any changes to the qemu code to get that happy?
> > > >
> > > > I should have mentioned that I tried only QEMU driven migration within
> > > > the same host using wp3-postcopy branch of your tree. I don't see the
> > > > above issue.
> > > >
> > > > (qemu) info migrate
> > > > capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off compress: off x-postcopy-ram: on
> > > > Migration status: completed
> > > > total time: 39432 milliseconds
> > > > downtime: 162 milliseconds
> > > > setup: 14 milliseconds
> > > > transferred ram: 1297209 kbytes
> > > > throughput: 270.72 mbps
> > > > remaining ram: 0 kbytes
> > > > total ram: 4194560 kbytes
> > > > duplicate: 734015 pages
> > > > skipped: 0 pages
> > > > normal: 318469 pages
> > > > normal bytes: 1273876 kbytes
> > > > dirty sync count: 4
> > > >
> > > > I will try migration between different hosts soon and check.
> > >
> > > I hit that on the same host; are you sure you've switched into postcopy mode;
> > > i.e. issued a migrate_start_postcopy before the end of migration?
> >
> > Sorry I was following your discussion with Li in this thread
> >
> > https://www.marc.info/?l=qemu-devel&m=143035620026744&w=4
> >
> > and it wasn't obvious to me that anything apart from turning on the
> > x-postcopy-ram capability was required :(
>
> OK.
>
> > So I do see the problem now.
> >
> > At the source
> > -------------
> > Error reading data from KVM HTAB fd: Bad file descriptor
> > Segmentation fault
> >
> > At the target
> > -------------
> > htab_load() bad index 2113929216 (14336+0 entries) in htab stream (htab_shift=25)
> > qemu-system-ppc64: error while loading state section id 56(spapr/htab)
> > qemu-system-ppc64: postcopy_ram_listen_thread: loadvm failed: -22
> > qemu-system-ppc64: VQ 0 size 0x100 Guest index 0x0 inconsistent with Host index 0x1f: delta 0xffe1
> > qemu-system-ppc64: error while loading state for instance 0x0 of device 'pci@800000020000000:00.0/virtio-net'
> > *** Error in `./ppc64-softmmu/qemu-system-ppc64': corrupted double-linked list: 0x00000100241234a0 ***
> > ======= Backtrace: =========
> > /lib64/power8/libc.so.6Segmentation fault
>
> Good - my current world has got rid of the segfaults/corruption in the cleanup on power - but those
> are only after it stumbled over the htab problem.
>
> I don't know the innards of power/htab, so if you've got any pointers on
> what upset it I'd be grateful.
When migrate_start_postcopy is issued, the SaveStateEntry's
save_live_iterate callback for HTAB is invoked after save_live_complete.
For HTAB, spapr->htab_fd is closed when HTAB saving completes in the
save_live_complete handler, so a save_live_iterate call that arrives after
that ends up accessing an invalid fd, resulting in the migration failure
we are seeing here.
- With postcopy migration, is a save_live_iterate call expected after
  save_live_complete? IIUC, save_live_complete signals the completion of
  saving. Is the save_live_iterate handler expected to handle this
  condition?
I am able to get past this failure and complete the migration successfully
with the hack below, which teaches the save_live_iterate handler to ignore
requests that arrive after save_live_complete has been called.
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 2f8155d..550e234 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -1236,6 +1236,11 @@ static int htab_save_iterate(QEMUFile *f, void *opaque)
         return rc;
     }
 
+    if (spapr->htab_fd == -1) {
+        rc = 1;
+        goto out;
+    }
+
     rc = kvmppc_save_htab(f, spapr->htab_fd,
                           MAX_KVM_BUF_SIZE, MAX_ITERATION_NS);
     if (rc < 0) {
@@ -1247,6 +1252,7 @@ static int htab_save_iterate(QEMUFile *f, void *opaque)
         rc = htab_save_later_pass(f, spapr, MAX_ITERATION_NS);
     }
 
+out:
     /* End marker */
     qemu_put_be32(f, 0);
     qemu_put_be16(f, 0);
(qemu) migrate_set_capability x-postcopy-ram on
(qemu) migrate -d tcp:localhost:4444
(qemu) migrate_start_postcopy
(qemu) info migrate
capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off compress: off x-postcopy-ram: on
Migration status: completed
total time: 3801 milliseconds
downtime: 147 milliseconds
setup: 17 milliseconds
transferred ram: 1091652 kbytes
throughput: 2365.71 mbps
remaining ram: 0 kbytes
total ram: 4194560 kbytes
duplicate: 781969 pages
skipped: 0 pages
normal: 267087 pages
normal bytes: 1068348 kbytes
dirty sync count: 2