From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Juan Quintela <quintela@redhat.com>
Cc: aarcange@redhat.com, liang.z.li@intel.com, qemu-devel@nongnu.org,
luis@cs.umu.se, bharata@linux.vnet.ibm.com, amit.shah@redhat.com,
pbonzini@redhat.com
Subject: Re: [Qemu-devel] [PATCH v8 45/54] Host page!=target page: Cleanup bitmaps
Date: Tue, 3 Nov 2015 17:32:01 +0000
Message-ID: <20151103173200.GA31958@work-vm>
In-Reply-To: <87wpu78cco.fsf@neno.neno>
* Juan Quintela (quintela@redhat.com) wrote:
> "Dr. David Alan Gilbert (git)" <dgilbert@redhat.com> wrote:
> > From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
> >
> > Prior to the start of postcopy, ensure that everything that will
> > be transferred later is a whole host-page in size.
> >
> > This is accomplished by discarding partially transferred host pages
> > and marking any that are partially dirty as fully dirty.
> >
> > Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> > + struct RAMBlock *block;
> > + unsigned int host_ratio = qemu_host_page_size / TARGET_PAGE_SIZE;
> > +
> > + if (qemu_host_page_size == TARGET_PAGE_SIZE) {
> > + /* Easy case - TPS==HPS - nothing to be done */
> > + return 0;
> > + }
> > +
> > + /* Easiest way to make sure we don't resume in the middle of a host-page */
> > + last_seen_block = NULL;
> > + last_sent_block = NULL;
> > + last_offset = 0;
>
>
> It should be enough to reset just the last one, right?  If you set
> last_seen/sent_block to NULL, you will restart from the beginning each
> time you do a migration bitmap sync, penalizing the pages at the
> beginning of the cycle.  Even better than:
>
> last_offset = 0 would be doing a:
>
> last_offset &= HOST_PAGE_MASK
>
> or whatever the constant is, no?
These only happen once, at the transition to postcopy, so I just
make sure by resetting the three associated variables.  You're probably
right that you could do it by setting just last_offset or last_seen_block,
but resetting all three gives you something that's obviously consistent.
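(For reference, a minimal sketch of the masking alternative - HOST_PAGE_MASK
is Juan's placeholder name, so here it's derived from qemu_host_page_size;
this is just an illustration, not what the patch does:)

    /* Hypothetical alternative: keep last_seen/sent_block as they are and
     * only round last_offset down to a host-page boundary */
    last_offset &= ~((ram_addr_t)qemu_host_page_size - 1);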
> > +
> > + QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
> > + unsigned long first = block->offset >> TARGET_PAGE_BITS;
> > + unsigned long len = block->used_length >> TARGET_PAGE_BITS;
> > + unsigned long last = first + (len - 1);
> > + unsigned long found_set;
> > + unsigned long search_start;
>
> next_search? search_next?
Based on your comments below, it's gone.
> > +
> > + PostcopyDiscardState *pds =
> > + postcopy_discard_send_init(ms, first, block->idstr);
> > +
> > + /* First pass: Discard all partially sent host pages */
> > + found_set = find_next_bit(ms->sentmap, last + 1, first);
> > + while (found_set <= last) {
> > + bool do_discard = false;
> > + unsigned long discard_start_addr;
> > + /*
> > + * If the start of this run of pages is in the middle of a host
> > + * page, then we need to discard this host page.
> > + */
> > + if (found_set % host_ratio) {
> > + do_discard = true;
> > + found_set -= found_set % host_ratio;
>
> please, create a PAGE_HOST_ALIGN() macro, or whatever you want to call it?
Note this is aligning by bit (a bitmap index) rather than by page address,
so I think the uses in here are the only case; however, I've redone it as:

    host_offset = found_set % host_ratio;
    if (host_offset) {
        do_discard = true;
        found_set -= host_offset;
    }

so there's no repetition.
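(If more users did turn up, a macro along the lines Juan suggests might look
something like the sketch below - the name is made up, and it has to take
host_ratio as an argument because it aligns bitmap indices, not addresses:)

    /* Hypothetical helper: round a target-page bitmap index down to the
     * first index of its host page */
    #define HOST_PAGE_ALIGN_DOWN_IDX(idx, host_ratio) \
        ((idx) - ((idx) % (host_ratio)))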
> > + * next 1 bit
> > + */
> > + search_start = found_zero + 1;
>
> change for this
>
> found_set = found_zero + 1;
>
> > + }
> > + }
> > + /* Find the next 1 for the next iteration */
> > + found_set = find_next_bit(ms->sentmap, last + 1, search_start);
>
>
> and move previous line to:
>
> > + if (do_discard) {
> > + unsigned long page;
> > +
> > + /* Tell the destination to discard this page */
> > + postcopy_discard_send_range(ms, pds, discard_start_addr,
> > + discard_start_addr + host_ratio - 1);
> > + /* Clean up the bitmap */
> > + for (page = discard_start_addr;
> > + page < discard_start_addr + host_ratio; page++) {
> > + /* All pages in this host page are now not sent */
> > + clear_bit(page, ms->sentmap);
> > +
> > + /*
> > + * Remark them as dirty, updating the count for any pages
> > + * that weren't previously dirty.
> > + */
> > + migration_dirty_pages += !test_and_set_bit(page,
> > + migration_bitmap);
> > + }
>
>
> to here
> /* Find the next 1 for the next iteration */
> found_set = find_next_bit(ms->sentmap, last + 1, search_start);
> }
OK, see the version below; I think I've done it as suggested.  I've got rid of
'search_start' and just reused found_set (which is now run_start).
> > + }
>
> ?
>
>
> > +
> > + /*
> > + * Second pass: Ensure that all partially dirty host pages are made
> > + * fully dirty.
> > + */
> > + found_set = find_next_bit(migration_bitmap, last + 1, first);
> > + while (found_set <= last) {
> > + bool do_dirty = false;
> > + unsigned long dirty_start_addr;
> > + /*
> > + * If the start of this run of pages is in the middle of a host
> > + * page, then we need to mark the whole of this host page dirty
> > + */
> > + if (found_set % host_ratio) {
> > + do_dirty = true;
> > + found_set -= found_set % host_ratio;
> > + dirty_start_addr = found_set;
> > + search_start = found_set + host_ratio;
> > + } else {
> > + /* Find the end of this run */
> > + unsigned long found_zero;
> > + found_zero = find_next_zero_bit(migration_bitmap, last + 1,
> > + found_set + 1);
> > + /*
> > + * If the 0 isn't at the start of a host page, then the
> > + * run of 1's doesn't finish at the end of a host page
> > + * and we need to discard.
> > + */
> > + if (found_zero % host_ratio) {
> > + do_dirty = true;
> > + dirty_start_addr = found_zero - (found_zero % host_ratio);
> > + /*
> > + * This host page has gone, the next loop iteration starts
> > + * from the next page with a 1 bit
> > + */
> > + search_start = dirty_start_addr + host_ratio;
> > + } else {
> > + /*
> > + * No discards on this iteration, next loop starts from
> > + * next 1 bit
> > + */
> > + search_start = found_zero + 1;
> > + }
> > + }
> > +
> > + /* Find the next 1 for the next iteration */
> > + found_set = find_next_bit(migration_bitmap, last + 1, search_start);
> > +
> > + if (do_dirty) {
> > + unsigned long page;
> > +
> > + if (test_bit(dirty_start_addr, ms->sentmap)) {
> > + /*
> > + * If the page being redirtied is marked as sent, then it
> > + * must have been fully sent (otherwise it would have been
> > + * discarded by the previous pass.)
> > + * Discard it now.
> > + */
> > + postcopy_discard_send_range(ms, pds, dirty_start_addr,
> > + dirty_start_addr +
> > + host_ratio - 1);
> > + }
> > +
> > + /* Clean up the bitmap */
> > + for (page = dirty_start_addr;
> > + page < dirty_start_addr + host_ratio; page++) {
> > +
> > + /* Clear the sentmap bits for the discard case above */
> > + clear_bit(page, ms->sentmap);
> > +
> > + /*
> > + * Mark them as dirty, updating the count for any pages
> > + * that weren't previously dirty.
> > + */
> > + migration_dirty_pages += !test_and_set_bit(page,
> > + migration_bitmap);
> > + }
> > + }
> > + }
>
>
> This is exactly the same code as the previous half of the function;
> you just need to factor it out into a function?
>
> walk_bitmap_host_page_chunks or whatever, and pass the two bits that
> change: the bitmap, and what to do with the ranges that are not there?
Split out; see below.  It gets a little more hairy since sentmap is
now unsentmap, so we need a few if's, but it still loses the duplication
(build tested only so far):
Dave
commit 15003123520ee5c358b2233c0bc30635aa90eb75
Author: Dr. David Alan Gilbert <dgilbert@redhat.com>
Date: Fri Sep 26 15:15:14 2014 +0100
Host page!=target page: Cleanup bitmaps
Prior to the start of postcopy, ensure that everything that will
be transferred later is a whole host-page in size.
This is accomplished by discarding partially transferred host pages
and marking any that are partially dirty as fully dirty.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
diff --git a/migration/ram.c b/migration/ram.c
index fe782e7..e30ed2b 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1576,6 +1576,167 @@ static int postcopy_each_ram_send_discard(MigrationState *ms)
 }
 
 /*
+ * Helper for postcopy_chunk_hostpages; it's called twice to cleanup
+ * the two bitmaps, that are similar, but one is inverted.
+ *
+ * We search for runs of target-pages that don't start or end on a
+ * host page boundary;
+ * unsent_pass=true: Cleans up partially unsent host pages by searching
+ * the unsentmap
+ * unsent_pass=false: Cleans up partially dirty host pages by searching
+ * the main migration bitmap
+ *
+ */
+static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool unsent_pass,
+                                          RAMBlock *block,
+                                          PostcopyDiscardState *pds)
+{
+    unsigned long *bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
+    unsigned int host_ratio = qemu_host_page_size / TARGET_PAGE_SIZE;
+    unsigned long first = block->offset >> TARGET_PAGE_BITS;
+    unsigned long len = block->used_length >> TARGET_PAGE_BITS;
+    unsigned long last = first + (len - 1);
+    unsigned long run_start;
+
+    if (unsent_pass) {
+        /* Find a sent page */
+        run_start = find_next_zero_bit(ms->unsentmap, last + 1, first);
+    } else {
+        /* Find a dirty page */
+        run_start = find_next_bit(bitmap, last + 1, first);
+    }
+
+    while (run_start <= last) {
+        bool do_fixup = false;
+        unsigned long fixup_start_addr;
+        unsigned long host_offset;
+
+        /*
+         * If the start of this run of pages is in the middle of a host
+         * page, then we need to fixup this host page.
+         */
+        host_offset = run_start % host_ratio;
+        if (host_offset) {
+            do_fixup = true;
+            run_start -= host_offset;
+            fixup_start_addr = run_start;
+            /* For the next pass */
+            run_start = run_start + host_ratio;
+        } else {
+            /* Find the end of this run */
+            unsigned long run_end;
+            if (unsent_pass) {
+                run_end = find_next_bit(ms->unsentmap, last + 1, run_start + 1);
+            } else {
+                run_end = find_next_zero_bit(bitmap, last + 1, run_start + 1);
+            }
+            /*
+             * If the end isn't at the start of a host page, then the
+             * run doesn't finish at the end of a host page
+             * and we need to discard.
+             */
+            host_offset = run_end % host_ratio;
+            if (host_offset) {
+                do_fixup = true;
+                fixup_start_addr = run_end - host_offset;
+                /*
+                 * This host page has gone, the next loop iteration starts
+                 * from after the fixup
+                 */
+                run_start = fixup_start_addr + host_ratio;
+            } else {
+                /*
+                 * No discards on this iteration, next loop starts from
+                 * next sent/dirty page
+                 */
+                run_start = run_end + 1;
+            }
+        }
+
+        if (do_fixup) {
+            unsigned long page;
+
+            /* Tell the destination to discard this page */
+            if (unsent_pass || !test_bit(fixup_start_addr, ms->unsentmap)) {
+                /* For the unsent_pass we:
+                 *     discard partially sent pages
+                 * For the !unsent_pass (dirty) we:
+                 *     discard partially dirty pages that were sent
+                 *     (any partially sent pages were already discarded
+                 *     by the previous unsent_pass)
+                 */
+                postcopy_discard_send_range(ms, pds, fixup_start_addr,
+                                            host_ratio);
+            }
+
+            /* Clean up the bitmap */
+            for (page = fixup_start_addr;
+                 page < fixup_start_addr + host_ratio; page++) {
+                /* All pages in this host page are now not sent */
+                set_bit(page, ms->unsentmap);
+
+                /*
+                 * Remark them as dirty, updating the count for any pages
+                 * that weren't previously dirty.
+                 */
+                migration_dirty_pages += !test_and_set_bit(page, bitmap);
+            }
+        }
+
+        if (unsent_pass) {
+            /* Find the next sent page for the next iteration */
+            run_start = find_next_zero_bit(ms->unsentmap, last + 1,
+                                           run_start);
+        } else {
+            /* Find the next dirty page for the next iteration */
+            run_start = find_next_bit(bitmap, last + 1, run_start);
+        }
+    }
+}
+
+/*
+ * Utility for the outgoing postcopy code.
+ *
+ * Discard any partially sent host-page size chunks, mark any partially
+ * dirty host-page size chunks as all dirty.
+ *
+ * Returns: 0 on success
+ */
+static int postcopy_chunk_hostpages(MigrationState *ms)
+{
+    struct RAMBlock *block;
+
+    if (qemu_host_page_size == TARGET_PAGE_SIZE) {
+        /* Easy case - TPS==HPS - nothing to be done */
+        return 0;
+    }
+
+    /* Easiest way to make sure we don't resume in the middle of a host-page */
+    last_seen_block = NULL;
+    last_sent_block = NULL;
+    last_offset = 0;
+
+    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
+        unsigned long first = block->offset >> TARGET_PAGE_BITS;
+
+        PostcopyDiscardState *pds =
+                postcopy_discard_send_init(ms, first, block->idstr);
+
+        /* First pass: Discard all partially sent host pages */
+        postcopy_chunk_hostpages_pass(ms, true, block, pds);
+        /*
+         * Second pass: Ensure that all partially dirty host pages are made
+         * fully dirty.
+         */
+        postcopy_chunk_hostpages_pass(ms, false, block, pds);
+
+        postcopy_discard_send_finish(ms, pds);
+    } /* ram_list loop */
+
+    return 0;
+}
+
+/*
  * Transmit the set of pages to be discarded after precopy to the target
  * these are pages that:
  *     a) Have been previously transmitted but are now dirty again
@@ -1594,6 +1755,13 @@ int ram_postcopy_send_discard_bitmap(MigrationState *ms)
     /* This should be our last sync, the src is now paused */
     migration_bitmap_sync();
 
+    /* Deal with TPS != HPS */
+    ret = postcopy_chunk_hostpages(ms);
+    if (ret) {
+        rcu_read_unlock();
+        return ret;
+    }
+
     /*
      * Update the unsentmap to be unsentmap = unsentmap | dirty
      */
>
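P.S. To make the alignment arithmetic concrete, here's a tiny standalone
illustration (toy values only, not QEMU code) of the first half of the loop,
assuming host_ratio == 4, i.e. 16k host pages made of 4k target pages:

    #include <stdio.h>

    int main(void)
    {
        unsigned int host_ratio = 4;   /* qemu_host_page_size / TARGET_PAGE_SIZE */
        unsigned long run_start = 6;   /* run of dirty/unsent pages starts mid host-page */
        unsigned long host_offset = run_start % host_ratio;    /* 2 */

        if (host_offset) {
            /* Round down to the host-page start; the whole host page
             * (target pages 4..7) gets discarded/redirtied */
            unsigned long fixup_start_addr = run_start - host_offset;
            printf("fix up target pages %lu..%lu\n",
                   fixup_start_addr, fixup_start_addr + host_ratio - 1);
        }
        return 0;
    }

The same rounding is applied to run_end when a run finishes mid host-page.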
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK