From: Wen Congyang <wency@cn.fujitsu.com>
To: xen devel <xen-devel@lists.xen.org>
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
Wen Congyang <wency@cn.fujitsu.com>,
Ian Jackson <Ian.Jackson@eu.citrix.com>,
Jiang Yunhong <yunhong.jiang@intel.com>,
Dong Eddie <eddie.dong@intel.com>,
Hong Tao <bobby.hong@huawei.com>,
Yang Hongyang <yanghy@cn.fujitsu.com>,
Lai Jiangshan <laijs@cn.fujitsu.com>
Subject: [RFC Patch v2 01/45] copy the correct page to memory
Date: Fri, 8 Aug 2014 15:01:00 +0800
Message-ID: <1407481305-19808-2-git-send-email-wency@cn.fujitsu.com>
In-Reply-To: <1407481305-19808-1-git-send-email-wency@cn.fujitsu.com>
From: Hong Tao <bobby.hong@huawei.com>
apply_batch() handles at most MAX_BATCH_SIZE pages at a time. Pages
that are bogus/unmapped/allocate-only/broken are skipped and carry no
data in pagebuf->pages. So when apply_batch() is called for the next
batch, the data for its first page is at index curbatch - invalid_pages,
where invalid_pages is the number of such pages found so far. In most
cases invalid_pages is 0, which is why this error went unnoticed.
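
The following standalone sketch is illustrative only and not part of
the patch (the helper names and constants such as NR_PFNS, BATCH_SIZE
and invalid[] are made up). It shows the index arithmetic being fixed:
pagebuf->pages only stores data for valid pages, so the data for the
curpage-th valid page of a batch lives at
first_page + curpage == (curbatch - invalid_pages) + curpage,
not at curbatch + curpage.

#include <stdio.h>
#include <stdbool.h>

#define NR_PFNS     8
#define BATCH_SIZE  3   /* stands in for MAX_BATCH_SIZE */

int main(void)
{
    /* true = bogus/unmapped/allocate-only/broken: the sender skips the
     * page, so it has no slot in the transmitted data array. */
    bool invalid[NR_PFNS] = { false, true, false, false,
                              true,  false, false, false };
    int invalid_pages = 0;          /* accumulated across batches */

    for (int curbatch = 0; curbatch < NR_PFNS; curbatch += BATCH_SIZE)
    {
        int batch = NR_PFNS - curbatch < BATCH_SIZE ?
                    NR_PFNS - curbatch : BATCH_SIZE;
        int first_page = curbatch - invalid_pages;      /* the fix */
        int local_invalid_pages = 0;
        int curpage = -1;

        for (int i = 0; i < batch; i++)
        {
            if (invalid[curbatch + i])
            {
                local_invalid_pages++;   /* no data was sent for it */
                continue;
            }
            ++curpage;
            printf("pfn slot %d: data index old=%d fixed=%d\n",
                   curbatch + i, curbatch + curpage,
                   first_page + curpage);
        }
        invalid_pages += local_invalid_pages;
    }
    return 0;
}

With the invalid pages at slots 1 and 4, the old and fixed indices agree
only for the first batch; every later batch reads data shifted by the
number of invalid pages seen in earlier batches.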
Signed-off-by: Hong Tao <bobby.hong@huawei.com>
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
---
tools/libxc/xc_domain_restore.c | 24 +++++++++++++++++++-----
1 file changed, 19 insertions(+), 5 deletions(-)
diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c
index e73e0a2..6c346f9 100644
--- a/tools/libxc/xc_domain_restore.c
+++ b/tools/libxc/xc_domain_restore.c
@@ -1106,7 +1106,7 @@ static int pagebuf_get(xc_interface *xch, struct restore_ctx *ctx,
static int apply_batch(xc_interface *xch, uint32_t dom, struct restore_ctx *ctx,
xen_pfn_t* region_mfn, unsigned long* pfn_type, int pae_extended_cr3,
struct xc_mmu* mmu,
- pagebuf_t* pagebuf, int curbatch)
+ pagebuf_t* pagebuf, int curbatch, int *invalid_pages)
{
int i, j, curpage, nr_mfns;
int k, scount;
@@ -1121,6 +1121,12 @@ static int apply_batch(xc_interface *xch, uint32_t dom, struct restore_ctx *ctx,
struct domain_info_context *dinfo = &ctx->dinfo;
int* pfn_err = NULL;
int rc = -1;
+ int local_invalid_pages = 0;
+ /* We have handled curbatch pages before this batch, and *invalid_pages of
+ * them are not stored in pagebuf->pages. So the first page of this batch
+ * is at index (curbatch - *invalid_pages) in pagebuf->pages.
+ */
+ int first_page = curbatch - *invalid_pages;
unsigned long mfn, pfn, pagetype;
@@ -1293,10 +1299,13 @@ static int apply_batch(xc_interface *xch, uint32_t dom, struct restore_ctx *ctx,
pfn = pagebuf->pfn_types[i + curbatch] & ~XEN_DOMCTL_PFINFO_LTAB_MASK;
pagetype = pagebuf->pfn_types[i + curbatch] & XEN_DOMCTL_PFINFO_LTAB_MASK;
- if ( pagetype == XEN_DOMCTL_PFINFO_XTAB
+ if ( pagetype == XEN_DOMCTL_PFINFO_XTAB
|| pagetype == XEN_DOMCTL_PFINFO_XALLOC)
+ {
+ local_invalid_pages++;
/* a bogus/unmapped/allocate-only page: skip it */
continue;
+ }
if ( pagetype == XEN_DOMCTL_PFINFO_BROKEN )
{
@@ -1306,6 +1315,8 @@ static int apply_batch(xc_interface *xch, uint32_t dom, struct restore_ctx *ctx,
"dom=%d, pfn=%lx\n", dom, pfn);
goto err_mapped;
}
+
+ local_invalid_pages++;
continue;
}
@@ -1344,7 +1355,7 @@ static int apply_batch(xc_interface *xch, uint32_t dom, struct restore_ctx *ctx,
}
}
else
- memcpy(page, pagebuf->pages + (curpage + curbatch) * PAGE_SIZE,
+ memcpy(page, pagebuf->pages + (first_page + curpage) * PAGE_SIZE,
PAGE_SIZE);
pagetype &= XEN_DOMCTL_PFINFO_LTABTYPE_MASK;
@@ -1418,6 +1429,7 @@ static int apply_batch(xc_interface *xch, uint32_t dom, struct restore_ctx *ctx,
} /* end of 'batch' for loop */
rc = nraces;
+ *invalid_pages += local_invalid_pages;
err_mapped:
munmap(region_base, j*PAGE_SIZE);
@@ -1621,7 +1633,7 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
loadpages:
for ( ; ; )
{
- int j, curbatch;
+ int j, curbatch, invalid_pages;
xc_report_progress_step(xch, n, dinfo->p2m_size);
@@ -1665,11 +1677,13 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
/* break pagebuf into batches */
curbatch = 0;
+ invalid_pages = 0;
while ( curbatch < j ) {
int brc;
brc = apply_batch(xch, dom, ctx, region_mfn, pfn_type,
- pae_extended_cr3, mmu, &pagebuf, curbatch);
+ pae_extended_cr3, mmu, &pagebuf, curbatch,
+ &invalid_pages);
if ( brc < 0 )
goto out;
--
1.9.3