* [PATCH 0/2] Bulk mem-share identical domains
From: Tamas K Lengyel @ 2015-10-04 20:25 UTC (permalink / raw)
To: xen-devel
Cc: Tamas K Lengyel, Wei Liu, Ian Campbell, Stefano Stabellini,
George Dunlap, Andrew Cooper, Ian Jackson, Jan Beulich,
Keir Fraser
The following patches add a convenience memop to the mem_sharing system,
allowing for the rapid deduplication of memory pages between identical domains.
The envisioned use-case for this is the following:
1) Create two domains from the same snapshot using xl.
This step can also be performed by piping an existing domain's memory through
"xl save -c <domain> <pipe> | xl restore -p <new_cfg> <pipe>"
It is up to the user to create the appropriate configuration for the clone,
including setting up a CoW disk as well.
2) Enable memory sharing on both domains
3) Execute bulk dedup between the domains.
Performing memory deduplication this way reduces the number of hypercalls that
need to be performed, thus significantly speeding up the operation.
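As a concrete, purely illustrative transcript of the three steps above (domids 1 and 2, the config file names, and the `bulk` subcommand from patch 2 are all assumptions):

```shell
# Step 1: clone a running domain's memory over a pipe into a new,
# paused domain. clone.cfg must point at a CoW copy of the disk.
mkfifo /tmp/clonepipe
xl save -c vm1 /tmp/clonepipe &
xl restore -p clone.cfg /tmp/clonepipe

# Step 2: enable memory sharing on both domains (domids assumed 1 and 2).
memshrtool enable 1
memshrtool enable 2

# Step 3: deduplicate every page between them with the new bulk memop,
# then let the clone run.
memshrtool bulk 1 2
xl unpause 2
```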
The subsystem is currently orphaned and without a maintainer; AFAIK I'm its
only active user.
These patches are also available via git at
https://github.com/tklengyel/xen/tree/memshr_bulk
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Keir Fraser <keir@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Tamas K Lengyel (2):
x86/mem-sharing: Bulk mem-sharing entire domains
tests/mem-sharing: Add bulk option to memshrtool
tools/libxc/include/xenctrl.h | 11 +++++
tools/libxc/xc_memshr.c | 14 ++++++
tools/tests/mem-sharing/memshrtool.c | 20 ++++++++
xen/arch/x86/mm/mem_sharing.c | 90 +++++++++++++++++++++++++++++++++++-
xen/arch/x86/x86_64/compat/mm.c | 6 ++-
xen/arch/x86/x86_64/mm.c | 6 ++-
xen/include/asm-x86/mem_sharing.h | 3 +-
xen/include/public/memory.h | 1 +
8 files changed, 144 insertions(+), 7 deletions(-)
--
2.1.4
* [PATCH 1/2] x86/mem-sharing: Bulk mem-sharing entire domains
From: Tamas K Lengyel @ 2015-10-04 20:25 UTC (permalink / raw)
To: xen-devel
Cc: Tamas K Lengyel, Wei Liu, Ian Campbell, Stefano Stabellini,
George Dunlap, Andrew Cooper, Ian Jackson, Jan Beulich,
Tamas K Lengyel, Keir Fraser
Currently mem-sharing can be performed on a page-by-page basis from the control
domain. However, when completely deduplicating (cloning) a VM, this requires
at least 3 hypercalls per page. As the user has to loop through all pages up
to max_gpfn, this process is very slow and wasteful.
This patch introduces a new mem_sharing memop for bulk deduplication where
the user doesn't have to separately nominate each page in both the source and
destination domain, and the looping over all pages happens in the hypervisor.
This significantly reduces the overhead of completely deduplicating entire
domains.
Signed-off-by: Tamas K Lengyel <tlengyel@novetta.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Keir Fraser <keir@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
---
tools/libxc/include/xenctrl.h | 11 +++++
tools/libxc/xc_memshr.c | 14 ++++++
xen/arch/x86/mm/mem_sharing.c | 90 ++++++++++++++++++++++++++++++++++++++-
xen/arch/x86/x86_64/compat/mm.c | 6 ++-
xen/arch/x86/x86_64/mm.c | 6 ++-
xen/include/asm-x86/mem_sharing.h | 3 +-
xen/include/public/memory.h | 1 +
7 files changed, 124 insertions(+), 7 deletions(-)
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 3bfa00b..dd82549 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -2594,6 +2594,17 @@ int xc_memshr_add_to_physmap(xc_interface *xch,
domid_t client_domain,
unsigned long client_gfn);
+/* Allows deduplicating the entire memory of a client domain in bulk. Using
+ * this function is equivalent to calling xc_memshr_nominate_gfn for each gfn
+ * in the two domains followed by xc_memshr_share_gfns.
+ *
+ * May fail with EINVAL if the source and client domain have different
+ * memory sizes or if memory sharing is not enabled on either of the domains.
+ */
+int xc_memshr_bulk_dedup(xc_interface *xch,
+ domid_t source_domain,
+ domid_t client_domain);
+
/* Debug calls: return the number of pages referencing the shared frame backing
* the input argument. Should be one or greater.
*
diff --git a/tools/libxc/xc_memshr.c b/tools/libxc/xc_memshr.c
index deb0aa4..ecb0f5c 100644
--- a/tools/libxc/xc_memshr.c
+++ b/tools/libxc/xc_memshr.c
@@ -181,6 +181,20 @@ int xc_memshr_add_to_physmap(xc_interface *xch,
return xc_memshr_memop(xch, source_domain, &mso);
}
+int xc_memshr_bulk_dedup(xc_interface *xch,
+ domid_t source_domain,
+ domid_t client_domain)
+{
+ xen_mem_sharing_op_t mso;
+
+ memset(&mso, 0, sizeof(mso));
+
+ mso.op = XENMEM_sharing_op_bulk_dedup;
+ mso.u.share.client_domain = client_domain;
+
+ return xc_memshr_memop(xch, source_domain, &mso);
+}
+
int xc_memshr_domain_resume(xc_interface *xch,
domid_t domid)
{
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index a95e105..319f52f 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -29,6 +29,7 @@
#include <xen/rcupdate.h>
#include <xen/guest_access.h>
#include <xen/vm_event.h>
+#include <xen/hypercall.h>
#include <asm/page.h>
#include <asm/string.h>
#include <asm/p2m.h>
@@ -1293,9 +1294,44 @@ int relinquish_shared_pages(struct domain *d)
return rc;
}
-int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
+static long bulk_share(struct domain *d, struct domain *cd,
+ unsigned long max_gfn, unsigned long start,
+ uint32_t mask)
{
- int rc;
+ long rc = 0;
+ shr_handle_t sh, ch;
+
+ while( start <= max_gfn )
+ {
+ if ( mem_sharing_nominate_page(d, start, 0, &sh) != 0 )
+ goto next;
+
+ if ( mem_sharing_nominate_page(cd, start, 0, &ch) != 0 )
+ goto next;
+
+ mem_sharing_share_pages(d, start, sh, cd, start, ch);
+
+next:
+ ++start;
+
+ /* Check for continuation if it's not the last iteration. */
+ if ( start < max_gfn && !(start & mask)
+ && hypercall_preempt_check() )
+ {
+ rc = start;
+ break;
+ }
+ }
+
+ return rc;
+}
+
+
+long mem_sharing_memop(unsigned long cmd,
+ XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
+{
+ unsigned long start_iter = cmd & ~MEMOP_CMD_MASK;
+ long rc;
xen_mem_sharing_op_t mso;
struct domain *d;
@@ -1467,6 +1503,56 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
}
break;
+ case XENMEM_sharing_op_bulk_dedup:
+ {
+ unsigned long max_sgfn, max_cgfn;
+ struct domain *cd;
+
+ rc = -EINVAL;
+ if ( !mem_sharing_enabled(d) )
+ goto out;
+
+ rc = rcu_lock_live_remote_domain_by_id(mso.u.share.client_domain,
+ &cd);
+ if ( rc )
+ goto out;
+
+ rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd, mso.op);
+ if ( rc )
+ {
+ rcu_unlock_domain(cd);
+ goto out;
+ }
+
+ if ( !mem_sharing_enabled(cd) )
+ {
+ rcu_unlock_domain(cd);
+ rc = -EINVAL;
+ goto out;
+ }
+
+ max_sgfn = domain_get_maximum_gpfn(d);
+ max_cgfn = domain_get_maximum_gpfn(cd);
+
+ if ( max_sgfn != max_cgfn || max_sgfn < start_iter )
+ {
+ rcu_unlock_domain(cd);
+ rc = -EINVAL;
+ goto out;
+ }
+
+ rc = bulk_share(d, cd, max_sgfn, start_iter, MEMOP_CMD_MASK);
+ if ( rc > 0 )
+ {
+ ASSERT(!(rc & MEMOP_CMD_MASK));
+ rc = hypercall_create_continuation(__HYPERVISOR_memory_op, "lh",
+ XENMEM_sharing_op | rc, arg);
+ }
+
+ rcu_unlock_domain(cd);
+ }
+ break;
+
case XENMEM_sharing_op_debug_gfn:
{
unsigned long gfn = mso.u.debug.u.gfn;
diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
index d034bd0..15cf16f 100644
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -53,8 +53,9 @@ int compat_arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
compat_pfn_t mfn;
unsigned int i;
int rc = 0;
+ int op = cmd & MEMOP_CMD_MASK;
- switch ( cmd )
+ switch ( op )
{
case XENMEM_set_memory_map:
{
@@ -190,7 +191,8 @@ int compat_arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
return mem_paging_memop(guest_handle_cast(arg, xen_mem_paging_op_t));
case XENMEM_sharing_op:
- return mem_sharing_memop(guest_handle_cast(arg, xen_mem_sharing_op_t));
+ return mem_sharing_memop(cmd,
+ guest_handle_cast(arg, xen_mem_sharing_op_t));
default:
rc = -ENOSYS;
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index d918002..14c2d33 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -930,8 +930,9 @@ long subarch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
xen_pfn_t mfn, last_mfn;
unsigned int i;
long rc = 0;
+ int op = cmd & MEMOP_CMD_MASK;
- switch ( cmd )
+ switch ( op )
{
case XENMEM_machphys_mfn_list:
if ( copy_from_guest(&xmml, arg, 1) )
@@ -1011,7 +1012,8 @@ long subarch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
return mem_paging_memop(guest_handle_cast(arg, xen_mem_paging_op_t));
case XENMEM_sharing_op:
- return mem_sharing_memop(guest_handle_cast(arg, xen_mem_sharing_op_t));
+ return mem_sharing_memop(cmd,
+ guest_handle_cast(arg, xen_mem_sharing_op_t));
default:
rc = -ENOSYS;
diff --git a/xen/include/asm-x86/mem_sharing.h b/xen/include/asm-x86/mem_sharing.h
index 3840a14..6d344d2 100644
--- a/xen/include/asm-x86/mem_sharing.h
+++ b/xen/include/asm-x86/mem_sharing.h
@@ -89,7 +89,8 @@ static inline int mem_sharing_unshare_page(struct domain *d,
*/
int mem_sharing_notify_enomem(struct domain *d, unsigned long gfn,
bool_t allow_sleep);
-int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg);
+long mem_sharing_memop(unsigned long cmd,
+ XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg);
int mem_sharing_domctl(struct domain *d,
xen_domctl_mem_sharing_op_t *mec);
int mem_sharing_audit(void);
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 320de91..4bc9fc9 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -447,6 +447,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_mem_access_op_t);
#define XENMEM_sharing_op_debug_gref 5
#define XENMEM_sharing_op_add_physmap 6
#define XENMEM_sharing_op_audit 7
+#define XENMEM_sharing_op_bulk_dedup 8
#define XENMEM_SHARING_OP_S_HANDLE_INVALID (-10)
#define XENMEM_SHARING_OP_C_HANDLE_INVALID (-9)
--
2.1.4
* [PATCH 2/2] tests/mem-sharing: Add bulk option to memshrtool
From: Tamas K Lengyel @ 2015-10-04 20:25 UTC (permalink / raw)
To: xen-devel
Cc: Tamas K Lengyel, Wei Liu, Ian Campbell, Stefano Stabellini,
Ian Jackson, Tamas K Lengyel
Add the bulk option to the test tool to perform complete deduplication
between domains.
Signed-off-by: Tamas K Lengyel <tlengyel@novetta.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
---
tools/tests/mem-sharing/memshrtool.c | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/tools/tests/mem-sharing/memshrtool.c b/tools/tests/mem-sharing/memshrtool.c
index 6454bc3..18dc970 100644
--- a/tools/tests/mem-sharing/memshrtool.c
+++ b/tools/tests/mem-sharing/memshrtool.c
@@ -23,6 +23,8 @@ static int usage(const char* prog)
printf(" nominate <domid> <gfn> - Nominate a page for sharing.\n");
printf(" share <domid> <gfn> <handle> <source> <source-gfn> <source-handle>\n");
printf(" - Share two pages.\n");
+ printf(" bulk <source-domid> <destination-domid>\n");
+ printf(" - Share all pages between domains.\n");
printf(" unshare <domid> <gfn> - Unshare a page by grabbing a writable map.\n");
printf(" add-to-physmap <domid> <gfn> <source> <source-gfn> <source-handle>\n");
printf(" - Populate a page in a domain with a shared page.\n");
@@ -179,6 +181,24 @@ int main(int argc, const char** argv)
}
printf("Audit returned %d errors.\n", rc);
}
+ else if( !strcasecmp(cmd, "bulk") )
+ {
+ domid_t sdomid, cdomid;
+ int rc;
+
+ if( argc != 4 )
+ return usage(argv[0]);
+
+ sdomid = strtol(argv[2], NULL, 0);
+ cdomid = strtol(argv[3], NULL, 0);
+ rc = xc_memshr_bulk_dedup(xch, sdomid, cdomid);
+ if ( rc < 0 )
+ {
+ printf("error executing xc_memshr_bulk_dedup: %s\n", strerror(errno));
+ return rc;
+ }
+ printf("Successfully cloned the domains\n");
+ }
return 0;
}
--
2.1.4
* Re: [PATCH 1/2] x86/mem-sharing: Bulk mem-sharing entire domains
From: Andrew Cooper @ 2015-10-05 15:39 UTC (permalink / raw)
To: Tamas K Lengyel, xen-devel
Cc: Wei Liu, Ian Campbell, Stefano Stabellini, George Dunlap,
Ian Jackson, Jan Beulich, Tamas K Lengyel, Keir Fraser
On 04/10/15 21:25, Tamas K Lengyel wrote:
> Currently mem-sharing can be performed on a page-by-page basis from the control
> domain. However, when completely deduplicating (cloning) a VM, this requires
> at least 3 hypercalls per page. As the user has to loop through all pages up
> to max_gpfn, this process is very slow and wasteful.
Indeed.
>
> This patch introduces a new mem_sharing memop for bulk deduplication where
> the user doesn't have to separately nominate each page in both the source and
> destination domain, and the looping over all pages happens in the hypervisor.
> This significantly reduces the overhead of completely deduplicating entire
> domains.
Looks good in principle.
> @@ -1467,6 +1503,56 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
> }
> break;
>
> + case XENMEM_sharing_op_bulk_dedup:
> + {
> + unsigned long max_sgfn, max_cgfn;
> + struct domain *cd;
> +
> + rc = -EINVAL;
> + if ( !mem_sharing_enabled(d) )
> + goto out;
> +
> + rc = rcu_lock_live_remote_domain_by_id(mso.u.share.client_domain,
> + &cd);
> + if ( rc )
> + goto out;
> +
> + rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd, mso.op);
> + if ( rc )
> + {
> + rcu_unlock_domain(cd);
> + goto out;
> + }
> +
> + if ( !mem_sharing_enabled(cd) )
> + {
> + rcu_unlock_domain(cd);
> + rc = -EINVAL;
> + goto out;
> + }
> +
> + max_sgfn = domain_get_maximum_gpfn(d);
> + max_cgfn = domain_get_maximum_gpfn(cd);
> +
> + if ( max_sgfn != max_cgfn || max_sgfn < start_iter )
> + {
> + rcu_unlock_domain(cd);
> + rc = -EINVAL;
> + goto out;
> + }
> +
> + rc = bulk_share(d, cd, max_sgfn, start_iter, MEMOP_CMD_MASK);
> + if ( rc > 0 )
> + {
> + ASSERT(!(rc & MEMOP_CMD_MASK));
The way other continuations like this work is to shift the remaining
work left by MEMOP_EXTENT_SHIFT.
This avoids bulk_share() needing to know MEMOP_CMD_MASK, but does chop 6
bits off the available max_sgfn.
However, a better alternative would be to extend xen_mem_sharing_op and
stash the continue information in a new union. That would avoid the
mask games, and also avoid limiting the maximum potential gfn.
~Andrew
* Re: [PATCH 1/2] x86/mem-sharing: Bulk mem-sharing entire domains
From: Tamas K Lengyel @ 2015-10-05 16:51 UTC (permalink / raw)
To: Andrew Cooper
Cc: Wei Liu, Ian Campbell, Stefano Stabellini, George Dunlap,
Ian Jackson, Jan Beulich, Xen-devel, Keir Fraser
>
> > @@ -1467,6 +1503,56 @@ int
> mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
> > }
> > break;
> >
> > + case XENMEM_sharing_op_bulk_dedup:
> > + {
> > + unsigned long max_sgfn, max_cgfn;
> > + struct domain *cd;
> > +
> > + rc = -EINVAL;
> > + if ( !mem_sharing_enabled(d) )
> > + goto out;
> > +
> > + rc =
> rcu_lock_live_remote_domain_by_id(mso.u.share.client_domain,
> > + &cd);
> > + if ( rc )
> > + goto out;
> > +
> > + rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd, mso.op);
> > + if ( rc )
> > + {
> > + rcu_unlock_domain(cd);
> > + goto out;
> > + }
> > +
> > + if ( !mem_sharing_enabled(cd) )
> > + {
> > + rcu_unlock_domain(cd);
> > + rc = -EINVAL;
> > + goto out;
> > + }
> > +
> > + max_sgfn = domain_get_maximum_gpfn(d);
> > + max_cgfn = domain_get_maximum_gpfn(cd);
> > +
> > + if ( max_sgfn != max_cgfn || max_sgfn < start_iter )
> > + {
> > + rcu_unlock_domain(cd);
> > + rc = -EINVAL;
> > + goto out;
> > + }
> > +
> > + rc = bulk_share(d, cd, max_sgfn, start_iter,
> MEMOP_CMD_MASK);
> > + if ( rc > 0 )
> > + {
> > + ASSERT(!(rc & MEMOP_CMD_MASK));
>
> The way other continuations like this work is to shift the remaining
> work left by MEMOP_EXTENT_SHIFT.
>
> This avoids bulk_share() needing to know MEMOP_CMD_MASK, but does chop 6
> bits off the available max_sgfn.
>
> However, a better alternative would be to extend xen_mem_sharing_op and
> stash the continue information in a new union. That would avoid the
> mask games, and also avoid limiting the maximum potential gfn.
>
> ~Andrew
I agree, I was thinking of extending it anyway to return the number of
pages that were shared, so this could be looped in there too.
Thanks,
Tamas
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
* Re: [PATCH 1/2] x86/mem-sharing: Bulk mem-sharing entire domains
From: Wei Liu @ 2015-10-06 9:19 UTC (permalink / raw)
To: Tamas K Lengyel
Cc: Wei Liu, Ian Campbell, Stefano Stabellini, George Dunlap,
Andrew Cooper, Ian Jackson, Jan Beulich, Tamas K Lengyel,
xen-devel, Keir Fraser
On Sun, Oct 04, 2015 at 02:25:38PM -0600, Tamas K Lengyel wrote:
> Currently mem-sharing can be performed on a page-by-page basis from the control
> domain. However, when completely deduplicating (cloning) a VM, this requires
> at least 3 hypercalls per page. As the user has to loop through all pages up
> to max_gpfn, this process is very slow and wasteful.
>
> This patch introduces a new mem_sharing memop for bulk deduplication where
> the user doesn't have to separately nominate each page in both the source and
> destination domain, and the looping over all pages happens in the hypervisor.
> This significantly reduces the overhead of completely deduplicating entire
> domains.
>
> Signed-off-by: Tamas K Lengyel <tlengyel@novetta.com>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> Cc: Ian Campbell <ian.campbell@citrix.com>
> Cc: Wei Liu <wei.liu2@citrix.com>
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Keir Fraser <keir@xen.org>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> tools/libxc/include/xenctrl.h | 11 +++++
> tools/libxc/xc_memshr.c | 14 ++++++
> xen/arch/x86/mm/mem_sharing.c | 90 ++++++++++++++++++++++++++++++++++++++-
> xen/arch/x86/x86_64/compat/mm.c | 6 ++-
> xen/arch/x86/x86_64/mm.c | 6 ++-
> xen/include/asm-x86/mem_sharing.h | 3 +-
> xen/include/public/memory.h | 1 +
> 7 files changed, 124 insertions(+), 7 deletions(-)
>
> diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
> index 3bfa00b..dd82549 100644
> --- a/tools/libxc/include/xenctrl.h
> +++ b/tools/libxc/include/xenctrl.h
> @@ -2594,6 +2594,17 @@ int xc_memshr_add_to_physmap(xc_interface *xch,
> domid_t client_domain,
> unsigned long client_gfn);
>
> +/* Allows deduplicating the entire memory of a client domain in bulk. Using
> + * this function is equivalent to calling xc_memshr_nominate_gfn for each gfn
> + * in the two domains followed by xc_memshr_share_gfns.
> + *
> + * May fail with EINVAL if the source and client domain have different
> + * memory sizes or if memory sharing is not enabled on either of the domains.
> + */
> +int xc_memshr_bulk_dedup(xc_interface *xch,
> + domid_t source_domain,
> + domid_t client_domain);
> +
> /* Debug calls: return the number of pages referencing the shared frame backing
> * the input argument. Should be one or greater.
> *
> diff --git a/tools/libxc/xc_memshr.c b/tools/libxc/xc_memshr.c
> index deb0aa4..ecb0f5c 100644
> --- a/tools/libxc/xc_memshr.c
> +++ b/tools/libxc/xc_memshr.c
> @@ -181,6 +181,20 @@ int xc_memshr_add_to_physmap(xc_interface *xch,
> return xc_memshr_memop(xch, source_domain, &mso);
> }
>
> +int xc_memshr_bulk_dedup(xc_interface *xch,
> + domid_t source_domain,
> + domid_t client_domain)
> +{
> + xen_mem_sharing_op_t mso;
> +
> + memset(&mso, 0, sizeof(mso));
> +
> + mso.op = XENMEM_sharing_op_bulk_dedup;
> + mso.u.share.client_domain = client_domain;
> +
> + return xc_memshr_memop(xch, source_domain, &mso);
> +}
> +
Tools bits:
Acked-by: Wei Liu <wei.liu2@citrix.com>
* Re: [PATCH 2/2] tests/mem-sharing: Add bulk option to memshrtool
From: Wei Liu @ 2015-10-06 9:20 UTC (permalink / raw)
To: Tamas K Lengyel
Cc: Wei Liu, Ian Campbell, Stefano Stabellini, Ian Jackson,
Tamas K Lengyel, xen-devel
On Sun, Oct 04, 2015 at 02:25:39PM -0600, Tamas K Lengyel wrote:
> Add the bulk option to the test tool to perform complete deduplication
> between domains.
>
> Signed-off-by: Tamas K Lengyel <tlengyel@novetta.com>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> Cc: Ian Campbell <ian.campbell@citrix.com>
> Cc: Wei Liu <wei.liu2@citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
> ---
> tools/tests/mem-sharing/memshrtool.c | 20 ++++++++++++++++++++
> 1 file changed, 20 insertions(+)
>
An unrelated note: do you think it makes sense to move mem-sharing out of
tests/ ? It doesn't look like a test to me.
* Re: [PATCH 2/2] tests/mem-sharing: Add bulk option to memshrtool
From: Ian Campbell @ 2015-10-06 14:15 UTC (permalink / raw)
To: Wei Liu, Tamas K Lengyel
Cc: Tamas K Lengyel, xen-devel, Ian Jackson, Stefano Stabellini
On Tue, 2015-10-06 at 10:20 +0100, Wei Liu wrote:
> An unrelated note: do you think it makes sense to move mem-sharing out of
> tests/ ? It doesn't look like a test to me.
It was originally a sort of "unit test" / "manually poke it" type utility
rather than end-user-usable functionality. That might have changed though
(or be in the process of doing so).
Ian.
* Re: [PATCH 0/2] Bulk mem-share identical domains
From: Ian Campbell @ 2015-10-06 14:26 UTC (permalink / raw)
To: Tamas K Lengyel, xen-devel
Cc: Wei Liu, Stefano Stabellini, George Dunlap, Andrew Cooper,
Ian Jackson, Jan Beulich, Keir Fraser
On Sun, 2015-10-04 at 14:25 -0600, Tamas K Lengyel wrote:
> The following patches add a convenience memop to the mem_sharing system,
> allowing for the rapid deduplication of memory pages between identical
> domains.
>
> The envisioned use-case for this is the following:
> 1) Create two domains from the same snapshot using xl.
> This step can also be performed by piping an existing domain's memory
> with
> "xl save -c <domain> <pipe> | xl restore -p <new_cfg> <pipe>"
> It is up to the user to create the appropriate configuration for the
> clone,
> including setting up a CoW-disk as well.
> 2) Enable memory sharing on both domains
> 3) Execute bulk dedup between the domains.
This is a neat trick, but has the downside of first shovelling all the data
over a pipe and then needing to allocate it transiently before dedupping it
again.
Have you looked at the possibility of doing the save+restore in the same
process with a cut through for the RAM part which just dups the page into
the target domain?
Once upon a time (migr v1) that would certainly have been impossibly hard,
but with migr v2 it might be a lot easier to integrate something like that
(although surely not as easy as what you've done here!).
Just an idea, and not intended at all as an argument for not taking this
series or anything.
Ian.
* Re: [PATCH 0/2] Bulk mem-share identical domains
From: Andrew Cooper @ 2015-10-06 14:52 UTC (permalink / raw)
To: Ian Campbell, Tamas K Lengyel, xen-devel
Cc: Wei Liu, Stefano Stabellini, George Dunlap, Ian Jackson,
Jan Beulich, Keir Fraser
On 06/10/15 15:26, Ian Campbell wrote:
> On Sun, 2015-10-04 at 14:25 -0600, Tamas K Lengyel wrote:
>> The following patches add a convenience memop to the mem_sharing system,
>> allowing for the rapid deduplication of memory pages between identical
>> domains.
>>
>> The envisioned use-case for this is the following:
>> 1) Create two domains from the same snapshot using xl.
>> This step can also be performed by piping an existing domain's memory
>> with
>> "xl save -c <domain> <pipe> | xl restore -p <new_cfg> <pipe>"
> >> It is up to the user to create the appropriate configuration for the
>> clone,
>> including setting up a CoW-disk as well.
>> 2) Enable memory sharing on both domains
>> 3) Execute bulk dedup between the domains.
> This is a neat trick, but has the downside of first shovelling all the data
> over a pipe and then needing to allocate it transiently before dedupping it
> again.
>
> Have you looked at the possibility of doing the save+restore in the same
> process with a cut through for the RAM part which just dups the page into
> the target domain?
>
> Once upon a time (migr v1) that would certainly have been impossibly hard,
> but with migr v2 it might be a lot easier to integrate something like that
> (although surely not as easy as what you've done here!).
>
> Just an idea, and not intended at all as an argument for not taking this
> series or anything.
If we are making modifications like this, make something like
XEN_DOMCTL_domain_clone which takes a source domid (must exist), pauses
it, creates a new domain, copies some state and shares all memory CoW
from source to the new domain.
This will be far more efficient still than moving all the memory through
userspace in dom0.
~Andrew
* Re: [PATCH 0/2] Bulk mem-share identical domains
From: Tamas K Lengyel @ 2015-10-06 16:05 UTC (permalink / raw)
To: Ian Campbell
Cc: Wei Liu, Keir Fraser, Stefano Stabellini, George Dunlap,
Andrew Cooper, Ian Jackson, Jan Beulich, Xen-devel
On Tue, Oct 6, 2015 at 8:26 AM, Ian Campbell <ian.campbell@citrix.com>
wrote:
> On Sun, 2015-10-04 at 14:25 -0600, Tamas K Lengyel wrote:
> > The following patches add a convenience memop to the mem_sharing system,
> > allowing for the rapid deduplication of memory pages between identical
> > domains.
> >
> > The envisioned use-case for this is the following:
> > 1) Create two domains from the same snapshot using xl.
> > This step can also be performed by piping an existing domain's memory
> > with
> > "xl save -c <domain> <pipe> | xl restore -p <new_cfg> <pipe>"
> > It is up for the user to create the appropriate configuration for the
> > clone,
> > including setting up a CoW-disk as well.
> > 2) Enable memory sharing on both domains
> > 3) Execute bulk dedup between the domains.
>
> This is a neat trick, but has the downside of first shovelling all the data
> over a pipe and then needing to allocate it transiently before dedupping it
> again.
>
Precisely.
>
> Have you looked at the possibility of doing the save+restore in the same
> process with a cut through for the RAM part which just dups the page into
> the target domain?
>
I have, but I have to say untangling the internals of xl is pretty
daunting.
>
> Once upon a time (migr v1) that would certainly have been impossibly hard,
> but with migr v2 it might be a lot easier to integrate something like that
> (although surely not as easy as what you've done here!).
>
> Just an idea, and not intended at all as an argument for not taking this
> series or anything.
>
So another trick that works pretty well for PV domains is to simply create
them paused:
xl create -p <cfg>
This takes pretty much no time and can be followed up by the bulk memory
deduplication. The clone is fully functional afterwards.
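A sketch of that PV flow end to end (domids and file names are illustrative):

```shell
xl create -p pv-clone.cfg   # near-instant: the clone starts paused
memshrtool enable 1         # source domid
memshrtool enable 2         # clone domid
memshrtool bulk 1 2         # dedup every page in one memop
xl unpause 2
```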
Unfortunately for HVM domains this is not sufficient as QEMU needs to be
set up as well, for which right now only xl restore works. I experimented
with saving the QEMU state separately, creating the paused HVM domain,
killing its QEMU process, then starting a new one with the exact same
parameters but with the extra -loadvm flag (reserving an extra page plus
trying to wire up the hvmparams). Unfortunately this still crashes the
guest after unpause so I'm pretty much stuck on that side. So yea, any help
with that would be greatly appreciated ;) If there was an xl option to do
what "xl restore" is doing, but to only load the QEMU state, that would be
awesome.
Tamas
* Re: [PATCH 0/2] Bulk mem-share identical domains
From: Tamas K Lengyel @ 2015-10-06 16:12 UTC (permalink / raw)
To: Andrew Cooper
Cc: Wei Liu, Ian Campbell, Stefano Stabellini, George Dunlap,
Ian Jackson, Jan Beulich, Xen-devel, Keir Fraser
On Tue, Oct 6, 2015 at 8:52 AM, Andrew Cooper <andrew.cooper3@citrix.com>
wrote:
> On 06/10/15 15:26, Ian Campbell wrote:
> > On Sun, 2015-10-04 at 14:25 -0600, Tamas K Lengyel wrote:
> >> The following patches add a convenience memop to the mem_sharing system,
> >> allowing for the rapid deduplication of memory pages between identical
> >> domains.
> >>
> >> The envisioned use-case for this is the following:
> >> 1) Create two domains from the same snapshot using xl.
> >> This step can also be performed by piping an existing domain's memory
> >> with
> >> "xl save -c <domain> <pipe> | xl restore -p <new_cfg> <pipe>"
> >> It is up for the user to create the appropriate configuration for the
> >> clone,
> >> including setting up a CoW-disk as well.
> >> 2) Enable memory sharing on both domains
> >> 3) Execute bulk dedup between the domains.
> > This is a neat trick, but has the downside of first shovelling all the
> > data over a pipe and then needing to allocate it transiently before
> > dedupping it again.
> >
> > Have you looked at the possibility of doing the save+restore in the same
> > process with a cut through for the RAM part which just dups the page into
> > the target domain?
> >
> > Once upon a time (migr v1) that would certainly have been impossibly hard,
> > but with migr v2 it might be a lot easier to integrate something like that
> > (although surely not as easy as what you've done here!).
> >
> > Just an idea, and not intended at all as an argument for not taking this
> > series or anything.
>
> If we are making modifications like this, make something like
> XEN_DOMCTL_domain_clone which takes a source domid (must exist), pauses
> it, creates a new domain, copies some state and shares all memory CoW
> from source to the new domain.
>
> This will be far more efficient still than moving all the memory through
> userspace in dom0.
>
> ~Andrew
>
It would be far more efficient, but unfortunately there is more to cloning
than just the hypervisor side. Andres already created
xc_memshr_add_to_physmap and suggested using it, which could be applied
during domain creation to do something similar. However, 1) there is no
reference implementation of how to do that domain creation cleanly, and
2) cloning HVM domains also requires cloning QEMU. As the only reference
implementation of how to create an HVM domain from scratch is in libxl right
now, it's a pretty huge task to untangle it to make something like this
happen (at least for me). I've been staring at this from time to time for
the past couple of years and I'm still no closer to a working solution. In
the interim, this patch at least improves something that I know works and
requires minimal changes to the tools/hypervisor.
Tamas
* Re: [PATCH 2/2] tests/mem-sharing: Add bulk option to memshrtool
2015-10-06 14:15 ` Ian Campbell
@ 2015-10-06 16:17 ` Tamas K Lengyel
0 siblings, 0 replies; 16+ messages in thread
From: Tamas K Lengyel @ 2015-10-06 16:17 UTC (permalink / raw)
To: Ian Campbell
Cc: Tamas K Lengyel, Xen-devel, Wei Liu, Ian Jackson,
Stefano Stabellini
On Tue, Oct 6, 2015 at 8:15 AM, Ian Campbell <ian.campbell@citrix.com>
wrote:
> On Tue, 2015-10-06 at 10:20 +0100, Wei Liu wrote:
> > An unrelated note: do you think it makes sense to move mem-sharing out of
> > tests/? It doesn't look like a test to me.
>
> It was originally a sort of "unit test" / "manually poke it" type utility
> rather than end user usable functionality. That might have changed though
> (or be in the process of doing so).
>
> Ian
>
I'm not intending to make it into an end-user tool either, though it could
now actually have some value for end-users doing cloning. For me it's just
a reference implementation.
Tamas
* Re: [PATCH 0/2] Bulk mem-share identical domains
2015-10-06 16:05 ` Tamas K Lengyel
@ 2015-10-07 10:48 ` Wei Liu
2015-10-08 21:07 ` Tamas K Lengyel
0 siblings, 1 reply; 16+ messages in thread
From: Wei Liu @ 2015-10-07 10:48 UTC (permalink / raw)
To: Tamas K Lengyel
Cc: Wei Liu, Ian Campbell, Stefano Stabellini, George Dunlap,
Andrew Cooper, Ian Jackson, Jan Beulich, Xen-devel, Keir Fraser
On Tue, Oct 06, 2015 at 10:05:12AM -0600, Tamas K Lengyel wrote:
> On Tue, Oct 6, 2015 at 8:26 AM, Ian Campbell <ian.campbell@citrix.com>
> wrote:
>
> > On Sun, 2015-10-04 at 14:25 -0600, Tamas K Lengyel wrote:
> > > The following patches add a convenience memop to the mem_sharing system,
> > > allowing for the rapid deduplication of memory pages between identical
> > > domains.
> > >
> > > The envisioned use-case for this is the following:
> > > 1) Create two domains from the same snapshot using xl.
> > > This step can also be performed by piping an existing domain's memory
> > > with
> > > "xl save -c <domain> <pipe> | xl restore -p <new_cfg> <pipe>"
> > > It is up for the user to create the appropriate configuration for the
> > > clone,
> > > including setting up a CoW-disk as well.
> > > 2) Enable memory sharing on both domains
> > > 3) Execute bulk dedup between the domains.
> >
> > This is a neat trick, but has the downside of first shovelling all the data
> > over a pipe and then needing to allocate it transiently before dedupping it
> > again.
> >
>
> Precisely.
>
>
> >
> > Have you looked at the possibility of doing the save+restore in the same
> > process with a cut through for the RAM part which just dups the page into
> > the target domain?
> >
>
> I have, but I have to say, untangling the internals of xl is pretty
> daunting..
>
>
> >
> > Once upon a time (migr v1) that would certainly have been impossibly hard,
> > but with migr v2 it might be a lot easier to integrate something like that
> > (although surely not as easy as what you've done here!).
> >
> > Just an idea, and not intended at all as an argument for not taking this
> > series or anything.
> >
>
> So another trick that works pretty well for PV domains is to simply create
> them paused:
> xl create -p <cfg>
>
> This takes pretty much no time and can be followed up by the bulk memory
> deduplication. The clone is fully functional afterwards.
> Unfortunately, for HVM domains this is not sufficient, as QEMU needs to be
> set up as well, and right now only xl restore does that. I experimented
> with saving the QEMU state separately, creating the paused HVM domain,
> killing its QEMU process, then starting a new one with the exact same
> parameters plus the extra -loadvm flag (reserving an extra page and
> trying to wire up the hvmparams). Unfortunately this still crashes the
> guest after unpause, so I'm pretty much stuck on that side. So yeah, any help
> with that would be greatly appreciated ;) If there was an xl option to do
> what "xl restore" does, but only load the QEMU state, that would be
> awesome.
>
In case you missed it, there is now soft-reset support, which dumps all
memory plus various state from one domain to another; the toolstack
will take care of QEMU and the various userspace bits. This might be useful
to you?
To be clear, this is just FYI, not suggesting we block this series.
Wei.
> Tamas
* Re: [PATCH 0/2] Bulk mem-share identical domains
2015-10-07 10:48 ` Wei Liu
@ 2015-10-08 21:07 ` Tamas K Lengyel
2015-10-09 12:35 ` Wei Liu
0 siblings, 1 reply; 16+ messages in thread
From: Tamas K Lengyel @ 2015-10-08 21:07 UTC (permalink / raw)
To: Wei Liu
Cc: Keir Fraser, Ian Campbell, Stefano Stabellini, George Dunlap,
Andrew Cooper, Ian Jackson, Jan Beulich, Xen-devel
> In case you missed it, there is now soft-reset support, which dumps all
> memory plus various state from one domain to another; the toolstack
> will take care of QEMU and the various userspace bits. This might be useful
> to you?
>
> To be clear, this is just FYI, not suggesting we block this series.
>
> Wei.
>
Hi Wei,
it might be very useful, but on a casual scan I couldn't really find much on
the soft-reset option (no xl command-line option and only a single call to
xc_domain_soft_reset in libxl.c). For cloning I would need the origin
domain to remain loaded as before (at least the memory; QEMU can be killed),
and then I would only need the QEMU setup bits from soft-reset. Any
pointers on how to go about this would be very helpful ;)
Thanks,
Tamas
* Re: [PATCH 0/2] Bulk mem-share identical domains
2015-10-08 21:07 ` Tamas K Lengyel
@ 2015-10-09 12:35 ` Wei Liu
0 siblings, 0 replies; 16+ messages in thread
From: Wei Liu @ 2015-10-09 12:35 UTC (permalink / raw)
To: Tamas K Lengyel
Cc: Wei Liu, Ian Campbell, Stefano Stabellini, George Dunlap,
Andrew Cooper, Ian Jackson, Jan Beulich, Xen-devel, Keir Fraser
On Thu, Oct 08, 2015 at 03:07:19PM -0600, Tamas K Lengyel wrote:
> > In case you missed it, there is now soft-reset support, which dumps all
> > memory plus various state from one domain to another; the toolstack
> > will take care of QEMU and the various userspace bits. This might be useful
> > to you?
> >
> > To be clear, this is just FYI, not suggesting we block this series.
> >
> > Wei.
> >
>
> Hi Wei,
> it might be very useful, but on a casual scan I couldn't really find much on
> the soft-reset option (no xl command-line option and only a single call to
> xc_domain_soft_reset in libxl.c). For cloning I would need the origin
> domain to remain loaded as before (at least the memory; QEMU can be killed),
> and then I would only need the QEMU setup bits from soft-reset. Any
> pointers on how to go about this would be very helpful ;)
>
Soft-reset is in fact a slightly modified version of save/restore. I
don't think you can directly use soft-reset to clone a domain. What I
meant was that you might be able to reuse some of the code in soft-reset,
at least on the toolstack side.
For example, you could invent a hypercall to share all pages and transfer
state from one guest to another. In the toolstack, you create a new domain,
save the original domain's QEMU state, issue the aforementioned hypercall (*),
and restore QEMU. It would still require some coding to disentangle the
toolstack code to do what you need, but these different phases already
exist in the toolstack code for soft-reset, except for the hypercall.
What makes your need different from soft-reset is that a) the hypercall is
different, and b) you don't destroy the original domain afterwards.
YMMV.
Wei.
> Thanks,
> Tamas
Thread overview: 16+ messages
2015-10-04 20:25 [PATCH 0/2] Bulk mem-share identical domains Tamas K Lengyel
2015-10-04 20:25 ` [PATCH 1/2] x86/mem-sharing: Bulk mem-sharing entire domains Tamas K Lengyel
2015-10-05 15:39 ` Andrew Cooper
2015-10-05 16:51 ` Tamas K Lengyel
2015-10-06 9:19 ` Wei Liu
2015-10-04 20:25 ` [PATCH 2/2] tests/mem-sharing: Add bulk option to memshrtool Tamas K Lengyel
2015-10-06 9:20 ` Wei Liu
2015-10-06 14:15 ` Ian Campbell
2015-10-06 16:17 ` Tamas K Lengyel
2015-10-06 14:26 ` [PATCH 0/2] Bulk mem-share identical domains Ian Campbell
2015-10-06 14:52 ` Andrew Cooper
2015-10-06 16:12 ` Tamas K Lengyel
2015-10-06 16:05 ` Tamas K Lengyel
2015-10-07 10:48 ` Wei Liu
2015-10-08 21:07 ` Tamas K Lengyel
2015-10-09 12:35 ` Wei Liu