xen-devel.lists.xenproject.org archive mirror
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Tamas K Lengyel <tamas@tklengyel.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Tamas K Lengyel <tlengyel@novetta.com>,
	Keir Fraser <keir@xen.org>
Subject: Re: [PATCH 1/2] x86/mem-sharing: Bulk mem-sharing entire domains
Date: Mon, 5 Oct 2015 16:39:53 +0100
Message-ID: <561299C9.3060501@citrix.com>
In-Reply-To: <1443990339-19590-2-git-send-email-tamas@tklengyel.com>

On 04/10/15 21:25, Tamas K Lengyel wrote:
> Currently mem-sharing can be performed on a page-by-page basis from the control
> domain. However, when completely deduplicating (cloning) a VM, this requires
> at least 3 hypercalls per page. As the user has to loop through all pages up
> to max_gpfn, this process is very slow and wasteful.

Indeed.

>
> This patch introduces a new mem_sharing memop for bulk deduplication where
> the user doesn't have to separately nominate each page in both the source and
> destination domain, and the looping over all pages happens in the hypervisor.
> This significantly reduces the overhead of completely deduplicating entire
> domains.

Looks good in principle.

> @@ -1467,6 +1503,56 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
>          }
>          break;
>  
> +        case XENMEM_sharing_op_bulk_dedup:
> +        {
> +            unsigned long max_sgfn, max_cgfn;
> +            struct domain *cd;
> +
> +            rc = -EINVAL;
> +            if ( !mem_sharing_enabled(d) )
> +                goto out;
> +
> +            rc = rcu_lock_live_remote_domain_by_id(mso.u.share.client_domain,
> +                                                   &cd);
> +            if ( rc )
> +                goto out;
> +
> +            rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd, mso.op);
> +            if ( rc )
> +            {
> +                rcu_unlock_domain(cd);
> +                goto out;
> +            }
> +
> +            if ( !mem_sharing_enabled(cd) )
> +            {
> +                rcu_unlock_domain(cd);
> +                rc = -EINVAL;
> +                goto out;
> +            }
> +
> +            max_sgfn = domain_get_maximum_gpfn(d);
> +            max_cgfn = domain_get_maximum_gpfn(cd);
> +
> +            if ( max_sgfn != max_cgfn || max_sgfn < start_iter )
> +            {
> +                rcu_unlock_domain(cd);
> +                rc = -EINVAL;
> +                goto out;
> +            }
> +
> +            rc = bulk_share(d, cd, max_sgfn, start_iter, MEMOP_CMD_MASK);
> +            if ( rc > 0 )
> +            {
> +                ASSERT(!(rc & MEMOP_CMD_MASK));

The way other continuations like this work is to shift the remaining
work left by MEMOP_EXTENT_SHIFT.

This avoids bulk_share() needing to know MEMOP_CMD_MASK, but does chop 6
bits off the available max_sgfn.
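In sketch form, the usual packing looks something like this (illustrative only, not the actual Xen code; the helper names are made up, but the constants match the memop convention where the op lives in the low 6 bits of cmd):

```c
#include <stdint.h>

#define MEMOP_EXTENT_SHIFT 6                          /* op keeps the low 6 bits */
#define MEMOP_CMD_MASK     ((1UL << MEMOP_EXTENT_SHIFT) - 1)

/* On preemption: fold the resume point into the cmd argument of the
 * continuation hypercall. */
static inline unsigned long pack_memop_cont(unsigned int op,
                                            unsigned long next_gfn)
{
    return op | (next_gfn << MEMOP_EXTENT_SHIFT);
}

/* On re-entry: split cmd back apart. */
static inline unsigned int memop_cmd(unsigned long cmd)
{
    return cmd & MEMOP_CMD_MASK;
}

static inline unsigned long memop_start_extent(unsigned long cmd)
{
    return cmd >> MEMOP_EXTENT_SHIFT;
}
```

The shift is what costs the 6 bits: any gfn needing more than (BITS_PER_LONG - 6) bits no longer fits in cmd.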

However, a better alternative would be to extend xen_mem_sharing_op and
stash the continuation information in a new union.  That would avoid the
mask games, and also avoid limiting the maximum potential gfn.
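Roughly along these lines (struct and member names are illustrative, not an existing interface; the point is that the hypervisor updates the resume gfn in the guest-handle copy of the structure rather than in cmd bits):

```c
#include <stdint.h>

/* Hypothetical extension of the xen_mem_sharing_op union: a member for
 * the bulk op carrying its own 64-bit continuation point. */
struct xen_mem_sharing_op_sketch {
    uint32_t op;
    uint32_t domain;
    union {
        struct {                   /* existing share op (abridged) */
            uint64_t source_gfn;
            uint64_t client_gfn;
        } share;
        struct {                   /* hypothetical bulk-dedup op */
            uint32_t client_domain;
            uint64_t start_gfn;    /* resume point, rewritten on preemption */
        } bulk;
    } u;
};

/* Demonstration: the resume point round-trips at full 64-bit width,
 * with no bits sacrificed to a cmd mask. */
static inline uint64_t bulk_resume_roundtrip(uint64_t gfn)
{
    struct xen_mem_sharing_op_sketch mso = { .op = 0 };
    mso.u.bulk.start_gfn = gfn;
    return mso.u.bulk.start_gfn;
}
```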

~Andrew
