From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Tamas K Lengyel <tamas@tklengyel.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Tamas K Lengyel <tlengyel@novetta.com>,
	Keir Fraser <keir@xen.org>
Subject: Re: [PATCH v2 1/2] x86/mem-sharing: Bulk mem-sharing entire domains
Date: Fri, 9 Oct 2015 14:26:06 +0100
Message-ID: <5617C06E.9070701@citrix.com>
In-Reply-To: <1444337833-18934-1-git-send-email-tamas@tklengyel.com>

On 08/10/15 21:57, Tamas K Lengyel wrote:
> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> index a95e105..4cdddb1 100644
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -1293,6 +1293,37 @@ int relinquish_shared_pages(struct domain *d)
>      return rc;
>  }
>  
> +static int bulk_share(struct domain *d, struct domain *cd, unsigned long max,
> +                      struct mem_sharing_op_bulk_share *bulk)
> +{
> +    int rc = 0;
> +    shr_handle_t sh, ch;
> +
> +    while( bulk->start <= max )
> +    {
> +        if ( mem_sharing_nominate_page(d, bulk->start, 0, &sh) != 0 )

This swallows the error from mem_sharing_nominate_page().  Some errors
might be safe to ignore in this context, but ones like ENOMEM most
certainly are not.

You should record the error in rc and use a switch ( rc ) to decide which
errors can safely be ignored and which need handling, passing hard errors
straight up to the caller.
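
As a rough sketch only (which errno values are benign enough to skip here
is illustrative, not a definitive list):

    rc = mem_sharing_nominate_page(d, bulk->start, 0, &sh);
    switch ( rc )
    {
    case 0:
        break;

    case -EINVAL:
        /* Assumed-benign failure: page simply not sharable - skip it. */
        rc = 0;
        goto next;

    default:
        /* Hard error (e.g. -ENOMEM): propagate to the caller. */
        return rc;
    }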

> +            goto next;
> +
> +        if ( mem_sharing_nominate_page(cd, bulk->start, 0, &ch) != 0 )
> +            goto next;
> +
> +        if ( !mem_sharing_share_pages(d, bulk->start, sh, cd, bulk->start, ch) )
> +            ++(bulk->shared);
> +
> +next:
> +        ++(bulk->start);
> +
> +        /* Check for continuation if it's not the last iteration. */
> +        if ( bulk->start < max && hypercall_preempt_check() )
> +        {
> +            rc = 1;

Using -ERESTART here allows...

> +            break;
> +        }
> +    }
> +
> +    return rc;
> +}
> +
>  int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
>  {
>      int rc;
> @@ -1467,6 +1498,59 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
>          }
>          break;
>  
> +        case XENMEM_sharing_op_bulk_share:
> +        {
> +            unsigned long max_sgfn, max_cgfn;
> +            struct domain *cd;
> +
> +            rc = -EINVAL;
> +            if ( !mem_sharing_enabled(d) )
> +                goto out;
> +
> +            rc = rcu_lock_live_remote_domain_by_id(mso.u.bulk.client_domain,
> +                                                   &cd);
> +            if ( rc )
> +                goto out;
> +
> +            rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd, mso.op);
> +            if ( rc )
> +            {
> +                rcu_unlock_domain(cd);
> +                goto out;
> +            }
> +
> +            if ( !mem_sharing_enabled(cd) )
> +            {
> +                rcu_unlock_domain(cd);
> +                rc = -EINVAL;
> +                goto out;
> +            }
> +
> +            max_sgfn = domain_get_maximum_gpfn(d);
> +            max_cgfn = domain_get_maximum_gpfn(cd);
> +
> +            if ( max_sgfn != max_cgfn || max_sgfn < mso.u.bulk.start )
> +            {
> +                rcu_unlock_domain(cd);
> +                rc = -EINVAL;
> +                goto out;
> +            }
> +
> +            rc = bulk_share(d, cd, max_sgfn, &mso.u.bulk);
> +            if ( rc )

... this check to be selective.

> +            {
> +                if ( __copy_to_guest(arg, &mso, 1) )

This __copy_to_guest() needs to happen unconditionally before creating
the continuation, as mso carries the continuation information (the
updated .start).

It also needs to happen on the success path, so that .shared is reported
correctly.
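
i.e. something along these lines (a sketch only, combining the -ERESTART
point above; the exact hypercall_create_continuation() arguments are just
indicative of the usual memory_op pattern):

    rc = bulk_share(d, cd, max_sgfn, &mso.u.bulk);

    /*
     * mso.u.bulk.start/.shared were updated by bulk_share() - always copy
     * them back, both for the continuation and for the successful caller.
     */
    if ( __copy_to_guest(arg, &mso, 1) )
        rc = -EFAULT;
    else if ( rc == -ERESTART )
        rc = hypercall_create_continuation(__HYPERVISOR_memory_op,
                                           "lh", XENMEM_sharing_op,
                                           arg);

    rcu_unlock_domain(cd);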

~Andrew
