From: Andrew Cooper
Subject: Re: [PATCH v2 1/2] x86/mem-sharing: Bulk mem-sharing entire domains
Date: Fri, 9 Oct 2015 14:26:06 +0100
Message-ID: <5617C06E.9070701@citrix.com>
To: Tamas K Lengyel, xen-devel@lists.xenproject.org
Cc: Wei Liu, Ian Campbell, Stefano Stabellini, George Dunlap, Ian Jackson,
    Jan Beulich, Tamas K Lengyel, Keir Fraser

On 08/10/15 21:57, Tamas K Lengyel wrote:
> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> index a95e105..4cdddb1 100644
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -1293,6 +1293,37 @@ int relinquish_shared_pages(struct domain *d)
>      return rc;
>  }
>
> +static int bulk_share(struct domain *d, struct domain *cd, unsigned long max,
> +                      struct mem_sharing_op_bulk_share *bulk)
> +{
> +    int rc = 0;
> +    shr_handle_t sh, ch;
> +
> +    while( bulk->start <= max )
> +    {
> +        if ( mem_sharing_nominate_page(d, bulk->start, 0, &sh) != 0 )

This swallows the error from mem_sharing_nominate_page().  Some errors
might be safe to ignore in this context, but ones like ENOMEM most
certainly are not.

You should record the error into rc and switch ( rc ) to ignore/process
the error, passing hard errors straight up.

> +            goto next;
> +
> +        if ( mem_sharing_nominate_page(cd, bulk->start, 0, &ch) != 0 )
> +            goto next;
> +
> +        if ( !mem_sharing_share_pages(d, bulk->start, sh, cd, bulk->start, ch) )
> +            ++(bulk->shared);
> +
> +next:
> +        ++(bulk->start);
> +
> +        /* Check for continuation if it's not the last iteration. */
> +        if ( bulk->start < max && hypercall_preempt_check() )
> +        {
> +            rc = 1;

Using -ERESTART here allows...

> +            break;
> +        }
> +    }
> +
> +    return rc;
> +}
> +
>  int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
>  {
>      int rc;
> @@ -1467,6 +1498,59 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
>          }
>          break;
>
> +    case XENMEM_sharing_op_bulk_share:
> +    {
> +        unsigned long max_sgfn, max_cgfn;
> +        struct domain *cd;
> +
> +        rc = -EINVAL;
> +        if ( !mem_sharing_enabled(d) )
> +            goto out;
> +
> +        rc = rcu_lock_live_remote_domain_by_id(mso.u.bulk.client_domain,
> +                                               &cd);
> +        if ( rc )
> +            goto out;
> +
> +        rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd, mso.op);
> +        if ( rc )
> +        {
> +            rcu_unlock_domain(cd);
> +            goto out;
> +        }
> +
> +        if ( !mem_sharing_enabled(cd) )
> +        {
> +            rcu_unlock_domain(cd);
> +            rc = -EINVAL;
> +            goto out;
> +        }
> +
> +        max_sgfn = domain_get_maximum_gpfn(d);
> +        max_cgfn = domain_get_maximum_gpfn(cd);
> +
> +        if ( max_sgfn != max_cgfn || max_sgfn < mso.u.bulk.start )
> +        {
> +            rcu_unlock_domain(cd);
> +            rc = -EINVAL;
> +            goto out;
> +        }
> +
> +        rc = bulk_share(d, cd, max_sgfn, &mso.u.bulk);
> +        if ( rc )

... this check to be selective.
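
i.e. something along these lines for the helper (an untested sketch
only -- in particular, treating -EINVAL as the sole "safe to skip"
error is just a placeholder; the real set of ignorable errors needs
deciding):

static int bulk_share(struct domain *d, struct domain *cd, unsigned long max,
                      struct mem_sharing_op_bulk_share *bulk)
{
    int rc = 0;
    shr_handle_t sh, ch;

    while ( bulk->start <= max )
    {
        /* Record the error rather than discarding it. */
        rc = mem_sharing_nominate_page(d, bulk->start, 0, &sh);
        switch ( rc )
        {
        case 0:
            break;
        case -EINVAL:          /* Page not sharable - skip and carry on. */
            goto next;
        default:               /* e.g. -ENOMEM - pass straight up. */
            return rc;
        }

        rc = mem_sharing_nominate_page(cd, bulk->start, 0, &ch);
        switch ( rc )
        {
        case 0:
            break;
        case -EINVAL:
            goto next;
        default:
            return rc;
        }

        if ( !mem_sharing_share_pages(d, bulk->start, sh, cd, bulk->start, ch) )
            ++(bulk->shared);

next:
        ++(bulk->start);
        rc = 0;                /* A skipped page is not an error overall. */

        if ( bulk->start < max && hypercall_preempt_check() )
            return -ERESTART;  /* Preemption, not failure. */
    }

    return rc;
}

The point being that benign per-page failures keep the loop going, hard
errors get passed straight up, and -ERESTART is reserved for preemption
so the caller can tell "continue" apart from "fail".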
> +        {
> +            if ( __copy_to_guest(arg, &mso, 1) )

This __copy_to_guest() needs to happen unconditionally before creating
the continuation, as it contains the continuation information.

It also needs to happen on the success path, so .shared is correct.

~Andrew
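
P.S. For illustration only, the tail of that case might then end up
looking something like this (the remainder of the hunk isn't quoted
above, so the unlock/break placement is a guess, and the continuation
call simply mirrors the usual memory_op pattern):

        rc = bulk_share(d, cd, max_sgfn, &mso.u.bulk);

        /* Copy back unconditionally: .start carries the continuation
         * information, and .shared must reach the caller on success too. */
        if ( __copy_to_guest(arg, &mso, 1) )
            rc = -EFAULT;
        else if ( rc == -ERESTART )
            rc = hypercall_create_continuation(__HYPERVISOR_memory_op,
                                               "lh", XENMEM_sharing_op,
                                               arg);

        rcu_unlock_domain(cd);
    }
    break;

That way the guest-visible op is always up to date, and a preempted
call resumes cleanly from .start.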