From mboxrd@z Thu Jan  1 00:00:00 1970
From: Andrew Cooper
Subject: Re: [PATCH v6] x86/p2m: use large pages for MMIO mappings
Date: Mon, 1 Feb 2016 15:00:04 +0000
Message-ID: <56AF72F4.7020708@citrix.com>
References: <56AF301402000078000CCC63@prv-mh.provo.novell.com>
In-Reply-To: <56AF301402000078000CCC63@prv-mh.provo.novell.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	id 1aQFxZ-0000Cb-9F
	for xen-devel@lists.xenproject.org; Mon, 01 Feb 2016 15:00:09 +0000
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Jan Beulich, xen-devel
Cc: Kevin Tian, Wei Liu, Ian Campbell, Stefano Stabellini, George Dunlap,
	Tim Deegan, Ian Jackson, Jun Nakajima, Keir Fraser
List-Id: xen-devel@lists.xenproject.org

On 01/02/16 09:14, Jan Beulich wrote:
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -899,48 +899,64 @@ void p2m_change_type_range(struct domain
>      p2m_unlock(p2m);
>  }
> 
> -/* Returns: 0 for success, -errno for failure */
> +/*
> + * Returns:
> + *    0        for success
> + *    -errno   for failure
> + *    1 + new order for caller to retry with smaller order (guaranteed
> + *    to be smaller than order passed in)
> + */
>  static int set_typed_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
> -                               p2m_type_t gfn_p2mt, p2m_access_t access)
> +                               unsigned int order, p2m_type_t gfn_p2mt,
> +                               p2m_access_t access)
>  {
>      int rc = 0;
>      p2m_access_t a;
>      p2m_type_t ot;
>      mfn_t omfn;
> +    unsigned int cur_order = 0;
>      struct p2m_domain *p2m = p2m_get_hostp2m(d);
> 
>      if ( !paging_mode_translate(d) )
>          return -EIO;
> 
> -    gfn_lock(p2m, gfn, 0);
> -    omfn = p2m->get_entry(p2m, gfn, &ot, &a, 0, NULL, NULL);
> +    gfn_lock(p2m, gfn, order);
> +    omfn = p2m->get_entry(p2m, gfn, &ot, &a, 0, &cur_order, NULL);
> +    if ( cur_order < order )
> +    {
> +        gfn_unlock(p2m, gfn, order);
> +        return cur_order + 1;
> +    }
>      if ( p2m_is_grant(ot) || p2m_is_foreign(ot) )
>      {
> -        gfn_unlock(p2m, gfn, 0);
> +        gfn_unlock(p2m, gfn, order);
>          domain_crash(d);
>          return -ENOENT;
>      }
>      else if ( p2m_is_ram(ot) )
>      {
> -        ASSERT(mfn_valid(omfn));
> -        set_gpfn_from_mfn(mfn_x(omfn), INVALID_M2P_ENTRY);
> +        unsigned long i;
> +
> +        for ( i = 0; i < (1UL << order); ++i )
> +        {
> +            ASSERT(mfn_valid(_mfn(mfn_x(omfn) + i)));
> +            set_gpfn_from_mfn(mfn_x(omfn) + i, INVALID_M2P_ENTRY);

On further consideration, shouldn't we have a preemption check here?
Removing a 1GB superpage's worth of RAM mappings is going to execute for
an unreasonably long time.

~Andrew