From mboxrd@z Thu Jan 1 00:00:00 1970
From: Julien Grall
Subject: Re: [RFC 1/2] xen/mm: Clarify the granularity for each Frame Number
Date: Wed, 12 Aug 2015 12:13:22 +0100
Message-ID: <55CB2A52.4080704@citrix.com>
References: <1438774133-1564-1-git-send-email-julien.grall@citrix.com>
 <1438774133-1564-2-git-send-email-julien.grall@citrix.com>
 <55C1F617.8090208@citrix.com> <55C2034F.9070508@citrix.com>
 <55C20595.8030502@citrix.com> <55C20D2D.1050806@citrix.com>
 <55CB0F000200007800099F11@prv-mh.provo.novell.com>
 <55CB1899.802@citrix.com>
 <55CB3D2A020000780009A132@prv-mh.provo.novell.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
 by lists.xen.org with esmtp (Exim 4.72) (envelope-from )
 id 1ZPTzD-00013A-L1
 for xen-devel@lists.xenproject.org; Wed, 12 Aug 2015 11:14:23 +0000
In-Reply-To: <55CB3D2A020000780009A132@prv-mh.provo.novell.com>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Jan Beulich
Cc: Wei.Liu2@citrix.com, ian.campbell@citrix.com, Stefano Stabellini,
 Andrew Cooper, Ian Jackson, Tim Deegan, stefano.stabellini@citrix.com,
 xen-devel@lists.xenproject.org, Keir Fraser
List-Id: xen-devel@lists.xenproject.org

On 12/08/15 11:33, Jan Beulich wrote:
>>>> On 12.08.15 at 11:57, wrote:
>> On 12/08/2015 08:16, Jan Beulich wrote:
>>>>>> On 05.08.15 at 15:18, wrote:
>>>> On 05/08/15 13:46, Andrew Cooper wrote:
>>>>> On 05/08/15 13:36, Julien Grall wrote:
>>>>>> So we need to introduce the concept of granularity in each definition.
>>>>>> This patch makes clear that MFN and GFN are always 4KB and that PFN
>>>>>> may vary.
>>>>>
>>>>> Is (or rather will) a 4K dom0 able to make 4K mappings of a 64K domU?
>>>>> How is a 64K dom0 expected to make mappings of a 4K domU?
>>>>
>>>> The Xen interface will stay 4K even with a 64K guest. We have to support
>>>> 64K guests/dom0 on the current Xen because some distros may choose to
>>>> ship 64K only.
>>>
>>> Interesting. Does Linux on ARM not require any atomic page table
>>> entry updates? I ask because I can't see how you would emulate
>>> such when you need to deal with 16 of them at a time.
>>
>> I'm not sure I understand this question.
>>
>> ARM64 is able to support different page granularities (4KB, 16KB and
>> 64KB). You have to set up the page table registers during boot in
>> order to specify the granularity used for the whole page table.
>
> But you said you use 4k pages in Xen nevertheless. I.e. page tables
> would still be at 4k granularity, i.e. you'd need to update 16 entries
> for a single 64k page. Or can you have 64k pages in L1 and 4k pages
> in L2?

The page tables for each stage are completely dissociated. So you can
use a different page granularity for Xen and for Linux.

>>>> In my current implementation of Linux 64K support (see [1]), there are
>>>> no changes in Xen (hypervisor and tools). Linux is breaking each 64K
>>>> page into 4K chunks.
>>>>
>>>> When the backend is 64K, it will map the foreign 4K page at the top of
>>>> a 64K page. It's a waste of memory, but it's easier to implement and
>>>> it's still an improvement compared to having Linux crash at boot.
>>>
>>> Waste of memory? You're only mapping an existing chunk of memory.
>>> DYM waste of address space?
>>
>> No, I really meant waste of memory. The current grant API in Linux
>> allocates one Linux page per grant. But the grant is always 4K, so we
>> won't be able to use the remaining 60K for anything as long as we use
>> this page for a grant.
>>
>> So if the grants are pre-allocated (such as for PV block), we won't be
>> able to use nr_grant * 60KB of memory.
>
> I still don't follow - grant mappings ought to be done into ballooned
> (i.e. empty) pages, i.e. no memory would get wasted unless there
> are too few balloon pages available.

Everything ballooned out is memory that can no longer be used by Linux.
If we only use 1/16 of each ballooned-out page, that is a huge waste of
memory to me.
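To put rough numbers on it, here is a small standalone sketch of the
arithmetic (purely illustrative; the constant names and the nr_grant
value below are made up for this example and are not code from the
series):

#include <stdio.h>

/* Illustrative constants: Xen's grant/frame granularity vs. the Linux
 * page size when the 64K granule is selected. */
#define XEN_GRANT_SIZE   (4UL * 1024)
#define GUEST_PAGE_SIZE  (64UL * 1024)

int main(void)
{
    /* How many 4K grant-sized chunks fit in one 64K Linux page. */
    unsigned long chunks_per_page  = GUEST_PAGE_SIZE / XEN_GRANT_SIZE;   /* 16 */
    /* Memory left unusable when one grant pins a whole 64K page. */
    unsigned long wasted_per_grant = GUEST_PAGE_SIZE - XEN_GRANT_SIZE;   /* 60K */
    /* Arbitrary example count of pre-allocated grants. */
    unsigned long nr_grant = 1024;

    printf("4K chunks per 64K page    : %lu\n", chunks_per_page);
    printf("fraction of the page used : 1/%lu\n", chunks_per_page);
    printf("wasted per grant          : %lu KB\n", wasted_per_grant / 1024);
    printf("wasted for %lu grants     : %lu MB\n",
           nr_grant, nr_grant * wasted_per_grant / (1024 * 1024));
    return 0;
}

With those made-up numbers, a thousand pre-allocated grants pin about
60MB of memory that Linux cannot use for anything else.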
-- 
Julien Grall