From: Ian Campbell <ian.campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>,
xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
Keir Fraser <keir@xen.org>, Wei Liu <wei.liu2@citrix.com>,
Ian Jackson <Ian.Jackson@eu.citrix.com>, Tim Deegan <tim@xen.org>
Subject: Re: [PATCH] x86/NUMA: make init_node_heap() respect Xen heap limit
Date: Tue, 1 Sep 2015 11:28:47 +0100 [thread overview]
Message-ID: <1441103327.27618.26.camel@citrix.com>
In-Reply-To: <55DEE85D020000780009D4FA@prv-mh.provo.novell.com>
On Thu, 2015-08-27 at 02:37 -0600, Jan Beulich wrote:
> On NUMA systems, where we try to use node local memory for the basic
> control structures of the buddy allocator, this special case needs to
> take into consideration a possible address width limit placed on the
> Xen heap. In turn this (but also other, more abstract considerations)
> requires that xenheap_max_mfn() not be called more than once (at most
> we might permit it to be called a second time with a larger value than
> was passed the first time), and be called only before calling
> end_boot_allocator().
>
> While inspecting all the involved code, a couple of off-by-one issues
> were found (and are being corrected here at once):
> - arch_init_memory() cleared one too many page table slots
> - the highmem_start based invocation of xenheap_max_mfn() passed too
> big a value
> - xenheap_max_mfn() calculated the wrong bit count in edge cases
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
> @@ -428,14 +434,18 @@ static unsigned long init_node_heap(int
>      }
>  #ifdef DIRECTMAP_VIRT_END
>      else if ( *use_tail && nr >= needed &&
> -              (mfn + nr) <= (virt_to_mfn(eva - 1) + 1) )
> +              (mfn + nr) <= (virt_to_mfn(eva - 1) + 1) &&
> +              (!xenheap_bits ||
> +               !((mfn + nr - 1) >> (xenheap_bits - PAGE_SHIFT))) )
This logic appears twice (with just s/nr/needed/ the second time), and it
is a reasonably complex set of conditions. Moving it into a helper might be
a nice cleanup: it would allow a descriptive name to be used, and perhaps
the various conditions could be split into their own if (...) return
statements, which might aid readability.
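For illustration, such a helper could look roughly like the sketch below.
Only the condition itself comes from the patch; the helper's name, the
stand-in constants, and the standalone scaffolding are hypothetical, not
Xen's actual code.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for Xen's definitions (x86 uses 4K pages). */
#define PAGE_SHIFT 12
static unsigned int xenheap_bits;   /* 0 => no Xen heap address-width limit */

/*
 * Sketch of the suggested helper: does the MFN range [mfn, mfn + nr)
 * fit entirely below the Xen heap address-width limit?
 */
static bool mfns_within_xenheap_limit(unsigned long mfn, unsigned long nr)
{
    if ( !xenheap_bits )
        return true;            /* no limit configured */
    /* The highest MFN in the range must have no bits at or above the limit. */
    return !((mfn + nr - 1) >> (xenheap_bits - PAGE_SHIFT));
}
```

The two call sites would then differ only in passing nr vs. needed, and the
name documents what the shifting is actually checking.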
Ian.
Thread overview: 19+ messages
2015-08-27 8:37 [PATCH] x86/NUMA: make init_node_heap() respect Xen heap limit Jan Beulich
2015-08-27 9:25 ` Wei Liu
2015-08-27 10:11 ` Andrew Cooper
2015-08-27 14:43 ` Wei Liu
2015-09-01 10:28 ` Ian Campbell [this message]
2015-09-03 20:01 ` Julien Grall
2015-09-03 20:58 ` Julien Grall
2015-09-04 7:37 ` Jan Beulich
2015-09-04 8:27 ` Ian Campbell
2015-09-04 8:39 ` Jan Beulich
2015-09-04 8:52 ` Ian Campbell
2015-09-04 9:09 ` Jan Beulich
2015-09-04 11:29 ` Julien Grall
2015-09-04 12:02 ` Jan Beulich
2015-09-04 12:05 ` Wei Liu
2015-09-04 12:50 ` Julien Grall
2015-09-04 12:57 ` Ian Campbell
2015-09-04 12:52 ` Ian Campbell
2015-09-04 12:53 ` Julien Grall