From: Jon Tollefson <kniht@linux.vnet.ibm.com>
To: linux-kernel@vger.kernel.org,
Linux Memory Management List <linux-mm@kvack.org>,
linuxppc-dev <linuxppc-dev@ozlabs.org>
Cc: Paul Mackerras <paulus@samba.org>,
Andi Kleen <andi@firstfloor.org>,
Adam Litke <agl@linux.vnet.ibm.com>
Subject: [PATCH 4/4] powerpc: define page support for 16G pages
Date: Wed, 26 Mar 2008 16:29:44 -0500
Message-ID: <47EAC048.30006@linux.vnet.ibm.com>
In-Reply-To: <47EABE2D.7080400@linux.vnet.ibm.com>
The huge page size is set up for 16G pages if that size is specified at boot time. The support for
multiple huge page sizes is not yet utilized; that will come in a future patch.
Signed-off-by: Jon Tollefson <kniht@linux.vnet.ibm.com>
---
hugetlbpage.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 44d3d55..b6a02b7 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -26,6 +26,7 @@
 #define HPAGE_SHIFT_64K	16
 #define HPAGE_SHIFT_16M	24
+#define HPAGE_SHIFT_16G	34

 #define NUM_LOW_AREAS	(0x100000000UL >> SID_SHIFT)
 #define NUM_HIGH_AREAS	(PGTABLE_RANGE >> HTLB_AREA_SHIFT)
@@ -589,9 +590,11 @@ void set_huge_psize(int psize)
 {
	/* Check that it is a page size supported by the hardware and
	 * that it fits within pagetable limits. */
-	if (mmu_psize_defs[psize].shift && mmu_psize_defs[psize].shift < SID_SHIFT &&
+	if (mmu_psize_defs[psize].shift &&
+	    mmu_psize_defs[psize].shift < SID_SHIFT_1T &&
		(mmu_psize_defs[psize].shift > MIN_HUGEPTE_SHIFT ||
-		mmu_psize_defs[psize].shift == HPAGE_SHIFT_64K)) {
+		 mmu_psize_defs[psize].shift == HPAGE_SHIFT_64K ||
+		 mmu_psize_defs[psize].shift == HPAGE_SHIFT_16G)) {
		HPAGE_SHIFT = mmu_psize_defs[psize].shift;
		mmu_huge_psize = psize;
 #ifdef CONFIG_PPC_64K_PAGES
@@ -599,6 +602,8 @@ void set_huge_psize(int psize)
 #else
	if (HPAGE_SHIFT == HPAGE_SHIFT_64K)
		hugepte_shift = (PMD_SHIFT-HPAGE_SHIFT);
+	else if (HPAGE_SHIFT == HPAGE_SHIFT_16G)
+		hugepte_shift = (PGDIR_SHIFT-HPAGE_SHIFT);
	else
		hugepte_shift = (PUD_SHIFT-HPAGE_SHIFT);
 #endif
@@ -625,6 +630,9 @@ static int __init hugepage_setup_sz(char *str)
	case HPAGE_SHIFT_16M:
		mmu_psize = MMU_PAGE_16M;
		break;
+	case HPAGE_SHIFT_16G:
+		mmu_psize = MMU_PAGE_16G;
+		break;
	}

	if (mmu_psize >= 0 && mmu_psize_defs[mmu_psize].shift)
Thread overview: 7+ messages
2008-03-26 21:20 [PATCH 0/4] 16G huge page support for powerpc Jon Tollefson
2008-03-26 21:24 ` [PATCH 1/4] allow arch specific function for allocating gigantic pages Jon Tollefson
2008-03-26 21:49 ` Andi Kleen
2008-03-26 21:26 ` [PATCH 2/4] powerpc: " Jon Tollefson
2008-03-26 21:27 ` [PATCH 3/4] powerpc: scan device tree and save gigantic page locations Jon Tollefson
2008-03-26 21:29 ` [PATCH 4/4] powerpc: define page support for 16G pages Jon Tollefson
2008-03-26 21:47 ` [PATCH 0/4] 16G huge page support for powerpc Andi Kleen