* [PATCH V3] powerpc/mm/hugetlb: initialize the pagetable cache correctly for hugetlb
@ 2018-03-30 12:04 Aneesh Kumar K.V
From: Aneesh Kumar K.V @ 2018-03-30 12:04 UTC
To: benh, paulus, mpe; +Cc: linuxppc-dev, Aneesh Kumar K.V
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
With 64k page size, we have hugetlb pte entries at the pmd and pud level for
book3s64, so we don't need to create a separate page table cache for them. With
4k page size, we need to make sure the hugepd page table cache for 16M pages is
placed at the PUD level and the one for 16G pages at the PGD level.

Simplify all this by not using the HUGEPD_PGD_SHIFT/HUGEPD_PUD_SHIFT macros,
which are confusing for book3s64.

Without this patch, with 64k page size we create page table caches with shift
values 10 and 7 that are never used.
Fixes: 419df06eea5b ("powerpc: Reduce the PTE_INDEX_SIZE")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
arch/powerpc/mm/hugetlbpage.c | 18 +++++++++++++-----
1 file changed, 13 insertions(+), 5 deletions(-)
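
As a standalone illustration of the new book3s64 ladder in the hunk below:
a minimal sketch, compilable on its own. The *_SHIFT values here are
illustrative assumptions roughly matching a 4k base page configuration,
not taken from the kernel headers.

    #include <stdio.h>

    #define PMD_SHIFT   21  /* assumed: 4k page, 9-bit pte index */
    #define PUD_SHIFT   28  /* assumed */
    #define PGDIR_SHIFT 37  /* assumed */

    /*
     * Mirror of the patch's pdshift selection: return the shift of the
     * page-table level whose entries map a huge page of 1UL << shift.
     */
    static int pdshift_for(int shift)
    {
            if (shift > PGDIR_SHIFT)
                    return -1;              /* size not supported */
            else if (shift > PUD_SHIFT)
                    return PGDIR_SHIFT;
            else if (shift > PMD_SHIFT)
                    return PUD_SHIFT;
            else
                    return PMD_SHIFT;
    }

    int main(void)
    {
            /*
             * With these assumed values: 16M (shift 24) maps to the PUD
             * level and 16G (shift 34) to the PGD level, matching the
             * changelog's description of the 4k case.
             */
            printf("16M -> pdshift %d\n", pdshift_for(24));
            printf("16G -> pdshift %d\n", pdshift_for(34));
            return 0;
    }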
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index f4153f21d214..99cf86096970 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -122,9 +122,6 @@ static int __hugepte_alloc(struct mm_struct *mm, hugepd_t *hpdp,
 #if defined(CONFIG_PPC_FSL_BOOK3E) || defined(CONFIG_PPC_8xx)
 #define HUGEPD_PGD_SHIFT PGDIR_SHIFT
 #define HUGEPD_PUD_SHIFT PUD_SHIFT
-#else
-#define HUGEPD_PGD_SHIFT PUD_SHIFT
-#define HUGEPD_PUD_SHIFT PMD_SHIFT
 #endif
 
 /*
@@ -670,15 +667,26 @@ static int __init hugetlbpage_init(void)
 		shift = mmu_psize_to_shift(psize);
 
-		if (add_huge_page_size(1ULL << shift) < 0)
+#ifdef CONFIG_PPC_BOOK3S_64
+		if (shift > PGDIR_SHIFT)
 			continue;
-
+		else if (shift > PUD_SHIFT)
+			pdshift = PGDIR_SHIFT;
+		else if (shift > PMD_SHIFT)
+			pdshift = PUD_SHIFT;
+		else
+			pdshift = PMD_SHIFT;
+#else
 		if (shift < HUGEPD_PUD_SHIFT)
 			pdshift = PMD_SHIFT;
 		else if (shift < HUGEPD_PGD_SHIFT)
 			pdshift = PUD_SHIFT;
 		else
 			pdshift = PGDIR_SHIFT;
+#endif
+
+		if (add_huge_page_size(1ULL << shift) < 0)
+			continue;
 
 		/*
 		 * if we have pdshift and shift value same, we don't
 		 * use pgt cache for hugepd.
--
2.14.3
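
For context on the comment where the hunk is truncated: the code that
follows it only creates a page table cache when the huge page is smaller
than the level that holds it. A hedged sketch of that step, assuming the
era's pgtable_cache_add() took a shift and a constructor argument:

    	/*
    	 * A hugepd is a real directory only when pdshift != shift;
    	 * its cache index is the number of index bits it covers.
    	 * With pdshift == shift the pte sits directly at that level
    	 * and no hugepd cache is needed.
    	 */
    	if (pdshift > shift)
    		pgtable_cache_add(pdshift - shift, NULL);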
* Re: [V3] powerpc/mm/hugetlb: initialize the pagetable cache correctly for hugetlb
@ 2018-04-04 14:39 Michael Ellerman
From: Michael Ellerman @ 2018-04-04 14:39 UTC
To: Aneesh Kumar K.V, benh, paulus; +Cc: linuxppc-dev, Aneesh Kumar K.V
On Fri, 2018-03-30 at 12:04:08 UTC, "Aneesh Kumar K.V" wrote:
> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>
> With 64k page size, we have hugetlb pte entries at the pmd and pud level for
> book3s64, so we don't need to create a separate page table cache for them. With
> 4k page size, we need to make sure the hugepd page table cache for 16M pages is
> placed at the PUD level and the one for 16G pages at the PGD level.
>
> Simplify all this by not using the HUGEPD_PGD_SHIFT/HUGEPD_PUD_SHIFT macros,
> which are confusing for book3s64.
>
> Without this patch, with 64k page size we create page table caches with shift
> values 10 and 7 that are never used.
>
> Fixes: 419df06eea5b ("powerpc: Reduce the PTE_INDEX_SIZE")
>
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Applied to powerpc next, thanks.
https://git.kernel.org/powerpc/c/6fa504835d6969144b2bd3699684dd
cheers