* [PATCH 0/5] Huge pages for short descriptors on ARM
@ 2014-02-18 15:27 Steve Capper
2014-02-18 15:27 ` [PATCH 1/5] mm: hugetlb: Introduce huge_pte_{page,present,young} Steve Capper
` (4 more replies)
0 siblings, 5 replies; 10+ messages in thread
From: Steve Capper @ 2014-02-18 15:27 UTC (permalink / raw)
To: linux-arm-kernel, linux, linux-mm
Cc: will.deacon, catalin.marinas, arnd, dsaxena, robherring2,
Steve Capper
Hello,
This series brings HugeTLB pages and Transparent Huge Pages (THP) to
ARM on short descriptors.
We use a pair of 1MB sections to represent a 2MB huge page. Both
HugeTLB and THP entries are represented by PMDs with the same bit
layout.
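As a rough illustration (not part of the patches themselves; the real logic is
set_pmd_at() in patch 4), writing out a 2MB huge page amounts to populating two
adjacent section entries:

	/* Sketch only: mirrors set_pmd_at() from patch 4. */
	static void sketch_write_huge_pmd(pmd_t *pmdp, pmd_t pmd)
	{
		pmdp[0] = __pmd(pmd_val(pmd));			/* first 1MB section */
		pmdp[1] = __pmd(pmd_val(pmd) + SECTION_SIZE);	/* second 1MB section */
		flush_pmd_entry(pmdp);				/* write back the pair */
	}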
The short descriptor page table manipulation code on ARM makes a
distinction between Linux and hardware ptes and performs the necessary
translation in the assembler pte setter functions. The huge page code
instead manipulates the hardware entries directly.
There is one small bit of translation that takes place to populate an
appropriate pgprot_t value for the VMA containing the huge page. Once
we have that pgprot_t, we can manipulate huge ptes/pmds as normal with
the usual bit-manipulation and modify functions.
In order to be able to manipulate huge ptes directly, I've introduced
three new manipulation functions: huge_pte_page, huge_pte_present and
huge_pte_mkyoung. If an architecture does not define these, they
default to the standard pte analogues.
I have tested this series on an Arndale board running 3.14-rc3. The
libhugetlbfs checks, LTP and some custom THP PROT_NONE tests were used
to test this series.
Since the RFC in December, I have rebased the code against 3.14-rc3 and
tidied up the code.
Cheers,
--
Steve
Steve Capper (5):
mm: hugetlb: Introduce huge_pte_{page,present,young}
arm: mm: Adjust the parameters for __sync_icache_dcache
arm: mm: Make mmu_gather aware of huge pages
arm: mm: HugeTLB support for non-LPAE systems
arm: mm: Add Transparent HugePage support for non-LPAE
arch/arm/Kconfig | 4 +-
arch/arm/include/asm/hugetlb-2level.h | 121 +++++++++++++++++++++++++++++++
arch/arm/include/asm/hugetlb-3level.h | 6 ++
arch/arm/include/asm/hugetlb.h | 10 +--
arch/arm/include/asm/pgtable-2level.h | 133 +++++++++++++++++++++++++++++++++-
arch/arm/include/asm/pgtable-3level.h | 3 +-
arch/arm/include/asm/pgtable.h | 9 +--
arch/arm/include/asm/tlb.h | 14 +++-
arch/arm/kernel/head.S | 10 ++-
arch/arm/mm/fault.c | 13 ----
arch/arm/mm/flush.c | 9 +--
arch/arm/mm/fsr-2level.c | 4 +-
arch/arm/mm/hugetlbpage.c | 2 +-
arch/arm/mm/mmu.c | 51 +++++++++++++
include/linux/hugetlb.h | 12 +++
mm/hugetlb.c | 22 +++---
16 files changed, 370 insertions(+), 53 deletions(-)
create mode 100644 arch/arm/include/asm/hugetlb-2level.h
--
1.8.1.4
* [PATCH 1/5] mm: hugetlb: Introduce huge_pte_{page,present,young}
2014-02-18 15:27 [PATCH 0/5] Huge pages for short descriptors on ARM Steve Capper
@ 2014-02-18 15:27 ` Steve Capper
2014-03-03 8:01 ` Steve Capper
` (3 more replies)
2014-02-18 15:27 ` [PATCH 2/5] arm: mm: Adjust the parameters for __sync_icache_dcache Steve Capper
` (3 subsequent siblings)
4 siblings, 4 replies; 10+ messages in thread
From: Steve Capper @ 2014-02-18 15:27 UTC (permalink / raw)
To: linux-arm-kernel, linux, linux-mm
Cc: will.deacon, catalin.marinas, arnd, dsaxena, robherring2,
Steve Capper
Introduce huge pte versions of pte_page, pte_present and pte_young.
This allows ARM (without LPAE) to use alternative pte processing logic
for huge ptes.
Where these functions are not defined by architecture code, they
fall back to the standard functions.
Signed-off-by: Steve Capper <steve.capper@linaro.org>
---
include/linux/hugetlb.h | 12 ++++++++++++
mm/hugetlb.c | 22 +++++++++++-----------
2 files changed, 23 insertions(+), 11 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 8c43cc4..4992487 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -353,6 +353,18 @@ static inline pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma,
}
#endif
+#ifndef huge_pte_page
+#define huge_pte_page(pte) pte_page(pte)
+#endif
+
+#ifndef huge_pte_present
+#define huge_pte_present(pte) pte_present(pte)
+#endif
+
+#ifndef huge_pte_mkyoung
+#define huge_pte_mkyoung(pte) pte_mkyoung(pte)
+#endif
+
static inline struct hstate *page_hstate(struct page *page)
{
VM_BUG_ON_PAGE(!PageHuge(page), page);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c01cb9f..d1a38c9 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2319,7 +2319,7 @@ static pte_t make_huge_pte(struct vm_area_struct *vma, struct page *page,
entry = huge_pte_wrprotect(mk_huge_pte(page,
vma->vm_page_prot));
}
- entry = pte_mkyoung(entry);
+ entry = huge_pte_mkyoung(entry);
entry = pte_mkhuge(entry);
entry = arch_make_huge_pte(entry, vma, page, writable);
@@ -2379,7 +2379,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
if (cow)
huge_ptep_set_wrprotect(src, addr, src_pte);
entry = huge_ptep_get(src_pte);
- ptepage = pte_page(entry);
+ ptepage = huge_pte_page(entry);
get_page(ptepage);
page_dup_rmap(ptepage);
set_huge_pte_at(dst, addr, dst_pte, entry);
@@ -2398,7 +2398,7 @@ static int is_hugetlb_entry_migration(pte_t pte)
{
swp_entry_t swp;
- if (huge_pte_none(pte) || pte_present(pte))
+ if (huge_pte_none(pte) || huge_pte_present(pte))
return 0;
swp = pte_to_swp_entry(pte);
if (non_swap_entry(swp) && is_migration_entry(swp))
@@ -2411,7 +2411,7 @@ static int is_hugetlb_entry_hwpoisoned(pte_t pte)
{
swp_entry_t swp;
- if (huge_pte_none(pte) || pte_present(pte))
+ if (huge_pte_none(pte) || huge_pte_present(pte))
return 0;
swp = pte_to_swp_entry(pte);
if (non_swap_entry(swp) && is_hwpoison_entry(swp))
@@ -2464,7 +2464,7 @@ again:
goto unlock;
}
- page = pte_page(pte);
+ page = huge_pte_page(pte);
/*
* If a reference page is supplied, it is because a specific
* page is being unmapped, not a range. Ensure the page we
@@ -2614,7 +2614,7 @@ static int hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
unsigned long mmun_start; /* For mmu_notifiers */
unsigned long mmun_end; /* For mmu_notifiers */
- old_page = pte_page(pte);
+ old_page = huge_pte_page(pte);
retry_avoidcopy:
/* If no-one else is actually using this page, avoid the copy
@@ -2965,7 +2965,7 @@ int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
* Note that locking order is always pagecache_page -> page,
* so no worry about deadlock.
*/
- page = pte_page(entry);
+ page = huge_pte_page(entry);
get_page(page);
if (page != pagecache_page)
lock_page(page);
@@ -2985,7 +2985,7 @@ int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
}
entry = huge_pte_mkdirty(entry);
}
- entry = pte_mkyoung(entry);
+ entry = huge_pte_mkyoung(entry);
if (huge_ptep_set_access_flags(vma, address, ptep, entry,
flags & FAULT_FLAG_WRITE))
update_mmu_cache(vma, address, ptep);
@@ -3077,7 +3077,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
}
pfn_offset = (vaddr & ~huge_page_mask(h)) >> PAGE_SHIFT;
- page = pte_page(huge_ptep_get(pte));
+ page = huge_pte_page(huge_ptep_get(pte));
same_page:
if (pages) {
pages[i] = mem_map_offset(page, pfn_offset);
@@ -3425,7 +3425,7 @@ follow_huge_pmd(struct mm_struct *mm, unsigned long address,
{
struct page *page;
- page = pte_page(*(pte_t *)pmd);
+ page = huge_pte_page(*(pte_t *)pmd);
if (page)
page += ((address & ~PMD_MASK) >> PAGE_SHIFT);
return page;
@@ -3437,7 +3437,7 @@ follow_huge_pud(struct mm_struct *mm, unsigned long address,
{
struct page *page;
- page = pte_page(*(pte_t *)pud);
+ page = huge_pte_page(*(pte_t *)pud);
if (page)
page += ((address & ~PUD_MASK) >> PAGE_SHIFT);
return page;
--
1.8.1.4
* [PATCH 2/5] arm: mm: Adjust the parameters for __sync_icache_dcache
2014-02-18 15:27 [PATCH 0/5] Huge pages for short descriptors on ARM Steve Capper
2014-02-18 15:27 ` [PATCH 1/5] mm: hugetlb: Introduce huge_pte_{page,present,young} Steve Capper
@ 2014-02-18 15:27 ` Steve Capper
2014-02-18 15:27 ` [PATCH 3/5] arm: mm: Make mmu_gather aware of huge pages Steve Capper
` (2 subsequent siblings)
4 siblings, 0 replies; 10+ messages in thread
From: Steve Capper @ 2014-02-18 15:27 UTC (permalink / raw)
To: linux-arm-kernel, linux, linux-mm
Cc: will.deacon, catalin.marinas, arnd, dsaxena, robherring2,
Steve Capper
Rather than take a pte_t as input, pass __sync_icache_dcache the pfn
and whether or not the mapping is executable.
This allows the function to be used for both ptes and pmds.
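For illustration, the two call sites end up looking like this (the pte path is
in this patch; the pmd accessors and set_pmd_at() arrive with the non-LPAE
HugeTLB patch later in the series):

	/* pte path, from set_pte_at() in pgtable.h: */
	__sync_icache_dcache(pte_pfn(pteval), pte_exec(pteval));

	/* pmd path, from set_pmd_at() added in patch 4: */
	__sync_icache_dcache(pmd_pfn(pmd), pmd_exec(pmd));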
Signed-off-by: Steve Capper <steve.capper@linaro.org>
---
arch/arm/include/asm/pgtable.h | 6 +++---
arch/arm/mm/flush.c | 9 ++++-----
2 files changed, 7 insertions(+), 8 deletions(-)
diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index 7d59b52..9b4ad36 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -225,11 +225,11 @@ static inline pte_t *pmd_page_vaddr(pmd_t pmd)
#define pte_present_user(pte) (pte_present(pte) && (pte_val(pte) & L_PTE_USER))
#if __LINUX_ARM_ARCH__ < 6
-static inline void __sync_icache_dcache(pte_t pteval)
+static inline void __sync_icache_dcache(unsigned long pfn, int exec)
{
}
#else
-extern void __sync_icache_dcache(pte_t pteval);
+extern void __sync_icache_dcache(unsigned long pfn, int exec);
#endif
static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
@@ -238,7 +238,7 @@ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
unsigned long ext = 0;
if (addr < TASK_SIZE && pte_present_user(pteval)) {
- __sync_icache_dcache(pteval);
+ __sync_icache_dcache(pte_pfn(pteval), pte_exec(pteval));
ext |= PTE_EXT_NG;
}
diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c
index 3387e60..df0d5ca 100644
--- a/arch/arm/mm/flush.c
+++ b/arch/arm/mm/flush.c
@@ -232,16 +232,15 @@ static void __flush_dcache_aliases(struct address_space *mapping, struct page *p
}
#if __LINUX_ARM_ARCH__ >= 6
-void __sync_icache_dcache(pte_t pteval)
+void __sync_icache_dcache(unsigned long pfn, int exec)
{
- unsigned long pfn;
struct page *page;
struct address_space *mapping;
- if (cache_is_vipt_nonaliasing() && !pte_exec(pteval))
+ if (cache_is_vipt_nonaliasing() && !exec)
/* only flush non-aliasing VIPT caches for exec mappings */
return;
- pfn = pte_pfn(pteval);
+
if (!pfn_valid(pfn))
return;
@@ -254,7 +253,7 @@ void __sync_icache_dcache(pte_t pteval)
if (!test_and_set_bit(PG_dcache_clean, &page->flags))
__flush_dcache_page(mapping, page);
- if (pte_exec(pteval))
+ if (exec)
__flush_icache_all();
}
#endif
--
1.8.1.4
* [PATCH 3/5] arm: mm: Make mmu_gather aware of huge pages
2014-02-18 15:27 [PATCH 0/5] Huge pages for short descriptors on ARM Steve Capper
2014-02-18 15:27 ` [PATCH 1/5] mm: hugetlb: Introduce huge_pte_{page,present,young} Steve Capper
2014-02-18 15:27 ` [PATCH 2/5] arm: mm: Adjust the parameters for __sync_icache_dcache Steve Capper
@ 2014-02-18 15:27 ` Steve Capper
2014-02-18 15:27 ` [PATCH 4/5] arm: mm: HugeTLB support for non-LPAE systems Steve Capper
2014-02-18 15:27 ` [PATCH 5/5] arm: mm: Add Transparent HugePage support for non-LPAE Steve Capper
4 siblings, 0 replies; 10+ messages in thread
From: Steve Capper @ 2014-02-18 15:27 UTC (permalink / raw)
To: linux-arm-kernel, linux, linux-mm
Cc: will.deacon, catalin.marinas, arnd, dsaxena, robherring2,
Steve Capper
Huge pages on short descriptors are arranged as pairs of 1MB sections.
We need to ensure that the TLB entries for both sections are flushed
when tlb_add_flush() is called on an address within a HugeTLB page.
This patch extends the tlb flush range to HPAGE_SIZE rather than
PAGE_SIZE when addresses belonging to huge page VMAs are added to
the flush range.
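As a rough sketch of the effect (illustrative values only): for an address that
falls within a huge page VMA, the accumulated flush range now spans both
backing sections rather than a single small page:

	/* Sketch only: range accumulated by tlb_add_flush() for a huge page VMA. */
	unsigned long start = addr;			/* e.g. 0x00200000 */
	unsigned long end   = addr + HPAGE_SIZE;	/* 0x00400000, covering both 1MB sections */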
Signed-off-by: Steve Capper <steve.capper@linaro.org>
---
arch/arm/include/asm/tlb.h | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/arch/arm/include/asm/tlb.h b/arch/arm/include/asm/tlb.h
index 0baf7f0..b2498e6 100644
--- a/arch/arm/include/asm/tlb.h
+++ b/arch/arm/include/asm/tlb.h
@@ -81,10 +81,17 @@ static inline void tlb_flush(struct mmu_gather *tlb)
static inline void tlb_add_flush(struct mmu_gather *tlb, unsigned long addr)
{
if (!tlb->fullmm) {
+ unsigned long size = PAGE_SIZE;
+
if (addr < tlb->range_start)
tlb->range_start = addr;
- if (addr + PAGE_SIZE > tlb->range_end)
- tlb->range_end = addr + PAGE_SIZE;
+
+ if (!config_enabled(CONFIG_ARM_LPAE) && tlb->vma
+ && is_vm_hugetlb_page(tlb->vma))
+ size = HPAGE_SIZE;
+
+ if (addr + size > tlb->range_end)
+ tlb->range_end = addr + size;
}
}
--
1.8.1.4
* [PATCH 4/5] arm: mm: HugeTLB support for non-LPAE systems
2014-02-18 15:27 [PATCH 0/5] Huge pages for short descriptors on ARM Steve Capper
` (2 preceding siblings ...)
2014-02-18 15:27 ` [PATCH 3/5] arm: mm: Make mmu_gather aware of huge pages Steve Capper
@ 2014-02-18 15:27 ` Steve Capper
2014-02-18 15:27 ` [PATCH 5/5] arm: mm: Add Transparent HugePage support for non-LPAE Steve Capper
4 siblings, 0 replies; 10+ messages in thread
From: Steve Capper @ 2014-02-18 15:27 UTC (permalink / raw)
To: linux-arm-kernel, linux, linux-mm
Cc: will.deacon, catalin.marinas, arnd, dsaxena, robherring2,
Steve Capper
Add huge page support for systems with short descriptors. Rather than
store separate linux/hardware huge ptes, we work directly with the
hardware descriptors at the pmd level.
As we work directly with the pmd and need to store information that
doesn't directly correspond to hardware bits (such as the accessed
flag and the dirty bit), we re-purpose the domain bits of the short
section descriptor. In order to use these domain bits for storage,
we make ourselves a client for all 16 domains; this is done in
head.S.
Storing extra information in the domain bits also makes it a lot
easier to implement Transparent Huge Pages, and some of the code in
pgtable-2level.h is arranged to facilitate THP support in a later
patch.
Non-LPAE HugeTLB pages are incompatible with the huge page migration
code (enabled when CONFIG_MEMORY_FAILURE is selected) as that code
dereferences PTEs directly, rather than calling huge_ptep_get and
set_huge_pte_at.
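For reference, a sketch of how the domain field (bits [8:5] of a section
descriptor) is re-used by this series; PMD_DSECT_SPLITTING is only added by the
THP patch that follows, and bit 8 is left spare:

	/* Re-purposed domain bits in a short-descriptor section entry (sketch). */
	#define PMD_DSECT_DIRTY		(_AT(pmdval_t, 1) << 5)	/* software dirty bit */
	#define PMD_DSECT_AF		(_AT(pmdval_t, 1) << 6)	/* software accessed flag */
	#define PMD_DSECT_SPLITTING	(_AT(pmdval_t, 1) << 7)	/* THP splitting, patch 5 */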
Signed-off-by: Steve Capper <steve.capper@linaro.org>
---
arch/arm/Kconfig | 2 +-
arch/arm/include/asm/hugetlb-2level.h | 121 ++++++++++++++++++++++++++++++++++
arch/arm/include/asm/hugetlb-3level.h | 6 ++
arch/arm/include/asm/hugetlb.h | 10 ++-
arch/arm/include/asm/pgtable-2level.h | 101 ++++++++++++++++++++++++++--
arch/arm/include/asm/pgtable-3level.h | 2 +-
arch/arm/include/asm/pgtable.h | 1 +
arch/arm/kernel/head.S | 10 ++-
arch/arm/mm/fault.c | 13 ----
arch/arm/mm/fsr-2level.c | 4 +-
arch/arm/mm/hugetlbpage.c | 2 +-
arch/arm/mm/mmu.c | 51 ++++++++++++++
12 files changed, 294 insertions(+), 29 deletions(-)
create mode 100644 arch/arm/include/asm/hugetlb-2level.h
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index e254198..58b17b1 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1816,7 +1816,7 @@ config HW_PERF_EVENTS
config SYS_SUPPORTS_HUGETLBFS
def_bool y
- depends on ARM_LPAE
+ depends on ARM_LPAE || (!CPU_USE_DOMAINS && !MEMORY_FAILURE)
config HAVE_ARCH_TRANSPARENT_HUGEPAGE
def_bool y
diff --git a/arch/arm/include/asm/hugetlb-2level.h b/arch/arm/include/asm/hugetlb-2level.h
new file mode 100644
index 0000000..d270ca2
--- /dev/null
+++ b/arch/arm/include/asm/hugetlb-2level.h
@@ -0,0 +1,121 @@
+/*
+ * arch/arm/include/asm/hugetlb-2level.h
+ *
+ * Copyright (C) 2014 Linaro Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _ASM_ARM_HUGETLB_2LEVEL_H
+#define _ASM_ARM_HUGETLB_2LEVEL_H
+
+
+static inline pte_t huge_ptep_get(pte_t *ptep)
+{
+ return *ptep;
+}
+
+static inline void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
+ pte_t *ptep, pte_t pte)
+{
+ set_pmd_at(mm, addr, (pmd_t *) ptep, __pmd(pte_val(pte)));
+}
+
+static inline pte_t pte_mkhuge(pte_t pte) { return pte; }
+
+static inline void huge_ptep_clear_flush(struct vm_area_struct *vma,
+ unsigned long addr, pte_t *ptep)
+{
+ pmd_t *pmdp = (pmd_t *)ptep;
+ pmd_clear(pmdp);
+ flush_tlb_range(vma, addr, addr + HPAGE_SIZE);
+}
+
+static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
+ unsigned long addr, pte_t *ptep)
+{
+ pmd_t *pmdp = (pmd_t *) ptep;
+ set_pmd_at(mm, addr, pmdp, pmd_wrprotect(*pmdp));
+}
+
+
+static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
+ unsigned long addr, pte_t *ptep)
+{
+ pmd_t *pmdp = (pmd_t *)ptep;
+ pte_t pte = huge_ptep_get(ptep);
+ pmd_clear(pmdp);
+
+ return pte;
+}
+
+static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
+ unsigned long addr, pte_t *ptep,
+ pte_t pte, int dirty)
+{
+ int changed = !pte_same(huge_ptep_get(ptep), pte);
+ if (changed) {
+ set_huge_pte_at(vma->vm_mm, addr, ptep, pte);
+ flush_tlb_range(vma, addr, addr + HPAGE_SIZE);
+ }
+
+ return changed;
+}
+
+static inline pte_t huge_pte_mkwrite(pte_t pte)
+{
+ pmd_t pmd = __pmd(pte_val(pte));
+ pmd = pmd_mkwrite(pmd);
+ return __pte(pmd_val(pmd));
+}
+
+static inline pte_t huge_pte_mkdirty(pte_t pte)
+{
+ pmd_t pmd = __pmd(pte_val(pte));
+ pmd = pmd_mkdirty(pmd);
+ return __pte(pmd_val(pmd));
+}
+
+static inline unsigned long huge_pte_dirty(pte_t pte)
+{
+ return pmd_dirty(__pmd(pte_val(pte)));
+}
+
+static inline unsigned long huge_pte_write(pte_t pte)
+{
+ return pmd_write(__pmd(pte_val(pte)));
+}
+
+static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
+ pte_t *ptep)
+{
+ pmd_t *pmdp = (pmd_t *)ptep;
+ pmd_clear(pmdp);
+}
+
+static inline pte_t mk_huge_pte(struct page *page, pgprot_t pgprot)
+{
+ pmd_t pmd = mk_pmd(page,pgprot);
+ return __pte(pmd_val(pmd));
+}
+
+static inline pte_t huge_pte_modify(pte_t pte, pgprot_t newprot)
+{
+ pmd_t pmd = pmd_modify(__pmd(pte_val(pte)), newprot);
+ return __pte(pmd_val(pmd));
+}
+
+static inline pte_t huge_pte_wrprotect(pte_t pte)
+{
+ pmd_t pmd = pmd_wrprotect(__pmd(pte_val(pte)));
+ return __pte(pmd_val(pmd));
+}
+
+#endif /* _ASM_ARM_HUGETLB_2LEVEL_H */
diff --git a/arch/arm/include/asm/hugetlb-3level.h b/arch/arm/include/asm/hugetlb-3level.h
index d4014fb..c633119 100644
--- a/arch/arm/include/asm/hugetlb-3level.h
+++ b/arch/arm/include/asm/hugetlb-3level.h
@@ -22,6 +22,7 @@
#ifndef _ASM_ARM_HUGETLB_3LEVEL_H
#define _ASM_ARM_HUGETLB_3LEVEL_H
+#include <asm-generic/hugetlb.h>
/*
* If our huge pte is non-zero then mark the valid bit.
@@ -68,4 +69,9 @@ static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
return ptep_set_access_flags(vma, addr, ptep, pte, dirty);
}
+static inline pte_t huge_pte_wrprotect(pte_t pte)
+{
+ return pte_wrprotect(pte);
+}
+
#endif /* _ASM_ARM_HUGETLB_3LEVEL_H */
diff --git a/arch/arm/include/asm/hugetlb.h b/arch/arm/include/asm/hugetlb.h
index 1f1b1cd..1d7f7b7 100644
--- a/arch/arm/include/asm/hugetlb.h
+++ b/arch/arm/include/asm/hugetlb.h
@@ -23,9 +23,12 @@
#define _ASM_ARM_HUGETLB_H
#include <asm/page.h>
-#include <asm-generic/hugetlb.h>
+#ifdef CONFIG_ARM_LPAE
#include <asm/hugetlb-3level.h>
+#else
+#include <asm/hugetlb-2level.h>
+#endif
static inline void hugetlb_free_pgd_range(struct mmu_gather *tlb,
unsigned long addr, unsigned long end,
@@ -62,11 +65,6 @@ static inline int huge_pte_none(pte_t pte)
return pte_none(pte);
}
-static inline pte_t huge_pte_wrprotect(pte_t pte)
-{
- return pte_wrprotect(pte);
-}
-
static inline int arch_prepare_hugepage(struct page *page)
{
return 0;
diff --git a/arch/arm/include/asm/pgtable-2level.h b/arch/arm/include/asm/pgtable-2level.h
index dfff709..1fb2050 100644
--- a/arch/arm/include/asm/pgtable-2level.h
+++ b/arch/arm/include/asm/pgtable-2level.h
@@ -155,6 +155,19 @@
#define pud_clear(pudp) do { } while (0)
#define set_pud(pud,pudp) do { } while (0)
+static inline int pmd_thp_or_huge(pmd_t pmd)
+{
+ if ((pmd_val(pmd) & PMD_TYPE_MASK) == PMD_TYPE_FAULT)
+ return pmd_val(pmd);
+
+ return ((pmd_val(pmd) & PMD_TYPE_MASK) == PMD_TYPE_SECT);
+}
+
+static inline int pte_huge(pte_t pte)
+{
+ return pmd_thp_or_huge(__pmd(pte_val(pte)));
+}
+
static inline pmd_t *pmd_offset(pud_t *pud, unsigned long addr)
{
return (pmd_t *)pud;
@@ -183,11 +196,91 @@ static inline pmd_t *pmd_offset(pud_t *pud, unsigned long addr)
#define set_pte_ext(ptep,pte,ext) cpu_set_pte_ext(ptep,pte,ext)
/*
- * We don't have huge page support for short descriptors, for the moment
- * define empty stubs for use by pin_page_for_write.
+ * What follows are the definitions needed for huge page support. We can't put
+ * these in the hugetlb source files as they are also required for transparent
+ * hugepage support.
*/
-#define pmd_hugewillfault(pmd) (0)
-#define pmd_thp_or_huge(pmd) (0)
+
+#define HPAGE_SHIFT PMD_SHIFT
+#define HPAGE_SIZE (_AC(1, UL) << HPAGE_SHIFT)
+#define HPAGE_MASK (~(HPAGE_SIZE - 1))
+#define HUGETLB_PAGE_ORDER (HPAGE_SHIFT - PAGE_SHIFT)
+
+/*
+ * We re-purpose the following domain bits in the section descriptor
+ */
+#define PMD_DSECT_DIRTY (_AT(pmdval_t, 1) << 5)
+#define PMD_DSECT_AF (_AT(pmdval_t, 1) << 6)
+
+#define PMD_BIT_FUNC(fn,op) \
+static inline pmd_t pmd_##fn(pmd_t pmd) { pmd_val(pmd) op; return pmd; }
+
+static inline unsigned long pmd_pfn(pmd_t pmd)
+{
+ /*
+ * for a section, we need to mask off more of the pmd
+ * before looking up the pfn.
+ */
+ if (pmd_thp_or_huge(pmd))
+ return __phys_to_pfn(pmd_val(pmd) & HPAGE_MASK);
+ else
+ return __phys_to_pfn(pmd_val(pmd) & PHYS_MASK);
+}
+
+#define huge_pte_page(pte) (pfn_to_page((pte_val(pte) & HPAGE_MASK) >> PAGE_SHIFT))
+#define huge_pte_present(pte) (1)
+#define huge_pte_mkyoung(pte) (__pte(pmd_val(pmd_mkyoung(__pmd(pte_val(pte))))))
+
+extern pgprot_t get_huge_pgprot(pgprot_t newprot);
+
+#define pfn_pmd(pfn,prot) __pmd(__pfn_to_phys(pfn) | pgprot_val(prot))
+#define mk_pmd(page,prot) pfn_pmd(page_to_pfn(page), get_huge_pgprot(prot))
+
+PMD_BIT_FUNC(mkdirty, |= PMD_DSECT_DIRTY);
+PMD_BIT_FUNC(mkwrite, |= PMD_SECT_AP_WRITE);
+PMD_BIT_FUNC(wrprotect, &= ~PMD_SECT_AP_WRITE);
+PMD_BIT_FUNC(mknexec, |= PMD_SECT_XN);
+PMD_BIT_FUNC(rmprotnone, |= PMD_TYPE_SECT);
+PMD_BIT_FUNC(mkyoung, |= PMD_DSECT_AF);
+
+#define pmd_young(pmd) (pmd_val(pmd) & PMD_DSECT_AF)
+#define pmd_write(pmd) (pmd_val(pmd) & PMD_SECT_AP_WRITE)
+#define pmd_exec(pmd) (!(pmd_val(pmd) & PMD_SECT_XN))
+#define pmd_dirty(pmd) (pmd_val(pmd) & PMD_DSECT_DIRTY)
+
+#define pmd_hugewillfault(pmd) (!pmd_young(pmd) || !pmd_write(pmd))
+
+#define __HAVE_ARCH_PMD_WRITE
+
+extern void __sync_icache_dcache(unsigned long pfn, int exec);
+
+static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
+ pmd_t *pmdp, pmd_t pmd)
+{
+ VM_BUG_ON((pmd_val(pmd) & PMD_TYPE_MASK) == PMD_TYPE_TABLE);
+
+ if (!pmd_val(pmd)) {
+ pmdp[0] = pmdp[1] = pmd;
+ } else {
+ pmdp[0] = __pmd(pmd_val(pmd));
+ pmdp[1] = __pmd(pmd_val(pmd) + SECTION_SIZE);
+
+ __sync_icache_dcache(pmd_pfn(pmd), pmd_exec(pmd));
+ }
+
+ flush_pmd_entry(pmdp);
+}
+
+static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
+{
+ pgprot_t hugeprot = get_huge_pgprot(newprot);
+ const pmdval_t mask = PMD_SECT_XN | PMD_SECT_AP_WRITE |
+ PMD_TYPE_SECT;
+
+ pmd_val(pmd) = (pmd_val(pmd) & ~mask) | (pgprot_val(hugeprot) & mask);
+
+ return pmd;
+}
#endif /* __ASSEMBLY__ */
diff --git a/arch/arm/include/asm/pgtable-3level.h b/arch/arm/include/asm/pgtable-3level.h
index 03243f7..c1c8b37 100644
--- a/arch/arm/include/asm/pgtable-3level.h
+++ b/arch/arm/include/asm/pgtable-3level.h
@@ -210,7 +210,7 @@ static inline pmd_t *pmd_offset(pud_t *pud, unsigned long addr)
#define pmd_write(pmd) (!(pmd_val(pmd) & PMD_SECT_RDONLY))
#define pmd_hugewillfault(pmd) (!pmd_young(pmd) || !pmd_write(pmd))
-#define pmd_thp_or_huge(pmd) (pmd_huge(pmd) || pmd_trans_huge(pmd))
+#define pmd_thp_or_huge(pmd) (pmd_val(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT))
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
#define pmd_trans_huge(pmd) (pmd_val(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT))
diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index 9b4ad36..9cc40bc 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -220,6 +220,7 @@ static inline pte_t *pmd_page_vaddr(pmd_t pmd)
#define pte_dirty(pte) (pte_val(pte) & L_PTE_DIRTY)
#define pte_young(pte) (pte_val(pte) & L_PTE_YOUNG)
#define pte_exec(pte) (!(pte_val(pte) & L_PTE_XN))
+#define pte_protnone(pte) (pte_val(pte) & L_PTE_NONE)
#define pte_special(pte) (0)
#define pte_present_user(pte) (pte_present(pte) && (pte_val(pte) & L_PTE_USER))
diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S
index 914616e..1651d3b 100644
--- a/arch/arm/kernel/head.S
+++ b/arch/arm/kernel/head.S
@@ -434,13 +434,21 @@ __enable_mmu:
bic r0, r0, #CR_I
#endif
#ifndef CONFIG_ARM_LPAE
+#ifndef CONFIG_SYS_SUPPORTS_HUGETLBFS
mov r5, #(domain_val(DOMAIN_USER, DOMAIN_MANAGER) | \
domain_val(DOMAIN_KERNEL, DOMAIN_MANAGER) | \
domain_val(DOMAIN_TABLE, DOMAIN_MANAGER) | \
domain_val(DOMAIN_IO, DOMAIN_CLIENT))
+#else
+ @ set ourselves as the client in all domains
+ @ this allows us to then use the 4 domain bits in the
+ @ section descriptors in our transparent huge pages
+ ldr r5, =0x55555555
+#endif /* CONFIG_SYS_SUPPORTS_HUGETLBFS */
+
mcr p15, 0, r5, c3, c0, 0 @ load domain access register
mcr p15, 0, r4, c2, c0, 0 @ load page table pointer
-#endif
+#endif /* CONFIG_ARM_LPAE */
b __turn_mmu_on
ENDPROC(__enable_mmu)
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index eb8830a..faae9bd 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -491,19 +491,6 @@ do_translation_fault(unsigned long addr, unsigned int fsr,
#endif /* CONFIG_MMU */
/*
- * Some section permission faults need to be handled gracefully.
- * They can happen due to a __{get,put}_user during an oops.
- */
-#ifndef CONFIG_ARM_LPAE
-static int
-do_sect_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
-{
- do_bad_area(addr, fsr, regs);
- return 0;
-}
-#endif /* CONFIG_ARM_LPAE */
-
-/*
* This abort handler always returns "fault".
*/
static int
diff --git a/arch/arm/mm/fsr-2level.c b/arch/arm/mm/fsr-2level.c
index 18ca74c..c1a2afc 100644
--- a/arch/arm/mm/fsr-2level.c
+++ b/arch/arm/mm/fsr-2level.c
@@ -16,7 +16,7 @@ static struct fsr_info fsr_info[] = {
{ do_bad, SIGBUS, 0, "external abort on non-linefetch" },
{ do_bad, SIGSEGV, SEGV_ACCERR, "page domain fault" },
{ do_bad, SIGBUS, 0, "external abort on translation" },
- { do_sect_fault, SIGSEGV, SEGV_ACCERR, "section permission fault" },
+ { do_page_fault, SIGSEGV, SEGV_ACCERR, "section permission fault" },
{ do_bad, SIGBUS, 0, "external abort on translation" },
{ do_page_fault, SIGSEGV, SEGV_ACCERR, "page permission fault" },
/*
@@ -56,7 +56,7 @@ static struct fsr_info ifsr_info[] = {
{ do_bad, SIGBUS, 0, "unknown 10" },
{ do_bad, SIGSEGV, SEGV_ACCERR, "page domain fault" },
{ do_bad, SIGBUS, 0, "external abort on translation" },
- { do_sect_fault, SIGSEGV, SEGV_ACCERR, "section permission fault" },
+ { do_page_fault, SIGSEGV, SEGV_ACCERR, "section permission fault" },
{ do_bad, SIGBUS, 0, "external abort on translation" },
{ do_page_fault, SIGSEGV, SEGV_ACCERR, "page permission fault" },
{ do_bad, SIGBUS, 0, "unknown 16" },
diff --git a/arch/arm/mm/hugetlbpage.c b/arch/arm/mm/hugetlbpage.c
index 54ee616..619b082 100644
--- a/arch/arm/mm/hugetlbpage.c
+++ b/arch/arm/mm/hugetlbpage.c
@@ -54,7 +54,7 @@ int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep)
int pmd_huge(pmd_t pmd)
{
- return pmd_val(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT);
+ return pmd_thp_or_huge(pmd);
}
int pmd_huge_support(void)
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 4f08c13..74ebb43 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -385,6 +385,44 @@ SET_MEMORY_FN(x, pte_set_x)
SET_MEMORY_FN(nx, pte_set_nx)
/*
+ * If the system supports huge pages and we are running with short descriptors,
+ * then compute the pgprot values for a huge page. We do not need to do this
+ * with LPAE as there is no software/hardware bit distinction for ptes.
+ *
+ * We are only interested in:
+ * 1) The memory type: huge pages are user pages so a section of type
+ * MT_MEMORY_RW. This is used to create new huge ptes/thps.
+ *
+ * 2) XN, PROT_NONE, WRITE. These are set/unset through protection changes
+ * by pte_modify or pmd_modify and are used to make new ptes/thps.
+ *
+ * The other bits (dirty, young, splitting) are not modified by pte_modify or
+ * pmd_modify, nor are they used to create new ptes or pmds, so they are not
+ * considered here.
+ */
+#if defined(CONFIG_SYS_SUPPORTS_HUGETLBFS) && !defined(CONFIG_ARM_LPAE)
+static pgprot_t _hugepgprotval;
+
+pgprot_t get_huge_pgprot(pgprot_t newprot)
+{
+ pte_t inprot = __pte(pgprot_val(newprot));
+ pmd_t pmdret = __pmd(pgprot_val(_hugepgprotval));
+
+ if (!pte_exec(inprot))
+ pmdret = pmd_mknexec(pmdret);
+
+ if (pte_write(inprot))
+ pmdret = pmd_mkwrite(pmdret);
+
+ if (!pte_protnone(inprot))
+ pmdret = pmd_rmprotnone(pmdret);
+
+ return __pgprot(pmd_val(pmdret));
+}
+#endif
+
+
+/*
* Adjust the PMD section entries according to the CPU in use.
*/
static void __init build_mem_type_table(void)
@@ -622,6 +660,19 @@ static void __init build_mem_type_table(void)
if (t->prot_sect)
t->prot_sect |= PMD_DOMAIN(t->domain);
}
+
+#if defined(CONFIG_SYS_SUPPORTS_HUGETLBFS) && !defined(CONFIG_ARM_LPAE)
+ /*
+ * we assume all huge pages are user pages and that hardware access
+ * flag updates are disabled (which is the case for short descriptors).
+ */
+ pgprot_val(_hugepgprotval) = mem_types[MT_MEMORY_RW].prot_sect
+ | PMD_SECT_AP_READ | PMD_SECT_nG;
+
+ pgprot_val(_hugepgprotval) &= ~(PMD_SECT_AP_WRITE | PMD_SECT_XN
+ | PMD_TYPE_SECT);
+#endif
+
}
#ifdef CONFIG_ARM_DMA_MEM_BUFFERABLE
--
1.8.1.4
* [PATCH 5/5] arm: mm: Add Transparent HugePage support for non-LPAE
2014-02-18 15:27 [PATCH 0/5] Huge pages for short descriptors on ARM Steve Capper
` (3 preceding siblings ...)
2014-02-18 15:27 ` [PATCH 4/5] arm: mm: HugeTLB support for non-LPAE systems Steve Capper
@ 2014-02-18 15:27 ` Steve Capper
4 siblings, 0 replies; 10+ messages in thread
From: Steve Capper @ 2014-02-18 15:27 UTC (permalink / raw)
To: linux-arm-kernel, linux, linux-mm
Cc: will.deacon, catalin.marinas, arnd, dsaxena, robherring2,
Steve Capper
Much of the required code for THP has been implemented in the
earlier non-LPAE HugeTLB patch.
One more domain bit is used (to store whether or not the THP is
splitting).
Some THP helper functions are defined, and pmd_page is re-defined
so that it distinguishes between page tables and sections.
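A minimal sketch of how a core-mm style walker would consult the new helpers
(illustrative only; the real callers live in the generic THP code):

	/* Sketch only: built from the helpers added by this patch. */
	static struct page *sketch_follow_huge_pmd(pmd_t *pmdp)
	{
		pmd_t pmd = *pmdp;

		if (!pmd_trans_huge(pmd) || pmd_trans_splitting(pmd))
			return NULL;	/* not huge, or mid-split: fall back to ptes */

		return pmd_page(pmd);	/* masks with HPAGE_MASK for a section entry */
	}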
Signed-off-by: Steve Capper <steve.capper@linaro.org>
---
arch/arm/Kconfig | 2 +-
arch/arm/include/asm/pgtable-2level.h | 32 ++++++++++++++++++++++++++++++++
arch/arm/include/asm/pgtable-3level.h | 1 +
arch/arm/include/asm/pgtable.h | 2 --
arch/arm/include/asm/tlb.h | 3 +++
5 files changed, 37 insertions(+), 3 deletions(-)
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 58b17b1..48dc4b5 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1820,7 +1820,7 @@ config SYS_SUPPORTS_HUGETLBFS
config HAVE_ARCH_TRANSPARENT_HUGEPAGE
def_bool y
- depends on ARM_LPAE
+ depends on SYS_SUPPORTS_HUGETLBFS
config ARCH_WANT_GENERAL_HUGETLB
def_bool y
diff --git a/arch/arm/include/asm/pgtable-2level.h b/arch/arm/include/asm/pgtable-2level.h
index 1fb2050..0882d77 100644
--- a/arch/arm/include/asm/pgtable-2level.h
+++ b/arch/arm/include/asm/pgtable-2level.h
@@ -211,6 +211,7 @@ static inline pmd_t *pmd_offset(pud_t *pud, unsigned long addr)
*/
#define PMD_DSECT_DIRTY (_AT(pmdval_t, 1) << 5)
#define PMD_DSECT_AF (_AT(pmdval_t, 1) << 6)
+#define PMD_DSECT_SPLITTING (_AT(pmdval_t, 1) << 7)
#define PMD_BIT_FUNC(fn,op) \
static inline pmd_t pmd_##fn(pmd_t pmd) { pmd_val(pmd) op; return pmd; }
@@ -235,6 +236,16 @@ extern pgprot_t get_huge_pgprot(pgprot_t newprot);
#define pfn_pmd(pfn,prot) __pmd(__pfn_to_phys(pfn) | pgprot_val(prot))
#define mk_pmd(page,prot) pfn_pmd(page_to_pfn(page), get_huge_pgprot(prot))
+#define pmd_mkhuge(pmd) (pmd)
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#define pmd_trans_splitting(pmd) (pmd_val(pmd) & PMD_DSECT_SPLITTING)
+#define pmd_trans_huge(pmd) (pmd_thp_or_huge(pmd))
+#else
+static inline int pmd_trans_huge(pmd_t pmd);
+#endif
+
+#define pmd_mknotpresent(pmd) (__pmd(0))
PMD_BIT_FUNC(mkdirty, |= PMD_DSECT_DIRTY);
PMD_BIT_FUNC(mkwrite, |= PMD_SECT_AP_WRITE);
@@ -242,6 +253,8 @@ PMD_BIT_FUNC(wrprotect, &= ~PMD_SECT_AP_WRITE);
PMD_BIT_FUNC(mknexec, |= PMD_SECT_XN);
PMD_BIT_FUNC(rmprotnone, |= PMD_TYPE_SECT);
PMD_BIT_FUNC(mkyoung, |= PMD_DSECT_AF);
+PMD_BIT_FUNC(mkold, &= ~PMD_DSECT_AF);
+PMD_BIT_FUNC(mksplitting, |= PMD_DSECT_SPLITTING);
#define pmd_young(pmd) (pmd_val(pmd) & PMD_DSECT_AF)
#define pmd_write(pmd) (pmd_val(pmd) & PMD_SECT_AP_WRITE)
@@ -282,6 +295,25 @@ static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
return pmd;
}
+static inline int has_transparent_hugepage(void)
+{
+ return 1;
+}
+
+static inline struct page *pmd_page(pmd_t pmd)
+{
+ /*
+ * for a section, we need to mask off more of the pmd
+ * before looking up the page as it is a section descriptor.
+ *
+ * pmd_page only gets sections from the thp code.
+ */
+ if (pmd_trans_huge(pmd))
+ return (phys_to_page(pmd_val(pmd) & HPAGE_MASK));
+
+ return phys_to_page(pmd_val(pmd) & PHYS_MASK);
+}
+
#endif /* __ASSEMBLY__ */
#endif /* _ASM_PGTABLE_2LEVEL_H */
diff --git a/arch/arm/include/asm/pgtable-3level.h b/arch/arm/include/asm/pgtable-3level.h
index c1c8b37..aa2683f 100644
--- a/arch/arm/include/asm/pgtable-3level.h
+++ b/arch/arm/include/asm/pgtable-3level.h
@@ -211,6 +211,7 @@ static inline pmd_t *pmd_offset(pud_t *pud, unsigned long addr)
#define pmd_hugewillfault(pmd) (!pmd_young(pmd) || !pmd_write(pmd))
#define pmd_thp_or_huge(pmd) (pmd_val(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT))
+#define pmd_page(pmd) pfn_to_page(__phys_to_pfn(pmd_val(pmd) & PHYS_MASK))
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
#define pmd_trans_huge(pmd) (pmd_val(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT))
diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index 9cc40bc..3675c63 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -189,8 +189,6 @@ static inline pte_t *pmd_page_vaddr(pmd_t pmd)
return __va(pmd_val(pmd) & PHYS_MASK & (s32)PAGE_MASK);
}
-#define pmd_page(pmd) pfn_to_page(__phys_to_pfn(pmd_val(pmd) & PHYS_MASK))
-
#ifndef CONFIG_HIGHPTE
#define __pte_map(pmd) pmd_page_vaddr(*(pmd))
#define __pte_unmap(pte) do { } while (0)
diff --git a/arch/arm/include/asm/tlb.h b/arch/arm/include/asm/tlb.h
index b2498e6..77037d9 100644
--- a/arch/arm/include/asm/tlb.h
+++ b/arch/arm/include/asm/tlb.h
@@ -218,6 +218,9 @@ static inline void
tlb_remove_pmd_tlb_entry(struct mmu_gather *tlb, pmd_t *pmdp, unsigned long addr)
{
tlb_add_flush(tlb, addr);
+#ifndef CONFIG_ARM_LPAE
+ tlb_add_flush(tlb, addr + SZ_1M);
+#endif
}
#define pte_free_tlb(tlb, ptep, addr) __pte_free_tlb(tlb, ptep, addr)
--
1.8.1.4
* Re: [PATCH 1/5] mm: hugetlb: Introduce huge_pte_{page,present,young}
2014-02-18 15:27 ` [PATCH 1/5] mm: hugetlb: Introduce huge_pte_{page,present,young} Steve Capper
@ 2014-03-03 8:01 ` Steve Capper
2014-03-03 18:07 ` Naoya Horiguchi
` (2 subsequent siblings)
3 siblings, 0 replies; 10+ messages in thread
From: Steve Capper @ 2014-03-03 8:01 UTC (permalink / raw)
To: linux-arm-kernel, linux, linux-mm
Cc: will.deacon, catalin.marinas, arnd, dsaxena, robherring2
On Tue, Feb 18, 2014 at 03:27:11PM +0000, Steve Capper wrote:
> Introduce huge pte versions of pte_page, pte_present and pte_young.
> This allows ARM (without LPAE) to use alternative pte processing logic
> for huge ptes.
>
> Where these functions are not defined by architecture code, they
> fall back to the standard functions.
>
> Signed-off-by: Steve Capper <steve.capper@linaro.org>
Hi,
I was wondering if this patch looks reasonable to people?
Thanks,
--
Steve
* Re: [PATCH 1/5] mm: hugetlb: Introduce huge_pte_{page,present,young}
2014-02-18 15:27 ` [PATCH 1/5] mm: hugetlb: Introduce huge_pte_{page,present,young} Steve Capper
2014-03-03 8:01 ` Steve Capper
@ 2014-03-03 18:07 ` Naoya Horiguchi
2014-03-03 18:07 ` Naoya Horiguchi
[not found] ` <5314c4e5.d0128c0a.2ad9.ffffeb8dSMTPIN_ADDED_BROKEN@mx.google.com>
3 siblings, 0 replies; 10+ messages in thread
From: Naoya Horiguchi @ 2014-03-03 18:07 UTC (permalink / raw)
To: steve.capper
Cc: linux-arm-kernel, linux, linux-mm, will.deacon, catalin.marinas,
arnd, dsaxena, robherring2
Hi Steve,
On Tue, Feb 18, 2014 at 03:27:11PM +0000, Steve Capper wrote:
> Introduce huge pte versions of pte_page, pte_present and pte_young.
> This allows ARM (without LPAE) to use alternative pte processing logic
> for huge ptes.
>
> Where these functions are not defined by architecture code, they
> fall back to the standard functions.
>
> Signed-off-by: Steve Capper <steve.capper@linaro.org>
> ---
> include/linux/hugetlb.h | 12 ++++++++++++
> mm/hugetlb.c | 22 +++++++++++-----------
> 2 files changed, 23 insertions(+), 11 deletions(-)
How about replacing other archs' arch-dependent code with new functions?
[~/dev]$ find arch/ -name "hugetlbpage.c" | xargs grep pte_page
arch/s390/mm/hugetlbpage.c: pmd_val(pmd) |= pte_page(pte)[1].index;
arch/powerpc/mm/hugetlbpage.c: page = pte_page(*ptep);
arch/powerpc/mm/hugetlbpage.c: head = pte_page(pte);
arch/x86/mm/hugetlbpage.c: page = &pte_page(*pte)[vpfn % (HPAGE_SIZE/PAGE_SIZE)];
arch/ia64/mm/hugetlbpage.c: page = pte_page(*ptep);
arch/mips/mm/hugetlbpage.c: page = pte_page(*(pte_t *)pmd);
arch/tile/mm/hugetlbpage.c: page = pte_page(*(pte_t *)pmd);
arch/tile/mm/hugetlbpage.c: page = pte_page(*(pte_t *)pud);
[~/dev]$ find arch/ -name "hugetlbpage.c" | xargs grep pte_present
arch/s390/mm/hugetlbpage.c: if (pte_present(pte)) {
arch/sparc/mm/hugetlbpage.c: if (!pte_present(*ptep) && pte_present(entry))
arch/sparc/mm/hugetlbpage.c: if (pte_present(entry))
arch/tile/mm/hugetlbpage.c: if (!pte_present(*ptep) && huge_shift[level] != 0) {
arch/tile/mm/hugetlbpage.c: if (pte_present(pte) && pte_super(pte))
arch/tile/mm/hugetlbpage.c: if (!pte_present(*pte))
Thanks,
Naoya Horiguchi
* Re: [PATCH 1/5] mm: hugetlb: Introduce huge_pte_{page,present,young}
[not found] ` <5314c4e5.d0128c0a.2ad9.ffffeb8dSMTPIN_ADDED_BROKEN@mx.google.com>
@ 2014-03-04 8:26 ` Steve Capper
0 siblings, 0 replies; 10+ messages in thread
From: Steve Capper @ 2014-03-04 8:26 UTC (permalink / raw)
To: Naoya Horiguchi
Cc: linux-arm-kernel, linux, linux-mm, will.deacon, catalin.marinas,
arnd, dsaxena, robherring2
On Mon, Mar 03, 2014 at 01:07:07PM -0500, Naoya Horiguchi wrote:
> Hi Steve,
>
Hi Naoya,
> On Tue, Feb 18, 2014 at 03:27:11PM +0000, Steve Capper wrote:
> > Introduce huge pte versions of pte_page, pte_present and pte_young.
> > This allows ARM (without LPAE) to use alternative pte processing logic
> > for huge ptes.
> >
> > Where these functions are not defined by architecture code, they
> > fall back to the standard functions.
> >
> > Signed-off-by: Steve Capper <steve.capper@linaro.org>
> > ---
> > include/linux/hugetlb.h | 12 ++++++++++++
> > mm/hugetlb.c | 22 +++++++++++-----------
> > 2 files changed, 23 insertions(+), 11 deletions(-)
Thanks for taking a look at this.
>
> How about replacing other archs' arch-dependent code with new functions?
>
In the cases below, the huge_pte_ functions will always resolve to the
standard pte_ functions (unless the arch code changes), so I decided to
change only the core code, as that's where the meanings of the huge_pte_
functions can vary.
> [~/dev]$ find arch/ -name "hugetlbpage.c" | xargs grep pte_page
> arch/s390/mm/hugetlbpage.c: pmd_val(pmd) |= pte_page(pte)[1].index;
> arch/powerpc/mm/hugetlbpage.c: page = pte_page(*ptep);
> arch/powerpc/mm/hugetlbpage.c: head = pte_page(pte);
> arch/x86/mm/hugetlbpage.c: page = &pte_page(*pte)[vpfn % (HPAGE_SIZE/PAGE_SIZE)];
> arch/ia64/mm/hugetlbpage.c: page = pte_page(*ptep);
> arch/mips/mm/hugetlbpage.c: page = pte_page(*(pte_t *)pmd);
> arch/tile/mm/hugetlbpage.c: page = pte_page(*(pte_t *)pmd);
> arch/tile/mm/hugetlbpage.c: page = pte_page(*(pte_t *)pud);
> [~/dev]$ find arch/ -name "hugetlbpage.c" | xargs grep pte_present
> arch/s390/mm/hugetlbpage.c: if (pte_present(pte)) {
> arch/sparc/mm/hugetlbpage.c: if (!pte_present(*ptep) && pte_present(entry))
> arch/sparc/mm/hugetlbpage.c: if (pte_present(entry))
> arch/tile/mm/hugetlbpage.c: if (!pte_present(*ptep) && huge_shift[level] != 0) {
> arch/tile/mm/hugetlbpage.c: if (pte_present(pte) && pte_super(pte))
> arch/tile/mm/hugetlbpage.c: if (!pte_present(*pte))
>
Cheers,
--
Steve
> Thanks,
> Naoya Horiguchi