linux-mm.kvack.org archive mirror
* [PATCH 00/11] THP support for ARC
@ 2015-08-27  9:03 Vineet Gupta
  2015-08-27  9:03 ` [PATCH 01/11] ARC: mm: pte flags cosmetic cleanups, comments Vineet Gupta
                   ` (12 more replies)
  0 siblings, 13 replies; 21+ messages in thread
From: Vineet Gupta @ 2015-08-27  9:03 UTC (permalink / raw)
  To: Andrew Morton, Aneesh Kumar K.V, Kirill A. Shutemov, Mel Gorman,
	Matthew Wilcox, Minchan Kim
  Cc: linux-arch, linux-kernel, linux-mm, arc-linux-dev, Vineet Gupta

Hi,

This series brings THP support to ARC. It also introduces an optional new
THP hook which arches can implement to optimize the TLB flush in the THP
regime.

Rebased against today's linux-next, so it includes the new hook for Minchan's
madvise(MADV_FREE).
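
The new hook is flush_pmd_tlb_range() (patches 9 and 10). Roughly, the
generic fallback added in patch 9 is just:

  #ifndef __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
  #define flush_pmd_tlb_range(vma, addr, end)	flush_tlb_range(vma, addr, end)
  #endif

and an arch defining __HAVE_ARCH_FLUSH_PMD_TLB_RANGE (ARC does, in patch 10)
supplies its own routine instead.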

Please review !

Thx,
-Vineet

Vineet Gupta (11):
  ARC: mm: pte flags cosmetic cleanups, comments
  ARC: mm: Introduce PTE_SPECIAL
  Documentation/features/vm: pte_special now supported by ARC
  ARCv2: mm: THP support
  ARCv2: mm: THP: boot validation/reporting
  Documentation/features/vm: THP now supported by ARC
  mm: move some code around
  mm,thp: reduce ifdef'ery for THP in generic code
  mm,thp: introduce flush_pmd_tlb_range
  ARCv2: mm: THP: Implement flush_pmd_tlb_range() optimization
  ARCv2: Add a DT which enables THP

 Documentation/features/vm/THP/arch-support.txt     |  2 +-
 .../features/vm/pte_special/arch-support.txt       |  2 +-
 arch/arc/Kconfig                                   |  4 +
 arch/arc/boot/dts/hs_thp.dts                       | 59 +++++++++++++
 arch/arc/include/asm/hugepage.h                    | 82 ++++++++++++++++++
 arch/arc/include/asm/page.h                        |  1 +
 arch/arc/include/asm/pgtable.h                     | 60 +++++++------
 arch/arc/mm/tlb.c                                  | 79 ++++++++++++++++-
 arch/arc/mm/tlbex.S                                | 21 +++--
 include/asm-generic/pgtable.h                      | 20 +++++
 mm/huge_memory.c                                   |  2 +-
 mm/pgtable-generic.c                               | 99 ++++++++++------------
 12 files changed, 345 insertions(+), 86 deletions(-)
 create mode 100644 arch/arc/boot/dts/hs_thp.dts
 create mode 100644 arch/arc/include/asm/hugepage.h

-- 
1.9.1

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH 01/11] ARC: mm: pte flags cosmetic cleanups, comments
  2015-08-27  9:03 [PATCH 00/11] THP support for ARC Vineet Gupta
@ 2015-08-27  9:03 ` Vineet Gupta
  2015-08-27  9:03 ` [PATCH 02/11] ARC: mm: Introduce PTE_SPECIAL Vineet Gupta
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Vineet Gupta @ 2015-08-27  9:03 UTC (permalink / raw)
  To: Andrew Morton, Aneesh Kumar K.V, Kirill A. Shutemov, Mel Gorman,
	Matthew Wilcox, Minchan Kim
  Cc: linux-arch, linux-kernel, linux-mm, arc-linux-dev, Vineet Gupta

No semantic changes

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
---
 arch/arc/include/asm/pgtable.h | 37 ++++++++++++++++---------------------
 arch/arc/mm/tlbex.S            |  2 +-
 2 files changed, 17 insertions(+), 22 deletions(-)

diff --git a/arch/arc/include/asm/pgtable.h b/arch/arc/include/asm/pgtable.h
index 1281718802f7..481359fe56ae 100644
--- a/arch/arc/include/asm/pgtable.h
+++ b/arch/arc/include/asm/pgtable.h
@@ -60,7 +60,7 @@
 #define _PAGE_EXECUTE       (1<<3)	/* Page has user execute perm (H) */
 #define _PAGE_WRITE         (1<<4)	/* Page has user write perm (H) */
 #define _PAGE_READ          (1<<5)	/* Page has user read perm (H) */
-#define _PAGE_MODIFIED      (1<<6)	/* Page modified (dirty) (S) */
+#define _PAGE_DIRTY         (1<<6)	/* Page modified (dirty) (S) */
 #define _PAGE_GLOBAL        (1<<8)	/* Page is global (H) */
 #define _PAGE_PRESENT       (1<<10)	/* TLB entry is valid (H) */
 
@@ -71,7 +71,7 @@
 #define _PAGE_WRITE         (1<<2)	/* Page has user write perm (H) */
 #define _PAGE_READ          (1<<3)	/* Page has user read perm (H) */
 #define _PAGE_ACCESSED      (1<<4)	/* Page is accessed (S) */
-#define _PAGE_MODIFIED      (1<<5)	/* Page modified (dirty) (S) */
+#define _PAGE_DIRTY         (1<<5)	/* Page modified (dirty) (S) */
 
 #if (CONFIG_ARC_MMU_VER >= 4)
 #define _PAGE_WTHRU         (1<<7)	/* Page cache mode write-thru (H) */
@@ -92,21 +92,16 @@
 #define _K_PAGE_PERMS  (_PAGE_EXECUTE | _PAGE_WRITE | _PAGE_READ | \
 			_PAGE_GLOBAL | _PAGE_PRESENT)
 
-#ifdef CONFIG_ARC_CACHE_PAGES
-#define _PAGE_DEF_CACHEABLE _PAGE_CACHEABLE
-#else
-#define _PAGE_DEF_CACHEABLE (0)
+#ifndef CONFIG_ARC_CACHE_PAGES
+#undef _PAGE_CACHEABLE
+#define _PAGE_CACHEABLE 0
 #endif
 
-/* Helper for every "user" page
- * -kernel can R/W/X
- * -by default cached, unless config otherwise
- * -present in memory
- */
-#define ___DEF (_PAGE_PRESENT | _PAGE_DEF_CACHEABLE)
+/* Defaults for every user page */
+#define ___DEF (_PAGE_PRESENT | _PAGE_CACHEABLE)
 
 /* Set of bits not changed in pte_modify */
-#define _PAGE_CHG_MASK	(PAGE_MASK | _PAGE_ACCESSED | _PAGE_MODIFIED)
+#define _PAGE_CHG_MASK	(PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY)
 
 /* More Abbrevaited helpers */
 #define PAGE_U_NONE     __pgprot(___DEF)
@@ -122,7 +117,7 @@
  * user vaddr space - visible in all addr spaces, but kernel mode only
  * Thus Global, all-kernel-access, no-user-access, cached
  */
-#define PAGE_KERNEL          __pgprot(_K_PAGE_PERMS | _PAGE_DEF_CACHEABLE)
+#define PAGE_KERNEL          __pgprot(_K_PAGE_PERMS | _PAGE_CACHEABLE)
 
 /* ioremap */
 #define PAGE_KERNEL_NO_CACHE __pgprot(_K_PAGE_PERMS)
@@ -191,16 +186,16 @@
 
 /* Optimal Sizing of Pg Tbl - based on MMU page size */
 #if defined(CONFIG_ARC_PAGE_SIZE_8K)
-#define BITS_FOR_PTE	8
+#define BITS_FOR_PTE	8		/* 11:8:13 */
 #elif defined(CONFIG_ARC_PAGE_SIZE_16K)
-#define BITS_FOR_PTE	8
+#define BITS_FOR_PTE	8		/* 10:8:14 */
 #elif defined(CONFIG_ARC_PAGE_SIZE_4K)
-#define BITS_FOR_PTE	9
+#define BITS_FOR_PTE	9		/* 11:9:12 */
 #endif
 
 #define BITS_FOR_PGD	(32 - BITS_FOR_PTE - BITS_IN_PAGE)
 
-#define PGDIR_SHIFT	(BITS_FOR_PTE + BITS_IN_PAGE)
+#define PGDIR_SHIFT	(32 - BITS_FOR_PGD)
 #define PGDIR_SIZE	(1UL << PGDIR_SHIFT)	/* vaddr span, not PDG sz */
 #define PGDIR_MASK	(~(PGDIR_SIZE-1))
 
@@ -295,7 +290,7 @@ static inline void pmd_set(pmd_t *pmdp, pte_t *ptep)
 /* Zoo of pte_xxx function */
 #define pte_read(pte)		(pte_val(pte) & _PAGE_READ)
 #define pte_write(pte)		(pte_val(pte) & _PAGE_WRITE)
-#define pte_dirty(pte)		(pte_val(pte) & _PAGE_MODIFIED)
+#define pte_dirty(pte)		(pte_val(pte) & _PAGE_DIRTY)
 #define pte_young(pte)		(pte_val(pte) & _PAGE_ACCESSED)
 #define pte_special(pte)	(0)
 
@@ -304,8 +299,8 @@ static inline void pmd_set(pmd_t *pmdp, pte_t *ptep)
 
 PTE_BIT_FUNC(wrprotect,	&= ~(_PAGE_WRITE));
 PTE_BIT_FUNC(mkwrite,	|= (_PAGE_WRITE));
-PTE_BIT_FUNC(mkclean,	&= ~(_PAGE_MODIFIED));
-PTE_BIT_FUNC(mkdirty,	|= (_PAGE_MODIFIED));
+PTE_BIT_FUNC(mkclean,	&= ~(_PAGE_DIRTY));
+PTE_BIT_FUNC(mkdirty,	|= (_PAGE_DIRTY));
 PTE_BIT_FUNC(mkold,	&= ~(_PAGE_ACCESSED));
 PTE_BIT_FUNC(mkyoung,	|= (_PAGE_ACCESSED));
 PTE_BIT_FUNC(exprotect,	&= ~(_PAGE_EXECUTE));
diff --git a/arch/arc/mm/tlbex.S b/arch/arc/mm/tlbex.S
index f6f4c3cb505d..b8b014c6904d 100644
--- a/arch/arc/mm/tlbex.S
+++ b/arch/arc/mm/tlbex.S
@@ -365,7 +365,7 @@ ENTRY(EV_TLBMissD)
 	lr      r3, [ecr]
 	or      r0, r0, _PAGE_ACCESSED        ; Accessed bit always
 	btst_s  r3,  ECR_C_BIT_DTLB_ST_MISS   ; See if it was a Write Access ?
-	or.nz   r0, r0, _PAGE_MODIFIED        ; if Write, set Dirty bit as well
+	or.nz   r0, r0, _PAGE_DIRTY           ; if Write, set Dirty bit as well
 	st_s    r0, [r1]                      ; Write back PTE
 
 	CONV_PTE_TO_TLB
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 02/11] ARC: mm: Introduce PTE_SPECIAL
  2015-08-27  9:03 [PATCH 00/11] THP support for ARC Vineet Gupta
  2015-08-27  9:03 ` [PATCH 01/11] ARC: mm: pte flags cosmetic cleanups, comments Vineet Gupta
@ 2015-08-27  9:03 ` Vineet Gupta
  2015-08-27  9:03 ` [PATCH 03/11] Documentation/features/vm: pte_special now supported by ARC Vineet Gupta
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Vineet Gupta @ 2015-08-27  9:03 UTC (permalink / raw)
  To: Andrew Morton, Aneesh Kumar K.V, Kirill A. Shutemov, Mel Gorman,
	Matthew Wilcox, Minchan Kim
  Cc: linux-arch, linux-kernel, linux-mm, arc-linux-dev, Vineet Gupta

Needed for THP, but will also come in handy for fast GUP later

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
---
 arch/arc/include/asm/pgtable.h | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/arc/include/asm/pgtable.h b/arch/arc/include/asm/pgtable.h
index 481359fe56ae..431a83329324 100644
--- a/arch/arc/include/asm/pgtable.h
+++ b/arch/arc/include/asm/pgtable.h
@@ -61,6 +61,7 @@
 #define _PAGE_WRITE         (1<<4)	/* Page has user write perm (H) */
 #define _PAGE_READ          (1<<5)	/* Page has user read perm (H) */
 #define _PAGE_DIRTY         (1<<6)	/* Page modified (dirty) (S) */
+#define _PAGE_SPECIAL       (1<<7)
 #define _PAGE_GLOBAL        (1<<8)	/* Page is global (H) */
 #define _PAGE_PRESENT       (1<<10)	/* TLB entry is valid (H) */
 
@@ -72,6 +73,7 @@
 #define _PAGE_READ          (1<<3)	/* Page has user read perm (H) */
 #define _PAGE_ACCESSED      (1<<4)	/* Page is accessed (S) */
 #define _PAGE_DIRTY         (1<<5)	/* Page modified (dirty) (S) */
+#define _PAGE_SPECIAL       (1<<6)
 
 #if (CONFIG_ARC_MMU_VER >= 4)
 #define _PAGE_WTHRU         (1<<7)	/* Page cache mode write-thru (H) */
@@ -292,7 +294,7 @@ static inline void pmd_set(pmd_t *pmdp, pte_t *ptep)
 #define pte_write(pte)		(pte_val(pte) & _PAGE_WRITE)
 #define pte_dirty(pte)		(pte_val(pte) & _PAGE_DIRTY)
 #define pte_young(pte)		(pte_val(pte) & _PAGE_ACCESSED)
-#define pte_special(pte)	(0)
+#define pte_special(pte)	(pte_val(pte) & _PAGE_SPECIAL)
 
 #define PTE_BIT_FUNC(fn, op) \
 	static inline pte_t pte_##fn(pte_t pte) { pte_val(pte) op; return pte; }
@@ -305,8 +307,9 @@ PTE_BIT_FUNC(mkold,	&= ~(_PAGE_ACCESSED));
 PTE_BIT_FUNC(mkyoung,	|= (_PAGE_ACCESSED));
 PTE_BIT_FUNC(exprotect,	&= ~(_PAGE_EXECUTE));
 PTE_BIT_FUNC(mkexec,	|= (_PAGE_EXECUTE));
+PTE_BIT_FUNC(mkspecial,	|= (_PAGE_SPECIAL));
 
-static inline pte_t pte_mkspecial(pte_t pte) { return pte; }
+#define __HAVE_ARCH_PTE_SPECIAL
 
 static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 {
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 03/11] Documentation/features/vm: pte_special now supported by ARC
  2015-08-27  9:03 [PATCH 00/11] THP support for ARC Vineet Gupta
  2015-08-27  9:03 ` [PATCH 01/11] ARC: mm: pte flags cosmetic cleanups, comments Vineet Gupta
  2015-08-27  9:03 ` [PATCH 02/11] ARC: mm: Introduce PTE_SPECIAL Vineet Gupta
@ 2015-08-27  9:03 ` Vineet Gupta
  2015-08-27  9:03 ` [PATCH 04/11] ARCv2: mm: THP support Vineet Gupta
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Vineet Gupta @ 2015-08-27  9:03 UTC (permalink / raw)
  To: Andrew Morton, Aneesh Kumar K.V, Kirill A. Shutemov, Mel Gorman,
	Matthew Wilcox, Minchan Kim
  Cc: linux-arch, linux-kernel, linux-mm, arc-linux-dev, Vineet Gupta

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
---
 Documentation/features/vm/pte_special/arch-support.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Documentation/features/vm/pte_special/arch-support.txt b/Documentation/features/vm/pte_special/arch-support.txt
index aaaa21db6226..3de5434c857c 100644
--- a/Documentation/features/vm/pte_special/arch-support.txt
+++ b/Documentation/features/vm/pte_special/arch-support.txt
@@ -7,7 +7,7 @@
     |         arch |status|
     -----------------------
     |       alpha: | TODO |
-    |         arc: | TODO |
+    |         arc: |  ok  |
     |         arm: |  ok  |
     |       arm64: |  ok  |
     |       avr32: | TODO |
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 04/11] ARCv2: mm: THP support
  2015-08-27  9:03 [PATCH 00/11] THP support for ARC Vineet Gupta
                   ` (2 preceding siblings ...)
  2015-08-27  9:03 ` [PATCH 03/11] Documentation/features/vm: pte_special now supported by ARC Vineet Gupta
@ 2015-08-27  9:03 ` Vineet Gupta
  2015-08-27 15:32   ` Kirill A. Shutemov
  2015-08-27  9:03 ` [PATCH 05/11] ARCv2: mm: THP: boot validation/reporting Vineet Gupta
                   ` (8 subsequent siblings)
  12 siblings, 1 reply; 21+ messages in thread
From: Vineet Gupta @ 2015-08-27  9:03 UTC (permalink / raw)
  To: Andrew Morton, Aneesh Kumar K.V, Kirill A. Shutemov, Mel Gorman,
	Matthew Wilcox, Minchan Kim
  Cc: linux-arch, linux-kernel, linux-mm, arc-linux-dev, Vineet Gupta

ARC Linux implements a 2-level page walk: PGD:PTE.
In the THP regime, the PTE is folded into the PGD (and canonically referred
to as the PMD); thus the THP PMD accessors are implemented in terms of PTE
(just like sparc).
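
As a minimal usage sketch (hypothetical caller; page, vma, haddr and pmdp are
assumed to be in scope, and only accessors added by this patch are used), a
huge mapping is just a PTE-shaped value with the hardware super-page bit set:

	pmd_t pmd = pmd_mkhuge(mk_pmd(page, vma->vm_page_prot));

	set_pmd_at(vma->vm_mm, haddr, pmdp, pmd);	/* plain store: the PMD is the folded PTE */

	/* pmd_trans_huge() then reduces to testing _PAGE_HW_SZ */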

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
---
 arch/arc/Kconfig                |  4 +++
 arch/arc/include/asm/hugepage.h | 78 +++++++++++++++++++++++++++++++++++++++++
 arch/arc/include/asm/page.h     |  1 +
 arch/arc/include/asm/pgtable.h  | 16 +++++++--
 arch/arc/mm/tlb.c               | 51 +++++++++++++++++++++++++++
 arch/arc/mm/tlbex.S             | 19 +++++++---
 6 files changed, 163 insertions(+), 6 deletions(-)
 create mode 100644 arch/arc/include/asm/hugepage.h

diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
index 78c0621d5819..5912006391ed 100644
--- a/arch/arc/Kconfig
+++ b/arch/arc/Kconfig
@@ -76,6 +76,10 @@ config STACKTRACE_SUPPORT
 config HAVE_LATENCYTOP_SUPPORT
 	def_bool y
 
+config HAVE_ARCH_TRANSPARENT_HUGEPAGE
+	def_bool y
+	depends on ARC_MMU_V4
+
 source "init/Kconfig"
 source "kernel/Kconfig.freezer"
 
diff --git a/arch/arc/include/asm/hugepage.h b/arch/arc/include/asm/hugepage.h
new file mode 100644
index 000000000000..d7614d2af454
--- /dev/null
+++ b/arch/arc/include/asm/hugepage.h
@@ -0,0 +1,78 @@
+/*
+ * Copyright (C) 2013-15 Synopsys, Inc. (www.synopsys.com)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+
+#ifndef _ASM_ARC_HUGEPAGE_H
+#define _ASM_ARC_HUGEPAGE_H
+
+#include <linux/types.h>
+#include <asm-generic/pgtable-nopmd.h>
+
+/*
+ * ARC Linux implements 2 level page walk: PGD:PTE
+ * In THP regime, PTE is folded into PGD (and canonically referred to as PMD)
+ * Thus thp PMD accessors are implemented in terms of PTE (just like sparc)
+ */
+static inline pte_t pmd_pte(pmd_t pmd)
+{
+	return __pte(pmd_val(pmd));
+}
+
+static inline pmd_t pte_pmd(pte_t pte)
+{
+	return __pmd(pte_val(pte));
+}
+
+#define pmd_wrprotect(pmd)	pte_pmd(pte_wrprotect(pmd_pte(pmd)))
+#define pmd_mkwrite(pmd)	pte_pmd(pte_mkwrite(pmd_pte(pmd)))
+#define pmd_mkdirty(pmd)	pte_pmd(pte_mkdirty(pmd_pte(pmd)))
+#define pmd_mkold(pmd)		pte_pmd(pte_mkold(pmd_pte(pmd)))
+#define pmd_mkyoung(pmd)	pte_pmd(pte_mkyoung(pmd_pte(pmd)))
+#define pmd_mkhuge(pmd)		pte_pmd(pte_mkhuge(pmd_pte(pmd)))
+#define pmd_mknotpresent(pmd)	pte_pmd(pte_mknotpresent(pmd_pte(pmd)))
+#define pmd_mksplitting(pmd)	pte_pmd(pte_mkspecial(pmd_pte(pmd)))
+#define pmd_mkclean(pmd)	pte_pmd(pte_mkclean(pmd_pte(pmd)))
+
+#define pmd_write(pmd)		pte_write(pmd_pte(pmd))
+#define pmd_young(pmd)		pte_young(pmd_pte(pmd))
+#define pmd_pfn(pmd)		pte_pfn(pmd_pte(pmd))
+#define pmd_dirty(pmd)		pte_dirty(pmd_pte(pmd))
+#define pmd_special(pmd)	pte_special(pmd_pte(pmd))
+
+#define mk_pmd(page, prot)	pte_pmd(mk_pte(page, prot))
+
+#define pmd_trans_huge(pmd)	(pmd_val(pmd) & _PAGE_HW_SZ)
+#define pmd_trans_splitting(pmd)	(pmd_trans_huge(pmd) && pmd_special(pmd))
+
+#define pfn_pmd(pfn, prot)	(__pmd(((pfn) << PAGE_SHIFT) | pgprot_val(prot)))
+
+static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
+{
+	return pte_pmd(pte_modify(pmd_pte(pmd), newprot));
+}
+
+static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
+			      pmd_t *pmdp, pmd_t pmd)
+{
+	*pmdp = pmd;
+}
+
+extern void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
+				 pmd_t *pmd);
+
+#define has_transparent_hugepage() 1
+
+/* Generic variants assume pgtable_t is struct page *, hence need for these */
+#define __HAVE_ARCH_PGTABLE_DEPOSIT
+extern void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
+				       pgtable_t pgtable);
+
+#define __HAVE_ARCH_PGTABLE_WITHDRAW
+extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
+
+#endif
diff --git a/arch/arc/include/asm/page.h b/arch/arc/include/asm/page.h
index 9c8aa41e45c2..e15ccc7940ea 100644
--- a/arch/arc/include/asm/page.h
+++ b/arch/arc/include/asm/page.h
@@ -66,6 +66,7 @@ typedef unsigned long pgtable_t;
 #define pgd_val(x)	(x)
 #define pgprot_val(x)	(x)
 #define __pte(x)	(x)
+#define __pgd(x)	(x)
 #define __pgprot(x)	(x)
 #define pte_pgprot(x)	(x)
 
diff --git a/arch/arc/include/asm/pgtable.h b/arch/arc/include/asm/pgtable.h
index 431a83329324..336267f2e9d9 100644
--- a/arch/arc/include/asm/pgtable.h
+++ b/arch/arc/include/asm/pgtable.h
@@ -83,11 +83,13 @@
 #define _PAGE_PRESENT       (1<<9)	/* TLB entry is valid (H) */
 
 #if (CONFIG_ARC_MMU_VER >= 4)
-#define _PAGE_SZ            (1<<10)	/* Page Size indicator (H) */
+#define _PAGE_HW_SZ         (1<<10)	/* Page Size indicator (H): 0 normal, 1 super */
 #endif
 
 #define _PAGE_SHARED_CODE   (1<<11)	/* Shared Code page with cmn vaddr
 					   usable for shared TLB entries (H) */
+
+#define _PAGE_UNUSED_BIT    (1<<12)
 #endif
 
 /* vmalloc permissions */
@@ -99,6 +101,10 @@
 #define _PAGE_CACHEABLE 0
 #endif
 
+#ifndef _PAGE_HW_SZ
+#define _PAGE_HW_SZ	0
+#endif
+
 /* Defaults for every user page */
 #define ___DEF (_PAGE_PRESENT | _PAGE_CACHEABLE)
 
@@ -125,7 +131,7 @@
 #define PAGE_KERNEL_NO_CACHE __pgprot(_K_PAGE_PERMS)
 
 /* Masks for actual TLB "PD"s */
-#define PTE_BITS_IN_PD0		(_PAGE_GLOBAL | _PAGE_PRESENT)
+#define PTE_BITS_IN_PD0		(_PAGE_GLOBAL | _PAGE_PRESENT | _PAGE_HW_SZ)
 #define PTE_BITS_RWX		(_PAGE_EXECUTE | _PAGE_WRITE | _PAGE_READ)
 #define PTE_BITS_NON_RWX_IN_PD1	(PAGE_MASK | _PAGE_CACHEABLE)
 
@@ -299,6 +305,7 @@ static inline void pmd_set(pmd_t *pmdp, pte_t *ptep)
 #define PTE_BIT_FUNC(fn, op) \
 	static inline pte_t pte_##fn(pte_t pte) { pte_val(pte) op; return pte; }
 
+PTE_BIT_FUNC(mknotpresent,	&= ~(_PAGE_PRESENT));
 PTE_BIT_FUNC(wrprotect,	&= ~(_PAGE_WRITE));
 PTE_BIT_FUNC(mkwrite,	|= (_PAGE_WRITE));
 PTE_BIT_FUNC(mkclean,	&= ~(_PAGE_DIRTY));
@@ -308,6 +315,7 @@ PTE_BIT_FUNC(mkyoung,	|= (_PAGE_ACCESSED));
 PTE_BIT_FUNC(exprotect,	&= ~(_PAGE_EXECUTE));
 PTE_BIT_FUNC(mkexec,	|= (_PAGE_EXECUTE));
 PTE_BIT_FUNC(mkspecial,	|= (_PAGE_SPECIAL));
+PTE_BIT_FUNC(mkhuge,	|= (_PAGE_HW_SZ));
 
 #define __HAVE_ARCH_PTE_SPECIAL
 
@@ -381,6 +389,10 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
  * remap a physical page `pfn' of size `size' with page protection `prot'
  * into virtual address `from'
  */
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#include <asm/hugepage.h>
+#endif
+
 #include <asm-generic/pgtable.h>
 
 /* to cope with aliasing VIPT cache */
diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c
index 2c7ce8bb7475..337eebf0d6cf 100644
--- a/arch/arc/mm/tlb.c
+++ b/arch/arc/mm/tlb.c
@@ -580,6 +580,57 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long vaddr_unaligned,
 	}
 }
 
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+
+void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
+				 pmd_t *pmd)
+{
+	pte_t pte = __pte(pmd_val(*pmd));
+	update_mmu_cache(vma, addr, &pte);
+}
+
+void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
+				pgtable_t pgtable)
+{
+	struct list_head *lh = (struct list_head *) pgtable;
+
+	assert_spin_locked(&mm->page_table_lock);
+
+	/* FIFO */
+	if (!pmd_huge_pte(mm, pmdp))
+		INIT_LIST_HEAD(lh);
+	else
+		list_add(lh, (struct list_head *) pmd_huge_pte(mm, pmdp));
+	pmd_huge_pte(mm, pmdp) = pgtable;
+}
+
+pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
+{
+	struct list_head *lh;
+	pgtable_t pgtable;
+	pte_t *ptep;
+
+	assert_spin_locked(&mm->page_table_lock);
+
+	pgtable = pmd_huge_pte(mm, pmdp);
+	lh = (struct list_head *) pgtable;
+	if (list_empty(lh))
+		pmd_huge_pte(mm, pmdp) = (pgtable_t) NULL;
+	else {
+		pmd_huge_pte(mm, pmdp) = (pgtable_t) lh->next;
+		list_del(lh);
+	}
+
+	ptep = (pte_t *) pgtable;
+	pte_val(*ptep) = 0;
+	ptep++;
+	pte_val(*ptep) = 0;
+
+	return pgtable;
+}
+
+#endif
+
 /* Read the Cache Build Confuration Registers, Decode them and save into
  * the cpuinfo structure for later use.
  * No Validation is done here, simply read/convert the BCRs
diff --git a/arch/arc/mm/tlbex.S b/arch/arc/mm/tlbex.S
index b8b014c6904d..552594897655 100644
--- a/arch/arc/mm/tlbex.S
+++ b/arch/arc/mm/tlbex.S
@@ -205,10 +205,18 @@ ex_saved_reg1:
 #endif
 
 	lsr     r0, r2, PGDIR_SHIFT     ; Bits for indexing into PGD
-	ld.as   r1, [r1, r0]            ; PGD entry corresp to faulting addr
-	and.f   r1, r1, PAGE_MASK       ; Ignoring protection and other flags
-	;   contains Ptr to Page Table
-	bz.d    do_slow_path_pf         ; if no Page Table, do page fault
+	ld.as   r3, [r1, r0]            ; PGD entry corresp to faulting addr
+	tst	r3, r3
+	bz	do_slow_path_pf         ; if no Page Table, do page fault
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	and.f	0, r3, _PAGE_HW_SZ	; Is this Huge PMD (thp)
+	add2.nz	r1, r1, r0
+	bnz.d	2f		; YES: PGD == PMD has THP PTE: stop pgd walk
+	mov.nz	r0, r3
+
+#endif
+	and	r1, r3, PAGE_MASK
 
 	; Get the PTE entry: The idea is
 	; (1) x = addr >> PAGE_SHIFT 	-> masks page-off bits from @fault-addr
@@ -219,6 +227,9 @@ ex_saved_reg1:
 	lsr     r0, r2, (PAGE_SHIFT - 2)
 	and     r0, r0, ( (PTRS_PER_PTE - 1) << 2)
 	ld.aw   r0, [r1, r0]            ; get PTE and PTE ptr for fault addr
+
+2:
+
 #ifdef CONFIG_ARC_DBG_TLB_MISS_COUNT
 	and.f 0, r0, _PAGE_PRESENT
 	bz   1f
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 05/11] ARCv2: mm: THP: boot validation/reporting
  2015-08-27  9:03 [PATCH 00/11] THP support for ARC Vineet Gupta
                   ` (3 preceding siblings ...)
  2015-08-27  9:03 ` [PATCH 04/11] ARCv2: mm: THP support Vineet Gupta
@ 2015-08-27  9:03 ` Vineet Gupta
  2015-08-27  9:03 ` [PATCH 06/11] Documentation/features/vm: THP now supported by ARC Vineet Gupta
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Vineet Gupta @ 2015-08-27  9:03 UTC (permalink / raw)
  To: Andrew Morton, Aneesh Kumar K.V, Kirill A. Shutemov, Mel Gorman,
	Matthew Wilcox, Minchan Kim
  Cc: linux-arch, linux-kernel, linux-mm, arc-linux-dev, Vineet Gupta

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
---
 arch/arc/mm/tlb.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c
index 337eebf0d6cf..211d59347047 100644
--- a/arch/arc/mm/tlb.c
+++ b/arch/arc/mm/tlb.c
@@ -706,7 +706,8 @@ char *arc_mmu_mumbojumbo(int cpu_id, char *buf, int len)
 
 	if (p_mmu->s_pg_sz_m)
 		scnprintf(super_pg, 64, "%dM Super Page%s, ",
-			  p_mmu->s_pg_sz_m, " (not used)");
+			  p_mmu->s_pg_sz_m,
+			  IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) ? "" : " (not used)");
 
 	n += scnprintf(buf + n, len - n,
 		      "MMU [v%x]\t: %dk PAGE, %sJTLB %d (%dx%d), uDTLB %d, uITLB %d %s\n",
@@ -741,6 +742,11 @@ void arc_mmu_init(void)
 	if (mmu->pg_sz_k != TO_KB(PAGE_SIZE))
 		panic("MMU pg size != PAGE_SIZE (%luk)\n", TO_KB(PAGE_SIZE));
 
+	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
+	    mmu->s_pg_sz_m != TO_MB(HPAGE_PMD_SIZE))
+		panic("MMU Super pg size != Linux HPAGE_PMD_SIZE (%luM)\n",
+		      (unsigned long)TO_MB(HPAGE_PMD_SIZE));
+
 	/* Enable the MMU */
 	write_aux_reg(ARC_REG_PID, MMU_ENABLE);
 
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 06/11] Documentation/features/vm: THP now supported by ARC
  2015-08-27  9:03 [PATCH 00/11] THP support for ARC Vineet Gupta
                   ` (4 preceding siblings ...)
  2015-08-27  9:03 ` [PATCH 05/11] ARCv2: mm: THP: boot validation/reporting Vineet Gupta
@ 2015-08-27  9:03 ` Vineet Gupta
  2015-08-27  9:03 ` [PATCH 07/11] mm: move some code around Vineet Gupta
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Vineet Gupta @ 2015-08-27  9:03 UTC (permalink / raw)
  To: Andrew Morton, Aneesh Kumar K.V, Kirill A. Shutemov, Mel Gorman,
	Matthew Wilcox, Minchan Kim
  Cc: linux-arch, linux-kernel, linux-mm, arc-linux-dev, Vineet Gupta

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
---
 Documentation/features/vm/THP/arch-support.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Documentation/features/vm/THP/arch-support.txt b/Documentation/features/vm/THP/arch-support.txt
index df384e3e845f..523f8307b9cd 100644
--- a/Documentation/features/vm/THP/arch-support.txt
+++ b/Documentation/features/vm/THP/arch-support.txt
@@ -7,7 +7,7 @@
     |         arch |status|
     -----------------------
     |       alpha: | TODO |
-    |         arc: |  ..  |
+    |         arc: |  ok  |
     |         arm: |  ok  |
     |       arm64: |  ok  |
     |       avr32: |  ..  |
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 07/11] mm: move some code around
  2015-08-27  9:03 [PATCH 00/11] THP support for ARC Vineet Gupta
                   ` (5 preceding siblings ...)
  2015-08-27  9:03 ` [PATCH 06/11] Documentation/features/vm: THP now supported by ARC Vineet Gupta
@ 2015-08-27  9:03 ` Vineet Gupta
  2015-08-27  9:03 ` [PATCH 08/11] mm,thp: reduce ifdef'ery for THP in generic code Vineet Gupta
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Vineet Gupta @ 2015-08-27  9:03 UTC (permalink / raw)
  To: Andrew Morton, Aneesh Kumar K.V, Kirill A. Shutemov, Mel Gorman,
	Matthew Wilcox, Minchan Kim
  Cc: linux-arch, linux-kernel, linux-mm, arc-linux-dev, Vineet Gupta

This reduces/simplifies the diff for the next patch, which moves the
THP-specific code.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
---
 mm/pgtable-generic.c | 50 +++++++++++++++++++++++++-------------------------
 1 file changed, 25 insertions(+), 25 deletions(-)

diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index 6b674e00153c..48851894e699 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -57,6 +57,31 @@ int ptep_set_access_flags(struct vm_area_struct *vma,
 }
 #endif
 
+#ifndef __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
+int ptep_clear_flush_young(struct vm_area_struct *vma,
+			   unsigned long address, pte_t *ptep)
+{
+	int young;
+	young = ptep_test_and_clear_young(vma, address, ptep);
+	if (young)
+		flush_tlb_page(vma, address);
+	return young;
+}
+#endif
+
+#ifndef __HAVE_ARCH_PTEP_CLEAR_FLUSH
+pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long address,
+		       pte_t *ptep)
+{
+	struct mm_struct *mm = (vma)->vm_mm;
+	pte_t pte;
+	pte = ptep_get_and_clear(mm, address, ptep);
+	if (pte_accessible(mm, pte))
+		flush_tlb_page(vma, address);
+	return pte;
+}
+#endif
+
 #ifndef __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
 int pmdp_set_access_flags(struct vm_area_struct *vma,
 			  unsigned long address, pmd_t *pmdp,
@@ -77,18 +102,6 @@ int pmdp_set_access_flags(struct vm_area_struct *vma,
 }
 #endif
 
-#ifndef __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
-int ptep_clear_flush_young(struct vm_area_struct *vma,
-			   unsigned long address, pte_t *ptep)
-{
-	int young;
-	young = ptep_test_and_clear_young(vma, address, ptep);
-	if (young)
-		flush_tlb_page(vma, address);
-	return young;
-}
-#endif
-
 #ifndef __HAVE_ARCH_PMDP_CLEAR_YOUNG_FLUSH
 int pmdp_clear_flush_young(struct vm_area_struct *vma,
 			   unsigned long address, pmd_t *pmdp)
@@ -106,19 +119,6 @@ int pmdp_clear_flush_young(struct vm_area_struct *vma,
 }
 #endif
 
-#ifndef __HAVE_ARCH_PTEP_CLEAR_FLUSH
-pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long address,
-		       pte_t *ptep)
-{
-	struct mm_struct *mm = (vma)->vm_mm;
-	pte_t pte;
-	pte = ptep_get_and_clear(mm, address, ptep);
-	if (pte_accessible(mm, pte))
-		flush_tlb_page(vma, address);
-	return pte;
-}
-#endif
-
 #ifndef __HAVE_ARCH_PMDP_HUGE_CLEAR_FLUSH
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address,
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 08/11] mm,thp: reduce ifdef'ery for THP in generic code
  2015-08-27  9:03 [PATCH 00/11] THP support for ARC Vineet Gupta
                   ` (6 preceding siblings ...)
  2015-08-27  9:03 ` [PATCH 07/11] mm: move some code around Vineet Gupta
@ 2015-08-27  9:03 ` Vineet Gupta
  2015-09-16 22:25   ` Andrew Morton
  2015-08-27  9:03 ` [PATCH 09/11] mm,thp: introduce flush_pmd_tlb_range Vineet Gupta
                   ` (4 subsequent siblings)
  12 siblings, 1 reply; 21+ messages in thread
From: Vineet Gupta @ 2015-08-27  9:03 UTC (permalink / raw)
  To: Andrew Morton, Aneesh Kumar K.V, Kirill A. Shutemov, Mel Gorman,
	Matthew Wilcox, Minchan Kim
  Cc: linux-arch, linux-kernel, linux-mm, arc-linux-dev, Vineet Gupta

This is purely cosmetic; it just makes the code more readable.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
---
 include/asm-generic/pgtable.h | 20 ++++++++++++++++++++
 mm/pgtable-generic.c          | 24 +++---------------------
 2 files changed, 23 insertions(+), 21 deletions(-)

diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index 29c57b2cb344..11e8bf45a08f 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -30,9 +30,20 @@ extern int ptep_set_access_flags(struct vm_area_struct *vma,
 #endif
 
 #ifndef __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 extern int pmdp_set_access_flags(struct vm_area_struct *vma,
 				 unsigned long address, pmd_t *pmdp,
 				 pmd_t entry, int dirty);
+#else /* CONFIG_TRANSPARENT_HUGEPAGE */
+static inline int pmdp_set_access_flags(struct vm_area_struct *vma,
+					unsigned long address, pmd_t *pmdp,
+					pmd_t entry, int dirty)
+{
+	BUG();
+	return 0;
+}
+
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 #endif
 
 #ifndef __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
@@ -81,8 +92,17 @@ int ptep_clear_flush_young(struct vm_area_struct *vma,
 #endif
 
 #ifndef __HAVE_ARCH_PMDP_CLEAR_YOUNG_FLUSH
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 int pmdp_clear_flush_young(struct vm_area_struct *vma,
 			   unsigned long address, pmd_t *pmdp);
+#else /* CONFIG_TRANSPARENT_HUGEPAGE */
+static inline int pmdp_clear_flush_young(struct vm_area_struct *vma,
+					 unsigned long address, pmd_t *pmdp)
+{
+	BUG();
+	return 0;
+}
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 #endif
 
 #ifndef __HAVE_ARCH_PTEP_GET_AND_CLEAR
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index 48851894e699..c9c59bb75a17 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -82,12 +82,13 @@ pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long address,
 }
 #endif
 
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+
 #ifndef __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
 int pmdp_set_access_flags(struct vm_area_struct *vma,
 			  unsigned long address, pmd_t *pmdp,
 			  pmd_t entry, int dirty)
 {
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	int changed = !pmd_same(*pmdp, entry);
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
 	if (changed) {
@@ -95,10 +96,6 @@ int pmdp_set_access_flags(struct vm_area_struct *vma,
 		flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
 	}
 	return changed;
-#else /* CONFIG_TRANSPARENT_HUGEPAGE */
-	BUG();
-	return 0;
-#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 }
 #endif
 
@@ -107,11 +104,7 @@ int pmdp_clear_flush_young(struct vm_area_struct *vma,
 			   unsigned long address, pmd_t *pmdp)
 {
 	int young;
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
-#else
-	BUG();
-#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 	young = pmdp_test_and_clear_young(vma, address, pmdp);
 	if (young)
 		flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
@@ -120,7 +113,6 @@ int pmdp_clear_flush_young(struct vm_area_struct *vma,
 #endif
 
 #ifndef __HAVE_ARCH_PMDP_HUGE_CLEAR_FLUSH
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address,
 			    pmd_t *pmdp)
 {
@@ -131,11 +123,9 @@ pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address,
 	flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
 	return pmd;
 }
-#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 #endif
 
 #ifndef __HAVE_ARCH_PMDP_SPLITTING_FLUSH
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 void pmdp_splitting_flush(struct vm_area_struct *vma, unsigned long address,
 			  pmd_t *pmdp)
 {
@@ -145,11 +135,9 @@ void pmdp_splitting_flush(struct vm_area_struct *vma, unsigned long address,
 	/* tlb flush only to serialize against gup-fast */
 	flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
 }
-#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 #endif
 
 #ifndef __HAVE_ARCH_PGTABLE_DEPOSIT
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
 				pgtable_t pgtable)
 {
@@ -162,11 +150,9 @@ void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
 		list_add(&pgtable->lru, &pmd_huge_pte(mm, pmdp)->lru);
 	pmd_huge_pte(mm, pmdp) = pgtable;
 }
-#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 #endif
 
 #ifndef __HAVE_ARCH_PGTABLE_WITHDRAW
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 /* no "address" argument so destroys page coloring of some arch */
 pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
 {
@@ -185,11 +171,9 @@ pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
 	}
 	return pgtable;
 }
-#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 #endif
 
 #ifndef __HAVE_ARCH_PMDP_INVALIDATE
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
 		     pmd_t *pmdp)
 {
@@ -197,11 +181,9 @@ void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
 	set_pmd_at(vma->vm_mm, address, pmdp, pmd_mknotpresent(entry));
 	flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
 }
-#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 #endif
 
 #ifndef pmdp_collapse_flush
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 pmd_t pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address,
 			  pmd_t *pmdp)
 {
@@ -217,5 +199,5 @@ pmd_t pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address,
 	flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
 	return pmd;
 }
-#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 #endif
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 09/11] mm,thp: introduce flush_pmd_tlb_range
  2015-08-27  9:03 [PATCH 00/11] THP support for ARC Vineet Gupta
                   ` (7 preceding siblings ...)
  2015-08-27  9:03 ` [PATCH 08/11] mm,thp: reduce ifdef'ery for THP in generic code Vineet Gupta
@ 2015-08-27  9:03 ` Vineet Gupta
  2015-09-16 22:26   ` Andrew Morton
  2015-08-27  9:03 ` [PATCH 10/11] ARCv2: mm: THP: Implement flush_pmd_tlb_range() optimization Vineet Gupta
                   ` (3 subsequent siblings)
  12 siblings, 1 reply; 21+ messages in thread
From: Vineet Gupta @ 2015-08-27  9:03 UTC (permalink / raw)
  To: Andrew Morton, Aneesh Kumar K.V, Kirill A. Shutemov, Mel Gorman,
	Matthew Wilcox, Minchan Kim
  Cc: linux-arch, linux-kernel, linux-mm, arc-linux-dev, Vineet Gupta

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
---
 mm/huge_memory.c     |  2 +-
 mm/pgtable-generic.c | 25 +++++++++++++++++++------
 2 files changed, 20 insertions(+), 7 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 79b7a6826766..8d301dfc9d73 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1915,7 +1915,7 @@ static int __split_huge_page_map(struct page *page,
 		 * here). But it is generally safer to never allow
 		 * small and huge TLB entries for the same virtual
 		 * address to be loaded simultaneously. So instead of
-		 * doing "pmd_populate(); flush_tlb_range();" we first
+		 * doing "pmd_populate(); flush_pmd_tlb_range();" we first
 		 * mark the current pmd notpresent (atomically because
 		 * here the pmd_trans_huge and pmd_trans_splitting
 		 * must remain set at all times on the pmd until the
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index c9c59bb75a17..b0fed9a4776e 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -84,6 +84,19 @@ pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long address,
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 
+#ifndef __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
+
+/*
+ * ARCHes with special requirements for evicting THP backing TLB entries can
+ * implement this. Otherwise also, it can help optimizing thp flush operation.
+ * flush_tlb_range() can have optimization to nuke the entire TLB if flush span
+ * is greater than a threashhold, which will likely be true for a single
+ * huge page.
+ * e.g. see arch/arc: flush_pmd_tlb_range
+ */
+#define flush_pmd_tlb_range(vma, addr, end)	flush_tlb_range(vma, addr, end)
+#endif
+
 #ifndef __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
 int pmdp_set_access_flags(struct vm_area_struct *vma,
 			  unsigned long address, pmd_t *pmdp,
@@ -93,7 +106,7 @@ int pmdp_set_access_flags(struct vm_area_struct *vma,
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
 	if (changed) {
 		set_pmd_at(vma->vm_mm, address, pmdp, entry);
-		flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
+		flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
 	}
 	return changed;
 }
@@ -107,7 +120,7 @@ int pmdp_clear_flush_young(struct vm_area_struct *vma,
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
 	young = pmdp_test_and_clear_young(vma, address, pmdp);
 	if (young)
-		flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
+		flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
 	return young;
 }
 #endif
@@ -120,7 +133,7 @@ pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address,
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
 	VM_BUG_ON(!pmd_trans_huge(*pmdp));
 	pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
-	flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
+	flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
 	return pmd;
 }
 #endif
@@ -133,7 +146,7 @@ void pmdp_splitting_flush(struct vm_area_struct *vma, unsigned long address,
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
 	set_pmd_at(vma->vm_mm, address, pmdp, pmd);
 	/* tlb flush only to serialize against gup-fast */
-	flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
+	flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
 }
 #endif
 
@@ -179,7 +192,7 @@ void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
 {
 	pmd_t entry = *pmdp;
 	set_pmd_at(vma->vm_mm, address, pmdp, pmd_mknotpresent(entry));
-	flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
+	flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
 }
 #endif
 
@@ -196,7 +209,7 @@ pmd_t pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address,
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
 	VM_BUG_ON(pmd_trans_huge(*pmdp));
 	pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
-	flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
+	flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
 	return pmd;
 }
 #endif
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 10/11] ARCv2: mm: THP: Implement flush_pmd_tlb_range() optimization
  2015-08-27  9:03 [PATCH 00/11] THP support for ARC Vineet Gupta
                   ` (8 preceding siblings ...)
  2015-08-27  9:03 ` [PATCH 09/11] mm,thp: introduce flush_pmd_tlb_range Vineet Gupta
@ 2015-08-27  9:03 ` Vineet Gupta
  2015-08-27  9:03 ` [PATCH 11/11] ARCv2: Add a DT which enables THP Vineet Gupta
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Vineet Gupta @ 2015-08-27  9:03 UTC (permalink / raw)
  To: Andrew Morton, Aneesh Kumar K.V, Kirill A. Shutemov, Mel Gorman,
	Matthew Wilcox, Minchan Kim
  Cc: linux-arch, linux-kernel, linux-mm, arc-linux-dev, Vineet Gupta

Implement the TLB flush routine to evict a specific Super TLB entry,
instead of moving to a new ASID on every such flush.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
---
 arch/arc/include/asm/hugepage.h |  4 ++++
 arch/arc/mm/tlb.c               | 20 ++++++++++++++++++++
 2 files changed, 24 insertions(+)

diff --git a/arch/arc/include/asm/hugepage.h b/arch/arc/include/asm/hugepage.h
index d7614d2af454..bf74f507e038 100644
--- a/arch/arc/include/asm/hugepage.h
+++ b/arch/arc/include/asm/hugepage.h
@@ -75,4 +75,8 @@ extern void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
 #define __HAVE_ARCH_PGTABLE_WITHDRAW
 extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
 
+#define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
+extern void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
+				unsigned long end);
+
 #endif
diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c
index 211d59347047..d57272c4caf7 100644
--- a/arch/arc/mm/tlb.c
+++ b/arch/arc/mm/tlb.c
@@ -629,6 +629,26 @@ pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
 	return pgtable;
 }
 
+void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
+			 unsigned long end)
+{
+	unsigned int cpu;
+	unsigned long flags;
+
+	local_irq_save(flags);
+
+	cpu = smp_processor_id();
+
+	if (likely(asid_mm(vma->vm_mm, cpu) != MM_CTXT_NO_ASID)) {
+		unsigned int asid = hw_pid(vma->vm_mm, cpu);
+
+		/* No need to loop here: this will always be for 1 Huge Page */
+		tlb_entry_erase(start | _PAGE_HW_SZ | asid);
+	}
+
+	local_irq_restore(flags);
+}
+
 #endif
 
 /* Read the Cache Build Confuration Registers, Decode them and save into
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 11/11] ARCv2: Add a DT which enables THP
  2015-08-27  9:03 [PATCH 00/11] THP support for ARC Vineet Gupta
                   ` (9 preceding siblings ...)
  2015-08-27  9:03 ` [PATCH 10/11] ARCv2: mm: THP: Implement flush_pmd_tlb_range() optimization Vineet Gupta
@ 2015-08-27  9:03 ` Vineet Gupta
  2015-09-03  8:46 ` [PATCH 00/11] THP support for ARC Vineet Gupta
  2015-09-16 22:27 ` Andrew Morton
  12 siblings, 0 replies; 21+ messages in thread
From: Vineet Gupta @ 2015-08-27  9:03 UTC (permalink / raw)
  To: Andrew Morton, Aneesh Kumar K.V, Kirill A. Shutemov, Mel Gorman,
	Matthew Wilcox, Minchan Kim
  Cc: linux-arch, linux-kernel, linux-mm, arc-linux-dev, Vineet Gupta

* Enable THP at bootup
* More than 512M RAM (the kernel auto-disables THP on smaller systems; see
  the note below)
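
For reference, this is roughly the generic check being satisfied (paraphrased
from mm/huge_memory.c of this vintage; treat it as a sketch, not a quote):

	/* hugepage_init(): THP is silently left disabled on small-memory systems */
	if (totalram_pages < (512 << 20) >> PAGE_SHIFT) {
		transparent_hugepage_flags = 0;
		return 0;
	}

THP itself is turned on via transparent_hugepage=always in the bootargs below.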

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
---
 arch/arc/boot/dts/hs_thp.dts | 59 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 59 insertions(+)
 create mode 100644 arch/arc/boot/dts/hs_thp.dts

diff --git a/arch/arc/boot/dts/hs_thp.dts b/arch/arc/boot/dts/hs_thp.dts
new file mode 100644
index 000000000000..818a8c968330
--- /dev/null
+++ b/arch/arc/boot/dts/hs_thp.dts
@@ -0,0 +1,59 @@
+/*
+ * Copyright (C) 2015 Synopsys, Inc. (www.synopsys.com)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+/dts-v1/;
+
+/include/ "skeleton.dtsi"
+
+/ {
+	compatible = "snps,nsim_hs";
+	interrupt-parent = <&core_intc>;
+
+	chosen {
+		bootargs = "earlycon=arc_uart,mmio32,0xc0fc1000,115200n8 console=ttyARC0,115200n8 transparent_hugepage=always";
+	};
+
+	aliases {
+		serial0 = &arcuart0;
+	};
+
+	memory {
+		device_type = "memory";
+		/* reg = <0x00000000 0x28000000>; */
+		reg = <0x00000000 0x40000000>;
+	};
+
+	fpga {
+		compatible = "simple-bus";
+		#address-cells = <1>;
+		#size-cells = <1>;
+
+		/* child and parent address space 1:1 mapped */
+		ranges;
+
+		core_intc: core-interrupt-controller {
+			compatible = "snps,archs-intc";
+			interrupt-controller;
+			#interrupt-cells = <1>;
+		};
+
+		arcuart0: serial@c0fc1000 {
+			compatible = "snps,arc-uart";
+			reg = <0xc0fc1000 0x100>;
+			interrupts = <24>;
+			clock-frequency = <80000000>;
+			current-speed = <115200>;
+			status = "okay";
+		};
+
+		arcpct0: pct {
+			compatible = "snps,archs-pct";
+			#interrupt-cells = <1>;
+			interrupts = <20>;
+		};
+	};
+};
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH 04/11] ARCv2: mm: THP support
  2015-08-27  9:03 ` [PATCH 04/11] ARCv2: mm: THP support Vineet Gupta
@ 2015-08-27 15:32   ` Kirill A. Shutemov
  2015-08-27 16:56     ` Vineet Gupta
  2015-08-28  6:09     ` Vineet Gupta
  0 siblings, 2 replies; 21+ messages in thread
From: Kirill A. Shutemov @ 2015-08-27 15:32 UTC (permalink / raw)
  To: Vineet Gupta
  Cc: Andrew Morton, Aneesh Kumar K.V, Kirill A. Shutemov, Mel Gorman,
	Matthew Wilcox, Minchan Kim, linux-arch, linux-kernel, linux-mm,
	arc-linux-dev

On Thu, Aug 27, 2015 at 02:33:07PM +0530, Vineet Gupta wrote:
> +pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
> +{
> +	struct list_head *lh;
> +	pgtable_t pgtable;
> +	pte_t *ptep;
> +
> +	assert_spin_locked(&mm->page_table_lock);
> +
> +	pgtable = pmd_huge_pte(mm, pmdp);
> +	lh = (struct list_head *) pgtable;
> +	if (list_empty(lh))
> +		pmd_huge_pte(mm, pmdp) = (pgtable_t) NULL;
> +	else {
> +		pmd_huge_pte(mm, pmdp) = (pgtable_t) lh->next;
> +		list_del(lh);
> +	}

Side question: why pgtable_t is unsigned long on ARC and not struct page *
or pte_t *, like on other archs? We could avoid these casts.

-- 
 Kirill A. Shutemov


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 04/11] ARCv2: mm: THP support
  2015-08-27 15:32   ` Kirill A. Shutemov
@ 2015-08-27 16:56     ` Vineet Gupta
  2015-08-28  6:09     ` Vineet Gupta
  1 sibling, 0 replies; 21+ messages in thread
From: Vineet Gupta @ 2015-08-27 16:56 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: Andrew Morton, Aneesh Kumar K.V, Kirill A. Shutemov, Mel Gorman,
	Matthew Wilcox, Minchan Kim, linux-arch, linux-kernel, linux-mm,
	arc-linux-dev

On Thursday 27 August 2015 09:02 PM, Kirill A. Shutemov wrote:
> On Thu, Aug 27, 2015 at 02:33:07PM +0530, Vineet Gupta wrote:
>> > +pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
>> > +{
>> > +	struct list_head *lh;
>> > +	pgtable_t pgtable;
>> > +	pte_t *ptep;
>> > +
>> > +	assert_spin_locked(&mm->page_table_lock);
>> > +
>> > +	pgtable = pmd_huge_pte(mm, pmdp);
>> > +	lh = (struct list_head *) pgtable;
>> > +	if (list_empty(lh))
>> > +		pmd_huge_pte(mm, pmdp) = (pgtable_t) NULL;
>> > +	else {
>> > +		pmd_huge_pte(mm, pmdp) = (pgtable_t) lh->next;
>> > +		list_del(lh);
>> > +	}
> Side question: why pgtable_t is unsigned long on ARC and not struct page *
> or pte_t *, like on other archs? We could avoid these casts.

This goes back to how I did this for ARC long back, to avoid page_address()
calls in the general case: pte_alloc_one(), pmd_populate(), pte_free()... all
needed to convert a struct page to an unsigned long. It was a
micro-optimization of sorts, but it has served us well.

I could perhaps try making it pte_t *; that would certainly remove a bunch of
other casts as well.
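
A rough sketch of the trade-off (illustrative only, not the actual ARC code;
pte_page and pte_pg are assumed names for the two representations):

	/* pgtable_t == struct page *: hooks must first get a virtual address */
	pmd_set(pmdp, (pte_t *)page_address(pte_page));

	/* pgtable_t == unsigned long (or pte_t *): the address is already at hand */
	pmd_set(pmdp, (pte_t *)pte_pg);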

-Vineet



^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 04/11] ARCv2: mm: THP support
  2015-08-27 15:32   ` Kirill A. Shutemov
  2015-08-27 16:56     ` Vineet Gupta
@ 2015-08-28  6:09     ` Vineet Gupta
  1 sibling, 0 replies; 21+ messages in thread
From: Vineet Gupta @ 2015-08-28  6:09 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: Andrew Morton, Aneesh Kumar K.V, Kirill A. Shutemov, Mel Gorman,
	Matthew Wilcox, Minchan Kim, linux-arch@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	arc-linux-dev@synopsys.com

On Thursday 27 August 2015 09:03 PM, Kirill A. Shutemov wrote:
> On Thu, Aug 27, 2015 at 02:33:07PM +0530, Vineet Gupta wrote:
>> +pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
>> +{
>> +	struct list_head *lh;
>> +	pgtable_t pgtable;
>> +	pte_t *ptep;
>> +
>> +	assert_spin_locked(&mm->page_table_lock);
>> +
>> +	pgtable = pmd_huge_pte(mm, pmdp);
>> +	lh = (struct list_head *) pgtable;
>> +	if (list_empty(lh))
>> +		pmd_huge_pte(mm, pmdp) = (pgtable_t) NULL;
>> +	else {
>> +		pmd_huge_pte(mm, pmdp) = (pgtable_t) lh->next;
>> +		list_del(lh);
>> +	}
> Side question: why pgtable_t is unsigned long on ARC and not struct page *
> or pte_t *, like on other archs? We could avoid these casts.

Well we avoid some and add some, but I agree that it is better as pte_t *

-------------->
From 7170a998bd4d5014ade94f4e5ba979c929d1ee18 Mon Sep 17 00:00:00 2001
From: Vineet Gupta <vgupta@synopsys.com>
Date: Fri, 28 Aug 2015 08:39:57 +0530
Subject: [PATCH] ARC: mm: switch pgtable_t to pte_t *

ARC is the only arch with unsigned long type (vs. struct page *).
Historically this was done to avoid the page_address() calls in various
arch hooks which need to get the virtual/logical address of the table.

Some arches alternatively define it as pte_t *, which is just as efficient as
unsigned long (the generated code doesn't change).

Suggested-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
---
 arch/arc/include/asm/page.h    | 4 ++--
 arch/arc/include/asm/pgalloc.h | 6 +++---
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/arc/include/asm/page.h b/arch/arc/include/asm/page.h
index 9c8aa41e45c2..2994cac1069e 100644
--- a/arch/arc/include/asm/page.h
+++ b/arch/arc/include/asm/page.h
@@ -43,7 +43,6 @@ typedef struct {
 typedef struct {
     unsigned long pgprot;
 } pgprot_t;
-typedef unsigned long pgtable_t;
 
 #define pte_val(x)      ((x).pte)
 #define pgd_val(x)      ((x).pgd)
@@ -60,7 +59,6 @@ typedef unsigned long pgtable_t;
 typedef unsigned long pte_t;
 typedef unsigned long pgd_t;
 typedef unsigned long pgprot_t;
-typedef unsigned long pgtable_t;
 
 #define pte_val(x)    (x)
 #define pgd_val(x)    (x)
@@ -71,6 +69,8 @@ typedef unsigned long pgtable_t;
 
 #endif
 
+typedef pte_t * pgtable_t;
+
 #define ARCH_PFN_OFFSET     (CONFIG_LINUX_LINK_BASE >> PAGE_SHIFT)
 
 #define pfn_valid(pfn)      (((pfn) - ARCH_PFN_OFFSET) < max_mapnr)
diff --git a/arch/arc/include/asm/pgalloc.h b/arch/arc/include/asm/pgalloc.h
index 81208bfd9dcb..9149b5ca26d7 100644
--- a/arch/arc/include/asm/pgalloc.h
+++ b/arch/arc/include/asm/pgalloc.h
@@ -107,7 +107,7 @@ pte_alloc_one(struct mm_struct *mm, unsigned long address)
     pgtable_t pte_pg;
     struct page *page;
 
-    pte_pg = __get_free_pages(GFP_KERNEL | __GFP_REPEAT, __get_order_pte());
+    pte_pg = (pgtable_t)__get_free_pages(GFP_KERNEL | __GFP_REPEAT, __get_order_pte());
     if (!pte_pg)
         return 0;
     memzero((void *)pte_pg, PTRS_PER_PTE * 4);
@@ -128,12 +128,12 @@ static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
 static inline void pte_free(struct mm_struct *mm, pgtable_t ptep)
 {
     pgtable_page_dtor(virt_to_page(ptep));
-    free_pages(ptep, __get_order_pte());
+    free_pages((unsigned long)ptep, __get_order_pte());
 }
 
 #define __pte_free_tlb(tlb, pte, addr)  pte_free((tlb)->mm, pte)
 
 #define check_pgt_cache()   do { } while (0)
-#define pmd_pgtable(pmd) pmd_page_vaddr(pmd)
+#define pmd_pgtable(pmd)    ((pgtable_t) pmd_page_vaddr(pmd))
 
 #endif /* _ASM_ARC_PGALLOC_H */
-- 
1.9.1




^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH 00/11] THP support for ARC
  2015-08-27  9:03 [PATCH 00/11] THP support for ARC Vineet Gupta
                   ` (10 preceding siblings ...)
  2015-08-27  9:03 ` [PATCH 11/11] ARCv2: Add a DT which enables THP Vineet Gupta
@ 2015-09-03  8:46 ` Vineet Gupta
  2015-09-16 22:27 ` Andrew Morton
  12 siblings, 0 replies; 21+ messages in thread
From: Vineet Gupta @ 2015-09-03  8:46 UTC (permalink / raw)
  To: Andrew Morton, Aneesh Kumar K.V, Kirill A. Shutemov, Mel Gorman,
	Matthew Wilcox, Minchan Kim
  Cc: linux-arch, linux-kernel, linux-mm, arc-linux-dev

Hi all,

On Thursday 27 August 2015 02:33 PM, Vineet Gupta wrote:
> Hi,
> 
> This series brings THP support to ARC. It also introduces an optional new
> THP hook which arches can implement to optimize the TLB flush in the THP
> regime.
> 
> Rebased against today's linux-next, so it includes the new hook for Minchan's
> madvise(MADV_FREE).
> 
> Please review !
> 
> Thx,
> -Vineet

I understand that this is a busy time for people due to the merge window.
However, is this series in a reviewable state, or do people think more changes
are needed before they can take a look?

I already did the pgtable_t switch to pte_t * as requested by Kirill (as a
separate precursor patch), and that does require one patch in this series to be
updated. I will spin this in v2, but was wondering if we are on the right track
here.

Thx,
-Vineet


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 08/11] mm,thp: reduce ifdef'ery for THP in generic code
  2015-08-27  9:03 ` [PATCH 08/11] mm,thp: reduce ifdef'ery for THP in generic code Vineet Gupta
@ 2015-09-16 22:25   ` Andrew Morton
  2015-09-16 23:45     ` Vineet Gupta
  0 siblings, 1 reply; 21+ messages in thread
From: Andrew Morton @ 2015-09-16 22:25 UTC (permalink / raw)
  To: Vineet Gupta
  Cc: Aneesh Kumar K.V, Kirill A. Shutemov, Mel Gorman, Matthew Wilcox,
	Minchan Kim, linux-arch, linux-kernel, linux-mm, arc-linux-dev

On Thu, 27 Aug 2015 14:33:11 +0530 Vineet Gupta <Vineet.Gupta1@synopsys.com> wrote:

> This is purely cosmetic, just makes code more readable
> 
> ...
>
> --- a/include/asm-generic/pgtable.h
> +++ b/include/asm-generic/pgtable.h
> @@ -30,9 +30,20 @@ extern int ptep_set_access_flags(struct vm_area_struct *vma,
>  #endif
>  
>  #ifndef __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  extern int pmdp_set_access_flags(struct vm_area_struct *vma,
>  				 unsigned long address, pmd_t *pmdp,
>  				 pmd_t entry, int dirty);
> +#else /* CONFIG_TRANSPARENT_HUGEPAGE */
> +static inline int pmdp_set_access_flags(struct vm_area_struct *vma,
> +					unsigned long address, pmd_t *pmdp,
> +					pmd_t entry, int dirty)
> +{
> +	BUG();
> +	return 0;
> +}

Is it possible to simply leave this undefined?  So the kernel fails at
link time?
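
For readers following the thread, the shape of that alternative would be roughly
the following; this is a sketch of the suggestion only, not necessarily what v2
ended up doing:

#ifndef __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
/*
 * Declared unconditionally, but defined only when the generic helper is
 * built (mm/pgtable-generic.c guards its definition with
 * CONFIG_TRANSPARENT_HUGEPAGE).  A caller that sneaks into a !THP build
 * then fails with an undefined reference at link time instead of hitting
 * a runtime BUG().
 */
extern int pmdp_set_access_flags(struct vm_area_struct *vma,
				 unsigned long address, pmd_t *pmdp,
				 pmd_t entry, int dirty);
#endif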

> --- a/mm/pgtable-generic.c
> +++ b/mm/pgtable-generic.c

Good heavens that file is a mess.  Your patch does improve it.



* Re: [PATCH 09/11] mm,thp: introduce flush_pmd_tlb_range
  2015-08-27  9:03 ` [PATCH 09/11] mm,thp: introduce flush_pmd_tlb_range Vineet Gupta
@ 2015-09-16 22:26   ` Andrew Morton
  2015-09-16 23:57     ` Vineet Gupta
  0 siblings, 1 reply; 21+ messages in thread
From: Andrew Morton @ 2015-09-16 22:26 UTC (permalink / raw)
  To: Vineet Gupta
  Cc: Aneesh Kumar K.V, Kirill A. Shutemov, Mel Gorman, Matthew Wilcox,
	Minchan Kim, linux-arch, linux-kernel, linux-mm, arc-linux-dev

On Thu, 27 Aug 2015 14:33:12 +0530 Vineet Gupta <Vineet.Gupta1@synopsys.com> wrote:

> --- a/mm/pgtable-generic.c
> +++ b/mm/pgtable-generic.c
> @@ -84,6 +84,19 @@ pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long address,
>  
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  
> +#ifndef __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
> +
> +/*
> + * ARCHes with special requirements for evicting THP backing TLB entries can
> + * implement this. Otherwise also, it can help optimizing thp flush operation.
> + * flush_tlb_range() can have optimization to nuke the entire TLB if flush span
> + * is greater than a threashhold, which will likely be true for a single
> + * huge page.
> + * e.g. see arch/arc: flush_pmd_tlb_range
> + */
> +#define flush_pmd_tlb_range(vma, addr, end)	flush_tlb_range(vma, addr, end)
> +#endif

Did you consider using a __weak function here?
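
For comparison, a __weak variant would look roughly like this (a sketch only;
the ARC body is a placeholder, and Vineet explains below why he did not go this
way):

/* mm/pgtable-generic.c: default implementation, overridable per arch */
void __weak flush_pmd_tlb_range(struct vm_area_struct *vma,
				unsigned long addr, unsigned long end)
{
	flush_tlb_range(vma, addr, end);
}

/* arch/arc/mm/tlb.c: a strong definition silently replaces the weak one */
void flush_pmd_tlb_range(struct vm_area_struct *vma,
			 unsigned long addr, unsigned long end)
{
	/* arch-specific eviction of huge-page TLB entries would go here */
}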



* Re: [PATCH 00/11] THP support for ARC
  2015-08-27  9:03 [PATCH 00/11] THP support for ARC Vineet Gupta
                   ` (11 preceding siblings ...)
  2015-09-03  8:46 ` [PATCH 00/11] THP support for ARC Vineet Gupta
@ 2015-09-16 22:27 ` Andrew Morton
  12 siblings, 0 replies; 21+ messages in thread
From: Andrew Morton @ 2015-09-16 22:27 UTC (permalink / raw)
  To: Vineet Gupta
  Cc: Aneesh Kumar K.V, Kirill A. Shutemov, Mel Gorman, Matthew Wilcox,
	Minchan Kim, linux-arch, linux-kernel, linux-mm, arc-linux-dev

On Thu, 27 Aug 2015 14:33:03 +0530 Vineet Gupta <Vineet.Gupta1@synopsys.com> wrote:

> This series brings THP support to ARC. It also introduces an optional new
> thp hook for arches to possibly optimize the TLB flush in thp regime.

The mm/ changes look OK to me.  Please maintain them in the arc tree and
merge them upstream at the same time.


* Re: [PATCH 08/11] mm,thp: reduce ifdef'ery for THP in generic code
  2015-09-16 22:25   ` Andrew Morton
@ 2015-09-16 23:45     ` Vineet Gupta
  0 siblings, 0 replies; 21+ messages in thread
From: Vineet Gupta @ 2015-09-16 23:45 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Aneesh Kumar K.V, Kirill A. Shutemov, Mel Gorman, Matthew Wilcox,
	Minchan Kim, linux-arch@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	arc-linux-dev@synopsys.com

On Wednesday 16 September 2015 03:25 PM, Andrew Morton wrote:
> On Thu, 27 Aug 2015 14:33:11 +0530 Vineet Gupta <Vineet.Gupta1@synopsys.com> wrote:
>
>> This is purely cosmetic, just makes code more readable
>>
>> ...
>>
>> --- a/include/asm-generic/pgtable.h
>> +++ b/include/asm-generic/pgtable.h
>> @@ -30,9 +30,20 @@ extern int ptep_set_access_flags(struct vm_area_struct *vma,
>>  #endif
>>  
>>  #ifndef __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>  extern int pmdp_set_access_flags(struct vm_area_struct *vma,
>>  				 unsigned long address, pmd_t *pmdp,
>>  				 pmd_t entry, int dirty);
>> +#else /* CONFIG_TRANSPARENT_HUGEPAGE */
>> +static inline int pmdp_set_access_flags(struct vm_area_struct *vma,
>> +					unsigned long address, pmd_t *pmdp,
>> +					pmd_t entry, int dirty)
>> +{
>> +	BUG();
>> +	return 0;
>> +}
> Is it possible to simply leave this undefined?  So the kernel fails at
> link time?

Sure! There are quite a few in there which could be changed in the same way; I'll
do that in v2.

Thx for reviewing.

-Vineet

>
>> --- a/mm/pgtable-generic.c
>> +++ b/mm/pgtable-generic.c
> Good heavens that file is a mess.  Your patch does improve it.
>
>
>



* Re: [PATCH 09/11] mm,thp: introduce flush_pmd_tlb_range
  2015-09-16 22:26   ` Andrew Morton
@ 2015-09-16 23:57     ` Vineet Gupta
  0 siblings, 0 replies; 21+ messages in thread
From: Vineet Gupta @ 2015-09-16 23:57 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Aneesh Kumar K.V, Kirill A. Shutemov, Mel Gorman, Matthew Wilcox,
	Minchan Kim, linux-arch@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	arc-linux-dev@synopsys.com

On Wednesday 16 September 2015 03:26 PM, Andrew Morton wrote:
> On Thu, 27 Aug 2015 14:33:12 +0530 Vineet Gupta <Vineet.Gupta1@synopsys.com> wrote:
>
>> --- a/mm/pgtable-generic.c
>> +++ b/mm/pgtable-generic.c
>> @@ -84,6 +84,19 @@ pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long address,
>>  
>>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>  
>> +#ifndef __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
>> +
>> +/*
>> + * ARCHes with special requirements for evicting THP backing TLB entries can
>> + * implement this. Otherwise also, it can help optimizing thp flush operation.
>> + * flush_tlb_range() can have optimization to nuke the entire TLB if flush span
>> + * is greater than a threashhold, which will likely be true for a single
>> + * huge page.
>> + * e.g. see arch/arc: flush_pmd_tlb_range
>> + */
>> +#define flush_pmd_tlb_range(vma, addr, end)	flush_tlb_range(vma, addr, end)
>> +#endif
> Did you consider using a __weak function here?

IMHO weak doesn't apply here. All arches already have flush_tlb_range(), which is
called by both normal and THP code to flush the corresponding normal/THP page TLB
entries. What I want to do is differentiate the THP flush case by calling a
different API (which can optionally be implemented by the arch, or fall back to
vanilla flush_tlb_range()). So we need to change the call itself here, whereas
weak lends itself better to keeping the call the same but just swapping the
implementation.
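
Put as code, the override pattern being described is roughly the following (the
generic half matches the hunk quoted above; the ARC half is condensed to its
interface, and the exact header placement is an assumption):

/* include/asm-generic/pgtable.h: compile-time fallback */
#ifndef __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
#define flush_pmd_tlb_range(vma, addr, end)	flush_tlb_range(vma, addr, end)
#endif

/* arch side (e.g. an ARC header): opt in and provide the real routine */
#define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
extern void flush_pmd_tlb_range(struct vm_area_struct *vma,
				unsigned long start, unsigned long end);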



end of thread

Thread overview: 21+ messages
2015-08-27  9:03 [PATCH 00/11] THP support for ARC Vineet Gupta
2015-08-27  9:03 ` [PATCH 01/11] ARC: mm: pte flags comsetic cleanups, comments Vineet Gupta
2015-08-27  9:03 ` [PATCH 02/11] ARC: mm: Introduce PTE_SPECIAL Vineet Gupta
2015-08-27  9:03 ` [PATCH 03/11] Documentation/features/vm: pte_special now supported by ARC Vineet Gupta
2015-08-27  9:03 ` [PATCH 04/11] ARCv2: mm: THP support Vineet Gupta
2015-08-27 15:32   ` Kirill A. Shutemov
2015-08-27 16:56     ` Vineet Gupta
2015-08-28  6:09     ` Vineet Gupta
2015-08-27  9:03 ` [PATCH 05/11] ARCv2: mm: THP: boot validation/reporting Vineet Gupta
2015-08-27  9:03 ` [PATCH 06/11] Documentation/features/vm: THP now supported by ARC Vineet Gupta
2015-08-27  9:03 ` [PATCH 07/11] mm: move some code around Vineet Gupta
2015-08-27  9:03 ` [PATCH 08/11] mm,thp: reduce ifdef'ery for THP in generic code Vineet Gupta
2015-09-16 22:25   ` Andrew Morton
2015-09-16 23:45     ` Vineet Gupta
2015-08-27  9:03 ` [PATCH 09/11] mm,thp: introduce flush_pmd_tlb_range Vineet Gupta
2015-09-16 22:26   ` Andrew Morton
2015-09-16 23:57     ` Vineet Gupta
2015-08-27  9:03 ` [PATCH 10/11] ARCv2: mm: THP: Implement flush_pmd_tlb_range() optimization Vineet Gupta
2015-08-27  9:03 ` [PATCH 11/11] ARCv2: Add a DT which enables THP Vineet Gupta
2015-09-03  8:46 ` [PATCH 00/11] THP support for ARC Vineet Gupta
2015-09-16 22:27 ` Andrew Morton
