From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	stable@vger.kernel.org, "David S. Miller" <davem@davemloft.net>,
	Bob Picco <bob.picco@oracle.com>
Subject: [PATCH 3.17 138/146] sparc64: Fix physical memory management regressions with large max_phys_bits.
Date: Tue, 28 Oct 2014 11:34:40 +0800
Message-ID: <20141028033349.441332581@linuxfoundation.org>
In-Reply-To: <20141028033343.441992423@linuxfoundation.org>

3.17-stable review patch.  If anyone has any objections, please let me know.

------------------

From: "David S. Miller" <davem@davemloft.net>

[ Upstream commit 0dd5b7b09e13dae32869371e08e1048349fd040c ]

If max_phys_bits needs to be > 43 (e.g. for T4 chips), things like
DEBUG_PAGEALLOC stop working because the 3-level page tables can
only cover up to 43 bits.

Another problem is that when we increased MAX_PHYS_ADDRESS_BITS up to
47, several statically allocated tables became enormous.

Compounding this is that we will need to support up to 49 bits of
physical addressing for M7 chips.

The two tables in question are sparc64_valid_addr_bitmap and
kpte_linear_bitmap.

The first holds a bitmap, with 1 bit for each 4MB chunk of physical
memory, indicating whether that chunk actually exists in the machine
and is valid.

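For context, the lookup this table serves is the __kern_addr_valid()
helper removed from pgtable_64.h below: a bounds check plus a single
test_bit() indexed by the 4MB chunk number.  Trimmed for reference:

	/* One bit per 4MB chunk; the chunk number is paddr >> ILOG2_4MB (22). */
	static inline bool __kern_addr_valid(unsigned long paddr)
	{
		if ((paddr >> MAX_PHYS_ADDRESS_BITS) != 0UL)
			return false;	/* beyond any supported physical address */
		return test_bit(paddr >> ILOG2_4MB, sparc64_valid_addr_bitmap);
	}
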
The second table is a set of 2-bit values, one per 256MB chunk of RAM
in the system, telling how large a mapping (4MB, 256MB, 2GB, or 16GB
respectively) we can use for that chunk.

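Two bits per 256MB chunk means 32 chunks are packed into each 64-bit
word; the kpte_set_val() helper removed from init_64.c below does the
indexing:

	/* Pack a 2-bit value for 256MB-chunk 'index' into the bitmap:
	 * 32 chunks per 64-bit word, 2 bits per chunk.
	 */
	static void __init kpte_set_val(unsigned long index, unsigned long val)
	{
		unsigned long *ptr = kpte_linear_bitmap;

		val <<= ((index % (BITS_PER_LONG / 2)) * 2);	/* bit position in word */
		ptr += (index / (BITS_PER_LONG / 2));		/* which word */

		*ptr |= val;
	}
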
These tables are huge and take up an enormous amount of the BSS
section of the sparc64 kernel image.  Specifically, the
sparc64_valid_addr_bitmap is 4MB, and the kpte_linear_bitmap is 128K.

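Both sizes follow directly from MAX_PHYS_ADDRESS_BITS being 47 (see
the VALID_ADDR_BITMAP_BYTES and KPTE_BITMAP_BYTES macros removed from
init_64.h at the end of this patch):

	sparc64_valid_addr_bitmap: 2^47 / 2^22 chunks * 1 bit  = 2^25 bits = 4MB
	kpte_linear_bitmap:        2^47 / 2^28 chunks * 2 bits = 2^20 bits = 128K
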
So let's solve the space wastage and the DEBUG_PAGEALLOC problem
at the same time, by using the kernel page tables (as designed) to
manage this information.

We have to keep using large mappings when DEBUG_PAGEALLOC is disabled,
and we do this by encoding huge PMDs and PUDs.

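Concretely, a large linear mapping is encoded by writing a leaf PTE
value (the physical address XORed with the matching
kern_linear_pte_xor[] entry, plus the huge bit) straight into the PMD
or PUD slot.  Trimmed from the new kernel_map_hugepud() below, for the
plain 8GB PUD case:

	/* Encode one 8GB linear mapping directly in the PUD entry. */
	u64 pte_val = vstart ^ kern_linear_pte_xor[2];

	pud_val(*pud) = pte_val | _PAGE_PUD_HUGE;
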
On a T4-2 with 256GB of RAM the kernel page tables take up 16K with
DEBUG_PAGEALLOC disabled and 256MB with it enabled.  Furthermore, this
memory is dynamically allocated at run time rather than coded
statically into the kernel image.

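As a rough accounting: sparc64 uses 8K base pages, so a fully
populated PTE page holds 1024 entries and covers 8MB.  With huge
PUD/PMD entries the 256GB linear mapping needs only a page or two of
dynamically allocated tables (the 16K above), whereas mapping it all
with base pages for DEBUG_PAGEALLOC needs

	256GB / 8MB per PTE page = 32768 PTE pages * 8K = 256MB
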
Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Bob Picco <bob.picco@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/sparc/include/asm/page_64.h    |    3 
 arch/sparc/include/asm/pgtable_64.h |   55 ++---
 arch/sparc/include/asm/tsb.h        |   47 +++-
 arch/sparc/kernel/ktlb.S            |  108 ---------
 arch/sparc/kernel/vmlinux.lds.S     |    5 
 arch/sparc/mm/init_64.c             |  393 +++++++++++++++---------------------
 arch/sparc/mm/init_64.h             |    7 
 7 files changed, 244 insertions(+), 374 deletions(-)

--- a/arch/sparc/include/asm/page_64.h
+++ b/arch/sparc/include/asm/page_64.h
@@ -128,9 +128,6 @@ extern unsigned long PAGE_OFFSET;
  */
 #define MAX_PHYS_ADDRESS_BITS	47
 
-/* These two shift counts are used when indexing sparc64_valid_addr_bitmap
- * and kpte_linear_bitmap.
- */
 #define ILOG2_4MB		22
 #define ILOG2_256MB		28
 
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -79,22 +79,7 @@
 
 #include <linux/sched.h>
 
-extern unsigned long sparc64_valid_addr_bitmap[];
-
-/* Needs to be defined here and not in linux/mm.h, as it is arch dependent */
-static inline bool __kern_addr_valid(unsigned long paddr)
-{
-	if ((paddr >> MAX_PHYS_ADDRESS_BITS) != 0UL)
-		return false;
-	return test_bit(paddr >> ILOG2_4MB, sparc64_valid_addr_bitmap);
-}
-
-static inline bool kern_addr_valid(unsigned long addr)
-{
-	unsigned long paddr = __pa(addr);
-
-	return __kern_addr_valid(paddr);
-}
+bool kern_addr_valid(unsigned long addr);
 
 /* Entries per page directory level. */
 #define PTRS_PER_PTE	(1UL << (PAGE_SHIFT-3))
@@ -122,6 +107,7 @@ static inline bool kern_addr_valid(unsig
 #define _PAGE_R	  	  _AC(0x8000000000000000,UL) /* Keep ref bit uptodate*/
 #define _PAGE_SPECIAL     _AC(0x0200000000000000,UL) /* Special page         */
 #define _PAGE_PMD_HUGE    _AC(0x0100000000000000,UL) /* Huge page            */
+#define _PAGE_PUD_HUGE    _PAGE_PMD_HUGE
 
 /* Advertise support for _PAGE_SPECIAL */
 #define __HAVE_ARCH_PTE_SPECIAL
@@ -668,26 +654,26 @@ static inline unsigned long pmd_large(pm
 	return pte_val(pte) & _PAGE_PMD_HUGE;
 }
 
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-static inline unsigned long pmd_young(pmd_t pmd)
+static inline unsigned long pmd_pfn(pmd_t pmd)
 {
 	pte_t pte = __pte(pmd_val(pmd));
 
-	return pte_young(pte);
+	return pte_pfn(pte);
 }
 
-static inline unsigned long pmd_write(pmd_t pmd)
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static inline unsigned long pmd_young(pmd_t pmd)
 {
 	pte_t pte = __pte(pmd_val(pmd));
 
-	return pte_write(pte);
+	return pte_young(pte);
 }
 
-static inline unsigned long pmd_pfn(pmd_t pmd)
+static inline unsigned long pmd_write(pmd_t pmd)
 {
 	pte_t pte = __pte(pmd_val(pmd));
 
-	return pte_pfn(pte);
+	return pte_write(pte);
 }
 
 static inline unsigned long pmd_trans_huge(pmd_t pmd)
@@ -781,18 +767,15 @@ static inline int pmd_present(pmd_t pmd)
  * the top bits outside of the range of any physical address size we
  * support are clear as well.  We also validate the physical itself.
  */
-#define pmd_bad(pmd)			((pmd_val(pmd) & ~PAGE_MASK) || \
-					 !__kern_addr_valid(pmd_val(pmd)))
+#define pmd_bad(pmd)			(pmd_val(pmd) & ~PAGE_MASK)
 
 #define pud_none(pud)			(!pud_val(pud))
 
-#define pud_bad(pud)			((pud_val(pud) & ~PAGE_MASK) || \
-					 !__kern_addr_valid(pud_val(pud)))
+#define pud_bad(pud)			(pud_val(pud) & ~PAGE_MASK)
 
 #define pgd_none(pgd)			(!pgd_val(pgd))
 
-#define pgd_bad(pgd)			((pgd_val(pgd) & ~PAGE_MASK) || \
-					 !__kern_addr_valid(pgd_val(pgd)))
+#define pgd_bad(pgd)			(pgd_val(pgd) & ~PAGE_MASK)
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 void set_pmd_at(struct mm_struct *mm, unsigned long addr,
@@ -835,6 +818,20 @@ static inline unsigned long __pmd_page(p
 #define pgd_present(pgd)		(pgd_val(pgd) != 0U)
 #define pgd_clear(pgdp)			(pgd_val(*(pgd)) = 0UL)
 
+static inline unsigned long pud_large(pud_t pud)
+{
+	pte_t pte = __pte(pud_val(pud));
+
+	return pte_val(pte) & _PAGE_PMD_HUGE;
+}
+
+static inline unsigned long pud_pfn(pud_t pud)
+{
+	pte_t pte = __pte(pud_val(pud));
+
+	return pte_pfn(pte);
+}
+
 /* Same in both SUN4V and SUN4U.  */
 #define pte_none(pte) 			(!pte_val(pte))
 
--- a/arch/sparc/include/asm/tsb.h
+++ b/arch/sparc/include/asm/tsb.h
@@ -133,9 +133,24 @@ extern struct tsb_phys_patch_entry __tsb
 	sub	TSB, 0x8, TSB;   \
 	TSB_STORE(TSB, TAG);
 
-	/* Do a kernel page table walk.  Leaves physical PTE pointer in
-	 * REG1.  Jumps to FAIL_LABEL on early page table walk termination.
-	 * VADDR will not be clobbered, but REG2 will.
+	/* Do a kernel page table walk.  Leaves valid PTE value in
+	 * REG1.  Jumps to FAIL_LABEL on early page table walk
+	 * termination.  VADDR will not be clobbered, but REG2 will.
+	 *
+	 * There are two masks we must apply to propagate bits from
+	 * the virtual address into the PTE physical address field
+	 * when dealing with huge pages.  This is because the page
+	 * table boundaries do not match the huge page size(s) the
+	 * hardware supports.
+	 *
+	 * In these cases we propagate the bits that are below the
+	 * page table level where we saw the huge page mapping, but
+	 * are still within the relevant physical bits for the huge
+	 * page size in question.  So for PMD mappings (which fall on
+	 * bit 23, for 8MB per PMD) we must propagate bit 22 for a
+	 * 4MB huge page.  For huge PUDs (which fall on bit 33, for
+	 * 8GB per PUD), we have to accommodate 256MB and 2GB huge
+	 * pages.  So for those we propagate bits 32 to 28.
 	 */
 #define KERN_PGTABLE_WALK(VADDR, REG1, REG2, FAIL_LABEL)	\
 	sethi		%hi(swapper_pg_dir), REG1; \
@@ -150,15 +165,35 @@ extern struct tsb_phys_patch_entry __tsb
 	andn		REG2, 0x7, REG2; \
 	ldxa		[REG1 + REG2] ASI_PHYS_USE_EC, REG1; \
 	brz,pn		REG1, FAIL_LABEL; \
-	 sllx		VADDR, 64 - (PMD_SHIFT + PMD_BITS), REG2; \
+	sethi		%uhi(_PAGE_PUD_HUGE), REG2; \
+	brz,pn		REG1, FAIL_LABEL; \
+	 sllx		REG2, 32, REG2; \
+	andcc		REG1, REG2, %g0; \
+	sethi		%hi(0xf8000000), REG2; \
+	bne,pt		%xcc, 697f; \
+	 sllx		REG2, 1, REG2; \
+	sllx		VADDR, 64 - (PMD_SHIFT + PMD_BITS), REG2; \
 	srlx		REG2, 64 - PAGE_SHIFT, REG2; \
 	andn		REG2, 0x7, REG2; \
 	ldxa		[REG1 + REG2] ASI_PHYS_USE_EC, REG1; \
+	sethi		%uhi(_PAGE_PMD_HUGE), REG2; \
 	brz,pn		REG1, FAIL_LABEL; \
-	 sllx		VADDR, 64 - PMD_SHIFT, REG2; \
+	 sllx		REG2, 32, REG2; \
+	andcc		REG1, REG2, %g0; \
+	be,pn		%xcc, 698f; \
+	 sethi		%hi(0x400000), REG2; \
+697:	brgez,pn	REG1, FAIL_LABEL; \
+	 andn		REG1, REG2, REG1; \
+	and		VADDR, REG2, REG2; \
+	ba,pt		%xcc, 699f; \
+	 or		REG1, REG2, REG1; \
+698:	sllx		VADDR, 64 - PMD_SHIFT, REG2; \
 	srlx		REG2, 64 - PAGE_SHIFT, REG2; \
 	andn		REG2, 0x7, REG2; \
-	add		REG1, REG2, REG1;
+	ldxa		[REG1 + REG2] ASI_PHYS_USE_EC, REG1; \
+	brgez,pn	REG1, FAIL_LABEL; \
+	 nop; \
+699:
 
 	/* PMD has been loaded into REG1, interpret the value, seeing
 	 * if it is a HUGE PMD or a normal one.  If it is not valid
--- a/arch/sparc/kernel/ktlb.S
+++ b/arch/sparc/kernel/ktlb.S
@@ -47,14 +47,6 @@ kvmap_itlb_vmalloc_addr:
 	KERN_PGTABLE_WALK(%g4, %g5, %g2, kvmap_itlb_longpath)
 
 	TSB_LOCK_TAG(%g1, %g2, %g7)
-
-	/* Load and check PTE.  */
-	ldxa		[%g5] ASI_PHYS_USE_EC, %g5
-	mov		1, %g7
-	sllx		%g7, TSB_TAG_INVALID_BIT, %g7
-	brgez,a,pn	%g5, kvmap_itlb_longpath
-	 TSB_STORE(%g1, %g7)
-
 	TSB_WRITE(%g1, %g5, %g6)
 
 	/* fallthrough to TLB load */
@@ -118,6 +110,12 @@ kvmap_dtlb_obp:
 	ba,pt		%xcc, kvmap_dtlb_load
 	 nop
 
+kvmap_linear_early:
+	sethi		%hi(kern_linear_pte_xor), %g7
+	ldx		[%g7 + %lo(kern_linear_pte_xor)], %g2
+	ba,pt		%xcc, kvmap_dtlb_tsb4m_load
+	 xor		%g2, %g4, %g5
+
 	.align		32
 kvmap_dtlb_tsb4m_load:
 	TSB_LOCK_TAG(%g1, %g2, %g7)
@@ -146,105 +144,17 @@ kvmap_dtlb_4v:
 	/* Correct TAG_TARGET is already in %g6, check 4mb TSB.  */
 	KERN_TSB4M_LOOKUP_TL1(%g6, %g5, %g1, %g2, %g3, kvmap_dtlb_load)
 #endif
-	/* TSB entry address left in %g1, lookup linear PTE.
-	 * Must preserve %g1 and %g6 (TAG).
-	 */
-kvmap_dtlb_tsb4m_miss:
-	/* Clear the PAGE_OFFSET top virtual bits, shift
-	 * down to get PFN, and make sure PFN is in range.
-	 */
-661:	sllx		%g4, 0, %g5
-	.section	.page_offset_shift_patch, "ax"
-	.word		661b
-	.previous
-
-	/* Check to see if we know about valid memory at the 4MB
-	 * chunk this physical address will reside within.
+	/* Linear mapping TSB lookup failed.  Fallthrough to kernel
+	 * page table based lookup.
 	 */
-661:	srlx		%g5, MAX_PHYS_ADDRESS_BITS, %g2
-	.section	.page_offset_shift_patch, "ax"
-	.word		661b
-	.previous
-
-	brnz,pn		%g2, kvmap_dtlb_longpath
-	 nop
-
-	/* This unconditional branch and delay-slot nop gets patched
-	 * by the sethi sequence once the bitmap is properly setup.
-	 */
-	.globl		valid_addr_bitmap_insn
-valid_addr_bitmap_insn:
-	ba,pt		%xcc, 2f
-	 nop
-	.subsection	2
-	.globl		valid_addr_bitmap_patch
-valid_addr_bitmap_patch:
-	sethi		%hi(sparc64_valid_addr_bitmap), %g7
-	or		%g7, %lo(sparc64_valid_addr_bitmap), %g7
-	.previous
-
-661:	srlx		%g5, ILOG2_4MB, %g2
-	.section	.page_offset_shift_patch, "ax"
-	.word		661b
-	.previous
-
-	srlx		%g2, 6, %g5
-	and		%g2, 63, %g2
-	sllx		%g5, 3, %g5
-	ldx		[%g7 + %g5], %g5
-	mov		1, %g7
-	sllx		%g7, %g2, %g7
-	andcc		%g5, %g7, %g0
-	be,pn		%xcc, kvmap_dtlb_longpath
-
-2:	 sethi		%hi(kpte_linear_bitmap), %g2
-
-	/* Get the 256MB physical address index. */
-661:	sllx		%g4, 0, %g5
-	.section	.page_offset_shift_patch, "ax"
-	.word		661b
-	.previous
-
-	or		%g2, %lo(kpte_linear_bitmap), %g2
-
-661:	srlx		%g5, ILOG2_256MB, %g5
-	.section	.page_offset_shift_patch, "ax"
-	.word		661b
-	.previous
-
-	and		%g5, (32 - 1), %g7
-
-	/* Divide by 32 to get the offset into the bitmask.  */
-	srlx		%g5, 5, %g5
-	add		%g7, %g7, %g7
-	sllx		%g5, 3, %g5
-
-	/* kern_linear_pte_xor[(mask >> shift) & 3)] */
-	ldx		[%g2 + %g5], %g2
-	srlx		%g2, %g7, %g7
-	sethi		%hi(kern_linear_pte_xor), %g5
-	and		%g7, 3, %g7
-	or		%g5, %lo(kern_linear_pte_xor), %g5
-	sllx		%g7, 3, %g7
-	ldx		[%g5 + %g7], %g2
-
 	.globl		kvmap_linear_patch
 kvmap_linear_patch:
-	ba,pt		%xcc, kvmap_dtlb_tsb4m_load
-	 xor		%g2, %g4, %g5
+	ba,a,pt		%xcc, kvmap_linear_early
 
 kvmap_dtlb_vmalloc_addr:
 	KERN_PGTABLE_WALK(%g4, %g5, %g2, kvmap_dtlb_longpath)
 
 	TSB_LOCK_TAG(%g1, %g2, %g7)
-
-	/* Load and check PTE.  */
-	ldxa		[%g5] ASI_PHYS_USE_EC, %g5
-	mov		1, %g7
-	sllx		%g7, TSB_TAG_INVALID_BIT, %g7
-	brgez,a,pn	%g5, kvmap_dtlb_longpath
-	 TSB_STORE(%g1, %g7)
-
 	TSB_WRITE(%g1, %g5, %g6)
 
 	/* fallthrough to TLB load */
--- a/arch/sparc/kernel/vmlinux.lds.S
+++ b/arch/sparc/kernel/vmlinux.lds.S
@@ -122,11 +122,6 @@ SECTIONS
 		*(.swapper_4m_tsb_phys_patch)
 		__swapper_4m_tsb_phys_patch_end = .;
 	}
-	.page_offset_shift_patch : {
-		__page_offset_shift_patch = .;
-		*(.page_offset_shift_patch)
-		__page_offset_shift_patch_end = .;
-	}
 	.popc_3insn_patch : {
 		__popc_3insn_patch = .;
 		*(.popc_3insn_patch)
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -75,7 +75,6 @@ unsigned long kern_linear_pte_xor[4] __r
  * 'cpu' properties, but we need to have this table setup before the
  * MDESC is initialized.
  */
-unsigned long kpte_linear_bitmap[KPTE_BITMAP_BYTES / sizeof(unsigned long)];
 
 #ifndef CONFIG_DEBUG_PAGEALLOC
 /* A special kernel TSB for 4MB, 256MB, 2GB and 16GB linear mappings.
@@ -84,6 +83,7 @@ unsigned long kpte_linear_bitmap[KPTE_BI
  */
 extern struct tsb swapper_4m_tsb[KERNEL_TSB4M_NENTRIES];
 #endif
+extern struct tsb swapper_tsb[KERNEL_TSB_NENTRIES];
 
 static unsigned long cpu_pgsz_mask;
 
@@ -165,10 +165,6 @@ static void __init read_obp_memory(const
 	     cmp_p64, NULL);
 }
 
-unsigned long sparc64_valid_addr_bitmap[VALID_ADDR_BITMAP_BYTES /
-					sizeof(unsigned long)];
-EXPORT_SYMBOL(sparc64_valid_addr_bitmap);
-
 /* Kernel physical address base and size in bytes.  */
 unsigned long kern_base __read_mostly;
 unsigned long kern_size __read_mostly;
@@ -1369,9 +1365,145 @@ static unsigned long __init bootmem_init
 static struct linux_prom64_registers pall[MAX_BANKS] __initdata;
 static int pall_ents __initdata;
 
-#ifdef CONFIG_DEBUG_PAGEALLOC
+static unsigned long max_phys_bits = 40;
+
+bool kern_addr_valid(unsigned long addr)
+{
+	unsigned long above = ((long)addr) >> max_phys_bits;
+	pgd_t *pgd;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte;
+
+	if (above != 0 && above != -1UL)
+		return false;
+
+	if (addr >= (unsigned long) KERNBASE &&
+	    addr < (unsigned long)&_end)
+		return true;
+
+	if (addr >= PAGE_OFFSET) {
+		unsigned long pa = __pa(addr);
+
+		return pfn_valid(pa >> PAGE_SHIFT);
+	}
+
+	pgd = pgd_offset_k(addr);
+	if (pgd_none(*pgd))
+		return 0;
+
+	pud = pud_offset(pgd, addr);
+	if (pud_none(*pud))
+		return 0;
+
+	if (pud_large(*pud))
+		return pfn_valid(pud_pfn(*pud));
+
+	pmd = pmd_offset(pud, addr);
+	if (pmd_none(*pmd))
+		return 0;
+
+	if (pmd_large(*pmd))
+		return pfn_valid(pmd_pfn(*pmd));
+
+	pte = pte_offset_kernel(pmd, addr);
+	if (pte_none(*pte))
+		return 0;
+
+	return pfn_valid(pte_pfn(*pte));
+}
+EXPORT_SYMBOL(kern_addr_valid);
+
+static unsigned long __ref kernel_map_hugepud(unsigned long vstart,
+					      unsigned long vend,
+					      pud_t *pud)
+{
+	const unsigned long mask16gb = (1UL << 34) - 1UL;
+	u64 pte_val = vstart;
+
+	/* Each PUD is 8GB */
+	if ((vstart & mask16gb) ||
+	    (vend - vstart <= mask16gb)) {
+		pte_val ^= kern_linear_pte_xor[2];
+		pud_val(*pud) = pte_val | _PAGE_PUD_HUGE;
+
+		return vstart + PUD_SIZE;
+	}
+
+	pte_val ^= kern_linear_pte_xor[3];
+	pte_val |= _PAGE_PUD_HUGE;
+
+	vend = vstart + mask16gb + 1UL;
+	while (vstart < vend) {
+		pud_val(*pud) = pte_val;
+
+		pte_val += PUD_SIZE;
+		vstart += PUD_SIZE;
+		pud++;
+	}
+	return vstart;
+}
+
+static bool kernel_can_map_hugepud(unsigned long vstart, unsigned long vend,
+				   bool guard)
+{
+	if (guard && !(vstart & ~PUD_MASK) && (vend - vstart) >= PUD_SIZE)
+		return true;
+
+	return false;
+}
+
+static unsigned long __ref kernel_map_hugepmd(unsigned long vstart,
+					      unsigned long vend,
+					      pmd_t *pmd)
+{
+	const unsigned long mask256mb = (1UL << 28) - 1UL;
+	const unsigned long mask2gb = (1UL << 31) - 1UL;
+	u64 pte_val = vstart;
+
+	/* Each PMD is 8MB */
+	if ((vstart & mask256mb) ||
+	    (vend - vstart <= mask256mb)) {
+		pte_val ^= kern_linear_pte_xor[0];
+		pmd_val(*pmd) = pte_val | _PAGE_PMD_HUGE;
+
+		return vstart + PMD_SIZE;
+	}
+
+	if ((vstart & mask2gb) ||
+	    (vend - vstart <= mask2gb)) {
+		pte_val ^= kern_linear_pte_xor[1];
+		pte_val |= _PAGE_PMD_HUGE;
+		vend = vstart + mask256mb + 1UL;
+	} else {
+		pte_val ^= kern_linear_pte_xor[2];
+		pte_val |= _PAGE_PMD_HUGE;
+		vend = vstart + mask2gb + 1UL;
+	}
+
+	while (vstart < vend) {
+		pmd_val(*pmd) = pte_val;
+
+		pte_val += PMD_SIZE;
+		vstart += PMD_SIZE;
+		pmd++;
+	}
+
+	return vstart;
+}
+
+static bool kernel_can_map_hugepmd(unsigned long vstart, unsigned long vend,
+				   bool guard)
+{
+	if (guard && !(vstart & ~PMD_MASK) && (vend - vstart) >= PMD_SIZE)
+		return true;
+
+	return false;
+}
+
 static unsigned long __ref kernel_map_range(unsigned long pstart,
-					    unsigned long pend, pgprot_t prot)
+					    unsigned long pend, pgprot_t prot,
+					    bool use_huge)
 {
 	unsigned long vstart = PAGE_OFFSET + pstart;
 	unsigned long vend = PAGE_OFFSET + pend;
@@ -1401,15 +1533,23 @@ static unsigned long __ref kernel_map_ra
 		if (pud_none(*pud)) {
 			pmd_t *new;
 
+			if (kernel_can_map_hugepud(vstart, vend, use_huge)) {
+				vstart = kernel_map_hugepud(vstart, vend, pud);
+				continue;
+			}
 			new = __alloc_bootmem(PAGE_SIZE, PAGE_SIZE, PAGE_SIZE);
 			alloc_bytes += PAGE_SIZE;
 			pud_populate(&init_mm, pud, new);
 		}
 
 		pmd = pmd_offset(pud, vstart);
-		if (!pmd_present(*pmd)) {
+		if (pmd_none(*pmd)) {
 			pte_t *new;
 
+			if (kernel_can_map_hugepmd(vstart, vend, use_huge)) {
+				vstart = kernel_map_hugepmd(vstart, vend, pmd);
+				continue;
+			}
 			new = __alloc_bootmem(PAGE_SIZE, PAGE_SIZE, PAGE_SIZE);
 			alloc_bytes += PAGE_SIZE;
 			pmd_populate_kernel(&init_mm, pmd, new);
@@ -1432,100 +1572,34 @@ static unsigned long __ref kernel_map_ra
 	return alloc_bytes;
 }
 
-extern unsigned int kvmap_linear_patch[1];
-#endif /* CONFIG_DEBUG_PAGEALLOC */
-
-static void __init kpte_set_val(unsigned long index, unsigned long val)
-{
-	unsigned long *ptr = kpte_linear_bitmap;
-
-	val <<= ((index % (BITS_PER_LONG / 2)) * 2);
-	ptr += (index / (BITS_PER_LONG / 2));
-
-	*ptr |= val;
-}
-
-static const unsigned long kpte_shift_min = 28; /* 256MB */
-static const unsigned long kpte_shift_max = 34; /* 16GB */
-static const unsigned long kpte_shift_incr = 3;
-
-static unsigned long kpte_mark_using_shift(unsigned long start, unsigned long end,
-					   unsigned long shift)
+static void __init flush_all_kernel_tsbs(void)
 {
-	unsigned long size = (1UL << shift);
-	unsigned long mask = (size - 1UL);
-	unsigned long remains = end - start;
-	unsigned long val;
-
-	if (remains < size || (start & mask))
-		return start;
-
-	/* VAL maps:
-	 *
-	 *	shift 28 --> kern_linear_pte_xor index 1
-	 *	shift 31 --> kern_linear_pte_xor index 2
-	 *	shift 34 --> kern_linear_pte_xor index 3
-	 */
-	val = ((shift - kpte_shift_min) / kpte_shift_incr) + 1;
-
-	remains &= ~mask;
-	if (shift != kpte_shift_max)
-		remains = size;
-
-	while (remains) {
-		unsigned long index = start >> kpte_shift_min;
+	int i;
 
-		kpte_set_val(index, val);
+	for (i = 0; i < KERNEL_TSB_NENTRIES; i++) {
+		struct tsb *ent = &swapper_tsb[i];
 
-		start += 1UL << kpte_shift_min;
-		remains -= 1UL << kpte_shift_min;
+		ent->tag = (1UL << TSB_TAG_INVALID_BIT);
 	}
+#ifndef CONFIG_DEBUG_PAGEALLOC
+	for (i = 0; i < KERNEL_TSB4M_NENTRIES; i++) {
+		struct tsb *ent = &swapper_4m_tsb[i];
 
-	return start;
-}
-
-static void __init mark_kpte_bitmap(unsigned long start, unsigned long end)
-{
-	unsigned long smallest_size, smallest_mask;
-	unsigned long s;
-
-	smallest_size = (1UL << kpte_shift_min);
-	smallest_mask = (smallest_size - 1UL);
-
-	while (start < end) {
-		unsigned long orig_start = start;
-
-		for (s = kpte_shift_max; s >= kpte_shift_min; s -= kpte_shift_incr) {
-			start = kpte_mark_using_shift(start, end, s);
-
-			if (start != orig_start)
-				break;
-		}
-
-		if (start == orig_start)
-			start = (start + smallest_size) & ~smallest_mask;
+		ent->tag = (1UL << TSB_TAG_INVALID_BIT);
 	}
+#endif
 }
 
-static void __init init_kpte_bitmap(void)
-{
-	unsigned long i;
-
-	for (i = 0; i < pall_ents; i++) {
-		unsigned long phys_start, phys_end;
-
-		phys_start = pall[i].phys_addr;
-		phys_end = phys_start + pall[i].reg_size;
-
-		mark_kpte_bitmap(phys_start, phys_end);
-	}
-}
+extern unsigned int kvmap_linear_patch[1];
 
 static void __init kernel_physical_mapping_init(void)
 {
-#ifdef CONFIG_DEBUG_PAGEALLOC
 	unsigned long i, mem_alloced = 0UL;
+	bool use_huge = true;
 
+#ifdef CONFIG_DEBUG_PAGEALLOC
+	use_huge = false;
+#endif
 	for (i = 0; i < pall_ents; i++) {
 		unsigned long phys_start, phys_end;
 
@@ -1533,7 +1607,7 @@ static void __init kernel_physical_mappi
 		phys_end = phys_start + pall[i].reg_size;
 
 		mem_alloced += kernel_map_range(phys_start, phys_end,
-						PAGE_KERNEL);
+						PAGE_KERNEL, use_huge);
 	}
 
 	printk("Allocated %ld bytes for kernel page tables.\n",
@@ -1542,8 +1616,9 @@ static void __init kernel_physical_mappi
 	kvmap_linear_patch[0] = 0x01000000; /* nop */
 	flushi(&kvmap_linear_patch[0]);
 
+	flush_all_kernel_tsbs();
+
 	__flush_tlb_all();
-#endif
 }
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
@@ -1553,7 +1628,7 @@ void kernel_map_pages(struct page *page,
 	unsigned long phys_end = phys_start + (numpages * PAGE_SIZE);
 
 	kernel_map_range(phys_start, phys_end,
-			 (enable ? PAGE_KERNEL : __pgprot(0)));
+			 (enable ? PAGE_KERNEL : __pgprot(0)), false);
 
 	flush_tsb_kernel_range(PAGE_OFFSET + phys_start,
 			       PAGE_OFFSET + phys_end);
@@ -1581,62 +1656,11 @@ unsigned long __init find_ecache_flush_s
 unsigned long PAGE_OFFSET;
 EXPORT_SYMBOL(PAGE_OFFSET);
 
-static void __init page_offset_shift_patch_one(unsigned int *insn, unsigned long phys_bits)
-{
-	unsigned long final_shift;
-	unsigned int val = *insn;
-	unsigned int cnt;
-
-	/* We are patching in ilog2(max_supported_phys_address), and
-	 * we are doing so in a manner similar to a relocation addend.
-	 * That is, we are adding the shift value to whatever value
-	 * is in the shift instruction count field already.
-	 */
-	cnt = (val & 0x3f);
-	val &= ~0x3f;
-
-	/* If we are trying to shift >= 64 bits, clear the destination
-	 * register.  This can happen when phys_bits ends up being equal
-	 * to MAX_PHYS_ADDRESS_BITS.
-	 */
-	final_shift = (cnt + (64 - phys_bits));
-	if (final_shift >= 64) {
-		unsigned int rd = (val >> 25) & 0x1f;
-
-		val = 0x80100000 | (rd << 25);
-	} else {
-		val |= final_shift;
-	}
-	*insn = val;
-
-	__asm__ __volatile__("flush	%0"
-			     : /* no outputs */
-			     : "r" (insn));
-}
-
-static void __init page_offset_shift_patch(unsigned long phys_bits)
-{
-	extern unsigned int __page_offset_shift_patch;
-	extern unsigned int __page_offset_shift_patch_end;
-	unsigned int *p;
-
-	p = &__page_offset_shift_patch;
-	while (p < &__page_offset_shift_patch_end) {
-		unsigned int *insn = (unsigned int *)(unsigned long)*p;
-
-		page_offset_shift_patch_one(insn, phys_bits);
-
-		p++;
-	}
-}
-
 unsigned long sparc64_va_hole_top =    0xfffff80000000000UL;
 unsigned long sparc64_va_hole_bottom = 0x0000080000000000UL;
 
 static void __init setup_page_offset(void)
 {
-	unsigned long max_phys_bits = 40;
-
 	if (tlb_type == cheetah || tlb_type == cheetah_plus) {
 		/* Cheetah/Panther support a full 64-bit virtual
 		 * address, so we can use all that our page tables
@@ -1685,8 +1709,6 @@ static void __init setup_page_offset(voi
 
 	pr_info("PAGE_OFFSET is 0x%016lx (max_phys_bits == %lu)\n",
 		PAGE_OFFSET, max_phys_bits);
-
-	page_offset_shift_patch(max_phys_bits);
 }
 
 static void __init tsb_phys_patch(void)
@@ -1731,7 +1753,6 @@ static void __init tsb_phys_patch(void)
 #define NUM_KTSB_DESCR	1
 #endif
 static struct hv_tsb_descr ktsb_descr[NUM_KTSB_DESCR];
-extern struct tsb swapper_tsb[KERNEL_TSB_NENTRIES];
 
 /* The swapper TSBs are loaded with a base sequence of:
  *
@@ -2030,11 +2051,9 @@ void __init paging_init(void)
 
 	pmd = swapper_low_pmd_dir + (shift / sizeof(pmd_t));
 	pud_set(&swapper_pud_dir[0], pmd);
-	
+
 	inherit_prom_mappings();
 	
-	init_kpte_bitmap();
-
 	/* Ok, we can use our TLB miss and window trap handlers safely.  */
 	setup_tba();
 
@@ -2141,70 +2160,6 @@ int page_in_phys_avail(unsigned long pad
 	return 0;
 }
 
-static struct linux_prom64_registers pavail_rescan[MAX_BANKS] __initdata;
-static int pavail_rescan_ents __initdata;
-
-/* Certain OBP calls, such as fetching "available" properties, can
- * claim physical memory.  So, along with initializing the valid
- * address bitmap, what we do here is refetch the physical available
- * memory list again, and make sure it provides at least as much
- * memory as 'pavail' does.
- */
-static void __init setup_valid_addr_bitmap_from_pavail(unsigned long *bitmap)
-{
-	int i;
-
-	read_obp_memory("available", &pavail_rescan[0], &pavail_rescan_ents);
-
-	for (i = 0; i < pavail_ents; i++) {
-		unsigned long old_start, old_end;
-
-		old_start = pavail[i].phys_addr;
-		old_end = old_start + pavail[i].reg_size;
-		while (old_start < old_end) {
-			int n;
-
-			for (n = 0; n < pavail_rescan_ents; n++) {
-				unsigned long new_start, new_end;
-
-				new_start = pavail_rescan[n].phys_addr;
-				new_end = new_start +
-					pavail_rescan[n].reg_size;
-
-				if (new_start <= old_start &&
-				    new_end >= (old_start + PAGE_SIZE)) {
-					set_bit(old_start >> ILOG2_4MB, bitmap);
-					goto do_next_page;
-				}
-			}
-
-			prom_printf("mem_init: Lost memory in pavail\n");
-			prom_printf("mem_init: OLD start[%lx] size[%lx]\n",
-				    pavail[i].phys_addr,
-				    pavail[i].reg_size);
-			prom_printf("mem_init: NEW start[%lx] size[%lx]\n",
-				    pavail_rescan[i].phys_addr,
-				    pavail_rescan[i].reg_size);
-			prom_printf("mem_init: Cannot continue, aborting.\n");
-			prom_halt();
-
-		do_next_page:
-			old_start += PAGE_SIZE;
-		}
-	}
-}
-
-static void __init patch_tlb_miss_handler_bitmap(void)
-{
-	extern unsigned int valid_addr_bitmap_insn[];
-	extern unsigned int valid_addr_bitmap_patch[];
-
-	valid_addr_bitmap_insn[1] = valid_addr_bitmap_patch[1];
-	mb();
-	valid_addr_bitmap_insn[0] = valid_addr_bitmap_patch[0];
-	flushi(&valid_addr_bitmap_insn[0]);
-}
-
 static void __init register_page_bootmem_info(void)
 {
 #ifdef CONFIG_NEED_MULTIPLE_NODES
@@ -2217,18 +2172,6 @@ static void __init register_page_bootmem
 }
 void __init mem_init(void)
 {
-	unsigned long addr, last;
-
-	addr = PAGE_OFFSET + kern_base;
-	last = PAGE_ALIGN(kern_size) + addr;
-	while (addr < last) {
-		set_bit(__pa(addr) >> ILOG2_4MB, sparc64_valid_addr_bitmap);
-		addr += PAGE_SIZE;
-	}
-
-	setup_valid_addr_bitmap_from_pavail(sparc64_valid_addr_bitmap);
-	patch_tlb_miss_handler_bitmap();
-
 	high_memory = __va(last_valid_pfn << PAGE_SHIFT);
 
 	register_page_bootmem_info();
--- a/arch/sparc/mm/init_64.h
+++ b/arch/sparc/mm/init_64.h
@@ -8,15 +8,8 @@
  */
 
 #define MAX_PHYS_ADDRESS	(1UL << MAX_PHYS_ADDRESS_BITS)
-#define KPTE_BITMAP_CHUNK_SZ		(256UL * 1024UL * 1024UL)
-#define KPTE_BITMAP_BYTES	\
-	((MAX_PHYS_ADDRESS / KPTE_BITMAP_CHUNK_SZ) / 4)
-#define VALID_ADDR_BITMAP_CHUNK_SZ	(4UL * 1024UL * 1024UL)
-#define VALID_ADDR_BITMAP_BYTES	\
-	((MAX_PHYS_ADDRESS / VALID_ADDR_BITMAP_CHUNK_SZ) / 8)
 
 extern unsigned long kern_linear_pte_xor[4];
-extern unsigned long kpte_linear_bitmap[KPTE_BITMAP_BYTES / sizeof(unsigned long)];
 extern unsigned int sparc64_highest_unlocked_tlb_ent;
 extern unsigned long sparc64_kern_pri_context;
 extern unsigned long sparc64_kern_pri_nuc_bits;


