linuxppc-dev.lists.ozlabs.org archive mirror
* [PATCH 0/20] powerpc: base 64-bit Book3E processor support (v2)
@ 2009-07-24  9:15 Benjamin Herrenschmidt
  2009-07-24  9:15 ` [PATCH 1/20] powerpc/mm: Fix misplaced #endif in pgtable-ppc64-64k.h Benjamin Herrenschmidt
                   ` (19 more replies)
  0 siblings, 20 replies; 31+ messages in thread
From: Benjamin Herrenschmidt @ 2009-07-24  9:15 UTC (permalink / raw)
  To: linuxppc-dev

Here is a series of patches that implement some basic support
for 64-bit Book3E processors that comply with architecture 2.06.

There is no specific processor announced yet. The patches take
some shortcuts, which means they currently rely on an implementation
that supports MMU v2 with the "HES" feature (HW entry select) and
the "TLB reservation" feature. They also assume a single unified
TLB array. It shouldn't be very hard to implement support for other
variants of the architecture on top of this, though.

The current set of patches has no proper support yet for hugetlb,
nor for the "special" interrupt levels (debug, critical and machine
check). Some minimal support for the debug/critical levels is
provided, specifically for the "Debug" interrupt (single step etc...),
and only when it occurs from within user space code.

The intent is to merge these in 2.6.32. They rely on pretty much
all the other patches I've been posting lately, including the
generic changes that add the virtual address argument to pte_free_tlb.
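
For reference, the shape of that pte_free_tlb dependency is roughly
the following (an illustrative sketch, not the exact upstream diff):
the generic page table freeing path now passes the virtual address
down to the architecture hook, e.g. in free_pte_range():

	/* before: only the gather and the page table page */
	pte_free_tlb(tlb, token);

	/* after: the unmapped virtual address is passed as well, so
	 * architectures can use it when batching the TLB flush
	 */
	pte_free_tlb(tlb, token, addr);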

v2: Various fixes, some addressing comments received, and a whole
bunch fixing other issues, including breakage of existing platforms.

* [PATCH 19/20] powerpc/mm: Add support for SPARSEMEM_VMEMMAP on 64-bit Book3E
@ 2009-07-23  5:59 Benjamin Herrenschmidt
  0 siblings, 0 replies; 31+ messages in thread
From: Benjamin Herrenschmidt @ 2009-07-23  5:59 UTC (permalink / raw)
  To: linuxppc-dev

The base TLB support didn't include support for SPARSEMEM_VMEMMAP;
although we did carve out some virtual space for it, the necessary
support code wasn't there. This implements it using 16M pages for
now, though the page size could easily be changed at runtime if
necessary.
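
For a sense of scale (assuming struct page is around 64 bytes on
64-bit): one 16M vmemmap page holds 16M / 64 = 256K struct page
entries, so a single mapping covers roughly 1G of RAM with 4K base
pages (16G with 64K pages), and the whole vmemmap only needs a
handful of TLB entries.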

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---

 arch/powerpc/include/asm/mmu-book3e.h    |    1 
 arch/powerpc/include/asm/pgtable-ppc64.h |    3 +
 arch/powerpc/mm/init_64.c                |   55 +++++++++++++++++++++++++++----
 arch/powerpc/mm/mmu_decl.h               |    7 +++
 arch/powerpc/mm/pgtable_64.c             |    2 -
 arch/powerpc/mm/tlb_nohash.c             |   11 +++++-
 6 files changed, 68 insertions(+), 11 deletions(-)

--- linux-work.orig/arch/powerpc/include/asm/mmu-book3e.h	2009-07-23 15:15:56.000000000 +1000
+++ linux-work/arch/powerpc/include/asm/mmu-book3e.h	2009-07-23 15:16:14.000000000 +1000
@@ -219,6 +219,7 @@ extern struct mmu_psize_def mmu_psize_de
 #endif
 
 extern int mmu_linear_psize;
+extern int mmu_vmemmap_psize;
 
 #endif /* !__ASSEMBLY__ */
 
Index: linux-work/arch/powerpc/include/asm/pgtable-ppc64.h
===================================================================
--- linux-work.orig/arch/powerpc/include/asm/pgtable-ppc64.h	2009-07-23 15:20:13.000000000 +1000
+++ linux-work/arch/powerpc/include/asm/pgtable-ppc64.h	2009-07-23 15:21:35.000000000 +1000
@@ -46,6 +46,7 @@
 /*
  * The vmalloc space starts at the beginning of that region, and
  * occupies half of it on hash CPUs and a quarter of it on Book3E
+ * (we keep a quarter for the virtual memmap)
  */
 #define VMALLOC_START	KERN_VIRT_START
 #ifdef CONFIG_PPC_BOOK3E
@@ -83,7 +84,7 @@
 
 #define VMALLOC_REGION_ID	(REGION_ID(VMALLOC_START))
 #define KERNEL_REGION_ID	(REGION_ID(PAGE_OFFSET))
-#define VMEMMAP_REGION_ID	(0xfUL)
+#define VMEMMAP_REGION_ID	(0xfUL)	/* Server only */
 #define USER_REGION_ID		(0UL)
 
 /*
Index: linux-work/arch/powerpc/mm/init_64.c
===================================================================
--- linux-work.orig/arch/powerpc/mm/init_64.c	2009-07-23 15:12:36.000000000 +1000
+++ linux-work/arch/powerpc/mm/init_64.c	2009-07-23 15:54:07.000000000 +1000
@@ -205,6 +205,47 @@ static int __meminit vmemmap_populated(u
 	return 0;
 }
 
+/* On hash-based CPUs, the vmemmap is bolted in the hash table.
+ *
+ * On Book3E CPUs, the vmemmap is currently mapped in the top half of
+ * the vmalloc space using normal page tables, though the size of
+ * pages encoded in the PTEs can be different
+ */
+
+#ifdef CONFIG_PPC_BOOK3E
+static void __meminit vmemmap_create_mapping(unsigned long start,
+					     unsigned long page_size,
+					     unsigned long phys)
+{
+	/* Create a PTE encoding without page size */
+	unsigned long i, flags = _PAGE_PRESENT | _PAGE_ACCESSED |
+		_PAGE_KERNEL_RW;
+
+	/* PTEs only contain page size encodings up to 32M */
+	BUG_ON(mmu_psize_defs[mmu_vmemmap_psize].enc > 0xf);
+
+	/* Encode the size in the PTE */
+	flags |= mmu_psize_defs[mmu_vmemmap_psize].enc << 8;
+
+	/* For each PTE for that area, map things. Note that we don't
+	 * increment phys because all PTEs are of the large size and
+	 * thus must have the low bits clear
+	 */
+	for (i = 0; i < page_size; i += PAGE_SIZE)
+		BUG_ON(map_kernel_page(start + i, phys, flags));
+}
+#else /* CONFIG_PPC_BOOK3E */
+static void __meminit vmemmap_create_mapping(unsigned long start,
+					     unsigned long page_size,
+					     unsigned long phys)
+{
+	int  mapped = htab_bolt_mapping(start, start + page_size, phys,
+					PAGE_KERNEL, mmu_vmemmap_psize,
+					mmu_kernel_ssize);
+	BUG_ON(mapped < 0);
+}
+#endif /* CONFIG_PPC_BOOK3E */
+
 int __meminit vmemmap_populate(struct page *start_page,
 			       unsigned long nr_pages, int node)
 {
@@ -215,8 +256,11 @@ int __meminit vmemmap_populate(struct pa
 	/* Align to the page size of the linear mapping. */
 	start = _ALIGN_DOWN(start, page_size);
 
+	pr_debug("vmemmap_populate page %p, %ld pages, node %d\n",
+		 start_page, nr_pages, node);
+	pr_debug(" -> map %lx..%lx\n", start, end);
+
 	for (; start < end; start += page_size) {
-		int mapped;
 		void *p;
 
 		if (vmemmap_populated(start, page_size))
@@ -226,13 +270,10 @@ int __meminit vmemmap_populate(struct pa
 		if (!p)
 			return -ENOMEM;
 
-		pr_debug("vmemmap %08lx allocated at %p, physical %08lx.\n",
-			start, p, __pa(p));
+		pr_debug("      * %016lx..%016lx allocated at %p\n",
+			 start, start + page_size, p);
 
-		mapped = htab_bolt_mapping(start, start + page_size, __pa(p),
-					   pgprot_val(PAGE_KERNEL),
-					   mmu_vmemmap_psize, mmu_kernel_ssize);
-		BUG_ON(mapped < 0);
+		vmemmap_create_mapping(start, page_size, __pa(p));
 	}
 
 	return 0;
Index: linux-work/arch/powerpc/mm/tlb_nohash.c
===================================================================
--- linux-work.orig/arch/powerpc/mm/tlb_nohash.c	2009-07-23 15:16:27.000000000 +1000
+++ linux-work/arch/powerpc/mm/tlb_nohash.c	2009-07-23 15:18:11.000000000 +1000
@@ -77,6 +77,7 @@ struct mmu_psize_def mmu_psize_defs[MMU_
 
 int mmu_linear_psize;		/* Page size used for the linear mapping */
 int mmu_pte_psize;		/* Page size used for PTE pages */
+int mmu_vmemmap_psize;		/* Page size used for the virtual mem map */
 int book3e_htw_enabled;		/* Is HW tablewalk enabled ? */
 unsigned long linear_map_top;	/* Top of linear mapping */
 
@@ -333,10 +334,18 @@ static void __early_init_mmu(int boot_cp
 	unsigned int mas4;
 
 	/* XXX This will have to be decided at runtime, but right
-	 * now our boot and TLB miss code hard wires it
+	 * now our boot and TLB miss code hard wires it. Ideally
+	 * we should find out a suitable page size and patch the
+	 * TLB miss code (either that or use the PACA to store
+	 * the value we want)
 	 */
 	mmu_linear_psize = MMU_PAGE_1G;
 
+	/* XXX This should be decided at runtime based on supported
+	 * page sizes in the TLB, but for now let's assume 16M is
+	 * always there and a good fit (which it probably is)
+	 */
+	mmu_vmemmap_psize = MMU_PAGE_16M;
 
 	/* Check if HW tablewalk is present, and if yes, enable it by:
 	 *
Index: linux-work/arch/powerpc/mm/mmu_decl.h
===================================================================
--- linux-work.orig/arch/powerpc/mm/mmu_decl.h	2009-07-23 15:27:53.000000000 +1000
+++ linux-work/arch/powerpc/mm/mmu_decl.h	2009-07-23 15:28:58.000000000 +1000
@@ -117,7 +117,12 @@ extern unsigned int rtas_data, rtas_size
 struct hash_pte;
 extern struct hash_pte *Hash, *Hash_end;
 extern unsigned long Hash_size, Hash_mask;
-#endif
+
+#endif /* CONFIG_PPC32 */
+
+#ifdef CONFIG_PPC64
+extern int map_kernel_page(unsigned long ea, unsigned long pa, int flags);
+#endif /* CONFIG_PPC64 */
 
 extern unsigned long ioremap_bot;
 extern unsigned long __max_low_memory;
Index: linux-work/arch/powerpc/mm/pgtable_64.c
===================================================================
--- linux-work.orig/arch/powerpc/mm/pgtable_64.c	2009-07-23 15:27:39.000000000 +1000
+++ linux-work/arch/powerpc/mm/pgtable_64.c	2009-07-23 15:27:42.000000000 +1000
@@ -79,7 +79,7 @@ static void *early_alloc_pgtable(unsigne
  * map_kernel_page adds an entry to the ioremap page table
  * and adds an entry to the HPT, possibly bolting it
  */
-static int map_kernel_page(unsigned long ea, unsigned long pa, int flags)
+int map_kernel_page(unsigned long ea, unsigned long pa, int flags)
 {
 	pgd_t *pgdp;
 	pud_t *pudp;

Thread overview: 31+ messages
2009-07-24  9:15 [PATCH 0/20] powerpc: base 64-bit Book3E processor support (v2) Benjamin Herrenschmidt
2009-07-24  9:15 ` [PATCH 1/20] powerpc/mm: Fix misplaced #endif in pgtable-ppc64-64k.h Benjamin Herrenschmidt
2009-07-24  9:15 ` [PATCH 2/20] powerpc/of: Remove useless register save/restore when calling OF back Benjamin Herrenschmidt
2009-07-24  9:15 ` [PATCH 3/20] powerpc/mm: Add HW threads support to no_hash TLB management Benjamin Herrenschmidt
2009-07-31  3:12   ` Kumar Gala
2009-07-31  3:35     ` Kumar Gala
2009-07-31 22:29       ` Benjamin Herrenschmidt
2009-08-03  2:03         ` Michael Ellerman
2009-08-03 16:21           ` Kumar Gala
2009-08-03 17:06             ` Dave Kleikamp
2009-08-03 17:57               ` Dave Kleikamp
2009-08-04  7:22                 ` Benjamin Herrenschmidt
2009-08-03 21:03           ` Benjamin Herrenschmidt
2009-07-24  9:15 ` [PATCH 4/20] powerpc/mm: Add opcode definitions for tlbivax and tlbsrx Benjamin Herrenschmidt
2009-07-24  9:15 ` [PATCH 5/20] powerpc/mm: Add more bit definitions for Book3E MMU registers Benjamin Herrenschmidt
2009-07-24  9:15 ` [PATCH 6/20] powerpc/mm: Add support for early ioremap on non-hash 64-bit processors Benjamin Herrenschmidt
2009-07-24  9:15 ` [PATCH 7/20] powerpc: Modify some ppc_asm.h macros to accomodate 64-bits Book3E Benjamin Herrenschmidt
2009-07-24  9:15 ` [PATCH 8/20] powerpc/mm: Make low level TLB flush ops on BookE take additional args (v2) Benjamin Herrenschmidt
2009-07-24  9:15 ` [PATCH 9/20] powerpc/mm: Call mmu_context_init() from ppc64 (v2) Benjamin Herrenschmidt
2009-07-24  9:15 ` [PATCH 10/20] powerpc: Clean ifdef usage in copy_thread() Benjamin Herrenschmidt
2009-07-24  9:15 ` [PATCH 11/20] powerpc: Move definitions of secondary CPU spinloop to header file (v2) Benjamin Herrenschmidt
2009-07-24  9:15 ` [PATCH 12/20] powerpc/mm: Rework & cleanup page table freeing code path Benjamin Herrenschmidt
2009-07-24  9:15 ` [PATCH 13/20] powerpc: Add SPR definitions for new 64-bit BookE (v2) Benjamin Herrenschmidt
2009-07-24  9:15 ` [PATCH 14/20] powerpc: Add memory management headers " Benjamin Herrenschmidt
2009-07-24  9:15 ` [PATCH 15/20] powerpc: Add definitions used by exception handling on 64-bit Book3E (v2) Benjamin Herrenschmidt
2009-07-24  9:15 ` [PATCH 16/20] powerpc: Add PACA fields specific to 64-bit Book3E processors Benjamin Herrenschmidt
2009-07-24  9:15 ` [PATCH 17/20] powerpc/mm: Move around mmu_gathers definition on 64-bit (v2) Benjamin Herrenschmidt
2009-07-24  9:15 ` [PATCH 18/20] powerpc: Add TLB management code for 64-bit Book3E (v2) Benjamin Herrenschmidt
2009-07-24  9:15 ` [PATCH 19/20] powerpc/mm: Add support for SPARSEMEM_VMEMMAP on 64-bit Book3E Benjamin Herrenschmidt
2009-07-24  9:15 ` [PATCH 20/20] powerpc: Remaining 64-bit Book3E support (v2) Benjamin Herrenschmidt
  -- strict thread matches above, loose matches on Subject: below --
2009-07-23  5:59 [PATCH 19/20] powerpc/mm: Add support for SPARSEMEM_VMEMMAP on 64-bit Book3E Benjamin Herrenschmidt
