linuxppc-dev.lists.ozlabs.org archive mirror
* [PATCH 00/13 v4] [POWERPC] ppc32 mm init clean and 85xx kernel reloc
From: Kumar Gala @ 2008-04-15 19:52 UTC (permalink / raw)
  To: paulus; +Cc: linuxppc-dev

* Removed EXPORT_SYMBOL of memstart_addr
* Fixed phys_addr_t typedef so it's not exported to user space
* Added a new patch that splits out the removal of KERNEL_PGD_PTRS & USER_PGD_PTRS
  and the cleanup of the pmd_page macro

These patches exist in the following git tree:

	master.kernel.org:/pub/scm/linux/kernel/git/galak/powerpc.git ppc32_mm_init

Paul, can you please apply patches 01-12 to your powerpc-next tree?  We can then
further discuss the last patch and any changes you'd like to see.

- k

* [PATCH 01/13] [POWERPC] Remove Kconfig option BOOT_LOAD
From: Kumar Gala @ 2008-04-15 19:52 UTC (permalink / raw)
  To: paulus; +Cc: linuxppc-dev

Nothing appears to use BOOT_LOAD so remove it as a configurable option.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
---
 arch/powerpc/Kconfig |   16 ----------------
 1 files changed, 0 insertions(+), 16 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 1d4d19f..cb7406e 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -688,22 +688,6 @@ config CONSISTENT_SIZE
 	hex "Size of consistent memory pool" if CONSISTENT_SIZE_BOOL
 	default "0x00200000" if NOT_COHERENT_CACHE
 
-config BOOT_LOAD_BOOL
-	bool "Set the boot link/load address"
-	depends on ADVANCED_OPTIONS && !PPC_MULTIPLATFORM
-	help
-	  This option allows you to set the initial load address of the zImage
-	  or zImage.initrd file.  This can be useful if you are on a board
-	  which has a small amount of memory.
-
-	  Say N here unless you know what you are doing.
-
-config BOOT_LOAD
-	hex "Link/load address for booting" if BOOT_LOAD_BOOL
-	default "0x00400000" if 40x || 8xx || 8260
-	default "0x01000000" if 44x
-	default "0x00800000"
-
 config PIN_TLB
 	bool "Pinned Kernel TLBs (860 ONLY)"
 	depends on ADVANCED_OPTIONS && 8xx
-- 
1.5.4.1

* [PATCH 02/13] [POWERPC] Provide access to arch/powerpc include path on ppc64
From: Kumar Gala @ 2008-04-15 19:52 UTC (permalink / raw)
  To: paulus; +Cc: linuxppc-dev

There does not appear to be any reason that we shouldn't just have
-Iarch/$(ARCH) on both ppc32 and ppc64 builds.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
---
 arch/powerpc/Makefile |   10 ++++------
 1 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index dd80825..e2ec4a9 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -71,13 +71,11 @@ endif
 
 LDFLAGS_vmlinux	:= -Bstatic
 
-CPPFLAGS-$(CONFIG_PPC32) := -Iarch/$(ARCH)
-AFLAGS-$(CONFIG_PPC32)	:= -Iarch/$(ARCH)
 CFLAGS-$(CONFIG_PPC64)	:= -mminimal-toc -mtraceback=none  -mcall-aixdesc
-CFLAGS-$(CONFIG_PPC32)	:= -Iarch/$(ARCH) -ffixed-r2 -mmultiple
-KBUILD_CPPFLAGS	+= $(CPPFLAGS-y)
-KBUILD_AFLAGS	+= $(AFLAGS-y)
-KBUILD_CFLAGS	+= -msoft-float -pipe $(CFLAGS-y)
+CFLAGS-$(CONFIG_PPC32)	:= -ffixed-r2 -mmultiple
+KBUILD_CPPFLAGS	+= -Iarch/$(ARCH)
+KBUILD_AFLAGS	+= -Iarch/$(ARCH)
+KBUILD_CFLAGS	+= -msoft-float -pipe -Iarch/$(ARCH) $(CFLAGS-y)
 CPP		= $(CC) -E $(KBUILD_CFLAGS)
 
 CHECKFLAGS	+= -m$(CONFIG_WORD_SIZE) -D__powerpc__ -D__powerpc$(CONFIG_WORD_SIZE)__
-- 
1.5.4.1

* [PATCH 03/13] [POWERPC] Remove and replace uses of PPC_MEMSTART with memstart_addr
From: Kumar Gala @ 2008-04-15 19:52 UTC (permalink / raw)
  To: paulus; +Cc: linuxppc-dev

A number of users of PPC_MEMSTART (40x, ppc_mmu_32) can just always use
0, as we don't support booting these kernels at non-zero physical addresses:
their exception vectors must be at 0 (or 0xfffx_xxxx).

For the sub-arches that support relocatable interrupt vectors (Book-E) it's
reasonable to have memory start at a non-zero physical address.  For those
cases use the variable memstart_addr instead of the #define PPC_MEMSTART,
since the only uses of PPC_MEMSTART are for initialization, and in the
future we can set memstart_addr at runtime to get a relocatable kernel.
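
As a minimal sketch (the values below are assumptions for illustration, not
code from this patch), the point is that the Book-E linear map translates
against a runtime base rather than a hard-wired 0:

	/* Illustrative only: the value of memstart_addr is an assumption. */
	phys_addr_t memstart_addr = 0x20000000;	/* e.g. RAM begins at 512MB */

	#define PAGE_OFFSET 0xc0000000UL

	/* lowmem VA -> PA against the runtime base instead of PPC_MEMSTART */
	static inline phys_addr_t lowmem_va_to_pa(unsigned long va)
	{
		return memstart_addr + (va - PAGE_OFFSET);
	}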

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
---
 arch/powerpc/mm/40x_mmu.c       |    2 +-
 arch/powerpc/mm/fsl_booke_mmu.c |   11 +++++------
 arch/powerpc/mm/init_32.c       |    7 +++----
 arch/powerpc/mm/mmu_decl.h      |    1 +
 arch/powerpc/mm/pgtable_32.c    |    5 +++--
 arch/powerpc/mm/ppc_mmu_32.c    |   11 ++---------
 include/asm-powerpc/page_32.h   |    2 --
 7 files changed, 15 insertions(+), 24 deletions(-)

diff --git a/arch/powerpc/mm/40x_mmu.c b/arch/powerpc/mm/40x_mmu.c
index 3899ea9..cecbbc7 100644
--- a/arch/powerpc/mm/40x_mmu.c
+++ b/arch/powerpc/mm/40x_mmu.c
@@ -97,7 +97,7 @@ unsigned long __init mmu_mapin_ram(void)
 	phys_addr_t p;
 
 	v = KERNELBASE;
-	p = PPC_MEMSTART;
+	p = 0;
 	s = total_lowmem;
 
 	if (__map_without_ltlbs)
diff --git a/arch/powerpc/mm/fsl_booke_mmu.c b/arch/powerpc/mm/fsl_booke_mmu.c
index c93a966..3dd0c81 100644
--- a/arch/powerpc/mm/fsl_booke_mmu.c
+++ b/arch/powerpc/mm/fsl_booke_mmu.c
@@ -53,13 +53,12 @@
 #include <asm/machdep.h>
 #include <asm/setup.h>
 
+#include "mmu_decl.h"
+
 extern void loadcam_entry(unsigned int index);
 unsigned int tlbcam_index;
 unsigned int num_tlbcam_entries;
 static unsigned long __cam0, __cam1, __cam2;
-extern unsigned long total_lowmem;
-extern unsigned long __max_low_memory;
-extern unsigned long __initial_memory_limit;
 #define MAX_LOW_MEM	CONFIG_LOWMEM_SIZE
 
 #define NUM_TLBCAMS	(16)
@@ -165,15 +164,15 @@ void invalidate_tlbcam_entry(int index)
 void __init cam_mapin_ram(unsigned long cam0, unsigned long cam1,
 		unsigned long cam2)
 {
-	settlbcam(0, PAGE_OFFSET, PPC_MEMSTART, cam0, _PAGE_KERNEL, 0);
+	settlbcam(0, PAGE_OFFSET, memstart_addr, cam0, _PAGE_KERNEL, 0);
 	tlbcam_index++;
 	if (cam1) {
 		tlbcam_index++;
-		settlbcam(1, PAGE_OFFSET+cam0, PPC_MEMSTART+cam0, cam1, _PAGE_KERNEL, 0);
+		settlbcam(1, PAGE_OFFSET+cam0, memstart_addr+cam0, cam1, _PAGE_KERNEL, 0);
 	}
 	if (cam2) {
 		tlbcam_index++;
-		settlbcam(2, PAGE_OFFSET+cam0+cam1, PPC_MEMSTART+cam0+cam1, cam2, _PAGE_KERNEL, 0);
+		settlbcam(2, PAGE_OFFSET+cam0+cam1, memstart_addr+cam0+cam1, cam2, _PAGE_KERNEL, 0);
 	}
 }
 
diff --git a/arch/powerpc/mm/init_32.c b/arch/powerpc/mm/init_32.c
index 0c66a9f..1d7e5b8 100644
--- a/arch/powerpc/mm/init_32.c
+++ b/arch/powerpc/mm/init_32.c
@@ -59,8 +59,8 @@ DEFINE_PER_CPU(struct mmu_gather, mmu_gathers);
 unsigned long total_memory;
 unsigned long total_lowmem;
 
-unsigned long ppc_memstart;
-unsigned long ppc_memoffset = PAGE_OFFSET;
+phys_addr_t memstart_addr;
+phys_addr_t lowmem_end_addr;
 
 int boot_mapsize;
 #ifdef CONFIG_PPC_PMAC
@@ -145,8 +145,7 @@ void __init MMU_init(void)
 		printk(KERN_WARNING "Only using first contiguous memory region");
 	}
 
-	total_memory = lmb_end_of_DRAM();
-	total_lowmem = total_memory;
+	total_lowmem = total_memory = lmb_end_of_DRAM() - memstart_addr;
 
 #ifdef CONFIG_FSL_BOOKE
 	/* Freescale Book-E parts expect lowmem to be mapped by fixed TLB
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index ebfd13d..5bc11f5 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -51,6 +51,7 @@ extern unsigned long __max_low_memory;
 extern unsigned long __initial_memory_limit;
 extern unsigned long total_memory;
 extern unsigned long total_lowmem;
+extern phys_addr_t memstart_addr;
 
 /* ...and now those things that may be slightly different between processor
  * architectures.  -- Dan
diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
index ac3390f..64c44bc 100644
--- a/arch/powerpc/mm/pgtable_32.c
+++ b/arch/powerpc/mm/pgtable_32.c
@@ -281,12 +281,13 @@ int map_page(unsigned long va, phys_addr_t pa, int flags)
  */
 void __init mapin_ram(void)
 {
-	unsigned long v, p, s, f;
+	unsigned long v, s, f;
+	phys_addr_t p;
 	int ktext;
 
 	s = mmu_mapin_ram();
 	v = KERNELBASE + s;
-	p = PPC_MEMSTART + s;
+	p = memstart_addr + s;
 	for (; s < total_lowmem; s += PAGE_SIZE) {
 		ktext = ((char *) v >= _stext && (char *) v < etext);
 		f = ktext ?_PAGE_RAM_TEXT : _PAGE_RAM;
diff --git a/arch/powerpc/mm/ppc_mmu_32.c b/arch/powerpc/mm/ppc_mmu_32.c
index 72de3c7..65f915c 100644
--- a/arch/powerpc/mm/ppc_mmu_32.c
+++ b/arch/powerpc/mm/ppc_mmu_32.c
@@ -82,7 +82,6 @@ unsigned long __init mmu_mapin_ram(void)
 #else
 	unsigned long tot, bl, done;
 	unsigned long max_size = (256<<20);
-	unsigned long align;
 
 	if (__map_without_bats) {
 		printk(KERN_DEBUG "RAM mapped without BATs\n");
@@ -93,19 +92,13 @@ unsigned long __init mmu_mapin_ram(void)
 
 	/* Make sure we don't map a block larger than the
 	   smallest alignment of the physical address. */
-	/* alignment of PPC_MEMSTART */
-	align = ~(PPC_MEMSTART-1) & PPC_MEMSTART;
-	/* set BAT block size to MIN(max_size, align) */
-	if (align && align < max_size)
-		max_size = align;
-
 	tot = total_lowmem;
 	for (bl = 128<<10; bl < max_size; bl <<= 1) {
 		if (bl * 2 > tot)
 			break;
 	}
 
-	setbat(2, KERNELBASE, PPC_MEMSTART, bl, _PAGE_RAM);
+	setbat(2, KERNELBASE, 0, bl, _PAGE_RAM);
 	done = (unsigned long)bat_addrs[2].limit - KERNELBASE + 1;
 	if ((done < tot) && !bat_addrs[3].limit) {
 		/* use BAT3 to cover a bit more */
@@ -113,7 +106,7 @@ unsigned long __init mmu_mapin_ram(void)
 		for (bl = 128<<10; bl < max_size; bl <<= 1)
 			if (bl * 2 > tot)
 				break;
-		setbat(3, KERNELBASE+done, PPC_MEMSTART+done, bl, _PAGE_RAM);
+		setbat(3, KERNELBASE+done, done, bl, _PAGE_RAM);
 		done = (unsigned long)bat_addrs[3].limit - KERNELBASE + 1;
 	}
 
diff --git a/include/asm-powerpc/page_32.h b/include/asm-powerpc/page_32.h
index 65ea19e..51f8134 100644
--- a/include/asm-powerpc/page_32.h
+++ b/include/asm-powerpc/page_32.h
@@ -3,8 +3,6 @@
 
 #define VM_DATA_DEFAULT_FLAGS	VM_DATA_DEFAULT_FLAGS32
 
-#define PPC_MEMSTART	0
-
 #ifdef CONFIG_NOT_COHERENT_CACHE
 #define ARCH_KMALLOC_MINALIGN	L1_CACHE_BYTES
 #endif
-- 
1.5.4.1

* [PATCH 04/13] [POWERPC] Introduce lowmem_end_addr to distinguish from total_lowmem
From: Kumar Gala @ 2008-04-15 19:52 UTC (permalink / raw)
  To: paulus; +Cc: linuxppc-dev

total_lowmem represents the amount of low memory, not the physical address
at which low memory ends.  If the start of memory is at 0 it happens that
total_lowmem can be used as both the size and the address at which lowmem
ends (technically it's one byte beyond the end).

To make the code a bit clearer, and to deal with the case when the start of
memory isn't at physical 0, we introduce lowmem_end_addr, which represents
one byte beyond the last physical address in the lowmem region.
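
A small sketch of the distinction, with assumed example values:

	phys_addr_t memstart_addr  = 0x10000000;  /* start of memory: 256MB  */
	unsigned long total_lowmem = 0x30000000;  /* amount of lowmem: 768MB */

	/* one byte beyond the last lowmem physical address: 0x40000000 */
	phys_addr_t lowmem_end_addr = memstart_addr + total_lowmem;

Only when memstart_addr == 0 do the size and the end address coincide.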

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
---
 arch/powerpc/mm/44x_mmu.c  |    2 +-
 arch/powerpc/mm/init_32.c  |    4 +++-
 arch/powerpc/mm/init_64.c  |    2 ++
 arch/powerpc/mm/mem.c      |   16 +++++++++-------
 arch/powerpc/mm/mmu_decl.h |    1 +
 5 files changed, 16 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/mm/44x_mmu.c b/arch/powerpc/mm/44x_mmu.c
index 04dc087..953fb91 100644
--- a/arch/powerpc/mm/44x_mmu.c
+++ b/arch/powerpc/mm/44x_mmu.c
@@ -67,7 +67,7 @@ unsigned long __init mmu_mapin_ram(void)
 
 	/* Pin in enough TLBs to cover any lowmem not covered by the
 	 * initial 256M mapping established in head_44x.S */
-	for (addr = PPC_PIN_SIZE; addr < total_lowmem;
+	for (addr = PPC_PIN_SIZE; addr < lowmem_end_addr;
 	     addr += PPC_PIN_SIZE)
 		ppc44x_pin_tlb(addr + PAGE_OFFSET, addr);
 
diff --git a/arch/powerpc/mm/init_32.c b/arch/powerpc/mm/init_32.c
index 1d7e5b8..ba61d08 100644
--- a/arch/powerpc/mm/init_32.c
+++ b/arch/powerpc/mm/init_32.c
@@ -146,6 +146,7 @@ void __init MMU_init(void)
 	}
 
 	total_lowmem = total_memory = lmb_end_of_DRAM() - memstart_addr;
+	lowmem_end_addr = memstart_addr + total_lowmem;
 
 #ifdef CONFIG_FSL_BOOKE
 	/* Freescale Book-E parts expect lowmem to be mapped by fixed TLB
@@ -156,9 +157,10 @@ void __init MMU_init(void)
 
 	if (total_lowmem > __max_low_memory) {
 		total_lowmem = __max_low_memory;
+		lowmem_end_addr = memstart_addr + total_lowmem;
 #ifndef CONFIG_HIGHMEM
 		total_memory = total_lowmem;
-		lmb_enforce_memory_limit(total_lowmem);
+		lmb_enforce_memory_limit(lowmem_end_addr);
 		lmb_analyze();
 #endif /* CONFIG_HIGHMEM */
 	}
diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index 5f55399..9ea65d9 100644
--- a/arch/powerpc/mm/init_64.c
+++ b/arch/powerpc/mm/init_64.c
@@ -75,6 +75,8 @@
 /* max amount of RAM to use */
 unsigned long __max_memory;
 
+phys_addr_t memstart_addr;
+
 void free_initmem(void)
 {
 	unsigned long addr;
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index e3349ea..16def4d 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -216,9 +216,11 @@ void __init do_init_bootmem(void)
 	unsigned long total_pages;
 	int boot_mapsize;
 
-	max_pfn = total_pages = lmb_end_of_DRAM() >> PAGE_SHIFT;
+	max_pfn = lmb_end_of_DRAM() >> PAGE_SHIFT;
+	total_pages = (lmb_end_of_DRAM() - memstart_addr) >> PAGE_SHIFT;
 #ifdef CONFIG_HIGHMEM
 	total_pages = total_lowmem >> PAGE_SHIFT;
+	max_low_pfn = lowmem_end_addr >> PAGE_SHIFT;
 #endif
 
 	/*
@@ -244,18 +246,18 @@ void __init do_init_bootmem(void)
 	 * present.
 	 */
 #ifdef CONFIG_HIGHMEM
-	free_bootmem_with_active_regions(0, total_lowmem >> PAGE_SHIFT);
+	free_bootmem_with_active_regions(0, lowmem_end_addr >> PAGE_SHIFT);
 
 	/* reserve the sections we're already using */
 	for (i = 0; i < lmb.reserved.cnt; i++) {
 		unsigned long addr = lmb.reserved.region[i].base +
 				     lmb_size_bytes(&lmb.reserved, i) - 1;
-		if (addr < total_lowmem)
+		if (addr < lowmem_end_addr)
 			reserve_bootmem(lmb.reserved.region[i].base,
 					lmb_size_bytes(&lmb.reserved, i),
 					BOOTMEM_DEFAULT);
-		else if (lmb.reserved.region[i].base < total_lowmem) {
-			unsigned long adjusted_size = total_lowmem -
+		else if (lmb.reserved.region[i].base < lowmem_end_addr) {
+			unsigned long adjusted_size = lowmem_end_addr -
 				      lmb.reserved.region[i].base;
 			reserve_bootmem(lmb.reserved.region[i].base,
 					adjusted_size, BOOTMEM_DEFAULT);
@@ -325,7 +327,7 @@ void __init paging_init(void)
 	       (top_of_ram - total_ram) >> 20);
 	memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
 #ifdef CONFIG_HIGHMEM
-	max_zone_pfns[ZONE_DMA] = total_lowmem >> PAGE_SHIFT;
+	max_zone_pfns[ZONE_DMA] = lowmem_end_addr >> PAGE_SHIFT;
 	max_zone_pfns[ZONE_HIGHMEM] = top_of_ram >> PAGE_SHIFT;
 #else
 	max_zone_pfns[ZONE_DMA] = top_of_ram >> PAGE_SHIFT;
@@ -380,7 +382,7 @@ void __init mem_init(void)
 	{
 		unsigned long pfn, highmem_mapnr;
 
-		highmem_mapnr = total_lowmem >> PAGE_SHIFT;
+		highmem_mapnr = lowmem_end_addr >> PAGE_SHIFT;
 		for (pfn = highmem_mapnr; pfn < max_mapnr; ++pfn) {
 			struct page *page = pfn_to_page(pfn);
 			if (lmb_is_reserved(pfn << PAGE_SHIFT))
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index 5bc11f5..67477e7 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -52,6 +52,7 @@ extern unsigned long __initial_memory_limit;
 extern unsigned long total_memory;
 extern unsigned long total_lowmem;
 extern phys_addr_t memstart_addr;
+extern phys_addr_t lowmem_end_addr;
 
 /* ...and now those things that may be slightly different between processor
  * architectures.  -- Dan
-- 
1.5.4.1

* [PATCH 05/13] [POWERPC] 85xx: Cleanup TLB initialization
From: Kumar Gala @ 2008-04-15 19:52 UTC (permalink / raw)
  To: paulus; +Cc: linuxppc-dev

* Determine at runtime the RPN the kernel is running at, rather than
  using a compile-time constant for the initial TLB

* Clean up adjust_total_lowmem() to respect memstart_addr and to
  be a bit clearer about which variables are sizes vs addresses
  (a sketch of the sizing follows).
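
For reference, a sketch of the power-of-4 sizing that adjust_total_lowmem()
keeps: each CAM entry must be a power of 4 bytes and is capped at 256MB
(less if the lowmem limit is smaller).  __ilog2() is the kernel helper the
code already uses; the function name here is made up for illustration.

	/* largest power-of-4 chunk <= ram, capped at 256MB */
	static unsigned long cam_chunk(unsigned long ram)
	{
		unsigned long sz = 1UL << (2 * (__ilog2(ram) / 2));

		if (sz > 0x10000000UL)
			sz = 0x10000000UL;
		return sz;
	}

For example, 192MB of lowmem comes out as __cam0/1/2 = 64MB + 64MB + 64MB.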

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
---
 arch/powerpc/kernel/head_fsl_booke.S |   34 ++++++++++++++++++++++++------
 arch/powerpc/mm/fsl_booke_mmu.c      |   37 ++++++++++++++-------------------
 2 files changed, 43 insertions(+), 28 deletions(-)

diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index d9cc2c2..9f40b3e 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -68,7 +68,9 @@ _ENTRY(_start);
 	mr	r29,r5
 	mr	r28,r6
 	mr	r27,r7
+	li	r25,0		/* phys kernel start (low) */
 	li	r24,0		/* CPU number */
+	li	r23,0		/* phys kernel start (high) */
 
 /* We try to not make any assumptions about how the boot loader
  * setup or used the TLBs.  We invalidate all mappings from the
@@ -167,7 +169,28 @@ skpinv:	addi	r6,r6,1				/* Increment */
 	mtspr	SPRN_MAS0,r7
 	tlbre
 
-	/* Just modify the entry ID, EPN and RPN for the temp mapping */
+	/* grab and fixup the RPN */
+	mfspr	r6,SPRN_MAS1	/* extract MAS1[SIZE] */
+	rlwinm	r6,r6,25,27,30
+	li	r8,-1
+	addi	r6,r6,10
+	slw	r6,r8,r6	/* convert to mask */
+
+	bl	1f		/* Find our address */
+1:	mflr	r7
+
+	mfspr	r8,SPRN_MAS3
+#ifdef CONFIG_PHYS_64BIT
+	mfspr	r23,SPRN_MAS7
+#endif
+	and	r8,r6,r8
+	subfic	r9,r6,-4096
+	and	r9,r9,r7
+
+	or	r25,r8,r9
+	ori	r8,r25,(MAS3_SX|MAS3_SW|MAS3_SR)
+
+	/* Just modify the entry ID and EPN for the temp mapping */
 	lis	r7,0x1000	/* Set MAS0(TLBSEL) = 1 */
 	rlwimi	r7,r5,16,4,15	/* Setup MAS0 = TLBSEL | ESEL(r5) */
 	mtspr	SPRN_MAS0,r7
@@ -177,12 +200,10 @@ skpinv:	addi	r6,r6,1				/* Increment */
 	ori	r6,r6,(MAS1_TSIZE(BOOKE_PAGESZ_4K))@l
 	mtspr	SPRN_MAS1,r6
 	mfspr	r6,SPRN_MAS2
-	lis	r7,PHYSICAL_START@h
+	li	r7,0		/* temp EPN = 0 */
 	rlwimi	r7,r6,0,20,31
 	mtspr	SPRN_MAS2,r7
-	mfspr	r6,SPRN_MAS3
-	rlwimi	r7,r6,0,20,31
-	mtspr	SPRN_MAS3,r7
+	mtspr	SPRN_MAS3,r8
 	tlbwe
 
 	xori	r6,r4,1
@@ -232,8 +253,7 @@ skpinv:	addi	r6,r6,1				/* Increment */
 	ori	r6,r6,PAGE_OFFSET@l
 	rlwimi	r6,r7,0,20,31
 	mtspr	SPRN_MAS2,r6
-	li	r7,(MAS3_SX|MAS3_SW|MAS3_SR)
-	mtspr	SPRN_MAS3,r7
+	mtspr	SPRN_MAS3,r8
 	tlbwe
 
 /* 7. Jump to KERNELBASE mapping */
diff --git a/arch/powerpc/mm/fsl_booke_mmu.c b/arch/powerpc/mm/fsl_booke_mmu.c
index 3dd0c81..59f6649 100644
--- a/arch/powerpc/mm/fsl_booke_mmu.c
+++ b/arch/powerpc/mm/fsl_booke_mmu.c
@@ -49,7 +49,6 @@
 #include <asm/mmu.h>
 #include <asm/uaccess.h>
 #include <asm/smp.h>
-#include <asm/bootx.h>
 #include <asm/machdep.h>
 #include <asm/setup.h>
 
@@ -59,7 +58,6 @@ extern void loadcam_entry(unsigned int index);
 unsigned int tlbcam_index;
 unsigned int num_tlbcam_entries;
 static unsigned long __cam0, __cam1, __cam2;
-#define MAX_LOW_MEM	CONFIG_LOWMEM_SIZE
 
 #define NUM_TLBCAMS	(16)
 
@@ -195,35 +193,32 @@ unsigned long __init mmu_mapin_ram(void)
 void __init
 adjust_total_lowmem(void)
 {
-	unsigned long max_low_mem = MAX_LOW_MEM;
-	unsigned long cam_max = 0x10000000;
-	unsigned long ram;
+	phys_addr_t max_lowmem_size = __max_low_memory;
+	phys_addr_t cam_max_size = 0x10000000;
+	phys_addr_t ram;
 
-	/* adjust CAM size to max_low_mem */
-	if (max_low_mem < cam_max)
-		cam_max = max_low_mem;
+	/* adjust CAM size to max_lowmem_size */
+	if (max_lowmem_size < cam_max_size)
+		cam_max_size = max_lowmem_size;
 
-	/* adjust lowmem size to max_low_mem */
-	if (max_low_mem < total_lowmem)
-		ram = max_low_mem;
-	else
-		ram = total_lowmem;
+	/* adjust lowmem size to max_lowmem_size */
+	ram = min(max_lowmem_size, total_lowmem);
 
 	/* Calculate CAM values */
 	__cam0 = 1UL << 2 * (__ilog2(ram) / 2);
-	if (__cam0 > cam_max)
-		__cam0 = cam_max;
+	if (__cam0 > cam_max_size)
+		__cam0 = cam_max_size;
 	ram -= __cam0;
 	if (ram) {
 		__cam1 = 1UL << 2 * (__ilog2(ram) / 2);
-		if (__cam1 > cam_max)
-			__cam1 = cam_max;
+		if (__cam1 > cam_max_size)
+			__cam1 = cam_max_size;
 		ram -= __cam1;
 	}
 	if (ram) {
 		__cam2 = 1UL << 2 * (__ilog2(ram) / 2);
-		if (__cam2 > cam_max)
-			__cam2 = cam_max;
+		if (__cam2 > cam_max_size)
+			__cam2 = cam_max_size;
 		ram -= __cam2;
 	}
 
@@ -231,6 +226,6 @@ adjust_total_lowmem(void)
 			" CAM2=%ldMb residual: %ldMb\n",
 			__cam0 >> 20, __cam1 >> 20, __cam2 >> 20,
 			(total_lowmem - __cam0 - __cam1 - __cam2) >> 20);
-	__max_low_memory = max_low_mem = __cam0 + __cam1 + __cam2;
-	__initial_memory_limit = __max_low_memory;
+	__max_low_memory = __cam0 + __cam1 + __cam2;
+	__initial_memory_limit = memstart_addr + __max_low_memory;
 }
-- 
1.5.4.1

* [PATCH 06/13] [POWERPC] Use lowmem_end_addr to limit lmb allocations on ppc32
From: Kumar Gala @ 2008-04-15 19:52 UTC (permalink / raw)
  To: paulus; +Cc: linuxppc-dev

Now that we have a proper variable holding the address of the top
of low memory, we can use it to limit lmb allocations.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
---
 include/asm-powerpc/lmb.h |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/asm-powerpc/lmb.h b/include/asm-powerpc/lmb.h
index 028184b..6f5fdf0 100644
--- a/include/asm-powerpc/lmb.h
+++ b/include/asm-powerpc/lmb.h
@@ -6,8 +6,8 @@
 #define LMB_DBG(fmt...) udbg_printf(fmt)
 
 #ifdef CONFIG_PPC32
-extern unsigned long __max_low_memory;
-#define LMB_REAL_LIMIT	__max_low_memory
+extern phys_addr_t lowmem_end_addr;
+#define LMB_REAL_LIMIT	lowmem_end_addr
 #else
 #define LMB_REAL_LIMIT	0
 #endif
-- 
1.5.4.1

* [PATCH 07/13] [POWERPC] Rename __initial_memory_limit to __initial_memory_limit_addr
From: Kumar Gala @ 2008-04-15 19:52 UTC (permalink / raw)
  To: paulus; +Cc: linuxppc-dev

We always use __initial_memory_limit as an address, so rename it
to make that clear.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
---
 arch/powerpc/mm/fsl_booke_mmu.c |    2 +-
 arch/powerpc/mm/init_32.c       |   10 +++++-----
 arch/powerpc/mm/mmu_decl.h      |    2 +-
 arch/powerpc/mm/ppc_mmu_32.c    |    2 +-
 4 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/mm/fsl_booke_mmu.c b/arch/powerpc/mm/fsl_booke_mmu.c
index 59f6649..ada249b 100644
--- a/arch/powerpc/mm/fsl_booke_mmu.c
+++ b/arch/powerpc/mm/fsl_booke_mmu.c
@@ -227,5 +227,5 @@ adjust_total_lowmem(void)
 			__cam0 >> 20, __cam1 >> 20, __cam2 >> 20,
 			(total_lowmem - __cam0 - __cam1 - __cam2) >> 20);
 	__max_low_memory = __cam0 + __cam1 + __cam2;
-	__initial_memory_limit = memstart_addr + __max_low_memory;
+	__initial_memory_limit_addr = memstart_addr + __max_low_memory;
 }
diff --git a/arch/powerpc/mm/init_32.c b/arch/powerpc/mm/init_32.c
index ba61d08..9a0ea2c 100644
--- a/arch/powerpc/mm/init_32.c
+++ b/arch/powerpc/mm/init_32.c
@@ -95,10 +95,10 @@ int __map_without_ltlbs;
 unsigned long __max_low_memory = MAX_LOW_MEM;
 
 /*
- * limit of what is accessible with initial MMU setup -
+ * address of the limit of what is accessible with initial MMU setup -
  * 256MB usually, but only 16MB on 601.
  */
-unsigned long __initial_memory_limit = 0x10000000;
+phys_addr_t __initial_memory_limit_addr = (phys_addr_t)0x10000000;
 
 /*
  * Check for command-line options that affect what MMU_init will do.
@@ -131,10 +131,10 @@ void __init MMU_init(void)
 
 	/* 601 can only access 16MB at the moment */
 	if (PVR_VER(mfspr(SPRN_PVR)) == 1)
-		__initial_memory_limit = 0x01000000;
+		__initial_memory_limit_addr = 0x01000000;
 	/* 8xx can only access 8MB at the moment */
 	if (PVR_VER(mfspr(SPRN_PVR)) == 0x50)
-		__initial_memory_limit = 0x00800000;
+		__initial_memory_limit_addr = 0x00800000;
 
 	/* parse args from command line */
 	MMU_setup();
@@ -209,7 +209,7 @@ void __init *early_get_page(void)
 		p = alloc_bootmem_pages(PAGE_SIZE);
 	} else {
 		p = __va(lmb_alloc_base(PAGE_SIZE, PAGE_SIZE,
-					__initial_memory_limit));
+					__initial_memory_limit_addr));
 	}
 	return p;
 }
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index 67477e7..0480225 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -48,7 +48,7 @@ extern unsigned int num_tlbcam_entries;
 
 extern unsigned long ioremap_bot;
 extern unsigned long __max_low_memory;
-extern unsigned long __initial_memory_limit;
+extern phys_addr_t __initial_memory_limit_addr;
 extern unsigned long total_memory;
 extern unsigned long total_lowmem;
 extern phys_addr_t memstart_addr;
diff --git a/arch/powerpc/mm/ppc_mmu_32.c b/arch/powerpc/mm/ppc_mmu_32.c
index 65f915c..cef9f15 100644
--- a/arch/powerpc/mm/ppc_mmu_32.c
+++ b/arch/powerpc/mm/ppc_mmu_32.c
@@ -233,7 +233,7 @@ void __init MMU_init_hw(void)
 	 */
 	if ( ppc_md.progress ) ppc_md.progress("hash:find piece", 0x322);
 	Hash = __va(lmb_alloc_base(Hash_size, Hash_size,
-				   __initial_memory_limit));
+				   __initial_memory_limit_addr));
 	cacheable_memzero(Hash, Hash_size);
 	_SDR1 = __pa(Hash) | SDR1_LOW_BITS;
 
-- 
1.5.4.1

* [PATCH 08/13] [POWERPC] Clean up some linker and symbol usage
From: Kumar Gala @ 2008-04-15 19:52 UTC (permalink / raw)
  To: paulus; +Cc: linuxppc-dev

* PAGE_OFFSET is not always the start of code; use _stext instead.
* Grab PAGE_SIZE and KERNELBASE from asm/page.h like ppc64 does.  This makes
  the code a bit more common and provides a single place to manipulate the
  defines for things like kdump.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
---
 arch/powerpc/kernel/setup_32.c    |    2 +-
 arch/powerpc/kernel/setup_64.c    |    2 +-
 arch/powerpc/kernel/vmlinux.lds.S |    4 +---
 3 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
index eac936e..d813c39 100644
--- a/arch/powerpc/kernel/setup_32.c
+++ b/arch/powerpc/kernel/setup_32.c
@@ -289,7 +289,7 @@ void __init setup_arch(char **cmdline_p)
 	if (ppc_md.panic)
 		setup_panic();
 
-	init_mm.start_code = PAGE_OFFSET;
+	init_mm.start_code = (unsigned long)_stext;
 	init_mm.end_code = (unsigned long) _etext;
 	init_mm.end_data = (unsigned long) _edata;
 	init_mm.brk = klimit;
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 2c2d831..0205d40 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -510,7 +510,7 @@ void __init setup_arch(char **cmdline_p)
 	if (ppc_md.panic)
 		setup_panic();
 
-	init_mm.start_code = PAGE_OFFSET;
+	init_mm.start_code = (unsigned long)_stext;
 	init_mm.end_code = (unsigned long) _etext;
 	init_mm.end_data = (unsigned long) _edata;
 	init_mm.brk = klimit;
diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
index 0afb9e3..b5a76bc 100644
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -1,11 +1,9 @@
 #ifdef CONFIG_PPC64
-#include <asm/page.h>
 #define PROVIDE32(x)	PROVIDE(__unused__##x)
 #else
-#define PAGE_SIZE	4096
-#define KERNELBASE	CONFIG_KERNEL_START
 #define PROVIDE32(x)	PROVIDE(x)
 #endif
+#include <asm/page.h>
 #include <asm-generic/vmlinux.lds.h>
 #include <asm/cache.h>
 
-- 
1.5.4.1

* [PATCH 09/13] [POWERPC] Move phys_addr_t definition into asm/types.h
From: Kumar Gala @ 2008-04-15 19:52 UTC (permalink / raw)
  To: paulus; +Cc: linuxppc-dev

Moved phys_addr_t out of mmu-*.h and into asm/types.h so we can use it in
places that would previously have caused recursive includes.

For example, to use phys_addr_t in <asm/page.h> we would have included
<asm/mmu.h>, which could have included <asm/mmu-hash64.h>, which in turn
includes <asm/page.h>.  Wheeee, recursive include.
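
Sketched out, the cycle looks like this (paths as in this series):

	/*
	 * asm/page.h            wants phys_addr_t, so it pulls in ...
	 *   -> asm/mmu.h
	 *     -> asm/mmu-hash64.h
	 *       -> asm/page.h   already being processed; phys_addr_t
	 *                       is not defined yet
	 *
	 * Hoisting the typedef into asm/types.h breaks the loop, since
	 * types.h includes none of the mmu headers.
	 */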

CONFIG_PHYS_64BIT is a bit counterintuitive in light of ppc64 systems, so
the config option is only used for ppc32 systems with >32-bit physical
addresses (44x, 85xx, 745x, etc.).

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
---
 include/asm-powerpc/mmu-40x.h       |    2 --
 include/asm-powerpc/mmu-44x.h       |    2 --
 include/asm-powerpc/mmu-8xx.h       |    2 --
 include/asm-powerpc/mmu-fsl-booke.h |    6 ------
 include/asm-powerpc/mmu-hash32.h    |    2 --
 include/asm-powerpc/mmu-hash64.h    |    3 ---
 include/asm-powerpc/types.h         |    7 +++++++
 7 files changed, 7 insertions(+), 17 deletions(-)

diff --git a/include/asm-powerpc/mmu-40x.h b/include/asm-powerpc/mmu-40x.h
index 7d37f77..3d10867 100644
--- a/include/asm-powerpc/mmu-40x.h
+++ b/include/asm-powerpc/mmu-40x.h
@@ -53,8 +53,6 @@
 
 #ifndef __ASSEMBLY__
 
-typedef unsigned long phys_addr_t;
-
 typedef struct {
 	unsigned long id;
 	unsigned long vdso_base;
diff --git a/include/asm-powerpc/mmu-44x.h b/include/asm-powerpc/mmu-44x.h
index 62772ae..c8b02d9 100644
--- a/include/asm-powerpc/mmu-44x.h
+++ b/include/asm-powerpc/mmu-44x.h
@@ -53,8 +53,6 @@
 
 #ifndef __ASSEMBLY__
 
-typedef unsigned long long phys_addr_t;
-
 typedef struct {
 	unsigned long id;
 	unsigned long vdso_base;
diff --git a/include/asm-powerpc/mmu-8xx.h b/include/asm-powerpc/mmu-8xx.h
index 952bd88..9db877e 100644
--- a/include/asm-powerpc/mmu-8xx.h
+++ b/include/asm-powerpc/mmu-8xx.h
@@ -136,8 +136,6 @@
 #define SPRN_M_TW	799
 
 #ifndef __ASSEMBLY__
-typedef unsigned long phys_addr_t;
-
 typedef struct {
 	unsigned long id;
 	unsigned long vdso_base;
diff --git a/include/asm-powerpc/mmu-fsl-booke.h b/include/asm-powerpc/mmu-fsl-booke.h
index 3758000..925d93c 100644
--- a/include/asm-powerpc/mmu-fsl-booke.h
+++ b/include/asm-powerpc/mmu-fsl-booke.h
@@ -73,12 +73,6 @@
 
 #ifndef __ASSEMBLY__
 
-#ifndef CONFIG_PHYS_64BIT
-typedef unsigned long phys_addr_t;
-#else
-typedef unsigned long long phys_addr_t;
-#endif
-
 typedef struct {
 	unsigned long id;
 	unsigned long vdso_base;
diff --git a/include/asm-powerpc/mmu-hash32.h b/include/asm-powerpc/mmu-hash32.h
index 4bd735b..6e21ca6 100644
--- a/include/asm-powerpc/mmu-hash32.h
+++ b/include/asm-powerpc/mmu-hash32.h
@@ -84,8 +84,6 @@ typedef struct {
 	unsigned long vdso_base;
 } mm_context_t;
 
-typedef unsigned long phys_addr_t;
-
 #endif /* !__ASSEMBLY__ */
 
 #endif /* _ASM_POWERPC_MMU_HASH32_H_ */
diff --git a/include/asm-powerpc/mmu-hash64.h b/include/asm-powerpc/mmu-hash64.h
index 2864fa3..0dff767 100644
--- a/include/asm-powerpc/mmu-hash64.h
+++ b/include/asm-powerpc/mmu-hash64.h
@@ -469,9 +469,6 @@ static inline unsigned long get_vsid(unsigned long context, unsigned long ea,
 				 VSID_MODULUS_256M)
 #define KERNEL_VSID(ea)		VSID_SCRAMBLE(GET_ESID(ea))
 
-/* Physical address used by some IO functions */
-typedef unsigned long phys_addr_t;
-
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_POWERPC_MMU_HASH64_H_ */
diff --git a/include/asm-powerpc/types.h b/include/asm-powerpc/types.h
index 903fd19..c243a6a 100644
--- a/include/asm-powerpc/types.h
+++ b/include/asm-powerpc/types.h
@@ -84,6 +84,13 @@ typedef unsigned long long u64;
 
 typedef __vector128 vector128;
 
+/* Physical address used by some IO functions */
+#if defined(CONFIG_PPC64) || defined(CONFIG_PHYS_64BIT)
+typedef u64 phys_addr_t;
+#else
+typedef u32 phys_addr_t;
+#endif
+
 #ifdef __powerpc64__
 typedef u64 dma_addr_t;
 #else
-- 
1.5.4.1

* [PATCH 10/13] [POWERPC] Update linker script to properly set physical addresses
From: Kumar Gala @ 2008-04-15 19:52 UTC (permalink / raw)
  To: paulus; +Cc: linuxppc-dev

We can set LOAD_OFFSET and use the AT attribute on sections, and the
linker will properly set the physical address of the LOAD program
header for us.

This allows us to determine the PHYSICAL_START the user configured a
kernel with just by looking at the resulting vmlinux ELF.

This is pretty much stolen from how x86 does things in their linker
scripts.
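
In effect each section's LMA (physical load address) becomes its VMA minus
LOAD_OFFSET.  A sketch of the arithmetic, with assumed values:

	#define PAGE_OFFSET	0xc0000000UL
	#define LOAD_OFFSET	PAGE_OFFSET	/* as of this patch */

	/* AT(ADDR(.text) - LOAD_OFFSET): a section linked at 0xc0000000
	 * gets physical address 0x00000000 in the LOAD program header. */
	unsigned long lma_of(unsigned long vma)
	{
		return vma - LOAD_OFFSET;
	}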

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
---
 arch/powerpc/kernel/vmlinux.lds.S |   47 ++++++++++++++++++-------------------
 include/asm-powerpc/page.h        |    1 +
 2 files changed, 24 insertions(+), 24 deletions(-)

diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
index b5a76bc..0c3000b 100644
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -31,7 +31,7 @@ SECTIONS
  */
 
 	/* Text and gots */
-	.text : {
+	.text : AT(ADDR(.text) - LOAD_OFFSET) {
 		ALIGN_FUNCTION();
 		*(.text.head)
 		_text = .;
@@ -56,7 +56,7 @@ SECTIONS
 	RODATA
 
 	/* Exception & bug tables */
-	__ex_table : {
+	__ex_table : AT(ADDR(__ex_table) - LOAD_OFFSET) {
 		__start___ex_table = .;
 		*(__ex_table)
 		__stop___ex_table = .;
@@ -72,7 +72,7 @@ SECTIONS
 	. = ALIGN(PAGE_SIZE);
 	__init_begin = .;
 
-	.init.text : {
+	.init.text : AT(ADDR(.init.text) - LOAD_OFFSET) {
 		_sinittext = .;
 		INIT_TEXT
 		_einittext = .;
@@ -81,11 +81,11 @@ SECTIONS
 	/* .exit.text is discarded at runtime, not link time,
 	 * to deal with references from __bug_table
 	 */
-	.exit.text : {
+	.exit.text : AT(ADDR(.exit.text) - LOAD_OFFSET) {
 		EXIT_TEXT
 	}
 
-	.init.data : {
+	.init.data : AT(ADDR(.init.data) - LOAD_OFFSET) {
 		INIT_DATA
 		__vtop_table_begin = .;
 		*(.vtop_fixup);
@@ -101,19 +101,19 @@ SECTIONS
 	}
 
 	. = ALIGN(16);
-	.init.setup : {
+	.init.setup : AT(ADDR(.init.setup) - LOAD_OFFSET) {
 		__setup_start = .;
 		*(.init.setup)
 		__setup_end = .;
 	}
 
-	.initcall.init : {
+	.initcall.init : AT(ADDR(.initcall.init) - LOAD_OFFSET) {
 		__initcall_start = .;
 		INITCALLS
 		__initcall_end = .;
 		}
 
-	.con_initcall.init : {
+	.con_initcall.init : AT(ADDR(.con_initcall.init) - LOAD_OFFSET) {
 		__con_initcall_start = .;
 		*(.con_initcall.init)
 		__con_initcall_end = .;
@@ -122,14 +122,14 @@ SECTIONS
 	SECURITY_INIT
 
 	. = ALIGN(8);
-	__ftr_fixup : {
+	__ftr_fixup : AT(ADDR(__ftr_fixup) - LOAD_OFFSET) {
 		__start___ftr_fixup = .;
 		*(__ftr_fixup)
 		__stop___ftr_fixup = .;
 	}
 #ifdef CONFIG_PPC64
 	. = ALIGN(8);
-	__fw_ftr_fixup : {
+	__fw_ftr_fixup : AT(ADDR(__fw_ftr_fixup) - LOAD_OFFSET) {
 		__start___fw_ftr_fixup = .;
 		*(__fw_ftr_fixup)
 		__stop___fw_ftr_fixup = .;
@@ -137,14 +137,14 @@ SECTIONS
 #endif
 #ifdef CONFIG_BLK_DEV_INITRD
 	. = ALIGN(PAGE_SIZE);
-	.init.ramfs : {
+	.init.ramfs : AT(ADDR(.init.ramfs) - LOAD_OFFSET) {
 		__initramfs_start = .;
 		*(.init.ramfs)
 		__initramfs_end = .;
 	}
 #endif
 	. = ALIGN(PAGE_SIZE);
-	.data.percpu : {
+	.data.percpu  : AT(ADDR(.data.percpu) - LOAD_OFFSET) {
 		__per_cpu_start = .;
 		*(.data.percpu)
 		*(.data.percpu.shared_aligned)
@@ -152,7 +152,7 @@ SECTIONS
 	}
 
 	. = ALIGN(8);
-	.machine.desc : {
+	.machine.desc : AT(ADDR(.machine.desc) - LOAD_OFFSET) {
 		__machine_desc_start = . ;
 		*(.machine.desc)
 		__machine_desc_end = . ;
@@ -170,25 +170,24 @@ SECTIONS
 	_sdata = .;
 
 #ifdef CONFIG_PPC32
-	.data    :
-	{
+	.data : AT(ADDR(.data) - LOAD_OFFSET) {
 		DATA_DATA
 		*(.sdata)
 		*(.got.plt) *(.got)
 	}
 #else
-	.data : {
+	.data : AT(ADDR(.data) - LOAD_OFFSET) {
 		DATA_DATA
 		*(.data.rel*)
 		*(.toc1)
 		*(.branch_lt)
 	}
 
-	.opd : {
+	.opd : AT(ADDR(.opd) - LOAD_OFFSET) {
 		*(.opd)
 	}
 
-	.got : {
+	.got : AT(ADDR(.got) - LOAD_OFFSET) {
 		__toc_start = .;
 		*(.got)
 		*(.toc)
@@ -205,26 +204,26 @@ SECTIONS
 #else
 	. = ALIGN(16384);
 #endif
-	.data.init_task : {
+	.data.init_task : AT(ADDR(.data.init_task) - LOAD_OFFSET) {
 		*(.data.init_task)
 	}
 
 	. = ALIGN(PAGE_SIZE);
-	.data.page_aligned : {
+	.data.page_aligned : AT(ADDR(.data.page_aligned) - LOAD_OFFSET) {
 		*(.data.page_aligned)
 	}
 
-	.data.cacheline_aligned : {
+	.data.cacheline_aligned : AT(ADDR(.data.cacheline_aligned) - LOAD_OFFSET) {
 		*(.data.cacheline_aligned)
 	}
 
 	. = ALIGN(L1_CACHE_BYTES);
-	.data.read_mostly : {
+	.data.read_mostly : AT(ADDR(.data.read_mostly) - LOAD_OFFSET) {
 		*(.data.read_mostly)
 	}
 
 	. = ALIGN(PAGE_SIZE);
-	__data_nosave : {
+	.data_nosave : AT(ADDR(.data_nosave) - LOAD_OFFSET) {
 		__nosave_begin = .;
 		*(.data.nosave)
 		. = ALIGN(PAGE_SIZE);
@@ -235,7 +234,7 @@ SECTIONS
  * And finally the bss
  */
 
-	.bss : {
+	.bss : AT(ADDR(.bss) - LOAD_OFFSET) {
 		__bss_start = .;
 		*(.sbss) *(.scommon)
 		*(.dynbss)
diff --git a/include/asm-powerpc/page.h b/include/asm-powerpc/page.h
index df47bbb..6c85060 100644
--- a/include/asm-powerpc/page.h
+++ b/include/asm-powerpc/page.h
@@ -53,6 +53,7 @@
 
 #define PAGE_OFFSET     ASM_CONST(CONFIG_KERNEL_START)
 #define KERNELBASE      (PAGE_OFFSET + PHYSICAL_START)
+#define LOAD_OFFSET	PAGE_OFFSET
 
 #ifdef CONFIG_FLATMEM
 #define pfn_valid(pfn)		((pfn) < max_mapnr)
-- 
1.5.4.1

* [PATCH 11/13] [POWERPC] bootwrapper: use physical address in PHDR for uImage
From: Kumar Gala @ 2008-04-15 19:52 UTC (permalink / raw)
  To: paulus; +Cc: linuxppc-dev

Now that we properly set the physical address in the program header of the
vmlinux ELF, we can extract it to properly set the load address and entry
point for u-boot uImages.  Before, we always hard-coded the load and entry
point to 0.  However, there are situations where the kernel may be built
with a non-zero physical address.

We use objdump to extract the PHDR.  We assume that there is only one
PHDR of type LOAD in the vmlinux.
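
As an aside, the same value could be read without objdump; a hypothetical
standalone sketch (32-bit ELF, host and file assumed to share endianness,
error handling elided):

	#include <elf.h>
	#include <stdio.h>

	/* print p_paddr of the first (assumed only) PT_LOAD header */
	int main(int argc, char **argv)
	{
		Elf32_Ehdr eh;
		Elf32_Phdr ph;
		FILE *f = fopen(argv[1], "rb");
		int i;

		fread(&eh, sizeof(eh), 1, f);
		for (i = 0; i < eh.e_phnum; i++) {
			fseek(f, eh.e_phoff + (long)i * eh.e_phentsize, SEEK_SET);
			fread(&ph, sizeof(ph), 1, f);
			if (ph.p_type == PT_LOAD) {
				printf("%08x\n", ph.p_paddr);
				break;
			}
		}
		fclose(f);
		return 0;
	}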

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
---
 arch/powerpc/boot/wrapper |    5 ++++-
 1 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/boot/wrapper b/arch/powerpc/boot/wrapper
index 14a0182..d6c96d9 100755
--- a/arch/powerpc/boot/wrapper
+++ b/arch/powerpc/boot/wrapper
@@ -230,10 +230,13 @@ if [ -n "$version" ]; then
     uboot_version="-n Linux-$version"
 fi
 
+# physical offset of kernel image
+membase=`${CROSS}objdump -p "$kernel" | grep -m 1 LOAD | awk '{print $7}'`
+
 case "$platform" in
 uboot)
     rm -f "$ofile"
-    mkimage -A ppc -O linux -T kernel -C gzip -a 00000000 -e 00000000 \
+    mkimage -A ppc -O linux -T kernel -C gzip -a $membase -e $membase \
 	$uboot_version -d "$vmz" "$ofile"
     if [ -z "$cacheit" ]; then
 	rm -f "$vmz"
-- 
1.5.4.1

* [PATCH 12/13] [POWERPC] Cleanup pgtable-ppc32.h
From: Kumar Gala @ 2008-04-15 19:52 UTC (permalink / raw)
  To: paulus; +Cc: linuxppc-dev

* Removed the defines KERNEL_PGD_PTRS & USER_PGD_PTRS since they aren't used anywhere
* Changed the pmd_page macro to use pfn_to_page so we get proper behavior if
  ARCH_PFN_OFFSET is set, as well as if we use a different memory model on
  ppc32 (see the sketch below).
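
For context, with FLATMEM the generic pfn_to_page() is, roughly:

	/* sketch of the asm-generic definition; with ARCH_PFN_OFFSET == 0
	 * this degenerates to the old open-coded mem_map + pfn */
	#define pfn_to_page(pfn)	(mem_map + ((pfn) - ARCH_PFN_OFFSET))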

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
---
 include/asm-powerpc/pgtable-ppc32.h |    5 +----
 1 files changed, 1 insertions(+), 4 deletions(-)

diff --git a/include/asm-powerpc/pgtable-ppc32.h b/include/asm-powerpc/pgtable-ppc32.h
index bd5b401..daea769 100644
--- a/include/asm-powerpc/pgtable-ppc32.h
+++ b/include/asm-powerpc/pgtable-ppc32.h
@@ -98,9 +98,6 @@ extern int icache_44x_need_flush;
 #define USER_PTRS_PER_PGD	(TASK_SIZE / PGDIR_SIZE)
 #define FIRST_USER_ADDRESS	0
 
-#define USER_PGD_PTRS (PAGE_OFFSET >> PGDIR_SHIFT)
-#define KERNEL_PGD_PTRS (PTRS_PER_PGD-USER_PGD_PTRS)
-
 #define pte_ERROR(e) \
 	printk("%s:%d: bad pte %llx.\n", __FILE__, __LINE__, \
 		(unsigned long long)pte_val(e))
@@ -693,7 +690,7 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 #define pmd_page_vaddr(pmd)	\
 	((unsigned long) (pmd_val(pmd) & PAGE_MASK))
 #define pmd_page(pmd)		\
-	(mem_map + (__pa(pmd_val(pmd)) >> PAGE_SHIFT))
+	pfn_to_page((__pa(pmd_val(pmd)) >> PAGE_SHIFT))
 #endif
 
 /* to find an entry in a kernel page-table-directory */
-- 
1.5.4.1

* [PATCH 13/13] [POWERPC] 85xx: Add support for relocatable kernel (and booting at non-zero)
From: Kumar Gala @ 2008-04-15 19:52 UTC (permalink / raw)
  To: paulus; +Cc: linuxppc-dev

Added support to allow an 85xx kernel to be run from a non-zero physical
address (useful for cooperative asymmetric multiprocessing situations) and
for kdump.  The support can be selected either at compile time or at runtime
(CONFIG_RELOCATABLE).

Currently we are limited to running at a physical address that is a
multiple of 256M.  This is due to how we map TLBs to cover lowmem and
should be fixed up to allow 64M or maybe even 16M alignment in the future.

All the magic for this support is accomplished by properly initializing
the kernel memory subsystem and by ARCH_PFN_OFFSET.
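
The invariant this all rests on (documented in the asm/page.h hunk below)
can be sanity-checked with assumed numbers:

	/* illustrative values: a kernel linked and loaded at 256MB */
	#define PAGE_OFFSET	0xc0000000UL	/* VA of start of lowmem      */
	#define KERNELBASE	0xc0000000UL	/* VA the kernel runs at      */
	#define PHYSICAL_START	0x10000000UL	/* PA the kernel is loaded at */
	#define MEMORY_START	0x10000000UL	/* PA of start of lowmem      */

	/* KERNELBASE - PAGE_OFFSET == PHYSICAL_START - MEMORY_START, so:
	 * __pa(0xc0001000) = 0xc0001000 + 0x10000000 - 0xc0000000
	 *                  = 0x10001000 */
	#define __pa(x)	((unsigned long)(x) + PHYSICAL_START - KERNELBASE)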

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
---
 arch/powerpc/Kconfig                 |   69 ++++++++++++++++++++++++++++++++-
 arch/powerpc/kernel/head_fsl_booke.S |   11 +++++
 arch/powerpc/kernel/prom.c           |    4 ++
 arch/powerpc/kernel/setup_64.c       |    2 +-
 arch/powerpc/mm/init_32.c            |    4 +-
 arch/powerpc/mm/init_64.c            |    3 +-
 arch/powerpc/mm/mem.c                |    5 +-
 include/asm-powerpc/kdump.h          |    5 --
 include/asm-powerpc/page.h           |   45 ++++++++++++++++++----
 include/asm-powerpc/page_32.h        |    6 +++
 10 files changed, 133 insertions(+), 21 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index cb7406e..7813a0a 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -633,21 +633,76 @@ config LOWMEM_SIZE
 	hex "Maximum low memory size (in bytes)" if LOWMEM_SIZE_BOOL
 	default "0x30000000"
 
+config RELOCATABLE
+	bool "Build a relocatable kernel (EXPERIMENTAL)"
+	depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && FSL_BOOKE
+	help
+	  This builds a kernel image that is capable of running at the
+	  location the kernel is loaded at (some alignment restrictions may
+	  exist).
+
+	  One use is for the kexec on panic case where the recovery kernel
+	  must live at a different physical address than the primary
+	  kernel.
+
+	  Note: If CONFIG_RELOCATABLE=y, then the kernel runs from the address
+	  it has been loaded at and the compile time physical address
+	  CONFIG_PHYSICAL_START is ignored.  However, the CONFIG_PHYSICAL_START
+	  setting can still be useful to bootwrappers that need to know the
+	  load location of the kernel (eg. u-boot/mkimage).
+
+config PAGE_OFFSET_BOOL
+	bool "Set custom page offset address"
+	depends on ADVANCED_OPTIONS
+	help
+	  This option allows you to set the kernel virtual address at which
+	  the kernel will map low memory.  This can be useful in optimizing
+	  the virtual memory layout of the system.
+
+	  Say N here unless you know what you are doing.
+
+config PAGE_OFFSET
+	hex "Virtual address of memory base" if PAGE_OFFSET_BOOL
+	default "0xc0000000"
+
 config KERNEL_START_BOOL
 	bool "Set custom kernel base address"
 	depends on ADVANCED_OPTIONS
 	help
 	  This option allows you to set the kernel virtual address at which
-	  the kernel will map low memory (the kernel image will be linked at
-	  this address).  This can be useful in optimizing the virtual memory
-	  layout of the system.
+	  the kernel will be loaded.  Normally this should match PAGE_OFFSET
+	  however there are times (like kdump) that one might not want them
+	  to be the same.
 
 	  Say N here unless you know what you are doing.
 
 config KERNEL_START
 	hex "Virtual address of kernel base" if KERNEL_START_BOOL
+	default PAGE_OFFSET if PAGE_OFFSET_BOOL
+	default "0xc2000000" if CRASH_DUMP
 	default "0xc0000000"
 
+config PHYSICAL_START_BOOL
+	bool "Set physical address where the kernel is loaded"
+	depends on ADVANCED_OPTIONS && FLATMEM && FSL_BOOKE
+	help
+	  This gives the physical address where the kernel is loaded.
+
+	  Say N here unless you know what you are doing.
+
+config PHYSICAL_START
+	hex "Physical address where the kernel is loaded" if PHYSICAL_START_BOOL
+	default "0x02000000" if PPC_STD_MMU && CRASH_DUMP
+	default "0x00000000"
+
+config PHYSICAL_ALIGN
+	hex
+	default "0x10000000" if FSL_BOOKE
+	help
+	  This value puts the alignment restrictions on physical address
+	  where kernel is loaded and run from. Kernel is compiled for an
+	  address which meets above alignment restriction.
+
 config TASK_SIZE_BOOL
 	bool "Set custom user task size"
 	depends on ADVANCED_OPTIONS
@@ -694,9 +749,17 @@ config PIN_TLB
 endmenu
 
 if PPC64
+config PAGE_OFFSET
+	hex
+	default "0xc000000000000000"
 config KERNEL_START
 	hex
+	default "0xc000000002000000" if CRASH_DUMP
 	default "0xc000000000000000"
+config PHYSICAL_START
+	hex
+	default "0x02000000" if CRASH_DUMP
+	default "0x00000000"
 endif
 
 source "net/Kconfig"
diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index 9f40b3e..4d0336b 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -368,6 +368,17 @@ skpinv:	addi	r6,r6,1				/* Increment */
 
 	bl	early_init
 
+#ifdef CONFIG_RELOCATABLE
+	lis	r3,kernstart_addr@ha
+	la	r3,kernstart_addr@l(r3)
+#ifdef CONFIG_PHYS_64BIT
+	stw	r23,0(r3)
+	stw	r25,4(r3)
+#else
+	stw	r25,0(r3)
+#endif
+#endif
+
 	mfspr	r3,SPRN_TLB1CFG
 	andi.	r3,r3,0xfff
 	lis	r4,num_tlbcam_entries@ha
diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index 31d5b22..bbd695c 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -53,6 +53,7 @@
 #include <asm/pci-bridge.h>
 #include <asm/phyp_dump.h>
 #include <asm/kexec.h>
+#include <mm/mmu_decl.h>
 
 #ifdef DEBUG
 #define DBG(fmt...) printk(KERN_ERR fmt)
@@ -978,7 +979,10 @@ static int __init early_init_dt_scan_memory(unsigned long node,
 		}
 #endif
 		lmb_add(base, size);
+
+		memstart_addr = min((u64)memstart_addr, base);
 	}
+
 	return 0;
 }
 
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 0205d40..9087e7a 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -431,7 +431,7 @@ void __init setup_system(void)
 		printk("htab_address                  = 0x%p\n", htab_address);
 	printk("htab_hash_mask                = 0x%lx\n", htab_hash_mask);
 #if PHYSICAL_START > 0
-	printk("physical_start                = 0x%x\n", PHYSICAL_START);
+	printk("physical_start                = 0x%lx\n", PHYSICAL_START);
 #endif
 	printk("-----------------------------------------------------\n");
 
diff --git a/arch/powerpc/mm/init_32.c b/arch/powerpc/mm/init_32.c
index 9a0ea2c..eac4d1c 100644
--- a/arch/powerpc/mm/init_32.c
+++ b/arch/powerpc/mm/init_32.c
@@ -59,7 +59,9 @@ DEFINE_PER_CPU(struct mmu_gather, mmu_gathers);
 unsigned long total_memory;
 unsigned long total_lowmem;
 
-phys_addr_t memstart_addr;
+phys_addr_t memstart_addr = (phys_addr_t)~0ull;
+phys_addr_t kernstart_addr;
+EXPORT_SYMBOL(kernstart_addr);
 phys_addr_t lowmem_end_addr;
 
 int boot_mapsize;
diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index 9ea65d9..3be70ec 100644
--- a/arch/powerpc/mm/init_64.c
+++ b/arch/powerpc/mm/init_64.c
@@ -75,7 +75,8 @@
 /* max amount of RAM to use */
 unsigned long __max_memory;
 
-phys_addr_t memstart_addr;
+phys_addr_t memstart_addr = ~0;
+phys_addr_t kernstart_addr;
 
 void free_initmem(void)
 {
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 16def4d..0062e6b 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -216,7 +216,7 @@ void __init do_init_bootmem(void)
 	unsigned long total_pages;
 	int boot_mapsize;
 
-	max_pfn = lmb_end_of_DRAM() >> PAGE_SHIFT;
+	max_low_pfn = max_pfn = lmb_end_of_DRAM() >> PAGE_SHIFT;
 	total_pages = (lmb_end_of_DRAM() - memstart_addr) >> PAGE_SHIFT;
 #ifdef CONFIG_HIGHMEM
 	total_pages = total_lowmem >> PAGE_SHIFT;
@@ -232,7 +232,8 @@ void __init do_init_bootmem(void)
 
 	start = lmb_alloc(bootmap_pages << PAGE_SHIFT, PAGE_SIZE);
 
-	boot_mapsize = init_bootmem(start >> PAGE_SHIFT, total_pages);
+	min_low_pfn = MEMORY_START >> PAGE_SHIFT;
+	boot_mapsize = init_bootmem_node(NODE_DATA(0), start >> PAGE_SHIFT, min_low_pfn, max_low_pfn);
 
 	/* Add active regions with valid PFNs */
 	for (i = 0; i < lmb.memory.cnt; i++) {
diff --git a/include/asm-powerpc/kdump.h b/include/asm-powerpc/kdump.h
index 10e8eb1..f6c93c7 100644
--- a/include/asm-powerpc/kdump.h
+++ b/include/asm-powerpc/kdump.h
@@ -11,16 +11,11 @@
 
 #ifdef CONFIG_CRASH_DUMP
 
-#define PHYSICAL_START	KDUMP_KERNELBASE
 #define KDUMP_TRAMPOLINE_START	0x0100
 #define KDUMP_TRAMPOLINE_END	0x3000
 
 #define KDUMP_MIN_TCE_ENTRIES	2048
 
-#else /* !CONFIG_CRASH_DUMP */
-
-#define PHYSICAL_START	0x0
-
 #endif /* CONFIG_CRASH_DUMP */
 
 #ifndef __ASSEMBLY__
diff --git a/include/asm-powerpc/page.h b/include/asm-powerpc/page.h
index 6c85060..cffdf0e 100644
--- a/include/asm-powerpc/page.h
+++ b/include/asm-powerpc/page.h
@@ -12,6 +12,7 @@
 
 #include <asm/asm-compat.h>
 #include <asm/kdump.h>
+#include <asm/types.h>
 
 /*
  * On PPC32 page size is 4K. For PPC64 we support either 4K or 64K software
@@ -42,8 +43,23 @@
  *
  * The kdump dump kernel is one example where KERNELBASE != PAGE_OFFSET.
  *
- * To get a physical address from a virtual one you subtract PAGE_OFFSET,
- * _not_ KERNELBASE.
+ * PAGE_OFFSET is the virtual address of the start of lowmem.
+ *
+ * PHYSICAL_START is the physical address of the start of the kernel.
+ *
+ * MEMORY_START is the physical address of the start of lowmem.
+ *
+ * KERNELBASE, PAGE_OFFSET, and PHYSICAL_START are all configurable on
+ * ppc32 and based on how they are set we determine MEMORY_START.
+ *
+ * For the linear mapping the following equation should be true:
+ * KERNELBASE - PAGE_OFFSET = PHYSICAL_START - MEMORY_START
+ *
+ * Also, KERNELBASE >= PAGE_OFFSET and PHYSICAL_START >= MEMORY_START
+ *
+ * There are two ways to determine a physical address from a virtual one:
+ * va = pa + PAGE_OFFSET - MEMORY_START
+ * va = pa + KERNELBASE - PHYSICAL_START
  *
  * If you want to know something's offset from the start of the kernel you
  * should subtract KERNELBASE.
@@ -51,20 +67,33 @@
  * If you want to test if something's a kernel address, use is_kernel_addr().
  */
 
-#define PAGE_OFFSET     ASM_CONST(CONFIG_KERNEL_START)
-#define KERNELBASE      (PAGE_OFFSET + PHYSICAL_START)
-#define LOAD_OFFSET	PAGE_OFFSET
+#define KERNELBASE      ASM_CONST(CONFIG_KERNEL_START)
+#define PAGE_OFFSET	ASM_CONST(CONFIG_PAGE_OFFSET)
+#define LOAD_OFFSET	ASM_CONST((CONFIG_KERNEL_START-CONFIG_PHYSICAL_START))
+
+#if defined(CONFIG_RELOCATABLE) && defined(CONFIG_FLATMEM)
+#ifndef __ASSEMBLY__
+extern phys_addr_t memstart_addr;
+extern phys_addr_t kernstart_addr;
+#endif
+#define PHYSICAL_START	kernstart_addr
+#define MEMORY_START	memstart_addr
+#else
+#define PHYSICAL_START	ASM_CONST(CONFIG_PHYSICAL_START)
+#define MEMORY_START	(PHYSICAL_START + PAGE_OFFSET - KERNELBASE)
+#endif
 
 #ifdef CONFIG_FLATMEM
-#define pfn_valid(pfn)		((pfn) < max_mapnr)
+#define ARCH_PFN_OFFSET		(MEMORY_START >> PAGE_SHIFT)
+#define pfn_valid(pfn)		((pfn) >= ARCH_PFN_OFFSET && (pfn) < (ARCH_PFN_OFFSET + max_mapnr))
 #endif
 
 #define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
 #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
 #define virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
 
-#define __va(x) ((void *)((unsigned long)(x) + PAGE_OFFSET))
-#define __pa(x) ((unsigned long)(x) - PAGE_OFFSET)
+#define __va(x) ((void *)((unsigned long)(x) - PHYSICAL_START + KERNELBASE))
+#define __pa(x) ((unsigned long)(x) + PHYSICAL_START - KERNELBASE)
 
 /*
  * Unfortunately the PLT is in the BSS in the PPC32 ELF ABI,
diff --git a/include/asm-powerpc/page_32.h b/include/asm-powerpc/page_32.h
index 51f8134..ebfae53 100644
--- a/include/asm-powerpc/page_32.h
+++ b/include/asm-powerpc/page_32.h
@@ -1,6 +1,12 @@
 #ifndef _ASM_POWERPC_PAGE_32_H
 #define _ASM_POWERPC_PAGE_32_H
 
+#if defined(CONFIG_PHYSICAL_ALIGN) && (CONFIG_PHYSICAL_START != 0)
+#if (CONFIG_PHYSICAL_START % CONFIG_PHYSICAL_ALIGN) != 0
+#error "CONFIG_PHYSICAL_START must be a multiple of CONFIG_PHYSICAL_ALIGN"
+#endif
+#endif
+
 #define VM_DATA_DEFAULT_FLAGS	VM_DATA_DEFAULT_FLAGS32
 
 #ifdef CONFIG_NOT_COHERENT_CACHE
-- 
1.5.4.1

* [PATCH 13/13 v5] [POWERPC] 85xx: Add support for relocatable kernel (and booting at non-zero)
From: Kumar Gala @ 2008-04-15 21:09 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: linuxppc-dev

Added support to allow an 85xx kernel to be run from a non-zero physical
address (useful for cooperative asymmetric multiprocessing situations) and
for kdump.  The support can be selected either at compile time or at runtime
(CONFIG_RELOCATABLE).

Currently we are limited to running at a physical address that is a
multiple of 256M.  This is due to how we map TLBs to cover lowmem and
should be fixed up to allow 64M or maybe even 16M alignment in the future.

All the magic for this support is accomplished by properly initializing
the kernel memory subsystem and by ARCH_PFN_OFFSET.
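
As a rough illustration of that initialization (a hedged sketch, not code
from the series; the numeric values assume a kernel loaded at 256M with
the default 0xc0000000 virtual base):

#include <stdio.h>

#define PAGE_SHIFT	12
#define KERNELBASE	0xc0000000UL	/* CONFIG_KERNEL_START */
#define PHYSICAL_START	0x10000000UL	/* kernstart_addr at runtime */
#define MEMORY_START	0x10000000UL	/* memstart_addr at runtime */
#define ARCH_PFN_OFFSET	(MEMORY_START >> PAGE_SHIFT)

int main(void)
{
	unsigned long va = KERNELBASE + 0x1234;
	unsigned long pa = va + PHYSICAL_START - KERNELBASE;	/* __pa() */

	printf("__pa(0x%lx) = 0x%lx\n", va, pa);
	printf("__va(0x%lx) = 0x%lx\n", pa, pa + KERNELBASE - PHYSICAL_START);
	printf("first valid pfn = 0x%lx\n", ARCH_PFN_OFFSET);
	return 0;
}

With these values pfn_valid() starts accepting pfns at 0x10000 instead of
0, which is exactly what deriving ARCH_PFN_OFFSET from MEMORY_START buys us.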

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
---

Added EXPORT_SYMBOL(memstart_addr) since we need it for module builds and
fixed a warning.

- k
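
For context, a hypothetical minimal module (my sketch; the file and
function names are invented) that would need the export: virt_addr_valid()
expands through __pa() and pfn_valid(), which under CONFIG_RELOCATABLE
reference kernstart_addr and memstart_addr.

/* demo_reloc.c - hypothetical out-of-tree module sketch */
#include <linux/module.h>
#include <linux/kernel.h>
#include <asm/page.h>

static int __init demo_init(void)
{
	void *p = (void *)PAGE_OFFSET;

	/* Under CONFIG_RELOCATABLE this check compares against the
	 * runtime memstart_addr/kernstart_addr values, so a module
	 * build links against the exported symbols. */
	printk(KERN_INFO "PAGE_OFFSET valid: %d\n", virt_addr_valid(p));
	return 0;
}

static void __exit demo_exit(void)
{
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");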

 arch/powerpc/Kconfig                 |   69 ++++++++++++++++++++++++++++++++-
 arch/powerpc/kernel/head_fsl_booke.S |   11 +++++
 arch/powerpc/kernel/prom.c           |    4 ++
 arch/powerpc/kernel/setup_64.c       |    2 +-
 arch/powerpc/mm/fsl_booke_mmu.c      |    2 +-
 arch/powerpc/mm/init_32.c            |    5 ++-
 arch/powerpc/mm/init_64.c            |    3 +-
 arch/powerpc/mm/mem.c                |    5 +-
 include/asm-powerpc/kdump.h          |    5 --
 include/asm-powerpc/page.h           |   45 ++++++++++++++++++----
 include/asm-powerpc/page_32.h        |    6 +++
 11 files changed, 135 insertions(+), 22 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index cb7406e..7813a0a 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -633,21 +633,76 @@ config LOWMEM_SIZE
 	hex "Maximum low memory size (in bytes)" if LOWMEM_SIZE_BOOL
 	default "0x30000000"

+config RELOCATABLE
+	bool "Build a relocatable kernel (EXPERIMENTAL)"
+	depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && FSL_BOOKE
+	help
+	  This builds a kernel image that is capable of running at the
+	  location the kernel is loaded at (some alignment restrictions may
+	  exist).
+
+	  One use is for the kexec on panic case where the recovery kernel
+	  must live at a different physical address than the primary
+	  kernel.
+
+	  Note: If CONFIG_RELOCATABLE=y, then the kernel runs from the address
+	  it has been loaded at and the compile-time physical address
+	  CONFIG_PHYSICAL_START is ignored.  However, the CONFIG_PHYSICAL_START
+	  setting can still be useful to bootwrappers that need to know the
+	  load location of the kernel (e.g. u-boot/mkimage).
+
+config PAGE_OFFSET_BOOL
+	bool "Set custom page offset address"
+	depends on ADVANCED_OPTIONS
+	help
+	  This option allows you to set the kernel virtual address at which
+	  the kernel will map low memory.  This can be useful in optimizing
+	  the virtual memory layout of the system.
+
+	  Say N here unless you know what you are doing.
+
+config PAGE_OFFSET
+	hex "Virtual address of memory base" if PAGE_OFFSET_BOOL
+	default "0xc0000000"
+
 config KERNEL_START_BOOL
 	bool "Set custom kernel base address"
 	depends on ADVANCED_OPTIONS
 	help
 	  This option allows you to set the kernel virtual address at which
-	  the kernel will map low memory (the kernel image will be linked at
-	  this address).  This can be useful in optimizing the virtual memory
-	  layout of the system.
+	  the kernel will be loaded.  Normally this should match PAGE_OFFSET;
+	  however, there are times (like kdump) when one might not want them
+	  to be the same.

 	  Say N here unless you know what you are doing.

 config KERNEL_START
 	hex "Virtual address of kernel base" if KERNEL_START_BOOL
+	default PAGE_OFFSET if PAGE_OFFSET_BOOL
+	default "0xc2000000" if CRASH_DUMP
 	default "0xc0000000"

+config PHYSICAL_START_BOOL
+	bool "Set physical address where the kernel is loaded"
+	depends on ADVANCED_OPTIONS && FLATMEM && FSL_BOOKE
+	help
+	  This gives the physical address where the kernel is loaded.
+
+	  Say N here unless you know what you are doing.
+
+config PHYSICAL_START
+	hex "Physical address where the kernel is loaded" if PHYSICAL_START_BOOL
+	default "0x02000000" if PPC_STD_MMU && CRASH_DUMP
+	default "0x00000000"
+
+config PHYSICAL_ALIGN
+	hex
+	default "0x10000000" if FSL_BOOKE
+	help
+	  This value sets the alignment restriction on the physical address
+	  at which the kernel is loaded and run.  The kernel is compiled for
+	  an address which meets this alignment restriction.
+
 config TASK_SIZE_BOOL
 	bool "Set custom user task size"
 	depends on ADVANCED_OPTIONS
@@ -694,9 +749,17 @@ config PIN_TLB
 endmenu

 if PPC64
+config PAGE_OFFSET
+	hex
+	default "0xc000000000000000"
 config KERNEL_START
 	hex
+	default "0xc000000002000000" if CRASH_DUMP
 	default "0xc000000000000000"
+config PHYSICAL_START
+	hex
+	default "0x02000000" if CRASH_DUMP
+	default "0x00000000"
 endif

 source "net/Kconfig"
diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index 9f40b3e..4d0336b 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -368,6 +368,17 @@ skpinv:	addi	r6,r6,1				/* Increment */

 	bl	early_init

+#ifdef CONFIG_RELOCATABLE
+	lis	r3,kernstart_addr@ha
+	la	r3,kernstart_addr@l(r3)
+#ifdef CONFIG_PHYS_64BIT
+	stw	r23,0(r3)
+	stw	r25,4(r3)
+#else
+	stw	r25,0(r3)
+#endif
+#endif
+
 	mfspr	r3,SPRN_TLB1CFG
 	andi.	r3,r3,0xfff
 	lis	r4,num_tlbcam_entries@ha
diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index 31d5b22..bbd695c 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -53,6 +53,7 @@
 #include <asm/pci-bridge.h>
 #include <asm/phyp_dump.h>
 #include <asm/kexec.h>
+#include <mm/mmu_decl.h>

 #ifdef DEBUG
 #define DBG(fmt...) printk(KERN_ERR fmt)
@@ -978,7 +979,10 @@ static int __init early_init_dt_scan_memory(unsigned long node,
 		}
 #endif
 		lmb_add(base, size);
+
+		memstart_addr = min((u64)memstart_addr, base);
 	}
+
 	return 0;
 }

diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 0205d40..9087e7a 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -431,7 +431,7 @@ void __init setup_system(void)
 		printk("htab_address                  = 0x%p\n", htab_address);
 	printk("htab_hash_mask                = 0x%lx\n", htab_hash_mask);
 #if PHYSICAL_START > 0
-	printk("physical_start                = 0x%x\n", PHYSICAL_START);
+	printk("physical_start                = 0x%lx\n", PHYSICAL_START);
 #endif
 	printk("-----------------------------------------------------\n");

diff --git a/arch/powerpc/mm/fsl_booke_mmu.c b/arch/powerpc/mm/fsl_booke_mmu.c
index ada249b..ce10e2b 100644
--- a/arch/powerpc/mm/fsl_booke_mmu.c
+++ b/arch/powerpc/mm/fsl_booke_mmu.c
@@ -202,7 +202,7 @@ adjust_total_lowmem(void)
 		cam_max_size = max_lowmem_size;

 	/* adjust lowmem size to max_lowmem_size */
-	ram = min(max_lowmem_size, total_lowmem);
+	ram = min(max_lowmem_size, (phys_addr_t)total_lowmem);

 	/* Calculate CAM values */
 	__cam0 = 1UL << 2 * (__ilog2(ram) / 2);
diff --git a/arch/powerpc/mm/init_32.c b/arch/powerpc/mm/init_32.c
index 9a0ea2c..a9ac3f4 100644
--- a/arch/powerpc/mm/init_32.c
+++ b/arch/powerpc/mm/init_32.c
@@ -59,7 +59,10 @@ DEFINE_PER_CPU(struct mmu_gather, mmu_gathers);
 unsigned long total_memory;
 unsigned long total_lowmem;

-phys_addr_t memstart_addr;
+phys_addr_t memstart_addr = (phys_addr_t)~0ull;
+EXPORT_SYMBOL(memstart_addr);
+phys_addr_t kernstart_addr;
+EXPORT_SYMBOL(kernstart_addr);
 phys_addr_t lowmem_end_addr;

 int boot_mapsize;
diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index 9ea65d9..3be70ec 100644
--- a/arch/powerpc/mm/init_64.c
+++ b/arch/powerpc/mm/init_64.c
@@ -75,7 +75,8 @@
 /* max amount of RAM to use */
 unsigned long __max_memory;

-phys_addr_t memstart_addr;
+phys_addr_t memstart_addr = ~0;
+phys_addr_t kernstart_addr;

 void free_initmem(void)
 {
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 16def4d..0062e6b 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -216,7 +216,7 @@ void __init do_init_bootmem(void)
 	unsigned long total_pages;
 	int boot_mapsize;

-	max_pfn = lmb_end_of_DRAM() >> PAGE_SHIFT;
+	max_low_pfn = max_pfn = lmb_end_of_DRAM() >> PAGE_SHIFT;
 	total_pages = (lmb_end_of_DRAM() - memstart_addr) >> PAGE_SHIFT;
 #ifdef CONFIG_HIGHMEM
 	total_pages = total_lowmem >> PAGE_SHIFT;
@@ -232,7 +232,8 @@ void __init do_init_bootmem(void)

 	start = lmb_alloc(bootmap_pages << PAGE_SHIFT, PAGE_SIZE);

-	boot_mapsize = init_bootmem(start >> PAGE_SHIFT, total_pages);
+	min_low_pfn = MEMORY_START >> PAGE_SHIFT;
+	boot_mapsize = init_bootmem_node(NODE_DATA(0), start >> PAGE_SHIFT, min_low_pfn, max_low_pfn);

 	/* Add active regions with valid PFNs */
 	for (i = 0; i < lmb.memory.cnt; i++) {
diff --git a/include/asm-powerpc/kdump.h b/include/asm-powerpc/kdump.h
index 10e8eb1..f6c93c7 100644
--- a/include/asm-powerpc/kdump.h
+++ b/include/asm-powerpc/kdump.h
@@ -11,16 +11,11 @@

 #ifdef CONFIG_CRASH_DUMP

-#define PHYSICAL_START	KDUMP_KERNELBASE
 #define KDUMP_TRAMPOLINE_START	0x0100
 #define KDUMP_TRAMPOLINE_END	0x3000

 #define KDUMP_MIN_TCE_ENTRIES	2048

-#else /* !CONFIG_CRASH_DUMP */
-
-#define PHYSICAL_START	0x0
-
 #endif /* CONFIG_CRASH_DUMP */

 #ifndef __ASSEMBLY__
diff --git a/include/asm-powerpc/page.h b/include/asm-powerpc/page.h
index 6c85060..cffdf0e 100644
--- a/include/asm-powerpc/page.h
+++ b/include/asm-powerpc/page.h
@@ -12,6 +12,7 @@

 #include <asm/asm-compat.h>
 #include <asm/kdump.h>
+#include <asm/types.h>

 /*
  * On PPC32 page size is 4K. For PPC64 we support either 4K or 64K software
@@ -42,8 +43,23 @@
  *
  * The kdump dump kernel is one example where KERNELBASE != PAGE_OFFSET.
  *
- * To get a physical address from a virtual one you subtract PAGE_OFFSET,
- * _not_ KERNELBASE.
+ * PAGE_OFFSET is the virtual address of the start of lowmem.
+ *
+ * PHYSICAL_START is the physical address of the start of the kernel.
+ *
+ * MEMORY_START is the physical address of the start of lowmem.
+ *
+ * KERNELBASE, PAGE_OFFSET, and PHYSICAL_START are all configurable on
+ * ppc32; based on how they are set, we determine MEMORY_START.
+ *
+ * For the linear mapping the following equation should be true:
+ * KERNELBASE - PAGE_OFFSET = PHYSICAL_START - MEMORY_START
+ *
+ * Also, KERNELBASE >= PAGE_OFFSET and PHYSICAL_START >= MEMORY_START
+ *
+ * There are two ways to determine a virtual address from a physical one:
+ * va = pa + PAGE_OFFSET - MEMORY_START
+ * va = pa + KERNELBASE - PHYSICAL_START
  *
  * If you want to know something's offset from the start of the kernel you
  * should subtract KERNELBASE.
@@ -51,20 +67,33 @@
  * If you want to test if something's a kernel address, use is_kernel_addr().
  */

-#define PAGE_OFFSET     ASM_CONST(CONFIG_KERNEL_START)
-#define KERNELBASE      (PAGE_OFFSET + PHYSICAL_START)
-#define LOAD_OFFSET	PAGE_OFFSET
+#define KERNELBASE      ASM_CONST(CONFIG_KERNEL_START)
+#define PAGE_OFFSET	ASM_CONST(CONFIG_PAGE_OFFSET)
+#define LOAD_OFFSET	ASM_CONST((CONFIG_KERNEL_START-CONFIG_PHYSICAL_START))
+
+#if defined(CONFIG_RELOCATABLE) && defined(CONFIG_FLATMEM)
+#ifndef __ASSEMBLY__
+extern phys_addr_t memstart_addr;
+extern phys_addr_t kernstart_addr;
+#endif
+#define PHYSICAL_START	kernstart_addr
+#define MEMORY_START	memstart_addr
+#else
+#define PHYSICAL_START	ASM_CONST(CONFIG_PHYSICAL_START)
+#define MEMORY_START	(PHYSICAL_START + PAGE_OFFSET - KERNELBASE)
+#endif

 #ifdef CONFIG_FLATMEM
-#define pfn_valid(pfn)		((pfn) < max_mapnr)
+#define ARCH_PFN_OFFSET		(MEMORY_START >> PAGE_SHIFT)
+#define pfn_valid(pfn)		((pfn) >= ARCH_PFN_OFFSET && (pfn) < (ARCH_PFN_OFFSET + max_mapnr))
 #endif

 #define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
 #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
 #define virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)

-#define __va(x) ((void *)((unsigned long)(x) + PAGE_OFFSET))
-#define __pa(x) ((unsigned long)(x) - PAGE_OFFSET)
+#define __va(x) ((void *)((unsigned long)(x) - PHYSICAL_START + KERNELBASE))
+#define __pa(x) ((unsigned long)(x) + PHYSICAL_START - KERNELBASE)

 /*
  * Unfortunately the PLT is in the BSS in the PPC32 ELF ABI,
diff --git a/include/asm-powerpc/page_32.h b/include/asm-powerpc/page_32.h
index 51f8134..ebfae53 100644
--- a/include/asm-powerpc/page_32.h
+++ b/include/asm-powerpc/page_32.h
@@ -1,6 +1,12 @@
 #ifndef _ASM_POWERPC_PAGE_32_H
 #define _ASM_POWERPC_PAGE_32_H

+#if defined(CONFIG_PHYSICAL_ALIGN) && (CONFIG_PHYSICAL_START != 0)
+#if (CONFIG_PHYSICAL_START % CONFIG_PHYSICAL_ALIGN) != 0
+#error "CONFIG_PHYSICAL_START must be a multiple of CONFIG_PHYSICAL_ALIGN"
+#endif
+#endif
+
 #define VM_DATA_DEFAULT_FLAGS	VM_DATA_DEFAULT_FLAGS32

 #ifdef CONFIG_NOT_COHERENT_CACHE
-- 
1.5.4.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2008-04-15 21:13 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2008-04-15 19:52 [PATCH 00/13 v4] [POWERPC] ppc32 mm init clean and 85xx kernel reloc Kumar Gala
2008-04-15 19:52 ` [PATCH 01/13] [POWERPC] Remove Kconfig option BOOT_LOAD Kumar Gala
2008-04-15 19:52   ` [PATCH 02/13] [POWERPC] Provide access to arch/powerpc include path on ppc64 Kumar Gala
2008-04-15 19:52     ` [PATCH 03/13] [POWERPC] Remove and replace uses of PPC_MEMSTART with memstart_addr Kumar Gala
2008-04-15 19:52       ` [PATCH 04/13] [POWERPC] Introduce lowmem_end_addr to distinguish from total_lowmem Kumar Gala
2008-04-15 19:52         ` [PATCH 05/13] [POWERPC] 85xx: Cleanup TLB initialization Kumar Gala
2008-04-15 19:52           ` [PATCH 06/13] [POWERPC] Use lowmem_end_addr to limit lmb allocations on ppc32 Kumar Gala
2008-04-15 19:52             ` [PATCH 07/13] [POWERPC] Rename __initial_memory_limit to __initial_memory_limit_addr Kumar Gala
2008-04-15 19:52               ` [PATCH 08/13] [POWERPC] Clean up some linker and symbol usage Kumar Gala
2008-04-15 19:52                 ` [PATCH 09/13] [POWERPC] Move phys_addr_t definition into asm/types.h Kumar Gala
2008-04-15 19:52                   ` [PATCH 10/13] [POWERPC] Update linker script to properly set physical addresses Kumar Gala
2008-04-15 19:52                     ` [PATCH 11/13] [POWERPC] bootwrapper: use physical address in PHDR for uImage Kumar Gala
2008-04-15 19:52                       ` [PATCH 12/13] [POWERPC] Cleanup pgtable-ppc32.h Kumar Gala
2008-04-15 19:52                         ` [PATCH 13/13] [POWERPC] 85xx: Add support for relocatable kernel (and booting at non-zero) Kumar Gala
2008-04-15 21:09                           ` [PATCH 13/13 v5] " Kumar Gala
