* [PATCH 00/11] ppc32 mm init cleanup and 85xx kernel reloc

From: Kumar Gala
Date: 2008-04-01  2:08 UTC
To: paulus
Cc: linuxppc-dev

This set of patches cleans up the initialization of various MMU bits on
ppc32 (and a small bit on ppc64 to maintain common code) towards the
goal of having an 85xx (Book-E) kernel able to run at non-zero physical
offsets.

These patches exist in my 'master' and 'ppc32_mm_init' branches.  I've
dropped them from powerpc-next as paulus wants them to go via his tree
directly since they touch common code.

- k
* [PATCH 01/11] [POWERPC] bootwrapper: Allow specifying the image physical offset

From: Kumar Gala
Date: 2008-04-01  2:08 UTC
To: paulus
Cc: linuxppc-dev

Normally we assume kernel images will be loaded at offset 0.  However
there are situations, like when the kernel itself is running at a
non-zero physical address, where we don't want to load it at 0.

Allow the wrapper to take an offset.  We use this when building u-boot
images.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
---
 arch/powerpc/boot/Makefile |    7 +++++++
 arch/powerpc/boot/wrapper  |   12 ++++++++++--
 2 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/boot/Makefile b/arch/powerpc/boot/Makefile
index a5528ab..3c80858 100644
--- a/arch/powerpc/boot/Makefile
+++ b/arch/powerpc/boot/Makefile
@@ -35,6 +35,12 @@ endif

 BOOTCFLAGS += -I$(obj) -I$(srctree)/$(obj) -I$(srctree)/$(src)/libfdt

+ifdef CONFIG_MEMORY_START
+MEMBASE=$(CONFIG_MEMORY_START)
+else
+MEMBASE=0x00000000
+endif
+
 $(obj)/4xx.o: BOOTCFLAGS += -mcpu=405
 $(obj)/ebony.o: BOOTCFLAGS += -mcpu=405
 $(obj)/cuboot-taishan.o: BOOTCFLAGS += -mcpu=405
@@ -181,6 +187,7 @@ endif
 # args (to if_changed): 1 = (this rule), 2 = platform, 3 = dts 4=dtb 5=initrd
 quiet_cmd_wrap = WRAP    $@
       cmd_wrap =$(CONFIG_SHELL) $(wrapper) -c -o $@ -p $2 $(CROSSWRAP) \
+                -m $(MEMBASE) \
                 $(if $3, -s $3)$(if $4, -d $4)$(if $5, -i $5) vmlinux

 image-$(CONFIG_PPC_PSERIES) += zImage.pseries

diff --git a/arch/powerpc/boot/wrapper b/arch/powerpc/boot/wrapper
index 03b474b..011d148 100755
--- a/arch/powerpc/boot/wrapper
+++ b/arch/powerpc/boot/wrapper
@@ -50,8 +50,11 @@ objbin=$object
 # directory for working files
 tmpdir=.

+# physical offset of kernel image
+membase=0x00000000
+
 usage() {
-    echo 'Usage: wrapper [-o output] [-p platform] [-i initrd]' >&2
+    echo 'Usage: wrapper [-o output] [-p platform] [-i initrd] [-m membase]' >&2
     echo '       [-d devtree] [-s tree.dts] [-c] [-C cross-prefix]' >&2
     echo '       [-D datadir] [-W workingdir] [--no-gzip] [vmlinux]' >&2
     exit 1
@@ -84,6 +87,11 @@ while [ "$#" -gt 0 ]; do
         [ "$#" -gt 0 ] || usage
         dts="$1"
         ;;
+    -m)
+        shift
+        [ "$#" -gt 0 ] || usage
+        membase="$1"
+        ;;
     -c)
         cacheit=y
         ;;
@@ -229,7 +237,7 @@ fi
 case "$platform" in
 uboot)
     rm -f "$ofile"
-    mkimage -A ppc -O linux -T kernel -C gzip -a 00000000 -e 00000000 \
+    mkimage -A ppc -O linux -T kernel -C gzip -a $membase -e $membase \
         $uboot_version -d "$vmz" "$ofile"
     if [ -z "$cacheit" ]; then
         rm -f "$vmz"
--
1.5.4.1
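As a usage sketch for the new -m switch (the output name and the 256M
value here are illustrative, not taken from the patch): a U-Boot image
for a kernel meant to run at physical 256M could be produced with
something like

    sh arch/powerpc/boot/wrapper -c -o zImage.uboot -p uboot \
        -m 0x10000000 vmlinux

which makes the wrapper invoke mkimage with "-a 0x10000000 -e 0x10000000"
instead of the previously hard-coded zero load and entry addresses.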
* [PATCH 02/11] [POWERPC] Remove Kconfig option BOOT_LOAD

From: Kumar Gala
Date: 2008-04-01  2:08 UTC
To: paulus
Cc: linuxppc-dev

Nothing appears to use BOOT_LOAD, so remove it as a configurable option.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
---
 arch/powerpc/Kconfig |   16 ----------------
 1 files changed, 0 insertions(+), 16 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 3651355..4d9ced2 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -674,22 +674,6 @@ config CONSISTENT_SIZE
         hex "Size of consistent memory pool" if CONSISTENT_SIZE_BOOL
         default "0x00200000" if NOT_COHERENT_CACHE

-config BOOT_LOAD_BOOL
-        bool "Set the boot link/load address"
-        depends on ADVANCED_OPTIONS && !PPC_MULTIPLATFORM
-        help
-          This option allows you to set the initial load address of the zImage
-          or zImage.initrd file.  This can be useful if you are on a board
-          which has a small amount of memory.
-
-          Say N here unless you know what you are doing.
-
-config BOOT_LOAD
-        hex "Link/load address for booting" if BOOT_LOAD_BOOL
-        default "0x00400000" if 40x || 8xx || 8260
-        default "0x01000000" if 44x
-        default "0x00800000"
-
 config PIN_TLB
         bool "Pinned Kernel TLBs (860 ONLY)"
         depends on ADVANCED_OPTIONS && 8xx
--
1.5.4.1
* [PATCH 03/11] [POWERPC] Provide access to arch/powerpc include path on ppc64

From: Kumar Gala
Date: 2008-04-01  2:08 UTC
To: paulus
Cc: linuxppc-dev

There does not appear to be any reason that we shouldn't just have
-Iarch/$(ARCH) on both ppc32 and ppc64 builds.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
---
 arch/powerpc/Makefile |   10 ++++------
 1 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index ab5cfe8..43ee815 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -71,13 +71,11 @@ endif

 LDFLAGS_vmlinux := -Bstatic

-CPPFLAGS-$(CONFIG_PPC32) := -Iarch/$(ARCH)
-AFLAGS-$(CONFIG_PPC32) := -Iarch/$(ARCH)
 CFLAGS-$(CONFIG_PPC64) := -mminimal-toc -mtraceback=none -mcall-aixdesc
-CFLAGS-$(CONFIG_PPC32) := -Iarch/$(ARCH) -ffixed-r2 -mmultiple
-KBUILD_CPPFLAGS += $(CPPFLAGS-y)
-KBUILD_AFLAGS += $(AFLAGS-y)
-KBUILD_CFLAGS += -msoft-float -pipe $(CFLAGS-y)
+CFLAGS-$(CONFIG_PPC32) := -ffixed-r2 -mmultiple
+KBUILD_CPPFLAGS += -Iarch/$(ARCH)
+KBUILD_AFLAGS += -Iarch/$(ARCH)
+KBUILD_CFLAGS += -msoft-float -pipe -Iarch/$(ARCH) $(CFLAGS-y)

 CPP = $(CC) -E $(KBUILD_CFLAGS)
 CHECKFLAGS += -m$(CONFIG_WORD_SIZE) -D__powerpc__ -D__powerpc$(CONFIG_WORD_SIZE)__
--
1.5.4.1
* [PATCH 04/11 v2] [POWERPC] Remove and replace uses of PPC_MEMSTART with memstart_addr

From: Kumar Gala
Date: 2008-04-01  2:08 UTC
To: paulus
Cc: linuxppc-dev

A number of users of PPC_MEMSTART (40x, ppc_mmu_32) can just always use
0, as we don't support booting these kernels at non-zero physical
addresses since their exception vectors must be at 0 (or 0xfffx_xxxx).

For the sub-arches that support relocatable interrupt vectors (Book-E),
it's reasonable to have memory start at a non-zero physical address.
For those cases use the variable memstart_addr instead of the #define
PPC_MEMSTART, since the only uses of PPC_MEMSTART are for initialization,
and in the future we can set memstart_addr at runtime to have a
relocatable kernel.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
---
Fixed warning in ppc_mmu_32.c due to 'align' still being defined.

 arch/powerpc/mm/40x_mmu.c       |    2 +-
 arch/powerpc/mm/fsl_booke_mmu.c |   11 +++++------
 arch/powerpc/mm/init_32.c       |    8 ++++----
 arch/powerpc/mm/mmu_decl.h      |    1 +
 arch/powerpc/mm/pgtable_32.c    |    5 +++--
 arch/powerpc/mm/ppc_mmu_32.c    |   11 ++---------
 include/asm-powerpc/page_32.h   |    2 --
 7 files changed, 16 insertions(+), 24 deletions(-)

diff --git a/arch/powerpc/mm/40x_mmu.c b/arch/powerpc/mm/40x_mmu.c
index 3899ea9..cecbbc7 100644
--- a/arch/powerpc/mm/40x_mmu.c
+++ b/arch/powerpc/mm/40x_mmu.c
@@ -97,7 +97,7 @@ unsigned long __init mmu_mapin_ram(void)
         phys_addr_t p;

         v = KERNELBASE;
-        p = PPC_MEMSTART;
+        p = 0;
         s = total_lowmem;

         if (__map_without_ltlbs)

diff --git a/arch/powerpc/mm/fsl_booke_mmu.c b/arch/powerpc/mm/fsl_booke_mmu.c
index c93a966..3dd0c81 100644
--- a/arch/powerpc/mm/fsl_booke_mmu.c
+++ b/arch/powerpc/mm/fsl_booke_mmu.c
@@ -53,13 +53,12 @@
 #include <asm/machdep.h>
 #include <asm/setup.h>

+#include "mmu_decl.h"
+
 extern void loadcam_entry(unsigned int index);
 unsigned int tlbcam_index;
 unsigned int num_tlbcam_entries;
 static unsigned long __cam0, __cam1, __cam2;
-extern unsigned long total_lowmem;
-extern unsigned long __max_low_memory;
-extern unsigned long __initial_memory_limit;

 #define MAX_LOW_MEM     CONFIG_LOWMEM_SIZE
 #define NUM_TLBCAMS     (16)
@@ -165,15 +164,15 @@ void invalidate_tlbcam_entry(int index)
 void __init cam_mapin_ram(unsigned long cam0, unsigned long cam1,
                 unsigned long cam2)
 {
-        settlbcam(0, PAGE_OFFSET, PPC_MEMSTART, cam0, _PAGE_KERNEL, 0);
+        settlbcam(0, PAGE_OFFSET, memstart_addr, cam0, _PAGE_KERNEL, 0);
         tlbcam_index++;
         if (cam1) {
                 tlbcam_index++;
-                settlbcam(1, PAGE_OFFSET+cam0, PPC_MEMSTART+cam0, cam1, _PAGE_KERNEL, 0);
+                settlbcam(1, PAGE_OFFSET+cam0, memstart_addr+cam0, cam1, _PAGE_KERNEL, 0);
         }
         if (cam2) {
                 tlbcam_index++;
-                settlbcam(2, PAGE_OFFSET+cam0+cam1, PPC_MEMSTART+cam0+cam1, cam2, _PAGE_KERNEL, 0);
+                settlbcam(2, PAGE_OFFSET+cam0+cam1, memstart_addr+cam0+cam1, cam2, _PAGE_KERNEL, 0);
         }
 }

diff --git a/arch/powerpc/mm/init_32.c b/arch/powerpc/mm/init_32.c
index 59a725b..01a81a0 100644
--- a/arch/powerpc/mm/init_32.c
+++ b/arch/powerpc/mm/init_32.c
@@ -59,8 +59,9 @@ DEFINE_PER_CPU(struct mmu_gather, mmu_gathers);

 unsigned long total_memory;
 unsigned long total_lowmem;
-unsigned long ppc_memstart;
-unsigned long ppc_memoffset = PAGE_OFFSET;
+phys_addr_t memstart_addr;
+EXPORT_SYMBOL(memstart_addr);
+phys_addr_t lowmem_end_addr;

 int boot_mapsize;
 #ifdef CONFIG_PPC_PMAC
@@ -145,8 +146,7 @@ void __init MMU_init(void)
                         printk(KERN_WARNING "Only using first contiguous memory region");
         }

-        total_memory = lmb_end_of_DRAM();
-        total_lowmem = total_memory;
+        total_lowmem = total_memory = lmb_end_of_DRAM() - memstart_addr;

 #ifdef CONFIG_FSL_BOOKE
         /* Freescale Book-E parts expect lowmem to be mapped by fixed TLB

diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index ebfd13d..5bc11f5 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -51,6 +51,7 @@ extern unsigned long __max_low_memory;
 extern unsigned long __initial_memory_limit;
 extern unsigned long total_memory;
 extern unsigned long total_lowmem;
+extern phys_addr_t memstart_addr;

 /* ...and now those things that may be slightly different between processor
  * architectures. -- Dan

diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
index ac3390f..64c44bc 100644
--- a/arch/powerpc/mm/pgtable_32.c
+++ b/arch/powerpc/mm/pgtable_32.c
@@ -281,12 +281,13 @@ int map_page(unsigned long va, phys_addr_t pa, int flags)
  */
 void __init mapin_ram(void)
 {
-        unsigned long v, p, s, f;
+        unsigned long v, s, f;
+        phys_addr_t p;
         int ktext;

         s = mmu_mapin_ram();
         v = KERNELBASE + s;
-        p = PPC_MEMSTART + s;
+        p = memstart_addr + s;
         for (; s < total_lowmem; s += PAGE_SIZE) {
                 ktext = ((char *) v >= _stext && (char *) v < etext);
                 f = ktext ? _PAGE_RAM_TEXT : _PAGE_RAM;

diff --git a/arch/powerpc/mm/ppc_mmu_32.c b/arch/powerpc/mm/ppc_mmu_32.c
index 72de3c7..65f915c 100644
--- a/arch/powerpc/mm/ppc_mmu_32.c
+++ b/arch/powerpc/mm/ppc_mmu_32.c
@@ -82,7 +82,6 @@ unsigned long __init mmu_mapin_ram(void)
 #else
         unsigned long tot, bl, done;
         unsigned long max_size = (256<<20);
-        unsigned long align;

         if (__map_without_bats) {
                 printk(KERN_DEBUG "RAM mapped without BATs\n");
@@ -93,19 +92,13 @@ unsigned long __init mmu_mapin_ram(void)

         /* Make sure we don't map a block larger than the
            smallest alignment of the physical address. */
-        /* alignment of PPC_MEMSTART */
-        align = ~(PPC_MEMSTART-1) & PPC_MEMSTART;
-        /* set BAT block size to MIN(max_size, align) */
-        if (align && align < max_size)
-                max_size = align;
-
         tot = total_lowmem;
         for (bl = 128<<10; bl < max_size; bl <<= 1) {
                 if (bl * 2 > tot)
                         break;
         }

-        setbat(2, KERNELBASE, PPC_MEMSTART, bl, _PAGE_RAM);
+        setbat(2, KERNELBASE, 0, bl, _PAGE_RAM);
         done = (unsigned long)bat_addrs[2].limit - KERNELBASE + 1;
         if ((done < tot) && !bat_addrs[3].limit) {
                 /* use BAT3 to cover a bit more */
@@ -113,7 +106,7 @@ unsigned long __init mmu_mapin_ram(void)
                 for (bl = 128<<10; bl < max_size; bl <<= 1)
                         if (bl * 2 > tot)
                                 break;
-                setbat(3, KERNELBASE+done, PPC_MEMSTART+done, bl, _PAGE_RAM);
+                setbat(3, KERNELBASE+done, done, bl, _PAGE_RAM);
                 done = (unsigned long)bat_addrs[3].limit - KERNELBASE + 1;
         }

diff --git a/include/asm-powerpc/page_32.h b/include/asm-powerpc/page_32.h
index 65ea19e..51f8134 100644
--- a/include/asm-powerpc/page_32.h
+++ b/include/asm-powerpc/page_32.h
@@ -3,8 +3,6 @@

 #define VM_DATA_DEFAULT_FLAGS   VM_DATA_DEFAULT_FLAGS32

-#define PPC_MEMSTART    0
-
 #ifdef CONFIG_NOT_COHERENT_CACHE
 #define ARCH_KMALLOC_MINALIGN   L1_CACHE_BYTES
 #endif
--
1.5.4.1
* [PATCH 05/11] [POWERPC] Introduce lowmem_end_addr to distinguish from total_lowmem

From: Kumar Gala
Date: 2008-04-01  2:08 UTC
To: paulus
Cc: linuxppc-dev

total_lowmem represents the amount of low memory, not the physical
address at which low memory ends.  If the start of memory is at 0 it
happens that total_lowmem can be used as both the size and the address
that lowmem ends at (technically it's one byte beyond the end).

To make the code a bit clearer and to deal with the case when the start
of memory isn't at physical 0, we introduce lowmem_end_addr, which
represents one byte beyond the last physical address in the lowmem
region.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
---
 arch/powerpc/mm/44x_mmu.c  |    2 +-
 arch/powerpc/mm/init_32.c  |    4 +++-
 arch/powerpc/mm/init_64.c  |    2 ++
 arch/powerpc/mm/mem.c      |   16 +++++++++-------
 arch/powerpc/mm/mmu_decl.h |    1 +
 5 files changed, 16 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/mm/44x_mmu.c b/arch/powerpc/mm/44x_mmu.c
index 04dc087..953fb91 100644
--- a/arch/powerpc/mm/44x_mmu.c
+++ b/arch/powerpc/mm/44x_mmu.c
@@ -67,7 +67,7 @@ unsigned long __init mmu_mapin_ram(void)

         /* Pin in enough TLBs to cover any lowmem not covered by the
          * initial 256M mapping established in head_44x.S */
-        for (addr = PPC_PIN_SIZE; addr < total_lowmem;
+        for (addr = PPC_PIN_SIZE; addr < lowmem_end_addr;
              addr += PPC_PIN_SIZE)
                 ppc44x_pin_tlb(addr + PAGE_OFFSET, addr);

diff --git a/arch/powerpc/mm/init_32.c b/arch/powerpc/mm/init_32.c
index 01a81a0..345a275 100644
--- a/arch/powerpc/mm/init_32.c
+++ b/arch/powerpc/mm/init_32.c
@@ -147,6 +147,7 @@ void __init MMU_init(void)
         }

         total_lowmem = total_memory = lmb_end_of_DRAM() - memstart_addr;
+        lowmem_end_addr = memstart_addr + total_lowmem;

 #ifdef CONFIG_FSL_BOOKE
         /* Freescale Book-E parts expect lowmem to be mapped by fixed TLB
@@ -157,9 +158,10 @@ void __init MMU_init(void)

         if (total_lowmem > __max_low_memory) {
                 total_lowmem = __max_low_memory;
+                lowmem_end_addr = memstart_addr + total_lowmem;
 #ifndef CONFIG_HIGHMEM
                 total_memory = total_lowmem;
-                lmb_enforce_memory_limit(total_lowmem);
+                lmb_enforce_memory_limit(lowmem_end_addr);
                 lmb_analyze();
 #endif /* CONFIG_HIGHMEM */
         }

diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index abeb0eb..f18b203 100644
--- a/arch/powerpc/mm/init_64.c
+++ b/arch/powerpc/mm/init_64.c
@@ -75,6 +75,8 @@
 /* max amount of RAM to use */
 unsigned long __max_memory;

+phys_addr_t memstart_addr;
+
 void free_initmem(void)
 {
         unsigned long addr;

diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 60c019c..9c10b14 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -217,9 +217,11 @@ void __init do_init_bootmem(void)
         unsigned long total_pages;
         int boot_mapsize;

-        max_pfn = total_pages = lmb_end_of_DRAM() >> PAGE_SHIFT;
+        max_pfn = lmb_end_of_DRAM() >> PAGE_SHIFT;
+        total_pages = (lmb_end_of_DRAM() - memstart_addr) >> PAGE_SHIFT;
 #ifdef CONFIG_HIGHMEM
         total_pages = total_lowmem >> PAGE_SHIFT;
+        max_low_pfn = lowmem_end_addr >> PAGE_SHIFT;
 #endif

         /*
@@ -245,18 +247,18 @@ void __init do_init_bootmem(void)
          * present.
          */
 #ifdef CONFIG_HIGHMEM
-        free_bootmem_with_active_regions(0, total_lowmem >> PAGE_SHIFT);
+        free_bootmem_with_active_regions(0, lowmem_end_addr >> PAGE_SHIFT);

         /* reserve the sections we're already using */
         for (i = 0; i < lmb.reserved.cnt; i++) {
                 unsigned long addr = lmb.reserved.region[i].base +
                                      lmb_size_bytes(&lmb.reserved, i) - 1;
-                if (addr < total_lowmem)
+                if (addr < lowmem_end_addr)
                         reserve_bootmem(lmb.reserved.region[i].base,
                                         lmb_size_bytes(&lmb.reserved, i),
                                         BOOTMEM_DEFAULT);
-                else if (lmb.reserved.region[i].base < total_lowmem) {
-                        unsigned long adjusted_size = total_lowmem -
+                else if (lmb.reserved.region[i].base < lowmem_end_addr) {
+                        unsigned long adjusted_size = lowmem_end_addr -
                                       lmb.reserved.region[i].base;
                         reserve_bootmem(lmb.reserved.region[i].base,
                                         adjusted_size, BOOTMEM_DEFAULT);
@@ -326,7 +328,7 @@ void __init paging_init(void)
                (top_of_ram - total_ram) >> 20);
         memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
 #ifdef CONFIG_HIGHMEM
-        max_zone_pfns[ZONE_DMA] = total_lowmem >> PAGE_SHIFT;
+        max_zone_pfns[ZONE_DMA] = lowmem_end_addr >> PAGE_SHIFT;
         max_zone_pfns[ZONE_HIGHMEM] = top_of_ram >> PAGE_SHIFT;
 #else
         max_zone_pfns[ZONE_DMA] = top_of_ram >> PAGE_SHIFT;
@@ -381,7 +383,7 @@ void __init mem_init(void)
         {
                 unsigned long pfn, highmem_mapnr;

-                highmem_mapnr = total_lowmem >> PAGE_SHIFT;
+                highmem_mapnr = lowmem_end_addr >> PAGE_SHIFT;
                 for (pfn = highmem_mapnr; pfn < max_mapnr; ++pfn) {
                         struct page *page = pfn_to_page(pfn);
                         if (lmb_is_reserved(pfn << PAGE_SHIFT))

diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index 5bc11f5..67477e7 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -52,6 +52,7 @@ extern unsigned long __initial_memory_limit;
 extern unsigned long total_memory;
 extern unsigned long total_lowmem;
 extern phys_addr_t memstart_addr;
+extern phys_addr_t lowmem_end_addr;

 /* ...and now those things that may be slightly different between processor
  * architectures. -- Dan
--
1.5.4.1
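To make the size-versus-address distinction concrete, here is a tiny
standalone sketch (the 256M start address and 768M size are hypothetical
example values, not anything the patch assumes):

    #include <stdio.h>

    typedef unsigned long long phys_addr_t;

    int main(void)
    {
            phys_addr_t memstart_addr = 0x10000000ULL; /* lowmem starts at 256M */
            phys_addr_t total_lowmem  = 0x30000000ULL; /* 768M of lowmem */

            /* one byte beyond the last physical address in lowmem */
            phys_addr_t lowmem_end_addr = memstart_addr + total_lowmem;

            /* prints 0x40000000; total_lowmem alone (0x30000000) is a
             * size, and only works as an end address when memory
             * starts at physical 0 */
            printf("lowmem ends at 0x%llx\n", lowmem_end_addr);
            return 0;
    }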
* [PATCH 06/11] [POWERPC] 85xx: Cleanup TLB initialization

From: Kumar Gala
Date: 2008-04-01  2:08 UTC
To: paulus
Cc: linuxppc-dev

* Determine at runtime the RPN the kernel is running at, rather than
  using a compile-time constant for the initial TLB entry
* Clean up adjust_total_lowmem() to respect memstart_addr and be a bit
  clearer about which variables are sizes vs addresses

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
---
 arch/powerpc/kernel/head_fsl_booke.S |   34 ++++++++++++++++++++++------
 arch/powerpc/mm/fsl_booke_mmu.c      |   37 ++++++++++++++------------------
 2 files changed, 43 insertions(+), 28 deletions(-)

diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index d9cc2c2..9f40b3e 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -68,7 +68,9 @@ _ENTRY(_start);
         mr      r29,r5
         mr      r28,r6
         mr      r27,r7
+        li      r25,0           /* phys kernel start (low) */
         li      r24,0           /* CPU number */
+        li      r23,0           /* phys kernel start (high) */

 /* We try to not make any assumptions about how the boot loader
  * setup or used the TLBs.  We invalidate all mappings from the
@@ -167,7 +169,28 @@ skpinv: addi    r6,r6,1                         /* Increment */
         mtspr   SPRN_MAS0,r7
         tlbre

-        /* Just modify the entry ID, EPN and RPN for the temp mapping */
+        /* grab and fixup the RPN */
+        mfspr   r6,SPRN_MAS1    /* extract MAS1[SIZE] */
+        rlwinm  r6,r6,25,27,30
+        li      r8,-1
+        addi    r6,r6,10
+        slw     r6,r8,r6        /* convert to mask */
+
+        bl      1f              /* Find our address */
+1:      mflr    r7
+
+        mfspr   r8,SPRN_MAS3
+#ifdef CONFIG_PHYS_64BIT
+        mfspr   r23,SPRN_MAS7
+#endif
+        and     r8,r6,r8
+        subfic  r9,r6,-4096
+        and     r9,r9,r7
+
+        or      r25,r8,r9
+        ori     r8,r25,(MAS3_SX|MAS3_SW|MAS3_SR)
+
+        /* Just modify the entry ID and EPN for the temp mapping */
         lis     r7,0x1000       /* Set MAS0(TLBSEL) = 1 */
         rlwimi  r7,r5,16,4,15   /* Setup MAS0 = TLBSEL | ESEL(r5) */
         mtspr   SPRN_MAS0,r7
@@ -177,12 +200,10 @@ skpinv: addi    r6,r6,1                         /* Increment */
         ori     r6,r6,(MAS1_TSIZE(BOOKE_PAGESZ_4K))@l
         mtspr   SPRN_MAS1,r6
         mfspr   r6,SPRN_MAS2
-        lis     r7,PHYSICAL_START@h
+        li      r7,0            /* temp EPN = 0 */
         rlwimi  r7,r6,0,20,31
         mtspr   SPRN_MAS2,r7
-        mfspr   r6,SPRN_MAS3
-        rlwimi  r7,r6,0,20,31
-        mtspr   SPRN_MAS3,r7
+        mtspr   SPRN_MAS3,r8
         tlbwe

         xori    r6,r4,1
@@ -232,8 +253,7 @@ skpinv: addi    r6,r6,1                         /* Increment */
         ori     r6,r6,PAGE_OFFSET@l
         rlwimi  r6,r7,0,20,31
         mtspr   SPRN_MAS2,r6
-        li      r7,(MAS3_SX|MAS3_SW|MAS3_SR)
-        mtspr   SPRN_MAS3,r7
+        mtspr   SPRN_MAS3,r8
         tlbwe

 /* 7. Jump to KERNELBASE mapping */

diff --git a/arch/powerpc/mm/fsl_booke_mmu.c b/arch/powerpc/mm/fsl_booke_mmu.c
index 3dd0c81..59f6649 100644
--- a/arch/powerpc/mm/fsl_booke_mmu.c
+++ b/arch/powerpc/mm/fsl_booke_mmu.c
@@ -49,7 +49,6 @@
 #include <asm/mmu.h>
 #include <asm/uaccess.h>
 #include <asm/smp.h>
-#include <asm/bootx.h>
 #include <asm/machdep.h>
 #include <asm/setup.h>

@@ -59,7 +58,6 @@ extern void loadcam_entry(unsigned int index);
 unsigned int tlbcam_index;
 unsigned int num_tlbcam_entries;
 static unsigned long __cam0, __cam1, __cam2;
-#define MAX_LOW_MEM     CONFIG_LOWMEM_SIZE

 #define NUM_TLBCAMS     (16)

@@ -195,35 +193,32 @@ unsigned long __init mmu_mapin_ram(void)
 void __init
 adjust_total_lowmem(void)
 {
-        unsigned long max_low_mem = MAX_LOW_MEM;
-        unsigned long cam_max = 0x10000000;
-        unsigned long ram;
+        phys_addr_t max_lowmem_size = __max_low_memory;
+        phys_addr_t cam_max_size = 0x10000000;
+        phys_addr_t ram;

-        /* adjust CAM size to max_low_mem */
-        if (max_low_mem < cam_max)
-                cam_max = max_low_mem;
+        /* adjust CAM size to max_lowmem_size */
+        if (max_lowmem_size < cam_max_size)
+                cam_max_size = max_lowmem_size;

-        /* adjust lowmem size to max_low_mem */
-        if (max_low_mem < total_lowmem)
-                ram = max_low_mem;
-        else
-                ram = total_lowmem;
+        /* adjust lowmem size to max_lowmem_size */
+        ram = min(max_lowmem_size, total_lowmem);

         /* Calculate CAM values */
         __cam0 = 1UL << 2 * (__ilog2(ram) / 2);
-        if (__cam0 > cam_max)
-                __cam0 = cam_max;
+        if (__cam0 > cam_max_size)
+                __cam0 = cam_max_size;
         ram -= __cam0;
         if (ram) {
                 __cam1 = 1UL << 2 * (__ilog2(ram) / 2);
-                if (__cam1 > cam_max)
-                        __cam1 = cam_max;
+                if (__cam1 > cam_max_size)
+                        __cam1 = cam_max_size;
                 ram -= __cam1;
         }
         if (ram) {
                 __cam2 = 1UL << 2 * (__ilog2(ram) / 2);
-                if (__cam2 > cam_max)
-                        __cam2 = cam_max;
+                if (__cam2 > cam_max_size)
+                        __cam2 = cam_max_size;
                 ram -= __cam2;
         }

@@ -231,6 +226,6 @@ adjust_total_lowmem(void)
                         " CAM2=%ldMb residual: %ldMb\n",
                         __cam0 >> 20, __cam1 >> 20, __cam2 >> 20,
                         (total_lowmem - __cam0 - __cam1 - __cam2) >> 20);
-        __max_low_memory = max_low_mem = __cam0 + __cam1 + __cam2;
-        __initial_memory_limit = __max_low_memory;
+        __max_low_memory = __cam0 + __cam1 + __cam2;
+        __initial_memory_limit = memstart_addr + __max_low_memory;
 }
--
1.5.4.1
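The CAM sizing loop above carves lowmem into up to three power-of-4
chunks, each capped at 256M.  A userspace sketch of the same arithmetic
(ilog2 written out by hand; the 512M input is just an example):

    #include <stdio.h>

    /* index of the highest set bit, i.e. floor(log2(x)) for x > 0 */
    static int ilog2(unsigned long x)
    {
            int n = -1;
            while (x) {
                    n++;
                    x >>= 1;
            }
            return n;
    }

    int main(void)
    {
            unsigned long ram = 0x20000000;         /* e.g. 512M of lowmem */
            unsigned long cam_max = 0x10000000;     /* each CAM covers at most 256M */
            unsigned long cam[3] = { 0, 0, 0 };
            int i;

            for (i = 0; i < 3 && ram; i++) {
                    /* largest power of 4 that fits, as in adjust_total_lowmem() */
                    cam[i] = 1UL << 2 * (ilog2(ram) / 2);
                    if (cam[i] > cam_max)
                            cam[i] = cam_max;
                    ram -= cam[i];
            }
            /* 512M -> CAM0=256M, CAM1=256M, CAM2=0, residual 0 */
            printf("CAM0=%luM CAM1=%luM CAM2=%luM residual=%luM\n",
                   cam[0] >> 20, cam[1] >> 20, cam[2] >> 20, ram >> 20);
            return 0;
    }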
* [PATCH 07/11] [POWERPC] Use lowmem_end_addr to limit lmb allocations on ppc32

From: Kumar Gala
Date: 2008-04-01  2:08 UTC
To: paulus
Cc: linuxppc-dev

Now that we have a proper variable that is the address of the top of
low memory, we can use it to limit the lmb allocations.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
---
 include/asm-powerpc/lmb.h |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/asm-powerpc/lmb.h b/include/asm-powerpc/lmb.h
index 028184b..6f5fdf0 100644
--- a/include/asm-powerpc/lmb.h
+++ b/include/asm-powerpc/lmb.h
@@ -6,8 +6,8 @@
 #define LMB_DBG(fmt...) udbg_printf(fmt)

 #ifdef CONFIG_PPC32
-extern unsigned long __max_low_memory;
-#define LMB_REAL_LIMIT  __max_low_memory
+extern phys_addr_t lowmem_end_addr;
+#define LMB_REAL_LIMIT  lowmem_end_addr
 #else
 #define LMB_REAL_LIMIT  0
 #endif
--
1.5.4.1
* [PATCH 08/11 v2] [POWERPC] Rename __initial_memory_limit to __initial_memory_limit_addr

From: Kumar Gala
Date: 2008-04-01  2:08 UTC
To: paulus
Cc: linuxppc-dev

We always use __initial_memory_limit as an address, so rename it to
make that clear.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
---
Fixed a build breakage on ebony due to phys_addr_t not being the type
of __initial_memory_limit_addr.

 arch/powerpc/mm/fsl_booke_mmu.c |    2 +-
 arch/powerpc/mm/init_32.c       |   10 +++++-----
 arch/powerpc/mm/mmu_decl.h      |    2 +-
 arch/powerpc/mm/ppc_mmu_32.c    |    2 +-
 4 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/mm/fsl_booke_mmu.c b/arch/powerpc/mm/fsl_booke_mmu.c
index 59f6649..ada249b 100644
--- a/arch/powerpc/mm/fsl_booke_mmu.c
+++ b/arch/powerpc/mm/fsl_booke_mmu.c
@@ -227,5 +227,5 @@ adjust_total_lowmem(void)
                         __cam0 >> 20, __cam1 >> 20, __cam2 >> 20,
                         (total_lowmem - __cam0 - __cam1 - __cam2) >> 20);
         __max_low_memory = __cam0 + __cam1 + __cam2;
-        __initial_memory_limit = memstart_addr + __max_low_memory;
+        __initial_memory_limit_addr = memstart_addr + __max_low_memory;
 }

diff --git a/arch/powerpc/mm/init_32.c b/arch/powerpc/mm/init_32.c
index 345a275..2c72c39 100644
--- a/arch/powerpc/mm/init_32.c
+++ b/arch/powerpc/mm/init_32.c
@@ -96,10 +96,10 @@ int __map_without_ltlbs;
 unsigned long __max_low_memory = MAX_LOW_MEM;

 /*
- * limit of what is accessible with initial MMU setup -
+ * address of the limit of what is accessible with initial MMU setup -
  * 256MB usually, but only 16MB on 601.
  */
-unsigned long __initial_memory_limit = 0x10000000;
+phys_addr_t __initial_memory_limit_addr = (phys_addr_t)0x10000000;

 /*
  * Check for command-line options that affect what MMU_init will do.
@@ -132,10 +132,10 @@ void __init MMU_init(void)

         /* 601 can only access 16MB at the moment */
         if (PVR_VER(mfspr(SPRN_PVR)) == 1)
-                __initial_memory_limit = 0x01000000;
+                __initial_memory_limit_addr = 0x01000000;
         /* 8xx can only access 8MB at the moment */
         if (PVR_VER(mfspr(SPRN_PVR)) == 0x50)
-                __initial_memory_limit = 0x00800000;
+                __initial_memory_limit_addr = 0x00800000;

         /* parse args from command line */
         MMU_setup();
@@ -210,7 +210,7 @@ void __init *early_get_page(void)
                 p = alloc_bootmem_pages(PAGE_SIZE);
         } else {
                 p = __va(lmb_alloc_base(PAGE_SIZE, PAGE_SIZE,
-                                        __initial_memory_limit));
+                                        __initial_memory_limit_addr));
         }
         return p;
 }

diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index 67477e7..0480225 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -48,7 +48,7 @@ extern unsigned int num_tlbcam_entries;
 extern unsigned long ioremap_bot;
 extern unsigned long __max_low_memory;
-extern unsigned long __initial_memory_limit;
+extern phys_addr_t __initial_memory_limit_addr;
 extern unsigned long total_memory;
 extern unsigned long total_lowmem;
 extern phys_addr_t memstart_addr;

diff --git a/arch/powerpc/mm/ppc_mmu_32.c b/arch/powerpc/mm/ppc_mmu_32.c
index 65f915c..cef9f15 100644
--- a/arch/powerpc/mm/ppc_mmu_32.c
+++ b/arch/powerpc/mm/ppc_mmu_32.c
@@ -233,7 +233,7 @@ void __init MMU_init_hw(void)
          */
         if ( ppc_md.progress ) ppc_md.progress("hash:find piece", 0x322);
         Hash = __va(lmb_alloc_base(Hash_size, Hash_size,
-                                   __initial_memory_limit));
+                                   __initial_memory_limit_addr));
         cacheable_memzero(Hash, Hash_size);
         _SDR1 = __pa(Hash) | SDR1_LOW_BITS;
--
1.5.4.1
* [PATCH 09/11] [POWERPC] Clean up some linker and symbol usage

From: Kumar Gala
Date: 2008-04-01  2:08 UTC
To: paulus
Cc: linuxppc-dev

* Use the _stext and _end symbols when reserving the kernel text in the
  lmb.  Using these symbols is a more robust way to determine the
  physical start and size of the kernel text.
* PAGE_OFFSET is not always the start of code; use _stext instead.
* Grab PAGE_SIZE and KERNELBASE from asm/page.h like ppc64 does.  This
  makes the code a bit more common and provides a single place to
  manipulate the defines for things like kdump.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
---
 arch/powerpc/kernel/prom.c        |    2 +-
 arch/powerpc/kernel/setup_32.c    |    2 +-
 arch/powerpc/kernel/setup_64.c    |    2 +-
 arch/powerpc/kernel/vmlinux.lds.S |    4 +---
 4 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index 9330920..60ef7d1 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -1124,7 +1124,7 @@ void __init early_init_devtree(void *params)
         parse_early_param();

         /* Reserve LMB regions used by kernel, initrd, dt, etc... */
-        lmb_reserve(PHYSICAL_START, __pa(klimit) - PHYSICAL_START);
+        lmb_reserve(__pa(_stext), _end - _stext);
         reserve_kdump_trampoline();
         reserve_crashkernel();
         early_reserve_mem();

diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
index cd870a8..b0989ca 100644
--- a/arch/powerpc/kernel/setup_32.c
+++ b/arch/powerpc/kernel/setup_32.c
@@ -277,7 +277,7 @@ void __init setup_arch(char **cmdline_p)
         if (ppc_md.panic)
                 setup_panic();

-        init_mm.start_code = PAGE_OFFSET;
+        init_mm.start_code = (unsigned long)_stext;
         init_mm.end_code = (unsigned long) _etext;
         init_mm.end_data = (unsigned long) _edata;
         init_mm.brk = klimit;

diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 2c2d831..0205d40 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -510,7 +510,7 @@ void __init setup_arch(char **cmdline_p)
         if (ppc_md.panic)
                 setup_panic();

-        init_mm.start_code = PAGE_OFFSET;
+        init_mm.start_code = (unsigned long)_stext;
         init_mm.end_code = (unsigned long) _etext;
         init_mm.end_data = (unsigned long) _edata;
         init_mm.brk = klimit;

diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
index 0afb9e3..b5a76bc 100644
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -1,11 +1,9 @@
 #ifdef CONFIG_PPC64
-#include <asm/page.h>
 #define PROVIDE32(x)    PROVIDE(__unused__##x)
 #else
-#define PAGE_SIZE       4096
-#define KERNELBASE      CONFIG_KERNEL_START
 #define PROVIDE32(x)    PROVIDE(x)
 #endif
+#include <asm/page.h>
 #include <asm-generic/vmlinux.lds.h>
 #include <asm/cache.h>
--
1.5.4.1
* [PATCH 10/11] [POWERPC] Move phys_addr_t definition into asm/types.h

From: Kumar Gala
Date: 2008-04-01  2:08 UTC
To: paulus
Cc: linuxppc-dev

Move phys_addr_t out of mmu-*.h and into asm/types.h so we can use it
in places that before would have caused recursive includes.

For example, to use phys_addr_t in <asm/page.h> we would have included
<asm/mmu.h>, which would have possibly included <asm/mmu-hash64.h>,
which includes <asm/page.h>.  Wheeee, recursive include.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
---
 include/asm-powerpc/mmu-40x.h       |    2 --
 include/asm-powerpc/mmu-44x.h       |    2 --
 include/asm-powerpc/mmu-8xx.h       |    2 --
 include/asm-powerpc/mmu-fsl-booke.h |    6 ------
 include/asm-powerpc/mmu-hash32.h    |    2 --
 include/asm-powerpc/mmu-hash64.h    |    3 ---
 include/asm-powerpc/types.h         |    7 +++++++
 7 files changed, 7 insertions(+), 17 deletions(-)

diff --git a/include/asm-powerpc/mmu-40x.h b/include/asm-powerpc/mmu-40x.h
index 7d37f77..3d10867 100644
--- a/include/asm-powerpc/mmu-40x.h
+++ b/include/asm-powerpc/mmu-40x.h
@@ -53,8 +53,6 @@

 #ifndef __ASSEMBLY__

-typedef unsigned long phys_addr_t;
-
 typedef struct {
         unsigned long id;
         unsigned long vdso_base;

diff --git a/include/asm-powerpc/mmu-44x.h b/include/asm-powerpc/mmu-44x.h
index 62772ae..c8b02d9 100644
--- a/include/asm-powerpc/mmu-44x.h
+++ b/include/asm-powerpc/mmu-44x.h
@@ -53,8 +53,6 @@

 #ifndef __ASSEMBLY__

-typedef unsigned long long phys_addr_t;
-
 typedef struct {
         unsigned long id;
         unsigned long vdso_base;

diff --git a/include/asm-powerpc/mmu-8xx.h b/include/asm-powerpc/mmu-8xx.h
index 952bd88..9db877e 100644
--- a/include/asm-powerpc/mmu-8xx.h
+++ b/include/asm-powerpc/mmu-8xx.h
@@ -136,8 +136,6 @@
 #define SPRN_M_TW       799

 #ifndef __ASSEMBLY__
-typedef unsigned long phys_addr_t;
-
 typedef struct {
         unsigned long id;
         unsigned long vdso_base;

diff --git a/include/asm-powerpc/mmu-fsl-booke.h b/include/asm-powerpc/mmu-fsl-booke.h
index 3758000..925d93c 100644
--- a/include/asm-powerpc/mmu-fsl-booke.h
+++ b/include/asm-powerpc/mmu-fsl-booke.h
@@ -73,12 +73,6 @@

 #ifndef __ASSEMBLY__

-#ifndef CONFIG_PHYS_64BIT
-typedef unsigned long phys_addr_t;
-#else
-typedef unsigned long long phys_addr_t;
-#endif
-
 typedef struct {
         unsigned long id;
         unsigned long vdso_base;

diff --git a/include/asm-powerpc/mmu-hash32.h b/include/asm-powerpc/mmu-hash32.h
index 4bd735b..6e21ca6 100644
--- a/include/asm-powerpc/mmu-hash32.h
+++ b/include/asm-powerpc/mmu-hash32.h
@@ -84,8 +84,6 @@ typedef struct {
         unsigned long vdso_base;
 } mm_context_t;

-typedef unsigned long phys_addr_t;
-
 #endif /* !__ASSEMBLY__ */

 #endif /* _ASM_POWERPC_MMU_HASH32_H_ */

diff --git a/include/asm-powerpc/mmu-hash64.h b/include/asm-powerpc/mmu-hash64.h
index 2864fa3..0dff767 100644
--- a/include/asm-powerpc/mmu-hash64.h
+++ b/include/asm-powerpc/mmu-hash64.h
@@ -469,9 +469,6 @@ static inline unsigned long get_vsid(unsigned long context, unsigned long ea,
                                      VSID_MODULUS_256M)
 #define KERNEL_VSID(ea) VSID_SCRAMBLE(GET_ESID(ea))

-/* Physical address used by some IO functions */
-typedef unsigned long phys_addr_t;
-
 #endif /* __ASSEMBLY__ */

 #endif /* _ASM_POWERPC_MMU_HASH64_H_ */

diff --git a/include/asm-powerpc/types.h b/include/asm-powerpc/types.h
index 903fd19..020db52 100644
--- a/include/asm-powerpc/types.h
+++ b/include/asm-powerpc/types.h
@@ -50,6 +50,13 @@ typedef struct {
         __u32 u[4];
 } __attribute__((aligned(16))) __vector128;

+/* Physical address used by some IO functions */
+#ifndef CONFIG_PHYS_64BIT
+typedef unsigned long phys_addr_t;
+#else
+typedef unsigned long long phys_addr_t;
+#endif
+
 #endif /* __ASSEMBLY__ */

 #ifdef __KERNEL__
--
1.5.4.1
* [PATCH 11/11] [POWERPC] 85xx: Add support for relocatable kernel (and booting at a non-zero address)

From: Kumar Gala
Date: 2008-04-01  2:08 UTC
To: paulus
Cc: linuxppc-dev

Added support to allow an 85xx kernel to be run from a non-zero physical
address (useful for cooperative asymmetric multiprocessing situations)
and for kdump.  The support can either be at compile time or at runtime
(CONFIG_RELOCATABLE).

Currently we are limited to running at a physical address that is a
multiple of 256M.  This is due to how we map TLBs to cover lowmem and
should be fixed up to allow 64M or maybe even 16M alignment in the
future.

All the magic for this support is accomplished by properly initializing
the kernel memory subsystem and by ARCH_PFN_OFFSET.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
---
 arch/powerpc/Kconfig                 |   69 +++++++++++++++++++++++++++++-
 arch/powerpc/boot/Makefile           |    4 +-
 arch/powerpc/kernel/head_fsl_booke.S |   11 +++++
 arch/powerpc/kernel/prom.c           |    4 ++
 arch/powerpc/kernel/setup_64.c       |    2 +-
 arch/powerpc/mm/init_32.c            |    4 +-
 arch/powerpc/mm/init_64.c            |    3 +-
 arch/powerpc/mm/mem.c                |    5 +-
 include/asm-powerpc/kdump.h          |    5 --
 include/asm-powerpc/page.h           |   43 +++++++++++++++++---
 include/asm-powerpc/page_32.h        |    6 +++
 include/asm-powerpc/pgtable-ppc32.h  |    5 +--
 12 files changed, 135 insertions(+), 26 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 4d9ced2..42c22f7 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -619,21 +619,76 @@ config LOWMEM_SIZE
         hex "Maximum low memory size (in bytes)" if LOWMEM_SIZE_BOOL
         default "0x30000000"

+config RELOCATABLE
+        bool "Build a relocatable kernel (EXPERIMENTAL)"
+        depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && FSL_BOOKE
+        help
+          This builds a kernel image that is capable of running at the
+          location the kernel is loaded at (some alignment restrictions may
+          exist).
+
+          One use is for the kexec on panic case where the recovery kernel
+          must live at a different physical address than the primary
+          kernel.
+
+          Note: If CONFIG_RELOCATABLE=y, then the kernel runs from the address
+          it has been loaded at and the compile time physical address
+          CONFIG_PHYSICAL_START is ignored.  However the CONFIG_PHYSICAL_START
+          setting can still be useful to bootwrappers that need to know the
+          load location of the kernel (eg. u-boot/mkimage).
+
+config PAGE_OFFSET_BOOL
+        bool "Set custom page offset address"
+        depends on ADVANCED_OPTIONS
+        help
+          This option allows you to set the kernel virtual address at which
+          the kernel will map low memory.  This can be useful in optimizing
+          the virtual memory layout of the system.
+
+          Say N here unless you know what you are doing.
+
+config PAGE_OFFSET
+        hex "Virtual address of memory base" if PAGE_OFFSET_BOOL
+        default "0xc0000000"
+
 config KERNEL_START_BOOL
         bool "Set custom kernel base address"
         depends on ADVANCED_OPTIONS
         help
           This option allows you to set the kernel virtual address at which
-          the kernel will map low memory (the kernel image will be linked at
-          this address).  This can be useful in optimizing the virtual memory
-          layout of the system.
+          the kernel will be loaded.  Normally this should match PAGE_OFFSET
+          however there are times (like kdump) that one might not want them
+          to be the same.

           Say N here unless you know what you are doing.

 config KERNEL_START
         hex "Virtual address of kernel base" if KERNEL_START_BOOL
+        default PAGE_OFFSET if PAGE_OFFSET_BOOL
+        default "0xc2000000" if CRASH_DUMP
         default "0xc0000000"

+config PHYSICAL_START_BOOL
+        bool "Set physical address where the kernel is loaded"
+        depends on ADVANCED_OPTIONS && FLATMEM && FSL_BOOKE
+        help
+          This gives the physical address where the kernel is loaded.
+
+          Say N here unless you know what you are doing.
+
+config PHYSICAL_START
+        hex "Physical address where the kernel is loaded" if PHYSICAL_START_BOOL
+        default "0x02000000" if PPC_STD_MMU && CRASH_DUMP
+        default "0x00000000"
+
+config PHYSICAL_ALIGN
+        hex
+        default "0x10000000" if FSL_BOOKE
+        help
+          This value puts the alignment restrictions on physical address
+          where kernel is loaded and run from.  Kernel is compiled for an
+          address which meets above alignment restriction.
+
 config TASK_SIZE_BOOL
         bool "Set custom user task size"
         depends on ADVANCED_OPTIONS
@@ -680,9 +735,17 @@ config PIN_TLB
 endmenu

 if PPC64
+config PAGE_OFFSET
+        hex
+        default "0xc000000000000000"
 config KERNEL_START
         hex
+        default "0xc000000002000000" if CRASH_DUMP
         default "0xc000000000000000"
+config PHYSICAL_START
+        hex
+        default "0x02000000" if CRASH_DUMP
+        default "0x00000000"
 endif

diff --git a/arch/powerpc/boot/Makefile b/arch/powerpc/boot/Makefile
index 3c80858..c7c2a9d 100644
--- a/arch/powerpc/boot/Makefile
+++ b/arch/powerpc/boot/Makefile
@@ -35,8 +35,8 @@ endif

 BOOTCFLAGS += -I$(obj) -I$(srctree)/$(obj) -I$(srctree)/$(src)/libfdt

-ifdef CONFIG_MEMORY_START
-MEMBASE=$(CONFIG_MEMORY_START)
+ifdef CONFIG_PHYSICAL_START
+MEMBASE=$(CONFIG_PHYSICAL_START)
 else
 MEMBASE=0x00000000
 endif

diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index 9f40b3e..4d0336b 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -368,6 +368,17 @@ skpinv: addi    r6,r6,1                         /* Increment */

         bl      early_init

+#ifdef CONFIG_RELOCATABLE
+        lis     r3,kernstart_addr@ha
+        la      r3,kernstart_addr@l(r3)
+#ifdef CONFIG_PHYS_64BIT
+        stw     r23,0(r3)
+        stw     r25,4(r3)
+#else
+        stw     r25,0(r3)
+#endif
+#endif
+
         mfspr   r3,SPRN_TLB1CFG
         andi.   r3,r3,0xfff
         lis     r4,num_tlbcam_entries@ha

diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index 60ef7d1..988cbde 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -53,6 +53,7 @@
 #include <asm/pci-bridge.h>
 #include <asm/phyp_dump.h>
 #include <asm/kexec.h>
+#include <mm/mmu_decl.h>

 #ifdef DEBUG
 #define DBG(fmt...) printk(KERN_ERR fmt)
@@ -978,7 +979,10 @@ static int __init early_init_dt_scan_memory(unsigned long node,
         }
 #endif
                 lmb_add(base, size);
+
+                memstart_addr = min((u64)memstart_addr, base);
         }
+
         return 0;
 }

diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 0205d40..9087e7a 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -431,7 +431,7 @@ void __init setup_system(void)
         printk("htab_address                  = 0x%p\n", htab_address);
         printk("htab_hash_mask                = 0x%lx\n", htab_hash_mask);
 #if PHYSICAL_START > 0
-        printk("physical_start                = 0x%x\n", PHYSICAL_START);
+        printk("physical_start                = 0x%lx\n", PHYSICAL_START);
 #endif
         printk("-----------------------------------------------------\n");

diff --git a/arch/powerpc/mm/init_32.c b/arch/powerpc/mm/init_32.c
index 2c72c39..4b97daa 100644
--- a/arch/powerpc/mm/init_32.c
+++ b/arch/powerpc/mm/init_32.c
@@ -59,8 +59,10 @@ DEFINE_PER_CPU(struct mmu_gather, mmu_gathers);

 unsigned long total_memory;
 unsigned long total_lowmem;
-phys_addr_t memstart_addr;
+phys_addr_t memstart_addr = (phys_addr_t)~0ull;
 EXPORT_SYMBOL(memstart_addr);
+phys_addr_t kernstart_addr;
+EXPORT_SYMBOL(kernstart_addr);
 phys_addr_t lowmem_end_addr;

 int boot_mapsize;

diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index f18b203..7fbbafd 100644
--- a/arch/powerpc/mm/init_64.c
+++ b/arch/powerpc/mm/init_64.c
@@ -75,7 +75,8 @@
 /* max amount of RAM to use */
 unsigned long __max_memory;

-phys_addr_t memstart_addr;
+phys_addr_t memstart_addr = ~0;
+phys_addr_t kernstart_addr;

 void free_initmem(void)
 {

diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 9c10b14..3ec7814 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -217,7 +217,7 @@ void __init do_init_bootmem(void)
         unsigned long total_pages;
         int boot_mapsize;

-        max_pfn = lmb_end_of_DRAM() >> PAGE_SHIFT;
+        max_low_pfn = max_pfn = lmb_end_of_DRAM() >> PAGE_SHIFT;
         total_pages = (lmb_end_of_DRAM() - memstart_addr) >> PAGE_SHIFT;
 #ifdef CONFIG_HIGHMEM
         total_pages = total_lowmem >> PAGE_SHIFT;
@@ -233,7 +233,8 @@ void __init do_init_bootmem(void)

         start = lmb_alloc(bootmap_pages << PAGE_SHIFT, PAGE_SIZE);

-        boot_mapsize = init_bootmem(start >> PAGE_SHIFT, total_pages);
+        min_low_pfn = MEMORY_START >> PAGE_SHIFT;
+        boot_mapsize = init_bootmem_node(NODE_DATA(0), start >> PAGE_SHIFT, min_low_pfn, max_low_pfn);

         /* Add active regions with valid PFNs */
         for (i = 0; i < lmb.memory.cnt; i++) {

diff --git a/include/asm-powerpc/kdump.h b/include/asm-powerpc/kdump.h
index 10e8eb1..f6c93c7 100644
--- a/include/asm-powerpc/kdump.h
+++ b/include/asm-powerpc/kdump.h
@@ -11,16 +11,11 @@

 #ifdef CONFIG_CRASH_DUMP

-#define PHYSICAL_START  KDUMP_KERNELBASE
 #define KDUMP_TRAMPOLINE_START  0x0100
 #define KDUMP_TRAMPOLINE_END    0x3000

 #define KDUMP_MIN_TCE_ENTRIES   2048

-#else /* !CONFIG_CRASH_DUMP */
-
-#define PHYSICAL_START  0x0
-
 #endif /* CONFIG_CRASH_DUMP */

 #ifndef __ASSEMBLY__

diff --git a/include/asm-powerpc/page.h b/include/asm-powerpc/page.h
index df47bbb..adf0591 100644
--- a/include/asm-powerpc/page.h
+++ b/include/asm-powerpc/page.h
@@ -12,6 +12,7 @@

 #include <asm/asm-compat.h>
 #include <asm/kdump.h>
+#include <asm/types.h>

 /*
  * On PPC32 page size is 4K. For PPC64 we support either 4K or 64K software
@@ -42,8 +43,23 @@
  *
  * The kdump dump kernel is one example where KERNELBASE != PAGE_OFFSET.
  *
- * To get a physical address from a virtual one you subtract PAGE_OFFSET,
- * _not_ KERNELBASE.
+ * PAGE_OFFSET is the virtual address of the start of lowmem.
+ *
+ * PHYSICAL_START is the physical address of the start of the kernel.
+ *
+ * MEMORY_START is the physical address of the start of lowmem.
+ *
+ * KERNELBASE, PAGE_OFFSET, and PHYSICAL_START are all configurable on
+ * ppc32 and based on how they are set we determine MEMORY_START.
+ *
+ * For the linear mapping the following equation should be true:
+ * KERNELBASE - PAGE_OFFSET = PHYSICAL_START - MEMORY_START
+ *
+ * Also, KERNELBASE >= PAGE_OFFSET and PHYSICAL_START >= MEMORY_START
+ *
+ * There are two ways to determine a virtual address from a physical one:
+ * va = pa + PAGE_OFFSET - MEMORY_START
+ * va = pa + KERNELBASE - PHYSICAL_START
  *
  * If you want to know something's offset from the start of the kernel you
  * should subtract KERNELBASE.
@@ -51,19 +67,32 @@
  * If you want to test if something's a kernel address, use is_kernel_addr().
  */

-#define PAGE_OFFSET     ASM_CONST(CONFIG_KERNEL_START)
-#define KERNELBASE      (PAGE_OFFSET + PHYSICAL_START)
+#define KERNELBASE      ASM_CONST(CONFIG_KERNEL_START)
+#define PAGE_OFFSET     ASM_CONST(CONFIG_PAGE_OFFSET)
+
+#if defined(CONFIG_RELOCATABLE) && defined(CONFIG_FLATMEM)
+#ifndef __ASSEMBLY__
+extern phys_addr_t memstart_addr;
+extern phys_addr_t kernstart_addr;
+#endif
+#define PHYSICAL_START  kernstart_addr
+#define MEMORY_START    memstart_addr
+#else
+#define PHYSICAL_START  ASM_CONST(CONFIG_PHYSICAL_START)
+#define MEMORY_START    (PHYSICAL_START + PAGE_OFFSET - KERNELBASE)
+#endif

 #ifdef CONFIG_FLATMEM
-#define pfn_valid(pfn)          ((pfn) < max_mapnr)
+#define ARCH_PFN_OFFSET         (MEMORY_START >> PAGE_SHIFT)
+#define pfn_valid(pfn)          ((pfn) >= ARCH_PFN_OFFSET && (pfn) < (ARCH_PFN_OFFSET + max_mapnr))
 #endif

 #define virt_to_page(kaddr)     pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
 #define pfn_to_kaddr(pfn)       __va((pfn) << PAGE_SHIFT)
 #define virt_addr_valid(kaddr)  pfn_valid(__pa(kaddr) >> PAGE_SHIFT)

-#define __va(x) ((void *)((unsigned long)(x) + PAGE_OFFSET))
-#define __pa(x) ((unsigned long)(x) - PAGE_OFFSET)
+#define __va(x) ((void *)((unsigned long)(x) - PHYSICAL_START + KERNELBASE))
+#define __pa(x) ((unsigned long)(x) + PHYSICAL_START - KERNELBASE)

 /*
  * Unfortunately the PLT is in the BSS in the PPC32 ELF ABI,

diff --git a/include/asm-powerpc/page_32.h b/include/asm-powerpc/page_32.h
index 51f8134..ebfae53 100644
--- a/include/asm-powerpc/page_32.h
+++ b/include/asm-powerpc/page_32.h
@@ -1,6 +1,12 @@
 #ifndef _ASM_POWERPC_PAGE_32_H
 #define _ASM_POWERPC_PAGE_32_H

+#if defined(CONFIG_PHYSICAL_ALIGN) && (CONFIG_PHYSICAL_START != 0)
+#if (CONFIG_PHYSICAL_START % CONFIG_PHYSICAL_ALIGN) != 0
+#error "CONFIG_PHYSICAL_START must be a multiple of CONFIG_PHYSICAL_ALIGN"
+#endif
+#endif
+
 #define VM_DATA_DEFAULT_FLAGS   VM_DATA_DEFAULT_FLAGS32

 #ifdef CONFIG_NOT_COHERENT_CACHE

diff --git a/include/asm-powerpc/pgtable-ppc32.h b/include/asm-powerpc/pgtable-ppc32.h
index 2c79f55..dbd1875 100644
--- a/include/asm-powerpc/pgtable-ppc32.h
+++ b/include/asm-powerpc/pgtable-ppc32.h
@@ -98,9 +98,6 @@ extern int icache_44x_need_flush;
 #define USER_PTRS_PER_PGD       (TASK_SIZE / PGDIR_SIZE)
 #define FIRST_USER_ADDRESS      0

-#define USER_PGD_PTRS (PAGE_OFFSET >> PGDIR_SHIFT)
-#define KERNEL_PGD_PTRS (PTRS_PER_PGD-USER_PGD_PTRS)
-
 #define pte_ERROR(e) \
         printk("%s:%d: bad pte %llx.\n", __FILE__, __LINE__, \
                 (unsigned long long)pte_val(e))
@@ -692,7 +689,7 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 #define pmd_page_vaddr(pmd)     \
         ((unsigned long) (pmd_val(pmd) & PAGE_MASK))
 #define pmd_page(pmd)           \
-        (mem_map + (__pa(pmd_val(pmd)) >> PAGE_SHIFT))
+        (mem_map + (__pa(pmd_val(pmd)) >> PAGE_SHIFT) - ARCH_PFN_OFFSET)
 #endif

 /* to find an entry in a kernel page-table-directory */
--
1.5.4.1
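To see the new __va()/__pa() identities in action, here is a userspace
sketch with one hypothetical configuration (kernel linked at KERNELBASE
= PAGE_OFFSET = 0xc0000000 but loaded at physical 256M; the values are
illustrative, and types are simplified to unsigned long for printing):

    #include <assert.h>
    #include <stdio.h>

    #define KERNELBASE      0xc0000000UL
    #define PAGE_OFFSET     0xc0000000UL
    #define PHYSICAL_START  0x10000000UL    /* kernel loaded at 256M */

    /* MEMORY_START follows from the linear-map equation in the patch:
     * KERNELBASE - PAGE_OFFSET = PHYSICAL_START - MEMORY_START */
    #define MEMORY_START    (PHYSICAL_START + PAGE_OFFSET - KERNELBASE)

    #define __va(x) ((unsigned long)(x) - PHYSICAL_START + KERNELBASE)
    #define __pa(x) ((unsigned long)(x) + PHYSICAL_START - KERNELBASE)

    int main(void)
    {
            unsigned long pa = 0x10100000UL; /* 1M into the kernel's RAM */

            assert(MEMORY_START == 0x10000000UL);
            assert(__va(pa) == 0xc0100000UL); /* 1M above PAGE_OFFSET */
            assert(__pa(__va(pa)) == pa);     /* round trip */

            printf("pa 0x%lx <-> va 0x%lx\n", pa, __va(pa));
            return 0;
    }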
* Re: [PATCH 04/11 v2] [POWERPC] Remove and replace uses of PPC_MEMSTART with memstart_addr

From: Paul Mackerras
Date: 2008-04-01 10:50 UTC
To: Kumar Gala
Cc: linuxppc-dev

Kumar Gala writes:

> For the sub-arches that support relocatable interrupt vectors (Book-E),
> it's reasonable to have memory start at a non-zero physical address.
> For those cases use the variable memstart_addr instead of the #define
> PPC_MEMSTART, since the only uses of PPC_MEMSTART are for
> initialization, and in the future we can set memstart_addr at runtime
> to have a relocatable kernel.

In those cases, is it still true that the kernel sits at the start of
the usable memory, or might there be usable memory before _stext?  In
other words, is PAGE_OFFSET == KERNELBASE still true or not?

Paul.
* Re: [PATCH 04/11 v2] [POWERPC] Remove and replace uses of PPC_MEMSTART with memstart_addr

From: Kumar Gala
Date: 2008-04-01 21:49 UTC
To: Paul Mackerras
Cc: linuxppc-dev

On Apr 1, 2008, at 5:50 AM, Paul Mackerras wrote:

> Kumar Gala writes:
>
>> For the sub-arches that support relocatable interrupt vectors
>> (Book-E), it's reasonable to have memory start at a non-zero physical
>> address.  For those cases use the variable memstart_addr instead of
>> the #define PPC_MEMSTART, since the only uses of PPC_MEMSTART are for
>> initialization, and in the future we can set memstart_addr at runtime
>> to have a relocatable kernel.
>
> In those cases, is it still true that the kernel sits at the start of
> the usable memory, or might there be usable memory before _stext?
> In other words, is PAGE_OFFSET == KERNELBASE still true or not?

Both cases are possible.  Here are the four cases I see:

* Normal kernel (kernel at physical 0)
* kexec kernel (kernel at physical 32M, but still has access to memory
  from physical 0 to 32M)
* cAMP kernel (kernel at non-zero offset) [eg. 256M offset]
* kexec cAMP kernel (kernel at 256M+32M, but has access to 256M + size)

- k
* Re: [PATCH 01/11] [POWERPC] bootwrapper: Allow specifying the image physical offset

From: Paul Mackerras
Date: 2008-04-01 10:08 UTC
To: Kumar Gala
Cc: linuxppc-dev

Kumar Gala writes:

> Normally we assume kernel images will be loaded at offset 0.  However
> there are situations, like when the kernel itself is running at a
> non-zero physical address, where we don't want to load it at 0.
>
> Allow the wrapper to take an offset.  We use this when building u-boot
> images.

This makes it a bit harder to build a kernel and then wrap it later
on, since one would have to know what -m value to give.  Would it be
possible for either the wrapper script or mkimage to peek in the ELF
header of the vmlinux to work out what physical address to use?

Paul.
* Re: [PATCH 01/11] [POWERPC] bootwrapper: Allow specifying the image physical offset

From: Kumar Gala
Date: 2008-04-01 21:52 UTC
To: Paul Mackerras
Cc: linuxppc-dev

On Apr 1, 2008, at 5:08 AM, Paul Mackerras wrote:

> Kumar Gala writes:
>
>> Normally we assume kernel images will be loaded at offset 0.  However
>> there are situations, like when the kernel itself is running at a
>> non-zero physical address, where we don't want to load it at 0.
>>
>> Allow the wrapper to take an offset.  We use this when building
>> u-boot images.
>
> This makes it a bit harder to build a kernel and then wrap it later
> on, since one would have to know what -m value to give.  Would it be
> possible for either the wrapper script or mkimage to peek in the ELF
> header of the vmlinux to work out what physical address to use?

Hmm, need to think about that.  But my initial reaction is twofold.
One, I don't think this information would be around, and two, don't we
already have this problem with a kdump kernel?

- k
* Re: [PATCH 01/11] [POWERPC] bootwrapper: Allow specifying the image physical offset

From: Paul Mackerras
Date: 2008-04-02  0:58 UTC
To: Kumar Gala
Cc: linuxppc-dev

Kumar Gala writes:

> Hmm, need to think about that.  But my initial reaction is twofold.
> One, I don't think this information would be around, and two, don't
> we already have this problem with a kdump kernel?

I'm just concerned that we have two things that have to match up - the
compiled-in physical address that the kernel assumes it is running at,
and the physical address where it is actually loaded.  While those two
things were both always 0 for embedded processors, there wasn't a
problem, but now we can have a situation where a kernel binary has to
be loaded at some nonzero address to work correctly, but there is no
way to work out what that address is for an existing vmlinux binary.
Or have I missed something?

For a kdump kernel, at least for 64-bit, the physical address has to
be 32MB.  There is no other choice, so there is no possibility of
confusion.

For 85xx, would it be possible to have the kernel figure out what
physical address it has been loaded at, and use that as the base
address, rather than having the base address set at compile time?
That would solve my objection since it would mean that there would no
longer be two things that had to be kept in sync.  You could pass any
value to wrapper/mkimage (subject to constraints such as it has to be
a multiple of 256M) and it would work.  That value could even come
from a config option in the case where wrapper is invoked as part of
the kernel build, but that config option shouldn't affect anything at
all in the vmlinux.

Paul.
* Re: [PATCH 01/11] [POWERPC] bootwrapper: Allow specifying the image physical offset

From: Kumar Gala
Date: 2008-04-02 13:14 UTC
To: Paul Mackerras
Cc: linuxppc-dev

On Apr 1, 2008, at 7:58 PM, Paul Mackerras wrote:

> Kumar Gala writes:
>
>> Hmm, need to think about that.  But my initial reaction is twofold.
>> One, I don't think this information would be around, and two, don't
>> we already have this problem with a kdump kernel?
>
> I'm just concerned that we have two things that have to match up - the
> compiled-in physical address that the kernel assumes it is running at,
> and the physical address where it is actually loaded.  While those two
> things were both always 0 for embedded processors, there wasn't a
> problem, but now we can have a situation where a kernel binary has to
> be loaded at some nonzero address to work correctly, but there is no
> way to work out what that address is for an existing vmlinux binary.
> Or have I missed something?

Nope, that sums up the situation pretty well.

> For a kdump kernel, at least for 64-bit, the physical address has to
> be 32MB.  There is no other choice, so there is no possibility of
> confusion.

But how do you know a vmlinux image is for kdump or not?

> For 85xx, would it be possible to have the kernel figure out what
> physical address it has been loaded at, and use that as the base
> address, rather than having the base address set at compile time?

Yes, that is what CONFIG_RELOCATABLE is all about.

> That would solve my objection since it would mean that there would no
> longer be two things that had to be kept in sync.  You could pass any
> value to wrapper/mkimage (subject to constraints such as it has to be
> a multiple of 256M) and it would work.  That value could even come
> from a config option in the case where wrapper is invoked as part of
> the kernel build, but that config option shouldn't affect anything at
> all in the vmlinux.

Ok, but I still think the issue exists when we set PHYSICAL_START to
non-zero and CONFIG_RELOCATABLE=n.  Ideally we'd set the phys address
in the PHDR, but I'm not sure how to get the linker to do that.

- k
* Re: [PATCH 01/11] [POWERPC] bootwrapper: Allow specifying of image physical offset
  2008-04-02 13:14               ` Kumar Gala
@ 2008-04-02 17:03                 ` Segher Boessenkool
  0 siblings, 0 replies; 23+ messages in thread
From: Segher Boessenkool @ 2008-04-02 17:03 UTC (permalink / raw)
To: Kumar Gala; +Cc: linuxppc-dev, Paul Mackerras

> Ideally we'd set the phys address in the PHDR, but I'm not sure how
> to get the linker to do that.

I think you want to look at "AT" in the ld manual.


Segher

^ permalink raw reply	[flat|nested] 23+ messages in thread
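For reference, a minimal sketch of what Segher is pointing at: in a GNU
ld linker script, AT() sets a section's load address (LMA), which is
what ends up in the p_paddr field of the PT_LOAD program header,
independently of the virtual (link) address. The file names, addresses,
and CROSS toolchain prefix below are illustrative only, not taken from
the kernel's vmlinux.lds.S:

CROSS=powerpc-linux-    # hypothetical cross-toolchain prefix

cat > at-demo.lds <<'EOF'
SECTIONS
{
	/* Link (virtual) address 0xc0000000, but physical load
	   address 0x10000000 via AT() -- AT() sets the LMA, which
	   becomes p_paddr in the PT_LOAD program header. */
	. = 0xc0000000;
	.text : AT(0x10000000) { *(.text) }
}
EOF

echo 'void _start(void) { }' > at-demo.c
${CROSS}gcc -c -o at-demo.o at-demo.c
${CROSS}ld -T at-demo.lds -o at-demo.elf at-demo.o

# Expect VirtAddr 0xc0000000 but PhysAddr 0x10000000:
${CROSS}readelf -l at-demo.elf | grep -m 1 LOAD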
* Re: [PATCH 01/11] [POWERPC] bootwrapper: Allow specifying of image physical offset
  2008-04-02 13:14               ` Kumar Gala
  2008-04-02 17:03                 ` Segher Boessenkool
@ 2008-04-03  6:34                 ` Paul Mackerras
  2008-04-03  6:47                   ` Kumar Gala
  1 sibling, 1 reply; 23+ messages in thread
From: Paul Mackerras @ 2008-04-03  6:34 UTC (permalink / raw)
To: Kumar Gala; +Cc: linuxppc-dev

Kumar Gala writes:

> > For a kdump kernel, at least for 64-bit, the physical address has to
> > be 32MB. There is no other choice, so there is no possibility of
> > confusion.
>
> But how do you know a vmlinux image is for kdump or not?

By looking at the ELF headers -- either the first PT_LOAD segment or
the .text section -- and seeing whether the start address is
0xc000000002000000 or not.

> > For 85xx, would it be possible to have the kernel figure out what
> > physical address it has been loaded at, and use that as the base
> > address, rather than having the base address set at compile time?
>
> Yes, that is what CONFIG_RELOCATABLE is all about.

Is there any reason to have that as an option, rather than just always
doing that?

> > That would solve my objection since it would mean that there would no
> > longer be two things that had to be kept in sync. You could pass any
> > value to wrapper/mkimage (subject to constraints such as it has to be
> > a multiple of 256M) and it would work. That value could even come
> > from a config option in the case where wrapper is invoked as part of
> > the kernel build, but that config option shouldn't affect anything at
> > all in the vmlinux.
>
> Ok, but I still think the issue exists when we configure
> PHYSICAL_START to be non-zero and CONFIG_RELOCATABLE=n. Ideally we'd
> set the phys address in the PHDR, but I'm not sure how to get the
> linker to do that.

If we can do that then the wrapper script can dig it out and pass it
to mkimage, which would solve the problem.

Paul.

^ permalink raw reply	[flat|nested] 23+ messages in thread
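Concretely, the check Paul describes could look something like the
following -- a sketch only, assuming a 64-bit vmlinux and the fixed
32MB kdump link address (0xc000000002000000) he mentions:

# VirtAddr is the third field of the first LOAD line in readelf -l
# output; compare it against the known 64-bit kdump link address.
vaddr=$(readelf -l vmlinux | grep -m 1 LOAD | awk '{print $3}')
if [ "$vaddr" = "0xc000000002000000" ]; then
	echo "looks like a kdump kernel"
fi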
* Re: [PATCH 01/11] [POWERPC] bootwrapper: Allow specifying of image physical offset
  2008-04-03  6:34                 ` Paul Mackerras
@ 2008-04-03  6:47                   ` Kumar Gala
  2008-04-10  2:25                     ` Paul Mackerras
  0 siblings, 1 reply; 23+ messages in thread
From: Kumar Gala @ 2008-04-03 6:47 UTC (permalink / raw)
To: Paul Mackerras; +Cc: linuxppc-dev

On Apr 3, 2008, at 1:34 AM, Paul Mackerras wrote:

> Kumar Gala writes:
>
>>> For a kdump kernel, at least for 64-bit, the physical address has to
>>> be 32MB. There is no other choice, so there is no possibility of
>>> confusion.
>>
>> But how do you know a vmlinux image is for kdump or not?
>
> By looking at the ELF headers -- either the first PT_LOAD segment or
> the .text section -- and seeing whether the start address is
> 0xc000000002000000 or not.

fair point.

>>> For 85xx, would it be possible to have the kernel figure out what
>>> physical address it has been loaded at, and use that as the base
>>> address, rather than having the base address set at compile time?
>>
>> Yes, that is what CONFIG_RELOCATABLE is all about.
>
> Is there any reason to have that as an option, rather than just always
> doing that?

I'm concerned about the runtime performance of __va() and __pa().

>>> That would solve my objection since it would mean that there would no
>>> longer be two things that had to be kept in sync. You could pass any
>>> value to wrapper/mkimage (subject to constraints such as it has to be
>>> a multiple of 256M) and it would work. That value could even come
>>> from a config option in the case where wrapper is invoked as part of
>>> the kernel build, but that config option shouldn't affect anything at
>>> all in the vmlinux.
>>
>> Ok, but I still think the issue exists when we configure
>> PHYSICAL_START to be non-zero and CONFIG_RELOCATABLE=n. Ideally we'd
>> set the phys address in the PHDR, but I'm not sure how to get the
>> linker to do that.
>
> If we can do that then the wrapper script can dig it out and pass it
> to mkimage, which would solve the problem.

Ok, so on Segher's recommendation I looked at 'AT' and posted a patch
that uses it, so the PT_LOAD PHDR now has the physical address set
properly regardless of the kernel configuration.

So now we can look at the vmlinux and determine the physical offset.
The question is how best to do that. Here are the options I see:

* readelf, grep and parse output
* objdump, grep and parse output
* a simple C program that reads the ELF and reports back

[looking for a suggestion on what direction you want to take]

The other question is whether we'd ever have a vmlinux with more than
one PT_LOAD PHDR. If so, which one do we use (the one with the lowest
physical address)?

- k

^ permalink raw reply	[flat|nested] 23+ messages in thread
* Re: [PATCH 01/11] [POWERPC] bootwrapper: Allow specifying of image physical offset
  2008-04-03  6:47                   ` Kumar Gala
@ 2008-04-10  2:25                     ` Paul Mackerras
  2008-04-10 10:53                       ` Kumar Gala
  0 siblings, 1 reply; 23+ messages in thread
From: Paul Mackerras @ 2008-04-10 2:25 UTC (permalink / raw)
To: Kumar Gala; +Cc: linuxppc-dev

Kumar Gala writes:

> So now we can look at the vmlinux and determine the physical offset.
> The question is how best to do that. Here are the options I see:
> * readelf, grep and parse output
> * objdump, grep and parse output
> * a simple C program that reads the ELF and reports back

Either readelf or objdump for now, and if that proves to be fragile we
can look at a C program. You could do:

	readelf -l $vmlinux | grep -m 1 LOAD | awk '{print $4}'

or

	objdump -p $vmlinux | grep -m 1 LOAD | awk '{print $7}'

There's not a lot of difference. Since the wrapper already uses
objdump, I think we should use objdump rather than making the wrapper
depend on an additional program (readelf).

> The other question is whether we'd ever have a vmlinux with more than
> one PT_LOAD PHDR. If so, which one do we use (the one with the lowest
> physical address)?

I think we would take the first one.

Paul.

^ permalink raw reply	[flat|nested] 23+ messages in thread
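A sketch of how that could slot into the wrapper's uboot case -- the
$kernel, $CROSS, $vmz, and $ofile variables are assumed to match the
existing script, but this exact hunk is hypothetical:

# Take membase from the first PT_LOAD PHDR of the vmlinux (paddr is
# the seventh field of objdump -p's LOAD line), instead of requiring
# it to be passed in with -m:
membase=$(${CROSS}objdump -p "$kernel" | grep -m 1 LOAD | awk '{print $7}')

mkimage -A ppc -O linux -T kernel -C gzip -a $membase -e $membase \
    $uboot_version -d "$vmz" "$ofile"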
* Re: [PATCH 01/11] [POWERPC] bootwrapper: Allow specifying of image physical offset
  2008-04-10  2:25                     ` Paul Mackerras
@ 2008-04-10 10:53                       ` Kumar Gala
  0 siblings, 0 replies; 23+ messages in thread
From: Kumar Gala @ 2008-04-10 10:53 UTC (permalink / raw)
To: Paul Mackerras; +Cc: linuxppc-dev

On Apr 9, 2008, at 9:25 PM, Paul Mackerras wrote:

> Kumar Gala writes:
>
>> So now we can look at the vmlinux and determine the physical offset.
>> The question is how best to do that. Here are the options I see:
>> * readelf, grep and parse output
>> * objdump, grep and parse output
>> * a simple C program that reads the ELF and reports back
>
> Either readelf or objdump for now, and if that proves to be fragile we
> can look at a C program. You could do:
>
> 	readelf -l $vmlinux | grep -m 1 LOAD | awk '{print $4}'
>
> or
>
> 	objdump -p $vmlinux | grep -m 1 LOAD | awk '{print $7}'
>
> There's not a lot of difference. Since the wrapper already uses
> objdump, I think we should use objdump rather than making the wrapper
> depend on an additional program (readelf).
>
>> The other question is whether we'd ever have a vmlinux with more than
>> one PT_LOAD PHDR. If so, which one do we use (the one with the lowest
>> physical address)?
>
> I think we would take the first one.

Ok. I've reworked this patch and sent in the new patch series.

- k

^ permalink raw reply	[flat|nested] 23+ messages in thread
end of thread, other threads:[~2008-04-10 10:53 UTC | newest]

Thread overview: 23+ messages (links below jump to the message on this page):
2008-04-01  2:08 [PATCH 00/11] ppc32 mm init clean and 85xx kernel reloc Kumar Gala
2008-04-01  2:08 ` [PATCH 01/11] [POWERPC] bootwrapper: Allow specifying of image physical offset Kumar Gala
2008-04-01  2:08   ` [PATCH 02/11] [POWERPC] Remove Kconfig option BOOT_LOAD Kumar Gala
2008-04-01  2:08     ` [PATCH 03/11] [POWERPC] Provide access to arch/powerpc include path on ppc64 Kumar Gala
2008-04-01  2:08       ` [PATCH 04/11 v2] [POWERPC] Remove and replace uses of PPC_MEMSTART with memstart_addr Kumar Gala
2008-04-01  2:08         ` [PATCH 05/11] [POWERPC] Introduce lowmem_end_addr to distinguish from total_lowmem Kumar Gala
2008-04-01  2:08           ` [PATCH 06/11] [POWERPC] 85xx: Cleanup TLB initialization Kumar Gala
2008-04-01  2:08             ` [PATCH 07/11] [POWERPC] Use lowmem_end_addr to limit lmb allocations on ppc32 Kumar Gala
2008-04-01  2:08               ` [PATCH 08/11 v2] [POWERPC] Rename __initial_memory_limit to __initial_memory_limit_addr Kumar Gala
2008-04-01  2:08                 ` [PATCH 09/11] [POWERPC] Clean up some linker and symbol usage Kumar Gala
2008-04-01  2:08                   ` [PATCH 10/11] [POWERPC] Move phys_addr_t definition into asm/types.h Kumar Gala
2008-04-01  2:08                     ` [PATCH 11/11] [POWERPC] 85xx: Add support for relocatable kernel (and booting at non-zero) Kumar Gala
2008-04-01 10:50         ` [PATCH 04/11 v2] [POWERPC] Remove and replace uses of PPC_MEMSTART with memstart_addr Paul Mackerras
2008-04-01 21:49           ` Kumar Gala
2008-04-01 10:08 ` [PATCH 01/11] [POWERPC] bootwrapper: Allow specifying of image physical offset Paul Mackerras
2008-04-01 21:52   ` Kumar Gala
2008-04-02  0:58     ` Paul Mackerras
2008-04-02 13:14       ` Kumar Gala
2008-04-02 17:03         ` Segher Boessenkool
2008-04-03  6:34         ` Paul Mackerras
2008-04-03  6:47           ` Kumar Gala
2008-04-10  2:25             ` Paul Mackerras
2008-04-10 10:53               ` Kumar Gala