linuxppc-dev.lists.ozlabs.org archive mirror
* [PATCH v4 0/7] Kdump support for PPC440x
@ 2011-12-09 11:43 Suzuki K. Poulose
  2011-12-09 11:43 ` [PATCH v4 1/7] [booke] Rename mapping based RELOCATABLE to DYNAMIC_MEMSTART for BookE Suzuki K. Poulose
                   ` (7 more replies)
  0 siblings, 8 replies; 14+ messages in thread
From: Suzuki K. Poulose @ 2011-12-09 11:43 UTC (permalink / raw)
  To: Josh Boyer, Benjamin Herrenschmidt
  Cc: Scott Wood, Josh Poimboeuf, linux ppc dev

The following series implements:

 * Generic framework for relocatable kernel on PPC32, based on processing 
   the dynamic relocation entries.
 * Relocatable kernel support for 44x
 * Kdump support for 44x. Doesn't support 47x yet, as the kexec 
   support is missing.

Changes from V3:

 * Added a new config - NONSTATIC_KERNEL - to group different types of relocatable
   kernel. (Suggested by: Josh Boyer)
 * Added supported ppc relocation types in relocs_check.pl for verifying the
   relocations used in the kernel.

Changes from V2:

 * Renamed old style mapping based RELOCATABLE on BookE to DYNAMIC_MEMSTART.
   Suggested by: Scott Wood
 * Added support for DYNAMIC_MEMSTART on PPC440x
 * Reverted back to RELOCATABLE and RELOCATABLE_PPC32 from RELOCATABLE_PPC32_PIE
   for relocation based on processing dynamic reloc entries for PPC32.
 * Ensure the modified instructions are flushed and the i-cache invalidated at
   the end of relocate(). - Reported by: Josh Poimboeuf

Changes from V1:

 * Split the patch 'Enable CONFIG_RELOCATABLE for PPC44x' to move some
   of the generic bits to a new patch.
 * Renamed RELOCATABLE_PPC32 to RELOCATABLE_PPC32_PIE and provided options to
   retain the old style mapping. (Suggested by: Scott Wood)
 * Added support for avoiding the overlapping of uncompressed kernel
   with boot wrapper for PPC images.

The patches are based on -next tree for ppc.

I have tested these patches on Ebony, Sequoia and Virtex(QEMU Emulated).
I haven't tested the RELOCATABLE bits on PPC_47x yet, as I don't have access
to one. However, RELOCATABLE should work fine there as we only depend on the 
runtime address and the XLAT entry set up by the boot loader. It would be great if
somebody could test these patches on a 47x.


---

Suzuki K. Poulose (7):
      [boot] Change the load address for the wrapper to fit the kernel
      [44x] Enable CRASH_DUMP for 440x
      [44x] Enable CONFIG_RELOCATABLE for PPC44x
      [ppc] Define virtual-physical translations for RELOCATABLE
      [ppc] Process dynamic relocations for kernel
      [44x] Enable DYNAMIC_MEMSTART for 440x
      [booke] Rename mapping based RELOCATABLE to DYNAMIC_MEMSTART for BookE


 arch/powerpc/Kconfig                          |   46 +++++++++--
 arch/powerpc/Makefile                         |    6 +
 arch/powerpc/boot/wrapper                     |   20 +++++
 arch/powerpc/configs/44x/iss476-smp_defconfig |    2 
 arch/powerpc/include/asm/kdump.h              |    4 -
 arch/powerpc/include/asm/page.h               |   89 ++++++++++++++++++++-
 arch/powerpc/kernel/Makefile                  |    2 
 arch/powerpc/kernel/crash_dump.c              |    4 -
 arch/powerpc/kernel/head_44x.S                |  105 +++++++++++++++++++++++++
 arch/powerpc/kernel/head_fsl_booke.S          |    2 
 arch/powerpc/kernel/machine_kexec.c           |    2 
 arch/powerpc/kernel/prom_init.c               |    2 
 arch/powerpc/kernel/vmlinux.lds.S             |    8 ++
 arch/powerpc/mm/44x_mmu.c                     |    2 
 arch/powerpc/mm/init_32.c                     |    7 ++
 arch/powerpc/relocs_check.pl                  |    7 ++
 16 files changed, 282 insertions(+), 26 deletions(-)

--
Suzuki


* [PATCH v4 1/7] [booke] Rename mapping based RELOCATABLE to DYNAMIC_MEMSTART for BookE
  2011-12-09 11:43 [PATCH v4 0/7] Kdump support for PPC440x Suzuki K. Poulose
@ 2011-12-09 11:43 ` Suzuki K. Poulose
  2011-12-09 11:47 ` [PATCH v4 2/7] [44x] Enable DYNAMIC_MEMSTART for 440x Suzuki K. Poulose
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Suzuki K. Poulose @ 2011-12-09 11:43 UTC (permalink / raw)
  To: Josh Boyer, Benjamin Herrenschmidt
  Cc: Scott Wood, Josh Poimboeuf, linux ppc dev

The current implementation of CONFIG_RELOCATABLE in BookE is based
on mapping the page aligned kernel load address to KERNELBASE. This
approach, however, is not enough for platforms where the TLB page size
is large (e.g., 256M on 44x). So we are renaming the RELOCATABLE used
currently in BookE to DYNAMIC_MEMSTART to reflect the actual method.

The CONFIG_RELOCATABLE for PPC32 (BookE), based on processing of the
dynamic relocations, will be introduced later in the patch series.

This change would allow the use of the old method of RELOCATABLE for
platforms which can afford to enforce the page alignment (platforms with
a smaller TLB page size).

Changes since v3:

* Introduced a new config, NONSTATIC_KERNEL, to denote a kernel which is
  either RELOCATABLE or DYNAMIC_MEMSTART. (Suggested by: Josh Boyer)

Suggested-by: Scott Wood <scottwood@freescale.com>
Tested-by: Scott Wood <scottwood@freescale.com>

Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
Cc: Scott Wood <scottwood@freescale.com>
Cc: Kumar Gala <galak@kernel.crashing.org>
Cc: Josh Boyer <jwboyer@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: linux ppc dev <linuxppc-dev@lists.ozlabs.org>
---

 arch/powerpc/Kconfig                          |   60 +++++++++++++++++--------
 arch/powerpc/configs/44x/iss476-smp_defconfig |    2 -
 arch/powerpc/include/asm/kdump.h              |    4 +-
 arch/powerpc/include/asm/page.h               |    4 +-
 arch/powerpc/kernel/crash_dump.c              |    4 +-
 arch/powerpc/kernel/head_44x.S                |    4 +-
 arch/powerpc/kernel/head_fsl_booke.S          |    2 -
 arch/powerpc/kernel/machine_kexec.c           |    2 -
 arch/powerpc/kernel/prom_init.c               |    2 -
 arch/powerpc/mm/44x_mmu.c                     |    2 -
 10 files changed, 55 insertions(+), 31 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 7c93c7e..fac92ce 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -364,7 +364,8 @@ config KEXEC
 config CRASH_DUMP
 	bool "Build a kdump crash kernel"
 	depends on PPC64 || 6xx || FSL_BOOKE
-	select RELOCATABLE if PPC64 || FSL_BOOKE
+	select RELOCATABLE if PPC64
+	select DYNAMIC_MEMSTART if FSL_BOOKE
 	help
 	  Build a kernel suitable for use as a kdump capture kernel.
 	  The same kernel binary can be used as production kernel and dump
@@ -773,6 +774,10 @@ source "drivers/rapidio/Kconfig"
 
 endmenu
 
+config NONSTATIC_KERNEL
+	bool
+	default n
+
 menu "Advanced setup"
 	depends on PPC32
 
@@ -822,23 +827,39 @@ config LOWMEM_CAM_NUM
 	int "Number of CAMs to use to map low memory" if LOWMEM_CAM_NUM_BOOL
 	default 3
 
-config RELOCATABLE
-	bool "Build a relocatable kernel (EXPERIMENTAL)"
+config DYNAMIC_MEMSTART
+	bool "Enable page aligned dynamic load address for kernel (EXPERIMENTAL)"
 	depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && (FSL_BOOKE || PPC_47x)
-	help
-	  This builds a kernel image that is capable of running at the
-	  location the kernel is loaded at (some alignment restrictions may
-	  exist).
-
-	  One use is for the kexec on panic case where the recovery kernel
-	  must live at a different physical address than the primary
-	  kernel.
-
-	  Note: If CONFIG_RELOCATABLE=y, then the kernel runs from the address
-	  it has been loaded at and the compile time physical addresses
-	  CONFIG_PHYSICAL_START is ignored.  However CONFIG_PHYSICAL_START
-	  setting can still be useful to bootwrappers that need to know the
-	  load location of the kernel (eg. u-boot/mkimage).
+	select NONSTATIC_KERNEL
+	help
+	  This option enables the kernel to be loaded at any page aligned
+	  physical address. The kernel creates a mapping from KERNELBASE to 
+	  the address where the kernel is loaded. The page size here implies
+	  the TLB page size of the mapping for kernel on the particular platform.
+	  Please refer to the init code for finding the TLB page size.
+
+	  DYNAMIC_MEMSTART is an easy way of implementing pseudo-RELOCATABLE
+	  kernel image, where the only restriction is the page aligned kernel
+	  load address. When this option is enabled, the compile time physical 
+	  address CONFIG_PHYSICAL_START is ignored.
+
+# Mapping based RELOCATABLE is moved to DYNAMIC_MEMSTART
+# config RELOCATABLE
+#	bool "Build a relocatable kernel (EXPERIMENTAL)"
+#	depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && (FSL_BOOKE || PPC_47x)
+#	help
+#	  This builds a kernel image that is capable of running at the
+#	  location the kernel is loaded at, without any alignment restrictions.
+#
+#	  One use is for the kexec on panic case where the recovery kernel
+#	  must live at a different physical address than the primary
+#	  kernel.
+#
+#	  Note: If CONFIG_RELOCATABLE=y, then the kernel runs from the address
+#	  it has been loaded at and the compile time physical addresses
+#	  CONFIG_PHYSICAL_START is ignored.  However CONFIG_PHYSICAL_START
+#	  setting can still be useful to bootwrappers that need to know the
+#	  load location of the kernel (eg. u-boot/mkimage).
 
 config PAGE_OFFSET_BOOL
 	bool "Set custom page offset address"
@@ -868,7 +889,7 @@ config KERNEL_START_BOOL
 config KERNEL_START
 	hex "Virtual address of kernel base" if KERNEL_START_BOOL
 	default PAGE_OFFSET if PAGE_OFFSET_BOOL
-	default "0xc2000000" if CRASH_DUMP && !RELOCATABLE
+	default "0xc2000000" if CRASH_DUMP && !NONSTATIC_KERNEL
 	default "0xc0000000"
 
 config PHYSICAL_START_BOOL
@@ -881,7 +902,7 @@ config PHYSICAL_START_BOOL
 
 config PHYSICAL_START
 	hex "Physical address where the kernel is loaded" if PHYSICAL_START_BOOL
-	default "0x02000000" if PPC_STD_MMU && CRASH_DUMP && !RELOCATABLE
+	default "0x02000000" if PPC_STD_MMU && CRASH_DUMP && !NONSTATIC_KERNEL
 	default "0x00000000"
 
 config PHYSICAL_ALIGN
@@ -927,6 +948,7 @@ endmenu
 if PPC64
 config RELOCATABLE
 	bool "Build a relocatable kernel"
+	select NONSTATIC_KERNEL
 	help
 	  This builds a kernel image that is capable of running anywhere
 	  in the RMA (real memory area) at any 16k-aligned base address.
diff --git a/arch/powerpc/configs/44x/iss476-smp_defconfig b/arch/powerpc/configs/44x/iss476-smp_defconfig
index a6eb6ad..122043e 100644
--- a/arch/powerpc/configs/44x/iss476-smp_defconfig
+++ b/arch/powerpc/configs/44x/iss476-smp_defconfig
@@ -25,7 +25,7 @@ CONFIG_CMDLINE_BOOL=y
 CONFIG_CMDLINE="root=/dev/issblk0"
 # CONFIG_PCI is not set
 CONFIG_ADVANCED_OPTIONS=y
-CONFIG_RELOCATABLE=y
+CONFIG_DYNAMIC_MEMSTART=y
 CONFIG_NET=y
 CONFIG_PACKET=y
 CONFIG_UNIX=y
diff --git a/arch/powerpc/include/asm/kdump.h b/arch/powerpc/include/asm/kdump.h
index bffd062..c977620 100644
--- a/arch/powerpc/include/asm/kdump.h
+++ b/arch/powerpc/include/asm/kdump.h
@@ -32,11 +32,11 @@
 
 #ifndef __ASSEMBLY__
 
-#if defined(CONFIG_CRASH_DUMP) && !defined(CONFIG_RELOCATABLE)
+#if defined(CONFIG_CRASH_DUMP) && !defined(CONFIG_NONSTATIC_KERNEL)
 extern void reserve_kdump_trampoline(void);
 extern void setup_kdump_trampoline(void);
 #else
-/* !CRASH_DUMP || RELOCATABLE */
+/* !CRASH_DUMP || !NONSTATIC_KERNEL */
 static inline void reserve_kdump_trampoline(void) { ; }
 static inline void setup_kdump_trampoline(void) { ; }
 #endif
diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index 9d7485c..f149967 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -92,7 +92,7 @@ extern unsigned int HPAGE_SHIFT;
 #define PAGE_OFFSET	ASM_CONST(CONFIG_PAGE_OFFSET)
 #define LOAD_OFFSET	ASM_CONST((CONFIG_KERNEL_START-CONFIG_PHYSICAL_START))
 
-#if defined(CONFIG_RELOCATABLE)
+#if defined(CONFIG_NONSTATIC_KERNEL)
 #ifndef __ASSEMBLY__
 
 extern phys_addr_t memstart_addr;
@@ -105,7 +105,7 @@ extern phys_addr_t kernstart_addr;
 
 #ifdef CONFIG_PPC64
 #define MEMORY_START	0UL
-#elif defined(CONFIG_RELOCATABLE)
+#elif defined(CONFIG_NONSTATIC_KERNEL)
 #define MEMORY_START	memstart_addr
 #else
 #define MEMORY_START	(PHYSICAL_START + PAGE_OFFSET - KERNELBASE)
diff --git a/arch/powerpc/kernel/crash_dump.c b/arch/powerpc/kernel/crash_dump.c
index 424afb6..b3ba516 100644
--- a/arch/powerpc/kernel/crash_dump.c
+++ b/arch/powerpc/kernel/crash_dump.c
@@ -28,7 +28,7 @@
 #define DBG(fmt...)
 #endif
 
-#ifndef CONFIG_RELOCATABLE
+#ifndef CONFIG_NONSTATIC_KERNEL
 void __init reserve_kdump_trampoline(void)
 {
 	memblock_reserve(0, KDUMP_RESERVE_LIMIT);
@@ -67,7 +67,7 @@ void __init setup_kdump_trampoline(void)
 
 	DBG(" <- setup_kdump_trampoline()\n");
 }
-#endif /* CONFIG_RELOCATABLE */
+#endif /* CONFIG_NONSTATIC_KERNEL */
 
 static int __init parse_savemaxmem(char *p)
 {
diff --git a/arch/powerpc/kernel/head_44x.S b/arch/powerpc/kernel/head_44x.S
index b725dab..d5f787d 100644
--- a/arch/powerpc/kernel/head_44x.S
+++ b/arch/powerpc/kernel/head_44x.S
@@ -86,8 +86,10 @@ _ENTRY(_start);
 
 	bl	early_init
 
-#ifdef CONFIG_RELOCATABLE
+#ifdef CONFIG_DYNAMIC_MEMSTART
 	/*
+	 * Mapping based, page aligned dynamic kernel loading.
+	 *
 	 * r25 will contain RPN/ERPN for the start address of memory
 	 *
 	 * Add the difference between KERNELBASE and PAGE_OFFSET to the
diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index 9f5d210..d5d78c4 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -197,7 +197,7 @@ _ENTRY(__early_start)
 
 	bl	early_init
 
-#ifdef CONFIG_RELOCATABLE
+#ifdef CONFIG_DYNAMIC_MEMSTART
 	lis	r3,kernstart_addr@ha
 	la	r3,kernstart_addr@l(r3)
 #ifdef CONFIG_PHYS_64BIT
diff --git a/arch/powerpc/kernel/machine_kexec.c b/arch/powerpc/kernel/machine_kexec.c
index 9ce1672..ec50bb9 100644
--- a/arch/powerpc/kernel/machine_kexec.c
+++ b/arch/powerpc/kernel/machine_kexec.c
@@ -128,7 +128,7 @@ void __init reserve_crashkernel(void)
 
 	crash_size = resource_size(&crashk_res);
 
-#ifndef CONFIG_RELOCATABLE
+#ifndef CONFIG_NONSTATIC_KERNEL
 	if (crashk_res.start != KDUMP_KERNELBASE)
 		printk("Crash kernel location must be 0x%x\n",
 				KDUMP_KERNELBASE);
diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c
index df47316..6e63b20 100644
--- a/arch/powerpc/kernel/prom_init.c
+++ b/arch/powerpc/kernel/prom_init.c
@@ -2844,7 +2844,7 @@ unsigned long __init prom_init(unsigned long r3, unsigned long r4,
 	RELOC(of_platform) = prom_find_machine_type();
 	prom_printf("Detected machine type: %x\n", RELOC(of_platform));
 
-#ifndef CONFIG_RELOCATABLE
+#ifndef CONFIG_NONSTATIC_KERNEL
 	/* Bail if this is a kdump kernel. */
 	if (PHYSICAL_START > 0)
 		prom_panic("Error: You can't boot a kdump kernel from OF!\n");
diff --git a/arch/powerpc/mm/44x_mmu.c b/arch/powerpc/mm/44x_mmu.c
index f60e006..924a258 100644
--- a/arch/powerpc/mm/44x_mmu.c
+++ b/arch/powerpc/mm/44x_mmu.c
@@ -221,7 +221,7 @@ void setup_initial_memory_limit(phys_addr_t first_memblock_base,
 {
 	u64 size;
 
-#ifndef CONFIG_RELOCATABLE
+#ifndef CONFIG_NONSTATIC_KERNEL
 	/* We don't currently support the first MEMBLOCK not mapping 0
 	 * physical on those processors
 	 */


* [PATCH v4 2/7] [44x] Enable DYNAMIC_MEMSTART for 440x
  2011-12-09 11:43 [PATCH v4 0/7] Kdump support for PPC440x Suzuki K. Poulose
  2011-12-09 11:43 ` [PATCH v4 1/7] [booke] Rename mapping based RELOCATABLE to DYNAMIC_MEMSTART for BookE Suzuki K. Poulose
@ 2011-12-09 11:47 ` Suzuki K. Poulose
  2011-12-09 11:47 ` [PATCH v4 3/7] [ppc] Process dynamic relocations for kernel Suzuki K. Poulose
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Suzuki K. Poulose @ 2011-12-09 11:47 UTC (permalink / raw)
  To: Josh Boyer, Benjamin Herrenschmidt
  Cc: Scott Wood, Josh Poimboeuf, linux ppc dev

DYNAMIC_MEMSTART (the old RELOCATABLE) was restricted to the PPC_47x variants
of 44x. This patch enables DYNAMIC_MEMSTART for 440x-based chipsets.

Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
Cc: Josh Boyer <jwboyer@gmail.com>
Cc: Kumar Gala <galak@kernel.crashing.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: linux ppc dev <linuxppc-dev@lists.ozlabs.org>
---

 arch/powerpc/Kconfig           |    2 +-
 arch/powerpc/kernel/head_44x.S |   12 ++++++++++++
 2 files changed, 13 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index fac92ce..5eafe95 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -829,7 +829,7 @@ config LOWMEM_CAM_NUM
 
 config DYNAMIC_MEMSTART
 	bool "Enable page aligned dynamic load address for kernel (EXPERIMENTAL)"
-	depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && (FSL_BOOKE || PPC_47x)
+	depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && (FSL_BOOKE || 44x)
 	select NONSTATIC_KERNEL
 	help
 	  This option enables the kernel to be loaded at any page aligned
diff --git a/arch/powerpc/kernel/head_44x.S b/arch/powerpc/kernel/head_44x.S
index d5f787d..62a4cd5 100644
--- a/arch/powerpc/kernel/head_44x.S
+++ b/arch/powerpc/kernel/head_44x.S
@@ -802,12 +802,24 @@ skpinv:	addi	r4,r4,1				/* Increment */
 /*
  * Configure and load pinned entry into TLB slot 63.
  */
+#ifdef CONFIG_DYNAMIC_MEMSTART
+
+	/* Read the XLAT entry for our current mapping */
+	tlbre	r25,r23,PPC44x_TLB_XLAT
+
+	lis	r3,KERNELBASE@h
+	ori	r3,r3,KERNELBASE@l
+
+	/* Use our current RPN entry */
+	mr	r4,r25
+#else
 
 	lis	r3,PAGE_OFFSET@h
 	ori	r3,r3,PAGE_OFFSET@l
 
 	/* Kernel is at the base of RAM */
 	li r4, 0			/* Load the kernel physical address */
+#endif
 
 	/* Load the kernel PID = 0 */
 	li	r0,0


* [PATCH v4 3/7] [ppc] Process dynamic relocations for kernel
  2011-12-09 11:43 [PATCH v4 0/7] Kdump support for PPC440x Suzuki K. Poulose
  2011-12-09 11:43 ` [PATCH v4 1/7] [booke] Rename mapping based RELOCATABLE to DYNAMIC_MEMSTART for BookE Suzuki K. Poulose
  2011-12-09 11:47 ` [PATCH v4 2/7] [44x] Enable DYNAMIC_MEMSTART for 440x Suzuki K. Poulose
@ 2011-12-09 11:47 ` Suzuki K. Poulose
  2011-12-09 13:40   ` Josh Boyer
  2011-12-10  2:37   ` [UPDATED] " Suzuki K. Poulose
  2011-12-09 11:47 ` [PATCH v4 4/7] [ppc] Define virtual-physical translations for RELOCATABLE Suzuki K. Poulose
                   ` (4 subsequent siblings)
  7 siblings, 2 replies; 14+ messages in thread
From: Suzuki K. Poulose @ 2011-12-09 11:47 UTC (permalink / raw)
  To: Josh Boyer, Benjamin Herrenschmidt
  Cc: Scott Wood, Josh Poimboeuf, linux ppc dev

The following patch implements dynamic relocation processing for the
PPC32 kernel. relocate() accepts the target virtual address and relocates
the kernel image to that address.

Currently the following relocation types are handled :

	R_PPC_RELATIVE
	R_PPC_ADDR16_LO
	R_PPC_ADDR16_HI
	R_PPC_ADDR16_HA

The last three relocation types in the above list depend on the value of a
symbol, whose index is encoded in the relocation entry. Hence we need the
symbol table to process such relocations.

Note: The GNU ld for ppc32 produces buggy relocations for relocation types
that depend on symbols. The value of the symbols with STB_LOCAL scope
should be assumed to be zero. - Alan Modra
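
For reference, here is a minimal C sketch (illustrative only, not part of the
patch; the helper name is made up) of how relocate() applies each handled
type. 'delta' is the difference between the runtime and link-time base, S the
link-time symbol value (taken as zero for STB_LOCAL symbols, per the note
above), A the addend and 'where' the patched location:

#include <stdint.h>

enum {
	R_PPC_ADDR16_LO	= 4,	/* lower 16 bits of (S + A) */
	R_PPC_ADDR16_HI	= 5,	/* upper 16 bits of (S + A) */
	R_PPC_ADDR16_HA	= 6,	/* adjusted upper 16 bits of (S + A) */
	R_PPC_RELATIVE	= 22,	/* base + addend */
};

static void apply_rela(uint32_t type, void *where, uint32_t S, uint32_t A,
		       uint32_t delta)
{
	uint32_t v = S + A + delta;	/* relocated value of (S + A) */

	switch (type) {
	case R_PPC_RELATIVE:		/* patch a full word */
		*(uint32_t *)where = A + delta;
		break;
	case R_PPC_ADDR16_LO:		/* low half-word */
		*(uint16_t *)where = v & 0xffff;
		break;
	case R_PPC_ADDR16_HI:		/* high half-word */
		*(uint16_t *)where = v >> 16;
		break;
	case R_PPC_ADDR16_HA:		/* high half, adjusted for signed low half */
		*(uint16_t *)where = (v >> 16) + ((v & 0x8000) ? 1 : 0);
		break;
	}
}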

Changes since V3:
 * Updated relocation types for ppc in arch/powerpc/relocs_check.pl

Changes since v2:
  * Flush the modified instructions from the d-cache and invalidate the
    i-cache to reflect the processed instructions. (Reported by: Josh Poimboeuf)

Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@linux.vnet.ibm.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Alan Modra <amodra@au1.ibm.com>
Cc: Kumar Gala <galak@kernel.crashing.org>
Cc: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
---

 arch/powerpc/Kconfig              |   42 ++++++++++++++++++++++---------------
 arch/powerpc/Makefile             |    6 +++--
 arch/powerpc/kernel/Makefile      |    2 ++
 arch/powerpc/kernel/vmlinux.lds.S |    8 ++++++-
 arch/powerpc/relocs_check.pl      |    7 ++++++
 5 files changed, 44 insertions(+), 21 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 5eafe95..6936cb0 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -843,23 +843,31 @@ config DYNAMIC_MEMSTART
 	  load address. When this option is enabled, the compile time physical 
 	  address CONFIG_PHYSICAL_START is ignored.
 
-# Mapping based RELOCATABLE is moved to DYNAMIC_MEMSTART
-# config RELOCATABLE
-#	bool "Build a relocatable kernel (EXPERIMENTAL)"
-#	depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && (FSL_BOOKE || PPC_47x)
-#	help
-#	  This builds a kernel image that is capable of running at the
-#	  location the kernel is loaded at, without any alignment restrictions.
-#
-#	  One use is for the kexec on panic case where the recovery kernel
-#	  must live at a different physical address than the primary
-#	  kernel.
-#
-#	  Note: If CONFIG_RELOCATABLE=y, then the kernel runs from the address
-#	  it has been loaded at and the compile time physical addresses
-#	  CONFIG_PHYSICAL_START is ignored.  However CONFIG_PHYSICAL_START
-#	  setting can still be useful to bootwrappers that need to know the
-#	  load location of the kernel (eg. u-boot/mkimage).
+	  This option is overridden by RELOCATABLE.
+
+config RELOCATABLE
+	bool "Build a relocatable kernel (EXPERIMENTAL)"
+	depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM
+	select NONSTATIC_KERNEL
+	help
+	  This builds a kernel image that is capable of running at the
+	  location the kernel is loaded at, without any alignment restrictions.
+	  This feature is a superset of DYNAMIC_MEMSTART, and hence overrides 
+	  it.
+
+	  One use is for the kexec on panic case where the recovery kernel
+	  must live at a different physical address than the primary
+	  kernel.
+
+	  Note: If CONFIG_RELOCATABLE=y, then the kernel runs from the address
+	  it has been loaded at and the compile time physical addresses
+	  CONFIG_PHYSICAL_START is ignored.  However CONFIG_PHYSICAL_START
+	  setting can still be useful to bootwrappers that need to know the
+	  load address of the kernel (eg. u-boot/mkimage).
+
+config RELOCATABLE_PPC32
+	def_bool y
+	depends on PPC32 && RELOCATABLE
 
 config PAGE_OFFSET_BOOL
 	bool "Set custom page offset address"
diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index ffe4d88..b8b105c 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -63,9 +63,9 @@ override CC	+= -m$(CONFIG_WORD_SIZE)
 override AR	:= GNUTARGET=elf$(CONFIG_WORD_SIZE)-powerpc $(AR)
 endif
 
-LDFLAGS_vmlinux-yy := -Bstatic
-LDFLAGS_vmlinux-$(CONFIG_PPC64)$(CONFIG_RELOCATABLE) := -pie
-LDFLAGS_vmlinux	:= $(LDFLAGS_vmlinux-yy)
+LDFLAGS_vmlinux-y := -Bstatic
+LDFLAGS_vmlinux-$(CONFIG_RELOCATABLE) := -pie
+LDFLAGS_vmlinux	:= $(LDFLAGS_vmlinux-y)
 
 CFLAGS-$(CONFIG_PPC64)	:= -mminimal-toc -mtraceback=no -mcall-aixdesc
 CFLAGS-$(CONFIG_PPC32)	:= -ffixed-r2 -mmultiple
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index ce4f7f1..ee728e4 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -85,6 +85,8 @@ extra-$(CONFIG_FSL_BOOKE)	:= head_fsl_booke.o
 extra-$(CONFIG_8xx)		:= head_8xx.o
 extra-y				+= vmlinux.lds
 
+obj-$(CONFIG_RELOCATABLE_PPC32)	+= reloc_32.o
+
 obj-$(CONFIG_PPC32)		+= entry_32.o setup_32.o
 obj-$(CONFIG_PPC64)		+= dma-iommu.o iommu.o
 obj-$(CONFIG_KGDB)		+= kgdb.o
diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
index 920276c..710a540 100644
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -170,7 +170,13 @@ SECTIONS
 	}
 #ifdef CONFIG_RELOCATABLE
 	. = ALIGN(8);
-	.dynsym : AT(ADDR(.dynsym) - LOAD_OFFSET) { *(.dynsym) }
+	.dynsym : AT(ADDR(.dynsym) - LOAD_OFFSET)
+	{
+#ifdef CONFIG_RELOCATABLE_PPC32
+		__dynamic_symtab = .;
+#endif
+		*(.dynsym)
+	}
 	.dynstr : AT(ADDR(.dynstr) - LOAD_OFFSET) { *(.dynstr) }
 	.dynamic : AT(ADDR(.dynamic) - LOAD_OFFSET)
 	{
diff --git a/arch/powerpc/relocs_check.pl b/arch/powerpc/relocs_check.pl
index d257109..71aeb03 100755
--- a/arch/powerpc/relocs_check.pl
+++ b/arch/powerpc/relocs_check.pl
@@ -32,8 +32,15 @@ while (<FD>) {
 	next if (!/\s+R_/);
 
 	# These relocations are okay
+	# On PPC64:
+	# 	R_PPC64_RELATIVE, R_PPC64_NONE, R_PPC64_ADDR64
+	# On PPC:
+	# 	R_PPC_RELATIVE, R_PPC_ADDR16_HI, 
+	# 	R_PPC_ADDR16_HA,R_PPC_ADDR16_LO
 	next if (/R_PPC64_RELATIVE/ or /R_PPC64_NONE/ or
 	         /R_PPC64_ADDR64\s+mach_/);
+	next if (/R_PPC_ADDR16_LO/ or /R_PPC_ADDR16_HI/ or
+		 /R_PPC_ADDR16_HA/ or /R_PPC_RELATIVE/);
 
 	# If we see this type of relcoation it's an idication that
 	# we /may/ be using an old version of binutils.


* [PATCH v4 4/7] [ppc] Define virtual-physical translations for RELOCATABLE
  2011-12-09 11:43 [PATCH v4 0/7] Kdump support for PPC440x Suzuki K. Poulose
                   ` (2 preceding siblings ...)
  2011-12-09 11:47 ` [PATCH v4 3/7] [ppc] Process dynamic relocations for kernel Suzuki K. Poulose
@ 2011-12-09 11:47 ` Suzuki K. Poulose
  2011-12-09 11:48 ` [PATCH v4 5/7] [44x] Enable CONFIG_RELOCATABLE for PPC44x Suzuki K. Poulose
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Suzuki K. Poulose @ 2011-12-09 11:47 UTC (permalink / raw)
  To: Josh Boyer, Benjamin Herrenschmidt
  Cc: Scott Wood, Josh Poimboeuf, linux ppc dev

We find the runtime address of _stext and relocate ourselves based
on the following calculation.

	virtual_base = ALIGN(KERNELBASE,KERNEL_TLB_PIN_SIZE) +
			MODULO(_stext.run,KERNEL_TLB_PIN_SIZE)

relocate() is called with the Effective Virtual Base Address (as
shown below)

            | Phys. Addr| Virt. Addr |
Page        |------------------------|
Boundary    |           |            |
            |           |            |
            |           |            |
Kernel Load |___________|_ __ _ _ _ _|<- Effective
Addr(_stext)|           |      ^     |Virt. Base Addr
            |           |      |     |
            |           |      |     |
            |           |reloc_offset|
            |           |      |     |
            |           |      |     |
            |           |______v_____|<-(KERNELBASE)%TLB_SIZE
            |           |            |
            |           |            |
            |           |            |
Page        |-----------|------------|
Boundary    |           |            |


On BookE, we need __va() & __pa() early in the boot process to access
the device tree.

Currently this has been defined as :

#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) -
						PHYSICAL_START + KERNELBASE)
where:
 PHYSICAL_START is kernstart_addr - a variable updated at runtime.
 KERNELBASE	is the compile time Virtual base address of kernel.

This won't work for us, as kernstart_addr is dynamic and will yield different
results for __va()/__pa() for the same mapping.

e.g.,

Let the kernel be loaded at 64MB and KERNELBASE be 0xc0000000 (same as
PAGE_OFFSET).

In this case, we would be mapping 0 to 0xc0000000, and kernstart_addr = 64M

Now __va(1MB) = (0x100000) - (0x4000000) + 0xc0000000
		= 0xbc100000 , which is wrong.

it should be : 0xc0000000 + 0x100000 = 0xc0100000

On platforms which support AMP, like PPC_47x (based on 44x), the kernel
could be loaded at highmem. Hence we cannot always depend on the compile
time constants for mapping.

Here are the possible solutions:

1) Update kernstart_addr (PHYSICAL_START) to match the physical address of the
compile time KERNELBASE value, instead of the actual physical address of _stext.

The disadvantage is that we may break other users of PHYSICAL_START. They
could be replaced with __pa(_stext).

2) Redefine __va() & __pa() with relocation offset


#ifdef	CONFIG_RELOCATABLE_PPC32
#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) - PHYSICAL_START + (KERNELBASE + RELOC_OFFSET)))
#define __pa(x) ((unsigned long)(x) + PHYSICAL_START - (KERNELBASE + RELOC_OFFSET))
#endif

where, RELOC_OFFSET could be

  a) A variable, say relocation_offset (like kernstart_addr), updated
     at boot time. This impacts performance, as we have to load an additional
     variable from memory.

		OR

  b) #define RELOC_OFFSET ((PHYSICAL_START & PPC_PIN_SIZE_OFFSET_MASK) - \
                      (KERNELBASE & PPC_PIN_SIZE_OFFSET_MASK))

   This introduces more calculations for doing the translation.

3) Redefine __va() & __pa() with a new variable

i.e,

#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) + VIRT_PHYS_OFFSET))

where VIRT_PHYS_OFFSET :

#ifdef CONFIG_RELOCATABLE_PPC32
#define VIRT_PHYS_OFFSET virt_phys_offset
#else
#define VIRT_PHYS_OFFSET (KERNELBASE - PHYSICAL_START)
#endif /* CONFIG_RELOCATABLE_PPC32 */

where virt_phys_offset is updated at runtime to :

	Effective KERNELBASE - kernstart_addr.

Taking our example, above:

virt_phys_offset = effective_kernelstart_vaddr - kernstart_addr
		 = 0xc4000000 - 0x4000000
		 = 0xc0000000
	and

	__va(0x100000) = 0xc0000000 + 0x100000 = 0xc0100000
	 which is what we want.

I have implemented (3) in the following patch, which has the same cost of
operation as the existing one.
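
As a quick sanity check of option (3), the numbers from the example above can
be plugged into a small user-space sketch (purely illustrative; 256M is the
44x pinned TLB size used throughout this series):

#include <stdio.h>

#define KERNELBASE	0xc0000000UL
#define PIN_SIZE	0x10000000UL		/* 256M pinned TLB entry */

int main(void)
{
	unsigned long kernstart_addr = 0x04000000UL;	/* kernel loaded at 64MB */

	/* Effective KERNELBASE = ALIGN_DOWN(KERNELBASE,256M) + (load addr % 256M) */
	unsigned long eff_kernelbase = (KERNELBASE & ~(PIN_SIZE - 1)) +
				       (kernstart_addr & (PIN_SIZE - 1));

	long long virt_phys_offset = (long long)eff_kernelbase - kernstart_addr;

	/* __va(x) = x + virt_phys_offset */
	printf("virt_phys_offset = 0x%llx\n",
	       (unsigned long long)virt_phys_offset);		/* 0xc0000000 */
	printf("__va(0x100000)   = 0x%llx\n",
	       (unsigned long long)(0x100000 + virt_phys_offset));	/* 0xc0100000 */
	return 0;
}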

I have tested the patches on 440x platforms only. However this should
work fine for PPC_47x also, as we only depend on the runtime address
and the current TLB XLAT entry for the startup code, which is available
in r25. I don't have access to a 47x board yet. So, it would be great if
somebody could test this on 47x.

Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Kumar Gala <galak@kernel.crashing.org>
Cc: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
---

 arch/powerpc/include/asm/page.h |   85 ++++++++++++++++++++++++++++++++++++++-
 arch/powerpc/mm/init_32.c       |    7 +++
 2 files changed, 89 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index f149967..f072e97 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -97,12 +97,26 @@ extern unsigned int HPAGE_SHIFT;
 
 extern phys_addr_t memstart_addr;
 extern phys_addr_t kernstart_addr;
+
+#ifdef CONFIG_RELOCATABLE_PPC32
+extern long long virt_phys_offset;
 #endif
+
+#endif /* __ASSEMBLY__ */
 #define PHYSICAL_START	kernstart_addr
-#else
+
+#else	/* !CONFIG_NONSTATIC_KERNEL */
 #define PHYSICAL_START	ASM_CONST(CONFIG_PHYSICAL_START)
 #endif
 
+/* See Description below for VIRT_PHYS_OFFSET */
+#ifdef CONFIG_RELOCATABLE_PPC32
+#define VIRT_PHYS_OFFSET virt_phys_offset
+#else
+#define VIRT_PHYS_OFFSET (KERNELBASE - PHYSICAL_START)
+#endif
+
+
 #ifdef CONFIG_PPC64
 #define MEMORY_START	0UL
 #elif defined(CONFIG_NONSTATIC_KERNEL)
@@ -125,12 +139,77 @@ extern phys_addr_t kernstart_addr;
  * determine MEMORY_START until then.  However we can determine PHYSICAL_START
  * from information at hand (program counter, TLB lookup).
  *
+ * On BookE with RELOCATABLE (RELOCATABLE_PPC32)
+ *
+ *   With RELOCATABLE_PPC32,  we support loading the kernel at any physical 
+ *   address without any restriction on the page alignment.
+ *
+ *   We find the runtime address of _stext and relocate ourselves based on 
+ *   the following calculation:
+ *
+ *  	  virtual_base = ALIGN_DOWN(KERNELBASE,256M) +
+ *  				MODULO(_stext.run,256M)
+ *   and create the following mapping:
+ *
+ * 	  ALIGN_DOWN(_stext.run,256M) => ALIGN_DOWN(KERNELBASE,256M)
+ *
+ *   When we process relocations, we cannot depend on the
+ *   existing equation for the __va()/__pa() translations:
+ *
+ * 	   __va(x) = (x)  - PHYSICAL_START + KERNELBASE
+ *
+ *   Where:
+ *   	 PHYSICAL_START = kernstart_addr = Physical address of _stext
+ *  	 KERNELBASE = Compiled virtual address of _stext.
+ *
+ *   This formula holds true iff, kernel load address is TLB page aligned.
+ *
+ *   In our case, we need to also account for the shift in the kernel Virtual 
+ *   address.
+ *
+ *   E.g.,
+ *
+ *   Let the kernel be loaded at 64MB and KERNELBASE be 0xc0000000 (same as PAGE_OFFSET).
+ *   In this case, we would be mapping 0 to 0xc0000000, and kernstart_addr = 64M
+ *
+ *   Now __va(1MB) = (0x100000) - (0x4000000) + 0xc0000000
+ *                 = 0xbc100000 , which is wrong.
+ *
+ *   Rather, it should be : 0xc0000000 + 0x100000 = 0xc0100000
+ *      	according to our mapping.
+ *
+ *   Hence we use the following formula to get the translations right:
+ *
+ * 	  __va(x) = (x) - [ PHYSICAL_START - Effective KERNELBASE ]
+ *
+ * 	  Where :
+ * 		PHYSICAL_START = dynamic load address.(kernstart_addr variable)
+ * 		Effective KERNELBASE = virtual_base =
+ * 				     = ALIGN_DOWN(KERNELBASE,256M) +
+ * 						MODULO(PHYSICAL_START,256M)
+ *
+ * 	To make the cost of __va() / __pa() more light weight, we introduce
+ * 	a new variable virt_phys_offset, which will hold :
+ *
+ * 	virt_phys_offset = Effective KERNELBASE - PHYSICAL_START
+ * 			 = ALIGN_DOWN(KERNELBASE,256M) - 
+ * 			 	ALIGN_DOWN(PHYSICALSTART,256M)
+ *
+ * 	Hence :
+ *
+ * 	__va(x) = x - PHYSICAL_START + Effective KERNELBASE
+ * 		= x + virt_phys_offset
+ *
+ * 		and
+ * 	__pa(x) = x + PHYSICAL_START - Effective KERNELBASE
+ * 		= x - virt_phys_offset
+ * 		
  * On non-Book-E PPC64 PAGE_OFFSET and MEMORY_START are constants so use
  * the other definitions for __va & __pa.
  */
 #ifdef CONFIG_BOOKE
-#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) - PHYSICAL_START + KERNELBASE))
-#define __pa(x) ((unsigned long)(x) + PHYSICAL_START - KERNELBASE)
+#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) + VIRT_PHYS_OFFSET))
+#define __pa(x) ((unsigned long)(x) - VIRT_PHYS_OFFSET)
 #else
 #define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) + PAGE_OFFSET - MEMORY_START))
 #define __pa(x) ((unsigned long)(x) - PAGE_OFFSET + MEMORY_START)
diff --git a/arch/powerpc/mm/init_32.c b/arch/powerpc/mm/init_32.c
index 161cefd..60a4e4e 100644
--- a/arch/powerpc/mm/init_32.c
+++ b/arch/powerpc/mm/init_32.c
@@ -65,6 +65,13 @@ phys_addr_t memstart_addr = (phys_addr_t)~0ull;
 EXPORT_SYMBOL(memstart_addr);
 phys_addr_t kernstart_addr;
 EXPORT_SYMBOL(kernstart_addr);
+
+#ifdef CONFIG_RELOCATABLE_PPC32
+/* Used in __va()/__pa() */
+long long virt_phys_offset;
+EXPORT_SYMBOL(virt_phys_offset);
+#endif
+
 phys_addr_t lowmem_end_addr;
 
 int boot_mapsize;


* [PATCH v4 5/7] [44x] Enable CONFIG_RELOCATABLE for PPC44x
  2011-12-09 11:43 [PATCH v4 0/7] Kdump support for PPC440x Suzuki K. Poulose
                   ` (3 preceding siblings ...)
  2011-12-09 11:47 ` [PATCH v4 4/7] [ppc] Define virtual-physical translations for RELOCATABLE Suzuki K. Poulose
@ 2011-12-09 11:48 ` Suzuki K. Poulose
  2011-12-09 11:48 ` [PATCH v4 6/7] [44x] Enable CRASH_DUMP for 440x Suzuki K. Poulose
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Suzuki K. Poulose @ 2011-12-09 11:48 UTC (permalink / raw)
  To: Josh Boyer, Benjamin Herrenschmidt
  Cc: Scott Wood, Josh Poimboeuf, linux ppc dev

The following patch adds relocatable kernel support - based on processing
of dynamic relocations - for the PPC44x kernel.

We find the runtime address of _stext and relocate ourselves based
on the following calculation.

	virtual_base = ALIGN(KERNELBASE,256M) +
			MODULO(_stext.run,256M)

relocate() is called with the Effective Virtual Base Address (as
shown below)

            | Phys. Addr| Virt. Addr |
Page (256M) |------------------------|
Boundary    |           |            |
            |           |            |
            |           |            |
Kernel Load |___________|_ __ _ _ _ _|<- Effective
Addr(_stext)|           |      ^     |Virt. Base Addr
            |           |      |     |
            |           |      |     |
            |           |reloc_offset|
            |           |      |     |
            |           |      |     |
            |           |______v_____|<-(KERNELBASE)%256M
            |           |            |
            |           |            |
            |           |            |
Page(256M)  |-----------|------------|
Boundary    |           |            |

The virt_phys_offset is updated accordingly, i.e.,

	virt_phys_offset = effective kernel virtual base - kernstart_addr

I have tested the patches on 440x platforms only. However this should
work fine for PPC_47x also, as we only depend on the runtime address
and the current TLB XLAT entry for the startup code, which is available
in r25. I don't have access to a 47x board yet. So, it would be great if
somebody could test this on 47x.

Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Kumar Gala <galak@kernel.crashing.org>
Cc: Tony Breeds <tony@bakeyournoodle.com>
Cc: Josh Boyer <jwboyer@gmail.com>
Cc: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
---

 arch/powerpc/Kconfig           |    2 -
 arch/powerpc/kernel/head_44x.S |   95 +++++++++++++++++++++++++++++++++++++++-
 2 files changed, 94 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 6936cb0..90cd8d3 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -847,7 +847,7 @@ config DYNAMIC_MEMSTART
 
 config RELOCATABLE
 	bool "Build a relocatable kernel (EXPERIMENTAL)"
-	depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM
+	depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && 44x
 	select NONSTATIC_KERNEL
 	help
 	  This builds a kernel image that is capable of running at the
diff --git a/arch/powerpc/kernel/head_44x.S b/arch/powerpc/kernel/head_44x.S
index 62a4cd5..213ed31 100644
--- a/arch/powerpc/kernel/head_44x.S
+++ b/arch/powerpc/kernel/head_44x.S
@@ -64,6 +64,35 @@ _ENTRY(_start);
 	mr	r31,r3		/* save device tree ptr */
 	li	r24,0		/* CPU number */
 
+#ifdef CONFIG_RELOCATABLE
+/*
+ * Relocate ourselves to the current runtime address.
+ * This is called only by the Boot CPU.
+ * "relocate" is called with our current runtime virutal
+ * address.
+ * r21 will be loaded with the physical runtime address of _stext
+ */
+	bl	0f				/* Get our runtime address */
+0:	mflr	r21				/* Make it accessible */
+	addis	r21,r21,(_stext - 0b)@ha
+	addi	r21,r21,(_stext - 0b)@l 	/* Get our current runtime base */
+
+	/*
+	 * We have the runtime (virtual) address of our base.
+	 * We calculate our shift of offset from a 256M page.
+	 * We could map the 256M page we belong to at PAGE_OFFSET and
+	 * get going from there.
+	 */
+	lis	r4,KERNELBASE@h
+	ori	r4,r4,KERNELBASE@l
+	rlwinm	r6,r21,0,4,31			/* r6 = PHYS_START % 256M */
+	rlwinm	r5,r4,0,4,31			/* r5 = KERNELBASE % 256M */
+	subf	r3,r5,r6			/* r3 = r6 - r5 */
+	add	r3,r4,r3			/* Required Virtual Address */
+
+	bl	relocate
+#endif
+
 	bl	init_cpu_state
 
 	/*
@@ -86,7 +115,64 @@ _ENTRY(_start);
 
 	bl	early_init
 
-#ifdef CONFIG_DYNAMIC_MEMSTART
+#ifdef CONFIG_RELOCATABLE
+	/*
+	 * Relocatable kernel support based on processing of dynamic
+	 * relocation entries.
+	 *
+	 * r25 will contain RPN/ERPN for the start address of memory
+	 * r21 will contain the current offset of _stext
+	 */
+	lis	r3,kernstart_addr@ha
+	la	r3,kernstart_addr@l(r3)
+
+	/*
+	 * Compute the kernstart_addr.
+	 * kernstart_addr => (r6,r8)
+	 * kernstart_addr & ~0xfffffff => (r6,r7)
+	 */
+	rlwinm	r6,r25,0,28,31	/* ERPN. Bits 32-35 of Address */
+	rlwinm	r7,r25,0,0,3	/* RPN - assuming 256 MB page size */
+	rlwinm	r8,r21,0,4,31	/* r8 = (_stext & 0xfffffff) */
+	or	r8,r7,r8	/* Compute the lower 32bit of kernstart_addr */
+
+	/* Store kernstart_addr */
+	stw	r6,0(r3)	/* higher 32bit */
+	stw	r8,4(r3)	/* lower 32bit  */
+
+	/*
+	 * Compute the virt_phys_offset :
+	 * virt_phys_offset = stext.run - kernstart_addr
+	 *
+	 * stext.run = (KERNELBASE & ~0xfffffff) + (kernstart_addr & 0xfffffff)
+	 * When we relocate, we have :
+	 *
+	 *	(kernstart_addr & 0xfffffff) = (stext.run & 0xfffffff)
+	 *
+	 * hence:
+	 *  virt_phys_offset = (KERNELBASE & ~0xfffffff) - (kernstart_addr & ~0xfffffff)
+	 *
+	 */
+
+	/* KERNELBASE&~0xfffffff => (r4,r5) */
+	li	r4, 0		/* higher 32bit */
+	lis	r5,KERNELBASE@h
+	rlwinm	r5,r5,0,0,3	/* Align to 256M, lower 32bit */
+
+	/*
+	 * 64bit subtraction.
+	 */
+	subfc	r5,r7,r5
+	subfe	r4,r6,r4
+
+	/* Store virt_phys_offset */
+	lis	r3,virt_phys_offset@ha
+	la	r3,virt_phys_offset@l(r3)
+
+	stw	r4,0(r3)
+	stw	r5,4(r3)
+
+#elif defined(CONFIG_DYNAMIC_MEMSTART)
 	/*
 	 * Mapping based, page aligned dynamic kernel loading.
 	 *
@@ -802,7 +888,12 @@ skpinv:	addi	r4,r4,1				/* Increment */
 /*
  * Configure and load pinned entry into TLB slot 63.
  */
-#ifdef CONFIG_DYNAMIC_MEMSTART
+#ifdef CONFIG_NONSTATIC_KERNEL
+	/*
+	 * In case of a NONSTATIC_KERNEL we reuse the TLB XLAT
+	 * entries of the initial mapping set by the boot loader.
+	 * The XLAT entry is stored in r25
+	 */
 
 	/* Read the XLAT entry for our current mapping */
 	tlbre	r25,r23,PPC44x_TLB_XLAT


* [PATCH v4 6/7] [44x] Enable CRASH_DUMP for 440x
  2011-12-09 11:43 [PATCH v4 0/7] Kdump support for PPC440x Suzuki K. Poulose
                   ` (4 preceding siblings ...)
  2011-12-09 11:48 ` [PATCH v4 5/7] [44x] Enable CONFIG_RELOCATABLE for PPC44x Suzuki K. Poulose
@ 2011-12-09 11:48 ` Suzuki K. Poulose
  2011-12-09 11:48 ` [PATCH v4 7/7] [boot] Change the load address for the wrapper to fit the kernel Suzuki K. Poulose
  2011-12-10  5:01 ` [PATCH v4 0/7] Kdump support for PPC440x Suzuki Poulose
  7 siblings, 0 replies; 14+ messages in thread
From: Suzuki K. Poulose @ 2011-12-09 11:48 UTC (permalink / raw)
  To: Josh Boyer, Benjamin Herrenschmidt
  Cc: Scott Wood, Josh Poimboeuf, linux ppc dev

Now that we have a relocatable kernel, supporting CRASH_DUMP only requires
turning the switches on for UP machines.

We don't have kexec support on 47x yet. Enabling SMP support would be done
as part of enabling the PPC_47x support.


Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
Cc: Josh Boyer <jwboyer@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
---

 arch/powerpc/Kconfig |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 90cd8d3..d612943 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -363,8 +363,8 @@ config KEXEC
 
 config CRASH_DUMP
 	bool "Build a kdump crash kernel"
-	depends on PPC64 || 6xx || FSL_BOOKE
-	select RELOCATABLE if PPC64
+	depends on PPC64 || 6xx || FSL_BOOKE || (44x && !SMP && !PPC_47x)
+	select RELOCATABLE if PPC64 || 44x
 	select DYNAMIC_MEMSTART if FSL_BOOKE
 	help
 	  Build a kernel suitable for use as a kdump capture kernel.


* [PATCH v4 7/7] [boot] Change the load address for the wrapper to fit the kernel
  2011-12-09 11:43 [PATCH v4 0/7] Kdump support for PPC440x Suzuki K. Poulose
                   ` (5 preceding siblings ...)
  2011-12-09 11:48 ` [PATCH v4 6/7] [44x] Enable CRASH_DUMP for 440x Suzuki K. Poulose
@ 2011-12-09 11:48 ` Suzuki K. Poulose
  2011-12-10  5:01 ` [PATCH v4 0/7] Kdump support for PPC440x Suzuki Poulose
  7 siblings, 0 replies; 14+ messages in thread
From: Suzuki K. Poulose @ 2011-12-09 11:48 UTC (permalink / raw)
  To: Josh Boyer, Benjamin Herrenschmidt
  Cc: Scott Wood, Josh Poimboeuf, linux ppc dev

The wrapper code which uncompresses the kernel in case of a 'ppc' boot
is by default loaded at 0x00400000, and the kernel will be uncompressed
to fit the region 0-0x00400000. But with dynamic relocations, the size
of the kernel may exceed 0x00400000 (4M). This would cause the uncompressed
kernel to overlap the boot wrapper, causing the boot to fail.

The message looks like :


   zImage starting: loaded at 0x00400000 (sp: 0x0065ffb0)
   Allocating 0x5ce650 bytes for kernel ...
   Insufficient memory for kernel at address 0! (_start=00400000, uncompressed size=00591a20)

This patch shifts the load address of the boot wrapper code to the next
higher MB boundary, according to the size of the uncompressed vmlinux.
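
The rounding itself is a plain round-up to the next 1MB boundary. With the
uncompressed size from the failure message above (0x591a20), a small C sketch
of the arithmetic (illustrative only; the wrapper does this in shell) gives
the new link address:

#include <stdio.h>

int main(void)
{
	unsigned long strip_size   = 0x591a20;	/* uncompressed size from the log */
	unsigned long link_address = 0x400000;	/* default 'ppc' wrapper load address */

	/* Round the kernel size up to the next 1MB boundary, as the wrapper does */
	unsigned long round_size = (strip_size + 0xfffff) & 0xfff00000;

	if (link_address < strip_size)
		link_address = round_size;

	printf("link_address = 0x%lx\n", link_address);	/* prints 0x600000 */
	return 0;
}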

Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
---

 arch/powerpc/boot/wrapper |   20 ++++++++++++++++++++
 1 files changed, 20 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/boot/wrapper b/arch/powerpc/boot/wrapper
index 14cd4bc..4d625cd 100755
--- a/arch/powerpc/boot/wrapper
+++ b/arch/powerpc/boot/wrapper
@@ -257,6 +257,8 @@ vmz="$tmpdir/`basename \"$kernel\"`.$ext"
 if [ -z "$cacheit" -o ! -f "$vmz$gzip" -o "$vmz$gzip" -ot "$kernel" ]; then
     ${CROSS}objcopy $objflags "$kernel" "$vmz.$$"
 
+    strip_size=$(stat -c %s $vmz.$$)
+
     if [ -n "$gzip" ]; then
         gzip -n -f -9 "$vmz.$$"
     fi
@@ -266,6 +268,24 @@ if [ -z "$cacheit" -o ! -f "$vmz$gzip" -o "$vmz$gzip" -ot "$kernel" ]; then
     else
 	vmz="$vmz.$$"
     fi
+else
+    # Calculate the vmlinux.strip size
+    ${CROSS}objcopy $objflags "$kernel" "$vmz.$$"
+    strip_size=$(stat -c %s $vmz.$$)
+    rm -f $vmz.$$
+fi
+
+# Round the size to next higher MB limit
+round_size=$(((strip_size + 0xfffff) & 0xfff00000))
+
+round_size=0x$(printf "%x" $round_size)
+link_addr=$(printf "%d" $link_address)
+
+if [ $link_addr -lt $strip_size ]; then
+    echo "WARN: Uncompressed kernel size(0x$(printf "%x\n" $strip_size))" \
+		" exceeds the address of the wrapper($link_address)"
+    echo "WARN: Fixing the link_address to ($round_size))"
+    link_address=$round_size
 fi
 
 vmz="$vmz$gzip"


* Re: [PATCH v4 3/7] [ppc] Process dynamic relocations for kernel
  2011-12-09 11:47 ` [PATCH v4 3/7] [ppc] Process dynamic relocations for kernel Suzuki K. Poulose
@ 2011-12-09 13:40   ` Josh Boyer
  2011-12-10  2:35     ` Suzuki Poulose
  2011-12-10  2:37   ` [UPDATED] " Suzuki K. Poulose
  1 sibling, 1 reply; 14+ messages in thread
From: Josh Boyer @ 2011-12-09 13:40 UTC (permalink / raw)
  To: Suzuki K. Poulose; +Cc: Scott Wood, linux ppc dev, Josh Poimboeuf

On Fri, Dec 9, 2011 at 6:47 AM, Suzuki K. Poulose <suzuki@in.ibm.com> wrote:
>
> Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
> Signed-off-by: Josh Poimboeuf <jpoimboe@linux.vnet.ibm.com>
> Cc: Paul Mackerras <paulus@samba.org>
> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> Cc: Alan Modra <amodra@au1.ibm.com>
> Cc: Kumar Gala <galak@kernel.crashing.org>
> Cc: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
> ---
>
>  arch/powerpc/Kconfig              |   42 ++++++++++++++++++++++---------------
>  arch/powerpc/Makefile             |    6 +++--
>  arch/powerpc/kernel/Makefile      |    2 ++
>  arch/powerpc/kernel/vmlinux.lds.S |    8 ++++++-
>  arch/powerpc/relocs_check.pl      |    7 ++++++
>  5 files changed, 44 insertions(+), 21 deletions(-)

You're missing the whole reloc_32.S file in this patch.  Forget to do a git-add?

Can you resend just this patch with that fixed up?

josh


* Re: [PATCH v4 3/7] [ppc] Process dynamic relocations for kernel
  2011-12-09 13:40   ` Josh Boyer
@ 2011-12-10  2:35     ` Suzuki Poulose
  0 siblings, 0 replies; 14+ messages in thread
From: Suzuki Poulose @ 2011-12-10  2:35 UTC (permalink / raw)
  To: Josh Boyer; +Cc: Scott Wood, linux ppc dev, Josh Poimboeuf

On 12/09/11 19:10, Josh Boyer wrote:
> On Fri, Dec 9, 2011 at 6:47 AM, Suzuki K. Poulose<suzuki@in.ibm.com>  wrote:
>>
>> Signed-off-by: Suzuki K. Poulose<suzuki@in.ibm.com>
>> Signed-off-by: Josh Poimboeuf<jpoimboe@linux.vnet.ibm.com>
>> Cc: Paul Mackerras<paulus@samba.org>
>> Cc: Benjamin Herrenschmidt<benh@kernel.crashing.org>
>> Cc: Alan Modra<amodra@au1.ibm.com>
>> Cc: Kumar Gala<galak@kernel.crashing.org>
>> Cc: linuxppc-dev<linuxppc-dev@lists.ozlabs.org>
>> ---
>>
>>   arch/powerpc/Kconfig              |   42 ++++++++++++++++++++++---------------
>>   arch/powerpc/Makefile             |    6 +++--
>>   arch/powerpc/kernel/Makefile      |    2 ++
>>   arch/powerpc/kernel/vmlinux.lds.S |    8 ++++++-
>>   arch/powerpc/relocs_check.pl      |    7 ++++++
>>   5 files changed, 44 insertions(+), 21 deletions(-)
>
> You're missing the whole reloc_32.S file in this patch.  Forget to do a git-add?
>
> Can you resend just this patch with that fixed up?

Yikes, missed that. Will send the updated one.

Thanks
Suzuki
>
> josh
>


* [UPDATED] [PATCH v4 3/7] [ppc] Process dynamic relocations for kernel
  2011-12-09 11:47 ` [PATCH v4 3/7] [ppc] Process dynamic relocations for kernel Suzuki K. Poulose
  2011-12-09 13:40   ` Josh Boyer
@ 2011-12-10  2:37   ` Suzuki K. Poulose
  2011-12-10 20:02     ` Segher Boessenkool
  1 sibling, 1 reply; 14+ messages in thread
From: Suzuki K. Poulose @ 2011-12-10  2:37 UTC (permalink / raw)
  To: Josh Boyer, Benjamin Herrenschmidt
  Cc: Scott Wood, Josh Poimboeuf, linux ppc dev

The following patch implements dynamic relocation processing for the
PPC32 kernel. relocate() accepts the target virtual address and relocates
the kernel image to that address.

Currently the following relocation types are handled :

	R_PPC_RELATIVE
	R_PPC_ADDR16_LO
	R_PPC_ADDR16_HI
	R_PPC_ADDR16_HA

The last three relocation types in the above list depend on the value of a
symbol, whose index is encoded in the relocation entry. Hence we need the
symbol table to process such relocations.

Note: The GNU ld for ppc32 produces buggy relocations for relocation types
that depend on symbols. The value of the symbols with STB_LOCAL scope
should be assumed to be zero. - Alan Modra

Changes since V3:
 * Updated relocation types for ppc in arch/powerpc/relocs_check.pl

Changes since v2:
  * Flush the modified instructions from the d-cache and invalidate the
    i-cache to reflect the processed instructions. (Reported by: Josh Poimboeuf)

Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@linux.vnet.ibm.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Alan Modra <amodra@au1.ibm.com>
Cc: Kumar Gala <galak@kernel.crashing.org>
Cc: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
---

 arch/powerpc/Kconfig              |   42 ++++----
 arch/powerpc/Makefile             |    6 +
 arch/powerpc/kernel/Makefile      |    2 
 arch/powerpc/kernel/reloc_32.S    |  207 +++++++++++++++++++++++++++++++++++++
 arch/powerpc/kernel/vmlinux.lds.S |    8 +
 arch/powerpc/relocs_check.pl      |    7 +
 6 files changed, 251 insertions(+), 21 deletions(-)
 create mode 100644 arch/powerpc/kernel/reloc_32.S

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 5eafe95..6936cb0 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -843,23 +843,31 @@ config DYNAMIC_MEMSTART
 	  load address. When this option is enabled, the compile time physical 
 	  address CONFIG_PHYSICAL_START is ignored.
 
-# Mapping based RELOCATABLE is moved to DYNAMIC_MEMSTART
-# config RELOCATABLE
-#	bool "Build a relocatable kernel (EXPERIMENTAL)"
-#	depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && (FSL_BOOKE || PPC_47x)
-#	help
-#	  This builds a kernel image that is capable of running at the
-#	  location the kernel is loaded at, without any alignment restrictions.
-#
-#	  One use is for the kexec on panic case where the recovery kernel
-#	  must live at a different physical address than the primary
-#	  kernel.
-#
-#	  Note: If CONFIG_RELOCATABLE=y, then the kernel runs from the address
-#	  it has been loaded at and the compile time physical addresses
-#	  CONFIG_PHYSICAL_START is ignored.  However CONFIG_PHYSICAL_START
-#	  setting can still be useful to bootwrappers that need to know the
-#	  load location of the kernel (eg. u-boot/mkimage).
+	  This option is overridden by RELOCATABLE.
+
+config RELOCATABLE
+	bool "Build a relocatable kernel (EXPERIMENTAL)"
+	depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM
+	select NONSTATIC_KERNEL
+	help
+	  This builds a kernel image that is capable of running at the
+	  location the kernel is loaded at, without any alignment restrictions.
+	  This feature is a superset of DYNAMIC_MEMSTART, and hence overrides 
+	  it.
+
+	  One use is for the kexec on panic case where the recovery kernel
+	  must live at a different physical address than the primary
+	  kernel.
+
+	  Note: If CONFIG_RELOCATABLE=y, then the kernel runs from the address
+	  it has been loaded at and the compile time physical addresses
+	  CONFIG_PHYSICAL_START is ignored.  However CONFIG_PHYSICAL_START
+	  setting can still be useful to bootwrappers that need to know the
+	  load address of the kernel (eg. u-boot/mkimage).
+
+config RELOCATABLE_PPC32
+	def_bool y
+	depends on PPC32 && RELOCATABLE
 
 config PAGE_OFFSET_BOOL
 	bool "Set custom page offset address"
diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index ffe4d88..b8b105c 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -63,9 +63,9 @@ override CC	+= -m$(CONFIG_WORD_SIZE)
 override AR	:= GNUTARGET=elf$(CONFIG_WORD_SIZE)-powerpc $(AR)
 endif
 
-LDFLAGS_vmlinux-yy := -Bstatic
-LDFLAGS_vmlinux-$(CONFIG_PPC64)$(CONFIG_RELOCATABLE) := -pie
-LDFLAGS_vmlinux	:= $(LDFLAGS_vmlinux-yy)
+LDFLAGS_vmlinux-y := -Bstatic
+LDFLAGS_vmlinux-$(CONFIG_RELOCATABLE) := -pie
+LDFLAGS_vmlinux	:= $(LDFLAGS_vmlinux-y)
 
 CFLAGS-$(CONFIG_PPC64)	:= -mminimal-toc -mtraceback=no -mcall-aixdesc
 CFLAGS-$(CONFIG_PPC32)	:= -ffixed-r2 -mmultiple
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index ce4f7f1..ee728e4 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -85,6 +85,8 @@ extra-$(CONFIG_FSL_BOOKE)	:= head_fsl_booke.o
 extra-$(CONFIG_8xx)		:= head_8xx.o
 extra-y				+= vmlinux.lds
 
+obj-$(CONFIG_RELOCATABLE_PPC32)	+= reloc_32.o
+
 obj-$(CONFIG_PPC32)		+= entry_32.o setup_32.o
 obj-$(CONFIG_PPC64)		+= dma-iommu.o iommu.o
 obj-$(CONFIG_KGDB)		+= kgdb.o
diff --git a/arch/powerpc/kernel/reloc_32.S b/arch/powerpc/kernel/reloc_32.S
new file mode 100644
index 0000000..a45438e
--- /dev/null
+++ b/arch/powerpc/kernel/reloc_32.S
@@ -0,0 +1,207 @@
+/*
+ * Code to process dynamic relocations for PPC32.
+ *
+ * Copyrights (C) IBM Corporation, 2011.
+ *	Author: Suzuki Poulose <suzuki@in.ibm.com>
+ *
+ *  - Based on ppc64 code - reloc_64.S
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation; either version
+ *  2 of the License, or (at your option) any later version.
+ */
+
+#include <asm/ppc_asm.h>
+
+/* Dynamic section table entry tags */
+DT_RELA = 7			/* Tag for Elf32_Rela section */
+DT_RELASZ = 8			/* Size of the Rela relocs */
+DT_RELAENT = 9			/* Size of one Rela reloc entry */
+
+STN_UNDEF = 0			/* Undefined symbol index */
+STB_LOCAL = 0			/* Local binding for the symbol */
+
+R_PPC_ADDR16_LO = 4		/* Lower half of (S+A) */
+R_PPC_ADDR16_HI = 5		/* Upper half of (S+A) */
+R_PPC_ADDR16_HA = 6		/* High Adjusted (S+A) */
+R_PPC_RELATIVE = 22
+
+/*
+ * r3 = desired final address
+ */
+
+_GLOBAL(relocate)
+
+	mflr	r0		/* Save our LR */
+	bl	0f		/* Find our current runtime address */
+0:	mflr	r12		/* Make it accessible */
+	mtlr	r0
+
+	lwz	r11, (p_dyn - 0b)(r12)
+	add	r11, r11, r12	/* runtime address of .dynamic section */
+	lwz	r9, (p_rela - 0b)(r12)
+	add	r9, r9, r12	/* runtime address of .rela.dyn section */
+	lwz	r10, (p_st - 0b)(r12)
+	add	r10, r10, r12	/* runtime address of _stext section */
+	lwz	r13, (p_sym - 0b)(r12)
+	add	r13, r13, r12	/* runtime address of .dynsym section */
+
+	/*
+	 * Scan the dynamic section for RELA, RELASZ entries
+	 */
+	li	r6, 0
+	li	r7, 0
+	li	r8, 0
+1:	lwz	r5, 0(r11)	/* ELF_Dyn.d_tag */
+	cmpwi	r5, 0		/* End of ELF_Dyn[] */
+	beq	eodyn
+	cmpwi	r5, DT_RELA
+	bne	relasz
+	lwz	r7, 4(r11)	/* r7 = rela.link */
+	b	skip
+relasz:
+	cmpwi	r5, DT_RELASZ
+	bne	relaent
+	lwz	r8, 4(r11)	/* r8 = Total Rela relocs size */
+	b	skip
+relaent:
+	cmpwi	r5, DT_RELAENT
+	bne	skip
+	lwz	r6, 4(r11)	/* r6 = Size of one Rela reloc */
+skip:
+	addi	r11, r11, 8
+	b	1b
+eodyn:				/* End of Dyn Table scan */
+
+	/* Check if we have found all the entries */
+	cmpwi	r7, 0
+	beq	done
+	cmpwi	r8, 0
+	beq	done
+	cmpwi	r6, 0
+	beq	done
+
+
+	/*
+	 * Work out the current offset from the link time address of .rela
+	 * section.
+	 *  cur_offset[r7] = rela.run[r9] - rela.link [r7]
+	 *  _stext.link[r12] = _stext.run[r10] - cur_offset[r7]
+	 *  final_offset[r3] = _stext.final[r3] - _stext.link[r12]
+	 */
+	subf	r7, r7, r9	/* cur_offset */
+	subf	r12, r7, r10
+	subf	r3, r12, r3	/* final_offset */
+
+	subf	r8, r6, r8	/* relasz -= relaent */
+	/*
+	 * Scan through the .rela table and process each entry
+	 * r9	- points to the current .rela table entry
+	 * r13	- points to the symbol table
+	 */
+
+	/*
+	 * Check if we have a relocation based on symbol
+	 * r5 will hold the value of the symbol.
+	 */
+applyrela:
+	lwz	r4, 4(r9)
+	srwi	r5, r4, 8		/* ELF32_R_SYM(r_info) */
+	cmpwi	r5, STN_UNDEF	/* sym == STN_UNDEF ? */
+	beq	get_type	/* value = 0 */
+	/* Find the value of the symbol at index(r5) */
+	slwi	r5, r5, 4		/* r5 = r5 * sizeof(Elf32_Sym) */
+	add	r12, r13, r5	/* r12 = &__dyn_sym[Index] */
+
+	/*
+	 * GNU ld has a bug: for dynamic relocs against
+	 * STB_LOCAL symbols, the value should be assumed
+	 * to be zero. - Alan Modra
+	 */
+	/* XXX: Do we need to check if we are using GNU ld ? */
+	lbz	r5, 12(r12)	/* r5 = dyn_sym[Index].st_info */
+	extrwi	r5, r5, 4, 24	/* r5 = ELF32_ST_BIND(r5) */
+	cmpwi	r5, STB_LOCAL	/* st_value = 0, ld bug */
+	beq	get_type	/* We have r5 = 0 */
+	lwz	r5, 4(r12)	/* r5 = __dyn_sym[Index].st_value */
+
+get_type:
+	/* r4 holds the relocation type */
+	extrwi	r4, r4, 8, 24	/* r4 = ((char*)r4)[3] */
+
+	/* R_PPC_RELATIVE */
+	cmpwi	r4, R_PPC_RELATIVE
+	bne	hi16
+	lwz	r4, 0(r9)	/* r_offset */
+	lwz	r0, 8(r9)	/* r_addend */
+	add	r0, r0, r3	/* final addend */
+	stwx	r0, r4, r7	/* memory[r4+r7] = (u32)r0 */
+	b	nxtrela		/* continue */
+
+	/* R_PPC_ADDR16_HI */
+hi16:
+	cmpwi	r4, R_PPC_ADDR16_HI
+	bne	ha16
+	lwz	r4, 0(r9)	/* r_offset */
+	lwz	r0, 8(r9)	/* r_addend */
+	add	r0, r0, r3
+	add	r0, r0, r5	/* r0 = (S+A+Offset) */
+	extrwi	r0, r0, 16, 0	/* r0 = (r0 >> 16) */
+	b	store_half
+
+	/* R_PPC_ADDR16_HA */
+ha16:
+	cmpwi	r4, R_PPC_ADDR16_HA
+	bne	lo16
+	lwz	r4, 0(r9)	/* r_offset */
+	lwz	r0, 8(r9)	/* r_addend */
+	add	r0, r0, r3
+	add	r0, r0, r5	/* r0 = (S+A+Offset) */
+	extrwi	r5, r0, 1, 16	/* Extract bit 16 */
+	extrwi	r0, r0, 16, 0	/* r0 = (r0 >> 16) */
+	add	r0, r0, r5	/* Add it to r0 */
+	b	store_half
+
+	/* R_PPC_ADDR16_LO */
+lo16:
+	cmpwi	r4, R_PPC_ADDR16_LO
+	bne	nxtrela
+	lwz	r4, 0(r9)	/* r_offset */
+	lwz	r0, 8(r9)	/* r_addend */
+	add	r0, r0, r3
+	add	r0, r0, r5	/* r0 = (S+A+Offset) */
+	extrwi	r0, r0, 16, 16	/* r0 &= 0xffff */
+	/* Fall through to store_half */
+
+	/* Store half word */
+store_half:
+	sthx	r0, r4, r7	/* memory[r4+r7] = (u16)r0 */
+
+nxtrela:
+	/*
+	 * We have to flush the modified instructions to the
+	 * main storage from the d-cache. And also, invalidate the
+	 * cached instructions in i-cache which has been modified.
+	 *
+	 * We delay the msync / isync operation till the end, since
+	 * we won't be executing the modified instructions until
+	 * we return from here.
+	 */
+	dcbst	r4,r7
+	icbi	r4,r7
+	cmpwi	r8, 0		/* relasz = 0 ? */
+	ble	done
+	add	r9, r9, r6	/* move to next entry in the .rela table */
+	subf	r8, r6, r8	/* relasz -= relaent */
+	b	applyrela
+
+done:
+	msync			/* Wait for the flush to finish */
+	isync			/* Discard prefetched instructions */
+	blr
+
+p_dyn:		.long	__dynamic_start - 0b
+p_rela:		.long	__rela_dyn_start - 0b
+p_sym:		.long	__dynamic_symtab - 0b
+p_st:		.long	_stext - 0b
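
[Editorial note, not part of the patch: the assembly above is compact, so a rough C
equivalent of what relocate() does may help. The sketch below is illustrative only;
the function name apply_rela and the run_delta/final_delta parameters are invented
for clarity (run_delta corresponds to r7, the "cur_offset"; final_delta to the
adjusted r3, the "final_offset"), and the kernel only ever runs the assembly.]

#include <elf.h>	/* Elf32_Dyn, Elf32_Rela, Elf32_Sym, DT_*, R_PPC_* */
#include <stddef.h>
#include <stdint.h>

/* dyn and dynsym are assumed to already be runtime pointers, as the
 * assembly arranges by adding its own runtime offset (r12). */
static void apply_rela(Elf32_Dyn *dyn, Elf32_Sym *dynsym,
		       uint32_t run_delta, uint32_t final_delta)
{
	Elf32_Rela *rela = NULL;
	uint32_t relasz = 0, relaent = 0;

	/* Scan the .dynamic section for the RELA, RELASZ and RELAENT tags */
	for (; dyn->d_tag != DT_NULL; dyn++) {
		if (dyn->d_tag == DT_RELA)
			rela = (Elf32_Rela *)(dyn->d_un.d_ptr + run_delta);
		else if (dyn->d_tag == DT_RELASZ)
			relasz = dyn->d_un.d_val;
		else if (dyn->d_tag == DT_RELAENT)
			relaent = dyn->d_un.d_val;
	}
	if (!rela || !relasz || !relaent)
		return;

	for (; relasz >= relaent; relasz -= relaent, rela++) {
		uint32_t sym  = ELF32_R_SYM(rela->r_info);
		uint32_t type = ELF32_R_TYPE(rela->r_info);
		uint32_t s = 0, v;
		uint8_t *where;

		/* STN_UNDEF and (per the ld bug noted above) STB_LOCAL
		 * dynsym entries contribute a symbol value of zero. */
		if (sym != STN_UNDEF &&
		    ELF32_ST_BIND(dynsym[sym].st_info) != STB_LOCAL)
			s = dynsym[sym].st_value;

		/* Patch the copy we are running from (run_delta) with a
		 * value that is valid at the final address (final_delta). */
		where = (uint8_t *)(rela->r_offset + run_delta);
		v = s + rela->r_addend + final_delta;	/* S + A + offset */

		switch (type) {
		case R_PPC_RELATIVE:
			*(uint32_t *)where = rela->r_addend + final_delta;
			break;
		case R_PPC_ADDR16_HI:
			*(uint16_t *)where = v >> 16;
			break;
		case R_PPC_ADDR16_HA:	/* "high adjusted" carries bit 16 */
			*(uint16_t *)where = (v + 0x8000) >> 16;
			break;
		case R_PPC_ADDR16_LO:
			*(uint16_t *)where = v & 0xffff;
			break;
		}
		/* The assembly also flushes each patched word (dcbst/icbi) here. */
	}
}

[The R_PPC_ADDR16_HA case shows why the assembly extracts bit 16 and adds it: that
bit is the carry of (S + A + offset) + 0x8000 into the upper halfword.]
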
diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
index 920276c..710a540 100644
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -170,7 +170,13 @@ SECTIONS
 	}
 #ifdef CONFIG_RELOCATABLE
 	. = ALIGN(8);
-	.dynsym : AT(ADDR(.dynsym) - LOAD_OFFSET) { *(.dynsym) }
+	.dynsym : AT(ADDR(.dynsym) - LOAD_OFFSET)
+	{
+#ifdef CONFIG_RELOCATABLE_PPC32
+		__dynamic_symtab = .;
+#endif
+		*(.dynsym)
+	}
 	.dynstr : AT(ADDR(.dynstr) - LOAD_OFFSET) { *(.dynstr) }
 	.dynamic : AT(ADDR(.dynamic) - LOAD_OFFSET)
 	{
diff --git a/arch/powerpc/relocs_check.pl b/arch/powerpc/relocs_check.pl
index d257109..71aeb03 100755
--- a/arch/powerpc/relocs_check.pl
+++ b/arch/powerpc/relocs_check.pl
@@ -32,8 +32,15 @@ while (<FD>) {
 	next if (!/\s+R_/);
 
 	# These relocations are okay
+	# On PPC64:
+	# 	R_PPC64_RELATIVE, R_PPC64_NONE, R_PPC64_ADDR64
+	# On PPC:
+	# 	R_PPC_RELATIVE, R_PPC_ADDR16_HI,
+	# 	R_PPC_ADDR16_HA, R_PPC_ADDR16_LO
 	next if (/R_PPC64_RELATIVE/ or /R_PPC64_NONE/ or
 	         /R_PPC64_ADDR64\s+mach_/);
+	next if (/R_PPC_ADDR16_LO/ or /R_PPC_ADDR16_HI/ or
+		 /R_PPC_ADDR16_HA/ or /R_PPC_RELATIVE/);
 
 	# If we see this type of relocation it's an indication that
 	# we /may/ be using an old version of binutils.


* Re: [PATCH v4 0/7] Kudmp support for PPC440x
  2011-12-09 11:43 [PATCH v4 0/7] Kudmp support for PPC440x Suzuki K. Poulose
                   ` (6 preceding siblings ...)
  2011-12-09 11:48 ` [PATCH v4 7/7] [boot] Change the load address for the wrapper to fit the kernel Suzuki K. Poulose
@ 2011-12-10  5:01 ` Suzuki Poulose
  7 siblings, 0 replies; 14+ messages in thread
From: Suzuki Poulose @ 2011-12-10  5:01 UTC (permalink / raw)
  To: Josh Boyer, Benjamin Herrenschmidt
  Cc: Scott Wood, Josh Poimboeuf, linux ppc dev

On 12/09/11 17:13, Suzuki K. Poulose wrote:
> The following series implements:
>
>   * Generic framework for relocatable kernel on PPC32, based on processing
>     the dynamic relocation entries.
>   * Relocatable kernel support for 44x
>   * Kdump support for 44x. Doesn't support 47x yet, as the kexec
>     support is missing.
>
> Changes from V3:
>
>   * Added a new config - NONSTATIC_KERNEL - to group different types of relocatable
>     kernel. (Suggested by: Josh Boyer)
>   * Added supported ppc relocation types in relocs_check.pl for verifying the
>     relocations used in the kernel.
>
> Changes from V2:
>
>   * Renamed old style mapping based RELOCATABLE on BookE to DYNAMIC_MEMSTART.
>     Suggested by: Scott Wood
>   * Added support for DYNAMIC_MEMSTART on PPC440x
>   * Reverted back to RELOCATABLE and RELOCATABLE_PPC32 from RELOCATABLE_PPC32_PIE
>     for relocation based on processing dynamic reloc entries for PPC32.
>   * Ensure the modified instructions are flushed and the i-cache invalidated at
>     the end of relocate(). - Reported by : Josh Poimboeuf
>
> Changes from V1:
>
>   * Splitted patch 'Enable CONFIG_RELOCATABLE for PPC44x' to move some
>     of the generic bits to a new patch.
>   * Renamed RELOCATABLE_PPC32 to RELOCATABLE_PPC32_PIE and provided options to
>     retained old style mapping. (Suggested by: Scott Wood)
>   * Added support for avoiding the overlapping of uncompressed kernel
>     with boot wrapper for PPC images.
>
> The patches are based on -next tree for ppc.
>
> I have tested these patches on Ebony, Sequoia and Virtex(QEMU Emulated).
> I haven't tested the RELOCATABLE bits on PPC_47x yet, as I don't have access
> to one. However, RELOCATABLE should work fine there as we only depend on the
> runtime address and the XLAT entry setup by the boot loader. It would be great if
> somebody could test these patches on a 47x.
>
>
> ---

Updated diffstats:

Suzuki K. Poulose (7):
       [boot] Change the load address for the wrapper to fit the kernel
       [44x] Enable CRASH_DUMP for 440x
       [44x] Enable CONFIG_RELOCATABLE for PPC44x
       [ppc] Define virtual-physical translations for RELOCATABLE
       [ppc] Process dynamic relocations for kernel
       [44x] Enable DYNAMIC_MEMSTART for 440x
       [booke] Rename mapping based RELOCATABLE to DYNAMIC_MEMSTART for BookE


  arch/powerpc/Kconfig                          |   46 +++++-
  arch/powerpc/Makefile                         |    6 -
  arch/powerpc/boot/wrapper                     |   20 ++
  arch/powerpc/configs/44x/iss476-smp_defconfig |    2
  arch/powerpc/include/asm/kdump.h              |    4
  arch/powerpc/include/asm/page.h               |   89 ++++++++++-
  arch/powerpc/kernel/Makefile                  |    2
  arch/powerpc/kernel/crash_dump.c              |    4
  arch/powerpc/kernel/head_44x.S                |  105 +++++++++++++
  arch/powerpc/kernel/head_fsl_booke.S          |    2
  arch/powerpc/kernel/machine_kexec.c           |    2
  arch/powerpc/kernel/prom_init.c               |    2
  arch/powerpc/kernel/reloc_32.S                |  207 +++++++++++++++++++++++++
  arch/powerpc/kernel/vmlinux.lds.S             |    8 +
  arch/powerpc/mm/44x_mmu.c                     |    2
  arch/powerpc/mm/init_32.c                     |    7 +
  arch/powerpc/relocs_check.pl                  |    7 +
  17 files changed, 489 insertions(+), 26 deletions(-)
  create mode 100644 arch/powerpc/kernel/reloc_32.S


Thanks
Suzuki


* Re: [UPDATED] [PATCH v4 3/7] [ppc] Process dynamic relocations for kernel
  2011-12-10  2:37   ` [UPDATED] " Suzuki K. Poulose
@ 2011-12-10 20:02     ` Segher Boessenkool
  2011-12-13  2:24       ` Suzuki Poulose
  0 siblings, 1 reply; 14+ messages in thread
From: Segher Boessenkool @ 2011-12-10 20:02 UTC (permalink / raw)
  To: Suzuki K. Poulose; +Cc: Scott Wood, linux ppc dev, Josh Poimboeuf

Hi Suzuki,

Looks quite good, a few comments...

> +get_type:
> +	/* r4 holds the relocation type */
> +	extrwi	r4, r4, 8, 24	/* r4 = ((char*)r4)[3] */

This comment is confusing (only makes sense together with the
lwz a long way up).

> +nxtrela:
> +	/*
> +	 * We have to flush the modified instructions to the
> +	 * main storage from the d-cache. And also, invalidate the
> +	 * cached instructions in i-cache which has been modified.
> +	 *
> +	 * We delay the msync / isync operation till the end, since
> +	 * we won't be executing the modified instructions until
> +	 * we return from here.
> +	 */
> +	dcbst	r4,r7
> +	icbi	r4,r7

You still need a sync between these two.  Without it, the icbi can
complete before the dcbst for the same address does, which leaves
room for an instruction fetch from that address to get old data.

> +	cmpwi	r8, 0		/* relasz = 0 ? */
> +	ble	done
> +	add	r9, r9, r6	/* move to next entry in the .rela table */
> +	subf	r8, r6, r8	/* relasz -= relaent */
> +	b	applyrela
> +
> +done:
> +	msync			/* Wait for the flush to finish */

The instruction is called "sync".  msync is a BookE thing.
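
[Editorial note: taken together, these two remarks amount to the sequence sketched
below as GCC inline assembly, for illustration only. The helper names are
hypothetical; the actual change belongs in reloc_32.S itself.]

#include <stdint.h>

/* Per-entry flush, with the sync that orders dcbst before icbi. */
static inline void flush_patched(void *p)
{
	__asm__ __volatile__(
		"dcbst	0,%0\n\t"	/* push the modified word out of the d-cache */
		"sync\n\t"		/* ensure the dcbst completes before the icbi */
		"icbi	0,%0"		/* then invalidate the stale i-cache line */
		: : "r" (p) : "memory");
}

/* After the whole .rela table has been processed: "sync", not "msync". */
static inline void flush_done(void)
{
	__asm__ __volatile__("sync\n\tisync" : : : "memory");
}
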

>  	next if (/R_PPC64_RELATIVE/ or /R_PPC64_NONE/ or
>  	         /R_PPC64_ADDR64\s+mach_/);
> +	next if (/R_PPC_ADDR16_LO/ or /R_PPC_ADDR16_HI/ or
> +		 /R_PPC_ADDR16_HA/ or /R_PPC_RELATIVE/);

Nothing new, but these should probably have \b or \s or just
a space on each side.


Segher


* Re: [UPDATED] [PATCH v4 3/7] [ppc] Process dynamic relocations for kernel
  2011-12-10 20:02     ` Segher Boessenkool
@ 2011-12-13  2:24       ` Suzuki Poulose
  0 siblings, 0 replies; 14+ messages in thread
From: Suzuki Poulose @ 2011-12-13  2:24 UTC (permalink / raw)
  To: Segher Boessenkool; +Cc: Scott Wood, Josh Poimboeuf, linux ppc dev

On 12/11/11 01:32, Segher Boessenkool wrote:
> Hi Suzuki,
>
> Looks quite good, a few comments...
>
>> +get_type:
>> + /* r4 holds the relocation type */
>> + extrwi r4, r4, 8, 24 /* r4 = ((char*)r4)[3] */
>
> This comment is confusing (only makes sense together with the
> lwz a long way up).

Agreed, will fix it.
>
>> +nxtrela:
>> + /*
>> + * We have to flush the modified instructions to the
>> + * main storage from the d-cache. And also, invalidate the
>> + * cached instructions in i-cache which has been modified.
>> + *
>> + * We delay the msync / isync operation till the end, since
>> + * we won't be executing the modified instructions until
>> + * we return from here.
>> + */
>> + dcbst r4,r7
>> + icbi r4,r7
>
> You still need a sync between these two. Without it, the icbi can
> complete before the dcbst for the same address does, which leaves
> room for an instruction fetch from that address to get old data.
>
Ok.
>> + cmpwi r8, 0 /* relasz = 0 ? */
>> + ble done
>> + add r9, r9, r6 /* move to next entry in the .rela table */
>> + subf r8, r6, r8 /* relasz -= relaent */
>> + b applyrela
>> +
>> +done:
>> + msync /* Wait for the flush to finish */
>
> The instruction is called "sync". msync is a BookE thing.
>
>> next if (/R_PPC64_RELATIVE/ or /R_PPC64_NONE/ or
>> /R_PPC64_ADDR64\s+mach_/);
>> + next if (/R_PPC_ADDR16_LO/ or /R_PPC_ADDR16_HI/ or
>> + /R_PPC_ADDR16_HA/ or /R_PPC_RELATIVE/);
>
> Nothing new, but these should probably have \b or \s or just
> a space on each side.
Will fix this too. I will also add R_PPC_NONE to the list
of valid relocations.

Thanks
Suzuki





Thread overview: 14+ messages
2011-12-09 11:43 [PATCH v4 0/7] Kudmp support for PPC440x Suzuki K. Poulose
2011-12-09 11:43 ` [PATCH v4 1/7] [booke] Rename mapping based RELOCATABLE to DYNAMIC_MEMSTART for BookE Suzuki K. Poulose
2011-12-09 11:47 ` [PATCH v4 2/7] [44x] Enable DYNAMIC_MEMSTART for 440x Suzuki K. Poulose
2011-12-09 11:47 ` [PATCH v4 3/7] [ppc] Process dynamic relocations for kernel Suzuki K. Poulose
2011-12-09 13:40   ` Josh Boyer
2011-12-10  2:35     ` Suzuki Poulose
2011-12-10  2:37   ` [UPDATED] " Suzuki K. Poulose
2011-12-10 20:02     ` Segher Boessenkool
2011-12-13  2:24       ` Suzuki Poulose
2011-12-09 11:47 ` [PATCH v4 4/7] [ppc] Define virtual-physical translations for RELOCATABLE Suzuki K. Poulose
2011-12-09 11:48 ` [PATCH v4 5/7] [44x] Enable CONFIG_RELOCATABLE for PPC44x Suzuki K. Poulose
2011-12-09 11:48 ` [PATCH v4 6/7] [44x] Enable CRASH_DUMP for 440x Suzuki K. Poulose
2011-12-09 11:48 ` [PATCH v4 7/7] [boot] Change the load address for the wrapper to fit the kernel Suzuki K. Poulose
2011-12-10  5:01 ` [PATCH v4 0/7] Kudmp support for PPC440x Suzuki Poulose
