* [PATCH 0/5] Relocatable kernel support for PPC64
@ 2008-08-11 20:11 Mohan Kumar M
2008-08-11 20:12 ` [PATCH 1/5] Extract list of relocation offsets Mohan Kumar M
` (4 more replies)
0 siblings, 5 replies; 15+ messages in thread
From: Mohan Kumar M @ 2008-08-11 20:11 UTC (permalink / raw)
To: ppcdev; +Cc: paulus, miltonm
Hi,
The following five patches enable the "relocatable kernel" feature for
PPC64 kernels.
1. Extract list of relocation offsets.patch
2. Build files needed for relocation.patch
3. Apply relocation.patch
4. Relocation support.patch
5. Relocation support for kdump kernel.patch
Paul, can you please merge these patches into the powerpc git tree?
With this patchset, the vmcore image of a crashed system can be captured
using the same kernel binary.
The kernel is still not fully relocatable: it can run at either 0 or
32MB, depending on the address at which it is loaded. If it is loaded by
'kexec -p', it behaves as a relocatable kernel and runs at 32MB (even
though it is compiled for 0). If the same kernel is loaded by yaboot or
'kexec -l', it behaves as a normal kernel and runs at the compiled
address.
Differences from the previous patchset:
* The kdump kernel boot failure seen on some specific systems is now fixed.
* The kdump kernel boot failure with git tree kernels is now fixed.
Issues:
* The relocatable vmlinux image is built in arch/powerpc/boot as
vmlinux.reloc. It should instead be built in the top-level directory of
the kernel source as vmlinux.
Limitation:
* During kdump kernel boot, all secondary processors remain stuck,
whereas a yaboot boot brings all secondary processors online. Since the
relocatable kernel is used only as a kdump kernel, and a kdump kernel is
always booted with the "maxcpus=1" kernel parameter, this makes no
significant difference; it can be treated as a known issue.
(I am working on a fix for this issue.)
Tested on POWER5 systems.
Regards,
Mohan.
* [PATCH 1/5] Extract list of relocation offsets
2008-08-11 20:11 [PATCH 0/5] Relocatable kernel support for PPC64 Mohan Kumar M
@ 2008-08-11 20:12 ` Mohan Kumar M
2008-08-11 20:14 ` [PATCH 2/5] Build files needed for relocation Mohan Kumar M
` (3 subsequent siblings)
4 siblings, 0 replies; 15+ messages in thread
From: Mohan Kumar M @ 2008-08-11 20:12 UTC (permalink / raw)
To: ppcdev; +Cc: paulus, miltonm
Extract list of relocation offsets
Extract the list of offsets in the vmlinux file for which the relocation
delta has to be patched. Currently only the following relocation types
are considered: R_PPC64_ADDR16_HI, R_PPC64_TOC and R_PPC64_ADDR64.
The offsets are sorted by relocation type, and the resulting list is
appended to the normal vmlinux file by the build changes in patch 2
(relocation_build.patch).
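In binary mode (the mode the build in patch 2 uses), emit_relocs() below
writes the list as: an 8-byte zero stop marker, the R_PPC64_ADDR16_HI
offsets, an all-ones (0xffffffffffffffff) separator, then the
R_PPC64_ADDR64/R_PPC64_TOC offsets, each entry a big-endian 64-bit
link-time address. For illustration only, here is a minimal, hypothetical
reader for that stream (it is not part of the patch; the real consumer is
the reloc-apply wrapper of patches 2 and 3, which may well walk the list
from the end, since the stop marker is emitted first):

#include <stdint.h>
#include <stdio.h>

static uint64_t rd64(FILE *fp)
{
	unsigned char b[8];
	uint64_t v = 0;
	int i;

	if (fread(b, 1, 8, fp) != 8)
		return 0;		/* treat EOF like the stop marker */
	for (i = 0; i < 8; i++)
		v = (v << 8) | b[i];	/* entries are big-endian */
	return v;
}

int main(void)
{
	uint64_t off;
	int hi16 = 1;		/* the R_PPC64_ADDR16_HI group comes first */

	if (rd64(stdin) != 0)	/* expect the leading 8-byte zero marker */
		return 1;
	while ((off = rd64(stdin)) != 0) {
		if (off == 0xffffffffffffffffULL) {
			hi16 = 0;	/* rest are ADDR64/TOC entries */
			continue;
		}
		printf("%s entry: 0x%016llx\n",
		       hi16 ? "ADDR16_HI" : "ADDR64/TOC",
		       (unsigned long long)off);
	}
	return 0;
}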
Signed-off-by: Mohan Kumar M <mohan@in.ibm.com>
---
arch/powerpc/boot/relocs.c | 820 ++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 820 insertions(+), 0 deletions(-)
create mode 100644 arch/powerpc/boot/relocs.c
diff --git a/arch/powerpc/boot/relocs.c b/arch/powerpc/boot/relocs.c
new file mode 100644
index 0000000..aed90e5
--- /dev/null
+++ b/arch/powerpc/boot/relocs.c
@@ -0,0 +1,820 @@
+/*
+ * PowerPC version
+ *
+ * Adapted for 64bit PowerPC by Mohan Kumar M (mohan@in.ibm.com)
+ *
+ * Written by Eric W Biederman(ebiederm@xmission.com) and
+ * Vivek Goyal(vgoyal@redhat.com) for x86
+ *
+ * Extract list of offsets in the vmlinux file for which the
+ * relocation delta has to be patched.
+ * Currently only the following relocation types are
+ * considered: R_PPC64_ADDR16_HI, R_PPC64_TOC and R_PPC64_ADDR64
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+#include <stdio.h>
+#include <stdarg.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <string.h>
+#include <errno.h>
+#include <unistd.h>
+#include <elf.h>
+#include <byteswap.h>
+#define USE_BSD
+#include <endian.h>
+
+#define MAX_SHDRS 100
+static Elf64_Ehdr ehdr;
+static Elf64_Shdr shdr[MAX_SHDRS];
+static Elf64_Sym *symtab[MAX_SHDRS];
+static Elf64_Rela *reltaba[MAX_SHDRS];
+static char *strtab[MAX_SHDRS];
+static unsigned long reloc_count, reloc_idx;
+
+struct reloc_info {
+ unsigned int rel_type;
+ unsigned long long offset;
+};
+
+static struct reloc_info *relocs;
+
+/*
+ * The following symbols have been audited. Their values are constant and
+ * do not change if vmlinux is loaded at a different physical address than
+ * the one for which it was compiled. Don't warn the user about
+ * absolute relocations present w.r.t. these symbols.
+ */
+static const char* safe_abs_relocs[] = {
+ "__kernel_vsyscall",
+ "__kernel_rt_sigreturn",
+ "__kernel_sigreturn",
+ "SYSENTER_RETURN",
+};
+
+static int is_safe_abs_reloc(const char* sym_name)
+{
+ int i, array_size;
+
+ array_size = sizeof(safe_abs_relocs)/sizeof(char*);
+
+ for(i = 0; i < array_size; i++) {
+ if (!strcmp(sym_name, safe_abs_relocs[i]))
+ /* Match found */
+ return 1;
+ }
+ if (strncmp(sym_name, "__crc_", 6) == 0)
+ return 1;
+ return 0;
+}
+
+static void die(char *fmt, ...)
+{
+ va_list ap;
+ va_start(ap, fmt);
+ vfprintf(stderr, fmt, ap);
+ va_end(ap);
+ exit(1);
+}
+
+static const char *sym_type(unsigned type)
+{
+ static const char *type_name[] = {
+#define SYM_TYPE(X) [X] = #X
+ SYM_TYPE(STT_NOTYPE),
+ SYM_TYPE(STT_OBJECT),
+ SYM_TYPE(STT_FUNC),
+ SYM_TYPE(STT_SECTION),
+ SYM_TYPE(STT_FILE),
+ SYM_TYPE(STT_COMMON),
+ SYM_TYPE(STT_TLS),
+#undef SYM_TYPE
+ };
+ const char *s_type = "unknown sym type name";
+ if (type < sizeof(type_name)/sizeof(type_name[0])) {
+ s_type = type_name[type];
+ }
+ return s_type;
+}
+
+static const char *sym_bind(unsigned bind)
+{
+ static const char *bind_name[] = {
+#define SYM_BIND(X) [X] = #X
+ SYM_BIND(STB_LOCAL),
+ SYM_BIND(STB_GLOBAL),
+ SYM_BIND(STB_WEAK),
+#undef SYM_BIND
+ };
+ const char *s_bind = "unknown sym bind name";
+ if (bind < sizeof(bind_name)/sizeof(bind_name[0])) {
+ s_bind = bind_name[bind];
+ }
+ return s_bind;
+}
+
+static const char *sym_visibility(unsigned visibility)
+{
+ static const char *visibility_name[] = {
+#define SYM_VISIBILITY(X) [X] = #X
+ SYM_VISIBILITY(STV_DEFAULT),
+ SYM_VISIBILITY(STV_INTERNAL),
+ SYM_VISIBILITY(STV_HIDDEN),
+ SYM_VISIBILITY(STV_PROTECTED),
+#undef SYM_VISIBILITY
+ };
+ const char *name = "unknown sym visibility name";
+ if (visibility < sizeof(visibility_name)/sizeof(visibility_name[0])) {
+ name = visibility_name[visibility];
+ }
+ return name;
+}
+
+static const char *rel_type(unsigned type)
+{
+ static const char *type_name[] = {
+#define REL_TYPE(X) [X] = #X
+ REL_TYPE(R_PPC64_NONE),
+ REL_TYPE(R_PPC64_ADDR32),
+ REL_TYPE(R_PPC64_ADDR24),
+ REL_TYPE(R_PPC64_ADDR16),
+ REL_TYPE(R_PPC64_ADDR16_LO),
+ REL_TYPE(R_PPC64_ADDR16_HI),
+ REL_TYPE(R_PPC64_ADDR16_HA),
+ REL_TYPE(R_PPC64_ADDR14),
+ REL_TYPE(R_PPC64_ADDR14_BRTAKEN),
+ REL_TYPE(R_PPC64_ADDR14_BRNTAKEN),
+ REL_TYPE(R_PPC64_REL24),
+ REL_TYPE(R_PPC64_REL14),
+ REL_TYPE(R_PPC64_REL14_BRTAKEN),
+ REL_TYPE(R_PPC64_REL14_BRNTAKEN),
+ REL_TYPE(R_PPC64_GOT16),
+ REL_TYPE(R_PPC64_GOT16_LO),
+ REL_TYPE(R_PPC64_GOT16_HI),
+ REL_TYPE(R_PPC64_GOT16_HA),
+ REL_TYPE(R_PPC64_COPY),
+ REL_TYPE(R_PPC64_GLOB_DAT),
+ REL_TYPE(R_PPC64_JMP_SLOT),
+ REL_TYPE(R_PPC64_RELATIVE),
+ REL_TYPE(R_PPC64_UADDR32),
+ REL_TYPE(R_PPC64_UADDR16),
+ REL_TYPE(R_PPC64_REL32),
+ REL_TYPE(R_PPC64_PLT32),
+ REL_TYPE(R_PPC64_PLTREL32),
+ REL_TYPE(R_PPC64_PLT16_LO),
+ REL_TYPE(R_PPC64_PLT16_HI),
+ REL_TYPE(R_PPC64_PLT16_HA),
+ REL_TYPE(R_PPC64_SECTOFF),
+ REL_TYPE(R_PPC64_SECTOFF_LO),
+ REL_TYPE(R_PPC64_SECTOFF_HI),
+ REL_TYPE(R_PPC64_SECTOFF_HA),
+ REL_TYPE(R_PPC64_ADDR30),
+ REL_TYPE(R_PPC64_ADDR64),
+ REL_TYPE(R_PPC64_ADDR16_HIGHER),
+ REL_TYPE(R_PPC64_ADDR16_HIGHERA),
+ REL_TYPE(R_PPC64_ADDR16_HIGHEST),
+ REL_TYPE(R_PPC64_ADDR16_HIGHESTA),
+ REL_TYPE(R_PPC64_UADDR64),
+ REL_TYPE(R_PPC64_REL64),
+ REL_TYPE(R_PPC64_PLT64),
+ REL_TYPE(R_PPC64_PLTREL64),
+ REL_TYPE(R_PPC64_TOC16),
+ REL_TYPE(R_PPC64_TOC16_LO),
+ REL_TYPE(R_PPC64_TOC16_HI),
+ REL_TYPE(R_PPC64_TOC16_HA),
+ REL_TYPE(R_PPC64_TOC),
+ REL_TYPE(R_PPC64_PLTGOT16),
+ REL_TYPE(R_PPC64_PLTGOT16_LO),
+ REL_TYPE(R_PPC64_PLTGOT16_HI),
+ REL_TYPE(R_PPC64_PLTGOT16_HA),
+ REL_TYPE(R_PPC64_ADDR16_DS),
+ REL_TYPE(R_PPC64_ADDR16_LO_DS),
+ REL_TYPE(R_PPC64_GOT16_DS),
+ REL_TYPE(R_PPC64_GOT16_LO_DS),
+ REL_TYPE(R_PPC64_PLT16_LO_DS),
+ REL_TYPE(R_PPC64_SECTOFF_DS),
+ REL_TYPE(R_PPC64_SECTOFF_LO_DS),
+ REL_TYPE(R_PPC64_TOC16_DS),
+ REL_TYPE(R_PPC64_TOC16_LO_DS),
+ REL_TYPE(R_PPC64_PLTGOT16_DS),
+ REL_TYPE(R_PPC64_PLTGOT16_LO_DS),
+ REL_TYPE(R_PPC64_TLS),
+ REL_TYPE(R_PPC64_DTPMOD64),
+ REL_TYPE(R_PPC64_TPREL16),
+ REL_TYPE(R_PPC64_TPREL16_LO),
+ REL_TYPE(R_PPC64_TPREL16_HI),
+ REL_TYPE(R_PPC64_TPREL16_HA),
+ REL_TYPE(R_PPC64_TPREL64),
+ REL_TYPE(R_PPC64_DTPREL16),
+ REL_TYPE(R_PPC64_DTPREL16_LO),
+ REL_TYPE(R_PPC64_DTPREL16_HI),
+ REL_TYPE(R_PPC64_DTPREL16_HA),
+ REL_TYPE(R_PPC64_DTPREL64),
+ REL_TYPE(R_PPC64_GOT_TLSGD16),
+ REL_TYPE(R_PPC64_GOT_TLSGD16_LO),
+ REL_TYPE(R_PPC64_GOT_TLSGD16_HI),
+ REL_TYPE(R_PPC64_GOT_TLSGD16_HA),
+ REL_TYPE(R_PPC64_GOT_TLSLD16),
+ REL_TYPE(R_PPC64_GOT_TLSLD16_LO),
+ REL_TYPE(R_PPC64_GOT_TLSLD16_HI),
+ REL_TYPE(R_PPC64_GOT_TLSLD16_HA),
+ REL_TYPE(R_PPC64_GOT_TPREL16_DS),
+ REL_TYPE(R_PPC64_GOT_TPREL16_LO_DS),
+ REL_TYPE(R_PPC64_GOT_TPREL16_HI),
+ REL_TYPE(R_PPC64_GOT_TPREL16_HA),
+ REL_TYPE(R_PPC64_GOT_DTPREL16_DS),
+ REL_TYPE(R_PPC64_GOT_DTPREL16_LO_DS),
+ REL_TYPE(R_PPC64_GOT_DTPREL16_HI),
+ REL_TYPE(R_PPC64_GOT_DTPREL16_HA),
+ REL_TYPE(R_PPC64_TPREL16_DS),
+ REL_TYPE(R_PPC64_TPREL16_LO_DS),
+ REL_TYPE(R_PPC64_TPREL16_HIGHER),
+ REL_TYPE(R_PPC64_TPREL16_HIGHERA),
+ REL_TYPE(R_PPC64_TPREL16_HIGHEST),
+ REL_TYPE(R_PPC64_TPREL16_HIGHESTA),
+ REL_TYPE(R_PPC64_DTPREL16_DS),
+ REL_TYPE(R_PPC64_DTPREL16_LO_DS),
+ REL_TYPE(R_PPC64_DTPREL16_HIGHER),
+ REL_TYPE(R_PPC64_DTPREL16_HIGHERA),
+ REL_TYPE(R_PPC64_DTPREL16_HIGHEST),
+ REL_TYPE(R_PPC64_DTPREL16_HIGHESTA),
+#undef REL_TYPE
+ };
+ const char *name = "unknown type rel type name";
+ if (type < sizeof(type_name)/sizeof(type_name[0])) {
+ name = type_name[type];
+ }
+ return name;
+}
+
+static const char *sec_name(unsigned shndx)
+{
+ const char *sec_strtab;
+ const char *name;
+ sec_strtab = strtab[ehdr.e_shstrndx];
+ name = "<noname>";
+ if (shndx < ehdr.e_shnum) {
+ name = sec_strtab + shdr[shndx].sh_name;
+ }
+ else if (shndx == SHN_ABS) {
+ name = "ABSOLUTE";
+ }
+ else if (shndx == SHN_COMMON) {
+ name = "COMMON";
+ }
+ return name;
+}
+
+static const char *sym_name(const char *sym_strtab, Elf64_Sym *sym)
+{
+ const char *name;
+ name = "<noname>";
+ if (sym->st_name) {
+ name = sym_strtab + sym->st_name;
+ }
+ else {
+ name = sec_name(shdr[sym->st_shndx].sh_name);
+ }
+ return name;
+}
+
+
+
+#if BYTE_ORDER == BIG_ENDIAN
+#define be16_to_cpu(val) (val)
+#define be32_to_cpu(val) (val)
+#define be64_to_cpu(val) (val)
+#endif
+#if BYTE_ORDER == LITTLE_ENDIAN
+#define be16_to_cpu(val) bswap_16(val)
+#define be32_to_cpu(val) bswap_32(val)
+#define be64_to_cpu(val) bswap_64(val)
+#endif
+
+static uint16_t elf16_to_cpu(uint16_t val)
+{
+ return be16_to_cpu(val);
+}
+
+static uint32_t elf32_to_cpu(uint32_t val)
+{
+ return be32_to_cpu(val);
+}
+
+static uint64_t elf64_to_cpu(uint64_t val)
+{
+ return be64_to_cpu(val);
+}
+
+static void read_ehdr(FILE *fp)
+{
+ if (fread(&ehdr, sizeof(ehdr), 1, fp) != 1) {
+ die("Cannot read ELF header: %s\n",
+ strerror(errno));
+ }
+ if (memcmp(ehdr.e_ident, ELFMAG, 4) != 0) {
+ die("No ELF magic\n");
+ }
+ if (ehdr.e_ident[EI_CLASS] != ELFCLASS64) {
+ die("Not a 64 bit executable\n");
+ }
+ if (ehdr.e_ident[EI_DATA] != ELFDATA2MSB) {
+ die("Not a MSB ELF executable\n");
+ }
+ if (ehdr.e_ident[EI_VERSION] != EV_CURRENT) {
+ die("Unknown ELF version\n");
+ }
+ /* Convert the fields to native endian */
+ ehdr.e_type = elf16_to_cpu(ehdr.e_type);
+ ehdr.e_machine = elf16_to_cpu(ehdr.e_machine);
+ ehdr.e_version = elf32_to_cpu(ehdr.e_version);
+ ehdr.e_entry = elf64_to_cpu(ehdr.e_entry);
+ ehdr.e_phoff = elf64_to_cpu(ehdr.e_phoff);
+ ehdr.e_shoff = elf64_to_cpu(ehdr.e_shoff);
+ ehdr.e_flags = elf32_to_cpu(ehdr.e_flags);
+ ehdr.e_ehsize = elf16_to_cpu(ehdr.e_ehsize);
+ ehdr.e_phentsize = elf16_to_cpu(ehdr.e_phentsize);
+ ehdr.e_phnum = elf16_to_cpu(ehdr.e_phnum);
+ ehdr.e_shentsize = elf16_to_cpu(ehdr.e_shentsize);
+ ehdr.e_shnum = elf16_to_cpu(ehdr.e_shnum);
+ ehdr.e_shstrndx = elf16_to_cpu(ehdr.e_shstrndx);
+
+ if ((ehdr.e_type != ET_EXEC) && (ehdr.e_type != ET_DYN)) {
+ die("Unsupported ELF header type\n");
+ }
+ if (ehdr.e_machine != EM_PPC64) {
+ die("Not for PPC64\n");
+ }
+ if (ehdr.e_version != EV_CURRENT) {
+ die("Unknown ELF version\n");
+ }
+ if (ehdr.e_ehsize != sizeof(Elf64_Ehdr)) {
+ die("Bad Elf header size\n");
+ }
+ if (ehdr.e_phentsize != sizeof(Elf64_Phdr)) {
+ die("Bad program header entry\n");
+ }
+ if (ehdr.e_shentsize != sizeof(Elf64_Shdr)) {
+ die("Bad section header entry\n");
+ }
+ if (ehdr.e_shstrndx >= ehdr.e_shnum) {
+ die("String table index out of bounds\n");
+ }
+}
+
+static void read_shdrs(FILE *fp)
+{
+ int i;
+ if (ehdr.e_shnum > MAX_SHDRS) {
+ die("%d section headers supported: %d\n",
+ ehdr.e_shnum, MAX_SHDRS);
+ }
+ if (fseek(fp, ehdr.e_shoff, SEEK_SET) < 0) {
+ die("Seek to %d failed: %s\n",
+ ehdr.e_shoff, strerror(errno));
+ }
+ if (fread(&shdr, sizeof(shdr[0]), ehdr.e_shnum, fp) != ehdr.e_shnum) {
+ die("Cannot read ELF section headers: %s\n",
+ strerror(errno));
+ }
+ for(i = 0; i < ehdr.e_shnum; i++) {
+ shdr[i].sh_name = elf32_to_cpu(shdr[i].sh_name);
+ shdr[i].sh_type = elf32_to_cpu(shdr[i].sh_type);
+ shdr[i].sh_flags = elf64_to_cpu(shdr[i].sh_flags);
+ shdr[i].sh_addr = elf64_to_cpu(shdr[i].sh_addr);
+ shdr[i].sh_offset = elf64_to_cpu(shdr[i].sh_offset);
+ shdr[i].sh_size = elf64_to_cpu(shdr[i].sh_size);
+ shdr[i].sh_link = elf32_to_cpu(shdr[i].sh_link);
+ shdr[i].sh_info = elf32_to_cpu(shdr[i].sh_info);
+ shdr[i].sh_addralign = elf64_to_cpu(shdr[i].sh_addralign);
+ shdr[i].sh_entsize = elf64_to_cpu(shdr[i].sh_entsize);
+ }
+
+}
+
+static void read_strtabs(FILE *fp)
+{
+ int i;
+ for(i = 0; i < ehdr.e_shnum; i++) {
+ if (shdr[i].sh_type != SHT_STRTAB) {
+ continue;
+ }
+ strtab[i] = malloc(shdr[i].sh_size);
+ if (!strtab[i]) {
+ die("malloc of %d bytes for strtab failed\n",
+ shdr[i].sh_size);
+ }
+ if (fseek(fp, shdr[i].sh_offset, SEEK_SET) < 0) {
+ die("Seek to %d failed: %s\n",
+ shdr[i].sh_offset, strerror(errno));
+ }
+ if (fread(strtab[i], 1, shdr[i].sh_size, fp) != shdr[i].sh_size) {
+ die("Cannot read symbol table: %s\n",
+ strerror(errno));
+ }
+ }
+}
+
+static void read_symtabs(FILE *fp)
+{
+ int i,j;
+ for(i = 0; i < ehdr.e_shnum; i++) {
+ if (shdr[i].sh_type != SHT_SYMTAB) {
+ continue;
+ }
+ symtab[i] = malloc(shdr[i].sh_size);
+ if (!symtab[i]) {
+ die("malloc of %d bytes for symtab failed\n",
+ shdr[i].sh_size);
+ }
+ if (fseek(fp, shdr[i].sh_offset, SEEK_SET) < 0) {
+ die("Seek to %d failed: %s\n",
+ shdr[i].sh_offset, strerror(errno));
+ }
+ if (fread(symtab[i], 1, shdr[i].sh_size, fp) != shdr[i].sh_size) {
+ die("Cannot read symbol table: %s\n",
+ strerror(errno));
+ }
+ for(j = 0; j < shdr[i].sh_size/sizeof(symtab[i][0]); j++) {
+ symtab[i][j].st_name = elf32_to_cpu(symtab[i][j].st_name);
+ symtab[i][j].st_value = elf64_to_cpu(symtab[i][j].st_value);
+ symtab[i][j].st_size = elf64_to_cpu(symtab[i][j].st_size);
+ symtab[i][j].st_shndx = elf16_to_cpu(symtab[i][j].st_shndx);
+ }
+ }
+}
+
+
+static void read_relocs(FILE *fp)
+{
+ int i,j;
+ void *relp;
+
+ for(i = 0; i < ehdr.e_shnum; i++) {
+ if (shdr[i].sh_type != SHT_RELA)
+ continue;
+
+ reltaba[i] = malloc(shdr[i].sh_size);
+ if (!reltaba[i])
+ die("malloc of %d bytes for relocs failed\n",
+ shdr[i].sh_size);
+
+ relp = reltaba[i];
+
+ if (fseek(fp, shdr[i].sh_offset, SEEK_SET) < 0)
+ die("Seek to %d failed: %s\n",
+ shdr[i].sh_offset, strerror(errno));
+
+ if (fread(relp, 1, shdr[i].sh_size, fp) != shdr[i].sh_size)
+ die("Cannot read symbol table: %s\n",
+ strerror(errno));
+
+ for(j = 0; j < shdr[i].sh_size/sizeof(reltaba[0][0]); j++) {
+ reltaba[i][j].r_offset = elf64_to_cpu(reltaba[i][j].r_offset);
+ reltaba[i][j].r_info = elf64_to_cpu(reltaba[i][j].r_info);
+ reltaba[i][j].r_addend = elf64_to_cpu(reltaba[i][j].r_addend);
+ }
+ }
+}
+
+
+static void print_absolute_symbols(void)
+{
+ int i;
+ printf("Absolute symbols\n");
+ printf(" Num: Value Size Type Bind Visibility Name\n");
+ for(i = 0; i < ehdr.e_shnum; i++) {
+ char *sym_strtab;
+ Elf64_Sym *sh_symtab;
+ int j;
+ if (shdr[i].sh_type != SHT_SYMTAB) {
+ continue;
+ }
+ sh_symtab = symtab[i];
+ sym_strtab = strtab[shdr[i].sh_link];
+ for(j = 0; j < shdr[i].sh_size/sizeof(symtab[0][0]); j++) {
+ Elf64_Sym *sym;
+ const char *name;
+ sym = &symtab[i][j];
+ name = sym_name(sym_strtab, sym);
+ if (sym->st_shndx != SHN_ABS) {
+ continue;
+ }
+ printf("type:[%s]\n",
+ sym_type(ELF64_ST_TYPE(sym->st_info)));
+ printf("%5d %016llx %5d type:%s bind:%10s %12s %s\n", \
+ j, sym->st_value, (int)(sym->st_size), \
+ sym_type(ELF64_ST_TYPE(sym->st_info)), \
+ sym_bind(ELF64_ST_BIND(sym->st_info)), \
+ sym_visibility(ELF64_ST_VISIBILITY(sym->st_other)), \
+ name);
+ }
+ }
+ printf("\n");
+}
+
+static void print_absolute_relocs(void)
+{
+ int i, printed = 0;
+ int nr;
+
+ for(i = 0; i < ehdr.e_shnum; i++) {
+ char *sym_strtab;
+ Elf64_Sym *sh_symtab;
+ unsigned sec_applies, sec_symtab;
+ int j;
+ if (shdr[i].sh_type != SHT_RELA)
+ continue;
+
+ sec_symtab = shdr[i].sh_link;
+ sec_applies = shdr[i].sh_info;
+ if (!(shdr[sec_applies].sh_flags & SHF_ALLOC))
+ continue;
+
+ nr = shdr[i].sh_size/sizeof(reltaba[0][0]);
+
+ sh_symtab = symtab[sec_symtab];
+ sym_strtab = strtab[shdr[sec_symtab].sh_link];
+
+ for(j = 0; j < nr; j++) {
+ Elf64_Rela *rela;
+ Elf64_Sym *sym;
+ const char *name;
+
+ rela = &reltaba[i][j];
+ sym = &sh_symtab[ELF64_R_SYM(rela->r_info)];
+
+ name = sym_name(sym_strtab, sym);
+ if (sym->st_shndx != SHN_ABS)
+ continue;
+
+ /* Absolute symbols are not relocated if vmlinux is
+ * loaded at an address other than the one it was
+ * compiled for. Display a warning at build time about
+ * the absolute relocations present.
+ *
+ * Users need to audit the code to make sure that
+ * symbols which should be section relative have not
+ * become absolute because of linker optimizations or
+ * incorrect usage.
+ *
+ * Before warning, check whether this absolute symbol
+ * relocation is harmless.
+ */
+ if (is_safe_abs_reloc(name))
+ continue;
+
+ if (!printed) {
+ printf("WARNING: Absolute relocations"
+ " present\n");
+ printf("Offset Info Type Sym.Value "
+ "Sym.Name\n");
+ printed = 1;
+ }
+
+ printf("%016llx %016llx %10s %016llx %s\n",
+ rela->r_offset, rela->r_info,
+ rel_type(ELF64_R_TYPE(rela->r_info)),
+ sym->st_value, name);
+ }
+ }
+
+ if (printed)
+ printf("\n");
+}
+
+static void walk_relocs(void (*visit)(Elf64_Rela *rela, Elf64_Sym *sym))
+{
+ int i;
+ /* Walk through the relocations */
+ for(i = 0; i < ehdr.e_shnum; i++) {
+ char *sym_strtab;
+ Elf64_Sym *sh_symtab;
+ unsigned sec_applies, sec_symtab;
+ int j, nr_entries;
+ if (shdr[i].sh_type != SHT_RELA)
+ continue;
+
+ sec_symtab = shdr[i].sh_link;
+ sec_applies = shdr[i].sh_info;
+ if (!(shdr[sec_applies].sh_flags & SHF_ALLOC))
+ continue;
+
+ sh_symtab = symtab[sec_symtab];
+ sym_strtab = strtab[shdr[sec_symtab].sh_link];
+ nr_entries = shdr[i].sh_size/sizeof(reltaba[0][0]);
+
+ for(j = 0; j < nr_entries; j++) {
+ Elf64_Rela *rela;
+ Elf64_Sym *sym;
+ void *relp;
+ unsigned r_type;
+
+
+ rela = &reltaba[i][j];
+ sym = &sh_symtab[ELF64_R_SYM(rela->r_info)];
+ r_type = ELF64_R_TYPE(rela->r_info);
+ relp = rela;
+ /* Don't visit relocations to absolute symbols */
+ if (sym->st_shndx == SHN_ABS)
+ continue;
+
+ /* PC relative relocations don't need to be adjusted */
+ switch (r_type) {
+ case R_PPC64_ADDR16_HI:
+ case R_PPC64_ADDR64:
+ case R_PPC64_TOC:
+ /* Visit relocations that need to be adjusted */
+ visit(rela, sym);
+ break;
+ /* Ignore these relocation types */
+ case R_PPC64_REL24:
+ case R_PPC64_ADDR16_HIGHEST:
+ case R_PPC64_ADDR16_HIGHER:
+ case R_PPC64_ADDR16_LO:
+ case R_PPC64_TOC16_DS:
+ case R_PPC64_REL14:
+ case R_PPC64_TOC16:
+ case R_PPC64_REL64:
+ case R_PPC64_ADDR16_LO_DS:
+ break;
+ case R_PPC64_GOT16_DS:
+ default:
+ die("unsupported relocation type %s(%d)\n", rel_type(r_type), r_type);
+ break;
+ }
+ }
+ }
+}
+
+static void count_reloc(Elf64_Rela *rela, Elf64_Sym *sym)
+{
+ reloc_count += 1;
+}
+
+static void collect_reloc(Elf64_Rela *rela, Elf64_Sym *sym)
+{
+ /* Remember the address that needs to be adjusted. */
+ relocs[reloc_idx].offset = rela->r_offset;
+ relocs[reloc_idx++].rel_type = ELF64_R_TYPE(rela->r_info);
+}
+
+static int cmp_relocs(const void *va, const void *vb)
+{
+ const struct reloc_info *a, *b;
+ a = va; b = vb;
+ return (a->rel_type == b->rel_type)? 0 : (a->rel_type > b->rel_type)? 1 : -1;
+}
+
+static void emit_relocs(int as_text)
+{
+ int i;
+ int prev_r_type;
+ /* Count how many relocations I have and allocate space for them. */
+ reloc_count = 0;
+ walk_relocs(count_reloc);
+ relocs = malloc(reloc_count * sizeof(relocs[0]));
+ if (!relocs)
+ die("malloc of %d entries for relocs failed\n",
+ reloc_count);
+
+ /* Collect up the relocations */
+ reloc_idx = 0;
+ walk_relocs(collect_reloc);
+
+ /* Order the relocations for more efficient processing */
+ qsort(relocs, reloc_count, sizeof(relocs[0]), cmp_relocs);
+
+ /* Print the relocations */
+ if (as_text) {
+ /* Print the relocations in a form that
+ * gas will accept.
+ */
+ printf(".section \".data.reloc\",\"a\"\n");
+ printf(".balign 4\n");
+
+ printf("\t .long 0x%016llx\n", relocs[0].offset);
+ prev_r_type = relocs[0].rel_type;
+
+ for(i = 1; i < reloc_count; i++) {
+ if (prev_r_type != relocs[i].rel_type && prev_r_type == R_PPC64_ADDR16_HI) {
+ printf("\t .long 0xffffffffffffffff\n");
+ prev_r_type = relocs[i].rel_type;
+ }
+ printf("\t .long 0x%016llx\n", relocs[i].offset);
+ }
+ printf("\n");
+ }
+ else {
+ unsigned char buf[8];
+ buf[0] = buf[1] = buf[2] = buf[3] = 0;
+ buf[4] = buf[5] = buf[6] = buf[7] = 0;
+
+ /* Print a stop */
+ printf("%c%c%c%c", buf[0], buf[1], buf[2], buf[3]);
+ printf("%c%c%c%c", buf[4], buf[5], buf[6], buf[7]);
+
+ buf[7] = (relocs[0].offset >> 0) & 0xff;
+ buf[6] = (relocs[0].offset >> 8) & 0xff;
+ buf[5] = (relocs[0].offset >> 16) & 0xff;
+ buf[4] = (relocs[0].offset >> 24) & 0xff;
+ buf[3] = (relocs[0].offset >> 32) & 0xff;
+ buf[2] = (relocs[0].offset >> 40) & 0xff;
+ buf[1] = (relocs[0].offset >> 48) & 0xff;
+ buf[0] = (relocs[0].offset >> 56) & 0xff;
+ printf("%c%c%c%c", buf[0], buf[1], buf[2], buf[3]);
+ printf("%c%c%c%c", buf[4], buf[5], buf[6], buf[7]);
+
+ prev_r_type = relocs[0].rel_type;
+
+ /* Now print each relocation */
+ for(i = 1; i < reloc_count; i++) {
+ if (prev_r_type != relocs[i].rel_type && prev_r_type == R_PPC64_ADDR16_HI) {
+ printf("%c%c%c%c", 0xff, 0xff, 0xff, 0xff);
+ printf("%c%c%c%c", 0xff, 0xff, 0xff, 0xff);
+ prev_r_type = relocs[i].rel_type;
+ }
+ buf[7] = (relocs[i].offset >> 0) & 0xff;
+ buf[6] = (relocs[i].offset >> 8) & 0xff;
+ buf[5] = (relocs[i].offset >> 16) & 0xff;
+ buf[4] = (relocs[i].offset >> 24) & 0xff;
+ buf[3] = (relocs[i].offset >> 32) & 0xff;
+ buf[2] = (relocs[i].offset >> 40) & 0xff;
+ buf[1] = (relocs[i].offset >> 48) & 0xff;
+ buf[0] = (relocs[i].offset >> 56) & 0xff;
+ printf("%c%c%c%c", buf[0], buf[1], buf[2], buf[3]);
+ printf("%c%c%c%c", buf[4], buf[5], buf[6], buf[7]);
+ }
+ buf[0] = buf[1] = buf[2] = buf[3] = 0;
+ buf[4] = buf[5] = buf[6] = buf[7] = 0;
+ }
+}
+
+static void usage(void)
+{
+ die("relocs [--abs-syms |--abs-relocs | --text] vmlinux\n");
+}
+
+int main(int argc, char **argv)
+{
+ int show_absolute_syms, show_absolute_relocs;
+ int as_text;
+ const char *fname;
+ FILE *fp;
+ int i;
+
+ show_absolute_syms = 0;
+ show_absolute_relocs = 0;
+ as_text = 0;
+ fname = NULL;
+ for(i = 1; i < argc; i++) {
+ char *arg = argv[i];
+ if (*arg == '-') {
+ if (strcmp(argv[1], "--abs-syms") == 0) {
+ show_absolute_syms = 1;
+ continue;
+ }
+
+ if (strcmp(argv[1], "--abs-relocs") == 0) {
+ show_absolute_relocs = 1;
+ continue;
+ }
+ else if (strcmp(arg, "--text") == 0) {
+ as_text = 1;
+ continue;
+ }
+ }
+ else if (!fname) {
+ fname = arg;
+ continue;
+ }
+ usage();
+ }
+ if (!fname) {
+ usage();
+ }
+ fp = fopen(fname, "r");
+ if (!fp) {
+ die("Cannot open %s: %s\n",
+ fname, strerror(errno));
+ }
+ read_ehdr(fp);
+ read_shdrs(fp);
+ read_strtabs(fp);
+ read_symtabs(fp);
+ read_relocs(fp);
+ if (show_absolute_syms) {
+ print_absolute_symbols();
+ return 0;
+ }
+ if (show_absolute_relocs) {
+ print_absolute_relocs();
+ return 0;
+ }
+ emit_relocs(as_text);
+ return 0;
+}
--
1.5.4
* [PATCH 2/5] Build files needed for relocation
2008-08-11 20:11 [PATCH 0/5] Relocatable kernel support for PPC64 Mohan Kumar M
2008-08-11 20:12 ` [PATCH 1/5] Extract list of relocation offsets Mohan Kumar M
@ 2008-08-11 20:14 ` Mohan Kumar M
2008-08-12 8:07 ` Mohan Kumar M
2008-08-12 8:09 ` Mohan Kumar M
2008-08-11 20:15 ` [PATCH 3/5] Apply relocation Mohan Kumar M
` (2 subsequent siblings)
4 siblings, 2 replies; 15+ messages in thread
From: Mohan Kumar M @ 2008-08-11 20:14 UTC (permalink / raw)
To: ppcdev; +Cc: paulus, miltonm
Build files needed for relocation
This patch builds the vmlinux file with relocation sections and contents
so that the relocs user-space program can extract the required relocation
offsets. The final relocatable vmlinux kernel is packed as follows: the
first part of the relocation-apply code, then vmlinux (with the offset
list appended), then the rest of the relocation-apply code.
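For reference, the packed vmlinux.reloc image presumably looks like this
(section and symbol names taken from the linker script added in patch 3;
the offset list from patch 1 is what turns vmlinux.bin into
vmlinux.bin.all):

	_head  -> first part of the reloc-apply code  (.text.head)
	_text  -> vmlinux binary image, with the      (.vmlinux)
	          relocation offset list appended
	_reloc -> remainder of the reloc-apply code   (.text.reloc)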
Signed-off-by: Mohan Kumar M <mohan@in.ibm.com>
---
arch/powerpc/Kconfig | 15 ++++++++++++---
arch/powerpc/Makefile | 9 +++++++--
arch/powerpc/boot/Makefile | 39 ++++++++++++++++++++++++++++++++++++---
3 files changed, 55 insertions(+), 8 deletions(-)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 63c9caf..b992bc1 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -332,6 +332,15 @@ config CRASH_DUMP
Don't change this unless you know what you are doing.
+config RELOCATABLE_PPC64
+ bool "Build a relocatable kernel (EXPERIMENTAL)"
+ depends on PPC_MULTIPLATFORM && PPC64 && CRASH_DUMP && EXPERIMENTAL
+ help
+ Build a kernel suitable for use both as a regular kernel and as a
+ kdump capture kernel.
+
+ Don't change this unless you know what you are doing.
+
config PHYP_DUMP
bool "Hypervisor-assisted dump (EXPERIMENTAL)"
depends on PPC_PSERIES && EXPERIMENTAL
@@ -694,7 +703,7 @@ config LOWMEM_SIZE
default "0x30000000"
config RELOCATABLE
- bool "Build a relocatable kernel (EXPERIMENTAL)"
+ bool "Build relocatable kernel (EXPERIMENTAL)"
depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && FSL_BOOKE
help
This builds a kernel image that is capable of running at the
@@ -814,11 +823,11 @@ config PAGE_OFFSET
default "0xc000000000000000"
config KERNEL_START
hex
- default "0xc000000002000000" if CRASH_DUMP
+ default "0xc000000002000000" if CRASH_DUMP && !RELOCATABLE_PPC64
default "0xc000000000000000"
config PHYSICAL_START
hex
- default "0x02000000" if CRASH_DUMP
+ default "0x02000000" if CRASH_DUMP && !RELOCATABLE_PPC64
default "0x00000000"
endif
diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index 9155c93..1bfdeea 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -63,7 +63,7 @@ override CC += -m$(CONFIG_WORD_SIZE)
override AR := GNUTARGET=elf$(CONFIG_WORD_SIZE)-powerpc $(AR)
endif
-LDFLAGS_vmlinux := -Bstatic
+LDFLAGS_vmlinux := --emit-relocs
CFLAGS-$(CONFIG_PPC64) := -mminimal-toc -mtraceback=none -mcall-aixdesc
CFLAGS-$(CONFIG_PPC32) := -ffixed-r2 -mmultiple
@@ -146,11 +146,16 @@ core-$(CONFIG_KVM) += arch/powerpc/kvm/
drivers-$(CONFIG_OPROFILE) += arch/powerpc/oprofile/
# Default to zImage, override when needed
+
+ifneq ($(CONFIG_RELOCATABLE_PPC64),y)
all: zImage
+else
+all: zImage vmlinux.reloc
+endif
CPPFLAGS_vmlinux.lds := -Upowerpc
-BOOT_TARGETS = zImage zImage.initrd uImage zImage% dtbImage% treeImage.% cuImage.% simpleImage.%
+BOOT_TARGETS = zImage vmlinux.reloc zImage.initrd uImage zImage% dtbImage% treeImage.% cuImage.% simpleImage.%
PHONY += $(BOOT_TARGETS)
diff --git a/arch/powerpc/boot/Makefile b/arch/powerpc/boot/Makefile
index 14174aa..a67a701 100644
--- a/arch/powerpc/boot/Makefile
+++ b/arch/powerpc/boot/Makefile
@@ -17,7 +17,7 @@
# CROSS32_COMPILE is setup as a prefix just like CROSS_COMPILE
# in the toplevel makefile.
-all: $(obj)/zImage
+all: $(obj)/zImage $(obj)/vmlinux.reloc
BOOTCFLAGS := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
-fno-strict-aliasing -Os -msoft-float -pipe \
@@ -122,18 +122,51 @@ $(patsubst %.S,%.o, $(filter %.S, $(src-boot))): %.o: %.S FORCE
$(obj)/wrapper.a: $(obj-wlib) FORCE
$(call if_changed,bootar)
-hostprogs-y := addnote addRamDisk hack-coff mktree dtc
+hostprogs-y := addnote addRamDisk hack-coff mktree dtc relocs
targets += $(patsubst $(obj)/%,%,$(obj-boot) wrapper.a)
extra-y := $(obj)/wrapper.a $(obj-plat) $(obj)/empty.o \
$(obj)/zImage.lds $(obj)/zImage.coff.lds $(obj)/zImage.ps3.lds
+ifeq ($(CONFIG_RELOCATABLE_PPC64),y)
+extra-y += $(obj)/vmlinux.lds
+endif
+
dtstree := $(srctree)/$(src)/dts
wrapper :=$(srctree)/$(src)/wrapper
-wrapperbits := $(extra-y) $(addprefix $(obj)/,addnote hack-coff mktree dtc) \
+wrapperbits := $(extra-y) $(addprefix $(obj)/,addnote hack-coff mktree dtc relocs) \
$(wrapper) FORCE
+ifeq ($(CONFIG_RELOCATABLE_PPC64),y)
+
+targets += vmlinux.offsets vmlinux.bin vmlinux.bin.all vmlinux.reloc.elf vmlinux.reloc reloc_apply.o vmlinux.lds
+
+OBJCOPYFLAGS_vmlinux.bin := -O binary -R .note -R .comment -S
+$(obj)/vmlinux.bin: vmlinux FORCE
+ $(call if_changed,objcopy)
+
+quiet_cmd_relocbin = BUILD $@
+ cmd_relocbin = cat $(filter-out FORCE,$^) > $@
+
+quiet_cmd_relocs = RELOCS $@
+ cmd_relocs = $(obj)/relocs $< > $@
+
+$(obj)/vmlinux.offsets: vmlinux $(obj)/relocs FORCE
+ $(call if_changed,relocs)
+
+$(obj)/vmlinux.bin.all: $(obj)/vmlinux.bin $(obj)/vmlinux.offsets FORCE
+ $(call if_changed,relocbin)
+
+LDFLAGS_vmlinux.reloc.elf := -T $(obj)/vmlinux.reloc.scr -r --format binary --oformat elf64-powerpc
+$(obj)/vmlinux.reloc.elf: $(obj)/vmlinux.bin.all FORCE
+ $(call if_changed,ld)
+
+LDFLAGS_vmlinux.reloc := -T $(obj)/vmlinux.lds
+$(obj)/vmlinux.reloc: $(obj)/reloc_apply.o $(obj)/vmlinux.reloc.elf FORCE
+ $(call if_changed,ld)
+endif
+
#############
# Bits for building dtc
# DTC_GENPARSER := 1 # Uncomment to rebuild flex/bison output
--
1.5.4
* [PATCH 3/5] Apply relocation
2008-08-11 20:11 [PATCH 0/5] Relocatable kernel support for PPC64 Mohan Kumar M
2008-08-11 20:12 ` [PATCH 1/5] Extract list of relocation offsets Mohan Kumar M
2008-08-11 20:14 ` [PATCH 2/5] Build files needed for relocation Mohan Kumar M
@ 2008-08-11 20:15 ` Mohan Kumar M
2008-08-12 0:23 ` Paul Mackerras
2008-08-12 8:10 ` Mohan Kumar M
2008-08-11 20:16 ` [PATCH 4/5] Relocation support Mohan Kumar M
2008-08-11 20:18 ` [PATCH 5/5] Relocation support for kdump kernel Mohan Kumar M
4 siblings, 2 replies; 15+ messages in thread
From: Mohan Kumar M @ 2008-08-11 20:15 UTC (permalink / raw)
To: ppcdev; +Cc: paulus, miltonm
Apply relocation
This code is a wrapper around the regular kernel. It checks whether the
kernel is loaded at 32MB; if not, the image is treated as a regular
kernel and control is passed to the kernel immediately. If the kernel is
loaded at 32MB, the wrapper applies the relocation delta to each offset
in the list that was generated and appended by patches 1 and 2. After
all offsets are updated, control is passed to the relocated kernel.
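Note: the source for the reloc-apply code itself (reloc_apply.o in patch
2's Makefile) is not included in this posting; see Paul's reply further
down the thread. As a rough illustration only, the apply step amounts to
something like the following C sketch, based on the list format from
patch 1. Every name here is made up, and it assumes the load offset has
its low 16 bits clear (true for 32MB), so patching an @h halfword never
needs a carry from the low half:

#include <stdint.h>

#define KERNELBASE 0xc000000000000000UL	/* from asm/page.h */

/*
 * Illustrative only, NOT the missing reloc_apply code. 'image' is the
 * kernel image at its 32MB load address, 'p'..'end' span the appended
 * offset list (its bounds assumed known to the wrapper), and 'delta'
 * is the load offset.
 */
static void apply_relocs(unsigned char *image, const uint64_t *p,
			 const uint64_t *end, uint64_t delta)
{
	int hi16 = 1;			/* ADDR16_HI entries come first */

	p++;				/* skip the leading zero stop marker */
	for (; p < end; p++) {
		uint64_t off = *p;	/* big-endian, native on PPC64 */

		if (off == 0xffffffffffffffffULL) {
			hi16 = 0;	/* rest are ADDR64/TOC entries */
			continue;
		}
		off -= KERNELBASE;	/* link address -> image offset */
		if (hi16)		/* patch only the high 16 bits */
			*(uint16_t *)(image + off) += delta >> 16;
		else			/* patch the full 64-bit value */
			*(uint64_t *)(image + off) += delta;
	}
}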
Signed-off-by: Mohan Kumar M <mohan@in.ibm.com>
---
arch/powerpc/boot/vmlinux.lds.S | 28 ++++++++++++++++++++++++++++
arch/powerpc/boot/vmlinux.reloc.scr | 8 ++++++++
2 files changed, 36 insertions(+), 0 deletions(-)
create mode 100644 arch/powerpc/boot/vmlinux.lds.S
create mode 100644 arch/powerpc/boot/vmlinux.reloc.scr
diff --git a/arch/powerpc/boot/vmlinux.lds.S b/arch/powerpc/boot/vmlinux.lds.S
new file mode 100644
index 0000000..245c667
--- /dev/null
+++ b/arch/powerpc/boot/vmlinux.lds.S
@@ -0,0 +1,28 @@
+#include <asm/page.h>
+#include <asm-generic/vmlinux.lds.h>
+
+ENTRY(start_wrap)
+
+OUTPUT_ARCH(powerpc:common64)
+SECTIONS
+{
+ . = KERNELBASE;
+
+/*
+ * Text, read only data and other permanent read-only sections
+ */
+ /* Text and gots */
+ .text : {
+ _head = .;
+ *(.text.head)
+ _ehead = .;
+
+ _text = .;
+ *(.vmlinux)
+ _etext = .;
+
+ _reloc = .;
+ *(.text.reloc)
+ _ereloc = .;
+ }
+}
diff --git a/arch/powerpc/boot/vmlinux.reloc.scr b/arch/powerpc/boot/vmlinux.reloc.scr
new file mode 100644
index 0000000..7240b6b
--- /dev/null
+++ b/arch/powerpc/boot/vmlinux.reloc.scr
@@ -0,0 +1,8 @@
+SECTIONS
+{
+ .vmlinux : {
+ input_len = .;
+ *(.data)
+ output_len = . - 8;
+ }
+}
--
1.5.4
* [PATCH 4/5] Relocation support
2008-08-11 20:11 [PATCH 0/5] Relocatable kernel support for PPC64 Mohan Kumar M
` (2 preceding siblings ...)
2008-08-11 20:15 ` [PATCH 3/5] Apply relocation Mohan Kumar M
@ 2008-08-11 20:16 ` Mohan Kumar M
2008-08-12 8:11 ` Mohan Kumar M
2008-08-11 20:18 ` [PATCH 5/5] Relocation support for kdump kernel Mohan Kumar M
4 siblings, 1 reply; 15+ messages in thread
From: Mohan Kumar M @ 2008-08-11 20:16 UTC (permalink / raw)
To: ppcdev; +Cc: paulus, miltonm
Relocation support
Add relocatable kernel support: avoid copying the vmlinux image to the
compile address, add the relocation delta to absolute symbol references,
and so on. ld does not emit relocation entries for the .got section, and
the user-space relocation extraction program cannot process @got entries
(walk_relocs() in patch 1 rejects R_PPC64_GOT16_DS), so the
LOAD_REG_IMMEDIATE macro is used instead of LOAD_REG_ADDR in the
relocatable kernel.
Signed-off-by: Mohan Kumar M <mohan@in.ibm.com>
---
arch/powerpc/include/asm/ppc_asm.h | 4 ++
arch/powerpc/include/asm/prom.h | 2 +
arch/powerpc/include/asm/sections.h | 4 ++-
arch/powerpc/include/asm/system.h | 5 +++
arch/powerpc/kernel/head_64.S | 53 ++++++++++++++++++++++++++++++-
arch/powerpc/kernel/machine_kexec_64.c | 4 +-
arch/powerpc/kernel/prom_init.c | 27 +++++++++++++---
arch/powerpc/kernel/prom_init_check.sh | 2 +-
arch/powerpc/kernel/setup_64.c | 5 +--
arch/powerpc/mm/hash_low_64.S | 12 +++++++
arch/powerpc/mm/init_64.c | 7 ++--
arch/powerpc/mm/mem.c | 3 +-
arch/powerpc/mm/slb_low.S | 4 ++
13 files changed, 114 insertions(+), 18 deletions(-)
diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
index 0966899..2309ad0 100644
--- a/arch/powerpc/include/asm/ppc_asm.h
+++ b/arch/powerpc/include/asm/ppc_asm.h
@@ -295,8 +295,12 @@ n:
oris (reg),(reg),(expr)@h; \
ori (reg),(reg),(expr)@l;
+#ifdef CONFIG_RELOCATABLE_PPC64
+#define LOAD_REG_ADDR(reg,name) LOAD_REG_IMMEDIATE(reg,name)
+#else
#define LOAD_REG_ADDR(reg,name) \
ld (reg),name@got(r2)
+#endif
#define LOAD_REG_ADDRBASE(reg,name) LOAD_REG_ADDR(reg,name)
#define ADDROFF(name) 0
diff --git a/arch/powerpc/include/asm/prom.h b/arch/powerpc/include/asm/prom.h
index eb3bd2e..4d7aa4f 100644
--- a/arch/powerpc/include/asm/prom.h
+++ b/arch/powerpc/include/asm/prom.h
@@ -39,6 +39,8 @@
#define OF_DT_VERSION 0x10
+extern unsigned long reloc_delta, kernel_base;
+
/*
* This is what gets passed to the kernel by prom_init or kexec
*
diff --git a/arch/powerpc/include/asm/sections.h b/arch/powerpc/include/asm/sections.h
index 916018e..f19dab3 100644
--- a/arch/powerpc/include/asm/sections.h
+++ b/arch/powerpc/include/asm/sections.h
@@ -7,10 +7,12 @@
#ifdef __powerpc64__
extern char _end[];
+extern unsigned long kernel_base;
static inline int in_kernel_text(unsigned long addr)
{
- if (addr >= (unsigned long)_stext && addr < (unsigned long)__init_end)
+ if (addr >= (unsigned long)_stext && addr < (unsigned long)__init_end
+ + kernel_base)
return 1;
return 0;
diff --git a/arch/powerpc/include/asm/system.h b/arch/powerpc/include/asm/system.h
index d6648c1..065c830 100644
--- a/arch/powerpc/include/asm/system.h
+++ b/arch/powerpc/include/asm/system.h
@@ -537,6 +537,11 @@ extern unsigned long add_reloc_offset(unsigned long);
extern void reloc_got2(unsigned long);
#define PTRRELOC(x) ((typeof(x)) add_reloc_offset((unsigned long)(x)))
+#ifdef CONFIG_PPC64
+#define RELOC(x) (*PTRRELOC(&(x)))
+#else
+#define RELOC(x) (x)
+#endif
#ifdef CONFIG_VIRT_CPU_ACCOUNTING
extern void account_system_vtime(struct task_struct *);
diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
index cc8fb47..6274686 100644
--- a/arch/powerpc/kernel/head_64.S
+++ b/arch/powerpc/kernel/head_64.S
@@ -102,6 +102,12 @@ __secondary_hold_acknowledge:
.llong hvReleaseData-KERNELBASE
#endif /* CONFIG_PPC_ISERIES */
+#ifdef CONFIG_RELOCATABLE_PPC64
+ /* Used as a static variable to make sure reloc_delta is set only once */
+__initialized:
+ .long 0x0
+#endif
+
. = 0x60
/*
* The following code is used to hold secondary processors
@@ -1248,6 +1254,38 @@ _STATIC(__mmu_off)
*
*/
_GLOBAL(__start_initialization_multiplatform)
+#ifdef CONFIG_RELOCATABLE_PPC64
+ mr r21,r3
+ mr r22,r4
+ mr r23,r5
+ bl .reloc_offset
+ mr r26,r3
+ mr r3,r21
+ mr r4,r22
+ mr r5,r23
+
+ LOAD_REG_IMMEDIATE(r27, __initialized)
+ add r27,r26,r27
+ ld r7,0(r27)
+ cmpdi r7,0
+ bne 4f
+
+ li r7,1
+ stw r7,0(r27)
+
+ cmpdi r6,0
+ beq 4f
+ LOAD_REG_IMMEDIATE(r27, reloc_delta)
+ add r27,r27,r26
+ std r6,0(r27)
+
+ LOAD_REG_IMMEDIATE(r27, KERNELBASE)
+ add r7,r6,r27
+ LOAD_REG_IMMEDIATE(r27, kernel_base)
+ add r27,r27,r26
+ std r7,0(r27)
+4:
+#endif
/*
* Are we booted from a PROM Of-type client-interface ?
*/
@@ -1323,6 +1361,19 @@ _INIT_STATIC(__boot_from_prom)
trap
_STATIC(__after_prom_start)
+ bl .reloc_offset
+ mr r26,r3
+#ifdef CONFIG_RELOCATABLE_PPC64
+ /*
+ * If it's a relocatable kernel, there is no need to copy the
+ * kernel to PHYSICAL_START. Continue running from the same location.
+ */
+ LOAD_REG_IMMEDIATE(r27, reloc_delta)
+ add r27,r27,r26
+ ld r28,0(r27)
+ cmpdi r28,0
+ bne .start_here_multiplatform
+#endif
/*
* We need to run with __start at physical address PHYSICAL_START.
@@ -1336,8 +1387,6 @@ _STATIC(__after_prom_start)
* r26 == relocation offset
* r27 == KERNELBASE
*/
- bl .reloc_offset
- mr r26,r3
LOAD_REG_IMMEDIATE(r27, KERNELBASE)
LOAD_REG_IMMEDIATE(r3, PHYSICAL_START) /* target addr */
diff --git a/arch/powerpc/kernel/machine_kexec_64.c b/arch/powerpc/kernel/machine_kexec_64.c
index a168514..ce19c44 100644
--- a/arch/powerpc/kernel/machine_kexec_64.c
+++ b/arch/powerpc/kernel/machine_kexec_64.c
@@ -43,7 +43,7 @@ int default_machine_kexec_prepare(struct kimage *image)
* overlaps kernel static data or bss.
*/
for (i = 0; i < image->nr_segments; i++)
- if (image->segment[i].mem < __pa(_end))
+ if (image->segment[i].mem < (__pa(_end) + kernel_base))
return -ETXTBSY;
/*
@@ -317,7 +317,7 @@ static void __init export_htab_values(void)
if (!node)
return;
- kernel_end = __pa(_end);
+ kernel_end = __pa(_end) + kernel_base;
prom_add_property(node, &kernel_end_prop);
/* On machines with no htab htab_address is NULL */
diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c
index b72849a..8e8ddbe 100644
--- a/arch/powerpc/kernel/prom_init.c
+++ b/arch/powerpc/kernel/prom_init.c
@@ -91,11 +91,9 @@ extern const struct linux_logo logo_linux_clut224;
* fortunately don't get interpreted as two arguments).
*/
#ifdef CONFIG_PPC64
-#define RELOC(x) (*PTRRELOC(&(x)))
#define ADDR(x) (u32) add_reloc_offset((unsigned long)(x))
#define OF_WORKAROUNDS 0
#else
-#define RELOC(x) (x)
#define ADDR(x) (u32) (x)
#define OF_WORKAROUNDS of_workarounds
int of_workarounds;
@@ -1078,7 +1076,12 @@ static void __init prom_init_mem(void)
}
}
- RELOC(alloc_bottom) = PAGE_ALIGN((unsigned long)&RELOC(_end) + 0x4000);
+#ifndef CONFIG_RELOCATABLE_PPC64
+ RELOC(alloc_bottom) = PAGE_ALIGN((unsigned long)&RELOC(_end) + 0x4000);
+#else
+ RELOC(alloc_bottom) = PAGE_ALIGN((unsigned long)&RELOC(_end) + 0x4000 +
+ RELOC(reloc_delta));
+#endif
/* Check if we have an initrd after the kernel, if we do move our bottom
* point to after it
@@ -1338,10 +1341,17 @@ static void __init prom_hold_cpus(void)
phandle node;
char type[64];
struct prom_t *_prom = &RELOC(prom);
+#ifndef CONFIG_RELOCATABLE_PPC64
unsigned long *spinloop
= (void *) LOW_ADDR(__secondary_hold_spinloop);
unsigned long *acknowledge
= (void *) LOW_ADDR(__secondary_hold_acknowledge);
+#else
+ unsigned long *spinloop
+ = (void *) &__secondary_hold_spinloop;
+ unsigned long *acknowledge
+ = (void *) &__secondary_hold_acknowledge;
+#endif
#ifdef CONFIG_PPC64
/* __secondary_hold is actually a descriptor, not the text address */
unsigned long secondary_hold
@@ -2376,8 +2386,15 @@ unsigned long __init prom_init(unsigned long r3, unsigned long r4,
/*
* Copy the CPU hold code
*/
- if (RELOC(of_platform) != PLATFORM_POWERMAC)
- copy_and_flush(0, KERNELBASE + offset, 0x100, 0);
+ if (RELOC(of_platform) != PLATFORM_POWERMAC) {
+#ifdef CONFIG_RELOCATABLE_PPC64
+ if (RELOC(reloc_delta))
+ copy_and_flush(0, KERNELBASE + RELOC(reloc_delta),
+ 0x100, 0);
+ else
+#endif
+ copy_and_flush(0, KERNELBASE + offset, 0x100, 0);
+ }
/*
* Do early parsing of command line
diff --git a/arch/powerpc/kernel/prom_init_check.sh b/arch/powerpc/kernel/prom_init_check.sh
index 2c7e8e8..3cc7e24 100644
--- a/arch/powerpc/kernel/prom_init_check.sh
+++ b/arch/powerpc/kernel/prom_init_check.sh
@@ -20,7 +20,7 @@ WHITELIST="add_reloc_offset __bss_start __bss_stop copy_and_flush
_end enter_prom memcpy memset reloc_offset __secondary_hold
__secondary_hold_acknowledge __secondary_hold_spinloop __start
strcmp strcpy strlcpy strlen strncmp strstr logo_linux_clut224
-reloc_got2 kernstart_addr"
+reloc_got2 kernstart_addr reloc_delta"
NM="$1"
OBJ="$2"
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 8b25f51..5498662 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -208,7 +208,6 @@ void __init early_setup(unsigned long dt_ptr)
/* Probe the machine type */
probe_machine();
-
setup_kdump_trampoline();
DBG("Found, Initializing memory management...\n");
@@ -526,9 +525,9 @@ void __init setup_arch(char **cmdline_p)
if (ppc_md.panic)
setup_panic();
- init_mm.start_code = (unsigned long)_stext;
+ init_mm.start_code = (unsigned long)_stext + kernel_base;
init_mm.end_code = (unsigned long) _etext;
- init_mm.end_data = (unsigned long) _edata;
+ init_mm.end_data = (unsigned long) _edata + kernel_base;
init_mm.brk = klimit;
irqstack_early_init();
diff --git a/arch/powerpc/mm/hash_low_64.S b/arch/powerpc/mm/hash_low_64.S
index a719f53..acf64d9 100644
--- a/arch/powerpc/mm/hash_low_64.S
+++ b/arch/powerpc/mm/hash_low_64.S
@@ -168,7 +168,11 @@ END_FTR_SECTION(CPU_FTR_NOEXECUTE|CPU_FTR_COHERENT_ICACHE, CPU_FTR_NOEXECUTE)
std r3,STK_PARM(r4)(r1)
/* Get htab_hash_mask */
+#ifndef CONFIG_RELOCATABLE_PPC64
ld r4,htab_hash_mask@got(2)
+#else
+ LOAD_REG_IMMEDIATE(r4,htab_hash_mask)
+#endif
ld r27,0(r4) /* htab_hash_mask -> r27 */
/* Check if we may already be in the hashtable, in this case, we
@@ -461,7 +465,11 @@ END_FTR_SECTION(CPU_FTR_NOEXECUTE|CPU_FTR_COHERENT_ICACHE, CPU_FTR_NOEXECUTE)
std r3,STK_PARM(r4)(r1)
/* Get htab_hash_mask */
+#ifndef CONFIG_RELOCATABLE_PPC64
ld r4,htab_hash_mask@got(2)
+#else
+ LOAD_REG_IMMEDIATE(r4,htab_hash_mask)
+#endif
ld r27,0(r4) /* htab_hash_mask -> r27 */
/* Check if we may already be in the hashtable, in this case, we
@@ -792,7 +800,11 @@ END_FTR_SECTION(CPU_FTR_NOEXECUTE|CPU_FTR_COHERENT_ICACHE, CPU_FTR_NOEXECUTE)
std r3,STK_PARM(r4)(r1)
/* Get htab_hash_mask */
+#ifndef CONFIG_RELOCATABLE_PPC64
ld r4,htab_hash_mask@got(2)
+#else
+ LOAD_REG_IMMEDIATE(r4,htab_hash_mask)
+#endif
ld r27,0(r4) /* htab_hash_mask -> r27 */
/* Check if we may already be in the hashtable, in this case, we
diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index 4f7df85..086fa2d 100644
--- a/arch/powerpc/mm/init_64.c
+++ b/arch/powerpc/mm/init_64.c
@@ -79,10 +79,11 @@ phys_addr_t kernstart_addr;
void free_initmem(void)
{
- unsigned long addr;
+ unsigned long long addr, eaddr;
- addr = (unsigned long)__init_begin;
- for (; addr < (unsigned long)__init_end; addr += PAGE_SIZE) {
+ addr = (unsigned long long)__init_begin + kernel_base;
+ eaddr = (unsigned long long)__init_end + kernel_base;
+ for (; addr < eaddr; addr += PAGE_SIZE) {
memset((void *)addr, POISON_FREE_INITMEM, PAGE_SIZE);
ClearPageReserved(virt_to_page(addr));
init_page_count(virt_to_page(addr));
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 1c93c25..04e5d06 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -362,7 +362,8 @@ void __init mem_init(void)
}
}
- codesize = (unsigned long)&_sdata - (unsigned long)&_stext;
+ codesize = (unsigned long)&_sdata - (unsigned long)&_stext
+ + kernel_base;
datasize = (unsigned long)&_edata - (unsigned long)&_sdata;
initsize = (unsigned long)&__init_end - (unsigned long)&__init_begin;
bsssize = (unsigned long)&__bss_stop - (unsigned long)&__bss_start;
diff --git a/arch/powerpc/mm/slb_low.S b/arch/powerpc/mm/slb_low.S
index bc44dc4..aadc389 100644
--- a/arch/powerpc/mm/slb_low.S
+++ b/arch/powerpc/mm/slb_low.S
@@ -128,7 +128,11 @@ END_FTR_SECTION_IFCLR(CPU_FTR_1T_SEGMENT)
/* Now get to the array and obtain the sllp
*/
ld r11,PACATOC(r13)
+#ifndef CONFIG_RELOCATABLE_PPC64
ld r11,mmu_psize_defs@got(r11)
+#else
+ LOAD_REG_IMMEDIATE(r11,mmu_psize_defs)
+#endif
add r11,r11,r9
ld r11,MMUPSIZESLLP(r11)
ori r11,r11,SLB_VSID_USER
--
1.5.4
* [PATCH 5/5] Relocation support for kdump kernel
2008-08-11 20:11 [PATCH 0/5] Relocatable kernel support for PPC64 Mohan Kumar M
` (3 preceding siblings ...)
2008-08-11 20:16 ` [PATCH 4/5] Relocation support Mohan Kumar M
@ 2008-08-11 20:18 ` Mohan Kumar M
2008-08-12 8:11 ` Mohan Kumar M
4 siblings, 1 reply; 15+ messages in thread
From: Mohan Kumar M @ 2008-08-11 20:18 UTC (permalink / raw)
To: ppcdev; +Cc: paulus, miltonm
Relocation support for kdump kernel
Add relocation support to the kdump kernel boot path.
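A note on the RELOC_DELTA constant used in misc.S below: it is
presumably just the reloc_offset() value seen when a kernel linked at
KERNELBASE runs at the 32MB kdump load address, computed modulo 2^64:

	  0x0000000002000000   (32MB, where 'kexec -p' loads the kernel)
	- 0xc000000000000000   (KERNELBASE, the address it is linked at)
	= 0x4000000002000000   (mod 2^64)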
Signed-off-by: Mohan Kumar M <mohan@in.ibm.com>
---
arch/powerpc/kernel/crash_dump.c | 19 +++++++++++++++
arch/powerpc/kernel/iommu.c | 7 ++++-
arch/powerpc/kernel/machine_kexec.c | 6 ++++
arch/powerpc/kernel/misc.S | 40 +++++++++++++++++++++++++------
arch/powerpc/kernel/prom.c | 13 +++++++++-
arch/powerpc/mm/hash_utils_64.c | 5 ++-
arch/powerpc/platforms/pseries/iommu.c | 5 +++-
7 files changed, 81 insertions(+), 14 deletions(-)
diff --git a/arch/powerpc/kernel/crash_dump.c b/arch/powerpc/kernel/crash_dump.c
index e0debcc..58354b8 100644
--- a/arch/powerpc/kernel/crash_dump.c
+++ b/arch/powerpc/kernel/crash_dump.c
@@ -29,7 +29,12 @@
void __init reserve_kdump_trampoline(void)
{
+#ifdef CONFIG_RELOCATABLE_PPC64
+ if (RELOC(reloc_delta))
+ lmb_reserve(0, KDUMP_RESERVE_LIMIT);
+#else
lmb_reserve(0, KDUMP_RESERVE_LIMIT);
+#endif
}
static void __init create_trampoline(unsigned long addr)
@@ -45,7 +50,11 @@ static void __init create_trampoline(unsigned long addr)
* two instructions it doesn't require any registers.
*/
patch_instruction(p, PPC_NOP_INSTR);
+#ifndef CONFIG_RELOCATABLE_PPC64
patch_branch(++p, addr + PHYSICAL_START, 0);
+#else
+ patch_branch(++p, addr + (RELOC(reloc_delta) & 0xfffffffffffffff), 0);
+#endif
}
void __init setup_kdump_trampoline(void)
@@ -54,13 +63,23 @@ void __init setup_kdump_trampoline(void)
DBG(" -> setup_kdump_trampoline()\n");
+#ifdef CONFIG_RELOCATABLE_PPC64
+ if (!RELOC(reloc_delta))
+ return;
+#endif
+
for (i = KDUMP_TRAMPOLINE_START; i < KDUMP_TRAMPOLINE_END; i += 8) {
create_trampoline(i);
}
#ifdef CONFIG_PPC_PSERIES
+#ifndef CONFIG_RELOCATABLE_PPC64
create_trampoline(__pa(system_reset_fwnmi) - PHYSICAL_START);
create_trampoline(__pa(machine_check_fwnmi) - PHYSICAL_START);
+#else
+ create_trampoline(__pa(system_reset_fwnmi) - RELOC(reloc_delta));
+ create_trampoline(__pa(machine_check_fwnmi) - RELOC(reloc_delta));
+#endif
#endif /* CONFIG_PPC_PSERIES */
DBG(" <- setup_kdump_trampoline()\n");
diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
index 550a193..9ae7657 100644
--- a/arch/powerpc/kernel/iommu.c
+++ b/arch/powerpc/kernel/iommu.c
@@ -494,7 +494,7 @@ struct iommu_table *iommu_init_table(struct iommu_table *tbl, int nid)
spin_lock_init(&tbl->it_lock);
#ifdef CONFIG_CRASH_DUMP
- if (ppc_md.tce_get) {
+ if (reloc_delta && ppc_md.tce_get) {
unsigned long index;
unsigned long tceval;
unsigned long tcecount = 0;
@@ -520,7 +520,10 @@ struct iommu_table *iommu_init_table(struct iommu_table *tbl, int nid)
index < tbl->it_size; index++)
__clear_bit(index, tbl->it_map);
}
- }
+ } else
+ /* Clear the hardware table in case firmware left allocations
+ in it */
+ ppc_md.tce_free(tbl, tbl->it_offset, tbl->it_size);
#else
/* Clear the hardware table in case firmware left allocations in it */
ppc_md.tce_free(tbl, tbl->it_offset, tbl->it_size);
diff --git a/arch/powerpc/kernel/machine_kexec.c b/arch/powerpc/kernel/machine_kexec.c
index aab7688..75dc6af 100644
--- a/arch/powerpc/kernel/machine_kexec.c
+++ b/arch/powerpc/kernel/machine_kexec.c
@@ -67,6 +67,12 @@ void __init reserve_crashkernel(void)
unsigned long long crash_size, crash_base;
int ret;
+#ifdef CONFIG_RELOCATABLE_PPC64
+ /* Return if it's a kdump kernel */
+ if (reloc_delta)
+ return;
+#endif
+
/* this is necessary because of lmb_phys_mem_size() */
lmb_analyze();
diff --git a/arch/powerpc/kernel/misc.S b/arch/powerpc/kernel/misc.S
index 85cb6f3..28718ed 100644
--- a/arch/powerpc/kernel/misc.S
+++ b/arch/powerpc/kernel/misc.S
@@ -20,6 +20,8 @@
#include <asm/asm-compat.h>
#include <asm/asm-offsets.h>
+#define RELOC_DELTA 0x4000000002000000
+
.text
/*
@@ -33,6 +35,17 @@ _GLOBAL(reloc_offset)
1: mflr r3
LOAD_REG_IMMEDIATE(r4,1b)
subf r3,r4,r3
+#ifdef CONFIG_RELOCATABLE_PPC64
+ LOAD_REG_IMMEDIATE(r5, RELOC_DELTA)
+ cmpd r3,r5
+ bne 2f
+ /*
+ * Don't return the offset if the difference is
+ * RELOC_DELTA
+ */
+ li r3,0
+2:
+#endif
mtlr r0
blr
@@ -40,14 +53,25 @@ _GLOBAL(reloc_offset)
* add_reloc_offset(x) returns x + reloc_offset().
*/
_GLOBAL(add_reloc_offset)
- mflr r0
- bl 1f
-1: mflr r5
- LOAD_REG_IMMEDIATE(r4,1b)
- subf r5,r4,r5
- add r3,r3,r5
- mtlr r0
- blr
+ mflr r0
+ bl 1f
+1: mflr r5
+ LOAD_REG_IMMEDIATE(r4,1b)
+ subf r5,r4,r5
+#ifdef CONFIG_RELOCATABLE_PPC64
+ LOAD_REG_IMMEDIATE(r4, RELOC_DELTA)
+ cmpd r5,r4
+ bne 2f
+ /*
+ * Don't add the offset if the difference is
+ * RELOC_DELTA
+ */
+ li r5,0
+2:
+#endif
+ add r3,r3,r5
+ mtlr r0
+ blr
_GLOBAL(kernel_execve)
li r0,__NR_execve
diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index 87d83c5..453dc98 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -65,6 +65,9 @@
static int __initdata dt_root_addr_cells;
static int __initdata dt_root_size_cells;
+unsigned long reloc_delta __attribute__ ((__section__ (".data")));
+unsigned long kernel_base __attribute__ ((__section__ (".data")));
+
#ifdef CONFIG_PPC64
int __initdata iommu_is_off;
int __initdata iommu_force_on;
@@ -1163,8 +1166,16 @@ void __init early_init_devtree(void *params)
parse_early_param();
/* Reserve LMB regions used by kernel, initrd, dt, etc... */
- lmb_reserve(PHYSICAL_START, __pa(klimit) - PHYSICAL_START);
reserve_kdump_trampoline();
+#ifdef CONFIG_RELOCATABLE_PPC64
+ if (RELOC(kernel_base)) {
+ lmb_reserve(0, KDUMP_RESERVE_LIMIT);
+ lmb_reserve(kernel_base, __pa(klimit) - PHYSICAL_START);
+ } else
+ lmb_reserve(PHYSICAL_START, __pa(klimit) - PHYSICAL_START);
+#else
+ lmb_reserve(PHYSICAL_START, __pa(klimit) - PHYSICAL_START);
+#endif
reserve_crashkernel();
early_reserve_mem();
phyp_dump_reserve_mem();
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index 5ce5a4d..29474e9 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -677,8 +677,9 @@ void __init htab_initialize(void)
continue;
}
#endif /* CONFIG_U3_DART */
- BUG_ON(htab_bolt_mapping(base, base + size, __pa(base),
- mode_rw, mmu_linear_psize, mmu_kernel_ssize));
+ BUG_ON(htab_bolt_mapping(base + kernel_base, base + size,
+ __pa(base) + kernel_base, mode_rw, mmu_linear_psize,
+ mmu_kernel_ssize));
}
/*
diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
index a8c4466..480341b 100644
--- a/arch/powerpc/platforms/pseries/iommu.c
+++ b/arch/powerpc/platforms/pseries/iommu.c
@@ -291,7 +291,10 @@ static void iommu_table_setparms(struct pci_controller *phb,
tbl->it_base = (unsigned long)__va(*basep);
-#ifndef CONFIG_CRASH_DUMP
+#ifdef CONFIG_CRASH_DUMP
+ if (!reloc_delta)
+ memset((void *)tbl->it_base, 0, *sizep);
+#else
memset((void *)tbl->it_base, 0, *sizep);
#endif
--
1.5.4
* Re: [PATCH 3/5] Apply relocation
2008-08-11 20:15 ` [PATCH 3/5] Apply relocation Mohan Kumar M
@ 2008-08-12 0:23 ` Paul Mackerras
2008-08-12 8:10 ` Mohan Kumar M
1 sibling, 0 replies; 15+ messages in thread
From: Paul Mackerras @ 2008-08-12 0:23 UTC (permalink / raw)
To: mohan; +Cc: ppcdev, miltonm
Mohan Kumar M writes:
> Apply relocation
>
> This code is a wrapper around regular kernel. This checks whether the
> kernel is loaded at 32MB, if its not loaded at 32MB, its treated as a
> regular kernel and the control is given to the kernel immediately. If
> the kernel is loaded at 32MB, it applies relocation delta to each offset
> in the list which was generated and appended by patch 1 and 2. After
> updating all offsets, control is given to the relocatable kernel.
>
> Signed-off-by: Mohan Kumar M <mohan@in.ibm.com>
> ---
> arch/powerpc/boot/vmlinux.lds.S | 28 ++++++++++++++++++++++++++++
> arch/powerpc/boot/vmlinux.reloc.scr | 8 ++++++++
> 2 files changed, 36 insertions(+), 0 deletions(-)
I think there is some stuff missing here... This patch only adds two
linker scripts.
Paul.
* [PATCH 2/5] Build files needed for relocation
2008-08-11 20:14 ` [PATCH 2/5] Build files needed for relocation Mohan Kumar M
@ 2008-08-12 8:07 ` Mohan Kumar M
2008-08-12 8:09 ` Mohan Kumar M
1 sibling, 0 replies; 15+ messages in thread
From: Mohan Kumar M @ 2008-08-12 8:07 UTC (permalink / raw)
To: ppcdev; +Cc: paulus, miltonm
My scripts committed the wrong changes, so I posted the wrong patches. I
am posting patches 3, 4 and 5 again. Sorry for the inconvenience.
* [PATCH 2/5] Build files needed for relocation
2008-08-11 20:14 ` [PATCH 2/5] Build files needed for relocation Mohan Kumar M
2008-08-12 8:07 ` Mohan Kumar M
@ 2008-08-12 8:09 ` Mohan Kumar M
1 sibling, 0 replies; 15+ messages in thread
From: Mohan Kumar M @ 2008-08-12 8:09 UTC (permalink / raw)
To: ppcdev; +Cc: paulus, miltonm
Build files needed for relocation
This patch builds the vmlinux file with relocation sections and contents
so that the relocs user-space program can extract the required relocation
offsets. The final relocatable vmlinux kernel is packed as follows: the
first part of the relocation-apply code, then vmlinux (with the offset
list appended), then the rest of the relocation-apply code.
TODO:
The relocatable vmlinux image is built in arch/powerpc/boot as
vmlinux.reloc. It should instead be built in the top-level directory of
the kernel source as vmlinux.
Signed-off-by: Mohan Kumar M <mohan@in.ibm.com>
---
arch/powerpc/Kconfig | 15 ++++++++++--
arch/powerpc/Makefile | 9 ++++++-
arch/powerpc/boot/Makefile | 39 ++++++++++++++++++++++++++++++++--
arch/powerpc/boot/vmlinux.lds.S | 28 +++++++++++++++++++++++++
arch/powerpc/boot/vmlinux.reloc.scr | 8 +++++++
5 files changed, 91 insertions(+), 8 deletions(-)
create mode 100644 arch/powerpc/boot/vmlinux.lds.S
create mode 100644 arch/powerpc/boot/vmlinux.reloc.scr
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 63c9caf..b992bc1 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -332,6 +332,15 @@ config CRASH_DUMP
Don't change this unless you know what you are doing.
+config RELOCATABLE_PPC64
+ bool "Build a relocatable kernel (EXPERIMENTAL)"
+ depends on PPC_MULTIPLATFORM && PPC64 && CRASH_DUMP && EXPERIMENTAL
+ help
+ Build a kernel suitable for use both as a regular kernel and as a
+ kdump capture kernel.
+
+ Don't change this unless you know what you are doing.
+
config PHYP_DUMP
bool "Hypervisor-assisted dump (EXPERIMENTAL)"
depends on PPC_PSERIES && EXPERIMENTAL
@@ -694,7 +703,7 @@ config LOWMEM_SIZE
default "0x30000000"
config RELOCATABLE
- bool "Build a relocatable kernel (EXPERIMENTAL)"
+ bool "Build relocatable kernel (EXPERIMENTAL)"
depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && FSL_BOOKE
help
This builds a kernel image that is capable of running at the
@@ -814,11 +823,11 @@ config PAGE_OFFSET
default "0xc000000000000000"
config KERNEL_START
hex
- default "0xc000000002000000" if CRASH_DUMP
+ default "0xc000000002000000" if CRASH_DUMP && !RELOCATABLE_PPC64
default "0xc000000000000000"
config PHYSICAL_START
hex
- default "0x02000000" if CRASH_DUMP
+ default "0x02000000" if CRASH_DUMP && !RELOCATABLE_PPC64
default "0x00000000"
endif
diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index 9155c93..1bfdeea 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -63,7 +63,7 @@ override CC += -m$(CONFIG_WORD_SIZE)
override AR := GNUTARGET=elf$(CONFIG_WORD_SIZE)-powerpc $(AR)
endif
-LDFLAGS_vmlinux := -Bstatic
+LDFLAGS_vmlinux := --emit-relocs
CFLAGS-$(CONFIG_PPC64) := -mminimal-toc -mtraceback=none -mcall-aixdesc
CFLAGS-$(CONFIG_PPC32) := -ffixed-r2 -mmultiple
@@ -146,11 +146,16 @@ core-$(CONFIG_KVM) += arch/powerpc/kvm/
drivers-$(CONFIG_OPROFILE) += arch/powerpc/oprofile/
# Default to zImage, override when needed
+
+ifneq ($(CONFIG_RELOCATABLE_PPC64),y)
all: zImage
+else
+all: zImage vmlinux.reloc
+endif
CPPFLAGS_vmlinux.lds := -Upowerpc
-BOOT_TARGETS = zImage zImage.initrd uImage zImage% dtbImage% treeImage.% cuImage.% simpleImage.%
+BOOT_TARGETS = zImage vmlinux.reloc zImage.initrd uImage zImage% dtbImage% treeImage.% cuImage.% simpleImage.%
PHONY += $(BOOT_TARGETS)
diff --git a/arch/powerpc/boot/Makefile b/arch/powerpc/boot/Makefile
index 14174aa..a67a701 100644
--- a/arch/powerpc/boot/Makefile
+++ b/arch/powerpc/boot/Makefile
@@ -17,7 +17,7 @@
# CROSS32_COMPILE is setup as a prefix just like CROSS_COMPILE
# in the toplevel makefile.
-all: $(obj)/zImage
+all: $(obj)/zImage $(obj)/vmlinux.reloc
BOOTCFLAGS := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
-fno-strict-aliasing -Os -msoft-float -pipe \
@@ -122,18 +122,51 @@ $(patsubst %.S,%.o, $(filter %.S, $(src-boot))): %.o: %.S FORCE
$(obj)/wrapper.a: $(obj-wlib) FORCE
$(call if_changed,bootar)
-hostprogs-y := addnote addRamDisk hack-coff mktree dtc
+hostprogs-y := addnote addRamDisk hack-coff mktree dtc relocs
targets += $(patsubst $(obj)/%,%,$(obj-boot) wrapper.a)
extra-y := $(obj)/wrapper.a $(obj-plat) $(obj)/empty.o \
$(obj)/zImage.lds $(obj)/zImage.coff.lds $(obj)/zImage.ps3.lds
+ifeq ($(CONFIG_RELOCATABLE_PPC64),y)
+extra-y += $(obj)/vmlinux.lds
+endif
+
dtstree := $(srctree)/$(src)/dts
wrapper :=$(srctree)/$(src)/wrapper
-wrapperbits := $(extra-y) $(addprefix $(obj)/,addnote hack-coff mktree dtc) \
+wrapperbits := $(extra-y) $(addprefix $(obj)/,addnote hack-coff mktree dtc relocs) \
$(wrapper) FORCE
+ifeq ($(CONFIG_RELOCATABLE_PPC64),y)
+
+targets += vmlinux.offsets vmlinux.bin vmlinux.bin.all vmlinux.reloc.elf vmlinux.reloc reloc_apply.o vmlinux.lds
+
+OBJCOPYFLAGS_vmlinux.bin := -O binary -R .note -R .comment -S
+$(obj)/vmlinux.bin: vmlinux FORCE
+ $(call if_changed,objcopy)
+
+quiet_cmd_relocbin = BUILD $@
+ cmd_relocbin = cat $(filter-out FORCE,$^) > $@
+
+quiet_cmd_relocs = RELOCS $@
+ cmd_relocs = $(obj)/relocs $< > $@
+
+$(obj)/vmlinux.offsets: vmlinux $(obj)/relocs FORCE
+ $(call if_changed,relocs)
+
+$(obj)/vmlinux.bin.all: $(obj)/vmlinux.bin $(obj)/vmlinux.offsets FORCE
+ $(call if_changed,relocbin)
+
+LDFLAGS_vmlinux.reloc.elf := -T $(obj)/vmlinux.reloc.scr -r --format binary --oformat elf64-powerpc
+$(obj)/vmlinux.reloc.elf: $(obj)/vmlinux.bin.all FORCE
+ $(call if_changed,ld)
+
+LDFLAGS_vmlinux.reloc := -T $(obj)/vmlinux.lds
+$(obj)/vmlinux.reloc: $(obj)/reloc_apply.o $(obj)/vmlinux.reloc.elf FORCE
+ $(call if_changed,ld)
+endif
+
#############
# Bits for building dtc
# DTC_GENPARSER := 1 # Uncomment to rebuild flex/bison output
diff --git a/arch/powerpc/boot/vmlinux.lds.S b/arch/powerpc/boot/vmlinux.lds.S
new file mode 100644
index 0000000..245c667
--- /dev/null
+++ b/arch/powerpc/boot/vmlinux.lds.S
@@ -0,0 +1,28 @@
+#include <asm/page.h>
+#include <asm-generic/vmlinux.lds.h>
+
+ENTRY(start_wrap)
+
+OUTPUT_ARCH(powerpc:common64)
+SECTIONS
+{
+ . = KERNELBASE;
+
+/*
+ * Text, read only data and other permanent read-only sections
+ */
+ /* Text and gots */
+ .text : {
+ _head = .;
+ *(.text.head)
+ _ehead = .;
+
+ _text = .;
+ *(.vmlinux)
+ _etext = .;
+
+ _reloc = .;
+ *(.text.reloc)
+ _ereloc = .;
+ }
+}
diff --git a/arch/powerpc/boot/vmlinux.reloc.scr b/arch/powerpc/boot/vmlinux.reloc.scr
new file mode 100644
index 0000000..7240b6b
--- /dev/null
+++ b/arch/powerpc/boot/vmlinux.reloc.scr
@@ -0,0 +1,8 @@
+SECTIONS
+{
+ .vmlinux : {
+ input_len = .;
+ *(.data)
+ output_len = . - 8;
+ }
+}
--
1.5.4
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH 3/5] Apply relocation
2008-08-11 20:15 ` [PATCH 3/5] Apply relocation Mohan Kumar M
2008-08-12 0:23 ` Paul Mackerras
@ 2008-08-12 8:10 ` Mohan Kumar M
2008-08-13 5:11 ` Paul Mackerras
1 sibling, 1 reply; 15+ messages in thread
From: Mohan Kumar M @ 2008-08-12 8:10 UTC (permalink / raw)
To: ppcdev; +Cc: paulus, miltonm
Apply relocation
This code is a wrapper around the regular kernel. It checks whether the
kernel is loaded at 32MB: if it is not, the image is treated as a
regular kernel and control is given to the kernel immediately. If the
kernel is loaded at 32MB, the wrapper applies the relocation delta to
each offset in the list that was generated and appended by patches 1
and 2. After updating all offsets, control is given to the relocatable
kernel.
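A note on the magic constant involved: the wrapper is linked at
KERNELBASE (0xc000000000000000), so when it finds itself running at
physical 32MB the computed run-minus-link offset is
0x2000000 - 0xc000000000000000, which modulo 2^64 is exactly the
RELOC_DELTA value 0x4000000002000000 that reloc_apply.S compares
against. The offset-list walk can be modelled in C roughly as below; a
sketch only, the authoritative version is the assembly in this patch,
whose rel_addr64 and rel_hi paths this mirrors:

#include <stdint.h>

#define DELIMITER 0xffffffffffffffffULL

/*
 * delta: run-time minus link-time offset (r15 in the asm)
 * base:  _head, i.e. KERNELBASE (r26 in the asm)
 * list_end: just past the last entry of the appended offset list
 */
static void apply_relocs(uint64_t *list_end, uint64_t delta, uint64_t base)
{
	uint64_t *entry = list_end;
	int doing_addr16_hi = 0;

	for (;;) {
		uint64_t off = *--entry;

		if (off == 0)			/* processed all offsets */
			break;
		if (!doing_addr16_hi && off == DELIMITER) {
			doing_addr16_hi = 1;	/* switch relocation type */
			continue;
		}
		if (!doing_addr16_hi) {
			/* R_PPC64_ADDR64/R_PPC64_TOC: rebias the 64-bit slot */
			uint64_t *p = (uint64_t *)(uintptr_t)(off + delta);

			if (*p)
				*p += delta + base;
		} else {
			/* R_PPC64_ADDR16_HI: OR the delta's high halfword in */
			uint16_t *p = (uint16_t *)(uintptr_t)(off + delta);

			*p |= (uint16_t)(delta >> 16);
		}
	}
}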
Signed-off-by: Mohan Kumar M <mohan@in.ibm.com>
---
arch/powerpc/boot/reloc_apply.S | 242 +++++++++++++++++++++++++++++++++++++++
1 files changed, 242 insertions(+), 0 deletions(-)
create mode 100644 arch/powerpc/boot/reloc_apply.S
diff --git a/arch/powerpc/boot/reloc_apply.S b/arch/powerpc/boot/reloc_apply.S
new file mode 100644
index 0000000..1886890
--- /dev/null
+++ b/arch/powerpc/boot/reloc_apply.S
@@ -0,0 +1,242 @@
+/*
+ * Written by Mohan Kumar M (mohan@in.ibm.com) for 64bit PowerPC
+ *
+ * This file contains the low-level support and setup for the
+ * relocatable support for PPC64 kernel
+ *
+ * Copyright (C) IBM Corporation, 2008
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <asm/ppc_asm.h>
+
+#define RELOC_DELTA 0x4000000002000000
+
+#define LOADADDR(rn,name) \
+ lis rn,name##@highest; \
+ ori rn,rn,name##@higher; \
+ rldicr rn,rn,32,31; \
+ oris rn,rn,name##@h; \
+ ori rn,rn,name##@l
+
+/*
+ * Layout of vmlinux.reloc file
+ * Minimal part of relocation applying code +
+ * vmlinux +
+ * Rest of the relocation applying code
+ */
+
+.section .text.head
+
+.globl start_wrap
+start_wrap:
+ /* Get relocation offset in r15 */
+ bl 1f
+1: mflr r15
+ LOAD_REG_IMMEDIATE(r16,1b)
+ subf r15,r16,r15
+
+ LOAD_REG_IMMEDIATE(r17, _reloc)
+ add r17,r17,r15
+ mtctr r17
+ bctr /* Jump to start_reloc in section ".text.reloc" */
+
+/* Secondary cpus spin code */
+. = 0x60
+ /* Get relocation offset in r15 */
+ bl 1f
+1: mflr r15
+ LOAD_REG_IMMEDIATE(r16,1b)
+ subf r15,r16,r15
+
+ LOADADDR(r18, __spinloop)
+ add r18,r18,r15
+100: ld r19,0(r18)
+ cmpwi 0,r19,1
+ bne 100b
+
+ LOAD_REG_IMMEDIATE(r17, _reloc)
+ add r17,r17,r15
+ addi r17,r17,0x60
+ mtctr r17
+ /* Jump to start_reloc + 0x60 in section ".text.reloc" */
+ bctr
+
+/*
+ * Layout of vmlinux.reloc file
+ * Minimal part of relocation applying code +
+ * vmlinux +
+ * Rest of the relocation applying code
+ */
+
+
+.section .text.reloc
+
+start_reloc:
+ b master
+
+.org start_reloc + 0x60
+ LOADADDR(r18, __spinloop)
+ add r18,r18,r15
+100: ld r19,0(r18)
+ cmpwi 0,r19,2
+ bne 100b
+
+ /* Now vmlinux is at _head */
+ LOAD_REG_IMMEDIATE(r17, _head)
+ add r17,r17,r15
+ addi r17,r17,0x60
+ mtctr r17
+ bctr
+
+master:
+ LOAD_REG_IMMEDIATE(r16, output_len)
+ add r16,r16,r15
+
+ /*
+ * Load the delimiter to distinguish between different relocation
+ * types
+ */
+ LOAD_REG_IMMEDIATE(r24, __delimiter)
+ add r24,r24,r15
+ ld r24,0(r24)
+
+ LOAD_REG_IMMEDIATE(r17, _head)
+ LOAD_REG_IMMEDIATE(r21, _ehead)
+ sub r21,r21,r17 /* Number of bytes in head section */
+
+ sub r16,r16,r21 /* Original output_len */
+
+ /* Destination address */
+ LOAD_REG_IMMEDIATE(r17, _head) /* KERNELBASE */
+ add r17,r17,r15
+
+ /* Source address */
+ LOAD_REG_IMMEDIATE(r18, _text) /* Regular vmlinux */
+ add r18,r18,r15
+
+ /* Number of bytes to copy */
+ LOAD_REG_IMMEDIATE(r19, _etext)
+ add r19,r19,r15
+ sub r19,r19,r18
+
+ /* Release secondary cpus to the spin loop in ".text.reloc" */
+ LOADADDR(r23, __spinloop)
+ add r23,r23,r15
+ li r25,1
+ stw r25,0(r23)
+
+ /* Copy vmlinux code to physical address 0 */
+ bl .copy /* copy(_head, _text, _etext-_text) */
+
+ /*
+ * If it's not running at 32MB, assume it is a normal kernel.
+ * Copy the vmlinux code to KERNELBASE and jump to KERNELBASE
+ */
+ LOAD_REG_IMMEDIATE(r21, RELOC_DELTA)
+ cmpd r15,r21
+ beq apply_relocation
+ li r6,0
+ b skip_apply
+apply_relocation:
+
+ /* Kernel is running at 32MB */
+ mr r22,r15
+ xor r23,r23,r23
+ addi r23,r23,16
+ srw r22,r22,r23
+
+ li r25,0
+
+ LOAD_REG_IMMEDIATE(r26, _head)
+
+ /*
+ * Start reading the relocation offset list from end of file
+ * Based on the relocation type either add the relocation delta
+ * or do logical ORing the relocation delta
+ */
+3:
+ addi r16,r16,-8
+ ld r18,0(r16)
+ cmpdi r18,0 /* Processed all offsets */
+ beq 4f /* Start vmlinux */
+ /* Are we processing relocation type R_PPC64_ADDR16_HI? */
+ cmpdi r25,1
+ beq rel_hi
+ cmpd r18,r24
+ beq set_rel_hi
+ /* Process 64bit absolute relocation update */
+rel_addr64:
+ add r18,r18,r15
+ ld r28,0(r18)
+ cmpdi r28,0
+ beq next
+ add r28,r28,r15 /* add relocation offset */
+ add r28,r28,r26 /* add KERNELBASE */
+ std r28,0(r18)
+ b next
+set_rel_hi: /* Enable R_PPC64_ADDR16_HI flag */
+ addi r25,r25,1
+ b 3b
+rel_hi:
+ add r18,r18,r15
+ lhz r28,0(r18)
+ or r28,r28,r22
+ sth r28,0(r18)
+next:
+ b 3b
+4:
+ mr r6,r15
+
+
+skip_apply:
+ isync
+ sync
+
+ /* Now vmlinux is at _head */
+ LOAD_REG_IMMEDIATE(r17, _head)
+ add r17,r17,r15
+ mtctr r17
+
+ /* Release secondary cpus to the spin code at _head + 0x60 */
+ LOADADDR(r23, __spinloop)
+ add r23,r23,r15
+ li r25,2
+ stw r25,0(r23)
+
+ bctr
+
+/* r17 destination, r18 source, r19 size */
+.copy:
+ addi r19,r19,-8
+ li r22,-8
+4: li r21,8 /* Use the smallest common */
+ /* denominator cache line */
+ /* size. This results in */
+ /* extra cache line flushes */
+ /* but operation is correct. */
+ /* Can't get cache line size */
+ /* from NACA as it is being */
+ /* moved too. */
+
+ mtctr r21 /* put # words/line in ctr */
+3: addi r22,r22,8 /* copy a cache line */
+ ldx r21,r22,r18
+ stdx r21,r22,r17
+ bdnz 3b
+ dcbst r22,r17 /* write it to memory */
+ sync
+ icbi r22,r17 /* flush the icache line */
+ cmpld 0,r22,r19
+ blt 4b
+ sync
+ blr
+
+__delimiter:
+ .llong 0xffffffffffffffff
+__spinloop:
+ .llong 0x0
--
1.5.4
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH 4/5] Relocation support
2008-08-11 20:16 ` [PATCH 4/5] Relocation support Mohan Kumar M
@ 2008-08-12 8:11 ` Mohan Kumar M
2008-08-13 5:20 ` Paul Mackerras
0 siblings, 1 reply; 15+ messages in thread
From: Mohan Kumar M @ 2008-08-12 8:11 UTC (permalink / raw)
To: ppcdev; +Cc: paulus, miltonm
Relocation support
Add relocatable kernel support: avoid copying the vmlinux image to
the compile-time address, add the relocation delta to absolute symbol
references, and so on. ld does not provide relocation entries for the
.got section, and the user space relocation extraction program cannot
process @got entries, so use the LOAD_REG_IMMEDIATE macro instead of
the LOAD_REG_ADDR macro for the relocatable kernel.
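Why this substitution helps: LOAD_REG_IMMEDIATE builds the address out
of 16-bit immediates embedded directly in the instruction stream, so
the --emit-relocs output carries relocations (R_PPC64_ADDR16_HI and
friends) that point at patchable instruction words, whereas a @got
load reads a GOT slot for which ld emits no relocation entry at all.
A C model of the lis/ori/rldicr/oris/ori sequence, illustrative only
and mirroring the LOADADDR macro from patch 3:

#include <stdint.h>

/* rn is assembled exactly as the five-instruction sequence builds it */
static uint64_t load_reg_immediate(uint64_t addr)
{
	uint64_t rn;

	rn  = ((addr >> 48) & 0xffff) << 16;	/* lis    rn,addr@highest */
	rn |=  (addr >> 32) & 0xffff;		/* ori    rn,rn,addr@higher */
	rn <<= 32;				/* rldicr rn,rn,32,31 */
	rn |= ((addr >> 16) & 0xffff) << 16;	/* oris   rn,rn,addr@h */
	rn |=   addr        & 0xffff;		/* ori    rn,rn,addr@l */
	return rn;
}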
Signed-off-by: Mohan Kumar M <mohan@in.ibm.com>
---
arch/powerpc/include/asm/ppc_asm.h | 4 ++
arch/powerpc/include/asm/prom.h | 2 +
arch/powerpc/include/asm/sections.h | 4 ++-
arch/powerpc/include/asm/system.h | 5 +++
arch/powerpc/kernel/head_64.S | 53 ++++++++++++++++++++++++++++++-
arch/powerpc/kernel/machine_kexec_64.c | 4 +-
arch/powerpc/kernel/misc.S | 40 +++++++++++++++++++-----
arch/powerpc/kernel/prom.c | 13 +++++++-
arch/powerpc/kernel/prom_init.c | 27 +++++++++++++---
arch/powerpc/kernel/prom_init_check.sh | 2 +-
arch/powerpc/kernel/setup_64.c | 5 +--
arch/powerpc/mm/hash_low_64.S | 12 +++++++
arch/powerpc/mm/init_64.c | 7 ++--
arch/powerpc/mm/mem.c | 3 +-
arch/powerpc/mm/slb_low.S | 4 ++
15 files changed, 158 insertions(+), 27 deletions(-)
diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
index 0966899..2309ad0 100644
--- a/arch/powerpc/include/asm/ppc_asm.h
+++ b/arch/powerpc/include/asm/ppc_asm.h
@@ -295,8 +295,12 @@ n:
oris (reg),(reg),(expr)@h; \
ori (reg),(reg),(expr)@l;
+#ifdef CONFIG_RELOCATABLE_PPC64
+#define LOAD_REG_ADDR(reg,name) LOAD_REG_IMMEDIATE(reg,name)
+#else
#define LOAD_REG_ADDR(reg,name) \
ld (reg),name@got(r2)
+#endif
#define LOAD_REG_ADDRBASE(reg,name) LOAD_REG_ADDR(reg,name)
#define ADDROFF(name) 0
diff --git a/arch/powerpc/include/asm/prom.h b/arch/powerpc/include/asm/prom.h
index eb3bd2e..4d7aa4f 100644
--- a/arch/powerpc/include/asm/prom.h
+++ b/arch/powerpc/include/asm/prom.h
@@ -39,6 +39,8 @@
#define OF_DT_VERSION 0x10
+extern unsigned long reloc_delta, kernel_base;
+
/*
* This is what gets passed to the kernel by prom_init or kexec
*
diff --git a/arch/powerpc/include/asm/sections.h b/arch/powerpc/include/asm/sections.h
index 916018e..f19dab3 100644
--- a/arch/powerpc/include/asm/sections.h
+++ b/arch/powerpc/include/asm/sections.h
@@ -7,10 +7,12 @@
#ifdef __powerpc64__
extern char _end[];
+extern unsigned long kernel_base;
static inline int in_kernel_text(unsigned long addr)
{
- if (addr >= (unsigned long)_stext && addr < (unsigned long)__init_end)
+ if (addr >= (unsigned long)_stext && addr < (unsigned long)__init_end
+ + kernel_base)
return 1;
return 0;
diff --git a/arch/powerpc/include/asm/system.h b/arch/powerpc/include/asm/system.h
index d6648c1..065c830 100644
--- a/arch/powerpc/include/asm/system.h
+++ b/arch/powerpc/include/asm/system.h
@@ -537,6 +537,11 @@ extern unsigned long add_reloc_offset(unsigned long);
extern void reloc_got2(unsigned long);
#define PTRRELOC(x) ((typeof(x)) add_reloc_offset((unsigned long)(x)))
+#ifdef CONFIG_PPC64
+#define RELOC(x) (*PTRRELOC(&(x)))
+#else
+#define RELOC(x) (x)
+#endif
#ifdef CONFIG_VIRT_CPU_ACCOUNTING
extern void account_system_vtime(struct task_struct *);
diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
index cc8fb47..6274686 100644
--- a/arch/powerpc/kernel/head_64.S
+++ b/arch/powerpc/kernel/head_64.S
@@ -102,6 +102,12 @@ __secondary_hold_acknowledge:
.llong hvReleaseData-KERNELBASE
#endif /* CONFIG_PPC_ISERIES */
+#ifdef CONFIG_RELOCATABLE_PPC64
+ /* Static flag so that reloc_delta is initialized only once */
+__initialized:
+ .long 0x0
+#endif
+
. = 0x60
/*
* The following code is used to hold secondary processors
@@ -1248,6 +1254,38 @@ _STATIC(__mmu_off)
*
*/
_GLOBAL(__start_initialization_multiplatform)
+#ifdef CONFIG_RELOCATABLE_PPC64
+ mr r21,r3
+ mr r22,r4
+ mr r23,r5
+ bl .reloc_offset
+ mr r26,r3
+ mr r3,r21
+ mr r4,r22
+ mr r5,r23
+
+ LOAD_REG_IMMEDIATE(r27, __initialized)
+ add r27,r26,r27
+ ld r7,0(r27)
+ cmpdi r7,0
+ bne 4f
+
+ li r7,1
+ stw r7,0(r27)
+
+ cmpdi r6,0
+ beq 4f
+ LOAD_REG_IMMEDIATE(r27, reloc_delta)
+ add r27,r27,r26
+ std r6,0(r27)
+
+ LOAD_REG_IMMEDIATE(r27, KERNELBASE)
+ add r7,r6,r27
+ LOAD_REG_IMMEDIATE(r27, kernel_base)
+ add r27,r27,r26
+ std r7,0(r27)
+4:
+#endif
/*
* Are we booted from a PROM Of-type client-interface ?
*/
@@ -1323,6 +1361,19 @@ _INIT_STATIC(__boot_from_prom)
trap
_STATIC(__after_prom_start)
+ bl .reloc_offset
+ mr r26,r3
+#ifdef CONFIG_RELOCATABLE_PPC64
+ /*
+ * If it's a relocatable kernel, there is no need to copy the kernel
+ * to PHYSICAL_START. Continue running from the current location
+ */
+ LOAD_REG_IMMEDIATE(r27, reloc_delta)
+ add r27,r27,r26
+ ld r28,0(r27)
+ cmpdi r28,0
+ bne .start_here_multiplatform
+#endif
/*
* We need to run with __start at physical address PHYSICAL_START.
@@ -1336,8 +1387,6 @@ _STATIC(__after_prom_start)
* r26 == relocation offset
* r27 == KERNELBASE
*/
- bl .reloc_offset
- mr r26,r3
LOAD_REG_IMMEDIATE(r27, KERNELBASE)
LOAD_REG_IMMEDIATE(r3, PHYSICAL_START) /* target addr */
diff --git a/arch/powerpc/kernel/machine_kexec_64.c b/arch/powerpc/kernel/machine_kexec_64.c
index a168514..ce19c44 100644
--- a/arch/powerpc/kernel/machine_kexec_64.c
+++ b/arch/powerpc/kernel/machine_kexec_64.c
@@ -43,7 +43,7 @@ int default_machine_kexec_prepare(struct kimage *image)
* overlaps kernel static data or bss.
*/
for (i = 0; i < image->nr_segments; i++)
- if (image->segment[i].mem < __pa(_end))
+ if (image->segment[i].mem < (__pa(_end) + kernel_base))
return -ETXTBSY;
/*
@@ -317,7 +317,7 @@ static void __init export_htab_values(void)
if (!node)
return;
- kernel_end = __pa(_end);
+ kernel_end = __pa(_end) + kernel_base;
prom_add_property(node, &kernel_end_prop);
/* On machines with no htab htab_address is NULL */
diff --git a/arch/powerpc/kernel/misc.S b/arch/powerpc/kernel/misc.S
index 85cb6f3..28718ed 100644
--- a/arch/powerpc/kernel/misc.S
+++ b/arch/powerpc/kernel/misc.S
@@ -20,6 +20,8 @@
#include <asm/asm-compat.h>
#include <asm/asm-offsets.h>
+#define RELOC_DELTA 0x4000000002000000
+
.text
/*
@@ -33,6 +35,17 @@ _GLOBAL(reloc_offset)
1: mflr r3
LOAD_REG_IMMEDIATE(r4,1b)
subf r3,r4,r3
+#ifdef CONFIG_RELOCATABLE_PPC64
+ LOAD_REG_IMMEDIATE(r5, RELOC_DELTA)
+ cmpd r3,r5
+ bne 2f
+ /*
+ * Don't return the offset if the difference is
+ * RELOC_DELTA
+ */
+ li r3,0
+2:
+#endif
mtlr r0
blr
@@ -40,14 +53,25 @@ _GLOBAL(reloc_offset)
* add_reloc_offset(x) returns x + reloc_offset().
*/
_GLOBAL(add_reloc_offset)
- mflr r0
- bl 1f
-1: mflr r5
- LOAD_REG_IMMEDIATE(r4,1b)
- subf r5,r4,r5
- add r3,r3,r5
- mtlr r0
- blr
+ mflr r0
+ bl 1f
+1: mflr r5
+ LOAD_REG_IMMEDIATE(r4,1b)
+ subf r5,r4,r5
+#ifdef CONFIG_RELOCATABLE_PPC64
+ LOAD_REG_IMMEDIATE(r4, RELOC_DELTA)
+ cmpd r5,r4
+ bne 2f
+ /*
+ * Don't add the offset if the difference is
+ * RELOC_DELTA
+ */
+ li r5,0
+2:
+#endif
+ add r3,r3,r5
+ mtlr r0
+ blr
_GLOBAL(kernel_execve)
li r0,__NR_execve
diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index 87d83c5..453dc98 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -65,6 +65,9 @@
static int __initdata dt_root_addr_cells;
static int __initdata dt_root_size_cells;
+unsigned long reloc_delta __attribute__ ((__section__ (".data")));
+unsigned long kernel_base __attribute__ ((__section__ (".data")));
+
#ifdef CONFIG_PPC64
int __initdata iommu_is_off;
int __initdata iommu_force_on;
@@ -1163,8 +1166,16 @@ void __init early_init_devtree(void *params)
parse_early_param();
/* Reserve LMB regions used by kernel, initrd, dt, etc... */
- lmb_reserve(PHYSICAL_START, __pa(klimit) - PHYSICAL_START);
reserve_kdump_trampoline();
+#ifdef CONFIG_RELOCATABLE_PPC64
+ if (RELOC(kernel_base)) {
+ lmb_reserve(0, KDUMP_RESERVE_LIMIT);
+ lmb_reserve(kernel_base, __pa(klimit) - PHYSICAL_START);
+ } else
+ lmb_reserve(PHYSICAL_START, __pa(klimit) - PHYSICAL_START);
+#else
+ lmb_reserve(PHYSICAL_START, __pa(klimit) - PHYSICAL_START);
+#endif
reserve_crashkernel();
early_reserve_mem();
phyp_dump_reserve_mem();
diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c
index b72849a..8e8ddbe 100644
--- a/arch/powerpc/kernel/prom_init.c
+++ b/arch/powerpc/kernel/prom_init.c
@@ -91,11 +91,9 @@ extern const struct linux_logo logo_linux_clut224;
* fortunately don't get interpreted as two arguments).
*/
#ifdef CONFIG_PPC64
-#define RELOC(x) (*PTRRELOC(&(x)))
#define ADDR(x) (u32) add_reloc_offset((unsigned long)(x))
#define OF_WORKAROUNDS 0
#else
-#define RELOC(x) (x)
#define ADDR(x) (u32) (x)
#define OF_WORKAROUNDS of_workarounds
int of_workarounds;
@@ -1078,7 +1076,12 @@ static void __init prom_init_mem(void)
}
}
- RELOC(alloc_bottom) = PAGE_ALIGN((unsigned long)&RELOC(_end) + 0x4000);
+#ifndef CONFIG_RELOCATABLE_PPC64
+ RELOC(alloc_bottom) = PAGE_ALIGN((unsigned long)&RELOC(_end) + 0x4000);
+#else
+ RELOC(alloc_bottom) = PAGE_ALIGN((unsigned long)&RELOC(_end) + 0x4000 +
+ RELOC(reloc_delta));
+#endif
/* Check if we have an initrd after the kernel, if we do move our bottom
* point to after it
@@ -1338,10 +1341,17 @@ static void __init prom_hold_cpus(void)
phandle node;
char type[64];
struct prom_t *_prom = &RELOC(prom);
+#ifndef CONFIG_RELOCATABLE_PPC64
unsigned long *spinloop
= (void *) LOW_ADDR(__secondary_hold_spinloop);
unsigned long *acknowledge
= (void *) LOW_ADDR(__secondary_hold_acknowledge);
+#else
+ unsigned long *spinloop
+ = (void *) &__secondary_hold_spinloop;
+ unsigned long *acknowledge
+ = (void *) &__secondary_hold_acknowledge;
+#endif
#ifdef CONFIG_PPC64
/* __secondary_hold is actually a descriptor, not the text address */
unsigned long secondary_hold
@@ -2376,8 +2386,15 @@ unsigned long __init prom_init(unsigned long r3, unsigned long r4,
/*
* Copy the CPU hold code
*/
- if (RELOC(of_platform) != PLATFORM_POWERMAC)
- copy_and_flush(0, KERNELBASE + offset, 0x100, 0);
+ if (RELOC(of_platform) != PLATFORM_POWERMAC) {
+#ifdef CONFIG_RELOCATABLE_PPC64
+ if (RELOC(reloc_delta))
+ copy_and_flush(0, KERNELBASE + RELOC(reloc_delta),
+ 0x100, 0);
+ else
+#endif
+ copy_and_flush(0, KERNELBASE + offset, 0x100, 0);
+ }
/*
* Do early parsing of command line
diff --git a/arch/powerpc/kernel/prom_init_check.sh b/arch/powerpc/kernel/prom_init_check.sh
index 2c7e8e8..3cc7e24 100644
--- a/arch/powerpc/kernel/prom_init_check.sh
+++ b/arch/powerpc/kernel/prom_init_check.sh
@@ -20,7 +20,7 @@ WHITELIST="add_reloc_offset __bss_start __bss_stop copy_and_flush
_end enter_prom memcpy memset reloc_offset __secondary_hold
__secondary_hold_acknowledge __secondary_hold_spinloop __start
strcmp strcpy strlcpy strlen strncmp strstr logo_linux_clut224
-reloc_got2 kernstart_addr"
+reloc_got2 kernstart_addr reloc_delta"
NM="$1"
OBJ="$2"
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 8b25f51..5498662 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -208,7 +208,6 @@ void __init early_setup(unsigned long dt_ptr)
/* Probe the machine type */
probe_machine();
-
setup_kdump_trampoline();
DBG("Found, Initializing memory management...\n");
@@ -526,9 +525,9 @@ void __init setup_arch(char **cmdline_p)
if (ppc_md.panic)
setup_panic();
- init_mm.start_code = (unsigned long)_stext;
+ init_mm.start_code = (unsigned long)_stext + kernel_base;
init_mm.end_code = (unsigned long) _etext;
- init_mm.end_data = (unsigned long) _edata;
+ init_mm.end_data = (unsigned long) _edata + kernel_base;
init_mm.brk = klimit;
irqstack_early_init();
diff --git a/arch/powerpc/mm/hash_low_64.S b/arch/powerpc/mm/hash_low_64.S
index a719f53..acf64d9 100644
--- a/arch/powerpc/mm/hash_low_64.S
+++ b/arch/powerpc/mm/hash_low_64.S
@@ -168,7 +168,11 @@ END_FTR_SECTION(CPU_FTR_NOEXECUTE|CPU_FTR_COHERENT_ICACHE, CPU_FTR_NOEXECUTE)
std r3,STK_PARM(r4)(r1)
/* Get htab_hash_mask */
+#ifndef CONFIG_RELOCATABLE_PPC64
ld r4,htab_hash_mask@got(2)
+#else
+ LOAD_REG_IMMEDIATE(r4,htab_hash_mask)
+#endif
ld r27,0(r4) /* htab_hash_mask -> r27 */
/* Check if we may already be in the hashtable, in this case, we
@@ -461,7 +465,11 @@ END_FTR_SECTION(CPU_FTR_NOEXECUTE|CPU_FTR_COHERENT_ICACHE, CPU_FTR_NOEXECUTE)
std r3,STK_PARM(r4)(r1)
/* Get htab_hash_mask */
+#ifndef CONFIG_RELOCATABLE_PPC64
ld r4,htab_hash_mask@got(2)
+#else
+ LOAD_REG_IMMEDIATE(r4,htab_hash_mask)
+#endif
ld r27,0(r4) /* htab_hash_mask -> r27 */
/* Check if we may already be in the hashtable, in this case, we
@@ -792,7 +800,11 @@ END_FTR_SECTION(CPU_FTR_NOEXECUTE|CPU_FTR_COHERENT_ICACHE, CPU_FTR_NOEXECUTE)
std r3,STK_PARM(r4)(r1)
/* Get htab_hash_mask */
+#ifndef CONFIG_RELOCATABLE_PPC64
ld r4,htab_hash_mask@got(2)
+#else
+ LOAD_REG_IMMEDIATE(r4,htab_hash_mask)
+#endif
ld r27,0(r4) /* htab_hash_mask -> r27 */
/* Check if we may already be in the hashtable, in this case, we
diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index 4f7df85..086fa2d 100644
--- a/arch/powerpc/mm/init_64.c
+++ b/arch/powerpc/mm/init_64.c
@@ -79,10 +79,11 @@ phys_addr_t kernstart_addr;
void free_initmem(void)
{
- unsigned long addr;
+ unsigned long long addr, eaddr;
- addr = (unsigned long)__init_begin;
- for (; addr < (unsigned long)__init_end; addr += PAGE_SIZE) {
+ addr = (unsigned long long)__init_begin + kernel_base;
+ eaddr = (unsigned long long)__init_end + kernel_base;
+ for (; addr < eaddr; addr += PAGE_SIZE) {
memset((void *)addr, POISON_FREE_INITMEM, PAGE_SIZE);
ClearPageReserved(virt_to_page(addr));
init_page_count(virt_to_page(addr));
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 1c93c25..04e5d06 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -362,7 +362,8 @@ void __init mem_init(void)
}
}
- codesize = (unsigned long)&_sdata - (unsigned long)&_stext;
+ codesize = (unsigned long)&_sdata - (unsigned long)&_stext
+ + kernel_base;
datasize = (unsigned long)&_edata - (unsigned long)&_sdata;
initsize = (unsigned long)&__init_end - (unsigned long)&__init_begin;
bsssize = (unsigned long)&__bss_stop - (unsigned long)&__bss_start;
diff --git a/arch/powerpc/mm/slb_low.S b/arch/powerpc/mm/slb_low.S
index bc44dc4..aadc389 100644
--- a/arch/powerpc/mm/slb_low.S
+++ b/arch/powerpc/mm/slb_low.S
@@ -128,7 +128,11 @@ END_FTR_SECTION_IFCLR(CPU_FTR_1T_SEGMENT)
/* Now get to the array and obtain the sllp
*/
ld r11,PACATOC(r13)
+#ifndef CONFIG_RELOCATABLE_PPC64
ld r11,mmu_psize_defs@got(r11)
+#else
+ LOAD_REG_IMMEDIATE(r11,mmu_psize_defs)
+#endif
add r11,r11,r9
ld r11,MMUPSIZESLLP(r11)
ori r11,r11,SLB_VSID_USER
--
1.5.4
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH 5/5] Relocation support for kdump kernel
2008-08-11 20:18 ` [PATCH 5/5] Relocation support for kdump kernel Mohan Kumar M
@ 2008-08-12 8:11 ` Mohan Kumar M
0 siblings, 0 replies; 15+ messages in thread
From: Mohan Kumar M @ 2008-08-12 8:11 UTC (permalink / raw)
To: ppcdev; +Cc: paulus, miltonm
Relocation support for kdump kernel
Add relocatable kernel support to the kdump kernel path.
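The recurring idiom in this patch is a runtime check of reloc_delta,
which is nonzero exactly when the binary was loaded at 32MB by
kexec -p, so it doubles as an "am I the kdump kernel" test. A minimal
sketch of the idiom; the helper name is hypothetical, not from the
patch:

/* set in head_64.S by patch 4; nonzero only when loaded at 32MB */
extern unsigned long reloc_delta;

static inline int running_as_kdump_kernel(void)
{
	return reloc_delta != 0;
}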
Signed-off-by: Mohan Kumar M <mohan@in.ibm.com>
---
arch/powerpc/kernel/crash_dump.c | 19 +++++++++++++++++++
arch/powerpc/kernel/iommu.c | 7 +++++--
arch/powerpc/kernel/machine_kexec.c | 6 ++++++
arch/powerpc/mm/hash_utils_64.c | 5 +++--
arch/powerpc/platforms/pseries/iommu.c | 5 ++++-
5 files changed, 37 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/kernel/crash_dump.c b/arch/powerpc/kernel/crash_dump.c
index e0debcc..58354b8 100644
--- a/arch/powerpc/kernel/crash_dump.c
+++ b/arch/powerpc/kernel/crash_dump.c
@@ -29,7 +29,12 @@
void __init reserve_kdump_trampoline(void)
{
+#ifdef CONFIG_RELOCATABLE_PPC64
+ if (RELOC(reloc_delta))
+ lmb_reserve(0, KDUMP_RESERVE_LIMIT);
+#else
lmb_reserve(0, KDUMP_RESERVE_LIMIT);
+#endif
}
static void __init create_trampoline(unsigned long addr)
@@ -45,7 +50,11 @@ static void __init create_trampoline(unsigned long addr)
* two instructions it doesn't require any registers.
*/
patch_instruction(p, PPC_NOP_INSTR);
+#ifndef CONFIG_RELOCATABLE_PPC64
patch_branch(++p, addr + PHYSICAL_START, 0);
+#else
+ patch_branch(++p, addr + (RELOC(reloc_delta) & 0xfffffffffffffff), 0);
+#endif
}
void __init setup_kdump_trampoline(void)
@@ -54,13 +63,23 @@ void __init setup_kdump_trampoline(void)
DBG(" -> setup_kdump_trampoline()\n");
+#ifdef CONFIG_RELOCATABLE_PPC64
+ if (!RELOC(reloc_delta))
+ return;
+#endif
+
for (i = KDUMP_TRAMPOLINE_START; i < KDUMP_TRAMPOLINE_END; i += 8) {
create_trampoline(i);
}
#ifdef CONFIG_PPC_PSERIES
+#ifndef CONFIG_RELOCATABLE_PPC64
create_trampoline(__pa(system_reset_fwnmi) - PHYSICAL_START);
create_trampoline(__pa(machine_check_fwnmi) - PHYSICAL_START);
+#else
+ create_trampoline(__pa(system_reset_fwnmi) - RELOC(reloc_delta));
+ create_trampoline(__pa(machine_check_fwnmi) - RELOC(reloc_delta));
+#endif
#endif /* CONFIG_PPC_PSERIES */
DBG(" <- setup_kdump_trampoline()\n");
diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
index 550a193..9ae7657 100644
--- a/arch/powerpc/kernel/iommu.c
+++ b/arch/powerpc/kernel/iommu.c
@@ -494,7 +494,7 @@ struct iommu_table *iommu_init_table(struct iommu_table *tbl, int nid)
spin_lock_init(&tbl->it_lock);
#ifdef CONFIG_CRASH_DUMP
- if (ppc_md.tce_get) {
+ if (reloc_delta && ppc_md.tce_get) {
unsigned long index;
unsigned long tceval;
unsigned long tcecount = 0;
@@ -520,7 +520,10 @@ struct iommu_table *iommu_init_table(struct iommu_table *tbl, int nid)
index < tbl->it_size; index++)
__clear_bit(index, tbl->it_map);
}
- }
+ } else
+ /* Clear the hardware table in case firmware left
+ * allocations in it */
+ ppc_md.tce_free(tbl, tbl->it_offset, tbl->it_size);
#else
/* Clear the hardware table in case firmware left allocations in it */
ppc_md.tce_free(tbl, tbl->it_offset, tbl->it_size);
diff --git a/arch/powerpc/kernel/machine_kexec.c b/arch/powerpc/kernel/machine_kexec.c
index aab7688..75dc6af 100644
--- a/arch/powerpc/kernel/machine_kexec.c
+++ b/arch/powerpc/kernel/machine_kexec.c
@@ -67,6 +67,12 @@ void __init reserve_crashkernel(void)
unsigned long long crash_size, crash_base;
int ret;
+#ifdef CONFIG_RELOCATABLE_PPC64
+ /* Return if it's the kdump kernel */
+ if (reloc_delta)
+ return;
+#endif
+
/* this is necessary because of lmb_phys_mem_size() */
lmb_analyze();
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index 5ce5a4d..29474e9 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -677,8 +677,9 @@ void __init htab_initialize(void)
continue;
}
#endif /* CONFIG_U3_DART */
- BUG_ON(htab_bolt_mapping(base, base + size, __pa(base),
- mode_rw, mmu_linear_psize, mmu_kernel_ssize));
+ BUG_ON(htab_bolt_mapping(base + kernel_base, base + size,
+ __pa(base) + kernel_base, mode_rw, mmu_linear_psize,
+ mmu_kernel_ssize));
}
/*
diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
index a8c4466..480341b 100644
--- a/arch/powerpc/platforms/pseries/iommu.c
+++ b/arch/powerpc/platforms/pseries/iommu.c
@@ -291,7 +291,10 @@ static void iommu_table_setparms(struct pci_controller *phb,
tbl->it_base = (unsigned long)__va(*basep);
-#ifndef CONFIG_CRASH_DUMP
+#ifdef CONFIG_CRASH_DUMP
+ if (!reloc_delta)
+ memset((void *)tbl->it_base, 0, *sizep);
+#else
memset((void *)tbl->it_base, 0, *sizep);
#endif
--
1.5.4
^ permalink raw reply related [flat|nested] 15+ messages in thread
* Re: [PATCH 3/5] Apply relocation
2008-08-12 8:10 ` Mohan Kumar M
@ 2008-08-13 5:11 ` Paul Mackerras
0 siblings, 0 replies; 15+ messages in thread
From: Paul Mackerras @ 2008-08-13 5:11 UTC (permalink / raw)
To: mohan; +Cc: ppcdev, miltonm
Mohan Kumar M writes:
> This code is a wrapper around the regular kernel. It checks whether the
> kernel is loaded at 32MB: if it is not, the image is treated as a
> regular kernel and control is given to the kernel immediately. If the
> kernel is loaded at 32MB, the wrapper applies the relocation delta to
> each offset in the list that was generated and appended by patches 1
> and 2. After updating all offsets, control is given to the relocatable
> kernel.
In patch 1, you output the addresses for three kinds of relocations,
but here you only seem to handle two (R_PPC64_ADDR64 and
R_PPC64_ADDR16_HI, I assume). How does that work?
In general with this patch series, I would like to have seen much more
detailed patch descriptions. I think most of these patches could have
used 4 or 5 paragraphs of description (or more if you like) telling us
things such as why you handle the particular relocations you do and
not others.
Paul.
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 4/5] Relocation support
2008-08-12 8:11 ` Mohan Kumar M
@ 2008-08-13 5:20 ` Paul Mackerras
2008-08-13 18:22 ` Mohan Kumar M
0 siblings, 1 reply; 15+ messages in thread
From: Paul Mackerras @ 2008-08-13 5:20 UTC (permalink / raw)
To: mohan; +Cc: ppcdev, miltonm
Mohan Kumar M writes:
> Add relocatable kernel support like avoiding copying the vmlinux
> image to compile address, adding relocation delta to the absolute
> symbol references etc. ld does not provide relocation entries for
> .got section, and the user space relocation extraction program
> can not process @got entries. So use LOAD_REG_IMMEDIATE macro
> instead of LOAD_REG_ADDR macro for relocatable kernel.
I think this is a symptom of the more general problem that
--emit-relocs doesn't actually give us all of the relocations we
need. That, and the fact that the relevant code paths in ld are more
widely used and better tested, is why I would prefer to build the
kernel as a position-independent executable.
> static inline int in_kernel_text(unsigned long addr)
> {
> - if (addr >= (unsigned long)_stext && addr < (unsigned long)__init_end)
> + if (addr >= (unsigned long)_stext && addr < (unsigned long)__init_end
> + + kernel_base)
Your patch adds kernel_base to some addresses but not to all of them,
so your patch description should have told us why you added it in
those places and not others. If you tell us the general principle
you're following (even if it seems obvious to you) it will be useful
to people chasing bugs or adding new code later on, or even just
trying to understand what the code does.
> - RELOC(alloc_bottom) = PAGE_ALIGN((unsigned long)&RELOC(_end) + 0x4000);
> +#ifndef CONFIG_RELOCATABLE_PPC64
> + RELOC(alloc_bottom) = PAGE_ALIGN((unsigned long)&RELOC(_end) + 0x4000);
> +#else
> + RELOC(alloc_bottom) = PAGE_ALIGN((unsigned long)&RELOC(_end) + 0x4000 +
> + RELOC(reloc_delta));
> +#endif
Ifdefs in code inside a function are frowned upon in the Linux
kernel. Try to find an alternative way to do this, such as ensuring
that reloc_delta is 0 when CONFIG_RELOCATABLE_PPC64 is not set.
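A sketch of the ifdef-free style suggested here (the wrapper macro is
hypothetical, not from the patches): let the value compile to a
constant 0 when the option is off, so the addition folds away and the
call site needs no #ifdef.

#ifdef CONFIG_RELOCATABLE_PPC64
extern unsigned long reloc_delta;
#define kernel_reloc_delta()	RELOC(reloc_delta)
#else
#define kernel_reloc_delta()	0UL
#endif

	/* prom_init_mem() then needs only one unconditional line: */
	RELOC(alloc_bottom) = PAGE_ALIGN((unsigned long)&RELOC(_end) +
					 0x4000 + kernel_reloc_delta());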
Also it's not clear (to me at least) why you need to add reloc_delta in
the relocatable case.
> +#ifndef CONFIG_RELOCATABLE_PPC64
> unsigned long *spinloop
> = (void *) LOW_ADDR(__secondary_hold_spinloop);
> unsigned long *acknowledge
> = (void *) LOW_ADDR(__secondary_hold_acknowledge);
> +#else
> + unsigned long *spinloop
> + = (void *) &__secondary_hold_spinloop;
> + unsigned long *acknowledge
> + = (void *) &__secondary_hold_acknowledge;
> +#endif
This also needs some explanation. (Put it in the patch description or
in a comment in the code, not in a reply to this mail. :)
> +#ifndef CONFIG_RELOCATABLE_PPC64
> ld r4,htab_hash_mask@got(2)
> +#else
> + LOAD_REG_IMMEDIATE(r4,htab_hash_mask)
> +#endif
> ld r27,0(r4) /* htab_hash_mask -> r27 */
Here and in the other similar places, I would prefer you just changed
it to LOAD_REG_ADDR and not have any ifdef.
Paul.
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 4/5] Relocation support
2008-08-13 5:20 ` Paul Mackerras
@ 2008-08-13 18:22 ` Mohan Kumar M
0 siblings, 0 replies; 15+ messages in thread
From: Mohan Kumar M @ 2008-08-13 18:22 UTC (permalink / raw)
To: Paul Mackerras; +Cc: ppcdev, miltonm
Paul Mackerras wrote:
> Mohan Kumar M writes:
>
Hi Paul,
Thanks for your comments.
I will update the patches as per your comments and give a detailed
description for each patch.
Regards,
Mohan.
>
>> static inline int in_kernel_text(unsigned long addr)
>> {
>> - if (addr >= (unsigned long)_stext && addr < (unsigned long)__init_end)
>> + if (addr >= (unsigned long)_stext && addr < (unsigned long)__init_end
>> + + kernel_base)
>
> Your patch adds kernel_base to some addresses but not to all of them,
> so your patch description should have told us why you added it in the
> those places and not others. If you tell us the general principle
> you're following (even if it seems obvious to you) it will be useful
> to people chasing bugs or adding new code later on, or even just
> trying to understand what the code does.
>
>> - RELOC(alloc_bottom) = PAGE_ALIGN((unsigned long)&RELOC(_end) + 0x4000);
>> +#ifndef CONFIG_RELOCATABLE_PPC64
>> + RELOC(alloc_bottom) = PAGE_ALIGN((unsigned long)&RELOC(_end) + 0x4000);
>> +#else
>> + RELOC(alloc_bottom) = PAGE_ALIGN((unsigned long)&RELOC(_end) + 0x4000 +
>> + RELOC(reloc_delta));
>> +#endif
>
> Ifdefs in code inside a function are frowned upon in the Linux
> kernel. Try to find an alternative way to do this, such as ensuring
> that reloc_delta is 0 when CONFIG_RELOCATABLE_PPC64 is not set.
> Also it's not clear (to me at least) why you need to add reloc_delta in
> the relocatable case.
>
>> +#ifndef CONFIG_RELOCATABLE_PPC64
>> unsigned long *spinloop
>> = (void *) LOW_ADDR(__secondary_hold_spinloop);
>> unsigned long *acknowledge
>> = (void *) LOW_ADDR(__secondary_hold_acknowledge);
>> +#else
>> + unsigned long *spinloop
>> + = (void *) &__secondary_hold_spinloop;
>> + unsigned long *acknowledge
>> + = (void *) &__secondary_hold_acknowledge;
>> +#endif
>
> This also needs some explanation. (Put it in the patch description or
> in a comment in the code, not in a reply to this mail. :)
>
>> +#ifndef CONFIG_RELOCATABLE_PPC64
>> ld r4,htab_hash_mask@got(2)
>> +#else
>> + LOAD_REG_IMMEDIATE(r4,htab_hash_mask)
>> +#endif
>> ld r27,0(r4) /* htab_hash_mask -> r27 */
>
> Here and in the other similar places, I would prefer you just changed
> it to LOAD_REG_ADDR and not have any ifdef.
>
> Paul.
^ permalink raw reply [flat|nested] 15+ messages in thread
Thread overview: 15+ messages
2008-08-11 20:11 [PATCH 0/5] Relocatable kernel support for PPC64 Mohan Kumar M
2008-08-11 20:12 ` [PATCH 1/5] Extract list of relocation offsets Mohan Kumar M
2008-08-11 20:14 ` [PATCH 2/5] Build files needed for relocation Mohan Kumar M
2008-08-12 8:07 ` Mohan Kumar M
2008-08-12 8:09 ` Mohan Kumar M
2008-08-11 20:15 ` [PATCH 3/5] Apply relocation Mohan Kumar M
2008-08-12 0:23 ` Paul Mackerras
2008-08-12 8:10 ` Mohan Kumar M
2008-08-13 5:11 ` Paul Mackerras
2008-08-11 20:16 ` [PATCH 4/5] Relocation support Mohan Kumar M
2008-08-12 8:11 ` Mohan Kumar M
2008-08-13 5:20 ` Paul Mackerras
2008-08-13 18:22 ` Mohan Kumar M
2008-08-11 20:18 ` [PATCH 5/5] Relocation support for kdump kernel Mohan Kumar M
2008-08-12 8:11 ` Mohan Kumar M