* [PATCH 1/5] powerpc: Make early xmon logic immune to location of early parsing
@ 2006-05-17 8:00 Michael Ellerman
2006-05-17 8:00 ` [PATCH 2/5] powerpc: Parse early parameters early, rather than sorta early Michael Ellerman
` (4 more replies)
0 siblings, 5 replies; 11+ messages in thread
From: Michael Ellerman @ 2006-05-17 8:00 UTC (permalink / raw)
To: Paul Mackerras; +Cc: linuxppc-dev, Kumar Gala
Currently early_xmon() calls directly into debugger() if xmon=early is passed.
This ties the invocation of early xmon to the location of parse_early_param(),
which might change.
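The deferral pattern the patch uses can be sketched in plain, standalone C (hypothetical userspace stand-ins, not the kernel code: debugger(NULL) is replaced by a flag check, and early_xmon() here is a simplified model of the real early_param handler):

```c
#include <assert.h>
#include <string.h>

/* Set during option parsing, consumed later at a fixed point in setup. */
static int do_early_xmon;

/* Stand-in for the early_param("xmon", ...) handler: instead of calling
 * the debugger directly, just remember that it was requested. */
static int early_xmon(const char *p)
{
	if (p && strncmp(p, "early", 5) == 0)
		do_early_xmon = 1;	/* defer the actual invocation */
	return 0;
}

/* Stand-in for the check now placed in setup_arch()/setup_system();
 * the kernel would call debugger(NULL) when this returns non-zero. */
static int enter_debugger_if_requested(void)
{
	return do_early_xmon;
}
```

Because the handler only records a request, the point where the debugger is actually entered is fixed by the setup code, independent of where parse_early_param() happens to be called.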
Tested on P5 LPAR and F50.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
---
arch/powerpc/kernel/setup-common.c | 3 ++-
arch/powerpc/kernel/setup.h | 1 +
arch/powerpc/kernel/setup_32.c | 3 +++
arch/powerpc/kernel/setup_64.c | 3 +++
4 files changed, 9 insertions(+), 1 deletion(-)
Index: to-merge/arch/powerpc/kernel/setup-common.c
===================================================================
--- to-merge.orig/arch/powerpc/kernel/setup-common.c
+++ to-merge/arch/powerpc/kernel/setup-common.c
@@ -443,6 +443,7 @@ void __init smp_setup_cpu_maps(void)
}
#endif /* CONFIG_SMP */
+int __initdata do_early_xmon;
#ifdef CONFIG_XMON
static int __init early_xmon(char *p)
{
@@ -456,7 +457,7 @@ static int __init early_xmon(char *p)
return 0;
}
xmon_init(1);
- debugger(NULL);
+ do_early_xmon = 1;
return 0;
}
Index: to-merge/arch/powerpc/kernel/setup.h
===================================================================
--- to-merge.orig/arch/powerpc/kernel/setup.h
+++ to-merge/arch/powerpc/kernel/setup.h
@@ -2,5 +2,6 @@
#define _POWERPC_KERNEL_SETUP_H
void check_for_initrd(void);
+extern int do_early_xmon;
#endif /* _POWERPC_KERNEL_SETUP_H */
Index: to-merge/arch/powerpc/kernel/setup_32.c
===================================================================
--- to-merge.orig/arch/powerpc/kernel/setup_32.c
+++ to-merge/arch/powerpc/kernel/setup_32.c
@@ -296,6 +296,9 @@ void __init setup_arch(char **cmdline_p)
parse_early_param();
+ if (do_early_xmon)
+ debugger(NULL);
+
/* set up the bootmem stuff with available memory */
do_init_bootmem();
if ( ppc_md.progress ) ppc_md.progress("setup_arch: bootmem", 0x3eab);
Index: to-merge/arch/powerpc/kernel/setup_64.c
===================================================================
--- to-merge.orig/arch/powerpc/kernel/setup_64.c
+++ to-merge/arch/powerpc/kernel/setup_64.c
@@ -425,6 +425,9 @@ void __init setup_system(void)
parse_early_param();
+ if (do_early_xmon)
+ debugger(NULL);
+
check_smt_enabled();
smp_setup_cpu_maps();
^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH 2/5] powerpc: Parse early parameters early, rather than sorta early
2006-05-17 8:00 [PATCH 1/5] powerpc: Make early xmon logic immune to location of early parsing Michael Ellerman
@ 2006-05-17 8:00 ` Michael Ellerman
2006-05-17 8:00 ` [PATCH 3/5] powerpc: Unify mem= handling Michael Ellerman
` (3 subsequent siblings)
4 siblings, 0 replies; 11+ messages in thread
From: Michael Ellerman @ 2006-05-17 8:00 UTC (permalink / raw)
To: Paul Mackerras; +Cc: linuxppc-dev, Kumar Gala
Currently we call parse_early_param() earlyish, but not really very
early. In particular, it's not early enough to handle things like mem=x or
crashkernel=blah, which is annoying.
So do it earlier. I've checked all the early param handlers, and none of them
look like they should have any trouble with this. I haven't tested the
booke_wdt ones though.
On 32-bit we were doing the CONFIG_CMDLINE logic twice, so don't.
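The ordering matters because parsing may consume or mutate the command line; a minimal userspace sketch of the save-then-parse order (illustrative names only; strtok() stands in for the real parser, which the kernel's parse_early_param() does not actually use):

```c
#include <assert.h>
#include <string.h>

#define COMMAND_LINE_SIZE 512

static char cmd_line[COMMAND_LINE_SIZE] = "root=/dev/sda1 mem=512M";
static char saved_command_line[COMMAND_LINE_SIZE];

/* Stand-in for parse_early_param(): tokenizing mutates cmd_line. */
static void parse_early(void)
{
	char *p = strtok(cmd_line, " ");

	while (p)
		p = strtok(NULL, " ");
}

static void early_init(void)
{
	/* Save the unparsed copy for /proc/cmdline *before* parsing,
	 * as early_init_devtree() now does in this patch. */
	strncpy(saved_command_line, cmd_line, COMMAND_LINE_SIZE - 1);
	parse_early();
}
```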
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
---
arch/powerpc/kernel/prom.c | 5 +++++
arch/powerpc/kernel/setup_32.c | 14 ++------------
arch/powerpc/kernel/setup_64.c | 5 -----
3 files changed, 7 insertions(+), 17 deletions(-)
Index: to-merge/arch/powerpc/kernel/prom.c
===================================================================
--- to-merge.orig/arch/powerpc/kernel/prom.c
+++ to-merge/arch/powerpc/kernel/prom.c
@@ -1292,6 +1292,11 @@ void __init early_init_devtree(void *par
lmb_init();
of_scan_flat_dt(early_init_dt_scan_root, NULL);
of_scan_flat_dt(early_init_dt_scan_memory, NULL);
+
+ /* Save command line for /proc/cmdline and then parse parameters */
+ strlcpy(saved_command_line, cmd_line, COMMAND_LINE_SIZE);
+ parse_early_param();
+
lmb_enforce_memory_limit(memory_limit);
lmb_analyze();
Index: to-merge/arch/powerpc/kernel/setup_32.c
===================================================================
--- to-merge.orig/arch/powerpc/kernel/setup_32.c
+++ to-merge/arch/powerpc/kernel/setup_32.c
@@ -131,12 +131,6 @@ void __init machine_init(unsigned long d
/* Do some early initialization based on the flat device tree */
early_init_devtree(__va(dt_ptr));
- /* Check default command line */
-#ifdef CONFIG_CMDLINE
- if (cmd_line[0] == 0)
- strlcpy(cmd_line, CONFIG_CMDLINE, sizeof(cmd_line));
-#endif /* CONFIG_CMDLINE */
-
probe_machine();
#ifdef CONFIG_6xx
@@ -237,6 +231,8 @@ void __init setup_arch(char **cmdline_p)
{
extern void do_init_bootmem(void);
+ *cmdline_p = cmd_line;
+
/* so udelay does something sensible, assume <= 1000 bogomips */
loops_per_jiffy = 500000000 / HZ;
@@ -290,12 +286,6 @@ void __init setup_arch(char **cmdline_p)
init_mm.end_data = (unsigned long) _edata;
init_mm.brk = klimit;
- /* Save unparsed command line copy for /proc/cmdline */
- strlcpy(saved_command_line, cmd_line, COMMAND_LINE_SIZE);
- *cmdline_p = cmd_line;
-
- parse_early_param();
-
if (do_early_xmon)
debugger(NULL);
Index: to-merge/arch/powerpc/kernel/setup_64.c
===================================================================
--- to-merge.orig/arch/powerpc/kernel/setup_64.c
+++ to-merge/arch/powerpc/kernel/setup_64.c
@@ -420,11 +420,6 @@ void __init setup_system(void)
*/
register_early_udbg_console();
- /* Save unparsed command line copy for /proc/cmdline */
- strlcpy(saved_command_line, cmd_line, COMMAND_LINE_SIZE);
-
- parse_early_param();
-
if (do_early_xmon)
debugger(NULL);
* [PATCH 3/5] powerpc: Unify mem= handling
2006-05-17 8:00 [PATCH 1/5] powerpc: Make early xmon logic immune to location of early parsing Michael Ellerman
2006-05-17 8:00 ` [PATCH 2/5] powerpc: Parse early parameters early, rather than sorta early Michael Ellerman
@ 2006-05-17 8:00 ` Michael Ellerman
2006-05-17 8:00 ` [PATCH 4/5] powerpc: Kdump header cleanup Michael Ellerman
` (2 subsequent siblings)
4 siblings, 0 replies; 11+ messages in thread
From: Michael Ellerman @ 2006-05-17 8:00 UTC (permalink / raw)
To: Paul Mackerras; +Cc: linuxppc-dev, Kumar Gala
We currently do mem= handling in three separate places. And as benh pointed out,
I wrote two of them. Now that we parse command line parameters earlier we can
clean this mess up.
Moving the parsing out of prom_init means the device tree might be allocated
above the memory limit. If that happens we'd have to move it. As it happens
we already have logic to do that for kdump, so just genericise it.
This also means we might have reserved regions above the memory limit; if we
do, the bootmem allocator will blow up, so we have to modify
lmb_enforce_memory_limit() to truncate the reserves as well.
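The reserve-truncation idea can be sketched in isolation (simplified userspace types, not the kernel's lmb structures; the clamp-then-compact shape mirrors the lmb_enforce_memory_limit()/lmb_remove_region() change in the diff below):

```c
#include <assert.h>

struct region { unsigned long base, size; };

/* Clamp each reserved region to the memory limit, drop any region that
 * ends up empty, and compact the array. Returns the new region count. */
static unsigned long truncate_reserves(struct region *r, unsigned long cnt,
				       unsigned long limit)
{
	unsigned long i, j;

	for (i = 0; i < cnt; i++) {
		if (r[i].base > limit)
			r[i].size = 0;		/* entirely above the limit */
		else if (r[i].base + r[i].size > limit)
			r[i].size = limit - r[i].base;	/* straddles it */
	}

	/* Compact away the now-empty regions. */
	for (i = j = 0; i < cnt; i++)
		if (r[i].size)
			r[j++] = r[i];

	return j;
}
```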
Tested on P5 LPAR, iSeries, F50, 44p. Tested moving device tree on P5 and
44p and F50.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
---
arch/powerpc/kernel/machine_kexec_64.c | 5 +
arch/powerpc/kernel/prom.c | 89 +++++++++++++++++----------------
arch/powerpc/kernel/prom_init.c | 55 +-------------------
arch/powerpc/kernel/setup_64.c | 3 -
arch/powerpc/mm/lmb.c | 43 +++++++++++----
arch/powerpc/platforms/iseries/setup.c | 22 --------
include/asm-powerpc/kexec.h | 13 ++++
7 files changed, 100 insertions(+), 130 deletions(-)
Index: to-merge/arch/powerpc/kernel/machine_kexec_64.c
===================================================================
--- to-merge.orig/arch/powerpc/kernel/machine_kexec_64.c
+++ to-merge/arch/powerpc/kernel/machine_kexec_64.c
@@ -339,3 +339,8 @@ void __init kexec_setup(void)
{
export_htab_values();
}
+
+int overlaps_crashkernel(unsigned long start, unsigned long size)
+{
+ return (start + size) > crashk_res.start && start <= crashk_res.end;
+}
Index: to-merge/arch/powerpc/kernel/prom.c
===================================================================
--- to-merge.orig/arch/powerpc/kernel/prom.c
+++ to-merge/arch/powerpc/kernel/prom.c
@@ -50,6 +50,7 @@
#include <asm/machdep.h>
#include <asm/pSeries_reconfig.h>
#include <asm/pci-bridge.h>
+#include <asm/kexec.h>
#ifdef DEBUG
#define DBG(fmt...) printk(KERN_ERR fmt)
@@ -836,6 +837,42 @@ static unsigned long __init unflatten_dt
return mem;
}
+static int __init early_parse_mem(char *p)
+{
+ if (!p)
+ return 1;
+
+ memory_limit = PAGE_ALIGN(memparse(p, &p));
+ DBG("memory limit = 0x%lx\n", memory_limit);
+
+ return 0;
+}
+early_param("mem", early_parse_mem);
+
+/*
+ * The device tree may be allocated below our memory limit, or inside the
+ * crash kernel region for kdump. If so, move it out now.
+ */
+static void move_device_tree(void)
+{
+ unsigned long start, size;
+ void *p;
+
+ DBG("-> move_device_tree\n");
+
+ start = __pa(initial_boot_params);
+ size = initial_boot_params->totalsize;
+
+ if ((memory_limit && (start + size) > memory_limit) ||
+ overlaps_crashkernel(start, size)) {
+ p = __va(lmb_alloc_base(size, PAGE_SIZE, lmb.rmo_size));
+ memcpy(p, initial_boot_params, size);
+ initial_boot_params = (struct boot_param_header *)p;
+ DBG("Moved device tree to 0x%p\n", p);
+ }
+
+ DBG("<- move_device_tree\n");
+}
/**
* unflattens the device-tree passed by the firmware, creating the
@@ -1070,6 +1107,7 @@ static int __init early_init_dt_scan_cho
iommu_force_on = 1;
#endif
+ /* mem=x on the command line is the preferred mechanism */
lprop = of_get_flat_dt_prop(node, "linux,memory-limit", NULL);
if (lprop)
memory_limit = *lprop;
@@ -1123,17 +1161,6 @@ static int __init early_init_dt_scan_cho
DBG("Command line is: %s\n", cmd_line);
- if (strstr(cmd_line, "mem=")) {
- char *p, *q;
-
- for (q = cmd_line; (p = strstr(q, "mem=")) != 0; ) {
- q = p + 4;
- if (p > cmd_line && p[-1] != ' ')
- continue;
- memory_limit = memparse(q, &q);
- }
- }
-
/* break now */
return 1;
}
@@ -1297,11 +1324,6 @@ void __init early_init_devtree(void *par
strlcpy(saved_command_line, cmd_line, COMMAND_LINE_SIZE);
parse_early_param();
- lmb_enforce_memory_limit(memory_limit);
- lmb_analyze();
-
- DBG("Phys. mem: %lx\n", lmb_phys_mem_size());
-
/* Reserve LMB regions used by kernel, initrd, dt, etc... */
lmb_reserve(PHYSICAL_START, __pa(klimit) - PHYSICAL_START);
#ifdef CONFIG_CRASH_DUMP
@@ -1309,6 +1331,15 @@ void __init early_init_devtree(void *par
#endif
early_reserve_mem();
+ lmb_enforce_memory_limit(memory_limit);
+ lmb_analyze();
+
+ DBG("Phys. mem: %lx\n", lmb_phys_mem_size());
+
+ /* We may need to relocate the flat tree, do it now.
+ * FIXME .. and the initrd too? */
+ move_device_tree();
+
DBG("Scanning CPUs ...\n");
/* Retreive CPU related informations from the flat tree
@@ -2058,29 +2089,3 @@ int prom_update_property(struct device_n
return 0;
}
-#ifdef CONFIG_KEXEC
-/* We may have allocated the flat device tree inside the crash kernel region
- * in prom_init. If so we need to move it out into regular memory. */
-void kdump_move_device_tree(void)
-{
- unsigned long start, end;
- struct boot_param_header *new;
-
- start = __pa((unsigned long)initial_boot_params);
- end = start + initial_boot_params->totalsize;
-
- if (end < crashk_res.start || start > crashk_res.end)
- return;
-
- new = (struct boot_param_header*)
- __va(lmb_alloc(initial_boot_params->totalsize, PAGE_SIZE));
-
- memcpy(new, initial_boot_params, initial_boot_params->totalsize);
-
- initial_boot_params = new;
-
- DBG("Flat device tree blob moved to %p\n", initial_boot_params);
-
- /* XXX should we unreserve the old DT? */
-}
-#endif /* CONFIG_KEXEC */
Index: to-merge/arch/powerpc/kernel/prom_init.c
===================================================================
--- to-merge.orig/arch/powerpc/kernel/prom_init.c
+++ to-merge/arch/powerpc/kernel/prom_init.c
@@ -194,8 +194,6 @@ static int __initdata of_platform;
static char __initdata prom_cmd_line[COMMAND_LINE_SIZE];
-static unsigned long __initdata prom_memory_limit;
-
static unsigned long __initdata alloc_top;
static unsigned long __initdata alloc_top_high;
static unsigned long __initdata alloc_bottom;
@@ -594,16 +592,6 @@ static void __init early_cmdline_parse(v
}
#endif
- opt = strstr(RELOC(prom_cmd_line), RELOC("mem="));
- if (opt) {
- opt += 4;
- RELOC(prom_memory_limit) = prom_memparse(opt, (const char **)&opt);
-#ifdef CONFIG_PPC64
- /* Align to 16 MB == size of ppc64 large page */
- RELOC(prom_memory_limit) = ALIGN(RELOC(prom_memory_limit), 0x1000000);
-#endif
- }
-
#ifdef CONFIG_KEXEC
/*
* crashkernel=size@addr specifies the location to reserve for
@@ -1115,29 +1103,6 @@ static void __init prom_init_mem(void)
}
/*
- * If prom_memory_limit is set we reduce the upper limits *except* for
- * alloc_top_high. This must be the real top of RAM so we can put
- * TCE's up there.
- */
-
- RELOC(alloc_top_high) = RELOC(ram_top);
-
- if (RELOC(prom_memory_limit)) {
- if (RELOC(prom_memory_limit) <= RELOC(alloc_bottom)) {
- prom_printf("Ignoring mem=%x <= alloc_bottom.\n",
- RELOC(prom_memory_limit));
- RELOC(prom_memory_limit) = 0;
- } else if (RELOC(prom_memory_limit) >= RELOC(ram_top)) {
- prom_printf("Ignoring mem=%x >= ram_top.\n",
- RELOC(prom_memory_limit));
- RELOC(prom_memory_limit) = 0;
- } else {
- RELOC(ram_top) = RELOC(prom_memory_limit);
- RELOC(rmo_top) = min(RELOC(rmo_top), RELOC(prom_memory_limit));
- }
- }
-
- /*
* Setup our top alloc point, that is top of RMO or top of
* segment 0 when running non-LPAR.
* Some RS64 machines have buggy firmware where claims up at
@@ -1149,9 +1114,9 @@ static void __init prom_init_mem(void)
RELOC(rmo_top) = RELOC(ram_top);
RELOC(rmo_top) = min(0x30000000ul, RELOC(rmo_top));
RELOC(alloc_top) = RELOC(rmo_top);
+ RELOC(alloc_top_high) = RELOC(ram_top);
prom_printf("memory layout at init:\n");
- prom_printf(" memory_limit : %x (16 MB aligned)\n", RELOC(prom_memory_limit));
prom_printf(" alloc_bottom : %x\n", RELOC(alloc_bottom));
prom_printf(" alloc_top : %x\n", RELOC(alloc_top));
prom_printf(" alloc_top_hi : %x\n", RELOC(alloc_top_high));
@@ -1348,16 +1313,10 @@ static void __init prom_initialize_tce_t
reserve_mem(local_alloc_bottom, local_alloc_top - local_alloc_bottom);
- if (RELOC(prom_memory_limit)) {
- /*
- * We align the start to a 16MB boundary so we can map
- * the TCE area using large pages if possible.
- * The end should be the top of RAM so no need to align it.
- */
- RELOC(prom_tce_alloc_start) = _ALIGN_DOWN(local_alloc_bottom,
- 0x1000000);
- RELOC(prom_tce_alloc_end) = local_alloc_top;
- }
+ /* These are only really needed if there is a memory limit in
+ * effect, but we don't know so export them always. */
+ RELOC(prom_tce_alloc_start) = local_alloc_bottom;
+ RELOC(prom_tce_alloc_end) = local_alloc_top;
/* Flag the first invalid entry */
prom_debug("ending prom_initialize_tce_table\n");
@@ -2265,10 +2224,6 @@ unsigned long __init prom_init(unsigned
/*
* Fill in some infos for use by the kernel later on
*/
- if (RELOC(prom_memory_limit))
- prom_setprop(_prom->chosen, "/chosen", "linux,memory-limit",
- &RELOC(prom_memory_limit),
- sizeof(prom_memory_limit));
#ifdef CONFIG_PPC64
if (RELOC(ppc64_iommu_off))
prom_setprop(_prom->chosen, "/chosen", "linux,iommu-off",
Index: to-merge/arch/powerpc/kernel/setup_64.c
===================================================================
--- to-merge.orig/arch/powerpc/kernel/setup_64.c
+++ to-merge/arch/powerpc/kernel/setup_64.c
@@ -353,9 +353,6 @@ void __init setup_system(void)
{
DBG(" -> setup_system()\n");
-#ifdef CONFIG_KEXEC
- kdump_move_device_tree();
-#endif
/*
* Unflatten the device-tree passed by prom_init or kexec
*/
Index: to-merge/arch/powerpc/mm/lmb.c
===================================================================
--- to-merge.orig/arch/powerpc/mm/lmb.c
+++ to-merge/arch/powerpc/mm/lmb.c
@@ -89,20 +89,25 @@ static long __init lmb_regions_adjacent(
return lmb_addrs_adjacent(base1, size1, base2, size2);
}
-/* Assumption: base addr of region 1 < base addr of region 2 */
-static void __init lmb_coalesce_regions(struct lmb_region *rgn,
- unsigned long r1, unsigned long r2)
+static void __init lmb_remove_region(struct lmb_region *rgn, unsigned long r)
{
unsigned long i;
- rgn->region[r1].size += rgn->region[r2].size;
- for (i=r2; i < rgn->cnt-1; i++) {
- rgn->region[i].base = rgn->region[i+1].base;
- rgn->region[i].size = rgn->region[i+1].size;
+ for (i = r; i < rgn->cnt - 1; i++) {
+ rgn->region[i].base = rgn->region[i + 1].base;
+ rgn->region[i].size = rgn->region[i + 1].size;
}
rgn->cnt--;
}
+/* Assumption: base addr of region 1 < base addr of region 2 */
+static void __init lmb_coalesce_regions(struct lmb_region *rgn,
+ unsigned long r1, unsigned long r2)
+{
+ rgn->region[r1].size += rgn->region[r2].size;
+ lmb_remove_region(rgn, r2);
+}
+
/* This routine called with relocation disabled. */
void __init lmb_init(void)
{
@@ -294,17 +299,16 @@ unsigned long __init lmb_end_of_DRAM(voi
return (lmb.memory.region[idx].base + lmb.memory.region[idx].size);
}
-/*
- * Truncate the lmb list to memory_limit if it's set
- * You must call lmb_analyze() after this.
- */
+/* You must call lmb_analyze() after this. */
void __init lmb_enforce_memory_limit(unsigned long memory_limit)
{
unsigned long i, limit;
+ struct lmb_property *p;
if (! memory_limit)
return;
+ /* Truncate the lmb regions to satisfy the memory limit. */
limit = memory_limit;
for (i = 0; i < lmb.memory.cnt; i++) {
if (limit > lmb.memory.region[i].size) {
@@ -316,4 +320,21 @@ void __init lmb_enforce_memory_limit(uns
lmb.memory.cnt = i + 1;
break;
}
+
+ lmb.rmo_size = lmb.memory.region[0].size;
+
+ /* And truncate any reserves above the limit also. */
+ for (i = 0; i < lmb.reserved.cnt; i++) {
+ p = &lmb.reserved.region[i];
+
+ if (p->base > memory_limit)
+ p->size = 0;
+ else if ((p->base + p->size) > memory_limit)
+ p->size = memory_limit - p->base;
+
+ if (p->size == 0) {
+ lmb_remove_region(&lmb.reserved, i);
+ i--;
+ }
+ }
}
Index: to-merge/arch/powerpc/platforms/iseries/setup.c
===================================================================
--- to-merge.orig/arch/powerpc/platforms/iseries/setup.c
+++ to-merge/arch/powerpc/platforms/iseries/setup.c
@@ -90,8 +90,6 @@ extern unsigned long embedded_sysmap_end
extern unsigned long iSeries_recal_tb;
extern unsigned long iSeries_recal_titan;
-static unsigned long cmd_mem_limit;
-
struct MemoryBlock {
unsigned long absStart;
unsigned long absEnd;
@@ -1023,8 +1021,6 @@ void build_flat_dt(struct iseries_flat_d
/* /chosen */
dt_start_node(dt, "chosen");
dt_prop_str(dt, "bootargs", cmd_line);
- if (cmd_mem_limit)
- dt_prop_u64(dt, "linux,memory-limit", cmd_mem_limit);
dt_end_node(dt);
dt_cpus(dt);
@@ -1050,29 +1046,11 @@ void * __init iSeries_early_setup(void)
iSeries_get_cmdline();
- /* Save unparsed command line copy for /proc/cmdline */
- strlcpy(saved_command_line, cmd_line, COMMAND_LINE_SIZE);
-
- /* Parse early parameters, in particular mem=x */
- parse_early_param();
-
build_flat_dt(&iseries_dt, phys_mem_size);
return (void *) __pa(&iseries_dt);
}
-/*
- * On iSeries we just parse the mem=X option from the command line.
- * On pSeries it's a bit more complicated, see prom_init_mem()
- */
-static int __init early_parsemem(char *p)
-{
- if (p)
- cmd_mem_limit = ALIGN(memparse(p, &p), PAGE_SIZE);
- return 0;
-}
-early_param("mem", early_parsemem);
-
static void hvputc(char c)
{
if (c == '\n')
Index: to-merge/include/asm-powerpc/kexec.h
===================================================================
--- to-merge.orig/include/asm-powerpc/kexec.h
+++ to-merge/include/asm-powerpc/kexec.h
@@ -31,9 +31,10 @@
#define KEXEC_ARCH KEXEC_ARCH_PPC
#endif
+#ifndef __ASSEMBLY__
+
#ifdef CONFIG_KEXEC
-#ifndef __ASSEMBLY__
#ifdef __powerpc64__
/*
* This function is responsible for capturing register states if coming
@@ -123,8 +124,16 @@ extern int default_machine_kexec_prepare
extern void default_machine_crash_shutdown(struct pt_regs *regs);
extern void machine_kexec_simple(struct kimage *image);
+extern int overlaps_crashkernel(unsigned long start, unsigned long size);
+
+#else /* !CONFIG_KEXEC */
+
+static inline int overlaps_crashkernel(unsigned long start, unsigned long size)
+{
+ return 0;
+}
-#endif /* ! __ASSEMBLY__ */
#endif /* CONFIG_KEXEC */
+#endif /* ! __ASSEMBLY__ */
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_KEXEC_H */
* [PATCH 4/5] powerpc: Kdump header cleanup
2006-05-17 8:00 [PATCH 1/5] powerpc: Make early xmon logic immune to location of early parsing Michael Ellerman
2006-05-17 8:00 ` [PATCH 2/5] powerpc: Parse early parameters early, rather than sorta early Michael Ellerman
2006-05-17 8:00 ` [PATCH 3/5] powerpc: Unify mem= handling Michael Ellerman
@ 2006-05-17 8:00 ` Michael Ellerman
2006-05-17 8:00 ` [PATCH 5/5] powerpc: Move crashkernel= handling into the kernel Michael Ellerman
2006-05-17 21:29 ` [PATCH 1/5] powerpc: Make early xmon logic immune to location of early parsing Tom Rini
4 siblings, 0 replies; 11+ messages in thread
From: Michael Ellerman @ 2006-05-17 8:00 UTC (permalink / raw)
To: Paul Mackerras; +Cc: linuxppc-dev, Kumar Gala
We need to know the base address of the kdump kernel even when we're not a
kdump kernel, so add a #define for it. Move the logic that sets the kdump
kernelbase into kdump.h instead of page.h.
Rename kdump_setup() to setup_kdump_trampoline() to make it clearer what it's
doing, and add an empty definition for the !CRASH_DUMP case to avoid an
#ifdef in the C code. Similarly, add reserve_kdump_trampoline().
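The stub pattern itself fits in a few lines (hypothetical sketch; CONFIG_CRASH_DUMP is assumed undefined here, so the no-op side is what compiles, and the counter exists only to make the effect observable):

```c
#include <assert.h>

static int trampolines_built;	/* observable side effect for the sketch */

#ifdef CONFIG_CRASH_DUMP
static inline void setup_kdump_trampoline(void)
{
	trampolines_built = 1;	/* real work would happen here */
}
#else
/* Compiled out: the call is a no-op, so call sites need no #ifdef. */
static inline void setup_kdump_trampoline(void)
{
}
#endif
```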
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
---
arch/powerpc/kernel/crash_dump.c | 11 ++++++++---
arch/powerpc/kernel/prom.c | 4 +---
arch/powerpc/kernel/setup_64.c | 4 +---
include/asm-powerpc/kdump.h | 29 +++++++++++++++++++++++++++--
include/asm-powerpc/page.h | 8 +-------
5 files changed, 38 insertions(+), 18 deletions(-)
Index: to-merge/arch/powerpc/kernel/crash_dump.c
===================================================================
--- to-merge.orig/arch/powerpc/kernel/crash_dump.c
+++ to-merge/arch/powerpc/kernel/crash_dump.c
@@ -25,6 +25,11 @@
#define DBG(fmt...)
#endif
+void reserve_kdump_trampoline(void)
+{
+ lmb_reserve(0, KDUMP_RESERVE_LIMIT);
+}
+
static void __init create_trampoline(unsigned long addr)
{
/* The maximum range of a single instruction branch, is the current
@@ -39,11 +44,11 @@ static void __init create_trampoline(uns
create_branch(addr + 4, addr + PHYSICAL_START, 0);
}
-void __init kdump_setup(void)
+void __init setup_kdump_trampoline(void)
{
unsigned long i;
- DBG(" -> kdump_setup()\n");
+ DBG(" -> setup_kdump_trampoline()\n");
for (i = KDUMP_TRAMPOLINE_START; i < KDUMP_TRAMPOLINE_END; i += 8) {
create_trampoline(i);
@@ -52,7 +57,7 @@ void __init kdump_setup(void)
create_trampoline(__pa(system_reset_fwnmi) - PHYSICAL_START);
create_trampoline(__pa(machine_check_fwnmi) - PHYSICAL_START);
- DBG(" <- kdump_setup()\n");
+ DBG(" <- setup_kdump_trampoline()\n");
}
#ifdef CONFIG_PROC_VMCORE
Index: to-merge/arch/powerpc/kernel/prom.c
===================================================================
--- to-merge.orig/arch/powerpc/kernel/prom.c
+++ to-merge/arch/powerpc/kernel/prom.c
@@ -1326,9 +1326,7 @@ void __init early_init_devtree(void *par
/* Reserve LMB regions used by kernel, initrd, dt, etc... */
lmb_reserve(PHYSICAL_START, __pa(klimit) - PHYSICAL_START);
-#ifdef CONFIG_CRASH_DUMP
- lmb_reserve(0, KDUMP_RESERVE_LIMIT);
-#endif
+ reserve_kdump_trampoline();
early_reserve_mem();
lmb_enforce_memory_limit(memory_limit);
Index: to-merge/arch/powerpc/kernel/setup_64.c
===================================================================
--- to-merge.orig/arch/powerpc/kernel/setup_64.c
+++ to-merge/arch/powerpc/kernel/setup_64.c
@@ -199,9 +199,7 @@ void __init early_setup(unsigned long dt
/* Probe the machine type */
probe_machine();
-#ifdef CONFIG_CRASH_DUMP
- kdump_setup();
-#endif
+ setup_kdump_trampoline();
DBG("Found, Initializing memory management...\n");
Index: to-merge/include/asm-powerpc/kdump.h
===================================================================
--- to-merge.orig/include/asm-powerpc/kdump.h
+++ to-merge/include/asm-powerpc/kdump.h
@@ -1,13 +1,38 @@
#ifndef _PPC64_KDUMP_H
#define _PPC64_KDUMP_H
+/* Kdump kernel runs at 32 MB, change at your peril. */
+#define KDUMP_KERNELBASE 0x2000000
+
/* How many bytes to reserve at zero for kdump. The reserve limit should
- * be greater or equal to the trampoline's end address. */
+ * be greater or equal to the trampoline's end address.
+ * Reserve to the end of the FWNMI area, see head_64.S */
#define KDUMP_RESERVE_LIMIT 0x8000
+#ifdef CONFIG_CRASH_DUMP
+
+#define PHYSICAL_START KDUMP_KERNELBASE
#define KDUMP_TRAMPOLINE_START 0x0100
#define KDUMP_TRAMPOLINE_END 0x3000
-extern void kdump_setup(void);
+#else /* !CONFIG_CRASH_DUMP */
+
+#define PHYSICAL_START 0x0
+
+#endif /* CONFIG_CRASH_DUMP */
+
+#ifndef __ASSEMBLY__
+#ifdef CONFIG_CRASH_DUMP
+
+extern void reserve_kdump_trampoline(void);
+extern void setup_kdump_trampoline(void);
+
+#else /* !CONFIG_CRASH_DUMP */
+
+static inline void reserve_kdump_trampoline(void) { ; }
+static inline void setup_kdump_trampoline(void) { ; }
+
+#endif /* CONFIG_CRASH_DUMP */
+#endif /* __ASSEMBLY__ */
#endif /* __PPC64_KDUMP_H */
Index: to-merge/include/asm-powerpc/page.h
===================================================================
--- to-merge.orig/include/asm-powerpc/page.h
+++ to-merge/include/asm-powerpc/page.h
@@ -13,6 +13,7 @@
#ifdef __KERNEL__
#include <linux/config.h>
#include <asm/asm-compat.h>
+#include <asm/kdump.h>
/*
* On PPC32 page size is 4K. For PPC64 we support either 4K or 64K software
@@ -52,13 +53,6 @@
* If you want to test if something's a kernel address, use is_kernel_addr().
*/
-#ifdef CONFIG_CRASH_DUMP
-/* Kdump kernel runs at 32 MB, change at your peril. */
-#define PHYSICAL_START 0x2000000
-#else
-#define PHYSICAL_START 0x0
-#endif
-
#define PAGE_OFFSET ASM_CONST(CONFIG_KERNEL_START)
#define KERNELBASE (PAGE_OFFSET + PHYSICAL_START)
* [PATCH 5/5] powerpc: Move crashkernel= handling into the kernel.
2006-05-17 8:00 [PATCH 1/5] powerpc: Make early xmon logic immune to location of early parsing Michael Ellerman
` (2 preceding siblings ...)
2006-05-17 8:00 ` [PATCH 4/5] powerpc: Kdump header cleanup Michael Ellerman
@ 2006-05-17 8:00 ` Michael Ellerman
2006-05-18 1:16 ` [PATCH] " Michael Ellerman
2006-05-17 21:29 ` [PATCH 1/5] powerpc: Make early xmon logic immune to location of early parsing Tom Rini
4 siblings, 1 reply; 11+ messages in thread
From: Michael Ellerman @ 2006-05-17 8:00 UTC (permalink / raw)
To: Paul Mackerras; +Cc: linuxppc-dev, Kumar Gala
Currently we parse crashkernel= in prom_init.c, which means that for other boot
loaders to support Kdump they also need to parse the command line and set up
the appropriate crash kernel properties.
With early param parsing now done earlier, we can do crashkernel= parsing in the
early kernel code and avoid the need for every bootloader to do it for us.
We still support the device tree properties if they're specified by firmware,
however the command line overrides anything we find there.
Tested on P5 LPAR.
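The size[@addr] syntax works like this (hypothetical userspace sketch roughly following early_parse_crashk() in the diff below; my_memparse() is an illustrative model of the kernel's memparse() with K/M/G suffixes, not the real implementation):

```c
#include <assert.h>
#include <stdlib.h>

#define KDUMP_KERNELBASE 0x2000000UL	/* kdump kernel runs at 32 MB */

/* Parse a number with an optional K/M/G suffix, like memparse(). */
static unsigned long my_memparse(const char *s, const char **retp)
{
	char *end;
	unsigned long v = strtoul(s, &end, 0);

	switch (*end) {
	case 'G': case 'g': v <<= 10;	/* fall through */
	case 'M': case 'm': v <<= 10;	/* fall through */
	case 'K': case 'k': v <<= 10; end++;
	}
	if (retp)
		*retp = end;
	return v;
}

/* crashkernel=size[@addr]: addr defaults to KDUMP_KERNELBASE. */
static void parse_crashk(const char *p, unsigned long *start,
			 unsigned long *end)
{
	const char *q;
	unsigned long size = my_memparse(p, &q);

	*start = (*q == '@') ? my_memparse(q + 1, NULL) : KDUMP_KERNELBASE;
	*end = *start + size - 1;
}
```

As the patch notes, reserve_crashkernel() later sanitises whatever was parsed, forcing the start back to KDUMP_KERNELBASE since the kdump kernel can only run there.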
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
---
arch/powerpc/kernel/machine_kexec_64.c | 94 +++++++++++++++++++++++++++++++++
arch/powerpc/kernel/prom.c | 1
arch/powerpc/kernel/prom_init.c | 54 ------------------
include/asm-powerpc/kexec.h | 2
4 files changed, 97 insertions(+), 54 deletions(-)
Index: to-merge/arch/powerpc/kernel/machine_kexec_64.c
===================================================================
--- to-merge.orig/arch/powerpc/kernel/machine_kexec_64.c
+++ to-merge/arch/powerpc/kernel/machine_kexec_64.c
@@ -21,6 +21,7 @@
#include <asm/machdep.h>
#include <asm/cacheflush.h>
#include <asm/paca.h>
+#include <asm/lmb.h>
#include <asm/mmu.h>
#include <asm/sections.h> /* _end */
#include <asm/prom.h>
@@ -335,9 +336,102 @@ static void __init export_htab_values(vo
of_node_put(node);
}
+static struct property crashk_base_prop = {
+ .name = "linux,crashkernel-base",
+ .length = sizeof(unsigned long),
+ .value = (unsigned char *)&crashk_res.start,
+};
+
+static unsigned long crashk_size;
+
+static struct property crashk_size_prop = {
+ .name = "linux,crashkernel-size",
+ .length = sizeof(unsigned long),
+ .value = (unsigned char *)&crashk_size,
+};
+
+static void __init export_crashk_values(void)
+{
+ struct device_node *node;
+ struct property *prop;
+
+ node = of_find_node_by_path("/chosen");
+ if (!node)
+ return;
+
+ /* There might be existing crash kernel properties, but we can't
+ * be sure what's in them, so remove them. */
+ prop = of_find_property(node, "linux,crashkernel-base", NULL);
+ if (prop)
+ prom_remove_property(node, prop);
+
+ prop = of_find_property(node, "linux,crashkernel-size", NULL);
+ if (prop)
+ prom_remove_property(node, prop);
+
+ if (crashk_res.start != 0) {
+ prom_add_property(node, &crashk_base_prop);
+ crashk_size = crashk_res.end - crashk_res.start + 1;
+ prom_add_property(node, &crashk_size_prop);
+ }
+
+ of_node_put(node);
+}
+
void __init kexec_setup(void)
{
export_htab_values();
+ export_crashk_values();
+}
+
+static int __init early_parse_crashk(char *p)
+{
+ unsigned long size;
+
+ if (!p)
+ return 1;
+
+ size = memparse(p, &p);
+
+ if (*p == '@')
+ crashk_res.start = memparse(p + 1, &p);
+ else
+ crashk_res.start = KDUMP_KERNELBASE;
+
+ crashk_res.end = crashk_res.start + size - 1;
+
+ return 0;
+}
+early_param("crashkernel", early_parse_crashk);
+
+void __init reserve_crashkernel(void)
+{
+ unsigned long size;
+
+ if (crashk_res.start == 0)
+ return;
+
+ /* We might have got these values via the command line or the
+ * device tree, either way sanitise them now. */
+
+ size = crashk_res.end - crashk_res.start + 1;
+
+ if (crashk_res.start != KDUMP_KERNELBASE)
+ printk("Crash kernel location must be 0x%x\n",
+ KDUMP_KERNELBASE);
+
+ crashk_res.start = KDUMP_KERNELBASE;
+ size = PAGE_ALIGN(size);
+ crashk_res.end = crashk_res.start + size - 1;
+
+ /* Crash kernel trumps memory limit */
+ if (memory_limit && memory_limit <= crashk_res.end) {
+ memory_limit = crashk_res.end + 1;
+ printk("Adjusted memory limit for crashkernel, now 0x%lx\n",
+ memory_limit);
+ }
+
+ lmb_reserve(crashk_res.start, size);
}
int overlaps_crashkernel(unsigned long start, unsigned long size)
Index: to-merge/arch/powerpc/kernel/prom.c
===================================================================
--- to-merge.orig/arch/powerpc/kernel/prom.c
+++ to-merge/arch/powerpc/kernel/prom.c
@@ -1327,6 +1327,7 @@ void __init early_init_devtree(void *par
/* Reserve LMB regions used by kernel, initrd, dt, etc... */
lmb_reserve(PHYSICAL_START, __pa(klimit) - PHYSICAL_START);
reserve_kdump_trampoline();
+ reserve_crashkernel();
early_reserve_mem();
lmb_enforce_memory_limit(memory_limit);
Index: to-merge/arch/powerpc/kernel/prom_init.c
===================================================================
--- to-merge.orig/arch/powerpc/kernel/prom_init.c
+++ to-merge/arch/powerpc/kernel/prom_init.c
@@ -200,11 +200,6 @@ static unsigned long __initdata alloc_bo
static unsigned long __initdata rmo_top;
static unsigned long __initdata ram_top;
-#ifdef CONFIG_KEXEC
-static unsigned long __initdata prom_crashk_base;
-static unsigned long __initdata prom_crashk_size;
-#endif
-
static struct mem_map_entry __initdata mem_reserve_map[MEM_RESERVE_MAP_SIZE];
static int __initdata mem_reserve_cnt;
@@ -591,35 +586,6 @@ static void __init early_cmdline_parse(v
RELOC(iommu_force_on) = 1;
}
#endif
-
-#ifdef CONFIG_KEXEC
- /*
- * crashkernel=size@addr specifies the location to reserve for
- * crash kernel.
- */
- opt = strstr(RELOC(prom_cmd_line), RELOC("crashkernel="));
- if (opt) {
- opt += 12;
- RELOC(prom_crashk_size) =
- prom_memparse(opt, (const char **)&opt);
-
- if (ALIGN(RELOC(prom_crashk_size), 0x1000000) !=
- RELOC(prom_crashk_size)) {
- prom_printf("Warning: crashkernel size is not "
- "aligned to 16MB\n");
- }
-
- /*
- * At present, the crash kernel always run at 32MB.
- * Just ignore whatever user passed.
- */
- RELOC(prom_crashk_base) = 0x2000000;
- if (*opt == '@') {
- prom_printf("Warning: PPC64 kdump kernel always runs "
- "at 32 MB\n");
- }
- }
-#endif
}
#ifdef CONFIG_PPC_PSERIES
@@ -1122,12 +1088,6 @@ static void __init prom_init_mem(void)
prom_printf(" alloc_top_hi : %x\n", RELOC(alloc_top_high));
prom_printf(" rmo_top : %x\n", RELOC(rmo_top));
prom_printf(" ram_top : %x\n", RELOC(ram_top));
-#ifdef CONFIG_KEXEC
- if (RELOC(prom_crashk_base)) {
- prom_printf(" crashk_base : %x\n", RELOC(prom_crashk_base));
- prom_printf(" crashk_size : %x\n", RELOC(prom_crashk_size));
- }
-#endif
}
@@ -2187,10 +2147,6 @@ unsigned long __init prom_init(unsigned
*/
prom_init_mem();
-#ifdef CONFIG_KEXEC
- if (RELOC(prom_crashk_base))
- reserve_mem(RELOC(prom_crashk_base), RELOC(prom_crashk_size));
-#endif
/*
* Determine which cpu is actually running right _now_
*/
@@ -2243,16 +2199,6 @@ unsigned long __init prom_init(unsigned
}
#endif
-#ifdef CONFIG_KEXEC
- if (RELOC(prom_crashk_base)) {
- prom_setprop(_prom->chosen, "/chosen", "linux,crashkernel-base",
- PTRRELOC(&prom_crashk_base),
- sizeof(RELOC(prom_crashk_base)));
- prom_setprop(_prom->chosen, "/chosen", "linux,crashkernel-size",
- PTRRELOC(&prom_crashk_size),
- sizeof(RELOC(prom_crashk_size)));
- }
-#endif
/*
* Fixup any known bugs in the device-tree
*/
Index: to-merge/include/asm-powerpc/kexec.h
===================================================================
--- to-merge.orig/include/asm-powerpc/kexec.h
+++ to-merge/include/asm-powerpc/kexec.h
@@ -133,6 +133,8 @@ static inline int overlaps_crashkernel(u
return 0;
}
+static inline void reserve_crashkernel(void) { ; }
+
#endif /* CONFIG_KEXEC */
#endif /* ! __ASSEMBLY__ */
#endif /* __KERNEL__ */
* Re: [PATCH 1/5] powerpc: Make early xmon logic immune to location of early parsing
2006-05-17 8:00 [PATCH 1/5] powerpc: Make early xmon logic immune to location of early parsing Michael Ellerman
` (3 preceding siblings ...)
2006-05-17 8:00 ` [PATCH 5/5] powerpc: Move crashkernel= handling into the kernel Michael Ellerman
@ 2006-05-17 21:29 ` Tom Rini
2006-05-18 0:03 ` Michael Ellerman
4 siblings, 1 reply; 11+ messages in thread
From: Tom Rini @ 2006-05-17 21:29 UTC (permalink / raw)
To: Michael Ellerman; +Cc: linuxppc-dev, Kumar Gala, Paul Mackerras
On Wed, May 17, 2006 at 06:00:41PM +1000, Michael Ellerman wrote:
> Currently early_xmon() calls directly into debugger() if xmon=early is passed.
> This ties the invocation of early xmon to the location of parse_early_param(),
> which might change.
>
> Tested on P5 LPAR and F50.
>
> Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Please no, parse_early_param() is there so things like xmon or kgdb can
be dropped into as soon as we're able to parse any params that might be
usable early on.
--
Tom Rini
* Re: [PATCH 1/5] powerpc: Make early xmon logic immune to location of early parsing
2006-05-17 21:29 ` [PATCH 1/5] powerpc: Make early xmon logic immune to location of early parsing Tom Rini
@ 2006-05-18 0:03 ` Michael Ellerman
2006-05-18 1:08 ` Tom Rini
0 siblings, 1 reply; 11+ messages in thread
From: Michael Ellerman @ 2006-05-18 0:03 UTC (permalink / raw)
To: Tom Rini; +Cc: linuxppc-dev, Kumar Gala, Paul Mackerras
On Wed, 2006-05-17 at 14:29 -0700, Tom Rini wrote:
> On Wed, May 17, 2006 at 06:00:41PM +1000, Michael Ellerman wrote:
>
> > Currently early_xmon() calls directly into debugger() if xmon=early is passed.
> > This ties the invocation of early xmon to the location of parse_early_param(),
> > which might change.
> >
> > Tested on P5 LPAR and F50.
> >
> > Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
>
> Please no, parse_early_param() is there so things like xmon or kgdb can
> be dropped into as soon as we're able to parse any params that might be
> usable early on.
Sure, did you read the rest of the series? I want to parse parameters
earlier, so early that xmon isn't ready to run when we parse them, so I
have to defer jumping into xmon until after xmon is initialised. The net
effect on when xmon runs is zero. Or did I miss your point?
cheers
--
Michael Ellerman
IBM OzLabs
wwweb: http://michael.ellerman.id.au
phone: +61 2 6212 1183 (tie line 70 21183)
We do not inherit the earth from our ancestors,
we borrow it from our children. - S.M.A.R.T Person
* Re: [PATCH 1/5] powerpc: Make early xmon logic immune to location of early parsing
2006-05-18 0:03 ` Michael Ellerman
@ 2006-05-18 1:08 ` Tom Rini
2006-05-22 7:03 ` Michael Ellerman
0 siblings, 1 reply; 11+ messages in thread
From: Tom Rini @ 2006-05-18 1:08 UTC (permalink / raw)
To: Michael Ellerman; +Cc: linuxppc-dev, Paul Mackerras
On Thu, May 18, 2006 at 10:03:05AM +1000, Michael Ellerman wrote:
> On Wed, 2006-05-17 at 14:29 -0700, Tom Rini wrote:
> > On Wed, May 17, 2006 at 06:00:41PM +1000, Michael Ellerman wrote:
> >
> > > Currently early_xmon() calls directly into debugger() if xmon=early is passed.
> > > This ties the invocation of early xmon to the location of parse_early_param(),
> > > which might change.
> > >
> > > Tested on P5 LPAR and F50.
> > >
> > > Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
> >
> > Please no, parse_early_param() is there so things like xmon or kgdb can
> > be dropped into as soon as we're able to parse any params that might be
> > usable early on.
>
> Sure, did you read the rest of the series? I want to parse parameters
> earlier, so early that xmon isn't ready to run when we parse them, so I
> have to defer jumping into xmon until after xmon is initialised. The net
> effect on when xmon runs is zero. Or did I miss your point?
My point would be that xmon should either be fixed to work that early or
parse things a bit later as a regular param. I know the current system
is flawed but I really don't like the idea (especially as a co-maintainer
of kgdb) of having to do a special plug here or there for one param
because we parse early stuff too early, but regular stuff not early
enough (which is why Andrew Morton got me to poke at the early param
stuff a while back and then I think Rusty did something better, or
something along those lines, anyhow).
I'm just trying to avoid getting back into the situation we had before
parse_early_param().
--
Tom Rini
* [PATCH] powerpc: Move crashkernel= handling into the kernel.
2006-05-17 8:00 ` [PATCH 5/5] powerpc: Move crashkernel= handling into the kernel Michael Ellerman
@ 2006-05-18 1:16 ` Michael Ellerman
0 siblings, 0 replies; 11+ messages in thread
From: Michael Ellerman @ 2006-05-18 1:16 UTC (permalink / raw)
To: Paul Mackerras; +Cc: linuxppc-dev
This was missing a quilt ref.
---
Currently we parse crashkernel= in prom_init.c, which means for other boot
loaders to support Kdump they also need to parse the command line and setup
the appropriate crash kernel properties.
With early param parsing done earlier we can do crashkernel= parsing in the
early kernel code and avoid the need for every bootloader to do it for us.
We still support the device tree properties if they're specified by firmware,
however the command line overrides anything we find there.
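For illustration, the size@addr splitting that early_parse_crashk() in this patch performs can be sketched stand-alone. Note this is a simplified sketch, not the kernel code: parse_size() stands in for the kernel's memparse() and only handles upper-case K/M/G suffixes, and the 32MB KDUMP_KERNELBASE default is an assumption mirroring the rest of the series.

```c
#include <assert.h>
#include <stdlib.h>

#define KDUMP_KERNELBASE 0x2000000UL   /* 32MB, as assumed in the patch */

/* Simplified stand-in for the kernel's memparse(): parse a number with
 * an optional K/M/G suffix and advance *endp past what was consumed. */
static unsigned long parse_size(const char *p, const char **endp)
{
    unsigned long val = strtoul(p, (char **)endp, 0);

    switch (**endp) {
    case 'G': val <<= 10; /* fall through */
    case 'M': val <<= 10; /* fall through */
    case 'K': val <<= 10; (*endp)++; break;
    }
    return val;
}

/* Split a "crashkernel=size[@addr]" argument into a [start, end]
 * region, defaulting the base when no @addr is given, as the patch's
 * early_parse_crashk() does. */
static void parse_crashk(const char *p, unsigned long *start,
                         unsigned long *end)
{
    unsigned long size = parse_size(p, &p);

    if (*p == '@')
        *start = parse_size(p + 1, &p);
    else
        *start = KDUMP_KERNELBASE;   /* default crash kernel location */

    *end = *start + size - 1;
}
```

So "crashkernel=128M@32M" yields a region starting at 0x2000000, while "crashkernel=64M" falls back to the KDUMP_KERNELBASE default; the patch's reserve_crashkernel() then sanitises the result before reserving it with lmb_reserve().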
Tested on P5 LPAR.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
---
arch/powerpc/kernel/machine_kexec_64.c | 94 +++++++++++++++++++++++++++++++++
arch/powerpc/kernel/prom.c | 1
arch/powerpc/kernel/prom_init.c | 54 ------------------
include/asm-powerpc/kexec.h | 3 +
4 files changed, 98 insertions(+), 54 deletions(-)
Index: to-merge/arch/powerpc/kernel/machine_kexec_64.c
===================================================================
--- to-merge.orig/arch/powerpc/kernel/machine_kexec_64.c
+++ to-merge/arch/powerpc/kernel/machine_kexec_64.c
@@ -21,6 +21,7 @@
#include <asm/machdep.h>
#include <asm/cacheflush.h>
#include <asm/paca.h>
+#include <asm/lmb.h>
#include <asm/mmu.h>
#include <asm/sections.h> /* _end */
#include <asm/prom.h>
@@ -335,9 +336,102 @@ static void __init export_htab_values(vo
of_node_put(node);
}
+static struct property crashk_base_prop = {
+ .name = "linux,crashkernel-base",
+ .length = sizeof(unsigned long),
+ .value = (unsigned char *)&crashk_res.start,
+};
+
+static unsigned long crashk_size;
+
+static struct property crashk_size_prop = {
+ .name = "linux,crashkernel-size",
+ .length = sizeof(unsigned long),
+ .value = (unsigned char *)&crashk_size,
+};
+
+static void __init export_crashk_values(void)
+{
+ struct device_node *node;
+ struct property *prop;
+
+ node = of_find_node_by_path("/chosen");
+ if (!node)
+ return;
+
+ /* There might be existing crash kernel properties, but we can't
+ * be sure what's in them, so remove them. */
+ prop = of_find_property(node, "linux,crashkernel-base", NULL);
+ if (prop)
+ prom_remove_property(node, prop);
+
+ prop = of_find_property(node, "linux,crashkernel-size", NULL);
+ if (prop)
+ prom_remove_property(node, prop);
+
+ if (crashk_res.start != 0) {
+ prom_add_property(node, &crashk_base_prop);
+ crashk_size = crashk_res.end - crashk_res.start + 1;
+ prom_add_property(node, &crashk_size_prop);
+ }
+
+ of_node_put(node);
+}
+
void __init kexec_setup(void)
{
export_htab_values();
+ export_crashk_values();
+}
+
+static int __init early_parse_crashk(char *p)
+{
+ unsigned long size;
+
+ if (!p)
+ return 1;
+
+ size = memparse(p, &p);
+
+ if (*p == '@')
+ crashk_res.start = memparse(p + 1, &p);
+ else
+ crashk_res.start = KDUMP_KERNELBASE;
+
+ crashk_res.end = crashk_res.start + size - 1;
+
+ return 0;
+}
+early_param("crashkernel", early_parse_crashk);
+
+void __init reserve_crashkernel(void)
+{
+ unsigned long size;
+
+ if (crashk_res.start == 0)
+ return;
+
+ /* We might have got these values via the command line or the
+ * device tree, either way sanitise them now. */
+
+ size = crashk_res.end - crashk_res.start + 1;
+
+ if (crashk_res.start != KDUMP_KERNELBASE)
+ printk("Crash kernel location must be 0x%x\n",
+ KDUMP_KERNELBASE);
+
+ crashk_res.start = KDUMP_KERNELBASE;
+ size = PAGE_ALIGN(size);
+ crashk_res.end = crashk_res.start + size - 1;
+
+ /* Crash kernel trumps memory limit */
+ if (memory_limit && memory_limit <= crashk_res.end) {
+ memory_limit = crashk_res.end + 1;
+ printk("Adjusted memory limit for crashkernel, now 0x%lx\n",
+ memory_limit);
+ }
+
+ lmb_reserve(crashk_res.start, size);
}
int overlaps_crashkernel(unsigned long start, unsigned long size)
Index: to-merge/arch/powerpc/kernel/prom.c
===================================================================
--- to-merge.orig/arch/powerpc/kernel/prom.c
+++ to-merge/arch/powerpc/kernel/prom.c
@@ -1327,6 +1327,7 @@ void __init early_init_devtree(void *par
/* Reserve LMB regions used by kernel, initrd, dt, etc... */
lmb_reserve(PHYSICAL_START, __pa(klimit) - PHYSICAL_START);
reserve_kdump_trampoline();
+ reserve_crashkernel();
early_reserve_mem();
lmb_enforce_memory_limit(memory_limit);
Index: to-merge/arch/powerpc/kernel/prom_init.c
===================================================================
--- to-merge.orig/arch/powerpc/kernel/prom_init.c
+++ to-merge/arch/powerpc/kernel/prom_init.c
@@ -200,11 +200,6 @@ static unsigned long __initdata alloc_bo
static unsigned long __initdata rmo_top;
static unsigned long __initdata ram_top;
-#ifdef CONFIG_KEXEC
-static unsigned long __initdata prom_crashk_base;
-static unsigned long __initdata prom_crashk_size;
-#endif
-
static struct mem_map_entry __initdata mem_reserve_map[MEM_RESERVE_MAP_SIZE];
static int __initdata mem_reserve_cnt;
@@ -591,35 +586,6 @@ static void __init early_cmdline_parse(v
RELOC(iommu_force_on) = 1;
}
#endif
-
-#ifdef CONFIG_KEXEC
- /*
- * crashkernel=size@addr specifies the location to reserve for
- * crash kernel.
- */
- opt = strstr(RELOC(prom_cmd_line), RELOC("crashkernel="));
- if (opt) {
- opt += 12;
- RELOC(prom_crashk_size) =
- prom_memparse(opt, (const char **)&opt);
-
- if (ALIGN(RELOC(prom_crashk_size), 0x1000000) !=
- RELOC(prom_crashk_size)) {
- prom_printf("Warning: crashkernel size is not "
- "aligned to 16MB\n");
- }
-
- /*
- * At present, the crash kernel always run at 32MB.
- * Just ignore whatever user passed.
- */
- RELOC(prom_crashk_base) = 0x2000000;
- if (*opt == '@') {
- prom_printf("Warning: PPC64 kdump kernel always runs "
- "at 32 MB\n");
- }
- }
-#endif
}
#ifdef CONFIG_PPC_PSERIES
@@ -1122,12 +1088,6 @@ static void __init prom_init_mem(void)
prom_printf(" alloc_top_hi : %x\n", RELOC(alloc_top_high));
prom_printf(" rmo_top : %x\n", RELOC(rmo_top));
prom_printf(" ram_top : %x\n", RELOC(ram_top));
-#ifdef CONFIG_KEXEC
- if (RELOC(prom_crashk_base)) {
- prom_printf(" crashk_base : %x\n", RELOC(prom_crashk_base));
- prom_printf(" crashk_size : %x\n", RELOC(prom_crashk_size));
- }
-#endif
}
@@ -2187,10 +2147,6 @@ unsigned long __init prom_init(unsigned
*/
prom_init_mem();
-#ifdef CONFIG_KEXEC
- if (RELOC(prom_crashk_base))
- reserve_mem(RELOC(prom_crashk_base), RELOC(prom_crashk_size));
-#endif
/*
* Determine which cpu is actually running right _now_
*/
@@ -2243,16 +2199,6 @@ unsigned long __init prom_init(unsigned
}
#endif
-#ifdef CONFIG_KEXEC
- if (RELOC(prom_crashk_base)) {
- prom_setprop(_prom->chosen, "/chosen", "linux,crashkernel-base",
- PTRRELOC(&prom_crashk_base),
- sizeof(RELOC(prom_crashk_base)));
- prom_setprop(_prom->chosen, "/chosen", "linux,crashkernel-size",
- PTRRELOC(&prom_crashk_size),
- sizeof(RELOC(prom_crashk_size)));
- }
-#endif
/*
* Fixup any known bugs in the device-tree
*/
Index: to-merge/include/asm-powerpc/kexec.h
===================================================================
--- to-merge.orig/include/asm-powerpc/kexec.h
+++ to-merge/include/asm-powerpc/kexec.h
@@ -125,6 +125,7 @@ extern void default_machine_crash_shutdo
extern void machine_kexec_simple(struct kimage *image);
extern int overlaps_crashkernel(unsigned long start, unsigned long size);
+extern void reserve_crashkernel(void);
#else /* !CONFIG_KEXEC */
@@ -133,6 +134,8 @@ static inline int overlaps_crashkernel(u
return 0;
}
+static inline void reserve_crashkernel(void) { ; }
+
#endif /* CONFIG_KEXEC */
#endif /* ! __ASSEMBLY__ */
#endif /* __KERNEL__ */
* Re: [PATCH 1/5] powerpc: Make early xmon logic immune to location of early parsing
2006-05-18 1:08 ` Tom Rini
@ 2006-05-22 7:03 ` Michael Ellerman
2006-05-22 14:26 ` Tom Rini
0 siblings, 1 reply; 11+ messages in thread
From: Michael Ellerman @ 2006-05-22 7:03 UTC (permalink / raw)
To: Tom Rini; +Cc: linuxppc-dev, Paul Mackerras
On Wed, 2006-05-17 at 18:08 -0700, Tom Rini wrote:
> On Thu, May 18, 2006 at 10:03:05AM +1000, Michael Ellerman wrote:
> > On Wed, 2006-05-17 at 14:29 -0700, Tom Rini wrote:
> > > On Wed, May 17, 2006 at 06:00:41PM +1000, Michael Ellerman wrote:
> > >
> > > > Currently early_xmon() calls directly into debugger() if xmon=early is passed.
> > > > This ties the invocation of early xmon to the location of parse_early_param(),
> > > > which might change.
> > > >
> > > > Tested on P5 LPAR and F50.
> > > >
> > > > Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
> > >
> > > Please no, parse_early_param() is there so things like xmon or kgdb can
> > > be dropped into as soon as we're able to parse any params that might be
> > > usable early on.
> >
> > Sure, did you read the rest of the series? I want to parse parameters
> > earlier, so early that xmon isn't ready to run when we parse them, so I
> > have to defer jumping into xmon until after xmon is initialised. The net
> > effect on when xmon runs is zero. Or did I miss your point?
>
> My point would be that xmon should either be fixed to work that early or
> parse things a bit later as a regular param. I know the current system
> is flawed but I really don't like the idea (especially as a comaintainer
> of kgdb) of having to do a special plug here or there for one param
> because we parse early stuff too early, but regular stuff not early
> enough (which is why Andrew Morton got me to poke at the early param
> stuff a while back and then I think Rusty did something better, or
> something along those lines, anyhow).
Ok. I don't know the history so I can't comment on that. I don't think
we can make xmon run that early; the early parsing in my patch happens
before we know what machine type we're on.
But as far as xmon and kgdb are concerned it really shouldn't matter that
the parsing is happening earlier. Instead of calling directly into
xmon/kgdb from the parsing code you set a global which is tested later.
If we ever get around to consolidating the 32/64 bit early setup code we
might even be able to move all the xmon logic into xmon_init(), which
would be even cleaner.
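The "set a global, test it later" pattern being described can be sketched in stand-alone C. The names (do_early_xmon and friends) mirror patch 1 of this series, but this is an editorial illustration rather than the kernel code itself:

```c
#include <assert.h>
#include <string.h>

/* Flag recorded during early command-line parsing; acted on later. */
static int do_early_xmon;
static int debugger_entered;

/* Early param handler: it is too early to enter the debugger here,
 * so just remember that "xmon=early" was requested. */
static int early_xmon(const char *p)
{
    if (p && strncmp(p, "early", 5) == 0)
        do_early_xmon = 1;
    return 0;
}

/* Later, once the debugger is initialised (e.g. in setup_arch()),
 * honour the recorded request. */
static void setup_arch_late(void)
{
    if (do_early_xmon)
        debugger_entered = 1;   /* stands in for debugger(NULL) */
}
```

Calling early_xmon("early") followed by setup_arch_late() ends up entering the debugger, with the same net effect as the old code that called debugger(NULL) straight from the parse routine.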
cheers
--
Michael Ellerman
IBM OzLabs
wwweb: http://michael.ellerman.id.au
phone: +61 2 6212 1183 (tie line 70 21183)
We do not inherit the earth from our ancestors,
we borrow it from our children. - S.M.A.R.T Person
* Re: [PATCH 1/5] powerpc: Make early xmon logic immune to location of early parsing
2006-05-22 7:03 ` Michael Ellerman
@ 2006-05-22 14:26 ` Tom Rini
0 siblings, 0 replies; 11+ messages in thread
From: Tom Rini @ 2006-05-22 14:26 UTC (permalink / raw)
To: Michael Ellerman; +Cc: linuxppc-dev, Paul Mackerras
On Mon, May 22, 2006 at 05:03:08PM +1000, Michael Ellerman wrote:
[snip]
> But as far as xmon and kgdb are concerned it really shouldn't matter that
> the parsing is happening earlier. Instead of calling directly into
> xmon/kgdb from the parsing code you set a global which is tested later.
Yes, so instead of one method of telling the kernel we want to have kgdb
asap, and one method of implementing that on all architectures, we end up
with the command line parsed early on some architectures (where we just
drop right in), a flag set and checked later on others, and someone else
writing something different again on yet another platform, and we're back
to where we started from. That's my fear :)
--
Tom Rini
Thread overview: 11+ messages
2006-05-17 8:00 [PATCH 1/5] powerpc: Make early xmon logic immune to location of early parsing Michael Ellerman
2006-05-17 8:00 ` [PATCH 2/5] powerpc: Parse early parameters early, rather than sorta early Michael Ellerman
2006-05-17 8:00 ` [PATCH 3/5] powerpc: Unify mem= handling Michael Ellerman
2006-05-17 8:00 ` [PATCH 4/5] powerpc: Kdump header cleanup Michael Ellerman
2006-05-17 8:00 ` [PATCH 5/5] powerpc: Move crashkernel= handling into the kernel Michael Ellerman
2006-05-18 1:16 ` [PATCH] " Michael Ellerman
2006-05-17 21:29 ` [PATCH 1/5] powerpc: Make early xmon logic immune to location of early parsing Tom Rini
2006-05-18 0:03 ` Michael Ellerman
2006-05-18 1:08 ` Tom Rini
2006-05-22 7:03 ` Michael Ellerman
2006-05-22 14:26 ` Tom Rini