* [PATCH v3 1/9] ARM: l2x0: fix disabling function to avoid deadlock
2011-06-15 17:23 [PATCH v3 0/9] MMU disabling code and kexec fixes Will Deacon
@ 2011-06-15 17:23 ` Will Deacon
2011-06-16 14:53 ` Catalin Marinas
2011-06-15 17:23 ` [PATCH v3 2/9] ARM: proc: add definition of cpu_reset for ARMv6 and ARMv7 cores Will Deacon
` (7 subsequent siblings)
8 siblings, 1 reply; 11+ messages in thread
From: Will Deacon @ 2011-06-15 17:23 UTC (permalink / raw)
To: linux-arm-kernel
The l2x0_disable function attempts to writel with the l2x0_lock held.
This results in deadlock when the writel contains an outer_sync call
for the platform since the l2x0_lock is already held by the disable
function. A further problem is that disabling the L2 without flushing it
first can lead to the spin_lock operation becoming visible after the
spin_unlock, causing any subsequent L2 maintenance to deadlock.
This patch replaces the writel with a call to writel_relaxed in the
disabling code and adds a flush before disabling via the control
register, preventing deadlock from occurring.
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
arch/arm/mm/cache-l2x0.c | 19 +++++++++++++------
1 files changed, 13 insertions(+), 6 deletions(-)
diff --git a/arch/arm/mm/cache-l2x0.c b/arch/arm/mm/cache-l2x0.c
index ef59099..44c0867 100644
--- a/arch/arm/mm/cache-l2x0.c
+++ b/arch/arm/mm/cache-l2x0.c
@@ -120,17 +120,22 @@ static void l2x0_cache_sync(void)
spin_unlock_irqrestore(&l2x0_lock, flags);
}
-static void l2x0_flush_all(void)
+static void __l2x0_flush_all(void)
{
- unsigned long flags;
-
- /* clean all ways */
- spin_lock_irqsave(&l2x0_lock, flags);
debug_writel(0x03);
writel_relaxed(l2x0_way_mask, l2x0_base + L2X0_CLEAN_INV_WAY);
cache_wait_way(l2x0_base + L2X0_CLEAN_INV_WAY, l2x0_way_mask);
cache_sync();
debug_writel(0x00);
+}
+
+static void l2x0_flush_all(void)
+{
+ unsigned long flags;
+
+ /* clean all ways */
+ spin_lock_irqsave(&l2x0_lock, flags);
+ __l2x0_flush_all();
spin_unlock_irqrestore(&l2x0_lock, flags);
}
@@ -266,7 +271,9 @@ static void l2x0_disable(void)
unsigned long flags;
spin_lock_irqsave(&l2x0_lock, flags);
- writel(0, l2x0_base + L2X0_CTRL);
+ __l2x0_flush_all();
+ writel_relaxed(0, l2x0_base + L2X0_CTRL);
+ dsb();
spin_unlock_irqrestore(&l2x0_lock, flags);
}
--
1.7.0.4
* [PATCH v3 1/9] ARM: l2x0: fix disabling function to avoid deadlock
2011-06-15 17:23 ` [PATCH v3 1/9] ARM: l2x0: fix disabling function to avoid deadlock Will Deacon
@ 2011-06-16 14:53 ` Catalin Marinas
0 siblings, 0 replies; 11+ messages in thread
From: Catalin Marinas @ 2011-06-16 14:53 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Jun 15, 2011 at 06:23:12PM +0100, Will Deacon wrote:
> The l2x0_disable function attempts to writel with the l2x0_lock held.
> This results in deadlock when the writel contains an outer_sync call
> for the platform since the l2x0_lock is already held by the disable
> function. A further problem is that disabling the L2 without flushing it
> first can lead to the spin_lock operation becoming visible after the
> spin_unlock, causing any subsequent L2 maintenance to deadlock.
>
> This patch replaces the writel with a call to writel_relaxed in the
> disabling code and adds a flush before disabling via the control
> register, preventing deadlock from occurring.
>
> Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
--
Catalin
* [PATCH v3 2/9] ARM: proc: add definition of cpu_reset for ARMv6 and ARMv7 cores
2011-06-15 17:23 [PATCH v3 0/9] MMU disabling code and kexec fixes Will Deacon
2011-06-15 17:23 ` [PATCH v3 1/9] ARM: l2x0: fix disabling function to avoid deadlock Will Deacon
@ 2011-06-15 17:23 ` Will Deacon
2011-06-15 17:23 ` [PATCH v3 3/9] ARM: lib: add switch_stack function for safely changing stack Will Deacon
` (6 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: Will Deacon @ 2011-06-15 17:23 UTC (permalink / raw)
To: linux-arm-kernel
This patch adds simple definitions of cpu_reset for ARMv6 and ARMv7
cores, which disable the MMU via the SCTLR.
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
arch/arm/mm/proc-v6.S | 5 +++++
arch/arm/mm/proc-v7.S | 7 +++++++
2 files changed, 12 insertions(+), 0 deletions(-)
diff --git a/arch/arm/mm/proc-v6.S b/arch/arm/mm/proc-v6.S
index 1d2b845..f3b5232 100644
--- a/arch/arm/mm/proc-v6.S
+++ b/arch/arm/mm/proc-v6.S
@@ -56,6 +56,11 @@ ENTRY(cpu_v6_proc_fin)
*/
.align 5
ENTRY(cpu_v6_reset)
+ mrc p15, 0, r1, c1, c0, 0 @ ctrl register
+ bic r1, r1, #0x1 @ ...............m
+ mcr p15, 0, r1, c1, c0, 0 @ disable MMU
+ mov r1, #0
+ mcr p15, 0, r1, c7, c5, 4 @ ISB
mov pc, r0
/*
diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
index b3b566e..776443a 100644
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -58,9 +58,16 @@ ENDPROC(cpu_v7_proc_fin)
* to what would be the reset vector.
*
* - loc - location to jump to for soft reset
+ *
+ * This code must be executed using a flat identity mapping with
+ * caches disabled.
*/
.align 5
ENTRY(cpu_v7_reset)
+ mrc p15, 0, r1, c1, c0, 0 @ ctrl register
+ bic r1, r1, #0x1 @ ...............m
+ mcr p15, 0, r1, c1, c0, 0 @ disable MMU
+ isb
mov pc, r0
ENDPROC(cpu_v7_reset)
--
1.7.0.4
* [PATCH v3 3/9] ARM: lib: add switch_stack function for safely changing stack
2011-06-15 17:23 [PATCH v3 0/9] MMU disabling code and kexec fixes Will Deacon
2011-06-15 17:23 ` [PATCH v3 1/9] ARM: l2x0: fix disabling function to avoid deadlock Will Deacon
2011-06-15 17:23 ` [PATCH v3 2/9] ARM: proc: add definition of cpu_reset for ARMv6 and ARMv7 cores Will Deacon
@ 2011-06-15 17:23 ` Will Deacon
2011-06-15 17:23 ` [PATCH v3 4/9] ARM: idmap: add header file for identity mapping functions Will Deacon
` (5 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: Will Deacon @ 2011-06-15 17:23 UTC (permalink / raw)
To: linux-arm-kernel
When disabling the MMU, it is necessary to take out a 1:1 identity map
of the reset code so that it can safely be executed with and without
the MMU active. To avoid the situation where the physical address of the
reset code aliases with the virtual address of the active stack (which
cannot be included in the 1:1 mapping), it is desirable to change to a
new stack at a location which is less likely to alias.
This code adds a new lib function, switch_stack:
void switch_stack(void (*fn)(void *), void *arg, void *sp);
which changes the stack to point at the sp parameter, before invoking
fn(arg) with the new stack selected.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
arch/arm/lib/Makefile | 3 +-
arch/arm/lib/switch_stack.S | 44 +++++++++++++++++++++++++++++++++++++++++++
2 files changed, 46 insertions(+), 1 deletions(-)
create mode 100644 arch/arm/lib/switch_stack.S
diff --git a/arch/arm/lib/Makefile b/arch/arm/lib/Makefile
index 59ff42d..5fa67de 100644
--- a/arch/arm/lib/Makefile
+++ b/arch/arm/lib/Makefile
@@ -13,7 +13,8 @@ lib-y := backtrace.o changebit.o csumipv6.o csumpartial.o \
testchangebit.o testclearbit.o testsetbit.o \
ashldi3.o ashrdi3.o lshrdi3.o muldi3.o \
ucmpdi2.o lib1funcs.o div64.o sha1.o \
- io-readsb.o io-writesb.o io-readsl.o io-writesl.o
+ io-readsb.o io-writesb.o io-readsl.o io-writesl.o \
+ switch_stack.o
mmu-y := clear_user.o copy_page.o getuser.o putuser.o
diff --git a/arch/arm/lib/switch_stack.S b/arch/arm/lib/switch_stack.S
new file mode 100644
index 0000000..552090d
--- /dev/null
+++ b/arch/arm/lib/switch_stack.S
@@ -0,0 +1,44 @@
+/*
+ * arch/arm/lib/switch_stack.S
+ *
+ * Copyright (C) 2011 ARM Ltd.
+ * Written by Will Deacon <will.deacon@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+
+/*
+ * void switch_stack(void (*fn)(void *), void *arg, void *sp)
+ *
+ * Change the stack to that pointed at by sp, then invoke fn(arg) with
+ * the new stack.
+ */
+ENTRY(switch_stack)
+ str sp, [r2, #-4]!
+ str lr, [r2, #-4]!
+
+ mov sp, r2
+ mov r2, r0
+ mov r0, r1
+
+ adr lr, BSYM(1f)
+ mov pc, r2
+
+1: ldr lr, [sp]
+ ldr sp, [sp, #4]
+ mov pc, lr
+ENDPROC(switch_stack)
--
1.7.0.4
* [PATCH v3 4/9] ARM: idmap: add header file for identity mapping functions
2011-06-15 17:23 [PATCH v3 0/9] MMU disabling code and kexec fixes Will Deacon
` (2 preceding siblings ...)
2011-06-15 17:23 ` [PATCH v3 3/9] ARM: lib: add switch_stack function for safely changing stack Will Deacon
@ 2011-06-15 17:23 ` Will Deacon
2011-06-15 17:23 ` [PATCH v3 5/9] ARM: reset: allow kernelspace mappings to be flat mapped during reset Will Deacon
` (4 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: Will Deacon @ 2011-06-15 17:23 UTC (permalink / raw)
To: linux-arm-kernel
The identity mapping functions are useful outside of SMP booting, so
expose them through their own header file.
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
arch/arm/include/asm/idmap.h | 16 ++++++++++++++++
arch/arm/include/asm/pgtable.h | 3 ---
arch/arm/kernel/process.c | 3 +--
arch/arm/kernel/smp.c | 1 +
arch/arm/mm/idmap.c | 1 +
5 files changed, 19 insertions(+), 5 deletions(-)
create mode 100644 arch/arm/include/asm/idmap.h
diff --git a/arch/arm/include/asm/idmap.h b/arch/arm/include/asm/idmap.h
new file mode 100644
index 0000000..ea9517e
--- /dev/null
+++ b/arch/arm/include/asm/idmap.h
@@ -0,0 +1,16 @@
+#ifndef _ARM_IDMAP_H
+#define _ARM_IDMAP_H
+
+#include <asm/page.h>
+
+void identity_mapping_add(pgd_t *pgd, unsigned long addr, unsigned long end);
+
+#ifdef CONFIG_SMP
+void identity_mapping_del(pgd_t *pgd, unsigned long addr, unsigned long end);
+#else
+void identity_mapping_del(pgd_t *pgd, unsigned long addr, unsigned long end) {};
+#endif
+
+void setup_mm_for_reboot(char mode);
+
+#endif /* _ARM_IDMAP_H */
diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index 5750704..9d559a8 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -474,9 +474,6 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
#define pgtable_cache_init() do { } while (0)
-void identity_mapping_add(pgd_t *, unsigned long, unsigned long);
-void identity_mapping_del(pgd_t *, unsigned long, unsigned long);
-
#endif /* !__ASSEMBLY__ */
#endif /* CONFIG_MMU */
diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c
index 5e1e541..8bd9d94 100644
--- a/arch/arm/kernel/process.c
+++ b/arch/arm/kernel/process.c
@@ -32,6 +32,7 @@
#include <linux/hw_breakpoint.h>
#include <asm/cacheflush.h>
+#include <asm/idmap.h>
#include <asm/leds.h>
#include <asm/processor.h>
#include <asm/system.h>
@@ -56,8 +57,6 @@ static const char *isa_modes[] = {
"ARM" , "Thumb" , "Jazelle", "ThumbEE"
};
-extern void setup_mm_for_reboot(char mode);
-
static volatile int hlt_counter;
#include <mach/system.h>
diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index 344e52b..dfc76aa 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -39,6 +39,7 @@
#include <asm/tlbflush.h>
#include <asm/ptrace.h>
#include <asm/localtimer.h>
+#include <asm/idmap.h>
/*
* as from 2.5, kernels no longer have an init_tasks structure
diff --git a/arch/arm/mm/idmap.c b/arch/arm/mm/idmap.c
index 2be9139..4ae0f09 100644
--- a/arch/arm/mm/idmap.c
+++ b/arch/arm/mm/idmap.c
@@ -1,6 +1,7 @@
#include <linux/kernel.h>
#include <asm/cputype.h>
+#include <asm/idmap.h>
#include <asm/pgalloc.h>
#include <asm/pgtable.h>
--
1.7.0.4
* [PATCH v3 5/9] ARM: reset: allow kernelspace mappings to be flat mapped during reset
2011-06-15 17:23 [PATCH v3 0/9] MMU disabling code and kexec fixes Will Deacon
` (3 preceding siblings ...)
2011-06-15 17:23 ` [PATCH v3 4/9] ARM: idmap: add header file for identity mapping functions Will Deacon
@ 2011-06-15 17:23 ` Will Deacon
2011-06-15 17:23 ` [PATCH v3 6/9] ARM: multi-cpu: remove arguments from CPU proc macros Will Deacon
` (3 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: Will Deacon @ 2011-06-15 17:23 UTC (permalink / raw)
To: linux-arm-kernel
Currently, switch_mm_for_reboot only takes out a 1:1 mapping from 0x0
to TASK_SIZE during reboot. For situations where we actually want to
turn off the MMU (e.g. kexec, hibernate, CPU hotplug) we want to map
as much memory as possible using the identity mapping so that we
increase the chance of mapping our reset code.
This patch introduces a new reboot mode, 'k', which remaps all of memory
apart from the kernel (PAGE_OFFSET - _end) and an additional page
immediately following it, which can be used as a temporary stack if
valid memory is available there. Note that this change makes it
necessary to manipulate and switch to the swapper page tables rather
than hijack the current task.
Reviewed-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
arch/arm/include/asm/idmap.h | 7 +++++++
arch/arm/mm/idmap.c | 30 +++++++++++++++++++++---------
2 files changed, 28 insertions(+), 9 deletions(-)
diff --git a/arch/arm/include/asm/idmap.h b/arch/arm/include/asm/idmap.h
index ea9517e..abe455a 100644
--- a/arch/arm/include/asm/idmap.h
+++ b/arch/arm/include/asm/idmap.h
@@ -2,6 +2,7 @@
#define _ARM_IDMAP_H
#include <asm/page.h>
+#include <asm/sections.h>
void identity_mapping_add(pgd_t *pgd, unsigned long addr, unsigned long end);
@@ -11,6 +12,12 @@ void identity_mapping_del(pgd_t *pgd, unsigned long addr, unsigned long end);
void identity_mapping_del(pgd_t *pgd, unsigned long addr, unsigned long end) {};
#endif
+/* Modes understood from arm_machine_{restart,reset}. */
+#define MODE_REMAP_KERNEL 'k'
+
+/* Page reserved after the kernel image. */
+#define RESERVE_STACK_PAGE ALIGN((unsigned long)_end + PAGE_SIZE, PMD_SIZE)
+
void setup_mm_for_reboot(char mode);
#endif /* _ARM_IDMAP_H */
diff --git a/arch/arm/mm/idmap.c b/arch/arm/mm/idmap.c
index 4ae0f09..e4ae3c5 100644
--- a/arch/arm/mm/idmap.c
+++ b/arch/arm/mm/idmap.c
@@ -75,17 +75,29 @@ void identity_mapping_del(pgd_t *pgd, unsigned long addr, unsigned long end)
#endif
/*
- * In order to soft-boot, we need to insert a 1:1 mapping in place of
- * the user-mode pages. This will then ensure that we have predictable
- * results when turning the mmu off
+ * In order to soft-boot, we need to insert a 1:1 mapping of memory.
+ * This will then ensure that we have predictable results when turning
+ * the mmu off.
*/
void setup_mm_for_reboot(char mode)
{
- /*
- * We need to access to user-mode page tables here. For kernel threads
- * we don't have any user-mode mappings so we use the context that we
- * "borrowed".
- */
- identity_mapping_add(current->active_mm->pgd, 0, TASK_SIZE);
+
+ identity_mapping_add(swapper_pg_dir, 0, TASK_SIZE);
+ if (mode == MODE_REMAP_KERNEL) {
+ /*
+ * Extend the flat mapping into kernelspace.
+ * We leave room for the kernel image and a `reboot stack'.
+ */
+ identity_mapping_add(swapper_pg_dir, TASK_SIZE, PAGE_OFFSET);
+ identity_mapping_add(swapper_pg_dir, RESERVE_STACK_PAGE, 0);
+ }
+
+ /* Clean and invalidate L1. */
+ flush_cache_all();
+
+ /* Switch exclusively to kernel mappings. */
+ cpu_switch_mm(swapper_pg_dir, &init_mm);
+
+ /* Flush the TLB. */
local_flush_tlb_all();
}
--
1.7.0.4
* [PATCH v3 6/9] ARM: multi-cpu: remove arguments from CPU proc macros
2011-06-15 17:23 [PATCH v3 0/9] MMU disabling code and kexec fixes Will Deacon
` (4 preceding siblings ...)
2011-06-15 17:23 ` [PATCH v3 5/9] ARM: reset: allow kernelspace mappings to be flat mapped during reset Will Deacon
@ 2011-06-15 17:23 ` Will Deacon
2011-06-15 17:23 ` [PATCH v3 7/9] ARM: reset: add reset functionality for jumping to a physical address Will Deacon
` (2 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: Will Deacon @ 2011-06-15 17:23 UTC (permalink / raw)
To: linux-arm-kernel
The macros for invoking functions via the processor struct in the
MULTI_CPU case define the arguments as part of the macros, making it
impossible to take the address of those functions.
This patch removes the arguments from the macro definitions so that we
can take the address of these functions like we can for the !MULTI_CPU
case.
Reported-by: Frank Hofmann <frank.hofmann@tomtom.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
arch/arm/include/asm/proc-fns.h | 14 +++++++-------
1 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/arch/arm/include/asm/proc-fns.h b/arch/arm/include/asm/proc-fns.h
index 8ec535e..633d1cb 100644
--- a/arch/arm/include/asm/proc-fns.h
+++ b/arch/arm/include/asm/proc-fns.h
@@ -82,13 +82,13 @@ extern void cpu_do_switch_mm(unsigned long pgd_phys, struct mm_struct *mm);
extern void cpu_set_pte_ext(pte_t *ptep, pte_t pte, unsigned int ext);
extern void cpu_reset(unsigned long addr) __attribute__((noreturn));
#else
-#define cpu_proc_init() processor._proc_init()
-#define cpu_proc_fin() processor._proc_fin()
-#define cpu_reset(addr) processor.reset(addr)
-#define cpu_do_idle() processor._do_idle()
-#define cpu_dcache_clean_area(addr,sz) processor.dcache_clean_area(addr,sz)
-#define cpu_set_pte_ext(ptep,pte,ext) processor.set_pte_ext(ptep,pte,ext)
-#define cpu_do_switch_mm(pgd,mm) processor.switch_mm(pgd,mm)
+#define cpu_proc_init processor._proc_init
+#define cpu_proc_fin processor._proc_fin
+#define cpu_reset processor.reset
+#define cpu_do_idle processor._do_idle
+#define cpu_dcache_clean_area processor.dcache_clean_area
+#define cpu_set_pte_ext processor.set_pte_ext
+#define cpu_do_switch_mm processor.switch_mm
#endif
extern void cpu_resume(void);
--
1.7.0.4
* [PATCH v3 7/9] ARM: reset: add reset functionality for jumping to a physical address
2011-06-15 17:23 [PATCH v3 0/9] MMU disabling code and kexec fixes Will Deacon
` (5 preceding siblings ...)
2011-06-15 17:23 ` [PATCH v3 6/9] ARM: multi-cpu: remove arguments from CPU proc macros Will Deacon
@ 2011-06-15 17:23 ` Will Deacon
2011-06-15 17:23 ` [PATCH v3 8/9] ARM: kexec: use arm_machine_reset for branching to the reboot buffer Will Deacon
2011-06-15 17:23 ` [PATCH v3 9/9] ARM: stop: execute platform callback from cpu_stop code Will Deacon
8 siblings, 0 replies; 11+ messages in thread
From: Will Deacon @ 2011-06-15 17:23 UTC (permalink / raw)
To: linux-arm-kernel
Tools such as kexec and CPU hotplug require a way to reset the processor
and branch to some code in physical space. This requires various bits of
jiggery pokery with the caches and MMU which, when it goes wrong, tends
to lock up the system.
This patch implements a new function, arm_machine_reset, for
consolidating this code in one place where it can be used by multiple
subsystems.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
arch/arm/include/asm/system.h | 1 +
arch/arm/kernel/process.c | 64 +++++++++++++++++++++++++++++++++++-----
2 files changed, 57 insertions(+), 8 deletions(-)
diff --git a/arch/arm/include/asm/system.h b/arch/arm/include/asm/system.h
index 832888d..cd2a3cd 100644
--- a/arch/arm/include/asm/system.h
+++ b/arch/arm/include/asm/system.h
@@ -108,6 +108,7 @@ extern int cpu_architecture(void);
extern void cpu_init(void);
void arm_machine_restart(char mode, const char *cmd);
+void arm_machine_reset(unsigned long reset_code_phys);
extern void (*arm_pm_restart)(char str, const char *cmd);
#define UDBG_UNDEFINED (1 << 0)
diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c
index 8bd9d94..fe99ed3 100644
--- a/arch/arm/kernel/process.c
+++ b/arch/arm/kernel/process.c
@@ -90,12 +90,10 @@ static int __init hlt_setup(char *__unused)
__setup("nohlt", nohlt_setup);
__setup("hlt", hlt_setup);
-void arm_machine_restart(char mode, const char *cmd)
-{
- /* Disable interrupts first */
- local_irq_disable();
- local_fiq_disable();
+extern void switch_stack(void (*fn)(void *), void *arg, void *sp);
+static void prepare_for_reboot(char mode)
+{
/*
* Tell the mm system that we are going to reboot -
* we may need it to insert some 1:1 mappings so that
@@ -103,14 +101,20 @@ void arm_machine_restart(char mode, const char *cmd)
*/
setup_mm_for_reboot(mode);
- /* Clean and invalidate caches */
- flush_cache_all();
-
/* Turn off caching */
cpu_proc_fin();
/* Push out any further dirty data, and ensure cache is empty */
flush_cache_all();
+}
+
+void arm_machine_restart(char mode, const char *cmd)
+{
+ /* Disable interrupts first */
+ local_irq_disable();
+ local_fiq_disable();
+
+ prepare_for_reboot(mode);
/*
* Now call the architecture specific reboot code.
@@ -126,6 +130,50 @@ void arm_machine_restart(char mode, const char *cmd)
while (1);
}
+typedef void (*phys_reset_t)(unsigned long);
+void __arm_machine_reset(void *reset_code_phys)
+{
+ phys_reset_t phys_reset;
+
+ prepare_for_reboot(MODE_REMAP_KERNEL);
+
+ /* Switch to the identity mapping. */
+ phys_reset = (phys_reset_t)virt_to_phys(cpu_reset);
+ phys_reset((unsigned long)reset_code_phys);
+
+ /* Should never get here. */
+ BUG();
+}
+
+void arm_machine_reset(unsigned long reset_code_phys)
+{
+ phys_addr_t cpu_reset_end_phys;
+ void *cpu_reset_end, *new_stack = (void *)RESERVE_STACK_PAGE;
+
+ cpu_reset_end = (void *)PAGE_ALIGN((unsigned long)cpu_reset);
+ cpu_reset_end_phys = virt_to_phys(cpu_reset_end);
+
+ /* Check that we can safely identity map the reset code. */
+ BUG_ON(cpu_reset_end_phys > TASK_SIZE &&
+ cpu_reset_end_phys <= RESERVE_STACK_PAGE);
+
+ /* Check that the reserve stack page is valid memory. */
+ BUG_ON(!pfn_valid(__phys_to_pfn(virt_to_phys(new_stack - 1))));
+
+ /* Disable interrupts first. */
+ local_irq_disable();
+ local_fiq_disable();
+
+ /* Disable the L2. */
+ outer_disable();
+
+ /* Change to the new stack and continue with the reset. */
+ switch_stack(__arm_machine_reset, (void *)reset_code_phys, new_stack);
+
+ /* Should never get here. */
+ BUG();
+}
+
/*
* Function pointers to optional machine specific functions
*/
--
1.7.0.4
* [PATCH v3 8/9] ARM: kexec: use arm_machine_reset for branching to the reboot buffer
2011-06-15 17:23 [PATCH v3 0/9] MMU disabling code and kexec fixes Will Deacon
` (6 preceding siblings ...)
2011-06-15 17:23 ` [PATCH v3 7/9] ARM: reset: add reset functionality for jumping to a physical address Will Deacon
@ 2011-06-15 17:23 ` Will Deacon
2011-06-15 17:23 ` [PATCH v3 9/9] ARM: stop: execute platform callback from cpu_stop code Will Deacon
8 siblings, 0 replies; 11+ messages in thread
From: Will Deacon @ 2011-06-15 17:23 UTC (permalink / raw)
To: linux-arm-kernel
Now that there is a common way to reset the machine, let's use it
instead of reinventing the wheel in the kexec backend.
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
arch/arm/kernel/machine_kexec.c | 14 ++------------
1 files changed, 2 insertions(+), 12 deletions(-)
diff --git a/arch/arm/kernel/machine_kexec.c b/arch/arm/kernel/machine_kexec.c
index e59bbd4..f84567c 100644
--- a/arch/arm/kernel/machine_kexec.c
+++ b/arch/arm/kernel/machine_kexec.c
@@ -16,8 +16,6 @@
extern const unsigned char relocate_new_kernel[];
extern const unsigned int relocate_new_kernel_size;
-extern void setup_mm_for_reboot(char mode);
-
extern unsigned long kexec_start_address;
extern unsigned long kexec_indirection_page;
extern unsigned long kexec_mach_type;
@@ -111,14 +109,6 @@ void machine_kexec(struct kimage *image)
if (kexec_reinit)
kexec_reinit();
- local_irq_disable();
- local_fiq_disable();
- setup_mm_for_reboot(0); /* mode is not used, so just pass 0*/
- flush_cache_all();
- outer_flush_all();
- outer_disable();
- cpu_proc_fin();
- outer_inv_all();
- flush_cache_all();
- cpu_reset(reboot_code_buffer_phys);
+
+ arm_machine_reset(reboot_code_buffer_phys);
}
--
1.7.0.4
* [PATCH v3 9/9] ARM: stop: execute platform callback from cpu_stop code
2011-06-15 17:23 [PATCH v3 0/9] MMU disabling code and kexec fixes Will Deacon
` (7 preceding siblings ...)
2011-06-15 17:23 ` [PATCH v3 8/9] ARM: kexec: use arm_machine_reset for branching to the reboot buffer Will Deacon
@ 2011-06-15 17:23 ` Will Deacon
8 siblings, 0 replies; 11+ messages in thread
From: Will Deacon @ 2011-06-15 17:23 UTC (permalink / raw)
To: linux-arm-kernel
Sending IPI_CPU_STOP to a CPU causes it to execute a busy cpu_relax
loop forever. This makes it impossible to kexec successfully on an SMP
system since the secondary CPUs do not reset.
This patch adds a callback to platform_cpu_kill, defined when
CONFIG_HOTPLUG_CPU=y, from the ipi_cpu_stop handling code. This function
currently just returns 1 on all platforms that define it but allows them
to do something more sophisticated in the future.
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
arch/arm/Kconfig | 2 +-
arch/arm/kernel/smp.c | 4 ++++
2 files changed, 5 insertions(+), 1 deletions(-)
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 9adc278..c26d8f6 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1795,7 +1795,7 @@ config XIP_PHYS_ADDR
config KEXEC
bool "Kexec system call (EXPERIMENTAL)"
- depends on EXPERIMENTAL
+ depends on EXPERIMENTAL && (!SMP || HOTPLUG_CPU)
help
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot
diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index dfc76aa..39189bf 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -548,6 +548,10 @@ static void ipi_cpu_stop(unsigned int cpu)
local_fiq_disable();
local_irq_disable();
+#ifdef CONFIG_HOTPLUG_CPU
+ platform_cpu_kill(cpu);
+#endif
+
while (1)
cpu_relax();
}
--
1.7.0.4