* [PATCH v2 0/6] arm64: hibernate: Fix DEBUG_PAGEALLOC and hibernate on non-boot cpu
@ 2016-06-15 17:35 James Morse
2016-06-15 17:35 ` [PATCH v2 1/6] arm64: Create sections.h James Morse
` (5 more replies)
0 siblings, 6 replies; 14+ messages in thread
From: James Morse @ 2016-06-15 17:35 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Will Deacon, Catalin Marinas, Rafael J . Wysocki, Pavel Machek,
linux-pm, Lorenzo Pieralisi, Mark Rutland, James Morse
Hi all,
These patches improve the hibernate support for v4.8.
Patches 1&2 fix the hibernate problem with DEBUG_PAGEALLOC reported by Will.
This also addresses the outstanding comment from Catalin [0] regarding cleaning
of the whole kernel to the PoC. Now we clean a new section '.mmuoff.text'.
Patch one adds sections.h to avoid duplicate definitions of
__hyp_idmap_text_start. (it looks like kprobes will duplicate this too [1)
The rest of the patches fix hibernate/resume on any CPU as discussed on the
list[2]. Suzuki's maxcpus improvements mean the CPU we hibernated on may be
offline during boot, and kexec will allow the CPUs logical order to be shuffled.
Log output below[3], suspend happens on cpu0, with mpidr 0x103. For
resume, cpu5 is brought up as it has mpidr 0x103, and cpu0 is taken down.
After resume, /proc/cpuinfo shows only the original cpu0.
Changes since v1:
* Fixed 'Brining' typo.
* Added .mmuoff.text section and gathered functions together.
* Put sections.h in alphabetical order.
[v1] http://www.spinics.net/lists/arm-kernel/msg507805.html
[0] http://www.spinics.net/lists/arm-kernel/msg499305.html
[1] http://www.spinics.net/lists/arm-kernel/msg506451.html
[2] http://www.spinics.net/lists/arm-kernel/msg499036.html
[3] hotplug cpu0, kexec, hibernate, resume
-------------------------%<-------------------------
root@ubuntu:~# echo disk > /sys/power/state
PM: Syncing filesystems ... done.
Freezing user space processes ... (elapsed 0.007 seconds) done.
PM: Preallocating image memory... done (allocated 10112 pages)
PM: Allocated 647168 kbytes in 2.69 seconds (240.58 MB/s)
Freezing remaining freezable tasks ... (elapsed 0.005 seconds) done.
PM: freeze of devices complete after 27.013 msecs
PM: late freeze of devices complete after 20.624 msecs
PM: noirq freeze of devices complete after 22.211 msecs
Disabling non-boot CPUs ...
PM: Creating hibernation image:
PM: Need to copy 10103 pages
PM: Hibernation image created (10103 pages copied)
PM: noirq thaw of devices complete after 22.350 msecs
PM: early thaw of devices complete after 23.680 msecs
PM: thaw of devices complete after 98.331 msecs
hibernate: Suspending on cpu 0 [mpidr:0x103]
PM: Using 1 thread(s) for compression.
PM: Compressing and saving image data (10105 pages)...
PM: Image saving progress: 0%
atkbd serio0: keyboard reset failed on 1c060000.kmi
atkbd serio1: keyboard reset failed on 1c070000.kmi
PM: Image saving progress: 10%
PM: Image saving progress: 20%
PM: Image saving progress: 30%
PM: Image saving progress: 40%
PM: Image saving progress: 50%
PM: Image saving progress: 60%
PM: Image saving progress: 70%
PM: Image saving progress: 80%
PM: Image saving progress: 90%
PM: Image saving progress: 100%
PM: Image saving done.
PM: Wrote 646720 kbytes in 93.08 seconds (6.94 MB/s)
PM: S|
reboot: Power down
[ ... ]
Freezing user space processes ... (elapsed 0.000 seconds) done.
PM: Using 1 thread(s) for decompression.
PM: Loading and decompressing image data (10105 pages)...
hibernate: Suspended on cpu 5 [mpidr:0x103]
hibernate: Suspended on a CPU that is offline! Brining CPU up.
Detected VIPT I-cache on CPU5
CPU5: Booted secondary processor [410fd033]
random: nonblocking pool is initialized
PM: Image loading progress: 0%
PM: Image loading progress: 10%
PM: Image loading progress: 20%
PM: Image loading progress: 30%
PM: Image loading progress: 40%
PM: Image loading progress: 50%
PM: Image loading progress: 60%
PM: Image loading progress: 70%
PM: Image loading progress: 80%
PM: Image loading progress: 90%
PM: Image loading progress: 100%
PM: Image loading done.
PM: Read 646720 kbytes in 30.47 seconds (21.22 MB/s)
PM: quiesce of devices complete after 32.958 msecs
PM: late quiesce of devices complete after 11.574 msecs
PM: noirq quiesce of devices complete after 24.918 msecs
hibernate: Disabling secondary CPUs ...
IRQ1 no longer affine to CPU0
IRQ6 no longer affine to CPU0
IRQ28 no longer affine to CPU0
IRQ29 no longer affine to CPU0
IRQ32 no longer affine to CPU0
IRQ34 no longer affine to CPU0
IRQ35 no longer affine to CPU0
IRQ37 no longer affine to CPU0
IRQ41 no longer affine to CPU0
IRQ48 no longer affine to CPU0
CPU0: shutdown
psci: CPU0 killed.
PM: noirq restore of devices complete after 27.419 msecs
PM: early restore of devices complete after 23.554 msecs
PM: restore of devices complete after 113.188 msecs
Restarting tasks ... done.
root@ubuntu:~# atkbd serio0: keyboard reset failed on 1c060000.kmi
atkbd serio1: keyboard reset failed on 1c070000.kmi
root@ubuntu:~# cat /proc/cpuinfo
processor : 0
BogoMIPS : 100.00
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32
CPU implementer : 0x41
CPU architecture: 8
CPU variant : 0x0
CPU part : 0xd03
CPU revision : 3
root@ubuntu:~#
-------------------------%<-------------------------
James Morse (6):
arm64: Create sections.h
arm64: Add .mmuoff.text section
arm64: hibernate: Support DEBUG_PAGEALLOC
PM / Hibernate: Allow architectures to specify the hibernate/resume
CPU
arm64: hibernate: Identify the CPU to resume on by its MPIDR
Revert "arm64: hibernate: Refuse to hibernate if the boot cpu is
offline"
arch/arm64/Kconfig | 1 -
arch/arm64/include/asm/Kbuild | 1 -
arch/arm64/include/asm/sections.h | 29 ++++++++
arch/arm64/include/asm/suspend.h | 1 +
arch/arm64/include/asm/traps.h | 3 +-
arch/arm64/include/asm/virt.h | 5 +-
arch/arm64/kernel/alternative.c | 7 +-
arch/arm64/kernel/head.S | 6 +-
arch/arm64/kernel/hibernate.c | 137 ++++++++++++++++++++++++++++----------
arch/arm64/kernel/sleep.S | 2 +
arch/arm64/kernel/vmlinux.lds.S | 3 +
arch/arm64/kvm/reset.c | 3 +-
arch/arm64/mm/pageattr.c | 40 ++++++++++-
arch/arm64/mm/proc.S | 4 ++
kernel/power/hibernate.c | 16 ++++-
15 files changed, 202 insertions(+), 56 deletions(-)
create mode 100644 arch/arm64/include/asm/sections.h
--
2.8.0.rc3
^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH v2 1/6] arm64: Create sections.h
2016-06-15 17:35 [PATCH v2 0/6] arm64: hibernate: Fix DEBUG_PAGEALLOC and hibernate on non-boot cpu James Morse
@ 2016-06-15 17:35 ` James Morse
2016-06-16 10:52 ` Mark Rutland
2016-06-15 17:35 ` [PATCH v2 2/6] arm64: Add .mmuoff.text section James Morse
` (4 subsequent siblings)
5 siblings, 1 reply; 14+ messages in thread
From: James Morse @ 2016-06-15 17:35 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Will Deacon, Catalin Marinas, Rafael J . Wysocki, Pavel Machek,
linux-pm, Lorenzo Pieralisi, Mark Rutland, James Morse
Each time new section markers are added, kernel/vmlinux.ld.S is updated,
and new extern char __start_foo[] definitions are scattered through the
tree.
Create asm/include/sections.h to collect these definitions (and include
the existing asm-generic version).
Signed-off-by: James Morse <james.morse@arm.com>
---
Changes since v1:
* Removed text_text markers,
* Sorted sections.h
arch/arm64/include/asm/Kbuild | 1 -
arch/arm64/include/asm/sections.h | 28 ++++++++++++++++++++++++++++
arch/arm64/include/asm/traps.h | 3 +--
arch/arm64/include/asm/virt.h | 5 +----
arch/arm64/kernel/alternative.c | 7 +++----
arch/arm64/kernel/hibernate.c | 6 ------
arch/arm64/kvm/reset.c | 3 +--
7 files changed, 34 insertions(+), 19 deletions(-)
create mode 100644 arch/arm64/include/asm/sections.h
diff --git a/arch/arm64/include/asm/Kbuild b/arch/arm64/include/asm/Kbuild
index cff532a6744e..50c1f646b704 100644
--- a/arch/arm64/include/asm/Kbuild
+++ b/arch/arm64/include/asm/Kbuild
@@ -33,7 +33,6 @@ generic-y += poll.h
generic-y += preempt.h
generic-y += resource.h
generic-y += rwsem.h
-generic-y += sections.h
generic-y += segment.h
generic-y += sembuf.h
generic-y += serial.h
diff --git a/arch/arm64/include/asm/sections.h b/arch/arm64/include/asm/sections.h
new file mode 100644
index 000000000000..cb68eb348566
--- /dev/null
+++ b/arch/arm64/include/asm/sections.h
@@ -0,0 +1,28 @@
+/*
+ * Copyright (C) 2016 ARM Limited
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __ASM_SECTIONS_H
+#define __ASM_SECTIONS_H
+
+#include <asm-generic/sections.h>
+
+extern char __alt_instructions[], __alt_instructions_end[];
+extern char __exception_text_start[], __exception_text_end[];
+extern char __hibernate_exit_text_start[], __hibernate_exit_text_end[];
+extern char __hyp_idmap_text_start[], __hyp_idmap_text_end[];
+extern char __hyp_text_start[], __hyp_text_end[];
+extern char __idmap_text_start[], __idmap_text_end[];
+
+#endif /* __ASM_SECTIONS_H */
diff --git a/arch/arm64/include/asm/traps.h b/arch/arm64/include/asm/traps.h
index 0cc2f29bf9da..90156c803f11 100644
--- a/arch/arm64/include/asm/traps.h
+++ b/arch/arm64/include/asm/traps.h
@@ -19,6 +19,7 @@
#define __ASM_TRAP_H
#include <linux/list.h>
+#include <asm/sections.h>
struct pt_regs;
@@ -52,8 +53,6 @@ static inline int __in_irqentry_text(unsigned long ptr)
static inline int in_exception_text(unsigned long ptr)
{
- extern char __exception_text_start[];
- extern char __exception_text_end[];
int in;
in = ptr >= (unsigned long)&__exception_text_start &&
diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
index dcbcf8dcbefb..b78611157f8b 100644
--- a/arch/arm64/include/asm/virt.h
+++ b/arch/arm64/include/asm/virt.h
@@ -40,6 +40,7 @@
#ifndef __ASSEMBLY__
#include <asm/ptrace.h>
+#include <asm/sections.h>
/*
* __boot_cpu_mode records what mode CPUs were booted in.
@@ -82,10 +83,6 @@ extern void verify_cpu_run_el(void);
static inline void verify_cpu_run_el(void) {}
#endif
-/* The section containing the hypervisor text */
-extern char __hyp_text_start[];
-extern char __hyp_text_end[];
-
#endif /* __ASSEMBLY__ */
#endif /* ! __ASM__VIRT_H */
diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
index d2ee1b21a10d..4434dabde898 100644
--- a/arch/arm64/kernel/alternative.c
+++ b/arch/arm64/kernel/alternative.c
@@ -25,14 +25,13 @@
#include <asm/alternative.h>
#include <asm/cpufeature.h>
#include <asm/insn.h>
+#include <asm/sections.h>
#include <linux/stop_machine.h>
#define __ALT_PTR(a,f) (u32 *)((void *)&(a)->f + (a)->f)
#define ALT_ORIG_PTR(a) __ALT_PTR(a, orig_offset)
#define ALT_REPL_PTR(a) __ALT_PTR(a, alt_offset)
-extern struct alt_instr __alt_instructions[], __alt_instructions_end[];
-
struct alt_region {
struct alt_instr *begin;
struct alt_instr *end;
@@ -124,8 +123,8 @@ static int __apply_alternatives_multi_stop(void *unused)
{
static int patched = 0;
struct alt_region region = {
- .begin = __alt_instructions,
- .end = __alt_instructions_end,
+ .begin = (struct alt_instr *)__alt_instructions,
+ .end = (struct alt_instr *)__alt_instructions_end,
};
/* We always have a CPU 0 at this point (__init) */
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index f8df75d740f4..56e548fe0386 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -52,12 +52,6 @@ extern int in_suspend;
/* Do we need to reset el2? */
#define el2_reset_needed() (is_hyp_mode_available() && !is_kernel_in_hyp_mode())
-/*
- * Start/end of the hibernate exit code, this must be copied to a 'safe'
- * location in memory, and executed from there.
- */
-extern char __hibernate_exit_text_start[], __hibernate_exit_text_end[];
-
/* temporary el2 vectors in the __hibernate_exit_text section. */
extern char hibernate_el2_vectors[];
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index b1ad730e1567..c2d3594dd546 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -32,6 +32,7 @@
#include <asm/kvm_asm.h>
#include <asm/kvm_coproc.h>
#include <asm/kvm_mmu.h>
+#include <asm/sections.h>
/*
* ARMv8 Reset Values
@@ -133,8 +134,6 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq);
}
-extern char __hyp_idmap_text_start[];
-
unsigned long kvm_hyp_reset_entry(void)
{
if (!__kvm_cpu_uses_extended_idmap()) {
--
2.8.0.rc3
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH v2 2/6] arm64: Add .mmuoff.text section
2016-06-15 17:35 [PATCH v2 0/6] arm64: hibernate: Fix DEBUG_PAGEALLOC and hibernate on non-boot cpu James Morse
2016-06-15 17:35 ` [PATCH v2 1/6] arm64: Create sections.h James Morse
@ 2016-06-15 17:35 ` James Morse
2016-06-16 11:10 ` Mark Rutland
2016-06-15 17:35 ` [PATCH v2 3/6] arm64: hibernate: Support DEBUG_PAGEALLOC James Morse
` (3 subsequent siblings)
5 siblings, 1 reply; 14+ messages in thread
From: James Morse @ 2016-06-15 17:35 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Will Deacon, Catalin Marinas, Rafael J . Wysocki, Pavel Machek,
linux-pm, Lorenzo Pieralisi, Mark Rutland, James Morse
Resume from hibernate needs to clean any text executed by the kernel with
the MMU off to the PoC. Collect these functions together into a new
.mmuoff.text section.
This covers booting of secondary cores and the cpu_suspend() path used
by cpu-idle and suspend-to-ram.
The bulk of head.S is not included, as the primary boot code is only ever
executed once, the kernel never needs to ensure it is cleaned to a
particular point in the cache.
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
---
This patch is new since v1.
arch/arm64/include/asm/sections.h | 1 +
arch/arm64/kernel/head.S | 6 ++++--
arch/arm64/kernel/sleep.S | 2 ++
arch/arm64/kernel/vmlinux.lds.S | 3 +++
arch/arm64/mm/proc.S | 4 ++++
5 files changed, 14 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/sections.h b/arch/arm64/include/asm/sections.h
index cb68eb348566..6f27c3f86a09 100644
--- a/arch/arm64/include/asm/sections.h
+++ b/arch/arm64/include/asm/sections.h
@@ -24,5 +24,6 @@ extern char __hibernate_exit_text_start[], __hibernate_exit_text_end[];
extern char __hyp_idmap_text_start[], __hyp_idmap_text_end[];
extern char __hyp_text_start[], __hyp_text_end[];
extern char __idmap_text_start[], __idmap_text_end[];
+extern char __mmuoff_text_start[], __mmuoff_text_end[];
#endif /* __ASM_SECTIONS_H */
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 2c6e598a94dc..ff37231e2054 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -477,6 +477,7 @@ ENTRY(kimage_vaddr)
* Returns either BOOT_CPU_MODE_EL1 or BOOT_CPU_MODE_EL2 in x20 if
* booted in EL1 or EL2 respectively.
*/
+ .pushsection ".mmuoff.text", "ax"
ENTRY(el2_setup)
mrs x0, CurrentEL
cmp x0, #CurrentEL_EL2
@@ -604,7 +605,7 @@ install_el2_stub:
mov w20, #BOOT_CPU_MODE_EL2 // This CPU booted in EL2
eret
ENDPROC(el2_setup)
-
+ .popsection
/*
* Sets the __boot_cpu_mode flag depending on the CPU boot mode passed
* in x20. See arch/arm64/include/asm/virt.h for more info.
@@ -656,6 +657,7 @@ ENDPROC(secondary_holding_pen)
* Secondary entry point that jumps straight into the kernel. Only to
* be used where CPUs are brought online dynamically by the kernel.
*/
+ .pushsection ".mmuoff.text", "ax"
ENTRY(secondary_entry)
bl el2_setup // Drop to EL1
bl set_cpu_boot_mode_flag
@@ -687,7 +689,7 @@ __secondary_switched:
mov x29, #0
b secondary_start_kernel
ENDPROC(__secondary_switched)
-
+ .popsection
/*
* The booting CPU updates the failed status @__early_cpu_boot_status,
* with MMU turned off.
diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
index 9a3aec97ac09..e66ce9b7bbde 100644
--- a/arch/arm64/kernel/sleep.S
+++ b/arch/arm64/kernel/sleep.S
@@ -97,6 +97,7 @@ ENTRY(__cpu_suspend_enter)
ENDPROC(__cpu_suspend_enter)
.ltorg
+ .pushsection ".mmuoff.text", "ax"
ENTRY(cpu_resume)
bl el2_setup // if in EL2 drop to EL1 cleanly
/* enable the MMU early - so we can access sleep_save_stash by va */
@@ -106,6 +107,7 @@ ENTRY(cpu_resume)
adrp x26, swapper_pg_dir
b __cpu_setup
ENDPROC(cpu_resume)
+ .popsection
ENTRY(_cpu_resume)
mrs x1, mpidr_el1
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 435e820e898d..64fe907bcc5f 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -118,6 +118,9 @@ SECTIONS
__exception_text_end = .;
IRQENTRY_TEXT
SOFTIRQENTRY_TEXT
+ __mmuoff_text_start = .;
+ *(.mmuoff.text)
+ __mmuoff_text_end = .;
TEXT_TEXT
SCHED_TEXT
LOCK_TEXT
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index c4317879b938..655ff3ec90f2 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -83,6 +83,7 @@ ENDPROC(cpu_do_suspend)
*
* x0: Address of context pointer
*/
+ .pushsection ".mmuoff.text", "ax"
ENTRY(cpu_do_resume)
ldp x2, x3, [x0]
ldp x4, x5, [x0, #16]
@@ -111,6 +112,7 @@ ENTRY(cpu_do_resume)
isb
ret
ENDPROC(cpu_do_resume)
+ .popsection
#endif
/*
@@ -172,6 +174,7 @@ ENDPROC(idmap_cpu_replace_ttbr1)
* Initialise the processor for turning the MMU on. Return in x0 the
* value of the SCTLR_EL1 register.
*/
+ .pushsection ".mmuoff.text", "ax"
ENTRY(__cpu_setup)
tlbi vmalle1 // Invalidate local TLB
dsb nsh
@@ -255,3 +258,4 @@ ENDPROC(__cpu_setup)
crval:
.word 0xfcffffff // clear
.word 0x34d5d91d // set
+ .popsection
--
2.8.0.rc3
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH v2 3/6] arm64: hibernate: Support DEBUG_PAGEALLOC
2016-06-15 17:35 [PATCH v2 0/6] arm64: hibernate: Fix DEBUG_PAGEALLOC and hibernate on non-boot cpu James Morse
2016-06-15 17:35 ` [PATCH v2 1/6] arm64: Create sections.h James Morse
2016-06-15 17:35 ` [PATCH v2 2/6] arm64: Add .mmuoff.text section James Morse
@ 2016-06-15 17:35 ` James Morse
2016-06-15 17:35 ` [PATCH v2 4/6] PM / Hibernate: Allow architectures to specify the hibernate/resume CPU James Morse
` (2 subsequent siblings)
5 siblings, 0 replies; 14+ messages in thread
From: James Morse @ 2016-06-15 17:35 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Will Deacon, Catalin Marinas, Rafael J . Wysocki, Pavel Machek,
linux-pm, Lorenzo Pieralisi, Mark Rutland, James Morse
DEBUG_PAGEALLOC removes the valid bit of page table entries to prevent
any access to unallocated memory. Hibernate uses this as a hint that those
pages don't need to be saved/restored. This patch adds the
kernel_page_present() function it uses.
hibernate.c copies the resume kernel's linear map for use during restore.
Add _copy_pte() to fill-in the holes made by DEBUG_PAGEALLOC in the resume
kernel, so we can restore data the original kernel had at these addresses.
Finally, DEBUG_PAGEALLOC means the linear-map alias of KERNEL_START to
KERNEL_END may have holes in it, so we can't lazily clean this whole
area to the PoC. Only clean the .mmuoff.text region, and the kernel/kvm
idmaps.
This reverts commit da24eb1f3f9e2c7b75c5f8c40d8e48e2c4789596.
Reported-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
---
Change since v1:
* Changed cleaning of text_text section to the new mmuoff section.
arch/arm64/Kconfig | 1 -
arch/arm64/kernel/hibernate.c | 45 ++++++++++++++++++++++++++++++++++---------
arch/arm64/mm/pageattr.c | 40 +++++++++++++++++++++++++++++++++++++-
3 files changed, 75 insertions(+), 11 deletions(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 5a0a691d4220..c7aa78b9f586 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -598,7 +598,6 @@ source kernel/Kconfig.preempt
source kernel/Kconfig.hz
config ARCH_SUPPORTS_DEBUG_PAGEALLOC
- depends on !HIBERNATION
def_bool y
config ARCH_HAS_HOLES_MEMORYMODEL
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 56e548fe0386..75d45c4ceb84 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -235,8 +235,15 @@ int swsusp_arch_suspend(void)
if (__cpu_suspend_enter(&state)) {
ret = swsusp_save();
} else {
- /* Clean kernel to PoC for secondary core startup */
- __flush_dcache_area(LMADDR(KERNEL_START), KERNEL_END - KERNEL_START);
+ /* Clean kernel core startup/idle code to PoC*/
+ __flush_dcache_area(__mmuoff_text_start,
+ __mmuoff_text_end - __mmuoff_text_start);
+ __flush_dcache_area(__idmap_text_start,
+ __idmap_text_end - __idmap_text_start);
+
+ /* Clean kvm setup code to PoC? */
+ if (el2_reset_needed())
+ __flush_dcache_area(__hyp_idmap_text_start, __hyp_idmap_text_end - __hyp_idmap_text_start);
/*
* Tell the hibernation core that we've just restored
@@ -252,6 +259,32 @@ int swsusp_arch_suspend(void)
return ret;
}
+static void _copy_pte(pte_t *dst_pte, pte_t *src_pte, unsigned long addr)
+{
+ unsigned long pfn = virt_to_pfn(addr);
+
+ if (pte_valid(*src_pte)) {
+ /*
+ * Resume will overwrite areas that may be marked
+ * read only (code, rodata). Clear the RDONLY bit from
+ * the temporary mappings we use during restore.
+ */
+ set_pte(dst_pte, __pte(pte_val(*src_pte) & ~PTE_RDONLY));
+ } else if (debug_pagealloc_enabled()) {
+ /*
+ * debug_pagealloc may have removed the PTE_VALID bit if
+ * the page isn't in use by the resume kernel. It may have
+ * been in use by the original kernel, in which case we need
+ * to put it back in our copy to do the restore.
+ *
+ * Check for mappable memory that gives us a translation
+ * like part of the linear map.
+ */
+ if (pfn_valid(pfn) && pte_pfn(*src_pte) == pfn)
+ set_pte(dst_pte, __pte((pte_val(*src_pte) & ~PTE_RDONLY) | PTE_VALID));
+ }
+}
+
static int copy_pte(pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long start,
unsigned long end)
{
@@ -267,13 +300,7 @@ static int copy_pte(pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long start,
src_pte = pte_offset_kernel(src_pmd, start);
do {
- if (!pte_none(*src_pte))
- /*
- * Resume will overwrite areas that may be marked
- * read only (code, rodata). Clear the RDONLY bit from
- * the temporary mappings we use during restore.
- */
- set_pte(dst_pte, __pte(pte_val(*src_pte) & ~PTE_RDONLY));
+ _copy_pte(dst_pte, src_pte, addr);
} while (dst_pte++, src_pte++, addr += PAGE_SIZE, addr != end);
return 0;
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index ca6d268e3313..b6c0da84258c 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -139,4 +139,42 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
__pgprot(0),
__pgprot(PTE_VALID));
}
-#endif
+#ifdef CONFIG_HIBERNATION
+/*
+ * When built with CONFIG_DEBUG_PAGEALLOC and CONFIG_HIBERNATION, this function
+ * is used to determine if a linear map page has been marked as not-present by
+ * CONFIG_DEBUG_PAGEALLOC. Walk the page table and check the PTE_VALID bit.
+ * This is based on kern_addr_valid(), which almost does what we need.
+ */
+bool kernel_page_present(struct page *page)
+{
+ pgd_t *pgd;
+ pud_t *pud;
+ pmd_t *pmd;
+ pte_t *pte;
+ unsigned long addr = (unsigned long)page_address(page);
+
+ pgd = pgd_offset_k(addr);
+ if (pgd_none(*pgd))
+ return false;
+
+ pud = pud_offset(pgd, addr);
+ if (pud_none(*pud))
+ return false;
+ if (pud_sect(*pud))
+ return true;
+
+ pmd = pmd_offset(pud, addr);
+ if (pmd_none(*pmd))
+ return false;
+ if (pmd_sect(*pmd))
+ return true;
+
+ pte = pte_offset_kernel(pmd, addr);
+ if (pte_none(*pte))
+ return false;
+
+ return pte_valid(*pte);
+}
+#endif /* CONFIG_HIBERNATION */
+#endif /* CONFIG_DEBUG_PAGEALLOC */
--
2.8.0.rc3
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH v2 4/6] PM / Hibernate: Allow architectures to specify the hibernate/resume CPU
2016-06-15 17:35 [PATCH v2 0/6] arm64: hibernate: Fix DEBUG_PAGEALLOC and hibernate on non-boot cpu James Morse
` (2 preceding siblings ...)
2016-06-15 17:35 ` [PATCH v2 3/6] arm64: hibernate: Support DEBUG_PAGEALLOC James Morse
@ 2016-06-15 17:35 ` James Morse
2016-06-15 21:10 ` Rafael J. Wysocki
2016-06-15 17:35 ` [PATCH v2 5/6] arm64: hibernate: Identify the CPU to resume on by its MPIDR James Morse
2016-06-15 17:35 ` [PATCH v2 6/6] Revert "arm64: hibernate: Refuse to hibernate if the boot cpu is offline" James Morse
5 siblings, 1 reply; 14+ messages in thread
From: James Morse @ 2016-06-15 17:35 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Will Deacon, Catalin Marinas, Rafael J . Wysocki, Pavel Machek,
linux-pm, Lorenzo Pieralisi, Mark Rutland, James Morse
On arm64 the cpu with logical id 0 is assumed to be the boot CPU. If a
user hotplugs this CPU out, then uses kexec to boot a new kernel, the new
kernel will assign logical id 0 to a different physical CPU.
This breaks hibernate as hibernate and resume will be attempted on different
CPUs.
Define a weak symbol arch_hibernation_disable_cpus(), which defaults to
calling disable_nonboot_cpus). Architectures that allow CPU 0 to be
hotplugged can use this to control which CPU is used for hibernate/resume.
Signed-off-by: James Morse <james.morse@arm.com>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: Pavel Machek <pavel@ucw.cz>
---
If this approach is acceptable, this patch should go with 4&5 via the arm64
tree.
kernel/power/hibernate.c | 16 +++++++++++++---
1 file changed, 13 insertions(+), 3 deletions(-)
diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
index fca9254280ee..0e668a3207ec 100644
--- a/kernel/power/hibernate.c
+++ b/kernel/power/hibernate.c
@@ -70,6 +70,16 @@ bool hibernation_available(void)
}
/**
+ * arch_hibernation_disable_cpus - Allow architectures to direct which CPU
+ * is used for suspend or resume.
+ * @suspend: True during hibernate, false for resume.
+ */
+int __weak arch_hibernation_disable_cpus(__maybe_unused bool suspend)
+{
+ return disable_nonboot_cpus();
+}
+
+/**
* hibernation_set_ops - Set the global hibernate operations.
* @ops: Hibernation operations to use in subsequent hibernation transitions.
*/
@@ -279,7 +289,7 @@ static int create_image(int platform_mode)
if (error || hibernation_test(TEST_PLATFORM))
goto Platform_finish;
- error = disable_nonboot_cpus();
+ error = arch_hibernation_disable_cpus(true);
if (error || hibernation_test(TEST_CPUS))
goto Enable_cpus;
@@ -433,7 +443,7 @@ static int resume_target_kernel(bool platform_mode)
if (error)
goto Cleanup;
- error = disable_nonboot_cpus();
+ error = arch_hibernation_disable_cpus(false);
if (error)
goto Enable_cpus;
@@ -551,7 +561,7 @@ int hibernation_platform_enter(void)
if (error)
goto Platform_finish;
- error = disable_nonboot_cpus();
+ error = arch_hibernation_disable_cpus(true);
if (error)
goto Enable_cpus;
--
2.8.0.rc3
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH v2 5/6] arm64: hibernate: Identify the CPU to resume on by its MPIDR
2016-06-15 17:35 [PATCH v2 0/6] arm64: hibernate: Fix DEBUG_PAGEALLOC and hibernate on non-boot cpu James Morse
` (3 preceding siblings ...)
2016-06-15 17:35 ` [PATCH v2 4/6] PM / Hibernate: Allow architectures to specify the hibernate/resume CPU James Morse
@ 2016-06-15 17:35 ` James Morse
2016-06-15 17:35 ` [PATCH v2 6/6] Revert "arm64: hibernate: Refuse to hibernate if the boot cpu is offline" James Morse
5 siblings, 0 replies; 14+ messages in thread
From: James Morse @ 2016-06-15 17:35 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Will Deacon, Catalin Marinas, Rafael J . Wysocki, Pavel Machek,
linux-pm, Lorenzo Pieralisi, Mark Rutland, James Morse
On arm64 the cpu with logical id 0 is assumed to be the boot CPU. If a
user hotplugs this CPU out, then uses kexec to boot a new kernel, the new
kernel will assign logical id 0 to a different physical CPU.
This breaks hibernate as hibernate and resume will be attempted on different
CPUs.
Save the MPIDR of the CPU we hibernated on in the hibernate arch-header,
and provide arch_hibernation_disable_cpus() to switch to that CPU during
resume. During hibernate use disable_nonboot_cpus(), and save the MPIDR of
the CPU it selected, this ensures frozen_cpus is updated correctly.
Booting with maxcpus=1, then bringing additional CPUs up from user space may
cause us to hibernate on a CPU that isn't online during boot. In this case,
bring the CPU online.
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
---
Change since v1:
* Fixed Brining typo
arch/arm64/include/asm/suspend.h | 1 +
arch/arm64/kernel/hibernate.c | 70 ++++++++++++++++++++++++++++++++++++++++
2 files changed, 71 insertions(+)
diff --git a/arch/arm64/include/asm/suspend.h b/arch/arm64/include/asm/suspend.h
index 024d623f662e..487758882ef3 100644
--- a/arch/arm64/include/asm/suspend.h
+++ b/arch/arm64/include/asm/suspend.h
@@ -46,5 +46,6 @@ int swsusp_arch_suspend(void);
int swsusp_arch_resume(void);
int arch_hibernation_header_save(void *addr, unsigned int max_size);
int arch_hibernation_header_restore(void *addr);
+int arch_hibernation_disable_cpus(bool suspend);
#endif
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 75d45c4ceb84..5f9266aeff00 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -15,6 +15,7 @@
* License terms: GNU General Public License (GPL) version 2
*/
#define pr_fmt(x) "hibernate: " x
+#include <linux/cpu.h>
#include <linux/kvm_host.h>
#include <linux/mm.h>
#include <linux/notifier.h>
@@ -26,6 +27,7 @@
#include <asm/barrier.h>
#include <asm/cacheflush.h>
+#include <asm/cputype.h>
#include <asm/irqflags.h>
#include <asm/memory.h>
#include <asm/mmu_context.h>
@@ -33,6 +35,7 @@
#include <asm/pgtable.h>
#include <asm/pgtable-hwdef.h>
#include <asm/sections.h>
+#include <asm/smp_plat.h>
#include <asm/suspend.h>
#include <asm/virt.h>
@@ -59,6 +62,12 @@ extern char hibernate_el2_vectors[];
extern char __hyp_stub_vectors[];
/*
+ * The logical cpu number we should resume on, initialised to a non-cpu
+ * number.
+ */
+static int sleep_cpu = -EINVAL;
+
+/*
* Values that may not change over hibernate/resume. We put the build number
* and date in here so that we guarantee not to resume with a different
* kernel.
@@ -80,6 +89,8 @@ static struct arch_hibernate_hdr {
* re-configure el2.
*/
phys_addr_t __hyp_stub_vectors;
+
+ u64 sleep_cpu_mpidr;
} resume_hdr;
static inline void arch_hdr_invariants(struct arch_hibernate_hdr_invariants *i)
@@ -122,12 +133,18 @@ int arch_hibernation_header_save(void *addr, unsigned int max_size)
else
hdr->__hyp_stub_vectors = 0;
+ /* Save the mpidr of the cpu we called cpu_suspend() on... */
+ hdr->sleep_cpu_mpidr = cpu_logical_map(sleep_cpu);
+ pr_info("Suspending on CPU %d [mpidr:0x%llx]\n", sleep_cpu,
+ hdr->sleep_cpu_mpidr);
+
return 0;
}
EXPORT_SYMBOL(arch_hibernation_header_save);
int arch_hibernation_header_restore(void *addr)
{
+ int ret;
struct arch_hibernate_hdr_invariants invariants;
struct arch_hibernate_hdr *hdr = addr;
@@ -137,6 +154,23 @@ int arch_hibernation_header_restore(void *addr)
return -EINVAL;
}
+ sleep_cpu = get_logical_index(hdr->sleep_cpu_mpidr);
+ pr_info("Suspended on CPU %d [mpidr:0x%llx]\n", sleep_cpu,
+ hdr->sleep_cpu_mpidr);
+ if (sleep_cpu < 0) {
+ pr_crit("Suspended on a CPU not known to this kernel!\n");
+ return -EINVAL;
+ }
+ if (!cpu_online(sleep_cpu)) {
+ pr_info("Suspended on a CPU that is offline! Bringing CPU up.\n");
+ ret = cpu_up(sleep_cpu);
+ if (ret) {
+ pr_err("Failed to bring suspend-CPU up!\n");
+ sleep_cpu = -EINVAL;
+ return ret;
+ }
+ }
+
resume_hdr = *hdr;
return 0;
@@ -233,6 +267,7 @@ int swsusp_arch_suspend(void)
local_dbg_save(flags);
if (__cpu_suspend_enter(&state)) {
+ sleep_cpu = smp_processor_id();
ret = swsusp_save();
} else {
/* Clean kernel core startup/idle code to PoC*/
@@ -251,6 +286,7 @@ int swsusp_arch_suspend(void)
*/
in_suspend = 0;
+ sleep_cpu = -EINVAL;
__cpu_suspend_exit();
}
@@ -506,3 +542,37 @@ static int __init check_boot_cpu_online_init(void)
return 0;
}
core_initcall(check_boot_cpu_online_init);
+
+/* This overrides the weak version in kernel/power/hibernate.c */
+int arch_hibernation_disable_cpus(bool suspend)
+{
+ int cpu, ret;
+
+ if (suspend) {
+ /*
+ * During hibernate we need frozen_cpus to be updated and saved.
+ */
+ ret = disable_nonboot_cpus();
+ } else {
+ /*
+ * Resuming from hibernate. From here, we can't race with
+ * userspace, and don't need to update frozen_cpus.
+ */
+ pr_info("Disabling secondary CPUs ...\n");
+
+ /* sleep_cpu must have been loaded from the arch header */
+ BUG_ON(sleep_cpu < 0);
+
+ for_each_online_cpu(cpu) {
+ if (cpu == sleep_cpu)
+ continue;
+ ret = cpu_down(cpu);
+ if (ret) {
+ pr_err("Secondary CPUs are not disabled\n");
+ break;
+ }
+ }
+ }
+
+ return ret;
+}
--
2.8.0.rc3
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH v2 6/6] Revert "arm64: hibernate: Refuse to hibernate if the boot cpu is offline"
2016-06-15 17:35 [PATCH v2 0/6] arm64: hibernate: Fix DEBUG_PAGEALLOC and hibernate on non-boot cpu James Morse
` (4 preceding siblings ...)
2016-06-15 17:35 ` [PATCH v2 5/6] arm64: hibernate: Identify the CPU to resume on by its MPIDR James Morse
@ 2016-06-15 17:35 ` James Morse
5 siblings, 0 replies; 14+ messages in thread
From: James Morse @ 2016-06-15 17:35 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Will Deacon, Catalin Marinas, Rafael J . Wysocki, Pavel Machek,
linux-pm, Lorenzo Pieralisi, Mark Rutland, James Morse
Now that we use the MPIDR to resume on the same CPU that we hibernated on,
we no longer need to refuse to hibernate if the boot cpu is offline. (Which
we can't possibly know if kexec causes logical CPUs to be renumbered).
This reverts commit 1fe492ce6482b77807b25d29690a48c46456beee.
Signed-off-by: James Morse <james.morse@arm.com>
---
arch/arm64/kernel/hibernate.c | 26 --------------------------
1 file changed, 26 deletions(-)
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 5f9266aeff00..07c59cbfd615 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -18,7 +18,6 @@
#include <linux/cpu.h>
#include <linux/kvm_host.h>
#include <linux/mm.h>
-#include <linux/notifier.h>
#include <linux/pm.h>
#include <linux/sched.h>
#include <linux/suspend.h>
@@ -518,31 +517,6 @@ out:
return rc;
}
-static int check_boot_cpu_online_pm_callback(struct notifier_block *nb,
- unsigned long action, void *ptr)
-{
- if (action == PM_HIBERNATION_PREPARE &&
- cpumask_first(cpu_online_mask) != 0) {
- pr_warn("CPU0 is offline.\n");
- return notifier_from_errno(-ENODEV);
- }
-
- return NOTIFY_OK;
-}
-
-static int __init check_boot_cpu_online_init(void)
-{
- /*
- * Set this pm_notifier callback with a lower priority than
- * cpu_hotplug_pm_callback, so that cpu_hotplug_pm_callback will be
- * called earlier to disable cpu hotplug before the cpu online check.
- */
- pm_notifier(check_boot_cpu_online_pm_callback, -INT_MAX);
-
- return 0;
-}
-core_initcall(check_boot_cpu_online_init);
-
/* This overrides the weak version in kernel/power/hibernate.c */
int arch_hibernation_disable_cpus(bool suspend)
{
--
2.8.0.rc3
^ permalink raw reply related [flat|nested] 14+ messages in thread
* Re: [PATCH v2 4/6] PM / Hibernate: Allow architectures to specify the hibernate/resume CPU
2016-06-15 17:35 ` [PATCH v2 4/6] PM / Hibernate: Allow architectures to specify the hibernate/resume CPU James Morse
@ 2016-06-15 21:10 ` Rafael J. Wysocki
2016-06-28 14:51 ` James Morse
0 siblings, 1 reply; 14+ messages in thread
From: Rafael J. Wysocki @ 2016-06-15 21:10 UTC (permalink / raw)
To: James Morse
Cc: linux-arm-kernel@lists.infradead.org, Will Deacon,
Catalin Marinas, Rafael J . Wysocki, Pavel Machek,
linux-pm@vger.kernel.org, Lorenzo Pieralisi, Mark Rutland
On Wed, Jun 15, 2016 at 7:35 PM, James Morse <james.morse@arm.com> wrote:
> On arm64 the cpu with logical id 0 is assumed to be the boot CPU. If a
> user hotplugs this CPU out, then uses kexec to boot a new kernel, the new
> kernel will assign logical id 0 to a different physical CPU.
> This breaks hibernate as hibernate and resume will be attempted on different
> CPUs.
>
> Define a weak symbol arch_hibernation_disable_cpus(), which defaults to
> calling disable_nonboot_cpus). Architectures that allow CPU 0 to be
> hotplugged can use this to control which CPU is used for hibernate/resume.
>
> Signed-off-by: James Morse <james.morse@arm.com>
> Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
> Cc: Pavel Machek <pavel@ucw.cz>
> ---
> If this approach is acceptable, this patch should go with 4&5 via the arm64
> tree.
I'm not entirely happy with this, but if you absolutely need it and if
you can't think about any cleaner way to achieve the goal, feel free
to add my ACK to this patch if necessary.
> kernel/power/hibernate.c | 16 +++++++++++++---
> 1 file changed, 13 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
> index fca9254280ee..0e668a3207ec 100644
> --- a/kernel/power/hibernate.c
> +++ b/kernel/power/hibernate.c
> @@ -70,6 +70,16 @@ bool hibernation_available(void)
> }
>
> /**
> + * arch_hibernation_disable_cpus - Allow architectures to direct which CPU
> + * is used for suspend or resume.
> + * @suspend: True during hibernate, false for resume.
> + */
> +int __weak arch_hibernation_disable_cpus(__maybe_unused bool suspend)
> +{
> + return disable_nonboot_cpus();
> +}
> +
> +/**
> * hibernation_set_ops - Set the global hibernate operations.
> * @ops: Hibernation operations to use in subsequent hibernation transitions.
> */
> @@ -279,7 +289,7 @@ static int create_image(int platform_mode)
> if (error || hibernation_test(TEST_PLATFORM))
> goto Platform_finish;
>
> - error = disable_nonboot_cpus();
> + error = arch_hibernation_disable_cpus(true);
> if (error || hibernation_test(TEST_CPUS))
> goto Enable_cpus;
>
> @@ -433,7 +443,7 @@ static int resume_target_kernel(bool platform_mode)
> if (error)
> goto Cleanup;
>
> - error = disable_nonboot_cpus();
> + error = arch_hibernation_disable_cpus(false);
> if (error)
> goto Enable_cpus;
>
> @@ -551,7 +561,7 @@ int hibernation_platform_enter(void)
> if (error)
> goto Platform_finish;
>
> - error = disable_nonboot_cpus();
> + error = arch_hibernation_disable_cpus(true);
> if (error)
> goto Enable_cpus;
>
> --
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v2 1/6] arm64: Create sections.h
2016-06-15 17:35 ` [PATCH v2 1/6] arm64: Create sections.h James Morse
@ 2016-06-16 10:52 ` Mark Rutland
0 siblings, 0 replies; 14+ messages in thread
From: Mark Rutland @ 2016-06-16 10:52 UTC (permalink / raw)
To: James Morse
Cc: linux-arm-kernel, Will Deacon, Catalin Marinas,
Rafael J . Wysocki, Pavel Machek, linux-pm, Lorenzo Pieralisi
On Wed, Jun 15, 2016 at 06:35:43PM +0100, James Morse wrote:
> Each time new section markers are added, kernel/vmlinux.ld.S is updated,
> and new extern char __start_foo[] definitions are scattered through the
> tree.
>
> Create asm/include/sections.h to collect these definitions (and include
> the existing asm-generic version).
>
> Signed-off-by: James Morse <james.morse@arm.com>
This looks right to me, and a defconfig kernel builds cleanly with this
applied. FWIW:
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Mark.
> ---
> Changes since v1:
> * Removed text_text markers,
> * Sorted sections.h
>
> arch/arm64/include/asm/Kbuild | 1 -
> arch/arm64/include/asm/sections.h | 28 ++++++++++++++++++++++++++++
> arch/arm64/include/asm/traps.h | 3 +--
> arch/arm64/include/asm/virt.h | 5 +----
> arch/arm64/kernel/alternative.c | 7 +++----
> arch/arm64/kernel/hibernate.c | 6 ------
> arch/arm64/kvm/reset.c | 3 +--
> 7 files changed, 34 insertions(+), 19 deletions(-)
> create mode 100644 arch/arm64/include/asm/sections.h
>
> diff --git a/arch/arm64/include/asm/Kbuild b/arch/arm64/include/asm/Kbuild
> index cff532a6744e..50c1f646b704 100644
> --- a/arch/arm64/include/asm/Kbuild
> +++ b/arch/arm64/include/asm/Kbuild
> @@ -33,7 +33,6 @@ generic-y += poll.h
> generic-y += preempt.h
> generic-y += resource.h
> generic-y += rwsem.h
> -generic-y += sections.h
> generic-y += segment.h
> generic-y += sembuf.h
> generic-y += serial.h
> diff --git a/arch/arm64/include/asm/sections.h b/arch/arm64/include/asm/sections.h
> new file mode 100644
> index 000000000000..cb68eb348566
> --- /dev/null
> +++ b/arch/arm64/include/asm/sections.h
> @@ -0,0 +1,28 @@
> +/*
> + * Copyright (C) 2016 ARM Limited
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program. If not, see <http://www.gnu.org/licenses/>.
> + */
> +#ifndef __ASM_SECTIONS_H
> +#define __ASM_SECTIONS_H
> +
> +#include <asm-generic/sections.h>
> +
> +extern char __alt_instructions[], __alt_instructions_end[];
> +extern char __exception_text_start[], __exception_text_end[];
> +extern char __hibernate_exit_text_start[], __hibernate_exit_text_end[];
> +extern char __hyp_idmap_text_start[], __hyp_idmap_text_end[];
> +extern char __hyp_text_start[], __hyp_text_end[];
> +extern char __idmap_text_start[], __idmap_text_end[];
> +
> +#endif /* __ASM_SECTIONS_H */
> diff --git a/arch/arm64/include/asm/traps.h b/arch/arm64/include/asm/traps.h
> index 0cc2f29bf9da..90156c803f11 100644
> --- a/arch/arm64/include/asm/traps.h
> +++ b/arch/arm64/include/asm/traps.h
> @@ -19,6 +19,7 @@
> #define __ASM_TRAP_H
>
> #include <linux/list.h>
> +#include <asm/sections.h>
>
> struct pt_regs;
>
> @@ -52,8 +53,6 @@ static inline int __in_irqentry_text(unsigned long ptr)
>
> static inline int in_exception_text(unsigned long ptr)
> {
> - extern char __exception_text_start[];
> - extern char __exception_text_end[];
> int in;
>
> in = ptr >= (unsigned long)&__exception_text_start &&
> diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
> index dcbcf8dcbefb..b78611157f8b 100644
> --- a/arch/arm64/include/asm/virt.h
> +++ b/arch/arm64/include/asm/virt.h
> @@ -40,6 +40,7 @@
> #ifndef __ASSEMBLY__
>
> #include <asm/ptrace.h>
> +#include <asm/sections.h>
>
> /*
> * __boot_cpu_mode records what mode CPUs were booted in.
> @@ -82,10 +83,6 @@ extern void verify_cpu_run_el(void);
> static inline void verify_cpu_run_el(void) {}
> #endif
>
> -/* The section containing the hypervisor text */
> -extern char __hyp_text_start[];
> -extern char __hyp_text_end[];
> -
> #endif /* __ASSEMBLY__ */
>
> #endif /* ! __ASM__VIRT_H */
> diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
> index d2ee1b21a10d..4434dabde898 100644
> --- a/arch/arm64/kernel/alternative.c
> +++ b/arch/arm64/kernel/alternative.c
> @@ -25,14 +25,13 @@
> #include <asm/alternative.h>
> #include <asm/cpufeature.h>
> #include <asm/insn.h>
> +#include <asm/sections.h>
> #include <linux/stop_machine.h>
>
> #define __ALT_PTR(a,f) (u32 *)((void *)&(a)->f + (a)->f)
> #define ALT_ORIG_PTR(a) __ALT_PTR(a, orig_offset)
> #define ALT_REPL_PTR(a) __ALT_PTR(a, alt_offset)
>
> -extern struct alt_instr __alt_instructions[], __alt_instructions_end[];
> -
> struct alt_region {
> struct alt_instr *begin;
> struct alt_instr *end;
> @@ -124,8 +123,8 @@ static int __apply_alternatives_multi_stop(void *unused)
> {
> static int patched = 0;
> struct alt_region region = {
> - .begin = __alt_instructions,
> - .end = __alt_instructions_end,
> + .begin = (struct alt_instr *)__alt_instructions,
> + .end = (struct alt_instr *)__alt_instructions_end,
> };
>
> /* We always have a CPU 0 at this point (__init) */
> diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
> index f8df75d740f4..56e548fe0386 100644
> --- a/arch/arm64/kernel/hibernate.c
> +++ b/arch/arm64/kernel/hibernate.c
> @@ -52,12 +52,6 @@ extern int in_suspend;
> /* Do we need to reset el2? */
> #define el2_reset_needed() (is_hyp_mode_available() && !is_kernel_in_hyp_mode())
>
> -/*
> - * Start/end of the hibernate exit code, this must be copied to a 'safe'
> - * location in memory, and executed from there.
> - */
> -extern char __hibernate_exit_text_start[], __hibernate_exit_text_end[];
> -
> /* temporary el2 vectors in the __hibernate_exit_text section. */
> extern char hibernate_el2_vectors[];
>
> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> index b1ad730e1567..c2d3594dd546 100644
> --- a/arch/arm64/kvm/reset.c
> +++ b/arch/arm64/kvm/reset.c
> @@ -32,6 +32,7 @@
> #include <asm/kvm_asm.h>
> #include <asm/kvm_coproc.h>
> #include <asm/kvm_mmu.h>
> +#include <asm/sections.h>
>
> /*
> * ARMv8 Reset Values
> @@ -133,8 +134,6 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
> return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq);
> }
>
> -extern char __hyp_idmap_text_start[];
> -
> unsigned long kvm_hyp_reset_entry(void)
> {
> if (!__kvm_cpu_uses_extended_idmap()) {
> --
> 2.8.0.rc3
>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v2 2/6] arm64: Add .mmuoff.text section
2016-06-15 17:35 ` [PATCH v2 2/6] arm64: Add .mmuoff.text section James Morse
@ 2016-06-16 11:10 ` Mark Rutland
2016-06-16 13:22 ` James Morse
0 siblings, 1 reply; 14+ messages in thread
From: Mark Rutland @ 2016-06-16 11:10 UTC (permalink / raw)
To: James Morse
Cc: linux-arm-kernel, Will Deacon, Catalin Marinas,
Rafael J . Wysocki, Pavel Machek, linux-pm, Lorenzo Pieralisi
On Wed, Jun 15, 2016 at 06:35:44PM +0100, James Morse wrote:
> Resume from hibernate needs to clean any text executed by the kernel with
> the MMU off to the PoC. Collect these functions together into a new
> .mmuoff.text section.
>
> This covers booting of secondary cores and the cpu_suspend() path used
> by cpu-idle and suspend-to-ram.
>
> The bulk of head.S is not included, as the primary boot code is only ever
> executed once, the kernel never needs to ensure it is cleaned to a
> particular point in the cache.
>
> Suggested-by: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: James Morse <james.morse@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> ---
> This patch is new since v1.
>
> arch/arm64/include/asm/sections.h | 1 +
> arch/arm64/kernel/head.S | 6 ++++--
> arch/arm64/kernel/sleep.S | 2 ++
> arch/arm64/kernel/vmlinux.lds.S | 3 +++
> arch/arm64/mm/proc.S | 4 ++++
> 5 files changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/include/asm/sections.h b/arch/arm64/include/asm/sections.h
> index cb68eb348566..6f27c3f86a09 100644
> --- a/arch/arm64/include/asm/sections.h
> +++ b/arch/arm64/include/asm/sections.h
> @@ -24,5 +24,6 @@ extern char __hibernate_exit_text_start[], __hibernate_exit_text_end[];
> extern char __hyp_idmap_text_start[], __hyp_idmap_text_end[];
> extern char __hyp_text_start[], __hyp_text_end[];
> extern char __idmap_text_start[], __idmap_text_end[];
> +extern char __mmuoff_text_start[], __mmuoff_text_end[];
>
> #endif /* __ASM_SECTIONS_H */
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index 2c6e598a94dc..ff37231e2054 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -477,6 +477,7 @@ ENTRY(kimage_vaddr)
> * Returns either BOOT_CPU_MODE_EL1 or BOOT_CPU_MODE_EL2 in x20 if
> * booted in EL1 or EL2 respectively.
> */
> + .pushsection ".mmuoff.text", "ax"
> ENTRY(el2_setup)
> mrs x0, CurrentEL
> cmp x0, #CurrentEL_EL2
> @@ -604,7 +605,7 @@ install_el2_stub:
> mov w20, #BOOT_CPU_MODE_EL2 // This CPU booted in EL2
> eret
> ENDPROC(el2_setup)
> -
> + .popsection
> /*
> * Sets the __boot_cpu_mode flag depending on the CPU boot mode passed
> * in x20. See arch/arm64/include/asm/virt.h for more info.
> @@ -656,6 +657,7 @@ ENDPROC(secondary_holding_pen)
> * Secondary entry point that jumps straight into the kernel. Only to
> * be used where CPUs are brought online dynamically by the kernel.
> */
> + .pushsection ".mmuoff.text", "ax"
> ENTRY(secondary_entry)
> bl el2_setup // Drop to EL1
> bl set_cpu_boot_mode_flag
> @@ -687,7 +689,7 @@ __secondary_switched:
> mov x29, #0
> b secondary_start_kernel
> ENDPROC(__secondary_switched)
> -
> + .popsection
I think we also need to cover set_cpu_boot_mode_flag and
__boot_cpu_mode.
Likewise secondary_holding_pen and secondary_holding_pen_release, in
case you booted with maxcpus=1, suspended, resumed, then tried to bring
secondaries up with spin-table.
Otherwise, this looks good!
Thanks,
Mark.
> /*
> * The booting CPU updates the failed status @__early_cpu_boot_status,
> * with MMU turned off.
> diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
> index 9a3aec97ac09..e66ce9b7bbde 100644
> --- a/arch/arm64/kernel/sleep.S
> +++ b/arch/arm64/kernel/sleep.S
> @@ -97,6 +97,7 @@ ENTRY(__cpu_suspend_enter)
> ENDPROC(__cpu_suspend_enter)
> .ltorg
>
> + .pushsection ".mmuoff.text", "ax"
> ENTRY(cpu_resume)
> bl el2_setup // if in EL2 drop to EL1 cleanly
> /* enable the MMU early - so we can access sleep_save_stash by va */
> @@ -106,6 +107,7 @@ ENTRY(cpu_resume)
> adrp x26, swapper_pg_dir
> b __cpu_setup
> ENDPROC(cpu_resume)
> + .popsection
>
> ENTRY(_cpu_resume)
> mrs x1, mpidr_el1
> diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
> index 435e820e898d..64fe907bcc5f 100644
> --- a/arch/arm64/kernel/vmlinux.lds.S
> +++ b/arch/arm64/kernel/vmlinux.lds.S
> @@ -118,6 +118,9 @@ SECTIONS
> __exception_text_end = .;
> IRQENTRY_TEXT
> SOFTIRQENTRY_TEXT
> + __mmuoff_text_start = .;
> + *(.mmuoff.text)
> + __mmuoff_text_end = .;
> TEXT_TEXT
> SCHED_TEXT
> LOCK_TEXT
> diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
> index c4317879b938..655ff3ec90f2 100644
> --- a/arch/arm64/mm/proc.S
> +++ b/arch/arm64/mm/proc.S
> @@ -83,6 +83,7 @@ ENDPROC(cpu_do_suspend)
> *
> * x0: Address of context pointer
> */
> + .pushsection ".mmuoff.text", "ax"
> ENTRY(cpu_do_resume)
> ldp x2, x3, [x0]
> ldp x4, x5, [x0, #16]
> @@ -111,6 +112,7 @@ ENTRY(cpu_do_resume)
> isb
> ret
> ENDPROC(cpu_do_resume)
> + .popsection
> #endif
>
> /*
> @@ -172,6 +174,7 @@ ENDPROC(idmap_cpu_replace_ttbr1)
> * Initialise the processor for turning the MMU on. Return in x0 the
> * value of the SCTLR_EL1 register.
> */
> + .pushsection ".mmuoff.text", "ax"
> ENTRY(__cpu_setup)
> tlbi vmalle1 // Invalidate local TLB
> dsb nsh
> @@ -255,3 +258,4 @@ ENDPROC(__cpu_setup)
> crval:
> .word 0xfcffffff // clear
> .word 0x34d5d91d // set
> + .popsection
> --
> 2.8.0.rc3
>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v2 2/6] arm64: Add .mmuoff.text section
2016-06-16 11:10 ` Mark Rutland
@ 2016-06-16 13:22 ` James Morse
2016-06-16 13:55 ` Mark Rutland
0 siblings, 1 reply; 14+ messages in thread
From: James Morse @ 2016-06-16 13:22 UTC (permalink / raw)
To: Mark Rutland
Cc: linux-arm-kernel, Will Deacon, Catalin Marinas,
Rafael J . Wysocki, Pavel Machek, linux-pm, Lorenzo Pieralisi
Hi Mark,
On 16/06/16 12:10, Mark Rutland wrote:
> On Wed, Jun 15, 2016 at 06:35:44PM +0100, James Morse wrote:
>> Resume from hibernate needs to clean any text executed by the kernel with
>> the MMU off to the PoC. Collect these functions together into a new
>> .mmuoff.text section.
>>
>> This covers booting of secondary cores and the cpu_suspend() path used
>> by cpu-idle and suspend-to-ram.
>>
>> The bulk of head.S is not included, as the primary boot code is only ever
>> executed once, the kernel never needs to ensure it is cleaned to a
>> particular point in the cache.
>> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
>> index 2c6e598a94dc..ff37231e2054 100644
>> --- a/arch/arm64/kernel/head.S
>> +++ b/arch/arm64/kernel/head.S
>> @@ -656,6 +657,7 @@ ENDPROC(secondary_holding_pen)
>> * Secondary entry point that jumps straight into the kernel. Only to
>> * be used where CPUs are brought online dynamically by the kernel.
>> */
>> + .pushsection ".mmuoff.text", "ax"
>> ENTRY(secondary_entry)
>> bl el2_setup // Drop to EL1
>> bl set_cpu_boot_mode_flag
>> @@ -687,7 +689,7 @@ __secondary_switched:
>> mov x29, #0
>> b secondary_start_kernel
>> ENDPROC(__secondary_switched)
>> -
>> + .popsection
>
> I think we also need to cover set_cpu_boot_mode_flag and
> __boot_cpu_mode.
Bother, yes.
How come cpu_resume doesn't call this? I guess we assume
psci_cpu_suspend_enter() will bring the core back at the same EL
> Likewise secondary_holding_pen and secondary_holding_pen_release, in
> case you booted with maxcpus=1, suspended, resumed, then tried to bring
> secondaries up with spin-table.
Whoa! This must never happen!
With KASLR:
* relocate to location-1,
* release the secondary cores from firmware into
location-1:secondary_holding_pen,
* resume from hibernate, at which point we are running from location-2,
as the kaslr values are now from the hibernate kernel.
* location-2:secondary_holding_pen is empty,
* location-1:secondary_holding_pen now contains a user-space string, or some
other horror.
This didn't come up during testing because maxcpus=1 was permanent before v4.7,
and we can't bundle cores back into the secondary_holding_pen. Hibernate without
PSCI (or some other cpu_die()) mechanism fails in the core code:
> [70648.097242] Disabling non-boot CPUs ...
> [70648.117277] Error taking CPU1 down: -95
> [70648.117286] Non-boot CPUs are not disabled
... but if we never tried to boot those cpus, we don't hit this check, and the
cores are left in secondary_holding_pen after smp_prepare_cpus(), called well
before the late_initcall that kicks of resume.
This is broken on systems that load the kernel at a different address over a
reboot, (using KASLR or a fancy boot loader), and use spin-table for secondary
cores. (probably none in practice, but still worth fixing)
Thinking aloud:
cpus_stuck_in_kernel only indicates cores that we failed to fully bring up.
(incompatible translation granule etc). There could still be cores in the
kernel's secondary holding pen. We should prevent kexec/hibernate in this case.
(hibernate because we can't safely resume on such a machine)
kexec[0] currently checks for a cpu_die() call:
> if (num_online_cpus() > 1)
Changing this to 'num_possible_cpus() > 1' will cover the above case.
Similar code will need to be added to hibernate.
An alternative is to increase cpus_stuck_in_kernel in
smp_spin_table_cpu_prepare(), but it stops being a counter at this point.
Thoughts? Does this make sense, or do I have the wrong end of the stick somewhere!
James
[0] http://www.spinics.net/lists/arm-kernel/msg510097.html
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v2 2/6] arm64: Add .mmuoff.text section
2016-06-16 13:22 ` James Morse
@ 2016-06-16 13:55 ` Mark Rutland
0 siblings, 0 replies; 14+ messages in thread
From: Mark Rutland @ 2016-06-16 13:55 UTC (permalink / raw)
To: James Morse
Cc: linux-arm-kernel, Will Deacon, Catalin Marinas,
Rafael J . Wysocki, Pavel Machek, linux-pm, Lorenzo Pieralisi
On Thu, Jun 16, 2016 at 02:22:21PM +0100, James Morse wrote:
> Hi Mark,
>
> On 16/06/16 12:10, Mark Rutland wrote:
> > On Wed, Jun 15, 2016 at 06:35:44PM +0100, James Morse wrote:
> >> Resume from hibernate needs to clean any text executed by the kernel with
> >> the MMU off to the PoC. Collect these functions together into a new
> >> .mmuoff.text section.
> >>
> >> This covers booting of secondary cores and the cpu_suspend() path used
> >> by cpu-idle and suspend-to-ram.
> >>
> >> The bulk of head.S is not included, as the primary boot code is only ever
> >> executed once, the kernel never needs to ensure it is cleaned to a
> >> particular point in the cache.
>
>
> >> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> >> index 2c6e598a94dc..ff37231e2054 100644
> >> --- a/arch/arm64/kernel/head.S
> >> +++ b/arch/arm64/kernel/head.S
> >> @@ -656,6 +657,7 @@ ENDPROC(secondary_holding_pen)
> >> * Secondary entry point that jumps straight into the kernel. Only to
> >> * be used where CPUs are brought online dynamically by the kernel.
> >> */
> >> + .pushsection ".mmuoff.text", "ax"
> >> ENTRY(secondary_entry)
> >> bl el2_setup // Drop to EL1
> >> bl set_cpu_boot_mode_flag
> >> @@ -687,7 +689,7 @@ __secondary_switched:
> >> mov x29, #0
> >> b secondary_start_kernel
> >> ENDPROC(__secondary_switched)
> >> -
> >> + .popsection
> >
> > I think we also need to cover set_cpu_boot_mode_flag and
> > __boot_cpu_mode.
>
> Bother, yes.
> How come cpu_resume doesn't call this? I guess we assume
> psci_cpu_suspend_enter() will bring the core back at the same EL
Hmm. It looks like we only bother to check once at boot time (in
hyp_mode_check as part of smp_cpus_done), and otherwise never inspect
the flag again.
We should probably add checks to the hotplug-on path, and perhaps the
idle cold return path. That's largely orthogonal to this series, though.
I think we should for consistency we should place them in the mmuoff
section regardless.
> > Likewise secondary_holding_pen and secondary_holding_pen_release, in
> > case you booted with maxcpus=1, suspended, resumed, then tried to bring
> > secondaries up with spin-table.
>
> Whoa! This must never happen!
> With KASLR:
> * relocate to location-1,
> * release the secondary cores from firmware into
> location-1:secondary_holding_pen,
> * resume from hibernate, at which point we are running from location-2,
> as the kaslr values are now from the hibernate kernel.
> * location-2:secondary_holding_pen is empty,
> * location-1:secondary_holding_pen now contains a user-space string, or some
> other horror.
:(
> This didn't come up during testing because maxcpus=1 was permanent before v4.7,
> and we can't bundle cores back into the secondary_holding_pen. Hibernate without
> PSCI (or some other cpu_die()) mechanism fails in the core code:
> > [70648.097242] Disabling non-boot CPUs ...
> > [70648.117277] Error taking CPU1 down: -95
> > [70648.117286] Non-boot CPUs are not disabled
>
> ... but if we never tried to boot those cpus, we don't hit this check, and the
> cores are left in secondary_holding_pen after smp_prepare_cpus(), called well
> before the late_initcall that kicks of resume.
>
> This is broken on systems that load the kernel at a different address over a
> reboot, (using KASLR or a fancy boot loader), and use spin-table for secondary
> cores. (probably none in practice, but still worth fixing)
>
>
> Thinking aloud:
> cpus_stuck_in_kernel only indicates cores that we failed to fully bring up.
> (incompatible translation granule etc). There could still be cores in the
> kernel's secondary holding pen. We should prevent kexec/hibernate in this case.
> (hibernate because we can't safely resume on such a machine)
This sounds sensible to me.
> kexec[0] currently checks for a cpu_die() call:
> > if (num_online_cpus() > 1)
>
> Changing this to 'num_possible_cpus() > 1' will cover the above case.
> Similar code will need to be added to hibernate.
That will also catch cases where CPUs failed to even enter the pen, but
I don't think there's any reliable way to distinguish the two, so that's
probably the best we can do.
> An alternative is to increase cpus_stuck_in_kernel in
> smp_spin_table_cpu_prepare(), but it stops being a counter at this point.
I don't think we need it to be a counter. I'm happy to change the
meaning slightly if we update the comment.
> Thoughts?
"Oh no", "aaarargarghargahgasdfsdfs". :(
> Does this make sense, or do I have the wrong end of the stick somewhere!
The above makes sense to me. Thanks for digging into this!
Mark.
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v2 4/6] PM / Hibernate: Allow architectures to specify the hibernate/resume CPU
2016-06-15 21:10 ` Rafael J. Wysocki
@ 2016-06-28 14:51 ` James Morse
2016-06-29 0:03 ` Rafael J. Wysocki
0 siblings, 1 reply; 14+ messages in thread
From: James Morse @ 2016-06-28 14:51 UTC (permalink / raw)
To: Rafael J. Wysocki
Cc: linux-arm-kernel@lists.infradead.org, Will Deacon,
Catalin Marinas, Rafael J . Wysocki, Pavel Machek,
linux-pm@vger.kernel.org, Lorenzo Pieralisi, Mark Rutland
Hi Rafael,
On 15/06/16 22:10, Rafael J. Wysocki wrote:
> On Wed, Jun 15, 2016 at 7:35 PM, James Morse <james.morse@arm.com> wrote:
>> On arm64 the cpu with logical id 0 is assumed to be the boot CPU. If a
>> user hotplugs this CPU out, then uses kexec to boot a new kernel, the new
>> kernel will assign logical id 0 to a different physical CPU.
>> This breaks hibernate as hibernate and resume will be attempted on different
>> CPUs.
>>
>> Define a weak symbol arch_hibernation_disable_cpus(), which defaults to
>> calling disable_nonboot_cpus). Architectures that allow CPU 0 to be
>> hotplugged can use this to control which CPU is used for hibernate/resume.
>>
>> Signed-off-by: James Morse <james.morse@arm.com>
>> Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
>> Cc: Pavel Machek <pavel@ucw.cz>
>> ---
>> If this approach is acceptable, this patch should go with 4&5 via the arm64
>> tree.
>
> I'm not entirely happy with this, but if you absolutely need it and if
> you can't think about any cleaner way to achieve the goal, feel free
> to add my ACK to this patch if necessary.
I would prefer you to be happy with it.
On the weak symbol: After staring at this some more I've made the macro approach
work, so the weak symbol can go. (a new version of this series should arrive soon)
On the arch-specific odour coming from this, it may also be useful for Chen Yu's
'Use hlt instead of mwait when resuming from hibernation' series[0]. (I don't
understand the x86 specifics)
We need this to allow kexec and hibernate to interact freely. Preventing kexec
if CPU0 is offline because of hibernate doesn't feel right. Fixing it is
necessary as the first hint that something is wrong is when we fail to resume
and the user looses any data they had in memory.
Thanks,
James
[0] https://lkml.org/lkml/2016/6/25/98
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v2 4/6] PM / Hibernate: Allow architectures to specify the hibernate/resume CPU
2016-06-28 14:51 ` James Morse
@ 2016-06-29 0:03 ` Rafael J. Wysocki
0 siblings, 0 replies; 14+ messages in thread
From: Rafael J. Wysocki @ 2016-06-29 0:03 UTC (permalink / raw)
To: James Morse
Cc: Rafael J. Wysocki, linux-arm-kernel@lists.infradead.org,
Will Deacon, Catalin Marinas, Pavel Machek,
linux-pm@vger.kernel.org, Lorenzo Pieralisi, Mark Rutland
On Tuesday, June 28, 2016 03:51:39 PM James Morse wrote:
> Hi Rafael,
>
> On 15/06/16 22:10, Rafael J. Wysocki wrote:
> > On Wed, Jun 15, 2016 at 7:35 PM, James Morse <james.morse@arm.com> wrote:
> >> On arm64 the cpu with logical id 0 is assumed to be the boot CPU. If a
> >> user hotplugs this CPU out, then uses kexec to boot a new kernel, the new
> >> kernel will assign logical id 0 to a different physical CPU.
> >> This breaks hibernate as hibernate and resume will be attempted on different
> >> CPUs.
> >>
> >> Define a weak symbol arch_hibernation_disable_cpus(), which defaults to
> >> calling disable_nonboot_cpus). Architectures that allow CPU 0 to be
> >> hotplugged can use this to control which CPU is used for hibernate/resume.
> >>
> >> Signed-off-by: James Morse <james.morse@arm.com>
> >> Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
> >> Cc: Pavel Machek <pavel@ucw.cz>
> >> ---
> >> If this approach is acceptable, this patch should go with 4&5 via the arm64
> >> tree.
> >
> > I'm not entirely happy with this, but if you absolutely need it and if
> > you can't think about any cleaner way to achieve the goal, feel free
> > to add my ACK to this patch if necessary.
>
> I would prefer you to be happy with it.
> On the weak symbol: After staring at this some more I've made the macro approach
> work, so the weak symbol can go. (a new version of this series should arrive soon)
> On the arch-specific odour coming from this, it may also be useful for Chen Yu's
> 'Use hlt instead of mwait when resuming from hibernation' series[0]. (I don't
> understand the x86 specifics)
Yes, it looks like both x86 and ARM64 want/need to do something special around
disabling/enabling nonboot CPUs during hibernation/restore.
Let me comment on the new submission, though.
Thanks,
Rafael
^ permalink raw reply [flat|nested] 14+ messages in thread
end of thread, other threads:[~2016-06-28 23:59 UTC | newest]
Thread overview: 14+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2016-06-15 17:35 [PATCH v2 0/6] arm64: hibernate: Fix DEBUG_PAGEALLOC and hibernate on non-boot cpu James Morse
2016-06-15 17:35 ` [PATCH v2 1/6] arm64: Create sections.h James Morse
2016-06-16 10:52 ` Mark Rutland
2016-06-15 17:35 ` [PATCH v2 2/6] arm64: Add .mmuoff.text section James Morse
2016-06-16 11:10 ` Mark Rutland
2016-06-16 13:22 ` James Morse
2016-06-16 13:55 ` Mark Rutland
2016-06-15 17:35 ` [PATCH v2 3/6] arm64: hibernate: Support DEBUG_PAGEALLOC James Morse
2016-06-15 17:35 ` [PATCH v2 4/6] PM / Hibernate: Allow architectures to specify the hibernate/resume CPU James Morse
2016-06-15 21:10 ` Rafael J. Wysocki
2016-06-28 14:51 ` James Morse
2016-06-29 0:03 ` Rafael J. Wysocki
2016-06-15 17:35 ` [PATCH v2 5/6] arm64: hibernate: Identify the CPU to resume on by its MPIDR James Morse
2016-06-15 17:35 ` [PATCH v2 6/6] Revert "arm64: hibernate: Refuse to hibernate if the boot cpu is offline" James Morse
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).