The Linux Kernel Mailing List
* [PATCH] sched: move stack_canary to the start of the randomizable region
@ 2026-05-08  6:15 Ruidong Tian
  2026-05-08 16:38 ` K Prateek Nayak
  0 siblings, 1 reply; 3+ messages in thread
From: Ruidong Tian @ 2026-05-08  6:15 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
	rostedt, bsegall, mgorman, vschneid, kprateek.nayak
  Cc: linux-kernel, oliver.yang, Ruidong Tian

task_struct keeps growing over time.  On architectures that compute the
per-task stack canary offset from asm-offsets.h and pass it to the
compiler via -mstack-protector-guard-offset=, this growth eventually
pushes the stack_canary offset beyond the immediate range the target
ISA can encode.

On RISC-V, canary loads are emitted as

    ld	t0, TSK_STACK_CANARY(tp)

where TSK_STACK_CANARY must fit into a 12-bit signed immediate, i.e.
[-2048, 2047].  Once stack_canary sits past byte 2047 of task_struct,
the build fails with

    cc1: error: '<N>' is not a valid offset in
        '-mstack-protector-guard-offset='

Move stack_canary to the very first slot of the randomizable region of
task_struct, right after __state / saved_state:

  * On RISC-V, CONFIG_STACKPROTECTOR_PER_TASK depends on !RANDSTRUCT,
    so randomized_struct_fields_start/end always expand to nothing and
    stack_canary lands at a small, stable offset well within the
    12-bit signed immediate range.  The build error goes away.

  * On architectures that enable RANDSTRUCT for hardening, stack_canary
    stays inside the randomized region and is still shuffled together
    with the other fields by the layout randomization, so its hardening
    coverage is preserved.  asm-offsets-based architectures read the
    shuffled offset at build time, so the generated canary accesses
    remain correct.

This is a refinement of Christophe Leroy's 2018 proposal [1], which
placed stack_canary before randomized_struct_fields_start and thereby
pulled it out of the randomized region entirely; Michael Ellerman
flagged that at the time as a regression in hardening coverage.
Keeping the field inside the region preserves the "scheduling-critical
items only" invariant documented above the start marker and retains
RANDSTRUCT protection on architectures that enable it.

[1]: https://lore.kernel.org/lkml/d60ce7dd74704b0ca0a857186f30de4006b63534.1537355312.git.christophe.leroy@c-s.fr/

Signed-off-by: Ruidong Tian <tianruidong@linux.alibaba.com>
---
 include/linux/sched.h | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 368c7b4d7cb5..d9ee2381c3a3 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -836,6 +836,11 @@ struct task_struct {
 	 */
 	randomized_struct_fields_start
 
+#ifdef CONFIG_STACKPROTECTOR
+	/* Canary value for the -fstack-protector GCC feature: */
+	unsigned long			stack_canary;
+#endif
+
 	void				*stack;
 	refcount_t			usage;
 	/* Per task flags (PF_*), defined further below: */
@@ -1060,10 +1065,6 @@ struct task_struct {
 	pid_t				pid;
 	pid_t				tgid;
 
-#ifdef CONFIG_STACKPROTECTOR
-	/* Canary value for the -fstack-protector GCC feature: */
-	unsigned long			stack_canary;
-#endif
 	/*
 	 * Pointers to the (original) parent process, youngest child, younger sibling,
 	 * older sibling, respectively.  (p->father can be replaced with
-- 
2.51.2.612.gdc70283dfc




Thread overview: 3+ messages
2026-05-08  6:15 [PATCH] sched: move stack_canary to the start of the randomizable region Ruidong Tian
2026-05-08 16:38 ` K Prateek Nayak
2026-05-09  3:49   ` Ruidong Tian
