* [PATCH v13 RESEND 00/14] arm64: entry: Convert to Generic Entry
@ 2026-03-17 8:20 Jinjie Ruan
2026-03-17 8:20 ` [PATCH v13 RESEND 01/14] arm64/ptrace: Refactor syscall_trace_enter/exit() to accept flags parameter Jinjie Ruan
` (15 more replies)
0 siblings, 16 replies; 38+ messages in thread
From: Jinjie Ruan @ 2026-03-17 8:20 UTC (permalink / raw)
To: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, linusw, broonie, pengcan, liqiang01,
ziyao, guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
Cc: ruanjinjie
Currently, x86, RISC-V and LoongArch use the Generic Entry framework,
which makes maintainers' work easier and the code more elegant. arm64
has already switched to the Generic IRQ Entry in commit
b3cf07851b6c ("arm64: entry: Switch to generic IRQ entry"), so it is
time to completely convert arm64 to Generic Entry.
The goal is to bring arm64 in line with other architectures that already
use the generic entry infrastructure, reducing duplicated code and
making it easier to share future changes in entry/exit paths, such as
"Syscall User Dispatch" and RSEQ optimizations.
This patch set is rebased on v7.0-rc3. The performance benchmark
results on qemu-kvm are below:
perf bench syscall usec/op (-ve is improvement)
| Syscall | Base | Generic Entry | change % |
| ------- | ----------- | ------------- | -------- |
| basic | 0.123997 | 0.120872 | -2.57 |
| execve | 512.1173 | 504.9966 | -1.52 |
| fork | 114.1144 | 113.2301 | -1.06 |
| getpgid | 0.120182 | 0.121245 | +0.9 |
perf bench syscall ops/sec (+ve is improvement)
| Syscall | Base | Generic Entry| change % |
| ------- | -------- | ------------ | -------- |
| basic | 8064712 | 8273212 | +2.48 |
| execve | 1952 | 1980 | +1.52 |
| fork | 8763 | 8832 | +1.06 |
| getpgid | 8320704 | 8247810 | -0.9 |
Therefore, the syscall performance variation ranges from a 1% regression
to a 2.5% improvement.
It was tested OK with the following test cases on the QEMU virt platform:
- Stress-ng CPU stress test.
- Hackbench stress test.
- "sud" selftest testcase.
- get_set_sud, get_syscall_info, set_syscall_info, peeksiginfo
in tools/testing/selftests/ptrace.
- breakpoint_test_arm64 in selftests/breakpoints.
- syscall-abi and ptrace in tools/testing/selftests/arm64/abi
- fp-ptrace, sve-ptrace, za-ptrace in selftests/arm64/fp.
- vdso_test_getrandom in tools/testing/selftests/vDSO
- Strace tests.
- slice_test for rseq optimizations.
The test QEMU configuration is as follows:
qemu-system-aarch64 \
-M virt \
-enable-kvm \
-cpu host \
-kernel Image \
-smp 8 \
-m 512m \
-nographic \
-no-reboot \
-device virtio-rng-pci \
-append "root=/dev/vda rw console=ttyAMA0 kgdboc=ttyAMA0,115200 \
earlycon preempt=voluntary irqchip.gicv3_pseudo_nmi=1 audit=1" \
-drive if=none,file=images/rootfs.ext4,format=raw,id=hd0 \
-device virtio-blk-device,drive=hd0 \
Changes in v13 resend:
- Fix exit_to_user_mode_prepare_legacy() issues.
- Also move TIF_SINGLESTEP to generic TIF infrastructure for loongarch.
- Use generic TIF bits for arm64 and move TIF_SINGLESTEP to
generic TIF for the related architectures in separate patches.
- Refactor syscall_trace_enter/exit() to accept flags and use the
syscall_get_nr() helper in separate patches.
- Tested with slice_test for rseq optimizations.
- Add acked-by.
- Link to v13: https://lore.kernel.org/all/20260313094738.3985794-1-ruanjinjie@huawei.com/
Changes in v13:
- Rebased on v7.0-rc3, so drop the first applied arm64 patch.
- Use generic TIF bits to enable the RSEQ optimization.
- Update most of the commit message to make it more clear.
- Link to v12: https://lore.kernel.org/all/20260203133728.848283-1-ruanjinjie@huawei.com/
Changes in v12:
- Rebased on "sched/core", so remove the four generic entry patches.
- Move "Expand secure_computing() in place" and
"Use syscall_get_arguments() helper" patch forward, which will group all
non-functional cleanups at the front.
- Adjust the explanation for moving rseq_syscall() before
audit_syscall_exit().
- Link to v11: https://lore.kernel.org/all/20260128031934.3906955-1-ruanjinjie@huawei.com/
Changes in v11:
- Remove unused syscall in syscall_trace_enter().
- Update and provide a detailed explanation of the differences after
moving rseq_syscall() before audit_syscall_exit().
- Rebased on arm64 (for-next/entry), and remove the first 3 applied patches.
- syscall_exit_to_user_mode_work() for arch reuse instead of adding
new syscall_exit_to_user_mode_work_prepare() helper.
- Link to v10: https://lore.kernel.org/all/20251222114737.1334364-1-ruanjinjie@huawei.com/
Changes in v10:
- Rebased on v6.19-rc1, rename syscall_exit_to_user_mode_prepare() to
syscall_exit_to_user_mode_work_prepare() to avoid conflict.
- Also inline syscall_trace_enter().
- Support aarch64 for sud_benchmark.
- Update and correct the commit message.
- Add Reviewed-by.
- Link to v9: https://lore.kernel.org/all/20251204082123.2792067-1-ruanjinjie@huawei.com/
Changes in v9:
- Move "Return early for ptrace_report_syscall_entry() error" patch ahead
to make it not introduce a regression.
- Not check _TIF_SECCOMP/SYSCALL_EMU for syscall_exit_work() in
a separate patch.
- Do not report_syscall_exit() for PTRACE_SYSEMU_SINGLESTEP in a separate
patch.
- Add two performance patches to improve the arm64 performance.
- Add Reviewed-by.
- Link to v8: https://lore.kernel.org/all/20251126071446.3234218-1-ruanjinjie@huawei.com/
Changes in v8:
- Rename "report_syscall_enter()" to "report_syscall_entry()".
- Add ptrace_save_reg() to avoid duplication.
- Remove unused _TIF_WORK_MASK in a standalone patch.
- Align syscall_trace_enter() return value with the generic version.
- Use "scno" instead of regs->syscallno in el0_svc_common().
- Move rseq_syscall() ahead in a standalone patch to clarify it clearly.
- Rename "syscall_trace_exit()" to "syscall_exit_work()".
- Keep the goto in el0_svc_common().
- Pass no argument to __secure_computing() and check -1, not -1L.
- Remove "Add has_syscall_work() helper" patch.
- Move "Add syscall_exit_to_user_mode_prepare() helper" patch later.
- Add missing header for asm/entry-common.h.
- Update the implementation of arch_syscall_is_vdso_sigreturn().
- Add "ARCH_SYSCALL_WORK_EXIT" to be defined as "SECCOMP | SYSCALL_EMU"
to keep the behaviour unchanged.
- Add more test cases.
- Add Reviewed-by.
- Update the commit message.
- Link to v7: https://lore.kernel.org/all/20251117133048.53182-1-ruanjinjie@huawei.com/
Jinjie Ruan (13):
arm64/ptrace: Refactor syscall_trace_enter/exit() to accept flags
parameter
arm64/ptrace: Use syscall_get_nr() helper for syscall_trace_enter()
arm64/ptrace: Expand secure_computing() in place
arm64/ptrace: Use syscall_get_arguments() helper for audit
arm64: ptrace: Move rseq_syscall() before audit_syscall_exit()
arm64: syscall: Introduce syscall_exit_to_user_mode_work()
arm64/ptrace: Define and use _TIF_SYSCALL_EXIT_WORK
arm64/ptrace: Skip syscall exit reporting for PTRACE_SYSEMU_SINGLESTEP
arm64: entry: Convert to generic entry
arm64: Inline el0_svc_common()
s390: Rename TIF_SINGLE_STEP to TIF_SINGLESTEP
asm-generic: Move TIF_SINGLESTEP to generic TIF bits
arm64: Use generic TIF bits for common thread flags
kemal (1):
selftests: sud_test: Support aarch64
arch/arm64/Kconfig | 3 +-
arch/arm64/include/asm/entry-common.h | 76 ++++++++++++
arch/arm64/include/asm/syscall.h | 19 ++-
arch/arm64/include/asm/thread_info.h | 76 ++++--------
arch/arm64/kernel/debug-monitors.c | 7 ++
arch/arm64/kernel/entry-common.c | 25 +++-
arch/arm64/kernel/ptrace.c | 115 ------------------
arch/arm64/kernel/signal.c | 2 +-
arch/arm64/kernel/syscall.c | 29 ++---
arch/loongarch/include/asm/thread_info.h | 11 +-
arch/s390/include/asm/thread_info.h | 7 +-
arch/s390/kernel/process.c | 2 +-
arch/s390/kernel/ptrace.c | 20 +--
arch/s390/kernel/signal.c | 6 +-
arch/x86/include/asm/thread_info.h | 6 +-
include/asm-generic/thread_info_tif.h | 5 +
include/linux/irq-entry-common.h | 8 --
include/linux/rseq_entry.h | 18 ---
.../syscall_user_dispatch/sud_benchmark.c | 2 +-
.../syscall_user_dispatch/sud_test.c | 4 +
20 files changed, 191 insertions(+), 250 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 38+ messages in thread
* [PATCH v13 RESEND 01/14] arm64/ptrace: Refactor syscall_trace_enter/exit() to accept flags parameter
2026-03-17 8:20 [PATCH v13 RESEND 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
@ 2026-03-17 8:20 ` Jinjie Ruan
2026-03-19 13:47 ` Linus Walleij
2026-03-17 8:20 ` [PATCH v13 RESEND 02/14] arm64/ptrace: Use syscall_get_nr() helper for syscall_trace_enter() Jinjie Ruan
` (14 subsequent siblings)
15 siblings, 1 reply; 38+ messages in thread
From: Jinjie Ruan @ 2026-03-17 8:20 UTC (permalink / raw)
To: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, linusw, broonie, pengcan, liqiang01,
ziyao, guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
Cc: ruanjinjie
Refactor syscall_trace_enter() and syscall_trace_exit() to move thread
flag reading to the caller. This aligns arm64's syscall trace enter/exit
function signatures with the generic entry framework.
[Changes]
1. Function signature changes:
- syscall_trace_enter(regs) → syscall_trace_enter(regs, flags)
- syscall_trace_exit(regs) → syscall_trace_exit(regs, flags)
2. Move flags reading to caller:
- Previously: read_thread_flags() called inside each function.
- Now: the caller (like el0_svc_common()) passes flags as a parameter.
3. Update syscall.c:
- el0_svc_common() now passes flags to tracing functions and
re-fetches flags before exit to handle potential TIF updates.
[Why this matters]
- Aligns arm64 with the generic entry interface.
- Makes future migration to the generic entry framework easier.
No functional changes intended.
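[Editorial aside] The calling-convention change can be modeled with a
small userspace sketch. All names here (read_thread_flags, the flag
value, el0_svc_common) are illustrative stand-ins, not the real kernel
definitions; the point is only that the flags word is read once by the
caller and passed down:

```c
#include <assert.h>

#define TIF_SYSCALL_TRACE (1UL << 0)

static unsigned long current_flags; /* stands in for the thread flags word */

static unsigned long read_thread_flags(void)
{
	return current_flags;
}

/* After the patch: flags are a parameter, no longer re-read internally. */
static int syscall_trace_enter(unsigned long flags)
{
	return (flags & TIF_SYSCALL_TRACE) ? 1 : 0;
}

/* Models the caller, which now owns the single read_thread_flags() call. */
static int el0_svc_common(void)
{
	unsigned long flags = read_thread_flags();

	return syscall_trace_enter(flags);
}
```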
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
arch/arm64/include/asm/syscall.h | 4 ++--
arch/arm64/kernel/ptrace.c | 7 ++-----
arch/arm64/kernel/syscall.c | 5 +++--
3 files changed, 7 insertions(+), 9 deletions(-)
diff --git a/arch/arm64/include/asm/syscall.h b/arch/arm64/include/asm/syscall.h
index 5e4c7fc44f73..30b203ef156b 100644
--- a/arch/arm64/include/asm/syscall.h
+++ b/arch/arm64/include/asm/syscall.h
@@ -120,7 +120,7 @@ static inline int syscall_get_arch(struct task_struct *task)
return AUDIT_ARCH_AARCH64;
}
-int syscall_trace_enter(struct pt_regs *regs);
-void syscall_trace_exit(struct pt_regs *regs);
+int syscall_trace_enter(struct pt_regs *regs, unsigned long flags);
+void syscall_trace_exit(struct pt_regs *regs, unsigned long flags);
#endif /* __ASM_SYSCALL_H */
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index ba5eab23fd90..e4d524ccbc7b 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -2408,9 +2408,8 @@ static void report_syscall_exit(struct pt_regs *regs)
}
}
-int syscall_trace_enter(struct pt_regs *regs)
+int syscall_trace_enter(struct pt_regs *regs, unsigned long flags)
{
- unsigned long flags = read_thread_flags();
int ret;
if (flags & (_TIF_SYSCALL_EMU | _TIF_SYSCALL_TRACE)) {
@@ -2432,10 +2431,8 @@ int syscall_trace_enter(struct pt_regs *regs)
return regs->syscallno;
}
-void syscall_trace_exit(struct pt_regs *regs)
+void syscall_trace_exit(struct pt_regs *regs, unsigned long flags)
{
- unsigned long flags = read_thread_flags();
-
audit_syscall_exit(regs);
if (flags & _TIF_SYSCALL_TRACEPOINT)
diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
index c062badd1a56..e8fd0d60ab09 100644
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -124,7 +124,7 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
*/
if (scno == NO_SYSCALL)
syscall_set_return_value(current, regs, -ENOSYS, 0);
- scno = syscall_trace_enter(regs);
+ scno = syscall_trace_enter(regs, flags);
if (scno == NO_SYSCALL)
goto trace_exit;
}
@@ -143,7 +143,8 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
}
trace_exit:
- syscall_trace_exit(regs);
+ flags = read_thread_flags();
+ syscall_trace_exit(regs, flags);
}
void do_el0_svc(struct pt_regs *regs)
--
2.34.1
^ permalink raw reply related [flat|nested] 38+ messages in thread
* [PATCH v13 RESEND 02/14] arm64/ptrace: Use syscall_get_nr() helper for syscall_trace_enter()
2026-03-17 8:20 [PATCH v13 RESEND 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
2026-03-17 8:20 ` [PATCH v13 RESEND 01/14] arm64/ptrace: Refactor syscall_trace_enter/exit() to accept flags parameter Jinjie Ruan
@ 2026-03-17 8:20 ` Jinjie Ruan
2026-03-19 13:50 ` Linus Walleij
2026-03-17 8:20 ` [PATCH v13 RESEND 03/14] arm64/ptrace: Expand secure_computing() in place Jinjie Ruan
` (13 subsequent siblings)
15 siblings, 1 reply; 38+ messages in thread
From: Jinjie Ruan @ 2026-03-17 8:20 UTC (permalink / raw)
To: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, linusw, broonie, pengcan, liqiang01,
ziyao, guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
Cc: ruanjinjie
Use the syscall_get_nr() helper to get the syscall number in syscall_trace_enter().
This aligns arm64's internal tracing logic with the generic
entry framework.
[Changes]
1. Use syscall_get_nr() helper:
- Replace direct regs->syscallno access with
syscall_get_nr(current, regs).
- This helper is functionally equivalent to direct access on arm64.
2. Re-read syscall number after tracepoint:
- Re-fetch the syscall number after trace_sys_enter() as it may have
been modified by BPF or ftrace probes, matching generic entry behavior.
[Why this matters]
- Aligns arm64 with the generic entry interface.
- Makes future migration to the generic entry framework easier.
- Properly handles syscall number modifications by tracers.
- Uses standard architecture-independent helpers.
No functional changes intended.
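[Editorial aside] The re-read after the tracepoint can be sketched in
isolation. The types and helpers below are simplified stand-ins (a probe
is modeled as a plain function that rewrites the number); the pattern is
what matters — a cached syscall number goes stale once a hook runs:

```c
#include <assert.h>

struct regs { long syscallno; };

static long syscall_get_nr(struct regs *r)
{
	return r->syscallno;
}

/* Models a BPF/ftrace probe attached to the tracepoint that
 * rewrites the syscall number. */
static void trace_sys_enter(struct regs *r)
{
	r->syscallno = 42;
}

static long trace_enter(struct regs *r)
{
	long syscall = syscall_get_nr(r);

	trace_sys_enter(r);
	/* Re-fetch: the hook above may have changed the number. */
	syscall = syscall_get_nr(r);
	return syscall;
}
```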
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
arch/arm64/kernel/ptrace.c | 19 +++++++++++++++----
1 file changed, 15 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index e4d524ccbc7b..8d296a07fbf7 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -2410,6 +2410,7 @@ static void report_syscall_exit(struct pt_regs *regs)
int syscall_trace_enter(struct pt_regs *regs, unsigned long flags)
{
+ long syscall;
int ret;
if (flags & (_TIF_SYSCALL_EMU | _TIF_SYSCALL_TRACE)) {
@@ -2422,13 +2423,23 @@ int syscall_trace_enter(struct pt_regs *regs, unsigned long flags)
if (secure_computing() == -1)
return NO_SYSCALL;
- if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
- trace_sys_enter(regs, regs->syscallno);
+ /* Either of the above might have changed the syscall number */
+ syscall = syscall_get_nr(current, regs);
- audit_syscall_entry(regs->syscallno, regs->orig_x0, regs->regs[1],
+ if (test_thread_flag(TIF_SYSCALL_TRACEPOINT)) {
+ trace_sys_enter(regs, syscall);
+
+ /*
+ * Probes or BPF hooks in the tracepoint may have changed the
+ * system call number as well.
+ */
+ syscall = syscall_get_nr(current, regs);
+ }
+
+ audit_syscall_entry(syscall, regs->orig_x0, regs->regs[1],
regs->regs[2], regs->regs[3]);
- return regs->syscallno;
+ return syscall;
}
void syscall_trace_exit(struct pt_regs *regs, unsigned long flags)
--
2.34.1
^ permalink raw reply related [flat|nested] 38+ messages in thread
* [PATCH v13 RESEND 03/14] arm64/ptrace: Expand secure_computing() in place
2026-03-17 8:20 [PATCH v13 RESEND 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
2026-03-17 8:20 ` [PATCH v13 RESEND 01/14] arm64/ptrace: Refactor syscall_trace_enter/exit() to accept flags parameter Jinjie Ruan
2026-03-17 8:20 ` [PATCH v13 RESEND 02/14] arm64/ptrace: Use syscall_get_nr() helper for syscall_trace_enter() Jinjie Ruan
@ 2026-03-17 8:20 ` Jinjie Ruan
2026-03-19 13:58 ` Linus Walleij
2026-03-17 8:20 ` [PATCH v13 RESEND 04/14] arm64/ptrace: Use syscall_get_arguments() helper for audit Jinjie Ruan
` (12 subsequent siblings)
15 siblings, 1 reply; 38+ messages in thread
From: Jinjie Ruan @ 2026-03-17 8:20 UTC (permalink / raw)
To: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, linusw, broonie, pengcan, liqiang01,
ziyao, guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
Cc: ruanjinjie
Refactor syscall_trace_enter() by open-coding the seccomp check
to align with the generic entry framework.
[Background]
The generic entry implementation expands the seccomp check in-place
instead of using the secure_computing() wrapper. It directly tests
SYSCALL_WORK_SECCOMP and calls the underlying __secure_computing()
function to handle syscall filtering.
[Changes]
1. Open-code seccomp check:
- Instead of calling the secure_computing() wrapper, explicitly check
the 'flags' parameter for _TIF_SECCOMP.
- Call __secure_computing() directly if the flag is set.
2. Refine return value handling:
- Use 'return ret ? : syscall' to propagate the return value.
- Ensures any unexpected non-zero return from __secure_computing()
is properly propagated.
- This matches the logic in the generic entry code.
[Why this matters]
- Aligns the arm64 syscall path with the generic entry implementation,
simplifying future migration to the generic entry framework.
- No functional changes are intended; seccomp behavior remains identical.
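[Editorial aside] The 'ret ? : syscall' idiom relies on the GNU C
conditional with an omitted middle operand (supported by gcc and clang):
'a ? : b' evaluates to a when a is non-zero, else b. A hedged sketch of
the open-coded check, with illustrative flag values and a stub standing
in for __secure_computing():

```c
#define _TIF_SECCOMP (1UL << 0)
#define NO_SYSCALL   (-1)

static long seccomp_result; /* stub for the real seccomp verdict */

static long __secure_computing(void)
{
	return seccomp_result;
}

static long trace_enter(unsigned long flags, long syscall)
{
	long ret = 0;

	/* Open-coded check: only enter seccomp when the flag is set. */
	if (flags & _TIF_SECCOMP) {
		ret = __secure_computing();
		if (ret == -1)
			return NO_SYSCALL;
	}

	/* GNU ?: propagates a non-zero ret, else the syscall number. */
	return ret ? : syscall;
}
```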
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
arch/arm64/kernel/ptrace.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 8d296a07fbf7..d68f872339c7 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -2420,8 +2420,11 @@ int syscall_trace_enter(struct pt_regs *regs, unsigned long flags)
}
/* Do the secure computing after ptrace; failures should be fast. */
- if (secure_computing() == -1)
- return NO_SYSCALL;
+ if (flags & _TIF_SECCOMP) {
+ ret = __secure_computing();
+ if (ret == -1)
+ return NO_SYSCALL;
+ }
/* Either of the above might have changed the syscall number */
syscall = syscall_get_nr(current, regs);
@@ -2439,7 +2442,7 @@ int syscall_trace_enter(struct pt_regs *regs, unsigned long flags)
audit_syscall_entry(syscall, regs->orig_x0, regs->regs[1],
regs->regs[2], regs->regs[3]);
- return syscall;
+ return ret ? : syscall;
}
void syscall_trace_exit(struct pt_regs *regs, unsigned long flags)
--
2.34.1
^ permalink raw reply related [flat|nested] 38+ messages in thread
* [PATCH v13 RESEND 04/14] arm64/ptrace: Use syscall_get_arguments() helper for audit
2026-03-17 8:20 [PATCH v13 RESEND 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
` (2 preceding siblings ...)
2026-03-17 8:20 ` [PATCH v13 RESEND 03/14] arm64/ptrace: Expand secure_computing() in place Jinjie Ruan
@ 2026-03-17 8:20 ` Jinjie Ruan
2026-03-19 14:14 ` Linus Walleij
2026-03-17 8:20 ` [PATCH v13 RESEND 05/14] arm64: ptrace: Move rseq_syscall() before audit_syscall_exit() Jinjie Ruan
` (11 subsequent siblings)
15 siblings, 1 reply; 38+ messages in thread
From: Jinjie Ruan @ 2026-03-17 8:20 UTC (permalink / raw)
To: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, linusw, broonie, pengcan, liqiang01,
ziyao, guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
Cc: ruanjinjie
Extract syscall_enter_audit() helper and use syscall_get_arguments()
to get syscall arguments, matching the generic entry implementation.
The new code:
- Checks audit_context() first to avoid unnecessary memcpy when audit
is not active.
- Uses syscall_get_arguments() helper instead of directly accessing
regs fields.
- Is now exactly equivalent to generic entry's syscall_enter_audit().
No functional changes.
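[Editorial aside] The fast-path saving can be sketched as follows. The
helper names mirror the kernel ones, but the register layout, the
counter, and audit_context() are illustrative stand-ins; the sketch only
shows that the six-argument copy is skipped when no audit context exists:

```c
static int audit_active;  /* stub: whether an audit context exists */
static int copies_done;   /* counts argument-gathering work */

static int audit_context(void)
{
	return audit_active;
}

static void syscall_get_arguments(const long *regs, unsigned long *args)
{
	for (int i = 0; i < 6; i++)
		args[i] = (unsigned long)regs[i];
	copies_done++;
}

static void syscall_enter_audit(const long *regs, long syscall)
{
	/* Gate first: no argument copy on the common, non-audited path. */
	if (audit_context()) {
		unsigned long args[6];

		syscall_get_arguments(regs, args);
		(void)syscall; /* would be passed to audit_syscall_entry() */
	}
}
```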
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
arch/arm64/kernel/ptrace.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index d68f872339c7..3cb497b2bd22 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -2408,6 +2408,16 @@ static void report_syscall_exit(struct pt_regs *regs)
}
}
+static inline void syscall_enter_audit(struct pt_regs *regs, long syscall)
+{
+ if (unlikely(audit_context())) {
+ unsigned long args[6];
+
+ syscall_get_arguments(current, regs, args);
+ audit_syscall_entry(syscall, args[0], args[1], args[2], args[3]);
+ }
+}
+
int syscall_trace_enter(struct pt_regs *regs, unsigned long flags)
{
long syscall;
@@ -2439,8 +2449,7 @@ int syscall_trace_enter(struct pt_regs *regs, unsigned long flags)
syscall = syscall_get_nr(current, regs);
}
- audit_syscall_entry(syscall, regs->orig_x0, regs->regs[1],
- regs->regs[2], regs->regs[3]);
+ syscall_enter_audit(regs, syscall);
return ret ? : syscall;
}
--
2.34.1
^ permalink raw reply related [flat|nested] 38+ messages in thread
* [PATCH v13 RESEND 05/14] arm64: ptrace: Move rseq_syscall() before audit_syscall_exit()
2026-03-17 8:20 [PATCH v13 RESEND 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
` (3 preceding siblings ...)
2026-03-17 8:20 ` [PATCH v13 RESEND 04/14] arm64/ptrace: Use syscall_get_arguments() helper for audit Jinjie Ruan
@ 2026-03-17 8:20 ` Jinjie Ruan
2026-03-19 14:16 ` Linus Walleij
2026-03-17 8:20 ` [PATCH v13 RESEND 06/14] arm64: syscall: Introduce syscall_exit_to_user_mode_work() Jinjie Ruan
` (10 subsequent siblings)
15 siblings, 1 reply; 38+ messages in thread
From: Jinjie Ruan @ 2026-03-17 8:20 UTC (permalink / raw)
To: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, linusw, broonie, pengcan, liqiang01,
ziyao, guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
Cc: ruanjinjie
Move the rseq_syscall() check earlier in the syscall exit path to ensure
it operates on the original instruction pointer (regs->pc) before any
potential modification by a tracer.
[Background]
When CONFIG_DEBUG_RSEQ is enabled, rseq_syscall() verifies that a system
call was not executed within an rseq critical section by examining
regs->pc. If a violation is detected, it triggers a SIGSEGV.
[Problem]
Currently, arm64 invokes rseq_syscall() after report_syscall_exit().
However, during report_syscall_exit(), a ptrace tracer can modify the
task's instruction pointer via PTRACE_SETREGS. This leads to an
inconsistency where rseq may analyze a post-trace PC instead of the
actual PC at the time of syscall exit.
[Why this matters]
The rseq check is intended to validate the execution context of the
syscall itself. Analyzing a tracer-modified PC can lead to incorrect
detection or missed violations. Moving the check earlier ensures rseq
sees the authentic state of the task.
[Alignment]
This change aligns arm64 with:
- Generic entry, which calls rseq_syscall() first.
- arm32 implementation, which also performs the check before audit.
[Impact]
There is no functional change to signal delivery; SIGSEGV will still be
processed in arm64_exit_to_user_mode() at the end of the exit path.
Cc: Thomas Gleixner <tglx@kernel.org>
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
arch/arm64/kernel/ptrace.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 3cb497b2bd22..f3d3dec85828 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -2456,6 +2456,8 @@ int syscall_trace_enter(struct pt_regs *regs, unsigned long flags)
void syscall_trace_exit(struct pt_regs *regs, unsigned long flags)
{
+ rseq_syscall(regs);
+
audit_syscall_exit(regs);
if (flags & _TIF_SYSCALL_TRACEPOINT)
@@ -2463,8 +2465,6 @@ void syscall_trace_exit(struct pt_regs *regs, unsigned long flags)
if (flags & (_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP))
report_syscall_exit(regs);
-
- rseq_syscall(regs);
}
/*
--
2.34.1
^ permalink raw reply related [flat|nested] 38+ messages in thread
* [PATCH v13 RESEND 06/14] arm64: syscall: Introduce syscall_exit_to_user_mode_work()
2026-03-17 8:20 [PATCH v13 RESEND 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
` (4 preceding siblings ...)
2026-03-17 8:20 ` [PATCH v13 RESEND 05/14] arm64: ptrace: Move rseq_syscall() before audit_syscall_exit() Jinjie Ruan
@ 2026-03-17 8:20 ` Jinjie Ruan
2026-03-19 14:17 ` Linus Walleij
2026-03-17 8:20 ` [PATCH v13 RESEND 07/14] arm64/ptrace: Define and use _TIF_SYSCALL_EXIT_WORK Jinjie Ruan
` (9 subsequent siblings)
15 siblings, 1 reply; 38+ messages in thread
From: Jinjie Ruan @ 2026-03-17 8:20 UTC (permalink / raw)
To: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, linusw, broonie, pengcan, liqiang01,
ziyao, guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
Cc: ruanjinjie
Refactor the system call exit path to align with the generic entry
framework. This consolidates thread flag checking, rseq handling, and
syscall tracing into a structure that mirrors the generic
syscall_exit_to_user_mode_work() implementation.
[Rationale]
The generic entry code employs a hierarchical approach for
syscall exit work:
1. syscall_exit_to_user_mode_work(): The entry point that handles
rseq and checks if further exit work (tracing/audit) is required.
2. syscall_exit_work(): Performs the actual tracing, auditing, and
ptrace reporting.
[Changes]
- Rename and Encapsulate: Rename syscall_trace_exit() to
syscall_exit_work() and make it static, as it is now an internal
helper for the exit path.
- New Entry Point: Implement syscall_exit_to_user_mode_work() to
replace the manual flag-reading logic in el0_svc_common(). This
function now encapsulates the rseq_syscall() call and the
conditional execution of syscall_exit_work().
- Simplify el0_svc_common(): Remove the complex conditional checks
for tracing and CONFIG_DEBUG_RSEQ at the end of the syscall path,
delegating this responsibility to the new helper.
- Helper Migration: Move has_syscall_work() to asm/syscall.h
to allow its reuse across ptrace.c and syscall.c.
- Clean up RSEQ: Remove the explicit IS_ENABLED(CONFIG_DEBUG_RSEQ)
check in the caller, as rseq_syscall() is already a no-op when the
config is disabled.
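[Editorial aside] The two-level structure can be modeled compactly: the
outer helper always performs the rseq check and only drops into the slow
syscall_exit_work() path when a work flag is set. The flag bit values
below are illustrative, not the real arm64 definitions:

```c
#define _TIF_SYSCALL_WORK (1UL << 0)
#define _TIF_SINGLESTEP   (1UL << 1)

static int slow_path_runs; /* counts syscall_exit_work() invocations */

static void syscall_exit_work(unsigned long flags)
{
	(void)flags; /* would do audit/tracepoint/ptrace reporting */
	slow_path_runs++;
}

static void syscall_exit_to_user_mode_work(unsigned long flags)
{
	/* rseq_syscall(regs) would run unconditionally here; it is a
	 * no-op when CONFIG_DEBUG_RSEQ is disabled. */
	if ((flags & _TIF_SYSCALL_WORK) || (flags & _TIF_SINGLESTEP))
		syscall_exit_work(flags);
}
```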
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
arch/arm64/include/asm/syscall.h | 7 ++++++-
arch/arm64/kernel/ptrace.c | 14 +++++++++++---
arch/arm64/kernel/syscall.c | 20 +-------------------
3 files changed, 18 insertions(+), 23 deletions(-)
diff --git a/arch/arm64/include/asm/syscall.h b/arch/arm64/include/asm/syscall.h
index 30b203ef156b..c469d09a7964 100644
--- a/arch/arm64/include/asm/syscall.h
+++ b/arch/arm64/include/asm/syscall.h
@@ -120,7 +120,12 @@ static inline int syscall_get_arch(struct task_struct *task)
return AUDIT_ARCH_AARCH64;
}
+static inline bool has_syscall_work(unsigned long flags)
+{
+ return unlikely(flags & _TIF_SYSCALL_WORK);
+}
+
int syscall_trace_enter(struct pt_regs *regs, unsigned long flags);
-void syscall_trace_exit(struct pt_regs *regs, unsigned long flags);
+void syscall_exit_to_user_mode_work(struct pt_regs *regs);
#endif /* __ASM_SYSCALL_H */
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index f3d3dec85828..35efa2062408 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -2454,10 +2454,8 @@ int syscall_trace_enter(struct pt_regs *regs, unsigned long flags)
return ret ? : syscall;
}
-void syscall_trace_exit(struct pt_regs *regs, unsigned long flags)
+static void syscall_exit_work(struct pt_regs *regs, unsigned long flags)
{
- rseq_syscall(regs);
-
audit_syscall_exit(regs);
if (flags & _TIF_SYSCALL_TRACEPOINT)
@@ -2467,6 +2465,16 @@ void syscall_trace_exit(struct pt_regs *regs, unsigned long flags)
report_syscall_exit(regs);
}
+void syscall_exit_to_user_mode_work(struct pt_regs *regs)
+{
+ unsigned long flags = read_thread_flags();
+
+ rseq_syscall(regs);
+
+ if (has_syscall_work(flags) || flags & _TIF_SINGLESTEP)
+ syscall_exit_work(regs, flags);
+}
+
/*
* SPSR_ELx bits which are always architecturally RES0 per ARM DDI 0487D.a.
* We permit userspace to set SSBS (AArch64 bit 12, AArch32 bit 23) which is
diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
index e8fd0d60ab09..66d4da641d97 100644
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -65,11 +65,6 @@ static void invoke_syscall(struct pt_regs *regs, unsigned int scno,
choose_random_kstack_offset(get_random_u16());
}
-static inline bool has_syscall_work(unsigned long flags)
-{
- return unlikely(flags & _TIF_SYSCALL_WORK);
-}
-
static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
const syscall_fn_t syscall_table[])
{
@@ -130,21 +125,8 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
}
invoke_syscall(regs, scno, sc_nr, syscall_table);
-
- /*
- * The tracing status may have changed under our feet, so we have to
- * check again. However, if we were tracing entry, then we always trace
- * exit regardless, as the old entry assembly did.
- */
- if (!has_syscall_work(flags) && !IS_ENABLED(CONFIG_DEBUG_RSEQ)) {
- flags = read_thread_flags();
- if (!has_syscall_work(flags) && !(flags & _TIF_SINGLESTEP))
- return;
- }
-
trace_exit:
- flags = read_thread_flags();
- syscall_trace_exit(regs, flags);
+ syscall_exit_to_user_mode_work(regs);
}
void do_el0_svc(struct pt_regs *regs)
--
2.34.1
* [PATCH v13 RESEND 07/14] arm64/ptrace: Define and use _TIF_SYSCALL_EXIT_WORK
2026-03-17 8:20 [PATCH v13 RESEND 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
` (5 preceding siblings ...)
2026-03-17 8:20 ` [PATCH v13 RESEND 06/14] arm64: syscall: Introduce syscall_exit_to_user_mode_work() Jinjie Ruan
@ 2026-03-17 8:20 ` Jinjie Ruan
2026-03-19 14:18 ` Linus Walleij
2026-03-17 8:20 ` [PATCH v13 RESEND 08/14] arm64/ptrace: Skip syscall exit reporting for PTRACE_SYSEMU_SINGLESTEP Jinjie Ruan
` (8 subsequent siblings)
15 siblings, 1 reply; 38+ messages in thread
From: Jinjie Ruan @ 2026-03-17 8:20 UTC (permalink / raw)
To: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, linusw, broonie, pengcan, liqiang01,
ziyao, guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
Cc: ruanjinjie
Introduce _TIF_SYSCALL_EXIT_WORK to filter out entry-only flags
during the syscall exit path. This aligns arm64 with the generic
entry framework's SYSCALL_WORK_EXIT semantics.
[Rationale]
The current syscall exit path uses _TIF_SYSCALL_WORK to decide whether
to invoke syscall_exit_work(). However, _TIF_SYSCALL_WORK includes
flags that are only relevant during syscall entry:
1. _TIF_SECCOMP: Seccomp filtering (__secure_computing) only runs
on entry. There is no seccomp callback for syscall exit.
2. _TIF_SYSCALL_EMU: In PTRACE_SYSEMU mode, the syscall is
intercepted and skipped on entry. Since the syscall is never
executed, reporting a syscall exit stop is unnecessary.
[Changes]
- Define _TIF_SYSCALL_EXIT_WORK: A new mask containing only flags
requiring exit processing: _TIF_SYSCALL_TRACE, _TIF_SYSCALL_AUDIT,
and _TIF_SYSCALL_TRACEPOINT.
- Update exit path: Use _TIF_SYSCALL_EXIT_WORK in
syscall_exit_to_user_mode_work() to avoid redundant calls into
audit and ptrace reporting when only entry-specific flags are set.
- Cleanup: Remove the has_syscall_work() helper as it is no longer
needed. Direct flag comparison is now used to distinguish between
entry and exit work requirements.
[Impact]
audit_syscall_exit() and report_syscall_exit() will no longer be
triggered for seccomp-only or emu-only syscalls. This matches the
generic entry behavior and improves efficiency by skipping unnecessary
exit processing.
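For illustration, the new filtering can be modelled in plain C. This is a
minimal sketch with made-up bit positions (the real masks live in
arch/arm64/include/asm/thread_info.h), and it models only the mask check;
in the patch, _TIF_SINGLESTEP is still tested separately:

```c
#include <stdbool.h>

/* Illustrative bit values only; not the real arm64 TIF layout. */
#define _TIF_SYSCALL_TRACE      (1UL << 8)
#define _TIF_SYSCALL_AUDIT      (1UL << 9)
#define _TIF_SYSCALL_TRACEPOINT (1UL << 10)
#define _TIF_SECCOMP            (1UL << 11)
#define _TIF_SYSCALL_EMU        (1UL << 12)

/* Entry mask: includes seccomp and syscall emulation. */
#define _TIF_SYSCALL_WORK (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
                           _TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP | \
                           _TIF_SYSCALL_EMU)

/* Exit mask: only the flags that require exit-side processing. */
#define _TIF_SYSCALL_EXIT_WORK (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
                                _TIF_SYSCALL_TRACEPOINT)

/* Would the exit path call syscall_exit_work() for these flags? */
static bool needs_exit_work(unsigned long flags)
{
        return flags & _TIF_SYSCALL_EXIT_WORK;
}
```

With this split, a seccomp-only or emu-only task no longer takes the
exit-work slow path, while trace/audit/tracepoint tasks still do.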
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
arch/arm64/include/asm/syscall.h | 5 -----
arch/arm64/include/asm/thread_info.h | 3 +++
arch/arm64/kernel/ptrace.c | 2 +-
arch/arm64/kernel/syscall.c | 2 +-
4 files changed, 5 insertions(+), 7 deletions(-)
diff --git a/arch/arm64/include/asm/syscall.h b/arch/arm64/include/asm/syscall.h
index c469d09a7964..dea392c081ca 100644
--- a/arch/arm64/include/asm/syscall.h
+++ b/arch/arm64/include/asm/syscall.h
@@ -120,11 +120,6 @@ static inline int syscall_get_arch(struct task_struct *task)
return AUDIT_ARCH_AARCH64;
}
-static inline bool has_syscall_work(unsigned long flags)
-{
- return unlikely(flags & _TIF_SYSCALL_WORK);
-}
-
int syscall_trace_enter(struct pt_regs *regs, unsigned long flags);
void syscall_exit_to_user_mode_work(struct pt_regs *regs);
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index 7942478e4065..4ae83cb620bb 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -109,6 +109,9 @@ void arch_setup_new_exec(void);
_TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP | \
_TIF_SYSCALL_EMU)
+#define _TIF_SYSCALL_EXIT_WORK (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
+ _TIF_SYSCALL_TRACEPOINT)
+
#ifdef CONFIG_SHADOW_CALL_STACK
#define INIT_SCS \
.scs_base = init_shadow_call_stack, \
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 35efa2062408..3cac9668aaa8 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -2471,7 +2471,7 @@ void syscall_exit_to_user_mode_work(struct pt_regs *regs)
rseq_syscall(regs);
- if (has_syscall_work(flags) || flags & _TIF_SINGLESTEP)
+ if (unlikely(flags & _TIF_SYSCALL_EXIT_WORK) || flags & _TIF_SINGLESTEP)
syscall_exit_work(regs, flags);
}
diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
index 66d4da641d97..ec478fc37a9f 100644
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -101,7 +101,7 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
return;
}
- if (has_syscall_work(flags)) {
+ if (unlikely(flags & _TIF_SYSCALL_WORK)) {
/*
* The de-facto standard way to skip a system call using ptrace
* is to set the system call to -1 (NO_SYSCALL) and set x0 to a
--
2.34.1
* [PATCH v13 RESEND 08/14] arm64/ptrace: Skip syscall exit reporting for PTRACE_SYSEMU_SINGLESTEP
2026-03-17 8:20 [PATCH v13 RESEND 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
` (6 preceding siblings ...)
2026-03-17 8:20 ` [PATCH v13 RESEND 07/14] arm64/ptrace: Define and use _TIF_SYSCALL_EXIT_WORK Jinjie Ruan
@ 2026-03-17 8:20 ` Jinjie Ruan
2026-03-19 14:20 ` Linus Walleij
2026-03-17 8:20 ` [PATCH v13 RESEND 09/14] arm64: entry: Convert to generic entry Jinjie Ruan
` (7 subsequent siblings)
15 siblings, 1 reply; 38+ messages in thread
From: Jinjie Ruan @ 2026-03-17 8:20 UTC (permalink / raw)
Align the syscall exit reporting logic with the generic entry
framework by skipping the exit stop when PTRACE_SYSEMU_SINGLESTEP is
in effect.
[Rationale]
When a tracer uses PTRACE_SYSEMU_SINGLESTEP, both _TIF_SYSCALL_EMU
and _TIF_SINGLESTEP flags are set. Currently, arm64 reports a syscall
exit stop whenever _TIF_SINGLESTEP is set, regardless of the
emulation state.
However, as per the generic entry implementation (see
include/linux/entry-common.h):
"If SYSCALL_EMU is set, then the only reason to report is when SINGLESTEP
is set (i.e. PTRACE_SYSEMU_SINGLESTEP). This syscall instruction has been
already reported in syscall_trace_enter()."
Since PTRACE_SYSEMU intercepts and skips the actual syscall
execution, reporting a subsequent exit stop is redundant and
inconsistent with the expected behavior of emulated system calls.
[Changes]
- Introduce report_single_step(): Add a helper to encapsulate the
logic for deciding whether to report a single-step stop at syscall
exit. It returns false if _TIF_SYSCALL_EMU is set, ensuring the
emulated syscall does not trigger a duplicate report.
- Update syscall_exit_work(): Use the new helper to determine
the stepping state instead of directly checking _TIF_SINGLESTEP.
[Impact]
- PTRACE_SINGLESTEP: Continues to report exit stops for actual
instructions.
- PTRACE_SYSEMU: Continues to skip exit stops.
- PTRACE_SYSEMU_SINGLESTEP: Now correctly skips the redundant exit
stop, aligning arm64 with the generic entry infrastructure.
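The three cases above can be checked against a small sketch that mirrors
the helper added by this patch (flag bit values here are illustrative,
not the real TIF positions):

```c
#include <stdbool.h>

/* Illustrative bit values only. */
#define _TIF_SYSCALL_EMU (1UL << 12)
#define _TIF_SINGLESTEP  (1UL << 21)

/*
 * Mirrors report_single_step() from the patch: an emulated syscall
 * never reports a single-step stop at syscall exit, because the
 * syscall instruction was already reported at entry.
 */
static bool report_single_step(unsigned long flags)
{
        if (flags & _TIF_SYSCALL_EMU)
                return false;

        return flags & _TIF_SINGLESTEP;
}
```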
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
arch/arm64/kernel/ptrace.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 3cac9668aaa8..766de3584cff 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -2454,14 +2454,25 @@ int syscall_trace_enter(struct pt_regs *regs, unsigned long flags)
return ret ? : syscall;
}
+static inline bool report_single_step(unsigned long flags)
+{
+ if (flags & _TIF_SYSCALL_EMU)
+ return false;
+
+ return flags & _TIF_SINGLESTEP;
+}
+
static void syscall_exit_work(struct pt_regs *regs, unsigned long flags)
{
+ bool step;
+
audit_syscall_exit(regs);
if (flags & _TIF_SYSCALL_TRACEPOINT)
trace_sys_exit(regs, syscall_get_return_value(current, regs));
- if (flags & (_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP))
+ step = report_single_step(flags);
+ if (step || flags & _TIF_SYSCALL_TRACE)
report_syscall_exit(regs);
}
--
2.34.1
* [PATCH v13 RESEND 09/14] arm64: entry: Convert to generic entry
2026-03-17 8:20 [PATCH v13 RESEND 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
` (7 preceding siblings ...)
2026-03-17 8:20 ` [PATCH v13 RESEND 08/14] arm64/ptrace: Skip syscall exit reporting for PTRACE_SYSEMU_SINGLESTEP Jinjie Ruan
@ 2026-03-17 8:20 ` Jinjie Ruan
2026-03-17 10:58 ` Peter Zijlstra
2026-03-19 14:21 ` Linus Walleij
2026-03-17 8:20 ` [PATCH v13 RESEND 10/14] arm64: Inline el0_svc_common() Jinjie Ruan
` (6 subsequent siblings)
15 siblings, 2 replies; 38+ messages in thread
From: Jinjie Ruan @ 2026-03-17 8:20 UTC (permalink / raw)
Implement the generic entry framework for arm64 to handle system call
entry and exit. This follows the migration of x86, RISC-V, and LoongArch,
consolidating architecture-specific syscall tracing and auditing into
the common kernel entry infrastructure.
[Background]
Arm64 has already adopted generic IRQ entry. Completing the conversion
to the generic syscall entry framework reduces architectural divergence,
simplifies maintenance, and allows arm64 to automatically benefit from
improvements in the common entry code.
[Changes]
1. Kconfig and Infrastructure:
- Select GENERIC_ENTRY and remove GENERIC_IRQ_ENTRY (now implied).
- Migrate struct thread_info to use the syscall_work field instead
of TIF flags for syscall-related tasks.
2. Thread Info and Flags:
- Remove definitions for TIF_SYSCALL_TRACE, TIF_SYSCALL_AUDIT,
TIF_SYSCALL_TRACEPOINT, TIF_SECCOMP, and TIF_SYSCALL_EMU.
- Replace _TIF_SYSCALL_WORK and _TIF_SYSCALL_EXIT_WORK with the
generic SYSCALL_WORK bitmask.
- Map single-step state to SYSCALL_EXIT_TRAP in debug-monitors.c.
3. Architecture-Specific Hooks (asm/entry-common.h):
- Implement arch_ptrace_report_syscall_entry() and _exit() by
porting the existing arm64 logic to the generic interface.
- Add arch_syscall_is_vdso_sigreturn() to asm/syscall.h to
support Syscall User Dispatch (SUD).
4. Differentiate between syscall and interrupt entry/exit paths to handle
RSEQ slice extensions correctly.
- For irq/exception entry/exit: use irqentry_enter_from_user_mode() and
irqentry_exit_to_user_mode_prepare().
- For syscall entry/exit: use enter_from_user_mode() and
syscall_exit_to_user_mode_prepare().
- Remove exit_to_user_mode_prepare_legacy() which is no longer necessary.
5. Replace rseq_syscall() with its static-key-based counterpart,
rseq_debug_syscall_return().
6. Cleanup and Refactoring:
- Remove redundant arm64-specific syscall tracing functions from
ptrace.c, including syscall_trace_enter(), syscall_exit_work(),
and related audit/step helpers.
- Update el0_svc_common() in syscall.c to use the generic
syscall_work checks and entry/exit call sites.
[Why this matters]
- Unified Interface: Aligns arm64 with the modern kernel entry standard.
- Improved Maintainability: Bug fixes in kernel/entry/common.c now
apply to arm64 automatically.
- Feature Readiness: Simplifies the implementation of future
cross-architecture syscall features.
[Compatibility]
This conversion maintains full ABI compatibility with existing
userspace. The ptrace register-saving behavior, seccomp filtering, and
syscall tracing semantics remain identical to the previous implementation.
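The core bookkeeping change can be modelled in a few lines of C: syscall
work bits move from TIF flags into a dedicated thread_info field, with
set/clear helpers like those used in debug-monitors.c. This is a sketch
only; the struct and helper names are modelled on the generic entry code,
not copied from it:

```c
#include <stdbool.h>

/* Bit names follow the generic entry code; values are illustrative. */
enum syscall_work_bit {
        SYSCALL_WORK_BIT_SECCOMP,
        SYSCALL_WORK_BIT_SYSCALL_TRACE,
        SYSCALL_WORK_BIT_SYSCALL_EXIT_TRAP,
};

#define SYSCALL_WORK_SECCOMP           (1UL << SYSCALL_WORK_BIT_SECCOMP)
#define SYSCALL_WORK_SYSCALL_TRACE     (1UL << SYSCALL_WORK_BIT_SYSCALL_TRACE)
#define SYSCALL_WORK_SYSCALL_EXIT_TRAP (1UL << SYSCALL_WORK_BIT_SYSCALL_EXIT_TRAP)

/* Minimal model of the new thread_info layout. */
struct thread_info_model {
        unsigned long syscall_work;     /* SYSCALL_WORK_ flags */
};

/* Analogous to set_task_syscall_work()/clear_task_syscall_work(). */
static void set_syscall_work(struct thread_info_model *ti, unsigned long mask)
{
        ti->syscall_work |= mask;
}

static void clear_syscall_work(struct thread_info_model *ti, unsigned long mask)
{
        ti->syscall_work &= ~mask;
}

/*
 * What user_enable_single_step()/user_disable_single_step() arrange with
 * SYSCALL_EXIT_TRAP: trap once when stepping out of a system call,
 * before any user instruction runs.
 */
static bool wants_exit_trap(const struct thread_info_model *ti)
{
        return ti->syscall_work & SYSCALL_WORK_SYSCALL_EXIT_TRAP;
}
```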
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Thomas Gleixner <tglx@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
Suggested-by: Kevin Brodsky <kevin.brodsky@arm.com>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
arch/arm64/Kconfig | 2 +-
arch/arm64/include/asm/entry-common.h | 76 +++++++++++++
arch/arm64/include/asm/syscall.h | 19 +++-
arch/arm64/include/asm/thread_info.h | 19 +---
arch/arm64/kernel/debug-monitors.c | 7 ++
arch/arm64/kernel/entry-common.c | 25 ++++-
arch/arm64/kernel/ptrace.c | 154 --------------------------
arch/arm64/kernel/signal.c | 2 +-
arch/arm64/kernel/syscall.c | 6 +-
include/linux/irq-entry-common.h | 8 --
include/linux/rseq_entry.h | 18 ---
11 files changed, 127 insertions(+), 209 deletions(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 38dba5f7e4d2..96fef01598be 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -154,9 +154,9 @@ config ARM64
select GENERIC_CPU_DEVICES
select GENERIC_CPU_VULNERABILITIES
select GENERIC_EARLY_IOREMAP
+ select GENERIC_ENTRY
select GENERIC_IDLE_POLL_SETUP
select GENERIC_IOREMAP
- select GENERIC_IRQ_ENTRY
select GENERIC_IRQ_IPI
select GENERIC_IRQ_KEXEC_CLEAR_VM_FORWARD
select GENERIC_IRQ_PROBE
diff --git a/arch/arm64/include/asm/entry-common.h b/arch/arm64/include/asm/entry-common.h
index cab8cd78f693..d8bf4bf342e8 100644
--- a/arch/arm64/include/asm/entry-common.h
+++ b/arch/arm64/include/asm/entry-common.h
@@ -3,14 +3,21 @@
#ifndef _ASM_ARM64_ENTRY_COMMON_H
#define _ASM_ARM64_ENTRY_COMMON_H
+#include <linux/ptrace.h>
#include <linux/thread_info.h>
+#include <asm/compat.h>
#include <asm/cpufeature.h>
#include <asm/daifflags.h>
#include <asm/fpsimd.h>
#include <asm/mte.h>
#include <asm/stacktrace.h>
+enum ptrace_syscall_dir {
+ PTRACE_SYSCALL_ENTER = 0,
+ PTRACE_SYSCALL_EXIT,
+};
+
#define ARCH_EXIT_TO_USER_MODE_WORK (_TIF_MTE_ASYNC_FAULT | _TIF_FOREIGN_FPSTATE)
static __always_inline void arch_exit_to_user_mode_work(struct pt_regs *regs,
@@ -54,4 +61,73 @@ static inline bool arch_irqentry_exit_need_resched(void)
#define arch_irqentry_exit_need_resched arch_irqentry_exit_need_resched
+static __always_inline unsigned long ptrace_save_reg(struct pt_regs *regs,
+ enum ptrace_syscall_dir dir,
+ int *regno)
+{
+ unsigned long saved_reg;
+
+ /*
+ * We have some ABI weirdness here in the way that we handle syscall
+ * exit stops because we indicate whether or not the stop has been
+ * signalled from syscall entry or syscall exit by clobbering a general
+ * purpose register (ip/r12 for AArch32, x7 for AArch64) in the tracee
+ * and restoring its old value after the stop. This means that:
+ *
+ * - Any writes by the tracer to this register during the stop are
+ * ignored/discarded.
+ *
+ * - The actual value of the register is not available during the stop,
+ * so the tracer cannot save it and restore it later.
+ *
+ * - Syscall stops behave differently to seccomp and pseudo-step traps
+ * (the latter do not nobble any registers).
+ */
+ *regno = (is_compat_task() ? 12 : 7);
+ saved_reg = regs->regs[*regno];
+ regs->regs[*regno] = dir;
+
+ return saved_reg;
+}
+
+static __always_inline int arch_ptrace_report_syscall_entry(struct pt_regs *regs)
+{
+ unsigned long saved_reg;
+ int regno, ret;
+
+ saved_reg = ptrace_save_reg(regs, PTRACE_SYSCALL_ENTER, &regno);
+ ret = ptrace_report_syscall_entry(regs);
+ if (ret)
+ forget_syscall(regs);
+ regs->regs[regno] = saved_reg;
+
+ return ret;
+}
+
+#define arch_ptrace_report_syscall_entry arch_ptrace_report_syscall_entry
+
+static __always_inline void arch_ptrace_report_syscall_exit(struct pt_regs *regs,
+ int step)
+{
+ unsigned long saved_reg;
+ int regno;
+
+ saved_reg = ptrace_save_reg(regs, PTRACE_SYSCALL_EXIT, &regno);
+ if (!step) {
+ ptrace_report_syscall_exit(regs, 0);
+ regs->regs[regno] = saved_reg;
+ } else {
+ regs->regs[regno] = saved_reg;
+
+ /*
+ * Signal a pseudo-step exception since we are stepping but
+ * tracer modifications to the registers may have rewound the
+ * state machine.
+ */
+ ptrace_report_syscall_exit(regs, 1);
+ }
+}
+
+#define arch_ptrace_report_syscall_exit arch_ptrace_report_syscall_exit
+
#endif /* _ASM_ARM64_ENTRY_COMMON_H */
diff --git a/arch/arm64/include/asm/syscall.h b/arch/arm64/include/asm/syscall.h
index dea392c081ca..240d45735cc5 100644
--- a/arch/arm64/include/asm/syscall.h
+++ b/arch/arm64/include/asm/syscall.h
@@ -9,6 +9,9 @@
#include <linux/compat.h>
#include <linux/err.h>
+#include <asm/compat.h>
+#include <asm/vdso.h>
+
typedef long (*syscall_fn_t)(const struct pt_regs *regs);
extern const syscall_fn_t sys_call_table[];
@@ -120,7 +123,19 @@ static inline int syscall_get_arch(struct task_struct *task)
return AUDIT_ARCH_AARCH64;
}
-int syscall_trace_enter(struct pt_regs *regs, unsigned long flags);
-void syscall_exit_to_user_mode_work(struct pt_regs *regs);
+static inline bool arch_syscall_is_vdso_sigreturn(struct pt_regs *regs)
+{
+ unsigned long sigtramp;
+
+#ifdef CONFIG_COMPAT
+ if (is_compat_task()) {
+ unsigned long sigpage = (unsigned long)current->mm->context.sigpage;
+
+ return regs->pc >= sigpage && regs->pc < (sigpage + PAGE_SIZE);
+ }
+#endif
+ sigtramp = (unsigned long)VDSO_SYMBOL(current->mm->context.vdso, sigtramp);
+ return regs->pc == (sigtramp + 8);
+}
#endif /* __ASM_SYSCALL_H */
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index 4ae83cb620bb..f89a15dc6ad5 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -43,6 +43,7 @@ struct thread_info {
void *scs_sp;
#endif
u32 cpu;
+ unsigned long syscall_work; /* SYSCALL_WORK_ flags */
};
#define thread_saved_pc(tsk) \
@@ -65,11 +66,6 @@ void arch_setup_new_exec(void);
#define TIF_UPROBE 5 /* uprobe breakpoint or singlestep */
#define TIF_MTE_ASYNC_FAULT 6 /* MTE Asynchronous Tag Check Fault */
#define TIF_NOTIFY_SIGNAL 7 /* signal notifications exist */
-#define TIF_SYSCALL_TRACE 8 /* syscall trace active */
-#define TIF_SYSCALL_AUDIT 9 /* syscall auditing */
-#define TIF_SYSCALL_TRACEPOINT 10 /* syscall tracepoint for ftrace */
-#define TIF_SECCOMP 11 /* syscall secure computing */
-#define TIF_SYSCALL_EMU 12 /* syscall emulation active */
#define TIF_PATCH_PENDING 13 /* pending live patching update */
#define TIF_MEMDIE 18 /* is terminating due to OOM killer */
#define TIF_FREEZE 19
@@ -91,27 +87,14 @@ void arch_setup_new_exec(void);
#define _TIF_NEED_RESCHED_LAZY (1 << TIF_NEED_RESCHED_LAZY)
#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
#define _TIF_FOREIGN_FPSTATE (1 << TIF_FOREIGN_FPSTATE)
-#define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
-#define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT)
-#define _TIF_SYSCALL_TRACEPOINT (1 << TIF_SYSCALL_TRACEPOINT)
-#define _TIF_SECCOMP (1 << TIF_SECCOMP)
-#define _TIF_SYSCALL_EMU (1 << TIF_SYSCALL_EMU)
#define _TIF_PATCH_PENDING (1 << TIF_PATCH_PENDING)
#define _TIF_UPROBE (1 << TIF_UPROBE)
-#define _TIF_SINGLESTEP (1 << TIF_SINGLESTEP)
#define _TIF_32BIT (1 << TIF_32BIT)
#define _TIF_SVE (1 << TIF_SVE)
#define _TIF_MTE_ASYNC_FAULT (1 << TIF_MTE_ASYNC_FAULT)
#define _TIF_NOTIFY_SIGNAL (1 << TIF_NOTIFY_SIGNAL)
#define _TIF_TSC_SIGSEGV (1 << TIF_TSC_SIGSEGV)
-#define _TIF_SYSCALL_WORK (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
- _TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP | \
- _TIF_SYSCALL_EMU)
-
-#define _TIF_SYSCALL_EXIT_WORK (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
- _TIF_SYSCALL_TRACEPOINT)
-
#ifdef CONFIG_SHADOW_CALL_STACK
#define INIT_SCS \
.scs_base = init_shadow_call_stack, \
diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
index 29307642f4c9..e67643a70405 100644
--- a/arch/arm64/kernel/debug-monitors.c
+++ b/arch/arm64/kernel/debug-monitors.c
@@ -385,11 +385,18 @@ void user_enable_single_step(struct task_struct *task)
if (!test_and_set_ti_thread_flag(ti, TIF_SINGLESTEP))
set_regs_spsr_ss(task_pt_regs(task));
+
+ /*
+ * Ensure that a trap is triggered once stepping out of a system
+ * call prior to executing any user instruction.
+ */
+ set_task_syscall_work(task, SYSCALL_EXIT_TRAP);
}
NOKPROBE_SYMBOL(user_enable_single_step);
void user_disable_single_step(struct task_struct *task)
{
clear_ti_thread_flag(task_thread_info(task), TIF_SINGLESTEP);
+ clear_task_syscall_work(task, SYSCALL_EXIT_TRAP);
}
NOKPROBE_SYMBOL(user_disable_single_step);
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 3625797e9ee8..b7ac88bb946c 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -64,6 +64,12 @@ static void noinstr exit_to_kernel_mode(struct pt_regs *regs,
* instrumentable code, or any code which may trigger an exception.
*/
static __always_inline void arm64_enter_from_user_mode(struct pt_regs *regs)
+{
+ irqentry_enter_from_user_mode(regs);
+ mte_disable_tco_entry(current);
+}
+
+static __always_inline void arm64_syscall_enter_from_user_mode(struct pt_regs *regs)
{
enter_from_user_mode(regs);
mte_disable_tco_entry(current);
@@ -78,7 +84,16 @@ static __always_inline void arm64_enter_from_user_mode(struct pt_regs *regs)
static __always_inline void arm64_exit_to_user_mode(struct pt_regs *regs)
{
local_irq_disable();
- exit_to_user_mode_prepare_legacy(regs);
+ irqentry_exit_to_user_mode_prepare(regs);
+ local_daif_mask();
+ mte_check_tfsr_exit();
+ exit_to_user_mode();
+}
+
+static __always_inline void arm64_syscall_exit_to_user_mode(struct pt_regs *regs)
+{
+ local_irq_disable();
+ syscall_exit_to_user_mode_prepare(regs);
local_daif_mask();
mte_check_tfsr_exit();
exit_to_user_mode();
@@ -717,12 +732,12 @@ static void noinstr el0_brk64(struct pt_regs *regs, unsigned long esr)
static void noinstr el0_svc(struct pt_regs *regs)
{
- arm64_enter_from_user_mode(regs);
+ arm64_syscall_enter_from_user_mode(regs);
cortex_a76_erratum_1463225_svc_handler();
fpsimd_syscall_enter();
local_daif_restore(DAIF_PROCCTX);
do_el0_svc(regs);
- arm64_exit_to_user_mode(regs);
+ arm64_syscall_exit_to_user_mode(regs);
fpsimd_syscall_exit();
}
@@ -869,11 +884,11 @@ static void noinstr el0_cp15(struct pt_regs *regs, unsigned long esr)
static void noinstr el0_svc_compat(struct pt_regs *regs)
{
- arm64_enter_from_user_mode(regs);
+ arm64_syscall_enter_from_user_mode(regs);
cortex_a76_erratum_1463225_svc_handler();
local_daif_restore(DAIF_PROCCTX);
do_el0_svc_compat(regs);
- arm64_exit_to_user_mode(regs);
+ arm64_syscall_exit_to_user_mode(regs);
}
static void noinstr el0_bkpt32(struct pt_regs *regs, unsigned long esr)
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 766de3584cff..9acc314bc376 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -8,7 +8,6 @@
* Copyright (C) 2012 ARM Ltd.
*/
-#include <linux/audit.h>
#include <linux/compat.h>
#include <linux/kernel.h>
#include <linux/sched/signal.h>
@@ -18,7 +17,6 @@
#include <linux/smp.h>
#include <linux/ptrace.h>
#include <linux/user.h>
-#include <linux/seccomp.h>
#include <linux/security.h>
#include <linux/init.h>
#include <linux/signal.h>
@@ -28,7 +26,6 @@
#include <linux/hw_breakpoint.h>
#include <linux/regset.h>
#include <linux/elf.h>
-#include <linux/rseq.h>
#include <asm/compat.h>
#include <asm/cpufeature.h>
@@ -38,13 +35,9 @@
#include <asm/mte.h>
#include <asm/pointer_auth.h>
#include <asm/stacktrace.h>
-#include <asm/syscall.h>
#include <asm/traps.h>
#include <asm/system_misc.h>
-#define CREATE_TRACE_POINTS
-#include <trace/events/syscalls.h>
-
struct pt_regs_offset {
const char *name;
int offset;
@@ -2339,153 +2332,6 @@ long arch_ptrace(struct task_struct *child, long request,
return ptrace_request(child, request, addr, data);
}
-enum ptrace_syscall_dir {
- PTRACE_SYSCALL_ENTER = 0,
- PTRACE_SYSCALL_EXIT,
-};
-
-static __always_inline unsigned long ptrace_save_reg(struct pt_regs *regs,
- enum ptrace_syscall_dir dir,
- int *regno)
-{
- unsigned long saved_reg;
-
- /*
- * We have some ABI weirdness here in the way that we handle syscall
- * exit stops because we indicate whether or not the stop has been
- * signalled from syscall entry or syscall exit by clobbering a general
- * purpose register (ip/r12 for AArch32, x7 for AArch64) in the tracee
- * and restoring its old value after the stop. This means that:
- *
- * - Any writes by the tracer to this register during the stop are
- * ignored/discarded.
- *
- * - The actual value of the register is not available during the stop,
- * so the tracer cannot save it and restore it later.
- *
- * - Syscall stops behave differently to seccomp and pseudo-step traps
- * (the latter do not nobble any registers).
- */
- *regno = (is_compat_task() ? 12 : 7);
- saved_reg = regs->regs[*regno];
- regs->regs[*regno] = dir;
-
- return saved_reg;
-}
-
-static int report_syscall_entry(struct pt_regs *regs)
-{
- unsigned long saved_reg;
- int regno, ret;
-
- saved_reg = ptrace_save_reg(regs, PTRACE_SYSCALL_ENTER, &regno);
- ret = ptrace_report_syscall_entry(regs);
- if (ret)
- forget_syscall(regs);
- regs->regs[regno] = saved_reg;
-
- return ret;
-}
-
-static void report_syscall_exit(struct pt_regs *regs)
-{
- unsigned long saved_reg;
- int regno;
-
- saved_reg = ptrace_save_reg(regs, PTRACE_SYSCALL_EXIT, &regno);
- if (!test_thread_flag(TIF_SINGLESTEP)) {
- ptrace_report_syscall_exit(regs, 0);
- regs->regs[regno] = saved_reg;
- } else {
- regs->regs[regno] = saved_reg;
-
- /*
- * Signal a pseudo-step exception since we are stepping but
- * tracer modifications to the registers may have rewound the
- * state machine.
- */
- ptrace_report_syscall_exit(regs, 1);
- }
-}
-
-static inline void syscall_enter_audit(struct pt_regs *regs, long syscall)
-{
- if (unlikely(audit_context())) {
- unsigned long args[6];
-
- syscall_get_arguments(current, regs, args);
- audit_syscall_entry(syscall, args[0], args[1], args[2], args[3]);
- }
-}
-
-int syscall_trace_enter(struct pt_regs *regs, unsigned long flags)
-{
- long syscall;
- int ret;
-
- if (flags & (_TIF_SYSCALL_EMU | _TIF_SYSCALL_TRACE)) {
- ret = report_syscall_entry(regs);
- if (ret || (flags & _TIF_SYSCALL_EMU))
- return NO_SYSCALL;
- }
-
- /* Do the secure computing after ptrace; failures should be fast. */
- if (flags & _TIF_SECCOMP) {
- ret = __secure_computing();
- if (ret == -1)
- return NO_SYSCALL;
- }
-
- /* Either of the above might have changed the syscall number */
- syscall = syscall_get_nr(current, regs);
-
- if (test_thread_flag(TIF_SYSCALL_TRACEPOINT)) {
- trace_sys_enter(regs, syscall);
-
- /*
- * Probes or BPF hooks in the tracepoint may have changed the
- * system call number as well.
- */
- syscall = syscall_get_nr(current, regs);
- }
-
- syscall_enter_audit(regs, syscall);
-
- return ret ? : syscall;
-}
-
-static inline bool report_single_step(unsigned long flags)
-{
- if (flags & _TIF_SYSCALL_EMU)
- return false;
-
- return flags & _TIF_SINGLESTEP;
-}
-
-static void syscall_exit_work(struct pt_regs *regs, unsigned long flags)
-{
- bool step;
-
- audit_syscall_exit(regs);
-
- if (flags & _TIF_SYSCALL_TRACEPOINT)
- trace_sys_exit(regs, syscall_get_return_value(current, regs));
-
- step = report_single_step(flags);
- if (step || flags & _TIF_SYSCALL_TRACE)
- report_syscall_exit(regs);
-}
-
-void syscall_exit_to_user_mode_work(struct pt_regs *regs)
-{
- unsigned long flags = read_thread_flags();
-
- rseq_syscall(regs);
-
- if (unlikely(flags & _TIF_SYSCALL_EXIT_WORK) || flags & _TIF_SINGLESTEP)
- syscall_exit_work(regs, flags);
-}
-
/*
* SPSR_ELx bits which are always architecturally RES0 per ARM DDI 0487D.a.
* We permit userspace to set SSBS (AArch64 bit 12, AArch32 bit 23) which is
diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
index 08ffc5a5aea4..7ca30ee41e7a 100644
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@ -8,8 +8,8 @@
#include <linux/cache.h>
#include <linux/compat.h>
+#include <linux/entry-common.h>
#include <linux/errno.h>
-#include <linux/irq-entry-common.h>
#include <linux/kernel.h>
#include <linux/signal.h>
#include <linux/freezer.h>
diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
index ec478fc37a9f..77d00a5cf0e9 100644
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -2,6 +2,7 @@
#include <linux/compiler.h>
#include <linux/context_tracking.h>
+#include <linux/entry-common.h>
#include <linux/errno.h>
#include <linux/nospec.h>
#include <linux/ptrace.h>
@@ -68,6 +69,7 @@ static void invoke_syscall(struct pt_regs *regs, unsigned int scno,
static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
const syscall_fn_t syscall_table[])
{
+ unsigned long work = READ_ONCE(current_thread_info()->syscall_work);
unsigned long flags = read_thread_flags();
regs->orig_x0 = regs->regs[0];
@@ -101,7 +103,7 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
return;
}
- if (unlikely(flags & _TIF_SYSCALL_WORK)) {
+ if (unlikely(work & SYSCALL_WORK_ENTER)) {
/*
* The de-facto standard way to skip a system call using ptrace
* is to set the system call to -1 (NO_SYSCALL) and set x0 to a
@@ -119,7 +121,7 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
*/
if (scno == NO_SYSCALL)
syscall_set_return_value(current, regs, -ENOSYS, 0);
- scno = syscall_trace_enter(regs, flags);
+ scno = syscall_trace_enter(regs, work);
if (scno == NO_SYSCALL)
goto trace_exit;
}
diff --git a/include/linux/irq-entry-common.h b/include/linux/irq-entry-common.h
index d26d1b1bcbfb..6519b4a30dc1 100644
--- a/include/linux/irq-entry-common.h
+++ b/include/linux/irq-entry-common.h
@@ -236,14 +236,6 @@ static __always_inline void __exit_to_user_mode_validate(void)
lockdep_sys_exit();
}
-/* Temporary workaround to keep ARM64 alive */
-static __always_inline void exit_to_user_mode_prepare_legacy(struct pt_regs *regs)
-{
- __exit_to_user_mode_prepare(regs);
- rseq_exit_to_user_mode_legacy();
- __exit_to_user_mode_validate();
-}
-
/**
* syscall_exit_to_user_mode_prepare - call exit_to_user_mode_loop() if required
* @regs: Pointer to pt_regs on entry stack
diff --git a/include/linux/rseq_entry.h b/include/linux/rseq_entry.h
index c6831c93cd6e..e9c4108ac514 100644
--- a/include/linux/rseq_entry.h
+++ b/include/linux/rseq_entry.h
@@ -743,24 +743,6 @@ static __always_inline void rseq_irqentry_exit_to_user_mode(void)
ev->events = 0;
}
-/* Required to keep ARM64 working */
-static __always_inline void rseq_exit_to_user_mode_legacy(void)
-{
- struct rseq_event *ev = &current->rseq.event;
-
- rseq_stat_inc(rseq_stats.exit);
-
- if (static_branch_unlikely(&rseq_debug_enabled))
- WARN_ON_ONCE(ev->sched_switch);
-
- /*
- * Ensure that event (especially user_irq) is cleared when the
- * interrupt did not result in a schedule and therefore the
- * rseq processing did not clear it.
- */
- ev->events = 0;
-}
-
void __rseq_debug_syscall_return(struct pt_regs *regs);
static __always_inline void rseq_debug_syscall_return(struct pt_regs *regs)
--
2.34.1
* [PATCH v13 RESEND 10/14] arm64: Inline el0_svc_common()
2026-03-17 8:20 [PATCH v13 RESEND 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
` (8 preceding siblings ...)
2026-03-17 8:20 ` [PATCH v13 RESEND 09/14] arm64: entry: Convert to generic entry Jinjie Ruan
@ 2026-03-17 8:20 ` Jinjie Ruan
2026-03-19 14:22 ` Linus Walleij
2026-03-17 8:20 ` [PATCH v13 RESEND 11/14] s390: Rename TIF_SINGLE_STEP to TIF_SINGLESTEP Jinjie Ruan
` (5 subsequent siblings)
15 siblings, 1 reply; 38+ messages in thread
From: Jinjie Ruan @ 2026-03-17 8:20 UTC (permalink / raw)
To: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, linusw, broonie, pengcan, liqiang01,
ziyao, guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
Cc: ruanjinjie
After converting arm64 to the Generic Entry framework, the compiler no
longer inlines el0_svc_common() into its caller do_el0_svc(). This
introduces a small but measurable overhead in the critical system call
path.
Manually forcing el0_svc_common() to be inlined restores the
performance. Benchmarking with "perf bench syscall basic" on a
Kunpeng 920 platform (based on v6.19-rc1) shows a ~1% performance
uplift.
Inlining this function reduces function prologue/epilogue overhead
and allows for better compiler optimization in the hot system call
dispatch path.
| Metric | W/O this patch | With this patch | Change |
| ---------- | -------------- | --------------- | --------- |
| Total time | 2.195 [sec] | 2.171 [sec] | ↓1.1% |
| usecs/op | 0.219575 | 0.217192 | ↓1.1% |
| ops/sec | 4,554,260 | 4,604,225 | ↑1.1% |
Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
arch/arm64/kernel/syscall.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
index 77d00a5cf0e9..6fcd97c46716 100644
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -66,8 +66,8 @@ static void invoke_syscall(struct pt_regs *regs, unsigned int scno,
choose_random_kstack_offset(get_random_u16());
}
-static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
- const syscall_fn_t syscall_table[])
+static __always_inline void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
+ const syscall_fn_t syscall_table[])
{
unsigned long work = READ_ONCE(current_thread_info()->syscall_work);
unsigned long flags = read_thread_flags();
--
2.34.1
* [PATCH v13 RESEND 11/14] s390: Rename TIF_SINGLE_STEP to TIF_SINGLESTEP
2026-03-17 8:20 [PATCH v13 RESEND 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
` (9 preceding siblings ...)
2026-03-17 8:20 ` [PATCH v13 RESEND 10/14] arm64: Inline el0_svc_common() Jinjie Ruan
@ 2026-03-17 8:20 ` Jinjie Ruan
2026-03-19 14:23 ` Linus Walleij
2026-03-19 17:05 ` Kevin Brodsky
2026-03-17 8:20 ` [PATCH v13 RESEND 12/14] asm-generic: Move TIF_SINGLESTEP to generic TIF bits Jinjie Ruan
` (4 subsequent siblings)
15 siblings, 2 replies; 38+ messages in thread
From: Jinjie Ruan @ 2026-03-17 8:20 UTC (permalink / raw)
To: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, linusw, broonie, pengcan, liqiang01,
ziyao, guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
Cc: ruanjinjie
Rename TIF_SINGLE_STEP to TIF_SINGLESTEP to align with the naming
convention used by arm64, x86, and other architectures.
By aligning the name, TIF_SINGLESTEP can be consolidated into the generic
TIF bits definitions, reducing architectural divergence and simplifying
cross-architecture entry/exit logic.
No functional changes intended.
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
arch/s390/include/asm/thread_info.h | 4 ++--
arch/s390/kernel/process.c | 2 +-
arch/s390/kernel/ptrace.c | 20 ++++++++++----------
arch/s390/kernel/signal.c | 6 +++---
4 files changed, 16 insertions(+), 16 deletions(-)
diff --git a/arch/s390/include/asm/thread_info.h b/arch/s390/include/asm/thread_info.h
index 6a548a819400..1bcd42614e41 100644
--- a/arch/s390/include/asm/thread_info.h
+++ b/arch/s390/include/asm/thread_info.h
@@ -69,7 +69,7 @@ void arch_setup_new_exec(void);
#define TIF_GUARDED_STORAGE 17 /* load guarded storage control block */
#define TIF_ISOLATE_BP_GUEST 18 /* Run KVM guests with isolated BP */
#define TIF_PER_TRAP 19 /* Need to handle PER trap on exit to usermode */
-#define TIF_SINGLE_STEP 21 /* This task is single stepped */
+#define TIF_SINGLESTEP 21 /* This task is single stepped */
#define TIF_BLOCK_STEP 22 /* This task is block stepped */
#define TIF_UPROBE_SINGLESTEP 23 /* This task is uprobe single stepped */
@@ -77,7 +77,7 @@ void arch_setup_new_exec(void);
#define _TIF_GUARDED_STORAGE BIT(TIF_GUARDED_STORAGE)
#define _TIF_ISOLATE_BP_GUEST BIT(TIF_ISOLATE_BP_GUEST)
#define _TIF_PER_TRAP BIT(TIF_PER_TRAP)
-#define _TIF_SINGLE_STEP BIT(TIF_SINGLE_STEP)
+#define _TIF_SINGLESTEP BIT(TIF_SINGLESTEP)
#define _TIF_BLOCK_STEP BIT(TIF_BLOCK_STEP)
#define _TIF_UPROBE_SINGLESTEP BIT(TIF_UPROBE_SINGLESTEP)
diff --git a/arch/s390/kernel/process.c b/arch/s390/kernel/process.c
index 0df95dcb2101..3accc0c064a0 100644
--- a/arch/s390/kernel/process.c
+++ b/arch/s390/kernel/process.c
@@ -122,7 +122,7 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
/* Don't copy debug registers */
memset(&p->thread.per_user, 0, sizeof(p->thread.per_user));
memset(&p->thread.per_event, 0, sizeof(p->thread.per_event));
- clear_tsk_thread_flag(p, TIF_SINGLE_STEP);
+ clear_tsk_thread_flag(p, TIF_SINGLESTEP);
p->thread.per_flags = 0;
/* Initialize per thread user and system timer values */
p->thread.user_timer = 0;
diff --git a/arch/s390/kernel/ptrace.c b/arch/s390/kernel/ptrace.c
index 125ca4c4e30c..d2cf91f4ac3f 100644
--- a/arch/s390/kernel/ptrace.c
+++ b/arch/s390/kernel/ptrace.c
@@ -90,8 +90,8 @@ void update_cr_regs(struct task_struct *task)
new.start.val = thread->per_user.start;
new.end.val = thread->per_user.end;
- /* merge TIF_SINGLE_STEP into user specified PER registers. */
- if (test_tsk_thread_flag(task, TIF_SINGLE_STEP) ||
+ /* merge TIF_SINGLESTEP into user specified PER registers. */
+ if (test_tsk_thread_flag(task, TIF_SINGLESTEP) ||
test_tsk_thread_flag(task, TIF_UPROBE_SINGLESTEP)) {
if (test_tsk_thread_flag(task, TIF_BLOCK_STEP))
new.control.val |= PER_EVENT_BRANCH;
@@ -119,18 +119,18 @@ void update_cr_regs(struct task_struct *task)
void user_enable_single_step(struct task_struct *task)
{
clear_tsk_thread_flag(task, TIF_BLOCK_STEP);
- set_tsk_thread_flag(task, TIF_SINGLE_STEP);
+ set_tsk_thread_flag(task, TIF_SINGLESTEP);
}
void user_disable_single_step(struct task_struct *task)
{
clear_tsk_thread_flag(task, TIF_BLOCK_STEP);
- clear_tsk_thread_flag(task, TIF_SINGLE_STEP);
+ clear_tsk_thread_flag(task, TIF_SINGLESTEP);
}
void user_enable_block_step(struct task_struct *task)
{
- set_tsk_thread_flag(task, TIF_SINGLE_STEP);
+ set_tsk_thread_flag(task, TIF_SINGLESTEP);
set_tsk_thread_flag(task, TIF_BLOCK_STEP);
}
@@ -143,7 +143,7 @@ void ptrace_disable(struct task_struct *task)
{
memset(&task->thread.per_user, 0, sizeof(task->thread.per_user));
memset(&task->thread.per_event, 0, sizeof(task->thread.per_event));
- clear_tsk_thread_flag(task, TIF_SINGLE_STEP);
+ clear_tsk_thread_flag(task, TIF_SINGLESTEP);
clear_tsk_thread_flag(task, TIF_PER_TRAP);
task->thread.per_flags = 0;
}
@@ -155,19 +155,19 @@ static inline unsigned long __peek_user_per(struct task_struct *child,
{
if (addr == offsetof(struct per_struct_kernel, cr9))
/* Control bits of the active per set. */
- return test_thread_flag(TIF_SINGLE_STEP) ?
+ return test_thread_flag(TIF_SINGLESTEP) ?
PER_EVENT_IFETCH : child->thread.per_user.control;
else if (addr == offsetof(struct per_struct_kernel, cr10))
/* Start address of the active per set. */
- return test_thread_flag(TIF_SINGLE_STEP) ?
+ return test_thread_flag(TIF_SINGLESTEP) ?
0 : child->thread.per_user.start;
else if (addr == offsetof(struct per_struct_kernel, cr11))
/* End address of the active per set. */
- return test_thread_flag(TIF_SINGLE_STEP) ?
+ return test_thread_flag(TIF_SINGLESTEP) ?
-1UL : child->thread.per_user.end;
else if (addr == offsetof(struct per_struct_kernel, bits))
/* Single-step bit. */
- return test_thread_flag(TIF_SINGLE_STEP) ?
+ return test_thread_flag(TIF_SINGLESTEP) ?
(1UL << (BITS_PER_LONG - 1)) : 0;
else if (addr == offsetof(struct per_struct_kernel, starting_addr))
/* Start address of the user specified per set. */
diff --git a/arch/s390/kernel/signal.c b/arch/s390/kernel/signal.c
index 4874de5edea0..83f7650f2032 100644
--- a/arch/s390/kernel/signal.c
+++ b/arch/s390/kernel/signal.c
@@ -423,7 +423,7 @@ static void handle_signal(struct ksignal *ksig, sigset_t *oldset,
else
ret = setup_frame(ksig->sig, &ksig->ka, oldset, regs);
- signal_setup_done(ret, ksig, test_thread_flag(TIF_SINGLE_STEP));
+ signal_setup_done(ret, ksig, test_thread_flag(TIF_SINGLESTEP));
}
/*
@@ -491,7 +491,7 @@ void arch_do_signal_or_restart(struct pt_regs *regs)
regs->gprs[2] = regs->orig_gpr2;
current->restart_block.arch_data = regs->psw.addr;
regs->psw.addr = VDSO_SYMBOL(current, restart_syscall);
- if (test_thread_flag(TIF_SINGLE_STEP))
+ if (test_thread_flag(TIF_SINGLESTEP))
clear_thread_flag(TIF_PER_TRAP);
break;
case -ERESTARTNOHAND:
@@ -499,7 +499,7 @@ void arch_do_signal_or_restart(struct pt_regs *regs)
case -ERESTARTNOINTR:
regs->gprs[2] = regs->orig_gpr2;
regs->psw.addr = __rewind_psw(regs->psw, regs->int_code >> 16);
- if (test_thread_flag(TIF_SINGLE_STEP))
+ if (test_thread_flag(TIF_SINGLESTEP))
clear_thread_flag(TIF_PER_TRAP);
break;
}
--
2.34.1
* [PATCH v13 RESEND 12/14] asm-generic: Move TIF_SINGLESTEP to generic TIF bits
2026-03-17 8:20 [PATCH v13 RESEND 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
` (10 preceding siblings ...)
2026-03-17 8:20 ` [PATCH v13 RESEND 11/14] s390: Rename TIF_SINGLE_STEP to TIF_SINGLESTEP Jinjie Ruan
@ 2026-03-17 8:20 ` Jinjie Ruan
2026-03-19 14:24 ` Linus Walleij
2026-03-19 17:05 ` Kevin Brodsky
2026-03-17 8:20 ` [PATCH v13 RESEND 13/14] arm64: Use generic TIF bits for common thread flags Jinjie Ruan
` (3 subsequent siblings)
15 siblings, 2 replies; 38+ messages in thread
From: Jinjie Ruan @ 2026-03-17 8:20 UTC (permalink / raw)
To: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, linusw, broonie, pengcan, liqiang01,
ziyao, guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
Cc: ruanjinjie
Currently, x86, ARM64, s390, and LoongArch all define and use
TIF_SINGLESTEP to track single-stepping state.
Since this flag is shared across multiple major architectures and serves
a common purpose in the generic entry/exit paths, move TIF_SINGLESTEP
into the generic Thread Information Flags (TIF) infrastructure.
This consolidation reduces architecture-specific boilerplate code and
ensures consistency for generic features that rely on single-step
state tracking.
Cc: Thomas Gleixner <tglx@kernel.org>
Acked-by: Heiko Carstens <hca@linux.ibm.com> # s390
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
arch/loongarch/include/asm/thread_info.h | 11 +++++------
arch/s390/include/asm/thread_info.h | 7 +++----
arch/x86/include/asm/thread_info.h | 6 ++----
include/asm-generic/thread_info_tif.h | 5 +++++
4 files changed, 15 insertions(+), 14 deletions(-)
diff --git a/arch/loongarch/include/asm/thread_info.h b/arch/loongarch/include/asm/thread_info.h
index 4d7117fcdc78..a2ec87f18e1d 100644
--- a/arch/loongarch/include/asm/thread_info.h
+++ b/arch/loongarch/include/asm/thread_info.h
@@ -70,6 +70,7 @@ register unsigned long current_stack_pointer __asm__("$sp");
*/
#define HAVE_TIF_NEED_RESCHED_LAZY
#define HAVE_TIF_RESTORE_SIGMASK
+#define HAVE_TIF_SINGLESTEP
#include <asm-generic/thread_info_tif.h>
@@ -82,11 +83,10 @@ register unsigned long current_stack_pointer __asm__("$sp");
#define TIF_32BIT_REGS 21 /* 32-bit general purpose registers */
#define TIF_32BIT_ADDR 22 /* 32-bit address space */
#define TIF_LOAD_WATCH 23 /* If set, load watch registers */
-#define TIF_SINGLESTEP 24 /* Single Step */
-#define TIF_LSX_CTX_LIVE 25 /* LSX context must be preserved */
-#define TIF_LASX_CTX_LIVE 26 /* LASX context must be preserved */
-#define TIF_USEDLBT 27 /* LBT was used by this task this quantum (SMP) */
-#define TIF_LBT_CTX_LIVE 28 /* LBT context must be preserved */
+#define TIF_LSX_CTX_LIVE 24 /* LSX context must be preserved */
+#define TIF_LASX_CTX_LIVE 25 /* LASX context must be preserved */
+#define TIF_USEDLBT 26 /* LBT was used by this task this quantum (SMP) */
+#define TIF_LBT_CTX_LIVE 27 /* LBT context must be preserved */
#define _TIF_NOHZ BIT(TIF_NOHZ)
#define _TIF_USEDFPU BIT(TIF_USEDFPU)
@@ -96,7 +96,6 @@ register unsigned long current_stack_pointer __asm__("$sp");
#define _TIF_32BIT_REGS BIT(TIF_32BIT_REGS)
#define _TIF_32BIT_ADDR BIT(TIF_32BIT_ADDR)
#define _TIF_LOAD_WATCH BIT(TIF_LOAD_WATCH)
-#define _TIF_SINGLESTEP BIT(TIF_SINGLESTEP)
#define _TIF_LSX_CTX_LIVE BIT(TIF_LSX_CTX_LIVE)
#define _TIF_LASX_CTX_LIVE BIT(TIF_LASX_CTX_LIVE)
#define _TIF_USEDLBT BIT(TIF_USEDLBT)
diff --git a/arch/s390/include/asm/thread_info.h b/arch/s390/include/asm/thread_info.h
index 1bcd42614e41..95be5258a422 100644
--- a/arch/s390/include/asm/thread_info.h
+++ b/arch/s390/include/asm/thread_info.h
@@ -61,6 +61,7 @@ void arch_setup_new_exec(void);
*/
#define HAVE_TIF_NEED_RESCHED_LAZY
#define HAVE_TIF_RESTORE_SIGMASK
+#define HAVE_TIF_SINGLESTEP
#include <asm-generic/thread_info_tif.h>
@@ -69,15 +70,13 @@ void arch_setup_new_exec(void);
#define TIF_GUARDED_STORAGE 17 /* load guarded storage control block */
#define TIF_ISOLATE_BP_GUEST 18 /* Run KVM guests with isolated BP */
#define TIF_PER_TRAP 19 /* Need to handle PER trap on exit to usermode */
-#define TIF_SINGLESTEP 21 /* This task is single stepped */
-#define TIF_BLOCK_STEP 22 /* This task is block stepped */
-#define TIF_UPROBE_SINGLESTEP 23 /* This task is uprobe single stepped */
+#define TIF_BLOCK_STEP 20 /* This task is block stepped */
+#define TIF_UPROBE_SINGLESTEP 21 /* This task is uprobe single stepped */
#define _TIF_ASCE_PRIMARY BIT(TIF_ASCE_PRIMARY)
#define _TIF_GUARDED_STORAGE BIT(TIF_GUARDED_STORAGE)
#define _TIF_ISOLATE_BP_GUEST BIT(TIF_ISOLATE_BP_GUEST)
#define _TIF_PER_TRAP BIT(TIF_PER_TRAP)
-#define _TIF_SINGLESTEP BIT(TIF_SINGLESTEP)
#define _TIF_BLOCK_STEP BIT(TIF_BLOCK_STEP)
#define _TIF_UPROBE_SINGLESTEP BIT(TIF_UPROBE_SINGLESTEP)
diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index 0067684afb5b..f59072ba1473 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -98,9 +98,8 @@ struct thread_info {
#define TIF_IO_BITMAP 22 /* uses I/O bitmap */
#define TIF_SPEC_FORCE_UPDATE 23 /* Force speculation MSR update in context switch */
#define TIF_FORCED_TF 24 /* true if TF in eflags artificially */
-#define TIF_SINGLESTEP 25 /* reenable singlestep on user return*/
-#define TIF_BLOCKSTEP 26 /* set when we want DEBUGCTLMSR_BTF */
-#define TIF_ADDR32 27 /* 32-bit address space on 64 bits */
+#define TIF_BLOCKSTEP 25 /* set when we want DEBUGCTLMSR_BTF */
+#define TIF_ADDR32 26 /* 32-bit address space on 64 bits */
#define _TIF_SSBD BIT(TIF_SSBD)
#define _TIF_SPEC_IB BIT(TIF_SPEC_IB)
@@ -112,7 +111,6 @@ struct thread_info {
#define _TIF_SPEC_FORCE_UPDATE BIT(TIF_SPEC_FORCE_UPDATE)
#define _TIF_FORCED_TF BIT(TIF_FORCED_TF)
#define _TIF_BLOCKSTEP BIT(TIF_BLOCKSTEP)
-#define _TIF_SINGLESTEP BIT(TIF_SINGLESTEP)
#define _TIF_ADDR32 BIT(TIF_ADDR32)
/* flags to check in __switch_to() */
diff --git a/include/asm-generic/thread_info_tif.h b/include/asm-generic/thread_info_tif.h
index da1610a78f92..b277fe06aee3 100644
--- a/include/asm-generic/thread_info_tif.h
+++ b/include/asm-generic/thread_info_tif.h
@@ -48,4 +48,9 @@
#define TIF_RSEQ 11 // Run RSEQ fast path
#define _TIF_RSEQ BIT(TIF_RSEQ)
+#ifdef HAVE_TIF_SINGLESTEP
+#define TIF_SINGLESTEP 12 /* reenable singlestep on user return*/
+#define _TIF_SINGLESTEP BIT(TIF_SINGLESTEP)
+#endif
+
#endif /* _ASM_GENERIC_THREAD_INFO_TIF_H_ */
--
2.34.1
* [PATCH v13 RESEND 13/14] arm64: Use generic TIF bits for common thread flags
2026-03-17 8:20 [PATCH v13 RESEND 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
` (11 preceding siblings ...)
2026-03-17 8:20 ` [PATCH v13 RESEND 12/14] asm-generic: Move TIF_SINGLESTEP to generic TIF bits Jinjie Ruan
@ 2026-03-17 8:20 ` Jinjie Ruan
2026-03-19 14:24 ` Linus Walleij
2026-03-19 17:07 ` Kevin Brodsky
2026-03-17 8:20 ` [PATCH v13 RESEND 14/14] selftests: sud_test: Support aarch64 Jinjie Ruan
` (2 subsequent siblings)
15 siblings, 2 replies; 38+ messages in thread
From: Jinjie Ruan @ 2026-03-17 8:20 UTC (permalink / raw)
To: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, linusw, broonie, pengcan, liqiang01,
ziyao, guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
Cc: ruanjinjie
Use the generic TIF bits defined in <asm-generic/thread_info_tif.h> for
standard thread flags (TIF_SIGPENDING, TIF_NEED_RESCHED, TIF_NOTIFY_RESUME,
TIF_RESTORE_SIGMASK, TIF_SINGLESTEP, etc.) instead of defining
them locally.
Arm64-specific bits (TIF_FOREIGN_FPSTATE, TIF_MTE_ASYNC_FAULT, TIF_SVE,
TIF_SSBD, etc.) are renumbered to start at bit 16 to avoid conflicts.
This enables RSEQ optimizations which require CONFIG_HAVE_GENERIC_TIF_BITS
combined with the generic entry infrastructure (already used by arm64).
Cc: Thomas Gleixner <tglx@kernel.org>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/thread_info.h | 62 ++++++++++++----------------
2 files changed, 28 insertions(+), 35 deletions(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 96fef01598be..33cf901fb1a0 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -224,6 +224,7 @@ config ARM64
select HAVE_SAMPLE_FTRACE_DIRECT_MULTI
select HAVE_BUILDTIME_MCOUNT_SORT
select HAVE_EFFICIENT_UNALIGNED_ACCESS
+ select HAVE_GENERIC_TIF_BITS
select HAVE_GUP_FAST
select HAVE_FTRACE_GRAPH_FUNC
select HAVE_FUNCTION_TRACER
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index f89a15dc6ad5..be1a0651cfe2 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -58,42 +58,34 @@ void arch_setup_new_exec(void);
#endif
-#define TIF_SIGPENDING 0 /* signal pending */
-#define TIF_NEED_RESCHED 1 /* rescheduling necessary */
-#define TIF_NEED_RESCHED_LAZY 2 /* Lazy rescheduling needed */
-#define TIF_NOTIFY_RESUME 3 /* callback before returning to user */
-#define TIF_FOREIGN_FPSTATE 4 /* CPU's FP state is not current's */
-#define TIF_UPROBE 5 /* uprobe breakpoint or singlestep */
-#define TIF_MTE_ASYNC_FAULT 6 /* MTE Asynchronous Tag Check Fault */
-#define TIF_NOTIFY_SIGNAL 7 /* signal notifications exist */
-#define TIF_PATCH_PENDING 13 /* pending live patching update */
-#define TIF_MEMDIE 18 /* is terminating due to OOM killer */
-#define TIF_FREEZE 19
-#define TIF_RESTORE_SIGMASK 20
-#define TIF_SINGLESTEP 21
-#define TIF_32BIT 22 /* 32bit process */
-#define TIF_SVE 23 /* Scalable Vector Extension in use */
-#define TIF_SVE_VL_INHERIT 24 /* Inherit SVE vl_onexec across exec */
-#define TIF_SSBD 25 /* Wants SSB mitigation */
-#define TIF_TAGGED_ADDR 26 /* Allow tagged user addresses */
-#define TIF_SME 27 /* SME in use */
-#define TIF_SME_VL_INHERIT 28 /* Inherit SME vl_onexec across exec */
-#define TIF_KERNEL_FPSTATE 29 /* Task is in a kernel mode FPSIMD section */
-#define TIF_TSC_SIGSEGV 30 /* SIGSEGV on counter-timer access */
-#define TIF_LAZY_MMU_PENDING 31 /* Ops pending for lazy mmu mode exit */
+/*
+ * Tell the generic TIF infrastructure which bits arm64 supports
+ */
+#define HAVE_TIF_NEED_RESCHED_LAZY
+#define HAVE_TIF_RESTORE_SIGMASK
+#define HAVE_TIF_SINGLESTEP
+
+#include <asm-generic/thread_info_tif.h>
+
+#define TIF_FOREIGN_FPSTATE 16 /* CPU's FP state is not current's */
+#define TIF_MTE_ASYNC_FAULT 17 /* MTE Asynchronous Tag Check Fault */
+#define TIF_FREEZE 18
+#define TIF_32BIT 19 /* 32bit process */
+#define TIF_SVE 20 /* Scalable Vector Extension in use */
+#define TIF_SVE_VL_INHERIT 21 /* Inherit SVE vl_onexec across exec */
+#define TIF_SSBD 22 /* Wants SSB mitigation */
+#define TIF_TAGGED_ADDR 23 /* Allow tagged user addresses */
+#define TIF_SME 24 /* SME in use */
+#define TIF_SME_VL_INHERIT 25 /* Inherit SME vl_onexec across exec */
+#define TIF_KERNEL_FPSTATE 26 /* Task is in a kernel mode FPSIMD section */
+#define TIF_TSC_SIGSEGV 27 /* SIGSEGV on counter-timer access */
+#define TIF_LAZY_MMU_PENDING 28 /* Ops pending for lazy mmu mode exit */
-#define _TIF_SIGPENDING (1 << TIF_SIGPENDING)
-#define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED)
-#define _TIF_NEED_RESCHED_LAZY (1 << TIF_NEED_RESCHED_LAZY)
-#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
-#define _TIF_FOREIGN_FPSTATE (1 << TIF_FOREIGN_FPSTATE)
-#define _TIF_PATCH_PENDING (1 << TIF_PATCH_PENDING)
-#define _TIF_UPROBE (1 << TIF_UPROBE)
-#define _TIF_32BIT (1 << TIF_32BIT)
-#define _TIF_SVE (1 << TIF_SVE)
-#define _TIF_MTE_ASYNC_FAULT (1 << TIF_MTE_ASYNC_FAULT)
-#define _TIF_NOTIFY_SIGNAL (1 << TIF_NOTIFY_SIGNAL)
-#define _TIF_TSC_SIGSEGV (1 << TIF_TSC_SIGSEGV)
+#define _TIF_FOREIGN_FPSTATE BIT(TIF_FOREIGN_FPSTATE)
+#define _TIF_32BIT BIT(TIF_32BIT)
+#define _TIF_SVE BIT(TIF_SVE)
+#define _TIF_MTE_ASYNC_FAULT BIT(TIF_MTE_ASYNC_FAULT)
+#define _TIF_TSC_SIGSEGV BIT(TIF_TSC_SIGSEGV)
#ifdef CONFIG_SHADOW_CALL_STACK
#define INIT_SCS \
--
2.34.1
* [PATCH v13 RESEND 14/14] selftests: sud_test: Support aarch64
2026-03-17 8:20 [PATCH v13 RESEND 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
` (12 preceding siblings ...)
2026-03-17 8:20 ` [PATCH v13 RESEND 13/14] arm64: Use generic TIF bits for common thread flags Jinjie Ruan
@ 2026-03-17 8:20 ` Jinjie Ruan
2026-03-19 14:26 ` Linus Walleij
2026-03-17 10:57 ` [PATCH v13 RESEND 00/14] arm64: entry: Convert to Generic Entry Yeoreum Yun
2026-03-19 14:35 ` Linus Walleij
15 siblings, 1 reply; 38+ messages in thread
From: Jinjie Ruan @ 2026-03-17 8:20 UTC (permalink / raw)
To: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, linusw, broonie, pengcan, liqiang01,
ziyao, guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
Cc: ruanjinjie
From: kemal <kmal@cock.li>
Add aarch64 support so that "Syscall User Dispatch" can be exercised by
the sud_test selftest.
Signed-off-by: kemal <kmal@cock.li>
---
tools/testing/selftests/syscall_user_dispatch/sud_benchmark.c | 2 +-
tools/testing/selftests/syscall_user_dispatch/sud_test.c | 4 ++++
2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/syscall_user_dispatch/sud_benchmark.c b/tools/testing/selftests/syscall_user_dispatch/sud_benchmark.c
index 073a03702ff5..6059abe75cb3 100644
--- a/tools/testing/selftests/syscall_user_dispatch/sud_benchmark.c
+++ b/tools/testing/selftests/syscall_user_dispatch/sud_benchmark.c
@@ -41,7 +41,7 @@
* out of the box, but don't enable them until they support syscall user
* dispatch.
*/
-#if defined(__x86_64__) || defined(__i386__)
+#if defined(__x86_64__) || defined(__i386__) || defined(__aarch64__)
#define TEST_BLOCKED_RETURN
#endif
diff --git a/tools/testing/selftests/syscall_user_dispatch/sud_test.c b/tools/testing/selftests/syscall_user_dispatch/sud_test.c
index b855c6000287..3ffea2f4a66d 100644
--- a/tools/testing/selftests/syscall_user_dispatch/sud_test.c
+++ b/tools/testing/selftests/syscall_user_dispatch/sud_test.c
@@ -192,6 +192,10 @@ static void handle_sigsys(int sig, siginfo_t *info, void *ucontext)
((ucontext_t *)ucontext)->uc_mcontext.__gregs[REG_A0] =
((ucontext_t *)ucontext)->uc_mcontext.__gregs[REG_A7];
#endif
+#ifdef __aarch64__
+ ((ucontext_t *)ucontext)->uc_mcontext.regs[0] = (unsigned int)
+ ((ucontext_t *)ucontext)->uc_mcontext.regs[8];
+#endif
}
int setup_sigsys_handler(void)
--
2.34.1
* Re: [PATCH v13 RESEND 00/14] arm64: entry: Convert to Generic Entry
2026-03-17 8:20 [PATCH v13 RESEND 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
` (13 preceding siblings ...)
2026-03-17 8:20 ` [PATCH v13 RESEND 14/14] selftests: sud_test: Support aarch64 Jinjie Ruan
@ 2026-03-17 10:57 ` Yeoreum Yun
2026-03-19 14:35 ` Linus Walleij
15 siblings, 0 replies; 38+ messages in thread
From: Yeoreum Yun @ 2026-03-17 10:57 UTC (permalink / raw)
To: Jinjie Ruan
Cc: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, linusw, broonie, pengcan, liqiang01,
ziyao, guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
This series looks good to me.
Reviewed-by: Yeoreum Yun <yeoreum.yun@arm.com>
> Currently, x86, RISC-V, and LoongArch use the Generic Entry framework,
> which makes maintainers' work easier and the code more elegant. arm64 has
> already successfully switched to the Generic IRQ Entry in commit
> b3cf07851b6c ("arm64: entry: Switch to generic IRQ entry"), so it is
> time to completely convert arm64 to Generic Entry.
>
> The goal is to bring arm64 in line with other architectures that already
> use the generic entry infrastructure, reducing duplicated code and
> making it easier to share future changes in entry/exit paths, such as
> "Syscall User Dispatch" and RSEQ optimizations.
>
> This patch set is rebased on v7.0-rc3. The performance
> benchmark results on qemu-kvm are below:
>
> perf bench syscall usec/op (-ve is improvement)
>
> | Syscall | Base | Generic Entry | change % |
> | ------- | ----------- | ------------- | -------- |
> | basic | 0.123997 | 0.120872 | -2.57 |
> | execve | 512.1173 | 504.9966 | -1.52 |
> | fork | 114.1144 | 113.2301 | -1.06 |
> | getpgid | 0.120182 | 0.121245 | +0.9 |
>
> perf bench syscall ops/sec (+ve is improvement)
>
> | Syscall | Base | Generic Entry| change % |
> | ------- | -------- | ------------ | -------- |
> | basic | 8064712 | 8273212 | +2.48 |
> | execve | 1952 | 1980 | +1.52 |
> | fork | 8763 | 8832 | +1.06 |
> | getpgid | 8320704 | 8247810 | -0.9 |
>
> Therefore, the syscall performance variation ranges from a 1% regression
> to a 2.5% improvement.
>
> It was tested ok with following test cases on QEMU virt platform:
> - Stress-ng CPU stress test.
> - Hackbench stress test.
> - "sud" selftest testcase.
> - get_set_sud, get_syscall_info, set_syscall_info, peeksiginfo
> in tools/testing/selftests/ptrace.
> - breakpoint_test_arm64 in selftests/breakpoints.
> - syscall-abi and ptrace in tools/testing/selftests/arm64/abi
> - fp-ptrace, sve-ptrace, za-ptrace in selftests/arm64/fp.
> - vdso_test_getrandom in tools/testing/selftests/vDSO
> - Strace tests.
> - slice_test for rseq optimizations.
>
> The test QEMU configuration is as follows:
>
> qemu-system-aarch64 \
> -M virt \
> -enable-kvm \
> -cpu host \
> -kernel Image \
> -smp 8 \
> -m 512m \
> -nographic \
> -no-reboot \
> -device virtio-rng-pci \
> -append "root=/dev/vda rw console=ttyAMA0 kgdboc=ttyAMA0,115200 \
> earlycon preempt=voluntary irqchip.gicv3_pseudo_nmi=1 audit=1" \
> -drive if=none,file=images/rootfs.ext4,format=raw,id=hd0 \
> -device virtio-blk-device,drive=hd0 \
>
> Changes in v13 resend:
> - Fix exit_to_user_mode_prepare_legacy() issues.
> - Also move TIF_SINGLESTEP to generic TIF infrastructure for loongarch.
> - Use generic TIF bits for arm64 and moving TIF_SINGLESTEP to
> generic TIF for related architectures separately.
> - Refactor syscall_trace_enter/exit() to accept flags and Use syscall_get_nr()
> helper separately.
> - Tested with slice_test for rseq optimizations.
> - Add acked-by.
> - Link to v13: https://lore.kernel.org/all/20260313094738.3985794-1-ruanjinjie@huawei.com/
>
> Changes in v13:
> - Rebased on v7.0-rc3, so drop the first applied arm64 patch.
> - Use generic TIF bits to enables RSEQ optimization.
> - Update most of the commit message to make it more clear.
> - Link to v12: https://lore.kernel.org/all/20260203133728.848283-1-ruanjinjie@huawei.com/
>
> Changes in v12:
> - Rebased on "sched/core", so remove the four generic entry patches.
> - Move "Expand secure_computing() in place" and
> "Use syscall_get_arguments() helper" patch forward, which will group all
> non-functional cleanups at the front.
> - Adjust the explanation for moving rseq_syscall() before
> audit_syscall_exit().
> - Link to v11: https://lore.kernel.org/all/20260128031934.3906955-1-ruanjinjie@huawei.com/
>
> Changes in v11:
> - Remove unused syscall in syscall_trace_enter().
> - Update and provide a detailed explanation of the differences after
> moving rseq_syscall() before audit_syscall_exit().
> - Rebased on arm64 (for-next/entry), and remove the first applied 3 patches.
> - syscall_exit_to_user_mode_work() for arch reuse instead of adding
> new syscall_exit_to_user_mode_work_prepare() helper.
> - Link to v10: https://lore.kernel.org/all/20251222114737.1334364-1-ruanjinjie@huawei.com/
>
> Changes in v10:
> - Rebased on v6.19-rc1, rename syscall_exit_to_user_mode_prepare() to
> syscall_exit_to_user_mode_work_prepare() to avoid conflict.
> - Also inline syscall_trace_enter().
> - Support aarch64 for sud_benchmark.
> - Update and correct the commit message.
> - Add Reviewed-by.
> - Link to v9: https://lore.kernel.org/all/20251204082123.2792067-1-ruanjinjie@huawei.com/
>
> Changes in v9:
> - Move "Return early for ptrace_report_syscall_entry() error" patch ahead
> to make it not introduce a regression.
> - Not check _TIF_SECCOMP/SYSCALL_EMU for syscall_exit_work() in
> a separate patch.
> - Do not report_syscall_exit() for PTRACE_SYSEMU_SINGLESTEP in a separate
> patch.
> - Add two performance patch to improve the arm64 performance.
> - Add Reviewed-by.
> - Link to v8: https://lore.kernel.org/all/20251126071446.3234218-1-ruanjinjie@huawei.com/
>
> Changes in v8:
> - Rename "report_syscall_enter()" to "report_syscall_entry()".
> - Add ptrace_save_reg() to avoid duplication.
> - Remove unused _TIF_WORK_MASK in a standalone patch.
> - Align syscall_trace_enter() return value with the generic version.
> - Use "scno" instead of regs->syscallno in el0_svc_common().
> - Move rseq_syscall() ahead in a standalone patch to clarify it clearly.
> - Rename "syscall_trace_exit()" to "syscall_exit_work()".
> - Keep the goto in el0_svc_common().
> - Pass no argument to __secure_computing() and check -1, not -1L.
> - Remove "Add has_syscall_work() helper" patch.
> - Move "Add syscall_exit_to_user_mode_prepare() helper" patch later.
> - Add missing header for asm/entry-common.h.
> - Update the implementation of arch_syscall_is_vdso_sigreturn().
> - Add "ARCH_SYSCALL_WORK_EXIT" to be defined as "SECCOMP | SYSCALL_EMU"
> to keep the behaviour unchanged.
> - Add more test cases.
> - Add Reviewed-by.
> - Update the commit message.
> - Link to v7: https://lore.kernel.org/all/20251117133048.53182-1-ruanjinjie@huawei.com/
>
> Jinjie Ruan (13):
> arm64/ptrace: Refactor syscall_trace_enter/exit() to accept flags
> parameter
> arm64/ptrace: Use syscall_get_nr() helper for syscall_trace_enter()
> arm64/ptrace: Expand secure_computing() in place
> arm64/ptrace: Use syscall_get_arguments() helper for audit
> arm64: ptrace: Move rseq_syscall() before audit_syscall_exit()
> arm64: syscall: Introduce syscall_exit_to_user_mode_work()
> arm64/ptrace: Define and use _TIF_SYSCALL_EXIT_WORK
> arm64/ptrace: Skip syscall exit reporting for PTRACE_SYSEMU_SINGLESTEP
> arm64: entry: Convert to generic entry
> arm64: Inline el0_svc_common()
> s390: Rename TIF_SINGLE_STEP to TIF_SINGLESTEP
> asm-generic: Move TIF_SINGLESTEP to generic TIF bits
> arm64: Use generic TIF bits for common thread flags
>
> kemal (1):
> selftests: sud_test: Support aarch64
>
> arch/arm64/Kconfig | 3 +-
> arch/arm64/include/asm/entry-common.h | 76 ++++++++++++
> arch/arm64/include/asm/syscall.h | 19 ++-
> arch/arm64/include/asm/thread_info.h | 76 ++++--------
> arch/arm64/kernel/debug-monitors.c | 7 ++
> arch/arm64/kernel/entry-common.c | 25 +++-
> arch/arm64/kernel/ptrace.c | 115 ------------------
> arch/arm64/kernel/signal.c | 2 +-
> arch/arm64/kernel/syscall.c | 29 ++---
> arch/loongarch/include/asm/thread_info.h | 11 +-
> arch/s390/include/asm/thread_info.h | 7 +-
> arch/s390/kernel/process.c | 2 +-
> arch/s390/kernel/ptrace.c | 20 +--
> arch/s390/kernel/signal.c | 6 +-
> arch/x86/include/asm/thread_info.h | 6 +-
> include/asm-generic/thread_info_tif.h | 5 +
> include/linux/irq-entry-common.h | 8 --
> include/linux/rseq_entry.h | 18 ---
> .../syscall_user_dispatch/sud_benchmark.c | 2 +-
> .../syscall_user_dispatch/sud_test.c | 4 +
> 20 files changed, 191 insertions(+), 250 deletions(-)
>
> --
> 2.34.1
>
>
--
Sincerely,
Yeoreum Yun
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [PATCH v13 RESEND 09/14] arm64: entry: Convert to generic entry
2026-03-17 8:20 ` [PATCH v13 RESEND 09/14] arm64: entry: Convert to generic entry Jinjie Ruan
@ 2026-03-17 10:58 ` Peter Zijlstra
2026-03-19 14:21 ` Linus Walleij
1 sibling, 0 replies; 38+ messages in thread
From: Peter Zijlstra @ 2026-03-17 10:58 UTC (permalink / raw)
To: Jinjie Ruan
Cc: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, luto, shuah, kees, wad, kevin.brodsky, deller, macro, akpm,
ldv, anshuman.khandual, ryan.roberts, mark.rutland, thuth, song,
ada.coupriediaz, linusw, broonie, pengcan, liqiang01, ziyao,
guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
On Tue, Mar 17, 2026 at 04:20:15PM +0800, Jinjie Ruan wrote:
> include/linux/irq-entry-common.h | 8 --
> include/linux/rseq_entry.h | 18 ---
> 11 files changed, 127 insertions(+), 209 deletions(-)
Excellent,
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> diff --git a/include/linux/irq-entry-common.h b/include/linux/irq-entry-common.h
> index d26d1b1bcbfb..6519b4a30dc1 100644
> --- a/include/linux/irq-entry-common.h
> +++ b/include/linux/irq-entry-common.h
> @@ -236,14 +236,6 @@ static __always_inline void __exit_to_user_mode_validate(void)
> lockdep_sys_exit();
> }
>
> -/* Temporary workaround to keep ARM64 alive */
> -static __always_inline void exit_to_user_mode_prepare_legacy(struct pt_regs *regs)
> -{
> - __exit_to_user_mode_prepare(regs);
> - rseq_exit_to_user_mode_legacy();
> - __exit_to_user_mode_validate();
> -}
> -
> /**
> * syscall_exit_to_user_mode_prepare - call exit_to_user_mode_loop() if required
> * @regs: Pointer to pt_regs on entry stack
> diff --git a/include/linux/rseq_entry.h b/include/linux/rseq_entry.h
> index c6831c93cd6e..e9c4108ac514 100644
> --- a/include/linux/rseq_entry.h
> +++ b/include/linux/rseq_entry.h
> @@ -743,24 +743,6 @@ static __always_inline void rseq_irqentry_exit_to_user_mode(void)
> ev->events = 0;
> }
>
> -/* Required to keep ARM64 working */
> -static __always_inline void rseq_exit_to_user_mode_legacy(void)
> -{
> -	struct rseq_event *ev = &current->rseq.event;
> -
> - rseq_stat_inc(rseq_stats.exit);
> -
> - if (static_branch_unlikely(&rseq_debug_enabled))
> - WARN_ON_ONCE(ev->sched_switch);
> -
> - /*
> - * Ensure that event (especially user_irq) is cleared when the
> - * interrupt did not result in a schedule and therefore the
> - * rseq processing did not clear it.
> - */
> - ev->events = 0;
> -}
> -
> void __rseq_debug_syscall_return(struct pt_regs *regs);
>
> static __always_inline void rseq_debug_syscall_return(struct pt_regs *regs)
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [PATCH v13 RESEND 01/14] arm64/ptrace: Refactor syscall_trace_enter/exit() to accept flags parameter
2026-03-17 8:20 ` [PATCH v13 RESEND 01/14] arm64/ptrace: Refactor syscall_trace_enter/exit() to accept flags parameter Jinjie Ruan
@ 2026-03-19 13:47 ` Linus Walleij
0 siblings, 0 replies; 38+ messages in thread
From: Linus Walleij @ 2026-03-19 13:47 UTC (permalink / raw)
To: Jinjie Ruan
Cc: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, broonie, pengcan, liqiang01, ziyao,
guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
On Tue, Mar 17, 2026 at 9:20 AM Jinjie Ruan <ruanjinjie@huawei.com> wrote:
> Refactor syscall_trace_enter() and syscall_trace_exit() to move thread
> flag reading to the caller. This aligns arm64's syscall trace enter/exit
> function signature with generic entry framework.
>
> [Changes]
> 1. Function signature changes:
> - syscall_trace_enter(regs) → syscall_trace_enter(regs, flags)
> - syscall_trace_exit(regs) → syscall_trace_exit(regs, flags)
>
> 2. Move flags reading to caller:
> - Previously: read_thread_flags() called inside each function.
> - Now: caller (like el0_svc_common) passes flags as parameter.
>
> 3. Update syscall.c:
> - el0_svc_common() now passes flags to tracing functions and
> re-fetches flags before exit to handle potential TIF updates.
>
> [Why this matters]
> - Aligns arm64 with the generic entry interface.
> - Makes future migration to the generic entry framework easier.
>
> No functional changes intended.
>
> Cc: Will Deacon <will@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Linus Walleij <linusw@kernel.org>
Yours,
Linus Walleij
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [PATCH v13 RESEND 02/14] arm64/ptrace: Use syscall_get_nr() helper for syscall_trace_enter()
2026-03-17 8:20 ` [PATCH v13 RESEND 02/14] arm64/ptrace: Use syscall_get_nr() helper for syscall_trace_enter() Jinjie Ruan
@ 2026-03-19 13:50 ` Linus Walleij
0 siblings, 0 replies; 38+ messages in thread
From: Linus Walleij @ 2026-03-19 13:50 UTC (permalink / raw)
To: Jinjie Ruan
Cc: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, broonie, pengcan, liqiang01, ziyao,
guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
On Tue, Mar 17, 2026 at 9:20 AM Jinjie Ruan <ruanjinjie@huawei.com> wrote:
> Use syscall_get_nr() to get syscall number for syscall_trace_enter().
> This aligns arm64's internal tracing logic with the generic
> entry framework.
>
> [Changes]
> 1. Use syscall_get_nr() helper:
> - Replace direct regs->syscallno access with
> syscall_get_nr(current, regs).
> - This helper is functionally equivalent to direct access on arm64.
>
> 2. Re-read syscall number after tracepoint:
> - Re-fetch the syscall number after trace_sys_enter() as it may have
> been modified by BPF or ftrace probes, matching generic entry behavior.
>
> [Why this matters]
> - Aligns arm64 with the generic entry interface.
> - Makes future migration to generic entry framework.
> - Properly handles syscall number modifications by tracers.
> - Uses standard architecture-independent helpers.
>
> No functional changes intended.
>
> Cc: Will Deacon <will@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Linus Walleij <linusw@kernel.org>
Yours,
Linus Walleij
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [PATCH v13 RESEND 03/14] arm64/ptrace: Expand secure_computing() in place
2026-03-17 8:20 ` [PATCH v13 RESEND 03/14] arm64/ptrace: Expand secure_computing() in place Jinjie Ruan
@ 2026-03-19 13:58 ` Linus Walleij
0 siblings, 0 replies; 38+ messages in thread
From: Linus Walleij @ 2026-03-19 13:58 UTC (permalink / raw)
To: Jinjie Ruan
Cc: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, broonie, pengcan, liqiang01, ziyao,
guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
On Tue, Mar 17, 2026 at 9:20 AM Jinjie Ruan <ruanjinjie@huawei.com> wrote:
> Refactor syscall_trace_enter() by open-coding the seccomp check
> to align with the generic entry framework.
>
> [Background]
> The generic entry implementation expands the seccomp check in-place
> instead of using the secure_computing() wrapper. It directly tests
> SYSCALL_WORK_SECCOMP and calls the underlying __secure_computing()
> function to handle syscall filtering.
>
> [Changes]
> 1. Open-code seccomp check:
> - Instead of calling the secure_computing() wrapper, explicitly check
> the 'flags' parameter for _TIF_SECCOMP.
> - Call __secure_computing() directly if the flag is set.
>
> 2. Refine return value handling:
> - Use 'return ret ? : syscall' to propagate the return value.
> - Ensures any unexpected non-zero return from __secure_computing()
> is properly propagated.
> - This matches the logic in the generic entry code.
>
> [Why this matters]
> - Aligns the arm64 syscall path with the generic entry implementation,
> simplifying future migration to the generic entry framework.
> - No functional changes are intended; seccomp behavior remains identical.
>
> Cc: Will Deacon <will@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Linus Walleij <linusw@kernel.org>
Yours,
Linus Walleij
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [PATCH v13 RESEND 04/14] arm64/ptrace: Use syscall_get_arguments() helper for audit
2026-03-17 8:20 ` [PATCH v13 RESEND 04/14] arm64/ptrace: Use syscall_get_arguments() helper for audit Jinjie Ruan
@ 2026-03-19 14:14 ` Linus Walleij
0 siblings, 0 replies; 38+ messages in thread
From: Linus Walleij @ 2026-03-19 14:14 UTC (permalink / raw)
To: Jinjie Ruan
Cc: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, broonie, pengcan, liqiang01, ziyao,
guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
On Tue, Mar 17, 2026 at 9:20 AM Jinjie Ruan <ruanjinjie@huawei.com> wrote:
> Extract syscall_enter_audit() helper and use syscall_get_arguments()
> to get syscall arguments, matching the generic entry implementation.
>
> The new code:
> - Checks audit_context() first to avoid unnecessary memcpy when audit
> is not active.
> - Uses syscall_get_arguments() helper instead of directly accessing
> regs fields.
> - Is now exactly equivalent to generic entry's syscall_enter_audit().
>
> No functional changes.
>
> Cc: Will Deacon <will@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Linus Walleij <linusw@kernel.org>
Yours,
Linus Walleij
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [PATCH v13 RESEND 05/14] arm64: ptrace: Move rseq_syscall() before audit_syscall_exit()
2026-03-17 8:20 ` [PATCH v13 RESEND 05/14] arm64: ptrace: Move rseq_syscall() before audit_syscall_exit() Jinjie Ruan
@ 2026-03-19 14:16 ` Linus Walleij
0 siblings, 0 replies; 38+ messages in thread
From: Linus Walleij @ 2026-03-19 14:16 UTC (permalink / raw)
To: Jinjie Ruan
Cc: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, broonie, pengcan, liqiang01, ziyao,
guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
On Tue, Mar 17, 2026 at 9:20 AM Jinjie Ruan <ruanjinjie@huawei.com> wrote:
> Move the rseq_syscall() check earlier in the syscall exit path to ensure
> it operates on the original instruction pointer (regs->pc) before any
> potential modification by a tracer.
>
> [Background]
> When CONFIG_DEBUG_RSEQ is enabled, rseq_syscall() verifies that a system
> call was not executed within an rseq critical section by examining
> regs->pc. If a violation is detected, it triggers a SIGSEGV.
>
> [Problem]
> Currently, arm64 invokes rseq_syscall() after report_syscall_exit().
> However, during report_syscall_exit(), a ptrace tracer can modify the
> task's instruction pointer via PTRACE_SETREGS. This leads to an
> inconsistency where rseq may analyze a post-trace PC instead of the
> actual PC at the time of syscall exit.
>
> [Why this matters]
> The rseq check is intended to validate the execution context of the
> syscall itself. Analyzing a tracer-modified PC can lead to incorrect
> detection or missed violations. Moving the check earlier ensures rseq
> sees the authentic state of the task.
>
> [Alignment]
> This change aligns arm64 with:
> - Generic entry, which calls rseq_syscall() first.
> - arm32 implementation, which also performs the check before audit.
>
> [Impact]
> There is no functional change to signal delivery; SIGSEGV will still be
> processed in arm64_exit_to_user_mode() at the end of the exit path.
>
> Cc: Thomas Gleixner <tglx@kernel.org>
> Cc: Will Deacon <will@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Linus Walleij <linusw@kernel.org>
Yours,
Linus Walleij
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [PATCH v13 RESEND 06/14] arm64: syscall: Introduce syscall_exit_to_user_mode_work()
2026-03-17 8:20 ` [PATCH v13 RESEND 06/14] arm64: syscall: Introduce syscall_exit_to_user_mode_work() Jinjie Ruan
@ 2026-03-19 14:17 ` Linus Walleij
0 siblings, 0 replies; 38+ messages in thread
From: Linus Walleij @ 2026-03-19 14:17 UTC (permalink / raw)
To: Jinjie Ruan
Cc: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, broonie, pengcan, liqiang01, ziyao,
guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
On Tue, Mar 17, 2026 at 9:20 AM Jinjie Ruan <ruanjinjie@huawei.com> wrote:
> Refactor the system call exit path to align with the generic entry
> framework. This consolidates thread flag checking, rseq handling, and
> syscall tracing into a structure that mirrors the generic
> syscall_exit_to_user_mode_work() implementation.
>
> [Rationale]
> The generic entry code employs a hierarchical approach for
> syscall exit work:
>
> 1. syscall_exit_to_user_mode_work(): The entry point that handles
> rseq and checks if further exit work (tracing/audit) is required.
>
> 2. syscall_exit_work(): Performs the actual tracing, auditing, and
> ptrace reporting.
>
> [Changes]
> - Rename and Encapsulate: Rename syscall_trace_exit() to
> syscall_exit_work() and make it static, as it is now an internal
> helper for the exit path.
>
> - New Entry Point: Implement syscall_exit_to_user_mode_work() to
> replace the manual flag-reading logic in el0_svc_common(). This
> function now encapsulates the rseq_syscall() call and the
> conditional execution of syscall_exit_work().
>
> - Simplify el0_svc_common(): Remove the complex conditional checks
> for tracing and CONFIG_DEBUG_RSEQ at the end of the syscall path,
> delegating this responsibility to the new helper.
>
> - Helper Migration: Move has_syscall_work() to asm/syscall.h
> to allow its reuse across ptrace.c and syscall.c.
>
> - Clean up RSEQ: Remove the explicit IS_ENABLED(CONFIG_DEBUG_RSEQ)
> check in the caller, as rseq_syscall() is already a no-op when the
> config is disabled.
>
> Cc: Will Deacon <will@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Linus Walleij <linusw@kernel.org>
Yours,
Linus Walleij
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [PATCH v13 RESEND 07/14] arm64/ptrace: Define and use _TIF_SYSCALL_EXIT_WORK
2026-03-17 8:20 ` [PATCH v13 RESEND 07/14] arm64/ptrace: Define and use _TIF_SYSCALL_EXIT_WORK Jinjie Ruan
@ 2026-03-19 14:18 ` Linus Walleij
0 siblings, 0 replies; 38+ messages in thread
From: Linus Walleij @ 2026-03-19 14:18 UTC (permalink / raw)
To: Jinjie Ruan
Cc: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, broonie, pengcan, liqiang01, ziyao,
guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
On Tue, Mar 17, 2026 at 9:20 AM Jinjie Ruan <ruanjinjie@huawei.com> wrote:
> Introduce _TIF_SYSCALL_EXIT_WORK to filter out entry-only flags
> during the syscall exit path. This aligns arm64 with the generic
> entry framework's SYSCALL_WORK_EXIT semantics.
>
> [Rationale]
> The current syscall exit path uses _TIF_SYSCALL_WORK to decide whether
> to invoke syscall_exit_work(). However, _TIF_SYSCALL_WORK includes
> flags that are only relevant during syscall entry:
>
> 1. _TIF_SECCOMP: Seccomp filtering (__secure_computing) only runs
> on entry. There is no seccomp callback for syscall exit.
>
> 2. _TIF_SYSCALL_EMU: In PTRACE_SYSEMU mode, the syscall is
> intercepted and skipped on entry. Since the syscall is never
> executed, reporting a syscall exit stop is unnecessary.
>
> [Changes]
> - Define _TIF_SYSCALL_EXIT_WORK: A new mask containing only flags
> requiring exit processing: _TIF_SYSCALL_TRACE, _TIF_SYSCALL_AUDIT,
> and _TIF_SYSCALL_TRACEPOINT.
>
> - Update exit path: Use _TIF_SYSCALL_EXIT_WORK in
> syscall_exit_to_user_mode_work() to avoid redundant calls to
> audit and ptrace reporting when only entry-flags are set.
>
> - Cleanup: Remove the has_syscall_work() helper as it is no longer
> needed. Direct flag comparison is now used to distinguish between
> entry and exit work requirements.
>
> [Impact]
> audit_syscall_exit() and report_syscall_exit() will no longer be
> triggered for seccomp-only or emu-only syscalls. This matches the
> generic entry behavior and improves efficiency by skipping unnecessary
> exit processing.
>
> Cc: Will Deacon <will@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Linus Walleij <linusw@kernel.org>
Yours,
Linus Walleij
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [PATCH v13 RESEND 08/14] arm64/ptrace: Skip syscall exit reporting for PTRACE_SYSEMU_SINGLESTEP
2026-03-17 8:20 ` [PATCH v13 RESEND 08/14] arm64/ptrace: Skip syscall exit reporting for PTRACE_SYSEMU_SINGLESTEP Jinjie Ruan
@ 2026-03-19 14:20 ` Linus Walleij
0 siblings, 0 replies; 38+ messages in thread
From: Linus Walleij @ 2026-03-19 14:20 UTC (permalink / raw)
To: Jinjie Ruan
Cc: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, broonie, pengcan, liqiang01, ziyao,
guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
On Tue, Mar 17, 2026 at 9:20 AM Jinjie Ruan <ruanjinjie@huawei.com> wrote:
> Align the syscall exit reporting logic with the generic entry
> framework by skipping the exit stop when PTRACE_SYSEMU_SINGLESTEP is
> in effect.
>
> [Rationale]
> When a tracer uses PTRACE_SYSEMU_SINGLESTEP, both _TIF_SYSCALL_EMU
> and _TIF_SINGLESTEP flags are set. Currently, arm64 reports a syscall
> exit stop whenever _TIF_SINGLESTEP is set, regardless of the
> emulation state.
>
> However, as per the generic entry implementation (see
> include/linux/entry-common.h):
> "If SYSCALL_EMU is set, then the only reason to report is when SINGLESTEP
> is set (i.e. PTRACE_SYSEMU_SINGLESTEP). This syscall instruction has been
> already reported in syscall_trace_enter()."
>
> Since PTRACE_SYSEMU intercepts and skips the actual syscall
> execution, reporting a subsequent exit stop is redundant and
> inconsistent with the expected behavior of emulated system calls.
>
> [Changes]
> - Introduce report_single_step(): Add a helper to encapsulate the
> logic for deciding whether to report a single-step stop at syscall
> exit. It returns false if _TIF_SYSCALL_EMU is set, ensuring the
> emulated syscall does not trigger a duplicate report.
>
> - Update syscall_exit_work(): Use the new helper to determine
> the stepping state instead of directly checking _TIF_SINGLESTEP.
>
> [Impact]
> - PTRACE_SINGLESTEP: Continues to report exit stops for actual
> instructions.
>
> - PTRACE_SYSEMU: Continues to skip exit stops.
>
> - PTRACE_SYSEMU_SINGLESTEP: Now correctly skips the redundant exit
> stop, aligning arm64 with the generic entry infrastructure.
>
> Cc: Will Deacon <will@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
These small semantic glitches underscore the need to move
to generic entry.
Reviewed-by: Linus Walleij <linusw@kernel.org>
Yours,
Linus Walleij
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [PATCH v13 RESEND 09/14] arm64: entry: Convert to generic entry
2026-03-17 8:20 ` [PATCH v13 RESEND 09/14] arm64: entry: Convert to generic entry Jinjie Ruan
2026-03-17 10:58 ` Peter Zijlstra
@ 2026-03-19 14:21 ` Linus Walleij
1 sibling, 0 replies; 38+ messages in thread
From: Linus Walleij @ 2026-03-19 14:21 UTC (permalink / raw)
To: Jinjie Ruan
Cc: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, broonie, pengcan, liqiang01, ziyao,
guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
On Tue, Mar 17, 2026 at 9:20 AM Jinjie Ruan <ruanjinjie@huawei.com> wrote:
> Implement the generic entry framework for arm64 to handle system call
> entry and exit. This follows the migration of x86, RISC-V, and LoongArch,
> consolidating architecture-specific syscall tracing and auditing into
> the common kernel entry infrastructure.
>
> [Background]
> Arm64 has already adopted generic IRQ entry. Completing the conversion
> to the generic syscall entry framework reduces architectural divergence,
> simplifies maintenance, and allows arm64 to automatically benefit from
> improvements in the common entry code.
>
> [Changes]
>
> 1. Kconfig and Infrastructure:
> - Select GENERIC_ENTRY and remove GENERIC_IRQ_ENTRY (now implied).
>
> - Migrate struct thread_info to use the syscall_work field instead
> of TIF flags for syscall-related tasks.
>
> 2. Thread Info and Flags:
> - Remove definitions for TIF_SYSCALL_TRACE, TIF_SYSCALL_AUDIT,
> TIF_SYSCALL_TRACEPOINT, TIF_SECCOMP, and TIF_SYSCALL_EMU.
>
> - Replace _TIF_SYSCALL_WORK and _TIF_SYSCALL_EXIT_WORK with the
> generic SYSCALL_WORK bitmask.
>
> - Map single-step state to SYSCALL_EXIT_TRAP in debug-monitors.c.
>
> 3. Architecture-Specific Hooks (asm/entry-common.h):
> - Implement arch_ptrace_report_syscall_entry() and _exit() by
> porting the existing arm64 logic to the generic interface.
>
> - Add arch_syscall_is_vdso_sigreturn() to asm/syscall.h to
> support Syscall User Dispatch (SUD).
>
> 4. Differentiate between syscall and interrupt entry/exit paths to handle
> RSEQ slice extensions correctly.
> - For irq/exception entry/exit: use irqentry_enter_from_user_mode() and
> irqentry_exit_to_user_mode_prepare().
> - For syscall entry/exit: use enter_from_user_mode() and
> syscall_exit_to_user_mode_prepare().
> - Remove exit_to_user_mode_prepare_legacy() which is no longer necessary.
>
> 5. Replace rseq_syscall() with the static-key version,
> rseq_debug_syscall_return().
>
> 6. Cleanup and Refactoring:
> - Remove redundant arm64-specific syscall tracing functions from
> ptrace.c, including syscall_trace_enter(), syscall_exit_work(),
> and related audit/step helpers.
>
> - Update el0_svc_common() in syscall.c to use the generic
> syscall_work checks and entry/exit call sites.
>
> [Why this matters]
> - Unified Interface: Aligns arm64 with the modern kernel entry standard.
>
> - Improved Maintainability: Bug fixes in kernel/entry/common.c now
> apply to arm64 automatically.
>
> - Feature Readiness: Simplifies the implementation of future
> cross-architecture syscall features.
>
> [Compatibility]
> This conversion maintains full ABI compatibility with existing
> userspace. The ptrace register-saving behavior, seccomp filtering, and
> syscall tracing semantics remain identical to the previous implementation.
>
> Cc: Will Deacon <will@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Thomas Gleixner <tglx@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
> Suggested-by: Kevin Brodsky <kevin.brodsky@arm.com>
> Suggested-by: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
This looks really neat.
Reviewed-by: Linus Walleij <linusw@kernel.org>
Yours,
Linus Walleij
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [PATCH v13 RESEND 10/14] arm64: Inline el0_svc_common()
2026-03-17 8:20 ` [PATCH v13 RESEND 10/14] arm64: Inline el0_svc_common() Jinjie Ruan
@ 2026-03-19 14:22 ` Linus Walleij
0 siblings, 0 replies; 38+ messages in thread
From: Linus Walleij @ 2026-03-19 14:22 UTC (permalink / raw)
To: Jinjie Ruan
Cc: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, broonie, pengcan, liqiang01, ziyao,
guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
On Tue, Mar 17, 2026 at 9:20 AM Jinjie Ruan <ruanjinjie@huawei.com> wrote:
> After converting arm64 to Generic Entry framework, the compiler no longer
> inlines el0_svc_common() into its caller do_el0_svc(). This introduces
> a small but measurable overhead in the critical system call path.
>
> Manually forcing el0_svc_common() to be inlined restores the
> performance. Benchmarking with perf bench syscall basic on a
> Kunpeng 920 platform (based on v6.19-rc1) shows a ~1% performance
> uplift.
>
> Inlining this function reduces function prologue/epilogue overhead
> and allows for better compiler optimization in the hot system call
> dispatch path.
>
> | Metric | W/O this patch | With this patch | Change |
> | ---------- | -------------- | --------------- | --------- |
> | Total time | 2.195 [sec] | 2.171 [sec] | ↓1.1% |
> | usecs/op | 0.219575 | 0.217192 | ↓1.1% |
> | ops/sec | 4,554,260 | 4,604,225 | ↑1.1% |
>
> Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Linus Walleij <linusw@kernel.org>
Yours,
Linus Walleij
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [PATCH v13 RESEND 11/14] s390: Rename TIF_SINGLE_STEP to TIF_SINGLESTEP
2026-03-17 8:20 ` [PATCH v13 RESEND 11/14] s390: Rename TIF_SINGLE_STEP to TIF_SINGLESTEP Jinjie Ruan
@ 2026-03-19 14:23 ` Linus Walleij
2026-03-19 17:05 ` Kevin Brodsky
1 sibling, 0 replies; 38+ messages in thread
From: Linus Walleij @ 2026-03-19 14:23 UTC (permalink / raw)
To: Jinjie Ruan
Cc: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, broonie, pengcan, liqiang01, ziyao,
guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
On Tue, Mar 17, 2026 at 9:20 AM Jinjie Ruan <ruanjinjie@huawei.com> wrote:
> Rename TIF_SINGLE_STEP to TIF_SINGLESTEP to align with the naming
> convention used by arm64, x86, and other architectures.
>
> By aligning the name, TIF_SINGLESTEP can be consolidated into the generic
> TIF bits definitions, reducing architectural divergence and simplifying
> cross-architecture entry/exit logic.
>
> No functional changes intended.
>
> Acked-by: Heiko Carstens <hca@linux.ibm.com>
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Linus Walleij <linusw@kernel.org>
Yours,
Linus Walleij
* Re: [PATCH v13 RESEND 12/14] asm-generic: Move TIF_SINGLESTEP to generic TIF bits
2026-03-17 8:20 ` [PATCH v13 RESEND 12/14] asm-generic: Move TIF_SINGLESTEP to generic TIF bits Jinjie Ruan
@ 2026-03-19 14:24 ` Linus Walleij
2026-03-19 17:05 ` Kevin Brodsky
1 sibling, 0 replies; 38+ messages in thread
From: Linus Walleij @ 2026-03-19 14:24 UTC (permalink / raw)
To: Jinjie Ruan
Cc: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, broonie, pengcan, liqiang01, ziyao,
guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
On Tue, Mar 17, 2026 at 9:20 AM Jinjie Ruan <ruanjinjie@huawei.com> wrote:
> Currently, x86, ARM64, s390, and LoongArch all define and use
> TIF_SINGLESTEP to track single-stepping state.
>
> Since this flag is shared across multiple major architectures and serves
> a common purpose in the generic entry/exit paths, move TIF_SINGLESTEP
> into the generic Thread Information Flags (TIF) infrastructure.
>
> This consolidation reduces architecture-specific boilerplate code and
> ensures consistency for generic features that rely on single-step
> state tracking.
>
> Cc: Thomas Gleixner <tglx@kernel.org>
> Acked-by: Heiko Carstens <hca@linux.ibm.com> # s390
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
This is really neat, thanks for making the extra effort!
Reviewed-by: Linus Walleij <linusw@kernel.org>
Yours,
Linus Walleij
* Re: [PATCH v13 RESEND 13/14] arm64: Use generic TIF bits for common thread flags
2026-03-17 8:20 ` [PATCH v13 RESEND 13/14] arm64: Use generic TIF bits for common thread flags Jinjie Ruan
@ 2026-03-19 14:24 ` Linus Walleij
2026-03-19 17:07 ` Kevin Brodsky
1 sibling, 0 replies; 38+ messages in thread
From: Linus Walleij @ 2026-03-19 14:24 UTC (permalink / raw)
To: Jinjie Ruan
Cc: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, broonie, pengcan, liqiang01, ziyao,
guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
On Tue, Mar 17, 2026 at 9:20 AM Jinjie Ruan <ruanjinjie@huawei.com> wrote:
> Use the generic TIF bits defined in <asm-generic/thread_info_tif.h> for
> standard thread flags (TIF_SIGPENDING, TIF_NEED_RESCHED, TIF_NOTIFY_RESUME,
> TIF_RESTORE_SIGMASK, TIF_SINGLESTEP, etc.) instead of defining
> them locally.
>
> Arm64-specific bits (TIF_FOREIGN_FPSTATE, TIF_MTE_ASYNC_FAULT, TIF_SVE,
> TIF_SSBD, etc.) are renumbered to start at bit 16 to avoid conflicts.
>
> This enables RSEQ optimizations which require CONFIG_HAVE_GENERIC_TIF_BITS
> combined with the generic entry infrastructure (already used by arm64).
>
> Cc: Thomas Gleixner <tglx@kernel.org>
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Linus Walleij <linusw@kernel.org>
Yours,
Linus Walleij
* Re: [PATCH v13 RESEND 14/14] selftests: sud_test: Support aarch64
2026-03-17 8:20 ` [PATCH v13 RESEND 14/14] selftests: sud_test: Support aarch64 Jinjie Ruan
@ 2026-03-19 14:26 ` Linus Walleij
2026-03-20 9:23 ` Jinjie Ruan
0 siblings, 1 reply; 38+ messages in thread
From: Linus Walleij @ 2026-03-19 14:26 UTC (permalink / raw)
To: Jinjie Ruan
Cc: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, broonie, pengcan, liqiang01, ziyao,
guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
On Tue, Mar 17, 2026 at 9:20 AM Jinjie Ruan <ruanjinjie@huawei.com> wrote:
> From: kemal <kmal@cock.li>
>
> Add aarch64 support to the sud_test selftest so that "Syscall User
> Dispatch" can be exercised on that architecture.
>
> Signed-off-by: kemal <kmal@cock.li>
You need to sign this off since you are on the delivery path.
Yours,
Linus Walleij
* Re: [PATCH v13 RESEND 00/14] arm64: entry: Convert to Generic Entry
2026-03-17 8:20 [PATCH v13 RESEND 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
` (14 preceding siblings ...)
2026-03-17 10:57 ` [PATCH v13 RESEND 00/14] arm64: entry: Convert to Generic Entry Yeoreum Yun
@ 2026-03-19 14:35 ` Linus Walleij
2026-03-20 9:28 ` Jinjie Ruan
15 siblings, 1 reply; 38+ messages in thread
From: Linus Walleij @ 2026-03-19 14:35 UTC (permalink / raw)
To: Jinjie Ruan
Cc: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, broonie, pengcan, liqiang01, ziyao,
guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
On Tue, Mar 17, 2026 at 9:20 AM Jinjie Ruan <ruanjinjie@huawei.com> wrote:
> Currently, x86, RISC-V, and LoongArch use the Generic Entry framework,
> which makes maintainers' work easier and the code more elegant. arm64
> has already switched to the Generic IRQ Entry in commit
> b3cf07851b6c ("arm64: entry: Switch to generic IRQ entry"); it is
> time to completely convert arm64 to Generic Entry.
Looks good to me, except patch 14, which needs your sign-off.
Perhaps it is best if patches 1 thru 11 are applied separately
to the arm64 tree and the remaining patches either postponed
to the next kernel cycle or applied on top of an immutable branch
based off v7.0-rc1 from the arm64 tree?
Yours,
Linus Walleij
* Re: [PATCH v13 RESEND 11/14] s390: Rename TIF_SINGLE_STEP to TIF_SINGLESTEP
2026-03-17 8:20 ` [PATCH v13 RESEND 11/14] s390: Rename TIF_SINGLE_STEP to TIF_SINGLESTEP Jinjie Ruan
2026-03-19 14:23 ` Linus Walleij
@ 2026-03-19 17:05 ` Kevin Brodsky
1 sibling, 0 replies; 38+ messages in thread
From: Kevin Brodsky @ 2026-03-19 17:05 UTC (permalink / raw)
To: Jinjie Ruan, catalin.marinas, will, oleg, chenhuacai, kernel, hca,
gor, agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen,
hpa, arnd, peterz, luto, shuah, kees, wad, deller, macro, akpm,
ldv, anshuman.khandual, ryan.roberts, mark.rutland, thuth, song,
ada.coupriediaz, linusw, broonie, pengcan, liqiang01, ziyao,
guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
On 17/03/2026 09:20, Jinjie Ruan wrote:
> Rename TIF_SINGLE_STEP to TIF_SINGLESTEP to align with the naming
> convention used by arm64, x86, and other architectures.
>
> By aligning the name, TIF_SINGLESTEP can be consolidated into the generic
> TIF bits definitions, reducing architectural divergence and simplifying
> cross-architecture entry/exit logic.
>
> No functional changes intended.
>
> Acked-by: Heiko Carstens <hca@linux.ibm.com>
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
> ---
> arch/s390/include/asm/thread_info.h | 4 ++--
> arch/s390/kernel/process.c | 2 +-
> arch/s390/kernel/ptrace.c | 20 ++++++++++----------
> arch/s390/kernel/signal.c | 6 +++---
> 4 files changed, 16 insertions(+), 16 deletions(-)
>
> diff --git a/arch/s390/include/asm/thread_info.h b/arch/s390/include/asm/thread_info.h
> index 6a548a819400..1bcd42614e41 100644
> --- a/arch/s390/include/asm/thread_info.h
> +++ b/arch/s390/include/asm/thread_info.h
> @@ -69,7 +69,7 @@ void arch_setup_new_exec(void);
> #define TIF_GUARDED_STORAGE 17 /* load guarded storage control block */
> #define TIF_ISOLATE_BP_GUEST 18 /* Run KVM guests with isolated BP */
> #define TIF_PER_TRAP 19 /* Need to handle PER trap on exit to usermode */
> -#define TIF_SINGLE_STEP 21 /* This task is single stepped */
> +#define TIF_SINGLESTEP 21 /* This task is single stepped */
> #define TIF_BLOCK_STEP 22 /* This task is block stepped */
> #define TIF_UPROBE_SINGLESTEP 23 /* This task is uprobe single stepped */
>
> @@ -77,7 +77,7 @@ void arch_setup_new_exec(void);
> #define _TIF_GUARDED_STORAGE BIT(TIF_GUARDED_STORAGE)
> #define _TIF_ISOLATE_BP_GUEST BIT(TIF_ISOLATE_BP_GUEST)
> #define _TIF_PER_TRAP BIT(TIF_PER_TRAP)
> -#define _TIF_SINGLE_STEP BIT(TIF_SINGLE_STEP)
> +#define _TIF_SINGLESTEP BIT(TIF_SINGLESTEP)
> #define _TIF_BLOCK_STEP BIT(TIF_BLOCK_STEP)
> #define _TIF_UPROBE_SINGLESTEP BIT(TIF_UPROBE_SINGLESTEP)
>
> diff --git a/arch/s390/kernel/process.c b/arch/s390/kernel/process.c
> index 0df95dcb2101..3accc0c064a0 100644
> --- a/arch/s390/kernel/process.c
> +++ b/arch/s390/kernel/process.c
> @@ -122,7 +122,7 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
> /* Don't copy debug registers */
> memset(&p->thread.per_user, 0, sizeof(p->thread.per_user));
> memset(&p->thread.per_event, 0, sizeof(p->thread.per_event));
> - clear_tsk_thread_flag(p, TIF_SINGLE_STEP);
> + clear_tsk_thread_flag(p, TIF_SINGLESTEP);
> p->thread.per_flags = 0;
> /* Initialize per thread user and system timer values */
> p->thread.user_timer = 0;
> diff --git a/arch/s390/kernel/ptrace.c b/arch/s390/kernel/ptrace.c
> index 125ca4c4e30c..d2cf91f4ac3f 100644
> --- a/arch/s390/kernel/ptrace.c
> +++ b/arch/s390/kernel/ptrace.c
> @@ -90,8 +90,8 @@ void update_cr_regs(struct task_struct *task)
> new.start.val = thread->per_user.start;
> new.end.val = thread->per_user.end;
>
> - /* merge TIF_SINGLE_STEP into user specified PER registers. */
> - if (test_tsk_thread_flag(task, TIF_SINGLE_STEP) ||
> + /* merge TIF_SINGLESTEP into user specified PER registers. */
> + if (test_tsk_thread_flag(task, TIF_SINGLESTEP) ||
> test_tsk_thread_flag(task, TIF_UPROBE_SINGLESTEP)) {
> if (test_tsk_thread_flag(task, TIF_BLOCK_STEP))
> new.control.val |= PER_EVENT_BRANCH;
> @@ -119,18 +119,18 @@ void update_cr_regs(struct task_struct *task)
> void user_enable_single_step(struct task_struct *task)
> {
> clear_tsk_thread_flag(task, TIF_BLOCK_STEP);
> - set_tsk_thread_flag(task, TIF_SINGLE_STEP);
> + set_tsk_thread_flag(task, TIF_SINGLESTEP);
> }
>
> void user_disable_single_step(struct task_struct *task)
> {
> clear_tsk_thread_flag(task, TIF_BLOCK_STEP);
> - clear_tsk_thread_flag(task, TIF_SINGLE_STEP);
> + clear_tsk_thread_flag(task, TIF_SINGLESTEP);
> }
>
> void user_enable_block_step(struct task_struct *task)
> {
> - set_tsk_thread_flag(task, TIF_SINGLE_STEP);
> + set_tsk_thread_flag(task, TIF_SINGLESTEP);
> set_tsk_thread_flag(task, TIF_BLOCK_STEP);
> }
>
> @@ -143,7 +143,7 @@ void ptrace_disable(struct task_struct *task)
> {
> memset(&task->thread.per_user, 0, sizeof(task->thread.per_user));
> memset(&task->thread.per_event, 0, sizeof(task->thread.per_event));
> - clear_tsk_thread_flag(task, TIF_SINGLE_STEP);
> + clear_tsk_thread_flag(task, TIF_SINGLESTEP);
> clear_tsk_thread_flag(task, TIF_PER_TRAP);
> task->thread.per_flags = 0;
> }
> @@ -155,19 +155,19 @@ static inline unsigned long __peek_user_per(struct task_struct *child,
> {
> if (addr == offsetof(struct per_struct_kernel, cr9))
> /* Control bits of the active per set. */
> - return test_thread_flag(TIF_SINGLE_STEP) ?
> + return test_thread_flag(TIF_SINGLESTEP) ?
> PER_EVENT_IFETCH : child->thread.per_user.control;
> else if (addr == offsetof(struct per_struct_kernel, cr10))
> /* Start address of the active per set. */
> - return test_thread_flag(TIF_SINGLE_STEP) ?
> + return test_thread_flag(TIF_SINGLESTEP) ?
> 0 : child->thread.per_user.start;
> else if (addr == offsetof(struct per_struct_kernel, cr11))
> /* End address of the active per set. */
> - return test_thread_flag(TIF_SINGLE_STEP) ?
> + return test_thread_flag(TIF_SINGLESTEP) ?
> -1UL : child->thread.per_user.end;
> else if (addr == offsetof(struct per_struct_kernel, bits))
> /* Single-step bit. */
> - return test_thread_flag(TIF_SINGLE_STEP) ?
> + return test_thread_flag(TIF_SINGLESTEP) ?
> (1UL << (BITS_PER_LONG - 1)) : 0;
> else if (addr == offsetof(struct per_struct_kernel, starting_addr))
> /* Start address of the user specified per set. */
> diff --git a/arch/s390/kernel/signal.c b/arch/s390/kernel/signal.c
> index 4874de5edea0..83f7650f2032 100644
> --- a/arch/s390/kernel/signal.c
> +++ b/arch/s390/kernel/signal.c
> @@ -423,7 +423,7 @@ static void handle_signal(struct ksignal *ksig, sigset_t *oldset,
> else
> ret = setup_frame(ksig->sig, &ksig->ka, oldset, regs);
>
> - signal_setup_done(ret, ksig, test_thread_flag(TIF_SINGLE_STEP));
> + signal_setup_done(ret, ksig, test_thread_flag(TIF_SINGLESTEP));
> }
>
> /*
> @@ -491,7 +491,7 @@ void arch_do_signal_or_restart(struct pt_regs *regs)
> regs->gprs[2] = regs->orig_gpr2;
> current->restart_block.arch_data = regs->psw.addr;
> regs->psw.addr = VDSO_SYMBOL(current, restart_syscall);
> - if (test_thread_flag(TIF_SINGLE_STEP))
> + if (test_thread_flag(TIF_SINGLESTEP))
> clear_thread_flag(TIF_PER_TRAP);
> break;
> case -ERESTARTNOHAND:
> @@ -499,7 +499,7 @@ void arch_do_signal_or_restart(struct pt_regs *regs)
> case -ERESTARTNOINTR:
> regs->gprs[2] = regs->orig_gpr2;
> regs->psw.addr = __rewind_psw(regs->psw, regs->int_code >> 16);
> - if (test_thread_flag(TIF_SINGLE_STEP))
> + if (test_thread_flag(TIF_SINGLESTEP))
> clear_thread_flag(TIF_PER_TRAP);
> break;
> }
* Re: [PATCH v13 RESEND 12/14] asm-generic: Move TIF_SINGLESTEP to generic TIF bits
2026-03-17 8:20 ` [PATCH v13 RESEND 12/14] asm-generic: Move TIF_SINGLESTEP to generic TIF bits Jinjie Ruan
2026-03-19 14:24 ` Linus Walleij
@ 2026-03-19 17:05 ` Kevin Brodsky
1 sibling, 0 replies; 38+ messages in thread
From: Kevin Brodsky @ 2026-03-19 17:05 UTC (permalink / raw)
To: Jinjie Ruan, catalin.marinas, will, oleg, chenhuacai, kernel, hca,
gor, agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen,
hpa, arnd, peterz, luto, shuah, kees, wad, deller, macro, akpm,
ldv, anshuman.khandual, ryan.roberts, mark.rutland, thuth, song,
ada.coupriediaz, linusw, broonie, pengcan, liqiang01, ziyao,
guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
On 17/03/2026 09:20, Jinjie Ruan wrote:
> Currently, x86, ARM64, s390, and LoongArch all define and use
> TIF_SINGLESTEP to track single-stepping state.
>
> Since this flag is shared across multiple major architectures and serves
> a common purpose in the generic entry/exit paths, move TIF_SINGLESTEP
> into the generic Thread Information Flags (TIF) infrastructure.
>
> This consolidation reduces architecture-specific boilerplate code and
> ensures consistency for generic features that rely on single-step
> state tracking.
>
> Cc: Thomas Gleixner <tglx@kernel.org>
> Acked-by: Heiko Carstens <hca@linux.ibm.com> # s390
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
> ---
> arch/loongarch/include/asm/thread_info.h | 11 +++++------
> arch/s390/include/asm/thread_info.h | 7 +++----
> arch/x86/include/asm/thread_info.h | 6 ++----
> include/asm-generic/thread_info_tif.h | 5 +++++
> 4 files changed, 15 insertions(+), 14 deletions(-)
>
> diff --git a/arch/loongarch/include/asm/thread_info.h b/arch/loongarch/include/asm/thread_info.h
> index 4d7117fcdc78..a2ec87f18e1d 100644
> --- a/arch/loongarch/include/asm/thread_info.h
> +++ b/arch/loongarch/include/asm/thread_info.h
> @@ -70,6 +70,7 @@ register unsigned long current_stack_pointer __asm__("$sp");
> */
> #define HAVE_TIF_NEED_RESCHED_LAZY
> #define HAVE_TIF_RESTORE_SIGMASK
> +#define HAVE_TIF_SINGLESTEP
>
> #include <asm-generic/thread_info_tif.h>
>
> @@ -82,11 +83,10 @@ register unsigned long current_stack_pointer __asm__("$sp");
> #define TIF_32BIT_REGS 21 /* 32-bit general purpose registers */
> #define TIF_32BIT_ADDR 22 /* 32-bit address space */
> #define TIF_LOAD_WATCH 23 /* If set, load watch registers */
> -#define TIF_SINGLESTEP 24 /* Single Step */
> -#define TIF_LSX_CTX_LIVE 25 /* LSX context must be preserved */
> -#define TIF_LASX_CTX_LIVE 26 /* LASX context must be preserved */
> -#define TIF_USEDLBT 27 /* LBT was used by this task this quantum (SMP) */
> -#define TIF_LBT_CTX_LIVE 28 /* LBT context must be preserved */
> +#define TIF_LSX_CTX_LIVE 24 /* LSX context must be preserved */
> +#define TIF_LASX_CTX_LIVE 25 /* LASX context must be preserved */
> +#define TIF_USEDLBT 26 /* LBT was used by this task this quantum (SMP) */
> +#define TIF_LBT_CTX_LIVE 27 /* LBT context must be preserved */
>
> #define _TIF_NOHZ BIT(TIF_NOHZ)
> #define _TIF_USEDFPU BIT(TIF_USEDFPU)
> @@ -96,7 +96,6 @@ register unsigned long current_stack_pointer __asm__("$sp");
> #define _TIF_32BIT_REGS BIT(TIF_32BIT_REGS)
> #define _TIF_32BIT_ADDR BIT(TIF_32BIT_ADDR)
> #define _TIF_LOAD_WATCH BIT(TIF_LOAD_WATCH)
> -#define _TIF_SINGLESTEP BIT(TIF_SINGLESTEP)
> #define _TIF_LSX_CTX_LIVE BIT(TIF_LSX_CTX_LIVE)
> #define _TIF_LASX_CTX_LIVE BIT(TIF_LASX_CTX_LIVE)
> #define _TIF_USEDLBT BIT(TIF_USEDLBT)
> diff --git a/arch/s390/include/asm/thread_info.h b/arch/s390/include/asm/thread_info.h
> index 1bcd42614e41..95be5258a422 100644
> --- a/arch/s390/include/asm/thread_info.h
> +++ b/arch/s390/include/asm/thread_info.h
> @@ -61,6 +61,7 @@ void arch_setup_new_exec(void);
> */
> #define HAVE_TIF_NEED_RESCHED_LAZY
> #define HAVE_TIF_RESTORE_SIGMASK
> +#define HAVE_TIF_SINGLESTEP
>
> #include <asm-generic/thread_info_tif.h>
>
> @@ -69,15 +70,13 @@ void arch_setup_new_exec(void);
> #define TIF_GUARDED_STORAGE 17 /* load guarded storage control block */
> #define TIF_ISOLATE_BP_GUEST 18 /* Run KVM guests with isolated BP */
> #define TIF_PER_TRAP 19 /* Need to handle PER trap on exit to usermode */
> -#define TIF_SINGLESTEP 21 /* This task is single stepped */
> -#define TIF_BLOCK_STEP 22 /* This task is block stepped */
> -#define TIF_UPROBE_SINGLESTEP 23 /* This task is uprobe single stepped */
> +#define TIF_BLOCK_STEP 20 /* This task is block stepped */
> +#define TIF_UPROBE_SINGLESTEP 21 /* This task is uprobe single stepped */
>
> #define _TIF_ASCE_PRIMARY BIT(TIF_ASCE_PRIMARY)
> #define _TIF_GUARDED_STORAGE BIT(TIF_GUARDED_STORAGE)
> #define _TIF_ISOLATE_BP_GUEST BIT(TIF_ISOLATE_BP_GUEST)
> #define _TIF_PER_TRAP BIT(TIF_PER_TRAP)
> -#define _TIF_SINGLESTEP BIT(TIF_SINGLESTEP)
> #define _TIF_BLOCK_STEP BIT(TIF_BLOCK_STEP)
> #define _TIF_UPROBE_SINGLESTEP BIT(TIF_UPROBE_SINGLESTEP)
>
> diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
> index 0067684afb5b..f59072ba1473 100644
> --- a/arch/x86/include/asm/thread_info.h
> +++ b/arch/x86/include/asm/thread_info.h
> @@ -98,9 +98,8 @@ struct thread_info {
> #define TIF_IO_BITMAP 22 /* uses I/O bitmap */
> #define TIF_SPEC_FORCE_UPDATE 23 /* Force speculation MSR update in context switch */
> #define TIF_FORCED_TF 24 /* true if TF in eflags artificially */
> -#define TIF_SINGLESTEP 25 /* reenable singlestep on user return*/
> -#define TIF_BLOCKSTEP 26 /* set when we want DEBUGCTLMSR_BTF */
> -#define TIF_ADDR32 27 /* 32-bit address space on 64 bits */
> +#define TIF_BLOCKSTEP 25 /* set when we want DEBUGCTLMSR_BTF */
> +#define TIF_ADDR32 26 /* 32-bit address space on 64 bits */
>
> #define _TIF_SSBD BIT(TIF_SSBD)
> #define _TIF_SPEC_IB BIT(TIF_SPEC_IB)
> @@ -112,7 +111,6 @@ struct thread_info {
> #define _TIF_SPEC_FORCE_UPDATE BIT(TIF_SPEC_FORCE_UPDATE)
> #define _TIF_FORCED_TF BIT(TIF_FORCED_TF)
> #define _TIF_BLOCKSTEP BIT(TIF_BLOCKSTEP)
> -#define _TIF_SINGLESTEP BIT(TIF_SINGLESTEP)
> #define _TIF_ADDR32 BIT(TIF_ADDR32)
>
> /* flags to check in __switch_to() */
> diff --git a/include/asm-generic/thread_info_tif.h b/include/asm-generic/thread_info_tif.h
> index da1610a78f92..b277fe06aee3 100644
> --- a/include/asm-generic/thread_info_tif.h
> +++ b/include/asm-generic/thread_info_tif.h
> @@ -48,4 +48,9 @@
> #define TIF_RSEQ 11 // Run RSEQ fast path
> #define _TIF_RSEQ BIT(TIF_RSEQ)
>
> +#ifdef HAVE_TIF_SINGLESTEP
> +#define TIF_SINGLESTEP 12 /* reenable singlestep on user return*/
> +#define _TIF_SINGLESTEP BIT(TIF_SINGLESTEP)
> +#endif
> +
> #endif /* _ASM_GENERIC_THREAD_INFO_TIF_H_ */
* Re: [PATCH v13 RESEND 13/14] arm64: Use generic TIF bits for common thread flags
2026-03-17 8:20 ` [PATCH v13 RESEND 13/14] arm64: Use generic TIF bits for common thread flags Jinjie Ruan
2026-03-19 14:24 ` Linus Walleij
@ 2026-03-19 17:07 ` Kevin Brodsky
2026-03-20 9:21 ` Jinjie Ruan
1 sibling, 1 reply; 38+ messages in thread
From: Kevin Brodsky @ 2026-03-19 17:07 UTC (permalink / raw)
To: Jinjie Ruan, catalin.marinas, will, oleg, chenhuacai, kernel, hca,
gor, agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen,
hpa, arnd, peterz, luto, shuah, kees, wad, deller, macro, akpm,
ldv, anshuman.khandual, ryan.roberts, mark.rutland, thuth, song,
ada.coupriediaz, linusw, broonie, pengcan, liqiang01, ziyao,
guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
On 17/03/2026 09:20, Jinjie Ruan wrote:
> Use the generic TIF bits defined in <asm-generic/thread_info_tif.h> for
> standard thread flags (TIF_SIGPENDING, TIF_NEED_RESCHED, TIF_NOTIFY_RESUME,
> TIF_RESTORE_SIGMASK, TIF_SINGLESTEP, etc.) instead of defining
> them locally.
>
> Arm64-specific bits (TIF_FOREIGN_FPSTATE, TIF_MTE_ASYNC_FAULT, TIF_SVE,
> TIF_SSBD, etc.) are renumbered to start at bit 16 to avoid conflicts.
>
> This enables RSEQ optimizations which require CONFIG_HAVE_GENERIC_TIF_BITS
> combined with the generic entry infrastructure (already used by arm64).
>
> Cc: Thomas Gleixner <tglx@kernel.org>
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
> ---
> arch/arm64/Kconfig | 1 +
> arch/arm64/include/asm/thread_info.h | 62 ++++++++++++----------------
> 2 files changed, 28 insertions(+), 35 deletions(-)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 96fef01598be..33cf901fb1a0 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -224,6 +224,7 @@ config ARM64
> select HAVE_SAMPLE_FTRACE_DIRECT_MULTI
> select HAVE_BUILDTIME_MCOUNT_SORT
> select HAVE_EFFICIENT_UNALIGNED_ACCESS
> + select HAVE_GENERIC_TIF_BITS
> select HAVE_GUP_FAST
> select HAVE_FTRACE_GRAPH_FUNC
> select HAVE_FUNCTION_TRACER
> diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
> index f89a15dc6ad5..be1a0651cfe2 100644
> --- a/arch/arm64/include/asm/thread_info.h
> +++ b/arch/arm64/include/asm/thread_info.h
> @@ -58,42 +58,34 @@ void arch_setup_new_exec(void);
>
> #endif
>
> -#define TIF_SIGPENDING 0 /* signal pending */
> -#define TIF_NEED_RESCHED 1 /* rescheduling necessary */
> -#define TIF_NEED_RESCHED_LAZY 2 /* Lazy rescheduling needed */
> -#define TIF_NOTIFY_RESUME 3 /* callback before returning to user */
> -#define TIF_FOREIGN_FPSTATE 4 /* CPU's FP state is not current's */
> -#define TIF_UPROBE 5 /* uprobe breakpoint or singlestep */
> -#define TIF_MTE_ASYNC_FAULT 6 /* MTE Asynchronous Tag Check Fault */
> -#define TIF_NOTIFY_SIGNAL 7 /* signal notifications exist */
> -#define TIF_PATCH_PENDING 13 /* pending live patching update */
> -#define TIF_MEMDIE 18 /* is terminating due to OOM killer */
> -#define TIF_FREEZE 19
> -#define TIF_RESTORE_SIGMASK 20
> -#define TIF_SINGLESTEP 21
> -#define TIF_32BIT 22 /* 32bit process */
> -#define TIF_SVE 23 /* Scalable Vector Extension in use */
> -#define TIF_SVE_VL_INHERIT 24 /* Inherit SVE vl_onexec across exec */
> -#define TIF_SSBD 25 /* Wants SSB mitigation */
> -#define TIF_TAGGED_ADDR 26 /* Allow tagged user addresses */
> -#define TIF_SME 27 /* SME in use */
> -#define TIF_SME_VL_INHERIT 28 /* Inherit SME vl_onexec across exec */
> -#define TIF_KERNEL_FPSTATE 29 /* Task is in a kernel mode FPSIMD section */
> -#define TIF_TSC_SIGSEGV 30 /* SIGSEGV on counter-timer access */
> -#define TIF_LAZY_MMU_PENDING 31 /* Ops pending for lazy mmu mode exit */
> +/*
> + * Tell the generic TIF infrastructure which bits arm64 supports
> + */
> +#define HAVE_TIF_NEED_RESCHED_LAZY
> +#define HAVE_TIF_RESTORE_SIGMASK
> +#define HAVE_TIF_SINGLESTEP
> +
> +#include <asm-generic/thread_info_tif.h>
> +
> +#define TIF_FOREIGN_FPSTATE 16 /* CPU's FP state is not current's */
> +#define TIF_MTE_ASYNC_FAULT 17 /* MTE Asynchronous Tag Check Fault */
> +#define TIF_FREEZE 18
Turns out this flag became unused a long time ago, see commit
d88e4cb67197 ("freezer: remove now unused TIF_FREEZE"), and it was
probably reintroduced by mistake in the original arm64 implementation,
commit b3901d54dc4f ("arm64: Process management"). Good opportunity to
remove it I think.
Otherwise:
Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
> +#define TIF_32BIT 19 /* 32bit process */
> +#define TIF_SVE 20 /* Scalable Vector Extension in use */
> +#define TIF_SVE_VL_INHERIT 21 /* Inherit SVE vl_onexec across exec */
> +#define TIF_SSBD 22 /* Wants SSB mitigation */
> +#define TIF_TAGGED_ADDR 23 /* Allow tagged user addresses */
> +#define TIF_SME 24 /* SME in use */
> +#define TIF_SME_VL_INHERIT 25 /* Inherit SME vl_onexec across exec */
> +#define TIF_KERNEL_FPSTATE 26 /* Task is in a kernel mode FPSIMD section */
> +#define TIF_TSC_SIGSEGV 27 /* SIGSEGV on counter-timer access */
> +#define TIF_LAZY_MMU_PENDING 28 /* Ops pending for lazy mmu mode exit */
>
> -#define _TIF_SIGPENDING (1 << TIF_SIGPENDING)
> -#define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED)
> -#define _TIF_NEED_RESCHED_LAZY (1 << TIF_NEED_RESCHED_LAZY)
> -#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
> -#define _TIF_FOREIGN_FPSTATE (1 << TIF_FOREIGN_FPSTATE)
> -#define _TIF_PATCH_PENDING (1 << TIF_PATCH_PENDING)
> -#define _TIF_UPROBE (1 << TIF_UPROBE)
> -#define _TIF_32BIT (1 << TIF_32BIT)
> -#define _TIF_SVE (1 << TIF_SVE)
> -#define _TIF_MTE_ASYNC_FAULT (1 << TIF_MTE_ASYNC_FAULT)
> -#define _TIF_NOTIFY_SIGNAL (1 << TIF_NOTIFY_SIGNAL)
> -#define _TIF_TSC_SIGSEGV (1 << TIF_TSC_SIGSEGV)
> +#define _TIF_FOREIGN_FPSTATE BIT(TIF_FOREIGN_FPSTATE)
> +#define _TIF_32BIT BIT(TIF_32BIT)
> +#define _TIF_SVE BIT(TIF_SVE)
> +#define _TIF_MTE_ASYNC_FAULT BIT(TIF_MTE_ASYNC_FAULT)
> +#define _TIF_TSC_SIGSEGV BIT(TIF_TSC_SIGSEGV)
>
> #ifdef CONFIG_SHADOW_CALL_STACK
> #define INIT_SCS \
* Re: [PATCH v13 RESEND 13/14] arm64: Use generic TIF bits for common thread flags
2026-03-19 17:07 ` Kevin Brodsky
@ 2026-03-20 9:21 ` Jinjie Ruan
0 siblings, 0 replies; 38+ messages in thread
From: Jinjie Ruan @ 2026-03-20 9:21 UTC (permalink / raw)
To: Kevin Brodsky, catalin.marinas, will, oleg, chenhuacai, kernel,
hca, gor, agordeev, borntraeger, svens, tglx, mingo, bp,
dave.hansen, hpa, arnd, peterz, luto, shuah, kees, wad, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, linusw, broonie, pengcan, liqiang01,
ziyao, guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
On 2026/3/20 1:07, Kevin Brodsky wrote:
> On 17/03/2026 09:20, Jinjie Ruan wrote:
>> Use the generic TIF bits defined in <asm-generic/thread_info_tif.h> for
>> standard thread flags (TIF_SIGPENDING, TIF_NEED_RESCHED, TIF_NOTIFY_RESUME,
>> TIF_RESTORE_SIGMASK, TIF_SINGLESTEP, etc.) instead of defining
>> them locally.
>>
>> Arm64-specific bits (TIF_FOREIGN_FPSTATE, TIF_MTE_ASYNC_FAULT, TIF_SVE,
>> TIF_SSBD, etc.) are renumbered to start at bit 16 to avoid conflicts.
>>
>> This enables RSEQ optimizations which require CONFIG_HAVE_GENERIC_TIF_BITS
>> combined with the generic entry infrastructure (already used by arm64).
>>
>> Cc: Thomas Gleixner <tglx@kernel.org>
>> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
>> ---
>> arch/arm64/Kconfig | 1 +
>> arch/arm64/include/asm/thread_info.h | 62 ++++++++++++----------------
>> 2 files changed, 28 insertions(+), 35 deletions(-)
>>
[...]
>> + */
>> +#define HAVE_TIF_NEED_RESCHED_LAZY
>> +#define HAVE_TIF_RESTORE_SIGMASK
>> +#define HAVE_TIF_SINGLESTEP
>> +
>> +#include <asm-generic/thread_info_tif.h>
>> +
>> +#define TIF_FOREIGN_FPSTATE 16 /* CPU's FP state is not current's */
>> +#define TIF_MTE_ASYNC_FAULT 17 /* MTE Asynchronous Tag Check Fault */
>> +#define TIF_FREEZE 18
>
> Turns out this flag became unused a long time ago, see commit
> d88e4cb67197 ("freezer: remove now unused TIF_FREEZE"), and it was
> probably reintroduced by mistake in the original arm64 implementation,
> commit b3901d54dc4f ("arm64: Process management"). Good opportunity to
> remove it I think.
Totally agree. Let's get rid of it. No point in keeping dead code.
Thanks for the review.
>
> Otherwise:
>
> Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
>
>> +#define TIF_32BIT 19 /* 32bit process */
>> +#define TIF_SVE 20 /* Scalable Vector Extension in use */
>> +#define TIF_SVE_VL_INHERIT 21 /* Inherit SVE vl_onexec across exec */
>> +#define TIF_SSBD 22 /* Wants SSB mitigation */
>> +#define TIF_TAGGED_ADDR 23 /* Allow tagged user addresses */
>> +#define TIF_SME 24 /* SME in use */
>> +#define TIF_SME_VL_INHERIT 25 /* Inherit SME vl_onexec across exec */
>> +#define TIF_KERNEL_FPSTATE 26 /* Task is in a kernel mode FPSIMD section */
>> +#define TIF_TSC_SIGSEGV 27 /* SIGSEGV on counter-timer access */
>> +#define TIF_LAZY_MMU_PENDING 28 /* Ops pending for lazy mmu mode exit */
>>
>> -#define _TIF_SIGPENDING (1 << TIF_SIGPENDING)
>> -#define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED)
>> -#define _TIF_NEED_RESCHED_LAZY (1 << TIF_NEED_RESCHED_LAZY)
>> -#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
>> -#define _TIF_FOREIGN_FPSTATE (1 << TIF_FOREIGN_FPSTATE)
>> -#define _TIF_PATCH_PENDING (1 << TIF_PATCH_PENDING)
>> -#define _TIF_UPROBE (1 << TIF_UPROBE)
>> -#define _TIF_32BIT (1 << TIF_32BIT)
>> -#define _TIF_SVE (1 << TIF_SVE)
>> -#define _TIF_MTE_ASYNC_FAULT (1 << TIF_MTE_ASYNC_FAULT)
>> -#define _TIF_NOTIFY_SIGNAL (1 << TIF_NOTIFY_SIGNAL)
>> -#define _TIF_TSC_SIGSEGV (1 << TIF_TSC_SIGSEGV)
>> +#define _TIF_FOREIGN_FPSTATE BIT(TIF_FOREIGN_FPSTATE)
>> +#define _TIF_32BIT BIT(TIF_32BIT)
>> +#define _TIF_SVE BIT(TIF_SVE)
>> +#define _TIF_MTE_ASYNC_FAULT BIT(TIF_MTE_ASYNC_FAULT)
>> +#define _TIF_TSC_SIGSEGV BIT(TIF_TSC_SIGSEGV)
>>
>> #ifdef CONFIG_SHADOW_CALL_STACK
>> #define INIT_SCS \
>
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [PATCH v13 RESEND 14/14] selftests: sud_test: Support aarch64
2026-03-19 14:26 ` Linus Walleij
@ 2026-03-20 9:23 ` Jinjie Ruan
0 siblings, 0 replies; 38+ messages in thread
From: Jinjie Ruan @ 2026-03-20 9:23 UTC (permalink / raw)
To: Linus Walleij
Cc: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, broonie, pengcan, liqiang01, ziyao,
guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
On 2026/3/19 22:26, Linus Walleij wrote:
> On Tue, Mar 17, 2026 at 9:20 AM Jinjie Ruan <ruanjinjie@huawei.com> wrote:
>
>> From: kemal <kmal@cock.li>
>>
>> Support aarch64 to test "Syscall User Dispatch" with sud_test
>> selftest testcase.
>>
>> Signed-off-by: kemal <kmal@cock.li>
>
> You need to sign this off since you are on the delivery path.
My apologies for missing that. I'll definitely add the Signed-off-by tag
in the v14 submission.
>
> Yours,
> Linus Walleij
>
* Re: [PATCH v13 RESEND 00/14] arm64: entry: Convert to Generic Entry
2026-03-19 14:35 ` Linus Walleij
@ 2026-03-20 9:28 ` Jinjie Ruan
0 siblings, 0 replies; 38+ messages in thread
From: Jinjie Ruan @ 2026-03-20 9:28 UTC (permalink / raw)
To: Linus Walleij
Cc: catalin.marinas, will, oleg, chenhuacai, kernel, hca, gor,
agordeev, borntraeger, svens, tglx, mingo, bp, dave.hansen, hpa,
arnd, peterz, luto, shuah, kees, wad, kevin.brodsky, deller,
macro, akpm, ldv, anshuman.khandual, ryan.roberts, mark.rutland,
thuth, song, ada.coupriediaz, broonie, pengcan, liqiang01, ziyao,
guanwentao, guoren, schuster.simon, jremus, david,
mathieu.desnoyers, edumazet, kmal, dvyukov, reddybalavignesh9979,
x86, linux-arm-kernel, linux-kernel, loongarch, linux-s390,
linux-arch, linux-kselftest
On 2026/3/19 22:35, Linus Walleij wrote:
> On Tue, Mar 17, 2026 at 9:20 AM Jinjie Ruan <ruanjinjie@huawei.com> wrote:
>
>> Currently, x86, Riscv, Loongarch use the Generic Entry which makes
>> maintainers' work easier and codes more elegant. arm64 has already
>> successfully switched to the Generic IRQ Entry in commit
>> b3cf07851b6c ("arm64: entry: Switch to generic IRQ entry"), it is
>> time to completely convert arm64 to Generic Entry.
>
> Looks good to me, except patch 14 that needs your Signoff.
>
> Perhaps it is best if patches 1 thru 11 are applied separately
> to the arm64 tree and the remaining patches either postponed
> to the next kernel cycle or applied on top of an immutable branch
> based off v7.0-rc1 from the arm64 tree?
Thanks for the review and the suggestion on the merge strategy.
1. Regarding the split: I agree with applying patches 1-10 to the arm64
tree first. These are foundational and ready for inclusion.
2. Regarding patches 11-14: I am fine with postponing them or applying
them on top of an immutable branch based on v7.0-rc1.
>
> Yours,
> Linus Walleij
>
end of thread, other threads:[~2026-03-20 9:28 UTC | newest]
Thread overview: 38+ messages
-- links below jump to the message on this page --
2026-03-17 8:20 [PATCH v13 RESEND 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
2026-03-17 8:20 ` [PATCH v13 RESEND 01/14] arm64/ptrace: Refactor syscall_trace_enter/exit() to accept flags parameter Jinjie Ruan
2026-03-19 13:47 ` Linus Walleij
2026-03-17 8:20 ` [PATCH v13 RESEND 02/14] arm64/ptrace: Use syscall_get_nr() helper for syscall_trace_enter() Jinjie Ruan
2026-03-19 13:50 ` Linus Walleij
2026-03-17 8:20 ` [PATCH v13 RESEND 03/14] arm64/ptrace: Expand secure_computing() in place Jinjie Ruan
2026-03-19 13:58 ` Linus Walleij
2026-03-17 8:20 ` [PATCH v13 RESEND 04/14] arm64/ptrace: Use syscall_get_arguments() helper for audit Jinjie Ruan
2026-03-19 14:14 ` Linus Walleij
2026-03-17 8:20 ` [PATCH v13 RESEND 05/14] arm64: ptrace: Move rseq_syscall() before audit_syscall_exit() Jinjie Ruan
2026-03-19 14:16 ` Linus Walleij
2026-03-17 8:20 ` [PATCH v13 RESEND 06/14] arm64: syscall: Introduce syscall_exit_to_user_mode_work() Jinjie Ruan
2026-03-19 14:17 ` Linus Walleij
2026-03-17 8:20 ` [PATCH v13 RESEND 07/14] arm64/ptrace: Define and use _TIF_SYSCALL_EXIT_WORK Jinjie Ruan
2026-03-19 14:18 ` Linus Walleij
2026-03-17 8:20 ` [PATCH v13 RESEND 08/14] arm64/ptrace: Skip syscall exit reporting for PTRACE_SYSEMU_SINGLESTEP Jinjie Ruan
2026-03-19 14:20 ` Linus Walleij
2026-03-17 8:20 ` [PATCH v13 RESEND 09/14] arm64: entry: Convert to generic entry Jinjie Ruan
2026-03-17 10:58 ` Peter Zijlstra
2026-03-19 14:21 ` Linus Walleij
2026-03-17 8:20 ` [PATCH v13 RESEND 10/14] arm64: Inline el0_svc_common() Jinjie Ruan
2026-03-19 14:22 ` Linus Walleij
2026-03-17 8:20 ` [PATCH v13 RESEND 11/14] s390: Rename TIF_SINGLE_STEP to TIF_SINGLESTEP Jinjie Ruan
2026-03-19 14:23 ` Linus Walleij
2026-03-19 17:05 ` Kevin Brodsky
2026-03-17 8:20 ` [PATCH v13 RESEND 12/14] asm-generic: Move TIF_SINGLESTEP to generic TIF bits Jinjie Ruan
2026-03-19 14:24 ` Linus Walleij
2026-03-19 17:05 ` Kevin Brodsky
2026-03-17 8:20 ` [PATCH v13 RESEND 13/14] arm64: Use generic TIF bits for common thread flags Jinjie Ruan
2026-03-19 14:24 ` Linus Walleij
2026-03-19 17:07 ` Kevin Brodsky
2026-03-20 9:21 ` Jinjie Ruan
2026-03-17 8:20 ` [PATCH v13 RESEND 14/14] selftests: sud_test: Support aarch64 Jinjie Ruan
2026-03-19 14:26 ` Linus Walleij
2026-03-20 9:23 ` Jinjie Ruan
2026-03-17 10:57 ` [PATCH v13 RESEND 00/14] arm64: entry: Convert to Generic Entry Yeoreum Yun
2026-03-19 14:35 ` Linus Walleij
2026-03-20 9:28 ` Jinjie Ruan