public inbox for linux-kernel@vger.kernel.org
* [PATCH v11 00/14] arm64: entry: Convert to Generic Entry
@ 2026-01-28  3:19 Jinjie Ruan
  2026-01-28  3:19 ` [PATCH v11 01/14] entry: Remove unused syscall in syscall_trace_enter() Jinjie Ruan
                   ` (13 more replies)
  0 siblings, 14 replies; 33+ messages in thread
From: Jinjie Ruan @ 2026-01-28  3:19 UTC (permalink / raw)
  To: catalin.marinas, will, oleg, tglx, peterz, luto, shuah, kees, wad,
	kevin.brodsky, deller, akpm, charlie, ldv, mark.rutland,
	anshuman.khandual, song, ryan.roberts, thuth, ada.coupriediaz,
	broonie, pengcan, liqiang01, kmal, dvyukov, reddybalavignesh9979,
	richard.weiyang, linux-arm-kernel, linux-kernel, linux-kselftest
  Cc: ruanjinjie

Currently, x86, RISC-V and LoongArch use the generic entry code, which
makes maintainers' work easier and the code cleaner. arm64 has already
switched to the generic IRQ entry in commit b3cf07851b6c ("arm64:
entry: Switch to generic IRQ entry"), so it is time to convert arm64
completely to the generic entry code.

The goal is to bring arm64 in line with other architectures that already
use the generic entry infrastructure, reducing duplicated code and
making it easier to share future changes in entry/exit paths, such as
"Syscall User Dispatch".

This patch set is rebased on arm64 (for-next/entry). Performance was
measured on a Kunpeng 920 using "perf bench syscall basic" with
"arm64.nopauth selinux=0 audit=1" on the kernel command line.

After the switch to generic entry, the performance is as follows:

| Metric     | W/O Generic Framework | With Generic Framework | Change |
| ---------- | --------------------- | ---------------------- | ------ |
| Total time | 2.487 [sec]           | 2.393 [sec]            | ↓3.8%  |
| usecs/op   | 0.248780              | 0.239361               | ↓3.8%  |
| ops/sec    | 4,019,620             | 4,177,789              | ↑3.9%  |

Compared with the previous arch-specific handling, syscall performance
improves by approximately 3.9%.

With the additional optimization of syscall_get_arguments()[1],
el0_svc_common() and syscall_exit_work(), the performance is as follows:

| Metric     | W/O Generic Entry | With Generic Entry opt | Change |
| ---------- | ----------------- | ---------------------- | ------ |
| Total time | 2.487 [sec]       | 2.264 [sec]            | ↓9.0%  |
| usecs/op   | 0.248780          | 0.226481               | ↓9.0%  |
| ops/sec    | 4,019,620         | 4,415,383              | ↑9.8%  |

Therefore, after these optimizations, arm64 system call performance
improves by approximately 9%.
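
As a quick sanity check (not part of the series), the quoted percentages
can be reproduced from the raw numbers in the two tables above:

```python
# Reproduce the percentage changes quoted above from the raw
# benchmark numbers (negative means the metric went down).

def pct_change(before, after):
    """Relative change from before to after, in percent."""
    return (after - before) / before * 100.0

# Generic entry vs. arch-specific entry
print(round(pct_change(2.487, 2.393), 1))      # total time: -3.8
print(round(pct_change(4019620, 4177789), 1))  # ops/sec:    3.9

# Generic entry plus the extra optimizations
print(round(pct_change(2.487, 2.264), 1))      # total time: -9.0
print(round(pct_change(4019620, 4415383), 1))  # ops/sec:    9.8
```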

The series was tested successfully with the following test cases on
Kunpeng 920 and the QEMU virt platform:
 - Perf tests.
 - Switching between the different `dynamic preempt` modes.
 - Pseudo NMI tests.
 - Stress-ng CPU stress test.
 - Hackbench stress test.
 - MTE test case in Documentation/arch/arm64/memory-tagging-extension.rst
   and all test cases in tools/testing/selftests/arm64/mte/*.
 - "sud" selftest testcase.
 - get_set_sud, get_syscall_info, set_syscall_info, peeksiginfo
   in tools/testing/selftests/ptrace.
 - breakpoint_test_arm64 in selftests/breakpoints.
 - syscall-abi and ptrace in tools/testing/selftests/arm64/abi
 - fp-ptrace, sve-ptrace, za-ptrace in selftests/arm64/fp.
 - vdso_test_getrandom in tools/testing/selftests/vDSO
 - Strace tests.

The test QEMU configuration is as follows:

	qemu-system-aarch64 \
		-M virt,gic-version=3,virtualization=on,mte=on \
		-cpu max,pauth-impdef=on \
		-kernel Image \
		-smp 8,sockets=1,cores=4,threads=2 \
		-m 512m \
		-nographic \
		-no-reboot \
		-device virtio-rng-pci \
		-append "root=/dev/vda rw console=ttyAMA0 kgdboc=ttyAMA0,115200 \
			earlycon preempt=voluntary irqchip.gicv3_pseudo_nmi=1" \
		-drive if=none,file=images/rootfs.ext4,format=raw,id=hd0 \
		-device virtio-blk-device,drive=hd0 \

[1]: https://kernel.googlesource.com/pub/scm/linux/kernel/git/akpm/mm/+/89bf683c9507c280e20c3e17b4ea15e19696ed63%5E%21/#F0

Changes in v11:
- Remove unused syscall in syscall_trace_enter().
- Update and provide a detailed explanation of the differences after
  moving rseq_syscall() before audit_syscall_exit().
- Rebased on arm64 (for-next/entry), and drop the first 3 patches, which
  were already applied.
- Rework syscall_exit_to_user_mode_work() for arch reuse instead of
  adding a new syscall_exit_to_user_mode_work_prepare() helper.
- Link to v10: https://lore.kernel.org/all/20251222114737.1334364-1-ruanjinjie@huawei.com/

Changes in v10:
- Rebased on v6.19-rc1, rename syscall_exit_to_user_mode_prepare() to
  syscall_exit_to_user_mode_work_prepare() to avoid conflict.
- Also inline syscall_trace_enter().
- Support aarch64 for sud_benchmark.
- Update and correct the commit message.
- Add Reviewed-by.
- Link to v9: https://lore.kernel.org/all/20251204082123.2792067-1-ruanjinjie@huawei.com/

Changes in v9:
- Move the "Return early for ptrace_report_syscall_entry() error" patch
  ahead so that it does not introduce a regression.
- Not check _TIF_SECCOMP/SYSCALL_EMU for syscall_exit_work() in
  a separate patch.
- Do not report_syscall_exit() for PTRACE_SYSEMU_SINGLESTEP in a separate
  patch.
- Add two performance patches to improve arm64 performance.
- Add Reviewed-by.
- Link to v8: https://lore.kernel.org/all/20251126071446.3234218-1-ruanjinjie@huawei.com/

Changes in v8:
- Rename "report_syscall_enter()" to "report_syscall_entry()".
- Add ptrace_save_reg() to avoid duplication.
- Remove unused _TIF_WORK_MASK in a standalone patch.
- Align syscall_trace_enter() return value with the generic version.
- Use "scno" instead of regs->syscallno in el0_svc_common().
- Move rseq_syscall() ahead in a standalone patch to make the change clear.
- Rename "syscall_trace_exit()" to "syscall_exit_work()".
- Keep the goto in el0_svc_common().
- Pass no argument to __secure_computing() and check against -1, not -1L.
- Remove "Add has_syscall_work() helper" patch.
- Move "Add syscall_exit_to_user_mode_prepare() helper" patch later.
- Add missing header for asm/entry-common.h.
- Update the implementation of arch_syscall_is_vdso_sigreturn().
- Add "ARCH_SYSCALL_WORK_EXIT" to be defined as "SECCOMP | SYSCALL_EMU"
  to keep the behaviour unchanged.
- Add more test cases.
- Add Reviewed-by.
- Update the commit message.
- Link to v7: https://lore.kernel.org/all/20251117133048.53182-1-ruanjinjie@huawei.com/

Changes in v7:
- Support "Syscall User Dispatch" by implementing
  arch_syscall_is_vdso_sigreturn() as kemal suggested.
- Add aarch64 support for the "sud" selftest testcase, which passes with
  this patch series.
- Fix the kernel test robot warning for arch_ptrace_report_syscall_entry()
  and arch_ptrace_report_syscall_exit() in asm/entry-common.h.
- Add perf syscall performance test.
- Link to v6: https://lore.kernel.org/all/20250916082611.2972008-1-ruanjinjie@huawei.com/

Changes in v6:
- Rebased on v6.17-rc5-next as arm64 generic irq entry has merged.
- Update the commit message.
- Link to v5: https://lore.kernel.org/all/20241206101744.4161990-1-ruanjinjie@huawei.com/

Changes in v5:
- Do not change arm32 and keep the interrupts_enabled() macro for the
  GICv3 driver.
- Move irqentry_state definition into arch/arm64/kernel/entry-common.c.
- Avoid removing the __enter_from_*() and __exit_to_*() wrappers.
- Update "irqentry_state_t ret/irq_state" to "state" for consistency.
- Use generic irq entry header for PREEMPT_DYNAMIC after split
  the generic entry.
- Also refactor the ARM64 syscall code.
- Introduce arch_ptrace_report_syscall_entry/exit(), instead of
  arch_pre/post_report_syscall_entry/exit() to simplify code.
- Separate the syscall patches cleanly.
- Update the commit message.
- Link to v4: https://lore.kernel.org/all/20241025100700.3714552-1-ruanjinjie@huawei.com/

Changes in v4:
- Rework/cleanup split into a few patches as Mark suggested.
- Replace the interrupts_enabled() macro with regs_irqs_disabled()
  instead of leaving it in place.
- Remove rcu and lockdep state in pt_regs by using temporary
  irqentry_state_t as Mark suggested.
- Remove some unnecessary intermediate functions to make it clear.
- Rework preempt irq and PREEMPT_DYNAMIC code
  to make the switch more clear.
- arch_prepare_*_entry/exit() -> arch_pre_*_entry/exit().
- Expand the arch functions comment.
- Make arch functions closer to its caller.
- Declare saved_reg in for block.
- Remove arch_exit_to_kernel_mode_prepare(), arch_enter_from_kernel_mode().
- Adjust "Add few arch functions to use generic entry" patch to be
  the penultimate.
- Update the commit message.
- Add suggested-by.
- Link to v3: https://lore.kernel.org/all/20240629085601.470241-1-ruanjinjie@huawei.com/

Changes in v3:
- Test the MTE test cases.
- Handle forget_syscall() in arch_post_report_syscall_entry().
- Make the arch funcs not use __weak as Thomas suggested: move them to
  entry-common.h and fold arch_forget_syscall() into
  arch_post_report_syscall_entry().
- Move report_single_step() to thread_info.h for arm64
- Change __always_inline() to inline, add inline for the other arch funcs.
- Remove unused signal.h for entry-common.h.
- Add Suggested-by.
- Update the commit message.

Changes in v2:
- Add tested-by.
- Fix a bug where arch_post_report_syscall_entry() was not called in
  syscall_trace_enter() if ptrace_report_syscall_entry() returned nonzero.
- Refactor report_syscall().
- Add comment for arch_prepare_report_syscall_exit().
- Adjust entry-common.h header file inclusion to alphabetical order.
- Update the commit message.

Jinjie Ruan (13):
  entry: Remove unused syscall in syscall_trace_enter()
  arm64/ptrace: Refactor syscall_trace_enter/exit()
  arm64: ptrace: Move rseq_syscall() before audit_syscall_exit()
  arm64: syscall: Rework el0_svc_common()
  arm64/ptrace: Not check _TIF_SECCOMP/SYSCALL_EMU for
    syscall_exit_work()
  arm64/ptrace: Do not report_syscall_exit() for
    PTRACE_SYSEMU_SINGLESTEP
  arm64/ptrace: Expand secure_computing() in place
  arm64/ptrace: Use syscall_get_arguments() helper
  entry: Rework syscall_exit_to_user_mode_work() for arch reuse
  entry: Add arch_ptrace_report_syscall_entry/exit()
  arm64: entry: Convert to generic entry
  arm64: Inline el0_svc_common()
  entry: Inline syscall_exit_work() and syscall_trace_enter()

kemal (1):
  selftests: sud_test: Support aarch64

 arch/arm64/Kconfig                            |   2 +-
 arch/arm64/include/asm/entry-common.h         |  76 +++++++++
 arch/arm64/include/asm/syscall.h              |  19 ++-
 arch/arm64/include/asm/thread_info.h          |  16 +-
 arch/arm64/kernel/debug-monitors.c            |   7 +
 arch/arm64/kernel/ptrace.c                    | 115 -------------
 arch/arm64/kernel/signal.c                    |   2 +-
 arch/arm64/kernel/syscall.c                   |  29 +---
 include/linux/entry-common.h                  | 158 ++++++++++++++++--
 kernel/entry/common.h                         |   7 -
 kernel/entry/syscall-common.c                 |  96 +----------
 kernel/entry/syscall_user_dispatch.c          |   4 +-
 .../syscall_user_dispatch/sud_benchmark.c     |   2 +-
 .../syscall_user_dispatch/sud_test.c          |   4 +
 14 files changed, 268 insertions(+), 269 deletions(-)
 delete mode 100644 kernel/entry/common.h

-- 
2.34.1



* [PATCH v11 01/14] entry: Remove unused syscall in syscall_trace_enter()
  2026-01-28  3:19 [PATCH v11 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
@ 2026-01-28  3:19 ` Jinjie Ruan
  2026-01-29 12:06   ` Kevin Brodsky
                     ` (2 more replies)
  2026-01-28  3:19 ` [PATCH v11 02/14] arm64/ptrace: Refactor syscall_trace_enter/exit() Jinjie Ruan
                   ` (12 subsequent siblings)
  13 siblings, 3 replies; 33+ messages in thread
From: Jinjie Ruan @ 2026-01-28  3:19 UTC (permalink / raw)
  To: catalin.marinas, will, oleg, tglx, peterz, luto, shuah, kees, wad,
	kevin.brodsky, deller, akpm, charlie, ldv, mark.rutland,
	anshuman.khandual, song, ryan.roberts, thuth, ada.coupriediaz,
	broonie, pengcan, liqiang01, kmal, dvyukov, reddybalavignesh9979,
	richard.weiyang, linux-arm-kernel, linux-kernel, linux-kselftest
  Cc: ruanjinjie

The 'syscall' argument in syscall_trace_enter() is immediately overwritten
before any real use and serves only as a local variable, so drop
the parameter.

No functional change intended.

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 include/linux/entry-common.h  | 4 ++--
 kernel/entry/syscall-common.c | 5 ++---
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
index 87efb38b7081..e4a8287af822 100644
--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -45,7 +45,7 @@
 				 SYSCALL_WORK_SYSCALL_EXIT_TRAP	|	\
 				 ARCH_SYSCALL_WORK_EXIT)
 
-long syscall_trace_enter(struct pt_regs *regs, long syscall, unsigned long work);
+long syscall_trace_enter(struct pt_regs *regs, unsigned long work);
 
 /**
  * syscall_enter_from_user_mode_work - Check and handle work before invoking
@@ -75,7 +75,7 @@ static __always_inline long syscall_enter_from_user_mode_work(struct pt_regs *re
 	unsigned long work = READ_ONCE(current_thread_info()->syscall_work);
 
 	if (work & SYSCALL_WORK_ENTER)
-		syscall = syscall_trace_enter(regs, syscall, work);
+		syscall = syscall_trace_enter(regs, work);
 
 	return syscall;
 }
diff --git a/kernel/entry/syscall-common.c b/kernel/entry/syscall-common.c
index 940a597ded40..e6237b536d8b 100644
--- a/kernel/entry/syscall-common.c
+++ b/kernel/entry/syscall-common.c
@@ -17,10 +17,9 @@ static inline void syscall_enter_audit(struct pt_regs *regs, long syscall)
 	}
 }
 
-long syscall_trace_enter(struct pt_regs *regs, long syscall,
-				unsigned long work)
+long syscall_trace_enter(struct pt_regs *regs, unsigned long work)
 {
-	long ret = 0;
+	long syscall, ret = 0;
 
 	/*
 	 * Handle Syscall User Dispatch.  This must comes first, since
-- 
2.34.1



* [PATCH v11 02/14] arm64/ptrace: Refactor syscall_trace_enter/exit()
  2026-01-28  3:19 [PATCH v11 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
  2026-01-28  3:19 ` [PATCH v11 01/14] entry: Remove unused syscall in syscall_trace_enter() Jinjie Ruan
@ 2026-01-28  3:19 ` Jinjie Ruan
  2026-01-28  3:19 ` [PATCH v11 03/14] arm64: ptrace: Move rseq_syscall() before audit_syscall_exit() Jinjie Ruan
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 33+ messages in thread
From: Jinjie Ruan @ 2026-01-28  3:19 UTC (permalink / raw)
  To: catalin.marinas, will, oleg, tglx, peterz, luto, shuah, kees, wad,
	kevin.brodsky, deller, akpm, charlie, ldv, mark.rutland,
	anshuman.khandual, song, ryan.roberts, thuth, ada.coupriediaz,
	broonie, pengcan, liqiang01, kmal, dvyukov, reddybalavignesh9979,
	richard.weiyang, linux-arm-kernel, linux-kernel, linux-kselftest
  Cc: ruanjinjie

The generic syscall entry code has the following form, which uses the
input syscall work flags:

| syscall_trace_enter(struct pt_regs *regs, unsigned long work)
|
| syscall_exit_work(struct pt_regs *regs, unsigned long work)

In preparation for moving arm64 over to the generic entry code,
refactor syscall_trace_enter/exit() to also take the thread flags, and
fetch the syscall number via the syscall_get_nr() helper.

No functional changes.

Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 arch/arm64/include/asm/syscall.h |  4 ++--
 arch/arm64/kernel/ptrace.c       | 26 +++++++++++++++++---------
 arch/arm64/kernel/syscall.c      |  5 +++--
 3 files changed, 22 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/syscall.h b/arch/arm64/include/asm/syscall.h
index 5e4c7fc44f73..30b203ef156b 100644
--- a/arch/arm64/include/asm/syscall.h
+++ b/arch/arm64/include/asm/syscall.h
@@ -120,7 +120,7 @@ static inline int syscall_get_arch(struct task_struct *task)
 	return AUDIT_ARCH_AARCH64;
 }
 
-int syscall_trace_enter(struct pt_regs *regs);
-void syscall_trace_exit(struct pt_regs *regs);
+int syscall_trace_enter(struct pt_regs *regs, unsigned long flags);
+void syscall_trace_exit(struct pt_regs *regs, unsigned long flags);
 
 #endif	/* __ASM_SYSCALL_H */
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index f333791ffba6..9f9aa3087c09 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -2407,9 +2407,9 @@ static void report_syscall_exit(struct pt_regs *regs)
 	}
 }
 
-int syscall_trace_enter(struct pt_regs *regs)
+int syscall_trace_enter(struct pt_regs *regs, unsigned long flags)
 {
-	unsigned long flags = read_thread_flags();
+	long syscall;
 	int ret;
 
 	if (flags & (_TIF_SYSCALL_EMU | _TIF_SYSCALL_TRACE)) {
@@ -2422,19 +2422,27 @@ int syscall_trace_enter(struct pt_regs *regs)
 	if (secure_computing() == -1)
 		return NO_SYSCALL;
 
-	if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
-		trace_sys_enter(regs, regs->syscallno);
+	/* Either of the above might have changed the syscall number */
+	syscall = syscall_get_nr(current, regs);
 
-	audit_syscall_entry(regs->syscallno, regs->orig_x0, regs->regs[1],
+	if (test_thread_flag(TIF_SYSCALL_TRACEPOINT)) {
+		trace_sys_enter(regs, syscall);
+
+		/*
+		 * Probes or BPF hooks in the tracepoint may have changed the
+		 * system call number as well.
+		 */
+		syscall = syscall_get_nr(current, regs);
+	}
+
+	audit_syscall_entry(syscall, regs->orig_x0, regs->regs[1],
 			    regs->regs[2], regs->regs[3]);
 
-	return regs->syscallno;
+	return syscall;
 }
 
-void syscall_trace_exit(struct pt_regs *regs)
+void syscall_trace_exit(struct pt_regs *regs, unsigned long flags)
 {
-	unsigned long flags = read_thread_flags();
-
 	audit_syscall_exit(regs);
 
 	if (flags & _TIF_SYSCALL_TRACEPOINT)
diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
index c062badd1a56..e8fd0d60ab09 100644
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -124,7 +124,7 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
 		 */
 		if (scno == NO_SYSCALL)
 			syscall_set_return_value(current, regs, -ENOSYS, 0);
-		scno = syscall_trace_enter(regs);
+		scno = syscall_trace_enter(regs, flags);
 		if (scno == NO_SYSCALL)
 			goto trace_exit;
 	}
@@ -143,7 +143,8 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
 	}
 
 trace_exit:
-	syscall_trace_exit(regs);
+	flags = read_thread_flags();
+	syscall_trace_exit(regs, flags);
 }
 
 void do_el0_svc(struct pt_regs *regs)
-- 
2.34.1



* [PATCH v11 03/14] arm64: ptrace: Move rseq_syscall() before audit_syscall_exit()
  2026-01-28  3:19 [PATCH v11 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
  2026-01-28  3:19 ` [PATCH v11 01/14] entry: Remove unused syscall in syscall_trace_enter() Jinjie Ruan
  2026-01-28  3:19 ` [PATCH v11 02/14] arm64/ptrace: Refactor syscall_trace_enter/exit() Jinjie Ruan
@ 2026-01-28  3:19 ` Jinjie Ruan
  2026-01-29 12:06   ` Kevin Brodsky
  2026-01-28  3:19 ` [PATCH v11 04/14] arm64: syscall: Rework el0_svc_common() Jinjie Ruan
                   ` (10 subsequent siblings)
  13 siblings, 1 reply; 33+ messages in thread
From: Jinjie Ruan @ 2026-01-28  3:19 UTC (permalink / raw)
  To: catalin.marinas, will, oleg, tglx, peterz, luto, shuah, kees, wad,
	kevin.brodsky, deller, akpm, charlie, ldv, mark.rutland,
	anshuman.khandual, song, ryan.roberts, thuth, ada.coupriediaz,
	broonie, pengcan, liqiang01, kmal, dvyukov, reddybalavignesh9979,
	richard.weiyang, linux-arm-kernel, linux-kernel, linux-kselftest
  Cc: ruanjinjie

Commit a9f3a74a29af ("entry: Provide generic syscall exit function")
introduced a generic syscall exit function that calls rseq_syscall()
before audit_syscall_exit() and arch_syscall_exit_tracehook().

Commit b74406f37737 ("arm: Add syscall detection for restartable
sequences") added rseq support for arm32, which likewise calls
rseq_syscall() before audit_syscall_exit() and tracehook_report_syscall().

However, commit 409d5db49867c ("arm64: rseq: Implement backend rseq
calls and select HAVE_RSEQ") implemented rseq for arm64 and calls
rseq_syscall() after audit_syscall_exit() and tracehook_report_syscall().
So compared with the generic entry and arm32 code, arm64 calls
rseq_syscall() a bit later.
rseq_syscall() a bit later.

But as commit b74406f37737 ("arm: Add syscall detection for restartable
sequences") notes, syscalls are not allowed inside restartable sequences,
so rseq_syscall() should be called at the very beginning of the syscall
exit path on CONFIG_DEBUG_RSEQ=y kernels. This helps detect whether a
syscall was issued inside a restartable sequence.

As for the impact of raising SIGSEGV via rseq_syscall(), it makes no
practical difference to signal delivery, because signals are only
processed in arm64_exit_to_user_mode() at the very end.

As for "regs", rseq_syscall() only checks and updates
instruction_pointer(regs); ptrace cannot modify the PC on the syscall
exit path, only the return value, so calling rseq_syscall() before or
after ptrace_report_syscall_exit() makes no difference.

Likewise, audit_syscall_exit() only checks the return value (x0 on
arm64), and trace_sys_exit() only uses the syscall number and the return
value, so calling rseq_syscall() before either of them also makes no
difference.

In preparation for moving arm64 over to the generic entry code, move
rseq_syscall() before audit_syscall_exit().

No functional changes.

Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 arch/arm64/kernel/ptrace.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 9f9aa3087c09..785280c76317 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -2443,6 +2443,8 @@ int syscall_trace_enter(struct pt_regs *regs, unsigned long flags)
 
 void syscall_trace_exit(struct pt_regs *regs, unsigned long flags)
 {
+	rseq_syscall(regs);
+
 	audit_syscall_exit(regs);
 
 	if (flags & _TIF_SYSCALL_TRACEPOINT)
@@ -2450,8 +2452,6 @@ void syscall_trace_exit(struct pt_regs *regs, unsigned long flags)
 
 	if (flags & (_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP))
 		report_syscall_exit(regs);
-
-	rseq_syscall(regs);
 }
 
 /*
-- 
2.34.1



* [PATCH v11 04/14] arm64: syscall: Rework el0_svc_common()
  2026-01-28  3:19 [PATCH v11 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
                   ` (2 preceding siblings ...)
  2026-01-28  3:19 ` [PATCH v11 03/14] arm64: ptrace: Move rseq_syscall() before audit_syscall_exit() Jinjie Ruan
@ 2026-01-28  3:19 ` Jinjie Ruan
  2026-01-28  3:19 ` [PATCH v11 05/14] arm64/ptrace: Not check _TIF_SECCOMP/SYSCALL_EMU for syscall_exit_work() Jinjie Ruan
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 33+ messages in thread
From: Jinjie Ruan @ 2026-01-28  3:19 UTC (permalink / raw)
  To: catalin.marinas, will, oleg, tglx, peterz, luto, shuah, kees, wad,
	kevin.brodsky, deller, akpm, charlie, ldv, mark.rutland,
	anshuman.khandual, song, ryan.roberts, thuth, ada.coupriediaz,
	broonie, pengcan, liqiang01, kmal, dvyukov, reddybalavignesh9979,
	richard.weiyang, linux-arm-kernel, linux-kernel, linux-kselftest
  Cc: ruanjinjie

The generic syscall_exit_work() has the following content:

| audit_syscall_exit(regs)
| trace_sys_exit(regs, ...)
| ptrace_report_syscall_exit(regs, step)

The generic syscall_exit_to_user_mode_work() has the following form:

| unsigned long work = READ_ONCE(current_thread_info()->syscall_work)
| rseq_syscall()
| if (unlikely(work & SYSCALL_WORK_EXIT))
|	syscall_exit_work(regs, work)

In preparation for moving arm64 over to the generic entry code,
rework el0_svc_common() as below:

- Rename syscall_trace_exit() to syscall_exit_work().

- Add syscall_exit_to_user_mode_work() function to replace
  the combination of read_thread_flags() and syscall_exit_work(),
  also move the syscall exit check logic into it. Move has_syscall_work()
  helper into asm/syscall.h for reuse.

- Since rseq_syscall() is now always called and is itself gated by the
  CONFIG_DEBUG_RSEQ macro, the CONFIG_DEBUG_RSEQ check is removed.

Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 arch/arm64/include/asm/syscall.h |  7 ++++++-
 arch/arm64/kernel/ptrace.c       | 14 +++++++++++---
 arch/arm64/kernel/syscall.c      | 20 +-------------------
 3 files changed, 18 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/include/asm/syscall.h b/arch/arm64/include/asm/syscall.h
index 30b203ef156b..c469d09a7964 100644
--- a/arch/arm64/include/asm/syscall.h
+++ b/arch/arm64/include/asm/syscall.h
@@ -120,7 +120,12 @@ static inline int syscall_get_arch(struct task_struct *task)
 	return AUDIT_ARCH_AARCH64;
 }
 
+static inline bool has_syscall_work(unsigned long flags)
+{
+	return unlikely(flags & _TIF_SYSCALL_WORK);
+}
+
 int syscall_trace_enter(struct pt_regs *regs, unsigned long flags);
-void syscall_trace_exit(struct pt_regs *regs, unsigned long flags);
+void syscall_exit_to_user_mode_work(struct pt_regs *regs);
 
 #endif	/* __ASM_SYSCALL_H */
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 785280c76317..bf8af5247db4 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -2441,10 +2441,8 @@ int syscall_trace_enter(struct pt_regs *regs, unsigned long flags)
 	return syscall;
 }
 
-void syscall_trace_exit(struct pt_regs *regs, unsigned long flags)
+static void syscall_exit_work(struct pt_regs *regs, unsigned long flags)
 {
-	rseq_syscall(regs);
-
 	audit_syscall_exit(regs);
 
 	if (flags & _TIF_SYSCALL_TRACEPOINT)
@@ -2454,6 +2452,16 @@ void syscall_trace_exit(struct pt_regs *regs, unsigned long flags)
 		report_syscall_exit(regs);
 }
 
+void syscall_exit_to_user_mode_work(struct pt_regs *regs)
+{
+	unsigned long flags = read_thread_flags();
+
+	rseq_syscall(regs);
+
+	if (has_syscall_work(flags) || flags & _TIF_SINGLESTEP)
+		syscall_exit_work(regs, flags);
+}
+
 /*
  * SPSR_ELx bits which are always architecturally RES0 per ARM DDI 0487D.a.
  * We permit userspace to set SSBS (AArch64 bit 12, AArch32 bit 23) which is
diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
index e8fd0d60ab09..66d4da641d97 100644
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -65,11 +65,6 @@ static void invoke_syscall(struct pt_regs *regs, unsigned int scno,
 	choose_random_kstack_offset(get_random_u16());
 }
 
-static inline bool has_syscall_work(unsigned long flags)
-{
-	return unlikely(flags & _TIF_SYSCALL_WORK);
-}
-
 static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
 			   const syscall_fn_t syscall_table[])
 {
@@ -130,21 +125,8 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
 	}
 
 	invoke_syscall(regs, scno, sc_nr, syscall_table);
-
-	/*
-	 * The tracing status may have changed under our feet, so we have to
-	 * check again. However, if we were tracing entry, then we always trace
-	 * exit regardless, as the old entry assembly did.
-	 */
-	if (!has_syscall_work(flags) && !IS_ENABLED(CONFIG_DEBUG_RSEQ)) {
-		flags = read_thread_flags();
-		if (!has_syscall_work(flags) && !(flags & _TIF_SINGLESTEP))
-			return;
-	}
-
 trace_exit:
-	flags = read_thread_flags();
-	syscall_trace_exit(regs, flags);
+	syscall_exit_to_user_mode_work(regs);
 }
 
 void do_el0_svc(struct pt_regs *regs)
-- 
2.34.1



* [PATCH v11 05/14] arm64/ptrace: Not check _TIF_SECCOMP/SYSCALL_EMU for syscall_exit_work()
  2026-01-28  3:19 [PATCH v11 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
                   ` (3 preceding siblings ...)
  2026-01-28  3:19 ` [PATCH v11 04/14] arm64: syscall: Rework el0_svc_common() Jinjie Ruan
@ 2026-01-28  3:19 ` Jinjie Ruan
  2026-01-28  3:19 ` [PATCH v11 06/14] arm64/ptrace: Do not report_syscall_exit() for PTRACE_SYSEMU_SINGLESTEP Jinjie Ruan
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 33+ messages in thread
From: Jinjie Ruan @ 2026-01-28  3:19 UTC (permalink / raw)
  To: catalin.marinas, will, oleg, tglx, peterz, luto, shuah, kees, wad,
	kevin.brodsky, deller, akpm, charlie, ldv, mark.rutland,
	anshuman.khandual, song, ryan.roberts, thuth, ada.coupriediaz,
	broonie, pengcan, liqiang01, kmal, dvyukov, reddybalavignesh9979,
	richard.weiyang, linux-arm-kernel, linux-kernel, linux-kselftest
  Cc: ruanjinjie

syscall_exit_work() does not handle seccomp, so do not check
_TIF_SECCOMP for syscall_exit_work().

And as the ptrace(2) man page says of PTRACE_SYSEMU and
PTRACE_SYSEMU_SINGLESTEP: "For PTRACE_SYSEMU, continue and stop on entry
to the next system call, which will not be executed. For
PTRACE_SYSEMU_SINGLESTEP, do the same but also singlestep if not a
system call." Only the syscall entry needs to be reported for
SYSCALL_EMU, so do not check _TIF_SYSCALL_EMU for syscall_exit_work()
either.

After this, audit_syscall_exit() and report_syscall_exit() will
no longer be called if only SECCOMP and/or SYSCALL_EMU is set.

Also remove has_syscall_work(), as it is now only used in
el0_svc_common().

This is another preparation for moving arm64 over to the generic
entry code.

Link: https://man7.org/linux/man-pages/man2/ptrace.2.html
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 arch/arm64/include/asm/syscall.h     | 5 -----
 arch/arm64/include/asm/thread_info.h | 3 +++
 arch/arm64/kernel/ptrace.c           | 2 +-
 arch/arm64/kernel/syscall.c          | 2 +-
 4 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/syscall.h b/arch/arm64/include/asm/syscall.h
index c469d09a7964..dea392c081ca 100644
--- a/arch/arm64/include/asm/syscall.h
+++ b/arch/arm64/include/asm/syscall.h
@@ -120,11 +120,6 @@ static inline int syscall_get_arch(struct task_struct *task)
 	return AUDIT_ARCH_AARCH64;
 }
 
-static inline bool has_syscall_work(unsigned long flags)
-{
-	return unlikely(flags & _TIF_SYSCALL_WORK);
-}
-
 int syscall_trace_enter(struct pt_regs *regs, unsigned long flags);
 void syscall_exit_to_user_mode_work(struct pt_regs *regs);
 
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index 24fcd6adaa33..ef1462b9b00b 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -110,6 +110,9 @@ void arch_setup_new_exec(void);
 				 _TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP | \
 				 _TIF_SYSCALL_EMU)
 
+#define _TIF_SYSCALL_EXIT_WORK	(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
+				 _TIF_SYSCALL_TRACEPOINT)
+
 #ifdef CONFIG_SHADOW_CALL_STACK
 #define INIT_SCS							\
 	.scs_base	= init_shadow_call_stack,			\
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index bf8af5247db4..ec30a23e7e93 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -2458,7 +2458,7 @@ void syscall_exit_to_user_mode_work(struct pt_regs *regs)
 
 	rseq_syscall(regs);
 
-	if (has_syscall_work(flags) || flags & _TIF_SINGLESTEP)
+	if (unlikely(flags & _TIF_SYSCALL_EXIT_WORK) || flags & _TIF_SINGLESTEP)
 		syscall_exit_work(regs, flags);
 }
 
diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
index 66d4da641d97..ec478fc37a9f 100644
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -101,7 +101,7 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
 		return;
 	}
 
-	if (has_syscall_work(flags)) {
+	if (unlikely(flags & _TIF_SYSCALL_WORK)) {
 		/*
 		 * The de-facto standard way to skip a system call using ptrace
 		 * is to set the system call to -1 (NO_SYSCALL) and set x0 to a
-- 
2.34.1



* [PATCH v11 06/14] arm64/ptrace: Do not report_syscall_exit() for PTRACE_SYSEMU_SINGLESTEP
  2026-01-28  3:19 [PATCH v11 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
                   ` (4 preceding siblings ...)
  2026-01-28  3:19 ` [PATCH v11 05/14] arm64/ptrace: Not check _TIF_SECCOMP/SYSCALL_EMU for syscall_exit_work() Jinjie Ruan
@ 2026-01-28  3:19 ` Jinjie Ruan
  2026-01-28  3:19 ` [PATCH v11 07/14] arm64/ptrace: Expand secure_computing() in place Jinjie Ruan
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 33+ messages in thread
From: Jinjie Ruan @ 2026-01-28  3:19 UTC (permalink / raw)
  To: catalin.marinas, will, oleg, tglx, peterz, luto, shuah, kees, wad,
	kevin.brodsky, deller, akpm, charlie, ldv, mark.rutland,
	anshuman.khandual, song, ryan.roberts, thuth, ada.coupriediaz,
	broonie, pengcan, liqiang01, kmal, dvyukov, reddybalavignesh9979,
	richard.weiyang, linux-arm-kernel, linux-kernel, linux-kselftest
  Cc: ruanjinjie

The generic report_single_step() always returns false if SYSCALL_EMU
is set, but arm64 only checks _TIF_SINGLESTEP and does not check
_TIF_SYSCALL_EMU, which means that if both _TIF_SINGLESTEP and
_TIF_SYSCALL_EMU are set, the generic entry code will not report
a single-step, whereas arm64 will.

As the ptrace(2) man page says of PTRACE_SYSEMU and
PTRACE_SYSEMU_SINGLESTEP: "For PTRACE_SYSEMU, continue and stop on
entry to the next system call, which will not be executed. For
PTRACE_SYSEMU_SINGLESTEP, do the same but also singlestep if not a
system call." And as the comment on the generic entry's
report_single_step() explains, if SYSCALL_EMU is set, the only reason
to report is when SINGLESTEP is also set (i.e.
PTRACE_SYSEMU_SINGLESTEP), because the syscall instruction has
already been reported in syscall_trace_enter(), so there is no need
to report the syscall again in syscall_exit_work().

In preparation for moving arm64 over to the generic entry code:

- Add a report_single_step() helper to arm64 to make this explicit.

- Do not call report_syscall_exit() if both _TIF_SYSCALL_EMU and
  _TIF_SINGLESTEP are set.
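
The resulting decision can be modelled in plain userspace C (a sketch
only; the flag values below are illustrative stand-ins, not the
kernel's actual bit positions):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for the real _TIF_* bits. */
#define TIF_SINGLESTEP  (1UL << 0)
#define TIF_SYSCALL_EMU (1UL << 1)

/*
 * Mirrors the generic report_single_step() semantics: with
 * SYSCALL_EMU set the syscall was already reported at entry, so a
 * syscall-exit report is never issued, even for
 * PTRACE_SYSEMU_SINGLESTEP.
 */
static bool report_single_step(unsigned long flags)
{
	if (flags & TIF_SYSCALL_EMU)
		return false;

	return flags & TIF_SINGLESTEP;
}
```

With this helper, arm64 and the generic entry code agree on the
EMU-plus-singlestep case that previously diverged.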

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 arch/arm64/kernel/ptrace.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index ec30a23e7e93..cc2bac9c95d6 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -2441,14 +2441,25 @@ int syscall_trace_enter(struct pt_regs *regs, unsigned long flags)
 	return syscall;
 }
 
+static inline bool report_single_step(unsigned long flags)
+{
+	if (flags & _TIF_SYSCALL_EMU)
+		return false;
+
+	return flags & _TIF_SINGLESTEP;
+}
+
 static void syscall_exit_work(struct pt_regs *regs, unsigned long flags)
 {
+	bool step;
+
 	audit_syscall_exit(regs);
 
 	if (flags & _TIF_SYSCALL_TRACEPOINT)
 		trace_sys_exit(regs, syscall_get_return_value(current, regs));
 
-	if (flags & (_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP))
+	step = report_single_step(flags);
+	if (step || flags & _TIF_SYSCALL_TRACE)
 		report_syscall_exit(regs);
 }
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v11 07/14] arm64/ptrace: Expand secure_computing() in place
  2026-01-28  3:19 [PATCH v11 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
                   ` (5 preceding siblings ...)
  2026-01-28  3:19 ` [PATCH v11 06/14] arm64/ptrace: Do not report_syscall_exit() for PTRACE_SYSEMU_SINGLESTEP Jinjie Ruan
@ 2026-01-28  3:19 ` Jinjie Ruan
  2026-01-28  3:19 ` [PATCH v11 08/14] arm64/ptrace: Use syscall_get_arguments() helper Jinjie Ruan
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 33+ messages in thread
From: Jinjie Ruan @ 2026-01-28  3:19 UTC (permalink / raw)
  To: catalin.marinas, will, oleg, tglx, peterz, luto, shuah, kees, wad,
	kevin.brodsky, deller, akpm, charlie, ldv, mark.rutland,
	anshuman.khandual, song, ryan.roberts, thuth, ada.coupriediaz,
	broonie, pengcan, liqiang01, kmal, dvyukov, reddybalavignesh9979,
	richard.weiyang, linux-arm-kernel, linux-kernel, linux-kselftest
  Cc: ruanjinjie

The generic entry code expands secure_computing() in place and calls
__secure_computing() directly.

In order to switch arm64 over to the generic entry code, refactor the
secure_computing() call in syscall_trace_enter() the same way.

No functional changes.
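
The shape of the refactored entry check can be sketched in userspace
C (names are illustrative; __secure_computing_stub() stands in for
the real seccomp hook, which returns -1 to deny the syscall):

```c
#include <assert.h>

#define NO_SYSCALL (-1L)

/* Stub for __secure_computing(): -1 denies the syscall, 0 allows. */
static int __secure_computing_stub(int deny)
{
	return deny ? -1 : 0;
}

/*
 * After the refactor, the seccomp hook runs only when the SECCOMP
 * work bit is set, mirroring the generic entry code.
 */
static long trace_enter(int seccomp_enabled, int deny, long syscall)
{
	int ret = 0;

	if (seccomp_enabled) {
		ret = __secure_computing_stub(deny);
		if (ret == -1)
			return NO_SYSCALL;
	}

	return ret ? ret : syscall;	/* the patch spells this "ret ? : syscall" */
}
```

Skipping the wrapper avoids re-reading the thread flags that the
caller has already fetched.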

Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 arch/arm64/kernel/ptrace.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index cc2bac9c95d6..57ea0e4aaf82 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -2419,8 +2419,11 @@ int syscall_trace_enter(struct pt_regs *regs, unsigned long flags)
 	}
 
 	/* Do the secure computing after ptrace; failures should be fast. */
-	if (secure_computing() == -1)
-		return NO_SYSCALL;
+	if (flags & _TIF_SECCOMP) {
+		ret = __secure_computing();
+		if (ret == -1)
+			return NO_SYSCALL;
+	}
 
 	/* Either of the above might have changed the syscall number */
 	syscall = syscall_get_nr(current, regs);
@@ -2438,7 +2441,7 @@ int syscall_trace_enter(struct pt_regs *regs, unsigned long flags)
 	audit_syscall_entry(syscall, regs->orig_x0, regs->regs[1],
 			    regs->regs[2], regs->regs[3]);
 
-	return syscall;
+	return ret ? : syscall;
 }
 
 static inline bool report_single_step(unsigned long flags)
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v11 08/14] arm64/ptrace: Use syscall_get_arguments() helper
  2026-01-28  3:19 [PATCH v11 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
                   ` (6 preceding siblings ...)
  2026-01-28  3:19 ` [PATCH v11 07/14] arm64/ptrace: Expand secure_computing() in place Jinjie Ruan
@ 2026-01-28  3:19 ` Jinjie Ruan
  2026-01-28  3:19 ` [PATCH v11 09/14] entry: Rework syscall_exit_to_user_mode_work() for arch reuse Jinjie Ruan
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 33+ messages in thread
From: Jinjie Ruan @ 2026-01-28  3:19 UTC (permalink / raw)
  To: catalin.marinas, will, oleg, tglx, peterz, luto, shuah, kees, wad,
	kevin.brodsky, deller, akpm, charlie, ldv, mark.rutland,
	anshuman.khandual, song, ryan.roberts, thuth, ada.coupriediaz,
	broonie, pengcan, liqiang01, kmal, dvyukov, reddybalavignesh9979,
	richard.weiyang, linux-arm-kernel, linux-kernel, linux-kselftest
  Cc: ruanjinjie

The generic entry code checks the audit context first and uses the
syscall_get_arguments() helper.

In order to switch arm64 over to the generic entry code:

- Use syscall_get_arguments() to fetch audit_syscall_entry()'s last
  four parameters.

- Extract a syscall_enter_audit() helper to make this explicit.

- Check the audit context first, which avoids an unnecessary memcpy
  when the current process's audit_context is NULL.

Overall these changes make syscall_enter_audit() exactly equivalent
to the generic one.

No functional changes.
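
The saved copy can be demonstrated with a userspace model (a sketch;
get_args_stub() stands in for syscall_get_arguments(), and the
counter only exists to make the skipped copy observable):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

static int copies;	/* counts argument fetches, for illustration */

/*
 * Stub for syscall_get_arguments(): copying the six arguments out of
 * pt_regs is the memcpy the commit message wants to avoid when
 * auditing is disabled.
 */
static void get_args_stub(const unsigned long *regs, unsigned long *args)
{
	memcpy(args, regs, 6 * sizeof(*args));
	copies++;
}

/* Mirrors syscall_enter_audit(): fetch arguments only when an audit
 * context actually exists. */
static void syscall_enter_audit(const void *audit_ctx,
				const unsigned long *regs)
{
	if (audit_ctx) {
		unsigned long args[6];

		get_args_stub(regs, args);
		/* audit_syscall_entry(nr, args[0..3]) would go here */
	}
}
```

When audit_ctx is NULL the argument copy never happens, which is the
whole point of checking the context first.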

Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 arch/arm64/kernel/ptrace.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 57ea0e4aaf82..6e86aec8d607 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -2407,6 +2407,16 @@ static void report_syscall_exit(struct pt_regs *regs)
 	}
 }
 
+static inline void syscall_enter_audit(struct pt_regs *regs, long syscall)
+{
+	if (unlikely(audit_context())) {
+		unsigned long args[6];
+
+		syscall_get_arguments(current, regs, args);
+		audit_syscall_entry(syscall, args[0], args[1], args[2], args[3]);
+	}
+}
+
 int syscall_trace_enter(struct pt_regs *regs, unsigned long flags)
 {
 	long syscall;
@@ -2438,8 +2448,7 @@ int syscall_trace_enter(struct pt_regs *regs, unsigned long flags)
 		 syscall = syscall_get_nr(current, regs);
 	}
 
-	audit_syscall_entry(syscall, regs->orig_x0, regs->regs[1],
-			    regs->regs[2], regs->regs[3]);
+	syscall_enter_audit(regs, syscall);
 
 	return ret ? : syscall;
 }
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v11 09/14] entry: Rework syscall_exit_to_user_mode_work() for arch reuse
  2026-01-28  3:19 [PATCH v11 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
                   ` (7 preceding siblings ...)
  2026-01-28  3:19 ` [PATCH v11 08/14] arm64/ptrace: Use syscall_get_arguments() helper Jinjie Ruan
@ 2026-01-28  3:19 ` Jinjie Ruan
  2026-01-29 12:06   ` Kevin Brodsky
  2026-01-30 21:53   ` [tip: core/entry] entry: Rework syscall_exit_to_user_mode_work() for architecture reuse tip-bot2 for Jinjie Ruan
  2026-01-28  3:19 ` [PATCH v11 10/14] entry: Add arch_ptrace_report_syscall_entry/exit() Jinjie Ruan
                   ` (4 subsequent siblings)
  13 siblings, 2 replies; 33+ messages in thread
From: Jinjie Ruan @ 2026-01-28  3:19 UTC (permalink / raw)
  To: catalin.marinas, will, oleg, tglx, peterz, luto, shuah, kees, wad,
	kevin.brodsky, deller, akpm, charlie, ldv, mark.rutland,
	anshuman.khandual, song, ryan.roberts, thuth, ada.coupriediaz,
	broonie, pengcan, liqiang01, kmal, dvyukov, reddybalavignesh9979,
	richard.weiyang, linux-arm-kernel, linux-kernel, linux-kselftest
  Cc: ruanjinjie

In the generic entry code, the beginning of
syscall_exit_to_user_mode_work() can be reused on arm64, so it makes
sense to rework it.

In preparation for moving arm64 over to the generic entry code, and
since nothing calls syscall_exit_to_user_mode_work() except
syscall_exit_to_user_mode(), move local_irq_disable_exit_to_user() and
syscall_exit_to_user_mode_prepare() out of
syscall_exit_to_user_mode_work() and into its only caller.

Also update the comment. No functional changes.
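
The call ordering after the rework can be sketched in userspace C
(the functions are stubs that only record their invocation order;
names follow the kernel's but the bodies are illustrative):

```c
#include <assert.h>
#include <string.h>

static char trace[64];

static void record(const char *s)
{
	strcat(trace, s);
}

/* After the rework, the work function no longer disables interrupts
 * or runs the prepare step itself... */
static void syscall_exit_to_user_mode_work(void)  { record("work;"); }
static void local_irq_disable_exit_to_user(void)  { record("irqoff;"); }
static void syscall_exit_to_user_mode_prepare(void) { record("prepare;"); }
static void exit_to_user_mode(void)               { record("exit;"); }

/* ...its only caller performs them, so an architecture can reuse
 * just the work part of the exit path. */
static void syscall_exit_to_user_mode(void)
{
	syscall_exit_to_user_mode_work();
	local_irq_disable_exit_to_user();
	syscall_exit_to_user_mode_prepare();
	exit_to_user_mode();
}
```

The overall sequence seen by syscall_exit_to_user_mode() is unchanged;
only the boundary between the helper and its caller moves.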

Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 include/linux/entry-common.h | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
index e4a8287af822..c4fea642d931 100644
--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -125,14 +125,14 @@ void syscall_exit_work(struct pt_regs *regs, unsigned long work);
  * syscall_exit_to_user_mode_work - Handle work before returning to user mode
  * @regs:	Pointer to currents pt_regs
  *
- * Same as step 1 and 2 of syscall_exit_to_user_mode() but without calling
+ * Same as step 1 of syscall_exit_to_user_mode() but without calling
+ * local_irq_disable(), syscall_exit_to_user_mode_prepare() and
  * exit_to_user_mode() to perform the final transition to user mode.
  *
- * Calling convention is the same as for syscall_exit_to_user_mode() and it
- * returns with all work handled and interrupts disabled. The caller must
- * invoke exit_to_user_mode() before actually switching to user mode to
- * make the final state transitions. Interrupts must stay disabled between
- * return from this function and the invocation of exit_to_user_mode().
+ * Calling convention is the same as for syscall_exit_to_user_mode(). The
+ * caller must invoke local_irq_disable(), __exit_to_user_mode_prepare() and
+ * exit_to_user_mode() before actually switching to user mode to
+ * make the final state transitions.
  */
 static __always_inline void syscall_exit_to_user_mode_work(struct pt_regs *regs)
 {
@@ -155,8 +155,6 @@ static __always_inline void syscall_exit_to_user_mode_work(struct pt_regs *regs)
 	 */
 	if (unlikely(work & SYSCALL_WORK_EXIT))
 		syscall_exit_work(regs, work);
-	local_irq_disable_exit_to_user();
-	syscall_exit_to_user_mode_prepare(regs);
 }
 
 /**
@@ -192,6 +190,8 @@ static __always_inline void syscall_exit_to_user_mode(struct pt_regs *regs)
 {
 	instrumentation_begin();
 	syscall_exit_to_user_mode_work(regs);
+	local_irq_disable_exit_to_user();
+	syscall_exit_to_user_mode_prepare(regs);
 	instrumentation_end();
 	exit_to_user_mode();
 }
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v11 10/14] entry: Add arch_ptrace_report_syscall_entry/exit()
  2026-01-28  3:19 [PATCH v11 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
                   ` (8 preceding siblings ...)
  2026-01-28  3:19 ` [PATCH v11 09/14] entry: Rework syscall_exit_to_user_mode_work() for arch reuse Jinjie Ruan
@ 2026-01-28  3:19 ` Jinjie Ruan
  2026-01-30 21:53   ` [tip: core/entry] " tip-bot2 for Jinjie Ruan
  2026-01-28  3:19 ` [PATCH v11 11/14] arm64: entry: Convert to generic entry Jinjie Ruan
                   ` (3 subsequent siblings)
  13 siblings, 1 reply; 33+ messages in thread
From: Jinjie Ruan @ 2026-01-28  3:19 UTC (permalink / raw)
  To: catalin.marinas, will, oleg, tglx, peterz, luto, shuah, kees, wad,
	kevin.brodsky, deller, akpm, charlie, ldv, mark.rutland,
	anshuman.khandual, song, ryan.roberts, thuth, ada.coupriediaz,
	broonie, pengcan, liqiang01, kmal, dvyukov, reddybalavignesh9979,
	richard.weiyang, linux-arm-kernel, linux-kernel, linux-kselftest
  Cc: ruanjinjie

Unlike the generic entry code, arm64 needs, for historical ABI
reasons, to save and restore a scratch register (ip/r12 on AArch32,
x7 on AArch64) around syscall entry/exit reports, because that
register is clobbered to denote syscall entry vs. exit.

In preparation for moving arm64 over to the generic entry code, add
arch_ptrace_report_syscall_entry/exit() hooks that default to
ptrace_report_syscall_entry/exit(). This allows arm64 to provide an
architecture-specific implementation.
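
The clobber-and-restore dance can be modelled in userspace C (a
sketch with a toy register file; index 7 stands in for x7 on a native
AArch64 task):

```c
#include <assert.h>

enum ptrace_syscall_dir {
	PTRACE_SYSCALL_ENTER = 0,
	PTRACE_SYSCALL_EXIT  = 1,
};

/* Toy register file standing in for pt_regs. */
static unsigned long regs[8];

/*
 * Mirrors ptrace_save_reg(): stash the scratch register, then write
 * the direction marker the tracer observes during the stop.
 */
static unsigned long save_reg(enum ptrace_syscall_dir dir, int *regno)
{
	unsigned long saved;

	*regno = 7;		/* would be 12 for a compat (AArch32) task */
	saved = regs[*regno];
	regs[*regno] = dir;

	return saved;
}
```

During the ptrace stop the register holds only the direction marker,
which is why tracer writes to it are discarded; restoring the saved
value afterwards keeps the ABI intact.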

Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 include/linux/entry-common.h  | 39 +++++++++++++++++++++++++++++++++++
 kernel/entry/syscall-common.c |  4 ++--
 2 files changed, 41 insertions(+), 2 deletions(-)

diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
index c4fea642d931..48bdde74a3e1 100644
--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -45,6 +45,25 @@
 				 SYSCALL_WORK_SYSCALL_EXIT_TRAP	|	\
 				 ARCH_SYSCALL_WORK_EXIT)
 
+/**
+ * arch_ptrace_report_syscall_entry - Architecture specific
+ *				      ptrace_report_syscall_entry().
+ *
+ * Invoked from syscall_trace_enter() to wrap ptrace_report_syscall_entry().
+ * Defaults to ptrace_report_syscall_entry.
+ *
+ * The main purpose is to support arch-specific ptrace_report_syscall_entry()
+ * implementation.
+ */
+static __always_inline int arch_ptrace_report_syscall_entry(struct pt_regs *regs);
+
+#ifndef arch_ptrace_report_syscall_entry
+static __always_inline int arch_ptrace_report_syscall_entry(struct pt_regs *regs)
+{
+	return ptrace_report_syscall_entry(regs);
+}
+#endif
+
 long syscall_trace_enter(struct pt_regs *regs, unsigned long work);
 
 /**
@@ -112,6 +131,26 @@ static __always_inline long syscall_enter_from_user_mode(struct pt_regs *regs, l
 	return ret;
 }
 
+/**
+ * arch_ptrace_report_syscall_exit - Architecture specific
+ *				     ptrace_report_syscall_exit.
+ *
+ * Invoked from syscall_exit_work() to wrap ptrace_report_syscall_exit().
+ *
+ * The main purpose is to support arch-specific ptrace_report_syscall_exit
+ * implementation.
+ */
+static __always_inline void arch_ptrace_report_syscall_exit(struct pt_regs *regs,
+							    int step);
+
+#ifndef arch_ptrace_report_syscall_exit
+static __always_inline void arch_ptrace_report_syscall_exit(struct pt_regs *regs,
+							    int step)
+{
+	ptrace_report_syscall_exit(regs, step);
+}
+#endif
+
 /**
  * syscall_exit_work - Handle work before returning to user mode
  * @regs:	Pointer to current pt_regs
diff --git a/kernel/entry/syscall-common.c b/kernel/entry/syscall-common.c
index e6237b536d8b..bb5f61f5629d 100644
--- a/kernel/entry/syscall-common.c
+++ b/kernel/entry/syscall-common.c
@@ -33,7 +33,7 @@ long syscall_trace_enter(struct pt_regs *regs, unsigned long work)
 
 	/* Handle ptrace */
 	if (work & (SYSCALL_WORK_SYSCALL_TRACE | SYSCALL_WORK_SYSCALL_EMU)) {
-		ret = ptrace_report_syscall_entry(regs);
+		ret = arch_ptrace_report_syscall_entry(regs);
 		if (ret || (work & SYSCALL_WORK_SYSCALL_EMU))
 			return -1L;
 	}
@@ -99,5 +99,5 @@ void syscall_exit_work(struct pt_regs *regs, unsigned long work)
 
 	step = report_single_step(work);
 	if (step || work & SYSCALL_WORK_SYSCALL_TRACE)
-		ptrace_report_syscall_exit(regs, step);
+		arch_ptrace_report_syscall_exit(regs, step);
 }
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v11 11/14] arm64: entry: Convert to generic entry
  2026-01-28  3:19 [PATCH v11 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
                   ` (9 preceding siblings ...)
  2026-01-28  3:19 ` [PATCH v11 10/14] entry: Add arch_ptrace_report_syscall_entry/exit() Jinjie Ruan
@ 2026-01-28  3:19 ` Jinjie Ruan
  2026-01-28  3:19 ` [PATCH v11 12/14] arm64: Inline el0_svc_common() Jinjie Ruan
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 33+ messages in thread
From: Jinjie Ruan @ 2026-01-28  3:19 UTC (permalink / raw)
  To: catalin.marinas, will, oleg, tglx, peterz, luto, shuah, kees, wad,
	kevin.brodsky, deller, akpm, charlie, ldv, mark.rutland,
	anshuman.khandual, song, ryan.roberts, thuth, ada.coupriediaz,
	broonie, pengcan, liqiang01, kmal, dvyukov, reddybalavignesh9979,
	richard.weiyang, linux-arm-kernel, linux-kernel, linux-kselftest
  Cc: ruanjinjie

Currently, x86, RISC-V and LoongArch use the generic entry code, which
makes maintainers' work easier and the code more elegant. arm64 has
already switched to the generic IRQ entry, so completely convert arm64
to use the generic entry infrastructure from kernel/entry/*.

The changes are below:
 - Remove the TIF_SYSCALL_* flags.

 - Remove _TIF_SYSCALL_WORK/_TIF_SYSCALL_EXIT_WORK as they are
   equivalent to SYSCALL_WORK_ENTER/SYSCALL_WORK_EXIT.

 - Implement arch_ptrace_report_syscall_entry/exit() with
   report_syscall_entry/exit() to do the arm64-specific save/restore
   during syscall entry/exit.

 - Remove arm64's syscall_trace_enter() and its sub-functions,
   including syscall_enter_audit(), by calling the generic entry
   functions with equivalent functionality.

 - Set/clear the SYSCALL_EXIT_TRAP work flag when enabling/disabling
   single-step, so that _TIF_SINGLESTEP can be replaced with the
   generic SYSCALL_EXIT_TRAP, _TIF_SYSCALL_EXIT_WORK plus
   _TIF_SINGLESTEP can be replaced with the generic SYSCALL_WORK_EXIT,
   and arm64's report_single_step() can be replaced with the generic
   version.

 - Remove arm64's syscall_exit_to_user_mode_work(),
   syscall_exit_work() etc. by using the generic entry functions of
   the same name.

 - Implement arch_syscall_is_vdso_sigreturn() to support "Syscall User
   Dispatch".
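
The native-task half of the sigreturn detection can be sketched in
userspace C (illustrative only; in the kernel the trampoline address
comes from VDSO_SYMBOL(..., sigtramp) and the +8 offset reflects where
the svc instruction sits in the arm64 vdso trampoline):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Model of the non-compat branch of arch_syscall_is_vdso_sigreturn():
 * the syscall is a sigreturn iff the PC is 8 bytes past the vdso
 * sigtramp symbol.
 */
static bool is_vdso_sigreturn(unsigned long pc, unsigned long sigtramp)
{
	return pc == sigtramp + 8;
}
```

Syscall User Dispatch uses this test to let the vdso sigreturn
trampoline through while intercepting every other syscall.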

Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
Suggested-by: Kevin Brodsky <kevin.brodsky@arm.com>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 arch/arm64/Kconfig                    |   2 +-
 arch/arm64/include/asm/entry-common.h |  76 +++++++++++++
 arch/arm64/include/asm/syscall.h      |  19 +++-
 arch/arm64/include/asm/thread_info.h  |  19 +---
 arch/arm64/kernel/debug-monitors.c    |   7 ++
 arch/arm64/kernel/ptrace.c            | 154 --------------------------
 arch/arm64/kernel/signal.c            |   2 +-
 arch/arm64/kernel/syscall.c           |   6 +-
 8 files changed, 107 insertions(+), 178 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 93173f0a09c7..f50b49ce8b65 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -153,9 +153,9 @@ config ARM64
 	select GENERIC_CPU_DEVICES
 	select GENERIC_CPU_VULNERABILITIES
 	select GENERIC_EARLY_IOREMAP
+	select GENERIC_ENTRY
 	select GENERIC_IDLE_POLL_SETUP
 	select GENERIC_IOREMAP
-	select GENERIC_IRQ_ENTRY
 	select GENERIC_IRQ_IPI
 	select GENERIC_IRQ_KEXEC_CLEAR_VM_FORWARD
 	select GENERIC_IRQ_PROBE
diff --git a/arch/arm64/include/asm/entry-common.h b/arch/arm64/include/asm/entry-common.h
index cab8cd78f693..d8bf4bf342e8 100644
--- a/arch/arm64/include/asm/entry-common.h
+++ b/arch/arm64/include/asm/entry-common.h
@@ -3,14 +3,21 @@
 #ifndef _ASM_ARM64_ENTRY_COMMON_H
 #define _ASM_ARM64_ENTRY_COMMON_H
 
+#include <linux/ptrace.h>
 #include <linux/thread_info.h>
 
+#include <asm/compat.h>
 #include <asm/cpufeature.h>
 #include <asm/daifflags.h>
 #include <asm/fpsimd.h>
 #include <asm/mte.h>
 #include <asm/stacktrace.h>
 
+enum ptrace_syscall_dir {
+	PTRACE_SYSCALL_ENTER = 0,
+	PTRACE_SYSCALL_EXIT,
+};
+
 #define ARCH_EXIT_TO_USER_MODE_WORK (_TIF_MTE_ASYNC_FAULT | _TIF_FOREIGN_FPSTATE)
 
 static __always_inline void arch_exit_to_user_mode_work(struct pt_regs *regs,
@@ -54,4 +61,73 @@ static inline bool arch_irqentry_exit_need_resched(void)
 
 #define arch_irqentry_exit_need_resched arch_irqentry_exit_need_resched
 
+static __always_inline unsigned long ptrace_save_reg(struct pt_regs *regs,
+						     enum ptrace_syscall_dir dir,
+						     int *regno)
+{
+	unsigned long saved_reg;
+
+	/*
+	 * We have some ABI weirdness here in the way that we handle syscall
+	 * exit stops because we indicate whether or not the stop has been
+	 * signalled from syscall entry or syscall exit by clobbering a general
+	 * purpose register (ip/r12 for AArch32, x7 for AArch64) in the tracee
+	 * and restoring its old value after the stop. This means that:
+	 *
+	 * - Any writes by the tracer to this register during the stop are
+	 *   ignored/discarded.
+	 *
+	 * - The actual value of the register is not available during the stop,
+	 *   so the tracer cannot save it and restore it later.
+	 *
+	 * - Syscall stops behave differently to seccomp and pseudo-step traps
+	 *   (the latter do not nobble any registers).
+	 */
+	*regno = (is_compat_task() ? 12 : 7);
+	saved_reg = regs->regs[*regno];
+	regs->regs[*regno] = dir;
+
+	return saved_reg;
+}
+
+static __always_inline int arch_ptrace_report_syscall_entry(struct pt_regs *regs)
+{
+	unsigned long saved_reg;
+	int regno, ret;
+
+	saved_reg = ptrace_save_reg(regs, PTRACE_SYSCALL_ENTER, &regno);
+	ret = ptrace_report_syscall_entry(regs);
+	if (ret)
+		forget_syscall(regs);
+	regs->regs[regno] = saved_reg;
+
+	return ret;
+}
+
+#define arch_ptrace_report_syscall_entry arch_ptrace_report_syscall_entry
+
+static __always_inline void arch_ptrace_report_syscall_exit(struct pt_regs *regs,
+							    int step)
+{
+	unsigned long saved_reg;
+	int regno;
+
+	saved_reg = ptrace_save_reg(regs, PTRACE_SYSCALL_EXIT, &regno);
+	if (!step) {
+		ptrace_report_syscall_exit(regs, 0);
+		regs->regs[regno] = saved_reg;
+	} else {
+		regs->regs[regno] = saved_reg;
+
+		/*
+		 * Signal a pseudo-step exception since we are stepping but
+		 * tracer modifications to the registers may have rewound the
+		 * state machine.
+		 */
+		ptrace_report_syscall_exit(regs, 1);
+	}
+}
+
+#define arch_ptrace_report_syscall_exit arch_ptrace_report_syscall_exit
+
 #endif /* _ASM_ARM64_ENTRY_COMMON_H */
diff --git a/arch/arm64/include/asm/syscall.h b/arch/arm64/include/asm/syscall.h
index dea392c081ca..240d45735cc5 100644
--- a/arch/arm64/include/asm/syscall.h
+++ b/arch/arm64/include/asm/syscall.h
@@ -9,6 +9,9 @@
 #include <linux/compat.h>
 #include <linux/err.h>
 
+#include <asm/compat.h>
+#include <asm/vdso.h>
+
 typedef long (*syscall_fn_t)(const struct pt_regs *regs);
 
 extern const syscall_fn_t sys_call_table[];
@@ -120,7 +123,19 @@ static inline int syscall_get_arch(struct task_struct *task)
 	return AUDIT_ARCH_AARCH64;
 }
 
-int syscall_trace_enter(struct pt_regs *regs, unsigned long flags);
-void syscall_exit_to_user_mode_work(struct pt_regs *regs);
+static inline bool arch_syscall_is_vdso_sigreturn(struct pt_regs *regs)
+{
+	unsigned long sigtramp;
+
+#ifdef CONFIG_COMPAT
+	if (is_compat_task()) {
+		unsigned long sigpage = (unsigned long)current->mm->context.sigpage;
+
+		return regs->pc >= sigpage && regs->pc < (sigpage + PAGE_SIZE);
+	}
+#endif
+	sigtramp = (unsigned long)VDSO_SYMBOL(current->mm->context.vdso, sigtramp);
+	return regs->pc == (sigtramp + 8);
+}
 
 #endif	/* __ASM_SYSCALL_H */
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index ef1462b9b00b..90be0c590b86 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -43,6 +43,7 @@ struct thread_info {
 	void			*scs_sp;
 #endif
 	u32			cpu;
+	unsigned long		syscall_work;   /* SYSCALL_WORK_ flags */
 };
 
 #define thread_saved_pc(tsk)	\
@@ -65,11 +66,6 @@ void arch_setup_new_exec(void);
 #define TIF_UPROBE		5	/* uprobe breakpoint or singlestep */
 #define TIF_MTE_ASYNC_FAULT	6	/* MTE Asynchronous Tag Check Fault */
 #define TIF_NOTIFY_SIGNAL	7	/* signal notifications exist */
-#define TIF_SYSCALL_TRACE	8	/* syscall trace active */
-#define TIF_SYSCALL_AUDIT	9	/* syscall auditing */
-#define TIF_SYSCALL_TRACEPOINT	10	/* syscall tracepoint for ftrace */
-#define TIF_SECCOMP		11	/* syscall secure computing */
-#define TIF_SYSCALL_EMU		12	/* syscall emulation active */
 #define TIF_PATCH_PENDING	13	/* pending live patching update */
 #define TIF_MEMDIE		18	/* is terminating due to OOM killer */
 #define TIF_FREEZE		19
@@ -92,27 +88,14 @@ void arch_setup_new_exec(void);
 #define _TIF_NEED_RESCHED_LAZY	(1 << TIF_NEED_RESCHED_LAZY)
 #define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
 #define _TIF_FOREIGN_FPSTATE	(1 << TIF_FOREIGN_FPSTATE)
-#define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
-#define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
-#define _TIF_SYSCALL_TRACEPOINT	(1 << TIF_SYSCALL_TRACEPOINT)
-#define _TIF_SECCOMP		(1 << TIF_SECCOMP)
-#define _TIF_SYSCALL_EMU	(1 << TIF_SYSCALL_EMU)
 #define _TIF_PATCH_PENDING	(1 << TIF_PATCH_PENDING)
 #define _TIF_UPROBE		(1 << TIF_UPROBE)
-#define _TIF_SINGLESTEP		(1 << TIF_SINGLESTEP)
 #define _TIF_32BIT		(1 << TIF_32BIT)
 #define _TIF_SVE		(1 << TIF_SVE)
 #define _TIF_MTE_ASYNC_FAULT	(1 << TIF_MTE_ASYNC_FAULT)
 #define _TIF_NOTIFY_SIGNAL	(1 << TIF_NOTIFY_SIGNAL)
 #define _TIF_TSC_SIGSEGV	(1 << TIF_TSC_SIGSEGV)
 
-#define _TIF_SYSCALL_WORK	(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
-				 _TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP | \
-				 _TIF_SYSCALL_EMU)
-
-#define _TIF_SYSCALL_EXIT_WORK	(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
-				 _TIF_SYSCALL_TRACEPOINT)
-
 #ifdef CONFIG_SHADOW_CALL_STACK
 #define INIT_SCS							\
 	.scs_base	= init_shadow_call_stack,			\
diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
index 29307642f4c9..e67643a70405 100644
--- a/arch/arm64/kernel/debug-monitors.c
+++ b/arch/arm64/kernel/debug-monitors.c
@@ -385,11 +385,18 @@ void user_enable_single_step(struct task_struct *task)
 
 	if (!test_and_set_ti_thread_flag(ti, TIF_SINGLESTEP))
 		set_regs_spsr_ss(task_pt_regs(task));
+
+	/*
+	 * Ensure that a trap is triggered once stepping out of a system
+	 * call prior to executing any user instruction.
+	 */
+	set_task_syscall_work(task, SYSCALL_EXIT_TRAP);
 }
 NOKPROBE_SYMBOL(user_enable_single_step);
 
 void user_disable_single_step(struct task_struct *task)
 {
 	clear_ti_thread_flag(task_thread_info(task), TIF_SINGLESTEP);
+	clear_task_syscall_work(task, SYSCALL_EXIT_TRAP);
 }
 NOKPROBE_SYMBOL(user_disable_single_step);
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 6e86aec8d607..f575b30b2dc4 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -8,7 +8,6 @@
  * Copyright (C) 2012 ARM Ltd.
  */
 
-#include <linux/audit.h>
 #include <linux/compat.h>
 #include <linux/kernel.h>
 #include <linux/sched/signal.h>
@@ -18,7 +17,6 @@
 #include <linux/smp.h>
 #include <linux/ptrace.h>
 #include <linux/user.h>
-#include <linux/seccomp.h>
 #include <linux/security.h>
 #include <linux/init.h>
 #include <linux/signal.h>
@@ -28,7 +26,6 @@
 #include <linux/hw_breakpoint.h>
 #include <linux/regset.h>
 #include <linux/elf.h>
-#include <linux/rseq.h>
 
 #include <asm/compat.h>
 #include <asm/cpufeature.h>
@@ -38,13 +35,9 @@
 #include <asm/mte.h>
 #include <asm/pointer_auth.h>
 #include <asm/stacktrace.h>
-#include <asm/syscall.h>
 #include <asm/traps.h>
 #include <asm/system_misc.h>
 
-#define CREATE_TRACE_POINTS
-#include <trace/events/syscalls.h>
-
 struct pt_regs_offset {
 	const char *name;
 	int offset;
@@ -2338,153 +2331,6 @@ long arch_ptrace(struct task_struct *child, long request,
 	return ptrace_request(child, request, addr, data);
 }
 
-enum ptrace_syscall_dir {
-	PTRACE_SYSCALL_ENTER = 0,
-	PTRACE_SYSCALL_EXIT,
-};
-
-static __always_inline unsigned long ptrace_save_reg(struct pt_regs *regs,
-						     enum ptrace_syscall_dir dir,
-						     int *regno)
-{
-	unsigned long saved_reg;
-
-	/*
-	 * We have some ABI weirdness here in the way that we handle syscall
-	 * exit stops because we indicate whether or not the stop has been
-	 * signalled from syscall entry or syscall exit by clobbering a general
-	 * purpose register (ip/r12 for AArch32, x7 for AArch64) in the tracee
-	 * and restoring its old value after the stop. This means that:
-	 *
-	 * - Any writes by the tracer to this register during the stop are
-	 *   ignored/discarded.
-	 *
-	 * - The actual value of the register is not available during the stop,
-	 *   so the tracer cannot save it and restore it later.
-	 *
-	 * - Syscall stops behave differently to seccomp and pseudo-step traps
-	 *   (the latter do not nobble any registers).
-	 */
-	*regno = (is_compat_task() ? 12 : 7);
-	saved_reg = regs->regs[*regno];
-	regs->regs[*regno] = dir;
-
-	return saved_reg;
-}
-
-static int report_syscall_entry(struct pt_regs *regs)
-{
-	unsigned long saved_reg;
-	int regno, ret;
-
-	saved_reg = ptrace_save_reg(regs, PTRACE_SYSCALL_ENTER, &regno);
-	ret = ptrace_report_syscall_entry(regs);
-	if (ret)
-		forget_syscall(regs);
-	regs->regs[regno] = saved_reg;
-
-	return ret;
-}
-
-static void report_syscall_exit(struct pt_regs *regs)
-{
-	unsigned long saved_reg;
-	int regno;
-
-	saved_reg = ptrace_save_reg(regs, PTRACE_SYSCALL_EXIT, &regno);
-	if (!test_thread_flag(TIF_SINGLESTEP)) {
-		ptrace_report_syscall_exit(regs, 0);
-		regs->regs[regno] = saved_reg;
-	} else {
-		regs->regs[regno] = saved_reg;
-
-		/*
-		 * Signal a pseudo-step exception since we are stepping but
-		 * tracer modifications to the registers may have rewound the
-		 * state machine.
-		 */
-		ptrace_report_syscall_exit(regs, 1);
-	}
-}
-
-static inline void syscall_enter_audit(struct pt_regs *regs, long syscall)
-{
-	if (unlikely(audit_context())) {
-		unsigned long args[6];
-
-		syscall_get_arguments(current, regs, args);
-		audit_syscall_entry(syscall, args[0], args[1], args[2], args[3]);
-	}
-}
-
-int syscall_trace_enter(struct pt_regs *regs, unsigned long flags)
-{
-	long syscall;
-	int ret;
-
-	if (flags & (_TIF_SYSCALL_EMU | _TIF_SYSCALL_TRACE)) {
-		ret = report_syscall_entry(regs);
-		if (ret || (flags & _TIF_SYSCALL_EMU))
-			return NO_SYSCALL;
-	}
-
-	/* Do the secure computing after ptrace; failures should be fast. */
-	if (flags & _TIF_SECCOMP) {
-		ret = __secure_computing();
-		if (ret == -1)
-			return NO_SYSCALL;
-	}
-
-	/* Either of the above might have changed the syscall number */
-	syscall = syscall_get_nr(current, regs);
-
-	if (test_thread_flag(TIF_SYSCALL_TRACEPOINT)) {
-		trace_sys_enter(regs, syscall);
-
-		/*
-		 * Probes or BPF hooks in the tracepoint may have changed the
-		 * system call number as well.
-		 */
-		 syscall = syscall_get_nr(current, regs);
-	}
-
-	syscall_enter_audit(regs, syscall);
-
-	return ret ? : syscall;
-}
-
-static inline bool report_single_step(unsigned long flags)
-{
-	if (flags & _TIF_SYSCALL_EMU)
-		return false;
-
-	return flags & _TIF_SINGLESTEP;
-}
-
-static void syscall_exit_work(struct pt_regs *regs, unsigned long flags)
-{
-	bool step;
-
-	audit_syscall_exit(regs);
-
-	if (flags & _TIF_SYSCALL_TRACEPOINT)
-		trace_sys_exit(regs, syscall_get_return_value(current, regs));
-
-	step = report_single_step(flags);
-	if (step || flags & _TIF_SYSCALL_TRACE)
-		report_syscall_exit(regs);
-}
-
-void syscall_exit_to_user_mode_work(struct pt_regs *regs)
-{
-	unsigned long flags = read_thread_flags();
-
-	rseq_syscall(regs);
-
-	if (unlikely(flags & _TIF_SYSCALL_EXIT_WORK) || flags & _TIF_SINGLESTEP)
-		syscall_exit_work(regs, flags);
-}
-
 /*
  * SPSR_ELx bits which are always architecturally RES0 per ARM DDI 0487D.a.
  * We permit userspace to set SSBS (AArch64 bit 12, AArch32 bit 23) which is
diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
index 1110eeb21f57..d3ec1892b3c7 100644
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@ -8,8 +8,8 @@
 
 #include <linux/cache.h>
 #include <linux/compat.h>
+#include <linux/entry-common.h>
 #include <linux/errno.h>
-#include <linux/irq-entry-common.h>
 #include <linux/kernel.h>
 #include <linux/signal.h>
 #include <linux/freezer.h>
diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
index ec478fc37a9f..77d00a5cf0e9 100644
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -2,6 +2,7 @@
 
 #include <linux/compiler.h>
 #include <linux/context_tracking.h>
+#include <linux/entry-common.h>
 #include <linux/errno.h>
 #include <linux/nospec.h>
 #include <linux/ptrace.h>
@@ -68,6 +69,7 @@ static void invoke_syscall(struct pt_regs *regs, unsigned int scno,
 static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
 			   const syscall_fn_t syscall_table[])
 {
+	unsigned long work = READ_ONCE(current_thread_info()->syscall_work);
 	unsigned long flags = read_thread_flags();
 
 	regs->orig_x0 = regs->regs[0];
@@ -101,7 +103,7 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
 		return;
 	}
 
-	if (unlikely(flags & _TIF_SYSCALL_WORK)) {
+	if (unlikely(work & SYSCALL_WORK_ENTER)) {
 		/*
 		 * The de-facto standard way to skip a system call using ptrace
 		 * is to set the system call to -1 (NO_SYSCALL) and set x0 to a
@@ -119,7 +121,7 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
 		 */
 		if (scno == NO_SYSCALL)
 			syscall_set_return_value(current, regs, -ENOSYS, 0);
-		scno = syscall_trace_enter(regs, flags);
+		scno = syscall_trace_enter(regs, work);
 		if (scno == NO_SYSCALL)
 			goto trace_exit;
 	}
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v11 12/14] arm64: Inline el0_svc_common()
  2026-01-28  3:19 [PATCH v11 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
                   ` (10 preceding siblings ...)
  2026-01-28  3:19 ` [PATCH v11 11/14] arm64: entry: Convert to generic entry Jinjie Ruan
@ 2026-01-28  3:19 ` Jinjie Ruan
  2026-01-28  3:19 ` [PATCH v11 13/14] entry: Inline syscall_exit_work() and syscall_trace_enter() Jinjie Ruan
  2026-01-28  3:19 ` [PATCH v11 14/14] selftests: sud_test: Support aarch64 Jinjie Ruan
  13 siblings, 0 replies; 33+ messages in thread
From: Jinjie Ruan @ 2026-01-28  3:19 UTC (permalink / raw)
  To: catalin.marinas, will, oleg, tglx, peterz, luto, shuah, kees, wad,
	kevin.brodsky, deller, akpm, charlie, ldv, mark.rutland,
	anshuman.khandual, song, ryan.roberts, thuth, ada.coupriediaz,
	broonie, pengcan, liqiang01, kmal, dvyukov, reddybalavignesh9979,
	richard.weiyang, linux-arm-kernel, linux-kernel, linux-kselftest
  Cc: ruanjinjie

After switching arm64 to the generic entry code, the compiler no longer
inlines el0_svc_common() into do_el0_svc(). So mark el0_svc_common()
__always_inline, which gives about a 1% performance uplift on "perf bench
basic syscall" on Kunpeng 920, measured on v6.19-rc1 as below.

| Metric     | W/O this patch | With this patch | Change    |
| ---------- | -------------- | --------------- | --------- |
| Total time | 2.195 [sec]    | 2.171 [sec]     |  ↓1.1%   |
| usecs/op   | 0.219575       | 0.217192        |  ↓1.1%   |
| ops/sec    | 4,554,260      | 4,604,225       |  ↑1.1%    |

Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 arch/arm64/kernel/syscall.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
index 77d00a5cf0e9..6fcd97c46716 100644
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -66,8 +66,8 @@ static void invoke_syscall(struct pt_regs *regs, unsigned int scno,
 	choose_random_kstack_offset(get_random_u16());
 }
 
-static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
-			   const syscall_fn_t syscall_table[])
+static __always_inline void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
+					   const syscall_fn_t syscall_table[])
 {
 	unsigned long work = READ_ONCE(current_thread_info()->syscall_work);
 	unsigned long flags = read_thread_flags();
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v11 13/14] entry: Inline syscall_exit_work() and syscall_trace_enter()
  2026-01-28  3:19 [PATCH v11 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
                   ` (11 preceding siblings ...)
  2026-01-28  3:19 ` [PATCH v11 12/14] arm64: Inline el0_svc_common() Jinjie Ruan
@ 2026-01-28  3:19 ` Jinjie Ruan
  2026-01-30 10:14   ` Thomas Gleixner
  2026-01-30 21:53   ` [tip: core/entry] " tip-bot2 for Jinjie Ruan
  2026-01-28  3:19 ` [PATCH v11 14/14] selftests: sud_test: Support aarch64 Jinjie Ruan
  13 siblings, 2 replies; 33+ messages in thread
From: Jinjie Ruan @ 2026-01-28  3:19 UTC (permalink / raw)
  To: catalin.marinas, will, oleg, tglx, peterz, luto, shuah, kees, wad,
	kevin.brodsky, deller, akpm, charlie, ldv, mark.rutland,
	anshuman.khandual, song, ryan.roberts, thuth, ada.coupriediaz,
	broonie, pengcan, liqiang01, kmal, dvyukov, reddybalavignesh9979,
	richard.weiyang, linux-arm-kernel, linux-kernel, linux-kselftest
  Cc: ruanjinjie

After switching arm64 to the generic entry code, syscall_exit_work()
appeared as a new hotspot because it is not inlined, so inline it. Also
inline syscall_trace_enter() to match syscall_exit_work().

On v6.19-rc1 with audit enabled, inlining both syscall_trace_enter() and
syscall_exit_work() gives about a 4% performance uplift on "perf bench
basic syscall" on Kunpeng 920, as below:

    | Metric     | W/O this patch | With this patch | Change  |
    | ---------- | -------------- | --------------- | ------  |
    | Total time | 2.353 [sec]    | 2.264 [sec]     |  ↓3.8%  |
    | usecs/op   | 0.235374       | 0.226472        |  ↓3.8%  |
    | ops/sec    | 4,248,588      | 4,415,554       |  ↑3.9%  |

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 include/linux/entry-common.h         | 101 ++++++++++++++++++++++++++-
 kernel/entry/common.h                |   7 --
 kernel/entry/syscall-common.c        |  95 ++-----------------------
 kernel/entry/syscall_user_dispatch.c |   4 +-
 4 files changed, 105 insertions(+), 102 deletions(-)
 delete mode 100644 kernel/entry/common.h

diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
index 48bdde74a3e1..c2d772c70b7c 100644
--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -2,6 +2,7 @@
 #ifndef __LINUX_ENTRYCOMMON_H
 #define __LINUX_ENTRYCOMMON_H
 
+#include <linux/audit.h>
 #include <linux/irq-entry-common.h>
 #include <linux/livepatch.h>
 #include <linux/ptrace.h>
@@ -64,7 +65,63 @@ static __always_inline int arch_ptrace_report_syscall_entry(struct pt_regs *regs
 }
 #endif
 
-long syscall_trace_enter(struct pt_regs *regs, unsigned long work);
+static inline void syscall_enter_audit(struct pt_regs *regs, long syscall)
+{
+	if (unlikely(audit_context())) {
+		unsigned long args[6];
+
+		syscall_get_arguments(current, regs, args);
+		audit_syscall_entry(syscall, args[0], args[1], args[2], args[3]);
+	}
+}
+
+void __trace_sys_enter(struct pt_regs *regs, long syscall);
+bool syscall_user_dispatch(struct pt_regs *regs);
+
+static __always_inline long syscall_trace_enter(struct pt_regs *regs, unsigned long work)
+{
+	long syscall, ret = 0;
+
+	/*
+	 * Handle Syscall User Dispatch.  This must comes first, since
+	 * the ABI here can be something that doesn't make sense for
+	 * other syscall_work features.
+	 */
+	if (work & SYSCALL_WORK_SYSCALL_USER_DISPATCH) {
+		if (syscall_user_dispatch(regs))
+			return -1L;
+	}
+
+	/* Handle ptrace */
+	if (work & (SYSCALL_WORK_SYSCALL_TRACE | SYSCALL_WORK_SYSCALL_EMU)) {
+		ret = arch_ptrace_report_syscall_entry(regs);
+		if (ret || (work & SYSCALL_WORK_SYSCALL_EMU))
+			return -1L;
+	}
+
+	/* Do seccomp after ptrace, to catch any tracer changes. */
+	if (work & SYSCALL_WORK_SECCOMP) {
+		ret = __secure_computing();
+		if (ret == -1L)
+			return ret;
+	}
+
+	/* Either of the above might have changed the syscall number */
+	syscall = syscall_get_nr(current, regs);
+
+	if (unlikely(work & SYSCALL_WORK_SYSCALL_TRACEPOINT)) {
+		__trace_sys_enter(regs, syscall);
+		/*
+		 * Probes or BPF hooks in the tracepoint may have changed the
+		 * system call number as well.
+		 */
+		syscall = syscall_get_nr(current, regs);
+	}
+
+	syscall_enter_audit(regs, syscall);
+
+	return ret ? : syscall;
+}
 
 /**
  * syscall_enter_from_user_mode_work - Check and handle work before invoking
@@ -131,6 +188,19 @@ static __always_inline long syscall_enter_from_user_mode(struct pt_regs *regs, l
 	return ret;
 }
 
+/*
+ * If SYSCALL_EMU is set, then the only reason to report is when
+ * SINGLESTEP is set (i.e. PTRACE_SYSEMU_SINGLESTEP).  This syscall
+ * instruction has been already reported in syscall_enter_from_user_mode().
+ */
+static __always_inline bool report_single_step(unsigned long work)
+{
+	if (work & SYSCALL_WORK_SYSCALL_EMU)
+		return false;
+
+	return work & SYSCALL_WORK_SYSCALL_EXIT_TRAP;
+}
+
 /**
  * arch_ptrace_report_syscall_exit - Architecture specific
  *				     ptrace_report_syscall_exit.
@@ -151,6 +221,8 @@ static __always_inline void arch_ptrace_report_syscall_exit(struct pt_regs *regs
 }
 #endif
 
+void __trace_sys_exit(struct pt_regs *regs, long ret);
+
 /**
  * syscall_exit_work - Handle work before returning to user mode
  * @regs:	Pointer to current pt_regs
@@ -158,7 +230,32 @@ static __always_inline void arch_ptrace_report_syscall_exit(struct pt_regs *regs
  *
  * Do one-time syscall specific work.
  */
-void syscall_exit_work(struct pt_regs *regs, unsigned long work);
+static __always_inline void syscall_exit_work(struct pt_regs *regs, unsigned long work)
+{
+	bool step;
+
+	/*
+	 * If the syscall was rolled back due to syscall user dispatching,
+	 * then the tracers below are not invoked for the same reason as
+	 * the entry side was not invoked in syscall_trace_enter(): The ABI
+	 * of these syscalls is unknown.
+	 */
+	if (work & SYSCALL_WORK_SYSCALL_USER_DISPATCH) {
+		if (unlikely(current->syscall_dispatch.on_dispatch)) {
+			current->syscall_dispatch.on_dispatch = false;
+			return;
+		}
+	}
+
+	audit_syscall_exit(regs);
+
+	if (work & SYSCALL_WORK_SYSCALL_TRACEPOINT)
+		__trace_sys_exit(regs, syscall_get_return_value(current, regs));
+
+	step = report_single_step(work);
+	if (step || work & SYSCALL_WORK_SYSCALL_TRACE)
+		arch_ptrace_report_syscall_exit(regs, step);
+}
 
 /**
  * syscall_exit_to_user_mode_work - Handle work before returning to user mode
diff --git a/kernel/entry/common.h b/kernel/entry/common.h
deleted file mode 100644
index f6e6d02f07fe..000000000000
--- a/kernel/entry/common.h
+++ /dev/null
@@ -1,7 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _COMMON_H
-#define _COMMON_H
-
-bool syscall_user_dispatch(struct pt_regs *regs);
-
-#endif
diff --git a/kernel/entry/syscall-common.c b/kernel/entry/syscall-common.c
index bb5f61f5629d..10231d30405e 100644
--- a/kernel/entry/syscall-common.c
+++ b/kernel/entry/syscall-common.c
@@ -1,103 +1,16 @@
 // SPDX-License-Identifier: GPL-2.0
 
-#include <linux/audit.h>
 #include <linux/entry-common.h>
-#include "common.h"
 
 #define CREATE_TRACE_POINTS
 #include <trace/events/syscalls.h>
 
-static inline void syscall_enter_audit(struct pt_regs *regs, long syscall)
+void __trace_sys_enter(struct pt_regs *regs, long syscall)
 {
-	if (unlikely(audit_context())) {
-		unsigned long args[6];
-
-		syscall_get_arguments(current, regs, args);
-		audit_syscall_entry(syscall, args[0], args[1], args[2], args[3]);
-	}
+	trace_sys_enter(regs, syscall);
 }
 
-long syscall_trace_enter(struct pt_regs *regs, unsigned long work)
+void __trace_sys_exit(struct pt_regs *regs, long ret)
 {
-	long syscall, ret = 0;
-
-	/*
-	 * Handle Syscall User Dispatch.  This must comes first, since
-	 * the ABI here can be something that doesn't make sense for
-	 * other syscall_work features.
-	 */
-	if (work & SYSCALL_WORK_SYSCALL_USER_DISPATCH) {
-		if (syscall_user_dispatch(regs))
-			return -1L;
-	}
-
-	/* Handle ptrace */
-	if (work & (SYSCALL_WORK_SYSCALL_TRACE | SYSCALL_WORK_SYSCALL_EMU)) {
-		ret = arch_ptrace_report_syscall_entry(regs);
-		if (ret || (work & SYSCALL_WORK_SYSCALL_EMU))
-			return -1L;
-	}
-
-	/* Do seccomp after ptrace, to catch any tracer changes. */
-	if (work & SYSCALL_WORK_SECCOMP) {
-		ret = __secure_computing();
-		if (ret == -1L)
-			return ret;
-	}
-
-	/* Either of the above might have changed the syscall number */
-	syscall = syscall_get_nr(current, regs);
-
-	if (unlikely(work & SYSCALL_WORK_SYSCALL_TRACEPOINT)) {
-		trace_sys_enter(regs, syscall);
-		/*
-		 * Probes or BPF hooks in the tracepoint may have changed the
-		 * system call number as well.
-		 */
-		syscall = syscall_get_nr(current, regs);
-	}
-
-	syscall_enter_audit(regs, syscall);
-
-	return ret ? : syscall;
-}
-
-/*
- * If SYSCALL_EMU is set, then the only reason to report is when
- * SINGLESTEP is set (i.e. PTRACE_SYSEMU_SINGLESTEP).  This syscall
- * instruction has been already reported in syscall_enter_from_user_mode().
- */
-static inline bool report_single_step(unsigned long work)
-{
-	if (work & SYSCALL_WORK_SYSCALL_EMU)
-		return false;
-
-	return work & SYSCALL_WORK_SYSCALL_EXIT_TRAP;
-}
-
-void syscall_exit_work(struct pt_regs *regs, unsigned long work)
-{
-	bool step;
-
-	/*
-	 * If the syscall was rolled back due to syscall user dispatching,
-	 * then the tracers below are not invoked for the same reason as
-	 * the entry side was not invoked in syscall_trace_enter(): The ABI
-	 * of these syscalls is unknown.
-	 */
-	if (work & SYSCALL_WORK_SYSCALL_USER_DISPATCH) {
-		if (unlikely(current->syscall_dispatch.on_dispatch)) {
-			current->syscall_dispatch.on_dispatch = false;
-			return;
-		}
-	}
-
-	audit_syscall_exit(regs);
-
-	if (work & SYSCALL_WORK_SYSCALL_TRACEPOINT)
-		trace_sys_exit(regs, syscall_get_return_value(current, regs));
-
-	step = report_single_step(work);
-	if (step || work & SYSCALL_WORK_SYSCALL_TRACE)
-		arch_ptrace_report_syscall_exit(regs, step);
+	trace_sys_exit(regs, ret);
 }
diff --git a/kernel/entry/syscall_user_dispatch.c b/kernel/entry/syscall_user_dispatch.c
index a9055eccb27e..d89dffcc2d64 100644
--- a/kernel/entry/syscall_user_dispatch.c
+++ b/kernel/entry/syscall_user_dispatch.c
@@ -2,6 +2,8 @@
 /*
  * Copyright (C) 2020 Collabora Ltd.
  */
+
+#include <linux/entry-common.h>
 #include <linux/sched.h>
 #include <linux/prctl.h>
 #include <linux/ptrace.h>
@@ -15,8 +17,6 @@
 
 #include <asm/syscall.h>
 
-#include "common.h"
-
 static void trigger_sigsys(struct pt_regs *regs)
 {
 	struct kernel_siginfo info;
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v11 14/14] selftests: sud_test: Support aarch64
  2026-01-28  3:19 [PATCH v11 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
                   ` (12 preceding siblings ...)
  2026-01-28  3:19 ` [PATCH v11 13/14] entry: Inline syscall_exit_work() and syscall_trace_enter() Jinjie Ruan
@ 2026-01-28  3:19 ` Jinjie Ruan
  13 siblings, 0 replies; 33+ messages in thread
From: Jinjie Ruan @ 2026-01-28  3:19 UTC (permalink / raw)
  To: catalin.marinas, will, oleg, tglx, peterz, luto, shuah, kees, wad,
	kevin.brodsky, deller, akpm, charlie, ldv, mark.rutland,
	anshuman.khandual, song, ryan.roberts, thuth, ada.coupriediaz,
	broonie, pengcan, liqiang01, kmal, dvyukov, reddybalavignesh9979,
	richard.weiyang, linux-arm-kernel, linux-kernel, linux-kselftest
  Cc: ruanjinjie

From: kemal <kmal@cock.li>

Add aarch64 support to the sud_test selftest so that "Syscall User
Dispatch" can be exercised on that architecture.

Signed-off-by: kemal <kmal@cock.li>
---
 tools/testing/selftests/syscall_user_dispatch/sud_benchmark.c | 2 +-
 tools/testing/selftests/syscall_user_dispatch/sud_test.c      | 4 ++++
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/syscall_user_dispatch/sud_benchmark.c b/tools/testing/selftests/syscall_user_dispatch/sud_benchmark.c
index 073a03702ff5..6059abe75cb3 100644
--- a/tools/testing/selftests/syscall_user_dispatch/sud_benchmark.c
+++ b/tools/testing/selftests/syscall_user_dispatch/sud_benchmark.c
@@ -41,7 +41,7 @@
  * out of the box, but don't enable them until they support syscall user
  * dispatch.
  */
-#if defined(__x86_64__) || defined(__i386__)
+#if defined(__x86_64__) || defined(__i386__) || defined(__aarch64__)
 #define TEST_BLOCKED_RETURN
 #endif
 
diff --git a/tools/testing/selftests/syscall_user_dispatch/sud_test.c b/tools/testing/selftests/syscall_user_dispatch/sud_test.c
index b855c6000287..3ffea2f4a66d 100644
--- a/tools/testing/selftests/syscall_user_dispatch/sud_test.c
+++ b/tools/testing/selftests/syscall_user_dispatch/sud_test.c
@@ -192,6 +192,10 @@ static void handle_sigsys(int sig, siginfo_t *info, void *ucontext)
 	((ucontext_t *)ucontext)->uc_mcontext.__gregs[REG_A0] =
 			((ucontext_t *)ucontext)->uc_mcontext.__gregs[REG_A7];
 #endif
+#ifdef __aarch64__
+	((ucontext_t *)ucontext)->uc_mcontext.regs[0] = (unsigned int)
+			((ucontext_t *)ucontext)->uc_mcontext.regs[8];
+#endif
 }
 
 int setup_sigsys_handler(void)
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* Re: [PATCH v11 01/14] entry: Remove unused syscall in syscall_trace_enter()
  2026-01-28  3:19 ` [PATCH v11 01/14] entry: Remove unused syscall in syscall_trace_enter() Jinjie Ruan
@ 2026-01-29 12:06   ` Kevin Brodsky
  2026-01-30 10:11   ` Thomas Gleixner
  2026-01-30 21:53   ` [tip: core/entry] entry: Remove unused syscall argument from syscall_trace_enter() tip-bot2 for Jinjie Ruan
  2 siblings, 0 replies; 33+ messages in thread
From: Kevin Brodsky @ 2026-01-29 12:06 UTC (permalink / raw)
  To: Jinjie Ruan, catalin.marinas, will, oleg, tglx, peterz, luto,
	shuah, kees, wad, deller, akpm, charlie, ldv, mark.rutland,
	anshuman.khandual, song, ryan.roberts, thuth, ada.coupriediaz,
	broonie, pengcan, liqiang01, kmal, dvyukov, reddybalavignesh9979,
	richard.weiyang, linux-arm-kernel, linux-kernel, linux-kselftest

On 28/01/2026 04:19, Jinjie Ruan wrote:
> The 'syscall' argument in syscall_trace_enter() is immediately overwritten
> before any real use and serves only as a local variable, so drop
> the parameter.
>
> No functional change intended.
>
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>

In commit title: s/syscall/parameter/ (very confusing otherwise!)

With that fixed:

Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>

> ---
>  include/linux/entry-common.h  | 4 ++--
>  kernel/entry/syscall-common.c | 5 ++---
>  2 files changed, 4 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
> index 87efb38b7081..e4a8287af822 100644
> --- a/include/linux/entry-common.h
> +++ b/include/linux/entry-common.h
> @@ -45,7 +45,7 @@
>  				 SYSCALL_WORK_SYSCALL_EXIT_TRAP	|	\
>  				 ARCH_SYSCALL_WORK_EXIT)
>  
> -long syscall_trace_enter(struct pt_regs *regs, long syscall, unsigned long work);
> +long syscall_trace_enter(struct pt_regs *regs, unsigned long work);
>  
>  /**
>   * syscall_enter_from_user_mode_work - Check and handle work before invoking
> @@ -75,7 +75,7 @@ static __always_inline long syscall_enter_from_user_mode_work(struct pt_regs *re
>  	unsigned long work = READ_ONCE(current_thread_info()->syscall_work);
>  
>  	if (work & SYSCALL_WORK_ENTER)
> -		syscall = syscall_trace_enter(regs, syscall, work);
> +		syscall = syscall_trace_enter(regs, work);
>  
>  	return syscall;
>  }
> diff --git a/kernel/entry/syscall-common.c b/kernel/entry/syscall-common.c
> index 940a597ded40..e6237b536d8b 100644
> --- a/kernel/entry/syscall-common.c
> +++ b/kernel/entry/syscall-common.c
> @@ -17,10 +17,9 @@ static inline void syscall_enter_audit(struct pt_regs *regs, long syscall)
>  	}
>  }
>  
> -long syscall_trace_enter(struct pt_regs *regs, long syscall,
> -				unsigned long work)
> +long syscall_trace_enter(struct pt_regs *regs, unsigned long work)
>  {
> -	long ret = 0;
> +	long syscall, ret = 0;
>  
>  	/*
>  	 * Handle Syscall User Dispatch.  This must comes first, since

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v11 03/14] arm64: ptrace: Move rseq_syscall() before audit_syscall_exit()
  2026-01-28  3:19 ` [PATCH v11 03/14] arm64: ptrace: Move rseq_syscall() before audit_syscall_exit() Jinjie Ruan
@ 2026-01-29 12:06   ` Kevin Brodsky
  2026-01-29 13:06     ` Jinjie Ruan
  0 siblings, 1 reply; 33+ messages in thread
From: Kevin Brodsky @ 2026-01-29 12:06 UTC (permalink / raw)
  To: Jinjie Ruan, catalin.marinas, will, oleg, tglx, peterz, luto,
	shuah, kees, wad, deller, akpm, charlie, mark.rutland,
	anshuman.khandual, song, ryan.roberts, thuth, ada.coupriediaz,
	broonie, pengcan, liqiang01, kmal, dvyukov, reddybalavignesh9979,
	richard.weiyang, linux-arm-kernel, linux-kernel, linux-kselftest

On 28/01/2026 04:19, Jinjie Ruan wrote:
> commit a9f3a74a29af ("entry: Provide generic syscall exit function")
> introduce generic syscall exit function and call rseq_syscall()
> before audit_syscall_exit() and arch_syscall_exit_tracehook().
>
> And commit b74406f37737 ("arm: Add syscall detection for restartable
> sequences") add rseq support for arm32, which also call rseq_syscall()
> before audit_syscall_exit() and tracehook_report_syscall().
>
> However, commit 409d5db49867c ("arm64: rseq: Implement backend rseq
> calls and select HAVE_RSEQ") implement arm64 rseq and call
> rseq_syscall() after audit_syscall_exit() and tracehook_report_syscall().
> So compared to the generic entry and arm32 code, arm64 calls
> rseq_syscall() a bit later.
>
> But as commit b74406f37737 ("arm: Add syscall detection for restartable
> sequences") said, syscalls are not allowed inside restartable sequences,
> so should call rseq_syscall() at the very beginning of system call
> exiting path for CONFIG_DEBUG_RSEQ=y kernel. This could help us to detect
> whether there is a syscall issued inside restartable sequences.
>
> As for the impact of raising SIGSEGV via rseq_syscall(), it makes no
> practical difference to signal delivery because signals are processed
> in arm64_exit_to_user_mode() at the very end.
>
> As for the "regs", rseq_syscall() only checks and update
> instruction_pointer(regs), ptrace can not modify the "pc" on syscall exit
> path but 'only changes the return value', so calling rseq_syscall()
> before or after ptrace_report_syscall_exit() makes no difference.

Let's update this as discussed on v10 - PC can be modified when
ptrace_report_syscall_exit() is called.

> And audit_syscall_exit() only checks the return value (x0 for arm64),
> so calling rseq_syscall() before or after audit syscall exit makes
> no difference. trace_sys_exit() only uses syscallno and the return value,
> so calling rseq_syscall() before or after trace_sys_exit() also makes
> no difference.
>
> In preparation for moving arm64 over to the generic entry code, move
> rseq_syscall() ahead before audit_syscall_exit().
>
> No functional changes.

And naturally this is not the case.

- Kevin

> Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
> ---
>  arch/arm64/kernel/ptrace.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
> index 9f9aa3087c09..785280c76317 100644
> --- a/arch/arm64/kernel/ptrace.c
> +++ b/arch/arm64/kernel/ptrace.c
> @@ -2443,6 +2443,8 @@ int syscall_trace_enter(struct pt_regs *regs, unsigned long flags)
>  
>  void syscall_trace_exit(struct pt_regs *regs, unsigned long flags)
>  {
> +	rseq_syscall(regs);
> +
>  	audit_syscall_exit(regs);
>  
>  	if (flags & _TIF_SYSCALL_TRACEPOINT)
> @@ -2450,8 +2452,6 @@ void syscall_trace_exit(struct pt_regs *regs, unsigned long flags)
>  
>  	if (flags & (_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP))
>  		report_syscall_exit(regs);
> -
> -	rseq_syscall(regs);
>  }
>  
>  /*

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v11 09/14] entry: Rework syscall_exit_to_user_mode_work() for arch reuse
  2026-01-28  3:19 ` [PATCH v11 09/14] entry: Rework syscall_exit_to_user_mode_work() for arch reuse Jinjie Ruan
@ 2026-01-29 12:06   ` Kevin Brodsky
  2026-01-29 13:11     ` Jinjie Ruan
  2026-01-30 21:53   ` [tip: core/entry] entry: Rework syscall_exit_to_user_mode_work() for architecture reuse tip-bot2 for Jinjie Ruan
  1 sibling, 1 reply; 33+ messages in thread
From: Kevin Brodsky @ 2026-01-29 12:06 UTC (permalink / raw)
  To: Jinjie Ruan, catalin.marinas, will, oleg, tglx, peterz, luto,
	shuah, kees, wad, deller, akpm, charlie, mark.rutland,
	anshuman.khandual, song, ryan.roberts, thuth, ada.coupriediaz,
	broonie, pengcan, liqiang01, kmal, dvyukov, reddybalavignesh9979,
	richard.weiyang, linux-arm-kernel, linux-kernel, linux-kselftest

On 28/01/2026 04:19, Jinjie Ruan wrote:
> In the generic entry code, the beginning of
> syscall_exit_to_user_mode_work() can be reused on arm64 so it makes
> sense to rework it.
>
> In preparation for moving arm64 over to the generic entry
> code, as nothing calls syscall_exit_to_user_mode_work() except for
> syscall_exit_to_user_mode(), move local_irq_disable_exit_to_user() and
> syscall_exit_to_user_mode_prepare() out from
> syscall_exit_to_user_mode_work() to the only one caller.
>
> Also update the comment and no functional changes.
>
> Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
> Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
> ---
>  include/linux/entry-common.h | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
> index e4a8287af822..c4fea642d931 100644
> --- a/include/linux/entry-common.h
> +++ b/include/linux/entry-common.h
> @@ -125,14 +125,14 @@ void syscall_exit_work(struct pt_regs *regs, unsigned long work);
>   * syscall_exit_to_user_mode_work - Handle work before returning to user mode
>   * @regs:	Pointer to currents pt_regs
>   *
> - * Same as step 1 and 2 of syscall_exit_to_user_mode() but without calling
> + * Same as step 1 of syscall_exit_to_user_mode() but without calling
> + * local_irq_disable(), syscall_exit_to_user_mode_prepare() and
>   * exit_to_user_mode() to perform the final transition to user mode.
>   *
> - * Calling convention is the same as for syscall_exit_to_user_mode() and it
> - * returns with all work handled and interrupts disabled. The caller must
> - * invoke exit_to_user_mode() before actually switching to user mode to
> - * make the final state transitions. Interrupts must stay disabled between
> - * return from this function and the invocation of exit_to_user_mode().
> + * Calling convention is the same as for syscall_exit_to_user_mode(). The
> + * caller must invoke local_irq_disable(), __exit_to_user_mode_prepare() and

Shouldn't it be syscall_exit_to_user_mode_prepare() rather than
__exit_to_user_mode_prepare()? The former has extra calls (e.g. rseq).

- Kevin

> + * exit_to_user_mode() before actually switching to user mode to
> + * make the final state transitions.
>   */
>  static __always_inline void syscall_exit_to_user_mode_work(struct pt_regs *regs)
>  {
> @@ -155,8 +155,6 @@ static __always_inline void syscall_exit_to_user_mode_work(struct pt_regs *regs)
>  	 */
>  	if (unlikely(work & SYSCALL_WORK_EXIT))
>  		syscall_exit_work(regs, work);
> -	local_irq_disable_exit_to_user();
> -	syscall_exit_to_user_mode_prepare(regs);
>  }
>  
>  /**
> @@ -192,6 +190,8 @@ static __always_inline void syscall_exit_to_user_mode(struct pt_regs *regs)
>  {
>  	instrumentation_begin();
>  	syscall_exit_to_user_mode_work(regs);
> +	local_irq_disable_exit_to_user();
> +	syscall_exit_to_user_mode_prepare(regs);
>  	instrumentation_end();
>  	exit_to_user_mode();
>  }

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v11 03/14] arm64: ptrace: Move rseq_syscall() before audit_syscall_exit()
  2026-01-29 12:06   ` Kevin Brodsky
@ 2026-01-29 13:06     ` Jinjie Ruan
  0 siblings, 0 replies; 33+ messages in thread
From: Jinjie Ruan @ 2026-01-29 13:06 UTC (permalink / raw)
  To: Kevin Brodsky, catalin.marinas, will, oleg, tglx, peterz, luto,
	shuah, kees, wad, deller, akpm, charlie, mark.rutland,
	anshuman.khandual, song, ryan.roberts, thuth, ada.coupriediaz,
	broonie, pengcan, liqiang01, kmal, dvyukov, reddybalavignesh9979,
	richard.weiyang, linux-arm-kernel, linux-kernel, linux-kselftest



On 2026/1/29 20:06, Kevin Brodsky wrote:
> On 28/01/2026 04:19, Jinjie Ruan wrote:
>> commit a9f3a74a29af ("entry: Provide generic syscall exit function")
>> introduce generic syscall exit function and call rseq_syscall()
>> before audit_syscall_exit() and arch_syscall_exit_tracehook().
>>
>> And commit b74406f37737 ("arm: Add syscall detection for restartable
>> sequences") add rseq support for arm32, which also call rseq_syscall()
>> before audit_syscall_exit() and tracehook_report_syscall().
>>
>> However, commit 409d5db49867c ("arm64: rseq: Implement backend rseq
>> calls and select HAVE_RSEQ") implement arm64 rseq and call
>> rseq_syscall() after audit_syscall_exit() and tracehook_report_syscall().
>> So compared to the generic entry and arm32 code, arm64 calls
>> rseq_syscall() a bit later.
>>
>> But as commit b74406f37737 ("arm: Add syscall detection for restartable
>> sequences") said, syscalls are not allowed inside restartable sequences,
>> so should call rseq_syscall() at the very beginning of system call
>> exiting path for CONFIG_DEBUG_RSEQ=y kernel. This could help us to detect
>> whether there is a syscall issued inside restartable sequences.
>>
>> As for the impact of raising SIGSEGV via rseq_syscall(), it makes no
>> practical difference to signal delivery because signals are processed
>> in arm64_exit_to_user_mode() at the very end.
>>
>> As for the "regs", rseq_syscall() only checks and update
>> instruction_pointer(regs), ptrace can not modify the "pc" on syscall exit
>> path but 'only changes the return value', so calling rseq_syscall()
>> before or after ptrace_report_syscall_exit() makes no difference.
> 
> Let's update this as discussed on v10 - PC can be modified when
> ptrace_report_syscall_exit() is called.

Should rseq see the PC modified by ptrace on the syscall exit path?
If the PC modified by ptrace happens to fall inside the user-space rseq
critical section, is that reasonable? If so, doesn't that make the order
of rseq and ptrace syscall exit in generic entry incorrect?

Could we have an rseq expert join the discussion — Thomas, what is your
opinion?

> 
>> And audit_syscall_exit() only checks the return value (x0 for arm64),
>> so calling rseq_syscall() before or after audit syscall exit makes
>> no difference. trace_sys_exit() only uses syscallno and the return value,
>> so calling rseq_syscall() before or after trace_sys_exit() also makes
>> no difference.
>>
>> In preparation for moving arm64 over to the generic entry code, move
>> rseq_syscall() ahead before audit_syscall_exit().
>>
>> No functional changes.
> 
> And naturally this is not the case.
> 
> - Kevin
> 
>> Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
>> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
>> ---
>>  arch/arm64/kernel/ptrace.c | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
>> index 9f9aa3087c09..785280c76317 100644
>> --- a/arch/arm64/kernel/ptrace.c
>> +++ b/arch/arm64/kernel/ptrace.c
>> @@ -2443,6 +2443,8 @@ int syscall_trace_enter(struct pt_regs *regs, unsigned long flags)
>>  
>>  void syscall_trace_exit(struct pt_regs *regs, unsigned long flags)
>>  {
>> +	rseq_syscall(regs);
>> +
>>  	audit_syscall_exit(regs);
>>  
>>  	if (flags & _TIF_SYSCALL_TRACEPOINT)
>> @@ -2450,8 +2452,6 @@ void syscall_trace_exit(struct pt_regs *regs, unsigned long flags)
>>  
>>  	if (flags & (_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP))
>>  		report_syscall_exit(regs);
>> -
>> -	rseq_syscall(regs);
>>  }
>>  
>>  /*
> 

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v11 09/14] entry: Rework syscall_exit_to_user_mode_work() for arch reuse
  2026-01-29 12:06   ` Kevin Brodsky
@ 2026-01-29 13:11     ` Jinjie Ruan
  2026-01-29 16:00       ` Kevin Brodsky
  0 siblings, 1 reply; 33+ messages in thread
From: Jinjie Ruan @ 2026-01-29 13:11 UTC (permalink / raw)
  To: Kevin Brodsky, catalin.marinas, will, oleg, tglx, peterz, luto,
	shuah, kees, wad, deller, akpm, charlie, mark.rutland,
	anshuman.khandual, song, ryan.roberts, thuth, ada.coupriediaz,
	broonie, pengcan, liqiang01, kmal, dvyukov, reddybalavignesh9979,
	richard.weiyang, linux-arm-kernel, linux-kernel, linux-kselftest



On 2026/1/29 20:06, Kevin Brodsky wrote:
> On 28/01/2026 04:19, Jinjie Ruan wrote:
>> In the generic entry code, the beginning of
>> syscall_exit_to_user_mode_work() can be reused on arm64, so it makes
>> sense to rework it.
>>
>> In preparation for moving arm64 over to the generic entry code, and
>> since nothing calls syscall_exit_to_user_mode_work() except
>> syscall_exit_to_user_mode(), move local_irq_disable_exit_to_user() and
>> syscall_exit_to_user_mode_prepare() out of
>> syscall_exit_to_user_mode_work() into its only caller.
>>
>> Also update the comment. No functional changes.
>>
>> Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
>> Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
>> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
>> ---
>>  include/linux/entry-common.h | 16 ++++++++--------
>>  1 file changed, 8 insertions(+), 8 deletions(-)
>>
>> diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
>> index e4a8287af822..c4fea642d931 100644
>> --- a/include/linux/entry-common.h
>> +++ b/include/linux/entry-common.h
>> @@ -125,14 +125,14 @@ void syscall_exit_work(struct pt_regs *regs, unsigned long work);
>>   * syscall_exit_to_user_mode_work - Handle work before returning to user mode
>>   * @regs:	Pointer to currents pt_regs
>>   *
>> - * Same as step 1 and 2 of syscall_exit_to_user_mode() but without calling
>> + * Same as step 1 of syscall_exit_to_user_mode() but without calling
>> + * local_irq_disable(), syscall_exit_to_user_mode_prepare() and
>>   * exit_to_user_mode() to perform the final transition to user mode.
>>   *
>> - * Calling convention is the same as for syscall_exit_to_user_mode() and it
>> - * returns with all work handled and interrupts disabled. The caller must
>> - * invoke exit_to_user_mode() before actually switching to user mode to
>> - * make the final state transitions. Interrupts must stay disabled between
>> - * return from this function and the invocation of exit_to_user_mode().
>> + * Calling convention is the same as for syscall_exit_to_user_mode(). The
>> + * caller must invoke local_irq_disable(), __exit_to_user_mode_prepare() and
> 
> Shouldn't it be syscall_exit_to_user_mode_prepare() rather than
> __exit_to_user_mode_prepare()? The former has extra calls (e.g. rseq).

Perhaps we can just delete these comments: at present only the generic
entry code and arm64 use it, and nowhere else needs it; after the
refactoring the comments seem rather unclear.

> 
> - Kevin
> 
>> + * exit_to_user_mode() before actually switching to user mode to
>> + * make the final state transitions.
>>   */
>>  static __always_inline void syscall_exit_to_user_mode_work(struct pt_regs *regs)
>>  {
>> @@ -155,8 +155,6 @@ static __always_inline void syscall_exit_to_user_mode_work(struct pt_regs *regs)
>>  	 */
>>  	if (unlikely(work & SYSCALL_WORK_EXIT))
>>  		syscall_exit_work(regs, work);
>> -	local_irq_disable_exit_to_user();
>> -	syscall_exit_to_user_mode_prepare(regs);
>>  }
>>  
>>  /**
>> @@ -192,6 +190,8 @@ static __always_inline void syscall_exit_to_user_mode(struct pt_regs *regs)
>>  {
>>  	instrumentation_begin();
>>  	syscall_exit_to_user_mode_work(regs);
>> +	local_irq_disable_exit_to_user();
>> +	syscall_exit_to_user_mode_prepare(regs);
>>  	instrumentation_end();
>>  	exit_to_user_mode();
>>  }
> 


* Re: [PATCH v11 09/14] entry: Rework syscall_exit_to_user_mode_work() for arch reuse
  2026-01-29 13:11     ` Jinjie Ruan
@ 2026-01-29 16:00       ` Kevin Brodsky
  2026-01-30 10:16         ` Thomas Gleixner
  0 siblings, 1 reply; 33+ messages in thread
From: Kevin Brodsky @ 2026-01-29 16:00 UTC (permalink / raw)
  To: Jinjie Ruan, catalin.marinas, will, oleg, tglx, peterz, luto,
	shuah, kees, wad, deller, akpm, charlie, mark.rutland,
	anshuman.khandual, song, ryan.roberts, thuth, ada.coupriediaz,
	broonie, pengcan, liqiang01, kmal, dvyukov, reddybalavignesh9979,
	richard.weiyang, linux-arm-kernel, linux-kernel, linux-kselftest

On 29/01/2026 14:11, Jinjie Ruan wrote:
>>> diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
>>> index e4a8287af822..c4fea642d931 100644
>>> --- a/include/linux/entry-common.h
>>> +++ b/include/linux/entry-common.h
>>> @@ -125,14 +125,14 @@ void syscall_exit_work(struct pt_regs *regs, unsigned long work);
>>>   * syscall_exit_to_user_mode_work - Handle work before returning to user mode
>>>   * @regs:	Pointer to currents pt_regs
>>>   *
>>> - * Same as step 1 and 2 of syscall_exit_to_user_mode() but without calling
>>> + * Same as step 1 of syscall_exit_to_user_mode() but without calling
>>> + * local_irq_disable(), syscall_exit_to_user_mode_prepare() and
>>>   * exit_to_user_mode() to perform the final transition to user mode.
>>>   *
>>> - * Calling convention is the same as for syscall_exit_to_user_mode() and it
>>> - * returns with all work handled and interrupts disabled. The caller must
>>> - * invoke exit_to_user_mode() before actually switching to user mode to
>>> - * make the final state transitions. Interrupts must stay disabled between
>>> - * return from this function and the invocation of exit_to_user_mode().
>>> + * Calling convention is the same as for syscall_exit_to_user_mode(). The
>>> + * caller must invoke local_irq_disable(), __exit_to_user_mode_prepare() and
>> Shouldn't it be syscall_exit_to_user_mode_prepare() rather than
>> __exit_to_user_mode_prepare()? The former has extra calls (e.g. rseq).
> Perhaps we can just delete these comments: at present only the generic
> entry code and arm64 use it, and nowhere else needs it; after the
> refactoring the comments seem rather unclear.

Agreed, the comments are essentially describing what each function
calls; considering how short they are, directly reading the code is
probably easier.

- Kevin


* Re: [PATCH v11 01/14] entry: Remove unused syscall in syscall_trace_enter()
  2026-01-28  3:19 ` [PATCH v11 01/14] entry: Remove unused syscall in syscall_trace_enter() Jinjie Ruan
  2026-01-29 12:06   ` Kevin Brodsky
@ 2026-01-30 10:11   ` Thomas Gleixner
  2026-01-30 21:53   ` [tip: core/entry] entry: Remove unused syscall argument from syscall_trace_enter() tip-bot2 for Jinjie Ruan
  2 siblings, 0 replies; 33+ messages in thread
From: Thomas Gleixner @ 2026-01-30 10:11 UTC (permalink / raw)
  To: Jinjie Ruan, catalin.marinas, will, oleg, peterz, luto, shuah,
	kees, wad, kevin.brodsky, deller, akpm, charlie, ldv,
	mark.rutland, anshuman.khandual, song, ryan.roberts, thuth,
	ada.coupriediaz, broonie, pengcan, liqiang01, kmal, dvyukov,
	reddybalavignesh9979, richard.weiyang, linux-arm-kernel,
	linux-kernel, linux-kselftest
  Cc: ruanjinjie

On Wed, Jan 28 2026 at 11:19, Jinjie Ruan wrote:
> The 'syscall' argument in syscall_trace_enter() is immediately overwritten
> before any real use and serves only as a local variable, so drop
> the parameter.

This collides with the already queued time slice extension changes,
which rely on syscall to be handed in:

https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/commit/?h=sched/core&id=dd0a04606937af5810e9117d343ee3792635bd3d

Please drop this for now.

Thanks,

        tglx


* Re: [PATCH v11 13/14] entry: Inline syscall_exit_work() and syscall_trace_enter()
  2026-01-28  3:19 ` [PATCH v11 13/14] entry: Inline syscall_exit_work() and syscall_trace_enter() Jinjie Ruan
@ 2026-01-30 10:14   ` Thomas Gleixner
  2026-01-31  1:48     ` Jinjie Ruan
  2026-01-30 21:53   ` [tip: core/entry] " tip-bot2 for Jinjie Ruan
  1 sibling, 1 reply; 33+ messages in thread
From: Thomas Gleixner @ 2026-01-30 10:14 UTC (permalink / raw)
  To: Jinjie Ruan, catalin.marinas, will, oleg, peterz, luto, shuah,
	kees, wad, kevin.brodsky, deller, akpm, charlie, ldv,
	mark.rutland, anshuman.khandual, song, ryan.roberts, thuth,
	ada.coupriediaz, broonie, pengcan, liqiang01, kmal, dvyukov,
	reddybalavignesh9979, richard.weiyang, linux-arm-kernel,
	linux-kernel, linux-kselftest
  Cc: ruanjinjie

On Wed, Jan 28 2026 at 11:19, Jinjie Ruan wrote:
> After switching arm64 to the generic entry code, a new hotspot appeared
> because syscall_exit_work() is not inlined, so inline it. Also inline
> syscall_trace_enter() to align with syscall_exit_work().

Has the same collision problem. I can pick that up and massage it on top
of the pending time slice changes. Let me give it a test ride on x86...


* Re: [PATCH v11 09/14] entry: Rework syscall_exit_to_user_mode_work() for arch reuse
  2026-01-29 16:00       ` Kevin Brodsky
@ 2026-01-30 10:16         ` Thomas Gleixner
  2026-01-30 13:27           ` Kevin Brodsky
  0 siblings, 1 reply; 33+ messages in thread
From: Thomas Gleixner @ 2026-01-30 10:16 UTC (permalink / raw)
  To: Kevin Brodsky, Jinjie Ruan, catalin.marinas, will, oleg, peterz,
	luto, shuah, kees, wad, deller, akpm, charlie, mark.rutland,
	anshuman.khandual, song, ryan.roberts, thuth, ada.coupriediaz,
	broonie, pengcan, liqiang01, kmal, dvyukov, reddybalavignesh9979,
	richard.weiyang, linux-arm-kernel, linux-kernel, linux-kselftest

On Thu, Jan 29 2026 at 17:00, Kevin Brodsky wrote:
> On 29/01/2026 14:11, Jinjie Ruan wrote:
>>>> - * Calling convention is the same as for syscall_exit_to_user_mode() and it
>>>> - * returns with all work handled and interrupts disabled. The caller must
>>>> - * invoke exit_to_user_mode() before actually switching to user mode to
>>>> - * make the final state transitions. Interrupts must stay disabled between
>>>> - * return from this function and the invocation of exit_to_user_mode().
>>>> + * Calling convention is the same as for syscall_exit_to_user_mode(). The
>>>> + * caller must invoke local_irq_disable(), __exit_to_user_mode_prepare() and
>>> Shouldn't it be syscall_exit_to_user_mode_prepare() rather than
>>> __exit_to_user_mode_prepare()? The former has extra calls (e.g. rseq).
>> Perhaps we can just delete these comments: at present only the generic
>> entry code and arm64 use it, and nowhere else needs it; after the
>> refactoring the comments seem rather unclear.
>
> Agreed, the comments are essentially describing what each function
> calls; considering how short they are, directly reading the code is
> probably easier.

No. Please keep them. There is more information in them than just the
pure 'what's' called.



* Re: [PATCH v11 09/14] entry: Rework syscall_exit_to_user_mode_work() for arch reuse
  2026-01-30 10:16         ` Thomas Gleixner
@ 2026-01-30 13:27           ` Kevin Brodsky
  2026-01-30 15:01             ` Thomas Gleixner
  0 siblings, 1 reply; 33+ messages in thread
From: Kevin Brodsky @ 2026-01-30 13:27 UTC (permalink / raw)
  To: Thomas Gleixner, Jinjie Ruan, catalin.marinas, will, oleg, peterz,
	luto, shuah, kees, wad, deller, akpm, charlie, mark.rutland,
	anshuman.khandual, song, ryan.roberts, thuth, ada.coupriediaz,
	broonie, pengcan, liqiang01, kmal, dvyukov, reddybalavignesh9979,
	richard.weiyang, linux-arm-kernel, linux-kernel, linux-kselftest

On 30/01/2026 11:16, Thomas Gleixner wrote:
> On Thu, Jan 29 2026 at 17:00, Kevin Brodsky wrote:
>> On 29/01/2026 14:11, Jinjie Ruan wrote:
>>>>> - * Calling convention is the same as for syscall_exit_to_user_mode() and it
>>>>> - * returns with all work handled and interrupts disabled. The caller must
>>>>> - * invoke exit_to_user_mode() before actually switching to user mode to
>>>>> - * make the final state transitions. Interrupts must stay disabled between
>>>>> - * return from this function and the invocation of exit_to_user_mode().
>>>>> + * Calling convention is the same as for syscall_exit_to_user_mode(). The
>>>>> + * caller must invoke local_irq_disable(), __exit_to_user_mode_prepare() and
>>>> Shouldn't it be syscall_exit_to_user_mode_prepare() rather than
>>>> __exit_to_user_mode_prepare()? The former has extra calls (e.g. rseq).
>>> Perhaps we can just delete these comments: at present only the generic
>>> entry code and arm64 use it, and nowhere else needs it; after the
>>> refactoring the comments seem rather unclear.
>> Agreed, the comments are essentially describing what each function
>> calls; considering how short they are, directly reading the code is
>> probably easier.
> No. Please keep them. There is more information in them than just the
> pure 'what's' called.

That is true before this patch, where it made sense to highlight that
exit_to_user_mode() must still be called after this function (without
re-enabling interrupts). With this patch, however, there is much more
that this function no longer does, and it seems very likely that the
comments will drift out of sync with exactly what
syscall_exit_to_user_mode() calls.

I suppose we could simply point the reader to
syscall_exit_to_user_mode() to find out what else is needed, and keep
the comment about the calling convention being the same.

- Kevin


* Re: [PATCH v11 09/14] entry: Rework syscall_exit_to_user_mode_work() for arch reuse
  2026-01-30 13:27           ` Kevin Brodsky
@ 2026-01-30 15:01             ` Thomas Gleixner
  2026-01-30 23:33               ` Thomas Gleixner
  2026-01-31  1:43               ` Jinjie Ruan
  0 siblings, 2 replies; 33+ messages in thread
From: Thomas Gleixner @ 2026-01-30 15:01 UTC (permalink / raw)
  To: Kevin Brodsky, Jinjie Ruan, catalin.marinas, will, oleg, peterz,
	luto, shuah, kees, wad, deller, akpm, charlie, mark.rutland,
	anshuman.khandual, song, ryan.roberts, thuth, ada.coupriediaz,
	broonie, pengcan, liqiang01, kmal, dvyukov, reddybalavignesh9979,
	richard.weiyang, linux-arm-kernel, linux-kernel, linux-kselftest

On Fri, Jan 30 2026 at 14:27, Kevin Brodsky wrote:
> On 30/01/2026 11:16, Thomas Gleixner wrote:
>>> Agreed, the comments are essentially describing what each function
>>> calls; considering how short they are, directly reading the code is
>>> probably easier.
>> No. Please keep them. There is more information in them than just the
>> pure 'what's' called.
>
> That is true before this patch, where it made sense to highlight that
> exit_to_user_mode() must still be called after this function (without
> re-enabling interrupts). With this patch there is however much more that
> this function is lacking, and it feels very likely that comments will go
> out of sync with exactly what syscall_exit_to_user_mode() calls.
>
> I suppose we could simply point the reader to
> syscall_exit_to_user_mode() to find out what else is needed, and keep
> the comment about the calling convention being the same.

I've picked up _all_ four entry changes and reworked the comments and
changelogs already.

Those patches should have been bundled together at the start of the
series anyway so they can be picked up independently without going
through loops and hoops. When will people learn to think beyond the brim
of their architecture tea cup?

I'll go and apply them on top of 6.19-rc1 into core/entry and merge that
into the scheduler branch to resolve the resulting conflict.

ARM64 can either pull that branch or wait until the next rc1 comes out.

Thanks,

        tglx



* [tip: core/entry] entry: Inline syscall_exit_work() and syscall_trace_enter()
  2026-01-28  3:19 ` [PATCH v11 13/14] entry: Inline syscall_exit_work() and syscall_trace_enter() Jinjie Ruan
  2026-01-30 10:14   ` Thomas Gleixner
@ 2026-01-30 21:53   ` tip-bot2 for Jinjie Ruan
  1 sibling, 0 replies; 33+ messages in thread
From: tip-bot2 for Jinjie Ruan @ 2026-01-30 21:53 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Jinjie Ruan, Thomas Gleixner, x86, linux-kernel

The following commit has been merged into the core/entry branch of tip:

Commit-ID:     31c9387d0d84bc1d643a0c30155b6d92d05c92fc
Gitweb:        https://git.kernel.org/tip/31c9387d0d84bc1d643a0c30155b6d92d05c92fc
Author:        Jinjie Ruan <ruanjinjie@huawei.com>
AuthorDate:    Wed, 28 Jan 2026 11:19:33 +08:00
Committer:     Thomas Gleixner <tglx@kernel.org>
CommitterDate: Fri, 30 Jan 2026 15:38:10 +01:00

entry: Inline syscall_exit_work() and syscall_trace_enter()

After switching ARM64 to the generic entry code, syscall_exit_work()
appeared as a profiling hotspot because it is not inlined.

Inlining both syscall_trace_enter() and syscall_exit_work() provides a
performance gain when any of the work items is enabled. With audit enabled
this results in a ~4% performance gain for perf bench basic syscall on
a kunpeng920 system:

    | Metric     | Baseline    | Inlined     | Change  |
    | ---------- | ----------- | ----------- | ------  |
    | Total time | 2.353 [sec] | 2.264 [sec] |  ↓3.8%  |
    | usecs/op   | 0.235374    | 0.226472    |  ↓3.8%  |
    | ops/sec    | 4,248,588   | 4,415,554   |  ↑3.9%  |

Small gains can be observed on x86 as well, though the generated code
optimizes for the work case, which is counterproductive for high
performance scenarios where such entry/exit work is usually avoided.

Avoid this by marking the work check in syscall_enter_from_user_mode_work()
unlikely, which is what the corresponding check in the exit path does
already.

[ tglx: Massage changelog and add the unlikely() ]

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
Link: https://patch.msgid.link/20260128031934.3906955-14-ruanjinjie@huawei.com
---
 include/linux/entry-common.h         | 94 +++++++++++++++++++++++++-
 kernel/entry/common.h                |  7 +--
 kernel/entry/syscall-common.c        | 96 ++-------------------------
 kernel/entry/syscall_user_dispatch.c |  4 +-
 4 files changed, 102 insertions(+), 99 deletions(-)
 delete mode 100644 kernel/entry/common.h

diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
index bea207e..e67e3af 100644
--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -2,6 +2,7 @@
 #ifndef __LINUX_ENTRYCOMMON_H
 #define __LINUX_ENTRYCOMMON_H
 
+#include <linux/audit.h>
 #include <linux/irq-entry-common.h>
 #include <linux/livepatch.h>
 #include <linux/ptrace.h>
@@ -63,7 +64,58 @@ static __always_inline int arch_ptrace_report_syscall_entry(struct pt_regs *regs
 }
 #endif
 
-long syscall_trace_enter(struct pt_regs *regs, unsigned long work);
+bool syscall_user_dispatch(struct pt_regs *regs);
+long trace_syscall_enter(struct pt_regs *regs, long syscall);
+void trace_syscall_exit(struct pt_regs *regs, long ret);
+
+static inline void syscall_enter_audit(struct pt_regs *regs, long syscall)
+{
+	if (unlikely(audit_context())) {
+		unsigned long args[6];
+
+		syscall_get_arguments(current, regs, args);
+		audit_syscall_entry(syscall, args[0], args[1], args[2], args[3]);
+	}
+}
+
+static __always_inline long syscall_trace_enter(struct pt_regs *regs, unsigned long work)
+{
+	long syscall, ret = 0;
+
+	/*
+	 * Handle Syscall User Dispatch.  This must come first, since
+	 * the ABI here can be something that doesn't make sense for
+	 * other syscall_work features.
+	 */
+	if (work & SYSCALL_WORK_SYSCALL_USER_DISPATCH) {
+		if (syscall_user_dispatch(regs))
+			return -1L;
+	}
+
+	/* Handle ptrace */
+	if (work & (SYSCALL_WORK_SYSCALL_TRACE | SYSCALL_WORK_SYSCALL_EMU)) {
+		ret = arch_ptrace_report_syscall_entry(regs);
+		if (ret || (work & SYSCALL_WORK_SYSCALL_EMU))
+			return -1L;
+	}
+
+	/* Do seccomp after ptrace, to catch any tracer changes. */
+	if (work & SYSCALL_WORK_SECCOMP) {
+		ret = __secure_computing();
+		if (ret == -1L)
+			return ret;
+	}
+
+	/* Either of the above might have changed the syscall number */
+	syscall = syscall_get_nr(current, regs);
+
+	if (unlikely(work & SYSCALL_WORK_SYSCALL_TRACEPOINT))
+		syscall = trace_syscall_enter(regs, syscall);
+
+	syscall_enter_audit(regs, syscall);
+
+	return ret ? : syscall;
+}
 
 /**
  * syscall_enter_from_user_mode_work - Check and handle work before invoking
@@ -130,6 +182,19 @@ static __always_inline long syscall_enter_from_user_mode(struct pt_regs *regs, l
 	return ret;
 }
 
+/*
+ * If SYSCALL_EMU is set, then the only reason to report is when
+ * SINGLESTEP is set (i.e. PTRACE_SYSEMU_SINGLESTEP).  This syscall
+ * instruction has already been reported in syscall_enter_from_user_mode().
+ */
+static __always_inline bool report_single_step(unsigned long work)
+{
+	if (work & SYSCALL_WORK_SYSCALL_EMU)
+		return false;
+
+	return work & SYSCALL_WORK_SYSCALL_EXIT_TRAP;
+}
+
 /**
  * arch_ptrace_report_syscall_exit - Architecture specific ptrace_report_syscall_exit()
  *
@@ -155,7 +220,32 @@ static __always_inline void arch_ptrace_report_syscall_exit(struct pt_regs *regs
  *
  * Do one-time syscall specific work.
  */
-void syscall_exit_work(struct pt_regs *regs, unsigned long work);
+static __always_inline void syscall_exit_work(struct pt_regs *regs, unsigned long work)
+{
+	bool step;
+
+	/*
+	 * If the syscall was rolled back due to syscall user dispatching,
+	 * then the tracers below are not invoked for the same reason as
+	 * the entry side was not invoked in syscall_trace_enter(): The ABI
+	 * of these syscalls is unknown.
+	 */
+	if (work & SYSCALL_WORK_SYSCALL_USER_DISPATCH) {
+		if (unlikely(current->syscall_dispatch.on_dispatch)) {
+			current->syscall_dispatch.on_dispatch = false;
+			return;
+		}
+	}
+
+	audit_syscall_exit(regs);
+
+	if (work & SYSCALL_WORK_SYSCALL_TRACEPOINT)
+		trace_syscall_exit(regs, syscall_get_return_value(current, regs));
+
+	step = report_single_step(work);
+	if (step || work & SYSCALL_WORK_SYSCALL_TRACE)
+		arch_ptrace_report_syscall_exit(regs, step);
+}
 
 /**
  * syscall_exit_to_user_mode_work - Handle one time work before returning to user mode
diff --git a/kernel/entry/common.h b/kernel/entry/common.h
deleted file mode 100644
index f6e6d02..0000000
--- a/kernel/entry/common.h
+++ /dev/null
@@ -1,7 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _COMMON_H
-#define _COMMON_H
-
-bool syscall_user_dispatch(struct pt_regs *regs);
-
-#endif
diff --git a/kernel/entry/syscall-common.c b/kernel/entry/syscall-common.c
index bb5f61f..cd4967a 100644
--- a/kernel/entry/syscall-common.c
+++ b/kernel/entry/syscall-common.c
@@ -1,103 +1,23 @@
 // SPDX-License-Identifier: GPL-2.0
 
-#include <linux/audit.h>
 #include <linux/entry-common.h>
-#include "common.h"
 
 #define CREATE_TRACE_POINTS
 #include <trace/events/syscalls.h>
 
-static inline void syscall_enter_audit(struct pt_regs *regs, long syscall)
-{
-	if (unlikely(audit_context())) {
-		unsigned long args[6];
-
-		syscall_get_arguments(current, regs, args);
-		audit_syscall_entry(syscall, args[0], args[1], args[2], args[3]);
-	}
-}
+/* Out of line to prevent tracepoint code duplication */
 
-long syscall_trace_enter(struct pt_regs *regs, unsigned long work)
+long trace_syscall_enter(struct pt_regs *regs, long syscall)
 {
-	long syscall, ret = 0;
-
+	trace_sys_enter(regs, syscall);
 	/*
-	 * Handle Syscall User Dispatch.  This must comes first, since
-	 * the ABI here can be something that doesn't make sense for
-	 * other syscall_work features.
+	 * Probes or BPF hooks in the tracepoint may have changed the
+	 * system call number. Reread it.
 	 */
-	if (work & SYSCALL_WORK_SYSCALL_USER_DISPATCH) {
-		if (syscall_user_dispatch(regs))
-			return -1L;
-	}
-
-	/* Handle ptrace */
-	if (work & (SYSCALL_WORK_SYSCALL_TRACE | SYSCALL_WORK_SYSCALL_EMU)) {
-		ret = arch_ptrace_report_syscall_entry(regs);
-		if (ret || (work & SYSCALL_WORK_SYSCALL_EMU))
-			return -1L;
-	}
-
-	/* Do seccomp after ptrace, to catch any tracer changes. */
-	if (work & SYSCALL_WORK_SECCOMP) {
-		ret = __secure_computing();
-		if (ret == -1L)
-			return ret;
-	}
-
-	/* Either of the above might have changed the syscall number */
-	syscall = syscall_get_nr(current, regs);
-
-	if (unlikely(work & SYSCALL_WORK_SYSCALL_TRACEPOINT)) {
-		trace_sys_enter(regs, syscall);
-		/*
-		 * Probes or BPF hooks in the tracepoint may have changed the
-		 * system call number as well.
-		 */
-		syscall = syscall_get_nr(current, regs);
-	}
-
-	syscall_enter_audit(regs, syscall);
-
-	return ret ? : syscall;
+	return syscall_get_nr(current, regs);
 }
 
-/*
- * If SYSCALL_EMU is set, then the only reason to report is when
- * SINGLESTEP is set (i.e. PTRACE_SYSEMU_SINGLESTEP).  This syscall
- * instruction has been already reported in syscall_enter_from_user_mode().
- */
-static inline bool report_single_step(unsigned long work)
+void trace_syscall_exit(struct pt_regs *regs, long ret)
 {
-	if (work & SYSCALL_WORK_SYSCALL_EMU)
-		return false;
-
-	return work & SYSCALL_WORK_SYSCALL_EXIT_TRAP;
-}
-
-void syscall_exit_work(struct pt_regs *regs, unsigned long work)
-{
-	bool step;
-
-	/*
-	 * If the syscall was rolled back due to syscall user dispatching,
-	 * then the tracers below are not invoked for the same reason as
-	 * the entry side was not invoked in syscall_trace_enter(): The ABI
-	 * of these syscalls is unknown.
-	 */
-	if (work & SYSCALL_WORK_SYSCALL_USER_DISPATCH) {
-		if (unlikely(current->syscall_dispatch.on_dispatch)) {
-			current->syscall_dispatch.on_dispatch = false;
-			return;
-		}
-	}
-
-	audit_syscall_exit(regs);
-
-	if (work & SYSCALL_WORK_SYSCALL_TRACEPOINT)
-		trace_sys_exit(regs, syscall_get_return_value(current, regs));
-
-	step = report_single_step(work);
-	if (step || work & SYSCALL_WORK_SYSCALL_TRACE)
-		arch_ptrace_report_syscall_exit(regs, step);
+	trace_sys_exit(regs, ret);
 }
diff --git a/kernel/entry/syscall_user_dispatch.c b/kernel/entry/syscall_user_dispatch.c
index a9055ec..d89dffc 100644
--- a/kernel/entry/syscall_user_dispatch.c
+++ b/kernel/entry/syscall_user_dispatch.c
@@ -2,6 +2,8 @@
 /*
  * Copyright (C) 2020 Collabora Ltd.
  */
+
+#include <linux/entry-common.h>
 #include <linux/sched.h>
 #include <linux/prctl.h>
 #include <linux/ptrace.h>
@@ -15,8 +17,6 @@
 
 #include <asm/syscall.h>
 
-#include "common.h"
-
 static void trigger_sigsys(struct pt_regs *regs)
 {
 	struct kernel_siginfo info;


* [tip: core/entry] entry: Add arch_ptrace_report_syscall_entry/exit()
  2026-01-28  3:19 ` [PATCH v11 10/14] entry: Add arch_ptrace_report_syscall_entry/exit() Jinjie Ruan
@ 2026-01-30 21:53   ` tip-bot2 for Jinjie Ruan
  0 siblings, 0 replies; 33+ messages in thread
From: tip-bot2 for Jinjie Ruan @ 2026-01-30 21:53 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Mark Rutland, Thomas Gleixner, Jinjie Ruan, Thomas Gleixner,
	Kevin Brodsky, x86, linux-kernel

The following commit has been merged into the core/entry branch of tip:

Commit-ID:     578b21fd3ab2d9901ce40ed802e428a41a40610d
Gitweb:        https://git.kernel.org/tip/578b21fd3ab2d9901ce40ed802e428a41a40610d
Author:        Jinjie Ruan <ruanjinjie@huawei.com>
AuthorDate:    Wed, 28 Jan 2026 11:19:30 +08:00
Committer:     Thomas Gleixner <tglx@kernel.org>
CommitterDate: Fri, 30 Jan 2026 15:38:09 +01:00

entry: Add arch_ptrace_report_syscall_entry/exit()

ARM64 requires an architecture-specific ptrace wrapper as it needs to save
and restore scratch registers.

Provide arch_ptrace_report_syscall_entry/exit() wrappers which fall back to
ptrace_report_syscall_entry/exit() if the architecture does not provide
them.

No functional change intended.

[ tglx: Massaged changelog and comments ]

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
Link: https://patch.msgid.link/20260128031934.3906955-11-ruanjinjie@huawei.com
---
 include/linux/entry-common.h  | 36 ++++++++++++++++++++++++++++++++++-
 kernel/entry/syscall-common.c |  4 ++--
 2 files changed, 38 insertions(+), 2 deletions(-)

diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
index 5316004..bea207e 100644
--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -45,6 +45,24 @@
 				 SYSCALL_WORK_SYSCALL_EXIT_TRAP	|	\
 				 ARCH_SYSCALL_WORK_EXIT)
 
+/**
+ * arch_ptrace_report_syscall_entry - Architecture specific ptrace_report_syscall_entry() wrapper
+ *
+ * Invoked from syscall_trace_enter() to wrap ptrace_report_syscall_entry().
+ *
+ * This allows architecture specific ptrace_report_syscall_entry()
+ * implementations. If not defined by the architecture, this falls back
+ * to ptrace_report_syscall_entry().
+ */
+static __always_inline int arch_ptrace_report_syscall_entry(struct pt_regs *regs);
+
+#ifndef arch_ptrace_report_syscall_entry
+static __always_inline int arch_ptrace_report_syscall_entry(struct pt_regs *regs)
+{
+	return ptrace_report_syscall_entry(regs);
+}
+#endif
+
 long syscall_trace_enter(struct pt_regs *regs, unsigned long work);
 
 /**
@@ -113,6 +131,24 @@ static __always_inline long syscall_enter_from_user_mode(struct pt_regs *regs, l
 }
 
 /**
+ * arch_ptrace_report_syscall_exit - Architecture specific ptrace_report_syscall_exit()
+ *
+ * This allows architecture specific ptrace_report_syscall_exit()
+ * implementations. If not defined by the architecture, this falls back
+ * to ptrace_report_syscall_exit().
+ */
+static __always_inline void arch_ptrace_report_syscall_exit(struct pt_regs *regs,
+							    int step);
+
+#ifndef arch_ptrace_report_syscall_exit
+static __always_inline void arch_ptrace_report_syscall_exit(struct pt_regs *regs,
+							    int step)
+{
+	ptrace_report_syscall_exit(regs, step);
+}
+#endif
+
+/**
  * syscall_exit_work - Handle work before returning to user mode
  * @regs:	Pointer to current pt_regs
  * @work:	Current thread syscall work
diff --git a/kernel/entry/syscall-common.c b/kernel/entry/syscall-common.c
index e6237b5..bb5f61f 100644
--- a/kernel/entry/syscall-common.c
+++ b/kernel/entry/syscall-common.c
@@ -33,7 +33,7 @@ long syscall_trace_enter(struct pt_regs *regs, unsigned long work)
 
 	/* Handle ptrace */
 	if (work & (SYSCALL_WORK_SYSCALL_TRACE | SYSCALL_WORK_SYSCALL_EMU)) {
-		ret = ptrace_report_syscall_entry(regs);
+		ret = arch_ptrace_report_syscall_entry(regs);
 		if (ret || (work & SYSCALL_WORK_SYSCALL_EMU))
 			return -1L;
 	}
@@ -99,5 +99,5 @@ void syscall_exit_work(struct pt_regs *regs, unsigned long work)
 
 	step = report_single_step(work);
 	if (step || work & SYSCALL_WORK_SYSCALL_TRACE)
-		ptrace_report_syscall_exit(regs, step);
+		arch_ptrace_report_syscall_exit(regs, step);
 }


* [tip: core/entry] entry: Rework syscall_exit_to_user_mode_work() for architecture reuse
  2026-01-28  3:19 ` [PATCH v11 09/14] entry: Rework syscall_exit_to_user_mode_work() for arch reuse Jinjie Ruan
  2026-01-29 12:06   ` Kevin Brodsky
@ 2026-01-30 21:53   ` tip-bot2 for Jinjie Ruan
  1 sibling, 0 replies; 33+ messages in thread
From: tip-bot2 for Jinjie Ruan @ 2026-01-30 21:53 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Jinjie Ruan, Thomas Gleixner, Kevin Brodsky, Thomas Gleixner, x86,
	linux-kernel

The following commit has been merged into the core/entry branch of tip:

Commit-ID:     e1647100c22eb718e9833211722cbb78e339047c
Gitweb:        https://git.kernel.org/tip/e1647100c22eb718e9833211722cbb78e339047c
Author:        Jinjie Ruan <ruanjinjie@huawei.com>
AuthorDate:    Wed, 28 Jan 2026 11:19:29 +08:00
Committer:     Thomas Gleixner <tglx@kernel.org>
CommitterDate: Fri, 30 Jan 2026 15:38:09 +01:00

entry: Rework syscall_exit_to_user_mode_work() for architecture reuse

syscall_exit_to_user_mode_work() invokes local_irq_disable_exit_to_user()
and syscall_exit_to_user_mode_prepare() after handling pending syscall exit
work.

The conversion of ARM64 to the generic entry code requires this to be split
up, so move the invocations of local_irq_disable_exit_to_user() and
syscall_exit_to_user_mode_prepare() into the only caller.

No functional change intended.

[ tglx: Massaged changelog and comments ]

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://patch.msgid.link/20260128031934.3906955-10-ruanjinjie@huawei.com
---
 include/linux/entry-common.h | 25 +++++++++++--------------
 1 file changed, 11 insertions(+), 14 deletions(-)

diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
index e4a8287..5316004 100644
--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -122,17 +122,12 @@ static __always_inline long syscall_enter_from_user_mode(struct pt_regs *regs, l
 void syscall_exit_work(struct pt_regs *regs, unsigned long work);
 
 /**
- * syscall_exit_to_user_mode_work - Handle work before returning to user mode
+ * syscall_exit_to_user_mode_work - Handle one time work before returning to user mode
  * @regs:	Pointer to currents pt_regs
  *
- * Same as step 1 and 2 of syscall_exit_to_user_mode() but without calling
- * exit_to_user_mode() to perform the final transition to user mode.
+ * Step 1 of syscall_exit_to_user_mode() with the same calling convention.
  *
- * Calling convention is the same as for syscall_exit_to_user_mode() and it
- * returns with all work handled and interrupts disabled. The caller must
- * invoke exit_to_user_mode() before actually switching to user mode to
- * make the final state transitions. Interrupts must stay disabled between
- * return from this function and the invocation of exit_to_user_mode().
+ * The caller must invoke steps 2-3 of syscall_exit_to_user_mode() afterwards.
  */
 static __always_inline void syscall_exit_to_user_mode_work(struct pt_regs *regs)
 {
@@ -155,15 +150,13 @@ static __always_inline void syscall_exit_to_user_mode_work(struct pt_regs *regs)
 	 */
 	if (unlikely(work & SYSCALL_WORK_EXIT))
 		syscall_exit_work(regs, work);
-	local_irq_disable_exit_to_user();
-	syscall_exit_to_user_mode_prepare(regs);
 }
 
 /**
  * syscall_exit_to_user_mode - Handle work before returning to user mode
  * @regs:	Pointer to currents pt_regs
  *
- * Invoked with interrupts enabled and fully valid regs. Returns with all
+ * Invoked with interrupts enabled and fully valid @regs. Returns with all
  * work handled, interrupts disabled such that the caller can immediately
  * switch to user mode. Called from architecture specific syscall and ret
  * from fork code.
@@ -176,6 +169,7 @@ static __always_inline void syscall_exit_to_user_mode_work(struct pt_regs *regs)
  *	- ptrace (single stepping)
  *
  *  2) Preparatory work
+ *	- Disable interrupts
  *	- Exit to user mode loop (common TIF handling). Invokes
  *	  arch_exit_to_user_mode_work() for architecture specific TIF work
  *	- Architecture specific one time work arch_exit_to_user_mode_prepare()
@@ -184,14 +178,17 @@ static __always_inline void syscall_exit_to_user_mode_work(struct pt_regs *regs)
  *  3) Final transition (lockdep, tracing, context tracking, RCU), i.e. the
  *     functionality in exit_to_user_mode().
  *
- * This is a combination of syscall_exit_to_user_mode_work() (1,2) and
- * exit_to_user_mode(). This function is preferred unless there is a
- * compelling architectural reason to use the separate functions.
+ * This is a combination of syscall_exit_to_user_mode_work() (1), disabling
+ * interrupts followed by syscall_exit_to_user_mode_prepare() (2) and
+ * exit_to_user_mode() (3). This function is preferred unless there is a
+ * compelling architectural reason to invoke the functions separately.
  */
 static __always_inline void syscall_exit_to_user_mode(struct pt_regs *regs)
 {
 	instrumentation_begin();
 	syscall_exit_to_user_mode_work(regs);
+	local_irq_disable_exit_to_user();
+	syscall_exit_to_user_mode_prepare(regs);
 	instrumentation_end();
 	exit_to_user_mode();
 }


* [tip: core/entry] entry: Remove unused syscall argument from syscall_trace_enter()
  2026-01-28  3:19 ` [PATCH v11 01/14] entry: Remove unused syscall in syscall_trace_enter() Jinjie Ruan
  2026-01-29 12:06   ` Kevin Brodsky
  2026-01-30 10:11   ` Thomas Gleixner
@ 2026-01-30 21:53   ` tip-bot2 for Jinjie Ruan
  2 siblings, 0 replies; 33+ messages in thread
From: tip-bot2 for Jinjie Ruan @ 2026-01-30 21:53 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Jinjie Ruan, Thomas Gleixner, x86, linux-kernel

The following commit has been merged into the core/entry branch of tip:

Commit-ID:     03150a9f84b328f5c724b8ed9ff8600c2d7e2d7b
Gitweb:        https://git.kernel.org/tip/03150a9f84b328f5c724b8ed9ff8600c2d7e2d7b
Author:        Jinjie Ruan <ruanjinjie@huawei.com>
AuthorDate:    Wed, 28 Jan 2026 11:19:21 +08:00
Committer:     Thomas Gleixner <tglx@kernel.org>
CommitterDate: Fri, 30 Jan 2026 15:38:09 +01:00

entry: Remove unused syscall argument from syscall_trace_enter()

The 'syscall' argument of syscall_trace_enter() is immediately overwritten
before any real use and serves only as a local variable, so drop the
parameter.

No functional change intended.

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
Link: https://patch.msgid.link/20260128031934.3906955-2-ruanjinjie@huawei.com
---
 include/linux/entry-common.h  | 4 ++--
 kernel/entry/syscall-common.c | 5 ++---
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
index 87efb38..e4a8287 100644
--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -45,7 +45,7 @@
 				 SYSCALL_WORK_SYSCALL_EXIT_TRAP	|	\
 				 ARCH_SYSCALL_WORK_EXIT)
 
-long syscall_trace_enter(struct pt_regs *regs, long syscall, unsigned long work);
+long syscall_trace_enter(struct pt_regs *regs, unsigned long work);
 
 /**
  * syscall_enter_from_user_mode_work - Check and handle work before invoking
@@ -75,7 +75,7 @@ static __always_inline long syscall_enter_from_user_mode_work(struct pt_regs *re
 	unsigned long work = READ_ONCE(current_thread_info()->syscall_work);
 
 	if (work & SYSCALL_WORK_ENTER)
-		syscall = syscall_trace_enter(regs, syscall, work);
+		syscall = syscall_trace_enter(regs, work);
 
 	return syscall;
 }
diff --git a/kernel/entry/syscall-common.c b/kernel/entry/syscall-common.c
index 940a597..e6237b5 100644
--- a/kernel/entry/syscall-common.c
+++ b/kernel/entry/syscall-common.c
@@ -17,10 +17,9 @@ static inline void syscall_enter_audit(struct pt_regs *regs, long syscall)
 	}
 }
 
-long syscall_trace_enter(struct pt_regs *regs, long syscall,
-				unsigned long work)
+long syscall_trace_enter(struct pt_regs *regs, unsigned long work)
 {
-	long ret = 0;
+	long syscall, ret = 0;
 
 	/*
 	 * Handle Syscall User Dispatch.  This must comes first, since


* Re: [PATCH v11 09/14] entry: Rework syscall_exit_to_user_mode_work() for arch reuse
  2026-01-30 15:01             ` Thomas Gleixner
@ 2026-01-30 23:33               ` Thomas Gleixner
  2026-01-31  1:43               ` Jinjie Ruan
  1 sibling, 0 replies; 33+ messages in thread
From: Thomas Gleixner @ 2026-01-30 23:33 UTC (permalink / raw)
  To: Kevin Brodsky, Jinjie Ruan, catalin.marinas, will, oleg, peterz,
	luto, shuah, kees, wad, deller, akpm, charlie, mark.rutland,
	anshuman.khandual, song, ryan.roberts, thuth, ada.coupriediaz,
	broonie, pengcan, liqiang01, kmal, dvyukov, reddybalavignesh9979,
	richard.weiyang, linux-arm-kernel, linux-kernel, linux-kselftest

On Fri, Jan 30 2026 at 16:01, Thomas Gleixner wrote:
> I'll go and apply them on top of 6.19-rc1 into core/entry and merge that
> into the scheduler branch to resolve the resulting conflict.
>
> ARM64 can either pull that branch or wait until the next rc1 comes out.

  git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git entry-for-arm64-26-01-31

Consider that tag immutable and consumable for ARM64 if you need it. I
did some massaging, but the ARM64 pile should still apply on top of it.

Thanks,

        tglx



* Re: [PATCH v11 09/14] entry: Rework syscall_exit_to_user_mode_work() for arch reuse
  2026-01-30 15:01             ` Thomas Gleixner
  2026-01-30 23:33               ` Thomas Gleixner
@ 2026-01-31  1:43               ` Jinjie Ruan
  1 sibling, 0 replies; 33+ messages in thread
From: Jinjie Ruan @ 2026-01-31  1:43 UTC (permalink / raw)
  To: Thomas Gleixner, Kevin Brodsky, catalin.marinas, will, oleg,
	peterz, luto, shuah, kees, wad, deller, akpm, charlie,
	mark.rutland, anshuman.khandual, song, ryan.roberts, thuth,
	ada.coupriediaz, broonie, pengcan, liqiang01, kmal, dvyukov,
	reddybalavignesh9979, richard.weiyang, linux-arm-kernel,
	linux-kernel, linux-kselftest



On 2026/1/30 23:01, Thomas Gleixner wrote:
> On Fri, Jan 30 2026 at 14:27, Kevin Brodsky wrote:
>> On 30/01/2026 11:16, Thomas Gleixner wrote:
>>>> Agreed, the comments are essentially describing what each function
>>>> calls; considering how short they are, directly reading the code is
>>>> probably easier.
>>> No. Please keep them. There is more information in them than just the
>>> pure 'what's' called.
>>
>> That is true before this patch, where it made sense to highlight that
>> exit_to_user_mode() must still be called after this function (without
>> re-enabling interrupts). With this patch there is however much more that
>> this function is lacking, and it feels very likely that comments will go
>> out of sync with exactly what syscall_exit_to_user_mode() calls.
>>
>> I suppose we could simply point the reader to
>> syscall_exit_to_user_mode() to find out what else is needed, and keep
>> the comment about the calling convention being the same.
> 
> I've picked up _all_ four entry changes and reworked the comments and
> changelogs already.
> 
> Those patches should have been bundled together at the start of the
> series anyway so they can be picked up independently without going
> through loops and hoops. When will people learn to think beyond the brim
> of their architecture tea cup?

I'll make sure to group related changes together from the start next
time and keep the whole series in view, not just the
architecture-specific parts.
Thanks for taking the time to re-work them — much appreciated.

> 
> I'll go and apply them on top of 6.19-rc1 into core/entry and merge that
> into the scheduler branch to resolve the resulting conflict.
> 
> ARM64 can either pull that branch or wait until the next rc1 comes out.

Thanks for re-bundling the four entry patches and reworking the logs —
that definitely makes the series easier to pick up.

I'll rebase my remaining changes on top of 6.19-rc1 once the core/entry
branch lands.

Let me know if there’s anything I can do to simplify the logistics.

Regards,
Jinjie

> 
> Thanks,
> 
>         tglx
> 
> 


* Re: [PATCH v11 13/14] entry: Inline syscall_exit_work() and syscall_trace_enter()
  2026-01-30 10:14   ` Thomas Gleixner
@ 2026-01-31  1:48     ` Jinjie Ruan
  0 siblings, 0 replies; 33+ messages in thread
From: Jinjie Ruan @ 2026-01-31  1:48 UTC (permalink / raw)
  To: Thomas Gleixner, catalin.marinas, will, oleg, peterz, luto, shuah,
	kees, wad, kevin.brodsky, deller, akpm, charlie, ldv,
	mark.rutland, anshuman.khandual, song, ryan.roberts, thuth,
	ada.coupriediaz, broonie, pengcan, liqiang01, kmal, dvyukov,
	reddybalavignesh9979, richard.weiyang, linux-arm-kernel,
	linux-kernel, linux-kselftest



On 2026/1/30 18:14, Thomas Gleixner wrote:
> On Wed, Jan 28 2026 at 11:19, Jinjie Ruan wrote:
>> After switching arm64 to the Generic Entry, syscall_exit_work() showed
>> up as a new hotspot because it is not inlined, so inline it. Also
>> inline syscall_trace_enter() to align with syscall_exit_work().
> 
> Has the same collision problem. I can pick that up and massage it on top
> of the pending time slice changes. Let me give it a test ride on x86...

Thank you very much; hopefully we'll see similarly good results there too.

> 


end of thread, other threads:[~2026-01-31  1:48 UTC | newest]

Thread overview: 33+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-01-28  3:19 [PATCH v11 00/14] arm64: entry: Convert to Generic Entry Jinjie Ruan
2026-01-28  3:19 ` [PATCH v11 01/14] entry: Remove unused syscall in syscall_trace_enter() Jinjie Ruan
2026-01-29 12:06   ` Kevin Brodsky
2026-01-30 10:11   ` Thomas Gleixner
2026-01-30 21:53   ` [tip: core/entry] entry: Remove unused syscall argument from syscall_trace_enter() tip-bot2 for Jinjie Ruan
2026-01-28  3:19 ` [PATCH v11 02/14] arm64/ptrace: Refactor syscall_trace_enter/exit() Jinjie Ruan
2026-01-28  3:19 ` [PATCH v11 03/14] arm64: ptrace: Move rseq_syscall() before audit_syscall_exit() Jinjie Ruan
2026-01-29 12:06   ` Kevin Brodsky
2026-01-29 13:06     ` Jinjie Ruan
2026-01-28  3:19 ` [PATCH v11 04/14] arm64: syscall: Rework el0_svc_common() Jinjie Ruan
2026-01-28  3:19 ` [PATCH v11 05/14] arm64/ptrace: Not check _TIF_SECCOMP/SYSCALL_EMU for syscall_exit_work() Jinjie Ruan
2026-01-28  3:19 ` [PATCH v11 06/14] arm64/ptrace: Do not report_syscall_exit() for PTRACE_SYSEMU_SINGLESTEP Jinjie Ruan
2026-01-28  3:19 ` [PATCH v11 07/14] arm64/ptrace: Expand secure_computing() in place Jinjie Ruan
2026-01-28  3:19 ` [PATCH v11 08/14] arm64/ptrace: Use syscall_get_arguments() helper Jinjie Ruan
2026-01-28  3:19 ` [PATCH v11 09/14] entry: Rework syscall_exit_to_user_mode_work() for arch reuse Jinjie Ruan
2026-01-29 12:06   ` Kevin Brodsky
2026-01-29 13:11     ` Jinjie Ruan
2026-01-29 16:00       ` Kevin Brodsky
2026-01-30 10:16         ` Thomas Gleixner
2026-01-30 13:27           ` Kevin Brodsky
2026-01-30 15:01             ` Thomas Gleixner
2026-01-30 23:33               ` Thomas Gleixner
2026-01-31  1:43               ` Jinjie Ruan
2026-01-30 21:53   ` [tip: core/entry] entry: Rework syscall_exit_to_user_mode_work() for architecture reuse tip-bot2 for Jinjie Ruan
2026-01-28  3:19 ` [PATCH v11 10/14] entry: Add arch_ptrace_report_syscall_entry/exit() Jinjie Ruan
2026-01-30 21:53   ` [tip: core/entry] " tip-bot2 for Jinjie Ruan
2026-01-28  3:19 ` [PATCH v11 11/14] arm64: entry: Convert to generic entry Jinjie Ruan
2026-01-28  3:19 ` [PATCH v11 12/14] arm64: Inline el0_svc_common() Jinjie Ruan
2026-01-28  3:19 ` [PATCH v11 13/14] entry: Inline syscall_exit_work() and syscall_trace_enter() Jinjie Ruan
2026-01-30 10:14   ` Thomas Gleixner
2026-01-31  1:48     ` Jinjie Ruan
2026-01-30 21:53   ` [tip: core/entry] " tip-bot2 for Jinjie Ruan
2026-01-28  3:19 ` [PATCH v11 14/14] selftests: sud_test: Support aarch64 Jinjie Ruan

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox