linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH -next v7 0/7] arm64: entry: Convert to generic irq entry
@ 2025-07-29  1:54 Jinjie Ruan
  2025-07-29  1:54 ` [PATCH -next v7 1/7] arm64: ptrace: Replace interrupts_enabled() with regs_irqs_disabled() Jinjie Ruan
                   ` (8 more replies)
  0 siblings, 9 replies; 33+ messages in thread
From: Jinjie Ruan @ 2025-07-29  1:54 UTC (permalink / raw)
  To: catalin.marinas, will, oleg, sstabellini, mark.rutland, puranjay,
	broonie, mbenes, ryan.roberts, akpm, chenl311, ada.coupriediaz,
	anshuman.khandual, kristina.martsenko, liaochang1, ardb, leitao,
	linux-arm-kernel, linux-kernel, xen-devel
  Cc: ruanjinjie

Currently, x86, RISC-V, and LoongArch use the generic entry code. Convert
arm64 to use the generic entry infrastructure from kernel/entry/* as well.
The generic entry makes maintainers' work easier and the code more
elegant; it also lets PREEMPT_DYNAMIC and PREEMPT_LAZY use the generic
entry common code and removes a lot of duplicate code.

Since commit a70e9f647f50 ("entry: Split generic entry into generic
exception and syscall entry") split the generic entry into generic irq
entry and generic syscall entry, it is time to convert arm64 to the
generic irq entry. arm64 will be completely converted to the generic
entry in an upcoming patch series.

The main conversion steps are as follows:
- Split the generic entry into generic irq entry and generic syscall
  entry, so that each patch focuses on a single change.
- Make it easier for arm64 to use irqentry_enter/exit().
- Bring arm64 closer to the PREEMPT_DYNAMIC code of the generic entry.
- Switch to generic irq entry.

It was tested OK with the following test cases on a QEMU virt platform:
 - Perf tests.
 - Switching between the different `dynamic preempt` modes.
 - Pseudo NMI tests.
 - Stress-ng CPU stress test.
 - MTE test case in Documentation/arch/arm64/memory-tagging-extension.rst
   and all test cases in tools/testing/selftests/arm64/mte/*.

The test QEMU configuration is as follows:

	qemu-system-aarch64 \
		-M virt,gic-version=3,virtualization=on,mte=on \
		-cpu max,pauth-impdef=on \
		-kernel Image \
		-smp 8,sockets=1,cores=4,threads=2 \
		-m 512m \
		-nographic \
		-no-reboot \
		-device virtio-rng-pci \
		-append "root=/dev/vda rw console=ttyAMA0 kgdboc=ttyAMA0,115200 \
			earlycon preempt=voluntary irqchip.gicv3_pseudo_nmi=1" \
		-drive if=none,file=images/rootfs.ext4,format=raw,id=hd0 \
		-device virtio-blk-device,drive=hd0 \

Changes in v7:
- Rebased on v6.16-rc7 and removed the merged first patch.
- Update the commit message.

Changes in v6:
- Rebased on 6.14-rc2 next.
- Put the syscall bits aside and split them out.
- Have the split patch before the arm64 changes.
- Merge some tightly coupled patches.
- Adjust the order of some patches to make them more reasonable.
- Define regs_irqs_disabled() as an inline function.
- Define interrupts_enabled() in terms of regs_irqs_disabled().
- Delete the fast_interrupts_enabled() macro.
- irqentry_state_t -> arm64_irqentry_state_t.
- Remove arch_exit_to_user_mode_prepare() and pull local_daif_mask() later
  in the arm64 exit sequence.
- Update the commit message.

Changes in v5:
- Do not change arm32 and keep the interrupts_enabled() macro for the
  GICv3 driver.
- Move irqentry_state definition into arch/arm64/kernel/entry-common.c.
- Avoid removing the __enter_from_*() and __exit_to_*() wrappers.
- Update "irqentry_state_t ret/irq_state" to "state"
  to keep it consistently.
- Use the generic irq entry header for PREEMPT_DYNAMIC after splitting
  the generic entry.
- Also refactor the ARM64 syscall code.
- Introduce arch_ptrace_report_syscall_entry/exit() instead of
  arch_pre/post_report_syscall_entry/exit() to simplify the code.
- Give the syscall patches a clear separation.
- Update the commit message.

Changes in v4:
- Split the rework/cleanup into a few patches, as Mark suggested.
- Replace the interrupts_enabled() macro with regs_irqs_disabled(),
  instead of leaving it in place.
- Remove the RCU and lockdep state from pt_regs by using a temporary
  irqentry_state_t, as Mark suggested.
- Remove some unnecessary intermediate functions to make the code clearer.
- Rework the preempt irq and PREEMPT_DYNAMIC code
  to make the switch clearer.
- arch_prepare_*_entry/exit() -> arch_pre_*_entry/exit().
- Expand the arch functions' comments.
- Move the arch functions closer to their callers.
- Declare saved_reg in the for block.
- Remove arch_exit_to_kernel_mode_prepare(), arch_enter_from_kernel_mode().
- Adjust "Add few arch functions to use generic entry" patch to be
  the penultimate.
- Update the commit message.
- Add suggested-by.

Changes in v3:
- Test the MTE test cases.
- Handle forget_syscall() in arch_post_report_syscall_entry().
- Make the arch funcs not use __weak, as Thomas suggested, by moving
  them to entry-common.h, and fold arch_forget_syscall() into
  arch_post_report_syscall_entry() as suggested.
- Move report_single_step() to thread_info.h for arm64.
- Change __always_inline to inline, and add inline for the other arch funcs.
- Remove the unused signal.h include from entry-common.h.
- Add Suggested-by.
- Update the commit message.

Changes in v2:
- Add tested-by.
- Fix a bug where arch_post_report_syscall_entry() was not called in
  syscall_trace_enter() if ptrace_report_syscall_entry() returned non-zero.
- Refactor report_syscall().
- Add comment for arch_prepare_report_syscall_exit().
- Adjust entry-common.h header file inclusion to alphabetical order.
- Update the commit message.

Jinjie Ruan (7):
  arm64: ptrace: Replace interrupts_enabled() with regs_irqs_disabled()
  arm64: entry: Refactor the entry and exit for exceptions from EL1
  arm64: entry: Rework arm64_preempt_schedule_irq()
  arm64: entry: Use preempt_count() and need_resched() helper
  arm64: entry: Refactor preempt_schedule_irq() check code
  arm64: entry: Move arm64_preempt_schedule_irq() into
    __exit_to_kernel_mode()
  arm64: entry: Switch to generic IRQ entry

 arch/arm64/Kconfig                    |   1 +
 arch/arm64/include/asm/daifflags.h    |   2 +-
 arch/arm64/include/asm/entry-common.h |  56 ++++
 arch/arm64/include/asm/preempt.h      |   2 -
 arch/arm64/include/asm/ptrace.h       |  13 +-
 arch/arm64/include/asm/xen/events.h   |   2 +-
 arch/arm64/kernel/acpi.c              |   2 +-
 arch/arm64/kernel/debug-monitors.c    |   2 +-
 arch/arm64/kernel/entry-common.c      | 411 +++++++++-----------------
 arch/arm64/kernel/sdei.c              |   2 +-
 arch/arm64/kernel/signal.c            |   3 +-
 kernel/entry/common.c                 |  16 +-
 12 files changed, 217 insertions(+), 295 deletions(-)
 create mode 100644 arch/arm64/include/asm/entry-common.h

-- 
2.34.1




* [PATCH -next v7 1/7] arm64: ptrace: Replace interrupts_enabled() with regs_irqs_disabled()
  2025-07-29  1:54 [PATCH -next v7 0/7] arm64: entry: Convert to generic irq entry Jinjie Ruan
@ 2025-07-29  1:54 ` Jinjie Ruan
  2025-08-05 15:05   ` Ada Couprie Diaz
  2025-07-29  1:54 ` [PATCH -next v7 2/7] arm64: entry: Refactor the entry and exit for exceptions from EL1 Jinjie Ruan
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 33+ messages in thread
From: Jinjie Ruan @ 2025-07-29  1:54 UTC (permalink / raw)
  To: catalin.marinas, will, oleg, sstabellini, mark.rutland, puranjay,
	broonie, mbenes, ryan.roberts, akpm, chenl311, ada.coupriediaz,
	anshuman.khandual, kristina.martsenko, liaochang1, ardb, leitao,
	linux-arm-kernel, linux-kernel, xen-devel
  Cc: ruanjinjie

The generic entry code expects architecture code to provide a
regs_irqs_disabled(regs) function, but arm64 does not have this and
instead provides interrupts_enabled(regs), which has the opposite
polarity.

In preparation for moving arm64 over to the generic entry code,
replace arm64's interrupts_enabled() with regs_irqs_disabled() and
update its callers under arch/arm64.

For the moment, a definition of interrupts_enabled() is provided for
the GICv3 driver. Once arch/arm implements regs_irqs_disabled(), this
can be removed.

Delete the fast_interrupts_enabled() macro as it is unused and we
don't want any new users to show up.

No functional changes.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 arch/arm64/include/asm/daifflags.h  | 2 +-
 arch/arm64/include/asm/ptrace.h     | 9 +++++----
 arch/arm64/include/asm/xen/events.h | 2 +-
 arch/arm64/kernel/acpi.c            | 2 +-
 arch/arm64/kernel/debug-monitors.c  | 2 +-
 arch/arm64/kernel/entry-common.c    | 4 ++--
 arch/arm64/kernel/sdei.c            | 2 +-
 7 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h
index fbb5c99eb2f9..5fca48009043 100644
--- a/arch/arm64/include/asm/daifflags.h
+++ b/arch/arm64/include/asm/daifflags.h
@@ -128,7 +128,7 @@ static inline void local_daif_inherit(struct pt_regs *regs)
 {
 	unsigned long flags = regs->pstate & DAIF_MASK;
 
-	if (interrupts_enabled(regs))
+	if (!regs_irqs_disabled(regs))
 		trace_hardirqs_on();
 
 	if (system_uses_irq_prio_masking())
diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h
index 47ff8654c5ec..8b915d4a9d4b 100644
--- a/arch/arm64/include/asm/ptrace.h
+++ b/arch/arm64/include/asm/ptrace.h
@@ -214,11 +214,12 @@ static inline void forget_syscall(struct pt_regs *regs)
 		(regs)->pmr == GIC_PRIO_IRQON :				\
 		true)
 
-#define interrupts_enabled(regs)			\
-	(!((regs)->pstate & PSR_I_BIT) && irqs_priority_unmasked(regs))
+static __always_inline bool regs_irqs_disabled(const struct pt_regs *regs)
+{
+	return (regs->pstate & PSR_I_BIT) || !irqs_priority_unmasked(regs);
+}
 
-#define fast_interrupts_enabled(regs) \
-	(!((regs)->pstate & PSR_F_BIT))
+#define interrupts_enabled(regs)	(!regs_irqs_disabled(regs))
 
 static inline unsigned long user_stack_pointer(struct pt_regs *regs)
 {
diff --git a/arch/arm64/include/asm/xen/events.h b/arch/arm64/include/asm/xen/events.h
index 2788e95d0ff0..2977b5fe068d 100644
--- a/arch/arm64/include/asm/xen/events.h
+++ b/arch/arm64/include/asm/xen/events.h
@@ -14,7 +14,7 @@ enum ipi_vector {
 
 static inline int xen_irqs_disabled(struct pt_regs *regs)
 {
-	return !interrupts_enabled(regs);
+	return regs_irqs_disabled(regs);
 }
 
 #define xchg_xen_ulong(ptr, val) xchg((ptr), (val))
diff --git a/arch/arm64/kernel/acpi.c b/arch/arm64/kernel/acpi.c
index 4d529ff7ba51..3fbce0a9a0fe 100644
--- a/arch/arm64/kernel/acpi.c
+++ b/arch/arm64/kernel/acpi.c
@@ -407,7 +407,7 @@ int apei_claim_sea(struct pt_regs *regs)
 	return_to_irqs_enabled = !irqs_disabled_flags(arch_local_save_flags());
 
 	if (regs)
-		return_to_irqs_enabled = interrupts_enabled(regs);
+		return_to_irqs_enabled = !regs_irqs_disabled(regs);
 
 	/*
 	 * SEA can interrupt SError, mask it and describe this as an NMI so
diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
index 110d9ff54174..85fc162a6f9b 100644
--- a/arch/arm64/kernel/debug-monitors.c
+++ b/arch/arm64/kernel/debug-monitors.c
@@ -167,7 +167,7 @@ static void send_user_sigtrap(int si_code)
 	if (WARN_ON(!user_mode(regs)))
 		return;
 
-	if (interrupts_enabled(regs))
+	if (!regs_irqs_disabled(regs))
 		local_irq_enable();
 
 	arm64_force_sig_fault(SIGTRAP, si_code, instruction_pointer(regs),
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 2b0c5925502e..8e798f46ad28 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -74,7 +74,7 @@ static __always_inline void __exit_to_kernel_mode(struct pt_regs *regs)
 {
 	lockdep_assert_irqs_disabled();
 
-	if (interrupts_enabled(regs)) {
+	if (!regs_irqs_disabled(regs)) {
 		if (regs->exit_rcu) {
 			trace_hardirqs_on_prepare();
 			lockdep_hardirqs_on_prepare();
@@ -662,7 +662,7 @@ static void noinstr el1_interrupt(struct pt_regs *regs,
 {
 	write_sysreg(DAIF_PROCCTX_NOIRQ, daif);
 
-	if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && !interrupts_enabled(regs))
+	if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && regs_irqs_disabled(regs))
 		__el1_pnmi(regs, handler);
 	else
 		__el1_irq(regs, handler);
diff --git a/arch/arm64/kernel/sdei.c b/arch/arm64/kernel/sdei.c
index 6f24a0251e18..95169f7b6531 100644
--- a/arch/arm64/kernel/sdei.c
+++ b/arch/arm64/kernel/sdei.c
@@ -243,7 +243,7 @@ unsigned long __kprobes do_sdei_event(struct pt_regs *regs,
 	 * If we interrupted the kernel with interrupts masked, we always go
 	 * back to wherever we came from.
 	 */
-	if (mode == kernel_mode && !interrupts_enabled(regs))
+	if (mode == kernel_mode && regs_irqs_disabled(regs))
 		return SDEI_EV_HANDLED;
 
 	/*
-- 
2.34.1




* [PATCH -next v7 2/7] arm64: entry: Refactor the entry and exit for exceptions from EL1
  2025-07-29  1:54 [PATCH -next v7 0/7] arm64: entry: Convert to generic irq entry Jinjie Ruan
  2025-07-29  1:54 ` [PATCH -next v7 1/7] arm64: ptrace: Replace interrupts_enabled() with regs_irqs_disabled() Jinjie Ruan
@ 2025-07-29  1:54 ` Jinjie Ruan
  2025-08-05 15:06   ` Ada Couprie Diaz
  2025-08-12 11:01   ` Mark Rutland
  2025-07-29  1:54 ` [PATCH -next v7 3/7] arm64: entry: Rework arm64_preempt_schedule_irq() Jinjie Ruan
                   ` (6 subsequent siblings)
  8 siblings, 2 replies; 33+ messages in thread
From: Jinjie Ruan @ 2025-07-29  1:54 UTC (permalink / raw)
  To: catalin.marinas, will, oleg, sstabellini, mark.rutland, puranjay,
	broonie, mbenes, ryan.roberts, akpm, chenl311, ada.coupriediaz,
	anshuman.khandual, kristina.martsenko, liaochang1, ardb, leitao,
	linux-arm-kernel, linux-kernel, xen-devel
  Cc: ruanjinjie

The generic entry code uses irqentry_state_t to track lockdep and RCU
state across exception entry and return. For historical reasons, arm64
embeds similar fields within its pt_regs structure.

In preparation for moving arm64 over to the generic entry code, pull
these fields out of arm64's pt_regs, and use a separate structure,
matching the style of the generic entry code.

No functional changes.
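
For comparison, a sketch of the generic entry code's equivalent type,
which the final patch in this series switches to (assuming the
post-split layout, where the definition lives under
include/linux/irq-entry-common.h):

| typedef struct irqentry_state {
| 	union {
| 		bool	exit_rcu;
| 		bool	lockdep;
| 	};
| } irqentry_state_t;

The arm64_irqentry_state_t introduced here is deliberately identical in
shape, so the later switch is a pure type substitution.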

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 arch/arm64/include/asm/ptrace.h  |   4 -
 arch/arm64/kernel/entry-common.c | 151 +++++++++++++++++++------------
 2 files changed, 94 insertions(+), 61 deletions(-)

diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h
index 8b915d4a9d4b..65b053a24d82 100644
--- a/arch/arm64/include/asm/ptrace.h
+++ b/arch/arm64/include/asm/ptrace.h
@@ -169,10 +169,6 @@ struct pt_regs {
 
 	u64 sdei_ttbr1;
 	struct frame_record_meta stackframe;
-
-	/* Only valid for some EL1 exceptions. */
-	u64 lockdep_hardirqs;
-	u64 exit_rcu;
 };
 
 /* For correct stack alignment, pt_regs has to be a multiple of 16 bytes. */
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 8e798f46ad28..97e0741abde1 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -29,6 +29,13 @@
 #include <asm/sysreg.h>
 #include <asm/system_misc.h>
 
+typedef struct irqentry_state {
+	union {
+		bool	exit_rcu;
+		bool	lockdep;
+	};
+} arm64_irqentry_state_t;
+
 /*
  * Handle IRQ/context state management when entering from kernel mode.
  * Before this function is called it is not safe to call regular kernel code,
@@ -37,29 +44,36 @@
  * This is intended to match the logic in irqentry_enter(), handling the kernel
  * mode transitions only.
  */
-static __always_inline void __enter_from_kernel_mode(struct pt_regs *regs)
+static __always_inline arm64_irqentry_state_t __enter_from_kernel_mode(struct pt_regs *regs)
 {
-	regs->exit_rcu = false;
+	arm64_irqentry_state_t state = {
+		.exit_rcu = false,
+	};
 
 	if (!IS_ENABLED(CONFIG_TINY_RCU) && is_idle_task(current)) {
 		lockdep_hardirqs_off(CALLER_ADDR0);
 		ct_irq_enter();
 		trace_hardirqs_off_finish();
 
-		regs->exit_rcu = true;
-		return;
+		state.exit_rcu = true;
+		return state;
 	}
 
 	lockdep_hardirqs_off(CALLER_ADDR0);
 	rcu_irq_enter_check_tick();
 	trace_hardirqs_off_finish();
+
+	return state;
 }
 
-static void noinstr enter_from_kernel_mode(struct pt_regs *regs)
+static noinstr arm64_irqentry_state_t enter_from_kernel_mode(struct pt_regs *regs)
 {
-	__enter_from_kernel_mode(regs);
+	arm64_irqentry_state_t state = __enter_from_kernel_mode(regs);
+
 	mte_check_tfsr_entry();
 	mte_disable_tco_entry(current);
+
+	return state;
 }
 
 /*
@@ -70,12 +84,13 @@ static void noinstr enter_from_kernel_mode(struct pt_regs *regs)
  * This is intended to match the logic in irqentry_exit(), handling the kernel
  * mode transitions only, and with preemption handled elsewhere.
  */
-static __always_inline void __exit_to_kernel_mode(struct pt_regs *regs)
+static __always_inline void __exit_to_kernel_mode(struct pt_regs *regs,
+						  arm64_irqentry_state_t state)
 {
 	lockdep_assert_irqs_disabled();
 
 	if (!regs_irqs_disabled(regs)) {
-		if (regs->exit_rcu) {
+		if (state.exit_rcu) {
 			trace_hardirqs_on_prepare();
 			lockdep_hardirqs_on_prepare();
 			ct_irq_exit();
@@ -85,15 +100,16 @@ static __always_inline void __exit_to_kernel_mode(struct pt_regs *regs)
 
 		trace_hardirqs_on();
 	} else {
-		if (regs->exit_rcu)
+		if (state.exit_rcu)
 			ct_irq_exit();
 	}
 }
 
-static void noinstr exit_to_kernel_mode(struct pt_regs *regs)
+static void noinstr exit_to_kernel_mode(struct pt_regs *regs,
+					arm64_irqentry_state_t state)
 {
 	mte_check_tfsr_exit();
-	__exit_to_kernel_mode(regs);
+	__exit_to_kernel_mode(regs, state);
 }
 
 /*
@@ -194,9 +210,11 @@ asmlinkage void noinstr asm_exit_to_user_mode(struct pt_regs *regs)
  * mode. Before this function is called it is not safe to call regular kernel
  * code, instrumentable code, or any code which may trigger an exception.
  */
-static void noinstr arm64_enter_nmi(struct pt_regs *regs)
+static noinstr arm64_irqentry_state_t arm64_enter_nmi(struct pt_regs *regs)
 {
-	regs->lockdep_hardirqs = lockdep_hardirqs_enabled();
+	arm64_irqentry_state_t state;
+
+	state.lockdep = lockdep_hardirqs_enabled();
 
 	__nmi_enter();
 	lockdep_hardirqs_off(CALLER_ADDR0);
@@ -205,6 +223,8 @@ static void noinstr arm64_enter_nmi(struct pt_regs *regs)
 
 	trace_hardirqs_off_finish();
 	ftrace_nmi_enter();
+
+	return state;
 }
 
 /*
@@ -212,19 +232,18 @@ static void noinstr arm64_enter_nmi(struct pt_regs *regs)
  * mode. After this function returns it is not safe to call regular kernel
  * code, instrumentable code, or any code which may trigger an exception.
  */
-static void noinstr arm64_exit_nmi(struct pt_regs *regs)
+static void noinstr arm64_exit_nmi(struct pt_regs *regs,
+				   arm64_irqentry_state_t state)
 {
-	bool restore = regs->lockdep_hardirqs;
-
 	ftrace_nmi_exit();
-	if (restore) {
+	if (state.lockdep) {
 		trace_hardirqs_on_prepare();
 		lockdep_hardirqs_on_prepare();
 	}
 
 	ct_nmi_exit();
 	lockdep_hardirq_exit();
-	if (restore)
+	if (state.lockdep)
 		lockdep_hardirqs_on(CALLER_ADDR0);
 	__nmi_exit();
 }
@@ -234,14 +253,18 @@ static void noinstr arm64_exit_nmi(struct pt_regs *regs)
  * kernel mode. Before this function is called it is not safe to call regular
  * kernel code, instrumentable code, or any code which may trigger an exception.
  */
-static void noinstr arm64_enter_el1_dbg(struct pt_regs *regs)
+static noinstr arm64_irqentry_state_t arm64_enter_el1_dbg(struct pt_regs *regs)
 {
-	regs->lockdep_hardirqs = lockdep_hardirqs_enabled();
+	arm64_irqentry_state_t state;
+
+	state.lockdep = lockdep_hardirqs_enabled();
 
 	lockdep_hardirqs_off(CALLER_ADDR0);
 	ct_nmi_enter();
 
 	trace_hardirqs_off_finish();
+
+	return state;
 }
 
 /*
@@ -249,17 +272,16 @@ static void noinstr arm64_enter_el1_dbg(struct pt_regs *regs)
  * kernel mode. After this function returns it is not safe to call regular
  * kernel code, instrumentable code, or any code which may trigger an exception.
  */
-static void noinstr arm64_exit_el1_dbg(struct pt_regs *regs)
+static void noinstr arm64_exit_el1_dbg(struct pt_regs *regs,
+				       arm64_irqentry_state_t state)
 {
-	bool restore = regs->lockdep_hardirqs;
-
-	if (restore) {
+	if (state.lockdep) {
 		trace_hardirqs_on_prepare();
 		lockdep_hardirqs_on_prepare();
 	}
 
 	ct_nmi_exit();
-	if (restore)
+	if (state.lockdep)
 		lockdep_hardirqs_on(CALLER_ADDR0);
 }
 
@@ -475,73 +497,81 @@ UNHANDLED(el1t, 64, error)
 static void noinstr el1_abort(struct pt_regs *regs, unsigned long esr)
 {
 	unsigned long far = read_sysreg(far_el1);
+	arm64_irqentry_state_t state;
 
-	enter_from_kernel_mode(regs);
+	state = enter_from_kernel_mode(regs);
 	local_daif_inherit(regs);
 	do_mem_abort(far, esr, regs);
 	local_daif_mask();
-	exit_to_kernel_mode(regs);
+	exit_to_kernel_mode(regs, state);
 }
 
 static void noinstr el1_pc(struct pt_regs *regs, unsigned long esr)
 {
 	unsigned long far = read_sysreg(far_el1);
+	arm64_irqentry_state_t state;
 
-	enter_from_kernel_mode(regs);
+	state = enter_from_kernel_mode(regs);
 	local_daif_inherit(regs);
 	do_sp_pc_abort(far, esr, regs);
 	local_daif_mask();
-	exit_to_kernel_mode(regs);
+	exit_to_kernel_mode(regs, state);
 }
 
 static void noinstr el1_undef(struct pt_regs *regs, unsigned long esr)
 {
-	enter_from_kernel_mode(regs);
+	arm64_irqentry_state_t state = enter_from_kernel_mode(regs);
+
 	local_daif_inherit(regs);
 	do_el1_undef(regs, esr);
 	local_daif_mask();
-	exit_to_kernel_mode(regs);
+	exit_to_kernel_mode(regs, state);
 }
 
 static void noinstr el1_bti(struct pt_regs *regs, unsigned long esr)
 {
-	enter_from_kernel_mode(regs);
+	arm64_irqentry_state_t state = enter_from_kernel_mode(regs);
+
 	local_daif_inherit(regs);
 	do_el1_bti(regs, esr);
 	local_daif_mask();
-	exit_to_kernel_mode(regs);
+	exit_to_kernel_mode(regs, state);
 }
 
 static void noinstr el1_gcs(struct pt_regs *regs, unsigned long esr)
 {
-	enter_from_kernel_mode(regs);
+	arm64_irqentry_state_t state = enter_from_kernel_mode(regs);
+
 	local_daif_inherit(regs);
 	do_el1_gcs(regs, esr);
 	local_daif_mask();
-	exit_to_kernel_mode(regs);
+	exit_to_kernel_mode(regs, state);
 }
 
 static void noinstr el1_mops(struct pt_regs *regs, unsigned long esr)
 {
-	enter_from_kernel_mode(regs);
+	arm64_irqentry_state_t state = enter_from_kernel_mode(regs);
+
 	local_daif_inherit(regs);
 	do_el1_mops(regs, esr);
 	local_daif_mask();
-	exit_to_kernel_mode(regs);
+	exit_to_kernel_mode(regs, state);
 }
 
 static void noinstr el1_breakpt(struct pt_regs *regs, unsigned long esr)
 {
-	arm64_enter_el1_dbg(regs);
+	arm64_irqentry_state_t state = arm64_enter_el1_dbg(regs);
+
 	debug_exception_enter(regs);
 	do_breakpoint(esr, regs);
 	debug_exception_exit(regs);
-	arm64_exit_el1_dbg(regs);
+	arm64_exit_el1_dbg(regs, state);
 }
 
 static void noinstr el1_softstp(struct pt_regs *regs, unsigned long esr)
 {
-	arm64_enter_el1_dbg(regs);
+	arm64_irqentry_state_t state = arm64_enter_el1_dbg(regs);
+
 	if (!cortex_a76_erratum_1463225_debug_handler(regs)) {
 		debug_exception_enter(regs);
 		/*
@@ -554,37 +584,40 @@ static void noinstr el1_softstp(struct pt_regs *regs, unsigned long esr)
 			do_el1_softstep(esr, regs);
 		debug_exception_exit(regs);
 	}
-	arm64_exit_el1_dbg(regs);
+	arm64_exit_el1_dbg(regs, state);
 }
 
 static void noinstr el1_watchpt(struct pt_regs *regs, unsigned long esr)
 {
 	/* Watchpoints are the only debug exception to write FAR_EL1 */
 	unsigned long far = read_sysreg(far_el1);
+	arm64_irqentry_state_t state;
 
-	arm64_enter_el1_dbg(regs);
+	state = arm64_enter_el1_dbg(regs);
 	debug_exception_enter(regs);
 	do_watchpoint(far, esr, regs);
 	debug_exception_exit(regs);
-	arm64_exit_el1_dbg(regs);
+	arm64_exit_el1_dbg(regs, state);
 }
 
 static void noinstr el1_brk64(struct pt_regs *regs, unsigned long esr)
 {
-	arm64_enter_el1_dbg(regs);
+	arm64_irqentry_state_t state = arm64_enter_el1_dbg(regs);
+
 	debug_exception_enter(regs);
 	do_el1_brk64(esr, regs);
 	debug_exception_exit(regs);
-	arm64_exit_el1_dbg(regs);
+	arm64_exit_el1_dbg(regs, state);
 }
 
 static void noinstr el1_fpac(struct pt_regs *regs, unsigned long esr)
 {
-	enter_from_kernel_mode(regs);
+	arm64_irqentry_state_t state = enter_from_kernel_mode(regs);
+
 	local_daif_inherit(regs);
 	do_el1_fpac(regs, esr);
 	local_daif_mask();
-	exit_to_kernel_mode(regs);
+	exit_to_kernel_mode(regs, state);
 }
 
 asmlinkage void noinstr el1h_64_sync_handler(struct pt_regs *regs)
@@ -639,15 +672,16 @@ asmlinkage void noinstr el1h_64_sync_handler(struct pt_regs *regs)
 static __always_inline void __el1_pnmi(struct pt_regs *regs,
 				       void (*handler)(struct pt_regs *))
 {
-	arm64_enter_nmi(regs);
+	arm64_irqentry_state_t state = arm64_enter_nmi(regs);
+
 	do_interrupt_handler(regs, handler);
-	arm64_exit_nmi(regs);
+	arm64_exit_nmi(regs, state);
 }
 
 static __always_inline void __el1_irq(struct pt_regs *regs,
 				      void (*handler)(struct pt_regs *))
 {
-	enter_from_kernel_mode(regs);
+	arm64_irqentry_state_t state = enter_from_kernel_mode(regs);
 
 	irq_enter_rcu();
 	do_interrupt_handler(regs, handler);
@@ -655,7 +689,7 @@ static __always_inline void __el1_irq(struct pt_regs *regs,
 
 	arm64_preempt_schedule_irq();
 
-	exit_to_kernel_mode(regs);
+	exit_to_kernel_mode(regs, state);
 }
 static void noinstr el1_interrupt(struct pt_regs *regs,
 				  void (*handler)(struct pt_regs *))
@@ -681,11 +715,12 @@ asmlinkage void noinstr el1h_64_fiq_handler(struct pt_regs *regs)
 asmlinkage void noinstr el1h_64_error_handler(struct pt_regs *regs)
 {
 	unsigned long esr = read_sysreg(esr_el1);
+	arm64_irqentry_state_t state;
 
 	local_daif_restore(DAIF_ERRCTX);
-	arm64_enter_nmi(regs);
+	state = arm64_enter_nmi(regs);
 	do_serror(regs, esr);
-	arm64_exit_nmi(regs);
+	arm64_exit_nmi(regs, state);
 }
 
 static void noinstr el0_da(struct pt_regs *regs, unsigned long esr)
@@ -997,12 +1032,13 @@ asmlinkage void noinstr el0t_64_fiq_handler(struct pt_regs *regs)
 static void noinstr __el0_error_handler_common(struct pt_regs *regs)
 {
 	unsigned long esr = read_sysreg(esr_el1);
+	arm64_irqentry_state_t state;
 
 	enter_from_user_mode(regs);
 	local_daif_restore(DAIF_ERRCTX);
-	arm64_enter_nmi(regs);
+	state = arm64_enter_nmi(regs);
 	do_serror(regs, esr);
-	arm64_exit_nmi(regs);
+	arm64_exit_nmi(regs, state);
 	local_daif_restore(DAIF_PROCCTX);
 	exit_to_user_mode(regs);
 }
@@ -1122,6 +1158,7 @@ asmlinkage void noinstr __noreturn handle_bad_stack(struct pt_regs *regs)
 asmlinkage noinstr unsigned long
 __sdei_handler(struct pt_regs *regs, struct sdei_registered_event *arg)
 {
+	arm64_irqentry_state_t state;
 	unsigned long ret;
 
 	/*
@@ -1146,9 +1183,9 @@ __sdei_handler(struct pt_regs *regs, struct sdei_registered_event *arg)
 	else if (cpu_has_pan())
 		set_pstate_pan(0);
 
-	arm64_enter_nmi(regs);
+	state = arm64_enter_nmi(regs);
 	ret = do_sdei_event(regs, arg);
-	arm64_exit_nmi(regs);
+	arm64_exit_nmi(regs, state);
 
 	return ret;
 }
-- 
2.34.1




* [PATCH -next v7 3/7] arm64: entry: Rework arm64_preempt_schedule_irq()
  2025-07-29  1:54 [PATCH -next v7 0/7] arm64: entry: Convert to generic irq entry Jinjie Ruan
  2025-07-29  1:54 ` [PATCH -next v7 1/7] arm64: ptrace: Replace interrupts_enabled() with regs_irqs_disabled() Jinjie Ruan
  2025-07-29  1:54 ` [PATCH -next v7 2/7] arm64: entry: Refactor the entry and exit for exceptions from EL1 Jinjie Ruan
@ 2025-07-29  1:54 ` Jinjie Ruan
  2025-08-05 15:06   ` Ada Couprie Diaz
  2025-07-29  1:54 ` [PATCH -next v7 4/7] arm64: entry: Use preempt_count() and need_resched() helper Jinjie Ruan
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 33+ messages in thread
From: Jinjie Ruan @ 2025-07-29  1:54 UTC (permalink / raw)
  To: catalin.marinas, will, oleg, sstabellini, mark.rutland, puranjay,
	broonie, mbenes, ryan.roberts, akpm, chenl311, ada.coupriediaz,
	anshuman.khandual, kristina.martsenko, liaochang1, ardb, leitao,
	linux-arm-kernel, linux-kernel, xen-devel
  Cc: ruanjinjie

The generic entry code has the form:

| raw_irqentry_exit_cond_resched()
| {
| 	if (!preempt_count()) {
| 		...
| 		if (need_resched())
| 			preempt_schedule_irq();
| 	}
| }

In preparation for moving arm64 over to the generic entry code, align
the structure of the arm64 code with raw_irqentry_exit_cond_resched() from
the generic entry code.

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 arch/arm64/kernel/entry-common.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 97e0741abde1..21a7d8bea814 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -293,10 +293,10 @@ DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
 #define need_irq_preemption()	(IS_ENABLED(CONFIG_PREEMPTION))
 #endif
 
-static void __sched arm64_preempt_schedule_irq(void)
+static inline bool arm64_preempt_schedule_irq(void)
 {
 	if (!need_irq_preemption())
-		return;
+		return false;
 
 	/*
 	 * Note: thread_info::preempt_count includes both thread_info::count
@@ -304,7 +304,7 @@ static void __sched arm64_preempt_schedule_irq(void)
 	 * preempt_count().
 	 */
 	if (READ_ONCE(current_thread_info()->preempt_count) != 0)
-		return;
+		return false;
 
 	/*
 	 * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC
@@ -313,7 +313,7 @@ static void __sched arm64_preempt_schedule_irq(void)
 	 * DAIF we must have handled an NMI, so skip preemption.
 	 */
 	if (system_uses_irq_prio_masking() && read_sysreg(daif))
-		return;
+		return false;
 
 	/*
 	 * Preempting a task from an IRQ means we leave copies of PSTATE
@@ -323,8 +323,10 @@ static void __sched arm64_preempt_schedule_irq(void)
 	 * Only allow a task to be preempted once cpufeatures have been
 	 * enabled.
 	 */
-	if (system_capabilities_finalized())
-		preempt_schedule_irq();
+	if (!system_capabilities_finalized())
+		return false;
+
+	return true;
 }
 
 static void do_interrupt_handler(struct pt_regs *regs,
@@ -687,7 +689,8 @@ static __always_inline void __el1_irq(struct pt_regs *regs,
 	do_interrupt_handler(regs, handler);
 	irq_exit_rcu();
 
-	arm64_preempt_schedule_irq();
+	if (arm64_preempt_schedule_irq())
+		preempt_schedule_irq();
 
 	exit_to_kernel_mode(regs, state);
 }
-- 
2.34.1




* [PATCH -next v7 4/7] arm64: entry: Use preempt_count() and need_resched() helper
  2025-07-29  1:54 [PATCH -next v7 0/7] arm64: entry: Convert to generic irq entry Jinjie Ruan
                   ` (2 preceding siblings ...)
  2025-07-29  1:54 ` [PATCH -next v7 3/7] arm64: entry: Rework arm64_preempt_schedule_irq() Jinjie Ruan
@ 2025-07-29  1:54 ` Jinjie Ruan
  2025-08-05 15:06   ` Ada Couprie Diaz
  2025-07-29  1:54 ` [PATCH -next v7 5/7] arm64: entry: Refactor preempt_schedule_irq() check code Jinjie Ruan
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 33+ messages in thread
From: Jinjie Ruan @ 2025-07-29  1:54 UTC (permalink / raw)
  To: catalin.marinas, will, oleg, sstabellini, mark.rutland, puranjay,
	broonie, mbenes, ryan.roberts, akpm, chenl311, ada.coupriediaz,
	anshuman.khandual, kristina.martsenko, liaochang1, ardb, leitao,
	linux-arm-kernel, linux-kernel, xen-devel
  Cc: ruanjinjie

The generic entry code uses the preempt_count() and need_resched() helpers
to check whether it should do preempt_schedule_irq(). Currently, arm64 uses
its own check, "READ_ONCE(current_thread_info()->preempt_count) == 0",
which is equivalent to "preempt_count() == 0 && need_resched()".
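
For reference, a minimal sketch of why the two checks are equivalent,
assuming arm64's thread_info layout where the 64-bit preempt_count folds
the need_resched state into bit 32 (PREEMPT_NEED_RESCHED in asm/preempt.h):

| /*
|  * The low 32 bits of thread_info::preempt_count hold the preemption
|  * count, and bit 32 is cleared when a reschedule is needed, so the
|  * whole 64-bit value reads as zero exactly when preempt_count() == 0
|  * and need_resched() is true.
|  */
| if (READ_ONCE(current_thread_info()->preempt_count) != 0)
| 	return false;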

In preparation for moving arm64 over to the generic entry code, use
these helpers to replace arm64's own check and hoist it earlier.

No functional changes.

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 arch/arm64/kernel/entry-common.c | 14 ++++----------
 1 file changed, 4 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 21a7d8bea814..7c2299c1ba79 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -298,14 +298,6 @@ static inline bool arm64_preempt_schedule_irq(void)
 	if (!need_irq_preemption())
 		return false;
 
-	/*
-	 * Note: thread_info::preempt_count includes both thread_info::count
-	 * and thread_info::need_resched, and is not equivalent to
-	 * preempt_count().
-	 */
-	if (READ_ONCE(current_thread_info()->preempt_count) != 0)
-		return false;
-
 	/*
 	 * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC
 	 * priority masking is used the GIC irqchip driver will clear DAIF.IF
@@ -689,8 +681,10 @@ static __always_inline void __el1_irq(struct pt_regs *regs,
 	do_interrupt_handler(regs, handler);
 	irq_exit_rcu();
 
-	if (arm64_preempt_schedule_irq())
-		preempt_schedule_irq();
+	if (!preempt_count() && need_resched()) {
+		if (arm64_preempt_schedule_irq())
+			preempt_schedule_irq();
+	}
 
 	exit_to_kernel_mode(regs, state);
 }
-- 
2.34.1




* [PATCH -next v7 5/7] arm64: entry: Refactor preempt_schedule_irq() check code
  2025-07-29  1:54 [PATCH -next v7 0/7] arm64: entry: Convert to generic irq entry Jinjie Ruan
                   ` (3 preceding siblings ...)
  2025-07-29  1:54 ` [PATCH -next v7 4/7] arm64: entry: Use preempt_count() and need_resched() helper Jinjie Ruan
@ 2025-07-29  1:54 ` Jinjie Ruan
  2025-08-05 15:06   ` Ada Couprie Diaz
                     ` (2 more replies)
  2025-07-29  1:54 ` [PATCH -next v7 6/7] arm64: entry: Move arm64_preempt_schedule_irq() into __exit_to_kernel_mode() Jinjie Ruan
                   ` (3 subsequent siblings)
  8 siblings, 3 replies; 33+ messages in thread
From: Jinjie Ruan @ 2025-07-29  1:54 UTC (permalink / raw)
  To: catalin.marinas, will, oleg, sstabellini, mark.rutland, puranjay,
	broonie, mbenes, ryan.roberts, akpm, chenl311, ada.coupriediaz,
	anshuman.khandual, kristina.martsenko, liaochang1, ardb, leitao,
	linux-arm-kernel, linux-kernel, xen-devel
  Cc: ruanjinjie

arm64 requires an additional check to decide whether to reschedule on
return from an interrupt. Add arch_irqentry_exit_need_resched(), with a
default implementation that returns true, and hook it into the
need_resched() condition in raw_irqentry_exit_cond_resched(). This
allows arm64 to implement an architecture-specific version when
switching over to the generic entry code.

To align the structure of the code with irqentry_exit_cond_resched()
from the generic entry code, hoist the need_irq_preemption() and
IS_ENABLED() checks earlier, and define different preemption check
functions depending on whether dynamic preemption is enabled.
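
With the hook in place, an architecture opts in by providing its own
inline version in its asm/entry-common.h and defining the macro of the
same name, so the generic fallback is compiled out. A sketch of the
pattern (the helper shown is hypothetical; arm64's real implementation
arrives in the last patch of this series):

| /* In <asm/entry-common.h>: */
| static inline bool arch_irqentry_exit_need_resched(void)
| {
| 	if (arch_in_nmi_like_context())	/* hypothetical helper */
| 		return false;
| 	return true;
| }
| #define arch_irqentry_exit_need_resched arch_irqentry_exit_need_resched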

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Suggested-by: Kevin Brodsky <kevin.brodsky@arm.com>
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 arch/arm64/include/asm/preempt.h |  4 ++++
 arch/arm64/kernel/entry-common.c | 35 ++++++++++++++++++--------------
 kernel/entry/common.c            | 16 ++++++++++++++-
 3 files changed, 39 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/preempt.h b/arch/arm64/include/asm/preempt.h
index 0159b625cc7f..0f0ba250efe8 100644
--- a/arch/arm64/include/asm/preempt.h
+++ b/arch/arm64/include/asm/preempt.h
@@ -85,6 +85,7 @@ static inline bool should_resched(int preempt_offset)
 void preempt_schedule(void);
 void preempt_schedule_notrace(void);
 
+void raw_irqentry_exit_cond_resched(void);
 #ifdef CONFIG_PREEMPT_DYNAMIC
 
 DECLARE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
@@ -92,11 +93,14 @@ void dynamic_preempt_schedule(void);
 #define __preempt_schedule()		dynamic_preempt_schedule()
 void dynamic_preempt_schedule_notrace(void);
 #define __preempt_schedule_notrace()	dynamic_preempt_schedule_notrace()
+void dynamic_irqentry_exit_cond_resched(void);
+#define irqentry_exit_cond_resched()	dynamic_irqentry_exit_cond_resched()
 
 #else /* CONFIG_PREEMPT_DYNAMIC */
 
 #define __preempt_schedule()		preempt_schedule()
 #define __preempt_schedule_notrace()	preempt_schedule_notrace()
+#define irqentry_exit_cond_resched()	raw_irqentry_exit_cond_resched()
 
 #endif /* CONFIG_PREEMPT_DYNAMIC */
 #endif /* CONFIG_PREEMPTION */
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 7c2299c1ba79..4f92664fd46c 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -285,19 +285,8 @@ static void noinstr arm64_exit_el1_dbg(struct pt_regs *regs,
 		lockdep_hardirqs_on(CALLER_ADDR0);
 }
 
-#ifdef CONFIG_PREEMPT_DYNAMIC
-DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
-#define need_irq_preemption() \
-	(static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched))
-#else
-#define need_irq_preemption()	(IS_ENABLED(CONFIG_PREEMPTION))
-#endif
-
 static inline bool arm64_preempt_schedule_irq(void)
 {
-	if (!need_irq_preemption())
-		return false;
-
 	/*
 	 * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC
 	 * priority masking is used the GIC irqchip driver will clear DAIF.IF
@@ -672,6 +661,24 @@ static __always_inline void __el1_pnmi(struct pt_regs *regs,
 	arm64_exit_nmi(regs, state);
 }
 
+void raw_irqentry_exit_cond_resched(void)
+{
+	if (!preempt_count()) {
+		if (need_resched() && arm64_preempt_schedule_irq())
+			preempt_schedule_irq();
+	}
+}
+
+#ifdef CONFIG_PREEMPT_DYNAMIC
+DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
+void dynamic_irqentry_exit_cond_resched(void)
+{
+	if (!static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched))
+		return;
+	raw_irqentry_exit_cond_resched();
+}
+#endif
+
 static __always_inline void __el1_irq(struct pt_regs *regs,
 				      void (*handler)(struct pt_regs *))
 {
@@ -681,10 +688,8 @@ static __always_inline void __el1_irq(struct pt_regs *regs,
 	do_interrupt_handler(regs, handler);
 	irq_exit_rcu();
 
-	if (!preempt_count() && need_resched()) {
-		if (arm64_preempt_schedule_irq())
-			preempt_schedule_irq();
-	}
+	if (IS_ENABLED(CONFIG_PREEMPTION))
+		irqentry_exit_cond_resched();
 
 	exit_to_kernel_mode(regs, state);
 }
diff --git a/kernel/entry/common.c b/kernel/entry/common.c
index b82032777310..4aa9656fa1b4 100644
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -142,6 +142,20 @@ noinstr irqentry_state_t irqentry_enter(struct pt_regs *regs)
 	return ret;
 }
 
+/**
+ * arch_irqentry_exit_need_resched - Architecture specific need resched function
+ *
+ * Invoked from raw_irqentry_exit_cond_resched() to check whether a resched is needed.
+ * Defaults to returning true.
+ *
+ * The main purpose is to permit an arch to skip preemption of a task from an IRQ.
+ */
+static inline bool arch_irqentry_exit_need_resched(void);
+
+#ifndef arch_irqentry_exit_need_resched
+static inline bool arch_irqentry_exit_need_resched(void) { return true; }
+#endif
+
 void raw_irqentry_exit_cond_resched(void)
 {
 	if (!preempt_count()) {
@@ -149,7 +163,7 @@ void raw_irqentry_exit_cond_resched(void)
 		rcu_irq_exit_check_preempt();
 		if (IS_ENABLED(CONFIG_DEBUG_ENTRY))
 			WARN_ON_ONCE(!on_thread_stack());
-		if (need_resched())
+		if (need_resched() && arch_irqentry_exit_need_resched())
 			preempt_schedule_irq();
 	}
 }
-- 
2.34.1




* [PATCH -next v7 6/7] arm64: entry: Move arm64_preempt_schedule_irq() into __exit_to_kernel_mode()
  2025-07-29  1:54 [PATCH -next v7 0/7] arm64: entry: Convert to generic irq entry Jinjie Ruan
                   ` (4 preceding siblings ...)
  2025-07-29  1:54 ` [PATCH -next v7 5/7] arm64: entry: Refactor preempt_schedule_irq() check code Jinjie Ruan
@ 2025-07-29  1:54 ` Jinjie Ruan
  2025-08-05 15:07   ` Ada Couprie Diaz
  2025-07-29  1:54 ` [PATCH -next v7 7/7] arm64: entry: Switch to generic IRQ entry Jinjie Ruan
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 33+ messages in thread
From: Jinjie Ruan @ 2025-07-29  1:54 UTC (permalink / raw)
  To: catalin.marinas, will, oleg, sstabellini, mark.rutland, puranjay,
	broonie, mbenes, ryan.roberts, akpm, chenl311, ada.coupriediaz,
	anshuman.khandual, kristina.martsenko, liaochang1, ardb, leitao,
	linux-arm-kernel, linux-kernel, xen-devel
  Cc: ruanjinjie

The arm64 entry code only preempts a kernel context upon a return from
a regular IRQ exception. The generic entry code may preempt a kernel
context for any exception return where irqentry_exit() is used, and so
may preempt other exceptions such as faults.

In preparation for moving arm64 over to the generic entry code, align
arm64 with the generic behaviour by calling arm64_preempt_schedule_irq()
from exit_to_kernel_mode(). To make this possible,
arm64_preempt_schedule_irq() and dynamic/raw_irqentry_exit_cond_resched()
are moved earlier in the file, with no changes.

As Mark pointed out, this change will have the following two key impacts:

- " We'll preempt even without taking a "real" interrupt. That
    shouldn't result in preemption that wasn't possible before,
    but it does change the probability of preempting at certain points,
    and might have a performance impact, so probably warrants a
    benchmark."

- " We will not preempt when taking interrupts from a region of kernel
    code where IRQs are enabled but RCU is not watching, matching the
    behaviour of the generic entry code.

    This has the potential to introduce livelock if we can ever have a
    screaming interrupt in such a region, so we'll need to go figure out
    whether that's actually a problem.

    Having this as a separate patch will make it easier to test/bisect
    for that specifically."

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 arch/arm64/kernel/entry-common.c | 92 ++++++++++++++++----------------
 1 file changed, 46 insertions(+), 46 deletions(-)

diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 4f92664fd46c..7c7aa5711a39 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -76,6 +76,49 @@ static noinstr arm64_irqentry_state_t enter_from_kernel_mode(struct pt_regs *reg
 	return state;
 }
 
+static inline bool arm64_preempt_schedule_irq(void)
+{
+	/*
+	 * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC
+	 * priority masking is used the GIC irqchip driver will clear DAIF.IF
+	 * using gic_arch_enable_irqs() for normal IRQs. If anything is set in
+	 * DAIF we must have handled an NMI, so skip preemption.
+	 */
+	if (system_uses_irq_prio_masking() && read_sysreg(daif))
+		return false;
+
+	/*
+	 * Preempting a task from an IRQ means we leave copies of PSTATE
+	 * on the stack. cpufeature's enable calls may modify PSTATE, but
+	 * resuming one of these preempted tasks would undo those changes.
+	 *
+	 * Only allow a task to be preempted once cpufeatures have been
+	 * enabled.
+	 */
+	if (!system_capabilities_finalized())
+		return false;
+
+	return true;
+}
+
+void raw_irqentry_exit_cond_resched(void)
+{
+	if (!preempt_count()) {
+		if (need_resched() && arm64_preempt_schedule_irq())
+			preempt_schedule_irq();
+	}
+}
+
+#ifdef CONFIG_PREEMPT_DYNAMIC
+DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
+void dynamic_irqentry_exit_cond_resched(void)
+{
+	if (!static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched))
+		return;
+	raw_irqentry_exit_cond_resched();
+}
+#endif
+
 /*
  * Handle IRQ/context state management when exiting to kernel mode.
  * After this function returns it is not safe to call regular kernel code,
@@ -98,6 +141,9 @@ static __always_inline void __exit_to_kernel_mode(struct pt_regs *regs,
 			return;
 		}
 
+		if (IS_ENABLED(CONFIG_PREEMPTION))
+			irqentry_exit_cond_resched();
+
 		trace_hardirqs_on();
 	} else {
 		if (state.exit_rcu)
@@ -285,31 +331,6 @@ static void noinstr arm64_exit_el1_dbg(struct pt_regs *regs,
 		lockdep_hardirqs_on(CALLER_ADDR0);
 }
 
-static inline bool arm64_preempt_schedule_irq(void)
-{
-	/*
-	 * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC
-	 * priority masking is used the GIC irqchip driver will clear DAIF.IF
-	 * using gic_arch_enable_irqs() for normal IRQs. If anything is set in
-	 * DAIF we must have handled an NMI, so skip preemption.
-	 */
-	if (system_uses_irq_prio_masking() && read_sysreg(daif))
-		return false;
-
-	/*
-	 * Preempting a task from an IRQ means we leave copies of PSTATE
-	 * on the stack. cpufeature's enable calls may modify PSTATE, but
-	 * resuming one of these preempted tasks would undo those changes.
-	 *
-	 * Only allow a task to be preempted once cpufeatures have been
-	 * enabled.
-	 */
-	if (!system_capabilities_finalized())
-		return false;
-
-	return true;
-}
-
 static void do_interrupt_handler(struct pt_regs *regs,
 				 void (*handler)(struct pt_regs *))
 {
@@ -661,24 +682,6 @@ static __always_inline void __el1_pnmi(struct pt_regs *regs,
 	arm64_exit_nmi(regs, state);
 }
 
-void raw_irqentry_exit_cond_resched(void)
-{
-	if (!preempt_count()) {
-		if (need_resched() && arm64_preempt_schedule_irq())
-			preempt_schedule_irq();
-	}
-}
-
-#ifdef CONFIG_PREEMPT_DYNAMIC
-DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
-void dynamic_irqentry_exit_cond_resched(void)
-{
-	if (!static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched))
-		return;
-	raw_irqentry_exit_cond_resched();
-}
-#endif
-
 static __always_inline void __el1_irq(struct pt_regs *regs,
 				      void (*handler)(struct pt_regs *))
 {
@@ -688,9 +691,6 @@ static __always_inline void __el1_irq(struct pt_regs *regs,
 	do_interrupt_handler(regs, handler);
 	irq_exit_rcu();
 
-	if (IS_ENABLED(CONFIG_PREEMPTION))
-		irqentry_exit_cond_resched();
-
 	exit_to_kernel_mode(regs, state);
 }
 static void noinstr el1_interrupt(struct pt_regs *regs,
-- 
2.34.1




* [PATCH -next v7 7/7] arm64: entry: Switch to generic IRQ entry
  2025-07-29  1:54 [PATCH -next v7 0/7] arm64: entry: Convert to generic irq entry Jinjie Ruan
                   ` (5 preceding siblings ...)
  2025-07-29  1:54 ` [PATCH -next v7 6/7] arm64: entry: Move arm64_preempt_schedule_irq() into __exit_to_kernel_mode() Jinjie Ruan
@ 2025-07-29  1:54 ` Jinjie Ruan
  2025-08-05 15:07   ` Ada Couprie Diaz
  2025-08-05 15:08 ` [PATCH -next v7 0/7] arm64: entry: Convert to generic irq entry Ada Couprie Diaz
  2025-08-12 11:19 ` Mark Rutland
  8 siblings, 1 reply; 33+ messages in thread
From: Jinjie Ruan @ 2025-07-29  1:54 UTC (permalink / raw)
  To: catalin.marinas, will, oleg, sstabellini, mark.rutland, puranjay,
	broonie, mbenes, ryan.roberts, akpm, chenl311, ada.coupriediaz,
	anshuman.khandual, kristina.martsenko, liaochang1, ardb, leitao,
	linux-arm-kernel, linux-kernel, xen-devel
  Cc: ruanjinjie

Currently, x86, RISC-V, and LoongArch use the generic entry code. Convert
arm64 to use the generic entry infrastructure from kernel/entry/*.
The generic entry makes maintainers' work easier and the code more
elegant.

Switch arm64 to the generic IRQ entry first, which removes 100+ lines of
duplicate code. A follow-up patch series will complete the switch to the
generic entry. Switching in two steps, as Mark suggested, makes the
series easier to review.

The changes are as follows:
 - Remove *enter_from/exit_to_kernel_mode(), and wrap with generic
   irqentry_enter/exit(). Also remove *enter_from/exit_to_user_mode(),
   and wrap with generic enter_from/exit_to_user_mode() because they
   are exactly the same so far.

 - Remove arm64_enter/exit_nmi() and use the generic irqentry_nmi_enter/exit()
   because they're exactly the same, so the temporary arm64
   irqentry_state type can also be removed.

 - Remove the PREEMPT_DYNAMIC code, as the generic entry does the same
   thing once arm64 implements arch_irqentry_exit_need_resched().

Tested OK with the following test cases on a QEMU virt platform:
 - Perf tests.
 - Different `dynamic preempt` mode switch.
 - Pseudo NMI tests.
 - Stress-ng CPU stress test.
 - MTE test case in Documentation/arch/arm64/memory-tagging-extension.rst
   and all test cases in tools/testing/selftests/arm64/mte/*.

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 arch/arm64/Kconfig                    |   1 +
 arch/arm64/include/asm/entry-common.h |  56 ++++
 arch/arm64/include/asm/preempt.h      |   6 -
 arch/arm64/kernel/entry-common.c      | 374 +++++++-------------------
 arch/arm64/kernel/signal.c            |   3 +-
 5 files changed, 154 insertions(+), 286 deletions(-)
 create mode 100644 arch/arm64/include/asm/entry-common.h

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index e9bbfacc35a6..6bb60a0620ec 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -151,6 +151,7 @@ config ARM64
 	select GENERIC_EARLY_IOREMAP
 	select GENERIC_IDLE_POLL_SETUP
 	select GENERIC_IOREMAP
+	select GENERIC_IRQ_ENTRY
 	select GENERIC_IRQ_IPI
 	select GENERIC_IRQ_KEXEC_CLEAR_VM_FORWARD
 	select GENERIC_IRQ_PROBE
diff --git a/arch/arm64/include/asm/entry-common.h b/arch/arm64/include/asm/entry-common.h
new file mode 100644
index 000000000000..93c30b8d653d
--- /dev/null
+++ b/arch/arm64/include/asm/entry-common.h
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _ASM_ARM64_ENTRY_COMMON_H
+#define _ASM_ARM64_ENTRY_COMMON_H
+
+#include <linux/thread_info.h>
+
+#include <asm/daifflags.h>
+#include <asm/fpsimd.h>
+#include <asm/mte.h>
+#include <asm/stacktrace.h>
+
+#define ARCH_EXIT_TO_USER_MODE_WORK (_TIF_MTE_ASYNC_FAULT | _TIF_FOREIGN_FPSTATE)
+
+static __always_inline void arch_exit_to_user_mode_work(struct pt_regs *regs,
+							unsigned long ti_work)
+{
+	if (ti_work & _TIF_MTE_ASYNC_FAULT) {
+		clear_thread_flag(TIF_MTE_ASYNC_FAULT);
+		send_sig_fault(SIGSEGV, SEGV_MTEAERR, (void __user *)NULL, current);
+	}
+
+	if (ti_work & _TIF_FOREIGN_FPSTATE)
+		fpsimd_restore_current_state();
+}
+
+#define arch_exit_to_user_mode_work arch_exit_to_user_mode_work
+
+static inline bool arch_irqentry_exit_need_resched(void)
+{
+	/*
+	 * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC
+	 * priority masking is used the GIC irqchip driver will clear DAIF.IF
+	 * using gic_arch_enable_irqs() for normal IRQs. If anything is set in
+	 * DAIF we must have handled an NMI, so skip preemption.
+	 */
+	if (system_uses_irq_prio_masking() && read_sysreg(daif))
+		return false;
+
+	/*
+	 * Preempting a task from an IRQ means we leave copies of PSTATE
+	 * on the stack. cpufeature's enable calls may modify PSTATE, but
+	 * resuming one of these preempted tasks would undo those changes.
+	 *
+	 * Only allow a task to be preempted once cpufeatures have been
+	 * enabled.
+	 */
+	if (!system_capabilities_finalized())
+		return false;
+
+	return true;
+}
+
+#define arch_irqentry_exit_need_resched arch_irqentry_exit_need_resched
+
+#endif /* _ASM_ARM64_ENTRY_COMMON_H */
diff --git a/arch/arm64/include/asm/preempt.h b/arch/arm64/include/asm/preempt.h
index 0f0ba250efe8..932ea4b62042 100644
--- a/arch/arm64/include/asm/preempt.h
+++ b/arch/arm64/include/asm/preempt.h
@@ -2,7 +2,6 @@
 #ifndef __ASM_PREEMPT_H
 #define __ASM_PREEMPT_H
 
-#include <linux/jump_label.h>
 #include <linux/thread_info.h>
 
 #define PREEMPT_NEED_RESCHED	BIT(32)
@@ -85,22 +84,17 @@ static inline bool should_resched(int preempt_offset)
 void preempt_schedule(void);
 void preempt_schedule_notrace(void);
 
-void raw_irqentry_exit_cond_resched(void);
 #ifdef CONFIG_PREEMPT_DYNAMIC
 
-DECLARE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
 void dynamic_preempt_schedule(void);
 #define __preempt_schedule()		dynamic_preempt_schedule()
 void dynamic_preempt_schedule_notrace(void);
 #define __preempt_schedule_notrace()	dynamic_preempt_schedule_notrace()
-void dynamic_irqentry_exit_cond_resched(void);
-#define irqentry_exit_cond_resched()	dynamic_irqentry_exit_cond_resched()
 
 #else /* CONFIG_PREEMPT_DYNAMIC */
 
 #define __preempt_schedule()		preempt_schedule()
 #define __preempt_schedule_notrace()	preempt_schedule_notrace()
-#define irqentry_exit_cond_resched()	raw_irqentry_exit_cond_resched()
 
 #endif /* CONFIG_PREEMPT_DYNAMIC */
 #endif /* CONFIG_PREEMPTION */
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 7c7aa5711a39..d948205ab0b9 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -6,6 +6,7 @@
  */
 
 #include <linux/context_tracking.h>
+#include <linux/irq-entry-common.h>
 #include <linux/kasan.h>
 #include <linux/linkage.h>
 #include <linux/livepatch.h>
@@ -29,13 +30,6 @@
 #include <asm/sysreg.h>
 #include <asm/system_misc.h>
 
-typedef struct irqentry_state {
-	union {
-		bool	exit_rcu;
-		bool	lockdep;
-	};
-} arm64_irqentry_state_t;
-
 /*
  * Handle IRQ/context state management when entering from kernel mode.
  * Before this function is called it is not safe to call regular kernel code,
@@ -44,31 +38,14 @@ typedef struct irqentry_state {
  * This is intended to match the logic in irqentry_enter(), handling the kernel
  * mode transitions only.
  */
-static __always_inline arm64_irqentry_state_t __enter_from_kernel_mode(struct pt_regs *regs)
+static __always_inline irqentry_state_t __enter_from_kernel_mode(struct pt_regs *regs)
 {
-	arm64_irqentry_state_t state = {
-		.exit_rcu = false,
-	};
-
-	if (!IS_ENABLED(CONFIG_TINY_RCU) && is_idle_task(current)) {
-		lockdep_hardirqs_off(CALLER_ADDR0);
-		ct_irq_enter();
-		trace_hardirqs_off_finish();
-
-		state.exit_rcu = true;
-		return state;
-	}
-
-	lockdep_hardirqs_off(CALLER_ADDR0);
-	rcu_irq_enter_check_tick();
-	trace_hardirqs_off_finish();
-
-	return state;
+	return irqentry_enter(regs);
 }
 
-static noinstr arm64_irqentry_state_t enter_from_kernel_mode(struct pt_regs *regs)
+static noinstr irqentry_state_t enter_from_kernel_mode(struct pt_regs *regs)
 {
-	arm64_irqentry_state_t state = __enter_from_kernel_mode(regs);
+	irqentry_state_t state = __enter_from_kernel_mode(regs);
 
 	mte_check_tfsr_entry();
 	mte_disable_tco_entry(current);
@@ -76,49 +53,6 @@ static noinstr arm64_irqentry_state_t enter_from_kernel_mode(struct pt_regs *reg
 	return state;
 }
 
-static inline bool arm64_preempt_schedule_irq(void)
-{
-	/*
-	 * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC
-	 * priority masking is used the GIC irqchip driver will clear DAIF.IF
-	 * using gic_arch_enable_irqs() for normal IRQs. If anything is set in
-	 * DAIF we must have handled an NMI, so skip preemption.
-	 */
-	if (system_uses_irq_prio_masking() && read_sysreg(daif))
-		return false;
-
-	/*
-	 * Preempting a task from an IRQ means we leave copies of PSTATE
-	 * on the stack. cpufeature's enable calls may modify PSTATE, but
-	 * resuming one of these preempted tasks would undo those changes.
-	 *
-	 * Only allow a task to be preempted once cpufeatures have been
-	 * enabled.
-	 */
-	if (!system_capabilities_finalized())
-		return false;
-
-	return true;
-}
-
-void raw_irqentry_exit_cond_resched(void)
-{
-	if (!preempt_count()) {
-		if (need_resched() && arm64_preempt_schedule_irq())
-			preempt_schedule_irq();
-	}
-}
-
-#ifdef CONFIG_PREEMPT_DYNAMIC
-DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
-void dynamic_irqentry_exit_cond_resched(void)
-{
-	if (!static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched))
-		return;
-	raw_irqentry_exit_cond_resched();
-}
-#endif
-
 /*
  * Handle IRQ/context state management when exiting to kernel mode.
  * After this function returns it is not safe to call regular kernel code,
@@ -128,31 +62,13 @@ void dynamic_irqentry_exit_cond_resched(void)
  * mode transitions only, and with preemption handled elsewhere.
  */
 static __always_inline void __exit_to_kernel_mode(struct pt_regs *regs,
-						  arm64_irqentry_state_t state)
-{
-	lockdep_assert_irqs_disabled();
-
-	if (!regs_irqs_disabled(regs)) {
-		if (state.exit_rcu) {
-			trace_hardirqs_on_prepare();
-			lockdep_hardirqs_on_prepare();
-			ct_irq_exit();
-			lockdep_hardirqs_on(CALLER_ADDR0);
-			return;
-		}
-
-		if (IS_ENABLED(CONFIG_PREEMPTION))
-			irqentry_exit_cond_resched();
-
-		trace_hardirqs_on();
-	} else {
-		if (state.exit_rcu)
-			ct_irq_exit();
-	}
+						  irqentry_state_t state)
+{
+	irqentry_exit(regs, state);
 }
 
 static void noinstr exit_to_kernel_mode(struct pt_regs *regs,
-					arm64_irqentry_state_t state)
+					irqentry_state_t state)
 {
 	mte_check_tfsr_exit();
 	__exit_to_kernel_mode(regs, state);
@@ -163,18 +79,15 @@ static void noinstr exit_to_kernel_mode(struct pt_regs *regs,
  * Before this function is called it is not safe to call regular kernel code,
  * instrumentable code, or any code which may trigger an exception.
  */
-static __always_inline void __enter_from_user_mode(void)
+static __always_inline void __enter_from_user_mode(struct pt_regs *regs)
 {
-	lockdep_hardirqs_off(CALLER_ADDR0);
-	CT_WARN_ON(ct_state() != CT_STATE_USER);
-	user_exit_irqoff();
-	trace_hardirqs_off_finish();
+	enter_from_user_mode(regs);
 	mte_disable_tco_entry(current);
 }
 
-static __always_inline void enter_from_user_mode(struct pt_regs *regs)
+static __always_inline void arm64_enter_from_user_mode(struct pt_regs *regs)
 {
-	__enter_from_user_mode();
+	__enter_from_user_mode(regs);
 }
 
 /*
@@ -182,116 +95,19 @@ static __always_inline void enter_from_user_mode(struct pt_regs *regs)
  * After this function returns it is not safe to call regular kernel code,
  * instrumentable code, or any code which may trigger an exception.
  */
-static __always_inline void __exit_to_user_mode(void)
-{
-	trace_hardirqs_on_prepare();
-	lockdep_hardirqs_on_prepare();
-	user_enter_irqoff();
-	lockdep_hardirqs_on(CALLER_ADDR0);
-}
 
-static void do_notify_resume(struct pt_regs *regs, unsigned long thread_flags)
+static __always_inline void arm64_exit_to_user_mode(struct pt_regs *regs)
 {
-	do {
-		local_irq_enable();
-
-		if (thread_flags & (_TIF_NEED_RESCHED | _TIF_NEED_RESCHED_LAZY))
-			schedule();
-
-		if (thread_flags & _TIF_UPROBE)
-			uprobe_notify_resume(regs);
-
-		if (thread_flags & _TIF_MTE_ASYNC_FAULT) {
-			clear_thread_flag(TIF_MTE_ASYNC_FAULT);
-			send_sig_fault(SIGSEGV, SEGV_MTEAERR,
-				       (void __user *)NULL, current);
-		}
-
-		if (thread_flags & _TIF_PATCH_PENDING)
-			klp_update_patch_state(current);
-
-		if (thread_flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
-			do_signal(regs);
-
-		if (thread_flags & _TIF_NOTIFY_RESUME)
-			resume_user_mode_work(regs);
-
-		if (thread_flags & _TIF_FOREIGN_FPSTATE)
-			fpsimd_restore_current_state();
-
-		local_irq_disable();
-		thread_flags = read_thread_flags();
-	} while (thread_flags & _TIF_WORK_MASK);
-}
-
-static __always_inline void exit_to_user_mode_prepare(struct pt_regs *regs)
-{
-	unsigned long flags;
-
 	local_irq_disable();
-
-	flags = read_thread_flags();
-	if (unlikely(flags & _TIF_WORK_MASK))
-		do_notify_resume(regs, flags);
-
-	local_daif_mask();
-
-	lockdep_sys_exit();
-}
-
-static __always_inline void exit_to_user_mode(struct pt_regs *regs)
-{
 	exit_to_user_mode_prepare(regs);
+	local_daif_mask();
 	mte_check_tfsr_exit();
-	__exit_to_user_mode();
+	exit_to_user_mode();
 }
 
 asmlinkage void noinstr asm_exit_to_user_mode(struct pt_regs *regs)
 {
-	exit_to_user_mode(regs);
-}
-
-/*
- * Handle IRQ/context state management when entering an NMI from user/kernel
- * mode. Before this function is called it is not safe to call regular kernel
- * code, instrumentable code, or any code which may trigger an exception.
- */
-static noinstr arm64_irqentry_state_t arm64_enter_nmi(struct pt_regs *regs)
-{
-	arm64_irqentry_state_t state;
-
-	state.lockdep = lockdep_hardirqs_enabled();
-
-	__nmi_enter();
-	lockdep_hardirqs_off(CALLER_ADDR0);
-	lockdep_hardirq_enter();
-	ct_nmi_enter();
-
-	trace_hardirqs_off_finish();
-	ftrace_nmi_enter();
-
-	return state;
-}
-
-/*
- * Handle IRQ/context state management when exiting an NMI from user/kernel
- * mode. After this function returns it is not safe to call regular kernel
- * code, instrumentable code, or any code which may trigger an exception.
- */
-static void noinstr arm64_exit_nmi(struct pt_regs *regs,
-				   arm64_irqentry_state_t state)
-{
-	ftrace_nmi_exit();
-	if (state.lockdep) {
-		trace_hardirqs_on_prepare();
-		lockdep_hardirqs_on_prepare();
-	}
-
-	ct_nmi_exit();
-	lockdep_hardirq_exit();
-	if (state.lockdep)
-		lockdep_hardirqs_on(CALLER_ADDR0);
-	__nmi_exit();
+	arm64_exit_to_user_mode(regs);
 }
 
 /*
@@ -299,9 +115,9 @@ static void noinstr arm64_exit_nmi(struct pt_regs *regs,
  * kernel mode. Before this function is called it is not safe to call regular
  * kernel code, instrumentable code, or any code which may trigger an exception.
  */
-static noinstr arm64_irqentry_state_t arm64_enter_el1_dbg(struct pt_regs *regs)
+static noinstr irqentry_state_t arm64_enter_el1_dbg(struct pt_regs *regs)
 {
-	arm64_irqentry_state_t state;
+	irqentry_state_t state;
 
 	state.lockdep = lockdep_hardirqs_enabled();
 
@@ -319,7 +135,7 @@ static noinstr arm64_irqentry_state_t arm64_enter_el1_dbg(struct pt_regs *regs)
  * kernel code, instrumentable code, or any code which may trigger an exception.
  */
 static void noinstr arm64_exit_el1_dbg(struct pt_regs *regs,
-				       arm64_irqentry_state_t state)
+				       irqentry_state_t state)
 {
 	if (state.lockdep) {
 		trace_hardirqs_on_prepare();
@@ -350,7 +166,7 @@ extern void (*handle_arch_fiq)(struct pt_regs *);
 static void noinstr __panic_unhandled(struct pt_regs *regs, const char *vector,
 				      unsigned long esr)
 {
-	arm64_enter_nmi(regs);
+	irqentry_nmi_enter(regs);
 
 	console_verbose();
 
@@ -501,7 +317,7 @@ UNHANDLED(el1t, 64, error)
 static void noinstr el1_abort(struct pt_regs *regs, unsigned long esr)
 {
 	unsigned long far = read_sysreg(far_el1);
-	arm64_irqentry_state_t state;
+	irqentry_state_t state;
 
 	state = enter_from_kernel_mode(regs);
 	local_daif_inherit(regs);
@@ -513,7 +329,7 @@ static void noinstr el1_abort(struct pt_regs *regs, unsigned long esr)
 static void noinstr el1_pc(struct pt_regs *regs, unsigned long esr)
 {
 	unsigned long far = read_sysreg(far_el1);
-	arm64_irqentry_state_t state;
+	irqentry_state_t state;
 
 	state = enter_from_kernel_mode(regs);
 	local_daif_inherit(regs);
@@ -524,7 +340,7 @@ static void noinstr el1_pc(struct pt_regs *regs, unsigned long esr)
 
 static void noinstr el1_undef(struct pt_regs *regs, unsigned long esr)
 {
-	arm64_irqentry_state_t state = enter_from_kernel_mode(regs);
+	irqentry_state_t state = enter_from_kernel_mode(regs);
 
 	local_daif_inherit(regs);
 	do_el1_undef(regs, esr);
@@ -534,7 +350,7 @@ static void noinstr el1_undef(struct pt_regs *regs, unsigned long esr)
 
 static void noinstr el1_bti(struct pt_regs *regs, unsigned long esr)
 {
-	arm64_irqentry_state_t state = enter_from_kernel_mode(regs);
+	irqentry_state_t state = enter_from_kernel_mode(regs);
 
 	local_daif_inherit(regs);
 	do_el1_bti(regs, esr);
@@ -544,7 +360,7 @@ static void noinstr el1_bti(struct pt_regs *regs, unsigned long esr)
 
 static void noinstr el1_gcs(struct pt_regs *regs, unsigned long esr)
 {
-	arm64_irqentry_state_t state = enter_from_kernel_mode(regs);
+	irqentry_state_t state = enter_from_kernel_mode(regs);
 
 	local_daif_inherit(regs);
 	do_el1_gcs(regs, esr);
@@ -554,7 +370,7 @@ static void noinstr el1_gcs(struct pt_regs *regs, unsigned long esr)
 
 static void noinstr el1_mops(struct pt_regs *regs, unsigned long esr)
 {
-	arm64_irqentry_state_t state = enter_from_kernel_mode(regs);
+	irqentry_state_t state = enter_from_kernel_mode(regs);
 
 	local_daif_inherit(regs);
 	do_el1_mops(regs, esr);
@@ -564,7 +380,7 @@ static void noinstr el1_mops(struct pt_regs *regs, unsigned long esr)
 
 static void noinstr el1_breakpt(struct pt_regs *regs, unsigned long esr)
 {
-	arm64_irqentry_state_t state = arm64_enter_el1_dbg(regs);
+	irqentry_state_t state = arm64_enter_el1_dbg(regs);
 
 	debug_exception_enter(regs);
 	do_breakpoint(esr, regs);
@@ -574,7 +390,7 @@ static void noinstr el1_breakpt(struct pt_regs *regs, unsigned long esr)
 
 static void noinstr el1_softstp(struct pt_regs *regs, unsigned long esr)
 {
-	arm64_irqentry_state_t state = arm64_enter_el1_dbg(regs);
+	irqentry_state_t state = arm64_enter_el1_dbg(regs);
 
 	if (!cortex_a76_erratum_1463225_debug_handler(regs)) {
 		debug_exception_enter(regs);
@@ -595,7 +411,7 @@ static void noinstr el1_watchpt(struct pt_regs *regs, unsigned long esr)
 {
 	/* Watchpoints are the only debug exception to write FAR_EL1 */
 	unsigned long far = read_sysreg(far_el1);
-	arm64_irqentry_state_t state;
+	irqentry_state_t state;
 
 	state = arm64_enter_el1_dbg(regs);
 	debug_exception_enter(regs);
@@ -606,7 +422,7 @@ static void noinstr el1_watchpt(struct pt_regs *regs, unsigned long esr)
 
 static void noinstr el1_brk64(struct pt_regs *regs, unsigned long esr)
 {
-	arm64_irqentry_state_t state = arm64_enter_el1_dbg(regs);
+	irqentry_state_t state = arm64_enter_el1_dbg(regs);
 
 	debug_exception_enter(regs);
 	do_el1_brk64(esr, regs);
@@ -616,7 +432,7 @@ static void noinstr el1_brk64(struct pt_regs *regs, unsigned long esr)
 
 static void noinstr el1_fpac(struct pt_regs *regs, unsigned long esr)
 {
-	arm64_irqentry_state_t state = enter_from_kernel_mode(regs);
+	irqentry_state_t state = enter_from_kernel_mode(regs);
 
 	local_daif_inherit(regs);
 	do_el1_fpac(regs, esr);
@@ -676,16 +492,16 @@ asmlinkage void noinstr el1h_64_sync_handler(struct pt_regs *regs)
 static __always_inline void __el1_pnmi(struct pt_regs *regs,
 				       void (*handler)(struct pt_regs *))
 {
-	arm64_irqentry_state_t state = arm64_enter_nmi(regs);
+	irqentry_state_t state = irqentry_nmi_enter(regs);
 
 	do_interrupt_handler(regs, handler);
-	arm64_exit_nmi(regs, state);
+	irqentry_nmi_exit(regs, state);
 }
 
 static __always_inline void __el1_irq(struct pt_regs *regs,
 				      void (*handler)(struct pt_regs *))
 {
-	arm64_irqentry_state_t state = enter_from_kernel_mode(regs);
+	irqentry_state_t state = enter_from_kernel_mode(regs);
 
 	irq_enter_rcu();
 	do_interrupt_handler(regs, handler);
@@ -717,22 +533,22 @@ asmlinkage void noinstr el1h_64_fiq_handler(struct pt_regs *regs)
 asmlinkage void noinstr el1h_64_error_handler(struct pt_regs *regs)
 {
 	unsigned long esr = read_sysreg(esr_el1);
-	arm64_irqentry_state_t state;
+	irqentry_state_t state;
 
 	local_daif_restore(DAIF_ERRCTX);
-	state = arm64_enter_nmi(regs);
+	state = irqentry_nmi_enter(regs);
 	do_serror(regs, esr);
-	arm64_exit_nmi(regs, state);
+	irqentry_nmi_exit(regs, state);
 }
 
 static void noinstr el0_da(struct pt_regs *regs, unsigned long esr)
 {
 	unsigned long far = read_sysreg(far_el1);
 
-	enter_from_user_mode(regs);
+	arm64_enter_from_user_mode(regs);
 	local_daif_restore(DAIF_PROCCTX);
 	do_mem_abort(far, esr, regs);
-	exit_to_user_mode(regs);
+	arm64_exit_to_user_mode(regs);
 }
 
 static void noinstr el0_ia(struct pt_regs *regs, unsigned long esr)
@@ -747,50 +563,50 @@ static void noinstr el0_ia(struct pt_regs *regs, unsigned long esr)
 	if (!is_ttbr0_addr(far))
 		arm64_apply_bp_hardening();
 
-	enter_from_user_mode(regs);
+	arm64_enter_from_user_mode(regs);
 	local_daif_restore(DAIF_PROCCTX);
 	do_mem_abort(far, esr, regs);
-	exit_to_user_mode(regs);
+	arm64_exit_to_user_mode(regs);
 }
 
 static void noinstr el0_fpsimd_acc(struct pt_regs *regs, unsigned long esr)
 {
-	enter_from_user_mode(regs);
+	arm64_enter_from_user_mode(regs);
 	local_daif_restore(DAIF_PROCCTX);
 	do_fpsimd_acc(esr, regs);
-	exit_to_user_mode(regs);
+	arm64_exit_to_user_mode(regs);
 }
 
 static void noinstr el0_sve_acc(struct pt_regs *regs, unsigned long esr)
 {
-	enter_from_user_mode(regs);
+	arm64_enter_from_user_mode(regs);
 	local_daif_restore(DAIF_PROCCTX);
 	do_sve_acc(esr, regs);
-	exit_to_user_mode(regs);
+	arm64_exit_to_user_mode(regs);
 }
 
 static void noinstr el0_sme_acc(struct pt_regs *regs, unsigned long esr)
 {
-	enter_from_user_mode(regs);
+	arm64_enter_from_user_mode(regs);
 	local_daif_restore(DAIF_PROCCTX);
 	do_sme_acc(esr, regs);
-	exit_to_user_mode(regs);
+	arm64_exit_to_user_mode(regs);
 }
 
 static void noinstr el0_fpsimd_exc(struct pt_regs *regs, unsigned long esr)
 {
-	enter_from_user_mode(regs);
+	arm64_enter_from_user_mode(regs);
 	local_daif_restore(DAIF_PROCCTX);
 	do_fpsimd_exc(esr, regs);
-	exit_to_user_mode(regs);
+	arm64_exit_to_user_mode(regs);
 }
 
 static void noinstr el0_sys(struct pt_regs *regs, unsigned long esr)
 {
-	enter_from_user_mode(regs);
+	arm64_enter_from_user_mode(regs);
 	local_daif_restore(DAIF_PROCCTX);
 	do_el0_sys(esr, regs);
-	exit_to_user_mode(regs);
+	arm64_exit_to_user_mode(regs);
 }
 
 static void noinstr el0_pc(struct pt_regs *regs, unsigned long esr)
@@ -800,58 +616,58 @@ static void noinstr el0_pc(struct pt_regs *regs, unsigned long esr)
 	if (!is_ttbr0_addr(instruction_pointer(regs)))
 		arm64_apply_bp_hardening();
 
-	enter_from_user_mode(regs);
+	arm64_enter_from_user_mode(regs);
 	local_daif_restore(DAIF_PROCCTX);
 	do_sp_pc_abort(far, esr, regs);
-	exit_to_user_mode(regs);
+	arm64_exit_to_user_mode(regs);
 }
 
 static void noinstr el0_sp(struct pt_regs *regs, unsigned long esr)
 {
-	enter_from_user_mode(regs);
+	arm64_enter_from_user_mode(regs);
 	local_daif_restore(DAIF_PROCCTX);
 	do_sp_pc_abort(regs->sp, esr, regs);
-	exit_to_user_mode(regs);
+	arm64_exit_to_user_mode(regs);
 }
 
 static void noinstr el0_undef(struct pt_regs *regs, unsigned long esr)
 {
-	enter_from_user_mode(regs);
+	arm64_enter_from_user_mode(regs);
 	local_daif_restore(DAIF_PROCCTX);
 	do_el0_undef(regs, esr);
-	exit_to_user_mode(regs);
+	arm64_exit_to_user_mode(regs);
 }
 
 static void noinstr el0_bti(struct pt_regs *regs)
 {
-	enter_from_user_mode(regs);
+	arm64_enter_from_user_mode(regs);
 	local_daif_restore(DAIF_PROCCTX);
 	do_el0_bti(regs);
-	exit_to_user_mode(regs);
+	arm64_exit_to_user_mode(regs);
 }
 
 static void noinstr el0_mops(struct pt_regs *regs, unsigned long esr)
 {
-	enter_from_user_mode(regs);
+	arm64_enter_from_user_mode(regs);
 	local_daif_restore(DAIF_PROCCTX);
 	do_el0_mops(regs, esr);
-	exit_to_user_mode(regs);
+	arm64_exit_to_user_mode(regs);
 }
 
 static void noinstr el0_gcs(struct pt_regs *regs, unsigned long esr)
 {
-	enter_from_user_mode(regs);
+	arm64_enter_from_user_mode(regs);
 	local_daif_restore(DAIF_PROCCTX);
 	do_el0_gcs(regs, esr);
-	exit_to_user_mode(regs);
+	arm64_exit_to_user_mode(regs);
 }
 
 static void noinstr el0_inv(struct pt_regs *regs, unsigned long esr)
 {
-	enter_from_user_mode(regs);
+	arm64_enter_from_user_mode(regs);
 	local_daif_restore(DAIF_PROCCTX);
 	bad_el0_sync(regs, 0, esr);
-	exit_to_user_mode(regs);
+	arm64_exit_to_user_mode(regs);
 }
 
 static void noinstr el0_breakpt(struct pt_regs *regs, unsigned long esr)
@@ -859,12 +675,12 @@ static void noinstr el0_breakpt(struct pt_regs *regs, unsigned long esr)
 	if (!is_ttbr0_addr(regs->pc))
 		arm64_apply_bp_hardening();
 
-	enter_from_user_mode(regs);
+	arm64_enter_from_user_mode(regs);
 	debug_exception_enter(regs);
 	do_breakpoint(esr, regs);
 	debug_exception_exit(regs);
 	local_daif_restore(DAIF_PROCCTX);
-	exit_to_user_mode(regs);
+	arm64_exit_to_user_mode(regs);
 }
 
 static void noinstr el0_softstp(struct pt_regs *regs, unsigned long esr)
@@ -872,7 +688,7 @@ static void noinstr el0_softstp(struct pt_regs *regs, unsigned long esr)
 	if (!is_ttbr0_addr(regs->pc))
 		arm64_apply_bp_hardening();
 
-	enter_from_user_mode(regs);
+	arm64_enter_from_user_mode(regs);
 	/*
 	 * After handling a breakpoint, we suspend the breakpoint
 	 * and use single-step to move to the next instruction.
@@ -883,7 +699,7 @@ static void noinstr el0_softstp(struct pt_regs *regs, unsigned long esr)
 		local_daif_restore(DAIF_PROCCTX);
 		do_el0_softstep(esr, regs);
 	}
-	exit_to_user_mode(regs);
+	arm64_exit_to_user_mode(regs);
 }
 
 static void noinstr el0_watchpt(struct pt_regs *regs, unsigned long esr)
@@ -891,39 +707,39 @@ static void noinstr el0_watchpt(struct pt_regs *regs, unsigned long esr)
 	/* Watchpoints are the only debug exception to write FAR_EL1 */
 	unsigned long far = read_sysreg(far_el1);
 
-	enter_from_user_mode(regs);
+	arm64_enter_from_user_mode(regs);
 	debug_exception_enter(regs);
 	do_watchpoint(far, esr, regs);
 	debug_exception_exit(regs);
 	local_daif_restore(DAIF_PROCCTX);
-	exit_to_user_mode(regs);
+	arm64_exit_to_user_mode(regs);
 }
 
 static void noinstr el0_brk64(struct pt_regs *regs, unsigned long esr)
 {
-	enter_from_user_mode(regs);
+	arm64_enter_from_user_mode(regs);
 	local_daif_restore(DAIF_PROCCTX);
 	do_el0_brk64(esr, regs);
-	exit_to_user_mode(regs);
+	arm64_exit_to_user_mode(regs);
 }
 
 static void noinstr el0_svc(struct pt_regs *regs)
 {
-	enter_from_user_mode(regs);
+	arm64_enter_from_user_mode(regs);
 	cortex_a76_erratum_1463225_svc_handler();
 	fpsimd_syscall_enter();
 	local_daif_restore(DAIF_PROCCTX);
 	do_el0_svc(regs);
-	exit_to_user_mode(regs);
+	arm64_exit_to_user_mode(regs);
 	fpsimd_syscall_exit();
 }
 
 static void noinstr el0_fpac(struct pt_regs *regs, unsigned long esr)
 {
-	enter_from_user_mode(regs);
+	arm64_enter_from_user_mode(regs);
 	local_daif_restore(DAIF_PROCCTX);
 	do_el0_fpac(regs, esr);
-	exit_to_user_mode(regs);
+	arm64_exit_to_user_mode(regs);
 }
 
 asmlinkage void noinstr el0t_64_sync_handler(struct pt_regs *regs)
@@ -997,7 +813,7 @@ asmlinkage void noinstr el0t_64_sync_handler(struct pt_regs *regs)
 static void noinstr el0_interrupt(struct pt_regs *regs,
 				  void (*handler)(struct pt_regs *))
 {
-	enter_from_user_mode(regs);
+	arm64_enter_from_user_mode(regs);
 
 	write_sysreg(DAIF_PROCCTX_NOIRQ, daif);
 
@@ -1008,7 +824,7 @@ static void noinstr el0_interrupt(struct pt_regs *regs,
 	do_interrupt_handler(regs, handler);
 	irq_exit_rcu();
 
-	exit_to_user_mode(regs);
+	arm64_exit_to_user_mode(regs);
 }
 
 static void noinstr __el0_irq_handler_common(struct pt_regs *regs)
@@ -1034,15 +850,15 @@ asmlinkage void noinstr el0t_64_fiq_handler(struct pt_regs *regs)
 static void noinstr __el0_error_handler_common(struct pt_regs *regs)
 {
 	unsigned long esr = read_sysreg(esr_el1);
-	arm64_irqentry_state_t state;
+	irqentry_state_t state;
 
-	enter_from_user_mode(regs);
+	arm64_enter_from_user_mode(regs);
 	local_daif_restore(DAIF_ERRCTX);
-	state = arm64_enter_nmi(regs);
+	state = irqentry_nmi_enter(regs);
 	do_serror(regs, esr);
-	arm64_exit_nmi(regs, state);
+	irqentry_nmi_exit(regs, state);
 	local_daif_restore(DAIF_PROCCTX);
-	exit_to_user_mode(regs);
+	arm64_exit_to_user_mode(regs);
 }
 
 asmlinkage void noinstr el0t_64_error_handler(struct pt_regs *regs)
@@ -1053,27 +869,27 @@ asmlinkage void noinstr el0t_64_error_handler(struct pt_regs *regs)
 #ifdef CONFIG_COMPAT
 static void noinstr el0_cp15(struct pt_regs *regs, unsigned long esr)
 {
-	enter_from_user_mode(regs);
+	arm64_enter_from_user_mode(regs);
 	local_daif_restore(DAIF_PROCCTX);
 	do_el0_cp15(esr, regs);
-	exit_to_user_mode(regs);
+	arm64_exit_to_user_mode(regs);
 }
 
 static void noinstr el0_svc_compat(struct pt_regs *regs)
 {
-	enter_from_user_mode(regs);
+	arm64_enter_from_user_mode(regs);
 	cortex_a76_erratum_1463225_svc_handler();
 	local_daif_restore(DAIF_PROCCTX);
 	do_el0_svc_compat(regs);
-	exit_to_user_mode(regs);
+	arm64_exit_to_user_mode(regs);
 }
 
 static void noinstr el0_bkpt32(struct pt_regs *regs, unsigned long esr)
 {
-	enter_from_user_mode(regs);
+	arm64_enter_from_user_mode(regs);
 	local_daif_restore(DAIF_PROCCTX);
 	do_bkpt32(esr, regs);
-	exit_to_user_mode(regs);
+	arm64_exit_to_user_mode(regs);
 }
 
 asmlinkage void noinstr el0t_32_sync_handler(struct pt_regs *regs)
@@ -1152,7 +968,7 @@ asmlinkage void noinstr __noreturn handle_bad_stack(struct pt_regs *regs)
 	unsigned long esr = read_sysreg(esr_el1);
 	unsigned long far = read_sysreg(far_el1);
 
-	arm64_enter_nmi(regs);
+	irqentry_nmi_enter(regs);
 	panic_bad_stack(regs, esr, far);
 }
 
@@ -1160,7 +976,7 @@ asmlinkage void noinstr __noreturn handle_bad_stack(struct pt_regs *regs)
 asmlinkage noinstr unsigned long
 __sdei_handler(struct pt_regs *regs, struct sdei_registered_event *arg)
 {
-	arm64_irqentry_state_t state;
+	irqentry_state_t state;
 	unsigned long ret;
 
 	/*
@@ -1185,9 +1001,9 @@ __sdei_handler(struct pt_regs *regs, struct sdei_registered_event *arg)
 	else if (cpu_has_pan())
 		set_pstate_pan(0);
 
-	state = arm64_enter_nmi(regs);
+	state = irqentry_nmi_enter(regs);
 	ret = do_sdei_event(regs, arg);
-	arm64_exit_nmi(regs, state);
+	irqentry_nmi_exit(regs, state);
 
 	return ret;
 }
diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
index db3f972f8cd9..1110eeb21f57 100644
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@ -9,6 +9,7 @@
 #include <linux/cache.h>
 #include <linux/compat.h>
 #include <linux/errno.h>
+#include <linux/irq-entry-common.h>
 #include <linux/kernel.h>
 #include <linux/signal.h>
 #include <linux/freezer.h>
@@ -1576,7 +1577,7 @@ static void handle_signal(struct ksignal *ksig, struct pt_regs *regs)
  * the kernel can handle, and then we build all the user-level signal handling
  * stack-frames in one go after that.
  */
-void do_signal(struct pt_regs *regs)
+void arch_do_signal_or_restart(struct pt_regs *regs)
 {
 	unsigned long continue_addr = 0, restart_addr = 0;
 	int retval = 0;
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 33+ messages in thread

* Re: [PATCH -next v7 1/7] arm64: ptrace: Replace interrupts_enabled() with regs_irqs_disabled()
  2025-07-29  1:54 ` [PATCH -next v7 1/7] arm64: ptrace: Replace interrupts_enabled() with regs_irqs_disabled() Jinjie Ruan
@ 2025-08-05 15:05   ` Ada Couprie Diaz
  2025-08-06  2:31     ` Jinjie Ruan
  0 siblings, 1 reply; 33+ messages in thread
From: Ada Couprie Diaz @ 2025-08-05 15:05 UTC (permalink / raw)
  To: Jinjie Ruan
  Cc: mark.rutland, sstabellini, puranjay, anshuman.khandual,
	catalin.marinas, liaochang1, oleg, kristina.martsenko,
	linux-kernel, broonie, chenl311, xen-devel, leitao, ryan.roberts,
	akpm, mbenes, will, ardb, linux-arm-kernel

On 29/07/2025 02:54, Jinjie Ruan wrote:

> The generic entry code expects architecture code to provide
> regs_irqs_disabled(regs) function, but arm64 does not have this and
> provides inerrupts_enabled(regs), which has the opposite polarity.
Nit: "interrupts_enabled(regs)"
> In preparation for moving arm64 over to the generic entry code,
> relace arm64's interrupts_enabled() with regs_irqs_disabled() and
> update its callers under arch/arm64.
>
> For the moment, a definition of interrupts_enabled() is provided for
> the GICv3 driver. Once arch/arm implement regs_irqs_disabled(), this
> can be removed.
>
> Delete the fast_interrupts_enabled() macro as it is unused and we
> don't want any new users to show up.
>
> No functional changes.
>
> Acked-by: Mark Rutland <mark.rutland@arm.com>
> Suggested-by: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
> ---
Otherwise looks good to me !
Reviewed-by: Ada Couprie Diaz <ada.coupriediaz@arm.com>


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -next v7 2/7] arm64: entry: Refactor the entry and exit for exceptions from EL1
  2025-07-29  1:54 ` [PATCH -next v7 2/7] arm64: entry: Refactor the entry and exit for exceptions from EL1 Jinjie Ruan
@ 2025-08-05 15:06   ` Ada Couprie Diaz
  2025-08-06  2:49     ` Jinjie Ruan
  2025-08-12 11:01   ` Mark Rutland
  1 sibling, 1 reply; 33+ messages in thread
From: Ada Couprie Diaz @ 2025-08-05 15:06 UTC (permalink / raw)
  To: Jinjie Ruan
  Cc: mark.rutland, sstabellini, puranjay, anshuman.khandual,
	catalin.marinas, liaochang1, oleg, kristina.martsenko,
	linux-kernel, broonie, chenl311, xen-devel, leitao, ryan.roberts,
	akpm, mbenes, will, ardb, linux-arm-kernel

Hi,

On 29/07/2025 02:54, Jinjie Ruan wrote:

> The generic entry code uses irqentry_state_t to track lockdep and RCU
> state across exception entry and return. For historical reasons, arm64
> embeds similar fields within its pt_regs structure.
>
> In preparation for moving arm64 over to the generic entry code, pull
> these fields out of arm64's pt_regs, and use a separate structure,
> matching the style of the generic entry code.
>
> No functional changes.
As far as I understand and checked, we used the two fields
in an exclusive fashion, so there is indeed no functional change.
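(For readers following along : the new structure mirrors the generic
irqentry_state_t, i.e. a union whose two flags are never live at the same
time — a rough sketch, not a verbatim quote :

	typedef struct arm64_irqentry_state {
		union {
			bool	exit_rcu;	/* kernel entry had to enter RCU */
			bool	lockdep;	/* lockdep state across NMI/debug */
		};
	} arm64_irqentry_state_t;

which is why the exclusive use of the two fields matters.)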
> Suggested-by: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
> ---
> [...]
> diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
> index 8e798f46ad28..97e0741abde1 100644
> --- a/arch/arm64/kernel/entry-common.c
> +++ b/arch/arm64/kernel/entry-common.c
> [...]
> @@ -475,73 +497,81 @@ UNHANDLED(el1t, 64, error)
>   static void noinstr el1_abort(struct pt_regs *regs, unsigned long esr)
>   {
>   	unsigned long far = read_sysreg(far_el1);
> +	arm64_irqentry_state_t state;
>   
> -	enter_from_kernel_mode(regs);
> +	state = enter_from_kernel_mode(regs);
Nit: There are some inconsistencies : some functions split state's
declaration and assignment (like el1_abort here), while some others do it
on the same line (el1_undef() below for example).
In some cases the split is welcome, as the entry function is called after
some other work, but here for example it doesn't seem to be beneficial ?
>   	local_daif_inherit(regs);
>   	do_mem_abort(far, esr, regs);
>   	local_daif_mask();
> -	exit_to_kernel_mode(regs);
> +	exit_to_kernel_mode(regs, state);
>   }
>   
>   static void noinstr el1_pc(struct pt_regs *regs, unsigned long esr)
>   {
>   	unsigned long far = read_sysreg(far_el1);
> +	arm64_irqentry_state_t state;
>   
> -	enter_from_kernel_mode(regs);
> +	state = enter_from_kernel_mode(regs);
>   	local_daif_inherit(regs);
>   	do_sp_pc_abort(far, esr, regs);
>   	local_daif_mask();
> -	exit_to_kernel_mode(regs);
> +	exit_to_kernel_mode(regs, state);
>   }
>   
>   static void noinstr el1_undef(struct pt_regs *regs, unsigned long esr)
>   {
> -	enter_from_kernel_mode(regs);
> +	arm64_irqentry_state_t state = enter_from_kernel_mode(regs);
> +
>   	local_daif_inherit(regs);
>   	do_el1_undef(regs, esr);
>   	local_daif_mask();
> -	exit_to_kernel_mode(regs);
> +	exit_to_kernel_mode(regs, state);
>   }
>
> [...]
Other than the small nit:
Reviewed-by: Ada Couprie Diaz <ada.coupriediaz@arm.com>



^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -next v7 3/7] arm64: entry: Rework arm64_preempt_schedule_irq()
  2025-07-29  1:54 ` [PATCH -next v7 3/7] arm64: entry: Rework arm64_preempt_schedule_irq() Jinjie Ruan
@ 2025-08-05 15:06   ` Ada Couprie Diaz
  0 siblings, 0 replies; 33+ messages in thread
From: Ada Couprie Diaz @ 2025-08-05 15:06 UTC (permalink / raw)
  To: Jinjie Ruan
  Cc: mark.rutland, sstabellini, puranjay, anshuman.khandual,
	catalin.marinas, liaochang1, oleg, kristina.martsenko,
	linux-kernel, broonie, chenl311, xen-devel, leitao, ryan.roberts,
	akpm, mbenes, will, ardb, linux-arm-kernel

On 29/07/2025 02:54, Jinjie Ruan wrote:

> The generic entry code has the form:
>
> | raw_irqentry_exit_cond_resched()
> | {
> | 	if (!preempt_count()) {
> | 		...
> | 		if (need_resched())
> | 			preempt_schedule_irq();
> | 	}
> | }
>
> In preparation for moving arm64 over to the generic entry code, align
> the structure of the arm64 code with raw_irqentry_exit_cond_resched() from
> the generic entry code.
>
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
> ---
Reviewed-by: Ada Couprie Diaz <ada.coupriediaz@arm.com>



^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -next v7 4/7] arm64: entry: Use preempt_count() and need_resched() helper
  2025-07-29  1:54 ` [PATCH -next v7 4/7] arm64: entry: Use preempt_count() and need_resched() helper Jinjie Ruan
@ 2025-08-05 15:06   ` Ada Couprie Diaz
  0 siblings, 0 replies; 33+ messages in thread
From: Ada Couprie Diaz @ 2025-08-05 15:06 UTC (permalink / raw)
  To: Jinjie Ruan
  Cc: mark.rutland, sstabellini, puranjay, anshuman.khandual,
	catalin.marinas, liaochang1, oleg, kristina.martsenko,
	linux-kernel, broonie, chenl311, xen-devel, leitao, ryan.roberts,
	akpm, mbenes, will, ardb, linux-arm-kernel

On 29/07/2025 02:54, Jinjie Ruan wrote:

> The generic entry code uses preempt_count() and need_resched() helpers to
> check if it should do preempt_schedule_irq(). Currently, arm64 use its own
> check logic, that is "READ_ONCE(current_thread_info()->preempt_count == 0",
> which is equivalent to "preempt_count() == 0 && need_resched()".
>
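In case the folding is not obvious to other readers : arm64 keeps
need_resched in the upper word of a 64-bit preempt count, so — if I recall
the encoding correctly — a single load checks both conditions at once.
A rough sketch of the equivalence, not a quote of the patch :

	/* Old, arm64-specific form: the 64-bit value is zero only when
	 * preempt.count == 0 *and* rescheduling is needed. */
	if (!READ_ONCE(current_thread_info()->preempt_count))
		preempt_schedule_irq();

	/* Equivalent form using the generic helpers: */
	if (!preempt_count() && need_resched())
		preempt_schedule_irq();
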
> In preparation for moving arm64 over to the generic entry code, use
> these helpers to replace arm64's own code and move it ahead.
>
> No functional changes.
>
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
> ---
Reviewed-by: Ada Couprie Diaz <ada.coupriediaz@arm.com>



^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -next v7 5/7] arm64: entry: Refactor preempt_schedule_irq() check code
  2025-07-29  1:54 ` [PATCH -next v7 5/7] arm64: entry: Refactor preempt_schedule_irq() check code Jinjie Ruan
@ 2025-08-05 15:06   ` Ada Couprie Diaz
  2025-08-06  6:26     ` Jinjie Ruan
  2025-08-06  6:39     ` Jinjie Ruan
  2025-08-11 16:02   ` Ada Couprie Diaz
  2025-08-12 11:13   ` Mark Rutland
  2 siblings, 2 replies; 33+ messages in thread
From: Ada Couprie Diaz @ 2025-08-05 15:06 UTC (permalink / raw)
  To: Jinjie Ruan
  Cc: mark.rutland, sstabellini, puranjay, anshuman.khandual,
	catalin.marinas, liaochang1, oleg, kristina.martsenko,
	linux-kernel, broonie, chenl311, xen-devel, leitao, ryan.roberts,
	akpm, mbenes, will, ardb, linux-arm-kernel

Hi Jinjie,

On 29/07/2025 02:54, Jinjie Ruan wrote:
> ARM64 requires an additional check whether to reschedule on return
> from interrupt. So add arch_irqentry_exit_need_resched() as the default
> NOP implementation and hook it up into the need_resched() condition in
> raw_irqentry_exit_cond_resched(). This allows ARM64 to implement
> the architecture specific version for switching over to
> the generic entry code.
I was a bit confused by this, as I didn't see the link with the generic
entry given you implement `raw_irqentry_exit_cond_resched()` in arch/arm64
as well in this patch : I expected the arm64 implementation to be added.
I share more thoughts below.

What do you think about something along those lines ?

	Compared to the generic entry code, arm64 does additional checks
	when deciding to reschedule on return from an interrupt.
	Introduce arch_irqentry_exit_need_resched() in the need_resched() condition
	of the generic raw_irqentry_exit_cond_resched(), with a NOP default.
	This will allow arm64 to implement its architecture specific checks when
	switching over to the generic entry code.

> [...]
> diff --git a/kernel/entry/common.c b/kernel/entry/common.c
> index b82032777310..4aa9656fa1b4 100644
> --- a/kernel/entry/common.c
> +++ b/kernel/entry/common.c
> @@ -142,6 +142,20 @@ noinstr irqentry_state_t irqentry_enter(struct pt_regs *regs)
>   	return ret;
>   }
>   
> +/**
> + * arch_irqentry_exit_need_resched - Architecture specific need resched function
> + *
> + * Invoked from raw_irqentry_exit_cond_resched() to check if need resched.
Very nit : "to check if resched is needed." ?
> + * Defaults return true.
> + *
> + * The main purpose is to permit arch to skip preempt a task from an IRQ.
I feel that "to avoid preemption of a task" instead of "to skip preempt
a task" would make this much clearer, what do you think ?
> + */
> +static inline bool arch_irqentry_exit_need_resched(void);
> +
> +#ifndef arch_irqentry_exit_need_resched
> +static inline bool arch_irqentry_exit_need_resched(void) { return true; }
> +#endif
> +
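
For context, with this default in place the generic resched path reads
roughly as follows (abridged sketch of the resulting
kernel/entry/common.c, not a verbatim quote) :

	void raw_irqentry_exit_cond_resched(void)
	{
		if (!preempt_count()) {
			/* ... sanity checks elided ... */
			if (need_resched() && arch_irqentry_exit_need_resched())
				preempt_schedule_irq();
		}
	}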

I've had some trouble reviewing this patch : on the one hand because
I didn't notice `arch_irqentry_exit_need_resched()` was added in
the common entry code, which is on me !
On the other hand, I felt that the patch itself was a bit disconnected :
we add `arch_irqentry_exit_need_resched()` in the common entry code,
with a default NOP, but we also hook it into the same function on the
arm64 side, while mentioning that this is for arm64's additional checks,
which we only implement in patch 7.

Would it make sense to move the `arch_irqentry_exit_need_resched()`
part of the patch to patch 7, so that the introduction and
arch-specific implementation appear together ?
To me it seems easier to wrap my head around, as it would look like
"Move arm64 to generic entry, but it does additional checks : add a new
arch-specific function controlling re-scheduling, defaulting to true,
and implement it for arm64". I feel it could help making patch 7's
commit message clearer as well.

From what I gathered on the archive, `arch_irqentry_exit_need_resched()`
being added here was suggested previously, so others might not have the
same opinion.

Maybe improving the commit message and comment for this would be enough
as well, as per my suggestions above.


Otherwise the changes make sense and I don't see any functional issues !

Thanks,
Ada



^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -next v7 6/7] arm64: entry: Move arm64_preempt_schedule_irq() into __exit_to_kernel_mode()
  2025-07-29  1:54 ` [PATCH -next v7 6/7] arm64: entry: Move arm64_preempt_schedule_irq() into __exit_to_kernel_mode() Jinjie Ruan
@ 2025-08-05 15:07   ` Ada Couprie Diaz
  0 siblings, 0 replies; 33+ messages in thread
From: Ada Couprie Diaz @ 2025-08-05 15:07 UTC (permalink / raw)
  To: Jinjie Ruan
  Cc: mark.rutland, sstabellini, ryan.roberts, anshuman.khandual, will,
	liaochang1, linux-kernel, kristina.martsenko, oleg, broonie,
	chenl311, catalin.marinas, leitao, xen-devel, akpm, mbenes,
	puranjay, ardb, linux-arm-kernel

On 29/07/2025 02:54, Jinjie Ruan wrote:

> The arm64 entry code only preempts a kernel context upon a return from
> a regular IRQ exception. The generic entry code may preempt a kernel
> context for any exception return where irqentry_exit() is used, and so
> may preempt other exceptions such as faults.
>
> In preparation for moving arm64 over to the generic entry code, align
> arm64 with the generic behaviour by calling
> arm64_preempt_schedule_irq() from exit_to_kernel_mode(). To make this
> possible, arm64_preempt_schedule_irq()
> and dynamic/raw_irqentry_exit_cond_resched() are moved earlier in
> the file, with no changes.
>
> As Mark pointed out, this change will have the following 2 key impact:
>
> - " We'll preempt even without taking a "real" interrupt. That
>      shouldn't result in preemption that wasn't possible before,
>      but it does change the probability of preempting at certain points,
>      and might have a performance impact, so probably warrants a
>      benchmark."
>
> - " We will not preempt when taking interrupts from a region of kernel
>      code where IRQs are enabled but RCU is not watching, matching the
>      behaviour of the generic entry code.
>
>      This has the potential to introduce livelock if we can ever have a
>      screaming interrupt in such a region, so we'll need to go figure out
>      whether that's actually a problem.
>
>      Having this as a separate patch will make it easier to test/bisect
>      for that specifically."
>
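To make the new behaviour concrete for other readers : after this patch
the preemption check runs on every kernel-mode exception return, not just
IRQ, because it now sits in the common exit path — abridged from the code
that patch 7 later replaces :

	static __always_inline void __exit_to_kernel_mode(struct pt_regs *regs,
							  arm64_irqentry_state_t state)
	{
		lockdep_assert_irqs_disabled();

		if (!regs_irqs_disabled(regs)) {
			/* ... lockdep/RCU handling ... */
			if (IS_ENABLED(CONFIG_PREEMPTION))
				irqentry_exit_cond_resched();
			/* ... */
		}
		/* ... */
	}
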
> Suggested-by: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
> ---

Reviewed-by: Ada Couprie Diaz <ada.coupriediaz@arm.com>



^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -next v7 7/7] arm64: entry: Switch to generic IRQ entry
  2025-07-29  1:54 ` [PATCH -next v7 7/7] arm64: entry: Switch to generic IRQ entry Jinjie Ruan
@ 2025-08-05 15:07   ` Ada Couprie Diaz
  2025-08-06  6:59     ` Jinjie Ruan
  0 siblings, 1 reply; 33+ messages in thread
From: Ada Couprie Diaz @ 2025-08-05 15:07 UTC (permalink / raw)
  To: Jinjie Ruan
  Cc: mark.rutland, sstabellini, puranjay, anshuman.khandual,
	catalin.marinas, liaochang1, oleg, kristina.martsenko,
	linux-kernel, broonie, chenl311, xen-devel, leitao, ryan.roberts,
	akpm, mbenes, will, ardb, linux-arm-kernel

Hi Jinjie,

The code changes look good to me, just a small missing clean up I believe.

On 29/07/2025 02:54, Jinjie Ruan wrote:

> Currently, x86, Riscv, Loongarch use the generic entry. Convert arm64
> to use the generic entry infrastructure from kernel/entry/*.
> The generic entry makes maintainers' work easier and codes
> more elegant.
>
> Switch arm64 to generic IRQ entry first, which removed duplicate 100+
> LOC. The next patch serise will switch to generic entry completely later.
> Switch to generic entry in two steps according to Mark's suggestion
> will make it easier to review.

I think the commit message could be clearer, especially since now this
series only moves arm64 to generic IRQ entry and not the complete generic
entry.

What do you think of something like below ? It repeats a bit less and I
think it helps understanding what is going on in this specific commit, as
you already have details on the larger plans in the cover.

	Currently, x86, Riscv and Loongarch use the generic entry code, which makes
	maintainer's work easier and code more elegant.
	Start converting arm64 to use the generic entry infrastructure
	from kernel/entry/* by switching it to generic IRQ entry, which removes 100+
	lines of duplicate code.
	arm64 will completely switch to generic entry in a later series.

> The changes are below:
>   - Remove *enter_from/exit_to_kernel_mode(), and wrap with generic
>     irqentry_enter/exit(). Also remove *enter_from/exit_to_user_mode(),
>     and wrap with generic enter_from/exit_to_user_mode() because they
>     are exactly the same so far.
Nit : "so far" can be removed
>   - Remove arm64_enter/exit_nmi() and use generic irqentry_nmi_enter/exit()
>     because they're exactly the same, so the temporary arm64 version
>     irqentry_state can also be removed.
>
>   - Remove PREEMPT_DYNAMIC code, as generic entry do the same thing
>     if arm64 implement arch_irqentry_exit_need_resched().
This feels unrelated, given that the part that needs
`arch_irqentry_exit_need_resched()` is called whether or not
PREEMPT_DYNAMIC is enabled ?

Given my comments on patch 5, I feel that the commit message should mention
explicitly the implementation of `arch_irqentry_exit_need_resched()` and why,
even though it was already mentioned in patch 5.
(This is what I was referencing in patch 5 : as I feel it's useful to
mention again the reasons when implementing it, it doesn't feel too out of
place to introduce the generic part at the same time. But again, I might be
wrong here.)

Then you can have another point explaining that
`raw_irqentry_exit_cond_resched()` and the PREEMPT_DYNAMIC code are removed
because they are identical to the generic entry code, similarly to your
other points.
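For reference — and it may help the commit message — the arm64
implementation carries over the old arm64_preempt_schedule_irq() checks,
roughly (sketch, not a verbatim quote of the patch) :

	static inline bool arch_irqentry_exit_need_resched(void)
	{
		/*
		 * DAIF.DA are cleared at the start of IRQ/FIQ handling; if
		 * anything is still set we must have handled an NMI, so
		 * skip preemption.
		 */
		if (system_uses_irq_prio_masking() && read_sysreg(daif))
			return false;

		/* Only preempt once cpufeatures have been enabled. */
		if (!system_capabilities_finalized())
			return false;

		return true;
	}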
> Tested ok with following test cases on Qemu virt platform:
>   - Perf tests.
>   - Different `dynamic preempt` mode switch.
>   - Pseudo NMI tests.
>   - Stress-ng CPU stress test.
>   - MTE test case in Documentation/arch/arm64/memory-tagging-extension.rst
>     and all test cases in tools/testing/selftests/arm64/mte/*.
Nit : I'm not sure if the commit message is the best place for this,
given you already gave some details in the cover ?
But I don't have much experience here, so I'll leave it up to you and
others !
> Suggested-by: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
> ---
> [...]
> diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
> index db3f972f8cd9..1110eeb21f57 100644
> --- a/arch/arm64/kernel/signal.c
> +++ b/arch/arm64/kernel/signal.c
> @@ -9,6 +9,7 @@
>   #include <linux/cache.h>
>   #include <linux/compat.h>
>   #include <linux/errno.h>
> +#include <linux/irq-entry-common.h>
>   #include <linux/kernel.h>
>   #include <linux/signal.h>
>   #include <linux/freezer.h>
> @@ -1576,7 +1577,7 @@ static void handle_signal(struct ksignal *ksig, struct pt_regs *regs)
>    * the kernel can handle, and then we build all the user-level signal handling
>    * stack-frames in one go after that.
>    */
> -void do_signal(struct pt_regs *regs)
> +void arch_do_signal_or_restart(struct pt_regs *regs)
Given that `do_signal(struct pt_regs *regs)` is declared in
`arch/arm64/include/asm/exception.h`, and that no users of `do_signal()`
remain, I think the declaration should be removed there.

Thanks,
Ada


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -next v7 0/7] arm64: entry: Convert to generic irq entry
  2025-07-29  1:54 [PATCH -next v7 0/7] arm64: entry: Convert to generic irq entry Jinjie Ruan
                   ` (6 preceding siblings ...)
  2025-07-29  1:54 ` [PATCH -next v7 7/7] arm64: entry: Switch to generic IRQ entry Jinjie Ruan
@ 2025-08-05 15:08 ` Ada Couprie Diaz
  2025-08-06  8:11   ` Jinjie Ruan
  2025-08-12 11:19 ` Mark Rutland
  8 siblings, 1 reply; 33+ messages in thread
From: Ada Couprie Diaz @ 2025-08-05 15:08 UTC (permalink / raw)
  To: Jinjie Ruan
  Cc: mark.rutland, sstabellini, puranjay, anshuman.khandual,
	catalin.marinas, liaochang1, oleg, kristina.martsenko,
	linux-kernel, broonie, chenl311, xen-devel, leitao, ryan.roberts,
	akpm, mbenes, will, ardb, linux-arm-kernel

Hi Jinjie,

On 29/07/2025 02:54, Jinjie Ruan wrote:

> Since commit a70e9f647f50 ("entry: Split generic entry into generic
> exception and syscall entry") split the generic entry into generic irq
> entry and generic syscall entry, it is time to convert arm64 to use
> the generic irq entry. And ARM64 will be completely converted to generic
> entry in the upcoming patch series.
Note : I had to manually cherry-pick a70e9f647f50 when pulling the series
on top of the Linux Arm Kernel for-next/core branch, but there might be
something I'm missing here.
>
> The main convert steps are as follows:
> - Split generic entry into generic irq entry and generic syscall to
>    make the single patch more concentrated in switching to one thing.
> - Make arm64 easier to use irqentry_enter/exit().
> - Make arm64 closer to the PREEMPT_DYNAMIC code of generic entry.
> - Switch to generic irq entry.

I reviewed the whole series and as expected it looks good ! Just a few nits
here and there and some clarifications that I think could be useful.

I'm not sure about the generic implementation of
`arch_irqentry_exit_need_resched()` in patch 5 : I would be tempted to move
it to patch 7. I detail my thoughts more on the relevant patches, but I
might be wrong and that feels like a detail : I don't think the code itself
has issues.
> It was tested ok with following test cases on QEMU virt platform:
>   - Perf tests.
>   - Different `dynamic preempt` mode switch.
>   - Pseudo NMI tests.
>   - Stress-ng CPU stress test.
>   - MTE test case in Documentation/arch/arm64/memory-tagging-extension.rst
>     and all test cases in tools/testing/selftests/arm64/mte/*.
>
> The test QEMU configuration is as follows:
>
> 	qemu-system-aarch64 \
> 		-M virt,gic-version=3,virtualization=on,mte=on \
> 		-cpu max,pauth-impdef=on \
> 		-kernel Image \
> 		-smp 8,sockets=1,cores=4,threads=2 \
> 		-m 512m \
> 		-nographic \
> 		-no-reboot \
> 		-device virtio-rng-pci \
> 		-append "root=/dev/vda rw console=ttyAMA0 kgdboc=ttyAMA0,115200 \
> 			earlycon preempt=voluntary irqchip.gicv3_pseudo_nmi=1" \
> 		-drive if=none,file=images/rootfs.ext4,format=raw,id=hd0 \
> 		-device virtio-blk-device,drive=hd0 \
>
I'll spend some time testing the series now, specifically given patch 6's
changes, but other than that everything I saw made sense and didn't look
like it would be of concern to me.

Thanks,
Ada


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -next v7 1/7] arm64: ptrace: Replace interrupts_enabled() with regs_irqs_disabled()
  2025-08-05 15:05   ` Ada Couprie Diaz
@ 2025-08-06  2:31     ` Jinjie Ruan
  0 siblings, 0 replies; 33+ messages in thread
From: Jinjie Ruan @ 2025-08-06  2:31 UTC (permalink / raw)
  To: Ada Couprie Diaz
  Cc: catalin.marinas, will, oleg, sstabellini, mark.rutland, puranjay,
	broonie, mbenes, ryan.roberts, akpm, chenl311, anshuman.khandual,
	kristina.martsenko, liaochang1, ardb, leitao, linux-arm-kernel,
	linux-kernel, xen-devel



On 2025/8/5 23:05, Ada Couprie Diaz wrote:
> On 29/07/2025 02:54, Jinjie Ruan wrote:
> 
>> The generic entry code expects architecture code to provide
>> regs_irqs_disabled(regs) function, but arm64 does not have this and
>> provides inerrupts_enabled(regs), which has the opposite polarity.
> Nit: "interrupts_enabled(regs)"

Good catch! Thank you for the review.

>> In preparation for moving arm64 over to the generic entry code,
>> relace arm64's interrupts_enabled() with regs_irqs_disabled() and
>> update its callers under arch/arm64.
>>
>> For the moment, a definition of interrupts_enabled() is provided for
>> the GICv3 driver. Once arch/arm implement regs_irqs_disabled(), this
>> can be removed.
>>
>> Delete the fast_interrupts_enabled() macro as it is unused and we
>> don't want any new users to show up.
>>
>> No functional changes.
>>
>> Acked-by: Mark Rutland <mark.rutland@arm.com>
>> Suggested-by: Mark Rutland <mark.rutland@arm.com>
>> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
>> ---
> Otherwise looks good to me !
> Reviewed-by: Ada Couprie Diaz <ada.coupriediaz@arm.com>
> 


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -next v7 2/7] arm64: entry: Refactor the entry and exit for exceptions from EL1
  2025-08-05 15:06   ` Ada Couprie Diaz
@ 2025-08-06  2:49     ` Jinjie Ruan
  2025-08-11 16:01       ` Ada Couprie Diaz
  0 siblings, 1 reply; 33+ messages in thread
From: Jinjie Ruan @ 2025-08-06  2:49 UTC (permalink / raw)
  To: Ada Couprie Diaz
  Cc: catalin.marinas, will, oleg, sstabellini, mark.rutland, puranjay,
	broonie, mbenes, ryan.roberts, akpm, chenl311, anshuman.khandual,
	kristina.martsenko, liaochang1, ardb, leitao, linux-arm-kernel,
	linux-kernel, xen-devel



On 2025/8/5 23:06, Ada Couprie Diaz wrote:
> Hi,
> 
> On 29/07/2025 02:54, Jinjie Ruan wrote:
> 
>> The generic entry code uses irqentry_state_t to track lockdep and RCU
>> state across exception entry and return. For historical reasons, arm64
>> embeds similar fields within its pt_regs structure.
>>
>> In preparation for moving arm64 over to the generic entry code, pull
>> these fields out of arm64's pt_regs, and use a separate structure,
>> matching the style of the generic entry code.
>>
>> No functional changes.
> As far as I understand and checked, we used the two fields
> in an exclusive fashion, so there is indeed no functional change.
>> Suggested-by: Mark Rutland <mark.rutland@arm.com>
>> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
>> ---
>> [...]
>> diff --git a/arch/arm64/kernel/entry-common.c
>> b/arch/arm64/kernel/entry-common.c
>> index 8e798f46ad28..97e0741abde1 100644
>> --- a/arch/arm64/kernel/entry-common.c
>> +++ b/arch/arm64/kernel/entry-common.c
>> [...]
>> @@ -475,73 +497,81 @@ UNHANDLED(el1t, 64, error)
>>   static void noinstr el1_abort(struct pt_regs *regs, unsigned long esr)
>>   {
>>       unsigned long far = read_sysreg(far_el1);
>> +    arm64_irqentry_state_t state;
>>   -    enter_from_kernel_mode(regs);
>> +    state = enter_from_kernel_mode(regs);
> Nit: There is some inconsistencies with some functions splitting state's
> definition
> and declaration (like el1_abort here), while some others do it on the
> same line
> (el1_undef() below for example).
> In some cases it is welcome as the entry function is called after some
> other work,
> but here for example it doesn't seem to be beneficial ?

Both methods can keep the modifications to `enter_from_kernel_mode()` on
the same line as the original code, which will facilitate code review.

I think it is also fine to do it on the same line here, which saves one
line of code; which method is better may be a matter of personal opinion.

>>       local_daif_inherit(regs);
>>       do_mem_abort(far, esr, regs);
>>       local_daif_mask();
>> -    exit_to_kernel_mode(regs);
>> +    exit_to_kernel_mode(regs, state);
>>   }
>>     static void noinstr el1_pc(struct pt_regs *regs, unsigned long esr)
>>   {
>>       unsigned long far = read_sysreg(far_el1);
>> +    arm64_irqentry_state_t state;
>>   -    enter_from_kernel_mode(regs);
>> +    state = enter_from_kernel_mode(regs);
>>       local_daif_inherit(regs);
>>       do_sp_pc_abort(far, esr, regs);
>>       local_daif_mask();
>> -    exit_to_kernel_mode(regs);
>> +    exit_to_kernel_mode(regs, state);
>>   }
>>     static void noinstr el1_undef(struct pt_regs *regs, unsigned long
>> esr)
>>   {
>> -    enter_from_kernel_mode(regs);
>> +    arm64_irqentry_state_t state = enter_from_kernel_mode(regs);
>> +
>>       local_daif_inherit(regs);
>>       do_el1_undef(regs, esr);
>>       local_daif_mask();
>> -    exit_to_kernel_mode(regs);
>> +    exit_to_kernel_mode(regs, state);
>>   }
>>
>> [...]
> Other than the small nit:
> Reviewed-by: Ada Couprie Diaz <ada.coupriediaz@arm.com>
> 
> 


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -next v7 5/7] arm64: entry: Refactor preempt_schedule_irq() check code
  2025-08-05 15:06   ` Ada Couprie Diaz
@ 2025-08-06  6:26     ` Jinjie Ruan
  2025-08-06  6:39     ` Jinjie Ruan
  1 sibling, 0 replies; 33+ messages in thread
From: Jinjie Ruan @ 2025-08-06  6:26 UTC (permalink / raw)
  To: Ada Couprie Diaz
  Cc: catalin.marinas, will, oleg, sstabellini, mark.rutland, puranjay,
	broonie, mbenes, ryan.roberts, akpm, chenl311, anshuman.khandual,
	kristina.martsenko, liaochang1, ardb, leitao, linux-arm-kernel,
	linux-kernel, xen-devel



On 2025/8/5 23:06, Ada Couprie Diaz wrote:
> Hi Jinjie,
> 
> On 29/07/2025 02:54, Jinjie Ruan wrote:
>> ARM64 requires an additional check whether to reschedule on return
>> from interrupt. So add arch_irqentry_exit_need_resched() as the default
>> NOP implementation and hook it up into the need_resched() condition in
>> raw_irqentry_exit_cond_resched(). This allows ARM64 to implement
>> the architecture specific version for switching over to
>> the generic entry code.
> I was a bit confused by this, as I didn't see the link with the generic
> entry
> given you implement `raw_irqentry_exit_cond_resched()` in arch/arm64
> as well in this patch : I expected the arm64 implementation to be added.
> I share more thoughts below.
> 
> What do you think about something along those lines ?
> 
>     Compared to the generic entry code, arm64 does additional checks
>     when deciding to reschedule on return from an interrupt.
>     Introduce arch_irqentry_exit_need_resched() in the need_resched()
> condition
>     of the generic raw_irqentry_exit_cond_resched(), with a NOP default.
>     This will allow arm64 to implement its architecture specific checks
> when
>     switching over to the generic entry code.

This revision makes it easier for people to understand.

> 
>> [...]
>> diff --git a/kernel/entry/common.c b/kernel/entry/common.c
>> index b82032777310..4aa9656fa1b4 100644
>> --- a/kernel/entry/common.c
>> +++ b/kernel/entry/common.c
>> @@ -142,6 +142,20 @@ noinstr irqentry_state_t irqentry_enter(struct
>> pt_regs *regs)
>>       return ret;
>>   }
>>   +/**
>> + * arch_irqentry_exit_need_resched - Architecture specific need
>> resched function
>> + *
>> + * Invoked from raw_irqentry_exit_cond_resched() to check if need
>> resched.
> Very nit : "to check if resched is needed." ?

This is good.

>> + * Defaults return true.
>> + *
>> + * The main purpose is to permit arch to skip preempt a task from an
>> IRQ.
> If feel that "to avoid preemption of a task" instead of "to skip preempt
> a task"
> would make this much clearer, what do you think ?

Yes, this is clearer.

>> + */
>> +static inline bool arch_irqentry_exit_need_resched(void);
>> +
>> +#ifndef arch_irqentry_exit_need_resched
>> +static inline bool arch_irqentry_exit_need_resched(void) { return
>> true; }
>> +#endif
>> +
> 
> I've had some trouble reviewing this patch : on the one hand because
> I didn't notice `arch_irqentry_exit_need_resched()` was added in
> the common entry code, which is on me !
> On the other hand, I felt that the patch itself was a bit disconnected :
> we add `arch_irqentry_exit_need_resched()` in the common entry code,
> with a default NOP, but in the same function we add to arm64,
> while mentioning that this is for arm64's additional checks,
> which we only implement in patch 7.
> 
> Would it make sense to move the `arch_irqentry_exit_need_resched()`
> part of the patch to patch 7, so that the introduction and
> arch-specific implementation appear together ?
> To me it seems easier to wrap my head around, as it would look like
> "Move arm64 to generic entry, but it does additional checks : add a new
> arch-specific function controlling re-scheduling, defaulting to true,
> and implement it for arm64". I feel it could help making patch 7's
> commit message clearer as well.
> 
> From what I gathered on the archive `arch_irqentry_exit_need_resched()`
> being added here was suggested previously, so others might not have the
> same opinion.
> 
> Maybe improving the commit message and comment for this would be enough
> as well, as per my suggestions above.
> 
> 
> Otherwise the changes make sense and I don't see any functional issues !
> 
> Thanks,
> Ada
> 
> 


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -next v7 5/7] arm64: entry: Refactor preempt_schedule_irq() check code
  2025-08-05 15:06   ` Ada Couprie Diaz
  2025-08-06  6:26     ` Jinjie Ruan
@ 2025-08-06  6:39     ` Jinjie Ruan
  2025-08-11 16:02       ` Ada Couprie Diaz
  1 sibling, 1 reply; 33+ messages in thread
From: Jinjie Ruan @ 2025-08-06  6:39 UTC (permalink / raw)
  To: Ada Couprie Diaz
  Cc: catalin.marinas, will, oleg, sstabellini, mark.rutland, puranjay,
	broonie, mbenes, ryan.roberts, akpm, chenl311, anshuman.khandual,
	kristina.martsenko, liaochang1, ardb, leitao, linux-arm-kernel,
	linux-kernel, xen-devel



On 2025/8/5 23:06, Ada Couprie Diaz wrote:
> Hi Jinjie,
> 
> On 29/07/2025 02:54, Jinjie Ruan wrote:
>> ARM64 requires an additional check whether to reschedule on return
>> from interrupt. So add arch_irqentry_exit_need_resched() as the default
>> NOP implementation and hook it up into the need_resched() condition in
>> raw_irqentry_exit_cond_resched(). This allows ARM64 to implement
>> the architecture specific version for switching over to
>> the generic entry code.
> I was a bit confused by this, as I didn't see the link with the generic
> entry
> given you implement `raw_irqentry_exit_cond_resched()` in arch/arm64
> as well in this patch : I expected the arm64 implementation to be added.
> I share more thoughts below.
> 
> What do you think about something along those lines ?
> 
>     Compared to the generic entry code, arm64 does additional checks
>     when deciding to reschedule on return from an interrupt.
>     Introduce arch_irqentry_exit_need_resched() in the need_resched()
> condition
>     of the generic raw_irqentry_exit_cond_resched(), with a NOP default.
>     This will allow arm64 to implement its architecture specific checks
> when
>     switching over to the generic entry code.
> 
>> [...]
>> diff --git a/kernel/entry/common.c b/kernel/entry/common.c
>> index b82032777310..4aa9656fa1b4 100644
>> --- a/kernel/entry/common.c
>> +++ b/kernel/entry/common.c
>> @@ -142,6 +142,20 @@ noinstr irqentry_state_t irqentry_enter(struct
>> pt_regs *regs)
>>       return ret;
>>   }
>>   +/**
>> + * arch_irqentry_exit_need_resched - Architecture specific need
>> resched function
>> + *
>> + * Invoked from raw_irqentry_exit_cond_resched() to check if need
>> resched.
> Very nit : "to check if resched is needed." ?
>> + * Defaults return true.
>> + *
>> + * The main purpose is to permit arch to skip preempt a task from an
>> IRQ.
> If feel that "to avoid preemption of a task" instead of "to skip preempt
> a task"
> would make this much clearer, what do you think ?
>> + */
>> +static inline bool arch_irqentry_exit_need_resched(void);
>> +
>> +#ifndef arch_irqentry_exit_need_resched
>> +static inline bool arch_irqentry_exit_need_resched(void) { return
>> true; }
>> +#endif
>> +
> 
> I've had some trouble reviewing this patch : on the one hand because
> I didn't notice `arch_irqentry_exit_need_resched()` was added in
> the common entry code, which is on me !
> On the other hand, I felt that the patch itself was a bit disconnected :
> we add `arch_irqentry_exit_need_resched()` in the common entry code,
> with a default NOP, but in the same function we add to arm64,
> while mentioning that this is for arm64's additional checks,
> which we only implement in patch 7.

Yes, it does.

> 
> Would it make sense to move the `arch_irqentry_exit_need_resched()`
> part of the patch to patch 7, so that the introduction and
> arch-specific implementation appear together ?
> To me it seems easier to wrap my head around, as it would look like
> "Move arm64 to generic entry, but it does additional checks : add a new
> arch-specific function controlling re-scheduling, defaulting to true,
> and implement it for arm64". I feel it could help making patch 7's
> commit message clearer as well.
> 
> From what I gathered on the archive `arch_irqentry_exit_need_resched()`
> being added here was suggested previously, so others might not have the
> same opinion.

Yes, introducing `arch_irqentry_exit_need_resched()` here may help
readers understand the patch's refactoring purpose.

> 
> Maybe improving the commit message and comment for this would be enough
> as well, as per my suggestions above.

Thank you! I'll improve the commit message and comment.

> 
> 
> Otherwise the changes make sense and I don't see any functional issues !
> 
> Thanks,
> Ada
> 
> 


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -next v7 7/7] arm64: entry: Switch to generic IRQ entry
  2025-08-05 15:07   ` Ada Couprie Diaz
@ 2025-08-06  6:59     ` Jinjie Ruan
  0 siblings, 0 replies; 33+ messages in thread
From: Jinjie Ruan @ 2025-08-06  6:59 UTC (permalink / raw)
  To: Ada Couprie Diaz
  Cc: catalin.marinas, will, oleg, sstabellini, mark.rutland, puranjay,
	broonie, mbenes, ryan.roberts, akpm, chenl311, anshuman.khandual,
	kristina.martsenko, liaochang1, ardb, leitao, linux-arm-kernel,
	linux-kernel, xen-devel



On 2025/8/5 23:07, Ada Couprie Diaz wrote:
> Hi Jinjie,
> 
> The code changes look good to me, just a small missing clean up I believe.
> 
> On 29/07/2025 02:54, Jinjie Ruan wrote:
> 
>> Currently, x86, Riscv, Loongarch use the generic entry. Convert arm64
>> to use the generic entry infrastructure from kernel/entry/*.
>> The generic entry makes maintainers' work easier and codes
>> more elegant.
>>
>> Switch arm64 to generic IRQ entry first, which removed duplicate 100+
>> LOC. The next patch series will switch to generic entry completely later.
>> Switch to generic entry in two steps according to Mark's suggestion
>> will make it easier to review.
> 
> I think the commit message could be clearer, especially since now this
> series only moves arm64 to generic IRQ entry and not the complete
> generic entry.
> 
> What do you think of something like below ? It repeats a bit less and
> I think it helps understanding what is going on in this specific
> commit, as you already have details on the larger plans in the cover.
> 
>     Currently, x86, Riscv and Loongarch use the generic entry code,
>     which makes maintainer's work easier and code more elegant.
>     Start converting arm64 to use the generic entry infrastructure
>     from kernel/entry/* by switching it to generic IRQ entry, which
>     removes 100+ lines of duplicate code.
>     arm64 will completely switch to generic entry in a later series.
> 

Yes, this is more concise and accurate, and makes the motivation
clearer.

>> The changes are below:
>>   - Remove *enter_from/exit_to_kernel_mode(), and wrap with generic
>>     irqentry_enter/exit(). Also remove *enter_from/exit_to_user_mode(),
>>     and wrap with generic enter_from/exit_to_user_mode() because they
>>     are exactly the same so far.
> Nit : "so far" can be removed
>>   - Remove arm64_enter/exit_nmi() and use generic irqentry_nmi_enter/exit()
>>     because they're exactly the same, so the temporary arm64 version
>>     irqentry_state can also be removed.
>>
>>   - Remove PREEMPT_DYNAMIC code, as the generic entry does the same thing
>>     if arm64 implements arch_irqentry_exit_need_resched().
> This feels unrelated, given that the part that needs
> `arch_irqentry_exit_need_resched()`
> is called whether or not PREEMPT_DYNAMIC is enabled ?

Yes, the language here needs to be reorganized in conjunction with your
comments from the fifth patch.

> 
> Given my comments on patch 5, I feel that the commit message should
> mention explicitly the implementation of
> `arch_irqentry_exit_need_resched()` and why, even though it was already
> mentioned in patch 5.
> (This is what I was referencing in patch 5 : as I feel it's useful to
> mention again the reasons when implementing it, it doesn't feel too out
> of place to introduce the generic part at the same time. But again, I
> might be wrong here.)
> 
> Then you can have another point explaining that
> `raw_irqentry_exit_cond_resched()` and the PREEMPT_DYNAMIC code are
> removed because they are identical to the generic entry code, similarly
> to your other points.
>> Tested ok with following test cases on Qemu virt platform:
>>   - Perf tests.
>>   - Different `dynamic preempt` mode switch.
>>   - Pseudo NMI tests.
>>   - Stress-ng CPU stress test.
>>   - MTE test case in
>> Documentation/arch/arm64/memory-tagging-extension.rst
>>     and all test cases in tools/testing/selftests/arm64/mte/*.
> Nit : I'm not sure if the commit message is the best place for this,
> given you already gave some details in the cover ?
> But I don't have much experience here, so I'll leave it up to you and
> others !

Yes, this can be removed as the cover letter already has it.

>> Suggested-by: Mark Rutland <mark.rutland@arm.com>
>> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
>> ---
>> [...]
>> diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
>> index db3f972f8cd9..1110eeb21f57 100644
>> --- a/arch/arm64/kernel/signal.c
>> +++ b/arch/arm64/kernel/signal.c
>> @@ -9,6 +9,7 @@
>>   #include <linux/cache.h>
>>   #include <linux/compat.h>
>>   #include <linux/errno.h>
>> +#include <linux/irq-entry-common.h>
>>   #include <linux/kernel.h>
>>   #include <linux/signal.h>
>>   #include <linux/freezer.h>
>> @@ -1576,7 +1577,7 @@ static void handle_signal(struct ksignal *ksig, struct pt_regs *regs)
>>    * the kernel can handle, and then we build all the user-level signal handling
>>    * stack-frames in one go after that.
>>    */
>> -void do_signal(struct pt_regs *regs)
>> +void arch_do_signal_or_restart(struct pt_regs *regs)
> Given that `do_signal(struct pt_regs *regs)` is declared in
> `arch/arm64/include/asm/exception.h`, and that no users of
> `do_signal()` remain, I think the declaration should be removed there.

Good catch! I'll remove it.

> 
> Thanks,
> Ada
> 


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -next v7 0/7] arm64: entry: Convert to generic irq entry
  2025-08-05 15:08 ` [PATCH -next v7 0/7] arm64: entry: Convert to generic irq entry Ada Couprie Diaz
@ 2025-08-06  8:11   ` Jinjie Ruan
  2025-08-11 16:03     ` Ada Couprie Diaz
  0 siblings, 1 reply; 33+ messages in thread
From: Jinjie Ruan @ 2025-08-06  8:11 UTC (permalink / raw)
  To: Ada Couprie Diaz
  Cc: catalin.marinas, will, oleg, sstabellini, mark.rutland, puranjay,
	broonie, mbenes, ryan.roberts, akpm, chenl311, anshuman.khandual,
	kristina.martsenko, liaochang1, ardb, leitao, linux-arm-kernel,
	linux-kernel, xen-devel



On 2025/8/5 23:08, Ada Couprie Diaz wrote:
> Hi Jinjie,
> 
> On 29/07/2025 02:54, Jinjie Ruan wrote:
> 
>> Since commit a70e9f647f50 ("entry: Split generic entry into generic
>> exception and syscall entry") split the generic entry into generic irq
>> entry and generic syscall entry, it is time to convert arm64 to use
>> the generic irq entry. And ARM64 will be completely converted to generic
>> entry in the upcoming patch series.
> Note : I had to manually cherry-pick a70e9f647f50 when pulling the series
> on top of the Linux Arm Kernel for-next/core branch, but there might be
> something I'm missing here.
>>

It seems that it is now in mainline v6.16-rc1 and linux-next, but not
in the Linux Arm Kernel for-next/core branch.

[...]

> I'll spend some time testing the series now, specifically given patch 6's
> changes, but other than that everything I saw made sense and didn't look
> like it would be of concern to me.

Thank you for the test and review.

> 
> Thanks,
> Ada
> 


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -next v7 2/7] arm64: entry: Refactor the entry and exit for exceptions from EL1
  2025-08-06  2:49     ` Jinjie Ruan
@ 2025-08-11 16:01       ` Ada Couprie Diaz
  0 siblings, 0 replies; 33+ messages in thread
From: Ada Couprie Diaz @ 2025-08-11 16:01 UTC (permalink / raw)
  To: Jinjie Ruan
  Cc: mark.rutland, sstabellini, puranjay, anshuman.khandual,
	catalin.marinas, liaochang1, oleg, kristina.martsenko,
	linux-kernel, broonie, chenl311, xen-devel, leitao, ryan.roberts,
	akpm, mbenes, will, ardb, linux-arm-kernel

On 06/08/2025 03:49, Jinjie Ruan wrote:

> On 2025/8/5 23:06, Ada Couprie Diaz wrote:
>> Hi,
>>
>> On 29/07/2025 02:54, Jinjie Ruan wrote:
>>
>>> [...]
>>> diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
>>> index 8e798f46ad28..97e0741abde1 100644
>>> --- a/arch/arm64/kernel/entry-common.c
>>> +++ b/arch/arm64/kernel/entry-common.c
>>> [...]
>>> @@ -475,73 +497,81 @@ UNHANDLED(el1t, 64, error)
>>>  static void noinstr el1_abort(struct pt_regs *regs, unsigned long esr)
>>>  {
>>>      unsigned long far = read_sysreg(far_el1);
>>> +    arm64_irqentry_state_t state;
>>>
>>> -    enter_from_kernel_mode(regs);
>>> +    state = enter_from_kernel_mode(regs);
>> Nit: There are some inconsistencies: some functions split state's
>> declaration and assignment (like el1_abort here), while others do it
>> on the same line (el1_undef() below, for example).
>> In some cases that is welcome, as the entry function is called after
>> some other work, but here for example it doesn't seem to be beneficial ?
> Both methods keep the modification to `enter_from_kernel_mode()` on
> the same line as the original code, which facilitates review.
>
> I think it is also fine to do it on the same line here, which saves one
> line of code; which method is better may be a matter of personal opinion.
Fair point !
Then, as mentioned previously, I'm happy to leave my Reviewed-By.
>>>      local_daif_inherit(regs);
>>>      do_mem_abort(far, esr, regs);
>>>      local_daif_mask();
>>> -    exit_to_kernel_mode(regs);
>>> +    exit_to_kernel_mode(regs, state);
>>>  }
>>>
>>>  static void noinstr el1_pc(struct pt_regs *regs, unsigned long esr)
>>>  {
>>>      unsigned long far = read_sysreg(far_el1);
>>> +    arm64_irqentry_state_t state;
>>>
>>> -    enter_from_kernel_mode(regs);
>>> +    state = enter_from_kernel_mode(regs);
>>>      local_daif_inherit(regs);
>>>      do_sp_pc_abort(far, esr, regs);
>>>      local_daif_mask();
>>> -    exit_to_kernel_mode(regs);
>>> +    exit_to_kernel_mode(regs, state);
>>>  }
>>>
>>>  static void noinstr el1_undef(struct pt_regs *regs, unsigned long esr)
>>>  {
>>> -    enter_from_kernel_mode(regs);
>>> +    arm64_irqentry_state_t state = enter_from_kernel_mode(regs);
>>> +
>>>      local_daif_inherit(regs);
>>>      do_el1_undef(regs, esr);
>>>      local_daif_mask();
>>> -    exit_to_kernel_mode(regs);
>>> +    exit_to_kernel_mode(regs, state);
>>>  }
>>>
>>> [...]
>> Other than the small nit:
>> Reviewed-by: Ada Couprie Diaz <ada.coupriediaz@arm.com>


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -next v7 5/7] arm64: entry: Refactor preempt_schedule_irq() check code
  2025-08-06  6:39     ` Jinjie Ruan
@ 2025-08-11 16:02       ` Ada Couprie Diaz
  0 siblings, 0 replies; 33+ messages in thread
From: Ada Couprie Diaz @ 2025-08-11 16:02 UTC (permalink / raw)
  To: Jinjie Ruan
  Cc: mark.rutland, sstabellini, puranjay, anshuman.khandual,
	catalin.marinas, liaochang1, oleg, kristina.martsenko,
	linux-kernel, broonie, chenl311, xen-devel, leitao, ryan.roberts,
	akpm, mbenes, will, ardb, linux-arm-kernel

On 06/08/2025 07:39, Jinjie Ruan wrote:

> On 2025/8/5 23:06, Ada Couprie Diaz wrote:
>> Hi Jinjie,
>>
>> On 29/07/2025 02:54, Jinjie Ruan wrote:
>>> ARM64 requires an additional check whether to reschedule on return
>>> from interrupt. So add arch_irqentry_exit_need_resched() as the default
>>> NOP implementation and hook it up into the need_resched() condition in
>>> raw_irqentry_exit_cond_resched(). This allows ARM64 to implement
>>> the architecture specific version for switching over to
>>> the generic entry code.
>>> [...]
>> I've had some trouble reviewing this patch : on the one hand because
>> I didn't notice `arch_irqentry_exit_need_resched()` was added in
>> the common entry code, which is on me !
>> On the other hand, I felt that the patch itself was a bit disconnected :
>> we add `arch_irqentry_exit_need_resched()` in the common entry code,
>> with a default NOP, but in the same patch we add the function to arm64,
>> while mentioning that this is for arm64's additional checks,
>> which we only implement in patch 7.
> Yes, it does.
>
>> Would it make sense to move the `arch_irqentry_exit_need_resched()`
>> part of the patch to patch 7, so that the introduction and
>> arch-specific implementation appear together ?
>> To me it seems easier to wrap my head around, as it would look like
>> "Move arm64 to generic entry, but it does additional checks : add a new
>> arch-specific function controlling re-scheduling, defaulting to true,
>> and implement it for arm64". I feel it could help making patch 7's
>> commit message clearer as well.
>>
>>  From what I gathered on the archive `arch_irqentry_exit_need_resched()`
>> being added here was suggested previously, so others might not have the
>> same opinion.
> Yes, introducing `arch_irqentry_exit_need_resched()` here may help
> readers understand the patch's refactoring purpose.
I can see that as well.
I shared my opinion in case it could be useful, but as I mentioned
in my reply to the cover : it's not a big issue and I'm happy for
`arch_irqentry_exit_need_resched()` to be implemented here if that
makes more sense !
>> Maybe improving the commit message and comment for this would be enough
>> as well, as per my suggestions above.
> Thank you! I'll improve the commit message and comment.
>
My pleasure !
Ada


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -next v7 5/7] arm64: entry: Refactor preempt_schedule_irq() check code
  2025-07-29  1:54 ` [PATCH -next v7 5/7] arm64: entry: Refactor preempt_schedule_irq() check code Jinjie Ruan
  2025-08-05 15:06   ` Ada Couprie Diaz
@ 2025-08-11 16:02   ` Ada Couprie Diaz
  2025-08-14  8:49     ` Jinjie Ruan
  2025-08-12 11:13   ` Mark Rutland
  2 siblings, 1 reply; 33+ messages in thread
From: Ada Couprie Diaz @ 2025-08-11 16:02 UTC (permalink / raw)
  To: Jinjie Ruan
  Cc: mark.rutland, sstabellini, puranjay, anshuman.khandual,
	catalin.marinas, liaochang1, oleg, kristina.martsenko,
	linux-kernel, broonie, chenl311, xen-devel, leitao, ryan.roberts,
	akpm, mbenes, will, ardb, linux-arm-kernel

On 29/07/2025 02:54, Jinjie Ruan wrote:

> ARM64 requires an additional check whether to reschedule on return
> from interrupt. So add arch_irqentry_exit_need_resched() as the default
> NOP implementation and hook it up into the need_resched() condition in
> raw_irqentry_exit_cond_resched(). This allows ARM64 to implement
> the architecture specific version for switching over to
> the generic entry code.
>
> To align the structure of the code with irqentry_exit_cond_resched()
> from the generic entry code, hoist the need_irq_preemption()
> and IS_ENABLED() check earlier. And different preemption check functions
> are defined based on whether dynamic preemption is enabled.
>
> Suggested-by: Mark Rutland <mark.rutland@arm.com>
> Suggested-by: Kevin Brodsky <kevin.brodsky@arm.com>
> Suggested-by: Thomas Gleixner <tglx@linutronix.de>
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
> ---
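To make the second paragraph above concrete: with this patch the dispatch
is selected at build time, and the call site reduces to
"if (IS_ENABLED(CONFIG_PREEMPTION)) irqentry_exit_cond_resched();".
A condensed sketch of the asm/preempt.h side (the full hunk is quoted
further down the thread):

void raw_irqentry_exit_cond_resched(void);

#ifdef CONFIG_PREEMPT_DYNAMIC
/* Selected at boot via a static key (preempt=none/voluntary/full) */
void dynamic_irqentry_exit_cond_resched(void);
#define irqentry_exit_cond_resched()	dynamic_irqentry_exit_cond_resched()
#else
/* Fixed at build time by CONFIG_PREEMPTION */
#define irqentry_exit_cond_resched()	raw_irqentry_exit_cond_resched()
#endif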
Unrelated to the other thread : I noticed that compiling this patch
with `allnoconfig` would fail :
- `raw_irqentry_exit_cond_resched` has no previous prototype,
   as it is defined within `#ifdef CONFIG_PREEMPTION`
- `irqentry_exit_cond_resched()` is not declared, as it is also within
   `#ifdef CONFIG_PREEMPTION`

The patch below fixes the issue, but introduces merge conflicts in
patches 6 and 7, plus the `#ifdef` needs to be moved accordingly
in patch 6 and the empty "without preemption" `irqentry_exit_cond_resched()`
needs to be removed in patch 7.

I hope this can be useful,
Ada

---
diff --git a/arch/arm64/include/asm/preempt.h b/arch/arm64/include/asm/preempt.h
index 0f0ba250efe8..d9aba8b1e466 100644
--- a/arch/arm64/include/asm/preempt.h
+++ b/arch/arm64/include/asm/preempt.h
@@ -103,6 +103,8 @@ void dynamic_irqentry_exit_cond_resched(void);
  #define irqentry_exit_cond_resched() raw_irqentry_exit_cond_resched()

  #endif /* CONFIG_PREEMPT_DYNAMIC */
+#else /* CONFIG_PREEMPTION */
+#define irqentry_exit_cond_resched() {}
  #endif /* CONFIG_PREEMPTION */

  #endif /* __ASM_PREEMPT_H */
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 4f92664fd46c..abd7a315145e 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -661,6 +661,7 @@ static __always_inline void __el1_pnmi(struct pt_regs *regs,
         arm64_exit_nmi(regs, state);
  }

+#ifdef CONFIG_PREEMPTION
  void raw_irqentry_exit_cond_resched(void)
  {
         if (!preempt_count()) {
@@ -668,6 +669,7 @@ void raw_irqentry_exit_cond_resched(void)
                         preempt_schedule_irq();
         }
  }
+#endif

  #ifdef CONFIG_PREEMPT_DYNAMIC
  DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
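The empty `{}` stub above matters because IS_ENABLED() is an ordinary C
expression, not a preprocessor guard: the dead branch at the call site is
still parsed, so the macro must expand to something even with
CONFIG_PREEMPTION=n. A standalone model of the pattern, where exit_path()
is a hypothetical stand-in for __el1_irq():

#define IS_ENABLED(option)		0	/* models CONFIG_PREEMPTION=n */
#define irqentry_exit_cond_resched()	{}	/* the empty stub from the patch */

static void exit_path(void)
{
	/*
	 * The condition folds to 0 and the branch is eliminated, but the
	 * compiler still parses its body, so the macro must be defined.
	 */
	if (IS_ENABLED(CONFIG_PREEMPTION))
		irqentry_exit_cond_resched();
}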



^ permalink raw reply related	[flat|nested] 33+ messages in thread

* Re: [PATCH -next v7 0/7] arm64: entry: Convert to generic irq entry
  2025-08-06  8:11   ` Jinjie Ruan
@ 2025-08-11 16:03     ` Ada Couprie Diaz
  2025-08-14  9:37       ` Jinjie Ruan
  0 siblings, 1 reply; 33+ messages in thread
From: Ada Couprie Diaz @ 2025-08-11 16:03 UTC (permalink / raw)
  To: Jinjie Ruan
  Cc: mark.rutland, sstabellini, puranjay, anshuman.khandual,
	catalin.marinas, liaochang1, oleg, kristina.martsenko,
	linux-kernel, broonie, chenl311, xen-devel, leitao, ryan.roberts,
	akpm, mbenes, will, ardb, linux-arm-kernel

On 06/08/2025 09:11, Jinjie Ruan wrote:

> On 2025/8/5 23:08, Ada Couprie Diaz wrote:
>> Hi Jinjie,
>>
>> On 29/07/2025 02:54, Jinjie Ruan wrote:
>>
>>> Since commit a70e9f647f50 ("entry: Split generic entry into generic
>>> exception and syscall entry") split the generic entry into generic irq
>>> entry and generic syscall entry, it is time to convert arm64 to use
>>> the generic irq entry. And ARM64 will be completely converted to generic
>>> entry in the upcoming patch series.
>> Note : I had to manually cherry-pick a70e9f647f50 when pulling the series
>> on top of the Linux Arm Kernel for-next/core branch, but there might be
>> something I'm missing here.
> It seems that it is now in mainline v6.16-rc1 and linux-next, but not
> in the Linux Arm Kernel for-next/core branch.
You're right, I misinterpreted the `-next` of the subject, thanks for the
clarification !
>> I'll spend some time testing the series now, specifically given patch 6's
>> changes, but other than that everything I saw made sense and didn't look
>> like it would be of concern to me.
> Thank you for the test and review.

I've spent some time testing the series with a few different configurations,
including PREEMPT_RT, pNMI, various lockup and hang detection options,
UBSAN, shadow call stack, and various CONFIG_DEBUG_XYZ (focused on locks
and IRQs), on both hardware (AMD Seattle) and KVM guests.

I tried to generate a diverse set of interrupts (via debug exceptions,
page faults, perf, kprobes, swapping, OoM) while loading the system with
different workloads, some generating a lot of context switches : hackbench
and signaltest from rt-tests[0], and mc-crusher[1], a memcached stress-test.

I did not have any issues, nor any warning reported by the various
debug features during all my hours of testing, so it looks good !

Tested-by: Ada Couprie Diaz <ada.coupriediaz@arm.com>

Thank you for the series !
Ada

[0]: https://git.kernel.org/pub/scm/utils/rt-tests/rt-tests.git/
[1]: https://github.com/memcached/mc-crusher



^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -next v7 2/7] arm64: entry: Refactor the entry and exit for exceptions from EL1
  2025-07-29  1:54 ` [PATCH -next v7 2/7] arm64: entry: Refactor the entry and exit for exceptions from EL1 Jinjie Ruan
  2025-08-05 15:06   ` Ada Couprie Diaz
@ 2025-08-12 11:01   ` Mark Rutland
  1 sibling, 0 replies; 33+ messages in thread
From: Mark Rutland @ 2025-08-12 11:01 UTC (permalink / raw)
  To: Jinjie Ruan
  Cc: sstabellini, puranjay, anshuman.khandual, catalin.marinas,
	liaochang1, oleg, kristina.martsenko, linux-kernel, broonie,
	chenl311, xen-devel, leitao, ryan.roberts, akpm, mbenes, will,
	ardb, linux-arm-kernel

Hi Jinjie,

On Tue, Jul 29, 2025 at 09:54:51AM +0800, Jinjie Ruan wrote:
> The generic entry code uses irqentry_state_t to track lockdep and RCU
> state across exception entry and return. For historical reasons, arm64
> embeds similar fields within its pt_regs structure.
> 
> In preparation for moving arm64 over to the generic entry code, pull
> these fields out of arm64's pt_regs, and use a separate structure,
> matching the style of the generic entry code.
> 
> No functional changes.
> 
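For readers following along: the "separate structure" is the
arm64_irqentry_state_t used throughout the hunks below. Assuming it mirrors
the generic irqentry_state_t, as the commit message implies, it is
essentially a sketch like:

typedef struct {
	union {
		bool	exit_rcu;	/* entered from an RCU-idle context */
		bool	lockdep;	/* NMI path: lockdep IRQ state to restore */
	};
} arm64_irqentry_state_t;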
> Suggested-by: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>

One minor formatting nit below, but aside from that this looks
great, and with that fixed up:

Acked-by: Mark Rutland <mark.rutland@arm.com>

[...]

> @@ -475,73 +497,81 @@ UNHANDLED(el1t, 64, error)
>  static void noinstr el1_abort(struct pt_regs *regs, unsigned long esr)
>  {
>  	unsigned long far = read_sysreg(far_el1);
> +	arm64_irqentry_state_t state;
>  
> -	enter_from_kernel_mode(regs);
> +	state = enter_from_kernel_mode(regs);
>  	local_daif_inherit(regs);
>  	do_mem_abort(far, esr, regs);
>  	local_daif_mask();
> -	exit_to_kernel_mode(regs);
> +	exit_to_kernel_mode(regs, state);
>  }
>  
>  static void noinstr el1_pc(struct pt_regs *regs, unsigned long esr)
>  {
>  	unsigned long far = read_sysreg(far_el1);
> +	arm64_irqentry_state_t state;
>  
> -	enter_from_kernel_mode(regs);
> +	state = enter_from_kernel_mode(regs);
>  	local_daif_inherit(regs);
>  	do_sp_pc_abort(far, esr, regs);
>  	local_daif_mask();
> -	exit_to_kernel_mode(regs);
> +	exit_to_kernel_mode(regs, state);
>  }
>  
>  static void noinstr el1_undef(struct pt_regs *regs, unsigned long esr)
>  {
> -	enter_from_kernel_mode(regs);
> +	arm64_irqentry_state_t state = enter_from_kernel_mode(regs);
> +
>  	local_daif_inherit(regs);
>  	do_el1_undef(regs, esr);
>  	local_daif_mask();
> -	exit_to_kernel_mode(regs);
> +	exit_to_kernel_mode(regs, state);
>  }

I'd prefer if we consistently defined 'state' on a separate line, before
the main block consisting of:

	state = enter_from_kernel_mode(regs);
	local_daif_inherit(regs);
	do_el1_undef(regs, esr);
	local_daif_mask();
	exit_to_kernel_mode(regs, state);

... since that way the enter/exit functions clearly enclose the whole
block, which isn't as clear when there's a line gap between
enter_from_kernel_mode() and the rest of the block.

That would also be more consistent with what we do for functions that
need to read other registers (e.g. el1_abort() and el1_pc() above).

If that could be applied consistently here and below, that'd be great.

Mark.

>  static void noinstr el1_bti(struct pt_regs *regs, unsigned long esr)
>  {
> -	enter_from_kernel_mode(regs);
> +	arm64_irqentry_state_t state = enter_from_kernel_mode(regs);
> +
>  	local_daif_inherit(regs);
>  	do_el1_bti(regs, esr);
>  	local_daif_mask();
> -	exit_to_kernel_mode(regs);
> +	exit_to_kernel_mode(regs, state);
>  }
>  
>  static void noinstr el1_gcs(struct pt_regs *regs, unsigned long esr)
>  {
> -	enter_from_kernel_mode(regs);
> +	arm64_irqentry_state_t state = enter_from_kernel_mode(regs);
> +
>  	local_daif_inherit(regs);
>  	do_el1_gcs(regs, esr);
>  	local_daif_mask();
> -	exit_to_kernel_mode(regs);
> +	exit_to_kernel_mode(regs, state);
>  }
>  
>  static void noinstr el1_mops(struct pt_regs *regs, unsigned long esr)
>  {
> -	enter_from_kernel_mode(regs);
> +	arm64_irqentry_state_t state = enter_from_kernel_mode(regs);
> +
>  	local_daif_inherit(regs);
>  	do_el1_mops(regs, esr);
>  	local_daif_mask();
> -	exit_to_kernel_mode(regs);
> +	exit_to_kernel_mode(regs, state);
>  }
>  
>  static void noinstr el1_breakpt(struct pt_regs *regs, unsigned long esr)
>  {
> -	arm64_enter_el1_dbg(regs);
> +	arm64_irqentry_state_t state = arm64_enter_el1_dbg(regs);
> +
>  	debug_exception_enter(regs);
>  	do_breakpoint(esr, regs);
>  	debug_exception_exit(regs);
> -	arm64_exit_el1_dbg(regs);
> +	arm64_exit_el1_dbg(regs, state);
>  }
>  
>  static void noinstr el1_softstp(struct pt_regs *regs, unsigned long esr)
>  {
> -	arm64_enter_el1_dbg(regs);
> +	arm64_irqentry_state_t state = arm64_enter_el1_dbg(regs);
> +
>  	if (!cortex_a76_erratum_1463225_debug_handler(regs)) {
>  		debug_exception_enter(regs);
>  		/*
> @@ -554,37 +584,40 @@ static void noinstr el1_softstp(struct pt_regs *regs, unsigned long esr)
>  			do_el1_softstep(esr, regs);
>  		debug_exception_exit(regs);
>  	}
> -	arm64_exit_el1_dbg(regs);
> +	arm64_exit_el1_dbg(regs, state);
>  }
>  
>  static void noinstr el1_watchpt(struct pt_regs *regs, unsigned long esr)
>  {
>  	/* Watchpoints are the only debug exception to write FAR_EL1 */
>  	unsigned long far = read_sysreg(far_el1);
> +	arm64_irqentry_state_t state;
>  
> -	arm64_enter_el1_dbg(regs);
> +	state = arm64_enter_el1_dbg(regs);
>  	debug_exception_enter(regs);
>  	do_watchpoint(far, esr, regs);
>  	debug_exception_exit(regs);
> -	arm64_exit_el1_dbg(regs);
> +	arm64_exit_el1_dbg(regs, state);
>  }
>  
>  static void noinstr el1_brk64(struct pt_regs *regs, unsigned long esr)
>  {
> -	arm64_enter_el1_dbg(regs);
> +	arm64_irqentry_state_t state = arm64_enter_el1_dbg(regs);
> +
>  	debug_exception_enter(regs);
>  	do_el1_brk64(esr, regs);
>  	debug_exception_exit(regs);
> -	arm64_exit_el1_dbg(regs);
> +	arm64_exit_el1_dbg(regs, state);
>  }
>  
>  static void noinstr el1_fpac(struct pt_regs *regs, unsigned long esr)
>  {
> -	enter_from_kernel_mode(regs);
> +	arm64_irqentry_state_t state = enter_from_kernel_mode(regs);
> +
>  	local_daif_inherit(regs);
>  	do_el1_fpac(regs, esr);
>  	local_daif_mask();
> -	exit_to_kernel_mode(regs);
> +	exit_to_kernel_mode(regs, state);
>  }
>  
>  asmlinkage void noinstr el1h_64_sync_handler(struct pt_regs *regs)
> @@ -639,15 +672,16 @@ asmlinkage void noinstr el1h_64_sync_handler(struct pt_regs *regs)
>  static __always_inline void __el1_pnmi(struct pt_regs *regs,
>  				       void (*handler)(struct pt_regs *))
>  {
> -	arm64_enter_nmi(regs);
> +	arm64_irqentry_state_t state = arm64_enter_nmi(regs);
> +
>  	do_interrupt_handler(regs, handler);
> -	arm64_exit_nmi(regs);
> +	arm64_exit_nmi(regs, state);
>  }
>  
>  static __always_inline void __el1_irq(struct pt_regs *regs,
>  				      void (*handler)(struct pt_regs *))
>  {
> -	enter_from_kernel_mode(regs);
> +	arm64_irqentry_state_t state = enter_from_kernel_mode(regs);
>  
>  	irq_enter_rcu();
>  	do_interrupt_handler(regs, handler);
> @@ -655,7 +689,7 @@ static __always_inline void __el1_irq(struct pt_regs *regs,
>  
>  	arm64_preempt_schedule_irq();
>  
> -	exit_to_kernel_mode(regs);
> +	exit_to_kernel_mode(regs, state);
>  }
>  static void noinstr el1_interrupt(struct pt_regs *regs,
>  				  void (*handler)(struct pt_regs *))
> @@ -681,11 +715,12 @@ asmlinkage void noinstr el1h_64_fiq_handler(struct pt_regs *regs)
>  asmlinkage void noinstr el1h_64_error_handler(struct pt_regs *regs)
>  {
>  	unsigned long esr = read_sysreg(esr_el1);
> +	arm64_irqentry_state_t state;
>  
>  	local_daif_restore(DAIF_ERRCTX);
> -	arm64_enter_nmi(regs);
> +	state = arm64_enter_nmi(regs);
>  	do_serror(regs, esr);
> -	arm64_exit_nmi(regs);
> +	arm64_exit_nmi(regs, state);
>  }
>  
>  static void noinstr el0_da(struct pt_regs *regs, unsigned long esr)
> @@ -997,12 +1032,13 @@ asmlinkage void noinstr el0t_64_fiq_handler(struct pt_regs *regs)
>  static void noinstr __el0_error_handler_common(struct pt_regs *regs)
>  {
>  	unsigned long esr = read_sysreg(esr_el1);
> +	arm64_irqentry_state_t state;
>  
>  	enter_from_user_mode(regs);
>  	local_daif_restore(DAIF_ERRCTX);
> -	arm64_enter_nmi(regs);
> +	state = arm64_enter_nmi(regs);
>  	do_serror(regs, esr);
> -	arm64_exit_nmi(regs);
> +	arm64_exit_nmi(regs, state);
>  	local_daif_restore(DAIF_PROCCTX);
>  	exit_to_user_mode(regs);
>  }
> @@ -1122,6 +1158,7 @@ asmlinkage void noinstr __noreturn handle_bad_stack(struct pt_regs *regs)
>  asmlinkage noinstr unsigned long
>  __sdei_handler(struct pt_regs *regs, struct sdei_registered_event *arg)
>  {
> +	arm64_irqentry_state_t state;
>  	unsigned long ret;
>  
>  	/*
> @@ -1146,9 +1183,9 @@ __sdei_handler(struct pt_regs *regs, struct sdei_registered_event *arg)
>  	else if (cpu_has_pan())
>  		set_pstate_pan(0);
>  
> -	arm64_enter_nmi(regs);
> +	state = arm64_enter_nmi(regs);
>  	ret = do_sdei_event(regs, arg);
> -	arm64_exit_nmi(regs);
> +	arm64_exit_nmi(regs, state);
>  
>  	return ret;
>  }
> -- 
> 2.34.1
> 


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -next v7 5/7] arm64: entry: Refactor preempt_schedule_irq() check code
  2025-07-29  1:54 ` [PATCH -next v7 5/7] arm64: entry: Refactor preempt_schedule_irq() check code Jinjie Ruan
  2025-08-05 15:06   ` Ada Couprie Diaz
  2025-08-11 16:02   ` Ada Couprie Diaz
@ 2025-08-12 11:13   ` Mark Rutland
  2025-08-14  9:31     ` Jinjie Ruan
  2 siblings, 1 reply; 33+ messages in thread
From: Mark Rutland @ 2025-08-12 11:13 UTC (permalink / raw)
  To: Jinjie Ruan
  Cc: sstabellini, puranjay, anshuman.khandual, catalin.marinas,
	liaochang1, oleg, kristina.martsenko, linux-kernel, broonie,
	chenl311, xen-devel, leitao, ryan.roberts, akpm, mbenes, will,
	ardb, linux-arm-kernel

On Tue, Jul 29, 2025 at 09:54:54AM +0800, Jinjie Ruan wrote:
> ARM64 requires an additional check whether to reschedule on return
> from interrupt. So add arch_irqentry_exit_need_resched() as the default
> NOP implementation and hook it up into the need_resched() condition in
> raw_irqentry_exit_cond_resched(). This allows ARM64 to implement
> the architecture specific version for switching over to
> the generic entry code.
> 
> To align the structure of the code with irqentry_exit_cond_resched()
> from the generic entry code, hoist the need_irq_preemption()
> and IS_ENABLED() check earlier. And different preemption check functions
> are defined based on whether dynamic preemption is enabled.
> 
> Suggested-by: Mark Rutland <mark.rutland@arm.com>
> Suggested-by: Kevin Brodsky <kevin.brodsky@arm.com>
> Suggested-by: Thomas Gleixner <tglx@linutronix.de>
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
> ---
>  arch/arm64/include/asm/preempt.h |  4 ++++
>  arch/arm64/kernel/entry-common.c | 35 ++++++++++++++++++--------------
>  kernel/entry/common.c            | 16 ++++++++++++++-
>  3 files changed, 39 insertions(+), 16 deletions(-)

Can you please split the change to kernel/entry/common.c into a separate
patch? That doesn't depend on the arm64-specific changes, and it'll make
it easier to handle any conflicts when merging this.

Mark.

> 
> diff --git a/arch/arm64/include/asm/preempt.h b/arch/arm64/include/asm/preempt.h
> index 0159b625cc7f..0f0ba250efe8 100644
> --- a/arch/arm64/include/asm/preempt.h
> +++ b/arch/arm64/include/asm/preempt.h
> @@ -85,6 +85,7 @@ static inline bool should_resched(int preempt_offset)
>  void preempt_schedule(void);
>  void preempt_schedule_notrace(void);
>  
> +void raw_irqentry_exit_cond_resched(void);
>  #ifdef CONFIG_PREEMPT_DYNAMIC
>  
>  DECLARE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
> @@ -92,11 +93,14 @@ void dynamic_preempt_schedule(void);
>  #define __preempt_schedule()		dynamic_preempt_schedule()
>  void dynamic_preempt_schedule_notrace(void);
>  #define __preempt_schedule_notrace()	dynamic_preempt_schedule_notrace()
> +void dynamic_irqentry_exit_cond_resched(void);
> +#define irqentry_exit_cond_resched()	dynamic_irqentry_exit_cond_resched()
>  
>  #else /* CONFIG_PREEMPT_DYNAMIC */
>  
>  #define __preempt_schedule()		preempt_schedule()
>  #define __preempt_schedule_notrace()	preempt_schedule_notrace()
> +#define irqentry_exit_cond_resched()	raw_irqentry_exit_cond_resched()
>  
>  #endif /* CONFIG_PREEMPT_DYNAMIC */
>  #endif /* CONFIG_PREEMPTION */
> diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
> index 7c2299c1ba79..4f92664fd46c 100644
> --- a/arch/arm64/kernel/entry-common.c
> +++ b/arch/arm64/kernel/entry-common.c
> @@ -285,19 +285,8 @@ static void noinstr arm64_exit_el1_dbg(struct pt_regs *regs,
>  		lockdep_hardirqs_on(CALLER_ADDR0);
>  }
>  
> -#ifdef CONFIG_PREEMPT_DYNAMIC
> -DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
> -#define need_irq_preemption() \
> -	(static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched))
> -#else
> -#define need_irq_preemption()	(IS_ENABLED(CONFIG_PREEMPTION))
> -#endif
> -
>  static inline bool arm64_preempt_schedule_irq(void)
>  {
> -	if (!need_irq_preemption())
> -		return false;
> -
>  	/*
>  	 * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC
>  	 * priority masking is used the GIC irqchip driver will clear DAIF.IF
> @@ -672,6 +661,24 @@ static __always_inline void __el1_pnmi(struct pt_regs *regs,
>  	arm64_exit_nmi(regs, state);
>  }
>  
> +void raw_irqentry_exit_cond_resched(void)
> +{
> +	if (!preempt_count()) {
> +		if (need_resched() && arm64_preempt_schedule_irq())
> +			preempt_schedule_irq();
> +	}
> +}
> +
> +#ifdef CONFIG_PREEMPT_DYNAMIC
> +DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
> +void dynamic_irqentry_exit_cond_resched(void)
> +{
> +	if (!static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched))
> +		return;
> +	raw_irqentry_exit_cond_resched();
> +}
> +#endif
> +
>  static __always_inline void __el1_irq(struct pt_regs *regs,
>  				      void (*handler)(struct pt_regs *))
>  {
> @@ -681,10 +688,8 @@ static __always_inline void __el1_irq(struct pt_regs *regs,
>  	do_interrupt_handler(regs, handler);
>  	irq_exit_rcu();
>  
> -	if (!preempt_count() && need_resched()) {
> -		if (arm64_preempt_schedule_irq())
> -			preempt_schedule_irq();
> -	}
> +	if (IS_ENABLED(CONFIG_PREEMPTION))
> +		irqentry_exit_cond_resched();
>  
>  	exit_to_kernel_mode(regs, state);
>  }
> diff --git a/kernel/entry/common.c b/kernel/entry/common.c
> index b82032777310..4aa9656fa1b4 100644
> --- a/kernel/entry/common.c
> +++ b/kernel/entry/common.c
> @@ -142,6 +142,20 @@ noinstr irqentry_state_t irqentry_enter(struct pt_regs *regs)
>  	return ret;
>  }
>  
> +/**
> + * arch_irqentry_exit_need_resched - Architecture specific need resched function
> + *
> + * Invoked from raw_irqentry_exit_cond_resched() to check if need resched.
> + * Defaults return true.
> + *
> + * The main purpose is to permit arch to skip preempt a task from an IRQ.
> + */
> +static inline bool arch_irqentry_exit_need_resched(void);
> +
> +#ifndef arch_irqentry_exit_need_resched
> +static inline bool arch_irqentry_exit_need_resched(void) { return true; }
> +#endif
> +
>  void raw_irqentry_exit_cond_resched(void)
>  {
>  	if (!preempt_count()) {
> @@ -149,7 +163,7 @@ void raw_irqentry_exit_cond_resched(void)
>  		rcu_irq_exit_check_preempt();
>  		if (IS_ENABLED(CONFIG_DEBUG_ENTRY))
>  			WARN_ON_ONCE(!on_thread_stack());
> -		if (need_resched())
> +		if (need_resched() && arch_irqentry_exit_need_resched())
>  			preempt_schedule_irq();
>  	}
>  }
> -- 
> 2.34.1
> 


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -next v7 0/7] arm64: entry: Convert to generic irq entry
  2025-07-29  1:54 [PATCH -next v7 0/7] arm64: entry: Convert to generic irq entry Jinjie Ruan
                   ` (7 preceding siblings ...)
  2025-08-05 15:08 ` [PATCH -next v7 0/7] arm64: entry: Convert to generic irq entry Ada Couprie Diaz
@ 2025-08-12 11:19 ` Mark Rutland
  2025-08-14  9:39   ` Jinjie Ruan
  8 siblings, 1 reply; 33+ messages in thread
From: Mark Rutland @ 2025-08-12 11:19 UTC (permalink / raw)
  To: Jinjie Ruan
  Cc: sstabellini, puranjay, anshuman.khandual, catalin.marinas,
	liaochang1, oleg, kristina.martsenko, linux-kernel, broonie,
	chenl311, xen-devel, leitao, ryan.roberts, akpm, mbenes, will,
	ardb, linux-arm-kernel

Hi,

This is looking pretty good now, thanks for continuing to work on this!

I've left a couple of minor comments, and Ada has left a few more. If
you're able to address those and respin atop v6.17-rc1, I think we can
start figuring out how to queue this.

Mark.

On Tue, Jul 29, 2025 at 09:54:49AM +0800, Jinjie Ruan wrote:
> Currently, x86, Riscv, Loongarch use the generic entry. Also convert
> arm64 to use the generic entry infrastructure from kernel/entry/*.
> The generic entry makes maintainers' work easier and codes more elegant,
> which will make PREEMPT_DYNAMIC and PREEMPT_LAZY use the generic entry
> common code and remove a lot of duplicate code.
> 
> Since commit a70e9f647f50 ("entry: Split generic entry into generic
> exception and syscall entry") split the generic entry into generic irq
> entry and generic syscall entry, it is time to convert arm64 to use
> the generic irq entry. And ARM64 will be completely converted to generic
> entry in the upcoming patch series.
> 
> The main convert steps are as follows:
> - Split generic entry into generic irq entry and generic syscall to
>   make the single patch more concentrated in switching to one thing.
> - Make arm64 easier to use irqentry_enter/exit().
> - Make arm64 closer to the PREEMPT_DYNAMIC code of generic entry.
> - Switch to generic irq entry.
> 
> It was tested ok with following test cases on QEMU virt platform:
>  - Perf tests.
>  - Different `dynamic preempt` mode switch.
>  - Pseudo NMI tests.
>  - Stress-ng CPU stress test.
>  - MTE test case in Documentation/arch/arm64/memory-tagging-extension.rst
>    and all test cases in tools/testing/selftests/arm64/mte/*.
> 
> The test QEMU configuration is as follows:
> 
> 	qemu-system-aarch64 \
> 		-M virt,gic-version=3,virtualization=on,mte=on \
> 		-cpu max,pauth-impdef=on \
> 		-kernel Image \
> 		-smp 8,sockets=1,cores=4,threads=2 \
> 		-m 512m \
> 		-nographic \
> 		-no-reboot \
> 		-device virtio-rng-pci \
> 		-append "root=/dev/vda rw console=ttyAMA0 kgdboc=ttyAMA0,115200 \
> 			earlycon preempt=voluntary irqchip.gicv3_pseudo_nmi=1" \
> 		-drive if=none,file=images/rootfs.ext4,format=raw,id=hd0 \
> 		-device virtio-blk-device,drive=hd0 \
> 
> Changes in v7:
> - Rebased on v6.16-rc7 and remove the merged first patch.
> - Update the commit message.
> 
> Changes in v6:
> - Rebased on 6.14 rc2 next.
> - Put the syscall bits aside and split it out.
> - Have the split patch before the arm64 changes.
> - Merge some tightly coupled patches.
> - Adjust the order of some patches to make them more reasonable.
> - Define regs_irqs_disabled() by inline function.
> - Define interrupts_enabled() in terms of regs_irqs_disabled().
> - Delete the fast_interrupts_enabled() macro.
> - irqentry_state_t -> arm64_irqentry_state_t.
> - Remove arch_exit_to_user_mode_prepare() and pull local_daif_mask() later
>   in the arm64 exit sequence
> - Update the commit message.
> 
> Changes in v5:
> - Not change arm32 and keep inerrupts_enabled() macro for gicv3 driver.
> - Move irqentry_state definition into arch/arm64/kernel/entry-common.c.
> - Avoid removing the __enter_from_*() and __exit_to_*() wrappers.
> - Update "irqentry_state_t ret/irq_state" to "state"
>   to keep it consistently.
> - Use generic irq entry header for PREEMPT_DYNAMIC after split
>   the generic entry.
> - Also refactor the ARM64 syscall code.
> - Introduce arch_ptrace_report_syscall_entry/exit(), instead of
>   arch_pre/post_report_syscall_entry/exit() to simplify code.
> - Make the syscall patches clear separation.
> - Update the commit message.
> 
> Changes in v4:
> - Rework/cleanup split into a few patches as Mark suggested.
> - Replace interrupts_enabled() macro with regs_irqs_disabled(), instead
>   of left it here.
> - Remove rcu and lockdep state in pt_regs by using temporary
>   irqentry_state_t as Mark suggested.
> - Remove some unnecessary intermediate functions to make it clear.
> - Rework preempt irq and PREEMPT_DYNAMIC code
>   to make the switch more clear.
> - arch_prepare_*_entry/exit() -> arch_pre_*_entry/exit().
> - Expand the arch functions comment.
> - Make arch functions closer to its caller.
> - Declare saved_reg in for block.
> - Remove arch_exit_to_kernel_mode_prepare(), arch_enter_from_kernel_mode().
> - Adjust "Add few arch functions to use generic entry" patch to be
>   the penultimate.
> - Update the commit message.
> - Add suggested-by.
> 
> Changes in v3:
> - Test the MTE test cases.
> - Handle forget_syscall() in arch_post_report_syscall_entry()
> - Make the arch funcs not use __weak as Thomas suggested, so move
>   the arch funcs to entry-common.h, and make arch_forget_syscall() folded
>   in arch_post_report_syscall_entry() as suggested.
> - Move report_single_step() to thread_info.h for arm64
> - Change __always_inline() to inline, add inline for the other arch funcs.
> - Remove unused signal.h for entry-common.h.
> - Add Suggested-by.
> - Update the commit message.
> 
> Changes in v2:
> - Add tested-by.
> - Fix a bug that not call arch_post_report_syscall_entry() in
>   syscall_trace_enter() if ptrace_report_syscall_entry() return not zero.
> - Refactor report_syscall().
> - Add comment for arch_prepare_report_syscall_exit().
> - Adjust entry-common.h header file inclusion to alphabetical order.
> - Update the commit message.
> 
> Jinjie Ruan (7):
>   arm64: ptrace: Replace interrupts_enabled() with regs_irqs_disabled()
>   arm64: entry: Refactor the entry and exit for exceptions from EL1
>   arm64: entry: Rework arm64_preempt_schedule_irq()
>   arm64: entry: Use preempt_count() and need_resched() helper
>   arm64: entry: Refactor preempt_schedule_irq() check code
>   arm64: entry: Move arm64_preempt_schedule_irq() into
>     __exit_to_kernel_mode()
>   arm64: entry: Switch to generic IRQ entry
> 
>  arch/arm64/Kconfig                    |   1 +
>  arch/arm64/include/asm/daifflags.h    |   2 +-
>  arch/arm64/include/asm/entry-common.h |  56 ++++
>  arch/arm64/include/asm/preempt.h      |   2 -
>  arch/arm64/include/asm/ptrace.h       |  13 +-
>  arch/arm64/include/asm/xen/events.h   |   2 +-
>  arch/arm64/kernel/acpi.c              |   2 +-
>  arch/arm64/kernel/debug-monitors.c    |   2 +-
>  arch/arm64/kernel/entry-common.c      | 411 +++++++++-----------------
>  arch/arm64/kernel/sdei.c              |   2 +-
>  arch/arm64/kernel/signal.c            |   3 +-
>  kernel/entry/common.c                 |  16 +-
>  12 files changed, 217 insertions(+), 295 deletions(-)
>  create mode 100644 arch/arm64/include/asm/entry-common.h
> 
> -- 
> 2.34.1
> 


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -next v7 5/7] arm64: entry: Refactor preempt_schedule_irq() check code
  2025-08-11 16:02   ` Ada Couprie Diaz
@ 2025-08-14  8:49     ` Jinjie Ruan
  0 siblings, 0 replies; 33+ messages in thread
From: Jinjie Ruan @ 2025-08-14  8:49 UTC (permalink / raw)
  To: Ada Couprie Diaz
  Cc: catalin.marinas, will, oleg, sstabellini, mark.rutland, puranjay,
	broonie, mbenes, ryan.roberts, akpm, chenl311, anshuman.khandual,
	kristina.martsenko, liaochang1, ardb, leitao, linux-arm-kernel,
	linux-kernel, xen-devel



On 2025/8/12 0:02, Ada Couprie Diaz wrote:
> On 29/07/2025 02:54, Jinjie Ruan wrote:
> 
>> ARM64 requires an additional check whether to reschedule on return
>> from interrupt. So add arch_irqentry_exit_need_resched() as the default
>> NOP implementation and hook it up into the need_resched() condition in
>> raw_irqentry_exit_cond_resched(). This allows ARM64 to implement
>> the architecture specific version for switching over to
>> the generic entry code.
>>
>> To align the structure of the code with irqentry_exit_cond_resched()
>> from the generic entry code, hoist the need_irq_preemption()
>> and IS_ENABLED() check earlier. And different preemption check functions
>> are defined based on whether dynamic preemption is enabled.
>>
>> Suggested-by: Mark Rutland <mark.rutland@arm.com>
>> Suggested-by: Kevin Brodsky <kevin.brodsky@arm.com>
>> Suggested-by: Thomas Gleixner <tglx@linutronix.de>
>> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
>> ---
> Unrelated to the other thread : I noticed that compiling this patch
> with `allnoconfig` would fail :
> - `raw_irqentry_exit_cond_resched` has no previous prototype,
>   as it is defined within `#ifdef CONFIG_PREEMPTION`
> - `irqentry_exit_cond_resched()` is not declared, as it is also within
>   `#ifdef CONFIG_PREEMPTION`

You are right, thank you! I'll fix it.

> 
> The patch below fixes the issue, but introduces merge conflicts in
> patches 6 and 7, plus the `#ifdef` needs to be moved accordingly
> in patch 6 and the empty "without preemption" `irqentry_exit_cond_resched()`
> needs to be removed in patch 7.
> 
> I hope this can be useful,
> Ada
> 
> ---
> diff --git a/arch/arm64/include/asm/preempt.h b/arch/arm64/include/asm/preempt.h
> index 0f0ba250efe8..d9aba8b1e466 100644
> --- a/arch/arm64/include/asm/preempt.h
> +++ b/arch/arm64/include/asm/preempt.h
> @@ -103,6 +103,8 @@ void dynamic_irqentry_exit_cond_resched(void);
>  #define irqentry_exit_cond_resched() raw_irqentry_exit_cond_resched()
> 
>  #endif /* CONFIG_PREEMPT_DYNAMIC */
> +#else /* CONFIG_PREEMPTION */
> +#define irqentry_exit_cond_resched() {}
>  #endif /* CONFIG_PREEMPTION */
> 
>  #endif /* __ASM_PREEMPT_H */
> diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
> index 4f92664fd46c..abd7a315145e 100644
> --- a/arch/arm64/kernel/entry-common.c
> +++ b/arch/arm64/kernel/entry-common.c
> @@ -661,6 +661,7 @@ static __always_inline void __el1_pnmi(struct pt_regs *regs,
>         arm64_exit_nmi(regs, state);
>  }
> 
> +#ifdef CONFIG_PREEMPTION
>  void raw_irqentry_exit_cond_resched(void)
>  {
>         if (!preempt_count()) {
> @@ -668,6 +669,7 @@ void raw_irqentry_exit_cond_resched(void)
>                         preempt_schedule_irq();
>         }
>  }
> +#endif
> 
>  #ifdef CONFIG_PREEMPT_DYNAMIC
>  DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
> 
> 


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -next v7 5/7] arm64: entry: Refactor preempt_schedule_irq() check code
  2025-08-12 11:13   ` Mark Rutland
@ 2025-08-14  9:31     ` Jinjie Ruan
  0 siblings, 0 replies; 33+ messages in thread
From: Jinjie Ruan @ 2025-08-14  9:31 UTC (permalink / raw)
  To: Mark Rutland
  Cc: sstabellini, puranjay, anshuman.khandual, catalin.marinas,
	liaochang1, oleg, kristina.martsenko, linux-kernel, broonie,
	chenl311, xen-devel, leitao, ryan.roberts, akpm, mbenes, will,
	ardb, linux-arm-kernel



On 2025/8/12 19:13, Mark Rutland wrote:
> On Tue, Jul 29, 2025 at 09:54:54AM +0800, Jinjie Ruan wrote:
>> ARM64 requires an additional check whether to reschedule on return
>> from interrupt. So add arch_irqentry_exit_need_resched() as the default
>> NOP implementation and hook it up into the need_resched() condition in
>> raw_irqentry_exit_cond_resched(). This allows ARM64 to implement
>> the architecture specific version for switching over to
>> the generic entry code.
>>
>> To align the structure of the code with irqentry_exit_cond_resched()
>> from the generic entry code, hoist the need_irq_preemption()
>> and IS_ENABLED() check earlier. And different preemption check functions
>> are defined based on whether dynamic preemption is enabled.
>>
>> Suggested-by: Mark Rutland <mark.rutland@arm.com>
>> Suggested-by: Kevin Brodsky <kevin.brodsky@arm.com>
>> Suggested-by: Thomas Gleixner <tglx@linutronix.de>
>> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
>> ---
>>  arch/arm64/include/asm/preempt.h |  4 ++++
>>  arch/arm64/kernel/entry-common.c | 35 ++++++++++++++++++--------------
>>  kernel/entry/common.c            | 16 ++++++++++++++-
>>  3 files changed, 39 insertions(+), 16 deletions(-)
> 
> Can you please split the change to kernel/entry/common.c into a separate
> patch? That doesn't depend on the arm64-specific changes, and it'll make
> it easier to handle any conflcits when merging this.

Sure, I'll split the change into a separate patch.

> 
> Mark.
> 
>>
>> diff --git a/arch/arm64/include/asm/preempt.h b/arch/arm64/include/asm/preempt.h
>> index 0159b625cc7f..0f0ba250efe8 100644
>> --- a/arch/arm64/include/asm/preempt.h
>> +++ b/arch/arm64/include/asm/preempt.h
>> @@ -85,6 +85,7 @@ static inline bool should_resched(int preempt_offset)
>>  void preempt_schedule(void);
>>  void preempt_schedule_notrace(void);
>>  
>> +void raw_irqentry_exit_cond_resched(void);
>>  #ifdef CONFIG_PREEMPT_DYNAMIC
>>  
>>  DECLARE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
>> @@ -92,11 +93,14 @@ void dynamic_preempt_schedule(void);
>>  #define __preempt_schedule()		dynamic_preempt_schedule()
>>  void dynamic_preempt_schedule_notrace(void);
>>  #define __preempt_schedule_notrace()	dynamic_preempt_schedule_notrace()
>> +void dynamic_irqentry_exit_cond_resched(void);
>> +#define irqentry_exit_cond_resched()	dynamic_irqentry_exit_cond_resched()
>>  
>>  #else /* CONFIG_PREEMPT_DYNAMIC */
>>  
>>  #define __preempt_schedule()		preempt_schedule()
>>  #define __preempt_schedule_notrace()	preempt_schedule_notrace()
>> +#define irqentry_exit_cond_resched()	raw_irqentry_exit_cond_resched()
>>  
>>  #endif /* CONFIG_PREEMPT_DYNAMIC */
>>  #endif /* CONFIG_PREEMPTION */
>> diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
>> index 7c2299c1ba79..4f92664fd46c 100644
>> --- a/arch/arm64/kernel/entry-common.c
>> +++ b/arch/arm64/kernel/entry-common.c
>> @@ -285,19 +285,8 @@ static void noinstr arm64_exit_el1_dbg(struct pt_regs *regs,
>>  		lockdep_hardirqs_on(CALLER_ADDR0);
>>  }
>>  
>> -#ifdef CONFIG_PREEMPT_DYNAMIC
>> -DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
>> -#define need_irq_preemption() \
>> -	(static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched))
>> -#else
>> -#define need_irq_preemption()	(IS_ENABLED(CONFIG_PREEMPTION))
>> -#endif
>> -
>>  static inline bool arm64_preempt_schedule_irq(void)
>>  {
>> -	if (!need_irq_preemption())
>> -		return false;
>> -
>>  	/*
>>  	 * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC
>>  	 * priority masking is used the GIC irqchip driver will clear DAIF.IF
>> @@ -672,6 +661,24 @@ static __always_inline void __el1_pnmi(struct pt_regs *regs,
>>  	arm64_exit_nmi(regs, state);
>>  }
>>  
>> +void raw_irqentry_exit_cond_resched(void)
>> +{
>> +	if (!preempt_count()) {
>> +		if (need_resched() && arm64_preempt_schedule_irq())
>> +			preempt_schedule_irq();
>> +	}
>> +}
>> +
>> +#ifdef CONFIG_PREEMPT_DYNAMIC
>> +DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
>> +void dynamic_irqentry_exit_cond_resched(void)
>> +{
>> +	if (!static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched))
>> +		return;
>> +	raw_irqentry_exit_cond_resched();
>> +}
>> +#endif
>> +
>>  static __always_inline void __el1_irq(struct pt_regs *regs,
>>  				      void (*handler)(struct pt_regs *))
>>  {
>> @@ -681,10 +688,8 @@ static __always_inline void __el1_irq(struct pt_regs *regs,
>>  	do_interrupt_handler(regs, handler);
>>  	irq_exit_rcu();
>>  
>> -	if (!preempt_count() && need_resched()) {
>> -		if (arm64_preempt_schedule_irq())
>> -			preempt_schedule_irq();
>> -	}
>> +	if (IS_ENABLED(CONFIG_PREEMPTION))
>> +		irqentry_exit_cond_resched();
>>  
>>  	exit_to_kernel_mode(regs, state);
>>  }
>> diff --git a/kernel/entry/common.c b/kernel/entry/common.c
>> index b82032777310..4aa9656fa1b4 100644
>> --- a/kernel/entry/common.c
>> +++ b/kernel/entry/common.c
>> @@ -142,6 +142,20 @@ noinstr irqentry_state_t irqentry_enter(struct pt_regs *regs)
>>  	return ret;
>>  }
>>  
>> +/**
>> + * arch_irqentry_exit_need_resched - Architecture specific need resched function
>> + *
>> + * Invoked from raw_irqentry_exit_cond_resched() to check if need resched.
>> + * Defaults return true.
>> + *
>> + * The main purpose is to permit arch to skip preempt a task from an IRQ.
>> + */
>> +static inline bool arch_irqentry_exit_need_resched(void);
>> +
>> +#ifndef arch_irqentry_exit_need_resched
>> +static inline bool arch_irqentry_exit_need_resched(void) { return true; }
>> +#endif
>> +
>>  void raw_irqentry_exit_cond_resched(void)
>>  {
>>  	if (!preempt_count()) {
>> @@ -149,7 +163,7 @@ void raw_irqentry_exit_cond_resched(void)
>>  		rcu_irq_exit_check_preempt();
>>  		if (IS_ENABLED(CONFIG_DEBUG_ENTRY))
>>  			WARN_ON_ONCE(!on_thread_stack());
>> -		if (need_resched())
>> +		if (need_resched() && arch_irqentry_exit_need_resched())
>>  			preempt_schedule_irq();
>>  	}
>>  }
>> -- 
>> 2.34.1
>>
> 
> 


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -next v7 0/7] arm64: entry: Convert to generic irq entry
  2025-08-11 16:03     ` Ada Couprie Diaz
@ 2025-08-14  9:37       ` Jinjie Ruan
  0 siblings, 0 replies; 33+ messages in thread
From: Jinjie Ruan @ 2025-08-14  9:37 UTC (permalink / raw)
  To: Ada Couprie Diaz
  Cc: catalin.marinas, will, oleg, sstabellini, mark.rutland, puranjay,
	broonie, mbenes, ryan.roberts, akpm, chenl311, anshuman.khandual,
	kristina.martsenko, liaochang1, ardb, leitao, linux-arm-kernel,
	linux-kernel, xen-devel



On 2025/8/12 0:03, Ada Couprie Diaz wrote:
> On 06/08/2025 09:11, Jinjie Ruan wrote:
> 
>> On 2025/8/5 23:08, Ada Couprie Diaz wrote:
>>> Hi Jinjie,
>>>
>>> On 29/07/2025 02:54, Jinjie Ruan wrote:
>>>
>>>> Since commit a70e9f647f50 ("entry: Split generic entry into generic
>>>> exception and syscall entry") split the generic entry into generic irq
>>>> entry and generic syscall entry, it is time to convert arm64 to use
>>>> the generic irq entry. And ARM64 will be completely converted to
>>>> generic
>>>> entry in the upcoming patch series.
>>> Note : I had to manually cherry-pick a70e9f647f50 when pulling the
>>> series
>>> on top of the Linux Arm Kernel for-next/core branch, but there might be
>>> something I'm missing here.
>> It seems that it is now in mainline v6.16-rc1 and linux-next, but not
>> in the Linux Arm Kernel for-next/core branch.
> You're right, I misinterpreted the `-next` of the subject, thanks for the
> clarification !
>>> I'll spend some time testing the series now, specifically given patch
>>> 6's
>>> changes, but other than that everything I saw made sense and didn't look
>>> like it would be of concern to me.
>> Thank you for the test and review.
> 
> I've spent some time testing the series with a few different
> configurations,
> including PREEMPT_RT, pNMI, various lockup and hang detection options,
> UBSAN, shadow call stack, and various CONFIG_DEBUG_XYZ (focused on locks
> and IRQs), on both hardware (AMD Seattle) and KVM guests.
> 
> I tried to generate a diverse set of interrupts (via debug exceptions,
> page faults, perf, kprobes, swapping, OoM) while loading the system with
> different workloads, some generating a lot of context switches : hackbench
> and signaltest from rt-tests[0], and mc-crusher[1], a memcached
> stress-test.
> 
> I did not have any issues, nor any warning reported by the various
> debug features during all my hours of testing, so it looks good !
> 
> Tested-by: Ada Couprie Diaz <ada.coupriediaz@arm.com>

Thank you for your comprehensive testing and code review.

> 
> Thank you for the series !
> Ada
> 
> [0]: https://git.kernel.org/pub/scm/utils/rt-tests/rt-tests.git/
> [1]: https://github.com/memcached/mc-crusher
> 
> 


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -next v7 0/7] arm64: entry: Convert to generic irq entry
  2025-08-12 11:19 ` Mark Rutland
@ 2025-08-14  9:39   ` Jinjie Ruan
  0 siblings, 0 replies; 33+ messages in thread
From: Jinjie Ruan @ 2025-08-14  9:39 UTC (permalink / raw)
  To: Mark Rutland
  Cc: sstabellini, puranjay, anshuman.khandual, catalin.marinas,
	liaochang1, oleg, kristina.martsenko, linux-kernel, broonie,
	chenl311, xen-devel, leitao, ryan.roberts, akpm, mbenes, will,
	ardb, linux-arm-kernel



On 2025/8/12 19:19, Mark Rutland wrote:
> Hi,
> 
> This is looking pretty good now, thanks for continuing to work on this!
> 
> I've left a couple of minor comments, and Ada has left a few more. If
> you're able to address those and respin atop v6.17-rc1, I think we can
> start figuring out how to queue this.

Sure, I will address these review comments based on v6.17-rc1 and send
a new version after local testing.

> 
> Mark.
> 
> On Tue, Jul 29, 2025 at 09:54:49AM +0800, Jinjie Ruan wrote:
>> Currently, x86, Riscv, Loongarch use the generic entry. Also convert
>> arm64 to use the generic entry infrastructure from kernel/entry/*.
>> The generic entry makes maintainers' work easier and codes more elegant,
>> which will make PREEMPT_DYNAMIC and PREEMPT_LAZY use the generic entry
>> common code and remove a lot of duplicate code.
>>
>> Since commit a70e9f647f50 ("entry: Split generic entry into generic
>> exception and syscall entry") split the generic entry into generic irq
>> entry and generic syscall entry, it is time to convert arm64 to use
>> the generic irq entry. And ARM64 will be completely converted to generic
>> entry in the upcoming patch series.
>>
>> The main convert steps are as follows:
>> - Split generic entry into generic irq entry and generic syscall to
>>   make the single patch more concentrated in switching to one thing.
>> - Make arm64 easier to use irqentry_enter/exit().
>> - Make arm64 closer to the PREEMPT_DYNAMIC code of generic entry.
>> - Switch to generic irq entry.
>>
>> It was tested ok with following test cases on QEMU virt platform:
>>  - Perf tests.
>>  - Different `dynamic preempt` mode switch.
>>  - Pseudo NMI tests.
>>  - Stress-ng CPU stress test.
>>  - MTE test case in Documentation/arch/arm64/memory-tagging-extension.rst
>>    and all test cases in tools/testing/selftests/arm64/mte/*.
>>
>> The test QEMU configuration is as follows:
>>
>> 	qemu-system-aarch64 \
>> 		-M virt,gic-version=3,virtualization=on,mte=on \
>> 		-cpu max,pauth-impdef=on \
>> 		-kernel Image \
>> 		-smp 8,sockets=1,cores=4,threads=2 \
>> 		-m 512m \
>> 		-nographic \
>> 		-no-reboot \
>> 		-device virtio-rng-pci \
>> 		-append "root=/dev/vda rw console=ttyAMA0 kgdboc=ttyAMA0,115200 \
>> 			earlycon preempt=voluntary irqchip.gicv3_pseudo_nmi=1" \
>> 		-drive if=none,file=images/rootfs.ext4,format=raw,id=hd0 \
>> 		-device virtio-blk-device,drive=hd0 \
>>
>> Changes in v7:
>> - Rebased on v6.16-rc7 and removed the already-merged first patch.
>> - Update the commit message.
>>
>> Changes in v6:
>> - Rebased on the 6.14-rc2 -next tree.
>> - Put the syscall bits aside and split them out.
>> - Have the split patch before the arm64 changes.
>> - Merge some tightly coupled patches.
>> - Adjust the order of some patches to make them more reasonable.
>> - Define regs_irqs_disabled() as an inline function (see the sketch
>>   just below this changelog).
>> - Define interrupts_enabled() in terms of regs_irqs_disabled().
>> - Delete the fast_interrupts_enabled() macro.
>> - irqentry_state_t -> arm64_irqentry_state_t.
>> - Remove arch_exit_to_user_mode_prepare() and pull local_daif_mask()
>>   later in the arm64 exit sequence.
>> - Update the commit message.
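To make the regs_irqs_disabled() items above concrete: on arm64 the
saved interrupt-mask state is the I bit in pt_regs::pstate, so the
helper and the wrapper kept for the GICv3 driver plausibly reduce to
the sketch below. This is illustrative only; PSR_I_BIT comes from
asm/ptrace.h, and the exact definitions are in patch 1:

	/* Sketch only: IRQs are masked when PSTATE.I is set. */
	static inline bool regs_irqs_disabled(const struct pt_regs *regs)
	{
		return regs->pstate & PSR_I_BIT;
	}

	/* Kept for existing users such as the GICv3 driver. */
	#define interrupts_enabled(regs)	(!regs_irqs_disabled(regs))
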
>>
>> Changes in v5:
>> - Leave arm32 unchanged and keep the interrupts_enabled() macro for
>>   the GICv3 driver.
>> - Move irqentry_state definition into arch/arm64/kernel/entry-common.c.
>> - Avoid removing the __enter_from_*() and __exit_to_*() wrappers.
>> - Update "irqentry_state_t ret/irq_state" to "state"
>>   to keep it consistent.
>> - Use the generic irq entry header for PREEMPT_DYNAMIC after splitting
>>   the generic entry.
>> - Also refactor the ARM64 syscall code.
>> - Introduce arch_ptrace_report_syscall_entry/exit(), instead of
>>   arch_pre/post_report_syscall_entry/exit(), to simplify the code.
>> - Make a clean separation between the syscall patches.
>> - Update the commit message.
>>
>> Changes in v4:
>> - Rework/cleanup split into a few patches as Mark suggested.
>> - Replace the interrupts_enabled() macro with regs_irqs_disabled(),
>>   instead of leaving it in place.
>> - Remove rcu and lockdep state in pt_regs by using temporary
>>   irqentry_state_t as Mark suggested.
>> - Remove some unnecessary intermediate functions to make it clear.
>> - Rework the preempt irq and PREEMPT_DYNAMIC code
>>   to make the switch clearer.
>> - arch_prepare_*_entry/exit() -> arch_pre_*_entry/exit().
>> - Expand the arch functions comment.
>> - Make arch functions closer to its caller.
>> - Declare saved_reg inside the for block.
>> - Remove arch_exit_to_kernel_mode_prepare(), arch_enter_from_kernel_mode().
>> - Adjust the "Add few arch functions to use generic entry" patch to be
>>   the penultimate one.
>> - Update the commit message.
>> - Add suggested-by.
>>
>> Changes in v3:
>> - Test the MTE test cases.
>> - Handle forget_syscall() in arch_post_report_syscall_entry().
>> - Make the arch funcs not use __weak as Thomas suggested, so move
>>   the arch funcs to entry-common.h, and fold arch_forget_syscall()
>>   into arch_post_report_syscall_entry() as suggested.
>> - Move report_single_step() to thread_info.h for arm64.
>> - Change __always_inline() to inline, add inline for the other arch funcs.
>> - Remove unused signal.h for entry-common.h.
>> - Add Suggested-by.
>> - Update the commit message.
>>
>> Changes in v2:
>> - Add tested-by.
>> - Fix a bug where arch_post_report_syscall_entry() was not called in
>>   syscall_trace_enter() if ptrace_report_syscall_entry() returned
>>   non-zero.
>> - Refactor report_syscall().
>> - Add comment for arch_prepare_report_syscall_exit().
>> - Adjust entry-common.h header file inclusion to alphabetical order.
>> - Update the commit message.
>>
>> Jinjie Ruan (7):
>>   arm64: ptrace: Replace interrupts_enabled() with regs_irqs_disabled()
>>   arm64: entry: Refactor the entry and exit for exceptions from EL1
>>   arm64: entry: Rework arm64_preempt_schedule_irq()
>>   arm64: entry: Use preempt_count() and need_resched() helper
>>   arm64: entry: Refactor preempt_schedule_irq() check code
>>   arm64: entry: Move arm64_preempt_schedule_irq() into
>>     __exit_to_kernel_mode()
>>   arm64: entry: Switch to generic IRQ entry
>>
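As background for patches 3 to 6 in the list above: they consolidate the
preemption decision on return to kernel mode so that it matches the
generic entry code. A rough sketch of that check, assuming it keys off
preempt_count() and need_resched() as the patch titles suggest; the name
exit_cond_resched_sketch() is hypothetical, for illustration only:

	/* Sketch only: preempt on return to kernel if safe and needed. */
	static inline void exit_cond_resched_sketch(void)
	{
		if (!preempt_count() && need_resched())
			preempt_schedule_irq();
	}
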
>>  arch/arm64/Kconfig                    |   1 +
>>  arch/arm64/include/asm/daifflags.h    |   2 +-
>>  arch/arm64/include/asm/entry-common.h |  56 ++++
>>  arch/arm64/include/asm/preempt.h      |   2 -
>>  arch/arm64/include/asm/ptrace.h       |  13 +-
>>  arch/arm64/include/asm/xen/events.h   |   2 +-
>>  arch/arm64/kernel/acpi.c              |   2 +-
>>  arch/arm64/kernel/debug-monitors.c    |   2 +-
>>  arch/arm64/kernel/entry-common.c      | 411 +++++++++-----------------
>>  arch/arm64/kernel/sdei.c              |   2 +-
>>  arch/arm64/kernel/signal.c            |   3 +-
>>  kernel/entry/common.c                 |  16 +-
>>  12 files changed, 217 insertions(+), 295 deletions(-)
>>  create mode 100644 arch/arm64/include/asm/entry-common.h
>>
>> -- 
>> 2.34.1
>>
> 


^ permalink raw reply	[flat|nested] 33+ messages in thread

end of thread, other threads:[~2025-08-14 10:54 UTC | newest]

Thread overview: 33+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-07-29  1:54 [PATCH -next v7 0/7] arm64: entry: Convert to generic irq entry Jinjie Ruan
2025-07-29  1:54 ` [PATCH -next v7 1/7] arm64: ptrace: Replace interrupts_enabled() with regs_irqs_disabled() Jinjie Ruan
2025-08-05 15:05   ` Ada Couprie Diaz
2025-08-06  2:31     ` Jinjie Ruan
2025-07-29  1:54 ` [PATCH -next v7 2/7] arm64: entry: Refactor the entry and exit for exceptions from EL1 Jinjie Ruan
2025-08-05 15:06   ` Ada Couprie Diaz
2025-08-06  2:49     ` Jinjie Ruan
2025-08-11 16:01       ` Ada Couprie Diaz
2025-08-12 11:01   ` Mark Rutland
2025-07-29  1:54 ` [PATCH -next v7 3/7] arm64: entry: Rework arm64_preempt_schedule_irq() Jinjie Ruan
2025-08-05 15:06   ` Ada Couprie Diaz
2025-07-29  1:54 ` [PATCH -next v7 4/7] arm64: entry: Use preempt_count() and need_resched() helper Jinjie Ruan
2025-08-05 15:06   ` Ada Couprie Diaz
2025-07-29  1:54 ` [PATCH -next v7 5/7] arm64: entry: Refactor preempt_schedule_irq() check code Jinjie Ruan
2025-08-05 15:06   ` Ada Couprie Diaz
2025-08-06  6:26     ` Jinjie Ruan
2025-08-06  6:39     ` Jinjie Ruan
2025-08-11 16:02       ` Ada Couprie Diaz
2025-08-11 16:02   ` Ada Couprie Diaz
2025-08-14  8:49     ` Jinjie Ruan
2025-08-12 11:13   ` Mark Rutland
2025-08-14  9:31     ` Jinjie Ruan
2025-07-29  1:54 ` [PATCH -next v7 6/7] arm64: entry: Move arm64_preempt_schedule_irq() into __exit_to_kernel_mode() Jinjie Ruan
2025-08-05 15:07   ` Ada Couprie Diaz
2025-07-29  1:54 ` [PATCH -next v7 7/7] arm64: entry: Switch to generic IRQ entry Jinjie Ruan
2025-08-05 15:07   ` Ada Couprie Diaz
2025-08-06  6:59     ` Jinjie Ruan
2025-08-05 15:08 ` [PATCH -next v7 0/7] arm64: entry: Convert to generic irq entry Ada Couprie Diaz
2025-08-06  8:11   ` Jinjie Ruan
2025-08-11 16:03     ` Ada Couprie Diaz
2025-08-14  9:37       ` Jinjie Ruan
2025-08-12 11:19 ` Mark Rutland
2025-08-14  9:39   ` Jinjie Ruan

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox,
as well as URLs for NNTP newsgroup(s).