public inbox for linux-arm-kernel@lists.infradead.org
From: Jinjie Ruan <ruanjinjie@huawei.com>
To: Mark Rutland <mark.rutland@arm.com>,
	<linux-arm-kernel@lists.infradead.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>
Cc: vladimir.murzin@arm.com, peterz@infradead.org,
	linux-kernel@vger.kernel.org, tglx@kernel.org, luto@kernel.org
Subject: Re: [PATCH 09/10] arm64: entry: Use split preemption logic
Date: Wed, 8 Apr 2026 09:52:15 +0800	[thread overview]
Message-ID: <01050228-13d1-d2d7-482e-62bb4fc7a8c0@huawei.com> (raw)
In-Reply-To: <20260407131650.3813777-10-mark.rutland@arm.com>

On 2026/4/7 21:16, Mark Rutland wrote:
> The generic irqentry code now provides
> irqentry_exit_to_kernel_mode_preempt() and
> irqentry_exit_to_kernel_mode_after_preempt(), which can be used
> where architectures have different state requirements for involuntary
> preemption and exception return, as is the case on arm64.
> 
> Use the new functions on arm64, aligning our exit to kernel mode logic
> with the style of our exit to user mode logic. This removes the need for
> the recently-added bodge in arch_irqentry_exit_need_resched(), and
> allows preemption to occur when returning from any exception taken from
> kernel mode, which is nicer for RT.
> 
> In an ideal world, we'd remove arch_irqentry_exit_need_resched(), and
> fold the conditionality directly into the architecture-specific entry
> code. That way all the logic necessary to avoid preempting from a
> pseudo-NMI could be constrained specifically to the EL1 IRQ/FIQ paths,
> avoiding redundant work for other exceptions, and making the flow a bit
> clearer. At present it looks like that would require a larger
> refactoring (e.g. for the PREEMPT_DYNAMIC logic), and so I've left that
> as-is for now.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Jinjie Ruan <ruanjinjie@huawei.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Thomas Gleixner <tglx@kernel.org>
> Cc: Vladimir Murzin <vladimir.murzin@arm.com>
> Cc: Will Deacon <will@kernel.org>
> ---
>  arch/arm64/include/asm/entry-common.h | 21 ++++++++-------------
>  arch/arm64/kernel/entry-common.c      | 12 ++++--------
>  2 files changed, 12 insertions(+), 21 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/entry-common.h b/arch/arm64/include/asm/entry-common.h
> index 20f0a7c7bde15..cab8cd78f6938 100644
> --- a/arch/arm64/include/asm/entry-common.h
> +++ b/arch/arm64/include/asm/entry-common.h
> @@ -29,19 +29,14 @@ static __always_inline void arch_exit_to_user_mode_work(struct pt_regs *regs,
>  
>  static inline bool arch_irqentry_exit_need_resched(void)
>  {
> -	if (system_uses_irq_prio_masking()) {
> -		/*
> -		 * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC
> -		 * priority masking is used the GIC irqchip driver will clear DAIF.IF
> -		 * using gic_arch_enable_irqs() for normal IRQs. If anything is set in
> -		 * DAIF we must have handled an NMI, so skip preemption.
> -		 */
> -		if (read_sysreg(daif))
> -			return false;
> -	} else {
> -		if (read_sysreg(daif) & (PSR_D_BIT | PSR_A_BIT))
> -			return false;
> -	}
> +	/*
> +	 * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC
> +	 * priority masking is used the GIC irqchip driver will clear DAIF.IF
> +	 * using gic_arch_enable_irqs() for normal IRQs. If anything is set in
> +	 * DAIF we must have handled an NMI, so skip preemption.
> +	 */
> +	if (system_uses_irq_prio_masking() && read_sysreg(daif))
> +		return false;
>  
>  	/*
>  	 * Preempting a task from an IRQ means we leave copies of PSTATE
> diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
> index 16a65987a6a9b..f42ce7b5c67f3 100644
> --- a/arch/arm64/kernel/entry-common.c
> +++ b/arch/arm64/kernel/entry-common.c
> @@ -54,8 +54,11 @@ static noinstr irqentry_state_t arm64_enter_from_kernel_mode(struct pt_regs *reg
>  static void noinstr arm64_exit_to_kernel_mode(struct pt_regs *regs,
>  					      irqentry_state_t state)
>  {
> +	local_irq_disable();
> +	irqentry_exit_to_kernel_mode_preempt(regs, state);
> +	local_daif_mask();
>  	mte_check_tfsr_exit();
> -	irqentry_exit_to_kernel_mode(regs, state);
> +	irqentry_exit_to_kernel_mode_after_preempt(regs, state);
>  }

Reviewed-by: Jinjie Ruan <ruanjinjie@huawei.com>

>  
>  /*
> @@ -301,7 +304,6 @@ static void noinstr el1_abort(struct pt_regs *regs, unsigned long esr)
>  	state = arm64_enter_from_kernel_mode(regs);
>  	local_daif_inherit(regs);
>  	do_mem_abort(far, esr, regs);
> -	local_daif_mask();
>  	arm64_exit_to_kernel_mode(regs, state);
>  }
>  
> @@ -313,7 +315,6 @@ static void noinstr el1_pc(struct pt_regs *regs, unsigned long esr)
>  	state = arm64_enter_from_kernel_mode(regs);
>  	local_daif_inherit(regs);
>  	do_sp_pc_abort(far, esr, regs);
> -	local_daif_mask();
>  	arm64_exit_to_kernel_mode(regs, state);
>  }
>  
> @@ -324,7 +325,6 @@ static void noinstr el1_undef(struct pt_regs *regs, unsigned long esr)
>  	state = arm64_enter_from_kernel_mode(regs);
>  	local_daif_inherit(regs);
>  	do_el1_undef(regs, esr);
> -	local_daif_mask();
>  	arm64_exit_to_kernel_mode(regs, state);
>  }
>  
> @@ -335,7 +335,6 @@ static void noinstr el1_bti(struct pt_regs *regs, unsigned long esr)
>  	state = arm64_enter_from_kernel_mode(regs);
>  	local_daif_inherit(regs);
>  	do_el1_bti(regs, esr);
> -	local_daif_mask();
>  	arm64_exit_to_kernel_mode(regs, state);
>  }
>  
> @@ -346,7 +345,6 @@ static void noinstr el1_gcs(struct pt_regs *regs, unsigned long esr)
>  	state = arm64_enter_from_kernel_mode(regs);
>  	local_daif_inherit(regs);
>  	do_el1_gcs(regs, esr);
> -	local_daif_mask();
>  	arm64_exit_to_kernel_mode(regs, state);
>  }
>  
> @@ -357,7 +355,6 @@ static void noinstr el1_mops(struct pt_regs *regs, unsigned long esr)
>  	state = arm64_enter_from_kernel_mode(regs);
>  	local_daif_inherit(regs);
>  	do_el1_mops(regs, esr);
> -	local_daif_mask();
>  	arm64_exit_to_kernel_mode(regs, state);
>  }
>  
> @@ -423,7 +420,6 @@ static void noinstr el1_fpac(struct pt_regs *regs, unsigned long esr)
>  	state = arm64_enter_from_kernel_mode(regs);
>  	local_daif_inherit(regs);
>  	do_el1_fpac(regs, esr);
> -	local_daif_mask();
>  	arm64_exit_to_kernel_mode(regs, state);
>  }
>  



Thread overview: 30+ messages
2026-04-07 13:16 [PATCH 00/10] arm64/entry: Mark Rutland
2026-04-07 13:16 ` [PATCH 01/10] entry: Fix stale comment for irqentry_enter() Mark Rutland
2026-04-08  1:14   ` Jinjie Ruan
2026-04-07 13:16 ` [PATCH 02/10] entry: Remove local_irq_{enable,disable}_exit_to_user() Mark Rutland
2026-04-08  1:18   ` Jinjie Ruan
2026-04-07 13:16 ` [PATCH 03/10] entry: Move irqentry_enter() prototype later Mark Rutland
2026-04-08  1:21   ` Jinjie Ruan
2026-04-07 13:16 ` [PATCH 04/10] entry: Split kernel mode logic from irqentry_{enter,exit}() Mark Rutland
2026-04-08  1:32   ` Jinjie Ruan
2026-04-07 13:16 ` [PATCH 05/10] entry: Split preemption from irqentry_exit_to_kernel_mode() Mark Rutland
2026-04-08  1:40   ` Jinjie Ruan
2026-04-08  9:17   ` Jinjie Ruan
2026-04-08 10:19     ` Mark Rutland
2026-04-07 13:16 ` [PATCH 06/10] arm64: entry: Don't preempt with SError or Debug masked Mark Rutland
2026-04-08  1:47   ` Jinjie Ruan
2026-04-07 13:16 ` [PATCH 07/10] arm64: entry: Consistently prefix arm64-specific wrappers Mark Rutland
2026-04-08  1:49   ` Jinjie Ruan
2026-04-07 13:16 ` [PATCH 08/10] arm64: entry: Use irqentry_{enter_from,exit_to}_kernel_mode() Mark Rutland
2026-04-08  1:50   ` Jinjie Ruan
2026-04-07 13:16 ` [PATCH 09/10] arm64: entry: Use split preemption logic Mark Rutland
2026-04-08  1:52   ` Jinjie Ruan [this message]
2026-04-07 13:16 ` [PATCH 10/10] arm64: Check DAIF (and PMR) at task-switch time Mark Rutland
2026-04-08  2:17   ` Jinjie Ruan
2026-04-08  9:08     ` Mark Rutland
2026-04-07 21:08 ` [PATCH 00/10] arm64/entry: Thomas Gleixner
2026-04-08  9:02   ` Mark Rutland
2026-04-08  9:06     ` Catalin Marinas
2026-04-08 10:14       ` Thomas Gleixner
2026-04-08  9:19   ` Peter Zijlstra
2026-04-08 17:25 ` (subset) " Catalin Marinas
