From: Mark Rutland <mark.rutland@arm.com>
To: Jinjie Ruan <ruanjinjie@huawei.com>
Cc: catalin.marinas@arm.com, will@kernel.org, oleg@redhat.com,
sstabellini@kernel.org, tglx@linutronix.de, peterz@infradead.org,
luto@kernel.org, mingo@redhat.com, juri.lelli@redhat.com,
vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de,
vschneid@redhat.com, kees@kernel.org, wad@chromium.org,
akpm@linux-foundation.org, samitolvanen@google.com,
masahiroy@kernel.org, hca@linux.ibm.com, aliceryhl@google.com,
rppt@kernel.org, xur@google.com, paulmck@kernel.org,
arnd@arndb.de, mbenes@suse.cz, puranjay@kernel.org,
pcc@google.com, ardb@kernel.org, sudeep.holla@arm.com,
guohanjun@huawei.com, rafael@kernel.org, liuwei09@cestc.cn,
dwmw@amazon.co.uk, Jonathan.Cameron@huawei.com,
liaochang1@huawei.com, kristina.martsenko@arm.com,
ptosi@google.com, broonie@kernel.org,
thiago.bauermann@linaro.org, kevin.brodsky@arm.com,
joey.gouly@arm.com, liuyuntao12@huawei.com, leobras@redhat.com,
linux-kernel@vger.kernel.org,
linux-arm-kernel@lists.infradead.org,
xen-devel@lists.xenproject.org
Subject: Re: [PATCH -next v5 05/22] arm64: entry: Use preempt_count() and need_resched() helper
Date: Mon, 10 Feb 2025 11:40:03 +0000
Message-ID: <Z6nlkyxAZEtY_M7T@J2N7QTR9R3>
In-Reply-To: <20241206101744.4161990-6-ruanjinjie@huawei.com>

On Fri, Dec 06, 2024 at 06:17:27PM +0800, Jinjie Ruan wrote:
> The generic entry code uses the preempt_count() and need_resched() helpers
> to check whether it is time to resched. Currently, arm64 uses its own check
> logic, that is "READ_ONCE(current_thread_info()->preempt_count) == 0",
> which is equivalent to "preempt_count() == 0 && need_resched()".

Hmm. The existing code relies upon preempt_fold_need_resched() to work
correctly. If we want to move from:

	READ_ONCE(current_thread_info()->preempt_count) == 0

... to:

	!preempt_count() && need_resched()

... then that change should be made *before* we change the preemption
logic to preempt non-IRQ exceptions in patch 3. Otherwise, that logic is
consuming stale data most of the time.
Mark.
> In preparation for moving arm64 over to the generic entry code, use
> these helpers to replace arm64's own code and move it ahead.
>
> No functional changes.
>
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
> ---
> arch/arm64/kernel/entry-common.c | 14 ++++----------
> 1 file changed, 4 insertions(+), 10 deletions(-)
>
> diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
> index da68c089b74b..efd1a990d138 100644
> --- a/arch/arm64/kernel/entry-common.c
> +++ b/arch/arm64/kernel/entry-common.c
> @@ -88,14 +88,6 @@ static inline bool arm64_need_resched(void)
> if (!need_irq_preemption())
> return false;
>
> - /*
> - * Note: thread_info::preempt_count includes both thread_info::count
> - * and thread_info::need_resched, and is not equivalent to
> - * preempt_count().
> - */
> - if (READ_ONCE(current_thread_info()->preempt_count) != 0)
> - return false;
> -
> /*
> * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC
> * priority masking is used the GIC irqchip driver will clear DAIF.IF
> @@ -141,8 +133,10 @@ static __always_inline void __exit_to_kernel_mode(struct pt_regs *regs,
> return;
> }
>
> - if (arm64_need_resched())
> - preempt_schedule_irq();
> + if (!preempt_count() && need_resched()) {
> + if (arm64_need_resched())
> + preempt_schedule_irq();
> + }
>
> trace_hardirqs_on();
> } else {
> --
> 2.34.1
>