* [PATCH v9 00/12] barrier: Add smp_cond_load_{relaxed,acquire}_timeout()
@ 2026-02-09 2:31 Ankur Arora
2026-02-09 2:31 ` [PATCH v9 01/12] asm-generic: barrier: Add smp_cond_load_relaxed_timeout() Ankur Arora
` (11 more replies)
0 siblings, 12 replies; 33+ messages in thread
From: Ankur Arora @ 2026-02-09 2:31 UTC (permalink / raw)
To: linux-kernel, linux-arch, linux-arm-kernel, linux-pm, bpf
Cc: arnd, catalin.marinas, will, peterz, akpm, mark.rutland, harisokn,
cl, ast, rafael, daniel.lezcano, memxor, zhenglifeng1, xueshuai,
joao.m.martins, boris.ostrovsky, konrad.wilk, Ankur Arora
This series adds waited variants of the smp_cond_load() primitives:
smp_cond_load_relaxed_timeout() and smp_cond_load_acquire_timeout().
As the name suggests, the new interfaces are meant for contexts where
you want to wait on a condition variable for a finite duration. This is
easy enough to do with a loop around cpu_relax(). There are, however,
architectures (e.g. arm64) that allow waiting on a cacheline instead.
So, these interfaces handle a mixture of spinning and waiting, with an
smp_cond_load() thrown in. The interfaces are:
smp_cond_load_relaxed_timeout(ptr, cond_expr, time_expr, timeout)
smp_cond_load_acquire_timeout(ptr, cond_expr, time_expr, timeout)
The parameters time_expr and timeout determine when to bail out.
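As an illustration (a sketch, not taken from this series), a caller
with a 1ms budget could wait for a flag using local_clock() as the
time source:
    /* Sketch: wait up to 1ms for the flag to become non-zero. */
    u32 val;

    val = smp_cond_load_relaxed_timeout(&flag, VAL != 0,
                                        local_clock(), NSEC_PER_MSEC);
    if (!val)
        ; /* timed out with the flag still clear */
(VAL is the usual smp_cond_load() convention for referring to the
current value of *ptr inside cond_expr.)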
Also add tif_need_resched_relaxed_wait() which wraps the pattern used
in poll_idle() and abstracts out details of the interface and those
of the scheduler.
In addition add atomic_cond_read_*_timeout(), atomic64_cond_read_*_timeout(),
and atomic_long wrappers to the interfaces.
Finally update poll_idle() and resilient queued spinlocks to use them.
Changelog:
v8 [0]:
- Defer evaluation of @time_expr_ns to when we hit the slowpath.
(comment from Alexei Starovoitov).
- Mention that cpu_poll_relax() is better than raw CPU polling
only where ARCH_HAS_CPU_RELAX is defined.
- also define ARCH_HAS_CPU_RELAX for arm64.
(Came out of a discussion with Will Deacon.)
- Split out WFET and WFE handling. I was doing both of these
in a common handler.
(From Will Deacon and in an earlier revision by Catalin Marinas.)
- Add mentions of atomic_cond_read_{relaxed,acquire}(),
atomic_cond_read_{relaxed,acquire}_timeout() in
Documentation/atomic_t.txt.
- Use the BIT() macro to do the checking in tif_bitset_relaxed_wait().
- Cleanup unnecessary assignments, casts etc in poll_idle().
(From Rafael Wysocki.)
- Fixup warnings from kernel build robot
v7 [1]:
- change the interface to separately provide the timeout. This is
useful for supporting WFET and similar primitives which can do
timed waiting (suggested by Arnd Bergmann).
- Adapting rqspinlock code to this changed interface also
necessitated allowing time_expr to fail.
- rqspinlock changes to adapt to the new smp_cond_load_acquire_timeout().
- add WFET support (suggested by Arnd Bergmann).
- add support for atomic-long wrappers.
- add a new scheduler interface tif_need_resched_relaxed_wait() which
encapsulates the polling logic used by poll_idle().
- interface suggested by Rafael J. Wysocki.
v6 [2]:
- fixup missing timeout parameters in atomic64_cond_read_*_timeout()
- remove a race between setting of TIF_NEED_RESCHED and the call to
smp_cond_load_relaxed_timeout(). This would mean that dev->poll_time_limit
would be set even if we hadn't spent any time waiting.
(The original check compared against local_clock(), which would have been
fine, but I was instead using a cheaper check against _TIF_NEED_RESCHED.)
(Both from meta-CI bot)
v5 [3]:
- use cpu_poll_relax() instead of cpu_relax().
- instead of defining an arm64 specific
smp_cond_load_relaxed_timeout(), just define the appropriate
cpu_poll_relax().
- re-read the target pointer when we exit due to the time-check.
- s/SMP_TIMEOUT_SPIN_COUNT/SMP_TIMEOUT_POLL_COUNT/
(Suggested by Will Deacon)
- add atomic_cond_read_*_timeout() and atomic64_cond_read_*_timeout()
interfaces.
- rqspinlock: use atomic_cond_read_acquire_timeout().
- cpuidle: use smp_cond_load_relaxed_timeout() for polling.
(Suggested by Catalin Marinas)
- rqspinlock: define SMP_TIMEOUT_POLL_COUNT to be 16k for non arm64
v4 [4]:
- naming change 's/timewait/timeout/'
- resilient spinlocks: get rid of res_smp_cond_load_acquire_waiting()
and fixup use of RES_CHECK_TIMEOUT().
(Both suggested by Catalin Marinas)
v3 [5]:
- further interface simplifications (suggested by Catalin Marinas)
v2 [6]:
- simplified the interface (suggested by Catalin Marinas)
- get rid of wait_policy, and a multitude of constants
- adds a slack parameter
This helped remove a fair amount of code duplication and, in
hindsight, unnecessary constants.
v1 [7]:
- add wait_policy (coarse and fine)
- derive spin-count etc at runtime instead of using arbitrary
constants.
Haris Okanovic tested v4 of this series with poll_idle()/haltpoll patches. [8]
Comments appreciated!
Thanks
Ankur
[0]
[1] https://lore.kernel.org/lkml/20251028053136.692462-1-ankur.a.arora@oracle.com/
[2] https://lore.kernel.org/lkml/20250911034655.3916002-1-ankur.a.arora@oracle.com/
[3] https://lore.kernel.org/lkml/20250911034655.3916002-1-ankur.a.arora@oracle.com/
[4] https://lore.kernel.org/lkml/20250829080735.3598416-1-ankur.a.arora@oracle.com/
[5] https://lore.kernel.org/lkml/20250627044805.945491-1-ankur.a.arora@oracle.com/
[6] https://lore.kernel.org/lkml/20250502085223.1316925-1-ankur.a.arora@oracle.com/
[7] https://lore.kernel.org/lkml/20250203214911.898276-1-ankur.a.arora@oracle.com/
[8] https://lore.kernel.org/lkml/2cecbf7fb23ee83a4ce027e1be3f46f97efd585c.camel@amazon.com/
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: bpf@vger.kernel.org
Cc: linux-arch@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-pm@vger.kernel.org
Ankur Arora (12):
asm-generic: barrier: Add smp_cond_load_relaxed_timeout()
arm64: barrier: Support smp_cond_load_relaxed_timeout()
arm64/delay: move some constants out to a separate header
arm64: support WFET in smp_cond_load_relaxed_timeout()
arm64: rqspinlock: Remove private copy of
smp_cond_load_acquire_timewait()
asm-generic: barrier: Add smp_cond_load_acquire_timeout()
atomic: Add atomic_cond_read_*_timeout()
locking/atomic: scripts: build atomic_long_cond_read_*_timeout()
bpf/rqspinlock: switch check_timeout() to a clock interface
bpf/rqspinlock: Use smp_cond_load_acquire_timeout()
sched: add need-resched timed wait interface
cpuidle/poll_state: Wait for need-resched via
tif_need_resched_relaxed_wait()
Documentation/atomic_t.txt | 14 ++--
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/barrier.h | 23 +++++++
arch/arm64/include/asm/cmpxchg.h | 65 ++++++++++++++----
arch/arm64/include/asm/delay-const.h | 25 +++++++
arch/arm64/include/asm/rqspinlock.h | 85 ------------------------
arch/arm64/lib/delay.c | 13 +---
drivers/cpuidle/poll_state.c | 21 +-----
drivers/soc/qcom/rpmh-rsc.c | 8 +--
include/asm-generic/barrier.h | 98 ++++++++++++++++++++++++++++
include/linux/atomic.h | 10 +++
include/linux/atomic/atomic-long.h | 18 +++--
include/linux/sched/idle.h | 29 ++++++++
kernel/bpf/rqspinlock.c | 72 ++++++++++++--------
scripts/atomic/gen-atomic-long.sh | 16 +++--
15 files changed, 321 insertions(+), 177 deletions(-)
create mode 100644 arch/arm64/include/asm/delay-const.h
--
2.31.1
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH v9 01/12] asm-generic: barrier: Add smp_cond_load_relaxed_timeout()
2026-02-09 2:31 [PATCH v9 00/12] barrier: Add smp_cond_load_{relaxed,acquire}_timeout() Ankur Arora
@ 2026-02-09 2:31 ` Ankur Arora
2026-02-09 4:57 ` Randy Dunlap
` (2 more replies)
2026-02-09 2:31 ` [PATCH v9 02/12] arm64: barrier: Support smp_cond_load_relaxed_timeout() Ankur Arora
` (10 subsequent siblings)
11 siblings, 3 replies; 33+ messages in thread
From: Ankur Arora @ 2026-02-09 2:31 UTC (permalink / raw)
To: linux-kernel, linux-arch, linux-arm-kernel, linux-pm, bpf
Cc: arnd, catalin.marinas, will, peterz, akpm, mark.rutland, harisokn,
cl, ast, rafael, daniel.lezcano, memxor, zhenglifeng1, xueshuai,
joao.m.martins, boris.ostrovsky, konrad.wilk, Ankur Arora
Add smp_cond_load_relaxed_timeout(), which extends
smp_cond_load_relaxed() to allow waiting for a duration.
We loop around waiting for the condition variable to change while
periodically doing a time-check. The loop uses cpu_poll_relax() to slow
down the busy-waiting, which, unless overridden by the architecture
code, amounts to a cpu_relax().
Note that there are two ways for the time-check to fail: the usual
timeout case or @time_expr_ns returning an invalid value (negative
or zero). The second failure mode allows for clocks attached to the
clock-domain of @cond_expr, which might cease to operate meaningfully
once some state internal to @cond_expr has changed.
Evaluation of @time_expr_ns: in the fastpath we want to keep the
performance close to smp_cond_load_relaxed(). To do that we defer
evaluation of the potentially costly @time_expr_ns to when we hit
the slowpath.
This also means that some hardware-dependent duration will always have
passed in cpu_poll_relax() iterations by the time of the first
evaluation. Additionally, cpu_poll_relax() is not guaranteed to return
at the timeout boundary. In sum, expect timeout overshoot when we exit
due to expiration of the timeout.
The number of spin iterations before a time-check,
SMP_TIMEOUT_POLL_COUNT, is chosen to be 200 by default. With a
cpu_poll_relax() iteration taking ~20-30 cycles (measured on a variety
of x86 platforms), we expect a time-check every ~4000-6000 cycles.
The outer limit of the overshoot is double that when working with the
parameters above. It might be higher or lower depending on the
implementation of cpu_poll_relax() across architectures.
Lastly, config option ARCH_HAS_CPU_RELAX indicates availability of a
cpu_poll_relax() that is cheaper than polling. This might be relevant
for cases with a prolonged timeout.
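To make the second failure mode concrete, here is a sketch with a
hypothetical my_clock() helper that returns monotonic ns while healthy
and a negative errno once its clock-domain breaks down:
    s64 now = 0;
    u8 val;

    /* Either a timeout or a negative clock value ends the wait. */
    val = smp_cond_load_relaxed_timeout(&lock->locked, !VAL,
                                        (now = my_clock()), timeout_ns);
    if (val && now < 0)
        ; /* exited because the clock failed, not a plain timeout */
This is the pattern the rqspinlock conversion later in the series uses
with clock_deadlock().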
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: linux-arch@vger.kernel.org
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
---
Notes:
- Defer evaluation of @time_expr_ns to when we hit the slowpath.
- This also helps get rid of the labelled gotos which were used to
handle the early failure case (since now there's no early init
to be concerned with.)
- Add a comment mentioning that the cpu_poll_relax() implementation
is better than polling if ARCH_HAS_CPU_RELAX.
include/asm-generic/barrier.h | 72 +++++++++++++++++++++++++++++++++++
1 file changed, 72 insertions(+)
diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index d4f581c1e21d..2738fe35c1df 100644
--- a/include/asm-generic/barrier.h
+++ b/include/asm-generic/barrier.h
@@ -273,6 +273,68 @@ do { \
})
#endif
+/*
+ * Number of times we iterate in the loop before doing the time check.
+ */
+#ifndef SMP_TIMEOUT_POLL_COUNT
+#define SMP_TIMEOUT_POLL_COUNT 200
+#endif
+
+/*
+ * Platforms with ARCH_HAS_CPU_RELAX have a cpu_poll_relax() implementation
+ * that is expected to be cheaper (lower power) than pure polling.
+ */
+#ifndef cpu_poll_relax
+#define cpu_poll_relax(ptr, val, timeout_ns) cpu_relax()
+#endif
+
+/**
+ * smp_cond_load_relaxed_timeout() - (Spin) wait for cond with no ordering
+ * guarantees until a timeout expires.
+ * @ptr: pointer to the variable to wait on.
+ * @cond: boolean expression to wait for.
+ * @time_expr_ns: expression that evaluates to monotonic time (in ns) or,
+ * on failure, returns a negative value.
+ * @timeout_ns: timeout value in ns
+ * Both of the above are assumed to be compatible with s64; the signed
+ * value is used to handle the failure case in @time_expr_ns.
+ *
+ * Equivalent to using READ_ONCE() on the condition variable.
+ *
+ * Callers that expect to wait for prolonged durations might want to
+ * take into account the availability of ARCH_HAS_CPU_RELAX.
+ */
+#ifndef smp_cond_load_relaxed_timeout
+#define smp_cond_load_relaxed_timeout(ptr, cond_expr, \
+ time_expr_ns, timeout_ns) \
+({ \
+ typeof(ptr) __PTR = (ptr); \
+ __unqual_scalar_typeof(*ptr) VAL; \
+ u32 __n = 0, __spin = SMP_TIMEOUT_POLL_COUNT; \
+ s64 __timeout = (s64)timeout_ns; \
+ s64 __time_now, __time_end = 0; \
+ \
+ for (;;) { \
+ VAL = READ_ONCE(*__PTR); \
+ if (cond_expr) \
+ break; \
+ cpu_poll_relax(__PTR, VAL, (u64)__timeout); \
+ if (++__n < __spin) \
+ continue; \
+ __time_now = (s64)(time_expr_ns); \
+ if (unlikely(__time_end == 0)) \
+ __time_end = __time_now + __timeout; \
+ __timeout = __time_end - __time_now; \
+ if (__time_now <= 0 || __timeout <= 0) { \
+ VAL = READ_ONCE(*__PTR); \
+ break; \
+ } \
+ __n = 0; \
+ } \
+ (typeof(*ptr))VAL; \
+})
+#endif
+
/*
* pmem_wmb() ensures that all stores for which the modification
* are written to persistent storage by preceding instructions have
--
2.31.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v9 02/12] arm64: barrier: Support smp_cond_load_relaxed_timeout()
2026-02-09 2:31 [PATCH v9 00/12] barrier: Add smp_cond_load_{relaxed,acquire}_timeout() Ankur Arora
2026-02-09 2:31 ` [PATCH v9 01/12] asm-generic: barrier: Add smp_cond_load_relaxed_timeout() Ankur Arora
@ 2026-02-09 2:31 ` Ankur Arora
2026-02-11 15:54 ` Catalin Marinas
2026-02-09 2:31 ` [PATCH v9 03/12] arm64/delay: move some constants out to a separate header Ankur Arora
` (9 subsequent siblings)
11 siblings, 1 reply; 33+ messages in thread
From: Ankur Arora @ 2026-02-09 2:31 UTC (permalink / raw)
To: linux-kernel, linux-arch, linux-arm-kernel, linux-pm, bpf
Cc: arnd, catalin.marinas, will, peterz, akpm, mark.rutland, harisokn,
cl, ast, rafael, daniel.lezcano, memxor, zhenglifeng1, xueshuai,
joao.m.martins, boris.ostrovsky, konrad.wilk, Ankur Arora
Support waiting in smp_cond_load_relaxed_timeout() via
__cmpwait_relaxed(). To ensure that we wake from waiting in
WFE periodically and don't block forever if there are no stores
to ptr, this path is only used when the event-stream is enabled.
Note that when using __cmpwait_relaxed() we ignore the timeout
value, allowing an overshoot of up to the event-stream period.
And, in the unlikely event that the event-stream is unavailable,
fall back to spin-waiting.
Also set SMP_TIMEOUT_POLL_COUNT to 1 so we do the time-check in
each iteration of smp_cond_load_relaxed_timeout().
And finally define ARCH_HAS_CPU_RELAX to indicate that we have
an optimized implementation of cpu_poll_relax().
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Suggested-by: Will Deacon <will@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
Note:
This commit additionally defines ARCH_HAS_CPU_RELAX.
Will: I've retained your acked-by. Please let me know if you
don't agree with this change.
---
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/barrier.h | 21 +++++++++++++++++++++
2 files changed, 22 insertions(+)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 93173f0a09c7..239fdca8e2cf 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -22,6 +22,7 @@ config ARM64
select ARCH_HAS_CACHE_LINE_SIZE
select ARCH_HAS_CC_PLATFORM
select ARCH_HAS_CPU_CACHE_INVALIDATE_MEMREGION
+ select ARCH_HAS_CPU_RELAX
select ARCH_HAS_CURRENT_STACK_POINTER
select ARCH_HAS_DEBUG_VIRTUAL
select ARCH_HAS_DEBUG_VM_PGTABLE
diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
index 9495c4441a46..6190e178db51 100644
--- a/arch/arm64/include/asm/barrier.h
+++ b/arch/arm64/include/asm/barrier.h
@@ -12,6 +12,7 @@
#include <linux/kasan-checks.h>
#include <asm/alternative-macros.h>
+#include <asm/vdso/processor.h>
#define __nops(n) ".rept " #n "\nnop\n.endr\n"
#define nops(n) asm volatile(__nops(n))
@@ -219,6 +220,26 @@ do { \
(typeof(*ptr))VAL; \
})
+/* Re-declared here to avoid include dependency. */
+extern bool arch_timer_evtstrm_available(void);
+
+/*
+ * In the common case, cpu_poll_relax() sits waiting in __cmpwait_relaxed()
+ * for the ptr value to change.
+ *
+ * Since this period is reasonably long, choose SMP_TIMEOUT_POLL_COUNT
+ * to be 1, so smp_cond_load_{relaxed,acquire}_timeout() does a
+ * time-check in each iteration.
+ */
+#define SMP_TIMEOUT_POLL_COUNT 1
+
+#define cpu_poll_relax(ptr, val, timeout_ns) do { \
+ if (arch_timer_evtstrm_available()) \
+ __cmpwait_relaxed(ptr, val); \
+ else \
+ cpu_relax(); \
+} while (0)
+
#include <asm-generic/barrier.h>
#endif /* __ASSEMBLER__ */
--
2.31.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v9 03/12] arm64/delay: move some constants out to a separate header
2026-02-09 2:31 [PATCH v9 00/12] barrier: Add smp_cond_load_{relaxed,acquire}_timeout() Ankur Arora
2026-02-09 2:31 ` [PATCH v9 01/12] asm-generic: barrier: Add smp_cond_load_relaxed_timeout() Ankur Arora
2026-02-09 2:31 ` [PATCH v9 02/12] arm64: barrier: Support smp_cond_load_relaxed_timeout() Ankur Arora
@ 2026-02-09 2:31 ` Ankur Arora
2026-02-11 16:01 ` Catalin Marinas
2026-02-09 2:31 ` [PATCH v9 04/12] arm64: support WFET in smp_cond_load_relaxed_timeout() Ankur Arora
` (8 subsequent siblings)
11 siblings, 1 reply; 33+ messages in thread
From: Ankur Arora @ 2026-02-09 2:31 UTC (permalink / raw)
To: linux-kernel, linux-arch, linux-arm-kernel, linux-pm, bpf
Cc: arnd, catalin.marinas, will, peterz, akpm, mark.rutland, harisokn,
cl, ast, rafael, daniel.lezcano, memxor, zhenglifeng1, xueshuai,
joao.m.martins, boris.ostrovsky, konrad.wilk, Ankur Arora,
Bjorn Andersson, Konrad Dybcio, Christoph Lameter
Move some constants and functions related to xloops and cycles
computation out to a new header. Also rename some macros in
qcom/rpmh-rsc.c that were occupying the same namespace.
No functional change.
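For reference, a worked conversion with illustrative numbers (a delay
loop calibrated such that loops_per_jiffy * HZ == 25,000,000, i.e. a
25 MHz timebase):
    /*
     * NSECS_TO_CYCLES(1000):
     *   xloops = 1000 * 0x5UL            = 5000  (0x5 ~= 2^32 / 10^9)
     *   cycles = (5000 * 25000000) >> 32 = 29    (exact value: 25)
     *
     * The rounded-up multiplier overestimates slightly, which is the
     * safe direction for a delay.
     */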
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Bjorn Andersson <andersson@kernel.org>
Cc: Konrad Dybcio <konradybcio@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Reviewed-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
---
Notes:
- workaround warnings from the kernel build robot.
arch/arm64/include/asm/delay-const.h | 25 +++++++++++++++++++++++++
arch/arm64/lib/delay.c | 13 +++----------
drivers/soc/qcom/rpmh-rsc.c | 8 ++++----
3 files changed, 32 insertions(+), 14 deletions(-)
create mode 100644 arch/arm64/include/asm/delay-const.h
diff --git a/arch/arm64/include/asm/delay-const.h b/arch/arm64/include/asm/delay-const.h
new file mode 100644
index 000000000000..63fb5fc24a90
--- /dev/null
+++ b/arch/arm64/include/asm/delay-const.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef _ASM_DELAY_CONST_H
+#define _ASM_DELAY_CONST_H
+
+#include <asm/param.h> /* For HZ */
+
+/* 2**32 / 1000000 (rounded up) */
+#define __usecs_to_xloops_mult 0x10C7UL
+
+/* 2**32 / 1000000000 (rounded up) */
+#define __nsecs_to_xloops_mult 0x5UL
+
+extern unsigned long loops_per_jiffy;
+static inline unsigned long xloops_to_cycles(unsigned long xloops)
+{
+ return (xloops * loops_per_jiffy * HZ) >> 32;
+}
+
+#define USECS_TO_CYCLES(time_usecs) \
+ xloops_to_cycles((time_usecs) * __usecs_to_xloops_mult)
+
+#define NSECS_TO_CYCLES(time_nsecs) \
+ xloops_to_cycles((time_nsecs) * __nsecs_to_xloops_mult)
+
+#endif /* _ASM_DELAY_CONST_H */
diff --git a/arch/arm64/lib/delay.c b/arch/arm64/lib/delay.c
index cb2062e7e234..511b5597e2a5 100644
--- a/arch/arm64/lib/delay.c
+++ b/arch/arm64/lib/delay.c
@@ -12,17 +12,10 @@
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/timex.h>
+#include <asm/delay-const.h>
#include <clocksource/arm_arch_timer.h>
-#define USECS_TO_CYCLES(time_usecs) \
- xloops_to_cycles((time_usecs) * 0x10C7UL)
-
-static inline unsigned long xloops_to_cycles(unsigned long xloops)
-{
- return (xloops * loops_per_jiffy * HZ) >> 32;
-}
-
void __delay(unsigned long cycles)
{
cycles_t start = get_cycles();
@@ -58,12 +51,12 @@ EXPORT_SYMBOL(__const_udelay);
void __udelay(unsigned long usecs)
{
- __const_udelay(usecs * 0x10C7UL); /* 2**32 / 1000000 (rounded up) */
+ __const_udelay(usecs * __usecs_to_xloops_mult);
}
EXPORT_SYMBOL(__udelay);
void __ndelay(unsigned long nsecs)
{
- __const_udelay(nsecs * 0x5UL); /* 2**32 / 1000000000 (rounded up) */
+ __const_udelay(nsecs * __nsecs_to_xloops_mult);
}
EXPORT_SYMBOL(__ndelay);
diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
index c6f7d5c9c493..ad5ec5c0de0a 100644
--- a/drivers/soc/qcom/rpmh-rsc.c
+++ b/drivers/soc/qcom/rpmh-rsc.c
@@ -146,10 +146,10 @@ enum {
* +---------------------------------------------------+
*/
-#define USECS_TO_CYCLES(time_usecs) \
- xloops_to_cycles((time_usecs) * 0x10C7UL)
+#define RPMH_USECS_TO_CYCLES(time_usecs) \
+ rpmh_xloops_to_cycles((time_usecs) * 0x10C7UL)
-static inline unsigned long xloops_to_cycles(u64 xloops)
+static inline unsigned long rpmh_xloops_to_cycles(u64 xloops)
{
return (xloops * loops_per_jiffy * HZ) >> 32;
}
@@ -819,7 +819,7 @@ void rpmh_rsc_write_next_wakeup(struct rsc_drv *drv)
wakeup_us = ktime_to_us(wakeup);
/* Convert the wakeup to arch timer scale */
- wakeup_cycles = USECS_TO_CYCLES(wakeup_us);
+ wakeup_cycles = RPMH_USECS_TO_CYCLES(wakeup_us);
wakeup_cycles += arch_timer_read_counter();
exit:
--
2.31.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v9 04/12] arm64: support WFET in smp_cond_load_relaxed_timeout()
2026-02-09 2:31 [PATCH v9 00/12] barrier: Add smp_cond_load_{relaxed,acquire}_timeout() Ankur Arora
` (2 preceding siblings ...)
2026-02-09 2:31 ` [PATCH v9 03/12] arm64/delay: move some constants out to a separate header Ankur Arora
@ 2026-02-09 2:31 ` Ankur Arora
2026-02-11 17:11 ` Catalin Marinas
2026-02-09 2:31 ` [PATCH v9 05/12] arm64: rqspinlock: Remove private copy of smp_cond_load_acquire_timewait() Ankur Arora
` (7 subsequent siblings)
11 siblings, 1 reply; 33+ messages in thread
From: Ankur Arora @ 2026-02-09 2:31 UTC (permalink / raw)
To: linux-kernel, linux-arch, linux-arm-kernel, linux-pm, bpf
Cc: arnd, catalin.marinas, will, peterz, akpm, mark.rutland, harisokn,
cl, ast, rafael, daniel.lezcano, memxor, zhenglifeng1, xueshuai,
joao.m.martins, boris.ostrovsky, konrad.wilk, Ankur Arora
To handle WFET, use __cmpwait_timeout() similarly to __cmpwait(). These
call out to the respective __cmpwait_case_timeout_##sz() and
__cmpwait_case_##sz() functions. (The "msr s0_3_c1_c0_0" in
__cmpwait_case_timeout_##sz() is the system-register encoding of WFET,
with the deadline in timer cycles.)
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
---
Notes:
- split out the WFET handling into __cmpwait_case_timeout_##sz()
instead of overloading __cmpwait_case_##sz().
- in one of the review comments there was a discussion of an added
warning for long timeout_ns values. I did not add a warning (or
__bad_ndelay() style failure). However, a comment in
smp_cond_load_relaxed_timeout() does mention that it might not
be a good idea to poll for long periods if you don't have
ARCH_HAS_CPU_RELAX.
The reason for the lack of proper warning/error is that for cases
where this interface is used in the spinlock path (as rqspinlock
does) there's no way to avoid this kind of polling.
arch/arm64/include/asm/barrier.h | 8 ++--
arch/arm64/include/asm/cmpxchg.h | 65 ++++++++++++++++++++++++++------
2 files changed, 58 insertions(+), 15 deletions(-)
diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
index 6190e178db51..fbd71cd4ef4e 100644
--- a/arch/arm64/include/asm/barrier.h
+++ b/arch/arm64/include/asm/barrier.h
@@ -224,8 +224,8 @@ do { \
extern bool arch_timer_evtstrm_available(void);
/*
- * In the common case, cpu_poll_relax() sits waiting in __cmpwait_relaxed()
- * for the ptr value to change.
+ * In the common case, cpu_poll_relax() sits waiting in __cmpwait_relaxed()/
+ * __cmpwait_relaxed_timeout() for the ptr value to change.
*
* Since this period is reasonably long, choose SMP_TIMEOUT_POLL_COUNT
* to be 1, so smp_cond_load_{relaxed,acquire}_timeout() does a
@@ -234,7 +234,9 @@ extern bool arch_timer_evtstrm_available(void);
#define SMP_TIMEOUT_POLL_COUNT 1
#define cpu_poll_relax(ptr, val, timeout_ns) do { \
- if (arch_timer_evtstrm_available()) \
+ if (alternative_has_cap_unlikely(ARM64_HAS_WFXT)) \
+ __cmpwait_relaxed_timeout(ptr, val, timeout_ns); \
+ else if (arch_timer_evtstrm_available()) \
__cmpwait_relaxed(ptr, val); \
else \
cpu_relax(); \
diff --git a/arch/arm64/include/asm/cmpxchg.h b/arch/arm64/include/asm/cmpxchg.h
index d7a540736741..dfb7d10a18be 100644
--- a/arch/arm64/include/asm/cmpxchg.h
+++ b/arch/arm64/include/asm/cmpxchg.h
@@ -12,6 +12,7 @@
#include <asm/barrier.h>
#include <asm/lse.h>
+#include <asm/delay-const.h>
/*
* We need separate acquire parameters for ll/sc and lse, since the full
@@ -208,9 +209,13 @@ __CMPXCHG_GEN(_mb)
__cmpxchg128((ptr), (o), (n)); \
})
+/* Re-declared here to avoid include dependency. */
+extern u64 (*arch_timer_read_counter)(void);
+
#define __CMPWAIT_CASE(w, sfx, sz) \
static inline void __cmpwait_case_##sz(volatile void *ptr, \
- unsigned long val) \
+ unsigned long val, \
+ u64 __maybe_unused timeout_ns) \
{ \
unsigned long tmp; \
\
@@ -233,20 +238,52 @@ __CMPWAIT_CASE( , , 64);
#undef __CMPWAIT_CASE
-#define __CMPWAIT_GEN(sfx) \
-static __always_inline void __cmpwait##sfx(volatile void *ptr, \
- unsigned long val, \
- int size) \
+#define __CMPWAIT_TIMEOUT_CASE(w, sfx, sz) \
+static inline void __cmpwait_case_timeout_##sz(volatile void *ptr, \
+ unsigned long val, \
+ u64 timeout_ns) \
+{ \
+ unsigned long tmp; \
+ u64 ecycles = arch_timer_read_counter() + \
+ NSECS_TO_CYCLES(timeout_ns); \
+ asm volatile( \
+ " sevl\n" \
+ " wfe\n" \
+ " ldxr" #sfx "\t%" #w "[tmp], %[v]\n" \
+ " eor %" #w "[tmp], %" #w "[tmp], %" #w "[val]\n" \
+ " cbnz %" #w "[tmp], 2f\n" \
+ " msr s0_3_c1_c0_0, %[ecycles]\n" \
+ "2:" \
+ : [tmp] "=&r" (tmp), [v] "+Q" (*(u##sz *)ptr) \
+ : [val] "r" (val), [ecycles] "r" (ecycles)); \
+}
+
+__CMPWAIT_TIMEOUT_CASE(w, b, 8);
+__CMPWAIT_TIMEOUT_CASE(w, h, 16);
+__CMPWAIT_TIMEOUT_CASE(w, , 32);
+__CMPWAIT_TIMEOUT_CASE( , , 64);
+
+#undef __CMPWAIT_TIMEOUT_CASE
+
+#define __CMPWAIT_GEN(timeout, sfx) \
+static __always_inline void __cmpwait##timeout##sfx(volatile void *ptr, \
+ unsigned long val, \
+ u64 timeout_ns, \
+ int size) \
{ \
switch (size) { \
case 1: \
- return __cmpwait_case##sfx##_8(ptr, (u8)val); \
+ return __cmpwait_case##timeout##sfx##_8(ptr, (u8)val, \
+ timeout_ns); \
case 2: \
- return __cmpwait_case##sfx##_16(ptr, (u16)val); \
+ return __cmpwait_case##timeout##sfx##_16(ptr, (u16)val, \
+ timeout_ns); \
case 4: \
- return __cmpwait_case##sfx##_32(ptr, val); \
+ return __cmpwait_case##timeout##sfx##_32(ptr, val, \
+ timeout_ns); \
case 8: \
- return __cmpwait_case##sfx##_64(ptr, val); \
+ return __cmpwait_case##timeout##sfx##_64(ptr, val, \
+ timeout_ns); \
default: \
BUILD_BUG(); \
} \
@@ -254,11 +291,15 @@ static __always_inline void __cmpwait##sfx(volatile void *ptr, \
unreachable(); \
}
-__CMPWAIT_GEN()
+__CMPWAIT_GEN( , )
+__CMPWAIT_GEN(_timeout, )
#undef __CMPWAIT_GEN
-#define __cmpwait_relaxed(ptr, val) \
- __cmpwait((ptr), (unsigned long)(val), sizeof(*(ptr)))
+#define __cmpwait_relaxed_timeout(ptr, val, timeout_ns) \
+ __cmpwait_timeout((ptr), (unsigned long)(val), timeout_ns, sizeof(*(ptr)))
+
+#define __cmpwait_relaxed(ptr, val) \
+ __cmpwait((ptr), (unsigned long)(val), 0, sizeof(*(ptr)))
#endif /* __ASM_CMPXCHG_H */
--
2.31.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v9 05/12] arm64: rqspinlock: Remove private copy of smp_cond_load_acquire_timewait()
2026-02-09 2:31 [PATCH v9 00/12] barrier: Add smp_cond_load_{relaxed,acquire}_timeout() Ankur Arora
` (3 preceding siblings ...)
2026-02-09 2:31 ` [PATCH v9 04/12] arm64: support WFET in smp_cond_load_relaxed_timeout() Ankur Arora
@ 2026-02-09 2:31 ` Ankur Arora
2026-02-09 2:31 ` [PATCH v9 06/12] asm-generic: barrier: Add smp_cond_load_acquire_timeout() Ankur Arora
` (6 subsequent siblings)
11 siblings, 0 replies; 33+ messages in thread
From: Ankur Arora @ 2026-02-09 2:31 UTC (permalink / raw)
To: linux-kernel, linux-arch, linux-arm-kernel, linux-pm, bpf
Cc: arnd, catalin.marinas, will, peterz, akpm, mark.rutland, harisokn,
cl, ast, rafael, daniel.lezcano, memxor, zhenglifeng1, xueshuai,
joao.m.martins, boris.ostrovsky, konrad.wilk, Ankur Arora
In preparation for defining smp_cond_load_acquire_timeout(), remove
the private copy. Lacking this, the rqspinlock code falls back to using
smp_cond_load_acquire().
Cc: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: bpf@vger.kernel.org
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Haris Okanovic <harisokn@amazon.com>
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
---
arch/arm64/include/asm/rqspinlock.h | 85 -----------------------------
1 file changed, 85 deletions(-)
diff --git a/arch/arm64/include/asm/rqspinlock.h b/arch/arm64/include/asm/rqspinlock.h
index 9ea0a74e5892..a385603436e9 100644
--- a/arch/arm64/include/asm/rqspinlock.h
+++ b/arch/arm64/include/asm/rqspinlock.h
@@ -3,91 +3,6 @@
#define _ASM_RQSPINLOCK_H
#include <asm/barrier.h>
-
-/*
- * Hardcode res_smp_cond_load_acquire implementations for arm64 to a custom
- * version based on [0]. In rqspinlock code, our conditional expression involves
- * checking the value _and_ additionally a timeout. However, on arm64, the
- * WFE-based implementation may never spin again if no stores occur to the
- * locked byte in the lock word. As such, we may be stuck forever if
- * event-stream based unblocking is not available on the platform for WFE spin
- * loops (arch_timer_evtstrm_available).
- *
- * Once support for smp_cond_load_acquire_timewait [0] lands, we can drop this
- * copy-paste.
- *
- * While we rely on the implementation to amortize the cost of sampling
- * cond_expr for us, it will not happen when event stream support is
- * unavailable, time_expr check is amortized. This is not the common case, and
- * it would be difficult to fit our logic in the time_expr_ns >= time_limit_ns
- * comparison, hence just let it be. In case of event-stream, the loop is woken
- * up at microsecond granularity.
- *
- * [0]: https://lore.kernel.org/lkml/20250203214911.898276-1-ankur.a.arora@oracle.com
- */
-
-#ifndef smp_cond_load_acquire_timewait
-
-#define smp_cond_time_check_count 200
-
-#define __smp_cond_load_relaxed_spinwait(ptr, cond_expr, time_expr_ns, \
- time_limit_ns) ({ \
- typeof(ptr) __PTR = (ptr); \
- __unqual_scalar_typeof(*ptr) VAL; \
- unsigned int __count = 0; \
- for (;;) { \
- VAL = READ_ONCE(*__PTR); \
- if (cond_expr) \
- break; \
- cpu_relax(); \
- if (__count++ < smp_cond_time_check_count) \
- continue; \
- if ((time_expr_ns) >= (time_limit_ns)) \
- break; \
- __count = 0; \
- } \
- (typeof(*ptr))VAL; \
-})
-
-#define __smp_cond_load_acquire_timewait(ptr, cond_expr, \
- time_expr_ns, time_limit_ns) \
-({ \
- typeof(ptr) __PTR = (ptr); \
- __unqual_scalar_typeof(*ptr) VAL; \
- for (;;) { \
- VAL = smp_load_acquire(__PTR); \
- if (cond_expr) \
- break; \
- __cmpwait_relaxed(__PTR, VAL); \
- if ((time_expr_ns) >= (time_limit_ns)) \
- break; \
- } \
- (typeof(*ptr))VAL; \
-})
-
-#define smp_cond_load_acquire_timewait(ptr, cond_expr, \
- time_expr_ns, time_limit_ns) \
-({ \
- __unqual_scalar_typeof(*ptr) _val; \
- int __wfe = arch_timer_evtstrm_available(); \
- \
- if (likely(__wfe)) { \
- _val = __smp_cond_load_acquire_timewait(ptr, cond_expr, \
- time_expr_ns, \
- time_limit_ns); \
- } else { \
- _val = __smp_cond_load_relaxed_spinwait(ptr, cond_expr, \
- time_expr_ns, \
- time_limit_ns); \
- smp_acquire__after_ctrl_dep(); \
- } \
- (typeof(*ptr))_val; \
-})
-
-#endif
-
-#define res_smp_cond_load_acquire(v, c) smp_cond_load_acquire_timewait(v, c, 0, 1)
-
#include <asm-generic/rqspinlock.h>
#endif /* _ASM_RQSPINLOCK_H */
--
2.31.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v9 06/12] asm-generic: barrier: Add smp_cond_load_acquire_timeout()
2026-02-09 2:31 [PATCH v9 00/12] barrier: Add smp_cond_load_{relaxed,acquire}_timeout() Ankur Arora
` (4 preceding siblings ...)
2026-02-09 2:31 ` [PATCH v9 05/12] arm64: rqspinlock: Remove private copy of smp_cond_load_acquire_timewait() Ankur Arora
@ 2026-02-09 2:31 ` Ankur Arora
2026-02-09 4:59 ` Randy Dunlap
2026-02-09 2:31 ` [PATCH v9 07/12] atomic: Add atomic_cond_read_*_timeout() Ankur Arora
` (5 subsequent siblings)
11 siblings, 1 reply; 33+ messages in thread
From: Ankur Arora @ 2026-02-09 2:31 UTC (permalink / raw)
To: linux-kernel, linux-arch, linux-arm-kernel, linux-pm, bpf
Cc: arnd, catalin.marinas, will, peterz, akpm, mark.rutland, harisokn,
cl, ast, rafael, daniel.lezcano, memxor, zhenglifeng1, xueshuai,
joao.m.martins, boris.ostrovsky, konrad.wilk, Ankur Arora
Add the acquire variant of smp_cond_load_relaxed_timeout(). This
reuses the relaxed variant, with additional LOAD->LOAD ordering.
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: linux-arch@vger.kernel.org
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Haris Okanovic <harisokn@amazon.com>
Tested-by: Haris Okanovic <harisokn@amazon.com>
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
---
include/asm-generic/barrier.h | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)
diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index 2738fe35c1df..ceac834c9e6c 100644
--- a/include/asm-generic/barrier.h
+++ b/include/asm-generic/barrier.h
@@ -345,6 +345,32 @@ do { \
})
#endif
+/**
+ * smp_cond_load_acquire_timeout() - (Spin) wait for cond with ACQUIRE ordering
+ * until a timeout expires.
+ * @ptr: pointer to the variable to wait on
+ * @cond: boolean expression to wait for
+ * @time_expr_ns: monotonic expression that evaluates to time in ns or,
+ * on failure, returns a negative value.
+ * @timeout_ns: timeout value in ns
+ * (Both of the above are assumed to be compatible with s64.)
+ *
+ * Equivalent to using smp_cond_load_acquire() on the condition variable with
+ * a timeout.
+ */
+#ifndef smp_cond_load_acquire_timeout
+#define smp_cond_load_acquire_timeout(ptr, cond_expr, \
+ time_expr_ns, timeout_ns) \
+({ \
+ __unqual_scalar_typeof(*ptr) _val; \
+ _val = smp_cond_load_relaxed_timeout(ptr, cond_expr, \
+ time_expr_ns, \
+ timeout_ns); \
+ smp_acquire__after_ctrl_dep(); \
+ (typeof(*ptr))_val; \
+})
+#endif
+
/*
* pmem_wmb() ensures that all stores for which the modification
* are written to persistent storage by preceding instructions have
--
2.31.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v9 07/12] atomic: Add atomic_cond_read_*_timeout()
2026-02-09 2:31 [PATCH v9 00/12] barrier: Add smp_cond_load_{relaxed,acquire}_timeout() Ankur Arora
` (5 preceding siblings ...)
2026-02-09 2:31 ` [PATCH v9 06/12] asm-generic: barrier: Add smp_cond_load_acquire_timeout() Ankur Arora
@ 2026-02-09 2:31 ` Ankur Arora
2026-02-11 17:25 ` Catalin Marinas
2026-02-09 2:31 ` [PATCH v9 08/12] locking/atomic: scripts: build atomic_long_cond_read_*_timeout() Ankur Arora
` (4 subsequent siblings)
11 siblings, 1 reply; 33+ messages in thread
From: Ankur Arora @ 2026-02-09 2:31 UTC (permalink / raw)
To: linux-kernel, linux-arch, linux-arm-kernel, linux-pm, bpf
Cc: arnd, catalin.marinas, will, peterz, akpm, mark.rutland, harisokn,
cl, ast, rafael, daniel.lezcano, memxor, zhenglifeng1, xueshuai,
joao.m.martins, boris.ostrovsky, konrad.wilk, Ankur Arora,
Boqun Feng
Add atomic load wrappers, atomic_cond_read_*_timeout() and
atomic64_cond_read_*_timeout(), for the cond-load timeout interfaces.
Also add a short description for the atomic_cond_read_{relaxed,acquire}()
and atomic_cond_read_{relaxed,acquire}_timeout() interfaces.
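An illustrative use (a sketch, not from this patch): wait for an
atomic_t refcount to drain, with ACQUIRE ordering on the final load:
    int v;

    v = atomic_cond_read_acquire_timeout(&refs, VAL == 0,
                                         ktime_get_mono_fast_ns(),
                                         NSEC_PER_MSEC);
    if (v)
        ; /* timed out with references still held */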
Cc: Will Deacon <will@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
---
Notes:
- mention these interfaces in Documentation/atomic_t.txt
Documentation/atomic_t.txt | 14 +++++++++-----
include/linux/atomic.h | 10 ++++++++++
2 files changed, 19 insertions(+), 5 deletions(-)
diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
index bee3b1bca9a7..0e53f6ccb558 100644
--- a/Documentation/atomic_t.txt
+++ b/Documentation/atomic_t.txt
@@ -16,6 +16,10 @@ Non-RMW ops:
atomic_read(), atomic_set()
atomic_read_acquire(), atomic_set_release()
+Non-RMW, non-atomic_t ops:
+
+ atomic_cond_read_{relaxed,acquire}()
+ atomic_cond_read_{relaxed,acquire}_timeout()
RMW atomic operations:
@@ -79,11 +83,11 @@ SEMANTICS
Non-RMW ops:
-The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
-implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
-smp_store_release() respectively. Therefore, if you find yourself only using
-the Non-RMW operations of atomic_t, you do not in fact need atomic_t at all
-and are doing it wrong.
+The non-RMW ops are (typically) regular, or conditional LOADs and STOREs and
+are canonically implemented using READ_ONCE(), WRITE_ONCE(),
+smp_load_acquire() and smp_store_release() respectively. Therefore, if you
+find yourself only using the Non-RMW operations of atomic_t, you do not in
+fact need atomic_t at all and are doing it wrong.
A note for the implementation of atomic_set{}() is that it must not break the
atomicity of the RMW ops. That is:
diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 8dd57c3a99e9..5bcb86e07784 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -31,6 +31,16 @@
#define atomic64_cond_read_acquire(v, c) smp_cond_load_acquire(&(v)->counter, (c))
#define atomic64_cond_read_relaxed(v, c) smp_cond_load_relaxed(&(v)->counter, (c))
+#define atomic_cond_read_acquire_timeout(v, c, e, t) \
+ smp_cond_load_acquire_timeout(&(v)->counter, (c), (e), (t))
+#define atomic_cond_read_relaxed_timeout(v, c, e, t) \
+ smp_cond_load_relaxed_timeout(&(v)->counter, (c), (e), (t))
+
+#define atomic64_cond_read_acquire_timeout(v, c, e, t) \
+ smp_cond_load_acquire_timeout(&(v)->counter, (c), (e), (t))
+#define atomic64_cond_read_relaxed_timeout(v, c, e, t) \
+ smp_cond_load_relaxed_timeout(&(v)->counter, (c), (e), (t))
+
/*
* The idea here is to build acquire/release variants by adding explicit
* barriers on top of the relaxed variant. In the case where the relaxed
--
2.31.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v9 08/12] locking/atomic: scripts: build atomic_long_cond_read_*_timeout()
2026-02-09 2:31 [PATCH v9 00/12] barrier: Add smp_cond_load_{relaxed,acquire}_timeout() Ankur Arora
` (6 preceding siblings ...)
2026-02-09 2:31 ` [PATCH v9 07/12] atomic: Add atomic_cond_read_*_timeout() Ankur Arora
@ 2026-02-09 2:31 ` Ankur Arora
2026-02-11 17:41 ` Catalin Marinas
2026-02-09 2:31 ` [PATCH v9 09/12] bpf/rqspinlock: switch check_timeout() to a clock interface Ankur Arora
` (3 subsequent siblings)
11 siblings, 1 reply; 33+ messages in thread
From: Ankur Arora @ 2026-02-09 2:31 UTC (permalink / raw)
To: linux-kernel, linux-arch, linux-arm-kernel, linux-pm, bpf
Cc: arnd, catalin.marinas, will, peterz, akpm, mark.rutland, harisokn,
cl, ast, rafael, daniel.lezcano, memxor, zhenglifeng1, xueshuai,
joao.m.martins, boris.ostrovsky, konrad.wilk, Ankur Arora,
Boqun Feng
Add the atomic long wrappers for the cond-load timeout interfaces.
Cc: Will Deacon <will@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
---
include/linux/atomic/atomic-long.h | 18 +++++++++++-------
scripts/atomic/gen-atomic-long.sh | 16 ++++++++++------
2 files changed, 21 insertions(+), 13 deletions(-)
diff --git a/include/linux/atomic/atomic-long.h b/include/linux/atomic/atomic-long.h
index f86b29d90877..e6da0189cbe6 100644
--- a/include/linux/atomic/atomic-long.h
+++ b/include/linux/atomic/atomic-long.h
@@ -11,14 +11,18 @@
#ifdef CONFIG_64BIT
typedef atomic64_t atomic_long_t;
-#define ATOMIC_LONG_INIT(i) ATOMIC64_INIT(i)
-#define atomic_long_cond_read_acquire atomic64_cond_read_acquire
-#define atomic_long_cond_read_relaxed atomic64_cond_read_relaxed
+#define ATOMIC_LONG_INIT(i) ATOMIC64_INIT(i)
+#define atomic_long_cond_read_acquire atomic64_cond_read_acquire
+#define atomic_long_cond_read_relaxed atomic64_cond_read_relaxed
+#define atomic_long_cond_read_acquire_timeout atomic64_cond_read_acquire_timeout
+#define atomic_long_cond_read_relaxed_timeout atomic64_cond_read_relaxed_timeout
#else
typedef atomic_t atomic_long_t;
-#define ATOMIC_LONG_INIT(i) ATOMIC_INIT(i)
-#define atomic_long_cond_read_acquire atomic_cond_read_acquire
-#define atomic_long_cond_read_relaxed atomic_cond_read_relaxed
+#define ATOMIC_LONG_INIT(i) ATOMIC_INIT(i)
+#define atomic_long_cond_read_acquire atomic_cond_read_acquire
+#define atomic_long_cond_read_relaxed atomic_cond_read_relaxed
+#define atomic_long_cond_read_acquire_timeout atomic_cond_read_acquire_timeout
+#define atomic_long_cond_read_relaxed_timeout atomic_cond_read_relaxed_timeout
#endif
/**
@@ -1809,4 +1813,4 @@ raw_atomic_long_dec_if_positive(atomic_long_t *v)
}
#endif /* _LINUX_ATOMIC_LONG_H */
-// eadf183c3600b8b92b91839dd3be6bcc560c752d
+// 475f45a880d1625faa5116dcfd6e943e4dbe1cd5
diff --git a/scripts/atomic/gen-atomic-long.sh b/scripts/atomic/gen-atomic-long.sh
index 9826be3ba986..874643dc74bd 100755
--- a/scripts/atomic/gen-atomic-long.sh
+++ b/scripts/atomic/gen-atomic-long.sh
@@ -79,14 +79,18 @@ cat << EOF
#ifdef CONFIG_64BIT
typedef atomic64_t atomic_long_t;
-#define ATOMIC_LONG_INIT(i) ATOMIC64_INIT(i)
-#define atomic_long_cond_read_acquire atomic64_cond_read_acquire
-#define atomic_long_cond_read_relaxed atomic64_cond_read_relaxed
+#define ATOMIC_LONG_INIT(i) ATOMIC64_INIT(i)
+#define atomic_long_cond_read_acquire atomic64_cond_read_acquire
+#define atomic_long_cond_read_relaxed atomic64_cond_read_relaxed
+#define atomic_long_cond_read_acquire_timeout atomic64_cond_read_acquire_timeout
+#define atomic_long_cond_read_relaxed_timeout atomic64_cond_read_relaxed_timeout
#else
typedef atomic_t atomic_long_t;
-#define ATOMIC_LONG_INIT(i) ATOMIC_INIT(i)
-#define atomic_long_cond_read_acquire atomic_cond_read_acquire
-#define atomic_long_cond_read_relaxed atomic_cond_read_relaxed
+#define ATOMIC_LONG_INIT(i) ATOMIC_INIT(i)
+#define atomic_long_cond_read_acquire atomic_cond_read_acquire
+#define atomic_long_cond_read_relaxed atomic_cond_read_relaxed
+#define atomic_long_cond_read_acquire_timeout atomic_cond_read_acquire_timeout
+#define atomic_long_cond_read_relaxed_timeout atomic_cond_read_relaxed_timeout
#endif
EOF
--
2.31.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v9 09/12] bpf/rqspinlock: switch check_timeout() to a clock interface
2026-02-09 2:31 [PATCH v9 00/12] barrier: Add smp_cond_load_{relaxed,acquire}_timeout() Ankur Arora
` (7 preceding siblings ...)
2026-02-09 2:31 ` [PATCH v9 08/12] locking/atomic: scripts: build atomic_long_cond_read_*_timeout() Ankur Arora
@ 2026-02-09 2:31 ` Ankur Arora
2026-02-09 3:05 ` bot+bpf-ci
2026-02-09 2:31 ` [PATCH v9 10/12] bpf/rqspinlock: Use smp_cond_load_acquire_timeout() Ankur Arora
` (2 subsequent siblings)
11 siblings, 1 reply; 33+ messages in thread
From: Ankur Arora @ 2026-02-09 2:31 UTC (permalink / raw)
To: linux-kernel, linux-arch, linux-arm-kernel, linux-pm, bpf
Cc: arnd, catalin.marinas, will, peterz, akpm, mark.rutland, harisokn,
cl, ast, rafael, daniel.lezcano, memxor, zhenglifeng1, xueshuai,
joao.m.martins, boris.ostrovsky, konrad.wilk, Ankur Arora
check_timeout() reads the current time and, depending on how much
time has passed, checks for deadlock or timeout, returning 0 on
success or a negative errno on deadlock or timeout.
Switch this to a clock-style interface, where it functions as a
clock in the "lock-domain", returning the current time until a
deadlock or timeout occurs. Once a deadlock or timeout has occurred,
it stops functioning as a clock and returns an error.
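A caller then treats it as a fallible clock read (sketch):
    s64 now = clock_deadlock(lock, mask, &ts);

    if (now < 0)
        return now; /* -EDEADLK or -ETIMEDOUT: stop spinning */
    /* otherwise 'now' is the current monotonic time in ns */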
Cc: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: bpf@vger.kernel.org
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
---
kernel/bpf/rqspinlock.c | 41 +++++++++++++++++++++++++++--------------
1 file changed, 27 insertions(+), 14 deletions(-)
diff --git a/kernel/bpf/rqspinlock.c b/kernel/bpf/rqspinlock.c
index f7d0c8d4644e..ac9b3572e42f 100644
--- a/kernel/bpf/rqspinlock.c
+++ b/kernel/bpf/rqspinlock.c
@@ -196,8 +196,12 @@ static noinline int check_deadlock_ABBA(rqspinlock_t *lock, u32 mask)
return 0;
}
-static noinline int check_timeout(rqspinlock_t *lock, u32 mask,
- struct rqspinlock_timeout *ts)
+/*
+ * Returns current monotonic time in ns on success or a negative errno
+ * value on failure due to timeout expiration or detection of deadlock.
+ */
+static noinline s64 clock_deadlock(rqspinlock_t *lock, u32 mask,
+ struct rqspinlock_timeout *ts)
{
u64 prev = ts->cur;
u64 time;
@@ -207,7 +211,7 @@ static noinline int check_timeout(rqspinlock_t *lock, u32 mask,
return -EDEADLK;
ts->cur = ktime_get_mono_fast_ns();
ts->timeout_end = ts->cur + ts->duration;
- return 0;
+ return (s64)ts->cur;
}
time = ktime_get_mono_fast_ns();
@@ -219,11 +223,15 @@ static noinline int check_timeout(rqspinlock_t *lock, u32 mask,
* checks.
*/
if (prev + NSEC_PER_MSEC < time) {
+ int ret;
ts->cur = time;
- return check_deadlock_ABBA(lock, mask);
+ ret = check_deadlock_ABBA(lock, mask);
+ if (ret)
+ return ret;
+
}
- return 0;
+ return (s64)time;
}
/*
@@ -234,12 +242,12 @@ static noinline int check_timeout(rqspinlock_t *lock, u32 mask,
#define RES_CHECK_TIMEOUT(ts, ret, mask) \
({ \
if (!(ts).spin++) \
- (ret) = check_timeout((lock), (mask), &(ts)); \
+ (ret) = clock_deadlock((lock), (mask), &(ts));\
(ret); \
})
#else
#define RES_CHECK_TIMEOUT(ts, ret, mask) \
- ({ (ret) = check_timeout((lock), (mask), &(ts)); })
+ ({ (ret) = clock_deadlock((lock), (mask), &(ts)); })
#endif
/*
@@ -261,7 +269,8 @@ static noinline int check_timeout(rqspinlock_t *lock, u32 mask,
int __lockfunc resilient_tas_spin_lock(rqspinlock_t *lock)
{
struct rqspinlock_timeout ts;
- int val, ret = 0;
+ s64 ret = 0;
+ int val;
RES_INIT_TIMEOUT(ts);
/*
@@ -280,7 +289,7 @@ int __lockfunc resilient_tas_spin_lock(rqspinlock_t *lock)
val = atomic_read(&lock->val);
if (val || !atomic_try_cmpxchg(&lock->val, &val, 1)) {
- if (RES_CHECK_TIMEOUT(ts, ret, ~0u))
+ if (RES_CHECK_TIMEOUT(ts, ret, ~0u) < 0)
goto out;
cpu_relax();
goto retry;
@@ -339,6 +348,7 @@ int __lockfunc resilient_queued_spin_lock_slowpath(rqspinlock_t *lock, u32 val)
{
struct mcs_spinlock *prev, *next, *node;
struct rqspinlock_timeout ts;
+ s64 timeout_err = 0;
int idx, ret = 0;
u32 old, tail;
@@ -405,10 +415,10 @@ int __lockfunc resilient_queued_spin_lock_slowpath(rqspinlock_t *lock, u32 val)
*/
if (val & _Q_LOCKED_MASK) {
RES_RESET_TIMEOUT(ts, RES_DEF_TIMEOUT);
- res_smp_cond_load_acquire(&lock->locked, !VAL || RES_CHECK_TIMEOUT(ts, ret, _Q_LOCKED_MASK));
+ res_smp_cond_load_acquire(&lock->locked, !VAL || RES_CHECK_TIMEOUT(ts, timeout_err, _Q_LOCKED_MASK) < 0);
}
- if (ret) {
+ if (timeout_err < 0) {
/*
* We waited for the locked bit to go back to 0, as the pending
* waiter, but timed out. We need to clear the pending bit since
@@ -420,6 +430,7 @@ int __lockfunc resilient_queued_spin_lock_slowpath(rqspinlock_t *lock, u32 val)
*/
clear_pending(lock);
lockevent_inc(rqspinlock_lock_timeout);
+ ret = timeout_err;
goto err_release_entry;
}
@@ -567,18 +578,19 @@ int __lockfunc resilient_queued_spin_lock_slowpath(rqspinlock_t *lock, u32 val)
*/
RES_RESET_TIMEOUT(ts, RES_DEF_TIMEOUT * 2);
val = res_atomic_cond_read_acquire(&lock->val, !(VAL & _Q_LOCKED_PENDING_MASK) ||
- RES_CHECK_TIMEOUT(ts, ret, _Q_LOCKED_PENDING_MASK));
+ RES_CHECK_TIMEOUT(ts, timeout_err, _Q_LOCKED_PENDING_MASK) < 0);
/* Disable queue destruction when we detect deadlocks. */
- if (ret == -EDEADLK) {
+ if (timeout_err == -EDEADLK) {
if (!next)
next = smp_cond_load_relaxed(&node->next, (VAL));
arch_mcs_spin_unlock_contended(&next->locked);
+ ret = timeout_err;
goto err_release_node;
}
waitq_timeout:
- if (ret) {
+ if (timeout_err < 0) {
/*
* If the tail is still pointing to us, then we are the final waiter,
* and are responsible for resetting the tail back to 0. Otherwise, if
@@ -608,6 +620,7 @@ int __lockfunc resilient_queued_spin_lock_slowpath(rqspinlock_t *lock, u32 val)
WRITE_ONCE(next->locked, RES_TIMEOUT_VAL);
}
lockevent_inc(rqspinlock_lock_timeout);
+ ret = timeout_err;
goto err_release_node;
}
--
2.31.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v9 10/12] bpf/rqspinlock: Use smp_cond_load_acquire_timeout()
2026-02-09 2:31 [PATCH v9 00/12] barrier: Add smp_cond_load_{relaxed,acquire}_timeout() Ankur Arora
` (8 preceding siblings ...)
2026-02-09 2:31 ` [PATCH v9 09/12] bpf/rqspinlock: switch check_timeout() to a clock interface Ankur Arora
@ 2026-02-09 2:31 ` Ankur Arora
2026-02-09 2:31 ` [PATCH v9 11/12] sched: add need-resched timed wait interface Ankur Arora
2026-02-09 2:31 ` [PATCH v9 12/12] cpuidle/poll_state: Wait for need-resched via tif_need_resched_relaxed_wait() Ankur Arora
11 siblings, 0 replies; 33+ messages in thread
From: Ankur Arora @ 2026-02-09 2:31 UTC (permalink / raw)
To: linux-kernel, linux-arch, linux-arm-kernel, linux-pm, bpf
Cc: arnd, catalin.marinas, will, peterz, akpm, mark.rutland, harisokn,
cl, ast, rafael, daniel.lezcano, memxor, zhenglifeng1, xueshuai,
joao.m.martins, boris.ostrovsky, konrad.wilk, Ankur Arora
Switch out the conditional load interfaces used by rqspinlock
to smp_cond_load_acquire_timeout() and its wrapper,
atomic_cond_read_acquire_timeout().
Both these handle the timeout and amortize as needed, so use
clock_deadlock() directly instead of going through RES_CHECK_TIMEOUT().
For correctness, however, we need to ensure that the timeout case in
smp_cond_load_acquire_timeout() always agrees with that in
clock_deadlock(), which returns -ETIMEDOUT.
For the most part, this is fine because smp_cond_load_acquire_timeout()
does not have an independent clock and does not do double reads from
clock_deadlock(), either of which could cause its internal state to go
out of sync with that of clock_deadlock().
There is, however, an edge case where clock_deadlock() checks for:
if (time > ts->timeout_end)
return -ETIMEDOUT;
while smp_cond_load_acquire_timeout() checks for:
__time_now = (time_expr_ns);
if (__time_now <= 0 || __time_now >= __time_end) {
VAL = READ_ONCE(*__PTR);
break;
}
This runs into a problem when (__time_now == __time_end) since
clock_deadlock() does not treat it as a timeout condition but
the second clause in the conditional above does.
So, add an equality check in clock_deadlock().
Finally, redefine SMP_TIMEOUT_POLL_COUNT to be 16k to be similar to the
spin-count used in RES_CHECK_TIMEOUT(). We only do this for non-arm64
as that uses a waiting implementation.
Cc: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: bpf@vger.kernel.org
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
---
kernel/bpf/rqspinlock.c | 37 ++++++++++++++++++++-----------------
1 file changed, 20 insertions(+), 17 deletions(-)
diff --git a/kernel/bpf/rqspinlock.c b/kernel/bpf/rqspinlock.c
index ac9b3572e42f..2a361c4c7393 100644
--- a/kernel/bpf/rqspinlock.c
+++ b/kernel/bpf/rqspinlock.c
@@ -215,7 +215,7 @@ static noinline s64 clock_deadlock(rqspinlock_t *lock, u32 mask,
}
time = ktime_get_mono_fast_ns();
- if (time > ts->timeout_end)
+ if (time >= ts->timeout_end)
return -ETIMEDOUT;
/*
@@ -235,20 +235,14 @@ static noinline s64 clock_deadlock(rqspinlock_t *lock, u32 mask,
}
/*
- * Do not amortize with spins when res_smp_cond_load_acquire is defined,
- * as the macro does internal amortization for us.
+ * Amortize timeout check for busy-wait loops.
*/
-#ifndef res_smp_cond_load_acquire
#define RES_CHECK_TIMEOUT(ts, ret, mask) \
({ \
if (!(ts).spin++) \
(ret) = clock_deadlock((lock), (mask), &(ts));\
(ret); \
})
-#else
-#define RES_CHECK_TIMEOUT(ts, ret, mask) \
- ({ (ret) = clock_deadlock((lock), (mask), &(ts)); })
-#endif
/*
* Initialize the 'spin' member.
@@ -262,6 +256,18 @@ static noinline s64 clock_deadlock(rqspinlock_t *lock, u32 mask,
*/
#define RES_RESET_TIMEOUT(ts, _duration) ({ (ts).timeout_end = 0; (ts).duration = _duration; })
+/*
+ * Limit how often we invoke clock_deadlock() while spin-waiting in
+ * smp_cond_load_acquire_timeout() or atomic_cond_read_acquire_timeout().
+ *
+ * (ARM64 generally uses a waited implementation so we use the default
+ * value there.)
+ */
+#ifndef CONFIG_ARM64
+#undef SMP_TIMEOUT_POLL_COUNT
+#define SMP_TIMEOUT_POLL_COUNT (16*1024)
+#endif
+
/*
* Provide a test-and-set fallback for cases when queued spin lock support is
* absent from the architecture.
@@ -312,12 +318,6 @@ EXPORT_SYMBOL_GPL(resilient_tas_spin_lock);
*/
static DEFINE_PER_CPU_ALIGNED(struct qnode, rqnodes[_Q_MAX_NODES]);
-#ifndef res_smp_cond_load_acquire
-#define res_smp_cond_load_acquire(v, c) smp_cond_load_acquire(v, c)
-#endif
-
-#define res_atomic_cond_read_acquire(v, c) res_smp_cond_load_acquire(&(v)->counter, (c))
-
/**
* resilient_queued_spin_lock_slowpath - acquire the queued spinlock
* @lock: Pointer to queued spinlock structure
@@ -415,7 +415,9 @@ int __lockfunc resilient_queued_spin_lock_slowpath(rqspinlock_t *lock, u32 val)
*/
if (val & _Q_LOCKED_MASK) {
RES_RESET_TIMEOUT(ts, RES_DEF_TIMEOUT);
- res_smp_cond_load_acquire(&lock->locked, !VAL || RES_CHECK_TIMEOUT(ts, timeout_err, _Q_LOCKED_MASK) < 0);
+ smp_cond_load_acquire_timeout(&lock->locked, !VAL,
+ (timeout_err = clock_deadlock(lock, _Q_LOCKED_MASK, &ts)),
+ ts.duration);
}
if (timeout_err < 0) {
@@ -577,8 +579,9 @@ int __lockfunc resilient_queued_spin_lock_slowpath(rqspinlock_t *lock, u32 val)
* us.
*/
RES_RESET_TIMEOUT(ts, RES_DEF_TIMEOUT * 2);
- val = res_atomic_cond_read_acquire(&lock->val, !(VAL & _Q_LOCKED_PENDING_MASK) ||
- RES_CHECK_TIMEOUT(ts, timeout_err, _Q_LOCKED_PENDING_MASK) < 0);
+ val = atomic_cond_read_acquire_timeout(&lock->val, !(VAL & _Q_LOCKED_PENDING_MASK),
+ (timeout_err = clock_deadlock(lock, _Q_LOCKED_PENDING_MASK, &ts)),
+ ts.duration);
/* Disable queue destruction when we detect deadlocks. */
if (timeout_err == -EDEADLK) {
--
2.31.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v9 11/12] sched: add need-resched timed wait interface
2026-02-09 2:31 [PATCH v9 00/12] barrier: Add smp_cond_load_{relaxed,acquire}_timeout() Ankur Arora
` (9 preceding siblings ...)
2026-02-09 2:31 ` [PATCH v9 10/12] bpf/rqspinlock: Use smp_cond_load_acquire_timeout() Ankur Arora
@ 2026-02-09 2:31 ` Ankur Arora
2026-02-09 3:05 ` bot+bpf-ci
2026-02-09 2:31 ` [PATCH v9 12/12] cpuidle/poll_state: Wait for need-resched via tif_need_resched_relaxed_wait() Ankur Arora
11 siblings, 1 reply; 33+ messages in thread
From: Ankur Arora @ 2026-02-09 2:31 UTC (permalink / raw)
To: linux-kernel, linux-arch, linux-arm-kernel, linux-pm, bpf
Cc: arnd, catalin.marinas, will, peterz, akpm, mark.rutland, harisokn,
cl, ast, rafael, daniel.lezcano, memxor, zhenglifeng1, xueshuai,
joao.m.martins, boris.ostrovsky, konrad.wilk, Ankur Arora,
Ingo Molnar
Add tif_bitset_relaxed_wait() (and tif_need_resched_relaxed_wait(),
which wraps it), which takes a thread_info bit and a timeout duration
as parameters and waits until the bit is set or the timeout expires.
The wait is implemented via smp_cond_load_relaxed_timeout().
smp_cond_load_relaxed_timeout() essentially provides the pattern used
in poll_idle() where we spin in a loop waiting for the flag to change
until a timeout occurs.
tif_need_resched_relaxed_wait() allows us to abstract out the internals
of waiting, scheduler specific details etc.
Placed in linux/sched/idle.h instead of linux/thread_info.h to work
around recursive include hell.
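For illustration, the generic form with a hypothetical caller polling
for a different thread_info bit:
    /* Poll (preemption disabled) up to 10us for TIF_SIGPENDING. */
    if (tif_bitset_relaxed_wait(TIF_SIGPENDING, 10 * NSEC_PER_USEC))
        ; /* bit was set within the budget */
tif_need_resched_relaxed_wait() is just this with TIF_NEED_RESCHED.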
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: linux-pm@vger.kernel.org
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
---
Notes:
- Make tif_bitset_relaxed_wait() __always_inline to work around the
noinstr section warning.
- use the BIT() macro when checking for the tif bit being set
include/linux/sched/idle.h | 29 +++++++++++++++++++++++++++++
1 file changed, 29 insertions(+)
diff --git a/include/linux/sched/idle.h b/include/linux/sched/idle.h
index 8465ff1f20d1..ddee9b019895 100644
--- a/include/linux/sched/idle.h
+++ b/include/linux/sched/idle.h
@@ -3,6 +3,7 @@
#define _LINUX_SCHED_IDLE_H
#include <linux/sched.h>
+#include <linux/sched/clock.h>
enum cpu_idle_type {
__CPU_NOT_IDLE = 0,
@@ -113,4 +114,32 @@ static __always_inline void current_clr_polling(void)
}
#endif
+/*
+ * Caller needs to make sure that the thread context cannot be preempted
+ * or migrated, so current_thread_info() cannot change from under us.
+ *
+ * This also allows us to safely stay in the local_clock domain.
+ */
+static __always_inline bool tif_bitset_relaxed_wait(int tif, u64 timeout_ns)
+{
+ unsigned long flags;
+
+ flags = smp_cond_load_relaxed_timeout(&current_thread_info()->flags,
+ (VAL & BIT(tif)),
+ local_clock_noinstr(),
+ timeout_ns);
+ return flags & BIT(tif);
+}
+
+/**
+ * tif_need_resched_relaxed_wait() - Wait for need-resched to be set
+ * with no ordering guarantees until a timeout expires.
+ *
+ * @timeout_ns: timeout value in ns.
+ */
+static __always_inline bool tif_need_resched_relaxed_wait(u64 timeout_ns)
+{
+ return tif_bitset_relaxed_wait(TIF_NEED_RESCHED, timeout_ns);
+}
+
#endif /* _LINUX_SCHED_IDLE_H */
--
2.31.1
* [PATCH v9 12/12] cpuidle/poll_state: Wait for need-resched via tif_need_resched_relaxed_wait()
2026-02-09 2:31 [PATCH v9 00/12] barrier: Add smp_cond_load_{relaxed,acquire}_timeout() Ankur Arora
` (10 preceding siblings ...)
2026-02-09 2:31 ` [PATCH v9 11/12] sched: add need-resched timed wait interface Ankur Arora
@ 2026-02-09 2:31 ` Ankur Arora
2026-02-10 16:55 ` Rafael J. Wysocki
11 siblings, 1 reply; 33+ messages in thread
From: Ankur Arora @ 2026-02-09 2:31 UTC (permalink / raw)
To: linux-kernel, linux-arch, linux-arm-kernel, linux-pm, bpf
Cc: arnd, catalin.marinas, will, peterz, akpm, mark.rutland, harisokn,
cl, ast, rafael, daniel.lezcano, memxor, zhenglifeng1, xueshuai,
joao.m.martins, boris.ostrovsky, konrad.wilk, Ankur Arora
The inner loop in poll_idle() polls over the thread_info flags,
waiting to see if the thread has TIF_NEED_RESCHED set. The loop
exits once the condition is met, or if the poll time limit has
been exceeded.
To minimize the number of instructions executed in each iteration,
the time check is rate-limited. In addition, each loop iteration
executes cpu_relax() which on certain platforms provides a hint to
the pipeline that the loop busy-waits, allowing the processor to
reduce power consumption.
Switch over to tif_need_resched_relaxed_wait() instead, since that
provides exactly that.
However, since we want to minimize power consumption in idle, building
of cpuidle/poll_state.c continues to depend on CONFIG_ARCH_HAS_CPU_RELAX
as that serves as an indicator that the platform supports an optimized
version of tif_need_resched_relaxed_wait() (via
smp_cond_load_relaxed_timeout()).
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: linux-pm@vger.kernel.org
Suggested-by: "Rafael J. Wysocki" <rafael@kernel.org>
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
---
Notes:
- get rid of unnecessary variable assignments, casts etc
drivers/cpuidle/poll_state.c | 21 +--------------------
1 file changed, 1 insertion(+), 20 deletions(-)
diff --git a/drivers/cpuidle/poll_state.c b/drivers/cpuidle/poll_state.c
index c7524e4c522a..7443b3e971ba 100644
--- a/drivers/cpuidle/poll_state.c
+++ b/drivers/cpuidle/poll_state.c
@@ -6,41 +6,22 @@
#include <linux/cpuidle.h>
#include <linux/export.h>
#include <linux/irqflags.h>
-#include <linux/sched.h>
-#include <linux/sched/clock.h>
#include <linux/sched/idle.h>
#include <linux/sprintf.h>
#include <linux/types.h>
-#define POLL_IDLE_RELAX_COUNT 200
-
static int __cpuidle poll_idle(struct cpuidle_device *dev,
struct cpuidle_driver *drv, int index)
{
- u64 time_start;
-
- time_start = local_clock_noinstr();
-
dev->poll_time_limit = false;
raw_local_irq_enable();
if (!current_set_polling_and_test()) {
- unsigned int loop_count = 0;
u64 limit;
limit = cpuidle_poll_time(drv, dev);
- while (!need_resched()) {
- cpu_relax();
- if (loop_count++ < POLL_IDLE_RELAX_COUNT)
- continue;
-
- loop_count = 0;
- if (local_clock_noinstr() - time_start > limit) {
- dev->poll_time_limit = true;
- break;
- }
- }
+ dev->poll_time_limit = !tif_need_resched_relaxed_wait(limit);
}
raw_local_irq_disable();
--
2.31.1
* Re: [PATCH v9 11/12] sched: add need-resched timed wait interface
2026-02-09 2:31 ` [PATCH v9 11/12] sched: add need-resched timed wait interface Ankur Arora
@ 2026-02-09 3:05 ` bot+bpf-ci
0 siblings, 0 replies; 33+ messages in thread
From: bot+bpf-ci @ 2026-02-09 3:05 UTC (permalink / raw)
To: ankur.a.arora, linux-kernel, linux-arch, linux-arm-kernel,
linux-pm, bpf
Cc: arnd, catalin.marinas, will, peterz, akpm, mark.rutland, harisokn,
cl, ast, rafael, daniel.lezcano, memxor, zhenglifeng1, xueshuai,
joao.m.martins, boris.ostrovsky, konrad.wilk, ankur.a.arora,
mingo, ast, andrii, daniel, martin.lau, eddyz87, yonghong.song,
clm, ihor.solodrai
> The wait is implemented via smp_cond_load_relaxed_timeout().
>
> smp_cond_load_acquire_timeout() essentially provides the pattern used
> in poll_idle() where we spin in a loop waiting for the flag to change
> until a timeout occurs.
This isn't a bug, but the commit message mentions smp_cond_load_acquire_timeout()
when describing the poll_idle() pattern.
However, the code uses smp_cond_load_relaxed_timeout(), and the subsequent
poll_idle() conversion (patch 12/12) also uses the relaxed variant via
tif_need_resched_relaxed_wait().
Should this say smp_cond_load_relaxed_timeout() instead of
smp_cond_load_acquire_timeout() for consistency with the implementation?
---
AI reviewed your patch. Please fix the bug or reply by email explaining why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/21810635323
AI-authorship-score: low
AI-authorship-explanation: The commit follows standard kernel style with iterative improvements across 9 versions and reviewer feedback, consistent with human authorship.
issues-found: 1
issue-severity-score: low
issue-severity-explanation: Minor commit message inaccuracy referencing wrong function name, no impact on code correctness or runtime behavior.
* Re: [PATCH v9 09/12] bpf/rqspinlock: switch check_timeout() to a clock interface
2026-02-09 2:31 ` [PATCH v9 09/12] bpf/rqspinlock: switch check_timeout() to a clock interface Ankur Arora
@ 2026-02-09 3:05 ` bot+bpf-ci
0 siblings, 0 replies; 33+ messages in thread
From: bot+bpf-ci @ 2026-02-09 3:05 UTC (permalink / raw)
To: ankur.a.arora, linux-kernel, linux-arch, linux-arm-kernel,
linux-pm, bpf
Cc: arnd, catalin.marinas, will, peterz, akpm, mark.rutland, harisokn,
cl, ast, rafael, daniel.lezcano, memxor, zhenglifeng1, xueshuai,
joao.m.martins, boris.ostrovsky, konrad.wilk, ankur.a.arora, ast,
andrii, daniel, martin.lau, eddyz87, yonghong.song, clm,
ihor.solodrai
> diff --git a/kernel/bpf/rqspinlock.c b/kernel/bpf/rqspinlock.c
> index 2fdfa828e3d3..f02b1c5f0222 100644
> --- a/kernel/bpf/rqspinlock.c
> +++ b/kernel/bpf/rqspinlock.c
[ ... ]
> @@ -340,6 +349,7 @@ int __lockfunc resilient_queued_spin_lock_slowpath(rqspinlock_t *lock, u32 val)
> {
> struct mcs_spinlock *prev, *next, *node;
> struct rqspinlock_timeout ts;
> + s64 timeout_err = 0;
> int idx, ret = 0;
> u32 old, tail;
[ ... ]
In arch_mcs_spin_lock_contended(), when a waiter in the MCS queue receives
a timeout signal (RES_TIMEOUT_VAL) from the previous waiter:
> if (old & _Q_TAIL_MASK) {
> int val;
>
> prev = decode_tail(old, rqnodes);
>
> /* Link @node into the waitqueue. */
> WRITE_ONCE(prev->next, node);
>
> val = arch_mcs_spin_lock_contended(&node->locked);
> if (val == RES_TIMEOUT_VAL) {
> ret = -ETIMEDOUT;
> goto waitq_timeout;
> }
This sets ret = -ETIMEDOUT but does not set timeout_err. When execution
reaches the waitq_timeout label:
> waitq_timeout:
> - if (ret) {
> + if (timeout_err < 0) {
The condition evaluates to false since timeout_err is still 0. This causes
the timeout handling to be skipped entirely, and execution falls through
to the lock acquisition path. If the lock is acquired, the function
returns ret (which is -ETIMEDOUT), making the caller believe the lock was
not acquired when it actually was. This appears to be a lock leak.
Should the goto waitq_timeout path also set timeout_err = -ETIMEDOUT?
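For illustration, a minimal sketch of the suggested fix (assuming
timeout_err is in scope at this point of
resilient_queued_spin_lock_slowpath(), as in the hunk quoted above):

	val = arch_mcs_spin_lock_contended(&node->locked);
	if (val == RES_TIMEOUT_VAL) {
		/* record the timeout so the waitq_timeout handling
		 * below stays reachable */
		timeout_err = -ETIMEDOUT;
		ret = -ETIMEDOUT;
		goto waitq_timeout;
	}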
---
AI reviewed your patch. Please fix the bug or reply by email explaining why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/21810635323
AI-authorship-score: low
AI-authorship-explanation: The commit shows typical human developer patterns including a missed edge case during refactoring, which is a common human error rather than systematic AI-generated code issues.
issues-found: 1
issue-severity-score: high
issue-severity-explanation: Lock leak bug where the function may acquire a lock but return an error code, causing callers to think acquisition failed when it actually succeeded.
* Re: [PATCH v9 01/12] asm-generic: barrier: Add smp_cond_load_relaxed_timeout()
2026-02-09 2:31 ` [PATCH v9 01/12] asm-generic: barrier: Add smp_cond_load_relaxed_timeout() Ankur Arora
@ 2026-02-09 4:57 ` Randy Dunlap
2026-02-10 5:52 ` Ankur Arora
2026-02-11 15:39 ` Catalin Marinas
2026-02-12 9:56 ` David Laight
2 siblings, 1 reply; 33+ messages in thread
From: Randy Dunlap @ 2026-02-09 4:57 UTC (permalink / raw)
To: Ankur Arora, linux-kernel, linux-arch, linux-arm-kernel, linux-pm,
bpf
Cc: arnd, catalin.marinas, will, peterz, akpm, mark.rutland, harisokn,
cl, ast, rafael, daniel.lezcano, memxor, zhenglifeng1, xueshuai,
joao.m.martins, boris.ostrovsky, konrad.wilk
> +/**
> + * smp_cond_load_relaxed_timeout() - (Spin) wait for cond with no ordering
> + * guarantees until a timeout expires.
> + * @ptr: pointer to the variable to wait on.
> + * @cond: boolean expression to wait for.
s/cond/cond_expr/
> + * @time_expr_ns: expression that evaluates to monotonic time (in ns) or,
> + * on failure, returns a negative value.
> + * @timeout_ns: timeout value in ns
> + * Both of the above are assumed to be compatible with s64; the signed
> + * value is used to handle the failure case in @time_expr_ns.
> + *
> + * Equivalent to using READ_ONCE() on the condition variable.
> + *
> + * Callers that expect to wait for prolonged durations might want to
> + * take into account the availability of ARCH_HAS_CPU_RELAX.
> + */
> +#ifndef smp_cond_load_relaxed_timeout
> +#define smp_cond_load_relaxed_timeout(ptr, cond_expr, \
> + time_expr_ns, timeout_ns) \
> +({ \
--
~Randy
* Re: [PATCH v9 06/12] asm-generic: barrier: Add smp_cond_load_acquire_timeout()
2026-02-09 2:31 ` [PATCH v9 06/12] asm-generic: barrier: Add smp_cond_load_acquire_timeout() Ankur Arora
@ 2026-02-09 4:59 ` Randy Dunlap
0 siblings, 0 replies; 33+ messages in thread
From: Randy Dunlap @ 2026-02-09 4:59 UTC (permalink / raw)
To: Ankur Arora, linux-kernel, linux-arch, linux-arm-kernel, linux-pm,
bpf
Cc: arnd, catalin.marinas, will, peterz, akpm, mark.rutland, harisokn,
cl, ast, rafael, daniel.lezcano, memxor, zhenglifeng1, xueshuai,
joao.m.martins, boris.ostrovsky, konrad.wilk
On 2/8/26 6:31 PM, Ankur Arora wrote:
> + * @cond: boolean expression to wait for
@cond_expr:
--
~Randy
* Re: [PATCH v9 01/12] asm-generic: barrier: Add smp_cond_load_relaxed_timeout()
2026-02-09 4:57 ` Randy Dunlap
@ 2026-02-10 5:52 ` Ankur Arora
0 siblings, 0 replies; 33+ messages in thread
From: Ankur Arora @ 2026-02-10 5:52 UTC (permalink / raw)
To: Randy Dunlap
Cc: Ankur Arora, linux-kernel, linux-arch, linux-arm-kernel, linux-pm,
bpf, arnd, catalin.marinas, will, peterz, akpm, mark.rutland,
harisokn, cl, ast, rafael, daniel.lezcano, memxor, zhenglifeng1,
xueshuai, joao.m.martins, boris.ostrovsky, konrad.wilk
Randy Dunlap <rdunlap@infradead.org> writes:
>> +/**
>> + * smp_cond_load_relaxed_timeout() - (Spin) wait for cond with no ordering
>> + * guarantees until a timeout expires.
>> + * @ptr: pointer to the variable to wait on.
>> + * @cond: boolean expression to wait for.
>
> s/cond/cond_expr/
Thanks! Well spotted.
--
ankur
* Re: [PATCH v9 12/12] cpuidle/poll_state: Wait for need-resched via tif_need_resched_relaxed_wait()
2026-02-09 2:31 ` [PATCH v9 12/12] cpuidle/poll_state: Wait for need-resched via tif_need_resched_relaxed_wait() Ankur Arora
@ 2026-02-10 16:55 ` Rafael J. Wysocki
2026-02-11 0:29 ` Ankur Arora
0 siblings, 1 reply; 33+ messages in thread
From: Rafael J. Wysocki @ 2026-02-10 16:55 UTC (permalink / raw)
To: Ankur Arora
Cc: linux-kernel, linux-arch, linux-arm-kernel, linux-pm, bpf, arnd,
catalin.marinas, will, peterz, akpm, mark.rutland, harisokn, cl,
ast, rafael, daniel.lezcano, memxor, zhenglifeng1, xueshuai,
joao.m.martins, boris.ostrovsky, konrad.wilk
On Mon, Feb 9, 2026 at 3:43 AM Ankur Arora <ankur.a.arora@oracle.com> wrote:
>
> The inner loop in poll_idle() polls over the thread_info flags,
> waiting to see if the thread has TIF_NEED_RESCHED set. The loop
> exits once the condition is met, or if the poll time limit has
> been exceeded.
>
> To minimize the number of instructions executed in each iteration,
> the time check is rate-limited. In addition, each loop iteration
> executes cpu_relax() which on certain platforms provides a hint to
> the pipeline that the loop busy-waits, allowing the processor to
> reduce power consumption.
>
> Switch over to tif_need_resched_relaxed_wait() instead, since that
> provides exactly that.
>
> However, since we want to minimize power consumption in idle, building
> of cpuidle/poll_state.c continues to depend on CONFIG_ARCH_HAS_CPU_RELAX
> as that serves as an indicator that the platform supports an optimized
> version of tif_need_resched_relaxed_wait() (via
> smp_cond_load_relaxed_timeout()).
>
> Cc: "Rafael J. Wysocki" <rafael@kernel.org>
> Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
> Cc: linux-pm@vger.kernel.org
> Suggested-by: "Rafael J. Wysocki" <rafael@kernel.org>
> Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
This is generally fine with me, of course depending on how
tif_need_resched_relaxed_wait() will work, so, conditional on reaching
an agreement with the arch and scheduler folks, feel free to add
Acked-by: Rafael J. Wysocki (Intel) <rafael@kernel.org>
to this one.
> ---
> Notes:
> - get rid of unnecessary variable assignments, casts etc
>
> drivers/cpuidle/poll_state.c | 21 +--------------------
> 1 file changed, 1 insertion(+), 20 deletions(-)
>
> diff --git a/drivers/cpuidle/poll_state.c b/drivers/cpuidle/poll_state.c
> index c7524e4c522a..7443b3e971ba 100644
> --- a/drivers/cpuidle/poll_state.c
> +++ b/drivers/cpuidle/poll_state.c
> @@ -6,41 +6,22 @@
> #include <linux/cpuidle.h>
> #include <linux/export.h>
> #include <linux/irqflags.h>
> -#include <linux/sched.h>
> -#include <linux/sched/clock.h>
> #include <linux/sched/idle.h>
> #include <linux/sprintf.h>
> #include <linux/types.h>
>
> -#define POLL_IDLE_RELAX_COUNT 200
> -
> static int __cpuidle poll_idle(struct cpuidle_device *dev,
> struct cpuidle_driver *drv, int index)
> {
> - u64 time_start;
> -
> - time_start = local_clock_noinstr();
> -
> dev->poll_time_limit = false;
>
> raw_local_irq_enable();
> if (!current_set_polling_and_test()) {
> - unsigned int loop_count = 0;
> u64 limit;
>
> limit = cpuidle_poll_time(drv, dev);
>
> - while (!need_resched()) {
> - cpu_relax();
> - if (loop_count++ < POLL_IDLE_RELAX_COUNT)
> - continue;
> -
> - loop_count = 0;
> - if (local_clock_noinstr() - time_start > limit) {
> - dev->poll_time_limit = true;
> - break;
> - }
> - }
> + dev->poll_time_limit = !tif_need_resched_relaxed_wait(limit);
> }
> raw_local_irq_disable();
>
> --
* Re: [PATCH v9 12/12] cpuidle/poll_state: Wait for need-resched via tif_need_resched_relaxed_wait()
2026-02-10 16:55 ` Rafael J. Wysocki
@ 2026-02-11 0:29 ` Ankur Arora
0 siblings, 0 replies; 33+ messages in thread
From: Ankur Arora @ 2026-02-11 0:29 UTC (permalink / raw)
To: Rafael J. Wysocki, peterz
Cc: Ankur Arora, linux-kernel, linux-arch, linux-arm-kernel, linux-pm,
bpf, arnd, catalin.marinas, will, peterz, akpm, mark.rutland,
harisokn, cl, ast, daniel.lezcano, memxor, zhenglifeng1, xueshuai,
joao.m.martins, boris.ostrovsky, konrad.wilk
Hi Peter,
Could you look at tif_need_resched_relaxed_wait() and see if the interface
(and the implementation via smp_cond_load_relaxed_timeout() -- patch-11
in this series, with smp_cond_load_relaxed_timeout() itself being
patch-1 in the series) seems okay to you.
And just a little bit of history on the interface. You had suggested using
smp_cond_load_relaxed() way back
(https://lore.kernel.org/lkml/20230809134837.GM212435@hirez.programming.kicks-ass.net/)
when we were trying to use poll_idle() on arm64. That eventually became
smp_cond_load_relaxed_timeout(), and then Rafael suggested this interface
abstracting out all the non-idle-related details.
Rafael J. Wysocki <rafael@kernel.org> writes:
> On Mon, Feb 9, 2026 at 3:43 AM Ankur Arora <ankur.a.arora@oracle.com> wrote:
>>
>> The inner loop in poll_idle() polls over the thread_info flags,
>> waiting to see if the thread has TIF_NEED_RESCHED set. The loop
>> exits once the condition is met, or if the poll time limit has
>> been exceeded.
>>
>> To minimize the number of instructions executed in each iteration,
>> the time check is rate-limited. In addition, each loop iteration
>> executes cpu_relax() which on certain platforms provides a hint to
>> the pipeline that the loop busy-waits, allowing the processor to
>> reduce power consumption.
>>
>> Switch over to tif_need_resched_relaxed_wait() instead, since that
>> provides exactly that.
>>
>> However, since we want to minimize power consumption in idle, building
>> of cpuidle/poll_state.c continues to depend on CONFIG_ARCH_HAS_CPU_RELAX
>> as that serves as an indicator that the platform supports an optimized
>> version of tif_need_resched_relaxed_wait() (via
>> smp_cond_load_relaxed_timeout()).
>>
>> Cc: "Rafael J. Wysocki" <rafael@kernel.org>
>> Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
>> Cc: linux-pm@vger.kernel.org
>> Suggested-by: "Rafael J. Wysocki" <rafael@kernel.org>
>> Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
>
> This is generally fine with me, of course depending on how
> tif_need_resched_relaxed_wait() will work, so conditional on reaching
> an agreement with the arch and scheduler folks feel free to add
>
> Acked-by: Rafael J. Wysocki (Intel) <rafael@kernel.org>
Excellent. Thanks Rafael.
Ankur
> to this one.
>
>> ---
>> Notes:
>> - get rid of unnecessary variable assignments, casts etc
>>
>> drivers/cpuidle/poll_state.c | 21 +--------------------
>> 1 file changed, 1 insertion(+), 20 deletions(-)
>>
>> diff --git a/drivers/cpuidle/poll_state.c b/drivers/cpuidle/poll_state.c
>> index c7524e4c522a..7443b3e971ba 100644
>> --- a/drivers/cpuidle/poll_state.c
>> +++ b/drivers/cpuidle/poll_state.c
>> @@ -6,41 +6,22 @@
>> #include <linux/cpuidle.h>
>> #include <linux/export.h>
>> #include <linux/irqflags.h>
>> -#include <linux/sched.h>
>> -#include <linux/sched/clock.h>
>> #include <linux/sched/idle.h>
>> #include <linux/sprintf.h>
>> #include <linux/types.h>
>>
>> -#define POLL_IDLE_RELAX_COUNT 200
>> -
>> static int __cpuidle poll_idle(struct cpuidle_device *dev,
>> struct cpuidle_driver *drv, int index)
>> {
>> - u64 time_start;
>> -
>> - time_start = local_clock_noinstr();
>> -
>> dev->poll_time_limit = false;
>>
>> raw_local_irq_enable();
>> if (!current_set_polling_and_test()) {
>> - unsigned int loop_count = 0;
>> u64 limit;
>>
>> limit = cpuidle_poll_time(drv, dev);
>>
>> - while (!need_resched()) {
>> - cpu_relax();
>> - if (loop_count++ < POLL_IDLE_RELAX_COUNT)
>> - continue;
>> -
>> - loop_count = 0;
>> - if (local_clock_noinstr() - time_start > limit) {
>> - dev->poll_time_limit = true;
>> - break;
>> - }
>> - }
>> + dev->poll_time_limit = !tif_need_resched_relaxed_wait(limit);
>> }
>> raw_local_irq_disable();
>>
>> --
--
ankur
* Re: [PATCH v9 01/12] asm-generic: barrier: Add smp_cond_load_relaxed_timeout()
2026-02-09 2:31 ` [PATCH v9 01/12] asm-generic: barrier: Add smp_cond_load_relaxed_timeout() Ankur Arora
2026-02-09 4:57 ` Randy Dunlap
@ 2026-02-11 15:39 ` Catalin Marinas
2026-02-11 22:17 ` Ankur Arora
2026-02-12 9:56 ` David Laight
2 siblings, 1 reply; 33+ messages in thread
From: Catalin Marinas @ 2026-02-11 15:39 UTC (permalink / raw)
To: Ankur Arora
Cc: linux-kernel, linux-arch, linux-arm-kernel, linux-pm, bpf, arnd,
will, peterz, akpm, mark.rutland, harisokn, cl, ast, rafael,
daniel.lezcano, memxor, zhenglifeng1, xueshuai, joao.m.martins,
boris.ostrovsky, konrad.wilk
On Sun, Feb 08, 2026 at 06:31:42PM -0800, Ankur Arora wrote:
> Add smp_cond_load_relaxed_timeout(), which extends
> smp_cond_load_relaxed() to allow waiting for a duration.
>
> We loop around waiting for the condition variable to change while
> periodically doing a time-check. The loop uses cpu_poll_relax() to slow
> down the busy-waiting, which, unless overridden by the architecture
> code, amounts to a cpu_relax().
>
> Note that there are two ways for the time-check to fail: the usual
> timeout case or, @time_expr_ns returning an invalid value (negative
> or zero). The second failure mode allows for clocks attached to the
> clock-domain of @cond_expr, which might cease to operate meaningfully
> once some state internal to @cond_expr has changed.
>
> Evaluation of @time_expr_ns: in the fastpath we want to keep the
> performance close to smp_cond_load_relaxed(). To do that we defer
> evaluation of the potentially costly @time_expr_ns to when we hit
> the slowpath.
>
> This also means that there will always be some hardware dependent
> duration that has passed in cpu_poll_relax() iterations at the time of
> first evaluation. Additionally cpu_poll_relax() is not guaranteed to
> return at timeout boundary. In sum, expect timeout overshoot when we
> exit due to expiration of the timeout.
>
> The number of spin iterations before time-check, SMP_TIMEOUT_POLL_COUNT
> is chosen to be 200 by default. With a cpu_poll_relax() iteration
> taking ~20-30 cycles (measured on a variety of x86 platforms), we expect
> a tim-check every ~4000-6000 cycles.
>
> The outer limit of the overshoot is double that when working with the
> parameters above. This might be higher or lower depending on the
> implementation of cpu_poll_relax() across architectures.
>
> Lastly, config option ARCH_HAS_CPU_RELAX indicates availability of a
> cpu_poll_relax() that is cheaper than polling. This might be relevant
> for cases with a prolonged timeout.
>
> Cc: Arnd Bergmann <arnd@arndb.de>
> Cc: Will Deacon <will@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: linux-arch@vger.kernel.org
> Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
This series evolved a bit since last time I looked, so going through it
again:
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
* Re: [PATCH v9 02/12] arm64: barrier: Support smp_cond_load_relaxed_timeout()
2026-02-09 2:31 ` [PATCH v9 02/12] arm64: barrier: Support smp_cond_load_relaxed_timeout() Ankur Arora
@ 2026-02-11 15:54 ` Catalin Marinas
2026-02-11 22:57 ` Ankur Arora
0 siblings, 1 reply; 33+ messages in thread
From: Catalin Marinas @ 2026-02-11 15:54 UTC (permalink / raw)
To: Ankur Arora
Cc: linux-kernel, linux-arch, linux-arm-kernel, linux-pm, bpf, arnd,
will, peterz, akpm, mark.rutland, harisokn, cl, ast, rafael,
daniel.lezcano, memxor, zhenglifeng1, xueshuai, joao.m.martins,
boris.ostrovsky, konrad.wilk
On Sun, Feb 08, 2026 at 06:31:43PM -0800, Ankur Arora wrote:
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 93173f0a09c7..239fdca8e2cf 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -22,6 +22,7 @@ config ARM64
> select ARCH_HAS_CACHE_LINE_SIZE
> select ARCH_HAS_CC_PLATFORM
> select ARCH_HAS_CPU_CACHE_INVALIDATE_MEMREGION
> + select ARCH_HAS_CPU_RELAX
AFAICT, ARCH_HAS_CPU_RELAX is only defined for x86. You can't just
select it here. Either make the definition global somewhere or add this
to arm64 Kconfig:
config ARCH_HAS_CPU_RELAX
def_bool y
Otherwise,
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
* Re: [PATCH v9 03/12] arm64/delay: move some constants out to a separate header
2026-02-09 2:31 ` [PATCH v9 03/12] arm64/delay: move some constants out to a separate header Ankur Arora
@ 2026-02-11 16:01 ` Catalin Marinas
0 siblings, 0 replies; 33+ messages in thread
From: Catalin Marinas @ 2026-02-11 16:01 UTC (permalink / raw)
To: Ankur Arora
Cc: linux-kernel, linux-arch, linux-arm-kernel, linux-pm, bpf, arnd,
will, peterz, akpm, mark.rutland, harisokn, cl, ast, rafael,
daniel.lezcano, memxor, zhenglifeng1, xueshuai, joao.m.martins,
boris.ostrovsky, konrad.wilk, Bjorn Andersson, Konrad Dybcio,
Christoph Lameter
On Sun, Feb 08, 2026 at 06:31:44PM -0800, Ankur Arora wrote:
> Move some constants and functions related to xloops and cycles computation
> out to a new header. Also rename some macros in qcom/rpmh-rsc.c that
> were occupying the same namespace.
>
> No functional change.
>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Bjorn Andersson <andersson@kernel.org>
> Cc: Konrad Dybcio <konradybcio@kernel.org>
> Cc: linux-arm-kernel@lists.infradead.org
> Reviewed-by: Christoph Lameter <cl@linux.com>
> Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
* Re: [PATCH v9 04/12] arm64: support WFET in smp_cond_load_relaxed_timeout()
2026-02-09 2:31 ` [PATCH v9 04/12] arm64: support WFET in smp_cond_load_relaxed_timeout() Ankur Arora
@ 2026-02-11 17:11 ` Catalin Marinas
2026-02-11 23:13 ` Ankur Arora
0 siblings, 1 reply; 33+ messages in thread
From: Catalin Marinas @ 2026-02-11 17:11 UTC (permalink / raw)
To: Ankur Arora
Cc: linux-kernel, linux-arch, linux-arm-kernel, linux-pm, bpf, arnd,
will, peterz, akpm, mark.rutland, harisokn, cl, ast, rafael,
daniel.lezcano, memxor, zhenglifeng1, xueshuai, joao.m.martins,
boris.ostrovsky, konrad.wilk
On Sun, Feb 08, 2026 at 06:31:45PM -0800, Ankur Arora wrote:
> diff --git a/arch/arm64/include/asm/cmpxchg.h b/arch/arm64/include/asm/cmpxchg.h
> index d7a540736741..dfb7d10a18be 100644
> --- a/arch/arm64/include/asm/cmpxchg.h
> +++ b/arch/arm64/include/asm/cmpxchg.h
> @@ -12,6 +12,7 @@
>
> #include <asm/barrier.h>
> #include <asm/lse.h>
> +#include <asm/delay-const.h>
>
> /*
> * We need separate acquire parameters for ll/sc and lse, since the full
> @@ -208,9 +209,13 @@ __CMPXCHG_GEN(_mb)
> __cmpxchg128((ptr), (o), (n)); \
> })
>
> +/* Re-declared here to avoid include dependency. */
> +extern u64 (*arch_timer_read_counter)(void);
We have a bug in udelay() because the above might read the physical
counter while WFET uses the virtual one. See this thread:
https://lore.kernel.org/all/ktosachvft2cgqd5qkukn275ugmhy6xrhxur4zqpdxlfr3qh5h@o3zrfnsq63od/
We could use __arch_counter_get_cntvct_stable() as in Marc's suggestion
for the udelay() fix (or just wait to see the outcome of the above
thread).
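That substitution would look something like the following (a sketch
only, assuming __arch_counter_get_cntvct_stable() is reachable from
this header, as in Marc's suggestion):

	/* read the virtual counter, the same time base WFET waits on */
	u64 now = __arch_counter_get_cntvct_stable();

in place of calling through the arch_timer_read_counter pointer, which
may be backed by the physical counter.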
The rest looks fine.
--
Catalin
* Re: [PATCH v9 07/12] atomic: Add atomic_cond_read_*_timeout()
2026-02-09 2:31 ` [PATCH v9 07/12] atomic: Add atomic_cond_read_*_timeout() Ankur Arora
@ 2026-02-11 17:25 ` Catalin Marinas
0 siblings, 0 replies; 33+ messages in thread
From: Catalin Marinas @ 2026-02-11 17:25 UTC (permalink / raw)
To: Ankur Arora
Cc: linux-kernel, linux-arch, linux-arm-kernel, linux-pm, bpf, arnd,
will, peterz, akpm, mark.rutland, harisokn, cl, ast, rafael,
daniel.lezcano, memxor, zhenglifeng1, xueshuai, joao.m.martins,
boris.ostrovsky, konrad.wilk, Boqun Feng
On Sun, Feb 08, 2026 at 06:31:48PM -0800, Ankur Arora wrote:
> Add atomic load wrappers, atomic_cond_read_*_timeout() and
> atomic64_cond_read_*_timeout() for the cond-load timeout interfaces.
>
> Also add a short description for the atomic_cond_read_{relaxed,acquire}(),
> and the atomic_cond_read_{relaxed,acquire}_timeout() interfaces.
>
> Cc: Will Deacon <will@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Boqun Feng <boqun.feng@gmail.com>
> Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
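For reference, a plausible shape for one of these wrappers, mirroring
how the old res_atomic_cond_read_acquire() was built on top of
res_smp_cond_load_acquire() (a sketch only; the actual patch body is
not quoted in this excerpt):

	/* hypothetical form of the acquire-flavoured wrapper */
	#define atomic_cond_read_acquire_timeout(v, c, time_expr_ns, timeout_ns) \
		smp_cond_load_acquire_timeout(&(v)->counter, (c), \
					      (time_expr_ns), (timeout_ns))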
* Re: [PATCH v9 08/12] locking/atomic: scripts: build atomic_long_cond_read_*_timeout()
2026-02-09 2:31 ` [PATCH v9 08/12] locking/atomic: scripts: build atomic_long_cond_read_*_timeout() Ankur Arora
@ 2026-02-11 17:41 ` Catalin Marinas
0 siblings, 0 replies; 33+ messages in thread
From: Catalin Marinas @ 2026-02-11 17:41 UTC (permalink / raw)
To: Ankur Arora
Cc: linux-kernel, linux-arch, linux-arm-kernel, linux-pm, bpf, arnd,
will, peterz, akpm, mark.rutland, harisokn, cl, ast, rafael,
daniel.lezcano, memxor, zhenglifeng1, xueshuai, joao.m.martins,
boris.ostrovsky, konrad.wilk, Boqun Feng
On Sun, Feb 08, 2026 at 06:31:49PM -0800, Ankur Arora wrote:
> Add the atomic long wrappers for the cond-load timeout interfaces.
>
> Cc: Will Deacon <will@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Boqun Feng <boqun.feng@gmail.com>
> Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
* Re: [PATCH v9 01/12] asm-generic: barrier: Add smp_cond_load_relaxed_timeout()
2026-02-11 15:39 ` Catalin Marinas
@ 2026-02-11 22:17 ` Ankur Arora
0 siblings, 0 replies; 33+ messages in thread
From: Ankur Arora @ 2026-02-11 22:17 UTC (permalink / raw)
To: Catalin Marinas
Cc: Ankur Arora, linux-kernel, linux-arch, linux-arm-kernel, linux-pm,
bpf, arnd, will, peterz, akpm, mark.rutland, harisokn, cl, ast,
rafael, daniel.lezcano, memxor, zhenglifeng1, xueshuai,
joao.m.martins, boris.ostrovsky, konrad.wilk
Catalin Marinas <catalin.marinas@arm.com> writes:
> On Sun, Feb 08, 2026 at 06:31:42PM -0800, Ankur Arora wrote:
>> Add smp_cond_load_relaxed_timeout(), which extends
>> smp_cond_load_relaxed() to allow waiting for a duration.
>>
>> We loop around waiting for the condition variable to change while
>> periodically doing a time-check. The loop uses cpu_poll_relax() to slow
>> down the busy-waiting, which, unless overridden by the architecture
>> code, amounts to a cpu_relax().
>>
>> Note that there are two ways for the time-check to fail: the usual
>> timeout case or, @time_expr_ns returning an invalid value (negative
>> or zero). The second failure mode allows for clocks attached to the
>> clock-domain of @cond_expr, which might cease to operate meaningfully
>> once some state internal to @cond_expr has changed.
>>
>> Evaluation of @time_expr_ns: in the fastpath we want to keep the
>> performance close to smp_cond_load_relaxed(). To do that we defer
>> evaluation of the potentially costly @time_expr_ns to when we hit
>> the slowpath.
>>
>> This also means that there will always be some hardware dependent
>> duration that has passed in cpu_poll_relax() iterations at the time of
>> first evaluation. Additionally cpu_poll_relax() is not guaranteed to
>> return at timeout boundary. In sum, expect timeout overshoot when we
>> exit due to expiration of the timeout.
>>
>> The number of spin iterations before time-check, SMP_TIMEOUT_POLL_COUNT
>> is chosen to be 200 by default. With a cpu_poll_relax() iteration
>> taking ~20-30 cycles (measured on a variety of x86 platforms), we expect
>> a tim-check every ~4000-6000 cycles.
>>
>> The outer limit of the overshoot is double that when working with the
>> parameters above. This might be higher or lower depending on the
>> implementation of cpu_poll_relax() across architectures.
>>
>> Lastly, config option ARCH_HAS_CPU_RELAX indicates availability of a
>> cpu_poll_relax() that is cheaper than polling. This might be relevant
>> for cases with a prolonged timeout.
>>
>> Cc: Arnd Bergmann <arnd@arndb.de>
>> Cc: Will Deacon <will@kernel.org>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Cc: linux-arch@vger.kernel.org
>> Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
>
> This series evolved a bit since last time I looked, so going through it
> again:
>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Great. Thanks for the (re-)review!
--
ankur
* Re: [PATCH v9 02/12] arm64: barrier: Support smp_cond_load_relaxed_timeout()
2026-02-11 15:54 ` Catalin Marinas
@ 2026-02-11 22:57 ` Ankur Arora
0 siblings, 0 replies; 33+ messages in thread
From: Ankur Arora @ 2026-02-11 22:57 UTC (permalink / raw)
To: Catalin Marinas
Cc: Ankur Arora, linux-kernel, linux-arch, linux-arm-kernel, linux-pm,
bpf, arnd, will, peterz, akpm, mark.rutland, harisokn, cl, ast,
rafael, daniel.lezcano, memxor, zhenglifeng1, xueshuai,
joao.m.martins, boris.ostrovsky, konrad.wilk
Catalin Marinas <catalin.marinas@arm.com> writes:
> On Sun, Feb 08, 2026 at 06:31:43PM -0800, Ankur Arora wrote:
>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>> index 93173f0a09c7..239fdca8e2cf 100644
>> --- a/arch/arm64/Kconfig
>> +++ b/arch/arm64/Kconfig
>> @@ -22,6 +22,7 @@ config ARM64
>> select ARCH_HAS_CACHE_LINE_SIZE
>> select ARCH_HAS_CC_PLATFORM
>> select ARCH_HAS_CPU_CACHE_INVALIDATE_MEMREGION
>> + select ARCH_HAS_CPU_RELAX
>
> AFAICT, ARCH_HAS_CPU_RELAX is only defined for x86. You can't just
> select it here. Either make the definition global somewhere or add this
> to arm64 Kconfig:
>
> config ARCH_HAS_CPU_RELAX
> def_bool y
Ack. I had added config ARCH_HAS_CPU_RELAX in arch/Kconfig
for my test builds but got rid of that when sending out the
series. Will fix.
> Otherwise,
>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Thanks.
--
ankur
* Re: [PATCH v9 04/12] arm64: support WFET in smp_cond_load_relaxed_timeout()
2026-02-11 17:11 ` Catalin Marinas
@ 2026-02-11 23:13 ` Ankur Arora
0 siblings, 0 replies; 33+ messages in thread
From: Ankur Arora @ 2026-02-11 23:13 UTC (permalink / raw)
To: Catalin Marinas
Cc: Ankur Arora, linux-kernel, linux-arch, linux-arm-kernel, linux-pm,
bpf, arnd, will, peterz, akpm, mark.rutland, harisokn, cl, ast,
rafael, daniel.lezcano, memxor, zhenglifeng1, xueshuai,
joao.m.martins, boris.ostrovsky, konrad.wilk
Catalin Marinas <catalin.marinas@arm.com> writes:
> On Sun, Feb 08, 2026 at 06:31:45PM -0800, Ankur Arora wrote:
>> diff --git a/arch/arm64/include/asm/cmpxchg.h b/arch/arm64/include/asm/cmpxchg.h
>> index d7a540736741..dfb7d10a18be 100644
>> --- a/arch/arm64/include/asm/cmpxchg.h
>> +++ b/arch/arm64/include/asm/cmpxchg.h
>> @@ -12,6 +12,7 @@
>>
>> #include <asm/barrier.h>
>> #include <asm/lse.h>
>> +#include <asm/delay-const.h>
>>
>> /*
>> * We need separate acquire parameters for ll/sc and lse, since the full
>> @@ -208,9 +209,13 @@ __CMPXCHG_GEN(_mb)
>> __cmpxchg128((ptr), (o), (n)); \
>> })
>>
>> +/* Re-declared here to avoid include dependency. */
>> +extern u64 (*arch_timer_read_counter)(void);
>
> We have a bug in udelay() because the above might read the physical
> counter while WFET uses the virtual one. See this thread:
>
> https://lore.kernel.org/all/ktosachvft2cgqd5qkukn275ugmhy6xrhxur4zqpdxlfr3qh5h@o3zrfnsq63od/
Oh that looks painful. Thanks for linking it.
> We could use __arch_counter_get_cntvct_stable() as in Marc's suggestion
> for the udelay() fix (or just wait to see the outcome of the above
> thread).
Yeah will follow where that goes.
> The rest looks fine.
Great. Thanks!
--
ankur
* Re: [PATCH v9 01/12] asm-generic: barrier: Add smp_cond_load_relaxed_timeout()
2026-02-09 2:31 ` [PATCH v9 01/12] asm-generic: barrier: Add smp_cond_load_relaxed_timeout() Ankur Arora
2026-02-09 4:57 ` Randy Dunlap
2026-02-11 15:39 ` Catalin Marinas
@ 2026-02-12 9:56 ` David Laight
2026-02-14 4:58 ` Ankur Arora
2 siblings, 1 reply; 33+ messages in thread
From: David Laight @ 2026-02-12 9:56 UTC (permalink / raw)
To: Ankur Arora
Cc: linux-kernel, linux-arch, linux-arm-kernel, linux-pm, bpf, arnd,
catalin.marinas, will, peterz, akpm, mark.rutland, harisokn, cl,
ast, rafael, daniel.lezcano, memxor, zhenglifeng1, xueshuai,
joao.m.martins, boris.ostrovsky, konrad.wilk
On Sun, 8 Feb 2026 18:31:42 -0800
Ankur Arora <ankur.a.arora@oracle.com> wrote:
> Add smp_cond_load_relaxed_timeout(), which extends
> smp_cond_load_relaxed() to allow waiting for a duration.
>
> We loop around waiting for the condition variable to change while
> periodically doing a time-check. The loop uses cpu_poll_relax() to slow
> down the busy-waiting, which, unless overridden by the architecture
> code, amounts to a cpu_relax().
>
> Note that there are two ways for the time-check to fail: the usual
> timeout case or, @time_expr_ns returning an invalid value (negative
> or zero). The second failure mode allows for clocks attached to the
> clock-domain of @cond_expr, which might cease to operate meaningfully
> once some state internal to @cond_expr has changed.
>
> Evaluation of @time_expr_ns: in the fastpath we want to keep the
> performance close to smp_cond_load_relaxed(). To do that we defer
> evaluation of the potentially costly @time_expr_ns to when we hit
> the slowpath.
>
> This also means that there will always be some hardware dependent
> duration that has passed in cpu_poll_relax() iterations at the time of
> first evaluation. Additionally cpu_poll_relax() is not guaranteed to
> return at timeout boundary. In sum, expect timeout overshoot when we
> exit due to expiration of the timeout.
>
> The number of spin iterations before time-check, SMP_TIMEOUT_POLL_COUNT
> is chosen to be 200 by default. With a cpu_poll_relax() iteration
> taking ~20-30 cycles (measured on a variety of x86 platforms), we expect
> a tim-check every ~4000-6000 cycles.
^ time-check
Plus the cost of evaluating cond_expr 200 times.
I guess that isn't expected to contain a PCIe read :-)
David
>
> The outer limit of the overshoot is double that when working with the
> parameters above. This might be higher or lower depending on the
> implementation of cpu_poll_relax() across architectures.
>
> Lastly, config option ARCH_HAS_CPU_RELAX indicates availability of a
> cpu_poll_relax() that is cheaper than polling. This might be relevant
> for cases with a prolonged timeout.
>
> Cc: Arnd Bergmann <arnd@arndb.de>
> Cc: Will Deacon <will@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: linux-arch@vger.kernel.org
> Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
> ---
> Notes:
> - Defer evaluation of @time_expr_ns to when we hit the slowpath.
> - This also helps get rid of the labelled gotos which were used to
> handle the early failure case (since now there's no early init
> to be concerned with.)
> - Add a comment mentioning that the cpu_poll_relax() implementation
> is better than polling if ARCH_HAS_CPU_RELAX.
>
> include/asm-generic/barrier.h | 72 +++++++++++++++++++++++++++++++++++
> 1 file changed, 72 insertions(+)
>
> diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
> index d4f581c1e21d..2738fe35c1df 100644
> --- a/include/asm-generic/barrier.h
> +++ b/include/asm-generic/barrier.h
> @@ -273,6 +273,68 @@ do { \
> })
> #endif
>
> +/*
> + * Number of times we iterate in the loop before doing the time check.
> + */
> +#ifndef SMP_TIMEOUT_POLL_COUNT
> +#define SMP_TIMEOUT_POLL_COUNT 200
> +#endif
> +
> +/*
> + * Platforms with ARCH_HAS_CPU_RELAX have a cpu_poll_relax() implementation
> + * that is expected to be cheaper (lower power) than pure polling.
> + */
> +#ifndef cpu_poll_relax
> +#define cpu_poll_relax(ptr, val, timeout_ns) cpu_relax()
> +#endif
> +
> +/**
> + * smp_cond_load_relaxed_timeout() - (Spin) wait for cond with no ordering
> + * guarantees until a timeout expires.
> + * @ptr: pointer to the variable to wait on.
> + * @cond: boolean expression to wait for.
> + * @time_expr_ns: expression that evaluates to monotonic time (in ns) or,
> + * on failure, returns a negative value.
> + * @timeout_ns: timeout value in ns
> + * Both of the above are assumed to be compatible with s64; the signed
> + * value is used to handle the failure case in @time_expr_ns.
> + *
> + * Equivalent to using READ_ONCE() on the condition variable.
> + *
> + * Callers that expect to wait for prolonged durations might want to
> + * take into account the availability of ARCH_HAS_CPU_RELAX.
> + */
> +#ifndef smp_cond_load_relaxed_timeout
> +#define smp_cond_load_relaxed_timeout(ptr, cond_expr, \
> + time_expr_ns, timeout_ns) \
> +({ \
> + typeof(ptr) __PTR = (ptr); \
> + __unqual_scalar_typeof(*ptr) VAL; \
> + u32 __n = 0, __spin = SMP_TIMEOUT_POLL_COUNT; \
> + s64 __timeout = (s64)timeout_ns; \
> + s64 __time_now, __time_end = 0; \
> + \
> + for (;;) { \
> + VAL = READ_ONCE(*__PTR); \
> + if (cond_expr) \
> + break; \
> + cpu_poll_relax(__PTR, VAL, (u64)__timeout); \
> + if (++__n < __spin) \
> + continue; \
> + __time_now = (s64)(time_expr_ns); \
> + if (unlikely(__time_end == 0)) \
> + __time_end = __time_now + __timeout; \
> + __timeout = __time_end - __time_now; \
> + if (__time_now <= 0 || __timeout <= 0) { \
> + VAL = READ_ONCE(*__PTR); \
> + break; \
> + } \
> + __n = 0; \
> + } \
> + (typeof(*ptr))VAL; \
> +})
> +#endif
> +
> /*
> * pmem_wmb() ensures that all stores for which the modification
> * are written to persistent storage by preceding instructions have
* Re: [PATCH v9 01/12] asm-generic: barrier: Add smp_cond_load_relaxed_timeout()
2026-02-12 9:56 ` David Laight
@ 2026-02-14 4:58 ` Ankur Arora
2026-02-14 11:31 ` David Laight
0 siblings, 1 reply; 33+ messages in thread
From: Ankur Arora @ 2026-02-14 4:58 UTC (permalink / raw)
To: David Laight
Cc: Ankur Arora, linux-kernel, linux-arch, linux-arm-kernel, linux-pm,
bpf, arnd, catalin.marinas, will, peterz, akpm, mark.rutland,
harisokn, cl, ast, rafael, daniel.lezcano, memxor, zhenglifeng1,
xueshuai, joao.m.martins, boris.ostrovsky, konrad.wilk
David Laight <david.laight.linux@gmail.com> writes:
> On Sun, 8 Feb 2026 18:31:42 -0800
> Ankur Arora <ankur.a.arora@oracle.com> wrote:
>
>> Add smp_cond_load_relaxed_timeout(), which extends
>> smp_cond_load_relaxed() to allow waiting for a duration.
>>
>> We loop around waiting for the condition variable to change while
>> peridically doing a time-check. The loop uses cpu_poll_relax() to slow
>> down the busy-waiting, which, unless overridden by the architecture
>> code, amounts to a cpu_relax().
>>
>> Note that there are two ways for the time-check to fail: the usual
>> timeout case or, @time_expr_ns returning an invalid value (negative
>> or zero). The second failure mode allows for clocks attached to the
>> clock-domain of @cond_expr, which might cease to operate meaningfully
>> once some state internal to @cond_expr has changed.
>>
>> Evaluation of @time_expr_ns: in the fastpath we want to keep the
>> performance close to smp_cond_load_relaxed(). To do that we defer
>> evaluation of the potentially costly @time_expr_ns to when we hit
>> the slowpath.
>>
>> This also means that there will always be some hardware dependent
>> duration that has passed in cpu_poll_relax() iterations at the time of
>> first evaluation. Additionally cpu_poll_relax() is not guaranteed to
>> return at timeout boundary. In sum, expect timeout overshoot when we
>> exit due to expiration of the timeout.
>>
>> The number of spin iterations before time-check, SMP_TIMEOUT_POLL_COUNT
>> is chosen to be 200 by default. With a cpu_poll_relax() iteration
>> taking ~20-30 cycles (measured on a variety of x86 platforms), we expect
>> a tim-check every ~4000-6000 cycles.
> ^ time-check
Ugh. Thanks.
> Plus the cost of evaluating cond_expr 200 times.
> I guess that isn't expected to contain a PCIe read :-)
:). Good point. I'll see if I can add something like "when polling on
a memory address".
Ankur
>>
>> The outer limit of the overshoot is double that when working with the
>> parameters above. This might be higher or lower depending on the
>> implementation of cpu_poll_relax() across architectures.
>>
>> Lastly, config option ARCH_HAS_CPU_RELAX indicates availability of a
>> cpu_poll_relax() that is cheaper than polling. This might be relevant
>> for cases with a prolonged timeout.
>>
>> Cc: Arnd Bergmann <arnd@arndb.de>
>> Cc: Will Deacon <will@kernel.org>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Cc: linux-arch@vger.kernel.org
>> Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
>> ---
>> Notes:
>> - Defer evaluation of @time_expr_ns to when we hit the slowpath.
>> - This also helps get rid of the labelled gotos which were used to
>> handle the early failure case (since now there's no early init
>> to be concerned with.)
>> - Add a comment mentioning that the cpu_poll_relax() implementation
>> is better than polling if ARCH_HAS_CPU_RELAX.
>>
>> include/asm-generic/barrier.h | 72 +++++++++++++++++++++++++++++++++++
>> 1 file changed, 72 insertions(+)
>>
>> diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
>> index d4f581c1e21d..2738fe35c1df 100644
>> --- a/include/asm-generic/barrier.h
>> +++ b/include/asm-generic/barrier.h
>> @@ -273,6 +273,68 @@ do { \
>> })
>> #endif
>>
>> +/*
>> + * Number of times we iterate in the loop before doing the time check.
>> + */
>> +#ifndef SMP_TIMEOUT_POLL_COUNT
>> +#define SMP_TIMEOUT_POLL_COUNT 200
>> +#endif
>> +
>> +/*
>> + * Platforms with ARCH_HAS_CPU_RELAX have a cpu_poll_relax() implementation
>> + * that is expected to be cheaper (lower power) than pure polling.
>> + */
>> +#ifndef cpu_poll_relax
>> +#define cpu_poll_relax(ptr, val, timeout_ns) cpu_relax()
>> +#endif
>> +
>> +/**
>> + * smp_cond_load_relaxed_timeout() - (Spin) wait for cond with no ordering
>> + * guarantees until a timeout expires.
>> + * @ptr: pointer to the variable to wait on.
>> + * @cond: boolean expression to wait for.
>> + * @time_expr_ns: expression that evaluates to monotonic time (in ns) or,
>> + * on failure, returns a negative value.
>> + * @timeout_ns: timeout value in ns
>> + * Both of the above are assumed to be compatible with s64; the signed
>> + * value is used to handle the failure case in @time_expr_ns.
>> + *
>> + * Equivalent to using READ_ONCE() on the condition variable.
>> + *
>> + * Callers that expect to wait for prolonged durations might want to
>> + * take into account the availability of ARCH_HAS_CPU_RELAX.
>> + */
>> +#ifndef smp_cond_load_relaxed_timeout
>> +#define smp_cond_load_relaxed_timeout(ptr, cond_expr, \
>> + time_expr_ns, timeout_ns) \
>> +({ \
>> + typeof(ptr) __PTR = (ptr); \
>> + __unqual_scalar_typeof(*ptr) VAL; \
>> + u32 __n = 0, __spin = SMP_TIMEOUT_POLL_COUNT; \
>> + s64 __timeout = (s64)timeout_ns; \
>> + s64 __time_now, __time_end = 0; \
>> + \
>> + for (;;) { \
>> + VAL = READ_ONCE(*__PTR); \
>> + if (cond_expr) \
>> + break; \
>> + cpu_poll_relax(__PTR, VAL, (u64)__timeout); \
>> + if (++__n < __spin) \
>> + continue; \
>> + __time_now = (s64)(time_expr_ns); \
>> + if (unlikely(__time_end == 0)) \
>> + __time_end = __time_now + __timeout; \
>> + __timeout = __time_end - __time_now; \
>> + if (__time_now <= 0 || __timeout <= 0) { \
>> + VAL = READ_ONCE(*__PTR); \
>> + break; \
>> + } \
>> + __n = 0; \
>> + } \
>> + (typeof(*ptr))VAL; \
>> +})
>> +#endif
>> +
>> /*
>> * pmem_wmb() ensures that all stores for which the modification
>> * are written to persistent storage by preceding instructions have
--
ankur
* Re: [PATCH v9 01/12] asm-generic: barrier: Add smp_cond_load_relaxed_timeout()
2026-02-14 4:58 ` Ankur Arora
@ 2026-02-14 11:31 ` David Laight
2026-02-18 6:33 ` Ankur Arora
0 siblings, 1 reply; 33+ messages in thread
From: David Laight @ 2026-02-14 11:31 UTC (permalink / raw)
To: Ankur Arora
Cc: linux-kernel, linux-arch, linux-arm-kernel, linux-pm, bpf, arnd,
catalin.marinas, will, peterz, akpm, mark.rutland, harisokn, cl,
ast, rafael, daniel.lezcano, memxor, zhenglifeng1, xueshuai,
joao.m.martins, boris.ostrovsky, konrad.wilk
On Fri, 13 Feb 2026 20:58:08 -0800
Ankur Arora <ankur.a.arora@oracle.com> wrote:
> David Laight <david.laight.linux@gmail.com> writes:
...
> > Plus the cost of evaluating cond_expr 200 times.
> > I guess that isn't expected to contain a PCIe read :-)
>
> :). Good point. I'll see if I can add something like "when polling on
> a memory address".
I've only timed PCIe reads into an FPGA (Cyclone V) target, but those
are about 1 microsecond - which is a lot of clocks.
Hard logic will be somewhat faster - but still slow.
There might be other places where 200 isn't a good value.
Perhaps add an extra #define that drops in the loop count?
David
* Re: [PATCH v9 01/12] asm-generic: barrier: Add smp_cond_load_relaxed_timeout()
2026-02-14 11:31 ` David Laight
@ 2026-02-18 6:33 ` Ankur Arora
0 siblings, 0 replies; 33+ messages in thread
From: Ankur Arora @ 2026-02-18 6:33 UTC (permalink / raw)
To: David Laight
Cc: Ankur Arora, linux-kernel, linux-arch, linux-arm-kernel, linux-pm,
bpf, arnd, catalin.marinas, will, peterz, akpm, mark.rutland,
harisokn, cl, ast, rafael, daniel.lezcano, memxor, zhenglifeng1,
xueshuai, joao.m.martins, boris.ostrovsky, konrad.wilk
David Laight <david.laight.linux@gmail.com> writes:
> On Fri, 13 Feb 2026 20:58:08 -0800
> Ankur Arora <ankur.a.arora@oracle.com> wrote:
>
>> David Laight <david.laight.linux@gmail.com> writes:
> ...
>> > Plus the cost of evaluating cond_expr 200 times.
>> > I guess that isn't expected to contain a PCIe read :-)
>>
>> :). Good point. I'll see if I can add something like "when polling on
>> a memory address".
>
> I've only timed PCIe reads into an fpga (Cyclone V) target, but those
> are about 1 micro-second - which is a lot of clocks.
> Hard logic will be somewhat faster - but still slow.
Yeah that would be a lot of clocks.
> There might be other places where 200 isn't a good value.
> Perhaps add an extra #define that drops in the loop count?
In principle that makes sense. However, if we add a variant like this
where the user specifies their own loop count:

#define __smp_cond_load_relaxed_timeout(ptr, cond_expr, \
				time_expr_ns, timeout_ns, loop_count)
that means that the user needs to define a sane loop count value for all
platforms where it might get called. That just seems to shift the problem
to the callers.
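For completeness, the existing #ifndef guard around the default already
allows a per-architecture override without pushing the choice onto
callers; a minimal sketch (assuming the define lives in the arch's
asm/barrier.h, included ahead of the asm-generic one; the value 16 is
purely illustrative):

	/* hypothetical arch-specific poll count, honoured by the
	 * #ifndef SMP_TIMEOUT_POLL_COUNT guard in asm-generic/barrier.h */
	#define SMP_TIMEOUT_POLL_COUNT	16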
--
ankur