From: Ankur Arora <ankur.a.arora@oracle.com>
To: Will Deacon <will@kernel.org>
Cc: Ankur Arora <ankur.a.arora@oracle.com>,
linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
linux-arm-kernel@lists.infradead.org, bpf@vger.kernel.org,
arnd@arndb.de, catalin.marinas@arm.com, peterz@infradead.org,
akpm@linux-foundation.org, mark.rutland@arm.com,
harisokn@amazon.com, cl@gentwo.org, ast@kernel.org,
memxor@gmail.com, zhenglifeng1@huawei.com,
xueshuai@linux.alibaba.com, joao.m.martins@oracle.com,
boris.ostrovsky@oracle.com, konrad.wilk@oracle.com
Subject: Re: [PATCH v5 1/5] asm-generic: barrier: Add smp_cond_load_relaxed_timeout()
Date: Fri, 19 Sep 2025 16:41:56 -0700
Message-ID: <87qzw2f1rv.fsf@oracle.com>
In-Reply-To: <aMxgsh3AVO5_CCqf@willie-the-truck>
Will Deacon <will@kernel.org> writes:
> On Wed, Sep 10, 2025 at 08:46:51PM -0700, Ankur Arora wrote:
>> Add smp_cond_load_relaxed_timeout(), which extends
>> smp_cond_load_relaxed() to allow waiting for a duration.
>>
>> The additional parameter allows for the timeout check.
>>
>> The waiting is done via the usual cpu_relax() spin-wait around the
>> condition variable with periodic evaluation of the time-check.
>>
>> The number of times we spin is defined by SMP_TIMEOUT_SPIN_COUNT
>> (chosen to be 200 by default) which, assuming each cpu_relax()
>> iteration takes around 20-30 cycles (measured on a variety of x86
>> platforms), amounts to around 4000-6000 cycles.
>>
>> Cc: Arnd Bergmann <arnd@arndb.de>
>> Cc: Will Deacon <will@kernel.org>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Cc: linux-arch@vger.kernel.org
>> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
>> Reviewed-by: Haris Okanovic <harisokn@amazon.com>
>> Tested-by: Haris Okanovic <harisokn@amazon.com>
>> Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
>> ---
>> include/asm-generic/barrier.h | 35 +++++++++++++++++++++++++++++++++++
>> 1 file changed, 35 insertions(+)
>>
>> diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
>> index d4f581c1e21d..8483e139954f 100644
>> --- a/include/asm-generic/barrier.h
>> +++ b/include/asm-generic/barrier.h
>> @@ -273,6 +273,41 @@ do { \
>> })
>> #endif
>>
>> +#ifndef SMP_TIMEOUT_SPIN_COUNT
>> +#define SMP_TIMEOUT_SPIN_COUNT 200
>> +#endif
>> +
>> +/**
>> + * smp_cond_load_relaxed_timeout() - (Spin) wait for cond with no ordering
>> + * guarantees until a timeout expires.
>> + * @ptr: pointer to the variable to wait on
>> + * @cond: boolean expression to wait for
>> + * @time_check_expr: expression to decide when to bail out
>> + *
>> + * Equivalent to using READ_ONCE() on the condition variable.
>> + */
>> +#ifndef smp_cond_load_relaxed_timeout
>> +#define smp_cond_load_relaxed_timeout(ptr, cond_expr, time_check_expr) \
>> +({ \
>> + typeof(ptr) __PTR = (ptr); \
>> + __unqual_scalar_typeof(*ptr) VAL; \
>> + u32 __n = 0, __spin = SMP_TIMEOUT_SPIN_COUNT; \
>> + \
>> + for (;;) { \
>> + VAL = READ_ONCE(*__PTR); \
>> + if (cond_expr) \
>> + break; \
>> + cpu_relax(); \
>> + if (++__n < __spin) \
>> + continue; \
>> + if (time_check_expr) \
>> + break; \
>
> There's a funny discrepancy here when compared to the arm64 version in
> the next patch. Here, if we time out, then the value returned is
> potentially quite stale because it was read before the last cpu_relax().
> In the arm64 patch, the timeout check is before the cmpwait/cpu_relax(),
> which I think is better.
So, that's a good point. But the return value being stale on timeout also
seems incorrect in its own right.
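For instance, with a hypothetical caller (struct foo_lock, timeout_ns and
the unlock protocol are all made up, just to illustrate):

	/* Spin until lock->locked drops to 0 or the deadline passes. */
	static int wait_for_unlock(struct foo_lock *lock, u64 timeout_ns)
	{
		u64 deadline = local_clock() + timeout_ns;
		u8 lockval;

		lockval = smp_cond_load_relaxed_timeout(&lock->locked, !VAL,
						local_clock() >= deadline);
		/*
		 * lockval != 0 is treated as a timeout, but VAL was read
		 * before the final cpu_relax(), so an unlock landing in
		 * that window still gets reported as a timeout.
		 */
		return lockval ? -ETIMEDOUT : 0;
	}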
> Regardless, I think having the same behaviour for the two implementations
> would be a good idea.
Yeah, agreed.
As you outlined in the other mail, how about something like this:
#ifndef smp_cond_load_relaxed_timeout
#define smp_cond_load_relaxed_timeout(ptr, cond_expr, time_check_expr) \
({ \
typeof(ptr) __PTR = (ptr); \
__unqual_scalar_typeof(*ptr) VAL; \
u32 __n = 0, __poll = SMP_TIMEOUT_POLL_COUNT; \
\
for (;;) { \
VAL = READ_ONCE(*__PTR); \
if (cond_expr) \
break; \
cpu_poll_relax(); \
if (++__n < __poll) \
continue; \
if (time_check_expr) { \
VAL = READ_ONCE(*__PTR); \
break; \
} \
__n = 0; \
} \
(typeof(*ptr))VAL; \
})
#endif
A bit uglier, but without that re-read, if the cpu_poll_relax() was a
successful WFE the returned value could be ~100us out of date.
Another option might be to set some state in the time check and bail out
via an "if (cond_expr || __timed_out)" in the spin path, along the lines
of the sketch below, but I don't want to add more instructions to the
spin path.
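Untested, just to show the shape of that variant (cpu_poll_relax() and
SMP_TIMEOUT_POLL_COUNT as in the version above):

#define smp_cond_load_relaxed_timeout(ptr, cond_expr, time_check_expr)	\
({									\
	typeof(ptr) __PTR = (ptr);					\
	__unqual_scalar_typeof(*ptr) VAL;				\
	u32 __n = 0, __poll = SMP_TIMEOUT_POLL_COUNT;			\
	bool __timed_out = false;					\
									\
	for (;;) {							\
		VAL = READ_ONCE(*__PTR);				\
		if (cond_expr || __timed_out)				\
			break;						\
		cpu_poll_relax();					\
		if (++__n < __poll)					\
			continue;					\
		__timed_out = !!(time_check_expr);			\
		__n = 0;						\
	}								\
	(typeof(*ptr))VAL;						\
})

That keeps the final read fresh, but the __timed_out test lands in every
iteration of the inner loop.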
--
ankur