* [PATCH bpf-next v1] rqspinlock: Introduce res_spin_trylock
@ 2025-11-25 3:08 Kumar Kartikeya Dwivedi
From: Kumar Kartikeya Dwivedi @ 2025-11-25 3:08 UTC
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Martin KaFai Lau, Eduard Zingerman, kkd, kernel-team
A trylock variant for rqspinlock has been missing owing to the lack of users
in the tree so far; add one now, as it will be needed in subsequent patches.
Mark it __must_check and __always_inline.

This essentially copies queued_spin_trylock(), but does not depend on it, since
rqspinlock compiles down to a test-and-set (TAS) lock when CONFIG_QUEUED_SPINLOCKS=n.

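For illustration only (not part of the diff below; the caller, the stats
struct, and the -EBUSY policy are made up, and raw_res_spin_unlock_irqrestore()
is assumed to be the existing unlock helper in this header), a context that
must not block could use the new trylock roughly like so:

	struct my_stats { u64 updates; };

	/* Hypothetical NMI-safe update: fail fast instead of spinning. */
	static int try_update_stats(rqspinlock_t *lock, struct my_stats *s)
	{
		unsigned long flags;

		if (!raw_res_spin_trylock_irqsave(lock, flags))
			return -EBUSY;	/* lock held elsewhere, let the caller retry */
		s->updates++;
		raw_res_spin_unlock_irqrestore(lock, flags);
		return 0;
	}
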
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
include/asm-generic/rqspinlock.h | 45 ++++++++++++++++++++++++++++++++
1 file changed, 45 insertions(+)
diff --git a/include/asm-generic/rqspinlock.h b/include/asm-generic/rqspinlock.h
index 6d4244d643df3..a7f4b7c0fb78a 100644
--- a/include/asm-generic/rqspinlock.h
+++ b/include/asm-generic/rqspinlock.h
@@ -217,12 +217,57 @@ static __always_inline void res_spin_unlock(rqspinlock_t *lock)
this_cpu_dec(rqspinlock_held_locks.cnt);
}
+/**
+ * res_spin_trylock - try to acquire a queued spinlock
+ * @lock: Pointer to queued spinlock structure
+ *
+ * Attempts to acquire the lock without blocking. This function should be used
+ * in contexts where blocking is not allowed (e.g., NMI handlers).
+ *
+ * Return:
+ * * 1 - Lock was acquired successfully.
+ * * 0 - Lock acquisition failed.
+ */
+static __must_check __always_inline int res_spin_trylock(rqspinlock_t *lock)
+{
+ int val = atomic_read(&lock->val);
+ int ret;
+
+ if (unlikely(val))
+ return 0;
+
+ ret = likely(atomic_try_cmpxchg_acquire(&lock->val, &val, 1));
+ if (ret)
+ grab_held_lock_entry(lock);
+ return ret;
+}
+
#ifdef CONFIG_QUEUED_SPINLOCKS
#define raw_res_spin_lock_init(lock) ({ *(lock) = (rqspinlock_t)__ARCH_SPIN_LOCK_UNLOCKED; })
#else
#define raw_res_spin_lock_init(lock) ({ *(lock) = (rqspinlock_t){0}; })
#endif
+#define raw_res_spin_trylock(lock) \
+ ({ \
+ int __ret; \
+ preempt_disable(); \
+ __ret = res_spin_trylock(lock); \
+ if (!__ret) \
+ preempt_enable(); \
+ __ret; \
+ })
+
+#define raw_res_spin_trylock_irqsave(lock, flags) \
+ ({ \
+ int __ret; \
+ local_irq_save(flags); \
+ __ret = raw_res_spin_trylock(lock); \
+ if (!__ret) \
+ local_irq_restore(flags); \
+ __ret; \
+ })
+
#define raw_res_spin_lock(lock) \
({ \
int __ret; \
* Re: [PATCH bpf-next v1] rqspinlock: Introduce res_spin_trylock
@ 2025-11-26 3:04 ` Alexei Starovoitov
From: Alexei Starovoitov @ 2025-11-26 3:04 UTC
To: Kumar Kartikeya Dwivedi
Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Martin KaFai Lau, Eduard Zingerman, kkd, Kernel Team
On Mon, Nov 24, 2025 at 7:09 PM Kumar Kartikeya Dwivedi
<memxor@gmail.com> wrote:
>
> A trylock variant for rqspinlock has been missing owing to the lack of users
> in the tree so far; add one now, as it will be needed in subsequent patches.
> Mark it __must_check and __always_inline.
>
> This essentially copies queued_spin_trylock(), but does not depend on it, since
> rqspinlock compiles down to a test-and-set (TAS) lock when CONFIG_QUEUED_SPINLOCKS=n.
>
> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> ---
> include/asm-generic/rqspinlock.h | 45 ++++++++++++++++++++++++++++++++
> 1 file changed, 45 insertions(+)
>
> diff --git a/include/asm-generic/rqspinlock.h b/include/asm-generic/rqspinlock.h
> index 6d4244d643df3..a7f4b7c0fb78a 100644
> --- a/include/asm-generic/rqspinlock.h
> +++ b/include/asm-generic/rqspinlock.h
> @@ -217,12 +217,57 @@ static __always_inline void res_spin_unlock(rqspinlock_t *lock)
> this_cpu_dec(rqspinlock_held_locks.cnt);
> }
>
> +/**
> + * res_spin_trylock - try to acquire a queued spinlock
> + * @lock: Pointer to queued spinlock structure
> + *
> + * Attempts to acquire the lock without blocking. This function should be used
> + * in contexts where blocking is not allowed (e.g., NMI handlers).
> + *
> + * Return:
> + * * 1 - Lock was acquired successfully.
> + * * 0 - Lock acquisition failed.
> + */
> +static __must_check __always_inline int res_spin_trylock(rqspinlock_t *lock)
> +{
> + int val = atomic_read(&lock->val);
This needs a comment explaining why starting from val = 0, the way
res_spin_lock() does, is not enough here; see the sketch after the quoted hunk.
> + int ret;
> +
> + if (unlikely(val))
> + return 0;
> +
> + ret = likely(atomic_try_cmpxchg_acquire(&lock->val, &val, 1));
> + if (ret)
> + grab_held_lock_entry(lock);
Same issue as with res_spin_lock()...
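
For comparison (a sketch quoted from memory, so the in-tree fast path may
differ in detail), res_spin_lock() starts directly from val = 0 and hands the
observed value to the slow path on failure:

	int val = 0;

	/* Uncontended fast path: 0 -> 1 transition, then record the held lock. */
	if (likely(atomic_try_cmpxchg_acquire(&lock->val, &val, 1))) {
		grab_held_lock_entry(lock);
		return 0;
	}
	return resilient_queued_spin_lock_slowpath(lock, val);

The question above is why the trylock does an extra atomic_read() before the
cmpxchg instead of following this pattern.
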
Overall it makes sense, but let's defer it for now,
since without users somebody might send a patch to remove it
as dead code.
pw-bot: cr