* [PATCH bpf-next v1 1/1] bpf: Annotate rqspinlock lock acquiring functions with __must_check
@ 2025-11-17 19:15 Amery Hung
2025-11-18 10:11 ` kernel test robot
` (2 more replies)
0 siblings, 3 replies; 9+ messages in thread
From: Amery Hung @ 2025-11-17 19:15 UTC (permalink / raw)
To: bpf
Cc: netdev, alexei.starovoitov, andrii, daniel, memxor, ameryhung,
kernel-team
Locking a resilient queued spinlock can fail when a deadlock or timeout
occurs. Mark the lock acquiring functions with __must_check to make sure
callers always handle the returned error.
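For example, a caller is then expected to handle the error roughly like
this (illustrative sketch only, not part of this patch; the function name
is made up):

static int example_critical_section(rqspinlock_t *lock)
{
        unsigned long flags;
        int ret;

        ret = raw_res_spin_lock_irqsave(lock, flags);
        if (ret)
                return ret;     /* -EDEADLK or -ETIMEDOUT, skip the critical section */

        /* ... critical section ... */

        raw_res_spin_unlock_irqrestore(lock, flags);
        return 0;
}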
Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
include/asm-generic/rqspinlock.h | 47 +++++++++++++++++++-------------
1 file changed, 28 insertions(+), 19 deletions(-)
diff --git a/include/asm-generic/rqspinlock.h b/include/asm-generic/rqspinlock.h
index 6d4244d643df..855c09435506 100644
--- a/include/asm-generic/rqspinlock.h
+++ b/include/asm-generic/rqspinlock.h
@@ -171,7 +171,7 @@ static __always_inline void release_held_lock_entry(void)
* * -EDEADLK - Lock acquisition failed because of AA/ABBA deadlock.
* * -ETIMEDOUT - Lock acquisition failed because of timeout.
*/
-static __always_inline int res_spin_lock(rqspinlock_t *lock)
+static __always_inline __must_check int res_spin_lock(rqspinlock_t *lock)
{
int val = 0;
@@ -223,27 +223,36 @@ static __always_inline void res_spin_unlock(rqspinlock_t *lock)
#define raw_res_spin_lock_init(lock) ({ *(lock) = (rqspinlock_t){0}; })
#endif
-#define raw_res_spin_lock(lock) \
- ({ \
- int __ret; \
- preempt_disable(); \
- __ret = res_spin_lock(lock); \
- if (__ret) \
- preempt_enable(); \
- __ret; \
- })
+static __always_inline __must_check int raw_res_spin_lock(rqspinlock_t *lock)
+{
+ int ret;
+
+ preempt_disable();
+ ret = res_spin_lock(lock);
+ if (ret)
+ preempt_enable();
+
+ return ret;
+}
#define raw_res_spin_unlock(lock) ({ res_spin_unlock(lock); preempt_enable(); })
-#define raw_res_spin_lock_irqsave(lock, flags) \
- ({ \
- int __ret; \
- local_irq_save(flags); \
- __ret = raw_res_spin_lock(lock); \
- if (__ret) \
- local_irq_restore(flags); \
- __ret; \
- })
+static __always_inline __must_check int
+__raw_res_spin_lock_irqsave(rqspinlock_t *lock, unsigned long *flags)
+{
+ unsigned long __flags;
+ int ret;
+
+ local_irq_save(__flags);
+ ret = raw_res_spin_lock(lock);
+ if (ret)
+ local_irq_restore(__flags);
+
+ *flags = __flags;
+ return ret;
+}
+
+#define raw_res_spin_lock_irqsave(lock, flags) __raw_res_spin_lock_irqsave(lock, &flags)
#define raw_res_spin_unlock_irqrestore(lock, flags) ({ raw_res_spin_unlock(lock); local_irq_restore(flags); })
--
2.47.3
* Re: [PATCH bpf-next v1 1/1] bpf: Annotate rqspinlock lock acquiring functions with __must_check
2025-11-17 19:15 [PATCH bpf-next v1 1/1] bpf: Annotate rqspinlock lock acquiring functions with __must_check Amery Hung
@ 2025-11-18 10:11 ` kernel test robot
2025-11-18 10:12 ` kernel test robot
2025-11-18 10:16 ` Kumar Kartikeya Dwivedi
2 siblings, 0 replies; 9+ messages in thread
From: kernel test robot @ 2025-11-18 10:11 UTC (permalink / raw)
To: Amery Hung, bpf
Cc: oe-kbuild-all, netdev, alexei.starovoitov, andrii, daniel, memxor,
ameryhung, kernel-team
Hi Amery,
kernel test robot noticed the following build warnings:
[auto build test WARNING on bpf-next/master]
url: https://github.com/intel-lab-lkp/linux/commits/Amery-Hung/bpf-Annotate-rqspinlock-lock-acquiring-functions-with-__must_check/20251118-031838
base: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link: https://lore.kernel.org/r/20251117191515.2934026-1-ameryhung%40gmail.com
patch subject: [PATCH bpf-next v1 1/1] bpf: Annotate rqspinlock lock acquiring functions with __must_check
config: arm-randconfig-002-20251118 (https://download.01.org/0day-ci/archive/20251118/202511181704.SFwhGJOb-lkp@intel.com/config)
compiler: arm-linux-gnueabi-gcc (GCC) 10.5.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251118/202511181704.SFwhGJOb-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202511181704.SFwhGJOb-lkp@intel.com/
All warnings (new ones prefixed by >>):
In file included from ./arch/arm/include/generated/asm/rqspinlock.h:1,
from kernel/locking/locktorture.c:367:
kernel/locking/locktorture.c: In function 'torture_raw_res_spin_write_lock_irq':
>> include/asm-generic/rqspinlock.h:255:48: warning: ignoring return value of '__raw_res_spin_lock_irqsave' declared with attribute 'warn_unused_result' [-Wunused-result]
255 | #define raw_res_spin_lock_irqsave(lock, flags) __raw_res_spin_lock_irqsave(lock, &flags)
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
kernel/locking/locktorture.c:396:2: note: in expansion of macro 'raw_res_spin_lock_irqsave'
396 | raw_res_spin_lock_irqsave(&rqspinlock, flags);
| ^~~~~~~~~~~~~~~~~~~~~~~~~
kernel/locking/locktorture.c: In function 'torture_raw_res_spin_write_lock':
>> kernel/locking/locktorture.c:372:2: warning: ignoring return value of 'raw_res_spin_lock' declared with attribute 'warn_unused_result' [-Wunused-result]
372 | raw_res_spin_lock(&rqspinlock);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
--
In file included from arch/arm/include/generated/asm/rqspinlock.h:1,
from locktorture.c:367:
locktorture.c: In function 'torture_raw_res_spin_write_lock_irq':
>> include/asm-generic/rqspinlock.h:255:48: warning: ignoring return value of '__raw_res_spin_lock_irqsave' declared with attribute 'warn_unused_result' [-Wunused-result]
255 | #define raw_res_spin_lock_irqsave(lock, flags) __raw_res_spin_lock_irqsave(lock, &flags)
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
locktorture.c:396:2: note: in expansion of macro 'raw_res_spin_lock_irqsave'
396 | raw_res_spin_lock_irqsave(&rqspinlock, flags);
| ^~~~~~~~~~~~~~~~~~~~~~~~~
locktorture.c: In function 'torture_raw_res_spin_write_lock':
locktorture.c:372:2: warning: ignoring return value of 'raw_res_spin_lock' declared with attribute 'warn_unused_result' [-Wunused-result]
372 | raw_res_spin_lock(&rqspinlock);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
vim +255 include/asm-generic/rqspinlock.h
254
> 255 #define raw_res_spin_lock_irqsave(lock, flags) __raw_res_spin_lock_irqsave(lock, &flags)
256
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH bpf-next v1 1/1] bpf: Annotate rqspinlock lock acquiring functions with __must_check
2025-11-17 19:15 [PATCH bpf-next v1 1/1] bpf: Annotate rqspinlock lock acquiring functions with __must_check Amery Hung
2025-11-18 10:11 ` kernel test robot
@ 2025-11-18 10:12 ` kernel test robot
2025-11-18 10:16 ` Kumar Kartikeya Dwivedi
2 siblings, 0 replies; 9+ messages in thread
From: kernel test robot @ 2025-11-18 10:12 UTC (permalink / raw)
To: Amery Hung, bpf
Cc: llvm, oe-kbuild-all, netdev, alexei.starovoitov, andrii, daniel,
memxor, ameryhung, kernel-team
Hi Amery,
kernel test robot noticed the following build warnings:
[auto build test WARNING on bpf-next/master]
url: https://github.com/intel-lab-lkp/linux/commits/Amery-Hung/bpf-Annotate-rqspinlock-lock-acquiring-functions-with-__must_check/20251118-031838
base: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link: https://lore.kernel.org/r/20251117191515.2934026-1-ameryhung%40gmail.com
patch subject: [PATCH bpf-next v1 1/1] bpf: Annotate rqspinlock lock acquiring functions with __must_check
config: i386-randconfig-002-20251118 (https://download.01.org/0day-ci/archive/20251118/202511181716.vzqU1M8A-lkp@intel.com/config)
compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251118/202511181716.vzqU1M8A-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202511181716.vzqU1M8A-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> kernel/locking/locktorture.c:372:2: warning: ignoring return value of function declared with 'warn_unused_result' attribute [-Wunused-result]
372 | raw_res_spin_lock(&rqspinlock);
| ^~~~~~~~~~~~~~~~~ ~~~~~~~~~~~
kernel/locking/locktorture.c:396:2: warning: ignoring return value of function declared with 'warn_unused_result' attribute [-Wunused-result]
396 | raw_res_spin_lock_irqsave(&rqspinlock, flags);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/asm-generic/rqspinlock.h:255:48: note: expanded from macro 'raw_res_spin_lock_irqsave'
255 | #define raw_res_spin_lock_irqsave(lock, flags) __raw_res_spin_lock_irqsave(lock, &flags)
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~
2 warnings generated.
vim +/warn_unused_result +372 kernel/locking/locktorture.c
a6884f6f1dd565 Kumar Kartikeya Dwivedi 2025-03-15 369
a6884f6f1dd565 Kumar Kartikeya Dwivedi 2025-03-15 370 static int torture_raw_res_spin_write_lock(int tid __maybe_unused)
a6884f6f1dd565 Kumar Kartikeya Dwivedi 2025-03-15 371 {
a6884f6f1dd565 Kumar Kartikeya Dwivedi 2025-03-15 @372 raw_res_spin_lock(&rqspinlock);
a6884f6f1dd565 Kumar Kartikeya Dwivedi 2025-03-15 373 return 0;
a6884f6f1dd565 Kumar Kartikeya Dwivedi 2025-03-15 374 }
a6884f6f1dd565 Kumar Kartikeya Dwivedi 2025-03-15 375
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH bpf-next v1 1/1] bpf: Annotate rqspinlock lock acquiring functions with __must_check
2025-11-17 19:15 [PATCH bpf-next v1 1/1] bpf: Annotate rqspinlock lock acquiring functions with __must_check Amery Hung
2025-11-18 10:11 ` kernel test robot
2025-11-18 10:12 ` kernel test robot
@ 2025-11-18 10:16 ` Kumar Kartikeya Dwivedi
2025-11-18 10:42 ` David Laight
2 siblings, 1 reply; 9+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2025-11-18 10:16 UTC (permalink / raw)
To: Amery Hung; +Cc: bpf, netdev, alexei.starovoitov, andrii, daniel, kernel-team
On Mon, 17 Nov 2025 at 14:15, Amery Hung <ameryhung@gmail.com> wrote:
>
> Locking a resilient queued spinlock can fail when a deadlock or timeout
> occurs. Mark the lock acquiring functions with __must_check to make sure
> callers always handle the returned error.
>
> Suggested-by: Andrii Nakryiko <andrii@kernel.org>
> Signed-off-by: Amery Hung <ameryhung@gmail.com>
> ---
Looks like it's working :)
I would just explicitly ignore it with a (void) cast in the locktorture case.
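Something along these lines in kernel/locking/locktorture.c (untested
sketch of what I mean):

        /* explicitly discard the __must_check result */
        (void)raw_res_spin_lock(&rqspinlock);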
After that is fixed, you can add:
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Thanks!
> include/asm-generic/rqspinlock.h | 47 +++++++++++++++++++-------------
> 1 file changed, 28 insertions(+), 19 deletions(-)
>
> diff --git a/include/asm-generic/rqspinlock.h b/include/asm-generic/rqspinlock.h
> index 6d4244d643df..855c09435506 100644
> --- a/include/asm-generic/rqspinlock.h
> +++ b/include/asm-generic/rqspinlock.h
> @@ -171,7 +171,7 @@ static __always_inline void release_held_lock_entry(void)
> * * -EDEADLK - Lock acquisition failed because of AA/ABBA deadlock.
> * * -ETIMEDOUT - Lock acquisition failed because of timeout.
> */
> -static __always_inline int res_spin_lock(rqspinlock_t *lock)
> +static __always_inline __must_check int res_spin_lock(rqspinlock_t *lock)
> {
> int val = 0;
>
> @@ -223,27 +223,36 @@ static __always_inline void res_spin_unlock(rqspinlock_t *lock)
> #define raw_res_spin_lock_init(lock) ({ *(lock) = (rqspinlock_t){0}; })
> #endif
>
> -#define raw_res_spin_lock(lock) \
> - ({ \
> - int __ret; \
> - preempt_disable(); \
> - __ret = res_spin_lock(lock); \
> - if (__ret) \
> - preempt_enable(); \
> - __ret; \
> - })
> +static __always_inline __must_check int raw_res_spin_lock(rqspinlock_t *lock)
> +{
> + int ret;
> +
> + preempt_disable();
> + ret = res_spin_lock(lock);
> + if (ret)
> + preempt_enable();
> +
> + return ret;
> +}
>
> #define raw_res_spin_unlock(lock) ({ res_spin_unlock(lock); preempt_enable(); })
>
> -#define raw_res_spin_lock_irqsave(lock, flags) \
> - ({ \
> - int __ret; \
> - local_irq_save(flags); \
> - __ret = raw_res_spin_lock(lock); \
> - if (__ret) \
> - local_irq_restore(flags); \
> - __ret; \
> - })
> +static __always_inline __must_check int
> +__raw_res_spin_lock_irqsave(rqspinlock_t *lock, unsigned long *flags)
> +{
> + unsigned long __flags;
> + int ret;
> +
> + local_irq_save(__flags);
> + ret = raw_res_spin_lock(lock);
> + if (ret)
> + local_irq_restore(__flags);
> +
> + *flags = __flags;
> + return ret;
> +}
> +
> +#define raw_res_spin_lock_irqsave(lock, flags) __raw_res_spin_lock_irqsave(lock, &flags)
>
> #define raw_res_spin_unlock_irqrestore(lock, flags) ({ raw_res_spin_unlock(lock); local_irq_restore(flags); })
>
> --
> 2.47.3
>
* Re: [PATCH bpf-next v1 1/1] bpf: Annotate rqspinlock lock acquiring functions with __must_check
2025-11-18 10:16 ` Kumar Kartikeya Dwivedi
@ 2025-11-18 10:42 ` David Laight
2025-11-20 20:12 ` Amery Hung
0 siblings, 1 reply; 9+ messages in thread
From: David Laight @ 2025-11-18 10:42 UTC (permalink / raw)
To: Kumar Kartikeya Dwivedi
Cc: Amery Hung, bpf, netdev, alexei.starovoitov, andrii, daniel,
kernel-team
On Tue, 18 Nov 2025 05:16:50 -0500
Kumar Kartikeya Dwivedi <memxor@gmail.com> wrote:
> On Mon, 17 Nov 2025 at 14:15, Amery Hung <ameryhung@gmail.com> wrote:
> >
> > Locking a resilient queued spinlock can fail when a deadlock or timeout
> > occurs. Mark the lock acquiring functions with __must_check to make sure
> > callers always handle the returned error.
> >
> > Suggested-by: Andrii Nakryiko <andrii@kernel.org>
> > Signed-off-by: Amery Hung <ameryhung@gmail.com>
> > ---
>
> Looks like it's working :)
> I would just explicitly ignore it with a (void) cast in the locktorture case.
I'm not sure that works - I usually have to try a lot harder to ignore
a '__must_check' result.
David
* Re: [PATCH bpf-next v1 1/1] bpf: Annotate rqspinlock lock acquiring functions with __must_check
2025-11-18 10:42 ` David Laight
@ 2025-11-20 20:12 ` Amery Hung
2025-11-20 21:27 ` David Laight
2025-11-25 23:34 ` Andrii Nakryiko
0 siblings, 2 replies; 9+ messages in thread
From: Amery Hung @ 2025-11-20 20:12 UTC (permalink / raw)
To: David Laight
Cc: Kumar Kartikeya Dwivedi, bpf, netdev, alexei.starovoitov, andrii,
daniel, kernel-team
On Tue, Nov 18, 2025 at 2:42 AM David Laight
<david.laight.linux@gmail.com> wrote:
>
> On Tue, 18 Nov 2025 05:16:50 -0500
> Kumar Kartikeya Dwivedi <memxor@gmail.com> wrote:
>
> > On Mon, 17 Nov 2025 at 14:15, Amery Hung <ameryhung@gmail.com> wrote:
> > >
> > > Locking a resilient queued spinlock can fail when a deadlock or timeout
> > > occurs. Mark the lock acquiring functions with __must_check to make sure
> > > callers always handle the returned error.
> > >
> > > Suggested-by: Andrii Nakryiko <andrii@kernel.org>
> > > Signed-off-by: Amery Hung <ameryhung@gmail.com>
> > > ---
> >
> > Looks like it's working :)
> > I would just explicitly ignore it with a (void) cast in the locktorture case.
>
> I'm not sure that works - I usually have to try a lot harder to ignore
> a '__must_check' result.
Thanks for the heads up.
Indeed, gcc still complains even when the return value is cast to (void),
while clang does not.
I have to silence the warning by:
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-result"
raw_res_spin_lock(&rqspinlock);
#pragma GCC diagnostic pop
Thanks!
Amery
>
> David
* Re: [PATCH bpf-next v1 1/1] bpf: Annotate rqspinlock lock acquiring functions with __must_check
2025-11-20 20:12 ` Amery Hung
@ 2025-11-20 21:27 ` David Laight
2025-11-25 23:34 ` Andrii Nakryiko
1 sibling, 0 replies; 9+ messages in thread
From: David Laight @ 2025-11-20 21:27 UTC (permalink / raw)
To: Amery Hung
Cc: Kumar Kartikeya Dwivedi, bpf, netdev, alexei.starovoitov, andrii,
daniel, kernel-team
On Thu, 20 Nov 2025 12:12:12 -0800
Amery Hung <ameryhung@gmail.com> wrote:
> On Tue, Nov 18, 2025 at 2:42 AM David Laight
> <david.laight.linux@gmail.com> wrote:
> >
> > On Tue, 18 Nov 2025 05:16:50 -0500
> > Kumar Kartikeya Dwivedi <memxor@gmail.com> wrote:
> >
> > > On Mon, 17 Nov 2025 at 14:15, Amery Hung <ameryhung@gmail.com> wrote:
> > > >
> > > > Locking a resilient queued spinlock can fail when a deadlock or timeout
> > > > occurs. Mark the lock acquiring functions with __must_check to make sure
> > > > callers always handle the returned error.
> > > >
> > > > Suggested-by: Andrii Nakryiko <andrii@kernel.org>
> > > > Signed-off-by: Amery Hung <ameryhung@gmail.com>
> > > > ---
> > >
> > > Looks like it's working :)
> > > I would just explicitly ignore it with a (void) cast in the locktorture case.
> >
> > I'm not sure that works - I usually have to try a lot harder to ignore
> > a '__must_check' result.
>
> Thanks for the heads up.
>
> Indeed, gcc still complains even when the return value is cast to (void),
> while clang does not.
>
> I have to silence the warning by:
>
> #pragma GCC diagnostic push
> #pragma GCC diagnostic ignored "-Wunused-result"
> raw_res_spin_lock(&rqspinlock);
> #pragma GCC diagnostic pop
I think the simpler:
if (raw_res_spin_lock(&rqspinlock)) {};
also works.
But I'm sure I've resorted to crap like:
x += foo() ? 0 : 0;
and/or:
x += foo() == IMPOSSIBLE_VALUE;
and/or wrapping the call in a static inline function.
It is all a right PITA when you are doing read/write on a pipe
that is being used for events.
At least no one has put a 'must_check' on fprintf() (yet).
Code that looks at the return value is usually broken!
(hint: you need to call fflush() and then check ferror().)
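Roughly (userspace sketch; the error handler name is made up):

        fprintf(stream, "event\n");
        if (fflush(stream) == EOF || ferror(stream))
                handle_write_error();   /* hypothetical recovery path */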
David
>
> Thanks!
> Amery
>
> >
> > David
* Re: [PATCH bpf-next v1 1/1] bpf: Annotate rqspinlock lock acquiring functions with __must_check
2025-11-20 20:12 ` Amery Hung
2025-11-20 21:27 ` David Laight
@ 2025-11-25 23:34 ` Andrii Nakryiko
2025-11-26 21:52 ` Amery Hung
1 sibling, 1 reply; 9+ messages in thread
From: Andrii Nakryiko @ 2025-11-25 23:34 UTC (permalink / raw)
To: Amery Hung
Cc: David Laight, Kumar Kartikeya Dwivedi, bpf, netdev,
alexei.starovoitov, andrii, daniel, kernel-team
On Thu, Nov 20, 2025 at 12:12 PM Amery Hung <ameryhung@gmail.com> wrote:
>
> On Tue, Nov 18, 2025 at 2:42 AM David Laight
> <david.laight.linux@gmail.com> wrote:
> >
> > On Tue, 18 Nov 2025 05:16:50 -0500
> > Kumar Kartikeya Dwivedi <memxor@gmail.com> wrote:
> >
> > > On Mon, 17 Nov 2025 at 14:15, Amery Hung <ameryhung@gmail.com> wrote:
> > > >
> > > > Locking a resilient queued spinlock can fail when a deadlock or timeout
> > > > occurs. Mark the lock acquiring functions with __must_check to make sure
> > > > callers always handle the returned error.
> > > >
> > > > Suggested-by: Andrii Nakryiko <andrii@kernel.org>
> > > > Signed-off-by: Amery Hung <ameryhung@gmail.com>
> > > > ---
> > >
> > > Looks like it's working :)
> > > I would just explicitly ignore it with a (void) cast in the locktorture case.
> >
> > I'm not sure that works - I usually have to try a lot harder to ignore
> > a '__must_check' result.
>
> Thanks for the heads up.
>
> Indeed, gcc still complains even when the return value is cast to (void),
> while clang does not.
>
> I have to silence the warning by:
>
> #pragma GCC diagnostic push
> #pragma GCC diagnostic ignored "-Wunused-result"
> raw_res_spin_lock(&rqspinlock);
> #pragma GCC diagnostic pop
>
For BPF selftests we have
#define __sink(expr) asm volatile("" : "+g"(expr))
Try if that works here?
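I.e. something like this in locktorture.c (untested sketch, with __sink
copied over locally):

#define __sink(expr) asm volatile("" : "+g"(expr))

        int ret = raw_res_spin_lock(&rqspinlock);

        /* the empty asm makes the compiler treat ret as used */
        __sink(ret);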
> Thanks!
> Amery
>
> >
> > David
* Re: [PATCH bpf-next v1 1/1] bpf: Annotate rqspinlock lock acquiring functions with __must_check
2025-11-25 23:34 ` Andrii Nakryiko
@ 2025-11-26 21:52 ` Amery Hung
0 siblings, 0 replies; 9+ messages in thread
From: Amery Hung @ 2025-11-26 21:52 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: David Laight, Kumar Kartikeya Dwivedi, bpf, netdev,
alexei.starovoitov, andrii, daniel, kernel-team
On Tue, Nov 25, 2025 at 3:35 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Thu, Nov 20, 2025 at 12:12 PM Amery Hung <ameryhung@gmail.com> wrote:
> >
> > On Tue, Nov 18, 2025 at 2:42 AM David Laight
> > <david.laight.linux@gmail.com> wrote:
> > >
> > > On Tue, 18 Nov 2025 05:16:50 -0500
> > > Kumar Kartikeya Dwivedi <memxor@gmail.com> wrote:
> > >
> > > > On Mon, 17 Nov 2025 at 14:15, Amery Hung <ameryhung@gmail.com> wrote:
> > > > >
> > > > > Locking a resilient queued spinlock can fail when a deadlock or timeout
> > > > > occurs. Mark the lock acquiring functions with __must_check to make sure
> > > > > callers always handle the returned error.
> > > > >
> > > > > Suggested-by: Andrii Nakryiko <andrii@kernel.org>
> > > > > Signed-off-by: Amery Hung <ameryhung@gmail.com>
> > > > > ---
> > > >
> > > > Looks like it's working :)
> > > > I would just explicitly ignore it with a (void) cast in the locktorture case.
> > >
> > > I'm not sure that works - I usually have to try a lot harder to ignore
> > > a '__must_check' result.
> >
> > Thanks for the heads up.
> >
> > Indeed, gcc still complains even when the return value is cast to (void),
> > while clang does not.
> >
> > I have to silence the warning by:
> >
> > #pragma GCC diagnostic push
> > #pragma GCC diagnostic ignored "-Wunused-result"
> > raw_res_spin_lock(&rqspinlock);
> > #pragma GCC diagnostic pop
> >
>
> For BPF selftests we have
>
> #define __sink(expr) asm volatile("" : "+g"(expr))
>
> Try if that works here?
Thanks for the tip.
In v2, I decided to return the error to the caller to align with
another test case, where the lock (ww_mutex) can also fail and has
a __must_check annotation.
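I.e. roughly this change to locktorture.c (sketch; the actual v2 may
differ slightly):

 static int torture_raw_res_spin_write_lock(int tid __maybe_unused)
 {
-       raw_res_spin_lock(&rqspinlock);
-       return 0;
+       return raw_res_spin_lock(&rqspinlock);
 }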
>
> > Thanks!
> > Amery
> >
> > >
> > > David