* [PATCH bpf v3] bpf: do not use kmalloc_nolock when !HAVE_CMPXCHG_DOUBLE
@ 2026-03-14 16:02 Levi Zim via B4 Relay
2026-03-16 15:05 ` Paul Chaignon
2026-03-16 19:53 ` Amery Hung
0 siblings, 2 replies; 7+ messages in thread
From: Levi Zim via B4 Relay @ 2026-03-14 16:02 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song,
John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
Sebastian Andrzej Siewior, Clark Williams, Steven Rostedt,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti
Cc: Amery Hung, linux-riscv, stable, bpf, linux-kernel,
linux-rt-devel, Levi Zim
From: Levi Zim <rsworktech@outlook.com>
kmalloc_nolock always fails for architectures that lack cmpxchg16b.
For example, this causes bpf_task_storage_get with flag
BPF_LOCAL_STORAGE_GET_F_CREATE to fail on a riscv64 6.19 kernel.
Fix it by enabling use_kmalloc_nolock only when HAVE_CMPXCHG_DOUBLE.
But leave the PREEMPT_RT case as is because it requires kmalloc_nolock
for correctness. Add a comment about this limitation: an architecture's
lack of CMPXCHG_DOUBLE combined with PREEMPT_RT can make
bpf_local_storage_alloc always fail.
Fixes: f484f4a3e058 ("bpf: Replace bpf memory allocator with kmalloc_nolock() in local storage")
Cc: stable@vger.kernel.org
Signed-off-by: Levi Zim <rsworktech@outlook.com>
---
I found that bpf_task_storage_get with flag BPF_LOCAL_STORAGE_GET_F_CREATE
always fails for me on a 6.19 kernel on riscv64 and bisected it.
In f484f4a3e058 ("bpf: Replace bpf memory allocator with kmalloc_nolock()
in local storage"), the bpf memory allocator was replaced with kmalloc_nolock.
This approach is problematic for architectures that lack CMPXCHG_DOUBLE
because kmalloc_nolock always fails in this case:
In function kmalloc_nolock (kmalloc_nolock_noprof):
if (!(s->flags & __CMPXCHG_DOUBLE) && !kmem_cache_debug(s))
/*
* kmalloc_nolock() is not supported on architectures that
* don't implement cmpxchg16b, but debug caches don't use
* per-cpu slab and per-cpu partial slabs. They rely on
* kmem_cache_node->list_lock, so kmalloc_nolock() can
* attempt to allocate from debug caches by
* spin_trylock_irqsave(&n->list_lock, ...)
*/
return NULL;
Fix it by enabling use_kmalloc_nolock only when HAVE_CMPXCHG_DOUBLE.
(But not for the PREEMPT_RT case, as explained in the comment and commit message.)
Note for stable: this only needs to be picked into v6.19 if the patch
makes it into 7.0.
---
Changes in v3:
- Use macro instead of const static variable to avoid triggering
warnings.
- Wrap lines at 80 columns
- Link to v2: https://lore.kernel.org/r/20260314-bpf-kmalloc-nolock-v2-1-576e33e4fa67@outlook.com
Changes in v2:
- Drop the modification to the PREEMPT_RT case as it requires
kmalloc_nolock for correctness.
- Add a comment to the PREEMPT_RT case about the limitation when
HAVE_CMPXCHG_DOUBLE is not set but PREEMPT_RT is enabled.
- Link to v1: https://lore.kernel.org/r/20260314-bpf-kmalloc-nolock-v1-1-24abf3f75a9f@outlook.com
---
include/linux/bpf_local_storage.h | 1 +
kernel/bpf/bpf_cgrp_storage.c | 3 ++-
kernel/bpf/bpf_local_storage.c | 4 ++++
kernel/bpf/bpf_task_storage.c | 3 ++-
4 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/include/linux/bpf_local_storage.h b/include/linux/bpf_local_storage.h
index 8157e8da61d40..d8f2c5d63a80e 100644
--- a/include/linux/bpf_local_storage.h
+++ b/include/linux/bpf_local_storage.h
@@ -18,6 +18,7 @@
#include <asm/rqspinlock.h>
#define BPF_LOCAL_STORAGE_CACHE_SIZE 16
+#define KMALLOC_NOLOCK_SUPPORTED IS_ENABLED(CONFIG_HAVE_CMPXCHG_DOUBLE)
struct bpf_local_storage_map_bucket {
struct hlist_head list;
diff --git a/kernel/bpf/bpf_cgrp_storage.c b/kernel/bpf/bpf_cgrp_storage.c
index c2a2ead1f466d..cd18193c44058 100644
--- a/kernel/bpf/bpf_cgrp_storage.c
+++ b/kernel/bpf/bpf_cgrp_storage.c
@@ -114,7 +114,8 @@ static int notsupp_get_next_key(struct bpf_map *map, void *key, void *next_key)
static struct bpf_map *cgroup_storage_map_alloc(union bpf_attr *attr)
{
- return bpf_local_storage_map_alloc(attr, &cgroup_cache, true);
+ return bpf_local_storage_map_alloc(attr, &cgroup_cache,
+ KMALLOC_NOLOCK_SUPPORTED);
}
static void cgroup_storage_map_free(struct bpf_map *map)
diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
index 9c96a4477f81a..a6c240da87668 100644
--- a/kernel/bpf/bpf_local_storage.c
+++ b/kernel/bpf/bpf_local_storage.c
@@ -893,6 +893,10 @@ bpf_local_storage_map_alloc(union bpf_attr *attr,
/* In PREEMPT_RT, kmalloc(GFP_ATOMIC) is still not safe in non
* preemptible context. Thus, enforce all storages to use
* kmalloc_nolock() when CONFIG_PREEMPT_RT is enabled.
+ *
+ * However, kmalloc_nolock would fail on architectures that do not
+ * have CMPXCHG_DOUBLE. On such architectures with PREEMPT_RT,
+ * bpf_local_storage_alloc would always fail.
*/
smap->use_kmalloc_nolock = IS_ENABLED(CONFIG_PREEMPT_RT) ? true : use_kmalloc_nolock;
diff --git a/kernel/bpf/bpf_task_storage.c b/kernel/bpf/bpf_task_storage.c
index 605506792b5b4..6e8597edea314 100644
--- a/kernel/bpf/bpf_task_storage.c
+++ b/kernel/bpf/bpf_task_storage.c
@@ -212,7 +212,8 @@ static int notsupp_get_next_key(struct bpf_map *map, void *key, void *next_key)
static struct bpf_map *task_storage_map_alloc(union bpf_attr *attr)
{
- return bpf_local_storage_map_alloc(attr, &task_cache, true);
+ return bpf_local_storage_map_alloc(attr, &task_cache,
+ KMALLOC_NOLOCK_SUPPORTED);
}
static void task_storage_map_free(struct bpf_map *map)
---
base-commit: e06e6b8001233241eb5b2e2791162f0585f50f4b
change-id: 20260314-bpf-kmalloc-nolock-60da80e613de
Best regards,
--
Levi Zim <rsworktech@outlook.com>
^ permalink raw reply related [flat|nested] 7+ messages in thread
* Re: [PATCH bpf v3] bpf: do not use kmalloc_nolock when !HAVE_CMPXCHG_DOUBLE
2026-03-14 16:02 [PATCH bpf v3] bpf: do not use kmalloc_nolock when !HAVE_CMPXCHG_DOUBLE Levi Zim via B4 Relay
@ 2026-03-16 15:05 ` Paul Chaignon
2026-03-16 15:46 ` Levi Zim
2026-03-17 4:45 ` Yao Zi
2026-03-16 19:53 ` Amery Hung
1 sibling, 2 replies; 7+ messages in thread
From: Paul Chaignon @ 2026-03-16 15:05 UTC (permalink / raw)
To: rsworktech
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song,
John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
Sebastian Andrzej Siewior, Clark Williams, Steven Rostedt,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
Amery Hung, linux-riscv, stable, bpf, linux-kernel,
linux-rt-devel
On Sun, Mar 15, 2026 at 12:02:48AM +0800, Levi Zim via B4 Relay wrote:
> From: Levi Zim <rsworktech@outlook.com>
>
> kmalloc_nolock always fails for architectures that lack cmpxchg16b.
> For example, this causes bpf_task_storage_get with flag
> BPF_LOCAL_STORAGE_GET_F_CREATE to fail on a riscv64 6.19 kernel.
>
> Fix it by enabling use_kmalloc_nolock only when HAVE_CMPXCHG_DOUBLE.
> But leave the PREEMPT_RT case as is because it requires kmalloc_nolock
> for correctness. Add a comment about this limitation that architecture's
> lack of CMPXCHG_DOUBLE combined with PREEMPT_RT could make
> bpf_local_storage_alloc always fail.
>
> Fixes: f484f4a3e058 ("bpf: Replace bpf memory allocator with kmalloc_nolock() in local storage")
> Cc: stable@vger.kernel.org
> Signed-off-by: Levi Zim <rsworktech@outlook.com>
> ---
Note there may be something broken with your setup as lore is reporting
that you sent this v3 email three times. Not sure if it could be an
issue.
[...]
> diff --git a/include/linux/bpf_local_storage.h b/include/linux/bpf_local_storage.h
> index 8157e8da61d40..d8f2c5d63a80e 100644
> --- a/include/linux/bpf_local_storage.h
> +++ b/include/linux/bpf_local_storage.h
> @@ -18,6 +18,7 @@
> #include <asm/rqspinlock.h>
>
> #define BPF_LOCAL_STORAGE_CACHE_SIZE 16
> +#define KMALLOC_NOLOCK_SUPPORTED IS_ENABLED(CONFIG_HAVE_CMPXCHG_DOUBLE)
>
> struct bpf_local_storage_map_bucket {
> struct hlist_head list;
> diff --git a/kernel/bpf/bpf_cgrp_storage.c b/kernel/bpf/bpf_cgrp_storage.c
> index c2a2ead1f466d..cd18193c44058 100644
> --- a/kernel/bpf/bpf_cgrp_storage.c
> +++ b/kernel/bpf/bpf_cgrp_storage.c
> @@ -114,7 +114,8 @@ static int notsupp_get_next_key(struct bpf_map *map, void *key, void *next_key)
>
> static struct bpf_map *cgroup_storage_map_alloc(union bpf_attr *attr)
> {
> - return bpf_local_storage_map_alloc(attr, &cgroup_cache, true);
> + return bpf_local_storage_map_alloc(attr, &cgroup_cache,
> + KMALLOC_NOLOCK_SUPPORTED);
> }
>
> static void cgroup_storage_map_free(struct bpf_map *map)
> diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
> index 9c96a4477f81a..a6c240da87668 100644
> --- a/kernel/bpf/bpf_local_storage.c
> +++ b/kernel/bpf/bpf_local_storage.c
> @@ -893,6 +893,10 @@ bpf_local_storage_map_alloc(union bpf_attr *attr,
> /* In PREEMPT_RT, kmalloc(GFP_ATOMIC) is still not safe in non
> * preemptible context. Thus, enforce all storages to use
> * kmalloc_nolock() when CONFIG_PREEMPT_RT is enabled.
> + *
> + * However, kmalloc_nolock would fail on architectures that do not
> + * have CMPXCHG_DOUBLE. On such architectures with PREEMPT_RT,
> + * bpf_local_storage_alloc would always fail.
> */
> smap->use_kmalloc_nolock = IS_ENABLED(CONFIG_PREEMPT_RT) ? true : use_kmalloc_nolock;
>
> diff --git a/kernel/bpf/bpf_task_storage.c b/kernel/bpf/bpf_task_storage.c
> index 605506792b5b4..6e8597edea314 100644
> --- a/kernel/bpf/bpf_task_storage.c
> +++ b/kernel/bpf/bpf_task_storage.c
> @@ -212,7 +212,8 @@ static int notsupp_get_next_key(struct bpf_map *map, void *key, void *next_key)
>
> static struct bpf_map *task_storage_map_alloc(union bpf_attr *attr)
> {
> - return bpf_local_storage_map_alloc(attr, &task_cache, true);
> + return bpf_local_storage_map_alloc(attr, &task_cache,
> + KMALLOC_NOLOCK_SUPPORTED);
I can confirm that this does fix one selftest using
BPF_LOCAL_STORAGE_GET_F_CREATE on riscv64: test_ls_map_kptr_ref1 in
map_kptr. Other tests using BPF_LOCAL_STORAGE_GET_F_CREATE are still
failing so I guess they have other issues.
Tested-by: Paul Chaignon <paul.chaignon@gmail.com>
> }
>
> static void task_storage_map_free(struct bpf_map *map)
>
> ---
> base-commit: e06e6b8001233241eb5b2e2791162f0585f50f4b
> change-id: 20260314-bpf-kmalloc-nolock-60da80e613de
>
> Best regards,
> --
> Levi Zim <rsworktech@outlook.com>
>
>
>
* Re: [PATCH bpf v3] bpf: do not use kmalloc_nolock when !HAVE_CMPXCHG_DOUBLE
2026-03-16 15:05 ` Paul Chaignon
@ 2026-03-16 15:46 ` Levi Zim
2026-03-17 4:45 ` Yao Zi
1 sibling, 0 replies; 7+ messages in thread
From: Levi Zim @ 2026-03-16 15:46 UTC (permalink / raw)
To: Paul Chaignon
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song,
John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
Sebastian Andrzej Siewior, Clark Williams, Steven Rostedt,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
Amery Hung, linux-riscv, stable, bpf, linux-kernel,
linux-rt-devel
On 3/16/26 11:05 PM, Paul Chaignon wrote:
> On Sun, Mar 15, 2026 at 12:02:48AM +0800, Levi Zim via B4 Relay wrote:
>> From: Levi Zim <rsworktech@outlook.com>
>>
>> kmalloc_nolock always fails for architectures that lack cmpxchg16b.
>> For example, this causes bpf_task_storage_get with flag
>> BPF_LOCAL_STORAGE_GET_F_CREATE to fail on a riscv64 6.19 kernel.
>>
>> Fix it by enabling use_kmalloc_nolock only when HAVE_CMPXCHG_DOUBLE.
>> But leave the PREEMPT_RT case as is because it requires kmalloc_nolock
>> for correctness. Add a comment about this limitation that architecture's
>> lack of CMPXCHG_DOUBLE combined with PREEMPT_RT could make
>> bpf_local_storage_alloc always fail.
>>
>> Fixes: f484f4a3e058 ("bpf: Replace bpf memory allocator with kmalloc_nolock() in local storage")
>> Cc: stable@vger.kernel.org
>> Signed-off-by: Levi Zim <rsworktech@outlook.com>
>> ---
>
> Note there may be something broken with your setup as lore is reporting
> that you sent this v3 email three times. Not sure if it could be an
> issue.
Thanks for reporting! But I only sent PATCH v3 once, using b4 with the web endpoint.
I only received a single email myself.
So I guess it is a bug in b4 or lore.
>
> [...]
>
>> diff --git a/kernel/bpf/bpf_task_storage.c b/kernel/bpf/bpf_task_storage.c
>> index 605506792b5b4..6e8597edea314 100644
>> --- a/kernel/bpf/bpf_task_storage.c
>> +++ b/kernel/bpf/bpf_task_storage.c
>> @@ -212,7 +212,8 @@ static int notsupp_get_next_key(struct bpf_map *map, void *key, void *next_key)
>>
>> static struct bpf_map *task_storage_map_alloc(union bpf_attr *attr)
>> {
>> - return bpf_local_storage_map_alloc(attr, &task_cache, true);
>> + return bpf_local_storage_map_alloc(attr, &task_cache,
>> + KMALLOC_NOLOCK_SUPPORTED);
>
> I can confirm that this does fix one selftest using
> BPF_LOCAL_STORAGE_GET_F_CREATE on riscv64: test_ls_map_kptr_ref1 in
> map_kptr. Other tests using BPF_LOCAL_STORAGE_GET_F_CREATE are still
> failing so I guess they have other issues.
>
> Tested-by: Paul Chaignon <paul.chaignon@gmail.com>
Thanks very much for testing this patch!
I am not sure why the other tests fail, but perhaps it is because of a
big issue for fentry/kprobe on riscv64: the first function argument
cannot be read since the a0 register is clobbered [1].
I will try to run the selftests on riscv64 when I have more time.
IIRC the issue is worked around by using PTRACE_GET_SYSCALL_INFO for ptrace.
But I got surprised by it again when adding riscv64 to my CI setup.
A kprobe/fentry bpf program that works on other architectures cannot read
the first argument of exec-family syscalls on riscv64 at all [2].
[1]: https://github.com/strace/strace/issues/315
[2]: https://github.com/kxxt/tracexec/actions/runs/23147343712/job/67247821299#step:6:1429
Best regards,
Levi
>
>> }
>>
>> static void task_storage_map_free(struct bpf_map *map)
>>
>> ---
>> base-commit: e06e6b8001233241eb5b2e2791162f0585f50f4b
>> change-id: 20260314-bpf-kmalloc-nolock-60da80e613de
>>
>> Best regards,
>> --
>> Levi Zim <rsworktech@outlook.com>
>>
>>
>>
* Re: [PATCH bpf v3] bpf: do not use kmalloc_nolock when !HAVE_CMPXCHG_DOUBLE
2026-03-14 16:02 [PATCH bpf v3] bpf: do not use kmalloc_nolock when !HAVE_CMPXCHG_DOUBLE Levi Zim via B4 Relay
2026-03-16 15:05 ` Paul Chaignon
@ 2026-03-16 19:53 ` Amery Hung
2026-03-17 0:40 ` Levi Zim
1 sibling, 1 reply; 7+ messages in thread
From: Amery Hung @ 2026-03-16 19:53 UTC (permalink / raw)
To: rsworktech
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song,
John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
Sebastian Andrzej Siewior, Clark Williams, Steven Rostedt,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
linux-riscv, stable, bpf, linux-kernel, linux-rt-devel
On Sat, Mar 14, 2026 at 9:02 AM Levi Zim via B4 Relay
<devnull+rsworktech.outlook.com@kernel.org> wrote:
>
> From: Levi Zim <rsworktech@outlook.com>
>
> kmalloc_nolock always fails for architectures that lack cmpxchg16b.
> For example, this causes bpf_task_storage_get with flag
> BPF_LOCAL_STORAGE_GET_F_CREATE to fail on a riscv64 6.19 kernel.
>
> Fix it by enabling use_kmalloc_nolock only when HAVE_CMPXCHG_DOUBLE.
> But leave the PREEMPT_RT case as is because it requires kmalloc_nolock
> for correctness. Add a comment about this limitation that architecture's
> lack of CMPXCHG_DOUBLE combined with PREEMPT_RT could make
> bpf_local_storage_alloc always fail.
Let's not do this.
This re-introduces a deadlock risk to local storage. In addition, local
storage will switch to using kmalloc_nolock() entirely.
For riscv hardware without the Zacas extension, I think a workaround with
some performance overhead is to enable CONFIG_SLUB_DEBUG and the
slub_debug options.
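The workaround suggested above corresponds to building with SLUB debugging and turning it on at boot; per the comment quoted in the commit message, kmalloc_nolock() can then attempt the debug-cache path guarded by kmem_cache_node->list_lock. A sketch of the configuration (option names follow the kernel's SLUB documentation; passing slub_debug without flags enables the default debug options for all caches):

```
# .config fragment
CONFIG_SLUB_DEBUG=y

# kernel command line
slub_debug
```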
>
> Fixes: f484f4a3e058 ("bpf: Replace bpf memory allocator with kmalloc_nolock() in local storage")
> Cc: stable@vger.kernel.org
> Signed-off-by: Levi Zim <rsworktech@outlook.com>
> ---
> I find that bpf_task_storage_get with flag BPF_LOCAL_STORAGE_GET_F_CREATE
> always fails for me on 6.19 kernel on riscv64 and bisected it.
>
> In f484f4a3e058 ("bpf: Replace bpf memory allocator with kmalloc_nolock()
> in local storage"), bpf memory allocator is replaced with kmalloc_nolock.
> This approach is problematic for architectures that lack CMPXCHG_DOUBLE
> because kmalloc_nolock always fails in this case:
>
> In function kmalloc_nolock (kmalloc_nolock_noprof):
>
> if (!(s->flags & __CMPXCHG_DOUBLE) && !kmem_cache_debug(s))
> /*
> * kmalloc_nolock() is not supported on architectures that
> * don't implement cmpxchg16b, but debug caches don't use
> * per-cpu slab and per-cpu partial slabs. They rely on
> * kmem_cache_node->list_lock, so kmalloc_nolock() can
> * attempt to allocate from debug caches by
> * spin_trylock_irqsave(&n->list_lock, ...)
> */
> return NULL;
>
> Fix it by enabling use_kmalloc_nolock only when HAVE_CMPXCHG_DOUBLE.
> (But not for a PREEMPT_RT case as explained in the comment and commitmsg)
>
> Note for stable: this only needs to be picked into v6.19 if the patch
> makes it into 7.0.
> ---
> Changes in v3:
> - Use macro instead of const static variable to avoid triggering
> warnings.
> - Wrap lines at 80 columns
> - Link to v2: https://lore.kernel.org/r/20260314-bpf-kmalloc-nolock-v2-1-576e33e4fa67@outlook.com
>
> Changes in v2:
> - Drop the modification to the PREEMPT_RT case as it requires
> kmalloc_nolock for correctness.
> - Add a comment to the PREEMPT_RT case about the limitation when
> not HAVE_CMPXCHG_DOUBLE but enables PREEMPT_RT.
> - Link to v1: https://lore.kernel.org/r/20260314-bpf-kmalloc-nolock-v1-1-24abf3f75a9f@outlook.com
> ---
> include/linux/bpf_local_storage.h | 1 +
> kernel/bpf/bpf_cgrp_storage.c | 3 ++-
> kernel/bpf/bpf_local_storage.c | 4 ++++
> kernel/bpf/bpf_task_storage.c | 3 ++-
> 4 files changed, 9 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/bpf_local_storage.h b/include/linux/bpf_local_storage.h
> index 8157e8da61d40..d8f2c5d63a80e 100644
> --- a/include/linux/bpf_local_storage.h
> +++ b/include/linux/bpf_local_storage.h
> @@ -18,6 +18,7 @@
> #include <asm/rqspinlock.h>
>
> #define BPF_LOCAL_STORAGE_CACHE_SIZE 16
> +#define KMALLOC_NOLOCK_SUPPORTED IS_ENABLED(CONFIG_HAVE_CMPXCHG_DOUBLE)
>
> struct bpf_local_storage_map_bucket {
> struct hlist_head list;
> diff --git a/kernel/bpf/bpf_cgrp_storage.c b/kernel/bpf/bpf_cgrp_storage.c
> index c2a2ead1f466d..cd18193c44058 100644
> --- a/kernel/bpf/bpf_cgrp_storage.c
> +++ b/kernel/bpf/bpf_cgrp_storage.c
> @@ -114,7 +114,8 @@ static int notsupp_get_next_key(struct bpf_map *map, void *key, void *next_key)
>
> static struct bpf_map *cgroup_storage_map_alloc(union bpf_attr *attr)
> {
> - return bpf_local_storage_map_alloc(attr, &cgroup_cache, true);
> + return bpf_local_storage_map_alloc(attr, &cgroup_cache,
> + KMALLOC_NOLOCK_SUPPORTED);
> }
>
> static void cgroup_storage_map_free(struct bpf_map *map)
> diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
> index 9c96a4477f81a..a6c240da87668 100644
> --- a/kernel/bpf/bpf_local_storage.c
> +++ b/kernel/bpf/bpf_local_storage.c
> @@ -893,6 +893,10 @@ bpf_local_storage_map_alloc(union bpf_attr *attr,
> /* In PREEMPT_RT, kmalloc(GFP_ATOMIC) is still not safe in non
> * preemptible context. Thus, enforce all storages to use
> * kmalloc_nolock() when CONFIG_PREEMPT_RT is enabled.
> + *
> + * However, kmalloc_nolock would fail on architectures that do not
> + * have CMPXCHG_DOUBLE. On such architectures with PREEMPT_RT,
> + * bpf_local_storage_alloc would always fail.
> */
> smap->use_kmalloc_nolock = IS_ENABLED(CONFIG_PREEMPT_RT) ? true : use_kmalloc_nolock;
>
> diff --git a/kernel/bpf/bpf_task_storage.c b/kernel/bpf/bpf_task_storage.c
> index 605506792b5b4..6e8597edea314 100644
> --- a/kernel/bpf/bpf_task_storage.c
> +++ b/kernel/bpf/bpf_task_storage.c
> @@ -212,7 +212,8 @@ static int notsupp_get_next_key(struct bpf_map *map, void *key, void *next_key)
>
> static struct bpf_map *task_storage_map_alloc(union bpf_attr *attr)
> {
> - return bpf_local_storage_map_alloc(attr, &task_cache, true);
> + return bpf_local_storage_map_alloc(attr, &task_cache,
> + KMALLOC_NOLOCK_SUPPORTED);
> }
>
> static void task_storage_map_free(struct bpf_map *map)
>
> ---
> base-commit: e06e6b8001233241eb5b2e2791162f0585f50f4b
> change-id: 20260314-bpf-kmalloc-nolock-60da80e613de
>
> Best regards,
> --
> Levi Zim <rsworktech@outlook.com>
>
>
* Re: [PATCH bpf v3] bpf: do not use kmalloc_nolock when !HAVE_CMPXCHG_DOUBLE
2026-03-16 19:53 ` Amery Hung
@ 2026-03-17 0:40 ` Levi Zim
2026-03-17 17:16 ` Amery Hung
0 siblings, 1 reply; 7+ messages in thread
From: Levi Zim @ 2026-03-17 0:40 UTC (permalink / raw)
To: Amery Hung
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song,
John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
Sebastian Andrzej Siewior, Clark Williams, Steven Rostedt,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
linux-riscv, stable, bpf, linux-kernel, linux-rt-devel
On 3/17/26 3:53 AM, Amery Hung wrote:
> On Sat, Mar 14, 2026 at 9:02 AM Levi Zim via B4 Relay
> <devnull+rsworktech.outlook.com@kernel.org> wrote:
>>
>> From: Levi Zim <rsworktech@outlook.com>
>>
>> kmalloc_nolock always fails for architectures that lack cmpxchg16b.
>> For example, this causes bpf_task_storage_get with flag
>> BPF_LOCAL_STORAGE_GET_F_CREATE to fail on a riscv64 6.19 kernel.
>>
>> Fix it by enabling use_kmalloc_nolock only when HAVE_CMPXCHG_DOUBLE.
>> But leave the PREEMPT_RT case as is because it requires kmalloc_nolock
>> for correctness. Add a comment about this limitation that architecture's
>> lack of CMPXCHG_DOUBLE combined with PREEMPT_RT could make
>> bpf_local_storage_alloc always fail.
>
> Let's not do this.
>
> This re-introduces deadlock to local storage. In addition, local
> storage will switch to using kmalloc_nolock() entirely.
I noticed the PREEMPT_RT case needs kmalloc_nolock for correctness and didn't
disable kmalloc_nolock in that code path when !HAVE_CMPXCHG_DOUBLE.
But in the original series [1], it appears that switching to kmalloc_nolock
was purely for performance benefits and not for fixing deadlocks in local storage.
And I didn't see any "Fixes" tag in [1].
Could you provide a more detailed explanation? Thanks!
[1]: https://lore.kernel.org/all/20251114201329.3275875-1-ameryhung@gmail.com/
> For riscv hardware without zacas extension, I think a workaround with
> some performance overhead is to enable CONFIG_SLUB_DEBUG and
> slub_debug options.
There is a patch [2] that enables HAVE_CMPXCHG_DOUBLE for riscv, so I think
this will not be a problem for riscv in the future, even for hardware without
the Zacas extension, because the kernel has a fallback implementation for Zacas.
However, this would still be an issue for other architectures. Currently only
x86, arm64, s390 and loongarch have HAVE_CMPXCHG_DOUBLE in 7.0-rc4.
I don't think letting users enable slub_debug would be a reasonable workaround.
[2]: https://patchew.org/linux/20260220074449.8526-1-mssola@mssola.com/
Best regards,
Levi
>>
>> Fixes: f484f4a3e058 ("bpf: Replace bpf memory allocator with kmalloc_nolock() in local storage")
>> Cc: stable@vger.kernel.org
>> Signed-off-by: Levi Zim <rsworktech@outlook.com>
>> ---
>> I find that bpf_task_storage_get with flag BPF_LOCAL_STORAGE_GET_F_CREATE
>> always fails for me on 6.19 kernel on riscv64 and bisected it.
>>
>> In f484f4a3e058 ("bpf: Replace bpf memory allocator with kmalloc_nolock()
>> in local storage"), bpf memory allocator is replaced with kmalloc_nolock.
>> This approach is problematic for architectures that lack CMPXCHG_DOUBLE
>> because kmalloc_nolock always fails in this case:
>>
>> In function kmalloc_nolock (kmalloc_nolock_noprof):
>>
>> if (!(s->flags & __CMPXCHG_DOUBLE) && !kmem_cache_debug(s))
>> /*
>> * kmalloc_nolock() is not supported on architectures that
>> * don't implement cmpxchg16b, but debug caches don't use
>> * per-cpu slab and per-cpu partial slabs. They rely on
>> * kmem_cache_node->list_lock, so kmalloc_nolock() can
>> * attempt to allocate from debug caches by
>> * spin_trylock_irqsave(&n->list_lock, ...)
>> */
>> return NULL;
>>
>> Fix it by enabling use_kmalloc_nolock only when HAVE_CMPXCHG_DOUBLE.
>> (But not for a PREEMPT_RT case as explained in the comment and commitmsg)
>>
>> Note for stable: this only needs to be picked into v6.19 if the patch
>> makes it into 7.0.
>> ---
>> Changes in v3:
>> - Use macro instead of const static variable to avoid triggering
>> warnings.
>> - Wrap lines at 80 columns
>> - Link to v2: https://lore.kernel.org/r/20260314-bpf-kmalloc-nolock-v2-1-576e33e4fa67@outlook.com
>>
>> Changes in v2:
>> - Drop the modification to the PREEMPT_RT case as it requires
>> kmalloc_nolock for correctness.
>> - Add a comment to the PREEMPT_RT case about the limitation when
>> not HAVE_CMPXCHG_DOUBLE but enables PREEMPT_RT.
>> - Link to v1: https://lore.kernel.org/r/20260314-bpf-kmalloc-nolock-v1-1-24abf3f75a9f@outlook.com
>> ---
>> include/linux/bpf_local_storage.h | 1 +
>> kernel/bpf/bpf_cgrp_storage.c | 3 ++-
>> kernel/bpf/bpf_local_storage.c | 4 ++++
>> kernel/bpf/bpf_task_storage.c | 3 ++-
>> 4 files changed, 9 insertions(+), 2 deletions(-)
>>
>> diff --git a/include/linux/bpf_local_storage.h b/include/linux/bpf_local_storage.h
>> index 8157e8da61d40..d8f2c5d63a80e 100644
>> --- a/include/linux/bpf_local_storage.h
>> +++ b/include/linux/bpf_local_storage.h
>> @@ -18,6 +18,7 @@
>> #include <asm/rqspinlock.h>
>>
>> #define BPF_LOCAL_STORAGE_CACHE_SIZE 16
>> +#define KMALLOC_NOLOCK_SUPPORTED IS_ENABLED(CONFIG_HAVE_CMPXCHG_DOUBLE)
>>
>> struct bpf_local_storage_map_bucket {
>> struct hlist_head list;
>> diff --git a/kernel/bpf/bpf_cgrp_storage.c b/kernel/bpf/bpf_cgrp_storage.c
>> index c2a2ead1f466d..cd18193c44058 100644
>> --- a/kernel/bpf/bpf_cgrp_storage.c
>> +++ b/kernel/bpf/bpf_cgrp_storage.c
>> @@ -114,7 +114,8 @@ static int notsupp_get_next_key(struct bpf_map *map, void *key, void *next_key)
>>
>> static struct bpf_map *cgroup_storage_map_alloc(union bpf_attr *attr)
>> {
>> - return bpf_local_storage_map_alloc(attr, &cgroup_cache, true);
>> + return bpf_local_storage_map_alloc(attr, &cgroup_cache,
>> + KMALLOC_NOLOCK_SUPPORTED);
>> }
>>
>> static void cgroup_storage_map_free(struct bpf_map *map)
>> diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
>> index 9c96a4477f81a..a6c240da87668 100644
>> --- a/kernel/bpf/bpf_local_storage.c
>> +++ b/kernel/bpf/bpf_local_storage.c
>> @@ -893,6 +893,10 @@ bpf_local_storage_map_alloc(union bpf_attr *attr,
>> /* In PREEMPT_RT, kmalloc(GFP_ATOMIC) is still not safe in non
>> * preemptible context. Thus, enforce all storages to use
>> * kmalloc_nolock() when CONFIG_PREEMPT_RT is enabled.
>> + *
>> + * However, kmalloc_nolock would fail on architectures that do not
>> + * have CMPXCHG_DOUBLE. On such architectures with PREEMPT_RT,
>> + * bpf_local_storage_alloc would always fail.
>> */
>> smap->use_kmalloc_nolock = IS_ENABLED(CONFIG_PREEMPT_RT) ? true : use_kmalloc_nolock;
>>
>> diff --git a/kernel/bpf/bpf_task_storage.c b/kernel/bpf/bpf_task_storage.c
>> index 605506792b5b4..6e8597edea314 100644
>> --- a/kernel/bpf/bpf_task_storage.c
>> +++ b/kernel/bpf/bpf_task_storage.c
>> @@ -212,7 +212,8 @@ static int notsupp_get_next_key(struct bpf_map *map, void *key, void *next_key)
>>
>> static struct bpf_map *task_storage_map_alloc(union bpf_attr *attr)
>> {
>> - return bpf_local_storage_map_alloc(attr, &task_cache, true);
>> + return bpf_local_storage_map_alloc(attr, &task_cache,
>> + KMALLOC_NOLOCK_SUPPORTED);
>> }
>>
>> static void task_storage_map_free(struct bpf_map *map)
>>
>> ---
>> base-commit: e06e6b8001233241eb5b2e2791162f0585f50f4b
>> change-id: 20260314-bpf-kmalloc-nolock-60da80e613de
>>
>> Best regards,
>> --
>> Levi Zim <rsworktech@outlook.com>
>>
>>
* Re: [PATCH bpf v3] bpf: do not use kmalloc_nolock when !HAVE_CMPXCHG_DOUBLE
2026-03-16 15:05 ` Paul Chaignon
2026-03-16 15:46 ` Levi Zim
@ 2026-03-17 4:45 ` Yao Zi
1 sibling, 0 replies; 7+ messages in thread
From: Yao Zi @ 2026-03-17 4:45 UTC (permalink / raw)
To: Paul Chaignon, rsworktech
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song,
John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
Sebastian Andrzej Siewior, Clark Williams, Steven Rostedt,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
Amery Hung, linux-riscv, stable, bpf, linux-kernel,
linux-rt-devel
On Mon, Mar 16, 2026 at 04:05:14PM +0100, Paul Chaignon wrote:
> On Sun, Mar 15, 2026 at 12:02:48AM +0800, Levi Zim via B4 Relay wrote:
> > From: Levi Zim <rsworktech@outlook.com>
> >
> > kmalloc_nolock always fails for architectures that lack cmpxchg16b.
> > For example, this causes bpf_task_storage_get with flag
> > BPF_LOCAL_STORAGE_GET_F_CREATE to fail on a riscv64 6.19 kernel.
> >
> > Fix it by enabling use_kmalloc_nolock only when HAVE_CMPXCHG_DOUBLE.
> > But leave the PREEMPT_RT case as is because it requires kmalloc_nolock
> > for correctness. Add a comment about this limitation that architecture's
> > lack of CMPXCHG_DOUBLE combined with PREEMPT_RT could make
> > bpf_local_storage_alloc always fail.
> >
> > Fixes: f484f4a3e058 ("bpf: Replace bpf memory allocator with kmalloc_nolock() in local storage")
> > Cc: stable@vger.kernel.org
> > Signed-off-by: Levi Zim <rsworktech@outlook.com>
> > ---
>
> Note there may be something broken with your setup as lore is reporting
> that you sent this v3 email three times. Not sure if it could be an
> issue.
One is because linux-riscv@lists.infradead.org adds a trailer when
forwarding messages but keeps the Message-ID unchanged, so lore indexed
one extra message with the same ID but different content; it was not
Levi doing something wrong.
The other message has the same content but a different From line; I am
not sure what happened to it. The differences between the messages can
be viewed here [1].
Regards,
Yao Zi
[1]: https://lore.kernel.org/all/20260315-bpf-kmalloc-nolock-v3-1-91c72bf91902@outlook.com/d/
* Re: [PATCH bpf v3] bpf: do not use kmalloc_nolock when !HAVE_CMPXCHG_DOUBLE
2026-03-17 0:40 ` Levi Zim
@ 2026-03-17 17:16 ` Amery Hung
0 siblings, 0 replies; 7+ messages in thread
From: Amery Hung @ 2026-03-17 17:16 UTC (permalink / raw)
To: Levi Zim
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song,
John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
Sebastian Andrzej Siewior, Clark Williams, Steven Rostedt,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
linux-riscv, stable, bpf, linux-kernel, linux-rt-devel
On Mon, Mar 16, 2026 at 5:40 PM Levi Zim <rsworktech@outlook.com> wrote:
>
>
>
> On 3/17/26 3:53 AM, Amery Hung wrote:
> > On Sat, Mar 14, 2026 at 9:02 AM Levi Zim via B4 Relay
> > <devnull+rsworktech.outlook.com@kernel.org> wrote:
> >>
> >> From: Levi Zim <rsworktech@outlook.com>
> >>
> >> kmalloc_nolock always fails for architectures that lack cmpxchg16b.
> >> For example, this causes bpf_task_storage_get with flag
> >> BPF_LOCAL_STORAGE_GET_F_CREATE to fail on a riscv64 6.19 kernel.
> >>
> >> Fix it by enabling use_kmalloc_nolock only when HAVE_CMPXCHG_DOUBLE.
> >> But leave the PREEMPT_RT case as is because it requires kmalloc_nolock
> >> for correctness. Add a comment about this limitation that architecture's
> >> lack of CMPXCHG_DOUBLE combined with PREEMPT_RT could make
> >> bpf_local_storage_alloc always fail.
> >
> > Let's not do this.
> >
> > This re-introduces deadlock to local storage. In addition, local
> > storage will switch to using kmalloc_nolock() entirely.
>
> I noticed the PREEMPT_RT case needs kmalloc_nolock for correctness and didn't
> disable kmalloc_nolock in that code path when !HAVE_CMPXCHG_DOUBLE.
>
It is also not correct to use kmalloc() on !PREEMPT_RT. A BPF program
running in NMI context, or one tracing kmalloc internals, that tries to
allocate memory for local storage can deadlock.
> But in the original series [1], it appears that switching to kmalloc_nolock
> is purely for performance benefits and not for fixing deadlocks in local storage.
> And I didn't see any "Fixes" tag in [1].
>
There is no deadlock issue with the BPF memory allocator or
kmalloc_nolock(). For context, the patch you are referencing is part
of a series of refactorings to:
- Move from the BPF memory allocator to kmalloc_nolock() [1]
- Remove the percpu busy lock [2]
Please refer to the cover letters for the motivation and the changes made:
[1] https://lore.kernel.org/bpf/20251112175939.2365295-1-ameryhung@gmail.com/
[2] https://lore.kernel.org/bpf/20260205222916.1788211-1-ameryhung@gmail.com/
> Could you provide a more detailed explanation? Thanks!
>
> [1]: https://lore.kernel.org/all/20251114201329.3275875-1-ameryhung@gmail.com/
>
> > For riscv hardware without the Zacas extension, I think a workaround with
> > some performance overhead is to enable CONFIG_SLUB_DEBUG and the
> > slub_debug options.
>
> There is a patch [2] that enables HAVE_CMPXCHG_DOUBLE for riscv, so I think
> it will not be a problem for riscv in the future, even for hardware without
> the Zacas extension, because the kernel has a fallback implementation for Zacas.
>
> However, this would still be an issue for other architectures. Currently only
> x86, arm64, s390, and loongarch select HAVE_CMPXCHG_DOUBLE as of 7.0-rc4.
> I don't think letting users enable slub_debug would be a reasonable workaround.
>
> [2]: https://patchew.org/linux/20260220074449.8526-1-mssola@mssola.com/
>
> Best regards,
> Levi
>
> >>
> >> Fixes: f484f4a3e058 ("bpf: Replace bpf memory allocator with kmalloc_nolock() in local storage")
> >> Cc: stable@vger.kernel.org
> >> Signed-off-by: Levi Zim <rsworktech@outlook.com>
> >> ---
> >> I found that bpf_task_storage_get with the flag BPF_LOCAL_STORAGE_GET_F_CREATE
> >> always fails for me on the 6.19 kernel on riscv64, and bisected it.
> >>
> >> In f484f4a3e058 ("bpf: Replace bpf memory allocator with kmalloc_nolock()
> >> in local storage"), the bpf memory allocator is replaced with kmalloc_nolock.
> >> This approach is problematic for architectures that lack CMPXCHG_DOUBLE
> >> because kmalloc_nolock always fails in this case:
> >>
> >> In function kmalloc_nolock (kmalloc_nolock_noprof):
> >>
> >> if (!(s->flags & __CMPXCHG_DOUBLE) && !kmem_cache_debug(s))
> >> /*
> >> * kmalloc_nolock() is not supported on architectures that
> >> * don't implement cmpxchg16b, but debug caches don't use
> >> * per-cpu slab and per-cpu partial slabs. They rely on
> >> * kmem_cache_node->list_lock, so kmalloc_nolock() can
> >> * attempt to allocate from debug caches by
> >> * spin_trylock_irqsave(&n->list_lock, ...)
> >> */
> >> return NULL;
> >>
> >> Fix it by enabling use_kmalloc_nolock only when HAVE_CMPXCHG_DOUBLE.
> >> (But not for the PREEMPT_RT case, as explained in the comment and commit message.)
> >>
> >> Note for stable: this only needs to be picked into v6.19 if the patch
> >> makes it into 7.0.
> >> ---
> >> Changes in v3:
> >> - Use macro instead of const static variable to avoid triggering
> >> warnings.
> >> - Wrap lines at 80 columns
> >> - Link to v2: https://lore.kernel.org/r/20260314-bpf-kmalloc-nolock-v2-1-576e33e4fa67@outlook.com
> >>
> >> Changes in v2:
> >> - Drop the modification to the PREEMPT_RT case as it requires
> >> kmalloc_nolock for correctness.
> >> - Add a comment to the PREEMPT_RT case about the limitation when
> >> not HAVE_CMPXCHG_DOUBLE but enables PREEMPT_RT.
> >> - Link to v1: https://lore.kernel.org/r/20260314-bpf-kmalloc-nolock-v1-1-24abf3f75a9f@outlook.com
> >> ---
> >> include/linux/bpf_local_storage.h | 1 +
> >> kernel/bpf/bpf_cgrp_storage.c | 3 ++-
> >> kernel/bpf/bpf_local_storage.c | 4 ++++
> >> kernel/bpf/bpf_task_storage.c | 3 ++-
> >> 4 files changed, 9 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/include/linux/bpf_local_storage.h b/include/linux/bpf_local_storage.h
> >> index 8157e8da61d40..d8f2c5d63a80e 100644
> >> --- a/include/linux/bpf_local_storage.h
> >> +++ b/include/linux/bpf_local_storage.h
> >> @@ -18,6 +18,7 @@
> >> #include <asm/rqspinlock.h>
> >>
> >> #define BPF_LOCAL_STORAGE_CACHE_SIZE 16
> >> +#define KMALLOC_NOLOCK_SUPPORTED IS_ENABLED(CONFIG_HAVE_CMPXCHG_DOUBLE)
> >>
> >> struct bpf_local_storage_map_bucket {
> >> struct hlist_head list;
> >> diff --git a/kernel/bpf/bpf_cgrp_storage.c b/kernel/bpf/bpf_cgrp_storage.c
> >> index c2a2ead1f466d..cd18193c44058 100644
> >> --- a/kernel/bpf/bpf_cgrp_storage.c
> >> +++ b/kernel/bpf/bpf_cgrp_storage.c
> >> @@ -114,7 +114,8 @@ static int notsupp_get_next_key(struct bpf_map *map, void *key, void *next_key)
> >>
> >> static struct bpf_map *cgroup_storage_map_alloc(union bpf_attr *attr)
> >> {
> >> - return bpf_local_storage_map_alloc(attr, &cgroup_cache, true);
> >> + return bpf_local_storage_map_alloc(attr, &cgroup_cache,
> >> + KMALLOC_NOLOCK_SUPPORTED);
> >> }
> >>
> >> static void cgroup_storage_map_free(struct bpf_map *map)
> >> diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
> >> index 9c96a4477f81a..a6c240da87668 100644
> >> --- a/kernel/bpf/bpf_local_storage.c
> >> +++ b/kernel/bpf/bpf_local_storage.c
> >> @@ -893,6 +893,10 @@ bpf_local_storage_map_alloc(union bpf_attr *attr,
> >> /* In PREEMPT_RT, kmalloc(GFP_ATOMIC) is still not safe in non
> >> * preemptible context. Thus, enforce all storages to use
> >> * kmalloc_nolock() when CONFIG_PREEMPT_RT is enabled.
> >> + *
> >> + * However, kmalloc_nolock would fail on architectures that do not
> >> + * have CMPXCHG_DOUBLE. On such architectures with PREEMPT_RT,
> >> + * bpf_local_storage_alloc would always fail.
> >> */
> >> smap->use_kmalloc_nolock = IS_ENABLED(CONFIG_PREEMPT_RT) ? true : use_kmalloc_nolock;
> >>
> >> diff --git a/kernel/bpf/bpf_task_storage.c b/kernel/bpf/bpf_task_storage.c
> >> index 605506792b5b4..6e8597edea314 100644
> >> --- a/kernel/bpf/bpf_task_storage.c
> >> +++ b/kernel/bpf/bpf_task_storage.c
> >> @@ -212,7 +212,8 @@ static int notsupp_get_next_key(struct bpf_map *map, void *key, void *next_key)
> >>
> >> static struct bpf_map *task_storage_map_alloc(union bpf_attr *attr)
> >> {
> >> - return bpf_local_storage_map_alloc(attr, &task_cache, true);
> >> + return bpf_local_storage_map_alloc(attr, &task_cache,
> >> + KMALLOC_NOLOCK_SUPPORTED);
> >> }
> >>
> >> static void task_storage_map_free(struct bpf_map *map)
> >>
> >> ---
> >> base-commit: e06e6b8001233241eb5b2e2791162f0585f50f4b
> >> change-id: 20260314-bpf-kmalloc-nolock-60da80e613de
> >>
> >> Best regards,
> >> --
> >> Levi Zim <rsworktech@outlook.com>
> >>
> >>
>
end of thread, other threads:[~2026-03-17 17:16 UTC | newest]
Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-14 16:02 [PATCH bpf v3] bpf: do not use kmalloc_nolock when !HAVE_CMPXCHG_DOUBLE Levi Zim via B4 Relay
2026-03-16 15:05 ` Paul Chaignon
2026-03-16 15:46 ` Levi Zim
2026-03-17 4:45 ` Yao Zi
2026-03-16 19:53 ` Amery Hung
2026-03-17 0:40 ` Levi Zim
2026-03-17 17:16 ` Amery Hung