* [PATCH bpf-next v4 0/4] bpf: Free special fields when update hash and local storage maps
@ 2025-10-30 15:24 Leon Hwang
2025-10-30 15:24 ` [PATCH bpf-next v4 1/4] bpf: Free special fields when update [lru_,]percpu_hash maps Leon Hwang
` (3 more replies)
0 siblings, 4 replies; 11+ messages in thread
From: Leon Hwang @ 2025-10-30 15:24 UTC (permalink / raw)
To: bpf
Cc: ast, andrii, daniel, martin.lau, eddyz87, song, yonghong.song,
john.fastabend, kpsingh, sdf, haoluo, jolsa, memxor, ameryhung,
linux-kernel, kernel-patches-bot, Leon Hwang
In the discussion thread
"[PATCH bpf-next v9 0/7] bpf: Introduce BPF_F_CPU and BPF_F_ALL_CPUS flags for percpu maps"[1],
it was pointed out that missing calls to bpf_obj_free_fields() could
lead to memory leaks.
A selftest was added to confirm that this is indeed a real issue: the
refcount of a BPF_KPTR_REF field is not decremented when
bpf_obj_free_fields() is missing after copy_map_value[,_long]().
Further inspection of copy_map_value[,_long]() call sites revealed two
locations affected by this issue:
1. pcpu_copy_value()
2. htab_map_update_elem() when used with BPF_F_LOCK
A similar issue occurs when updating local storage maps with BPF_F_LOCK.
This series fixes the issues by properly calling bpf_obj_free_fields()
(or check_and_free_fields()) after copy_map_value[,_long]() and adds
selftests to verify the fix.
Changes:
v3 -> v4:
* Target bpf-next tree.
* Address comments from Amery:
  * Drop bpf_obj_free_fields() from the path that updates local
    storage maps without BPF_F_LOCK.
  * Drop the corresponding selftest.
  * Respin the other local storage map test using syscall BPF
    programs.
v2 -> v3:
* Free special fields when updating local storage maps without
  BPF_F_LOCK.
* Add a test to verify that the refcount is decremented when updating
  cgroup local storage maps without BPF_F_LOCK.
* Address review from AI bot:
* Slow path with BPF_F_LOCK (around line 642-646) in
'bpf_local_storage.c'.
* https://lore.kernel.org/bpf/20251020164608.20536-1-leon.hwang@linux.dev/
v1 -> v2:
* Add a test to verify that the refcount is decremented when updating
  cgroup local storage maps with BPF_F_LOCK.
* Address review from AI bot:
* Fast path without bucket lock (around line 610) in
'bpf_local_storage.c'.
* https://lore.kernel.org/bpf/20251016145801.47552-1-leon.hwang@linux.dev/
Leon Hwang (4):
bpf: Free special fields when update [lru_,]percpu_hash maps
bpf: Free special fields when update hash maps with BPF_F_LOCK
bpf: Free special fields when update local storage maps with
BPF_F_LOCK
selftests/bpf: Add tests to verify freeing the special fields when
update hash and local storage maps
kernel/bpf/bpf_local_storage.c | 2 +
kernel/bpf/hashtab.c | 4 +
.../bpf/prog_tests/refcounted_kptr.c | 134 +++++++++++++++++-
.../selftests/bpf/progs/refcounted_kptr.c | 129 +++++++++++++++++
4 files changed, 268 insertions(+), 1 deletion(-)
--
2.51.1
^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH bpf-next v4 1/4] bpf: Free special fields when update [lru_,]percpu_hash maps
  2025-10-30 15:24 [PATCH bpf-next v4 0/4] bpf: Free special fields when update hash and local storage maps Leon Hwang
@ 2025-10-30 15:24 ` Leon Hwang
  2025-10-30 15:24 ` [PATCH bpf-next v4 2/4] bpf: Free special fields when update hash maps with BPF_F_LOCK Leon Hwang
  ` (2 subsequent siblings)
  3 siblings, 0 replies; 11+ messages in thread
From: Leon Hwang @ 2025-10-30 15:24 UTC (permalink / raw)
To: bpf
Cc: ast, andrii, daniel, martin.lau, eddyz87, song, yonghong.song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, memxor, ameryhung,
	linux-kernel, kernel-patches-bot, Leon Hwang

As [lru_,]percpu_hash maps support BPF_KPTR_{REF,PERCPU}, missing calls
to 'bpf_obj_free_fields()' in 'pcpu_copy_value()' could cause the memory
referenced by BPF_KPTR_{REF,PERCPU} fields to be held until the map gets
freed.

Fix this by calling 'bpf_obj_free_fields()' after
'copy_map_value[,_long]()' in 'pcpu_copy_value()'.

Fixes: 65334e64a493 ("bpf: Support kptrs in percpu hashmap and percpu LRU hashmap")
Signed-off-by: Leon Hwang <leon.hwang@linux.dev>
---
 kernel/bpf/hashtab.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index f876f09355f0d..3861f28a6be81 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -937,12 +937,14 @@ static void pcpu_copy_value(struct bpf_htab *htab, void __percpu *pptr,
 	if (!onallcpus) {
 		/* copy true value_size bytes */
 		copy_map_value(&htab->map, this_cpu_ptr(pptr), value);
+		bpf_obj_free_fields(htab->map.record, this_cpu_ptr(pptr));
 	} else {
 		u32 size = round_up(htab->map.value_size, 8);
 		int off = 0, cpu;

 		for_each_possible_cpu(cpu) {
 			copy_map_value_long(&htab->map, per_cpu_ptr(pptr, cpu), value + off);
+			bpf_obj_free_fields(htab->map.record, per_cpu_ptr(pptr, cpu));
 			off += size;
 		}
 	}
--
2.51.1

^ permalink raw reply related	[flat|nested] 11+ messages in thread
* [PATCH bpf-next v4 2/4] bpf: Free special fields when update hash maps with BPF_F_LOCK
  2025-10-30 15:24 [PATCH bpf-next v4 0/4] bpf: Free special fields when update hash and local storage maps Leon Hwang
  2025-10-30 15:24 ` [PATCH bpf-next v4 1/4] bpf: Free special fields when update [lru_,]percpu_hash maps Leon Hwang
@ 2025-10-30 15:24 ` Leon Hwang
  2025-10-30 15:24 ` [PATCH bpf-next v4 3/4] bpf: Free special fields when update local storage " Leon Hwang
  2025-10-30 15:24 ` [PATCH bpf-next v4 4/4] selftests/bpf: Add tests to verify freeing the special fields when update hash and local storage maps Leon Hwang
  3 siblings, 0 replies; 11+ messages in thread
From: Leon Hwang @ 2025-10-30 15:24 UTC (permalink / raw)
To: bpf
Cc: ast, andrii, daniel, martin.lau, eddyz87, song, yonghong.song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, memxor, ameryhung,
	linux-kernel, kernel-patches-bot, Leon Hwang

When updating hash maps with BPF_F_LOCK, the special fields were not
freed after being replaced. This could cause memory referenced by
BPF_KPTR_{REF,PERCPU} fields to be held until the map gets freed.

Fix this by calling 'check_and_free_fields()' after
'copy_map_value_locked()' to properly release the old fields.

Fixes: 14a324f6a67e ("bpf: Wire up freeing of referenced kptr")
Signed-off-by: Leon Hwang <leon.hwang@linux.dev>
---
 kernel/bpf/hashtab.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 3861f28a6be81..fc3c7ede3cd0c 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -1110,6 +1110,7 @@ static long htab_map_update_elem(struct bpf_map *map, void *key, void *value,
 			copy_map_value_locked(map,
 					      htab_elem_value(l_old, key_size),
 					      value, false);
+			check_and_free_fields(htab, l_old);
 			return 0;
 		}
 		/* fall through, grab the bucket lock and lookup again.
@@ -1138,6 +1139,7 @@ static long htab_map_update_elem(struct bpf_map *map, void *key, void *value,
 			copy_map_value_locked(map,
 					      htab_elem_value(l_old, key_size),
 					      value, false);
+			check_and_free_fields(htab, l_old);
 			ret = 0;
 			goto err;
 		}
--
2.51.1

^ permalink raw reply related	[flat|nested] 11+ messages in thread
* [PATCH bpf-next v4 3/4] bpf: Free special fields when update local storage maps with BPF_F_LOCK
  2025-10-30 15:24 [PATCH bpf-next v4 0/4] bpf: Free special fields when update hash and local storage maps Leon Hwang
  2025-10-30 15:24 ` [PATCH bpf-next v4 1/4] bpf: Free special fields when update [lru_,]percpu_hash maps Leon Hwang
  2025-10-30 15:24 ` [PATCH bpf-next v4 2/4] bpf: Free special fields when update hash maps with BPF_F_LOCK Leon Hwang
@ 2025-10-30 15:24 ` Leon Hwang
  2025-10-30 22:35   ` Alexei Starovoitov
  2025-10-30 15:24 ` [PATCH bpf-next v4 4/4] selftests/bpf: Add tests to verify freeing the special fields when update hash and local storage maps Leon Hwang
  3 siblings, 1 reply; 11+ messages in thread
From: Leon Hwang @ 2025-10-30 15:24 UTC (permalink / raw)
To: bpf
Cc: ast, andrii, daniel, martin.lau, eddyz87, song, yonghong.song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, memxor, ameryhung,
	linux-kernel, kernel-patches-bot, Leon Hwang

When updating local storage maps with BPF_F_LOCK on the fast path, the
special fields were not freed after being replaced. This could cause
memory referenced by BPF_KPTR_{REF,PERCPU} fields to be held until the
map gets freed.

Similarly, on the other path, the old sdata's special fields were never
freed when BPF_F_LOCK was specified, causing the same issue.

Fix this by calling 'bpf_obj_free_fields()' after
'copy_map_value_locked()' to properly release the old fields.

Fixes: 9db44fdd8105 ("bpf: Support kptrs in local storage maps")
Signed-off-by: Leon Hwang <leon.hwang@linux.dev>
---
 kernel/bpf/bpf_local_storage.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
index b931fbceb54da..9f447530f9564 100644
--- a/kernel/bpf/bpf_local_storage.c
+++ b/kernel/bpf/bpf_local_storage.c
@@ -609,6 +609,7 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap,
 		if (old_sdata && selem_linked_to_storage_lockless(SELEM(old_sdata))) {
 			copy_map_value_locked(&smap->map, old_sdata->data,
 					      value, false);
+			bpf_obj_free_fields(smap->map.record, old_sdata->data);
 			return old_sdata;
 		}
 	}
@@ -641,6 +642,7 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap,
 	if (old_sdata && (map_flags & BPF_F_LOCK)) {
 		copy_map_value_locked(&smap->map, old_sdata->data, value,
 				      false);
+		bpf_obj_free_fields(smap->map.record, old_sdata->data);
 		selem = SELEM(old_sdata);
 		goto unlock;
 	}
--
2.51.1

^ permalink raw reply related	[flat|nested] 11+ messages in thread
* Re: [PATCH bpf-next v4 3/4] bpf: Free special fields when update local storage maps with BPF_F_LOCK
  2025-10-30 15:24 ` [PATCH bpf-next v4 3/4] bpf: Free special fields when update local storage " Leon Hwang
@ 2025-10-30 22:35   ` Alexei Starovoitov
  2025-11-03  5:17     ` Leon Hwang
  0 siblings, 1 reply; 11+ messages in thread
From: Alexei Starovoitov @ 2025-10-30 22:35 UTC (permalink / raw)
To: Leon Hwang
Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Eduard, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	Kumar Kartikeya Dwivedi, Amery Hung, LKML, kernel-patches-bot

On Thu, Oct 30, 2025 at 8:25 AM Leon Hwang <leon.hwang@linux.dev> wrote:
>
> When updating local storage maps with BPF_F_LOCK on the fast path, the
> special fields were not freed after being replaced. This could cause
> memory referenced by BPF_KPTR_{REF,PERCPU} fields to be held until the
> map gets freed.
>
> Similarly, on the other path, the old sdata's special fields were never
> freed when BPF_F_LOCK was specified, causing the same issue.
>
> Fix this by calling 'bpf_obj_free_fields()' after
> 'copy_map_value_locked()' to properly release the old fields.
>
> Fixes: 9db44fdd8105 ("bpf: Support kptrs in local storage maps")
> Signed-off-by: Leon Hwang <leon.hwang@linux.dev>
> ---
>  kernel/bpf/bpf_local_storage.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
> index b931fbceb54da..9f447530f9564 100644
> --- a/kernel/bpf/bpf_local_storage.c
> +++ b/kernel/bpf/bpf_local_storage.c
> @@ -609,6 +609,7 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap,
>  		if (old_sdata && selem_linked_to_storage_lockless(SELEM(old_sdata))) {
>  			copy_map_value_locked(&smap->map, old_sdata->data,
>  					      value, false);
> +			bpf_obj_free_fields(smap->map.record, old_sdata->data);
>  			return old_sdata;
>  		}
>  	}
> @@ -641,6 +642,7 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap,
>  	if (old_sdata && (map_flags & BPF_F_LOCK)) {
>  		copy_map_value_locked(&smap->map, old_sdata->data, value,
>  				      false);
> +		bpf_obj_free_fields(smap->map.record, old_sdata->data);
>  		selem = SELEM(old_sdata);
>  		goto unlock;
>  	}

Even with rqspinlock I feel this is a can of worms and
recursion issues.

I think it's better to disallow special fields and BPF_F_LOCK combination.
We already do that for uptr:
	if ((map_flags & BPF_F_LOCK) &&
	    btf_record_has_field(map->record, BPF_UPTR))
		return -EOPNOTSUPP;

let's do it for all special types.
So patches 2 and 3 will change to -EOPNOTSUPP.

pw-bot: cr

^ permalink raw reply	[flat|nested] 11+ messages in thread
* Re: [PATCH bpf-next v4 3/4] bpf: Free special fields when update local storage maps with BPF_F_LOCK
  2025-10-30 22:35   ` Alexei Starovoitov
@ 2025-11-03  5:17     ` Leon Hwang
  2025-11-03 17:24       ` Alexei Starovoitov
  0 siblings, 1 reply; 11+ messages in thread
From: Leon Hwang @ 2025-11-03 5:17 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Eduard, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	Kumar Kartikeya Dwivedi, Amery Hung, LKML, kernel-patches-bot

On 31/10/25 06:35, Alexei Starovoitov wrote:
> On Thu, Oct 30, 2025 at 8:25 AM Leon Hwang <leon.hwang@linux.dev> wrote:
>>

[...]

>> @@ -641,6 +642,7 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap,
>>  	if (old_sdata && (map_flags & BPF_F_LOCK)) {
>>  		copy_map_value_locked(&smap->map, old_sdata->data, value,
>>  				      false);
>> +		bpf_obj_free_fields(smap->map.record, old_sdata->data);
>>  		selem = SELEM(old_sdata);
>>  		goto unlock;
>>  	}
>
> Even with rqspinlock I feel this is a can of worms and
> recursion issues.
>
> I think it's better to disallow special fields and BPF_F_LOCK combination.
> We already do that for uptr:
> 	if ((map_flags & BPF_F_LOCK) &&
> 	    btf_record_has_field(map->record, BPF_UPTR))
> 		return -EOPNOTSUPP;
>
> let's do it for all special types.
> So patches 2 and 3 will change to -EOPNOTSUPP.
>

Do you mean disallowing the combination of BPF_F_LOCK with other special
fields (except for BPF_SPIN_LOCK) on the UAPI side — for example, in
lookup_elem() and update_elem()?

If so, I'd like to send a separate patch set to implement that after the
series "bpf: Introduce BPF_F_CPU and BPF_F_ALL_CPUS flags for percpu
maps" is applied.

After that, we can easily add the check in bpf_map_check_op_flags() for
the UAPI side, like this:

static inline int bpf_map_check_op_flags(...)
{
	if ((flags & BPF_F_LOCK) &&
	    !btf_record_has_field(map->record, BPF_SPIN_LOCK))
		return -EINVAL;

	if ((flags & BPF_F_LOCK) &&
	    btf_record_has_field(map->record, ~BPF_SPIN_LOCK))
		return -EOPNOTSUPP;
}

Then we can clean up some code, including the bpf_obj_free_fields()
calls that follow copy_map_value_locked(), as well as the existing UPTR
check.

Thanks,
Leon

^ permalink raw reply	[flat|nested] 11+ messages in thread
* Re: [PATCH bpf-next v4 3/4] bpf: Free special fields when update local storage maps with BPF_F_LOCK
  2025-11-03  5:17     ` Leon Hwang
@ 2025-11-03 17:24       ` Alexei Starovoitov
  0 siblings, 0 replies; 11+ messages in thread
From: Alexei Starovoitov @ 2025-11-03 17:24 UTC (permalink / raw)
To: Leon Hwang
Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Eduard, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	Kumar Kartikeya Dwivedi, Amery Hung, LKML, kernel-patches-bot

On Sun, Nov 2, 2025 at 9:18 PM Leon Hwang <leon.hwang@linux.dev> wrote:
>
> On 31/10/25 06:35, Alexei Starovoitov wrote:
> > On Thu, Oct 30, 2025 at 8:25 AM Leon Hwang <leon.hwang@linux.dev> wrote:
> >>
>
> [...]
>
> >> @@ -641,6 +642,7 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap,
> >>  	if (old_sdata && (map_flags & BPF_F_LOCK)) {
> >>  		copy_map_value_locked(&smap->map, old_sdata->data, value,
> >>  				      false);
> >> +		bpf_obj_free_fields(smap->map.record, old_sdata->data);
> >>  		selem = SELEM(old_sdata);
> >>  		goto unlock;
> >>  	}
> >
> > Even with rqspinlock I feel this is a can of worms and
> > recursion issues.
> >
> > I think it's better to disallow special fields and BPF_F_LOCK combination.
> > We already do that for uptr:
> > 	if ((map_flags & BPF_F_LOCK) &&
> > 	    btf_record_has_field(map->record, BPF_UPTR))
> > 		return -EOPNOTSUPP;
> >
> > let's do it for all special types.
> > So patches 2 and 3 will change to -EOPNOTSUPP.
>
> Do you mean disallowing the combination of BPF_F_LOCK with other special
> fields (except for BPF_SPIN_LOCK) on the UAPI side — for example, in
> lookup_elem() and update_elem()?

yes

> If so, I'd like to send a separate patch set to implement that after the
> series
> "bpf: Introduce BPF_F_CPU and BPF_F_ALL_CPUS flags for percpu maps" is
> applied.
>
> After that, we can easily add the check in bpf_map_check_op_flags() for
> the UAPI side, like this:
>
> static inline int bpf_map_check_op_flags(...)
> {
> 	if ((flags & BPF_F_LOCK) && !btf_record_has_field(map->record,
> 	    BPF_SPIN_LOCK))
> 		return -EINVAL;
>
> 	if ((flags & BPF_F_LOCK) && btf_record_has_field(map->record,
> 	    ~BPF_SPIN_LOCK))
> 		return -EOPNOTSUPP;
> }
>
> Then we can clean up some code, including the bpf_obj_free_fields()
> calls that follow copy_map_value_locked(), as well as the existing UPTR
> check.

ok. fair enough.

^ permalink raw reply	[flat|nested] 11+ messages in thread
* [PATCH bpf-next v4 4/4] selftests/bpf: Add tests to verify freeing the special fields when update hash and local storage maps
  2025-10-30 15:24 [PATCH bpf-next v4 0/4] bpf: Free special fields when update hash and local storage maps Leon Hwang
  ` (2 preceding siblings ...)
  2025-10-30 15:24 ` [PATCH bpf-next v4 3/4] bpf: Free special fields when update local storage " Leon Hwang
@ 2025-10-30 15:24 ` Leon Hwang
  2025-11-04 17:30   ` Yonghong Song
  3 siblings, 1 reply; 11+ messages in thread
From: Leon Hwang @ 2025-10-30 15:24 UTC (permalink / raw)
To: bpf
Cc: ast, andrii, daniel, martin.lau, eddyz87, song, yonghong.song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, memxor, ameryhung,
	linux-kernel, kernel-patches-bot, Leon Hwang

Add tests to verify that updating hash and local storage maps decrements
refcount when BPF_KPTR_REF objects are involved.

The tests perform the following steps:

1. Call update_elem() to insert an initial value.
2. Use bpf_refcount_acquire() to increment the refcount.
3. Store the node pointer in the map value.
4. Add the node to a linked list.
5. Probe-read the refcount and verify it is *2*.
6. Call update_elem() again to trigger refcount decrement.
7. Probe-read the refcount and verify it is *1*.

Signed-off-by: Leon Hwang <leon.hwang@linux.dev>
---
 .../bpf/prog_tests/refcounted_kptr.c          | 134 +++++++++++++++++-
 .../selftests/bpf/progs/refcounted_kptr.c     | 129 +++++++++++++++++
 2 files changed, 262 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/refcounted_kptr.c b/tools/testing/selftests/bpf/prog_tests/refcounted_kptr.c
index d6bd5e16e6372..0ec91ff914af7 100644
--- a/tools/testing/selftests/bpf/prog_tests/refcounted_kptr.c
+++ b/tools/testing/selftests/bpf/prog_tests/refcounted_kptr.c
@@ -3,7 +3,7 @@

 #include <test_progs.h>
 #include <network_helpers.h>
-
+#include "cgroup_helpers.h"
 #include "refcounted_kptr.skel.h"
 #include "refcounted_kptr_fail.skel.h"

@@ -44,3 +44,135 @@ void test_refcounted_kptr_wrong_owner(void)
 	ASSERT_OK(opts.retval, "rbtree_wrong_owner_remove_fail_a2 retval");
 	refcounted_kptr__destroy(skel);
 }
+
+static void test_refcnt_leak(struct refcounted_kptr *skel, int key, void *values, size_t values_sz,
+			     u64 flags, struct bpf_map *map, struct bpf_program *prog_leak,
+			     struct bpf_program *prog_check, struct bpf_test_run_opts *opts)
+{
+	int ret, fd;
+
+	ret = bpf_map__update_elem(map, &key, sizeof(key), values, values_sz, flags);
+	if (!ASSERT_OK(ret, "bpf_map__update_elem init"))
+		return;
+
+	fd = bpf_program__fd(prog_leak);
+	ret = bpf_prog_test_run_opts(fd, opts);
+	if (!ASSERT_OK(ret, "bpf_prog_test_run_opts"))
+		return;
+	if (!ASSERT_EQ(skel->bss->kptr_refcount, 2, "refcount"))
+		return;
+
+	ret = bpf_map__update_elem(map, &key, sizeof(key), values, values_sz, flags);
+	if (!ASSERT_OK(ret, "bpf_map__update_elem dec refcount"))
+		return;
+
+	fd = bpf_program__fd(prog_check);
+	ret = bpf_prog_test_run_opts(fd, opts);
+	ASSERT_OK(ret, "bpf_prog_test_run_opts");
+	ASSERT_EQ(skel->bss->kptr_refcount, 1, "refcount");
+}
+
+static void test_percpu_hash_refcount_leak(void)
+{
+	struct refcounted_kptr *skel;
+	size_t values_sz;
+	u64 *values;
+	int cpu_nr;
+	LIBBPF_OPTS(bpf_test_run_opts, opts,
+		    .data_in = &pkt_v4,
+		    .data_size_in = sizeof(pkt_v4),
+		    .repeat = 1,
+	);
+
+	cpu_nr = libbpf_num_possible_cpus();
+	if (!ASSERT_GT(cpu_nr, 0, "libbpf_num_possible_cpus"))
+		return;
+
+	values = calloc(cpu_nr, sizeof(u64));
+	if (!ASSERT_OK_PTR(values, "calloc values"))
+		return;
+
+	skel = refcounted_kptr__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "refcounted_kptr__open_and_load")) {
+		free(values);
+		return;
+	}
+
+	values_sz = cpu_nr * sizeof(u64);
+	memset(values, 0, values_sz);
+
+	test_refcnt_leak(skel, 0, values, values_sz, 0, skel->maps.pcpu_hash,
+			 skel->progs.pcpu_hash_refcount_leak,
+			 skel->progs.check_pcpu_hash_refcount, &opts);
+
+	refcounted_kptr__destroy(skel);
+	free(values);
+}
+
+struct lock_map_value {
+	u64 kptr;
+	struct bpf_spin_lock lock;
+	int value;
+};
+
+static void test_hash_lock_refcount_leak(void)
+{
+	struct lock_map_value value = {};
+	struct refcounted_kptr *skel;
+	LIBBPF_OPTS(bpf_test_run_opts, opts,
+		    .data_in = &pkt_v4,
+		    .data_size_in = sizeof(pkt_v4),
+		    .repeat = 1,
+	);
+
+	skel = refcounted_kptr__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "refcounted_kptr__open_and_load"))
+		return;
+
+	test_refcnt_leak(skel, 0, &value, sizeof(value), BPF_F_LOCK, skel->maps.lock_hash,
+			 skel->progs.hash_lock_refcount_leak,
+			 skel->progs.check_hash_lock_refcount, &opts);
+
+	refcounted_kptr__destroy(skel);
+}
+
+static void test_cgroup_storage_lock_refcount_leak(void)
+{
+	struct lock_map_value value = {};
+	struct refcounted_kptr *skel;
+	int cgroup, err;
+	LIBBPF_OPTS(bpf_test_run_opts, opts);
+
+	err = setup_cgroup_environment();
+	if (!ASSERT_OK(err, "setup_cgroup_environment"))
+		return;
+
+	cgroup = get_root_cgroup();
+	if (!ASSERT_GE(cgroup, 0, "get_root_cgroup")) {
+		cleanup_cgroup_environment();
+		return;
+	}
+
+	skel = refcounted_kptr__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "refcounted_kptr__open_and_load"))
+		goto out;
+
+	test_refcnt_leak(skel, cgroup, &value, sizeof(value), BPF_F_LOCK, skel->maps.cgrp_strg,
+			 skel->progs.cgroup_storage_lock_refcount_leak,
+			 skel->progs.check_cgroup_storage_lock_refcount, &opts);
+
+	refcounted_kptr__destroy(skel);
+out:
+	close(cgroup);
+	cleanup_cgroup_environment();
+}
+
+void test_kptr_refcount_leak(void)
+{
+	if (test__start_subtest("percpu_hash_refcount_leak"))
+		test_percpu_hash_refcount_leak();
+	if (test__start_subtest("hash_lock_refcount_leak"))
+		test_hash_lock_refcount_leak();
+	if (test__start_subtest("cgroup_storage_lock_refcount_leak"))
+		test_cgroup_storage_lock_refcount_leak();
+}
diff --git a/tools/testing/selftests/bpf/progs/refcounted_kptr.c b/tools/testing/selftests/bpf/progs/refcounted_kptr.c
index 893a4fdb4b6e9..101ba630d93e8 100644
--- a/tools/testing/selftests/bpf/progs/refcounted_kptr.c
+++ b/tools/testing/selftests/bpf/progs/refcounted_kptr.c
@@ -568,4 +568,133 @@ int BPF_PROG(rbtree_sleepable_rcu_no_explicit_rcu_lock,
 	return 0;
 }

+private(kptr_ref) u64 ref;
+u32 kptr_refcount;
+
+static int probe_read_refcount(void)
+{
+	bpf_probe_read_kernel(&kptr_refcount, sizeof(kptr_refcount), (void *) ref);
+	return 0;
+}
+
+static int __insert_in_list(struct bpf_list_head *head, struct bpf_spin_lock *lock,
+			    struct node_data __kptr **node)
+{
+	struct node_data *n, *m;
+
+	n = bpf_obj_new(typeof(*n));
+	if (!n)
+		return 0;
+
+	m = bpf_refcount_acquire(n);
+	n = bpf_kptr_xchg(node, n);
+	if (n) {
+		bpf_obj_drop(n);
+		bpf_obj_drop(m);
+		return 0;
+	}
+
+	bpf_spin_lock(lock);
+	bpf_list_push_front(head, &m->l);
+	ref = (u64)(void *) &m->ref;
+	bpf_spin_unlock(lock);
+	return probe_read_refcount();
+}
+
+static void *__lookup_map(void *map)
+{
+	int key = 0;
+
+	return bpf_map_lookup_elem(map, &key);
+}
+
+struct {
+	__uint(type, BPF_MAP_TYPE_PERCPU_HASH);
+	__type(key, int);
+	__type(value, struct map_value);
+	__uint(max_entries, 1);
+} pcpu_hash SEC(".maps");
+
+SEC("tc")
+int pcpu_hash_refcount_leak(void *ctx)
+{
+	struct map_value *v;
+
+	v = __lookup_map(&pcpu_hash);
+	if (!v)
+		return 0;
+
+	return __insert_in_list(&head, &lock, &v->node);
+}
+
+SEC("tc")
+int check_pcpu_hash_refcount(void *ctx)
+{
+	return probe_read_refcount();
+}
+
+struct lock_map_value {
+	struct node_data __kptr *node;
+	struct bpf_spin_lock lock;
+	int value;
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__type(key, int);
+	__type(value, struct lock_map_value);
+	__uint(max_entries, 1);
+} lock_hash SEC(".maps");
+
+SEC("tc")
+int hash_lock_refcount_leak(void *ctx)
+{
+	struct lock_map_value *v;
+
+	v = __lookup_map(&lock_hash);
+	if (!v)
+		return 0;
+
+	bpf_spin_lock(&v->lock);
+	v->value = 42;
+	bpf_spin_unlock(&v->lock);
+	return __insert_in_list(&head, &lock, &v->node);
+}
+
+SEC("tc")
+int check_hash_lock_refcount(void *ctx)
+{
+	return probe_read_refcount();
+}
+
+struct {
+	__uint(type, BPF_MAP_TYPE_CGRP_STORAGE);
+	__uint(map_flags, BPF_F_NO_PREALLOC);
+	__type(key, int);
+	__type(value, struct lock_map_value);
+} cgrp_strg SEC(".maps");
+
+SEC("syscall")
+int BPF_PROG(cgroup_storage_lock_refcount_leak)
+{
+	struct lock_map_value *v;
+	struct task_struct *task;
+
+	task = bpf_get_current_task_btf();
+	bpf_rcu_read_lock();
+	v = bpf_cgrp_storage_get(&cgrp_strg, task->cgroups->dfl_cgrp, 0,
+				 BPF_LOCAL_STORAGE_GET_F_CREATE);
+	bpf_rcu_read_unlock();
+	if (!v)
+		return 0;
+
+	return __insert_in_list(&head, &lock, &v->node);
+}
+
+SEC("syscall")
+int BPF_PROG(check_cgroup_storage_lock_refcount)
+{
+	return probe_read_refcount();
+}
+
 char _license[] SEC("license") = "GPL";
--
2.51.1

^ permalink raw reply related	[flat|nested] 11+ messages in thread
* Re: [PATCH bpf-next v4 4/4] selftests/bpf: Add tests to verify freeing the special fields when update hash and local storage maps
  2025-10-30 15:24 ` [PATCH bpf-next v4 4/4] selftests/bpf: Add tests to verify freeing the special fields when update hash and local storage maps Leon Hwang
@ 2025-11-04 17:30   ` Yonghong Song
  2025-11-05  2:14     ` Leon Hwang
  0 siblings, 1 reply; 11+ messages in thread
From: Yonghong Song @ 2025-11-04 17:30 UTC (permalink / raw)
To: Leon Hwang, bpf
Cc: ast, andrii, daniel, martin.lau, eddyz87, song, john.fastabend,
	kpsingh, sdf, haoluo, jolsa, memxor, ameryhung, linux-kernel,
	kernel-patches-bot

On 10/30/25 8:24 AM, Leon Hwang wrote:
> Add tests to verify that updating hash and local storage maps decrements
> refcount when BPF_KPTR_REF objects are involved.
>
> The tests perform the following steps:
>
> 1. Call update_elem() to insert an initial value.
> 2. Use bpf_refcount_acquire() to increment the refcount.
> 3. Store the node pointer in the map value.
> 4. Add the node to a linked list.
> 5. Probe-read the refcount and verify it is *2*.
> 6. Call update_elem() again to trigger refcount decrement.
> 7. Probe-read the refcount and verify it is *1*.
>
> Signed-off-by: Leon Hwang <leon.hwang@linux.dev>

I applied this patch only (i.e., not including patches 1/2/3) to master
branch and do bpf selftest and all tests succeeded.

[root@arch-fb-vm1 bpf]# ./test_progs -t refcounted_kptr
#294/1 refcounted_kptr/insert_read_both: remove from tree + list:OK
...
#294/18 refcounted_kptr/pcpu_hash_refcount_leak:OK
#294/19 refcounted_kptr/check_pcpu_hash_refcount:OK
#294/20 refcounted_kptr/hash_lock_refcount_leak:OK
#294/21 refcounted_kptr/check_hash_lock_refcount:OK
#294/22 refcounted_kptr/rbtree_sleepable_rcu:OK
#294/23 refcounted_kptr/rbtree_sleepable_rcu_no_explicit_rcu_lock:OK
#294/24 refcounted_kptr/cgroup_storage_lock_refcount_leak:OK
#294/25 refcounted_kptr/check_cgroup_storage_lock_refcount:OK
...

Did I miss anything?

> ---
>  .../bpf/prog_tests/refcounted_kptr.c          | 134 +++++++++++++++++-
>  .../selftests/bpf/progs/refcounted_kptr.c     | 129 +++++++++++++++++
>  2 files changed, 262 insertions(+), 1 deletion(-)
>
> diff --git a/tools/testing/selftests/bpf/prog_tests/refcounted_kptr.c b/tools/testing/selftests/bpf/prog_tests/refcounted_kptr.c
> index d6bd5e16e6372..0ec91ff914af7 100644
> --- a/tools/testing/selftests/bpf/prog_tests/refcounted_kptr.c
> +++ b/tools/testing/selftests/bpf/prog_tests/refcounted_kptr.c
> @@ -3,7 +3,7 @@
>
>  #include <test_progs.h>
>  #include <network_helpers.h>
> -
> +#include "cgroup_helpers.h"
>  #include "refcounted_kptr.skel.h"
>  #include "refcounted_kptr_fail.skel.h"
>
> @@ -44,3 +44,135 @@ void test_refcounted_kptr_wrong_owner(void)
>  	ASSERT_OK(opts.retval, "rbtree_wrong_owner_remove_fail_a2 retval");
>  	refcounted_kptr__destroy(skel);
>  }
> +
> +static void test_refcnt_leak(struct refcounted_kptr *skel, int key, void *values, size_t values_sz,
> +			     u64 flags, struct bpf_map *map, struct bpf_program *prog_leak,
> +			     struct bpf_program *prog_check, struct bpf_test_run_opts *opts)
> +{
> +	int ret, fd;
> +
> +	ret = bpf_map__update_elem(map, &key, sizeof(key), values, values_sz, flags);
> +	if (!ASSERT_OK(ret, "bpf_map__update_elem init"))
> +		return;
> +
> +	fd = bpf_program__fd(prog_leak);
> +	ret = bpf_prog_test_run_opts(fd, opts);
> +	if (!ASSERT_OK(ret, "bpf_prog_test_run_opts"))
> +		return;
> +	if (!ASSERT_EQ(skel->bss->kptr_refcount, 2, "refcount"))
> +		return;
> +
> +	ret = bpf_map__update_elem(map, &key, sizeof(key), values, values_sz, flags);
> +	if (!ASSERT_OK(ret, "bpf_map__update_elem dec refcount"))
> +		return;
> +
> +	fd = bpf_program__fd(prog_check);
> +	ret = bpf_prog_test_run_opts(fd, opts);
> +	ASSERT_OK(ret, "bpf_prog_test_run_opts");
> +	ASSERT_EQ(skel->bss->kptr_refcount, 1, "refcount");
> +}
> +
[...]

^ permalink raw reply	[flat|nested] 11+ messages in thread
* Re: [PATCH bpf-next v4 4/4] selftests/bpf: Add tests to verify freeing the special fields when update hash and local storage maps 2025-11-04 17:30 ` Yonghong Song @ 2025-11-05 2:14 ` Leon Hwang 2025-11-05 3:35 ` Yonghong Song 0 siblings, 1 reply; 11+ messages in thread From: Leon Hwang @ 2025-11-05 2:14 UTC (permalink / raw) To: Yonghong Song, bpf Cc: ast, andrii, daniel, martin.lau, eddyz87, song, john.fastabend, kpsingh, sdf, haoluo, jolsa, memxor, ameryhung, linux-kernel, kernel-patches-bot On 5/11/25 01:30, Yonghong Song wrote: > > > On 10/30/25 8:24 AM, Leon Hwang wrote: >> Add tests to verify that updating hash and local storage maps decrements >> refcount when BPF_KPTR_REF objects are involved. >> >> The tests perform the following steps: >> >> 1. Call update_elem() to insert an initial value. >> 2. Use bpf_refcount_acquire() to increment the refcount. >> 3. Store the node pointer in the map value. >> 4. Add the node to a linked list. >> 5. Probe-read the refcount and verify it is *2*. >> 6. Call update_elem() again to trigger refcount decrement. >> 7. Probe-read the refcount and verify it is *1*. >> >> Signed-off-by: Leon Hwang <leon.hwang@linux.dev> > > I applied this patch only (i.e., not including patches 1/2/3) to master > branch and do bpf selftest and all tests succeeded. > > [root@arch-fb-vm1 bpf]# ./test_progs -t refcounted_kptr > #294/1 refcounted_kptr/insert_read_both: remove from tree + list:OK > ... > #294/18 refcounted_kptr/pcpu_hash_refcount_leak:OK > #294/19 refcounted_kptr/check_pcpu_hash_refcount:OK > #294/20 refcounted_kptr/hash_lock_refcount_leak:OK > #294/21 refcounted_kptr/check_hash_lock_refcount:OK > #294/22 refcounted_kptr/rbtree_sleepable_rcu:OK > #294/23 refcounted_kptr/rbtree_sleepable_rcu_no_explicit_rcu_lock:OK > #294/24 refcounted_kptr/cgroup_storage_lock_refcount_leak:OK > #294/25 refcounted_kptr/check_cgroup_storage_lock_refcount:OK > ... > > Did I miss anything? > Oops. 
You should run: ./test_progs -t kptr_refcount The results are as follows: test_percpu_hash_refcount_leak:PASS:libbpf_num_possible_cpus 0 nsec test_percpu_hash_refcount_leak:PASS:calloc values 0 nsec test_percpu_hash_refcount_leak:PASS:refcounted_kptr__open_and_load 0 nsec test_refcnt_leak:PASS:bpf_map__update_elem init 0 nsec test_refcnt_leak:PASS:bpf_prog_test_run_opts 0 nsec test_refcnt_leak:PASS:refcount 0 nsec test_refcnt_leak:PASS:bpf_map__update_elem dec refcount 0 nsec test_refcnt_leak:PASS:bpf_prog_test_run_opts 0 nsec test_refcnt_leak:FAIL:refcount unexpected refcount: actual 2 != expected 1 #158/1 kptr_refcount_leak/percpu_hash_refcount_leak:FAIL test_hash_lock_refcount_leak:PASS:refcounted_kptr__open_and_load 0 nsec test_refcnt_leak:PASS:bpf_map__update_elem init 0 nsec test_refcnt_leak:PASS:bpf_prog_test_run_opts 0 nsec test_refcnt_leak:PASS:refcount 0 nsec test_refcnt_leak:PASS:bpf_map__update_elem dec refcount 0 nsec test_refcnt_leak:PASS:bpf_prog_test_run_opts 0 nsec test_refcnt_leak:FAIL:refcount unexpected refcount: actual 2 != expected 1 #158/2 kptr_refcount_leak/hash_lock_refcount_leak:FAIL test_cgroup_storage_lock_refcount_leak:PASS:setup_cgroup_environment 0 nsec test_cgroup_storage_lock_refcount_leak:PASS:get_root_cgroup 0 nsec test_cgroup_storage_lock_refcount_leak:PASS:refcounted_kptr__open_and_load 0 nsec test_refcnt_leak:PASS:bpf_map__update_elem init 0 nsec test_refcnt_leak:PASS:bpf_prog_test_run_opts 0 nsec test_refcnt_leak:PASS:refcount 0 nsec test_refcnt_leak:PASS:bpf_map__update_elem dec refcount 0 nsec test_refcnt_leak:PASS:bpf_prog_test_run_opts 0 nsec test_refcnt_leak:FAIL:refcount unexpected refcount: actual 2 != expected 1 #158/3 kptr_refcount_leak/cgroup_storage_lock_refcount_leak:FAIL #158 kptr_refcount_leak:FAIL All error logs: test_percpu_hash_refcount_leak:PASS:libbpf_num_possible_cpus 0 nsec test_percpu_hash_refcount_leak:PASS:calloc values 0 nsec test_percpu_hash_refcount_leak:PASS:refcounted_kptr__open_and_load 0 nsec 
test_refcnt_leak:PASS:bpf_map__update_elem init 0 nsec
test_refcnt_leak:PASS:bpf_prog_test_run_opts 0 nsec
test_refcnt_leak:PASS:refcount 0 nsec
test_refcnt_leak:PASS:bpf_map__update_elem dec refcount 0 nsec
test_refcnt_leak:PASS:bpf_prog_test_run_opts 0 nsec
test_refcnt_leak:FAIL:refcount unexpected refcount: actual 2 != expected 1
#158/1 kptr_refcount_leak/percpu_hash_refcount_leak:FAIL
test_hash_lock_refcount_leak:PASS:refcounted_kptr__open_and_load 0 nsec
test_refcnt_leak:PASS:bpf_map__update_elem init 0 nsec
test_refcnt_leak:PASS:bpf_prog_test_run_opts 0 nsec
test_refcnt_leak:PASS:refcount 0 nsec
test_refcnt_leak:PASS:bpf_map__update_elem dec refcount 0 nsec
test_refcnt_leak:PASS:bpf_prog_test_run_opts 0 nsec
test_refcnt_leak:FAIL:refcount unexpected refcount: actual 2 != expected 1
#158/2 kptr_refcount_leak/hash_lock_refcount_leak:FAIL
test_cgroup_storage_lock_refcount_leak:PASS:setup_cgroup_environment 0 nsec
test_cgroup_storage_lock_refcount_leak:PASS:get_root_cgroup 0 nsec
test_cgroup_storage_lock_refcount_leak:PASS:refcounted_kptr__open_and_load 0 nsec
test_refcnt_leak:PASS:bpf_map__update_elem init 0 nsec
test_refcnt_leak:PASS:bpf_prog_test_run_opts 0 nsec
test_refcnt_leak:PASS:refcount 0 nsec
test_refcnt_leak:PASS:bpf_map__update_elem dec refcount 0 nsec
test_refcnt_leak:PASS:bpf_prog_test_run_opts 0 nsec
test_refcnt_leak:FAIL:refcount unexpected refcount: actual 2 != expected 1
#158/3 kptr_refcount_leak/cgroup_storage_lock_refcount_leak:FAIL
#158 kptr_refcount_leak:FAIL
Summary: 0/0 PASSED, 0 SKIPPED, 1 FAILED

All three tests failed because the refcount remained 2 instead of
decreasing to 1 after the second update_elem() call.

The CI result [1] also demonstrates this issue.

Sorry for the misleading test name earlier.

Links:
[1] https://github.com/kernel-patches/bpf/pull/10203

Thanks,
Leon

[...]

^ permalink raw reply	[flat|nested] 11+ messages in thread
* Re: [PATCH bpf-next v4 4/4] selftests/bpf: Add tests to verify freeing the special fields when update hash and local storage maps
  2025-11-05  2:14     ` Leon Hwang
@ 2025-11-05  3:35       ` Yonghong Song
  0 siblings, 0 replies; 11+ messages in thread
From: Yonghong Song @ 2025-11-05 3:35 UTC (permalink / raw)
To: Leon Hwang, bpf
Cc: ast, andrii, daniel, martin.lau, eddyz87, song, john.fastabend,
	kpsingh, sdf, haoluo, jolsa, memxor, ameryhung, linux-kernel,
	kernel-patches-bot

On 11/4/25 6:14 PM, Leon Hwang wrote:
>
> On 5/11/25 01:30, Yonghong Song wrote:
>>
>> On 10/30/25 8:24 AM, Leon Hwang wrote:
>>> Add tests to verify that updating hash and local storage maps decrements
>>> refcount when BPF_KPTR_REF objects are involved.
>>>
>>> The tests perform the following steps:
>>>
>>> 1. Call update_elem() to insert an initial value.
>>> 2. Use bpf_refcount_acquire() to increment the refcount.
>>> 3. Store the node pointer in the map value.
>>> 4. Add the node to a linked list.
>>> 5. Probe-read the refcount and verify it is *2*.
>>> 6. Call update_elem() again to trigger refcount decrement.
>>> 7. Probe-read the refcount and verify it is *1*.
>>>
>>> Signed-off-by: Leon Hwang <leon.hwang@linux.dev>
>> I applied this patch only (i.e., not including patches 1/2/3) to the master
>> branch, ran the bpf selftests, and all tests succeeded.
>>
>> [root@arch-fb-vm1 bpf]# ./test_progs -t refcounted_kptr
>> #294/1 refcounted_kptr/insert_read_both: remove from tree + list:OK
>> ...
>> #294/18 refcounted_kptr/pcpu_hash_refcount_leak:OK
>> #294/19 refcounted_kptr/check_pcpu_hash_refcount:OK
>> #294/20 refcounted_kptr/hash_lock_refcount_leak:OK
>> #294/21 refcounted_kptr/check_hash_lock_refcount:OK
>> #294/22 refcounted_kptr/rbtree_sleepable_rcu:OK
>> #294/23 refcounted_kptr/rbtree_sleepable_rcu_no_explicit_rcu_lock:OK
>> #294/24 refcounted_kptr/cgroup_storage_lock_refcount_leak:OK
>> #294/25 refcounted_kptr/check_cgroup_storage_lock_refcount:OK
>> ...
>>
>> Did I miss anything?
>>
> Oops.
>
> You should run:
> ./test_progs -t kptr_refcount
>
> The results are as follows:
>
> test_percpu_hash_refcount_leak:PASS:libbpf_num_possible_cpus 0 nsec
> test_percpu_hash_refcount_leak:PASS:calloc values 0 nsec
> test_percpu_hash_refcount_leak:PASS:refcounted_kptr__open_and_load 0 nsec
> test_refcnt_leak:PASS:bpf_map__update_elem init 0 nsec
> test_refcnt_leak:PASS:bpf_prog_test_run_opts 0 nsec
> test_refcnt_leak:PASS:refcount 0 nsec
> test_refcnt_leak:PASS:bpf_map__update_elem dec refcount 0 nsec
> test_refcnt_leak:PASS:bpf_prog_test_run_opts 0 nsec
> test_refcnt_leak:FAIL:refcount unexpected refcount: actual 2 != expected 1
> #158/1 kptr_refcount_leak/percpu_hash_refcount_leak:FAIL
> test_hash_lock_refcount_leak:PASS:refcounted_kptr__open_and_load 0 nsec
> test_refcnt_leak:PASS:bpf_map__update_elem init 0 nsec
> test_refcnt_leak:PASS:bpf_prog_test_run_opts 0 nsec
> test_refcnt_leak:PASS:refcount 0 nsec
> test_refcnt_leak:PASS:bpf_map__update_elem dec refcount 0 nsec
> test_refcnt_leak:PASS:bpf_prog_test_run_opts 0 nsec
> test_refcnt_leak:FAIL:refcount unexpected refcount: actual 2 != expected 1
> #158/2 kptr_refcount_leak/hash_lock_refcount_leak:FAIL
> test_cgroup_storage_lock_refcount_leak:PASS:setup_cgroup_environment 0 nsec
> test_cgroup_storage_lock_refcount_leak:PASS:get_root_cgroup 0 nsec
> test_cgroup_storage_lock_refcount_leak:PASS:refcounted_kptr__open_and_load 0 nsec
> test_refcnt_leak:PASS:bpf_map__update_elem init 0 nsec
> test_refcnt_leak:PASS:bpf_prog_test_run_opts 0 nsec
> test_refcnt_leak:PASS:refcount 0 nsec
> test_refcnt_leak:PASS:bpf_map__update_elem dec refcount 0 nsec
> test_refcnt_leak:PASS:bpf_prog_test_run_opts 0 nsec
> test_refcnt_leak:FAIL:refcount unexpected refcount: actual 2 != expected 1
> #158/3 kptr_refcount_leak/cgroup_storage_lock_refcount_leak:FAIL
> #158 kptr_refcount_leak:FAIL
>
> All error logs:
> test_percpu_hash_refcount_leak:PASS:libbpf_num_possible_cpus 0 nsec
> test_percpu_hash_refcount_leak:PASS:calloc values 0 nsec
> test_percpu_hash_refcount_leak:PASS:refcounted_kptr__open_and_load 0 nsec
> test_refcnt_leak:PASS:bpf_map__update_elem init 0 nsec
> test_refcnt_leak:PASS:bpf_prog_test_run_opts 0 nsec
> test_refcnt_leak:PASS:refcount 0 nsec
> test_refcnt_leak:PASS:bpf_map__update_elem dec refcount 0 nsec
> test_refcnt_leak:PASS:bpf_prog_test_run_opts 0 nsec
> test_refcnt_leak:FAIL:refcount unexpected refcount: actual 2 != expected 1
> #158/1 kptr_refcount_leak/percpu_hash_refcount_leak:FAIL
> test_hash_lock_refcount_leak:PASS:refcounted_kptr__open_and_load 0 nsec
> test_refcnt_leak:PASS:bpf_map__update_elem init 0 nsec
> test_refcnt_leak:PASS:bpf_prog_test_run_opts 0 nsec
> test_refcnt_leak:PASS:refcount 0 nsec
> test_refcnt_leak:PASS:bpf_map__update_elem dec refcount 0 nsec
> test_refcnt_leak:PASS:bpf_prog_test_run_opts 0 nsec
> test_refcnt_leak:FAIL:refcount unexpected refcount: actual 2 != expected 1
> #158/2 kptr_refcount_leak/hash_lock_refcount_leak:FAIL
> test_cgroup_storage_lock_refcount_leak:PASS:setup_cgroup_environment 0 nsec
> test_cgroup_storage_lock_refcount_leak:PASS:get_root_cgroup 0 nsec
> test_cgroup_storage_lock_refcount_leak:PASS:refcounted_kptr__open_and_load 0 nsec
> test_refcnt_leak:PASS:bpf_map__update_elem init 0 nsec
> test_refcnt_leak:PASS:bpf_prog_test_run_opts 0 nsec
> test_refcnt_leak:PASS:refcount 0 nsec
> test_refcnt_leak:PASS:bpf_map__update_elem dec refcount 0 nsec
> test_refcnt_leak:PASS:bpf_prog_test_run_opts 0 nsec
> test_refcnt_leak:FAIL:refcount unexpected refcount: actual 2 != expected 1
> #158/3 kptr_refcount_leak/cgroup_storage_lock_refcount_leak:FAIL
> #158 kptr_refcount_leak:FAIL
> Summary: 0/0 PASSED, 0 SKIPPED, 1 FAILED
>
> All three tests failed because the refcount remained 2 instead of
> decreasing to 1 after the second update_elem() call.
>
> The CI result [1] also demonstrates this issue.
>
> Sorry for the misleading test name earlier.

Sorry, it is my fault; I obviously looked at the wrong selftest. Indeed,
without patches 1-3 applied, the tests fail.

>
> Links:
> [1] https://github.com/kernel-patches/bpf/pull/10203
>
> Thanks,
> Leon
>
> [...]
>

^ permalink raw reply	[flat|nested] 11+ messages in thread
end of thread, other threads:[~2025-11-05  3:35 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2025-10-30 15:24 [PATCH bpf-next v4 0/4] bpf: Free special fields when update hash and local storage maps Leon Hwang
2025-10-30 15:24 ` [PATCH bpf-next v4 1/4] bpf: Free special fields when update [lru_,]percpu_hash maps Leon Hwang
2025-10-30 15:24 ` [PATCH bpf-next v4 2/4] bpf: Free special fields when update hash maps with BPF_F_LOCK Leon Hwang
2025-10-30 15:24 ` [PATCH bpf-next v4 3/4] bpf: Free special fields when update local storage " Leon Hwang
2025-10-30 22:35   ` Alexei Starovoitov
2025-11-03  5:17     ` Leon Hwang
2025-11-03 17:24       ` Alexei Starovoitov
2025-10-30 15:24 ` [PATCH bpf-next v4 4/4] selftests/bpf: Add tests to verify freeing the special fields when update hash and local storage maps Leon Hwang
2025-11-04 17:30   ` Yonghong Song
2025-11-05  2:14     ` Leon Hwang
2025-11-05  3:35       ` Yonghong Song