* [PATCH bpf-next] selftests/bpf: Make sure stashed kptr in local kptr is freed recursively
@ 2024-08-27 1:13 Amery Hung
2024-08-28 23:38 ` Martin KaFai Lau
2024-08-29 19:20 ` patchwork-bot+netdevbpf
0 siblings, 2 replies; 3+ messages in thread
From: Amery Hung @ 2024-08-27 1:13 UTC (permalink / raw)
To: bpf
Cc: daniel, andrii, alexei.starovoitov, martin.lau, houtao,
davemarchevsky, ameryhung, Amery Hung
When dropping a local kptr, any kptr stashed into it is supposed to be
freed through bpf_obj_free_fields->__bpf_obj_drop_impl recursively. Add a
test to make sure it happens.

The test first stashes a referenced kptr to "struct task" into a local
kptr and gets the reference count of the task. Then, it drops the local
kptr and reads the reference count of the task again. Since
bpf_obj_free_fields and __bpf_obj_drop_impl will go through the local kptr
recursively during bpf_obj_drop, the dtor of the stashed task kptr should
eventually be called. The second reference count should be one less than
the first one.
Signed-off-by: Amery Hung <amery.hung@bytedance.com>
---
.../selftests/bpf/progs/task_kfunc_success.c | 30 ++++++++++++++++++-
1 file changed, 29 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/bpf/progs/task_kfunc_success.c b/tools/testing/selftests/bpf/progs/task_kfunc_success.c
index 3138bb689b0b..a55149015063 100644
--- a/tools/testing/selftests/bpf/progs/task_kfunc_success.c
+++ b/tools/testing/selftests/bpf/progs/task_kfunc_success.c
@@ -143,8 +143,9 @@ int BPF_PROG(test_task_acquire_leave_in_map, struct task_struct *task, u64 clone
SEC("tp_btf/task_newtask")
int BPF_PROG(test_task_xchg_release, struct task_struct *task, u64 clone_flags)
{
- struct task_struct *kptr;
+ struct task_struct *kptr, *acquired;
struct __tasks_kfunc_map_value *v, *local;
+ int refcnt, refcnt_after_drop;
long status;
if (!is_test_kfunc_task())
@@ -190,7 +191,34 @@ int BPF_PROG(test_task_xchg_release, struct task_struct *task, u64 clone_flags)
return 0;
}
+ /* Stash a copy into local kptr and check if it is released recursively */
+ acquired = bpf_task_acquire(kptr);
+ if (!acquired) {
+ err = 7;
+ bpf_obj_drop(local);
+ bpf_task_release(kptr);
+ return 0;
+ }
+ bpf_probe_read_kernel(&refcnt, sizeof(refcnt), &acquired->rcu_users);
+
+ acquired = bpf_kptr_xchg(&local->task, acquired);
+ if (acquired) {
+ err = 8;
+ bpf_obj_drop(local);
+ bpf_task_release(kptr);
+ bpf_task_release(acquired);
+ return 0;
+ }
+
bpf_obj_drop(local);
+
+ bpf_probe_read_kernel(&refcnt_after_drop, sizeof(refcnt_after_drop), &kptr->rcu_users);
+ if (refcnt != refcnt_after_drop + 1) {
+ err = 9;
+ bpf_task_release(kptr);
+ return 0;
+ }
+
bpf_task_release(kptr);
return 0;
--
2.20.1
* Re: [PATCH bpf-next] selftests/bpf: Make sure stashed kptr in local kptr is freed recursively
2024-08-27 1:13 [PATCH bpf-next] selftests/bpf: Make sure stashed kptr in local kptr is freed recursively Amery Hung
@ 2024-08-28 23:38 ` Martin KaFai Lau
2024-08-29 19:20 ` patchwork-bot+netdevbpf
1 sibling, 0 replies; 3+ messages in thread
From: Martin KaFai Lau @ 2024-08-28 23:38 UTC (permalink / raw)
To: Amery Hung
Cc: bpf, daniel, andrii, alexei.starovoitov, martin.lau, houtao,
davemarchevsky, ameryhung
On 8/26/24 6:13 PM, Amery Hung wrote:
> When dropping a local kptr, any kptr stashed into it is supposed to be
> freed through bpf_obj_free_fields->__bpf_obj_drop_impl recursively. Add a
> test to make sure it happens.
>
> The test first stashes a referenced kptr to "struct task" into a local
> kptr and gets the reference count of the task. Then, it drops the local
> kptr and reads the reference count of the task again. Since
> bpf_obj_free_fields and __bpf_obj_drop_impl will go through the local kptr
> recursively during bpf_obj_drop, the dtor of the stashed task kptr should
> eventually be called. The second reference count should be one less than
> the first one.
Acked-by: Martin KaFai Lau <martin.lau@kernel.org>
* Re: [PATCH bpf-next] selftests/bpf: Make sure stashed kptr in local kptr is freed recursively
2024-08-27 1:13 [PATCH bpf-next] selftests/bpf: Make sure stashed kptr in local kptr is freed recursively Amery Hung
2024-08-28 23:38 ` Martin KaFai Lau
@ 2024-08-29 19:20 ` patchwork-bot+netdevbpf
1 sibling, 0 replies; 3+ messages in thread
From: patchwork-bot+netdevbpf @ 2024-08-29 19:20 UTC (permalink / raw)
To: Amery Hung
Cc: bpf, daniel, andrii, alexei.starovoitov, martin.lau, houtao,
davemarchevsky, ameryhung
Hello:
This patch was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:
On Tue, 27 Aug 2024 01:13:01 +0000 you wrote:
> When dropping a local kptr, any kptr stashed into it is supposed to be
> freed through bpf_obj_free_fields->__bpf_obj_drop_impl recursively. Add a
> test to make sure it happens.
>
> The test first stashes a referenced kptr to "struct task" into a local
> kptr and gets the reference count of the task. Then, it drops the local
> kptr and reads the reference count of the task again. Since
> bpf_obj_free_fields and __bpf_obj_drop_impl will go through the local kptr
> recursively during bpf_obj_drop, the dtor of the stashed task kptr should
> eventually be called. The second reference count should be one less than
> the first one.
>
> [...]
Here is the summary with links:
- [bpf-next] selftests/bpf: Make sure stashed kptr in local kptr is freed recursively
https://git.kernel.org/bpf/bpf-next/c/bd0b4836a233
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html