BPF List
* Re: [PATCH 1/2] bpf: arena: fix use-after-free in VMA tracking on fork
       [not found] <20260513180923.628644-1-hodgesd@meta.com>
@ 2026-05-13 19:04 ` Alexei Starovoitov
  0 siblings, 0 replies; 5+ messages in thread
From: Alexei Starovoitov @ 2026-05-13 19:04 UTC (permalink / raw)
  To: Daniel Hodges
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi,
	Song Liu, Yonghong Song, Jiri Olsa, Shuah Khan, Barret Rhoden,
	bpf, open list:KERNEL SELFTEST FRAMEWORK, LKML

On Wed, May 13, 2026 at 11:30 AM Daniel Hodges <hodgesd@meta.com> wrote:
>
> arena_vm_open() only increments a refcount on the existing vma_list
> entry without creating a new entry for the child's VMA. After fork,
> vml->vma still points to the parent's VMA. When the parent unmaps
> (arena_vm_close decrements refcount but doesn't remove the entry),
> vml->vma becomes a dangling pointer. A subsequent bpf_arena_free_pages
> call reaches zap_pages() which dereferences the freed VMA via
> zap_vma_range(vml->vma, ...), causing a use-after-free:
>
>   BUG: KASAN: slab-use-after-free in zap_vma_range+0xf2/0x100
>   Read of size 8 at addr ff11000113ec9b10 by task test_progs/198
>   Call Trace:
>    zap_vma_range+0xf2/0x100
>    arena_free_pages+0x6de/0x970
>    bpf_prog_a2b540a82b1066f3_arena_free+0x8b/0xb6
>    bpf_prog_test_run_syscall+0x3d3/0x8a0
>
> The same issue is triggered by __split_vma (partial munmap) and
> copy_vma (mremap), both of which call vm_ops->open.
>
> Fix this by giving each VMA its own vma_list entry instead of sharing
> one with a refcount. arena_vm_open now allocates a new entry for the
> new VMA, and arena_vm_close always removes and frees its own entry.
> If the allocation fails in arena_vm_open, vm_private_data is set to
> NULL and arena_vm_close handles this gracefully, meaning the VMA
> simply won't be zapped during arena page frees.
>
> Fixes: 317460317a02 ("bpf: Introduce bpf_arena.")
> Signed-off-by: Daniel Hodges <hodgesd@meta.com>
> Assisted-by: Claude-Code:claude-opus-4-6
> ---
>  kernel/bpf/arena.c | 19 +++++++++++++------
>  1 file changed, 13 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/bpf/arena.c b/kernel/bpf/arena.c
> index 49a8f7b1beef..a3c46100dd12 100644
> --- a/kernel/bpf/arena.c
> +++ b/kernel/bpf/arena.c
> @@ -310,64 +310,71 @@ static int arena_map_check_btf(struct bpf_map *map, const struct btf *btf,
>  }
>
>  static u64 arena_map_mem_usage(const struct bpf_map *map)
>  {
>         return 0;
>  }
>
>  struct vma_list {
>         struct vm_area_struct *vma;
>         struct list_head head;
> -       refcount_t mmap_count;
>  };
>
>  static int remember_vma(struct bpf_arena *arena, struct vm_area_struct *vma)
>  {
>         struct vma_list *vml;
>
>         vml = kmalloc_obj(*vml);
>         if (!vml)
>                 return -ENOMEM;
> -       refcount_set(&vml->mmap_count, 1);
>         vma->vm_private_data = vml;
>         vml->vma = vma;
>         list_add(&vml->head, &arena->vma_list);
>         return 0;
>  }
>
>  static void arena_vm_open(struct vm_area_struct *vma)
>  {
> -       struct vma_list *vml = vma->vm_private_data;
> +       struct bpf_map *map = vma->vm_file->private_data;
> +       struct bpf_arena *arena = container_of(map, struct bpf_arena, map);
> +       struct vma_list *vml;
>
> -       refcount_inc(&vml->mmap_count);
> +       vml = kmalloc_obj(*vml);
> +       if (!vml) {
> +               vma->vm_private_data = NULL;
> +               return;
> +       }

I'm tired of seeing the same garbage patch from claude for the 3rd time.

Please make sure you use claude on the latest kernel.
The fix 4fddde2a732d ("bpf: Fix use-after-free in arena_vm_close on fork")
landed a month ago.

pw-bot: cr

^ permalink raw reply	[flat|nested] 5+ messages in thread

* [PATCH 1/2] bpf: arena: fix use-after-free in VMA tracking on fork
@ 2026-05-13 19:13 Daniel Hodges
  2026-05-13 19:13 ` [PATCH 2/2] selftests/bpf: Add test validating arena VMA tracking across fork Daniel Hodges
  2026-05-14 11:21 ` [PATCH 1/2] bpf: arena: fix use-after-free in VMA tracking on fork sashiko-bot
  0 siblings, 2 replies; 5+ messages in thread
From: Daniel Hodges @ 2026-05-13 19:13 UTC (permalink / raw)
  To: bpf
  Cc: linux-kselftest, linux-kernel, ast, daniel, andrii, martin.lau,
	eddyz87, memxor, song, yonghong.song, jolsa, shuah, git, brho,
	hodgesd

arena_vm_open() only increments a refcount on the existing vma_list
entry without creating a new entry for the child's VMA. After fork,
vml->vma still points to the parent's VMA. When the parent unmaps
(arena_vm_close decrements refcount but doesn't remove the entry),
vml->vma becomes a dangling pointer. A subsequent bpf_arena_free_pages
call reaches zap_pages() which dereferences the freed VMA via
zap_vma_range(vml->vma, ...), causing a use-after-free:

  BUG: KASAN: slab-use-after-free in zap_vma_range+0xf2/0x100
  Read of size 8 at addr ff11000113ec9b10 by task test_progs/198
  Call Trace:
   zap_vma_range+0xf2/0x100
   arena_free_pages+0x6de/0x970
   bpf_prog_a2b540a82b1066f3_arena_free+0x8b/0xb6
   bpf_prog_test_run_syscall+0x3d3/0x8a0

The same issue is triggered by __split_vma (partial munmap) and
copy_vma (mremap), both of which call vm_ops->open.

Fix this by giving each VMA its own vma_list entry instead of sharing
one with a refcount. arena_vm_open now allocates a new entry for the
new VMA, and arena_vm_close always removes and frees its own entry.
If the allocation fails in arena_vm_open, vm_private_data is set to
NULL and arena_vm_close handles this gracefully, meaning the VMA
simply won't be zapped during arena page frees.

Fixes: 317460317a02 ("bpf: Introduce bpf_arena.")
Signed-off-by: Daniel Hodges <git@danielhodges.dev>
Assisted-by: Claude-Code:claude-opus-4-6
---
 kernel/bpf/arena.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/kernel/bpf/arena.c b/kernel/bpf/arena.c
index 49a8f7b1beef..a3c46100dd12 100644
--- a/kernel/bpf/arena.c
+++ b/kernel/bpf/arena.c
@@ -317,7 +317,6 @@ static u64 arena_map_mem_usage(const struct bpf_map *map)
 struct vma_list {
 	struct vm_area_struct *vma;
 	struct list_head head;
-	refcount_t mmap_count;
 };
 
 static int remember_vma(struct bpf_arena *arena, struct vm_area_struct *vma)
@@ -327,7 +326,6 @@ static int remember_vma(struct bpf_arena *arena, struct vm_area_struct *vma)
 	vml = kmalloc_obj(*vml);
 	if (!vml)
 		return -ENOMEM;
-	refcount_set(&vml->mmap_count, 1);
 	vma->vm_private_data = vml;
 	vml->vma = vma;
 	list_add(&vml->head, &arena->vma_list);
@@ -336,9 +334,19 @@ static int remember_vma(struct bpf_arena *arena, struct vm_area_struct *vma)
 
 static void arena_vm_open(struct vm_area_struct *vma)
 {
-	struct vma_list *vml = vma->vm_private_data;
+	struct bpf_map *map = vma->vm_file->private_data;
+	struct bpf_arena *arena = container_of(map, struct bpf_arena, map);
+	struct vma_list *vml;
 
-	refcount_inc(&vml->mmap_count);
+	vml = kmalloc_obj(*vml);
+	if (!vml) {
+		vma->vm_private_data = NULL;
+		return;
+	}
+	vml->vma = vma;
+	vma->vm_private_data = vml;
+	guard(mutex)(&arena->lock);
+	list_add(&vml->head, &arena->vma_list);
 }
 
 static int arena_vm_may_split(struct vm_area_struct *vma, unsigned long addr)
@@ -357,10 +365,9 @@ static void arena_vm_close(struct vm_area_struct *vma)
 	struct bpf_arena *arena = container_of(map, struct bpf_arena, map);
 	struct vma_list *vml = vma->vm_private_data;
 
-	if (!refcount_dec_and_test(&vml->mmap_count))
+	if (!vml)
 		return;
 	guard(mutex)(&arena->lock);
-	/* update link list under lock */
 	list_del(&vml->head);
 	vma->vm_private_data = NULL;
 	kfree(vml);
-- 
2.52.0



* [PATCH 2/2] selftests/bpf: Add test validating arena VMA tracking across fork
  2026-05-13 19:13 [PATCH 1/2] bpf: arena: fix use-after-free in VMA tracking on fork Daniel Hodges
@ 2026-05-13 19:13 ` Daniel Hodges
  2026-05-14 11:57   ` sashiko-bot
  2026-05-14 11:21 ` [PATCH 1/2] bpf: arena: fix use-after-free in VMA tracking on fork sashiko-bot
  1 sibling, 1 reply; 5+ messages in thread
From: Daniel Hodges @ 2026-05-13 19:13 UTC (permalink / raw)
  To: bpf
  Cc: linux-kselftest, linux-kernel, ast, daniel, andrii, martin.lau,
	eddyz87, memxor, song, yonghong.song, jolsa, shuah, git, brho,
	hodgesd

Validate that arena page frees don't trigger a use-after-free when a
forked child holds an inherited arena mmap. The test allocates arena
pages, forks, munmaps the arena in the parent, then frees the arena
pages via BPF. With KASAN enabled, any stale VMA dereference in
zap_pages() will be caught.

Signed-off-by: Daniel Hodges <git@danielhodges.dev>
Assisted-by: Claude-Code:claude-opus-4-6
---
 .../bpf/prog_tests/arena_fork_free.c          | 114 ++++++++++++++++++
 .../selftests/bpf/progs/arena_fork_free.c     |  71 +++++++++++
 2 files changed, 185 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/arena_fork_free.c
 create mode 100644 tools/testing/selftests/bpf/progs/arena_fork_free.c

diff --git a/tools/testing/selftests/bpf/prog_tests/arena_fork_free.c b/tools/testing/selftests/bpf/prog_tests/arena_fork_free.c
new file mode 100644
index 000000000000..6a6e1d4197cf
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/arena_fork_free.c
@@ -0,0 +1,114 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+/*
+ * Validate arena VMA tracking across fork.
+ *
+ * When a process with an arena mmap forks, each VMA must be independently
+ * tracked. This test verifies that freeing arena pages after the parent
+ * munmaps does not access stale VMA pointers.
+ *
+ * Sequence:
+ *   1. Load arena, allocate pages via BPF
+ *   2. fork() — child inherits arena mmap
+ *   3. Parent: munmap the arena
+ *   4. Parent: free arena pages via BPF — zap_pages() must only touch
+ *      live VMAs
+ *
+ * With KASAN enabled, any stale VMA access will be caught.
+ */
+#include <test_progs.h>
+#include <sys/wait.h>
+#include <sys/mman.h>
+#include <unistd.h>
+#include <sys/user.h>
+#ifndef PAGE_SIZE
+#define PAGE_SIZE getpagesize()
+#endif
+#include "arena_fork_free.skel.h"
+
+void test_arena_fork_free(void)
+{
+	LIBBPF_OPTS(bpf_test_run_opts, opts);
+	struct arena_fork_free *skel;
+	int pipe_fds[2] = {-1, -1};
+	size_t arena_sz;
+	void *arena_area;
+	pid_t child;
+	int ret, status;
+	char buf;
+
+	skel = arena_fork_free__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "open_and_load"))
+		return;
+
+	/* Step 1: allocate arena pages via BPF */
+	ret = bpf_prog_test_run_opts(
+		bpf_program__fd(skel->progs.arena_alloc), &opts);
+	if (!ASSERT_OK(ret, "alloc_run"))
+		goto out;
+	if (!ASSERT_OK(opts.retval, "alloc_retval"))
+		goto out;
+	if (skel->bss->skip) {
+		printf("%s:SKIP:compiler doesn't support arena_cast\n",
+		       __func__);
+		test__skip();
+		goto out;
+	}
+
+	arena_area = bpf_map__initial_value(skel->maps.arena, &arena_sz);
+	if (!ASSERT_OK_PTR(arena_area, "arena_area"))
+		goto out;
+	arena_sz = bpf_map__max_entries(skel->maps.arena) * PAGE_SIZE;
+
+	if (!ASSERT_OK(pipe(pipe_fds), "pipe"))
+		goto out;
+
+	/* Step 2: fork — child inherits arena mmap */
+	child = fork();
+	if (!ASSERT_GE(child, 0, "fork")) {
+		close(pipe_fds[0]);
+		close(pipe_fds[1]);
+		goto out;
+	}
+
+	if (child == 0) {
+		/* Child: keep arena mmap alive, wait for parent to signal */
+		close(pipe_fds[1]);
+		read(pipe_fds[0], &buf, 1);
+		close(pipe_fds[0]);
+		_exit(0);
+	}
+
+	/* Parent continues */
+	close(pipe_fds[0]);
+	pipe_fds[0] = -1;
+
+	/* Step 3: munmap the arena in the parent */
+	ret = munmap(arena_area, arena_sz);
+	if (!ASSERT_OK(ret, "munmap"))
+		goto signal_child;
+
+	/*
+	 * Step 4: free arena pages via BPF.
+	 *
+	 * Wait for the RCU grace period so the parent's VMA slab memory
+	 * is actually freed (VMA freeing is deferred via call_rcu).
+	 * This ensures KASAN can detect any stale VMA dereference in
+	 * zap_pages().
+	 */
+	usleep(200000);
+	opts.retval = 0;
+	ret = bpf_prog_test_run_opts(
+		bpf_program__fd(skel->progs.arena_free), &opts);
+	ASSERT_OK(ret, "free_run");
+	ASSERT_OK(opts.retval, "free_retval");
+
+signal_child:
+	close(pipe_fds[1]);
+	pipe_fds[1] = -1;
+	waitpid(child, &status, 0);
+	ASSERT_TRUE(WIFEXITED(status), "child_exited");
+
+out:
+	arena_fork_free__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/arena_fork_free.c b/tools/testing/selftests/bpf/progs/arena_fork_free.c
new file mode 100644
index 000000000000..81b4f9a4e94b
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/arena_fork_free.c
@@ -0,0 +1,71 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+/*
+ * Validate arena VMA tracking across fork.
+ *
+ * Provides BPF programs to allocate and free arena pages, exercised by
+ * the userspace test to verify that zap_pages() correctly handles VMA
+ * lifecycle when a forked child holds an inherited arena mmap.
+ */
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include "bpf_arena_common.h"
+
+struct {
+	__uint(type, BPF_MAP_TYPE_ARENA);
+	__uint(map_flags, BPF_F_MMAPABLE);
+	__uint(max_entries, 10); /* number of pages */
+#ifdef __TARGET_ARCH_arm64
+	__ulong(map_extra, 0x1ull << 32); /* start of mmap() region */
+#else
+	__ulong(map_extra, 0x1ull << 44); /* start of mmap() region */
+#endif
+} arena SEC(".maps");
+
+bool skip = false;
+
+#ifdef __BPF_FEATURE_ADDR_SPACE_CAST
+
+void __arena *alloc_ptr;
+int alloc_page_cnt;
+
+SEC("syscall")
+int arena_alloc(void *ctx)
+{
+	alloc_ptr = bpf_arena_alloc_pages(&arena, NULL, 2, NUMA_NO_NODE, 0);
+	if (!alloc_ptr)
+		return 1;
+	alloc_page_cnt = 2;
+	return 0;
+}
+
+SEC("syscall")
+int arena_free(void *ctx)
+{
+	if (!alloc_ptr || !alloc_page_cnt)
+		return 1;
+	bpf_arena_free_pages(&arena, alloc_ptr, alloc_page_cnt);
+	alloc_ptr = NULL;
+	alloc_page_cnt = 0;
+	return 0;
+}
+
+#else
+
+SEC("syscall")
+int arena_alloc(void *ctx)
+{
+	skip = true;
+	return 0;
+}
+
+SEC("syscall")
+int arena_free(void *ctx)
+{
+	skip = true;
+	return 0;
+}
+
+#endif
+
+char _license[] SEC("license") = "GPL";
-- 
2.52.0



* Re: [PATCH 1/2] bpf: arena: fix use-after-free in VMA tracking on fork
  2026-05-13 19:13 [PATCH 1/2] bpf: arena: fix use-after-free in VMA tracking on fork Daniel Hodges
  2026-05-13 19:13 ` [PATCH 2/2] selftests/bpf: Add test validating arena VMA tracking across fork Daniel Hodges
@ 2026-05-14 11:21 ` sashiko-bot
  1 sibling, 0 replies; 5+ messages in thread
From: sashiko-bot @ 2026-05-14 11:21 UTC (permalink / raw)
  To: Daniel Hodges; +Cc: bpf

Thank you for your contribution! Sashiko AI review found 2 potential issues to consider:
- [Critical] zap_pages() accesses user page tables locklessly, causing page table Use-After-Free during concurrent VMA teardown.
- [High] Silent allocation failure in arena_vm_open() leads to memory leak and physical page desynchronization.
--

commit 305653246e6ae24222d49bb8aa21f98a5bc22903
Author: Daniel Hodges <git@danielhodges.dev>
bpf: arena: fix use-after-free in VMA tracking on fork

This commit addresses a use-after-free by giving each vma its own vma_list
entry in arena_vm_open, rather than sharing a refcounted entry. It also
ensures arena_vm_close removes and frees the respective entry.

> diff --git a/kernel/bpf/arena.c b/kernel/bpf/arena.c
> --- a/kernel/bpf/arena.c
> +++ b/kernel/bpf/arena.c

[ ... ]

> @@ -336,9 +334,19 @@ static int remember_vma(struct bpf_arena *arena, struct vm_area_struct *vma)
>  
>  static void arena_vm_open(struct vm_area_struct *vma)
>  {
> -	struct vma_list *vml = vma->vm_private_data;
> +	struct bpf_map *map = vma->vm_file->private_data;
> +	struct bpf_arena *arena = container_of(map, struct bpf_arena, map);
> +	struct vma_list *vml;
>  
> -	refcount_inc(&vml->mmap_count);
> +	vml = kmalloc_obj(*vml);
> +	if (!vml) {
> +		vma->vm_private_data = NULL;
> +		return;
> +	}

Because vm_ops->open has a void return type, returning early here does not
abort the vma creation. The child vma will successfully map the arena memory
but remain omitted from arena->vma_list.

Later, when bpf_arena_free_pages() calls zap_pages(), it iterates only over
arena->vma_list. Does this omission cause the child process to retain valid
user-space ptes pointing to the old physical pages?

If so, this could prevent the physical pages from returning to the buddy
allocator, leaking the memory. It may also break shared memory coherency if
new physical pages are reallocated at the same virtual offset while the
child still accesses the old pages.

> +	vml->vma = vma;
> +	vma->vm_private_data = vml;
> +	guard(mutex)(&arena->lock);
> +	list_add(&vml->head, &arena->vma_list);
>  }

This is an existing issue not introduced by this patch, but zap_pages()
iterates over arena->vma_list and calls zap_vma_range() holding only
arena->lock.

The memory management subsystem requires mmap_lock or the per-vma lock to
be held for vma->vm_mm when traversing a vma's page tables. Concurrently,
processes may exit or unmap the arena. In the teardown sequence,
free_pgtables() unlinks and frees the pud, pmd, and pte pages before
remove_vma() invokes vma->vm_ops->close.

Could zap_pages() execute during this teardown window and attempt to walk
page tables that have already been freed, leading to a use-after-free on
the page directory and table pages themselves?

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260513191322.21319-1-git@danielhodges.dev?part=1


* Re: [PATCH 2/2] selftests/bpf: Add test validating arena VMA tracking across fork
  2026-05-13 19:13 ` [PATCH 2/2] selftests/bpf: Add test validating arena VMA tracking across fork Daniel Hodges
@ 2026-05-14 11:57   ` sashiko-bot
  0 siblings, 0 replies; 5+ messages in thread
From: sashiko-bot @ 2026-05-14 11:57 UTC (permalink / raw)
  To: Daniel Hodges; +Cc: bpf

Thank you for your contribution! Sashiko AI review found 3 potential issues to consider:
- [High] The test bypasses the intended VMA tracking logic because arena mmaps are created with VM_DONTCOPY, preventing the child from inheriting the VMA.
- [High] Double `munmap()` of libbpf-managed memory leads to potential silent memory corruption in concurrent test environments.
- [Medium] Flaky reliance on `usleep()` for RCU grace period synchronization.
--

commit ee7a95885f283c6cd8d8ee4195aa3d4bcee4e6a6
Author: Daniel Hodges <git@danielhodges.dev>

selftests/bpf: Add test validating arena VMA tracking across fork
    
This commit adds a BPF selftest to validate that freeing arena pages does
not trigger a use-after-free when a forked child holds an inherited arena mmap.

> diff --git a/tools/testing/selftests/bpf/prog_tests/arena_fork_free.c b/tools/testing/selftests/bpf/prog_tests/arena_fork_free.c
> new file mode 100644
> index 0000000000000..6a6e1d4197cf9
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/prog_tests/arena_fork_free.c

[ ... ]

> +	arena_area = bpf_map__initial_value(skel->maps.arena, &arena_sz);
> +	if (!ASSERT_OK_PTR(arena_area, "arena_area"))
> +		goto out;
> +	arena_sz = bpf_map__max_entries(skel->maps.arena) * PAGE_SIZE;
> +
> +	if (!ASSERT_OK(pipe(pipe_fds), "pipe"))
> +		goto out;
> +
> +	/* Step 2: fork — child inherits arena mmap */
> +	child = fork();

Does the child actually inherit the arena mmap here? 

The kernel's arena_map_mmap() sets VM_DONTCOPY on the arena VMA, meaning the
child process will not inherit it during fork() and arena_vm_open() will not
be called.

This causes the parent's munmap() to leave the tracking list empty, so when
bpf_arena_free_pages() triggers zap_pages(), it iterates over an empty list.
The test seems to pass without exercising the intended VMA tracking logic.

Could the test clear VM_DONTCOPY via madvise(..., MADV_DOFORK) before
calling fork()?

[ ... ]

> +	/* Parent continues */
> +	close(pipe_fds[0]);
> +	pipe_fds[0] = -1;
> +
> +	/* Step 3: munmap the arena in the parent */
> +	ret = munmap(arena_area, arena_sz);

Can this munmap() lead to a double munmap and silent memory corruption?

Because arena_area was obtained from bpf_map__initial_value(), it points to
libbpf's internal map->mmaped pointer. When the test completes and calls
arena_fork_free__destroy(skel), libbpf will unconditionally call munmap()
on this pointer again.

If another thread or parallel test mmaps memory in that address window
between this munmap() and the skeleton destruction, the second munmap()
will silently unmap the unrelated memory.

Would it be better for the test to mmap() the arena independently using its
FD instead of unmapping libbpf's internal pointer?

[ ... ]

> +	/*
> +	 * Step 4: free arena pages via BPF.
> +	 *
> +	 * Wait for the RCU grace period so the parent's VMA slab memory
> +	 * is actually freed (VMA freeing is deferred via call_rcu).
> +	 * This ensures KASAN can detect any stale VMA dereference in
> +	 * zap_pages().
> +	 */
> +	usleep(200000);

Is relying on a fixed timeout reliable for ensuring an RCU grace period
has elapsed?

Under heavy load in CI environments, the grace period might be delayed,
causing KASAN to miss the intended use-after-free because the VMA slab
memory hasn't actually been freed yet.

Could the test use kern_sync_rcu() from the BPF selftests framework to
deterministically wait for the grace period?

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260513191322.21319-1-git@danielhodges.dev?part=2

