From: Weiming Shi <bestswngs@gmail.com>
To: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Cc: Emil Tsalapatis <emil@etsalapatis.com>,
Alexei Starovoitov <ast@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
Andrii Nakryiko <andrii@kernel.org>,
Martin KaFai Lau <martin.lau@linux.dev>,
Eduard Zingerman <eddyz87@gmail.com>, Song Liu <song@kernel.org>,
Yonghong Song <yonghong.song@linux.dev>,
John Fastabend <john.fastabend@gmail.com>,
KP Singh <kpsingh@kernel.org>,
Stanislav Fomichev <sdf@fomichev.me>, Hao Luo <haoluo@google.com>,
Jiri Olsa <jolsa@kernel.org>, Barret Rhoden <brho@google.com>,
bpf <bpf@vger.kernel.org>, LKML <linux-kernel@vger.kernel.org>,
Xiang Mei <xmei5@asu.edu>
Subject: Re: [PATCH bpf v4 1/2] bpf: Fix use-after-free of arena VMA on fork
Date: Mon, 13 Apr 2026 18:12:40 +0800 [thread overview]
Message-ID: <adzBmOuXCZ0vHnbg@SLSGDTSWING002> (raw)
In-Reply-To: <CAADnVQLhnnt8uGoZk=38zH1jh_9Uy8ah=NdcoBE71qGkuH2O5g@mail.gmail.com>
On 26-04-12 14:30, Alexei Starovoitov wrote:
> On Sun, Apr 12, 2026 at 10:50 AM Emil Tsalapatis <emil@etsalapatis.com> wrote:
> >
> > On Sat Apr 11, 2026 at 10:27 PM EDT, Weiming Shi wrote:
> > > arena_vm_open() only increments a refcount on the shared vma_list entry
> > > but never registers the new VMA or updates the stored vma pointer. When
> > > the original VMA is unmapped while a forked copy still exists,
> > > arena_vm_close() drops the refcount without freeing the vma_list entry.
> > > The entry's vma pointer now refers to a freed vm_area_struct. A
> > > subsequent bpf_arena_free_pages() call iterates vma_list and passes
> > > the dangling pointer to zap_page_range_single(), causing a
> > > use-after-free.
> > >
> > > The bug is reachable by any process with CAP_BPF and CAP_PERFMON that
> > > can create a BPF_MAP_TYPE_ARENA, mmap it, and fork. It triggers
> > > deterministically -- no race condition is involved.
> > >
> > > BUG: KASAN: slab-use-after-free in zap_page_range_single (mm/memory.c:2234)
> > > Call Trace:
> > > <TASK>
> > > zap_page_range_single+0x101/0x110 mm/memory.c:2234
> > > zap_pages+0x80/0xf0 kernel/bpf/arena.c:658
> > > arena_free_pages+0x67a/0x860 kernel/bpf/arena.c:712
> > > bpf_prog_test_run_syscall+0x3da net/bpf/test_run.c:1640
> > > __sys_bpf+0x1662/0x50b0 kernel/bpf/syscall.c:6267
> > > __x64_sys_bpf+0x73/0xb0 kernel/bpf/syscall.c:6360
> > > do_syscall_64+0xf1/0x530 arch/x86/entry/syscall_64.c:63
> > > entry_SYSCALL_64_after_hwframe+0x77 arch/x86/entry/entry_64.S:130
> > > </TASK>
> > >
> > > Fix this by tracking each child VMA separately. arena_vm_open() now
> > > clears the inherited vm_private_data and calls remember_vma() to
> > > register a fresh vma_list entry for the new VMA. If remember_vma()
> > > fails due to OOM, vm_private_data stays NULL and arena_vm_close()
> > > skips the cleanup for that VMA. The shared refcount is no longer
> > > needed and is removed.
> > >
> > > Also add arena_vm_may_split() returning -EINVAL to prevent VMA
> > > splitting, so that arena_vm_open() only needs to handle fork and the
> > > vma_list tracking stays simple.
> > >
> > > Fixes: b90d77e5fd78 ("bpf: Fix remap of arena.")
> > > Reported-by: Xiang Mei <xmei5@asu.edu>
> > > Signed-off-by: Weiming Shi <bestswngs@gmail.com>
> > > ---
> > > kernel/bpf/arena.c | 23 +++++++++++++++++------
> > > 1 file changed, 17 insertions(+), 6 deletions(-)
> > >
> > > diff --git a/kernel/bpf/arena.c b/kernel/bpf/arena.c
> > > index f355cf1c1a16..3462c4463617 100644
> > > --- a/kernel/bpf/arena.c
> > > +++ b/kernel/bpf/arena.c
> > > @@ -317,7 +317,6 @@ static u64 arena_map_mem_usage(const struct bpf_map *map)
> > > struct vma_list {
> > > struct vm_area_struct *vma;
> > > struct list_head head;
> > > - refcount_t mmap_count;
> > > };
> > >
> > > static int remember_vma(struct bpf_arena *arena, struct vm_area_struct *vma)
> > > @@ -327,7 +326,6 @@ static int remember_vma(struct bpf_arena *arena, struct vm_area_struct *vma)
> > > vml = kmalloc_obj(*vml);
> > > if (!vml)
> > > return -ENOMEM;
> > > - refcount_set(&vml->mmap_count, 1);
> > > vma->vm_private_data = vml;
> > > vml->vma = vma;
> > > list_add(&vml->head, &arena->vma_list);
> > > @@ -336,9 +334,17 @@ static int remember_vma(struct bpf_arena *arena, struct vm_area_struct *vma)
> > >
> > > static void arena_vm_open(struct vm_area_struct *vma)
> > > {
> > > - struct vma_list *vml = vma->vm_private_data;
> > > + struct bpf_map *map = vma->vm_file->private_data;
> > > + struct bpf_arena *arena = container_of(map, struct bpf_arena, map);
> > >
> > > - refcount_inc(&vml->mmap_count);
> > > + /*
> > > + * vm_private_data points to the parent's vma_list entry after fork.
> > > + * Clear it and register this VMA separately.
> > > + */
> > > + vma->vm_private_data = NULL;
> > > + guard(mutex)(&arena->lock);
> > > + /* OOM is silently ignored; arena_vm_close() handles NULL. */
> >
> > I don't see how this approach is going to work, and frankly it makes
> > no sense to me. This patch doesn't take into account how the vma_list
> > is actually used. Please think it through: if we could silently just
> > not allocate the vml, why would we need it in the first place?
>
> +1
>
> Weiming,
>
> you should stop trusting AI so blindly.
> First, analyze the root cause (the first paragraph of the commit log).
> Is this really the case?
>
> Second, I copy pasted it to claude and got the same "fix" back,
> but implemented without your bug:
> + vml = kmalloc_obj(*vml);
> + if (!vml) {
> + vma->vm_private_data = NULL;
> + return;
> + }
> + vml->vma = vma;
> + vma->vm_private_data = vml;
> + guard(mutex)(&arena->lock);
> + list_add(&vml->head, &arena->vma_list);
>
> at least this part kinda makes sense...
>
> and, of course, this part too:
>
> - if (!refcount_dec_and_test(&vml->mmap_count))
> + if (!vml)
> return;
>
> when you look at it you MUST ask AI back:
> "Is this buggy?"
>
> and it will reply:
> "
> Right — silently dropping the VMA from the list means zap_pages()
> won't unmap pages from it, which is a correctness problem, not just
> degraded behavior. Since vm_open can't fail, the allocation should use
> __GFP_NOFAIL. The struct is tiny so that's fine.
> "
>
> and it proceeded adding __GFP_NOFAIL.
>
> which is wrong too.
>
> So please don't just throw broken patches at maintainers.
> Do your homework. Fixing one maybe-bug and introducing
> more real bugs is not a step forward.
>
> pw-bot: cr
Thanks for the detailed review, really appreciate it.
I traced through it with GDB + KASAN in QEMU. Here's what happens:
1. mmap → remember_vma()
vml->vma = 0xffff88800abfe700, mmap_count = 1
now Parent VMA = 0xffff88800abfe700
2. fork → arena_vm_open(child_vma)
vml->vma = 0xffff88800abfe700 (unchanged), mmap_count = 2
3. parent munmap → arena_vm_close(parent_vma)
mmap_count = 1
vml->vma is now dangling
4. child bpf_arena_free_pages → zap_pages()
reads vml->vma = 0xffff88800abfe700 → UAF
The core issue is that arena_vm_open() never registers the child
VMA -- it only bumps mmap_count. So vml->vma always points at the
parent, and dangles once the parent unmaps.
What approach would you suggest for fixing this?
Thread overview: 8+ messages
2026-04-12 2:27 [PATCH bpf v4 0/2] bpf: Fix arena VMA use-after-free on fork Weiming Shi
2026-04-12 2:27 ` [PATCH bpf v4 1/2] bpf: Fix use-after-free of arena VMA " Weiming Shi
2026-04-12 17:50 ` Emil Tsalapatis
2026-04-12 21:30 ` Alexei Starovoitov
2026-04-13 10:12 ` Weiming Shi [this message]
2026-04-13 18:53 ` Alexei Starovoitov
2026-04-13 19:44 ` Alexei Starovoitov
2026-04-12 2:27 ` [PATCH bpf v4 2/2] selftests/bpf: Add test for arena VMA use-after-free " Weiming Shi