BPF List
From: "Emil Tsalapatis" <emil@etsalapatis.com>
To: "Kumar Kartikeya Dwivedi" <memxor@gmail.com>,
	"Emil Tsalapatis" <emil@etsalapatis.com>,
	"Alexei Starovoitov" <alexei.starovoitov@gmail.com>
Cc: "Tejun Heo" <tj@kernel.org>,
	"Alexei Starovoitov" <ast@kernel.org>,
	"Eduard Zingerman" <eddyz87@gmail.com>,
	"Andrii Nakryiko" <andrii@kernel.org>,
	"David Vernet" <void@manifault.com>,
	"Andrea Righi" <arighi@nvidia.com>,
	"Changwoo Min" <changwoo@igalia.com>, "bpf" <bpf@vger.kernel.org>,
	<sched-ext@lists.linux.dev>,
	"LKML" <linux-kernel@vger.kernel.org>
Subject: Re: [RFC PATCH 2/9] bpf/arena: Add BPF_F_ARENA_MAP_ALWAYS for direct kernel access
Date: Tue, 12 May 2026 11:59:09 -0400	[thread overview]
Message-ID: <DIGTMO08GLD5.1TRLRCGA8OW2D@etsalapatis.com> (raw)
In-Reply-To: <DIGR8WQ9B6JQ.13C5FY863IJ3V@gmail.com>

On Tue May 12, 2026 at 10:07 AM EDT, Kumar Kartikeya Dwivedi wrote:
> On Tue May 12, 2026 at 2:29 PM CEST, Emil Tsalapatis wrote:
>> On Tue May 12, 2026 at 12:24 AM EDT, Alexei Starovoitov wrote:
>>> On Mon, May 11, 2026 at 8:49 PM Kumar Kartikeya Dwivedi
>>> <memxor@gmail.com> wrote:
>>>>
>>>> On Tue, 12 May 2026 at 05:25, Alexei Starovoitov
>>>> <alexei.starovoitov@gmail.com> wrote:
>>>> >
>>>> > On Mon May 11, 2026 at 7:43 PM PDT, Kumar Kartikeya Dwivedi wrote:
>>>> > >
>>>> > > If not, the best course to me seems to be to make the flag behavior
>>>> > > default, and just rely on ASan (and Rust in the future) to prevent any
>>>> > > memory safety issues, and drop the stream based feedback on fault,
>>>> > > etc.
>>>> >
>>>> > Agree that this needs to be new default without new uapi flags.
>>>> > How about we tweak the idea further.
>>>> > Let all arena pages be unmapped initially. bpf progs will fault
>>>> > on them and will be reported via bpf_streams.
>>>> > But we also prepare one "scratch page". Let's use this name,
>>>> > since "garbage page" reads too dirty.
>>>> > When kernel faults we populate pte with that scratch page
>>>> > and let the kernel code retry.
>>>> > To implement it the page_fault_oops() can have a callback
>>>> > into bpf/arena helper similar to kfence_handle_page_fault.
>>>> > If fault address is in arena, do kfence_unprotect()-like.
>>>>
>>>> Interesting idea. So I guess this page remains mapped once kernel
>>>> faults on it. I guess we can still reset it to NULL if we alloc and
>>>> free a page at the same address, so it's just a drop-in to prevent
>>>> further faults inside the kernel, since emulating instructions is ugly
>>>> and we're not using asm wrappers that have fixup labels etc. If we end
>>>> up allocating and freeing something at the same address it will likely
>>>> get reset to NULL (that would be ideal). But even if this happens in
>>>> parallel we may fault again and then will just fix up the NULL pte
>>>> with scratch page again. We can likely also preserve fault reporting
>>>> into streams when such scratch pages are brought in.
>>>
>>> Yep. All makes sense.
>>> The hope is that faults from kfuncs should be rare
>>> compared to faults from regular arena bugs.
>>> So the stuck scratch page shouldn't happen often and
>>> faults on unmapped will still be seen most of the time.
>>
>> This sounds great; it retains pretty much all of the arena behavior we
>> care about. The most important part is that it reliably reports the
>> first memory access error, which even now is the only one that is
>> meaningful. The delta from the current behavior is that subsequent
>> accesses are not caught, but we don't care about those because they are
>> very likely caused by reading zeros during the initial buggy access.
>>
>> Would the scratch page actually be mapped into the arena radix tree, or
>> just the PTE? Because if it isn't, then I think we don't even need to
>
> Just the PTE.
>
>> worry about resetting it from the arena side. Just allocating a page
>> there at a later time will overwrite the scratch page PTE with a new
>> valid page,
>
> Which is fine IMO, and how it should be. Alloc and free cycle sets it to NULL,
> so be it. Users can also do it in parallel, that case will just cause a fault in
> the kernel again and we'll reset the PTE to the scratch page again.

Yeah, this is why this solution does not interfere with any BPF arena code.
The allocator does not need to know about the scratch PTEs at all; it can
just allocate over them, which automatically makes the address valid again.

>
>> Until then, accesses hit the scratch page, but again we only care about
>> the first buggy access.
>
> Right.
>
>>
>> Small nit: Maybe default page instead of scratch page? Scratch page
>> sounds a bit like scratch space but we don't actually use the page to
>> store any data.
>
> It likely should also be zeroed out, to preserve the idea that reading
> 'faulting' regions returns zeroes. Let's just go with scratch page term.
>
> I think the main idea is we install a page fault handler after the KFENCE one,
> from the fault handler, use bpf_prog_find_from_stack() to obtain the first
> program in the stack trace, which will be the one originating the fault inside
> the kernel. Then make sure the faulting address lies in the prog->aux->arena,
> (likely including guard pages in its range), and just install the PTE for the
> zeroed out scratch page at that point and continue.
>
> I thought about various races, to me it seems it should be ok. If parallel
> installation wins over us, it either installed a valid page replacing scratch
> PTE, at which point we just let the kernel retry, or installed a scratch page.
> If it races and replaces existing scratch or valid page with NULL after we
> checked, we fault again and retry. In any case, either the kernel continues or
> it ends up faulting again, at which point we can handle the fault again and
> attempt to fix it up.
>
> We likely need to make sure the existing entry is pte_none(): only install if
> pte_none(), otherwise leave things as is. If racy attempts unmap and set a
> scratch or valid page to none, we will fault again and reinstall. If racy
> attempts install a scratch page or valid page, we let it be. More importantly,
> we shouldn't install the scratch page over a valid page, I think.
>
> Our PTE installation likely takes the form try_cmpxchg(pte, NULL, scratch_page).
>
> One corner case is that we may have cached scratch page TLB translations for a
> range we are trying to alloc pages over. Typically the way to eliminate stale
> TLBs would be to just do flush_tlb_kernel_range(). In this case I wonder whether
> we just skip it to avoid the cost and let the stale TLB stay, since it likely
> came due to program passing faultable memory into kernel.
>
> That said, a cheaper fix would be to install PTEs under the lock not with
> WRITE_ONCE() but xchg() so that we can inspect if we overwrote an entry that
> had scratch page and only do the extra TLB flush in that case. I would be fine
> with either option (leaving it as is, or the above), as long as we document it
> somewhere (either in the commit log or a comment in the code), just so we don't
> forget.
>

Let's skip the flush. When we hit races like that during the kfunc, we should
care more about completing the call than about its result, since the program
is already buggy.

> The main question is, what are the next steps? Do you want to take a stab at
> implementing this?

Can do, I will send a patch.



Thread overview: 21+ messages
2026-04-27 10:51 [RFC PATCH 0/9] bpf/arena: Direct kernel-side access Tejun Heo
2026-04-27 10:51 ` [RFC PATCH 1/9] bpf/arena: Plumb struct bpf_arena * through PTE callbacks Tejun Heo
2026-04-27 10:51 ` [RFC PATCH 2/9] bpf/arena: Add BPF_F_ARENA_MAP_ALWAYS for direct kernel access Tejun Heo
2026-05-12  0:31   ` Kumar Kartikeya Dwivedi
2026-05-12  2:05     ` Emil Tsalapatis
2026-05-12  2:43       ` Kumar Kartikeya Dwivedi
2026-05-12  3:25         ` Alexei Starovoitov
2026-05-12  3:48           ` Kumar Kartikeya Dwivedi
2026-05-12  4:24             ` Alexei Starovoitov
2026-05-12 12:29               ` Emil Tsalapatis
2026-05-12 14:07                 ` Kumar Kartikeya Dwivedi
2026-05-12 15:59                   ` Emil Tsalapatis [this message]
2026-05-12  3:42         ` Emil Tsalapatis
2026-04-27 10:51 ` [RFC PATCH 3/9] bpf: Add sleepable variant of bpf_arena_alloc_pages for kernel callers Tejun Heo
2026-04-27 10:51 ` [RFC PATCH 4/9] bpf: Add bpf_struct_ops_for_each_prog() Tejun Heo
2026-04-27 10:51 ` [RFC PATCH 5/9] bpf: Add bpf_prog_for_each_used_map() Tejun Heo
2026-05-11 21:44   ` Kumar Kartikeya Dwivedi
2026-04-27 10:51 ` [RFC PATCH 6/9] bpf/arena: Add bpf_arena_map_kern_vm_start() Tejun Heo
2026-04-27 10:51 ` [RFC PATCH 7/9] sched_ext: Require MAP_ALWAYS arena for cid-form schedulers Tejun Heo
2026-04-27 10:51 ` [RFC PATCH 8/9] sched_ext: Sub-allocator over kernel-claimed BPF arena pages Tejun Heo
2026-04-27 10:51 ` [RFC PATCH 9/9] sched_ext: Convert ops.set_cmask() to arena-resident cmask Tejun Heo
