public inbox for bpf@vger.kernel.org
From: Matt Bobrowski <mattbobrowski@google.com>
To: Emil Tsalapatis <emil@etsalapatis.com>
Cc: Kumar Kartikeya Dwivedi <memxor@gmail.com>,
	bpf@vger.kernel.org, ast@kernel.org, andrii@kernel.org,
	daniel@iogearbox.net, eddyz87@gmail.com, song@kernel.org
Subject: Re: [PATCH bpf-next v8 6/8] selftests/bpf: Add buddy allocator for libarena
Date: Thu, 23 Apr 2026 20:24:43 +0000
Message-ID: <aeqACzTZKohGZnuo@google.com>
In-Reply-To: <DI0OO269VLZY.1M86OGAGEQTRU@etsalapatis.com>

On Thu, Apr 23, 2026 at 12:43:13PM -0400, Emil Tsalapatis wrote:
> On Thu Apr 23, 2026 at 10:00 AM EDT, Kumar Kartikeya Dwivedi wrote:
> > On Thu, 23 Apr 2026 at 10:44, Matt Bobrowski <mattbobrowski@google.com> wrote:
> >>
> >> On Tue, Apr 21, 2026 at 12:50:35PM -0400, Emil Tsalapatis wrote:
> >> > Add a byte-oriented buddy allocator for libarena. The buddy
> >> > allocator provides an alloc/free interface for small arena
> >> > allocations ranging from 16 bytes to 512 KiB. Smaller allocation
> >> > requests are rounded up to 16 bytes. The buddy allocator does not
> >> > handle larger allocations, which can instead use the existing
> >> > bpf_arena_{alloc, free}_pages() kfuncs.
> >> >
> >> > Signed-off-by: Emil Tsalapatis <emil@etsalapatis.com>
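
For anyone skimming the thread, the size-class behavior described above
amounts to rounding each request up to a size class between 16 bytes and
512 KiB. A minimal userspace sketch of that math, assuming classic
power-of-two buddy classes (an assumption on my part; the helper and
macro names below are hypothetical and not the patch's actual code):

#include <stddef.h>
#include <stdio.h>

#define BUDDY_MIN_ALLOC	16		/* smallest size class, bytes */
#define BUDDY_MAX_ALLOC	(512 * 1024)	/* largest size class, bytes */

/* Round a request up to its power-of-two size class. Returns 0 for
 * requests too large for the buddy allocator; those would instead go
 * through bpf_arena_alloc_pages(). */
static size_t buddy_round_up(size_t size)
{
	size_t class = BUDDY_MIN_ALLOC;

	if (size > BUDDY_MAX_ALLOC)
		return 0;
	while (class < size)
		class <<= 1;
	return class;
}

int main(void)
{
	printf("%zu\n", buddy_round_up(1));	/* 16 */
	printf("%zu\n", buddy_round_up(24));	/* 32 */
	printf("%zu\n", buddy_round_up(4096));	/* 4096 */
	return 0;
}
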
> >>
> >> The implementation of this BPF arena-backed buddy allocator looks
> >> rather solid. I noticed that a single global lock is used here to
> >> synchronize modifications to the metadata structures. Have you
> >> thought about how this might become a contention point, particularly
> >> on systems with very high CPU core counts? Did you consider using
> >> per-CPU caches for the most common/small allocation sizes, or
> >> something along those lines? This crossed my mind when I was
> >> initially thinking about implementing something like this a while
> >> back. I understand that this is an initial implementation, so it
> >> doesn't necessarily carry all the bells and whistles, but I was
> >> curious whether this had also crossed your mind and what your
> >> thoughts were on it.
> >
> > Yes, both reentrancy protection and per-CPU caches are things Emil
> > is planning to work on next.
> >
> >>
> >> [...]
> 
> Can confirm. This is basically the first version of the allocator, so
> that we have some way to manage arena memory without hacks. The first
> optimization to add is per-CPU caches to avoid contention. I've held
> off on adding them until now for two reasons:
> 
> 1) We don't really have arena workloads to test optimizations
> against. I am working on porting microbenchmarks from userspace
> allocators so we can concretely measure the effect of any features we
> add. I'm also working on adding data structures to libarena, and
> since they may do allocations/frees during regular operation, we can
> use them for benchmarking as well.

Both sound like good ideas to me. This point also raises something
else I was curious about: do you envision libarena being hosted
outside of the Linux kernel source tree at some point? TBH, I think
it'd be quite nice if it were. I can definitely see something like
libarena being pulled in as a submodule by numerous other BPF-enabled
projects.

Also, please CC me on any future revisions and changes to libarena.
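
To make the per-CPU cache idea above a bit more concrete, here is
roughly the shape I had in mind: each CPU keeps a small freelist per
size class and only falls back to the lock-protected buddy path on a
miss. This is just an illustrative userspace sketch; every name in it
(pcpu_cache, cached_alloc, cached_free, buddy_alloc_slow) is made up,
and none of it is arena- or BPF-aware:

#include <stdlib.h>

#define NR_CPUS		64	/* illustrative; not dynamically sized */
#define NR_CLASSES	16	/* 16 B .. 512 KiB in power-of-two steps */
#define CACHE_DEPTH	8	/* objects cached per CPU per class */

struct pcpu_cache {
	void *slot[NR_CLASSES][CACHE_DEPTH];
	int depth[NR_CLASSES];
};

static struct pcpu_cache caches[NR_CPUS];

/* Stand-in for the lock-protected global buddy allocator. */
static void *buddy_alloc_slow(size_t size)
{
	return malloc(size);
}

static void *cached_alloc(int cpu, int class, size_t size)
{
	struct pcpu_cache *c = &caches[cpu];

	if (c->depth[class] > 0)		/* fast path, no global lock */
		return c->slot[class][--c->depth[class]];
	return buddy_alloc_slow(size);		/* slow path, takes the lock */
}

static void cached_free(int cpu, int class, void *p)
{
	struct pcpu_cache *c = &caches[cpu];

	if (c->depth[class] < CACHE_DEPTH) {	/* fast path, cache locally */
		c->slot[class][c->depth[class]++] = p;
		return;
	}
	free(p);				/* stand-in for locked buddy free */
}

int main(void)
{
	void *p = cached_alloc(0, 0, 16);

	cached_free(0, 0, p);			/* lands in CPU 0's cache */
	p = cached_alloc(0, 0, 16);		/* served from the cache */
	cached_free(0, 0, p);
	return 0;
}

Refilling and flushing the caches in batches, rather than one object at
a time, would amortize the global lock further, broadly similar to how
SLUB uses per-CPU structures to keep most allocations off shared state.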


Thread overview: 23+ messages
2026-04-21 16:50 [PATCH bpf-next v8 0/8] Introduce arena library and runtime Emil Tsalapatis
2026-04-21 16:50 ` [PATCH bpf-next v8 1/8] selftests/bpf: Add ifdef guard for WRITE_ONCE macro in bpf_atomic.h Emil Tsalapatis
2026-04-23  8:27   ` Matt Bobrowski
2026-04-21 16:50 ` [PATCH bpf-next v8 2/8] selftests/bpf: Add basic libarena scaffolding Emil Tsalapatis
2026-04-21 20:08   ` sashiko-bot
2026-04-23  8:24   ` Matt Bobrowski
2026-04-21 16:50 ` [PATCH bpf-next v8 3/8] selftests/bpf: Move arena-related headers into libarena Emil Tsalapatis
2026-04-21 16:50 ` [PATCH bpf-next v8 4/8] selftests/bpf: Add arena ASAN runtime to libarena Emil Tsalapatis
2026-04-21 20:48   ` sashiko-bot
2026-04-21 16:50 ` [PATCH bpf-next v8 5/8] selftests/bpf: Add ASAN support for libarena selftests Emil Tsalapatis
2026-04-21 21:15   ` sashiko-bot
2026-04-21 16:50 ` [PATCH bpf-next v8 6/8] selftests/bpf: Add buddy allocator for libarena Emil Tsalapatis
2026-04-21 17:52   ` bot+bpf-ci
2026-04-21 17:56     ` Emil Tsalapatis
2026-04-21 21:42   ` sashiko-bot
2026-04-23  8:44   ` Matt Bobrowski
2026-04-23 14:00     ` Kumar Kartikeya Dwivedi
2026-04-23 16:43       ` Emil Tsalapatis
2026-04-23 20:24         ` Matt Bobrowski [this message]
2026-04-21 16:50 ` [PATCH bpf-next v8 7/8] selftests/bpf: Add selftests for libarena buddy allocator Emil Tsalapatis
2026-04-21 21:57   ` sashiko-bot
2026-04-21 16:50 ` [PATCH bpf-next v8 8/8] selftests/bpf: Reuse stderr parsing for libarena ASAN tests Emil Tsalapatis
2026-04-21 22:16   ` sashiko-bot
