From: sashiko-bot@kernel.org
To: "Emil Tsalapatis" <emil@etsalapatis.com>
Cc: bpf@vger.kernel.org
Subject: Re: [PATCH bpf-next v8 6/8] selftests/bpf: Add buddy allocator for libarena
Date: Tue, 21 Apr 2026 21:42:20 +0000 [thread overview]
Message-ID: <20260421214220.DFA5CC2BCB0@smtp.kernel.org> (raw)
In-Reply-To: <20260421165037.4736-7-emil@etsalapatis.com>
Sashiko AI review found 5 potential issue(s):
- [High] Permanent physical memory leak in buddy_alloc_from_new_chunk() when allocator lock fails.
- [Medium] Missing NULL pointer checks after idx_to_header() can cause BPF program faults.
- [Medium] buddy_chunk_get() holds the global spinlock across expensive thread-local initialization, disabling preemption unnecessarily.
- [Medium] buddy_free_unlocked() lacks bounds checking, allowing arbitrary metadata corruption if given a bad pointer.
- [Medium] buddy_init() permanently leaks 1GiB of arena virtual address space on failure.
--
commit b2490ddf9769a03f5339f5d758f0fb5765e551bc
Author: Emil Tsalapatis <emil@etsalapatis.com>
selftests/bpf: Add buddy allocator for libarena
Add a byte-oriented buddy allocator for libarena. The buddy
allocator provides an alloc/free interface for small arena allocations
ranging from 16 bytes to 512 KiB. Smaller allocation requests are rounded
up to 16 bytes. The buddy allocator does not handle larger allocations;
those can instead use the existing bpf_arena_{alloc, free}_pages() kfuncs.
> diff --git a/tools/testing/selftests/bpf/libarena/src/buddy.bpf.c b/tools/testing/selftests/bpf/libarena/src/buddy.bpf.c
> index 0000000000000..489b7383a53a5
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/libarena/src/buddy.bpf.c
[ ... ]
> +static buddy_chunk_t *buddy_chunk_get(buddy_t *buddy)
> +{
[ ... ]
> + chunk = bpf_arena_alloc_pages(&arena, (void __arena *)vaddr,
> + BUDDY_CHUNK_PAGES, NUMA_NO_NODE, 0);
> + if (!chunk) {
> + arena_stderr("[ALLOC FAILED]");
> + return NULL;
> + }
> +
> + if (buddy_lock(buddy)) {
> + /*
> + * We cannot reclaim the vaddr space, but that is ok - this
> + * operation should always succeed. The error path is to catch
> + * accidental deadlocks that will cause -ENOMEMs to the program as
> + * the allocator fails to refill itself, in which case vaddr usage
> + * is the least of our worries.
> + */
> + bpf_arena_free_pages(&arena, (void __arena *)vaddr, BUDDY_CHUNK_PAGES);
> + return NULL;
> + }
> +
> + asan_poison(chunk, BUDDY_POISONED, BUDDY_CHUNK_PAGES * __PAGE_SIZE);
> +
> + /* Unpoison the chunk itself. */
> + asan_unpoison(chunk, sizeof(*chunk));
> +
> + /* Mark all freelists as empty. */
> + for (ord = zero; ord < BUDDY_CHUNK_NUM_ORDERS && can_loop; ord++)
> + chunk->freelists[ord] = BUDDY_CHUNK_ITEMS;
Is it necessary to hold the global spinlock across this chunk initialization?
At this stage, the newly allocated physical chunk has not yet been linked into
the globally visible buddy->first_chunk list, so it is strictly thread-local.
Holding the lock here unnecessarily disables preemption across expensive
operations (the 1MiB memory poisoning and the freelist-initialization loop)
and serializes unrelated memory allocations on other CPUs.
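One possible restructuring, sketched with the patch's own identifiers (the
list-linkage step elided here is assumed to follow under the lock):

```c
/* Initialize the still-private chunk before taking the global lock. */
asan_poison(chunk, BUDDY_POISONED, BUDDY_CHUNK_PAGES * __PAGE_SIZE);

/* Unpoison the chunk itself. */
asan_unpoison(chunk, sizeof(*chunk));

/* Mark all freelists as empty. */
for (ord = zero; ord < BUDDY_CHUNK_NUM_ORDERS && can_loop; ord++)
	chunk->freelists[ord] = BUDDY_CHUNK_ITEMS;

/* Only the linkage into buddy->first_chunk needs the lock. */
if (buddy_lock(buddy)) {
	bpf_arena_free_pages(&arena, (void __arena *)vaddr, BUDDY_CHUNK_PAGES);
	return NULL;
}
/* ... link chunk into buddy->first_chunk, then buddy_unlock() ... */
```

This keeps the critical section down to a couple of pointer updates while the
expensive poisoning and loop run preemptibly on the owning CPU.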
[ ... ]
> +__weak int buddy_init(buddy_t __arg_arena *buddy)
> +{
> + buddy_chunk_t *chunk;
> + int ret;
> +
> + if (!asan_ready())
> + return -EINVAL;
> +
> + /* Reserve enough address space to ensure allocations are aligned. */
> + ret = buddy_reserve_arena_vaddr(buddy);
> + if (ret)
> + return ret;
> +
> + _Static_assert(BUDDY_CHUNK_PAGES > 0,
> + "chunk must use one or more pages");
> +
> + chunk = buddy_chunk_get(buddy);
> +
> + if (buddy_lock(buddy)) {
> + bpf_arena_free_pages(&arena, chunk, BUDDY_CHUNK_PAGES);
> + return -EINVAL;
> + }
Does this leak the 1GiB of arena virtual address space if the initial chunk
allocation fails or if the lock cannot be acquired?
If buddy_chunk_get() returns NULL due to memory pressure, or if buddy_lock()
fails, the function returns an error without calling
buddy_unreserve_arena_vaddr().
If the calling program handles the error and retries initialization, this
could leak an additional 1GiB per attempt, rapidly exhausting the BPF
arena's maximum 4GiB virtual address space.
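A sketch of a tighter error path, assuming buddy_unreserve_arena_vaddr()
exists (or could be added) as the inverse of buddy_reserve_arena_vaddr():

```c
chunk = buddy_chunk_get(buddy);
if (!chunk) {
	/* Release the reserved vaddr range so a retry can succeed. */
	buddy_unreserve_arena_vaddr(buddy);
	return -ENOMEM;
}

if (buddy_lock(buddy)) {
	bpf_arena_free_pages(&arena, chunk, BUDDY_CHUNK_PAGES);
	buddy_unreserve_arena_vaddr(buddy);
	return -EINVAL;
}
```

Note this also adds the NULL check on buddy_chunk_get()'s return value that
the current code appears to be missing before the bpf_arena_free_pages() call.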
[ ... ]
> +__weak u64 buddy_chunk_alloc(buddy_chunk_t __arg_arena *chunk, int order_req)
> +{
[ ... ]
> + retidx = chunk->freelists[order];
> + header = idx_to_header(chunk, retidx);
> + if (unlikely(!header))
> + return (u64) NULL;
> +
> + chunk->freelists[order] = header->next_index;
> +
> + if (header->next_index != BUDDY_CHUNK_ITEMS) {
> + next_header = idx_to_header(chunk, header->next_index);
> + next_header->prev_index = BUDDY_CHUNK_ITEMS;
> + }
Could next_header be NULL here?
The idx_to_header() function is designed to return NULL if an index is
invalid or the block is improperly marked as allocated.
If the allocator state becomes corrupted and next_header is NULL, the
subsequent dereference would cause a NULL pointer dereference within the
arena, leading the BPF exception handler to terminate the program instead
of returning an error gracefully.
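A defensive check in the same style as the earlier idx_to_header() call
might look like this (a sketch, not a definitive fix):

```c
if (header->next_index != BUDDY_CHUNK_ITEMS) {
	next_header = idx_to_header(chunk, header->next_index);
	if (unlikely(!next_header))
		return (u64) NULL; /* corrupted freelist: fail, don't fault */
	next_header->prev_index = BUDDY_CHUNK_ITEMS;
}
```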
[ ... ]
> +static u64 buddy_alloc_from_new_chunk(buddy_t *buddy, buddy_chunk_t *chunk, int order)
> +{
> + u64 address;
> +
> + if (buddy_lock(buddy))
> + return (u64)NULL;
Does this leak the newly allocated physical chunk if the lock fails?
The chunk passed to this function was newly allocated via buddy_chunk_get().
If the spinlock fails, the function returns NULL and the chunk pointer is
dropped without being linked into buddy->first_chunk or freed via
bpf_arena_free_pages().
This results in a permanent 1MiB physical memory leak per lock
acquisition failure.
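A minimal sketch of an error path that releases the still-private chunk
before bailing out:

```c
if (buddy_lock(buddy)) {
	/* chunk is not yet visible to other CPUs; safe to free it here. */
	bpf_arena_free_pages(&arena, chunk, BUDDY_CHUNK_PAGES);
	return (u64)NULL;
}
```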
[ ... ]
> +static __always_inline int buddy_free_unlocked(buddy_t *buddy, u64 addr)
> +{
> + buddy_header_t *header, *buddy_header;
> + u64 idx, buddy_idx, tmp_idx;
> + buddy_chunk_t *chunk;
> + bool allocated;
> + u8 order;
> + int ret;
> +
> + if (!buddy)
> + return -EINVAL;
> +
> + if (addr & (BUDDY_MIN_ALLOC_BYTES - 1)) {
> + arena_stderr("Freeing unaligned address %llx\n", addr);
> + return -EINVAL;
> + }
> +
> + /* Get (chunk, idx) out of the address. */
> + chunk = (void __arena *)(addr & ~BUDDY_CHUNK_OFFSET_MASK);
> + idx = (addr & BUDDY_CHUNK_OFFSET_MASK) / BUDDY_MIN_ALLOC_BYTES;
> +
> + /* Mark the block as unallocated so we can access the header. */
> + ret = idx_set_allocated(chunk, idx, false);
Are we missing bounds checking on the block index here?
The first sizeof(buddy_chunk_t) bytes of each 1MiB chunk are reserved for
internal metadata and permanently marked as allocated.
If a BPF program accidentally frees a pointer pointing into the metadata,
it appears the allocator will successfully free its own internal tracking
structures since it blindly marks the block as free with
idx_set_allocated() without verifying that the index falls outside the
reserved metadata area.
This would silently corrupt the allocator's internal state, allowing
metadata blocks to be coalesced into the freelists and handed out to
future malloc() callers.
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260421165037.4736-1-emil@etsalapatis.com?part=6
2026-04-21 16:50 [PATCH bpf-next v8 0/8] Introduce arena library and runtime Emil Tsalapatis
2026-04-21 16:50 ` [PATCH bpf-next v8 1/8] selftests/bpf: Add ifdef guard for WRITE_ONCE macro in bpf_atomic.h Emil Tsalapatis
2026-04-21 16:50 ` [PATCH bpf-next v8 2/8] selftests/bpf: Add basic libarena scaffolding Emil Tsalapatis
2026-04-21 20:08 ` sashiko-bot
2026-04-21 16:50 ` [PATCH bpf-next v8 3/8] selftests/bpf: Move arena-related headers into libarena Emil Tsalapatis
2026-04-21 16:50 ` [PATCH bpf-next v8 4/8] selftests/bpf: Add arena ASAN runtime to libarena Emil Tsalapatis
2026-04-21 20:48 ` sashiko-bot
2026-04-21 16:50 ` [PATCH bpf-next v8 5/8] selftests/bpf: Add ASAN support for libarena selftests Emil Tsalapatis
2026-04-21 21:15 ` sashiko-bot
2026-04-21 16:50 ` [PATCH bpf-next v8 6/8] selftests/bpf: Add buddy allocator for libarena Emil Tsalapatis
2026-04-21 17:52 ` bot+bpf-ci
2026-04-21 17:56 ` Emil Tsalapatis
2026-04-21 21:42 ` sashiko-bot [this message]
2026-04-21 16:50 ` [PATCH bpf-next v8 7/8] selftests/bpf: Add selftests for libarena buddy allocator Emil Tsalapatis
2026-04-21 21:57 ` sashiko-bot
2026-04-21 16:50 ` [PATCH bpf-next v8 8/8] selftests/bpf: Reuse stderr parsing for libarena ASAN tests Emil Tsalapatis
2026-04-21 22:16 ` sashiko-bot