public inbox for bpf@vger.kernel.org
* [PATCH bpf-next v9 0/8] Introduce arena library and runtime
@ 2026-04-26 19:03 Emil Tsalapatis
  2026-04-26 19:03 ` [PATCH bpf-next v9 1/8] selftests/bpf: Add ifdef guard for WRITE_ONCE macro in bpf_atomic.h Emil Tsalapatis
                   ` (8 more replies)
  0 siblings, 9 replies; 19+ messages in thread
From: Emil Tsalapatis @ 2026-04-26 19:03 UTC (permalink / raw)
  To: bpf
  Cc: ast, andrii, memxor, daniel, eddyz87, song, mattbobrowski,
	Emil Tsalapatis

Add a new subdirectory to tools/testing/selftests/bpf called libarena,
along with programs useful for writing arena-based BPF code. This
patchset adds the following:

1) libarena, a subdirectory where arena BPF code that is generally useful
to BPF arena programs can be easily added and tested.

2) An ASAN runtime for BPF arena programs. BPF arenas allow accessing
memory after it has been freed or out of bounds, making bugs harder to
triage compared to regular BPF. Use LLVM's recently added support for
address-space-based sanitization to selectively sanitize only the
arena accesses.

3) A buddy memory allocator that can be reused by BPF programs to handle
memory allocation/deletion. The allocator uses the ASAN runtime to add
address sanitization if requested.

The patch includes testing for the new allocators and ASAN features that
can be built from the top directory using "make libarena_test" and
"make libarena_test_asan". The generated binaries reside in libarena/.
The patch also adds test-progs-based selftests to the codebase for the
libarena code, so the new tests are run by ./test_progs.

The patchset has the following structure:

1-3: Create basic libarena scaffolding and refactor existing headers.

4-5: Add the ASAN runtime and associated scaffolding.

6-8: Add the new buddy memory allocator along with selftests.

Signed-off-by: Emil Tsalapatis <emil@etsalapatis.com>

HISTORY
=======

v8->v9 (https://lore.kernel.org/bpf/20260421165037.4736-1-emil@etsalapatis.com)
- Added Matt's Acked-by for Patch 1
- Replaced open coded fls with __builtin based calculation (Matt)
- Add comment explaining the reasoning behind the zero variable (Matt)
- Remove the self-contained runner and present selftests as examples (Kumar).
  The reasoning is that shipping selftests with the library is not
  effective: the library will be synced out of the tree infrequently,
  so the selftests would not catch issues before they land.
- Adjust syscall API for clarity and forward compatibility (Matt)
- Reset asan_validate state after failed check (Sashiko)
- Fix printf format specifier (Sashiko)
- Rename syscall to be generic arena_info (Matt)
- Namespace libarena headers by moving them to their own directory
- Rename selftest_helpers to libarena/userspace.h to reflect it is not
  just for selftests.

v7->v8 (https://lore.kernel.org/bpf/20260412174546.18684-1-emil@etsalapatis.com)
- Duplicate READ_ONCE/WRITE_ONCE instead of moving it to
  bpf_experimental.h to keep libarena self-contained (Kumar)
- Add libarena_asan test to test_progs and conditionally compile it if
  supported (Kumar)
- Add stderr parsing for buddy tests when run under test_progs (Kumar)
- Move all arena-related headers into libarena and add its include/
  subdirectory in the standard include path (Kumar)
- Remove silent-by-default ASAN, add help message on test_libarena
  explaining that -v emits the messages (Kumar)
- Add run_prog_args as a libarena helper
- Add explanation on the use of __weak for the spinlock qnodes

v6->v7 (https://lore.kernel.org/bpf/20260412011857.3387-1-emil@etsalapatis.com)
- Modify patch 1 to allow operations between PTR_TO_ARENA src_reg
and dst_reg of any type. Adjust selftests accordingly (Alexei)
- Remove unnecessary include in patch 5 (Song)
- Removed unused definitions/assignments in patches 8/9, update patch
  descriptions

v5->v6 (https://lore.kernel.org/bpf/20260410163041.8063-1-emil@etsalapatis.com)
- Fix subreg_def management for SCALAR += PTR_TO_ARENA operations (AI)
- Add more selftests for the SCALAR += PTR_TO_ARENA patch (Sashiko)
- Adjust fls() operation to be in line with the kernel version (Sashiko)
- Address Sashiko's selftests and debugging nits
- Add ASAN loadN and storeN _noabort variants and associated BTF anchor
- Remove unnecessary bit freeing of buddies during block splitting

v4->v5 (https://lore.kernel.org/bpf/20260407045730.13359-1-emil@etsalapatis.com)
Omitting various nits and fixups.
- Properly adjust subreg_def for scalar += ptr_to_arena calls (Sashiko)
- Remove extraneous definition from prog_tests/arena_spin_lock.c (Song)
- Trim extraneous comments from ASAN and buddy (Alexei)
- Remove asan_dummy call and replace with function pointer array (Alexei)
- Remove usersapi.h header and merge it into common.h (Alexei)
- Replace ASAN macros with function calls (Alexei)
- Embed buddy lock into the struct and move the buddy allocator to __arena_global
  (Alexei)
- Add commenting for buddy allocator constants (Alexei)
- Add default buddy allocator directly in common.bpf.c, so that the user does
  not need to define it.
- Expand test harnesses to dynamically find individual selftests. Now the
  selftests also report each test individually (e.g., 5 entries for the
  buddy allocator instead of 1). This brings them on par with the rest of
  the test_progs.

v3->v4 (https://lore.kernel.org/bpf/20260403042720.18862-1-emil@etsalapatis.com)
- Add Acks by Song to patches 1-4.
- Expand the verifier's handling of scalar/arena operations to
  include all 3-operand operations in Patch 1 (Alexei)
- Add additional tests for arena/arena (allowed) and arena/pointer (not allowed)
operations in Patch 2
- Remove ASAN version of the library from default compilation since it requires
LLVM 22 and up (CI)
- Rework buddy allocator locking for clarity and add comments
- Fix From: email to be consistent with SOB
- Address (most of) Sashiko's comments

v2->v3 (https://lore.kernel.org/bpf/20260127181610.86376-1-emil@etsalapatis.com)
Nonexhaustive due to significant patch rework.
- Do not duplicate WRITE_ONCE macro (Mykyta, Kumar)
- Add SPDX headers (Alexei)
- Remove bump/stack allocators (Alexei)
- Integrate testing with test_progs (Kumar)
- Add short description of ASAN algorithm at the top of the file (Alexei)

v1->v2 (https://lore.kernel.org/bpf/20260122160131.2238331-1-etsal@meta.com/)

- Added missing format string argument (AI)
- Fix outdated selftests prog name check (AI)
- Fixed stack allocation check for segment creation (AI)
- Fix errors in non-ASAN bump allocator selftests (AI)
- Propagate error value from individual selftests in selftest.c
- Removed embedded metadata from bump allocator as it was needlessly
  complicating its behavior



Emil Tsalapatis (8):
  selftests/bpf: Add ifdef guard for WRITE_ONCE macro in bpf_atomic.h
  selftests/bpf: Add basic libarena scaffolding
  selftests/bpf: Move arena-related headers into libarena
  selftests/bpf: Add arena ASAN runtime to libarena
  selftests/bpf: Add ASAN support for libarena selftests
  selftests/bpf: Add buddy allocator for libarena
  selftests/bpf: Add selftests for libarena buddy allocator
  selftests/bpf: Reuse stderr parsing for libarena ASAN tests

 tools/testing/selftests/bpf/Makefile          |  53 +-
 tools/testing/selftests/bpf/bpf_arena_alloc.h |   2 +-
 tools/testing/selftests/bpf/bpf_arena_list.h  |   2 +-
 .../selftests/bpf/bpf_arena_strsearch.h       |   2 +-
 .../testing/selftests/bpf/bpf_experimental.h  |  84 +-
 tools/testing/selftests/bpf/default.profraw   | Bin 0 -> 160 bytes
 tools/testing/selftests/bpf/libarena/Makefile |  94 ++
 .../{ => libarena/include}/bpf_arena_common.h |   0
 .../include}/bpf_arena_spin_lock.h            |  11 +-
 .../bpf/{ => libarena/include}/bpf_atomic.h   |   4 +-
 .../bpf/libarena/include/bpf_may_goto.h       |  84 ++
 .../bpf/libarena/include/libarena/asan.h      | 103 ++
 .../bpf/libarena/include/libarena/buddy.h     |  92 ++
 .../bpf/libarena/include/libarena/common.h    |  94 ++
 .../bpf/libarena/include/libarena/userspace.h | 132 +++
 .../libarena/selftests/st_asan_buddy.bpf.c    | 258 +++++
 .../bpf/libarena/selftests/st_asan_common.h   |  52 +
 .../bpf/libarena/selftests/st_buddy.bpf.c     | 209 ++++
 .../libarena/selftests/test_progs_compat.h    |  15 +
 .../selftests/bpf/libarena/src/asan.bpf.c     | 553 +++++++++++
 .../selftests/bpf/libarena/src/buddy.bpf.c    | 903 ++++++++++++++++++
 .../selftests/bpf/libarena/src/common.bpf.c   |  52 +
 .../bpf/prog_tests/arena_spin_lock.c          |   7 -
 .../selftests/bpf/prog_tests/libarena.c       |  66 ++
 .../selftests/bpf/prog_tests/libarena_asan.c  |  93 ++
 .../selftests/bpf/progs/arena_atomics.c       |   2 +-
 .../selftests/bpf/progs/arena_spin_lock.c     |   2 +-
 .../bpf/progs/compute_live_registers.c        |   2 +-
 .../selftests/bpf/progs/lpm_trie_bench.c      |   2 +-
 tools/testing/selftests/bpf/progs/stream.c    |   2 +-
 .../selftests/bpf/progs/verifier_arena.c      |   2 +-
 .../bpf/progs/verifier_arena_globals1.c       |   2 +-
 .../bpf/progs/verifier_arena_globals2.c       |   2 +-
 .../bpf/progs/verifier_arena_large.c          |   2 +-
 .../selftests/bpf/progs/verifier_ldsx.c       |   2 +-
 tools/testing/selftests/bpf/test_loader.c     |  51 +-
 tools/testing/selftests/bpf/test_progs.h      |   2 +
 37 files changed, 2917 insertions(+), 121 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/default.profraw
 create mode 100644 tools/testing/selftests/bpf/libarena/Makefile
 rename tools/testing/selftests/bpf/{ => libarena/include}/bpf_arena_common.h (100%)
 rename tools/testing/selftests/bpf/{progs => libarena/include}/bpf_arena_spin_lock.h (98%)
 rename tools/testing/selftests/bpf/{ => libarena/include}/bpf_atomic.h (98%)
 create mode 100644 tools/testing/selftests/bpf/libarena/include/bpf_may_goto.h
 create mode 100644 tools/testing/selftests/bpf/libarena/include/libarena/asan.h
 create mode 100644 tools/testing/selftests/bpf/libarena/include/libarena/buddy.h
 create mode 100644 tools/testing/selftests/bpf/libarena/include/libarena/common.h
 create mode 100644 tools/testing/selftests/bpf/libarena/include/libarena/userspace.h
 create mode 100644 tools/testing/selftests/bpf/libarena/selftests/st_asan_buddy.bpf.c
 create mode 100644 tools/testing/selftests/bpf/libarena/selftests/st_asan_common.h
 create mode 100644 tools/testing/selftests/bpf/libarena/selftests/st_buddy.bpf.c
 create mode 100644 tools/testing/selftests/bpf/libarena/selftests/test_progs_compat.h
 create mode 100644 tools/testing/selftests/bpf/libarena/src/asan.bpf.c
 create mode 100644 tools/testing/selftests/bpf/libarena/src/buddy.bpf.c
 create mode 100644 tools/testing/selftests/bpf/libarena/src/common.bpf.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/libarena.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/libarena_asan.c

-- 
2.53.0



* [PATCH bpf-next v9 1/8] selftests/bpf: Add ifdef guard for WRITE_ONCE macro in bpf_atomic.h
  2026-04-26 19:03 [PATCH bpf-next v9 0/8] Introduce arena library and runtime Emil Tsalapatis
@ 2026-04-26 19:03 ` Emil Tsalapatis
  2026-04-26 19:03 ` [PATCH bpf-next v9 2/8] selftests/bpf: Add basic libarena scaffolding Emil Tsalapatis
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 19+ messages in thread
From: Emil Tsalapatis @ 2026-04-26 19:03 UTC (permalink / raw)
  To: bpf
  Cc: ast, andrii, memxor, daniel, eddyz87, song, mattbobrowski,
	Emil Tsalapatis

The WRITE_ONCE macro is identically defined both in bpf_atomic.h
and in bpf_arena_common.h. However, the bpf_atomic.h definition has no
ifdef guard. If bpf_atomic.h is included after bpf_arena_common.h,
compilation fails because of the duplicate definition.

Guard the definition in bpf_atomic.h with an ifdef to let programs
include the two headers in any order. Duplicating the definition is
the simplest solution out of all the alternatives:

- Keeping one of the two existing definitions is not possible because
both BPF atomics and arena programs need the macro, and the two features
are independent. Using one should not require the header for the other.

- Factoring out the definition into a new header that only includes it
is more churn than just duplicating it.

- Factoring out the definition into bpf_experimental.h requires all
users of WRITE_ONCE to include the header. However, the arena library
introduced in subsequent commits must be self-contained, while
bpf_experimental.h is in the base selftests/bpf directory.

Both headers are moved to the arena library in a subsequent patch.

Signed-off-by: Emil Tsalapatis <emil@etsalapatis.com>
Reviewed-by: Matt Bobrowski <mattbobrowski@google.com>
---
 tools/testing/selftests/bpf/bpf_atomic.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/tools/testing/selftests/bpf/bpf_atomic.h b/tools/testing/selftests/bpf/bpf_atomic.h
index c550e5711967..d89a22d63c1c 100644
--- a/tools/testing/selftests/bpf/bpf_atomic.h
+++ b/tools/testing/selftests/bpf/bpf_atomic.h
@@ -42,7 +42,9 @@ extern bool CONFIG_X86_64 __kconfig __weak;
 
 #define READ_ONCE(x) (*(volatile typeof(x) *)&(x))
 
+#ifndef WRITE_ONCE
 #define WRITE_ONCE(x, val) ((*(volatile typeof(x) *)&(x)) = (val))
+#endif
 
 #define cmpxchg(p, old, new) __sync_val_compare_and_swap((p), old, new)
 
-- 
2.53.0



* [PATCH bpf-next v9 2/8] selftests/bpf: Add basic libarena scaffolding
  2026-04-26 19:03 [PATCH bpf-next v9 0/8] Introduce arena library and runtime Emil Tsalapatis
  2026-04-26 19:03 ` [PATCH bpf-next v9 1/8] selftests/bpf: Add ifdef guard for WRITE_ONCE macro in bpf_atomic.h Emil Tsalapatis
@ 2026-04-26 19:03 ` Emil Tsalapatis
  2026-04-26 19:34   ` sashiko-bot
  2026-04-26 19:03 ` [PATCH bpf-next v9 3/8] selftests/bpf: Move arena-related headers into libarena Emil Tsalapatis
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 19+ messages in thread
From: Emil Tsalapatis @ 2026-04-26 19:03 UTC (permalink / raw)
  To: bpf
  Cc: ast, andrii, memxor, daniel, eddyz87, song, mattbobrowski,
	Emil Tsalapatis

Add initial code and a Makefile for an arena-based BPF library. Modules
can be added just by including the source file in the library's src/
subdirectory. Future commits will introduce the library code itself.

The code includes workarounds that ensure bisectability; they are
removed in subsequent patches.

Signed-off-by: Emil Tsalapatis <emil@etsalapatis.com>
---
 tools/testing/selftests/bpf/Makefile          | 27 +++++
 tools/testing/selftests/bpf/libarena/Makefile | 69 +++++++++++++
 .../bpf/libarena/include/libarena/common.h    | 79 +++++++++++++++
 .../bpf/libarena/include/libarena/userspace.h | 99 +++++++++++++++++++
 .../selftests/bpf/libarena/src/common.bpf.c   | 29 ++++++
 5 files changed, 303 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/libarena/Makefile
 create mode 100644 tools/testing/selftests/bpf/libarena/include/libarena/common.h
 create mode 100644 tools/testing/selftests/bpf/libarena/include/libarena/userspace.h
 create mode 100644 tools/testing/selftests/bpf/libarena/src/common.bpf.c

diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 6ef6872adbc3..5855064e7f9c 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -153,6 +153,7 @@ override define CLEAN
 	$(Q)$(RM) -r $(TEST_KMODS)
 	$(Q)$(RM) -r $(EXTRA_CLEAN)
 	$(Q)$(MAKE) -C test_kmods clean
+	$(Q)$(MAKE) -C libarena clean
 	$(Q)$(MAKE) docs-clean
 endef
 
@@ -525,6 +526,7 @@ LINKED_BPF_OBJS := $(foreach skel,$(LINKED_SKELS),$($(skel)-deps))
 LINKED_BPF_SRCS := $(patsubst %.bpf.o,%.c,$(LINKED_BPF_OBJS))
 
 HEADERS_FOR_BPF_OBJS := $(wildcard $(BPFDIR)/*.bpf.h)		\
+			$(wildcard $(CURDIR)/libarena/include/*.[ch])	\
 			$(addprefix $(BPFDIR)/,	bpf_core_read.h	\
 			                        bpf_endian.h	\
 						bpf_helpers.h	\
@@ -740,6 +742,29 @@ $(VERIFY_SIG_HDR): $(VERIFICATION_CERT)
 	 echo "};"; \
 	 echo "unsigned int test_progs_verification_cert_len = $$(wc -c < $<);") > $@
 
+LIBARENA_MAKE_ARGS = \
+		BPFTOOL="$(BPFTOOL)" \
+		INCLUDE_DIR="$(INCLUDE_DIR)" \
+		LIBBPF_INCLUDE="$(HOST_INCLUDE_DIR)" \
+		BPFOBJ="$(BPFOBJ)" \
+		LDLIBS="$(LDLIBS) -lzstd" \
+		CLANG="$(CLANG)" \
+		BPF_CFLAGS="$(BPF_CFLAGS) $(CLANG_CFLAGS)" \
+		BPF_TARGET_ENDIAN="$(BPF_TARGET_ENDIAN)" \
+		Q="$(Q)"
+
+LIBARENA_BPF_DEPS := $(wildcard libarena/Makefile		\
+				 libarena/include/*		\
+				 libarena/include/libarena/*	\
+				 libarena/src/*			\
+				 libarena/selftests/*		\
+				 libarena/*.bpf.o)
+
+LIBARENA_SKEL := libarena/libarena.skel.h
+
+$(LIBARENA_SKEL): $(INCLUDE_DIR)/vmlinux.h $(BPFOBJ) $(LIBARENA_BPF_DEPS)
+	+$(MAKE) -C libarena libarena.skel.h $(LIBARENA_MAKE_ARGS)
+
 # Define test_progs test runner.
 TRUNNER_TESTS_DIR := prog_tests
 TRUNNER_BPF_PROGS_DIR := progs
@@ -933,3 +958,5 @@ override define INSTALL_RULE
 		rsync -a $(OUTPUT)/$$DIR/*.bpf.o $(INSTALL_PATH)/$$DIR;\
 	done
 endef
+
+libarena: $(LIBARENA_SKEL)
diff --git a/tools/testing/selftests/bpf/libarena/Makefile b/tools/testing/selftests/bpf/libarena/Makefile
new file mode 100644
index 000000000000..e85b3ad96890
--- /dev/null
+++ b/tools/testing/selftests/bpf/libarena/Makefile
@@ -0,0 +1,69 @@
+# SPDX-License-Identifier: LGPL-2.1 OR BSD-2-Clause
+# Copyright (c) 2026 Meta Platforms, Inc. and affiliates.
+
+.PHONY: clean
+
+# Defaults for standalone builds
+
+CLANG ?= clang
+BPFTOOL ?= bpftool
+LDLIBS ?= -lbpf -lelf -lz -lrt -lpthread -lzstd
+
+ifeq ($(V),1)
+Q =
+msg =
+else
+Q ?= @
+msg = @printf '  %-8s%s %s%s\n' "$(1)" "$(if $(2), [$(2)])" "$(notdir $(3))" "$(if $(4), $(4))";
+endif
+
+IS_LITTLE_ENDIAN = $(shell $(CC) -dM -E - </dev/null | \
+			grep 'define __BYTE_ORDER__ __ORDER_LITTLE_ENDIAN__')
+BPF_TARGET_ENDIAN ?= $(if $(IS_LITTLE_ENDIAN),--target=bpfel,--target=bpfeb)
+
+LIBARENA=$(abspath .)
+BPFDIR=$(abspath $(LIBARENA)/..)
+
+INCLUDE_DIR ?= $(BPFDIR)/tools/include
+LIBBPF_INCLUDE ?= $(INCLUDE_DIR)
+
+# Scan src/ and selftests/ to generate the final binaries
+LIBARENA_SOURCES = $(wildcard $(LIBARENA)/src/*.bpf.c) $(wildcard $(LIBARENA)/selftests/*.bpf.c)
+LIBARENA_OBJECTS = $(notdir $(LIBARENA_SOURCES:.bpf.c=.bpf.o))
+
+INCLUDES = -I$(LIBARENA)/include -I$(BPFDIR)
+ifneq ($(INCLUDE_DIR),)
+INCLUDES += -I$(INCLUDE_DIR)
+endif
+ifneq ($(LIBBPF_INCLUDE),)
+INCLUDES += -I$(LIBBPF_INCLUDE)
+endif
+
+# ENABLE_ATOMICS_TESTS required because we use arena spinlocks
+override BPF_CFLAGS += -DENABLE_ATOMICS_TESTS
+override BPF_CFLAGS += -O2 -g
+override BPF_CFLAGS += -Wno-incompatible-pointer-types-discards-qualifiers
+# Required for suppressing harmless vmlinux.h-related warnings.
+override BPF_CFLAGS += -Wno-missing-declarations
+override BPF_CFLAGS += $(INCLUDES)
+
+CFLAGS = -O2 -no-pie
+CFLAGS += $(INCLUDES)
+
+vpath %.bpf.c $(LIBARENA)/src $(LIBARENA)/selftests
+vpath %.c $(LIBARENA)/src $(LIBARENA)/selftests
+
+libarena.skel.h: libarena.bpf.o
+	$(call msg,GEN-SKEL,libarena,$@)
+	$(Q)$(BPFTOOL) gen skeleton $< name "libarena" > $@
+
+libarena.bpf.o: $(LIBARENA_OBJECTS)
+	$(call msg,GEN-OBJ,libarena,$@)
+	$(Q)$(BPFTOOL) gen object $@ $^
+
+%.bpf.o: %.bpf.c
+	$(call msg,CLNG-BPF,libarena,$@)
+	$(Q)$(CLANG) $(BPF_CFLAGS) $(BPF_TARGET_ENDIAN) -c $< -o $@
+
+clean:
+	$(Q)rm -f *.skel.h *.bpf.o
diff --git a/tools/testing/selftests/bpf/libarena/include/libarena/common.h b/tools/testing/selftests/bpf/libarena/include/libarena/common.h
new file mode 100644
index 000000000000..92b67b20ed15
--- /dev/null
+++ b/tools/testing/selftests/bpf/libarena/include/libarena/common.h
@@ -0,0 +1,79 @@
+// SPDX-License-Identifier: LGPL-2.1 OR BSD-2-Clause
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+#pragma once
+
+#ifdef __BPF__
+
+#include <vmlinux.h>
+
+#include "../../bpf_arena_common.h"
+#include "../../progs/bpf_arena_spin_lock.h"
+
+#include <asm-generic/errno.h>
+
+#ifndef __BPF_FEATURE_ADDR_SPACE_CAST
+#error "Arena allocators require bpf_addr_space_cast feature"
+#endif
+
+#define arena_stdout(fmt, ...) bpf_stream_printk(1, (fmt), ##__VA_ARGS__)
+#define arena_stderr(fmt, ...) bpf_stream_printk(2, (fmt), ##__VA_ARGS__)
+
+#ifndef __maybe_unused
+#define __maybe_unused __attribute__((__unused__))
+#endif
+
+#define private(name) SEC(".data." #name) __hidden __attribute__((aligned(8)))
+
+#define ARENA_PAGES (1UL << (32 - __builtin_ffs(__PAGE_SIZE) + 1))
+
+struct {
+	__uint(type, BPF_MAP_TYPE_ARENA);
+	__uint(map_flags, BPF_F_MMAPABLE);
+	__uint(max_entries, ARENA_PAGES); /* number of pages */
+#if defined(__TARGET_ARCH_arm64) || defined(__aarch64__)
+	__ulong(map_extra, (1ull << 32)); /* start of mmap() region */
+#else
+	__ulong(map_extra, (1ull << 44)); /* start of mmap() region */
+#endif
+} arena __weak SEC(".maps");
+
+/*
+ * This is a variable used to aid verification. The may_goto directive
+ * permits open-coded for loops, but requires that the index variable is
+ * imprecise. To force the variable to be imprecise, initialize it with
+ * the opaque volatile variable 0 instead of the constant 0.
+ */
+extern const volatile u32 zero;
+
+int arena_fls(__u64 word);
+
+#else /* ! __BPF__ */
+
+#include <stdint.h>
+
+#define __arena
+
+typedef uint8_t u8;
+typedef uint16_t u16;
+typedef uint32_t u32;
+typedef uint64_t u64;
+typedef int8_t s8;
+typedef int16_t s16;
+typedef int32_t s32;
+typedef int64_t s64;
+
+/* Dummy "definition" for userspace. */
+#define arena_spinlock_t int
+
+#endif /* __BPF__ */
+
+struct arena_get_info_args {
+	void __arena *arena_base;
+};
+
+struct arena_alloc_reserve_args {
+	u64 nr_pages;
+};
+
+/* Reasonable default number of pages reserved by arena_alloc_reserve. */
+#define ARENA_RESERVE_PAGES_DFL (8)
diff --git a/tools/testing/selftests/bpf/libarena/include/libarena/userspace.h b/tools/testing/selftests/bpf/libarena/include/libarena/userspace.h
new file mode 100644
index 000000000000..0438a751d5fd
--- /dev/null
+++ b/tools/testing/selftests/bpf/libarena/include/libarena/userspace.h
@@ -0,0 +1,99 @@
+// SPDX-License-Identifier: LGPL-2.1 OR BSD-2-Clause
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+#pragma once
+
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <sys/mman.h>
+
+#include <bpf/libbpf.h>
+#include <bpf/bpf.h>
+
+static inline int libarena_run_prog(int prog_fd)
+{
+	LIBBPF_OPTS(bpf_test_run_opts, opts);
+	int ret;
+
+	ret = bpf_prog_test_run_opts(prog_fd, &opts);
+	if (ret)
+		return ret;
+
+	return opts.retval;
+}
+
+static inline bool libarena_is_test_prog(const char *name)
+{
+	return strstr(name, "test_") == name;
+}
+
+static inline int libarena_run_prog_args(int prog_fd, void *args, size_t argsize)
+{
+	LIBBPF_OPTS(bpf_test_run_opts, opts);
+	int ret;
+
+	opts.ctx_in = args;
+	opts.ctx_size_in = argsize;
+
+	ret = bpf_prog_test_run_opts(prog_fd, &opts);
+
+	return ret ?: opts.retval;
+}
+
+static inline int libarena_get_arena_base(int arena_get_info_fd,
+					  void **arena_base)
+{
+	LIBBPF_OPTS(bpf_test_run_opts, opts);
+	struct arena_get_info_args args = { .arena_base = NULL };
+	int ret;
+
+	opts.ctx_in = &args;
+	opts.ctx_size_in = sizeof(args);
+
+	ret = bpf_prog_test_run_opts(arena_get_info_fd, &opts);
+	if (ret)
+		return ret;
+	if (opts.retval)
+		return opts.retval;
+
+	*arena_base = args.arena_base;
+	return 0;
+}
+
+static inline int libarena_get_globals_pages(int arena_get_globals_fd,
+					     size_t arena_all_pages,
+					     u64 *globals_pages)
+{
+	size_t pgsize = sysconf(_SC_PAGESIZE);
+	void *arena_base;
+	ssize_t i;
+	u8 *vec;
+	int ret;
+
+	ret = libarena_get_arena_base(arena_get_globals_fd, &arena_base);
+	if (ret)
+		return ret;
+
+	if (!arena_base)
+		return -EINVAL;
+
+	vec = calloc(arena_all_pages, sizeof(*vec));
+	if (!vec)
+		return -ENOMEM;
+
+	if (mincore(arena_base, arena_all_pages * pgsize, vec) < 0) {
+		ret = -errno;
+		free(vec);
+		return ret;
+	}
+
+	*globals_pages = 0;
+	for (i = arena_all_pages - 1; i >= 0; i--) {
+		if (!(vec[i] & 0x1))
+			break;
+		*globals_pages += 1;
+	}
+
+	free(vec);
+	return 0;
+}
diff --git a/tools/testing/selftests/bpf/libarena/src/common.bpf.c b/tools/testing/selftests/bpf/libarena/src/common.bpf.c
new file mode 100644
index 000000000000..659ccead5624
--- /dev/null
+++ b/tools/testing/selftests/bpf/libarena/src/common.bpf.c
@@ -0,0 +1,29 @@
+// SPDX-License-Identifier: LGPL-2.1 OR BSD-2-Clause
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+#include <libarena/common.h>
+
+const volatile u32 zero = 0;
+
+int arena_fls(__u64 word)
+{
+	if (!word)
+		return 0;
+
+	return 64 - __builtin_clzll(word);
+}
+
+SEC("syscall")
+__weak int arena_get_info(struct arena_get_info_args *args)
+{
+	args->arena_base = arena_base(&arena);
+
+	return 0;
+}
+
+SEC("syscall")
+__weak int arena_alloc_reserve(struct arena_alloc_reserve_args *args)
+{
+	return bpf_arena_reserve_pages(&arena, NULL, args->nr_pages);
+}
+
+char _license[] SEC("license") = "GPL";
-- 
2.53.0



* [PATCH bpf-next v9 3/8] selftests/bpf: Move arena-related headers into libarena
  2026-04-26 19:03 [PATCH bpf-next v9 0/8] Introduce arena library and runtime Emil Tsalapatis
  2026-04-26 19:03 ` [PATCH bpf-next v9 1/8] selftests/bpf: Add ifdef guard for WRITE_ONCE macro in bpf_atomic.h Emil Tsalapatis
  2026-04-26 19:03 ` [PATCH bpf-next v9 2/8] selftests/bpf: Add basic libarena scaffolding Emil Tsalapatis
@ 2026-04-26 19:03 ` Emil Tsalapatis
  2026-04-26 19:03 ` [PATCH bpf-next v9 4/8] selftests/bpf: Add arena ASAN runtime to libarena Emil Tsalapatis
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 19+ messages in thread
From: Emil Tsalapatis @ 2026-04-26 19:03 UTC (permalink / raw)
  To: bpf
  Cc: ast, andrii, memxor, daniel, eddyz87, song, mattbobrowski,
	Emil Tsalapatis

The BPF selftest headers include functionality that is
specific to arenas and is required by libarena. Keep libarena
self-contained by moving all of that functionality into its
include/ directory. Also add libarena/include to the standard
include paths for the selftests so that existing selftests can
easily access the moved headers.

Some functionality is required by libarena but not strictly
arena-related. Move it to the libarena/include path as well,
which is an upgrade over accessing the headers from the
selftests/bpf directory through relative paths.

A new bpf_may_goto.h file is split off of bpf_experimental.h.
bpf_arena_spin_lock.h and bpf_arena_common.h are moved to
libarena/include. bpf_atomic.h is also moved to libarena
because it is necessary for arena spinlocks.

For bpf_arena_spin_lock.h, mark the spinlock state array as __weak
so it can be defined in the header while remaining compatible with
multi-compilation-unit programs. While we're at it, remove
unnecessary definitions from existing test programs.

Signed-off-by: Emil Tsalapatis <emil@etsalapatis.com>
---
 tools/testing/selftests/bpf/Makefile          |  4 +-
 tools/testing/selftests/bpf/bpf_arena_alloc.h |  2 +-
 tools/testing/selftests/bpf/bpf_arena_list.h  |  2 +-
 .../selftests/bpf/bpf_arena_strsearch.h       |  2 +-
 .../testing/selftests/bpf/bpf_experimental.h  | 84 +------------------
 .../{ => libarena/include}/bpf_arena_common.h |  0
 .../include}/bpf_arena_spin_lock.h            | 11 ++-
 .../bpf/{ => libarena/include}/bpf_atomic.h   |  2 +-
 .../bpf/libarena/include/bpf_may_goto.h       | 84 +++++++++++++++++++
 .../bpf/libarena/include/libarena/common.h    |  4 +-
 .../bpf/prog_tests/arena_spin_lock.c          |  7 --
 .../selftests/bpf/progs/arena_atomics.c       |  2 +-
 .../selftests/bpf/progs/arena_spin_lock.c     |  2 +-
 .../bpf/progs/compute_live_registers.c        |  2 +-
 .../selftests/bpf/progs/lpm_trie_bench.c      |  2 +-
 tools/testing/selftests/bpf/progs/stream.c    |  2 +-
 .../selftests/bpf/progs/verifier_arena.c      |  2 +-
 .../bpf/progs/verifier_arena_globals1.c       |  2 +-
 .../bpf/progs/verifier_arena_globals2.c       |  2 +-
 .../bpf/progs/verifier_arena_large.c          |  2 +-
 .../selftests/bpf/progs/verifier_ldsx.c       |  2 +-
 21 files changed, 112 insertions(+), 110 deletions(-)
 rename tools/testing/selftests/bpf/{ => libarena/include}/bpf_arena_common.h (100%)
 rename tools/testing/selftests/bpf/{progs => libarena/include}/bpf_arena_spin_lock.h (98%)
 rename tools/testing/selftests/bpf/{ => libarena/include}/bpf_atomic.h (99%)
 create mode 100644 tools/testing/selftests/bpf/libarena/include/bpf_may_goto.h

diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 5855064e7f9c..13959c449893 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -56,7 +56,8 @@ CFLAGS += -g $(OPT_FLAGS) -rdynamic -std=gnu11				\
 	  -Wno-unused-but-set-variable					\
 	  $(GENFLAGS) $(SAN_CFLAGS) $(LIBELF_CFLAGS)			\
 	  -I$(CURDIR) -I$(INCLUDE_DIR) -I$(GENDIR) -I$(LIBDIR)		\
-	  -I$(TOOLSINCDIR) -I$(TOOLSARCHINCDIR) -I$(APIDIR) -I$(OUTPUT)
+	  -I$(TOOLSINCDIR) -I$(TOOLSARCHINCDIR) -I$(APIDIR) -I$(OUTPUT)	\
+	  -I$(CURDIR)/libarena/include
 LDFLAGS += $(SAN_LDFLAGS)
 LDLIBS += $(LIBELF_LIBS) -lz -lrt -lpthread
 
@@ -447,6 +448,7 @@ endif
 CLANG_SYS_INCLUDES = $(call get_sys_includes,$(CLANG),$(CLANG_TARGET_ARCH))
 BPF_CFLAGS = -g -Wall -Werror -D__TARGET_ARCH_$(SRCARCH) $(MENDIAN)	\
 	     -I$(INCLUDE_DIR) -I$(CURDIR) -I$(APIDIR)			\
+	     -I$(CURDIR)/libarena/include				\
 	     -I$(abspath $(OUTPUT)/../usr/include)			\
 	     -std=gnu11		 					\
 	     -fno-strict-aliasing 					\
diff --git a/tools/testing/selftests/bpf/bpf_arena_alloc.h b/tools/testing/selftests/bpf/bpf_arena_alloc.h
index c27678299e0c..cda147fd9d25 100644
--- a/tools/testing/selftests/bpf/bpf_arena_alloc.h
+++ b/tools/testing/selftests/bpf/bpf_arena_alloc.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) */
 /* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
 #pragma once
-#include "bpf_arena_common.h"
+#include <bpf_arena_common.h>
 
 #ifndef __round_mask
 #define __round_mask(x, y) ((__typeof__(x))((y)-1))
diff --git a/tools/testing/selftests/bpf/bpf_arena_list.h b/tools/testing/selftests/bpf/bpf_arena_list.h
index e16fa7d95fcf..1af2ffc27d9c 100644
--- a/tools/testing/selftests/bpf/bpf_arena_list.h
+++ b/tools/testing/selftests/bpf/bpf_arena_list.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) */
 /* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
 #pragma once
-#include "bpf_arena_common.h"
+#include <bpf_arena_common.h>
 
 struct arena_list_node;
 
diff --git a/tools/testing/selftests/bpf/bpf_arena_strsearch.h b/tools/testing/selftests/bpf/bpf_arena_strsearch.h
index c1b6eaa905bb..f0d575daef5a 100644
--- a/tools/testing/selftests/bpf/bpf_arena_strsearch.h
+++ b/tools/testing/selftests/bpf/bpf_arena_strsearch.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) */
 /* Copyright (c) 2025 Meta Platforms, Inc. and affiliates. */
 #pragma once
-#include "bpf_arena_common.h"
+#include <bpf_arena_common.h>
 
 __noinline int bpf_arena_strlen(const char __arena *s __arg_arena)
 {
diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h
index 2234bd6bc9d3..d1db355e872b 100644
--- a/tools/testing/selftests/bpf/bpf_experimental.h
+++ b/tools/testing/selftests/bpf/bpf_experimental.h
@@ -5,6 +5,7 @@
 #include <bpf/bpf_tracing.h>
 #include <bpf/bpf_helpers.h>
 #include <bpf/bpf_core_read.h>
+#include <bpf_may_goto.h>
 
 #define __contains(name, node) __attribute__((btf_decl_tag("contains:" #name ":" #node)))
 
@@ -204,89 +205,6 @@ l_true:												\
        })
 #endif
 
-/*
- * Note that cond_break can only be portably used in the body of a breakable
- * construct, whereas can_loop can be used anywhere.
- */
-#ifdef __BPF_FEATURE_MAY_GOTO
-#define can_loop					\
-	({ __label__ l_break, l_continue;		\
-	bool ret = true;				\
-	asm volatile goto("may_goto %l[l_break]"	\
-		      :::: l_break);			\
-	goto l_continue;				\
-	l_break: ret = false;				\
-	l_continue:;					\
-	ret;						\
-	})
-
-#define __cond_break(expr)				\
-	({ __label__ l_break, l_continue;		\
-	asm volatile goto("may_goto %l[l_break]"	\
-		      :::: l_break);			\
-	goto l_continue;				\
-	l_break: expr;					\
-	l_continue:;					\
-	})
-#else
-#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
-#define can_loop					\
-	({ __label__ l_break, l_continue;		\
-	bool ret = true;				\
-	asm volatile goto("1:.byte 0xe5;		\
-		      .byte 0;				\
-		      .long ((%l[l_break] - 1b - 8) / 8) & 0xffff;	\
-		      .short 0"				\
-		      :::: l_break);			\
-	goto l_continue;				\
-	l_break: ret = false;				\
-	l_continue:;					\
-	ret;						\
-	})
-
-#define __cond_break(expr)				\
-	({ __label__ l_break, l_continue;		\
-	asm volatile goto("1:.byte 0xe5;		\
-		      .byte 0;				\
-		      .long ((%l[l_break] - 1b - 8) / 8) & 0xffff;	\
-		      .short 0"				\
-		      :::: l_break);			\
-	goto l_continue;				\
-	l_break: expr;					\
-	l_continue:;					\
-	})
-#else
-#define can_loop					\
-	({ __label__ l_break, l_continue;		\
-	bool ret = true;				\
-	asm volatile goto("1:.byte 0xe5;		\
-		      .byte 0;				\
-		      .long (((%l[l_break] - 1b - 8) / 8) & 0xffff) << 16;	\
-		      .short 0"				\
-		      :::: l_break);			\
-	goto l_continue;				\
-	l_break: ret = false;				\
-	l_continue:;					\
-	ret;						\
-	})
-
-#define __cond_break(expr)				\
-	({ __label__ l_break, l_continue;		\
-	asm volatile goto("1:.byte 0xe5;		\
-		      .byte 0;				\
-		      .long (((%l[l_break] - 1b - 8) / 8) & 0xffff) << 16;	\
-		      .short 0"				\
-		      :::: l_break);			\
-	goto l_continue;				\
-	l_break: expr;					\
-	l_continue:;					\
-	})
-#endif
-#endif
-
-#define cond_break __cond_break(break)
-#define cond_break_label(label) __cond_break(goto label)
-
 #ifndef bpf_nop_mov
 #define bpf_nop_mov(var) \
 	asm volatile("%[reg]=%[reg]"::[reg]"r"((short)var))
diff --git a/tools/testing/selftests/bpf/bpf_arena_common.h b/tools/testing/selftests/bpf/libarena/include/bpf_arena_common.h
similarity index 100%
rename from tools/testing/selftests/bpf/bpf_arena_common.h
rename to tools/testing/selftests/bpf/libarena/include/bpf_arena_common.h
diff --git a/tools/testing/selftests/bpf/progs/bpf_arena_spin_lock.h b/tools/testing/selftests/bpf/libarena/include/bpf_arena_spin_lock.h
similarity index 98%
rename from tools/testing/selftests/bpf/progs/bpf_arena_spin_lock.h
rename to tools/testing/selftests/bpf/libarena/include/bpf_arena_spin_lock.h
index f90531cf3ee5..164638690a4d 100644
--- a/tools/testing/selftests/bpf/progs/bpf_arena_spin_lock.h
+++ b/tools/testing/selftests/bpf/libarena/include/bpf_arena_spin_lock.h
@@ -5,7 +5,7 @@
 
 #include <vmlinux.h>
 #include <bpf/bpf_helpers.h>
-#include "bpf_atomic.h"
+#include <bpf_atomic.h>
 
 #define arch_mcs_spin_lock_contended_label(l, label) smp_cond_load_acquire_label(l, VAL, label)
 #define arch_mcs_spin_unlock_contended(l) smp_store_release((l), 1)
@@ -107,7 +107,12 @@ struct arena_qnode {
 #define _Q_LOCKED_VAL		(1U << _Q_LOCKED_OFFSET)
 #define _Q_PENDING_VAL		(1U << _Q_PENDING_OFFSET)
 
-struct arena_qnode __arena qnodes[_Q_MAX_CPUS][_Q_MAX_NODES];
+/*
+ * The qnodes are marked __weak so we can define them in the header
+ * while still ensuring all compilation units use the same struct
+ * instance.
+ */
+struct arena_qnode __weak __arena __hidden qnodes[_Q_MAX_CPUS][_Q_MAX_NODES];
 
 static inline u32 encode_tail(int cpu, int idx)
 {
@@ -240,7 +245,7 @@ static __always_inline int arena_spin_trylock(arena_spinlock_t __arena *lock)
 	return likely(atomic_try_cmpxchg_acquire(&lock->val, &val, _Q_LOCKED_VAL));
 }
 
-__noinline
+__noinline __weak
 int arena_spin_lock_slowpath(arena_spinlock_t __arena __arg_arena *lock, u32 val)
 {
 	struct arena_mcs_spinlock __arena *prev, *next, *node0, *node;
diff --git a/tools/testing/selftests/bpf/bpf_atomic.h b/tools/testing/selftests/bpf/libarena/include/bpf_atomic.h
similarity index 99%
rename from tools/testing/selftests/bpf/bpf_atomic.h
rename to tools/testing/selftests/bpf/libarena/include/bpf_atomic.h
index d89a22d63c1c..b7b230431929 100644
--- a/tools/testing/selftests/bpf/bpf_atomic.h
+++ b/tools/testing/selftests/bpf/libarena/include/bpf_atomic.h
@@ -5,7 +5,7 @@
 
 #include <vmlinux.h>
 #include <bpf/bpf_helpers.h>
-#include "bpf_experimental.h"
+#include <bpf_may_goto.h>
 
 extern bool CONFIG_X86_64 __kconfig __weak;
 
diff --git a/tools/testing/selftests/bpf/libarena/include/bpf_may_goto.h b/tools/testing/selftests/bpf/libarena/include/bpf_may_goto.h
new file mode 100644
index 000000000000..9ba90689d6ba
--- /dev/null
+++ b/tools/testing/selftests/bpf/libarena/include/bpf_may_goto.h
@@ -0,0 +1,84 @@
+#pragma once
+
+/*
+ * Note that cond_break can only be portably used in the body of a breakable
+ * construct, whereas can_loop can be used anywhere.
+ */
+#ifdef __BPF_FEATURE_MAY_GOTO
+#define can_loop					\
+	({ __label__ l_break, l_continue;		\
+	bool ret = true;				\
+	asm volatile goto("may_goto %l[l_break]"	\
+		      :::: l_break);			\
+	goto l_continue;				\
+	l_break: ret = false;				\
+	l_continue:;					\
+	ret;						\
+	})
+
+#define __cond_break(expr)				\
+	({ __label__ l_break, l_continue;		\
+	asm volatile goto("may_goto %l[l_break]"	\
+		      :::: l_break);			\
+	goto l_continue;				\
+	l_break: expr;					\
+	l_continue:;					\
+	})
+#else
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+#define can_loop					\
+	({ __label__ l_break, l_continue;		\
+	bool ret = true;				\
+	asm volatile goto("1:.byte 0xe5;		\
+		      .byte 0;				\
+		      .long ((%l[l_break] - 1b - 8) / 8) & 0xffff;	\
+		      .short 0"				\
+		      :::: l_break);			\
+	goto l_continue;				\
+	l_break: ret = false;				\
+	l_continue:;					\
+	ret;						\
+	})
+
+#define __cond_break(expr)				\
+	({ __label__ l_break, l_continue;		\
+	asm volatile goto("1:.byte 0xe5;		\
+		      .byte 0;				\
+		      .long ((%l[l_break] - 1b - 8) / 8) & 0xffff;	\
+		      .short 0"				\
+		      :::: l_break);			\
+	goto l_continue;				\
+	l_break: expr;					\
+	l_continue:;					\
+	})
+#else
+#define can_loop					\
+	({ __label__ l_break, l_continue;		\
+	bool ret = true;				\
+	asm volatile goto("1:.byte 0xe5;		\
+		      .byte 0;				\
+		      .long (((%l[l_break] - 1b - 8) / 8) & 0xffff) << 16;	\
+		      .short 0"				\
+		      :::: l_break);			\
+	goto l_continue;				\
+	l_break: ret = false;				\
+	l_continue:;					\
+	ret;						\
+	})
+
+#define __cond_break(expr)				\
+	({ __label__ l_break, l_continue;		\
+	asm volatile goto("1:.byte 0xe5;		\
+		      .byte 0;				\
+		      .long (((%l[l_break] - 1b - 8) / 8) & 0xffff) << 16;	\
+		      .short 0"				\
+		      :::: l_break);			\
+	goto l_continue;				\
+	l_break: expr;					\
+	l_continue:;					\
+	})
+#endif
+#endif
+
+#define cond_break __cond_break(break)
+#define cond_break_label(label) __cond_break(goto label)
diff --git a/tools/testing/selftests/bpf/libarena/include/libarena/common.h b/tools/testing/selftests/bpf/libarena/include/libarena/common.h
index 92b67b20ed15..d088f3e75798 100644
--- a/tools/testing/selftests/bpf/libarena/include/libarena/common.h
+++ b/tools/testing/selftests/bpf/libarena/include/libarena/common.h
@@ -6,8 +6,8 @@
 
 #include <vmlinux.h>
 
-#include "../../bpf_arena_common.h"
-#include "../../progs/bpf_arena_spin_lock.h"
+#include <bpf_arena_common.h>
+#include <bpf_arena_spin_lock.h>
 
 #include <asm-generic/errno.h>
 
diff --git a/tools/testing/selftests/bpf/prog_tests/arena_spin_lock.c b/tools/testing/selftests/bpf/prog_tests/arena_spin_lock.c
index 693fd86fbde6..acb9d53b5973 100644
--- a/tools/testing/selftests/bpf/prog_tests/arena_spin_lock.c
+++ b/tools/testing/selftests/bpf/prog_tests/arena_spin_lock.c
@@ -5,13 +5,6 @@
 #include <sys/sysinfo.h>
 
 struct __qspinlock { int val; };
-typedef struct __qspinlock arena_spinlock_t;
-
-struct arena_qnode {
-	unsigned long next;
-	int count;
-	int locked;
-};
 
 #include "arena_spin_lock.skel.h"
 
diff --git a/tools/testing/selftests/bpf/progs/arena_atomics.c b/tools/testing/selftests/bpf/progs/arena_atomics.c
index d1841aac94a2..2e7751a85399 100644
--- a/tools/testing/selftests/bpf/progs/arena_atomics.c
+++ b/tools/testing/selftests/bpf/progs/arena_atomics.c
@@ -5,7 +5,7 @@
 #include <bpf/bpf_tracing.h>
 #include <stdbool.h>
 #include <stdatomic.h>
-#include "bpf_arena_common.h"
+#include <bpf_arena_common.h>
 #include "../../../include/linux/filter.h"
 #include "bpf_misc.h"
 
diff --git a/tools/testing/selftests/bpf/progs/arena_spin_lock.c b/tools/testing/selftests/bpf/progs/arena_spin_lock.c
index 086b57a426cf..7236d92d382f 100644
--- a/tools/testing/selftests/bpf/progs/arena_spin_lock.c
+++ b/tools/testing/selftests/bpf/progs/arena_spin_lock.c
@@ -4,7 +4,7 @@
 #include <bpf/bpf_tracing.h>
 #include <bpf/bpf_helpers.h>
 #include "bpf_misc.h"
-#include "bpf_arena_spin_lock.h"
+#include <bpf_arena_spin_lock.h>
 
 struct {
 	__uint(type, BPF_MAP_TYPE_ARENA);
diff --git a/tools/testing/selftests/bpf/progs/compute_live_registers.c b/tools/testing/selftests/bpf/progs/compute_live_registers.c
index f05e120f3450..d055fc7b3b95 100644
--- a/tools/testing/selftests/bpf/progs/compute_live_registers.c
+++ b/tools/testing/selftests/bpf/progs/compute_live_registers.c
@@ -3,7 +3,7 @@
 #include <linux/bpf.h>
 #include <bpf/bpf_helpers.h>
 #include "../../../include/linux/filter.h"
-#include "bpf_arena_common.h"
+#include <bpf_arena_common.h>
 #include "bpf_misc.h"
 
 struct {
diff --git a/tools/testing/selftests/bpf/progs/lpm_trie_bench.c b/tools/testing/selftests/bpf/progs/lpm_trie_bench.c
index a0e6ebd5507a..2831cf4445e8 100644
--- a/tools/testing/selftests/bpf/progs/lpm_trie_bench.c
+++ b/tools/testing/selftests/bpf/progs/lpm_trie_bench.c
@@ -7,7 +7,7 @@
 #include <bpf/bpf_helpers.h>
 #include <bpf/bpf_core_read.h>
 #include "bpf_misc.h"
-#include "bpf_atomic.h"
+#include <bpf_atomic.h>
 #include "progs/lpm_trie.h"
 
 #define BPF_OBJ_NAME_LEN 16U
diff --git a/tools/testing/selftests/bpf/progs/stream.c b/tools/testing/selftests/bpf/progs/stream.c
index 6f999ba951a3..92ba1d72e0ec 100644
--- a/tools/testing/selftests/bpf/progs/stream.c
+++ b/tools/testing/selftests/bpf/progs/stream.c
@@ -5,7 +5,7 @@
 #include <bpf/bpf_helpers.h>
 #include "bpf_misc.h"
 #include "bpf_experimental.h"
-#include "bpf_arena_common.h"
+#include <bpf_arena_common.h>
 
 struct arr_elem {
 	struct bpf_res_spin_lock lock;
diff --git a/tools/testing/selftests/bpf/progs/verifier_arena.c b/tools/testing/selftests/bpf/progs/verifier_arena.c
index 62e282f4448a..89d72c8d756a 100644
--- a/tools/testing/selftests/bpf/progs/verifier_arena.c
+++ b/tools/testing/selftests/bpf/progs/verifier_arena.c
@@ -8,7 +8,7 @@
 #include <bpf/bpf_tracing.h>
 #include "bpf_misc.h"
 #include "bpf_experimental.h"
-#include "bpf_arena_common.h"
+#include <bpf_arena_common.h>
 
 #define private(name) SEC(".bss." #name) __hidden __attribute__((aligned(8)))
 
diff --git a/tools/testing/selftests/bpf/progs/verifier_arena_globals1.c b/tools/testing/selftests/bpf/progs/verifier_arena_globals1.c
index 83182ddbfb95..45d364b0bc85 100644
--- a/tools/testing/selftests/bpf/progs/verifier_arena_globals1.c
+++ b/tools/testing/selftests/bpf/progs/verifier_arena_globals1.c
@@ -6,7 +6,7 @@
 #include <bpf/bpf_helpers.h>
 #include <bpf/bpf_tracing.h>
 #include "bpf_experimental.h"
-#include "bpf_arena_common.h"
+#include <bpf_arena_common.h>
 #include "bpf_misc.h"
 
 #define ARENA_PAGES (1UL<< (32 - __builtin_ffs(__PAGE_SIZE) + 1))
diff --git a/tools/testing/selftests/bpf/progs/verifier_arena_globals2.c b/tools/testing/selftests/bpf/progs/verifier_arena_globals2.c
index e6bd7b61f9f1..b51594dbc005 100644
--- a/tools/testing/selftests/bpf/progs/verifier_arena_globals2.c
+++ b/tools/testing/selftests/bpf/progs/verifier_arena_globals2.c
@@ -7,7 +7,7 @@
 #include <bpf/bpf_tracing.h>
 #include "bpf_misc.h"
 #include "bpf_experimental.h"
-#include "bpf_arena_common.h"
+#include <bpf_arena_common.h>
 
 #define ARENA_PAGES (32)
 
diff --git a/tools/testing/selftests/bpf/progs/verifier_arena_large.c b/tools/testing/selftests/bpf/progs/verifier_arena_large.c
index 5f7e7afee169..6ab8730d4878 100644
--- a/tools/testing/selftests/bpf/progs/verifier_arena_large.c
+++ b/tools/testing/selftests/bpf/progs/verifier_arena_large.c
@@ -7,7 +7,7 @@
 #include <bpf/bpf_tracing.h>
 #include "bpf_misc.h"
 #include "bpf_experimental.h"
-#include "bpf_arena_common.h"
+#include <bpf_arena_common.h>
 
 #define ARENA_SIZE (1ull << 32)
 
diff --git a/tools/testing/selftests/bpf/progs/verifier_ldsx.c b/tools/testing/selftests/bpf/progs/verifier_ldsx.c
index c8494b682c31..1026524a1983 100644
--- a/tools/testing/selftests/bpf/progs/verifier_ldsx.c
+++ b/tools/testing/selftests/bpf/progs/verifier_ldsx.c
@@ -3,7 +3,7 @@
 #include <linux/bpf.h>
 #include <bpf/bpf_helpers.h>
 #include "bpf_misc.h"
-#include "bpf_arena_common.h"
+#include <bpf_arena_common.h>
 
 #if (defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86) || \
 	(defined(__TARGET_ARCH_riscv) && __riscv_xlen == 64) || \
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH bpf-next v9 4/8] selftests/bpf: Add arena ASAN runtime to libarena
  2026-04-26 19:03 [PATCH bpf-next v9 0/8] Introduce arena library and runtime Emil Tsalapatis
                   ` (2 preceding siblings ...)
  2026-04-26 19:03 ` [PATCH bpf-next v9 3/8] selftests/bpf: Move arena-related headers into libarena Emil Tsalapatis
@ 2026-04-26 19:03 ` Emil Tsalapatis
  2026-04-26 20:12   ` sashiko-bot
  2026-04-26 19:03 ` [PATCH bpf-next v9 5/8] selftests/bpf: Add ASAN support for libarena selftests Emil Tsalapatis
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 19+ messages in thread
From: Emil Tsalapatis @ 2026-04-26 19:03 UTC (permalink / raw)
  To: bpf
  Cc: ast, andrii, memxor, daniel, eddyz87, song, mattbobrowski,
	Emil Tsalapatis

Add an address sanitizer (ASAN) runtime to the arena library. The
ASAN runtime implements the functions injected into BPF binaries
by LLVM sanitization when ASAN is enabled during compilation.

The runtime also includes functions called explicitly by memory
allocation code to mark memory as poisoned/unpoisoned to ASAN.
This code is a no-op when sanitization is turned off.

Signed-off-by: Emil Tsalapatis <emil@etsalapatis.com>
---
 .../bpf/libarena/include/libarena/asan.h      | 103 ++++
 .../bpf/libarena/include/libarena/common.h    |   1 +
 .../selftests/bpf/libarena/src/asan.bpf.c     | 553 ++++++++++++++++++
 3 files changed, 657 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/libarena/include/libarena/asan.h
 create mode 100644 tools/testing/selftests/bpf/libarena/src/asan.bpf.c

diff --git a/tools/testing/selftests/bpf/libarena/include/libarena/asan.h b/tools/testing/selftests/bpf/libarena/include/libarena/asan.h
new file mode 100644
index 000000000000..eb9fc69d9eb0
--- /dev/null
+++ b/tools/testing/selftests/bpf/libarena/include/libarena/asan.h
@@ -0,0 +1,103 @@
+// SPDX-License-Identifier: LGPL-2.1 OR BSD-2-Clause
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+#pragma once
+
+struct asan_init_args {
+	u64 arena_all_pages;
+	u64 arena_globals_pages;
+};
+
+int asan_init(struct asan_init_args *args);
+
+extern volatile u64 __asan_shadow_memory_dynamic_address;
+extern volatile u32 asan_reported;
+extern volatile bool asan_inited;
+extern volatile bool asan_report_once;
+
+#ifdef __BPF__
+
+#define ASAN_SHADOW_SHIFT 3
+#define ASAN_SHADOW_SCALE (1ULL << ASAN_SHADOW_SHIFT)
+#define ASAN_GRANULE_MASK ((1ULL << ASAN_SHADOW_SHIFT) - 1)
+#define ASAN_GRANULE(addr) ((s8)((u32)(u64)((addr)) & ASAN_GRANULE_MASK))
+
+#define __noasan __attribute__((no_sanitize("address")))
+
+#ifdef BPF_ARENA_ASAN
+
+typedef s8 __arena s8a;
+
+static inline
+s8a *mem_to_shadow(void __arena __arg_arena *addr)
+{
+	return (s8a *)(((u32)(u64)addr >> ASAN_SHADOW_SHIFT) +
+			__asan_shadow_memory_dynamic_address);
+}
+
+__weak __noasan
+bool asan_ready(void)
+{
+	return __asan_shadow_memory_dynamic_address;
+}
+
+int asan_poison(void __arena *addr, s8 val, size_t size);
+int asan_unpoison(void __arena *addr, size_t size);
+bool asan_shadow_set(void __arena *addr);
+
+/*
+ * Dummy calls to ensure the ASAN runtime's BTF information is present
+ * in every object file when compiling the runtime and local BPF code
+ * separately. The runtime calls are injected into the LLVM IR file.
+ */
+#define DECLARE_ASAN_LOAD_STORE_SIZE(size)				\
+	void __asan_store##size(intptr_t addr);				\
+	void __asan_store##size##_noabort(intptr_t addr);	\
+	void __asan_load##size(intptr_t addr);				\
+	void __asan_load##size##_noabort(intptr_t addr);	\
+	void __asan_report_store##size(intptr_t addr);			\
+	void __asan_report_store##size##_noabort(intptr_t addr);		\
+	void __asan_report_load##size(intptr_t addr);			\
+	void __asan_report_load##size##_noabort(intptr_t addr);
+
+DECLARE_ASAN_LOAD_STORE_SIZE(1);
+DECLARE_ASAN_LOAD_STORE_SIZE(2);
+DECLARE_ASAN_LOAD_STORE_SIZE(4);
+DECLARE_ASAN_LOAD_STORE_SIZE(8);
+
+void __asan_storeN(intptr_t addr, ssize_t size);
+void __asan_storeN_noabort(intptr_t addr, ssize_t size);
+void __asan_loadN(intptr_t addr, ssize_t size);
+void __asan_loadN_noabort(intptr_t addr, ssize_t size);
+
+/*
+ * Force LLVM to emit BTF information for the stubs,
+ * because the ASAN pass in LLVM by itself doesn't.
+ */
+#define ASAN_LOAD_STORE_SIZE(size)		\
+	__asan_store##size,			\
+	__asan_store##size##_noabort,		\
+	__asan_load##size,			\
+	__asan_load##size##_noabort,		\
+	__asan_report_store##size,		\
+	__asan_report_store##size##_noabort,	\
+	__asan_report_load##size,		\
+	__asan_report_load##size##_noabort
+
+__attribute__((used))
+static void (*__asan_btf_anchors[])(intptr_t) = {
+	ASAN_LOAD_STORE_SIZE(1),
+	ASAN_LOAD_STORE_SIZE(2),
+	ASAN_LOAD_STORE_SIZE(4),
+	ASAN_LOAD_STORE_SIZE(8),
+};
+
+#else /* BPF_ARENA_ASAN */
+
+static inline int asan_poison(void __arena *addr, s8 val, size_t size) { return 0; }
+static inline int asan_unpoison(void __arena *addr, size_t size) { return 0; }
+static inline bool asan_shadow_set(void __arena *addr) { return 0; }
+__weak bool asan_ready(void) { return true; }
+
+#endif /* BPF_ARENA_ASAN */
+
+#endif /* __BPF__ */
diff --git a/tools/testing/selftests/bpf/libarena/include/libarena/common.h b/tools/testing/selftests/bpf/libarena/include/libarena/common.h
index d088f3e75798..21eb18bf4533 100644
--- a/tools/testing/selftests/bpf/libarena/include/libarena/common.h
+++ b/tools/testing/selftests/bpf/libarena/include/libarena/common.h
@@ -44,6 +44,7 @@ struct {
  * the opaque volatile variable 0 instead of the constant 0.
  */
 extern const volatile u32 zero;
+extern volatile u64 asan_violated;
 
 int arena_fls(__u64 word);
 
diff --git a/tools/testing/selftests/bpf/libarena/src/asan.bpf.c b/tools/testing/selftests/bpf/libarena/src/asan.bpf.c
new file mode 100644
index 000000000000..64c5b990086c
--- /dev/null
+++ b/tools/testing/selftests/bpf/libarena/src/asan.bpf.c
@@ -0,0 +1,553 @@
+// SPDX-License-Identifier: LGPL-2.1 OR BSD-2-Clause
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+#include <vmlinux.h>
+#include <libarena/common.h>
+#include <libarena/asan.h>
+
+
+enum {
+	/*
+	 * Is the access checked by check_region_inline
+	 * a read or a write?
+	 */
+	ASAN_READ		= 0x0U,
+	ASAN_WRITE		= 0x1U,
+};
+
+/*
+ * Address sanitizer (ASAN) for arena-based BPF programs, inspired
+ * by KASAN.
+ *
+ * The API
+ * -------
+ *
+ * The implementation includes two kinds of components: Implementation
+ * of ASAN hooks injected by LLVM into the program, and API calls that
+ * allocators use to mark memory as valid or invalid. The full list is:
+ *
+ * LLVM stubs:
+ *
+ * void __asan_{load, store}<size>(intptr_t addr)
+ *	Checks whether an access is valid. All variations covered
+ *	by check_region_inline().
+ *
+ * void __asan_{store, load}N(intptr_t addr, ssize_t size)
+ *
+ * void __asan_report_{load, store}<size>(intptr_t addr)
+ *	Report an access violation for the program. Used when LLVM
+ *	uses direct code generation for shadow map checks.
+ *
+ * void *__asan_memcpy(void *d, const void *s, size_t n)
+ * void *__asan_memmove(void *d, const void *s, size_t n)
+ * void *__asan_memset(void *p, int c, size_t n)
+ *	Hooks for ASAN instrumentation of the LLVM mem* builtins.
+ *	Currently unimplemented just like the builtins themselves.
+ *
+ * API methods:
+ *
+ * asan_init()
+ *	Initialize the ASAN map for the arena.
+ *
+ * asan_poison()
+ *	Mark a region of memory as poisoned. Accessing poisoned memory
+ *	causes asan_report() to fire. Invoked during free().
+ *
+ * asan_unpoison()
+ *	Mark a region as unpoisoned after alloc().
+ *
+ * asan_shadow_set()
+ *	Check a byte's validity directly.
+ *
+ * The Algorithm In Brief
+ * ----------------------
+ * Each group of 8 bytes is mapped to a "granule" in the shadow map. This
+ * granule is one byte in size and describes which bytes of the group are valid.
+ * Possible values are:
+ *
+ * 0: All bytes are valid. Makes checks in the middle of an allocated region
+ * (most of them) fast.
+ * (0, 7]: How many consecutive bytes are valid, starting from the lowest one.
+ * The tradeoff is that we can't poison individual bytes in the middle of a
+ * valid region.
+ * [0x80, 0xff]: Special poison values, can be used to denote specific error
+ * modes (e.g., recently freed vs uninitialized memory).
+ *
+ * The mapping between a memory location and its shadow is:
+ * shadow_addr = shadow_base + (addr >> 3). We retain the 8:1 data:shadow
+ * ratio of existing ASAN implementations as a compromise between tracking
+ * granularity and space usage/scan overhead.
+ */
+
+#ifdef BPF_ARENA_ASAN
+
+#pragma clang attribute push(__attribute__((no_sanitize("address"))), \
+			     apply_to = function)
+
+#define SHADOW_ALL_ZEROES ((u64)-1)
+
+/*
+ * Canary variable for ASAN violations. Set to the offending address.
+ */
+volatile u64 asan_violated = 0;
+
+/*
+ * Shadow map occupancy map.
+ */
+volatile u64 __asan_shadow_memory_dynamic_address;
+
+volatile u32 asan_reported = false;
+volatile bool asan_inited = false;
+
+/*
+ * Set during program load.
+ */
+volatile bool asan_report_once = false;
+
+/*
+ * BPF does not currently support the memset/memcpy/memcmp intrinsics.
+ * For large sequential copies, or assignments of large data structures,
+ * the frontend will generate an intrinsic that causes the BPF backend
+ * to exit due to a missing implementation. Provide a simple implementation
+ * just for memset to use it for poisoning/unpoisoning the map.
+ */
+__weak int asan_memset(s8a __arg_arena *dst, s8 val, size_t size)
+{
+	size_t i;
+
+	for (i = zero; i < size && can_loop; i++)
+		dst[i] = val;
+
+	return 0;
+}
+
+/* Validate a 1-byte access, always within a single byte. */
+static __always_inline bool memory_is_poisoned_1(s8a *addr)
+{
+	s8 shadow_value = *(s8a *)mem_to_shadow(addr);
+
+	/* Byte is 0, access is valid. */
+	if (likely(!shadow_value))
+		return false;
+
+	/*
+	 * Byte is non-zero. Access is valid if granule offset in [0, shadow_value),
+	 * so the memory is poisoned if shadow_value is negative or smaller than
+	 * the granule's value.
+	 */
+
+	return ASAN_GRANULE(addr) >= shadow_value;
+}
+
+/* Validate a 2-, 4-, or 8-byte access; the shadow spans up to 2 bytes. */
+static __always_inline bool memory_is_poisoned_2_4_8(s8a *addr, u64 size)
+{
+	u64 end = (u64)addr + size - 1;
+
+	/*
+	 * Region fully within a single byte (addition didn't
+	 * overflow above ASAN_GRANULE).
+	 */
+	if (likely(ASAN_GRANULE(end) >= size - 1))
+		return memory_is_poisoned_1((s8a *)end);
+
+	/*
+	 * Otherwise the first shadow byte must be fully unpoisoned, and the second
+	 * byte must be unpoisoned up to the end of the accessed region.
+	 */
+
+	return *(s8a *)mem_to_shadow(addr) || memory_is_poisoned_1((s8a *)end);
+}
+
+__weak bool asan_shadow_set(void __arena __arg_arena *addr)
+{
+	return memory_is_poisoned_1(addr);
+}
+
+static __always_inline u64 first_nonzero_byte(u64 addr, size_t size)
+{
+	while (size && can_loop) {
+		if (unlikely(*(s8a *)addr))
+			return addr;
+		addr += 1;
+		size -= 1;
+	}
+
+	return SHADOW_ALL_ZEROES;
+}
+
+static __always_inline bool memory_is_poisoned_n(s8a *addr, u64 size)
+{
+	u64 ret;
+	u64 start;
+	u64 end;
+
+	/* Size of [start, end] is end - start + 1. */
+	start = (u64)mem_to_shadow(addr);
+	end = (u64)mem_to_shadow(addr + size - 1);
+
+	ret = first_nonzero_byte(start, (end - start) + 1);
+	if (likely(ret == SHADOW_ALL_ZEROES))
+		return false;
+
+	return unlikely(ret != end || ASAN_GRANULE(addr + size - 1) >= *(s8a *)end);
+}
+
+__weak int asan_report(s8a __arg_arena *addr, size_t sz, u32 flags)
+{
+	u32 reported = __sync_val_compare_and_swap(&asan_reported, false, true);
+
+	/* Only report the first ASAN violation. */
+	if (reported && asan_report_once)
+		return 0;
+
+	asan_violated = (u64)addr;
+
+	arena_stderr("Memory violation for address %p (0x%lx) for %s of size %ld\n",
+			addr, (u64)addr,
+			(flags & ASAN_WRITE) ? "write" : "read",
+			sz);
+	bpf_stream_print_stack(BPF_STDERR);
+
+	return 0;
+}
+
+static __always_inline bool check_asan_args(s8a *addr, size_t size,
+					    bool *result)
+{
+	bool valid = true;
+
+	/* Size 0 accesses are valid even if the address is invalid. */
+	if (unlikely(size == 0))
+		goto confirmed_valid;
+
+	/*
+	 * Wraparound is possible for values close to the edge of the
+	 * 4GiB boundary of the arena (last valid address is 1UL << 32 - 1).
+	 *
+	 *
+	 * The wraparound detection below works for small sizes. check_asan_args is
+	 * always called from the builtin ASAN checks, so 1 <= size <= 64. Even
+	 * for storeN/loadN, which we do not expect to encounter, the intrinsics will
+	 * not have a large enough size that:
+	 *
+	 * - addr + size  > MAX_U32
+	 * - (u32)(addr + size) > (u32) addr
+	 *
+	 * which would defeat wraparound detection.
+	 */
+	if (unlikely((u32)(u64)(addr + size) < (u32)(u64)addr))
+		goto confirmed_invalid;
+
+	return false;
+
+confirmed_invalid:
+	valid = false;
+
+	/* FALLTHROUGH */
+confirmed_valid:
+	*result = valid;
+
+	return true;
+}
+
+static __always_inline bool check_region_inline(intptr_t ptr, size_t size,
+						u32 flags)
+{
+	s8a *addr = (s8a *)(u64)ptr;
+	bool is_poisoned, is_valid;
+
+	if (check_asan_args(addr, size, &is_valid)) {
+		if (!is_valid)
+			asan_report(addr, size, flags);
+		return is_valid;
+	}
+
+	switch (size) {
+	case 1:
+		is_poisoned = memory_is_poisoned_1(addr);
+		break;
+	case 2:
+	case 4:
+	case 8:
+		is_poisoned = memory_is_poisoned_2_4_8(addr, size);
+		break;
+	default:
+		is_poisoned = memory_is_poisoned_n(addr, size);
+	}
+
+	if (is_poisoned) {
+		asan_report(addr, size, flags);
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * __alias is not supported for BPF so define *__noabort() variants as wrappers.
+ */
+#define DEFINE_ASAN_LOAD_STORE(size)                                  \
+	__hidden void __asan_store##size(intptr_t addr)                  \
+	{                                                             \
+		check_region_inline(addr, size, ASAN_WRITE);          \
+	}                                                             \
+	__hidden void __asan_store##size##_noabort(intptr_t addr)        \
+	{                                                             \
+		check_region_inline(addr, size, ASAN_WRITE);          \
+	}                                                             \
+	__hidden void __asan_load##size(intptr_t addr)                   \
+	{                                                             \
+		check_region_inline(addr, size, ASAN_READ);           \
+	}                                                             \
+	__hidden void __asan_load##size##_noabort(intptr_t addr)         \
+	{                                                             \
+		check_region_inline(addr, size, ASAN_READ);           \
+	}                                                             \
+	__hidden void __asan_report_store##size(intptr_t addr)           \
+	{                                                             \
+		asan_report((s8a *)addr, size, ASAN_WRITE);           \
+	}                                                             \
+	__hidden void __asan_report_store##size##_noabort(intptr_t addr) \
+	{                                                             \
+		asan_report((s8a *)addr, size, ASAN_WRITE);           \
+	}                                                             \
+	__hidden void __asan_report_load##size(intptr_t addr)            \
+	{                                                             \
+		asan_report((s8a *)addr, size, ASAN_READ);            \
+	}                                                             \
+	__hidden void __asan_report_load##size##_noabort(intptr_t addr)  \
+	{                                                             \
+		asan_report((s8a *)addr, size, ASAN_READ);            \
+	}
+
+DEFINE_ASAN_LOAD_STORE(1);
+DEFINE_ASAN_LOAD_STORE(2);
+DEFINE_ASAN_LOAD_STORE(4);
+DEFINE_ASAN_LOAD_STORE(8);
+
+void __asan_storeN(intptr_t addr, ssize_t size)
+{
+	check_region_inline(addr, size, ASAN_WRITE);
+}
+
+void __asan_storeN_noabort(intptr_t addr, ssize_t size)
+{
+	check_region_inline(addr, size, ASAN_WRITE);
+}
+
+void __asan_loadN(intptr_t addr, ssize_t size)
+{
+	check_region_inline(addr, size, ASAN_READ);
+}
+
+void __asan_loadN_noabort(intptr_t addr, ssize_t size)
+{
+	check_region_inline(addr, size, ASAN_READ);
+}
+
+/*
+ * We currently do not sanitize globals.
+ */
+void __asan_register_globals(intptr_t globals, size_t n)
+{
+}
+
+void __asan_unregister_globals(intptr_t globals, size_t n)
+{
+}
+
+/*
+ * We do not currently have memcpy/memmove/memset intrinsics
+ * in LLVM. Do not implement sanitization.
+ */
+void *__asan_memcpy(void *d, const void *s, size_t n)
+{
+	arena_stderr("ASAN: Unexpected %s call", __func__);
+	return NULL;
+}
+
+void *__asan_memmove(void *d, const void *s, size_t n)
+{
+	arena_stderr("ASAN: Unexpected %s call", __func__);
+	return NULL;
+}
+
+void *__asan_memset(void *p, int c, size_t n)
+{
+	arena_stderr("ASAN: Unexpected %s call", __func__);
+	return NULL;
+}
+
+/*
+ * Poisoning code, used when we add more freed memory to the allocator by:
+ * 	a) pulling memory from the arena segment using bpf_arena_alloc_pages()
+ * 	b) freeing memory from application code
+ */
+__hidden __noasan int asan_poison(void __arena *addr, s8 val, size_t size)
+{
+	s8a *shadow;
+	size_t len;
+
+	/*
+	 * Poisoning from a non-granule address makes no sense: We can only allocate
+	 * memory to the application that has a granule-aligned starting address,
+	 * and bpf_arena_alloc_pages returns page-aligned memory. A non-aligned
+	 * addr then implies we're freeing a different address than the one we
+	 * allocated.
+	 */
+	if (unlikely((u64)addr & ASAN_GRANULE_MASK))
+		return -EINVAL;
+
+	/*
+	 * We cannot free an unaligned region because it'd be possible that we
+	 * cannot describe the resulting poisoning state of the granule in
+	 * the ASAN encoding.
+	 *
+	 * Every granule represents a region of memory that looks like the
+	 * following (P for poisoned bytes, C for clear):
+	 *
+	 * <Clear>  <Poisoned>
+	 * [ C C C ... P P ]
+	 *
+	 * The value of the granule's shadow map is the number of clear bytes in
+	 * it. We cannot represent granules with the following state:
+	 *
+	 * [ P P ... C C ... P P ]
+	 *
+	 * That would be possible if we could free unaligned regions, so prevent that.
+	 */
+	if (unlikely(size & ASAN_GRANULE_MASK))
+		return -EINVAL;
+
+	shadow = mem_to_shadow(addr);
+	len = size >> ASAN_SHADOW_SHIFT;
+
+	asan_memset(shadow, val, len);
+
+	return 0;
+}
+
+/*
+ * Unpoisoning code for marking memory as valid during allocation calls.
+ *
+ * Very similar to asan_poison, except we need to round up instead of
+ * down, then partially poison the last granule if necessary.
+ *
+ * Partial poisoning is useful for keeping the padding poisoned. Allocations
+ * are granule-aligned, so we're reserving granule-aligned sizes for the
+ * allocation. However, we want to still treat accesses to the padding as
+ * invalid. Partial poisoning takes care of that. Freeing and poisoning the
+ * memory is still done in granule-aligned sizes and repoisons the already
+ * poisoned padding.
+ */
+__hidden __noasan int asan_unpoison(void __arena *addr, size_t size)
+{
+	size_t partial = size & ASAN_GRANULE_MASK;
+	s8a *shadow;
+	size_t len;
+
+	/*
+	 * We cannot allocate in the middle of the granule. The ASAN shadow
+	 * map encoding only describes regions of memory where every granule
+	 * follows this format (P for poisoned, C for clear):
+	 *
+	 * <Clear>  <Poisoned>
+	 * [ C C C ... P P ]
+	 *
+	 * This is so we can use a single number in [0, ASAN_SHADOW_SCALE)
+	 * to represent the poison state of the granule.
+	 */
+	if (unlikely((u64)addr & ASAN_GRANULE_MASK))
+		return -EINVAL;
+
+	shadow = mem_to_shadow(addr);
+	len = size >> ASAN_SHADOW_SHIFT;
+
+	asan_memset(shadow, 0, len);
+
+	/*
+	 * If we are allocating a non-granule aligned region, we need to adjust
+	 * the last byte of the shadow map to list how many bytes in the granule
+	 * are unpoisoned. If the region is aligned, then the memset call above
+	 * was enough.
+	 */
+	if (partial)
+		shadow[len] = partial;
+
+	return 0;
+}
+
+/*
+ * Initialize ASAN state when necessary. Triggered from userspace before
+ * allocator startup.
+ */
+SEC("syscall")
+__weak __noasan int asan_init(struct asan_init_args *args)
+{
+	u64 globals_pages = args->arena_globals_pages;
+	u64 all_pages = args->arena_all_pages;
+	u64 shadow_map, shadow_pgoff;
+	u64 shadow_pages;
+
+	if (asan_inited)
+		return 0;
+
+	/*
+	 * Round up the shadow map size to the nearest page.
+	 */
+	shadow_pages = all_pages >> ASAN_SHADOW_SHIFT;
+	if ((all_pages & ((1 << ASAN_SHADOW_SHIFT) - 1)))
+		shadow_pages += 1;
+
+	if (all_pages > (1ULL << 32) / __PAGE_SIZE) {
+		arena_stderr("error: arena size %lx too large", all_pages);
+		return -EINVAL;
+	}
+
+	if (globals_pages > all_pages) {
+		arena_stderr("error: globals %lx do not fit in arena %lx",
+				globals_pages, all_pages);
+		return -EINVAL;
+	}
+
+	if (globals_pages + shadow_pages >= all_pages) {
+		arena_stderr("error: globals %lx do not leave room for shadow map %lx "
+				"(arena pages %lx)",
+				globals_pages, shadow_pages, all_pages);
+		return -EINVAL;
+	}
+
+	shadow_pgoff = all_pages - shadow_pages - globals_pages;
+	__asan_shadow_memory_dynamic_address = shadow_pgoff * __PAGE_SIZE;
+
+	/*
+	 * Allocate the last (1/ASAN_SHADOW_SCALE)th of an arena's pages for the map.
+	 * We find the offset and size from the arena map.
+	 *
+	 * The allocated map pages are zeroed out, meaning all memory is marked as valid
+	 * even if it's not allocated already. This is expected: Since the actual memory
+	 * pages are not allocated, accesses to it will trigger page faults and will be
+	 * reported through BPF streams. Any pages allocated through bpf_arena_alloc_pages
+	 * should be poisoned by the allocator right after the call succeeds.
+	 */
+	shadow_map = (u64)bpf_arena_alloc_pages(
+		&arena, (void __arena *)__asan_shadow_memory_dynamic_address,
+		shadow_pages, NUMA_NO_NODE, 0);
+	if (!shadow_map) {
+		arena_stderr("Could not allocate shadow map\n");
+
+		__asan_shadow_memory_dynamic_address = 0;
+
+		return -ENOMEM;
+	}
+
+	asan_inited = true;
+
+	return 0;
+}
+
+#pragma clang attribute pop
+
+#endif /* BPF_ARENA_ASAN */
+
+__weak char _license[] SEC("license") = "GPL";
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH bpf-next v9 5/8] selftests/bpf: Add ASAN support for libarena selftests
  2026-04-26 19:03 [PATCH bpf-next v9 0/8] Introduce arena library and runtime Emil Tsalapatis
                   ` (3 preceding siblings ...)
  2026-04-26 19:03 ` [PATCH bpf-next v9 4/8] selftests/bpf: Add arena ASAN runtime to libarena Emil Tsalapatis
@ 2026-04-26 19:03 ` Emil Tsalapatis
  2026-04-26 19:33   ` bot+bpf-ci
  2026-04-26 20:28   ` sashiko-bot
  2026-04-26 19:03 ` [PATCH bpf-next v9 6/8] selftests/bpf: Add buddy allocator for libarena Emil Tsalapatis
                   ` (3 subsequent siblings)
  8 siblings, 2 replies; 19+ messages in thread
From: Emil Tsalapatis @ 2026-04-26 19:03 UTC (permalink / raw)
  To: bpf
  Cc: ast, andrii, memxor, daniel, eddyz87, song, mattbobrowski,
	Emil Tsalapatis

Expand the arena library selftest infrastructure to support
address sanitization. Add the compiler flags necessary to
compile the library under ASAN when supported.

Signed-off-by: Emil Tsalapatis <emil@etsalapatis.com>
---
 tools/testing/selftests/bpf/Makefile          | 22 +++++++-
 tools/testing/selftests/bpf/libarena/Makefile | 25 ++++++++-
 .../bpf/libarena/include/libarena/userspace.h | 33 ++++++++++++
 .../bpf/libarena/selftests/st_asan_common.h   | 52 +++++++++++++++++++
 .../selftests/bpf/libarena/src/common.bpf.c   |  2 +
 5 files changed, 132 insertions(+), 2 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/libarena/selftests/st_asan_common.h

diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 13959c449893..8f694131118c 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -79,6 +79,12 @@ ifneq ($(shell $(CLANG) --target=bpf -mcpu=help 2>&1 | grep 'v4'),)
 CLANG_CPUV4 := 1
 endif
 
+# Check whether clang supports BPF address sanitizer (requires LLVM 22+)
+CLANG_HAS_ARENA_ASAN := $(shell echo 'int x;' | \
+	$(CLANG) --target=bpf -fsanitize=kernel-address \
+	-mllvm -asan-shadow-addr-space=1 \
+	-x c -c - -o /dev/null 2>/dev/null && echo 1)
+
 # Order correspond to 'make run_tests' order
 TEST_GEN_PROGS = test_verifier test_tag test_maps test_lru_map test_progs \
 	test_sockmap \
@@ -767,6 +773,14 @@ LIBARENA_SKEL := libarena/libarena.skel.h
 $(LIBARENA_SKEL): $(INCLUDE_DIR)/vmlinux.h $(BPFOBJ) $(LIBARENA_BPF_DEPS)
 	+$(MAKE) -C libarena libarena.skel.h $(LIBARENA_MAKE_ARGS)
 
+ifneq ($(CLANG_HAS_ARENA_ASAN),)
+LIBARENA_ASAN_SKEL := libarena/libarena_asan.skel.h
+CFLAGS += -DHAS_BPF_ARENA_ASAN
+
+$(LIBARENA_ASAN_SKEL): $(INCLUDE_DIR)/vmlinux.h $(BPFOBJ) $(LIBARENA_BPF_DEPS)
+	+$(MAKE) -C libarena libarena_asan.skel.h $(LIBARENA_MAKE_ARGS)
+endif
+
 # Define test_progs test runner.
 TRUNNER_TESTS_DIR := prog_tests
 TRUNNER_BPF_PROGS_DIR := progs
@@ -791,7 +805,9 @@ TRUNNER_EXTRA_SOURCES := test_progs.c		\
 			 flow_dissector_load.h	\
 			 ip_check_defrag_frags.h	\
 			 bpftool_helpers.c	\
-			 usdt_1.c usdt_2.c
+			 usdt_1.c usdt_2.c	\
+			 $(LIBARENA_SKEL)	\
+			 $(LIBARENA_ASAN_SKEL)
 TRUNNER_LIB_SOURCES := find_bit.c
 TRUNNER_EXTRA_FILES := $(OUTPUT)/urandom_read				\
 		       $(OUTPUT)/liburandom_read.so			\
@@ -962,3 +978,7 @@ override define INSTALL_RULE
 endef
 
 libarena: $(LIBARENA_SKEL)
+
+ifneq ($(CLANG_HAS_ARENA_ASAN),)
+libarena_asan: $(LIBARENA_ASAN_SKEL)
+endif
diff --git a/tools/testing/selftests/bpf/libarena/Makefile b/tools/testing/selftests/bpf/libarena/Makefile
index e85b3ad96890..5e2ab514805e 100644
--- a/tools/testing/selftests/bpf/libarena/Makefile
+++ b/tools/testing/selftests/bpf/libarena/Makefile
@@ -30,6 +30,7 @@ LIBBPF_INCLUDE ?= $(INCLUDE_DIR)
 # Scan src/ and selftests/ to generate the final binaries
 LIBARENA_SOURCES = $(wildcard $(LIBARENA)/src/*.bpf.c) $(wildcard $(LIBARENA)/selftests/*.bpf.c)
 LIBARENA_OBJECTS = $(notdir $(LIBARENA_SOURCES:.bpf.c=.bpf.o))
+LIBARENA_OBJECTS_ASAN = $(notdir $(LIBARENA_SOURCES:.bpf.c=_asan.bpf.o))
 
 INCLUDES = -I$(LIBARENA)/include -I$(BPFDIR)
 ifneq ($(INCLUDE_DIR),)
@@ -39,6 +40,13 @@ ifneq ($(LIBBPF_INCLUDE),)
 INCLUDES += -I$(LIBBPF_INCLUDE)
 endif
 
+ASAN_FLAGS = -fsanitize=kernel-address -fno-stack-protector -fno-builtin
+ASAN_FLAGS += -mllvm -asan-instrument-address-spaces=1 -mllvm -asan-shadow-addr-space=1
+ASAN_FLAGS += -mllvm -asan-use-stack-safety=0 -mllvm -asan-stack=0
+ASAN_FLAGS += -mllvm -asan-kernel=1
+ASAN_FLAGS += -mllvm -asan-constructor-kind=none
+ASAN_FLAGS += -mllvm -asan-destructor-kind=none
+
 # ENABLE_ATOMICS_TESTS required because we use arena spinlocks
 override BPF_CFLAGS += -DENABLE_ATOMICS_TESTS
 override BPF_CFLAGS += -O2 -g
@@ -53,17 +61,32 @@ CFLAGS += $(INCLUDES)
 vpath %.bpf.c $(LIBARENA)/src $(LIBARENA)/selftests
 vpath %.c $(LIBARENA)/src $(LIBARENA)/selftests
 
+skeletons: libarena.skel.h libarena_asan.skel.h
+.PHONY: skeletons
+
+libarena_asan.skel.h: libarena_asan.bpf.o
+	$(call msg,GEN-SKEL,libarena,$@)
+	$(Q)$(BPFTOOL) gen skeleton $< name "libarena_asan" > $@
+
 libarena.skel.h: libarena.bpf.o
 	$(call msg,GEN-SKEL,libarena,$@)
 	$(Q)$(BPFTOOL) gen skeleton $< name "libarena" > $@
 
+libarena_asan.bpf.o: $(LIBARENA_OBJECTS_ASAN)
+	$(call msg,GEN-OBJ,libarena,$@)
+	$(Q)$(BPFTOOL) gen object $@ $^
+
 libarena.bpf.o: $(LIBARENA_OBJECTS)
 	$(call msg,GEN-OBJ,libarena,$@)
 	$(Q)$(BPFTOOL) gen object $@ $^
 
+%_asan.bpf.o: %.bpf.c
+	$(call msg,CLNG-BPF,libarena,$@)
+	$(Q)$(CLANG) $(BPF_CFLAGS) $(ASAN_FLAGS) -DBPF_ARENA_ASAN $(BPF_TARGET_ENDIAN) -c $< -o $@
+
 %.bpf.o: %.bpf.c
 	$(call msg,CLNG-BPF,libarena,$@)
 	$(Q)$(CLANG) $(BPF_CFLAGS) $(BPF_TARGET_ENDIAN) -c $< -o $@
 
 clean:
-	$(Q)rm -f *.skel.h *.bpf.o
+	$(Q)rm -f *.skel.h *.bpf.o *.linked*.o
diff --git a/tools/testing/selftests/bpf/libarena/include/libarena/userspace.h b/tools/testing/selftests/bpf/libarena/include/libarena/userspace.h
index 0438a751d5fd..88b68ac73cca 100644
--- a/tools/testing/selftests/bpf/libarena/include/libarena/userspace.h
+++ b/tools/testing/selftests/bpf/libarena/include/libarena/userspace.h
@@ -27,6 +27,11 @@ static inline bool libarena_is_test_prog(const char *name)
 	return strstr(name, "test_") == name;
 }
 
+static inline bool libarena_is_asan_test_prog(const char *name)
+{
+	return strstr(name, "asan_test") == name;
+}
+
 static inline int libarena_run_prog_args(int prog_fd, void *args, size_t argsize)
 {
 	LIBBPF_OPTS(bpf_test_run_opts, opts);
@@ -97,3 +102,31 @@ static inline int libarena_get_globals_pages(int arena_get_globals_fd,
 	free(vec);
 	return 0;
 }
+
+static inline int libarena_asan_init(int arena_asan_init_fd,
+				     int asan_init_fd,
+				     size_t arena_all_pages)
+{
+	LIBBPF_OPTS(bpf_test_run_opts, opts);
+	struct asan_init_args args;
+	u64 globals_pages;
+	int ret;
+
+	ret = libarena_get_globals_pages(arena_asan_init_fd,
+					 arena_all_pages, &globals_pages);
+	if (ret)
+		return ret;
+
+	args = (struct asan_init_args){
+		.arena_all_pages = arena_all_pages,
+		.arena_globals_pages = globals_pages,
+	};
+
+	opts.ctx_in = &args;
+	opts.ctx_size_in = sizeof(args);
+
+	ret = bpf_prog_test_run_opts(asan_init_fd, &opts);
+	if (ret)
+		return ret;
+	return opts.retval;
+}
diff --git a/tools/testing/selftests/bpf/libarena/selftests/st_asan_common.h b/tools/testing/selftests/bpf/libarena/selftests/st_asan_common.h
new file mode 100644
index 000000000000..1d3edc4372ac
--- /dev/null
+++ b/tools/testing/selftests/bpf/libarena/selftests/st_asan_common.h
@@ -0,0 +1,52 @@
+// SPDX-License-Identifier: LGPL-2.1 OR BSD-2-Clause
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+
+#pragma once
+
+#define ST_PAGES 64
+
+static inline void print_asan_map_state(void __arena *addr)
+{
+	arena_stdout("%s:%d ASAN %p -> (val: %x gran: %x set: [%s])",
+			__func__, __LINE__, addr,
+			*(s8a *)(addr), ASAN_GRANULE(addr),
+			asan_shadow_set(addr) ? "yes" : "no");
+}
+
+/*
+ * Emit an error and force the current function to exit if the ASAN
+ * violation state is unexpected. Reset the violation state after.
+ */
+static inline int asan_validate_addr(bool cond, void __arena *addr)
+{
+	if ((asan_violated != 0) == cond) {
+		asan_violated = 0;
+		return 0;
+	}
+
+	arena_stdout("%s:%d ASAN asan_violated %lx", __func__, __LINE__,
+			(u64)asan_violated);
+	print_asan_map_state(addr);
+
+	asan_violated = 0;
+
+	return -EINVAL;
+}
+
+static inline int asan_validate(void)
+{
+	if (!asan_violated)
+		return 0;
+
+	arena_stdout("%s:%d Found ASAN violation at %lx", __func__, __LINE__,
+			asan_violated);
+
+	asan_violated = 0;
+
+	return -EINVAL;
+}
+
+struct blob {
+	volatile u8 mem[59];
+	u8 oob;
+};
diff --git a/tools/testing/selftests/bpf/libarena/src/common.bpf.c b/tools/testing/selftests/bpf/libarena/src/common.bpf.c
index 659ccead5624..84e8a8b7d42e 100644
--- a/tools/testing/selftests/bpf/libarena/src/common.bpf.c
+++ b/tools/testing/selftests/bpf/libarena/src/common.bpf.c
@@ -2,6 +2,8 @@
 /* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
 #include <libarena/common.h>
 
+#include <libarena/asan.h>
+
 const volatile u32 zero = 0;
 
 int arena_fls(__u64 word)
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH bpf-next v9 6/8] selftests/bpf: Add buddy allocator for libarena
  2026-04-26 19:03 [PATCH bpf-next v9 0/8] Introduce arena library and runtime Emil Tsalapatis
                   ` (4 preceding siblings ...)
  2026-04-26 19:03 ` [PATCH bpf-next v9 5/8] selftests/bpf: Add ASAN support for libarena selftests Emil Tsalapatis
@ 2026-04-26 19:03 ` Emil Tsalapatis
  2026-04-26 19:46   ` bot+bpf-ci
  2026-04-26 20:54   ` sashiko-bot
  2026-04-26 19:03 ` [PATCH bpf-next v9 7/8] selftests/bpf: Add selftests for libarena buddy allocator Emil Tsalapatis
                   ` (2 subsequent siblings)
  8 siblings, 2 replies; 19+ messages in thread
From: Emil Tsalapatis @ 2026-04-26 19:03 UTC (permalink / raw)
  To: bpf
  Cc: ast, andrii, memxor, daniel, eddyz87, song, mattbobrowski,
	Emil Tsalapatis

Add a byte-oriented buddy allocator for libarena. The buddy
allocator provides an alloc/free interface for small arena allocations
ranging from 16 bytes to 512 KiB. Smaller allocation requests are rounded
up to 16 bytes. The buddy allocator does not handle larger allocations,
which can instead use the existing bpf_arena_{alloc, free}_pages() kfuncs.

Signed-off-by: Emil Tsalapatis <emil@etsalapatis.com>
---
 tools/testing/selftests/bpf/libarena/Makefile |   2 +
 .../bpf/libarena/include/libarena/buddy.h     |  92 ++
 .../bpf/libarena/include/libarena/common.h    |  14 +
 .../selftests/bpf/libarena/src/buddy.bpf.c    | 903 ++++++++++++++++++
 .../selftests/bpf/libarena/src/common.bpf.c   |  23 +-
 5 files changed, 1033 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/bpf/libarena/include/libarena/buddy.h
 create mode 100644 tools/testing/selftests/bpf/libarena/src/buddy.bpf.c

diff --git a/tools/testing/selftests/bpf/libarena/Makefile b/tools/testing/selftests/bpf/libarena/Makefile
index 5e2ab514805e..3c695f9c0054 100644
--- a/tools/testing/selftests/bpf/libarena/Makefile
+++ b/tools/testing/selftests/bpf/libarena/Makefile
@@ -51,6 +51,8 @@ ASAN_FLAGS += -mllvm -asan-destructor-kind=none
 override BPF_CFLAGS += -DENABLE_ATOMICS_TESTS
 override BPF_CFLAGS += -O2 -g
 override BPF_CFLAGS += -Wno-incompatible-pointer-types-discards-qualifiers
+# Required to define our own arena-based free()
+override BPF_CFLAGS += -Wno-incompatible-library-redeclaration
 # Required for suppressing harmless vmlinux.h-related warnings.
 override BPF_CFLAGS += -Wno-missing-declarations
 override BPF_CFLAGS += $(INCLUDES)
diff --git a/tools/testing/selftests/bpf/libarena/include/libarena/buddy.h b/tools/testing/selftests/bpf/libarena/include/libarena/buddy.h
new file mode 100644
index 000000000000..00e2437128ef
--- /dev/null
+++ b/tools/testing/selftests/bpf/libarena/include/libarena/buddy.h
@@ -0,0 +1,92 @@
+// SPDX-License-Identifier: LGPL-2.1 OR BSD-2-Clause
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+#pragma once
+
+struct buddy_chunk;
+typedef struct buddy_chunk __arena buddy_chunk_t;
+
+struct buddy_header;
+typedef struct buddy_header __arena buddy_header_t;
+
+enum buddy_consts {
+	/*
+	 * Minimum allocation is 1 << BUDDY_MIN_ALLOC_SHIFT.
+	 * Larger sizes increase internal fragmentation, but smaller
+	 * sizes increase the space overhead of the block metadata.
+	 */
+	BUDDY_MIN_ALLOC_SHIFT	= 4,
+	BUDDY_MIN_ALLOC_BYTES	= 1 << BUDDY_MIN_ALLOC_SHIFT,
+
+	/*
+	 * How many orders the buddy allocator can serve. Minimum block
+	 * size is 1 << BUDDY_MIN_ALLOC_SHIFT, maximum block size is
+	 * 1 << (BUDDY_MIN_ALLOC_SHIFT + BUDDY_CHUNK_NUM_ORDERS - 1):
+	 * Each block has size 1 << BUDDY_MIN_ALLOC_SHIFT, and the
+	 * allocation orders are in [0, BUDDY_CHUNK_NUM_ORDERS).
+	 * We keep two blocks of the maximum size to retain the
+	 * property in the code that all blocks have a buddy.
+	 * Higher values increase the maximum allocation size,
+	 * but also the size of the metadata for each block.
+	 */
+	BUDDY_CHUNK_NUM_ORDERS	= 1 << 4,
+	BUDDY_CHUNK_BYTES	= BUDDY_MIN_ALLOC_BYTES << (BUDDY_CHUNK_NUM_ORDERS),
+
+	/* Offset of the buddy header within a free block, see buddy.bpf.c for details */
+	BUDDY_HEADER_OFF	= 8,
+
+	/* The maximum number of blocks a chunk may have to track. */
+	BUDDY_CHUNK_ITEMS	= 1 << (BUDDY_CHUNK_NUM_ORDERS),
+	BUDDY_CHUNK_OFFSET_MASK	= BUDDY_CHUNK_BYTES - 1,
+
+	/*
+	 * Alignment for chunk allocations based on bpf_arena_alloc_pages.
+	 * The arena allocation kfunc does not have an alignment argument,
+	 * but that is required for all block calculations in the chunk to
+	 * work.
+	 */
+	BUDDY_VADDR_OFFSET	= BUDDY_CHUNK_BYTES,
+
+	/* Total arena virtual address space the allocator can consume. */
+	BUDDY_VADDR_SIZE	= BUDDY_CHUNK_BYTES << 10
+};
+
+struct buddy_header {
+	u32 prev_index;	/* "Pointer" to the previous available allocation of the same size. */
+	u32 next_index; /* Same for the next allocation. */
+};
+
+/*
+ * We bring memory into the allocator 1 MiB at a time.
+ */
+struct buddy_chunk {
+	/* The order of the current allocation for a item. 4 bits per order. */
+	u8		orders[BUDDY_CHUNK_ITEMS / 2];
+	/*
+	 * Bit to denote whether chunk is allocated. Size of the allocated/free
+	 * chunk found from the orders array.
+	 */
+	u8		allocated[BUDDY_CHUNK_ITEMS / 8];
+	/* Freelists for O(1) allocation. */
+	u64		freelists[BUDDY_CHUNK_NUM_ORDERS];
+	buddy_chunk_t	*next;
+};
+
+struct buddy {
+	buddy_chunk_t *first_chunk;		/* Pointer to the chunk linked list. */
+	arena_spinlock_t lock;			/* Allocator lock */
+	u64 vaddr;				/* Allocation into reserved vaddr */
+};
+
+typedef struct buddy __arena buddy_t;
+
+#ifdef __BPF__
+
+int buddy_init(buddy_t *buddy);
+int buddy_destroy(buddy_t *buddy);
+int buddy_free_internal(buddy_t *buddy, u64 free);
+#define buddy_free(buddy, ptr) buddy_free_internal((buddy), (u64)(ptr))
+u64 buddy_alloc_internal(buddy_t *buddy, size_t size);
+#define buddy_alloc(alloc, size) ((void __arena *)buddy_alloc_internal((alloc), (size)))
+
+
+#endif /* __BPF__  */
diff --git a/tools/testing/selftests/bpf/libarena/include/libarena/common.h b/tools/testing/selftests/bpf/libarena/include/libarena/common.h
index 21eb18bf4533..e54cb7b869bd 100644
--- a/tools/testing/selftests/bpf/libarena/include/libarena/common.h
+++ b/tools/testing/selftests/bpf/libarena/include/libarena/common.h
@@ -48,6 +48,20 @@ extern volatile u64 asan_violated;
 
 int arena_fls(__u64 word);
 
+u64 malloc_internal(size_t size);
+#define malloc(size) ((void __arena *)malloc_internal((size)))
+void free(void __arena *ptr);
+
+/*
+ * The verifier associates arenas with programs by checking LD.IMM
+ * instruction operands for an arena and populating the program state
+ * with the first instance it finds. This requires accessing our global
+ * arena variable, but subprogs do not necessarily do so while still
+ * using pointers from that arena. Insert an LD.IMM instruction  to
+ * access the arena and help the verifier.
+ */
+#define arena_subprog_init() do { asm volatile ("" :: "r"(&arena)); } while (0)
+
 #else /* ! __BPF__ */
 
 #include <stdint.h>
diff --git a/tools/testing/selftests/bpf/libarena/src/buddy.bpf.c b/tools/testing/selftests/bpf/libarena/src/buddy.bpf.c
new file mode 100644
index 000000000000..865e00803daa
--- /dev/null
+++ b/tools/testing/selftests/bpf/libarena/src/buddy.bpf.c
@@ -0,0 +1,903 @@
+// SPDX-License-Identifier: LGPL-2.1 OR BSD-2-Clause
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+
+#include <libarena/common.h>
+#include <libarena/asan.h>
+#include <libarena/buddy.h>
+
+/*
+ * Buddy allocator arena-based implementation.
+ *
+ * Memory is organized into chunks. These chunks
+ * cannot be coalesced or split. Allocating
+ * chunks allocates their memory eagerly.
+ *
+ * Internally, each chunk is organized into blocks.
+ * Blocks _can_ be coalesced/split, but only inside
+ * the chunk. Each block can be allocated or
+ * unallocated. If allocated, the entire block holds
+ * user data. If unallocated, the block is mostly
+ * invalid memory, with the exception of a header
+ * used for freelist tracking.
+ *
+ * The header is placed at an offset inside the block
+ * to prevent off-by-one errors from the previous block
+ * from trivially overwriting the header. Such an error
+ * is also not catchable by ASAN, since the header remains
+ * valid memory even after the block is freed. It is still
+ * theoretically possible for the header to be corrupted
+ * without being caught by ASAN, but harder.
+ *
+ * Since the allocator needs to track order information for
+ * both allocated and free blocks, and allocated blocks cannot
+ * store a header, the allocator also stores per-chunk order
+ * information in a reserved region at the beginning of the
+ * chunk. The header includes a bitmap with the order of blocks
+ * and their allocation state. It also includes the freelist
+ * heads for the allocation itself.
+ */
+
+
+enum {
+	BUDDY_POISONED = (s8)0xef,
+
+	/* Number of pages to be allocated per chunk. */
+	BUDDY_CHUNK_PAGES	= BUDDY_CHUNK_BYTES / __PAGE_SIZE
+};
+
+static inline int buddy_lock(buddy_t *buddy)
+{
+	return arena_spin_lock(&buddy->lock);
+}
+
+static inline void buddy_unlock(buddy_t *buddy)
+{
+	arena_spin_unlock(&buddy->lock);
+}
+
+/*
+ * Reserve part of the arena address space for the allocator. We use
+ * this to get aligned addresses for the chunks, since the arena
+ * page alloc kfuncs do not support aligning to a boundary (in this
+ * case 1 MiB, see buddy.h on how this is derived).
+ */
+static int buddy_reserve_arena_vaddr(buddy_t *buddy)
+{
+	buddy->vaddr = 0;
+
+	return bpf_arena_reserve_pages(&arena,
+				       (void __arena *)BUDDY_VADDR_OFFSET,
+				       BUDDY_VADDR_SIZE / __PAGE_SIZE);
+}
+
+/*
+ * Free up any unused address space. Used only during teardown.
+ */
+static void buddy_unreserve_arena_vaddr(buddy_t *buddy)
+{
+	bpf_arena_free_pages(
+		&arena, (void __arena *)(BUDDY_VADDR_OFFSET + buddy->vaddr),
+		(BUDDY_VADDR_SIZE - buddy->vaddr) / __PAGE_SIZE);
+
+	buddy->vaddr = 0;
+}
+
+/*
+ * Carve out part of the reserved address space and hand it over
+ * to the buddy allocator.
+ *
+ * We are assuming the buddy allocator is the only allocator in the
+ * system, so there is no race between this function reserving a
+ * page range and some other allocator actually making the BPF call
+ * to really create and reserve it.
+ *
+ * However, bump allocation must still be atomic because this function
+ * is called without the buddy lock from multiple threads concurrently.
+ */
+__weak int buddy_alloc_arena_vaddr(buddy_t __arg_arena *buddy, u64 *vaddrp)
+{
+	u64 vaddr, old, new;
+
+	if (!buddy || !vaddrp)
+		return -EINVAL;
+
+	do {
+		vaddr = buddy->vaddr;
+		new = vaddr + BUDDY_CHUNK_BYTES;
+
+		if (new > BUDDY_VADDR_SIZE)
+			return -EINVAL;
+
+		old = __sync_val_compare_and_swap(&buddy->vaddr, vaddr, new);
+	} while (old != vaddr && can_loop);
+
+	if (old != vaddr)
+		return -EINVAL;
+
+	*vaddrp = BUDDY_VADDR_OFFSET + vaddr;
+
+	return 0;
+}
+
+static u64 arena_next_pow2(__u64 n)
+{
+	n--;
+	n |= n >> 1;
+	n |= n >> 2;
+	n |= n >> 4;
+	n |= n >> 8;
+	n |= n >> 16;
+	n |= n >> 32;
+	n++;
+
+	return n;
+}
+
+__weak
+int idx_set_allocated(buddy_chunk_t __arg_arena *chunk, u64 idx, bool allocated)
+{
+	bool already_allocated;
+
+	if (unlikely(idx >= BUDDY_CHUNK_ITEMS)) {
+		arena_stderr("setting state of invalid idx (%llu, max %d)\n", idx,
+			     BUDDY_CHUNK_ITEMS);
+		return -EINVAL;
+	}
+
+	already_allocated = chunk->allocated[idx / 8] & (1 << (idx % 8));
+	if (unlikely(already_allocated == allocated)) {
+		arena_stderr("Double %s of idx %ld for chunk %p",
+				allocated ? "alloc" : "free",
+				idx, chunk);
+		return -EINVAL;
+	}
+
+	if (allocated)
+		chunk->allocated[idx / 8] |= 1 << (idx % 8);
+	else
+		chunk->allocated[idx / 8] &= ~(1 << (idx % 8));
+
+	return 0;
+}
+
+static int idx_is_allocated(buddy_chunk_t *chunk, u64 idx, bool *allocated)
+{
+	if (unlikely(idx >= BUDDY_CHUNK_ITEMS)) {
+		arena_stderr("getting state of invalid idx (%llu, max %d)\n", idx,
+			     BUDDY_CHUNK_ITEMS);
+		return -EINVAL;
+	}
+
+	*allocated = chunk->allocated[idx / 8] & (1 << (idx % 8));
+	return 0;
+}
+
+__weak
+int idx_set_order(buddy_chunk_t __arg_arena *chunk, u64 idx, u8 order)
+{
+	u8 prev_order;
+
+	if (unlikely(order >= BUDDY_CHUNK_NUM_ORDERS)) {
+		arena_stderr("setting invalid order %u\n", order);
+		return -EINVAL;
+	}
+
+	if (unlikely(idx >= BUDDY_CHUNK_ITEMS)) {
+		arena_stderr("setting order of invalid idx (%llu, max %d)\n", idx,
+			     BUDDY_CHUNK_ITEMS);
+		return -EINVAL;
+	}
+
+	/*
+	 * We store two order instances per byte, one per nibble.
+	 * Retain the existing nibble.
+	 */
+	prev_order = chunk->orders[idx / 2];
+	if (idx & 0x1) {
+		order &= 0xf;
+		order |= (prev_order & 0xf0);
+	} else {
+		order <<= 4;
+		order |= (prev_order & 0xf);
+	}
+
+	chunk->orders[idx / 2] = order;
+
+	return 0;
+}
+
+static u8 idx_get_order(buddy_chunk_t *chunk, u64 idx)
+{
+	u8 result;
+
+	_Static_assert(BUDDY_CHUNK_NUM_ORDERS <= 16,
+		       "order must fit in 4 bits");
+
+	if (unlikely(idx >= BUDDY_CHUNK_ITEMS)) {
+		arena_stderr("getting order of invalid idx %llu\n", idx);
+		return BUDDY_CHUNK_NUM_ORDERS;
+	}
+
+	result = chunk->orders[idx / 2];
+
+	return (idx & 0x1) ? (result & 0xf) : (result >> 4);
+}
+
+static void __arena *idx_to_addr(buddy_chunk_t *chunk, size_t idx)
+{
+	u64 address;
+
+	if (unlikely(idx >= BUDDY_CHUNK_ITEMS)) {
+		arena_stderr("translating invalid idx %lu\n", idx);
+		return NULL;
+	}
+
+	/*
+	 * The data blocks start in the chunk after the metadata block.
+	 * We find the actual address by indexing into the region at an
+	 * BUDDY_MIN_ALLOC_BYTES granularity, the minimum allowed.
+	 * The index number already accounts for the fact that the first
+	 * blocks in the chunk are occupied by the metadata, so we do
+	 * not need to offset it.
+	 */
+
+	address = (u64)chunk + (idx * BUDDY_MIN_ALLOC_BYTES);
+
+	return (void __arena *)address;
+}
+
+static buddy_header_t *idx_to_header(buddy_chunk_t *chunk, size_t idx)
+{
+	bool allocated;
+	u64 address;
+
+	if (unlikely(idx_is_allocated(chunk, idx, &allocated))) {
+		arena_stderr("accessing invalid idx 0x%lx\n", idx);
+		return NULL;
+	}
+
+	if (unlikely(allocated)) {
+		arena_stderr("accessing allocated idx 0x%lx as header\n", idx);
+		return NULL;
+	}
+
+	address = (u64)idx_to_addr(chunk, idx);
+	if (!address)
+		return NULL;
+
+	/*
+	 * Offset the header within the block. This avoids accidental overwrites
+	 * to the header because of off-by-one errors when using adjacent blocks.
+	 *
+	 * The offset has been chosen as a compromise between ASAN effectiveness
+	 * and allocator granularity:
+	 * 1) ASAN dictates valid data runs are 8-byte aligned.
+	 * 2) We want to keep a low minimum allocation size (currently 16).
+	 *
+	 * As a result, we have only two possible positions for the header: Bytes
+	 * 0 and 8. Keeping the header in byte 0 means off-by-ones from the previous
+	 * block touch the header, and, since the header must be accessible, ASAN
+	 * will not trigger. Keeping the header on byte 8 means off-by-one errors from
+	 * the previous block are caught by ASAN. Negative offsets are rarer, so
+	 * while accesses into the block from the next block are possible, they are
+	 * less probable.
+	 */
+
+	return (buddy_header_t *)(address + BUDDY_HEADER_OFF);
+}
+
+static void header_add_freelist(buddy_chunk_t *chunk, buddy_header_t *header,
+		u64 idx, u8 order)
+{
+	buddy_header_t *tmp_header;
+
+	idx_set_order(chunk, idx, order);
+
+	header->next_index = chunk->freelists[order];
+	header->prev_index = BUDDY_CHUNK_ITEMS;
+
+	if (header->next_index != BUDDY_CHUNK_ITEMS) {
+		tmp_header = idx_to_header(chunk, header->next_index);
+		tmp_header->prev_index = idx;
+	}
+
+	chunk->freelists[order] = idx;
+}
+
+static void header_remove_freelist(buddy_chunk_t  *chunk,
+				   buddy_header_t *header, u8 order)
+{
+	buddy_header_t *tmp_header;
+
+	if (header->prev_index != BUDDY_CHUNK_ITEMS) {
+		tmp_header = idx_to_header(chunk, header->prev_index);
+		tmp_header->next_index = header->next_index;
+	}
+
+	if (header->next_index != BUDDY_CHUNK_ITEMS) {
+		tmp_header = idx_to_header(chunk, header->next_index);
+		tmp_header->prev_index = header->prev_index;
+	}
+
+	/* Pop off the list head if necessary. */
+	if (idx_to_header(chunk, chunk->freelists[order]) == header)
+		chunk->freelists[order] = header->next_index;
+
+	header->prev_index = BUDDY_CHUNK_ITEMS;
+	header->next_index = BUDDY_CHUNK_ITEMS;
+}
+
+static u64 size_to_order(size_t size)
+{
+	u64 order;
+
+	/*
+	 * Legal sizes are [1, 4GiB] (the biggest possible arena).
+	 * Of course, sizes close to GiB are practically impossible
+	 * to fulfill and allocation will fail, but that's taken care
+	 * of by the caller.
+	 */
+
+	if (unlikely(size == 0 || size > (1UL << 32))) {
+		arena_stderr("illegal size request %lu\n", size);
+		return 64;
+	}
+	/*
+	 * To find the order of the allocation we find the first power of two
+	 * >= the requested size, take the log2, then adjust it for the minimum
+	 * allocation size by removing the minimum shift from it. Requests
+	 * smaller than the minimum allocation size are rounded up.
+	 */
+	order = arena_fls(arena_next_pow2(size)) - 1;
+	if (order < BUDDY_MIN_ALLOC_SHIFT)
+		return 0;
+
+	return order - BUDDY_MIN_ALLOC_SHIFT;
+}
+
+__weak
+int add_leftovers_to_freelist(buddy_chunk_t __arg_arena *chunk, u32 cur_idx,
+		u64 min_order, u64 max_order)
+{
+	buddy_header_t *header;
+	u64 ord;
+	u32 idx;
+
+	for (ord = min_order; ord < max_order && can_loop; ord++) {
+		/* Mark the buddy as free and add it to the freelists. */
+		idx = cur_idx + (1 << ord);
+
+		header = idx_to_header(chunk, idx);
+		if (unlikely(!header)) {
+			arena_stderr("idx %u has no header", idx);
+			return -EINVAL;
+		}
+
+		asan_unpoison(header, sizeof(*header));
+
+		header_add_freelist(chunk, header, idx, ord);
+	}
+
+	return 0;
+}
+
+static buddy_chunk_t *buddy_chunk_get(buddy_t *buddy)
+{
+	u64 order, ord, min_order, max_order;
+	buddy_chunk_t  *chunk;
+	size_t left;
+	int power2;
+	u64 vaddr;
+	u32 idx;
+	int ret;
+
+	/*
+	 * Step 1:  Allocate a properly aligned chunk, and
+	 * prep it for insertion into the buddy allocator.
+	 * We don't need the allocator lock until step 2.
+	 */
+
+	ret = buddy_alloc_arena_vaddr(buddy, &vaddr);
+	if (ret)
+		return NULL;
+
+	/* Addresses must be aligned to the chunk boundary. */
+	if (vaddr % BUDDY_CHUNK_BYTES)
+		return NULL;
+
+	/* Unreserve the address space. */
+	bpf_arena_free_pages(&arena, (void __arena *)vaddr,
+			     BUDDY_CHUNK_PAGES);
+
+	chunk = bpf_arena_alloc_pages(&arena, (void __arena *)vaddr,
+				      BUDDY_CHUNK_PAGES, NUMA_NO_NODE, 0);
+	if (!chunk) {
+		arena_stderr("[ALLOC FAILED]");
+		return NULL;
+	}
+
+	if (buddy_lock(buddy)) {
+		/*
+		 * We cannot reclaim the vaddr space, but that is ok - this
+		 * operation should always succeed. The error path exists to
+		 * catch accidental deadlocks that would cause -ENOMEM errors
+		 * in the program as the allocator fails to refill itself, in
+		 * which case vaddr usage is the least of our worries.
+		 */
+		bpf_arena_free_pages(&arena, (void __arena *)vaddr, BUDDY_CHUNK_PAGES);
+		return NULL;
+	}
+
+	asan_poison(chunk, BUDDY_POISONED, BUDDY_CHUNK_PAGES * __PAGE_SIZE);
+
+	/* Unpoison the chunk itself. */
+	asan_unpoison(chunk, sizeof(*chunk));
+
+	/* Mark all freelists as empty. */
+	for (ord = zero; ord < BUDDY_CHUNK_NUM_ORDERS && can_loop; ord++)
+		chunk->freelists[ord] = BUDDY_CHUNK_ITEMS;
+
+	/*
+	 * Initialize the chunk by carving out a page range to hold the metadata
+	 * struct above, then dumping the rest of the pages into the allocator.
+	 */
+
+	_Static_assert(BUDDY_CHUNK_PAGES * __PAGE_SIZE >=
+			       BUDDY_MIN_ALLOC_BYTES *
+				       BUDDY_CHUNK_ITEMS,
+		       "chunk must fit within the allocation");
+
+	/*
+	 * Step 2: Reserve a block for the chunk metadata, then break
+	 * the rest of the full allocation into the different buckets.
+	 * We allocate the memory by grabbing blocks of progressively
+	 * smaller sizes from the allocator, which are guaranteed to be
+	 * contiguous.
+	 *
+	 * This operation also populates the allocator.
+	 *
+	 * Algorithm state:
+	 *
+	 * - max_order: The order of the last allocation we made
+	 * - left: How many bytes are left to allocate
+	 * - idx: Current index into the top-level block we are
+	 *   allocating from.
+	 *
+	 * Each iteration:
+	 * - Find the largest power-of-2 allocation not larger than left (infimum)
+	 * - Reserve a block of that size, along with its buddy
+	 * - For every order in [infimum + 1, max_order), carve out a block
+	 *   and put it into the allocator.
+	 *
+	 *  Example: Chunk size 0b1010000 (80 bytes)
+	 *
+	 *  Iteration 1:
+	 *
+	 *   idx  infimum                             1 << max_order
+	 *   0        64        128                    1 << 20
+	 *   |________|_________|______________________|
+	 *
+	 *   Blocks set aside:
+	 *   	[0, 64)         - Completely allocated
+	 *   	[64, 128)       - Will be further split in the next iteration
+	 *
+	 *   Blocks added to the allocator:
+	 *   	[128, 256)
+	 *   	[256, 512)
+	 *   	...
+	 *   	[1 << 18, 1 << 19)
+	 *   	[1 << 19, 1 << 20)
+	 *
+	 *  Iteration 2:
+	 *
+	 *   idx  infimum			   idx + 1 << max_order
+	 *   64	      80	96		   	64 + 1 << 6 = 128
+	 *   |________|_________|______________________|
+	 *
+	 *   Blocks set aside:
+	 *   	[64, 80)	- Completely allocated
+	 *
+	 *   Blocks added to the allocator:
+	 *      [80, 96) - left == 0 so the buddy is unused and marked as freed
+	 *   	[96, 128)
+	 */
+	max_order = BUDDY_CHUNK_NUM_ORDERS;
+	left = sizeof(*chunk);
+	idx = 0;
+	while (left && can_loop) {
+		power2 = arena_fls(left) - 1;
+		/*
+		 * Note: The condition below only triggers to catch serious bugs
+		 * early. There is no sane way to undo any block insertions from
+		 * the allocated chunk, so just leak any leftover allocations,
+		 * emit a diagnostic, unlock and exit.
+		 */
+		if (unlikely(power2 >= BUDDY_CHUNK_NUM_ORDERS)) {
+			arena_stderr(
+				"buddy chunk metadata requires an allocation of order %d\n",
+				power2);
+			arena_stderr(
+				"chunk has a size of 0x%lx bytes (0x%lx bytes left)\n",
+				sizeof(*chunk), left);
+			buddy_unlock(buddy);
+
+			return NULL;
+		}
+
+		/* Round up allocations that are too small. */
+
+		left -= (power2 >= BUDDY_MIN_ALLOC_SHIFT) ? 1 << power2 : left;
+		order = (power2 >= BUDDY_MIN_ALLOC_SHIFT) ? power2 - BUDDY_MIN_ALLOC_SHIFT : 0;
+
+		if (idx_set_allocated(chunk, idx, true)) {
+			buddy_unlock(buddy);
+			return NULL;
+		}
+
+		/*
+		 * Starting an order above the one we allocated, populate
+		 * the allocator with free blocks. If this is the last
+		 * allocation (left == 0), also mark the buddy as free.
+		 *
+		 * See comment above about error handling: The error path
+		 * is only there as a way to mitigate deeply buggy allocator
+		 * states by emitting a diagnostic in add_leftovers_to_freelist()
+		 * and leaking any memory not added to the freelists.
+		 */
+		min_order = left ? order + 1 : order;
+		if (add_leftovers_to_freelist(chunk, idx, min_order, max_order)) {
+			buddy_unlock(buddy);
+			return NULL;
+		}
+
+		/* Adjust the index. */
+		idx += 1 << order;
+		max_order = order;
+	}
+
+	buddy_unlock(buddy);
+
+	return chunk;
+}
+
+__weak int buddy_init(buddy_t __arg_arena *buddy)
+{
+	buddy_chunk_t *chunk;
+	int ret;
+
+	if (!asan_ready())
+		return -EINVAL;
+
+	/* Reserve enough address space to ensure allocations are aligned. */
+	ret = buddy_reserve_arena_vaddr(buddy);
+	if (ret)
+		return ret;
+
+	_Static_assert(BUDDY_CHUNK_PAGES > 0,
+		       "chunk must use one or more pages");
+
+	chunk = buddy_chunk_get(buddy);
+	if (!chunk)
+		return -ENOMEM;
+
+	if (buddy_lock(buddy)) {
+		bpf_arena_free_pages(&arena, chunk, BUDDY_CHUNK_PAGES);
+		return -EINVAL;
+	}
+
+	/* The chunk was already unpoisoned during allocation. */
+	chunk->next = buddy->first_chunk;
+
+	/* Put the chunk at the beginning of the list. */
+	buddy->first_chunk = chunk;
+
+	buddy_unlock(buddy);
+
+	return 0;
+}
+
+/*
+ * Destroy the allocator. This does not check whether there are any allocations
+ * currently in use, so any pages being accessed will start taking arena faults.
+ * We do not take a lock because we are freeing arena pages, and nobody should
+ * be using the allocator at that point in the execution.
+ */
+__weak int buddy_destroy(buddy_t __arg_arena *buddy)
+{
+	buddy_chunk_t *chunk, *next;
+
+	if (!buddy)
+		return -EINVAL;
+
+	/*
+	 * Traverse all buddy chunks and free them back to the arena
+	 * with the same granularity they were allocated with.
+	 */
+	for (chunk = buddy->first_chunk; chunk && can_loop; chunk = next) {
+		next = chunk->next;
+
+		/* Wholesale poison the entire block. */
+		asan_poison(chunk, BUDDY_POISONED,
+			    BUDDY_CHUNK_PAGES * __PAGE_SIZE);
+		bpf_arena_free_pages(&arena, chunk, BUDDY_CHUNK_PAGES);
+	}
+
+	/* Free up any part of the address space that did not get used. */
+	buddy_unreserve_arena_vaddr(buddy);
+
+	/* Clear all fields. */
+	buddy->first_chunk = NULL;
+
+	return 0;
+}
+
+__weak u64 buddy_chunk_alloc(buddy_chunk_t __arg_arena *chunk, int order_req)
+{
+	buddy_header_t *header, *tmp_header, *next_header;
+	u32 idx, tmpidx, retidx;
+	u64 address;
+	u64 order = 0;
+	u64 i;
+
+	for (order = order_req; order < BUDDY_CHUNK_NUM_ORDERS && can_loop; order++) {
+		if (chunk->freelists[order] != BUDDY_CHUNK_ITEMS)
+			break;
+	}
+
+	if (order >= BUDDY_CHUNK_NUM_ORDERS)
+		return (u64)NULL;
+
+	retidx = chunk->freelists[order];
+	header = idx_to_header(chunk, retidx);
+	if (unlikely(!header))
+		return (u64)NULL;
+
+	chunk->freelists[order] = header->next_index;
+
+	if (header->next_index != BUDDY_CHUNK_ITEMS) {
+		next_header = idx_to_header(chunk, header->next_index);
+		next_header->prev_index = BUDDY_CHUNK_ITEMS;
+	}
+
+	header->prev_index = BUDDY_CHUNK_ITEMS;
+	header->next_index = BUDDY_CHUNK_ITEMS;
+	if (idx_set_order(chunk, retidx, order_req))
+		return (u64)NULL;
+
+	if (idx_set_allocated(chunk, retidx, true))
+		return (u64)NULL;
+
+	/*
+	 * Do not unpoison the address yet; the caller will do it,
+	 * since only the caller knows the exact allocation size requested.
+	 */
+	address = (u64)idx_to_addr(chunk, retidx);
+	if (!address)
+		return (u64)NULL;
+
+	/* If we allocated from a larger-order chunk, split the buddies. */
+	for (i = order_req; i < order && can_loop; i++) {
+		/*
+		 * Flip the bit for the current order (the bit is guaranteed
+		 * to be 0, so just add 1 << i).
+		 */
+		idx = retidx + (1 << i);
+
+		/* Add the buddy of the allocation to the free list. */
+		header = idx_to_header(chunk, idx);
+		/* Unpoison the buddy header */
+		asan_unpoison(header, sizeof(*header));
+
+		if (idx_set_order(chunk, idx, i))
+			return (u64)NULL;
+
+		/* Push the header to the beginning of the freelists list. */
+		tmpidx = chunk->freelists[i];
+
+		header->prev_index = BUDDY_CHUNK_ITEMS;
+		header->next_index = tmpidx;
+
+		if (tmpidx != BUDDY_CHUNK_ITEMS) {
+			tmp_header = idx_to_header(chunk, tmpidx);
+			tmp_header->prev_index = idx;
+		}
+
+		chunk->freelists[i] = idx;
+	}
+
+	return address;
+}
+
+/* Scan the existing chunks for available memory. */
+static u64 buddy_alloc_from_existing_chunks(buddy_t *buddy, int order)
+{
+	buddy_chunk_t *chunk;
+	u64 address;
+
+	for (chunk = buddy->first_chunk; chunk != NULL && can_loop;
+	     chunk = chunk->next) {
+		address = buddy_chunk_alloc(chunk, order);
+		if (address)
+			return address;
+	}
+
+	return (u64)NULL;
+}
+
+/*
+ * Try an allocation from a newly allocated chunk. Also
+ * incorporate the chunk into the linked list.
+ */
+static u64 buddy_alloc_from_new_chunk(buddy_t *buddy, buddy_chunk_t *chunk, int order)
+{
+	u64 address;
+
+	if (buddy_lock(buddy))
+		return (u64)NULL;
+
+	/*
+	 * Add the chunk into the allocator and try
+	 * to allocate specifically from that chunk.
+	 */
+	chunk->next = buddy->first_chunk;
+	buddy->first_chunk = chunk;
+
+	address = buddy_chunk_alloc(buddy->first_chunk, order);
+
+	buddy_unlock(buddy);
+
+	return (u64)address;
+}
+
+__weak
+u64 buddy_alloc_internal(buddy_t __arg_arena *buddy, size_t size)
+{
+	buddy_chunk_t *chunk;
+	u64 address = (u64)NULL;
+	int order;
+
+	if (!buddy)
+		return (u64)NULL;
+
+	order = size_to_order(size);
+	if (order >= BUDDY_CHUNK_NUM_ORDERS || order < 0) {
+		arena_stderr("invalid order %d (sz %lu)\n", order, size);
+		return (u64)NULL;
+	}
+
+	if (buddy_lock(buddy))
+		return (u64)NULL;
+
+	address = buddy_alloc_from_existing_chunks(buddy, order);
+	buddy_unlock(buddy);
+	if (address)
+		goto done;
+
+	/* Get a new chunk. */
+	chunk = buddy_chunk_get(buddy);
+	if (chunk)
+		address = buddy_alloc_from_new_chunk(buddy, chunk, order);
+
+done:
+	/* If we failed to allocate memory, return NULL. */
+	if (!address)
+		return (u64)NULL;
+
+	/*
+	 * Unpoison exactly the number of bytes requested. If the
+	 * allocation is smaller than the header, we must poison any
+	 * unused bytes that were part of the header.
+	 */
+	if (size < BUDDY_HEADER_OFF + sizeof(buddy_header_t))
+		asan_poison((u8 __arena *)address + BUDDY_HEADER_OFF,
+			    BUDDY_POISONED, sizeof(buddy_header_t));
+
+	asan_unpoison((u8 __arena *)address, size);
+
+	return address;
+}
+
+static __always_inline int buddy_free_unlocked(buddy_t *buddy, u64 addr)
+{
+	buddy_header_t *header, *buddy_header;
+	u64 idx, buddy_idx, tmp_idx;
+	buddy_chunk_t *chunk;
+	bool allocated;
+	u8 order;
+	int ret;
+
+	if (!buddy)
+		return -EINVAL;
+
+	if (addr & (BUDDY_MIN_ALLOC_BYTES - 1)) {
+		arena_stderr("Freeing unaligned address %llx\n", addr);
+		return -EINVAL;
+	}
+
+	/* Get (chunk, idx) out of the address. */
+	chunk = (void __arena *)(addr & ~BUDDY_CHUNK_OFFSET_MASK);
+	idx = (addr & BUDDY_CHUNK_OFFSET_MASK) / BUDDY_MIN_ALLOC_BYTES;
+
+	/* Mark the block as unallocated so we can access the header. */
+	ret = idx_set_allocated(chunk, idx, false);
+	if (ret)
+		return ret;
+
+	order  = idx_get_order(chunk, idx);
+	header = idx_to_header(chunk, idx);
+
+	/* The header is in the block itself, keep it unpoisoned. */
+	asan_poison((u8 __arena *)addr, BUDDY_POISONED,
+		    BUDDY_MIN_ALLOC_BYTES << order);
+	asan_unpoison(header, sizeof(*header));
+
+	/*
+	 * Coalescing loop. Merge with free buddies of equal order.
+	 * For every coalescing step, keep the left buddy and
+	 * drop the right buddy's header.
+	 */
+	for (; order < BUDDY_CHUNK_NUM_ORDERS && can_loop; order++) {
+		buddy_idx = idx ^ (1 << order);
+
+		/* Check if the buddy is actually free. */
+		idx_is_allocated(chunk, buddy_idx, &allocated);
+		if (allocated)
+			break;
+
+		/*
+		 * If buddy is not the same order as the chunk
+		 * being freed, then we're done coalescing.
+		 */
+		if (idx_get_order(chunk, buddy_idx) != order)
+			break;
+
+		buddy_header = idx_to_header(chunk, buddy_idx);
+		header_remove_freelist(chunk, buddy_header, order);
+
+		/* Keep the left header out of the two buddies, drop the other one. */
+		if (buddy_idx < idx) {
+			tmp_idx = idx;
+			idx = buddy_idx;
+			buddy_idx = tmp_idx;
+		}
+
+		/* Update the right buddy's order before dropping its header. */
+		idx_set_order(chunk, buddy_idx, order);
+
+		buddy_header = idx_to_header(chunk, buddy_idx);
+		asan_poison(buddy_header, BUDDY_POISONED,
+			    sizeof(*buddy_header));
+	}
+
+	/* Header is properly freed but not on any freelist yet. */
+	idx_set_order(chunk, idx, order);
+
+	header = idx_to_header(chunk, idx);
+	header_add_freelist(chunk, header, idx, order);
+
+	return 0;
+}
+
+__weak int buddy_free_internal(buddy_t __arg_arena *buddy, u64 addr)
+{
+	int ret;
+
+	if (!buddy)
+		return -EINVAL;
+
+	/* Freeing NULL is a valid no-op. */
+	if (!addr)
+		return 0;
+
+	ret = buddy_lock(buddy);
+	if (ret)
+		return ret;
+
+	ret = buddy_free_unlocked(buddy, addr);
+
+	buddy_unlock(buddy);
+
+	return ret;
+}
+
+__weak char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/libarena/src/common.bpf.c b/tools/testing/selftests/bpf/libarena/src/common.bpf.c
index 84e8a8b7d42e..e5da1e37e83e 100644
--- a/tools/testing/selftests/bpf/libarena/src/common.bpf.c
+++ b/tools/testing/selftests/bpf/libarena/src/common.bpf.c
@@ -1,11 +1,13 @@
 // SPDX-License-Identifier: LGPL-2.1 OR BSD-2-Clause
 /* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
 #include <libarena/common.h>
-
 #include <libarena/asan.h>
+#include <libarena/buddy.h>
 
 const volatile u32 zero = 0;
 
+buddy_t buddy;
+
 int arena_fls(__u64 word)
 {
 	if (!word)
@@ -28,4 +30,23 @@ __weak int arena_alloc_reserve(struct arena_alloc_reserve_args *args)
 	return bpf_arena_reserve_pages(&arena, NULL, args->nr_pages);
 }
 
+SEC("syscall")
+__weak int arena_buddy_reset(void)
+{
+	buddy_destroy(&buddy);
+
+	return buddy_init(&buddy);
+}
+
+__weak u64 malloc_internal(size_t size)
+{
+	return buddy_alloc_internal(&buddy, size);
+}
+
+__weak void free(void __arg_arena __arena *ptr)
+{
+	buddy_free_internal(&buddy, (u64)ptr);
+}
+
 char _license[] SEC("license") = "GPL";
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH bpf-next v9 7/8] selftests/bpf: Add selftests for libarena buddy allocator
  2026-04-26 19:03 [PATCH bpf-next v9 0/8] Introduce arena library and runtime Emil Tsalapatis
                   ` (5 preceding siblings ...)
  2026-04-26 19:03 ` [PATCH bpf-next v9 6/8] selftests/bpf: Add buddy allocator for libarena Emil Tsalapatis
@ 2026-04-26 19:03 ` Emil Tsalapatis
  2026-04-26 21:09   ` sashiko-bot
  2026-04-26 19:03 ` [PATCH bpf-next v9 8/8] selftests/bpf: Reuse stderr parsing for libarena ASAN tests Emil Tsalapatis
  2026-04-27  1:20 ` [PATCH bpf-next v9 0/8] Introduce arena library and runtime patchwork-bot+netdevbpf
  8 siblings, 1 reply; 19+ messages in thread
From: Emil Tsalapatis @ 2026-04-26 19:03 UTC (permalink / raw)
  To: bpf
  Cc: ast, andrii, memxor, daniel, eddyz87, song, mattbobrowski,
	Emil Tsalapatis

Introduce selftests for the buddy allocator with and without
ASAN. Add the libarena selftests both to the libarena test
runner and to test_progs, so that they are a) available when
libarena is pulled as a standalone library, and b) exercised
along with all other test programs in this directory.

ASAN for libarena requires LLVM 22. Add logic in the top-level
selftests Makefile to only compile the ASAN variant if the
compiler supports it, otherwise skip the test.

Signed-off-by: Emil Tsalapatis <emil@etsalapatis.com>
---
 .../libarena/selftests/st_asan_buddy.bpf.c    | 240 ++++++++++++++++++
 .../bpf/libarena/selftests/st_buddy.bpf.c     | 209 +++++++++++++++
 .../selftests/bpf/prog_tests/libarena.c       |  66 +++++
 .../selftests/bpf/prog_tests/libarena_asan.c  |  91 +++++++
 4 files changed, 606 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/libarena/selftests/st_asan_buddy.bpf.c
 create mode 100644 tools/testing/selftests/bpf/libarena/selftests/st_buddy.bpf.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/libarena.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/libarena_asan.c

diff --git a/tools/testing/selftests/bpf/libarena/selftests/st_asan_buddy.bpf.c b/tools/testing/selftests/bpf/libarena/selftests/st_asan_buddy.bpf.c
new file mode 100644
index 000000000000..9dd2980b5d6c
--- /dev/null
+++ b/tools/testing/selftests/bpf/libarena/selftests/st_asan_buddy.bpf.c
@@ -0,0 +1,240 @@
+// SPDX-License-Identifier: LGPL-2.1 OR BSD-2-Clause
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+
+#include <libarena/common.h>
+#include <libarena/asan.h>
+#include <libarena/buddy.h>
+
+extern buddy_t buddy;
+
+#ifdef BPF_ARENA_ASAN
+
+#include "st_asan_common.h"
+
+static __always_inline int asan_test_buddy_oob_single(size_t alloc_size)
+{
+	u8 __arena *mem;
+	int ret, i;
+
+	ret = asan_validate();
+	if (ret < 0)
+		return ret;
+
+	mem = buddy_alloc(&buddy, alloc_size);
+	if (!mem) {
+		arena_stdout("buddy_alloc failed for size %lu", alloc_size);
+		return -ENOMEM;
+	}
+
+	ret = asan_validate();
+	if (ret < 0)
+		return ret;
+
+	for (i = zero; i < alloc_size && can_loop; i++) {
+		mem[i] = 0xba;
+		ret = asan_validate_addr(false, &mem[i]);
+		if (ret < 0)
+			return ret;
+	}
+
+	mem[alloc_size] = 0xba;
+	ret = asan_validate_addr(true, &mem[alloc_size]);
+	if (ret < 0)
+		return ret;
+
+	buddy_free(&buddy, mem);
+
+	return 0;
+}
+
+/*
+ * Factored out because asan_validate_addr is complex enough to cause
+ * verification failures if verified with the rest of asan_test_buddy_uaf_single.
+ */
+__weak int asan_test_buddy_byte(u8 __arena __arg_arena *mem, int i, bool freed)
+{
+	int ret;
+
+	/* The header in freed blocks doesn't get poisoned. */
+	if (freed && BUDDY_HEADER_OFF <= i &&
+		i < BUDDY_HEADER_OFF + sizeof(struct buddy_header))
+		return 0;
+
+	mem[i] = 0xba;
+	ret = asan_validate_addr(freed, &mem[i]);
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
+
+__weak int asan_test_buddy_uaf_single(size_t alloc_size)
+{
+	u8 __arena *mem;
+	int ret;
+	int i;
+
+	mem = buddy_alloc(&buddy, alloc_size);
+	if (!mem) {
+		arena_stdout("buddy_alloc failed for size %lu", alloc_size);
+		return -ENOMEM;
+	}
+
+	ret = asan_validate();
+	if (ret < 0)
+		return ret;
+
+	for (i = zero; i < alloc_size && can_loop; i++) {
+		ret = asan_test_buddy_byte(mem, i, false);
+		if (ret)
+			return ret;
+	}
+
+	ret = asan_validate();
+	if (ret < 0)
+		return ret;
+
+	buddy_free(&buddy, mem);
+
+	for (i = zero; i < alloc_size && can_loop; i++) {
+		ret = asan_test_buddy_byte(mem, i, true);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+struct buddy_blob {
+	volatile u8 mem[48];
+	u8 oob;
+};
+
+static __always_inline int asan_test_buddy_blob_single(void)
+{
+	volatile struct buddy_blob __arena *blob;
+	const size_t alloc_size = sizeof(struct buddy_blob) - 1;
+	int ret;
+
+	blob = buddy_alloc(&buddy, alloc_size);
+	if (!blob)
+		return -ENOMEM;
+
+	blob->mem[0] = 0xba;
+	ret = asan_validate_addr(false, &blob->mem[0]);
+	if (ret < 0)
+		return ret;
+
+	blob->mem[47] = 0xba;
+	ret = asan_validate_addr(false, &blob->mem[47]);
+	if (ret < 0)
+		return ret;
+
+	blob->oob = 0;
+	ret = asan_validate_addr(true, &blob->oob);
+	if (ret < 0)
+		return ret;
+
+	buddy_free(&buddy, (void __arena *)blob);
+
+	return 0;
+}
+
+SEC("syscall")
+__weak int asan_test_buddy_oob(void)
+{
+	size_t sizes[] = {
+		7, 8, 17, 18, 64, 256, 317, 512, 1024,
+	};
+	int ret, i;
+
+	ret = buddy_init(&buddy);
+	if (ret) {
+		arena_stdout("buddy_init failed with %d", ret);
+		return ret;
+	}
+
+	for (i = zero; i < sizeof(sizes) / sizeof(sizes[0]) && can_loop; i++) {
+		ret = asan_test_buddy_oob_single(sizes[i]);
+		if (ret) {
+			arena_stdout("%s:%d Failed for size %lu", __func__,
+				   __LINE__, sizes[i]);
+			buddy_destroy(&buddy);
+			return ret;
+		}
+	}
+
+	buddy_destroy(&buddy);
+
+	ret = asan_validate();
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
+
+SEC("syscall")
+__weak int asan_test_buddy_uaf(void)
+{
+	size_t sizes[] = { 16, 32, 64, 128, 256, 512, 1024, 16384 };
+	int ret, i;
+
+	ret = buddy_init(&buddy);
+	if (ret) {
+		arena_stdout("buddy_init failed with %d", ret);
+		return ret;
+	}
+
+	for (i = zero; i < sizeof(sizes) / sizeof(sizes[0]) && can_loop; i++) {
+		ret = asan_test_buddy_uaf_single(sizes[i]);
+		if (ret) {
+			arena_stdout("%s:%d Failed for size %lu", __func__,
+				   __LINE__, sizes[i]);
+			buddy_destroy(&buddy);
+			return ret;
+		}
+	}
+
+	buddy_destroy(&buddy);
+
+	ret = asan_validate();
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
+
+SEC("syscall")
+__weak int asan_test_buddy_blob(void)
+{
+	const int iters = 10;
+	int ret, i;
+
+	ret = buddy_init(&buddy);
+	if (ret) {
+		arena_stdout("buddy_init failed with %d", ret);
+		return ret;
+	}
+
+	for (i = zero; i < iters && can_loop; i++) {
+		ret = asan_test_buddy_blob_single();
+		if (ret) {
+			arena_stdout("%s:%d Failed on iteration %d", __func__,
+				   __LINE__, i);
+			buddy_destroy(&buddy);
+			return ret;
+		}
+	}
+
+	buddy_destroy(&buddy);
+
+	ret = asan_validate();
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
+
+#endif
+
+__weak char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/libarena/selftests/st_buddy.bpf.c b/tools/testing/selftests/bpf/libarena/selftests/st_buddy.bpf.c
new file mode 100644
index 000000000000..79e6f0baabfe
--- /dev/null
+++ b/tools/testing/selftests/bpf/libarena/selftests/st_buddy.bpf.c
@@ -0,0 +1,209 @@
+// SPDX-License-Identifier: LGPL-2.1 OR BSD-2-Clause
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+
+#include <libarena/common.h>
+
+#include <libarena/asan.h>
+#include <libarena/buddy.h>
+
+extern buddy_t buddy;
+
+struct segarr_entry {
+	u8 __arena *block;
+	size_t sz;
+	u8 poison;
+};
+
+#define SEGARRLEN (512)
+static struct segarr_entry __arena segarr[SEGARRLEN];
+static void __arena *ptrs[17];
+size_t __arena alloc_sizes[] = { 3, 17, 1025, 129, 16350, 333, 9, 517 };
+size_t __arena alloc_multiple_sizes[] = { 3, 17, 1025, 129, 16350, 333, 9, 517, 2099 };
+size_t __arena alloc_free_sizes[] = { 3, 17, 64, 129, 256, 333, 512, 517 };
+size_t __arena alignment_sizes[] = { 1, 3, 7, 8, 9, 15, 16, 17, 31,
+				     32, 64, 100, 128, 255, 256, 512, 1000 };
+
+SEC("syscall")
+__weak int test_buddy_create(void)
+{
+	const int iters = 10;
+	int ret, i;
+
+	for (i = zero; i < iters && can_loop; i++) {
+		ret = buddy_init(&buddy);
+		if (ret)
+			return ret;
+
+		ret = buddy_destroy(&buddy);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+SEC("syscall")
+__weak int test_buddy_alloc(void)
+{
+	void __arena *mem;
+	int ret, i;
+
+	for (i = zero; i < 8 && can_loop; i++) {
+		ret = buddy_init(&buddy);
+		if (ret)
+			return ret;
+
+		mem = buddy_alloc(&buddy, alloc_sizes[i]);
+		if (!mem) {
+			buddy_destroy(&buddy);
+			return -ENOMEM;
+		}
+
+		buddy_destroy(&buddy);
+	}
+
+	return 0;
+}
+
+SEC("syscall")
+__weak int test_buddy_alloc_free(void)
+{
+	const int iters = 800;
+	void __arena *mem;
+	int ret, i;
+
+	ret = buddy_init(&buddy);
+	if (ret)
+		return ret;
+
+	for (i = zero; i < iters && can_loop; i++) {
+		mem = buddy_alloc(&buddy, alloc_free_sizes[(i * 5) % 8]);
+		if (!mem) {
+			buddy_destroy(&buddy);
+			return -ENOMEM;
+		}
+
+		buddy_free(&buddy, mem);
+	}
+
+	buddy_destroy(&buddy);
+
+	return 0;
+}
+
+SEC("syscall")
+__weak int test_buddy_alloc_multiple(void)
+{
+	int ret, j;
+	u32 i, idx;
+	u8 __arena *mem;
+	size_t sz;
+	u8 poison;
+
+	ret = buddy_init(&buddy);
+	if (ret)
+		return ret;
+
+	/*
+	 * Cycle through each size, allocating an entry in the
+	 * segarr. Continue for SEGARRLEN iterations. For every
+	 * allocation write down the size, use the current index
+	 * as a poison value, and log it with the pointer in the
+	 * segarr entry. Use the poison value to poison the entire
+	 * allocated memory according to the size given.
+	 */
+	for (i = zero; i < SEGARRLEN && can_loop; i++) {
+		sz = alloc_multiple_sizes[i % 9];
+		poison = (u8)i;
+
+		mem = buddy_alloc(&buddy, sz);
+		if (!mem) {
+			buddy_destroy(&buddy);
+			arena_stdout("%s:%d", __func__, __LINE__);
+			return -ENOMEM;
+		}
+
+		segarr[i].block = mem;
+		segarr[i].sz = sz;
+		segarr[i].poison = poison;
+
+		for (j = zero; j < sz && can_loop; j++) {
+			mem[j] = poison;
+			if (mem[j] != poison) {
+				buddy_destroy(&buddy);
+				return -EINVAL;
+			}
+		}
+	}
+
+	/*
+	 * Go to index (i * 17) % SEGARRLEN and free the block it points to.
+	 * Before freeing, check that all bytes still hold the poison value
+	 * corresponding to the element. If any values are unexpected,
+	 * return an error. Skip some elements to test destroying the
+	 * buddy allocator while data is still allocated.
+	 */
+	for (i = 10; i < SEGARRLEN && can_loop; i++) {
+		idx = (i * 17) % SEGARRLEN;
+
+		mem = segarr[idx].block;
+		sz = segarr[idx].sz;
+		poison = segarr[idx].poison;
+
+		for (j = zero; j < sz && can_loop; j++) {
+			if (mem[j] != poison) {
+				buddy_destroy(&buddy);
+				arena_stdout("%s:%d %lx %u vs %u", __func__,
+					   __LINE__, (uintptr_t)&mem[j],
+					   mem[j], poison);
+				return -EINVAL;
+			}
+		}
+
+		buddy_free(&buddy, mem);
+	}
+
+	buddy_destroy(&buddy);
+
+	return 0;
+}
+
+SEC("syscall")
+__weak int test_buddy_alignment(void)
+{
+	int ret, i;
+
+	ret = buddy_init(&buddy);
+	if (ret)
+		return ret;
+
+	/* Allocate various sizes and check alignment */
+	for (i = zero; i < 17 && can_loop; i++) {
+		ptrs[i] = buddy_alloc(&buddy, alignment_sizes[i]);
+		if (!ptrs[i]) {
+			arena_stdout("alignment test: alloc failed for size %lu",
+				   alignment_sizes[i]);
+			buddy_destroy(&buddy);
+			return -ENOMEM;
+		}
+
+		/* Check 8-byte alignment */
+		if ((u64)ptrs[i] & 0x7) {
+			arena_stdout(
+				"alignment test: ptr %llx not 8-byte aligned (size %lu)",
+				(u64)ptrs[i], alignment_sizes[i]);
+			buddy_destroy(&buddy);
+			return -EINVAL;
+		}
+	}
+
+	/* Free all allocations */
+	for (i = zero; i < 17 && can_loop; i++)
+		buddy_free(&buddy, ptrs[i]);
+
+	buddy_destroy(&buddy);
+
+	return 0;
+}
+
+__weak char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/prog_tests/libarena.c b/tools/testing/selftests/bpf/prog_tests/libarena.c
new file mode 100644
index 000000000000..81bdb084c271
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/libarena.c
@@ -0,0 +1,66 @@
+// SPDX-License-Identifier: LGPL-2.1 OR BSD-2-Clause
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+#include <test_progs.h>
+#include <unistd.h>
+
+#include <libarena/common.h>
+#include <libarena/asan.h>
+#include <libarena/buddy.h>
+#include <libarena/userspace.h>
+
+#include "libarena/libarena.skel.h"
+
+static void run_libarena_test(struct libarena *skel, struct bpf_program *prog,
+		const char *name)
+{
+	int ret;
+
+	if (!strstr(name, "test_buddy")) {
+		ret = libarena_run_prog(bpf_program__fd(skel->progs.arena_buddy_reset));
+		if (!ASSERT_OK(ret, "arena_buddy_reset"))
+			return;
+	}
+
+	ret = libarena_run_prog(bpf_program__fd(prog));
+
+	ASSERT_OK(ret, name);
+}
+
+void test_libarena(void)
+{
+	struct arena_alloc_reserve_args args;
+	struct libarena *skel;
+	struct bpf_program *prog;
+	int ret;
+
+	skel = libarena__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "open_and_load"))
+		return;
+
+	ret = libarena__attach(skel);
+	if (!ASSERT_OK(ret, "attach"))
+		goto out;
+
+	args.nr_pages = ARENA_RESERVE_PAGES_DFL;
+
+	ret = libarena_run_prog_args(bpf_program__fd(skel->progs.arena_alloc_reserve),
+			&args, sizeof(args));
+	if (!ASSERT_OK(ret, "arena_alloc_reserve"))
+		goto out;
+
+	bpf_object__for_each_program(prog, skel->obj) {
+		const char *name = bpf_program__name(prog);
+
+		if (!libarena_is_test_prog(name))
+			continue;
+
+		if (!test__start_subtest(name))
+			continue;
+
+		run_libarena_test(skel, prog, name);
+	}
+
+out:
+	libarena__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/prog_tests/libarena_asan.c b/tools/testing/selftests/bpf/prog_tests/libarena_asan.c
new file mode 100644
index 000000000000..b4fba10cdfbf
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/libarena_asan.c
@@ -0,0 +1,91 @@
+// SPDX-License-Identifier: LGPL-2.1 OR BSD-2-Clause
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+#include <test_progs.h>
+
+#ifdef HAS_BPF_ARENA_ASAN
+#include <unistd.h>
+
+#include <libarena/common.h>
+#include <libarena/asan.h>
+#include <libarena/buddy.h>
+#include <libarena/userspace.h>
+
+#include "libarena/libarena_asan.skel.h"
+
+static void run_libarena_asan_test(struct libarena_asan *skel,
+		struct bpf_program *prog, const char *name)
+{
+	int ret;
+
+	if (!strstr(name, "test_buddy")) {
+		ret = libarena_run_prog(bpf_program__fd(skel->progs.arena_buddy_reset));
+		if (!ASSERT_OK(ret, "arena_buddy_reset"))
+			return;
+	}
+
+	ret = libarena_run_prog(bpf_program__fd(prog));
+	ASSERT_OK(ret, name);
+}
+
+static void run_test(void)
+{
+	struct arena_alloc_reserve_args args;
+	struct libarena_asan *skel;
+	struct bpf_program *prog;
+	int ret;
+
+	skel = libarena_asan__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "open_and_load"))
+		return;
+
+	ret = libarena_asan__attach(skel);
+	if (!ASSERT_OK(ret, "attach"))
+		goto out;
+
+	args.nr_pages = ARENA_RESERVE_PAGES_DFL;
+
+	ret = libarena_run_prog_args(bpf_program__fd(skel->progs.arena_alloc_reserve),
+			&args, sizeof(args));
+	if (!ASSERT_OK(ret, "arena_alloc_reserve"))
+		goto out;
+
+	ret = libarena_asan_init(
+		bpf_program__fd(skel->progs.arena_get_info),
+		bpf_program__fd(skel->progs.asan_init),
+		(1ULL << 32) / sysconf(_SC_PAGESIZE));
+	if (!ASSERT_OK(ret, "libarena_asan_init"))
+		goto out;
+
+	bpf_object__for_each_program(prog, skel->obj) {
+		const char *name = bpf_program__name(prog);
+
+		if (!libarena_is_asan_test_prog(name))
+			continue;
+
+		if (!test__start_subtest(name))
+			continue;
+
+		run_libarena_asan_test(skel, prog, name);
+	}
+
+out:
+	libarena_asan__destroy(skel);
+}
+
+#endif /* HAS_BPF_ARENA_ASAN */
+
+/*
+ * Run the test depending on whether LLVM can compile arena ASAN
+ * programs.
+ */
+void test_libarena_asan(void)
+{
+#ifdef HAS_BPF_ARENA_ASAN
+	run_test();
+#else
+	test__skip();
+#endif
+}
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH bpf-next v9 8/8] selftests/bpf: Reuse stderr parsing for libarena ASAN tests
  2026-04-26 19:03 [PATCH bpf-next v9 0/8] Introduce arena library and runtime Emil Tsalapatis
                   ` (6 preceding siblings ...)
  2026-04-26 19:03 ` [PATCH bpf-next v9 7/8] selftests/bpf: Add selftests for libarena buddy allocator Emil Tsalapatis
@ 2026-04-26 19:03 ` Emil Tsalapatis
  2026-04-26 19:46   ` bot+bpf-ci
  2026-04-26 21:38   ` sashiko-bot
  2026-04-27  1:20 ` [PATCH bpf-next v9 0/8] Introduce arena library and runtime patchwork-bot+netdevbpf
  8 siblings, 2 replies; 19+ messages in thread
From: Emil Tsalapatis @ 2026-04-26 19:03 UTC (permalink / raw)
  To: bpf
  Cc: ast, andrii, memxor, daniel, eddyz87, song, mattbobrowski,
	Emil Tsalapatis

Add code to directly test the output of libarena ASAN tests.
The code reuses testing infrastructure originally written for BPF streams
to verify that ASAN emits call stacks when the selftests trigger
a memory error.

Since __stderr() testing uses logic from test_progs, it is only
available in the test_progs-based selftest runner. The standalone
runner still uses internal ASAN state to verify access errors are
triaged as expected.

Signed-off-by: Emil Tsalapatis <emil@etsalapatis.com>
---
 .../libarena/selftests/st_asan_buddy.bpf.c    | 18 +++++++
 .../libarena/selftests/test_progs_compat.h    | 15 ++++++
 .../selftests/bpf/prog_tests/libarena_asan.c  |  2 +
 tools/testing/selftests/bpf/test_loader.c     | 51 ++++++++++++++-----
 tools/testing/selftests/bpf/test_progs.h      |  2 +
 5 files changed, 76 insertions(+), 12 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/libarena/selftests/test_progs_compat.h

diff --git a/tools/testing/selftests/bpf/libarena/selftests/st_asan_buddy.bpf.c b/tools/testing/selftests/bpf/libarena/selftests/st_asan_buddy.bpf.c
index 9dd2980b5d6c..97acd50ffa5c 100644
--- a/tools/testing/selftests/bpf/libarena/selftests/st_asan_buddy.bpf.c
+++ b/tools/testing/selftests/bpf/libarena/selftests/st_asan_buddy.bpf.c
@@ -5,6 +5,9 @@
 #include <libarena/asan.h>
 #include <libarena/buddy.h>
 
+/* Required for parsing the ASAN call stacks. */
+#include "test_progs_compat.h"
+
 extern buddy_t buddy;
 
 #ifdef BPF_ARENA_ASAN
@@ -141,6 +144,11 @@ static __always_inline int asan_test_buddy_blob_single(void)
 }
 
 SEC("syscall")
+__stderr("Memory violation for address {{.*}} for write of size 1")
+__stderr("CPU: {{[0-9]+}} UID: 0 PID: {{[0-9]+}} Comm: {{.*}}")
+__stderr("Call trace:\n"
+"{{([a-zA-Z_][a-zA-Z0-9_]*\\+0x[0-9a-fA-F]+/0x[0-9a-fA-F]+\n"
+"|[ \t]+[^\n]+\n)*}}")
 __weak int asan_test_buddy_oob(void)
 {
 	size_t sizes[] = {
@@ -174,6 +182,11 @@ __weak int asan_test_buddy_oob(void)
 }
 
 SEC("syscall")
+__stderr("Memory violation for address {{.*}} for write of size 1")
+__stderr("CPU: {{[0-9]+}} UID: 0 PID: {{[0-9]+}} Comm: {{.*}}")
+__stderr("Call trace:\n"
+"{{([a-zA-Z_][a-zA-Z0-9_]*\\+0x[0-9a-fA-F]+/0x[0-9a-fA-F]+\n"
+"|[ \t]+[^\n]+\n)*}}")
 __weak int asan_test_buddy_uaf(void)
 {
 	size_t sizes[] = { 16, 32, 64, 128, 256, 512, 1024, 16384 };
@@ -205,6 +218,11 @@ __weak int asan_test_buddy_uaf(void)
 }
 
 SEC("syscall")
+__stderr("Memory violation for address {{.*}} for write of size 1")
+__stderr("CPU: {{[0-9]+}} UID: 0 PID: {{[0-9]+}} Comm: {{.*}}")
+__stderr("Call trace:\n"
+"{{([a-zA-Z_][a-zA-Z0-9_]*\\+0x[0-9a-fA-F]+/0x[0-9a-fA-F]+\n"
+"|[ \t]+[^\n]+\n)*}}")
 __weak int asan_test_buddy_blob(void)
 {
 	const int iters = 10;
diff --git a/tools/testing/selftests/bpf/libarena/selftests/test_progs_compat.h b/tools/testing/selftests/bpf/libarena/selftests/test_progs_compat.h
new file mode 100644
index 000000000000..9d431376c42f
--- /dev/null
+++ b/tools/testing/selftests/bpf/libarena/selftests/test_progs_compat.h
@@ -0,0 +1,15 @@
+// SPDX-License-Identifier: LGPL-2.1 OR BSD-2-Clause
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+#pragma once
+
+#ifdef __BPF__
+
+/* Selftests use these tags for compatibility with test_progs. */
+#define __test_tag(tag)		__attribute__((btf_decl_tag("comment:" XSTR(__COUNTER__) ":" tag)))
+#define __stderr(msg)		__test_tag("test_expect_stderr=" msg)
+#define __stderr_unpriv(msg)	__test_tag("test_expect_stderr_unpriv=" msg)
+
+#define XSTR(s) STR(s)
+#define STR(s) #s
+
+#endif
diff --git a/tools/testing/selftests/bpf/prog_tests/libarena_asan.c b/tools/testing/selftests/bpf/prog_tests/libarena_asan.c
index b4fba10cdfbf..d59d9dd12ef2 100644
--- a/tools/testing/selftests/bpf/prog_tests/libarena_asan.c
+++ b/tools/testing/selftests/bpf/prog_tests/libarena_asan.c
@@ -25,6 +25,8 @@ static void run_libarena_asan_test(struct libarena_asan *skel,
 
 	ret = libarena_run_prog(bpf_program__fd(prog));
 	ASSERT_OK(ret, name);
+
+	verify_test_stderr(skel->obj, prog);
 }
 
 static void run_test(void)
diff --git a/tools/testing/selftests/bpf/test_loader.c b/tools/testing/selftests/bpf/test_loader.c
index c4c34cae6102..ee637809a1d4 100644
--- a/tools/testing/selftests/bpf/test_loader.c
+++ b/tools/testing/selftests/bpf/test_loader.c
@@ -93,7 +93,7 @@ void test_loader_fini(struct test_loader *tester)
 	free(tester->log_buf);
 }
 
-static void free_msgs(struct expected_msgs *msgs)
+void free_msgs(struct expected_msgs *msgs)
 {
 	int i;
 
@@ -789,6 +789,43 @@ static void emit_stderr(const char *stderr, bool force)
 	fprintf(stdout, "STDERR:\n=============\n%s=============\n", stderr);
 }
 
+static void verify_stderr(int prog_fd, struct expected_msgs *msgs)
+{
+	LIBBPF_OPTS(bpf_prog_stream_read_opts, ropts);
+	char *buf;
+	int ret;
+
+	if (!msgs->cnt)
+		return;
+
+	buf = malloc(TEST_LOADER_LOG_BUF_SZ);
+	if (!ASSERT_OK_PTR(buf, "malloc"))
+		return;
+
+	ret = bpf_prog_stream_read(prog_fd, 2, buf, TEST_LOADER_LOG_BUF_SZ - 1,
+				    &ropts);
+	if (ret > 0) {
+		buf[ret] = '\0';
+		emit_stderr(buf, false);
+		validate_msgs(buf, msgs, emit_stderr);
+	} else {
+		ASSERT_GT(ret, 0, "stderr stream read");
+	}
+
+	free(buf);
+}
+
+void verify_test_stderr(struct bpf_object *obj, struct bpf_program *prog)
+{
+	struct test_spec spec = {};
+
+	if (parse_test_spec(NULL, obj, prog, &spec))
+		return;
+
+	verify_stderr(bpf_program__fd(prog), &spec.priv.stderr);
+	free_test_spec(&spec);
+}
+
 static void emit_stdout(const char *bpf_stdout, bool force)
 {
 	if (!force && env.verbosity == VERBOSE_NONE)
@@ -1314,17 +1351,7 @@ void run_subtest(struct test_loader *tester,
 			goto tobj_cleanup;
 		}
 
-		if (subspec->stderr.cnt) {
-			err = get_stream(2, bpf_program__fd(tprog),
-					 tester->log_buf, tester->log_buf_sz);
-			if (err <= 0) {
-				PRINT_FAIL("Unexpected retval from get_stream(): %d, errno = %d\n",
-					   err, errno);
-				goto tobj_cleanup;
-			}
-			emit_stderr(tester->log_buf, false /*force*/);
-			validate_msgs(tester->log_buf, &subspec->stderr, emit_stderr);
-		}
+		verify_stderr(bpf_program__fd(tprog), &subspec->stderr);
 
 		if (subspec->stdout.cnt) {
 			err = get_stream(1, bpf_program__fd(tprog),
diff --git a/tools/testing/selftests/bpf/test_progs.h b/tools/testing/selftests/bpf/test_progs.h
index 1a44467f4310..37955a8ad385 100644
--- a/tools/testing/selftests/bpf/test_progs.h
+++ b/tools/testing/selftests/bpf/test_progs.h
@@ -563,5 +563,7 @@ struct expected_msgs {
 
 void validate_msgs(const char *log_buf, struct expected_msgs *msgs,
 		   void (*emit_fn)(const char *buf, bool force));
+void free_msgs(struct expected_msgs *msgs);
+void verify_test_stderr(struct bpf_object *obj, struct bpf_program *prog);
 
 #endif /* __TEST_PROGS_H */
-- 
2.53.0



* Re: [PATCH bpf-next v9 5/8] selftests/bpf: Add ASAN support for libarena selftests
  2026-04-26 19:03 ` [PATCH bpf-next v9 5/8] selftests/bpf: Add ASAN support for libarena selftests Emil Tsalapatis
@ 2026-04-26 19:33   ` bot+bpf-ci
  2026-04-26 20:28   ` sashiko-bot
  1 sibling, 0 replies; 19+ messages in thread
From: bot+bpf-ci @ 2026-04-26 19:33 UTC (permalink / raw)
  To: emil, bpf
  Cc: ast, andrii, memxor, daniel, eddyz87, song, mattbobrowski, emil,
	ast, andrii, daniel, martin.lau, eddyz87, yonghong.song, clm,
	ihor.solodrai

[-- Attachment #1: Type: text/plain, Size: 1511 bytes --]

> diff --git a/tools/testing/selftests/bpf/libarena/selftests/st_asan_common.h b/tools/testing/selftests/bpf/libarena/selftests/st_asan_common.h
> new file mode 100644
> index 000000000000..1d3edc4372ac
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/libarena/selftests/st_asan_common.h
> @@ -0,0 +1,52 @@
> +// SPDX-License-Identifier: LGPL-2.1 OR BSD-2-Clause
> +/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
> +
> +#pragma once
> +
> +#define ST_PAGES 64
> +
> +static inline void print_asan_map_state(void __arena *addr)
> +{
> +	arena_stdout("%s:%d ASAN %p -> (val: %x gran: %x set: [%s])",
> +			__func__, __LINE__, addr,
> +			*(s8a *)(addr), ASAN_GRANULE(addr),
> +			asan_shadow_set(addr) ? "yes" : "no");
> +}

The print_asan_map_state() function dereferences addr with `*(s8a *)(addr)`,
which could be problematic. Could this trigger a secondary ASAN violation or
BPF exception if addr points to unmapped memory outside the arena?

In v8, sashiko-bot noted that this dereference could terminate the test before
the debug output is printed if addr is invalid. The suggestion was to use the
__noasan attribute to avoid interception by ASAN instrumentation.

Reference: https://lore.kernel.org/bpf/20260421211538.7BCA0C2BCB0@smtp.kernel.org/


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24964771851


* Re: [PATCH bpf-next v9 2/8] selftests/bpf: Add basic libarena scaffolding
  2026-04-26 19:03 ` [PATCH bpf-next v9 2/8] selftests/bpf: Add basic libarena scaffolding Emil Tsalapatis
@ 2026-04-26 19:34   ` sashiko-bot
  0 siblings, 0 replies; 19+ messages in thread
From: sashiko-bot @ 2026-04-26 19:34 UTC (permalink / raw)
  To: Emil Tsalapatis; +Cc: bpf

Sashiko AI review found 4 potential issue(s):
- [Medium] Silent argument truncation in `arena_alloc_reserve` causing fewer pages to be reserved than requested.
- [Medium] Buffer overflow in userspace `mincore()` array sizing if `sysconf(_SC_PAGESIZE)` differs from kernel `PAGE_SIZE`.
- [Low] Build Dependency Tracking Failure for libarena headers.
- [Low] Makefile logic flaw leading to silent test omission on name collision (vpath anti-pattern).
--

commit 7876e5868c676854bd421cbb98140a84f94fa226
Author: Emil Tsalapatis <emil@etsalapatis.com>

selftests/bpf: Add basic libarena scaffolding

Add initial code and a Makefile for an arena-based BPF library. Modules
can be added just by including the source file in the library's src/
subdirectory.

> diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
> index 6ef6872adbc3..5855064e7f9c 100644
> --- a/tools/testing/selftests/bpf/Makefile
> +++ b/tools/testing/selftests/bpf/Makefile

[ ... ]

> @@ -525,6 +526,7 @@ LINKED_BPF_OBJS := $(foreach skel,$(LINKED_SKELS),$($(skel)-deps))
>  LINKED_BPF_SRCS := $(patsubst %.bpf.o,%.c,$(LINKED_BPF_OBJS))
>  
>  HEADERS_FOR_BPF_OBJS := $(wildcard $(BPFDIR)/*.bpf.h)		\
> +			$(wildcard $(CURDIR)/libarena/include/*.[ch])	\
>  			$(addprefix $(BPFDIR)/,	bpf_core_read.h	\
>  			                        bpf_endian.h	\
>  						bpf_helpers.h	\

Does this code miss changes to the newly introduced headers?

The headers common.h and userspace.h live in the nested
libarena/include/libarena/ subdirectory. Since GNU Make's wildcard
function does not search recursively, this pattern evaluates to empty.

Modifications to these headers may therefore not trigger a rebuild.

[ ... ]

> diff --git a/tools/testing/selftests/bpf/libarena/Makefile b/tools/testing/selftests/bpf/libarena/Makefile
> new file mode 100644
> index 000000000000..e85b3ad96890
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/libarena/Makefile

[ ... ]

> +# Scan src/ and selftests/ to generate the final binaries
> +LIBARENA_SOURCES = $(wildcard $(LIBARENA)/src/*.bpf.c) $(wildcard $(LIBARENA)/selftests/*.bpf.c)
> +LIBARENA_OBJECTS = $(notdir $(LIBARENA_SOURCES:.bpf.c=.bpf.o))

Could this cause test modules to be silently omitted on name collisions?

If a test module is added as selftests/foo.bpf.c when src/foo.bpf.c already
exists, notdir creates duplicate object names. Because of the vpath directive
used later in the Makefile, make will find the source file in src/ first and
compile it. 

The bpftool utility will link the identical object twice, completely ignoring
the test file in selftests/.

[ ... ]

> diff --git a/tools/testing/selftests/bpf/libarena/include/libarena/common.h b/tools/testing/selftests/bpf/libarena/include/libarena/common.h
> new file mode 100644
> index 000000000000..92b67b20ed15
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/libarena/include/libarena/common.h

[ ... ]

> +struct arena_alloc_reserve_args {
> +	u64 nr_pages;
> +};

Will this silently truncate the number of requested pages?

The userspace API structure exposes a 64-bit nr_pages argument, which is
passed directly to the bpf_arena_reserve_pages BPF kfunc. However, the
kfunc signature takes a 32-bit page count.

If userspace requests a reservation exceeding 32 bits, the value will be
silently truncated by the BPF calling convention.

[ ... ]

> diff --git a/tools/testing/selftests/bpf/libarena/include/libarena/userspace.h b/tools/testing/selftests/bpf/libarena/include/libarena/userspace.h
> new file mode 100644
> index 000000000000..0438a751d5fd
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/libarena/include/libarena/userspace.h

[ ... ]

> +static inline int libarena_get_globals_pages(int arena_get_globals_fd,
> +					     size_t arena_all_pages,
> +					     u64 *globals_pages)
> +{
> +	size_t pgsize = sysconf(_SC_PAGESIZE);
> +	void *arena_base;
> +	ssize_t i;
> +	u8 *vec;
> +	int ret;
> +
> +	ret = libarena_get_arena_base(arena_get_globals_fd, &arena_base);
> +	if (ret)
> +		return ret;
> +
> +	if (!arena_base)
> +		return -EINVAL;
> +
> +	vec = calloc(arena_all_pages, sizeof(*vec));
> +	if (!vec)
> +		return -ENOMEM;
> +
> +	if (mincore(arena_base, arena_all_pages * pgsize, vec) < 0) {

Can this cause a buffer overflow in the vec array?

The vec buffer is allocated to hold arena_all_pages bytes. The length passed
to the mincore syscall is calculated as arena_all_pages *
sysconf(_SC_PAGESIZE).

However, the kernel's mincore implementation determines the number of output
bytes by dividing the requested length by the kernel's internal page size. If
the userspace environment reports a page size larger than the kernel's, the
kernel could calculate a page count greater than arena_all_pages and write
past the end of the vec buffer.

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260426190338.4615-1-emil@etsalapatis.com?part=2


* Re: [PATCH bpf-next v9 8/8] selftests/bpf: Reuse stderr parsing for libarena ASAN tests
  2026-04-26 19:03 ` [PATCH bpf-next v9 8/8] selftests/bpf: Reuse stderr parsing for libarena ASAN tests Emil Tsalapatis
@ 2026-04-26 19:46   ` bot+bpf-ci
  2026-04-26 21:38   ` sashiko-bot
  1 sibling, 0 replies; 19+ messages in thread
From: bot+bpf-ci @ 2026-04-26 19:46 UTC (permalink / raw)
  To: emil, bpf
  Cc: ast, andrii, memxor, daniel, eddyz87, song, mattbobrowski, emil,
	ast, andrii, daniel, martin.lau, eddyz87, yonghong.song, clm,
	ihor.solodrai

[-- Attachment #1: Type: text/plain, Size: 2127 bytes --]

> diff --git a/tools/testing/selftests/bpf/test_loader.c b/tools/testing/selftests/bpf/test_loader.c
> index c4c34cae6102..ee637809a1d4 100644
> --- a/tools/testing/selftests/bpf/test_loader.c
> +++ b/tools/testing/selftests/bpf/test_loader.c

[ ... ]

> @@ -789,6 +789,43 @@ static void emit_stderr(const char *stderr, bool force)
>  	fprintf(stdout, "STDERR:\n=============\n%s=============\n", stderr);
>  }
>
> +static void verify_stderr(int prog_fd, struct expected_msgs *msgs)
> +{
> +	LIBBPF_OPTS(bpf_prog_stream_read_opts, ropts);
> +	char *buf;
> +	int ret;
> +
> +	if (!msgs->cnt)
> +		return;
> +
> +	buf = malloc(TEST_LOADER_LOG_BUF_SZ);
> +	if (!ASSERT_OK_PTR(buf, "malloc"))
> +		return;
> +
> +	ret = bpf_prog_stream_read(prog_fd, 2, buf, TEST_LOADER_LOG_BUF_SZ - 1,
> +				    &ropts);
> +	if (ret > 0) {
> +		buf[ret] = '\0';
> +		emit_stderr(buf, false);
> +		validate_msgs(buf, msgs, emit_stderr);
> +	} else {
> +		ASSERT_GT(ret, 0, "stderr stream read");
> +	}
> +
> +	free(buf);
> +}
> +
> +void verify_test_stderr(struct bpf_object *obj, struct bpf_program *prog)
> +{
> +	struct test_spec spec = {};
> +
> +	if (parse_test_spec(NULL, obj, prog, &spec))
> +		return;
> +
> +	verify_stderr(bpf_program__fd(prog), &spec.priv.stderr);
> +	free_test_spec(&spec);
> +}
> +

The verify_test_stderr() function hardcodes the use of &spec.priv.stderr.
Since it lacks an unprivileged parameter to select the unprivileged spec,
any expected messages specified via __stderr_unpriv() might be silently
ignored.

While this doesn't affect current libarena tests (which don't run in
unprivileged mode), it creates an API inconsistency: test_progs_compat.h
defines the __stderr_unpriv() macro, but verify_test_stderr() never
checks those expectations.

This concern was previously raised in v8 by Sashiko AI:
https://lore.kernel.org/bpf/20260421221653.75221C2BCB0@smtp.kernel.org


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24964771851


* Re: [PATCH bpf-next v9 6/8] selftests/bpf: Add buddy allocator for libarena
  2026-04-26 19:03 ` [PATCH bpf-next v9 6/8] selftests/bpf: Add buddy allocator for libarena Emil Tsalapatis
@ 2026-04-26 19:46   ` bot+bpf-ci
  2026-04-26 20:54   ` sashiko-bot
  1 sibling, 0 replies; 19+ messages in thread
From: bot+bpf-ci @ 2026-04-26 19:46 UTC (permalink / raw)
  To: emil, bpf
  Cc: ast, andrii, memxor, daniel, eddyz87, song, mattbobrowski, emil,
	ast, andrii, daniel, martin.lau, eddyz87, yonghong.song, clm,
	ihor.solodrai

[-- Attachment #1: Type: text/plain, Size: 1848 bytes --]

> diff --git a/tools/testing/selftests/bpf/libarena/include/libarena/buddy.h b/tools/testing/selftests/bpf/libarena/include/libarena/buddy.h
> new file mode 100644
> index 000000000000..00e2437128ef
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/libarena/include/libarena/buddy.h

[ ... ]

> +struct buddy_header {
> +	u32 prev_index;	/* "Pointer" to the previous available allocation of the same size. */
> +	u32 next_index; /* Same for the next allocation. */
> +};

In v4, Alexei Starovoitov requested a mathematical analysis of the memory
savings from using u32 indices instead of pointers in this structure:

  "BUDDY_CHUNK_ITEMS is used like NULL? The header is u32 + u32. Is it
  really worth saving 8 bytes? Use two proper pointers? How many headers
  are there? ... if everything is a pointer the whole algorithm would be
  so much easier to understand. Is it worth it to keep indices vs
  pointers? Please do the math and explain the savings."

  https://lore.kernel.org/bpf/CAADnVQKnHXg0YnuLvzfXjszMaqugpMNWD5=ipffWS4GVNg6rFQ@mail.gmail.com/

The response explained the rationale for ASAN effectiveness and memory
corruption protection (headers in free blocks offset at byte 8 to catch
off-by-one errors), but did not provide the quantitative memory savings
analysis that was explicitly requested.

The bot in v8 also flagged this as unaddressed:
https://lore.kernel.org/bpf/f6f35b4cb34724773df4e2f2d5014f64048931cd6eeb747875a84f3a9d1db4ed@mail.kernel.org/

Would it be possible to add a comment documenting the mathematical
trade-off analysis, or include it in the commit message?

[ ... ]


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24964771851


* Re: [PATCH bpf-next v9 4/8] selftests/bpf: Add arena ASAN runtime to libarena
  2026-04-26 19:03 ` [PATCH bpf-next v9 4/8] selftests/bpf: Add arena ASAN runtime to libarena Emil Tsalapatis
@ 2026-04-26 20:12   ` sashiko-bot
  0 siblings, 0 replies; 19+ messages in thread
From: sashiko-bot @ 2026-04-26 20:12 UTC (permalink / raw)
  To: Emil Tsalapatis; +Cc: bpf

Sashiko AI review found 6 potential issue(s):
- [Medium] Concurrent `asan_init` calls corrupt global ASAN state
- [Medium] Early loop exit due to `can_loop` causes silent validation bypass and incomplete poisoning
- [Medium] Missing bounds checking in `asan_poison` and `asan_unpoison` allows out-of-bounds writes
- [Medium] Uninitialized ASAN state leads to silent memory corruption
- [Low] Compilation error due to missing argument for `bpf_stream_print_stack`
- [Low] Compilation and linking errors due to missing ASAN functions and BTF anchors
--

commit 378c2bf4d6c7fb041761c5a5045f1680acbb2840
Author: Emil Tsalapatis <emil@etsalapatis.com>

selftests/bpf: Add arena ASAN runtime to libarena

This commit adds an ASAN runtime to the arena library for BPF selftests.
It implements LLVM sanitization hooks and provides manual poison/unpoison
functions.

> diff --git a/tools/testing/selftests/bpf/libarena/include/libarena/asan.h b/tools/testing/selftests/bpf/libarena/include/libarena/asan.h
> new file mode 100644
> index 000000000000..eb9fc69d9eb0
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/libarena/include/libarena/asan.h
[ ... ]
> @@ -0,0 +1,103 @@
[ ... ]
> +/*
> + * Force LLVM to emit BTF information for the stubs,
> + * because the ASAN pass in LLVM by itself doesn't.
> + */
> +#define ASAN_LOAD_STORE_SIZE(size)		\
> +	__asan_store##size,			\
> +	__asan_store##size##_noabort,		\
> +	__asan_load##size,			\
> +	__asan_load##size##_noabort,		\
> +	__asan_report_store##size,		\
> +	__asan_report_store##size##_noabort,	\
> +	__asan_report_load##size,		\
> +	__asan_report_load##size##_noabort
> +
> +__attribute__((used))
> +static void (*__asan_btf_anchors[])(intptr_t) = {
> +	ASAN_LOAD_STORE_SIZE(1),
> +	ASAN_LOAD_STORE_SIZE(2),
> +	ASAN_LOAD_STORE_SIZE(4),
> +	ASAN_LOAD_STORE_SIZE(8),
> +};

Does this array need to include variable-sized hooks (like __asan_storeN),
global handlers, and memory intrinsics? Without these in the anchors list,
the compiler might not emit BTF data for them, potentially causing BPF load
failures if LLVM instruments variable-sized accesses.

> diff --git a/tools/testing/selftests/bpf/libarena/src/asan.bpf.c b/tools/testing/selftests/bpf/libarena/src/asan.bpf.c
> new file mode 100644
> index 000000000000..64c5b990086c
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/libarena/src/asan.bpf.c
[ ... ]
> @@ -0,0 +1,553 @@
[ ... ]
> +/*
> + * BPF does not currently support the memset/memcpy/memcmp intrinsics.
> + * For large sequential copies, or assignments of large data structures,
> + * the frontend will generate an intrinsic that causes the BPF backend
> + * to exit due to a missing implementation. Provide a simple implementation
> + * just for memset to use it for poisoning/unpoisoning the map.
> + */
> +__weak int asan_memset(s8a __arg_arena *dst, s8 val, size_t size)
> +{
> +	size_t i;
> +
> +	for (i = zero; i < size && can_loop; i++)
> +		dst[i] = val;
> +
> +	return 0;
> +}

Is can_loop defined anywhere? This looks like it might cause an undeclared
identifier compilation error.

Also, if can_loop causes the loop to exit early due to instruction limits,
does this result in incomplete poisoning? asan_memset would return 0, leaving
the rest of the shadow memory uninitialized.

[ ... ]
> +static __always_inline u64 first_nonzero_byte(u64 addr, size_t size)
> +{
> +	while (size && can_loop) {
> +		if (unlikely(*(s8a *)addr))
> +			return addr;
> +		addr += 1;
> +		size -= 1;
> +	}
> +
> +	return SHADOW_ALL_ZEROES;
> +}

Similarly, if can_loop terminates this loop early, could this falsely validate
the remaining memory and allow large out-of-bounds accesses to bypass ASAN
validation?

[ ... ]
> +__weak int asan_report(s8a __arg_arena *addr, size_t sz, u32 flags)
> +{
> +	u32 reported = __sync_val_compare_and_swap(&asan_reported, false, true);
> +
> +	/* Only report the first ASAN violation. */
> +	if (reported && asan_report_once)
> +		return 0;
> +
> +	asan_violated = (u64)addr;
> +
> +	arena_stderr("Memory violation for address %p (0x%lx) for %s of size %ld\n",
> +			addr, (u64)addr,
> +			(flags & ASAN_WRITE) ? "write" : "read",
> +			sz);
> +	bpf_stream_print_stack(BPF_STDERR);

Is this missing an argument? The kernel definition of this kfunc is
bpf_stream_print_stack(int stream_id, struct bpf_prog_aux *aux). Compiling
this with vmlinux.h could result in a compilation error.

[ ... ]
> +__hidden __noasan int asan_poison(void __arena *addr, s8 val, size_t size)
> +{
> +	s8a *shadow;
> +	size_t len;
[ ... ]
> +	shadow = mem_to_shadow(addr);
> +	len = size >> ASAN_SHADOW_SHIFT;
> +
> +	asan_memset(shadow, val, len);
> +
> +	return 0;
> +}

Does this code verify that asan_init() has been called before executing?
If __asan_shadow_memory_dynamic_address is 0, mem_to_shadow() could resolve
to the beginning of the user arena, causing asan_memset() to overwrite valid
application data.

Additionally, is there a bounds check missing to ensure addr + size falls
entirely within the valid user region? If size extends beyond the user
boundaries, could this write past the shadow map and corrupt the arena's
global variables?

[ ... ]
> +__hidden __noasan int asan_unpoison(void __arena *addr, size_t size)
> +{
> +	size_t partial = size & ASAN_GRANULE_MASK;
> +	s8a *shadow;
> +	size_t len;
[ ... ]
> +	shadow = mem_to_shadow(addr);
> +	len = size >> ASAN_SHADOW_SHIFT;
> +
> +	asan_memset(shadow, 0, len);

Does this unpoisoning function have the same uninitialized state and bounds
checking concerns as asan_poison()?

[ ... ]
> +SEC("syscall")
> +__weak __noasan int asan_init(struct asan_init_args *args)
> +{
> +	u64 globals_pages = args->arena_globals_pages;
> +	u64 all_pages = args->arena_all_pages;
> +	u64 shadow_map, shadow_pgoff;
> +	u64 shadow_pages;
> +
> +	if (asan_inited)
> +		return 0;
[ ... ]
> +	shadow_pgoff = all_pages - shadow_pages - globals_pages;
> +	__asan_shadow_memory_dynamic_address = shadow_pgoff * __PAGE_SIZE;
> +
> +	/*
> +	 * Allocate the last (1/ASAN_SHADOW_SCALE)th of an arena's pages for the map
> +	 * We find the offset and size from the arena map.
> +	 *
> +	 * The allocated map pages are zeroed out, meaning all memory is marked as valid
> +	 * even if it's not allocated already. This is expected: Since the actual memory
> +	 * pages are not allocated, accesses to it will trigger page faults and will be
> +	 * reported through BPF streams. Any pages allocated through bpf_arena_alloc_pages
> +	 * should be poisoned by the allocator right after the call succeeds.
> +	 */
> +	shadow_map = (u64)bpf_arena_alloc_pages(
> +		&arena, (void __arena *)__asan_shadow_memory_dynamic_address,
> +		shadow_pages, NUMA_NO_NODE, 0);
> +	if (!shadow_map) {
> +		arena_stderr("Could not allocate shadow map\n");
> +
> +		__asan_shadow_memory_dynamic_address = 0;
> +
> +		return -ENOMEM;
> +	}
> +
> +	asan_inited = true;
> +
> +	return 0;
> +}

Can this code corrupt the global shadow memory base if invoked concurrently?

If two threads bypass the lockless asan_inited check, thread 1 could
successfully allocate memory and set asan_inited = true. Thread 2 might
then fail its allocation (since the pages are already mapped) and execute
the error path, setting __asan_shadow_memory_dynamic_address = 0.

Would this permanently corrupt the global ASAN state for all subsequent checks
in the arena?

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260426190338.4615-1-emil@etsalapatis.com?part=4


* Re: [PATCH bpf-next v9 5/8] selftests/bpf: Add ASAN support for libarena selftests
  2026-04-26 19:03 ` [PATCH bpf-next v9 5/8] selftests/bpf: Add ASAN support for libarena selftests Emil Tsalapatis
  2026-04-26 19:33   ` bot+bpf-ci
@ 2026-04-26 20:28   ` sashiko-bot
  1 sibling, 0 replies; 19+ messages in thread
From: sashiko-bot @ 2026-04-26 20:28 UTC (permalink / raw)
  To: Emil Tsalapatis; +Cc: bpf

Sashiko AI review found 4 potential issue(s):
- [Medium] Logic error in `asan_validate_addr` fails to validate the ASAN violation address.
- [Medium] `print_asan_map_state` missing `__noasan` attribute triggers secondary ASAN violations.
- [Low] Missing header includes in `userspace.h` and `st_asan_common.h` cause compilation errors.
- [Low] Standalone builds in the `libarena/` directory fail unconditionally on older Clang versions.
--

commit 7f45bd9cc6e3fb019a16a17f2e11b1f1ab5702ad
Author: Emil Tsalapatis <emil@etsalapatis.com>

selftests/bpf: Add ASAN support for libarena selftests

Expand the arena library selftest infrastructure to support
address sanitization. Add the compiler flags necessary to
compile the library under ASAN when supported.

> diff --git a/tools/testing/selftests/bpf/libarena/Makefile b/tools/testing/selftests/bpf/libarena/Makefile
> index e85b3ad96890..5e2ab514805e 100644
> --- a/tools/testing/selftests/bpf/libarena/Makefile
> +++ b/tools/testing/selftests/bpf/libarena/Makefile
[ ... ]
> @@ -53,17 +61,32 @@ CFLAGS += $(INCLUDES)
>  vpath %.bpf.c $(LIBARENA)/src $(LIBARENA)/selftests
>  vpath %.c $(LIBARENA)/src $(LIBARENA)/selftests
>  
> +skeletons: libarena.skel.h libarena_asan.skel.h
> +.PHONY: skeletons

Will this unconditionally build libarena_asan.skel.h when running make
directly in the libarena/ directory? The parent BPF Makefile guards ASAN
compilation with CLANG_HAS_ARENA_ASAN, but that guard is missing here. Could
this cause the build to fail on older Clang versions?

[ ... ]
> diff --git a/tools/testing/selftests/bpf/libarena/include/libarena/userspace.h b/tools/testing/selftests/bpf/libarena/include/libarena/userspace.h
> index 0438a751d5fd..88b68ac73cca 100644
> --- a/tools/testing/selftests/bpf/libarena/include/libarena/userspace.h
> +++ b/tools/testing/selftests/bpf/libarena/include/libarena/userspace.h
[ ... ]
> @@ -97,3 +102,31 @@ static inline int libarena_get_globals_pages(int arena_get_globals_fd,
>  	free(vec);
>  	return 0;
>  }
> +
> +static inline int libarena_asan_init(int arena_asan_init_fd,
> +				     int asan_init_fd,
> +				     size_t arena_all_pages)
> +{
> +	LIBBPF_OPTS(bpf_test_run_opts, opts);
> +	struct asan_init_args args;

This isn't a bug, but struct asan_init_args is instantiated here without
including <libarena/asan.h>. Could this cause compilation errors if the
header is included independently?

[ ... ]
> diff --git a/tools/testing/selftests/bpf/libarena/selftests/st_asan_common.h b/tools/testing/selftests/bpf/libarena/selftests/st_asan_common.h
> new file mode 100644
> index 000000000000..1d3edc4372ac
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/libarena/selftests/st_asan_common.h
> @@ -0,0 +1,52 @@
> +// SPDX-License-Identifier: LGPL-2.1 OR BSD-2-Clause
> +/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
> +
> +#pragma once
> +

Similar to the above, this header uses macros and variables like
asan_violated, ASAN_GRANULE, and arena_stdout but does not include
<libarena/common.h> or <libarena/asan.h>.

> +#define ST_PAGES 64
> +
> +static inline void print_asan_map_state(void __arena *addr)
> +{
> +	arena_stdout("%s:%d ASAN %p -> (val: %x gran: %x set: [%s])",
> +			__func__, __LINE__, addr,
> +			*(s8a *)(addr), ASAN_GRANULE(addr),
> +			asan_shadow_set(addr) ? "yes" : "no");
> +}

Since addr is an address that triggered an ASAN violation, does this need
the __noasan attribute? Otherwise, the memory read *(s8a *)(addr) will trigger
another ASAN violation.

> +
> +/*
> + * Emit an error and force the current function to exit if the ASAN
> + * violation state is unexpected. Reset the violation state after.
> + */
> +static inline int asan_validate_addr(bool cond, void __arena *addr)
> +{
> +	if ((asan_violated != 0) == cond) {

If a violation is expected when cond is true, does this code also need to
check if asan_violated == (u64)addr? It looks like it only verifies that
any violation occurred, which could allow tests to erroneously pass if a
completely unrelated ASAN violation happens.

> +		asan_violated = 0;
> +		return 0;
> +	}

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260426190338.4615-1-emil@etsalapatis.com?part=5


* Re: [PATCH bpf-next v9 6/8] selftests/bpf: Add buddy allocator for libarena
  2026-04-26 19:03 ` [PATCH bpf-next v9 6/8] selftests/bpf: Add buddy allocator for libarena Emil Tsalapatis
  2026-04-26 19:46   ` bot+bpf-ci
@ 2026-04-26 20:54   ` sashiko-bot
  1 sibling, 0 replies; 19+ messages in thread
From: sashiko-bot @ 2026-04-26 20:54 UTC (permalink / raw)
  To: Emil Tsalapatis; +Cc: bpf

Sashiko AI review found 4 potential issue(s):
- [High] Permanent memory leaks of newly allocated chunks and freed blocks when allocator lock acquisition fails.
- [Medium] Unnecessary global lock contention during thread-local chunk initialization.
- [Medium] Missing rollback of virtual address space reservation on initialization failure.
- [Low] Accidental inclusion of default.profraw binary file.
--

commit d3559de018f16408b8163bc0362e7b5fdd94366e
Author: Emil Tsalapatis <emil@etsalapatis.com>
selftests/bpf: Add buddy allocator for libarena

Add a byte-oriented buddy allocator for libarena. The buddy allocator
provides an alloc/free interface for small arena allocations ranging
from 16 bytes to 512 KiB.

> diff --git a/tools/testing/selftests/bpf/default.profraw b/tools/testing/selftests/bpf/default.profraw
> new file mode 100644
> index 000000000000..e865e87829f8
> Binary files /dev/null and b/tools/testing/selftests/bpf/default.profraw differ

This isn't a bug, but it looks like a binary LLVM profile data file was
accidentally included in the patch.

> diff --git a/tools/testing/selftests/bpf/libarena/src/buddy.bpf.c b/tools/testing/selftests/bpf/libarena/src/buddy.bpf.c
> new file mode 100644
> index 000000000000..865e00803daa
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/libarena/src/buddy.bpf.c
[ ... ]
> +static buddy_chunk_t *buddy_chunk_get(buddy_t *buddy)
> +{
[ ... ]
> +	chunk = bpf_arena_alloc_pages(&arena, (void __arena *)vaddr,
> +				      BUDDY_CHUNK_PAGES, NUMA_NO_NODE, 0);
> +	if (!chunk) {
> +		arena_stderr("[ALLOC FAILED]");
> +		return NULL;
> +	}
> +
> +	if (buddy_lock(buddy)) {
> +		/*
> +		 * We cannot reclaim the vaddr space, but that is ok - this
> +		 * operation should always succeed. The error path is to catch
> +		 * accidental deadlocks that will cause -ENOMEMs to the program as
> +		 * the allocator fails to refill itself, in which case vaddr usage
> +		 * is the least of our worries.
> +		 */
> +		bpf_arena_free_pages(&arena, (void __arena *)vaddr, BUDDY_CHUNK_PAGES);
> +		return NULL;
> +	}
> +
> +	asan_poison(chunk, BUDDY_POISONED, BUDDY_CHUNK_PAGES * __PAGE_SIZE);
> +
> +	/* Unpoison the chunk itself. */
> +	asan_unpoison(chunk, sizeof(*chunk));
> +
> +	/* Mark all freelists as empty. */
> +	for (ord = zero; ord < BUDDY_CHUNK_NUM_ORDERS && can_loop; ord++)
> +		chunk->freelists[ord] = BUDDY_CHUNK_ITEMS;

Does holding the global allocator lock here create unnecessary contention?

Since this newly allocated chunk is not yet linked to the global allocator
state (which happens later in buddy_alloc_from_new_chunk()), could the
memory poisoning, unpoisoning, and freelist initialization loop be done
before acquiring the global lock?
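
One possible reordering, sketched with the patchset's names (untested; it
assumes the chunk is invisible to other threads until it is linked into the
global allocator state under the lock):

```c
/*
 * Initialize the chunk before taking the global lock: it is not yet
 * reachable from the allocator, so no other thread can observe it.
 */
asan_poison(chunk, BUDDY_POISONED, BUDDY_CHUNK_PAGES * __PAGE_SIZE);
asan_unpoison(chunk, sizeof(*chunk));
for (ord = zero; ord < BUDDY_CHUNK_NUM_ORDERS && can_loop; ord++)
	chunk->freelists[ord] = BUDDY_CHUNK_ITEMS;

/* Only the linking step still needs the global lock. */
if (buddy_lock(buddy)) {
	bpf_arena_free_pages(&arena, (void __arena *)vaddr, BUDDY_CHUNK_PAGES);
	return NULL;
}
```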

[ ... ]
> +__weak int buddy_init(buddy_t __arg_arena *buddy)
> +{
> +	buddy_chunk_t *chunk;
> +	int ret;
> +
> +	if (!asan_ready())
> +		return -EINVAL;
> +
> +	/* Reserve enough address space to ensure allocations are aligned. */
> +	ret = buddy_reserve_arena_vaddr(buddy);
> +	if (ret)
> +		return ret;
> +
> +	_Static_assert(BUDDY_CHUNK_PAGES > 0,
> +		       "chunk must use one or more pages");
> +
> +	chunk = buddy_chunk_get(buddy);
> +
> +	if (buddy_lock(buddy)) {
> +		bpf_arena_free_pages(&arena, chunk, BUDDY_CHUNK_PAGES);
> +		return -EINVAL;
> +	}

If buddy_chunk_get() or buddy_lock() fails, does this permanently leak the
virtual address space reserved earlier?

It looks like buddy_unreserve_arena_vaddr() should be called in these
error paths to prevent exhaustion of the reserved range.
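
The error paths might be restructured along these lines (sketch only;
buddy_unreserve_arena_vaddr() is assumed to be the inverse of
buddy_reserve_arena_vaddr(), and a NULL check on chunk is added since the
quoted code frees it unconditionally):

```c
chunk = buddy_chunk_get(buddy);
if (!chunk) {
	/* Roll back the vaddr reservation instead of leaking it. */
	buddy_unreserve_arena_vaddr(buddy);
	return -ENOMEM;
}

if (buddy_lock(buddy)) {
	bpf_arena_free_pages(&arena, chunk, BUDDY_CHUNK_PAGES);
	buddy_unreserve_arena_vaddr(buddy);
	return -EINVAL;
}
```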

[ ... ]
> +static u64 buddy_alloc_from_new_chunk(buddy_t *buddy, buddy_chunk_t *chunk, int order)
> +{
> +	u64 address;
> +
> +	if (buddy_lock(buddy))
> +		return (u64)NULL;

If the allocator lock cannot be acquired, does this drop the newly allocated
chunk without linking it or freeing it?

Since bpf_arena_free_pages() isn't called on the chunk, it appears this
could permanently leak physical and virtual arena memory.
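
A sketch of the suggested cleanup, using the patchset's names (untested):

```c
if (buddy_lock(buddy)) {
	/* The chunk is not linked anywhere yet: return its backing
	 * pages instead of leaking physical and virtual arena memory. */
	bpf_arena_free_pages(&arena, (void __arena *)chunk, BUDDY_CHUNK_PAGES);
	return (u64)NULL;
}
```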

[ ... ]
> +__weak int buddy_free_internal(buddy_t __arg_arena *buddy, u64 addr)
> +{
> +	int ret;
> +
> +	if (!buddy)
> +		return -EINVAL;
> +
> +	/* Freeing NULL is a valid no-op. */
> +	if (!addr)
> +		return 0;
> +
> +	ret = buddy_lock(buddy);
> +	if (ret)
> +		return ret;

If lock acquisition fails here, does the block get permanently leaked?

Since the user-facing free() wrapper returns void, the caller has no way
of knowing the free failed, resulting in silently swallowed errors and
lost memory.

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260426190338.4615-1-emil@etsalapatis.com?part=6

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH bpf-next v9 7/8] selftests/bpf: Add selftests for libarena buddy allocator
  2026-04-26 19:03 ` [PATCH bpf-next v9 7/8] selftests/bpf: Add selftests for libarena buddy allocator Emil Tsalapatis
@ 2026-04-26 21:09   ` sashiko-bot
  0 siblings, 0 replies; 19+ messages in thread
From: sashiko-bot @ 2026-04-26 21:09 UTC (permalink / raw)
  To: Emil Tsalapatis; +Cc: bpf

Sashiko AI review found 1 potential issue(s):
- [Medium] Commit message claims to add Makefile logic that is actually missing from the patch (added in a previous patch).
--

commit 1b0628e5609c4e162f40fb718876176d30532a33
Author: Emil Tsalapatis <emil@etsalapatis.com>

selftests/bpf: Add selftests for libarena buddy allocator

This commit introduces selftests for the libarena buddy allocator, including
both ASAN and non-ASAN variants. It integrates these tests into both the
standalone libarena test runner and the broader test_progs suite.

> ASAN for libarena requires LLVM 22. Add logic in the top-level
> selftests Makefile to only compile the ASAN variant if the
> compiler supports it, otherwise skip the test.

This isn't a bug, but the commit message mentions adding logic in the Makefile
to conditionally compile the ASAN variant. However, the diff for this patch
only contains additions for C source files and does not include any Makefile
modifications.

Was this Makefile logic perhaps already included in an earlier patch in the
series? If so, should the commit message be updated to accurately reflect
only the changes introduced by this specific patch?

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260426190338.4615-1-emil@etsalapatis.com?part=7

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH bpf-next v9 8/8] selftests/bpf: Reuse stderr parsing for libarena ASAN tests
  2026-04-26 19:03 ` [PATCH bpf-next v9 8/8] selftests/bpf: Reuse stderr parsing for libarena ASAN tests Emil Tsalapatis
  2026-04-26 19:46   ` bot+bpf-ci
@ 2026-04-26 21:38   ` sashiko-bot
  1 sibling, 0 replies; 19+ messages in thread
From: sashiko-bot @ 2026-04-26 21:38 UTC (permalink / raw)
  To: Emil Tsalapatis; +Cc: bpf

Sashiko AI review found 2 potential issue(s):
- [Low] Unnecessary export of `free_msgs` to the global namespace.
- [Low] Use of a magic number for the BPF stream ID.
--

commit 49cffc0bea9678ea2577c634e093c46121bf87fd
Author: Emil Tsalapatis <emil@etsalapatis.com>

selftests/bpf: Reuse stderr parsing for libarena ASAN tests

This commit adds code to directly test the output of libarena ASAN tests.
It reuses testing infrastructure originally written for BPF streams to verify
ASAN emits call stacks when memory errors occur.

> diff --git a/tools/testing/selftests/bpf/test_loader.c b/tools/testing/selftests/bpf/test_loader.c
> index c4c34cae6102..ee637809a1d4 100644
> --- a/tools/testing/selftests/bpf/test_loader.c
> +++ b/tools/testing/selftests/bpf/test_loader.c
> @@ -93,7 +93,7 @@ void test_loader_fini(struct test_loader *tester)
>  	free(tester->log_buf);
>  }
>  
> -static void free_msgs(struct expected_msgs *msgs)
> +void free_msgs(struct expected_msgs *msgs)

Are there any callers outside of test_loader.c that need this function?

It appears the newly added verify_test_stderr() encapsulates the cleanup
internally by calling free_test_spec(), leaving free_msgs() without any
external callers. Could this function remain static to avoid unnecessary
namespace pollution?

[ ... ]

> @@ -789,6 +789,43 @@ static void emit_stderr(const char *stderr, bool force)
>  	fprintf(stdout, "STDERR:\n=============\n%s=============\n", stderr);
>  }
>  
> +static void verify_stderr(int prog_fd, struct expected_msgs *msgs)
> +{
> +	LIBBPF_OPTS(bpf_prog_stream_read_opts, ropts);
> +	char *buf;
> +	int ret;
> +
> +	if (!msgs->cnt)
> +		return;
> +
> +	buf = malloc(TEST_LOADER_LOG_BUF_SZ);
> +	if (!ASSERT_OK_PTR(buf, "malloc"))
> +		return;
> +
> +	ret = bpf_prog_stream_read(prog_fd, 2, buf, TEST_LOADER_LOG_BUF_SZ - 1,
> +				    &ropts);

Would it be better to use the BPF_STREAM_STDERR macro here instead of
hardcoding the value 2?

Using the macro defined in include/uapi/linux/bpf.h might improve
readability and stay consistent with other stream testing code.
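
For example (assuming BPF_STREAM_STDERR is visible through the uapi headers the
test loader already includes):

```c
ret = bpf_prog_stream_read(prog_fd, BPF_STREAM_STDERR, buf,
			   TEST_LOADER_LOG_BUF_SZ - 1, &ropts);
```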

[ ... ]

> diff --git a/tools/testing/selftests/bpf/test_progs.h b/tools/testing/selftests/bpf/test_progs.h
> index 1a44467f4310..37955a8ad385 100644
> --- a/tools/testing/selftests/bpf/test_progs.h
> +++ b/tools/testing/selftests/bpf/test_progs.h
> @@ -563,5 +563,7 @@ struct expected_msgs {
>  
>  void validate_msgs(const char *log_buf, struct expected_msgs *msgs,
>  		   void (*emit_fn)(const char *buf, bool force));
> +void free_msgs(struct expected_msgs *msgs);

If free_msgs() can remain static, should this declaration be removed?

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260426190338.4615-1-emil@etsalapatis.com?part=8

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH bpf-next v9 0/8] Introduce arena library and runtime
  2026-04-26 19:03 [PATCH bpf-next v9 0/8] Introduce arena library and runtime Emil Tsalapatis
                   ` (7 preceding siblings ...)
  2026-04-26 19:03 ` [PATCH bpf-next v9 8/8] selftests/bpf: Reuse stderr parsing for libarena ASAN tests Emil Tsalapatis
@ 2026-04-27  1:20 ` patchwork-bot+netdevbpf
  8 siblings, 0 replies; 19+ messages in thread
From: patchwork-bot+netdevbpf @ 2026-04-27  1:20 UTC (permalink / raw)
  To: Emil Tsalapatis
  Cc: bpf, ast, andrii, memxor, daniel, eddyz87, song, mattbobrowski

Hello:

This series was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:

On Sun, 26 Apr 2026 15:03:30 -0400 you wrote:
> Add a new subdirectory to tools/testing/selftests/bpf called libarena,
> along with programs useful for writing arena-based BPF code. This
> patchset adds the following:
> 
> 1) libarena, a subdirectory where arena BPF code that is generally useful
> to BPF arena programs can be easily added and tested.
> 
> [...]

Here is the summary with links:
  - [bpf-next,v9,1/8] selftests/bpf: Add ifdef guard for WRITE_ONCE macro in bpf_atomic.h
    https://git.kernel.org/bpf/bpf-next/c/1fb8e9b32e19
  - [bpf-next,v9,2/8] selftests/bpf: Add basic libarena scaffolding
    https://git.kernel.org/bpf/bpf-next/c/d5327480a12a
  - [bpf-next,v9,3/8] selftests/bpf: Move arena-related headers into libarena
    https://git.kernel.org/bpf/bpf-next/c/8c1e1c33fe5a
  - [bpf-next,v9,4/8] selftests/bpf: Add arena ASAN runtime to libarena
    https://git.kernel.org/bpf/bpf-next/c/9ab78691eb5f
  - [bpf-next,v9,5/8] selftests/bpf: Add ASAN support for libarena selftests
    https://git.kernel.org/bpf/bpf-next/c/cfc00618b9df
  - [bpf-next,v9,6/8] selftests/bpf: Add buddy allocator for libarena
    https://git.kernel.org/bpf/bpf-next/c/86426a28c52d
  - [bpf-next,v9,7/8] selftests/bpf: Add selftests for libarena buddy allocator
    https://git.kernel.org/bpf/bpf-next/c/b1487dc1b181
  - [bpf-next,v9,8/8] selftests/bpf: Reuse stderr parsing for libarena ASAN tests
    https://git.kernel.org/bpf/bpf-next/c/554e4eb9e4b7

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html



^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2026-04-27  1:21 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-04-26 19:03 [PATCH bpf-next v9 0/8] Introduce arena library and runtime Emil Tsalapatis
2026-04-26 19:03 ` [PATCH bpf-next v9 1/8] selftests/bpf: Add ifdef guard for WRITE_ONCE macro in bpf_atomic.h Emil Tsalapatis
2026-04-26 19:03 ` [PATCH bpf-next v9 2/8] selftests/bpf: Add basic libarena scaffolding Emil Tsalapatis
2026-04-26 19:34   ` sashiko-bot
2026-04-26 19:03 ` [PATCH bpf-next v9 3/8] selftests/bpf: Move arena-related headers into libarena Emil Tsalapatis
2026-04-26 19:03 ` [PATCH bpf-next v9 4/8] selftests/bpf: Add arena ASAN runtime to libarena Emil Tsalapatis
2026-04-26 20:12   ` sashiko-bot
2026-04-26 19:03 ` [PATCH bpf-next v9 5/8] selftests/bpf: Add ASAN support for libarena selftests Emil Tsalapatis
2026-04-26 19:33   ` bot+bpf-ci
2026-04-26 20:28   ` sashiko-bot
2026-04-26 19:03 ` [PATCH bpf-next v9 6/8] selftests/bpf: Add buddy allocator for libarena Emil Tsalapatis
2026-04-26 19:46   ` bot+bpf-ci
2026-04-26 20:54   ` sashiko-bot
2026-04-26 19:03 ` [PATCH bpf-next v9 7/8] selftests/bpf: Add selftests for libarena buddy allocator Emil Tsalapatis
2026-04-26 21:09   ` sashiko-bot
2026-04-26 19:03 ` [PATCH bpf-next v9 8/8] selftests/bpf: Reuse stderr parsing for libarena ASAN tests Emil Tsalapatis
2026-04-26 19:46   ` bot+bpf-ci
2026-04-26 21:38   ` sashiko-bot
2026-04-27  1:20 ` [PATCH bpf-next v9 0/8] Introduce arena library and runtime patchwork-bot+netdevbpf

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox