DPDK-dev Archive on lore.kernel.org
* [PATCH 00/10] bpf: introduce extensible load API
@ 2026-05-06 17:21 Marat Khalili
  2026-05-06 17:21 ` [PATCH 01/10] bpf: make logging prefixes more consistent Marat Khalili
                   ` (11 more replies)
  0 siblings, 12 replies; 23+ messages in thread
From: Marat Khalili @ 2026-05-06 17:21 UTC (permalink / raw)
  Cc: dev

This patchset introduces an extensible load API for the BPF library in
DPDK, addressing limitations of the current API with respect to ABI
stability and extensibility.

Currently, `rte_bpf_load` relies on a fixed `struct rte_bpf_prm`, which
makes it difficult to add new loading options or parameters without
breaking the ABI.

To resolve these issues, this series introduces `rte_bpf_load_ex`, which
takes a new `struct rte_bpf_prm_ex`. The new parameter structure records
its own size in a `sz` field, so fields can be appended in future
releases without breaking backward compatibility.

Taking advantage of the new extensible API, this patchset also adds
several new features:
* Support for loading and executing BPF programs with up to 5 arguments.
* Support for loading classic BPF (cBPF) directly.
* Support for loading ELF files directly from memory buffers.
* New API functions (`rte_bpf_eth_rx_install` and `rte_bpf_eth_tx_install`)
  to install an already loaded BPF program as a port callback, decoupling
  the loading phase from the installation phase.

Marat Khalili (10):
  bpf: make logging prefixes more consistent
  bpf: introduce extensible load API
  bpf: support up to 5 arguments
  bpf: add cBPF origin to rte_bpf_load_ex
  bpf: support rte_bpf_prm_ex with port callbacks
  bpf: support loading ELF files from memory
  test/bpf: test loading cBPF directly
  test/bpf: test loading ELF file from memory
  doc: add release notes for new extensible BPF API
  doc: add load API to BPF programmer's guide

 app/test/test_bpf.c                    | 325 +++++++++++++++----------
 doc/guides/prog_guide/bpf_lib.rst      |  75 +++++-
 doc/guides/rel_notes/release_26_07.rst |  20 ++
 lib/bpf/bpf.c                          |  32 ++-
 lib/bpf/bpf_convert.c                  |  97 +++++++-
 lib/bpf/bpf_exec.c                     | 126 +++++++++-
 lib/bpf/bpf_impl.h                     |  53 +++-
 lib/bpf/bpf_jit_arm64.c                |  18 +-
 lib/bpf/bpf_jit_x86.c                  |  10 +-
 lib/bpf/bpf_load.c                     | 200 +++++++++++++--
 lib/bpf/bpf_load_elf.c                 | 189 +++++++++-----
 lib/bpf/bpf_pkt.c                      |  65 +++--
 lib/bpf/bpf_stub.c                     |  46 ----
 lib/bpf/bpf_validate.c                 |  94 ++++---
 lib/bpf/meson.build                    |  15 +-
 lib/bpf/rte_bpf.h                      | 195 ++++++++++++++-
 lib/bpf/rte_bpf_ethdev.h               |  54 ++++
 17 files changed, 1245 insertions(+), 369 deletions(-)
 delete mode 100644 lib/bpf/bpf_stub.c

-- 
2.43.0



* [PATCH 01/10] bpf: make logging prefixes more consistent
  2026-05-06 17:21 [PATCH 00/10] bpf: introduce extensible load API Marat Khalili
@ 2026-05-06 17:21 ` Marat Khalili
  2026-05-06 17:21 ` [PATCH 02/10] bpf: introduce extensible load API Marat Khalili
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 23+ messages in thread
From: Marat Khalili @ 2026-05-06 17:21 UTC (permalink / raw)
  To: Konstantin Ananyev, Wathsala Vithanage; +Cc: dev

Logging in lib/bpf is inconsistent: some places use `%s()`, others just
`%s` for `__func__`.

Introduce a new macro that prefixes the log line with the function name,
and use it everywhere the bare function name (without arguments) was
being prepended to the log line.

Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
 lib/bpf/bpf_convert.c   | 18 +++++++++---------
 lib/bpf/bpf_impl.h      |  3 +++
 lib/bpf/bpf_jit_arm64.c |  4 ++--
 lib/bpf/bpf_load.c      |  2 +-
 lib/bpf/bpf_load_elf.c  |  2 +-
 lib/bpf/bpf_stub.c      |  6 ++----
 lib/bpf/bpf_validate.c  | 25 ++++++++++++-------------
 7 files changed, 30 insertions(+), 30 deletions(-)

diff --git a/lib/bpf/bpf_convert.c b/lib/bpf/bpf_convert.c
index 86e703299d05..953ca80670c4 100644
--- a/lib/bpf/bpf_convert.c
+++ b/lib/bpf/bpf_convert.c
@@ -247,8 +247,8 @@ static int bpf_convert_filter(const struct bpf_insn *prog, size_t len,
 	uint8_t bpf_src;
 
 	if (len > BPF_MAXINSNS) {
-		RTE_BPF_LOG_LINE(ERR, "%s: cBPF program too long (%zu insns)",
-			    __func__, len);
+		RTE_BPF_LOG_FUNC_LINE(ERR, "cBPF program too long (%zu insns)",
+			    len);
 		return -EINVAL;
 	}
 
@@ -483,8 +483,8 @@ static int bpf_convert_filter(const struct bpf_insn *prog, size_t len,
 
 			/* Unknown instruction. */
 		default:
-			RTE_BPF_LOG_LINE(ERR, "%s: Unknown instruction!: %#x",
-				    __func__, fp->code);
+			RTE_BPF_LOG_FUNC_LINE(ERR, "Unknown instruction!: %#x",
+				    fp->code);
 			goto err;
 		}
 
@@ -528,7 +528,7 @@ rte_bpf_convert(const struct bpf_program *prog)
 	int ret;
 
 	if (prog == NULL) {
-		RTE_BPF_LOG_LINE(ERR, "%s: NULL program", __func__);
+		RTE_BPF_LOG_FUNC_LINE(ERR, "NULL program");
 		rte_errno = EINVAL;
 		return NULL;
 	}
@@ -536,13 +536,13 @@ rte_bpf_convert(const struct bpf_program *prog)
 	/* 1st pass: calculate the eBPF program length */
 	ret = bpf_convert_filter(prog->bf_insns, prog->bf_len, NULL, &ebpf_len);
 	if (ret < 0) {
-		RTE_BPF_LOG_LINE(ERR, "%s: cannot get eBPF length", __func__);
+		RTE_BPF_LOG_FUNC_LINE(ERR, "cannot get eBPF length");
 		rte_errno = -ret;
 		return NULL;
 	}
 
-	RTE_BPF_LOG_LINE(DEBUG, "%s: prog len cBPF=%u -> eBPF=%u",
-		    __func__, prog->bf_len, ebpf_len);
+	RTE_BPF_LOG_FUNC_LINE(DEBUG, "prog len cBPF=%u -> eBPF=%u",
+		    prog->bf_len, ebpf_len);
 
 	prm = rte_zmalloc("bpf_filter",
 			  sizeof(*prm) + ebpf_len * sizeof(*ebpf), 0);
@@ -557,7 +557,7 @@ rte_bpf_convert(const struct bpf_program *prog)
 	/* 2nd pass: remap cBPF to eBPF instructions  */
 	ret = bpf_convert_filter(prog->bf_insns, prog->bf_len, ebpf, &ebpf_len);
 	if (ret < 0) {
-		RTE_BPF_LOG_LINE(ERR, "%s: cannot convert cBPF to eBPF", __func__);
+		RTE_BPF_LOG_FUNC_LINE(ERR, "cannot convert cBPF to eBPF");
 		rte_free(prm);
 		rte_errno = -ret;
 		return NULL;
diff --git a/lib/bpf/bpf_impl.h b/lib/bpf/bpf_impl.h
index f5fa22098489..fb5ec3c4d65f 100644
--- a/lib/bpf/bpf_impl.h
+++ b/lib/bpf/bpf_impl.h
@@ -32,6 +32,9 @@ extern int rte_bpf_logtype;
 #define RTE_BPF_LOG_LINE(lvl, ...) \
 	RTE_LOG_LINE(lvl, BPF, __VA_ARGS__)
 
+#define RTE_BPF_LOG_FUNC_LINE(lvl, fmt, ...) \
+	RTE_LOG_LINE(lvl, BPF, "%s(): " fmt, __func__, ##__VA_ARGS__)
+
 static inline size_t
 bpf_size(uint32_t bpf_op_sz)
 {
diff --git a/lib/bpf/bpf_jit_arm64.c b/lib/bpf/bpf_jit_arm64.c
index a04ef33a9c88..4bbb97da1b89 100644
--- a/lib/bpf/bpf_jit_arm64.c
+++ b/lib/bpf/bpf_jit_arm64.c
@@ -98,8 +98,8 @@ check_invalid_args(struct a64_jit_ctx *ctx, uint32_t limit)
 
 	for (idx = 0; idx < limit; idx++) {
 		if (rte_le_to_cpu_32(ctx->ins[idx]) == A64_INVALID_OP_CODE) {
-			RTE_BPF_LOG_LINE(ERR,
-				"%s: invalid opcode at %u;", __func__, idx);
+			RTE_BPF_LOG_FUNC_LINE(ERR,
+				"invalid opcode at %u;", idx);
 			return -EINVAL;
 		}
 	}
diff --git a/lib/bpf/bpf_load.c b/lib/bpf/bpf_load.c
index 6983c026af0e..b8a0426fe2ed 100644
--- a/lib/bpf/bpf_load.c
+++ b/lib/bpf/bpf_load.c
@@ -100,7 +100,7 @@ rte_bpf_load(const struct rte_bpf_prm *prm)
 
 	if (rc != 0) {
 		rte_errno = -rc;
-		RTE_BPF_LOG_LINE(ERR, "%s: %d-th xsym is invalid", __func__, i);
+		RTE_BPF_LOG_FUNC_LINE(ERR, "%d-th xsym is invalid", i);
 		return NULL;
 	}
 
diff --git a/lib/bpf/bpf_load_elf.c b/lib/bpf/bpf_load_elf.c
index 1d30ba17e25d..2390823cbf30 100644
--- a/lib/bpf/bpf_load_elf.c
+++ b/lib/bpf/bpf_load_elf.c
@@ -122,7 +122,7 @@ check_elf_header(const Elf64_Ehdr *eh)
 		err = "unexpected machine type";
 
 	if (err != NULL) {
-		RTE_BPF_LOG_LINE(ERR, "%s(): %s", __func__, err);
+		RTE_BPF_LOG_FUNC_LINE(ERR, "%s", err);
 		return -EINVAL;
 	}
 
diff --git a/lib/bpf/bpf_stub.c b/lib/bpf/bpf_stub.c
index dea0d703ca27..e06e820d8327 100644
--- a/lib/bpf/bpf_stub.c
+++ b/lib/bpf/bpf_stub.c
@@ -21,8 +21,7 @@ rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname,
 		return NULL;
 	}
 
-	RTE_BPF_LOG_LINE(ERR, "%s() is not supported, rebuild with libelf installed",
-		__func__);
+	RTE_BPF_LOG_FUNC_LINE(ERR, "not supported, rebuild with libelf installed");
 	rte_errno = ENOTSUP;
 	return NULL;
 }
@@ -38,8 +37,7 @@ rte_bpf_convert(const struct bpf_program *prog)
 		return NULL;
 	}
 
-	RTE_BPF_LOG_LINE(ERR, "%s() is not supported, rebuild with libpcap installed",
-		__func__);
+	RTE_BPF_LOG_FUNC_LINE(ERR, "not supported, rebuild with libpcap installed");
 	rte_errno = ENOTSUP;
 	return NULL;
 }
diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c
index e8dbec282779..a7f4f576c9d6 100644
--- a/lib/bpf/bpf_validate.c
+++ b/lib/bpf/bpf_validate.c
@@ -1838,16 +1838,16 @@ add_edge(struct bpf_verifier *bvf, struct inst_node *node, uint32_t nidx)
 	uint32_t ne;
 
 	if (nidx >= bvf->prm->nb_ins) {
-		RTE_BPF_LOG_LINE(ERR,
-			"%s: program boundary violation at pc: %u, next pc: %u",
-			__func__, get_node_idx(bvf, node), nidx);
+		RTE_BPF_LOG_FUNC_LINE(ERR,
+			"program boundary violation at pc: %u, next pc: %u",
+			get_node_idx(bvf, node), nidx);
 		return -EINVAL;
 	}
 
 	ne = node->nb_edge;
 	if (ne >= RTE_DIM(node->edge_dest)) {
-		RTE_BPF_LOG_LINE(ERR, "%s: internal error at pc: %u",
-			__func__, get_node_idx(bvf, node));
+		RTE_BPF_LOG_FUNC_LINE(ERR, "internal error at pc: %u",
+			get_node_idx(bvf, node));
 		return -EINVAL;
 	}
 
@@ -2005,8 +2005,7 @@ validate(struct bpf_verifier *bvf)
 
 		err = check_syntax(ins);
 		if (err != 0) {
-			RTE_BPF_LOG_LINE(ERR, "%s: %s at pc: %u",
-				__func__, err, i);
+			RTE_BPF_LOG_FUNC_LINE(ERR, "%s at pc: %u", err, i);
 			rc |= -EINVAL;
 		}
 
@@ -2230,9 +2229,9 @@ save_cur_eval_state(struct bpf_verifier *bvf, struct inst_node *node)
 	/* get new eval_state for this node */
 	st = pull_eval_state(&bvf->evst_sr_pool);
 	if (st == NULL) {
-		RTE_BPF_LOG_LINE(ERR,
-			"%s: internal error (out of space) at pc: %u",
-			__func__, get_node_idx(bvf, node));
+		RTE_BPF_LOG_FUNC_LINE(ERR,
+			"internal error (out of space) at pc: %u",
+			get_node_idx(bvf, node));
 		return -ENOMEM;
 	}
 
@@ -2462,8 +2461,8 @@ evaluate(struct bpf_verifier *bvf)
 				err = ins_chk[op].eval(bvf, ins + idx);
 				stats.nb_eval++;
 				if (err != NULL) {
-					RTE_BPF_LOG_LINE(ERR, "%s: %s at pc: %u",
-						__func__, err, idx);
+					RTE_BPF_LOG_FUNC_LINE(ERR,
+						"%s at pc: %u", err, idx);
 					rc = -EINVAL;
 				}
 			}
@@ -2533,7 +2532,7 @@ __rte_bpf_validate(struct rte_bpf *bpf)
 			bpf->prm.prog_arg.type != RTE_BPF_ARG_PTR &&
 			(sizeof(uint64_t) != sizeof(uintptr_t) ||
 			bpf->prm.prog_arg.type != RTE_BPF_ARG_PTR_MBUF)) {
-		RTE_BPF_LOG_LINE(ERR, "%s: unsupported argument type", __func__);
+		RTE_BPF_LOG_FUNC_LINE(ERR, "unsupported argument type");
 		return -ENOTSUP;
 	}
 
-- 
2.43.0



* [PATCH 02/10] bpf: introduce extensible load API
  2026-05-06 17:21 [PATCH 00/10] bpf: introduce extensible load API Marat Khalili
  2026-05-06 17:21 ` [PATCH 01/10] bpf: make logging prefixes more consistent Marat Khalili
@ 2026-05-06 17:21 ` Marat Khalili
  2026-05-06 17:22 ` [PATCH 03/10] bpf: support up to 5 arguments Marat Khalili
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 23+ messages in thread
From: Marat Khalili @ 2026-05-06 17:21 UTC (permalink / raw)
  To: Konstantin Ananyev, Wathsala Vithanage; +Cc: dev

Introduce a new BPF load parameters struct, rte_bpf_prm_ex, that can be
extended without breaking backward or forward compatibility. Introduce a
new function, rte_bpf_load_ex, that consolidates loading from both an
ELF file and a raw memory image into a single code path, with the
possibility of adding more options in the future.

Some changes in code layout and sequence:
* Both old APIs now only forward calls to a single new entry point.
* There is now a centralized cleanup point for all temporary resources
  created during the load process.
* External symbols (xsyms) are now checked for validity right after the
  load starts, not after they have already been used for relocation.
* File bpf_load_elf.c now only handles opening the ELF file and
  providing a patched instruction array to the load process. These are
  kept as two separate functions to support other ELF sources, such as a
  memory image, in the future.
* Function stubs for the case where libelf is not available are moved to
  bpf_load_elf.c to make them easier to keep track of (forgetting to
  update stubs is a common problem).

Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
 lib/bpf/bpf_exec.c      |  10 +--
 lib/bpf/bpf_impl.h      |  32 ++++++-
 lib/bpf/bpf_jit_arm64.c |  12 +--
 lib/bpf/bpf_jit_x86.c   |   8 +-
 lib/bpf/bpf_load.c      | 182 +++++++++++++++++++++++++++++++++++-----
 lib/bpf/bpf_load_elf.c  | 151 +++++++++++++++++++--------------
 lib/bpf/bpf_stub.c      |  17 ----
 lib/bpf/bpf_validate.c  |  32 +++----
 lib/bpf/meson.build     |   4 +-
 lib/bpf/rte_bpf.h       |  68 ++++++++++++++-
 10 files changed, 379 insertions(+), 137 deletions(-)

diff --git a/lib/bpf/bpf_exec.c b/lib/bpf/bpf_exec.c
index 18013753b147..e4668ba10b64 100644
--- a/lib/bpf/bpf_exec.c
+++ b/lib/bpf/bpf_exec.c
@@ -47,7 +47,7 @@
 		RTE_BPF_LOG_LINE(ERR, \
 			"%s(%p): division by 0 at pc: %#zx;", \
 			__func__, bpf, \
-			(uintptr_t)(ins) - (uintptr_t)(bpf)->prm.ins); \
+			(uintptr_t)(ins) - (uintptr_t)(bpf)->prm.raw.ins); \
 		return 0; \
 	} \
 } while (0)
@@ -81,7 +81,7 @@
 		RTE_BPF_LOG_LINE(ERR, \
 			"%s(%p): unsupported atomic operation at pc: %#zx;", \
 			__func__, bpf, \
-			(uintptr_t)(ins) - (uintptr_t)(bpf)->prm.ins); \
+			(uintptr_t)(ins) - (uintptr_t)(bpf)->prm.raw.ins); \
 		return 0; \
 	} \
 } while (0)
@@ -157,7 +157,7 @@ bpf_ld_mbuf(const struct rte_bpf *bpf, uint64_t reg[EBPF_REG_NUM],
 		RTE_BPF_LOG_LINE(DEBUG, "%s(bpf=%p, mbuf=%p, ofs=%u, len=%u): "
 			"load beyond packet boundary at pc: %#zx;",
 			__func__, bpf, mb, off, len,
-			(uintptr_t)(ins) - (uintptr_t)(bpf)->prm.ins);
+			(uintptr_t)(ins) - (uintptr_t)(bpf)->prm.raw.ins);
 	return p;
 }
 
@@ -166,7 +166,7 @@ bpf_exec(const struct rte_bpf *bpf, uint64_t reg[EBPF_REG_NUM])
 {
 	const struct ebpf_insn *ins;
 
-	for (ins = bpf->prm.ins; ; ins++) {
+	for (ins = bpf->prm.raw.ins; ; ins++) {
 		switch (ins->code) {
 		/* 32 bit ALU IMM operations */
 		case (BPF_ALU | BPF_ADD | BPF_K):
@@ -483,7 +483,7 @@ bpf_exec(const struct rte_bpf *bpf, uint64_t reg[EBPF_REG_NUM])
 			RTE_BPF_LOG_LINE(ERR,
 				"%s(%p): invalid opcode %#x at pc: %#zx;",
 				__func__, bpf, ins->code,
-				(uintptr_t)ins - (uintptr_t)bpf->prm.ins);
+				(uintptr_t)ins - (uintptr_t)bpf->prm.raw.ins);
 			return 0;
 		}
 	}
diff --git a/lib/bpf/bpf_impl.h b/lib/bpf/bpf_impl.h
index fb5ec3c4d65f..1cee109bc98a 100644
--- a/lib/bpf/bpf_impl.h
+++ b/lib/bpf/bpf_impl.h
@@ -11,17 +11,45 @@
 #define MAX_BPF_STACK_SIZE	0x200
 
 struct rte_bpf {
-	struct rte_bpf_prm prm;
+	struct rte_bpf_prm_ex prm;
 	struct rte_bpf_jit jit;
 	size_t sz;
 	uint32_t stack_sz;
 };
 
+/* Temporary copies etc. used by the load process. */
+struct __rte_bpf_load {
+	struct rte_bpf_prm_ex prm;
+
+	/* Loading ELF and applying relocations. */
+	int elf_fd;  /* ELF fd, must be negative (not zero) by default. */
+	void *elf;  /* Using void to avoid dependency on libelf. */
+
+	/* Value we are going to return, if any. */
+	struct rte_bpf *bpf;
+};
+
 /*
  * Use '__rte' prefix for non-static internal functions
  * to avoid potential name conflict with other libraries.
  */
-int __rte_bpf_validate(struct rte_bpf *bpf);
+
+/* Free temporary resources created by opening ELF. */
+void
+__rte_bpf_load_elf_cleanup(struct __rte_bpf_load *load);
+
+/* Open the ELF file. */
+int
+__rte_bpf_load_elf_file(struct __rte_bpf_load *load);
+
+/* Get code from ELF and apply relocations to it. */
+int
+__rte_bpf_load_elf_code(struct __rte_bpf_load *load);
+
+/* Validate final BPF code and calculate stack size. */
+int
+__rte_bpf_validate(const struct rte_bpf_prm_ex *prm, uint32_t *stack_sz);
+
 int __rte_bpf_jit(struct rte_bpf *bpf);
 int __rte_bpf_jit_x86(struct rte_bpf *bpf);
 int __rte_bpf_jit_arm64(struct rte_bpf *bpf);
diff --git a/lib/bpf/bpf_jit_arm64.c b/lib/bpf/bpf_jit_arm64.c
index 4bbb97da1b89..9e5e142c13ba 100644
--- a/lib/bpf/bpf_jit_arm64.c
+++ b/lib/bpf/bpf_jit_arm64.c
@@ -111,12 +111,12 @@ jump_offset_init(struct a64_jit_ctx *ctx, struct rte_bpf *bpf)
 {
 	uint32_t i;
 
-	ctx->map = malloc(bpf->prm.nb_ins * sizeof(ctx->map[0]));
+	ctx->map = malloc(bpf->prm.raw.nb_ins * sizeof(ctx->map[0]));
 	if (ctx->map == NULL)
 		return -ENOMEM;
 
 	/* Fill with fake offsets */
-	for (i = 0; i != bpf->prm.nb_ins; i++) {
+	for (i = 0; i != bpf->prm.raw.nb_ins; i++) {
 		ctx->map[i].off = INT32_MAX;
 		ctx->map[i].off_to_b = 0;
 	}
@@ -1130,8 +1130,8 @@ check_program_has_call(struct a64_jit_ctx *ctx, struct rte_bpf *bpf)
 	uint8_t op;
 	uint32_t i;
 
-	for (i = 0; i != bpf->prm.nb_ins; i++) {
-		ins = bpf->prm.ins + i;
+	for (i = 0; i != bpf->prm.raw.nb_ins; i++) {
+		ins = bpf->prm.raw.ins + i;
 		op = ins->code;
 
 		switch (op) {
@@ -1168,10 +1168,10 @@ emit(struct a64_jit_ctx *ctx, struct rte_bpf *bpf)
 
 	emit_prologue(ctx);
 
-	for (i = 0; i != bpf->prm.nb_ins; i++) {
+	for (i = 0; i != bpf->prm.raw.nb_ins; i++) {
 
 		jump_offset_update(ctx, i);
-		ins = bpf->prm.ins + i;
+		ins = bpf->prm.raw.ins + i;
 		op = ins->code;
 		off = ins->off;
 		imm = ins->imm;
diff --git a/lib/bpf/bpf_jit_x86.c b/lib/bpf/bpf_jit_x86.c
index 88b1b5aeab1a..6f4235d43499 100644
--- a/lib/bpf/bpf_jit_x86.c
+++ b/lib/bpf/bpf_jit_x86.c
@@ -1324,12 +1324,12 @@ emit(struct bpf_jit_state *st, const struct rte_bpf *bpf)
 
 	emit_prolog(st, bpf->stack_sz);
 
-	for (i = 0; i != bpf->prm.nb_ins; i++) {
+	for (i = 0; i != bpf->prm.raw.nb_ins; i++) {
 
 		st->idx = i;
 		st->off[i] = st->sz;
 
-		ins = bpf->prm.ins + i;
+		ins = bpf->prm.raw.ins + i;
 
 		dr = ebpf2x86[ins->dst_reg];
 		sr = ebpf2x86[ins->src_reg];
@@ -1532,13 +1532,13 @@ __rte_bpf_jit_x86(struct rte_bpf *bpf)
 
 	/* init state */
 	memset(&st, 0, sizeof(st));
-	st.off = malloc(bpf->prm.nb_ins * sizeof(st.off[0]));
+	st.off = malloc(bpf->prm.raw.nb_ins * sizeof(st.off[0]));
 	if (st.off == NULL)
 		return -ENOMEM;
 
 	/* fill with fake offsets */
 	st.exit.off = INT32_MAX;
-	for (i = 0; i != bpf->prm.nb_ins; i++)
+	for (i = 0; i != bpf->prm.raw.nb_ins; i++)
 		st.off[i] = INT32_MAX;
 
 	/*
diff --git a/lib/bpf/bpf_load.c b/lib/bpf/bpf_load.c
index b8a0426fe2ed..650184167609 100644
--- a/lib/bpf/bpf_load.c
+++ b/lib/bpf/bpf_load.c
@@ -14,14 +14,14 @@
 #include "bpf_impl.h"
 
 static struct rte_bpf *
-bpf_load(const struct rte_bpf_prm *prm)
+bpf_load(const struct rte_bpf_prm_ex *prm)
 {
 	uint8_t *buf;
 	struct rte_bpf *bpf;
 	size_t sz, bsz, insz, xsz;
 
 	xsz =  prm->nb_xsym * sizeof(prm->xsym[0]);
-	insz = prm->nb_ins * sizeof(prm->ins[0]);
+	insz = prm->raw.nb_ins * sizeof(prm->raw.ins[0]);
 	bsz = sizeof(bpf[0]);
 	sz = insz + xsz + bsz;
 
@@ -37,10 +37,10 @@ bpf_load(const struct rte_bpf_prm *prm)
 
 	if (xsz > 0)
 		memcpy(buf + bsz, prm->xsym, xsz);
-	memcpy(buf + bsz + xsz, prm->ins, insz);
+	memcpy(buf + bsz + xsz, prm->raw.ins, insz);
 
 	bpf->prm.xsym = (void *)(buf + bsz);
-	bpf->prm.ins = (void *)(buf + bsz + xsz);
+	bpf->prm.raw.ins = (void *)(buf + bsz + xsz);
 
 	return bpf;
 }
@@ -80,37 +80,44 @@ bpf_check_xsym(const struct rte_bpf_xsym *xsym)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_bpf_load)
-struct rte_bpf *
-rte_bpf_load(const struct rte_bpf_prm *prm)
+static int
+bpf_check_xsyms(const struct rte_bpf_xsym *xsym, uint32_t nb_xsym)
 {
-	struct rte_bpf *bpf;
 	int32_t rc;
 	uint32_t i;
 
-	if (prm == NULL || prm->ins == NULL || prm->nb_ins == 0 ||
-			(prm->nb_xsym != 0 && prm->xsym == NULL)) {
-		rte_errno = EINVAL;
-		return NULL;
-	}
+	if (nb_xsym != 0 && xsym == NULL)
+		return -EINVAL;
 
 	rc = 0;
-	for (i = 0; i != prm->nb_xsym && rc == 0; i++)
-		rc = bpf_check_xsym(prm->xsym + i);
+	for (i = 0; i != nb_xsym && rc == 0; i++)
+		rc = bpf_check_xsym(xsym + i);
 
 	if (rc != 0) {
-		rte_errno = -rc;
 		RTE_BPF_LOG_FUNC_LINE(ERR, "%d-th xsym is invalid", i);
-		return NULL;
+		return rc;
 	}
 
+	return 0;
+}
+
+static int
+bpf_load_raw(struct __rte_bpf_load *load)
+{
+	const struct rte_bpf_prm_ex *const prm = &load->prm;
+	struct rte_bpf *bpf;
+	int32_t rc;
+
+	RTE_ASSERT(prm->origin == RTE_BPF_ORIGIN_RAW);
+
+	if (prm->raw.ins == NULL || prm->raw.nb_ins == 0)
+		return -EINVAL;
+
 	bpf = bpf_load(prm);
-	if (bpf == NULL) {
-		rte_errno = ENOMEM;
-		return NULL;
-	}
+	if (bpf == NULL)
+		return -ENOMEM;
 
-	rc = __rte_bpf_validate(bpf);
+	rc = __rte_bpf_validate(&load->prm, &bpf->stack_sz);
 	if (rc == 0) {
 		__rte_bpf_jit(bpf);
 		if (mprotect(bpf, bpf->sz, PROT_READ) != 0)
@@ -119,9 +126,138 @@ rte_bpf_load(const struct rte_bpf_prm *prm)
 
 	if (rc != 0) {
 		rte_bpf_destroy(bpf);
+		return rc;
+	}
+
+	load->bpf = bpf;
+	return 0;
+}
+
+RTE_EXPORT_SYMBOL(rte_bpf_load)
+struct rte_bpf *
+rte_bpf_load(const struct rte_bpf_prm *prm)
+{
+	return rte_bpf_load_ex(&(struct rte_bpf_prm_ex){
+			.sz = sizeof(struct rte_bpf_prm_ex),
+			.origin = RTE_BPF_ORIGIN_RAW,
+			.raw.ins = prm->ins,
+			.raw.nb_ins = prm->nb_ins,
+			.xsym = prm->xsym,
+			.nb_xsym = prm->nb_xsym,
+			.prog_arg = prm->prog_arg,
+		});
+}
+
+RTE_EXPORT_SYMBOL(rte_bpf_elf_load)
+struct rte_bpf *
+rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname,
+	const char *sname)
+{
+	return rte_bpf_load_ex(&(struct rte_bpf_prm_ex){
+			.sz = sizeof(struct rte_bpf_prm_ex),
+			.origin = RTE_BPF_ORIGIN_ELF_FILE,
+			.elf_file.path = fname,
+			.elf_file.section = sname,
+			.xsym = prm->xsym,
+			.nb_xsym = prm->nb_xsym,
+			.prog_arg = prm->prog_arg,
+		});
+}
+
+/*
+ * Check extensible opts for invalid size or non-zero unsupported members.
+ *
+ * This code provides forward compatibility with applications compiled against
+ * newer version of this library. `opts_sz` is the size of struct `opts` in the
+ * version used for compiling the application, read from the member `sz`;
+ * `type_sz` is the size of same struct in the version used for compiling the
+ * library.
+ *
+ * If new fields were added to the struct in the application version, `opts_sz`
+ * will be greater than `type_sz`. In this case we are making sure all bytes we
+ * don't know how to interpret are zeroes, that is any new features that are
+ * there are not being used.
+ *
+ * This function can be used to check any struct following this convention.
+ */
+static bool
+opts_valid(const void *opts, size_t opts_sz, size_t type_sz)
+{
+	if (opts == NULL)
+		return true;
+
+	if (opts_sz < sizeof(opts_sz))
+		/* Size of the struct is too small even for sz member. */
+		return false;
+
+	/* Verify that all extra bytes are zeroed. */
+	for (size_t offset = type_sz; offset < opts_sz; ++offset)
+		if (((const char *)opts)[offset] != 0)
+			return false;
+
+	return true;
+}
+
+static int
+load_try(struct __rte_bpf_load *load, const struct rte_bpf_prm_ex *app_prm)
+{
+	int rc;
+
+	if (app_prm == NULL || !opts_valid(app_prm, app_prm->sz, sizeof(load->prm)))
+		return -EINVAL;
+
+	/*
+	 * Convert extensible prm of application size to the size known to us.
+	 *
+	 * This code provides compatibility with applications compiled against
+	 * different version of this library. `app_prm->sz` is the size of
+	 * struct `rte_bpf_prm_ex` in the version used for compiling the
+	 * application; `sizeof(load->prm)` is the size of the same struct in
+	 * the version used for compiling the library.
+	 *
+	 * We are copying only the fields known to the application and leave
+	 * the rest filled with zeroes. Any features not known to the
+	 * application will have backward-compatible default behaviour.
+	 */
+	memcpy(&load->prm, app_prm, RTE_MIN(app_prm->sz, sizeof(load->prm)));
+	load->prm.sz = sizeof(load->prm);
+
+	rc = bpf_check_xsyms(load->prm.xsym, load->prm.nb_xsym);
+
+	/* Convert prm origin to raw unless it already is. */
+	switch (load->prm.origin) {
+	case RTE_BPF_ORIGIN_RAW:
+		break;
+	case RTE_BPF_ORIGIN_ELF_FILE:
+		rc = rc < 0 ? rc : __rte_bpf_load_elf_file(load);
+		rc = rc < 0 ? rc : __rte_bpf_load_elf_code(load);
+		break;
+	default:
+		rc = rc < 0 ? rc : -EINVAL;
+	}
+
+	/* Now that it is raw load it as such. */
+	rc = rc < 0 ? rc : bpf_load_raw(load);
+
+	return rc;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_load_ex, 26.11)
+struct rte_bpf *
+rte_bpf_load_ex(const struct rte_bpf_prm_ex *prm)
+{
+	struct __rte_bpf_load load = { .elf_fd = -1 };
+
+	const int rc = load_try(&load, prm);
+
+	__rte_bpf_load_elf_cleanup(&load);
+
+	RTE_ASSERT((rc < 0) == (load.bpf == NULL));
+
+	if (rc < 0) {
 		rte_errno = -rc;
 		return NULL;
 	}
 
-	return bpf;
+	return load.bpf;
 }
diff --git a/lib/bpf/bpf_load_elf.c b/lib/bpf/bpf_load_elf.c
index 2390823cbf30..4ae7492351ae 100644
--- a/lib/bpf/bpf_load_elf.c
+++ b/lib/bpf/bpf_load_elf.c
@@ -2,6 +2,13 @@
  * Copyright(c) 2018 Intel Corporation
  */
 
+#include "bpf_impl.h"
+
+#include <errno.h>
+
+#ifdef RTE_LIBRTE_BPF_ELF
+
+#include <inttypes.h>
 #include <stdarg.h>
 #include <stdio.h>
 #include <string.h>
@@ -26,8 +33,6 @@
 #include <rte_byteorder.h>
 #include <rte_errno.h>
 
-#include "bpf_impl.h"
-
 /* To overcome compatibility issue */
 #ifndef EM_BPF
 #define	EM_BPF	247
@@ -56,7 +61,7 @@ bpf_find_xsym(const char *sn, enum rte_bpf_xtype type,
  */
 static int
 resolve_xsym(const char *sn, size_t ofs, struct ebpf_insn *ins, size_t ins_sz,
-	const struct rte_bpf_prm *prm)
+	const struct rte_bpf_prm_ex *prm)
 {
 	uint32_t idx, fidx;
 	enum rte_bpf_xtype type;
@@ -183,7 +188,7 @@ find_elf_code(Elf *elf, const char *section, Elf_Data **psd, size_t *pidx)
  */
 static int
 process_reloc(Elf *elf, size_t sym_idx, Elf64_Rel *re, size_t re_sz,
-	struct ebpf_insn *ins, size_t ins_sz, const struct rte_bpf_prm *prm)
+	struct ebpf_insn *ins, size_t ins_sz, const struct rte_bpf_prm_ex *prm)
 {
 	int32_t rc;
 	uint32_t i, n;
@@ -232,8 +237,8 @@ process_reloc(Elf *elf, size_t sym_idx, Elf64_Rel *re, size_t re_sz,
  * and update bpf code.
  */
 static int
-elf_reloc_code(Elf *elf, Elf_Data *ed, size_t sidx,
-	const struct rte_bpf_prm *prm)
+elf_reloc_code(Elf *elf, struct ebpf_insn *ins, size_t ins_sz, size_t sidx,
+	const struct rte_bpf_prm_ex *prm)
 {
 	Elf64_Rel *re;
 	Elf_Scn *sc;
@@ -256,7 +261,7 @@ elf_reloc_code(Elf *elf, Elf_Data *ed, size_t sidx,
 					sd->d_size % sizeof(re[0]) != 0)
 				return -EINVAL;
 			rc = process_reloc(elf, sh->sh_link,
-				sd->d_buf, sd->d_size, ed->d_buf, ed->d_size,
+				sd->d_buf, sd->d_size, ins, ins_sz,
 				prm);
 		}
 	}
@@ -264,72 +269,96 @@ elf_reloc_code(Elf *elf, Elf_Data *ed, size_t sidx,
 	return rc;
 }
 
-static struct rte_bpf *
-bpf_load_elf(const struct rte_bpf_prm *prm, int32_t fd, const char *section)
+void
+__rte_bpf_load_elf_cleanup(struct __rte_bpf_load *load)
 {
-	Elf *elf;
-	Elf_Data *sd;
-	size_t sidx;
-	int32_t rc;
-	struct rte_bpf *bpf;
-	struct rte_bpf_prm np;
+	elf_end(load->elf);
 
-	elf_version(EV_CURRENT);
-	elf = elf_begin(fd, ELF_C_READ, NULL);
+	if (load->elf_fd >= 0 && close(load->elf_fd) < 0) {
+		const int close_errno = errno;
+		RTE_BPF_LOG_FUNC_LINE(ERR, "error %d closing: %s",
+			close_errno, strerror(close_errno));
+	}
+}
 
-	rc = find_elf_code(elf, section, &sd, &sidx);
-	if (rc == 0)
-		rc = elf_reloc_code(elf, sd, sidx, prm);
+int
+__rte_bpf_load_elf_file(struct __rte_bpf_load *load)
+{
+	const struct rte_bpf_prm_ex *const prm = &load->prm;
 
-	if (rc == 0) {
-		np = prm[0];
-		np.ins = sd->d_buf;
-		np.nb_ins = sd->d_size / sizeof(struct ebpf_insn);
-		bpf = rte_bpf_load(&np);
-	} else {
-		bpf = NULL;
-		rte_errno = -rc;
+	RTE_ASSERT(prm->origin == RTE_BPF_ORIGIN_ELF_FILE);
+
+	if (prm->elf_file.path == NULL || prm->elf_file.section == NULL)
+		return -EINVAL;
+
+	if (elf_version(EV_CURRENT) == EV_NONE)
+		return -ENOTSUP;
+
+	load->elf_fd = open(prm->elf_file.path, O_RDONLY);
+	if (load->elf_fd < 0) {
+		const int open_errno = errno;
+		RTE_BPF_LOG_FUNC_LINE(ERR, "error %d opening \"%s\": %s",
+			open_errno, prm->elf_file.path, strerror(open_errno));
+		return -open_errno;
+	}
+
+	load->elf = elf_begin(load->elf_fd, ELF_C_READ, NULL);
+	if (load->elf == NULL) {
+		const int rc = elf_errno();
+		RTE_BPF_LOG_FUNC_LINE(ERR, "error %d opening ELF \"%s\": %s",
+			rc, prm->elf_file.path, elf_errmsg(rc));
+		return -EINVAL;
 	}
 
-	elf_end(elf);
-	return bpf;
+	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_bpf_elf_load)
-struct rte_bpf *
-rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname,
-	const char *sname)
+int
+__rte_bpf_load_elf_code(struct __rte_bpf_load *load)
 {
-	int32_t fd, rc;
-	struct rte_bpf *bpf;
+	struct rte_bpf_prm_ex *const prm = &load->prm;
+	Elf_Data *sd;
+	size_t sidx;
+	int rc;
 
-	if (prm == NULL || fname == NULL || sname == NULL) {
-		rte_errno = EINVAL;
-		return NULL;
-	}
+	rc = find_elf_code(load->elf, prm->elf_file.section, &sd, &sidx);
+	if (rc < 0)
+		return rc;
 
-	fd = open(fname, O_RDONLY);
-	if (fd < 0) {
-		rc = errno;
-		RTE_BPF_LOG_LINE(ERR, "%s(%s) error code: %d(%s)",
-			__func__, fname, rc, strerror(rc));
-		rte_errno = EINVAL;
-		return NULL;
-	}
+	prm->origin = RTE_BPF_ORIGIN_RAW;
+	prm->raw.ins = sd->d_buf;
+	prm->raw.nb_ins = sd->d_size / sizeof(struct ebpf_insn);
 
-	bpf = bpf_load_elf(prm, fd, sname);
-	close(fd);
+	rc = elf_reloc_code(load->elf, sd->d_buf, sd->d_size, sidx, prm);
+	if (rc < 0)
+		return -EINVAL;
 
-	if (bpf == NULL) {
-		RTE_BPF_LOG_LINE(ERR,
-			"%s(fname=\"%s\", sname=\"%s\") failed, "
-			"error code: %d",
-			__func__, fname, sname, rte_errno);
-		return NULL;
-	}
+	return 0;
+}
+
+#else /* RTE_LIBRTE_BPF_ELF */
+
+void
+__rte_bpf_load_elf_cleanup(struct __rte_bpf_load *load)
+{
+	RTE_ASSERT(load->elf == NULL);
+	RTE_ASSERT(load->elf_fd < 0);
+}
 
-	RTE_BPF_LOG_LINE(INFO, "%s(fname=\"%s\", sname=\"%s\") "
-		"successfully creates %p(jit={.func=%p,.sz=%zu});",
-		__func__, fname, sname, bpf, bpf->jit.func, bpf->jit.sz);
-	return bpf;
+int
+__rte_bpf_load_elf_file(struct __rte_bpf_load *load)
+{
+	RTE_SET_USED(load);
+	RTE_BPF_LOG_FUNC_LINE(ERR, "not supported, rebuild with libelf installed");
+	return -ENOTSUP;
 }
+
+int
+__rte_bpf_load_elf_code(struct __rte_bpf_load *load)
+{
+	RTE_SET_USED(load);
+	RTE_BPF_LOG_FUNC_LINE(ERR, "not supported, rebuild with libelf installed");
+	return -ENOTSUP;
+}
+
+#endif /* RTE_LIBRTE_BPF_ELF */
diff --git a/lib/bpf/bpf_stub.c b/lib/bpf/bpf_stub.c
index e06e820d8327..4c329832c264 100644
--- a/lib/bpf/bpf_stub.c
+++ b/lib/bpf/bpf_stub.c
@@ -10,23 +10,6 @@
  * Contains stubs for unimplemented public API functions
  */
 
-#ifndef RTE_LIBRTE_BPF_ELF
-RTE_EXPORT_SYMBOL(rte_bpf_elf_load)
-struct rte_bpf *
-rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname,
-	const char *sname)
-{
-	if (prm == NULL || fname == NULL || sname == NULL) {
-		rte_errno = EINVAL;
-		return NULL;
-	}
-
-	RTE_BPF_LOG_FUNC_LINE(ERR, "not supported, rebuild with libelf installed");
-	rte_errno = ENOTSUP;
-	return NULL;
-}
-#endif
-
 #ifndef RTE_HAS_LIBPCAP
 RTE_EXPORT_SYMBOL(rte_bpf_convert)
 struct rte_bpf_prm *
diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c
index a7f4f576c9d6..5bfc59296d05 100644
--- a/lib/bpf/bpf_validate.c
+++ b/lib/bpf/bpf_validate.c
@@ -80,7 +80,7 @@ struct evst_pool {
 };
 
 struct bpf_verifier {
-	const struct rte_bpf_prm *prm;
+	const struct rte_bpf_prm_ex *prm;
 	struct inst_node *in;
 	uint64_t stack_sz;
 	uint32_t nb_nodes;
@@ -1837,7 +1837,7 @@ add_edge(struct bpf_verifier *bvf, struct inst_node *node, uint32_t nidx)
 {
 	uint32_t ne;
 
-	if (nidx >= bvf->prm->nb_ins) {
+	if (nidx >= bvf->prm->raw.nb_ins) {
 		RTE_BPF_LOG_FUNC_LINE(ERR,
 			"program boundary violation at pc: %u, next pc: %u",
 			get_node_idx(bvf, node), nidx);
@@ -1946,10 +1946,10 @@ log_unreachable(const struct bpf_verifier *bvf)
 	struct inst_node *node;
 	const struct ebpf_insn *ins;
 
-	for (i = 0; i != bvf->prm->nb_ins; i++) {
+	for (i = 0; i != bvf->prm->raw.nb_ins; i++) {
 
 		node = bvf->in + i;
-		ins = bvf->prm->ins + i;
+		ins = bvf->prm->raw.ins + i;
 
 		if (node->colour == WHITE &&
 				ins->code != (BPF_LD | BPF_IMM | EBPF_DW))
@@ -1966,7 +1966,7 @@ log_loop(const struct bpf_verifier *bvf)
 	uint32_t i, j;
 	struct inst_node *node;
 
-	for (i = 0; i != bvf->prm->nb_ins; i++) {
+	for (i = 0; i != bvf->prm->raw.nb_ins; i++) {
 
 		node = bvf->in + i;
 		if (node->colour != BLACK)
@@ -1998,9 +1998,9 @@ validate(struct bpf_verifier *bvf)
 	const char *err;
 
 	rc = 0;
-	for (i = 0; i < bvf->prm->nb_ins; i++) {
+	for (i = 0; i < bvf->prm->raw.nb_ins; i++) {
 
-		ins = bvf->prm->ins + i;
+		ins = bvf->prm->raw.ins + i;
 		node = bvf->in + i;
 
 		err = check_syntax(ins);
@@ -2432,7 +2432,7 @@ evaluate(struct bpf_verifier *bvf)
 
 	bvf->evst->rv[EBPF_REG_10] = rvfp;
 
-	ins = bvf->prm->ins;
+	ins = bvf->prm->raw.ins;
 	node = bvf->in;
 	next = node;
 	rc = 0;
@@ -2522,23 +2522,23 @@ evaluate(struct bpf_verifier *bvf)
 }
 
 int
-__rte_bpf_validate(struct rte_bpf *bpf)
+__rte_bpf_validate(const struct rte_bpf_prm_ex *prm, uint32_t *stack_sz)
 {
 	int32_t rc;
 	struct bpf_verifier bvf;
 
 	/* check input argument type, don't allow mbuf ptr on 32-bit */
-	if (bpf->prm.prog_arg.type != RTE_BPF_ARG_RAW &&
-			bpf->prm.prog_arg.type != RTE_BPF_ARG_PTR &&
+	if (prm->prog_arg.type != RTE_BPF_ARG_RAW &&
+			prm->prog_arg.type != RTE_BPF_ARG_PTR &&
 			(sizeof(uint64_t) != sizeof(uintptr_t) ||
-			bpf->prm.prog_arg.type != RTE_BPF_ARG_PTR_MBUF)) {
+			prm->prog_arg.type != RTE_BPF_ARG_PTR_MBUF)) {
 		RTE_BPF_LOG_FUNC_LINE(ERR, "unsupported argument type");
 		return -ENOTSUP;
 	}
 
 	memset(&bvf, 0, sizeof(bvf));
-	bvf.prm = &bpf->prm;
-	bvf.in = calloc(bpf->prm.nb_ins, sizeof(bvf.in[0]));
+	bvf.prm = prm;
+	bvf.in = calloc(prm->raw.nb_ins, sizeof(bvf.in[0]));
 	if (bvf.in == NULL)
 		return -ENOMEM;
 
@@ -2555,11 +2555,11 @@ __rte_bpf_validate(struct rte_bpf *bpf)
 
 	/* copy collected info */
 	if (rc == 0) {
-		bpf->stack_sz = bvf.stack_sz;
+		*stack_sz = bvf.stack_sz;
 
 		/* for LD_ABS/LD_IND, we'll need extra space on the stack */
 		if (bvf.nb_ldmb_nodes != 0)
-			bpf->stack_sz = RTE_ALIGN_CEIL(bpf->stack_sz +
+			*stack_sz = RTE_ALIGN_CEIL(*stack_sz +
 				sizeof(uint64_t), sizeof(uint64_t));
 	}
 
diff --git a/lib/bpf/meson.build b/lib/bpf/meson.build
index 28df7f469a4c..4901b6ee1463 100644
--- a/lib/bpf/meson.build
+++ b/lib/bpf/meson.build
@@ -19,6 +19,7 @@ sources = files('bpf.c',
         'bpf_dump.c',
         'bpf_exec.c',
         'bpf_load.c',
+        'bpf_load_elf.c',
         'bpf_pkt.c',
         'bpf_stub.c',
         'bpf_validate.c')
@@ -38,10 +39,9 @@ deps += ['mbuf', 'net', 'ethdev']
 dep = dependency('libelf', required: false, method: 'pkg-config')
 if dep.found()
     dpdk_conf.set('RTE_LIBRTE_BPF_ELF', 1)
-    sources += files('bpf_load_elf.c')
     ext_deps += dep
 else
-    warning('libelf is missing, rte_bpf_elf_load API will be disabled')
+    warning('libelf is missing, ELF API will be disabled')
 endif
 
 if dpdk_conf.has('RTE_HAS_LIBPCAP')
diff --git a/lib/bpf/rte_bpf.h b/lib/bpf/rte_bpf.h
index 309d84bc516a..bf58a418191e 100644
--- a/lib/bpf/rte_bpf.h
+++ b/lib/bpf/rte_bpf.h
@@ -86,7 +86,47 @@ struct rte_bpf_xsym {
 };
 
 /**
- * Input parameters for loading eBPF code.
+ * Possible origins of eBPF program code.
+ */
+enum rte_bpf_origin {
+	RTE_BPF_ORIGIN_RAW,		/**< code loaded from raw array */
+	RTE_BPF_ORIGIN_RESERVED,	/**< reserved for cBPF */
+	RTE_BPF_ORIGIN_ELF_FILE,	/**< code loaded from elf_file */
+};
+
+/**
+ * Input parameters for loading eBPF code, extensible version.
+ *
+ * Follows libbpf conventions for extensible structs.
+ */
+struct rte_bpf_prm_ex {
+	size_t sz;  /**< size of this struct for backward compatibility */
+
+	uint32_t flags;  /**< flags controlling eBPF load and other options */
+
+	enum rte_bpf_origin origin;  /**< origin of eBPF program code */
+
+	/** program origin parameters, member in use depends on origin */
+	union {
+		struct {
+			const struct ebpf_insn *ins;  /**< eBPF instructions */
+			uint32_t nb_ins;  /**< number of instructions in ins */
+		} raw;
+		struct {
+			const char *path;  /**< path to the ELF file */
+			const char *section;  /**< ELF section with the code */
+		} elf_file;
+	};
+
+	const struct rte_bpf_xsym *xsym;
+	/**< array of external symbols that eBPF code is allowed to reference */
+	uint32_t nb_xsym;  /**< number of elements in xsym */
+
+	struct rte_bpf_arg prog_arg;  /**< input arg description */
+};
+
+/**
+ * Input parameters for loading eBPF code, legacy version.
  */
 struct rte_bpf_prm {
 	const struct ebpf_insn *ins; /**< array of eBPF instructions */
@@ -116,6 +156,32 @@ struct rte_bpf;
 void
 rte_bpf_destroy(struct rte_bpf *bpf);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.
+ *
+ * Create a new eBPF execution context, load code from specified origin into it.
+ *
+ * @param prm
+ *   Parameters used to create and initialise the BPF execution context.
+ *
+ *   Member sz must be set to the struct size as known to the application.
+ *   If it exceeds the size known to the library and the extra part contains
+ *   non-zero bytes, the parameters are rejected. If it is smaller, defaults
+ *   are used for the members that are not present.
+ * @return
+ *   BPF handle that is used in future BPF operations,
+ *   or NULL on error, with error code set in rte_errno.
+ *   Possible rte_errno errors include:
+ *   - EINVAL  - invalid parameter passed to function
+ *   - ENOMEM  - can't reserve enough memory
+ *   - ENOTSUP - requested feature is not supported (e.g. no libelf to load ELF)
+ */
+__rte_experimental
+struct rte_bpf *
+rte_bpf_load_ex(const struct rte_bpf_prm_ex *prm)
+	__rte_malloc __rte_dealloc(rte_bpf_destroy, 1);
+
 /**
  * Create a new eBPF execution context and load given BPF code into it.
  *
-- 
2.43.0

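The `sz`-based compatibility rule described in the `rte_bpf_load_ex` doc comment above (reject a larger struct only when the unknown tail is non-zero; zero-default the missing members of a smaller one) can be sketched outside DPDK. This is a hedged illustration of the libbpf-style convention, not the library's actual code; `struct prm_ex`, `check_prm_size`, and `lib_known_sz` are hypothetical names.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical extensible parameter struct: sz comes first, like
 * struct rte_bpf_prm_ex (and libbpf's *_opts structs). */
struct prm_ex {
	size_t sz;       /* size of the struct as known to the caller */
	unsigned flags;
	/* future members are appended here */
};

/* Accept a caller struct whose size may differ from the size the
 * library was built with (lib_known_sz). Larger is fine only if the
 * tail the library does not understand is all zeroes; smaller means
 * the absent members default to zero. */
static bool
check_prm_size(const void *prm, size_t caller_sz, size_t lib_known_sz)
{
	const unsigned char *p = prm;
	size_t i;

	if (caller_sz <= lib_known_sz)
		return true;   /* older caller: absent members default to 0 */

	/* newer caller: reject if the unknown tail carries data */
	for (i = lib_known_sz; i < caller_sz; i++)
		if (p[i] != 0)
			return false;
	return true;
}
```

A caller following this convention zero-initialises the struct, then sets `sz = sizeof(struct prm_ex)` before filling any members.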

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 03/10] bpf: support up to 5 arguments
  2026-05-06 17:21 [PATCH 00/10] bpf: introduce extensible load API Marat Khalili
  2026-05-06 17:21 ` [PATCH 01/10] bpf: make logging prefixes more consistent Marat Khalili
  2026-05-06 17:21 ` [PATCH 02/10] bpf: introduce extensible load API Marat Khalili
@ 2026-05-06 17:22 ` Marat Khalili
  2026-05-06 17:22 ` [PATCH 04/10] bpf: add cBPF origin to rte_bpf_load_ex Marat Khalili
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 23+ messages in thread
From: Marat Khalili @ 2026-05-06 17:22 UTC (permalink / raw)
  To: Konstantin Ananyev, Wathsala Vithanage; +Cc: dev

When using rte_bpf_load_ex, allow up to 5 arguments for a BPF program.
This is particularly useful for callbacks and other internal functions.

Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
 lib/bpf/bpf.c           |  32 +++++++++--
 lib/bpf/bpf_exec.c      | 116 +++++++++++++++++++++++++++++++++++++++
 lib/bpf/bpf_impl.h      |   2 +-
 lib/bpf/bpf_jit_arm64.c |   2 +-
 lib/bpf/bpf_jit_x86.c   |   2 +-
 lib/bpf/bpf_load.c      |   6 ++-
 lib/bpf/bpf_validate.c  |  45 ++++++++++++----
 lib/bpf/rte_bpf.h       | 117 ++++++++++++++++++++++++++++++++++++++--
 8 files changed, 299 insertions(+), 23 deletions(-)

diff --git a/lib/bpf/bpf.c b/lib/bpf/bpf.c
index 5239b3e11e0e..67dededd9ae8 100644
--- a/lib/bpf/bpf.c
+++ b/lib/bpf/bpf.c
@@ -16,8 +16,8 @@ void
 rte_bpf_destroy(struct rte_bpf *bpf)
 {
 	if (bpf != NULL) {
-		if (bpf->jit.func != NULL)
-			munmap(bpf->jit.func, bpf->jit.sz);
+		if (bpf->jit.raw != NULL)
+			munmap(bpf->jit.raw, bpf->jit.sz);
 		munmap(bpf, bpf->sz);
 	}
 }
@@ -29,7 +29,33 @@ rte_bpf_get_jit(const struct rte_bpf *bpf, struct rte_bpf_jit *jit)
 	if (bpf == NULL || jit == NULL)
 		return -EINVAL;
 
-	jit[0] = bpf->jit;
+	if (bpf->prm.nb_prog_arg != 1) {
+		RTE_BPF_LOG_LINE(ERR,
+			"this program takes %u arguments, use rte_bpf_get_jit_ex",
+			bpf->prm.nb_prog_arg);
+		return -EINVAL;
+	}
+
+	*jit = (struct rte_bpf_jit) {
+		.func = bpf->jit.raw,
+		.sz = bpf->jit.sz,
+	};
+	return 0;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_get_jit_ex, 26.11)
+int
+rte_bpf_get_jit_ex(const struct rte_bpf *bpf, struct rte_bpf_jit_ex *jit)
+{
+	if (bpf == NULL || jit == NULL)
+		return -EINVAL;
+
+	if (bpf->jit.raw == NULL) {
+		RTE_BPF_LOG_LINE(ERR, "no JIT-compiled version");
+		return -ENOENT;
+	}
+
+	*jit = bpf->jit;
 	return 0;
 }
 
diff --git a/lib/bpf/bpf_exec.c b/lib/bpf/bpf_exec.c
index e4668ba10b64..d77c59991632 100644
--- a/lib/bpf/bpf_exec.c
+++ b/lib/bpf/bpf_exec.c
@@ -502,6 +502,10 @@ rte_bpf_exec_burst(const struct rte_bpf *bpf, void *ctx[], uint64_t rc[],
 	uint64_t reg[EBPF_REG_NUM];
 	uint64_t stack[MAX_BPF_STACK_SIZE / sizeof(uint64_t)];
 
+	if (bpf->prm.nb_prog_arg != 1)
+		/* Use rte_bpf_exec_burst_ex with this program. */
+		return -EINVAL;
+
 	for (i = 0; i != num; i++) {
 
 		reg[EBPF_REG_1] = (uintptr_t)ctx[i];
@@ -513,6 +517,107 @@ rte_bpf_exec_burst(const struct rte_bpf *bpf, void *ctx[], uint64_t rc[],
 	return i;
 }
 
+static uint32_t
+exec_vm_burst_ex(const struct rte_bpf *bpf, const struct rte_bpf_prog_ctx *ctx,
+	uint64_t rc[], uint32_t num)
+{
+	uint32_t i;
+	uint64_t reg[EBPF_REG_NUM];
+	uint64_t stack[MAX_BPF_STACK_SIZE / sizeof(uint64_t)];
+
+	for (i = 0; i != num; i++) {
+		const union rte_bpf_func_arg *const arg = ctx[i].arg;
+
+		switch (bpf->prm.nb_prog_arg) {
+		case 5:
+			reg[EBPF_REG_5] = arg[4].u64;
+			/* FALLTHROUGH */
+		case 4:
+			reg[EBPF_REG_4] = arg[3].u64;
+			/* FALLTHROUGH */
+		case 3:
+			reg[EBPF_REG_3] = arg[2].u64;
+			/* FALLTHROUGH */
+		case 2:
+			reg[EBPF_REG_2] = arg[1].u64;
+			/* FALLTHROUGH */
+		case 1:
+			reg[EBPF_REG_1] = arg[0].u64;
+			/* FALLTHROUGH */
+		case 0:
+			break;
+		}
+
+		reg[EBPF_REG_10] = (uintptr_t)(stack + RTE_DIM(stack));
+
+		rc[i] = bpf_exec(bpf, reg);
+	}
+
+	return i;
+}
+
+static uint32_t
+exec_jit_burst_ex(const struct rte_bpf *bpf, const struct rte_bpf_prog_ctx *ctx,
+	uint64_t rc[], uint32_t num)
+{
+	uint32_t i;
+	const struct rte_bpf_jit_ex jit = bpf->jit;
+
+	switch (bpf->prm.nb_prog_arg) {
+	case 0:
+		for (i = 0; i != num; i++)
+			rc[i] = jit.func0();
+		break;
+	case 1:
+		for (i = 0; i != num; i++) {
+			const union rte_bpf_func_arg *const arg = ctx[i].arg;
+			rc[i] = jit.func1(arg[0]);
+		}
+		break;
+	case 2:
+		for (i = 0; i != num; i++) {
+			const union rte_bpf_func_arg *const arg = ctx[i].arg;
+			rc[i] = jit.func2(arg[0], arg[1]);
+		}
+		break;
+	case 3:
+		for (i = 0; i != num; i++) {
+			const union rte_bpf_func_arg *const arg = ctx[i].arg;
+			rc[i] = jit.func3(arg[0], arg[1], arg[2]);
+		}
+		break;
+	case 4:
+		for (i = 0; i != num; i++) {
+			const union rte_bpf_func_arg *const arg = ctx[i].arg;
+			rc[i] = jit.func4(arg[0], arg[1], arg[2], arg[3]);
+		}
+		break;
+	case 5:
+		for (i = 0; i != num; i++) {
+			const union rte_bpf_func_arg *const arg = ctx[i].arg;
+			rc[i] = jit.func5(arg[0], arg[1], arg[2], arg[3], arg[4]);
+		}
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return i;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_exec_burst_ex, 26.11)
+uint32_t
+rte_bpf_exec_burst_ex(const struct rte_bpf *bpf, const struct rte_bpf_prog_ctx *ctx,
+	uint64_t rc[], uint32_t num, uint64_t flags)
+{
+	if ((flags & ~RTE_BPF_EXEC_FLAG_MASK) != 0)
+		return -EINVAL;
+
+	return (flags & RTE_BPF_EXEC_FLAG_JIT) != 0 ?
+		exec_jit_burst_ex(bpf, ctx, rc, num) :
+		exec_vm_burst_ex(bpf, ctx, rc, num);
+}
+
 RTE_EXPORT_SYMBOL(rte_bpf_exec)
 uint64_t
 rte_bpf_exec(const struct rte_bpf *bpf, void *ctx)
@@ -522,3 +627,14 @@ rte_bpf_exec(const struct rte_bpf *bpf, void *ctx)
 	rte_bpf_exec_burst(bpf, &ctx, &rc, 1);
 	return rc;
 }
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_exec_ex, 26.11)
+uint64_t
+rte_bpf_exec_ex(const struct rte_bpf *bpf, const struct rte_bpf_prog_ctx *ctx,
+		uint64_t flags)
+{
+	uint64_t rc;
+
+	rte_bpf_exec_burst_ex(bpf, ctx, &rc, 1, flags);
+	return rc;
+}
diff --git a/lib/bpf/bpf_impl.h b/lib/bpf/bpf_impl.h
index 1cee109bc98a..4a98b3373067 100644
--- a/lib/bpf/bpf_impl.h
+++ b/lib/bpf/bpf_impl.h
@@ -12,7 +12,7 @@
 
 struct rte_bpf {
 	struct rte_bpf_prm_ex prm;
-	struct rte_bpf_jit jit;
+	struct rte_bpf_jit_ex jit;
 	size_t sz;
 	uint32_t stack_sz;
 };
diff --git a/lib/bpf/bpf_jit_arm64.c b/lib/bpf/bpf_jit_arm64.c
index 9e5e142c13ba..ba7ae4d680c5 100644
--- a/lib/bpf/bpf_jit_arm64.c
+++ b/lib/bpf/bpf_jit_arm64.c
@@ -1471,7 +1471,7 @@ __rte_bpf_jit_arm64(struct rte_bpf *bpf)
 	/* Flush the icache */
 	__builtin___clear_cache((char *)ctx.ins, (char *)(ctx.ins + ctx.idx));
 
-	bpf->jit.func = (void *)ctx.ins;
+	bpf->jit.raw = ctx.ins;
 	bpf->jit.sz = size;
 
 	goto finish;
diff --git a/lib/bpf/bpf_jit_x86.c b/lib/bpf/bpf_jit_x86.c
index 6f4235d43499..54eb279643b9 100644
--- a/lib/bpf/bpf_jit_x86.c
+++ b/lib/bpf/bpf_jit_x86.c
@@ -1568,7 +1568,7 @@ __rte_bpf_jit_x86(struct rte_bpf *bpf)
 	if (rc != 0)
 		munmap(st.ins, st.sz);
 	else {
-		bpf->jit.func = (void *)st.ins;
+		bpf->jit.raw = st.ins;
 		bpf->jit.sz = st.sz;
 	}
 
diff --git a/lib/bpf/bpf_load.c b/lib/bpf/bpf_load.c
index 650184167609..c9cbaf6ded7e 100644
--- a/lib/bpf/bpf_load.c
+++ b/lib/bpf/bpf_load.c
@@ -144,7 +144,8 @@ rte_bpf_load(const struct rte_bpf_prm *prm)
 			.raw.nb_ins = prm->nb_ins,
 			.xsym = prm->xsym,
 			.nb_xsym = prm->nb_xsym,
-			.prog_arg = prm->prog_arg,
+			.prog_arg[0] = prm->prog_arg,
+			.nb_prog_arg = 1,
 		});
 }
 
@@ -160,7 +161,8 @@ rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname,
 			.elf_file.section = sname,
 			.xsym = prm->xsym,
 			.nb_xsym = prm->nb_xsym,
-			.prog_arg = prm->prog_arg,
+			.prog_arg[0] = prm->prog_arg,
+			.nb_prog_arg = 1,
 		});
 }
 
diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c
index 5bfc59296d05..bf8a4abb5a5a 100644
--- a/lib/bpf/bpf_validate.c
+++ b/lib/bpf/bpf_validate.c
@@ -2425,10 +2425,14 @@ evaluate(struct bpf_verifier *bvf)
 		.s = {.min = MAX_BPF_STACK_SIZE, .max = MAX_BPF_STACK_SIZE},
 	};
 
-	bvf->evst->rv[EBPF_REG_1].v = bvf->prm->prog_arg;
-	bvf->evst->rv[EBPF_REG_1].mask = UINT64_MAX;
-	if (bvf->prm->prog_arg.type == RTE_BPF_ARG_RAW)
-		eval_max_bound(bvf->evst->rv + EBPF_REG_1, UINT64_MAX);
+	for (uint32_t pai = 0; pai != bvf->prm->nb_prog_arg; ++pai) {
+		struct bpf_reg_val *reg = &bvf->evst->rv[EBPF_REG_1 + pai];
+
+		reg->v = bvf->prm->prog_arg[pai];
+		reg->mask = UINT64_MAX;
+		if (reg->v.type == RTE_BPF_ARG_RAW)
+			eval_max_bound(reg, UINT64_MAX);
+	}
 
 	bvf->evst->rv[EBPF_REG_10] = rvfp;
 
@@ -2521,21 +2525,42 @@ evaluate(struct bpf_verifier *bvf)
 	return rc;
 }
 
+static bool
+prog_arg_is_valid(const struct rte_bpf_arg *prog_arg)
+{
+	/* check input argument type, don't allow mbuf ptr on 32-bit */
+	if (prog_arg->type != RTE_BPF_ARG_RAW &&
+			prog_arg->type != RTE_BPF_ARG_PTR &&
+			(sizeof(uint64_t) != sizeof(uintptr_t) ||
+			prog_arg->type != RTE_BPF_ARG_PTR_MBUF)) {
+		RTE_BPF_LOG_FUNC_LINE(ERR, "unsupported argument type");
+		return false;
+	}
+
+	return true;
+}
+
 int
 __rte_bpf_validate(const struct rte_bpf_prm_ex *prm, uint32_t *stack_sz)
 {
 	int32_t rc;
 	struct bpf_verifier bvf;
 
-	/* check input argument type, don't allow mbuf ptr on 32-bit */
-	if (prm->prog_arg.type != RTE_BPF_ARG_RAW &&
-			prm->prog_arg.type != RTE_BPF_ARG_PTR &&
-			(sizeof(uint64_t) != sizeof(uintptr_t) ||
-			prm->prog_arg.type != RTE_BPF_ARG_PTR_MBUF)) {
-		RTE_BPF_LOG_FUNC_LINE(ERR, "unsupported argument type");
+	if (prm->nb_prog_arg > EBPF_FUNC_MAX_ARGS) {
+		RTE_BPF_LOG_FUNC_LINE(ERR,
+			"support up to %u arguments, found %u",
+			EBPF_FUNC_MAX_ARGS, prm->nb_prog_arg);
 		return -ENOTSUP;
 	}
 
+	for (uint32_t pai = 0; pai != prm->nb_prog_arg; ++pai)
+		if (!prog_arg_is_valid(&prm->prog_arg[pai])) {
+			RTE_BPF_LOG_FUNC_LINE(ERR,
+				"unsupported argument %u (r%u) type",
+				pai, EBPF_REG_1 + pai);
+			return -ENOTSUP;
+		}
+
 	memset(&bvf, 0, sizeof(bvf));
 	bvf.prm = prm;
 	bvf.in = calloc(prm->raw.nb_ins, sizeof(bvf.in[0]));
diff --git a/lib/bpf/rte_bpf.h b/lib/bpf/rte_bpf.h
index bf58a418191e..751b879bb7fd 100644
--- a/lib/bpf/rte_bpf.h
+++ b/lib/bpf/rte_bpf.h
@@ -25,6 +25,11 @@
 extern "C" {
 #endif
 
+#define RTE_BPF_EXEC_FLAG_JIT	RTE_BIT64(0)	/**< use JIT-compiled version */
+
+/** Mask with all supported `RTE_BPF_EXEC_FLAG_*` flags set. */
+#define RTE_BPF_EXEC_FLAG_MASK  RTE_BPF_EXEC_FLAG_JIT
+
 /**
  * Possible types for function/BPF program arguments.
  */
@@ -122,7 +127,8 @@ struct rte_bpf_prm_ex {
 	/**< array of external symbols that eBPF code is allowed to reference */
 	uint32_t nb_xsym;  /**< number of elements in xsym */
 
-	struct rte_bpf_arg prog_arg;  /**< input arg description */
+	struct rte_bpf_arg prog_arg[EBPF_FUNC_MAX_ARGS];  /**< program arguments */
+	uint32_t nb_prog_arg;  /**< program argument count */
 };
 
 /**
@@ -138,13 +144,49 @@ struct rte_bpf_prm {
 };
 
 /**
- * Information about compiled into native ISA eBPF code.
+ * Information about eBPF code compiled into native ISA, accepting 1 argument.
  */
 struct rte_bpf_jit {
 	uint64_t (*func)(void *); /**< JIT-ed native code */
 	size_t sz;                /**< size of JIT-ed code */
 };
 
+union rte_bpf_func_arg {
+	uint64_t u64;
+	void *ptr;
+};
+
+typedef uint64_t (*rte_bpf_jit_func0_t)(void);
+typedef uint64_t (*rte_bpf_jit_func1_t)(union rte_bpf_func_arg);
+typedef uint64_t (*rte_bpf_jit_func2_t)(union rte_bpf_func_arg, union rte_bpf_func_arg);
+typedef uint64_t (*rte_bpf_jit_func3_t)(union rte_bpf_func_arg, union rte_bpf_func_arg,
+	union rte_bpf_func_arg);
+typedef uint64_t (*rte_bpf_jit_func4_t)(union rte_bpf_func_arg, union rte_bpf_func_arg,
+	union rte_bpf_func_arg, union rte_bpf_func_arg);
+typedef uint64_t (*rte_bpf_jit_func5_t)(union rte_bpf_func_arg, union rte_bpf_func_arg,
+	union rte_bpf_func_arg, union rte_bpf_func_arg, union rte_bpf_func_arg);
+
+/**
+ * JIT-ed native code; the member in use depends on the number of program arguments.
+ */
+struct rte_bpf_jit_ex {
+	union {
+		void *raw;
+		rte_bpf_jit_func0_t func0;  /* nullary function */
+		rte_bpf_jit_func1_t func1;  /* unary function */
+		rte_bpf_jit_func2_t func2;  /* binary function */
+		rte_bpf_jit_func3_t func3;  /* ternary function */
+		rte_bpf_jit_func4_t func4;  /* quaternary function */
+		rte_bpf_jit_func5_t func5;  /* quinary function */
+	};
+	size_t sz;
+};
+
+/** Tuple of eBPF program arguments. */
+struct rte_bpf_prog_ctx {
+	union rte_bpf_func_arg arg[EBPF_FUNC_MAX_ARGS];
+};
+
 struct rte_bpf;
 
 /**
@@ -224,7 +266,7 @@ rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname,
 	__rte_malloc __rte_dealloc(rte_bpf_destroy, 1);
 
 /**
- * Execute given BPF bytecode.
+ * Execute given BPF bytecode accepting 1 argument.
  *
  * @param bpf
  *   handle for the BPF code to execute.
@@ -237,7 +279,27 @@ uint64_t
 rte_bpf_exec(const struct rte_bpf *bpf, void *ctx);
 
 /**
- * Execute given BPF bytecode over a set of input contexts.
+ * @warning
+ * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.
+ *
+ * Execute given BPF bytecode accepting any number of arguments.
+ *
+ * @param bpf
+ *   handle for the BPF code to execute.
+ * @param ctx
+ *   program arguments tuple.
+ * @param flags
+ *   bitwise OR of `RTE_BPF_EXEC_FLAG_*` values controlling execution.
+ * @return
+ *   BPF execution return value.
+ */
+__rte_experimental
+uint64_t
+rte_bpf_exec_ex(const struct rte_bpf *bpf, const struct rte_bpf_prog_ctx *ctx,
+		uint64_t flags);
+
+/**
+ * Execute given BPF bytecode accepting 1 argument over a set of input contexts.
  *
  * @param bpf
  *   handle for the BPF code to execute.
@@ -255,7 +317,33 @@ rte_bpf_exec_burst(const struct rte_bpf *bpf, void *ctx[], uint64_t rc[],
 		uint32_t num);
 
 /**
- * Provide information about natively compiled code for given BPF handle.
+ * @warning
+ * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.
+ *
+ * Execute given BPF program accepting any number of arguments over a set of
+ * input contexts.
+ *
+ * @param bpf
+ *   handle for the BPF code to execute.
+ * @param ctx
+ *   pointer to array of program argument tuples, can be NULL for nullary programs.
+ * @param rc
+ *   array of return values (one per input).
+ * @param num
+ *   number of executions; number of elements in the ctx[] and rc[] arrays.
+ * @param flags
+ *   bitwise OR of `RTE_BPF_EXEC_FLAG_*` values controlling execution.
+ * @return
+ *   number of successfully processed inputs.
+ */
+__rte_experimental
+uint32_t
+rte_bpf_exec_burst_ex(const struct rte_bpf *bpf, const struct rte_bpf_prog_ctx *ctx,
+		uint64_t rc[], uint32_t num, uint64_t flags);
+
+/**
+ * Provide information about natively compiled code for given BPF program
+ * accepting 1 argument.
  *
  * @param bpf
  *   handle for the BPF code.
@@ -268,6 +356,25 @@ rte_bpf_exec_burst(const struct rte_bpf *bpf, void *ctx[], uint64_t rc[],
 int
 rte_bpf_get_jit(const struct rte_bpf *bpf, struct rte_bpf_jit *jit);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.
+ *
+ * Get the JIT-compiled function for the BPF program.
+ *
+ * @param bpf
+ *   handle for the BPF code.
+ * @param jit
+ *   pointer to the struct rte_bpf_jit_ex.
+ * @return
+ *   - -EINVAL if the parameters are invalid.
+ *   - -ENOENT if there is no JIT-compiled version.
+ *   - Zero if operation completed successfully.
+ */
+__rte_experimental
+int
+rte_bpf_get_jit_ex(const struct rte_bpf *bpf, struct rte_bpf_jit_ex *jit);
+
 /**
  * Dump epf instructions to a file.
  *
-- 
2.43.0

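The arity dispatch that `exec_jit_burst_ex` performs over `struct rte_bpf_jit_ex` — one code pointer, reinterpreted through a union of typed function pointers selected by the program's argument count — can be illustrated with a self-contained miniature. The names below (`struct jit_ex`, `call_jit`, the `funcN_t` typedefs) are hypothetical stand-ins, not the DPDK types, and the sketch covers arities 0–2 rather than 0–5.

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t (*func0_t)(void);
typedef uint64_t (*func1_t)(uint64_t);
typedef uint64_t (*func2_t)(uint64_t, uint64_t);

/* Hypothetical miniature of struct rte_bpf_jit_ex: one code pointer,
 * interpreted according to the number of program arguments. */
struct jit_ex {
	union {
		void *raw;
		func0_t func0;
		func1_t func1;
		func2_t func2;
	};
	uint32_t nb_arg;
};

static uint64_t forty_two(void) { return 42; }
static uint64_t add2(uint64_t a, uint64_t b) { return a + b; }

/* Dispatch in the style of exec_jit_burst_ex: pick the typed member
 * matching the arity, reject anything unsupported. */
static uint64_t
call_jit(const struct jit_ex *jit, const uint64_t arg[])
{
	switch (jit->nb_arg) {
	case 0: return jit->func0();
	case 1: return jit->func1(arg[0]);
	case 2: return jit->func2(arg[0], arg[1]);
	default: return UINT64_MAX; /* unsupported arity */
	}
}
```

Storing all typed pointers in one union keeps the struct the size of a single pointer plus the size field, at the cost of the caller having to trust `nb_arg` — the same trade-off the patch makes by validating the argument count at load time.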

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 04/10] bpf: add cBPF origin to rte_bpf_load_ex
  2026-05-06 17:21 [PATCH 00/10] bpf: introduce extensible load API Marat Khalili
                   ` (2 preceding siblings ...)
  2026-05-06 17:22 ` [PATCH 03/10] bpf: support up to 5 arguments Marat Khalili
@ 2026-05-06 17:22 ` Marat Khalili
  2026-05-06 17:22 ` [PATCH 05/10] bpf: support rte_bpf_prm_ex with port callbacks Marat Khalili
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 23+ messages in thread
From: Marat Khalili @ 2026-05-06 17:22 UTC (permalink / raw)
  To: Konstantin Ananyev; +Cc: dev

Add a cBPF origin to rte_bpf_load_ex to allow loading PCAP filters and
other cBPF code through the unified interface.

Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
 lib/bpf/bpf_convert.c | 79 +++++++++++++++++++++++++++++++++++++++++--
 lib/bpf/bpf_impl.h    | 11 ++++++
 lib/bpf/bpf_load.c    | 12 ++++++-
 lib/bpf/bpf_stub.c    | 27 ---------------
 lib/bpf/meson.build   | 11 +++---
 lib/bpf/rte_bpf.h     |  8 ++++-
 6 files changed, 111 insertions(+), 37 deletions(-)
 delete mode 100644 lib/bpf/bpf_stub.c

diff --git a/lib/bpf/bpf_convert.c b/lib/bpf/bpf_convert.c
index 953ca80670c4..c997116c691f 100644
--- a/lib/bpf/bpf_convert.c
+++ b/lib/bpf/bpf_convert.c
@@ -9,6 +9,12 @@
  * Copyright (c) 2011 - 2014 PLUMgrid, http://plumgrid.com
  */
 
+#include "bpf_impl.h"
+#include <eal_export.h>
+#include <rte_errno.h>
+
+#ifdef RTE_HAS_LIBPCAP
+
 #include <assert.h>
 #include <errno.h>
 #include <stdbool.h>
@@ -17,17 +23,14 @@
 #include <stdlib.h>
 #include <string.h>
 
-#include <eal_export.h>
 #include <rte_common.h>
 #include <rte_bpf.h>
 #include <rte_log.h>
 #include <rte_malloc.h>
-#include <rte_errno.h>
 
 #include <pcap/pcap.h>
 #include <pcap/bpf.h>
 
-#include "bpf_impl.h"
 #include "bpf_def.h"
 
 #ifndef BPF_MAXINSNS
@@ -572,3 +575,73 @@ rte_bpf_convert(const struct bpf_program *prog)
 
 	return prm;
 }
+
+void
+__rte_bpf_convert_cleanup(struct __rte_bpf_load *load)
+{
+	free(load->ins);
+}
+
+int __rte_bpf_convert(struct __rte_bpf_load *load)
+{
+	struct rte_bpf_prm_ex *const prm = &load->prm;
+	uint32_t nb_ins = 0;
+	int ret;
+
+	RTE_ASSERT(prm->origin == RTE_BPF_ORIGIN_CBPF);
+
+	if (prm->cbpf.ins == NULL || prm->cbpf.nb_ins == 0)
+		return -EINVAL;
+
+	/* 1st pass: calculate the eBPF program length */
+	ret = bpf_convert_filter(prm->cbpf.ins, prm->cbpf.nb_ins, NULL, &nb_ins);
+	if (ret < 0) {
+		RTE_BPF_LOG_FUNC_LINE(ERR, "cannot get eBPF length");
+		return ret;
+	}
+
+	RTE_ASSERT(load->ins == NULL);
+	load->ins = malloc(nb_ins * sizeof(load->ins[0]));
+	if (load->ins == NULL)
+		return -ENOMEM;
+
+	/* 2nd pass: remap cBPF to eBPF instructions  */
+	ret = bpf_convert_filter(prm->cbpf.ins, prm->cbpf.nb_ins, load->ins, &nb_ins);
+	if (ret < 0) {
+		RTE_BPF_LOG_FUNC_LINE(ERR, "cannot convert cBPF to eBPF");
+		return ret;
+	}
+
+	prm->origin = RTE_BPF_ORIGIN_RAW;
+	prm->raw.ins = load->ins;
+	prm->raw.nb_ins = nb_ins;
+
+	return 0;
+}
+
+#else /* RTE_HAS_LIBPCAP */
+
+RTE_EXPORT_SYMBOL(rte_bpf_convert)
+struct rte_bpf_prm *
+rte_bpf_convert(const struct bpf_program *prog)
+{
+	RTE_SET_USED(prog);
+	RTE_BPF_LOG_FUNC_LINE(ERR, "not supported, rebuild with libpcap installed");
+	rte_errno = ENOTSUP;
+	return NULL;
+}
+
+void
+__rte_bpf_convert_cleanup(struct __rte_bpf_load *load)
+{
+	RTE_ASSERT(load->ins == NULL);
+}
+
+int __rte_bpf_convert(struct __rte_bpf_load *load)
+{
+	RTE_SET_USED(load);
+	RTE_BPF_LOG_FUNC_LINE(ERR, "not supported, rebuild with libpcap installed");
+	return -ENOTSUP;
+}
+
+#endif /* RTE_HAS_LIBPCAP */
diff --git a/lib/bpf/bpf_impl.h b/lib/bpf/bpf_impl.h
index 4a98b3373067..92d03583d977 100644
--- a/lib/bpf/bpf_impl.h
+++ b/lib/bpf/bpf_impl.h
@@ -21,6 +21,9 @@ struct rte_bpf {
 struct __rte_bpf_load {
 	struct rte_bpf_prm_ex prm;
 
+	/* Conversion from cBPF. */
+	struct ebpf_insn *ins;
+
 	/* Loading ELF and applying relocations. */
 	int elf_fd;  /* ELF fd, must be negative (not zero) by default. */
 	void *elf;  /* Using void to avoid dependency on libelf. */
@@ -34,6 +37,14 @@ struct __rte_bpf_load {
  * to avoid potential name conflict with other libraries.
  */
 
+/* Free temporary resources created by converting from cBPF to eBPF. */
+void
+__rte_bpf_convert_cleanup(struct __rte_bpf_load *load);
+
+/* Convert program from cBPF to eBPF. */
+int
+__rte_bpf_convert(struct __rte_bpf_load *load);
+
 /* Free temporary resources created by opening ELF. */
 void
 __rte_bpf_load_elf_cleanup(struct __rte_bpf_load *load);
diff --git a/lib/bpf/bpf_load.c b/lib/bpf/bpf_load.c
index c9cbaf6ded7e..c3c49ac49b1b 100644
--- a/lib/bpf/bpf_load.c
+++ b/lib/bpf/bpf_load.c
@@ -230,6 +230,9 @@ load_try(struct __rte_bpf_load *load, const struct rte_bpf_prm_ex *app_prm)
 	switch (load->prm.origin) {
 	case RTE_BPF_ORIGIN_RAW:
 		break;
+	case RTE_BPF_ORIGIN_CBPF:
+		rc = rc < 0 ? rc : __rte_bpf_convert(load);
+		break;
 	case RTE_BPF_ORIGIN_ELF_FILE:
 		rc = rc < 0 ? rc : __rte_bpf_load_elf_file(load);
 		rc = rc < 0 ? rc : __rte_bpf_load_elf_code(load);
@@ -244,6 +247,13 @@ load_try(struct __rte_bpf_load *load, const struct rte_bpf_prm_ex *app_prm)
 	return rc;
 }
 
+static void
+load_cleanup(struct __rte_bpf_load *load)
+{
+	__rte_bpf_convert_cleanup(load);
+	__rte_bpf_load_elf_cleanup(load);
+}
+
 RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_load_ex, 26.11)
 struct rte_bpf *
 rte_bpf_load_ex(const struct rte_bpf_prm_ex *prm)
@@ -252,7 +262,7 @@ rte_bpf_load_ex(const struct rte_bpf_prm_ex *prm)
 
 	const int rc = load_try(&load, prm);
 
-	__rte_bpf_load_elf_cleanup(&load);
+	load_cleanup(&load);
 
 	RTE_ASSERT((rc < 0) == (load.bpf == NULL));
 
diff --git a/lib/bpf/bpf_stub.c b/lib/bpf/bpf_stub.c
deleted file mode 100644
index 4c329832c264..000000000000
--- a/lib/bpf/bpf_stub.c
+++ /dev/null
@@ -1,27 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2018-2021 Intel Corporation
- */
-
-#include "bpf_impl.h"
-#include <eal_export.h>
-#include <rte_errno.h>
-
-/**
- * Contains stubs for unimplemented public API functions
- */
-
-#ifndef RTE_HAS_LIBPCAP
-RTE_EXPORT_SYMBOL(rte_bpf_convert)
-struct rte_bpf_prm *
-rte_bpf_convert(const struct bpf_program *prog)
-{
-	if (prog == NULL) {
-		rte_errno = EINVAL;
-		return NULL;
-	}
-
-	RTE_BPF_LOG_FUNC_LINE(ERR, "not supported, rebuild with libpcap installed");
-	rte_errno = ENOTSUP;
-	return NULL;
-}
-#endif
diff --git a/lib/bpf/meson.build b/lib/bpf/meson.build
index 4901b6ee1463..7e8a300e3f87 100644
--- a/lib/bpf/meson.build
+++ b/lib/bpf/meson.build
@@ -15,14 +15,16 @@ if arch_subdir == 'x86' and dpdk_conf.get('RTE_ARCH_32')
     subdir_done()
 endif
 
-sources = files('bpf.c',
+sources = files(
+        'bpf.c',
+        'bpf_convert.c',
         'bpf_dump.c',
         'bpf_exec.c',
         'bpf_load.c',
         'bpf_load_elf.c',
         'bpf_pkt.c',
-        'bpf_stub.c',
-        'bpf_validate.c')
+        'bpf_validate.c',
+)
 
 if arch_subdir == 'x86' and dpdk_conf.get('RTE_ARCH_64')
     sources += files('bpf_jit_x86.c')
@@ -45,8 +47,7 @@ else
 endif
 
 if dpdk_conf.has('RTE_HAS_LIBPCAP')
-    sources += files('bpf_convert.c')
     ext_deps += pcap_dep
 else
-    warning('libpcap is missing, rte_bpf_convert API will be disabled')
+    warning('libpcap is missing, cBPF API will be disabled')
 endif
diff --git a/lib/bpf/rte_bpf.h b/lib/bpf/rte_bpf.h
index 751b879bb7fd..dcb709352e17 100644
--- a/lib/bpf/rte_bpf.h
+++ b/lib/bpf/rte_bpf.h
@@ -95,10 +95,12 @@ struct rte_bpf_xsym {
  */
 enum rte_bpf_origin {
 	RTE_BPF_ORIGIN_RAW,		/**< code loaded from raw array */
-	RTE_BPF_ORIGIN_RESERVED,	/**< reserved for cBPF */
+	RTE_BPF_ORIGIN_CBPF,		/**< code converted from cBPF */
 	RTE_BPF_ORIGIN_ELF_FILE,	/**< code loaded from elf_file */
 };
 
+struct bpf_insn;
+
 /**
  * Input parameters for loading eBPF code, extensible version.
  *
@@ -117,6 +119,10 @@ struct rte_bpf_prm_ex {
 			const struct ebpf_insn *ins;  /**< eBPF instructions */
 			uint32_t nb_ins;  /**< number of instructions in ins */
 		} raw;
+		struct {
+			const struct bpf_insn *ins;  /**< cBPF instructions */
+			uint32_t nb_ins;  /**< number of instructions in ins */
+		} cbpf;
 		struct {
 			const char *path;  /**< path to the ELF file */
 			const char *section;  /**< ELF section with the code */
-- 
2.43.0

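`__rte_bpf_convert` above calls `bpf_convert_filter` twice: a first pass with a NULL output buffer only computes the required eBPF length, then the caller allocates and a second pass fills the buffer. The two-pass size-query pattern can be sketched generically; `convert` and `convert_alloc` below are hypothetical, with a trivial 1-to-2 expansion standing in for real cBPF-to-eBPF translation.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical converter in the style of bpf_convert_filter(): when
 * out is NULL it only reports how many output elements are needed via
 * *nb_out; otherwise it fills out[]. Each input expands into two
 * outputs here, standing in for cBPF->eBPF instruction expansion. */
static int
convert(const uint32_t *in, uint32_t nb_in, uint32_t *out, uint32_t *nb_out)
{
	uint32_t i;

	if (out == NULL) {
		*nb_out = nb_in * 2;
		return 0;
	}
	for (i = 0; i < nb_in; i++) {
		out[2 * i] = in[i];
		out[2 * i + 1] = in[i] + 1;
	}
	return 0;
}

/* Two-pass use: size, allocate, fill. On success the caller owns and
 * frees *out -- mirroring load->ins in __rte_bpf_convert. */
static int
convert_alloc(const uint32_t *in, uint32_t nb_in,
	uint32_t **out, uint32_t *nb_out)
{
	int rc = convert(in, nb_in, NULL, nb_out);
	if (rc < 0)
		return rc;
	*out = malloc(*nb_out * sizeof(**out));
	if (*out == NULL)
		return -1;
	return convert(in, nb_in, *out, nb_out);
}
```

Keeping the sizing and filling logic in one function, selected by a NULL output pointer, guarantees the two passes can never disagree about the output length.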

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 05/10] bpf: support rte_bpf_prm_ex with port callbacks
  2026-05-06 17:21 [PATCH 00/10] bpf: introduce extensible load API Marat Khalili
                   ` (3 preceding siblings ...)
  2026-05-06 17:22 ` [PATCH 04/10] bpf: add cBPF origin to rte_bpf_load_ex Marat Khalili
@ 2026-05-06 17:22 ` Marat Khalili
  2026-05-06 17:22 ` [PATCH 06/10] bpf: support loading ELF files from memory Marat Khalili
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 23+ messages in thread
From: Marat Khalili @ 2026-05-06 17:22 UTC (permalink / raw)
  To: Konstantin Ananyev; +Cc: dev

Introduce new functions to install an already loaded BPF program as an
RX or TX port/queue callback, since the previous API was tied to
struct rte_bpf_prm.
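The ownership rule this split relies on — the install step never takes ownership on failure, so the legacy load-and-install entry point must destroy the program itself when installation fails, as `rte_bpf_eth_rx_elf_load` does in the diff below — can be sketched without DPDK. Everything here (`struct prog`, `prog_load`, `prog_install`, `load_and_install`) is a hypothetical stand-in for the rte_bpf objects and calls.

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the DPDK objects and calls involved. */
struct prog { int installed; };

static struct prog *
prog_load(int ok)
{
	return ok ? calloc(1, sizeof(struct prog)) : NULL;
}

static void
prog_destroy(struct prog *p)
{
	free(p);
}

static int
prog_install(struct prog *p, int ok)
{
	if (ok)
		p->installed = 1;
	return ok ? 0 : -EINVAL;
}

/* The legacy combined entry point becomes a thin wrapper: load, then
 * install; on install failure, destroy the program so ownership never
 * leaks. On success ownership passes to the caller via *out. */
static int
load_and_install(int load_ok, int install_ok, struct prog **out)
{
	struct prog *p = prog_load(load_ok);
	int rc;

	if (p == NULL)
		return -ENOENT;
	rc = prog_install(p, install_ok);
	if (rc < 0) {
		prog_destroy(p);
		return rc;
	}
	*out = p;
	return 0;
}
```

With the two phases separated, an application can load once and install the same program on several queues, which the combined API could not express.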

Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
 lib/bpf/bpf_pkt.c        | 65 ++++++++++++++++++++++++++++++----------
 lib/bpf/rte_bpf_ethdev.h | 54 +++++++++++++++++++++++++++++++++
 2 files changed, 104 insertions(+), 15 deletions(-)

diff --git a/lib/bpf/bpf_pkt.c b/lib/bpf/bpf_pkt.c
index 5007f6aef57d..87065e939f31 100644
--- a/lib/bpf/bpf_pkt.c
+++ b/lib/bpf/bpf_pkt.c
@@ -490,13 +490,11 @@ rte_bpf_eth_tx_unload(uint16_t port, uint16_t queue)
 }
 
 static int
-bpf_eth_elf_load(struct bpf_eth_cbh *cbh, uint16_t port, uint16_t queue,
-	const struct rte_bpf_prm *prm, const char *fname, const char *sname,
-	uint32_t flags)
+bpf_eth_elf_install(struct bpf_eth_cbh *cbh, uint16_t port, uint16_t queue,
+	struct rte_bpf *bpf, uint32_t flags)
 {
 	int32_t rc;
 	struct bpf_eth_cbi *bc;
-	struct rte_bpf *bpf;
 	rte_rx_callback_fn frx;
 	rte_tx_callback_fn ftx;
 	struct rte_bpf_jit jit;
@@ -504,14 +502,17 @@ bpf_eth_elf_load(struct bpf_eth_cbh *cbh, uint16_t port, uint16_t queue,
 	frx = NULL;
 	ftx = NULL;
 
-	if (prm == NULL || rte_eth_dev_is_valid_port(port) == 0 ||
+	if (bpf == NULL || rte_eth_dev_is_valid_port(port) == 0 ||
 			queue >= RTE_MAX_QUEUES_PER_PORT)
 		return -EINVAL;
 
+	if (bpf->prm.nb_prog_arg != 1)
+		return -EINVAL;
+
 	if (cbh->type == BPF_ETH_RX)
-		frx = select_rx_callback(prm->prog_arg.type, flags);
+		frx = select_rx_callback(bpf->prm.prog_arg[0].type, flags);
 	else
-		ftx = select_tx_callback(prm->prog_arg.type, flags);
+		ftx = select_tx_callback(bpf->prm.prog_arg[0].type, flags);
 
 	if (frx == NULL && ftx == NULL) {
 		RTE_BPF_LOG_LINE(ERR, "%s(%u, %u): no callback selected;",
@@ -519,16 +520,11 @@ bpf_eth_elf_load(struct bpf_eth_cbh *cbh, uint16_t port, uint16_t queue,
 		return -EINVAL;
 	}
 
-	bpf = rte_bpf_elf_load(prm, fname, sname);
-	if (bpf == NULL)
-		return -rte_errno;
-
 	rte_bpf_get_jit(bpf, &jit);
 
 	if ((flags & RTE_BPF_ETH_F_JIT) != 0 && jit.func == NULL) {
 		RTE_BPF_LOG_LINE(ERR, "%s(%u, %u): no JIT generated;",
 			__func__, port, queue);
-		rte_bpf_destroy(bpf);
 		return -ENOTSUP;
 	}
 
@@ -551,7 +547,6 @@ bpf_eth_elf_load(struct bpf_eth_cbh *cbh, uint16_t port, uint16_t queue,
 
 	if (bc->cb == NULL) {
 		rc = -rte_errno;
-		rte_bpf_destroy(bpf);
 		bpf_eth_cbi_cleanup(bc);
 	} else
 		rc = 0;
@@ -564,13 +559,33 @@ int
 rte_bpf_eth_rx_elf_load(uint16_t port, uint16_t queue,
 	const struct rte_bpf_prm *prm, const char *fname, const char *sname,
 	uint32_t flags)
+{
+	struct rte_bpf *bpf;
+	int32_t rc;
+
+	bpf = rte_bpf_elf_load(prm, fname, sname);
+	if (bpf == NULL)
+		return -rte_errno;
+
+	rc = rte_bpf_eth_rx_install(port, queue, bpf, flags);
+
+	if (rc < 0)
+		rte_bpf_destroy(bpf);
+
+	return rc;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_eth_rx_install, 26.11)
+int
+rte_bpf_eth_rx_install(uint16_t port, uint16_t queue, struct rte_bpf *bpf,
+	uint32_t flags)
 {
 	int32_t rc;
 	struct bpf_eth_cbh *cbh;
 
 	cbh = &rx_cbh;
 	rte_spinlock_lock(&cbh->lock);
-	rc = bpf_eth_elf_load(cbh, port, queue, prm, fname, sname, flags);
+	rc = bpf_eth_elf_install(cbh, port, queue, bpf, flags);
 	rte_spinlock_unlock(&cbh->lock);
 
 	return rc;
@@ -581,13 +596,33 @@ int
 rte_bpf_eth_tx_elf_load(uint16_t port, uint16_t queue,
 	const struct rte_bpf_prm *prm, const char *fname, const char *sname,
 	uint32_t flags)
+{
+	struct rte_bpf *bpf;
+	int32_t rc;
+
+	bpf = rte_bpf_elf_load(prm, fname, sname);
+	if (bpf == NULL)
+		return -rte_errno;
+
+	rc = rte_bpf_eth_tx_install(port, queue, bpf, flags);
+
+	if (rc < 0)
+		rte_bpf_destroy(bpf);
+
+	return rc;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_eth_tx_install, 26.11)
+int
+rte_bpf_eth_tx_install(uint16_t port, uint16_t queue, struct rte_bpf *bpf,
+	uint32_t flags)
 {
 	int32_t rc;
 	struct bpf_eth_cbh *cbh;
 
 	cbh = &tx_cbh;
 	rte_spinlock_lock(&cbh->lock);
-	rc = bpf_eth_elf_load(cbh, port, queue, prm, fname, sname, flags);
+	rc = bpf_eth_elf_install(cbh, port, queue, bpf, flags);
 	rte_spinlock_unlock(&cbh->lock);
 
 	return rc;
diff --git a/lib/bpf/rte_bpf_ethdev.h b/lib/bpf/rte_bpf_ethdev.h
index cab8e9e3887a..e5eaf5b245e3 100644
--- a/lib/bpf/rte_bpf_ethdev.h
+++ b/lib/bpf/rte_bpf_ethdev.h
@@ -109,6 +109,60 @@ rte_bpf_eth_tx_elf_load(uint16_t port, uint16_t queue,
 	const struct rte_bpf_prm *prm, const char *fname, const char *sname,
 	uint32_t flags);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.
+ *
+ * Install callback to execute specified BPF program on given RX port/queue.
+ *
+ * On success the ownership of the program passes to the library,
+ * rte_bpf_eth_rx_unload must be used to unload it, and rte_bpf_destroy must no
+ * longer be called.
+ *
+ * @param port
+ *   The identifier of the ethernet port
+ * @param queue
+ *   The identifier of the RX queue on the given port
+ * @param bpf
+ *   BPF program
+ * @param flags
+ *   Flags that define expected behavior of the loaded filter
+ *   (i.e. jited/non-jited version to use).
+ * @return
+ *   Zero on successful completion or negative error code otherwise.
+ */
+__rte_experimental
+int
+rte_bpf_eth_rx_install(uint16_t port, uint16_t queue, struct rte_bpf *bpf,
+	uint32_t flags);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.
+ *
+ * Install callback to execute specified BPF program on given TX port/queue.
+ *
+ * On success the ownership of the program passes to the library,
+ * rte_bpf_eth_tx_unload must be used to unload it, and rte_bpf_destroy must no
+ * longer be called.
+ *
+ * @param port
+ *   The identifier of the ethernet port
+ * @param queue
+ *   The identifier of the TX queue on the given port
+ * @param bpf
+ *   BPF program
+ * @param flags
+ *   Flags that define expected behavior of the loaded filter
+ *   (i.e. jited/non-jited version to use).
+ * @return
+ *   Zero on successful completion or negative error code otherwise.
+ */
+__rte_experimental
+int
+rte_bpf_eth_tx_install(uint16_t port, uint16_t queue, struct rte_bpf *bpf,
+	uint32_t flags);
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 06/10] bpf: support loading ELF files from memory
  2026-05-06 17:21 [PATCH 00/10] bpf: introduce extensible load API Marat Khalili
                   ` (4 preceding siblings ...)
  2026-05-06 17:22 ` [PATCH 05/10] bpf: support rte_bpf_prm_ex with port callbacks Marat Khalili
@ 2026-05-06 17:22 ` Marat Khalili
  2026-05-06 17:22 ` [PATCH 07/10] test/bpf: test loading cBPF directly Marat Khalili
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 23+ messages in thread
From: Marat Khalili @ 2026-05-06 17:22 UTC (permalink / raw)
  To: Konstantin Ananyev; +Cc: dev

Introduce a new ELF origin, RTE_BPF_ORIGIN_ELF_MEMORY, allowing the
caller to specify a memory area containing an ELF image.
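
Before handing an image to the new origin, a caller might obtain the bytes
with plain file I/O; a sketch (read_file below is an illustrative helper,
not DPDK API) of producing a buffer suitable for elf_memory.data/.size:

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative helper (not DPDK API): read a whole file into a malloc'd
 * buffer, e.g. to pass as prm->elf_memory.data with prm->elf_memory.size. */
static void *read_file(const char *path, size_t *size)
{
	FILE *f = fopen(path, "rb");
	void *buf = NULL;
	long len;

	if (f == NULL)
		return NULL;
	if (fseek(f, 0, SEEK_END) == 0 && (len = ftell(f)) >= 0 &&
			fseek(f, 0, SEEK_SET) == 0) {
		buf = malloc(len);
		if (buf != NULL && fread(buf, 1, len, f) == (size_t)len)
			*size = (size_t)len;
		else {
			free(buf);
			buf = NULL;
		}
	}
	fclose(f);
	return buf;
}
```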

Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
 lib/bpf/bpf_impl.h     |  5 +++++
 lib/bpf/bpf_load.c     |  4 ++++
 lib/bpf/bpf_load_elf.c | 40 +++++++++++++++++++++++++++++++++++++++-
 lib/bpf/rte_bpf.h      |  6 ++++++
 4 files changed, 54 insertions(+), 1 deletion(-)

diff --git a/lib/bpf/bpf_impl.h b/lib/bpf/bpf_impl.h
index 92d03583d977..14ad772d4beb 100644
--- a/lib/bpf/bpf_impl.h
+++ b/lib/bpf/bpf_impl.h
@@ -27,6 +27,7 @@ struct __rte_bpf_load {
 	/* Loading ELF and applying relocations. */
 	int elf_fd;  /* ELF fd, must be negative (not zero) by default. */
 	void *elf;  /* Using void to avoid dependency on libelf. */
+	const char *elf_section;
 
 	/* Value we are going to return, if any. */
 	struct rte_bpf *bpf;
@@ -53,6 +54,10 @@ __rte_bpf_load_elf_cleanup(struct __rte_bpf_load *load);
 int
 __rte_bpf_load_elf_file(struct __rte_bpf_load *load);
 
+/* Open the ELF memory image. */
+int
+__rte_bpf_load_elf_memory(struct __rte_bpf_load *load);
+
 /* Get code from ELF and apply relocations to it. */
 int
 __rte_bpf_load_elf_code(struct __rte_bpf_load *load);
diff --git a/lib/bpf/bpf_load.c b/lib/bpf/bpf_load.c
index c3c49ac49b1b..b626f6c61645 100644
--- a/lib/bpf/bpf_load.c
+++ b/lib/bpf/bpf_load.c
@@ -237,6 +237,10 @@ load_try(struct __rte_bpf_load *load, const struct rte_bpf_prm_ex *app_prm)
 		rc = rc < 0 ? rc : __rte_bpf_load_elf_file(load);
 		rc = rc < 0 ? rc : __rte_bpf_load_elf_code(load);
 		break;
+	case RTE_BPF_ORIGIN_ELF_MEMORY:
+		rc = rc < 0 ? rc : __rte_bpf_load_elf_memory(load);
+		rc = rc < 0 ? rc : __rte_bpf_load_elf_code(load);
+		break;
 	default:
 		rc = rc < 0 ? rc : -EINVAL;
 	}
diff --git a/lib/bpf/bpf_load_elf.c b/lib/bpf/bpf_load_elf.c
index 4ae7492351ae..80443cb63a61 100644
--- a/lib/bpf/bpf_load_elf.c
+++ b/lib/bpf/bpf_load_elf.c
@@ -310,6 +310,36 @@ __rte_bpf_load_elf_file(struct __rte_bpf_load *load)
 		return -EINVAL;
 	}
 
+	load->elf_section = prm->elf_file.section;
+
+	return 0;
+}
+
+int
+__rte_bpf_load_elf_memory(struct __rte_bpf_load *load)
+{
+	const struct rte_bpf_prm_ex *const prm = &load->prm;
+
+	RTE_ASSERT(prm->origin == RTE_BPF_ORIGIN_ELF_MEMORY);
+
+	if (prm->elf_memory.data == NULL || prm->elf_memory.section == NULL)
+		return -EINVAL;
+
+	if (elf_version(EV_CURRENT) == EV_NONE)
+		return -ENOTSUP;
+
+	load->elf = elf_memory(
+		/* Cast away const, we are not going to modify the ELF image. */
+		(char *)(uintptr_t)prm->elf_memory.data, prm->elf_memory.size);
+	if (load->elf == NULL) {
+		const int rc = elf_errno();
+		RTE_BPF_LOG_FUNC_LINE(ERR, "error %d opening ELF image: %s",
+			rc, elf_errmsg(rc));
+		return -EINVAL;
+	}
+
+	load->elf_section = prm->elf_memory.section;
+
 	return 0;
 }
 
@@ -321,7 +351,7 @@ __rte_bpf_load_elf_code(struct __rte_bpf_load *load)
 	size_t sidx;
 	int rc;
 
-	rc = find_elf_code(load->elf, prm->elf_file.section, &sd, &sidx);
+	rc = find_elf_code(load->elf, load->elf_section, &sd, &sidx);
 	if (rc < 0)
 		return rc;
 
@@ -353,6 +383,14 @@ __rte_bpf_load_elf_file(struct __rte_bpf_load *load)
 	return -ENOTSUP;
 }
 
+int
+__rte_bpf_load_elf_memory(struct __rte_bpf_load *load)
+{
+	RTE_SET_USED(load);
+	RTE_BPF_LOG_FUNC_LINE(ERR, "not supported, rebuild with libelf installed");
+	return -ENOTSUP;
+}
+
 int
 __rte_bpf_load_elf_code(struct __rte_bpf_load *load)
 {
diff --git a/lib/bpf/rte_bpf.h b/lib/bpf/rte_bpf.h
index dcb709352e17..3c3848925bdf 100644
--- a/lib/bpf/rte_bpf.h
+++ b/lib/bpf/rte_bpf.h
@@ -97,6 +97,7 @@ enum rte_bpf_origin {
 	RTE_BPF_ORIGIN_RAW,		/**< code loaded from raw array */
 	RTE_BPF_ORIGIN_CBPF,		/**< code converted from cbpf */
 	RTE_BPF_ORIGIN_ELF_FILE,	/**< code loaded from elf_file */
+	RTE_BPF_ORIGIN_ELF_MEMORY,	/**< code loaded from elf_memory */
 };
 
 struct bpf_insn;
@@ -127,6 +128,11 @@ struct rte_bpf_prm_ex {
 			const char *path;  /**< path to the ELF file */
 			const char *section;  /**< ELF section with the code */
 		} elf_file;
+		struct {
+			const void *data;  /**< pointer to the ELF image */
+			size_t size;  /**< size of the ELF image */
+			const char *section;  /**< ELF section with the code */
+		} elf_memory;
 	};
 
 	const struct rte_bpf_xsym *xsym;
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 07/10] test/bpf: test loading cBPF directly
  2026-05-06 17:21 [PATCH 00/10] bpf: introduce extensible load API Marat Khalili
                   ` (5 preceding siblings ...)
  2026-05-06 17:22 ` [PATCH 06/10] bpf: support loading ELF files from memory Marat Khalili
@ 2026-05-06 17:22 ` Marat Khalili
  2026-05-06 17:22 ` [PATCH 08/10] test/bpf: test loading ELF file from memory Marat Khalili
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 23+ messages in thread
From: Marat Khalili @ 2026-05-06 17:22 UTC (permalink / raw)
  To: Konstantin Ananyev; +Cc: dev

Run the cBPF tests twice: via rte_bpf_convert, and using the
RTE_BPF_ORIGIN_CBPF origin of the new rte_bpf_load_ex API.
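
The reworked tests run the same check through a table of loader callbacks;
the shape of that pattern can be sketched independently of BPF (loader_t,
load_a and load_b below are hypothetical, standing in for
load_cbpf_program_convert and load_cbpf_program_direct):

```c
#include <stddef.h>

/* Hypothetical loaders: same signature, different loading strategies. */
typedef int (*loader_t)(int input);

static int load_a(int input) { return input * 2; }
static int load_b(int input) { return input + input; }

/* Run one check through every loader, as the tests do; returns 0 only
 * when every loader produces the expected result. */
static int run_all(const loader_t *loaders, size_t n, int input, int expected)
{
	size_t i;

	for (i = 0; i != n; ++i)
		if (loaders[i](input) != expected)
			return -1;
	return 0;
}
```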

Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
 app/test/test_bpf.c | 132 +++++++++++++++++++++++++++-----------------
 1 file changed, 80 insertions(+), 52 deletions(-)

diff --git a/app/test/test_bpf.c b/app/test/test_bpf.c
index dd247224504e..c8a4ee755097 100644
--- a/app/test/test_bpf.c
+++ b/app/test/test_bpf.c
@@ -4429,13 +4429,59 @@ test_bpf_dump(struct bpf_program *cbf, const struct rte_bpf_prm *prm)
 	}
 }
 
+/* Function loading BPF program from cBPF instructions array. */
+typedef struct rte_bpf *
+(*load_cbpf_program_t)(struct bpf_program *cbpf_program, const char *str);
+
+/* Load BPF program by converting cBPF array to rte_bpf_prm and then opening it. */
+static struct rte_bpf *
+load_cbpf_program_convert(struct bpf_program *cbpf_program, const char *str)
+{
+	struct rte_bpf_prm *prm = NULL;
+	struct rte_bpf *bpf;
+
+	prm = rte_bpf_convert(cbpf_program);
+	if (prm == NULL) {
+		printf("%s@%d: bpf_convert(\"%s\") failed\n",
+			__func__, __LINE__, str);
+		return NULL;
+	}
+
+	printf("bpf convert(\"%s\") produced:\n", str);
+	rte_bpf_dump(stdout, prm->ins, prm->nb_ins);
+
+	printf("%s \"%s\"\n", __func__, str);
+	test_bpf_dump(cbpf_program, prm);
+
+	bpf = rte_bpf_load(prm);
+	rte_free(prm);
+
+	return bpf;
+}
+
+/* Load BPF program by calling rte_bpf_load_ex and specifying cBPF array as the origin. */
+static struct rte_bpf *
+load_cbpf_program_direct(struct bpf_program *cbpf_program, const char *str __rte_unused)
+{
+	return rte_bpf_load_ex(&(struct rte_bpf_prm_ex){
+		.sz = sizeof(struct rte_bpf_prm_ex),
+		.origin = RTE_BPF_ORIGIN_CBPF,
+		.cbpf.ins = cbpf_program->bf_insns,
+		.cbpf.nb_ins = cbpf_program->bf_len,
+		.prog_arg[0] = {
+			.type = RTE_BPF_ARG_PTR_MBUF,
+			.size = sizeof(struct rte_mbuf),
+		},
+		.nb_prog_arg = 1,
+	});
+}
+
 static int
-test_bpf_match(pcap_t *pcap, const char *str,
-	       struct rte_mbuf *mb)
+test_bpf_match(pcap_t *pcap, const char *str, struct rte_mbuf *mb,
+	load_cbpf_program_t load_cbpf_program)
 {
 	struct bpf_program fcode;
-	struct rte_bpf_prm *prm = NULL;
-	struct rte_bpf *bpf = NULL;
+	struct rte_bpf *bpf;
 	int ret = -1;
 	uint64_t rc;
 
@@ -4445,17 +4491,10 @@ test_bpf_match(pcap_t *pcap, const char *str,
 		return -1;
 	}
 
-	prm = rte_bpf_convert(&fcode);
-	if (prm == NULL) {
-		printf("%s@%d: bpf_convert('%s') failed,, error=%d(%s);\n",
-		       __func__, __LINE__, str, rte_errno, strerror(rte_errno));
-		goto error;
-	}
-
-	bpf = rte_bpf_load(prm);
+	bpf = load_cbpf_program(&fcode, str);
 	if (bpf == NULL) {
-		printf("%s@%d: failed to load bpf code, error=%d(%s);\n",
-			__func__, __LINE__, rte_errno, strerror(rte_errno));
+		printf("%s@%d: failed to load cbpf program for \"%s\", error=%d(%s);\n",
+			__func__, __LINE__, str, rte_errno, strerror(rte_errno));
 		goto error;
 	}
 
@@ -4465,7 +4504,6 @@ test_bpf_match(pcap_t *pcap, const char *str,
 error:
 	if (bpf)
 		rte_bpf_destroy(bpf);
-	rte_free(prm);
 	pcap_freecode(&fcode);
 	return ret;
 }
@@ -4474,6 +4512,11 @@ test_bpf_match(pcap_t *pcap, const char *str,
 static int
 test_bpf_filter_sanity(pcap_t *pcap)
 {
+	static const load_cbpf_program_t cbpf_program_loaders[] = {
+		load_cbpf_program_convert,
+		load_cbpf_program_direct,
+	};
+
 	const uint32_t plen = 100;
 	struct rte_mbuf mb, *m;
 	uint8_t tbuf[RTE_MBUF_DEFAULT_BUF_SIZE];
@@ -4500,15 +4543,17 @@ test_bpf_filter_sanity(pcap_t *pcap)
 		.dst_addr = rte_cpu_to_be_32(RTE_IPV4_BROADCAST),
 	};
 
-	if (test_bpf_match(pcap, "ip", m) != 0) {
-		printf("%s@%d: filter \"ip\" doesn't match test data\n",
-		       __func__, __LINE__);
-		return -1;
-	}
-	if (test_bpf_match(pcap, "not ip", m) == 0) {
-		printf("%s@%d: filter \"not ip\" does match test data\n",
-		       __func__, __LINE__);
-		return -1;
+	for (int li = 0; li != RTE_DIM(cbpf_program_loaders); ++li) {
+		if (test_bpf_match(pcap, "ip", m, cbpf_program_loaders[li]) != 0) {
+			printf("%s@%d: filter \"ip\" doesn't match test data\n",
+			       __func__, __LINE__);
+			return -1;
+		}
+		if (test_bpf_match(pcap, "not ip", m, cbpf_program_loaders[li]) == 0) {
+			printf("%s@%d: filter \"not ip\" does match test data\n",
+			       __func__, __LINE__);
+			return -1;
+		}
 	}
 
 	return 0;
@@ -4556,44 +4601,25 @@ static const char * const sample_filters[] = {
 };
 
 static int
-test_bpf_filter(pcap_t *pcap, const char *s)
+test_bpf_filter(pcap_t *pcap, const char *s, load_cbpf_program_t load_cbpf_program)
 {
 	struct bpf_program fcode;
-	struct rte_bpf_prm *prm = NULL;
-	struct rte_bpf *bpf = NULL;
+	struct rte_bpf *bpf;
 
 	if (pcap_compile(pcap, &fcode, s, 1, PCAP_NETMASK_UNKNOWN)) {
-		printf("%s@%d: pcap_compile('%s') failed: %s;\n",
+		printf("%s@%d: pcap_compile(\"%s\") failed: %s;\n",
 		       __func__, __LINE__, s, pcap_geterr(pcap));
 		return -1;
 	}
 
-	prm = rte_bpf_convert(&fcode);
-	if (prm == NULL) {
-		printf("%s@%d: bpf_convert('%s') failed,, error=%d(%s);\n",
-		       __func__, __LINE__, s, rte_errno, strerror(rte_errno));
-		goto error;
-	}
-
-	printf("bpf convert for \"%s\" produced:\n", s);
-	rte_bpf_dump(stdout, prm->ins, prm->nb_ins);
-
-	bpf = rte_bpf_load(prm);
+	bpf = load_cbpf_program(&fcode, s);
 	if (bpf == NULL) {
-		printf("%s@%d: failed to load bpf code, error=%d(%s);\n",
-			__func__, __LINE__, rte_errno, strerror(rte_errno));
-		goto error;
+		printf("%s@%d: failed to load cbpf program for \"%s\", error=%d(%s);\n",
+			__func__, __LINE__, s, rte_errno, strerror(rte_errno));
 	}
 
-error:
-	if (bpf)
-		rte_bpf_destroy(bpf);
-	else {
-		printf("%s \"%s\"\n", __func__, s);
-		test_bpf_dump(&fcode, prm);
-	}
+	rte_bpf_destroy(bpf);
 
-	rte_free(prm);
 	pcap_freecode(&fcode);
 	return (bpf == NULL) ? -1 : 0;
 }
@@ -4612,8 +4638,10 @@ test_bpf_convert(void)
 	}
 
 	rc = test_bpf_filter_sanity(pcap);
-	for (i = 0; i < RTE_DIM(sample_filters); i++)
-		rc |= test_bpf_filter(pcap, sample_filters[i]);
+	for (i = 0; i < RTE_DIM(sample_filters); i++) {
+		rc |= test_bpf_filter(pcap, sample_filters[i], load_cbpf_program_convert);
+		rc |= test_bpf_filter(pcap, sample_filters[i], load_cbpf_program_direct);
+	}
 
 	pcap_close(pcap);
 	return rc;
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 08/10] test/bpf: test loading ELF file from memory
  2026-05-06 17:21 [PATCH 00/10] bpf: introduce extensible load API Marat Khalili
                   ` (6 preceding siblings ...)
  2026-05-06 17:22 ` [PATCH 07/10] test/bpf: test loading cBPF directly Marat Khalili
@ 2026-05-06 17:22 ` Marat Khalili
  2026-05-06 17:22 ` [PATCH 09/10] doc: add release notes for new extensible BPF API Marat Khalili
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 23+ messages in thread
From: Marat Khalili @ 2026-05-06 17:22 UTC (permalink / raw)
  To: Konstantin Ananyev; +Cc: dev

Run each subtest in test_bpf_elf twice: the old way, loading ELF images
via a temporary file, and the new way, using the rte_bpf_load_ex API to
load them directly from memory.

In the tests that load port/queue filters, use the new
rte_bpf_eth_(rx|tx)_install API to install a BPF program already loaded
in one of these two ways.

Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
 app/test/test_bpf.c | 193 ++++++++++++++++++++++++++------------------
 1 file changed, 113 insertions(+), 80 deletions(-)

diff --git a/app/test/test_bpf.c b/app/test/test_bpf.c
index c8a4ee755097..69e84f0cab56 100644
--- a/app/test/test_bpf.c
+++ b/app/test/test_bpf.c
@@ -3977,12 +3977,61 @@ create_temp_bpf_file(const uint8_t *data, size_t size, const char *name)
 
 #include "test_bpf_load.h"
 
+/* Function loading BPF program from ELF image in memory. */
+typedef struct rte_bpf *
+(*load_elf_image_t)(const void *data, size_t size, const char *section,
+	const struct rte_bpf_xsym *xsym, uint32_t nb_xsym, const struct rte_bpf_arg *prog_arg);
+
+/* Load BPF program by writing ELF image to temporary file and opening this file. */
+static struct rte_bpf *
+load_elf_image_temp_file(const void *data, size_t size, const char *section,
+	const struct rte_bpf_xsym *xsym, uint32_t nb_xsym, const struct rte_bpf_arg *prog_arg)
+{
+	/* Create temp file from embedded BPF object */
+	char *tmpfile = create_temp_bpf_file(data, size, "test");
+	if (tmpfile == NULL) {
+		rte_errno = EIO;
+		return NULL;
+	}
+
+	/* Try to load BPF program from temp file */
+	const struct rte_bpf_prm prm = {
+		.xsym = xsym,
+		.nb_xsym = nb_xsym,
+		.prog_arg = *prog_arg,
+	};
+
+	struct rte_bpf *bpf = rte_bpf_elf_load(&prm, tmpfile, section);
+	unlink(tmpfile);
+	free(tmpfile);
+
+	return bpf;
+}
+
+/* Load BPF program by calling rte_bpf_load_ex and specifying image as the origin. */
+static struct rte_bpf *
+load_elf_image_direct(const void *data, size_t size, const char *section,
+	const struct rte_bpf_xsym *xsym, uint32_t nb_xsym, const struct rte_bpf_arg *prog_arg)
+{
+	return rte_bpf_load_ex(&(struct rte_bpf_prm_ex){
+		.sz = sizeof(struct rte_bpf_prm_ex),
+		.origin = RTE_BPF_ORIGIN_ELF_MEMORY,
+		.elf_memory.data = data,
+		.elf_memory.size = size,
+		.elf_memory.section = section,
+		.xsym = xsym,
+		.nb_xsym = nb_xsym,
+		.prog_arg[0] = *prog_arg,
+		.nb_prog_arg = 1,
+	});
+}
+
 /*
  * Test loading BPF program from an object file.
  * This test uses same arguments as previous test_call1 example.
  */
 static int
-test_bpf_elf_load(void)
+test_bpf_elf_load(load_elf_image_t load_elf_image)
 {
 	static const char test_section[] = "call1";
 	uint8_t tbuf[sizeof(struct dummy_vect8)];
@@ -4010,28 +4059,15 @@ test_bpf_elf_load(void)
 			},
 		},
 	};
-	int ret;
-
-	/* Create temp file from embedded BPF object */
-	char *tmpfile = create_temp_bpf_file(app_test_bpf_load_o,
-					     app_test_bpf_load_o_len,
-					     "load");
-	if (tmpfile == NULL)
-		return -1;
-
-	/* Try to load BPF program from temp file */
-	const struct rte_bpf_prm prm = {
-		.xsym = xsym,
-		.nb_xsym = RTE_DIM(xsym),
-		.prog_arg = {
-			.type = RTE_BPF_ARG_PTR,
-			.size = sizeof(tbuf),
-		},
+	static const struct rte_bpf_arg prog_arg = {
+		.type = RTE_BPF_ARG_PTR,
+		.size = sizeof(tbuf),
 	};
+	struct rte_bpf *bpf;
+	int ret;
 
-	struct rte_bpf *bpf = rte_bpf_elf_load(&prm, tmpfile, test_section);
-	unlink(tmpfile);
-	free(tmpfile);
+	bpf = load_elf_image(app_test_bpf_load_o, app_test_bpf_load_o_len, test_section,
+		xsym, RTE_DIM(xsym), &prog_arg);
 
 	/* If libelf support is not available */
 	if (bpf == NULL && rte_errno == ENOTSUP)
@@ -4174,22 +4210,28 @@ setup_mbufs(struct rte_mbuf *burst[], unsigned int n)
 	return tcp_count;
 }
 
-static int bpf_tx_test(uint16_t port, const char *tmpfile, struct rte_mempool *pool,
-		       const char *section, uint32_t flags)
+static int bpf_tx_test(uint16_t port, struct rte_mempool *pool, load_elf_image_t load_elf_image,
+	const char *section, uint32_t flags)
 {
-	const struct rte_bpf_prm prm = {
-		.prog_arg = {
-			.type = RTE_BPF_ARG_PTR,
-			.size = sizeof(struct dummy_net),
-		},
+	static const struct rte_bpf_arg prog_arg = {
+		.type = RTE_BPF_ARG_PTR,
+		.size = sizeof(struct dummy_net),
 	};
+	struct rte_bpf *bpf;
 	int ret;
 
-	/* Try to load BPF TX program from temp file */
-	ret = rte_bpf_eth_tx_elf_load(port, 0, &prm, tmpfile, section, flags);
+	/* Try to load BPF program from image */
+	bpf = load_elf_image(app_test_bpf_filter_o, app_test_bpf_filter_o_len, section,
+		NULL, 0, &prog_arg);
+	TEST_ASSERT_NOT_NULL(bpf, "failed to load BPF filter from image, error=%d:(%s)\n",
+		       rte_errno, rte_strerror(rte_errno));
+
+	/* Try to install loaded BPF program */
+	ret = rte_bpf_eth_tx_install(port, 0, bpf, flags);
 	if (ret != 0) {
-		printf("%s@%d: failed to load BPF filter from file=%s error=%d:(%s)\n",
-		       __func__, __LINE__, tmpfile, rte_errno, rte_strerror(rte_errno));
+		printf("%s@%d: failed to install BPF filter, error=%d:(%s)\n",
+		       __func__, __LINE__, rte_errno, rte_strerror(rte_errno));
+		rte_bpf_destroy(bpf);
 		return ret;
 	}
 
@@ -4217,10 +4259,9 @@ static int bpf_tx_test(uint16_t port, const char *tmpfile, struct rte_mempool *p
 
 /* Test loading a transmit filter which only allows IPv4 packets */
 static int
-test_bpf_elf_tx_load(void)
+test_bpf_elf_tx_load(load_elf_image_t load_elf_image)
 {
 	static const char null_dev[] = "net_null_bpf0";
-	char *tmpfile = NULL;
 	struct rte_mempool *mb_pool = NULL;
 	uint16_t port = UINT16_MAX;
 	int ret;
@@ -4237,27 +4278,17 @@ test_bpf_elf_tx_load(void)
 	if (ret != 0)
 		goto fail;
 
-	/* Create temp file from embedded BPF object */
-	tmpfile = create_temp_bpf_file(app_test_bpf_filter_o, app_test_bpf_filter_o_len, "tx");
-	if (tmpfile == NULL)
-		goto fail;
-
 	/* Do test with VM */
-	ret = bpf_tx_test(port, tmpfile, mb_pool, "filter", 0);
+	ret = bpf_tx_test(port, mb_pool, load_elf_image, "filter", 0);
 	if (ret != 0)
 		goto fail;
 
 	/* Repeat with JIT */
-	ret = bpf_tx_test(port, tmpfile, mb_pool, "filter", RTE_BPF_ETH_F_JIT);
+	ret = bpf_tx_test(port, mb_pool, load_elf_image, "filter", RTE_BPF_ETH_F_JIT);
 	if (ret == 0)
 		printf("%s: TX ELF load test passed\n", __func__);
 
 fail:
-	if (tmpfile) {
-		unlink(tmpfile);
-		free(tmpfile);
-	}
-
 	if (port != UINT16_MAX)
 		rte_vdev_uninit(null_dev);
 
@@ -4272,23 +4303,28 @@ test_bpf_elf_tx_load(void)
 }
 
 /* Test loading a receive filter */
-static int bpf_rx_test(uint16_t port, const char *tmpfile, struct rte_mempool *pool,
-		       const char *section, uint32_t flags, uint16_t expected)
+static int bpf_rx_test(uint16_t port, struct rte_mempool *pool, load_elf_image_t load_elf_image,
+	const char *section, uint32_t flags, uint16_t expected)
 {
-	struct rte_mbuf *pkts[BPF_TEST_BURST];
-	const struct rte_bpf_prm prm = {
-		.prog_arg = {
-			.type = RTE_BPF_ARG_PTR,
-			.size = sizeof(struct dummy_net),
-		},
+	static const struct rte_bpf_arg prog_arg = {
+		.type = RTE_BPF_ARG_PTR,
+		.size = sizeof(struct dummy_net),
 	};
+	struct rte_mbuf *pkts[BPF_TEST_BURST];
+	struct rte_bpf *bpf;
 	int ret;
 
-	/* Load BPF program to drop all packets */
-	ret = rte_bpf_eth_rx_elf_load(port, 0, &prm, tmpfile, section, flags);
+	/* Try to load BPF program from image */
+	bpf = load_elf_image(app_test_bpf_filter_o, app_test_bpf_filter_o_len, section,
+		NULL, 0, &prog_arg);
+	TEST_ASSERT_NOT_NULL(bpf, "failed to load BPF filter from image, error=%d:(%s)\n",
+		       rte_errno, rte_strerror(rte_errno));
+
+	/* Try to install loaded BPF program */
+	ret = rte_bpf_eth_rx_install(port, 0, bpf, flags);
 	if (ret != 0) {
-		printf("%s@%d: failed to load BPF filter from file=%s error=%d:(%s)\n",
-		       __func__, __LINE__, tmpfile, rte_errno, rte_strerror(rte_errno));
+		printf("%s@%d: failed to install BPF filter, error=%d:(%s)\n",
+		       __func__, __LINE__, rte_errno, rte_strerror(rte_errno));
 		return ret;
 	}
 
@@ -4311,11 +4347,10 @@ static int bpf_rx_test(uint16_t port, const char *tmpfile, struct rte_mempool *p
 
 /* Test loading a receive filters, first with drop all and then with allow all packets */
 static int
-test_bpf_elf_rx_load(void)
+test_bpf_elf_rx_load(load_elf_image_t load_elf_image)
 {
 	static const char null_dev[] = "net_null_bpf0";
 	struct rte_mempool *pool = NULL;
-	char *tmpfile = NULL;
 	uint16_t port = UINT16_MAX;
 	int ret;
 
@@ -4331,28 +4366,23 @@ test_bpf_elf_rx_load(void)
 	if (ret != 0)
 		goto fail;
 
-	/* Create temp file from embedded BPF object */
-	tmpfile = create_temp_bpf_file(app_test_bpf_filter_o, app_test_bpf_filter_o_len, "rx");
-	if (tmpfile == NULL)
-		goto fail;
-
 	/* Do test with VM */
-	ret = bpf_rx_test(port, tmpfile, pool, "drop", 0, 0);
+	ret = bpf_rx_test(port, pool, load_elf_image, "drop", 0, 0);
 	if (ret != 0)
 		goto fail;
 
 	/* Repeat with JIT */
-	ret = bpf_rx_test(port, tmpfile, pool, "drop", RTE_BPF_ETH_F_JIT, 0);
+	ret = bpf_rx_test(port, pool, load_elf_image, "drop", RTE_BPF_ETH_F_JIT, 0);
 	if (ret != 0)
 		goto fail;
 
 	/* Repeat with allow all */
-	ret = bpf_rx_test(port, tmpfile, pool, "allow", 0, BPF_TEST_BURST);
+	ret = bpf_rx_test(port, pool, load_elf_image, "allow", 0, BPF_TEST_BURST);
 	if (ret != 0)
 		goto fail;
 
 	/* Repeat with JIT */
-	ret = bpf_rx_test(port, tmpfile, pool, "allow", RTE_BPF_ETH_F_JIT, BPF_TEST_BURST);
+	ret = bpf_rx_test(port, pool, load_elf_image, "allow", RTE_BPF_ETH_F_JIT, BPF_TEST_BURST);
 	if (ret != 0)
 		goto fail;
 
@@ -4364,11 +4394,6 @@ test_bpf_elf_rx_load(void)
 			  "Mempool available %u != %u leaks?", avail, BPF_TEST_POOLSIZE);
 
 fail:
-	if (tmpfile) {
-		unlink(tmpfile);
-		free(tmpfile);
-	}
-
 	if (port != UINT16_MAX)
 		rte_vdev_uninit(null_dev);
 
@@ -4381,13 +4406,21 @@ test_bpf_elf_rx_load(void)
 static int
 test_bpf_elf(void)
 {
-	int ret;
+	static const load_elf_image_t elf_image_loaders[] = {
+		load_elf_image_temp_file,
+		load_elf_image_direct,
+	};
 
-	ret = test_bpf_elf_load();
-	if (ret == TEST_SUCCESS)
-		ret = test_bpf_elf_tx_load();
-	if (ret == TEST_SUCCESS)
-		ret = test_bpf_elf_rx_load();
+	int ret = TEST_SUCCESS;
+
+	for (int li = 0; li != RTE_DIM(elf_image_loaders); ++li) {
+		if (ret == TEST_SUCCESS)
+			ret = test_bpf_elf_load(elf_image_loaders[li]);
+		if (ret == TEST_SUCCESS)
+			ret = test_bpf_elf_tx_load(elf_image_loaders[li]);
+		if (ret == TEST_SUCCESS)
+			ret = test_bpf_elf_rx_load(elf_image_loaders[li]);
+	}
 
 	return ret;
 }
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 09/10] doc: add release notes for new extensible BPF API
  2026-05-06 17:21 [PATCH 00/10] bpf: introduce extensible load API Marat Khalili
                   ` (7 preceding siblings ...)
  2026-05-06 17:22 ` [PATCH 08/10] test/bpf: test loading ELF file from memory Marat Khalili
@ 2026-05-06 17:22 ` Marat Khalili
  2026-05-06 17:22 ` [PATCH 10/10] doc: add load API to BPF programmer's guide Marat Khalili
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 23+ messages in thread
From: Marat Khalili @ 2026-05-06 17:22 UTC (permalink / raw)
  Cc: dev

Document the following new eBPF features introduced in this release:
* Extensible BPF loading API (rte_bpf_load_ex, rte_bpf_prm_ex).
* Loading and executing eBPF programs with up to 5 arguments.
* Installing already loaded eBPF programs as port callbacks.
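
The backward-compatibility mechanism behind the sz field of rte_bpf_prm_ex
can be sketched with hypothetical structs (prm_v1, prm_v2 and lib_load
below are illustrative, not the DPDK definitions): the library copies only
the bytes the caller declared, so fields added later default to zero for
callers built against an older layout.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical parameter layouts: v2 adds a field after v1 shipped. */
struct prm_v1 { size_t sz; int origin; };
struct prm_v2 { size_t sz; int origin; int new_option; };

/* The "library" copies only the caller-declared bytes; new fields are
 * zero for callers that predate them. Callers should zero-initialize
 * their struct so padding bytes are well defined. */
static int lib_load(const void *user_prm)
{
	struct prm_v2 prm;
	size_t declared = *(const size_t *)user_prm;
	size_t n = declared < sizeof(prm) ? declared : sizeof(prm);

	memset(&prm, 0, sizeof(prm));
	memcpy(&prm, user_prm, n);
	return prm.new_option; /* 0 when the caller predates the field */
}
```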

Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
 doc/guides/rel_notes/release_26_07.rst | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/doc/guides/rel_notes/release_26_07.rst b/doc/guides/rel_notes/release_26_07.rst
index f012d47a4b1b..18810ab81d93 100644
--- a/doc/guides/rel_notes/release_26_07.rst
+++ b/doc/guides/rel_notes/release_26_07.rst
@@ -63,6 +63,26 @@ New Features
     ``rte_eal_init`` and the application is responsible for probing each device,
   * ``--auto-probing`` enables the initial bus probing, which is the current default behavior.
 
+* **Added extensible BPF loading API.**
+
+  Added an extensible BPF loading API comprising the function
+  ``rte_bpf_load_ex`` and struct ``rte_bpf_prm_ex``. This enables new features
+  such as loading classic BPF (cBPF), loading ELF images directly from memory
+  buffers, and executing multi-argument programs, while avoiding future ABI
+  breakages.
+
+* **Added support for executing BPF programs with multiple arguments.**
+
+  Added support for loading and executing BPF programs with up to 5 arguments.
+  This introduces new API functions ``rte_bpf_exec_ex``,
+  ``rte_bpf_exec_burst_ex``, and ``rte_bpf_get_jit_ex``.
+
+* **Added BPF port callback installation API.**
+
+  Added new API functions ``rte_bpf_eth_rx_install`` and
+  ``rte_bpf_eth_tx_install`` for installing already loaded BPF programs as
+  port callbacks (as opposed to loading them directly from ELF files).
+
 
 Removed Items
 -------------
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 10/10] doc: add load API to BPF programmer's guide
  2026-05-06 17:21 [PATCH 00/10] bpf: introduce extensible load API Marat Khalili
                   ` (8 preceding siblings ...)
  2026-05-06 17:22 ` [PATCH 09/10] doc: add release notes for new extensible BPF API Marat Khalili
@ 2026-05-06 17:22 ` Marat Khalili
  2026-05-09 12:36 ` [PATCH 00/10] bpf: introduce extensible load API Konstantin Ananyev
  2026-05-14  9:37 ` [PATCH v2 " Marat Khalili
  11 siblings, 0 replies; 23+ messages in thread
From: Marat Khalili @ 2026-05-06 17:22 UTC (permalink / raw)
  To: Konstantin Ananyev; +Cc: dev

Rewrite the basic operations list to focus on a typical use. Provide an
end-to-end example demonstrating loading from an ELF file, executing via
JIT or the interpreter, and properly handling multiple custom arguments
using rte_bpf_prog_ctx.

Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
 doc/guides/prog_guide/bpf_lib.rst | 75 ++++++++++++++++++++++++++++---
 1 file changed, 68 insertions(+), 7 deletions(-)

diff --git a/doc/guides/prog_guide/bpf_lib.rst b/doc/guides/prog_guide/bpf_lib.rst
index 8c820328b984..df3782508829 100644
--- a/doc/guides/prog_guide/bpf_lib.rst
+++ b/doc/guides/prog_guide/bpf_lib.rst
@@ -15,17 +15,79 @@ for more information.
 Also it introduces basic framework to load/unload BPF-based filters
 on eth devices (right now only via SW RX/TX callbacks).
 
-The library API provides the following basic operations:
+The library API provides the following basic operations for working with BPF
+programs:
 
-*  Create a new BPF execution context and load user provided eBPF code into it.
+*   **Loading:** The extensible API (``rte_bpf_load_ex``) is the recommended
+    way to load a BPF program. By utilizing ``struct rte_bpf_prm_ex``, you can
+    load an eBPF program from an ELF file on disk, or load eBPF/cBPF bytecode
+    directly from memory buffers.
 
-*   Destroy an BPF execution context and its runtime structures and free the associated memory.
+*   **Execution via Callbacks:** Once loaded, a BPF program can be attached to
+    a specific ethernet device port and queue to automatically process incoming
+    or outgoing packets using ``rte_bpf_eth_rx_install`` or
+    ``rte_bpf_eth_tx_install``.
 
-*   Execute eBPF bytecode associated with provided input parameter.
+*   **Direct Execution:** You can execute a BPF program directly from your
+    application code using ``rte_bpf_exec_ex`` (or the burst variant
+    ``rte_bpf_exec_burst_ex``). This API allows passing an execution context
+    (``struct rte_bpf_prog_ctx``) containing up to 5 custom arguments.
 
-*   Provide information about natively compiled code for given BPF context.
+*   **JIT Execution:** For maximum performance, you can retrieve the natively
+    compiled (JIT) function pointer for a loaded program using
+    ``rte_bpf_get_jit_ex`` and call it directly from your code with the same
+    arguments.
 
-*   Load BPF program from the ELF file and install callback to execute it on given ethdev port/queue.
+*   **Cleanup:** Destroy a BPF execution context and free the associated memory
+    using ``rte_bpf_destroy``.
+
+The following is a concise example of loading an eBPF program from an ELF file,
+and executing it directly, utilizing the JIT-compiled version if available:
+
+.. code-block:: c
+
+    struct rte_bpf_prm_ex prm = {
+        .sz = sizeof(struct rte_bpf_prm_ex),
+        .origin = RTE_BPF_ORIGIN_ELF_FILE,
+        .elf_file = {
+            .path = "ptype.o",
+            .section = ".text",
+        },
+        .nb_prog_arg = 2,
+        .prog_arg = {
+            [0] = {
+                .type = RTE_BPF_ARG_PTR_MBUF,
+                .size = sizeof(struct rte_mbuf),
+                .buf_size = RTE_MBUF_DEFAULT_BUF_SIZE,
+            },
+            [1] = {
+                .type = RTE_BPF_ARG_RAW,
+                .size = sizeof(uint64_t),
+            },
+        },
+    };
+    struct rte_bpf *bpf = rte_bpf_load_ex(&prm);
+    if (bpf == NULL) {
+        /* Handle load failure */
+    }
+
+    struct rte_bpf_prog_ctx ctx = {
+        .arg[0] = { .ptr = mbuf },
+        .arg[1] = { .u64 = RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK },
+    };
+
+    struct rte_bpf_jit_ex jit;
+    uint64_t ret;
+    if (rte_bpf_get_jit_ex(bpf, &jit) == 0 && jit.func2 != NULL) {
+        /* Call the JIT-compiled function directly for best performance */
+        ret = jit.func2(ctx.arg[0], ctx.arg[1]);
+    } else {
+        /* Fallback to interpreter */
+        uint64_t flags = 0;
+        ret = rte_bpf_exec_ex(bpf, &ctx, flags);
+    }
+
+    rte_bpf_destroy(bpf);
 
 Packet data load instructions
 -----------------------------
@@ -60,7 +122,6 @@ Not currently supported eBPF features
 -------------------------------------
 
  - JIT support only available for X86_64 and arm64 platforms
- - cBPF
  - tail-pointer call
  - eBPF MAP
  - external function calls for 32-bit platforms
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* RE: [PATCH 00/10] bpf: introduce extensible load API
  2026-05-06 17:21 [PATCH 00/10] bpf: introduce extensible load API Marat Khalili
                   ` (9 preceding siblings ...)
  2026-05-06 17:22 ` [PATCH 10/10] doc: add load API to BPF programmer's guide Marat Khalili
@ 2026-05-09 12:36 ` Konstantin Ananyev
  2026-05-14  9:37 ` [PATCH v2 " Marat Khalili
  11 siblings, 0 replies; 23+ messages in thread
From: Konstantin Ananyev @ 2026-05-09 12:36 UTC (permalink / raw)
  To: Marat Khalili; +Cc: dev@dpdk.org



> This patchset introduces an extensible load API for the BPF library in
> DPDK, addressing current limitations regarding ABI stability and feature
> constraints.
> 
> Currently, `rte_bpf_load` relies on a fixed `struct rte_bpf_prm`, which
> makes it difficult to add new loading options or parameters without
> breaking the ABI.
> 
> To resolve these issues, this series introduces `rte_bpf_load_ex` taking
> `struct rte_bpf_prm_ex`. The new parameter structure includes a `sz`
> field for backward compatibility, allowing future extensions.
> 
> Taking advantage of the new extensible API, this patchset also adds
> several new features:
> * Support for loading and executing BPF programs with up to 5 arguments.
> * Support for loading classic BPF (cBPF) directly.
> * Support for loading ELF files directly from memory buffers.
> * New API functions (`rte_bpf_eth_rx_install` and `rte_bpf_eth_tx_install`)
>   to install an already loaded BPF program as a port callback, decoupling
>   the loading phase from the installation phase.
> 
> Marat Khalili (10):
>   bpf: make logging prefixes more consistent
>   bpf: introduce extensible load API
>   bpf: support up to 5 arguments
>   bpf: add cBPF origin to rte_bpf_load_ex
>   bpf: support rte_bpf_prm_ex with port callbacks
>   bpf: support loading ELF files from memory
>   test/bpf: test loading cBPF directly
>   test/bpf: test loading ELF file from memory
>   doc: add release notes for new extensible BPF API
>   doc: add load API to BPF programmer's guide
> 
>  app/test/test_bpf.c                    | 325 +++++++++++++++----------
>  doc/guides/prog_guide/bpf_lib.rst      |  75 +++++-
>  doc/guides/rel_notes/release_26_07.rst |  20 ++
>  lib/bpf/bpf.c                          |  32 ++-
>  lib/bpf/bpf_convert.c                  |  97 +++++++-
>  lib/bpf/bpf_exec.c                     | 126 +++++++++-
>  lib/bpf/bpf_impl.h                     |  53 +++-
>  lib/bpf/bpf_jit_arm64.c                |  18 +-
>  lib/bpf/bpf_jit_x86.c                  |  10 +-
>  lib/bpf/bpf_load.c                     | 200 +++++++++++++--
>  lib/bpf/bpf_load_elf.c                 | 189 +++++++++-----
>  lib/bpf/bpf_pkt.c                      |  65 +++--
>  lib/bpf/bpf_stub.c                     |  46 ----
>  lib/bpf/bpf_validate.c                 |  94 ++++---
>  lib/bpf/meson.build                    |  15 +-
>  lib/bpf/rte_bpf.h                      | 195 ++++++++++++++-
>  lib/bpf/rte_bpf_ethdev.h               |  54 ++++
>  17 files changed, 1245 insertions(+), 369 deletions(-)
>  delete mode 100644 lib/bpf/bpf_stub.c
> --

I already reviewed these changes offline, as part of our
internal patch acceptance process. 
Current version LGTM and addresses all comments I had.

Series-Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>

> 2.43.0


^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH v2 00/10] bpf: introduce extensible load API
  2026-05-06 17:21 [PATCH 00/10] bpf: introduce extensible load API Marat Khalili
                   ` (10 preceding siblings ...)
  2026-05-09 12:36 ` [PATCH 00/10] bpf: introduce extensible load API Konstantin Ananyev
@ 2026-05-14  9:37 ` Marat Khalili
  2026-05-14  9:37   ` [PATCH v2 01/10] bpf: make logging prefixes more consistent Marat Khalili
                     ` (9 more replies)
  11 siblings, 10 replies; 23+ messages in thread
From: Marat Khalili @ 2026-05-14  9:37 UTC (permalink / raw)
  Cc: dev, Konstantin Ananyev

This patchset introduces an extensible load API for the BPF library in
DPDK, addressing current limitations regarding ABI stability and feature
constraints.

Currently, `rte_bpf_load` relies on a fixed `struct rte_bpf_prm`, which
makes it difficult to add new loading options or parameters without
breaking the ABI.

To resolve these issues, this series introduces `rte_bpf_load_ex` taking
`struct rte_bpf_prm_ex`. The new parameter structure includes a `sz`
field for backward compatibility, allowing future extensions.

Taking advantage of the new extensible API, this patchset also adds
several new features:
* Support for loading and executing BPF programs with up to 5 arguments.
* Support for loading classic BPF (cBPF) directly.
* Support for loading ELF files directly from memory buffers.
* New API functions (`rte_bpf_eth_rx_install` and `rte_bpf_eth_tx_install`)
  to install an already loaded BPF program as a port callback, decoupling
  the loading phase from the installation phase.

v2:
* Fixed a potential segmentation fault in `exec_vm_burst_ex` by deferring
  the dereference of `ctx[i].arg` until it is confirmed that `nb_prog_arg > 0`.
* Clarified documentation and code comments for `RTE_BPF_EXEC_FLAG_JIT`
  requirements and fast-path expectations.

Series-Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>

---
To the AI reviewer:
Thank you for the review. Several of your points were very helpful and
have been addressed in v2 (e.g. the potential null pointer dereference
in `exec_vm_burst_ex`, and documenting the `RTE_BPF_EXEC_FLAG_JIT`
expectations). The remaining points were investigated but do not require
code changes for the following reasons:

> Can the macro [RTE_BPF_LOG_FUNC_LINE] be enhanced for type safety?
> Does string literal concatenation support logging format modifiers
> consistently across all call sites?
`RTE_LOG_LINE` uses compiler attributes (`__attribute__((format(printf,
...)))`) to strictly enforce format type safety. Adjacent string
literals are concatenated by the compiler before the format check takes
place, so the prefixed format works safely across all macro invocations.

> Does `bpf_load()` always set `rte_errno` to `ENOMEM` when returning
> `NULL`, or could it mask other errors by hardcoding `-ENOMEM`?
By design, the internal `bpf_load()` function only returns `NULL` when
its internal memory allocation (`mmap`) fails, so returning `-ENOMEM`
accurately reflects the nature of the failure.

> Can the loop go out of bounds? When `opts_sz` is larger than
> `type_sz`, does this code read beyond the allocated memory region
> pointed to by `opts`?
The bounds are established by the application-supplied `opts_sz`. If a
caller declares a size larger than the memory it actually allocated, the
resulting out-of-bounds read is undefined behaviour on the caller's
side; `opts_valid` itself iterates exactly within the declared limits.

> Does the `memcpy()` handle the case where `app_prm->sz` is larger than
> `sizeof(load->prm)` correctly? Can partial copies lead to inconsistent
> state if the struct has pointer members?
The call to `opts_valid()` happens prior to `memcpy()` and guarantees
that any extra trailing space is filled with zeroes. The application
therefore relies entirely on default behaviour for the newer unknown
fields, which makes a truncated partial copy sound.

> Can `elf_end()` be called with `NULL` safely? Does it cause undefined
> behavior if `load->elf` is `NULL`?
`elf_end()` in `libelf` is explicitly defined to handle `NULL` pointers
as a safe no-op.

> Can this logic handle uninitialized or out-of-range `prog_arg->type`
> values? Does `RTE_BPF_ARG_PTR_MBUF` have a specific numeric value that
> should be validated before use?
The function performs strict equality checks against the valid enum
values (`RTE_BPF_ARG_RAW`, `RTE_BPF_ARG_PTR`, `RTE_BPF_ARG_PTR_MBUF`).
Invalid or uninitialized types simply fall through and cleanly return
`false`.

> Does the `malloc` check for integer overflow in the size calculation
> `nb_ins * sizeof(load->ins[0])`?
`bpf_convert_filter()` acts as a gatekeeper and enforces an upper bound
(`BPF_MAXINSNS` = 4096) on the instruction count. This small ceiling
makes arithmetic overflow in the `malloc` size calculation impossible.

> Does `bpf_eth_elf_install()` acquire any additional locks? Can this
> lead to a deadlock?
No secondary lock acquisitions exist inside `bpf_eth_elf_install()`. It
executes standard initialization logic and callback registrations
without touching threading constructs, ensuring a deadlock-free flow.

Marat Khalili (10):
  bpf: make logging prefixes more consistent
  bpf: introduce extensible load API
  bpf: support up to 5 arguments
  bpf: add cBPF origin to rte_bpf_load_ex
  bpf: support rte_bpf_prm_ex with port callbacks
  bpf: support loading ELF files from memory
  test/bpf: test loading cBPF directly
  test/bpf: test loading ELF file from memory
  doc: add release notes for new extensible BPF API
  doc: add load API to BPF programmer's guide

 app/test/test_bpf.c                    | 325 +++++++++++++++----------
 doc/guides/prog_guide/bpf_lib.rst      |  75 +++++-
 doc/guides/rel_notes/release_26_07.rst |  20 ++
 lib/bpf/bpf.c                          |  32 ++-
 lib/bpf/bpf_convert.c                  |  97 +++++++-
 lib/bpf/bpf_exec.c                     | 129 +++++++++-
 lib/bpf/bpf_impl.h                     |  53 +++-
 lib/bpf/bpf_jit_arm64.c                |  18 +-
 lib/bpf/bpf_jit_x86.c                  |  10 +-
 lib/bpf/bpf_load.c                     | 200 +++++++++++++--
 lib/bpf/bpf_load_elf.c                 | 189 +++++++++-----
 lib/bpf/bpf_pkt.c                      |  65 +++--
 lib/bpf/bpf_stub.c                     |  46 ----
 lib/bpf/bpf_validate.c                 |  94 ++++---
 lib/bpf/meson.build                    |  15 +-
 lib/bpf/rte_bpf.h                      | 199 ++++++++++++++-
 lib/bpf/rte_bpf_ethdev.h               |  54 ++++
 17 files changed, 1252 insertions(+), 369 deletions(-)
 delete mode 100644 lib/bpf/bpf_stub.c

-- 
2.43.0


^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH v2 01/10] bpf: make logging prefixes more consistent
  2026-05-14  9:37 ` [PATCH v2 " Marat Khalili
@ 2026-05-14  9:37   ` Marat Khalili
  2026-05-14  9:37   ` [PATCH v2 02/10] bpf: introduce extensible load API Marat Khalili
                     ` (8 subsequent siblings)
  9 siblings, 0 replies; 23+ messages in thread
From: Marat Khalili @ 2026-05-14  9:37 UTC (permalink / raw)
  To: Konstantin Ananyev, Wathsala Vithanage; +Cc: dev

Logging in lib/bpf is inconsistent: some places use `%s()`, others just
`%s` for `__func__`.

Introduce a new macro for log lines prefixed with the function name and
use it everywhere a bare function name is prefixed to the log line.

Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
 lib/bpf/bpf_convert.c   | 18 +++++++++---------
 lib/bpf/bpf_impl.h      |  3 +++
 lib/bpf/bpf_jit_arm64.c |  4 ++--
 lib/bpf/bpf_load.c      |  2 +-
 lib/bpf/bpf_load_elf.c  |  2 +-
 lib/bpf/bpf_stub.c      |  6 ++----
 lib/bpf/bpf_validate.c  | 25 ++++++++++++-------------
 7 files changed, 30 insertions(+), 30 deletions(-)

diff --git a/lib/bpf/bpf_convert.c b/lib/bpf/bpf_convert.c
index 86e703299d..953ca80670 100644
--- a/lib/bpf/bpf_convert.c
+++ b/lib/bpf/bpf_convert.c
@@ -247,8 +247,8 @@ static int bpf_convert_filter(const struct bpf_insn *prog, size_t len,
 	uint8_t bpf_src;
 
 	if (len > BPF_MAXINSNS) {
-		RTE_BPF_LOG_LINE(ERR, "%s: cBPF program too long (%zu insns)",
-			    __func__, len);
+		RTE_BPF_LOG_FUNC_LINE(ERR, "cBPF program too long (%zu insns)",
+			    len);
 		return -EINVAL;
 	}
 
@@ -483,8 +483,8 @@ static int bpf_convert_filter(const struct bpf_insn *prog, size_t len,
 
 			/* Unknown instruction. */
 		default:
-			RTE_BPF_LOG_LINE(ERR, "%s: Unknown instruction!: %#x",
-				    __func__, fp->code);
+			RTE_BPF_LOG_FUNC_LINE(ERR, "Unknown instruction!: %#x",
+				    fp->code);
 			goto err;
 		}
 
@@ -528,7 +528,7 @@ rte_bpf_convert(const struct bpf_program *prog)
 	int ret;
 
 	if (prog == NULL) {
-		RTE_BPF_LOG_LINE(ERR, "%s: NULL program", __func__);
+		RTE_BPF_LOG_FUNC_LINE(ERR, "NULL program");
 		rte_errno = EINVAL;
 		return NULL;
 	}
@@ -536,13 +536,13 @@ rte_bpf_convert(const struct bpf_program *prog)
 	/* 1st pass: calculate the eBPF program length */
 	ret = bpf_convert_filter(prog->bf_insns, prog->bf_len, NULL, &ebpf_len);
 	if (ret < 0) {
-		RTE_BPF_LOG_LINE(ERR, "%s: cannot get eBPF length", __func__);
+		RTE_BPF_LOG_FUNC_LINE(ERR, "cannot get eBPF length");
 		rte_errno = -ret;
 		return NULL;
 	}
 
-	RTE_BPF_LOG_LINE(DEBUG, "%s: prog len cBPF=%u -> eBPF=%u",
-		    __func__, prog->bf_len, ebpf_len);
+	RTE_BPF_LOG_FUNC_LINE(DEBUG, "prog len cBPF=%u -> eBPF=%u",
+		    prog->bf_len, ebpf_len);
 
 	prm = rte_zmalloc("bpf_filter",
 			  sizeof(*prm) + ebpf_len * sizeof(*ebpf), 0);
@@ -557,7 +557,7 @@ rte_bpf_convert(const struct bpf_program *prog)
 	/* 2nd pass: remap cBPF to eBPF instructions  */
 	ret = bpf_convert_filter(prog->bf_insns, prog->bf_len, ebpf, &ebpf_len);
 	if (ret < 0) {
-		RTE_BPF_LOG_LINE(ERR, "%s: cannot convert cBPF to eBPF", __func__);
+		RTE_BPF_LOG_FUNC_LINE(ERR, "cannot convert cBPF to eBPF");
 		rte_free(prm);
 		rte_errno = -ret;
 		return NULL;
diff --git a/lib/bpf/bpf_impl.h b/lib/bpf/bpf_impl.h
index f5fa220984..fb5ec3c4d6 100644
--- a/lib/bpf/bpf_impl.h
+++ b/lib/bpf/bpf_impl.h
@@ -32,6 +32,9 @@ extern int rte_bpf_logtype;
 #define RTE_BPF_LOG_LINE(lvl, ...) \
 	RTE_LOG_LINE(lvl, BPF, __VA_ARGS__)
 
+#define RTE_BPF_LOG_FUNC_LINE(lvl, fmt, ...) \
+	RTE_LOG_LINE(lvl, BPF, "%s(): " fmt, __func__, ##__VA_ARGS__)
+
 static inline size_t
 bpf_size(uint32_t bpf_op_sz)
 {
diff --git a/lib/bpf/bpf_jit_arm64.c b/lib/bpf/bpf_jit_arm64.c
index a04ef33a9c..4bbb97da1b 100644
--- a/lib/bpf/bpf_jit_arm64.c
+++ b/lib/bpf/bpf_jit_arm64.c
@@ -98,8 +98,8 @@ check_invalid_args(struct a64_jit_ctx *ctx, uint32_t limit)
 
 	for (idx = 0; idx < limit; idx++) {
 		if (rte_le_to_cpu_32(ctx->ins[idx]) == A64_INVALID_OP_CODE) {
-			RTE_BPF_LOG_LINE(ERR,
-				"%s: invalid opcode at %u;", __func__, idx);
+			RTE_BPF_LOG_FUNC_LINE(ERR,
+				"invalid opcode at %u;", idx);
 			return -EINVAL;
 		}
 	}
diff --git a/lib/bpf/bpf_load.c b/lib/bpf/bpf_load.c
index 6983c026af..b8a0426fe2 100644
--- a/lib/bpf/bpf_load.c
+++ b/lib/bpf/bpf_load.c
@@ -100,7 +100,7 @@ rte_bpf_load(const struct rte_bpf_prm *prm)
 
 	if (rc != 0) {
 		rte_errno = -rc;
-		RTE_BPF_LOG_LINE(ERR, "%s: %d-th xsym is invalid", __func__, i);
+		RTE_BPF_LOG_FUNC_LINE(ERR, "%d-th xsym is invalid", i);
 		return NULL;
 	}
 
diff --git a/lib/bpf/bpf_load_elf.c b/lib/bpf/bpf_load_elf.c
index 1d30ba17e2..2390823cbf 100644
--- a/lib/bpf/bpf_load_elf.c
+++ b/lib/bpf/bpf_load_elf.c
@@ -122,7 +122,7 @@ check_elf_header(const Elf64_Ehdr *eh)
 		err = "unexpected machine type";
 
 	if (err != NULL) {
-		RTE_BPF_LOG_LINE(ERR, "%s(): %s", __func__, err);
+		RTE_BPF_LOG_FUNC_LINE(ERR, "%s", err);
 		return -EINVAL;
 	}
 
diff --git a/lib/bpf/bpf_stub.c b/lib/bpf/bpf_stub.c
index dea0d703ca..e06e820d83 100644
--- a/lib/bpf/bpf_stub.c
+++ b/lib/bpf/bpf_stub.c
@@ -21,8 +21,7 @@ rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname,
 		return NULL;
 	}
 
-	RTE_BPF_LOG_LINE(ERR, "%s() is not supported, rebuild with libelf installed",
-		__func__);
+	RTE_BPF_LOG_FUNC_LINE(ERR, "not supported, rebuild with libelf installed");
 	rte_errno = ENOTSUP;
 	return NULL;
 }
@@ -38,8 +37,7 @@ rte_bpf_convert(const struct bpf_program *prog)
 		return NULL;
 	}
 
-	RTE_BPF_LOG_LINE(ERR, "%s() is not supported, rebuild with libpcap installed",
-		__func__);
+	RTE_BPF_LOG_FUNC_LINE(ERR, "not supported, rebuild with libpcap installed");
 	rte_errno = ENOTSUP;
 	return NULL;
 }
diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c
index e8dbec2827..a7f4f576c9 100644
--- a/lib/bpf/bpf_validate.c
+++ b/lib/bpf/bpf_validate.c
@@ -1838,16 +1838,16 @@ add_edge(struct bpf_verifier *bvf, struct inst_node *node, uint32_t nidx)
 	uint32_t ne;
 
 	if (nidx >= bvf->prm->nb_ins) {
-		RTE_BPF_LOG_LINE(ERR,
-			"%s: program boundary violation at pc: %u, next pc: %u",
-			__func__, get_node_idx(bvf, node), nidx);
+		RTE_BPF_LOG_FUNC_LINE(ERR,
+			"program boundary violation at pc: %u, next pc: %u",
+			get_node_idx(bvf, node), nidx);
 		return -EINVAL;
 	}
 
 	ne = node->nb_edge;
 	if (ne >= RTE_DIM(node->edge_dest)) {
-		RTE_BPF_LOG_LINE(ERR, "%s: internal error at pc: %u",
-			__func__, get_node_idx(bvf, node));
+		RTE_BPF_LOG_FUNC_LINE(ERR, "internal error at pc: %u",
+			get_node_idx(bvf, node));
 		return -EINVAL;
 	}
 
@@ -2005,8 +2005,7 @@ validate(struct bpf_verifier *bvf)
 
 		err = check_syntax(ins);
 		if (err != 0) {
-			RTE_BPF_LOG_LINE(ERR, "%s: %s at pc: %u",
-				__func__, err, i);
+			RTE_BPF_LOG_FUNC_LINE(ERR, "%s at pc: %u", err, i);
 			rc |= -EINVAL;
 		}
 
@@ -2230,9 +2229,9 @@ save_cur_eval_state(struct bpf_verifier *bvf, struct inst_node *node)
 	/* get new eval_state for this node */
 	st = pull_eval_state(&bvf->evst_sr_pool);
 	if (st == NULL) {
-		RTE_BPF_LOG_LINE(ERR,
-			"%s: internal error (out of space) at pc: %u",
-			__func__, get_node_idx(bvf, node));
+		RTE_BPF_LOG_FUNC_LINE(ERR,
+			"internal error (out of space) at pc: %u",
+			get_node_idx(bvf, node));
 		return -ENOMEM;
 	}
 
@@ -2462,8 +2461,8 @@ evaluate(struct bpf_verifier *bvf)
 				err = ins_chk[op].eval(bvf, ins + idx);
 				stats.nb_eval++;
 				if (err != NULL) {
-					RTE_BPF_LOG_LINE(ERR, "%s: %s at pc: %u",
-						__func__, err, idx);
+					RTE_BPF_LOG_FUNC_LINE(ERR,
+						"%s at pc: %u", err, idx);
 					rc = -EINVAL;
 				}
 			}
@@ -2533,7 +2532,7 @@ __rte_bpf_validate(struct rte_bpf *bpf)
 			bpf->prm.prog_arg.type != RTE_BPF_ARG_PTR &&
 			(sizeof(uint64_t) != sizeof(uintptr_t) ||
 			bpf->prm.prog_arg.type != RTE_BPF_ARG_PTR_MBUF)) {
-		RTE_BPF_LOG_LINE(ERR, "%s: unsupported argument type", __func__);
+		RTE_BPF_LOG_FUNC_LINE(ERR, "unsupported argument type");
 		return -ENOTSUP;
 	}
 
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v2 02/10] bpf: introduce extensible load API
  2026-05-14  9:37 ` [PATCH v2 " Marat Khalili
  2026-05-14  9:37   ` [PATCH v2 01/10] bpf: make logging prefixes more consistent Marat Khalili
@ 2026-05-14  9:37   ` Marat Khalili
  2026-05-14  9:37   ` [PATCH v2 03/10] bpf: support up to 5 arguments Marat Khalili
                     ` (7 subsequent siblings)
  9 siblings, 0 replies; 23+ messages in thread
From: Marat Khalili @ 2026-05-14  9:37 UTC (permalink / raw)
  To: Konstantin Ananyev, Wathsala Vithanage; +Cc: dev

Introduce new BPF load parameters struct rte_bpf_prm_ex that can be
extended without breaking backward or forward compatibility. Introduce
new function rte_bpf_load_ex consolidating in one code path loading from
both ELF file and raw memory image, with possibility to add more options
in the future.

Some changes in code layout and sequence:
* Both old APIs now only forward calls to a single new entry point.
* There is now a centralized cleanup point for all temporary resources
  created during the load process.
* External symbols (xsyms) are now checked for validity just after the
  load started, not after they were already used for relocation.
* File bpf_load_elf.c now only handles opening the ELF file and
  providing the patched instruction array to the load process. These are
  kept as two separate functions to support other ELF sources, such as a
  memory image, in the future.
* Function stubs for the case libelf is not available are moved to
  bpf_load_elf.c to make keeping track of them easier (forgetting to
  update stubs is a common problem).

Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
 lib/bpf/bpf_exec.c      |  10 +--
 lib/bpf/bpf_impl.h      |  32 ++++++-
 lib/bpf/bpf_jit_arm64.c |  12 +--
 lib/bpf/bpf_jit_x86.c   |   8 +-
 lib/bpf/bpf_load.c      | 182 +++++++++++++++++++++++++++++++++++-----
 lib/bpf/bpf_load_elf.c  | 151 +++++++++++++++++++--------------
 lib/bpf/bpf_stub.c      |  17 ----
 lib/bpf/bpf_validate.c  |  32 +++----
 lib/bpf/meson.build     |   4 +-
 lib/bpf/rte_bpf.h       |  68 ++++++++++++++-
 10 files changed, 379 insertions(+), 137 deletions(-)

diff --git a/lib/bpf/bpf_exec.c b/lib/bpf/bpf_exec.c
index 18013753b1..e4668ba10b 100644
--- a/lib/bpf/bpf_exec.c
+++ b/lib/bpf/bpf_exec.c
@@ -47,7 +47,7 @@
 		RTE_BPF_LOG_LINE(ERR, \
 			"%s(%p): division by 0 at pc: %#zx;", \
 			__func__, bpf, \
-			(uintptr_t)(ins) - (uintptr_t)(bpf)->prm.ins); \
+			(uintptr_t)(ins) - (uintptr_t)(bpf)->prm.raw.ins); \
 		return 0; \
 	} \
 } while (0)
@@ -81,7 +81,7 @@
 		RTE_BPF_LOG_LINE(ERR, \
 			"%s(%p): unsupported atomic operation at pc: %#zx;", \
 			__func__, bpf, \
-			(uintptr_t)(ins) - (uintptr_t)(bpf)->prm.ins); \
+			(uintptr_t)(ins) - (uintptr_t)(bpf)->prm.raw.ins); \
 		return 0; \
 	} \
 } while (0)
@@ -157,7 +157,7 @@ bpf_ld_mbuf(const struct rte_bpf *bpf, uint64_t reg[EBPF_REG_NUM],
 		RTE_BPF_LOG_LINE(DEBUG, "%s(bpf=%p, mbuf=%p, ofs=%u, len=%u): "
 			"load beyond packet boundary at pc: %#zx;",
 			__func__, bpf, mb, off, len,
-			(uintptr_t)(ins) - (uintptr_t)(bpf)->prm.ins);
+			(uintptr_t)(ins) - (uintptr_t)(bpf)->prm.raw.ins);
 	return p;
 }
 
@@ -166,7 +166,7 @@ bpf_exec(const struct rte_bpf *bpf, uint64_t reg[EBPF_REG_NUM])
 {
 	const struct ebpf_insn *ins;
 
-	for (ins = bpf->prm.ins; ; ins++) {
+	for (ins = bpf->prm.raw.ins; ; ins++) {
 		switch (ins->code) {
 		/* 32 bit ALU IMM operations */
 		case (BPF_ALU | BPF_ADD | BPF_K):
@@ -483,7 +483,7 @@ bpf_exec(const struct rte_bpf *bpf, uint64_t reg[EBPF_REG_NUM])
 			RTE_BPF_LOG_LINE(ERR,
 				"%s(%p): invalid opcode %#x at pc: %#zx;",
 				__func__, bpf, ins->code,
-				(uintptr_t)ins - (uintptr_t)bpf->prm.ins);
+				(uintptr_t)ins - (uintptr_t)bpf->prm.raw.ins);
 			return 0;
 		}
 	}
diff --git a/lib/bpf/bpf_impl.h b/lib/bpf/bpf_impl.h
index fb5ec3c4d6..1cee109bc9 100644
--- a/lib/bpf/bpf_impl.h
+++ b/lib/bpf/bpf_impl.h
@@ -11,17 +11,45 @@
 #define MAX_BPF_STACK_SIZE	0x200
 
 struct rte_bpf {
-	struct rte_bpf_prm prm;
+	struct rte_bpf_prm_ex prm;
 	struct rte_bpf_jit jit;
 	size_t sz;
 	uint32_t stack_sz;
 };
 
+/* Temporary copies etc. used by the load process. */
+struct __rte_bpf_load {
+	struct rte_bpf_prm_ex prm;
+
+	/* Loading ELF and applying relocations. */
+	int elf_fd;  /* ELF fd, must be negative (not zero) by default. */
+	void *elf;  /* Using void to avoid dependency on libelf. */
+
+	/* Value we are going to return, if any. */
+	struct rte_bpf *bpf;
+};
+
 /*
  * Use '__rte' prefix for non-static internal functions
  * to avoid potential name conflict with other libraries.
  */
-int __rte_bpf_validate(struct rte_bpf *bpf);
+
+/* Free temporary resources created by opening ELF. */
+void
+__rte_bpf_load_elf_cleanup(struct __rte_bpf_load *load);
+
+/* Open the ELF file. */
+int
+__rte_bpf_load_elf_file(struct __rte_bpf_load *load);
+
+/* Get code from ELF and apply relocations to it. */
+int
+__rte_bpf_load_elf_code(struct __rte_bpf_load *load);
+
+/* Validate final BPF code and calculate stack size. */
+int
+__rte_bpf_validate(const struct rte_bpf_prm_ex *prm, uint32_t *stack_sz);
+
 int __rte_bpf_jit(struct rte_bpf *bpf);
 int __rte_bpf_jit_x86(struct rte_bpf *bpf);
 int __rte_bpf_jit_arm64(struct rte_bpf *bpf);
diff --git a/lib/bpf/bpf_jit_arm64.c b/lib/bpf/bpf_jit_arm64.c
index 4bbb97da1b..9e5e142c13 100644
--- a/lib/bpf/bpf_jit_arm64.c
+++ b/lib/bpf/bpf_jit_arm64.c
@@ -111,12 +111,12 @@ jump_offset_init(struct a64_jit_ctx *ctx, struct rte_bpf *bpf)
 {
 	uint32_t i;
 
-	ctx->map = malloc(bpf->prm.nb_ins * sizeof(ctx->map[0]));
+	ctx->map = malloc(bpf->prm.raw.nb_ins * sizeof(ctx->map[0]));
 	if (ctx->map == NULL)
 		return -ENOMEM;
 
 	/* Fill with fake offsets */
-	for (i = 0; i != bpf->prm.nb_ins; i++) {
+	for (i = 0; i != bpf->prm.raw.nb_ins; i++) {
 		ctx->map[i].off = INT32_MAX;
 		ctx->map[i].off_to_b = 0;
 	}
@@ -1130,8 +1130,8 @@ check_program_has_call(struct a64_jit_ctx *ctx, struct rte_bpf *bpf)
 	uint8_t op;
 	uint32_t i;
 
-	for (i = 0; i != bpf->prm.nb_ins; i++) {
-		ins = bpf->prm.ins + i;
+	for (i = 0; i != bpf->prm.raw.nb_ins; i++) {
+		ins = bpf->prm.raw.ins + i;
 		op = ins->code;
 
 		switch (op) {
@@ -1168,10 +1168,10 @@ emit(struct a64_jit_ctx *ctx, struct rte_bpf *bpf)
 
 	emit_prologue(ctx);
 
-	for (i = 0; i != bpf->prm.nb_ins; i++) {
+	for (i = 0; i != bpf->prm.raw.nb_ins; i++) {
 
 		jump_offset_update(ctx, i);
-		ins = bpf->prm.ins + i;
+		ins = bpf->prm.raw.ins + i;
 		op = ins->code;
 		off = ins->off;
 		imm = ins->imm;
diff --git a/lib/bpf/bpf_jit_x86.c b/lib/bpf/bpf_jit_x86.c
index 88b1b5aeab..6f4235d434 100644
--- a/lib/bpf/bpf_jit_x86.c
+++ b/lib/bpf/bpf_jit_x86.c
@@ -1324,12 +1324,12 @@ emit(struct bpf_jit_state *st, const struct rte_bpf *bpf)
 
 	emit_prolog(st, bpf->stack_sz);
 
-	for (i = 0; i != bpf->prm.nb_ins; i++) {
+	for (i = 0; i != bpf->prm.raw.nb_ins; i++) {
 
 		st->idx = i;
 		st->off[i] = st->sz;
 
-		ins = bpf->prm.ins + i;
+		ins = bpf->prm.raw.ins + i;
 
 		dr = ebpf2x86[ins->dst_reg];
 		sr = ebpf2x86[ins->src_reg];
@@ -1532,13 +1532,13 @@ __rte_bpf_jit_x86(struct rte_bpf *bpf)
 
 	/* init state */
 	memset(&st, 0, sizeof(st));
-	st.off = malloc(bpf->prm.nb_ins * sizeof(st.off[0]));
+	st.off = malloc(bpf->prm.raw.nb_ins * sizeof(st.off[0]));
 	if (st.off == NULL)
 		return -ENOMEM;
 
 	/* fill with fake offsets */
 	st.exit.off = INT32_MAX;
-	for (i = 0; i != bpf->prm.nb_ins; i++)
+	for (i = 0; i != bpf->prm.raw.nb_ins; i++)
 		st.off[i] = INT32_MAX;
 
 	/*
diff --git a/lib/bpf/bpf_load.c b/lib/bpf/bpf_load.c
index b8a0426fe2..6501841676 100644
--- a/lib/bpf/bpf_load.c
+++ b/lib/bpf/bpf_load.c
@@ -14,14 +14,14 @@
 #include "bpf_impl.h"
 
 static struct rte_bpf *
-bpf_load(const struct rte_bpf_prm *prm)
+bpf_load(const struct rte_bpf_prm_ex *prm)
 {
 	uint8_t *buf;
 	struct rte_bpf *bpf;
 	size_t sz, bsz, insz, xsz;
 
 	xsz =  prm->nb_xsym * sizeof(prm->xsym[0]);
-	insz = prm->nb_ins * sizeof(prm->ins[0]);
+	insz = prm->raw.nb_ins * sizeof(prm->raw.ins[0]);
 	bsz = sizeof(bpf[0]);
 	sz = insz + xsz + bsz;
 
@@ -37,10 +37,10 @@ bpf_load(const struct rte_bpf_prm *prm)
 
 	if (xsz > 0)
 		memcpy(buf + bsz, prm->xsym, xsz);
-	memcpy(buf + bsz + xsz, prm->ins, insz);
+	memcpy(buf + bsz + xsz, prm->raw.ins, insz);
 
 	bpf->prm.xsym = (void *)(buf + bsz);
-	bpf->prm.ins = (void *)(buf + bsz + xsz);
+	bpf->prm.raw.ins = (void *)(buf + bsz + xsz);
 
 	return bpf;
 }
@@ -80,37 +80,44 @@ bpf_check_xsym(const struct rte_bpf_xsym *xsym)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_bpf_load)
-struct rte_bpf *
-rte_bpf_load(const struct rte_bpf_prm *prm)
+static int
+bpf_check_xsyms(const struct rte_bpf_xsym *xsym, uint32_t nb_xsym)
 {
-	struct rte_bpf *bpf;
 	int32_t rc;
 	uint32_t i;
 
-	if (prm == NULL || prm->ins == NULL || prm->nb_ins == 0 ||
-			(prm->nb_xsym != 0 && prm->xsym == NULL)) {
-		rte_errno = EINVAL;
-		return NULL;
-	}
+	if (nb_xsym != 0 && xsym == NULL)
+		return -EINVAL;
 
 	rc = 0;
-	for (i = 0; i != prm->nb_xsym && rc == 0; i++)
-		rc = bpf_check_xsym(prm->xsym + i);
+	for (i = 0; i != nb_xsym && rc == 0; i++)
+		rc = bpf_check_xsym(xsym + i);
 
 	if (rc != 0) {
-		rte_errno = -rc;
 		RTE_BPF_LOG_FUNC_LINE(ERR, "%d-th xsym is invalid", i);
-		return NULL;
+		return rc;
 	}
 
+	return 0;
+}
+
+static int
+bpf_load_raw(struct __rte_bpf_load *load)
+{
+	const struct rte_bpf_prm_ex *const prm = &load->prm;
+	struct rte_bpf *bpf;
+	int32_t rc;
+
+	RTE_ASSERT(prm->origin == RTE_BPF_ORIGIN_RAW);
+
+	if (prm->raw.ins == NULL || prm->raw.nb_ins == 0)
+		return -EINVAL;
+
 	bpf = bpf_load(prm);
-	if (bpf == NULL) {
-		rte_errno = ENOMEM;
-		return NULL;
-	}
+	if (bpf == NULL)
+		return -ENOMEM;
 
-	rc = __rte_bpf_validate(bpf);
+	rc = __rte_bpf_validate(&load->prm, &bpf->stack_sz);
 	if (rc == 0) {
 		__rte_bpf_jit(bpf);
 		if (mprotect(bpf, bpf->sz, PROT_READ) != 0)
@@ -119,9 +126,138 @@ rte_bpf_load(const struct rte_bpf_prm *prm)
 
 	if (rc != 0) {
 		rte_bpf_destroy(bpf);
+		return rc;
+	}
+
+	load->bpf = bpf;
+	return 0;
+}
+
+RTE_EXPORT_SYMBOL(rte_bpf_load)
+struct rte_bpf *
+rte_bpf_load(const struct rte_bpf_prm *prm)
+{
+	return rte_bpf_load_ex(&(struct rte_bpf_prm_ex){
+			.sz = sizeof(struct rte_bpf_prm_ex),
+			.origin = RTE_BPF_ORIGIN_RAW,
+			.raw.ins = prm->ins,
+			.raw.nb_ins = prm->nb_ins,
+			.xsym = prm->xsym,
+			.nb_xsym = prm->nb_xsym,
+			.prog_arg = prm->prog_arg,
+		});
+}
+
+RTE_EXPORT_SYMBOL(rte_bpf_elf_load)
+struct rte_bpf *
+rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname,
+	const char *sname)
+{
+	return rte_bpf_load_ex(&(struct rte_bpf_prm_ex){
+			.sz = sizeof(struct rte_bpf_prm_ex),
+			.origin = RTE_BPF_ORIGIN_ELF_FILE,
+			.elf_file.path = fname,
+			.elf_file.section = sname,
+			.xsym = prm->xsym,
+			.nb_xsym = prm->nb_xsym,
+			.prog_arg = prm->prog_arg,
+		});
+}
+
+/*
+ * Check extensible opts for invalid size or non-zero unsupported members.
+ *
+ * This code provides forward compatibility with applications compiled against
+ * a newer version of this library. `opts_sz` is the size of struct `opts` in
+ * the version used for compiling the application, read from the member `sz`;
+ * `type_sz` is the size of the same struct in the version used for compiling
+ * the library.
+ *
+ * If new fields were added to the struct in the application version, `opts_sz`
+ * will be greater than `type_sz`. In this case we verify that all the bytes we
+ * do not know how to interpret are zero, i.e. that no new features are being
+ * used.
+ *
+ * This function can be used to check any struct following this convention.
+ */
+static bool
+opts_valid(const void *opts, size_t opts_sz, size_t type_sz)
+{
+	if (opts == NULL)
+		return true;
+
+	if (opts_sz < sizeof(opts_sz))
+		/* Size of the struct is too small even for the sz member. */
+		return false;
+
+	/* Verify that all extra bytes are zeroed. */
+	for (size_t offset = type_sz; offset < opts_sz; ++offset)
+		if (((const char *)opts)[offset] != 0)
+			return false;
+
+	return true;
+}
+
+static int
+load_try(struct __rte_bpf_load *load, const struct rte_bpf_prm_ex *app_prm)
+{
+	int rc;
+
+	if (app_prm == NULL || !opts_valid(app_prm, app_prm->sz, sizeof(load->prm)))
+		return -EINVAL;
+
+	/*
+	 * Convert extensible prm of application size to the size known to us.
+	 *
+	 * This code provides compatibility with applications compiled against
+	 * a different version of this library. `app_prm->sz` is the size of
+	 * struct `rte_bpf_prm_ex` in the version used for compiling the
+	 * application; `sizeof(load->prm)` is the size of the same struct in
+	 * the version used for compiling the library.
+	 *
+	 * We copy only the fields known to the application and leave the rest
+	 * filled with zeroes, so any features not known to the application
+	 * keep their backward-compatible default behaviour.
+	 */
+	memcpy(&load->prm, app_prm, RTE_MIN(app_prm->sz, sizeof(load->prm)));
+	load->prm.sz = sizeof(load->prm);
+
+	rc = bpf_check_xsyms(load->prm.xsym, load->prm.nb_xsym);
+
+	/* Convert prm origin to raw unless it already is. */
+	switch (load->prm.origin) {
+	case RTE_BPF_ORIGIN_RAW:
+		break;
+	case RTE_BPF_ORIGIN_ELF_FILE:
+		rc = rc < 0 ? rc : __rte_bpf_load_elf_file(load);
+		rc = rc < 0 ? rc : __rte_bpf_load_elf_code(load);
+		break;
+	default:
+		rc = rc < 0 ? rc : -EINVAL;
+	}
+
+	/* Now that the program is in raw form, load it as such. */
+	rc = rc < 0 ? rc : bpf_load_raw(load);
+
+	return rc;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_load_ex, 26.11)
+struct rte_bpf *
+rte_bpf_load_ex(const struct rte_bpf_prm_ex *prm)
+{
+	struct __rte_bpf_load load = { .elf_fd = -1 };
+
+	const int rc = load_try(&load, prm);
+
+	__rte_bpf_load_elf_cleanup(&load);
+
+	RTE_ASSERT((rc < 0) == (load.bpf == NULL));
+
+	if (rc < 0) {
 		rte_errno = -rc;
 		return NULL;
 	}
 
-	return bpf;
+	return load.bpf;
 }
diff --git a/lib/bpf/bpf_load_elf.c b/lib/bpf/bpf_load_elf.c
index 2390823cbf..4ae7492351 100644
--- a/lib/bpf/bpf_load_elf.c
+++ b/lib/bpf/bpf_load_elf.c
@@ -2,6 +2,13 @@
  * Copyright(c) 2018 Intel Corporation
  */
 
+#include "bpf_impl.h"
+
+#include <errno.h>
+
+#ifdef RTE_LIBRTE_BPF_ELF
+
+#include <inttypes.h>
 #include <stdarg.h>
 #include <stdio.h>
 #include <string.h>
@@ -26,8 +33,6 @@
 #include <rte_byteorder.h>
 #include <rte_errno.h>
 
-#include "bpf_impl.h"
-
 /* To overcome compatibility issue */
 #ifndef EM_BPF
 #define	EM_BPF	247
@@ -56,7 +61,7 @@ bpf_find_xsym(const char *sn, enum rte_bpf_xtype type,
  */
 static int
 resolve_xsym(const char *sn, size_t ofs, struct ebpf_insn *ins, size_t ins_sz,
-	const struct rte_bpf_prm *prm)
+	const struct rte_bpf_prm_ex *prm)
 {
 	uint32_t idx, fidx;
 	enum rte_bpf_xtype type;
@@ -183,7 +188,7 @@ find_elf_code(Elf *elf, const char *section, Elf_Data **psd, size_t *pidx)
  */
 static int
 process_reloc(Elf *elf, size_t sym_idx, Elf64_Rel *re, size_t re_sz,
-	struct ebpf_insn *ins, size_t ins_sz, const struct rte_bpf_prm *prm)
+	struct ebpf_insn *ins, size_t ins_sz, const struct rte_bpf_prm_ex *prm)
 {
 	int32_t rc;
 	uint32_t i, n;
@@ -232,8 +237,8 @@ process_reloc(Elf *elf, size_t sym_idx, Elf64_Rel *re, size_t re_sz,
  * and update bpf code.
  */
 static int
-elf_reloc_code(Elf *elf, Elf_Data *ed, size_t sidx,
-	const struct rte_bpf_prm *prm)
+elf_reloc_code(Elf *elf, struct ebpf_insn *ins, size_t ins_sz, size_t sidx,
+	const struct rte_bpf_prm_ex *prm)
 {
 	Elf64_Rel *re;
 	Elf_Scn *sc;
@@ -256,7 +261,7 @@ elf_reloc_code(Elf *elf, Elf_Data *ed, size_t sidx,
 					sd->d_size % sizeof(re[0]) != 0)
 				return -EINVAL;
 			rc = process_reloc(elf, sh->sh_link,
-				sd->d_buf, sd->d_size, ed->d_buf, ed->d_size,
+				sd->d_buf, sd->d_size, ins, ins_sz,
 				prm);
 		}
 	}
@@ -264,72 +269,96 @@ elf_reloc_code(Elf *elf, Elf_Data *ed, size_t sidx,
 	return rc;
 }
 
-static struct rte_bpf *
-bpf_load_elf(const struct rte_bpf_prm *prm, int32_t fd, const char *section)
+void
+__rte_bpf_load_elf_cleanup(struct __rte_bpf_load *load)
 {
-	Elf *elf;
-	Elf_Data *sd;
-	size_t sidx;
-	int32_t rc;
-	struct rte_bpf *bpf;
-	struct rte_bpf_prm np;
+	elf_end(load->elf);
 
-	elf_version(EV_CURRENT);
-	elf = elf_begin(fd, ELF_C_READ, NULL);
+	if (load->elf_fd >= 0 && close(load->elf_fd) < 0) {
+		const int close_errno = errno;
+		RTE_BPF_LOG_FUNC_LINE(ERR, "error %d closing: %s",
+			close_errno, strerror(close_errno));
+	}
+}
 
-	rc = find_elf_code(elf, section, &sd, &sidx);
-	if (rc == 0)
-		rc = elf_reloc_code(elf, sd, sidx, prm);
+int
+__rte_bpf_load_elf_file(struct __rte_bpf_load *load)
+{
+	const struct rte_bpf_prm_ex *const prm = &load->prm;
 
-	if (rc == 0) {
-		np = prm[0];
-		np.ins = sd->d_buf;
-		np.nb_ins = sd->d_size / sizeof(struct ebpf_insn);
-		bpf = rte_bpf_load(&np);
-	} else {
-		bpf = NULL;
-		rte_errno = -rc;
+	RTE_ASSERT(prm->origin == RTE_BPF_ORIGIN_ELF_FILE);
+
+	if (prm->elf_file.path == NULL || prm->elf_file.section == NULL)
+		return -EINVAL;
+
+	if (elf_version(EV_CURRENT) == EV_NONE)
+		return -ENOTSUP;
+
+	load->elf_fd = open(prm->elf_file.path, O_RDONLY);
+	if (load->elf_fd < 0) {
+		const int open_errno = errno;
+		RTE_BPF_LOG_FUNC_LINE(ERR, "error %d opening \"%s\": %s",
+			open_errno, prm->elf_file.path, strerror(open_errno));
+		return -open_errno;
+	}
+
+	load->elf = elf_begin(load->elf_fd, ELF_C_READ, NULL);
+	if (load->elf == NULL) {
+		const int rc = elf_errno();
+		RTE_BPF_LOG_FUNC_LINE(ERR, "error %d opening ELF \"%s\": %s",
+			rc, prm->elf_file.path, elf_errmsg(rc));
+		return -EINVAL;
 	}
 
-	elf_end(elf);
-	return bpf;
+	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_bpf_elf_load)
-struct rte_bpf *
-rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname,
-	const char *sname)
+int
+__rte_bpf_load_elf_code(struct __rte_bpf_load *load)
 {
-	int32_t fd, rc;
-	struct rte_bpf *bpf;
+	struct rte_bpf_prm_ex *const prm = &load->prm;
+	Elf_Data *sd;
+	size_t sidx;
+	int rc;
 
-	if (prm == NULL || fname == NULL || sname == NULL) {
-		rte_errno = EINVAL;
-		return NULL;
-	}
+	rc = find_elf_code(load->elf, prm->elf_file.section, &sd, &sidx);
+	if (rc < 0)
+		return rc;
 
-	fd = open(fname, O_RDONLY);
-	if (fd < 0) {
-		rc = errno;
-		RTE_BPF_LOG_LINE(ERR, "%s(%s) error code: %d(%s)",
-			__func__, fname, rc, strerror(rc));
-		rte_errno = EINVAL;
-		return NULL;
-	}
+	prm->origin = RTE_BPF_ORIGIN_RAW;
+	prm->raw.ins = sd->d_buf;
+	prm->raw.nb_ins = sd->d_size / sizeof(struct ebpf_insn);
 
-	bpf = bpf_load_elf(prm, fd, sname);
-	close(fd);
+	rc = elf_reloc_code(load->elf, sd->d_buf, sd->d_size, sidx, prm);
+	if (rc < 0)
+		return -EINVAL;
 
-	if (bpf == NULL) {
-		RTE_BPF_LOG_LINE(ERR,
-			"%s(fname=\"%s\", sname=\"%s\") failed, "
-			"error code: %d",
-			__func__, fname, sname, rte_errno);
-		return NULL;
-	}
+	return 0;
+}
+
+#else /* RTE_LIBRTE_BPF_ELF */
+
+void
+__rte_bpf_load_elf_cleanup(struct __rte_bpf_load *load)
+{
+	RTE_ASSERT(load->elf == NULL);
+	RTE_ASSERT(load->elf_fd < 0);
+}
 
-	RTE_BPF_LOG_LINE(INFO, "%s(fname=\"%s\", sname=\"%s\") "
-		"successfully creates %p(jit={.func=%p,.sz=%zu});",
-		__func__, fname, sname, bpf, bpf->jit.func, bpf->jit.sz);
-	return bpf;
+int
+__rte_bpf_load_elf_file(struct __rte_bpf_load *load)
+{
+	RTE_SET_USED(load);
+	RTE_BPF_LOG_FUNC_LINE(ERR, "not supported, rebuild with libelf installed");
+	return -ENOTSUP;
 }
+
+int
+__rte_bpf_load_elf_code(struct __rte_bpf_load *load)
+{
+	RTE_SET_USED(load);
+	RTE_BPF_LOG_FUNC_LINE(ERR, "not supported, rebuild with libelf installed");
+	return -ENOTSUP;
+}
+
+#endif /* RTE_LIBRTE_BPF_ELF */
diff --git a/lib/bpf/bpf_stub.c b/lib/bpf/bpf_stub.c
index e06e820d83..4c329832c2 100644
--- a/lib/bpf/bpf_stub.c
+++ b/lib/bpf/bpf_stub.c
@@ -10,23 +10,6 @@
  * Contains stubs for unimplemented public API functions
  */
 
-#ifndef RTE_LIBRTE_BPF_ELF
-RTE_EXPORT_SYMBOL(rte_bpf_elf_load)
-struct rte_bpf *
-rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname,
-	const char *sname)
-{
-	if (prm == NULL || fname == NULL || sname == NULL) {
-		rte_errno = EINVAL;
-		return NULL;
-	}
-
-	RTE_BPF_LOG_FUNC_LINE(ERR, "not supported, rebuild with libelf installed");
-	rte_errno = ENOTSUP;
-	return NULL;
-}
-#endif
-
 #ifndef RTE_HAS_LIBPCAP
 RTE_EXPORT_SYMBOL(rte_bpf_convert)
 struct rte_bpf_prm *
diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c
index a7f4f576c9..5bfc59296d 100644
--- a/lib/bpf/bpf_validate.c
+++ b/lib/bpf/bpf_validate.c
@@ -80,7 +80,7 @@ struct evst_pool {
 };
 
 struct bpf_verifier {
-	const struct rte_bpf_prm *prm;
+	const struct rte_bpf_prm_ex *prm;
 	struct inst_node *in;
 	uint64_t stack_sz;
 	uint32_t nb_nodes;
@@ -1837,7 +1837,7 @@ add_edge(struct bpf_verifier *bvf, struct inst_node *node, uint32_t nidx)
 {
 	uint32_t ne;
 
-	if (nidx >= bvf->prm->nb_ins) {
+	if (nidx >= bvf->prm->raw.nb_ins) {
 		RTE_BPF_LOG_FUNC_LINE(ERR,
 			"program boundary violation at pc: %u, next pc: %u",
 			get_node_idx(bvf, node), nidx);
@@ -1946,10 +1946,10 @@ log_unreachable(const struct bpf_verifier *bvf)
 	struct inst_node *node;
 	const struct ebpf_insn *ins;
 
-	for (i = 0; i != bvf->prm->nb_ins; i++) {
+	for (i = 0; i != bvf->prm->raw.nb_ins; i++) {
 
 		node = bvf->in + i;
-		ins = bvf->prm->ins + i;
+		ins = bvf->prm->raw.ins + i;
 
 		if (node->colour == WHITE &&
 				ins->code != (BPF_LD | BPF_IMM | EBPF_DW))
@@ -1966,7 +1966,7 @@ log_loop(const struct bpf_verifier *bvf)
 	uint32_t i, j;
 	struct inst_node *node;
 
-	for (i = 0; i != bvf->prm->nb_ins; i++) {
+	for (i = 0; i != bvf->prm->raw.nb_ins; i++) {
 
 		node = bvf->in + i;
 		if (node->colour != BLACK)
@@ -1998,9 +1998,9 @@ validate(struct bpf_verifier *bvf)
 	const char *err;
 
 	rc = 0;
-	for (i = 0; i < bvf->prm->nb_ins; i++) {
+	for (i = 0; i < bvf->prm->raw.nb_ins; i++) {
 
-		ins = bvf->prm->ins + i;
+		ins = bvf->prm->raw.ins + i;
 		node = bvf->in + i;
 
 		err = check_syntax(ins);
@@ -2432,7 +2432,7 @@ evaluate(struct bpf_verifier *bvf)
 
 	bvf->evst->rv[EBPF_REG_10] = rvfp;
 
-	ins = bvf->prm->ins;
+	ins = bvf->prm->raw.ins;
 	node = bvf->in;
 	next = node;
 	rc = 0;
@@ -2522,23 +2522,23 @@ evaluate(struct bpf_verifier *bvf)
 }
 
 int
-__rte_bpf_validate(struct rte_bpf *bpf)
+__rte_bpf_validate(const struct rte_bpf_prm_ex *prm, uint32_t *stack_sz)
 {
 	int32_t rc;
 	struct bpf_verifier bvf;
 
 	/* check input argument type, don't allow mbuf ptr on 32-bit */
-	if (bpf->prm.prog_arg.type != RTE_BPF_ARG_RAW &&
-			bpf->prm.prog_arg.type != RTE_BPF_ARG_PTR &&
+	if (prm->prog_arg.type != RTE_BPF_ARG_RAW &&
+			prm->prog_arg.type != RTE_BPF_ARG_PTR &&
 			(sizeof(uint64_t) != sizeof(uintptr_t) ||
-			bpf->prm.prog_arg.type != RTE_BPF_ARG_PTR_MBUF)) {
+			prm->prog_arg.type != RTE_BPF_ARG_PTR_MBUF)) {
 		RTE_BPF_LOG_FUNC_LINE(ERR, "unsupported argument type");
 		return -ENOTSUP;
 	}
 
 	memset(&bvf, 0, sizeof(bvf));
-	bvf.prm = &bpf->prm;
-	bvf.in = calloc(bpf->prm.nb_ins, sizeof(bvf.in[0]));
+	bvf.prm = prm;
+	bvf.in = calloc(prm->raw.nb_ins, sizeof(bvf.in[0]));
 	if (bvf.in == NULL)
 		return -ENOMEM;
 
@@ -2555,11 +2555,11 @@ __rte_bpf_validate(struct rte_bpf *bpf)
 
 	/* copy collected info */
 	if (rc == 0) {
-		bpf->stack_sz = bvf.stack_sz;
+		*stack_sz = bvf.stack_sz;
 
 		/* for LD_ABS/LD_IND, we'll need extra space on the stack */
 		if (bvf.nb_ldmb_nodes != 0)
-			bpf->stack_sz = RTE_ALIGN_CEIL(bpf->stack_sz +
+			*stack_sz = RTE_ALIGN_CEIL(*stack_sz +
 				sizeof(uint64_t), sizeof(uint64_t));
 	}
 
diff --git a/lib/bpf/meson.build b/lib/bpf/meson.build
index 28df7f469a..4901b6ee14 100644
--- a/lib/bpf/meson.build
+++ b/lib/bpf/meson.build
@@ -19,6 +19,7 @@ sources = files('bpf.c',
         'bpf_dump.c',
         'bpf_exec.c',
         'bpf_load.c',
+        'bpf_load_elf.c',
         'bpf_pkt.c',
         'bpf_stub.c',
         'bpf_validate.c')
@@ -38,10 +39,9 @@ deps += ['mbuf', 'net', 'ethdev']
 dep = dependency('libelf', required: false, method: 'pkg-config')
 if dep.found()
     dpdk_conf.set('RTE_LIBRTE_BPF_ELF', 1)
-    sources += files('bpf_load_elf.c')
     ext_deps += dep
 else
-    warning('libelf is missing, rte_bpf_elf_load API will be disabled')
+    warning('libelf is missing, ELF API will be disabled')
 endif
 
 if dpdk_conf.has('RTE_HAS_LIBPCAP')
diff --git a/lib/bpf/rte_bpf.h b/lib/bpf/rte_bpf.h
index 309d84bc51..bf58a41819 100644
--- a/lib/bpf/rte_bpf.h
+++ b/lib/bpf/rte_bpf.h
@@ -86,7 +86,47 @@ struct rte_bpf_xsym {
 };
 
 /**
- * Input parameters for loading eBPF code.
+ * Possible origins of eBPF program code.
+ */
+enum rte_bpf_origin {
+	RTE_BPF_ORIGIN_RAW,		/**< code loaded from raw array */
+	RTE_BPF_ORIGIN_RESERVED,	/**< reserved for cBPF */
+	RTE_BPF_ORIGIN_ELF_FILE,	/**< code loaded from elf_file */
+};
+
+/**
+ * Input parameters for loading eBPF code, extensible version.
+ *
+ * Follows libbpf conventions for extensible structs.
+ */
+struct rte_bpf_prm_ex {
+	size_t sz;  /**< size of this struct for backward compatibility */
+
+	uint32_t flags;  /**< flags controlling eBPF load and other options */
+
+	enum rte_bpf_origin origin;  /**< origin of eBPF program code */
+
+	/** program origin parameters, member in use depends on origin */
+	union {
+		struct {
+			const struct ebpf_insn *ins;  /**< eBPF instructions */
+			uint32_t nb_ins;  /**< number of instructions in ins */
+		} raw;
+		struct {
+			const char *path;  /**< path to the ELF file */
+			const char *section;  /**< ELF section with the code */
+		} elf_file;
+	};
+
+	const struct rte_bpf_xsym *xsym;
+	/**< array of external symbols that eBPF code is allowed to reference */
+	uint32_t nb_xsym;  /**< number of elements in xsym */
+
+	struct rte_bpf_arg prog_arg;  /**< input arg description */
+};
+
+/**
+ * Input parameters for loading eBPF code, legacy version.
  */
 struct rte_bpf_prm {
 	const struct ebpf_insn *ins; /**< array of eBPF instructions */
@@ -116,6 +156,32 @@ struct rte_bpf;
 void
 rte_bpf_destroy(struct rte_bpf *bpf);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.
+ *
+ * Create a new eBPF execution context, load code from specified origin into it.
+ *
+ * @param prm
+ *   Parameters used to create and initialise the BPF execution context.
+ *
+ *   Member sz must be set to the struct size as known to the application.
+ *   If it exceeds the size known to the library and the extra part contains
+ *   non-zero bytes, the parameter is rejected. If it is smaller than the size
+ *   known to the library, defaults are used for members that are not present.
+ * @return
+ *   BPF handle that is used in future BPF operations,
+ *   or NULL on error, with error code set in rte_errno.
+ *   Possible rte_errno errors include:
+ *   - EINVAL  - invalid parameter passed to function
+ *   - ENOMEM  - can't reserve enough memory
+ *   - ENOTSUP - requested feature is not supported (e.g. no libelf to load ELF)
+ */
+__rte_experimental
+struct rte_bpf *
+rte_bpf_load_ex(const struct rte_bpf_prm_ex *prm)
+	__rte_malloc __rte_dealloc(rte_bpf_destroy, 1);
+
 /**
  * Create a new eBPF execution context and load given BPF code into it.
  *
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v2 03/10] bpf: support up to 5 arguments
  2026-05-14  9:37 ` [PATCH v2 " Marat Khalili
  2026-05-14  9:37   ` [PATCH v2 01/10] bpf: make logging prefixes more consistent Marat Khalili
  2026-05-14  9:37   ` [PATCH v2 02/10] bpf: introduce extensible load API Marat Khalili
@ 2026-05-14  9:37   ` Marat Khalili
  2026-05-14  9:37   ` [PATCH v2 04/10] bpf: add cBPF origin to rte_bpf_load_ex Marat Khalili
                     ` (6 subsequent siblings)
  9 siblings, 0 replies; 23+ messages in thread
From: Marat Khalili @ 2026-05-14  9:37 UTC (permalink / raw)
  To: Konstantin Ananyev, Wathsala Vithanage; +Cc: dev

When using rte_bpf_load_ex, allow up to 5 arguments for a BPF program.
This is particularly useful for callbacks and other internal functions.

Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
 lib/bpf/bpf.c           |  32 ++++++++++-
 lib/bpf/bpf_exec.c      | 119 +++++++++++++++++++++++++++++++++++++++
 lib/bpf/bpf_impl.h      |   2 +-
 lib/bpf/bpf_jit_arm64.c |   2 +-
 lib/bpf/bpf_jit_x86.c   |   2 +-
 lib/bpf/bpf_load.c      |   6 +-
 lib/bpf/bpf_validate.c  |  45 +++++++++++----
 lib/bpf/rte_bpf.h       | 121 ++++++++++++++++++++++++++++++++++++++--
 8 files changed, 306 insertions(+), 23 deletions(-)

diff --git a/lib/bpf/bpf.c b/lib/bpf/bpf.c
index 5239b3e11e..67dededd9a 100644
--- a/lib/bpf/bpf.c
+++ b/lib/bpf/bpf.c
@@ -16,8 +16,8 @@ void
 rte_bpf_destroy(struct rte_bpf *bpf)
 {
 	if (bpf != NULL) {
-		if (bpf->jit.func != NULL)
-			munmap(bpf->jit.func, bpf->jit.sz);
+		if (bpf->jit.raw != NULL)
+			munmap(bpf->jit.raw, bpf->jit.sz);
 		munmap(bpf, bpf->sz);
 	}
 }
@@ -29,7 +29,33 @@ rte_bpf_get_jit(const struct rte_bpf *bpf, struct rte_bpf_jit *jit)
 	if (bpf == NULL || jit == NULL)
 		return -EINVAL;
 
-	jit[0] = bpf->jit;
+	if (bpf->prm.nb_prog_arg != 1) {
+		RTE_BPF_LOG_LINE(ERR,
+			"this program takes %u arguments, use rte_bpf_get_jit_ex",
+			bpf->prm.nb_prog_arg);
+		return -EINVAL;
+	}
+
+	*jit = (struct rte_bpf_jit) {
+		.func = bpf->jit.raw,
+		.sz = bpf->jit.sz,
+	};
+	return 0;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_get_jit_ex, 26.11)
+int
+rte_bpf_get_jit_ex(const struct rte_bpf *bpf, struct rte_bpf_jit_ex *jit)
+{
+	if (bpf == NULL || jit == NULL)
+		return -EINVAL;
+
+	if (bpf->jit.raw == NULL) {
+		RTE_BPF_LOG_LINE(ERR, "no JIT-compiled version");
+		return -ENOENT;
+	}
+
+	*jit = bpf->jit;
 	return 0;
 }
 
diff --git a/lib/bpf/bpf_exec.c b/lib/bpf/bpf_exec.c
index e4668ba10b..350a216ae5 100644
--- a/lib/bpf/bpf_exec.c
+++ b/lib/bpf/bpf_exec.c
@@ -502,6 +502,10 @@ rte_bpf_exec_burst(const struct rte_bpf *bpf, void *ctx[], uint64_t rc[],
 	uint64_t reg[EBPF_REG_NUM];
 	uint64_t stack[MAX_BPF_STACK_SIZE / sizeof(uint64_t)];
 
+	if (bpf->prm.nb_prog_arg != 1)
+		/* Use rte_bpf_exec_burst_ex with this program. */
+		return -EINVAL;
+
 	for (i = 0; i != num; i++) {
 
 		reg[EBPF_REG_1] = (uintptr_t)ctx[i];
@@ -513,6 +517,110 @@ rte_bpf_exec_burst(const struct rte_bpf *bpf, void *ctx[], uint64_t rc[],
 	return i;
 }
 
+static uint32_t
+exec_vm_burst_ex(const struct rte_bpf *bpf, const struct rte_bpf_prog_ctx *ctx,
+	uint64_t rc[], uint32_t num)
+{
+	uint32_t i;
+	uint64_t reg[EBPF_REG_NUM];
+	uint64_t stack[MAX_BPF_STACK_SIZE / sizeof(uint64_t)];
+
+	for (i = 0; i != num; i++) {
+
+		switch (bpf->prm.nb_prog_arg) {
+		case 5:
+			reg[EBPF_REG_5] = ctx[i].arg[4].u64;
+			/* FALLTHROUGH */
+		case 4:
+			reg[EBPF_REG_4] = ctx[i].arg[3].u64;
+			/* FALLTHROUGH */
+		case 3:
+			reg[EBPF_REG_3] = ctx[i].arg[2].u64;
+			/* FALLTHROUGH */
+		case 2:
+			reg[EBPF_REG_2] = ctx[i].arg[1].u64;
+			/* FALLTHROUGH */
+		case 1:
+			reg[EBPF_REG_1] = ctx[i].arg[0].u64;
+			/* FALLTHROUGH */
+		case 0:
+			break;
+		}
+
+		reg[EBPF_REG_10] = (uintptr_t)(stack + RTE_DIM(stack));
+
+		rc[i] = bpf_exec(bpf, reg);
+	}
+
+	return i;
+}
+
+static uint32_t
+exec_jit_burst_ex(const struct rte_bpf *bpf, const struct rte_bpf_prog_ctx *ctx,
+	uint64_t rc[], uint32_t num)
+{
+	uint32_t i;
+	const struct rte_bpf_jit_ex jit = bpf->jit;
+
+	/*
+	 * Fast path: assumes application pre-validated RTE_BPF_EXEC_FLAG_JIT
+	 * and successful JIT generation. No explicit NULL checks here.
+	 */
+	switch (bpf->prm.nb_prog_arg) {
+	case 0:
+		for (i = 0; i != num; i++)
+			rc[i] = jit.func0();
+		break;
+	case 1:
+		for (i = 0; i != num; i++) {
+			const union rte_bpf_func_arg *const arg = ctx[i].arg;
+			rc[i] = jit.func1(arg[0]);
+		}
+		break;
+	case 2:
+		for (i = 0; i != num; i++) {
+			const union rte_bpf_func_arg *const arg = ctx[i].arg;
+			rc[i] = jit.func2(arg[0], arg[1]);
+		}
+		break;
+	case 3:
+		for (i = 0; i != num; i++) {
+			const union rte_bpf_func_arg *const arg = ctx[i].arg;
+			rc[i] = jit.func3(arg[0], arg[1], arg[2]);
+		}
+		break;
+	case 4:
+		for (i = 0; i != num; i++) {
+			const union rte_bpf_func_arg *const arg = ctx[i].arg;
+			rc[i] = jit.func4(arg[0], arg[1], arg[2], arg[3]);
+		}
+		break;
+	case 5:
+		for (i = 0; i != num; i++) {
+			const union rte_bpf_func_arg *const arg = ctx[i].arg;
+			rc[i] = jit.func5(arg[0], arg[1], arg[2], arg[3], arg[4]);
+		}
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return i;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_exec_burst_ex, 26.11)
+uint32_t
+rte_bpf_exec_burst_ex(const struct rte_bpf *bpf, const struct rte_bpf_prog_ctx *ctx,
+	uint64_t rc[], uint32_t num, uint64_t flags)
+{
+	if ((flags & ~RTE_BPF_EXEC_FLAG_MASK) != 0)
+		return -EINVAL;
+
+	return (flags & RTE_BPF_EXEC_FLAG_JIT) != 0 ?
+		exec_jit_burst_ex(bpf, ctx, rc, num) :
+		exec_vm_burst_ex(bpf, ctx, rc, num);
+}
+
 RTE_EXPORT_SYMBOL(rte_bpf_exec)
 uint64_t
 rte_bpf_exec(const struct rte_bpf *bpf, void *ctx)
@@ -522,3 +630,14 @@ rte_bpf_exec(const struct rte_bpf *bpf, void *ctx)
 	rte_bpf_exec_burst(bpf, &ctx, &rc, 1);
 	return rc;
 }
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_exec_ex, 26.11)
+uint64_t
+rte_bpf_exec_ex(const struct rte_bpf *bpf, const struct rte_bpf_prog_ctx *ctx,
+		uint64_t flags)
+{
+	uint64_t rc;
+
+	rte_bpf_exec_burst_ex(bpf, ctx, &rc, 1, flags);
+	return rc;
+}
diff --git a/lib/bpf/bpf_impl.h b/lib/bpf/bpf_impl.h
index 1cee109bc9..4a98b33730 100644
--- a/lib/bpf/bpf_impl.h
+++ b/lib/bpf/bpf_impl.h
@@ -12,7 +12,7 @@
 
 struct rte_bpf {
 	struct rte_bpf_prm_ex prm;
-	struct rte_bpf_jit jit;
+	struct rte_bpf_jit_ex jit;
 	size_t sz;
 	uint32_t stack_sz;
 };
diff --git a/lib/bpf/bpf_jit_arm64.c b/lib/bpf/bpf_jit_arm64.c
index 9e5e142c13..ba7ae4d680 100644
--- a/lib/bpf/bpf_jit_arm64.c
+++ b/lib/bpf/bpf_jit_arm64.c
@@ -1471,7 +1471,7 @@ __rte_bpf_jit_arm64(struct rte_bpf *bpf)
 	/* Flush the icache */
 	__builtin___clear_cache((char *)ctx.ins, (char *)(ctx.ins + ctx.idx));
 
-	bpf->jit.func = (void *)ctx.ins;
+	bpf->jit.raw = ctx.ins;
 	bpf->jit.sz = size;
 
 	goto finish;
diff --git a/lib/bpf/bpf_jit_x86.c b/lib/bpf/bpf_jit_x86.c
index 6f4235d434..54eb279643 100644
--- a/lib/bpf/bpf_jit_x86.c
+++ b/lib/bpf/bpf_jit_x86.c
@@ -1568,7 +1568,7 @@ __rte_bpf_jit_x86(struct rte_bpf *bpf)
 	if (rc != 0)
 		munmap(st.ins, st.sz);
 	else {
-		bpf->jit.func = (void *)st.ins;
+		bpf->jit.raw = st.ins;
 		bpf->jit.sz = st.sz;
 	}
 
diff --git a/lib/bpf/bpf_load.c b/lib/bpf/bpf_load.c
index 6501841676..c9cbaf6ded 100644
--- a/lib/bpf/bpf_load.c
+++ b/lib/bpf/bpf_load.c
@@ -144,7 +144,8 @@ rte_bpf_load(const struct rte_bpf_prm *prm)
 			.raw.nb_ins = prm->nb_ins,
 			.xsym = prm->xsym,
 			.nb_xsym = prm->nb_xsym,
-			.prog_arg = prm->prog_arg,
+			.prog_arg[0] = prm->prog_arg,
+			.nb_prog_arg = 1,
 		});
 }
 
@@ -160,7 +161,8 @@ rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname,
 			.elf_file.section = sname,
 			.xsym = prm->xsym,
 			.nb_xsym = prm->nb_xsym,
-			.prog_arg = prm->prog_arg,
+			.prog_arg[0] = prm->prog_arg,
+			.nb_prog_arg = 1,
 		});
 }
 
diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c
index 5bfc59296d..bf8a4abb5a 100644
--- a/lib/bpf/bpf_validate.c
+++ b/lib/bpf/bpf_validate.c
@@ -2425,10 +2425,14 @@ evaluate(struct bpf_verifier *bvf)
 		.s = {.min = MAX_BPF_STACK_SIZE, .max = MAX_BPF_STACK_SIZE},
 	};
 
-	bvf->evst->rv[EBPF_REG_1].v = bvf->prm->prog_arg;
-	bvf->evst->rv[EBPF_REG_1].mask = UINT64_MAX;
-	if (bvf->prm->prog_arg.type == RTE_BPF_ARG_RAW)
-		eval_max_bound(bvf->evst->rv + EBPF_REG_1, UINT64_MAX);
+	for (uint32_t pai = 0; pai != bvf->prm->nb_prog_arg; ++pai) {
+		struct bpf_reg_val *reg = &bvf->evst->rv[EBPF_REG_1 + pai];
+
+		reg->v = bvf->prm->prog_arg[pai];
+		reg->mask = UINT64_MAX;
+		if (reg->v.type == RTE_BPF_ARG_RAW)
+			eval_max_bound(reg, UINT64_MAX);
+	}
 
 	bvf->evst->rv[EBPF_REG_10] = rvfp;
 
@@ -2521,21 +2525,42 @@ evaluate(struct bpf_verifier *bvf)
 	return rc;
 }
 
+static bool
+prog_arg_is_valid(const struct rte_bpf_arg *prog_arg)
+{
+	/* check input argument type, don't allow mbuf ptr on 32-bit */
+	if (prog_arg->type != RTE_BPF_ARG_RAW &&
+			prog_arg->type != RTE_BPF_ARG_PTR &&
+			(sizeof(uint64_t) != sizeof(uintptr_t) ||
+			prog_arg->type != RTE_BPF_ARG_PTR_MBUF)) {
+		RTE_BPF_LOG_FUNC_LINE(ERR, "unsupported argument type");
+		return false;
+	}
+
+	return true;
+}
+
 int
 __rte_bpf_validate(const struct rte_bpf_prm_ex *prm, uint32_t *stack_sz)
 {
 	int32_t rc;
 	struct bpf_verifier bvf;
 
-	/* check input argument type, don't allow mbuf ptr on 32-bit */
-	if (prm->prog_arg.type != RTE_BPF_ARG_RAW &&
-			prm->prog_arg.type != RTE_BPF_ARG_PTR &&
-			(sizeof(uint64_t) != sizeof(uintptr_t) ||
-			prm->prog_arg.type != RTE_BPF_ARG_PTR_MBUF)) {
-		RTE_BPF_LOG_FUNC_LINE(ERR, "unsupported argument type");
+	if (prm->nb_prog_arg > EBPF_FUNC_MAX_ARGS) {
+		RTE_BPF_LOG_FUNC_LINE(ERR,
+			"support up to %u arguments, found %u",
+			EBPF_FUNC_MAX_ARGS, prm->nb_prog_arg);
 		return -ENOTSUP;
 	}
 
+	for (uint32_t pai = 0; pai != prm->nb_prog_arg; ++pai)
+		if (!prog_arg_is_valid(&prm->prog_arg[pai])) {
+			RTE_BPF_LOG_FUNC_LINE(ERR,
+				"unsupported argument %u (r%d) type",
+				pai, EBPF_REG_1 + pai);
+			return -ENOTSUP;
+		}
+
 	memset(&bvf, 0, sizeof(bvf));
 	bvf.prm = prm;
 	bvf.in = calloc(prm->raw.nb_ins, sizeof(bvf.in[0]));
diff --git a/lib/bpf/rte_bpf.h b/lib/bpf/rte_bpf.h
index bf58a41819..0e7eaa3c18 100644
--- a/lib/bpf/rte_bpf.h
+++ b/lib/bpf/rte_bpf.h
@@ -25,6 +25,11 @@
 extern "C" {
 #endif
 
+#define RTE_BPF_EXEC_FLAG_JIT	RTE_BIT64(0)	/**< use JIT-compiled version */
+
+/** Mask with all supported `RTE_BPF_EXEC_FLAG_*` flags set. */
+#define RTE_BPF_EXEC_FLAG_MASK  RTE_BPF_EXEC_FLAG_JIT
+
 /**
  * Possible types for function/BPF program arguments.
  */
@@ -122,7 +127,8 @@ struct rte_bpf_prm_ex {
 	/**< array of external symbols that eBPF code is allowed to reference */
 	uint32_t nb_xsym;  /**< number of elements in xsym */
 
-	struct rte_bpf_arg prog_arg;  /**< input arg description */
+	struct rte_bpf_arg prog_arg[EBPF_FUNC_MAX_ARGS];  /**< program arguments */
+	uint32_t nb_prog_arg;  /**< program argument count */
 };
 
 /**
@@ -138,13 +144,49 @@ struct rte_bpf_prm {
 };
 
 /**
- * Information about compiled into native ISA eBPF code.
+ * Information about eBPF code compiled into native ISA, accepting 1 argument.
  */
 struct rte_bpf_jit {
 	uint64_t (*func)(void *); /**< JIT-ed native code */
 	size_t sz;                /**< size of JIT-ed code */
 };
 
+union rte_bpf_func_arg {
+	uint64_t u64;
+	void *ptr;
+};
+
+typedef uint64_t (*rte_bpf_jit_func0_t)(void);
+typedef uint64_t (*rte_bpf_jit_func1_t)(union rte_bpf_func_arg);
+typedef uint64_t (*rte_bpf_jit_func2_t)(union rte_bpf_func_arg, union rte_bpf_func_arg);
+typedef uint64_t (*rte_bpf_jit_func3_t)(union rte_bpf_func_arg, union rte_bpf_func_arg,
+	union rte_bpf_func_arg);
+typedef uint64_t (*rte_bpf_jit_func4_t)(union rte_bpf_func_arg, union rte_bpf_func_arg,
+	union rte_bpf_func_arg, union rte_bpf_func_arg);
+typedef uint64_t (*rte_bpf_jit_func5_t)(union rte_bpf_func_arg, union rte_bpf_func_arg,
+	union rte_bpf_func_arg, union rte_bpf_func_arg, union rte_bpf_func_arg);
+
+/**
+ * JIT-ed native code, member depends on number of program arguments.
+ */
+struct rte_bpf_jit_ex {
+	union {
+		void *raw;
+		rte_bpf_jit_func0_t func0;  /* nullary function */
+		rte_bpf_jit_func1_t func1;  /* unary function */
+		rte_bpf_jit_func2_t func2;  /* binary function */
+		rte_bpf_jit_func3_t func3;  /* ternary function */
+		rte_bpf_jit_func4_t func4;  /* quaternary function */
+		rte_bpf_jit_func5_t func5;  /* quinary function */
+	};
+	size_t sz;
+};
+
+/** Tuple of eBPF program arguments. */
+struct rte_bpf_prog_ctx {
+	union rte_bpf_func_arg arg[EBPF_FUNC_MAX_ARGS];
+};
+
 struct rte_bpf;
 
 /**
@@ -224,7 +266,7 @@ rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname,
 	__rte_malloc __rte_dealloc(rte_bpf_destroy, 1);
 
 /**
- * Execute given BPF bytecode.
+ * Execute given BPF bytecode accepting 1 argument.
  *
  * @param bpf
  *   handle for the BPF code to execute.
@@ -237,7 +279,29 @@ uint64_t
 rte_bpf_exec(const struct rte_bpf *bpf, void *ctx);
 
 /**
- * Execute given BPF bytecode over a set of input contexts.
+ * @warning
+ * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.
+ *
+ * Execute given BPF bytecode accepting any number of arguments.
+ *
+ * @param bpf
+ *   handle for the BPF code to execute.
+ * @param ctx
+ *   program arguments tuple.
+ * @param flags
+ *   bitwise OR of `RTE_BPF_EXEC_FLAG_*` values controlling execution.
+ *   Flag RTE_BPF_EXEC_FLAG_JIT requires the presence of a JIT-compiled version
+ *   (which can be checked with rte_bpf_get_jit_ex).
+ * @return
+ *   BPF execution return value.
+ */
+__rte_experimental
+uint64_t
+rte_bpf_exec_ex(const struct rte_bpf *bpf, const struct rte_bpf_prog_ctx *ctx,
+		uint64_t flags);
+
+/**
+ * Execute given BPF bytecode accepting 1 argument over a set of input contexts.
  *
  * @param bpf
  *   handle for the BPF code to execute.
@@ -255,7 +319,35 @@ rte_bpf_exec_burst(const struct rte_bpf *bpf, void *ctx[], uint64_t rc[],
 		uint32_t num);
 
 /**
- * Provide information about natively compiled code for given BPF handle.
+ * @warning
+ * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.
+ *
+ * Execute given BPF program accepting any number of arguments over a set of
+ * input contexts.
+ *
+ * @param bpf
+ *   handle for the BPF code to execute.
+ * @param ctx
+ *   pointer to array of program argument tuples, can be NULL for nullary programs.
+ * @param rc
+ *   array of return values (one per input).
+ * @param num
+ *   number of executions; number of elements in the ctx and rc arrays.
+ * @param flags
+ *   bitwise OR of `RTE_BPF_EXEC_FLAG_*` values controlling execution.
+ *   Flag RTE_BPF_EXEC_FLAG_JIT requires the presence of a JIT-compiled version
+ *   (which can be checked with rte_bpf_get_jit_ex).
+ * @return
+ *   number of successfully processed inputs.
+ */
+__rte_experimental
+uint32_t
+rte_bpf_exec_burst_ex(const struct rte_bpf *bpf, const struct rte_bpf_prog_ctx *ctx,
+		uint64_t rc[], uint32_t num, uint64_t flags);
+
+/**
+ * Provide information about natively compiled code for given BPF program
+ * accepting 1 argument.
  *
  * @param bpf
  *   handle for the BPF code.
@@ -268,6 +360,25 @@ rte_bpf_exec_burst(const struct rte_bpf *bpf, void *ctx[], uint64_t rc[],
 int
 rte_bpf_get_jit(const struct rte_bpf *bpf, struct rte_bpf_jit *jit);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.
+ *
+ * Get function JIT-compiled from the BPF program.
+ *
+ * @param bpf
+ *   handle for the BPF code.
+ * @param jit
+ *   pointer to the struct rte_bpf_jit_ex.
+ * @return
+ *   - -EINVAL if the parameters are invalid.
+ *   - -ENOENT if there is no JIT-compiled version.
+ *   - Zero if operation completed successfully.
+ */
+__rte_experimental
+int
+rte_bpf_get_jit_ex(const struct rte_bpf *bpf, struct rte_bpf_jit_ex *jit);
+
 /**
 * Dump eBPF instructions to a file.
  *
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v2 04/10] bpf: add cBPF origin to rte_bpf_load_ex
  2026-05-14  9:37 ` [PATCH v2 " Marat Khalili
                     ` (2 preceding siblings ...)
  2026-05-14  9:37   ` [PATCH v2 03/10] bpf: support up to 5 arguments Marat Khalili
@ 2026-05-14  9:37   ` Marat Khalili
  2026-05-14  9:37   ` [PATCH v2 05/10] bpf: support rte_bpf_prm_ex with port callbacks Marat Khalili
                     ` (5 subsequent siblings)
  9 siblings, 0 replies; 23+ messages in thread
From: Marat Khalili @ 2026-05-14  9:37 UTC (permalink / raw)
  To: Konstantin Ananyev; +Cc: dev

Add cBPF origin to rte_bpf_load_ex to allow loading PCAP filters and
other cBPF code through the unified interface.
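
For illustration, loading a pcap-compiled filter through the new origin could
look roughly like the sketch below (the helper name load_filter is
hypothetical; the initializer mirrors the test code added later in this
series):

```c
#include <pcap/pcap.h>
#include <rte_mbuf.h>
#include <rte_bpf.h>

/* Sketch: load a pcap-compiled cBPF filter via the unified load API. */
static struct rte_bpf *
load_filter(const struct bpf_program *fcode)
{
	struct rte_bpf_prm_ex prm = {
		.sz = sizeof(prm),
		.origin = RTE_BPF_ORIGIN_CBPF,
		.cbpf.ins = fcode->bf_insns,
		.cbpf.nb_ins = fcode->bf_len,
		/* The filter receives a single mbuf pointer argument. */
		.prog_arg[0] = {
			.type = RTE_BPF_ARG_PTR_MBUF,
			.size = sizeof(struct rte_mbuf),
		},
		.nb_prog_arg = 1,
	};

	return rte_bpf_load_ex(&prm);
}
```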

Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
 lib/bpf/bpf_convert.c | 79 +++++++++++++++++++++++++++++++++++++++++--
 lib/bpf/bpf_impl.h    | 11 ++++++
 lib/bpf/bpf_load.c    | 12 ++++++-
 lib/bpf/bpf_stub.c    | 27 ---------------
 lib/bpf/meson.build   | 11 +++---
 lib/bpf/rte_bpf.h     |  8 ++++-
 6 files changed, 111 insertions(+), 37 deletions(-)
 delete mode 100644 lib/bpf/bpf_stub.c

diff --git a/lib/bpf/bpf_convert.c b/lib/bpf/bpf_convert.c
index 953ca80670..c997116c69 100644
--- a/lib/bpf/bpf_convert.c
+++ b/lib/bpf/bpf_convert.c
@@ -9,6 +9,12 @@
  * Copyright (c) 2011 - 2014 PLUMgrid, http://plumgrid.com
  */
 
+#include "bpf_impl.h"
+#include <eal_export.h>
+#include <rte_errno.h>
+
+#ifdef RTE_HAS_LIBPCAP
+
 #include <assert.h>
 #include <errno.h>
 #include <stdbool.h>
@@ -17,17 +23,14 @@
 #include <stdlib.h>
 #include <string.h>
 
-#include <eal_export.h>
 #include <rte_common.h>
 #include <rte_bpf.h>
 #include <rte_log.h>
 #include <rte_malloc.h>
-#include <rte_errno.h>
 
 #include <pcap/pcap.h>
 #include <pcap/bpf.h>
 
-#include "bpf_impl.h"
 #include "bpf_def.h"
 
 #ifndef BPF_MAXINSNS
@@ -572,3 +575,73 @@ rte_bpf_convert(const struct bpf_program *prog)
 
 	return prm;
 }
+
+void
+__rte_bpf_convert_cleanup(struct __rte_bpf_load *load)
+{
+	free(load->ins);
+}
+
+int __rte_bpf_convert(struct __rte_bpf_load *load)
+{
+	struct rte_bpf_prm_ex *const prm = &load->prm;
+	uint32_t nb_ins = 0;
+	int ret;
+
+	RTE_ASSERT(prm->origin == RTE_BPF_ORIGIN_CBPF);
+
+	if (prm->cbpf.ins == NULL || prm->cbpf.nb_ins == 0)
+		return -EINVAL;
+
+	/* 1st pass: calculate the eBPF program length */
+	ret = bpf_convert_filter(prm->cbpf.ins, prm->cbpf.nb_ins, NULL, &nb_ins);
+	if (ret < 0) {
+		RTE_BPF_LOG_FUNC_LINE(ERR, "cannot get eBPF length");
+		return ret;
+	}
+
+	RTE_ASSERT(load->ins == NULL);
+	load->ins = malloc(nb_ins * sizeof(load->ins[0]));
+	if (load->ins == NULL)
+		return -ENOMEM;
+
+	/* 2nd pass: remap cBPF to eBPF instructions */
+	ret = bpf_convert_filter(prm->cbpf.ins, prm->cbpf.nb_ins, load->ins, &nb_ins);
+	if (ret < 0) {
+		RTE_BPF_LOG_FUNC_LINE(ERR, "cannot convert cBPF to eBPF");
+		return ret;
+	}
+
+	prm->origin = RTE_BPF_ORIGIN_RAW;
+	prm->raw.ins = load->ins;
+	prm->raw.nb_ins = nb_ins;
+
+	return 0;
+}
+
+#else /* RTE_HAS_LIBPCAP */
+
+RTE_EXPORT_SYMBOL(rte_bpf_convert)
+struct rte_bpf_prm *
+rte_bpf_convert(const struct bpf_program *prog)
+{
+	RTE_SET_USED(prog);
+	RTE_BPF_LOG_FUNC_LINE(ERR, "not supported, rebuild with libpcap installed");
+	rte_errno = ENOTSUP;
+	return NULL;
+}
+
+void
+__rte_bpf_convert_cleanup(struct __rte_bpf_load *load)
+{
+	RTE_ASSERT(load->ins == NULL);
+}
+
+int __rte_bpf_convert(struct __rte_bpf_load *load)
+{
+	RTE_SET_USED(load);
+	RTE_BPF_LOG_FUNC_LINE(ERR, "not supported, rebuild with libpcap installed");
+	return -ENOTSUP;
+}
+
+#endif /* RTE_HAS_LIBPCAP */
diff --git a/lib/bpf/bpf_impl.h b/lib/bpf/bpf_impl.h
index 4a98b33730..92d03583d9 100644
--- a/lib/bpf/bpf_impl.h
+++ b/lib/bpf/bpf_impl.h
@@ -21,6 +21,9 @@ struct rte_bpf {
 struct __rte_bpf_load {
 	struct rte_bpf_prm_ex prm;
 
+	/* Conversion from cBPF. */
+	struct ebpf_insn *ins;
+
 	/* Loading ELF and applying relocations. */
 	int elf_fd;  /* ELF fd, must be negative (not zero) by default. */
 	void *elf;  /* Using void to avoid dependency on libelf. */
@@ -34,6 +37,14 @@ struct __rte_bpf_load {
  * to avoid potential name conflict with other libraries.
  */
 
+/* Free temporary resources created by converting from cBPF to eBPF. */
+void
+__rte_bpf_convert_cleanup(struct __rte_bpf_load *load);
+
+/* Convert program from cBPF to eBPF. */
+int
+__rte_bpf_convert(struct __rte_bpf_load *load);
+
 /* Free temporary resources created by opening ELF. */
 void
 __rte_bpf_load_elf_cleanup(struct __rte_bpf_load *load);
diff --git a/lib/bpf/bpf_load.c b/lib/bpf/bpf_load.c
index c9cbaf6ded..c3c49ac49b 100644
--- a/lib/bpf/bpf_load.c
+++ b/lib/bpf/bpf_load.c
@@ -230,6 +230,9 @@ load_try(struct __rte_bpf_load *load, const struct rte_bpf_prm_ex *app_prm)
 	switch (load->prm.origin) {
 	case RTE_BPF_ORIGIN_RAW:
 		break;
+	case RTE_BPF_ORIGIN_CBPF:
+		rc = rc < 0 ? rc : __rte_bpf_convert(load);
+		break;
 	case RTE_BPF_ORIGIN_ELF_FILE:
 		rc = rc < 0 ? rc : __rte_bpf_load_elf_file(load);
 		rc = rc < 0 ? rc : __rte_bpf_load_elf_code(load);
@@ -244,6 +247,13 @@ load_try(struct __rte_bpf_load *load, const struct rte_bpf_prm_ex *app_prm)
 	return rc;
 }
 
+static void
+load_cleanup(struct __rte_bpf_load *load)
+{
+	__rte_bpf_convert_cleanup(load);
+	__rte_bpf_load_elf_cleanup(load);
+}
+
 RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_load_ex, 26.11)
 struct rte_bpf *
 rte_bpf_load_ex(const struct rte_bpf_prm_ex *prm)
@@ -252,7 +262,7 @@ rte_bpf_load_ex(const struct rte_bpf_prm_ex *prm)
 
 	const int rc = load_try(&load, prm);
 
-	__rte_bpf_load_elf_cleanup(&load);
+	load_cleanup(&load);
 
 	RTE_ASSERT((rc < 0) == (load.bpf == NULL));
 
diff --git a/lib/bpf/bpf_stub.c b/lib/bpf/bpf_stub.c
deleted file mode 100644
index 4c329832c2..0000000000
--- a/lib/bpf/bpf_stub.c
+++ /dev/null
@@ -1,27 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2018-2021 Intel Corporation
- */
-
-#include "bpf_impl.h"
-#include <eal_export.h>
-#include <rte_errno.h>
-
-/**
- * Contains stubs for unimplemented public API functions
- */
-
-#ifndef RTE_HAS_LIBPCAP
-RTE_EXPORT_SYMBOL(rte_bpf_convert)
-struct rte_bpf_prm *
-rte_bpf_convert(const struct bpf_program *prog)
-{
-	if (prog == NULL) {
-		rte_errno = EINVAL;
-		return NULL;
-	}
-
-	RTE_BPF_LOG_FUNC_LINE(ERR, "not supported, rebuild with libpcap installed");
-	rte_errno = ENOTSUP;
-	return NULL;
-}
-#endif
diff --git a/lib/bpf/meson.build b/lib/bpf/meson.build
index 4901b6ee14..7e8a300e3f 100644
--- a/lib/bpf/meson.build
+++ b/lib/bpf/meson.build
@@ -15,14 +15,16 @@ if arch_subdir == 'x86' and dpdk_conf.get('RTE_ARCH_32')
     subdir_done()
 endif
 
-sources = files('bpf.c',
+sources = files(
+        'bpf.c',
+        'bpf_convert.c',
         'bpf_dump.c',
         'bpf_exec.c',
         'bpf_load.c',
         'bpf_load_elf.c',
         'bpf_pkt.c',
-        'bpf_stub.c',
-        'bpf_validate.c')
+        'bpf_validate.c',
+)
 
 if arch_subdir == 'x86' and dpdk_conf.get('RTE_ARCH_64')
     sources += files('bpf_jit_x86.c')
@@ -45,8 +47,7 @@ else
 endif
 
 if dpdk_conf.has('RTE_HAS_LIBPCAP')
-    sources += files('bpf_convert.c')
     ext_deps += pcap_dep
 else
-    warning('libpcap is missing, rte_bpf_convert API will be disabled')
+    warning('libpcap is missing, cBPF API will be disabled')
 endif
diff --git a/lib/bpf/rte_bpf.h b/lib/bpf/rte_bpf.h
index 0e7eaa3c18..da2bdea7e0 100644
--- a/lib/bpf/rte_bpf.h
+++ b/lib/bpf/rte_bpf.h
@@ -95,10 +95,12 @@ struct rte_bpf_xsym {
  */
 enum rte_bpf_origin {
 	RTE_BPF_ORIGIN_RAW,		/**< code loaded from raw array */
-	RTE_BPF_ORIGIN_RESERVED,	/**< reserved for cBPF */
+	RTE_BPF_ORIGIN_CBPF,		/**< code converted from cBPF */
 	RTE_BPF_ORIGIN_ELF_FILE,	/**< code loaded from elf_file */
 };
 
+struct bpf_insn;
+
 /**
  * Input parameters for loading eBPF code, extensible version.
  *
@@ -117,6 +119,10 @@ struct rte_bpf_prm_ex {
 			const struct ebpf_insn *ins;  /**< eBPF instructions */
 			uint32_t nb_ins;  /**< number of instructions in ins */
 		} raw;
+		struct {
+			const struct bpf_insn *ins;  /**< cBPF instructions */
+			uint32_t nb_ins;  /**< number of instructions in ins */
+		} cbpf;
 		struct {
 			const char *path;  /**< path to the ELF file */
 			const char *section;  /**< ELF section with the code */
-- 
2.43.0



* [PATCH v2 05/10] bpf: support rte_bpf_prm_ex with port callbacks
  2026-05-14  9:37 ` [PATCH v2 " Marat Khalili
                     ` (3 preceding siblings ...)
  2026-05-14  9:37   ` [PATCH v2 04/10] bpf: add cBPF origin to rte_bpf_load_ex Marat Khalili
@ 2026-05-14  9:37   ` Marat Khalili
  2026-05-14  9:37   ` [PATCH v2 06/10] bpf: support loading ELF files from memory Marat Khalili
                     ` (4 subsequent siblings)
  9 siblings, 0 replies; 23+ messages in thread
From: Marat Khalili @ 2026-05-14  9:37 UTC (permalink / raw)
  To: Konstantin Ananyev; +Cc: dev

Introduce new functions to install an already loaded BPF program as an RX
or TX port/queue callback, since the previous API was tied to rte_bpf_prm
and coupled loading with installation.
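
The intended pattern — load once, then install, with ownership staying with
the caller until installation succeeds — can be sketched as follows (the
helper name attach_rx_filter is hypothetical; the functions used are the
ones added by this patch and the extensible load API from earlier in the
series):

```c
#include <rte_errno.h>
#include <rte_bpf.h>
#include <rte_bpf_ethdev.h>

/* Sketch: load a BPF program and install it as an RX callback.
 * On install failure the program still belongs to the caller,
 * so it must be destroyed here. */
static int
attach_rx_filter(uint16_t port, uint16_t queue,
	const struct rte_bpf_prm_ex *prm, uint32_t flags)
{
	struct rte_bpf *bpf = rte_bpf_load_ex(prm);
	if (bpf == NULL)
		return -rte_errno;

	int rc = rte_bpf_eth_rx_install(port, queue, bpf, flags);
	if (rc < 0)
		rte_bpf_destroy(bpf);

	return rc;
}
```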

Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
 lib/bpf/bpf_pkt.c        | 65 ++++++++++++++++++++++++++++++----------
 lib/bpf/rte_bpf_ethdev.h | 54 +++++++++++++++++++++++++++++++++
 2 files changed, 104 insertions(+), 15 deletions(-)

diff --git a/lib/bpf/bpf_pkt.c b/lib/bpf/bpf_pkt.c
index 5007f6aef5..87065e939f 100644
--- a/lib/bpf/bpf_pkt.c
+++ b/lib/bpf/bpf_pkt.c
@@ -490,13 +490,11 @@ rte_bpf_eth_tx_unload(uint16_t port, uint16_t queue)
 }
 
 static int
-bpf_eth_elf_load(struct bpf_eth_cbh *cbh, uint16_t port, uint16_t queue,
-	const struct rte_bpf_prm *prm, const char *fname, const char *sname,
-	uint32_t flags)
+bpf_eth_elf_install(struct bpf_eth_cbh *cbh, uint16_t port, uint16_t queue,
+	struct rte_bpf *bpf, uint32_t flags)
 {
 	int32_t rc;
 	struct bpf_eth_cbi *bc;
-	struct rte_bpf *bpf;
 	rte_rx_callback_fn frx;
 	rte_tx_callback_fn ftx;
 	struct rte_bpf_jit jit;
@@ -504,14 +502,17 @@ bpf_eth_elf_load(struct bpf_eth_cbh *cbh, uint16_t port, uint16_t queue,
 	frx = NULL;
 	ftx = NULL;
 
-	if (prm == NULL || rte_eth_dev_is_valid_port(port) == 0 ||
+	if (bpf == NULL || rte_eth_dev_is_valid_port(port) == 0 ||
 			queue >= RTE_MAX_QUEUES_PER_PORT)
 		return -EINVAL;
 
+	if (bpf->prm.nb_prog_arg != 1)
+		return -EINVAL;
+
 	if (cbh->type == BPF_ETH_RX)
-		frx = select_rx_callback(prm->prog_arg.type, flags);
+		frx = select_rx_callback(bpf->prm.prog_arg[0].type, flags);
 	else
-		ftx = select_tx_callback(prm->prog_arg.type, flags);
+		ftx = select_tx_callback(bpf->prm.prog_arg[0].type, flags);
 
 	if (frx == NULL && ftx == NULL) {
 		RTE_BPF_LOG_LINE(ERR, "%s(%u, %u): no callback selected;",
@@ -519,16 +520,11 @@ bpf_eth_elf_load(struct bpf_eth_cbh *cbh, uint16_t port, uint16_t queue,
 		return -EINVAL;
 	}
 
-	bpf = rte_bpf_elf_load(prm, fname, sname);
-	if (bpf == NULL)
-		return -rte_errno;
-
 	rte_bpf_get_jit(bpf, &jit);
 
 	if ((flags & RTE_BPF_ETH_F_JIT) != 0 && jit.func == NULL) {
 		RTE_BPF_LOG_LINE(ERR, "%s(%u, %u): no JIT generated;",
 			__func__, port, queue);
-		rte_bpf_destroy(bpf);
 		return -ENOTSUP;
 	}
 
@@ -551,7 +547,6 @@ bpf_eth_elf_load(struct bpf_eth_cbh *cbh, uint16_t port, uint16_t queue,
 
 	if (bc->cb == NULL) {
 		rc = -rte_errno;
-		rte_bpf_destroy(bpf);
 		bpf_eth_cbi_cleanup(bc);
 	} else
 		rc = 0;
@@ -564,13 +559,33 @@ int
 rte_bpf_eth_rx_elf_load(uint16_t port, uint16_t queue,
 	const struct rte_bpf_prm *prm, const char *fname, const char *sname,
 	uint32_t flags)
+{
+	struct rte_bpf *bpf;
+	int32_t rc;
+
+	bpf = rte_bpf_elf_load(prm, fname, sname);
+	if (bpf == NULL)
+		return -rte_errno;
+
+	rc = rte_bpf_eth_rx_install(port, queue, bpf, flags);
+
+	if (rc < 0)
+		rte_bpf_destroy(bpf);
+
+	return rc;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_eth_rx_install, 26.11)
+int
+rte_bpf_eth_rx_install(uint16_t port, uint16_t queue, struct rte_bpf *bpf,
+	uint32_t flags)
 {
 	int32_t rc;
 	struct bpf_eth_cbh *cbh;
 
 	cbh = &rx_cbh;
 	rte_spinlock_lock(&cbh->lock);
-	rc = bpf_eth_elf_load(cbh, port, queue, prm, fname, sname, flags);
+	rc = bpf_eth_elf_install(cbh, port, queue, bpf, flags);
 	rte_spinlock_unlock(&cbh->lock);
 
 	return rc;
@@ -581,13 +596,33 @@ int
 rte_bpf_eth_tx_elf_load(uint16_t port, uint16_t queue,
 	const struct rte_bpf_prm *prm, const char *fname, const char *sname,
 	uint32_t flags)
+{
+	struct rte_bpf *bpf;
+	int32_t rc;
+
+	bpf = rte_bpf_elf_load(prm, fname, sname);
+	if (bpf == NULL)
+		return -rte_errno;
+
+	rc = rte_bpf_eth_tx_install(port, queue, bpf, flags);
+
+	if (rc < 0)
+		rte_bpf_destroy(bpf);
+
+	return rc;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_eth_tx_install, 26.11)
+int
+rte_bpf_eth_tx_install(uint16_t port, uint16_t queue, struct rte_bpf *bpf,
+	uint32_t flags)
 {
 	int32_t rc;
 	struct bpf_eth_cbh *cbh;
 
 	cbh = &tx_cbh;
 	rte_spinlock_lock(&cbh->lock);
-	rc = bpf_eth_elf_load(cbh, port, queue, prm, fname, sname, flags);
+	rc = bpf_eth_elf_install(cbh, port, queue, bpf, flags);
 	rte_spinlock_unlock(&cbh->lock);
 
 	return rc;
diff --git a/lib/bpf/rte_bpf_ethdev.h b/lib/bpf/rte_bpf_ethdev.h
index cab8e9e388..e5eaf5b245 100644
--- a/lib/bpf/rte_bpf_ethdev.h
+++ b/lib/bpf/rte_bpf_ethdev.h
@@ -109,6 +109,60 @@ rte_bpf_eth_tx_elf_load(uint16_t port, uint16_t queue,
 	const struct rte_bpf_prm *prm, const char *fname, const char *sname,
 	uint32_t flags);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.
+ *
+ * Install a callback executing the given BPF program on the specified RX port/queue.
+ *
+ * On success the ownership of the program passes to the library:
+ * rte_bpf_eth_rx_unload must be used to unload it, and rte_bpf_destroy must
+ * no longer be called.
+ *
+ * @param port
+ *   The identifier of the ethernet port
+ * @param queue
+ *   The identifier of the RX queue on the given port
+ * @param bpf
+ *   BPF program
+ * @param flags
+ *   Flags that define expected behavior of the loaded filter
+ *   (i.e. jited/non-jited version to use).
+ * @return
+ *   Zero on successful completion or negative error code otherwise.
+ */
+__rte_experimental
+int
+rte_bpf_eth_rx_install(uint16_t port, uint16_t queue, struct rte_bpf *bpf,
+	uint32_t flags);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.
+ *
+ * Install a callback executing the given BPF program on the specified TX port/queue.
+ *
+ * On success the ownership of the program passes to the library:
+ * rte_bpf_eth_tx_unload must be used to unload it, and rte_bpf_destroy must
+ * no longer be called.
+ *
+ * @param port
+ *   The identifier of the ethernet port
+ * @param queue
+ *   The identifier of the TX queue on the given port
+ * @param bpf
+ *   BPF program
+ * @param flags
+ *   Flags that define expected behavior of the loaded filter
+ *   (i.e. jited/non-jited version to use).
+ * @return
+ *   Zero on successful completion or negative error code otherwise.
+ */
+__rte_experimental
+int
+rte_bpf_eth_tx_install(uint16_t port, uint16_t queue, struct rte_bpf *bpf,
+	uint32_t flags);
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.43.0



* [PATCH v2 06/10] bpf: support loading ELF files from memory
  2026-05-14  9:37 ` [PATCH v2 " Marat Khalili
                     ` (4 preceding siblings ...)
  2026-05-14  9:37   ` [PATCH v2 05/10] bpf: support rte_bpf_prm_ex with port callbacks Marat Khalili
@ 2026-05-14  9:37   ` Marat Khalili
  2026-05-14  9:37   ` [PATCH v2 07/10] test/bpf: test loading cBPF directly Marat Khalili
                     ` (3 subsequent siblings)
  9 siblings, 0 replies; 23+ messages in thread
From: Marat Khalili @ 2026-05-14  9:37 UTC (permalink / raw)
  To: Konstantin Ananyev; +Cc: dev

Introduce a new ELF origin, RTE_BPF_ORIGIN_ELF_MEMORY, allowing the caller
to specify a memory area containing an ELF image.
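
A minimal usage sketch (the helper name load_from_image is hypothetical;
xsym and prog_arg setup are elided — see rte_bpf_prm_ex for the full set of
parameters):

```c
#include <stddef.h>
#include <rte_bpf.h>

/* Sketch: load a BPF program from an ELF image already in memory,
 * without writing it to a temporary file first. */
static struct rte_bpf *
load_from_image(const void *image, size_t image_sz, const char *section)
{
	struct rte_bpf_prm_ex prm = {
		.sz = sizeof(prm),
		.origin = RTE_BPF_ORIGIN_ELF_MEMORY,
		.elf_memory.data = image,
		.elf_memory.size = image_sz,
		.elf_memory.section = section,
	};

	return rte_bpf_load_ex(&prm);
}
```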

Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
 lib/bpf/bpf_impl.h     |  5 +++++
 lib/bpf/bpf_load.c     |  4 ++++
 lib/bpf/bpf_load_elf.c | 40 +++++++++++++++++++++++++++++++++++++++-
 lib/bpf/rte_bpf.h      |  6 ++++++
 4 files changed, 54 insertions(+), 1 deletion(-)

diff --git a/lib/bpf/bpf_impl.h b/lib/bpf/bpf_impl.h
index 92d03583d9..14ad772d4b 100644
--- a/lib/bpf/bpf_impl.h
+++ b/lib/bpf/bpf_impl.h
@@ -27,6 +27,7 @@ struct __rte_bpf_load {
 	/* Loading ELF and applying relocations. */
 	int elf_fd;  /* ELF fd, must be negative (not zero) by default. */
 	void *elf;  /* Using void to avoid dependency on libelf. */
+	const char *elf_section;
 
 	/* Value we are going to return, if any. */
 	struct rte_bpf *bpf;
@@ -53,6 +54,10 @@ __rte_bpf_load_elf_cleanup(struct __rte_bpf_load *load);
 int
 __rte_bpf_load_elf_file(struct __rte_bpf_load *load);
 
+/* Open the ELF memory image. */
+int
+__rte_bpf_load_elf_memory(struct __rte_bpf_load *load);
+
 /* Get code from ELF and apply relocations to it. */
 int
 __rte_bpf_load_elf_code(struct __rte_bpf_load *load);
diff --git a/lib/bpf/bpf_load.c b/lib/bpf/bpf_load.c
index c3c49ac49b..b626f6c616 100644
--- a/lib/bpf/bpf_load.c
+++ b/lib/bpf/bpf_load.c
@@ -237,6 +237,10 @@ load_try(struct __rte_bpf_load *load, const struct rte_bpf_prm_ex *app_prm)
 		rc = rc < 0 ? rc : __rte_bpf_load_elf_file(load);
 		rc = rc < 0 ? rc : __rte_bpf_load_elf_code(load);
 		break;
+	case RTE_BPF_ORIGIN_ELF_MEMORY:
+		rc = rc < 0 ? rc : __rte_bpf_load_elf_memory(load);
+		rc = rc < 0 ? rc : __rte_bpf_load_elf_code(load);
+		break;
 	default:
 		rc = rc < 0 ? rc : -EINVAL;
 	}
diff --git a/lib/bpf/bpf_load_elf.c b/lib/bpf/bpf_load_elf.c
index 4ae7492351..80443cb63a 100644
--- a/lib/bpf/bpf_load_elf.c
+++ b/lib/bpf/bpf_load_elf.c
@@ -310,6 +310,36 @@ __rte_bpf_load_elf_file(struct __rte_bpf_load *load)
 		return -EINVAL;
 	}
 
+	load->elf_section = prm->elf_file.section;
+
+	return 0;
+}
+
+int
+__rte_bpf_load_elf_memory(struct __rte_bpf_load *load)
+{
+	const struct rte_bpf_prm_ex *const prm = &load->prm;
+
+	RTE_ASSERT(prm->origin == RTE_BPF_ORIGIN_ELF_MEMORY);
+
+	if (prm->elf_memory.data == NULL || prm->elf_memory.section == NULL)
+		return -EINVAL;
+
+	if (elf_version(EV_CURRENT) == EV_NONE)
+		return -ENOTSUP;
+
+	load->elf = elf_memory(
+		/* Cast away const, we are not going to modify the ELF image. */
+		(char *)(uintptr_t)prm->elf_memory.data, prm->elf_memory.size);
+	if (load->elf == NULL) {
+		const int rc = elf_errno();
+		RTE_BPF_LOG_FUNC_LINE(ERR, "error %d opening ELF image: %s",
+			rc, elf_errmsg(rc));
+		return -EINVAL;
+	}
+
+	load->elf_section = prm->elf_memory.section;
+
 	return 0;
 }
 
@@ -321,7 +351,7 @@ __rte_bpf_load_elf_code(struct __rte_bpf_load *load)
 	size_t sidx;
 	int rc;
 
-	rc = find_elf_code(load->elf, prm->elf_file.section, &sd, &sidx);
+	rc = find_elf_code(load->elf, load->elf_section, &sd, &sidx);
 	if (rc < 0)
 		return rc;
 
@@ -353,6 +383,14 @@ __rte_bpf_load_elf_file(struct __rte_bpf_load *load)
 	return -ENOTSUP;
 }
 
+int
+__rte_bpf_load_elf_memory(struct __rte_bpf_load *load)
+{
+	RTE_SET_USED(load);
+	RTE_BPF_LOG_FUNC_LINE(ERR, "not supported, rebuild with libelf installed");
+	return -ENOTSUP;
+}
+
 int
 __rte_bpf_load_elf_code(struct __rte_bpf_load *load)
 {
diff --git a/lib/bpf/rte_bpf.h b/lib/bpf/rte_bpf.h
index da2bdea7e0..413ccf0497 100644
--- a/lib/bpf/rte_bpf.h
+++ b/lib/bpf/rte_bpf.h
@@ -97,6 +97,7 @@ enum rte_bpf_origin {
 	RTE_BPF_ORIGIN_RAW,		/**< code loaded from raw array */
 	RTE_BPF_ORIGIN_CBPF,		/**< code converted from cbpf */
 	RTE_BPF_ORIGIN_ELF_FILE,	/**< code loaded from elf_file */
+	RTE_BPF_ORIGIN_ELF_MEMORY,	/**< code loaded from elf_memory */
 };
 
 struct bpf_insn;
@@ -127,6 +128,11 @@ struct rte_bpf_prm_ex {
 			const char *path;  /**< path to the ELF file */
 			const char *section;  /**< ELF section with the code */
 		} elf_file;
+		struct {
+			const void *data;  /**< pointer to the ELF image */
+			size_t size;  /**< size of the ELF image */
+			const char *section;  /**< ELF section with the code */
+		} elf_memory;
 	};
 
 	const struct rte_bpf_xsym *xsym;
-- 
2.43.0



* [PATCH v2 07/10] test/bpf: test loading cBPF directly
  2026-05-14  9:37 ` [PATCH v2 " Marat Khalili
                     ` (5 preceding siblings ...)
  2026-05-14  9:37   ` [PATCH v2 06/10] bpf: support loading ELF files from memory Marat Khalili
@ 2026-05-14  9:37   ` Marat Khalili
  2026-05-14  9:37   ` [PATCH v2 08/10] test/bpf: test loading ELF file from memory Marat Khalili
                     ` (2 subsequent siblings)
  9 siblings, 0 replies; 23+ messages in thread
From: Marat Khalili @ 2026-05-14  9:37 UTC (permalink / raw)
  To: Konstantin Ananyev; +Cc: dev

Run the cBPF tests twice: once via rte_bpf_convert, and once using the
RTE_BPF_ORIGIN_CBPF origin of the new rte_bpf_load_ex API.

Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
 app/test/test_bpf.c | 132 +++++++++++++++++++++++++++-----------------
 1 file changed, 80 insertions(+), 52 deletions(-)

diff --git a/app/test/test_bpf.c b/app/test/test_bpf.c
index dd24722450..c8a4ee7550 100644
--- a/app/test/test_bpf.c
+++ b/app/test/test_bpf.c
@@ -4429,13 +4429,59 @@ test_bpf_dump(struct bpf_program *cbf, const struct rte_bpf_prm *prm)
 	}
 }
 
+/* Function loading BPF program from cBPF instructions array. */
+typedef struct rte_bpf *
+(*load_cbpf_program_t)(struct bpf_program *cbpf_program, const char *str);
+
+/* Load BPF program by converting cBPF array to rte_bpf_prm and then opening it. */
+static struct rte_bpf *
+load_cbpf_program_convert(struct bpf_program *cbpf_program, const char *str)
+{
+	struct rte_bpf_prm *prm = NULL;
+	struct rte_bpf *bpf;
+
+	prm = rte_bpf_convert(cbpf_program);
+	if (prm == NULL) {
+		printf("%s@%d: bpf_convert(\"%s\") failed\n",
+			__func__, __LINE__, str);
+		return NULL;
+	}
+
+	printf("bpf convert(\"%s\") produced:\n", str);
+	rte_bpf_dump(stdout, prm->ins, prm->nb_ins);
+
+	printf("%s \"%s\"\n", __func__, str);
+	test_bpf_dump(cbpf_program, prm);
+
+	bpf = rte_bpf_load(prm);
+	rte_free(prm);
+
+	return bpf;
+}
+
+/* Load BPF program by calling rte_bpf_load_ex and specifying cBPF array as the origin. */
+static struct rte_bpf *
+load_cbpf_program_direct(struct bpf_program *cbpf_program, const char *str __rte_unused)
+{
+	return rte_bpf_load_ex(&(struct rte_bpf_prm_ex){
+		.sz = sizeof(struct rte_bpf_prm_ex),
+		.origin = RTE_BPF_ORIGIN_CBPF,
+		.cbpf.ins = cbpf_program->bf_insns,
+		.cbpf.nb_ins = cbpf_program->bf_len,
+		.prog_arg[0] = {
+			.type = RTE_BPF_ARG_PTR_MBUF,
+			.size = sizeof(struct rte_mbuf),
+		},
+		.nb_prog_arg = 1,
+	});
+}
+
 static int
-test_bpf_match(pcap_t *pcap, const char *str,
-	       struct rte_mbuf *mb)
+test_bpf_match(pcap_t *pcap, const char *str, struct rte_mbuf *mb,
+	load_cbpf_program_t load_cbpf_program)
 {
 	struct bpf_program fcode;
-	struct rte_bpf_prm *prm = NULL;
-	struct rte_bpf *bpf = NULL;
+	struct rte_bpf *bpf;
 	int ret = -1;
 	uint64_t rc;
 
@@ -4445,17 +4491,10 @@ test_bpf_match(pcap_t *pcap, const char *str,
 		return -1;
 	}
 
-	prm = rte_bpf_convert(&fcode);
-	if (prm == NULL) {
-		printf("%s@%d: bpf_convert('%s') failed,, error=%d(%s);\n",
-		       __func__, __LINE__, str, rte_errno, strerror(rte_errno));
-		goto error;
-	}
-
-	bpf = rte_bpf_load(prm);
+	bpf = load_cbpf_program(&fcode, str);
 	if (bpf == NULL) {
-		printf("%s@%d: failed to load bpf code, error=%d(%s);\n",
-			__func__, __LINE__, rte_errno, strerror(rte_errno));
+		printf("%s@%d: failed to load cbpf program for \"%s\", error=%d(%s);\n",
+			__func__, __LINE__, str, rte_errno, strerror(rte_errno));
 		goto error;
 	}
 
@@ -4465,7 +4504,6 @@ test_bpf_match(pcap_t *pcap, const char *str,
 error:
 	if (bpf)
 		rte_bpf_destroy(bpf);
-	rte_free(prm);
 	pcap_freecode(&fcode);
 	return ret;
 }
@@ -4474,6 +4512,11 @@ test_bpf_match(pcap_t *pcap, const char *str,
 static int
 test_bpf_filter_sanity(pcap_t *pcap)
 {
+	static const load_cbpf_program_t cbpf_program_loaders[] = {
+		load_cbpf_program_convert,
+		load_cbpf_program_direct,
+	};
+
 	const uint32_t plen = 100;
 	struct rte_mbuf mb, *m;
 	uint8_t tbuf[RTE_MBUF_DEFAULT_BUF_SIZE];
@@ -4500,15 +4543,17 @@ test_bpf_filter_sanity(pcap_t *pcap)
 		.dst_addr = rte_cpu_to_be_32(RTE_IPV4_BROADCAST),
 	};
 
-	if (test_bpf_match(pcap, "ip", m) != 0) {
-		printf("%s@%d: filter \"ip\" doesn't match test data\n",
-		       __func__, __LINE__);
-		return -1;
-	}
-	if (test_bpf_match(pcap, "not ip", m) == 0) {
-		printf("%s@%d: filter \"not ip\" does match test data\n",
-		       __func__, __LINE__);
-		return -1;
+	for (int li = 0; li != RTE_DIM(cbpf_program_loaders); ++li) {
+		if (test_bpf_match(pcap, "ip", m, cbpf_program_loaders[li]) != 0) {
+			printf("%s@%d: filter \"ip\" doesn't match test data\n",
+			       __func__, __LINE__);
+			return -1;
+		}
+		if (test_bpf_match(pcap, "not ip", m, cbpf_program_loaders[li]) == 0) {
+			printf("%s@%d: filter \"not ip\" does match test data\n",
+			       __func__, __LINE__);
+			return -1;
+		}
 	}
 
 	return 0;
@@ -4556,44 +4601,25 @@ static const char * const sample_filters[] = {
 };
 
 static int
-test_bpf_filter(pcap_t *pcap, const char *s)
+test_bpf_filter(pcap_t *pcap, const char *s, load_cbpf_program_t load_cbpf_program)
 {
 	struct bpf_program fcode;
-	struct rte_bpf_prm *prm = NULL;
-	struct rte_bpf *bpf = NULL;
+	struct rte_bpf *bpf;
 
 	if (pcap_compile(pcap, &fcode, s, 1, PCAP_NETMASK_UNKNOWN)) {
-		printf("%s@%d: pcap_compile('%s') failed: %s;\n",
+		printf("%s@%d: pcap_compile(\"%s\") failed: %s;\n",
 		       __func__, __LINE__, s, pcap_geterr(pcap));
 		return -1;
 	}
 
-	prm = rte_bpf_convert(&fcode);
-	if (prm == NULL) {
-		printf("%s@%d: bpf_convert('%s') failed,, error=%d(%s);\n",
-		       __func__, __LINE__, s, rte_errno, strerror(rte_errno));
-		goto error;
-	}
-
-	printf("bpf convert for \"%s\" produced:\n", s);
-	rte_bpf_dump(stdout, prm->ins, prm->nb_ins);
-
-	bpf = rte_bpf_load(prm);
+	bpf = load_cbpf_program(&fcode, s);
 	if (bpf == NULL) {
-		printf("%s@%d: failed to load bpf code, error=%d(%s);\n",
-			__func__, __LINE__, rte_errno, strerror(rte_errno));
-		goto error;
+		printf("%s@%d: failed to load cbpf program for \"%s\", error=%d(%s);\n",
+			__func__, __LINE__, s, rte_errno, strerror(rte_errno));
 	}
 
-error:
-	if (bpf)
-		rte_bpf_destroy(bpf);
-	else {
-		printf("%s \"%s\"\n", __func__, s);
-		test_bpf_dump(&fcode, prm);
-	}
+	rte_bpf_destroy(bpf);
 
-	rte_free(prm);
 	pcap_freecode(&fcode);
 	return (bpf == NULL) ? -1 : 0;
 }
@@ -4612,8 +4638,10 @@ test_bpf_convert(void)
 	}
 
 	rc = test_bpf_filter_sanity(pcap);
-	for (i = 0; i < RTE_DIM(sample_filters); i++)
-		rc |= test_bpf_filter(pcap, sample_filters[i]);
+	for (i = 0; i < RTE_DIM(sample_filters); i++) {
+		rc |= test_bpf_filter(pcap, sample_filters[i], load_cbpf_program_convert);
+		rc |= test_bpf_filter(pcap, sample_filters[i], load_cbpf_program_direct);
+	}
 
 	pcap_close(pcap);
 	return rc;
-- 
2.43.0



* [PATCH v2 08/10] test/bpf: test loading ELF file from memory
  2026-05-14  9:37 ` [PATCH v2 " Marat Khalili
                     ` (6 preceding siblings ...)
  2026-05-14  9:37   ` [PATCH v2 07/10] test/bpf: test loading cBPF directly Marat Khalili
@ 2026-05-14  9:37   ` Marat Khalili
  2026-05-14  9:37   ` [PATCH v2 09/10] doc: add release notes for new extensible BPF API Marat Khalili
  2026-05-14  9:37   ` [PATCH v2 10/10] doc: add load API to BPF programmer's guide Marat Khalili
  9 siblings, 0 replies; 23+ messages in thread
From: Marat Khalili @ 2026-05-14  9:37 UTC (permalink / raw)
  To: Konstantin Ananyev; +Cc: dev

Run each subtest in test_bpf_elf twice: the old way, loading ELF images via
a temporary file, and using the new rte_bpf_load_ex API to load them
directly from memory.

In the tests that load port/queue filters, use the new
rte_bpf_eth_(rx|tx)_install API to install a BPF program already loaded by
either method.

Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
 app/test/test_bpf.c | 193 ++++++++++++++++++++++++++------------------
 1 file changed, 113 insertions(+), 80 deletions(-)

diff --git a/app/test/test_bpf.c b/app/test/test_bpf.c
index c8a4ee7550..69e84f0cab 100644
--- a/app/test/test_bpf.c
+++ b/app/test/test_bpf.c
@@ -3977,12 +3977,61 @@ create_temp_bpf_file(const uint8_t *data, size_t size, const char *name)
 
 #include "test_bpf_load.h"
 
+/* Function loading BPF program from ELF image in memory. */
+typedef struct rte_bpf *
+(*load_elf_image_t)(const void *data, size_t size, const char *section,
+	const struct rte_bpf_xsym *xsym, uint32_t nb_xsym, const struct rte_bpf_arg *prog_arg);
+
+/* Load BPF program by writing ELF image to temporary file and opening this file. */
+static struct rte_bpf *
+load_elf_image_temp_file(const void *data, size_t size, const char *section,
+	const struct rte_bpf_xsym *xsym, uint32_t nb_xsym, const struct rte_bpf_arg *prog_arg)
+{
+	/* Create temp file from embedded BPF object */
+	char *tmpfile = create_temp_bpf_file(data, size, "test");
+	if (tmpfile == NULL) {
+		rte_errno = EIO;
+		return NULL;
+	}
+
+	/* Try to load BPF program from temp file */
+	const struct rte_bpf_prm prm = {
+		.xsym = xsym,
+		.nb_xsym = nb_xsym,
+		.prog_arg = *prog_arg,
+	};
+
+	struct rte_bpf *bpf = rte_bpf_elf_load(&prm, tmpfile, section);
+	unlink(tmpfile);
+	free(tmpfile);
+
+	return bpf;
+}
+
+/* Load BPF program by calling rte_bpf_load_ex and specifying image as the origin. */
+static struct rte_bpf *
+load_elf_image_direct(const void *data, size_t size, const char *section,
+	const struct rte_bpf_xsym *xsym, uint32_t nb_xsym, const struct rte_bpf_arg *prog_arg)
+{
+	return rte_bpf_load_ex(&(struct rte_bpf_prm_ex){
+		.sz = sizeof(struct rte_bpf_prm_ex),
+		.origin = RTE_BPF_ORIGIN_ELF_MEMORY,
+		.elf_memory.data = data,
+		.elf_memory.size = size,
+		.elf_memory.section = section,
+		.xsym = xsym,
+		.nb_xsym = nb_xsym,
+		.prog_arg[0] = *prog_arg,
+		.nb_prog_arg = 1,
+	});
+}
+
 /*
  * Test loading BPF program from an object file.
  * This test uses same arguments as previous test_call1 example.
  */
 static int
-test_bpf_elf_load(void)
+test_bpf_elf_load(load_elf_image_t load_elf_image)
 {
 	static const char test_section[] = "call1";
 	uint8_t tbuf[sizeof(struct dummy_vect8)];
@@ -4010,28 +4059,15 @@ test_bpf_elf_load(void)
 			},
 		},
 	};
-	int ret;
-
-	/* Create temp file from embedded BPF object */
-	char *tmpfile = create_temp_bpf_file(app_test_bpf_load_o,
-					     app_test_bpf_load_o_len,
-					     "load");
-	if (tmpfile == NULL)
-		return -1;
-
-	/* Try to load BPF program from temp file */
-	const struct rte_bpf_prm prm = {
-		.xsym = xsym,
-		.nb_xsym = RTE_DIM(xsym),
-		.prog_arg = {
-			.type = RTE_BPF_ARG_PTR,
-			.size = sizeof(tbuf),
-		},
+	static const struct rte_bpf_arg prog_arg = {
+		.type = RTE_BPF_ARG_PTR,
+		.size = sizeof(tbuf),
 	};
+	struct rte_bpf *bpf;
+	int ret;
 
-	struct rte_bpf *bpf = rte_bpf_elf_load(&prm, tmpfile, test_section);
-	unlink(tmpfile);
-	free(tmpfile);
+	bpf = load_elf_image(app_test_bpf_load_o, app_test_bpf_load_o_len, test_section,
+		xsym, RTE_DIM(xsym), &prog_arg);
 
 	/* If libelf support is not available */
 	if (bpf == NULL && rte_errno == ENOTSUP)
@@ -4174,22 +4210,28 @@ setup_mbufs(struct rte_mbuf *burst[], unsigned int n)
 	return tcp_count;
 }
 
-static int bpf_tx_test(uint16_t port, const char *tmpfile, struct rte_mempool *pool,
-		       const char *section, uint32_t flags)
+static int bpf_tx_test(uint16_t port, struct rte_mempool *pool, load_elf_image_t load_elf_image,
+	const char *section, uint32_t flags)
 {
-	const struct rte_bpf_prm prm = {
-		.prog_arg = {
-			.type = RTE_BPF_ARG_PTR,
-			.size = sizeof(struct dummy_net),
-		},
+	static const struct rte_bpf_arg prog_arg = {
+		.type = RTE_BPF_ARG_PTR,
+		.size = sizeof(struct dummy_net),
 	};
+	struct rte_bpf *bpf;
 	int ret;
 
-	/* Try to load BPF TX program from temp file */
-	ret = rte_bpf_eth_tx_elf_load(port, 0, &prm, tmpfile, section, flags);
+	/* Try to load BPF program from image */
+	bpf = load_elf_image(app_test_bpf_filter_o, app_test_bpf_filter_o_len, section,
+		NULL, 0, &prog_arg);
+	TEST_ASSERT_NOT_NULL(bpf, "failed to load BPF filter from image, error=%d:(%s)\n",
+		       rte_errno, rte_strerror(rte_errno));
+
+	/* Try to install loaded BPF program */
+	ret = rte_bpf_eth_tx_install(port, 0, bpf, flags);
 	if (ret != 0) {
-		printf("%s@%d: failed to load BPF filter from file=%s error=%d:(%s)\n",
-		       __func__, __LINE__, tmpfile, rte_errno, rte_strerror(rte_errno));
+		printf("%s@%d: failed to install BPF filter, error=%d:(%s)\n",
+		       __func__, __LINE__, rte_errno, rte_strerror(rte_errno));
+		rte_bpf_destroy(bpf);
 		return ret;
 	}
 
@@ -4217,10 +4259,9 @@ static int bpf_tx_test(uint16_t port, const char *tmpfile, struct rte_mempool *p
 
 /* Test loading a transmit filter which only allows IPv4 packets */
 static int
-test_bpf_elf_tx_load(void)
+test_bpf_elf_tx_load(load_elf_image_t load_elf_image)
 {
 	static const char null_dev[] = "net_null_bpf0";
-	char *tmpfile = NULL;
 	struct rte_mempool *mb_pool = NULL;
 	uint16_t port = UINT16_MAX;
 	int ret;
@@ -4237,27 +4278,17 @@ test_bpf_elf_tx_load(void)
 	if (ret != 0)
 		goto fail;
 
-	/* Create temp file from embedded BPF object */
-	tmpfile = create_temp_bpf_file(app_test_bpf_filter_o, app_test_bpf_filter_o_len, "tx");
-	if (tmpfile == NULL)
-		goto fail;
-
 	/* Do test with VM */
-	ret = bpf_tx_test(port, tmpfile, mb_pool, "filter", 0);
+	ret = bpf_tx_test(port, mb_pool, load_elf_image, "filter", 0);
 	if (ret != 0)
 		goto fail;
 
 	/* Repeat with JIT */
-	ret = bpf_tx_test(port, tmpfile, mb_pool, "filter", RTE_BPF_ETH_F_JIT);
+	ret = bpf_tx_test(port, mb_pool, load_elf_image, "filter", RTE_BPF_ETH_F_JIT);
 	if (ret == 0)
 		printf("%s: TX ELF load test passed\n", __func__);
 
 fail:
-	if (tmpfile) {
-		unlink(tmpfile);
-		free(tmpfile);
-	}
-
 	if (port != UINT16_MAX)
 		rte_vdev_uninit(null_dev);
 
@@ -4272,23 +4303,28 @@ test_bpf_elf_tx_load(void)
 }
 
 /* Test loading a receive filter */
-static int bpf_rx_test(uint16_t port, const char *tmpfile, struct rte_mempool *pool,
-		       const char *section, uint32_t flags, uint16_t expected)
+static int bpf_rx_test(uint16_t port, struct rte_mempool *pool, load_elf_image_t load_elf_image,
+	const char *section, uint32_t flags, uint16_t expected)
 {
-	struct rte_mbuf *pkts[BPF_TEST_BURST];
-	const struct rte_bpf_prm prm = {
-		.prog_arg = {
-			.type = RTE_BPF_ARG_PTR,
-			.size = sizeof(struct dummy_net),
-		},
+	static const struct rte_bpf_arg prog_arg = {
+		.type = RTE_BPF_ARG_PTR,
+		.size = sizeof(struct dummy_net),
 	};
+	struct rte_mbuf *pkts[BPF_TEST_BURST];
+	struct rte_bpf *bpf;
 	int ret;
 
-	/* Load BPF program to drop all packets */
-	ret = rte_bpf_eth_rx_elf_load(port, 0, &prm, tmpfile, section, flags);
+	/* Try to load BPF program from image */
+	bpf = load_elf_image(app_test_bpf_filter_o, app_test_bpf_filter_o_len, section,
+		NULL, 0, &prog_arg);
+	TEST_ASSERT_NOT_NULL(bpf, "failed to load BPF filter from image, error=%d:(%s)\n",
+		       rte_errno, rte_strerror(rte_errno));
+
+	/* Try to install loaded BPF program */
+	ret = rte_bpf_eth_rx_install(port, 0, bpf, flags);
 	if (ret != 0) {
-		printf("%s@%d: failed to load BPF filter from file=%s error=%d:(%s)\n",
-		       __func__, __LINE__, tmpfile, rte_errno, rte_strerror(rte_errno));
+		printf("%s@%d: failed to install BPF filter, error=%d:(%s)\n",
+		       __func__, __LINE__, rte_errno, rte_strerror(rte_errno));
 		return ret;
 	}
 
@@ -4311,11 +4347,10 @@ static int bpf_rx_test(uint16_t port, const char *tmpfile, struct rte_mempool *p
 
 /* Test loading a receive filters, first with drop all and then with allow all packets */
 static int
-test_bpf_elf_rx_load(void)
+test_bpf_elf_rx_load(load_elf_image_t load_elf_image)
 {
 	static const char null_dev[] = "net_null_bpf0";
 	struct rte_mempool *pool = NULL;
-	char *tmpfile = NULL;
 	uint16_t port = UINT16_MAX;
 	int ret;
 
@@ -4331,28 +4366,23 @@ test_bpf_elf_rx_load(void)
 	if (ret != 0)
 		goto fail;
 
-	/* Create temp file from embedded BPF object */
-	tmpfile = create_temp_bpf_file(app_test_bpf_filter_o, app_test_bpf_filter_o_len, "rx");
-	if (tmpfile == NULL)
-		goto fail;
-
 	/* Do test with VM */
-	ret = bpf_rx_test(port, tmpfile, pool, "drop", 0, 0);
+	ret = bpf_rx_test(port, pool, load_elf_image, "drop", 0, 0);
 	if (ret != 0)
 		goto fail;
 
 	/* Repeat with JIT */
-	ret = bpf_rx_test(port, tmpfile, pool, "drop", RTE_BPF_ETH_F_JIT, 0);
+	ret = bpf_rx_test(port, pool, load_elf_image, "drop", RTE_BPF_ETH_F_JIT, 0);
 	if (ret != 0)
 		goto fail;
 
 	/* Repeat with allow all */
-	ret = bpf_rx_test(port, tmpfile, pool, "allow", 0, BPF_TEST_BURST);
+	ret = bpf_rx_test(port, pool, load_elf_image, "allow", 0, BPF_TEST_BURST);
 	if (ret != 0)
 		goto fail;
 
 	/* Repeat with JIT */
-	ret = bpf_rx_test(port, tmpfile, pool, "allow", RTE_BPF_ETH_F_JIT, BPF_TEST_BURST);
+	ret = bpf_rx_test(port, pool, load_elf_image, "allow", RTE_BPF_ETH_F_JIT, BPF_TEST_BURST);
 	if (ret != 0)
 		goto fail;
 
@@ -4364,11 +4394,6 @@ test_bpf_elf_rx_load(void)
 			  "Mempool available %u != %u leaks?", avail, BPF_TEST_POOLSIZE);
 
 fail:
-	if (tmpfile) {
-		unlink(tmpfile);
-		free(tmpfile);
-	}
-
 	if (port != UINT16_MAX)
 		rte_vdev_uninit(null_dev);
 
@@ -4381,13 +4406,21 @@ test_bpf_elf_rx_load(void)
 static int
 test_bpf_elf(void)
 {
-	int ret;
+	static const load_elf_image_t elf_image_loaders[] = {
+		load_elf_image_temp_file,
+		load_elf_image_direct,
+	};
 
-	ret = test_bpf_elf_load();
-	if (ret == TEST_SUCCESS)
-		ret = test_bpf_elf_tx_load();
-	if (ret == TEST_SUCCESS)
-		ret = test_bpf_elf_rx_load();
+	int ret = TEST_SUCCESS;
+
+	for (int li = 0; li != RTE_DIM(elf_image_loaders); ++li) {
+		if (ret == TEST_SUCCESS)
+			ret = test_bpf_elf_load(elf_image_loaders[li]);
+		if (ret == TEST_SUCCESS)
+			ret = test_bpf_elf_tx_load(elf_image_loaders[li]);
+		if (ret == TEST_SUCCESS)
+			ret = test_bpf_elf_rx_load(elf_image_loaders[li]);
+	}
 
 	return ret;
 }
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v2 09/10] doc: add release notes for new extensible BPF API
  2026-05-14  9:37 ` [PATCH v2 " Marat Khalili
                     ` (7 preceding siblings ...)
  2026-05-14  9:37   ` [PATCH v2 08/10] test/bpf: test loading ELF file from memory Marat Khalili
@ 2026-05-14  9:37   ` Marat Khalili
  2026-05-14  9:37   ` [PATCH v2 10/10] doc: add load API to BPF programmer's guide Marat Khalili
  9 siblings, 0 replies; 23+ messages in thread
From: Marat Khalili @ 2026-05-14  9:37 UTC (permalink / raw)
  Cc: dev

Document the following new eBPF features introduced in this release:
* Extensible BPF loading API (rte_bpf_load_ex, rte_bpf_prm_ex).
* Loading and executing eBPF programs with up to 5 arguments.
* Installing already loaded eBPF programs as port callbacks.

Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
 doc/guides/rel_notes/release_26_07.rst | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/doc/guides/rel_notes/release_26_07.rst b/doc/guides/rel_notes/release_26_07.rst
index f012d47a4b..18810ab81d 100644
--- a/doc/guides/rel_notes/release_26_07.rst
+++ b/doc/guides/rel_notes/release_26_07.rst
@@ -63,6 +63,26 @@ New Features
     ``rte_eal_init`` and the application is responsible for probing each device,
   * ``--auto-probing`` enables the initial bus probing, which is the current default behavior.
 
+* **Added extensible BPF loading API.**
+
+  Added an extensible BPF loading API comprising the function
+  ``rte_bpf_load_ex`` and struct ``rte_bpf_prm_ex``. This enables new features
+  such as loading classic BPF (cBPF), loading ELF images directly from memory
+  buffers, and executing multi-argument programs, while avoiding future ABI
+  breakage.
+
+* **Added support for executing BPF programs with multiple arguments.**
+
+  Added support for loading and executing BPF programs with up to 5 arguments.
+  This introduces new API functions ``rte_bpf_exec_ex``,
+  ``rte_bpf_exec_burst_ex``, and ``rte_bpf_get_jit_ex``.
+
+* **Added BPF port callback installation API.**
+
+  Added new API functions ``rte_bpf_eth_rx_install`` and
+  ``rte_bpf_eth_tx_install`` for installing already loaded BPF programs as
+  port callbacks (as opposed to loading them directly from ELF files).
+
 
 Removed Items
 -------------
-- 
2.43.0



* [PATCH v2 10/10] doc: add load API to BPF programmer's guide
  2026-05-14  9:37 ` [PATCH v2 " Marat Khalili
                     ` (8 preceding siblings ...)
  2026-05-14  9:37   ` [PATCH v2 09/10] doc: add release notes for new extensible BPF API Marat Khalili
@ 2026-05-14  9:37   ` Marat Khalili
  9 siblings, 0 replies; 23+ messages in thread
From: Marat Khalili @ 2026-05-14  9:37 UTC (permalink / raw)
  To: Konstantin Ananyev; +Cc: dev

Rewrite the basic operations list to focus on a typical use. Provide an
end-to-end example demonstrating loading from an ELF file, executing via
JIT or the interpreter, and properly handling multiple custom arguments
using rte_bpf_prog_ctx.

Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
 doc/guides/prog_guide/bpf_lib.rst | 75 ++++++++++++++++++++++++++++---
 1 file changed, 68 insertions(+), 7 deletions(-)

diff --git a/doc/guides/prog_guide/bpf_lib.rst b/doc/guides/prog_guide/bpf_lib.rst
index 8c820328b9..df37825088 100644
--- a/doc/guides/prog_guide/bpf_lib.rst
+++ b/doc/guides/prog_guide/bpf_lib.rst
@@ -15,17 +15,79 @@ for more information.
 Also it introduces basic framework to load/unload BPF-based filters
 on eth devices (right now only via SW RX/TX callbacks).
 
-The library API provides the following basic operations:
+The library API provides the following basic operations for working with BPF
+programs:
 
-*  Create a new BPF execution context and load user provided eBPF code into it.
+*   **Loading:** The extensible API (``rte_bpf_load_ex``) is the recommended
+    way to load a BPF program. Using ``struct rte_bpf_prm_ex``, you can load
+    an eBPF program from an ELF file on disk, or load eBPF/cBPF bytecode
+    directly from memory buffers.
 
-*   Destroy an BPF execution context and its runtime structures and free the associated memory.
+*   **Execution via Callbacks:** Once loaded, a BPF program can be attached to
+    a specific Ethernet device port and queue to automatically process incoming
+    or outgoing packets using ``rte_bpf_eth_rx_install`` or
+    ``rte_bpf_eth_tx_install``.
 
-*   Execute eBPF bytecode associated with provided input parameter.
+*   **Direct Execution:** You can execute a BPF program directly from your
+    application code using ``rte_bpf_exec_ex`` (or the burst variant
+    ``rte_bpf_exec_burst_ex``). This API allows passing an execution context
+    (``struct rte_bpf_prog_ctx``) containing up to 5 custom arguments.
 
-*   Provide information about natively compiled code for given BPF context.
+*   **JIT Execution:** For maximum performance, you can retrieve the natively
+    compiled (JIT) function pointer for a loaded program using
+    ``rte_bpf_get_jit_ex`` and call it directly from your code with the same
+    arguments.
 
-*   Load BPF program from the ELF file and install callback to execute it on given ethdev port/queue.
+*   **Cleanup:** Destroy a BPF execution context and free the associated memory
+    using ``rte_bpf_destroy``.
+
+The following is a concise example of loading an eBPF program from an ELF file
+and executing it directly, using the JIT-compiled version if available:
+
+.. code-block:: c
+
+    struct rte_bpf_prm_ex prm = {
+        .sz = sizeof(struct rte_bpf_prm_ex),
+        .origin = RTE_BPF_ORIGIN_ELF_FILE,
+        .elf_file = {
+            .path = "ptype.o",
+            .section = ".text",
+        },
+        .nb_prog_arg = 2,
+        .prog_arg = {
+            [0] = {
+                .type = RTE_BPF_ARG_PTR_MBUF,
+                .size = sizeof(struct rte_mbuf),
+                .buf_size = RTE_MBUF_DEFAULT_BUF_SIZE,
+            },
+            [1] = {
+                .type = RTE_BPF_ARG_RAW,
+                .size = sizeof(uint64_t),
+            },
+        },
+    };
+    struct rte_bpf *bpf = rte_bpf_load_ex(&prm);
+    if (bpf == NULL) {
+        /* Handle load failure */
+    }
+
+    struct rte_bpf_prog_ctx ctx = {
+        .arg[0] = { .ptr = mbuf },
+        .arg[1] = { .u64 = RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK },
+    };
+
+    struct rte_bpf_jit_ex jit;
+    uint64_t ret;
+    if (rte_bpf_get_jit_ex(bpf, &jit) == 0 && jit.func2 != NULL) {
+        /* Call the JIT-compiled function directly for best performance */
+        ret = jit.func2(ctx.arg[0], ctx.arg[1]);
+    } else {
+        /* Fallback to interpreter */
+        uint64_t flags = 0;
+        ret = rte_bpf_exec_ex(bpf, &ctx, flags);
+    }
+
+    rte_bpf_destroy(bpf);
 
 Packet data load instructions
 -----------------------------
@@ -60,7 +122,6 @@ Not currently supported eBPF features
 -------------------------------------
 
  - JIT support only available for X86_64 and arm64 platforms
- - cBPF
  - tail-pointer call
  - eBPF MAP
  - external function calls for 32-bit platforms
-- 
2.43.0



Thread overview: 23+ messages
2026-05-06 17:21 [PATCH 00/10] bpf: introduce extensible load API Marat Khalili
2026-05-06 17:21 ` [PATCH 01/10] bpf: make logging prefixes more consistent Marat Khalili
2026-05-06 17:21 ` [PATCH 02/10] bpf: introduce extensible load API Marat Khalili
2026-05-06 17:22 ` [PATCH 03/10] bpf: support up to 5 arguments Marat Khalili
2026-05-06 17:22 ` [PATCH 04/10] bpf: add cBPF origin to rte_bpf_load_ex Marat Khalili
2026-05-06 17:22 ` [PATCH 05/10] bpf: support rte_bpf_prm_ex with port callbacks Marat Khalili
2026-05-06 17:22 ` [PATCH 06/10] bpf: support loading ELF files from memory Marat Khalili
2026-05-06 17:22 ` [PATCH 07/10] test/bpf: test loading cBPF directly Marat Khalili
2026-05-06 17:22 ` [PATCH 08/10] test/bpf: test loading ELF file from memory Marat Khalili
2026-05-06 17:22 ` [PATCH 09/10] doc: add release notes for new extensible BPF API Marat Khalili
2026-05-06 17:22 ` [PATCH 10/10] doc: add load API to BPF programmer's guide Marat Khalili
2026-05-09 12:36 ` [PATCH 00/10] bpf: introduce extensible load API Konstantin Ananyev
2026-05-14  9:37 ` [PATCH v2 " Marat Khalili
2026-05-14  9:37   ` [PATCH v2 01/10] bpf: make logging prefixes more consistent Marat Khalili
2026-05-14  9:37   ` [PATCH v2 02/10] bpf: introduce extensible load API Marat Khalili
2026-05-14  9:37   ` [PATCH v2 03/10] bpf: support up to 5 arguments Marat Khalili
2026-05-14  9:37   ` [PATCH v2 04/10] bpf: add cBPF origin to rte_bpf_load_ex Marat Khalili
2026-05-14  9:37   ` [PATCH v2 05/10] bpf: support rte_bpf_prm_ex with port callbacks Marat Khalili
2026-05-14  9:37   ` [PATCH v2 06/10] bpf: support loading ELF files from memory Marat Khalili
2026-05-14  9:37   ` [PATCH v2 07/10] test/bpf: test loading cBPF directly Marat Khalili
2026-05-14  9:37   ` [PATCH v2 08/10] test/bpf: test loading ELF file from memory Marat Khalili
2026-05-14  9:37   ` [PATCH v2 09/10] doc: add release notes for new extensible BPF API Marat Khalili
2026-05-14  9:37   ` [PATCH v2 10/10] doc: add load API to BPF programmer's guide Marat Khalili
