* [PATCH 00/25] bpf: test and fix issues in verifier
From: Marat Khalili @ 2026-05-06 17:38 UTC
Cc: dev
This patchset addresses numerous bugs in the BPF verifier's abstract
interpretation logic and introduces a new validation debugger API to
enable precise, robust testing of the verifier itself.
While the existing DPDK eBPF verifier can check basic execution-graph
properties such as loops and dead code, its mathematical tracking of
register bounds (both signed and unsigned) contained flaws resulting in
false positives and false negatives, undefined behavior, and hardware
exceptions such as SIGFPE during validation (for example, when the
operand ranges evaluated for a signed division include INT64_MIN
and -1).
To resolve these issues and ensure they do not regress, this patchset
first introduces the "Validation Debugger API"
(`rte_bpf_validate_debug_*`). This gdb-like interface allows setting
breakpoints and catchpoints during the validation process to inspect the
verifier's internal state.
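For illustration, a minimal usage sketch (the callback signature
follows the `fn(debug, ctx)` shape used in patch 05, attaching the
handle through the load parameters assumes the extensible load API this
series depends on, and the breakpoint location is hypothetical):

    static int
    on_break(struct rte_bpf_validate_debug *debug, void *ctx)
    {
        char buf[128];

        RTE_SET_USED(ctx);
        /* Inspect the tracked value range of r1 at this point. */
        rte_bpf_validate_debug_format_register_info(debug, buf,
            sizeof(buf), 1);
        printf("pc=%u r1=%s\n",
            rte_bpf_validate_debug_get_pc(debug), buf);
        return 0; /* non-negative: continue validation */
    }

    struct rte_bpf_validate_debug *debug = rte_bpf_validate_debug_create();
    const struct rte_bpf_validate_debug_callback cb = { .fn = on_break };

    rte_bpf_validate_debug_break(debug, 4, &cb); /* before instruction 4 */
    prm.debug = debug; /* picked up by the verifier during validation */
    /* ... load and validate the program, inspect the results ... */
    rte_bpf_validate_debug_destroy(debug);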
Using this new API, a comprehensive test harness
(`app/test/test_bpf_validate.c`) was created to formally check the
abstract domains of instructions across all their valid branches. The
remainder of the patchset incrementally fixes the math and bounds logic
for individual eBPF instructions, using the new tests to prove the
correctness of the fixes.
This debugger API also lays the foundation for an interactive eBPF
validation debugger to be introduced in the future.
Depends-on: series-38068 ("bpf: introduce extensible load API")
Marat Khalili (25):
bpf: format and dump jlt, jle, jslt, and jsle
bpf: add format instruction function
bpf/validate: break on error in evaluate
bpf/validate: expand comments in evaluate cycle
bpf/validate: introduce debugging interface
bpf/validate: fix BPF_ADD of pointer to a scalar
bpf/validate: fix BPF_LDX | EBPF_DW signed range
test/bpf_validate: add setup and basic tests
test/bpf_validate: add harness for pointer tests
bpf/validate: fix EBPF_JSLT | BPF_X evaluation
bpf/validate: fix BPF_NEG of INT64_MIN and 0
bpf/validate: fix BPF_DIV and BPF_MOD signed part
bpf/validate: fix BPF_MUL ranges minimum typo
bpf/validate: fix BPF_MUL signed overflow UB
bpf/validate: fix BPF_JGT/EBPF_JSGT no-jump max
bpf/validate: fix BPF_JMP source range calculation
bpf/validate: fix BPF_JMP empty range handling
bpf/validate: fix BPF_AND min calculations
bpf/validate: fix BPF_LSH shift-out-of-bounds UB
bpf/validate: fix BPF_OR min calculations
bpf/validate: fix BPF_SUB signed max zero case
bpf/validate: fix BPF_XOR signed min calculation
bpf/validate: prevent overflow when building graph
doc: add release notes for BPF validation fixes
doc: add BPF validate debug to programmer's guide
app/test/meson.build | 1 +
app/test/test_bpf.c | 99 ++
app/test/test_bpf_validate.c | 2271 ++++++++++++++++++++++++
doc/guides/prog_guide/bpf_lib.rst | 31 +
doc/guides/rel_notes/release_26_07.rst | 16 +
lib/bpf/bpf_dump.c | 292 +--
lib/bpf/bpf_validate.c | 730 +++++++-
lib/bpf/bpf_validate.h | 54 +
lib/bpf/bpf_validate_debug.c | 663 +++++++
lib/bpf/bpf_validate_debug.h | 86 +
lib/bpf/bpf_value_set.c | 403 +++++
lib/bpf/bpf_value_set.h | 126 ++
lib/bpf/meson.build | 9 +-
lib/bpf/rte_bpf.h | 55 +
lib/bpf/rte_bpf_validate_debug.h | 377 ++++
15 files changed, 5016 insertions(+), 197 deletions(-)
create mode 100644 app/test/test_bpf_validate.c
create mode 100644 lib/bpf/bpf_validate.h
create mode 100644 lib/bpf/bpf_validate_debug.c
create mode 100644 lib/bpf/bpf_validate_debug.h
create mode 100644 lib/bpf/bpf_value_set.c
create mode 100644 lib/bpf/bpf_value_set.h
create mode 100644 lib/bpf/rte_bpf_validate_debug.h
--
2.43.0
* [PATCH 01/25] bpf: format and dump jlt, jle, jslt, and jsle
From: Marat Khalili @ 2026-05-06 17:38 UTC
To: Konstantin Ananyev; +Cc: dev
Signed and unsigned less-than and less-or-equal conditional jumps (jlt,
jle, jslt, jsle) were not supported by the eBPF format and dump
functions; add these instructions.
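For example, with this change `rte_bpf_dump` renders these opcodes as
(hypothetical instructions):

    L0:    jlt r2, #0x10, L2
    L1:    jsle r1, r3, L2

whereas previously the missing `jump_tbl` entries made them print as
"invalid jump opcode".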
Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
lib/bpf/bpf_dump.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/lib/bpf/bpf_dump.c b/lib/bpf/bpf_dump.c
index 91bc7c0a7af1..0abaeef8ae98 100644
--- a/lib/bpf/bpf_dump.c
+++ b/lib/bpf/bpf_dump.c
@@ -42,6 +42,8 @@ static const char *const jump_tbl[16] = {
[BPF_JSET >> 4] = "jset", [EBPF_JNE >> 4] = "jne",
[EBPF_JSGT >> 4] = "jsgt", [EBPF_JSGE >> 4] = "jsge",
[EBPF_CALL >> 4] = "call", [EBPF_EXIT >> 4] = "exit",
+ [EBPF_JLT >> 4] = "jlt", [EBPF_JLE >> 4] = "jle",
+ [EBPF_JSLT >> 4] = "jslt", [EBPF_JSLE >> 4] = "jsle",
};
static inline const char *
--
2.43.0
* [PATCH 02/25] bpf: add format instruction function
From: Marat Khalili @ 2026-05-06 17:38 UTC
To: Konstantin Ananyev; +Cc: dev
The BPF library already contains BPF instruction formatting functions,
but they could only be used via `rte_bpf_dump` to dump the result into
a file. Add a new function `rte_bpf_format` to format an instruction in
various ways (hexadecimal, disassembly) into a user-provided buffer, as
well as a service function `rte_bpf_insn_is_wide` to detect wide
instructions.
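For example, a caller can now disassemble a whole program into its own
buffer (a sketch mirroring the new `rte_bpf_dump` loop; `ins` and
`nb_ins` stand for the program, error handling is omitted):

    char buf[128];
    uint32_t pc;

    for (pc = 0; pc < nb_ins; ++pc) {
        rte_bpf_format(buf, sizeof(buf), &ins[pc], pc,
            RTE_BPF_FORMAT_FLAG_DISASSEMBLY |
            RTE_BPF_FORMAT_FLAG_ABSOLUTE_JUMPS);
        printf("L%u:\t%s\n", pc, buf);
        /* a wide (128-bit) instruction occupies two slots */
        pc += rte_bpf_insn_is_wide(&ins[pc]);
    }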
Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
lib/bpf/bpf_dump.c | 290 +++++++++++++++++++++++++++------------------
lib/bpf/rte_bpf.h | 51 ++++++++
2 files changed, 226 insertions(+), 115 deletions(-)
diff --git a/lib/bpf/bpf_dump.c b/lib/bpf/bpf_dump.c
index 0abaeef8ae98..4fd67ad5a1df 100644
--- a/lib/bpf/bpf_dump.c
+++ b/lib/bpf/bpf_dump.c
@@ -46,6 +46,38 @@ static const char *const jump_tbl[16] = {
[EBPF_JSLT >> 4] = "jslt", [EBPF_JSLE >> 4] = "jsle",
};
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_insn_is_wide, 26.07)
+bool
+rte_bpf_insn_is_wide(const struct ebpf_insn *ins)
+{
+ return ins->code == (BPF_LD | BPF_IMM | EBPF_DW);
+}
+
+
+/* Format one (possibly wide) eBPF command as hexadecimal in objdump format. */
+static int
+format_hexadecimal(char *buffer, size_t bufsz, const struct ebpf_insn *ins,
+ uint32_t flags)
+{
+ const char *const b = (const char *)ins;
+
+ RTE_ASSERT((flags & RTE_BPF_FORMAT_FLAG_HEXADECIMAL) != 0);
+
+ RTE_BUILD_BUG_ON(sizeof(*ins) != 8);
+
+ if ((flags & RTE_BPF_FORMAT_FLAG_NEVER_WIDE) == 0 && rte_bpf_insn_is_wide(ins))
+ return snprintf(buffer, bufsz,
+ "%02hhx %02hhx %02hhx %02hhx %02hhx %02hhx %02hhx %02hhx "
+ "%02hhx %02hhx %02hhx %02hhx %02hhx %02hhx %02hhx %02hhx",
+ b[0], b[1], b[2], b[3], b[4], b[5], b[6], b[7],
+ b[8], b[9], b[10], b[11], b[12], b[13], b[14], b[15]);
+ else
+ return snprintf(buffer, bufsz,
+ "%02hhx %02hhx %02hhx %02hhx %02hhx %02hhx %02hhx %02hhx",
+ b[0], b[1], b[2], b[3], b[4], b[5], b[6], b[7]);
+}
+
+/* Return atomic subcommand mnemonic based on BPF_STX immediate. */
static inline const char *
atomic_op(int32_t imm)
{
@@ -59,130 +91,158 @@ atomic_op(int32_t imm)
}
}
-RTE_EXPORT_SYMBOL(rte_bpf_dump)
-void rte_bpf_dump(FILE *f, const struct ebpf_insn *buf, uint32_t len)
+/* Format one (possibly wide) eBPF command as assembler. */
+static int
+format_disassembly(char *buffer, size_t bufsz, const struct ebpf_insn *ins,
+ uint32_t pc, uint32_t flags)
{
- uint32_t i;
+ uint8_t cls = BPF_CLASS(ins->code);
+ const char *op, *postfix = "", *warning = "";
+ char jump[16];
- for (i = 0; i < len; ++i) {
- const struct ebpf_insn *ins = buf + i;
- uint8_t cls = BPF_CLASS(ins->code);
- const char *op, *postfix = "", *warning = "";
+ RTE_ASSERT((flags & RTE_BPF_FORMAT_FLAG_HEXADECIMAL) == 0);
- fprintf(f, " L%u:\t", i);
+ switch (cls) {
+ default:
+ return snprintf(buffer, bufsz, "unimp 0x%x // class: %s",
+ ins->code, class_tbl[cls]);
+ case BPF_ALU:
+ postfix = "32";
+ /* fall through */
+ case EBPF_ALU64:
+ op = alu_op_tbl[BPF_OP_INDEX(ins->code)];
+ if (ins->off != 0)
+ /* Not yet supported variation with non-zero offset. */
+ warning = ", off != 0";
+ if (BPF_SRC(ins->code) == BPF_X)
+ return snprintf(buffer, bufsz, "%s%s r%u, r%u%s", op, postfix, ins->dst_reg,
+ ins->src_reg, warning);
+ else
+ return snprintf(buffer, bufsz, "%s%s r%u, #0x%x%s", op, postfix,
+ ins->dst_reg, ins->imm, warning);
+ case BPF_LD:
+ op = "ld";
+ postfix = size_tbl[BPF_SIZE_INDEX(ins->code)];
+ if (ins->code == (BPF_LD | BPF_IMM | EBPF_DW)) {
+ uint64_t val;
- switch (cls) {
- default:
- fprintf(f, "unimp 0x%x // class: %s\n",
- ins->code, class_tbl[cls]);
- break;
- case BPF_ALU:
- postfix = "32";
- /* fall through */
- case EBPF_ALU64:
- op = alu_op_tbl[BPF_OP_INDEX(ins->code)];
- if (ins->off != 0)
- /* Not yet supported variation with non-zero offset. */
- warning = ", off != 0";
- if (BPF_SRC(ins->code) == BPF_X)
- fprintf(f, "%s%s r%u, r%u%s\n", op, postfix, ins->dst_reg,
- ins->src_reg, warning);
- else
- fprintf(f, "%s%s r%u, #0x%x%s\n", op, postfix,
- ins->dst_reg, ins->imm, warning);
- break;
- case BPF_LD:
- op = "ld";
- postfix = size_tbl[BPF_SIZE_INDEX(ins->code)];
- if (ins->code == (BPF_LD | BPF_IMM | EBPF_DW)) {
- uint64_t val;
-
- if (ins->src_reg != 0)
- /* Not yet supported variation with non-zero src. */
- warning = ", src != 0";
- val = (uint32_t)ins[0].imm |
- (uint64_t)(uint32_t)ins[1].imm << 32;
- fprintf(f, "%s%s r%d, #0x%"PRIx64"%s\n",
- op, postfix, ins->dst_reg, val, warning);
- i++;
- } else if (BPF_MODE(ins->code) == BPF_IMM)
- fprintf(f, "%s%s r%d, #0x%x\n", op, postfix,
- ins->dst_reg, ins->imm);
- else if (BPF_MODE(ins->code) == BPF_ABS)
- fprintf(f, "%s%s r%d, [%d]\n", op, postfix,
- ins->dst_reg, ins->imm);
- else if (BPF_MODE(ins->code) == BPF_IND)
- fprintf(f, "%s%s r%d, [r%u + %d]\n", op, postfix,
- ins->dst_reg, ins->src_reg, ins->imm);
- else
- fprintf(f, "// BUG: LD opcode 0x%02x in eBPF insns\n",
- ins->code);
- break;
- case BPF_LDX:
- op = "ldx";
- postfix = size_tbl[BPF_SIZE_INDEX(ins->code)];
- if (BPF_MODE(ins->code) == BPF_MEM)
- fprintf(f, "%s%s r%d, [r%u + %d]\n", op, postfix, ins->dst_reg,
- ins->src_reg, ins->off);
- else
- fprintf(f, "// BUG: LDX opcode 0x%02x in eBPF insns\n",
- ins->code);
- break;
- case BPF_ST:
- op = "st";
- postfix = size_tbl[BPF_SIZE_INDEX(ins->code)];
- if (BPF_MODE(ins->code) == BPF_MEM)
- fprintf(f, "%s%s [r%d + %d], #0x%x\n", op, postfix,
- ins->dst_reg, ins->off, ins->imm);
- else
- fprintf(f, "// BUG: ST opcode 0x%02x in eBPF insns\n",
- ins->code);
- break;
- case BPF_STX:
- if (BPF_MODE(ins->code) == BPF_MEM)
- op = "stx";
- else if (BPF_MODE(ins->code) == EBPF_ATOMIC) {
- op = atomic_op(ins->imm);
- if (op == NULL) {
- fprintf(f, "// BUG: ATOMIC operation 0x%x in eBPF insns\n",
- ins->imm);
- break;
- }
- } else {
- fprintf(f, "// BUG: STX opcode 0x%02x in eBPF insns\n",
- ins->code);
- break;
- }
- postfix = size_tbl[BPF_SIZE_INDEX(ins->code)];
- fprintf(f, "%s%s [r%d + %d], r%u\n", op, postfix,
- ins->dst_reg, ins->off, ins->src_reg);
- break;
-#define L(pc, off) ((int)(pc) + 1 + (off))
- case BPF_JMP:
- op = jump_tbl[BPF_OP_INDEX(ins->code)];
if (ins->src_reg != 0)
- /* Not yet supported variation with non-zero src w/o condition. */
+ /* Not yet supported variation with non-zero src. */
warning = ", src != 0";
+ val = (uint32_t)ins[0].imm |
+ (uint64_t)(uint32_t)ins[1].imm << 32;
+ return snprintf(buffer, bufsz, "%s%s r%d, #0x%"PRIx64"%s",
+ op, postfix, ins->dst_reg, val, warning);
+ }
+ switch (BPF_MODE(ins->code)) {
+ case BPF_IMM:
+ return snprintf(buffer, bufsz, "%s%s r%d, #0x%x", op, postfix,
+ ins->dst_reg, ins->imm);
+ case BPF_ABS:
+ return snprintf(buffer, bufsz, "%s%s r%d, [%d]", op, postfix,
+ ins->dst_reg, ins->imm);
+ case BPF_IND:
+ return snprintf(buffer, bufsz, "%s%s r%d, [r%u + %d]", op, postfix,
+ ins->dst_reg, ins->src_reg, ins->imm);
+ default:
+ return snprintf(buffer, bufsz, "// BUG: LD opcode 0x%02x in eBPF insns",
+ ins->code);
+ }
+ case BPF_LDX:
+ op = "ldx";
+ postfix = size_tbl[BPF_SIZE_INDEX(ins->code)];
+ if (BPF_MODE(ins->code) == BPF_MEM)
+ return snprintf(buffer, bufsz, "%s%s r%d, [r%u + %d]", op, postfix,
+ ins->dst_reg, ins->src_reg, ins->off);
+ else
+ return snprintf(buffer, bufsz, "// BUG: LDX opcode 0x%02x in eBPF insns",
+ ins->code);
+ case BPF_ST:
+ op = "st";
+ postfix = size_tbl[BPF_SIZE_INDEX(ins->code)];
+ if (BPF_MODE(ins->code) == BPF_MEM)
+ return snprintf(buffer, bufsz, "%s%s [r%d + %d], #0x%x", op, postfix,
+ ins->dst_reg, ins->off, ins->imm);
+ else
+ return snprintf(buffer, bufsz, "// BUG: ST opcode 0x%02x in eBPF insns",
+ ins->code);
+ case BPF_STX:
+ switch (BPF_MODE(ins->code)) {
+ case BPF_MEM:
+ op = "stx";
+ break;
+ case EBPF_ATOMIC:
+ op = atomic_op(ins->imm);
if (op == NULL)
- fprintf(f, "invalid jump opcode: %#x\n", ins->code);
- else if (BPF_OP(ins->code) == BPF_JA)
- fprintf(f, "%s L%d%s\n", op, L(i, ins->off), warning);
- else if (BPF_OP(ins->code) == EBPF_CALL)
- /* Call of helper function with index in immediate. */
- fprintf(f, "%s #%u%s\n", op, ins->imm, warning);
- else if (BPF_OP(ins->code) == EBPF_EXIT)
- fprintf(f, "%s%s\n", op, warning);
- else if (BPF_SRC(ins->code) == BPF_X)
- fprintf(f, "%s r%u, r%u, L%d\n", op, ins->dst_reg,
- ins->src_reg, L(i, ins->off));
- else
- fprintf(f, "%s r%u, #0x%x, L%d\n", op, ins->dst_reg,
- ins->imm, L(i, ins->off));
+ return snprintf(buffer, bufsz,
+ "// BUG: ATOMIC operation 0x%x in eBPF insns", ins->imm);
break;
- case BPF_RET:
- fprintf(f, "// BUG: RET opcode 0x%02x in eBPF insns\n",
+ default:
+ return snprintf(buffer, bufsz, "// BUG: STX opcode 0x%02x in eBPF insns",
ins->code);
- break;
}
+ postfix = size_tbl[BPF_SIZE_INDEX(ins->code)];
+ return snprintf(buffer, bufsz, "%s%s [r%d + %d], r%u", op, postfix,
+ ins->dst_reg, ins->off, ins->src_reg);
+ case BPF_JMP:
+ op = jump_tbl[BPF_OP_INDEX(ins->code)];
+ if (op == NULL)
+ return snprintf(buffer, bufsz, "invalid jump opcode: %#x", ins->code);
+
+ if ((flags & RTE_BPF_FORMAT_FLAG_ABSOLUTE_JUMPS) != 0)
+ snprintf(jump, sizeof(jump), "L%d", pc + 1 + ins->off);
+ else
+ snprintf(jump, sizeof(jump), "%+d", (int)ins->off);
+
+ if (ins->src_reg != 0)
+ /* Not yet supported variation with non-zero src w/o condition. */
+ warning = ", src != 0";
+ switch (BPF_OP(ins->code)) {
+ case BPF_JA:
+ return snprintf(buffer, bufsz, "%s %s%s", op, jump, warning);
+ case EBPF_CALL:
+ /* Call of helper function with index in immediate. */
+ return snprintf(buffer, bufsz, "%s #%u%s", op, ins->imm, warning);
+ case EBPF_EXIT:
+ return snprintf(buffer, bufsz, "%s%s", op, warning);
+ }
+
+ if (BPF_SRC(ins->code) == BPF_X)
+ return snprintf(buffer, bufsz, "%s r%u, r%u, %s", op, ins->dst_reg,
+ ins->src_reg, jump);
+ else
+ return snprintf(buffer, bufsz, "%s r%u, #0x%x, %s", op, ins->dst_reg,
+ ins->imm, jump);
+ case BPF_RET:
+ return snprintf(buffer, bufsz, "// BUG: RET opcode 0x%02x in eBPF insns",
+ ins->code);
+ }
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_format, 26.07)
+int
+rte_bpf_format(char *buffer, size_t bufsz, const struct ebpf_insn *ins,
+ uint32_t pc, uint32_t flags)
+{
+ if ((flags & RTE_BPF_FORMAT_FLAG_HEXADECIMAL) != 0)
+ return format_hexadecimal(buffer, bufsz, ins, flags);
+ else
+ return format_disassembly(buffer, bufsz, ins, pc, flags);
+}
+
+RTE_EXPORT_SYMBOL(rte_bpf_dump)
+void rte_bpf_dump(FILE *f, const struct ebpf_insn *buf, uint32_t len)
+{
+ uint32_t i;
+ char buffer[256];
+
+ for (i = 0; i < len; ++i) {
+ const struct ebpf_insn *ins = buf + i;
+
+ format_disassembly(buffer, sizeof(buffer), ins, i,
+ RTE_BPF_FORMAT_FLAG_DISASSEMBLY |
+ RTE_BPF_FORMAT_FLAG_ABSOLUTE_JUMPS);
+ fprintf(f, " L%u:\t%s\n", i, buffer);
+ i += rte_bpf_insn_is_wide(ins);
}
}
diff --git a/lib/bpf/rte_bpf.h b/lib/bpf/rte_bpf.h
index 3c3848925bdf..944e0b79ac8c 100644
--- a/lib/bpf/rte_bpf.h
+++ b/lib/bpf/rte_bpf.h
@@ -30,6 +30,23 @@ extern "C" {
/** Mask with all supported `RTE_BPF_EXEC_FLAG_*` flags set. */
#define RTE_BPF_EXEC_FLAG_MASK RTE_BPF_EXEC_FLAG_JIT
+/* Format instructions as assembler. */
+#define RTE_BPF_FORMAT_FLAG_DISASSEMBLY 0
+/* Format instructions as hexadecimal. */
+#define RTE_BPF_FORMAT_FLAG_HEXADECIMAL RTE_BIT32(0)
+
+/* Only valid in disassembly mode. */
+/* Format jump offsets relative to the next instruction. */
+#define RTE_BPF_FORMAT_FLAG_RELATIVE_JUMPS 0
+/* Format jump targets relative to the start of the program. */
+#define RTE_BPF_FORMAT_FLAG_ABSOLUTE_JUMPS RTE_BIT32(1)
+
+/* Only valid in hexadecimal mode. */
+/* Format full hexadecimal representation of wide instructions. */
+#define RTE_BPF_FORMAT_FLAG_AUTO_WIDE 0
+/* Format as hexadecimal only first half of wide instructions. */
+#define RTE_BPF_FORMAT_FLAG_NEVER_WIDE RTE_BIT32(2)
+
/**
* Possible types for function/BPF program arguments.
*/
@@ -387,6 +404,40 @@ __rte_experimental
int
rte_bpf_get_jit_ex(const struct rte_bpf *bpf, struct rte_bpf_jit_ex *jit);
+/**
+ * Determine instruction width.
+ *
+ * @return
+ * True if ins points to a wide (128-bit) instruction.
+ */
+__rte_experimental
+bool
+rte_bpf_insn_is_wide(const struct ebpf_insn *ins);
+
+/**
+ * Print eBPF instruction into a buffer.
+ *
+ * Semantics of handling buffer size repeats those of snprintf.
+ *
+ * @param buffer
+ * Output buffer (may be NULL if bufsz is zero).
+ * @param bufsz
+ * Output buffer size.
+ * @param ins
+ * Narrow or wide (depending on opcode) eBPF instruction. That is, when
+ * `rte_bpf_insn_is_wide` is true `ins[1]` is also accessed.
+ * @param pc
+ * Current instruction number for displaying absolute jump targets.
+ * @param flags
+ * Bitwise-OR combination of `RTE_BPF_FORMAT_FLAG_*` values.
+ * @return
+ * Number of characters to be written excluding terminating zero.
+ */
+__rte_experimental
+int
+rte_bpf_format(char *buffer, size_t bufsz, const struct ebpf_insn *ins,
+ uint32_t pc, uint32_t flags);
+
/**
* Dump epf instructions to a file.
*
--
2.43.0
* [PATCH 03/25] bpf/validate: break on error in evaluate
From: Marat Khalili @ 2026-05-06 17:38 UTC
To: Konstantin Ananyev; +Cc: dev
The evaluation loop previously continued until the end of the cycle
even after an evaluation error. This made reasoning about the code
difficult, since code might still be executing while the evaluation is
already in an invalid state. Change the loop logic to break out of the
loop immediately after an error.
Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
lib/bpf/bpf_validate.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c
index bf8a4abb5a5a..1619faf3604a 100644
--- a/lib/bpf/bpf_validate.c
+++ b/lib/bpf/bpf_validate.c
@@ -2401,11 +2401,11 @@ prune_eval_state(struct bpf_verifier *bvf, const struct inst_node *node,
static int
evaluate(struct bpf_verifier *bvf)
{
- int32_t rc;
uint32_t idx, op;
const char *err;
const struct ebpf_insn *ins;
struct inst_node *next, *node;
+ int rc = 0;
struct {
uint32_t nb_eval;
@@ -2439,11 +2439,10 @@ evaluate(struct bpf_verifier *bvf)
ins = bvf->prm->raw.ins;
node = bvf->in;
next = node;
- rc = 0;
memset(&stats, 0, sizeof(stats));
- while (node != NULL && rc == 0) {
+ while (node != NULL) {
/*
* current node evaluation, make sure we evaluate
@@ -2457,17 +2456,20 @@ evaluate(struct bpf_verifier *bvf)
/* for jcc node make a copy of evaluation state */
if (node->nb_edge > 1) {
- rc |= save_cur_eval_state(bvf, node);
+ rc = save_cur_eval_state(bvf, node);
+ if (rc < 0)
+ break;
stats.nb_save++;
}
- if (ins_chk[op].eval != NULL && rc == 0) {
+ if (ins_chk[op].eval != NULL) {
err = ins_chk[op].eval(bvf, ins + idx);
stats.nb_eval++;
if (err != NULL) {
RTE_BPF_LOG_FUNC_LINE(ERR,
"%s at pc: %u", err, idx);
rc = -EINVAL;
+ break;
}
}
--
2.43.0
* [PATCH 04/25] bpf/validate: expand comments in evaluate cycle
From: Marat Khalili @ 2026-05-06 17:38 UTC
To: Konstantin Ananyev; +Cc: dev
The logic of the execution tree traversal is not 100% obvious, and had
some bugs in the past. Add and expand comments to clarify what the
`next` and `node` variables are supposed to point to at various points
of the cycle.
Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
lib/bpf/bpf_validate.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c
index 1619faf3604a..362d00c77095 100644
--- a/lib/bpf/bpf_validate.c
+++ b/lib/bpf/bpf_validate.c
@@ -2449,6 +2449,7 @@ evaluate(struct bpf_verifier *bvf)
* each node only once.
*/
if (next != NULL) {
+ /* just started or stepped down the tree, node == next */
bvf->evin = node;
idx = get_node_idx(bvf, node);
@@ -2481,8 +2482,10 @@ evaluate(struct bpf_verifier *bvf)
next = get_next_node(bvf, node);
if (next != NULL) {
-
- /* proceed with next child */
+ /*
+ * proceed with next child
+ * next points to an unwalked subtree of node
+ */
if (node->cur_edge == node->nb_edge &&
node->evst.cur != NULL) {
restore_cur_eval_state(bvf, node);
@@ -2514,6 +2517,11 @@ evaluate(struct bpf_verifier *bvf)
/* first node will not have prev, signalling finish */
}
+
+ /*
+ * next != NULL: stepped down the tree, node == next;
+ * next == NULL: stepped up after processing or pruning subtree;
+ */
}
RTE_LOG(DEBUG, BPF, "%s(%p) returns %d, stats:\n"
--
2.43.0
* [PATCH 05/25] bpf/validate: introduce debugging interface
From: Marat Khalili @ 2026-05-06 17:38 UTC
To: Konstantin Ananyev; +Cc: dev
Introduce a debugging interface for the BPF validator. The new API lets
one observe evaluation of the validated BPF program: stepping through
the evaluation, setting break- and catchpoints, inspecting possible
jumps and memory accesses in the current state, and formatting current
state elements for the user. It can be used to build both automated
tests and interactive validation debuggers without tight coupling to a
specific validator implementation.
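For instance, a catchpoint callback can query whether the branch at the
current pc may evaluate to true, to false, or to both (a sketch using
only the functions added here; registration of `cb` for the step event
is shown after the callback, error handling is abbreviated):

    static int
    on_step(struct rte_bpf_validate_debug *debug, void *ctx)
    {
        const struct ebpf_insn *ins;
        uint32_t nb_ins;
        const uint32_t pc = rte_bpf_validate_debug_get_pc(debug);
        int rc;

        RTE_SET_USED(ctx);
        if (rte_bpf_validate_debug_get_ins(debug, &ins, &nb_ins) < 0)
            return 0;
        if (pc < nb_ins && BPF_CLASS(ins[pc].code) == BPF_JMP) {
            rc = rte_bpf_validate_debug_may_jump(debug, &ins[pc], 0);
            if (rc >= 0)
                printf("pc=%u true=%d false=%d\n", pc,
                    !!(rc & RTE_BPF_VALIDATE_DEBUG_MAY_BE_TRUE),
                    !!(rc & RTE_BPF_VALIDATE_DEBUG_MAY_BE_FALSE));
        }
        return 0;
    }

    /* ... */
    rte_bpf_validate_debug_catch(debug,
        RTE_BPF_VALIDATE_DEBUG_EVENT_STEP, &cb);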
Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
lib/bpf/bpf_validate.c | 448 ++++++++++++++++++++-
lib/bpf/bpf_validate.h | 54 +++
lib/bpf/bpf_validate_debug.c | 663 +++++++++++++++++++++++++++++++
lib/bpf/bpf_validate_debug.h | 86 ++++
lib/bpf/bpf_value_set.c | 403 +++++++++++++++++++
lib/bpf/bpf_value_set.h | 126 ++++++
lib/bpf/meson.build | 9 +-
lib/bpf/rte_bpf.h | 4 +
lib/bpf/rte_bpf_validate_debug.h | 375 +++++++++++++++++
9 files changed, 2163 insertions(+), 5 deletions(-)
create mode 100644 lib/bpf/bpf_validate.h
create mode 100644 lib/bpf/bpf_validate_debug.c
create mode 100644 lib/bpf/bpf_validate_debug.h
create mode 100644 lib/bpf/bpf_value_set.c
create mode 100644 lib/bpf/bpf_value_set.h
create mode 100644 lib/bpf/rte_bpf_validate_debug.h
diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c
index 362d00c77095..8dac908c394f 100644
--- a/lib/bpf/bpf_validate.c
+++ b/lib/bpf/bpf_validate.c
@@ -9,9 +9,13 @@
#include <stdint.h>
#include <inttypes.h>
+#include <rte_bpf_validate_debug.h>
#include <rte_common.h>
#include "bpf_impl.h"
+#include "bpf_validate.h"
+#include "bpf_validate_debug.h"
+#include "bpf_value_set.h"
#define BPF_ARG_PTR_STACK RTE_BPF_ARG_RESERVED
@@ -92,6 +96,7 @@ struct bpf_verifier {
struct inst_node *evin;
struct evst_pool evst_sr_pool; /* for evst save/restore */
struct evst_pool evst_tp_pool; /* for evst track/prune */
+ struct rte_bpf_validate_debug *debug;
};
struct bpf_ins_check {
@@ -118,6 +123,409 @@ struct bpf_ins_check {
/* For LD_IND R6 is an implicit CTX register. */
#define IND_SRC_REGS (WRT_REGS ^ 1 << EBPF_REG_6)
+/*
+ * Debugging internal interface and helpers.
+ */
+
+static bool
+reg_val_range_is_valid(const struct bpf_reg_val *rv)
+{
+ if (rv->v.type == RTE_BPF_ARG_UNDEF)
+ return true;
+
+ if (rv->s.min > rv->s.max)
+ return false;
+
+ if (rv->u.min > rv->u.max)
+ return false;
+
+ /* If one of the ranges does not change sign, the other should match. */
+ if (rv->s.min >= 0 || rv->s.max < 0 ||
+ rv->u.min > INT64_MAX || rv->u.max <= INT64_MAX)
+ return rv->u.min == (uint64_t)rv->s.min &&
+ rv->u.max == (uint64_t)rv->s.max;
+
+ return true;
+}
+
+int
+__rte_bpf_validate_state_is_valid(const struct bpf_verifier *verifier)
+{
+ const struct bpf_eval_state *const st = verifier->evst;
+
+ for (int reg = 0; reg != RTE_DIM(st->rv); ++reg)
+ if (!reg_val_range_is_valid(st->rv + reg))
+ return false;
+
+ for (int var = 0; var != RTE_DIM(st->sv); ++var)
+ if (!reg_val_range_is_valid(st->sv + var))
+ return false;
+
+ return true;
+}
+
+int
+__rte_bpf_validate_can_access(const struct bpf_verifier *verifier,
+ const struct ebpf_insn *access, uint64_t off64)
+{
+ const struct bpf_eval_state *const st = verifier->evst;
+ const struct bpf_reg_val *rv;
+ /* Set of accessed byte offsets relative to memory area base. */
+ struct value_set access_set;
+ uint32_t opsz;
+
+ switch (BPF_CLASS(access->code)) {
+ case BPF_LDX:
+ rv = &st->rv[access->src_reg];
+ if (rv->v.type == BPF_ARG_PTR_STACK)
+ /* Not supporting stack access queries yet. */
+ return -ENOTSUP;
+ break;
+ case BPF_ST:
+ rv = &st->rv[access->dst_reg];
+ break;
+ case BPF_STX:
+ rv = &st->rv[access->dst_reg];
+ if (st->rv[access->src_reg].v.type == RTE_BPF_ARG_UNDEF)
+ return false;
+ break;
+ default:
+ return -ENOTSUP;
+ }
+
+ if (!RTE_BPF_ARG_PTR_TYPE(rv->v.type) || rv->v.size == 0)
+ return false;
+
+ access_set = value_set_from_pair(rv->s.min, rv->s.max, rv->u.min, rv->u.max);
+ value_set_translate(&access_set, off64);
+ opsz = bpf_size(BPF_SIZE(access->code));
+ value_set_add_contiguous(&access_set, 0, opsz - 1);
+
+ return value_set_is_covered_by_contiguous(&access_set, 0, rv->v.size - 1);
+}
+
+/* Return true if instruction `code` is supported by `may_jump`. */
+static bool
+may_jump_code_is_supported(uint8_t code)
+{
+ if (BPF_CLASS(code) != BPF_JMP)
+ return false;
+
+ switch (BPF_OP(code)) {
+ case BPF_JEQ:
+ case BPF_JGT:
+ case BPF_JGE:
+ case EBPF_JNE:
+ case EBPF_JSGT:
+ case EBPF_JSGE:
+ case EBPF_JLT:
+ case EBPF_JLE:
+ case EBPF_JSLT:
+ case EBPF_JSLE:
+ return true;
+ default:
+ return false;
+ }
+}
+
+/* Return true if instruction `code` corresponds to a signed comparison. */
+static bool
+may_jump_code_is_signed(uint8_t code)
+{
+ switch (BPF_OP(code)) {
+ case EBPF_JSGT:
+ case EBPF_JSGE:
+ case EBPF_JSLT:
+ case EBPF_JSLE:
+ return true;
+ default:
+ return false;
+ }
+}
+
+/* Return true if the specified jump condition _may_ be true. */
+static bool
+may_jump(uint8_t code, const struct value_set *origin,
+ const struct value_set *dst_set, const struct value_set *src_set)
+{
+ switch (BPF_OP(code)) {
+ case BPF_JEQ:
+ return value_sets_intersect(dst_set, src_set);
+ case EBPF_JNE:
+ return !(value_set_is_singleton(dst_set) &&
+ value_sets_equal(dst_set, src_set));
+ case BPF_JGT:
+ case EBPF_JSGT:
+ return !value_sets_based_less_or_equal(origin, dst_set, src_set);
+ case BPF_JGE:
+ case EBPF_JSGE:
+ return !value_sets_based_less(origin, dst_set, src_set);
+ case EBPF_JLT:
+ case EBPF_JSLT:
+ return !value_sets_based_less_or_equal(origin, src_set, dst_set);
+ case EBPF_JSLE:
+ case EBPF_JLE:
+ return !value_sets_based_less(origin, src_set, dst_set);
+ }
+ /* may_jump_code_is_supported should have caught this */
+ RTE_ASSERT(false);
+ return false;
+}
+
+/* Return instruction code for jump condition complement (negated result). */
+static uint8_t
+may_jump_code_complement(uint8_t code)
+{
+ switch (BPF_OP(code)) {
+ case BPF_JEQ:
+ case EBPF_JNE:
+ return code ^ BPF_JEQ ^ EBPF_JNE;
+ case BPF_JGT:
+ case EBPF_JLE:
+ return code ^ BPF_JGT ^ EBPF_JLE;
+ case BPF_JGE:
+ case EBPF_JLT:
+ return code ^ BPF_JGE ^ EBPF_JLT;
+ case EBPF_JSGT:
+ case EBPF_JSLE:
+ return code ^ EBPF_JSGT ^ EBPF_JSLE;
+ case EBPF_JSGE:
+ case EBPF_JSLT:
+ return code ^ EBPF_JSGE ^ EBPF_JSLT;
+ }
+ /* may_jump_code_is_supported should have caught this */
+ RTE_ASSERT(false);
+ return 0;
+}
+
+int
+__rte_bpf_validate_may_jump(const struct bpf_verifier *verifier,
+ const struct ebpf_insn *jump, uint64_t imm64)
+{
+ const struct bpf_eval_state *const st = verifier->evst;
+ const struct bpf_reg_val *rd, *rs;
+ struct value_set dst_set, src_set, origin;
+ int result;
+
+ if (!may_jump_code_is_supported(jump->code))
+ return -ENOTSUP;
+
+ rd = &st->rv[jump->dst_reg];
+ dst_set = (rd->v.type == RTE_BPF_ARG_UNDEF) ? value_set_full :
+ value_set_from_pair(rd->s.min, rd->s.max, rd->u.min, rd->u.max);
+
+ rs = BPF_SRC(jump->code) == BPF_X ? &st->rv[jump->src_reg] : NULL;
+ src_set = rs == NULL ? value_set_singleton((int64_t)jump->imm) :
+ rs->v.type == RTE_BPF_ARG_UNDEF ? value_set_full :
+ value_set_from_pair(rs->s.min, rs->s.max, rs->u.min, rs->u.max);
+
+ value_set_translate(&src_set, imm64);
+
+ if (RTE_BPF_ARG_PTR_TYPE(rd->v.type) &&
+ (rs != NULL && RTE_BPF_ARG_PTR_TYPE(rs->v.type)) &&
+ rd->v.size == rs->v.size) {
+ /*
+ * Both sides are pointers with the same memory area size.
+ * Until tracking of memory areas is implemented we will consider them
+ * pointing to the same memory area just because of this.
+ * In this case our value sets represent offsets from the memory area base,
+ * which is some unknown distance from the scalar zero (NULL).
+ * We know however that the memory area cannot cross zero address.
+ * Thus range of origin relative to memory base starts with 1 byte gap
+ * after the memory area and ends just before it.
+ */
+ origin = value_set_contiguous(rd->v.size + 1, -1);
+ } else {
+ /* Scalar value of a pointer depends on the memory area base address. */
+ if (RTE_BPF_ARG_PTR_TYPE(rd->v.type))
+ value_set_add_contiguous(&dst_set, 1, UINT64_MAX - rd->v.size);
+ if (rs != NULL && RTE_BPF_ARG_PTR_TYPE(rs->v.type))
+		value_set_add_contiguous(&src_set, 1, UINT64_MAX - rs->v.size);
+ origin = value_set_singleton(0);
+ }
+
+ if (may_jump_code_is_signed(jump->code))
+ /* Shift origin to the minimal value for signed comparisons. */
+ value_set_translate(&origin, INT64_MIN);
+
+ result = 0;
+
+ if (may_jump(jump->code, &origin, &dst_set, &src_set))
+ result |= RTE_BPF_VALIDATE_DEBUG_MAY_BE_TRUE;
+
+ if (may_jump(may_jump_code_complement(jump->code), &origin, &dst_set, &src_set))
+ result |= RTE_BPF_VALIDATE_DEBUG_MAY_BE_FALSE;
+
+ return result;
+}
+
+/* Like snprintf, but advances ptr (except on overflow) and reduces szleft. */
+__attribute__((__format__ (__printf__, 3, 4)))
+static int
+buf_printf(char **ptr, ssize_t *szleft, const char *format, ...)
+{
+ va_list args;
+ int rc;
+
+ va_start(args, format);
+ rc = vsnprintf(*ptr, RTE_MAX(0, *szleft), format, args);
+ va_end(args);
+
+ if (rc > 0) {
+ *szleft -= rc;
+ if (*szleft > 0)
+ *ptr += rc;
+ }
+
+ return rc;
+}
+
+static int
+format_memory_area(char **ptr, ssize_t *szleft, const struct bpf_reg_val *rv)
+{
+ switch (rv->v.type) {
+ case RTE_BPF_ARG_RAW:
+ return 0;
+ case RTE_BPF_ARG_PTR:
+ return buf_printf(ptr, szleft, "%%buffer<%zu> + ",
+ (size_t)rv->v.size);
+ case RTE_BPF_ARG_PTR_MBUF:
+ return buf_printf(ptr, szleft, "%%mbuf<%zu, %zu> + ",
+ (size_t)rv->v.size, (size_t)rv->v.buf_size);
+ case BPF_ARG_PTR_STACK:
+ return buf_printf(ptr, szleft, "%%stack + ");
+ default:
+ return -ENOTSUP;
+ }
+}
+
+/* Format min..max interval using validate-debug API and updating ptr and szleft. */
+static int
+buf_print_interval(char **ptr, ssize_t *szleft, char format, uint64_t min, uint64_t max)
+{
+ int rc;
+
+ rc = rte_bpf_validate_debug_format_interval(*ptr, RTE_MAX(0, *szleft),
+ format, min, max);
+
+ if (rc > 0) {
+ *szleft -= rc;
+ if (*szleft > 0)
+ *ptr += rc;
+ }
+
+ return rc;
+}
+
+/* Format rv roughly as "<signed-range> INTERSECT <unsigned-hex-range>" */
+static int
+format_register_range(char **ptr, ssize_t *szleft, const struct bpf_reg_val *rv)
+{
+ int rc;
+ uint64_t expected_unsigned_min, expected_unsigned_max;
+ const bool valid = reg_val_range_is_valid(rv);
+
+ /* Print signed unless trivial. */
+ if (!valid || rv->s.min != INT64_MIN || rv->s.max != INT64_MAX) {
+ rc = buf_print_interval(ptr, szleft, 'd', rv->s.min, rv->s.max);
+ if (rc < 0)
+ return rc;
+
+ if (valid) {
+ /* Skip printing unsigned if it has expected values. */
+ if (rv->s.min >= 0 || rv->s.max < 0) {
+ expected_unsigned_min = (uint64_t)rv->s.min;
+ expected_unsigned_max = (uint64_t)rv->s.max;
+ } else {
+ expected_unsigned_min = 0;
+ expected_unsigned_max = UINT64_MAX;
+ }
+
+ if (rv->u.min == expected_unsigned_min &&
+ rv->u.max == expected_unsigned_max)
+ return 0;
+ }
+
+ rc = buf_printf(ptr, szleft, " INTERSECT ");
+ if (rc < 0)
+ return rc;
+ }
+
+ rc = buf_print_interval(ptr, szleft, 'x', rv->u.min, rv->u.max);
+ if (rc < 0)
+ return rc;
+
+ if (!valid) {
+ rc = buf_printf(ptr, szleft, " (!)");
+ if (rc < 0)
+ return rc;
+ }
+
+ return 0;
+}
+
+/* Format rv roughly as "<memory-object> + <offsets-range>" */
+static int
+format_reg_val(char *buffer, size_t bufsz, const struct bpf_reg_val *rv)
+{
+ char *ptr = buffer;
+ ssize_t szleft = bufsz;
+ int rc;
+
+ if (rv->v.type == RTE_BPF_ARG_UNDEF)
+ return snprintf(buffer, bufsz, "%%undefined");
+
+ /* Print data area info, if any. */
+ rc = format_memory_area(&ptr, &szleft, rv);
+ if (rc < 0)
+ return rc;
+
+ rc = format_register_range(&ptr, &szleft, rv);
+ if (rc < 0)
+ return rc;
+
+ /* At least one snprintf was called and added terminating zero. */
+ RTE_ASSERT(szleft < (ssize_t)bufsz);
+ --szleft;
+
+ return bufsz - szleft;
+}
+
+int
+__rte_bpf_validate_format_register_info(const struct bpf_verifier *verifier,
+ char *buffer, size_t bufsz, uint8_t reg)
+{
+ if (reg >= EBPF_REG_NUM)
+ return -EINVAL;
+
+ return format_reg_val(buffer, bufsz, &verifier->evst->rv[reg]);
+}
+
+int
+__rte_bpf_validate_format_frame_info(const struct bpf_verifier *verifier,
+ char *buffer, size_t bufsz, int32_t offset)
+{
+ if (offset % sizeof(uint64_t) != 0)
+ return -EINVAL;
+
+ if (offset >= 0 || offset < -MAX_BPF_STACK_SIZE)
+ return -ERANGE;
+
+ offset = (MAX_BPF_STACK_SIZE + offset) / sizeof(uint64_t);
+
+ return format_reg_val(buffer, bufsz, &verifier->evst->sv[offset]);
+}
+
+int32_t
+__rte_bpf_validate_get_frame_size(const struct bpf_verifier *verifier)
+{
+ if (verifier->stack_sz > INT32_MAX)
+ return -ERANGE;
+
+ return verifier->stack_sz;
+}
+
+
/*
* check and evaluate functions for particular instruction types.
*/
@@ -2405,7 +2813,9 @@ evaluate(struct bpf_verifier *bvf)
const char *err;
const struct ebpf_insn *ins;
struct inst_node *next, *node;
- int rc = 0;
+ int prev_nb_edge; /* branching number of the previous instruction */
+ int rc, debug_rc;
+ struct rte_bpf_validate_debug *const debug = bvf->prm->debug;
struct {
uint32_t nb_eval;
@@ -2439,11 +2849,15 @@ evaluate(struct bpf_verifier *bvf)
ins = bvf->prm->raw.ins;
node = bvf->in;
next = node;
+ prev_nb_edge = 1;
memset(&stats, 0, sizeof(stats));
- while (node != NULL) {
+ rc = __rte_bpf_validate_debug_evaluate_start(debug, bvf, bvf->prm);
+ if (rc < 0)
+ return rc;
+ while (node != NULL) {
/*
* current node evaluation, make sure we evaluate
* each node only once.
@@ -2464,6 +2878,13 @@ evaluate(struct bpf_verifier *bvf)
}
if (ins_chk[op].eval != NULL) {
+ rc = __rte_bpf_validate_debug_evaluate_step(
+ debug, idx, prev_nb_edge > 1 ?
+ RTE_BPF_VALIDATE_DEBUG_EVENT_BRANCH_ENTER :
+ RTE_BPF_VALIDATE_DEBUG_EVENT_STEP);
+ if (rc < 0)
+ break;
+
err = ins_chk[op].eval(bvf, ins + idx);
stats.nb_eval++;
if (err != NULL) {
@@ -2499,10 +2920,17 @@ evaluate(struct bpf_verifier *bvf)
*/
if (node->nb_edge > 1 && prune_eval_state(bvf, node,
next) == 0) {
+ rc = __rte_bpf_validate_debug_evaluate_step(
+ debug, get_node_idx(bvf, next),
+ RTE_BPF_VALIDATE_DEBUG_EVENT_BRANCH_PRUNE);
+ if (rc < 0)
+ break;
+
next = NULL;
stats.nb_prune++;
} else {
next->prev_node = node;
+ prev_nb_edge = node->nb_edge;
node = next;
}
} else {
@@ -2511,8 +2939,18 @@ evaluate(struct bpf_verifier *bvf)
* mark it's @start state as safe for future references,
* and proceed with parent.
*/
+
+ if (prev_nb_edge != 0) {
+ rc = __rte_bpf_validate_debug_evaluate_step(
+ debug, get_node_idx(bvf, node) + 1,
+ RTE_BPF_VALIDATE_DEBUG_EVENT_BRANCH_RETURN);
+ if (rc < 0)
+ break;
+ }
+
node->cur_edge = 0;
save_safe_eval_state(bvf, node);
+ prev_nb_edge = 0;
node = node->prev_node;
/* first node will not have prev, signalling finish */
@@ -2532,7 +2970,11 @@ evaluate(struct bpf_verifier *bvf)
__func__, bvf, rc,
stats.nb_eval, stats.nb_prune, stats.nb_save, stats.nb_restore);
- return rc;
+ debug_rc = __rte_bpf_validate_debug_evaluate_finish(debug, rc);
+ rc = debug_rc < 0 ? debug_rc : rc;
+
+ /* Caller does not expect positive values. */
+ return RTE_MIN(0, rc);
}
static bool
diff --git a/lib/bpf/bpf_validate.h b/lib/bpf/bpf_validate.h
new file mode 100644
index 000000000000..c674ca414f96
--- /dev/null
+++ b/lib/bpf/bpf_validate.h
@@ -0,0 +1,54 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _BPF_VALIDATE_H_
+#define _BPF_VALIDATE_H_
+
+/**
+ * @file bpf_validate.h
+ *
+ * Internal-use headers for eBPF validation observability.
+ */
+
+#include <bpf_def.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+struct bpf_verifier;
+
+/* Return true if the verifier passes internal self-check. */
+int
+__rte_bpf_validate_state_is_valid(const struct bpf_verifier *verifier);
+
+/* Return true if the specified access instruction is valid. */
+int
+__rte_bpf_validate_can_access(const struct bpf_verifier *verifier,
+ const struct ebpf_insn *access, uint64_t off64);
+
+/* Get possible truth values of the specified jump condition. */
+int
+__rte_bpf_validate_may_jump(const struct bpf_verifier *verifier,
+ const struct ebpf_insn *jump, uint64_t imm64);
+
+/* Format known information about the register for the user. */
+int
+__rte_bpf_validate_format_register_info(const struct bpf_verifier *verifier,
+ char *buffer, size_t bufsz, uint8_t reg);
+
+/* Format known information about the frame location for the user. */
+int
+__rte_bpf_validate_format_frame_info(const struct bpf_verifier *verifier,
+ char *buffer, size_t bufsz, int32_t offset);
+
+/* Return frame size. */
+int32_t
+__rte_bpf_validate_get_frame_size(const struct bpf_verifier *verifier);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _BPF_VALIDATE_H_ */
diff --git a/lib/bpf/bpf_validate_debug.c b/lib/bpf/bpf_validate_debug.c
new file mode 100644
index 000000000000..d1898ca4536c
--- /dev/null
+++ b/lib/bpf/bpf_validate_debug.c
@@ -0,0 +1,663 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#include "bpf_impl.h"
+#include "bpf_validate.h"
+#include "bpf_validate_debug.h"
+
+#include <eal_export.h>
+#include <rte_bpf_validate_debug.h>
+#include <rte_errno.h>
+#include <rte_per_lcore.h>
+
+#include <errno.h>
+#include <stddef.h>
+#include <stdlib.h>
+
+#ifndef LIST_FOREACH_SAFE
+/* We need this macro which neither Linux nor EAL for Linux include yet. */
+#define LIST_FOREACH_SAFE(var, head, field, tvar) \
+ for ((var) = LIST_FIRST((head)); \
+ (var) && ((tvar) = LIST_NEXT((var), field), 1); \
+ (var) = (tvar))
+#else
+#ifdef RTE_EXEC_ENV_LINUX
+#error "Don't need LIST_FOREACH_SAFE in this version of DPDK anymore, remove it."
+#endif
+#endif
+
+#define EVENT_ARRAY_LENGTH RTE_BPF_VALIDATE_DEBUG_EVENT_END
+
+struct rte_bpf_validate_debug_point {
+ LIST_ENTRY(rte_bpf_validate_debug_point) list;
+ struct rte_bpf_validate_debug_callback callback;
+ uint32_t pc;
+};
+
+LIST_HEAD(point_list, rte_bpf_validate_debug_point);
+
+struct rte_bpf_validate_debug {
+ /* Accessible immediately after object creation. */
+ struct point_list pending_breakpoints;
+ struct point_list *catchpoint_lists;
+ struct rte_bpf_validate_debug_callback step_callback;
+
+ /* Accessible only after evaluate start. */
+ const struct bpf_verifier *verifier;
+ const struct rte_bpf_prm_ex *bpf_prm;
+ struct point_list *breakpoint_lists;
+ struct rte_bpf_validate_debug_point *last_point;
+ uint32_t pc;
+ /* Evaluate stage (only tracking `evaluate` part at the moment). */
+ bool evaluate_started;
+ bool evaluate_finished;
+ int evaluate_result; /* Only valid if `evaluate_finished` is true. */
+};
+
+/* Point lists functions. */
+
+/* Destroy all points in the list. */
+static void
+point_list_destroy(struct point_list *point_list)
+{
+ struct rte_bpf_validate_debug_point *point, *next;
+
+ LIST_FOREACH_SAFE(point, point_list, list, next)
+ rte_bpf_validate_debug_point_destroy(point);
+
+ RTE_ASSERT(LIST_EMPTY(point_list));
+}
+
+/* Destroy all points in all lists in the array and free the array. */
+static void
+point_lists_destroy(struct point_list *point_lists, uint32_t length)
+{
+ if (point_lists == NULL)
+ return;
+
+ for (uint32_t pli = 0; pli != length; ++pli)
+ point_list_destroy(&point_lists[pli]);
+
+ free(point_lists);
+}
+
+/* Dynamically allocate and initialize an array of point lists. */
+static struct point_list *
+point_lists_create(uint32_t length)
+{
+ /* Allocate at least one element to avoid calloc(0, ...) shenanigans. */
+ struct point_list *const array =
+ calloc(RTE_MAX(1u, length), sizeof(*array));
+ if (array == NULL)
+ return NULL;
+
+ for (uint32_t pli = 0; pli != length; ++pli)
+ LIST_INIT(&array[pli]);
+
+ return array;
+}
+
+/* Move point to a different list. */
+static inline void
+point_move(struct rte_bpf_validate_debug_point *point,
+ struct point_list *destination)
+{
+ LIST_REMOVE(point, list);
+ LIST_INSERT_HEAD(destination, point, list);
+}
+
+/* Move all points between lists (the order is inverted). */
+static void
+points_move(struct point_list *source, struct point_list *destination)
+{
+ struct rte_bpf_validate_debug_point *point, *next;
+
+ LIST_FOREACH_SAFE(point, source, list, next)
+ point_move(point, destination);
+ RTE_ASSERT(LIST_EMPTY(source));
+}
+
+/* Pending breakpoints. */
+
+/* Return true if all pending breakpoints have pc less than nb_ins. */
+static bool
+debug_pending_breakpoints_are_valid(const struct rte_bpf_validate_debug *debug,
+ uint32_t nb_ins)
+{
+ const struct rte_bpf_validate_debug_point *breakpoint;
+
+ LIST_FOREACH(breakpoint, &debug->pending_breakpoints, list)
+ if (breakpoint->pc >= nb_ins)
+ return false;
+
+ return true;
+}
+
+/* Move all pending breakpoints to correct per-pc lists. */
+static void
+debug_pending_breakpoints_restore(struct rte_bpf_validate_debug *debug)
+{
+ struct rte_bpf_validate_debug_point *breakpoint, *next;
+ struct point_list breakpoints;
+
+ /* Invert the list first to preserve point order when we move them. */
+ LIST_INIT(&breakpoints);
+ points_move(&debug->pending_breakpoints, &breakpoints);
+
+ LIST_FOREACH_SAFE(breakpoint, &breakpoints, list, next)
+ point_move(breakpoint, &debug->breakpoint_lists[breakpoint->pc]);
+ RTE_ASSERT(LIST_EMPTY(&breakpoints));
+}
+
+/* Move all breakpoints from per-pc lists to the pending one. */
+static void
+debug_pending_breakpoints_save(struct rte_bpf_validate_debug *debug)
+{
+ struct point_list breakpoints;
+
+ LIST_INIT(&breakpoints);
+ for (uint32_t pc = 0; pc != debug->bpf_prm->raw.nb_ins; ++pc)
+ points_move(&debug->breakpoint_lists[pc], &breakpoints);
+
+ /* Invert the list to restore point order after we moved them. */
+ RTE_ASSERT(LIST_EMPTY(&debug->pending_breakpoints));
+ points_move(&breakpoints, &debug->pending_breakpoints);
+}
+
+/* Debug instance creation and destruction. */
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_validate_debug_destroy, 26.07)
+void
+rte_bpf_validate_debug_destroy(struct rte_bpf_validate_debug *debug)
+{
+ if (debug == NULL)
+ return;
+
+ /* Cannot destroy the instance during validation. */
+ RTE_ASSERT(!debug->evaluate_started);
+
+ point_lists_destroy(debug->catchpoint_lists, EVENT_ARRAY_LENGTH);
+ point_list_destroy(&debug->pending_breakpoints);
+ free(debug);
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_validate_debug_create, 26.07)
+struct rte_bpf_validate_debug *
+rte_bpf_validate_debug_create(void)
+{
+ struct rte_bpf_validate_debug *const debug = calloc(1, sizeof(*debug));
+ if (debug == NULL) {
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ LIST_INIT(&debug->pending_breakpoints);
+
+ debug->catchpoint_lists = point_lists_create(EVENT_ARRAY_LENGTH);
+ if (debug->catchpoint_lists == NULL) {
+ free(debug);
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ return debug;
+}
+
+/* Managing callbacks. */
+
+/* Call back the user function with correct arguments for a point. */
+static inline int
+debug_point_call_back(struct rte_bpf_validate_debug *debug,
+ struct rte_bpf_validate_debug_point *point)
+{
+ debug->last_point = point;
+ return point->callback.fn(debug, point->callback.ctx);
+}
+
+/* Call back all points in point_list. */
+static int
+debug_points_call_back(struct rte_bpf_validate_debug *debug,
+ const struct point_list *point_list)
+{
+ struct rte_bpf_validate_debug_point *point, *next;
+ int rc = 0;
+
+ LIST_FOREACH_SAFE(point, point_list, list, next)
+ rc = rc < 0 ? rc : debug_point_call_back(debug, point);
+
+ return rc;
+}
+
+/* Call back all catchpoints for the specified event. */
+static int
+debug_send_event(struct rte_bpf_validate_debug *debug, debug_event_t event)
+{
+ return debug_points_call_back(debug, &debug->catchpoint_lists[event]);
+}
+
+/* Create new point and insert it into the specified list. */
+static struct rte_bpf_validate_debug_point *
+point_list_insert(struct point_list *point_list,
+ const struct rte_bpf_validate_debug_callback *callback, uint32_t pc)
+{
+ struct rte_bpf_validate_debug_point *const point =
+ malloc(sizeof(*point));
+ if (point == NULL) {
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ LIST_INSERT_HEAD(point_list, point, list);
+ point->callback = *callback;
+ point->pc = pc;
+ return point;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_validate_debug_break, 26.07)
+struct rte_bpf_validate_debug_point *
+rte_bpf_validate_debug_break(struct rte_bpf_validate_debug *debug, uint32_t pc,
+ const struct rte_bpf_validate_debug_callback *callback)
+{
+ if (debug == NULL || callback == NULL || callback->fn == NULL) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ if (!debug->evaluate_started)
+ return point_list_insert(&debug->pending_breakpoints,
+ callback, pc);
+
+ if (pc >= debug->bpf_prm->raw.nb_ins) {
+ rte_errno = ENOENT;
+ return NULL;
+ }
+
+ return point_list_insert(&debug->breakpoint_lists[pc], callback, pc);
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_validate_debug_catch, 26.07)
+struct rte_bpf_validate_debug_point *
+rte_bpf_validate_debug_catch(struct rte_bpf_validate_debug *debug,
+ debug_event_t event, const struct rte_bpf_validate_debug_callback *callback)
+{
+ if (debug == NULL || callback == NULL || callback->fn == NULL ||
+ event < 0 || event >= RTE_BPF_VALIDATE_DEBUG_EVENT_END) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ return point_list_insert(&debug->catchpoint_lists[event], callback, 0);
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_validate_debug_point_destroy, 26.07)
+void
+rte_bpf_validate_debug_point_destroy(struct rte_bpf_validate_debug_point *point)
+{
+ if (point == NULL)
+ return;
+
+ LIST_REMOVE(point, list);
+ free(point);
+}
+
+/* Querying execution state. */
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_validate_debug_get_bpf_param, 26.07)
+const struct rte_bpf_prm_ex *
+rte_bpf_validate_debug_get_bpf_param(const struct rte_bpf_validate_debug *debug)
+{
+ if (debug == NULL) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ if (!debug->evaluate_started) {
+ rte_errno = ECHILD;
+ return NULL;
+ }
+
+ return debug->bpf_prm;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_validate_debug_get_ins, 26.07)
+int
+rte_bpf_validate_debug_get_ins(const struct rte_bpf_validate_debug *debug,
+ const struct ebpf_insn **ins, uint32_t *nb_ins)
+{
+ if (debug == NULL)
+ return -EINVAL;
+
+ if (!debug->evaluate_started)
+ return -ECHILD;
+
+ if (debug->bpf_prm->origin != RTE_BPF_ORIGIN_RAW)
+ return -ENOTSUP;
+
+ *ins = debug->bpf_prm->raw.ins;
+ *nb_ins = debug->bpf_prm->raw.nb_ins;
+ return 0;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_validate_debug_get_last_point, 26.07)
+struct rte_bpf_validate_debug_point *
+rte_bpf_validate_debug_get_last_point(const struct rte_bpf_validate_debug *debug)
+{
+ if (debug == NULL) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ return debug->last_point;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_validate_debug_get_pc, 26.07)
+uint32_t
+rte_bpf_validate_debug_get_pc(const struct rte_bpf_validate_debug *debug)
+{
+ if (debug == NULL || !debug->evaluate_started)
+ return UINT32_MAX;
+
+ return debug->pc;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_validate_debug_get_validation_result, 26.07)
+int
+rte_bpf_validate_debug_get_validation_result(const struct rte_bpf_validate_debug *debug,
+ int *result)
+{
+ if (debug == NULL)
+ return -EINVAL;
+
+ if (!debug->evaluate_finished)
+ return -EAGAIN;
+
+ *result = debug->evaluate_result;
+
+ return 0;
+}
+
+/* Querying VM state. */
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_validate_debug_can_access, 26.07)
+int
+rte_bpf_validate_debug_can_access(const struct rte_bpf_validate_debug *debug,
+ const struct ebpf_insn *access, uint64_t off64)
+{
+ if (debug == NULL || access == NULL)
+ return -EINVAL;
+
+ if (!debug->evaluate_started)
+ return -ECHILD;
+
+ return __rte_bpf_validate_can_access(debug->verifier, access, off64);
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_validate_debug_may_jump, 26.07)
+int
+rte_bpf_validate_debug_may_jump(const struct rte_bpf_validate_debug *debug,
+ const struct ebpf_insn *jump, uint64_t imm64)
+{
+ if (debug == NULL || jump == NULL)
+ return -EINVAL;
+
+ if (!debug->evaluate_started)
+ return -ECHILD;
+
+ return __rte_bpf_validate_may_jump(debug->verifier, jump, imm64);
+}
+
+/* Formatting VM state for user. */
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_validate_debug_format_register_info, 26.07)
+int
+rte_bpf_validate_debug_format_register_info(const struct rte_bpf_validate_debug *debug,
+ char *buffer, size_t bufsz, uint8_t reg)
+{
+ if (debug == NULL)
+ return -EINVAL;
+
+ if (!debug->evaluate_started)
+ return -ECHILD;
+
+ return __rte_bpf_validate_format_register_info(debug->verifier, buffer,
+ bufsz, reg);
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_validate_debug_format_frame_info, 26.07)
+int
+rte_bpf_validate_debug_format_frame_info(const struct rte_bpf_validate_debug *debug,
+ char *buffer, size_t bufsz, int32_t offset)
+{
+ if (debug == NULL)
+ return -EINVAL;
+
+ if (!debug->evaluate_started)
+ return -ECHILD;
+
+ return __rte_bpf_validate_format_frame_info(debug->verifier, buffer,
+ bufsz, offset);
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_validate_debug_get_frame_size, 26.07)
+int32_t
+rte_bpf_validate_debug_get_frame_size(const struct rte_bpf_validate_debug *debug)
+{
+ if (debug == NULL)
+ return -EINVAL;
+
+ if (!debug->evaluate_started)
+ return -ECHILD;
+
+ return __rte_bpf_validate_get_frame_size(debug->verifier);
+}
+
+/* Courtesy formatting functions for user-supplied values. */
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_validate_debug_format_value, 26.07)
+int
+rte_bpf_validate_debug_format_value(char *buffer, size_t bufsz, char format,
+ uint64_t value)
+{
+ static const struct {
+ uint64_t value;
+ const char *name;
+ } constants[] = {
+ { .value = INT64_MIN, .name = "INT64_MIN" },
+ { .value = INT32_MIN, .name = "INT32_MIN" },
+ { .value = INT16_MIN, .name = "INT16_MIN" },
+ { .value = INT8_MIN, .name = "INT8_MIN" },
+ { .value = INT8_MAX, .name = "INT8_MAX" },
+ { .value = UINT8_MAX, .name = "UINT8_MAX" },
+ { .value = INT16_MAX, .name = "INT16_MAX" },
+ { .value = UINT16_MAX, .name = "UINT16_MAX" },
+ { .value = INT32_MAX, .name = "INT32_MAX" },
+ { .value = UINT32_MAX, .name = "UINT32_MAX" },
+ { .value = INT64_MAX, .name = "INT64_MAX" },
+ /* UINT64_MAX omitted on purpose, it looks better as -1 */
+ };
+
+ switch (format) {
+ case 'd':
+ for (int ci = 0; ci != RTE_DIM(constants); ++ci)
+ if (constants[ci].value == value)
+ return snprintf(buffer, bufsz, "%s", constants[ci].name);
+ /*
+ * Special case numbers close to int32_t or int64_t range ends,
+ * since they are hard to recognize in decimal otherwise.
+ */
+ if (value - INT64_MIN < 1000000)
+ return snprintf(buffer, bufsz, "INT64_MIN+%" PRId64,
+ value - INT64_MIN);
+ if (INT64_MAX - value < 1000000)
+ return snprintf(buffer, bufsz, "INT64_MAX-%" PRId64,
+ INT64_MAX - value);
+ if (value - INT32_MIN < 1000)
+ return snprintf(buffer, bufsz, "INT32_MIN+%" PRId64,
+ value - INT32_MIN);
+ if (INT32_MAX - value < 1000)
+ return snprintf(buffer, bufsz, "INT32_MAX-%" PRId64,
+ INT32_MAX - value);
+ return snprintf(buffer, bufsz, "%" PRId64, value);
+ case 'x':
+ /* Special case only the common case of UINT64_MAX. */
+ if (value == UINT64_MAX)
+ return snprintf(buffer, bufsz, "%s", "UINT64_MAX");
+ return snprintf(buffer, bufsz, "%#" PRIx64, value);
+ default:
+ return -EINVAL;
+ }
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bpf_validate_debug_format_interval, 26.07)
+int
+rte_bpf_validate_debug_format_interval(char *buffer, size_t bufsz, char format,
+ uint64_t min, uint64_t max)
+{
+ char min_buffer[32], max_buffer[32];
+ int rc;
+
+ if (min == max)
+ return rte_bpf_validate_debug_format_value(buffer, bufsz, format, min);
+
+ rc = rte_bpf_validate_debug_format_value(min_buffer, sizeof(min_buffer), format, min);
+ if (rc < 0)
+ return rc;
+
+ rc = rte_bpf_validate_debug_format_value(max_buffer, sizeof(max_buffer), format, max);
+ if (rc < 0)
+ return rc;
+
+ return snprintf(buffer, bufsz, "%s..%s", min_buffer, max_buffer);
+}
+
+/* Evaluation start and finish. */
+
+/* Free all resources associated with current evaluation. */
+static void
+debug_evaluate_close(struct rte_bpf_validate_debug *debug)
+{
+ RTE_ASSERT(debug->evaluate_started);
+ debug_pending_breakpoints_save(debug);
+ free(debug->breakpoint_lists);
+ debug->breakpoint_lists = NULL;
+ debug->evaluate_started = false;
+}
+
+int
+__rte_bpf_validate_debug_evaluate_start(struct rte_bpf_validate_debug *debug,
+ const struct bpf_verifier *verifier, const struct rte_bpf_prm_ex *bpf_prm)
+{
+ if (debug == NULL)
+ return 0;
+
+ if (verifier == NULL || bpf_prm == NULL ||
+ bpf_prm->origin != RTE_BPF_ORIGIN_RAW)
+ return -EINVAL;
+
+ if (debug->evaluate_started) {
+ RTE_BPF_LOG_FUNC_LINE(ERR, "already started");
+ return -EEXIST;
+ }
+
+ if (!debug_pending_breakpoints_are_valid(debug, bpf_prm->raw.nb_ins))
+ return -ENOENT;
+
+ debug->verifier = verifier;
+ debug->bpf_prm = bpf_prm;
+ debug->breakpoint_lists = point_lists_create(bpf_prm->raw.nb_ins);
+ if (debug->breakpoint_lists == NULL)
+ return -ENOMEM;
+ debug_pending_breakpoints_restore(debug);
+ debug->last_point = NULL;
+ debug->pc = 0;
+ debug->evaluate_started = true;
+
+ const int rc = debug_send_event(debug,
+ RTE_BPF_VALIDATE_DEBUG_EVENT_VALIDATION_START);
+ if (rc < 0) {
+ debug_evaluate_close(debug);
+ return rc;
+ }
+
+ RTE_BPF_LOG_FUNC_LINE(DEBUG, "evaluate started");
+ return 0;
+}
+
+int
+__rte_bpf_validate_debug_evaluate_step(struct rte_bpf_validate_debug *debug,
+ uint32_t pc, debug_event_t event)
+{
+ int rc;
+
+ if (debug == NULL)
+ return 0;
+
+ if (!debug->evaluate_started) {
+ RTE_BPF_LOG_FUNC_LINE(ERR, "not started");
+ return -ECHILD;
+ }
+
+ if (pc > debug->bpf_prm->raw.nb_ins || event < 0 ||
+ event >= RTE_BPF_VALIDATE_DEBUG_EVENT_END)
+ return -EINVAL;
+
+ debug->pc = pc;
+
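+	/*
+	 * Note: every debug_send_event() and breakpoint invocation below is
+	 * skipped once rc becomes negative, so the first failure suppresses
+	 * the remaining notifications for this step.
+	 */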
+ rc = __rte_bpf_validate_state_is_valid(debug->verifier);
+ if (rc == false)
+ rc = debug_send_event(debug,
+ RTE_BPF_VALIDATE_DEBUG_EVENT_INVALID_STATE);
+
+ if (event != RTE_BPF_VALIDATE_DEBUG_EVENT_STEP)
+ rc = rc < 0 ? rc : debug_send_event(debug, event);
+
+ if (event == RTE_BPF_VALIDATE_DEBUG_EVENT_STEP ||
+ event == RTE_BPF_VALIDATE_DEBUG_EVENT_BRANCH_ENTER)
+ /* Stepping into a real instruction to execute. */
+ rc = rc < 0 ? rc : debug_points_call_back(debug,
+ &debug->breakpoint_lists[pc]);
+
+ rc = rc < 0 ? rc : debug_send_event(debug,
+ RTE_BPF_VALIDATE_DEBUG_EVENT_STEP);
+
+ return rc;
+}
+
+int
+__rte_bpf_validate_debug_evaluate_finish(struct rte_bpf_validate_debug *debug,
+ int result)
+{
+ int rc = 0;
+ uint32_t pc;
+ debug_event_t event;
+
+ if (debug == NULL)
+ return 0;
+
+ if (!debug->evaluate_started) {
+ RTE_BPF_LOG_FUNC_LINE(ERR, "not started");
+ return -ECHILD;
+ }
+
+ debug->evaluate_finished = true;
+ debug->evaluate_result = result;
+
+ if (result != -ECANCELED) {
+ if (result < 0) {
+ /* Last known pc is the place we failed. */
+ pc = debug->pc;
+ event = RTE_BPF_VALIDATE_DEBUG_EVENT_VALIDATION_FAILURE;
+ } else {
+ /* Show program end, not particular instruction. */
+ pc = debug->bpf_prm->raw.nb_ins;
+ event = RTE_BPF_VALIDATE_DEBUG_EVENT_VALIDATION_SUCCESS;
+ }
+
+ rc = __rte_bpf_validate_debug_evaluate_step(debug, pc, event);
+ }
+
+ debug_evaluate_close(debug);
+
+ return rc;
+}
diff --git a/lib/bpf/bpf_validate_debug.h b/lib/bpf/bpf_validate_debug.h
new file mode 100644
index 000000000000..a91f3e9c48b2
--- /dev/null
+++ b/lib/bpf/bpf_validate_debug.h
@@ -0,0 +1,86 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _BPF_VALIDATE_DEBUG_H_
+#define _BPF_VALIDATE_DEBUG_H_
+
+/**
+ * @file bpf_validate_debug.h
+ *
+ * Internal-use header for eBPF validation debug notifications.
+ */
+
+#include "rte_bpf_validate_debug.h"
+
+#include <stdint.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+struct rte_bpf_prm_ex;
+struct rte_bpf_validate_debug;
+struct bpf_verifier;
+
+/* Type alias for validation event enum. */
+typedef enum rte_bpf_validate_debug_event debug_event_t;
+
+/*
+ * Signal beginning of evaluation process.
+ *
+ * Immediately return 0 if debug is NULL.
+ *
+ * @param debug
+ * Validate debug instance configured by user, can be NULL.
+ * @param verifier
+ * Opaque pointer that can be used for calling bpf_validate.h API.
+ * @param bpf_prm
+ * Parameters struct of the validated eBPF program, including code with all
+ * patches and relocations applied.
+ * @return
+ * Non-negative value on success, negative errno on failure.
+ */
+int
+__rte_bpf_validate_debug_evaluate_start(struct rte_bpf_validate_debug *debug,
+ const struct bpf_verifier *verifier, const struct rte_bpf_prm_ex *bpf_prm);
+
+/*
+ * Signal each instruction, branch end, or evaluation end.
+ *
+ * Immediately return 0 if debug is NULL.
+ *
+ * @param debug
+ * Validate debug instance configured by user, can be NULL.
+ * @param pc
+ * Current value of the program counter, or next after last instruction.
+ * @param event
+ * Specific evaluation event if any, or RTE_BPF_VALIDATE_DEBUG_EVENT_STEP.
+ * @return
+ *   Non-negative value: evaluation should continue;
+ *   -ECANCELED: evaluation should fail without calling this API again;
+ *   other negative value: evaluation should fail, signalling the failure.
+ */
+int
+__rte_bpf_validate_debug_evaluate_step(struct rte_bpf_validate_debug *debug,
+ uint32_t pc, debug_event_t event);
+
+/*
+ * Signal end of evaluation process.
+ *
+ * Immediately return 0 if debug is NULL.
+ *
+ * @param debug
+ * Validate debug instance configured by user, can be NULL.
+ * @param result
+ *   Overall validation result, negative if validation failed.
+ * @return
+ *   Non-negative value on success, negative errno on failure.
+ */
+int
+__rte_bpf_validate_debug_evaluate_finish(struct rte_bpf_validate_debug *debug,
+ int result);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _BPF_VALIDATE_DEBUG_H_ */
diff --git a/lib/bpf/bpf_value_set.c b/lib/bpf/bpf_value_set.c
new file mode 100644
index 000000000000..86f46de66f2f
--- /dev/null
+++ b/lib/bpf/bpf_value_set.c
@@ -0,0 +1,403 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2026 Huawei Technologies Co., Ltd
+ */
+
+#include "bpf_value_set.h"
+
+#include <rte_debug.h>
+
+/* Helper interval operations and checks. */
+
+/* One of many possible full intervals. */
+static const struct value_set_interval canonical_full_interval = {
+ .first = 0,
+ .last = UINT64_MAX,
+};
+
+/* Translate ("shift") interval by `offset`. */
+static void
+interval_translate(struct value_set_interval *interval, uint64_t offset)
+{
+ interval->first += offset;
+ interval->last += offset;
+}
+
+/* Return true if the interval includes all possible values. */
+static bool
+interval_is_full(struct value_set_interval interval)
+{
+ return interval.last + 1 == interval.first;
+}
+
+/* Return true if the interval includes `value`. */
+static bool
+interval_contains(struct value_set_interval interval, uint64_t value)
+{
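+	/*
+	 * Modular-arithmetic trick: translating both `value` and the interval
+	 * by -first reduces the cyclic membership test to an ordinary
+	 * comparison against the translated interval end.
+	 */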
+ return value - interval.first <= interval.last - interval.first;
+}
+
+/* Return true if the interval `lhs` includes all values from `rhs`. */
+static bool
+interval_covers(struct value_set_interval lhs, struct value_set_interval rhs)
+{
+ const uint64_t offset = -lhs.first;
+ interval_translate(&lhs, offset);
+ interval_translate(&rhs, offset);
+ RTE_ASSERT(lhs.first == 0);
+
+ return lhs.last == UINT64_MAX ||
+ (lhs.last >= rhs.last && rhs.last >= rhs.first);
+}
+
+/* Return true if the interval includes step from UINT64_MAX to 0. */
+static bool
+interval_crosses_zero(struct value_set_interval interval)
+{
+ return interval.last < interval.first;
+}
+
+/* Return the number of elements in a non-full interval, 0 for a full interval. */
+static uint64_t
+interval_size(struct value_set_interval interval)
+{
+ return interval.last - interval.first + 1;
+}
+
+/* Return true if two intervals represent same sets of values. */
+static bool
+intervals_equal(struct value_set_interval lhs, struct value_set_interval rhs)
+{
+ return (interval_is_full(lhs) && interval_is_full(rhs)) ||
+ (lhs.first == rhs.first && lhs.last == rhs.last);
+}
+
+/* Return true if two intervals have common elements. */
+static bool
+intervals_intersect(struct value_set_interval lhs, struct value_set_interval rhs)
+{
+ return interval_contains(lhs, rhs.first) || interval_contains(rhs, lhs.first);
+}
+
+/* Return true if `rhs.first` follows `lhs.last` with some gap. Does not check other ends! */
+static bool
+intervals_follow_with_gap(struct value_set_interval lhs, struct value_set_interval rhs)
+{
+ return lhs.last != UINT64_MAX && rhs.first > lhs.last + 1;
+}
+
+/* Return true if `(l - o) < (r - o)` for all `(o in origin, l in lhs, r in rhs)`. */
+static bool
+intervals_based_less(struct value_set_interval origin, struct value_set_interval lhs,
+ struct value_set_interval rhs)
+{
+ /* Translate all intervals for the origin to start at 0. */
+ const uint64_t offset = -origin.first;
+ interval_translate(&origin, offset);
+ interval_translate(&lhs, offset);
+ interval_translate(&rhs, offset);
+ RTE_ASSERT(origin.first == 0);
+
+ return origin.last <= lhs.first &&
+ lhs.first <= lhs.last &&
+ lhs.last < rhs.first &&
+ rhs.first <= rhs.last;
+}
+
+/* Return true if `(l - o) <= (r - o)` for all `(o in origin, l in lhs, r in rhs)`. */
+static bool
+intervals_based_less_or_equal(struct value_set_interval origin, struct value_set_interval lhs,
+ struct value_set_interval rhs)
+{
+ /* Translate all intervals for the origin to start at 0. */
+ const uint64_t offset = -origin.first;
+ interval_translate(&origin, offset);
+ interval_translate(&lhs, offset);
+ interval_translate(&rhs, offset);
+ RTE_ASSERT(origin.first == 0);
+
+ /* Special cases. */
+ if (origin.last == 0 && lhs.first == 0 && lhs.last == 0)
+ return true;
+ if (origin.last == 0 && rhs.first == UINT64_MAX && rhs.last == UINT64_MAX)
+ return true;
+ if (lhs.first == lhs.last && lhs.last == rhs.first && rhs.first == rhs.last)
+ return true;
+
+ return origin.last <= lhs.first &&
+ lhs.first <= lhs.last &&
+ lhs.last <= rhs.first &&
+ rhs.first <= rhs.last;
+}
+
+/* Append interval rhs to list of intervals in lhs. */
+static void
+value_set_append(struct value_set *lhs, struct value_set_interval rhs)
+{
+ RTE_VERIFY(lhs->nb_interval < VALUE_SET_NB_INTERVAL_MAX);
+ RTE_VERIFY(lhs->nb_interval == 0 ||
+ intervals_follow_with_gap(lhs->interval[lhs->nb_interval - 1], rhs));
+ lhs->interval[lhs->nb_interval++] = rhs;
+}
+
+/*
+ * Helper operations on noncyclic value set and intervals.
+ * Noncyclic means no interval crosses zero,
+ * but in return last value set interval may touch first.
+ */
+
+static struct value_set
+noncyclic_value_set_union_interval(const struct value_set *lhs, const struct value_set_interval rhs)
+{
+ struct value_set result = {};
+ uint32_t index = 0;
+
+ RTE_ASSERT(lhs->nb_interval == 0 ||
+ !interval_crosses_zero(lhs->interval[lhs->nb_interval - 1]));
+ RTE_ASSERT(!interval_crosses_zero(rhs));
+
+ /* Append to result all lhs intervals preceding rhs. */
+ for (; index != lhs->nb_interval; ++index) {
+ const struct value_set_interval lhs_interval = lhs->interval[index];
+ if (!intervals_follow_with_gap(lhs_interval, rhs))
+ break;
+
+ value_set_append(&result, lhs_interval);
+ }
+
+	/* Append interval joined from rhs and all lhs intervals intersecting or touching it. */
+ struct value_set_interval joint_interval = rhs;
+ for (; index != lhs->nb_interval; ++index) {
+ const struct value_set_interval lhs_interval = lhs->interval[index];
+ if (intervals_follow_with_gap(rhs, lhs_interval))
+ break;
+
+ joint_interval.first = RTE_MIN(joint_interval.first, lhs_interval.first);
+ joint_interval.last = RTE_MAX(joint_interval.last, lhs_interval.last);
+ }
+ value_set_append(&result, joint_interval);
+
+ /* Append to result all lhs intervals following rhs. */
+ for (; index != lhs->nb_interval; ++index)
+ value_set_append(&result, lhs->interval[index]);
+
+ return result;
+}
+
+/* Make "normal" maximal disjoint interval value set out of noncyclic one. */
+static struct value_set
+value_set_from_noncyclic(const struct value_set *set)
+{
+ struct value_set result = {};
+ uint32_t index = 0;
+
+ if (set->nb_interval <= 1)
+ return *set;
+
+ struct value_set_interval last_interval = set->interval[set->nb_interval - 1];
+ if (last_interval.last == UINT64_MAX && set->interval[0].first == 0) {
+ /* Join first interval with the last one instead of copying it. */
+ last_interval.last = set->interval[0].last;
+ ++index;
+ }
+
+ for (; index != set->nb_interval - 1; ++index)
+ value_set_append(&result, set->interval[index]);
+
+ value_set_append(&result, last_interval);
+
+ return result;
+}
+
+/* Make lhs a union of lhs and rhs. */
+static void
+value_set_union_interval(struct value_set *lhs, const struct value_set_interval rhs)
+{
+ struct value_set temp;
+
+ if (value_set_is_empty(lhs)) {
+ value_set_append(lhs, rhs);
+ return;
+ }
+
+ struct value_set_interval *const last_interval = &lhs->interval[lhs->nb_interval - 1];
+ const bool last_interval_crossed_zero = interval_crosses_zero(*last_interval);
+ const uint64_t wrapping_last = last_interval->last;
+
+ if (last_interval_crossed_zero)
+ /* Make value set noncyclic by removing crossing part of last interval. */
+ last_interval->last = UINT64_MAX;
+
+ if (interval_crosses_zero(rhs)) {
+ /* Add parts before and after zero separately. */
+ temp = noncyclic_value_set_union_interval(lhs,
+ (struct value_set_interval){
+ .first = rhs.first,
+ .last = UINT64_MAX,
+ });
+		temp = noncyclic_value_set_union_interval(&temp,
+ (struct value_set_interval){
+ .first = 0,
+ .last = rhs.last,
+ });
+ } else
+ temp = noncyclic_value_set_union_interval(lhs, rhs);
+
+ if (last_interval_crossed_zero)
+ /* Restore previously removed part. */
+ temp = noncyclic_value_set_union_interval(&temp,
+ (struct value_set_interval){
+ .first = 0,
+ .last = wrapping_last,
+ });
+
+ *lhs = value_set_from_noncyclic(&temp);
+}
+
+/* Set `lhs` to the set of possible sums between values from `lhs` and `rhs`. */
+static void
+value_set_add_interval(struct value_set *lhs, struct value_set_interval rhs)
+{
+ const struct value_set temp = *lhs;
+ lhs->nb_interval = 0;
+
+ for (uint32_t index = 0; index != temp.nb_interval; ++index) {
+ const struct value_set_interval interval = temp.interval[index];
+ if (interval_is_full(rhs) || interval_is_full(interval) ||
+ interval_size(interval) > UINT64_MAX - interval_size(rhs)) {
+ value_set_append(lhs, canonical_full_interval);
+ return;
+ }
+ }
+
+ for (uint32_t index = 0; index != temp.nb_interval; ++index)
+ value_set_union_interval(lhs, (struct value_set_interval){
+ /* Checked sizes above, so these interval expansions won't overflow. */
+ .first = temp.interval[index].first + rhs.first,
+ .last = temp.interval[index].last + rhs.last,
+ });
+}
+
+struct value_set
+value_set_singleton(uint64_t value)
+{
+ return value_set_contiguous(value, value);
+}
+
+struct value_set
+value_set_contiguous(uint64_t first, uint64_t last)
+{
+ return (struct value_set){
+ .nb_interval = 1,
+ .interval = {
+ { .first = first, .last = last },
+ },
+ };
+}
+
+struct value_set
+value_set_from_pair(uint64_t first1, uint64_t last1, uint64_t first2, uint64_t last2)
+{
+ struct value_set result = {};
+
+ if (first1 - first2 <= last2 - first2)
+ /* Interval 1 starts within interval 2. */
+ value_set_union_interval(&result, (struct value_set_interval){
+ .first = first1,
+ .last = first1 + RTE_MIN(last1 - first1, last2 - first1),
+ });
+
+ if (first2 - first1 <= last1 - first1)
+ /* Interval 2 starts within interval 1. */
+ value_set_union_interval(&result, (struct value_set_interval){
+ .first = first2,
+ .last = first2 + RTE_MIN(last2 - first2, last1 - first2),
+ });
+
+ return result;
+}
+
+bool
+value_set_is_empty(const struct value_set *set)
+{
+ return set->nb_interval == 0;
+}
+
+bool
+value_set_is_singleton(const struct value_set *set)
+{
+ return set->nb_interval == 1 && interval_size(set->interval[0]) == 1;
+}
+
+bool
+value_sets_equal(const struct value_set *lhs, const struct value_set *rhs)
+{
+ if (lhs->nb_interval != rhs->nb_interval)
+ return false;
+
+ for (uint32_t index = 0; index != lhs->nb_interval; ++index)
+ if (!intervals_equal(lhs->interval[index], rhs->interval[index]))
+ return false;
+
+ return true;
+}
+
+bool
+value_sets_intersect(const struct value_set *lhs, const struct value_set *rhs)
+{
+ for (uint32_t lhs_index = 0; lhs_index != lhs->nb_interval; ++lhs_index)
+ for (uint32_t rhs_index = 0; rhs_index != rhs->nb_interval; ++rhs_index)
+ if (intervals_intersect(lhs->interval[lhs_index], rhs->interval[rhs_index]))
+ return true;
+
+ return false;
+}
+
+bool
+value_set_is_covered_by_contiguous(const struct value_set *lhs, uint64_t first, uint64_t last)
+{
+ const struct value_set_interval rhs = { .first = first, .last = last };
+ for (uint32_t lhs_index = 0; lhs_index != lhs->nb_interval; ++lhs_index)
+ if (!interval_covers(rhs, lhs->interval[lhs_index]))
+ return false;
+
+ return true;
+}
+
+bool
+value_sets_based_less(const struct value_set *origin, const struct value_set *lhs,
+ const struct value_set *rhs)
+{
+ for (uint32_t origin_index = 0; origin_index != origin->nb_interval; ++origin_index)
+ for (uint32_t lhs_index = 0; lhs_index != lhs->nb_interval; ++lhs_index)
+ for (uint32_t rhs_index = 0; rhs_index != rhs->nb_interval; ++rhs_index)
+ if (!intervals_based_less(origin->interval[origin_index],
+ lhs->interval[lhs_index], rhs->interval[rhs_index]))
+ return false;
+ return true;
+}
+
+bool
+value_sets_based_less_or_equal(const struct value_set *origin, const struct value_set *lhs,
+ const struct value_set *rhs)
+{
+ for (uint32_t origin_index = 0; origin_index != origin->nb_interval; ++origin_index)
+ for (uint32_t lhs_index = 0; lhs_index != lhs->nb_interval; ++lhs_index)
+ for (uint32_t rhs_index = 0; rhs_index != rhs->nb_interval; ++rhs_index)
+ if (!intervals_based_less_or_equal(origin->interval[origin_index],
+ lhs->interval[lhs_index], rhs->interval[rhs_index]))
+ return false;
+ return true;
+}
+
+void
+value_set_translate(struct value_set *set, uint64_t offset)
+{
+ for (uint32_t index = 0; index != set->nb_interval; ++index)
+ interval_translate(&set->interval[index], offset);
+}
+
+void
+value_set_add_contiguous(struct value_set *lhs, uint64_t first, uint64_t last)
+{
+ value_set_add_interval(lhs, (struct value_set_interval){ .first = first, .last = last });
+}
diff --git a/lib/bpf/bpf_value_set.h b/lib/bpf/bpf_value_set.h
new file mode 100644
index 000000000000..5e7f8e521f55
--- /dev/null
+++ b/lib/bpf/bpf_value_set.h
@@ -0,0 +1,126 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2026 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _BPF_VALUE_SET_H_
+#define _BPF_VALUE_SET_H_
+
+/**
+ * @file bpf_value_set.h
+ *
+ * Value set operations for BPF validate debug.
+ *
+ * This is not a general-purpose library; only the minimal set of operations
+ * necessary for implementing the validate debug interface is provided.
+ */
+
+#include <stdbool.h>
+#include <stdint.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define VALUE_SET_NB_INTERVAL_MAX 3
+
+/*
+ * Cyclic interval on uint64_t.
+ *
+ * Cyclic means value of `last` might be numerically smaller than `first`,
+ * that is the interval may cross from UINT64_MAX to 0.
+ *
+ * Contains element `first` and all elements that can be obtained from it by
+ * adding 1 until the result reaches `last`, which is included.
+ * There are thus multiple representations of the full set and no
+ * representation of the empty set.
+ *
+ * When `first` and `last` are accepted separately as function arguments, the
+ * term _contiguous_ is used. It means that values of `first` and `last`
+ * are used to create a contiguous set composed of a single cyclic interval
+ * defined by these points.
+ */
+struct value_set_interval {
+ uint64_t first;
+ uint64_t last;
+};
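+
+/*
+ * For example (illustrative values), the cyclic interval
+ * { .first = UINT64_MAX - 1, .last = 1 } contains exactly the four values
+ * UINT64_MAX - 1, UINT64_MAX, 0 and 1, i.e. it crosses zero, while
+ * { .first = 1, .last = UINT64_MAX - 1 } does not.
+ */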
+
+/*
+ * Set of values represented as an ordered sequence of maximal disjoint cyclic intervals.
+ *
+ * Condition `maximal disjoint` means intervals do not intersect or touch each other.
+ *
+ * The sequence is ordered by member `first`. Only the last interval may thus cross zero.
+ */
+struct value_set {
+ uint32_t nb_interval;
+ struct value_set_interval interval[VALUE_SET_NB_INTERVAL_MAX];
+};
+
+/* Empty value set. */
+static const struct value_set value_set_empty = {
+ .nb_interval = 0,
+};
+
+/* Full (including every possible value) value set. */
+static const struct value_set value_set_full = {
+ .nb_interval = 1,
+ .interval = {
+ { .first = 0, .last = UINT64_MAX },
+ },
+};
+
+/* Return set containing only `value`. */
+struct value_set
+value_set_singleton(uint64_t value);
+
+/* Return set of all values between and including `first` and `last` (AKA first..last). */
+struct value_set
+value_set_contiguous(uint64_t first, uint64_t last);
+
+/* Return set of all values belonging to _both_ first1..last1 and first2..last2. */
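+/*
+ * For example (illustrative values), intersecting the contiguous set 10..20
+ * with the cyclic set 15..12 yields the two intervals 10..12 and 15..20.
+ */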
+struct value_set
+value_set_from_pair(uint64_t first1, uint64_t last1, uint64_t first2, uint64_t last2);
+
+/* Return true if the set is empty. */
+bool
+value_set_is_empty(const struct value_set *set);
+
+/* Return true if the set only contains one element. */
+bool
+value_set_is_singleton(const struct value_set *set);
+
+/* Return true if lhs and rhs represent the same set. */
+bool
+value_sets_equal(const struct value_set *lhs, const struct value_set *rhs);
+
+/* Return true if sets intersect (contain common elements). */
+bool
+value_sets_intersect(const struct value_set *lhs, const struct value_set *rhs);
+
+/* Return true if all elements in lhs belong to the interval first..last. */
+bool
+value_set_is_covered_by_contiguous(const struct value_set *lhs, uint64_t first, uint64_t last);
+
+/* Return true if `(l - o) < (r - o)` for all `(o in origin, l in lhs, r in rhs)`. */
+bool
+value_sets_based_less(const struct value_set *origin, const struct value_set *lhs,
+ const struct value_set *rhs);
+
+/* Return true if `(l - o) <= (r - o)` for all `(o in origin, l in lhs, r in rhs)`. */
+bool
+value_sets_based_less_or_equal(const struct value_set *origin, const struct value_set *lhs,
+ const struct value_set *rhs);
+
+/* Translate ("shift") all set elements by `offset`. */
+void
+value_set_translate(struct value_set *set, uint64_t offset);
+
+/* Set `lhs` to the set of possible sums between values from `lhs` and `rhs`. */
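+/*
+ * For example (illustrative values), adding the contiguous set 0..1 to the
+ * set {10..12, 20..21} yields {10..13, 20..22}.
+ */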
+void
+value_set_add_contiguous(struct value_set *lhs, uint64_t first, uint64_t last);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _BPF_VALUE_SET_H_ */
diff --git a/lib/bpf/meson.build b/lib/bpf/meson.build
index 7e8a300e3f87..b74a5c232107 100644
--- a/lib/bpf/meson.build
+++ b/lib/bpf/meson.build
@@ -24,6 +24,8 @@ sources = files(
'bpf_load_elf.c',
'bpf_pkt.c',
'bpf_validate.c',
+ 'bpf_validate_debug.c',
+ 'bpf_value_set.c',
)
if arch_subdir == 'x86' and dpdk_conf.get('RTE_ARCH_64')
@@ -32,9 +34,12 @@ elif dpdk_conf.has('RTE_ARCH_ARM64')
sources += files('bpf_jit_arm64.c')
endif
-headers = files('bpf_def.h',
+headers = files(
+ 'bpf_def.h',
'rte_bpf.h',
- 'rte_bpf_ethdev.h')
+ 'rte_bpf_ethdev.h',
+ 'rte_bpf_validate_debug.h',
+)
deps += ['mbuf', 'net', 'ethdev']
diff --git a/lib/bpf/rte_bpf.h b/lib/bpf/rte_bpf.h
index 944e0b79ac8c..8fe0e9edf24d 100644
--- a/lib/bpf/rte_bpf.h
+++ b/lib/bpf/rte_bpf.h
@@ -118,6 +118,7 @@ enum rte_bpf_origin {
};
struct bpf_insn;
+struct rte_bpf_validate_debug;
/**
* Input parameters for loading eBPF code, extensible version.
@@ -158,6 +159,9 @@ struct rte_bpf_prm_ex {
struct rte_bpf_arg prog_arg[EBPF_FUNC_MAX_ARGS]; /**< program arguments */
uint32_t nb_prog_arg; /**< program argument count */
+
+	/** Validate debug instance, NULL to disable validation debugging. */
+ struct rte_bpf_validate_debug *debug;
};
/**
diff --git a/lib/bpf/rte_bpf_validate_debug.h b/lib/bpf/rte_bpf_validate_debug.h
new file mode 100644
index 000000000000..2e8275625d8e
--- /dev/null
+++ b/lib/bpf/rte_bpf_validate_debug.h
@@ -0,0 +1,375 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _RTE_BPF_VALIDATE_DEBUG_H_
+#define _RTE_BPF_VALIDATE_DEBUG_H_
+
+/**
+ * @file rte_bpf_validate_debug.h
+ *
+ * Debugging interface for BPF validation.
+ *
+ * Can be used for debugging BPF validation problems as well as in tests.
+ */
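+
+/*
+ * Typical usage, as a minimal sketch: my_breakpoint_cb is a hypothetical user
+ * callback, error handling is omitted, and prm is the struct rte_bpf_prm_ex
+ * later passed to the load API:
+ *
+ *	struct rte_bpf_validate_debug *debug = rte_bpf_validate_debug_create();
+ *	struct rte_bpf_validate_debug_callback cb = {
+ *		.fn = my_breakpoint_cb, .ctx = NULL,
+ *	};
+ *	rte_bpf_validate_debug_break(debug, 2, &cb);
+ *	prm.debug = debug;
+ *
+ * The callback then fires whenever validation reaches pc 2. Once the debug
+ * instance is no longer needed:
+ *
+ *	rte_bpf_validate_debug_destroy(debug);
+ */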
+
+#include <bpf_def.h>
+#include <rte_bitops.h>
+#include <rte_compat.h>
+
+#include <stdbool.h>
+#include <stddef.h>
+#include <stdint.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define RTE_BPF_VALIDATE_DEBUG_MAY_BE_FALSE RTE_BIT32(0)
+#define RTE_BPF_VALIDATE_DEBUG_MAY_BE_TRUE RTE_BIT32(1)
+
+/**
+ * Supported validate events.
+ *
+ * Valid events begin from 0 and end before `RTE_BPF_VALIDATE_DEBUG_EVENT_END`.
+ */
+enum rte_bpf_validate_debug_event {
+ /* Just before every instruction, at branch or validation end. */
+ RTE_BPF_VALIDATE_DEBUG_EVENT_STEP,
+ /* Validator has failed its internal self-checks. */
+ RTE_BPF_VALIDATE_DEBUG_EVENT_INVALID_STATE,
+ /* Start of validation. */
+ RTE_BPF_VALIDATE_DEBUG_EVENT_VALIDATION_START,
+ /* Successful finish of validation. */
+ RTE_BPF_VALIDATE_DEBUG_EVENT_VALIDATION_SUCCESS,
+ /* Finish of validation with error. */
+ RTE_BPF_VALIDATE_DEBUG_EVENT_VALIDATION_FAILURE,
+ /* Beginning of a branch just after the jump. */
+ RTE_BPF_VALIDATE_DEBUG_EVENT_BRANCH_ENTER,
+ /* Pruning branch as verified earlier. */
+ RTE_BPF_VALIDATE_DEBUG_EVENT_BRANCH_PRUNE,
+ /* End of branch verification, after the last verified instruction. */
+ RTE_BPF_VALIDATE_DEBUG_EVENT_BRANCH_RETURN,
+ /* Number of valid event values. */
+ RTE_BPF_VALIDATE_DEBUG_EVENT_END,
+};
+
+struct rte_bpf_validate_debug;
+struct rte_bpf_validate_debug_point;
+
+/** User callback description. */
+struct rte_bpf_validate_debug_callback {
+ int (*fn)(struct rte_bpf_validate_debug *debug, void *ctx);
+ void *ctx;
+};
+
+/** Invoked by rte_bpf_validate_debug_for_each_point for each breakpoint and catchpoint. */
+typedef int (*rte_bpf_validate_debug_point_process_t)(struct rte_bpf_validate_debug_point *point,
+ void *ctx);
+
+/**
+ * Create new debug instance.
+ *
+ * @return
+ * Debug instance in case of success.
+ * NULL with rte_errno set in case of a failure.
+ */
+__rte_experimental
+struct rte_bpf_validate_debug *
+rte_bpf_validate_debug_create(void);
+
+/**
+ * Destroy debug instance.
+ *
+ * Behavior is undefined if validation with this debug instance is ongoing.
+ *
+ * @param debug
+ * Debug instance, or NULL.
+ */
+__rte_experimental
+void
+rte_bpf_validate_debug_destroy(struct rte_bpf_validate_debug *debug);
+
+/**
+ * Create new breakpoint at specified location.
+ *
+ * Can be called before the validation has started. If the program turns out
+ * not to contain the specified instruction when validation later starts, the
+ * start will fail.
+ *
+ * It is allowed to create breakpoints for the same location a callback is
+ * currently executing for, but the new breakpoint will not be invoked in
+ * the same cycle.
+ *
+ * @param debug
+ * Debug instance.
+ * @param pc
+ * Program counter to create breakpoint at.
+ * @param callback
+ * Callback to invoke.
+ * @return
+ * New breakpoint on success, NULL with rte_errno set on failure.
+ */
+__rte_experimental
+struct rte_bpf_validate_debug_point *
+rte_bpf_validate_debug_break(struct rte_bpf_validate_debug *debug, uint32_t pc,
+ const struct rte_bpf_validate_debug_callback *callback);
+
+/**
+ * Create new catchpoint for specified event.
+ *
+ * Can be called before the validation has started.
+ *
+ * It is allowed to create catchpoints for the same event a callback is
+ * currently executing for, but the new catchpoint will not be invoked in
+ * the same cycle.
+ *
+ * @param debug
+ * Debug instance.
+ * @param event
+ * Validation event to create catchpoint for.
+ * @param callback
+ * Callback to invoke.
+ * @return
+ *   New catchpoint on success, NULL with rte_errno set on failure.
+ */
+__rte_experimental
+struct rte_bpf_validate_debug_point *
+rte_bpf_validate_debug_catch(struct rte_bpf_validate_debug *debug,
+ enum rte_bpf_validate_debug_event event,
+ const struct rte_bpf_validate_debug_callback *callback);
+
+/**
+ * Delete breakpoint or catchpoint and free all associated resources.
+ *
+ * If a callback is currently being executed, calling this API is allowed for:
+ * - breakpoint or catchpoint the callback is executed for;
+ * - breakpoints or catchpoints for other locations or events;
+ * and NOT allowed for:
+ * - other breakpoints or catchpoints for the same location or event.
+ *
+ * @param point
+ * Breakpoint or catchpoint to destroy, or NULL.
+ */
+__rte_experimental
+void
+rte_bpf_validate_debug_point_destroy(struct rte_bpf_validate_debug_point *point);
+
+/**
+ * Get effective eBPF parameters struct.
+ *
+ * @param debug
+ * Debug instance.
+ * @return
+ * Parameters struct of the validated eBPF program, including code with all
+ * patches and relocations applied.
+ */
+__rte_experimental
+const struct rte_bpf_prm_ex *
+rte_bpf_validate_debug_get_bpf_param(const struct rte_bpf_validate_debug *debug);
+
+/**
+ * Get pointer to effective eBPF program instructions.
+ *
+ * @param debug
+ * Debug instance.
+ * @param ins
+ * Upon return, program instructions with all patches and relocations applied.
+ * @param nb_ins
+ * Upon return, number of program instructions.
+ * @return
+ * Non-negative value on success, negative errno on failure.
+ */
+__rte_experimental
+int
+rte_bpf_validate_debug_get_ins(const struct rte_bpf_validate_debug *debug,
+ const struct ebpf_insn **ins, uint32_t *nb_ins);
+
+/**
+ * Get last triggered breakpoint or catchpoint.
+ *
+ * Can be used to destroy currently processed breakpoint or catchpoint.
+ *
+ * The pointer may be invalid if the breakpoint or catchpoint has already been
+ * destroyed earlier.
+ *
+ * @param debug
+ * Debug instance.
+ * @return
+ *   Last triggered breakpoint or catchpoint, including the one the callback
+ *   is currently executing for.
+ *   NULL if none were triggered in the current validation process.
+ */
+__rte_experimental
+struct rte_bpf_validate_debug_point *
+rte_bpf_validate_debug_get_last_point(const struct rte_bpf_validate_debug *debug);
+
+/**
+ * Get current instruction index, or one after last if finishing.
+ *
+ * @param debug
+ * Debug instance.
+ * @return
+ * Current program counter being validated, or one after last.
+ * UINT32_MAX if no program is being validated.
+ */
+__rte_experimental
+uint32_t
+rte_bpf_validate_debug_get_pc(const struct rte_bpf_validate_debug *debug);
+
+/**
+ * Get the validation result, if it has finished.
+ *
+ * @param debug
+ * Debug instance.
+ * @param result
+ * Upon successful return, the validation result (negative if validation failed).
+ * @return
+ * Non-negative value if validation has finished and result variable was written;
+ * -EAGAIN if validation is still ongoing;
+ *   other negative errno in case of failure.
+ */
+__rte_experimental
+int
+rte_bpf_validate_debug_get_validation_result(const struct rte_bpf_validate_debug *debug,
+ int *result);
+
+/**
+ * Check if specified memory access instruction is currently valid.
+ *
+ * @param debug
+ * Debug instance.
+ * @param access
+ * Memory load or store eBPF instruction.
+ * @param off64
+ * Additional 64-bit offset added to ins->off.
+ * @return
+ * true if specified memory access is currently valid;
+ * false if specified memory access is currently invalid;
+ *   negative errno in case of failure.
+ */
+__rte_experimental
+int
+rte_bpf_validate_debug_can_access(const struct rte_bpf_validate_debug *debug,
+ const struct ebpf_insn *access, uint64_t off64);
+
+/**
+ * Get possible truth values of the specified jump condition.
+ *
+ * @param debug
+ * Debug instance.
+ * @param jump
+ * Conditional jump instruction specifying the condition.
+ * @param imm64
+ * Additional 64-bit immediate added to the source.
+ * @return
+ * in case of success, bitwise combination of:
+ * RTE_BPF_VALIDATE_DEBUG_MAY_BE_FALSE if the jump condition may be false;
+ * RTE_BPF_VALIDATE_DEBUG_MAY_BE_TRUE if the jump condition may be true;
+ * negative errno in case of failure.
+ */
+__rte_experimental
+int
+rte_bpf_validate_debug_may_jump(const struct rte_bpf_validate_debug *debug,
+ const struct ebpf_insn *jump, uint64_t imm64);
+
+/**
+ * Format information about specified register for the user.
+ *
+ * Parameters buffer, bufsz and return value work the same way as for snprintf.
+ *
+ * @param debug
+ * Debug instance.
+ * @param buffer
+ * Buffer to fill with register information.
+ * @param bufsz
+ * Buffer size (including space for terminating zero).
+ * @param reg
+ * Register to provide information about.
+ * @return
+ * Number of characters needed _excluding_ terminating zero.
+ */
+__rte_experimental
+int
+rte_bpf_validate_debug_format_register_info(const struct rte_bpf_validate_debug *debug,
+ char *buffer, size_t bufsz, uint8_t reg);
+
+/**
+ * Format information about specified stack frame location for the user.
+ *
+ * Parameters buffer, bufsz and return value work the same way as for snprintf.
+ *
+ * @param debug
+ * Debug instance.
+ * @param buffer
+ *   Buffer to fill with stack frame information.
+ * @param bufsz
+ * Buffer size (including space for terminating zero).
+ * @param offset
+ * Stack frame offset to provide information about, in bytes.
+ * Typically a negative multiple of 8.
+ * @return
+ * Number of characters needed _excluding_ terminating zero.
+ */
+__rte_experimental
+int
+rte_bpf_validate_debug_format_frame_info(const struct rte_bpf_validate_debug *debug,
+ char *buffer, size_t bufsz, int32_t offset);
+
+/**
+ * Get program stack frame size.
+ *
+ * @param debug
+ * Debug instance.
+ * @return
+ * Program stack frame size in bytes.
+ */
+__rte_experimental
+int32_t
+rte_bpf_validate_debug_get_frame_size(const struct rte_bpf_validate_debug *debug);
+
+/**
+ * Format value following the style of register format function.
+ *
+ * Parameters buffer, bufsz and return value work the same way as for snprintf.
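+ *
+ * For example, with format 'd' the value INT64_MAX - 1 is formatted as
+ * "INT64_MAX-1" and (uint64_t)-1 as "-1", while with format 'x' the value 16
+ * is formatted as "0x10".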
+ *
+ * @param buffer
+ *   Buffer to fill with the formatted value.
+ * @param bufsz
+ * Buffer size (including space for terminating zero).
+ * @param format
+ * One of characters 'd' or 'x' for signed or hexadecimal format.
+ * @param value
+ *   Value to format, possibly a signed value typecast to unsigned.
+ * @return
+ * Number of characters needed _excluding_ terminating zero.
+ */
+__rte_experimental
+int
+rte_bpf_validate_debug_format_value(char *buffer, size_t bufsz, char format,
+ uint64_t value);
+
+/**
+ * Format interval following the style of register format function.
+ *
+ * Parameters buffer, bufsz and return value work the same way as for snprintf.
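+ *
+ * For example, with format 'd' the full interval 0..UINT64_MAX is formatted
+ * as "0..-1", while an interval with min equal to max is formatted as the
+ * single value.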
+ *
+ *   Buffer to fill with the formatted interval.
+ * Buffer to fill with register information.
+ * @param bufsz
+ * Buffer size (including space for terminating zero).
+ * @param format
+ * One of characters 'd' or 'x' for signed or hexadecimal format.
+ * @param min
+ *   Minimum value of the interval, possibly a signed value typecast to unsigned.
+ * @param max
+ *   Maximum value of the interval, possibly a signed value typecast to unsigned.
+ * @return
+ * Number of characters needed _excluding_ terminating zero.
+ */
+__rte_experimental
+int
+rte_bpf_validate_debug_format_interval(char *buffer, size_t bufsz, char format,
+ uint64_t min, uint64_t max);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_BPF_VALIDATE_DEBUG_H_ */
--
2.43.0
* [PATCH 06/25] bpf/validate: fix BPF_ADD of pointer to a scalar
2026-05-06 17:38 [PATCH 00/25] bpf: test and fix issues in verifier Marat Khalili
` (4 preceding siblings ...)
2026-05-06 17:38 ` [PATCH 05/25] bpf/validate: introduce debugging interface Marat Khalili
@ 2026-05-06 17:38 ` Marat Khalili
2026-05-06 17:38 ` [PATCH 07/25] bpf/validate: fix BPF_LDX | EBPF_DW signed range Marat Khalili
` (19 subsequent siblings)
25 siblings, 0 replies; 28+ messages in thread
From: Marat Khalili @ 2026-05-06 17:38 UTC (permalink / raw)
To: Konstantin Ananyev; +Cc: dev, stable
Function `eval_add` preserved the type of the destination register even
when a pointer was added to it: if it contained a scalar, it remained a
scalar, and if it contained a pointer, it remained a pointer.
E.g. consider the following program with the current validation code:
Tested program:
0: mov r0, #0x0
1: mov r3, #0x0
2: add r3, r1 ; tested instruction
3: ldxdw r2, [r3 + 16]
4: mov r0, #0x1
5: exit
After the tested instruction the validator considers r3 to be a scalar
and fails validation with the error:
BPF: evaluate(): destination is not a pointer at pc: 3
However, this code is valid as long as the program argument points to a
valid memory area at least 24 bytes long, since 8 bytes are read from it
at offset 16.
When adding a pointer to a scalar, set the type of the result to a
pointer of the same type. When adding a pointer to a pointer, set the
type of the result to scalar and the value to unknown.
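With the fix, r3 keeps the pointer type after the tested instruction
(shown by the validate debugger as `%buffer<24> + 0`, the notation also
used in the next patch), so the load at pc 3 is accepted.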
The test will be added in subsequent commits since it depends on other
fixes.
Fixes: 8021917293d0 ("bpf: add extra validation for input BPF program")
Cc: stable@dpdk.org
Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
lib/bpf/bpf_validate.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c
index 8dac908c394f..41dca2fb7673 100644
--- a/lib/bpf/bpf_validate.c
+++ b/lib/bpf/bpf_validate.c
@@ -647,8 +647,20 @@ eval_apply_mask(struct bpf_reg_val *rv, uint64_t mask)
static void
eval_add(struct bpf_reg_val *rd, const struct bpf_reg_val *rs, uint64_t msk)
{
+ struct bpf_reg_val rs_buf;
struct bpf_reg_val rv;
+ if (RTE_BPF_ARG_PTR_TYPE(rs->v.type) != 0) {
+ if (RTE_BPF_ARG_PTR_TYPE(rd->v.type) != 0) {
+ /* treat sum of pointers as sum of two unknown scalars */
+ eval_fill_max_bound(&rs_buf, msk);
+ *rd = rs_buf;
+ rs = &rs_buf;
+ } else
+ /* scalar + pointer is a pointer of the same type */
+ rd->v = rs->v;
+ }
+
rv.u.min = (rd->u.min + rs->u.min) & msk;
rv.u.max = (rd->u.max + rs->u.max) & msk;
rv.s.min = ((uint64_t)rd->s.min + (uint64_t)rs->s.min) & msk;
--
2.43.0
* [PATCH 07/25] bpf/validate: fix BPF_LDX | EBPF_DW signed range
2026-05-06 17:38 [PATCH 00/25] bpf: test and fix issues in verifier Marat Khalili
` (5 preceding siblings ...)
2026-05-06 17:38 ` [PATCH 06/25] bpf/validate: fix BPF_ADD of pointer to a scalar Marat Khalili
@ 2026-05-06 17:38 ` Marat Khalili
2026-05-06 17:38 ` [PATCH 08/25] test/bpf_validate: add setup and basic tests Marat Khalili
` (18 subsequent siblings)
25 siblings, 0 replies; 28+ messages in thread
From: Marat Khalili @ 2026-05-06 17:38 UTC (permalink / raw)
To: Konstantin Ananyev; +Cc: dev, stable
Function `eval_max_load` copied the signed range from the unsigned one
regardless of the mask (operation width), producing on 64-bit loads the
nonsensical signed range 0..-1, which breaks the invariant min <= max
relied upon in multiple places (e.g. the signed overflow detection in
`eval_mul` only checks `s.min` to make sure the range is non-negative).
E.g. consider the following program with the current validation code:
Tested program:
0: mov r0, #0x0
1: mov r3, #0x0
2: add r3, r1
3: ldxdw r2, [r3 + 16] ; tested instruction
4: mov r0, #0x1
5: exit
Pre-state:
r2: %undefined
r3: %buffer<24> + 0
Post-state:
r2: 0..-1 INTERSECT 0..UINT64_MAX (!)
r3: %buffer<24> + 0
The part before INTERSECT represents the signed range, the part after it
the unsigned range. The unsigned range is correctly set to the full
range 0..UINT64_MAX, but the signed range copied from it becomes 0..-1.
Fix the loading logic to copy the unsigned range into the signed one
only for a non-full mask: narrower loads are zero-extended, so their
unsigned bounds are non-negative and hence also valid signed bounds.
The test will be added in subsequent commits since it depends on other
fixes.
Fixes: 8021917293d0 ("bpf: add extra validation for input BPF program")
Cc: stable@dpdk.org
Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
lib/bpf/bpf_validate.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c
index 41dca2fb7673..391be9cbb474 100644
--- a/lib/bpf/bpf_validate.c
+++ b/lib/bpf/bpf_validate.c
@@ -1220,10 +1220,11 @@ eval_max_load(struct bpf_reg_val *rv, uint64_t mask)
/* full 64-bit load */
if (mask == UINT64_MAX)
eval_smax_bound(rv, mask);
-
- /* zero-extend load */
- rv->s.min = rv->u.min;
- rv->s.max = rv->u.max;
+ else {
+ /* zero-extend load */
+ rv->s.min = rv->u.min;
+ rv->s.max = rv->u.max;
+ }
}
--
2.43.0
* [PATCH 08/25] test/bpf_validate: add setup and basic tests
2026-05-06 17:38 [PATCH 00/25] bpf: test and fix issues in verifier Marat Khalili
` (6 preceding siblings ...)
2026-05-06 17:38 ` [PATCH 07/25] bpf/validate: fix BPF_LDX | EBPF_DW signed range Marat Khalili
@ 2026-05-06 17:38 ` Marat Khalili
2026-05-06 17:38 ` [PATCH 09/25] test/bpf_validate: add harness for pointer tests Marat Khalili
` (17 subsequent siblings)
25 siblings, 0 replies; 28+ messages in thread
From: Marat Khalili @ 2026-05-06 17:38 UTC (permalink / raw)
Cc: dev
Introduce tests for validation of specific eBPF instructions. Each test
generates a sample eBPF program that establishes the specified
pre-conditions for the instruction, then checks both pre- and
post-conditions by stepping through the validation via the validate
debug interface.
Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
app/test/meson.build | 1 +
app/test/test_bpf_validate.c | 908 +++++++++++++++++++++++++++++++++++
2 files changed, 909 insertions(+)
create mode 100644 app/test/test_bpf_validate.c
diff --git a/app/test/meson.build b/app/test/meson.build
index 7d458f9c079a..45a18ee68bb7 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -35,6 +35,7 @@ source_file_deps = {
'test_bitset.c': [],
'test_bitratestats.c': ['metrics', 'bitratestats', 'ethdev'] + sample_packet_forward_deps,
'test_bpf.c': ['bpf', 'net'],
+ 'test_bpf_validate.c': ['bpf'],
'test_byteorder.c': [],
'test_cfgfile.c': ['cfgfile'],
'test_cksum.c': ['net'],
diff --git a/app/test/test_bpf_validate.c b/app/test/test_bpf_validate.c
new file mode 100644
index 000000000000..20b0dfaf87b2
--- /dev/null
+++ b/app/test/test_bpf_validate.c
@@ -0,0 +1,908 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#include "test.h"
+
+#include <bpf_def.h>
+#include <rte_bpf.h>
+#include <rte_bpf_validate_debug.h>
+#include <rte_errno.h>
+
+/*
+ * Tests of BPF validation.
+ */
+
+extern int test_bpf_validate_logtype;
+#define RTE_LOGTYPE_TEST_BPF_VALIDATE test_bpf_validate_logtype
+#define TEST_LOG_LINE(level, ...) \
+ RTE_LOG_LINE(level, TEST_BPF_VALIDATE, "" __VA_ARGS__)
+
+RTE_LOG_REGISTER(test_bpf_validate_logtype, test.bpf_validate, NOTICE);
+
+/* Special value indicating that program counter variable is not being used. */
+#define NO_PROGRAM_COUNTER UINT32_MAX
+
+/* Special value indicating that register variable is not being used. */
+#define NO_REGISTER UINT8_MAX
+
+/* Sizes of text buffers used for formatting various debug outputs. */
+#define VALUE_FORMAT_BUFFER_SIZE 24
+#define INTERVAL_FORMAT_BUFFER_SIZE 64
+#define REGISTER_FORMAT_BUFFER_SIZE 256
+#define DISASSEMBLY_FORMAT_BUFFER_SIZE 64
+
+/* Interval bounded by two signed values, inclusive; min <= max. */
+struct signed_interval {
+ int64_t min;
+ int64_t max;
+};
+
+/* Interval bounded by two unsigned values, inclusive; min <= max. */
+struct unsigned_interval {
+ uint64_t min;
+ uint64_t max;
+};
+
+/*
+ * Expected interval of register values.
+ *
+ * If `is_defined` is not set, domain is considered to be unused in verification
+ * parameters (instruction is not accessing corresponding register).
+ * It's not the same as `unknown` domain which describes register that is being
+ * used but can hold any value.
+ */
+struct domain {
+ bool is_defined;
+ struct signed_interval s;
+ struct unsigned_interval u;
+};
+
+/* Expected validation state at certain point. */
+struct state {
+ /* Specifies that the branch is dynamically unreachable. */
+ bool is_unreachable;
+ struct domain dst;
+ struct domain src;
+};
+
+/* Instruction verification parameters. */
+struct verify_instruction_param {
+ struct ebpf_insn tested_instruction;
+ size_t area_size;
+ /* States just before the tested instruction, just after, or if jumped. */
+ struct state pre;
+ struct state post;
+ struct state jump;
+};
+
+/* Point (pre/post/jump) specific verification context. */
+struct point_context {
+ uint32_t program_counter;
+ uint32_t hit_count;
+ char formatted_dst[REGISTER_FORMAT_BUFFER_SIZE];
+ char formatted_src[REGISTER_FORMAT_BUFFER_SIZE];
+};
+
+/* Verification context. */
+struct verify_instruction_context {
+ struct verify_instruction_param prm;
+ /* Allocation of registers in the generated program. */
+ uint8_t base_reg;
+ uint8_t dst_reg;
+ uint8_t src_reg;
+ uint8_t tmp_reg;
+ /* Number of times invalid state callback was called. */
+ uint32_t invalid_state_count;
+ /* Contexts just before the tested instruction, just after, or if jumped. */
+ struct point_context pre;
+ struct point_context post;
+ struct point_context jump;
+};
+
+/* Domain with both signed and unsigned interval having maximum size. */
+static const struct domain unknown = {
+ .is_defined = true,
+ .s = { .min = INT64_MIN, .max = INT64_MAX },
+ .u = { .min = 0, .max = UINT64_MAX },
+};
+
+
+/* BUILDING DOMAINS */
+
+/* Create domain from singleton interval. */
+static struct domain
+make_singleton_domain(uint64_t value)
+{
+ return (struct domain){
+ .is_defined = true,
+ .s = { .min = value, .max = value },
+ .u = { .min = value, .max = value },
+ };
+}
+
+/*
+ * Create domain from signed interval.
+ *
+ * If min and max have the same sign ((min ^ max) >= 0), the same bounds are
+ * also valid unsigned bounds; otherwise the unsigned interval is unknown.
+ */
+static struct domain
+make_signed_domain(int64_t min, int64_t max)
+{
+ RTE_VERIFY(min <= max);
+ return (struct domain){
+ .is_defined = true,
+ .s = { .min = min, .max = max },
+ .u = (min ^ max) >= 0 ?
+ (struct unsigned_interval){ .min = min, .max = max } :
+ unknown.u,
+ };
+}
+
+/*
+ * Create domain from unsigned interval.
+ *
+ * If min and max have the same top bit ((int64_t)(min ^ max) >= 0), the same
+ * bounds are also valid signed bounds; otherwise the signed interval is
+ * unknown.
+ */
+static struct domain
+make_unsigned_domain(uint64_t min, uint64_t max)
+{
+ RTE_VERIFY(min <= max);
+ return (struct domain){
+ .is_defined = true,
+ .s = (int64_t)(min ^ max) >= 0 ?
+ (struct signed_interval){ .min = min, .max = max } :
+ unknown.s,
+ .u = { .min = min, .max = max },
+ };
+}
+
+/* Return true if domain is a singleton. */
+static bool
+domain_is_singleton(const struct domain *domain)
+{
+ return domain->s.min == domain->s.max &&
+ (uint64_t)domain->s.max == domain->u.min &&
+ domain->u.min == domain->u.max;
+}
+
+/* Print error message into buffer if rc signifies error or overflow. */
+static void
+handle_format_errors(char *buffer, size_t bufsz, int rc)
+{
+ if (rc < 0)
+ snprintf(buffer, bufsz, "FORMAT ERROR %d!", -rc);
+ else if ((unsigned int)rc >= bufsz)
+ snprintf(buffer, bufsz, "FORMAT OVERFLOW!");
+}
+
+/* Format value into provided buffer and return the buffer. */
+static const char *
+format_value(char *buffer, size_t bufsz, char format, uint64_t value)
+{
+ handle_format_errors(buffer, bufsz,
+ rte_bpf_validate_debug_format_value(buffer, bufsz, format, value));
+ return buffer;
+}
+
+/* Format interval into provided buffer and return the buffer. */
+static const char *
+format_interval(char *buffer, size_t bufsz, char format, uint64_t min, uint64_t max)
+{
+ handle_format_errors(buffer, bufsz,
+ rte_bpf_validate_debug_format_interval(buffer, bufsz, format, min, max));
+ return buffer;
+}
+
+/* Format domain information into provided buffer and return the buffer. */
+static const char *
+format_domain(char *buffer, size_t bufsz, const struct domain *domain)
+{
+ char signed_buffer[INTERVAL_FORMAT_BUFFER_SIZE];
+ char unsigned_buffer[INTERVAL_FORMAT_BUFFER_SIZE];
+
+ const int rc = !domain->is_defined ?
+ snprintf(buffer, bufsz, "UNDEFINED") :
+ snprintf(buffer, bufsz, "%s INTERSECT %s",
+ format_interval(signed_buffer, sizeof(signed_buffer), 'd',
+ domain->s.min, domain->s.max),
+ format_interval(unsigned_buffer, sizeof(unsigned_buffer), 'x',
+ domain->u.min, domain->u.max));
+
+ handle_format_errors(buffer, bufsz, rc < 0 ? -errno : rc);
+
+ return buffer;
+}
+
+/* Format register information into provided buffer and return the buffer. */
+static const char *
+format_register(struct rte_bpf_validate_debug *debug, char *buffer, size_t bufsz, uint8_t reg)
+{
+ handle_format_errors(buffer, bufsz,
+ rte_bpf_validate_debug_format_register_info(debug, buffer, bufsz, reg));
+ return buffer;
+}
+
+
+/* CHECKING REGISTER ACTUAL DOMAINS */
+
+/* Return true the specified conditional jump _may_ occur at current state. */
+static bool
+may_jump(const struct rte_bpf_validate_debug *debug,
+ const struct ebpf_insn *jump, uint64_t imm64)
+{
+ const int result = rte_bpf_validate_debug_may_jump(debug, jump, imm64);
+ RTE_VERIFY(result >= 0);
+ return (result & RTE_BPF_VALIDATE_DEBUG_MAY_BE_TRUE) != 0;
+}
+
+/* Check interval of the register interpreted as signed. */
+static int
+check_signed_interval(struct rte_bpf_validate_debug *debug,
+ uint8_t reg, struct signed_interval interval)
+{
+ char buffer[VALUE_FORMAT_BUFFER_SIZE];
+
+ TEST_ASSERT_EQUAL(may_jump(debug,
+ &(struct ebpf_insn){
+ .code = (BPF_JMP | EBPF_JSLT | BPF_K),
+ .dst_reg = reg,
+ }, interval.min),
+ false,
+ "r%hhu s< %s is impossible", reg,
+ format_value(buffer, sizeof(buffer), 'd', interval.min));
+
+ TEST_ASSERT_EQUAL(may_jump(debug,
+ &(struct ebpf_insn){
+ .code = (BPF_JMP | BPF_JEQ | BPF_K),
+ .dst_reg = reg,
+ }, interval.min),
+ true,
+ "r%hhu == %s is possible", reg,
+ format_value(buffer, sizeof(buffer), 'd', interval.min));
+
+ TEST_ASSERT_EQUAL(may_jump(debug,
+ &(struct ebpf_insn){
+ .code = (BPF_JMP | BPF_JEQ | BPF_K),
+ .dst_reg = reg,
+ }, interval.max),
+ true,
+ "r%hhu == %s is possible", reg,
+ format_value(buffer, sizeof(buffer), 'd', interval.max));
+
+ TEST_ASSERT_EQUAL(may_jump(debug,
+ &(struct ebpf_insn){
+ .code = (BPF_JMP | EBPF_JSGT | BPF_K),
+ .dst_reg = reg,
+ }, interval.max),
+ false,
+ "r%hhu s> %s is impossible", reg,
+ format_value(buffer, sizeof(buffer), 'd', interval.max));
+
+ return TEST_SUCCESS;
+}
+
+/* Check interval of the register interpreted as unsigned. */
+static int
+check_unsigned_interval(struct rte_bpf_validate_debug *debug,
+ uint8_t reg, struct unsigned_interval interval)
+{
+ char buffer[VALUE_FORMAT_BUFFER_SIZE];
+
+ TEST_ASSERT_EQUAL(may_jump(debug,
+ &(struct ebpf_insn){
+ .code = (BPF_JMP | EBPF_JLT | BPF_K),
+ .dst_reg = reg,
+ }, interval.min),
+ false,
+ "r%hhu u< %s is impossible", reg,
+ format_value(buffer, sizeof(buffer), 'x', interval.min));
+
+ TEST_ASSERT_EQUAL(may_jump(debug,
+ &(struct ebpf_insn){
+ .code = (BPF_JMP | BPF_JEQ | BPF_K),
+ .dst_reg = reg,
+ }, interval.min),
+ true,
+ "r%hhu == %s is possible", reg,
+ format_value(buffer, sizeof(buffer), 'x', interval.min));
+
+ TEST_ASSERT_EQUAL(may_jump(debug,
+ &(struct ebpf_insn){
+ .code = (BPF_JMP | BPF_JEQ | BPF_K),
+ .dst_reg = reg,
+ }, interval.max),
+ true,
+ "r%hhu == %s is possible", reg,
+ format_value(buffer, sizeof(buffer), 'x', interval.max));
+
+ TEST_ASSERT_EQUAL(may_jump(debug,
+ &(struct ebpf_insn){
+ .code = (BPF_JMP | BPF_JGT | BPF_K),
+ .dst_reg = reg,
+ }, interval.max),
+ false,
+ "r%hhu u> %s is impossible", reg,
+ format_value(buffer, sizeof(buffer), 'x', interval.max));
+
+ return TEST_SUCCESS;
+}
+
+/* Check domain of the register interpreted as value. */
+static int
+check_domain_impl(struct rte_bpf_validate_debug *debug, uint8_t reg,
+ const struct domain *domain)
+{
+ TEST_ASSERT_SUCCESS(
+ check_signed_interval(debug, reg, domain->s),
+ "signed interval check");
+
+ TEST_ASSERT_SUCCESS(
+ check_unsigned_interval(debug, reg, domain->u),
+ "unsigned interval check");
+
+ return TEST_SUCCESS;
+}
+
+/* Check domain of the register and format the values in case of an error. */
+static int
+check_domain(struct rte_bpf_validate_debug *debug, uint8_t reg,
+ const struct domain *domain)
+{
+ char buffer[REGISTER_FORMAT_BUFFER_SIZE];
+
+ const int rc = check_domain_impl(debug, reg, domain);
+
+ if (rc != TEST_SUCCESS) {
+ TEST_LOG_LINE(WARNING, "\tExpected: r%hhu = %s", reg,
+ format_domain(buffer, sizeof(buffer), domain));
+
+ TEST_LOG_LINE(WARNING, "\tFound: r%hhu = %s", reg,
+ format_register(debug, buffer, sizeof(buffer), reg));
+ }
+
+ return rc;
+}
+
+
+/* GENERATING TEST PROGRAM */
+
+static bool
+fits_in_imm32(int64_t value)
+{
+ return value >= INT32_MIN && value <= INT32_MAX;
+}
+
+/* Load constant into the register. */
+static void
+load_constant(struct ebpf_insn **ins, uint8_t reg, int64_t value)
+{
+ if (fits_in_imm32(value)) {
+ *(*ins)++ = (struct ebpf_insn){
+ .code = (EBPF_ALU64 | EBPF_MOV | BPF_K),
+ .dst_reg = reg,
+ .imm = (int32_t)value,
+ };
+ } else {
+ /* Load imm64 into tmp_reg using wide load, lower bits first... */
+ *(*ins)++ = (struct ebpf_insn){
+ .code = (BPF_LD | BPF_IMM | EBPF_DW),
+ .dst_reg = reg,
+ .imm = (uint32_t)value,
+ };
+ /* ... then higher bits. */
+ *(*ins)++ = (struct ebpf_insn){
+ .imm = (uint32_t)(value >> 32),
+ };
+ }
+}
+
+/*
+ * Compare specified register to value and jump.
+ *
+ * Jump offset is not filled and should be patched in by the caller.
+ */
+static void
+compare_and_jump(struct ebpf_insn **ins, uint8_t op, uint8_t reg,
+ int64_t value, uint8_t tmp_reg)
+{
+ if (fits_in_imm32(value)) {
+ /* Jump on specified condition between reg and immediate. */
+ *(*ins)++ = (struct ebpf_insn){
+ .code = (BPF_JMP | op | BPF_K),
+ .dst_reg = reg,
+ .imm = value,
+ };
+ } else {
+ /* Load value into tmp_reg. */
+ load_constant(ins, tmp_reg, value);
+
+ /* Jump on specified condition between reg and tmp_reg. */
+ *(*ins)++ = (struct ebpf_insn){
+ .code = (BPF_JMP | op | BPF_X),
+ .dst_reg = reg,
+ .src_reg = tmp_reg,
+ };
+ }
+}
+
+/*
+ * Prepare register to be in the specified domain.
+ *
+ * Unless singleton, load unknown value into it and clamp it with conditional jumps.
+ * (Jump offsets are not filled and should be patched in by the caller.)
+ */
+static void
+prepare_domain(struct ebpf_insn **ins, uint8_t reg,
+ const struct domain *domain, uint8_t base_reg, int *service_cell_count,
+ uint8_t tmp_reg)
+{
+ if (domain_is_singleton(domain)) {
+ /* Don't need any uncertainty for a singleton. */
+ load_constant(ins, reg, domain->s.min);
+ return;
+ }
+
+ /* Load value from memory area into the register. */
+ *(*ins)++ = (struct ebpf_insn){
+ .code = (BPF_LDX | EBPF_DW | BPF_MEM),
+ .dst_reg = reg,
+ .src_reg = base_reg,
+ .off = sizeof(uint64_t) * (*service_cell_count)++,
+ };
+
+ /*
+ * Use both signed and unsigned conditions, even if redundant.
+ * It makes it more robust if conditional jump verification itself
+ * contains bugs like not updating the other type of interval.
+ * Jump instructions themselves can be tested separately to catch
+ * these bugs, this preparation phase is not a test for them.
+ */
+ if (domain->u.min > unknown.u.min)
+ compare_and_jump(ins, EBPF_JLT, reg, domain->u.min, tmp_reg);
+ if (domain->u.max < unknown.u.max)
+ compare_and_jump(ins, BPF_JGT, reg, domain->u.max, tmp_reg);
+ if (domain->s.min > unknown.s.min)
+ compare_and_jump(ins, EBPF_JSLT, reg, domain->s.min, tmp_reg);
+ if (domain->s.max < unknown.s.max)
+ compare_and_jump(ins, EBPF_JSGT, reg, domain->s.max, tmp_reg);
+}
+
+static void
+fill_verify_instruction_defaults(struct verify_instruction_param *prm)
+{
+ if (BPF_CLASS(prm->tested_instruction.code) != BPF_JMP)
+ prm->jump.is_unreachable = true;
+
+ RTE_VERIFY(!prm->pre.is_unreachable);
+ if (prm->post.is_unreachable) {
+ RTE_VERIFY(!prm->post.dst.is_defined);
+ RTE_VERIFY(!prm->post.src.is_defined);
+ } else {
+ if (!prm->post.dst.is_defined)
+ prm->post.dst = prm->pre.dst;
+ if (!prm->post.src.is_defined)
+ prm->post.src = prm->pre.src;
+ }
+
+ if (prm->jump.is_unreachable) {
+ RTE_VERIFY(!prm->jump.dst.is_defined);
+ RTE_VERIFY(!prm->jump.src.is_defined);
+ } else {
+ if (!prm->jump.dst.is_defined)
+ prm->jump.dst = prm->pre.dst;
+ if (!prm->jump.src.is_defined)
+ prm->jump.src = prm->pre.src;
+ }
+}
+
+/* Generate program for the tested instruction and domains from the context.
+ *
+ * Return number of instructions.
+ *
+ * Destination and source registers in tested_instruction should not be specified,
+ * they are filled in by the function as long as domains for them are specified.
+ * Jump offset should not be specified, it is filled in by the function.
+ *
+ * If `pre.dst` or `pre.src` domain is not defined, corresponding register
+ * is not prepared.
+ *
+ * For non-jump instructions `jump.is_unreachable` is always set automatically.
+ *
+ * If any of the post or jump domains are not defined, they are copied from
+ * the corresponding pre domains, unless that branch is unreachable.
+ *
+ * Memory area size is automatically expanded to have enough space for loading
+ * unknown dst and src register values, thus testing sizes less than 16 bytes is
+ * not guaranteed.
+ *
+ * Limitations:
+ * - Support for jump instructions is incomplete (e.g. exit, ja).
+ * - Wide instructions are not supported yet.
+ */
+static uint32_t
+generate_program(struct verify_instruction_context *ctx, struct ebpf_insn *ins)
+{
+ struct ebpf_insn *const ins_buf = ins;
+ /* Number of double words used for service purposes. */
+ int service_cell_count = 0;
+
+ /* Make sure we actually support provided instruction. */
+ switch (BPF_CLASS(ctx->prm.tested_instruction.code)) {
+ case BPF_LD:
+ /* Wide instructions are not supported yet. */
+ RTE_VERIFY(!rte_bpf_insn_is_wide(&ctx->prm.tested_instruction));
+ break;
+ }
+
+ fill_verify_instruction_defaults(&ctx->prm);
+
+ /* Allocate registers, base_reg is received as program argument. */
+ ctx->base_reg = EBPF_REG_1;
+ ctx->dst_reg = (ctx->prm.pre.dst.is_defined || ctx->prm.post.dst.is_defined ||
+ ctx->prm.jump.dst.is_defined) ? EBPF_REG_2 : NO_REGISTER;
+ ctx->src_reg = (ctx->prm.pre.src.is_defined || ctx->prm.post.src.is_defined ||
+ ctx->prm.jump.src.is_defined) ? EBPF_REG_3 : NO_REGISTER;
+ ctx->tmp_reg = EBPF_REG_4;
+
+ /* Clear r0 to make it eligible as a return value. */
+ load_constant(&ins, EBPF_REG_0, 0);
+
+ /* Fill dst register in the instruction if defined anywhere, prepare if needed. */
+ if (ctx->dst_reg != NO_REGISTER) {
+ RTE_VERIFY(ctx->prm.tested_instruction.dst_reg == 0);
+ ctx->prm.tested_instruction.dst_reg = ctx->dst_reg;
+
+ if (ctx->prm.pre.dst.is_defined)
+ prepare_domain(&ins, ctx->dst_reg, &ctx->prm.pre.dst,
+ ctx->base_reg, &service_cell_count, ctx->tmp_reg);
+ else
+ TEST_LOG_LINE(DEBUG, "Not preparing undefined r%hhu", ctx->dst_reg);
+ }
+
+ /* Fill src register in the instruction if defined anywhere, prepare if needed. */
+ if (ctx->src_reg != NO_REGISTER) {
+ RTE_VERIFY(ctx->prm.tested_instruction.src_reg == 0);
+ ctx->prm.tested_instruction.src_reg = ctx->src_reg;
+
+ if (ctx->prm.pre.src.is_defined)
+ prepare_domain(&ins, ctx->src_reg, &ctx->prm.pre.src,
+ ctx->base_reg, &service_cell_count, ctx->tmp_reg);
+ else
+ TEST_LOG_LINE(DEBUG, "Not preparing undefined r%hhu", ctx->src_reg);
+ }
+
+ /* Automatically increase area size if needed. */
+ ctx->prm.area_size = RTE_MAX(ctx->prm.area_size, service_cell_count * sizeof(uint64_t));
+
+ /* Issue tested instruction. */
+ ctx->pre.program_counter = ins - ins_buf;
+ *ins++ = ctx->prm.tested_instruction;
+
+ /* Issue post instruction (for setting post breakpoint). */
+ ctx->post.program_counter = ins - ins_buf;
+ load_constant(&ins, EBPF_REG_0, 1);
+
+ /* Issue jump branch for the jump instruction, even if dynamically unreachable. */
+ if (BPF_CLASS(ctx->prm.tested_instruction.code) != BPF_JMP)
+ ctx->jump.program_counter = NO_PROGRAM_COUNTER;
+ else {
+ /* Finish previous branch by issuing exit. */
+ *ins++ = (struct ebpf_insn){ .code = (BPF_JMP | EBPF_EXIT) };
+
+ /* Issue jump target instruction (for setting jump breakpoint). */
+ ctx->jump.program_counter = ins - ins_buf;
+ load_constant(&ins, EBPF_REG_0, 2);
+
+ /* Patch jump in tested jump instruction. */
+ RTE_VERIFY(ins_buf[ctx->pre.program_counter].off == 0);
+ ins_buf[ctx->pre.program_counter].off =
+ ctx->jump.program_counter - ctx->post.program_counter;
+ }
+
+ /* Issue exit instruction. */
+ const uint32_t exit_pc = ins - ins_buf;
+ *ins++ = (struct ebpf_insn){ .code = (BPF_JMP | EBPF_EXIT) };
+
+ /* Patch all jumps to point to exit. */
+ for (uint32_t pc = 0; pc != ctx->pre.program_counter; ++pc)
+ if (BPF_CLASS(ins_buf[pc].code) == BPF_JMP) {
+ RTE_ASSERT(ins_buf[pc].off == 0);
+ ins_buf[pc].off = exit_pc - (pc + 1);
+ }
+
+ const uint32_t nb_ins = ins - ins_buf;
+ return nb_ins;
+}
+
+
+/* VERIFICATION OF AN ARBITRARY INSTRUCTION */
+
+/* Invoked when invalid state is detected. */
+static int
+invalid_state_cb(struct rte_bpf_validate_debug *debug, void *void_ctx)
+{
+ struct verify_instruction_context *const ctx = void_ctx;
+
+ ++ctx->invalid_state_count;
+
+ TEST_LOG_LINE(WARNING,
+ "Invalid state detected at pc %u",
+ rte_bpf_validate_debug_get_pc(debug));
+
+ return TEST_SUCCESS;
+}
+
+static int
+point_callback(struct rte_bpf_validate_debug *debug, const struct verify_instruction_context *ctx,
+ struct point_context *point_ctx, const struct state *state)
+{
+ TEST_ASSERT_EQUAL(point_ctx->hit_count, 0, "not called before");
+
+ const uint32_t pc = rte_bpf_validate_debug_get_pc(debug);
+ TEST_ASSERT_EQUAL(pc, point_ctx->program_counter,
+ "Expected program counter: %" PRIu32 ", found: %" PRIu32,
+ point_ctx->program_counter, pc);
+
+ if (ctx->dst_reg != NO_REGISTER) {
+ format_register(debug, point_ctx->formatted_dst,
+ sizeof(point_ctx->formatted_dst), ctx->dst_reg);
+
+ if (state->dst.is_defined) {
+ TEST_ASSERT_SUCCESS(
+ check_domain(debug, ctx->dst_reg, &state->dst),
+ "dst domain check");
+ TEST_LOG_LINE(DEBUG, "Successfully checked r%hhu.", ctx->dst_reg);
+ } else
+ TEST_LOG_LINE(DEBUG, "Not checking undefined r%hhu.", ctx->dst_reg);
+ }
+
+ if (ctx->src_reg != NO_REGISTER) {
+ format_register(debug, point_ctx->formatted_src,
+ sizeof(point_ctx->formatted_src), ctx->src_reg);
+
+ if (state->src.is_defined) {
+ TEST_ASSERT_SUCCESS(
+ check_domain(debug, ctx->src_reg, &state->src),
+ "src domain check");
+ TEST_LOG_LINE(DEBUG, "Successfully checked r%hhu.", ctx->src_reg);
+ } else
+ TEST_LOG_LINE(DEBUG, "Not checking undefined r%hhu.", ctx->src_reg);
+ }
+
+ ++point_ctx->hit_count;
+
+ return TEST_SUCCESS;
+}
+
+/*
+ * Invoked before the tested instruction and checks pre-conditions.
+ *
+ * Also formats registers in the pre state for postmortem, if needed.
+ */
+static int
+pre_callback(struct rte_bpf_validate_debug *debug, void *void_ctx)
+{
+ struct verify_instruction_context *const ctx = void_ctx;
+
+ TEST_LOG_LINE(DEBUG, "Pre callback invoked.");
+
+ TEST_ASSERT_SUCCESS(
+ point_callback(debug, ctx, &ctx->pre, &ctx->prm.pre),
+ "pre-state check");
+
+ return TEST_SUCCESS;
+}
+
+/* Invoked after the tested instruction and checks post-conditions. */
+static int
+post_callback(struct rte_bpf_validate_debug *debug, void *void_ctx)
+{
+ struct verify_instruction_context *const ctx = void_ctx;
+
+ TEST_LOG_LINE(DEBUG, "Post callback invoked.");
+
+ TEST_ASSERT_SUCCESS(
+ point_callback(debug, ctx, &ctx->post, &ctx->prm.post),
+ "post-state check");
+
+ return TEST_SUCCESS;
+}
+
+/* Invoked after the tested instruction jumped and checks jump post-conditions. */
+static int
+jump_callback(struct rte_bpf_validate_debug *debug, void *void_ctx)
+{
+ struct verify_instruction_context *const ctx = void_ctx;
+
+ TEST_LOG_LINE(DEBUG, "Jump callback invoked.");
+
+ TEST_ASSERT_SUCCESS(
+ point_callback(debug, ctx, &ctx->jump, &ctx->prm.jump),
+ "jump-state check");
+
+ return TEST_SUCCESS;
+}
+
+static int
+debug_validation(struct verify_instruction_context *ctx, const struct ebpf_insn *ins,
+ uint32_t nb_ins)
+{
+ struct rte_bpf_validate_debug *const debug = rte_bpf_validate_debug_create();
+ TEST_ASSERT_NOT_NULL(debug, "validate debug create error %d", rte_errno);
+
+ const struct rte_bpf_prm_ex prm = {
+ .sz = sizeof(struct rte_bpf_prm_ex),
+ .origin = RTE_BPF_ORIGIN_RAW,
+ .raw.ins = ins,
+ .raw.nb_ins = nb_ins,
+ .prog_arg[0] = {
+ .type = RTE_BPF_ARG_PTR,
+ .size = ctx->prm.area_size,
+ },
+ .nb_prog_arg = 1,
+ .debug = debug,
+ };
+
+ /* Catch invalid states. */
+ TEST_ASSERT_NOT_NULL(rte_bpf_validate_debug_catch(debug,
+ RTE_BPF_VALIDATE_DEBUG_EVENT_INVALID_STATE,
+ &(struct rte_bpf_validate_debug_callback){
+ .fn = invalid_state_cb,
+ .ctx = ctx,
+ }), "add catchpoint error %d", rte_errno);
+
+ /* Break on pre test instruction. */
+ TEST_ASSERT_NOT_NULL(rte_bpf_validate_debug_break(debug, ctx->pre.program_counter,
+ &(struct rte_bpf_validate_debug_callback){
+ .fn = pre_callback,
+ .ctx = ctx,
+ }), "add pre breakpoint error %d", rte_errno);
+
+ /* Break on post test instruction. */
+ TEST_ASSERT_NOT_NULL(rte_bpf_validate_debug_break(debug, ctx->post.program_counter,
+ &(struct rte_bpf_validate_debug_callback){
+ .fn = post_callback,
+ .ctx = ctx,
+ }), "add post breakpoint error %d", rte_errno);
+
+ if (ctx->jump.program_counter != NO_PROGRAM_COUNTER)
+ /* Break on jump test instruction. */
+ TEST_ASSERT_NOT_NULL(rte_bpf_validate_debug_break(debug, ctx->jump.program_counter,
+ &(struct rte_bpf_validate_debug_callback){
+ .fn = jump_callback,
+ .ctx = ctx,
+ }), "add jump breakpoint error %d", rte_errno);
+
+ struct rte_bpf *const bpf = rte_bpf_load_ex(&prm);
+ const int validation_errno = rte_errno;
+
+ rte_bpf_destroy(bpf);
+ rte_bpf_validate_debug_destroy(debug);
+
+ TEST_ASSERT_NOT_NULL(bpf, "validation error %d", validation_errno);
+
+ TEST_ASSERT_EQUAL(ctx->pre.hit_count, !ctx->prm.pre.is_unreachable,
+ "pre hit_count = %d", ctx->pre.hit_count);
+ TEST_ASSERT_EQUAL(ctx->post.hit_count, !ctx->prm.post.is_unreachable,
+ "post hit_count = %d", ctx->post.hit_count);
+ TEST_ASSERT_EQUAL(ctx->jump.hit_count, !ctx->prm.jump.is_unreachable,
+ "jump hit_count = %d", ctx->jump.hit_count);
+
+ return TEST_SUCCESS;
+}
+
+/* Dump whole program to log. */
+static void
+log_program_dump(const struct ebpf_insn *ins, uint32_t nb_ins, uint32_t pre_pc)
+{
+ char hexadecimal[DISASSEMBLY_FORMAT_BUFFER_SIZE];
+ char disassembly[DISASSEMBLY_FORMAT_BUFFER_SIZE];
+
+ TEST_LOG_LINE(NOTICE, "\tTested program:");
+ for (uint32_t pc = 0; pc != nb_ins; ++pc) {
+ rte_bpf_format(hexadecimal, sizeof(hexadecimal), &ins[pc], pc,
+ RTE_BPF_FORMAT_FLAG_HEXADECIMAL |
+ RTE_BPF_FORMAT_FLAG_NEVER_WIDE);
+ rte_bpf_format(disassembly, sizeof(disassembly), &ins[pc], pc,
+ RTE_BPF_FORMAT_FLAG_DISASSEMBLY |
+ RTE_BPF_FORMAT_FLAG_ABSOLUTE_JUMPS);
+ TEST_LOG_LINE(NOTICE, "\t%5u: \t%s \t%s%s",
+ pc, hexadecimal, disassembly,
+ pc != pre_pc ? "" : " ; tested instruction");
+
+ if (!rte_bpf_insn_is_wide(&ins[pc]))
+ continue;
+
+ ++pc;
+
+ rte_bpf_format(hexadecimal, sizeof(hexadecimal), &ins[pc], pc,
+ RTE_BPF_FORMAT_FLAG_HEXADECIMAL |
+ RTE_BPF_FORMAT_FLAG_NEVER_WIDE);
+ TEST_LOG_LINE(NOTICE, "\t%6s \t%s", "", hexadecimal);
+ }
+}
+
+static void
+log_formatted_registers(const char *heading, const struct verify_instruction_context *ctx,
+ const struct point_context *point_ctx)
+{
+ char register_name[8];
+
+ TEST_LOG_LINE(NOTICE, "\t%s", heading);
+ if (ctx->dst_reg != NO_REGISTER) {
+ snprintf(register_name, sizeof(register_name), "r%hhu", ctx->dst_reg);
+ TEST_LOG_LINE(NOTICE, "\t%5s: \t%s", register_name, point_ctx->formatted_dst);
+ }
+ if (ctx->src_reg != NO_REGISTER) {
+ snprintf(register_name, sizeof(register_name), "r%hhu", ctx->src_reg);
+ TEST_LOG_LINE(NOTICE, "\t%5s: \t%s", register_name, point_ctx->formatted_src);
+ }
+}
+
+/*
+ * Verify instruction validation behaviour described by prm.
+ *
+ * Generate the program containing specified instruction on the code path with
+ * specified register pre-domains and verify specified register post-domains.
+ *
+ * See comment to `generate_program` for more requirements and limitations.
+ */
+static int
+verify_instruction(struct verify_instruction_param prm)
+{
+ struct verify_instruction_context ctx = {
+ .prm = prm,
+ };
+ struct ebpf_insn ins_buf[64];
+
+ const uint32_t nb_ins = generate_program(&ctx, ins_buf);
+ RTE_ASSERT(nb_ins <= RTE_DIM(ins_buf));
+
+ const int rc = debug_validation(&ctx, ins_buf, nb_ins);
+
+ /* Log more data at DEBUG level on success, NOTICE on failure. */
+ if (rte_log_can_log(RTE_LOGTYPE_TEST_BPF_VALIDATE, RTE_LOG_DEBUG) ||
+ rc != TEST_SUCCESS) {
+ log_program_dump(ins_buf, nb_ins, ctx.pre.program_counter);
+ log_formatted_registers("Pre-state:", &ctx, &ctx.pre);
+ log_formatted_registers("Post-state:", &ctx, &ctx.post);
+ if (ctx.jump.program_counter != NO_PROGRAM_COUNTER)
+ log_formatted_registers("Jump-state:", &ctx, &ctx.jump);
+ }
+
+ return rc;
+}
+
+
+/* TESTS FOR SPECIFIC INSTRUCTIONS */
+
+/* 64-bit addition of immediate to a range. */
+static int
+test_alu64_add_k(void)
+{
+ return verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (EBPF_ALU64 | BPF_ADD | BPF_K),
+ .imm = 17,
+ },
+ .pre.dst = make_signed_domain(11, 29),
+ .post.dst = make_signed_domain(11 + 17, 29 + 17),
+ });
+}
+
+REGISTER_FAST_TEST(bpf_validate_alu64_add_k_autotest, NOHUGE_OK, ASAN_OK,
+ test_alu64_add_k);
+
+/* Jump if greater than immediate. */
+static int
+test_jmp64_jeq_k(void)
+{
+ return verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | BPF_JGT | BPF_K),
+ .imm = 0,
+ },
+ .pre.dst = make_unsigned_domain(0, 1),
+ .post.dst = make_singleton_domain(0),
+ .jump.dst = make_singleton_domain(1),
+ });
+}
+
+REGISTER_FAST_TEST(bpf_validate_jmp64_jeq_k_autotest, NOHUGE_OK, ASAN_OK,
+ test_jmp64_jeq_k);
--
2.43.0
* [PATCH 09/25] test/bpf_validate: add harness for pointer tests
2026-05-06 17:38 [PATCH 00/25] bpf: test and fix issues in verifier Marat Khalili
` (7 preceding siblings ...)
2026-05-06 17:38 ` [PATCH 08/25] test/bpf_validate: add setup and basic tests Marat Khalili
@ 2026-05-06 17:38 ` Marat Khalili
2026-05-06 17:38 ` [PATCH 10/25] bpf/validate: fix EBPF_JSLT | BPF_X evaluation Marat Khalili
` (16 subsequent siblings)
25 siblings, 0 replies; 28+ messages in thread
From: Marat Khalili @ 2026-05-06 17:38 UTC (permalink / raw)
Cc: dev
Add the necessary harness for testing pointer values in registers, and
add basic tests for adding pointers and scalars in various combinations.
These tests cover the previously introduced fixes for BPF_ADD and BPF_LDX.
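A pointer domain is realized by the harness as an unknown scalar clamped
to the wanted offset range and then added to the base pointer register.
For make_pointer_domain(11, 29) the generated preparation sequence is
roughly the following (an illustrative sketch; register numbers and the
label are examples, not the exact generated code):
	ldxdw	r2, [r1 + 0]	; load unknown scalar
	jlt	r2, #0xb, Lexit	; clamp it to 11..29
	jgt	r2, #0x1d, Lexit
	add	r2, r1		; offset + base = pointer
The resulting pointer is then checked both by comparing it against the
base register and by probing which offsets the validator permits to be
dereferenced (see check_relative_interval and check_pointer_access).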
Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
app/test/test_bpf_validate.c | 311 +++++++++++++++++++++++++++++++++--
1 file changed, 297 insertions(+), 14 deletions(-)
diff --git a/app/test/test_bpf_validate.c b/app/test/test_bpf_validate.c
index 20b0dfaf87b2..cdceae3e0728 100644
--- a/app/test/test_bpf_validate.c
+++ b/app/test/test_bpf_validate.c
@@ -51,9 +51,12 @@ struct unsigned_interval {
* parameters (instruction is not accessing corresponding register).
* It's not the same as `unknown` domain which describes register that is being
* used but can hold any value.
+ *
+ * Flag `is_pointer` tells if the interval is relative to some memory area base.
*/
struct domain {
bool is_defined;
+ bool is_pointer;
struct signed_interval s;
struct unsigned_interval u;
};
@@ -149,7 +152,16 @@ make_unsigned_domain(uint64_t min, uint64_t max)
};
}
-/* Return true if domain is a singleton. */
+/* Create pointer domain from a signed offset interval. */
+static struct domain
+make_pointer_domain(int64_t min, int64_t max)
+{
+ struct domain result = make_signed_domain(min, max);
+ result.is_pointer = true;
+ return result;
+}
+
+/* Return true if domain is a scalar or pointer singleton. */
static bool
domain_is_singleton(const struct domain *domain)
{
@@ -195,7 +207,8 @@ format_domain(char *buffer, size_t bufsz, const struct domain *domain)
const int rc = !domain->is_defined ?
snprintf(buffer, bufsz, "UNDEFINED") :
- snprintf(buffer, bufsz, "%s INTERSECT %s",
+ snprintf(buffer, bufsz, "%s %s INTERSECT %s",
+ domain->is_pointer ? "pointer" : "scalar",
format_interval(signed_buffer, sizeof(signed_buffer), 'd',
domain->s.min, domain->s.max),
format_interval(unsigned_buffer, sizeof(unsigned_buffer), 'x',
@@ -228,7 +241,7 @@ may_jump(const struct rte_bpf_validate_debug *debug,
return (result & RTE_BPF_VALIDATE_DEBUG_MAY_BE_TRUE) != 0;
}
-/* Check interval of the register interpreted as signed. */
+/* Check interval of the register interpreted as signed scalar. */
static int
check_signed_interval(struct rte_bpf_validate_debug *debug,
uint8_t reg, struct signed_interval interval)
@@ -274,7 +287,7 @@ check_signed_interval(struct rte_bpf_validate_debug *debug,
return TEST_SUCCESS;
}
-/* Check interval of the register interpreted as unsigned. */
+/* Check interval of the register interpreted as unsigned scalar. */
static int
check_unsigned_interval(struct rte_bpf_validate_debug *debug,
uint8_t reg, struct unsigned_interval interval)
@@ -320,18 +333,154 @@ check_unsigned_interval(struct rte_bpf_validate_debug *debug,
return TEST_SUCCESS;
}
-/* Check domain of the register interpreted as value. */
+/* Check interval of the register relative to the base register. */
+static int
+check_relative_interval(struct rte_bpf_validate_debug *debug,
+ uint8_t reg, struct signed_interval interval, uint8_t base_reg)
+{
+ char buffer[VALUE_FORMAT_BUFFER_SIZE];
+
+ TEST_ASSERT_EQUAL(may_jump(debug,
+ &(struct ebpf_insn){
+ .code = (BPF_JMP | EBPF_JLT | BPF_X),
+ .dst_reg = reg,
+ .src_reg = base_reg,
+ }, interval.min),
+ false,
+ "r%hhu u< r%hhu + %s is impossible", reg, base_reg,
+ format_value(buffer, sizeof(buffer), 'd', interval.min));
+
+ TEST_ASSERT_EQUAL(may_jump(debug,
+ &(struct ebpf_insn){
+ .code = (BPF_JMP | BPF_JEQ | BPF_X),
+ .dst_reg = reg,
+ .src_reg = base_reg,
+ }, interval.min),
+ true,
+ "r%hhu == r%hhu + %s is possible", reg, base_reg,
+ format_value(buffer, sizeof(buffer), 'd', interval.min));
+
+ TEST_ASSERT_EQUAL(may_jump(debug,
+ &(struct ebpf_insn){
+ .code = (BPF_JMP | BPF_JEQ | BPF_X),
+ .dst_reg = reg,
+ .src_reg = base_reg,
+ }, interval.max),
+ true,
+ "r%hhu == r%hhu + %s is possible", reg, base_reg,
+ format_value(buffer, sizeof(buffer), 'd', interval.max));
+
+ TEST_ASSERT_EQUAL(may_jump(debug,
+ &(struct ebpf_insn){
+ .code = (BPF_JMP | BPF_JGT | BPF_X),
+ .dst_reg = reg,
+ .src_reg = base_reg,
+ }, interval.max),
+ false,
+ "r%hhu u> r%hhu + %s is impossible", reg, base_reg,
+ format_value(buffer, sizeof(buffer), 'd', interval.max));
+
+ return TEST_SUCCESS;
+}
+
+/*
+ * Check access of the register interpreted as pointer.
+ *
+ * Unlike other similar functions, min > max is not a problem here,
+ * so either signed or unsigned pair can be passed without any issues.
+ *
+ * For this reason plain min/max values are used here instead of
+ * signed_interval or unsigned_interval, to avoid confusion.
+ */
static int
-check_domain_impl(struct rte_bpf_validate_debug *debug, uint8_t reg,
+check_pointer_access(struct rte_bpf_validate_debug *debug, uint8_t reg,
+ uint64_t min, uint64_t max, size_t area_size)
+{
+ char buffer[VALUE_FORMAT_BUFFER_SIZE];
+
+ /* Start and end of the valid offsets window (unless empty). */
+ const uint64_t window_begin = -min;
+ const uint64_t window_end = area_size - max;
+
+ /* Only have accessible bytes if the interval is smaller than the area. */
+ const uint64_t interval_size = max - min;
+ const bool window_empty = (interval_size >= area_size);
+
+ TEST_ASSERT_EQUAL(rte_bpf_validate_debug_can_access(debug,
+ &(struct ebpf_insn){
+ .code = (BPF_LDX | BPF_B | BPF_MEM),
+ .src_reg = reg
+ }, window_begin - 1),
+ false,
+ "r%hhu + %s (before window begin) dereference is invalid", reg,
+ format_value(buffer, sizeof(buffer), 'd', window_begin - 1));
+
+ TEST_ASSERT_EQUAL(rte_bpf_validate_debug_can_access(debug,
+ &(struct ebpf_insn){
+ .code = (BPF_LDX | BPF_B | BPF_MEM),
+ .src_reg = reg
+ }, window_begin),
+ !window_empty,
+ "r%hhu + %s (after window begin) dereference is %s", reg,
+ format_value(buffer, sizeof(buffer), 'd', window_begin),
+ window_empty ? "invalid for empty window" : "valid");
+
+ TEST_ASSERT_EQUAL(rte_bpf_validate_debug_can_access(debug,
+ &(struct ebpf_insn){
+ .code = (BPF_LDX | BPF_B | BPF_MEM),
+ .src_reg = reg
+ }, window_end - 1),
+ !window_empty,
+ "r%hhu + %s (before window end) dereference is %s", reg,
+ format_value(buffer, sizeof(buffer), 'd', window_end - 1),
+ window_empty ? "invalid for empty window" : "valid");
+
+ TEST_ASSERT_EQUAL(rte_bpf_validate_debug_can_access(debug,
+ &(struct ebpf_insn){
+ .code = (BPF_LDX | BPF_B | BPF_MEM),
+ .src_reg = reg
+ }, window_end),
+ false,
+ "r%hhu + %s (after window end) dereference is invalid", reg,
+ format_value(buffer, sizeof(buffer), 'd', window_end));
+
+ return TEST_SUCCESS;
+}
+
+/* Check domain of the register interpreted as absolute value. */
+static int
+check_scalar_domain(struct rte_bpf_validate_debug *debug, uint8_t reg,
const struct domain *domain)
{
TEST_ASSERT_SUCCESS(
check_signed_interval(debug, reg, domain->s),
- "signed interval check");
+ "absolute signed interval check");
TEST_ASSERT_SUCCESS(
check_unsigned_interval(debug, reg, domain->u),
- "unsigned interval check");
+ "absolute unsigned interval check");
+
+ return TEST_SUCCESS;
+}
+
+/* Check domain of the register interpreted as relative pointer. */
+static int
+check_pointer_domain(struct rte_bpf_validate_debug *debug, uint8_t reg,
+ const struct domain *domain, uint8_t base_reg, size_t area_size)
+{
+ TEST_ASSERT_SUCCESS(
+ check_relative_interval(debug, reg, domain->s, base_reg),
+ "relative interval check");
+
+ TEST_ASSERT_SUCCESS(
+ check_pointer_access(debug, reg, domain->s.min, domain->s.max,
+ area_size),
+ "pointer signed access check");
+
+ TEST_ASSERT_SUCCESS(
+ check_pointer_access(debug, reg, domain->u.min, domain->u.max,
+ area_size),
+ "pointer unsigned access check");
return TEST_SUCCESS;
}
@@ -339,11 +488,13 @@ check_domain_impl(struct rte_bpf_validate_debug *debug, uint8_t reg,
/* Check domain of the register and format the values in case of an error. */
static int
check_domain(struct rte_bpf_validate_debug *debug, uint8_t reg,
- const struct domain *domain)
+ const struct domain *domain, uint8_t base_reg, size_t area_size)
{
char buffer[REGISTER_FORMAT_BUFFER_SIZE];
- const int rc = check_domain_impl(debug, reg, domain);
+ const int rc = domain->is_pointer ?
+ check_pointer_domain(debug, reg, domain, base_reg, area_size) :
+ check_scalar_domain(debug, reg, domain);
if (rc != TEST_SUCCESS) {
TEST_LOG_LINE(WARNING, "\tExpected: r%hhu = %s", reg,
@@ -419,13 +570,13 @@ compare_and_jump(struct ebpf_insn **ins, uint8_t op, uint8_t reg,
}
/*
- * Prepare register to be in the specified domain.
+ * Prepare register to be in the specified scalar domain.
*
* Unless singleton, load unknown value into it and clamp it with conditional jumps.
* (Jump offsets are not filled and should be patched in by the caller.)
*/
static void
-prepare_domain(struct ebpf_insn **ins, uint8_t reg,
+prepare_scalar_domain(struct ebpf_insn **ins, uint8_t reg,
const struct domain *domain, uint8_t base_reg, int *service_cell_count,
uint8_t tmp_reg)
{
@@ -460,6 +611,28 @@ prepare_domain(struct ebpf_insn **ins, uint8_t reg,
compare_and_jump(ins, EBPF_JSGT, reg, domain->s.max, tmp_reg);
}
+/*
+ * Prepare register to be in the specified scalar or pointer domain.
+ *
+ * Prepare the scalar domain first, and then add the base register to it
+ * to convert the resulting scalar into a pointer, if needed.
+ */
+static void
+prepare_domain(struct ebpf_insn **ins, uint8_t reg,
+ const struct domain *domain, uint8_t base_reg, int *service_cell_count,
+ uint8_t tmp_reg)
+{
+ prepare_scalar_domain(ins, reg, domain, base_reg, service_cell_count, tmp_reg);
+
+ if (domain->is_pointer)
+ /* Add base_reg to convert resulting scalar into a pointer. */
+ *(*ins)++ = (struct ebpf_insn){
+ .code = (EBPF_ALU64 | BPF_ADD | BPF_X),
+ .dst_reg = reg,
+ .src_reg = base_reg,
+ };
+}
+
static void
fill_verify_instruction_defaults(struct verify_instruction_param *prm)
{
@@ -645,7 +818,8 @@ point_callback(struct rte_bpf_validate_debug *debug, const struct verify_instruc
if (state->dst.is_defined) {
TEST_ASSERT_SUCCESS(
- check_domain(debug, ctx->dst_reg, &state->dst),
+ check_domain(debug, ctx->dst_reg, &state->dst,
+ ctx->base_reg, ctx->prm.area_size),
"dst domain check");
TEST_LOG_LINE(DEBUG, "Successfully checked r%hhu.", ctx->dst_reg);
} else
@@ -658,7 +832,8 @@ point_callback(struct rte_bpf_validate_debug *debug, const struct verify_instruc
if (state->src.is_defined) {
TEST_ASSERT_SUCCESS(
- check_domain(debug, ctx->src_reg, &state->src),
+ check_domain(debug, ctx->src_reg, &state->src,
+ ctx->base_reg, ctx->prm.area_size),
"src domain check");
TEST_LOG_LINE(DEBUG, "Successfully checked r%hhu.", ctx->src_reg);
} else
@@ -889,6 +1064,96 @@ test_alu64_add_k(void)
REGISTER_FAST_TEST(bpf_validate_alu64_add_k_autotest, NOHUGE_OK, ASAN_OK,
test_alu64_add_k);
+/* 64-bit addition of immediate to a pointer range. */
+static int
+test_alu64_add_k_pointer(void)
+{
+ return verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (EBPF_ALU64 | BPF_ADD | BPF_K),
+ .imm = 17,
+ },
+ .area_size = 256,
+ .pre.dst = make_pointer_domain(11, 29),
+ .post.dst = make_pointer_domain(11 + 17, 29 + 17),
+ });
+}
+
+REGISTER_FAST_TEST(bpf_validate_alu64_add_k_pointer_autotest, NOHUGE_OK, ASAN_OK,
+ test_alu64_add_k_pointer);
+
+/* 64-bit addition of pointer to a pointer. */
+static int
+test_alu64_add_x_pointer_pointer(void)
+{
+ return verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (EBPF_ALU64 | BPF_ADD | BPF_X),
+ },
+ .area_size = 256,
+ .pre.dst = make_pointer_domain(11, 29),
+ .pre.src = make_pointer_domain(17, 23),
+ .post.dst = unknown,
+ });
+}
+
+REGISTER_FAST_TEST(bpf_validate_alu64_add_x_pointer_pointer_autotest, NOHUGE_OK, ASAN_OK,
+ test_alu64_add_x_pointer_pointer);
+
+/* 64-bit addition of scalar to a pointer. */
+static int
+test_alu64_add_x_pointer_scalar(void)
+{
+ return verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (EBPF_ALU64 | BPF_ADD | BPF_X),
+ },
+ .area_size = 256,
+ .pre.dst = make_pointer_domain(11, 29),
+ .pre.src = make_signed_domain(17, 23),
+ .post.dst = make_pointer_domain(11 + 17, 29 + 23),
+ });
+}
+
+REGISTER_FAST_TEST(bpf_validate_alu64_add_x_pointer_scalar_autotest, NOHUGE_OK, ASAN_OK,
+ test_alu64_add_x_pointer_scalar);
+
+/* 64-bit addition of pointer to a scalar. */
+static int
+test_alu64_add_x_scalar_pointer(void)
+{
+ return verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (EBPF_ALU64 | BPF_ADD | BPF_X),
+ },
+ .area_size = 256,
+ .pre.dst = make_signed_domain(11, 29),
+ .pre.src = make_pointer_domain(17, 23),
+ .post.dst = make_pointer_domain(11 + 17, 29 + 23),
+ });
+}
+
+REGISTER_FAST_TEST(bpf_validate_alu64_add_x_scalar_pointer_autotest, NOHUGE_OK, ASAN_OK,
+ test_alu64_add_x_scalar_pointer);
+
+/* 64-bit addition of scalar to a scalar. */
+static int
+test_alu64_add_x_scalar_scalar(void)
+{
+ return verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (EBPF_ALU64 | BPF_ADD | BPF_X),
+ },
+ .area_size = 256,
+ .pre.dst = make_signed_domain(11, 29),
+ .pre.src = make_signed_domain(17, 23),
+ .post.dst = make_signed_domain(11 + 17, 29 + 23),
+ });
+}
+
+REGISTER_FAST_TEST(bpf_validate_alu64_add_x_scalar_scalar_autotest, NOHUGE_OK, ASAN_OK,
+ test_alu64_add_x_scalar_scalar);
+
/* Jump if greater than immediate. */
static int
test_jmp64_jeq_k(void)
@@ -906,3 +1171,21 @@ test_jmp64_jeq_k(void)
REGISTER_FAST_TEST(bpf_validate_jmp64_jeq_k_autotest, NOHUGE_OK, ASAN_OK,
test_jmp64_jeq_k);
+
+/* 64-bit load from heap (should be set to unknown). */
+static int
+test_mem_ldx_dw_heap(void)
+{
+ return verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_MEM | BPF_LDX | EBPF_DW),
+ .off = 16,
+ },
+ .area_size = 24,
+ .pre.src = make_pointer_domain(0, 0),
+ .post.dst = unknown,
+ });
+}
+
+REGISTER_FAST_TEST(bpf_validate_mem_ldx_dw_heap_autotest, NOHUGE_OK, ASAN_OK,
+ test_mem_ldx_dw_heap);
--
2.43.0
* [PATCH 10/25] bpf/validate: fix EBPF_JSLT | BPF_X evaluation
2026-05-06 17:38 [PATCH 00/25] bpf: test and fix issues in verifier Marat Khalili
` (8 preceding siblings ...)
2026-05-06 17:38 ` [PATCH 09/25] test/bpf_validate: add harness for pointer tests Marat Khalili
@ 2026-05-06 17:38 ` Marat Khalili
2026-05-06 17:38 ` [PATCH 11/25] bpf/validate: fix BPF_NEG of INT64_MIN and 0 Marat Khalili
` (15 subsequent siblings)
25 siblings, 0 replies; 28+ messages in thread
From: Marat Khalili @ 2026-05-06 17:38 UTC (permalink / raw)
To: Konstantin Ananyev, Ferruh Yigit; +Cc: dev, stable
Function `eval_jcc` was never called for instruction `(BPF_JMP |
EBPF_JSLT | BPF_X)` due to an omission in the table `ins_chk`.
E.g. consider the following program with the current validation code:
Tested program:
0: mov r0, #0x0
1: ldxdw r2, [r1 + 0]
2: jslt r2, #0xfffffffd, L9
3: jsgt r2, #0x3, L9
4: mov r3, #0x0
5: jslt r2, r3, L8 ; tested instruction
6: mov r0, #0x1
7: exit
8: mov r0, #0x2
9: exit
Pre-state:
r2:
r3:
// skip Post-state
Jump-state:
r2: -3..3
Step 8 should only be reachable (jumped to) for values of r2 less than 0
(the value assigned to r3 at step 4), but the validator still considers
r2 to have the same range -3..3 that it had before step 5. Moreover, the
pre-state that should have been saved at step 5 is not filled in the
test DEBUG output at all, demonstrating that evaluation of this state
simply did not happen.
Add the missing function and change the evaluation logic so that missing
functions are no longer silently ignored. Add a test.
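For reference, the narrowing that eval_jcc performs for a signed
less-than comparison is roughly as follows (an illustrative sketch
ignoring empty-range edge cases, not the exact DPDK code):
	/* "jslt rd, rs": the jump branch knows rd < rs,
	 * the fall-through branch knows rd >= rs */
	jump_rd.s.max = RTE_MIN(jump_rd.s.max, rs.s.max - 1);
	fall_rd.s.min = RTE_MAX(fall_rd.s.min, rs.s.min);
With the pre-state above (r2 in -3..3, r3 equal to 0) this gives -3..-1
on the jump branch and 0..3 on the fall-through branch, which is exactly
what the added test expects.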
Fixes: 6e12ec4c4d6d ("bpf: add more checks")
Cc: stable@dpdk.org
Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
app/test/test_bpf_validate.c | 18 +++++++++++++++
lib/bpf/bpf_validate.c | 45 ++++++++++++++++++++++++------------
2 files changed, 48 insertions(+), 15 deletions(-)
diff --git a/app/test/test_bpf_validate.c b/app/test/test_bpf_validate.c
index cdceae3e0728..d7396a88beb8 100644
--- a/app/test/test_bpf_validate.c
+++ b/app/test/test_bpf_validate.c
@@ -1172,6 +1172,24 @@ test_jmp64_jeq_k(void)
REGISTER_FAST_TEST(bpf_validate_jmp64_jeq_k_autotest, NOHUGE_OK, ASAN_OK,
test_jmp64_jeq_k);
+/* Jump if signed less than another register. */
+static int
+test_jmp64_jslt_x(void)
+{
+ return verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | EBPF_JSLT | BPF_X),
+ },
+ .pre.dst = make_signed_domain(-3, 3),
+ .pre.src = make_signed_domain(0, 0),
+ .post.dst = make_signed_domain(0, 3),
+ .jump.dst = make_signed_domain(-3, -1),
+ });
+}
+
+REGISTER_FAST_TEST(bpf_validate_jmp64_jslt_x_autotest, NOHUGE_OK, ASAN_OK,
+ test_jmp64_jslt_x);
+
/* 64-bit load from heap (should be set to unknown). */
static int
test_mem_ldx_dw_heap(void)
diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c
index 391be9cbb474..b0d88fe7d273 100644
--- a/lib/bpf/bpf_validate.c
+++ b/lib/bpf/bpf_validate.c
@@ -1372,6 +1372,14 @@ eval_store(struct bpf_verifier *bvf, const struct ebpf_insn *ins)
return NULL;
}
+static const char *
+eval_ja(struct bpf_verifier *bvf, const struct ebpf_insn *ins)
+{
+ RTE_SET_USED(bvf);
+ RTE_SET_USED(ins);
+ return NULL;
+}
+
static const char *
eval_func_arg(struct bpf_verifier *bvf, const struct rte_bpf_arg *arg,
struct bpf_reg_val *rv)
@@ -2023,6 +2031,7 @@ static const struct bpf_ins_check ins_chk[UINT8_MAX + 1] = {
.mask = { .dreg = ZERO_REG, .sreg = ZERO_REG},
.off = { .min = 0, .max = UINT16_MAX},
.imm = { .min = 0, .max = 0},
+ .eval = eval_ja,
},
/* jcc IMM instructions */
[(BPF_JMP | BPF_JEQ | BPF_K)] = {
@@ -2138,6 +2147,7 @@ static const struct bpf_ins_check ins_chk[UINT8_MAX + 1] = {
.mask = { .dreg = ALL_REGS, .sreg = ALL_REGS},
.off = { .min = 0, .max = UINT16_MAX},
.imm = { .min = 0, .max = 0},
+ .eval = eval_jcc,
},
[(BPF_JMP | EBPF_JSGE | BPF_X)] = {
.mask = { .dreg = ALL_REGS, .sreg = ALL_REGS},
@@ -2890,22 +2900,27 @@ evaluate(struct bpf_verifier *bvf)
stats.nb_save++;
}
- if (ins_chk[op].eval != NULL) {
- rc = __rte_bpf_validate_debug_evaluate_step(
- debug, idx, prev_nb_edge > 1 ?
- RTE_BPF_VALIDATE_DEBUG_EVENT_BRANCH_ENTER :
- RTE_BPF_VALIDATE_DEBUG_EVENT_STEP);
- if (rc < 0)
- break;
+ if (ins_chk[op].eval == NULL) {
+ RTE_BPF_LOG_FUNC_LINE(ERR,
+ "Unrecognized instruction at pc: %u", idx);
+ rc = -EINVAL;
+ break;
+ }
- err = ins_chk[op].eval(bvf, ins + idx);
- stats.nb_eval++;
- if (err != NULL) {
- RTE_BPF_LOG_FUNC_LINE(ERR,
- "%s at pc: %u", err, idx);
- rc = -EINVAL;
- break;
- }
+ rc = __rte_bpf_validate_debug_evaluate_step(debug, idx,
+ prev_nb_edge > 1 ?
+ RTE_BPF_VALIDATE_DEBUG_EVENT_BRANCH_ENTER :
+ RTE_BPF_VALIDATE_DEBUG_EVENT_STEP);
+ if (rc < 0)
+ break;
+
+ err = ins_chk[op].eval(bvf, ins + idx);
+ stats.nb_eval++;
+ if (err != NULL) {
+ RTE_BPF_LOG_FUNC_LINE(ERR,
+ "%s at pc: %u", err, idx);
+ rc = -EINVAL;
+ break;
}
log_dbg_eval_state(bvf, ins + idx, idx);
--
2.43.0
* [PATCH 11/25] bpf/validate: fix BPF_NEG of INT64_MIN and 0
2026-05-06 17:38 [PATCH 00/25] bpf: test and fix issues in verifier Marat Khalili
` (9 preceding siblings ...)
2026-05-06 17:38 ` [PATCH 10/25] bpf/validate: fix EBPF_JSLT | BPF_X evaluation Marat Khalili
@ 2026-05-06 17:38 ` Marat Khalili
2026-05-06 17:38 ` [PATCH 12/25] bpf/validate: fix BPF_DIV and BPF_MOD signed part Marat Khalili
` (14 subsequent siblings)
25 siblings, 0 replies; 28+ messages in thread
From: Marat Khalili @ 2026-05-06 17:38 UTC (permalink / raw)
To: Konstantin Ananyev; +Cc: dev, stable
Function `eval_neg` did not treat the values INT64_MIN and 0 specially
when calculating negation ranges (e.g. the negated unsigned range 0..2
should turn into 0..UINT64_MAX), producing incorrect results. On top of
this, negating signed INT64_MIN caused undefined behaviour.
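To see why a range starting at 0 is special, consider the negated values
as 64-bit unsigned integers (a standalone illustration, not DPDK code):
	uint64_t a = -(uint64_t)0;	/* 0 */
	uint64_t b = -(uint64_t)1;	/* 0xffffffffffffffff */
	uint64_t c = -(uint64_t)2;	/* 0xfffffffffffffffe */
The negated set {0, UINT64_MAX - 1, UINT64_MAX} wraps around zero, so
the smallest contiguous unsigned interval containing it is the full
0..UINT64_MAX range.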
E.g. consider the following program with the current validation code:
Tested program:
0: mov r0, #0x0
1: ldxdw r2, [r1 + 0]
2: lddw r4, #0x8000000000000000
4: jgt r2, r4, L7
5: neg r2, #0x0 ; tested instruction
6: mov r0, #0x1
7: exit
Pre-state:
r2: 0..0x8000000000000000
Post-state:
r2: INT64_MIN..INT64_MIN+1 INTERSECT 0..0x8000000000000000 (!)
After the tested instruction the validator considers r2 to be within
INT64_MIN..INT64_MIN+1 if viewed as signed, or within
0..0x8000000000000000 if viewed as unsigned. However, if 1 was loaded
into r2 on step 1, it becomes -1 after the tested instruction, which
satisfies neither of these ranges.
With sanitizer the following diagnostic is generated:
lib/bpf/bpf_validate.c:1120:7: runtime error: negation of
-9223372036854775808 cannot be represented in type 'long int'; cast
to an unsigned type to negate
#0 0x000002747230 in eval_neg lib/bpf/bpf_validate.c:1120
#1 0x000002748fb6 in eval_alu lib/bpf/bpf_validate.c:1251
#2 0x000002759dd3 in evaluate lib/bpf/bpf_validate.c:3161
...
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior
lib/bpf/bpf_validate.c:1120:7
Add the missing handling of these special cases and add tests.
Fixes: 8021917293d0 ("bpf: add extra validation for input BPF program")
Cc: stable@dpdk.org
Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
app/test/test_bpf_validate.c | 126 +++++++++++++++++++++++++++++++++++
lib/bpf/bpf_validate.c | 55 ++++++++++++---
2 files changed, 173 insertions(+), 8 deletions(-)
diff --git a/app/test/test_bpf_validate.c b/app/test/test_bpf_validate.c
index d7396a88beb8..995f7363b80f 100644
--- a/app/test/test_bpf_validate.c
+++ b/app/test/test_bpf_validate.c
@@ -1154,6 +1154,132 @@ test_alu64_add_x_scalar_scalar(void)
REGISTER_FAST_TEST(bpf_validate_alu64_add_x_scalar_scalar_autotest, NOHUGE_OK, ASAN_OK,
test_alu64_add_x_scalar_scalar);
+/* 64-bit negation when interval first element is INT64_MIN. */
+static int
+test_alu64_neg_int64min_first(void)
+{
+ static const int64_t other_values[] = {
+ INT64_MIN,
+ INT64_MIN + 1,
+ INT64_MIN + 13,
+ -17,
+ -1,
+ 0,
+ 1,
+ 19,
+ INT64_MAX - 23,
+ INT64_MAX - 1,
+ INT64_MAX,
+ };
+ for (int other_index = 0; other_index != RTE_DIM(other_values); ++other_index) {
+ const int64_t other_value = other_values[other_index];
+ TEST_ASSERT_SUCCESS(verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (EBPF_ALU64 | BPF_NEG),
+ },
+ .pre.dst = make_signed_domain(INT64_MIN, other_value),
+ .post.dst = other_value > 0 ? unknown :
+ make_unsigned_domain(-(uint64_t)other_value, INT64_MIN),
+ }), "other_index=%d", other_index);
+ }
+ return TEST_SUCCESS;
+}
+
+REGISTER_FAST_TEST(bpf_validate_alu64_neg_int64min_first_autotest, NOHUGE_OK, ASAN_OK,
+ test_alu64_neg_int64min_first);
+
+/* 64-bit negation when interval last element is INT64_MIN. */
+static int
+test_alu64_neg_int64min_last(void)
+{
+ static const uint64_t other_values[] = {
+ 0,
+ 1,
+ 19,
+ INT64_MAX - 23,
+ INT64_MAX - 1,
+ INT64_MAX,
+ INT64_MIN,
+ };
+ for (int other_index = 0; other_index != RTE_DIM(other_values); ++other_index) {
+ const int64_t other_value = other_values[other_index];
+ TEST_ASSERT_SUCCESS(verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (EBPF_ALU64 | BPF_NEG),
+ },
+ .pre.dst = make_unsigned_domain(other_value, INT64_MIN),
+ .post.dst = make_signed_domain(INT64_MIN, -(uint64_t)other_value),
+ }), "other_index=%d", other_index);
+ }
+ return TEST_SUCCESS;
+}
+
+REGISTER_FAST_TEST(bpf_validate_alu64_neg_int64min_last_autotest, NOHUGE_OK, ASAN_OK,
+ test_alu64_neg_int64min_last);
+
+/* 64-bit negation when interval first element is zero. */
+static int
+test_alu64_neg_zero_first(void)
+{
+ static const uint64_t other_values[] = {
+ 0,
+ 1,
+ 19,
+ INT64_MAX - 23,
+ INT64_MAX - 1,
+ INT64_MAX,
+ INT64_MIN,
+ INT64_MIN + 1,
+ INT64_MIN + 13,
+ -17,
+ -1,
+ };
+ for (int other_index = 0; other_index != RTE_DIM(other_values); ++other_index) {
+ const uint64_t other_value = other_values[other_index];
+ TEST_ASSERT_SUCCESS(verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (EBPF_ALU64 | BPF_NEG),
+ },
+ .pre.dst = make_unsigned_domain(0, other_value),
+ .post.dst = other_value > (uint64_t)INT64_MIN ? unknown :
+ make_signed_domain(-(uint64_t)other_value, 0),
+ }), "other_index=%d", other_index);
+ }
+ return TEST_SUCCESS;
+}
+
+REGISTER_FAST_TEST(bpf_validate_alu64_neg_zero_first_autotest, NOHUGE_OK, ASAN_OK,
+ test_alu64_neg_zero_first);
+
+/* 64-bit negation when interval last element is zero. */
+static int
+test_alu64_neg_zero_last(void)
+{
+ static const int64_t other_values[] = {
+ INT64_MIN,
+ INT64_MIN + 1,
+ INT64_MIN + 13,
+ -17,
+ -1,
+ 0,
+ };
+ for (int other_index = 0; other_index != RTE_DIM(other_values); ++other_index) {
+ const int64_t other_value = other_values[other_index];
+ TEST_ASSERT_SUCCESS(verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (EBPF_ALU64 | BPF_NEG),
+ },
+ .pre.dst = make_signed_domain(other_value, 0),
+ .post.dst = make_unsigned_domain(0, -(uint64_t)other_value),
+ }), "other_index=%d", other_index);
+ }
+
+ return TEST_SUCCESS;
+}
+
+REGISTER_FAST_TEST(bpf_validate_alu64_neg_zero_last_autotest, NOHUGE_OK, ASAN_OK,
+ test_alu64_neg_zero_last);
+
/* Jump if greater than immediate. */
static int
test_jmp64_jeq_k(void)
diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c
index b0d88fe7d273..79c8679ac535 100644
--- a/lib/bpf/bpf_validate.c
+++ b/lib/bpf/bpf_validate.c
@@ -990,6 +990,11 @@ eval_neg(struct bpf_reg_val *rd, size_t opsz, uint64_t msk)
{
uint64_t ux, uy;
int64_t sx, sy;
+ /* additional limits imposed by signed on unsigned and back */
+ struct bpf_reg_val cross_limits = {
+ .s = { INT64_MIN, INT64_MAX },
+ .u = { 0, UINT64_MAX },
+ };
/* if we have 32-bit values - extend them to 64-bit */
if (opsz == sizeof(uint32_t) * CHAR_BIT) {
@@ -997,11 +1002,29 @@ eval_neg(struct bpf_reg_val *rd, size_t opsz, uint64_t msk)
rd->u.max = (int32_t)rd->u.max;
}
- ux = -(int64_t)rd->u.min & msk;
- uy = -(int64_t)rd->u.max & msk;
+ if (rd->u.min == 0) {
+ /* special case: ranges that include 0 and, possibly, 1 */
+
+ /*
+ * Calculate requirements on the signed range of negation.
+ * It is only possible when negated range does not cross from
+ * INT64_MIN to INT64_MAX, which means our original range does
+ * not reach (uint64_t)-INT64_MAX.
+ */
+ if (rd->u.max < (uint64_t)-INT64_MAX) {
+ cross_limits.s.min = -rd->u.max;
+ cross_limits.s.max = -rd->u.min;
+ }
+
+ if (rd->u.max != 0)
+ rd->u.max = UINT64_MAX;
+ } else {
+ ux = -rd->u.min & msk;
+ uy = -rd->u.max & msk;
- rd->u.max = RTE_MAX(ux, uy);
- rd->u.min = RTE_MIN(ux, uy);
+ rd->u.max = RTE_MAX(ux, uy);
+ rd->u.min = RTE_MIN(ux, uy);
+ }
/* if we have 32-bit values - extend them to 64-bit */
if (opsz == sizeof(uint32_t) * CHAR_BIT) {
@@ -1009,11 +1032,27 @@ eval_neg(struct bpf_reg_val *rd, size_t opsz, uint64_t msk)
rd->s.max = (int32_t)rd->s.max;
}
- sx = -rd->s.min & msk;
- sy = -rd->s.max & msk;
+ if (rd->s.min == INT64_MIN) {
+ /* special case: negation of INT64_MIN is INT64_MIN */
+ if (rd->s.max <= 0) {
+ cross_limits.u.min = -(uint64_t)rd->s.max;
+ cross_limits.u.max = -(uint64_t)rd->s.min;
+ }
+ if (rd->s.max != INT64_MIN)
+ rd->s.max = INT64_MAX;
+ } else {
+ /* since max >= min, neither can be INT64_MIN here */
+ sx = -rd->s.min & msk;
+ sy = -rd->s.max & msk;
+
+ rd->s.max = RTE_MAX(sx, sy);
+ rd->s.min = RTE_MIN(sx, sy);
+ }
- rd->s.max = RTE_MAX(sx, sy);
- rd->s.min = RTE_MIN(sx, sy);
+ rd->s.min = RTE_MAX(rd->s.min, cross_limits.s.min) & msk;
+ rd->s.max = RTE_MIN(rd->s.max, cross_limits.s.max) & msk;
+ rd->u.min = RTE_MAX(rd->u.min, cross_limits.u.min) & msk;
+ rd->u.max = RTE_MIN(rd->u.max, cross_limits.u.max) & msk;
}
static const char *
--
2.43.0
* [PATCH 12/25] bpf/validate: fix BPF_DIV and BPF_MOD signed part
2026-05-06 17:38 [PATCH 00/25] bpf: test and fix issues in verifier Marat Khalili
` (10 preceding siblings ...)
2026-05-06 17:38 ` [PATCH 11/25] bpf/validate: fix BPF_NEG of INT64_MIN and 0 Marat Khalili
@ 2026-05-06 17:38 ` Marat Khalili
2026-05-06 17:38 ` [PATCH 13/25] bpf/validate: fix BPF_MUL ranges minimum typo Marat Khalili
` (13 subsequent siblings)
25 siblings, 0 replies; 28+ messages in thread
From: Marat Khalili @ 2026-05-06 17:38 UTC (permalink / raw)
To: Konstantin Ananyev; +Cc: dev, stable
Function `eval_divmod` for the _unsigned_ division or modulo operation
calculated the signed ranges using _signed_ division, which is
mathematically incorrect: unlike some other arithmetic operations,
signed and unsigned division are not equivalent in the cyclic ring
arithmetic of CPU registers.
E.g. consider the following program with the current validation code:
Tested program:
0: mov r0, #0x0
1: lddw r2, #0xaaaaaaaaaaaaaaaa
3: mov r3, #0x2
4: div r2, r3 ; tested instruction
5: mov r0, #0x1
6: exit
Pre-state:
r2: -6148914691236517206
r3: 2
Post-state:
r2: -3074457345618258603 INTERSECT 0x5555555555555555 (!)
After the tested instruction the validator considers r2 to equal
0x5555555555555555 if viewed as unsigned (correct, this is
0xaaaaaaaaaaaaaaaaull / 2), but -3074457345618258603, i.e.
0xd555555555555555, if viewed as signed; both cannot be true at once.
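The mismatch is easy to reproduce outside the validator (a standalone
illustration, not DPDK code):
	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint64_t u = 0xaaaaaaaaaaaaaaaaULL;
		/* unsigned division: prints 0x5555555555555555 */
		printf("%#jx\n", (uintmax_t)(u / 2));
		/* signed division of the same bit pattern:
		 * prints 0xd555555555555555 */
		printf("%#jx\n", (uintmax_t)((int64_t)u / 2));
		return 0;
	}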
Additionally, when validating division or modulo of INT64_MIN by -1, an
overflow happened in the validator itself, possibly triggering a
hardware exception. The following error is shown without sanitizer:
1/1 DPDK:fast-tests / bpf_autotest FAIL 0.37s
killed by signal 8 SIGFPE
With sanitizer the following diagnostic is generated:
lib/bpf/bpf_validate.c:1086:14: runtime error: division of
-9223372036854775808 by -1 cannot be represented in type 'long int'
#0 0x0000027484bb in eval_divmod lib/bpf/bpf_validate.c:1086
#1 0x00000274bcf3 in eval_alu lib/bpf/bpf_validate.c:1280
#2 0x00000275cb3e in evaluate lib/bpf/bpf_validate.c:3192
...
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior
lib/bpf/bpf_validate.c:1086:14
Change the logic to copy the results of unsigned division into the
signed ranges. Add both validation and execution tests for the case
that triggered an exception. Add validation tests for non-constant
division to make sure it is still handled correctly (the resulting
ranges of non-constant division or modulo are not really minimal;
this can be addressed in the future).
Fixes: 8021917293d0 ("bpf: add extra validation for input BPF program")
Cc: stable@dpdk.org
Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
app/test/test_bpf.c | 99 +++++++++++++++++++++++++
app/test/test_bpf_validate.c | 135 +++++++++++++++++++++++++++++++++++
lib/bpf/bpf_validate.c | 38 +++-------
3 files changed, 244 insertions(+), 28 deletions(-)
diff --git a/app/test/test_bpf.c b/app/test/test_bpf.c
index 69e84f0cab56..744bf02f7356 100644
--- a/app/test/test_bpf.c
+++ b/app/test/test_bpf.c
@@ -393,6 +393,13 @@ cmp_res(const char *func, uint64_t exp_rc, uint64_t ret_rc,
return ret;
}
+/* Empty prepare function */
+static void
+dummy_prepare(void *arg)
+{
+ RTE_SET_USED(arg);
+}
+
/* store immediate test-cases */
static const struct ebpf_insn test_store1_prog[] = {
{
@@ -3157,6 +3164,70 @@ static const struct ebpf_insn test_ld_mbuf3_prog[] = {
},
};
+/* divide INT64_MIN by -1 */
+static const struct ebpf_insn test_int64min_udiv_uint64max_prog[] = {
+ /* Load INT64_MIN into r0 */
+ {
+ .code = (BPF_LD | BPF_IMM | EBPF_DW),
+ .dst_reg = EBPF_REG_0,
+ .imm = (int32_t)INT64_MIN,
+ },
+ {
+ .imm = (int32_t)(INT64_MIN >> 32),
+ },
+ /* Divide r0 by immediate -1 */
+ {
+ .code = (EBPF_ALU64 | BPF_DIV | BPF_K),
+ .dst_reg = EBPF_REG_0,
+ .imm = -1,
+ },
+ /* Exit to keep the program valid */
+ {
+ .code = (BPF_JMP | EBPF_EXIT),
+ },
+};
+
+static int
+test_int64min_udiv_uint64max_check(uint64_t rc, const void *arg)
+{
+ RTE_SET_USED(arg);
+ /* 0x8000000000000000ull / 0xFFFFFFFFFFFFFFFFull == 0 */
+ TEST_ASSERT_EQUAL(rc, 0, "expected 0, found %#" PRIx64, rc);
+ return TEST_SUCCESS;
+}
+
+/* modulo INT64_MIN by -1 */
+static const struct ebpf_insn test_int64min_umod_uint64max_prog[] = {
+ /* Load INT64_MIN into r0 */
+ {
+ .code = (BPF_LD | BPF_IMM | EBPF_DW),
+ .dst_reg = EBPF_REG_0,
+ .imm = (int32_t)INT64_MIN,
+ },
+ {
+ .imm = (int32_t)(INT64_MIN >> 32),
+ },
+ /* Modulo r0 by immediate -1 */
+ {
+ .code = (EBPF_ALU64 | BPF_MOD | BPF_K),
+ .dst_reg = EBPF_REG_0,
+ .imm = -1,
+ },
+ /* Exit to keep the program valid */
+ {
+ .code = (BPF_JMP | EBPF_EXIT),
+ },
+};
+
+static int
+test_int64min_umod_uint64max_check(uint64_t rc, const void *arg)
+{
+ RTE_SET_USED(arg);
+ /* 0x8000000000000000ull % 0xFFFFFFFFFFFFFFFFull == 0x8000000000000000ull */
+ TEST_ASSERT_EQUAL(rc, (uint64_t)INT64_MIN, "expected INT64_MIN, found %#" PRIx64, rc);
+ return TEST_SUCCESS;
+}
+
/* all bpf test cases */
static const struct bpf_test tests[] = {
{
@@ -3465,6 +3536,34 @@ static const struct bpf_test tests[] = {
/* mbuf as input argument is not supported on 32 bit platform */
.allow_fail = (sizeof(uint64_t) != sizeof(uintptr_t)),
},
+ {
+ .name = "test_int64min_udiv_uint64max",
+ .arg_sz = sizeof(struct dummy_vect8),
+ .prm = {
+ .ins = test_int64min_udiv_uint64max_prog,
+ .nb_ins = RTE_DIM(test_int64min_udiv_uint64max_prog),
+ .prog_arg = {
+ .type = RTE_BPF_ARG_PTR,
+ .size = sizeof(struct dummy_vect8),
+ },
+ },
+ .prepare = dummy_prepare,
+ .check_result = test_int64min_udiv_uint64max_check,
+ },
+ {
+ .name = "test_int64min_umod_uint64max",
+ .arg_sz = 1,
+ .prm = {
+ .ins = test_int64min_umod_uint64max_prog,
+ .nb_ins = RTE_DIM(test_int64min_umod_uint64max_prog),
+ .prog_arg = {
+ .type = RTE_BPF_ARG_PTR,
+ .size = 1,
+ },
+ },
+ .prepare = dummy_prepare,
+ .check_result = test_int64min_umod_uint64max_check,
+ },
};
static int
diff --git a/app/test/test_bpf_validate.c b/app/test/test_bpf_validate.c
index 995f7363b80f..aada6e110337 100644
--- a/app/test/test_bpf_validate.c
+++ b/app/test/test_bpf_validate.c
@@ -1154,6 +1154,141 @@ test_alu64_add_x_scalar_scalar(void)
REGISTER_FAST_TEST(bpf_validate_alu64_add_x_scalar_scalar_autotest, NOHUGE_OK, ASAN_OK,
test_alu64_add_x_scalar_scalar);
+/* 64-bit division and modulo of UINT64_MAX*2/3. */
+static int
+test_alu64_div_mod_big_constant(void)
+{
+ const uint64_t dividend = UINT64_MAX / 3 * 2;
+ static const uint64_t divisors[] = {
+ 1,
+ 2,
+ 3,
+ UINT64_MAX / 3,
+ INT64_MAX,
+ INT64_MIN,
+ UINT64_MAX / 3 * 2,
+ UINT64_MAX / 4 * 3,
+ UINT64_MAX,
+ };
+ for (int index = 0; index != RTE_DIM(divisors); ++index) {
+ const uint64_t divisor = divisors[index];
+
+ TEST_ASSERT_SUCCESS(verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (EBPF_ALU64 | BPF_DIV | BPF_X),
+ },
+ .pre.dst = make_singleton_domain(dividend),
+ .pre.src = make_singleton_domain(divisor),
+ .post.dst = make_singleton_domain(dividend / divisor),
+ }), "(EBPF_ALU64 | BPF_DIV | BPF_X) check, index=%d", index);
+
+ TEST_ASSERT_SUCCESS(verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (EBPF_ALU64 | BPF_MOD | BPF_X),
+ },
+ .pre.dst = make_singleton_domain(dividend),
+ .pre.src = make_singleton_domain(divisor),
+ .post.dst = make_singleton_domain(dividend % divisor),
+ }), "(EBPF_ALU64 | BPF_MOD | BPF_X) check, index=%d", index);
+ }
+
+ return TEST_SUCCESS;
+}
+
+REGISTER_FAST_TEST(bpf_validate_alu64_div_mod_big_constant_autotest, NOHUGE_OK, ASAN_OK,
+ test_alu64_div_mod_big_constant);
+
+/* 64-bit division and modulo of UINT64_MAX/3..UINT64_MAX*2/3 by a constant. */
+static int
+test_alu64_div_mod_big_range(void)
+{
+ const uint64_t dividend_first = UINT64_MAX / 3;
+ const uint64_t dividend_last = UINT64_MAX / 3 * 2;
+ static const uint64_t divisors[] = {
+ 1,
+ 2,
+ 3,
+ UINT64_MAX / 3,
+ INT64_MAX,
+ INT64_MIN,
+ UINT64_MAX / 3 * 2,
+ UINT64_MAX / 4 * 3,
+ UINT64_MAX,
+ };
+ for (int index = 0; index != RTE_DIM(divisors); ++index) {
+ const uint64_t divisor = divisors[index];
+
+ TEST_ASSERT_SUCCESS(verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (EBPF_ALU64 | BPF_DIV | BPF_X),
+ },
+ .pre.dst = make_unsigned_domain(dividend_first, dividend_last),
+ .pre.src = make_singleton_domain(divisor),
+ .post.dst = make_unsigned_domain(0, dividend_last),
+ }), "(EBPF_ALU64 | BPF_DIV | BPF_X) check, index=%d", index);
+
+ TEST_ASSERT_SUCCESS(verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (EBPF_ALU64 | BPF_MOD | BPF_X),
+ },
+ .pre.dst = make_unsigned_domain(dividend_first, dividend_last),
+ .pre.src = make_singleton_domain(divisor),
+ .post.dst = make_unsigned_domain(0, RTE_MIN(dividend_last, divisor - 1)),
+ }), "(EBPF_ALU64 | BPF_MOD | BPF_X) check, index=%d", index);
+ }
+
+ return TEST_SUCCESS;
+}
+
+REGISTER_FAST_TEST(bpf_validate_alu64_div_mod_big_range_autotest, NOHUGE_OK, ASAN_OK,
+ test_alu64_div_mod_big_range);
+
+/* 64-bit division and modulo of INT64_MIN by -1. */
+static int
+test_alu64_div_mod_overflow(void)
+{
+ TEST_ASSERT_SUCCESS(verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (EBPF_ALU64 | BPF_DIV | BPF_K),
+ .imm = -1,
+ },
+ .pre.dst = make_singleton_domain(INT64_MIN),
+ .post.dst = make_singleton_domain(0),
+ }), "(EBPF_ALU64 | BPF_DIV | BPF_K) check");
+
+ TEST_ASSERT_SUCCESS(verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (EBPF_ALU64 | BPF_DIV | BPF_X),
+ },
+ .pre.dst = make_singleton_domain(INT64_MIN),
+ .pre.src = make_singleton_domain(-1),
+ .post.dst = make_singleton_domain(0),
+ }), "(EBPF_ALU64 | BPF_DIV | BPF_X) check");
+
+ TEST_ASSERT_SUCCESS(verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (EBPF_ALU64 | BPF_MOD | BPF_K),
+ .imm = -1,
+ },
+ .pre.dst = make_singleton_domain(INT64_MIN),
+ .post.dst = make_singleton_domain(INT64_MIN),
+ }), "(EBPF_ALU64 | BPF_MOD | BPF_K) check");
+
+ TEST_ASSERT_SUCCESS(verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (EBPF_ALU64 | BPF_MOD | BPF_X),
+ },
+ .pre.dst = make_singleton_domain(INT64_MIN),
+ .pre.src = make_singleton_domain(-1),
+ .post.dst = make_singleton_domain(INT64_MIN),
+ }), "(EBPF_ALU64 | BPF_MOD | BPF_X) check");
+
+ return TEST_SUCCESS;
+}
+
+REGISTER_FAST_TEST(bpf_validate_alu64_div_mod_overflow_autotest, NOHUGE_OK, ASAN_OK,
+ test_alu64_div_mod_overflow);
+
/* 64-bit negation when interval first element is INT64_MIN. */
static int
test_alu64_neg_int64min_first(void)
diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c
index 79c8679ac535..b784777bbb6b 100644
--- a/lib/bpf/bpf_validate.c
+++ b/lib/bpf/bpf_validate.c
@@ -932,8 +932,7 @@ eval_mul(struct bpf_reg_val *rd, const struct bpf_reg_val *rs, size_t opsz,
}
static const char *
-eval_divmod(uint32_t op, struct bpf_reg_val *rd, struct bpf_reg_val *rs,
- size_t opsz, uint64_t msk)
+eval_divmod(uint32_t op, struct bpf_reg_val *rd, struct bpf_reg_val *rs, uint64_t msk)
{
/* both operands are constants */
if (rd->u.min == rd->u.max && rs->u.min == rs->u.max) {
@@ -954,34 +953,17 @@ eval_divmod(uint32_t op, struct bpf_reg_val *rd, struct bpf_reg_val *rs,
rd->u.min = 0;
}
- /* if we have 32-bit values - extend them to 64-bit */
- if (opsz == sizeof(uint32_t) * CHAR_BIT) {
- rd->s.min = (int32_t)rd->s.min;
- rd->s.max = (int32_t)rd->s.max;
- rs->s.min = (int32_t)rs->s.min;
- rs->s.max = (int32_t)rs->s.max;
- }
-
- /* both operands are constants */
- if (rd->s.min == rd->s.max && rs->s.min == rs->s.max) {
- if (rs->s.max == 0)
- return "division by 0";
- if (op == BPF_DIV) {
- rd->s.min /= rs->s.min;
- rd->s.max /= rs->s.max;
- } else {
- rd->s.min %= rs->s.min;
- rd->s.max %= rs->s.max;
- }
- } else if (op == BPF_MOD) {
- rd->s.min = RTE_MAX(rd->s.max, 0);
- rd->s.min = RTE_MIN(rd->s.min, 0);
+ if (rd->u.min >= (uint64_t)INT64_MIN || rd->u.max <= (uint64_t)INT64_MAX) {
+ /*
+ * All values have the same sign bit, which means range
+ * contiguous as unsigned is also contiguous as signed,
+ * so we can just reuse it without any changes.
+ */
+ rd->s.min = rd->u.min;
+ rd->s.max = rd->u.max;
} else
eval_smax_bound(rd, msk);
- rd->s.max &= msk;
- rd->s.min &= msk;
-
return NULL;
}
@@ -1165,7 +1147,7 @@ eval_alu(struct bpf_verifier *bvf, const struct ebpf_insn *ins)
else if (op == BPF_MUL)
eval_mul(rd, &rs, opsz, msk);
else if (op == BPF_DIV || op == BPF_MOD)
- err = eval_divmod(op, rd, &rs, opsz, msk);
+ err = eval_divmod(op, rd, &rs, msk);
else if (op == BPF_NEG)
eval_neg(rd, opsz, msk);
else if (op == EBPF_MOV)
--
2.43.0
* [PATCH 13/25] bpf/validate: fix BPF_MUL ranges minimum typo
2026-05-06 17:38 [PATCH 00/25] bpf: test and fix issues in verifier Marat Khalili
` (11 preceding siblings ...)
2026-05-06 17:38 ` [PATCH 12/25] bpf/validate: fix BPF_DIV and BPF_MOD signed part Marat Khalili
@ 2026-05-06 17:38 ` Marat Khalili
2026-05-06 17:38 ` [PATCH 14/25] bpf/validate: fix BPF_MUL signed overflow UB Marat Khalili
` (12 subsequent siblings)
25 siblings, 0 replies; 28+ messages in thread
From: Marat Khalili @ 2026-05-06 17:38 UTC (permalink / raw)
To: Konstantin Ananyev; +Cc: dev, stable
Due to a typo, function `eval_mul` calculated the minimum of both the
signed and unsigned ranges as the square of the destination minimum
instead of its product with the source minimum.
E.g. consider the following program with the current validation code:
Tested program:
0: mov r0, #0x0
1: ldxdw r2, [r1 + 0]
2: jlt r2, #0x11, L8
3: jgt r2, #0x1d, L8
4: jslt r2, #0x11, L8
5: jsgt r2, #0x1d, L8
6: mul r2, #0xb ; tested instruction
7: mov r0, #0x1
8: exit
Pre-state:
r2: 17..29
Post-state:
r2: 289..319
After the tested instruction the validator considers r2 to be no less
than 289; however, if 20 was loaded on step 1, multiplying it by 11
yields 220, which is less than 289.
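For the ranges above the correct products are 17 * 11 = 187 and
29 * 11 = 319, so the post-state should be 187..319; the buggy code
instead produced 17 * 17 = 289 as the minimum.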
Fix the typo and add a test.
Fixes: 8021917293d0 ("bpf: add extra validation for input BPF program")
Cc: stable@dpdk.org
Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
app/test/test_bpf_validate.c | 17 +++++++++++++++++
lib/bpf/bpf_validate.c | 4 ++--
2 files changed, 19 insertions(+), 2 deletions(-)
diff --git a/app/test/test_bpf_validate.c b/app/test/test_bpf_validate.c
index aada6e110337..3e0493f831ae 100644
--- a/app/test/test_bpf_validate.c
+++ b/app/test/test_bpf_validate.c
@@ -1289,6 +1289,23 @@ test_alu64_div_mod_overflow(void)
REGISTER_FAST_TEST(bpf_validate_alu64_div_mod_overflow_autotest, NOHUGE_OK, ASAN_OK,
test_alu64_div_mod_overflow);
+/* 64-bit mul of small scalar range and immediate. */
+static int
+test_alu64_mul_k_range_small(void)
+{
+ return verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (EBPF_ALU64 | BPF_MUL | BPF_K),
+ .imm = 11,
+ },
+ .pre.dst = make_unsigned_domain(17, 29),
+ .post.dst = make_unsigned_domain(17 * 11, 29 * 11),
+ });
+}
+
+REGISTER_FAST_TEST(bpf_validate_alu64_mul_k_range_small_autotest, NOHUGE_OK, ASAN_OK,
+ test_alu64_mul_k_range_small);
+
/* 64-bit negation when interval first element is INT64_MIN. */
static int
test_alu64_neg_int64min_first(void)
diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c
index b784777bbb6b..39c75bbcd76f 100644
--- a/lib/bpf/bpf_validate.c
+++ b/lib/bpf/bpf_validate.c
@@ -915,7 +915,7 @@ eval_mul(struct bpf_reg_val *rd, const struct bpf_reg_val *rs, size_t opsz,
/* check for overflow */
} else if (rd->u.max <= msk >> opsz / 2 && rs->u.max <= msk >> opsz / 2) {
rd->u.max *= rs->u.max;
- rd->u.min *= rd->u.min;
+ rd->u.min *= rs->u.min;
} else
eval_umax_bound(rd, msk);
@@ -926,7 +926,7 @@ eval_mul(struct bpf_reg_val *rd, const struct bpf_reg_val *rs, size_t opsz,
/* check that both operands are positive and no overflow */
} else if (rd->s.min >= 0 && rs->s.min >= 0) {
rd->s.max *= rs->s.max;
- rd->s.min *= rd->s.min;
+ rd->s.min *= rs->s.min;
} else
eval_smax_bound(rd, msk);
}
--
2.43.0
* [PATCH 14/25] bpf/validate: fix BPF_MUL signed overflow UB
2026-05-06 17:38 [PATCH 00/25] bpf: test and fix issues in verifier Marat Khalili
` (12 preceding siblings ...)
2026-05-06 17:38 ` [PATCH 13/25] bpf/validate: fix BPF_MUL ranges minimum typo Marat Khalili
@ 2026-05-06 17:38 ` Marat Khalili
2026-05-06 17:38 ` [PATCH 15/25] bpf/validate: fix BPF_JGT/EBPF_JSGT no-jump max Marat Khalili
` (11 subsequent siblings)
25 siblings, 0 replies; 28+ messages in thread
From: Marat Khalili @ 2026-05-06 17:38 UTC (permalink / raw)
To: Konstantin Ananyev; +Cc: dev, stable
Function `eval_mul` triggered signed overflow for large constants.
E.g. consider the following program with the current validation code:
Tested program:
0: mov r0, #0x0
1: lddw r2, #0x9876543210
3: mul r2, #0x12345678 ; tested instruction
4: mov r0, #0x1
5: exit
With sanitizer the following diagnostic is generated:
lib/bpf/bpf_validate.c:1032:26: runtime error: signed integer
overflow: 654820258320 * 305419896 cannot be represented in type
'long int'
#0 0x000002746bfd in eval_mul lib/bpf/bpf_validate.c:1032
#1 0x00000274b6ac in eval_alu lib/bpf/bpf_validate.c:1260
#2 0x00000275c526 in evaluate lib/bpf/bpf_validate.c:3174
...
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior
lib/bpf/bpf_validate.c:1032:26
Multiply constants as unsigned, which produces the mathematically
correct result in two's complement representation, and add a test.
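A minimal illustration of the difference (a standalone sketch, not DPDK
code):
	int64_t a = 0x9876543210;
	int64_t b = 0x12345678;
	/* a * b overflows int64_t: undefined behaviour */
	uint64_t p = (uint64_t)a * (uint64_t)b;	/* wraps modulo 2^64 */
The unsigned product holds the same bit pattern that the 64-bit register
would contain after the wrap, so masking it with msk preserves the
existing semantics.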
Fixes: 8021917293d0 ("bpf: add extra validation for input BPF program")
Cc: stable@dpdk.org
Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
app/test/test_bpf_validate.c | 17 +++++++++++++++++
lib/bpf/bpf_validate.c | 4 ++--
2 files changed, 19 insertions(+), 2 deletions(-)
diff --git a/app/test/test_bpf_validate.c b/app/test/test_bpf_validate.c
index 3e0493f831ae..b4cb5d8cdf8d 100644
--- a/app/test/test_bpf_validate.c
+++ b/app/test/test_bpf_validate.c
@@ -1289,6 +1289,23 @@ test_alu64_div_mod_overflow(void)
REGISTER_FAST_TEST(bpf_validate_alu64_div_mod_overflow_autotest, NOHUGE_OK, ASAN_OK,
test_alu64_div_mod_overflow);
+/* 64-bit multiplication of constant and immediate with overflow. */
+static int
+test_alu64_mul_k_overflow(void)
+{
+ return verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (EBPF_ALU64 | BPF_MUL | BPF_K),
+ .imm = 0x12345678,
+ },
+ .pre.dst = make_singleton_domain(0x9876543210),
+ .post.dst = make_singleton_domain(0x9876543210u * 0x12345678),
+ });
+}
+
+REGISTER_FAST_TEST(bpf_validate_alu64_mul_k_overflow_autotest, NOHUGE_OK, ASAN_OK,
+ test_alu64_mul_k_overflow);
+
/* 64-bit mul of small scalar range and immediate. */
static int
test_alu64_mul_k_range_small(void)
diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c
index 39c75bbcd76f..a53048801a23 100644
--- a/lib/bpf/bpf_validate.c
+++ b/lib/bpf/bpf_validate.c
@@ -921,8 +921,8 @@ eval_mul(struct bpf_reg_val *rd, const struct bpf_reg_val *rs, size_t opsz,
/* both operands are constants */
if (rd->s.min == rd->s.max && rs->s.min == rs->s.max) {
- rd->s.min = (rd->s.min * rs->s.min) & msk;
- rd->s.max = (rd->s.max * rs->s.max) & msk;
+ rd->s.min = ((uint64_t)rd->s.min * (uint64_t)rs->s.min) & msk;
+ rd->s.max = ((uint64_t)rd->s.max * (uint64_t)rs->s.max) & msk;
/* check that both operands are positive and no overflow */
} else if (rd->s.min >= 0 && rs->s.min >= 0) {
rd->s.max *= rs->s.max;
--
2.43.0
^ permalink raw reply related [flat|nested] 28+ messages in thread
* [PATCH 15/25] bpf/validate: fix BPF_JGT/EBPF_JSGT no-jump max
2026-05-06 17:38 [PATCH 00/25] bpf: test and fix issues in verifier Marat Khalili
` (13 preceding siblings ...)
2026-05-06 17:38 ` [PATCH 14/25] bpf/validate: fix BPF_MUL signed overflow UB Marat Khalili
@ 2026-05-06 17:38 ` Marat Khalili
2026-05-06 17:38 ` [PATCH 16/25] bpf/validate: fix BPF_JMP source range calculation Marat Khalili
` (10 subsequent siblings)
25 siblings, 0 replies; 28+ messages in thread
From: Marat Khalili @ 2026-05-06 17:38 UTC (permalink / raw)
To: Konstantin Ananyev; +Cc: dev, stable
Functions `eval_jgt_jle` and `eval_jsgt_jsle` reduced the range maximum
for BPF_JGT and EBPF_JSGT instructions in the no-jump case to the
minimum of the src register instead of its maximum, producing a more
conservative estimate that could cause false positives.
E.g. consider the following program with the current validation code:
Tested program:
0: mov r0, #0x0
1: ldxdw r2, [r1 + 0]
2: jlt r2, #0x14, L15
3: jgt r2, #0x3c, L15
4: jslt r2, #0x14, L15
5: jsgt r2, #0x3c, L15
6: ldxdw r3, [r1 + 8]
7: jlt r3, #0x1e, L15
8: jgt r3, #0x32, L15
9: jslt r3, #0x1e, L15
10: jsgt r3, #0x32, L15
11: jgt r2, r3, L14 ; tested instruction
12: mov r0, #0x1
13: exit
14: mov r0, #0x2
15: exit
Pre-state:
r2: 20..60
r3: 30..50
Post-state:
r2: 20..60 INTERSECT 0x14..0x1e (!)
Immediately after the tested instruction, at step 12, the validator
expects r2 to contain values up to 60, for example 55. However, for this
value the jump condition r2 > r3 at step 11 would always be satisfied,
since r3 is known not to exceed 50, and thus execution would always jump
to step 14 instead of continuing to step 12.
Fix the range calculation; add tests for the cases where the range of
src register values is a strict subset of the dst range. The remaining
cases will be covered in subsequent commits.
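A standalone brute-force check of the corrected bound for the example
above (an illustration, not part of the patch):

    #include <assert.h>
    #include <stdint.h>

    /* No-jump branch of `jgt r2, r3` with r2 in 20..60, r3 in 30..50:
     * find the largest r2 that can actually reach the no-jump path. */
    int main(void)
    {
        uint64_t max_r2 = 0;
        for (uint64_t r2 = 20; r2 <= 60; r2++)
            for (uint64_t r3 = 30; r3 <= 50; r3++)
                if (!(r2 > r3) && r2 > max_r2) /* no jump: r2 <= r3 */
                    max_r2 = r2;
        assert(max_r2 == 50); /* bounded by src max, not src min (30) */
        return 0;
    }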
Fixes: 8021917293d0 ("bpf: add extra validation for input BPF program")
Cc: stable@dpdk.org
Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
app/test/test_bpf_validate.c | 90 ++++++++++++++++++++++++++++++++++++
lib/bpf/bpf_validate.c | 4 +-
2 files changed, 92 insertions(+), 2 deletions(-)
diff --git a/app/test/test_bpf_validate.c b/app/test/test_bpf_validate.c
index b4cb5d8cdf8d..359e50aaaf8f 100644
--- a/app/test/test_bpf_validate.c
+++ b/app/test/test_bpf_validate.c
@@ -1485,6 +1485,96 @@ test_jmp64_jslt_x(void)
REGISTER_FAST_TEST(bpf_validate_jmp64_jslt_x_autotest, NOHUGE_OK, ASAN_OK,
test_jmp64_jslt_x);
+/* Jump on ordering relationship with narrower range. */
+static int
+test_jmp64_jxx_x_ordering_narrower(void)
+{
+ TEST_ASSERT_SUCCESS(verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | BPF_JGT | BPF_X),
+ },
+ .pre.dst = make_signed_domain(20, 60),
+ .pre.src = make_signed_domain(30, 50),
+ .post.dst = make_signed_domain(20, 50),
+ .jump.dst = make_signed_domain(31, 60),
+ }), "(BPF_JMP | BPF_JGT | BPF_X) check");
+
+ TEST_ASSERT_SUCCESS(verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | BPF_JGE | BPF_X),
+ },
+ .pre.dst = make_signed_domain(20, 60),
+ .pre.src = make_signed_domain(30, 50),
+ .post.dst = make_signed_domain(20, 49),
+ .jump.dst = make_signed_domain(30, 60),
+ }), "(BPF_JMP | BPF_JGE | BPF_X) check");
+
+ TEST_ASSERT_SUCCESS(verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | EBPF_JLT | BPF_X),
+ },
+ .pre.dst = make_signed_domain(20, 60),
+ .pre.src = make_signed_domain(30, 50),
+ .post.dst = make_signed_domain(30, 60),
+ .jump.dst = make_signed_domain(20, 49),
+ }), "(BPF_JMP | EBPF_JLT | BPF_X) check");
+
+ TEST_ASSERT_SUCCESS(verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | EBPF_JLE | BPF_X),
+ },
+ .pre.dst = make_signed_domain(20, 60),
+ .pre.src = make_signed_domain(30, 50),
+ .post.dst = make_signed_domain(31, 60),
+ .jump.dst = make_signed_domain(20, 50),
+ }), "(BPF_JMP | EBPF_JLE | BPF_X) check");
+
+ TEST_ASSERT_SUCCESS(verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | EBPF_JSGT | BPF_X),
+ },
+ .pre.dst = make_signed_domain(20, 60),
+ .pre.src = make_signed_domain(30, 50),
+ .post.dst = make_signed_domain(20, 50),
+ .jump.dst = make_signed_domain(31, 60),
+ }), "(BPF_JMP | EBPF_JSGT | BPF_X) check");
+
+ TEST_ASSERT_SUCCESS(verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | EBPF_JSGE | BPF_X),
+ },
+ .pre.dst = make_signed_domain(20, 60),
+ .pre.src = make_signed_domain(30, 50),
+ .post.dst = make_signed_domain(20, 49),
+ .jump.dst = make_signed_domain(30, 60),
+ }), "(BPF_JMP | EBPF_JSGE | BPF_X) check");
+
+ TEST_ASSERT_SUCCESS(verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | EBPF_JSLT | BPF_X),
+ },
+ .pre.dst = make_signed_domain(20, 60),
+ .pre.src = make_signed_domain(30, 50),
+ .post.dst = make_signed_domain(30, 60),
+ .jump.dst = make_signed_domain(20, 49),
+ }), "(BPF_JMP | EBPF_JSLT | BPF_X) check");
+
+ TEST_ASSERT_SUCCESS(verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | EBPF_JSLE | BPF_X),
+ },
+ .pre.dst = make_signed_domain(20, 60),
+ .pre.src = make_signed_domain(30, 50),
+ .post.dst = make_signed_domain(31, 60),
+ .jump.dst = make_signed_domain(20, 50),
+ }), "(BPF_JMP | EBPF_JSLE | BPF_X) check");
+
+ return TEST_SUCCESS;
+}
+
+REGISTER_FAST_TEST(bpf_validate_jmp64_jxx_x_ordering_narrower_autotest, NOHUGE_OK, ASAN_OK,
+ test_jmp64_jxx_x_ordering_narrower);
+
/* 64-bit load from heap (should be set to unknown). */
static int
test_mem_ldx_dw_heap(void)
diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c
index a53048801a23..ddc468fa0dce 100644
--- a/lib/bpf/bpf_validate.c
+++ b/lib/bpf/bpf_validate.c
@@ -1521,7 +1521,7 @@ static void
eval_jgt_jle(struct bpf_reg_val *trd, struct bpf_reg_val *trs,
struct bpf_reg_val *frd, struct bpf_reg_val *frs)
{
- frd->u.max = RTE_MIN(frd->u.max, frs->u.min);
+ frd->u.max = RTE_MIN(frd->u.max, frs->u.max);
trd->u.min = RTE_MAX(trd->u.min, trs->u.min + 1);
}
@@ -1537,7 +1537,7 @@ static void
eval_jsgt_jsle(struct bpf_reg_val *trd, struct bpf_reg_val *trs,
struct bpf_reg_val *frd, struct bpf_reg_val *frs)
{
- frd->s.max = RTE_MIN(frd->s.max, frs->s.min);
+ frd->s.max = RTE_MIN(frd->s.max, frs->s.max);
trd->s.min = RTE_MAX(trd->s.min, trs->s.min + 1);
}
--
2.43.0
^ permalink raw reply related [flat|nested] 28+ messages in thread
* [PATCH 16/25] bpf/validate: fix BPF_JMP source range calculation
2026-05-06 17:38 [PATCH 00/25] bpf: test and fix issues in verifier Marat Khalili
` (14 preceding siblings ...)
2026-05-06 17:38 ` [PATCH 15/25] bpf/validate: fix BPF_JGT/EBPF_JSGT no-jump max Marat Khalili
@ 2026-05-06 17:38 ` Marat Khalili
2026-05-06 17:38 ` [PATCH 17/25] bpf/validate: fix BPF_JMP empty range handling Marat Khalili
` (9 subsequent siblings)
25 siblings, 0 replies; 28+ messages in thread
From: Marat Khalili @ 2026-05-06 17:38 UTC (permalink / raw)
To: Konstantin Ananyev; +Cc: dev, stable
All two-register ordering comparison functions (`eval_jgt_jle`,
`eval_jlt_jge`, `eval_jsgt_jsle`, `eval_jslt_jsge`) updated only the
destination register value set, not the source register one. For
instance, the instruction `jgt r2, r3` should be exactly equivalent to
`jlt r3, r2`, but previously the former only updated the possible values
of r2 while the latter only updated the possible values of r3. Thus the
estimate for the source register was conservative and could cause false
positives.
E.g. consider the following program with the current validation code:
Tested program:
0: mov r0, #0x0
1: mov r2, #0x28
2: ldxdw r3, [r1 + 0]
3: jlt r3, #0x14, L11
4: jgt r3, #0x3c, L11
5: jslt r3, #0x14, L11
6: jsgt r3, #0x3c, L11
7: jgt r2, r3, L10 ; tested instruction
8: mov r0, #0x1
9: exit
10: mov r0, #0x2
11: exit
Pre-state:
r2: 40
r3: 20..60
...
Jump-state:
r2: 40
r3: 20..60
If the tested instruction jumped from step 7 to step 10, the validator
expects r3 to contain values up to 60, for example 55. However, for this
value the jump condition r2 > r3 can never be satisfied, since r2 is
known to equal 40, and thus execution would always continue to step 8
instead of jumping.
Add the missing source register value update.
Introduce a test harness for verifying all equivalent variations of a
comparison instruction. Add tests for all cases where both code branches
are reachable (unreachable branches will be covered by subsequent
commits).
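A standalone brute-force check of the missing src update for the example
above (an illustration, not part of the patch):

    #include <assert.h>
    #include <stdint.h>

    /* Jump branch of `jgt r2, r3` with r2 == 40, r3 in 20..60:
     * the jump also narrows r3, not only r2. */
    int main(void)
    {
        const uint64_t r2 = 40;
        uint64_t max_r3 = 0;
        for (uint64_t r3 = 20; r3 <= 60; r3++)
            if (r2 > r3 && r3 > max_r3)
                max_r3 = r3;
        assert(max_r3 == 39); /* r3 <= r2 - 1, far below the old 60 */
        return 0;
    }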
Fixes: 8021917293d0 ("bpf: add extra validation for input BPF program")
Cc: stable@dpdk.org
Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
app/test/test_bpf_validate.c | 394 +++++++++++++++++++++++++++++++----
lib/bpf/bpf_validate.c | 8 +
2 files changed, 358 insertions(+), 44 deletions(-)
diff --git a/app/test/test_bpf_validate.c b/app/test/test_bpf_validate.c
index 359e50aaaf8f..1c40ebddf07a 100644
--- a/app/test/test_bpf_validate.c
+++ b/app/test/test_bpf_validate.c
@@ -32,6 +32,31 @@ RTE_LOG_REGISTER(test_bpf_validate_logtype, test.bpf_validate, NOTICE);
#define REGISTER_FORMAT_BUFFER_SIZE 256
#define DISASSEMBLY_FORMAT_BUFFER_SIZE 64
+#define COMPARISON_INDEX_IMMEDIATE RTE_BIT32(0)
+#define COMPARISON_INDEX_GREATER RTE_BIT32(1)
+#define COMPARISON_INDEX_INCLUSIVE RTE_BIT32(2)
+#define COMPARISON_INDEX_SIGNED RTE_BIT32(3)
+
+/* List comparison opcodes to make their index bits match constants above. */
+static const uint8_t comparisons_opcode[] = {
+ (BPF_JMP | EBPF_JLT | BPF_X),
+ (BPF_JMP | EBPF_JLT | BPF_K),
+ (BPF_JMP | BPF_JGT | BPF_X),
+ (BPF_JMP | BPF_JGT | BPF_K),
+ (BPF_JMP | EBPF_JLE | BPF_X),
+ (BPF_JMP | EBPF_JLE | BPF_K),
+ (BPF_JMP | BPF_JGE | BPF_X),
+ (BPF_JMP | BPF_JGE | BPF_K),
+ (BPF_JMP | EBPF_JSLT | BPF_X),
+ (BPF_JMP | EBPF_JSLT | BPF_K),
+ (BPF_JMP | EBPF_JSGT | BPF_X),
+ (BPF_JMP | EBPF_JSGT | BPF_K),
+ (BPF_JMP | EBPF_JSLE | BPF_X),
+ (BPF_JMP | EBPF_JSLE | BPF_K),
+ (BPF_JMP | EBPF_JSGE | BPF_X),
+ (BPF_JMP | EBPF_JSGE | BPF_K),
+};
+
/* Interval bounded by two signed values, inclusive; min <= max. */
struct signed_interval {
int64_t min;
@@ -1044,6 +1069,206 @@ verify_instruction(struct verify_instruction_param prm)
return rc;
}
+static int
+opcode_comparison_index(uint8_t opcode)
+{
+ for (int index = 0; index != RTE_DIM(comparisons_opcode); ++index)
+ if (comparisons_opcode[index] == opcode)
+ return index;
+ TEST_LOG_LINE(ERR, "Unsupported or not a comparison opcode: %hhx", opcode);
+ RTE_VERIFY(false);
+}
+
+/* Change two-register comparison verification to immediate one. */
+static bool
+make_comparison_immediate(struct verify_instruction_param *prm)
+{
+ int comparison_index = opcode_comparison_index(prm->tested_instruction.code);
+ const int64_t value = prm->pre.src.s.min;
+
+ if ((comparison_index & COMPARISON_INDEX_IMMEDIATE) != 0) {
+ TEST_LOG_LINE(ERR, "Comparison %hhx is already immediate.",
+ prm->tested_instruction.code);
+ RTE_VERIFY(false);
+ }
+
+ if (!domain_is_singleton(&prm->pre.src) || !domain_is_singleton(&prm->post.src) ||
+ !domain_is_singleton(&prm->jump.src)) {
+ TEST_LOG_LINE(DEBUG, "Cannot make immediate out of a non-singleton domain.");
+ return false;
+ }
+ if (prm->pre.src.is_pointer || prm->post.src.is_pointer || prm->jump.src.is_pointer) {
+ TEST_LOG_LINE(DEBUG, "Cannot make immediate out of a pointer.");
+ return false;
+ }
+ if (prm->post.src.s.min != value || prm->jump.src.s.min != value) {
+ TEST_LOG_LINE(DEBUG, "Cannot make immediate if the value changes.");
+ return false;
+ }
+ if (!fits_in_imm32(value)) {
+ TEST_LOG_LINE(ERR, "Cannot make immediate unless value fits in int32.");
+ return false;
+ }
+
+ comparison_index |= COMPARISON_INDEX_IMMEDIATE;
+ prm->tested_instruction.code = comparisons_opcode[comparison_index];
+ prm->tested_instruction.imm = value;
+
+ RTE_VERIFY(prm->pre.src.is_defined);
+ prm->pre.src.is_defined = false;
+
+ if (!prm->post.is_unreachable) {
+ RTE_VERIFY(prm->post.src.is_defined);
+ prm->post.src.is_defined = false;
+ }
+
+ if (!prm->jump.is_unreachable) {
+ RTE_VERIFY(prm->jump.src.is_defined);
+ prm->jump.src.is_defined = false;
+ }
+
+ return true;
+}
+
+/* Change immediate comparison verification to two-register one. */
+static void
+make_comparison_two_register(struct verify_instruction_param *prm)
+{
+ int comparison_index = opcode_comparison_index(prm->tested_instruction.code);
+ const int64_t value = prm->tested_instruction.imm;
+
+ if ((comparison_index & COMPARISON_INDEX_IMMEDIATE) == 0) {
+ TEST_LOG_LINE(ERR, "Comparison %hhx is already two-register.",
+ prm->tested_instruction.code);
+ RTE_VERIFY(false);
+ }
+
+ comparison_index &= ~COMPARISON_INDEX_IMMEDIATE;
+ prm->tested_instruction.code = comparisons_opcode[comparison_index];
+ prm->tested_instruction.imm = 0;
+
+ RTE_VERIFY(!prm->pre.src.is_defined);
+ prm->pre.src = make_singleton_domain(value);
+
+ if (!prm->post.is_unreachable) {
+ RTE_VERIFY(!prm->post.src.is_defined);
+ prm->post.src = prm->pre.src;
+ }
+
+ if (!prm->jump.is_unreachable) {
+ RTE_VERIFY(!prm->jump.src.is_defined);
+ prm->jump.src = prm->pre.src;
+ }
+}
+
+/* Change comparison verification to complement (negated result) one. */
+static void
+make_comparison_complement(struct verify_instruction_param *prm)
+{
+ int comparison_index = opcode_comparison_index(prm->tested_instruction.code);
+ comparison_index ^= COMPARISON_INDEX_GREATER | COMPARISON_INDEX_INCLUSIVE;
+ prm->tested_instruction.code = comparisons_opcode[comparison_index];
+ RTE_SWAP(prm->post, prm->jump);
+}
+
+/* Change comparison verification to converse (swapped operands) one. */
+static void
+make_comparison_converse(struct verify_instruction_param *prm)
+{
+ int comparison_index = opcode_comparison_index(prm->tested_instruction.code);
+ comparison_index ^= COMPARISON_INDEX_GREATER;
+ prm->tested_instruction.code = comparisons_opcode[comparison_index];
+ RTE_SWAP(prm->pre.dst, prm->pre.src);
+ RTE_SWAP(prm->post.dst, prm->post.src);
+ RTE_SWAP(prm->jump.dst, prm->jump.src);
+}
+
+/* Change signed comparison verification to unsigned one. */
+static void
+make_comparison_signed(struct verify_instruction_param *prm)
+{
+ int comparison_index = opcode_comparison_index(prm->tested_instruction.code);
+ if ((comparison_index & COMPARISON_INDEX_SIGNED) != 0) {
+ TEST_LOG_LINE(ERR, "Comparison %hhx is already signed.",
+ prm->tested_instruction.code);
+ RTE_VERIFY(false);
+ }
+ comparison_index |= COMPARISON_INDEX_SIGNED;
+ prm->tested_instruction.code = comparisons_opcode[comparison_index];
+}
+
+/* Verify specified two-register comparison and, if possible, immediate one. */
+static int
+verify_comparison_subcase(struct verify_instruction_param prm)
+{
+ TEST_ASSERT_SUCCESS(verify_instruction(prm), "two-register version check");
+
+ if (make_comparison_immediate(&prm))
+ TEST_ASSERT_SUCCESS(verify_instruction(prm), "immediate version check");
+
+ return TEST_SUCCESS;
+}
+
+/*
+ * Verify comparison instruction validation behaviour.
+ *
+ * Call `verify_instruction` for all valid variations of the instruction.
+ *
+ * For instance, `jgt r2, r3` verifies:
+ * * `jgt r2, r3`;
+ * * `jlt r3, r2` src and dst swapped with each other;
+ * * `jle r2, r3` with post and jump domains swapped with each other;
+ * * `jge r3, r2` with all corresponding swaps;
+ * * immediate versions of everything above where possible,
+ * that is, register on the right is an int32 scalar singleton;
+ * * signed versions of everything above if `also_signed` is true;
+ *
+ * Regardless if passed instruction compares with immediate or singleton src
+ * both cases are generated and tested.
+ */
+static int
+verify_comparison(struct verify_instruction_param prm, bool also_signed)
+{
+ fill_verify_instruction_defaults(&prm);
+
+ if (!prm.pre.src.is_defined)
+ /* Convert from immediate form to simplify further logic. */
+ make_comparison_two_register(&prm);
+
+ /* All reachable domains must be defined by this point. */
+ RTE_VERIFY(prm.pre.dst.is_defined);
+ RTE_VERIFY(prm.pre.src.is_defined);
+ if (!prm.post.is_unreachable) {
+ RTE_VERIFY(prm.post.dst.is_defined);
+ RTE_VERIFY(prm.post.src.is_defined);
+ }
+ if (!prm.jump.is_unreachable) {
+ RTE_VERIFY(prm.jump.dst.is_defined);
+ RTE_VERIFY(prm.jump.src.is_defined);
+ }
+
+ for (int make_signed = 0; make_signed <= also_signed; ++make_signed) {
+ if (make_signed)
+ make_comparison_signed(&prm);
+
+ for (int complement = false; complement <= true; ++complement) {
+
+ for (int converse = false; converse <= true; ++converse) {
+
+ TEST_ASSERT_SUCCESS(verify_comparison_subcase(prm),
+ "make_signed=%d, complement=%d, converse=%d",
+ make_signed, complement, converse);
+
+ make_comparison_converse(&prm);
+ }
+
+ make_comparison_complement(&prm);
+ }
+ }
+
+ return TEST_SUCCESS;
+}
+
/* TESTS FOR SPECIFIC INSTRUCTIONS */
@@ -1485,31 +1710,69 @@ test_jmp64_jslt_x(void)
REGISTER_FAST_TEST(bpf_validate_jmp64_jslt_x_autotest, NOHUGE_OK, ASAN_OK,
test_jmp64_jslt_x);
-/* Jump on ordering relationship with narrower range. */
+/* Jump on ordering comparisons between two ranges. */
static int
-test_jmp64_jxx_x_ordering_narrower(void)
+test_jmp64_ordering_ranges(void)
{
- TEST_ASSERT_SUCCESS(verify_instruction((struct verify_instruction_param){
+ /* All ranges used are valid for both signed and unsigned comparisons. */
+ const bool also_signed = true;
+
+ /*
+ * 20 ---- dst ---- 60
+ * 10 -- src -- 40
+ */
+
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
.tested_instruction = {
- .code = (BPF_JMP | BPF_JGT | BPF_X),
+ .code = (BPF_JMP | EBPF_JLT | BPF_X),
},
.pre.dst = make_signed_domain(20, 60),
- .pre.src = make_signed_domain(30, 50),
- .post.dst = make_signed_domain(20, 50),
- .jump.dst = make_signed_domain(31, 60),
- }), "(BPF_JMP | BPF_JGT | BPF_X) check");
+ .pre.src = make_signed_domain(10, 40),
+ .jump.dst = make_signed_domain(20, 39),
+ .jump.src = make_signed_domain(21, 40),
+ }, also_signed), "strict, dst range weakly greater than src range");
- TEST_ASSERT_SUCCESS(verify_instruction((struct verify_instruction_param){
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
.tested_instruction = {
- .code = (BPF_JMP | BPF_JGE | BPF_X),
+ .code = (BPF_JMP | EBPF_JLE | BPF_X),
},
.pre.dst = make_signed_domain(20, 60),
- .pre.src = make_signed_domain(30, 50),
- .post.dst = make_signed_domain(20, 49),
- .jump.dst = make_signed_domain(30, 60),
- }), "(BPF_JMP | BPF_JGE | BPF_X) check");
+ .pre.src = make_signed_domain(10, 40),
+ .jump.dst = make_signed_domain(20, 40),
+ .jump.src = make_signed_domain(20, 40),
+ }, also_signed), "non-strict, dst range weakly greater than src range");
- TEST_ASSERT_SUCCESS(verify_instruction((struct verify_instruction_param){
+ /*
+ * 20 ---- dst ---- 60
+ * 10 -------- src -------- 70
+ */
+
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | EBPF_JLT | BPF_X),
+ },
+ .pre.dst = make_signed_domain(20, 60),
+ .pre.src = make_signed_domain(10, 70),
+ .post.src = make_signed_domain(10, 60),
+ .jump.src = make_signed_domain(21, 70),
+ }, also_signed), "strict, dst range included in src range");
+
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | EBPF_JLE | BPF_X),
+ },
+ .pre.dst = make_signed_domain(20, 60),
+ .pre.src = make_signed_domain(10, 70),
+ .post.src = make_signed_domain(10, 59),
+ .jump.src = make_signed_domain(20, 70),
+ }, also_signed), "non-strict, dst range included in src range");
+
+ /*
+ * 20 ---- dst ---- 60
+ * 30 - src - 50
+ */
+
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
.tested_instruction = {
.code = (BPF_JMP | EBPF_JLT | BPF_X),
},
@@ -1517,9 +1780,9 @@ test_jmp64_jxx_x_ordering_narrower(void)
.pre.src = make_signed_domain(30, 50),
.post.dst = make_signed_domain(30, 60),
.jump.dst = make_signed_domain(20, 49),
- }), "(BPF_JMP | EBPF_JLT | BPF_X) check");
+ }, also_signed), "strict, dst range includes src range");
- TEST_ASSERT_SUCCESS(verify_instruction((struct verify_instruction_param){
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
.tested_instruction = {
.code = (BPF_JMP | EBPF_JLE | BPF_X),
},
@@ -1527,53 +1790,96 @@ test_jmp64_jxx_x_ordering_narrower(void)
.pre.src = make_signed_domain(30, 50),
.post.dst = make_signed_domain(31, 60),
.jump.dst = make_signed_domain(20, 50),
- }), "(BPF_JMP | EBPF_JLE | BPF_X) check");
+ }, also_signed), "non-strict, dst range includes src range");
- TEST_ASSERT_SUCCESS(verify_instruction((struct verify_instruction_param){
+ /*
+ * 20 ---- dst ---- 60
+ * 40 -- src -- 70
+ */
+
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
.tested_instruction = {
- .code = (BPF_JMP | EBPF_JSGT | BPF_X),
+ .code = (BPF_JMP | EBPF_JLT | BPF_X),
},
.pre.dst = make_signed_domain(20, 60),
- .pre.src = make_signed_domain(30, 50),
- .post.dst = make_signed_domain(20, 50),
- .jump.dst = make_signed_domain(31, 60),
- }), "(BPF_JMP | EBPF_JSGT | BPF_X) check");
+ .pre.src = make_signed_domain(40, 70),
+ .post.dst = make_signed_domain(40, 60),
+ .post.src = make_signed_domain(40, 60),
+ }, also_signed), "strict, dst range weakly less than src range");
- TEST_ASSERT_SUCCESS(verify_instruction((struct verify_instruction_param){
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
.tested_instruction = {
- .code = (BPF_JMP | EBPF_JSGE | BPF_X),
+ .code = (BPF_JMP | EBPF_JLE | BPF_X),
},
.pre.dst = make_signed_domain(20, 60),
- .pre.src = make_signed_domain(30, 50),
- .post.dst = make_signed_domain(20, 49),
- .jump.dst = make_signed_domain(30, 60),
- }), "(BPF_JMP | EBPF_JSGE | BPF_X) check");
+ .pre.src = make_signed_domain(40, 70),
+ .post.dst = make_signed_domain(41, 60),
+ .post.src = make_signed_domain(40, 59),
+ }, also_signed), "non-strict, dst range weakly less than src range");
- TEST_ASSERT_SUCCESS(verify_instruction((struct verify_instruction_param){
+ return TEST_SUCCESS;
+}
+
+REGISTER_FAST_TEST(bpf_validate_jmp64_ordering_ranges_autotest, NOHUGE_OK, ASAN_OK,
+ test_jmp64_ordering_ranges);
+
+/* Jump on ordering comparisons with singleton. */
+static int
+test_jmp64_ordering_singleton(void)
+{
+ /* All ranges used are valid for both signed and unsigned comparisons. */
+ const bool also_signed = true;
+
+ /*
+ * 20 ---- dst ---- 60
+ * imm
+ */
+
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
.tested_instruction = {
- .code = (BPF_JMP | EBPF_JSLT | BPF_X),
+ .code = (BPF_JMP | EBPF_JLT | BPF_K),
+ .imm = 40,
},
.pre.dst = make_signed_domain(20, 60),
- .pre.src = make_signed_domain(30, 50),
- .post.dst = make_signed_domain(30, 60),
- .jump.dst = make_signed_domain(20, 49),
- }), "(BPF_JMP | EBPF_JSLT | BPF_X) check");
+ .post.dst = make_signed_domain(40, 60),
+ .jump.dst = make_signed_domain(20, 39),
+ }, also_signed), "(BPF_JMP | EBPF_JLT | BPF_K) check");
- TEST_ASSERT_SUCCESS(verify_instruction((struct verify_instruction_param){
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
.tested_instruction = {
- .code = (BPF_JMP | EBPF_JSLE | BPF_X),
+ .code = (BPF_JMP | BPF_JGT | BPF_K),
+ .imm = 40,
},
.pre.dst = make_signed_domain(20, 60),
- .pre.src = make_signed_domain(30, 50),
- .post.dst = make_signed_domain(31, 60),
- .jump.dst = make_signed_domain(20, 50),
- }), "(BPF_JMP | EBPF_JSLE | BPF_X) check");
+ .post.dst = make_signed_domain(20, 40),
+ .jump.dst = make_signed_domain(41, 60),
+ }, also_signed), "(BPF_JMP | EBPF_JGT | BPF_K) check");
+
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | EBPF_JLE | BPF_K),
+ .imm = 40,
+ },
+ .pre.dst = make_signed_domain(20, 60),
+ .post.dst = make_signed_domain(41, 60),
+ .jump.dst = make_signed_domain(20, 40),
+ }, also_signed), "(BPF_JMP | EBPF_JLE | BPF_K) check");
+
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | BPF_JGE | BPF_K),
+ .imm = 40,
+ },
+ .pre.dst = make_signed_domain(20, 60),
+ .post.dst = make_signed_domain(20, 39),
+ .jump.dst = make_signed_domain(40, 60),
+ }, also_signed), "(BPF_JMP | EBPF_JGE | BPF_K) check");
return TEST_SUCCESS;
}
-REGISTER_FAST_TEST(bpf_validate_jmp64_jxx_x_ordering_narrower_autotest, NOHUGE_OK, ASAN_OK,
- test_jmp64_jxx_x_ordering_narrower);
+REGISTER_FAST_TEST(bpf_validate_jmp64_ordering_singleton_autotest, NOHUGE_OK, ASAN_OK,
+ test_jmp64_ordering_singleton);
/* 64-bit load from heap (should be set to unknown). */
static int
diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c
index ddc468fa0dce..8b7c27a2fa3a 100644
--- a/lib/bpf/bpf_validate.c
+++ b/lib/bpf/bpf_validate.c
@@ -1522,7 +1522,9 @@ eval_jgt_jle(struct bpf_reg_val *trd, struct bpf_reg_val *trs,
struct bpf_reg_val *frd, struct bpf_reg_val *frs)
{
frd->u.max = RTE_MIN(frd->u.max, frs->u.max);
+ frs->u.min = RTE_MAX(frs->u.min, frd->u.min);
trd->u.min = RTE_MAX(trd->u.min, trs->u.min + 1);
+ trs->u.max = RTE_MIN(trs->u.max, trd->u.max - 1);
}
static void
@@ -1530,7 +1532,9 @@ eval_jlt_jge(struct bpf_reg_val *trd, struct bpf_reg_val *trs,
struct bpf_reg_val *frd, struct bpf_reg_val *frs)
{
frd->u.min = RTE_MAX(frd->u.min, frs->u.min);
+ frs->u.max = RTE_MIN(frs->u.max, frd->u.max);
trd->u.max = RTE_MIN(trd->u.max, trs->u.max - 1);
+ trs->u.min = RTE_MAX(trs->u.min, trd->u.min + 1);
}
static void
@@ -1538,7 +1542,9 @@ eval_jsgt_jsle(struct bpf_reg_val *trd, struct bpf_reg_val *trs,
struct bpf_reg_val *frd, struct bpf_reg_val *frs)
{
frd->s.max = RTE_MIN(frd->s.max, frs->s.max);
+ frs->s.min = RTE_MAX(frs->s.min, frd->s.min);
trd->s.min = RTE_MAX(trd->s.min, trs->s.min + 1);
+ trs->s.max = RTE_MIN(trs->s.max, trd->s.max - 1);
}
static void
@@ -1546,7 +1552,9 @@ eval_jslt_jsge(struct bpf_reg_val *trd, struct bpf_reg_val *trs,
struct bpf_reg_val *frd, struct bpf_reg_val *frs)
{
frd->s.min = RTE_MAX(frd->s.min, frs->s.min);
+ frs->s.max = RTE_MIN(frs->s.max, frd->s.max);
trd->s.max = RTE_MIN(trd->s.max, trs->s.max - 1);
+ trs->s.min = RTE_MAX(trs->s.min, trd->s.min + 1);
}
static const char *
--
2.43.0
^ permalink raw reply related [flat|nested] 28+ messages in thread
* [PATCH 17/25] bpf/validate: fix BPF_JMP empty range handling
2026-05-06 17:38 [PATCH 00/25] bpf: test and fix issues in verifier Marat Khalili
` (15 preceding siblings ...)
2026-05-06 17:38 ` [PATCH 16/25] bpf/validate: fix BPF_JMP source range calculation Marat Khalili
@ 2026-05-06 17:38 ` Marat Khalili
2026-05-06 17:38 ` [PATCH 18/25] bpf/validate: fix BPF_AND min calculations Marat Khalili
` (8 subsequent siblings)
25 siblings, 0 replies; 28+ messages in thread
From: Marat Khalili @ 2026-05-06 17:38 UTC (permalink / raw)
To: Konstantin Ananyev; +Cc: dev, stable
Function `eval_jcc` did not account for 'dynamically unreachable' code
paths. Some code paths may be _dynamically_ unreachable, which means
that, according to the validator's calculations, no valid values are
left to evaluate. This does not indicate dead code, since the same code
might be reachable through other code paths. The previous behaviour
resulted in:
* undefined behaviour in corner cases;
* ranges breaking the min <= max invariant relied upon in multiple
places (e.g. the signed overflow detection in `eval_mul` only checks
`s.min` to make sure the range is non-negative, and so on);
* unnecessary work for the validator, contributing to exponential
code-path growth in some cases.
E.g. consider the following program with the current validation code:
Tested program:
0: mov r0, #0x0
1: mov r2, #0x2a
2: lddw r3, #0x8000000000000000
4: jslt r2, r3, L7 ; tested instruction
5: mov r0, #0x1
6: exit
7: mov r0, #0x2
8: exit
Pre-state:
r2: 42
r3: INT64_MIN
Post-state:
r2: 42
r3: INT64_MIN
Jump-state:
r2: 42
r3: 43..INT64_MIN INTERSECT 0x8000000000000000 (!)
At step 7, after the jump from the tested instruction, the validator
considers r3 to equal 0x8000000000000000 if viewed as unsigned, or to
have the nonsensical range 43..INT64_MIN if viewed as signed. In reality
there is no valid range for this code path, since it will never occur.
With sanitizer the following diagnostic is generated:
lib/bpf/bpf_validate.c:1824:15: runtime error: signed integer
overflow: -9223372036854775808 - 1 cannot be represented in type
'long int'
#0 0x000002761e41 in eval_jslt_jsge lib/bpf/bpf_validate.c:1824
#1 0x000002762acb in eval_jcc lib/bpf/bpf_validate.c:1881
#2 0x00000276b749 in evaluate lib/bpf/bpf_validate.c:3245
...
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior
lib/bpf/bpf_validate.c:1824:15
Add pruning of dynamically unreachable code paths that arise from
ordering comparisons, and add tests for the remaining ordering jump
cases.
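The pruning idea in a sketch (the names `dst`, `src` and the flag are
illustrative, not the actual API; the patch marks registers with an
uninhabited type instead):

    /* Before tightening the jump branch of `jslt dst, src` (dst < src),
     * check that any value can satisfy the condition at all; otherwise
     * mark the branch dynamically unreachable instead of computing
     * src->s.max - 1 / dst->s.min + 1, which may overflow. */
    if (dst->s.min < src->s.max) {
        dst->s.max = RTE_MIN(dst->s.max, src->s.max - 1);
        src->s.min = RTE_MAX(src->s.min, dst->s.min + 1);
    } else {
        branch_unreachable = true;
    }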
Fixes: 8021917293d0 ("bpf: add extra validation for input BPF program")
Cc: stable@dpdk.org
Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
app/test/test_bpf_validate.c | 277 ++++++++++++++++++++++++++++++-
lib/bpf/bpf_validate.c | 96 ++++++++---
lib/bpf/rte_bpf_validate_debug.h | 2 +
3 files changed, 351 insertions(+), 24 deletions(-)
diff --git a/app/test/test_bpf_validate.c b/app/test/test_bpf_validate.c
index 1c40ebddf07a..4b06918c5cea 100644
--- a/app/test/test_bpf_validate.c
+++ b/app/test/test_bpf_validate.c
@@ -135,6 +135,11 @@ static const struct domain unknown = {
.u = { .min = 0, .max = UINT64_MAX },
};
+/* Unreachable state. */
+static const struct state unreachable = {
+ .is_unreachable = true,
+};
+
/* BUILDING DOMAINS */
@@ -1710,6 +1715,55 @@ test_jmp64_jslt_x(void)
REGISTER_FAST_TEST(bpf_validate_jmp64_jslt_x_autotest, NOHUGE_OK, ASAN_OK,
test_jmp64_jslt_x);
+/* Jump on ordering comparisons with potential bound overflow. */
+static int
+test_jmp64_ordering_overflow(void)
+{
+ /* In this test signed and unsigned cases are spelled out explicitly. */
+ const bool also_signed = false;
+
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | EBPF_JSLT | BPF_X),
+ },
+ .pre.dst = make_singleton_domain(42),
+ .pre.src = make_singleton_domain(INT64_MIN),
+ .jump = unreachable,
+ }, also_signed), "signed less than INT64_MIN");
+
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | EBPF_JSGT | BPF_X),
+ },
+ .pre.dst = make_singleton_domain(42),
+ .pre.src = make_singleton_domain(INT64_MAX),
+ .jump = unreachable,
+ }, also_signed), "signed greater than INT64_MAX");
+
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | EBPF_JLT | BPF_X),
+ },
+ .pre.dst = make_singleton_domain(42),
+ .pre.src = make_singleton_domain(0),
+ .jump = unreachable,
+ }, also_signed), "unsigned less than zero");
+
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | BPF_JGT | BPF_X),
+ },
+ .pre.dst = make_singleton_domain(42),
+ .pre.src = make_singleton_domain(UINT64_MAX),
+ .jump = unreachable,
+ }, also_signed), "unsigned greater than UINT64_MAX");
+
+ return TEST_SUCCESS;
+}
+
+REGISTER_FAST_TEST(bpf_validate_jmp64_ordering_overflow_autotest, NOHUGE_OK, ASAN_OK,
+ test_jmp64_ordering_overflow);
+
/* Jump on ordering comparisons between two ranges. */
static int
test_jmp64_ordering_ranges(void)
@@ -1717,6 +1771,29 @@ test_jmp64_ordering_ranges(void)
/* All ranges used are valid for both signed and unsigned comparisons. */
const bool also_signed = true;
+ /*
+ * 20 ---- dst ---- 60
+ * 0 - src - 10
+ */
+
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | EBPF_JLT | BPF_X),
+ },
+ .pre.dst = make_signed_domain(20, 60),
+ .pre.src = make_signed_domain(0, 10),
+ .jump = unreachable,
+ }, also_signed), "strict, dst range strongly greater than src range");
+
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | EBPF_JLE | BPF_X),
+ },
+ .pre.dst = make_signed_domain(20, 60),
+ .pre.src = make_signed_domain(0, 10),
+ .jump = unreachable,
+ }, also_signed), "non-strict, dst range strongly greater than src range");
+
/*
* 20 ---- dst ---- 60
* 10 -- src -- 40
@@ -1817,15 +1894,38 @@ test_jmp64_ordering_ranges(void)
.post.src = make_signed_domain(40, 59),
}, also_signed), "non-strict, dst range weakly less than src range");
+ /*
+ * 20 ---- dst ---- 60
+ * 70 - src - 80
+ */
+
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | EBPF_JLT | BPF_X),
+ },
+ .pre.dst = make_signed_domain(20, 60),
+ .pre.src = make_signed_domain(70, 80),
+ .post = unreachable,
+ }, also_signed), "strict, dst range strongly less than src range");
+
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | EBPF_JLE | BPF_X),
+ },
+ .pre.dst = make_signed_domain(20, 60),
+ .pre.src = make_signed_domain(70, 80),
+ .post = unreachable,
+ }, also_signed), "non-strict, dst range strongly less than src range");
+
return TEST_SUCCESS;
}
REGISTER_FAST_TEST(bpf_validate_jmp64_ordering_ranges_autotest, NOHUGE_OK, ASAN_OK,
test_jmp64_ordering_ranges);
-/* Jump on ordering comparisons with singleton. */
+/* Jump on ordering comparisons with singleton inside the range. */
static int
-test_jmp64_ordering_singleton(void)
+test_jmp64_ordering_singleton_inside(void)
{
/* All ranges used are valid for both signed and unsigned comparisons. */
const bool also_signed = true;
@@ -1878,8 +1978,177 @@ test_jmp64_ordering_singleton(void)
return TEST_SUCCESS;
}
-REGISTER_FAST_TEST(bpf_validate_jmp64_ordering_singleton_autotest, NOHUGE_OK, ASAN_OK,
- test_jmp64_ordering_singleton);
+REGISTER_FAST_TEST(bpf_validate_jmp64_ordering_singleton_inside_autotest, NOHUGE_OK, ASAN_OK,
+ test_jmp64_ordering_singleton_inside);
+
+/* Jump on ordering comparisons with singleton outside the range. */
+static int
+test_jmp64_ordering_singleton_outside(void)
+{
+ /* All ranges used are valid for both signed and unsigned comparisons. */
+ const bool also_signed = true;
+
+ /*
+ * 20 ---- dst ---- 60
+ * imm
+ */
+
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | EBPF_JLT | BPF_K),
+ .imm = 10,
+ },
+ .pre.dst = make_signed_domain(20, 60),
+ .jump = unreachable,
+ }, also_signed), "(BPF_JMP | EBPF_JLT | BPF_K) check, range greater than imm");
+
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | EBPF_JLE | BPF_K),
+ .imm = 10,
+ },
+ .pre.dst = make_signed_domain(20, 60),
+ .jump = unreachable,
+ }, also_signed), "(BPF_JMP | EBPF_JLE | BPF_K) check, range greater than imm");
+
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | BPF_JGT | BPF_K),
+ .imm = 10,
+ },
+ .pre.dst = make_signed_domain(20, 60),
+ .post = unreachable,
+ }, also_signed), "(BPF_JMP | EBPF_JGT | BPF_K) check, range greater than imm");
+
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | BPF_JGE | BPF_K),
+ .imm = 10,
+ },
+ .pre.dst = make_signed_domain(20, 60),
+ .post = unreachable,
+ }, also_signed), "(BPF_JMP | EBPF_JGE | BPF_K) check, range greater than imm");
+
+ /*
+ * 20 ---- dst ---- 60
+ * imm
+ */
+
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | EBPF_JLT | BPF_K),
+ .imm = 70,
+ },
+ .pre.dst = make_signed_domain(20, 60),
+ .post = unreachable,
+ }, also_signed), "(BPF_JMP | EBPF_JLT | BPF_K) check, range less than imm");
+
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | EBPF_JLE | BPF_K),
+ .imm = 70,
+ },
+ .pre.dst = make_signed_domain(20, 60),
+ .post = unreachable,
+ }, also_signed), "(BPF_JMP | EBPF_JLE | BPF_K) check, range less than imm");
+
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | BPF_JGT | BPF_K),
+ .imm = 70,
+ },
+ .pre.dst = make_signed_domain(20, 60),
+ .jump = unreachable,
+ }, also_signed), "(BPF_JMP | EBPF_JGT | BPF_K) check, range less than imm");
+
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | BPF_JGE | BPF_K),
+ .imm = 70,
+ },
+ .pre.dst = make_signed_domain(20, 60),
+ .jump = unreachable,
+ }, also_signed), "(BPF_JMP | EBPF_JGE | BPF_K) check, range less than imm");
+
+ return TEST_SUCCESS;
+}
+
+REGISTER_FAST_TEST(bpf_validate_jmp64_ordering_singleton_outside_autotest, NOHUGE_OK, ASAN_OK,
+ test_jmp64_ordering_singleton_outside);
+
+/* Jump on ordering comparisons with ranges "touching" each other. */
+static int
+test_jmp64_ordering_touching(void)
+{
+ /* All ranges used are valid for both signed and unsigned comparisons. */
+ const bool also_signed = true;
+
+ for (int overlap = 0; overlap != 3; ++overlap) {
+
+ /*
+ * 20 - dst - 30
+ * 10 - src - (19 + overlap)
+ */
+
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | EBPF_JLT | BPF_X),
+ },
+ .pre.dst = make_signed_domain(20, 30),
+ .pre.src = make_signed_domain(10, 19 + overlap),
+ .jump = overlap <= 1 ? unreachable : (struct state){
+ .dst = make_singleton_domain(20),
+ .src = make_singleton_domain(21),
+ },
+ }, also_signed), "strict, dst left touching src right, overlap=%d", overlap);
+
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | EBPF_JLE | BPF_X),
+ },
+ .pre.dst = make_signed_domain(20, 30),
+ .pre.src = make_signed_domain(10, 19 + overlap),
+ .jump = overlap < 1 ? unreachable : (struct state){
+ .dst = make_signed_domain(20, 19 + overlap),
+ .src = make_signed_domain(20, 19 + overlap),
+ },
+ }, also_signed), "non-strict, dst left touching src right, overlap=%d", overlap);
+
+ /*
+ * 10 - dst - (19 + overlap)
+ * 20 - src - 30
+ */
+
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | EBPF_JLT | BPF_X),
+ },
+ .pre.dst = make_signed_domain(10, 19 + overlap),
+ .pre.src = make_signed_domain(20, 30),
+ .post = overlap < 1 ? unreachable : (struct state){
+ .dst = make_signed_domain(20, 19 + overlap),
+ .src = make_signed_domain(20, 19 + overlap),
+ },
+ }, also_signed), "strict, dst right touching src left, overlap=%d", overlap);
+
+ TEST_ASSERT_SUCCESS(verify_comparison((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (BPF_JMP | EBPF_JLE | BPF_X),
+ },
+ .pre.dst = make_signed_domain(10, 19 + overlap),
+ .pre.src = make_signed_domain(20, 30),
+ .post = overlap <= 1 ? unreachable : (struct state){
+ .dst = make_singleton_domain(21),
+ .src = make_singleton_domain(20),
+ },
+ }, also_signed), "non-strict, dst right touching src left, overlap=%d", overlap);
+ }
+
+ return TEST_SUCCESS;
+}
+
+REGISTER_FAST_TEST(bpf_validate_jmp64_ordering_touching_autotest, NOHUGE_OK, ASAN_OK,
+ test_jmp64_ordering_touching);
/* 64-bit load from heap (should be set to unknown). */
static int
diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c
index 8b7c27a2fa3a..fbae70df924e 100644
--- a/lib/bpf/bpf_validate.c
+++ b/lib/bpf/bpf_validate.c
@@ -19,6 +19,9 @@
#define BPF_ARG_PTR_STACK RTE_BPF_ARG_RESERVED
+/* type containing no values (AKA "bottom", "never" etc) */
+#define BPF_ARG_UNINHABITED ((enum rte_bpf_arg_type)(RTE_BPF_ARG_UNDEF - 1))
+
struct bpf_reg_val {
struct rte_bpf_arg v;
uint64_t mask;
@@ -36,6 +39,8 @@ struct bpf_eval_state {
SLIST_ENTRY(bpf_eval_state) next; /* for @safe list traversal */
struct bpf_reg_val rv[EBPF_REG_NUM];
struct bpf_reg_val sv[MAX_BPF_STACK_SIZE / sizeof(uint64_t)];
+ /* flag set for branches determined to be dynamically unreachable */
+ bool unreachable;
};
SLIST_HEAD(bpf_evst_head, bpf_eval_state);
@@ -174,6 +179,9 @@ __rte_bpf_validate_can_access(const struct bpf_verifier *verifier,
struct value_set access_set;
uint32_t opsz;
+ if (st->unreachable)
+ return -ENOENT;
+
switch (BPF_CLASS(access->code)) {
case BPF_LDX:
rv = &st->rv[access->src_reg];
@@ -310,6 +318,10 @@ __rte_bpf_validate_may_jump(const struct bpf_verifier *verifier,
if (!may_jump_code_is_supported(jump->code))
return -ENOTSUP;
+ if (st->unreachable)
+ /* Set no bits since neither false nor true is possible. */
+ return 0;
+
rd = &st->rv[jump->dst_reg];
dst_set = (rd->v.type == RTE_BPF_ARG_UNDEF) ? value_set_full :
value_set_from_pair(rd->s.min, rd->s.max, rd->u.min, rd->u.max);
@@ -1521,40 +1533,68 @@ static void
eval_jgt_jle(struct bpf_reg_val *trd, struct bpf_reg_val *trs,
struct bpf_reg_val *frd, struct bpf_reg_val *frs)
{
- frd->u.max = RTE_MIN(frd->u.max, frs->u.max);
- frs->u.min = RTE_MAX(frs->u.min, frd->u.min);
- trd->u.min = RTE_MAX(trd->u.min, trs->u.min + 1);
- trs->u.max = RTE_MIN(trs->u.max, trd->u.max - 1);
+ if (frd->u.min <= frs->u.max) {
+ frd->u.max = RTE_MIN(frd->u.max, frs->u.max);
+ frs->u.min = RTE_MAX(frs->u.min, frd->u.min);
+ } else
+ frd->v.type = frs->v.type = BPF_ARG_UNINHABITED;
+
+ if (trs->u.min < trd->u.max) {
+ trd->u.min = RTE_MAX(trd->u.min, trs->u.min + 1);
+ trs->u.max = RTE_MIN(trs->u.max, trd->u.max - 1);
+ } else
+ trd->v.type = trs->v.type = BPF_ARG_UNINHABITED;
}
static void
eval_jlt_jge(struct bpf_reg_val *trd, struct bpf_reg_val *trs,
struct bpf_reg_val *frd, struct bpf_reg_val *frs)
{
- frd->u.min = RTE_MAX(frd->u.min, frs->u.min);
- frs->u.max = RTE_MIN(frs->u.max, frd->u.max);
- trd->u.max = RTE_MIN(trd->u.max, trs->u.max - 1);
- trs->u.min = RTE_MAX(trs->u.min, trd->u.min + 1);
+ if (frs->u.min <= frd->u.max) {
+ frd->u.min = RTE_MAX(frd->u.min, frs->u.min);
+ frs->u.max = RTE_MIN(frs->u.max, frd->u.max);
+ } else
+ frd->v.type = frs->v.type = BPF_ARG_UNINHABITED;
+
+ if (trd->u.min < trs->u.max) {
+ trd->u.max = RTE_MIN(trd->u.max, trs->u.max - 1);
+ trs->u.min = RTE_MAX(trs->u.min, trd->u.min + 1);
+ } else
+ trd->v.type = trs->v.type = BPF_ARG_UNINHABITED;
}
static void
eval_jsgt_jsle(struct bpf_reg_val *trd, struct bpf_reg_val *trs,
struct bpf_reg_val *frd, struct bpf_reg_val *frs)
{
- frd->s.max = RTE_MIN(frd->s.max, frs->s.max);
- frs->s.min = RTE_MAX(frs->s.min, frd->s.min);
- trd->s.min = RTE_MAX(trd->s.min, trs->s.min + 1);
- trs->s.max = RTE_MIN(trs->s.max, trd->s.max - 1);
+ if (frd->s.min <= frs->s.max) {
+ frd->s.max = RTE_MIN(frd->s.max, frs->s.max);
+ frs->s.min = RTE_MAX(frs->s.min, frd->s.min);
+ } else
+ frd->v.type = frs->v.type = BPF_ARG_UNINHABITED;
+
+ if (trs->s.min < trd->s.max) {
+ trd->s.min = RTE_MAX(trd->s.min, trs->s.min + 1);
+ trs->s.max = RTE_MIN(trs->s.max, trd->s.max - 1);
+ } else
+ trd->v.type = trs->v.type = BPF_ARG_UNINHABITED;
}
static void
eval_jslt_jsge(struct bpf_reg_val *trd, struct bpf_reg_val *trs,
struct bpf_reg_val *frd, struct bpf_reg_val *frs)
{
- frd->s.min = RTE_MAX(frd->s.min, frs->s.min);
- frs->s.max = RTE_MIN(frs->s.max, frd->s.max);
- trd->s.max = RTE_MIN(trd->s.max, trs->s.max - 1);
- trs->s.min = RTE_MAX(trs->s.min, trd->s.min + 1);
+ if (frs->s.min <= frd->s.max) {
+ frd->s.min = RTE_MAX(frd->s.min, frs->s.min);
+ frs->s.max = RTE_MIN(frs->s.max, frd->s.max);
+ } else
+ frd->v.type = frs->v.type = BPF_ARG_UNINHABITED;
+
+ if (trd->s.min < trs->s.max) {
+ trd->s.max = RTE_MIN(trd->s.max, trs->s.max - 1);
+ trs->s.min = RTE_MAX(trs->s.min, trd->s.min + 1);
+ } else
+ trd->v.type = trs->v.type = BPF_ARG_UNINHABITED;
}
static const char *
@@ -1609,6 +1649,14 @@ eval_jcc(struct bpf_verifier *bvf, const struct ebpf_insn *ins)
else if (op == EBPF_JSGE)
eval_jslt_jsge(frd, frs, trd, trs);
+ if (trd->v.type == BPF_ARG_UNINHABITED ||
+ trs->v.type == BPF_ARG_UNINHABITED)
+ tst->unreachable = true;
+
+ if (frd->v.type == BPF_ARG_UNINHABITED ||
+ frs->v.type == BPF_ARG_UNINHABITED)
+ fst->unreachable = true;
+
return NULL;
}
@@ -2349,7 +2397,7 @@ set_edge_type(struct bpf_verifier *bvf, struct inst_node *node,
* Depth-First Search (DFS) through previously constructed
* Control Flow Graph (CFG).
* Information collected at this path would be used later
- * to determine is there any loops, and/or unreachable instructions.
+ * to determine is there any loops, and/or statically unreachable instructions.
* PREREQUISITE: there is at least one node.
*/
static void
@@ -2397,7 +2445,7 @@ dfs(struct bpf_verifier *bvf)
}
/*
- * report unreachable instructions.
+ * report statically unreachable instructions.
*/
static void
log_unreachable(const struct bpf_verifier *bvf)
@@ -2970,13 +3018,21 @@ evaluate(struct bpf_verifier *bvf)
stats.nb_restore++;
}
+ if (bvf->evst->unreachable) {
+ rc = __rte_bpf_validate_debug_evaluate_step(
+ debug, get_node_idx(bvf, next),
+ RTE_BPF_VALIDATE_DEBUG_EVENT_BRANCH_UNREACHABLE);
+ if (rc < 0)
+ break;
+
+ next = NULL;
/*
* for jcc targets: check did we already evaluated
* that path and can it's evaluation be skipped that
* time.
*/
- if (node->nb_edge > 1 && prune_eval_state(bvf, node,
- next) == 0) {
+ } else if (node->nb_edge > 1 &&
+ prune_eval_state(bvf, node, next) == 0) {
rc = __rte_bpf_validate_debug_evaluate_step(
debug, get_node_idx(bvf, next),
RTE_BPF_VALIDATE_DEBUG_EVENT_BRANCH_PRUNE);
diff --git a/lib/bpf/rte_bpf_validate_debug.h b/lib/bpf/rte_bpf_validate_debug.h
index 2e8275625d8e..edf023d614ee 100644
--- a/lib/bpf/rte_bpf_validate_debug.h
+++ b/lib/bpf/rte_bpf_validate_debug.h
@@ -47,6 +47,8 @@ enum rte_bpf_validate_debug_event {
RTE_BPF_VALIDATE_DEBUG_EVENT_BRANCH_ENTER,
/* Pruning branch as verified earlier. */
RTE_BPF_VALIDATE_DEBUG_EVENT_BRANCH_PRUNE,
+ /* Pruning branch as dynamically unreachable. */
+ RTE_BPF_VALIDATE_DEBUG_EVENT_BRANCH_UNREACHABLE,
/* End of branch verification, after the last verified instruction. */
RTE_BPF_VALIDATE_DEBUG_EVENT_BRANCH_RETURN,
/* Number of valid event values. */
--
2.43.0
^ permalink raw reply related [flat|nested] 28+ messages in thread
* [PATCH 18/25] bpf/validate: fix BPF_AND min calculations
2026-05-06 17:38 [PATCH 00/25] bpf: test and fix issues in verifier Marat Khalili
` (16 preceding siblings ...)
2026-05-06 17:38 ` [PATCH 17/25] bpf/validate: fix BPF_JMP empty range handling Marat Khalili
@ 2026-05-06 17:38 ` Marat Khalili
2026-05-06 17:38 ` [PATCH 19/25] bpf/validate: fix BPF_LSH shift-out-of-bounds UB Marat Khalili
` (7 subsequent siblings)
25 siblings, 0 replies; 28+ messages in thread
From: Marat Khalili @ 2026-05-06 17:38 UTC (permalink / raw)
To: Konstantin Ananyev; +Cc: dev, stable
Function `eval_and` calculated both the signed (if positive) and the
unsigned minimum as the bitwise AND of the corresponding minimums, which
is incorrect since values within the ranges can have zeroes in bits
where the minimums don't.
E.g. consider the following program with the current validation code:
Tested program:
0: mov r0, #0x0
1: ldxdw r2, [r1 + 0]
2: jlt r2, #0x6, L8
3: jgt r2, #0x8, L8
4: jslt r2, #0x6, L8
5: jsgt r2, #0x8, L8
6: and r2, #0x5 ; tested instruction
7: mov r0, #0x1
8: exit
Pre-state:
r2: 6..8
Post-state:
r2: 4..7
After the tested instruction the validator considers r2 to be greater
than or equal to 4; however, if 8 was loaded at step 1, the result can
be zero (0x8 & 0x5 == 0).
Use zero as the new safe lower bound for both the signed (if positive)
and the unsigned minimum; add a test.
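A standalone brute-force check of the new bound for the example above
(an illustration, not part of the patch):

    #include <assert.h>
    #include <stdint.h>

    /* For r2 in 6..8, r2 & 5 yields 4, 5 and 0 respectively. */
    int main(void)
    {
        uint64_t min = UINT64_MAX;
        for (uint64_t v = 6; v <= 8; v++)
            min = (v & 5) < min ? (v & 5) : min;
        assert(min == 0); /* 8 & 5 == 0, so 0 is the only safe bound */
        return 0;
    }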
Fixes: 8021917293d0 ("bpf: add extra validation for input BPF program")
Cc: stable@dpdk.org
Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
app/test/test_bpf_validate.c | 17 +++++++++++++++++
lib/bpf/bpf_validate.c | 4 ++--
2 files changed, 19 insertions(+), 2 deletions(-)
diff --git a/app/test/test_bpf_validate.c b/app/test/test_bpf_validate.c
index 4b06918c5cea..646313cdacf2 100644
--- a/app/test/test_bpf_validate.c
+++ b/app/test/test_bpf_validate.c
@@ -1384,6 +1384,23 @@ test_alu64_add_x_scalar_scalar(void)
REGISTER_FAST_TEST(bpf_validate_alu64_add_x_scalar_scalar_autotest, NOHUGE_OK, ASAN_OK,
test_alu64_add_x_scalar_scalar);
+/* 64-bit bitwise AND between a scalar range and immediate. */
+static int
+test_alu64_and_k(void)
+{
+ return verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (EBPF_ALU64 | BPF_AND | BPF_K),
+ .imm = 5,
+ },
+ .pre.dst = make_signed_domain(6, 8),
+ .post.dst = make_signed_domain(0, 7),
+ });
+}
+
+REGISTER_FAST_TEST(bpf_validate_alu64_and_k_autotest, NOHUGE_OK, ASAN_OK,
+ test_alu64_and_k);
+
/* 64-bit division and modulo of UINT64_MAX*2/3. */
static int
test_alu64_div_mod_big_constant(void)
diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c
index fbae70df924e..4dbf3a3ef892 100644
--- a/lib/bpf/bpf_validate.c
+++ b/lib/bpf/bpf_validate.c
@@ -848,7 +848,7 @@ eval_and(struct bpf_reg_val *rd, const struct bpf_reg_val *rs, size_t opsz,
rd->u.max &= rs->u.max;
} else {
rd->u.max = eval_uand_max(rd->u.max, rs->u.max, opsz);
- rd->u.min &= rs->u.min;
+ rd->u.min = 0;
}
/* both operands are constants */
@@ -859,7 +859,7 @@ eval_and(struct bpf_reg_val *rd, const struct bpf_reg_val *rs, size_t opsz,
} else if (rd->s.min >= 0 || rs->s.min >= 0) {
rd->s.max = eval_uand_max(rd->s.max & (msk >> 1),
rs->s.max & (msk >> 1), opsz);
- rd->s.min &= rs->s.min;
+ rd->s.min = 0;
} else
eval_smax_bound(rd, msk);
}
--
2.43.0
^ permalink raw reply related [flat|nested] 28+ messages in thread
* [PATCH 19/25] bpf/validate: fix BPF_LSH shift-out-of-bounds UB
2026-05-06 17:38 [PATCH 00/25] bpf: test and fix issues in verifier Marat Khalili
` (17 preceding siblings ...)
2026-05-06 17:38 ` [PATCH 18/25] bpf/validate: fix BPF_AND min calculations Marat Khalili
@ 2026-05-06 17:38 ` Marat Khalili
2026-05-06 17:38 ` [PATCH 20/25] bpf/validate: fix BPF_OR min calculations Marat Khalili
` (6 subsequent siblings)
25 siblings, 0 replies; 28+ messages in thread
From: Marat Khalili @ 2026-05-06 17:38 UTC (permalink / raw)
To: Konstantin Ananyev; +Cc: dev, stable
Function `eval_lsh`, when validating a left shift by 63, invoked the
macro `RTE_LEN2MASK(0, int64_t)`, which triggered shift-out-of-bounds
undefined behaviour.
E.g. consider the following program with the current validation code:
Tested program:
0: mov r0, #0x0
1: ldxdw r2, [r1 + 0]
2: jlt r2, #0x3, L8
3: jgt r2, #0x5, L8
4: jslt r2, #0x3, L8
5: jsgt r2, #0x5, L8
6: lsh r2, #0x3f ; tested instruction
7: mov r0, #0x1
8: exit
Pre-state:
r2: 3..5
Post-state:
r2: 0..UINT64_MAX
With sanitizer the following diagnostic is generated:
lib/bpf/bpf_validate.c:785:4: runtime error: shift exponent 64 is
too large for 64-bit type 'long unsigned int'
#0 0x00000274d5e0 in eval_lsh lib/bpf/bpf_validate.c:785
#1 0x00000275a2ea in eval_alu lib/bpf/bpf_validate.c:1310
#2 0x00000276ce3d in evaluate lib/bpf/bpf_validate.c:3284
Add a guard for this case and a test.
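For context, roughly why `ln == 0` is the problem (paraphrasing the
macro rather than quoting its exact definition from rte_common.h):

    /* RTE_LEN2MASK(ln, tp) is approximately
     *     (tp)((uint64_t)-1 >> (sizeof(uint64_t) * CHAR_BIT - (ln)))
     * so ln == 0 -- i.e. rs->u.max == opsz - 1 in eval_lsh -- asks for
     * a right shift by 64, itself undefined behaviour; the fix guards
     * that case and uses 0 as the comparison bound instead. */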
Fixes: 8021917293d0 ("bpf: add extra validation for input BPF program")
Cc: stable@dpdk.org
Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
app/test/test_bpf_validate.c | 17 +++++++++++++++++
lib/bpf/bpf_validate.c | 3 ++-
2 files changed, 19 insertions(+), 1 deletion(-)
diff --git a/app/test/test_bpf_validate.c b/app/test/test_bpf_validate.c
index 646313cdacf2..64047af44e4a 100644
--- a/app/test/test_bpf_validate.c
+++ b/app/test/test_bpf_validate.c
@@ -1536,6 +1536,23 @@ test_alu64_div_mod_overflow(void)
REGISTER_FAST_TEST(bpf_validate_alu64_div_mod_overflow_autotest, NOHUGE_OK, ASAN_OK,
test_alu64_div_mod_overflow);
+/* 64-bit left shift by 63. */
+static int
+test_alu64_lsh_63(void)
+{
+ return verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (EBPF_ALU64 | BPF_LSH | BPF_K),
+ .imm = 63,
+ },
+ .pre.dst = make_signed_domain(3, 5),
+ .post.dst = unknown,
+ });
+}
+
+REGISTER_FAST_TEST(bpf_validate_alu64_lsh_63_autotest, NOHUGE_OK, ASAN_OK,
+ test_alu64_lsh_63);
+
/* 64-bit multiplication of constant and immediate with overflow. */
static int
test_alu64_mul_k_overflow(void)
diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c
index 4dbf3a3ef892..2c61e5d96a5f 100644
--- a/lib/bpf/bpf_validate.c
+++ b/lib/bpf/bpf_validate.c
@@ -746,7 +746,8 @@ eval_lsh(struct bpf_reg_val *rd, const struct bpf_reg_val *rs, size_t opsz,
/* check that dreg values are and would remain always positive */
if ((uint64_t)rd->s.min >> (opsz - 1) != 0 || rd->s.max >=
- RTE_LEN2MASK(opsz - rs->u.max - 1, int64_t))
+ (rs->u.max == opsz - 1 ? 0 :
+ RTE_LEN2MASK(opsz - rs->u.max - 1, int64_t)))
eval_smax_bound(rd, msk);
else {
rd->s.max <<= rs->u.max;
--
2.43.0
^ permalink raw reply related [flat|nested] 28+ messages in thread
* [PATCH 20/25] bpf/validate: fix BPF_OR min calculations
2026-05-06 17:38 [PATCH 00/25] bpf: test and fix issues in verifier Marat Khalili
` (18 preceding siblings ...)
2026-05-06 17:38 ` [PATCH 19/25] bpf/validate: fix BPF_LSH shift-out-of-bounds UB Marat Khalili
@ 2026-05-06 17:38 ` Marat Khalili
2026-05-06 17:38 ` [PATCH 21/25] bpf/validate: fix BPF_SUB signed max zero case Marat Khalili
` (5 subsequent siblings)
25 siblings, 0 replies; 28+ messages in thread
From: Marat Khalili @ 2026-05-06 17:38 UTC (permalink / raw)
To: Konstantin Ananyev; +Cc: dev, stable
This commit fixes two different problems in the signed and unsigned
minimum calculations within `eval_or`. Passing the tests requires both
problems to be fixed, which is why the changes are squashed into one
commit.
1) Function `eval_or` calculated the result's signed minimum as the
bitwise OR of the corresponding minimums as long as either of them was
non-negative, which is incorrect since values within the ranges can have
zeroes in bits where the minimums don't, including the sign bit.
E.g. consider the following program with the current validation code:
Tested program:
0: mov r0, #0x0
1: ldxdw r2, [r1 + 0]
2: jlt r2, #0x5, L8
3: jgt r2, #0x6, L8
4: jslt r2, #0x5, L8
5: jsgt r2, #0x6, L8
6: or r2, #0xfffffffe ; tested instruction
7: mov r0, #0x1
8: exit
Pre-state:
r2: 5..6
Post-state:
r2: -1
After the tested instruction the validator considers r2 to always equal
-1; however, if 6 was loaded at step 1, the result can be -2:
0x6 | 0xfffffffffffffffe == 0xfffffffffffffffe == -2
Set the signed range to full if either operand can be negative;
otherwise use the maximum of the two minimums as the new signed minimum,
following the observation that a bitwise OR of non-negative values
cannot be smaller than either operand. Add a test.
2) Function `eval_or` calculated the result's unsigned minimum as the
bitwise OR of the corresponding minimums, which is incorrect since
values within the ranges can have zeroes in bits where the minimums
don't.
E.g. consider the following program with the current validation code:
Tested program:
0: mov r0, #0x0
1: ldxdw r2, [r1 + 0]
2: jlt r2, #0x5, L8
3: jgt r2, #0x6, L8
4: jslt r2, #0x5, L8
5: jsgt r2, #0x6, L8
6: or r2, #0x2 ; tested instruction
7: mov r0, #0x1
8: exit
Pre-state:
r2: 5..6
Post-state:
r2: 7
After the tested instruction the validator considers r2 to always equal
7; however, if 6 was loaded at step 1, the result can be 6:
0x6 | 0x2 == 0x6
Use the maximum of the two minimums as the new unsigned minimum, again
following the observation that the result of a bitwise OR cannot be
smaller than either operand. Add a test.
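A standalone brute-force check of the new unsigned bound for the second
example above (an illustration, not part of the patch):

    #include <assert.h>
    #include <stdint.h>

    /* For r2 in 5..6, r2 | 2 yields 7 and 6 respectively. */
    int main(void)
    {
        uint64_t min = UINT64_MAX;
        for (uint64_t v = 5; v <= 6; v++)
            min = (v | 2) < min ? (v | 2) : min;
        assert(min == 6);             /* true minimum of the results */
        assert(min >= 5 && min >= 2); /* max of minimums is a safe bound */
        return 0;
    }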
Fixes: 8021917293d0 ("bpf: add extra validation for input BPF program")
Cc: stable@dpdk.org
Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
app/test/test_bpf_validate.c | 34 ++++++++++++++++++++++++++++++++++
lib/bpf/bpf_validate.c | 6 +++---
2 files changed, 37 insertions(+), 3 deletions(-)
diff --git a/app/test/test_bpf_validate.c b/app/test/test_bpf_validate.c
index 64047af44e4a..9d3e48b5f93c 100644
--- a/app/test/test_bpf_validate.c
+++ b/app/test/test_bpf_validate.c
@@ -1713,6 +1713,40 @@ test_alu64_neg_zero_last(void)
REGISTER_FAST_TEST(bpf_validate_alu64_neg_zero_last_autotest, NOHUGE_OK, ASAN_OK,
test_alu64_neg_zero_last);
+/* 64-bit bitwise OR between a positive scalar range and negative immediate. */
+static int
+test_alu64_or_k_negative(void)
+{
+ return verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (EBPF_ALU64 | BPF_OR | BPF_K),
+ .imm = -2,
+ },
+ .pre.dst = make_signed_domain(5, 6),
+ .post.dst = make_signed_domain(-2, -1),
+ });
+}
+
+REGISTER_FAST_TEST(bpf_validate_alu64_or_k_negative_autotest, NOHUGE_OK, ASAN_OK,
+ test_alu64_or_k_negative);
+
+/* 64-bit bitwise OR between a positive scalar range and positive immediate. */
+static int
+test_alu64_or_k_positive(void)
+{
+ return verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (EBPF_ALU64 | BPF_OR | BPF_K),
+ .imm = 2,
+ },
+ .pre.dst = make_signed_domain(5, 6),
+ .post.dst = make_signed_domain(5, 7),
+ });
+}
+
+REGISTER_FAST_TEST(bpf_validate_alu64_or_k_positive_autotest, NOHUGE_OK, ASAN_OK,
+ test_alu64_or_k_positive);
+
/* Jump if greater than immediate. */
static int
test_jmp64_jeq_k(void)
diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c
index 2c61e5d96a5f..d9ee0563c9d3 100644
--- a/lib/bpf/bpf_validate.c
+++ b/lib/bpf/bpf_validate.c
@@ -875,7 +875,7 @@ eval_or(struct bpf_reg_val *rd, const struct bpf_reg_val *rs, size_t opsz,
rd->u.max |= rs->u.max;
} else {
rd->u.max = eval_uor_max(rd->u.max, rs->u.max, opsz);
- rd->u.min |= rs->u.min;
+ rd->u.min = RTE_MAX(rd->u.min, rs->u.min);
}
/* both operands are constants */
@@ -884,9 +884,9 @@ eval_or(struct bpf_reg_val *rd, const struct bpf_reg_val *rs, size_t opsz,
rd->s.max |= rs->s.max;
/* both operands are non-negative */
- } else if (rd->s.min >= 0 || rs->s.min >= 0) {
+ } else if (rd->s.min >= 0 && rs->s.min >= 0) {
rd->s.max = eval_uor_max(rd->s.max, rs->s.max, opsz);
- rd->s.min |= rs->s.min;
+ rd->s.min = RTE_MAX(rd->s.min, rs->s.min);
} else
eval_smax_bound(rd, msk);
}
--
2.43.0
^ permalink raw reply related [flat|nested] 28+ messages in thread
* [PATCH 21/25] bpf/validate: fix BPF_SUB signed max zero case
2026-05-06 17:38 [PATCH 00/25] bpf: test and fix issues in verifier Marat Khalili
` (19 preceding siblings ...)
2026-05-06 17:38 ` [PATCH 20/25] bpf/validate: fix BPF_OR min calculations Marat Khalili
@ 2026-05-06 17:38 ` Marat Khalili
2026-05-06 17:38 ` [PATCH 22/25] bpf/validate: fix BPF_XOR signed min calculation Marat Khalili
` (4 subsequent siblings)
25 siblings, 0 replies; 28+ messages in thread
From: Marat Khalili @ 2026-05-06 17:38 UTC (permalink / raw)
To: Konstantin Ananyev; +Cc: dev, stable
Function `eval_sub` used the source register's signed minimum to detect
overflow of the difference's (operation result's) signed minimum, and
the source register's signed maximum to detect overflow of the
difference's signed maximum. However, in the actual formula for the
difference the source register bounds are swapped (correctly, since we
subtract them), so the overflow detection should have swapped them as
well. This caused false negatives in certain cases.
E.g. consider the following program with the current validation code:
Tested program:
0: mov r0, #0x0
1: ldxdw r2, [r1 + 0]
2: jsgt r2, #0x0, L7
3: ldxdw r3, [r1 + 8]
4: jsgt r3, #0x0, L7
5: sub r2, r3 ; tested instruction
6: mov r0, #0x1
7: exit
Pre-state:
r2: INT64_MIN..0
r3: INT64_MIN..0
Post-state:
r2: INT64_MIN
The validator misses the overflow of the signed maximum and considers
the result to always equal INT64_MIN. However, if -1 was loaded at step
1 and -2 was loaded at step 3 it is possible for the difference to
equal 1.
Swap the source register's signed minimum and maximum in the overflow
condition to match the range formula, and add a test.
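The corrected overflow detection can be illustrated with a standalone
sketch (hypothetical variable names; two's-complement wrapping via
unsigned arithmetic is assumed as a stand-in for the validator's
internal computation):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int64_t rd_min = INT64_MIN, rd_max = 0; /* r2 pre-state */
        int64_t rs_min = INT64_MIN, rs_max = 0; /* r3 pre-state */
        /* interval subtraction: min uses rs_max, max uses rs_min */
        int64_t rv_min = (int64_t)((uint64_t)rd_min - (uint64_t)rs_max);
        int64_t rv_max = (int64_t)((uint64_t)rd_max - (uint64_t)rs_min);
        printf("rv = %lld..%lld\n", (long long)rv_min, (long long)rv_max);
        /* subtracting a negative rs_min must raise the maximum; rv_max
         * wrapped to INT64_MIN instead, so test rs_min here, not rs_max */
        if (rs_min < 0 && rv_max < rd_max)
            printf("signed maximum overflowed, widen to full range\n");
        printf("witness: -1 - (-2) = %lld\n", (long long)(-1 - (-2)));
        return 0;
    }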
Fixes: 8021917293d0 ("bpf: add extra validation for input BPF program")
Cc: stable@dpdk.org
Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
app/test/test_bpf_validate.c | 17 +++++++++++++++++
lib/bpf/bpf_validate.c | 4 ++--
2 files changed, 19 insertions(+), 2 deletions(-)
diff --git a/app/test/test_bpf_validate.c b/app/test/test_bpf_validate.c
index 9d3e48b5f93c..44e08062b3ee 100644
--- a/app/test/test_bpf_validate.c
+++ b/app/test/test_bpf_validate.c
@@ -1747,6 +1747,23 @@ test_alu64_or_k_positive(void)
REGISTER_FAST_TEST(bpf_validate_alu64_or_k_positive_autotest, NOHUGE_OK, ASAN_OK,
test_alu64_or_k_positive);
+/* 64-bit difference between two non-positive ranges. */
+static int
+test_alu64_sub_x_src_signed_max_zero(void)
+{
+ return verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (EBPF_ALU64 | BPF_SUB | BPF_X),
+ },
+ .pre.dst = make_signed_domain(INT64_MIN, 0),
+ .pre.src = make_signed_domain(INT64_MIN, 0),
+ .post.dst = unknown,
+ });
+}
+
+REGISTER_FAST_TEST(bpf_validate_alu64_sub_x_src_signed_max_zero_autotest, NOHUGE_OK, ASAN_OK,
+ test_alu64_sub_x_src_signed_max_zero);
+
/* Jump if greater than immediate. */
static int
test_jmp64_jeq_k(void)
diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c
index d9ee0563c9d3..a500ad662c1b 100644
--- a/lib/bpf/bpf_validate.c
+++ b/lib/bpf/bpf_validate.c
@@ -716,9 +716,9 @@ eval_sub(struct bpf_reg_val *rd, const struct bpf_reg_val *rs, uint64_t msk)
eval_umax_bound(&rv, msk);
if ((rd->s.min != rd->s.max || rs->s.min != rs->s.max) &&
- (((rs->s.min < 0 && rv.s.min < rd->s.min) ||
+ (((rs->s.max < 0 && rv.s.min < rd->s.min) ||
rv.s.min > rd->s.min) ||
- ((rs->s.max < 0 && rv.s.max < rd->s.max) ||
+ ((rs->s.min < 0 && rv.s.max < rd->s.max) ||
rv.s.max > rd->s.max)))
eval_smax_bound(&rv, msk);
--
2.43.0
^ permalink raw reply related [flat|nested] 28+ messages in thread
* [PATCH 22/25] bpf/validate: fix BPF_XOR signed min calculation
2026-05-06 17:38 [PATCH 00/25] bpf: test and fix issues in verifier Marat Khalili
` (20 preceding siblings ...)
2026-05-06 17:38 ` [PATCH 21/25] bpf/validate: fix BPF_SUB signed max zero case Marat Khalili
@ 2026-05-06 17:38 ` Marat Khalili
2026-05-06 17:38 ` [PATCH 23/25] bpf/validate: prevent overflow when building graph Marat Khalili
` (3 subsequent siblings)
25 siblings, 0 replies; 28+ messages in thread
From: Marat Khalili @ 2026-05-06 17:38 UTC (permalink / raw)
To: Konstantin Ananyev; +Cc: dev, stable
Function `eval_xor` calculated the signed minimum using an essentially
unsigned algorithm as long as either of the operands had a non-negative
range, which is incorrect since it ignores negative numbers that may
have the sign bit or any other bits set.
E.g. consider the following program with the current validation code:
Tested program:
0: mov r0, #0x0
1: ldxdw r2, [r1 + 0]
2: jsgt r2, #0x0, L5
3: xor r2, #0x0 ; tested instruction
4: mov r0, #0x1
5: exit
Pre-state:
r2: INT64_MIN..0
Post-state:
r2: 0
After the tested instruction the validator considers r2 to always equal
0; however, if -1 was loaded at step 1 it is possible for it to be -1.
Set the signed range to full if either of the operands can be negative;
otherwise (if both operands are non-negative) use the same algorithm as
for unsigned numbers. Add a test.
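The counterexample is one line of C (standalone sketch):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        int64_t r2 = -1;        /* allowed by the pre-state INT64_MIN..0 */
        assert((r2 ^ 0) == -1); /* result stays -1, not 0 */
        return 0;
    }

Only when both operands are non-negative is the result's sign bit known
to be clear, making 0 a sound signed minimum in that case alone.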
Fixes: 8021917293d0 ("bpf: add extra validation for input BPF program")
Cc: stable@dpdk.org
Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
app/test/test_bpf_validate.c | 17 +++++++++++++++++
lib/bpf/bpf_validate.c | 2 +-
2 files changed, 18 insertions(+), 1 deletion(-)
diff --git a/app/test/test_bpf_validate.c b/app/test/test_bpf_validate.c
index 44e08062b3ee..b08c9ae33b6a 100644
--- a/app/test/test_bpf_validate.c
+++ b/app/test/test_bpf_validate.c
@@ -1764,6 +1764,23 @@ test_alu64_sub_x_src_signed_max_zero(void)
REGISTER_FAST_TEST(bpf_validate_alu64_sub_x_src_signed_max_zero_autotest, NOHUGE_OK, ASAN_OK,
test_alu64_sub_x_src_signed_max_zero);
+/* 64-bit bitwise XOR between a non-positive scalar range and zero immediate. */
+static int
+test_alu64_xor_k_negative(void)
+{
+ return verify_instruction((struct verify_instruction_param){
+ .tested_instruction = {
+ .code = (EBPF_ALU64 | BPF_XOR | BPF_K),
+ .imm = 0,
+ },
+ .pre.dst = make_signed_domain(INT64_MIN, 0),
+ .post.dst = unknown,
+ });
+}
+
+REGISTER_FAST_TEST(bpf_validate_alu64_xor_k_negative_autotest, NOHUGE_OK, ASAN_OK,
+ test_alu64_xor_k_negative);
+
/* Jump if greater than immediate. */
static int
test_jmp64_jeq_k(void)
diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c
index a500ad662c1b..35b7d4ad83f6 100644
--- a/lib/bpf/bpf_validate.c
+++ b/lib/bpf/bpf_validate.c
@@ -910,7 +910,7 @@ eval_xor(struct bpf_reg_val *rd, const struct bpf_reg_val *rs, size_t opsz,
rd->s.max ^= rs->s.max;
/* both operands are non-negative */
- } else if (rd->s.min >= 0 || rs->s.min >= 0) {
+ } else if (rd->s.min >= 0 && rs->s.min >= 0) {
rd->s.max = eval_uor_max(rd->s.max, rs->s.max, opsz);
rd->s.min = 0;
} else
--
2.43.0
^ permalink raw reply related [flat|nested] 28+ messages in thread
* [PATCH 23/25] bpf/validate: prevent overflow when building graph
2026-05-06 17:38 [PATCH 00/25] bpf: test and fix issues in verifier Marat Khalili
` (21 preceding siblings ...)
2026-05-06 17:38 ` [PATCH 22/25] bpf/validate: fix BPF_XOR signed min calculation Marat Khalili
@ 2026-05-06 17:38 ` Marat Khalili
2026-05-06 17:38 ` [PATCH 24/25] doc: add release notes for BPF validation fixes Marat Khalili
` (2 subsequent siblings)
25 siblings, 0 replies; 28+ messages in thread
From: Marat Khalili @ 2026-05-06 17:38 UTC (permalink / raw)
To: Konstantin Ananyev; +Cc: dev, stable
For a malicious or corrupt BPF program whose number of conditional jumps
exceeds a third of UINT32_MAX, function `evst_pool_init` could cause
arithmetic and buffer overflows when working with the program graph.
Fix the issue by limiting the maximum number of conditional jumps
supported to UINT32_MAX / 4, which is still more than 1 billion.
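To illustrate the failure mode (hypothetical sizing arithmetic; the
real computation lives in `evst_pool_init`):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t nb_jcc_nodes = UINT32_MAX / 4 + 1; /* just over the cap */
        /* a 32-bit expression of the form k * nb + c wraps once nb is
         * large enough; here 4 * nb wraps all the way around to 0 */
        uint32_t n = 4 * nb_jcc_nodes;
        printf("%u\n", n); /* prints 0 instead of ~4.3 billion */
        return 0;
    }

An undersized allocation computed this way is what turns the arithmetic
overflow into a buffer overflow later on.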
Fixes: 8021917293d0 ("bpf: add extra validation for input BPF program")
Cc: stable@dpdk.org
Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
lib/bpf/bpf_validate.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c
index 35b7d4ad83f6..23311a36d14e 100644
--- a/lib/bpf/bpf_validate.c
+++ b/lib/bpf/bpf_validate.c
@@ -2662,6 +2662,10 @@ evst_pool_init(struct bpf_verifier *bvf)
{
uint32_t k, n;
+ if (bvf->nb_jcc_nodes > UINT32_MAX / 4)
+ /* Calculations that follow may overflow. */
+ return -E2BIG;
+
/*
* We need nb_jcc_nodes + 1 for save_cur/restore_cur
* remaining ones will be used for state tracking/pruning.
--
2.43.0
^ permalink raw reply related [flat|nested] 28+ messages in thread
* [PATCH 24/25] doc: add release notes for BPF validation fixes
2026-05-06 17:38 [PATCH 00/25] bpf: test and fix issues in verifier Marat Khalili
` (22 preceding siblings ...)
2026-05-06 17:38 ` [PATCH 23/25] bpf/validate: prevent overflow when building graph Marat Khalili
@ 2026-05-06 17:38 ` Marat Khalili
2026-05-06 17:38 ` [PATCH 25/25] doc: add BPF validate debug to programmer's guide Marat Khalili
2026-05-09 12:36 ` [PATCH 00/25] bpf: test and fix issues in verifier Konstantin Ananyev
25 siblings, 0 replies; 28+ messages in thread
From: Marat Khalili @ 2026-05-06 17:38 UTC (permalink / raw)
Cc: dev
Document the following new features and fixes:
* Added BPF validation debugger API (rte_bpf_validate_debug_*).
* Hardened BPF validator with numerous bug fixes and UB preventions.
Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
doc/guides/rel_notes/release_26_07.rst | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/doc/guides/rel_notes/release_26_07.rst b/doc/guides/rel_notes/release_26_07.rst
index 18810ab81d93..4ef2d354635b 100644
--- a/doc/guides/rel_notes/release_26_07.rst
+++ b/doc/guides/rel_notes/release_26_07.rst
@@ -83,6 +83,22 @@ New Features
``rte_bpf_eth_tx_install`` for installing already loaded BPF programs as
port callbacks (as opposed to loading them directly from ELF files).
+* **Hardened BPF validator.**
+
+ Fixed numerous bugs in the BPF validator's abstract interpretation logic,
+ including incorrect bounds tracking for jumps and arithmetic operations, as
+ well as fixing several instances of undefined behavior (UB) when verifying
+ malicious or corrupt programs.
+
+* **Added BPF validation debugger API.**
+
+ Introduced a new set of APIs (prefixed with ``rte_bpf_validate_debug_``) to
+ introspect the BPF validator. This provides a mechanism to set breakpoints or
+ catchpoints during validation and inspect the verifier's internal state
+(such as tracked register bounds). This API is intended primarily for writing
+comprehensive tests for the validator, but also serves as a foundation for a
+future interactive eBPF validation debugger.
+
Removed Items
-------------
--
2.43.0
^ permalink raw reply related [flat|nested] 28+ messages in thread
* [PATCH 25/25] doc: add BPF validate debug to programmer's guide
2026-05-06 17:38 [PATCH 00/25] bpf: test and fix issues in verifier Marat Khalili
` (23 preceding siblings ...)
2026-05-06 17:38 ` [PATCH 24/25] doc: add release notes for BPF validation fixes Marat Khalili
@ 2026-05-06 17:38 ` Marat Khalili
2026-05-08 17:41 ` Stephen Hemminger
2026-05-09 12:36 ` [PATCH 00/25] bpf: test and fix issues in verifier Konstantin Ananyev
25 siblings, 1 reply; 28+ messages in thread
From: Marat Khalili @ 2026-05-06 17:38 UTC (permalink / raw)
To: Konstantin Ananyev; +Cc: dev
Document the new gdb-like validation debugger API, outlining how it can
be used to set breakpoints and inspect register states during
validation.
Highlight its primary use case: writing robust tests for the eBPF
verifier using the harness in app/test/test_bpf_validate.c.
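For instance, a harness-based test has the following shape (mirroring
the tests added earlier in this series):

    /* expect the verifier to derive 5..7 for `or r2, #0x2` given 5..6 */
    return verify_instruction((struct verify_instruction_param){
        .tested_instruction = {
            .code = (EBPF_ALU64 | BPF_OR | BPF_K),
            .imm = 2,
        },
        .pre.dst = make_signed_domain(5, 6),
        .post.dst = make_signed_domain(5, 7),
    });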
Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
doc/guides/prog_guide/bpf_lib.rst | 31 +++++++++++++++++++++++++++++++
1 file changed, 31 insertions(+)
diff --git a/doc/guides/prog_guide/bpf_lib.rst b/doc/guides/prog_guide/bpf_lib.rst
index df3782508829..08ea1876875a 100644
--- a/doc/guides/prog_guide/bpf_lib.rst
+++ b/doc/guides/prog_guide/bpf_lib.rst
@@ -118,6 +118,37 @@ For example, ``(BPF_IND | BPF_W | BPF_LD)`` means:
and ``R1-R5`` were scratched.
+Validation Debugger
+-------------------
+
+The DPDK BPF library includes a validation debugger API designed primarily for
+writing comprehensive unit tests for the eBPF verifier. It allows developers
+to introspect the abstract interpretation process step-by-step to guarantee
+that the verifier correctly models the semantics of eBPF instructions.
+
+The debugger operates using a gdb-like approach:
+
+1. **Initialization:** Create a debug session using
+ ``rte_bpf_validate_debug_create()`` and pass it to the loader via the
+ ``debug`` field in ``struct rte_bpf_prm_ex``.
+2. **Breakpoints and Catchpoints:** Before loading, use
+ ``rte_bpf_validate_debug_break()`` or ``rte_bpf_validate_debug_catch()``
+ to register callback functions that trigger at specific instruction indices
+ (program counters) or upon specific validation events.
+3. **State Introspection:** Within the callbacks, the API provides functions
+ like ``rte_bpf_validate_debug_can_access()``,
+ ``rte_bpf_validate_debug_may_jump()``, and various formatting functions
+ to safely inspect the verifier's internal belief about register bounds
+ and memory states at that specific execution point.
+
+When adding a test for a new eBPF instruction or fixing a validator bug,
+developers should use the harness provided in
+``app/test/test_bpf_validate.c``. This harness encapsulates the debugger API,
+allowing them to define the expected abstract domains (signed and unsigned
+intervals) for registers before and after a tested instruction, generating
+the necessary eBPF bytecode and breakpoints automatically.
+
+
Not currently supported eBPF features
-------------------------------------
--
2.43.0
^ permalink raw reply related [flat|nested] 28+ messages in thread
* Re: [PATCH 25/25] doc: add BPF validate debug to programmer's guide
2026-05-06 17:38 ` [PATCH 25/25] doc: add BPF validate debug to programmer's guide Marat Khalili
@ 2026-05-08 17:41 ` Stephen Hemminger
0 siblings, 0 replies; 28+ messages in thread
From: Stephen Hemminger @ 2026-05-08 17:41 UTC (permalink / raw)
To: Marat Khalili; +Cc: Konstantin Ananyev, dev
On Wed, 6 May 2026 18:38:43 +0100
Marat Khalili <marat.khalili@huawei.com> wrote:
> Document the new gdb-like validation debugger API, outlining how it can
> be used to set breakpoints and inspect register states during
> validation.
>
> Highlight its primary use case: writing robust tests for the eBPF
> verifier using the harness in app/test/test_bpf_validate.c.
>
> Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
> ---
Regular AI review didn't dig deep enough on this, so I redid it with a
stronger model and prompt.
Reviewed the series. It cannot be applied as-is for two compounding
reasons; once those are resolved, the individual fixes look correct
and well-tested.
Series-level: the patches reference rte_bpf_prm_ex, enum rte_bpf_origin
(RTE_BPF_ORIGIN_RAW), rte_bpf_load_ex, and rte_bpf_get_jit_ex. None of
these exist in upstream master, and none are introduced by this 25-patch
series — diff context in patch 5 (struct rte_bpf_prm_ex { hunk header)
and patch 2 (rte_bpf_get_jit_ex near the insertion point) confirms the
prerequisite must already be in the base. Please declare the dependency
in a 0/25 cover letter so reviewers can apply on the right base.
Patches 03, 05, 10: Build break. RTE_BPF_LOG_FUNC_LINE is used (replacing
RTE_BPF_LOG_LINE) but never defined — not in bpf_impl.h, not elsewhere in
DPDK, not in any of the 25 patches (verified by grep). Either add the
macro to bpf_impl.h, e.g.
#define RTE_BPF_LOG_FUNC_LINE(lvl, ...) \
RTE_LOG_LINE_PREFIX(lvl, BPF, "%s(): ", \
__func__ RTE_LOG_COMMA __VA_ARGS__)
following the pattern in lib/eal/common/eal_trace.h, or revert the call
sites to RTE_BPF_LOG_LINE(... "%s: ...", __func__, ...).
Patch 05: __rte_bpf_validate_state_is_valid and
__rte_bpf_validate_can_access return int but mix tri-state (true / false
/ -errno) with bool semantics; the caller does `if (rc == false)`
against an int. Works, but consider splitting the bool case from the
tri-state case for clarity.
Patch 08: Test file uses rte_bpf_load_ex, struct rte_bpf_prm_ex, and
RTE_BPF_ORIGIN_RAW (see series-level note). Two minor items in the test
helpers: load_constant / compare_and_jump assign int64_t to .imm
(int32_t) — fits_in_imm32() guards the value but an explicit (int32_t)
cast would document intent; and `value >> 32` on int64_t is
implementation-defined for negative values; a (uint64_t) cast before
the shift would be portable.
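Something like this sketch would address both (assuming load_constant
emits an lddw pair; the field names are guessed from the helpers):

    ins[0].imm = (int32_t)(uint32_t)value;          /* low 32 bits */
    ins[1].imm = (int32_t)((uint64_t)value >> 32);  /* high 32 bits */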
Patch 17: RTE_BPF_VALIDATE_DEBUG_EVENT_BRANCH_UNREACHABLE is inserted in
the middle of the enum, shifting BRANCH_RETURN and the _END sentinel.
Fine for an unreleased experimental API in the same series. Once the API
stabilizes, additions should go at the end before the sentinel.
Patches 01, 02, 04, 06, 07, 09, 11–16, 18, 19: no comments. The bug
descriptions with reproducer programs, expected-vs-actual output, and
UBSan diagnostics are very useful and the fixes are well-targeted.
I did not get to patches 20–25 in this pass.
^ permalink raw reply [flat|nested] 28+ messages in thread
* RE: [PATCH 00/25] bpf: test and fix issues in verifier
2026-05-06 17:38 [PATCH 00/25] bpf: test and fix issues in verifier Marat Khalili
` (24 preceding siblings ...)
2026-05-06 17:38 ` [PATCH 25/25] doc: add BPF validate debug to programmer's guide Marat Khalili
@ 2026-05-09 12:36 ` Konstantin Ananyev
25 siblings, 0 replies; 28+ messages in thread
From: Konstantin Ananyev @ 2026-05-09 12:36 UTC (permalink / raw)
To: Marat Khalili; +Cc: dev@dpdk.org
> This patchset addresses numerous bugs in the BPF verifier's abstract
> interpretation logic and introduces a new validation debugger API to
> enable precise, robust testing of the verifier itself.
>
> While the existing DPDK eBPF verifier is capable of checking basic
> execution graph loops and dead code, the mathematical tracking of
> register bounds (both signed and unsigned) contained flaws resulting in
> false positives and false negatives, undefined behavior, and hardware
> exceptions such as SIGFPE during validation.
>
> To resolve these issues and ensure they do not regress, this patchset
> first introduces the "Validation Debugger API"
> (`rte_bpf_validate_debug_*`). This gdb-like interface allows setting
> breakpoints and catchpoints during the validation process to inspect the
> verifier's internal state.
>
> Using this new API, a comprehensive test harness
> (`app/test/test_bpf_validate.c`) was created to formally check the
> abstract domains of instructions across all their valid branches. The
> remainder of the patchset incrementally fixes the math and bounds logic
> for individual eBPF instructions, using the new tests to prove the
> correctness of the fixes.
>
> This debugger API also lays the foundation for an interactive eBPF
> validation debugger to be introduced in the future.
>
> Depends-on: series-38068 ("bpf: introduce extensible load API")
>
> Marat Khalili (25):
> bpf: format and dump jlt, jle, jslt, and jsle
> bpf: add format instruction function
> bpf/validate: break on error in evaluate
> bpf/validate: expand comments in evaluate cycle
> bpf/validate: introduce debugging interface
> bpf/validate: fix BPF_ADD of pointer to a scalar
> bpf/validate: fix BPF_LDX | EBPF_DW signed range
> test/bpf_validate: add setup and basic tests
> test/bpf_validate: add harness for pointer tests
> bpf/validate: fix EBPF_JSLT | BPF_X evaluation
> bpf/validate: fix BPF_NEG of INT64_MIN and 0
> bpf/validate: fix BPF_DIV and BPF_MOD signed part
> bpf/validate: fix BPF_MUL ranges minimum typo
> bpf/validate: fix BPF_MUL signed overflow UB
> bpf/validate: fix BPF_JGT/EBPF_JSGT no-jump max
> bpf/validate: fix BPF_JMP source range calculation
> bpf/validate: fix BPF_JMP empty range handling
> bpf/validate: fix BPF_AND min calculations
> bpf/validate: fix BPF_LSH shift-out-of-bounds UB
> bpf/validate: fix BPF_OR min calculations
> bpf/validate: fix BPF_SUB signed max zero case
> bpf/validate: fix BPF_XOR signed min calculation
> bpf/validate: prevent overflow when building graph
> doc: add release notes for BPF validation fixes
> doc: add BPF validate debug to programmer's guide
>
> app/test/meson.build | 1 +
> app/test/test_bpf.c | 99 ++
> app/test/test_bpf_validate.c | 2271 ++++++++++++++++++++++++
> doc/guides/prog_guide/bpf_lib.rst | 31 +
> doc/guides/rel_notes/release_26_07.rst | 16 +
> lib/bpf/bpf_dump.c | 292 +--
> lib/bpf/bpf_validate.c | 730 +++++++-
> lib/bpf/bpf_validate.h | 54 +
> lib/bpf/bpf_validate_debug.c | 663 +++++++
> lib/bpf/bpf_validate_debug.h | 86 +
> lib/bpf/bpf_value_set.c | 403 +++++
> lib/bpf/bpf_value_set.h | 126 ++
> lib/bpf/meson.build | 9 +-
> lib/bpf/rte_bpf.h | 55 +
> lib/bpf/rte_bpf_validate_debug.h | 377 ++++
> 15 files changed, 5016 insertions(+), 197 deletions(-)
> create mode 100644 app/test/test_bpf_validate.c
> create mode 100644 lib/bpf/bpf_validate.h
> create mode 100644 lib/bpf/bpf_validate_debug.c
> create mode 100644 lib/bpf/bpf_validate_debug.h
> create mode 100644 lib/bpf/bpf_value_set.c
> create mode 100644 lib/bpf/bpf_value_set.h
> create mode 100644 lib/bpf/rte_bpf_validate_debug.h
>
> --
I already reviewed these changes offline, as part of our
internal patch acceptance process.
Current version LGTM and addresses all comments I had.
Series-Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
> 2.43.0
^ permalink raw reply [flat|nested] 28+ messages in thread