* [PATCH 0/3] bpf: simple tests and fixes
@ 2025-11-10 15:30 Marat Khalili
2025-11-10 15:30 ` [PATCH 1/3] bpf: fix signed shift overflows in ARM JIT Marat Khalili
` (3 more replies)
0 siblings, 4 replies; 51+ messages in thread
From: Marat Khalili @ 2025-11-10 15:30 UTC (permalink / raw)
To: Konstantin Ananyev; +Cc: Stephen Hemminger, dev
Add simple unit-style tests for rte_bpf_load, and fix some minor bugs
discovered in the process.
Create a new source file for the new tests, since app/test/test_bpf.c was
growing rather large and the tests there are functional rather than unit
tests.
Marat Khalili (3):
bpf: fix signed shift overflows in ARM JIT
bpf: disallow empty program
bpf: make add/subtract one program validate
app/test/meson.build | 1 +
app/test/test_bpf_simple.c | 189 +++++++++++++++++++++++++++++++++++++
lib/bpf/bpf_jit_arm64.c | 188 ++++++++++++++++++++----------------
lib/bpf/bpf_load.c | 2 +-
lib/bpf/bpf_validate.c | 28 ++++--
5 files changed, 318 insertions(+), 90 deletions(-)
create mode 100644 app/test/test_bpf_simple.c
--
2.43.0
^ permalink raw reply [flat|nested] 51+ messages in thread* [PATCH 1/3] bpf: fix signed shift overflows in ARM JIT 2025-11-10 15:30 [PATCH 0/3] bpf: simple tests and fixes Marat Khalili @ 2025-11-10 15:30 ` Marat Khalili 2025-11-11 6:25 ` Jerin Jacob 2025-11-10 15:30 ` [PATCH 2/3] bpf: disallow empty program Marat Khalili ` (2 subsequent siblings) 3 siblings, 1 reply; 51+ messages in thread From: Marat Khalili @ 2025-11-10 15:30 UTC (permalink / raw) To: Konstantin Ananyev; +Cc: Stephen Hemminger, dev Left shifts of integer literals and bool values overwriting the sign bit were used multiple times in bpf_jit_arm64.c. E.g.: insn = (!!is64) << 31; where is64 has type bool (double bang is a no-op here). The operand of left shift was promoted to type int, which when 32-bit wide cannot represent the result. Similarly literal integers have int type by default. Sanitizer produced the following diagnostic during runtime (for various lines): lib/bpf/bpf_jit_arm64.c:241:18: runtime error: left shift of 1 by 31 places cannot be represented in type 'int' To fix the issue use RTE_BIT32 and similar macros instead. Signed-off-by: Marat Khalili <marat.khalili@huawei.com> --- lib/bpf/bpf_jit_arm64.c | 188 +++++++++++++++++++++++----------------- 1 file changed, 107 insertions(+), 81 deletions(-) diff --git a/lib/bpf/bpf_jit_arm64.c b/lib/bpf/bpf_jit_arm64.c index 96b8cd2e03..5f43db0170 100644 --- a/lib/bpf/bpf_jit_arm64.c +++ b/lib/bpf/bpf_jit_arm64.c @@ -28,7 +28,33 @@ #define A64_ZR 31 #define check_imm(n, val) (((val) >= 0) ? !!((val) >> (n)) : !!((~val) >> (n))) -#define mask_imm(n, val) ((val) & ((1 << (n)) - 1)) +#define mask_imm(n, val) ((val) & (RTE_BIT32(n) - 1)) + +/** + * Get the uint32_t shifted value. + * + * Works similarly to RTE_SHIFT_VAL32 but accepts non-literal arguments. + * Performs identically to RTE_SHIFT_VAL32 with literal arguments. + * + * @param val + * The value to be shifted, can be non-literal. 
+ * @param nr + * The shift number in range of 0 to (32 - width of val). + */ +#define SHIFT_VAR32(val, nr) ((uint32_t)(val) << (nr)) + +/** + * Get the uint64_t shifted value. + * + * Works similarly to RTE_SHIFT_VAL64 but accepts non-literal arguments. + * Performs identically to RTE_SHIFT_VAL64 with literal arguments. + * + * @param val + * The value to be shifted, can be non-literal. + * @param nr + * The shift number in range of 0 to (64 - width of val). + */ +#define SHIFT_VAR64(val, nr) ((uint64_t)(val) << (nr)) struct ebpf_a64_map { uint32_t off; /* eBPF to arm64 insn offset mapping for jump */ @@ -238,12 +264,12 @@ emit_add_sub_imm(struct a64_jit_ctx *ctx, bool is64, bool sub, uint8_t rd, uint32_t insn, imm; imm = mask_imm(12, imm12); - insn = (!!is64) << 31; - insn |= (!!sub) << 30; + insn = SHIFT_VAR32(is64, 31); + insn |= SHIFT_VAR32(sub, 30); insn |= 0x11000000; insn |= rd; - insn |= rn << 5; - insn |= imm << 10; + insn |= SHIFT_VAR32(rn, 5); + insn |= SHIFT_VAR32(imm, 10); emit_insn(ctx, insn, check_reg(rd) || check_reg(rn) || check_imm(12, imm12)); @@ -279,16 +305,16 @@ emit_ls_pair_64(struct a64_jit_ctx *ctx, uint8_t rt, uint8_t rt2, uint8_t rn, { uint32_t insn; - insn = (!!load) << 22; - insn |= (!!pre_index) << 24; + insn = SHIFT_VAR32(load, 22); + insn |= SHIFT_VAR32(pre_index, 24); insn |= 0xa8800000; insn |= rt; - insn |= rn << 5; - insn |= rt2 << 10; + insn |= SHIFT_VAR32(rn, 5); + insn |= SHIFT_VAR32(rt2, 10); if (push) - insn |= 0x7e << 15; /* 0x7e means -2 with imm7 */ + insn |= SHIFT_VAR32(0x7e, 15); /* 0x7e means -2 with imm7 */ else - insn |= 0x2 << 15; + insn |= SHIFT_VAR32(0x2, 15); emit_insn(ctx, insn, check_reg(rn) || check_reg(rt) || check_reg(rt2)); @@ -317,11 +343,11 @@ mov_imm(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t type, { uint32_t insn; - insn = (!!is64) << 31; - insn |= type << 29; - insn |= 0x25 << 23; - insn |= (shift/16) << 21; - insn |= imm16 << 5; + insn = SHIFT_VAR32(is64, 31); + insn |= SHIFT_VAR32(type, 
29); + insn |= SHIFT_VAR32(0x25, 23); + insn |= SHIFT_VAR32(shift/16, 21); + insn |= SHIFT_VAR32(imm16, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_mov_hw(is64, shift)); @@ -334,7 +360,7 @@ emit_mov_imm32(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint32_t val) uint16_t lower = val & 0xffff; /* Positive number */ - if ((val & 1UL << 31) == 0) { + if ((val & RTE_BIT32(31)) == 0) { mov_imm(ctx, is64, rd, A64_MOVZ, lower, 0); if (upper) mov_imm(ctx, is64, rd, A64_MOVK, upper, 16); @@ -393,21 +419,21 @@ emit_ls(struct a64_jit_ctx *ctx, uint8_t sz, uint8_t rt, uint8_t rn, uint8_t rm, { uint32_t insn; - insn = 0x1c1 << 21; + insn = SHIFT_VAR32(0x1c1, 21); if (load) - insn |= 1 << 22; + insn |= RTE_BIT32(22); if (sz == BPF_B) - insn |= 0 << 30; + insn |= SHIFT_VAR32(0, 30); else if (sz == BPF_H) - insn |= 1 << 30; + insn |= SHIFT_VAR32(1, 30); else if (sz == BPF_W) - insn |= 2 << 30; + insn |= SHIFT_VAR32(2, 30); else if (sz == EBPF_DW) - insn |= 3 << 30; + insn |= SHIFT_VAR32(3, 30); - insn |= rm << 16; - insn |= 0x1a << 10; /* LSL and S = 0 */ - insn |= rn << 5; + insn |= SHIFT_VAR32(rm, 16); + insn |= SHIFT_VAR32(0x1a, 10); /* LSL and S = 0 */ + insn |= SHIFT_VAR32(rn, 5); insn |= rt; emit_insn(ctx, insn, check_reg(rt) || check_reg(rn) || check_reg(rm) || @@ -436,10 +462,10 @@ emit_add_sub(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t rn, { uint32_t insn; - insn = (!!is64) << 31; - insn |= op << 21; /* shift == 0 */ - insn |= rm << 16; - insn |= rn << 5; + insn = SHIFT_VAR32(is64, 31); + insn |= SHIFT_VAR32(op, 21); /* shift == 0 */ + insn |= SHIFT_VAR32(rm, 16); + insn |= SHIFT_VAR32(rn, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); @@ -468,11 +494,11 @@ emit_mul(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t rm) { uint32_t insn; - insn = (!!is64) << 31; - insn |= 0xd8 << 21; - insn |= rm << 16; - insn |= A64_ZR << 10; - insn |= rd << 5; + insn = SHIFT_VAR32(is64, 31); + insn |= SHIFT_VAR32(0xd8, 21); + 
insn |= SHIFT_VAR32(rm, 16); + insn |= SHIFT_VAR32(A64_ZR, 10); + insn |= SHIFT_VAR32(rd, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); @@ -489,11 +515,11 @@ emit_data_process_two_src(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, { uint32_t insn; - insn = (!!is64) << 31; - insn |= 0xd6 << 21; - insn |= rm << 16; - insn |= op << 10; - insn |= rn << 5; + insn = SHIFT_VAR32(is64, 31); + insn |= SHIFT_VAR32(0xd6, 21); + insn |= SHIFT_VAR32(rm, 16); + insn |= SHIFT_VAR32(op, 10); + insn |= SHIFT_VAR32(rn, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); @@ -532,14 +558,14 @@ emit_bitfield(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t rn, { uint32_t insn; - insn = (!!is64) << 31; + insn = SHIFT_VAR32(is64, 31); if (insn) - insn |= 1 << 22; /* Set N bit when is64 is set */ - insn |= op << 29; - insn |= 0x26 << 23; - insn |= immr << 16; - insn |= imms << 10; - insn |= rn << 5; + insn |= RTE_BIT32(22); /* Set N bit when is64 is set */ + insn |= SHIFT_VAR32(op, 29); + insn |= SHIFT_VAR32(0x26, 23); + insn |= SHIFT_VAR32(immr, 16); + insn |= SHIFT_VAR32(imms, 10); + insn |= SHIFT_VAR32(rn, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rn) || @@ -578,11 +604,11 @@ emit_logical(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, { uint32_t insn; - insn = (!!is64) << 31; - insn |= op << 29; - insn |= 0x50 << 21; - insn |= rm << 16; - insn |= rd << 5; + insn = SHIFT_VAR32(is64, 31); + insn |= SHIFT_VAR32(op, 29); + insn |= SHIFT_VAR32(0x50, 21); + insn |= SHIFT_VAR32(rm, 16); + insn |= SHIFT_VAR32(rd, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); @@ -612,12 +638,12 @@ emit_msub(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t rn, { uint32_t insn; - insn = (!!is64) << 31; - insn |= 0xd8 << 21; - insn |= rm << 16; - insn |= 0x1 << 15; - insn |= ra << 10; - insn |= rn << 5; + insn = SHIFT_VAR32(is64, 31); + insn |= SHIFT_VAR32(0xd8, 21); + insn |= SHIFT_VAR32(rm, 16); + insn |= 
SHIFT_VAR32(0x1, 15); + insn |= SHIFT_VAR32(ra, 10); + insn |= SHIFT_VAR32(rn, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rn) || check_reg(rm) || @@ -638,7 +664,7 @@ emit_blr(struct a64_jit_ctx *ctx, uint8_t rn) uint32_t insn; insn = 0xd63f0000; - insn |= rn << 5; + insn |= SHIFT_VAR32(rn, 5); emit_insn(ctx, insn, check_reg(rn)); } @@ -669,22 +695,22 @@ emit_rev(struct a64_jit_ctx *ctx, uint8_t rd, int32_t imm) uint32_t insn; insn = 0xdac00000; - insn |= rd << 5; + insn |= SHIFT_VAR32(rd, 5); insn |= rd; switch (imm) { case 16: - insn |= 1 << 10; + insn |= SHIFT_VAR32(1, 10); emit_insn(ctx, insn, check_reg(rd)); emit_zero_extend(ctx, rd, 16); break; case 32: - insn |= 2 << 10; + insn |= SHIFT_VAR32(2, 10); emit_insn(ctx, insn, check_reg(rd)); /* Upper 32 bits already cleared */ break; case 64: - insn |= 3 << 10; + insn |= SHIFT_VAR32(3, 10); emit_insn(ctx, insn, check_reg(rd)); break; default: @@ -933,9 +959,9 @@ emit_cbnz(struct a64_jit_ctx *ctx, bool is64, uint8_t rt, int32_t imm19) uint32_t insn, imm; imm = mask_imm(19, imm19); - insn = (!!is64) << 31; - insn |= 0x35 << 24; - insn |= imm << 5; + insn = SHIFT_VAR32(is64, 31); + insn |= SHIFT_VAR32(0x35, 24); + insn |= SHIFT_VAR32(imm, 5); insn |= rt; emit_insn(ctx, insn, check_reg(rt) || check_imm(19, imm19)); @@ -947,7 +973,7 @@ emit_b(struct a64_jit_ctx *ctx, int32_t imm26) uint32_t insn, imm; imm = mask_imm(26, imm26); - insn = 0x5 << 26; + insn = SHIFT_VAR32(0x5, 26); insn |= imm; emit_insn(ctx, insn, check_imm(26, imm26)); @@ -971,9 +997,9 @@ emit_stadd(struct a64_jit_ctx *ctx, bool is64, uint8_t rs, uint8_t rn) uint32_t insn; insn = 0xb820001f; - insn |= (!!is64) << 30; - insn |= rs << 16; - insn |= rn << 5; + insn |= SHIFT_VAR32(is64, 30); + insn |= SHIFT_VAR32(rs, 16); + insn |= SHIFT_VAR32(rn, 5); emit_insn(ctx, insn, check_reg(rs) || check_reg(rn)); } @@ -984,8 +1010,8 @@ emit_ldxr(struct a64_jit_ctx *ctx, bool is64, uint8_t rt, uint8_t rn) uint32_t insn; insn = 0x885f7c00; - insn 
|= (!!is64) << 30; - insn |= rn << 5; + insn |= SHIFT_VAR32(is64, 30); + insn |= SHIFT_VAR32(rn, 5); insn |= rt; emit_insn(ctx, insn, check_reg(rt) || check_reg(rn)); @@ -998,9 +1024,9 @@ emit_stxr(struct a64_jit_ctx *ctx, bool is64, uint8_t rs, uint8_t rt, uint32_t insn; insn = 0x88007c00; - insn |= (!!is64) << 30; - insn |= rs << 16; - insn |= rn << 5; + insn |= SHIFT_VAR32(is64, 30); + insn |= SHIFT_VAR32(rs, 16); + insn |= SHIFT_VAR32(rn, 5); insn |= rt; emit_insn(ctx, insn, check_reg(rs) || check_reg(rt) || check_reg(rn)); @@ -1051,9 +1077,9 @@ emit_cmp_tst(struct a64_jit_ctx *ctx, bool is64, uint8_t rn, uint8_t rm, uint32_t insn; insn = opc; - insn |= (!!is64) << 31; - insn |= rm << 16; - insn |= rn << 5; + insn |= SHIFT_VAR32(is64, 31); + insn |= SHIFT_VAR32(rm, 16); + insn |= SHIFT_VAR32(rn, 5); emit_insn(ctx, insn, check_reg(rn) || check_reg(rm)); } @@ -1076,8 +1102,8 @@ emit_b_cond(struct a64_jit_ctx *ctx, uint8_t cond, int32_t imm19) uint32_t insn, imm; imm = mask_imm(19, imm19); - insn = 0x15 << 26; - insn |= imm << 5; + insn = SHIFT_VAR32(0x15, 26); + insn |= SHIFT_VAR32(imm, 5); insn |= cond; emit_insn(ctx, insn, check_cond(cond) || check_imm(19, imm19)); @@ -1301,7 +1327,7 @@ emit(struct a64_jit_ctx *ctx, struct rte_bpf *bpf) break; /* dst = imm64 */ case (BPF_LD | BPF_IMM | EBPF_DW): - u64 = ((uint64_t)ins[1].imm << 32) | (uint32_t)imm; + u64 = SHIFT_VAR64(ins[1].imm, 32) | (uint32_t)imm; emit_mov_imm(ctx, 1, dst, u64); i++; break; -- 2.43.0 ^ permalink raw reply related [flat|nested] 51+ messages in thread
* Re: [PATCH 1/3] bpf: fix signed shift overflows in ARM JIT 2025-11-10 15:30 ` [PATCH 1/3] bpf: fix signed shift overflows in ARM JIT Marat Khalili @ 2025-11-11 6:25 ` Jerin Jacob 2025-11-11 7:53 ` Morten Brørup 2025-11-11 10:10 ` Marat Khalili 0 siblings, 2 replies; 51+ messages in thread From: Jerin Jacob @ 2025-11-11 6:25 UTC (permalink / raw) To: Marat Khalili; +Cc: Konstantin Ananyev, Stephen Hemminger, dev On Mon, Nov 10, 2025 at 9:01 PM Marat Khalili <marat.khalili@huawei.com> wrote: > > Left shifts of integer literals and bool values overwriting the sign bit > were used multiple times in bpf_jit_arm64.c. E.g.: > > insn = (!!is64) << 31; > > where is64 has type bool (double bang is a no-op here). The operand of > left shift was promoted to type int, which when 32-bit wide cannot > represent the result. Similarly literal integers have int type by > default. Sanitizer produced the following diagnostic during runtime > (for various lines): > > lib/bpf/bpf_jit_arm64.c:241:18: runtime error: left shift of 1 by 31 > places cannot be represented in type 'int' Wonder why none of the tests in app/test/test_bpf.c able to catch this? The generated ARM opcode looks OK (otherwise tests wont pass). Could you check what is missing in the app/test/test_bpf.c? Also SHIFT_VAR32 needs goto common code. > > To fix the issue use RTE_BIT32 and similar macros instead. > > Signed-off-by: Marat Khalili <marat.khalili@huawei.com> > --- > lib/bpf/bpf_jit_arm64.c | 188 +++++++++++++++++++++++----------------- > 1 file changed, 107 insertions(+), 81 deletions(-) > > diff --git a/lib/bpf/bpf_jit_arm64.c b/lib/bpf/bpf_jit_arm64.c > index 96b8cd2e03..5f43db0170 100644 > --- a/lib/bpf/bpf_jit_arm64.c > +++ b/lib/bpf/bpf_jit_arm64.c > @@ -28,7 +28,33 @@ > #define A64_ZR 31 > > #define check_imm(n, val) (((val) >= 0) ? 
!!((val) >> (n)) : !!((~val) >> (n))) > -#define mask_imm(n, val) ((val) & ((1 << (n)) - 1)) > +#define mask_imm(n, val) ((val) & (RTE_BIT32(n) - 1)) > + > +/** > + * Get the uint32_t shifted value. > + * > + * Works similarly to RTE_SHIFT_VAL32 but accepts non-literal arguments. > + * Performs identically to RTE_SHIFT_VAL32 with literal arguments. > + * > + * @param val > + * The value to be shifted, can be non-literal. > + * @param nr > + * The shift number in range of 0 to (32 - width of val). > + */ > +#define SHIFT_VAR32(val, nr) ((uint32_t)(val) << (nr)) > + > +/** > + * Get the uint64_t shifted value. > + * > + * Works similarly to RTE_SHIFT_VAL64 but accepts non-literal arguments. > + * Performs identically to RTE_SHIFT_VAL64 with literal arguments. > + * > + * @param val > + * The value to be shifted, can be non-literal. > + * @param nr > + * The shift number in range of 0 to (64 - width of val). > + */ > +#define SHIFT_VAR64(val, nr) ((uint64_t)(val) << (nr)) > > struct ebpf_a64_map { > uint32_t off; /* eBPF to arm64 insn offset mapping for jump */ > @@ -238,12 +264,12 @@ emit_add_sub_imm(struct a64_jit_ctx *ctx, bool is64, bool sub, uint8_t rd, > uint32_t insn, imm; > > imm = mask_imm(12, imm12); > - insn = (!!is64) << 31; > - insn |= (!!sub) << 30; > + insn = SHIFT_VAR32(is64, 31); > + insn |= SHIFT_VAR32(sub, 30); > insn |= 0x11000000; > insn |= rd; > - insn |= rn << 5; > - insn |= imm << 10; > + insn |= SHIFT_VAR32(rn, 5); > + insn |= SHIFT_VAR32(imm, 10); > > emit_insn(ctx, insn, > check_reg(rd) || check_reg(rn) || check_imm(12, imm12)); > @@ -279,16 +305,16 @@ emit_ls_pair_64(struct a64_jit_ctx *ctx, uint8_t rt, uint8_t rt2, uint8_t rn, > { > uint32_t insn; > > - insn = (!!load) << 22; > - insn |= (!!pre_index) << 24; > + insn = SHIFT_VAR32(load, 22); > + insn |= SHIFT_VAR32(pre_index, 24); > insn |= 0xa8800000; > insn |= rt; > - insn |= rn << 5; > - insn |= rt2 << 10; > + insn |= SHIFT_VAR32(rn, 5); > + insn |= SHIFT_VAR32(rt2, 10); > if (push) > - 
insn |= 0x7e << 15; /* 0x7e means -2 with imm7 */ > + insn |= SHIFT_VAR32(0x7e, 15); /* 0x7e means -2 with imm7 */ > else > - insn |= 0x2 << 15; > + insn |= SHIFT_VAR32(0x2, 15); > > emit_insn(ctx, insn, check_reg(rn) || check_reg(rt) || check_reg(rt2)); > > @@ -317,11 +343,11 @@ mov_imm(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t type, > { > uint32_t insn; > > - insn = (!!is64) << 31; > - insn |= type << 29; > - insn |= 0x25 << 23; > - insn |= (shift/16) << 21; > - insn |= imm16 << 5; > + insn = SHIFT_VAR32(is64, 31); > + insn |= SHIFT_VAR32(type, 29); > + insn |= SHIFT_VAR32(0x25, 23); > + insn |= SHIFT_VAR32(shift/16, 21); > + insn |= SHIFT_VAR32(imm16, 5); > insn |= rd; > > emit_insn(ctx, insn, check_reg(rd) || check_mov_hw(is64, shift)); > @@ -334,7 +360,7 @@ emit_mov_imm32(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint32_t val) > uint16_t lower = val & 0xffff; > > /* Positive number */ > - if ((val & 1UL << 31) == 0) { > + if ((val & RTE_BIT32(31)) == 0) { > mov_imm(ctx, is64, rd, A64_MOVZ, lower, 0); > if (upper) > mov_imm(ctx, is64, rd, A64_MOVK, upper, 16); > @@ -393,21 +419,21 @@ emit_ls(struct a64_jit_ctx *ctx, uint8_t sz, uint8_t rt, uint8_t rn, uint8_t rm, > { > uint32_t insn; > > - insn = 0x1c1 << 21; > + insn = SHIFT_VAR32(0x1c1, 21); > if (load) > - insn |= 1 << 22; > + insn |= RTE_BIT32(22); > if (sz == BPF_B) > - insn |= 0 << 30; > + insn |= SHIFT_VAR32(0, 30); > else if (sz == BPF_H) > - insn |= 1 << 30; > + insn |= SHIFT_VAR32(1, 30); > else if (sz == BPF_W) > - insn |= 2 << 30; > + insn |= SHIFT_VAR32(2, 30); > else if (sz == EBPF_DW) > - insn |= 3 << 30; > + insn |= SHIFT_VAR32(3, 30); > > - insn |= rm << 16; > - insn |= 0x1a << 10; /* LSL and S = 0 */ > - insn |= rn << 5; > + insn |= SHIFT_VAR32(rm, 16); > + insn |= SHIFT_VAR32(0x1a, 10); /* LSL and S = 0 */ > + insn |= SHIFT_VAR32(rn, 5); > insn |= rt; > > emit_insn(ctx, insn, check_reg(rt) || check_reg(rn) || check_reg(rm) || > @@ -436,10 +462,10 @@ emit_add_sub(struct 
a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t rn, > { > uint32_t insn; > > - insn = (!!is64) << 31; > - insn |= op << 21; /* shift == 0 */ > - insn |= rm << 16; > - insn |= rn << 5; > + insn = SHIFT_VAR32(is64, 31); > + insn |= SHIFT_VAR32(op, 21); /* shift == 0 */ > + insn |= SHIFT_VAR32(rm, 16); > + insn |= SHIFT_VAR32(rn, 5); > insn |= rd; > > emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); > @@ -468,11 +494,11 @@ emit_mul(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t rm) > { > uint32_t insn; > > - insn = (!!is64) << 31; > - insn |= 0xd8 << 21; > - insn |= rm << 16; > - insn |= A64_ZR << 10; > - insn |= rd << 5; > + insn = SHIFT_VAR32(is64, 31); > + insn |= SHIFT_VAR32(0xd8, 21); > + insn |= SHIFT_VAR32(rm, 16); > + insn |= SHIFT_VAR32(A64_ZR, 10); > + insn |= SHIFT_VAR32(rd, 5); > insn |= rd; > > emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); > @@ -489,11 +515,11 @@ emit_data_process_two_src(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, > { > uint32_t insn; > > - insn = (!!is64) << 31; > - insn |= 0xd6 << 21; > - insn |= rm << 16; > - insn |= op << 10; > - insn |= rn << 5; > + insn = SHIFT_VAR32(is64, 31); > + insn |= SHIFT_VAR32(0xd6, 21); > + insn |= SHIFT_VAR32(rm, 16); > + insn |= SHIFT_VAR32(op, 10); > + insn |= SHIFT_VAR32(rn, 5); > insn |= rd; > > emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); > @@ -532,14 +558,14 @@ emit_bitfield(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t rn, > { > uint32_t insn; > > - insn = (!!is64) << 31; > + insn = SHIFT_VAR32(is64, 31); > if (insn) > - insn |= 1 << 22; /* Set N bit when is64 is set */ > - insn |= op << 29; > - insn |= 0x26 << 23; > - insn |= immr << 16; > - insn |= imms << 10; > - insn |= rn << 5; > + insn |= RTE_BIT32(22); /* Set N bit when is64 is set */ > + insn |= SHIFT_VAR32(op, 29); > + insn |= SHIFT_VAR32(0x26, 23); > + insn |= SHIFT_VAR32(immr, 16); > + insn |= SHIFT_VAR32(imms, 10); > + insn |= SHIFT_VAR32(rn, 5); > insn |= rd; > > emit_insn(ctx, insn, 
check_reg(rd) || check_reg(rn) || > @@ -578,11 +604,11 @@ emit_logical(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, > { > uint32_t insn; > > - insn = (!!is64) << 31; > - insn |= op << 29; > - insn |= 0x50 << 21; > - insn |= rm << 16; > - insn |= rd << 5; > + insn = SHIFT_VAR32(is64, 31); > + insn |= SHIFT_VAR32(op, 29); > + insn |= SHIFT_VAR32(0x50, 21); > + insn |= SHIFT_VAR32(rm, 16); > + insn |= SHIFT_VAR32(rd, 5); > insn |= rd; > > emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); > @@ -612,12 +638,12 @@ emit_msub(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t rn, > { > uint32_t insn; > > - insn = (!!is64) << 31; > - insn |= 0xd8 << 21; > - insn |= rm << 16; > - insn |= 0x1 << 15; > - insn |= ra << 10; > - insn |= rn << 5; > + insn = SHIFT_VAR32(is64, 31); > + insn |= SHIFT_VAR32(0xd8, 21); > + insn |= SHIFT_VAR32(rm, 16); > + insn |= SHIFT_VAR32(0x1, 15); > + insn |= SHIFT_VAR32(ra, 10); > + insn |= SHIFT_VAR32(rn, 5); > insn |= rd; > > emit_insn(ctx, insn, check_reg(rd) || check_reg(rn) || check_reg(rm) || > @@ -638,7 +664,7 @@ emit_blr(struct a64_jit_ctx *ctx, uint8_t rn) > uint32_t insn; > > insn = 0xd63f0000; > - insn |= rn << 5; > + insn |= SHIFT_VAR32(rn, 5); > > emit_insn(ctx, insn, check_reg(rn)); > } > @@ -669,22 +695,22 @@ emit_rev(struct a64_jit_ctx *ctx, uint8_t rd, int32_t imm) > uint32_t insn; > > insn = 0xdac00000; > - insn |= rd << 5; > + insn |= SHIFT_VAR32(rd, 5); > insn |= rd; > > switch (imm) { > case 16: > - insn |= 1 << 10; > + insn |= SHIFT_VAR32(1, 10); > emit_insn(ctx, insn, check_reg(rd)); > emit_zero_extend(ctx, rd, 16); > break; > case 32: > - insn |= 2 << 10; > + insn |= SHIFT_VAR32(2, 10); > emit_insn(ctx, insn, check_reg(rd)); > /* Upper 32 bits already cleared */ > break; > case 64: > - insn |= 3 << 10; > + insn |= SHIFT_VAR32(3, 10); > emit_insn(ctx, insn, check_reg(rd)); > break; > default: > @@ -933,9 +959,9 @@ emit_cbnz(struct a64_jit_ctx *ctx, bool is64, uint8_t rt, int32_t imm19) > uint32_t insn, imm; > > 
imm = mask_imm(19, imm19); > - insn = (!!is64) << 31; > - insn |= 0x35 << 24; > - insn |= imm << 5; > + insn = SHIFT_VAR32(is64, 31); > + insn |= SHIFT_VAR32(0x35, 24); > + insn |= SHIFT_VAR32(imm, 5); > insn |= rt; > > emit_insn(ctx, insn, check_reg(rt) || check_imm(19, imm19)); > @@ -947,7 +973,7 @@ emit_b(struct a64_jit_ctx *ctx, int32_t imm26) > uint32_t insn, imm; > > imm = mask_imm(26, imm26); > - insn = 0x5 << 26; > + insn = SHIFT_VAR32(0x5, 26); > insn |= imm; > > emit_insn(ctx, insn, check_imm(26, imm26)); > @@ -971,9 +997,9 @@ emit_stadd(struct a64_jit_ctx *ctx, bool is64, uint8_t rs, uint8_t rn) > uint32_t insn; > > insn = 0xb820001f; > - insn |= (!!is64) << 30; > - insn |= rs << 16; > - insn |= rn << 5; > + insn |= SHIFT_VAR32(is64, 30); > + insn |= SHIFT_VAR32(rs, 16); > + insn |= SHIFT_VAR32(rn, 5); > > emit_insn(ctx, insn, check_reg(rs) || check_reg(rn)); > } > @@ -984,8 +1010,8 @@ emit_ldxr(struct a64_jit_ctx *ctx, bool is64, uint8_t rt, uint8_t rn) > uint32_t insn; > > insn = 0x885f7c00; > - insn |= (!!is64) << 30; > - insn |= rn << 5; > + insn |= SHIFT_VAR32(is64, 30); > + insn |= SHIFT_VAR32(rn, 5); > insn |= rt; > > emit_insn(ctx, insn, check_reg(rt) || check_reg(rn)); > @@ -998,9 +1024,9 @@ emit_stxr(struct a64_jit_ctx *ctx, bool is64, uint8_t rs, uint8_t rt, > uint32_t insn; > > insn = 0x88007c00; > - insn |= (!!is64) << 30; > - insn |= rs << 16; > - insn |= rn << 5; > + insn |= SHIFT_VAR32(is64, 30); > + insn |= SHIFT_VAR32(rs, 16); > + insn |= SHIFT_VAR32(rn, 5); > insn |= rt; > > emit_insn(ctx, insn, check_reg(rs) || check_reg(rt) || check_reg(rn)); > @@ -1051,9 +1077,9 @@ emit_cmp_tst(struct a64_jit_ctx *ctx, bool is64, uint8_t rn, uint8_t rm, > uint32_t insn; > > insn = opc; > - insn |= (!!is64) << 31; > - insn |= rm << 16; > - insn |= rn << 5; > + insn |= SHIFT_VAR32(is64, 31); > + insn |= SHIFT_VAR32(rm, 16); > + insn |= SHIFT_VAR32(rn, 5); > > emit_insn(ctx, insn, check_reg(rn) || check_reg(rm)); > } > @@ -1076,8 +1102,8 @@ 
emit_b_cond(struct a64_jit_ctx *ctx, uint8_t cond, int32_t imm19) > uint32_t insn, imm; > > imm = mask_imm(19, imm19); > - insn = 0x15 << 26; > - insn |= imm << 5; > + insn = SHIFT_VAR32(0x15, 26); > + insn |= SHIFT_VAR32(imm, 5); > insn |= cond; > > emit_insn(ctx, insn, check_cond(cond) || check_imm(19, imm19)); > @@ -1301,7 +1327,7 @@ emit(struct a64_jit_ctx *ctx, struct rte_bpf *bpf) > break; > /* dst = imm64 */ > case (BPF_LD | BPF_IMM | EBPF_DW): > - u64 = ((uint64_t)ins[1].imm << 32) | (uint32_t)imm; > + u64 = SHIFT_VAR64(ins[1].imm, 32) | (uint32_t)imm; > emit_mov_imm(ctx, 1, dst, u64); > i++; > break; > -- > 2.43.0 > ^ permalink raw reply [flat|nested] 51+ messages in thread
* RE: [PATCH 1/3] bpf: fix signed shift overflows in ARM JIT 2025-11-11 6:25 ` Jerin Jacob @ 2025-11-11 7:53 ` Morten Brørup 2025-11-11 10:10 ` Marat Khalili 1 sibling, 0 replies; 51+ messages in thread From: Morten Brørup @ 2025-11-11 7:53 UTC (permalink / raw) To: Jerin Jacob, Marat Khalili; +Cc: Konstantin Ananyev, Stephen Hemminger, dev > From: Jerin Jacob [mailto:jerinjacobk@gmail.com] > Sent: Tuesday, 11 November 2025 07.25 > > On Mon, Nov 10, 2025 at 9:01 PM Marat Khalili > <marat.khalili@huawei.com> wrote: > > > > Left shifts of integer literals and bool values overwriting the sign > bit > > were used multiple times in bpf_jit_arm64.c. E.g.: > > > > insn = (!!is64) << 31; > > > > where is64 has type bool (double bang is a no-op here). The operand > of > > left shift was promoted to type int, which when 32-bit wide cannot > > represent the result. Similarly literal integers have int type by > > default. Sanitizer produced the following diagnostic during runtime > > (for various lines): > > > > lib/bpf/bpf_jit_arm64.c:241:18: runtime error: left shift of 1 by > 31 > > places cannot be represented in type 'int' > > Wonder why none of the tests in app/test/test_bpf.c able to catch > this? The generated ARM opcode looks OK (otherwise tests wont pass). > Could you check what is missing in the app/test/test_bpf.c? > > Also SHIFT_VAR32 needs goto common code. > > > > > > To fix the issue use RTE_BIT32 and similar macros instead. > > > > Signed-off-by: Marat Khalili <marat.khalili@huawei.com> > > --- > > lib/bpf/bpf_jit_arm64.c | 188 +++++++++++++++++++++++--------------- > -- > > 1 file changed, 107 insertions(+), 81 deletions(-) > > > > diff --git a/lib/bpf/bpf_jit_arm64.c b/lib/bpf/bpf_jit_arm64.c > > index 96b8cd2e03..5f43db0170 100644 > > --- a/lib/bpf/bpf_jit_arm64.c > > +++ b/lib/bpf/bpf_jit_arm64.c > > @@ -28,7 +28,33 @@ > > #define A64_ZR 31 > > > > #define check_imm(n, val) (((val) >= 0) ? 
!!((val) >> (n)) : > !!((~val) >> (n))) > > -#define mask_imm(n, val) ((val) & ((1 << (n)) - 1)) > > +#define mask_imm(n, val) ((val) & (RTE_BIT32(n) - 1)) > > + > > +/** > > + * Get the uint32_t shifted value. > > + * > > + * Works similarly to RTE_SHIFT_VAL32 but accepts non-literal > arguments. > > + * Performs identically to RTE_SHIFT_VAL32 with literal arguments. > > + * > > + * @param val > > + * The value to be shifted, can be non-literal. > > + * @param nr > > + * The shift number in range of 0 to (32 - width of val). > > + */ > > +#define SHIFT_VAR32(val, nr) ((uint32_t)(val) << (nr)) > > + > > +/** > > + * Get the uint64_t shifted value. > > + * > > + * Works similarly to RTE_SHIFT_VAL64 but accepts non-literal > arguments. > > + * Performs identically to RTE_SHIFT_VAL64 with literal arguments. > > + * > > + * @param val > > + * The value to be shifted, can be non-literal. > > + * @param nr > > + * The shift number in range of 0 to (64 - width of val). > > + */ > > +#define SHIFT_VAR64(val, nr) ((uint64_t)(val) << (nr)) Better replace the RTE_SHIFT_VAL32/VAL64 macros with these, to support both literal and non-literal arguments. ^ permalink raw reply [flat|nested] 51+ messages in thread
* RE: [PATCH 1/3] bpf: fix signed shift overflows in ARM JIT 2025-11-11 6:25 ` Jerin Jacob 2025-11-11 7:53 ` Morten Brørup @ 2025-11-11 10:10 ` Marat Khalili 2025-11-11 16:29 ` Jerin Jacob 2025-11-11 16:31 ` Jerin Jacob 1 sibling, 2 replies; 51+ messages in thread From: Marat Khalili @ 2025-11-11 10:10 UTC (permalink / raw) To: Jerin Jacob; +Cc: Konstantin Ananyev, Stephen Hemminger, dev@dpdk.org > On Mon, Nov 10, 2025 at 9:01 PM Marat Khalili <marat.khalili@huawei.com> wrote: > > > > Left shifts of integer literals and bool values overwriting the sign bit > > were used multiple times in bpf_jit_arm64.c. E.g.: > > > > insn = (!!is64) << 31; > > > > where is64 has type bool (double bang is a no-op here). The operand of > > left shift was promoted to type int, which when 32-bit wide cannot > > represent the result. Similarly literal integers have int type by > > default. Sanitizer produced the following diagnostic during runtime > > (for various lines): > > > > lib/bpf/bpf_jit_arm64.c:241:18: runtime error: left shift of 1 by 31 > > places cannot be represented in type 'int' > > Wonder why none of the tests in app/test/test_bpf.c able to catch > this? The generated ARM opcode looks OK (otherwise tests wont pass). > Could you check what is missing in the app/test/test_bpf.c? That tests do trigger it. RTE>>bpf_autotest ../../src/lib/bpf/bpf_jit_arm64.c:320:18: runtime error: left shift of 1 by 31 places cannot be represented in type 'int' #0 0x22bf59c in mov_imm ../../src/lib/bpf/bpf_jit_arm64.c:320 #1 0x22bfae0 in emit_mov_imm ../../src/lib/bpf/bpf_jit_arm64.c:378 #2 0x22c3bfc in emit ../../src/lib/bpf/bpf_jit_arm64.c:1321 #3 0x22c4314 in __rte_bpf_jit_arm64 ../../src/lib/bpf/bpf_jit_arm64.c:1414 #4 0x22775c0 in __rte_bpf_jit ../../src/lib/bpf/bpf.c:44 #5 0x228fbdc in rte_bpf_load ../../src/lib/bpf/bpf_load.c:115 #6 0x6312f8 in run_test ../../src/app/test/test_bpf.c:3225 #7 0x631848 in test_bpf ../../src/app/test/test_bpf.c:3269 [...] 
RTE>>bpf_convert_autotest ../../src/lib/bpf/bpf_jit_arm64.c:241:18: runtime error: left shift of 1 by 31 places cannot be represented in type 'int' #0 0x22bf04c in emit_add_sub_imm ../../src/lib/bpf/bpf_jit_arm64.c:241 #1 0x22bf270 in emit_mov ../../src/lib/bpf/bpf_jit_arm64.c:267 #2 0x22bf2a4 in emit_mov_64 ../../src/lib/bpf/bpf_jit_arm64.c:273 #3 0x22c1a68 in emit_prologue_no_call ../../src/lib/bpf/bpf_jit_arm64.c:884 #4 0x22c1c9c in emit_prologue ../../src/lib/bpf/bpf_jit_arm64.c:904 #5 0x22c327c in emit ../../src/lib/bpf/bpf_jit_arm64.c:1136 #6 0x22c4314 in __rte_bpf_jit_arm64 ../../src/lib/bpf/bpf_jit_arm64.c:1414 #7 0x22775c0 in __rte_bpf_jit ../../src/lib/bpf/bpf.c:44 #8 0x228fbdc in rte_bpf_load ../../src/lib/bpf/bpf_load.c:115 #9 0x631e7c in test_bpf_match ../../src/app/test/test_bpf.c:3328 #10 0x632ce8 in test_bpf_filter_sanity ../../src/app/test/test_bpf.c:3376 #11 0x6331c4 in test_bpf_convert ../../src/app/test/test_bpf.c:3487 [...] ^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH 1/3] bpf: fix signed shift overflows in ARM JIT 2025-11-11 10:10 ` Marat Khalili @ 2025-11-11 16:29 ` Jerin Jacob 2025-11-11 16:31 ` Jerin Jacob 1 sibling, 0 replies; 51+ messages in thread From: Jerin Jacob @ 2025-11-11 16:29 UTC (permalink / raw) To: Marat Khalili; +Cc: Konstantin Ananyev, Stephen Hemminger, dev@dpdk.org On Tue, Nov 11, 2025 at 3:40 PM Marat Khalili <marat.khalili@huawei.com> wrote: > > > On Mon, Nov 10, 2025 at 9:01 PM Marat Khalili <marat.khalili@huawei.com> wrote: > > > > > > Left shifts of integer literals and bool values overwriting the sign bit > > > were used multiple times in bpf_jit_arm64.c. E.g.: > > > > > > insn = (!!is64) << 31; > > > > > > where is64 has type bool (double bang is a no-op here). The operand of > > > left shift was promoted to type int, which when 32-bit wide cannot > > > represent the result. Similarly literal integers have int type by > > > default. Sanitizer produced the following diagnostic during runtime > > > (for various lines): > > > > > > lib/bpf/bpf_jit_arm64.c:241:18: runtime error: left shift of 1 by 31 > > > places cannot be represented in type 'int' > > > > Wonder why none of the tests in app/test/test_bpf.c able to catch > > this? The generated ARM opcode looks OK (otherwise tests wont pass). > > Could you check what is missing in the app/test/test_bpf.c? > > That tests do trigger it. 
> > RTE>>bpf_autotest > ../../src/lib/bpf/bpf_jit_arm64.c:320:18: runtime error: left shift of 1 by 31 places cannot be represented in type 'int' > #0 0x22bf59c in mov_imm ../../src/lib/bpf/bpf_jit_arm64.c:320 > #1 0x22bfae0 in emit_mov_imm ../../src/lib/bpf/bpf_jit_arm64.c:378 > #2 0x22c3bfc in emit ../../src/lib/bpf/bpf_jit_arm64.c:1321 > #3 0x22c4314 in __rte_bpf_jit_arm64 ../../src/lib/bpf/bpf_jit_arm64.c:1414 > #4 0x22775c0 in __rte_bpf_jit ../../src/lib/bpf/bpf.c:44 > #5 0x228fbdc in rte_bpf_load ../../src/lib/bpf/bpf_load.c:115 > #6 0x6312f8 in run_test ../../src/app/test/test_bpf.c:3225 > #7 0x631848 in test_bpf ../../src/app/test/test_bpf.c:3269 > [...] > > RTE>>bpf_convert_autotest > ../../src/lib/bpf/bpf_jit_arm64.c:241:18: runtime error: left shift of 1 by 31 places cannot be represented in type 'int' > #0 0x22bf04c in emit_add_sub_imm ../../src/lib/bpf/bpf_jit_arm64.c:241 > #1 0x22bf270 in emit_mov ../../src/lib/bpf/bpf_jit_arm64.c:267 > #2 0x22bf2a4 in emit_mov_64 ../../src/lib/bpf/bpf_jit_arm64.c:273 > #3 0x22c1a68 in emit_prologue_no_call ../../src/lib/bpf/bpf_jit_arm64.c:884 > #4 0x22c1c9c in emit_prologue ../../src/lib/bpf/bpf_jit_arm64.c:904 > #5 0x22c327c in emit ../../src/lib/bpf/bpf_jit_arm64.c:1136 > #6 0x22c4314 in __rte_bpf_jit_arm64 ../../src/lib/bpf/bpf_jit_arm64.c:1414 > #7 0x22775c0 in __rte_bpf_jit ../../src/lib/bpf/bpf.c:44 > #8 0x228fbdc in rte_bpf_load ../../src/lib/bpf/bpf_load.c:115 > #9 0x631e7c in test_bpf_match ../../src/app/test/test_bpf.c:3328 > #10 0x632ce8 in test_bpf_filter_sanity ../../src/app/test/test_bpf.c:3376 > #11 0x6331c4 in test_bpf_convert ../../src/app/test/test_bpf.c:3487 > [...] ^ permalink raw reply [flat|nested] 51+ messages in thread
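To make the failure mode in the traces above concrete: a minimal, self-contained sketch (hypothetical helper names, not the actual DPDK macros) of the pattern UBSAN flags and the well-defined unsigned variant that RTE_BIT32-style macros use internally:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for RTE_BIT32(n): shift an unsigned 1, so
 * setting bit 31 stays well defined (valid for n in 0..31). */
#define BIT32(n) ((uint32_t)1 << (n))

/* The buggy pattern was `insn = (!!is64) << 31;` -- the bool is
 * promoted to signed int, and 1 << 31 overflows int (undefined
 * behaviour). Casting to uint32_t before shifting produces the same
 * bit pattern without UB. */
static uint32_t
sf_field(bool is64)
{
	return (uint32_t)is64 << 31;
}
```

Under `-fsanitize=undefined` the signed form reports the "left shift of 1 by 31 places" diagnostic quoted in the log; the unsigned form is silent.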
* Re: [PATCH 1/3] bpf: fix signed shift overflows in ARM JIT 2025-11-11 10:10 ` Marat Khalili 2025-11-11 16:29 ` Jerin Jacob @ 2025-11-11 16:31 ` Jerin Jacob 2025-11-11 16:39 ` Marat Khalili 1 sibling, 1 reply; 51+ messages in thread From: Jerin Jacob @ 2025-11-11 16:31 UTC (permalink / raw) To: Marat Khalili; +Cc: Konstantin Ananyev, Stephen Hemminger, dev@dpdk.org On Tue, Nov 11, 2025 at 3:40 PM Marat Khalili <marat.khalili@huawei.com> wrote: > > > On Mon, Nov 10, 2025 at 9:01 PM Marat Khalili <marat.khalili@huawei.com> wrote: > > > > > > Left shifts of integer literals and bool values overwriting the sign bit > > > were used multiple times in bpf_jit_arm64.c. E.g.: > > > > > > insn = (!!is64) << 31; > > > > > > where is64 has type bool (double bang is a no-op here). The operand of > > > left shift was promoted to type int, which when 32-bit wide cannot > > > represent the result. Similarly literal integers have int type by > > > default. Sanitizer produced the following diagnostic during runtime > > > (for various lines): > > > > > > lib/bpf/bpf_jit_arm64.c:241:18: runtime error: left shift of 1 by 31 > > > places cannot be represented in type 'int' > > > > Wonder why none of the tests in app/test/test_bpf.c able to catch > > this? The generated ARM opcode looks OK (otherwise tests wont pass). > > Could you check what is missing in the app/test/test_bpf.c? > > That tests do trigger it. > > RTE>>bpf_autotest > ../../src/lib/bpf/bpf_jit_arm64.c:320:18: runtime error: left shift of 1 by 31 places cannot be represented in type 'int' This is when ASAN is enabled. Right? Is it really generating BAD opcode(i.e emiter generating bad OPcode) and test fails? 
> #0 0x22bf59c in mov_imm ../../src/lib/bpf/bpf_jit_arm64.c:320 > #1 0x22bfae0 in emit_mov_imm ../../src/lib/bpf/bpf_jit_arm64.c:378 > #2 0x22c3bfc in emit ../../src/lib/bpf/bpf_jit_arm64.c:1321 > #3 0x22c4314 in __rte_bpf_jit_arm64 ../../src/lib/bpf/bpf_jit_arm64.c:1414 > #4 0x22775c0 in __rte_bpf_jit ../../src/lib/bpf/bpf.c:44 > #5 0x228fbdc in rte_bpf_load ../../src/lib/bpf/bpf_load.c:115 > #6 0x6312f8 in run_test ../../src/app/test/test_bpf.c:3225 > #7 0x631848 in test_bpf ../../src/app/test/test_bpf.c:3269 > [...] > > RTE>>bpf_convert_autotest > ../../src/lib/bpf/bpf_jit_arm64.c:241:18: runtime error: left shift of 1 by 31 places cannot be represented in type 'int' > #0 0x22bf04c in emit_add_sub_imm ../../src/lib/bpf/bpf_jit_arm64.c:241 > #1 0x22bf270 in emit_mov ../../src/lib/bpf/bpf_jit_arm64.c:267 > #2 0x22bf2a4 in emit_mov_64 ../../src/lib/bpf/bpf_jit_arm64.c:273 > #3 0x22c1a68 in emit_prologue_no_call ../../src/lib/bpf/bpf_jit_arm64.c:884 > #4 0x22c1c9c in emit_prologue ../../src/lib/bpf/bpf_jit_arm64.c:904 > #5 0x22c327c in emit ../../src/lib/bpf/bpf_jit_arm64.c:1136 > #6 0x22c4314 in __rte_bpf_jit_arm64 ../../src/lib/bpf/bpf_jit_arm64.c:1414 > #7 0x22775c0 in __rte_bpf_jit ../../src/lib/bpf/bpf.c:44 > #8 0x228fbdc in rte_bpf_load ../../src/lib/bpf/bpf_load.c:115 > #9 0x631e7c in test_bpf_match ../../src/app/test/test_bpf.c:3328 > #10 0x632ce8 in test_bpf_filter_sanity ../../src/app/test/test_bpf.c:3376 > #11 0x6331c4 in test_bpf_convert ../../src/app/test/test_bpf.c:3487 > [...] ^ permalink raw reply [flat|nested] 51+ messages in thread
* RE: [PATCH 1/3] bpf: fix signed shift overflows in ARM JIT 2025-11-11 16:31 ` Jerin Jacob @ 2025-11-11 16:39 ` Marat Khalili 2025-11-12 5:23 ` Jerin Jacob 0 siblings, 1 reply; 51+ messages in thread From: Marat Khalili @ 2025-11-11 16:39 UTC (permalink / raw) To: Jerin Jacob; +Cc: Konstantin Ananyev, Stephen Hemminger, dev@dpdk.org > -----Original Message----- > From: Jerin Jacob <jerinjacobk@gmail.com> > Sent: Tuesday 11 November 2025 16:31 > To: Marat Khalili <marat.khalili@huawei.com> > Cc: Konstantin Ananyev <konstantin.ananyev@huawei.com>; Stephen Hemminger <stephen@networkplumber.org>; > dev@dpdk.org > Subject: Re: [PATCH 1/3] bpf: fix signed shift overflows in ARM JIT > > On Tue, Nov 11, 2025 at 3:40 PM Marat Khalili <marat.khalili@huawei.com> wrote: > > > > > On Mon, Nov 10, 2025 at 9:01 PM Marat Khalili <marat.khalili@huawei.com> wrote: > > > > > > > > Left shifts of integer literals and bool values overwriting the sign bit > > > > were used multiple times in bpf_jit_arm64.c. E.g.: > > > > > > > > insn = (!!is64) << 31; > > > > > > > > where is64 has type bool (double bang is a no-op here). The operand of > > > > left shift was promoted to type int, which when 32-bit wide cannot > > > > represent the result. Similarly literal integers have int type by > > > > default. Sanitizer produced the following diagnostic during runtime > > > > (for various lines): > > > > > > > > lib/bpf/bpf_jit_arm64.c:241:18: runtime error: left shift of 1 by 31 > > > > places cannot be represented in type 'int' > > > > > > Wonder why none of the tests in app/test/test_bpf.c able to catch > > > this? The generated ARM opcode looks OK (otherwise tests wont pass). > > > Could you check what is missing in the app/test/test_bpf.c? > > > > That tests do trigger it. > > > > RTE>>bpf_autotest > > ../../src/lib/bpf/bpf_jit_arm64.c:320:18: runtime error: left shift of 1 by 31 places cannot be > represented in type 'int' > > This is when ASAN is enabled. Right? More likely UBSAN, but yes. 
> Is it really generating BAD > opcode(i.e emiter generating bad OPcode) and test fails? If allowed to finish, the test does not fail. However, I have not compared DPDK or JIT compiled code with and without changes. ^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH 1/3] bpf: fix signed shift overflows in ARM JIT 2025-11-11 16:39 ` Marat Khalili @ 2025-11-12 5:23 ` Jerin Jacob 2025-11-12 10:16 ` Marat Khalili 0 siblings, 1 reply; 51+ messages in thread From: Jerin Jacob @ 2025-11-12 5:23 UTC (permalink / raw) To: Marat Khalili; +Cc: Konstantin Ananyev, Stephen Hemminger, dev@dpdk.org On Tue, Nov 11, 2025 at 10:09 PM Marat Khalili <marat.khalili@huawei.com> wrote: > > > -----Original Message----- > > From: Jerin Jacob <jerinjacobk@gmail.com> > > Sent: Tuesday 11 November 2025 16:31 > > To: Marat Khalili <marat.khalili@huawei.com> > > Cc: Konstantin Ananyev <konstantin.ananyev@huawei.com>; Stephen Hemminger <stephen@networkplumber.org>; > > dev@dpdk.org > > Subject: Re: [PATCH 1/3] bpf: fix signed shift overflows in ARM JIT > > > > On Tue, Nov 11, 2025 at 3:40 PM Marat Khalili <marat.khalili@huawei.com> wrote: > > > > > > > On Mon, Nov 10, 2025 at 9:01 PM Marat Khalili <marat.khalili@huawei.com> wrote: > > > > > > > > > > Left shifts of integer literals and bool values overwriting the sign bit > > > > > were used multiple times in bpf_jit_arm64.c. E.g.: > > > > > > > > > > insn = (!!is64) << 31; > > > > > > > > > > where is64 has type bool (double bang is a no-op here). The operand of > > > > > left shift was promoted to type int, which when 32-bit wide cannot > > > > > represent the result. Similarly literal integers have int type by > > > > > default. Sanitizer produced the following diagnostic during runtime > > > > > (for various lines): > > > > > > > > > > lib/bpf/bpf_jit_arm64.c:241:18: runtime error: left shift of 1 by 31 > > > > > places cannot be represented in type 'int' > > > > > > > > Wonder why none of the tests in app/test/test_bpf.c able to catch > > > > this? The generated ARM opcode looks OK (otherwise tests wont pass). > > > > Could you check what is missing in the app/test/test_bpf.c? > > > > > > That tests do trigger it. 
> > > > > > RTE>>bpf_autotest > > > ../../src/lib/bpf/bpf_jit_arm64.c:320:18: runtime error: left shift of 1 by 31 places cannot be > > represented in type 'int' > > > > This is when ASAN is enabled. Right? > > More likely UBSAN, but yes. > > > Is it really generating BAD > > opcode(i.e emiter generating bad OPcode) and test fails? > > If allowed to finish the test does not fail. Does that mean, it is a false positive? Right? Anyway, There is no harm in the new patch, Change the patch to use SHIFT macros from EAL. > However, I have not compared DPDK or JIT compiled code with and without changes. ^ permalink raw reply [flat|nested] 51+ messages in thread
* RE: [PATCH 1/3] bpf: fix signed shift overflows in ARM JIT 2025-11-12 5:23 ` Jerin Jacob @ 2025-11-12 10:16 ` Marat Khalili 0 siblings, 0 replies; 51+ messages in thread From: Marat Khalili @ 2025-11-12 10:16 UTC (permalink / raw) To: Jerin Jacob; +Cc: Konstantin Ananyev, Stephen Hemminger, dev@dpdk.org > > > > That tests do trigger it. > > > > > > > > RTE>>bpf_autotest > > > > ../../src/lib/bpf/bpf_jit_arm64.c:320:18: runtime error: left shift of 1 by 31 places cannot be > > > represented in type 'int' > > > > > > This is when ASAN is enabled. Right? > > > > More likely UBSAN, but yes. > > > > > Is it really generating BAD > > > opcode(i.e emiter generating bad OPcode) and test fails? > > > > If allowed to finish the test does not fail. > > Does that mean, it is a false positive? Right? It depends on who you ask. Certain compilers with certain compilation options tend to delete execution branches that cause UB, and the test may not be detailed enough to notice this. > Anyway, There is no harm in the new patch, Change the patch to use > SHIFT macros from EAL. Ok, will do. It is probably too late for the current release anyway. ^ permalink raw reply [flat|nested] 51+ messages in thread
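On "there is no harm in the new patch": the mask_imm() rewrite in the series follows the same principle as the shift fix. A sketch (illustrative macro, not the DPDK one; valid for n in 0..31, which covers the JIT's immediate widths) of why the signed form is UB at n == 31 while the unsigned form yields the intended low-n-bit mask:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative version of the patched mask_imm(): with a signed
 * literal, `(1 << 31) - 1` is UB before the subtraction even runs;
 * shifting an unsigned 1 wraps predictably, so the subtraction
 * produces the low-n-bit mask. Valid for n in 0..31. */
#define MASK_IMM_U32(n, val) ((uint32_t)(val) & (((uint32_t)1 << (n)) - 1))
```

Whether the pre-patch form was "only" a sanitizer finding or a latent miscompile depends on the compiler and options, which is why the thread treats the UB as worth fixing even though the emitted opcodes looked correct.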
* [PATCH 2/3] bpf: disallow empty program 2025-11-10 15:30 [PATCH 0/3] bpf: simple tests and fixes Marat Khalili 2025-11-10 15:30 ` [PATCH 1/3] bpf: fix signed shift overflows in ARM JIT Marat Khalili @ 2025-11-10 15:30 ` Marat Khalili 2025-11-10 16:40 ` Stephen Hemminger 2025-11-12 15:35 ` Konstantin Ananyev 2025-11-10 15:30 ` [PATCH 3/3] bpf: make add/subtract one program validate Marat Khalili 2025-12-16 18:20 ` [PATCH v2 0/5] bpf: simple tests and fixes Marat Khalili 3 siblings, 2 replies; 51+ messages in thread From: Marat Khalili @ 2025-11-10 15:30 UTC (permalink / raw) To: Konstantin Ananyev; +Cc: Stephen Hemminger, dev Add tests for some simple cases: * Program with no instructions; * Program with only EXIT instruction but no return value set; * Program with return value set but no EXIT instruction; * Minimal valid program with return value set and an EXIT instruction. Fix found bugs: * a program with no instructions was accepted; * a program with no EXIT instruction read outside the buffer. 
Signed-off-by: Marat Khalili <marat.khalili@huawei.com> --- app/test/meson.build | 1 + app/test/test_bpf_simple.c | 131 +++++++++++++++++++++++++++++++++++++ lib/bpf/bpf_load.c | 2 +- lib/bpf/bpf_validate.c | 20 ++++-- 4 files changed, 149 insertions(+), 5 deletions(-) create mode 100644 app/test/test_bpf_simple.c diff --git a/app/test/meson.build b/app/test/meson.build index 8df8d3edd1..9d48431ba0 100644 --- a/app/test/meson.build +++ b/app/test/meson.build @@ -35,6 +35,7 @@ source_file_deps = { 'test_bitset.c': [], 'test_bitratestats.c': ['metrics', 'bitratestats', 'ethdev'] + sample_packet_forward_deps, 'test_bpf.c': ['bpf', 'net'], + 'test_bpf_simple.c': ['bpf'], 'test_byteorder.c': [], 'test_cfgfile.c': ['cfgfile'], 'test_cksum.c': ['net'], diff --git a/app/test/test_bpf_simple.c b/app/test/test_bpf_simple.c new file mode 100644 index 0000000000..576a6ed029 --- /dev/null +++ b/app/test/test_bpf_simple.c @@ -0,0 +1,131 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2025 Huawei Technologies Co., Ltd + */ + +#include "test.h" + +#include <rte_bpf.h> +#include <rte_errno.h> + +/* Tests of most simple BPF programs (no instructions, one instruction etc.) */ + +/* + * Try to load a simple bpf program from the instructions array. + * + * When `expected_errno` is zero, expect it to load successfully. + * When `expected_errno` is non-zero, expect it to fail with this `rte_errno`. + * + * @param nb_ins + * Number of instructions in the `ins` array. + * @param ins + * BPF instructions array. + * @param expected_errno + * Expected result. + * @return + * TEST_SUCCESS on success, error code on failure. 
+ */ +static int +simple_bpf_load_test(uint32_t nb_ins, const struct ebpf_insn *ins, + int expected_errno) +{ + const struct rte_bpf_prm prm = { + .ins = ins, + .nb_ins = nb_ins, + .prog_arg = { + .type = RTE_BPF_ARG_RAW, + .size = sizeof(uint64_t), + }, + }; + + struct rte_bpf *const bpf = rte_bpf_load(&prm); + const int actual_errno = rte_errno; + rte_bpf_destroy(bpf); + + if (expected_errno != 0) { + RTE_TEST_ASSERT_EQUAL(bpf, NULL, + "expect rte_bpf_load() == NULL"); + RTE_TEST_ASSERT_EQUAL(actual_errno, expected_errno, + "expect rte_errno == %d, found %d", + expected_errno, actual_errno); + } else + RTE_TEST_ASSERT_NOT_EQUAL(bpf, NULL, + "expect rte_bpf_load() != NULL"); + + return TEST_SUCCESS; +} + +/* + * Try and load completely empty BPF program. + * Should fail because there is no EXIT (and also return value is undefined). + */ +static int +test_simple_no_instructions(void) +{ + static const struct ebpf_insn ins[] = {}; + return simple_bpf_load_test(RTE_DIM(ins), ins, EINVAL); +} + +REGISTER_FAST_TEST(bpf_simple_no_instructions_autotest, true, true, + test_simple_no_instructions); + +/* + * Try and load a BPF program comprising single EXIT instruction. + * Should fail because the return value is undefined. + */ +static int +test_simple_exit_only(void) +{ + static const struct ebpf_insn ins[] = { + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return simple_bpf_load_test(RTE_DIM(ins), ins, EINVAL); +} + +REGISTER_FAST_TEST(bpf_simple_exit_only_autotest, true, true, + test_simple_exit_only); + +/* + * Try and load a BPF program with no EXIT instruction. + * Should fail because of this. + */ +static int +test_simple_no_exit(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Set return value to the program argument. 
*/ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + }; + return simple_bpf_load_test(RTE_DIM(ins), ins, EINVAL); +} + +REGISTER_FAST_TEST(bpf_simple_no_exit_autotest, true, true, + test_simple_no_exit); + +/* + * Try and load smallest possible valid BPF program. + */ +static int +test_simple_minimal_working(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Set return value to the program argument. */ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return simple_bpf_load_test(RTE_DIM(ins), ins, 0); +} + +REGISTER_FAST_TEST(bpf_simple_minimal_working_autotest, true, true, + test_simple_minimal_working); diff --git a/lib/bpf/bpf_load.c b/lib/bpf/bpf_load.c index 556e613762..6983c026af 100644 --- a/lib/bpf/bpf_load.c +++ b/lib/bpf/bpf_load.c @@ -88,7 +88,7 @@ rte_bpf_load(const struct rte_bpf_prm *prm) int32_t rc; uint32_t i; - if (prm == NULL || prm->ins == NULL || + if (prm == NULL || prm->ins == NULL || prm->nb_ins == 0 || (prm->nb_xsym != 0 && prm->xsym == NULL)) { rte_errno = EINVAL; return NULL; diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c index 4f47d6dc7b..23444b3eaa 100644 --- a/lib/bpf/bpf_validate.c +++ b/lib/bpf/bpf_validate.c @@ -1827,7 +1827,7 @@ add_edge(struct bpf_verifier *bvf, struct inst_node *node, uint32_t nidx) { uint32_t ne; - if (nidx > bvf->prm->nb_ins) { + if (nidx >= bvf->prm->nb_ins) { RTE_BPF_LOG_LINE(ERR, "%s: program boundary violation at pc: %u, next pc: %u", __func__, get_node_idx(bvf, node), nidx); @@ -1886,14 +1886,20 @@ get_prev_node(struct bpf_verifier *bvf, struct inst_node *node) * Control Flow Graph (CFG). * Information collected at this path would be used later * to determine is there any loops, and/or unreachable instructions. + * PREREQUISITE: there is at least one node. 
*/ static void dfs(struct bpf_verifier *bvf) { struct inst_node *next, *node; - node = bvf->in; - while (node != NULL) { + RTE_ASSERT(bvf->nb_nodes != 0); + /* + * Since there is at least one node, node with index 0 always exists; + * it is our program entry point. + */ + node = &bvf->in[0]; + do { if (node->colour == WHITE) set_node_colour(bvf, node, GREY); @@ -1923,7 +1929,7 @@ dfs(struct bpf_verifier *bvf) } } else node = NULL; - } + } while (node != NULL); } /* @@ -2062,6 +2068,12 @@ validate(struct bpf_verifier *bvf) if (rc != 0) return rc; + if (bvf->nb_nodes == 0) { + RTE_BPF_LOG_LINE(ERR, "%s(%p) the program is empty", + __func__, bvf); + return -EINVAL; + } + dfs(bvf); RTE_LOG(DEBUG, BPF, "%s(%p) stats:\n" -- 2.43.0 ^ permalink raw reply related [flat|nested] 51+ messages in thread
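The add_edge() change in the patch above is a classic off-by-one; isolated as a sketch (hypothetical helper, not the DPDK function):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* A program with nb_ins instructions has valid pc values
 * 0..nb_ins - 1. The pre-patch test `nidx > nb_ins` accepted
 * nidx == nb_ins, i.e. a branch target one instruction past the end
 * of the buffer; the patched `nidx >= nb_ins` rejects it. */
static bool
next_pc_in_bounds(uint32_t nidx, uint32_t nb_ins)
{
	return nidx < nb_ins;
}
```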
* Re: [PATCH 2/3] bpf: disallow empty program 2025-11-10 15:30 ` [PATCH 2/3] bpf: disallow empty program Marat Khalili @ 2025-11-10 16:40 ` Stephen Hemminger 2025-11-10 16:46 ` Marat Khalili 2025-11-12 15:35 ` Konstantin Ananyev 1 sibling, 1 reply; 51+ messages in thread From: Stephen Hemminger @ 2025-11-10 16:40 UTC (permalink / raw) To: Marat Khalili; +Cc: Konstantin Ananyev, dev On Mon, 10 Nov 2025 15:30:45 +0000 Marat Khalili <marat.khalili@huawei.com> wrote: > Add tests for some simple cases: > * Program with no instructions; > * Program with only EXIT instruction but no return value set; > * Program with return value set but no EXIT instruction; > * Minimal valid program with return value set and an EXIT instruction. > > Fix found bugs: > * a program with no instructions was accepted; > * a program with no EXIT instruction read outside the buffer. > > Signed-off-by: Marat Khalili <marat.khalili@huawei.com> Can this just be folded into existing test_bpf file please. ^ permalink raw reply [flat|nested] 51+ messages in thread
* RE: [PATCH 2/3] bpf: disallow empty program 2025-11-10 16:40 ` Stephen Hemminger @ 2025-11-10 16:46 ` Marat Khalili 0 siblings, 0 replies; 51+ messages in thread From: Marat Khalili @ 2025-11-10 16:46 UTC (permalink / raw) To: Stephen Hemminger; +Cc: Konstantin Ananyev, dev@dpdk.org > Can this just be folded into existing test_bpf file please. Sure; any more issues? :)
* RE: [PATCH 2/3] bpf: disallow empty program 2025-11-10 15:30 ` [PATCH 2/3] bpf: disallow empty program Marat Khalili 2025-11-10 16:40 ` Stephen Hemminger @ 2025-11-12 15:35 ` Konstantin Ananyev 1 sibling, 0 replies; 51+ messages in thread From: Konstantin Ananyev @ 2025-11-12 15:35 UTC (permalink / raw) To: Marat Khalili; +Cc: Stephen Hemminger, dev@dpdk.org > Add tests for some simple cases: > * Program with no instructions; > * Program with only EXIT instruction but no return value set; > * Program with return value set but no EXIT instruction; > * Minimal valid program with return value set and an EXIT instruction. > > Fix found bugs: > * a program with no instructions was accepted; > * a program with no EXIT instruction read outside the buffer. > > Signed-off-by: Marat Khalili <marat.khalili@huawei.com> > --- > app/test/meson.build | 1 + > app/test/test_bpf_simple.c | 131 +++++++++++++++++++++++++++++++++++++ > lib/bpf/bpf_load.c | 2 +- > lib/bpf/bpf_validate.c | 20 ++++-- > 4 files changed, 149 insertions(+), 5 deletions(-) > create mode 100644 app/test/test_bpf_simple.c > > diff --git a/app/test/meson.build b/app/test/meson.build > index 8df8d3edd1..9d48431ba0 100644 > --- a/app/test/meson.build > +++ b/app/test/meson.build > @@ -35,6 +35,7 @@ source_file_deps = { > 'test_bitset.c': [], > 'test_bitratestats.c': ['metrics', 'bitratestats', 'ethdev'] + > sample_packet_forward_deps, > 'test_bpf.c': ['bpf', 'net'], > + 'test_bpf_simple.c': ['bpf'], > 'test_byteorder.c': [], > 'test_cfgfile.c': ['cfgfile'], > 'test_cksum.c': ['net'], > diff --git a/app/test/test_bpf_simple.c b/app/test/test_bpf_simple.c > new file mode 100644 > index 0000000000..576a6ed029 > --- /dev/null > +++ b/app/test/test_bpf_simple.c > @@ -0,0 +1,131 @@ > +/* SPDX-License-Identifier: BSD-3-Clause > + * Copyright(c) 2025 Huawei Technologies Co., Ltd > + */ > + > +#include "test.h" > + > +#include <rte_bpf.h> > +#include <rte_errno.h> > + > +/* Tests of most simple BPF programs (no 
instructions, one instruction etc.) */ > + > +/* > + * Try to load a simple bpf program from the instructions array. > + * > + * When `expected_errno` is zero, expect it to load successfully. > + * When `expected_errno` is non-zero, expect it to fail with this `rte_errno`. > + * > + * @param nb_ins > + * Number of instructions in the `ins` array. > + * @param ins > + * BPF instructions array. > + * @param expected_errno > + * Expected result. > + * @return > + * TEST_SUCCESS on success, error code on failure. > + */ > +static int > +simple_bpf_load_test(uint32_t nb_ins, const struct ebpf_insn *ins, > + int expected_errno) > +{ > + const struct rte_bpf_prm prm = { > + .ins = ins, > + .nb_ins = nb_ins, > + .prog_arg = { > + .type = RTE_BPF_ARG_RAW, > + .size = sizeof(uint64_t), > + }, > + }; > + > + struct rte_bpf *const bpf = rte_bpf_load(&prm); > + const int actual_errno = rte_errno; > + rte_bpf_destroy(bpf); > + > + if (expected_errno != 0) { > + RTE_TEST_ASSERT_EQUAL(bpf, NULL, > + "expect rte_bpf_load() == NULL"); > + RTE_TEST_ASSERT_EQUAL(actual_errno, expected_errno, > + "expect rte_errno == %d, found %d", > + expected_errno, actual_errno); > + } else > + RTE_TEST_ASSERT_NOT_EQUAL(bpf, NULL, > + "expect rte_bpf_load() != NULL"); > + > + return TEST_SUCCESS; > +} > + > +/* > + * Try and load completely empty BPF program. > + * Should fail because there is no EXIT (and also return value is undefined). > + */ > +static int > +test_simple_no_instructions(void) > +{ > + static const struct ebpf_insn ins[] = {}; > + return simple_bpf_load_test(RTE_DIM(ins), ins, EINVAL); > +} > + > +REGISTER_FAST_TEST(bpf_simple_no_instructions_autotest, true, true, > + test_simple_no_instructions); > + > +/* > + * Try and load a BPF program comprising single EXIT instruction. > + * Should fail because the return value is undefined. 
> + */ > +static int > +test_simple_exit_only(void) > +{ > + static const struct ebpf_insn ins[] = { > + { > + .code = (BPF_JMP | EBPF_EXIT), > + }, > + }; > + return simple_bpf_load_test(RTE_DIM(ins), ins, EINVAL); > +} > + > +REGISTER_FAST_TEST(bpf_simple_exit_only_autotest, true, true, > + test_simple_exit_only); > + > +/* > + * Try and load a BPF program with no EXIT instruction. > + * Should fail because of this. > + */ > +static int > +test_simple_no_exit(void) > +{ > + static const struct ebpf_insn ins[] = { > + { > + /* Set return value to the program argument. */ > + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), > + .src_reg = EBPF_REG_1, > + .dst_reg = EBPF_REG_0, > + }, > + }; > + return simple_bpf_load_test(RTE_DIM(ins), ins, EINVAL); > +} > + > +REGISTER_FAST_TEST(bpf_simple_no_exit_autotest, true, true, > + test_simple_no_exit); > + > +/* > + * Try and load smallest possible valid BPF program. > + */ > +static int > +test_simple_minimal_working(void) > +{ > + static const struct ebpf_insn ins[] = { > + { > + /* Set return value to the program argument. 
*/ > + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), > + .src_reg = EBPF_REG_1, > + .dst_reg = EBPF_REG_0, > + }, > + { > + .code = (BPF_JMP | EBPF_EXIT), > + }, > + }; > + return simple_bpf_load_test(RTE_DIM(ins), ins, 0); > +} > + > +REGISTER_FAST_TEST(bpf_simple_minimal_working_autotest, true, true, > + test_simple_minimal_working); > diff --git a/lib/bpf/bpf_load.c b/lib/bpf/bpf_load.c > index 556e613762..6983c026af 100644 > --- a/lib/bpf/bpf_load.c > +++ b/lib/bpf/bpf_load.c > @@ -88,7 +88,7 @@ rte_bpf_load(const struct rte_bpf_prm *prm) > int32_t rc; > uint32_t i; > > - if (prm == NULL || prm->ins == NULL || > + if (prm == NULL || prm->ins == NULL || prm->nb_ins == 0 || > (prm->nb_xsym != 0 && prm->xsym == NULL)) { > rte_errno = EINVAL; > return NULL; > diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c > index 4f47d6dc7b..23444b3eaa 100644 > --- a/lib/bpf/bpf_validate.c > +++ b/lib/bpf/bpf_validate.c > @@ -1827,7 +1827,7 @@ add_edge(struct bpf_verifier *bvf, struct inst_node > *node, uint32_t nidx) > { > uint32_t ne; > > - if (nidx > bvf->prm->nb_ins) { > + if (nidx >= bvf->prm->nb_ins) { > RTE_BPF_LOG_LINE(ERR, > "%s: program boundary violation at pc: %u, next pc: %u", > __func__, get_node_idx(bvf, node), nidx); > @@ -1886,14 +1886,20 @@ get_prev_node(struct bpf_verifier *bvf, struct > inst_node *node) > * Control Flow Graph (CFG). > * Information collected at this path would be used later > * to determine is there any loops, and/or unreachable instructions. > + * PREREQUISITE: there is at least one node. > */ > static void > dfs(struct bpf_verifier *bvf) > { > struct inst_node *next, *node; > > - node = bvf->in; > - while (node != NULL) { > + RTE_ASSERT(bvf->nb_nodes != 0); > + /* > + * Since there is at least one node, node with index 0 always exists; > + * it is our program entry point. 
> + */ > + node = &bvf->in[0]; > + do { > > if (node->colour == WHITE) > set_node_colour(bvf, node, GREY); > @@ -1923,7 +1929,7 @@ dfs(struct bpf_verifier *bvf) > } > } else > node = NULL; > - } > + } while (node != NULL); > } > > /* > @@ -2062,6 +2068,12 @@ validate(struct bpf_verifier *bvf) > if (rc != 0) > return rc; > > + if (bvf->nb_nodes == 0) { > + RTE_BPF_LOG_LINE(ERR, "%s(%p) the program is empty", > + __func__, bvf); > + return -EINVAL; > + } > + > dfs(bvf); > > RTE_LOG(DEBUG, BPF, "%s(%p) stats:\n" > -- Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com> > 2.43.0 ^ permalink raw reply [flat|nested] 51+ messages in thread
* [PATCH 3/3] bpf: make add/subtract one program validate 2025-11-10 15:30 [PATCH 0/3] bpf: simple tests and fixes Marat Khalili 2025-11-10 15:30 ` [PATCH 1/3] bpf: fix signed shift overflows in ARM JIT Marat Khalili 2025-11-10 15:30 ` [PATCH 2/3] bpf: disallow empty program Marat Khalili @ 2025-11-10 15:30 ` Marat Khalili 2025-11-12 15:37 ` Konstantin Ananyev 2025-12-16 18:20 ` [PATCH v2 0/5] bpf: simple tests and fixes Marat Khalili 3 siblings, 1 reply; 51+ messages in thread From: Marat Khalili @ 2025-11-10 15:30 UTC (permalink / raw) To: Konstantin Ananyev; +Cc: Stephen Hemminger, dev Add tests loading simple BPF programs adding or subtracting one to its argument and fix triggered signed integer overflow undefined behaviours: lib/bpf/bpf_validate.c:324:24: runtime error: signed integer overflow: 1 + 9223372036854775807 cannot be represented in type 'long int' lib/bpf/bpf_validate.c:352:24: runtime error: signed integer overflow: -9223372036854775808 - 1 cannot be represented in type 'long int' As a minimal possible fix perform operation on unsigned integers where overflow is well-defined, which was probably the original intent. Signed-off-by: Marat Khalili <marat.khalili@huawei.com> --- app/test/test_bpf_simple.c | 58 ++++++++++++++++++++++++++++++++++++++ lib/bpf/bpf_validate.c | 8 +++--- 2 files changed, 62 insertions(+), 4 deletions(-) diff --git a/app/test/test_bpf_simple.c b/app/test/test_bpf_simple.c index 576a6ed029..d4c5bbdc6e 100644 --- a/app/test/test_bpf_simple.c +++ b/app/test/test_bpf_simple.c @@ -129,3 +129,61 @@ test_simple_minimal_working(void) REGISTER_FAST_TEST(bpf_simple_minimal_working_autotest, true, true, test_simple_minimal_working); + +/* + * Try and load valid BPF program adding one to the argument. + */ +static int +test_simple_add_one(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Set return value to one. 
*/ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_K), + .dst_reg = EBPF_REG_0, + .imm = 1, + }, + { + /* Add program argument to the return value. */ + .code = (EBPF_ALU64 | BPF_ADD | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return simple_bpf_load_test(RTE_DIM(ins), ins, 0); +} + +REGISTER_FAST_TEST(bpf_simple_add_one_autotest, true, true, + test_simple_add_one); + +/* + * Try and load valid BPF program subtracting one from the argument. + */ +static int +test_simple_subtract_one(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Subtract one from the program argument. */ + .code = (EBPF_ALU64 | BPF_SUB | BPF_K), + .dst_reg = EBPF_REG_1, + .imm = 1, + }, + { + /* Set return value to the result. */ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return simple_bpf_load_test(RTE_DIM(ins), ins, 0); +} + +REGISTER_FAST_TEST(bpf_simple_subtract_one_autotest, true, true, + test_simple_subtract_one); diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c index 23444b3eaa..47ad6fef0f 100644 --- a/lib/bpf/bpf_validate.c +++ b/lib/bpf/bpf_validate.c @@ -243,8 +243,8 @@ eval_add(struct bpf_reg_val *rd, const struct bpf_reg_val *rs, uint64_t msk) rv.u.min = (rd->u.min + rs->u.min) & msk; rv.u.max = (rd->u.max + rs->u.max) & msk; - rv.s.min = (rd->s.min + rs->s.min) & msk; - rv.s.max = (rd->s.max + rs->s.max) & msk; + rv.s.min = ((uint64_t)rd->s.min + (uint64_t)rs->s.min) & msk; + rv.s.max = ((uint64_t)rd->s.max + (uint64_t)rs->s.max) & msk; /* * if at least one of the operands is not constant, @@ -272,8 +272,8 @@ eval_sub(struct bpf_reg_val *rd, const struct bpf_reg_val *rs, uint64_t msk) rv.u.min = (rd->u.min - rs->u.max) & msk; rv.u.max = (rd->u.max - rs->u.min) & msk; - rv.s.min = (rd->s.min - rs->s.max) & msk; - rv.s.max = (rd->s.max - rs->s.min) & msk; + rv.s.min = ((uint64_t)rd->s.min - 
(uint64_t)rs->s.max) & msk; + rv.s.max = ((uint64_t)rd->s.max - (uint64_t)rs->s.min) & msk; /* * if at least one of the operands is not constant, -- 2.43.0 ^ permalink raw reply related [flat|nested] 51+ messages in thread
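The eval_add()/eval_sub() fix works because unsigned 64-bit arithmetic wraps modulo 2^64 and the resulting bit pattern matches two's-complement signed arithmetic. A sketch (assuming the usual two's-complement representation; note the final uint64_t-to-int64_t conversion is implementation-defined rather than undefined in pre-C23 C):

```c
#include <assert.h>
#include <stdint.h>

/* Signed 64-bit add/sub with wraparound but without signed-overflow
 * UB: do the arithmetic in uint64_t, exactly as the patch does
 * before masking with `msk`. */
static int64_t
add_wrap_s64(int64_t a, int64_t b)
{
	return (int64_t)((uint64_t)a + (uint64_t)b);
}

static int64_t
sub_wrap_s64(int64_t a, int64_t b)
{
	return (int64_t)((uint64_t)a - (uint64_t)b);
}
```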
* RE: [PATCH 3/3] bpf: make add/subtract one program validate 2025-11-10 15:30 ` [PATCH 3/3] bpf: make add/subtract one program validate Marat Khalili @ 2025-11-12 15:37 ` Konstantin Ananyev 0 siblings, 0 replies; 51+ messages in thread From: Konstantin Ananyev @ 2025-11-12 15:37 UTC (permalink / raw) To: Marat Khalili; +Cc: Stephen Hemminger, dev@dpdk.org > Add tests loading simple BPF programs adding or subtracting one to its > argument and fix triggered signed integer overflow undefined behaviours: > > lib/bpf/bpf_validate.c:324:24: runtime error: signed integer > overflow: 1 + 9223372036854775807 cannot be represented in type > 'long int' > > lib/bpf/bpf_validate.c:352:24: runtime error: signed integer > overflow: -9223372036854775808 - 1 cannot be represented in type > 'long int' > > As a minimal possible fix perform operation on unsigned integers where > overflow is well-defined, which was probably the original intent. > > Signed-off-by: Marat Khalili <marat.khalili@huawei.com> > --- > app/test/test_bpf_simple.c | 58 ++++++++++++++++++++++++++++++++++++++ > lib/bpf/bpf_validate.c | 8 +++--- > 2 files changed, 62 insertions(+), 4 deletions(-) > > diff --git a/app/test/test_bpf_simple.c b/app/test/test_bpf_simple.c > index 576a6ed029..d4c5bbdc6e 100644 > --- a/app/test/test_bpf_simple.c > +++ b/app/test/test_bpf_simple.c > @@ -129,3 +129,61 @@ test_simple_minimal_working(void) > > REGISTER_FAST_TEST(bpf_simple_minimal_working_autotest, true, true, > test_simple_minimal_working); > + > +/* > + * Try and load valid BPF program adding one to the argument. > + */ > +static int > +test_simple_add_one(void) > +{ > + static const struct ebpf_insn ins[] = { > + { > + /* Set return value to one. */ > + .code = (EBPF_ALU64 | EBPF_MOV | BPF_K), > + .dst_reg = EBPF_REG_0, > + .imm = 1, > + }, > + { > + /* Add program argument to the return value. 
*/ > + .code = (EBPF_ALU64 | BPF_ADD | BPF_X), > + .src_reg = EBPF_REG_1, > + .dst_reg = EBPF_REG_0, > + }, > + { > + .code = (BPF_JMP | EBPF_EXIT), > + }, > + }; > + return simple_bpf_load_test(RTE_DIM(ins), ins, 0); > +} > + > +REGISTER_FAST_TEST(bpf_simple_add_one_autotest, true, true, > + test_simple_add_one); > + > +/* > + * Try and load valid BPF program subtracting one from the argument. > + */ > +static int > +test_simple_subtract_one(void) > +{ > + static const struct ebpf_insn ins[] = { > + { > + /* Subtract one from the program argument. */ > + .code = (EBPF_ALU64 | BPF_SUB | BPF_K), > + .dst_reg = EBPF_REG_1, > + .imm = 1, > + }, > + { > + /* Set return value to the result. */ > + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), > + .src_reg = EBPF_REG_1, > + .dst_reg = EBPF_REG_0, > + }, > + { > + .code = (BPF_JMP | EBPF_EXIT), > + }, > + }; > + return simple_bpf_load_test(RTE_DIM(ins), ins, 0); > +} > + > +REGISTER_FAST_TEST(bpf_simple_subtract_one_autotest, true, true, > + test_simple_subtract_one); > diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c > index 23444b3eaa..47ad6fef0f 100644 > --- a/lib/bpf/bpf_validate.c > +++ b/lib/bpf/bpf_validate.c > @@ -243,8 +243,8 @@ eval_add(struct bpf_reg_val *rd, const struct bpf_reg_val > *rs, uint64_t msk) > > rv.u.min = (rd->u.min + rs->u.min) & msk; > rv.u.max = (rd->u.max + rs->u.max) & msk; > - rv.s.min = (rd->s.min + rs->s.min) & msk; > - rv.s.max = (rd->s.max + rs->s.max) & msk; > + rv.s.min = ((uint64_t)rd->s.min + (uint64_t)rs->s.min) & msk; > + rv.s.max = ((uint64_t)rd->s.max + (uint64_t)rs->s.max) & msk; > > /* > * if at least one of the operands is not constant, > @@ -272,8 +272,8 @@ eval_sub(struct bpf_reg_val *rd, const struct bpf_reg_val > *rs, uint64_t msk) > > rv.u.min = (rd->u.min - rs->u.max) & msk; > rv.u.max = (rd->u.max - rs->u.min) & msk; > - rv.s.min = (rd->s.min - rs->s.max) & msk; > - rv.s.max = (rd->s.max - rs->s.min) & msk; > + rv.s.min = ((uint64_t)rd->s.min - 
(uint64_t)rs->s.max) & msk; > + rv.s.max = ((uint64_t)rd->s.max - (uint64_t)rs->s.min) & msk; > > /* > * if at least one of the operands is not constant, > -- Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com> > 2.43.0 ^ permalink raw reply [flat|nested] 51+ messages in thread
* [PATCH v2 0/5] bpf: simple tests and fixes 2025-11-10 15:30 [PATCH 0/3] bpf: simple tests and fixes Marat Khalili ` (2 preceding siblings ...) 2025-11-10 15:30 ` [PATCH 3/3] bpf: make add/subtract one program validate Marat Khalili @ 2025-12-16 18:20 ` Marat Khalili 2025-12-16 18:20 ` [PATCH v2 1/5] eal: variable first arguments of RTE_SHIFT_VALxx Marat Khalili ` (5 more replies) 3 siblings, 6 replies; 51+ messages in thread From: Marat Khalili @ 2025-12-16 18:20 UTC (permalink / raw) To: stephen, jerinjacobk, mb; +Cc: dev, stable, konstantin.ananyev Add simple unit-style tests for rte_bpf_load, and fix some minor discovered bugs. v2: * Moved new tests back into test_bpf.c * Changed library macros RTE_SHIFT_VALxx to handle variable first arguments instead of introducing new macros. * Added another test and fix, for conditional jump as first instruction. Marat Khalili (5): eal: variable first arguments of RTE_SHIFT_VALxx bpf: fix signed shift overflows in ARM JIT bpf: disallow empty program bpf: make add/subtract one program validate bpf: fix BPF validation w/ conditional jump first app/test/test_bpf.c | 254 +++++++++++++++++++++++++++++++++++ lib/bpf/bpf_jit_arm64.c | 162 +++++++++++----------- lib/bpf/bpf_load.c | 2 +- lib/bpf/bpf_validate.c | 48 ++++--- lib/eal/include/rte_bitops.h | 4 +- 5 files changed, 364 insertions(+), 106 deletions(-) -- 2.43.0 ^ permalink raw reply [flat|nested] 51+ messages in thread
* [PATCH v2 1/5] eal: variable first arguments of RTE_SHIFT_VALxx 2025-12-16 18:20 ` [PATCH v2 0/5] bpf: simple tests and fixes Marat Khalili @ 2025-12-16 18:20 ` Marat Khalili 2025-12-17 9:25 ` Morten Brørup 2025-12-16 18:20 ` [PATCH v2 2/5] bpf: fix signed shift overflows in ARM JIT Marat Khalili ` (4 subsequent siblings) 5 siblings, 1 reply; 51+ messages in thread From: Marat Khalili @ 2025-12-16 18:20 UTC (permalink / raw) To: stephen, jerinjacobk, mb, Jack Bond-Preston Cc: dev, stable, konstantin.ananyev Macros RTE_SHIFT_VAL32 and RTE_SHIFT_VAL64 were implemented by applying UINT32_C or UINT64_C correspondingly to their first argument. As a consequence the first argument had to be a constant. Replace UINT32_C and UINT64_C with casts to uint32_t and uint64_t to allow these arguments to be variable. For constants the result should be the same. (Yes, technically UINT64_C promotes to uint_least64_t, not uint64_t, but I think most users of RTE_SHIFT_VAL64 expect the result to be uint64_t.) Signed-off-by: Marat Khalili <marat.khalili@huawei.com> --- lib/eal/include/rte_bitops.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/lib/eal/include/rte_bitops.h b/lib/eal/include/rte_bitops.h index 2d1b9d281c..aa6ac73abb 100644 --- a/lib/eal/include/rte_bitops.h +++ b/lib/eal/include/rte_bitops.h @@ -51,7 +51,7 @@ extern "C" { * @param nr * The shift number in range of 0 to (32 - width of val). */ -#define RTE_SHIFT_VAL32(val, nr) (UINT32_C(val) << (nr)) +#define RTE_SHIFT_VAL32(val, nr) ((uint32_t)(val) << (nr)) /** * Get the uint64_t shifted value. @@ -61,7 +61,7 @@ extern "C" { * @param nr * The shift number in range of 0 to (64 - width of val). */ -#define RTE_SHIFT_VAL64(val, nr) (UINT64_C(val) << (nr)) +#define RTE_SHIFT_VAL64(val, nr) ((uint64_t)(val) << (nr)) /** * Generate a contiguous 32-bit mask -- 2.43.0 ^ permalink raw reply related [flat|nested] 51+ messages in thread
* RE: [PATCH v2 1/5] eal: variable first arguments of RTE_SHIFT_VALxx 2025-12-16 18:20 ` [PATCH v2 1/5] eal: variable first arguments of RTE_SHIFT_VALxx Marat Khalili @ 2025-12-17 9:25 ` Morten Brørup 0 siblings, 0 replies; 51+ messages in thread From: Morten Brørup @ 2025-12-17 9:25 UTC (permalink / raw) To: Marat Khalili, stephen, jerinjacobk, Jack Bond-Preston Cc: dev, stable, konstantin.ananyev > From: Marat Khalili [mailto:marat.khalili@huawei.com] > Sent: Tuesday, 16 December 2025 19.21 > > Macros RTE_SHIFT_VAL32 and RTE_SHIFT_VAL64 were implemented by applying > UINT32_C or UINT64_C correspondingly to its first argument. As a > consequence first argument had to be a constant. Replace UINT32_C and > UINT64_C with casts to uint32_t and uint64_t to allow these arguments > be > variable. For constants the result should be the same. > > (Yes, technically UINT64_C promotes to uint_least64_t, not uint64_t, > but > I think most users of RTE_SHIFT_VAL64 expect the result to be > uint64_t.) Interesting detail. And yes, I agree about expectations. > > Signed-off-by: Marat Khalili <marat.khalili@huawei.com> > --- > lib/eal/include/rte_bitops.h | 4 ++-- > 1 file changed, 2 insertions(+), 2 deletions(-) > > diff --git a/lib/eal/include/rte_bitops.h > b/lib/eal/include/rte_bitops.h > index 2d1b9d281c..aa6ac73abb 100644 > --- a/lib/eal/include/rte_bitops.h > +++ b/lib/eal/include/rte_bitops.h > @@ -51,7 +51,7 @@ extern "C" { > * @param nr > * The shift number in range of 0 to (32 - width of val). > */ > -#define RTE_SHIFT_VAL32(val, nr) (UINT32_C(val) << (nr)) > +#define RTE_SHIFT_VAL32(val, nr) ((uint32_t)(val) << (nr)) > > /** > * Get the uint64_t shifted value. > @@ -61,7 +61,7 @@ extern "C" { > * @param nr > * The shift number in range of 0 to (64 - width of val). 
> */ > -#define RTE_SHIFT_VAL64(val, nr) (UINT64_C(val) << (nr)) > +#define RTE_SHIFT_VAL64(val, nr) ((uint64_t)(val) << (nr)) > > /** > * Generate a contiguous 32-bit mask > -- > 2.43.0 Reviewed-by: Morten Brørup <mb@smartsharesystems.com> ^ permalink raw reply [flat|nested] 51+ messages in thread
* [PATCH v2 2/5] bpf: fix signed shift overflows in ARM JIT 2025-12-16 18:20 ` [PATCH v2 0/5] bpf: simple tests and fixes Marat Khalili 2025-12-16 18:20 ` [PATCH v2 1/5] eal: variable first arguments of RTE_SHIFT_VALxx Marat Khalili @ 2025-12-16 18:20 ` Marat Khalili 2025-12-17 9:49 ` Morten Brørup 2025-12-16 18:20 ` [PATCH v2 3/5] bpf: disallow empty program Marat Khalili ` (3 subsequent siblings) 5 siblings, 1 reply; 51+ messages in thread From: Marat Khalili @ 2025-12-16 18:20 UTC (permalink / raw) To: stephen, jerinjacobk, mb, Wathsala Vithanage, Konstantin Ananyev Cc: dev, stable Left shifts of integer literals and bool values overwriting the sign bit were used multiple times in bpf_jit_arm64.c. E.g.: insn = (!!is64) << 31; where is64 has type bool (double bang is a no-op here). The operand of left shift was promoted to type int, which when 32-bit wide cannot represent the result. Similarly literal integers have int type by default. Sanitizer produced the following diagnostic during runtime (for various lines): lib/bpf/bpf_jit_arm64.c:241:18: runtime error: left shift of 1 by 31 places cannot be represented in type 'int' To fix the issue use RTE_BIT32 and similar macros instead. Signed-off-by: Marat Khalili <marat.khalili@huawei.com> --- lib/bpf/bpf_jit_arm64.c | 162 ++++++++++++++++++++-------------------- 1 file changed, 81 insertions(+), 81 deletions(-) diff --git a/lib/bpf/bpf_jit_arm64.c b/lib/bpf/bpf_jit_arm64.c index 96b8cd2e03..76df1e4ba1 100644 --- a/lib/bpf/bpf_jit_arm64.c +++ b/lib/bpf/bpf_jit_arm64.c @@ -28,7 +28,7 @@ #define A64_ZR 31 #define check_imm(n, val) (((val) >= 0) ? 
!!((val) >> (n)) : !!((~val) >> (n))) -#define mask_imm(n, val) ((val) & ((1 << (n)) - 1)) +#define mask_imm(n, val) ((val) & (RTE_BIT32(n) - 1)) struct ebpf_a64_map { uint32_t off; /* eBPF to arm64 insn offset mapping for jump */ @@ -238,12 +238,12 @@ emit_add_sub_imm(struct a64_jit_ctx *ctx, bool is64, bool sub, uint8_t rd, uint32_t insn, imm; imm = mask_imm(12, imm12); - insn = (!!is64) << 31; - insn |= (!!sub) << 30; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(sub, 30); insn |= 0x11000000; insn |= rd; - insn |= rn << 5; - insn |= imm << 10; + insn |= RTE_SHIFT_VAL32(rn, 5); + insn |= RTE_SHIFT_VAL32(imm, 10); emit_insn(ctx, insn, check_reg(rd) || check_reg(rn) || check_imm(12, imm12)); @@ -279,16 +279,16 @@ emit_ls_pair_64(struct a64_jit_ctx *ctx, uint8_t rt, uint8_t rt2, uint8_t rn, { uint32_t insn; - insn = (!!load) << 22; - insn |= (!!pre_index) << 24; + insn = RTE_SHIFT_VAL32(load, 22); + insn |= RTE_SHIFT_VAL32(pre_index, 24); insn |= 0xa8800000; insn |= rt; - insn |= rn << 5; - insn |= rt2 << 10; + insn |= RTE_SHIFT_VAL32(rn, 5); + insn |= RTE_SHIFT_VAL32(rt2, 10); if (push) - insn |= 0x7e << 15; /* 0x7e means -2 with imm7 */ + insn |= RTE_SHIFT_VAL32(0x7e, 15); /* 0x7e means -2 with imm7 */ else - insn |= 0x2 << 15; + insn |= RTE_SHIFT_VAL32(0x2, 15); emit_insn(ctx, insn, check_reg(rn) || check_reg(rt) || check_reg(rt2)); @@ -317,11 +317,11 @@ mov_imm(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t type, { uint32_t insn; - insn = (!!is64) << 31; - insn |= type << 29; - insn |= 0x25 << 23; - insn |= (shift/16) << 21; - insn |= imm16 << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(type, 29); + insn |= RTE_SHIFT_VAL32(0x25, 23); + insn |= RTE_SHIFT_VAL32(shift/16, 21); + insn |= RTE_SHIFT_VAL32(imm16, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_mov_hw(is64, shift)); @@ -334,7 +334,7 @@ emit_mov_imm32(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint32_t val) uint16_t lower = val & 0xffff; /* 
Positive number */ - if ((val & 1UL << 31) == 0) { + if ((val & RTE_BIT32(31)) == 0) { mov_imm(ctx, is64, rd, A64_MOVZ, lower, 0); if (upper) mov_imm(ctx, is64, rd, A64_MOVK, upper, 16); @@ -393,21 +393,21 @@ emit_ls(struct a64_jit_ctx *ctx, uint8_t sz, uint8_t rt, uint8_t rn, uint8_t rm, { uint32_t insn; - insn = 0x1c1 << 21; + insn = RTE_SHIFT_VAL32(0x1c1, 21); if (load) - insn |= 1 << 22; + insn |= RTE_BIT32(22); if (sz == BPF_B) - insn |= 0 << 30; + insn |= RTE_SHIFT_VAL32(0, 30); else if (sz == BPF_H) - insn |= 1 << 30; + insn |= RTE_SHIFT_VAL32(1, 30); else if (sz == BPF_W) - insn |= 2 << 30; + insn |= RTE_SHIFT_VAL32(2, 30); else if (sz == EBPF_DW) - insn |= 3 << 30; + insn |= RTE_SHIFT_VAL32(3, 30); - insn |= rm << 16; - insn |= 0x1a << 10; /* LSL and S = 0 */ - insn |= rn << 5; + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(0x1a, 10); /* LSL and S = 0 */ + insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rt; emit_insn(ctx, insn, check_reg(rt) || check_reg(rn) || check_reg(rm) || @@ -436,10 +436,10 @@ emit_add_sub(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t rn, { uint32_t insn; - insn = (!!is64) << 31; - insn |= op << 21; /* shift == 0 */ - insn |= rm << 16; - insn |= rn << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(op, 21); /* shift == 0 */ + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); @@ -468,11 +468,11 @@ emit_mul(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t rm) { uint32_t insn; - insn = (!!is64) << 31; - insn |= 0xd8 << 21; - insn |= rm << 16; - insn |= A64_ZR << 10; - insn |= rd << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(0xd8, 21); + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(A64_ZR, 10); + insn |= RTE_SHIFT_VAL32(rd, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); @@ -489,11 +489,11 @@ emit_data_process_two_src(struct a64_jit_ctx *ctx, bool is64, 
uint8_t rd, { uint32_t insn; - insn = (!!is64) << 31; - insn |= 0xd6 << 21; - insn |= rm << 16; - insn |= op << 10; - insn |= rn << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(0xd6, 21); + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(op, 10); + insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); @@ -532,14 +532,14 @@ emit_bitfield(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t rn, { uint32_t insn; - insn = (!!is64) << 31; + insn = RTE_SHIFT_VAL32(is64, 31); if (insn) - insn |= 1 << 22; /* Set N bit when is64 is set */ - insn |= op << 29; - insn |= 0x26 << 23; - insn |= immr << 16; - insn |= imms << 10; - insn |= rn << 5; + insn |= RTE_BIT32(22); /* Set N bit when is64 is set */ + insn |= RTE_SHIFT_VAL32(op, 29); + insn |= RTE_SHIFT_VAL32(0x26, 23); + insn |= RTE_SHIFT_VAL32(immr, 16); + insn |= RTE_SHIFT_VAL32(imms, 10); + insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rn) || @@ -578,11 +578,11 @@ emit_logical(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, { uint32_t insn; - insn = (!!is64) << 31; - insn |= op << 29; - insn |= 0x50 << 21; - insn |= rm << 16; - insn |= rd << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(op, 29); + insn |= RTE_SHIFT_VAL32(0x50, 21); + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(rd, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); @@ -612,12 +612,12 @@ emit_msub(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t rn, { uint32_t insn; - insn = (!!is64) << 31; - insn |= 0xd8 << 21; - insn |= rm << 16; - insn |= 0x1 << 15; - insn |= ra << 10; - insn |= rn << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(0xd8, 21); + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(0x1, 15); + insn |= RTE_SHIFT_VAL32(ra, 10); + insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rn) || 
check_reg(rm) || @@ -638,7 +638,7 @@ emit_blr(struct a64_jit_ctx *ctx, uint8_t rn) uint32_t insn; insn = 0xd63f0000; - insn |= rn << 5; + insn |= RTE_SHIFT_VAL32(rn, 5); emit_insn(ctx, insn, check_reg(rn)); } @@ -669,22 +669,22 @@ emit_rev(struct a64_jit_ctx *ctx, uint8_t rd, int32_t imm) uint32_t insn; insn = 0xdac00000; - insn |= rd << 5; + insn |= RTE_SHIFT_VAL32(rd, 5); insn |= rd; switch (imm) { case 16: - insn |= 1 << 10; + insn |= RTE_SHIFT_VAL32(1, 10); emit_insn(ctx, insn, check_reg(rd)); emit_zero_extend(ctx, rd, 16); break; case 32: - insn |= 2 << 10; + insn |= RTE_SHIFT_VAL32(2, 10); emit_insn(ctx, insn, check_reg(rd)); /* Upper 32 bits already cleared */ break; case 64: - insn |= 3 << 10; + insn |= RTE_SHIFT_VAL32(3, 10); emit_insn(ctx, insn, check_reg(rd)); break; default: @@ -933,9 +933,9 @@ emit_cbnz(struct a64_jit_ctx *ctx, bool is64, uint8_t rt, int32_t imm19) uint32_t insn, imm; imm = mask_imm(19, imm19); - insn = (!!is64) << 31; - insn |= 0x35 << 24; - insn |= imm << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(0x35, 24); + insn |= RTE_SHIFT_VAL32(imm, 5); insn |= rt; emit_insn(ctx, insn, check_reg(rt) || check_imm(19, imm19)); @@ -947,7 +947,7 @@ emit_b(struct a64_jit_ctx *ctx, int32_t imm26) uint32_t insn, imm; imm = mask_imm(26, imm26); - insn = 0x5 << 26; + insn = RTE_SHIFT_VAL32(0x5, 26); insn |= imm; emit_insn(ctx, insn, check_imm(26, imm26)); @@ -971,9 +971,9 @@ emit_stadd(struct a64_jit_ctx *ctx, bool is64, uint8_t rs, uint8_t rn) uint32_t insn; insn = 0xb820001f; - insn |= (!!is64) << 30; - insn |= rs << 16; - insn |= rn << 5; + insn |= RTE_SHIFT_VAL32(is64, 30); + insn |= RTE_SHIFT_VAL32(rs, 16); + insn |= RTE_SHIFT_VAL32(rn, 5); emit_insn(ctx, insn, check_reg(rs) || check_reg(rn)); } @@ -984,8 +984,8 @@ emit_ldxr(struct a64_jit_ctx *ctx, bool is64, uint8_t rt, uint8_t rn) uint32_t insn; insn = 0x885f7c00; - insn |= (!!is64) << 30; - insn |= rn << 5; + insn |= RTE_SHIFT_VAL32(is64, 30); + insn |= RTE_SHIFT_VAL32(rn, 
5); insn |= rt; emit_insn(ctx, insn, check_reg(rt) || check_reg(rn)); @@ -998,9 +998,9 @@ emit_stxr(struct a64_jit_ctx *ctx, bool is64, uint8_t rs, uint8_t rt, uint32_t insn; insn = 0x88007c00; - insn |= (!!is64) << 30; - insn |= rs << 16; - insn |= rn << 5; + insn |= RTE_SHIFT_VAL32(is64, 30); + insn |= RTE_SHIFT_VAL32(rs, 16); + insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rt; emit_insn(ctx, insn, check_reg(rs) || check_reg(rt) || check_reg(rn)); @@ -1051,9 +1051,9 @@ emit_cmp_tst(struct a64_jit_ctx *ctx, bool is64, uint8_t rn, uint8_t rm, uint32_t insn; insn = opc; - insn |= (!!is64) << 31; - insn |= rm << 16; - insn |= rn << 5; + insn |= RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(rn, 5); emit_insn(ctx, insn, check_reg(rn) || check_reg(rm)); } @@ -1076,8 +1076,8 @@ emit_b_cond(struct a64_jit_ctx *ctx, uint8_t cond, int32_t imm19) uint32_t insn, imm; imm = mask_imm(19, imm19); - insn = 0x15 << 26; - insn |= imm << 5; + insn = RTE_SHIFT_VAL32(0x15, 26); + insn |= RTE_SHIFT_VAL32(imm, 5); insn |= cond; emit_insn(ctx, insn, check_cond(cond) || check_imm(19, imm19)); @@ -1301,7 +1301,7 @@ emit(struct a64_jit_ctx *ctx, struct rte_bpf *bpf) break; /* dst = imm64 */ case (BPF_LD | BPF_IMM | EBPF_DW): - u64 = ((uint64_t)ins[1].imm << 32) | (uint32_t)imm; + u64 = RTE_SHIFT_VAL64(ins[1].imm, 32) | (uint32_t)imm; emit_mov_imm(ctx, 1, dst, u64); i++; break; -- 2.43.0 ^ permalink raw reply related [flat|nested] 51+ messages in thread
* RE: [PATCH v2 2/5] bpf: fix signed shift overflows in ARM JIT 2025-12-16 18:20 ` [PATCH v2 2/5] bpf: fix signed shift overflows in ARM JIT Marat Khalili @ 2025-12-17 9:49 ` Morten Brørup 0 siblings, 0 replies; 51+ messages in thread From: Morten Brørup @ 2025-12-17 9:49 UTC (permalink / raw) To: Marat Khalili, stephen, jerinjacobk, Wathsala Vithanage, Konstantin Ananyev Cc: dev, stable > From: Marat Khalili [mailto:marat.khalili@huawei.com] > Sent: Tuesday, 16 December 2025 19.21 > > Left shifts of integer literals and bool values overwriting the sign > bit > were used multiple times in bpf_jit_arm64.c. E.g.: > > insn = (!!is64) << 31; > > where is64 has type bool (double bang is a no-op here). Yes, multiple online sources [1] confirms that the bool type can only hold two values, 0 and 1, and converting any non-zero value to bool results in 1. So, using !!x where x is a bool is literally a no-op, since x - being a bool - could already only hold the values 0 and 1. [1]: E.g. https://www.gnu.org/software/c-intro-and-ref/manual/html_node/Boolean-Type.html https://embedded.fm/blog/2016/5/3/ew-bools > The operand of > left shift was promoted to type int, which when 32-bit wide cannot > represent the result. Similarly literal integers have int type by > default. Sanitizer produced the following diagnostic during runtime > (for various lines): > > lib/bpf/bpf_jit_arm64.c:241:18: runtime error: left shift of 1 by > 31 > places cannot be represented in type 'int' > > To fix the issue use RTE_BIT32 and similar macros instead. > > Signed-off-by: Marat Khalili <marat.khalili@huawei.com> > --- > lib/bpf/bpf_jit_arm64.c | 162 ++++++++++++++++++++-------------------- > 1 file changed, 81 insertions(+), 81 deletions(-) > > diff --git a/lib/bpf/bpf_jit_arm64.c b/lib/bpf/bpf_jit_arm64.c > index 96b8cd2e03..76df1e4ba1 100644 > --- a/lib/bpf/bpf_jit_arm64.c > +++ b/lib/bpf/bpf_jit_arm64.c > @@ -28,7 +28,7 @@ > #define A64_ZR 31 > > #define check_imm(n, val) (((val) >= 0) ? 
!!((val) >> (n)) : !!((~val) > >> (n))) > -#define mask_imm(n, val) ((val) & ((1 << (n)) - 1)) > +#define mask_imm(n, val) ((val) & (RTE_BIT32(n) - 1)) > > struct ebpf_a64_map { > uint32_t off; /* eBPF to arm64 insn offset mapping for jump */ > @@ -238,12 +238,12 @@ emit_add_sub_imm(struct a64_jit_ctx *ctx, bool > is64, bool sub, uint8_t rd, > uint32_t insn, imm; > > imm = mask_imm(12, imm12); > - insn = (!!is64) << 31; > - insn |= (!!sub) << 30; > + insn = RTE_SHIFT_VAL32(is64, 31); > + insn |= RTE_SHIFT_VAL32(sub, 30); As mentioned above, !!is64 and !!sub are literally no-ops, so it is safe to remove them. > insn |= 0x11000000; > insn |= rd; > - insn |= rn << 5; > - insn |= imm << 10; > + insn |= RTE_SHIFT_VAL32(rn, 5); > + insn |= RTE_SHIFT_VAL32(imm, 10); > > emit_insn(ctx, insn, > check_reg(rd) || check_reg(rn) || check_imm(12, imm12)); > @@ -279,16 +279,16 @@ emit_ls_pair_64(struct a64_jit_ctx *ctx, uint8_t > rt, uint8_t rt2, uint8_t rn, > { > uint32_t insn; > > - insn = (!!load) << 22; > - insn |= (!!pre_index) << 24; > + insn = RTE_SHIFT_VAL32(load, 22); > + insn |= RTE_SHIFT_VAL32(pre_index, 24); > insn |= 0xa8800000; This constant has the high bit set, consider: insn |= UINT32_C(0xa8800000); And for consistency, consider also doing it for constants where the high bit isn't set. Throughout the file. 
> insn |= rt; > - insn |= rn << 5; > - insn |= rt2 << 10; > + insn |= RTE_SHIFT_VAL32(rn, 5); > + insn |= RTE_SHIFT_VAL32(rt2, 10); > if (push) > - insn |= 0x7e << 15; /* 0x7e means -2 with imm7 */ > + insn |= RTE_SHIFT_VAL32(0x7e, 15); /* 0x7e means -2 with > imm7 */ > else > - insn |= 0x2 << 15; > + insn |= RTE_SHIFT_VAL32(0x2, 15); > > emit_insn(ctx, insn, check_reg(rn) || check_reg(rt) || > check_reg(rt2)); > > @@ -317,11 +317,11 @@ mov_imm(struct a64_jit_ctx *ctx, bool is64, > uint8_t rd, uint8_t type, > { > uint32_t insn; > > - insn = (!!is64) << 31; > - insn |= type << 29; > - insn |= 0x25 << 23; > - insn |= (shift/16) << 21; > - insn |= imm16 << 5; > + insn = RTE_SHIFT_VAL32(is64, 31); > + insn |= RTE_SHIFT_VAL32(type, 29); > + insn |= RTE_SHIFT_VAL32(0x25, 23); > + insn |= RTE_SHIFT_VAL32(shift/16, 21); > + insn |= RTE_SHIFT_VAL32(imm16, 5); > insn |= rd; > > emit_insn(ctx, insn, check_reg(rd) || check_mov_hw(is64, shift)); > @@ -334,7 +334,7 @@ emit_mov_imm32(struct a64_jit_ctx *ctx, bool is64, > uint8_t rd, uint32_t val) > uint16_t lower = val & 0xffff; > > /* Positive number */ > - if ((val & 1UL << 31) == 0) { > + if ((val & RTE_BIT32(31)) == 0) { > mov_imm(ctx, is64, rd, A64_MOVZ, lower, 0); > if (upper) > mov_imm(ctx, is64, rd, A64_MOVK, upper, 16); > @@ -393,21 +393,21 @@ emit_ls(struct a64_jit_ctx *ctx, uint8_t sz, > uint8_t rt, uint8_t rn, uint8_t rm, > { > uint32_t insn; > > - insn = 0x1c1 << 21; > + insn = RTE_SHIFT_VAL32(0x1c1, 21); > if (load) > - insn |= 1 << 22; > + insn |= RTE_BIT32(22); > if (sz == BPF_B) > - insn |= 0 << 30; > + insn |= RTE_SHIFT_VAL32(0, 30); > else if (sz == BPF_H) > - insn |= 1 << 30; > + insn |= RTE_SHIFT_VAL32(1, 30); > else if (sz == BPF_W) > - insn |= 2 << 30; > + insn |= RTE_SHIFT_VAL32(2, 30); > else if (sz == EBPF_DW) > - insn |= 3 << 30; > + insn |= RTE_SHIFT_VAL32(3, 30); > > - insn |= rm << 16; > - insn |= 0x1a << 10; /* LSL and S = 0 */ > - insn |= rn << 5; > + insn |= RTE_SHIFT_VAL32(rm, 16); > + insn |= 
RTE_SHIFT_VAL32(0x1a, 10); /* LSL and S = 0 */ > + insn |= RTE_SHIFT_VAL32(rn, 5); > insn |= rt; > > emit_insn(ctx, insn, check_reg(rt) || check_reg(rn) || > check_reg(rm) || > @@ -436,10 +436,10 @@ emit_add_sub(struct a64_jit_ctx *ctx, bool is64, > uint8_t rd, uint8_t rn, > { > uint32_t insn; > > - insn = (!!is64) << 31; > - insn |= op << 21; /* shift == 0 */ > - insn |= rm << 16; > - insn |= rn << 5; > + insn = RTE_SHIFT_VAL32(is64, 31); > + insn |= RTE_SHIFT_VAL32(op, 21); /* shift == 0 */ > + insn |= RTE_SHIFT_VAL32(rm, 16); > + insn |= RTE_SHIFT_VAL32(rn, 5); > insn |= rd; > > emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); > @@ -468,11 +468,11 @@ emit_mul(struct a64_jit_ctx *ctx, bool is64, > uint8_t rd, uint8_t rm) > { > uint32_t insn; > > - insn = (!!is64) << 31; > - insn |= 0xd8 << 21; > - insn |= rm << 16; > - insn |= A64_ZR << 10; > - insn |= rd << 5; > + insn = RTE_SHIFT_VAL32(is64, 31); > + insn |= RTE_SHIFT_VAL32(0xd8, 21); > + insn |= RTE_SHIFT_VAL32(rm, 16); > + insn |= RTE_SHIFT_VAL32(A64_ZR, 10); > + insn |= RTE_SHIFT_VAL32(rd, 5); > insn |= rd; > > emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); > @@ -489,11 +489,11 @@ emit_data_process_two_src(struct a64_jit_ctx > *ctx, bool is64, uint8_t rd, > { > uint32_t insn; > > - insn = (!!is64) << 31; > - insn |= 0xd6 << 21; > - insn |= rm << 16; > - insn |= op << 10; > - insn |= rn << 5; > + insn = RTE_SHIFT_VAL32(is64, 31); > + insn |= RTE_SHIFT_VAL32(0xd6, 21); > + insn |= RTE_SHIFT_VAL32(rm, 16); > + insn |= RTE_SHIFT_VAL32(op, 10); > + insn |= RTE_SHIFT_VAL32(rn, 5); > insn |= rd; > > emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); > @@ -532,14 +532,14 @@ emit_bitfield(struct a64_jit_ctx *ctx, bool is64, > uint8_t rd, uint8_t rn, > { > uint32_t insn; > > - insn = (!!is64) << 31; > + insn = RTE_SHIFT_VAL32(is64, 31); > if (insn) > - insn |= 1 << 22; /* Set N bit when is64 is set */ > - insn |= op << 29; > - insn |= 0x26 << 23; > - insn |= immr << 16; > - insn |= imms << 10; > - insn 
|= rn << 5; > + insn |= RTE_BIT32(22); /* Set N bit when is64 is set */ > + insn |= RTE_SHIFT_VAL32(op, 29); > + insn |= RTE_SHIFT_VAL32(0x26, 23); > + insn |= RTE_SHIFT_VAL32(immr, 16); > + insn |= RTE_SHIFT_VAL32(imms, 10); > + insn |= RTE_SHIFT_VAL32(rn, 5); > insn |= rd; > > emit_insn(ctx, insn, check_reg(rd) || check_reg(rn) || > @@ -578,11 +578,11 @@ emit_logical(struct a64_jit_ctx *ctx, bool is64, > uint8_t rd, > { > uint32_t insn; > > - insn = (!!is64) << 31; > - insn |= op << 29; > - insn |= 0x50 << 21; > - insn |= rm << 16; > - insn |= rd << 5; > + insn = RTE_SHIFT_VAL32(is64, 31); > + insn |= RTE_SHIFT_VAL32(op, 29); > + insn |= RTE_SHIFT_VAL32(0x50, 21); > + insn |= RTE_SHIFT_VAL32(rm, 16); > + insn |= RTE_SHIFT_VAL32(rd, 5); > insn |= rd; > > emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); > @@ -612,12 +612,12 @@ emit_msub(struct a64_jit_ctx *ctx, bool is64, > uint8_t rd, uint8_t rn, > { > uint32_t insn; > > - insn = (!!is64) << 31; > - insn |= 0xd8 << 21; > - insn |= rm << 16; > - insn |= 0x1 << 15; > - insn |= ra << 10; > - insn |= rn << 5; > + insn = RTE_SHIFT_VAL32(is64, 31); > + insn |= RTE_SHIFT_VAL32(0xd8, 21); > + insn |= RTE_SHIFT_VAL32(rm, 16); > + insn |= RTE_SHIFT_VAL32(0x1, 15); > + insn |= RTE_SHIFT_VAL32(ra, 10); > + insn |= RTE_SHIFT_VAL32(rn, 5); > insn |= rd; > > emit_insn(ctx, insn, check_reg(rd) || check_reg(rn) || > check_reg(rm) || > @@ -638,7 +638,7 @@ emit_blr(struct a64_jit_ctx *ctx, uint8_t rn) > uint32_t insn; > > insn = 0xd63f0000; > - insn |= rn << 5; > + insn |= RTE_SHIFT_VAL32(rn, 5); > > emit_insn(ctx, insn, check_reg(rn)); > } > @@ -669,22 +669,22 @@ emit_rev(struct a64_jit_ctx *ctx, uint8_t rd, > int32_t imm) > uint32_t insn; > > insn = 0xdac00000; > - insn |= rd << 5; > + insn |= RTE_SHIFT_VAL32(rd, 5); > insn |= rd; > > switch (imm) { > case 16: > - insn |= 1 << 10; > + insn |= RTE_SHIFT_VAL32(1, 10); > emit_insn(ctx, insn, check_reg(rd)); > emit_zero_extend(ctx, rd, 16); > break; > case 32: > - insn |= 2 << 
10; > + insn |= RTE_SHIFT_VAL32(2, 10); > emit_insn(ctx, insn, check_reg(rd)); > /* Upper 32 bits already cleared */ > break; > case 64: > - insn |= 3 << 10; > + insn |= RTE_SHIFT_VAL32(3, 10); > emit_insn(ctx, insn, check_reg(rd)); > break; > default: > @@ -933,9 +933,9 @@ emit_cbnz(struct a64_jit_ctx *ctx, bool is64, > uint8_t rt, int32_t imm19) > uint32_t insn, imm; > > imm = mask_imm(19, imm19); > - insn = (!!is64) << 31; > - insn |= 0x35 << 24; > - insn |= imm << 5; > + insn = RTE_SHIFT_VAL32(is64, 31); > + insn |= RTE_SHIFT_VAL32(0x35, 24); > + insn |= RTE_SHIFT_VAL32(imm, 5); > insn |= rt; > > emit_insn(ctx, insn, check_reg(rt) || check_imm(19, imm19)); > @@ -947,7 +947,7 @@ emit_b(struct a64_jit_ctx *ctx, int32_t imm26) > uint32_t insn, imm; > > imm = mask_imm(26, imm26); > - insn = 0x5 << 26; > + insn = RTE_SHIFT_VAL32(0x5, 26); > insn |= imm; > > emit_insn(ctx, insn, check_imm(26, imm26)); > @@ -971,9 +971,9 @@ emit_stadd(struct a64_jit_ctx *ctx, bool is64, > uint8_t rs, uint8_t rn) > uint32_t insn; > > insn = 0xb820001f; > - insn |= (!!is64) << 30; > - insn |= rs << 16; > - insn |= rn << 5; > + insn |= RTE_SHIFT_VAL32(is64, 30); > + insn |= RTE_SHIFT_VAL32(rs, 16); > + insn |= RTE_SHIFT_VAL32(rn, 5); > > emit_insn(ctx, insn, check_reg(rs) || check_reg(rn)); > } > @@ -984,8 +984,8 @@ emit_ldxr(struct a64_jit_ctx *ctx, bool is64, > uint8_t rt, uint8_t rn) > uint32_t insn; > > insn = 0x885f7c00; > - insn |= (!!is64) << 30; > - insn |= rn << 5; > + insn |= RTE_SHIFT_VAL32(is64, 30); > + insn |= RTE_SHIFT_VAL32(rn, 5); > insn |= rt; > > emit_insn(ctx, insn, check_reg(rt) || check_reg(rn)); > @@ -998,9 +998,9 @@ emit_stxr(struct a64_jit_ctx *ctx, bool is64, > uint8_t rs, uint8_t rt, > uint32_t insn; > > insn = 0x88007c00; > - insn |= (!!is64) << 30; > - insn |= rs << 16; > - insn |= rn << 5; > + insn |= RTE_SHIFT_VAL32(is64, 30); > + insn |= RTE_SHIFT_VAL32(rs, 16); > + insn |= RTE_SHIFT_VAL32(rn, 5); > insn |= rt; > > emit_insn(ctx, insn, check_reg(rs) || 
check_reg(rt) || > check_reg(rn)); > @@ -1051,9 +1051,9 @@ emit_cmp_tst(struct a64_jit_ctx *ctx, bool is64, > uint8_t rn, uint8_t rm, > uint32_t insn; > > insn = opc; > - insn |= (!!is64) << 31; > - insn |= rm << 16; > - insn |= rn << 5; > + insn |= RTE_SHIFT_VAL32(is64, 31); > + insn |= RTE_SHIFT_VAL32(rm, 16); > + insn |= RTE_SHIFT_VAL32(rn, 5); > > emit_insn(ctx, insn, check_reg(rn) || check_reg(rm)); > } > @@ -1076,8 +1076,8 @@ emit_b_cond(struct a64_jit_ctx *ctx, uint8_t > cond, int32_t imm19) > uint32_t insn, imm; > > imm = mask_imm(19, imm19); > - insn = 0x15 << 26; > - insn |= imm << 5; > + insn = RTE_SHIFT_VAL32(0x15, 26); > + insn |= RTE_SHIFT_VAL32(imm, 5); > insn |= cond; > > emit_insn(ctx, insn, check_cond(cond) || check_imm(19, imm19)); > @@ -1301,7 +1301,7 @@ emit(struct a64_jit_ctx *ctx, struct rte_bpf > *bpf) > break; > /* dst = imm64 */ > case (BPF_LD | BPF_IMM | EBPF_DW): > - u64 = ((uint64_t)ins[1].imm << 32) | (uint32_t)imm; > + u64 = RTE_SHIFT_VAL64(ins[1].imm, 32) | > (uint32_t)imm; > emit_mov_imm(ctx, 1, dst, u64); > i++; > break; > -- > 2.43.0 With or without suggested changes, Acked-by: Morten Brørup <mb@smartsharesystems.com> ^ permalink raw reply [flat|nested] 51+ messages in thread
* [PATCH v2 3/5] bpf: disallow empty program 2025-12-16 18:20 ` [PATCH v2 0/5] bpf: simple tests and fixes Marat Khalili 2025-12-16 18:20 ` [PATCH v2 1/5] eal: variable first arguments of RTE_SHIFT_VALxx Marat Khalili 2025-12-16 18:20 ` [PATCH v2 2/5] bpf: fix signed shift overflows in ARM JIT Marat Khalili @ 2025-12-16 18:20 ` Marat Khalili 2025-12-18 0:54 ` Stephen Hemminger 2025-12-16 18:20 ` [PATCH v2 4/5] bpf: make add/subtract one program validate Marat Khalili ` (2 subsequent siblings) 5 siblings, 1 reply; 51+ messages in thread From: Marat Khalili @ 2025-12-16 18:20 UTC (permalink / raw) To: stephen, jerinjacobk, mb, Konstantin Ananyev; +Cc: dev, stable Add tests for some simple cases: * Program with no instructions; * Program with only EXIT instruction but no return value set; * Program with return value set but no EXIT instruction; * Minimal valid program with return value set and an EXIT instruction. Fix found bugs: * a program with no instructions was accepted; * a program with no EXIT instruction read outside the buffer. Signed-off-by: Marat Khalili <marat.khalili@huawei.com> Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com> --- app/test/test_bpf.c | 118 +++++++++++++++++++++++++++++++++++++++++ lib/bpf/bpf_load.c | 2 +- lib/bpf/bpf_validate.c | 20 +++++-- 3 files changed, 135 insertions(+), 5 deletions(-) diff --git a/app/test/test_bpf.c b/app/test/test_bpf.c index b7c94ba1c7..6ecc49efff 100644 --- a/app/test/test_bpf.c +++ b/app/test/test_bpf.c @@ -34,6 +34,124 @@ test_bpf(void) #include <rte_ip.h> +/* Tests of most simple BPF programs (no instructions, one instruction etc.) */ + +/* + * Try to load a simple bpf program from the instructions array. + * + * When `expected_errno` is zero, expect it to load successfully. + * When `expected_errno` is non-zero, expect it to fail with this `rte_errno`. + * + * @param nb_ins + * Number of instructions in the `ins` array. + * @param ins + * BPF instructions array. 
+ * @param expected_errno + * Expected result. + * @return + * TEST_SUCCESS on success, error code on failure. + */ +static int +bpf_load_test(uint32_t nb_ins, const struct ebpf_insn *ins, int expected_errno) +{ + const struct rte_bpf_prm prm = { + .ins = ins, + .nb_ins = nb_ins, + .prog_arg = { + .type = RTE_BPF_ARG_RAW, + .size = sizeof(uint64_t), + }, + }; + + struct rte_bpf *const bpf = rte_bpf_load(&prm); + const int actual_errno = rte_errno; + rte_bpf_destroy(bpf); + + if (expected_errno != 0) { + RTE_TEST_ASSERT_EQUAL(bpf, NULL, + "expect rte_bpf_load() == NULL"); + RTE_TEST_ASSERT_EQUAL(actual_errno, expected_errno, + "expect rte_errno == %d, found %d", + expected_errno, actual_errno); + } else + RTE_TEST_ASSERT_NOT_EQUAL(bpf, NULL, + "expect rte_bpf_load() != NULL"); + + return TEST_SUCCESS; +} + +/* + * Try and load completely empty BPF program. + * Should fail because there is no EXIT (and also return value is undefined). + */ +static int +test_no_instructions(void) +{ + static const struct ebpf_insn ins[] = {}; + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); +} + +REGISTER_FAST_TEST(bpf_no_instructions_autotest, true, true, test_no_instructions); + +/* + * Try and load a BPF program comprising single EXIT instruction. + * Should fail because the return value is undefined. + */ +static int +test_exit_only(void) +{ + static const struct ebpf_insn ins[] = { + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); +} + +REGISTER_FAST_TEST(bpf_exit_only_autotest, true, true, test_exit_only); + +/* + * Try and load a BPF program with no EXIT instruction. + * Should fail because of this. + */ +static int +test_no_exit(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Set return value to the program argument. 
*/ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); +} + +REGISTER_FAST_TEST(bpf_no_exit_autotest, true, true, test_no_exit); + +/* + * Try and load smallest possible valid BPF program. + */ +static int +test_minimal_working(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Set return value to the program argument. */ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, 0); +} + +REGISTER_FAST_TEST(bpf_minimal_working_autotest, true, true, test_minimal_working); + /* * Basic functional tests for librte_bpf. * The main procedure - load eBPF program, execute it and diff --git a/lib/bpf/bpf_load.c b/lib/bpf/bpf_load.c index 556e613762..6983c026af 100644 --- a/lib/bpf/bpf_load.c +++ b/lib/bpf/bpf_load.c @@ -88,7 +88,7 @@ rte_bpf_load(const struct rte_bpf_prm *prm) int32_t rc; uint32_t i; - if (prm == NULL || prm->ins == NULL || + if (prm == NULL || prm->ins == NULL || prm->nb_ins == 0 || (prm->nb_xsym != 0 && prm->xsym == NULL)) { rte_errno = EINVAL; return NULL; diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c index 4f47d6dc7b..23444b3eaa 100644 --- a/lib/bpf/bpf_validate.c +++ b/lib/bpf/bpf_validate.c @@ -1827,7 +1827,7 @@ add_edge(struct bpf_verifier *bvf, struct inst_node *node, uint32_t nidx) { uint32_t ne; - if (nidx > bvf->prm->nb_ins) { + if (nidx >= bvf->prm->nb_ins) { RTE_BPF_LOG_LINE(ERR, "%s: program boundary violation at pc: %u, next pc: %u", __func__, get_node_idx(bvf, node), nidx); @@ -1886,14 +1886,20 @@ get_prev_node(struct bpf_verifier *bvf, struct inst_node *node) * Control Flow Graph (CFG). * Information collected at this path would be used later * to determine is there any loops, and/or unreachable instructions. + * PREREQUISITE: there is at least one node. 
*/ static void dfs(struct bpf_verifier *bvf) { struct inst_node *next, *node; - node = bvf->in; - while (node != NULL) { + RTE_ASSERT(bvf->nb_nodes != 0); + /* + * Since there is at least one node, node with index 0 always exists; + * it is our program entry point. + */ + node = &bvf->in[0]; + do { if (node->colour == WHITE) set_node_colour(bvf, node, GREY); @@ -1923,7 +1929,7 @@ dfs(struct bpf_verifier *bvf) } } else node = NULL; - } + } while (node != NULL); } /* @@ -2062,6 +2068,12 @@ validate(struct bpf_verifier *bvf) if (rc != 0) return rc; + if (bvf->nb_nodes == 0) { + RTE_BPF_LOG_LINE(ERR, "%s(%p) the program is empty", + __func__, bvf); + return -EINVAL; + } + dfs(bvf); RTE_LOG(DEBUG, BPF, "%s(%p) stats:\n" -- 2.43.0 ^ permalink raw reply related [flat|nested] 51+ messages in thread
* Re: [PATCH v2 3/5] bpf: disallow empty program 2025-12-16 18:20 ` [PATCH v2 3/5] bpf: disallow empty program Marat Khalili @ 2025-12-18 0:54 ` Stephen Hemminger 2025-12-17 8:58 ` Marat Khalili 0 siblings, 1 reply; 51+ messages in thread From: Stephen Hemminger @ 2025-12-18 0:54 UTC (permalink / raw) To: Marat Khalili Cc: Jerin Jacob, Morten Brørup, Konstantin Ananyev, dev, stable [-- Attachment #1: Type: text/plain, Size: 7595 bytes --] I see no need for docbook style comments for static functions. Especially in test only code On Tue, Dec 16, 2025, 10:21 Marat Khalili <marat.khalili@huawei.com> wrote: > Add tests for some simple cases: > * Program with no instructions; > * Program with only EXIT instruction but no return value set; > * Program with return value set but no EXIT instruction; > * Minimal valid program with return value set and an EXIT instruction. > > Fix found bugs: > * a program with no instructions was accepted; > * a program with no EXIT instruction read outside the buffer. > > Signed-off-by: Marat Khalili <marat.khalili@huawei.com> > Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com> > --- > app/test/test_bpf.c | 118 +++++++++++++++++++++++++++++++++++++++++ > lib/bpf/bpf_load.c | 2 +- > lib/bpf/bpf_validate.c | 20 +++++-- > 3 files changed, 135 insertions(+), 5 deletions(-) > > diff --git a/app/test/test_bpf.c b/app/test/test_bpf.c > index b7c94ba1c7..6ecc49efff 100644 > --- a/app/test/test_bpf.c > +++ b/app/test/test_bpf.c > @@ -34,6 +34,124 @@ test_bpf(void) > #include <rte_ip.h> > > > +/* Tests of most simple BPF programs (no instructions, one instruction > etc.) */ > + > +/* > + * Try to load a simple bpf program from the instructions array. > + * > + * When `expected_errno` is zero, expect it to load successfully. > + * When `expected_errno` is non-zero, expect it to fail with this > `rte_errno`. > + * > + * @param nb_ins > + * Number of instructions in the `ins` array. > + * @param ins > + * BPF instructions array. 
> + * @param expected_errno > + * Expected result. > + * @return > + * TEST_SUCCESS on success, error code on failure. > + */ > +static int > +bpf_load_test(uint32_t nb_ins, const struct ebpf_insn *ins, int > expected_errno) > +{ > + const struct rte_bpf_prm prm = { > + .ins = ins, > + .nb_ins = nb_ins, > + .prog_arg = { > + .type = RTE_BPF_ARG_RAW, > + .size = sizeof(uint64_t), > + }, > + }; > + > + struct rte_bpf *const bpf = rte_bpf_load(&prm); > + const int actual_errno = rte_errno; > + rte_bpf_destroy(bpf); > + > + if (expected_errno != 0) { > + RTE_TEST_ASSERT_EQUAL(bpf, NULL, > + "expect rte_bpf_load() == NULL"); > + RTE_TEST_ASSERT_EQUAL(actual_errno, expected_errno, > + "expect rte_errno == %d, found %d", > + expected_errno, actual_errno); > + } else > + RTE_TEST_ASSERT_NOT_EQUAL(bpf, NULL, > + "expect rte_bpf_load() != NULL"); > + > + return TEST_SUCCESS; > +} > + > +/* > + * Try and load completely empty BPF program. > + * Should fail because there is no EXIT (and also return value is > undefined). > + */ > +static int > +test_no_instructions(void) > +{ > + static const struct ebpf_insn ins[] = {}; > + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); > +} > + > +REGISTER_FAST_TEST(bpf_no_instructions_autotest, true, true, > test_no_instructions); > + > +/* > + * Try and load a BPF program comprising single EXIT instruction. > + * Should fail because the return value is undefined. > + */ > +static int > +test_exit_only(void) > +{ > + static const struct ebpf_insn ins[] = { > + { > + .code = (BPF_JMP | EBPF_EXIT), > + }, > + }; > + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); > +} > + > +REGISTER_FAST_TEST(bpf_exit_only_autotest, true, true, test_exit_only); > + > +/* > + * Try and load a BPF program with no EXIT instruction. > + * Should fail because of this. > + */ > +static int > +test_no_exit(void) > +{ > + static const struct ebpf_insn ins[] = { > + { > + /* Set return value to the program argument. 
*/ > + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), > + .src_reg = EBPF_REG_1, > + .dst_reg = EBPF_REG_0, > + }, > + }; > + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); > +} > + > +REGISTER_FAST_TEST(bpf_no_exit_autotest, true, true, test_no_exit); > + > +/* > + * Try and load smallest possible valid BPF program. > + */ > +static int > +test_minimal_working(void) > +{ > + static const struct ebpf_insn ins[] = { > + { > + /* Set return value to the program argument. */ > + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), > + .src_reg = EBPF_REG_1, > + .dst_reg = EBPF_REG_0, > + }, > + { > + .code = (BPF_JMP | EBPF_EXIT), > + }, > + }; > + return bpf_load_test(RTE_DIM(ins), ins, 0); > +} > + > +REGISTER_FAST_TEST(bpf_minimal_working_autotest, true, true, > test_minimal_working); > + > /* > * Basic functional tests for librte_bpf. > * The main procedure - load eBPF program, execute it and > diff --git a/lib/bpf/bpf_load.c b/lib/bpf/bpf_load.c > index 556e613762..6983c026af 100644 > --- a/lib/bpf/bpf_load.c > +++ b/lib/bpf/bpf_load.c > @@ -88,7 +88,7 @@ rte_bpf_load(const struct rte_bpf_prm *prm) > int32_t rc; > uint32_t i; > > - if (prm == NULL || prm->ins == NULL || > + if (prm == NULL || prm->ins == NULL || prm->nb_ins == 0 || > (prm->nb_xsym != 0 && prm->xsym == NULL)) { > rte_errno = EINVAL; > return NULL; > diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c > index 4f47d6dc7b..23444b3eaa 100644 > --- a/lib/bpf/bpf_validate.c > +++ b/lib/bpf/bpf_validate.c > @@ -1827,7 +1827,7 @@ add_edge(struct bpf_verifier *bvf, struct inst_node > *node, uint32_t nidx) > { > uint32_t ne; > > - if (nidx > bvf->prm->nb_ins) { > + if (nidx >= bvf->prm->nb_ins) { > RTE_BPF_LOG_LINE(ERR, > "%s: program boundary violation at pc: %u, next > pc: %u", > __func__, get_node_idx(bvf, node), nidx); > @@ -1886,14 +1886,20 @@ get_prev_node(struct bpf_verifier *bvf, struct > inst_node *node) > * Control Flow Graph (CFG). 
> * Information collected at this path would be used later > * to determine is there any loops, and/or unreachable instructions. > + * PREREQUISITE: there is at least one node. > */ > static void > dfs(struct bpf_verifier *bvf) > { > struct inst_node *next, *node; > > - node = bvf->in; > - while (node != NULL) { > + RTE_ASSERT(bvf->nb_nodes != 0); > + /* > + * Since there is at least one node, node with index 0 always > exists; > + * it is our program entry point. > + */ > + node = &bvf->in[0]; > + do { > > if (node->colour == WHITE) > set_node_colour(bvf, node, GREY); > @@ -1923,7 +1929,7 @@ dfs(struct bpf_verifier *bvf) > } > } else > node = NULL; > - } > + } while (node != NULL); > } > > /* > @@ -2062,6 +2068,12 @@ validate(struct bpf_verifier *bvf) > if (rc != 0) > return rc; > > + if (bvf->nb_nodes == 0) { > + RTE_BPF_LOG_LINE(ERR, "%s(%p) the program is empty", > + __func__, bvf); > + return -EINVAL; > + } > + > dfs(bvf); > > RTE_LOG(DEBUG, BPF, "%s(%p) stats:\n" > -- > 2.43.0 > > [-- Attachment #2: Type: text/html, Size: 9549 bytes --] ^ permalink raw reply [flat|nested] 51+ messages in thread
* RE: [PATCH v2 3/5] bpf: disallow empty program
  2025-12-18  0:54 ` Stephen Hemminger
@ 2025-12-17  8:58 ` Marat Khalili
  0 siblings, 0 replies; 51+ messages in thread
From: Marat Khalili @ 2025-12-17  8:58 UTC (permalink / raw)
To: Stephen Hemminger
Cc: Jerin Jacob, Morten Brørup, Konstantin Ananyev, dev, stable@dpdk.org

[-- Attachment #1: Type: text/plain, Size: 160 bytes --]

> I see no need for docbook style comments for static functions.
> Especially in test only code

Thank you, will remember it and remove in the next iteration.

[-- Attachment #2: Type: text/html, Size: 1867 bytes --]

^ permalink raw reply	[flat|nested] 51+ messages in thread
* [PATCH v2 4/5] bpf: make add/subtract one program validate 2025-12-16 18:20 ` [PATCH v2 0/5] bpf: simple tests and fixes Marat Khalili ` (2 preceding siblings ...) 2025-12-16 18:20 ` [PATCH v2 3/5] bpf: disallow empty program Marat Khalili @ 2025-12-16 18:20 ` Marat Khalili 2025-12-16 18:20 ` [PATCH v2 5/5] bpf: fix BPF validation w/ conditional jump first Marat Khalili 2025-12-17 18:01 ` [PATCH v3 0/6] bpf: simple tests and fixes Marat Khalili 5 siblings, 0 replies; 51+ messages in thread From: Marat Khalili @ 2025-12-16 18:20 UTC (permalink / raw) To: stephen, jerinjacobk, mb, Konstantin Ananyev; +Cc: dev, stable Add tests loading simple BPF programs adding or subtracting one to its argument and fix triggered signed integer overflow undefined behaviours: lib/bpf/bpf_validate.c:324:24: runtime error: signed integer overflow: 1 + 9223372036854775807 cannot be represented in type 'long int' lib/bpf/bpf_validate.c:352:24: runtime error: signed integer overflow: -9223372036854775808 - 1 cannot be represented in type 'long int' As a minimal possible fix perform operation on unsigned integers where overflow is well-defined, which was probably the original intent. Signed-off-by: Marat Khalili <marat.khalili@huawei.com> Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com> --- app/test/test_bpf.c | 56 ++++++++++++++++++++++++++++++++++++++++++ lib/bpf/bpf_validate.c | 8 +++--- 2 files changed, 60 insertions(+), 4 deletions(-) diff --git a/app/test/test_bpf.c b/app/test/test_bpf.c index 6ecc49efff..c25f5a6f4d 100644 --- a/app/test/test_bpf.c +++ b/app/test/test_bpf.c @@ -152,6 +152,62 @@ test_minimal_working(void) REGISTER_FAST_TEST(bpf_minimal_working_autotest, true, true, test_minimal_working); +/* + * Try and load valid BPF program adding one to the argument. + */ +static int +test_add_one(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Set return value to one. 
*/ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_K), + .dst_reg = EBPF_REG_0, + .imm = 1, + }, + { + /* Add program argument to the return value. */ + .code = (EBPF_ALU64 | BPF_ADD | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, 0); +} + +REGISTER_FAST_TEST(bpf_add_one_autotest, true, true, test_add_one); + +/* + * Try and load valid BPF program subtracting one from the argument. + */ +static int +test_subtract_one(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Subtract one from the program argument. */ + .code = (EBPF_ALU64 | BPF_SUB | BPF_K), + .dst_reg = EBPF_REG_1, + .imm = 1, + }, + { + /* Set return value to the result. */ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, 0); +} + +REGISTER_FAST_TEST(bpf_subtract_one_autotest, true, true, test_subtract_one); + /* * Basic functional tests for librte_bpf. 
* The main procedure - load eBPF program, execute it and diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c index 23444b3eaa..47ad6fef0f 100644 --- a/lib/bpf/bpf_validate.c +++ b/lib/bpf/bpf_validate.c @@ -243,8 +243,8 @@ eval_add(struct bpf_reg_val *rd, const struct bpf_reg_val *rs, uint64_t msk) rv.u.min = (rd->u.min + rs->u.min) & msk; rv.u.max = (rd->u.max + rs->u.max) & msk; - rv.s.min = (rd->s.min + rs->s.min) & msk; - rv.s.max = (rd->s.max + rs->s.max) & msk; + rv.s.min = ((uint64_t)rd->s.min + (uint64_t)rs->s.min) & msk; + rv.s.max = ((uint64_t)rd->s.max + (uint64_t)rs->s.max) & msk; /* * if at least one of the operands is not constant, @@ -272,8 +272,8 @@ eval_sub(struct bpf_reg_val *rd, const struct bpf_reg_val *rs, uint64_t msk) rv.u.min = (rd->u.min - rs->u.max) & msk; rv.u.max = (rd->u.max - rs->u.min) & msk; - rv.s.min = (rd->s.min - rs->s.max) & msk; - rv.s.max = (rd->s.max - rs->s.min) & msk; + rv.s.min = ((uint64_t)rd->s.min - (uint64_t)rs->s.max) & msk; + rv.s.max = ((uint64_t)rd->s.max - (uint64_t)rs->s.min) & msk; /* * if at least one of the operands is not constant, -- 2.43.0 ^ permalink raw reply related [flat|nested] 51+ messages in thread
* [PATCH v2 5/5] bpf: fix BPF validation w/ conditional jump first 2025-12-16 18:20 ` [PATCH v2 0/5] bpf: simple tests and fixes Marat Khalili ` (3 preceding siblings ...) 2025-12-16 18:20 ` [PATCH v2 4/5] bpf: make add/subtract one program validate Marat Khalili @ 2025-12-16 18:20 ` Marat Khalili 2025-12-17 18:01 ` [PATCH v3 0/6] bpf: simple tests and fixes Marat Khalili 5 siblings, 0 replies; 51+ messages in thread From: Marat Khalili @ 2025-12-16 18:20 UTC (permalink / raw) To: stephen, jerinjacobk, mb, Konstantin Ananyev, Ferruh Yigit; +Cc: dev, stable When the BPF program was starting with a conditional jump only one (true) execution branch of the program was evaluated. Any instructions jumped over were not evaluated and could contain invalid operations. The root cause was using zero instruction index as a signal for ending evaluation when backtracking. Switch from using previous instruction index for tracking execution history to a previous instruction pointer. First instruction will not have it set, and therefore backtracking _from_ it will end evaluation, not backtracking _to_ it like before. 
Add two tests demostrating the problem: * test_jump_over_invalid_first: loads BPF program with conditional jump over the invalid operation, should not succeeed; * test_jump_over_invalid_non_first: same program with one extra instruction at the start to demostrate that it is indeed invalid (and also guard against another kind of regression); Fixes: 6e12ec4c4d ("bpf: add more checks") Signed-off-by: Marat Khalili <marat.khalili@huawei.com> --- app/test/test_bpf.c | 80 ++++++++++++++++++++++++++++++++++++++++++ lib/bpf/bpf_validate.c | 20 ++++------- 2 files changed, 86 insertions(+), 14 deletions(-) diff --git a/app/test/test_bpf.c b/app/test/test_bpf.c index c25f5a6f4d..3f04cb28be 100644 --- a/app/test/test_bpf.c +++ b/app/test/test_bpf.c @@ -208,6 +208,86 @@ test_subtract_one(void) REGISTER_FAST_TEST(bpf_subtract_one_autotest, true, true, test_subtract_one); +/* + * Conditionally jump over invalid operation as first instruction. + */ +static int +test_jump_over_invalid_first(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Jump over the next instruction for some r1. */ + .code = (BPF_JMP | BPF_JEQ | BPF_K), + .dst_reg = EBPF_REG_1, + .imm = 42, + .off = 1, + }, + { + /* Write 0xDEADBEEF to [r1 + INT16_MIN]. */ + .code = (BPF_ST | BPF_MEM | EBPF_DW), + .dst_reg = EBPF_REG_1, + .off = INT16_MIN, + .imm = 0xDEADBEEF, + }, + { + /* Set return value to the program argument. */ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); +} + +REGISTER_FAST_TEST(bpf_jump_over_invalid_first_autotest, true, true, + test_jump_over_invalid_first); + +/* + * Conditionally jump over invalid operation as non-first instruction. + */ +static int +test_jump_over_invalid_non_first(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Set return value to the program argument. 
*/ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + /* Jump over the next instruction for some r1. */ + .code = (BPF_JMP | BPF_JEQ | BPF_K), + .dst_reg = EBPF_REG_1, + .imm = 42, + .off = 1, + }, + { + /* Write 0xDEADBEEF to [r1 + INT16_MIN]. */ + .code = (BPF_ST | BPF_MEM | EBPF_DW), + .dst_reg = EBPF_REG_1, + .off = INT16_MIN, + .imm = 0xDEADBEEF, + }, + { + /* Set return value to the program argument. */ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); +} + +REGISTER_FAST_TEST(bpf_jump_over_invalid_non_first_autotest, true, true, + test_jump_over_invalid_non_first); + /* * Basic functional tests for librte_bpf. * The main procedure - load eBPF program, execute it and diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c index 47ad6fef0f..64a8f227a3 100644 --- a/lib/bpf/bpf_validate.c +++ b/lib/bpf/bpf_validate.c @@ -64,7 +64,7 @@ struct inst_node { uint8_t cur_edge:4; uint8_t edge_type[MAX_EDGES]; uint32_t edge_dest[MAX_EDGES]; - uint32_t prev_node; + struct inst_node *prev_node; struct { struct bpf_eval_state *cur; /* save/restore for jcc targets */ struct bpf_eval_state *start; @@ -1875,12 +1875,6 @@ set_edge_type(struct bpf_verifier *bvf, struct inst_node *node, bvf->edge_type[type]++; } -static struct inst_node * -get_prev_node(struct bpf_verifier *bvf, struct inst_node *node) -{ - return bvf->in + node->prev_node; -} - /* * Depth-First Search (DFS) through previously constructed * Control Flow Graph (CFG). 
@@ -1916,7 +1910,7 @@ dfs(struct bpf_verifier *bvf) if (next != NULL) { /* proceed with next child */ - next->prev_node = get_node_idx(bvf, node); + next->prev_node = node; node = next; } else { /* @@ -1925,7 +1919,7 @@ dfs(struct bpf_verifier *bvf) */ set_node_colour(bvf, node, BLACK); node->cur_edge = 0; - node = get_prev_node(bvf, node); + node = node->prev_node; } } else node = NULL; @@ -2490,7 +2484,7 @@ evaluate(struct bpf_verifier *bvf) next = NULL; stats.nb_prune++; } else { - next->prev_node = get_node_idx(bvf, node); + next->prev_node = node; node = next; } } else { @@ -2501,11 +2495,9 @@ evaluate(struct bpf_verifier *bvf) */ node->cur_edge = 0; save_safe_eval_state(bvf, node); - node = get_prev_node(bvf, node); + node = node->prev_node; - /* finished */ - if (node == bvf->in) - node = NULL; + /* first node will not have prev, signalling finish */ } } -- 2.43.0 ^ permalink raw reply related [flat|nested] 51+ messages in thread
* [PATCH v3 0/6] bpf: simple tests and fixes 2025-12-16 18:20 ` [PATCH v2 0/5] bpf: simple tests and fixes Marat Khalili ` (4 preceding siblings ...) 2025-12-16 18:20 ` [PATCH v2 5/5] bpf: fix BPF validation w/ conditional jump first Marat Khalili @ 2025-12-17 18:01 ` Marat Khalili 2025-12-17 18:01 ` [PATCH v3 1/6] eal: variable first arguments of RTE_SHIFT_VALxx Marat Khalili ` (8 more replies) 5 siblings, 9 replies; 51+ messages in thread From: Marat Khalili @ 2025-12-17 18:01 UTC (permalink / raw) To: stephen, jerinjacobk, mb; +Cc: dev, stable, konstantin.ananyev Add simple unit-style tests for rte_bpf_load, and fix some minor discovered bugs. v3: * Fixed unfortunate typos in the last patch commit message. * Marked all ARM opcodes with UINT32_C for clarity as suggested by Morten Brørup, in a separate commit. v2: * Moved new tests back into test_bpf.c * Changed library macros RTE_SHIFT_VALxx to handle variable first arguments instead of introducing new macros. * Added another test and fix, for conditional jump as first instruction. Marat Khalili (6): eal: variable first arguments of RTE_SHIFT_VALxx bpf: fix signed shift overflows in ARM JIT bpf: mark ARM opcodes with UINT32_C bpf: disallow empty program bpf: make add/subtract one program validate bpf: fix BPF validation w/ conditional jump first app/test/test_bpf.c | 254 +++++++++++++++++++++++++++++++++++ lib/bpf/bpf_jit_arm64.c | 184 ++++++++++++------------- lib/bpf/bpf_load.c | 2 +- lib/bpf/bpf_validate.c | 48 ++++--- lib/eal/include/rte_bitops.h | 4 +- 5 files changed, 375 insertions(+), 117 deletions(-) -- 2.43.0 ^ permalink raw reply [flat|nested] 51+ messages in thread
* [PATCH v3 1/6] eal: variable first arguments of RTE_SHIFT_VALxx 2025-12-17 18:01 ` [PATCH v3 0/6] bpf: simple tests and fixes Marat Khalili @ 2025-12-17 18:01 ` Marat Khalili 2025-12-19 13:06 ` Konstantin Ananyev 2025-12-17 18:01 ` [PATCH v3 2/6] bpf: fix signed shift overflows in ARM JIT Marat Khalili ` (7 subsequent siblings) 8 siblings, 1 reply; 51+ messages in thread From: Marat Khalili @ 2025-12-17 18:01 UTC (permalink / raw) To: stephen, jerinjacobk, mb, Jack Bond-Preston Cc: dev, stable, konstantin.ananyev Macros RTE_SHIFT_VAL32 and RTE_SHIFT_VAL64 were implemented by applying UINT32_C or UINT64_C correspondingly to its first argument. As a consequence first argument had to be a constant. Replace UINT32_C and UINT64_C with casts to uint32_t and uint64_t to allow these arguments be variable. For constants the result should be the same. (Yes, technically UINT64_C promotes to uint_least64_t, not uint64_t, but I think most users of RTE_SHIFT_VAL64 expect the result to be uint64_t.) Signed-off-by: Marat Khalili <marat.khalili@huawei.com> Reviewed-by: Morten Brørup <mb@smartsharesystems.com> --- lib/eal/include/rte_bitops.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/lib/eal/include/rte_bitops.h b/lib/eal/include/rte_bitops.h index 2d1b9d281c..aa6ac73abb 100644 --- a/lib/eal/include/rte_bitops.h +++ b/lib/eal/include/rte_bitops.h @@ -51,7 +51,7 @@ extern "C" { * @param nr * The shift number in range of 0 to (32 - width of val). */ -#define RTE_SHIFT_VAL32(val, nr) (UINT32_C(val) << (nr)) +#define RTE_SHIFT_VAL32(val, nr) ((uint32_t)(val) << (nr)) /** * Get the uint64_t shifted value. @@ -61,7 +61,7 @@ extern "C" { * @param nr * The shift number in range of 0 to (64 - width of val). */ -#define RTE_SHIFT_VAL64(val, nr) (UINT64_C(val) << (nr)) +#define RTE_SHIFT_VAL64(val, nr) ((uint64_t)(val) << (nr)) /** * Generate a contiguous 32-bit mask -- 2.43.0 ^ permalink raw reply related [flat|nested] 51+ messages in thread
* RE: [PATCH v3 1/6] eal: variable first arguments of RTE_SHIFT_VALxx 2025-12-17 18:01 ` [PATCH v3 1/6] eal: variable first arguments of RTE_SHIFT_VALxx Marat Khalili @ 2025-12-19 13:06 ` Konstantin Ananyev 0 siblings, 0 replies; 51+ messages in thread From: Konstantin Ananyev @ 2025-12-19 13:06 UTC (permalink / raw) To: Marat Khalili, stephen@networkplumber.org, jerinjacobk@gmail.com, mb@smartsharesystems.com, Jack Bond-Preston Cc: dev@dpdk.org, stable@dpdk.org > Macros RTE_SHIFT_VAL32 and RTE_SHIFT_VAL64 were implemented by applying > UINT32_C or UINT64_C correspondingly to its first argument. As a > consequence first argument had to be a constant. Replace UINT32_C and > UINT64_C with casts to uint32_t and uint64_t to allow these arguments be > variable. For constants the result should be the same. > > (Yes, technically UINT64_C promotes to uint_least64_t, not uint64_t, but > I think most users of RTE_SHIFT_VAL64 expect the result to be uint64_t.) > > Signed-off-by: Marat Khalili <marat.khalili@huawei.com> > Reviewed-by: Morten Brørup <mb@smartsharesystems.com> > --- > lib/eal/include/rte_bitops.h | 4 ++-- > 1 file changed, 2 insertions(+), 2 deletions(-) > > diff --git a/lib/eal/include/rte_bitops.h b/lib/eal/include/rte_bitops.h > index 2d1b9d281c..aa6ac73abb 100644 > --- a/lib/eal/include/rte_bitops.h > +++ b/lib/eal/include/rte_bitops.h > @@ -51,7 +51,7 @@ extern "C" { > * @param nr > * The shift number in range of 0 to (32 - width of val). > */ > -#define RTE_SHIFT_VAL32(val, nr) (UINT32_C(val) << (nr)) > +#define RTE_SHIFT_VAL32(val, nr) ((uint32_t)(val) << (nr)) > > /** > * Get the uint64_t shifted value. > @@ -61,7 +61,7 @@ extern "C" { > * @param nr > * The shift number in range of 0 to (64 - width of val). 
> */ > -#define RTE_SHIFT_VAL64(val, nr) (UINT64_C(val) << (nr)) > +#define RTE_SHIFT_VAL64(val, nr) ((uint64_t)(val) << (nr)) > > /** > * Generate a contiguous 32-bit mask > -- Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com> > 2.43.0 ^ permalink raw reply [flat|nested] 51+ messages in thread
* [PATCH v3 2/6] bpf: fix signed shift overflows in ARM JIT 2025-12-17 18:01 ` [PATCH v3 0/6] bpf: simple tests and fixes Marat Khalili 2025-12-17 18:01 ` [PATCH v3 1/6] eal: variable first arguments of RTE_SHIFT_VALxx Marat Khalili @ 2025-12-17 18:01 ` Marat Khalili 2025-12-19 13:13 ` Konstantin Ananyev 2025-12-17 18:01 ` [PATCH v3 3/6] bpf: mark ARM opcodes with UINT32_C Marat Khalili ` (6 subsequent siblings) 8 siblings, 1 reply; 51+ messages in thread From: Marat Khalili @ 2025-12-17 18:01 UTC (permalink / raw) To: stephen, jerinjacobk, mb, Wathsala Vithanage, Konstantin Ananyev Cc: dev, stable Left shifts of integer literals and bool values overwriting the sign bit were used multiple times in bpf_jit_arm64.c. E.g.: insn = (!!is64) << 31; where is64 has type bool (double bang is a no-op here). The operand of left shift was promoted to type int, which when 32-bit wide cannot represent the result. Similarly literal integers have int type by default. Sanitizer produced the following diagnostic during runtime (for various lines): lib/bpf/bpf_jit_arm64.c:241:18: runtime error: left shift of 1 by 31 places cannot be represented in type 'int' To fix the issue use RTE_BIT32 and similar macros instead. Signed-off-by: Marat Khalili <marat.khalili@huawei.com> Acked-by: Morten Brørup <mb@smartsharesystems.com> --- lib/bpf/bpf_jit_arm64.c | 162 ++++++++++++++++++++-------------------- 1 file changed, 81 insertions(+), 81 deletions(-) diff --git a/lib/bpf/bpf_jit_arm64.c b/lib/bpf/bpf_jit_arm64.c index 96b8cd2e03..76df1e4ba1 100644 --- a/lib/bpf/bpf_jit_arm64.c +++ b/lib/bpf/bpf_jit_arm64.c @@ -28,7 +28,7 @@ #define A64_ZR 31 #define check_imm(n, val) (((val) >= 0) ? 
!!((val) >> (n)) : !!((~val) >> (n))) -#define mask_imm(n, val) ((val) & ((1 << (n)) - 1)) +#define mask_imm(n, val) ((val) & (RTE_BIT32(n) - 1)) struct ebpf_a64_map { uint32_t off; /* eBPF to arm64 insn offset mapping for jump */ @@ -238,12 +238,12 @@ emit_add_sub_imm(struct a64_jit_ctx *ctx, bool is64, bool sub, uint8_t rd, uint32_t insn, imm; imm = mask_imm(12, imm12); - insn = (!!is64) << 31; - insn |= (!!sub) << 30; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(sub, 30); insn |= 0x11000000; insn |= rd; - insn |= rn << 5; - insn |= imm << 10; + insn |= RTE_SHIFT_VAL32(rn, 5); + insn |= RTE_SHIFT_VAL32(imm, 10); emit_insn(ctx, insn, check_reg(rd) || check_reg(rn) || check_imm(12, imm12)); @@ -279,16 +279,16 @@ emit_ls_pair_64(struct a64_jit_ctx *ctx, uint8_t rt, uint8_t rt2, uint8_t rn, { uint32_t insn; - insn = (!!load) << 22; - insn |= (!!pre_index) << 24; + insn = RTE_SHIFT_VAL32(load, 22); + insn |= RTE_SHIFT_VAL32(pre_index, 24); insn |= 0xa8800000; insn |= rt; - insn |= rn << 5; - insn |= rt2 << 10; + insn |= RTE_SHIFT_VAL32(rn, 5); + insn |= RTE_SHIFT_VAL32(rt2, 10); if (push) - insn |= 0x7e << 15; /* 0x7e means -2 with imm7 */ + insn |= RTE_SHIFT_VAL32(0x7e, 15); /* 0x7e means -2 with imm7 */ else - insn |= 0x2 << 15; + insn |= RTE_SHIFT_VAL32(0x2, 15); emit_insn(ctx, insn, check_reg(rn) || check_reg(rt) || check_reg(rt2)); @@ -317,11 +317,11 @@ mov_imm(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t type, { uint32_t insn; - insn = (!!is64) << 31; - insn |= type << 29; - insn |= 0x25 << 23; - insn |= (shift/16) << 21; - insn |= imm16 << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(type, 29); + insn |= RTE_SHIFT_VAL32(0x25, 23); + insn |= RTE_SHIFT_VAL32(shift/16, 21); + insn |= RTE_SHIFT_VAL32(imm16, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_mov_hw(is64, shift)); @@ -334,7 +334,7 @@ emit_mov_imm32(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint32_t val) uint16_t lower = val & 0xffff; /* 
Positive number */ - if ((val & 1UL << 31) == 0) { + if ((val & RTE_BIT32(31)) == 0) { mov_imm(ctx, is64, rd, A64_MOVZ, lower, 0); if (upper) mov_imm(ctx, is64, rd, A64_MOVK, upper, 16); @@ -393,21 +393,21 @@ emit_ls(struct a64_jit_ctx *ctx, uint8_t sz, uint8_t rt, uint8_t rn, uint8_t rm, { uint32_t insn; - insn = 0x1c1 << 21; + insn = RTE_SHIFT_VAL32(0x1c1, 21); if (load) - insn |= 1 << 22; + insn |= RTE_BIT32(22); if (sz == BPF_B) - insn |= 0 << 30; + insn |= RTE_SHIFT_VAL32(0, 30); else if (sz == BPF_H) - insn |= 1 << 30; + insn |= RTE_SHIFT_VAL32(1, 30); else if (sz == BPF_W) - insn |= 2 << 30; + insn |= RTE_SHIFT_VAL32(2, 30); else if (sz == EBPF_DW) - insn |= 3 << 30; + insn |= RTE_SHIFT_VAL32(3, 30); - insn |= rm << 16; - insn |= 0x1a << 10; /* LSL and S = 0 */ - insn |= rn << 5; + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(0x1a, 10); /* LSL and S = 0 */ + insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rt; emit_insn(ctx, insn, check_reg(rt) || check_reg(rn) || check_reg(rm) || @@ -436,10 +436,10 @@ emit_add_sub(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t rn, { uint32_t insn; - insn = (!!is64) << 31; - insn |= op << 21; /* shift == 0 */ - insn |= rm << 16; - insn |= rn << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(op, 21); /* shift == 0 */ + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); @@ -468,11 +468,11 @@ emit_mul(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t rm) { uint32_t insn; - insn = (!!is64) << 31; - insn |= 0xd8 << 21; - insn |= rm << 16; - insn |= A64_ZR << 10; - insn |= rd << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(0xd8, 21); + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(A64_ZR, 10); + insn |= RTE_SHIFT_VAL32(rd, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); @@ -489,11 +489,11 @@ emit_data_process_two_src(struct a64_jit_ctx *ctx, bool is64, 
uint8_t rd, { uint32_t insn; - insn = (!!is64) << 31; - insn |= 0xd6 << 21; - insn |= rm << 16; - insn |= op << 10; - insn |= rn << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(0xd6, 21); + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(op, 10); + insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); @@ -532,14 +532,14 @@ emit_bitfield(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t rn, { uint32_t insn; - insn = (!!is64) << 31; + insn = RTE_SHIFT_VAL32(is64, 31); if (insn) - insn |= 1 << 22; /* Set N bit when is64 is set */ - insn |= op << 29; - insn |= 0x26 << 23; - insn |= immr << 16; - insn |= imms << 10; - insn |= rn << 5; + insn |= RTE_BIT32(22); /* Set N bit when is64 is set */ + insn |= RTE_SHIFT_VAL32(op, 29); + insn |= RTE_SHIFT_VAL32(0x26, 23); + insn |= RTE_SHIFT_VAL32(immr, 16); + insn |= RTE_SHIFT_VAL32(imms, 10); + insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rn) || @@ -578,11 +578,11 @@ emit_logical(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, { uint32_t insn; - insn = (!!is64) << 31; - insn |= op << 29; - insn |= 0x50 << 21; - insn |= rm << 16; - insn |= rd << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(op, 29); + insn |= RTE_SHIFT_VAL32(0x50, 21); + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(rd, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); @@ -612,12 +612,12 @@ emit_msub(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t rn, { uint32_t insn; - insn = (!!is64) << 31; - insn |= 0xd8 << 21; - insn |= rm << 16; - insn |= 0x1 << 15; - insn |= ra << 10; - insn |= rn << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(0xd8, 21); + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(0x1, 15); + insn |= RTE_SHIFT_VAL32(ra, 10); + insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rn) || 
check_reg(rm) || @@ -638,7 +638,7 @@ emit_blr(struct a64_jit_ctx *ctx, uint8_t rn) uint32_t insn; insn = 0xd63f0000; - insn |= rn << 5; + insn |= RTE_SHIFT_VAL32(rn, 5); emit_insn(ctx, insn, check_reg(rn)); } @@ -669,22 +669,22 @@ emit_rev(struct a64_jit_ctx *ctx, uint8_t rd, int32_t imm) uint32_t insn; insn = 0xdac00000; - insn |= rd << 5; + insn |= RTE_SHIFT_VAL32(rd, 5); insn |= rd; switch (imm) { case 16: - insn |= 1 << 10; + insn |= RTE_SHIFT_VAL32(1, 10); emit_insn(ctx, insn, check_reg(rd)); emit_zero_extend(ctx, rd, 16); break; case 32: - insn |= 2 << 10; + insn |= RTE_SHIFT_VAL32(2, 10); emit_insn(ctx, insn, check_reg(rd)); /* Upper 32 bits already cleared */ break; case 64: - insn |= 3 << 10; + insn |= RTE_SHIFT_VAL32(3, 10); emit_insn(ctx, insn, check_reg(rd)); break; default: @@ -933,9 +933,9 @@ emit_cbnz(struct a64_jit_ctx *ctx, bool is64, uint8_t rt, int32_t imm19) uint32_t insn, imm; imm = mask_imm(19, imm19); - insn = (!!is64) << 31; - insn |= 0x35 << 24; - insn |= imm << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(0x35, 24); + insn |= RTE_SHIFT_VAL32(imm, 5); insn |= rt; emit_insn(ctx, insn, check_reg(rt) || check_imm(19, imm19)); @@ -947,7 +947,7 @@ emit_b(struct a64_jit_ctx *ctx, int32_t imm26) uint32_t insn, imm; imm = mask_imm(26, imm26); - insn = 0x5 << 26; + insn = RTE_SHIFT_VAL32(0x5, 26); insn |= imm; emit_insn(ctx, insn, check_imm(26, imm26)); @@ -971,9 +971,9 @@ emit_stadd(struct a64_jit_ctx *ctx, bool is64, uint8_t rs, uint8_t rn) uint32_t insn; insn = 0xb820001f; - insn |= (!!is64) << 30; - insn |= rs << 16; - insn |= rn << 5; + insn |= RTE_SHIFT_VAL32(is64, 30); + insn |= RTE_SHIFT_VAL32(rs, 16); + insn |= RTE_SHIFT_VAL32(rn, 5); emit_insn(ctx, insn, check_reg(rs) || check_reg(rn)); } @@ -984,8 +984,8 @@ emit_ldxr(struct a64_jit_ctx *ctx, bool is64, uint8_t rt, uint8_t rn) uint32_t insn; insn = 0x885f7c00; - insn |= (!!is64) << 30; - insn |= rn << 5; + insn |= RTE_SHIFT_VAL32(is64, 30); + insn |= RTE_SHIFT_VAL32(rn, 
5); insn |= rt; emit_insn(ctx, insn, check_reg(rt) || check_reg(rn)); @@ -998,9 +998,9 @@ emit_stxr(struct a64_jit_ctx *ctx, bool is64, uint8_t rs, uint8_t rt, uint32_t insn; insn = 0x88007c00; - insn |= (!!is64) << 30; - insn |= rs << 16; - insn |= rn << 5; + insn |= RTE_SHIFT_VAL32(is64, 30); + insn |= RTE_SHIFT_VAL32(rs, 16); + insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rt; emit_insn(ctx, insn, check_reg(rs) || check_reg(rt) || check_reg(rn)); @@ -1051,9 +1051,9 @@ emit_cmp_tst(struct a64_jit_ctx *ctx, bool is64, uint8_t rn, uint8_t rm, uint32_t insn; insn = opc; - insn |= (!!is64) << 31; - insn |= rm << 16; - insn |= rn << 5; + insn |= RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(rn, 5); emit_insn(ctx, insn, check_reg(rn) || check_reg(rm)); } @@ -1076,8 +1076,8 @@ emit_b_cond(struct a64_jit_ctx *ctx, uint8_t cond, int32_t imm19) uint32_t insn, imm; imm = mask_imm(19, imm19); - insn = 0x15 << 26; - insn |= imm << 5; + insn = RTE_SHIFT_VAL32(0x15, 26); + insn |= RTE_SHIFT_VAL32(imm, 5); insn |= cond; emit_insn(ctx, insn, check_cond(cond) || check_imm(19, imm19)); @@ -1301,7 +1301,7 @@ emit(struct a64_jit_ctx *ctx, struct rte_bpf *bpf) break; /* dst = imm64 */ case (BPF_LD | BPF_IMM | EBPF_DW): - u64 = ((uint64_t)ins[1].imm << 32) | (uint32_t)imm; + u64 = RTE_SHIFT_VAL64(ins[1].imm, 32) | (uint32_t)imm; emit_mov_imm(ctx, 1, dst, u64); i++; break; -- 2.43.0 ^ permalink raw reply related [flat|nested] 51+ messages in thread
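The integer-promotion pitfall this patch describes can be reproduced outside DPDK. Below is a minimal standalone sketch (the helper names are illustrative, not the actual RTE_SHIFT_VAL32/RTE_BIT32 macros) of the pattern the fix relies on: casting to uint32_t before shifting, so that shifting into bit 31 is well defined.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-in for RTE_SHIFT_VAL32: the uint32_t parameter makes
 * the shift operand unsigned, so shifting into bit 31 is well defined.
 * With a plain int operand, as in the old `(!!is64) << 31`, the same
 * shift is undefined behaviour when int is 32 bits wide. */
static inline uint32_t shift_val32(uint32_t val, unsigned int nr)
{
	return val << nr;
}

/* Encode a bool flag into bit 31 of an instruction word, as the ARM JIT
 * does for its is64 flag; the bool converts to 0 or 1 on the way in. */
static inline uint32_t encode_flag31(bool flag)
{
	return shift_val32(flag, 31);
}
```

Compiled with `-fsanitize=undefined`, the unsigned form above is silent, while the old `1 << 31` form triggers the diagnostic quoted in the commit message.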
* RE: [PATCH v3 2/6] bpf: fix signed shift overflows in ARM JIT
  2025-12-17 18:01 ` [PATCH v3 2/6] bpf: fix signed shift overflows in ARM JIT Marat Khalili
@ 2025-12-19 13:13 ` Konstantin Ananyev
  0 siblings, 0 replies; 51+ messages in thread
From: Konstantin Ananyev @ 2025-12-19 13:13 UTC (permalink / raw)
To: Marat Khalili, stephen@networkplumber.org, jerinjacobk@gmail.com,
	mb@smartsharesystems.com, Wathsala Vithanage
Cc: dev@dpdk.org, stable@dpdk.org

> 
> Left shifts of integer literals and bool values overwriting the sign bit
> were used multiple times in bpf_jit_arm64.c. E.g.:
> 
> insn = (!!is64) << 31;
> 
> where is64 has type bool (double bang is a no-op here). The operand of
> left shift was promoted to type int, which when 32-bit wide cannot
> represent the result. Similarly literal integers have int type by
> default. Sanitizer produced the following diagnostic during runtime
> (for various lines):
> 
> lib/bpf/bpf_jit_arm64.c:241:18: runtime error: left shift of 1 by 31
> places cannot be represented in type 'int'
> 
> To fix the issue use RTE_BIT32 and similar macros instead.
> 
> Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
> Acked-by: Morten Brørup <mb@smartsharesystems.com>
> ---

Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>

> --
> 2.43.0

^ permalink raw reply	[flat|nested] 51+ messages in thread
* [PATCH v3 3/6] bpf: mark ARM opcodes with UINT32_C 2025-12-17 18:01 ` [PATCH v3 0/6] bpf: simple tests and fixes Marat Khalili 2025-12-17 18:01 ` [PATCH v3 1/6] eal: variable first arguments of RTE_SHIFT_VALxx Marat Khalili 2025-12-17 18:01 ` [PATCH v3 2/6] bpf: fix signed shift overflows in ARM JIT Marat Khalili @ 2025-12-17 18:01 ` Marat Khalili 2025-12-19 13:14 ` Konstantin Ananyev 2025-12-17 18:01 ` [PATCH v3 4/6] bpf: disallow empty program Marat Khalili ` (5 subsequent siblings) 8 siblings, 1 reply; 51+ messages in thread From: Marat Khalili @ 2025-12-17 18:01 UTC (permalink / raw) To: stephen, jerinjacobk, mb, Wathsala Vithanage, Konstantin Ananyev Cc: dev, stable To prevent confusion as to whether operations with them will be executed as signed or unsigned (and with what bit width) mark all ARM opcode constants with UINT32_C. Signed-off-by: Marat Khalili <marat.khalili@huawei.com> --- lib/bpf/bpf_jit_arm64.c | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/lib/bpf/bpf_jit_arm64.c b/lib/bpf/bpf_jit_arm64.c index 76df1e4ba1..9263234338 100644 --- a/lib/bpf/bpf_jit_arm64.c +++ b/lib/bpf/bpf_jit_arm64.c @@ -12,7 +12,7 @@ #include "bpf_impl.h" #define A64_REG_MASK(r) ((r) & 0x1f) -#define A64_INVALID_OP_CODE (0xffffffff) +#define A64_INVALID_OP_CODE (UINT32_C(0xffffffff)) #define TMP_REG_1 (EBPF_REG_10 + 1) #define TMP_REG_2 (EBPF_REG_10 + 2) @@ -228,7 +228,7 @@ emit_insn(struct a64_jit_ctx *ctx, uint32_t insn, int error) static void emit_ret(struct a64_jit_ctx *ctx) { - emit_insn(ctx, 0xd65f03c0, 0); + emit_insn(ctx, UINT32_C(0xd65f03c0), 0); } static void @@ -240,7 +240,7 @@ emit_add_sub_imm(struct a64_jit_ctx *ctx, bool is64, bool sub, uint8_t rd, imm = mask_imm(12, imm12); insn = RTE_SHIFT_VAL32(is64, 31); insn |= RTE_SHIFT_VAL32(sub, 30); - insn |= 0x11000000; + insn |= UINT32_C(0x11000000); insn |= rd; insn |= RTE_SHIFT_VAL32(rn, 5); insn |= RTE_SHIFT_VAL32(imm, 10); @@ -281,7 +281,7 @@ emit_ls_pair_64(struct 
a64_jit_ctx *ctx, uint8_t rt, uint8_t rt2, uint8_t rn, insn = RTE_SHIFT_VAL32(load, 22); insn |= RTE_SHIFT_VAL32(pre_index, 24); - insn |= 0xa8800000; + insn |= UINT32_C(0xa8800000); insn |= rt; insn |= RTE_SHIFT_VAL32(rn, 5); insn |= RTE_SHIFT_VAL32(rt2, 10); @@ -637,7 +637,7 @@ emit_blr(struct a64_jit_ctx *ctx, uint8_t rn) { uint32_t insn; - insn = 0xd63f0000; + insn = UINT32_C(0xd63f0000); insn |= RTE_SHIFT_VAL32(rn, 5); emit_insn(ctx, insn, check_reg(rn)); @@ -668,7 +668,7 @@ emit_rev(struct a64_jit_ctx *ctx, uint8_t rd, int32_t imm) { uint32_t insn; - insn = 0xdac00000; + insn = UINT32_C(0xdac00000); insn |= RTE_SHIFT_VAL32(rd, 5); insn |= rd; @@ -970,7 +970,7 @@ emit_stadd(struct a64_jit_ctx *ctx, bool is64, uint8_t rs, uint8_t rn) { uint32_t insn; - insn = 0xb820001f; + insn = UINT32_C(0xb820001f); insn |= RTE_SHIFT_VAL32(is64, 30); insn |= RTE_SHIFT_VAL32(rs, 16); insn |= RTE_SHIFT_VAL32(rn, 5); @@ -983,7 +983,7 @@ emit_ldxr(struct a64_jit_ctx *ctx, bool is64, uint8_t rt, uint8_t rn) { uint32_t insn; - insn = 0x885f7c00; + insn = UINT32_C(0x885f7c00); insn |= RTE_SHIFT_VAL32(is64, 30); insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rt; @@ -997,7 +997,7 @@ emit_stxr(struct a64_jit_ctx *ctx, bool is64, uint8_t rs, uint8_t rt, { uint32_t insn; - insn = 0x88007c00; + insn = UINT32_C(0x88007c00); insn |= RTE_SHIFT_VAL32(is64, 30); insn |= RTE_SHIFT_VAL32(rs, 16); insn |= RTE_SHIFT_VAL32(rn, 5); @@ -1042,8 +1042,8 @@ emit_xadd(struct a64_jit_ctx *ctx, uint8_t op, uint8_t tmp1, uint8_t tmp2, } } -#define A64_CMP 0x6b00000f -#define A64_TST 0x6a00000f +#define A64_CMP (UINT32_C(0x6b00000f)) +#define A64_TST (UINT32_C(0x6a00000f)) static void emit_cmp_tst(struct a64_jit_ctx *ctx, bool is64, uint8_t rn, uint8_t rm, uint32_t opc) -- 2.43.0 ^ permalink raw reply related [flat|nested] 51+ messages in thread
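A small sketch of why the UINT32_C marking matters (the definitions below are illustrative, not the DPDK ones): an unsuffixed literal like `1` has type int, so building bit 31 from it is undefined, while the `UINT32_C` form keeps the whole expression in unsigned 32-bit arithmetic.

```c
#include <stdint.h>

/* With UINT32_C the constant is uint32_t from the start, so comparisons
 * and bitwise operations on it stay in unsigned 32-bit arithmetic. */
#define INVALID_OP_CODE (UINT32_C(0xffffffff))

static inline int is_invalid_op(uint32_t insn)
{
	return insn == INVALID_OP_CODE;
}

/* Building the top bit from an unsigned literal: UINT32_C(1) << 31 is
 * well defined, unlike 1 << 31 where int is 32 bits wide. */
static inline uint32_t top_bit(void)
{
	return UINT32_C(1) << 31;
}
```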
* RE: [PATCH v3 3/6] bpf: mark ARM opcodes with UINT32_C
  2025-12-17 18:01 ` [PATCH v3 3/6] bpf: mark ARM opcodes with UINT32_C Marat Khalili
@ 2025-12-19 13:14 ` Konstantin Ananyev
  0 siblings, 0 replies; 51+ messages in thread
From: Konstantin Ananyev @ 2025-12-19 13:14 UTC (permalink / raw)
To: Marat Khalili, stephen@networkplumber.org, jerinjacobk@gmail.com,
	mb@smartsharesystems.com, Wathsala Vithanage
Cc: dev@dpdk.org, stable@dpdk.org

> To prevent confusion as to whether operations with them will be executed
> as signed or unsigned (and with what bit width) mark all ARM opcode
> constants with UINT32_C.
> 
> Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
> ---

Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>

> --
> 2.43.0

^ permalink raw reply	[flat|nested] 51+ messages in thread
* [PATCH v3 4/6] bpf: disallow empty program 2025-12-17 18:01 ` [PATCH v3 0/6] bpf: simple tests and fixes Marat Khalili ` (2 preceding siblings ...) 2025-12-17 18:01 ` [PATCH v3 3/6] bpf: mark ARM opcodes with UINT32_C Marat Khalili @ 2025-12-17 18:01 ` Marat Khalili 2025-12-17 18:01 ` [PATCH v3 5/6] bpf: make add/subtract one program validate Marat Khalili ` (4 subsequent siblings) 8 siblings, 0 replies; 51+ messages in thread From: Marat Khalili @ 2025-12-17 18:01 UTC (permalink / raw) To: stephen, jerinjacobk, mb, Konstantin Ananyev; +Cc: dev, stable Add tests for some simple cases: * Program with no instructions; * Program with only EXIT instruction but no return value set; * Program with return value set but no EXIT instruction; * Minimal valid program with return value set and an EXIT instruction. Fix found bugs: * a program with no instructions was accepted; * a program with no EXIT instruction read outside the buffer. Signed-off-by: Marat Khalili <marat.khalili@huawei.com> Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com> --- app/test/test_bpf.c | 118 +++++++++++++++++++++++++++++++++++++++++ lib/bpf/bpf_load.c | 2 +- lib/bpf/bpf_validate.c | 20 +++++-- 3 files changed, 135 insertions(+), 5 deletions(-) diff --git a/app/test/test_bpf.c b/app/test/test_bpf.c index b7c94ba1c7..6ecc49efff 100644 --- a/app/test/test_bpf.c +++ b/app/test/test_bpf.c @@ -34,6 +34,124 @@ test_bpf(void) #include <rte_ip.h> +/* Tests of most simple BPF programs (no instructions, one instruction etc.) */ + +/* + * Try to load a simple bpf program from the instructions array. + * + * When `expected_errno` is zero, expect it to load successfully. + * When `expected_errno` is non-zero, expect it to fail with this `rte_errno`. + * + * @param nb_ins + * Number of instructions in the `ins` array. + * @param ins + * BPF instructions array. + * @param expected_errno + * Expected result. + * @return + * TEST_SUCCESS on success, error code on failure. 
+ */ +static int +bpf_load_test(uint32_t nb_ins, const struct ebpf_insn *ins, int expected_errno) +{ + const struct rte_bpf_prm prm = { + .ins = ins, + .nb_ins = nb_ins, + .prog_arg = { + .type = RTE_BPF_ARG_RAW, + .size = sizeof(uint64_t), + }, + }; + + struct rte_bpf *const bpf = rte_bpf_load(&prm); + const int actual_errno = rte_errno; + rte_bpf_destroy(bpf); + + if (expected_errno != 0) { + RTE_TEST_ASSERT_EQUAL(bpf, NULL, + "expect rte_bpf_load() == NULL"); + RTE_TEST_ASSERT_EQUAL(actual_errno, expected_errno, + "expect rte_errno == %d, found %d", + expected_errno, actual_errno); + } else + RTE_TEST_ASSERT_NOT_EQUAL(bpf, NULL, + "expect rte_bpf_load() != NULL"); + + return TEST_SUCCESS; +} + +/* + * Try and load completely empty BPF program. + * Should fail because there is no EXIT (and also return value is undefined). + */ +static int +test_no_instructions(void) +{ + static const struct ebpf_insn ins[] = {}; + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); +} + +REGISTER_FAST_TEST(bpf_no_instructions_autotest, true, true, test_no_instructions); + +/* + * Try and load a BPF program comprising single EXIT instruction. + * Should fail because the return value is undefined. + */ +static int +test_exit_only(void) +{ + static const struct ebpf_insn ins[] = { + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); +} + +REGISTER_FAST_TEST(bpf_exit_only_autotest, true, true, test_exit_only); + +/* + * Try and load a BPF program with no EXIT instruction. + * Should fail because of this. + */ +static int +test_no_exit(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Set return value to the program argument. */ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); +} + +REGISTER_FAST_TEST(bpf_no_exit_autotest, true, true, test_no_exit); + +/* + * Try and load smallest possible valid BPF program. 
+ */ +static int +test_minimal_working(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Set return value to the program argument. */ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, 0); +} + +REGISTER_FAST_TEST(bpf_minimal_working_autotest, true, true, test_minimal_working); + /* * Basic functional tests for librte_bpf. * The main procedure - load eBPF program, execute it and diff --git a/lib/bpf/bpf_load.c b/lib/bpf/bpf_load.c index 556e613762..6983c026af 100644 --- a/lib/bpf/bpf_load.c +++ b/lib/bpf/bpf_load.c @@ -88,7 +88,7 @@ rte_bpf_load(const struct rte_bpf_prm *prm) int32_t rc; uint32_t i; - if (prm == NULL || prm->ins == NULL || + if (prm == NULL || prm->ins == NULL || prm->nb_ins == 0 || (prm->nb_xsym != 0 && prm->xsym == NULL)) { rte_errno = EINVAL; return NULL; diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c index 4f47d6dc7b..23444b3eaa 100644 --- a/lib/bpf/bpf_validate.c +++ b/lib/bpf/bpf_validate.c @@ -1827,7 +1827,7 @@ add_edge(struct bpf_verifier *bvf, struct inst_node *node, uint32_t nidx) { uint32_t ne; - if (nidx > bvf->prm->nb_ins) { + if (nidx >= bvf->prm->nb_ins) { RTE_BPF_LOG_LINE(ERR, "%s: program boundary violation at pc: %u, next pc: %u", __func__, get_node_idx(bvf, node), nidx); @@ -1886,14 +1886,20 @@ get_prev_node(struct bpf_verifier *bvf, struct inst_node *node) * Control Flow Graph (CFG). * Information collected at this path would be used later * to determine is there any loops, and/or unreachable instructions. + * PREREQUISITE: there is at least one node. */ static void dfs(struct bpf_verifier *bvf) { struct inst_node *next, *node; - node = bvf->in; - while (node != NULL) { + RTE_ASSERT(bvf->nb_nodes != 0); + /* + * Since there is at least one node, node with index 0 always exists; + * it is our program entry point. 
+ */ + node = &bvf->in[0]; + do { if (node->colour == WHITE) set_node_colour(bvf, node, GREY); @@ -1923,7 +1929,7 @@ dfs(struct bpf_verifier *bvf) } } else node = NULL; - } + } while (node != NULL); } /* @@ -2062,6 +2068,12 @@ validate(struct bpf_verifier *bvf) if (rc != 0) return rc; + if (bvf->nb_nodes == 0) { + RTE_BPF_LOG_LINE(ERR, "%s(%p) the program is empty", + __func__, bvf); + return -EINVAL; + } + dfs(bvf); RTE_LOG(DEBUG, BPF, "%s(%p) stats:\n" -- 2.43.0 ^ permalink raw reply related [flat|nested] 51+ messages in thread
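The off-by-one fixed in add_edge() above boils down to the usual half-open index range: valid instruction indices are 0..nb_ins-1, so a jump target equal to nb_ins is already out of bounds. A minimal sketch of the corrected check (function name is illustrative, not the DPDK code):

```c
#include <stdbool.h>
#include <stdint.h>

/* Corrected bounds check: the old form `nidx > nb_ins` accepted
 * nidx == nb_ins, i.e. a jump one past the last instruction. */
static inline bool edge_in_bounds(uint32_t nidx, uint32_t nb_ins)
{
	return nidx < nb_ins;
}
```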
* [PATCH v3 5/6] bpf: make add/subtract one program validate 2025-12-17 18:01 ` [PATCH v3 0/6] bpf: simple tests and fixes Marat Khalili ` (3 preceding siblings ...) 2025-12-17 18:01 ` [PATCH v3 4/6] bpf: disallow empty program Marat Khalili @ 2025-12-17 18:01 ` Marat Khalili 2025-12-17 18:01 ` [PATCH v3 6/6] bpf: fix BPF validation w/ conditional jump first Marat Khalili ` (3 subsequent siblings) 8 siblings, 0 replies; 51+ messages in thread From: Marat Khalili @ 2025-12-17 18:01 UTC (permalink / raw) To: stephen, jerinjacobk, mb, Konstantin Ananyev; +Cc: dev, stable Add tests loading simple BPF programs adding or subtracting one to its argument and fix triggered signed integer overflow undefined behaviours: lib/bpf/bpf_validate.c:324:24: runtime error: signed integer overflow: 1 + 9223372036854775807 cannot be represented in type 'long int' lib/bpf/bpf_validate.c:352:24: runtime error: signed integer overflow: -9223372036854775808 - 1 cannot be represented in type 'long int' As a minimal possible fix perform operation on unsigned integers where overflow is well-defined, which was probably the original intent. Signed-off-by: Marat Khalili <marat.khalili@huawei.com> Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com> --- app/test/test_bpf.c | 56 ++++++++++++++++++++++++++++++++++++++++++ lib/bpf/bpf_validate.c | 8 +++--- 2 files changed, 60 insertions(+), 4 deletions(-) diff --git a/app/test/test_bpf.c b/app/test/test_bpf.c index 6ecc49efff..c25f5a6f4d 100644 --- a/app/test/test_bpf.c +++ b/app/test/test_bpf.c @@ -152,6 +152,62 @@ test_minimal_working(void) REGISTER_FAST_TEST(bpf_minimal_working_autotest, true, true, test_minimal_working); +/* + * Try and load valid BPF program adding one to the argument. + */ +static int +test_add_one(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Set return value to one. */ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_K), + .dst_reg = EBPF_REG_0, + .imm = 1, + }, + { + /* Add program argument to the return value. 
*/ + .code = (EBPF_ALU64 | BPF_ADD | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, 0); +} + +REGISTER_FAST_TEST(bpf_add_one_autotest, true, true, test_add_one); + +/* + * Try and load valid BPF program subtracting one from the argument. + */ +static int +test_subtract_one(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Subtract one from the program argument. */ + .code = (EBPF_ALU64 | BPF_SUB | BPF_K), + .dst_reg = EBPF_REG_1, + .imm = 1, + }, + { + /* Set return value to the result. */ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, 0); +} + +REGISTER_FAST_TEST(bpf_subtract_one_autotest, true, true, test_subtract_one); + /* * Basic functional tests for librte_bpf. * The main procedure - load eBPF program, execute it and diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c index 23444b3eaa..47ad6fef0f 100644 --- a/lib/bpf/bpf_validate.c +++ b/lib/bpf/bpf_validate.c @@ -243,8 +243,8 @@ eval_add(struct bpf_reg_val *rd, const struct bpf_reg_val *rs, uint64_t msk) rv.u.min = (rd->u.min + rs->u.min) & msk; rv.u.max = (rd->u.max + rs->u.max) & msk; - rv.s.min = (rd->s.min + rs->s.min) & msk; - rv.s.max = (rd->s.max + rs->s.max) & msk; + rv.s.min = ((uint64_t)rd->s.min + (uint64_t)rs->s.min) & msk; + rv.s.max = ((uint64_t)rd->s.max + (uint64_t)rs->s.max) & msk; /* * if at least one of the operands is not constant, @@ -272,8 +272,8 @@ eval_sub(struct bpf_reg_val *rd, const struct bpf_reg_val *rs, uint64_t msk) rv.u.min = (rd->u.min - rs->u.max) & msk; rv.u.max = (rd->u.max - rs->u.min) & msk; - rv.s.min = (rd->s.min - rs->s.max) & msk; - rv.s.max = (rd->s.max - rs->s.min) & msk; + rv.s.min = ((uint64_t)rd->s.min - (uint64_t)rs->s.max) & msk; + rv.s.max = ((uint64_t)rd->s.max - (uint64_t)rs->s.min) & msk; /* * if 
at least one of the operands is not constant, -- 2.43.0 ^ permalink raw reply related [flat|nested] 51+ messages in thread
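The fix pattern in eval_add()/eval_sub() — performing the range arithmetic on uint64_t, where wraparound is well defined, and masking afterwards — can be sketched standalone (the helpers below are illustrative, not the bpf_validate.c functions):

```c
#include <stdint.h>

/* Add two signed 64-bit bounds without signed-overflow UB: cast to
 * uint64_t first (unsigned wraparound is defined), then apply the mask. */
static inline uint64_t add_bound(int64_t a, int64_t b, uint64_t msk)
{
	return ((uint64_t)a + (uint64_t)b) & msk;
}

/* Same idea for subtraction, mirroring the eval_sub() fix. */
static inline uint64_t sub_bound(int64_t a, int64_t b, uint64_t msk)
{
	return ((uint64_t)a - (uint64_t)b) & msk;
}
```

With these, the two sanitizer cases from the commit message (INT64_MAX + 1 and INT64_MIN - 1) wrap predictably instead of being undefined.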
* [PATCH v3 6/6] bpf: fix BPF validation w/ conditional jump first 2025-12-17 18:01 ` [PATCH v3 0/6] bpf: simple tests and fixes Marat Khalili ` (4 preceding siblings ...) 2025-12-17 18:01 ` [PATCH v3 5/6] bpf: make add/subtract one program validate Marat Khalili @ 2025-12-17 18:01 ` Marat Khalili 2026-01-08 11:10 ` Konstantin Ananyev 2026-01-14 19:50 ` [PATCH v3 0/6] bpf: simple tests and fixes Stephen Hemminger ` (2 subsequent siblings) 8 siblings, 1 reply; 51+ messages in thread From: Marat Khalili @ 2025-12-17 18:01 UTC (permalink / raw) To: stephen, jerinjacobk, mb, Konstantin Ananyev, Ferruh Yigit; +Cc: dev, stable When the BPF program was starting with a conditional jump only one (true) execution branch of the program was evaluated. Any instructions jumped over were not evaluated and could contain invalid operations. The root cause was using zero instruction index as a signal for ending evaluation when backtracking. Switch from using previous instruction index for tracking execution history to a previous instruction pointer. First instruction will not have it set, and therefore backtracking _from_ it will end evaluation, not backtracking _to_ it like before. 
Add two tests demonstrating the problem: * test_jump_over_invalid_first: loads BPF program with conditional jump over the invalid operation, should not succeed; * test_jump_over_invalid_non_first: same program with one extra instruction at the start to demonstrate that it is indeed invalid (and also guard against another kind of regression); Fixes: 6e12ec4c4d6d ("bpf: add more checks") Signed-off-by: Marat Khalili <marat.khalili@huawei.com> --- app/test/test_bpf.c | 80 ++++++++++++++++++++++++++++++++++++++++++ lib/bpf/bpf_validate.c | 20 ++++------- 2 files changed, 86 insertions(+), 14 deletions(-) diff --git a/app/test/test_bpf.c b/app/test/test_bpf.c index c25f5a6f4d..3f04cb28be 100644 --- a/app/test/test_bpf.c +++ b/app/test/test_bpf.c @@ -208,6 +208,86 @@ test_subtract_one(void) REGISTER_FAST_TEST(bpf_subtract_one_autotest, true, true, test_subtract_one); +/* + * Conditionally jump over invalid operation as first instruction. + */ +static int +test_jump_over_invalid_first(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Jump over the next instruction for some r1. */ + .code = (BPF_JMP | BPF_JEQ | BPF_K), + .dst_reg = EBPF_REG_1, + .imm = 42, + .off = 1, + }, + { + /* Write 0xDEADBEEF to [r1 + INT16_MIN]. */ + .code = (BPF_ST | BPF_MEM | EBPF_DW), + .dst_reg = EBPF_REG_1, + .off = INT16_MIN, + .imm = 0xDEADBEEF, + }, + { + /* Set return value to the program argument. */ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); +} + +REGISTER_FAST_TEST(bpf_jump_over_invalid_first_autotest, true, true, + test_jump_over_invalid_first); + +/* + * Conditionally jump over invalid operation as non-first instruction. + */ +static int +test_jump_over_invalid_non_first(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Set return value to the program argument. 
*/ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + /* Jump over the next instruction for some r1. */ + .code = (BPF_JMP | BPF_JEQ | BPF_K), + .dst_reg = EBPF_REG_1, + .imm = 42, + .off = 1, + }, + { + /* Write 0xDEADBEEF to [r1 + INT16_MIN]. */ + .code = (BPF_ST | BPF_MEM | EBPF_DW), + .dst_reg = EBPF_REG_1, + .off = INT16_MIN, + .imm = 0xDEADBEEF, + }, + { + /* Set return value to the program argument. */ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); +} + +REGISTER_FAST_TEST(bpf_jump_over_invalid_non_first_autotest, true, true, + test_jump_over_invalid_non_first); + /* * Basic functional tests for librte_bpf. * The main procedure - load eBPF program, execute it and diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c index 47ad6fef0f..64a8f227a3 100644 --- a/lib/bpf/bpf_validate.c +++ b/lib/bpf/bpf_validate.c @@ -64,7 +64,7 @@ struct inst_node { uint8_t cur_edge:4; uint8_t edge_type[MAX_EDGES]; uint32_t edge_dest[MAX_EDGES]; - uint32_t prev_node; + struct inst_node *prev_node; struct { struct bpf_eval_state *cur; /* save/restore for jcc targets */ struct bpf_eval_state *start; @@ -1875,12 +1875,6 @@ set_edge_type(struct bpf_verifier *bvf, struct inst_node *node, bvf->edge_type[type]++; } -static struct inst_node * -get_prev_node(struct bpf_verifier *bvf, struct inst_node *node) -{ - return bvf->in + node->prev_node; -} - /* * Depth-First Search (DFS) through previously constructed * Control Flow Graph (CFG). 
@@ -1916,7 +1910,7 @@ dfs(struct bpf_verifier *bvf) if (next != NULL) { /* proceed with next child */ - next->prev_node = get_node_idx(bvf, node); + next->prev_node = node; node = next; } else { /* @@ -1925,7 +1919,7 @@ dfs(struct bpf_verifier *bvf) */ set_node_colour(bvf, node, BLACK); node->cur_edge = 0; - node = get_prev_node(bvf, node); + node = node->prev_node; } } else node = NULL; @@ -2490,7 +2484,7 @@ evaluate(struct bpf_verifier *bvf) next = NULL; stats.nb_prune++; } else { - next->prev_node = get_node_idx(bvf, node); + next->prev_node = node; node = next; } } else { @@ -2501,11 +2495,9 @@ evaluate(struct bpf_verifier *bvf) */ node->cur_edge = 0; save_safe_eval_state(bvf, node); - node = get_prev_node(bvf, node); + node = node->prev_node; - /* finished */ - if (node == bvf->in) - node = NULL; + /* first node will not have prev, signalling finish */ } } -- 2.43.0 ^ permalink raw reply related [flat|nested] 51+ messages in thread
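The sentinel change above is easiest to see in isolation: with an index, 0 means both "entry node" and "no predecessor", so backtracking to the entry looked like termination; with a pointer, only NULL terminates. A minimal sketch (the node layout is illustrative, not the DPDK inst_node struct):

```c
#include <stddef.h>

/* Each node remembers its DFS predecessor as a pointer; only the entry
 * node has prev == NULL, so backtracking stops exactly when the entry
 * itself has been fully processed, never merely on reaching it. */
struct node {
	struct node *prev;
};

/* Count backtracking steps from n back to the entry node. */
static size_t backtrack_depth(const struct node *n)
{
	size_t depth = 0;
	while (n->prev != NULL) {
		n = n->prev;
		depth++;
	}
	return depth;
}
```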
* RE: [PATCH v3 6/6] bpf: fix BPF validation w/ conditional jump first
  2025-12-17 18:01 ` [PATCH v3 6/6] bpf: fix BPF validation w/ conditional jump first Marat Khalili
@ 2026-01-08 11:10 ` Konstantin Ananyev
  0 siblings, 0 replies; 51+ messages in thread
From: Konstantin Ananyev @ 2026-01-08 11:10 UTC (permalink / raw)
To: Marat Khalili, stephen@networkplumber.org, jerinjacobk@gmail.com,
	mb@smartsharesystems.com, Ferruh Yigit
Cc: dev@dpdk.org, stable@dpdk.org

> When the BPF program was starting with a conditional jump only one
> (true) execution branch of the program was evaluated. Any instructions
> jumped over were not evaluated and could contain invalid operations.
> The root cause was using zero instruction index as a signal for ending
> evaluation when backtracking.
> 
> Switch from using previous instruction index for tracking execution
> history to a previous instruction pointer. First instruction will not
> have it set, and therefore backtracking _from_ it will end evaluation,
> not backtracking _to_ it like before.
> 
> Add two tests demonstrating the problem:
> * test_jump_over_invalid_first: loads BPF program with
>   conditional jump over the invalid operation, should not succeed;
> * test_jump_over_invalid_non_first: same program with one extra
>   instruction at the start to demonstrate that it is indeed invalid
>   (and also guard against another kind of regression);
> 
> Fixes: 6e12ec4c4d6d ("bpf: add more checks")
> 
> Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
> ---

Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Tested-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>

> --
> 2.43.0

^ permalink raw reply	[flat|nested] 51+ messages in thread
* Re: [PATCH v3 0/6] bpf: simple tests and fixes
  2025-12-17 18:01 ` [PATCH v3 0/6] bpf: simple tests and fixes Marat Khalili
                     ` (5 preceding siblings ...)
  2025-12-17 18:01 ` [PATCH v3 6/6] bpf: fix BPF validation w/ conditional jump first Marat Khalili
@ 2026-01-14 19:50 ` Stephen Hemminger
  2026-01-23  5:22 ` Stephen Hemminger
  2026-01-27 11:49 ` [PATCH v4 " Marat Khalili
  8 siblings, 0 replies; 51+ messages in thread
From: Stephen Hemminger @ 2026-01-14 19:50 UTC (permalink / raw)
To: Marat Khalili; +Cc: jerinjacobk, mb, dev, stable, konstantin.ananyev

On Wed, 17 Dec 2025 18:01:33 +0000
Marat Khalili <marat.khalili@huawei.com> wrote:

> Add simple unit-style tests for rte_bpf_load, and fix some minor
> discovered bugs.
> 
> v3:
> * Fixed unfortunate typos in the last patch commit message.
> * Marked all ARM opcodes with UINT32_C for clarity as suggested by
>   Morten Brørup, in a separate commit.
> 
> v2:
> * Moved new tests back into test_bpf.c
> * Changed library macros RTE_SHIFT_VALxx to handle variable first
>   arguments instead of introducing new macros.
> * Added another test and fix, for conditional jump as first instruction.
> 
> Marat Khalili (6):
>   eal: variable first arguments of RTE_SHIFT_VALxx
>   bpf: fix signed shift overflows in ARM JIT
>   bpf: mark ARM opcodes with UINT32_C
>   bpf: disallow empty program
>   bpf: make add/subtract one program validate
>   bpf: fix BPF validation w/ conditional jump first
> 
>  app/test/test_bpf.c          | 254 +++++++++++++++++++++++++++++++++++
>  lib/bpf/bpf_jit_arm64.c      | 184 ++++++++++++-------------
>  lib/bpf/bpf_load.c           |   2 +-
>  lib/bpf/bpf_validate.c       |  48 ++++---
>  lib/eal/include/rte_bitops.h |   4 +-
>  5 files changed, 375 insertions(+), 117 deletions(-)
> 

AI review of this patch series identified some minor things.
They could either be addressed by Thomas during merge or you could
send a new version.
## DPDK BPF Test Patchset Review (v3, 6 patches)

**Submitter:** Marat Khalili <marat.khalili@huawei.com>

**Series overview:** Fixes various issues in the BPF library including
sanitizer-detected undefined behaviors (signed shift overflows, integer
overflows), validation bugs, and adds comprehensive test coverage.

---

### Patch 1/6: `eal: variable first arguments of RTE_SHIFT_VALxx`

**Subject line:** ✅ 48 characters, correct format, lowercase

**Commit message:**
- ✅ Signed-off-by present
- ✅ Reviewed-by and Acked-by present
- ✅ Good explanation of the change and rationale
- ⚠️ **Warning:** Missing `Cc: stable@dpdk.org` - this changes the behavior of public macros

**Code changes:**
```c
-#define RTE_SHIFT_VAL32(val, nr) (UINT32_C(val) << (nr))
+#define RTE_SHIFT_VAL32(val, nr) ((uint32_t)(val) << (nr))
```
- ✅ Clean, minimal change
- ✅ Allows variables as arguments (enables patch 2)

**Verdict:** Acceptable. Consider adding `Cc: stable@dpdk.org` if this fixes user-visible limitations.

---

### Patch 2/6: `bpf: fix signed shift overflows in ARM JIT`

**Subject line:** ✅ 41 characters, correct format

**Commit message:**
- ✅ Signed-off-by present
- ✅ Acked-by present
- ✅ Cc: stable@dpdk.org included
- ⚠️ **Warning:** Missing `Fixes:` tag - sanitizer errors indicate this is a regression/bug
- ❌ **Error:** Trailing whitespace on line 176: `Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com> ` (space before `---`)

**Code review:**
- ✅ Comprehensive fix for UBSan-detected issues
- ✅ Correct use of `RTE_BIT32()` and `RTE_SHIFT_VAL32()` macros
- ℹ️ **Info:** Some shifts are no-ops (e.g., `RTE_SHIFT_VAL32(0, 30)` = 0) but kept for consistency - acceptable

**Verdict:** Fix the trailing whitespace error. Should add `Fixes:` tag identifying when the UB was introduced.

---

### Patch 3/6: `bpf: mark ARM opcodes with UINT32_C`

**Subject line:** ✅ 35 characters, correct format

**Commit message:**
- ✅ Signed-off-by present
- ✅ Acked-by present
- ⚠️ **Warning:** Has `Cc: stable@dpdk.org` but this is a style/clarity change, not a bug fix. Consider removing unless there's a functional issue.

**Code changes:**
```c
-#define A64_INVALID_OP_CODE (0xffffffff)
+#define A64_INVALID_OP_CODE (UINT32_C(0xffffffff))
```
- ✅ Consistent use of UINT32_C for ARM opcode constants
- ✅ Improves code clarity about integer types

**Verdict:** Good cleanup. Questionable whether `Cc: stable@dpdk.org` is appropriate for a style change.

---

### Patch 4/6: `bpf: disallow empty program`

**Subject line:** ✅ 26 characters, correct format

**Commit message:**
- ✅ Signed-off-by present
- ✅ Acked-by present
- ✅ Cc: stable@dpdk.org included (appropriate for bug fixes)
- ⚠️ **Warning:** Missing `Fixes:` tag - this fixes actual bugs (empty program accepted, buffer over-read)

**Code review:**
- ✅ `bpf_load_test()` helper function is well-documented
- ✅ Good test coverage for edge cases
- ✅ Boundary check fix: `nidx > bvf->prm->nb_ins` → `nidx >= bvf->prm->nb_ins` (correct)
- ✅ Empty program check added to `validate()`
- ✅ Loop changed from `while` to `do-while` with assertion - good defensive coding

**Minor observations:**
- Line 817-818: Brace style `} else` without braces on else is inconsistent but acceptable per DPDK style

**Verdict:** Good patch. Add `Fixes:` tag to identify when these bugs were introduced.

---

### Patch 5/6: `bpf: make add/subtract one program validate`

**Subject line:** ✅ 42 characters, correct format

**Commit message:**
- ✅ Signed-off-by present
- ✅ Acked-by present
- ✅ Cc: stable@dpdk.org included
- ✅ Good documentation of the UBSan errors being fixed
- ⚠️ **Warning:** Missing `Fixes:` tag - this fixes signed integer overflow UB

**Code review:**
```c
- rv.s.min = (rd->s.min + rs->s.min) & msk;
+ rv.s.min = ((uint64_t)rd->s.min + (uint64_t)rs->s.min) & msk;
```
- ✅ Minimal fix for UB by casting to unsigned before arithmetic
- ✅ Tests correctly exercise the boundary conditions

**Verdict:** Good minimal fix. Add `Fixes:` tag.

---

### Patch 6/6: `bpf: fix BPF validation w/ conditional jump first`

**Subject line:** ✅ 48 characters, correct format

**Commit message:**
- ✅ Signed-off-by present
- ✅ Acked-by and Tested-by present
- ✅ Cc: stable@dpdk.org present (from CC header)
- ✅ **Correct `Fixes:` tag:** `Fixes: 6e12ec4c4d6d ("bpf: add more checks")`
- ✅ Excellent description of the root cause and fix

**Code review:**
```c
- uint32_t prev_node;
+ struct inst_node *prev_node;
```
- ✅ Clean refactoring from index to pointer for prev_node tracking
- ✅ Fixes the bug where index 0 was used as termination signal
- ✅ Two tests covering the exact bug scenario (first vs non-first instruction)
- ✅ `get_prev_node()` function removed as it's no longer needed

**Verdict:** Excellent patch. Good fix with proper testing and documentation.

---

## Summary

| Patch | Status | Action Required |
|-------|--------|-----------------|
| 1/6 | ⚠️ | Consider adding `Cc: stable@dpdk.org` |
| 2/6 | ❌ | Fix trailing whitespace; add `Fixes:` tag |
| 3/6 | ⚠️ | Consider removing `Cc: stable@dpdk.org` (style change) |
| 4/6 | ⚠️ | Add `Fixes:` tag |
| 5/6 | ⚠️ | Add `Fixes:` tag |
| 6/6 | ✅ | Ready |

### Required Fixes (Errors)
1. **Patch 2/6, line 176:** Remove trailing whitespace after `konstantin.ananyev@huawei.com`

### Recommended Improvements (Warnings)
1.
**Patches 2, 4, 5:** Add `Fixes:` tags to identify the commits that introduced the bugs 2. **Patch 3:** Reconsider `Cc: stable@dpdk.org` - UINT32_C annotation is a style improvement, not a bug fix ### Overall Assessment This is a high-quality patch series that fixes real bugs (UB detected by sanitizers, validation logic errors) and adds valuable test coverage. The code changes are well-reasoned and minimal. The series structure properly separates enabling changes (patch 1) from fixes (patches 2-6). **Recommendation:** After fixing the trailing whitespace in patch 2 and adding missing `Fixes:` tags, this series should be ready for merge. ^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v3 0/6] bpf: simple tests and fixes
  2025-12-17 18:01 ` [PATCH v3 0/6] bpf: simple tests and fixes Marat Khalili
                     ` (6 preceding siblings ...)
  2026-01-14 19:50 ` [PATCH v3 0/6] bpf: simple tests and fixes Stephen Hemminger
@ 2026-01-23  5:22 ` Stephen Hemminger
  2026-01-27 11:49 ` [PATCH v4 " Marat Khalili
  8 siblings, 0 replies; 51+ messages in thread
From: Stephen Hemminger @ 2026-01-23 5:22 UTC (permalink / raw)
  To: Marat Khalili; +Cc: jerinjacobk, mb, dev, stable, konstantin.ananyev

On Wed, 17 Dec 2025 18:01:33 +0000
Marat Khalili <marat.khalili@huawei.com> wrote:

> Add simple unit-style tests for rte_bpf_load, and fix some minor
> discovered bugs.
>
> v3:
> * Fixed unfortunate typos in the last patch commit message.
> * Marked all ARM opcodes with UINT32_C for clarity as suggested by
>   Morten Brørup, in a separate commit.
>
> v2:
> * Moved new tests back into test_bpf.c
> * Changed library macros RTE_SHIFT_VALxx to handle variable first
>   arguments instead of introducing new macros.
> * Added another test and fix, for conditional jump as first instruction.
>
> Marat Khalili (6):
>   eal: variable first arguments of RTE_SHIFT_VALxx
>   bpf: fix signed shift overflows in ARM JIT
>   bpf: mark ARM opcodes with UINT32_C
>   bpf: disallow empty program
>   bpf: make add/subtract one program validate
>   bpf: fix BPF validation w/ conditional jump first
>
>  app/test/test_bpf.c          | 254 +++++++++++++++++++++++++++++++++++
>  lib/bpf/bpf_jit_arm64.c      | 184 ++++++++++++-------------
>  lib/bpf/bpf_load.c           |   2 +-
>  lib/bpf/bpf_validate.c       |  48 ++++---
>  lib/eal/include/rte_bitops.h |   4 +-
>  5 files changed, 375 insertions(+), 117 deletions(-)

AI patch review saw some really minor things. These can be fixed by new
version or by maintainer during merge. It would be good to be consistent
about Fixes and Cc: stable.

Series-Acked-by: Stephen Hemminger <stephen@networkplumber.org>

=== Patch Review: bundle-1681-bpf-fixes.mbox (via Claude) ===

# DPDK Patch Review: bundle-1681-bpf-fixes.mbox

## Overview

This patch series (6 patches) addresses BPF-related fixes including
signed shift overflows in ARM JIT, validation improvements, and related
EAL macro changes.

---

## Patch 1/6: eal: variable first arguments of RTE_SHIFT_VALxx

### Errors

None identified.

### Warnings

1. **Missing Cc: stable@dpdk.org in commit message body** - The patch
   CC's `stable@dpdk.org` in email headers but should include
   `Cc: stable@dpdk.org` in the commit message body if this is intended
   for stable backport.

### Info

1. **Good practice**: The parenthetical note about `uint_least64_t` vs
   `uint64_t` is helpful context for reviewers.
2. **Change is appropriate**: Replacing `UINT32_C`/`UINT64_C` with casts
   allows variable arguments while maintaining the intended type
   behavior.

---

## Patch 2/6: bpf: fix signed shift overflows in ARM JIT

### Errors

None identified.

### Warnings

1. **Missing Fixes: tag** - This patch fixes a bug (signed shift
   overflow causing undefined behavior). It should have a `Fixes:` tag
   with the 12-character SHA of the commit that introduced the
   problematic code.

   ```
   Fixes: abcdef123456 ("bpf: add JIT support for arm64")
   ```

2. **Missing Cc: stable@dpdk.org in commit body** - If this is a bug
   fix intended for stable releases, it should have
   `Cc: stable@dpdk.org` in the commit message.
3. **Trailing whitespace in Acked-by tag** (line with
   `Acked-by: Konstantin Ananyev`):

   ```
   Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com> 
   ```

   There appears to be a trailing space after the closing `>`.

### Info

1. **Good use of RTE macros**: Using `RTE_BIT32()` and
   `RTE_SHIFT_VAL32()` is the correct approach to avoid signed integer
   overflow UB.
2. **Dependency**: This patch depends on Patch 1/6 for
   `RTE_SHIFT_VAL32` to accept non-constant arguments.

---

## Patch 3/6: bpf: mark ARM opcodes with UINT32_C

### Errors

None identified.

### Warnings

1. **Missing Cc: stable@dpdk.org** - Similar to patch 2, if this is
   fixing potential issues, it may warrant stable backport
   consideration.

### Info

1. **Good practice**: Explicitly marking opcode constants with
   `UINT32_C()` clarifies intent and ensures consistent unsigned 32-bit
   operations.
2. **Minor style observation**: The parentheses around macro
   definitions like `#define A64_CMP (UINT32_C(0x6b00000f))` are good
   defensive practice.

---

## Patch 4/6: bpf: disallow empty program

### Errors

None identified.

### Warnings

1. **Missing Fixes: tag** - The commit message mentions fixing bugs:

   > Fix found bugs:
   > * a program with no instructions was accepted;
   > * a program with no EXIT instruction read outside the buffer.

   These should have corresponding `Fixes:` tags.
2. **Missing Cc: stable@dpdk.org** - Bug fixes should typically be
   considered for stable backport.
3. **Brace style inconsistency** (app/test/test_bpf.c, line 90-93):

   ```c
   if (expected_errno != 0) {
	...
   } else
	RTE_TEST_ASSERT_NOT_EQUAL(...);
   ```

   While this follows DPDK style for single-statement else, the
   asymmetry with the braced if-block could be considered less
   readable.

### Info

1. **Good test coverage**: Adding tests for edge cases (empty program,
   EXIT-only, no-EXIT, minimal valid) is excellent practice.
2. **Use of RTE_TEST_ASSERT macros**: Good use of the test framework
   macros.
3. **Comment style**: The function-level comments are well-written and
   follow proper multi-line comment format.

---

## Patch 5/6: bpf: make add/subtract one program validate

### Errors

None identified.

### Warnings

1. **Missing Fixes: tag** - This fixes signed integer overflow UB in
   `bpf_validate.c`. Should include:

   ```
   Fixes: <sha> ("original commit that introduced the bug")
   ```

2. **Missing Cc: stable@dpdk.org** - Bug fix should be considered for
   stable.

### Info

1. **Correct fix approach**: Casting to `uint64_t` before arithmetic is
   the right way to handle potential signed overflow when the result
   will be masked anyway.
2. **Good test additions**: Tests for add/subtract operations validate
   the fix works correctly.

---

## Patch 6/6: bpf: fix BPF validation w/ conditional jump first

### Errors

None identified.

### Warnings

1. **Trailing whitespace in Tested-by tag**:

   ```
   Tested-by: Konstantin Ananyev <konstantin.ananyev@huawei.com> 
   ```

   There appears to be trailing space after `>`.
2. **Subject line could be clearer** - "fix BPF validation w/
   conditional jump first" is somewhat abbreviated. Consider:

   ```
   bpf: fix validation when conditional jump is first instruction
   ```

### Info

1. **Good Fixes: tag present**:
   `Fixes: 6e12ec4c4d6d ("bpf: add more checks")` - properly formatted.
2. **Excellent commit message**: Clear explanation of the bug (zero
   index used as sentinel), the root cause, and the fix approach.
3. **Good test coverage**: Two tests demonstrating the issue - one with
   jump as first instruction, one with jump as non-first instruction to
   prevent regression.
4. **Code change is sound**: Switching from index-based to
   pointer-based tracking for `prev_node` is cleaner and avoids the
   zero-sentinel problem.

---

## Summary

| Patch | Errors | Warnings | Info |
|-------|--------|----------|------|
| 1/6 (eal) | 0 | 1 | 2 |
| 2/6 (bpf ARM) | 0 | 3 | 2 |
| 3/6 (bpf opcodes) | 0 | 1 | 2 |
| 4/6 (bpf empty) | 0 | 3 | 3 |
| 5/6 (bpf add/sub) | 0 | 2 | 2 |
| 6/6 (bpf jump) | 0 | 2 | 4 |

### Key Issues to Address

1. **Missing Fixes: tags** on patches 2, 3, 4, and 5 - these are bug
   fixes and should reference the original commits.
2. **Missing Cc: stable@dpdk.org** in commit message bodies for bug fix
   patches.
3. **Trailing whitespace** in Acked-by/Tested-by lines on patches 2
   and 6.

### Positive Aspects

- Good use of DPDK macros (`RTE_BIT32`, `RTE_SHIFT_VAL32`, `UINT32_C`)
- Comprehensive test additions using proper test framework
- Clear commit messages explaining the issues and fixes
- Proper patch ordering (dependencies handled correctly)
- License/copyright compliance (no changes needed)
- Code follows DPDK style guidelines

^ permalink raw reply	[flat|nested] 51+ messages in thread
* [PATCH v4 0/6] bpf: simple tests and fixes
  2025-12-17 18:01 ` [PATCH v3 0/6] bpf: simple tests and fixes Marat Khalili
                     ` (7 preceding siblings ...)
  2026-01-23  5:22 ` Stephen Hemminger
@ 2026-01-27 11:49 ` Marat Khalili
  2026-01-27 11:49 ` [PATCH v4 1/6] eal: variable first arguments of RTE_SHIFT_VALxx Marat Khalili
                     ` (7 more replies)
  8 siblings, 8 replies; 51+ messages in thread
From: Marat Khalili @ 2026-01-27 11:49 UTC (permalink / raw)
  To: dev; +Cc: jerinjacobk, konstantin.ananyev, mb, stephen

Add simple unit-style tests for rte_bpf_load, and fix some minor
discovered bugs.

v4:
* Add Fixes: and Cc: stable@dpdk.org tags to all patches except 3. Try
  to make sure stable@dpdk.org does not receive patch 3.
* Ensure no (possibly hallucinated) trailing whitespace after Acked-by.
* Improve subject line for the "conditional jump first" patch.
* Update REGISTER_FAST_TEST() calls in app/test/test_bpf.c to use new
  constants (NOHUGE_OK, ASAN_OK) instead of old boolean values.

v3:
* Fixed unfortunate typos in the last patch commit message.
* Marked all ARM opcodes with UINT32_C for clarity as suggested by
  Morten Brørup, in a separate commit.

v2:
* Moved new tests back into test_bpf.c
* Changed library macros RTE_SHIFT_VALxx to handle variable first
  arguments instead of introducing new macros.
* Added another test and fix, for conditional jump as first instruction.

Marat Khalili (6):
  eal: variable first arguments of RTE_SHIFT_VALxx
  bpf: fix signed shift overflows in ARM JIT
  bpf: mark ARM opcodes with UINT32_C
  bpf: disallow empty program
  bpf: make add/subtract one program validate
  bpf: fix validation when conditional jump is first instruction

 app/test/test_bpf.c          | 254 +++++++++++++++++++++++++++++++++++
 lib/bpf/bpf_jit_arm64.c      | 184 ++++++++++++-------------
 lib/bpf/bpf_load.c           |   2 +-
 lib/bpf/bpf_validate.c       |  48 ++++---
 lib/eal/include/rte_bitops.h |   4 +-
 5 files changed, 375 insertions(+), 117 deletions(-)

-- 
2.34.1

^ permalink raw reply	[flat|nested] 51+ messages in thread
* [PATCH v4 1/6] eal: variable first arguments of RTE_SHIFT_VALxx
  2026-01-27 11:49 ` [PATCH v4 " Marat Khalili
@ 2026-01-27 11:49 ` Marat Khalili
  2026-01-28  0:41   ` fengchengwen
  2026-01-27 11:49 ` [PATCH v4 2/6] bpf: fix signed shift overflows in ARM JIT Marat Khalili
                     ` (6 subsequent siblings)
  7 siblings, 1 reply; 51+ messages in thread
From: Marat Khalili @ 2026-01-27 11:49 UTC (permalink / raw)
  To: dev, Jack Bond-Preston, Chengwen Feng
  Cc: jerinjacobk, konstantin.ananyev, mb, stephen, stable

Macros RTE_SHIFT_VAL32 and RTE_SHIFT_VAL64 were implemented by applying
UINT32_C or UINT64_C correspondingly to their first argument. As a
consequence the first argument had to be a constant. Replace UINT32_C
and UINT64_C with casts to uint32_t and uint64_t to allow these
arguments to be variable. For constants the result should be the same.

(Yes, technically UINT64_C promotes to uint_least64_t, not uint64_t, but
I think most users of RTE_SHIFT_VAL64 expect the result to be uint64_t.)

Fixes: 1d8f2285ed3f ("eal: introduce more macros for bit definition")
Cc: stable@dpdk.org

Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
---
 lib/eal/include/rte_bitops.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/eal/include/rte_bitops.h b/lib/eal/include/rte_bitops.h
index 2d1b9d281c..aa6ac73abb 100644
--- a/lib/eal/include/rte_bitops.h
+++ b/lib/eal/include/rte_bitops.h
@@ -51,7 +51,7 @@ extern "C" {
  * @param nr
  *   The shift number in range of 0 to (32 - width of val).
  */
-#define RTE_SHIFT_VAL32(val, nr) (UINT32_C(val) << (nr))
+#define RTE_SHIFT_VAL32(val, nr) ((uint32_t)(val) << (nr))
 
 /**
  * Get the uint64_t shifted value.
@@ -61,7 +61,7 @@ extern "C" {
  * @param nr
  *   The shift number in range of 0 to (64 - width of val).
  */
-#define RTE_SHIFT_VAL64(val, nr) (UINT64_C(val) << (nr))
+#define RTE_SHIFT_VAL64(val, nr) ((uint64_t)(val) << (nr))
 
 /**
  * Generate a contiguous 32-bit mask
-- 
2.34.1

^ permalink raw reply related	[flat|nested] 51+ messages in thread
* Re: [PATCH v4 1/6] eal: variable first arguments of RTE_SHIFT_VALxx
  2026-01-27 11:49 ` [PATCH v4 1/6] eal: variable first arguments of RTE_SHIFT_VALxx Marat Khalili
@ 2026-01-28  0:41   ` fengchengwen
  0 siblings, 0 replies; 51+ messages in thread
From: fengchengwen @ 2026-01-28 0:41 UTC (permalink / raw)
  To: Marat Khalili, dev, Jack Bond-Preston
  Cc: jerinjacobk, konstantin.ananyev, mb, stephen, stable

Acked-by: Chengwen Feng <fengchengwen@huawei.com>

On 1/27/2026 7:49 PM, Marat Khalili wrote:
> Macros RTE_SHIFT_VAL32 and RTE_SHIFT_VAL64 were implemented by applying
> UINT32_C or UINT64_C correspondingly to their first argument. As a
> consequence the first argument had to be a constant. Replace UINT32_C
> and UINT64_C with casts to uint32_t and uint64_t to allow these
> arguments to be variable. For constants the result should be the same.
>
> (Yes, technically UINT64_C promotes to uint_least64_t, not uint64_t, but
> I think most users of RTE_SHIFT_VAL64 expect the result to be uint64_t.)
>
> Fixes: 1d8f2285ed3f ("eal: introduce more macros for bit definition")
> Cc: stable@dpdk.org
>
> Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
> Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
> Acked-by: Stephen Hemminger <stephen@networkplumber.org>

...

^ permalink raw reply	[flat|nested] 51+ messages in thread
* [PATCH v4 2/6] bpf: fix signed shift overflows in ARM JIT 2026-01-27 11:49 ` [PATCH v4 " Marat Khalili 2026-01-27 11:49 ` [PATCH v4 1/6] eal: variable first arguments of RTE_SHIFT_VALxx Marat Khalili @ 2026-01-27 11:49 ` Marat Khalili 2026-01-27 11:49 ` [PATCH v4 3/6] bpf: mark ARM opcodes with UINT32_C Marat Khalili ` (5 subsequent siblings) 7 siblings, 0 replies; 51+ messages in thread From: Marat Khalili @ 2026-01-27 11:49 UTC (permalink / raw) To: dev, Wathsala Vithanage, Konstantin Ananyev, Jerin Jacob Cc: jerinjacobk, mb, stephen, stable Left shifts of integer literals and bool values overwriting the sign bit were used multiple times in bpf_jit_arm64.c. E.g.: insn = (!!is64) << 31; where is64 has type bool (double bang is a no-op here). The operand of left shift was promoted to type int, which when 32-bit wide cannot represent the result. Similarly literal integers have int type by default. Sanitizer produced the following diagnostic during runtime (for various lines): lib/bpf/bpf_jit_arm64.c:241:18: runtime error: left shift of 1 by 31 places cannot be represented in type 'int' To fix the issue use RTE_BIT32 and similar macros instead. Fixes: f3e516772464 ("bpf/arm: add prologue and epilogue") Cc: stable@dpdk.org Signed-off-by: Marat Khalili <marat.khalili@huawei.com> Acked-by: Morten Brørup <mb@smartsharesystems.com> Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> --- lib/bpf/bpf_jit_arm64.c | 162 ++++++++++++++++++++-------------------- 1 file changed, 81 insertions(+), 81 deletions(-) diff --git a/lib/bpf/bpf_jit_arm64.c b/lib/bpf/bpf_jit_arm64.c index 96b8cd2e03..76df1e4ba1 100644 --- a/lib/bpf/bpf_jit_arm64.c +++ b/lib/bpf/bpf_jit_arm64.c @@ -28,7 +28,7 @@ #define A64_ZR 31 #define check_imm(n, val) (((val) >= 0) ? 
!!((val) >> (n)) : !!((~val) >> (n))) -#define mask_imm(n, val) ((val) & ((1 << (n)) - 1)) +#define mask_imm(n, val) ((val) & (RTE_BIT32(n) - 1)) struct ebpf_a64_map { uint32_t off; /* eBPF to arm64 insn offset mapping for jump */ @@ -238,12 +238,12 @@ emit_add_sub_imm(struct a64_jit_ctx *ctx, bool is64, bool sub, uint8_t rd, uint32_t insn, imm; imm = mask_imm(12, imm12); - insn = (!!is64) << 31; - insn |= (!!sub) << 30; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(sub, 30); insn |= 0x11000000; insn |= rd; - insn |= rn << 5; - insn |= imm << 10; + insn |= RTE_SHIFT_VAL32(rn, 5); + insn |= RTE_SHIFT_VAL32(imm, 10); emit_insn(ctx, insn, check_reg(rd) || check_reg(rn) || check_imm(12, imm12)); @@ -279,16 +279,16 @@ emit_ls_pair_64(struct a64_jit_ctx *ctx, uint8_t rt, uint8_t rt2, uint8_t rn, { uint32_t insn; - insn = (!!load) << 22; - insn |= (!!pre_index) << 24; + insn = RTE_SHIFT_VAL32(load, 22); + insn |= RTE_SHIFT_VAL32(pre_index, 24); insn |= 0xa8800000; insn |= rt; - insn |= rn << 5; - insn |= rt2 << 10; + insn |= RTE_SHIFT_VAL32(rn, 5); + insn |= RTE_SHIFT_VAL32(rt2, 10); if (push) - insn |= 0x7e << 15; /* 0x7e means -2 with imm7 */ + insn |= RTE_SHIFT_VAL32(0x7e, 15); /* 0x7e means -2 with imm7 */ else - insn |= 0x2 << 15; + insn |= RTE_SHIFT_VAL32(0x2, 15); emit_insn(ctx, insn, check_reg(rn) || check_reg(rt) || check_reg(rt2)); @@ -317,11 +317,11 @@ mov_imm(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t type, { uint32_t insn; - insn = (!!is64) << 31; - insn |= type << 29; - insn |= 0x25 << 23; - insn |= (shift/16) << 21; - insn |= imm16 << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(type, 29); + insn |= RTE_SHIFT_VAL32(0x25, 23); + insn |= RTE_SHIFT_VAL32(shift/16, 21); + insn |= RTE_SHIFT_VAL32(imm16, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_mov_hw(is64, shift)); @@ -334,7 +334,7 @@ emit_mov_imm32(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint32_t val) uint16_t lower = val & 0xffff; /* 
Positive number */ - if ((val & 1UL << 31) == 0) { + if ((val & RTE_BIT32(31)) == 0) { mov_imm(ctx, is64, rd, A64_MOVZ, lower, 0); if (upper) mov_imm(ctx, is64, rd, A64_MOVK, upper, 16); @@ -393,21 +393,21 @@ emit_ls(struct a64_jit_ctx *ctx, uint8_t sz, uint8_t rt, uint8_t rn, uint8_t rm, { uint32_t insn; - insn = 0x1c1 << 21; + insn = RTE_SHIFT_VAL32(0x1c1, 21); if (load) - insn |= 1 << 22; + insn |= RTE_BIT32(22); if (sz == BPF_B) - insn |= 0 << 30; + insn |= RTE_SHIFT_VAL32(0, 30); else if (sz == BPF_H) - insn |= 1 << 30; + insn |= RTE_SHIFT_VAL32(1, 30); else if (sz == BPF_W) - insn |= 2 << 30; + insn |= RTE_SHIFT_VAL32(2, 30); else if (sz == EBPF_DW) - insn |= 3 << 30; + insn |= RTE_SHIFT_VAL32(3, 30); - insn |= rm << 16; - insn |= 0x1a << 10; /* LSL and S = 0 */ - insn |= rn << 5; + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(0x1a, 10); /* LSL and S = 0 */ + insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rt; emit_insn(ctx, insn, check_reg(rt) || check_reg(rn) || check_reg(rm) || @@ -436,10 +436,10 @@ emit_add_sub(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t rn, { uint32_t insn; - insn = (!!is64) << 31; - insn |= op << 21; /* shift == 0 */ - insn |= rm << 16; - insn |= rn << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(op, 21); /* shift == 0 */ + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); @@ -468,11 +468,11 @@ emit_mul(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t rm) { uint32_t insn; - insn = (!!is64) << 31; - insn |= 0xd8 << 21; - insn |= rm << 16; - insn |= A64_ZR << 10; - insn |= rd << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(0xd8, 21); + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(A64_ZR, 10); + insn |= RTE_SHIFT_VAL32(rd, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); @@ -489,11 +489,11 @@ emit_data_process_two_src(struct a64_jit_ctx *ctx, bool is64, 
uint8_t rd, { uint32_t insn; - insn = (!!is64) << 31; - insn |= 0xd6 << 21; - insn |= rm << 16; - insn |= op << 10; - insn |= rn << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(0xd6, 21); + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(op, 10); + insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); @@ -532,14 +532,14 @@ emit_bitfield(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t rn, { uint32_t insn; - insn = (!!is64) << 31; + insn = RTE_SHIFT_VAL32(is64, 31); if (insn) - insn |= 1 << 22; /* Set N bit when is64 is set */ - insn |= op << 29; - insn |= 0x26 << 23; - insn |= immr << 16; - insn |= imms << 10; - insn |= rn << 5; + insn |= RTE_BIT32(22); /* Set N bit when is64 is set */ + insn |= RTE_SHIFT_VAL32(op, 29); + insn |= RTE_SHIFT_VAL32(0x26, 23); + insn |= RTE_SHIFT_VAL32(immr, 16); + insn |= RTE_SHIFT_VAL32(imms, 10); + insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rn) || @@ -578,11 +578,11 @@ emit_logical(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, { uint32_t insn; - insn = (!!is64) << 31; - insn |= op << 29; - insn |= 0x50 << 21; - insn |= rm << 16; - insn |= rd << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(op, 29); + insn |= RTE_SHIFT_VAL32(0x50, 21); + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(rd, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); @@ -612,12 +612,12 @@ emit_msub(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t rn, { uint32_t insn; - insn = (!!is64) << 31; - insn |= 0xd8 << 21; - insn |= rm << 16; - insn |= 0x1 << 15; - insn |= ra << 10; - insn |= rn << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(0xd8, 21); + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(0x1, 15); + insn |= RTE_SHIFT_VAL32(ra, 10); + insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rn) || 
check_reg(rm) || @@ -638,7 +638,7 @@ emit_blr(struct a64_jit_ctx *ctx, uint8_t rn) uint32_t insn; insn = 0xd63f0000; - insn |= rn << 5; + insn |= RTE_SHIFT_VAL32(rn, 5); emit_insn(ctx, insn, check_reg(rn)); } @@ -669,22 +669,22 @@ emit_rev(struct a64_jit_ctx *ctx, uint8_t rd, int32_t imm) uint32_t insn; insn = 0xdac00000; - insn |= rd << 5; + insn |= RTE_SHIFT_VAL32(rd, 5); insn |= rd; switch (imm) { case 16: - insn |= 1 << 10; + insn |= RTE_SHIFT_VAL32(1, 10); emit_insn(ctx, insn, check_reg(rd)); emit_zero_extend(ctx, rd, 16); break; case 32: - insn |= 2 << 10; + insn |= RTE_SHIFT_VAL32(2, 10); emit_insn(ctx, insn, check_reg(rd)); /* Upper 32 bits already cleared */ break; case 64: - insn |= 3 << 10; + insn |= RTE_SHIFT_VAL32(3, 10); emit_insn(ctx, insn, check_reg(rd)); break; default: @@ -933,9 +933,9 @@ emit_cbnz(struct a64_jit_ctx *ctx, bool is64, uint8_t rt, int32_t imm19) uint32_t insn, imm; imm = mask_imm(19, imm19); - insn = (!!is64) << 31; - insn |= 0x35 << 24; - insn |= imm << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(0x35, 24); + insn |= RTE_SHIFT_VAL32(imm, 5); insn |= rt; emit_insn(ctx, insn, check_reg(rt) || check_imm(19, imm19)); @@ -947,7 +947,7 @@ emit_b(struct a64_jit_ctx *ctx, int32_t imm26) uint32_t insn, imm; imm = mask_imm(26, imm26); - insn = 0x5 << 26; + insn = RTE_SHIFT_VAL32(0x5, 26); insn |= imm; emit_insn(ctx, insn, check_imm(26, imm26)); @@ -971,9 +971,9 @@ emit_stadd(struct a64_jit_ctx *ctx, bool is64, uint8_t rs, uint8_t rn) uint32_t insn; insn = 0xb820001f; - insn |= (!!is64) << 30; - insn |= rs << 16; - insn |= rn << 5; + insn |= RTE_SHIFT_VAL32(is64, 30); + insn |= RTE_SHIFT_VAL32(rs, 16); + insn |= RTE_SHIFT_VAL32(rn, 5); emit_insn(ctx, insn, check_reg(rs) || check_reg(rn)); } @@ -984,8 +984,8 @@ emit_ldxr(struct a64_jit_ctx *ctx, bool is64, uint8_t rt, uint8_t rn) uint32_t insn; insn = 0x885f7c00; - insn |= (!!is64) << 30; - insn |= rn << 5; + insn |= RTE_SHIFT_VAL32(is64, 30); + insn |= RTE_SHIFT_VAL32(rn, 
5); insn |= rt; emit_insn(ctx, insn, check_reg(rt) || check_reg(rn)); @@ -998,9 +998,9 @@ emit_stxr(struct a64_jit_ctx *ctx, bool is64, uint8_t rs, uint8_t rt, uint32_t insn; insn = 0x88007c00; - insn |= (!!is64) << 30; - insn |= rs << 16; - insn |= rn << 5; + insn |= RTE_SHIFT_VAL32(is64, 30); + insn |= RTE_SHIFT_VAL32(rs, 16); + insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rt; emit_insn(ctx, insn, check_reg(rs) || check_reg(rt) || check_reg(rn)); @@ -1051,9 +1051,9 @@ emit_cmp_tst(struct a64_jit_ctx *ctx, bool is64, uint8_t rn, uint8_t rm, uint32_t insn; insn = opc; - insn |= (!!is64) << 31; - insn |= rm << 16; - insn |= rn << 5; + insn |= RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(rn, 5); emit_insn(ctx, insn, check_reg(rn) || check_reg(rm)); } @@ -1076,8 +1076,8 @@ emit_b_cond(struct a64_jit_ctx *ctx, uint8_t cond, int32_t imm19) uint32_t insn, imm; imm = mask_imm(19, imm19); - insn = 0x15 << 26; - insn |= imm << 5; + insn = RTE_SHIFT_VAL32(0x15, 26); + insn |= RTE_SHIFT_VAL32(imm, 5); insn |= cond; emit_insn(ctx, insn, check_cond(cond) || check_imm(19, imm19)); @@ -1301,7 +1301,7 @@ emit(struct a64_jit_ctx *ctx, struct rte_bpf *bpf) break; /* dst = imm64 */ case (BPF_LD | BPF_IMM | EBPF_DW): - u64 = ((uint64_t)ins[1].imm << 32) | (uint32_t)imm; + u64 = RTE_SHIFT_VAL64(ins[1].imm, 32) | (uint32_t)imm; emit_mov_imm(ctx, 1, dst, u64); i++; break; -- 2.34.1 ^ permalink raw reply related [flat|nested] 51+ messages in thread
* [PATCH v4 3/6] bpf: mark ARM opcodes with UINT32_C 2026-01-27 11:49 ` [PATCH v4 " Marat Khalili 2026-01-27 11:49 ` [PATCH v4 1/6] eal: variable first arguments of RTE_SHIFT_VALxx Marat Khalili 2026-01-27 11:49 ` [PATCH v4 2/6] bpf: fix signed shift overflows in ARM JIT Marat Khalili @ 2026-01-27 11:49 ` Marat Khalili 2026-01-27 11:49 ` [PATCH v4 4/6] bpf: disallow empty program Marat Khalili ` (4 subsequent siblings) 7 siblings, 0 replies; 51+ messages in thread From: Marat Khalili @ 2026-01-27 11:49 UTC (permalink / raw) To: dev, Wathsala Vithanage, Konstantin Ananyev; +Cc: jerinjacobk, mb, stephen To prevent confusion as to whether operations with them will be executed as signed or unsigned (and with what bit width) mark all ARM opcode constants with UINT32_C. Signed-off-by: Marat Khalili <marat.khalili@huawei.com> Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> --- lib/bpf/bpf_jit_arm64.c | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/lib/bpf/bpf_jit_arm64.c b/lib/bpf/bpf_jit_arm64.c index 76df1e4ba1..9263234338 100644 --- a/lib/bpf/bpf_jit_arm64.c +++ b/lib/bpf/bpf_jit_arm64.c @@ -12,7 +12,7 @@ #include "bpf_impl.h" #define A64_REG_MASK(r) ((r) & 0x1f) -#define A64_INVALID_OP_CODE (0xffffffff) +#define A64_INVALID_OP_CODE (UINT32_C(0xffffffff)) #define TMP_REG_1 (EBPF_REG_10 + 1) #define TMP_REG_2 (EBPF_REG_10 + 2) @@ -228,7 +228,7 @@ emit_insn(struct a64_jit_ctx *ctx, uint32_t insn, int error) static void emit_ret(struct a64_jit_ctx *ctx) { - emit_insn(ctx, 0xd65f03c0, 0); + emit_insn(ctx, UINT32_C(0xd65f03c0), 0); } static void @@ -240,7 +240,7 @@ emit_add_sub_imm(struct a64_jit_ctx *ctx, bool is64, bool sub, uint8_t rd, imm = mask_imm(12, imm12); insn = RTE_SHIFT_VAL32(is64, 31); insn |= RTE_SHIFT_VAL32(sub, 30); - insn |= 0x11000000; + insn |= UINT32_C(0x11000000); insn |= rd; insn |= RTE_SHIFT_VAL32(rn, 5); insn |= RTE_SHIFT_VAL32(imm, 10); 
@@ -281,7 +281,7 @@ emit_ls_pair_64(struct a64_jit_ctx *ctx, uint8_t rt, uint8_t rt2, uint8_t rn, insn = RTE_SHIFT_VAL32(load, 22); insn |= RTE_SHIFT_VAL32(pre_index, 24); - insn |= 0xa8800000; + insn |= UINT32_C(0xa8800000); insn |= rt; insn |= RTE_SHIFT_VAL32(rn, 5); insn |= RTE_SHIFT_VAL32(rt2, 10); @@ -637,7 +637,7 @@ emit_blr(struct a64_jit_ctx *ctx, uint8_t rn) { uint32_t insn; - insn = 0xd63f0000; + insn = UINT32_C(0xd63f0000); insn |= RTE_SHIFT_VAL32(rn, 5); emit_insn(ctx, insn, check_reg(rn)); @@ -668,7 +668,7 @@ emit_rev(struct a64_jit_ctx *ctx, uint8_t rd, int32_t imm) { uint32_t insn; - insn = 0xdac00000; + insn = UINT32_C(0xdac00000); insn |= RTE_SHIFT_VAL32(rd, 5); insn |= rd; @@ -970,7 +970,7 @@ emit_stadd(struct a64_jit_ctx *ctx, bool is64, uint8_t rs, uint8_t rn) { uint32_t insn; - insn = 0xb820001f; + insn = UINT32_C(0xb820001f); insn |= RTE_SHIFT_VAL32(is64, 30); insn |= RTE_SHIFT_VAL32(rs, 16); insn |= RTE_SHIFT_VAL32(rn, 5); @@ -983,7 +983,7 @@ emit_ldxr(struct a64_jit_ctx *ctx, bool is64, uint8_t rt, uint8_t rn) { uint32_t insn; - insn = 0x885f7c00; + insn = UINT32_C(0x885f7c00); insn |= RTE_SHIFT_VAL32(is64, 30); insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rt; @@ -997,7 +997,7 @@ emit_stxr(struct a64_jit_ctx *ctx, bool is64, uint8_t rs, uint8_t rt, { uint32_t insn; - insn = 0x88007c00; + insn = UINT32_C(0x88007c00); insn |= RTE_SHIFT_VAL32(is64, 30); insn |= RTE_SHIFT_VAL32(rs, 16); insn |= RTE_SHIFT_VAL32(rn, 5); @@ -1042,8 +1042,8 @@ emit_xadd(struct a64_jit_ctx *ctx, uint8_t op, uint8_t tmp1, uint8_t tmp2, } } -#define A64_CMP 0x6b00000f -#define A64_TST 0x6a00000f +#define A64_CMP (UINT32_C(0x6b00000f)) +#define A64_TST (UINT32_C(0x6a00000f)) static void emit_cmp_tst(struct a64_jit_ctx *ctx, bool is64, uint8_t rn, uint8_t rm, uint32_t opc) -- 2.34.1 ^ permalink raw reply related [flat|nested] 51+ messages in thread
* [PATCH v4 4/6] bpf: disallow empty program 2026-01-27 11:49 ` [PATCH v4 " Marat Khalili ` (2 preceding siblings ...) 2026-01-27 11:49 ` [PATCH v4 3/6] bpf: mark ARM opcodes with UINT32_C Marat Khalili @ 2026-01-27 11:49 ` Marat Khalili 2026-01-27 11:49 ` [PATCH v4 5/6] bpf: make add/subtract one program validate Marat Khalili ` (3 subsequent siblings) 7 siblings, 0 replies; 51+ messages in thread From: Marat Khalili @ 2026-01-27 11:49 UTC (permalink / raw) To: dev, Konstantin Ananyev, Ferruh Yigit; +Cc: jerinjacobk, mb, stephen, stable Add tests for some simple cases: * Program with no instructions; * Program with only EXIT instruction but no return value set; * Program with return value set but no EXIT instruction; * Minimal valid program with return value set and an EXIT instruction. Fix found bugs: * a program with no instructions was accepted; * a program with no EXIT instruction read outside the buffer. Fixes: 6e12ec4c4d6d ("bpf: add more checks") Cc: stable@dpdk.org Signed-off-by: Marat Khalili <marat.khalili@huawei.com> Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> --- app/test/test_bpf.c | 118 +++++++++++++++++++++++++++++++++++++++++ lib/bpf/bpf_load.c | 2 +- lib/bpf/bpf_validate.c | 20 +++++-- 3 files changed, 135 insertions(+), 5 deletions(-) diff --git a/app/test/test_bpf.c b/app/test/test_bpf.c index a7d56f8d86..1012265ad0 100644 --- a/app/test/test_bpf.c +++ b/app/test/test_bpf.c @@ -34,6 +34,124 @@ test_bpf(void) #include <rte_ip.h> +/* Tests of most simple BPF programs (no instructions, one instruction etc.) */ + +/* + * Try to load a simple bpf program from the instructions array. + * + * When `expected_errno` is zero, expect it to load successfully. + * When `expected_errno` is non-zero, expect it to fail with this `rte_errno`. + * + * @param nb_ins + * Number of instructions in the `ins` array. + * @param ins + * BPF instructions array. 
+ * @param expected_errno + * Expected result. + * @return + * TEST_SUCCESS on success, error code on failure. + */ +static int +bpf_load_test(uint32_t nb_ins, const struct ebpf_insn *ins, int expected_errno) +{ + const struct rte_bpf_prm prm = { + .ins = ins, + .nb_ins = nb_ins, + .prog_arg = { + .type = RTE_BPF_ARG_RAW, + .size = sizeof(uint64_t), + }, + }; + + struct rte_bpf *const bpf = rte_bpf_load(&prm); + const int actual_errno = rte_errno; + rte_bpf_destroy(bpf); + + if (expected_errno != 0) { + RTE_TEST_ASSERT_EQUAL(bpf, NULL, + "expect rte_bpf_load() == NULL"); + RTE_TEST_ASSERT_EQUAL(actual_errno, expected_errno, + "expect rte_errno == %d, found %d", + expected_errno, actual_errno); + } else + RTE_TEST_ASSERT_NOT_EQUAL(bpf, NULL, + "expect rte_bpf_load() != NULL"); + + return TEST_SUCCESS; +} + +/* + * Try and load completely empty BPF program. + * Should fail because there is no EXIT (and also return value is undefined). + */ +static int +test_no_instructions(void) +{ + static const struct ebpf_insn ins[] = {}; + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); +} + +REGISTER_FAST_TEST(bpf_no_instructions_autotest, NOHUGE_OK, ASAN_OK, test_no_instructions); + +/* + * Try and load a BPF program comprising single EXIT instruction. + * Should fail because the return value is undefined. + */ +static int +test_exit_only(void) +{ + static const struct ebpf_insn ins[] = { + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); +} + +REGISTER_FAST_TEST(bpf_exit_only_autotest, NOHUGE_OK, ASAN_OK, test_exit_only); + +/* + * Try and load a BPF program with no EXIT instruction. + * Should fail because of this. + */ +static int +test_no_exit(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Set return value to the program argument. 
*/ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); +} + +REGISTER_FAST_TEST(bpf_no_exit_autotest, NOHUGE_OK, ASAN_OK, test_no_exit); + +/* + * Try and load smallest possible valid BPF program. + */ +static int +test_minimal_working(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Set return value to the program argument. */ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, 0); +} + +REGISTER_FAST_TEST(bpf_minimal_working_autotest, NOHUGE_OK, ASAN_OK, test_minimal_working); + /* * Basic functional tests for librte_bpf. * The main procedure - load eBPF program, execute it and diff --git a/lib/bpf/bpf_load.c b/lib/bpf/bpf_load.c index 556e613762..6983c026af 100644 --- a/lib/bpf/bpf_load.c +++ b/lib/bpf/bpf_load.c @@ -88,7 +88,7 @@ rte_bpf_load(const struct rte_bpf_prm *prm) int32_t rc; uint32_t i; - if (prm == NULL || prm->ins == NULL || + if (prm == NULL || prm->ins == NULL || prm->nb_ins == 0 || (prm->nb_xsym != 0 && prm->xsym == NULL)) { rte_errno = EINVAL; return NULL; diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c index 4f47d6dc7b..23444b3eaa 100644 --- a/lib/bpf/bpf_validate.c +++ b/lib/bpf/bpf_validate.c @@ -1827,7 +1827,7 @@ add_edge(struct bpf_verifier *bvf, struct inst_node *node, uint32_t nidx) { uint32_t ne; - if (nidx > bvf->prm->nb_ins) { + if (nidx >= bvf->prm->nb_ins) { RTE_BPF_LOG_LINE(ERR, "%s: program boundary violation at pc: %u, next pc: %u", __func__, get_node_idx(bvf, node), nidx); @@ -1886,14 +1886,20 @@ get_prev_node(struct bpf_verifier *bvf, struct inst_node *node) * Control Flow Graph (CFG). * Information collected at this path would be used later * to determine is there any loops, and/or unreachable instructions. + * PREREQUISITE: there is at least one node. 
*/ static void dfs(struct bpf_verifier *bvf) { struct inst_node *next, *node; - node = bvf->in; - while (node != NULL) { + RTE_ASSERT(bvf->nb_nodes != 0); + /* + * Since there is at least one node, node with index 0 always exists; + * it is our program entry point. + */ + node = &bvf->in[0]; + do { if (node->colour == WHITE) set_node_colour(bvf, node, GREY); @@ -1923,7 +1929,7 @@ dfs(struct bpf_verifier *bvf) } } else node = NULL; - } + } while (node != NULL); } /* @@ -2062,6 +2068,12 @@ validate(struct bpf_verifier *bvf) if (rc != 0) return rc; + if (bvf->nb_nodes == 0) { + RTE_BPF_LOG_LINE(ERR, "%s(%p) the program is empty", + __func__, bvf); + return -EINVAL; + } + dfs(bvf); RTE_LOG(DEBUG, BPF, "%s(%p) stats:\n" -- 2.34.1 ^ permalink raw reply related [flat|nested] 51+ messages in thread
* [PATCH v4 5/6] bpf: make add/subtract one program validate 2026-01-27 11:49 ` [PATCH v4 " Marat Khalili ` (3 preceding siblings ...) 2026-01-27 11:49 ` [PATCH v4 4/6] bpf: disallow empty program Marat Khalili @ 2026-01-27 11:49 ` Marat Khalili 2026-01-27 11:49 ` [PATCH v4 6/6] bpf: fix validation when conditional jump is first instruction Marat Khalili ` (2 subsequent siblings) 7 siblings, 0 replies; 51+ messages in thread From: Marat Khalili @ 2026-01-27 11:49 UTC (permalink / raw) To: dev, Konstantin Ananyev; +Cc: jerinjacobk, mb, stephen, stable Add tests loading simple BPF programs adding or subtracting one to its argument and fix triggered signed integer overflow undefined behaviours: lib/bpf/bpf_validate.c:324:24: runtime error: signed integer overflow: 1 + 9223372036854775807 cannot be represented in type 'long int' lib/bpf/bpf_validate.c:352:24: runtime error: signed integer overflow: -9223372036854775808 - 1 cannot be represented in type 'long int' As a minimal possible fix perform operation on unsigned integers where overflow is well-defined, which was probably the original intent. Fixes: 8021917293d0 ("bpf: add extra validation for input BPF program") Cc: stable@dpdk.org Signed-off-by: Marat Khalili <marat.khalili@huawei.com> Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> --- app/test/test_bpf.c | 56 ++++++++++++++++++++++++++++++++++++++++++ lib/bpf/bpf_validate.c | 8 +++--- 2 files changed, 60 insertions(+), 4 deletions(-) diff --git a/app/test/test_bpf.c b/app/test/test_bpf.c index 1012265ad0..54603c7c30 100644 --- a/app/test/test_bpf.c +++ b/app/test/test_bpf.c @@ -152,6 +152,62 @@ test_minimal_working(void) REGISTER_FAST_TEST(bpf_minimal_working_autotest, NOHUGE_OK, ASAN_OK, test_minimal_working); +/* + * Try and load valid BPF program adding one to the argument. + */ +static int +test_add_one(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Set return value to one. 
*/ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_K), + .dst_reg = EBPF_REG_0, + .imm = 1, + }, + { + /* Add program argument to the return value. */ + .code = (EBPF_ALU64 | BPF_ADD | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, 0); +} + +REGISTER_FAST_TEST(bpf_add_one_autotest, NOHUGE_OK, ASAN_OK, test_add_one); + +/* + * Try and load valid BPF program subtracting one from the argument. + */ +static int +test_subtract_one(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Subtract one from the program argument. */ + .code = (EBPF_ALU64 | BPF_SUB | BPF_K), + .dst_reg = EBPF_REG_1, + .imm = 1, + }, + { + /* Set return value to the result. */ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, 0); +} + +REGISTER_FAST_TEST(bpf_subtract_one_autotest, NOHUGE_OK, ASAN_OK, test_subtract_one); + /* * Basic functional tests for librte_bpf. 
* The main procedure - load eBPF program, execute it and diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c index 23444b3eaa..47ad6fef0f 100644 --- a/lib/bpf/bpf_validate.c +++ b/lib/bpf/bpf_validate.c @@ -243,8 +243,8 @@ eval_add(struct bpf_reg_val *rd, const struct bpf_reg_val *rs, uint64_t msk) rv.u.min = (rd->u.min + rs->u.min) & msk; rv.u.max = (rd->u.max + rs->u.max) & msk; - rv.s.min = (rd->s.min + rs->s.min) & msk; - rv.s.max = (rd->s.max + rs->s.max) & msk; + rv.s.min = ((uint64_t)rd->s.min + (uint64_t)rs->s.min) & msk; + rv.s.max = ((uint64_t)rd->s.max + (uint64_t)rs->s.max) & msk; /* * if at least one of the operands is not constant, @@ -272,8 +272,8 @@ eval_sub(struct bpf_reg_val *rd, const struct bpf_reg_val *rs, uint64_t msk) rv.u.min = (rd->u.min - rs->u.max) & msk; rv.u.max = (rd->u.max - rs->u.min) & msk; - rv.s.min = (rd->s.min - rs->s.max) & msk; - rv.s.max = (rd->s.max - rs->s.min) & msk; + rv.s.min = ((uint64_t)rd->s.min - (uint64_t)rs->s.max) & msk; + rv.s.max = ((uint64_t)rd->s.max - (uint64_t)rs->s.min) & msk; /* * if at least one of the operands is not constant, -- 2.34.1 ^ permalink raw reply related [flat|nested] 51+ messages in thread
* [PATCH v4 6/6] bpf: fix validation when conditional jump is first instruction 2026-01-27 11:49 ` [PATCH v4 " Marat Khalili ` (4 preceding siblings ...) 2026-01-27 11:49 ` [PATCH v4 5/6] bpf: make add/subtract one program validate Marat Khalili @ 2026-01-27 11:49 ` Marat Khalili 2026-01-27 14:02 ` [PATCH v4 0/6] bpf: simple tests and fixes Stephen Hemminger 2026-02-04 16:58 ` Thomas Monjalon 7 siblings, 0 replies; 51+ messages in thread From: Marat Khalili @ 2026-01-27 11:49 UTC (permalink / raw) To: dev, Konstantin Ananyev, Ferruh Yigit; +Cc: jerinjacobk, mb, stephen, stable When the BPF program was starting with a conditional jump only one (true) execution branch of the program was evaluated. Any instructions jumped over were not evaluated and could contain invalid operations. The root cause was using zero instruction index as a signal for ending evaluation when backtracking. Switch from using previous instruction index for tracking execution history to a previous instruction pointer. First instruction will not have it set, and therefore backtracking _from_ it will end evaluation, not backtracking _to_ it like before. 
Add two tests demonstrating the problem: * test_jump_over_invalid_first: loads BPF program with conditional jump over the invalid operation, should not succeed; * test_jump_over_invalid_non_first: same program with one extra instruction at the start to demonstrate that it is indeed invalid (and also guard against another kind of regression); Fixes: 6e12ec4c4d6d ("bpf: add more checks") Cc: stable@dpdk.org Signed-off-by: Marat Khalili <marat.khalili@huawei.com> Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com> Tested-by: Konstantin Ananyev <konstantin.ananyev@huawei.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> --- app/test/test_bpf.c | 80 ++++++++++++++++++++++++++++++++++++++++++ lib/bpf/bpf_validate.c | 20 ++++------- 2 files changed, 86 insertions(+), 14 deletions(-) diff --git a/app/test/test_bpf.c b/app/test/test_bpf.c index 54603c7c30..9cb4761bd0 100644 --- a/app/test/test_bpf.c +++ b/app/test/test_bpf.c @@ -208,6 +208,86 @@ test_subtract_one(void) REGISTER_FAST_TEST(bpf_subtract_one_autotest, NOHUGE_OK, ASAN_OK, test_subtract_one); +/* + * Conditionally jump over invalid operation as first instruction. + */ +static int +test_jump_over_invalid_first(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Jump over the next instruction for some r1. */ + .code = (BPF_JMP | BPF_JEQ | BPF_K), + .dst_reg = EBPF_REG_1, + .imm = 42, + .off = 1, + }, + { + /* Write 0xDEADBEEF to [r1 + INT16_MIN]. */ + .code = (BPF_ST | BPF_MEM | EBPF_DW), + .dst_reg = EBPF_REG_1, + .off = INT16_MIN, + .imm = 0xDEADBEEF, + }, + { + /* Set return value to the program argument. 
*/ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); +} + +REGISTER_FAST_TEST(bpf_jump_over_invalid_first_autotest, NOHUGE_OK, ASAN_OK, + test_jump_over_invalid_first); + +/* + * Conditionally jump over invalid operation as non-first instruction. + */ +static int +test_jump_over_invalid_non_first(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Set return value to the program argument. */ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + /* Jump over the next instruction for some r1. */ + .code = (BPF_JMP | BPF_JEQ | BPF_K), + .dst_reg = EBPF_REG_1, + .imm = 42, + .off = 1, + }, + { + /* Write 0xDEADBEEF to [r1 + INT16_MIN]. */ + .code = (BPF_ST | BPF_MEM | EBPF_DW), + .dst_reg = EBPF_REG_1, + .off = INT16_MIN, + .imm = 0xDEADBEEF, + }, + { + /* Set return value to the program argument. */ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); +} + +REGISTER_FAST_TEST(bpf_jump_over_invalid_non_first_autotest, NOHUGE_OK, ASAN_OK, + test_jump_over_invalid_non_first); + /* * Basic functional tests for librte_bpf. 
* The main procedure - load eBPF program, execute it and diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c index 47ad6fef0f..64a8f227a3 100644 --- a/lib/bpf/bpf_validate.c +++ b/lib/bpf/bpf_validate.c @@ -64,7 +64,7 @@ struct inst_node { uint8_t cur_edge:4; uint8_t edge_type[MAX_EDGES]; uint32_t edge_dest[MAX_EDGES]; - uint32_t prev_node; + struct inst_node *prev_node; struct { struct bpf_eval_state *cur; /* save/restore for jcc targets */ struct bpf_eval_state *start; @@ -1875,12 +1875,6 @@ set_edge_type(struct bpf_verifier *bvf, struct inst_node *node, bvf->edge_type[type]++; } -static struct inst_node * -get_prev_node(struct bpf_verifier *bvf, struct inst_node *node) -{ - return bvf->in + node->prev_node; -} - /* * Depth-First Search (DFS) through previously constructed * Control Flow Graph (CFG). @@ -1916,7 +1910,7 @@ dfs(struct bpf_verifier *bvf) if (next != NULL) { /* proceed with next child */ - next->prev_node = get_node_idx(bvf, node); + next->prev_node = node; node = next; } else { /* @@ -1925,7 +1919,7 @@ dfs(struct bpf_verifier *bvf) */ set_node_colour(bvf, node, BLACK); node->cur_edge = 0; - node = get_prev_node(bvf, node); + node = node->prev_node; } } else node = NULL; @@ -2490,7 +2484,7 @@ evaluate(struct bpf_verifier *bvf) next = NULL; stats.nb_prune++; } else { - next->prev_node = get_node_idx(bvf, node); + next->prev_node = node; node = next; } } else { @@ -2501,11 +2495,9 @@ evaluate(struct bpf_verifier *bvf) */ node->cur_edge = 0; save_safe_eval_state(bvf, node); - node = get_prev_node(bvf, node); + node = node->prev_node; - /* finished */ - if (node == bvf->in) - node = NULL; + /* first node will not have prev, signalling finish */ } } -- 2.34.1 ^ permalink raw reply related [flat|nested] 51+ messages in thread
* Re: [PATCH v4 0/6] bpf: simple tests and fixes 2026-01-27 11:49 ` [PATCH v4 " Marat Khalili ` (5 preceding siblings ...) 2026-01-27 11:49 ` [PATCH v4 6/6] bpf: fix validation when conditional jump is first instruction Marat Khalili @ 2026-01-27 14:02 ` Stephen Hemminger 2026-02-04 11:44 ` Thomas Monjalon 2026-02-04 16:58 ` Thomas Monjalon 7 siblings, 1 reply; 51+ messages in thread From: Stephen Hemminger @ 2026-01-27 14:02 UTC (permalink / raw) To: Marat Khalili; +Cc: dev, jerinjacobk, konstantin.ananyev, mb On Tue, 27 Jan 2026 11:49:36 +0000 Marat Khalili <marat.khalili@huawei.com> wrote: > Add simple unit-style tests for rte_bpf_load, and fix some minor > discovered bugs. > > v4: > * Add Fixes: and Cc: stable@dpdk.org tags to all patches except 3. Try > to make sure stable@dpdk.org does not receive patch 3. > * Ensure no (possibly hallucinated) trailing whitespace after Acked-by. > * Improve subject line for the "conditional jump first" patch. > * Update REGISTER_FAST_TEST() calls in app/test/test_bpf.c to use new > constants (NOHUGE_OK, ASAN_OK) instead of old boolean values. > > v3: > * Fixed unfortunate typos in the last patch commit message. > * Marked all ARM opcodes with UINT32_C for clarity as suggested by > Morten Brørup, in a separate commit. > > v2: > * Moved new tests back into test_bpf.c > * Changed library macros RTE_SHIFT_VALxx to handle variable first > arguments instead of introducing new macros. > * Added another test and fix, for conditional jump as first instruction. 
> > Marat Khalili (6): > eal: variable first arguments of RTE_SHIFT_VALxx > bpf: fix signed shift overflows in ARM JIT > bpf: mark ARM opcodes with UINT32_C > bpf: disallow empty program > bpf: make add/subtract one program validate > bpf: fix validation when conditional jump is first instruction > > app/test/test_bpf.c | 254 +++++++++++++++++++++++++++++++++++ > lib/bpf/bpf_jit_arm64.c | 184 ++++++++++++------------- > lib/bpf/bpf_load.c | 2 +- > lib/bpf/bpf_validate.c | 48 ++++--- > lib/eal/include/rte_bitops.h | 4 +- > 5 files changed, 375 insertions(+), 117 deletions(-) > This is what an automated review looks like when everything is good: # DPDK BPF Patch Series Review (6 patches, v4) **Author:** Marat Khalili **Series:** BPF fixes addressing sanitizer-detected issues and validation bugs ## Summary | Severity | Count | |----------|-------| | Error | 0 | | Warning | 0 | | Info | 2 | **Overall Assessment:** This is a well-crafted patch series. All patches follow DPDK guidelines correctly with proper commit message formatting, appropriate Fixes/Cc:stable tags, and good test coverage. --- ## Patch 1/6: eal: variable first arguments of RTE_SHIFT_VALxx **Status:** ✓ OK Good fix to allow variable arguments in `RTE_SHIFT_VAL32` and `RTE_SHIFT_VAL64` macros by replacing `UINT32_C`/`UINT64_C` with casts. The commit message explains the rationale clearly, including the technical note about `uint_least64_t` vs `uint64_t`. Proper tags: `Fixes:`, `Cc: stable@dpdk.org`, `Signed-off-by`, multiple `Acked-by`. --- ## Patch 2/6: bpf: fix signed shift overflows in ARM JIT **Status:** ✓ OK Addresses undefined behavior when left-shifting into the sign bit (e.g., `(!!is64) << 31`). Correctly replaces problematic patterns with `RTE_SHIFT_VAL32()` and `RTE_BIT32()` macros. The commit message includes the actual sanitizer diagnostic, which is excellent for documenting the issue. Proper tags present. Depends on patch 1/6 for `RTE_SHIFT_VAL32` to accept variable arguments. 
--- ## Patch 3/6: bpf: mark ARM opcodes with UINT32_C **Status:** ✓ OK Stylistic improvement to explicitly mark ARM opcode constants with `UINT32_C()` for clarity about signedness and width. **Note:** No `Fixes:` or `Cc: stable@dpdk.org` tags, which is correct since this is a cleanup/improvement rather than a bug fix. --- ## Patch 4/6: bpf: disallow empty program **Status:** ✓ OK Excellent defensive fix that: 1. Rejects programs with zero instructions 2. Fixes out-of-bounds read when no EXIT instruction present 3. Adds comprehensive test cases for edge cases Test coverage includes: - Empty program (no instructions) - EXIT-only program (undefined return value) - No EXIT instruction - Minimal valid program Proper `Fixes:` and `Cc: stable@dpdk.org` tags. --- ## Patch 5/6: bpf: make add/subtract one program validate **Status:** ✓ OK Fixes signed integer overflow undefined behavior in `eval_add()` and `eval_sub()` by casting to `uint64_t` before arithmetic. The fix is minimal and correct: ```c // Before (UB when overflow occurs): rv.s.min = (rd->s.min + rs->s.min) & msk; // After (well-defined wraparound): rv.s.min = ((uint64_t)rd->s.min + (uint64_t)rs->s.min) & msk; ``` Includes test cases that trigger the issue. Proper tags present. --- ## Patch 6/6: bpf: fix validation when conditional jump is first instruction **Status:** ✓ OK Fixes a security-relevant validation bug where a conditional jump at instruction 0 would skip validation of the false branch. The root cause (using index 0 as sentinel) is well-explained. The fix changes from tracking previous instruction index to tracking previous instruction pointer, which elegantly solves the problem since the first instruction will have `prev_node = NULL`. Test cases demonstrate both the bug (jump-over-invalid at position 0) and a control case (same pattern at position 1 to confirm it was always invalid). Proper tags present. --- ## Series-Level Observations **Info (minor observations):** 1. 
**Patch ordering is correct** - Patch 1 must come before Patch 2 since patch 2 uses `RTE_SHIFT_VAL32()` with variable arguments. Good dependency handling. 2. **Test infrastructure** - The `bpf_load_test()` helper function introduced in patch 4 is reused effectively in patches 5 and 6. Consider whether this helper could be useful in the existing BPF test infrastructure beyond this series. **Code Quality:** - All patches compile independently (can be verified) - Tests are added alongside fixes - No forbidden tokens or style violations detected - Proper use of `RTE_` prefixed macros **Tags Verification:** - All bug fixes have `Fixes:` with 12-char SHA and exact subject - All bug fixes have `Cc: stable@dpdk.org` - All patches have `Signed-off-by:` - Tag order is correct throughout --- ## Verdict **Ready to merge.** No blocking issues found. The series addresses real bugs found by sanitizers and adds valuable test coverage for edge cases in BPF validation. ^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v4 0/6] bpf: simple tests and fixes 2026-01-27 14:02 ` [PATCH v4 0/6] bpf: simple tests and fixes Stephen Hemminger @ 2026-02-04 11:44 ` Thomas Monjalon 2026-02-04 13:33 ` Konstantin Ananyev 0 siblings, 1 reply; 51+ messages in thread From: Thomas Monjalon @ 2026-02-04 11:44 UTC (permalink / raw) To: konstantin.ananyev, Stephen Hemminger; +Cc: Marat Khalili, dev, jerinjacobk, mb 27/01/2026 15:02, Stephen Hemminger: > On Tue, 27 Jan 2026 11:49:36 +0000 > Marat Khalili <marat.khalili@huawei.com> wrote: > > Marat Khalili (6): > > eal: variable first arguments of RTE_SHIFT_VALxx > > bpf: fix signed shift overflows in ARM JIT > > bpf: mark ARM opcodes with UINT32_C > > bpf: disallow empty program > > bpf: make add/subtract one program validate > > bpf: fix validation when conditional jump is first instruction > > > > app/test/test_bpf.c | 254 +++++++++++++++++++++++++++++++++++ > > lib/bpf/bpf_jit_arm64.c | 184 ++++++++++++------------- > > lib/bpf/bpf_load.c | 2 +- > > lib/bpf/bpf_validate.c | 48 ++++--- > > lib/eal/include/rte_bitops.h | 4 +- > > 5 files changed, 375 insertions(+), 117 deletions(-) > > > > This is what an automated review looks like when everything is good: Does it make the author a good candidate to be co-maintainer? ^ permalink raw reply [flat|nested] 51+ messages in thread
* RE: [PATCH v4 0/6] bpf: simple tests and fixes 2026-02-04 11:44 ` Thomas Monjalon @ 2026-02-04 13:33 ` Konstantin Ananyev 0 siblings, 0 replies; 51+ messages in thread From: Konstantin Ananyev @ 2026-02-04 13:33 UTC (permalink / raw) To: Thomas Monjalon, Stephen Hemminger Cc: Marat Khalili, dev@dpdk.org, jerinjacobk@gmail.com, mb@smartsharesystems.com > 27/01/2026 15:02, Stephen Hemminger: > > On Tue, 27 Jan 2026 11:49:36 +0000 > > Marat Khalili <marat.khalili@huawei.com> wrote: > > > Marat Khalili (6): > > > eal: variable first arguments of RTE_SHIFT_VALxx > > > bpf: fix signed shift overflows in ARM JIT > > > bpf: mark ARM opcodes with UINT32_C > > > bpf: disallow empty program > > > bpf: make add/subtract one program validate > > > bpf: fix validation when conditional jump is first instruction > > > > > > app/test/test_bpf.c | 254 +++++++++++++++++++++++++++++++++++ > > > lib/bpf/bpf_jit_arm64.c | 184 ++++++++++++------------- > > > lib/bpf/bpf_load.c | 2 +- > > > lib/bpf/bpf_validate.c | 48 ++++--- > > > lib/eal/include/rte_bitops.h | 4 +- > > > 5 files changed, 375 insertions(+), 117 deletions(-) > > > > > > > This is what an automated review looks like when everything is good: > > Does it make the author a good candidate to be co-maintainer? I believe it does :) Will ask Marat to submit the patch. Konstantin ^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v4 0/6] bpf: simple tests and fixes 2026-01-27 11:49 ` [PATCH v4 " Marat Khalili ` (6 preceding siblings ...) 2026-01-27 14:02 ` [PATCH v4 0/6] bpf: simple tests and fixes Stephen Hemminger @ 2026-02-04 16:58 ` Thomas Monjalon 7 siblings, 0 replies; 51+ messages in thread From: Thomas Monjalon @ 2026-02-04 16:58 UTC (permalink / raw) To: Marat Khalili; +Cc: dev, jerinjacobk, konstantin.ananyev, mb, stephen > Marat Khalili (6): > eal: variable first arguments of RTE_SHIFT_VALxx > bpf: fix signed shift overflows in ARM JIT > bpf: mark ARM opcodes with UINT32_C > bpf: disallow empty program > bpf: make add/subtract one program validate > bpf: fix validation when conditional jump is first instruction Applied, thanks. ^ permalink raw reply [flat|nested] 51+ messages in thread