* [PATCH bpf 0/2] bpf: reject bpf-to-bpf call with large offset in interpreter @ 2026-03-16 19:02 Yazhou Tang 2026-03-16 19:02 ` [PATCH bpf 1/2] " Yazhou Tang 2026-03-16 19:02 ` [PATCH bpf 2/2] selftests/bpf: Add test for large offset bpf-to-bpf call Yazhou Tang 0 siblings, 2 replies; 10+ messages in thread From: Yazhou Tang @ 2026-03-16 19:02 UTC (permalink / raw) To: bpf Cc: ast, daniel, john.fastabend, andrii, martin.lau, eddyz87, song, yonghong.song, kpsingh, sdf, haoluo, jolsa, tangyazhou518, shenghaoyuan0928, ziye From: Yazhou Tang <tangyazhou518@outlook.com> This patchset addresses a silent truncation bug in the BPF verifier that occurs when a bpf-to-bpf call involves a massive relative offset, specifically in the context of the BPF interpreter (when JIT is disabled or fails). Please see commit log of 1/2 for more details. Yazhou Tang (2): bpf: reject bpf-to-bpf call with large offset in interpreter selftests/bpf: Add test for large offset bpf-to-bpf call kernel/bpf/verifier.c | 6 +++ .../selftests/bpf/prog_tests/call_large_imm.c | 49 +++++++++++++++++++ .../selftests/bpf/progs/call_large_imm.c | 38 ++++++++++++++ 3 files changed, 93 insertions(+) create mode 100644 tools/testing/selftests/bpf/prog_tests/call_large_imm.c create mode 100644 tools/testing/selftests/bpf/progs/call_large_imm.c -- 2.53.0 ^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH bpf 1/2] bpf: reject bpf-to-bpf call with large offset in interpreter 2026-03-16 19:02 [PATCH bpf 0/2] bpf: reject bpf-to-bpf call with large offset in interpreter Yazhou Tang @ 2026-03-16 19:02 ` Yazhou Tang 2026-03-16 19:33 ` bot+bpf-ci ` (2 more replies) 2026-03-16 19:02 ` [PATCH bpf 2/2] selftests/bpf: Add test for large offset bpf-to-bpf call Yazhou Tang 1 sibling, 3 replies; 10+ messages in thread From: Yazhou Tang @ 2026-03-16 19:02 UTC (permalink / raw) To: bpf Cc: ast, daniel, john.fastabend, andrii, martin.lau, eddyz87, song, yonghong.song, kpsingh, sdf, haoluo, jolsa, tangyazhou518, shenghaoyuan0928, ziye From: Yazhou Tang <tangyazhou518@outlook.com> Currently, the BPF instruction set allows a bpf-to-bpf call (also known as an internal call or pseudo call) to use its 32-bit `imm` field to represent the relative jump offset. However, when the JIT is disabled or fails and execution falls back to the interpreter, the verifier invokes `bpf_patch_call_args()` to rewrite the call instruction. In this function, the 32-bit `imm` is downcast to s16 and stored in the `off` field. ```c void bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth) { stack_depth = max_t(u32, stack_depth, 1); insn->off = (s16) insn->imm; insn->imm = interpreters_args[(round_up(stack_depth, 32) / 32) - 1] - __bpf_call_base_args; insn->code = BPF_JMP | BPF_CALL_ARGS; } ``` If the original `imm` exceeds the s16 range (i.e., a jump offset outside [-32768, 32767] instructions), this downcast silently truncates the offset, resulting in an incorrect call target. Fix this by explicitly checking the offset boundary in `fixup_call_args()`. If the offset is out of range, reject the program with -EINVAL and emit a clear verifier log message.
Co-developed-by: Tianci Cao <ziye@zju.edu.cn> Signed-off-by: Tianci Cao <ziye@zju.edu.cn> Co-developed-by: Shenghao Yuan <shenghaoyuan0928@163.com> Signed-off-by: Shenghao Yuan <shenghaoyuan0928@163.com> Signed-off-by: Yazhou Tang <tangyazhou518@outlook.com> --- We also evaluated a potentially more fundamental fix: patching the instruction stream in `do_misc_fixups()` to mov the 32-bit `imm` into an auxiliary register (`BPF_REG_AX`) prior to the call, and then utilizing `BPF_REG_AX` to resolve the jump target in the interpreter. To avoid degrading performance, this patching must be strictly confined to cases where JIT is disabled. However, reliably determining the final JIT execution status during the verification stage is not easy. Therefore, we opted for the simplest and safest fix for now. kernel/bpf/verifier.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index df22bfc572e2..0ee8a3333193 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -23109,6 +23109,12 @@ static int fixup_call_args(struct bpf_verifier_env *env) if (!bpf_pseudo_call(insn)) continue; + + if (insn->imm < S16_MIN || insn->imm > S16_MAX) { + verbose(env, "bpf-to-bpf call offset out of range for interpreter\n"); + return -EINVAL; + } + depth = get_callee_stack_depth(env, insn, i); if (depth < 0) return depth; -- 2.53.0 ^ permalink raw reply related [flat|nested] 10+ messages in thread
* Re: [PATCH bpf 1/2] bpf: reject bpf-to-bpf call with large offset in interpreter 2026-03-16 19:02 ` [PATCH bpf 1/2] " Yazhou Tang @ 2026-03-16 19:33 ` bot+bpf-ci 2026-03-16 20:32 ` Emil Tsalapatis 2026-03-16 20:45 ` Puranjay Mohan 2 siblings, 0 replies; 10+ messages in thread From: bot+bpf-ci @ 2026-03-16 19:33 UTC (permalink / raw) To: tangyazhou, bpf Cc: ast, daniel, john.fastabend, andrii, martin.lau, eddyz87, song, yonghong.song, kpsingh, sdf, haoluo, jolsa, tangyazhou518, shenghaoyuan0928, ziye, ast, andrii, daniel, martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai [-- Attachment #1: Type: text/plain, Size: 1075 bytes --] > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c > --- a/kernel/bpf/verifier.c > +++ b/kernel/bpf/verifier.c > @@ -23109,6 +23109,12 @@ static int fixup_call_args(struct bpf_verifier_env *env) > > if (!bpf_pseudo_call(insn)) > continue; > + > + if (insn->imm < S16_MIN || insn->imm > S16_MAX) { > + verbose(env, "bpf-to-bpf call offset out of range for interpreter\n"); > + return -EINVAL; > + } This fixes a real bug where bpf_patch_call_args() silently truncates the 32-bit imm to s16 via: insn->off = (s16) insn->imm; Should this have a Fixes: tag? A prior patch from 2022 by Yichun Zhang addressed the same truncation issue and identified the originating commit: Fixes: 1ea47e01ad6e ("bpf: add support for bpf_call to interpreter") That earlier attempt was acked by Song Liu but never applied. --- AI reviewed your patch. Please fix the bug or email reply why it's not a bug. See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md CI run summary: https://github.com/kernel-patches/bpf/actions/runs/23161654757 ^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH bpf 1/2] bpf: reject bpf-to-bpf call with large offset in interpreter 2026-03-16 19:02 ` [PATCH bpf 1/2] " Yazhou Tang 2026-03-16 19:33 ` bot+bpf-ci @ 2026-03-16 20:32 ` Emil Tsalapatis 2026-03-17 3:18 ` Yazhou Tang 2026-03-16 20:45 ` Puranjay Mohan 2 siblings, 1 reply; 10+ messages in thread From: Emil Tsalapatis @ 2026-03-16 20:32 UTC (permalink / raw) To: Yazhou Tang, bpf Cc: ast, daniel, john.fastabend, andrii, martin.lau, eddyz87, song, yonghong.song, kpsingh, sdf, haoluo, jolsa, tangyazhou518, shenghaoyuan0928, ziye On Mon Mar 16, 2026 at 3:02 PM EDT, Yazhou Tang wrote: > From: Yazhou Tang <tangyazhou518@outlook.com> > > Currently, the BPF instruction set allows bpf-to-bpf call (or internal call, > pseudo call) to use a 32-bit `imm` field to represent the relative jump offset. > > However, when JIT is disabled or falls back to interpreter, the verifier > invokes `bpf_patch_call_args()` to rewrite the call instruction. In this > function, the 32-bit `imm` is downcasted to s16 and stored in the `off` field. > > ```c > void bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth) > { > stack_depth = max_t(u32, stack_depth, 1); > insn->off = (s16) insn->imm; > insn->imm = interpreters_args[(round_up(stack_depth, 32) / 32) - 1] - > __bpf_call_base_args; > insn->code = BPF_JMP | BPF_CALL_ARGS; > } > ``` > > If the original `imm` exceeds the s16 range (i.e., jump offset > 32KB), > this downcast silently truncates the offset, resulting in an incorrect > call target. > > Fix this by explicitly checking the offset boundary in `fixup_call_args()`. > If the offset is out of range, reject the program with -EINVAL and emit > a clear verifier log message. 
> > Co-developed-by: Tianci Cao <ziye@zju.edu.cn> > Signed-off-by: Tianci Cao <ziye@zju.edu.cn> > Co-developed-by: Shenghao Yuan <shenghaoyuan0928@163.com> > Signed-off-by: Shenghao Yuan <shenghaoyuan0928@163.com> > Signed-off-by: Yazhou Tang <tangyazhou518@outlook.com> > --- > > We also evaluated a potentially more fundamental fix: patching the instruction > stream in `do_misc_fixups()` to mov the 32-bit `imm` into an auxiliary register > (`BPF_REG_AX`) prior to the call, and then utilizing `BPF_REG_AX` to resolve > the jump target in the interpreter. > > To avoid degrading performance, this patching must be strictly confined to > cases where JIT is disabled. However, reliably determining the final JIT > execution status during the verification stage is not easy. Therefore, we > opted for the simplest and safest fix for now. > > kernel/bpf/verifier.c | 6 ++++++ > 1 file changed, 6 insertions(+) > > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c > index df22bfc572e2..0ee8a3333193 100644 > --- a/kernel/bpf/verifier.c > +++ b/kernel/bpf/verifier.c > @@ -23109,6 +23109,12 @@ static int fixup_call_args(struct bpf_verifier_env *env) > > if (!bpf_pseudo_call(insn)) > continue; > + > + if (insn->imm < S16_MIN || insn->imm > S16_MAX) { > + verbose(env, "bpf-to-bpf call offset out of range for interpreter\n"); > + return -EINVAL; > + } > + Can we maybe move the check within bpf_patch_call_args? That way, the new guard is right next to the typecast that necessitates it in the first place. So basically: bpf_patch_call_args() { if (insn->imm < S16_MIN || insn->imm > S16_MAX) return -EINVAL; insn->off = (s16) insn->imm; ... } fixup_call_args() { ... 
err = bpf_patch_call_args(insn, depth); if (err) { verbose(env, "bpf-to-bpf call offset out of range for interpreter\n"); return err; } } Provided this is addressed (or people have objections to the change): Reviewed-by: Emil Tsalapatis <emil@etsalapatis.com> > depth = get_callee_stack_depth(env, insn, i); > if (depth < 0) > return depth; ^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH bpf 1/2] bpf: reject bpf-to-bpf call with large offset in interpreter 2026-03-16 20:32 ` Emil Tsalapatis @ 2026-03-17 3:18 ` Yazhou Tang 0 siblings, 0 replies; 10+ messages in thread From: Yazhou Tang @ 2026-03-17 3:18 UTC (permalink / raw) To: Emil Tsalapatis, bpf Cc: ast, daniel, john.fastabend, andrii, martin.lau, eddyz87, song, yonghong.song, kpsingh, sdf, haoluo, jolsa, tangyazhou518, shenghaoyuan0928, ziye Hi Emil, Thanks for the detailed review! On 3/17/26 04:32, Emil Tsalapatis wrote: > Can we maybe move the check within bpf_patch_call_args? That way, the new guard is > right next to the typecast that necessitates it in the first place. > > So basically: > > bpf_patch_call_args() > { > if (insn->imm < S16_MIN || insn->imm > S16_MAX) > return -EINVAL; > insn->off = (s16) insn->imm; > ... > } > > fixup_call_args() > { > ... > err = bpf_patch_call_args(insn, depth); > if (err) { > verbose(env, "bpf-to-bpf call offset out of range for interpreter\n"); > return err; > } > } > > Provided this is addressed (or people have objections to the change): > > Reviewed-by: Emil Tsalapatis <emil@etsalapatis.com> > Makes sense, I will move the check into bpf_patch_call_args in v2 as suggested. Best, Yazhou ^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH bpf 1/2] bpf: reject bpf-to-bpf call with large offset in interpreter 2026-03-16 19:02 ` [PATCH bpf 1/2] " Yazhou Tang 2026-03-16 19:33 ` bot+bpf-ci 2026-03-16 20:32 ` Emil Tsalapatis @ 2026-03-16 20:45 ` Puranjay Mohan 2026-03-17 3:27 ` Yazhou Tang 2 siblings, 1 reply; 10+ messages in thread From: Puranjay Mohan @ 2026-03-16 20:45 UTC (permalink / raw) To: Yazhou Tang, bpf Cc: ast, daniel, john.fastabend, andrii, martin.lau, eddyz87, song, yonghong.song, kpsingh, sdf, haoluo, jolsa, tangyazhou518, shenghaoyuan0928, ziye Yazhou Tang <tangyazhou@zju.edu.cn> writes: > From: Yazhou Tang <tangyazhou518@outlook.com> > > Currently, the BPF instruction set allows bpf-to-bpf call (or internal call, > pseudo call) to use a 32-bit `imm` field to represent the relative jump offset. > > However, when JIT is disabled or falls back to interpreter, the verifier > invokes `bpf_patch_call_args()` to rewrite the call instruction. In this > function, the 32-bit `imm` is downcasted to s16 and stored in the `off` field. > > ```c > void bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth) > { > stack_depth = max_t(u32, stack_depth, 1); > insn->off = (s16) insn->imm; > insn->imm = interpreters_args[(round_up(stack_depth, 32) / 32) - 1] - > __bpf_call_base_args; > insn->code = BPF_JMP | BPF_CALL_ARGS; > } > ``` > > If the original `imm` exceeds the s16 range (i.e., jump offset > 32KB), > this downcast silently truncates the offset, resulting in an incorrect > call target. > > Fix this by explicitly checking the offset boundary in `fixup_call_args()`. > If the offset is out of range, reject the program with -EINVAL and emit > a clear verifier log message. 
> > Co-developed-by: Tianci Cao <ziye@zju.edu.cn> > Signed-off-by: Tianci Cao <ziye@zju.edu.cn> > Co-developed-by: Shenghao Yuan <shenghaoyuan0928@163.com> > Signed-off-by: Shenghao Yuan <shenghaoyuan0928@163.com> > Signed-off-by: Yazhou Tang <tangyazhou518@outlook.com> > --- > > We also evaluated a potentially more fundamental fix: patching the instruction > stream in `do_misc_fixups()` to mov the 32-bit `imm` into an auxiliary register > (`BPF_REG_AX`) prior to the call, and then utilizing `BPF_REG_AX` to resolve > the jump target in the interpreter. > > To avoid degrading performance, this patching must be strictly confined to > cases where JIT is disabled. However, reliably determining the final JIT > execution status during the verification stage is not easy. Therefore, we > opted for the simplest and safest fix for now. > > kernel/bpf/verifier.c | 6 ++++++ > 1 file changed, 6 insertions(+) > > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c > index df22bfc572e2..0ee8a3333193 100644 > --- a/kernel/bpf/verifier.c > +++ b/kernel/bpf/verifier.c > @@ -23109,6 +23109,12 @@ static int fixup_call_args(struct bpf_verifier_env *env) > > if (!bpf_pseudo_call(insn)) > continue; > + > + if (insn->imm < S16_MIN || insn->imm > S16_MAX) { > + verbose(env, "bpf-to-bpf call offset out of range for interpreter\n"); > + return -EINVAL; > + } > + Thanks for the patch. The interpreter truncation fix looks correct, but there's a similar truncation bug in the JIT success path that should be addressed as well. In jit_subprogs(), the "make interpreter insns consistent for dump" fixup at the end has the same s16 truncation problem: insn->off = env->insn_aux_data[i].call_imm; subprog = find_subprog(env, i + insn->off + 1); insn->imm = subprog; When call_imm exceeds s16 range, insn->off truncates it, and find_subprog() looks up the wrong instruction index. In my testing with your selftest, find_subprog() returns -ENOENT (-2), so insn->imm = -2. 
This causes bpftool to compute the call target as __bpf_call_base + (-2), which shows a bogus address in the xlated dump instead of the actual subprog symbol: 1: (85) call pc+9#bpf_prog_115951674716e036_padding_subprog 2: (85) call pc+3402#0xffff80008026d276 The 0xffff80008026d276 is __bpf_call_base - 2, not target_subprog. This also leaks the address of __bpf_call_base to userspace. The fix is to use call_imm directly for the find_subprog() lookup instead of reading back from the truncated insn->off: insn->off = env->insn_aux_data[i].call_imm; subprog = find_subprog(env, i + env->insn_aux_data[i].call_imm + 1); insn->imm = subprog; this still leaves the ->off to be wrong, but that is fine in my opinion. Could you fold this into the series as well? ^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH bpf 1/2] bpf: reject bpf-to-bpf call with large offset in interpreter 2026-03-16 20:45 ` Puranjay Mohan @ 2026-03-17 3:27 ` Yazhou Tang 0 siblings, 0 replies; 10+ messages in thread From: Yazhou Tang @ 2026-03-17 3:27 UTC (permalink / raw) To: Puranjay Mohan, bpf Cc: ast, daniel, john.fastabend, andrii, martin.lau, eddyz87, song, yonghong.song, kpsingh, sdf, haoluo, jolsa, tangyazhou518, shenghaoyuan0928, ziye Hi Puranjay, Thanks for the review and for catching this! On 3/17/26 04:45, Puranjay Mohan wrote: > Thanks for the patch. The interpreter truncation fix looks correct, but there's > a similar truncation bug in the JIT success path that should be addressed as > well. > > In jit_subprogs(), the "make interpreter insns consistent for dump" fixup at > the end has the same s16 truncation problem: > > insn->off = env->insn_aux_data[i].call_imm; > subprog = find_subprog(env, i + insn->off + 1); > insn->imm = subprog; > > When call_imm exceeds s16 range, insn->off truncates it, and find_subprog() > looks up the wrong instruction index. In my testing with your selftest, > find_subprog() returns -ENOENT (-2), so insn->imm = -2. This causes bpftool to > compute the call target as __bpf_call_base + (-2), which shows a bogus address > in the xlated dump instead of the actual subprog symbol: > > 1: (85) call pc+9#bpf_prog_115951674716e036_padding_subprog > 2: (85) call pc+3402#0xffff80008026d276 > > The 0xffff80008026d276 is __bpf_call_base - 2, not target_subprog. This also > leaks the address of __bpf_call_base to userspace. > > The fix is to use call_imm directly for the find_subprog() lookup instead of > reading back from the truncated insn->off: > > insn->off = env->insn_aux_data[i].call_imm; > subprog = find_subprog(env, i + env->insn_aux_data[i].call_imm + 1); > insn->imm = subprog; > > this still leaves the ->off to be wrong, but that is fine in my opinion. > > Could you fold this into the series as well? You are absolutely right. 
Using the truncated `insn->off` in the JIT dump path is indeed a bug, and your proposed fix makes perfect sense. I will fold this fix into `jit_subprogs()` and add your `Suggested-by` tag in v2. Best, Yazhou ^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH bpf 2/2] selftests/bpf: Add test for large offset bpf-to-bpf call 2026-03-16 19:02 [PATCH bpf 0/2] bpf: reject bpf-to-bpf call with large offset in interpreter Yazhou Tang 2026-03-16 19:02 ` [PATCH bpf 1/2] " Yazhou Tang @ 2026-03-16 19:02 ` Yazhou Tang 2026-03-16 20:18 ` emil 1 sibling, 1 reply; 10+ messages in thread From: Yazhou Tang @ 2026-03-16 19:02 UTC (permalink / raw) To: bpf Cc: ast, daniel, john.fastabend, andrii, martin.lau, eddyz87, song, yonghong.song, kpsingh, sdf, haoluo, jolsa, tangyazhou518, shenghaoyuan0928, ziye From: Yazhou Tang <tangyazhou518@outlook.com> The test utilizes an inline assembly block with `.rept 200000` to generate a massive dummy subprogram. By placing this padding between the main program and the target subprogram, it forces the verifier to process a bpf-to-bpf call with the `imm` field exceeding the s16 range. The user-space test driver dynamically adapts to the host's JIT configuration: - When JIT is enabled, it asserts that the program is successfully loaded, JIT-compiled, and executes correctly (returning the expected value). - When JIT is disabled, it asserts that the program is cleanly rejected by the verifier with -EINVAL (or -ENOSPC if the verifier log buffer is truncated), and strictly matches the expected error log. 
Co-developed-by: Tianci Cao <ziye@zju.edu.cn> Signed-off-by: Tianci Cao <ziye@zju.edu.cn> Co-developed-by: Shenghao Yuan <shenghaoyuan0928@163.com> Signed-off-by: Shenghao Yuan <shenghaoyuan0928@163.com> Signed-off-by: Yazhou Tang <tangyazhou518@outlook.com> --- .../selftests/bpf/prog_tests/call_large_imm.c | 49 +++++++++++++++++++ .../selftests/bpf/progs/call_large_imm.c | 38 ++++++++++++++ 2 files changed, 87 insertions(+) create mode 100644 tools/testing/selftests/bpf/prog_tests/call_large_imm.c create mode 100644 tools/testing/selftests/bpf/progs/call_large_imm.c diff --git a/tools/testing/selftests/bpf/prog_tests/call_large_imm.c b/tools/testing/selftests/bpf/prog_tests/call_large_imm.c new file mode 100644 index 000000000000..ba4a2e27fa5d --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/call_large_imm.c @@ -0,0 +1,49 @@ +// SPDX-License-Identifier: GPL-2.0 +#include <test_progs.h> +#include "call_large_imm.skel.h" + +void test_call_large_imm(void) +{ + struct call_large_imm *skel; + int err, prog_fd; + char log_buf[4096] = {}; + char dummy_data[64] = {}; + + LIBBPF_OPTS(bpf_test_run_opts, opts, + .data_in = dummy_data, + .data_size_in = sizeof(dummy_data), + ); + + skel = call_large_imm__open(); + if (!ASSERT_OK_PTR(skel, "skel_open")) + return; + + if (!env.jit_enabled) + bpf_program__set_log_buf(skel->progs.call_large_imm_test, + log_buf, sizeof(log_buf)); + + err = call_large_imm__load(skel); + + if (env.jit_enabled) { + if (!ASSERT_OK(err, "load_should_succeed_with_jit")) + goto cleanup; + + prog_fd = bpf_program__fd(skel->progs.call_large_imm_test); + err = bpf_prog_test_run_opts(prog_fd, &opts); + + if (ASSERT_OK(err, "prog_run_success")) + ASSERT_EQ(opts.retval, 3, "prog_retval"); + + } else { + ASSERT_ERR(err, "load_should_fail_in_interpreter"); + ASSERT_TRUE(err == -EINVAL || err == -ENOSPC, "err_should_be_einval_or_enospc"); + + if (!ASSERT_OK_PTR(strstr(log_buf, "bpf-to-bpf call offset out of range for interpreter"), + 
"check_verifier_log_msg")) { + printf("Actual verifier log:\n%s\n", log_buf); + } + } + +cleanup: + call_large_imm__destroy(skel); +} diff --git a/tools/testing/selftests/bpf/progs/call_large_imm.c b/tools/testing/selftests/bpf/progs/call_large_imm.c new file mode 100644 index 000000000000..d0057f88b48b --- /dev/null +++ b/tools/testing/selftests/bpf/progs/call_large_imm.c @@ -0,0 +1,38 @@ +// SPDX-License-Identifier: GPL-2.0 +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> + +struct __sk_buff; + +static __attribute__((noinline)) void padding_subprog(void) +{ + asm volatile (" \ + r0 = 0; \ + .rept 200000; \ + r0 += 0; \ + .endr; \ + " ::: "r0"); +} + +static __attribute__((noinline)) int target_subprog(struct __sk_buff *ctx) +{ + volatile int magic_ret = 3; + return magic_ret; +} + +SEC("socket") +int call_large_imm_test(struct __sk_buff *ctx) +{ + int ret = 0; + + if (ctx == (void *)0) + padding_subprog(); + + ret = target_subprog(ctx); + + bpf_printk("Target subprog returned: %d\n", ret); + + return ret; +} + +char LICENSE[] SEC("license") = "GPL"; -- 2.53.0 ^ permalink raw reply related [flat|nested] 10+ messages in thread
* Re: [PATCH bpf 2/2] selftests/bpf: Add test for large offset bpf-to-bpf call 2026-03-16 19:02 ` [PATCH bpf 2/2] selftests/bpf: Add test for large offset bpf-to-bpf call Yazhou Tang @ 2026-03-16 20:18 ` emil 2026-03-17 5:32 ` Yazhou Tang 0 siblings, 1 reply; 10+ messages in thread From: emil @ 2026-03-16 20:18 UTC (permalink / raw) To: Yazhou Tang, bpf Cc: ast, daniel, john.fastabend, andrii, martin.lau, eddyz87, song, yonghong.song, kpsingh, sdf, haoluo, jolsa, tangyazhou518, shenghaoyuan0928, ziye On Mon Mar 16, 2026 at 3:02 PM EDT, Yazhou Tang wrote: > From: Yazhou Tang <tangyazhou518@outlook.com> > > The test utilizes an inline assembly block with `.rept 200000` to generate > a massive dummy subprogram. By placing this padding between the main program > and the target subprogram, it forces the verifier to process a bpf-to-bpf call > with the `imm` field exceeding the s16 range. > > The user-space test driver dynamically adapts to the host's JIT configuration: > - When JIT is enabled, it asserts that the program is successfully loaded, > JIT-compiled, and executes correctly (returning the expected value). > - When JIT is disabled, it asserts that the program is cleanly rejected by > the verifier with -EINVAL (or -ENOSPC if the verifier log buffer is truncated), > and strictly matches the expected error log. 
> > Co-developed-by: Tianci Cao <ziye@zju.edu.cn> > Signed-off-by: Tianci Cao <ziye@zju.edu.cn> > Co-developed-by: Shenghao Yuan <shenghaoyuan0928@163.com> > Signed-off-by: Shenghao Yuan <shenghaoyuan0928@163.com> > Signed-off-by: Yazhou Tang <tangyazhou518@outlook.com> > --- > .../selftests/bpf/prog_tests/call_large_imm.c | 49 +++++++++++++++++++ > .../selftests/bpf/progs/call_large_imm.c | 38 ++++++++++++++ > 2 files changed, 87 insertions(+) > create mode 100644 tools/testing/selftests/bpf/prog_tests/call_large_imm.c > create mode 100644 tools/testing/selftests/bpf/progs/call_large_imm.c > > diff --git a/tools/testing/selftests/bpf/prog_tests/call_large_imm.c b/tools/testing/selftests/bpf/prog_tests/call_large_imm.c > new file mode 100644 > index 000000000000..ba4a2e27fa5d > --- /dev/null > +++ b/tools/testing/selftests/bpf/prog_tests/call_large_imm.c > @@ -0,0 +1,49 @@ > +// SPDX-License-Identifier: GPL-2.0 > +#include <test_progs.h> > +#include "call_large_imm.skel.h" > + > +void test_call_large_imm(void) > +{ > + struct call_large_imm *skel; > + int err, prog_fd; > + char log_buf[4096] = {}; No need to init the log_buf, it will be NUL-terminated if load runs. > + char dummy_data[64] = {}; Nit: Why define dummy_data? You can make the function you're calling an int (void) syscall to avoid this. > + > + LIBBPF_OPTS(bpf_test_run_opts, opts, > + .data_in = dummy_data, > + .data_size_in = sizeof(dummy_data), > + ); > + > + skel = call_large_imm__open(); > + if (!ASSERT_OK_PTR(skel, "skel_open")) > + return; > + > + if (!env.jit_enabled) > + bpf_program__set_log_buf(skel->progs.call_large_imm_test, > + log_buf, sizeof(log_buf)); > + > + err = call_large_imm__load(skel); > + > + if (env.jit_enabled) { So we only test one path at a time, correct? 
I understand there's no good way to turn JIT compilation on and off, but as it stands only one of the two paths is exercised on any given host. > + if (!ASSERT_OK(err, "load_should_succeed_with_jit")) > + goto cleanup; > + > + prog_fd = bpf_program__fd(skel->progs.call_large_imm_test); > + err = bpf_prog_test_run_opts(prog_fd, &opts); > + > + if (ASSERT_OK(err, "prog_run_success")) > + ASSERT_EQ(opts.retval, 3, "prog_retval"); > + > + } else { > + ASSERT_ERR(err, "load_should_fail_in_interpreter"); > + ASSERT_TRUE(err == -EINVAL || err == -ENOSPC, "err_should_be_einval_or_enospc"); > + > + if (!ASSERT_OK_PTR(strstr(log_buf, "bpf-to-bpf call offset out of range for interpreter"), > + "check_verifier_log_msg")) { > + printf("Actual verifier log:\n%s\n", log_buf); > + } > + } > + > +cleanup: > + call_large_imm__destroy(skel); > +} > diff --git a/tools/testing/selftests/bpf/progs/call_large_imm.c b/tools/testing/selftests/bpf/progs/call_large_imm.c > new file mode 100644 > index 000000000000..d0057f88b48b > --- /dev/null > +++ b/tools/testing/selftests/bpf/progs/call_large_imm.c > @@ -0,0 +1,38 @@ > +// SPDX-License-Identifier: GPL-2.0 > +#include <linux/bpf.h> > +#include <bpf/bpf_helpers.h> > + > +struct __sk_buff; Can be removed if you use a syscall prog instead of a socket. > + > +static __attribute__((noinline)) void padding_subprog(void) > +{ > + asm volatile (" \ > + r0 = 0; \ > + .rept 200000; \ > + r0 += 0; \ > + .endr; \ > + " ::: "r0"); > +} > + > +static __attribute__((noinline)) int target_subprog(struct __sk_buff *ctx) > +{ > + volatile int magic_ret = 3; > + return magic_ret; If we remove the printk below, we can just leave this empty or have it return 3 directly. Is there a reason to define the constant we're returning indirectly? If there is there should be a comment about it because right now it's not readily apparent. > +} > + > +SEC("socket") > +int call_large_imm_test(struct __sk_buff *ctx) > +{ > + int ret = 0; > + > + if (ctx == (void *)0) Nit: Would if (!ctx) work? 
What is this condition's purpose in the first place? If it's deliberate, again can we add a comment? > + padding_subprog(); > + > + ret = target_subprog(ctx); > + > + bpf_printk("Target subprog returned: %d\n", ret); Let's remove this, there's no reason to emit to the trace pipe even on success. > + > + return ret; > +} > + > +char LICENSE[] SEC("license") = "GPL"; ^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH bpf 2/2] selftests/bpf: Add test for large offset bpf-to-bpf call 2026-03-16 20:18 ` emil @ 2026-03-17 5:32 ` Yazhou Tang 0 siblings, 0 replies; 10+ messages in thread From: Yazhou Tang @ 2026-03-17 5:32 UTC (permalink / raw) To: emil, bpf Cc: ast, daniel, john.fastabend, andrii, martin.lau, eddyz87, song, yonghong.song, kpsingh, sdf, haoluo, jolsa, tangyazhou518, shenghaoyuan0928, ziye Hi Emil, Thanks a lot for the detailed review! >> + char dummy_data[64] = {}; > > Nit: Why define dummy_data? You can make the function you're calling an > int (void) syscall to avoid this. > ...... >> +struct __sk_buff; > > Can be removed if you use a syscall prog instead of a socket. > ...... >> + if (ctx == (void *)0) > > Nit: Would if (!ctx) work? What is this condition's purpose in the first > place? If it's deliberate, again can we add a comment? The reason for using a socket prog and the `dummy_data` was to guarantee a non-null `ctx` parameter. That way, when the program actually runs (with JIT enabled), the useless `padding_subprog()` is safely skipped. If we remove the condition and just call `padding_subprog()`, the 200,000 instructions in it will actually be executed, which not only slows the test down but also risks hitting the BPF instruction execution limit. However, your suggestion is much better. I will switch to a syscall prog to eliminate `dummy_data` and `struct __sk_buff`. And to safely skip the padding execution without relying on `ctx`, I will use a volatile variable: volatile int zero = 0; if (zero) padding_subprog(); I will also add detailed comments explaining this workaround. > So we only test one path at a time, correct? I understand there's no > good way to turn JIT compilation on and off, but as it stands Yes, exactly. The test dynamically adapts to the current environment: - If JIT is enabled, the test case will successfully load and return the expected value. 
- If JIT is disabled, it asserts that the verifier cleanly rejects the program with -EINVAL (or -ENOSPC) and outputs the expected error log. (Without our patch, the verifier would wrongly accept it). >> + volatile int magic_ret = 3; >> + return magic_ret; > > > If we remove the printk below, we can just leave this empty or have it > return 3 directly. Is there a reason to define the constant we're > returning indirectly? If there is there should be a comment about it > because right now it's not readily apparent. Yes, this is another deliberate trick to deceive the LLVM compiler. Using `volatile` prevents LLVM from optimizing the whole `target_subprog()` away. I will add comments here as well. >> + char log_buf[4096] = {}; > > No need to init the log_buf, it will be NUL-terminated if load runs. > ... >> + bpf_printk("Target subprog returned: %d\n", ret); > > Let's remove this, there's no reason to emit to the trace pipe even on > success. I will also incorporate these two changes, and fold all these improvements into v2. Best, Yazhou ^ permalink raw reply [flat|nested] 10+ messages in thread