* [PATCH bpf-next v7 0/3] bpf: Fix call offset truncation and OOB read in bpf_patch_call_args()
@ 2026-04-21 14:45 Yazhou Tang
2026-04-21 14:45 ` [PATCH bpf-next v7 1/3] bpf: Fix out-of-bounds " Yazhou Tang
` (2 more replies)
0 siblings, 3 replies; 11+ messages in thread
From: Yazhou Tang @ 2026-04-21 14:45 UTC (permalink / raw)
To: bpf, ast, eddyz87, emil, puranjay, xukuohai
Cc: daniel, john.fastabend, andrii, martin.lau, song, yonghong.song,
kpsingh, sdf, haoluo, jolsa, tangyazhou518, shenghaoyuan0928,
ziye
From: Yazhou Tang <tangyazhou518@outlook.com>
This patchset addresses a silent truncation bug in the BPF verifier that
occurs when a bpf-to-bpf call involves a large relative jump offset.
Additionally, it fixes a pre-existing out-of-bounds (OOB) read issue
in the interpreter fallback path.
Because the BPF instruction set uses a 32-bit imm field for bpf-to-bpf
calls, implicitly downcasting it to the 16-bit insn->off in bpf_patch_call_args()
causes incorrect call targets or subprog ID resolution for large BPF programs.
While developing the fix (swapping the imm and off fields), we discovered
that the original code also had a load-time OOB read vulnerability when the
stack depth exceeds MAX_BPF_STACK during the JIT fallback.
Patch 1/3 fixes the pre-existing OOB read in bpf_patch_call_args(). It changes
the function to return an int and explicitly rejects the JIT fallback if
the stack depth exceeds MAX_BPF_STACK, preventing a potential stack buffer
overflow.
Patch 2/3 fixes the s16 truncation bug. It keeps the original imm field
unchanged and uses the off field to store the interpreter function index.
It also adjusts the JMP_CALL_ARGS case in ___bpf_prog_run() and the dumper
code accordingly, while safely removing a previous workaround in the
selftests disasm helpers.
Patch 3/3 introduces a selftest for this fix. When JIT is disabled, running
the test without this patchset triggers a kernel panic due to an invalid
call target caused by the truncation issue.
---
Change log:
v7:
1. Rebase the patchset to the bpf-next tree to resolve the apply conflict. (Alexei)
2. Add Patch 1/3 to properly fix a pre-existing OOB read in bpf_patch_call_args().
(Sashiko AI reviewer)
v6: https://lore.kernel.org/bpf/20260412170334.716778-1-tangyazhou@zju.edu.cn/
1. Use a different but clearer approach to resolve this issue: keeping
the original imm field unchanged and using the off field to store the
interpreter function index. (Kuohai)
2. Update the related dumper code and remove a previous workaround in the
selftests disasm helpers, which is no longer needed after this fix.
v5: https://lore.kernel.org/bpf/20260326090133.221957-1-tangyazhou@zju.edu.cn/
1. Some minor changes in commit messages. (AI Reviewer)
v4: https://lore.kernel.org/bpf/20260326063329.10031-1-tangyazhou@zju.edu.cn/
1. Remove some redundant commit messages of patch 2/3. (Emil)
2. Change the number of instructions in padding_subprog() from 200,000
to 32,765, which is the minimum number of instructions required to
trigger the verifier failure. (Emil)
v3: https://lore.kernel.org/bpf/20260323122254.98540-1-tangyazhou@zju.edu.cn/
1. Resend to fix a typo in v2 and add "Fixes" tag. The rest of the changes
are identical to v2.
v2 (incorrect): https://lore.kernel.org/bpf/20260323081748.106603-1-tangyazhou@zju.edu.cn/
1. Move the s16 boundary check from fixup_call_args() to bpf_patch_call_args(),
and change the return type of bpf_patch_call_args() to int. (Emil)
2. Add Patch 3/3 to fix the incorrect subprog ID in dumped bpf_pseudo_call
instructions, which is caused by the same truncation issue. (Puranjay)
3. Refine the new selftest for clarity and add detailed comments explaining
the test design. (Emil)
v1: https://lore.kernel.org/bpf/20260316190220.113417-1-tangyazhou@zju.edu.cn/
Yazhou Tang (3):
bpf: Fix out-of-bounds read in bpf_patch_call_args()
bpf: Fix s16 truncation for large bpf-to-bpf call offsets
selftests/bpf: Add test for large offset bpf-to-bpf call
include/linux/bpf.h | 2 +-
include/linux/filter.h | 3 --
kernel/bpf/core.c | 17 ++++---
kernel/bpf/fixups.c | 12 +++--
tools/bpf/bpftool/xlated_dumper.c | 10 ++--
tools/testing/selftests/bpf/disasm_helpers.c | 18 --------
.../selftests/bpf/prog_tests/call_large_imm.c | 29 ++++++++++++
.../selftests/bpf/progs/call_large_imm.c | 46 +++++++++++++++++++
8 files changed, 100 insertions(+), 37 deletions(-)
create mode 100644 tools/testing/selftests/bpf/prog_tests/call_large_imm.c
create mode 100644 tools/testing/selftests/bpf/progs/call_large_imm.c
--
2.53.0
^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH bpf-next v7 1/3] bpf: Fix out-of-bounds read in bpf_patch_call_args()
2026-04-21 14:45 [PATCH bpf-next v7 0/3] bpf: Fix call offset truncation and OOB read in bpf_patch_call_args() Yazhou Tang
@ 2026-04-21 14:45 ` Yazhou Tang
2026-04-21 15:29 ` bot+bpf-ci
2026-04-21 14:45 ` [PATCH bpf-next v7 2/3] bpf: Fix s16 truncation for large bpf-to-bpf call offsets Yazhou Tang
2026-04-21 14:45 ` [PATCH bpf-next v7 3/3] selftests/bpf: Add test for large offset bpf-to-bpf call Yazhou Tang
2 siblings, 1 reply; 11+ messages in thread
From: Yazhou Tang @ 2026-04-21 14:45 UTC (permalink / raw)
To: bpf, ast, eddyz87, emil, puranjay, xukuohai
Cc: daniel, john.fastabend, andrii, martin.lau, song, yonghong.song,
kpsingh, sdf, haoluo, jolsa, tangyazhou518, shenghaoyuan0928,
ziye
From: Yazhou Tang <tangyazhou518@outlook.com>
The interpreters_args array only accommodates stack depths up to MAX_BPF_STACK
(512 bytes). However, do_misc_fixups() may allow a larger stack depth if JIT
is requested.
If JIT compilation later fails and falls back to the interpreter, the verifier
invokes bpf_patch_call_args() with this oversized stack depth. This causes a
load-time out-of-bounds (OOB) read when calculating the interpreter function
pointer index.
Fix this by changing bpf_patch_call_args() to return an int and explicitly
rejecting the JIT fallback (returning -EINVAL) if the stack depth exceeds
MAX_BPF_STACK.
Co-developed-by: Tianci Cao <ziye@zju.edu.cn>
Signed-off-by: Tianci Cao <ziye@zju.edu.cn>
Co-developed-by: Shenghao Yuan <shenghaoyuan0928@163.com>
Signed-off-by: Shenghao Yuan <shenghaoyuan0928@163.com>
Signed-off-by: Yazhou Tang <tangyazhou518@outlook.com>
---
include/linux/bpf.h | 2 +-
kernel/bpf/core.c | 6 +++++-
kernel/bpf/fixups.c | 6 +++++-
3 files changed, 11 insertions(+), 3 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 3cb6b9e70080..9fb24a5f31ac 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2917,7 +2917,7 @@ int bpf_check_uarg_tail_zero(bpfptr_t uaddr, size_t expected_size,
int bpf_check(struct bpf_prog **fp, union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size);
#ifndef CONFIG_BPF_JIT_ALWAYS_ON
-void bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth);
+int bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth);
#endif
struct btf *bpf_get_btf_vmlinux(void);
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 8b018ff48875..63044ebe5721 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -2394,13 +2394,17 @@ EVAL4(PROG_NAME_LIST, 416, 448, 480, 512)
#undef PROG_NAME_LIST
#ifdef CONFIG_BPF_SYSCALL
-void bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth)
+int bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth)
{
stack_depth = max_t(u32, stack_depth, 1);
+ /* Prevent out-of-bounds read to interpreters_args */
+ if (stack_depth > MAX_BPF_STACK)
+ return -EINVAL;
insn->off = (s16) insn->imm;
insn->imm = interpreters_args[(round_up(stack_depth, 32) / 32) - 1] -
__bpf_call_base_args;
insn->code = BPF_JMP | BPF_CALL_ARGS;
+ return 0;
}
#endif
#endif
diff --git a/kernel/bpf/fixups.c b/kernel/bpf/fixups.c
index fba9e8c00878..300e4e251931 100644
--- a/kernel/bpf/fixups.c
+++ b/kernel/bpf/fixups.c
@@ -1416,7 +1416,11 @@ int bpf_fixup_call_args(struct bpf_verifier_env *env)
depth = get_callee_stack_depth(env, insn, i);
if (depth < 0)
return depth;
- bpf_patch_call_args(insn, depth);
+ err = bpf_patch_call_args(insn, depth);
+ if (err) {
+ verbose(env, "interpreter stack depth %d is too large\n", depth);
+ return err;
+ }
}
err = 0;
#endif
--
2.53.0
^ permalink raw reply related [flat|nested] 11+ messages in thread
* [PATCH bpf-next v7 2/3] bpf: Fix s16 truncation for large bpf-to-bpf call offsets
2026-04-21 14:45 [PATCH bpf-next v7 0/3] bpf: Fix call offset truncation and OOB read in bpf_patch_call_args() Yazhou Tang
2026-04-21 14:45 ` [PATCH bpf-next v7 1/3] bpf: Fix out-of-bounds " Yazhou Tang
@ 2026-04-21 14:45 ` Yazhou Tang
2026-04-21 15:29 ` bot+bpf-ci
2026-04-21 20:46 ` sashiko-bot
2026-04-21 14:45 ` [PATCH bpf-next v7 3/3] selftests/bpf: Add test for large offset bpf-to-bpf call Yazhou Tang
2 siblings, 2 replies; 11+ messages in thread
From: Yazhou Tang @ 2026-04-21 14:45 UTC (permalink / raw)
To: bpf, ast, eddyz87, emil, puranjay, xukuohai
Cc: daniel, john.fastabend, andrii, martin.lau, song, yonghong.song,
kpsingh, sdf, haoluo, jolsa, tangyazhou518, shenghaoyuan0928,
ziye
From: Yazhou Tang <tangyazhou518@outlook.com>
Currently, the BPF instruction set allows bpf-to-bpf calls (or internal calls,
pseudo calls) to use a 32-bit imm field to represent the relative jump offset.
However, when JIT is disabled or falls back to the interpreter, the verifier
invokes bpf_patch_call_args() to rewrite the call instruction. In this
function, the 32-bit imm is downcast to s16 and stored in the off field.
void bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth)
{
stack_depth = max_t(u32, stack_depth, 1);
insn->off = (s16) insn->imm;
insn->imm = interpreters_args[(round_up(stack_depth, 32) / 32) - 1] -
__bpf_call_base_args;
insn->code = BPF_JMP | BPF_CALL_ARGS;
}
If the original imm exceeds the s16 range (i.e., jump offset > 32KB),
this downcast silently truncates the offset, resulting in an incorrect
call target.
Fix this by:
1. In bpf_patch_call_args(), keeping the imm field unchanged and using the off
field to store the index of the interpreter function.
2. In ___bpf_prog_run() for the JMP_CALL_ARGS case, retrieving the interpreter
function pointer from the interpreters_args array using the off field as
the index, and passing the original imm to calculate the last argument of
the interpreter function.
After these changes, the truncation issue is resolved. __bpf_call_base_args
is no longer needed and is removed, which makes the code cleaner.
Performance: In ___bpf_prog_run() for the JMP_CALL_ARGS case, changing the
retrieval of the interpreter function pointer from pointer addition to direct
array indexing improves performance. The possible reason is that the latter
has better instruction-level parallelism. See the v5 discussion [1] for more
details.
[1] https://lore.kernel.org/bpf/f120c3c4-6999-414a-b514-518bb64b4758@zju.edu.cn/
The related dumper code is also updated to adapt to the verifier changes.
Specifically, the usages of insn->imm and insn->off are swapped:
1. When JIT is enabled, use insn->imm as the call offset, and insn->off
as the subprog id.
2. When JIT is disabled, print insn->imm as the call offset.
Consequently, the workaround introduced in tools/testing/selftests/bpf/disasm_helpers.c
by commit 203e6aba7692 ("selftests/bpf: print correct offset for pseudo calls in disasm_insn()")
is no longer needed. Since the verifier now correctly exposes the offset
in the imm field, we can safely remove the custom print_call_cb() and
allow the selftests to rely on the native BPF disassembler.
Fixes: 1ea47e01ad6e ("bpf: add support for bpf_call to interpreter")
Fixes: 7105e828c087 ("bpf: allow for correlation of maps and helpers in dump")
Suggested-by: Xu Kuohai <xukuohai@huaweicloud.com>
Suggested-by: Puranjay Mohan <puranjay@kernel.org>
Co-developed-by: Tianci Cao <ziye@zju.edu.cn>
Signed-off-by: Tianci Cao <ziye@zju.edu.cn>
Co-developed-by: Shenghao Yuan <shenghaoyuan0928@163.com>
Signed-off-by: Shenghao Yuan <shenghaoyuan0928@163.com>
Signed-off-by: Yazhou Tang <tangyazhou518@outlook.com>
---
include/linux/filter.h | 3 ---
kernel/bpf/core.c | 11 ++++++-----
kernel/bpf/fixups.c | 6 +++---
tools/bpf/bpftool/xlated_dumper.c | 10 +++++-----
tools/testing/selftests/bpf/disasm_helpers.c | 18 ------------------
5 files changed, 14 insertions(+), 34 deletions(-)
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 1ec6d5ba64cc..88a241aac36a 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -1151,9 +1151,6 @@ bool sk_filter_charge(struct sock *sk, struct sk_filter *fp);
void sk_filter_uncharge(struct sock *sk, struct sk_filter *fp);
u64 __bpf_call_base(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);
-#define __bpf_call_base_args \
- ((u64 (*)(u64, u64, u64, u64, u64, const struct bpf_insn *)) \
- (void *)__bpf_call_base)
struct bpf_prog *bpf_int_jit_compile(struct bpf_verifier_env *env, struct bpf_prog *prog);
void bpf_jit_compile(struct bpf_prog *prog);
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 63044ebe5721..cdccb1b319a6 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1771,6 +1771,9 @@ static u32 abs_s32(s32 x)
return x >= 0 ? (u32)x : -(u32)x;
}
+static u64 (*interpreters_args[])(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5,
+ const struct bpf_insn *insn);
+
/**
* ___bpf_prog_run - run eBPF program on a given context
* @regs: is the array of MAX_BPF_EXT_REG eBPF pseudo-registers
@@ -2077,10 +2080,10 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
CONT;
JMP_CALL_ARGS:
- BPF_R0 = (__bpf_call_base_args + insn->imm)(BPF_R1, BPF_R2,
+ BPF_R0 = (interpreters_args[insn->off])(BPF_R1, BPF_R2,
BPF_R3, BPF_R4,
BPF_R5,
- insn + insn->off + 1);
+ insn + insn->imm + 1);
CONT;
JMP_TAIL_CALL: {
@@ -2400,9 +2403,7 @@ int bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth)
/* Prevent out-of-bounds read to interpreters_args */
if (stack_depth > MAX_BPF_STACK)
return -EINVAL;
- insn->off = (s16) insn->imm;
- insn->imm = interpreters_args[(round_up(stack_depth, 32) / 32) - 1] -
- __bpf_call_base_args;
+ insn->off = (round_up(stack_depth, 32) / 32) - 1;
insn->code = BPF_JMP | BPF_CALL_ARGS;
return 0;
}
diff --git a/kernel/bpf/fixups.c b/kernel/bpf/fixups.c
index 300e4e251931..8947ef74f6a8 100644
--- a/kernel/bpf/fixups.c
+++ b/kernel/bpf/fixups.c
@@ -1250,9 +1250,9 @@ static int jit_subprogs(struct bpf_verifier_env *env)
}
if (!bpf_pseudo_call(insn))
continue;
- insn->off = env->insn_aux_data[i].call_imm;
- subprog = bpf_find_subprog(env, i + insn->off + 1);
- insn->imm = subprog;
+ insn->imm = env->insn_aux_data[i].call_imm;
+ subprog = bpf_find_subprog(env, i + insn->imm + 1);
+ insn->off = subprog;
}
prog->jited = 1;
diff --git a/tools/bpf/bpftool/xlated_dumper.c b/tools/bpf/bpftool/xlated_dumper.c
index 5e7cb8b36fef..6ccda6787245 100644
--- a/tools/bpf/bpftool/xlated_dumper.c
+++ b/tools/bpf/bpftool/xlated_dumper.c
@@ -150,13 +150,13 @@ static const char *print_call_pcrel(struct dump_data *dd,
if (!dd->nr_jited_ksyms)
/* Do not show address for interpreted programs */
snprintf(dd->scratch_buff, sizeof(dd->scratch_buff),
- "%+d", insn->off);
+ "%+d", insn->imm);
else if (sym)
snprintf(dd->scratch_buff, sizeof(dd->scratch_buff),
- "%+d#%s", insn->off, sym->name);
+ "%+d#%s", insn->imm, sym->name);
else
snprintf(dd->scratch_buff, sizeof(dd->scratch_buff),
- "%+d#0x%lx", insn->off, address);
+ "%+d#0x%lx", insn->imm, address);
return dd->scratch_buff;
}
@@ -181,8 +181,8 @@ static const char *print_call(void *private_data,
struct kernel_sym *sym;
if (insn->src_reg == BPF_PSEUDO_CALL &&
- (__u32) insn->imm < dd->nr_jited_ksyms && dd->jited_ksyms)
- address = dd->jited_ksyms[insn->imm];
+ (__u32) insn->off < dd->nr_jited_ksyms && dd->jited_ksyms)
+ address = dd->jited_ksyms[insn->off];
sym = kernel_syms_search(dd, address);
if (insn->src_reg == BPF_PSEUDO_CALL)
diff --git a/tools/testing/selftests/bpf/disasm_helpers.c b/tools/testing/selftests/bpf/disasm_helpers.c
index f529f1c8c171..96b1f2ffe438 100644
--- a/tools/testing/selftests/bpf/disasm_helpers.c
+++ b/tools/testing/selftests/bpf/disasm_helpers.c
@@ -4,7 +4,6 @@
#include "disasm.h"
struct print_insn_context {
- char scratch[16];
char *buf;
size_t sz;
};
@@ -19,22 +18,6 @@ static void print_insn_cb(void *private_data, const char *fmt, ...)
va_end(args);
}
-static const char *print_call_cb(void *private_data, const struct bpf_insn *insn)
-{
- struct print_insn_context *ctx = private_data;
-
- /* For pseudo calls verifier.c:jit_subprogs() hides original
- * imm to insn->off and changes insn->imm to be an index of
- * the subprog instead.
- */
- if (insn->src_reg == BPF_PSEUDO_CALL) {
- snprintf(ctx->scratch, sizeof(ctx->scratch), "%+d", insn->off);
- return ctx->scratch;
- }
-
- return NULL;
-}
-
struct bpf_insn *disasm_insn(struct bpf_insn *insn, char *buf, size_t buf_sz)
{
struct print_insn_context ctx = {
@@ -43,7 +26,6 @@ struct bpf_insn *disasm_insn(struct bpf_insn *insn, char *buf, size_t buf_sz)
};
struct bpf_insn_cbs cbs = {
.cb_print = print_insn_cb,
- .cb_call = print_call_cb,
.private_data = &ctx,
};
char *tmp, *pfx_end, *sfx_start;
--
2.53.0
^ permalink raw reply related [flat|nested] 11+ messages in thread
* [PATCH bpf-next v7 3/3] selftests/bpf: Add test for large offset bpf-to-bpf call
2026-04-21 14:45 [PATCH bpf-next v7 0/3] bpf: Fix call offset truncation and OOB read in bpf_patch_call_args() Yazhou Tang
2026-04-21 14:45 ` [PATCH bpf-next v7 1/3] bpf: Fix out-of-bounds " Yazhou Tang
2026-04-21 14:45 ` [PATCH bpf-next v7 2/3] bpf: Fix s16 truncation for large bpf-to-bpf call offsets Yazhou Tang
@ 2026-04-21 14:45 ` Yazhou Tang
2026-04-21 21:02 ` sashiko-bot
2 siblings, 1 reply; 11+ messages in thread
From: Yazhou Tang @ 2026-04-21 14:45 UTC (permalink / raw)
To: bpf, ast, eddyz87, emil, puranjay, xukuohai
Cc: daniel, john.fastabend, andrii, martin.lau, song, yonghong.song,
kpsingh, sdf, haoluo, jolsa, tangyazhou518, shenghaoyuan0928,
ziye
From: Yazhou Tang <tangyazhou518@outlook.com>
Add a selftest to verify the verifier and JIT behavior when handling
bpf-to-bpf calls with relative jump offsets exceeding the s16 boundary.
The test uses an inline assembly block with ".rept 32765" to generate
a large dummy subprogram. By placing this padding between the main program
and the target subprogram, it forces the verifier to process a bpf-to-bpf call
where the imm field exceeds the s16 range.
- When JIT is enabled, it asserts that the program is successfully loaded and
executes correctly to return the expected value. Since patch 2/3 does not change
the JIT behavior, the test passes whether the fix is applied or not.
- When JIT is disabled, it also asserts that the program is successfully loaded
and executes correctly to return the expected value.
- Before the fix, the verifier rewrites the call instruction with a truncated
offset (here 32768 -> -32768) and lets it pass. When the program is executed,
the call instruction causes a kernel panic due to an invalid jump target.
- After the fix, the verifier correctly handles the large offset and allows
it to pass. The program then executes correctly to return the expected value.
Co-developed-by: Tianci Cao <ziye@zju.edu.cn>
Signed-off-by: Tianci Cao <ziye@zju.edu.cn>
Co-developed-by: Shenghao Yuan <shenghaoyuan0928@163.com>
Signed-off-by: Shenghao Yuan <shenghaoyuan0928@163.com>
Signed-off-by: Yazhou Tang <tangyazhou518@outlook.com>
---
.../selftests/bpf/prog_tests/call_large_imm.c | 29 ++++++++++++
.../selftests/bpf/progs/call_large_imm.c | 46 +++++++++++++++++++
2 files changed, 75 insertions(+)
create mode 100644 tools/testing/selftests/bpf/prog_tests/call_large_imm.c
create mode 100644 tools/testing/selftests/bpf/progs/call_large_imm.c
diff --git a/tools/testing/selftests/bpf/prog_tests/call_large_imm.c b/tools/testing/selftests/bpf/prog_tests/call_large_imm.c
new file mode 100644
index 000000000000..3ed61a8ca08c
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/call_large_imm.c
@@ -0,0 +1,29 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include "call_large_imm.skel.h"
+
+void test_call_large_imm(void)
+{
+ struct call_large_imm *skel;
+ int err, prog_fd;
+
+ LIBBPF_OPTS(bpf_test_run_opts, opts);
+
+ skel = call_large_imm__open();
+ if (!ASSERT_OK_PTR(skel, "skel_open"))
+ return;
+
+ err = call_large_imm__load(skel);
+
+ if (!ASSERT_OK(err, "load_should_succeed"))
+ goto cleanup;
+
+ prog_fd = bpf_program__fd(skel->progs.call_large_imm_test);
+ err = bpf_prog_test_run_opts(prog_fd, &opts);
+
+ if (ASSERT_OK(err, "prog_run_success"))
+ ASSERT_EQ(opts.retval, 3, "prog_retval");
+
+cleanup:
+ call_large_imm__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/call_large_imm.c b/tools/testing/selftests/bpf/progs/call_large_imm.c
new file mode 100644
index 000000000000..7cdf1e6d80d8
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/call_large_imm.c
@@ -0,0 +1,46 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+
+/*
+ * A volatile global variable is used here, so that padding_subprog()
+ * will not be optimized, and it will not be really executed even if
+ * it is successfully loaded (when JIT is enabled).
+ */
+volatile int zero = 0;
+
+/*
+ * 32765 is the exact minimum number of padding instructions needed to
+ * trigger the verifier failure, because:
+ * 1. Counting the wrapper instructions around the padding block (one
+ * "r0=0" and two "exit" instructions), the actual jump distance
+ * evaluates to N + 3.
+ * 2. To overflow the s16 max bound (32767), we need N + 3 > 32767.
+ * Thus, N = 32765 is the exact minimum padding size required.
+ */
+static __attribute__((noinline)) void padding_subprog(void)
+{
+ asm volatile (" \
+ r0 = 0; \
+ .rept 32765; \
+ r0 += 0; \
+ .endr; \
+ " ::: "r0");
+}
+
+static __attribute__((noinline)) int target_subprog(void)
+{
+ /* A volatile variable is used here to prevent optimization. */
+ volatile int magic_ret = 3;
+ return magic_ret;
+}
+
+SEC("syscall")
+int call_large_imm_test(void *ctx)
+{
+ if (zero)
+ padding_subprog();
+ return target_subprog();
+}
+
+char LICENSE[] SEC("license") = "GPL";
--
2.53.0
^ permalink raw reply related [flat|nested] 11+ messages in thread
* Re: [PATCH bpf-next v7 1/3] bpf: Fix out-of-bounds read in bpf_patch_call_args()
2026-04-21 14:45 ` [PATCH bpf-next v7 1/3] bpf: Fix out-of-bounds " Yazhou Tang
@ 2026-04-21 15:29 ` bot+bpf-ci
0 siblings, 0 replies; 11+ messages in thread
From: bot+bpf-ci @ 2026-04-21 15:29 UTC (permalink / raw)
To: tangyazhou, bpf, ast, eddyz87, emil, puranjay, xukuohai
Cc: daniel, john.fastabend, andrii, martin.lau, song, yonghong.song,
kpsingh, sdf, haoluo, jolsa, tangyazhou518, shenghaoyuan0928,
ziye, ast, andrii, daniel, martin.lau, eddyz87, yonghong.song,
clm, ihor.solodrai
[-- Attachment #1: Type: text/plain, Size: 1103 bytes --]
> bpf: Fix out-of-bounds read in bpf_patch_call_args()
>
> The interpreters_args array only accommodates stack depths up to MAX_BPF_STACK
> (512 bytes). However, do_misc_fixups() may allow a larger stack depth if JIT
> is requested.
>
> If JIT compilation later fails and falls back to the interpreter, the verifier
> invokes bpf_patch_call_args() with this oversized stack depth. This causes a
> load-time out-of-bounds (OOB) read when calculating the interpreter function
> pointer index.
>
> Fix this by changing bpf_patch_call_args() to return an int and explicitly
> rejecting the JIT fallback (returning -EINVAL) if the stack depth exceeds
> MAX_BPF_STACK.
This patch fixes a bug that appears to have been introduced by commit
1ea47e01ad6e ("bpf: add support for bpf_call to interpreter"). Should
this commit include a Fixes: tag pointing to that commit?
---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24729984210
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH bpf-next v7 2/3] bpf: Fix s16 truncation for large bpf-to-bpf call offsets
2026-04-21 14:45 ` [PATCH bpf-next v7 2/3] bpf: Fix s16 truncation for large bpf-to-bpf call offsets Yazhou Tang
@ 2026-04-21 15:29 ` bot+bpf-ci
2026-04-22 16:05 ` Yazhou Tang
2026-04-21 20:46 ` sashiko-bot
1 sibling, 1 reply; 11+ messages in thread
From: bot+bpf-ci @ 2026-04-21 15:29 UTC (permalink / raw)
To: tangyazhou, bpf, ast, eddyz87, emil, puranjay, xukuohai
Cc: daniel, john.fastabend, andrii, martin.lau, song, yonghong.song,
kpsingh, sdf, haoluo, jolsa, tangyazhou518, shenghaoyuan0928,
ziye, ast, andrii, daniel, martin.lau, eddyz87, yonghong.song,
clm, ihor.solodrai
[-- Attachment #1: Type: text/plain, Size: 2298 bytes --]
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index 63044ebe5721..cdccb1b319a6 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -2400,9 +2403,7 @@ int bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth)
> /* Prevent out-of-bounds read to interpreters_args */
> if (stack_depth > MAX_BPF_STACK)
> return -EINVAL;
> - insn->off = (s16) insn->imm;
> - insn->imm = interpreters_args[(round_up(stack_depth, 32) / 32) - 1] -
> - __bpf_call_base_args;
> + insn->off = (round_up(stack_depth, 32) / 32) - 1;
> insn->code = BPF_JMP | BPF_CALL_ARGS;
> return 0;
> }
In the v6 review thread at https://lore.kernel.org/bpf/CAADnVQ+m2pFaPbRi=A+tsROKP2CPPkny_LXUKR3vUgH2+u1jRg@mail.gmail.com/,
Alexei Starovoitov noted that Sashiko flagged a real OOB read issue.
The author acknowledged this concern in a follow-up discussion, stating
that v6 inadvertently turned a pre-existing load-time OOB read into a more
critical run-time OOB read when stack_depth exceeds MAX_BPF_STACK. The
author committed to fixing this in v7 by changing bpf_patch_call_args()
return type to int and adding strict boundary checks for the array access.
However, the current version still lacks a boundary check on the computed
array index before accessing interpreters_args. The code checks if
stack_depth exceeds MAX_BPF_STACK:
if (stack_depth > MAX_BPF_STACK)
return -EINVAL;
But does this check guarantee that (round_up(stack_depth, 32) / 32) - 1
stays within the bounds of the interpreters_args array?
Looking at the array access in ___bpf_prog_run():
BPF_R0 = (interpreters_args[insn->off])(BPF_R1, BPF_R2,
BPF_R3, BPF_R4,
BPF_R5,
insn + insn->imm + 1);
If the computed index exceeds the array size, this would cause a run-time
OOB access. Should there be an explicit check ensuring the index value
is within the valid range of the interpreters_args array, as promised in
the v6 discussion?
---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24729984210
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH bpf-next v7 2/3] bpf: Fix s16 truncation for large bpf-to-bpf call offsets
2026-04-21 14:45 ` [PATCH bpf-next v7 2/3] bpf: Fix s16 truncation for large bpf-to-bpf call offsets Yazhou Tang
2026-04-21 15:29 ` bot+bpf-ci
@ 2026-04-21 20:46 ` sashiko-bot
2026-04-22 15:33 ` Yazhou Tang
1 sibling, 1 reply; 11+ messages in thread
From: sashiko-bot @ 2026-04-21 20:46 UTC (permalink / raw)
To: Yazhou Tang; +Cc: bpf
Sashiko AI review found 1 potential issue(s):
- [Medium] bpftool's control-flow graph generator (cfg.c) was not updated to reflect the `insn->off` and `insn->imm` swap for BPF_PSEUDO_CALL, which will break visual dumps.
--
commit bbf026d16d77a3e1debc60ea3a7b2fef7d7def5e
Author: Yazhou Tang <tangyazhou518@outlook.com>
bpf: Fix s16 truncation for large bpf-to-bpf call offsets
This commit fixes an s16 truncation issue for large bpf-to-bpf call
offsets when JIT is disabled or falls back to the interpreter.
It does this by using the off field to store the interpreter function
index and the imm field to pass the original offset.
The related dumper code was also updated to adapt to this change.
> diff --git a/kernel/bpf/fixups.c b/kernel/bpf/fixups.c
> index 300e4e2519318..8947ef74f6a88 100644
> --- a/kernel/bpf/fixups.c
> +++ b/kernel/bpf/fixups.c
> @@ -1250,9 +1250,9 @@ static int jit_subprogs(struct bpf_verifier_env *env)
> }
> if (!bpf_pseudo_call(insn))
> continue;
> - insn->off = env->insn_aux_data[i].call_imm;
> - subprog = bpf_find_subprog(env, i + insn->off + 1);
> - insn->imm = subprog;
> + insn->imm = env->insn_aux_data[i].call_imm;
> + subprog = bpf_find_subprog(env, i + insn->imm + 1);
> + insn->off = subprog;
> }
Since insn->imm and insn->off are now swapped for exported translated
programs, should tools/bpf/bpftool/cfg.c also be updated?
It looks like cfg_partition_funcs() currently relies on cur->off to
calculate the target subprogram start when generating the control-flow
graph:
tools/bpf/bpftool/cfg.c:cfg_partition_funcs() {
...
for (; cur < end; cur++) {
if (cur->code != (BPF_JMP | BPF_CALL))
continue;
if (cur->src_reg != BPF_PSEUDO_CALL)
continue;
func = cfg_append_func(cfg, cur + cur->off + 1);
if (!func)
return true;
}
...
}
Because cur->off now holds the subprogram index instead of the jump offset,
could this point to the wrong instruction and cause a corrupted graph or
out-of-bounds memory access during bpftool prog dump xlated visual?
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260421144504.823756-1-tangyazhou@zju.edu.cn?part=2
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH bpf-next v7 3/3] selftests/bpf: Add test for large offset bpf-to-bpf call
2026-04-21 14:45 ` [PATCH bpf-next v7 3/3] selftests/bpf: Add test for large offset bpf-to-bpf call Yazhou Tang
@ 2026-04-21 21:02 ` sashiko-bot
2026-04-22 15:47 ` Yazhou Tang
0 siblings, 1 reply; 11+ messages in thread
From: sashiko-bot @ 2026-04-21 21:02 UTC (permalink / raw)
To: Yazhou Tang; +Cc: bpf
Sashiko AI review found 1 potential issue(s):
- [Medium] The selftest causes a kernel panic on unfixed kernels (when JIT is disabled), which disrupts CI pipelines and bisection workflows.
--
commit c0593c5f6291ee1a846f9c17858890ded3cd8ea4
Author: Yazhou Tang <tangyazhou518@outlook.com>
selftests/bpf: Add test for large offset bpf-to-bpf call
This commit adds a BPF selftest to verify that the verifier and JIT correctly
handle bpf-to-bpf calls with relative jump offsets exceeding the s16 boundary.
It uses a large inline assembly block to force the verifier to process a jump
exceeding the standard limit.
> - Before the fix, the verifier rewrites the call instruction with a truncated
> offset (here 32768 -> -32768) and lets it pass. When the program is executed,
> the call instruction causes a kernel panic due to an invalid jump target.
Is it safe to introduce a test that causes a kernel panic on unpatched kernels?
While reproducing the bug is the test's goal, crashing the kernel is highly
disruptive for CI systems and git bisect workflows that run the testsuite on
older or unpatched kernel trees.
> diff --git a/tools/testing/selftests/bpf/progs/call_large_imm.c b/tools/testing/selftests/bpf/progs/call_large_imm.c
> new file mode 100644
> index 0000000000000..7cdf1e6d80d8d
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/call_large_imm.c
[ ... ]
> +static __attribute__((noinline)) void padding_subprog(void)
> +{
> + asm volatile (" \
> + r0 = 0; \
> + .rept 32765; \
> + r0 += 0; \
> + .endr; \
> + " ::: "r0");
> +}
> +
> +static __attribute__((noinline)) int target_subprog(void)
> +{
> + /* A volatile variable is used here to prevent optimization. */
> + volatile int magic_ret = 3;
> + return magic_ret;
> +}
> +
> +SEC("syscall")
> +int call_large_imm_test(void *ctx)
> +{
> + if (zero)
> + padding_subprog();
> + return target_subprog();
> +}
Could the test be redesigned to fail gracefully?
If the large padding block were placed before the call instruction, the
call site would be at a large positive offset. A truncated negative jump
(-32768) would then land safely inside the valid padding instructions instead
of jumping out of bounds of the program.
The execution would then fall through and return an incorrect value, causing
ASSERT_EQ(opts.retval, 3) in the runner to cleanly fail without bringing down
the system.
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260421144504.823756-1-tangyazhou@zju.edu.cn?part=3
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH bpf-next v7 2/3] bpf: Fix s16 truncation for large bpf-to-bpf call offsets
2026-04-21 20:46 ` sashiko-bot
@ 2026-04-22 15:33 ` Yazhou Tang
0 siblings, 0 replies; 11+ messages in thread
From: Yazhou Tang @ 2026-04-22 15:33 UTC (permalink / raw)
To: sashiko; +Cc: bpf
On 4/22/26 4:46 AM, sashiko-bot@kernel.org wrote:
> Sashiko AI review found 1 potential issue(s):
> - [Medium] bpftool's control-flow graph generator (cfg.c) was not updated to reflect the `insn->off` and `insn->imm` swap for BPF_PSEUDO_CALL, which will break visual dumps.
> --
>
> commit bbf026d16d77a3e1debc60ea3a7b2fef7d7def5e
> Author: Yazhou Tang <tangyazhou518@outlook.com>
>
> bpf: Fix s16 truncation for large bpf-to-bpf call offsets
>
> This commit fixes an s16 truncation issue for large bpf-to-bpf call
> offsets when JIT is disabled or falls back to the interpreter.
> It does this by using the off field to store the interpreter function
> index and the imm field to pass the original offset.
> The related dumper code was also updated to adapt to this change.
>
>> diff --git a/kernel/bpf/fixups.c b/kernel/bpf/fixups.c
>> index 300e4e2519318..8947ef74f6a88 100644
>> --- a/kernel/bpf/fixups.c
>> +++ b/kernel/bpf/fixups.c
>> @@ -1250,9 +1250,9 @@ static int jit_subprogs(struct bpf_verifier_env *env)
>> }
>> if (!bpf_pseudo_call(insn))
>> continue;
>> - insn->off = env->insn_aux_data[i].call_imm;
>> - subprog = bpf_find_subprog(env, i + insn->off + 1);
>> - insn->imm = subprog;
>> + insn->imm = env->insn_aux_data[i].call_imm;
>> + subprog = bpf_find_subprog(env, i + insn->imm + 1);
>> + insn->off = subprog;
>> }
>
> Since insn->imm and insn->off are now swapped for exported translated
> programs, should tools/bpf/bpftool/cfg.c also be updated?
>
> It looks like cfg_partition_funcs() currently relies on cur->off to
> calculate the target subprogram start when generating the control-flow
> graph:
>
> tools/bpf/bpftool/cfg.c:cfg_partition_funcs() {
> ...
> for (; cur < end; cur++) {
> if (cur->code != (BPF_JMP | BPF_CALL))
> continue;
> if (cur->src_reg != BPF_PSEUDO_CALL)
> continue;
> func = cfg_append_func(cfg, cur + cur->off + 1);
> if (!func)
> return true;
> }
> ...
> }
>
> Because cur->off now holds the subprogram index instead of the jump offset,
> could this point to the wrong instruction and cause a corrupted graph or
> out-of-bounds memory access during bpftool prog dump xlated visual?
>
This is a valid point. I will fix this issue in the next version.
* Re: [PATCH bpf-next v7 3/3] selftests/bpf: Add test for large offset bpf-to-bpf call
2026-04-21 21:02 ` sashiko-bot
@ 2026-04-22 15:47 ` Yazhou Tang
0 siblings, 0 replies; 11+ messages in thread
From: Yazhou Tang @ 2026-04-22 15:47 UTC (permalink / raw)
To: sashiko; +Cc: bpf
On 4/22/26 5:02 AM, sashiko-bot@kernel.org wrote:
> Sashiko AI review found 1 potential issue(s):
> - [Medium] The selftest causes a kernel panic on unfixed kernels (when JIT is disabled), which disrupts CI pipelines and bisection workflows.
> --
>
> commit c0593c5f6291ee1a846f9c17858890ded3cd8ea4
> Author: Yazhou Tang <tangyazhou518@outlook.com>
>
> selftests/bpf: Add test for large offset bpf-to-bpf call
>
> This commit adds a BPF selftest to verify that the verifier and JIT correctly
> handle bpf-to-bpf calls with relative jump offsets exceeding the s16 boundary.
> It uses a large inline assembly block to force the verifier to process a jump
> exceeding the standard limit.
>
>> - Before the fix, the verifier rewrites the call instruction with a truncated
>> offset (here 32768 -> -32768) and lets it pass. When the program is executed,
>> the call instruction causes a kernel panic due to an invalid jump target.
>
> Is it safe to introduce a test that causes a kernel panic on unpatched kernels?
>
> While reproducing the bug is the test's goal, crashing the kernel is highly
> disruptive for CI systems and git bisect workflows that run the testsuite on
> older or unpatched kernel trees.
This is a valid point. Although the upstream CI typically runs with the JIT enabled,
a test that panics the kernel when the JIT is disabled is definitely something we should avoid.
>> diff --git a/tools/testing/selftests/bpf/progs/call_large_imm.c b/tools/testing/selftests/bpf/progs/call_large_imm.c
>> new file mode 100644
>> index 0000000000000..7cdf1e6d80d8d
>> --- /dev/null
>> +++ b/tools/testing/selftests/bpf/progs/call_large_imm.c
>
> [ ... ]
>
>> +static __attribute__((noinline)) void padding_subprog(void)
>> +{
>> + asm volatile (" \
>> + r0 = 0; \
>> + .rept 32765; \
>> + r0 += 0; \
>> + .endr; \
>> + " ::: "r0");
>> +}
>> +
>> +static __attribute__((noinline)) int target_subprog(void)
>> +{
>> + /* A volatile variable is used here to prevent optimization. */
>> + volatile int magic_ret = 3;
>> + return magic_ret;
>> +}
>> +
>> +SEC("syscall")
>> +int call_large_imm_test(void *ctx)
>> +{
>> + if (zero)
>> + padding_subprog();
>> + return target_subprog();
>> +}
>
> Could the test be redesigned to fail gracefully?
>
> If the large padding block were placed before the call instruction, the
> call site would be at a large positive offset. A truncated negative jump
> (-32768) would then land safely inside the valid padding instructions instead
> of jumping out of bounds of the program.
>
> The execution would then fall through and return an incorrect value, causing
> ASSERT_EQ(opts.retval, 3) in the runner to cleanly fail without bringing down
> the system.
>
This approach is indeed more elegant and safer. I will redesign the selftest
as suggested in the next version.
* Re: [PATCH bpf-next v7 2/3] bpf: Fix s16 truncation for large bpf-to-bpf call offsets
2026-04-21 15:29 ` bot+bpf-ci
@ 2026-04-22 16:05 ` Yazhou Tang
0 siblings, 0 replies; 11+ messages in thread
From: Yazhou Tang @ 2026-04-22 16:05 UTC (permalink / raw)
To: bot+bpf-ci, bpf, ast, eddyz87, emil, puranjay, xukuohai
Cc: daniel, john.fastabend, andrii, martin.lau, song, yonghong.song,
kpsingh, sdf, haoluo, jolsa, tangyazhou518, shenghaoyuan0928,
ziye, martin.lau, clm, ihor.solodrai
On 4/21/26 11:29 PM, bot+bpf-ci@kernel.org wrote:
>> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
>> index 63044ebe5721..cdccb1b319a6 100644
>> --- a/kernel/bpf/core.c
>> +++ b/kernel/bpf/core.c
>> @@ -2400,9 +2403,7 @@ int bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth)
>> /* Prevent out-of-bounds read to interpreters_args */
>> if (stack_depth > MAX_BPF_STACK)
>> return -EINVAL;
>> - insn->off = (s16) insn->imm;
>> - insn->imm = interpreters_args[(round_up(stack_depth, 32) / 32) - 1] -
>> - __bpf_call_base_args;
>> + insn->off = (round_up(stack_depth, 32) / 32) - 1;
>> insn->code = BPF_JMP | BPF_CALL_ARGS;
>> return 0;
>> }
>
> In the v6 review thread at https://lore.kernel.org/bpf/CAADnVQ+m2pFaPbRi=A+tsROKP2CPPkny_LXUKR3vUgH2+u1jRg@mail.gmail.com/,
> Alexei Starovoitov noted that Sashiko flagged a real OOB read issue.
>
> The author acknowledged this concern in a follow-up discussion, stating
> that v6 inadvertently turned a pre-existing load-time OOB read into a more
> critical run-time OOB read when stack_depth exceeds MAX_BPF_STACK. The
> author committed to fixing this in v7 by changing bpf_patch_call_args()
> return type to int and adding strict boundary checks for the array access.
>
> However, the current version still lacks a boundary check on the computed
> array index before accessing interpreters_args. The code checks if
> stack_depth exceeds MAX_BPF_STACK:
>
> if (stack_depth > MAX_BPF_STACK)
> return -EINVAL;
>
> But does this check guarantee that (round_up(stack_depth, 32) / 32) - 1
> stays within the bounds of the interpreters_args array?
Yes, the "stack_depth > MAX_BPF_STACK" check guarantees this.
The interpreters_args array is sized to match MAX_BPF_STACK: it covers
stack depths up to 512 bytes in 32-byte steps, i.e. exactly 16 entries
(512 / 32). For any stack_depth in [1, 512], the computed index
(round_up(stack_depth, 32) / 32) - 1 falls in [0, 15], so checking the
stack depth is equivalent to an explicit ARRAY_SIZE bounds check.
>
> Looking at the array access in ___bpf_prog_run():
>
> BPF_R0 = (interpreters_args[insn->off])(BPF_R1, BPF_R2,
> BPF_R3, BPF_R4,
> BPF_R5,
> insn + insn->imm + 1);
>
> If the computed index exceeds the array size, this would cause a run-time
> OOB access. Should there be an explicit check ensuring the index value
> is within the valid range of the interpreters_args array, as promised in
> the v6 discussion?
>
I prefer checking MAX_BPF_STACK directly because it explicitly shows
the root cause of the verifier's rejection. It also maintains consistency
with the existing check in bpf_do_misc_fixups().
>
> ---
> AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
> See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
>
> CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24729984210
end of thread, other threads:[~2026-04-22 16:06 UTC | newest]
Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
2026-04-21 14:45 [PATCH bpf-next v7 0/3] bpf: Fix call offset truncation and OOB read in bpf_patch_call_args() Yazhou Tang
2026-04-21 14:45 ` [PATCH bpf-next v7 1/3] bpf: Fix out-of-bounds " Yazhou Tang
2026-04-21 15:29 ` bot+bpf-ci
2026-04-21 14:45 ` [PATCH bpf-next v7 2/3] bpf: Fix s16 truncation for large bpf-to-bpf call offsets Yazhou Tang
2026-04-21 15:29 ` bot+bpf-ci
2026-04-22 16:05 ` Yazhou Tang
2026-04-21 20:46 ` sashiko-bot
2026-04-22 15:33 ` Yazhou Tang
2026-04-21 14:45 ` [PATCH bpf-next v7 3/3] selftests/bpf: Add test for large offset bpf-to-bpf call Yazhou Tang
2026-04-21 21:02 ` sashiko-bot
2026-04-22 15:47 ` Yazhou Tang