* [PATCH v3 bpf-next 0/2] Properly load values from insn_arrays with non-zero offsets
@ 2026-04-06 16:01 Anton Protopopov
2026-04-06 16:01 ` [PATCH v3 bpf-next 1/2] bpf: Do not ignore offsets for loads from insn_arrays Anton Protopopov
` (2 more replies)
0 siblings, 3 replies; 4+ messages in thread
From: Anton Protopopov @ 2026-04-06 16:01 UTC (permalink / raw)
To: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Eduard Zingerman, Kumar Kartikeya Dwivedi, Jiyong Yang,
Mykyta Yatsenko
Cc: Anton Protopopov
A PTR_TO_INSN value is always loaded via a BPF_LDX_MEM instruction.
However, the verifier doesn't properly verify such loads when the
offset is nonzero. Fix this and extend the selftests with more scenarios.
v2 -> v3:
* Add a C-level selftest which triggers a load with nonzero offset (Alexei)
* Rephrase commit messages a bit
v2: https://lore.kernel.org/bpf/20260402184647.988132-1-a.s.protopopov@gmail.com/
v1: https://lore.kernel.org/bpf/20260401161529.681755-1-a.s.protopopov@gmail.com
Anton Protopopov (2):
bpf: Do not ignore offsets for loads from insn_arrays
selftests/bpf: Add more tests for loading insn arrays with offsets
kernel/bpf/verifier.c | 20 +++
.../selftests/bpf/prog_tests/bpf_gotox.c | 123 ++++++++++++------
tools/testing/selftests/bpf/progs/bpf_gotox.c | 31 +++++
3 files changed, 135 insertions(+), 39 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 4+ messages in thread
* [PATCH v3 bpf-next 1/2] bpf: Do not ignore offsets for loads from insn_arrays
2026-04-06 16:01 [PATCH v3 bpf-next 0/2] Properly load values from insn_arrays with non-zero offsets Anton Protopopov
@ 2026-04-06 16:01 ` Anton Protopopov
2026-04-06 16:01 ` [PATCH v3 bpf-next 2/2] selftests/bpf: Add more tests for loading insn arrays with offsets Anton Protopopov
2026-04-07 1:40 ` [PATCH v3 bpf-next 0/2] Properly load values from insn_arrays with non-zero offsets patchwork-bot+netdevbpf
2 siblings, 0 replies; 4+ messages in thread
From: Anton Protopopov @ 2026-04-06 16:01 UTC (permalink / raw)
To: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Eduard Zingerman, Kumar Kartikeya Dwivedi, Jiyong Yang,
Mykyta Yatsenko
Cc: Anton Protopopov
When a PTR_TO_INSN pointer is dereferenced, the offset field
of the BPF_LDX_MEM instruction can be nonzero. Patch the verifier
so that it no longer ignores this field.
Reported-by: Jiyong Yang <ksur673@gmail.com>
Fixes: 493d9e0d6083 ("bpf, x86: add support for indirect jumps")
Signed-off-by: Anton Protopopov <a.s.protopopov@gmail.com>
---
kernel/bpf/verifier.c | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 8c1cf2eb6cbb..ce29b3b7a4d9 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -212,6 +212,8 @@ static int ref_set_non_owning(struct bpf_verifier_env *env,
static bool is_trusted_reg(const struct bpf_reg_state *reg);
static inline bool in_sleepable_context(struct bpf_verifier_env *env);
static const char *non_sleepable_context_description(struct bpf_verifier_env *env);
+static void scalar32_min_max_add(struct bpf_reg_state *dst_reg, struct bpf_reg_state *src_reg);
+static void scalar_min_max_add(struct bpf_reg_state *dst_reg, struct bpf_reg_state *src_reg);
static bool bpf_map_ptr_poisoned(const struct bpf_insn_aux_data *aux)
{
@@ -7735,6 +7737,23 @@ static bool get_func_retval_range(struct bpf_prog *prog,
return false;
}
+static void add_scalar_to_reg(struct bpf_reg_state *dst_reg, s64 val)
+{
+ struct bpf_reg_state fake_reg;
+
+ if (!val)
+ return;
+
+ fake_reg.type = SCALAR_VALUE;
+ __mark_reg_known(&fake_reg, val);
+
+ scalar32_min_max_add(dst_reg, &fake_reg);
+ scalar_min_max_add(dst_reg, &fake_reg);
+ dst_reg->var_off = tnum_add(dst_reg->var_off, fake_reg.var_off);
+
+ reg_bounds_sync(dst_reg);
+}
+
/* check whether memory at (regno + off) is accessible for t = (read | write)
* if t==write, value_regno is a register which value is stored into memory
* if t==read, value_regno is a register which will receive the value from memory
@@ -7816,6 +7835,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
return -EACCES;
}
copy_register_state(&regs[value_regno], reg);
+ add_scalar_to_reg(&regs[value_regno], off);
regs[value_regno].type = PTR_TO_INSN;
} else {
mark_reg_unknown(env, regs, value_regno);
--
2.34.1
* [PATCH v3 bpf-next 2/2] selftests/bpf: Add more tests for loading insn arrays with offsets
2026-04-06 16:01 [PATCH v3 bpf-next 0/2] Properly load values from insn_arrays with non-zero offsets Anton Protopopov
2026-04-06 16:01 ` [PATCH v3 bpf-next 1/2] bpf: Do not ignore offsets for loads from insn_arrays Anton Protopopov
@ 2026-04-06 16:01 ` Anton Protopopov
2026-04-07 1:40 ` [PATCH v3 bpf-next 0/2] Properly load values from insn_arrays with non-zero offsets patchwork-bot+netdevbpf
2 siblings, 0 replies; 4+ messages in thread
From: Anton Protopopov @ 2026-04-06 16:01 UTC (permalink / raw)
To: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Eduard Zingerman, Kumar Kartikeya Dwivedi, Jiyong Yang,
Mykyta Yatsenko
Cc: Anton Protopopov
A `gotox rX` instruction accepts only values of type PTR_TO_INSN.
The only way to create such a value is to load it from a map of
type insn_array:
rX = *(rY + offset) # rY was read from an insn_array
...
gotox rX
Add instruction-level and C-level selftests to validate loads
with nonzero offsets.
Signed-off-by: Anton Protopopov <a.s.protopopov@gmail.com>
---
.../selftests/bpf/prog_tests/bpf_gotox.c | 123 ++++++++++++------
tools/testing/selftests/bpf/progs/bpf_gotox.c | 31 +++++
2 files changed, 115 insertions(+), 39 deletions(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_gotox.c b/tools/testing/selftests/bpf/prog_tests/bpf_gotox.c
index 75b0cf2467ab..73dc63882b7d 100644
--- a/tools/testing/selftests/bpf/prog_tests/bpf_gotox.c
+++ b/tools/testing/selftests/bpf/prog_tests/bpf_gotox.c
@@ -317,7 +317,7 @@ static void check_ldimm64_off_load(struct bpf_gotox *skel __always_unused)
static int __check_ldimm64_gotox_prog_load(struct bpf_insn *insns,
__u32 insn_cnt,
- __u32 off1, __u32 off2)
+ int off1, int off2, int off3)
{
const __u32 values[] = {5, 7, 9, 11, 13, 15};
const __u32 max_entries = ARRAY_SIZE(values);
@@ -349,16 +349,46 @@ static int __check_ldimm64_gotox_prog_load(struct bpf_insn *insns,
/* r1 += off2 */
insns[2].imm = off2;
+ /* r1 = *(r1 + off3) */
+ insns[3].off = off3;
+
ret = prog_load(insns, insn_cnt);
close(map_fd);
return ret;
}
-static void reject_offsets(struct bpf_insn *insns, __u32 insn_cnt, __u32 off1, __u32 off2)
+static void
+allow_offsets(struct bpf_insn *insns, __u32 insn_cnt, int off1, int off2, int off3)
+{
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+ int prog_fd, err;
+ char s[128] = "";
+
+ prog_fd = __check_ldimm64_gotox_prog_load(insns, insn_cnt, off1, off2, off3);
+ snprintf(s, sizeof(s), "__check_ldimm64_gotox_prog_load(%d,%d,%d)", off1, off2, off3);
+ if (!ASSERT_GE(prog_fd, 0, s))
+ return;
+
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ if (!ASSERT_OK(err, "test_run_opts err")) {
+ close(prog_fd);
+ return;
+ }
+
+ if (!ASSERT_EQ(topts.retval, (off1 + off2 + off3) / 8, "test_run_opts retval")) {
+ close(prog_fd);
+ return;
+ }
+
+ close(prog_fd);
+}
+
+static void
+reject_offsets(struct bpf_insn *insns, __u32 insn_cnt, int off1, int off2, int off3)
{
int prog_fd;
- prog_fd = __check_ldimm64_gotox_prog_load(insns, insn_cnt, off1, off2);
+ prog_fd = __check_ldimm64_gotox_prog_load(insns, insn_cnt, off1, off2, off3);
if (!ASSERT_EQ(prog_fd, -EACCES, "__check_ldimm64_gotox_prog_load"))
close(prog_fd);
}
@@ -376,7 +406,7 @@ static void check_ldimm64_off_gotox(struct bpf_gotox *skel __always_unused)
* The program rewrites the offsets in the instructions below:
* r1 = &map + offset1
* r1 += offset2
- * r1 = *r1
+ * r1 = *(r1 + offset3)
* gotox r1
*/
BPF_LD_IMM64_RAW(BPF_REG_1, BPF_PSEUDO_MAP_VALUE, 0),
@@ -403,43 +433,55 @@ static void check_ldimm64_off_gotox(struct bpf_gotox *skel __always_unused)
BPF_MOV64_IMM(BPF_REG_0, 5),
BPF_EXIT_INSN(),
};
- int prog_fd, err;
- __u32 off1, off2;
-
- /* allow all combinations off1 + off2 < 6 */
- for (off1 = 0; off1 < 6; off1++) {
- for (off2 = 0; off1 + off2 < 6; off2++) {
- LIBBPF_OPTS(bpf_test_run_opts, topts);
-
- prog_fd = __check_ldimm64_gotox_prog_load(insns, ARRAY_SIZE(insns),
- off1 * 8, off2 * 8);
- if (!ASSERT_GE(prog_fd, 0, "__check_ldimm64_gotox_prog_load"))
- return;
-
- err = bpf_prog_test_run_opts(prog_fd, &topts);
- if (!ASSERT_OK(err, "test_run_opts err")) {
- close(prog_fd);
- return;
- }
-
- if (!ASSERT_EQ(topts.retval, off1 + off2, "test_run_opts retval")) {
- close(prog_fd);
- return;
- }
-
- close(prog_fd);
- }
- }
+ int off1, off2, off3;
+
+ /* allow all combinations off1 + off2 + off3 < 6 */
+ for (off1 = 0; off1 < 6; off1++)
+ for (off2 = 0; off1 + off2 < 6; off2++)
+ for (off3 = 0; off1 + off2 + off3 < 6; off3++)
+ allow_offsets(insns, ARRAY_SIZE(insns),
+ off1 * 8, off2 * 8, off3 * 8);
+
+ /* allow for some offsets to be negative */
+ allow_offsets(insns, ARRAY_SIZE(insns), 8 * 3, 0, -(8 * 3));
+ allow_offsets(insns, ARRAY_SIZE(insns), 8 * 3, -(8 * 3), 0);
+ allow_offsets(insns, ARRAY_SIZE(insns), 0, 8 * 3, -(8 * 3));
+ allow_offsets(insns, ARRAY_SIZE(insns), 8 * 4, 0, -(8 * 2));
+ allow_offsets(insns, ARRAY_SIZE(insns), 8 * 4, -(8 * 2), 0);
+ allow_offsets(insns, ARRAY_SIZE(insns), 0, 8 * 4, -(8 * 2));
+
+ /* disallow negative sums of offsets */
+ reject_offsets(insns, ARRAY_SIZE(insns), 8 * 3, 0, -(8 * 4));
+ reject_offsets(insns, ARRAY_SIZE(insns), 8 * 3, -(8 * 4), 0);
+ reject_offsets(insns, ARRAY_SIZE(insns), 0, 8 * 3, -(8 * 4));
+
+ /* disallow negative off1 in any case */
+ reject_offsets(insns, ARRAY_SIZE(insns), -8 * 1, 0, 0);
+ reject_offsets(insns, ARRAY_SIZE(insns), -8 * 1, 8 * 1, 0);
+ reject_offsets(insns, ARRAY_SIZE(insns), -8 * 1, 8 * 1, 8 * 1);
+
+ /* reject off1 + off2 + off3 >= 6 */
+ reject_offsets(insns, ARRAY_SIZE(insns), 8 * 3, 8 * 3, 8 * 0);
+ reject_offsets(insns, ARRAY_SIZE(insns), 8 * 7, 8 * 0, 8 * 0);
+ reject_offsets(insns, ARRAY_SIZE(insns), 8 * 0, 8 * 7, 8 * 0);
+ reject_offsets(insns, ARRAY_SIZE(insns), 8 * 3, 8 * 0, 8 * 3);
+ reject_offsets(insns, ARRAY_SIZE(insns), 8 * 0, 8 * 3, 8 * 3);
+
+ /* reject (off1 + off2) % 8 != 0, off3 % 8 != 0 */
+ reject_offsets(insns, ARRAY_SIZE(insns), 3, 3, 0);
+ reject_offsets(insns, ARRAY_SIZE(insns), 7, 0, 0);
+ reject_offsets(insns, ARRAY_SIZE(insns), 0, 7, 0);
+ reject_offsets(insns, ARRAY_SIZE(insns), 0, 0, 7);
+}
- /* reject off1 + off2 >= 6 */
- reject_offsets(insns, ARRAY_SIZE(insns), 8 * 3, 8 * 3);
- reject_offsets(insns, ARRAY_SIZE(insns), 8 * 7, 8 * 0);
- reject_offsets(insns, ARRAY_SIZE(insns), 8 * 0, 8 * 7);
+static void check_ldimm64_off_gotox_llvm(struct bpf_gotox *skel)
+{
+ __u64 in[] = {0, 1, 2, 3, 4};
+ __u64 out[] = {1, 1, 5, 1, 1};
+ int i;
- /* reject (off1 + off2) % 8 != 0 */
- reject_offsets(insns, ARRAY_SIZE(insns), 3, 3);
- reject_offsets(insns, ARRAY_SIZE(insns), 7, 0);
- reject_offsets(insns, ARRAY_SIZE(insns), 0, 7);
+ for (i = 0; i < ARRAY_SIZE(in); i++)
+ check_simple(skel, skel->progs.load_with_nonzero_offset, in[i], out[i]);
}
void test_bpf_gotox(void)
@@ -496,5 +538,8 @@ void test_bpf_gotox(void)
if (test__start_subtest("check-ldimm64-off-gotox"))
__subtest(skel, check_ldimm64_off_gotox);
+ if (test__start_subtest("check-ldimm64-off-gotox-llvm"))
+ __subtest(skel, check_ldimm64_off_gotox_llvm);
+
bpf_gotox__destroy(skel);
}
diff --git a/tools/testing/selftests/bpf/progs/bpf_gotox.c b/tools/testing/selftests/bpf/progs/bpf_gotox.c
index 216c71b94c64..99b3c9c9a01c 100644
--- a/tools/testing/selftests/bpf/progs/bpf_gotox.c
+++ b/tools/testing/selftests/bpf/progs/bpf_gotox.c
@@ -421,6 +421,36 @@ int use_nonstatic_global_other_sec(void *ctx)
return __nonstatic_global(in_user);
}
+SEC("syscall")
+int load_with_nonzero_offset(struct simple_ctx *ctx)
+{
+ void *jj[] = { &&l1, &&l2, &&l3 };
+
+ /*
+ * This makes LLVM to generate a load from the jj map with an offset:
+ * r1 = 0x0 ll
+ * r1 = *(u64 *)(r1 + 0x10)
+ * gotox r1
+ */
+ if (ctx->x == 2)
+ goto *jj[ctx->x];
+
+ ret_user = 1;
+ return 1;
+
+l1:
+ /* never reached, but leave it here to outsmart LLVM */
+ ret_user = 0;
+ return 0;
+l2:
+ /* never reached, but leave it here to outsmart LLVM */
+ ret_user = 3;
+ return 3;
+l3:
+ ret_user = 5;
+ return 5;
+}
+
#else /* __BPF_FEATURE_GOTOX */
#define SKIP_TEST(TEST_NAME) \
@@ -442,6 +472,7 @@ SKIP_TEST(use_static_global_other_sec);
SKIP_TEST(use_nonstatic_global1);
SKIP_TEST(use_nonstatic_global2);
SKIP_TEST(use_nonstatic_global_other_sec);
+SKIP_TEST(load_with_nonzero_offset);
#endif /* __BPF_FEATURE_GOTOX */
--
2.34.1
* Re: [PATCH v3 bpf-next 0/2] Properly load values from insn_arrays with non-zero offsets
2026-04-06 16:01 [PATCH v3 bpf-next 0/2] Properly load values from insn_arrays with non-zero offsets Anton Protopopov
2026-04-06 16:01 ` [PATCH v3 bpf-next 1/2] bpf: Do not ignore offsets for loads from insn_arrays Anton Protopopov
2026-04-06 16:01 ` [PATCH v3 bpf-next 2/2] selftests/bpf: Add more tests for loading insn arrays with offsets Anton Protopopov
@ 2026-04-07 1:40 ` patchwork-bot+netdevbpf
2 siblings, 0 replies; 4+ messages in thread
From: patchwork-bot+netdevbpf @ 2026-04-07 1:40 UTC (permalink / raw)
To: Anton Protopopov
Cc: bpf, ast, daniel, andrii, eddyz87, memxor, ksur673,
mykyta.yatsenko5
Hello:
This series was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:
On Mon, 6 Apr 2026 16:01:39 +0000 you wrote:
> The PTR_TO_INSN is always loaded via BPF_LDX_MEM instruction.
> However, the verifier doesn't properly verify such loads when the
> offset is not zero. Fix this and extend selftests with more scenarios.
>
> v2 -> v3:
> * Add a C-level selftest which triggers a load with nonzero offset (Alexei)
> * Rephrase commit messages a bit
>
> [...]
Here is the summary with links:
- [v3,bpf-next,1/2] bpf: Do not ignore offsets for loads from insn_arrays
https://git.kernel.org/bpf/bpf-next/c/43cd9d9520e6
- [v3,bpf-next,2/2] selftests/bpf: Add more tests for loading insn arrays with offsets
https://git.kernel.org/bpf/bpf-next/c/1c2e217ad349
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html