* [PATCH bpf-next 0/2] Properly load values from insn_arrays with non-zero offsets
@ 2026-04-01 16:15 Anton Protopopov
2026-04-01 16:15 ` [PATCH bpf-next 1/2] bpf: Do not ignore offsets for loads from insn_arrays Anton Protopopov
2026-04-01 16:15 ` [PATCH bpf-next 2/2] selftests/bpf: Add more tests for loading insn arrays with offsets Anton Protopopov
0 siblings, 2 replies; 10+ messages in thread
From: Anton Protopopov @ 2026-04-01 16:15 UTC (permalink / raw)
To: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Eduard Zingerman, Kumar Kartikeya Dwivedi, Jiyong Yang
Cc: Anton Protopopov
A PTR_TO_INSN value is always loaded via a BPF_LDX_MEM instruction.
However, the verifier doesn't properly verify such loads when the
offset is non-zero. Fix this and extend selftests with more scenarios.
Anton Protopopov (2):
bpf: Do not ignore offsets for loads from insn_arrays
selftests/bpf: Add more tests for loading insn arrays with offsets
kernel/bpf/verifier.c | 17 +++
.../selftests/bpf/prog_tests/bpf_gotox.c | 111 +++++++++++-------
2 files changed, 87 insertions(+), 41 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH bpf-next 1/2] bpf: Do not ignore offsets for loads from insn_arrays
2026-04-01 16:15 [PATCH bpf-next 0/2] Properly load values from insn_arrays with non-zero offsets Anton Protopopov
@ 2026-04-01 16:15 ` Anton Protopopov
2026-04-01 22:47 ` Mykyta Yatsenko
2026-04-01 16:15 ` [PATCH bpf-next 2/2] selftests/bpf: Add more tests for loading insn arrays with offsets Anton Protopopov
1 sibling, 1 reply; 10+ messages in thread
From: Anton Protopopov @ 2026-04-01 16:15 UTC (permalink / raw)
To: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Eduard Zingerman, Kumar Kartikeya Dwivedi, Jiyong Yang
Cc: Anton Protopopov
When a pointer of type PTR_TO_INSN is dereferenced, it is possible to
specify a non-zero offset inside the load instruction. This is a bug,
because while the verifier ignores this field, JITs do not.
So, patch the verifier to stop ignoring this field.
Reported-by: Jiyong Yang <ksur673@gmail.com>
Fixes: 493d9e0d6083 ("bpf, x86: add support for indirect jumps")
Signed-off-by: Anton Protopopov <a.s.protopopov@gmail.com>
---
kernel/bpf/verifier.c | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 8c1cf2eb6cbb..f1b1c8e9dc26 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -212,6 +212,8 @@ static int ref_set_non_owning(struct bpf_verifier_env *env,
static bool is_trusted_reg(const struct bpf_reg_state *reg);
static inline bool in_sleepable_context(struct bpf_verifier_env *env);
static const char *non_sleepable_context_description(struct bpf_verifier_env *env);
+static void scalar32_min_max_add(struct bpf_reg_state *dst_reg, struct bpf_reg_state *src_reg);
+static void scalar_min_max_add(struct bpf_reg_state *dst_reg, struct bpf_reg_state *src_reg);
static bool bpf_map_ptr_poisoned(const struct bpf_insn_aux_data *aux)
{
@@ -7735,6 +7737,20 @@ static bool get_func_retval_range(struct bpf_prog *prog,
return false;
}
+static inline void add_scalar_to_reg(struct bpf_reg_state *dst_reg, s64 val)
+{
+ struct bpf_reg_state fake_reg;
+
+ fake_reg.type = SCALAR_VALUE;
+ __mark_reg_known(&fake_reg, val);
+
+ scalar32_min_max_add(dst_reg, &fake_reg);
+ scalar_min_max_add(dst_reg, &fake_reg);
+ dst_reg->var_off = tnum_add(dst_reg->var_off, fake_reg.var_off);
+
+ reg_bounds_sync(dst_reg);
+}
+
/* check whether memory at (regno + off) is accessible for t = (read | write)
* if t==write, value_regno is a register which value is stored into memory
* if t==read, value_regno is a register which will receive the value from memory
@@ -7816,6 +7832,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
return -EACCES;
}
copy_register_state(&regs[value_regno], reg);
+ add_scalar_to_reg(&regs[value_regno], off);
regs[value_regno].type = PTR_TO_INSN;
} else {
mark_reg_unknown(env, regs, value_regno);
--
2.34.1
* [PATCH bpf-next 2/2] selftests/bpf: Add more tests for loading insn arrays with offsets
2026-04-01 16:15 [PATCH bpf-next 0/2] Properly load values from insn_arrays with non-zero offsets Anton Protopopov
2026-04-01 16:15 ` [PATCH bpf-next 1/2] bpf: Do not ignore offsets for loads from insn_arrays Anton Protopopov
@ 2026-04-01 16:15 ` Anton Protopopov
2026-04-01 22:38 ` Mykyta Yatsenko
1 sibling, 1 reply; 10+ messages in thread
From: Anton Protopopov @ 2026-04-01 16:15 UTC (permalink / raw)
To: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Eduard Zingerman, Kumar Kartikeya Dwivedi, Jiyong Yang
Cc: Anton Protopopov
A typical series of load instructions for a gotox looks like
r1 = &map + offset1
r1 += offset2
r1 = *(r1 + offset3)
gotox r1
Here offset3 is normally equal to zero, but this is not guaranteed.
Extend selftests with tests for a non-zero offset3 and, while here, also
add tests for negative offsets (offset1 and the sum of the three offsets
still have to be non-negative).
Signed-off-by: Anton Protopopov <a.s.protopopov@gmail.com>
---
.../selftests/bpf/prog_tests/bpf_gotox.c | 114 +++++++++++-------
1 file changed, 73 insertions(+), 41 deletions(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_gotox.c b/tools/testing/selftests/bpf/prog_tests/bpf_gotox.c
index 75b0cf2467ab..594adf698fdb 100644
--- a/tools/testing/selftests/bpf/prog_tests/bpf_gotox.c
+++ b/tools/testing/selftests/bpf/prog_tests/bpf_gotox.c
@@ -317,7 +317,7 @@ static void check_ldimm64_off_load(struct bpf_gotox *skel __always_unused)
static int __check_ldimm64_gotox_prog_load(struct bpf_insn *insns,
__u32 insn_cnt,
- __u32 off1, __u32 off2)
+ int off1, int off2, int off3)
{
const __u32 values[] = {5, 7, 9, 11, 13, 15};
const __u32 max_entries = ARRAY_SIZE(values);
@@ -349,16 +349,46 @@ static int __check_ldimm64_gotox_prog_load(struct bpf_insn *insns,
/* r1 += off2 */
insns[2].imm = off2;
+ /* r1 = *(r1 + off3) */
+ insns[3].off = off3;
+
ret = prog_load(insns, insn_cnt);
close(map_fd);
return ret;
}
-static void reject_offsets(struct bpf_insn *insns, __u32 insn_cnt, __u32 off1, __u32 off2)
+static void
+allow_offsets(struct bpf_insn *insns, __u32 insn_cnt, int off1, int off2, int off3)
+{
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+ int prog_fd, err;
+ char s[128] = "";
+
+ prog_fd = __check_ldimm64_gotox_prog_load(insns, insn_cnt, off1, off2, off3);
+ snprintf(s, sizeof(s), "__check_ldimm64_gotox_prog_load(%u,%u,%u)", off1, off2, off3);
+ if (!ASSERT_GE(prog_fd, 0, s))
+ return;
+
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ if (!ASSERT_OK(err, "test_run_opts err")) {
+ close(prog_fd);
+ return;
+ }
+
+ if (!ASSERT_EQ(topts.retval, (off1 + off2 + off3) / 8, "test_run_opts retval")) {
+ close(prog_fd);
+ return;
+ }
+
+ close(prog_fd);
+}
+
+static void
+reject_offsets(struct bpf_insn *insns, __u32 insn_cnt, __u32 off1, __u32 off2, __u32 off3)
{
int prog_fd;
- prog_fd = __check_ldimm64_gotox_prog_load(insns, insn_cnt, off1, off2);
+ prog_fd = __check_ldimm64_gotox_prog_load(insns, insn_cnt, off1, off2, off3);
if (!ASSERT_EQ(prog_fd, -EACCES, "__check_ldimm64_gotox_prog_load"))
close(prog_fd);
}
@@ -376,7 +406,7 @@ static void check_ldimm64_off_gotox(struct bpf_gotox *skel __always_unused)
* The program rewrites the offsets in the instructions below:
* r1 = &map + offset1
* r1 += offset2
- * r1 = *r1
+ * r1 = *(r1 + offset3)
* gotox r1
*/
BPF_LD_IMM64_RAW(BPF_REG_1, BPF_PSEUDO_MAP_VALUE, 0),
@@ -403,43 +433,45 @@ static void check_ldimm64_off_gotox(struct bpf_gotox *skel __always_unused)
BPF_MOV64_IMM(BPF_REG_0, 5),
BPF_EXIT_INSN(),
};
- int prog_fd, err;
- __u32 off1, off2;
-
- /* allow all combinations off1 + off2 < 6 */
- for (off1 = 0; off1 < 6; off1++) {
- for (off2 = 0; off1 + off2 < 6; off2++) {
- LIBBPF_OPTS(bpf_test_run_opts, topts);
-
- prog_fd = __check_ldimm64_gotox_prog_load(insns, ARRAY_SIZE(insns),
- off1 * 8, off2 * 8);
- if (!ASSERT_GE(prog_fd, 0, "__check_ldimm64_gotox_prog_load"))
- return;
-
- err = bpf_prog_test_run_opts(prog_fd, &topts);
- if (!ASSERT_OK(err, "test_run_opts err")) {
- close(prog_fd);
- return;
- }
-
- if (!ASSERT_EQ(topts.retval, off1 + off2, "test_run_opts retval")) {
- close(prog_fd);
- return;
- }
-
- close(prog_fd);
- }
- }
-
- /* reject off1 + off2 >= 6 */
- reject_offsets(insns, ARRAY_SIZE(insns), 8 * 3, 8 * 3);
- reject_offsets(insns, ARRAY_SIZE(insns), 8 * 7, 8 * 0);
- reject_offsets(insns, ARRAY_SIZE(insns), 8 * 0, 8 * 7);
-
- /* reject (off1 + off2) % 8 != 0 */
- reject_offsets(insns, ARRAY_SIZE(insns), 3, 3);
- reject_offsets(insns, ARRAY_SIZE(insns), 7, 0);
- reject_offsets(insns, ARRAY_SIZE(insns), 0, 7);
+ int off1, off2, off3;
+
+ /* allow all combinations off1 + off2 + off3 < 6 */
+ for (off1 = 0; off1 < 6; off1++)
+ for (off2 = 0; off1 + off2 < 6; off2++)
+ for (off3 = 0; off1 + off2 + off3 < 6; off3++)
+ allow_offsets(insns, ARRAY_SIZE(insns),
+ off1 * 8, off2 * 8, off3 * 8);
+
+ /* allow for some offsets to be negative */
+ allow_offsets(insns, ARRAY_SIZE(insns), 8 * 3, 0, -(8 * 3));
+ allow_offsets(insns, ARRAY_SIZE(insns), 8 * 3, -(8 * 3), 0);
+ allow_offsets(insns, ARRAY_SIZE(insns), 0, 8 * 3, -(8 * 3));
+ allow_offsets(insns, ARRAY_SIZE(insns), 8 * 4, 0, -(8 * 2));
+ allow_offsets(insns, ARRAY_SIZE(insns), 8 * 4, -(8 * 2), 0);
+ allow_offsets(insns, ARRAY_SIZE(insns), 0, 8 * 4, -(8 * 2));
+
+ /* disallow negative sums of offsets */
+ reject_offsets(insns, ARRAY_SIZE(insns), 8 * 3, 0, -(8 * 4));
+ reject_offsets(insns, ARRAY_SIZE(insns), 8 * 3, -(8 * 4), 0);
+ reject_offsets(insns, ARRAY_SIZE(insns), 0, 8 * 3, -(8 * 4));
+
+ /* disallow the off1 to be negative in any case */
+ reject_offsets(insns, ARRAY_SIZE(insns), -8 * 1, 0, 0);
+ reject_offsets(insns, ARRAY_SIZE(insns), -8 * 1, 8 * 1, 0);
+ reject_offsets(insns, ARRAY_SIZE(insns), -8 * 1, 8 * 1, 8 * 1);
+
+ /* reject off1 + off2 + off3 >= 6 */
+ reject_offsets(insns, ARRAY_SIZE(insns), 8 * 3, 8 * 3, 8 * 0);
+ reject_offsets(insns, ARRAY_SIZE(insns), 8 * 7, 8 * 0, 8 * 0);
+ reject_offsets(insns, ARRAY_SIZE(insns), 8 * 0, 8 * 7, 8 * 0);
+ reject_offsets(insns, ARRAY_SIZE(insns), 8 * 3, 8 * 0, 8 * 3);
+ reject_offsets(insns, ARRAY_SIZE(insns), 8 * 0, 8 * 3, 8 * 3);
+
+ /* reject (off1 + off2) % 8 != 0, off3 % 8 != 0 */
+ reject_offsets(insns, ARRAY_SIZE(insns), 3, 3, 0);
+ reject_offsets(insns, ARRAY_SIZE(insns), 7, 0, 0);
+ reject_offsets(insns, ARRAY_SIZE(insns), 0, 7, 0);
+ reject_offsets(insns, ARRAY_SIZE(insns), 0, 0, 7);
}
void test_bpf_gotox(void)
--
2.34.1
* Re: [PATCH bpf-next 2/2] selftests/bpf: Add more tests for loading insn arrays with offsets
2026-04-01 16:15 ` [PATCH bpf-next 2/2] selftests/bpf: Add more tests for loading insn arrays with offsets Anton Protopopov
@ 2026-04-01 22:38 ` Mykyta Yatsenko
2026-04-02 8:28 ` Anton Protopopov
0 siblings, 1 reply; 10+ messages in thread
From: Mykyta Yatsenko @ 2026-04-01 22:38 UTC (permalink / raw)
To: Anton Protopopov, bpf, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Eduard Zingerman, Kumar Kartikeya Dwivedi,
Jiyong Yang
On 4/1/26 5:15 PM, Anton Protopopov wrote:
> A typical series of load instructions for a gotox looks like
>
> r1 = &map + offset1
> r1 += offset2
> r1 = *(r1 + offset3)
> gotox r1
>
> Here offset3 is, normally, equal to zero; but this is not guaranteed.
> Extend selftests with tests for non-zero offset3 and, while here, also
> add tests for negative offsets (the offset1 and the sum of three offsets
> still have to be non-negative).
>
> Signed-off-by: Anton Protopopov <a.s.protopopov@gmail.com>
> ---
> .../selftests/bpf/prog_tests/bpf_gotox.c | 114 +++++++++++-------
> 1 file changed, 73 insertions(+), 41 deletions(-)
>
> diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_gotox.c b/tools/testing/selftests/bpf/prog_tests/bpf_gotox.c
> index 75b0cf2467ab..594adf698fdb 100644
> --- a/tools/testing/selftests/bpf/prog_tests/bpf_gotox.c
> +++ b/tools/testing/selftests/bpf/prog_tests/bpf_gotox.c
> @@ -317,7 +317,7 @@ static void check_ldimm64_off_load(struct bpf_gotox *skel __always_unused)
>
> static int __check_ldimm64_gotox_prog_load(struct bpf_insn *insns,
> __u32 insn_cnt,
> - __u32 off1, __u32 off2)
> + int off1, int off2, int off3)
> {
> const __u32 values[] = {5, 7, 9, 11, 13, 15};
> const __u32 max_entries = ARRAY_SIZE(values);
> @@ -349,16 +349,46 @@ static int __check_ldimm64_gotox_prog_load(struct bpf_insn *insns,
> /* r1 += off2 */
> insns[2].imm = off2;
>
> + /* r1 = *(r1 + off3) */
> + insns[3].off = off3;
> +
> ret = prog_load(insns, insn_cnt);
> close(map_fd);
> return ret;
> }
>
> -static void reject_offsets(struct bpf_insn *insns, __u32 insn_cnt, __u32 off1, __u32 off2)
> +static void
> +allow_offsets(struct bpf_insn *insns, __u32 insn_cnt, int off1, int off2, int off3)
> +{
> + LIBBPF_OPTS(bpf_test_run_opts, topts);
> + int prog_fd, err;
> + char s[128] = "";
> +
> + prog_fd = __check_ldimm64_gotox_prog_load(insns, insn_cnt, off1, off2, off3);
> + snprintf(s, sizeof(s), "__check_ldimm64_gotox_prog_load(%u,%u,%u)", off1, off2, off3);
Here the offsets are int, but printed with %u.
> + if (!ASSERT_GE(prog_fd, 0, s))
> + return;
> +
> + err = bpf_prog_test_run_opts(prog_fd, &topts);
> + if (!ASSERT_OK(err, "test_run_opts err")) {
> + close(prog_fd);
> + return;
> + }
> +
> + if (!ASSERT_EQ(topts.retval, (off1 + off2 + off3) / 8, "test_run_opts retval")) {
> + close(prog_fd);
> + return;
> + }
> +
> + close(prog_fd);
> +}
> +
> +static void
> +reject_offsets(struct bpf_insn *insns, __u32 insn_cnt, __u32 off1, __u32 off2, __u32 off3)
If offsets can be negative, should these arguments be int rather than
__u32, also matching allow_offsets?
> {
> int prog_fd;
>
> - prog_fd = __check_ldimm64_gotox_prog_load(insns, insn_cnt, off1, off2);
> + prog_fd = __check_ldimm64_gotox_prog_load(insns, insn_cnt, off1, off2, off3);
> if (!ASSERT_EQ(prog_fd, -EACCES, "__check_ldimm64_gotox_prog_load"))
> close(prog_fd);
> }
> @@ -376,7 +406,7 @@ static void check_ldimm64_off_gotox(struct bpf_gotox *skel __always_unused)
> * The program rewrites the offsets in the instructions below:
> * r1 = &map + offset1
> * r1 += offset2
> - * r1 = *r1
> + * r1 = *(r1 + offset3)
> * gotox r1
> */
> BPF_LD_IMM64_RAW(BPF_REG_1, BPF_PSEUDO_MAP_VALUE, 0),
> @@ -403,43 +433,45 @@ static void check_ldimm64_off_gotox(struct bpf_gotox *skel __always_unused)
> BPF_MOV64_IMM(BPF_REG_0, 5),
> BPF_EXIT_INSN(),
> };
> - int prog_fd, err;
> - __u32 off1, off2;
> -
> - /* allow all combinations off1 + off2 < 6 */
> - for (off1 = 0; off1 < 6; off1++) {
> - for (off2 = 0; off1 + off2 < 6; off2++) {
> - LIBBPF_OPTS(bpf_test_run_opts, topts);
> -
> - prog_fd = __check_ldimm64_gotox_prog_load(insns, ARRAY_SIZE(insns),
> - off1 * 8, off2 * 8);
> - if (!ASSERT_GE(prog_fd, 0, "__check_ldimm64_gotox_prog_load"))
> - return;
> -
> - err = bpf_prog_test_run_opts(prog_fd, &topts);
> - if (!ASSERT_OK(err, "test_run_opts err")) {
> - close(prog_fd);
> - return;
> - }
> -
> - if (!ASSERT_EQ(topts.retval, off1 + off2, "test_run_opts retval")) {
> - close(prog_fd);
> - return;
> - }
> -
> - close(prog_fd);
> - }
> - }
> -
> - /* reject off1 + off2 >= 6 */
> - reject_offsets(insns, ARRAY_SIZE(insns), 8 * 3, 8 * 3);
> - reject_offsets(insns, ARRAY_SIZE(insns), 8 * 7, 8 * 0);
> - reject_offsets(insns, ARRAY_SIZE(insns), 8 * 0, 8 * 7);
> -
> - /* reject (off1 + off2) % 8 != 0 */
> - reject_offsets(insns, ARRAY_SIZE(insns), 3, 3);
> - reject_offsets(insns, ARRAY_SIZE(insns), 7, 0);
> - reject_offsets(insns, ARRAY_SIZE(insns), 0, 7);
> + int off1, off2, off3;
> +
> + /* allow all combinations off1 + off2 + off3 < 6 */
> + for (off1 = 0; off1 < 6; off1++)
> + for (off2 = 0; off1 + off2 < 6; off2++)
> + for (off3 = 0; off1 + off2 + off3 < 6; off3++)
> + allow_offsets(insns, ARRAY_SIZE(insns),
> + off1 * 8, off2 * 8, off3 * 8);
> +
> + /* allow for some offsets to be negative */
> + allow_offsets(insns, ARRAY_SIZE(insns), 8 * 3, 0, -(8 * 3));
> + allow_offsets(insns, ARRAY_SIZE(insns), 8 * 3, -(8 * 3), 0);
> + allow_offsets(insns, ARRAY_SIZE(insns), 0, 8 * 3, -(8 * 3));
> + allow_offsets(insns, ARRAY_SIZE(insns), 8 * 4, 0, -(8 * 2));
> + allow_offsets(insns, ARRAY_SIZE(insns), 8 * 4, -(8 * 2), 0);
> + allow_offsets(insns, ARRAY_SIZE(insns), 0, 8 * 4, -(8 * 2));
> +
> + /* disallow negative sums of offsets */
> + reject_offsets(insns, ARRAY_SIZE(insns), 8 * 3, 0, -(8 * 4));
> + reject_offsets(insns, ARRAY_SIZE(insns), 8 * 3, -(8 * 4), 0);
> + reject_offsets(insns, ARRAY_SIZE(insns), 0, 8 * 3, -(8 * 4));
> +
> + /* disallow the off1 to be negative in any case */
> + reject_offsets(insns, ARRAY_SIZE(insns), -8 * 1, 0, 0);
> + reject_offsets(insns, ARRAY_SIZE(insns), -8 * 1, 8 * 1, 0);
> + reject_offsets(insns, ARRAY_SIZE(insns), -8 * 1, 8 * 1, 8 * 1);
> +
> + /* reject off1 + off2 + off3 >= 6 */
> + reject_offsets(insns, ARRAY_SIZE(insns), 8 * 3, 8 * 3, 8 * 0);
> + reject_offsets(insns, ARRAY_SIZE(insns), 8 * 7, 8 * 0, 8 * 0);
> + reject_offsets(insns, ARRAY_SIZE(insns), 8 * 0, 8 * 7, 8 * 0);
> + reject_offsets(insns, ARRAY_SIZE(insns), 8 * 3, 8 * 0, 8 * 3);
> + reject_offsets(insns, ARRAY_SIZE(insns), 8 * 0, 8 * 3, 8 * 3);
> +
> + /* reject (off1 + off2) % 8 != 0, off3 % 8 != 0 */
> + reject_offsets(insns, ARRAY_SIZE(insns), 3, 3, 0);
> + reject_offsets(insns, ARRAY_SIZE(insns), 7, 0, 0);
> + reject_offsets(insns, ARRAY_SIZE(insns), 0, 7, 0);
> + reject_offsets(insns, ARRAY_SIZE(insns), 0, 0, 7);
> }
>
> void test_bpf_gotox(void)
* Re: [PATCH bpf-next 1/2] bpf: Do not ignore offsets for loads from insn_arrays
2026-04-01 16:15 ` [PATCH bpf-next 1/2] bpf: Do not ignore offsets for loads from insn_arrays Anton Protopopov
@ 2026-04-01 22:47 ` Mykyta Yatsenko
[not found] ` <CAGzPb2Ed+Z513yWDUE91H_OP2eF_fHucy_xV3-cpYOkmw73xmg@mail.gmail.com>
` (2 more replies)
0 siblings, 3 replies; 10+ messages in thread
From: Mykyta Yatsenko @ 2026-04-01 22:47 UTC (permalink / raw)
To: Anton Protopopov, bpf, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Eduard Zingerman, Kumar Kartikeya Dwivedi,
Jiyong Yang
On 4/1/26 5:15 PM, Anton Protopopov wrote:
> When a pointer of type PTR_TO_INSN is dereferenced, it is possible to
> specify a non-zero offset inside the load instruction. This is a bug,
> because while the verifier ignores this field, JITs do not.
> So, patch the verifier to stop ignoring this field.
>
> Reported-by: Jiyong Yang <ksur673@gmail.com>
> Fixes: 493d9e0d6083 ("bpf, x86: add support for indirect jumps")
> Signed-off-by: Anton Protopopov <a.s.protopopov@gmail.com>
> ---
> kernel/bpf/verifier.c | 17 +++++++++++++++++
> 1 file changed, 17 insertions(+)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 8c1cf2eb6cbb..f1b1c8e9dc26 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -212,6 +212,8 @@ static int ref_set_non_owning(struct bpf_verifier_env *env,
> static bool is_trusted_reg(const struct bpf_reg_state *reg);
> static inline bool in_sleepable_context(struct bpf_verifier_env *env);
> static const char *non_sleepable_context_description(struct bpf_verifier_env *env);
> +static void scalar32_min_max_add(struct bpf_reg_state *dst_reg, struct bpf_reg_state *src_reg);
> +static void scalar_min_max_add(struct bpf_reg_state *dst_reg, struct bpf_reg_state *src_reg);
>
> static bool bpf_map_ptr_poisoned(const struct bpf_insn_aux_data *aux)
> {
> @@ -7735,6 +7737,20 @@ static bool get_func_retval_range(struct bpf_prog *prog,
> return false;
> }
>
> +static inline void add_scalar_to_reg(struct bpf_reg_state *dst_reg, s64 val)
Why does it need to be manually inlined?
Other than that the change looks good; the implementation of
add_scalar_to_reg() is similar to a piece of code in sync_linked_regs().
> +{
> + struct bpf_reg_state fake_reg;
> +
> + fake_reg.type = SCALAR_VALUE;
> + __mark_reg_known(&fake_reg, val);
> +
> + scalar32_min_max_add(dst_reg, &fake_reg);
> + scalar_min_max_add(dst_reg, &fake_reg);
> + dst_reg->var_off = tnum_add(dst_reg->var_off, fake_reg.var_off);
> +
> + reg_bounds_sync(dst_reg);
> +}
> +
> /* check whether memory at (regno + off) is accessible for t = (read | write)
> * if t==write, value_regno is a register which value is stored into memory
> * if t==read, value_regno is a register which will receive the value from memory
> @@ -7816,6 +7832,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
> return -EACCES;
> }
> > copy_register_state(&regs[value_regno], reg);
> > + add_scalar_to_reg(&regs[value_regno], off);
> regs[value_regno].type = PTR_TO_INSN;
> } else {
> mark_reg_unknown(env, regs, value_regno);
* Re: [PATCH bpf-next 1/2] bpf: Do not ignore offsets for loads from insn_arrays
[not found] ` <CAGzPb2Ed+Z513yWDUE91H_OP2eF_fHucy_xV3-cpYOkmw73xmg@mail.gmail.com>
@ 2026-04-02 0:27 ` Alexei Starovoitov
0 siblings, 0 replies; 10+ messages in thread
From: Alexei Starovoitov @ 2026-04-02 0:27 UTC (permalink / raw)
To: 지용
Cc: Mykyta Yatsenko, Anton Protopopov, bpf, Alexei Starovoitov,
Daniel Borkmann, Andrii Nakryiko, Eduard Zingerman,
Kumar Kartikeya Dwivedi
On Wed, Apr 1, 2026 at 5:18 PM 지용 <ksur673@gmail.com> wrote:
>
> Hello.
> Thank you for merging the fix with the credit.
>
> It seems a bit early, but since this issue effectively bypasses the verifier's safety checks and affects multiple stable branches, I'd like to confirm: will the Kernel CNA be assigning a CVE ID for this once it hits the mainline?
Absolutely not. It's not a security issue.
* Re: [PATCH bpf-next 1/2] bpf: Do not ignore offsets for loads from insn_arrays
2026-04-01 22:47 ` Mykyta Yatsenko
[not found] ` <CAGzPb2Ed+Z513yWDUE91H_OP2eF_fHucy_xV3-cpYOkmw73xmg@mail.gmail.com>
@ 2026-04-02 2:37 ` sun jian
2026-04-02 8:37 ` Anton Protopopov
2026-04-02 8:36 ` Anton Protopopov
2 siblings, 1 reply; 10+ messages in thread
From: sun jian @ 2026-04-02 2:37 UTC (permalink / raw)
To: Mykyta Yatsenko
Cc: Anton Protopopov, bpf, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Eduard Zingerman, Kumar Kartikeya Dwivedi,
Jiyong Yang
On Thu, Apr 2, 2026 at 6:47 AM Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
>
>
> On 4/1/26 5:15 PM, Anton Protopopov wrote:
> > When a pointer of type PTR_TO_INSN is dereferenced, it is possible to
> > specify a non-zero offset inside the load instruction. This is a bug,
> > because while the verifier ignores this field, JITs do not.
> > So, patch the verifier to stop ignoring this field.
> >
> > Reported-by: Jiyong Yang <ksur673@gmail.com>
> > Fixes: 493d9e0d6083 ("bpf, x86: add support for indirect jumps")
> > Signed-off-by: Anton Protopopov <a.s.protopopov@gmail.com>
> > ---
> > kernel/bpf/verifier.c | 17 +++++++++++++++++
> > 1 file changed, 17 insertions(+)
> >
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 8c1cf2eb6cbb..f1b1c8e9dc26 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -212,6 +212,8 @@ static int ref_set_non_owning(struct bpf_verifier_env *env,
> > static bool is_trusted_reg(const struct bpf_reg_state *reg);
> > static inline bool in_sleepable_context(struct bpf_verifier_env *env);
> > static const char *non_sleepable_context_description(struct bpf_verifier_env *env);
> > +static void scalar32_min_max_add(struct bpf_reg_state *dst_reg, struct bpf_reg_state *src_reg);
> > +static void scalar_min_max_add(struct bpf_reg_state *dst_reg, struct bpf_reg_state *src_reg);
> >
> > static bool bpf_map_ptr_poisoned(const struct bpf_insn_aux_data *aux)
> > {
> > @@ -7735,6 +7737,20 @@ static bool get_func_retval_range(struct bpf_prog *prog,
> > return false;
> > }
> >
> > +static inline void add_scalar_to_reg(struct bpf_reg_state *dst_reg, s64 val)
>
> Why does it need to be manually inlined?
> Other than that the change looks good, the implementation of the
> add_scalar_to_reg() is similar to a piece of code in sync_linked_regs().
Agreed.
nit: maybe this helper could be called from sync_linked_regs() too?
>
> > +{
> > + struct bpf_reg_state fake_reg;
> > +
> > + fake_reg.type = SCALAR_VALUE;
> > + __mark_reg_known(&fake_reg, val);
> > +
> > + scalar32_min_max_add(dst_reg, &fake_reg);
> > + scalar_min_max_add(dst_reg, &fake_reg);
> > + dst_reg->var_off = tnum_add(dst_reg->var_off, fake_reg.var_off);
> > +
> > + reg_bounds_sync(dst_reg);
> > +}
> > +
> > /* check whether memory at (regno + off) is accessible for t = (read | write)
> > * if t==write, value_regno is a register which value is stored into memory
> > * if t==read, value_regno is a register which will receive the value from memory
> > @@ -7816,6 +7832,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
> > return -EACCES;
> > }
> > > copy_register_state(&regs[value_regno], reg);
> > > + add_scalar_to_reg(&regs[value_regno], off);
> > regs[value_regno].type = PTR_TO_INSN;
> > } else {
> > mark_reg_unknown(env, regs, value_regno);
>
>
* Re: [PATCH bpf-next 2/2] selftests/bpf: Add more tests for loading insn arrays with offsets
2026-04-01 22:38 ` Mykyta Yatsenko
@ 2026-04-02 8:28 ` Anton Protopopov
0 siblings, 0 replies; 10+ messages in thread
From: Anton Protopopov @ 2026-04-02 8:28 UTC (permalink / raw)
To: Mykyta Yatsenko
Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Eduard Zingerman, Kumar Kartikeya Dwivedi, Jiyong Yang
On 26/04/01 11:38PM, Mykyta Yatsenko wrote:
>
>
> On 4/1/26 5:15 PM, Anton Protopopov wrote:
> > A typical series of load instructions for a gotox looks like
> >
> > r1 = &map + offset1
> > r1 += offset2
> > r1 = *(r1 + offset3)
> > gotox r1
> >
> > Here offset3 is, normally, equal to zero; but this is not guaranteed.
> > Extend selftests with tests for non-zero offset3 and, while here, also
> > add tests for negative offsets (the offset1 and the sum of three offsets
> > still have to be non-negative).
> >
> > Signed-off-by: Anton Protopopov <a.s.protopopov@gmail.com>
> > ---
> > .../selftests/bpf/prog_tests/bpf_gotox.c | 114 +++++++++++-------
> > 1 file changed, 73 insertions(+), 41 deletions(-)
> >
> > diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_gotox.c b/tools/testing/selftests/bpf/prog_tests/bpf_gotox.c
> > index 75b0cf2467ab..594adf698fdb 100644
> > --- a/tools/testing/selftests/bpf/prog_tests/bpf_gotox.c
> > +++ b/tools/testing/selftests/bpf/prog_tests/bpf_gotox.c
> > @@ -317,7 +317,7 @@ static void check_ldimm64_off_load(struct bpf_gotox *skel __always_unused)
> > static int __check_ldimm64_gotox_prog_load(struct bpf_insn *insns,
> > __u32 insn_cnt,
> > - __u32 off1, __u32 off2)
> > + int off1, int off2, int off3)
> > {
> > const __u32 values[] = {5, 7, 9, 11, 13, 15};
> > const __u32 max_entries = ARRAY_SIZE(values);
> > @@ -349,16 +349,46 @@ static int __check_ldimm64_gotox_prog_load(struct bpf_insn *insns,
> > /* r1 += off2 */
> > insns[2].imm = off2;
> > + /* r1 = *(r1 + off3) */
> > + insns[3].off = off3;
> > +
> > ret = prog_load(insns, insn_cnt);
> > close(map_fd);
> > return ret;
> > }
> > -static void reject_offsets(struct bpf_insn *insns, __u32 insn_cnt, __u32 off1, __u32 off2)
> > +static void
> > +allow_offsets(struct bpf_insn *insns, __u32 insn_cnt, int off1, int off2, int off3)
> > +{
> > + LIBBPF_OPTS(bpf_test_run_opts, topts);
> > + int prog_fd, err;
> > + char s[128] = "";
> > +
> > + prog_fd = __check_ldimm64_gotox_prog_load(insns, insn_cnt, off1, off2, off3);
> > + snprintf(s, sizeof(s), "__check_ldimm64_gotox_prog_load(%u,%u,%u)", off1, off2, off3);
>
> here offsets are int, but printed with %u.
Yes, thanks, fixed.
> > + if (!ASSERT_GE(prog_fd, 0, s))
> > + return;
> > +
> > + err = bpf_prog_test_run_opts(prog_fd, &topts);
> > + if (!ASSERT_OK(err, "test_run_opts err")) {
> > + close(prog_fd);
> > + return;
> > + }
> > +
> > + if (!ASSERT_EQ(topts.retval, (off1 + off2 + off3) / 8, "test_run_opts retval")) {
> > + close(prog_fd);
> > + return;
> > + }
> > +
> > + close(prog_fd);
> > +}
> > +
> > +static void
> > +reject_offsets(struct bpf_insn *insns, __u32 insn_cnt, __u32 off1, __u32 off2, __u32 off3)
>
> If offsets can be negative, should these arguments be int rather than __u32,
> also matching allow_offsets?
Fixed.
> [...]
* Re: [PATCH bpf-next 1/2] bpf: Do not ignore offsets for loads from insn_arrays
2026-04-01 22:47 ` Mykyta Yatsenko
[not found] ` <CAGzPb2Ed+Z513yWDUE91H_OP2eF_fHucy_xV3-cpYOkmw73xmg@mail.gmail.com>
2026-04-02 2:37 ` sun jian
@ 2026-04-02 8:36 ` Anton Protopopov
2 siblings, 0 replies; 10+ messages in thread
From: Anton Protopopov @ 2026-04-02 8:36 UTC (permalink / raw)
To: Mykyta Yatsenko
Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Eduard Zingerman, Kumar Kartikeya Dwivedi, Jiyong Yang
On 26/04/01 11:47PM, Mykyta Yatsenko wrote:
>
>
> On 4/1/26 5:15 PM, Anton Protopopov wrote:
> > When a pointer of type PTR_TO_INSN is dereferenced, it is possible to
> > specify a non-zero offset inside the load instruction. This is a bug,
> > because while the verifier ignores this field, JITs do not.
> > So, patch the verifier to stop ignoring this field.
> >
> > Reported-by: Jiyong Yang <ksur673@gmail.com>
> > Fixes: 493d9e0d6083 ("bpf, x86: add support for indirect jumps")
> > Signed-off-by: Anton Protopopov <a.s.protopopov@gmail.com>
> > ---
> > kernel/bpf/verifier.c | 17 +++++++++++++++++
> > 1 file changed, 17 insertions(+)
> >
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 8c1cf2eb6cbb..f1b1c8e9dc26 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -212,6 +212,8 @@ static int ref_set_non_owning(struct bpf_verifier_env *env,
> > static bool is_trusted_reg(const struct bpf_reg_state *reg);
> > static inline bool in_sleepable_context(struct bpf_verifier_env *env);
> > static const char *non_sleepable_context_description(struct bpf_verifier_env *env);
> > +static void scalar32_min_max_add(struct bpf_reg_state *dst_reg, struct bpf_reg_state *src_reg);
> > +static void scalar_min_max_add(struct bpf_reg_state *dst_reg, struct bpf_reg_state *src_reg);
> > static bool bpf_map_ptr_poisoned(const struct bpf_insn_aux_data *aux)
> > {
> > @@ -7735,6 +7737,20 @@ static bool get_func_retval_range(struct bpf_prog *prog,
> > return false;
> > }
> > +static inline void add_scalar_to_reg(struct bpf_reg_state *dst_reg, s64 val)
>
> Why does it need to be manually inlined?
No reason, thanks, fixed.
> Other than that the change looks good, the implementation of the
> add_scalar_to_reg() is similar to a piece of code in sync_linked_regs().
Thanks, I will send v2 today. Also will add an "if (!val) return"
check to skip adding 0.
Just in case, sashiko had some comments on this patch, all of them
false positives (for the selftests it found the same two issues as you).
> > +{
> > + struct bpf_reg_state fake_reg;
> > +
> > + fake_reg.type = SCALAR_VALUE;
> > + __mark_reg_known(&fake_reg, val);
> > +
> > + scalar32_min_max_add(dst_reg, &fake_reg);
> > + scalar_min_max_add(dst_reg, &fake_reg);
> > + dst_reg->var_off = tnum_add(dst_reg->var_off, fake_reg.var_off);
> > +
> > + reg_bounds_sync(dst_reg);
> > +}
> > +
> > /* check whether memory at (regno + off) is accessible for t = (read | write)
> > * if t==write, value_regno is a register which value is stored into memory
> > * if t==read, value_regno is a register which will receive the value from memory
> > @@ -7816,6 +7832,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
> > return -EACCES;
> > }
> > > copy_register_state(&regs[value_regno], reg);
> > > + add_scalar_to_reg(&regs[value_regno], off);
> > regs[value_regno].type = PTR_TO_INSN;
> > } else {
> > mark_reg_unknown(env, regs, value_regno);
>
* Re: [PATCH bpf-next 1/2] bpf: Do not ignore offsets for loads from insn_arrays
2026-04-02 2:37 ` sun jian
@ 2026-04-02 8:37 ` Anton Protopopov
0 siblings, 0 replies; 10+ messages in thread
From: Anton Protopopov @ 2026-04-02 8:37 UTC (permalink / raw)
To: sun jian
Cc: Mykyta Yatsenko, bpf, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Eduard Zingerman, Kumar Kartikeya Dwivedi,
Jiyong Yang
On 26/04/02 10:37AM, sun jian wrote:
> On Thu, Apr 2, 2026 at 6:47 AM Mykyta Yatsenko
> <mykyta.yatsenko5@gmail.com> wrote:
> >
> >
> >
> > On 4/1/26 5:15 PM, Anton Protopopov wrote:
> > > When a pointer to PTR_TO_INSN is dereferenced it is possible to
> > > specify an offset inside the load instruction. This is a bug,
> > > because while the verifier ignores the field, JITs do not.
> > > So, patch the verifier to not ignore this field.
> > >
> > > Reported-by: Jiyong Yang <ksur673@gmail.com>
> > > Fixes: 493d9e0d6083 ("bpf, x86: add support for indirect jumps")
> > > Signed-off-by: Anton Protopopov <a.s.protopopov@gmail.com>
> > > ---
> > > kernel/bpf/verifier.c | 17 +++++++++++++++++
> > > 1 file changed, 17 insertions(+)
> > >
> > > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > > index 8c1cf2eb6cbb..f1b1c8e9dc26 100644
> > > --- a/kernel/bpf/verifier.c
> > > +++ b/kernel/bpf/verifier.c
> > > @@ -212,6 +212,8 @@ static int ref_set_non_owning(struct bpf_verifier_env *env,
> > > static bool is_trusted_reg(const struct bpf_reg_state *reg);
> > > static inline bool in_sleepable_context(struct bpf_verifier_env *env);
> > > static const char *non_sleepable_context_description(struct bpf_verifier_env *env);
> > > +static void scalar32_min_max_add(struct bpf_reg_state *dst_reg, struct bpf_reg_state *src_reg);
> > > +static void scalar_min_max_add(struct bpf_reg_state *dst_reg, struct bpf_reg_state *src_reg);
> > >
> > > static bool bpf_map_ptr_poisoned(const struct bpf_insn_aux_data *aux)
> > > {
> > > @@ -7735,6 +7737,20 @@ static bool get_func_retval_range(struct bpf_prog *prog,
> > > return false;
> > > }
> > >
> > > +static inline void add_scalar_to_reg(struct bpf_reg_state *dst_reg, s64 val)
> >
> > Why does it need to be manually inlined?
> > Other than that the change looks good, the implementation of the
> > add_scalar_to_reg() is similar to a piece of code in sync_linked_regs().
> Agreed.
>
> nit: maybe this helper could be called from sync_linked_regs() too?
Looks like it's out of scope for this particular patch.
> >
> > > +{
> > > + struct bpf_reg_state fake_reg;
> > > +
> > > + fake_reg.type = SCALAR_VALUE;
> > > + __mark_reg_known(&fake_reg, val);
> > > +
> > > + scalar32_min_max_add(dst_reg, &fake_reg);
> > > + scalar_min_max_add(dst_reg, &fake_reg);
> > > + dst_reg->var_off = tnum_add(dst_reg->var_off, fake_reg.var_off);
> > > +
> > > + reg_bounds_sync(dst_reg);
> > > +}
>
> > > +
> > > /* check whether memory at (regno + off) is accessible for t = (read | write)
> > > * if t==write, value_regno is a register which value is stored into memory
> > > * if t==read, value_regno is a register which will receive the value from memory
> > > @@ -7816,6 +7832,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
> > > return -EACCES;
> > > }
> > > copy_register_state(&regs[value_regno], reg);
> > > + add_scalar_to_reg(&regs[value_regno], off);
> > > regs[value_regno].type = PTR_TO_INSN;
> > > } else {
> > > mark_reg_unknown(env, regs, value_regno);
> >
> >
end of thread, other threads:[~2026-04-02 8:28 UTC | newest]
Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-04-01 16:15 [PATCH bpf-next 0/2] Properly load values from insn_arrays with non-zero offsets Anton Protopopov
2026-04-01 16:15 ` [PATCH bpf-next 1/2] bpf: Do not ignore offsets for loads from insn_arrays Anton Protopopov
2026-04-01 22:47 ` Mykyta Yatsenko
[not found] ` <CAGzPb2Ed+Z513yWDUE91H_OP2eF_fHucy_xV3-cpYOkmw73xmg@mail.gmail.com>
2026-04-02 0:27 ` Alexei Starovoitov
2026-04-02 2:37 ` sun jian
2026-04-02 8:37 ` Anton Protopopov
2026-04-02 8:36 ` Anton Protopopov
2026-04-01 16:15 ` [PATCH bpf-next 2/2] selftests/bpf: Add more tests for loading insn arrays with offsets Anton Protopopov
2026-04-01 22:38 ` Mykyta Yatsenko
2026-04-02 8:28 ` Anton Protopopov