* [PATCH bpf-next v2 0/4] bpf: consolidate pointer offset tracking in var_off
@ 2026-02-12 21:34 Eduard Zingerman
2026-02-12 21:34 ` [PATCH bpf-next v2 1/4] bpf: split check_reg_sane_offset() in two parts Eduard Zingerman
` (4 more replies)
0 siblings, 5 replies; 6+ messages in thread
From: Eduard Zingerman @ 2026-02-12 21:34 UTC (permalink / raw)
To: bpf, ast
Cc: andrii, daniel, martin.lau, kernel-team, yonghong.song, kuba,
Eduard Zingerman
Consolidate static and varying pointer offset tracking logic in the
BPF verifier. All pointer offsets are now represented solely using
`reg->var_off` and min/max fields, simplifying pointer tracking code
and making it easier to widen pointer registers for loop convergence
checks.
Patch 1 is a preparatory refactoring of check_reg_sane_offset().
Patch 2 is the main change, moving pointer offsets from `reg->off`
to `reg->var_off`.
Patch 3 removes references to `reg->off` in netronome code.
Patch 4 renames the now-repurposed `reg->off` field to `reg->delta`,
reflecting its remaining role as a constant delta between linked
scalar registers.
Note: netronome changes are compile-tested only!
Changelog:
v1 -> v2:
- put back WARN_ON_ONCE in mark_ptr_or_null_reg() (Alexei).
- removed references to the `ptr->off` field from netronome code
(bot+bpf-ci, kernel test robot).
- fixed a comment referencing `ptr->off` in bpf_verifier.h
(bot+bpf-ci).
v1: https://lore.kernel.org/bpf/20260211-ptrs-off-migration-v1-0-996c2a37b063@gmail.com/
---
Eduard Zingerman (4):
bpf: split check_reg_sane_offset() in two parts
bpf: use reg->var_off instead of reg->off for pointers
nfp: bpf: remove references to bpf_reg_state->off for pointers
bpf: rename bpf_reg_state->off to bpf_reg_state->delta
drivers/net/ethernet/netronome/nfp/bpf/jit.c | 18 +-
drivers/net/ethernet/netronome/nfp/bpf/verifier.c | 12 +-
include/linux/bpf_verifier.h | 9 +-
kernel/bpf/log.c | 10 +-
kernel/bpf/verifier.c | 368 +++++++++------------
.../testing/selftests/bpf/prog_tests/linked_list.c | 4 +-
.../selftests/bpf/progs/exceptions_assert.c | 2 +-
tools/testing/selftests/bpf/progs/iters.c | 6 +-
.../selftests/bpf/progs/mem_rdonly_untrusted.c | 2 +-
tools/testing/selftests/bpf/progs/verifier_align.c | 40 +--
.../testing/selftests/bpf/progs/verifier_bounds.c | 2 +-
.../bpf/progs/verifier_direct_packet_access.c | 4 +-
tools/testing/selftests/bpf/progs/verifier_gotox.c | 4 +-
.../bpf/progs/verifier_helper_packet_access.c | 2 +-
.../bpf/progs/verifier_helper_value_access.c | 4 +-
.../testing/selftests/bpf/progs/verifier_int_ptr.c | 2 +-
.../selftests/bpf/progs/verifier_meta_access.c | 2 +-
.../selftests/bpf/progs/verifier_spill_fill.c | 8 +-
.../selftests/bpf/progs/verifier_stack_ptr.c | 4 +-
.../selftests/bpf/progs/verifier_value_ptr_arith.c | 10 +-
.../bpf/progs/verifier_xdp_direct_packet_access.c | 64 ++--
tools/testing/selftests/bpf/verifier/calls.c | 2 +-
22 files changed, 260 insertions(+), 319 deletions(-)
---
base-commit: a86c608c067034f7e9da626cee81b75baeef16e8
change-id: 20260211-ptrs-off-migration-9ad00d0c3f93
* [PATCH bpf-next v2 1/4] bpf: split check_reg_sane_offset() in two parts
2026-02-12 21:34 [PATCH bpf-next v2 0/4] bpf: consolidate pointer offset tracking in var_off Eduard Zingerman
@ 2026-02-12 21:34 ` Eduard Zingerman
2026-02-12 21:34 ` [PATCH bpf-next v2 2/4] bpf: use reg->var_off instead of reg->off for pointers Eduard Zingerman
` (3 subsequent siblings)
4 siblings, 0 replies; 6+ messages in thread
From: Eduard Zingerman @ 2026-02-12 21:34 UTC (permalink / raw)
To: bpf, ast
Cc: andrii, daniel, martin.lau, kernel-team, yonghong.song, kuba,
Eduard Zingerman
check_reg_sane_offset() is used when verifying operations like:

  dst_reg += src_reg
  ^          ^
  |          '-- scalar
  '------------- pointer

to verify ranges for both dst_reg and src_reg. Split it into two parts:
- one to check a pointer offset;
- another to check a scalar offset.
This will be useful for further refactoring.
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
kernel/bpf/verifier.c | 39 +++++++++++++++++++++++++++------------
1 file changed, 27 insertions(+), 12 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index edf5342b982f676567579ed6349ccd5391eee7c8..3bf72eacbec2407fc79e22f62098755415bdf61c 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -14426,9 +14426,9 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
return 0;
}
-static bool check_reg_sane_offset(struct bpf_verifier_env *env,
- const struct bpf_reg_state *reg,
- enum bpf_reg_type type)
+static bool check_reg_sane_offset_scalar(struct bpf_verifier_env *env,
+ const struct bpf_reg_state *reg,
+ enum bpf_reg_type type)
{
bool known = tnum_is_const(reg->var_off);
s64 val = reg->var_off.value;
@@ -14440,12 +14440,6 @@ static bool check_reg_sane_offset(struct bpf_verifier_env *env,
return false;
}
- if (reg->off >= BPF_MAX_VAR_OFF || reg->off <= -BPF_MAX_VAR_OFF) {
- verbose(env, "%s pointer offset %d is not allowed\n",
- reg_type_str(env, type), reg->off);
- return false;
- }
-
if (smin == S64_MIN) {
verbose(env, "math between %s pointer and register with unbounded min value is not allowed\n",
reg_type_str(env, type));
@@ -14461,6 +14455,27 @@ static bool check_reg_sane_offset(struct bpf_verifier_env *env,
return true;
}
+static bool check_reg_sane_offset_ptr(struct bpf_verifier_env *env,
+ const struct bpf_reg_state *reg,
+ enum bpf_reg_type type)
+{
+ s64 smin = reg->smin_value;
+
+ if (reg->off >= BPF_MAX_VAR_OFF || reg->off <= -BPF_MAX_VAR_OFF) {
+ verbose(env, "%s pointer offset %d is not allowed\n",
+ reg_type_str(env, type), reg->off);
+ return false;
+ }
+
+ if (smin >= BPF_MAX_VAR_OFF || smin <= -BPF_MAX_VAR_OFF) {
+ verbose(env, "%s pointer offset %lld is not allowed\n",
+ reg_type_str(env, type), smin);
+ return false;
+ }
+
+ return true;
+}
+
enum {
REASON_BOUNDS = -1,
REASON_TYPE = -2,
@@ -14874,8 +14889,8 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
dst_reg->type = ptr_reg->type;
dst_reg->id = ptr_reg->id;
- if (!check_reg_sane_offset(env, off_reg, ptr_reg->type) ||
- !check_reg_sane_offset(env, ptr_reg, ptr_reg->type))
+ if (!check_reg_sane_offset_scalar(env, off_reg, ptr_reg->type) ||
+ !check_reg_sane_offset_ptr(env, ptr_reg, ptr_reg->type))
return -EINVAL;
/* pointer types do not carry 32-bit bounds at the moment. */
@@ -15004,7 +15019,7 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
return -EACCES;
}
- if (!check_reg_sane_offset(env, dst_reg, ptr_reg->type))
+ if (!check_reg_sane_offset_ptr(env, dst_reg, ptr_reg->type))
return -EINVAL;
reg_bounds_sync(dst_reg);
bounds_ret = sanitize_check_bounds(env, insn, dst_reg);
--
2.51.1
* [PATCH bpf-next v2 2/4] bpf: use reg->var_off instead of reg->off for pointers
2026-02-12 21:34 [PATCH bpf-next v2 0/4] bpf: consolidate pointer offset tracking in var_off Eduard Zingerman
2026-02-12 21:34 ` [PATCH bpf-next v2 1/4] bpf: split check_reg_sane_offset() in two parts Eduard Zingerman
@ 2026-02-12 21:34 ` Eduard Zingerman
2026-02-12 21:34 ` [PATCH bpf-next v2 3/4] nfp: bpf: remove references to bpf_reg_state->off " Eduard Zingerman
` (2 subsequent siblings)
4 siblings, 0 replies; 6+ messages in thread
From: Eduard Zingerman @ 2026-02-12 21:34 UTC (permalink / raw)
To: bpf, ast
Cc: andrii, daniel, martin.lau, kernel-team, yonghong.song, kuba,
Eduard Zingerman
This commit consolidates static and varying pointer offset tracking
logic. All offsets are now represented solely using `.var_off` and
min/max fields. The reasons are twofold:
- This simplifies pointer tracking code, as each relevant function
needs to check the `.var_off` field anyway.
- It makes it easier to widen pointer registers for loop convergence
checks, by forgoing the `regsafe()` logic that demands identical
`.off` fields.
The changes are spread across many functions and are hard to group
into smaller patches. Some of the logical changes include:
- Checks in __check_ptr_off_reg() are reordered so that the
tnum_is_const() check is done before operating on reg->var_off.value.
- check_packet_access() now uses check_mem_region_access() to handle
possible 'off' overflow cases.
- In check_helper_mem_access() utility functions like
check_packet_access() are now called with 'off=0', as these utility
functions now account for the complete register offset range.
- In check_reg_type() a call to __check_ptr_off_reg() is added before
a call to btf_struct_ids_match(). This prevents
btf_struct_ids_match() from potentially working on non-constant
reg->var_off.value.
- regsafe() is relaxed to avoid comparing '.off' field for pointers.
As a precaution, the changes are verified in [1] by adding a pass
checking that no pointer has a non-zero '.off' field on each
do_check_insn() iteration.
[1] https://github.com/eddyz87/bpf/tree/ptrs-off-migration
Notable selftests changes:
- `.var_off` value changed because it now combines static and varying
offsets. Affected tests:
- linked_list/incorrect_node_var_off
- linked_list/incorrect_head_var_off2
- verifier_align/packet_variable_offset
- An overflowing `smax_value` bound causes a pointer with a large negative
or positive offset to be rejected immediately (previously, an overflowing
`rX += const` instruction updated the `.off` field, avoiding the overflow).
Affected tests:
- verifier_align/dubious_pointer_arithmetic
- verifier_bounds/var_off_insn_off_test1
- An invalid packet access now reports the full offset inside the packet.
Affected tests:
- verifier_direct_packet_access/test23_x_pkt_ptr_4
- A change in check_mem_region_access() behavior:
when a register's `.smin_value` is negative, it reports
"rX min value is negative..." before calling into __check_mem_access(),
which reports "invalid access to ...".
In the tests below, the `.off` field was negative, while `.smin_value`
remained positive. This is no longer the case after the changes in
this commit. Affected tests:
- verifier_gotox/jump_table_invalid_mem_acceess_neg
- verifier_helper_packet_access/test15_cls_helper_fail_sub
- verifier_helper_value_access/imm_out_of_bound_2
- verifier_helper_value_access/reg_out_of_bound_2
- verifier_meta_access/meta_access_test2
- verifier_value_ptr_arith/known_scalar_from_different_maps
- lower_oob_arith_test_1
- value_ptr_known_scalar_3
- access_value_ptr_known_scalar
- Usage of check_mem_region_access() instead of __check_mem_access()
in check_packet_access() changes the reported message from
"rX offset is outside ..." to "rX min/max value is outside ...".
Affected tests:
- verifier_xdp_direct_packet_access/*
- In check_func_arg_reg_off() the check for zero offset now operates
on `.var_off` field instead of `.off` field. For tests where the
pattern looks like `kfunc(reg_with_var_off, ...)`, this changes the
reported error:
- previously the error "variable ... access ... disallowed"
was reported by __check_ptr_off_reg();
- now "R1 must have zero offset ..." is reported by
check_func_arg_reg_off() itself.
Affected tests:
- verifier/calls.c
"calls: invalid kfunc call: PTR_TO_BTF_ID with variable offset"
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
include/linux/bpf_verifier.h | 3 +-
kernel/bpf/log.c | 2 +
kernel/bpf/verifier.c | 317 ++++++++-------------
.../testing/selftests/bpf/prog_tests/linked_list.c | 4 +-
.../selftests/bpf/progs/exceptions_assert.c | 2 +-
tools/testing/selftests/bpf/progs/iters.c | 6 +-
.../selftests/bpf/progs/mem_rdonly_untrusted.c | 2 +-
tools/testing/selftests/bpf/progs/verifier_align.c | 40 ++-
.../testing/selftests/bpf/progs/verifier_bounds.c | 2 +-
.../bpf/progs/verifier_direct_packet_access.c | 4 +-
tools/testing/selftests/bpf/progs/verifier_gotox.c | 4 +-
.../bpf/progs/verifier_helper_packet_access.c | 2 +-
.../bpf/progs/verifier_helper_value_access.c | 4 +-
.../testing/selftests/bpf/progs/verifier_int_ptr.c | 2 +-
.../selftests/bpf/progs/verifier_meta_access.c | 2 +-
.../selftests/bpf/progs/verifier_spill_fill.c | 8 +-
.../selftests/bpf/progs/verifier_stack_ptr.c | 4 +-
.../selftests/bpf/progs/verifier_value_ptr_arith.c | 10 +-
.../bpf/progs/verifier_xdp_direct_packet_access.c | 64 ++---
tools/testing/selftests/bpf/verifier/calls.c | 2 +-
20 files changed, 206 insertions(+), 278 deletions(-)
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index ef8e45a362d96701efbc1d24539501f339d41def..a97bdbf3a07b63246ebf2021816e81c147989381 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -38,8 +38,7 @@ struct bpf_reg_state {
/* Ordering of fields matters. See states_equal() */
enum bpf_reg_type type;
/*
- * Fixed part of pointer offset, pointer types only.
- * Or constant delta between "linked" scalars with the same ID.
+ * Constant delta between "linked" scalars with the same ID.
*/
s32 off;
union {
diff --git a/kernel/bpf/log.c b/kernel/bpf/log.c
index a0c3b35de2ce65c488277c3c214f2a3c8185af5e..39a731392d6520a1345fc3d79e86fc43f63e426a 100644
--- a/kernel/bpf/log.c
+++ b/kernel/bpf/log.c
@@ -581,6 +581,8 @@ int tnum_strn(char *str, size_t size, struct tnum a)
if (a.mask == 0) {
if (is_unum_decimal(a.value))
return snprintf(str, size, "%llu", a.value);
+ if (is_snum_decimal(a.value))
+ return snprintf(str, size, "%lld", a.value);
else
return snprintf(str, size, "%#llx", a.value);
}
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 3bf72eacbec2407fc79e22f62098755415bdf61c..2c5794dad66889b7f872aa9882652efae877ac1a 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -654,7 +654,7 @@ static int stack_slot_obj_get_spi(struct bpf_verifier_env *env, struct bpf_reg_s
return -EINVAL;
}
- off = reg->off + reg->var_off.value;
+ off = reg->var_off.value;
if (off % BPF_REG_SIZE) {
verbose(env, "cannot pass in %s at an offset=%d\n", obj_kind, off);
return -EINVAL;
@@ -2281,11 +2281,10 @@ static void mark_ptr_not_null_reg(struct bpf_reg_state *reg)
static void mark_reg_graph_node(struct bpf_reg_state *regs, u32 regno,
struct btf_field_graph_root *ds_head)
{
- __mark_reg_known_zero(&regs[regno]);
+ __mark_reg_known(&regs[regno], ds_head->node_offset);
regs[regno].type = PTR_TO_BTF_ID | MEM_ALLOC;
regs[regno].btf = ds_head->btf;
regs[regno].btf_id = ds_head->value_btf_id;
- regs[regno].off = ds_head->node_offset;
}
static bool reg_is_pkt_pointer(const struct bpf_reg_state *reg)
@@ -2316,7 +2315,6 @@ static bool reg_is_init_pkt_pointer(const struct bpf_reg_state *reg,
*/
return reg->type == which &&
reg->id == 0 &&
- reg->off == 0 &&
tnum_equals_const(reg->var_off, 0);
}
@@ -5302,7 +5300,6 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
* tracks the effects of the write, considering that each stack slot in the
* dynamic range is potentially written to.
*
- * 'off' includes 'regno->off'.
* 'value_regno' can be -1, meaning that an unknown value is being written to
* the stack.
*
@@ -5724,7 +5721,6 @@ static int check_stack_read(struct bpf_verifier_env *env,
* check_stack_write_var_off.
*
* 'ptr_regno' is the register used as a pointer into the stack.
- * 'off' includes 'ptr_regno->off', but not its variable offset (if any).
* 'value_regno' is the register whose value we're writing to the stack. It can
* be -1, meaning that we're not writing from a register.
*
@@ -5761,14 +5757,14 @@ static int check_map_access_type(struct bpf_verifier_env *env, u32 regno,
u32 cap = bpf_map_flags_to_cap(map);
if (type == BPF_WRITE && !(cap & BPF_MAP_CAN_WRITE)) {
- verbose(env, "write into map forbidden, value_size=%d off=%d size=%d\n",
- map->value_size, off, size);
+ verbose(env, "write into map forbidden, value_size=%d off=%lld size=%d\n",
+ map->value_size, reg->smin_value + off, size);
return -EACCES;
}
if (type == BPF_READ && !(cap & BPF_MAP_CAN_READ)) {
- verbose(env, "read from map forbidden, value_size=%d off=%d size=%d\n",
- map->value_size, off, size);
+ verbose(env, "read from map forbidden, value_size=%d off=%lld size=%d\n",
+ map->value_size, reg->smin_value + off, size);
return -EACCES;
}
@@ -5875,24 +5871,24 @@ static int __check_ptr_off_reg(struct bpf_verifier_env *env,
* is only allowed in its original, unmodified form.
*/
- if (reg->off < 0) {
- verbose(env, "negative offset %s ptr R%d off=%d disallowed\n",
- reg_type_str(env, reg->type), regno, reg->off);
+ if (!tnum_is_const(reg->var_off)) {
+ char tn_buf[48];
+
+ tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
+ verbose(env, "variable %s access var_off=%s disallowed\n",
+ reg_type_str(env, reg->type), tn_buf);
return -EACCES;
}
- if (!fixed_off_ok && reg->off) {
- verbose(env, "dereference of modified %s ptr R%d off=%d disallowed\n",
- reg_type_str(env, reg->type), regno, reg->off);
+ if (reg->smin_value < 0) {
+ verbose(env, "negative offset %s ptr R%d off=%lld disallowed\n",
+ reg_type_str(env, reg->type), regno, reg->var_off.value);
return -EACCES;
}
- if (!tnum_is_const(reg->var_off) || reg->var_off.value) {
- char tn_buf[48];
-
- tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
- verbose(env, "variable %s access var_off=%s disallowed\n",
- reg_type_str(env, reg->type), tn_buf);
+ if (!fixed_off_ok && reg->var_off.value != 0) {
+ verbose(env, "dereference of modified %s ptr R%d off=%lld disallowed\n",
+ reg_type_str(env, reg->type), regno, reg->var_off.value);
return -EACCES;
}
@@ -5934,14 +5930,14 @@ static int map_kptr_match_type(struct bpf_verifier_env *env,
/* For ref_ptr case, release function check should ensure we get one
* referenced PTR_TO_BTF_ID, and that its fixed offset is 0. For the
* normal store of unreferenced kptr, we must ensure var_off is zero.
- * Since ref_ptr cannot be accessed directly by BPF insns, checks for
- * reg->off and reg->ref_obj_id are not needed here.
+ * Since ref_ptr cannot be accessed directly by BPF insns, check for
+ * reg->ref_obj_id is not needed here.
*/
if (__check_ptr_off_reg(env, reg, regno, true))
return -EACCES;
/* A full type match is needed, as BTF can be vmlinux, module or prog BTF, and
- * we also need to take into account the reg->off.
+ * we also need to take into account the reg->var_off.
*
* We want to support cases like:
*
@@ -5952,19 +5948,19 @@ static int map_kptr_match_type(struct bpf_verifier_env *env,
*
* struct foo *v;
* v = func(); // PTR_TO_BTF_ID
- * val->foo = v; // reg->off is zero, btf and btf_id match type
- * val->bar = &v->br; // reg->off is still zero, but we need to retry with
+ * val->foo = v; // reg->var_off is zero, btf and btf_id match type
+ * val->bar = &v->br; // reg->var_off is still zero, but we need to retry with
* // first member type of struct after comparison fails
- * val->baz = &v->bz; // reg->off is non-zero, so struct needs to be walked
+ * val->baz = &v->bz; // reg->var_off is non-zero, so struct needs to be walked
* // to match type
*
- * In the kptr_ref case, check_func_arg_reg_off already ensures reg->off
+ * In the kptr_ref case, check_func_arg_reg_off already ensures reg->var_off
* is zero. We must also ensure that btf_struct_ids_match does not walk
* the struct to match type against first member of struct, i.e. reject
* second case from above. Hence, when type is BPF_KPTR_REF, we set
* strict mode to true for type match.
*/
- if (!btf_struct_ids_match(&env->log, reg->btf, reg->btf_id, reg->off,
+ if (!btf_struct_ids_match(&env->log, reg->btf, reg->btf_id, reg->var_off.value,
kptr_field->kptr.btf, kptr_field->kptr.btf_id,
kptr_field->type != BPF_KPTR_UNREF))
goto bad_type;
@@ -6273,27 +6269,14 @@ static int check_packet_access(struct bpf_verifier_env *env, u32 regno, int off,
struct bpf_reg_state *reg = reg_state(env, regno);
int err;
- /* We may have added a variable offset to the packet pointer; but any
- * reg->range we have comes after that. We are only checking the fixed
- * offset.
- */
-
- /* We don't allow negative numbers, because we aren't tracking enough
- * detail to prove they're safe.
- */
- if (reg->smin_value < 0) {
- verbose(env, "R%d min value is negative, either use unsigned index or do a if (index >=0) check.\n",
- regno);
- return -EACCES;
+ if (reg->range < 0) {
+ verbose(env, "R%d offset is outside of the packet\n", regno);
+ return -EINVAL;
}
- err = reg->range < 0 ? -EINVAL :
- __check_mem_access(env, regno, off, size, reg->range,
- zero_size_allowed);
- if (err) {
- verbose(env, "R%d offset is outside of the packet\n", regno);
+ err = check_mem_region_access(env, regno, off, size, reg->range, zero_size_allowed);
+ if (err)
return err;
- }
/* __check_mem_access has made sure "off + size - 1" is within u16.
* reg->umax_value can't be bigger than MAX_PACKET_OFF which is 0xffff,
@@ -6305,7 +6288,7 @@ static int check_packet_access(struct bpf_verifier_env *env, u32 regno, int off,
max_t(u32, env->prog->aux->max_pkt_offset,
off + reg->umax_value + size - 1);
- return err;
+ return 0;
}
/* check access to 'struct bpf_context' fields. Supports fixed offsets only */
@@ -6522,14 +6505,14 @@ static int check_pkt_ptr_alignment(struct bpf_verifier_env *env,
*/
ip_align = 2;
- reg_off = tnum_add(reg->var_off, tnum_const(ip_align + reg->off + off));
+ reg_off = tnum_add(reg->var_off, tnum_const(ip_align + off));
if (!tnum_is_aligned(reg_off, size)) {
char tn_buf[48];
tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
verbose(env,
- "misaligned packet access off %d+%s+%d+%d size %d\n",
- ip_align, tn_buf, reg->off, off, size);
+ "misaligned packet access off %d+%s+%d size %d\n",
+ ip_align, tn_buf, off, size);
return -EACCES;
}
@@ -6547,13 +6530,13 @@ static int check_generic_ptr_alignment(struct bpf_verifier_env *env,
if (!strict || size == 1)
return 0;
- reg_off = tnum_add(reg->var_off, tnum_const(reg->off + off));
+ reg_off = tnum_add(reg->var_off, tnum_const(off));
if (!tnum_is_aligned(reg_off, size)) {
char tn_buf[48];
tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
- verbose(env, "misaligned %saccess off %s+%d+%d size %d\n",
- pointer_desc, tn_buf, reg->off, off, size);
+ verbose(env, "misaligned %saccess off %s+%d size %d\n",
+ pointer_desc, tn_buf, off, size);
return -EACCES;
}
@@ -6891,7 +6874,7 @@ static int __check_buffer_access(struct bpf_verifier_env *env,
regno, buf_info, off, size);
return -EACCES;
}
- if (!tnum_is_const(reg->var_off) || reg->var_off.value) {
+ if (!tnum_is_const(reg->var_off)) {
char tn_buf[48];
tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
@@ -6914,8 +6897,8 @@ static int check_tp_buffer_access(struct bpf_verifier_env *env,
if (err)
return err;
- if (off + size > env->prog->aux->max_tp_access)
- env->prog->aux->max_tp_access = off + size;
+ env->prog->aux->max_tp_access = max(reg->var_off.value + off + size,
+ env->prog->aux->max_tp_access);
return 0;
}
@@ -6933,8 +6916,7 @@ static int check_buffer_access(struct bpf_verifier_env *env,
if (err)
return err;
- if (off + size > *max_access)
- *max_access = off + size;
+ *max_access = max(reg->var_off.value + off + size, *max_access);
return 0;
}
@@ -7327,13 +7309,8 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
tname);
return -EINVAL;
}
- if (off < 0) {
- verbose(env,
- "R%d is ptr_%s invalid negative access: off=%d\n",
- regno, tname, off);
- return -EACCES;
- }
- if (!tnum_is_const(reg->var_off) || reg->var_off.value) {
+
+ if (!tnum_is_const(reg->var_off)) {
char tn_buf[48];
tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
@@ -7343,6 +7320,15 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
return -EACCES;
}
+ off += reg->var_off.value;
+
+ if (off < 0) {
+ verbose(env,
+ "R%d is ptr_%s invalid negative access: off=%d\n",
+ regno, tname, off);
+ return -EACCES;
+ }
+
if (reg->type & MEM_USER) {
verbose(env,
"R%d is ptr_%s access user memory: off=%d\n",
@@ -7589,8 +7575,8 @@ static int check_stack_access_within_bounds(
if (err) {
if (tnum_is_const(reg->var_off)) {
- verbose(env, "invalid%s stack R%d off=%d size=%d\n",
- err_extra, regno, off, access_size);
+ verbose(env, "invalid%s stack R%d off=%lld size=%d\n",
+ err_extra, regno, min_off, access_size);
} else {
char tn_buf[48];
@@ -7636,14 +7622,10 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
if (size < 0)
return size;
- /* alignment checks will add in reg->off themselves */
err = check_ptr_alignment(env, reg, off, size, strict_alignment_once);
if (err)
return err;
- /* for access checks, reg->off is just part of off */
- off += reg->off;
-
if (reg->type == PTR_TO_MAP_KEY) {
if (t == BPF_WRITE) {
verbose(env, "write to change key R%d not allowed\n", regno);
@@ -8122,8 +8104,6 @@ static int check_atomic(struct bpf_verifier_env *env, struct bpf_insn *insn)
* on the access type and privileges, that all elements of the stack are
* initialized.
*
- * 'off' includes 'regno->off', but not its dynamic part (if any).
- *
* All registers that have been spilled on the stack in the slots within the
* read offsets are marked as read.
*/
@@ -8284,7 +8264,7 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
switch (base_type(reg->type)) {
case PTR_TO_PACKET:
case PTR_TO_PACKET_META:
- return check_packet_access(env, regno, reg->off, access_size,
+ return check_packet_access(env, regno, 0, access_size,
zero_size_allowed);
case PTR_TO_MAP_KEY:
if (access_type == BPF_WRITE) {
@@ -8292,12 +8272,12 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
reg_type_str(env, reg->type));
return -EACCES;
}
- return check_mem_region_access(env, regno, reg->off, access_size,
+ return check_mem_region_access(env, regno, 0, access_size,
reg->map_ptr->key_size, false);
case PTR_TO_MAP_VALUE:
- if (check_map_access_type(env, regno, reg->off, access_size, access_type))
+ if (check_map_access_type(env, regno, 0, access_size, access_type))
return -EACCES;
- return check_map_access(env, regno, reg->off, access_size,
+ return check_map_access(env, regno, 0, access_size,
zero_size_allowed, ACCESS_HELPER);
case PTR_TO_MEM:
if (type_is_rdonly_mem(reg->type)) {
@@ -8307,7 +8287,7 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
return -EACCES;
}
}
- return check_mem_region_access(env, regno, reg->off,
+ return check_mem_region_access(env, regno, 0,
access_size, reg->mem_size,
zero_size_allowed);
case PTR_TO_BUF:
@@ -8322,16 +8302,16 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
} else {
max_access = &env->prog->aux->max_rdwr_access;
}
- return check_buffer_access(env, reg, regno, reg->off,
+ return check_buffer_access(env, reg, regno, 0,
access_size, zero_size_allowed,
max_access);
case PTR_TO_STACK:
return check_stack_range_initialized(
env,
- regno, reg->off, access_size,
+ regno, 0, access_size,
zero_size_allowed, access_type, meta);
case PTR_TO_BTF_ID:
- return check_ptr_to_btf_access(env, regs, regno, reg->off,
+ return check_ptr_to_btf_access(env, regs, regno, 0,
access_size, BPF_READ, -1);
case PTR_TO_CTX:
/* in case the function doesn't know how to access the context,
@@ -8543,9 +8523,9 @@ static int process_spin_lock(struct bpf_verifier_env *env, int regno, int flags)
return -EINVAL;
}
spin_lock_off = is_res_lock ? rec->res_spin_lock_off : rec->spin_lock_off;
- if (spin_lock_off != val + reg->off) {
+ if (spin_lock_off != val) {
verbose(env, "off %lld doesn't point to 'struct %s_lock' that is at %d\n",
- val + reg->off, lock_str, spin_lock_off);
+ val, lock_str, spin_lock_off);
return -EINVAL;
}
if (is_lock) {
@@ -8660,9 +8640,9 @@ static int check_map_field_pointer(struct bpf_verifier_env *env, u32 regno,
verifier_bug(env, "unsupported BTF field type: %s\n", struct_name);
return -EINVAL;
}
- if (field_off != val + reg->off) {
+ if (field_off != val) {
verbose(env, "off %lld doesn't point to 'struct %s' that is at %d\n",
- val + reg->off, struct_name, field_off);
+ val, struct_name, field_off);
return -EINVAL;
}
if (map_desc->ptr) {
@@ -8730,7 +8710,7 @@ static int process_kptr_func(struct bpf_verifier_env *env, int regno,
return -EINVAL;
}
- kptr_off = reg->off + reg->var_off.value;
+ kptr_off = reg->var_off.value;
kptr_field = btf_record_find(rec, kptr_off, BPF_KPTR);
if (!kptr_field) {
verbose(env, "off=%d doesn't point to kptr\n", kptr_off);
@@ -9373,7 +9353,7 @@ static int check_reg_type(struct bpf_verifier_env *env, u32 regno,
struct bpf_reg_state *reg = reg_state(env, regno);
enum bpf_reg_type expected, type = reg->type;
const struct bpf_reg_types *compatible;
- int i, j;
+ int i, j, err;
compatible = compatible_reg_types[base_type(arg_type)];
if (!compatible) {
@@ -9476,8 +9456,12 @@ static int check_reg_type(struct bpf_verifier_env *env, u32 regno,
return -EACCES;
}
- if (!btf_struct_ids_match(&env->log, reg->btf, reg->btf_id, reg->off,
- btf_vmlinux, *arg_btf_id,
+ err = __check_ptr_off_reg(env, reg, regno, true);
+ if (err)
+ return err;
+
+ if (!btf_struct_ids_match(&env->log, reg->btf, reg->btf_id,
+ reg->var_off.value, btf_vmlinux, *arg_btf_id,
strict_type_match)) {
verbose(env, "R%d is of type %s but %s is expected\n",
regno, btf_type_name(reg->btf, reg->btf_id),
@@ -9555,12 +9539,11 @@ static int check_func_arg_reg_off(struct bpf_verifier_env *env,
* because fixed_off_ok is false, but checking here allows us
* to give the user a better error message.
*/
- if (reg->off) {
+ if (!tnum_is_const(reg->var_off) || reg->var_off.value != 0) {
verbose(env, "R%d must have zero offset when passed to release func or trusted arg to kfunc\n",
regno);
return -EINVAL;
}
- return __check_ptr_off_reg(env, reg, regno, false);
}
switch (type) {
@@ -9657,7 +9640,7 @@ static enum bpf_dynptr_type dynptr_get_type(struct bpf_verifier_env *env,
if (reg->type == CONST_PTR_TO_DYNPTR)
return reg->dynptr.type;
- spi = __get_spi(reg->off);
+ spi = __get_spi(reg->var_off.value);
if (spi < 0) {
verbose(env, "verifier internal error: invalid spi when querying dynptr type\n");
return BPF_DYNPTR_TYPE_INVALID;
@@ -9698,13 +9681,13 @@ static int check_reg_const_str(struct bpf_verifier_env *env,
return -EACCES;
}
- err = check_map_access(env, regno, reg->off,
- map->value_size - reg->off, false,
+ err = check_map_access(env, regno, 0,
+ map->value_size - reg->var_off.value, false,
ACCESS_HELPER);
if (err)
return err;
- map_off = reg->off + reg->var_off.value;
+ map_off = reg->var_off.value;
err = map->ops->map_direct_value_addr(map, &map_addr, map_off);
if (err) {
verbose(env, "direct value access on string failed\n");
@@ -9741,7 +9724,7 @@ static int get_constant_map_key(struct bpf_verifier_env *env,
if (!tnum_is_const(key->var_off))
return -EOPNOTSUPP;
- stack_off = key->off + key->var_off.value;
+ stack_off = key->var_off.value;
slot = -stack_off - 1;
spi = slot / BPF_REG_SIZE;
off = slot % BPF_REG_SIZE;
@@ -11073,7 +11056,8 @@ static int set_rbtree_add_callback_state(struct bpf_verifier_env *env,
*/
struct btf_field *field;
- field = reg_find_field_offset(&caller->regs[BPF_REG_1], caller->regs[BPF_REG_1].off,
+ field = reg_find_field_offset(&caller->regs[BPF_REG_1],
+ caller->regs[BPF_REG_1].var_off.value,
BPF_RB_ROOT);
if (!field || !field->graph_root.value_btf_id)
return -EFAULT;
@@ -11449,7 +11433,7 @@ static int check_bpf_snprintf_call(struct bpf_verifier_env *env,
/* fmt being ARG_PTR_TO_CONST_STR guarantees that var_off is const
* and map_direct_value_addr is set.
*/
- fmt_map_off = fmt_reg->off + fmt_reg->var_off.value;
+ fmt_map_off = fmt_reg->var_off.value;
err = fmt_map->ops->map_direct_value_addr(fmt_map, &fmt_addr,
fmt_map_off);
if (err) {
@@ -12755,13 +12739,12 @@ static int process_kf_arg_ptr_to_btf_id(struct bpf_verifier_env *env,
btf_type_ids_nocast_alias(&env->log, reg_btf, reg_ref_id, meta->btf, ref_id))
strict_type_match = true;
- WARN_ON_ONCE(is_kfunc_release(meta) &&
- (reg->off || !tnum_is_const(reg->var_off) ||
- reg->var_off.value));
+ WARN_ON_ONCE(is_kfunc_release(meta) && !tnum_is_const(reg->var_off));
reg_ref_t = btf_type_skip_modifiers(reg_btf, reg_ref_id, &reg_ref_id);
reg_ref_tname = btf_name_by_offset(reg_btf, reg_ref_t->name_off);
- struct_same = btf_struct_ids_match(&env->log, reg_btf, reg_ref_id, reg->off, meta->btf, ref_id, strict_type_match);
+ struct_same = btf_struct_ids_match(&env->log, reg_btf, reg_ref_id, reg->var_off.value,
+ meta->btf, ref_id, strict_type_match);
/* If kfunc is accepting a projection type (ie. __sk_buff), it cannot
* actually use it -- it must cast to the underlying type. So we allow
* caller to pass in the underlying type.
@@ -13133,7 +13116,7 @@ __process_kf_arg_ptr_to_graph_root(struct bpf_verifier_env *env,
}
rec = reg_btf_record(reg);
- head_off = reg->off + reg->var_off.value;
+ head_off = reg->var_off.value;
field = btf_record_find(rec, head_off, head_field_type);
if (!field) {
verbose(env, "%s not found at offset=%u\n", head_type_name, head_off);
@@ -13200,7 +13183,7 @@ __process_kf_arg_ptr_to_graph_node(struct bpf_verifier_env *env,
return -EINVAL;
}
- node_off = reg->off + reg->var_off.value;
+ node_off = reg->var_off.value;
field = reg_find_field_offset(reg, node_off, node_field_type);
if (!field) {
verbose(env, "%s not found at offset=%u\n", node_type_name, node_off);
@@ -14228,7 +14211,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
meta.func_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
meta.func_id == special_kfunc_list[KF_bpf_rbtree_add_impl]) {
release_ref_obj_id = regs[BPF_REG_2].ref_obj_id;
- insn_aux->insert_off = regs[BPF_REG_2].off;
+ insn_aux->insert_off = regs[BPF_REG_2].var_off.value;
insn_aux->kptr_struct_meta = btf_find_struct_meta(meta.arg_btf, meta.arg_btf_id);
err = ref_convert_owning_non_owning(env, release_ref_obj_id);
if (err) {
@@ -14459,11 +14442,13 @@ static bool check_reg_sane_offset_ptr(struct bpf_verifier_env *env,
const struct bpf_reg_state *reg,
enum bpf_reg_type type)
{
+ bool known = tnum_is_const(reg->var_off);
+ s64 val = reg->var_off.value;
s64 smin = reg->smin_value;
- if (reg->off >= BPF_MAX_VAR_OFF || reg->off <= -BPF_MAX_VAR_OFF) {
- verbose(env, "%s pointer offset %d is not allowed\n",
- reg_type_str(env, type), reg->off);
+ if (known && (val >= BPF_MAX_VAR_OFF || val <= -BPF_MAX_VAR_OFF)) {
+ verbose(env, "%s pointer offset %lld is not allowed\n",
+ reg_type_str(env, type), val);
return false;
}
@@ -14497,13 +14482,11 @@ static int retrieve_ptr_limit(const struct bpf_reg_state *ptr_reg,
* currently prohibited for unprivileged.
*/
max = MAX_BPF_STACK + mask_to_left;
- ptr_limit = -(ptr_reg->var_off.value + ptr_reg->off);
+ ptr_limit = -ptr_reg->var_off.value;
break;
case PTR_TO_MAP_VALUE:
max = ptr_reg->map_ptr->value_size;
- ptr_limit = (mask_to_left ?
- ptr_reg->smin_value :
- ptr_reg->umax_value) + ptr_reg->off;
+ ptr_limit = mask_to_left ? ptr_reg->smin_value : ptr_reg->umax_value;
break;
default:
return REASON_TYPE;
@@ -14734,9 +14717,6 @@ static int sanitize_err(struct bpf_verifier_env *env,
* Variable offset is prohibited for unprivileged mode for simplicity since it
* requires corresponding support in Spectre masking for stack ALU. See also
* retrieve_ptr_limit().
- *
- *
- * 'off' includes 'reg->off'.
*/
static int check_stack_access_for_ptr_arithmetic(
struct bpf_verifier_env *env,
@@ -14777,11 +14757,11 @@ static int sanitize_check_bounds(struct bpf_verifier_env *env,
switch (dst_reg->type) {
case PTR_TO_STACK:
if (check_stack_access_for_ptr_arithmetic(env, dst, dst_reg,
- dst_reg->off + dst_reg->var_off.value))
+ dst_reg->var_off.value))
return -EACCES;
break;
case PTR_TO_MAP_VALUE:
- if (check_map_access(env, dst, dst_reg->off, 1, false, ACCESS_HELPER)) {
+ if (check_map_access(env, dst, 0, 1, false, ACCESS_HELPER)) {
verbose(env, "R%d pointer arithmetic of map value goes out of range, "
"prohibited for !root\n", dst);
return -EACCES;
@@ -14905,23 +14885,7 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
switch (opcode) {
case BPF_ADD:
- /* We can take a fixed offset as long as it doesn't overflow
- * the s32 'off' field
- */
- if (known && (ptr_reg->off + smin_val ==
- (s64)(s32)(ptr_reg->off + smin_val))) {
- /* pointer += K. Accumulate it into fixed offset */
- dst_reg->smin_value = smin_ptr;
- dst_reg->smax_value = smax_ptr;
- dst_reg->umin_value = umin_ptr;
- dst_reg->umax_value = umax_ptr;
- dst_reg->var_off = ptr_reg->var_off;
- dst_reg->off = ptr_reg->off + smin_val;
- dst_reg->raw = ptr_reg->raw;
- break;
- }
- /* A new variable offset is created. Note that off_reg->off
- * == 0, since it's a scalar.
+ /*
* dst_reg gets the pointer type and since some positive
* integer value was added to the pointer, give it a new 'id'
* if it's a PTR_TO_PACKET.
@@ -14940,9 +14904,8 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
dst_reg->umax_value = U64_MAX;
}
dst_reg->var_off = tnum_add(ptr_reg->var_off, off_reg->var_off);
- dst_reg->off = ptr_reg->off;
dst_reg->raw = ptr_reg->raw;
- if (reg_is_pkt_pointer(ptr_reg)) {
+ if (!known && reg_is_pkt_pointer(ptr_reg)) {
dst_reg->id = ++env->id_gen;
/* something was added to pkt_ptr, set range to zero */
memset(&dst_reg->raw, 0, sizeof(dst_reg->raw));
@@ -14964,19 +14927,6 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
dst);
return -EACCES;
}
- if (known && (ptr_reg->off - smin_val ==
- (s64)(s32)(ptr_reg->off - smin_val))) {
- /* pointer -= K. Subtract it from fixed offset */
- dst_reg->smin_value = smin_ptr;
- dst_reg->smax_value = smax_ptr;
- dst_reg->umin_value = umin_ptr;
- dst_reg->umax_value = umax_ptr;
- dst_reg->var_off = ptr_reg->var_off;
- dst_reg->id = ptr_reg->id;
- dst_reg->off = ptr_reg->off - smin_val;
- dst_reg->raw = ptr_reg->raw;
- break;
- }
/* A new variable offset is created. If the subtrahend is known
* nonnegative, then any reg->range we had before is still good.
*/
@@ -14996,9 +14946,8 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
dst_reg->umax_value = umax_ptr - umin_val;
}
dst_reg->var_off = tnum_sub(ptr_reg->var_off, off_reg->var_off);
- dst_reg->off = ptr_reg->off;
dst_reg->raw = ptr_reg->raw;
- if (reg_is_pkt_pointer(ptr_reg)) {
+ if (!known && reg_is_pkt_pointer(ptr_reg)) {
dst_reg->id = ++env->id_gen;
/* something was added to pkt_ptr, set range to zero */
if (smin_val < 0)
@@ -16542,19 +16491,17 @@ static void find_good_pkt_pointers(struct bpf_verifier_state *vstate,
struct bpf_reg_state *reg;
int new_range;
- if (dst_reg->off < 0 ||
- (dst_reg->off == 0 && range_right_open))
+ if (dst_reg->umax_value == 0 && range_right_open)
/* This doesn't give us any range */
return;
- if (dst_reg->umax_value > MAX_PACKET_OFF ||
- dst_reg->umax_value + dst_reg->off > MAX_PACKET_OFF)
+ if (dst_reg->umax_value > MAX_PACKET_OFF)
/* Risk of overflow. For instance, ptr + (1<<63) may be less
* than pkt_end, but that's because it's also less than pkt.
*/
return;
- new_range = dst_reg->off;
+ new_range = dst_reg->umax_value;
if (range_right_open)
new_range++;
@@ -16603,7 +16550,7 @@ static void find_good_pkt_pointers(struct bpf_verifier_state *vstate,
/* If our ids match, then we must have the same max_value. And we
* don't care about the other reg's fixed offset, since if it's too big
* the range won't allow anything.
- * dst_reg->off is known < MAX_PACKET_OFF, therefore it fits in a u16.
+ * dst_reg->umax_value is known < MAX_PACKET_OFF, therefore it fits in a u16.
*/
bpf_for_each_reg_in_vstate(vstate, state, reg, ({
if (reg->type == type && reg->id == dst_reg->id)
@@ -17129,29 +17076,24 @@ static void mark_ptr_or_null_reg(struct bpf_func_state *state,
{
if (type_may_be_null(reg->type) && reg->id == id &&
(is_rcu_reg(reg) || !WARN_ON_ONCE(!reg->id))) {
- /* Old offset (both fixed and variable parts) should have been
- * known-zero, because we don't allow pointer arithmetic on
- * pointers that might be NULL. If we see this happening, don't
- * convert the register.
+ /* Old offset should have been known-zero, because we don't
+ * allow pointer arithmetic on pointers that might be NULL.
+ * If we see this happening, don't convert the register.
*
* But in some cases, some helpers that return local kptrs
- * advance offset for the returned pointer. In those cases, it
- * is fine to expect to see reg->off.
+ * advance offset for the returned pointer. In those cases,
+ * it is fine to expect to see reg->var_off.
*/
- if (WARN_ON_ONCE(reg->smin_value || reg->smax_value || !tnum_equals_const(reg->var_off, 0)))
- return;
if (!(type_is_ptr_alloc_obj(reg->type) || type_is_non_owning_ref(reg->type)) &&
- WARN_ON_ONCE(reg->off))
+ WARN_ON_ONCE(!tnum_equals_const(reg->var_off, 0)))
return;
-
if (is_null) {
- reg->type = SCALAR_VALUE;
/* We don't need id and ref_obj_id from this point
* onwards anymore, thus we should better reset it,
* so that state pruning has chances to take effect.
*/
- reg->id = 0;
- reg->ref_obj_id = 0;
+ __mark_reg_known_zero(reg);
+ reg->type = SCALAR_VALUE;
return;
}
@@ -17731,22 +17673,24 @@ static int check_ld_imm(struct bpf_verifier_env *env, struct bpf_insn *insn)
}
map = env->used_maps[aux->map_index];
- dst_reg->map_ptr = map;
if (insn->src_reg == BPF_PSEUDO_MAP_VALUE ||
insn->src_reg == BPF_PSEUDO_MAP_IDX_VALUE) {
if (map->map_type == BPF_MAP_TYPE_ARENA) {
__mark_reg_unknown(env, dst_reg);
+ dst_reg->map_ptr = map;
return 0;
}
+ __mark_reg_known(dst_reg, aux->map_off);
dst_reg->type = PTR_TO_MAP_VALUE;
- dst_reg->off = aux->map_off;
+ dst_reg->map_ptr = map;
WARN_ON_ONCE(map->map_type != BPF_MAP_TYPE_INSN_ARRAY &&
map->max_entries != 1);
/* We want reg->id to be same (0) as map_value is not distinct */
} else if (insn->src_reg == BPF_PSEUDO_MAP_FD ||
insn->src_reg == BPF_PSEUDO_MAP_IDX) {
dst_reg->type = CONST_PTR_TO_MAP;
+ dst_reg->map_ptr = map;
} else {
verifier_bug(env, "unexpected src reg value for ldimm64");
return -EFAULT;
@@ -19852,11 +19796,6 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
*/
if (rold->range > rcur->range)
return false;
- /* If the offsets don't match, we can't trust our alignment;
- * nor can we be sure that we won't fall out of range.
- */
- if (rold->off != rcur->off)
- return false;
/* id relations must be preserved */
if (!check_ids(rold->id, rcur->id, idmap))
return false;
@@ -19872,8 +19811,7 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
return true;
case PTR_TO_INSN:
return memcmp(rold, rcur, offsetof(struct bpf_reg_state, var_off)) == 0 &&
- rold->off == rcur->off && range_within(rold, rcur) &&
- tnum_in(rold->var_off, rcur->var_off);
+ range_within(rold, rcur) && tnum_in(rold->var_off, rcur->var_off);
default:
return regs_exact(rold, rcur, idmap);
}
@@ -20486,7 +20424,7 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
* so we can assume valid iter and reg state,
* no need for extra (re-)validations
*/
- spi = __get_spi(iter_reg->off + iter_reg->var_off.value);
+ spi = __get_spi(iter_reg->var_off.value);
iter_state = &func(env, iter_reg)->stack[spi].spilled_ptr;
if (iter_state->iter.state == BPF_ITER_STATE_ACTIVE) {
loop = true;
@@ -20892,19 +20830,16 @@ static int indirect_jump_min_max_index(struct bpf_verifier_env *env,
u32 *pmin_index, u32 *pmax_index)
{
struct bpf_reg_state *reg = reg_state(env, regno);
- u64 min_index, max_index;
+ u64 min_index = reg->umin_value;
+ u64 max_index = reg->umax_value;
const u32 size = 8;
- if (check_add_overflow(reg->umin_value, reg->off, &min_index) ||
- (min_index > (u64) U32_MAX * size)) {
- verbose(env, "the sum of R%u umin_value %llu and off %u is too big\n",
- regno, reg->umin_value, reg->off);
+ if (min_index > (u64) U32_MAX * size) {
+ verbose(env, "R%u umin_value %llu is too big\n", regno, reg->umin_value);
return -ERANGE;
}
- if (check_add_overflow(reg->umax_value, reg->off, &max_index) ||
- (max_index > (u64) U32_MAX * size)) {
- verbose(env, "the sum of R%u umax_value %llu and off %u is too big\n",
- regno, reg->umax_value, reg->off);
+ if (max_index > (u64) U32_MAX * size) {
+ verbose(env, "R%u umax_value %llu is too big\n", regno, reg->umax_value);
return -ERANGE;
}
diff --git a/tools/testing/selftests/bpf/prog_tests/linked_list.c b/tools/testing/selftests/bpf/prog_tests/linked_list.c
index 14c5a7ef0e87d0c2cedf04a3b4fd7bd8419b9c43..6f25b5f39a79cd2e2547638a7726f444d81c1543 100644
--- a/tools/testing/selftests/bpf/prog_tests/linked_list.c
+++ b/tools/testing/selftests/bpf/prog_tests/linked_list.c
@@ -87,12 +87,12 @@ static struct {
{ "incorrect_value_type",
"operation on bpf_list_head expects arg#1 bpf_list_node at offset=48 in struct foo, "
"but arg is at offset=0 in struct bar" },
- { "incorrect_node_var_off", "variable ptr_ access var_off=(0x0; 0xffffffff) disallowed" },
+ { "incorrect_node_var_off", "variable ptr_ access var_off=(0x0; 0x1ffffffff) disallowed" },
{ "incorrect_node_off1", "bpf_list_node not found at offset=49" },
{ "incorrect_node_off2", "arg#1 offset=0, but expected bpf_list_node at offset=48 in struct foo" },
{ "no_head_type", "bpf_list_head not found at offset=0" },
{ "incorrect_head_var_off1", "R1 doesn't have constant offset" },
- { "incorrect_head_var_off2", "variable ptr_ access var_off=(0x0; 0xffffffff) disallowed" },
+ { "incorrect_head_var_off2", "variable ptr_ access var_off=(0x0; 0x1ffffffff) disallowed" },
{ "incorrect_head_off1", "bpf_list_head not found at offset=25" },
{ "incorrect_head_off2", "bpf_list_head not found at offset=1" },
{ "pop_front_off", "off 48 doesn't point to 'struct bpf_spin_lock' that is at 40" },
diff --git a/tools/testing/selftests/bpf/progs/exceptions_assert.c b/tools/testing/selftests/bpf/progs/exceptions_assert.c
index a01c2736890f9482e5a16bc943e7d877600149c2..ed00dd551ffb290f05753b15ce1bbde71b3dc19d 100644
--- a/tools/testing/selftests/bpf/progs/exceptions_assert.c
+++ b/tools/testing/selftests/bpf/progs/exceptions_assert.c
@@ -114,7 +114,7 @@ int check_assert_single_range_u64(struct __sk_buff *ctx)
SEC("?tc")
__log_level(2) __failure
-__msg(": R1=pkt(off=64,r=64) R2=pkt_end() R6=pkt(r=64) R10=fp0")
+__msg(": R1=pkt(r=64,imm=64) R2=pkt_end() R6=pkt(r=64) R10=fp0")
int check_assert_generic(struct __sk_buff *ctx)
{
u8 *data_end = (void *)(long)ctx->data_end;
diff --git a/tools/testing/selftests/bpf/progs/iters.c b/tools/testing/selftests/bpf/progs/iters.c
index 7f27b517d5d5668a0d2204cb8f9a0632806c3959..86b74e3579d9d5f72b8634804becde22c491a452 100644
--- a/tools/testing/selftests/bpf/progs/iters.c
+++ b/tools/testing/selftests/bpf/progs/iters.c
@@ -1651,7 +1651,7 @@ int clean_live_states(const void *ctx)
SEC("?raw_tp")
__flag(BPF_F_TEST_STATE_FREQ)
-__failure __msg("misaligned stack access off 0+-31+0 size 8")
+__failure __msg("misaligned stack access off -31+0 size 8")
__naked int absent_mark_in_the_middle_state(void)
{
/* This is equivalent to C program below.
@@ -1726,7 +1726,7 @@ static int noop(void)
SEC("?raw_tp")
__flag(BPF_F_TEST_STATE_FREQ)
-__failure __msg("misaligned stack access off 0+-31+0 size 8")
+__failure __msg("misaligned stack access off -31+0 size 8")
__naked int absent_mark_in_the_middle_state2(void)
{
/* This is equivalent to C program below.
@@ -1802,7 +1802,7 @@ __naked int absent_mark_in_the_middle_state2(void)
SEC("?raw_tp")
__flag(BPF_F_TEST_STATE_FREQ)
-__failure __msg("misaligned stack access off 0+-31+0 size 8")
+__failure __msg("misaligned stack access off -31+0 size 8")
__naked int absent_mark_in_the_middle_state3(void)
{
/*
diff --git a/tools/testing/selftests/bpf/progs/mem_rdonly_untrusted.c b/tools/testing/selftests/bpf/progs/mem_rdonly_untrusted.c
index 3b984b6ae7c0b94a993470a3fe8185fce1214f4a..5b4453747c2308e871ffefd27123ac33a74abb71 100644
--- a/tools/testing/selftests/bpf/progs/mem_rdonly_untrusted.c
+++ b/tools/testing/selftests/bpf/progs/mem_rdonly_untrusted.c
@@ -8,7 +8,7 @@
SEC("tp_btf/sys_enter")
__success
__log_level(2)
-__msg("r8 = *(u64 *)(r7 +0) ; R7=ptr_nameidata(off={{[0-9]+}}) R8=rdonly_untrusted_mem(sz=0)")
+__msg("r8 = *(u64 *)(r7 +0) ; R7=ptr_nameidata(imm={{[0-9]+}}) R8=rdonly_untrusted_mem(sz=0)")
__msg("r9 = *(u8 *)(r8 +0) ; R8=rdonly_untrusted_mem(sz=0) R9=scalar")
int btf_id_to_ptr_mem(void *ctx)
{
diff --git a/tools/testing/selftests/bpf/progs/verifier_align.c b/tools/testing/selftests/bpf/progs/verifier_align.c
index 90362d61f1feda29a51999188f289dcf75d2761f..24553ce6288170aecd123fcc8e392406b4321759 100644
--- a/tools/testing/selftests/bpf/progs/verifier_align.c
+++ b/tools/testing/selftests/bpf/progs/verifier_align.c
@@ -131,7 +131,7 @@ LBL ":" \
SEC("tc")
__success __log_level(2)
__flag(BPF_F_ANY_ALIGNMENT)
-__msg("6: R0=pkt(off=8,r=8)")
+__msg("6: R0=pkt(r=8,imm=8)")
__msg("6: {{.*}} R3={{[^)]*}}var_off=(0x0; 0xff)")
__msg("7: {{.*}} R3={{[^)]*}}var_off=(0x0; 0x1fe)")
__msg("8: {{.*}} R3={{[^)]*}}var_off=(0x0; 0x3fc)")
@@ -203,10 +203,10 @@ __naked void unknown_mul(void)
SEC("tc")
__success __log_level(2)
__msg("2: {{.*}} R5=pkt(r=0)")
-__msg("4: {{.*}} R5=pkt(off=14,r=0)")
-__msg("5: {{.*}} R4=pkt(off=14,r=0)")
+__msg("4: {{.*}} R5=pkt(r=0,imm=14)")
+__msg("5: {{.*}} R4=pkt(r=0,imm=14)")
__msg("9: {{.*}} R2=pkt(r=18)")
-__msg("10: {{.*}} R4={{[^)]*}}var_off=(0x0; 0xff){{.*}} R5=pkt(off=14,r=18)")
+__msg("10: {{.*}} R4={{[^)]*}}var_off=(0x0; 0xff){{.*}} R5=pkt(r=18,imm=14)")
__msg("13: {{.*}} R4={{[^)]*}}var_off=(0x0; 0xffff)")
__msg("14: {{.*}} R4={{[^)]*}}var_off=(0x0; 0xffff)")
__naked void packet_const_offset(void)
@@ -247,14 +247,14 @@ __msg("7: {{.*}} R6={{[^)]*}}var_off=(0x0; 0x3fc)")
/* Offset is added to packet pointer R5, resulting in
* known fixed offset, and variable offset from R6.
*/
-__msg("11: {{.*}} R5=pkt(id=1,off=14,")
+__msg("11: {{.*}} R5=pkt(id=1,{{[^)]*}},var_off=(0x2; 0x7fc)")
/* At the time the word size load is performed from R5,
* it's total offset is NET_IP_ALIGN + reg->off (0) +
* reg->aux_off (14) which is 16. Then the variable
* offset is considered using reg->aux_off_align which
* is 4 and meets the load's requirements.
*/
-__msg("15: {{.*}} R4={{[^)]*}}var_off=(0x0; 0x3fc){{.*}} R5={{[^)]*}}var_off=(0x0; 0x3fc)")
+__msg("15: {{.*}} R4={{[^)]*}}var_off=(0x2; 0x7fc){{.*}} R5={{[^)]*}}var_off=(0x2; 0x7fc)")
/* Variable offset is added to R5 packet pointer,
* resulting in auxiliary alignment of 4. To avoid BPF
* verifier's precision backtracking logging
@@ -266,37 +266,37 @@ __msg("18: {{.*}} R4={{[^)]*}}var_off=(0x0; 0x3fc){{.*}} R5={{[^)]*}}var_off=(0x
/* Constant offset is added to R5, resulting in
* reg->off of 14.
*/
-__msg("19: {{.*}} R5=pkt(id=2,off=14,")
+__msg("19: {{.*}} R5=pkt(id=2,{{[^)]*}}var_off=(0x2; 0x7fc)")
/* At the time the word size load is performed from R5,
* its total fixed offset is NET_IP_ALIGN + reg->off
* (14) which is 16. Then the variable offset is 4-byte
* aligned, so the total offset is 4-byte aligned and
* meets the load's requirements.
*/
-__msg("24: {{.*}} R4={{[^)]*}}var_off=(0x0; 0x3fc){{.*}} R5={{[^)]*}}var_off=(0x0; 0x3fc)")
+__msg("24: {{.*}} R4={{[^)]*}}var_off=(0x2; 0x7fc){{.*}} R5={{[^)]*}}var_off=(0x2; 0x7fc)")
/* Constant offset is added to R5 packet pointer,
* resulting in reg->off value of 14.
*/
-__msg("26: {{.*}} R5=pkt(off=14,r=8)")
+__msg("26: {{.*}} R5=pkt(r=8,imm=14)")
/* Variable offset is added to R5, resulting in a
* variable offset of (4n). See comment for insn #18
* for R4 = R5 trick.
*/
-__msg("28: {{.*}} R4={{[^)]*}}var_off=(0x0; 0x3fc){{.*}} R5={{[^)]*}}var_off=(0x0; 0x3fc)")
+__msg("28: {{.*}} R4={{[^)]*}}var_off=(0x2; 0x7fc){{.*}} R5={{[^)]*}}var_off=(0x2; 0x7fc)")
/* Constant is added to R5 again, setting reg->off to 18. */
-__msg("29: {{.*}} R5=pkt(id=3,off=18,")
+__msg("29: {{.*}} R5=pkt(id=3,{{[^)]*}}var_off=(0x2; 0x7fc)")
/* And once more we add a variable; resulting var_off
* is still (4n), fixed offset is not changed.
* Also, we create a new reg->id.
*/
-__msg("31: {{.*}} R4={{[^)]*}}var_off=(0x0; 0x7fc){{.*}} R5={{[^)]*}}var_off=(0x0; 0x7fc)")
+__msg("31: {{.*}} R4={{[^)]*}}var_off=(0x2; 0xffc){{.*}} R5={{[^)]*}}var_off=(0x2; 0xffc)")
/* At the time the word size load is performed from R5,
* its total fixed offset is NET_IP_ALIGN + reg->off (18)
* which is 20. Then the variable offset is (4n), so
* the total offset is 4-byte aligned and meets the
* load's requirements.
*/
-__msg("35: {{.*}} R4={{[^)]*}}var_off=(0x0; 0x7fc){{.*}} R5={{[^)]*}}var_off=(0x0; 0x7fc)")
+__msg("35: {{.*}} R4={{[^)]*}}var_off=(0x2; 0xffc){{.*}} R5={{[^)]*}}var_off=(0x2; 0xffc)")
__naked void packet_variable_offset(void)
{
asm volatile (" \
@@ -430,16 +430,10 @@ __msg("6: {{.*}} R5={{[^)]*}}var_off=(0x2; 0xfffffffffffffffc)")
/* Checked s>=0 */
__msg("9: {{.*}} R5={{[^)]*}}var_off=(0x2; 0x7ffffffffffffffc)")
/* packet pointer + nonnegative (4n+2) */
-__msg("11: {{.*}} R6={{[^)]*}}var_off=(0x2; 0x7ffffffffffffffc)")
-__msg("12: {{.*}} R4={{[^)]*}}var_off=(0x2; 0x7ffffffffffffffc)")
-/* NET_IP_ALIGN + (4n+2) == (4n), alignment is fine.
- * We checked the bounds, but it might have been able
- * to overflow if the packet pointer started in the
- * upper half of the address space.
- * So we did not get a 'range' on R6, and the access
- * attempt will fail.
- */
-__msg("15: {{.*}} R6={{[^)]*}}var_off=(0x2; 0x7ffffffffffffffc)")
+__msg("11: {{.*}} R4={{[^)]*}}var_off=(0x2; 0x7ffffffffffffffc){{.*}} R6={{[^)]*}}var_off=(0x2; 0x7ffffffffffffffc)")
+__msg("12: (07) r4 += 4")
+/* packet smax bound overflow */
+__msg("pkt pointer offset -9223372036854775808 is not allowed")
__naked void dubious_pointer_arithmetic(void)
{
asm volatile (" \
diff --git a/tools/testing/selftests/bpf/progs/verifier_bounds.c b/tools/testing/selftests/bpf/progs/verifier_bounds.c
index 560531404bcef88aed778b4dceeade03e086cfdb..d195eaa67d75ae3109d1281c68a0c9e316674fae 100644
--- a/tools/testing/selftests/bpf/progs/verifier_bounds.c
+++ b/tools/testing/selftests/bpf/progs/verifier_bounds.c
@@ -202,7 +202,7 @@ l0_%=: /* exit */ \
SEC("tc")
__description("bounds check based on reg_off + var_off + insn_off. test1")
-__failure __msg("value_size=8 off=1073741825")
+__failure __msg("map_value pointer offset 1073741822 is not allowed")
__naked void var_off_insn_off_test1(void)
{
asm volatile (" \
diff --git a/tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c b/tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c
index 911caa8fd1b759994177353afd6c79ad7eb6eba4..4ee3b7a708f7a47b9305139eed8e5ab733687b50 100644
--- a/tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c
+++ b/tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c
@@ -412,7 +412,7 @@ l0_%=: r0 = 0; \
SEC("tc")
__description("direct packet access: test17 (pruning, alignment)")
-__failure __msg("misaligned packet access off 2+0+15+-4 size 4")
+__failure __msg("misaligned packet access off 2+15+-4 size 4")
__flag(BPF_F_STRICT_ALIGNMENT)
__naked void packet_access_test17_pruning_alignment(void)
{
@@ -569,7 +569,7 @@ l0_%=: r0 = 0; \
SEC("tc")
__description("direct packet access: test23 (x += pkt_ptr, 4)")
-__failure __msg("invalid access to packet, off=0 size=8, R5(id=3,off=0,r=0)")
+__failure __msg("invalid access to packet, off=31 size=8, R5(id=3,off=31,r=0)")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void test23_x_pkt_ptr_4(void)
{
diff --git a/tools/testing/selftests/bpf/progs/verifier_gotox.c b/tools/testing/selftests/bpf/progs/verifier_gotox.c
index 607dad058ca16be9b3e55c04afff9ee857b2a56d..548dce00f5fbacf1087f139c04d3a5e36d7a82e5 100644
--- a/tools/testing/selftests/bpf/progs/verifier_gotox.c
+++ b/tools/testing/selftests/bpf/progs/verifier_gotox.c
@@ -131,7 +131,7 @@ DEFINE_INVALID_SIZE_PROG(u16, __failure __msg("Invalid read of 2 bytes from insn
DEFINE_INVALID_SIZE_PROG(u8, __failure __msg("Invalid read of 1 bytes from insn_array"))
SEC("socket")
-__failure __msg("misaligned value access off 0+1+0 size 8")
+__failure __msg("misaligned value access off 1+0 size 8")
__naked void jump_table_misaligned_access(void)
{
asm volatile (" \
@@ -187,7 +187,7 @@ jt0_%=: \
}
SEC("socket")
-__failure __msg("invalid access to map value, value_size=16 off=-24 size=8")
+__failure __msg("R0 min value is negative")
__naked void jump_table_invalid_mem_acceess_neg(void)
{
asm volatile (" \
diff --git a/tools/testing/selftests/bpf/progs/verifier_helper_packet_access.c b/tools/testing/selftests/bpf/progs/verifier_helper_packet_access.c
index 74f5f9cd153d9103661ea07ed70615e6e35f0f6e..71cee3f583243f00c2a288915db69c3b1e581b1e 100644
--- a/tools/testing/selftests/bpf/progs/verifier_helper_packet_access.c
+++ b/tools/testing/selftests/bpf/progs/verifier_helper_packet_access.c
@@ -360,7 +360,7 @@ l0_%=: r0 = 0; \
SEC("tc")
__description("helper access to packet: test15, cls helper fail sub")
-__failure __msg("invalid access to packet")
+__failure __msg("R1 min value is negative")
__naked void test15_cls_helper_fail_sub(void)
{
asm volatile (" \
diff --git a/tools/testing/selftests/bpf/progs/verifier_helper_value_access.c b/tools/testing/selftests/bpf/progs/verifier_helper_value_access.c
index 886498b5e6f34339161fc705f34e19dcb8f4ed03..6d2a38597c34f4135f9defe6bb6212e88d1a5542 100644
--- a/tools/testing/selftests/bpf/progs/verifier_helper_value_access.c
+++ b/tools/testing/selftests/bpf/progs/verifier_helper_value_access.c
@@ -1100,7 +1100,7 @@ l0_%=: exit; \
SEC("tracepoint")
__description("map helper access to adjusted map (via const imm): out-of-bound 2")
-__failure __msg("invalid access to map value, value_size=16 off=-4 size=8")
+__failure __msg("R2 min value is negative")
__naked void imm_out_of_bound_2(void)
{
asm volatile (" \
@@ -1176,7 +1176,7 @@ l0_%=: exit; \
SEC("tracepoint")
__description("map helper access to adjusted map (via const reg): out-of-bound 2")
-__failure __msg("invalid access to map value, value_size=16 off=-4 size=8")
+__failure __msg("R2 min value is negative")
__naked void reg_out_of_bound_2(void)
{
asm volatile (" \
diff --git a/tools/testing/selftests/bpf/progs/verifier_int_ptr.c b/tools/testing/selftests/bpf/progs/verifier_int_ptr.c
index 59e34d5586543e22de9d2d284ea0efb18d342dee..6627f44faf4bca7b8f11fb7dc3ae7b17c81bf440 100644
--- a/tools/testing/selftests/bpf/progs/verifier_int_ptr.c
+++ b/tools/testing/selftests/bpf/progs/verifier_int_ptr.c
@@ -65,7 +65,7 @@ __naked void ptr_to_long_half_uninitialized(void)
SEC("cgroup/sysctl")
__description("arg pointer to long misaligned")
-__failure __msg("misaligned stack access off 0+-20+0 size 8")
+__failure __msg("misaligned stack access off -20+0 size 8")
__naked void arg_ptr_to_long_misaligned(void)
{
asm volatile (" \
diff --git a/tools/testing/selftests/bpf/progs/verifier_meta_access.c b/tools/testing/selftests/bpf/progs/verifier_meta_access.c
index d81722fb5f19ff373ba63942972f71077f2d30ce..62235f032ffe7bb9ffb264f3cd96c3e8aececf44 100644
--- a/tools/testing/selftests/bpf/progs/verifier_meta_access.c
+++ b/tools/testing/selftests/bpf/progs/verifier_meta_access.c
@@ -27,7 +27,7 @@ l0_%=: r0 = 0; \
SEC("xdp")
__description("meta access, test2")
-__failure __msg("invalid access to packet, off=-8")
+__failure __msg("R0 min value is negative")
__naked void meta_access_test2(void)
{
asm volatile (" \
diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index 7a13dbd794b2fb88038b529f1a954bc8e09e1644..893d3bb024a0bdc39843ecd13a44fb8003ee79c1 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -656,7 +656,7 @@ __msg("mark_precise: frame0: regs= stack=-8 before 6: (05) goto pc+0")
__msg("mark_precise: frame0: regs= stack=-8 before 5: (7b) *(u64 *)(r10 -16) = r0")
__msg("mark_precise: frame0: regs= stack=-8 before 4: (b7) r0 = 1")
__msg("mark_precise: frame0: regs= stack=-8 before 3: (7a) *(u64 *)(r10 -8) = 1")
-__msg("10: R1=map_value(map=.data.two_byte_,ks=4,vs=2,off=1) R2=1")
+__msg("10: R1=map_value(map=.data.two_byte_,ks=4,vs=2,imm=1) R2=1")
/* validate load from fp-16, which was initialized using BPF_STX_MEM */
__msg("12: (79) r2 = *(u64 *)(r10 -16) ; R2=1 R10=fp0 fp-16=1")
__msg("13: (0f) r1 += r2")
@@ -673,7 +673,7 @@ __msg("mark_precise: frame0: last_idx 6 first_idx 3 subseq_idx 7")
__msg("mark_precise: frame0: regs= stack=-16 before 6: (05) goto pc+0")
__msg("mark_precise: frame0: regs= stack=-16 before 5: (7b) *(u64 *)(r10 -16) = r0")
__msg("mark_precise: frame0: regs=r0 stack= before 4: (b7) r0 = 1")
-__msg("14: R1=map_value(map=.data.two_byte_,ks=4,vs=2,off=1) R2=1")
+__msg("14: R1=map_value(map=.data.two_byte_,ks=4,vs=2,imm=1) R2=1")
__naked void stack_load_preserves_const_precision(void)
{
asm volatile (
@@ -732,7 +732,7 @@ __msg("mark_precise: frame0: regs= stack=-8 before 6: (05) goto pc+0")
__msg("mark_precise: frame0: regs= stack=-8 before 5: (63) *(u32 *)(r10 -16) = r0")
__msg("mark_precise: frame0: regs= stack=-8 before 4: (b7) r0 = 1")
__msg("mark_precise: frame0: regs= stack=-8 before 3: (62) *(u32 *)(r10 -8) = 1")
-__msg("10: R1=map_value(map=.data.two_byte_,ks=4,vs=2,off=1) R2=1")
+__msg("10: R1=map_value(map=.data.two_byte_,ks=4,vs=2,imm=1) R2=1")
/* validate load from fp-16, which was initialized using BPF_STX_MEM */
__msg("12: (61) r2 = *(u32 *)(r10 -16) ; R2=1 R10=fp0 fp-16=????1")
__msg("13: (0f) r1 += r2")
@@ -748,7 +748,7 @@ __msg("mark_precise: frame0: last_idx 6 first_idx 3 subseq_idx 7")
__msg("mark_precise: frame0: regs= stack=-16 before 6: (05) goto pc+0")
__msg("mark_precise: frame0: regs= stack=-16 before 5: (63) *(u32 *)(r10 -16) = r0")
__msg("mark_precise: frame0: regs=r0 stack= before 4: (b7) r0 = 1")
-__msg("14: R1=map_value(map=.data.two_byte_,ks=4,vs=2,off=1) R2=1")
+__msg("14: R1=map_value(map=.data.two_byte_,ks=4,vs=2,imm=1) R2=1")
__naked void stack_load_preserves_const_precision_subreg(void)
{
asm volatile (
diff --git a/tools/testing/selftests/bpf/progs/verifier_stack_ptr.c b/tools/testing/selftests/bpf/progs/verifier_stack_ptr.c
index 24aabc6083fd9698ba22bf93f07db76c99bdd9ee..8e8cf8232255fed584bc977eb80a9ebd106faec8 100644
--- a/tools/testing/selftests/bpf/progs/verifier_stack_ptr.c
+++ b/tools/testing/selftests/bpf/progs/verifier_stack_ptr.c
@@ -37,7 +37,7 @@ __naked void ptr_to_stack_store_load(void)
SEC("socket")
__description("PTR_TO_STACK store/load - bad alignment on off")
-__failure __msg("misaligned stack access off 0+-8+2 size 8")
+__failure __msg("misaligned stack access off -8+2 size 8")
__failure_unpriv
__naked void load_bad_alignment_on_off(void)
{
@@ -53,7 +53,7 @@ __naked void load_bad_alignment_on_off(void)
SEC("socket")
__description("PTR_TO_STACK store/load - bad alignment on reg")
-__failure __msg("misaligned stack access off 0+-10+8 size 8")
+__failure __msg("misaligned stack access off -10+8 size 8")
__failure_unpriv
__naked void load_bad_alignment_on_reg(void)
{
diff --git a/tools/testing/selftests/bpf/progs/verifier_value_ptr_arith.c b/tools/testing/selftests/bpf/progs/verifier_value_ptr_arith.c
index af7938ce56cb07a340470ce4629baea3753562ab..b3b701b445503ad38231c6908f5432226b79a63b 100644
--- a/tools/testing/selftests/bpf/progs/verifier_value_ptr_arith.c
+++ b/tools/testing/selftests/bpf/progs/verifier_value_ptr_arith.c
@@ -346,7 +346,7 @@ l2_%=: r0 = 1; \
SEC("socket")
__description("map access: value_ptr -= known scalar from different maps")
__success __failure_unpriv
-__msg_unpriv("R0 min value is outside of the allowed memory range")
+__msg_unpriv("R0 min value is negative")
__retval(1)
__naked void known_scalar_from_different_maps(void)
{
@@ -683,9 +683,7 @@ l0_%=: r0 = 1; \
SEC("socket")
__description("map access: value_ptr -= known scalar, lower oob arith, test 1")
-__failure __msg("R0 min value is outside of the allowed memory range")
-__failure_unpriv
-__msg_unpriv("R0 pointer arithmetic of map value goes out of range")
+__failure __msg("R0 min value is negative")
__naked void lower_oob_arith_test_1(void)
{
asm volatile (" \
@@ -840,7 +838,7 @@ l0_%=: r0 = 1; \
SEC("socket")
__description("map access: value_ptr += known scalar, 3")
-__failure __msg("invalid access to map value")
+__failure __msg("R0 min value is negative")
__failure_unpriv
__naked void value_ptr_known_scalar_3(void)
{
@@ -1207,7 +1205,7 @@ l0_%=: r0 = 1; \
SEC("socket")
__description("map access: value_ptr -= known scalar")
-__failure __msg("R0 min value is outside of the allowed memory range")
+__failure __msg("R0 min value is negative")
__failure_unpriv
__naked void access_value_ptr_known_scalar(void)
{
diff --git a/tools/testing/selftests/bpf/progs/verifier_xdp_direct_packet_access.c b/tools/testing/selftests/bpf/progs/verifier_xdp_direct_packet_access.c
index df2dfd1b15d131cda8e314fa207c8fe03074837f..0b86d95a41335983941fe8eb9e7d74d2ac56678c 100644
--- a/tools/testing/selftests/bpf/progs/verifier_xdp_direct_packet_access.c
+++ b/tools/testing/selftests/bpf/progs/verifier_xdp_direct_packet_access.c
@@ -69,7 +69,7 @@ l0_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_data' > pkt_end, bad access 1")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void pkt_end_bad_access_1_1(void)
{
@@ -131,7 +131,7 @@ l0_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_data' > pkt_end, corner case -1, bad access")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void corner_case_1_bad_access_1(void)
{
@@ -173,7 +173,7 @@ l1_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_end > pkt_data', corner case -1, bad access")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void corner_case_1_bad_access_2(void)
{
@@ -279,7 +279,7 @@ l1_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_data' < pkt_end, corner case -1, bad access")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void corner_case_1_bad_access_3(void)
{
@@ -384,7 +384,7 @@ l0_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_end < pkt_data', bad access 1")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void pkt_data_bad_access_1_1(void)
{
@@ -446,7 +446,7 @@ l0_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_end < pkt_data', corner case -1, bad access")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void corner_case_1_bad_access_4(void)
{
@@ -487,7 +487,7 @@ l0_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_data' >= pkt_end, corner case -1, bad access")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void corner_case_1_bad_access_5(void)
{
@@ -590,7 +590,7 @@ l1_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_end >= pkt_data', bad access 1")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void pkt_data_bad_access_1_2(void)
{
@@ -654,7 +654,7 @@ l1_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_end >= pkt_data', corner case -1, bad access")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void corner_case_1_bad_access_6(void)
{
@@ -697,7 +697,7 @@ l1_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_data' <= pkt_end, bad access 1")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void pkt_end_bad_access_1_2(void)
{
@@ -761,7 +761,7 @@ l1_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_data' <= pkt_end, corner case -1, bad access")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void corner_case_1_bad_access_7(void)
{
@@ -803,7 +803,7 @@ l0_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_end <= pkt_data', corner case -1, bad access")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void corner_case_1_bad_access_8(void)
{
@@ -905,7 +905,7 @@ l0_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_meta' > pkt_data, bad access 1")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void pkt_data_bad_access_1_3(void)
{
@@ -926,7 +926,7 @@ l0_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_meta' > pkt_data, bad access 2")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void pkt_data_bad_access_2_5(void)
{
@@ -967,7 +967,7 @@ l0_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_meta' > pkt_data, corner case -1, bad access")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void corner_case_1_bad_access_9(void)
{
@@ -1009,7 +1009,7 @@ l1_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_data > pkt_meta', corner case -1, bad access")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void corner_case_1_bad_access_10(void)
{
@@ -1031,7 +1031,7 @@ l1_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_data > pkt_meta', bad access 2")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void pkt_meta_bad_access_2_1(void)
{
@@ -1115,7 +1115,7 @@ l1_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_meta' < pkt_data, corner case -1, bad access")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void corner_case_1_bad_access_11(void)
{
@@ -1137,7 +1137,7 @@ l1_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_meta' < pkt_data, bad access 2")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void pkt_data_bad_access_2_6(void)
{
@@ -1220,7 +1220,7 @@ l0_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_data < pkt_meta', bad access 1")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void pkt_meta_bad_access_1_1(void)
{
@@ -1241,7 +1241,7 @@ l0_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_data < pkt_meta', bad access 2")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void pkt_meta_bad_access_2_2(void)
{
@@ -1282,7 +1282,7 @@ l0_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_data < pkt_meta', corner case -1, bad access")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void corner_case_1_bad_access_12(void)
{
@@ -1323,7 +1323,7 @@ l0_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_meta' >= pkt_data, corner case -1, bad access")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void corner_case_1_bad_access_13(void)
{
@@ -1344,7 +1344,7 @@ l0_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_meta' >= pkt_data, bad access 2")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void pkt_data_bad_access_2_7(void)
{
@@ -1426,7 +1426,7 @@ l1_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_data >= pkt_meta', bad access 1")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void pkt_meta_bad_access_1_2(void)
{
@@ -1448,7 +1448,7 @@ l1_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_data >= pkt_meta', bad access 2")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void pkt_meta_bad_access_2_3(void)
{
@@ -1490,7 +1490,7 @@ l1_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_data >= pkt_meta', corner case -1, bad access")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void corner_case_1_bad_access_14(void)
{
@@ -1533,7 +1533,7 @@ l1_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_meta' <= pkt_data, bad access 1")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void pkt_data_bad_access_1_4(void)
{
@@ -1555,7 +1555,7 @@ l1_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_meta' <= pkt_data, bad access 2")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void pkt_data_bad_access_2_8(void)
{
@@ -1597,7 +1597,7 @@ l1_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_meta' <= pkt_data, corner case -1, bad access")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void corner_case_1_bad_access_15(void)
{
@@ -1639,7 +1639,7 @@ l0_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_data <= pkt_meta', corner case -1, bad access")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void corner_case_1_bad_access_16(void)
{
@@ -1660,7 +1660,7 @@ l0_%=: r0 = 0; \
SEC("xdp")
__description("XDP pkt read, pkt_data <= pkt_meta', bad access 2")
-__failure __msg("R1 offset is outside of the packet")
+__failure __msg("R1 {{min|max}} value is outside of the allowed memory range")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void pkt_meta_bad_access_2_4(void)
{
diff --git a/tools/testing/selftests/bpf/verifier/calls.c b/tools/testing/selftests/bpf/verifier/calls.c
index 9ca83dce100d4e13bbc46c9b6f7146766645d5b1..86887130a0efb5c4b6a29f70c7cba0abdde89825 100644
--- a/tools/testing/selftests/bpf/verifier/calls.c
+++ b/tools/testing/selftests/bpf/verifier/calls.c
@@ -220,7 +220,7 @@
},
.result_unpriv = REJECT,
.result = REJECT,
- .errstr = "variable trusted_ptr_ access var_off=(0x0; 0x7) disallowed",
+ .errstr = "R1 must have zero offset when passed to release func or trusted arg to kfunc",
},
{
"calls: invalid kfunc call: referenced arg needs refcounted PTR_TO_BTF_ID",
--
2.51.1
* [PATCH bpf-next v2 3/4] nfp: bpf: remove references to bpf_reg_state->off for pointers
2026-02-12 21:34 [PATCH bpf-next v2 0/4] bpf: consolidate pointer offset tracking in var_off Eduard Zingerman
2026-02-12 21:34 ` [PATCH bpf-next v2 1/4] bpf: split check_reg_sane_offset() in two parts Eduard Zingerman
2026-02-12 21:34 ` [PATCH bpf-next v2 2/4] bpf: use reg->var_off instead of reg->off for pointers Eduard Zingerman
@ 2026-02-12 21:34 ` Eduard Zingerman
2026-02-12 21:34 ` [PATCH bpf-next v2 4/4] bpf: rename bpf_reg_state->off to bpf_reg_state->delta Eduard Zingerman
2026-02-13 23:50 ` [PATCH bpf-next v2 0/4] bpf: consolidate pointer offset tracking in var_off patchwork-bot+netdevbpf
4 siblings, 0 replies; 6+ messages in thread
From: Eduard Zingerman @ 2026-02-12 21:34 UTC (permalink / raw)
To: bpf, ast
Cc: andrii, daniel, martin.lau, kernel-team, yonghong.song, kuba,
Eduard Zingerman
The previous commit changed the BPF verifier to track pointer offsets
entirely in bpf_reg_state->var_off. This commit removes the remaining
references to bpf_reg_state->off in the netronome code.
- In most places the `.off` field is used as part of a sum:
`reg->off + reg->var_off.value`. In such code the `reg->off` addend
is removed.
- Function jit.c:cross_mem_access() uses `.off` but ignores `.var_off`
when computing the memory access offset. Here the access to `.off` is
replaced with an access to `.var_off.value`. verifier.c:nfp_bpf_check_ptr()
rejects programs for which !tnum_is_const(reg->var_off),
so the changed code reads a valid constant.
It is unclear why `.var_off` was ignored in this function previously.
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
drivers/net/ethernet/netronome/nfp/bpf/jit.c | 18 ++++++++----------
drivers/net/ethernet/netronome/nfp/bpf/verifier.c | 12 ++++++------
2 files changed, 14 insertions(+), 16 deletions(-)
diff --git a/drivers/net/ethernet/netronome/nfp/bpf/jit.c b/drivers/net/ethernet/netronome/nfp/bpf/jit.c
index 3a02eef58cc6d977fcd130e0b69947deff468d3f..c9ab55abfd4c5d72e63f3f5b089915298a06c121 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/jit.c
+++ b/drivers/net/ethernet/netronome/nfp/bpf/jit.c
@@ -1731,7 +1731,7 @@ map_call_stack_common(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
/* We only have to reload LM0 if the key is not at start of stack */
lm_off = nfp_prog->stack_frame_depth;
- lm_off += meta->arg2.reg.var_off.value + meta->arg2.reg.off;
+ lm_off += meta->arg2.reg.var_off.value;
load_lm_ptr = meta->arg2.var_off || lm_off;
/* Set LM0 to start of key */
@@ -2874,8 +2874,7 @@ mem_ldx(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
}
if (meta->ptr.type == PTR_TO_STACK)
- return mem_ldx_stack(nfp_prog, meta, size,
- meta->ptr.off + meta->ptr.var_off.value);
+ return mem_ldx_stack(nfp_prog, meta, size, meta->ptr.var_off.value);
if (meta->ptr.type == PTR_TO_MAP_VALUE)
return mem_ldx_emem(nfp_prog, meta, size);
@@ -2985,8 +2984,7 @@ mem_stx(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
return mem_stx_data(nfp_prog, meta, size);
if (meta->ptr.type == PTR_TO_STACK)
- return mem_stx_stack(nfp_prog, meta, size,
- meta->ptr.off + meta->ptr.var_off.value);
+ return mem_stx_stack(nfp_prog, meta, size, meta->ptr.var_off.value);
return -EOPNOTSUPP;
}
@@ -4153,9 +4151,9 @@ cross_mem_access(struct bpf_insn *ld, struct nfp_insn_meta *head_ld_meta,
/* Canonicalize the offsets. Turn all of them against the original
* base register.
*/
- head_ld_off = head_ld_meta->insn.off + head_ld_meta->ptr.off;
- head_st_off = head_st_meta->insn.off + head_st_meta->ptr.off;
- ld_off = ld->off + head_ld_meta->ptr.off;
+ head_ld_off = head_ld_meta->insn.off + head_ld_meta->ptr.var_off.value;
+ head_st_off = head_st_meta->insn.off + head_st_meta->ptr.var_off.value;
+ ld_off = ld->off + head_ld_meta->ptr.var_off.value;
/* Ascending order cross. */
if (ld_off > head_ld_off &&
@@ -4326,7 +4324,7 @@ static void nfp_bpf_opt_pkt_cache(struct nfp_prog *nfp_prog)
* support this.
*/
if (meta->ptr.id == range_ptr_id &&
- meta->ptr.off == range_ptr_off) {
+ meta->ptr.var_off.value == range_ptr_off) {
s16 new_start = range_start;
s16 end, off = insn->off;
s16 new_end = range_end;
@@ -4361,7 +4359,7 @@ static void nfp_bpf_opt_pkt_cache(struct nfp_prog *nfp_prog)
range_node = meta;
range_node->pkt_cache.do_init = true;
range_ptr_id = range_node->ptr.id;
- range_ptr_off = range_node->ptr.off;
+ range_ptr_off = range_node->ptr.var_off.value;
range_start = insn->off;
range_end = insn->off + BPF_LDST_BYTES(insn);
}
diff --git a/drivers/net/ethernet/netronome/nfp/bpf/verifier.c b/drivers/net/ethernet/netronome/nfp/bpf/verifier.c
index 9d235c0ce46a8149370131d4e7b403ad3224b955..2893be0f9f18c7084fd75c1c09a8dff298689f0c 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/verifier.c
+++ b/drivers/net/ethernet/netronome/nfp/bpf/verifier.c
@@ -98,7 +98,7 @@ static bool nfp_bpf_map_update_value_ok(struct bpf_verifier_env *env)
offmap = map_to_offmap(reg1->map_ptr);
nfp_map = offmap->dev_priv;
- off = reg3->off + reg3->var_off.value;
+ off = reg3->var_off.value;
for (i = 0; i < offmap->map.value_size; i++) {
struct bpf_stack_state *stack_entry;
@@ -137,7 +137,7 @@ nfp_bpf_stack_arg_ok(const char *fname, struct bpf_verifier_env *env,
return false;
}
- off = reg->var_off.value + reg->off;
+ off = reg->var_off.value;
if (-off % 4) {
pr_vlog(env, "%s: unaligned stack pointer %lld\n", fname, -off);
return false;
@@ -147,7 +147,7 @@ nfp_bpf_stack_arg_ok(const char *fname, struct bpf_verifier_env *env,
if (!old_arg)
return true;
- old_off = old_arg->reg.var_off.value + old_arg->reg.off;
+ old_off = old_arg->reg.var_off.value;
old_arg->var_off |= off != old_off;
return true;
@@ -358,8 +358,8 @@ nfp_bpf_check_stack_access(struct nfp_prog *nfp_prog,
if (meta->ptr.type == NOT_INIT)
return 0;
- old_off = meta->ptr.off + meta->ptr.var_off.value;
- new_off = reg->off + reg->var_off.value;
+ old_off = meta->ptr.var_off.value;
+ new_off = reg->var_off.value;
meta->ptr_not_const |= old_off != new_off;
@@ -428,7 +428,7 @@ nfp_bpf_map_mark_used(struct bpf_verifier_env *env, struct nfp_insn_meta *meta,
return -EOPNOTSUPP;
}
- off = reg->var_off.value + meta->insn.off + reg->off;
+ off = reg->var_off.value + meta->insn.off;
size = BPF_LDST_BYTES(&meta->insn);
offmap = map_to_offmap(reg->map_ptr);
nfp_map = offmap->dev_priv;
--
2.51.1
* [PATCH bpf-next v2 4/4] bpf: rename bpf_reg_state->off to bpf_reg_state->delta
2026-02-12 21:34 [PATCH bpf-next v2 0/4] bpf: consolidate pointer offset tracking in var_off Eduard Zingerman
` (2 preceding siblings ...)
2026-02-12 21:34 ` [PATCH bpf-next v2 3/4] nfp: bpf: remove references to bpf_reg_state->off " Eduard Zingerman
@ 2026-02-12 21:34 ` Eduard Zingerman
2026-02-13 23:50 ` [PATCH bpf-next v2 0/4] bpf: consolidate pointer offset tracking in var_off patchwork-bot+netdevbpf
4 siblings, 0 replies; 6+ messages in thread
From: Eduard Zingerman @ 2026-02-12 21:34 UTC (permalink / raw)
To: bpf, ast
Cc: andrii, daniel, martin.lau, kernel-team, yonghong.song, kuba,
Eduard Zingerman
This field is now used only for tracking linked scalar registers.
Rename it to 'delta' to better describe its purpose:
the constant delta between "linked" scalars with the same ID.
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
include/linux/bpf_verifier.h | 6 +++---
kernel/bpf/log.c | 8 ++++----
kernel/bpf/verifier.c | 18 +++++++++---------
3 files changed, 16 insertions(+), 16 deletions(-)
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index a97bdbf3a07b63246ebf2021816e81c147989381..c1e30096ea7bfc90b4eb79ab40f4fbd25b69188a 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -40,7 +40,7 @@ struct bpf_reg_state {
/*
* Constant delta between "linked" scalars with the same ID.
*/
- s32 off;
+ s32 delta;
union {
/* valid when type == PTR_TO_PACKET */
int range;
@@ -145,9 +145,9 @@ struct bpf_reg_state {
* Upper bit of ID is used to remember relationship between "linked"
* registers. Example:
* r1 = r2; both will have r1->id == r2->id == N
- * r1 += 10; r1->id == N | BPF_ADD_CONST and r1->off == 10
+ * r1 += 10; r1->id == N | BPF_ADD_CONST and r1->delta == 10
* r3 = r2; both will have r3->id == r2->id == N
- * w3 += 10; r3->id == N | BPF_ADD_CONST32 and r3->off == 10
+ * w3 += 10; r3->id == N | BPF_ADD_CONST32 and r3->delta == 10
*/
#define BPF_ADD_CONST64 (1U << 31)
#define BPF_ADD_CONST32 (1U << 30)
diff --git a/kernel/bpf/log.c b/kernel/bpf/log.c
index 39a731392d6520a1345fc3d79e86fc43f63e426a..37d72b0521920daf6f3274ad77d055c8b3aafa1b 100644
--- a/kernel/bpf/log.c
+++ b/kernel/bpf/log.c
@@ -694,7 +694,7 @@ static void print_reg_state(struct bpf_verifier_env *env,
if (state->frameno != reg->frameno)
verbose(env, "[%d]", reg->frameno);
if (tnum_is_const(reg->var_off)) {
- verbose_snum(env, reg->var_off.value + reg->off);
+ verbose_snum(env, reg->var_off.value + reg->delta);
return;
}
}
@@ -704,7 +704,7 @@ static void print_reg_state(struct bpf_verifier_env *env,
if (reg->id)
verbose_a("id=%d", reg->id & ~BPF_ADD_CONST);
if (reg->id & BPF_ADD_CONST)
- verbose(env, "%+d", reg->off);
+ verbose(env, "%+d", reg->delta);
if (reg->ref_obj_id)
verbose_a("ref_obj_id=%d", reg->ref_obj_id);
if (type_is_non_owning_ref(reg->type))
@@ -716,9 +716,9 @@ static void print_reg_state(struct bpf_verifier_env *env,
reg->map_ptr->key_size,
reg->map_ptr->value_size);
}
- if (t != SCALAR_VALUE && reg->off) {
+ if (t != SCALAR_VALUE && reg->delta) {
verbose_a("off=");
- verbose_snum(env, reg->off);
+ verbose_snum(env, reg->delta);
}
if (type_is_pkt_pointer(t)) {
verbose_a("r=");
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 2c5794dad66889b7f872aa9882652efae877ac1a..0162f946032fe317ce1e5cf4a82e86a9357eca2b 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5095,7 +5095,7 @@ static void assign_scalar_id_before_mov(struct bpf_verifier_env *env,
* Cleared it, since multiple rX += const are not supported.
*/
src_reg->id = 0;
- src_reg->off = 0;
+ src_reg->delta = 0;
}
if (!src_reg->id && !tnum_is_const(src_reg->var_off))
@@ -16219,14 +16219,14 @@ static int adjust_reg_min_max_vals(struct bpf_verifier_env *env,
* we cannot accumulate another val into rx->off.
*/
clear_id:
- dst_reg->off = 0;
+ dst_reg->delta = 0;
dst_reg->id = 0;
} else {
if (alu32)
dst_reg->id |= BPF_ADD_CONST32;
else
dst_reg->id |= BPF_ADD_CONST64;
- dst_reg->off = off;
+ dst_reg->delta = off;
}
} else {
/*
@@ -17305,18 +17305,18 @@ static void sync_linked_regs(struct bpf_verifier_env *env, struct bpf_verifier_s
if ((reg->id & ~BPF_ADD_CONST) != (known_reg->id & ~BPF_ADD_CONST))
continue;
if ((!(reg->id & BPF_ADD_CONST) && !(known_reg->id & BPF_ADD_CONST)) ||
- reg->off == known_reg->off) {
+ reg->delta == known_reg->delta) {
s32 saved_subreg_def = reg->subreg_def;
copy_register_state(reg, known_reg);
reg->subreg_def = saved_subreg_def;
} else {
s32 saved_subreg_def = reg->subreg_def;
- s32 saved_off = reg->off;
+ s32 saved_off = reg->delta;
u32 saved_id = reg->id;
fake_reg.type = SCALAR_VALUE;
- __mark_reg_known(&fake_reg, (s64)reg->off - (s64)known_reg->off);
+ __mark_reg_known(&fake_reg, (s64)reg->delta - (s64)known_reg->delta);
/* reg = known_reg; reg += delta */
copy_register_state(reg, known_reg);
@@ -17324,7 +17324,7 @@ static void sync_linked_regs(struct bpf_verifier_env *env, struct bpf_verifier_s
* Must preserve off, id and subreg_def flag,
* otherwise another sync_linked_regs() will be incorrect.
*/
- reg->off = saved_off;
+ reg->delta = saved_off;
reg->id = saved_id;
reg->subreg_def = saved_subreg_def;
@@ -19629,7 +19629,7 @@ static void clear_singular_ids(struct bpf_verifier_env *env,
continue;
if (idset_cnt_get(idset, reg->id & ~BPF_ADD_CONST) == 1) {
reg->id = 0;
- reg->off = 0;
+ reg->delta = 0;
}
}));
}
@@ -19766,7 +19766,7 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
return false;
/* Both have offset linkage: offsets must match */
- if ((rold->id & BPF_ADD_CONST) && rold->off != rcur->off)
+ if ((rold->id & BPF_ADD_CONST) && rold->delta != rcur->delta)
return false;
if (!check_scalar_ids(rold->id, rcur->id, idmap))
--
2.51.1
* Re: [PATCH bpf-next v2 0/4] bpf: consolidate pointer offset tracking in var_off
2026-02-12 21:34 [PATCH bpf-next v2 0/4] bpf: consolidate pointer offset tracking in var_off Eduard Zingerman
` (3 preceding siblings ...)
2026-02-12 21:34 ` [PATCH bpf-next v2 4/4] bpf: rename bpf_reg_state->off to bpf_reg_state->delta Eduard Zingerman
@ 2026-02-13 23:50 ` patchwork-bot+netdevbpf
4 siblings, 0 replies; 6+ messages in thread
From: patchwork-bot+netdevbpf @ 2026-02-13 23:50 UTC (permalink / raw)
To: Eduard Zingerman
Cc: bpf, ast, andrii, daniel, martin.lau, kernel-team, yonghong.song,
kuba
Hello:
This series was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:
On Thu, 12 Feb 2026 13:34:20 -0800 you wrote:
> Consolidate static and varying pointer offset tracking logic in the
> BPF verifier. All pointer offsets are now represented solely using
> `reg->var_off` and min/max fields, simplifying pointer tracking code
> and making it easier to widen pointer registers for loop convergence
> checks.
>
> Patch 1 is a preparatory refactoring of check_reg_sane_offset().
> Patch 2 is the main change, moving pointer offsets from `reg->off`
> to `reg->var_off`.
> Patch 3 removes references to `reg->off` in netronome code.
> Patch 4 renames the now-repurposed `reg->off` field to `reg->delta`,
> reflecting its remaining role as a constant delta between linked
> scalar registers.
>
> [...]
Here is the summary with links:
- [bpf-next,v2,1/4] bpf: split check_reg_sane_offset() in two parts
https://git.kernel.org/bpf/bpf-next/c/ed20a14309e0
- [bpf-next,v2,2/4] bpf: use reg->var_off instead of reg->off for pointers
https://git.kernel.org/bpf/bpf-next/c/022ac0750883
- [bpf-next,v2,3/4] nfp: bpf: remove references to bpf_reg_state->off for pointers
https://git.kernel.org/bpf/bpf-next/c/6363fc17c9f5
- [bpf-next,v2,4/4] bpf: rename bpf_reg_state->off to bpf_reg_state->delta
https://git.kernel.org/bpf/bpf-next/c/3d91c618aca4
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html