* [PATCH bpf-next 0/3] bpf: Improve linked register tracking
@ 2026-01-07 20:39 Puranjay Mohan
2026-01-07 20:39 ` [PATCH bpf-next 1/3] bpf: Support negative offsets and BPF_SUB for " Puranjay Mohan
` (2 more replies)
0 siblings, 3 replies; 13+ messages in thread
From: Puranjay Mohan @ 2026-01-07 20:39 UTC (permalink / raw)
To: bpf
Cc: Puranjay Mohan, Puranjay Mohan, Alexei Starovoitov,
Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau,
Eduard Zingerman, Kumar Kartikeya Dwivedi, Mykyta Yatsenko,
kernel-team
This series extends the BPF verifier's linked register tracking to handle
negative offsets and BPF_SUB operations, enabling better bounds propagation for
common arithmetic patterns.
The verifier previously only tracked positive constant deltas between linked
registers using BPF_ADD. This meant patterns using negative offsets or
subtraction couldn't benefit from bounds propagation:
r1 = r0
r1 += -4
if r1 s>= 0 goto ... // r1 >= 0 implies r0 >= 4
// verifier couldn't propagate bounds back to r0
Patch 1 extends adjust_reg_min_max_vals() to:
- Accept BPF_SUB in addition to BPF_ADD (treating r1 -= 4 as r1 += -4)
- Change the overflow check to properly validate s32 range
- Add a guard against S32_MIN negation overflow
- Retain the !alu32 restriction due to known issues with 32-bit ALU upper bits
Patches 2-3 update the selftests:
- Patch 2 adds comprehensive tests covering success cases (negative offsets,
BPF_SUB), failure cases (32-bit ALU, double ADD), and large delta edge cases
(S32_MIN/S32_MAX offsets)
- Patch 3 updates an existing test's expected output to reflect the new
tracking behavior
Puranjay Mohan (3):
bpf: Support negative offsets and BPF_SUB for linked register tracking
selftests/bpf: Add tests for linked register tracking with negative
offsets
selftests/bpf: Update expected output for sub64_partial_overflow test
kernel/bpf/verifier.c | 26 ++-
.../selftests/bpf/progs/verifier_bounds.c | 2 +-
.../bpf/progs/verifier_linked_scalars.c | 213 ++++++++++++++++++
3 files changed, 233 insertions(+), 8 deletions(-)
base-commit: 2175ccfb93fd91d0ece74684eb7ab9443de806ec
--
2.47.3
^ permalink raw reply [flat|nested] 13+ messages in thread
* [PATCH bpf-next 1/3] bpf: Support negative offsets and BPF_SUB for linked register tracking
2026-01-07 20:39 [PATCH bpf-next 0/3] bpf: Improve linked register tracking Puranjay Mohan
@ 2026-01-07 20:39 ` Puranjay Mohan
2026-01-08 1:40 ` Eduard Zingerman
2026-01-07 20:39 ` [PATCH bpf-next 2/3] selftests/bpf: Add tests for linked register tracking with negative offsets Puranjay Mohan
2026-01-07 20:39 ` [PATCH bpf-next 3/3] selftests/bpf: Update expected output for sub64_partial_overflow test Puranjay Mohan
2 siblings, 1 reply; 13+ messages in thread
From: Puranjay Mohan @ 2026-01-07 20:39 UTC (permalink / raw)
To: bpf
Cc: Puranjay Mohan, Puranjay Mohan, Alexei Starovoitov,
Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau,
Eduard Zingerman, Kumar Kartikeya Dwivedi, Mykyta Yatsenko,
kernel-team
Extend the linked register tracking to support:
1. Negative offsets via BPF_ADD (e.g., r1 += -4)
2. BPF_SUB operations (e.g., r1 -= 4), which are treated as r1 += -4
Previously, the verifier only tracked positive constant deltas between
linked registers using BPF_ADD. This limitation meant patterns like:
r1 = r0
r1 += -4
if r1 s>= 0 goto ... // r1 >= 0 implies r0 >= 4
// verifier couldn't propagate bounds back to r0
With this change, the verifier can now track negative deltas in reg->off
(which is already s32), enabling bound propagation for the above pattern.
The changes include:
- Accept BPF_SUB in addition to BPF_ADD
- Change overflow check from val > (u32)S32_MAX to checking if val fits
in s32 range: (s64)val != (s64)(s32)val
- For BPF_SUB, negate the offset with a guard against S32_MIN overflow
- Keep !alu32 restriction as 32-bit ALU has known issues with upper bits
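The fit check and the BPF_SUB negation guard can be sketched as standalone C (helper names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <limits.h>
#include <stdint.h>

/* Illustrative sketch, not kernel code: can a u64 constant be stored
 * in the s32 reg->off field? The cast round-trip succeeds exactly
 * when val is the sign-extension of a 32-bit value. */
static int fits_in_s32(uint64_t val)
{
	return (int64_t)val == (int64_t)(int32_t)val;
}

/* For BPF_SUB the offset is negated before storing; -S32_MIN is not
 * representable in s32, so that single value must be rejected. */
static int negate_off(int32_t off, int32_t *res)
{
	if (off == INT32_MIN)
		return 0;
	*res = -off;
	return 1;
}
```

For example, r1 += -4 carries the immediate sign-extended to 64 bits (0xfffffffffffffffc), which round-trips through s32, while a constant of +2^31 does not.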
Signed-off-by: Puranjay Mohan <puranjay@kernel.org>
---
kernel/bpf/verifier.c | 26 +++++++++++++++++++-------
1 file changed, 19 insertions(+), 7 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 53635ea2e41b..5eca33e02d6e 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -15710,22 +15710,34 @@ static int adjust_reg_min_max_vals(struct bpf_verifier_env *env,
* update r1 after 'if' condition.
*/
if (env->bpf_capable &&
- BPF_OP(insn->code) == BPF_ADD && !alu32 &&
- dst_reg->id && is_reg_const(src_reg, false)) {
+ (BPF_OP(insn->code) == BPF_ADD || BPF_OP(insn->code) == BPF_SUB) &&
+ !alu32 && dst_reg->id && is_reg_const(src_reg, false)) {
u64 val = reg_const_value(src_reg, false);
+ s32 off;
- if ((dst_reg->id & BPF_ADD_CONST) ||
- /* prevent overflow in sync_linked_regs() later */
- val > (u32)S32_MAX) {
+ if ((s64)val != (s64)(s32)val)
+ goto clear_id;
+
+ off = (s32)val;
+
+ if (BPF_OP(insn->code) == BPF_SUB) {
+ /* Negating S32_MIN would overflow */
+ if (off == S32_MIN)
+ goto clear_id;
+ off = -off;
+ }
+
+ if (dst_reg->id & BPF_ADD_CONST) {
/*
* If the register already went through rX += val
* we cannot accumulate another val into rx->off.
*/
+clear_id:
dst_reg->off = 0;
dst_reg->id = 0;
} else {
dst_reg->id |= BPF_ADD_CONST;
- dst_reg->off = val;
+ dst_reg->off = off;
}
} else {
/*
@@ -16821,7 +16833,7 @@ static void sync_linked_regs(struct bpf_verifier_state *vstate, struct bpf_reg_s
s32 saved_off = reg->off;
fake_reg.type = SCALAR_VALUE;
- __mark_reg_known(&fake_reg, (s32)reg->off - (s32)known_reg->off);
+ __mark_reg_known(&fake_reg, (s64)reg->off - (s64)known_reg->off);
/* reg = known_reg; reg += delta */
copy_register_state(reg, known_reg);
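The widened delta computation matters for offsets near the s32 limits; a minimal standalone sketch (illustrative, not the kernel code) shows why:

```c
#include <assert.h>
#include <limits.h>
#include <stdint.h>

/* Illustrative: the delta between two linked registers' s32 offsets
 * can fall outside the s32 range, so it must be computed in 64-bit
 * arithmetic; a 32-bit subtraction would wrap for the extreme cases. */
static int64_t delta64(int32_t off, int32_t known_off)
{
	return (int64_t)off - (int64_t)known_off;
}
```

The S32_MIN and S32_MAX offsets exercised in the selftests of patch 2 both produce deltas outside the s32 range.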
--
2.47.3
* [PATCH bpf-next 2/3] selftests/bpf: Add tests for linked register tracking with negative offsets
2026-01-07 20:39 [PATCH bpf-next 0/3] bpf: Improve linked register tracking Puranjay Mohan
2026-01-07 20:39 ` [PATCH bpf-next 1/3] bpf: Support negative offsets and BPF_SUB for " Puranjay Mohan
@ 2026-01-07 20:39 ` Puranjay Mohan
2026-01-08 2:11 ` Eduard Zingerman
2026-01-08 6:55 ` Eduard Zingerman
2026-01-07 20:39 ` [PATCH bpf-next 3/3] selftests/bpf: Update expected output for sub64_partial_overflow test Puranjay Mohan
2 siblings, 2 replies; 13+ messages in thread
From: Puranjay Mohan @ 2026-01-07 20:39 UTC (permalink / raw)
To: bpf
Cc: Puranjay Mohan, Puranjay Mohan, Alexei Starovoitov,
Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau,
Eduard Zingerman, Kumar Kartikeya Dwivedi, Mykyta Yatsenko,
kernel-team
Add tests for linked register tracking with negative offsets and BPF_SUB:
Success cases (64-bit ALU, tracking works):
- scalars_neg: r1 += -4 with signed comparison
- scalars_neg_sub: r1 -= 4 with signed comparison
- scalars_pos: r1 += 4 with unsigned comparison
- scalars_sub_neg_imm: r1 -= -4 (equivalent to r1 += 4)
Failure cases (tracking disabled, documents limitations):
- scalars_neg_alu32_add: 32-bit ADD not tracked
- scalars_neg_alu32_sub: 32-bit SUB not tracked
- scalars_double_add: Double ADD clears ID
Large delta tests (verifies 64-bit arithmetic in sync_linked_regs):
- scalars_sync_delta_overflow: S32_MIN offset
- scalars_sync_delta_overflow_large_range: S32_MAX offset
Signed-off-by: Puranjay Mohan <puranjay@kernel.org>
---
.../bpf/progs/verifier_linked_scalars.c | 213 ++++++++++++++++++
1 file changed, 213 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/verifier_linked_scalars.c b/tools/testing/selftests/bpf/progs/verifier_linked_scalars.c
index 8f755d2464cf..2e1ef0f96717 100644
--- a/tools/testing/selftests/bpf/progs/verifier_linked_scalars.c
+++ b/tools/testing/selftests/bpf/progs/verifier_linked_scalars.c
@@ -31,4 +31,217 @@ l1: \
" ::: __clobber_all);
}
+SEC("socket")
+__description("scalars: linked scalars with negative offset")
+__success
+__naked void scalars_neg(void)
+{
+ asm volatile (" \
+ call %[bpf_get_prandom_u32]; \
+ r0 &= 0xff; \
+ r1 = r0; \
+ r1 += -4; \
+ if r1 s< 0 goto l2; \
+ if r0 != 0 goto l2; \
+ r0 /= 0; \
+l2: \
+ r0 = 0; \
+ exit; \
+" :
+ : __imm(bpf_get_prandom_u32)
+ : __clobber_all);
+}
+
+/* Same test but using BPF_SUB instead of BPF_ADD with negative immediate */
+SEC("socket")
+__description("scalars: linked scalars with SUB")
+__success
+__naked void scalars_neg_sub(void)
+{
+ asm volatile (" \
+ call %[bpf_get_prandom_u32]; \
+ r0 &= 0xff; \
+ r1 = r0; \
+ r1 -= 4; \
+ if r1 s< 0 goto l2_sub; \
+ if r0 != 0 goto l2_sub; \
+ r0 /= 0; \
+l2_sub: \
+ r0 = 0; \
+ exit; \
+" :
+ : __imm(bpf_get_prandom_u32)
+ : __clobber_all);
+}
+
+/* 32-bit ALU: linked scalar tracking not supported, ID cleared */
+SEC("socket")
+__description("scalars: linked scalars 32-bit ADD not tracked")
+__failure
+__msg("div by zero")
+__naked void scalars_neg_alu32_add(void)
+{
+ asm volatile (" \
+ call %[bpf_get_prandom_u32]; \
+ w0 &= 0xff; \
+ w1 = w0; \
+ w1 += -4; \
+ if w1 s< 0 goto l2_alu32_add; \
+ if w0 != 0 goto l2_alu32_add; \
+ r0 /= 0; \
+l2_alu32_add: \
+ r0 = 0; \
+ exit; \
+" :
+ : __imm(bpf_get_prandom_u32)
+ : __clobber_all);
+}
+
+/* 32-bit ALU: linked scalar tracking not supported, ID cleared */
+SEC("socket")
+__description("scalars: linked scalars 32-bit SUB not tracked")
+__failure
+__msg("div by zero")
+__naked void scalars_neg_alu32_sub(void)
+{
+ asm volatile (" \
+ call %[bpf_get_prandom_u32]; \
+ w0 &= 0xff; \
+ w1 = w0; \
+ w1 -= 4; \
+ if w1 s< 0 goto l2_alu32_sub; \
+ if w0 != 0 goto l2_alu32_sub; \
+ r0 /= 0; \
+l2_alu32_sub: \
+ r0 = 0; \
+ exit; \
+" :
+ : __imm(bpf_get_prandom_u32)
+ : __clobber_all);
+}
+
+/* Positive offset: r1 = r0 + 4, then if r1 >= 6, r0 >= 2, so r0 != 0 */
+SEC("socket")
+__description("scalars: linked scalars positive offset")
+__success
+__naked void scalars_pos(void)
+{
+ asm volatile (" \
+ call %[bpf_get_prandom_u32]; \
+ r0 &= 0xff; \
+ r1 = r0; \
+ r1 += 4; \
+ if r1 < 6 goto l2_pos; \
+ if r0 != 0 goto l2_pos; \
+ r0 /= 0; \
+l2_pos: \
+ r0 = 0; \
+ exit; \
+" :
+ : __imm(bpf_get_prandom_u32)
+ : __clobber_all);
+}
+
+/* SUB with negative immediate: r1 -= -4 is equivalent to r1 += 4 */
+SEC("socket")
+__description("scalars: linked scalars SUB negative immediate")
+__success
+__naked void scalars_sub_neg_imm(void)
+{
+ asm volatile (" \
+ call %[bpf_get_prandom_u32]; \
+ r0 &= 0xff; \
+ r1 = r0; \
+ r1 -= -4; \
+ if r1 < 6 goto l2_sub_neg; \
+ if r0 != 0 goto l2_sub_neg; \
+ r0 /= 0; \
+l2_sub_neg: \
+ r0 = 0; \
+ exit; \
+" :
+ : __imm(bpf_get_prandom_u32)
+ : __clobber_all);
+}
+
+/* Double ADD clears the ID (can't accumulate offsets) */
+SEC("socket")
+__description("scalars: linked scalars double ADD clears ID")
+__failure
+__msg("div by zero")
+__naked void scalars_double_add(void)
+{
+ asm volatile (" \
+ call %[bpf_get_prandom_u32]; \
+ r0 &= 0xff; \
+ r1 = r0; \
+ r1 += 2; \
+ r1 += 2; \
+ if r1 < 6 goto l2_double; \
+ if r0 != 0 goto l2_double; \
+ r0 /= 0; \
+l2_double: \
+ r0 = 0; \
+ exit; \
+" :
+ : __imm(bpf_get_prandom_u32)
+ : __clobber_all);
+}
+
+/*
+ * Test that sync_linked_regs() correctly handles large offset differences.
+ * r1.off = S32_MIN, r2.off = 1, delta = S32_MIN - 1 requires 64-bit math.
+ */
+SEC("socket")
+__description("scalars: linked regs sync with large delta (S32_MIN offset)")
+__success
+__naked void scalars_sync_delta_overflow(void)
+{
+ asm volatile (" \
+ call %[bpf_get_prandom_u32]; \
+ r0 &= 0xff; \
+ r1 = r0; \
+ r2 = r0; \
+ r1 += %[s32_min]; \
+ r2 += 1; \
+ if r2 s< 100 goto l2_overflow; \
+ if r1 s< 0 goto l2_overflow; \
+ r0 /= 0; \
+l2_overflow: \
+ r0 = 0; \
+ exit; \
+" :
+ : __imm(bpf_get_prandom_u32),
+ [s32_min]"i"((int)(-2147483647 - 1))
+ : __clobber_all);
+}
+
+/*
+ * Another large delta case: r1.off = S32_MAX, r2.off = -1.
+ * delta = S32_MAX - (-1) = S32_MAX + 1 requires 64-bit math.
+ */
+SEC("socket")
+__description("scalars: linked regs sync with large delta (S32_MAX offset)")
+__success
+__naked void scalars_sync_delta_overflow_large_range(void)
+{
+ asm volatile (" \
+ call %[bpf_get_prandom_u32]; \
+ r0 &= 0xff; \
+ r1 = r0; \
+ r2 = r0; \
+ r1 += %[s32_max]; \
+ r2 += -1; \
+ if r2 s< 0 goto l2_large; \
+ if r1 s>= 0 goto l2_large; \
+ r0 /= 0; \
+l2_large: \
+ r0 = 0; \
+ exit; \
+" :
+ : __imm(bpf_get_prandom_u32),
+ [s32_max]"i"((int)2147483647)
+ : __clobber_all);
+}
+
char _license[] SEC("license") = "GPL";
--
2.47.3
* [PATCH bpf-next 3/3] selftests/bpf: Update expected output for sub64_partial_overflow test
2026-01-07 20:39 [PATCH bpf-next 0/3] bpf: Improve linked register tracking Puranjay Mohan
2026-01-07 20:39 ` [PATCH bpf-next 1/3] bpf: Support negative offsets and BPF_SUB for " Puranjay Mohan
2026-01-07 20:39 ` [PATCH bpf-next 2/3] selftests/bpf: Add tests for linked register tracking with negative offsets Puranjay Mohan
@ 2026-01-07 20:39 ` Puranjay Mohan
2026-01-08 6:59 ` Eduard Zingerman
2 siblings, 1 reply; 13+ messages in thread
From: Puranjay Mohan @ 2026-01-07 20:39 UTC (permalink / raw)
To: bpf
Cc: Puranjay Mohan, Puranjay Mohan, Alexei Starovoitov,
Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau,
Eduard Zingerman, Kumar Kartikeya Dwivedi, Mykyta Yatsenko,
kernel-team
Update the expected regex pattern for the sub64_partial_overflow test.
With BPF_SUB now supporting linked register tracking, the verifier
output shows R3=scalar(id=1-1) instead of R3=scalar() because r3 is
now tracked as linked to r0 with an offset of -1.
Signed-off-by: Puranjay Mohan <puranjay@kernel.org>
---
tools/testing/selftests/bpf/progs/verifier_bounds.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/bpf/progs/verifier_bounds.c b/tools/testing/selftests/bpf/progs/verifier_bounds.c
index 411a18437d7e..560531404bce 100644
--- a/tools/testing/selftests/bpf/progs/verifier_bounds.c
+++ b/tools/testing/selftests/bpf/progs/verifier_bounds.c
@@ -1477,7 +1477,7 @@ __naked void sub64_full_overflow(void)
SEC("socket")
__description("64-bit subtraction, partial overflow, result in unbounded reg")
__success __log_level(2)
-__msg("3: (1f) r3 -= r2 {{.*}} R3=scalar()")
+__msg("3: (1f) r3 -= r2 {{.*}} R3=scalar(id=1-1)")
__retval(0)
__naked void sub64_partial_overflow(void)
{
--
2.47.3
* Re: [PATCH bpf-next 1/3] bpf: Support negative offsets and BPF_SUB for linked register tracking
2026-01-07 20:39 ` [PATCH bpf-next 1/3] bpf: Support negative offsets and BPF_SUB for " Puranjay Mohan
@ 2026-01-08 1:40 ` Eduard Zingerman
2026-01-08 1:47 ` Eduard Zingerman
0 siblings, 1 reply; 13+ messages in thread
From: Eduard Zingerman @ 2026-01-08 1:40 UTC (permalink / raw)
To: Puranjay Mohan, bpf
Cc: Puranjay Mohan, Alexei Starovoitov, Andrii Nakryiko,
Daniel Borkmann, Martin KaFai Lau, Kumar Kartikeya Dwivedi,
Mykyta Yatsenko, kernel-team
On Wed, 2026-01-07 at 12:39 -0800, Puranjay Mohan wrote:
> Extend the linked register tracking to support:
>
> 1. Negative offsets via BPF_ADD (e.g., r1 += -4)
> 2. BPF_SUB operations (e.g., r1 -= 4), which are treated as r1 += -4
>
> Previously, the verifier only tracked positive constant deltas between
> linked registers using BPF_ADD. This limitation meant patterns like:
>
> r1 = r0
> r1 += -4
> if r1 s>= 0 goto ... // r1 >= 0 implies r0 >= 4
> // verifier couldn't propagate bounds back to r0
>
> With this change, the verifier can now track negative deltas in reg->off
> (which is already s32), enabling bound propagation for the above pattern.
>
> The changes include:
> - Accept BPF_SUB in addition to BPF_ADD
> - Change overflow check from val > (u32)S32_MAX to checking if val fits
> in s32 range: (s64)val != (s64)(s32)val
> - For BPF_SUB, negate the offset with a guard against S32_MIN overflow
> - Keep !alu32 restriction as 32-bit ALU has known issues with upper bits
This is because we don't know if other registers with the same id have
their upper 32-bits as zeroes, right?
I'm curious how often we hit this limitation in practice, I would
expect code operating on 32-bit values to hit this from time to time.
E.g. see example in [1] (but it is mitigated by an LLVM workaround).
[1] https://github.com/eddyz87/cauldron-25-llvm-workarounds/blob/master/draft.md#bpfadjustoptimplserializeicmpcrossbb
>
> Signed-off-by: Puranjay Mohan <puranjay@kernel.org>
> ---
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
[...]
* Re: [PATCH bpf-next 1/3] bpf: Support negative offsets and BPF_SUB for linked register tracking
2026-01-08 1:40 ` Eduard Zingerman
@ 2026-01-08 1:47 ` Eduard Zingerman
2026-01-08 2:53 ` Alexei Starovoitov
0 siblings, 1 reply; 13+ messages in thread
From: Eduard Zingerman @ 2026-01-08 1:47 UTC (permalink / raw)
To: Puranjay Mohan, bpf
Cc: Puranjay Mohan, Alexei Starovoitov, Andrii Nakryiko,
Daniel Borkmann, Martin KaFai Lau, Kumar Kartikeya Dwivedi,
Mykyta Yatsenko, kernel-team
On Wed, 2026-01-07 at 17:40 -0800, Eduard Zingerman wrote:
> On Wed, 2026-01-07 at 12:39 -0800, Puranjay Mohan wrote:
> > Extend the linked register tracking to support:
> >
> > 1. Negative offsets via BPF_ADD (e.g., r1 += -4)
> > 2. BPF_SUB operations (e.g., r1 -= 4), which are treated as r1 += -4
> >
> > Previously, the verifier only tracked positive constant deltas between
> > linked registers using BPF_ADD. This limitation meant patterns like:
> >
> > r1 = r0
> > r1 += -4
> > if r1 s>= 0 goto ... // r1 >= 0 implies r0 >= 4
> > // verifier couldn't propagate bounds back to r0
> >
> > With this change, the verifier can now track negative deltas in reg->off
> > (which is already s32), enabling bound propagation for the above pattern.
> >
> > The changes include:
> > - Accept BPF_SUB in addition to BPF_ADD
> > - Change overflow check from val > (u32)S32_MAX to checking if val fits
> > in s32 range: (s64)val != (s64)(s32)val
> > - For BPF_SUB, negate the offset with a guard against S32_MIN overflow
> > - Keep !alu32 restriction as 32-bit ALU has known issues with upper bits
>
> This is because we don't know if other registers with the same id have
> their upper 32-bits as zeroes, right?
Nah, we do know whether their upper halves are zeroes or not:
registers with the same id are identical up to the ->off field.
So, it appears that this restriction can be partially lifted if we
check if dst_reg upper half is zero before operation.
Wdyt?
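In tnum terms such a check might look like the following sketch, assuming the usual value/mask encoding where mask bits are unknown (illustrative only, not a proposed patch):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative: in the verifier's tnum representation, 'value' holds
 * bits known to be set and 'mask' holds unknown bits. The upper half
 * is known zero iff neither has any bit set above bit 31. */
static int upper_half_known_zero(uint64_t value, uint64_t mask)
{
	return ((value | mask) >> 32) == 0;
}
```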
> I'm curious how often we hit this limitation in practice, I would
> expect code operating on 32-bit values to hit this from time to time.
> E.g. see example in [1] (but it is mitigated by an LLVM workaround).
>
> [1] https://github.com/eddyz87/cauldron-25-llvm-workarounds/blob/master/draft.md#bpfadjustoptimplserializeicmpcrossbb
>
> >
> > Signed-off-by: Puranjay Mohan <puranjay@kernel.org>
> > ---
>
> Acked-by: Eduard Zingerman <eddyz87@gmail.com>
>
> [...]
* Re: [PATCH bpf-next 2/3] selftests/bpf: Add tests for linked register tracking with negative offsets
2026-01-07 20:39 ` [PATCH bpf-next 2/3] selftests/bpf: Add tests for linked register tracking with negative offsets Puranjay Mohan
@ 2026-01-08 2:11 ` Eduard Zingerman
2026-01-21 0:46 ` __description(). Was: " Alexei Starovoitov
2026-01-08 6:55 ` Eduard Zingerman
1 sibling, 1 reply; 13+ messages in thread
From: Eduard Zingerman @ 2026-01-08 2:11 UTC (permalink / raw)
To: Puranjay Mohan, bpf
Cc: Puranjay Mohan, Alexei Starovoitov, Andrii Nakryiko,
Daniel Borkmann, Martin KaFai Lau, Kumar Kartikeya Dwivedi,
Mykyta Yatsenko, kernel-team
On Wed, 2026-01-07 at 12:39 -0800, Puranjay Mohan wrote:
> Add tests for linked register tracking with negative offsets and BPF_SUB:
>
> Success cases (64-bit ALU, tracking works):
> - scalars_neg: r1 += -4 with signed comparison
> - scalars_neg_sub: r1 -= 4 with signed comparison
> - scalars_pos: r1 += 4 with unsigned comparison
> - scalars_sub_neg_imm: r1 -= -4 (equivalent to r1 += 4)
>
> Failure cases (tracking disabled, documents limitations):
> - scalars_neg_alu32_add: 32-bit ADD not tracked
> - scalars_neg_alu32_sub: 32-bit SUB not tracked
> - scalars_double_add: Double ADD clears ID
>
> Large delta tests (verifies 64-bit arithmetic in sync_linked_regs):
> - scalars_sync_delta_overflow: S32_MIN offset
> - scalars_sync_delta_overflow_large_range: S32_MAX offset
>
> Signed-off-by: Puranjay Mohan <puranjay@kernel.org>
> ---
> .../bpf/progs/verifier_linked_scalars.c | 213 ++++++++++++++++++
> 1 file changed, 213 insertions(+)
>
> diff --git a/tools/testing/selftests/bpf/progs/verifier_linked_scalars.c b/tools/testing/selftests/bpf/progs/verifier_linked_scalars.c
> index 8f755d2464cf..2e1ef0f96717 100644
> --- a/tools/testing/selftests/bpf/progs/verifier_linked_scalars.c
> +++ b/tools/testing/selftests/bpf/progs/verifier_linked_scalars.c
> @@ -31,4 +31,217 @@ l1: \
> " ::: __clobber_all);
> }
>
> +SEC("socket")
> +__description("scalars: linked scalars with negative offset")
Nit: I think that __description tag should be avoided in the new code.
w/o this tag the test case could be executed as follows:
./test_progs -t verifier_linked_scalars/scalars_reg
with this tag the test case should be executed as:
./test_progs -t "verifier_linked_scalars/scalars: linked scalars with negative offset"
and I'm not sure test_progs handles spaces properly (even if it does, the invocation is inconvenient).
So, I'd just put the description in the comments.
> +__success
> +__naked void scalars_neg(void)
> +{
> + asm volatile (" \
> + call %[bpf_get_prandom_u32]; \
> + r0 &= 0xff; \
> + r1 = r0; \
> + r1 += -4; \
> + if r1 s< 0 goto l2; \
^^
This is a file-global label.
It's better to use `goto 1f ...; 1: <code>` in such cases,
or a special `%=` substitution. There are multiple examples
for both in the test cases. See [1] and [2].
[1] https://sourceware.org/binutils/docs-2.36/as.html#Symbol-Names (local labels)
[2] https://gcc.gnu.org/onlinedocs/gcc-14.1.0/gcc/Extended-Asm.html#Special-format-strings
> + if r0 != 0 goto l2; \
> + r0 /= 0; \
> +l2: \
> + r0 = 0; \
> + exit; \
> +" :
> + : __imm(bpf_get_prandom_u32)
> + : __clobber_all);
> +}
> +
> +/* Same test but using BPF_SUB instead of BPF_ADD with negative immediate */
> +SEC("socket")
> +__description("scalars: linked scalars with SUB")
> +__success
> +__naked void scalars_neg_sub(void)
> +{
> + asm volatile (" \
> + call %[bpf_get_prandom_u32]; \
> + r0 &= 0xff; \
> + r1 = r0; \
> + r1 -= 4; \
> + if r1 s< 0 goto l2_sub; \
> + if r0 != 0 goto l2_sub; \
> + r0 /= 0; \
> +l2_sub: \
> + r0 = 0; \
> + exit; \
> +" :
> + : __imm(bpf_get_prandom_u32)
> + : __clobber_all);
> +}
> +
> +/* 32-bit ALU: linked scalar tracking not supported, ID cleared */
> +SEC("socket")
> +__description("scalars: linked scalars 32-bit ADD not tracked")
> +__failure
> +__msg("div by zero")
> +__naked void scalars_neg_alu32_add(void)
> +{
> + asm volatile (" \
> + call %[bpf_get_prandom_u32]; \
> + w0 &= 0xff; \
> + w1 = w0; \
> + w1 += -4; \
> + if w1 s< 0 goto l2_alu32_add; \
> + if w0 != 0 goto l2_alu32_add; \
> + r0 /= 0; \
> +l2_alu32_add: \
> + r0 = 0; \
> + exit; \
> +" :
> + : __imm(bpf_get_prandom_u32)
> + : __clobber_all);
> +}
> +
> +/* 32-bit ALU: linked scalar tracking not supported, ID cleared */
> +SEC("socket")
> +__description("scalars: linked scalars 32-bit SUB not tracked")
> +__failure
> +__msg("div by zero")
> +__naked void scalars_neg_alu32_sub(void)
> +{
> + asm volatile (" \
> + call %[bpf_get_prandom_u32]; \
> + w0 &= 0xff; \
> + w1 = w0; \
> + w1 -= 4; \
> + if w1 s< 0 goto l2_alu32_sub; \
> + if w0 != 0 goto l2_alu32_sub; \
> + r0 /= 0; \
> +l2_alu32_sub: \
> + r0 = 0; \
> + exit; \
> +" :
> + : __imm(bpf_get_prandom_u32)
> + : __clobber_all);
> +}
> +
> +/* Positive offset: r1 = r0 + 4, then if r1 >= 6, r0 >= 2, so r0 != 0 */
> +SEC("socket")
> +__description("scalars: linked scalars positive offset")
> +__success
> +__naked void scalars_pos(void)
> +{
> + asm volatile (" \
> + call %[bpf_get_prandom_u32]; \
> + r0 &= 0xff; \
> + r1 = r0; \
> + r1 += 4; \
> + if r1 < 6 goto l2_pos; \
> + if r0 != 0 goto l2_pos; \
> + r0 /= 0; \
> +l2_pos: \
> + r0 = 0; \
> + exit; \
> +" :
> + : __imm(bpf_get_prandom_u32)
> + : __clobber_all);
> +}
> +
> +/* SUB with negative immediate: r1 -= -4 is equivalent to r1 += 4 */
> +SEC("socket")
> +__description("scalars: linked scalars SUB negative immediate")
> +__success
> +__naked void scalars_sub_neg_imm(void)
> +{
> + asm volatile (" \
> + call %[bpf_get_prandom_u32]; \
> + r0 &= 0xff; \
> + r1 = r0; \
> + r1 -= -4; \
> + if r1 < 6 goto l2_sub_neg; \
> + if r0 != 0 goto l2_sub_neg; \
> + r0 /= 0; \
> +l2_sub_neg: \
> + r0 = 0; \
> + exit; \
> +" :
> + : __imm(bpf_get_prandom_u32)
> + : __clobber_all);
> +}
> +
> +/* Double ADD clears the ID (can't accumulate offsets) */
> +SEC("socket")
> +__description("scalars: linked scalars double ADD clears ID")
> +__failure
> +__msg("div by zero")
> +__naked void scalars_double_add(void)
> +{
> + asm volatile (" \
> + call %[bpf_get_prandom_u32]; \
> + r0 &= 0xff; \
> + r1 = r0; \
> + r1 += 2; \
> + r1 += 2; \
> + if r1 < 6 goto l2_double; \
> + if r0 != 0 goto l2_double; \
> + r0 /= 0; \
> +l2_double: \
> + r0 = 0; \
> + exit; \
> +" :
> + : __imm(bpf_get_prandom_u32)
> + : __clobber_all);
> +}
> +
> +/*
> + * Test that sync_linked_regs() correctly handles large offset differences.
> + * r1.off = S32_MIN, r2.off = 1, delta = S32_MIN - 1 requires 64-bit math.
> + */
> +SEC("socket")
> +__description("scalars: linked regs sync with large delta (S32_MIN offset)")
> +__success
> +__naked void scalars_sync_delta_overflow(void)
> +{
> + asm volatile (" \
> + call %[bpf_get_prandom_u32]; \
> + r0 &= 0xff; \
> + r1 = r0; \
> + r2 = r0; \
> + r1 += %[s32_min]; \
> + r2 += 1; \
> + if r2 s< 100 goto l2_overflow; \
> + if r1 s< 0 goto l2_overflow; \
> + r0 /= 0; \
> +l2_overflow: \
> + r0 = 0; \
> + exit; \
> +" :
> + : __imm(bpf_get_prandom_u32),
> + [s32_min]"i"((int)(-2147483647 - 1))
^^^^^^^^^^^
Nit: use INT_MIN?
> + : __clobber_all);
> +}
> +
> +/*
> + * Another large delta case: r1.off = S32_MAX, r2.off = -1.
> + * delta = S32_MAX - (-1) = S32_MAX + 1 requires 64-bit math.
> + */
> +SEC("socket")
> +__description("scalars: linked regs sync with large delta (S32_MAX offset)")
> +__success
> +__naked void scalars_sync_delta_overflow_large_range(void)
> +{
> + asm volatile (" \
> + call %[bpf_get_prandom_u32]; \
> + r0 &= 0xff; \
> + r1 = r0; \
> + r2 = r0; \
> + r1 += %[s32_max]; \
> + r2 += -1; \
> + if r2 s< 0 goto l2_large; \
> + if r1 s>= 0 goto l2_large; \
> + r0 /= 0; \
> +l2_large: \
> + r0 = 0; \
> + exit; \
> +" :
> + : __imm(bpf_get_prandom_u32),
> + [s32_max]"i"((int)2147483647)
> + : __clobber_all);
> +}
> +
> char _license[] SEC("license") = "GPL";
* Re: [PATCH bpf-next 1/3] bpf: Support negative offsets and BPF_SUB for linked register tracking
2026-01-08 1:47 ` Eduard Zingerman
@ 2026-01-08 2:53 ` Alexei Starovoitov
0 siblings, 0 replies; 13+ messages in thread
From: Alexei Starovoitov @ 2026-01-08 2:53 UTC (permalink / raw)
To: Eduard Zingerman
Cc: Puranjay Mohan, bpf, Puranjay Mohan, Alexei Starovoitov,
Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau,
Kumar Kartikeya Dwivedi, Mykyta Yatsenko, Kernel Team
On Wed, Jan 7, 2026 at 5:47 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Wed, 2026-01-07 at 17:40 -0800, Eduard Zingerman wrote:
> > On Wed, 2026-01-07 at 12:39 -0800, Puranjay Mohan wrote:
> > > Extend the linked register tracking to support:
> > >
> > > 1. Negative offsets via BPF_ADD (e.g., r1 += -4)
> > > 2. BPF_SUB operations (e.g., r1 -= 4), which are treated as r1 += -4
> > >
> > > Previously, the verifier only tracked positive constant deltas between
> > > linked registers using BPF_ADD. This limitation meant patterns like:
> > >
> > > r1 = r0
> > > r1 += -4
> > > if r1 s>= 0 goto ... // r1 >= 0 implies r0 >= 4
> > > // verifier couldn't propagate bounds back to r0
> > >
> > > With this change, the verifier can now track negative deltas in reg->off
> > > (which is already s32), enabling bound propagation for the above pattern.
> > >
> > > The changes include:
> > > - Accept BPF_SUB in addition to BPF_ADD
> > > - Change overflow check from val > (u32)S32_MAX to checking if val fits
> > > in s32 range: (s64)val != (s64)(s32)val
> > > - For BPF_SUB, negate the offset with a guard against S32_MIN overflow
> > > - Keep !alu32 restriction as 32-bit ALU has known issues with upper bits
> >
> > This is because we don't know if other registers with the same id have
> > their upper 32-bits as zeroes, right?
>
> Nah, we do know if their upper halves are zeroes or not,
> registers with same id are identical up to the ->off field.
> So, it appears that this restriction can be partially lifted if we
> check if dst_reg upper half is zero before operation.
> Wdyt?
Let's figure out how to support w-registers too.
Hao Sun's 1st category:
https://lore.kernel.org/bpf/CACkBjsauBbmKRAgEhOujtpGBeAWksar9yS+0hk1i9pLYwtQN3A@mail.gmail.com/
is exactly this missing feature.
Instead of adding them as failures in patch 2, let's make them successes.
pw-bot: cr
* Re: [PATCH bpf-next 2/3] selftests/bpf: Add tests for linked register tracking with negative offsets
2026-01-07 20:39 ` [PATCH bpf-next 2/3] selftests/bpf: Add tests for linked register tracking with negative offsets Puranjay Mohan
2026-01-08 2:11 ` Eduard Zingerman
@ 2026-01-08 6:55 ` Eduard Zingerman
2026-01-08 11:33 ` Puranjay Mohan
1 sibling, 1 reply; 13+ messages in thread
From: Eduard Zingerman @ 2026-01-08 6:55 UTC (permalink / raw)
To: Puranjay Mohan, bpf
Cc: Puranjay Mohan, Alexei Starovoitov, Andrii Nakryiko,
Daniel Borkmann, Martin KaFai Lau, Kumar Kartikeya Dwivedi,
Mykyta Yatsenko, kernel-team
On Wed, 2026-01-07 at 12:39 -0800, Puranjay Mohan wrote:
[...]
> +/*
> + * Test that sync_linked_regs() correctly handles large offset differences.
> + * r1.off = S32_MIN, r2.off = 1, delta = S32_MIN - 1 requires 64-bit math.
> + */
> +SEC("socket")
> +__description("scalars: linked regs sync with large delta (S32_MIN offset)")
> +__success
> +__naked void scalars_sync_delta_overflow(void)
> +{
> + asm volatile (" \
> + call %[bpf_get_prandom_u32]; \
> + r0 &= 0xff; \
> + r1 = r0; \
> + r2 = r0; \
> + r1 += %[s32_min]; \
> + r2 += 1; \
> + if r2 s< 100 goto l2_overflow; \
If I remove the above check the test case still passes.
What is the purpose of the test?
If the purpose is to check that S32_MIN can be used as delta for
BPF_ADD operations, then constants in the second comparison should be
picked in a way for comparison to be unpredictable w/o first comparison.
> + if r1 s< 0 goto l2_overflow; \
> + r0 /= 0; \
> +l2_overflow: \
> + r0 = 0; \
> + exit; \
> +" :
> + : __imm(bpf_get_prandom_u32),
> + [s32_min]"i"((int)(-2147483647 - 1))
> + : __clobber_all);
> +}
> +
> +/*
> + * Another large delta case: r1.off = S32_MAX, r2.off = -1.
> + * delta = S32_MAX - (-1) = S32_MAX + 1 requires 64-bit math.
> + */
> +SEC("socket")
> +__description("scalars: linked regs sync with large delta (S32_MAX offset)")
> +__success
> +__naked void scalars_sync_delta_overflow_large_range(void)
> +{
> + asm volatile (" \
> + call %[bpf_get_prandom_u32]; \
> + r0 &= 0xff; \
> + r1 = r0; \
> + r2 = r0; \
> + r1 += %[s32_max]; \
> + r2 += -1; \
> + if r2 s< 0 goto l2_large; \
Same issue here.
> + if r1 s>= 0 goto l2_large; \
> + r0 /= 0; \
> +l2_large: \
> + r0 = 0; \
> + exit; \
> +" :
> + : __imm(bpf_get_prandom_u32),
> + [s32_max]"i"((int)2147483647)
^^^^^^^^^^^^^^^
Out of curiosity, did you write this by hand or was it generated?
> + : __clobber_all);
> +}
> +
> char _license[] SEC("license") = "GPL";
* Re: [PATCH bpf-next 3/3] selftests/bpf: Update expected output for sub64_partial_overflow test
2026-01-07 20:39 ` [PATCH bpf-next 3/3] selftests/bpf: Update expected output for sub64_partial_overflow test Puranjay Mohan
@ 2026-01-08 6:59 ` Eduard Zingerman
2026-01-08 10:39 ` Puranjay Mohan
0 siblings, 1 reply; 13+ messages in thread
From: Eduard Zingerman @ 2026-01-08 6:59 UTC (permalink / raw)
To: Puranjay Mohan, bpf
Cc: Puranjay Mohan, Alexei Starovoitov, Andrii Nakryiko,
Daniel Borkmann, Martin KaFai Lau, Kumar Kartikeya Dwivedi,
Mykyta Yatsenko, kernel-team
On Wed, 2026-01-07 at 12:39 -0800, Puranjay Mohan wrote:
> Update the expected regex pattern for the sub64_partial_overflow test.
> With BPF_SUB now supporting linked register tracking, the verifier
> output shows R3=scalar(id=1-1) instead of R3=scalar() because r3 is
> now tracked as linked to r0 with an offset of -1.
>
> Signed-off-by: Puranjay Mohan <puranjay@kernel.org>
> ---
W/o this patch selftests fail after patch #1, right?
We usually keep such test fixes in the verifier patch itself,
so that selftests pass after each commit.
This makes any potential git bisect simpler.
> tools/testing/selftests/bpf/progs/verifier_bounds.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/tools/testing/selftests/bpf/progs/verifier_bounds.c b/tools/testing/selftests/bpf/progs/verifier_bounds.c
> index 411a18437d7e..560531404bce 100644
> --- a/tools/testing/selftests/bpf/progs/verifier_bounds.c
> +++ b/tools/testing/selftests/bpf/progs/verifier_bounds.c
> @@ -1477,7 +1477,7 @@ __naked void sub64_full_overflow(void)
> SEC("socket")
> __description("64-bit subtraction, partial overflow, result in unbounded reg")
> __success __log_level(2)
> -__msg("3: (1f) r3 -= r2 {{.*}} R3=scalar()")
> +__msg("3: (1f) r3 -= r2 {{.*}} R3=scalar(id=1-1)")
> __retval(0)
> __naked void sub64_partial_overflow(void)
> {
* Re: [PATCH bpf-next 3/3] selftests/bpf: Update expected output for sub64_partial_overflow test
2026-01-08 6:59 ` Eduard Zingerman
@ 2026-01-08 10:39 ` Puranjay Mohan
0 siblings, 0 replies; 13+ messages in thread
From: Puranjay Mohan @ 2026-01-08 10:39 UTC (permalink / raw)
To: Eduard Zingerman
Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Martin KaFai Lau, Kumar Kartikeya Dwivedi, Mykyta Yatsenko,
kernel-team
On Thu, Jan 8, 2026 at 6:59 AM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Wed, 2026-01-07 at 12:39 -0800, Puranjay Mohan wrote:
> > Update the expected regex pattern for the sub64_partial_overflow test.
> > With BPF_SUB now supporting linked register tracking, the verifier
> > output shows R3=scalar(id=1-1) instead of R3=scalar() because r3 is
> > now tracked as linked to r0 with an offset of -1.
> >
> > Signed-off-by: Puranjay Mohan <puranjay@kernel.org>
> > ---
>
> W/o this patch selftests fail after patch #1, right?
> We usually keep such test fixes in the verifier patch itself,
> so that selftests pass after each commit.
> This makes any potential git bisect simpler.
Yes, I will merge this with the first patch in v2.
>
> > tools/testing/selftests/bpf/progs/verifier_bounds.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/tools/testing/selftests/bpf/progs/verifier_bounds.c b/tools/testing/selftests/bpf/progs/verifier_bounds.c
> > index 411a18437d7e..560531404bce 100644
> > --- a/tools/testing/selftests/bpf/progs/verifier_bounds.c
> > +++ b/tools/testing/selftests/bpf/progs/verifier_bounds.c
> > @@ -1477,7 +1477,7 @@ __naked void sub64_full_overflow(void)
> > SEC("socket")
> > __description("64-bit subtraction, partial overflow, result in unbounded reg")
> > __success __log_level(2)
> > -__msg("3: (1f) r3 -= r2 {{.*}} R3=scalar()")
> > +__msg("3: (1f) r3 -= r2 {{.*}} R3=scalar(id=1-1)")
> > __retval(0)
> > __naked void sub64_partial_overflow(void)
> > {
* Re: [PATCH bpf-next 2/3] selftests/bpf: Add tests for linked register tracking with negative offsets
2026-01-08 6:55 ` Eduard Zingerman
@ 2026-01-08 11:33 ` Puranjay Mohan
0 siblings, 0 replies; 13+ messages in thread
From: Puranjay Mohan @ 2026-01-08 11:33 UTC (permalink / raw)
To: Eduard Zingerman
Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Martin KaFai Lau, Kumar Kartikeya Dwivedi, Mykyta Yatsenko,
kernel-team
On Thu, Jan 8, 2026 at 6:55 AM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Wed, 2026-01-07 at 12:39 -0800, Puranjay Mohan wrote:
>
> [...]
>
> > +/*
> > + * Test that sync_linked_regs() correctly handles large offset differences.
> > + * r1.off = S32_MIN, r2.off = 1, delta = S32_MIN - 1 requires 64-bit math.
> > + */
> > +SEC("socket")
> > +__description("scalars: linked regs sync with large delta (S32_MIN offset)")
> > +__success
> > +__naked void scalars_sync_delta_overflow(void)
> > +{
> > + asm volatile (" \
> > + call %[bpf_get_prandom_u32]; \
> > + r0 &= 0xff; \
> > + r1 = r0; \
> > + r2 = r0; \
> > + r1 += %[s32_min]; \
> > + r2 += 1; \
> > + if r2 s< 100 goto l2_overflow; \
>
> If I remove the above check the test case still passes.
> What is the purpose of the test?
> If the purpose is to check that S32_MIN can be used as a delta for
> BPF_ADD operations, then the constants in the second comparison
> should be picked so that it is unpredictable without the first
> comparison.
>
> > + if r1 s< 0 goto l2_overflow; \
>
> > + r0 /= 0; \
> > +l2_overflow: \
> > + r0 = 0; \
> > + exit; \
> > +" :
> > + : __imm(bpf_get_prandom_u32),
> > + [s32_min]"i"((int)(-2147483647 - 1))
> > + : __clobber_all);
> > +}
> > +
> > +/*
> > + * Another large delta case: r1.off = S32_MAX, r2.off = -1.
> > + * delta = S32_MAX - (-1) = S32_MAX + 1 requires 64-bit math.
> > + */
> > +SEC("socket")
> > +__description("scalars: linked regs sync with large delta (S32_MAX offset)")
> > +__success
> > +__naked void scalars_sync_delta_overflow_large_range(void)
> > +{
> > + asm volatile (" \
> > + call %[bpf_get_prandom_u32]; \
> > + r0 &= 0xff; \
> > + r1 = r0; \
> > + r2 = r0; \
> > + r1 += %[s32_max]; \
> > + r2 += -1; \
> > + if r2 s< 0 goto l2_large; \
>
> Same issue here.
>
> > + if r1 s>= 0 goto l2_large; \
> > + r0 /= 0; \
> > +l2_large: \
> > + r0 = 0; \
> > + exit; \
> > +" :
> > + : __imm(bpf_get_prandom_u32),
> > + [s32_max]"i"((int)2147483647)
> ^^^^^^^^^^^^^^^
> Out of curiosity, did you write this by hand or was it generated?
So, these two were generated by Claude, and we want to test that the
line

	__mark_reg_known(&fake_reg, (s64)reg->off - (s64)known_reg->off);

in sync_linked_regs() works correctly. Before patch 1 it looked like

	__mark_reg_known(&fake_reg, (s32)reg->off - (s32)known_reg->off);

and it would break for the offset pairs (S32_MAX, -1) and (S32_MIN, 1).

This test fails before the change from (s32) to (s64) in
sync_linked_regs(). If I remove the lines that you are referring to
(if r2 s< 0 goto l2_large;, etc.) then it passes even with (s32).
So, I think this test is checking the right things.
* __description(). Was: [PATCH bpf-next 2/3] selftests/bpf: Add tests for linked register tracking with negative offsets
2026-01-08 2:11 ` Eduard Zingerman
@ 2026-01-21 0:46 ` Alexei Starovoitov
0 siblings, 0 replies; 13+ messages in thread
From: Alexei Starovoitov @ 2026-01-21 0:46 UTC (permalink / raw)
To: Eduard Zingerman
Cc: Puranjay Mohan, bpf, Puranjay Mohan, Alexei Starovoitov,
Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau,
Kumar Kartikeya Dwivedi, Mykyta Yatsenko, Kernel Team
On Wed, Jan 7, 2026 at 6:11 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Wed, 2026-01-07 at 12:39 -0800, Puranjay Mohan wrote:
> > Add tests for linked register tracking with negative offsets and BPF_SUB:
> >
> > Success cases (64-bit ALU, tracking works):
> > - scalars_neg: r1 += -4 with signed comparison
> > - scalars_neg_sub: r1 -= 4 with signed comparison
> > - scalars_pos: r1 += 4 with unsigned comparison
> > - scalars_sub_neg_imm: r1 -= -4 (equivalent to r1 += 4)
> >
> > Failure cases (tracking disabled, documents limitations):
> > - scalars_neg_alu32_add: 32-bit ADD not tracked
> > - scalars_neg_alu32_sub: 32-bit SUB not tracked
> > - scalars_double_add: Double ADD clears ID
> >
> > Large delta tests (verifies 64-bit arithmetic in sync_linked_regs):
> > - scalars_sync_delta_overflow: S32_MIN offset
> > - scalars_sync_delta_overflow_large_range: S32_MAX offset
> >
> > Signed-off-by: Puranjay Mohan <puranjay@kernel.org>
> > ---
> > .../bpf/progs/verifier_linked_scalars.c | 213 ++++++++++++++++++
> > 1 file changed, 213 insertions(+)
> >
> > diff --git a/tools/testing/selftests/bpf/progs/verifier_linked_scalars.c b/tools/testing/selftests/bpf/progs/verifier_linked_scalars.c
> > index 8f755d2464cf..2e1ef0f96717 100644
> > --- a/tools/testing/selftests/bpf/progs/verifier_linked_scalars.c
> > +++ b/tools/testing/selftests/bpf/progs/verifier_linked_scalars.c
> > @@ -31,4 +31,217 @@ l1: \
> > " ::: __clobber_all);
> > }
> >
> > +SEC("socket")
> > +__description("scalars: linked scalars with negative offset")
>
> Nit: I think that __description tag should be avoided in the new code.
> w/o this tag the test case could be executed as follows:
>
> ./test_progs -t verifier_linked_scalars/scalars_reg
>
> with this tag the test case should be executed as:
>
> ./test_progs -t "verifier_linked_scalars/scalars: linked scalars with negative offset"
>
> and I'm not sure test_progs handles spaces properly (even if it does, the invocation is inconvenient).
> So, I'd just put the description in the comments.
Going back to this. I think the 'description' tag is useful and makes
the test_progs output more human readable, but we should do something
about this fallback in test_loader.c:

	if (!description)
		description = spec->prog_name;

since specifying a full description on the command line is indeed
inconvenient.
Maybe test_progs should print both: prog_name and description.
Thread overview: 13+ messages
2026-01-07 20:39 [PATCH bpf-next 0/3] bpf: Improve linked register tracking Puranjay Mohan
2026-01-07 20:39 ` [PATCH bpf-next 1/3] bpf: Support negative offsets and BPF_SUB for " Puranjay Mohan
2026-01-08 1:40 ` Eduard Zingerman
2026-01-08 1:47 ` Eduard Zingerman
2026-01-08 2:53 ` Alexei Starovoitov
2026-01-07 20:39 ` [PATCH bpf-next 2/3] selftests/bpf: Add tests for linked register tracking with negative offsets Puranjay Mohan
2026-01-08 2:11 ` Eduard Zingerman
2026-01-21 0:46 ` __description(). Was: " Alexei Starovoitov
2026-01-08 6:55 ` Eduard Zingerman
2026-01-08 11:33 ` Puranjay Mohan
2026-01-07 20:39 ` [PATCH bpf-next 3/3] selftests/bpf: Update expected output for sub64_partial_overflow test Puranjay Mohan
2026-01-08 6:59 ` Eduard Zingerman
2026-01-08 10:39 ` Puranjay Mohan