* [PATCH 1/2] bpf: deduce_bounds_64_from_32 tightening with circular range logic
@ 2026-04-10 12:40 Helen Koike
2026-04-10 12:40 ` [PATCH 2/2] selftests/bpf: new cases handled by 32->64 range refinements Helen Koike
` (3 more replies)
0 siblings, 4 replies; 12+ messages in thread
From: Helen Koike @ 2026-04-10 12:40 UTC (permalink / raw)
To: andrii, eddyz87, shung-hsi.yu, yonghong.song, ast, bpf,
linux-kernel, koike, kernel-dev
Unify handling of signed and unsigned using circular range logic.
Signed and unsigned numbers follow the same order in bit
representation. We can think of it as a clock, where 12h is
0x0000_0000 and 11h is 0xFFFF_FFFF, regardless of sign.
Then, instead of dealing with max and min, we deal with a base and len,
where len = max - min.
Example:
range [-1, 3] is represented by base=0xFFFF_FFFF len=4
since (u32)3 - (u32)-1 is 4.
And we can verify if a value v is in range if:
(u32)(v - base) <= len
which is true if v is signed -1 or v is unsigned 0xFFFF_FFFF.
This automatically handles the wrapping case, removing the need to
check whether the range crosses the sign boundary and handle each case
separately.
It also fixes the following current issues:
* [(u32)umin, (u32)umax] falling outside of [u32_min_value, u32_max_value]
* [(u32)umin, (u32)umax] falling in the gap [(u32)s32_max_value, (u32)s32_min_value]
Fixes: c51d5ad6543c ("bpf: improve deduction of 64-bit bounds from 32-bit bounds")
[Circular representation]
Suggested-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Helen Koike <koike@igalia.com>
---
This is a follow-up from discussion:
https://lore.kernel.org/all/7fb97184-baaa-4639-a0b9-ac289bf2e54d@igalia.com/
I didn't tag this as v2 since it is a completely different
implementation.
I did do an implementation[1] without circular range logic, using
if/else blocks, but it feels like I'm always missing a corner case;
using circular range logic feels safer.
Eduard, I didn't use the exact code you pointed to, since it seems it
is covered by your RFC, so I used the logic just for this case while
your RFC doesn't move forward.
Please let me know what you think.
[1]
https://github.com/helen-fornazier/linux/commits/bpf-min-max-if-else-solution/
(3 last commits)
Thanks,
Helen
---
kernel/bpf/verifier.c | 77 +++++++++++++++++++++++++++++++++++--------
1 file changed, 64 insertions(+), 13 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 8c1cf2eb6cbb..2944c17d2b7b 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2700,6 +2700,48 @@ static void deduce_bounds_64_from_64(struct bpf_reg_state *reg)
}
}
+/*
+ * Find the smallest v >= lo such that (u32)v is in the circular u32 range
+ * [b_min, b_max] (b_len = b_max - b_min wraps naturally for wrapping ranges).
+ */
+static u64 u64_tighten_umin(u64 lo, u64 hi, u32 b_min, u32 b_max)
+{
+ u32 b_len = b_max - b_min;
+ u64 a_len = hi - lo;
+ u64 cand;
+
+ /* lo32(lo) already in [b_min, b_max]? */
+ if ((u32)((u32)lo - b_min) <= b_len)
+ return lo;
+ /* Set lo32 to b_min and check if it's in the range [lo, hi] */
+ cand = (lo & ~(u64)U32_MAX) | b_min;
+ if (cand - lo <= a_len)
+ return cand;
+ /* Advance to the next 2^32 block */
+ return cand + BIT_ULL(32);
+}
+
+/*
+ * Find the largest v <= hi such that (u32)v is in the circular u32 range
+ * [b_min, b_max].
+ */
+static u64 u64_tighten_umax(u64 lo, u64 hi, u32 b_min, u32 b_max)
+{
+ u32 b_len = b_max - b_min;
+ u64 a_len = hi - lo;
+ u64 cand;
+
+ /* lo32(hi) already in [b_min, b_max]? */
+ if ((u32)((u32)hi - b_min) <= b_len)
+ return hi;
+ /* Set lo32 to b_max and check if it's in the range [lo, hi] */
+ cand = (hi & ~(u64)U32_MAX) | b_max;
+ if (hi - cand <= a_len)
+ return cand;
+ /* Retreat to the previous 2^32 block */
+ return cand - BIT_ULL(32);
+}
+
static void deduce_bounds_64_from_32(struct bpf_reg_state *reg)
{
/* Try to tighten 64-bit bounds from 32-bit knowledge, using 32-bit
@@ -2714,19 +2756,28 @@ static void deduce_bounds_64_from_32(struct bpf_reg_state *reg)
* with are well-formed ranges in respective s64 or u64 domain, just
* like we do with similar kinds of 32-to-64 or 64-to-32 adjustments.
*/
- __u64 new_umin, new_umax;
- __s64 new_smin, new_smax;
-
- /* u32 -> u64 tightening, it's always well-formed */
- new_umin = (reg->umin_value & ~0xffffffffULL) | reg->u32_min_value;
- new_umax = (reg->umax_value & ~0xffffffffULL) | reg->u32_max_value;
- reg->umin_value = max_t(u64, reg->umin_value, new_umin);
- reg->umax_value = min_t(u64, reg->umax_value, new_umax);
- /* u32 -> s64 tightening, u32 range embedded into s64 preserves range validity */
- new_smin = (reg->smin_value & ~0xffffffffULL) | reg->u32_min_value;
- new_smax = (reg->smax_value & ~0xffffffffULL) | reg->u32_max_value;
- reg->smin_value = max_t(s64, reg->smin_value, new_smin);
- reg->smax_value = min_t(s64, reg->smax_value, new_smax);
+ u32 b_min, b_max;
+
+ /*
+ * If [u32_min_value, u32_max_value] is conservative, i.e. [0, U32_MAX],
+ * use [s32_min_value, s32_max_value] for tightening.
+ */
+ if (reg->u32_min_value == 0 && reg->u32_max_value == U32_MAX) {
+ b_min = (u32)reg->s32_min_value;
+ b_max = (u32)reg->s32_max_value;
+ } else {
+ b_min = reg->u32_min_value;
+ b_max = reg->u32_max_value;
+ }
+
+ reg->umin_value = u64_tighten_umin(reg->umin_value, reg->umax_value, b_min, b_max);
+ reg->umax_value = u64_tighten_umax(reg->umin_value, reg->umax_value, b_min, b_max);
+
+ /* u32 -> s64 tightening uses the same circular algorithm. */
+ reg->smin_value = (s64)u64_tighten_umin((u64)reg->smin_value, (u64)reg->smax_value,
+ reg->u32_min_value, reg->u32_max_value);
+ reg->smax_value = (s64)u64_tighten_umax((u64)reg->smin_value, (u64)reg->smax_value,
+ reg->u32_min_value, reg->u32_max_value);
/* Here we would like to handle a special case after sign extending load,
* when upper bits for a 64-bit range are all 1s or all 0s.
--
2.53.0
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH 2/2] selftests/bpf: new cases handled by 32->64 range refinements
2026-04-10 12:40 [PATCH 1/2] bpf: deduce_bounds_64_from_32 tightening with circular range logic Helen Koike
@ 2026-04-10 12:40 ` Helen Koike
2026-04-14 8:26 ` [PATCH 1/2] bpf: deduce_bounds_64_from_32 tightening with circular range logic Shung-Hsi Yu
` (2 subsequent siblings)
3 siblings, 0 replies; 12+ messages in thread
From: Helen Koike @ 2026-04-10 12:40 UTC (permalink / raw)
To: andrii, eddyz87, shung-hsi.yu, yonghong.song, ast, bpf,
linux-kernel, koike, kernel-dev
From: Eduard Zingerman <eddyz87@gmail.com>
1. u64 range where the lo32 of each endpoint falls outside the u32
range within its 2^32 block, requiring umin/umax to advance to an
adjacent block. Three variants:
- range entirely below S64_MAX;
- s64 range spanning negative and positive values to exercise
smin/smax advance;
- u64 range crossing the sign boundary: smin/smax stay conservative
at S64_MIN/S64_MAX.
2. 32-bit range crosses the U32_MAX/0 boundary, represented as s32
range crossing sign boundary.
3. s32-bit range wraps, but u32 has a tighter lower bound from an
unsigned comparison.
Co-developed-by: Helen Koike <koike@igalia.com>
Signed-off-by: Helen Koike <koike@igalia.com>
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
This patch was cherry-picked from:
https://lore.kernel.org/bpf/20260318-cnum-sync-bounds-v1-4-1f2e455ea711@gmail.com/
I added three other test cases and renamed a test to follow a pattern with
the others (and updated the commit message).
Some of these extra tests were added due to cases I found while
implementing the version without the circular range logic[1], so I kept
them here (I guess some extra tests won't hurt).
[1] https://github.com/helen-fornazier/linux/commits/bpf-min-max-if-else-solution/
---
.../selftests/bpf/progs/verifier_bounds.c | 177 ++++++++++++++++++
1 file changed, 177 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/verifier_bounds.c b/tools/testing/selftests/bpf/progs/verifier_bounds.c
index bb20f0f06f05..a0178207c186 100644
--- a/tools/testing/selftests/bpf/progs/verifier_bounds.c
+++ b/tools/testing/selftests/bpf/progs/verifier_bounds.c
@@ -2165,4 +2165,181 @@ l0_%=: r0 = 0; \
: __clobber_all);
}
+/*
+ * 64-bit range is outside the 32-bit range in each 2^32 block.
+ *
+ * This test triggers updates on umin/umax and smin/smax.
+ *
+ * N*2^32 (N+1)*2^32 (N+2)*2^32 (N+3)*2^32
+ * ||----|=====|--|----------||----|=====|-------------||--|-|=====|-------------||
+ * |< b >| | |< b >| | |< b >|
+ * | | | |
+ * |<---------------+- a -+---------------->|
+ * | |
+ * |< t >| refined r0 range
+ *
+ * a = u64 [0x1'00000008, 0x3'00000001]
+ * b = u32 [2, 5]
+ * t = u64 [0x2'00000002, 0x2'00000005]
+ */
+SEC("socket")
+__success
+__flag(BPF_F_TEST_REG_INVARIANTS)
+__naked void deduce64_from_32_block_change(void *ctx)
+{
+ asm volatile (" \
+ call %[bpf_get_prandom_u32]; \
+ r1 = 0x100000008 ll; \
+ if r0 < r1 goto 2f; \
+ r1 = 0x300000001 ll; \
+ if r0 > r1 goto 2f; /* u64: [0x1'00000008, 0x3'00000001] */ \
+ if w0 < 2 goto 2f; \
+ if w0 > 5 goto 2f; /* u32: [2, 5] */ \
+ r2 = 0x200000002 ll; \
+ r3 = 0x200000005 ll; \
+ if r0 >= r2 goto 1f; /* should be always true */ \
+ r10 = 0; /* dead code */ \
+1: if r0 <= r3 goto 2f; /* should be always true */ \
+ r10 = 0; /* dead code */ \
+2: exit; \
+ " :
+ : __imm(bpf_get_prandom_u32)
+ : __clobber_all);
+}
+
+/*
+ * Similar to the deduce64_from_32_block_change test for smin/smax boundaries.
+ *
+ * a = s64 [0x8000000100000008, 0x0000000300000001] (crosses sign boundary)
+ * b = u32 [2, 5]
+ * t = s64 [0x8000000200000002, 0x0000000200000005]
+ */
+SEC("socket")
+__success
+__flag(BPF_F_TEST_REG_INVARIANTS)
+__naked void deduce64_from_32_block_change_signed(void *ctx)
+{
+ asm volatile (" \
+ call %[bpf_get_prandom_u32]; \
+ r1 = 0x8000000100000008 ll; \
+ if r0 s< r1 goto 2f; \
+ r1 = 0x300000001 ll; \
+ if r0 s> r1 goto 2f; /* s64: [0x8000000100000008, 0x3'00000001] */ \
+ if w0 < 2 goto 2f; \
+ if w0 > 5 goto 2f; /* u32: [2, 5] */ \
+ r2 = 0x8000000200000002 ll; \
+ r3 = 0x200000005 ll; \
+ if r0 s>= r2 goto 1f; /* should be always true */ \
+ r10 = 0; /* dead code */ \
+1: if r0 s<= r3 goto 2f; /* should be always true */ \
+ r10 = 0; /* dead code */ \
+2: exit; \
+ " :
+ : __imm(bpf_get_prandom_u32)
+ : __clobber_all);
+}
+
+/*
+ * Similar to the deduce64_from_32_block_change test, with conservative signed boundaries.
+ *
+ * a = u64 [0x1'00000008, 0x80000003'00000001]
+ * = s64 [S64_MIN, S64_MAX] (since (s64)umin > (s64)umax)
+ * b = u32 [2, 5]
+ * t = u64 [0x2'00000002, 0x80000002'00000005]
+ */
+SEC("socket")
+__success
+__flag(BPF_F_TEST_REG_INVARIANTS)
+__naked void deduce64_from_32_block_change_conservative_signed(void *ctx)
+{
+ asm volatile (" \
+ call %[bpf_get_prandom_u32]; \
+ r1 = 0x100000008 ll; \
+ if r0 < r1 goto 2f; \
+ r1 = 0x8000000300000001 ll; \
+ if r0 > r1 goto 2f; /* u64: [0x100000008, 0x8000000300000001] */ \
+ if w0 < 2 goto 2f; \
+ if w0 > 5 goto 2f; /* u32: [2, 5] */ \
+ r2 = 0x200000002 ll; \
+ r3 = 0x8000000200000005 ll; \
+ if r0 >= r2 goto 1f; /* should be always true */ \
+ r10 = 0; /* dead code */ \
+1: if r0 <= r3 goto 2f; /* should be always true */ \
+ r10 = 0; /* dead code */ \
+2: exit; \
+ " :
+ : __imm(bpf_get_prandom_u32)
+ : __clobber_all);
+}
+
+/*
+ * 32-bit range crossing U32_MAX / 0 boundary.
+ *
+ * N*2^32 (N+1)*2^32 (N+2)*2^32 (N+3)*2^32
+ * ||===|---------|------|===||===|----------------|===||===|---------|------|===||
+ * |b >| | |< b||b >| |< b||b >| | |< b|
+ * | | | |
+ * |<-----+----------------- a --------------+-------->|
+ * | |
+ * |<---------------- t ------------->| refined r0 range
+ *
+ * a = u64 [0x1'00000006, 0x2'FFFFFFEF]
+ * b = s32 [-16, 5] (u32 wrapping [0xFFFFFFF0, 0x00000005])
+ * t = u64 [0x1'FFFFFFF0, 0x2'00000005]
+ */
+SEC("socket")
+__success
+__flag(BPF_F_TEST_REG_INVARIANTS)
+__naked void deduce64_from_32_wrapping_32bit(void *ctx)
+{
+ asm volatile (" \
+ call %[bpf_get_prandom_u32]; \
+ r1 = 0x100000006 ll; \
+ if r0 < r1 goto 2f; \
+ r1 = 0x2ffffffef ll; \
+ if r0 > r1 goto 2f; /* u64: [0x1'00000006, 0x2'FFFFFFEF] */ \
+ if w0 s< -16 goto 2f; \
+ if w0 s> 5 goto 2f; /* s32: [-16, 5] */ \
+ r1 = 0x1fffffff0 ll; \
+ r2 = 0x200000005 ll; \
+ if r0 >= r1 goto 1f; /* should be always true */ \
+ r10 = 0; /* dead code */ \
+1: if r0 <= r2 goto 2f; /* should be always true */ \
+ r10 = 0; /* dead code */ \
+2: exit; \
+ " :
+ : __imm(bpf_get_prandom_u32)
+ : __clobber_all);
+}
+
+/*
+ * s32 range wraps, but u32 has a tighter lower bound from an unsigned
+ * comparison.
+ *
+ * a = u64 [0x7FFFFFFF'00000001, 0x80000002'00000010]
+ * b = s32 [-5, 5] + w0 u>= 2 => u32: [2, U32_MAX]
+ * t = u64 [0x7FFFFFFF'00000002, ...]
+ */
+SEC("socket")
+__success __flag(BPF_F_TEST_REG_INVARIANTS)
+__naked void deduce64_from_32_u32_tighter_than_s32(void *ctx)
+{
+ asm volatile (" \
+ call %[bpf_get_prandom_u32]; \
+ r1 = 0x7fffffff00000001 ll; \
+ if r0 < r1 goto 2f; \
+ r1 = 0x8000000200000010 ll; \
+ if r0 > r1 goto 2f; /* u64: [0x7FFFFFFF'00000001, 0x80000002'00000010] */ \
+ if w0 s< -5 goto 2f; \
+ if w0 s> 5 goto 2f; /* s32: [-5, 5] */ \
+ if w0 < 2 goto 2f; /* u32_min=2; s32 still wraps */ \
+ r2 = 0x7fffffff00000002 ll; \
+ if r0 >= r2 goto 2f; /* should be always true */ \
+ r10 = 0; /* dead code */ \
+2: exit; \
+ " :
+ : __imm(bpf_get_prandom_u32)
+ : __clobber_all);
+}
+
char _license[] SEC("license") = "GPL";
--
2.53.0
* Re: [PATCH 1/2] bpf: deduce_bounds_64_from_32 tightening with circular range logic
2026-04-10 12:40 [PATCH 1/2] bpf: deduce_bounds_64_from_32 tightening with circular range logic Helen Koike
2026-04-10 12:40 ` [PATCH 2/2] selftests/bpf: new cases handled by 32->64 range refinements Helen Koike
@ 2026-04-14 8:26 ` Shung-Hsi Yu
2026-04-14 16:25 ` Helen Koike
2026-04-15 7:12 ` Eduard Zingerman
2026-04-15 18:12 ` Eduard Zingerman
3 siblings, 1 reply; 12+ messages in thread
From: Shung-Hsi Yu @ 2026-04-14 8:26 UTC (permalink / raw)
To: Helen Koike
Cc: andrii, eddyz87, yonghong.song, ast, bpf, linux-kernel,
kernel-dev
Hi Helen,
On Fri, Apr 10, 2026 at 09:40:27AM -0300, Helen Koike wrote:
...
> This is a follow-up from discussion:
> https://lore.kernel.org/all/7fb97184-baaa-4639-a0b9-ac289bf2e54d@igalia.com/
Based on the original thread, I see that the motivation for the patchset
came from a Syzkaller-reported issue[1]. Can you try to run the reproducer
again and see if you can still get it to reproduce on the bpf-next tree?
I tried it myself, but I somehow can't get the bpf-next kernel to boot
when KMSAN enabled.
Given check_cond_jmp_op showed up in the kernel stack of the report,
I'm guessing it might have been resolved with Paul's "Fix invariant
violations and improve branch detection" patchset[2].
Side note: were you able to figure out why we get "KMSAN: uninit-value"
within reg_bounds_sanity_check() in the original Syzkaller report[1]
with the following call?
verifier_bug(env, "REG INVARIANTS VIOLATION (%s): %s u64=[%#llx, %#llx] "
"s64=[%#llx, %#llx] u32=[%#x, %#x] s32=[%#x, %#x] var_off=(%#llx, %#llx)",
...);
1: https://syzkaller.appspot.com/bug?id=6f35b4962ae3653e4190d0ff3e30b882ebe86755
2: https://lore.kernel.org/bpf/cover.1775142354.git.paul.chaignon@gmail.com/
> I didn't tag this as v2 since it is a completely different
> implementation.
>
> I did do an implementation[1] without circular range logic, by using
> if/elses blocks, but it feels I'm always missing a corner case and using
> circular range logic feels safer.
>
> Eduard, I didn't use the exact code you pointed, since it seems it is
> covered by your RFC, so I used the logic just for this case while your
> RFC doesn't move forward.
>
> Please let me know what you think.
>
> [1]
> https://github.com/helen-fornazier/linux/commits/bpf-min-max-if-else-solution/
> (3 last commits)
>
> Thanks,
> Helen
...
* Re: [PATCH 1/2] bpf: deduce_bounds_64_from_32 tightening with circular range logic
2026-04-14 8:26 ` [PATCH 1/2] bpf: deduce_bounds_64_from_32 tightening with circular range logic Shung-Hsi Yu
@ 2026-04-14 16:25 ` Helen Koike
2026-04-14 18:32 ` Eduard Zingerman
0 siblings, 1 reply; 12+ messages in thread
From: Helen Koike @ 2026-04-14 16:25 UTC (permalink / raw)
To: Shung-Hsi Yu
Cc: andrii, eddyz87, yonghong.song, ast, bpf, linux-kernel,
kernel-dev
Hi Shung-Hsi Yu,
Thanks for your reply,
On 4/14/26 5:26 AM, Shung-Hsi Yu wrote:
> Hi Helen,
>
> On Fri, Apr 10, 2026 at 09:40:27AM -0300, Helen Koike wrote:
> ...
>> This is a follow-up from discussion:
>> https://lore.kernel.org/all/7fb97184-baaa-4639-a0b9-ac289bf2e54d@igalia.com/
>
> Based on the original thread, I see that the motivation for the patchset
> came from a Syzkaller-reported issue[1]. Can you try to run the reproducer
> again and see if you can still get it to reproduce on the bpf-next tree?
I just tested on bpf-next/master ("71b500afd2f7") and I couldn't
reproduce the issue from Syzbot anymore. It seems that the branch
causing the issue is pruned now, fixing the error in that case.
But I still get the failures of the kselftests from patch 2/2:
* verifier_bounds/deduce64_from_32_block_change:FAIL
* verifier_bounds/deduce64_from_32_block_change_signed:FAIL
* verifier_bounds/deduce64_from_32_block_change_conservative_signed:FAIL
* verifier_bounds/deduce64_from_32_wrapping_32bit:FAIL
How do you suggest we should proceed?
> I tried it myself, but I somehow can't get the bpf-next kernel to boot
> when KMSAN enabled.
>
> Given check_cond_jmp_op showed up in the kernel stack of the report,
> I'm guessing it might have been resolved with Paul's "Fix invariant
> violations and improve branch detection" patchset[2].
Thanks for the pointer!
>
> Side note: were you able to figure out why we get "KMSAN: uninit-value"
> within reg_bounds_sanity_check() in the original Syzkaller report[1]
> with the following call?
>
> verifier_bug(env, "REG INVARIANTS VIOLATION (%s): %s u64=[%#llx, %#llx] "
> "s64=[%#llx, %#llx] u32=[%#x, %#x] s32=[%#x, %#x] var_off=(%#llx, %#llx)",
> ...);
Unfortunately no. I investigated a bit but I didn't continue :(
Thanks,
Helen
>
> 1: https://syzkaller.appspot.com/bug?id=6f35b4962ae3653e4190d0ff3e30b882ebe86755
> 2: https://lore.kernel.org/bpf/cover.1775142354.git.paul.chaignon@gmail.com/
>
>> I didn't tag this as v2 since it is a completely different
>> implementation.
>>
>> I did do an implementation[1] without circular range logic, by using
>> if/elses blocks, but it feels I'm always missing a corner case and using
>> circular range logic feels safer.
>>
>> Eduard, I didn't use the exact code you pointed, since it seems it is
>> covered by your RFC, so I used the logic just for this case while your
>> RFC doesn't move forward.
>>
>> Please let me know what you think.
>>
>> [1]
>> https://github.com/helen-fornazier/linux/commits/bpf-min-max-if-else-solution/
>> (3 last commits)
>>
>> Thanks,
>> Helen
> ...
* Re: [PATCH 1/2] bpf: deduce_bounds_64_from_32 tightening with circular range logic
2026-04-14 16:25 ` Helen Koike
@ 2026-04-14 18:32 ` Eduard Zingerman
0 siblings, 0 replies; 12+ messages in thread
From: Eduard Zingerman @ 2026-04-14 18:32 UTC (permalink / raw)
To: Helen Koike, Shung-Hsi Yu
Cc: andrii, yonghong.song, ast, bpf, linux-kernel, kernel-dev
On Tue, 2026-04-14 at 13:25 -0300, Helen Koike wrote:
> Hi Shung-Hsi Yu,
>
>
> Thanks for your reply,
>
> On 4/14/26 5:26 AM, Shung-Hsi Yu wrote:
> > Hi Helen,
> >
> > On Fri, Apr 10, 2026 at 09:40:27AM -0300, Helen Koike wrote:
> > ...
> > > This is a follow-up from discussion:
> > > https://lore.kernel.org/all/7fb97184-baaa-4639-a0b9-ac289bf2e54d@igalia.com/
> >
> > Base on the original thread, I see that motivation of the patchset came
> > from a Syzkaller reported issue[1]. Can you try to run the reproducer
> > again and see if you can still get it to reproduce on the bpf-next tree?
>
> I just tested on bpf-next/master ("71b500afd2f7") and I couldn't
> reproduce the issue from Syzbot anymore. It seems that the branch
> causing the issue is pruned now, fixing the error in that case.
>
> But I still get the failures of the kselftests from patch 2/2:
>
> * verifier_bounds/deduce64_from_32_block_change:FAIL
> * verifier_bounds/deduce64_from_32_block_change_signed:FAIL
> *
> verifier_bounds/deduce64_from_32_block_change_conservative_signed:FAIL
> * verifier_bounds/deduce64_from_32_wrapping_32bit:FAIL
>
>
> How do you suggest we should proceed?
>
I still want this for the sake of improved value tracking.
Please give me a bit more time for review.
* Re: [PATCH 1/2] bpf: deduce_bounds_64_from_32 tightening with circular range logic
2026-04-10 12:40 [PATCH 1/2] bpf: deduce_bounds_64_from_32 tightening with circular range logic Helen Koike
2026-04-10 12:40 ` [PATCH 2/2] selftests/bpf: new cases handled by 32->64 range refinements Helen Koike
2026-04-14 8:26 ` [PATCH 1/2] bpf: deduce_bounds_64_from_32 tightening with circular range logic Shung-Hsi Yu
@ 2026-04-15 7:12 ` Eduard Zingerman
2026-04-15 16:19 ` Harishankar Vishwanathan
2026-04-16 3:52 ` Shung-Hsi Yu
2026-04-15 18:12 ` Eduard Zingerman
3 siblings, 2 replies; 12+ messages in thread
From: Eduard Zingerman @ 2026-04-15 7:12 UTC (permalink / raw)
To: Helen Koike, harishankar.vishwanathan, paul.chaignon,
shung-hsi.yu
Cc: andrii, yonghong.song, ast, bpf, linux-kernel, kernel-dev
On Fri, 2026-04-10 at 09:40 -0300, Helen Koike wrote:
> Unify handling of signed and unsigned using circular range logic.
>
> Signed and unsigned numbers follow the same order in bit
> representation. We can think of it as a clock, where 12h is
> 0x0000_0000 and 11h is 0xFFFF_FFFF, regardless of sign.
>
> Then, instead of dealing with max and min, we deal with a base and len,
> where len = max - min.
>
> Example:
> range [-1, 3] is represented by base=0xFFFF_FFFF len=4
> since (u32)3 - (u32)-1 is 4.
>
> And we can verify if a value v is in range if:
> (u32)(v - base) <= len
> which is true if v is signed -1 or v is unsigned 0xFFFF_FFFF.
>
> This automatically handles the wrapping case, discarding the need to
> check if it crosses the signed range or not and handle each case.
>
> It also fixes the following current issues:
> * [(u32)umin, (u32)umax] falling outside of [u32_min_value, u32_max_value]
> * [(u32)umin, (u32)umax] falling in the gap [(u32)s32_max_value, (u32)s32_min_value]
>
> Fixes: c51d5ad6543c ("bpf: improve deduction of 64-bit bounds from 32-bit bounds")
> [Circular representation]
> Suggested-by: Eduard Zingerman <eddyz87@gmail.com>
> Signed-off-by: Helen Koike <koike@igalia.com>
>
> ---
>
> This is a follow-up from discussion:
> https://lore.kernel.org/all/7fb97184-baaa-4639-a0b9-ac289bf2e54d@igalia.com/
>
> I didn't tag this as v2 since it is a completely different
> implementation.
>
> I did do an implementation[1] without circular range logic, by using
> if/elses blocks, but it feels I'm always missing a corner case and using
> circular range logic feels safer.
>
> Eduard, I didn't use the exact code you pointed, since it seems it is
> covered by your RFC, so I used the logic just for this case while your
> RFC doesn't move forward.
>
> Please let me know what you think.
>
> [1]
> https://github.com/helen-fornazier/linux/commits/bpf-min-max-if-else-solution/
> (3 last commits)
>
> Thanks,
> Helen
> ---
> kernel/bpf/verifier.c | 77 +++++++++++++++++++++++++++++++++++--------
> 1 file changed, 64 insertions(+), 13 deletions(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 8c1cf2eb6cbb..2944c17d2b7b 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -2700,6 +2700,48 @@ static void deduce_bounds_64_from_64(struct bpf_reg_state *reg)
> }
> }
>
> +/*
> + * Find the smallest v >= lo such that (u32)v is in the circular u32 range
> + * [b_min, b_max] (b_len = b_max - b_min wraps naturally for wrapping ranges).
> + */
> +static u64 u64_tighten_umin(u64 lo, u64 hi, u32 b_min, u32 b_max)
> +{
> + u32 b_len = b_max - b_min;
> + u64 a_len = hi - lo;
> + u64 cand;
> +
> + /* lo32(lo) already in [b_min, b_max]? */
> + if ((u32)((u32)lo - b_min) <= b_len)
> + return lo;
> + /* Set lo32 to b_min and check if it's in the range [lo, hi] */
> + cand = (lo & ~(u64)U32_MAX) | b_min;
> + if (cand - lo <= a_len)
> + return cand;
> + /* Advance to the next 2^32 block */
> + return cand + BIT_ULL(32);
> +}
Hi Helen, Harishankar, Shung-Hsi, Paul,
I think this algorithm is correct and covers all cases discussed earlier.
I also prepared a simple correctness check using cbmc in [1].
It shows that for any valid input register state deduce_bounds_64_from_32
does not lose any values (check_soundness() function in [1], which validates).
It also shows that there exist invalid input register states
such that deduce_bounds_64_from_32() "fixes" them to be valid
(check_invalid_preserved() function in [1], which produces a counter-example).
Now, the question is whether we want check_invalid_preserved() to hold.
Harishankar is working on an extension to simulate_both_branches_taken()
checking for additional cases of invariant violation.
Disagreement between the 64-bit and 32-bit ranges is one such violation.
The logic in deduce_bounds_64_from_32() can be extracted as "intersect"
function producing a signal describing if intersection actually exist.
So, the question is for Harishankar, would you like to have such
"intersect" function?
[1] https://github.com/eddyz87/deduce-bounds-64-from-32-checks
[...]
* Re: [PATCH 1/2] bpf: deduce_bounds_64_from_32 tightening with circular range logic
2026-04-15 7:12 ` Eduard Zingerman
@ 2026-04-15 16:19 ` Harishankar Vishwanathan
2026-04-15 18:12 ` Eduard Zingerman
2026-04-16 3:52 ` Shung-Hsi Yu
1 sibling, 1 reply; 12+ messages in thread
From: Harishankar Vishwanathan @ 2026-04-15 16:19 UTC (permalink / raw)
To: Eduard Zingerman
Cc: Helen Koike, paul.chaignon, shung-hsi.yu, andrii, yonghong.song,
ast, bpf, linux-kernel, kernel-dev
On Wed, Apr 15, 2026 at 3:12 AM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Fri, 2026-04-10 at 09:40 -0300, Helen Koike wrote:
> > Unify handling of signed and unsigned using circular range logic.
[...]
> Hi Helen, Harishankar, Shung-Hsi, Paul,
>
> I think this algorithm is correct and covers all cases discussed earlier.
> I also prepared simple correctness check using cbmc in [1].
> It shows that for any valid input register state deduce_bounds_64_from_32
> does not lose any values (check_soundness() function in [1], which validates).
> It also shows that there exist invalid input register state,
> such that deduce_bounds_64_from_32() "fixes" it to be valid
> (check_invalid_preserved() function in [1], which produces a counter-example).
IIUC, this patch enhances the new deduce_bounds_64_from_32, while
also having the property that it "maintains invalid input register states".
> Now, the question is whether we want check_invalid_preserved() to hold.
> Harishankar is working on an extension to simulate_both_branches_taken()
> checking for additional cases of invariant violation.
> Disagreement between 64-bit and 32-bit ranges is one of such violations.
> The logic in deduce_bounds_64_from_32() can be extracted as "intersect"
> function producing a signal describing whether an intersection actually exists.
> So, the question is for Harishankar, would you like to have such
> "intersect" function?
I think using the new reg_bounds_intersect() "intersection checks"
introduced in [1] to exit early in sync is still useful, because
the other sub-sync function might still "fix" the bounds.
[1] https://lore.kernel.org/bpf/20260415160728.657270-2-harishankar.vishwanathan@gmail.com/T/#u
> [...]
* Re: [PATCH 1/2] bpf: deduce_bounds_64_from_32 tightening with circular range logic
2026-04-15 16:19 ` Harishankar Vishwanathan
@ 2026-04-15 18:12 ` Eduard Zingerman
0 siblings, 0 replies; 12+ messages in thread
From: Eduard Zingerman @ 2026-04-15 18:12 UTC (permalink / raw)
To: Harishankar Vishwanathan
Cc: Helen Koike, paul.chaignon, shung-hsi.yu, andrii, yonghong.song,
ast, bpf, linux-kernel, kernel-dev
On Wed, 2026-04-15 at 12:19 -0400, Harishankar Vishwanathan wrote:
> On Wed, Apr 15, 2026 at 3:12 AM Eduard Zingerman <eddyz87@gmail.com> wrote:
> >
> > On Fri, 2026-04-10 at 09:40 -0300, Helen Koike wrote:
> > > Unify handling of signed and unsigned using circular range logic.
> [...]
>
> > Hi Helen, Harishankar, Shung-Hsi, Paul,
> >
> > I think this algorithm is correct and covers all cases discussed earlier.
> > I also prepared simple correctness check using cbmc in [1].
> > It shows that for any valid input register state deduce_bounds_64_from_32
> > does not lose any values (check_soundness() function in [1], which validates).
> > It also shows that there exist invalid input register state,
> > such that deduce_bounds_64_from_32() "fixes" it to be valid
> > (check_invalid_preserved() function in [1], which produces a counter-example).
>
> IIUC, this patch enhances the new deduce_bounds_64_from_32, while
> also having the property that it "maintains invalid input register states".
It enhances deduce_bounds_64_from_32(), but it does *not* maintain the
"invalid stays invalid" property. It can be adjusted to provide a signal
when the 64-bit and 32-bit ranges do not intersect.
In any case, I see that you handle this situation already in [1].
Therefore, I think we can proceed with this patch as-is.
[1] https://lore.kernel.org/bpf/20260415160728.657270-1-harishankar.vishwanathan@gmail.com/T/#t
>
> > Now, the question is whether we want check_invalid_preserved() to hold.
> > Harishankar is working on an extension to simulate_both_branches_taken()
> > checking for additional cases of invariant violation.
> > Disagreement between 64-bit and 32-bit ranges is one of such violations.
> > The logic in deduce_bounds_64_from_32() can be extracted as "intersect"
> > function producing a signal describing whether an intersection actually exists.
> > So, the question is for Harishankar, would you like to have such
> > "intersect" function?
>
> I think using the new reg_bounds_intersect() "intersection checks"
> introduced in [1] to exit early in sync is still useful because
> the other sub-sync function might still "fix" the bounds
>
> [1] https://lore.kernel.org/bpf/20260415160728.657270-2-harishankar.vishwanathan@gmail.com/T/#u
>
> > [...]
* Re: [PATCH 1/2] bpf: deduce_bounds_64_from_32 tightening with circular range logic
2026-04-10 12:40 [PATCH 1/2] bpf: deduce_bounds_64_from_32 tightening with circular range logic Helen Koike
` (2 preceding siblings ...)
2026-04-15 7:12 ` Eduard Zingerman
@ 2026-04-15 18:12 ` Eduard Zingerman
3 siblings, 0 replies; 12+ messages in thread
From: Eduard Zingerman @ 2026-04-15 18:12 UTC (permalink / raw)
To: Helen Koike, andrii, shung-hsi.yu, yonghong.song, ast, bpf,
linux-kernel, kernel-dev
On Fri, 2026-04-10 at 09:40 -0300, Helen Koike wrote:
> Unify handling of signed and unsigned using circular range logic.
>
> Signed and unsigned numbers follow the same order in bit
> representation. We can think of it as a clock, where 12h is
> 0x0000_0000 and 11h is 0xFFFF_FFFF, regardless of sign.
>
> Then, instead of dealing with max and min, we deal with a base and len,
> where len = max - min.
>
> Example:
> range [-1, 3] is represented by base=0xFFFF_FFFF len=4
> since (u32)3 - (u32)-1 is 4.
>
> And we can verify if a value v is in range if:
> (u32)(v - base) <= len
> which is true if v is signed -1 or v is unsigned 0xFFFF_FFFF.
>
> This automatically handles the wrapping case, discarding the need to
> check if it crosses the signed range or not and handle each case.
>
> It also fixes the following current issues:
> * [(u32)umin, (u32)umax] falling outside of [u32_min_value, u32_max_value]
> * [(u32)umin, (u32)umax] falling in the gap [(u32)s32_max_value, (u32)s32_min_value]
>
> Fixes: c51d5ad6543c ("bpf: improve deduction of 64-bit bounds from 32-bit bounds")
> [Circular representation]
> Suggested-by: Eduard Zingerman <eddyz87@gmail.com>
> Signed-off-by: Helen Koike <koike@igalia.com>
>
> ---
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
[...]
* Re: [PATCH 1/2] bpf: deduce_bounds_64_from_32 tightening with circular range logic
2026-04-15 7:12 ` Eduard Zingerman
2026-04-15 16:19 ` Harishankar Vishwanathan
@ 2026-04-16 3:52 ` Shung-Hsi Yu
2026-04-16 7:43 ` Eduard Zingerman
1 sibling, 1 reply; 12+ messages in thread
From: Shung-Hsi Yu @ 2026-04-16 3:52 UTC (permalink / raw)
To: Eduard Zingerman
Cc: Helen Koike, harishankar.vishwanathan, paul.chaignon, andrii,
yonghong.song, ast, bpf, linux-kernel, kernel-dev
On Wed, Apr 15, 2026 at 12:12:45AM -0700, Eduard Zingerman wrote:
> On Fri, 2026-04-10 at 09:40 -0300, Helen Koike wrote:
> > Unify handling of signed and unsigned using circular range logic.
[...]
> Hi Helen, Harishankar, Shung-Hsi, Paul,
>
> I think this algorithm is correct and covers all cases discussed earlier.
> I also prepared a simple correctness check using CBMC in [1].
Given that the "Fix invariant violations and improve branch detection"
series is merged and the original Syzkaller reproducer no longer
triggers an issue, teaching the verifier to do better bound deduction
(i.e., a precision improvement) seems less appealing than before, unless:
1. LLVM-produced programs show a similar pattern and are rejected by
the verifier, which this patchset would fix
2. We're proceeding with the cnum RFC as a whole, and this marks the
first step (I am assuming this is the case?)
My understanding is that we just need the verifier to be smart enough
to accept safe LLVM-generated programs, whereas Syzkaller-generated
ones are not as much of a concern as long as they do not cause any
issues. The cnum improvement makes sense because it simplifies the
code, and this could potentially be the last time we have to touch
32->64 deduction (famous last words).
Shung-Hsi
[...]
* Re: [PATCH 1/2] bpf: deduce_bounds_64_from_32 tightening with circular range logic
2026-04-16 3:52 ` Shung-Hsi Yu
@ 2026-04-16 7:43 ` Eduard Zingerman
2026-04-16 13:45 ` Paul Chaignon
0 siblings, 1 reply; 12+ messages in thread
From: Eduard Zingerman @ 2026-04-16 7:43 UTC (permalink / raw)
To: Shung-Hsi Yu
Cc: Helen Koike, harishankar.vishwanathan, paul.chaignon, andrii,
yonghong.song, ast, bpf, linux-kernel, kernel-dev
On Thu, 2026-04-16 at 11:52 +0800, Shung-Hsi Yu wrote:
> On Wed, Apr 15, 2026 at 12:12:45AM -0700, Eduard Zingerman wrote:
> > On Fri, 2026-04-10 at 09:40 -0300, Helen Koike wrote:
> > > Unify handling of signed and unsigned using circular range logic.
> [...]
> > Hi Helen, Harishankar, Shung-Hsi, Paul,
> >
> > I think this algorithm is correct and covers all cases discussed earlier.
> > I also prepared a simple correctness check using CBMC in [1].
>
> Given that the "Fix invariant violations and improve branch detection"
> series is merged and the original Syzkaller reproducer no longer
> triggers an issue, teaching the verifier to do better bound deduction
> (i.e., a precision improvement) seems less appealing than before, unless:
There is only so much information that can be gained from 32->64
tightening. I think this patch-set makes such tightening as precise as
it can be. Which is a nice property, hence I'd like to proceed merging
it.
> 1. LLVM-produced programs show a similar pattern and are rejected by
> the verifier, which this patchset would fix
> 2. We're proceeding with the cnum RFC as a whole, and this marks the
> first step (I am assuming this is the case?)
This is likely; I'm about to share the RFC.
[...]
* Re: [PATCH 1/2] bpf: deduce_bounds_64_from_32 tightening with circular range logic
2026-04-16 7:43 ` Eduard Zingerman
@ 2026-04-16 13:45 ` Paul Chaignon
0 siblings, 0 replies; 12+ messages in thread
From: Paul Chaignon @ 2026-04-16 13:45 UTC (permalink / raw)
To: Eduard Zingerman
Cc: Shung-Hsi Yu, Helen Koike, harishankar.vishwanathan, andrii,
yonghong.song, ast, bpf, linux-kernel, kernel-dev
On Thu, Apr 16, 2026 at 12:43:44AM -0700, Eduard Zingerman wrote:
> On Thu, 2026-04-16 at 11:52 +0800, Shung-Hsi Yu wrote:
> > On Wed, Apr 15, 2026 at 12:12:45AM -0700, Eduard Zingerman wrote:
> > > On Fri, 2026-04-10 at 09:40 -0300, Helen Koike wrote:
> > > > Unify handling of signed and unsigned using circular range logic.
> > [...]
> > > Hi Helen, Harishankar, Shung-Hsi, Paul,
> > >
> > > I think this algorithm is correct and covers all cases discussed earlier.
> > > I also prepared a simple correctness check using CBMC in [1].
> >
> > Given that the "Fix invariant violations and improve branch detection"
> > series is merged and the original Syzkaller reproducer no longer
> > triggers an issue, teaching the verifier to do better bound deduction
> > (i.e., a precision improvement) seems less appealing than before, unless:
>
> There is only so much information that can be gained from 32->64
> tightening. I think this patch-set makes such tightening as precise as
> it can be. Which is a nice property, hence I'd like to proceed merging
> it.
I've sent a patch [1] that might be addressing the same gaps in
deduce_bounds_64_from_32() (outside of the s32 case maybe?). At least,
the selftests introduced here pass with that patch as well. It doesn't
require cnums and looks a bit easier to follow IMO.
I wrote it while trying to fix the reg_bounds selftests and only noticed
yesterday that it might be overlapping with this. cnums probably have
other benefits, but maybe they're not needed to address the issues
identified here?
[1] https://lore.kernel.org/bpf/b2a0346a5b0818008503b721c62621918d84ad0a.1776344897.git.paul.chaignon@gmail.com/
[...]
end of thread, other threads:[~2026-04-16 13:45 UTC | newest]
Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-04-10 12:40 [PATCH 1/2] bpf: deduce_bounds_64_from_32 tightening with circular range logic Helen Koike
2026-04-10 12:40 ` [PATCH 2/2] selftests/bpf: new cases handled by 32->64 range refinements Helen Koike
2026-04-14 8:26 ` [PATCH 1/2] bpf: deduce_bounds_64_from_32 tightening with circular range logic Shung-Hsi Yu
2026-04-14 16:25 ` Helen Koike
2026-04-14 18:32 ` Eduard Zingerman
2026-04-15 7:12 ` Eduard Zingerman
2026-04-15 16:19 ` Harishankar Vishwanathan
2026-04-15 18:12 ` Eduard Zingerman
2026-04-16 3:52 ` Shung-Hsi Yu
2026-04-16 7:43 ` Eduard Zingerman
2026-04-16 13:45 ` Paul Chaignon
2026-04-15 18:12 ` Eduard Zingerman
This is a public inbox; see mirroring instructions for how to clone and mirror all data and code used for this inbox.