public inbox for bpf@vger.kernel.org
 help / color / mirror / Atom feed
* [PATCH bpf 0/4] Fix invariant violation for single-value tnums
@ 2026-02-17  8:53 Paul Chaignon
  2026-02-17  8:59 ` [PATCH bpf 1/4] bpf: Introduce tnum_step to step through tnum's members Paul Chaignon
                   ` (3 more replies)
  0 siblings, 4 replies; 18+ messages in thread
From: Paul Chaignon @ 2026-02-17  8:53 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Eduard Zingerman, Harishankar Vishwanathan, Srinivas Narayana,
	Santosh Nagarakatte

We're hitting an invariant violation in Cilium that sometimes leads to
BPF programs being rejected and Cilium failing to start [1]. As far as
I know this is the first case of invariant violation found in a real
program (i.e., not by a fuzzer). The following extract from verifier
logs shows what's happening:

  from 201 to 236: R1=0 R6=ctx() R7=1 R9=scalar(smin=umin=smin32=umin32=3584,smax=umax=smax32=umax32=3840,var_off=(0xe00; 0x100)) R10=fp0
  236: R1=0 R6=ctx() R7=1 R9=scalar(smin=umin=smin32=umin32=3584,smax=umax=smax32=umax32=3840,var_off=(0xe00; 0x100)) R10=fp0
  ; if (magic == MARK_MAGIC_HOST || magic == MARK_MAGIC_OVERLAY || magic == MARK_MAGIC_ENCRYPT) @ bpf_host.c:1337
  236: (16) if w9 == 0xe00 goto pc+45   ; R9=scalar(smin=umin=smin32=umin32=3585,smax=umax=smax32=umax32=3840,var_off=(0xe00; 0x100))
  237: (16) if w9 == 0xf00 goto pc+1
  verifier bug: REG INVARIANTS VIOLATION (false_reg1): range bounds violation u64=[0xe01, 0xe00] s64=[0xe01, 0xe00] u32=[0xe01, 0xe00] s32=[0xe01, 0xe00] var_off=(0xe00, 0x0)

More details are given in the second patch, but in short, the verifier
should be able to detect that the false branch of instruction 237 is
dead: after instruction 236, the u64 range and the tnum overlap in a
single value, 0xf00.
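
The single-value overlap can be checked in user space with a small
brute-force sketch (hypothetical helper names, not kernel code): a value
x is a member of a tnum (value; mask) iff x agrees with value on all
known bits, and the members inside a u64 range can be counted by walking
the subsets of the mask.

```c
#include <assert.h>
#include <stdint.h>

/* x is a member of the tnum (value; mask) iff x matches value on all
 * known (unmasked) bits.
 */
static int tnum_contains(uint64_t value, uint64_t mask, uint64_t x)
{
	return (x & ~mask) == value;
}

/* Count the members of (value; mask) inside [umin, umax] by walking
 * all subsets of mask.
 */
static int members_in_range(uint64_t value, uint64_t mask,
			    uint64_t umin, uint64_t umax)
{
	uint64_t sub = 0;
	int n = 0;

	do {
		uint64_t member = value | sub;

		if (member >= umin && member <= umax)
			n++;
		sub = (sub - mask) & mask; /* next subset of mask */
	} while (sub);

	return n;
}
```

For the tnum (0xe00; 0x100) above, the only member of the post-branch
range [0xe01; 0xf00] is 0xf00.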

The long-term solution to invariant violation is likely to rely on the
refinement + invariant violation check to detect dead branches, as
started by Eduard. To fix the current issue, we need something with
less refactoring that we can backport to affected kernels.

The solution implemented in the second patch is to improve the bounds
refinement to avoid this case. It relies on a new tnum helper,
tnum_step, first sent as an RFC in [2]. The last two patches extend and
update the selftests.

Link: https://github.com/cilium/cilium/issues/44216 [1]
Link: https://lore.kernel.org/bpf/20251107192328.2190680-2-harishankar.vishwanathan@gmail.com/ [2]

Harishankar Vishwanathan (1):
  bpf: Introduce tnum_step to step through tnum's members

Paul Chaignon (3):
  bpf: Improve bounds when tnum has a single possible value
  selftests/bpf: Test refinement of single-value tnum
  selftests/bpf: Avoid simplification of crafted bounds test

 include/linux/tnum.h                          |  3 +
 kernel/bpf/tnum.c                             | 52 +++++++++++
 kernel/bpf/verifier.c                         | 16 ++++
 .../selftests/bpf/prog_tests/reg_bounds.c     |  2 +-
 .../selftests/bpf/progs/verifier_bounds.c     | 91 +++++++++++++++++++
 5 files changed, 163 insertions(+), 1 deletion(-)

-- 
2.43.0


^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH bpf 1/4] bpf: Introduce tnum_step to step through tnum's members
  2026-02-17  8:53 [PATCH bpf 0/4] Fix invariant violation for single-value tnums Paul Chaignon
@ 2026-02-17  8:59 ` Paul Chaignon
  2026-02-17  9:44   ` bot+bpf-ci
  2026-02-17  9:01 ` [PATCH bpf 2/4] bpf: Improve bounds when tnum has a single possible value Paul Chaignon
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 18+ messages in thread
From: Paul Chaignon @ 2026-02-17  8:59 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Eduard Zingerman, Harishankar Vishwanathan, Srinivas Narayana,
	Santosh Nagarakatte

From: Harishankar Vishwanathan <harishankar.vishwanathan@gmail.com>

This commit introduces tnum_step(), a function that, given a tnum t and
a number z, returns the smallest member of t larger than z. The number z
must be greater than or equal to the smallest member of t and less than
the largest member of t.

The first step is to compute j, a number that keeps all of t's known
bits and matches all unknown bits to z's bits. Since j is a member of t,
it is already a candidate for the result. However, we want our result to
be (minimally) greater than z.

There are only two possible cases:

(1) Case j <= z. In this case, we want to increase the value of j and
make it > z.
(2) Case j > z. In this case, we want to decrease the value of j while
keeping it > z.

(Case 1) j <= z

t = xx11x0x0
z = 10111101 (189)
j = 10111000 (184)
         ^
         k

(Case 1.1) Let's first consider the case where j < z. We will address j
== z later.

Since z > j, there had to be a bit position that was 1 in z and 0 in j,
beyond which all positions of higher significance are equal in j and z.
Further, this position could not have been unknown in t, because the
unknown positions of t match z. This position had to be a 1 in z and a
known 0 in t.

Let k be the position of the most significant 1-to-0 flip. In our example, k
= 3 (starting the count at 1 at the least significant bit).  Setting (to
1) the unknown bits of t in positions of significance smaller than
k will not produce a result > z. Hence, we must set/unset the unknown
bits at positions of significance higher than k. Specifically, we look
for the next larger combination of 1s and 0s to place in those
positions, relative to the combination that exists in z. We can achieve
this by concatenating bits at unknown positions of t into an integer,
adding 1, and writing the bits of that result back into the
corresponding bit positions previously extracted from z.

From our example, considering only positions of significance greater
than k:

t =  xx..x
z =  10..1
    +    1
     -----
     11..0

This is the exact combination of 1s and 0s we need at the unknown bits of t
in positions of significance greater than k. Further, our result must
only increase the value minimally above z. Hence, unknown bits in
positions of significance smaller than k should remain 0. We finally
have,

result = 11110000 (240)

(Case 1.2) Now consider the case when j = z, for example

t = 1x1x0xxx
z = 10110100 (180)
j = 10110100 (180)

Matching the unknown bits of t to the bits of z yielded exactly z. To
produce a number greater than z, we must set/unset the unknown bits in
t, and *all* the unknown bits of t are candidates for being set/unset.
We can do this similarly to Case 1.1, by adding 1 to the bits extracted
from the masked bit positions of z. Essentially, this case is equivalent
to Case 1.1, with k = 0.

t =  1x1x0xxx
z =  .0.1.100
    +       1
    ---------
     .0.1.101

This is the exact combination of bits needed in the unknown positions of
t. After restoring the known bits of t, we get

result = 10110101 (181)
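
Both subcases of Case 1 can be sketched as stand-alone user-space C (a
hypothetical translation of the arithmetic above, not the kernel
implementation; msb_pos() stands in for the kernel's fls64(), and k is
assumed to stay below 64, as in the examples):

```c
#include <assert.h>
#include <stdint.h>

/* 1-indexed position of the most significant set bit, 0 if x == 0
 * (same convention as the kernel's fls64()).
 */
static int msb_pos(uint64_t x)
{
	int n = 0;

	while (x) {
		n++;
		x >>= 1;
	}
	return n;
}

/* Case 1 (j <= z): find the most significant 1-to-0 flip k, take the
 * unknown bits above k together with z's bits, add 1 so the packed
 * unknown bits advance to the next combination, then scatter the
 * result back into t's unknown positions. Case 1.2 falls out as k = 0.
 */
static uint64_t step_case1(uint64_t value, uint64_t mask, uint64_t z)
{
	uint64_t p = z & ~value & ~mask;	/* 1 in z, known 0 in t */
	int k = msb_pos(p);
	uint64_t q = ~(uint64_t)0 << k;
	uint64_t r = q & mask & z;	/* unknown positions > k, from z */
	uint64_t s = q & ~mask;		/* known positions > k set to 1 so the
					 * carry ripples across them */
	uint64_t u = (r | s) + ((uint64_t)1 << k);

	return (u & mask) | value;	/* keep carried unknown bits,
					 * restore known bits */
}
```

On the worked examples: t = xx11x0x0 is (0x30; 0xca), and
step_case1(0x30, 0xca, 189) gives 240; t = 1x1x0xxx is (0xa0; 0x57), and
step_case1(0xa0, 0x57, 180) gives 181.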

(Case 2) j > z

t = x00010x1
z = 10000010 (130)
j = 10001011 (139)
	^
	k

Since j > z, there had to be a bit position which was 0 in z, and a 1 in
j, beyond which all positions of higher significance are equal in j and
z. This position had to be a 0 in z and known 1 in t. Let k be the
position of the most significant 0-to-1 flip. In our example, k = 4.

Because of the 0-to-1 flip at position k, a member of t is greater than
z as long as its bits in positions of higher significance than k are >=
the corresponding bits of z. To make that member *minimally* greater
than z, the bits in positions of higher significance than k must exactly
equal z's bits. Hence, we simply match all of t's unknown bits in
positions more significant than k to z's bits. In positions less
significant than k, we set all of t's unknown bits to 0 to retain
minimality.

In our example, in positions of greater significance than k (=4),
t=x000. These positions are matched with z (1000) to produce 1000. In
positions of lower significance than k, t=10x1. All unknown bits are set
to 0 to produce 1001. The final result is:

result = 10001001 (137)
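
Case 2 is simpler and can be sketched the same way (again a
hypothetical user-space translation, with msb_pos() standing in for
fls64() and k assumed below 64):

```c
#include <assert.h>
#include <stdint.h>

/* 1-indexed position of the most significant set bit, 0 if x == 0. */
static int msb_pos(uint64_t x)
{
	int n = 0;

	while (x) {
		n++;
		x >>= 1;
	}
	return n;
}

/* Case 2 (j > z): find the most significant 0-to-1 flip k, match all
 * positions above k to z, and below (and including) k take t's minimum:
 * known bits as-is, unknown bits cleared.
 */
static uint64_t step_case2(uint64_t value, uint64_t mask, uint64_t z)
{
	uint64_t p = ~z & value & ~mask;	/* 0 in z, known 1 in t */
	int k = msb_pos(p);
	uint64_t q = ~(uint64_t)0 << k;

	return (q & z) | (~q & value);
}
```

On the worked example: t = x00010x1 is (0x09; 0x82), and
step_case2(0x09, 0x82, 130) gives 137.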

This concludes the computation for a result > z that is a member of t.

The procedure for tnum_step() in this commit implements the idea
described above. As a proof of correctness, we verified the algorithm
against a logical specification of tnum_step. The specification asserts
the following about the inputs t, z, and the output res:

1. res is a member of t, and
2. res is strictly greater than z, and
3. there does not exist another value res2 such that
	3a. res2 is also a member of t, and
	3b. res2 is greater than z, and
	3c. res2 is smaller than res
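
The specification also has a direct brute-force reading that is
convenient for small-scale differential testing (a hypothetical
user-space harness, not kernel code): scan upward from z + 1 until the
first member of t.

```c
#include <assert.h>
#include <stdint.h>

/* Brute-force reference for the specification: the smallest member of
 * (value; mask) strictly greater than z. Terminates only when such a
 * member exists, i.e. z < tmax, per tnum_step()'s contract.
 */
static uint64_t tnum_step_spec(uint64_t value, uint64_t mask, uint64_t z)
{
	uint64_t x;

	for (x = z + 1; ; x++)
		if ((x & ~mask) == value)	/* x is a member of t */
			return x;
}
```

On the worked examples above: step(xx11x0x0, 189) = 240,
step(1x1x0xxx, 180) = 181, and step(x00010x1, 130) = 137.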

We checked the implementation against this logical specification using
an SMT solver. The verification formula in SMTLIB format is available
at [1]. The verification returned an "unsat", indicating that no input
assignment exists for which the implementation and the specification
produce different outputs.

In addition, we also automatically generated the logical encoding of the
C implementation using Agni [2] and verified it against the same
specification. This verification also returned an "unsat", confirming
that the implementation is equivalent to the specification. The formula
for this check is also available at [3].

Link: https://pastebin.com/raw/2eRWbiit [1]
Link: https://github.com/bpfverif/agni [2]
Link: https://pastebin.com/raw/EztVbBJ2 [3]
Signed-off-by: Paul Chaignon <paul.chaignon@gmail.com>
Co-developed-by: Srinivas Narayana <srinivas.narayana@rutgers.edu>
Signed-off-by: Srinivas Narayana <srinivas.narayana@rutgers.edu>
Co-developed-by: Santosh Nagarakatte <santosh.nagarakatte@rutgers.edu>
Signed-off-by: Santosh Nagarakatte <santosh.nagarakatte@rutgers.edu>
Signed-off-by: Harishankar Vishwanathan <harishankar.vishwanathan@gmail.com>
---
 include/linux/tnum.h |  3 +++
 kernel/bpf/tnum.c    | 52 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 55 insertions(+)

diff --git a/include/linux/tnum.h b/include/linux/tnum.h
index fa4654ffb621..ca2cfec8de08 100644
--- a/include/linux/tnum.h
+++ b/include/linux/tnum.h
@@ -131,4 +131,7 @@ static inline bool tnum_subreg_is_const(struct tnum a)
 	return !(tnum_subreg(a)).mask;
 }
 
+/* Returns the smallest member of t larger than z */
+u64 tnum_step(struct tnum t, u64 z);
+
 #endif /* _LINUX_TNUM_H */
diff --git a/kernel/bpf/tnum.c b/kernel/bpf/tnum.c
index 26fbfbb01700..55197c9fadc2 100644
--- a/kernel/bpf/tnum.c
+++ b/kernel/bpf/tnum.c
@@ -269,3 +269,55 @@ struct tnum tnum_bswap64(struct tnum a)
 {
 	return TNUM(swab64(a.value), swab64(a.mask));
 }
+
+/* Given tnum t, and a number z such that tmin <= z < tmax, where tmin
+ * is the smallest member of t (= t.value) and tmax is the largest
+ * member of t (= t.value | t.mask), returns the smallest member of t
+ * larger than z.
+ *
+ * For example,
+ * t      = x11100x0
+ * z      = 11110001 (241)
+ * result = 11110010 (242)
+ *
+ * Note: if this function is called with z >= tmax, it just returns
+ * early with tmax; if this function is called with z < tmin, the
+ * algorithm already returns tmin.
+ */
+u64 tnum_step(struct tnum t, u64 z)
+{
+	u64 tmax, j, p, q, r, s, v, u, w, res;
+	u8 k;
+
+	tmax = t.value | t.mask;
+
+	/* if z >= largest member of t, return largest member of t */
+	if (z >= tmax)
+		return tmax;
+
+	/* keep t's known bits, and match all unknown bits to z */
+	j = t.value | (z & t.mask);
+
+	if (j > z) {
+		p = ~z & t.value & ~t.mask;
+		k = fls64(p); /* k is the most-significant 0-to-1 flip */
+		q = U64_MAX << k;
+		r = q & z; /* positions > k matched to z */
+		s = ~q & t.value; /* positions <= k matched to t.value */
+		v = r | s;
+		res = v;
+	} else {
+		p = z & ~t.value & ~t.mask;
+		k = fls64(p); /* k is the most-significant 1-to-0 flip */
+		q = U64_MAX << k;
+		r = q & t.mask & z; /* unknown positions > k, matched to z */
+		s = q & ~t.mask; /* known positions > k, set to 1 */
+		v = r | s;
+		/* add 1 to unknown positions > k to make value greater than z */
+		u = v + (1ULL << k);
+		/* extract bits in unknown positions > k from u, rest from t.value */
+		w = (u & t.mask) | t.value;
+		res = w;
+	}
+	return res;
+}
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH bpf 2/4] bpf: Improve bounds when tnum has a single possible value
  2026-02-17  8:53 [PATCH bpf 0/4] Fix invariant violation for single-value tnums Paul Chaignon
  2026-02-17  8:59 ` [PATCH bpf 1/4] bpf: Introduce tnum_step to step through tnum's members Paul Chaignon
@ 2026-02-17  9:01 ` Paul Chaignon
  2026-02-17 18:58   ` Alexei Starovoitov
  2026-02-17 22:57   ` Eduard Zingerman
  2026-02-17  9:04 ` [PATCH bpf 3/4] selftests/bpf: Test refinement of single-value tnum Paul Chaignon
  2026-02-17  9:06 ` [PATCH bpf 4/4] selftests/bpf: Avoid simplification of crafted bounds test Paul Chaignon
  3 siblings, 2 replies; 18+ messages in thread
From: Paul Chaignon @ 2026-02-17  9:01 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Eduard Zingerman, Harishankar Vishwanathan, Srinivas Narayana,
	Santosh Nagarakatte

We're hitting an invariant violation in Cilium that sometimes leads to
BPF programs being rejected and Cilium failing to start [1]. The
following extract from verifier logs shows what's happening:

  from 201 to 236: R1=0 R6=ctx() R7=1 R9=scalar(smin=umin=smin32=umin32=3584,smax=umax=smax32=umax32=3840,var_off=(0xe00; 0x100)) R10=fp0
  236: R1=0 R6=ctx() R7=1 R9=scalar(smin=umin=smin32=umin32=3584,smax=umax=smax32=umax32=3840,var_off=(0xe00; 0x100)) R10=fp0
  ; if (magic == MARK_MAGIC_HOST || magic == MARK_MAGIC_OVERLAY || magic == MARK_MAGIC_ENCRYPT) @ bpf_host.c:1337
  236: (16) if w9 == 0xe00 goto pc+45   ; R9=scalar(smin=umin=smin32=umin32=3585,smax=umax=smax32=umax32=3840,var_off=(0xe00; 0x100))
  237: (16) if w9 == 0xf00 goto pc+1
  verifier bug: REG INVARIANTS VIOLATION (false_reg1): range bounds violation u64=[0xe01, 0xe00] s64=[0xe01, 0xe00] u32=[0xe01, 0xe00] s32=[0xe01, 0xe00] var_off=(0xe00, 0x0)

We reach instruction 236 with two possible values for R9, 0xe00 and
0xf00. This is perfectly reflected in the tnum, but of course the ranges
are less accurate and cover [0xe00; 0xf00]. Taking the fallthrough path
at instruction 236 allows the verifier to reduce the range to
[0xe01; 0xf00]. The tnum is however not updated.

With these ranges, at instruction 237, the verifier is not able to
deduce that R9 is always equal to 0xf00. Hence the fallthrough path is
explored first, the verifier refines the bounds using the assumption
that R9 != 0xf00, and ends up with an invariant violation.

This pattern of impossible branch + bounds refinement is common to all
invariant violations seen so far. The long-term solution is likely to
rely on the refinement + invariant violation check to detect dead
branches, as started by Eduard. To fix the current issue, we need
something with less refactoring that we can backport.

This patch uses the tnum_step helper introduced in the previous patch to
detect the above situation. In particular, three cases are now detected
in the bounds refinement:

1. The u64 range and the tnum only overlap in umin.
   u64:  ---[xxxxxx]-----
   tnum: --xx----------x-

2. The u64 range and the tnum only overlap in the maximum value
   represented by the tnum, called tmax.
   u64:  ---[xxxxxx]-----
   tnum: xx-----x--------

3. The u64 range and the tnum only overlap in a single value in the
   middle of the range.
   u64:  ---[xxxxxx]-----
   tnum: xx----x-------x-

To detect these three cases, we call tnum_step(tnum, umin), which
returns the smallest member of the tnum greater than umin, called
tnum_next here. We're in case (1) if umin is part of the tnum and
tnum_next is greater than umax. We're in case (2) if umin is not part of
the tnum and tnum_next is equal to tmax. Finally, we're in case (3) if
tnum_next is less than or equal to umax AND calling tnum_step a second
time gives us a value past umax.
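
The three-case classification can be prototyped in user space against a
brute-force step function (hypothetical names, mirroring the cases as
described above rather than the exact kernel code; inputs are assumed
already mutually clamped, as __update_reg64_bounds does before the new
checks):

```c
#include <assert.h>
#include <stdint.h>

/* Brute-force stand-in for tnum_step(): smallest member of
 * (value; mask) strictly greater than z.
 */
static uint64_t step(uint64_t value, uint64_t mask, uint64_t z)
{
	uint64_t x;

	for (x = z + 1; ; x++)
		if ((x & ~mask) == value)
			return x;
}

/* Returns 1 and stores the single overlap value in *res when the u64
 * range [umin, umax] and the tnum (value; mask) intersect in exactly
 * one value, per the three cases above; returns 0 otherwise.
 */
static int single_overlap(uint64_t value, uint64_t mask,
			  uint64_t umin, uint64_t umax, uint64_t *res)
{
	uint64_t tmax = value | mask;
	uint64_t next = step(value, mask, umin);

	if ((umin & ~mask) == value) {		/* umin is a member */
		if (next > umax) {		/* case 1: only umin overlaps */
			*res = umin;
			return 1;
		}
	} else if (next == tmax) {		/* case 2: only tmax overlaps */
		*res = next;
		return 1;
	} else if (next <= umax &&
		   step(value, mask, next) > umax) {	/* case 3 */
		*res = next;
		return 1;
	}
	return 0;
}
```

On the Cilium trace above, single_overlap(0xe00, 0x100, 0xe01, 0xf00,
&v) hits case (2) with v = 0xf00, while the full range [0xe00, 0xf00]
still has two members and is left untouched.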

This change implements these three cases. With it, the above bytecode
looks as follows:

  0: (85) call bpf_get_prandom_u32#7    ; R0=scalar()
  1: (47) r0 |= 3584                    ; R0=scalar(smin=0x8000000000000e00,umin=umin32=3584,smin32=0x80000e00,var_off=(0xe00; 0xfffffffffffff1ff))
  2: (57) r0 &= 3840                    ; R0=scalar(smin=umin=smin32=umin32=3584,smax=umax=smax32=umax32=3840,var_off=(0xe00; 0x100))
  3: (15) if r0 == 0xe00 goto pc+2      ; R0=3840
  4: (15) if r0 == 0xf00 goto pc+1
  4: R0=3840
  6: (95) exit

In addition to the new selftests, this change was also verified with
Agni [3]. For the record, the raw SMT is available at [4]. The property
it verifies is that: If a concrete value x is contained in all input
abstract values, after __update_reg_bounds, it will continue to be
contained in all output abstract values.

Link: https://github.com/cilium/cilium/issues/44216 [1]
Link: https://pchaigno.github.io/test-verifier-complexity.html [2]
Link: https://github.com/bpfverif/agni [3]
Link: https://pastebin.com/raw/9BJAi6Rw [4]
Co-developed-by: Harishankar Vishwanathan <harishankar.vishwanathan@gmail.com>
Signed-off-by: Harishankar Vishwanathan <harishankar.vishwanathan@gmail.com>
Signed-off-by: Paul Chaignon <paul.chaignon@gmail.com>
---
 kernel/bpf/verifier.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index dbaafb64d3bd..c4478c874616 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2379,6 +2379,8 @@ static void __update_reg32_bounds(struct bpf_reg_state *reg)
 
 static void __update_reg64_bounds(struct bpf_reg_state *reg)
 {
+	u64 tnum_next;
+
 	/* min signed is max(sign bit) | min(other bits) */
 	reg->smin_value = max_t(s64, reg->smin_value,
 				reg->var_off.value | (reg->var_off.mask & S64_MIN));
@@ -2388,6 +2390,20 @@ static void __update_reg64_bounds(struct bpf_reg_state *reg)
 	reg->umin_value = max(reg->umin_value, reg->var_off.value);
 	reg->umax_value = min(reg->umax_value,
 			      reg->var_off.value | reg->var_off.mask);
+
+	/* Check if u64 and tnum overlap in a single value */
+	tnum_next = tnum_step(reg->var_off, reg->umin_value);
+	if ((reg->umin_value & ~reg->var_off.mask) == reg->var_off.value) {
+		/* The only overlap is umin */
+		if (tnum_next > reg->umax_value)
+			___mark_reg_known(reg, reg->umin_value);
+	} else if (tnum_next == (reg->var_off.value | reg->var_off.mask)) {
+		/* The only overlap is tmax */
+		___mark_reg_known(reg, tnum_next);
+	} else if (tnum_next <= reg->umax_value &&
+		   tnum_step(reg->var_off, tnum_next) > reg->umax_value) {
+		___mark_reg_known(reg, tnum_next);
+	}
 }
 
 static void __update_reg_bounds(struct bpf_reg_state *reg)
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH bpf 3/4] selftests/bpf: Test refinement of single-value tnum
  2026-02-17  8:53 [PATCH bpf 0/4] Fix invariant violation for single-value tnums Paul Chaignon
  2026-02-17  8:59 ` [PATCH bpf 1/4] bpf: Introduce tnum_step to step through tnum's members Paul Chaignon
  2026-02-17  9:01 ` [PATCH bpf 2/4] bpf: Improve bounds when tnum has a single possible value Paul Chaignon
@ 2026-02-17  9:04 ` Paul Chaignon
  2026-02-17  9:06 ` [PATCH bpf 4/4] selftests/bpf: Avoid simplification of crafted bounds test Paul Chaignon
  3 siblings, 0 replies; 18+ messages in thread
From: Paul Chaignon @ 2026-02-17  9:04 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Eduard Zingerman, Harishankar Vishwanathan, Srinivas Narayana,
	Santosh Nagarakatte

This patch introduces selftests to cover the new bounds refinement
logic introduced in the previous patch. Without the previous patch,
the first two tests fail because of the invariant violation they
trigger. The last test fails because the R10 access is not detected as
dead code. In addition, all tests fail because R0 has a non-constant
value in the verifier logs.

Signed-off-by: Paul Chaignon <paul.chaignon@gmail.com>
---
 .../selftests/bpf/progs/verifier_bounds.c     | 91 +++++++++++++++++++
 1 file changed, 91 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/verifier_bounds.c b/tools/testing/selftests/bpf/progs/verifier_bounds.c
index 560531404bce..41dd249faadd 100644
--- a/tools/testing/selftests/bpf/progs/verifier_bounds.c
+++ b/tools/testing/selftests/bpf/progs/verifier_bounds.c
@@ -1863,4 +1863,95 @@ l1_%=:	r0 = 1;				\
 	: __clobber_all);
 }
 
+/* This test covers the bounds deduction when the u64 range and the tnum
+ * overlap only at umax. After instruction 3, the ranges look as follows:
+ *
+ * 0    umin=0xe01     umax=0xf00                              U64_MAX
+ * |    [xxxxxxxxxxxxxx]                                       |
+ * |----------------------------|------------------------------|
+ * |   x               x                                       | tnum values
+ *
+ * The verifier can therefore deduce that R0=0xf00=3840.
+ */
+SEC("socket")
+__description("bounds refinement with single-value tnum on umax")
+__msg("3: (15) if r0 == 0xe00 {{.*}} R0=3840")
+__success __log_level(2)
+__flag(BPF_F_TEST_REG_INVARIANTS)
+__naked void bounds_refinement_tnum_umax(void *ctx)
+{
+	asm volatile("			\
+	call %[bpf_get_prandom_u32];	\
+	r0 |= 0xe00;			\
+	r0 &= 0xf00;			\
+	if r0 == 0xe00 goto +2;		\
+	if r0 == 0xf00 goto +1;		\
+	r0 = 0;				\
+	exit;				\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+/* This test covers the bounds deduction when the u64 range and the tnum
+ * overlap only at umin. After instruction 3, the ranges look as follows:
+ *
+ * 0    umin=0xe00     umax=0xeff                              U64_MAX
+ * |    [xxxxxxxxxxxxxx]                                       |
+ * |----------------------------|------------------------------|
+ * |    x               x                                      | tnum values
+ *
+ * The verifier can therefore deduce that R0=0xe00=3584.
+ */
+SEC("socket")
+__description("bounds refinement with single-value tnum on umin")
+__msg("3: (15) if r0 == 0xf00 {{.*}} R0=3584")
+__success __log_level(2)
+__flag(BPF_F_TEST_REG_INVARIANTS)
+__naked void bounds_refinement_tnum_umin(void *ctx)
+{
+	asm volatile("			\
+	call %[bpf_get_prandom_u32];	\
+	r0 |= 0xe00;			\
+	r0 &= 0xf00;			\
+	if r0 == 0xf00 goto +2;		\
+	if r0 == 0xe00 goto +1;		\
+	r0 = 0;				\
+	exit;				\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+/* This test covers the bounds deduction when the only possible tnum value is
+ * in the middle of the u64 range. After instruction 3, the ranges look as
+ * follows:
+ *
+ * 0    umin=0x7cf     umax=0x7df                              U64_MAX
+ * |    [xxxxxxxxxxxxxx]                                       |
+ * |----------------------------|------------------------------|
+ * |      x            x            x            x            x| tnum values
+ *
+ * The verifier can therefore deduce that R0=0x7d0=2000. Instruction 5 is
+ * therefore dead code.
+ */
+SEC("socket")
+__description("bounds refinement with single-value tnum in middle of range")
+__msg("3: (a5) if r0 < 0x7cf {{.*}} R0=2000")
+__success __log_level(2)
+__naked void bounds_refinement_tnum_middle(void *ctx)
+{
+	asm volatile("			\
+	call %[bpf_get_prandom_u32];	\
+	if r0 & 0x0f goto +4;		\
+	if r0 > 0x7df goto +3;		\
+	if r0 < 0x7cf goto +2;		\
+	if r0 == 0x7d0 goto +1;		\
+	r10 = 0;			\
+	exit;				\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
 char _license[] SEC("license") = "GPL";
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH bpf 4/4] selftests/bpf: Avoid simplification of crafted bounds test
  2026-02-17  8:53 [PATCH bpf 0/4] Fix invariant violation for single-value tnums Paul Chaignon
                   ` (2 preceding siblings ...)
  2026-02-17  9:04 ` [PATCH bpf 3/4] selftests/bpf: Test refinement of single-value tnum Paul Chaignon
@ 2026-02-17  9:06 ` Paul Chaignon
  3 siblings, 0 replies; 18+ messages in thread
From: Paul Chaignon @ 2026-02-17  9:06 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Eduard Zingerman, Harishankar Vishwanathan, Srinivas Narayana,
	Santosh Nagarakatte

The reg_bounds_crafted tests validate the verifier's range analysis
logic. They focus on the actual ranges and thus ignore the tnum. As a
consequence, they carry the assumption that the tested cases can be
reproduced in userspace without using the tnum information.

Unfortunately, the previous change the refinement logic breaks that
assumption for one test case:

  (u64)2147483648 (u32)<op> [4294967294; 0x100000000]

The tested bytecode is shown below. Without our previous improvement, on
the false branch of the condition, R7 is only known to have u64 range
[0xfffffffe; 0x100000000]. With our improvement, and using the tnum
information, we can deduce that R7 equals 0x100000000.

  19: (bc) w0 = w6                ; R6=0x80000000
  20: (bc) w0 = w7                ; R7=scalar(smin=umin=0xfffffffe,smax=umax=0x100000000,smin32=-2,smax32=0,var_off=(0x0; 0x1ffffffff))
  21: (be) if w6 <= w7 goto pc+3  ; R6=0x80000000 R7=0x100000000

R7's tnum is (0; 0x1ffffffff). On the false branch, regs_refine_cond_op
refines R7's u32 range to [0; 0x7fffffff]. Then, __reg32_deduce_bounds
refines the s32 range to 0 using u32 and finally also sets u32=0.
From this, __reg_bound_offset improves the tnum to (0; 0x100000000).
Finally, our previous patch uses this new tnum to deduce that it
intersects with u64=[0xfffffffe; 0x100000000] in only a single value:
0x100000000.

Because the verifier uses the tnum to reach this constant value, the
selftest is unable to reproduce it by only simulating ranges. The
solution implemented in this patch is to change the test case such that
there is more than one overlapping value between the u64 range and the
tnum. The maximum u64 value is thus changed from 0x100000000 to
0x300000000.

Signed-off-by: Paul Chaignon <paul.chaignon@gmail.com>
---
 tools/testing/selftests/bpf/prog_tests/reg_bounds.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/reg_bounds.c b/tools/testing/selftests/bpf/prog_tests/reg_bounds.c
index d93a0c7b1786..0322f817d07b 100644
--- a/tools/testing/selftests/bpf/prog_tests/reg_bounds.c
+++ b/tools/testing/selftests/bpf/prog_tests/reg_bounds.c
@@ -2091,7 +2091,7 @@ static struct subtest_case crafted_cases[] = {
 	{U64, S64, {0, 0xffffffffULL}, {0x7fffffff, 0x7fffffff}},
 
 	{U64, U32, {0, 0x100000000}, {0, 0}},
-	{U64, U32, {0xfffffffe, 0x100000000}, {0x80000000, 0x80000000}},
+	{U64, U32, {0xfffffffe, 0x300000000}, {0x80000000, 0x80000000}},
 
 	{U64, S32, {0, 0xffffffff00000000ULL}, {0, 0}},
 	/* these are tricky cases where lower 32 bits allow to tighten 64
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH bpf 1/4] bpf: Introduce tnum_step to step through tnum's members
  2026-02-17  8:59 ` [PATCH bpf 1/4] bpf: Introduce tnum_step to step through tnum's members Paul Chaignon
@ 2026-02-17  9:44   ` bot+bpf-ci
  2026-02-18  2:36     ` Harishankar Vishwanathan
  0 siblings, 1 reply; 18+ messages in thread
From: bot+bpf-ci @ 2026-02-17  9:44 UTC (permalink / raw)
  To: paul.chaignon, bpf
  Cc: ast, daniel, andrii, eddyz87, harishankar.vishwanathan,
	srinivas.narayana, santosh.nagarakatte, ast, andrii, daniel,
	martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai

[-- Attachment #1: Type: text/plain, Size: 2117 bytes --]

> diff --git a/kernel/bpf/tnum.c b/kernel/bpf/tnum.c
> index 26fbfbb01700..55197c9fadc2 100644
> --- a/kernel/bpf/tnum.c
> +++ b/kernel/bpf/tnum.c
> @@ -269,3 +269,55 @@ struct tnum tnum_bswap64(struct tnum a)

[ ... ]

> +u64 tnum_step(struct tnum t, u64 z)
> +{
> +	u64 tmax, j, p, q, r, s, v, u, w, res;
> +	u8 k;
> +
> +	tmax = t.value | t.mask;
> +
> +	/* if z >= largest member of t, return largest member of t */
> +	if (z >= tmax)
> +		return tmax;
> +
> +	/* keep t's known bits, and match all unknown bits to z */
> +	j = t.value | (z & t.mask);
> +
> +	if (j > z) {
> +		p = ~z & t.value & ~t.mask;
> +		k = fls64(p); /* k is the most-significant 0-to-1 flip */
> +		q = U64_MAX << k;

When t.value has bit 63 set and z does not (reachable when
z < tmin, which the function comment says is handled), fls64(p)
returns 64, making this U64_MAX << 64.

Shifting a 64-bit value by 64 is undefined behavior in C.
tnum_range() in the same file already guards against this:

kernel/bpf/tnum.c:tnum_range() {
    ...
    /* special case, needed because 1ULL << 64 is undefined */
    if (bits > 63)
        return tnum_unknown;
    ...
}

Should tnum_step() have a similar guard when k == 64?

> +		r = q & z; /* positions > k matched to z */
> +		s = ~q & t.value; /* positions <= k matched to t.value */
> +		v = r | s;
> +		res = v;
> +	} else {

[ ... ]

> +	}
> +	return res;
> +}


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/22092666641

AI-authorship-score: low
AI-authorship-explanation: Detailed hand-crafted mathematical proofs with academic co-authors from Rutgers University and references to SMT solver verification are consistent with human academic research.
issues-found: 1
issue-severity-score: medium
issue-severity-explanation: Undefined behavior from shifting a 64-bit value by 64 in the j > z branch; currently unreachable from the only caller in the series but reachable through the documented API contract.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH bpf 2/4] bpf: Improve bounds when tnum has a single possible value
  2026-02-17  9:01 ` [PATCH bpf 2/4] bpf: Improve bounds when tnum has a single possible value Paul Chaignon
@ 2026-02-17 18:58   ` Alexei Starovoitov
  2026-02-17 22:57   ` Eduard Zingerman
  1 sibling, 0 replies; 18+ messages in thread
From: Alexei Starovoitov @ 2026-02-17 18:58 UTC (permalink / raw)
  To: Paul Chaignon
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Eduard Zingerman, Harishankar Vishwanathan, Srinivas Narayana,
	Santosh Nagarakatte

On Tue, Feb 17, 2026 at 1:01 AM Paul Chaignon <paul.chaignon@gmail.com> wrote:
>
> We're hitting an invariant violation in Cilium that sometimes leads to
> BPF programs being rejected and Cilium failing to start [1]. The
> following extract from verifier logs shows what's happening:
>
>   from 201 to 236: R1=0 R6=ctx() R7=1 R9=scalar(smin=umin=smin32=umin32=3584,smax=umax=smax32=umax32=3840,var_off=(0xe00; 0x100)) R10=fp0
>   236: R1=0 R6=ctx() R7=1 R9=scalar(smin=umin=smin32=umin32=3584,smax=umax=smax32=umax32=3840,var_off=(0xe00; 0x100)) R10=fp0
>   ; if (magic == MARK_MAGIC_HOST || magic == MARK_MAGIC_OVERLAY || magic == MARK_MAGIC_ENCRYPT) @ bpf_host.c:1337
>   236: (16) if w9 == 0xe00 goto pc+45   ; R9=scalar(smin=umin=smin32=umin32=3585,smax=umax=smax32=umax32=3840,var_off=(0xe00; 0x100))
>   237: (16) if w9 == 0xf00 goto pc+1
>   verifier bug: REG INVARIANTS VIOLATION (false_reg1): range bounds violation u64=[0xe01, 0xe00] s64=[0xe01, 0xe00] u32=[0xe01, 0xe00] s32=[0xe01, 0xe00] var_off=(0xe00, 0x0)
>
> We reach instruction 236 with two possible values for R9, 0xe00 and
> 0xf00. This is perfectly reflected in the tnum, but of course the ranges
> are less accurate and cover [0xe00; 0xf00]. Taking the fallthrough path
> at instruction 236 allows the verifier to reduce the range to
> [0xe01; 0xf00]. The tnum is however not updated.
>
> With these ranges, at instruction 237, the verifier is not able to
> deduce that R9 is always equal to 0xf00. Hence the fallthrough pass is
> explored first, the verifier refines the bounds using the assumption
> that R9 != 0xf00, and ends up with an invariant violation.
>
> This pattern of impossible branch + bounds refinement is common to all
> invariant violations seen so far. The long-term solution is likely to
> rely on the refinement + invariant violation check to detect dead
> branches, as started by Eduard. To fix the current issue, we need
> something with less refactoring that we can backport.
>
> This patch uses the tnum_step helper introduced in the previous patch to
> detect the above situation. In particular, three cases are now detected
> in the bounds refinement:
>
> 1. The u64 range and the tnum only overlap in umin.
>    u64:  ---[xxxxxx]-----
>    tnum: --xx----------x-
>
> 2. The u64 range and the tnum only overlap in the maximum value
>    represented by the tnum, called tmax.
>    u64:  ---[xxxxxx]-----
>    tnum: xx-----x--------
>
> 3. The u64 range and the tnum only overlap once between umin
>    (excluded) and umax.
>    u64:  ---[xxxxxx]-----
>    tnum: xx----x-------x-
>
> To detect these three cases, we call tnum_step(tnum, umin), which
> returns the smallest member of the tnum greater than umin, called
> tnum_next here. We're in case (1) if umin is part of the tnum and
> tnum_next is greater than umax. We're in case (2) if umin is not part of
> the tnum and tnum_next is equal to tmax. Finally, we're in case (3) if
> tnum_next is less than or equal to umax AND calling tnum_step a second
> time gives us a value past umax.
>
> This change implements these three cases. With it, the above bytecode
> looks as follows:
>
>   0: (85) call bpf_get_prandom_u32#7    ; R0=scalar()
>   1: (47) r0 |= 3584                    ; R0=scalar(smin=0x8000000000000e00,umin=umin32=3584,smin32=0x80000e00,var_off=(0xe00; 0xfffffffffffff1ff))
>   2: (57) r0 &= 3840                    ; R0=scalar(smin=umin=smin32=umin32=3584,smax=umax=smax32=umax32=3840,var_off=(0xe00; 0x100))
>   3: (15) if r0 == 0xe00 goto pc+2      ; R0=3840
>   4: (15) if r0 == 0xf00 goto pc+1
>   4: R0=3840
>   6: (95) exit
>
> In addition to the new selftests, this change was also verified with
> Agni [3]. For the record, the raw SMT is available at [4]. The property
> it verifies is: if a concrete value x is contained in all input
> abstract values, then after __update_reg_bounds, it continues to be
> contained in all output abstract values.
>
> Link: https://github.com/cilium/cilium/issues/44216 [1]
> Link: https://pchaigno.github.io/test-verifier-complexity.html [2]
> Link: https://github.com/bpfverif/agni [3]
> Link: https://pastebin.com/raw/9BJAi6Rw [4]
> Co-developed-by: Harishankar Vishwanathan <harishankar.vishwanathan@gmail.com>
> Signed-off-by: Harishankar Vishwanathan <harishankar.vishwanathan@gmail.com>
> Signed-off-by: Paul Chaignon <paul.chaignon@gmail.com>
> ---
>  kernel/bpf/verifier.c | 16 ++++++++++++++++
>  1 file changed, 16 insertions(+)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index dbaafb64d3bd..c4478c874616 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -2379,6 +2379,8 @@ static void __update_reg32_bounds(struct bpf_reg_state *reg)
>
>  static void __update_reg64_bounds(struct bpf_reg_state *reg)
>  {
> +       u64 tnum_next;
> +
>         /* min signed is max(sign bit) | min(other bits) */
>         reg->smin_value = max_t(s64, reg->smin_value,
>                                 reg->var_off.value | (reg->var_off.mask & S64_MIN));
> @@ -2388,6 +2390,20 @@ static void __update_reg64_bounds(struct bpf_reg_state *reg)
>         reg->umin_value = max(reg->umin_value, reg->var_off.value);
>         reg->umax_value = min(reg->umax_value,
>                               reg->var_off.value | reg->var_off.mask);
> +
> +       /* Check if u64 and tnum overlap in a single value */
> +       tnum_next = tnum_step(reg->var_off, reg->umin_value);
> +       if ((reg->umin_value & ~reg->var_off.mask) == reg->var_off.value) {
> +               /* The only overlap is umin */
> +               if (tnum_next > reg->umax_value)
> +                       ___mark_reg_known(reg, reg->umin_value);
> +       } else if (tnum_next == (reg->var_off.value | reg->var_off.mask)) {
> +               /* The only overlap is tmax */
> +               ___mark_reg_known(reg, tnum_next);
> +       } else if (tnum_next <= reg->umax_value &&
> +                  tnum_step(reg->var_off, tnum_next) > reg->umax_value) {
> +               ___mark_reg_known(reg, tnum_next);
> +       }
>  }

Looks like it's causing a big regression in one of sched_ext progs:
scx_cosmos-5c590c8858df35e1_out_bpf.bpf.o cosmos_select_cpu success +542.86 %

Please investigate. It needs to be understood.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH bpf 2/4] bpf: Improve bounds when tnum has a single possible value
  2026-02-17  9:01 ` [PATCH bpf 2/4] bpf: Improve bounds when tnum has a single possible value Paul Chaignon
  2026-02-17 18:58   ` Alexei Starovoitov
@ 2026-02-17 22:57   ` Eduard Zingerman
  2026-02-18  6:06     ` Harishankar Vishwanathan
  1 sibling, 1 reply; 18+ messages in thread
From: Eduard Zingerman @ 2026-02-17 22:57 UTC (permalink / raw)
  To: Paul Chaignon, bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Harishankar Vishwanathan, Srinivas Narayana, Santosh Nagarakatte

On Tue, 2026-02-17 at 10:01 +0100, Paul Chaignon wrote:

[...]

> This patch uses the tnum_step helper introduced in the previous patch to
> detect the above situation. In particular, three cases are now detected
> in the bounds refinement:
>
> 1. The u64 range and the tnum only overlap in umin.
>    u64:  ---[xxxxxx]-----
>    tnum: --xx----------x-
>
> 2. The u64 range and the tnum only overlap in the maximum value
>    represented by the tnum, called tmax.
>    u64:  ---[xxxxxx]-----
>    tnum: xx-----x--------
>
> > 3. The u64 range and the tnum only overlap once between umin
> >    (excluded) and umax.
>    u64:  ---[xxxxxx]-----
>    tnum: xx----x-------x-

Nit: please put these diagrams in the source code.

[...]

> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index dbaafb64d3bd..c4478c874616 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -2379,6 +2379,8 @@ static void __update_reg32_bounds(struct bpf_reg_state *reg)
>
>  static void __update_reg64_bounds(struct bpf_reg_state *reg)
>  {
> +	u64 tnum_next;
> +
>  	/* min signed is max(sign bit) | min(other bits) */
>  	reg->smin_value = max_t(s64, reg->smin_value,
>  				reg->var_off.value | (reg->var_off.mask & S64_MIN));
> @@ -2388,6 +2390,20 @@ static void __update_reg64_bounds(struct bpf_reg_state *reg)
>  	reg->umin_value = max(reg->umin_value, reg->var_off.value);
>  	reg->umax_value = min(reg->umax_value,
>  			      reg->var_off.value | reg->var_off.mask);
> +
> +	/* Check if u64 and tnum overlap in a single value */
> +	tnum_next = tnum_step(reg->var_off, reg->umin_value);
> +	if ((reg->umin_value & ~reg->var_off.mask) == reg->var_off.value) {
> +		/* The only overlap is umin */
> +		if (tnum_next > reg->umax_value)
> +			___mark_reg_known(reg, reg->umin_value);
> +	} else if (tnum_next == (reg->var_off.value | reg->var_off.mask)) {
> +		/* The only overlap is tmax */
> +		___mark_reg_known(reg, tnum_next);
> +	} else if (tnum_next <= reg->umax_value &&
> +		   tnum_step(reg->var_off, tnum_next) > reg->umax_value) {
> +		___mark_reg_known(reg, tnum_next);
> +	}
>  }

I think this hunk should be rewritten as follows:

        tnum_next = tnum_step(reg->var_off, reg->umin_value);
        tnum_max = reg->var_off.value | reg->var_off.mask;
        tnum_min = reg->var_off.value;
        if (tnum_next > reg->umax_value) {
                /* The only overlap is umin */
                ___mark_reg_known(reg, tnum_min);
        } else if (tnum_min < reg->umin_value && tnum_next == tnum_max) {
                /* The only overlap is tmax */
                ___mark_reg_known(reg, tnum_next);
        } else if (tnum_next <= reg->umax_value &&
                   tnum_step(reg->var_off, tnum_next) > reg->umax_value) {
                ___mark_reg_known(reg, tnum_next);
        }

- At least to me, it is easier to understand this way.
- There is no need to gate the condition `tnum_next > reg->umax_value`:
  if the next tnum member overshoots reg->umax_value, then only tnum_min
  is left.
- Incidentally, it fixes the scx_cosmos regression
  (probably because the first condition is relaxed).

Wdyt?

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH bpf 1/4] bpf: Introduce tnum_step to step through tnum's members
  2026-02-17  9:44   ` bot+bpf-ci
@ 2026-02-18  2:36     ` Harishankar Vishwanathan
  2026-02-18  2:51       ` Alexei Starovoitov
  0 siblings, 1 reply; 18+ messages in thread
From: Harishankar Vishwanathan @ 2026-02-18  2:36 UTC (permalink / raw)
  To: bot+bpf-ci
  Cc: paul.chaignon, bpf, ast, daniel, andrii, eddyz87,
	srinivas.narayana, santosh.nagarakatte, martin.lau, yonghong.song,
	clm, ihor.solodrai

On Tue, Feb 17, 2026 at 4:44 AM <bot+bpf-ci@kernel.org> wrote:
>
> > diff --git a/kernel/bpf/tnum.c b/kernel/bpf/tnum.c
> > index 26fbfbb01700..55197c9fadc2 100644
> > --- a/kernel/bpf/tnum.c
> > +++ b/kernel/bpf/tnum.c
> > @@ -269,3 +269,55 @@ struct tnum tnum_bswap64(struct tnum a)
>
> [ ... ]
>
> > +u64 tnum_step(struct tnum t, u64 z)
> > +{
> > +     u64 tmax, j, p, q, r, s, v, u, w, res;
> > +     u8 k;
> > +
> > +     tmax = t.value | t.mask;
> > +
> > +     /* if z >= largest member of t, return largest member of t */
> > +     if (z >= tmax)
> > +             return tmax;
> > +
> > +     /* keep t's known bits, and match all unknown bits to z */
> > +     j = t.value | (z & t.mask);
> > +
> > +     if (j > z) {
> > +             p = ~z & t.value & ~t.mask;
> > +             k = fls64(p); /* k is the most-significant 0-to-1 flip */
> > +             q = U64_MAX << k;
>
> When t.value has bit 63 set and z does not (reachable when
> z < tmin, which the function comment says is handled), fls64(p)
> returns 64, making this U64_MAX << 64.

It is possible that the shift value k can be 64. This happens when
(1) j > z (i.e., this cannot happen in the else branch, where j <= z),
(2) the 63rd bit of z is 0, and (3) the 63rd bit of t is a known 1
(that bit is 0 in t.mask and 1 in t.value).

If I recall correctly, the existing code assumed that the shift by 64
would return a 0, and that consequently the remaining computation would
set res to tmin (re the comment:
"if this function is called with z < tmin, the algorithm already returns tmin").

> Shifting a 64-bit value by 64 is undefined behavior in C.
> tnum_range() in the same file already guards against this:
> kernel/bpf/tnum.c:tnum_range() {
>     ...
>     /* special case, needed because 1ULL << 64 is undefined */
>     if (bits > 63)
>         return tnum_unknown;
>     ...
> }
> Should tnum_step() have a similar guard when k == 64?
>

A cleaner way to address this is to make the idea of the comment
explicit, with an early return:

/* if z < smallest member of t, return smallest member of t */
if (z < tmin)
    return tmin;

Without z < tmin, it is not possible to satisfy the necessary
conditions (2) and (3) above.
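To make the step semantics concrete, here is a hedged brute-force model of the helper with both early returns, including the z < tmin guard discussed above. This is an illustration only, not the kernel's constant-time implementation; `tnum_step_ref`, `tnum_contains`, and the `limit` parameter are names invented for this sketch:

```c
#include <assert.h>
#include <stdint.h>

struct tnum { uint64_t value; uint64_t mask; };

/* x is a member of t iff it agrees with every known bit. */
static int tnum_contains(struct tnum t, uint64_t x)
{
	return (x & ~t.mask) == t.value;
}

/* Reference model for tnum_step: the smallest member of t strictly
 * greater than z, clamped to tmin/tmax by the two early returns.
 * `limit` bounds the brute-force search and is an artifact of this
 * sketch; the kernel helper uses bit tricks instead of a loop. */
static uint64_t tnum_step_ref(struct tnum t, uint64_t z, uint64_t limit)
{
	uint64_t tmin = t.value;
	uint64_t tmax = t.value | t.mask;
	uint64_t x;

	/* if z < smallest member of t, return smallest member of t */
	if (z < tmin)
		return tmin;
	/* if z >= largest member of t, return largest member of t */
	if (z >= tmax)
		return tmax;
	for (x = z + 1; x <= limit; x++)
		if (tnum_contains(t, x))
			return x;
	return tmax;
}
```

With the tnum from the cover letter, (0xe00; 0x100), whose members are 0xe00 and 0xf00, the early return for z < tmin makes the problematic shift path unreachable in the model as well.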

[ ... ]

As an aside, this didn't show up during SMT verification because the
bvshl (bitvector shift left) operation is defined such that a shift
amount greater than or equal to the width returns exactly 0.

> ---
> AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
> See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
>
> CI run summary: https://github.com/kernel-patches/bpf/actions/runs/22092666641
>
> AI-authorship-score: low
> AI-authorship-explanation: Detailed hand-crafted mathematical proofs with academic co-authors from Rutgers University and references to SMT solver verification are consistent with human academic research.
> issues-found: 1
> issue-severity-score: medium
> issue-severity-explanation: Undefined behavior from shifting a 64-bit value by 64 in the j > z branch; currently unreachable from the only caller in the series but reachable through the documented API contract.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH bpf 1/4] bpf: Introduce tnum_step to step through tnum's members
  2026-02-18  2:36     ` Harishankar Vishwanathan
@ 2026-02-18  2:51       ` Alexei Starovoitov
  2026-02-18  6:17         ` Harishankar Vishwanathan
  0 siblings, 1 reply; 18+ messages in thread
From: Alexei Starovoitov @ 2026-02-18  2:51 UTC (permalink / raw)
  To: Harishankar Vishwanathan
  Cc: bot+bpf-ci, Paul Chaignon, bpf, Alexei Starovoitov,
	Daniel Borkmann, Andrii Nakryiko, Eduard, Srinivas Narayana,
	Santosh Nagarakatte, Martin KaFai Lau, Yonghong Song, Chris Mason,
	Ihor Solodrai

On Tue, Feb 17, 2026 at 6:36 PM Harishankar Vishwanathan
<harishankar.vishwanathan@gmail.com> wrote:
>
>
> As an aside, this didn't show up during SMT verification because the
> bvshl (bitvector shift left)
> operation is defined such that a shift value greater than the width
> returns exactly 0.

Something that you should keep in mind when relying so much on SMT.
Shift by 64 on x86 is a nop while on arm64 it results in zero.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH bpf 2/4] bpf: Improve bounds when tnum has a single possible value
  2026-02-17 22:57   ` Eduard Zingerman
@ 2026-02-18  6:06     ` Harishankar Vishwanathan
  2026-02-19 18:32       ` Eduard Zingerman
  0 siblings, 1 reply; 18+ messages in thread
From: Harishankar Vishwanathan @ 2026-02-18  6:06 UTC (permalink / raw)
  To: Eduard Zingerman
  Cc: Paul Chaignon, bpf, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Srinivas Narayana, Santosh Nagarakatte

On Tue, Feb 17, 2026 at 5:58 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
[...]
> > 1. The u64 range and the tnum only overlap in umin.
> >    u64:  ---[xxxxxx]-----
> >    tnum: --xx----------x-
> I think this hunk should be rewritten as follows:
>
>         tnum_next = tnum_step(reg->var_off, reg->umin_value);
>         tnum_max = reg->var_off.value | reg->var_off.mask;
>         tnum_min = reg->var_off.value;
>         if (tnum_next > reg->umax_value) {
>                 /* The only overlap is umin */
>                 ___mark_reg_known(reg, tnum_min);
>         } else if (tnum_min < reg->umin_value && tnum_next == tnum_max) {
>                 /* The only overlap is tmax */
>                 ___mark_reg_known(reg, tnum_next);
>         } else if (tnum_next <= reg->umax_value &&
>                    tnum_step(reg->var_off, tnum_next) > reg->umax_value) {
>                 ___mark_reg_known(reg, tnum_next);
>         }
>
> - At least to me, it is easier to understand this way.

We can also use tmin and tmax, like tnum_step from the previous commit does,
to keep things consistent.

> - There is no need to gate the condition `tnum_next > reg->umax_value`,
>   if next tnum overshoots the reg->umax_value then only tnum_min is left.

The guard helps avoid the case where u64 and tnum have no intersection:
u64:  ---[xxxxxx]-----
tnum: x------------x-

The entire condition for "only overlap of tnum is with umin" needs to
check both: (1) that umin is a member of var_off, and (2) that the next
member of var_off after umin is greater than umax.
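Check (1) is the usual tnum membership test. A minimal sketch, with `umin_in_tnum` as an illustrative helper name (the proposed hunk open-codes the same expression):

```c
#include <assert.h>
#include <stdint.h>

struct tnum { uint64_t value; uint64_t mask; };

/* Check (1): umin is a member of var_off iff all known bits match. */
static int umin_in_tnum(struct tnum t, uint64_t umin)
{
	return (umin & ~t.mask) == t.value;
}
```

For example, with var_off=(0xe00; 0x100) and the range [0xe01, 0xeff], umin is not a member and both tnum members fall outside the range, so the intersection is empty and neither branch should fire.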

> - Accidentally, it fixes the scx_cosmos regression
>   (probably, because first condition is relaxed).
>
> Wdyt?

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH bpf 1/4] bpf: Introduce tnum_step to step through tnum's members
  2026-02-18  2:51       ` Alexei Starovoitov
@ 2026-02-18  6:17         ` Harishankar Vishwanathan
  0 siblings, 0 replies; 18+ messages in thread
From: Harishankar Vishwanathan @ 2026-02-18  6:17 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: bot+bpf-ci, Paul Chaignon, bpf, Alexei Starovoitov,
	Daniel Borkmann, Andrii Nakryiko, Eduard, Srinivas Narayana,
	Santosh Nagarakatte, Martin KaFai Lau, Yonghong Song, Chris Mason,
	Ihor Solodrai

[ ... ]
> Something that you should keep in mind when relying so much on SMT.
> Shift by 64 on x86 is a nop while on arm64 it results in zero.

That is a fair point; a good reminder of the semantic gap between SMT
abstractions and real hardware.

I missed a standard constraint here: explicitly model the undefined
behavior by adding an assertion that the shift amount is strictly less
than the width. That would have forced the solver to flag any execution
with a shift amount >= 64 as a verification failure.

Having said that, I will say that SMT was still instrumental in
developing this; it did help cover cases that would have been difficult
to hit with testing or fuzzing.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH bpf 2/4] bpf: Improve bounds when tnum has a single possible value
  2026-02-18  6:06     ` Harishankar Vishwanathan
@ 2026-02-19 18:32       ` Eduard Zingerman
  2026-02-19 18:55         ` Paul Chaignon
  0 siblings, 1 reply; 18+ messages in thread
From: Eduard Zingerman @ 2026-02-19 18:32 UTC (permalink / raw)
  To: Harishankar Vishwanathan
  Cc: Paul Chaignon, bpf, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Srinivas Narayana, Santosh Nagarakatte

On Wed, 2026-02-18 at 01:06 -0500, Harishankar Vishwanathan wrote:
> On Tue, Feb 17, 2026 at 5:58 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
> [...]
> > > 1. The u64 range and the tnum only overlap in umin.
> > >    u64:  ---[xxxxxx]-----
> > >    tnum: --xx----------x-
> > I think this hunk should be rewritten as follows:
> > 
> >         tnum_next = tnum_step(reg->var_off, reg->umin_value);
> >         tnum_max = reg->var_off.value | reg->var_off.mask;
> >         tnum_min = reg->var_off.value;
> >         if (tnum_next > reg->umax_value) {
> >                 /* The only overlap is umin */
> >                 ___mark_reg_known(reg, tnum_min);
> >         } else if (tnum_min < reg->umin_value && tnum_next == tnum_max) {
> >                 /* The only overlap is tmax */
> >                 ___mark_reg_known(reg, tnum_next);
> >         } else if (tnum_next <= reg->umax_value &&
> >                    tnum_step(reg->var_off, tnum_next) > reg->umax_value) {
> >                 ___mark_reg_known(reg, tnum_next);
> >         }
> > 
> > - At least to me, it is easier to understand this way.
> 
> We can also use tmin and tmax, like tnum_step from the previous commit does,
> to keep things consistent.
> 
> > - There is no need to gate the condition `tnum_next > reg->umax_value`,
> >   if next tnum overshoots the reg->umax_value then only tnum_min is left.
> 

Hi Harishankar,

Sorry for delayed response.

> The guard helps avoid the case where u64 and tnum have no intersection:
> u64:  ---[xxxxxx]-----
> tnum: x------------x-
> 
> The entire condition for "only overlap of tnum is with umin" needs to check both
> (1) if umin is a member of var_off
> (2) if the next member of var_off after umin > umax.

Right before the hunk there is:

  reg->umin_value = max(reg->umin_value, reg->var_off.value);

Which implies that:

  tnum_min <= reg->umin_value

My mental model for register sync functions is that all ranges are
considered valid, hence I suggested dropping the check.
If you think the check should be preserved, maybe still rewrite it like:

  tnum_min == reg->umin_value && tnum_next > reg->umax_value

That's a bit easier to parse compared to
'(reg->umin_value & ~reg->var_off.mask) == reg->var_off.value'.
Wdyt?

> 
> > - Accidentally, it fixes the scx_cosmos regression
> >   (probably, because first condition is relaxed).

I double-checked scx_cosmos with current master, patch as posted and
patch with my changes, in all three cases I see:

File              Program             Verdict  Insns
----------------  ------------------  -------  -----
scx_cosmos.bpf.o  cosmos_select_cpu   success    475

So, I'm not sure why CI flagged this as +500%.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH bpf 2/4] bpf: Improve bounds when tnum has a single possible value
  2026-02-19 18:32       ` Eduard Zingerman
@ 2026-02-19 18:55         ` Paul Chaignon
  2026-02-20  0:13           ` Paul Chaignon
  0 siblings, 1 reply; 18+ messages in thread
From: Paul Chaignon @ 2026-02-19 18:55 UTC (permalink / raw)
  To: Eduard Zingerman
  Cc: Harishankar Vishwanathan, bpf, Alexei Starovoitov,
	Daniel Borkmann, Andrii Nakryiko, Srinivas Narayana,
	Santosh Nagarakatte

On Thu, Feb 19, 2026 at 10:32:19AM -0800, Eduard Zingerman wrote:
> On Wed, 2026-02-18 at 01:06 -0500, Harishankar Vishwanathan wrote:
> > On Tue, Feb 17, 2026 at 5:58 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
> > [...]
> > > > 1. The u64 range and the tnum only overlap in umin.
> > > >    u64:  ---[xxxxxx]-----
> > > >    tnum: --xx----------x-
> > > I think this hunk should be rewritten as follows:
> > > 
> > >         tnum_next = tnum_step(reg->var_off, reg->umin_value);
> > >         tnum_max = reg->var_off.value | reg->var_off.mask;
> > >         tnum_min = reg->var_off.value;
> > >         if (tnum_next > reg->umax_value) {
> > >                 /* The only overlap is umin */
> > >                 ___mark_reg_known(reg, tnum_min);
> > >         } else if (tnum_min < reg->umin_value && tnum_next == tnum_max) {
> > >                 /* The only overlap is tmax */
> > >                 ___mark_reg_known(reg, tnum_next);
> > >         } else if (tnum_next <= reg->umax_value &&
> > >                    tnum_step(reg->var_off, tnum_next) > reg->umax_value) {
> > >                 ___mark_reg_known(reg, tnum_next);
> > >         }
> > > 
> > > - At least to me, it is easier to understand this way.

Agree. That looks easier to parse to me too. Thanks!

> > 
> > We can also use tmin and tmax, like tnum_step from the previous commit does,
> > to keep things consistent.
> > 
> > > - There is no need to gate the condition `tnum_next > reg->umax_value`,
> > >   if next tnum overshoots the reg->umax_value then only tnum_min is left.
> > 
> 
> Hi Harishankar,
> 
> Sorry for delayed response.
> 
> > The guard helps avoid the case where u64 and tnum have no intersection:
> > u64:  ---[xxxxxx]-----
> > tnum: x------------x-
> > 
> > The entire condition for "only overlap of tnum is with umin" needs to check both
> > (1) if umin is a member of var_off
> > (2) if the next member of var_off after umin > umax.
> 
> Right before the hunk there is:
> 
>   reg->umin_value = max(reg->umin_value, reg->var_off.value);
> 
> Which implies that:
> 
>   tnum_min <= reg->umin_value
> 
> My mental model for register sync functions is that all ranges are
> considered valid, hence I suggested dropping the check.

I think I agree with that, especially considering the reg->umin_value
computation above. If we think we need extra checks to be on the safe
side, they should probably live in reg_bounds_sanity_check.

> If you think the check should be preserved, maybe still rewrite it like:
> 
>   tnum_min == reg->umin_value && tnum_next > reg->umax_value
> 
> That's a bit easier to parse compared to
> '(reg->umin_value & ~reg->var_off.mask) == reg->var_off.value'.
> Wdyt?
> 
> > 
> > > - Accidentally, it fixes the scx_cosmos regression
> > >   (probably, because first condition is relaxed).
> 
> I double-checked scx_cosmos with current master, patch as posted and
> patch with my changes, in all three cases I see:
> 
> File              Program             Verdict  Insns
> ----------------  ------------------  -------  -----
> scx_cosmos.bpf.o  cosmos_select_cpu   success    475
> 
> So, I'm not sure why CI flagged this as +500%.

Ok, that's reassuring. I was going crazy trying to understand how your
changes could have fixed this given the semantics should be unchanged.

I'm re-testing the patchset on Cilium just to be sure, and I'll send a
v2 afterward. Thanks for the review!

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH bpf 2/4] bpf: Improve bounds when tnum has a single possible value
  2026-02-19 18:55         ` Paul Chaignon
@ 2026-02-20  0:13           ` Paul Chaignon
  2026-02-20  1:29             ` Eduard Zingerman
  0 siblings, 1 reply; 18+ messages in thread
From: Paul Chaignon @ 2026-02-20  0:13 UTC (permalink / raw)
  To: Eduard Zingerman
  Cc: Harishankar Vishwanathan, bpf, Alexei Starovoitov,
	Daniel Borkmann, Andrii Nakryiko, Srinivas Narayana,
	Santosh Nagarakatte

On Thu, Feb 19, 2026 at 07:55:33PM +0100, Paul Chaignon wrote:
> On Thu, Feb 19, 2026 at 10:32:19AM -0800, Eduard Zingerman wrote:
> > On Wed, 2026-02-18 at 01:06 -0500, Harishankar Vishwanathan wrote:
> > > On Tue, Feb 17, 2026 at 5:58 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
> > > [...]
> > > > > 1. The u64 range and the tnum only overlap in umin.
> > > > >    u64:  ---[xxxxxx]-----
> > > > >    tnum: --xx----------x-
> > > > I think this hunk should be rewritten as follows:
> > > > 
> > > >         tnum_next = tnum_step(reg->var_off, reg->umin_value);
> > > >         tnum_max = reg->var_off.value | reg->var_off.mask;
> > > >         tnum_min = reg->var_off.value;
> > > >         if (tnum_next > reg->umax_value) {
> > > >                 /* The only overlap is umin */
> > > >                 ___mark_reg_known(reg, tnum_min);
> > > >         } else if (tnum_min < reg->umin_value && tnum_next == tnum_max) {
> > > >                 /* The only overlap is tmax */
> > > >                 ___mark_reg_known(reg, tnum_next);
> > > >         } else if (tnum_next <= reg->umax_value &&
> > > >                    tnum_step(reg->var_off, tnum_next) > reg->umax_value) {
> > > >                 ___mark_reg_known(reg, tnum_next);

Actually, that last condition is not enough. In the original patch we
also would check that umin is not part of the tnum. That's needed for
example if we have R3=(u64=[0; 1], var_off=unknown) (a case possible
because __update_reg_bounds is the first refinement step in
reg_bounds_sync). With those values, we would match the third condition
and set R3=1.

Checking that tmin < reg->umin_value in the third condition would also
not work. For example R3=(u64=[0xffff; 0x10000], var_off=(0; 0x1ffff))
would be incorrectly simplified to R3=0x10000 because tmin=0,
umin=0xffff.

What we really want is to check that umin is not already part of the
tnum, as in the original patch. I think we can however improve
readability as you did. Here's what I have in mind:

    tnum_next = tnum_step(reg->var_off, reg->umin_value);
    umin_in_tnum = (reg->umin_value & ~reg->var_off.mask) == reg->var_off.value;
    tmax = reg->var_off.value | reg->var_off.mask;
    if (tnum_next > reg->umax_value) {
        /* The u64 range and the tnum only overlap in umin.
         * u64:  ---[xxxxxx]-----
         * tnum: --xx----------x-
         */
        ___mark_reg_known(reg, reg->umin_value);
    } else if (!umin_in_tnum && tnum_next == tmax) {
        /* The u64 range and the tnum only overlap in the maximum value
         * represented by the tnum, called tmax.
         * u64:  ---[xxxxxx]-----
         * tnum: xx-----x--------
         */
        ___mark_reg_known(reg, tmax);
    } else if (!umin_in_tnum && tnum_next <= reg->umax_value &&
               tnum_step(reg->var_off, tnum_next) > reg->umax_value) {
        /* The u64 range and the tnum only overlap once in between umin
         * (excluded) and umax.
         * u64:  ---[xxxxxx]-----
         * tnum: xx----x-------x-
         */
        ___mark_reg_known(reg, tnum_next);
    }

Wdyt?
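For what it's worth, this three-case logic can also be sanity-checked without SMT by brute force over a small domain. The sketch below is an illustration under assumptions — a 5-bit domain and a slow stand-in for tnum_step; all helper names (`member`, `step`, `refine`, `violations`) are invented here. It asserts that any concrete value lying in both the [umin, umax] range and the tnum equals the value marked known:

```c
#include <assert.h>
#include <stdint.h>

#define BITS 5
#define LIM  (1u << BITS)

struct tnum { uint32_t value; uint32_t mask; };

static int member(struct tnum t, uint32_t x)
{
	return (x & ~t.mask) == t.value;
}

/* Smallest member of t strictly greater than z, clamped to tmax;
 * a slow stand-in for the kernel's tnum_step. */
static uint32_t step(struct tnum t, uint32_t z)
{
	uint32_t x;

	for (x = z + 1; x < LIM; x++)
		if (member(t, x))
			return x;
	return t.value | t.mask;
}

/* Mirrors the proposed hunk; returns 1 and sets *known when the range
 * and the tnum overlap in a single value. */
static int refine(struct tnum t, uint32_t umin, uint32_t umax, uint32_t *known)
{
	uint32_t tmax = t.value | t.mask;
	uint32_t next = step(t, umin);
	int umin_in_tnum = member(t, umin);

	if (next > umax) {
		*known = umin;	/* only overlap is umin */
		return 1;
	} else if (!umin_in_tnum && next == tmax) {
		*known = tmax;	/* only overlap is tmax */
		return 1;
	} else if (!umin_in_tnum && next <= umax && step(t, next) > umax) {
		*known = next;	/* single overlap in (umin, umax] */
		return 1;
	}
	return 0;
}

/* Enumerate every valid tnum and range; count membership violations. */
static int violations(void)
{
	uint32_t value, mask, umin, umax, x, known;
	int bad = 0;

	for (value = 0; value < LIM; value++)
	for (mask = 0; mask < LIM; mask++) {
		struct tnum t = { value, mask };

		if (value & mask)
			continue;	/* invalid tnum: bit both known and unknown */
		for (umin = 0; umin < LIM; umin++)
		for (umax = umin; umax < LIM; umax++) {
			if (!refine(t, umin, umax, &known))
				continue;
			for (x = umin; x <= umax; x++)
				if (member(t, x) && x != known)
					bad++;
		}
	}
	return bad;
}
```

One caveat: the property is vacuous when the range and the tnum don't intersect, so this check alone can't arbitrate the umin_in_tnum gate on the first branch; it only guards against marking the wrong value known on live states.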

[...]


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH bpf 2/4] bpf: Improve bounds when tnum has a single possible value
  2026-02-20  0:13           ` Paul Chaignon
@ 2026-02-20  1:29             ` Eduard Zingerman
  2026-02-20  6:34               ` Harishankar Vishwanathan
  0 siblings, 1 reply; 18+ messages in thread
From: Eduard Zingerman @ 2026-02-20  1:29 UTC (permalink / raw)
  To: Paul Chaignon
  Cc: Harishankar Vishwanathan, bpf, Alexei Starovoitov,
	Daniel Borkmann, Andrii Nakryiko, Srinivas Narayana,
	Santosh Nagarakatte

On Fri, 2026-02-20 at 01:13 +0100, Paul Chaignon wrote:
> On Thu, Feb 19, 2026 at 07:55:33PM +0100, Paul Chaignon wrote:
> > On Thu, Feb 19, 2026 at 10:32:19AM -0800, Eduard Zingerman wrote:
> > > On Wed, 2026-02-18 at 01:06 -0500, Harishankar Vishwanathan wrote:
> > > > On Tue, Feb 17, 2026 at 5:58 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
> > > > [...]
> > > > > > 1. The u64 range and the tnum only overlap in umin.
> > > > > >    u64:  ---[xxxxxx]-----
> > > > > >    tnum: --xx----------x-
> > > > > I think this hunk should be rewritten as follows:
> > > > > 
> > > > >         tnum_next = tnum_step(reg->var_off, reg->umin_value);
> > > > >         tnum_max = reg->var_off.value | reg->var_off.mask;
> > > > >         tnum_min = reg->var_off.value;
> > > > >         if (tnum_next > reg->umax_value) {
> > > > >                 /* The only overlap is umin */
> > > > >                 ___mark_reg_known(reg, tnum_min);
> > > > >         } else if (tnum_min < reg->umin_value && tnum_next == tnum_max) {
> > > > >                 /* The only overlap is tmax */
> > > > >                 ___mark_reg_known(reg, tnum_next);
> > > > >         } else if (tnum_next <= reg->umax_value &&
> > > > >                    tnum_step(reg->var_off, tnum_next) > reg->umax_value) {
> > > > >                 ___mark_reg_known(reg, tnum_next);
> 
> Actually, that last condition is not enough. In the original patch we
> also would check that umin is not part of the tnum. That's needed for
> example if we have R3=(u64=[0; 1], var_off=unknown) (a case possible
> because __update_reg_bounds is the first refinement step in
> reg_bounds_sync). With those values, we would match the third condition
> and set R3=1.
> 
> Checking that tmin < reg->umin_value in the third condition would also
> not work. For example R3=(u64=[0xffff; 0x10000], var_off=(0; 0x1ffff))
> would be incorrectly simplified to R3=0x10000 because tmin=0,
> umin=0xffff.
> 
> What we really want is to check that umin is not already part of the
> tnum, as in the original patch. I think we can however improve
> readability as you did. Here's what I have in mind:
> 
>     tnum_next = tnum_step(reg->var_off, reg->umin_value);
>     umin_in_tnum = (reg->umin_value & ~reg->var_off.mask) == reg->var_off.value;
>     tmax = reg->var_off.value | reg->var_off.mask;
>     if (tnum_next > reg->umax_value) {
>         /* The u64 range and the tnum only overlap in umin.
>          * u64:  ---[xxxxxx]-----
>          * tnum: --xx----------x-
>          */        ^
>         ___mark_reg_known(reg, reg->umin_value);
>     } else if (!umin_in_tnum && tnum_next == tmax) {
>         /* The u64 range and the tnum only overlap in the maximum value
>          * represented by the tnum, called tmax.
>          * u64:  ---[xxxxxx]-----
>          * tnum: xx-----x--------
>          */
>         ___mark_reg_known(reg, tmax);
>     } else if (!umin_in_tnum && tnum_next <= reg->umax_value &&
>        tnum_step(reg->var_off, tnum_next) > reg->umax_value) {
>         /* The u64 range and the tnum only overlap once in between umin
>          * (excluded) and umax.
>          * u64:  ---[xxxxxx]-----
>          * tnum: xx----x-------x-
>          */
>         ___mark_reg_known(reg, tnum_next);
>     }
> 
> Wdyt?

This makes sense, I missed this nuance in the original patch.
Maybe keep the first check as 'umin_in_tnum && tnum_next > reg->umax_value',
then?

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH bpf 2/4] bpf: Improve bounds when tnum has a single possible value
  2026-02-20  1:29             ` Eduard Zingerman
@ 2026-02-20  6:34               ` Harishankar Vishwanathan
  2026-02-20 14:02                 ` Paul Chaignon
  0 siblings, 1 reply; 18+ messages in thread
From: Harishankar Vishwanathan @ 2026-02-20  6:34 UTC (permalink / raw)
  To: Eduard Zingerman
  Cc: Paul Chaignon, bpf, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Srinivas Narayana, Santosh Nagarakatte

On Thu, Feb 19, 2026 at 8:29 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
[...]
> >     tnum_next = tnum_step(reg->var_off, reg->umin_value);
> >     umin_in_tnum = (reg->umin_value & ~reg->var_off.mask) == reg->var_off.value;
> >     tmax = reg->var_off.value | reg->var_off.mask;
> >     if (tnum_next > reg->umax_value) {
> >         /* The u64 range and the tnum only overlap in umin.
> >          * u64:  ---[xxxxxx]-----
> >          * tnum: --xx----------x-
> >          *          ^
> >          */
> >         ___mark_reg_known(reg, reg->umin_value);
> >     } else if (!umin_in_tnum && tnum_next == tmax) {
> >         /* The u64 range and the tnum only overlap in the maximum value
> >          * represented by the tnum, called tmax.
> >          * u64:  ---[xxxxxx]-----
> >          * tnum: xx-----x--------
> >          */
> >         ___mark_reg_known(reg, tmax);
> >     } else if (!umin_in_tnum && tnum_next <= reg->umax_value &&
> >                tnum_step(reg->var_off, tnum_next) > reg->umax_value) {
> >         /* The u64 range and the tnum only overlap once in between umin
> >          * (excluded) and umax.
> >          * u64:  ---[xxxxxx]-----
> >          * tnum: xx----x-------x-
> >          */
> >         ___mark_reg_known(reg, tnum_next);
> >     }
> >
> > Wdyt?
>
> This makes sense, I missed this nuance in the original patch.
> Maybe keep the first check as 'umin_in_tnum && tnum_next > reg->umax_value',
> then?

I too think Eduard's suggestion makes sense. We will be explicit in the
code that we want umin to be part of the tnum.


* Re: [PATCH bpf 2/4] bpf: Improve bounds when tnum has a single possible value
  2026-02-20  6:34               ` Harishankar Vishwanathan
@ 2026-02-20 14:02                 ` Paul Chaignon
  0 siblings, 0 replies; 18+ messages in thread
From: Paul Chaignon @ 2026-02-20 14:02 UTC (permalink / raw)
  To: Harishankar Vishwanathan
  Cc: Eduard Zingerman, bpf, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Srinivas Narayana, Santosh Nagarakatte

On Fri, Feb 20, 2026 at 01:34:26AM -0500, Harishankar Vishwanathan wrote:
> On Thu, Feb 19, 2026 at 8:29 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
> [...]
> > >     tnum_next = tnum_step(reg->var_off, reg->umin_value);
> > >     umin_in_tnum = (reg->umin_value & ~reg->var_off.mask) == reg->var_off.value;
> > >     tmax = reg->var_off.value | reg->var_off.mask;
> > >     if (tnum_next > reg->umax_value) {
> > >         /* The u64 range and the tnum only overlap in umin.
> > >          * u64:  ---[xxxxxx]-----
> > >          * tnum: --xx----------x-
> > >          *          ^
> > >          */
> > >         ___mark_reg_known(reg, reg->umin_value);
> > >     } else if (!umin_in_tnum && tnum_next == tmax) {
> > >         /* The u64 range and the tnum only overlap in the maximum value
> > >          * represented by the tnum, called tmax.
> > >          * u64:  ---[xxxxxx]-----
> > >          * tnum: xx-----x--------
> > >          */
> > >         ___mark_reg_known(reg, tmax);
> > >     } else if (!umin_in_tnum && tnum_next <= reg->umax_value &&
> > >                tnum_step(reg->var_off, tnum_next) > reg->umax_value) {
> > >         /* The u64 range and the tnum only overlap once in between umin
> > >          * (excluded) and umax.
> > >          * u64:  ---[xxxxxx]-----
> > >          * tnum: xx----x-------x-
> > >          */
> > >         ___mark_reg_known(reg, tnum_next);
> > >     }
> > >
> > > Wdyt?
> >
> > This makes sense, I missed this nuance in the original patch.
> > Maybe keep the first check as 'umin_in_tnum && tnum_next > reg->umax_value',
> > then?
> 
> I too think Eduard's suggestion makes sense. We will be explicit in the
> code that we want umin to be part of the tnum.

Agree, it helps readability. I've sent the v2 with that change.

Thanks everyone!


end of thread, other threads:[~2026-02-20 14:02 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-02-17  8:53 [PATCH bpf 0/4] Fix invariant violation for single-value tnums Paul Chaignon
2026-02-17  8:59 ` [PATCH bpf 1/4] bpf: Introduce tnum_step to step through tnum's members Paul Chaignon
2026-02-17  9:44   ` bot+bpf-ci
2026-02-18  2:36     ` Harishankar Vishwanathan
2026-02-18  2:51       ` Alexei Starovoitov
2026-02-18  6:17         ` Harishankar Vishwanathan
2026-02-17  9:01 ` [PATCH bpf 2/4] bpf: Improve bounds when tnum has a single possible value Paul Chaignon
2026-02-17 18:58   ` Alexei Starovoitov
2026-02-17 22:57   ` Eduard Zingerman
2026-02-18  6:06     ` Harishankar Vishwanathan
2026-02-19 18:32       ` Eduard Zingerman
2026-02-19 18:55         ` Paul Chaignon
2026-02-20  0:13           ` Paul Chaignon
2026-02-20  1:29             ` Eduard Zingerman
2026-02-20  6:34               ` Harishankar Vishwanathan
2026-02-20 14:02                 ` Paul Chaignon
2026-02-17  9:04 ` [PATCH bpf 3/4] selftests/bpf: Test refinement of single-value tnum Paul Chaignon
2026-02-17  9:06 ` [PATCH bpf 4/4] selftests/bpf: Avoid simplification of crafted bounds test Paul Chaignon

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox