BPF List
* [PATCH v0 0/3] Add tnum_scast helper
@ 2025-01-30 11:23 Dimitar Kanaliev
  2025-01-30 11:23 ` [PATCH v0 1/3] bpf: Introduce tnum_scast as a tnum native sign extension helper Dimitar Kanaliev
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Dimitar Kanaliev @ 2025-01-30 11:23 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
	Yonghong Song, Shung-Hsi Yu, Dimitar Kanaliev

Hello,

This patch series introduces a new tnum helper function, tnum_scast(),
which performs sign extension on tnums.

The goal of this series is to simplify (and unify) the sign extension
operations performed on tnums inside the verifier. Additionally, the
changes improve the precision with which s64/u64 boundaries are tracked,
since the ranges can be deduced more accurately from the sign-extended
tnum. The series removes the assignments to the worst-case
{S,U}64_{MIN,MAX} values.

There are a number of other places where existing sign extensions could be
replaced with the new primitive, but I thought I'd start with register
coercion as a minimal, self-contained example.

The first patch in the series introduces tnum_scast. The second patch
makes use of tnum_scast to simplify the logic in coerce_reg_to_size_sx
and coerce_subreg_to_size_sx. The last patch introduces some more tests
related to sign extension.

Dimitar Kanaliev (3):
  bpf: Introduce tnum_scast as a tnum native sign extension helper
  bpf: verifier: Simplify register sign extension with tnum_scast
  selftests/bpf: Extend tests with more coverage for sign extension

 include/linux/tnum.h                          |   3 +
 kernel/bpf/tnum.c                             |  29 ++++
 kernel/bpf/verifier.c                         | 124 ++++++------------
 .../selftests/bpf/progs/verifier_movsx.c      |  73 +++++++++++
 4 files changed, 144 insertions(+), 85 deletions(-)

-- 
2.43.0


^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH v0 1/3] bpf: Introduce tnum_scast as a tnum native sign extension helper
  2025-01-30 11:23 [PATCH v0 0/3] Add tnum_scast helper Dimitar Kanaliev
@ 2025-01-30 11:23 ` Dimitar Kanaliev
  2025-01-30 21:59   ` Eduard Zingerman
  2025-01-30 11:23 ` [PATCH v0 2/3] bpf: verifier: Simplify register sign extension with tnum_scast Dimitar Kanaliev
  2025-01-30 11:23 ` [PATCH v0 3/3] selftests/bpf: Extend tests with more coverage for sign extension Dimitar Kanaliev
  2 siblings, 1 reply; 7+ messages in thread
From: Dimitar Kanaliev @ 2025-01-30 11:23 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
	Yonghong Song, Shung-Hsi Yu, Dimitar Kanaliev

This patch introduces a new helper function, tnum_scast(), which
sign-extends a tnum from a smaller integer size to the full 64-bit BPF
register width.

This is achieved as follows.

Given a tnum (v, m) and a target size of S bytes:
  1) Mask value/mask to the low S bytes:
     val = v & value_mask, msk = m & value_mask,
     with value_mask = (1 << S*8) - 1
  2) If the sign bit (bit S*8-1) is unknown (set in msk):
    - the extended bits of the mask become unknown (msk |= ~value_mask)
    - the extended bits of the value stay 0; combined with the unknown
      mask bits this allows both the all-0s and the all-1s sign extension
  3) If the sign bit is known:
    - the upper bits of the value follow the sign extension of val
    - the upper bits of the mask follow the sign extension of msk
      (i.e. they remain known)

a) When the sign bit is known:
Assume a tnum with value = 0xFF, mask = 0x00, size = 1, which corresponds
to an 8-bit subregister of value 0xFF. We extract the sign bit position,
compute the value mask, apply it to the lower bits and check the sign bit
at said position.

  s = size * 8 - 1 // 1 * 8 - 1 = 7.
  value_mask = (1ULL << (s + 1)) - 1; // (1 << (7 + 1)) - 1 = 0xFF
  new_value = a.value & value_mask; // 0xFF & 0xFF = 0xFF
  new_mask = a.mask & value_mask; // 0x00 & 0xFF = 0x00
  sign_bit_unknown = (0x00 >> 7) & 1 = 0;  // sign bit is known
  sign_bit_value   = (0xFF >> 7) & 1 = 1;  // with value 1

Because the sign bit is known to be 1, we sign-extend with 1s above bit 7,
so all upper bits [63,8] become 1, new_value in 64 bits is
0xFFFFFFFFFFFFFFFF and new_mask for those bits is 0 (since we know
for sure they are all 1). So after the tnum_scast call and the sign
extension, the tnum is (0xFFFFFFFFFFFFFFFF, 0x0000000000000000),
which corresponds to the 64-bit value -1.

b) When the sign bit is unknown:
Assume a tnum with value = 0x7F, mask = 0x80, size = 1. In this case the
lower 7 bits [6,0] are known to be all ones (0x7F, or b(0111 1111)), while
bit 7 is unknown (mask = 0x80), so it could be 0 or 1. This means the 8-bit
subregister could be either 0x7F (+127) or 0xFF (-1), the only two values
that differ in bit 7. Following the same operations as in the previous
example, we get s = 7 and value_mask = 0xFF. Then:

  new_value = a.value & value_mask; // 0x7F & 0xFF = 0x7F
  new_mask = a.mask & value_mask; // 0x80 & 0xFF = 0x80
  sign_bit_unknown = (0x80 >> 7) & 1 = 1; // bit 7 is unknown
  // sign bit is unknown, so we treat upper bits [63,8] as unknown
  new_mask |= ~value_mask;

This leads to a new tnum with value = 0x7F, mask = 0xFFFFFFFFFFFFFF80.
The lower 8 bits can be 0x7F or 0xFF, and the upper 56 bits are fully
unknown. In 64-bit form, this tnum can represent anything from:
0x000000000000007F (+127) if the sign bit is 0 and all higher bits are 0,
up to 0xFFFFFFFFFFFFFFFF (-1) if the sign bit and all higher bits are 1.
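
Both examples can be reproduced outside the kernel with a small,
self-contained userspace program that mirrors the helper's logic (this is
only an illustrative sketch, not part of the patch):

  #include <stdio.h>
  #include <stdint.h>

  struct tnum { uint64_t value; uint64_t mask; };

  /* userspace copy of the proposed tnum_scast() logic */
  static struct tnum scast(struct tnum a, uint8_t size)
  {
  	uint64_t s = size * 8 - 1;
  	uint64_t value_mask, ext, value, mask, sign_unknown, sign_value;

  	if (size >= 8)
  		return a;
  	value_mask = (1ULL << (s + 1)) - 1;
  	value = a.value & value_mask;
  	mask = a.mask & value_mask;
  	sign_unknown = (a.mask >> s) & 1;
  	sign_value = (a.value >> s) & 1;
  	ext = ~value_mask;
  	mask |= ext & (0 - sign_unknown);
  	value |= ext & (0 - ((sign_unknown ^ 1) & sign_value));
  	return (struct tnum){ value, mask };
  }

  int main(void)
  {
  	/* example a): known sign bit -> value 0xffffffffffffffff, mask 0 */
  	struct tnum a = scast((struct tnum){ 0xFF, 0x00 }, 1);
  	/* example b): unknown sign bit -> value 0x7f, mask 0xffffffffffffff80 */
  	struct tnum b = scast((struct tnum){ 0x7F, 0x80 }, 1);

  	printf("a: value=%llx mask=%llx\n",
  	       (unsigned long long)a.value, (unsigned long long)a.mask);
  	printf("b: value=%llx mask=%llx\n",
  	       (unsigned long long)b.value, (unsigned long long)b.mask);
  	return 0;
  }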

Signed-off-by: Dimitar Kanaliev <dimitar.kanaliev@siteground.com>
---
 include/linux/tnum.h |  3 +++
 kernel/bpf/tnum.c    | 29 +++++++++++++++++++++++++++++
 2 files changed, 32 insertions(+)

diff --git a/include/linux/tnum.h b/include/linux/tnum.h
index 3c13240077b8..6933db04c9ee 100644
--- a/include/linux/tnum.h
+++ b/include/linux/tnum.h
@@ -55,6 +55,9 @@ struct tnum tnum_intersect(struct tnum a, struct tnum b);
 /* Return @a with all but the lowest @size bytes cleared */
 struct tnum tnum_cast(struct tnum a, u8 size);
 
+/* Return @a sign-extended from @size bytes */
+struct tnum tnum_scast(struct tnum a, u8 size);
+
 /* Returns true if @a is a known constant */
 static inline bool tnum_is_const(struct tnum a)
 {
diff --git a/kernel/bpf/tnum.c b/kernel/bpf/tnum.c
index 9dbc31b25e3d..cb29dbc793d4 100644
--- a/kernel/bpf/tnum.c
+++ b/kernel/bpf/tnum.c
@@ -157,6 +157,35 @@ struct tnum tnum_cast(struct tnum a, u8 size)
 	return a;
 }
 
+struct tnum tnum_scast(struct tnum a, u8 size)
+{
+	u64 s = size * 8 - 1;
+	u64 sign_mask;
+	u64 value_mask;
+	u64 new_value, new_mask;
+	u64 sign_bit_unknown, sign_bit_value;
+	u64 mask;
+
+	if (size >= 8) {
+		return a;
+	}
+
+	sign_mask = 1ULL << s;
+	value_mask = (1ULL << (s + 1)) - 1;
+
+	new_value = a.value & value_mask;
+	new_mask = a.mask & value_mask;
+
+	sign_bit_unknown = (a.mask >> s) & 1;
+	sign_bit_value = (a.value >> s) & 1;
+
+	mask = ~value_mask;
+	new_mask |= mask & (0 - sign_bit_unknown);
+	new_value |= mask & (0 - ((sign_bit_unknown ^ 1) & sign_bit_value));
+
+	return TNUM(new_value, new_mask);
+}
+
 bool tnum_is_aligned(struct tnum a, u64 size)
 {
 	if (!size)
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH v0 2/3] bpf: verifier: Simplify register sign extension with tnum_scast
  2025-01-30 11:23 [PATCH v0 0/3] Add tnum_scast helper Dimitar Kanaliev
  2025-01-30 11:23 ` [PATCH v0 1/3] bpf: Introduce tnum_scast as a tnum native sign extension helper Dimitar Kanaliev
@ 2025-01-30 11:23 ` Dimitar Kanaliev
  2025-01-30 23:24   ` Eduard Zingerman
  2025-01-30 11:23 ` [PATCH v0 3/3] selftests/bpf: Extend tests with more coverage for sign extension Dimitar Kanaliev
  2 siblings, 1 reply; 7+ messages in thread
From: Dimitar Kanaliev @ 2025-01-30 11:23 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
	Yonghong Song, Shung-Hsi Yu, Dimitar Kanaliev

This patch refactors the verifier's sign-extension logic for narrow
register values to use the new tnum_scast helper.

The general idea is to replace manual range checks in
coerce_reg_to_size_sx and coerce_subreg_to_size_sx by deriving smin/smax
and umin/umax boundaries directly from the tnum via tnum_scast.

In the original code, some coercion cases with an unknown sign bit
triggered a fallback to the worst-case [S64_MIN, S64_MAX] range. With these
changes we track the bitwise uncertainty more precisely, which allows
tighter bounds (e.g. [-128, 127] for an 8-bit coercion whose sign bit is
unknown) instead of the full 64-bit range.

An example of such a case:
For an 8-bit register with var_off = (value=0x7F, mask=0x80), i.e. known
lower 7 bits and an unknown sign bit, the original code would default to
[S64_MIN, S64_MAX] for the smin/smax range, while the tnum_scast based
implementation bounds it to smin = -128, smax = 127 and still tracks the
uncertainty in the upper 56 bits via var_off.
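
The derivation for this example, following the new code in
coerce_reg_to_size_sx() below (purely an illustrative walk-through,
size = 1):

  var_off = tnum_scast((value=0x7F, mask=0x80), 1)
          = (value=0x7F, mask=0xFFFFFFFFFFFFFF80)
  sign_mask = 1ULL << 7 = 0x80
  var_off.mask & sign_mask != 0                   // sign bit unknown
    smin_value = -(1LL << 7)    = -128
    smax_value =  (1LL << 7)-1  =  127
  umin_value = var_off.value                      = 0x7F
  umax_value = var_off.value | var_off.mask       = 0xFFFFFFFFFFFFFFFF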

Signed-off-by: Dimitar Kanaliev <dimitar.kanaliev@siteground.com>
---
 kernel/bpf/verifier.c | 124 +++++++++++++-----------------------------
 1 file changed, 39 insertions(+), 85 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 9971c03adfd5..a98dba42abc0 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -6661,61 +6661,35 @@ static void set_sext64_default_val(struct bpf_reg_state *reg, int size)
 
 static void coerce_reg_to_size_sx(struct bpf_reg_state *reg, int size)
 {
-	s64 init_s64_max, init_s64_min, s64_max, s64_min, u64_cval;
-	u64 top_smax_value, top_smin_value;
-	u64 num_bits = size * 8;
+	u64 s = size * 8 - 1;
+	u64 sign_mask = 1ULL << s;
+	s64 smin_value, smax_value;
+	u64 umax_value;
 
-	if (tnum_is_const(reg->var_off)) {
-		u64_cval = reg->var_off.value;
-		if (size == 1)
-			reg->var_off = tnum_const((s8)u64_cval);
-		else if (size == 2)
-			reg->var_off = tnum_const((s16)u64_cval);
-		else
-			/* size == 4 */
-			reg->var_off = tnum_const((s32)u64_cval);
-
-		u64_cval = reg->var_off.value;
-		reg->smax_value = reg->smin_value = u64_cval;
-		reg->umax_value = reg->umin_value = u64_cval;
-		reg->s32_max_value = reg->s32_min_value = u64_cval;
-		reg->u32_max_value = reg->u32_min_value = u64_cval;
+	if (size >= 8)
 		return;
-	}
 
-	top_smax_value = ((u64)reg->smax_value >> num_bits) << num_bits;
-	top_smin_value = ((u64)reg->smin_value >> num_bits) << num_bits;
+	reg->var_off = tnum_scast(reg->var_off, size);
 
-	if (top_smax_value != top_smin_value)
-		goto out;
-
-	/* find the s64_min and s64_min after sign extension */
-	if (size == 1) {
-		init_s64_max = (s8)reg->smax_value;
-		init_s64_min = (s8)reg->smin_value;
-	} else if (size == 2) {
-		init_s64_max = (s16)reg->smax_value;
-		init_s64_min = (s16)reg->smin_value;
+	if (reg->var_off.mask & sign_mask) {
+		smin_value = -(1LL << s);
+		smax_value = (1LL << s) - 1;
 	} else {
-		init_s64_max = (s32)reg->smax_value;
-		init_s64_min = (s32)reg->smin_value;
+		smin_value = (s64)(reg->var_off.value);
+		smax_value = (s64)(reg->var_off.value | reg->var_off.mask);
 	}
 
-	s64_max = max(init_s64_max, init_s64_min);
-	s64_min = min(init_s64_max, init_s64_min);
+	reg->smin_value = smin_value;
+	reg->smax_value = smax_value;
 
-	/* both of s64_max/s64_min positive or negative */
-	if ((s64_max >= 0) == (s64_min >= 0)) {
-		reg->s32_min_value = reg->smin_value = s64_min;
-		reg->s32_max_value = reg->smax_value = s64_max;
-		reg->u32_min_value = reg->umin_value = s64_min;
-		reg->u32_max_value = reg->umax_value = s64_max;
-		reg->var_off = tnum_range(s64_min, s64_max);
-		return;
-	}
+	reg->umin_value = reg->var_off.value;
+	umax_value = reg->var_off.value | reg->var_off.mask;
+	reg->umax_value = umax_value;
 
-out:
-	set_sext64_default_val(reg, size);
+	reg->s32_min_value = (s32)smin_value;
+	reg->s32_max_value = (s32)smax_value;
+	reg->u32_min_value = (u32)reg->umin_value;
+	reg->u32_max_value = (u32)umax_value;
 }
 
 static void set_sext32_default_val(struct bpf_reg_state *reg, int size)
@@ -6735,52 +6709,32 @@ static void set_sext32_default_val(struct bpf_reg_state *reg, int size)
 
 static void coerce_subreg_to_size_sx(struct bpf_reg_state *reg, int size)
 {
-	s32 init_s32_max, init_s32_min, s32_max, s32_min, u32_val;
-	u32 top_smax_value, top_smin_value;
-	u32 num_bits = size * 8;
+	u32 s = size * 8 - 1;
+	u32 sign_mask = 1U << s;
+	s32 smin_value, smax_value;
+	u32 umax_value;
 
-	if (tnum_is_const(reg->var_off)) {
-		u32_val = reg->var_off.value;
-		if (size == 1)
-			reg->var_off = tnum_const((s8)u32_val);
-		else
-			reg->var_off = tnum_const((s16)u32_val);
-
-		u32_val = reg->var_off.value;
-		reg->s32_min_value = reg->s32_max_value = u32_val;
-		reg->u32_min_value = reg->u32_max_value = u32_val;
+	if (size >= 4)
 		return;
-	}
-
-	top_smax_value = ((u32)reg->s32_max_value >> num_bits) << num_bits;
-	top_smin_value = ((u32)reg->s32_min_value >> num_bits) << num_bits;
 
-	if (top_smax_value != top_smin_value)
-		goto out;
+	reg->var_off = tnum_scast(reg->var_off, size);
 
-	/* find the s32_min and s32_min after sign extension */
-	if (size == 1) {
-		init_s32_max = (s8)reg->s32_max_value;
-		init_s32_min = (s8)reg->s32_min_value;
+	if (reg->var_off.mask & sign_mask) {
+		smin_value = -(1 << s);
+		smax_value = (1 << s) - 1;
 	} else {
-		/* size == 2 */
-		init_s32_max = (s16)reg->s32_max_value;
-		init_s32_min = (s16)reg->s32_min_value;
-	}
-	s32_max = max(init_s32_max, init_s32_min);
-	s32_min = min(init_s32_max, init_s32_min);
-
-	if ((s32_min >= 0) == (s32_max >= 0)) {
-		reg->s32_min_value = s32_min;
-		reg->s32_max_value = s32_max;
-		reg->u32_min_value = (u32)s32_min;
-		reg->u32_max_value = (u32)s32_max;
-		reg->var_off = tnum_subreg(tnum_range(s32_min, s32_max));
-		return;
+		smin_value = (s32)(reg->var_off.value);
+		smax_value = (s32)(reg->var_off.value | reg->var_off.mask);
 	}
 
-out:
-	set_sext32_default_val(reg, size);
+	reg->s32_min_value = smin_value;
+	reg->s32_max_value = smax_value;
+
+	reg->u32_min_value = reg->var_off.value;
+	umax_value = reg->var_off.value | reg->var_off.mask;
+	reg->u32_max_value = umax_value;
+
+	reg->var_off = tnum_subreg(reg->var_off);
 }
 
 static bool bpf_map_is_rdonly(const struct bpf_map *map)
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH v0 3/3] selftests/bpf: Extend tests with more coverage for sign extension
  2025-01-30 11:23 [PATCH v0 0/3] Add tnum_scast helper Dimitar Kanaliev
  2025-01-30 11:23 ` [PATCH v0 1/3] bpf: Introduce tnum_scast as a tnum native sign extension helper Dimitar Kanaliev
  2025-01-30 11:23 ` [PATCH v0 2/3] bpf: verifier: Simplify register sign extension with tnum_scast Dimitar Kanaliev
@ 2025-01-30 11:23 ` Dimitar Kanaliev
  2025-01-30 22:48   ` Eduard Zingerman
  2 siblings, 1 reply; 7+ messages in thread
From: Dimitar Kanaliev @ 2025-01-30 11:23 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
	Yonghong Song, Shung-Hsi Yu, Dimitar Kanaliev

This commit adds a few more tests related to tnum_scast that explicitly
check cases with known / unknown sign bit, as well as values that cross
zero (going from negative to positive).

Signed-off-by: Dimitar Kanaliev <dimitar.kanaliev@siteground.com>
---
 .../selftests/bpf/progs/verifier_movsx.c      | 73 +++++++++++++++++++
 1 file changed, 73 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/verifier_movsx.c b/tools/testing/selftests/bpf/progs/verifier_movsx.c
index 994bbc346d25..20abeec09dee 100644
--- a/tools/testing/selftests/bpf/progs/verifier_movsx.c
+++ b/tools/testing/selftests/bpf/progs/verifier_movsx.c
@@ -327,6 +327,79 @@ label_%=: 	                                        \
 	: __clobber_all);
 }
 
+SEC("socket")
+__description("MOV64SX, S8, unknown value")
+__success __success_unpriv __retval(1)
+__naked void mov64sx_s8_unknown(void)
+{
+	asm volatile ("                                      \
+	call %[bpf_get_prandom_u32];                         \
+	r1 = r0;                                             \
+	r1 &= 0xFF;      			             \
+	r1 = (s8)r1;  					     \
+	if r1 s>= -128 goto l0_%=;                           \
+	r0 = 0;                                              \
+	exit;                                                \
+l0_%=:                                                       \
+	if r1 s<= 127 goto l1_%=;                            \
+	r0 = 0;                                              \
+	exit;                                                \
+l1_%=:                                                       \
+	r0 = 1;                                              \
+	exit;                                                \
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+	__description("MOV64SX, S8, sign bit unknown")
+	__success __success_unpriv __retval(1)
+__naked void mov64sx_s8_sign_unknown(void)
+{
+	asm volatile ("                                      \
+	call %[bpf_get_prandom_u32];                         \
+	r1 = r0;                                             \
+	r1 &= 0x7F;                 			     \
+	r1 |= 0x80;        				     \
+	r1 = (s8)r1;       				     \
+	if r1 s>= -128 goto l0_%=;                           \
+	r0 = 0;                                              \
+	exit;                                                \
+l0_%=:                                                       \
+	if r1 s<= 127 goto l1_%=;                            \
+	r0 = 0;                                              \
+	exit;                                                \
+l1_%=:                                                       \
+	r0 = 1;                                              \
+	exit;                                                \
+	"   :
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+	__description("MOV64SX, S8, value crossing zero")
+	__success __success_unpriv __retval(1)
+__naked void mov64sx_s8_cross_zero(void)
+{
+	asm volatile ("                                      \
+	call %[bpf_get_prandom_u32];                         \
+	r1 = r0;                                             \
+	r1 &= 0xFF;      			             \
+	r1 = (s8)r1;             			     \
+	if r1 s> -10 goto l0_%=;                             \
+	if r1 s< 10 goto l0_%=;                              \
+	r0 = 0;                                              \
+	exit;                                                \
+l0_%=:                                               	     \
+	r0 = 1;                                              \
+	exit;                                                \
+	"   :
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
 #else
 
 SEC("socket")
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: [PATCH v0 1/3] bpf: Introduce tnum_scast as a tnum native sign extension helper
  2025-01-30 11:23 ` [PATCH v0 1/3] bpf: Introduce tnum_scast as a tnum native sign extension helper Dimitar Kanaliev
@ 2025-01-30 21:59   ` Eduard Zingerman
  0 siblings, 0 replies; 7+ messages in thread
From: Eduard Zingerman @ 2025-01-30 21:59 UTC (permalink / raw)
  To: Dimitar Kanaliev, bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, KP Singh,
	Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
	Yonghong Song, Shung-Hsi Yu

On Thu, 2025-01-30 at 13:23 +0200, Dimitar Kanaliev wrote:

Hi Dimitar,

[...]

> +struct tnum tnum_scast(struct tnum a, u8 size)
> +{
> +	u64 s = size * 8 - 1;
> +	u64 sign_mask;
> +	u64 value_mask;
> +	u64 new_value, new_mask;
> +	u64 sign_bit_unknown, sign_bit_value;
> +	u64 mask;
> +
> +	if (size >= 8) {
> +		return a;
> +	}
> +
> +	sign_mask = 1ULL << s;
> +	value_mask = (1ULL << (s + 1)) - 1;
> +
> +	new_value = a.value & value_mask;
> +	new_mask = a.mask & value_mask;
> +
> +	sign_bit_unknown = (a.mask >> s) & 1;
> +	sign_bit_value = (a.value >> s) & 1;
> +
> +	mask = ~value_mask;
> +	new_mask |= mask & (0 - sign_bit_unknown);
> +	new_value |= mask & (0 - ((sign_bit_unknown ^ 1) & sign_bit_value));
> +
> +	return TNUM(new_value, new_mask);
> +}

So, effectively what you want to achieve here is:
- pick the sign bit SM from mask and set the sign-extended bits of mask to SM
- pick the sign bit SV from value and set the sign-extended bits of value to SV
right?

I think this could be done a bit simpler, e.g.:

struct tnum tnum_scast(struct tnum a, u8 size)
{
	u8 s = 64 - size * 8;
	u64 value, mask;

	if (size >= 8)
		return a;

	value = ((s64)a.value << s) >> s;
	mask = ((s64)a.mask << s) >> s;
	return TNUM(value, mask);
}
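
For instance, for the (value=0x7F, mask=0x80, size=1) example from the
commit message this gives s = 56 and:

	value = ((s64)0x7f << 56) >> 56 = 0x000000000000007f
	mask  = ((s64)0x80 << 56) >> 56 = 0xffffffffffffff80

i.e. the same result as the longer formulation.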

wdyt?


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH v0 3/3] selftests/bpf: Extend tests with more coverage for sign extension
  2025-01-30 11:23 ` [PATCH v0 3/3] selftests/bpf: Extend tests with more coverage for sign extension Dimitar Kanaliev
@ 2025-01-30 22:48   ` Eduard Zingerman
  0 siblings, 0 replies; 7+ messages in thread
From: Eduard Zingerman @ 2025-01-30 22:48 UTC (permalink / raw)
  To: Dimitar Kanaliev, bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, KP Singh,
	Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
	Yonghong Song, Shung-Hsi Yu

On Thu, 2025-01-30 at 13:23 +0200, Dimitar Kanaliev wrote:
> This commit adds a few more tests related to tnum_scast that explicitly
> check cases with known / unknown sign bit, as well as values that cross
> zero (going from negative to positive).
> 
> Signed-off-by: Dimitar Kanaliev <dimitar.kanaliev@siteground.com>
> ---
>  .../selftests/bpf/progs/verifier_movsx.c      | 73 +++++++++++++++++++
>  1 file changed, 73 insertions(+)
> 
> diff --git a/tools/testing/selftests/bpf/progs/verifier_movsx.c b/tools/testing/selftests/bpf/progs/verifier_movsx.c
> index 994bbc346d25..20abeec09dee 100644
> --- a/tools/testing/selftests/bpf/progs/verifier_movsx.c
> +++ b/tools/testing/selftests/bpf/progs/verifier_movsx.c
> @@ -327,6 +327,79 @@ label_%=: 	                                        \
>  	: __clobber_all);
>  }
>  
> +SEC("socket")
> +__description("MOV64SX, S8, unknown value")
> +__success __success_unpriv __retval(1)

Note: __retval() annotation is needed when one wants to execute the
      test using libbpf's bpf_prog_test_run_opts().
      The changes for register range tracking should not affect
      runtime behaviour (unless there is a bug and some dead code
      elimination is done incorrectly).
      I suggest adding a __log_level(2) annotation and checking the
      verifier log output with __msg() annotations to see what range is
      inferred for specific registers.
      As-is these new tests are passing on master as well,
      so the feature is effectively untested.
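      A sketch of what that might look like (the __msg() string below is
      just a placeholder; the exact text has to be taken from the actual
      verifier log of the kernel under test):

      SEC("socket")
      __description("MOV64SX, S8, sign bit unknown")
      __success __log_level(2)
      __msg("smin=-128")  /* placeholder pattern for the inferred range */
      __naked void mov64sx_s8_sign_unknown(void)
      { ... }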

> +__naked void mov64sx_s8_unknown(void)
> +{
> +	asm volatile ("                                      \
> +	call %[bpf_get_prandom_u32];                         \
> +	r1 = r0;                                             \
> +	r1 &= 0xFF;      			             \
> +	r1 = (s8)r1;  					     \
> +	if r1 s>= -128 goto l0_%=;                           \
> +	r0 = 0;                                              \
> +	exit;                                                \
> +l0_%=:                                                       \
> +	if r1 s<= 127 goto l1_%=;                            \
> +	r0 = 0;                                              \
> +	exit;                                                \
> +l1_%=:                                                       \
> +	r0 = 1;                                              \
> +	exit;                                                \
> +"	:
> +	: __imm(bpf_get_prandom_u32)
> +	: __clobber_all);
> +}

[...]


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH v0 2/3] bpf: verifier: Simplify register sign extension with tnum_scast
  2025-01-30 11:23 ` [PATCH v0 2/3] bpf: verifier: Simplify register sign extension with tnum_scast Dimitar Kanaliev
@ 2025-01-30 23:24   ` Eduard Zingerman
  0 siblings, 0 replies; 7+ messages in thread
From: Eduard Zingerman @ 2025-01-30 23:24 UTC (permalink / raw)
  To: Dimitar Kanaliev, bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, KP Singh,
	Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
	Yonghong Song, Shung-Hsi Yu

On Thu, 2025-01-30 at 13:23 +0200, Dimitar Kanaliev wrote:

[...]

>  static void coerce_reg_to_size_sx(struct bpf_reg_state *reg, int size)
>  {
> -	s64 init_s64_max, init_s64_min, s64_max, s64_min, u64_cval;
> -	u64 top_smax_value, top_smin_value;
> -	u64 num_bits = size * 8;
> +	u64 s = size * 8 - 1;
> +	u64 sign_mask = 1ULL << s;
> +	s64 smin_value, smax_value;
> +	u64 umax_value;
>  
> -	if (tnum_is_const(reg->var_off)) {
> -		u64_cval = reg->var_off.value;
> -		if (size == 1)
> -			reg->var_off = tnum_const((s8)u64_cval);
> -		else if (size == 2)
> -			reg->var_off = tnum_const((s16)u64_cval);
> -		else
> -			/* size == 4 */
> -			reg->var_off = tnum_const((s32)u64_cval);
> -
> -		u64_cval = reg->var_off.value;
> -		reg->smax_value = reg->smin_value = u64_cval;
> -		reg->umax_value = reg->umin_value = u64_cval;
> -		reg->s32_max_value = reg->s32_min_value = u64_cval;
> -		reg->u32_max_value = reg->u32_min_value = u64_cval;
> +	if (size >= 8)
>  		return;
> -	}
>  
> -	top_smax_value = ((u64)reg->smax_value >> num_bits) << num_bits;
> -	top_smin_value = ((u64)reg->smin_value >> num_bits) << num_bits;
> +	reg->var_off = tnum_scast(reg->var_off, size);
>  
> -	if (top_smax_value != top_smin_value)
> -		goto out;
> -
> -	/* find the s64_min and s64_min after sign extension */
> -	if (size == 1) {
> -		init_s64_max = (s8)reg->smax_value;
> -		init_s64_min = (s8)reg->smin_value;
> -	} else if (size == 2) {
> -		init_s64_max = (s16)reg->smax_value;
> -		init_s64_min = (s16)reg->smin_value;
> +	if (reg->var_off.mask & sign_mask) {
> +		smin_value = -(1LL << s);
> +		smax_value = (1LL << s) - 1;
>  	} else {
> -		init_s64_max = (s32)reg->smax_value;
> -		init_s64_min = (s32)reg->smin_value;
> +		smin_value = (s64)(reg->var_off.value);
> +		smax_value = (s64)(reg->var_off.value | reg->var_off.mask);

Note the following code in __update_reg64_bounds():

static void __update_reg64_bounds(struct bpf_reg_state *reg)
{
	/* min signed is max(sign bit) | min(other bits) */
	reg->smin_value = max_t(s64, reg->smin_value,
				reg->var_off.value | (reg->var_off.mask & S64_MIN));
	/* max signed is min(sign bit) | max(other bits) */
	reg->smax_value = min_t(s64, reg->smax_value,
				reg->var_off.value | (reg->var_off.mask & S64_MAX));
	reg->umin_value = max(reg->umin_value, reg->var_off.value);
	reg->umax_value = min(reg->umax_value,
			      reg->var_off.value | reg->var_off.mask);
}

Is it possible to set {u,s}min/{u,s}max to {U,S}64_MIN/{U,S}64_MAX and rely on
__update_reg64_bounds() for this computation?
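
Something along these lines, perhaps (untested sketch of the idea above,
assuming __mark_reg_unbounded() and reg_bounds_sync() are usable at this
point):

	reg->var_off = tnum_scast(reg->var_off, size);
	__mark_reg_unbounded(reg);	/* reset bounds to the worst case */
	reg_bounds_sync(reg);		/* re-derive bounds from var_off */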

>  	}
>  
> -	s64_max = max(init_s64_max, init_s64_min);
> -	s64_min = min(init_s64_max, init_s64_min);
> +	reg->smin_value = smin_value;
> +	reg->smax_value = smax_value;
>  
> -	/* both of s64_max/s64_min positive or negative */
> -	if ((s64_max >= 0) == (s64_min >= 0)) {
> -		reg->s32_min_value = reg->smin_value = s64_min;
> -		reg->s32_max_value = reg->smax_value = s64_max;
> -		reg->u32_min_value = reg->umin_value = s64_min;
> -		reg->u32_max_value = reg->umax_value = s64_max;
> -		reg->var_off = tnum_range(s64_min, s64_max);
> -		return;
> -	}
> +	reg->umin_value = reg->var_off.value;
> +	umax_value = reg->var_off.value | reg->var_off.mask;
> +	reg->umax_value = umax_value;
>  
> -out:
> -	set_sext64_default_val(reg, size);

After this commit the functions set_sext64_default_val() and
set_sext32_default_val() are never called.

> +	reg->s32_min_value = (s32)smin_value;
> +	reg->s32_max_value = (s32)smax_value;
> +	reg->u32_min_value = (u32)reg->umin_value;
> +	reg->u32_max_value = (u32)umax_value;
>  }

[...]


^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2025-01-30 23:25 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-01-30 11:23 [PATCH v0 0/3] Add tnum_scast helper Dimitar Kanaliev
2025-01-30 11:23 ` [PATCH v0 1/3] bpf: Introduce tnum_scast as a tnum native sign extension helper Dimitar Kanaliev
2025-01-30 21:59   ` Eduard Zingerman
2025-01-30 11:23 ` [PATCH v0 2/3] bpf: verifier: Simplify register sign extension with tnum_scast Dimitar Kanaliev
2025-01-30 23:24   ` Eduard Zingerman
2025-01-30 11:23 ` [PATCH v0 3/3] selftests/bpf: Extend tests with more coverage for sign extension Dimitar Kanaliev
2025-01-30 22:48   ` Eduard Zingerman
