* [PATCH bpf-next v3 0/2] bpf: Add bitwise tracking for BPF_END
@ 2026-02-04 11:15 Tianci Cao
2026-02-04 11:15 ` [PATCH bpf-next v3 1/2] " Tianci Cao
` (2 more replies)
0 siblings, 3 replies; 8+ messages in thread
From: Tianci Cao @ 2026-02-04 11:15 UTC (permalink / raw)
To: bpf
Cc: ast, daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
yonghong.song, kpsingh, sdf, haoluo, jolsa, tangyazhou518,
shenghaoyuan0928, ziye
Add bitwise tracking (tnum analysis) for BPF_END (`bswap(16|32|64)`,
`be(16|32|64)`, `le(16|32|64)`) operations. Please see commit log of
1/2 for more details.
---
Change log:
v3:
- Resend to fix a version control error in v2.
- The rest of the changes are identical to v2.
v2 (incorrect): https://lore.kernel.org/bpf/20260204091146.52447-1-ziye@zju.edu.cn/
- Refactored selftests using BSWAP_RANGE_TEST macro to eliminate code
duplication and improve maintainability. (Eduard)
- Simplified test names. (Eduard)
- Reduced excessive comments in test cases. (Eduard)
- Added more comments to explain BPF_END's special handling of zext_32_to_64.
v1: https://lore.kernel.org/bpf/20260202133536.66207-1-ziye@zju.edu.cn/
Tianci Cao (2):
bpf: Add bitwise tracking for BPF_END
selftests/bpf: Add tests for BPF_END bitwise tracking
include/linux/tnum.h | 5 ++
kernel/bpf/tnum.c | 16 +++++
kernel/bpf/verifier.c | 60 ++++++++++++++++++-
.../selftests/bpf/progs/verifier_bswap.c | 43 +++++++++++++
4 files changed, 121 insertions(+), 3 deletions(-)
--
2.53.0
^ permalink raw reply [flat|nested] 8+ messages in thread
* [PATCH bpf-next v3 1/2] bpf: Add bitwise tracking for BPF_END
2026-02-04 11:15 [PATCH bpf-next v3 0/2] bpf: Add bitwise tracking for BPF_END Tianci Cao
@ 2026-02-04 11:15 ` Tianci Cao
2026-02-04 11:15 ` [PATCH bpf-next v3 2/2] selftests/bpf: Add tests for BPF_END bitwise tracking Tianci Cao
2026-02-05 3:14 ` [PATCH bpf-next v3 0/2] bpf: Add bitwise tracking for BPF_END Alexei Starovoitov
2 siblings, 0 replies; 8+ messages in thread
From: Tianci Cao @ 2026-02-04 11:15 UTC (permalink / raw)
To: bpf
Cc: ast, daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
yonghong.song, kpsingh, sdf, haoluo, jolsa, tangyazhou518,
shenghaoyuan0928, ziye
This patch implements bitwise tracking (tnum analysis) for the BPF_END
(byte swap) operations.
Currently, the BPF verifier does not track values across BPF_END
operations, treating the result as completely unknown. This limits the
verifier's ability to prove the safety of programs that perform
endianness conversions, which are common in networking code.
For example, the following code pattern for port number validation:
int test(struct pt_regs *ctx) {
__u64 x = bpf_get_prandom_u32();
x &= 0x3f00; // Range: [0, 0x3f00], var_off: (0x0; 0x3f00)
x = bswap16(x); // Should swap to range [0, 0x3f], var_off: (0x0; 0x3f)
if (x > 0x3f) goto trap;
return 0;
trap:
return *(u64 *)NULL; // Should be unreachable
}
This currently generates the following verifier output:
1: (54) w0 &= 16128 ; R0=scalar(smin=smin32=0,smax=umax=smax32=umax32=16128,var_off=(0x0; 0x3f00))
2: (d7) r0 = bswap16 r0 ; R0=scalar()
3: (25) if r0 > 0x3f goto pc+2 ; R0=scalar(smin=smin32=0,smax=umax=smax32=umax32=63,var_off=(0x0; 0x3f))
Without this patch, even though the verifier knows which bits of `x`
may be set, after bswap16 it loses all tracking information and treats
the port as a completely unknown value in [0, 65535].
According to the BPF instruction set[1], there are 3 kinds of BPF_END:
1. `bswap(16|32|64)`: opcode=0xd7 (BPF_END | BPF_ALU64 | BPF_TO_LE)
- do unconditional swap
2. `le(16|32|64)`: opcode=0xd4 (BPF_END | BPF_ALU | BPF_TO_LE)
- on big-endian: do swap
- on little-endian: truncation (16/32-bit) or no-op (64-bit)
3. `be(16|32|64)`: opcode=0xdc (BPF_END | BPF_ALU | BPF_TO_BE)
- on little-endian: do swap
- on big-endian: truncation (16/32-bit) or no-op (64-bit)
Since BPF_END operations are inherently bit-wise permutations, tnum
(bitwise tracking) offers the most efficient and precise mechanism
for value analysis. By implementing `tnum_bswap16`, `tnum_bswap32`,
and `tnum_bswap64`, we can derive exact `var_off` values concisely,
directly reflecting the bit-level changes.
Here is the overview of changes:
1. In `tnum_bswap(16|32|64)` (kernel/bpf/tnum.c):
Call the `swab(16|32|64)` helpers on the value and mask of `var_off`,
and truncate for the 16/32-bit cases.
2. In `adjust_scalar_min_max_vals` (kernel/bpf/verifier.c):
Call helper function `scalar_byte_swap`.
- Only do byte swap when
* alu64 (unconditional swap) OR
* switching between big-endian and little-endian machines.
- If a byte swap is needed:
* First call `tnum_bswap(16|32|64)` to update `var_off`.
* Then reset the bounds, since the byte swap scrambles the range.
- For 16/32-bit cases, truncate dst register to match the swapped size.
This enables better verification of networking code that frequently uses
byte swaps for protocol processing, reducing false positive rejections.
[1] https://www.kernel.org/doc/Documentation/bpf/standardization/instruction-set.rst
Co-developed-by: Shenghao Yuan <shenghaoyuan0928@163.com>
Signed-off-by: Shenghao Yuan <shenghaoyuan0928@163.com>
Co-developed-by: Yazhou Tang <tangyazhou518@outlook.com>
Signed-off-by: Yazhou Tang <tangyazhou518@outlook.com>
Signed-off-by: Tianci Cao <ziye@zju.edu.cn>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
---
include/linux/tnum.h | 5 ++++
kernel/bpf/tnum.c | 16 ++++++++++++
kernel/bpf/verifier.c | 60 ++++++++++++++++++++++++++++++++++++++++---
3 files changed, 78 insertions(+), 3 deletions(-)
diff --git a/include/linux/tnum.h b/include/linux/tnum.h
index c52b862dad45..fa4654ffb621 100644
--- a/include/linux/tnum.h
+++ b/include/linux/tnum.h
@@ -63,6 +63,11 @@ struct tnum tnum_union(struct tnum t1, struct tnum t2);
/* Return @a with all but the lowest @size bytes cleared */
struct tnum tnum_cast(struct tnum a, u8 size);
+/* Swap the bytes of a tnum */
+struct tnum tnum_bswap16(struct tnum a);
+struct tnum tnum_bswap32(struct tnum a);
+struct tnum tnum_bswap64(struct tnum a);
+
/* Returns true if @a is a known constant */
static inline bool tnum_is_const(struct tnum a)
{
diff --git a/kernel/bpf/tnum.c b/kernel/bpf/tnum.c
index f8e70e9c3998..26fbfbb01700 100644
--- a/kernel/bpf/tnum.c
+++ b/kernel/bpf/tnum.c
@@ -8,6 +8,7 @@
*/
#include <linux/kernel.h>
#include <linux/tnum.h>
+#include <linux/swab.h>
#define TNUM(_v, _m) (struct tnum){.value = _v, .mask = _m}
/* A completely unknown value */
@@ -253,3 +254,18 @@ struct tnum tnum_const_subreg(struct tnum a, u32 value)
{
return tnum_with_subreg(a, tnum_const(value));
}
+
+struct tnum tnum_bswap16(struct tnum a)
+{
+ return TNUM(swab16(a.value & 0xFFFF), swab16(a.mask & 0xFFFF));
+}
+
+struct tnum tnum_bswap32(struct tnum a)
+{
+ return TNUM(swab32(a.value & 0xFFFFFFFF), swab32(a.mask & 0xFFFFFFFF));
+}
+
+struct tnum tnum_bswap64(struct tnum a)
+{
+ return TNUM(swab64(a.value), swab64(a.mask));
+}
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 40a8252140fb..92e03a5a50f5 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -15832,6 +15832,48 @@ static void scalar_min_max_arsh(struct bpf_reg_state *dst_reg,
__update_reg_bounds(dst_reg);
}
+static void scalar_byte_swap(struct bpf_reg_state *dst_reg, struct bpf_insn *insn)
+{
+ /*
+ * Byte swap operation - update var_off using tnum_bswap.
+ * Three cases:
+ * 1. bswap(16|32|64): opcode=0xd7 (BPF_END | BPF_ALU64 | BPF_TO_LE)
+ * unconditional swap
+ * 2. to_le(16|32|64): opcode=0xd4 (BPF_END | BPF_ALU | BPF_TO_LE)
+ * swap on big-endian, truncation or no-op on little-endian
+ * 3. to_be(16|32|64): opcode=0xdc (BPF_END | BPF_ALU | BPF_TO_BE)
+ * swap on little-endian, truncation or no-op on big-endian
+ */
+
+ bool alu64 = BPF_CLASS(insn->code) == BPF_ALU64;
+ bool to_le = BPF_SRC(insn->code) == BPF_TO_LE;
+ bool is_big_endian;
+#ifdef CONFIG_CPU_BIG_ENDIAN
+ is_big_endian = true;
+#else
+ is_big_endian = false;
+#endif
+ /* Apply bswap if alu64 or switch between big-endian and little-endian machines */
+ bool need_bswap = alu64 || (to_le == is_big_endian);
+
+ if (need_bswap) {
+ if (insn->imm == 16)
+ dst_reg->var_off = tnum_bswap16(dst_reg->var_off);
+ else if (insn->imm == 32)
+ dst_reg->var_off = tnum_bswap32(dst_reg->var_off);
+ else if (insn->imm == 64)
+ dst_reg->var_off = tnum_bswap64(dst_reg->var_off);
+ /*
+ * Byteswap scrambles the range, so we must reset bounds.
+ * Bounds will be re-derived from the new tnum later.
+ */
+ __mark_reg_unbounded(dst_reg);
+ }
+ /* For bswap16/32, truncate dst register to match the swapped size */
+ if (insn->imm == 16 || insn->imm == 32)
+ coerce_reg_to_size(dst_reg, insn->imm / 8);
+}
+
static bool is_safe_to_compute_dst_reg_range(struct bpf_insn *insn,
const struct bpf_reg_state *src_reg)
{
@@ -15858,6 +15900,7 @@ static bool is_safe_to_compute_dst_reg_range(struct bpf_insn *insn,
case BPF_XOR:
case BPF_OR:
case BPF_MUL:
+ case BPF_END:
return true;
/*
@@ -16047,12 +16090,23 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
else
scalar_min_max_arsh(dst_reg, &src_reg);
break;
+ case BPF_END:
+ scalar_byte_swap(dst_reg, insn);
+ break;
default:
break;
}
- /* ALU32 ops are zero extended into 64bit register */
- if (alu32)
+ /*
+ * ALU32 ops are zero extended into 64bit register.
+ *
+ * BPF_END is already handled inside the helper (truncation),
+ * so skip zext here to avoid unexpected zero extension.
+ * e.g., le64: opcode=(BPF_END|BPF_ALU|BPF_TO_LE), imm=0x40
+ * This is a 64bit byte swap operation with alu32==true,
+ * but we should not zero extend the result.
+ */
+ if (alu32 && opcode != BPF_END)
zext_32_to_64(dst_reg);
reg_bounds_sync(dst_reg);
return 0;
@@ -16232,7 +16286,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
}
/* check dest operand */
- if (opcode == BPF_NEG &&
+ if ((opcode == BPF_NEG || opcode == BPF_END) &&
regs[insn->dst_reg].type == SCALAR_VALUE) {
err = check_reg_arg(env, insn->dst_reg, DST_OP_NO_MARK);
err = err ?: adjust_scalar_min_max_vals(env, insn,
--
2.53.0
* [PATCH bpf-next v3 2/2] selftests/bpf: Add tests for BPF_END bitwise tracking
2026-02-04 11:15 [PATCH bpf-next v3 0/2] bpf: Add bitwise tracking for BPF_END Tianci Cao
2026-02-04 11:15 ` [PATCH bpf-next v3 1/2] " Tianci Cao
@ 2026-02-04 11:15 ` Tianci Cao
2026-02-04 19:58 ` Eduard Zingerman
2026-02-05 3:14 ` [PATCH bpf-next v3 0/2] bpf: Add bitwise tracking for BPF_END Alexei Starovoitov
2 siblings, 1 reply; 8+ messages in thread
From: Tianci Cao @ 2026-02-04 11:15 UTC (permalink / raw)
To: bpf
Cc: ast, daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
yonghong.song, kpsingh, sdf, haoluo, jolsa, tangyazhou518,
shenghaoyuan0928, ziye
Now BPF_END has bitwise tracking support. This patch adds selftests to
cover various cases of BPF_END (`bswap(16|32|64)`, `be(16|32|64)`,
`le(16|32|64)`) with bitwise propagation.
This patch is based on the existing `verifier_bswap.c`, and adds several
new types of tests:
1. Unconditional byte swap operations:
- bswap16/bswap32/bswap64 with unknown bytes
2. Endian conversion operations (architecture-aware):
- be16/be32/be64: convert to big-endian
* on little-endian: do swap
* on big-endian: truncation (16/32-bit) or no-op (64-bit)
- le16/le32/le64: convert to little-endian
* on big-endian: do swap
* on little-endian: truncation (16/32-bit) or no-op (64-bit)
Each test simulates realistic networking scenarios where a value is
masked with unknown bits (e.g., var_off=(0x0; 0x3f00), range=[0,0x3f00]),
then byte-swapped, and the verifier must prove the result stays within
expected bounds.
Specifically, these selftests are based on dead code elimination:
If the BPF verifier can precisely track bits through byte swap
operations, it can prune the trap path (invalid memory access) that
should be unreachable, allowing the program to pass verification.
If bitwise tracking is incorrect, the verifier cannot prove the trap
is unreachable, causing verification failure.
The tests use preprocessor conditionals (#if __BYTE_ORDER__) to
verify correct behavior on both little-endian and big-endian
architectures, and require Clang 18+ for bswap instruction support.
Co-developed-by: Shenghao Yuan <shenghaoyuan0928@163.com>
Signed-off-by: Shenghao Yuan <shenghaoyuan0928@163.com>
Co-developed-by: Yazhou Tang <tangyazhou518@outlook.com>
Signed-off-by: Yazhou Tang <tangyazhou518@outlook.com>
Signed-off-by: Tianci Cao <ziye@zju.edu.cn>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
---
.../selftests/bpf/progs/verifier_bswap.c | 43 +++++++++++++++++++
1 file changed, 43 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/verifier_bswap.c b/tools/testing/selftests/bpf/progs/verifier_bswap.c
index e61755656e8d..4b779deee767 100644
--- a/tools/testing/selftests/bpf/progs/verifier_bswap.c
+++ b/tools/testing/selftests/bpf/progs/verifier_bswap.c
@@ -48,6 +48,49 @@ __naked void bswap_64(void)
: __clobber_all);
}
+#define BSWAP_RANGE_TEST(name, op, in_value, out_value) \
+ SEC("socket") \
+ __success __log_level(2) \
+ __msg("r0 &= {{.*}}; R0=scalar({{.*}},var_off=(0x0; " #in_value "))") \
+ __msg("r0 = " op " r0 {{.*}}; R0=scalar({{.*}},var_off=(0x0; " #out_value "))") \
+ __naked void name(void) \
+ { \
+ asm volatile ( \
+ "call %[bpf_get_prandom_u32];" \
+ "r0 &= " #in_value ";" \
+ "r0 = " op " r0;" \
+ "r2 = " #out_value " ll;" \
+ "if r0 > r2 goto trap_%=;" \
+ "r0 = 0;" \
+ "exit;" \
+ "trap_%=:" \
+ "r1 = 42;" \
+ "r0 = *(u64 *)(r1 + 0);" \
+ "exit;" \
+ : \
+ : __imm(bpf_get_prandom_u32) \
+ : __clobber_all); \
+ }
+
+BSWAP_RANGE_TEST(bswap16_range, "bswap16", 0x3f00, 0x3f)
+BSWAP_RANGE_TEST(bswap32_range, "bswap32", 0x3f00, 0x3f0000)
+BSWAP_RANGE_TEST(bswap64_range, "bswap64", 0x3f00, 0x3f000000000000)
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+BSWAP_RANGE_TEST(be16_range, "be16", 0x3f00, 0x3f)
+BSWAP_RANGE_TEST(be32_range, "be32", 0x3f00, 0x3f0000)
+BSWAP_RANGE_TEST(be64_range, "be64", 0x3f00, 0x3f000000000000)
+BSWAP_RANGE_TEST(le16_range, "le16", 0x3f00, 0x3f00)
+BSWAP_RANGE_TEST(le32_range, "le32", 0x3f00, 0x3f00)
+BSWAP_RANGE_TEST(le64_range, "le64", 0x3f00, 0x3f00)
+#else
+BSWAP_RANGE_TEST(be16_range, "be16", 0x3f00, 0x3f00)
+BSWAP_RANGE_TEST(be32_range, "be32", 0x3f00, 0x3f00)
+BSWAP_RANGE_TEST(be64_range, "be64", 0x3f00, 0x3f00)
+BSWAP_RANGE_TEST(le16_range, "le16", 0x3f00, 0x3f)
+BSWAP_RANGE_TEST(le32_range, "le32", 0x3f00, 0x3f0000)
+BSWAP_RANGE_TEST(le64_range, "le64", 0x3f00, 0x3f000000000000)
+#endif
+
#else
SEC("socket")
--
2.53.0
* Re: [PATCH bpf-next v3 2/2] selftests/bpf: Add tests for BPF_END bitwise tracking
2026-02-04 11:15 ` [PATCH bpf-next v3 2/2] selftests/bpf: Add tests for BPF_END bitwise tracking Tianci Cao
@ 2026-02-04 19:58 ` Eduard Zingerman
2026-02-05 5:23 ` Tianci Cao
0 siblings, 1 reply; 8+ messages in thread
From: Eduard Zingerman @ 2026-02-04 19:58 UTC (permalink / raw)
To: Tianci Cao, bpf
Cc: ast, daniel, john.fastabend, andrii, martin.lau, song,
yonghong.song, kpsingh, sdf, haoluo, jolsa, tangyazhou518,
shenghaoyuan0928
On Wed, 2026-02-04 at 19:15 +0800, Tianci Cao wrote:
> Now BPF_END has bitwise tracking support. This patch adds selftests to
> cover various cases of BPF_END (`bswap(16|32|64)`, `be(16|32|64)`,
> `le(16|32|64)`) with bitwise propagation.
>
> This patch is based on the existing `verifier_bswap.c`, and adds several
> new types of tests:
>
> 1. Unconditional byte swap operations:
> - bswap16/bswap32/bswap64 with unknown bytes
>
> 2. Endian conversion operations (architecture-aware):
> - be16/be32/be64: convert to big-endian
> * on little-endian: do swap
> * on big-endian: truncation (16/32-bit) or no-op (64-bit)
> - le16/le32/le64: convert to little-endian
> * on big-endian: do swap
> * on little-endian: truncation (16/32-bit) or no-op (64-bit)
>
> Each test simulates realistic networking scenarios where a value is
> masked with unknown bits (e.g., var_off=(0x0; 0x3f00), range=[0,0x3f00]),
> then byte-swapped, and the verifier must prove the result stays within
> expected bounds.
>
> Specifically, these selftests are based on dead code elimination:
> If the BPF verifier can precisely track bits through byte swap
> operations, it can prune the trap path (invalid memory access) that
> should be unreachable, allowing the program to pass verification.
> If bitwise tracking is incorrect, the verifier cannot prove the trap
> is unreachable, causing verification failure.
>
> The tests use preprocessor conditionals (#if __BYTE_ORDER__) to
> verify correct behavior on both little-endian and big-endian
> architectures, and require Clang 18+ for bswap instruction support.
>
> Co-developed-by: Shenghao Yuan <shenghaoyuan0928@163.com>
> Signed-off-by: Shenghao Yuan <shenghaoyuan0928@163.com>
> Co-developed-by: Yazhou Tang <tangyazhou518@outlook.com>
> Signed-off-by: Yazhou Tang <tangyazhou518@outlook.com>
> Signed-off-by: Tianci Cao <ziye@zju.edu.cn>
> Acked-by: Eduard Zingerman <eddyz87@gmail.com>
> ---
Nit: Technically I did not ack this patch, just suggested the changes.
For such cases you should keep the SOB as in the previous version,
or add a Suggested-by. Acks should only be added if someone sent
an email with the ack itself.
Anyway, patch-set looks good to me, thank you for working on this!
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
[...]
* Re: [PATCH bpf-next v3 0/2] bpf: Add bitwise tracking for BPF_END
2026-02-04 11:15 [PATCH bpf-next v3 0/2] bpf: Add bitwise tracking for BPF_END Tianci Cao
2026-02-04 11:15 ` [PATCH bpf-next v3 1/2] " Tianci Cao
2026-02-04 11:15 ` [PATCH bpf-next v3 2/2] selftests/bpf: Add tests for BPF_END bitwise tracking Tianci Cao
@ 2026-02-05 3:14 ` Alexei Starovoitov
2 siblings, 0 replies; 8+ messages in thread
From: Alexei Starovoitov @ 2026-02-05 3:14 UTC (permalink / raw)
To: Tianci Cao
Cc: bpf, Alexei Starovoitov, Daniel Borkmann, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Eduard, Song Liu,
Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
Yazhou Tang, Shenghao Yuan
On Wed, Feb 4, 2026 at 3:15 AM Tianci Cao <ziye@zju.edu.cn> wrote:
>
> Add bitwise tracking (tnum analysis) for BPF_END (`bswap(16|32|64)`,
> `be(16|32|64)`, `le(16|32|64)`) operations. Please see commit log of
> 1/2 for more details.
>
> ---
>
> Change log:
>
> v3:
> - Resend to fix a version control error in v2.
> - The rest of the changes are identical to v2.
Applied. Thanks
* Re: [PATCH bpf-next v3 2/2] selftests/bpf: Add tests for BPF_END bitwise tracking
2026-02-04 19:58 ` Eduard Zingerman
@ 2026-02-05 5:23 ` Tianci Cao
2026-02-05 18:35 ` Eduard Zingerman
0 siblings, 1 reply; 8+ messages in thread
From: Tianci Cao @ 2026-02-05 5:23 UTC (permalink / raw)
To: eddyz87
Cc: andrii, ast, bpf, daniel, haoluo, john.fastabend, jolsa, kpsingh,
martin.lau, sdf, shenghaoyuan0928, song, tangyazhou518,
yonghong.song, ziye
Hi Eduard,
On 2/5/26 3:58 AM, Eduard Zingerman wrote:
> Nit: Technically I did not ack this patch, just suggested the changes.
> For such cases you should keep the SOB as in the previous version,
> or add a Suggested-by. Acks should only be added if someone sent
> an email with the ack itself.
>
> Anyway, patch-set looks good to me, thank you for working on this!
>
> Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Thanks for the review and the Ack!
I see the patch set has been merged (thanks Alexei for applying!). Apologies
for the misunderstanding regarding the tags earlier. I have noted your advice
and will follow the proper conventions in future submissions.
On a side note, we are also working on range tracking (interval analysis)
support for bswap operations. While bswap is non-monotonic and hard to
track with ranges, we found that by leveraging existing range information,
range tracking can complement bitwise tracking (tnum) to yield a more
precise combined reg state.
Given that the implementation would be more complex than the tnum approach,
we wanted to ask whether the community would be interested in seeing an RFC
for this.
Thanks,
Tianci
* Re: [PATCH bpf-next v3 2/2] selftests/bpf: Add tests for BPF_END bitwise tracking
2026-02-05 5:23 ` Tianci Cao
@ 2026-02-05 18:35 ` Eduard Zingerman
2026-02-10 9:44 ` Tianci Cao
0 siblings, 1 reply; 8+ messages in thread
From: Eduard Zingerman @ 2026-02-05 18:35 UTC (permalink / raw)
To: Tianci Cao
Cc: andrii, ast, bpf, daniel, haoluo, john.fastabend, jolsa, kpsingh,
martin.lau, sdf, shenghaoyuan0928, song, tangyazhou518,
yonghong.song
On Thu, 2026-02-05 at 13:23 +0800, Tianci Cao wrote:
[...]
> On a side note, we are also working on range tracking (interval analysis)
> support for bswap operations. While bswap is non-monotonic and hard to
> track with ranges, we found that by leveraging existing range information,
> range tracking can complement bitwise tracking (tnum) to yield a more
> precise combined reg state.
>
> Given that the implementation would be more complex than the tnum approach,
> we wanted to ask if the community would be interested in seeing an RFC for this?
Hi Tianci,
Depends on how complicated the implementation is.
Is it just theoretical work or you see some real-world programs that
fail to verify due to insufficient BPF_END range tracking?
Thanks,
Eduard
[...]
* Re: [PATCH bpf-next v3 2/2] selftests/bpf: Add tests for BPF_END bitwise tracking
2026-02-05 18:35 ` Eduard Zingerman
@ 2026-02-10 9:44 ` Tianci Cao
0 siblings, 0 replies; 8+ messages in thread
From: Tianci Cao @ 2026-02-10 9:44 UTC (permalink / raw)
To: eddyz87
Cc: andrii, ast, bpf, daniel, haoluo, john.fastabend, jolsa, kpsingh,
martin.lau, sdf, shenghaoyuan0928, song, tangyazhou518,
yonghong.song, ziye
Thu, 05 Feb 2026 10:35:41 -0800, Eduard wrote:
> Depends on how complicated the implementation is.
> Is it just theoretical work or you see some real-world programs that
> fail to verify due to insufficient BPF_END range tracking?
Hi Eduard,
Thanks for the feedback!
This is primarily theoretical research focused on developing an optimal algorithm
for interval analysis on bswap operations. We've been studying how to compute
precise min/max bounds for byte-swapped values given input ranges.
Our current demo implementation is approximately 140 lines of code for the
core bswap interval logic. There's room for optimization to reduce the code
size in future iterations.
We have verified the soundness of our algorithm using the Z3 SMT solver and
extensive randomized testing. Our interval tracking approach provides tighter
bounds than the Linux kernel's approach of converting intervals to tnum via
tnum_range() and then applying tnum_bswap() (kernel/bpf/tnum.c).
Here's a detailed example showing how both algorithms work:
Example: Small range with non-aligned boundaries (64-bits bswap)
Input range: [0x38e4, 0x978b]
* Tnum Algorithm :
1. tnum_range(0x38e4, 0x978b) (the function to convert a range to tnum in tnum.c):
=> tnum = {0x0, 0xffff}
2. tnum_bswap64(tnum = {0x0, 0xffff}):
- value = bswap64(0x0) = 0x0
- mask = bswap64(0xffff) = 0xffff000000000000
3. Extract bounds:
- min = 0x0
- max = 0xffff000000000000
result: tnum: {value: 0x0, mask: 0xffff000000000000}
range: [0x0,0xffff000000000000]
* Interval Algorithm:
let bswap_min(min,max) = res_min
bswap_max(min,max) = res_max
1. Identify common prefix:
min = 0x38e4, max = 0x978b
highest 48 bits are the same
Only the lowest 2 bytes differ
2. Compute bswap_min(0x38e4, 0x978b):
0x3900 in range [0x38e4, 0x978b]
res_min = bswap64(0x3900) = 0x0039000000000000
3. Compute bswap_max(0x38e4, 0x978b):
0x96ff in range [0x38e4, 0x978b]
res_max = bswap64(0x96ff) = 0xff96000000000000
result: range: [0x0039000000000000, 0xff96000000000000]
tnum: {value: 0x0000000000000000, mask: 0xffffffffffffffff}
Comparison:
- Interval range: [0x0039000000000000, 0xff96000000000000]
- Tnum range: [0x0000000000000000, 0xffff000000000000]
The key difference: Interval algorithm exploits bswap's byte-exchange property
to align boundaries precisely, while tnum conservatively marks all "uncertain"
bits as completely unknown.
Given the modest implementation cost and potential for improved precision, we
believe this could be useful. Would it be helpful if we prepare an RFC with
detailed examples and benchmark data?
Best regards,
Tianci
end of thread, other threads:[~2026-02-10 9:45 UTC | newest]
Thread overview: 8+ messages
2026-02-04 11:15 [PATCH bpf-next v3 0/2] bpf: Add bitwise tracking for BPF_END Tianci Cao
2026-02-04 11:15 ` [PATCH bpf-next v3 1/2] " Tianci Cao
2026-02-04 11:15 ` [PATCH bpf-next v3 2/2] selftests/bpf: Add tests for BPF_END bitwise tracking Tianci Cao
2026-02-04 19:58 ` Eduard Zingerman
2026-02-05 5:23 ` Tianci Cao
2026-02-05 18:35 ` Eduard Zingerman
2026-02-10 9:44 ` Tianci Cao
2026-02-05 3:14 ` [PATCH bpf-next v3 0/2] bpf: Add bitwise tracking for BPF_END Alexei Starovoitov