* [PATCH bpf-next v2 00/15] Improvements for tracking scalars in the BPF verifier
@ 2024-01-08 20:51 Maxim Mikityanskiy
2024-01-08 20:51 ` [PATCH bpf-next v2 01/15] selftests/bpf: Fix the u64_offset_to_skb_data test Maxim Mikityanskiy
` (15 more replies)
0 siblings, 16 replies; 28+ messages in thread
From: Maxim Mikityanskiy @ 2024-01-08 20:51 UTC (permalink / raw)
To: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Shung-Hsi Yu
Cc: John Fastabend, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, linux-kselftest, netdev,
Maxim Mikityanskiy
From: Maxim Mikityanskiy <maxim@isovalent.com>
The goal of this series is to extend the verifier's capabilities of
tracking scalars when they are spilled to the stack, especially when the
spill or fill is narrowing. It also contains Eduard's fix for infinite
loop detection and his state pruning optimization that compensates for a
verification complexity regression introduced by tracking unbounded
scalars. These improvements reduce the surface of false rejections that
I observed while working on the Cilium codebase.
Patch 1 (Maxim): Fix for an existing test; it will matter later in the
series.
Patches 2-3 (Eduard): Fixes for false rejections in infinite loop
detection that happen in the selftests when my patches are applied.
Patches 4-5 (Maxim): Fix an inconsistency in find_equal_scalars that
could occur when 32-bit spills were made.
Patches 6-11 (Maxim): Support the case where boundary checks are
performed only after the register has been spilled to the stack.
Patches 12-13 (Maxim): Support narrowing fills.
Patches 14-15 (Eduard): Optimization for state pruning in stacksafe() to
mitigate the verification complexity regression.
veristat -e file,prog,states -f '!states_diff<50' -f '!states_pct<10' -f '!states_a<10' -f '!states_b<10' -C ...
* Without patch 14:
File Program States (A) States (B) States (DIFF)
-------------------- ------------ ---------- ---------- ----------------
bpf_xdp.o tail_lb_ipv6 3877 2936 -941 (-24.27%)
pyperf180.bpf.o on_event 8422 10456 +2034 (+24.15%)
pyperf600.bpf.o on_event 22259 37319 +15060 (+67.66%)
pyperf600_iter.bpf.o on_event 400 540 +140 (+35.00%)
strobemeta.bpf.o on_event 4702 13435 +8733 (+185.73%)
* With patch 14:
File Program States (A) States (B) States (DIFF)
-------------------- ------------ ---------- ---------- --------------
bpf_xdp.o tail_lb_ipv6 3877 2937 -940 (-24.25%)
pyperf600_iter.bpf.o on_event 400 500 +100 (+25.00%)
v2 changes:
Fixed comments in patch 1, moved endianness checks to header files in
patch 12 where possible, added Eduard's ACKs.
Eduard Zingerman (4):
bpf: make infinite loop detection in is_state_visited() exact
selftests/bpf: check if imprecise stack spills confuse infinite loop
detection
bpf: Optimize state pruning for spilled scalars
selftests/bpf: states pruning checks for scalar vs STACK_{MISC,ZERO}
Maxim Mikityanskiy (11):
selftests/bpf: Fix the u64_offset_to_skb_data test
bpf: Make bpf_for_each_spilled_reg consider narrow spills
selftests/bpf: Add a test case for 32-bit spill tracking
bpf: Add the assign_scalar_id_before_mov function
bpf: Add the get_reg_width function
bpf: Assign ID to scalars on spill
selftests/bpf: Test assigning ID to scalars on spill
bpf: Track spilled unbounded scalars
selftests/bpf: Test tracking spilled unbounded scalars
bpf: Preserve boundaries and track scalars on narrowing fill
selftests/bpf: Add test cases for narrowing fill
include/linux/bpf_verifier.h | 4 +-
include/linux/filter.h | 12 +
kernel/bpf/verifier.c | 155 ++++-
.../bpf/progs/verifier_direct_packet_access.c | 2 +-
.../selftests/bpf/progs/verifier_loops1.c | 24 +
.../selftests/bpf/progs/verifier_spill_fill.c | 533 +++++++++++++++++-
.../testing/selftests/bpf/verifier/precise.c | 6 +-
7 files changed, 685 insertions(+), 51 deletions(-)
--
2.43.0
* [PATCH bpf-next v2 01/15] selftests/bpf: Fix the u64_offset_to_skb_data test
From: Maxim Mikityanskiy <maxim@isovalent.com>
The u64_offset_to_skb_data test is supposed to make a 64-bit fill, but
instead makes a 16-bit one. Fix the test according to its intention and
update the comments accordingly (umax is no longer 0xffff). The 16-bit
fill is covered by u16_offset_to_skb_data.
Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
---
tools/testing/selftests/bpf/progs/verifier_spill_fill.c | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index 39fe3372e0e0..848f2930f599 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -243,7 +243,7 @@ l0_%=: r0 = 0; \
SEC("tc")
__description("Spill u32 const scalars. Refill as u64. Offset to skb->data")
-__failure __msg("invalid access to packet")
+__failure __msg("math between pkt pointer and register with unbounded min value is not allowed")
__naked void u64_offset_to_skb_data(void)
{
asm volatile (" \
@@ -253,13 +253,11 @@ __naked void u64_offset_to_skb_data(void)
w7 = 20; \
*(u32*)(r10 - 4) = r6; \
*(u32*)(r10 - 8) = r7; \
- r4 = *(u16*)(r10 - 8); \
+ r4 = *(u64*)(r10 - 8); \
r0 = r2; \
- /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */\
+ /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4= */ \
r0 += r4; \
- /* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=umax=65535 */\
if r0 > r3 goto l0_%=; \
- /* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=20 */\
r0 = *(u32*)(r2 + 0); \
l0_%=: r0 = 0; \
exit; \
--
2.43.0
* [PATCH bpf-next v2 02/15] bpf: make infinite loop detection in is_state_visited() exact
From: Eduard Zingerman <eddyz87@gmail.com>
The current infinite loop detection mechanism is speculative:
- first, the states_maybe_looping() check is done, which simply does a
memcmp for R1-R10 in the current frame;
- second, states_equal(..., exact=false) is called. With exact=false,
states_equal() compares scalars for equality only if the scalar in the
old state has a precision mark.
Such logic might be problematic if the compiler makes some unlucky
stack spill/fill decisions. An artificial example of a false positive
looks as follows:
r0 = ... unknown scalar ...
r0 &= 0xff;
*(u64 *)(r10 - 8) = r0;
r0 = 0;
loop:
r0 = *(u64 *)(r10 - 8);
if r0 > 10 goto exit_;
r0 += 1;
*(u64 *)(r10 - 8) = r0;
r0 = 0;
goto loop;
This commit updates the call to states_equal() to use exact=true,
forcing all scalar comparisons to be exact.
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
kernel/bpf/verifier.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index adbf330d364b..bc565f445410 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -17023,7 +17023,7 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
}
/* attempt to detect infinite loop to avoid unnecessary doomed work */
if (states_maybe_looping(&sl->state, cur) &&
- states_equal(env, &sl->state, cur, false) &&
+ states_equal(env, &sl->state, cur, true) &&
!iter_active_depths_differ(&sl->state, cur) &&
sl->state.callback_unroll_depth == cur->callback_unroll_depth) {
verbose_linfo(env, insn_idx, "; ");
--
2.43.0
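To make the failure mode above concrete, here is a hypothetical, simplified model in plain C (not the kernel's actual data structures; the struct and function names are illustrative only) of how an exact=false comparison can wrongly merge two distinct loop states, while exact=true keeps them apart:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy scalar state: a value range plus a precision mark. */
struct scalar_state {
	uint64_t umin, umax;
	bool precise;
};

/* exact=false: an imprecise scalar in the old state matches anything,
 * so states that only differ in imprecise spilled scalars look equal
 * and a loop is falsely "detected". exact=true compares ranges always.
 */
static bool scalars_equal(const struct scalar_state *old,
			  const struct scalar_state *cur, bool exact)
{
	if (!exact && !old->precise)
		return true;
	return old->umin == cur->umin && old->umax == cur->umax;
}
```

In the example program above, the spilled loop counter is imprecise, so successive iterations compare as equal under exact=false even though the counter's range differs.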
* [PATCH bpf-next v2 03/15] selftests/bpf: check if imprecise stack spills confuse infinite loop detection
From: Eduard Zingerman <eddyz87@gmail.com>
Verify that the infinite loop detection logic separates states with
identical register states but different imprecise scalars spilled to
the stack.
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
.../selftests/bpf/progs/verifier_loops1.c | 24 +++++++++++++++++++
1 file changed, 24 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/verifier_loops1.c b/tools/testing/selftests/bpf/progs/verifier_loops1.c
index 71735dbf33d4..e07b43b78fd2 100644
--- a/tools/testing/selftests/bpf/progs/verifier_loops1.c
+++ b/tools/testing/selftests/bpf/progs/verifier_loops1.c
@@ -259,4 +259,28 @@ l0_%=: r2 += r1; \
" ::: __clobber_all);
}
+SEC("xdp")
+__success
+__naked void not_an_inifinite_loop(void)
+{
+ asm volatile (" \
+ call %[bpf_get_prandom_u32]; \
+ r0 &= 0xff; \
+ *(u64 *)(r10 - 8) = r0; \
+ r0 = 0; \
+loop_%=: \
+ r0 = *(u64 *)(r10 - 8); \
+ if r0 > 10 goto exit_%=; \
+ r0 += 1; \
+ *(u64 *)(r10 - 8) = r0; \
+ r0 = 0; \
+ goto loop_%=; \
+exit_%=: \
+ r0 = 0; \
+ exit; \
+" :
+ : __imm(bpf_get_prandom_u32)
+ : __clobber_all);
+}
+
char _license[] SEC("license") = "GPL";
--
2.43.0
* [PATCH bpf-next v2 04/15] bpf: Make bpf_for_each_spilled_reg consider narrow spills
From: Maxim Mikityanskiy <maxim@isovalent.com>
Adjust the check in bpf_get_spilled_reg to take into account spilled
registers narrower than 64 bits. That allows find_equal_scalars to
properly adjust the range of all spilled registers that have the same
ID. Before this change, it was possible for a register and a spilled
register to have the same ID but different ranges if the spill was
narrower than 64 bits and a range check was performed on the register.
Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
---
include/linux/bpf_verifier.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index d07d857ca67f..e11baecbde68 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -453,7 +453,7 @@ struct bpf_verifier_state {
#define bpf_get_spilled_reg(slot, frame, mask) \
(((slot < frame->allocated_stack / BPF_REG_SIZE) && \
- ((1 << frame->stack[slot].slot_type[0]) & (mask))) \
+ ((1 << frame->stack[slot].slot_type[BPF_REG_SIZE - 1]) & (mask))) \
? &frame->stack[slot].spilled_ptr : NULL)
/* Iterate over 'frame', setting 'reg' to either NULL or a spilled register. */
--
2.43.0
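Why checking slot_type[0] misses narrow spills can be sketched in a small standalone model. This assumes the verifier's convention that a spill of `size` bytes marks the last `size` entries of the slot's slot_type array as STACK_SPILL (the spilled bytes end at the slot's highest offset); all names below are illustrative, not the kernel code:

```c
#include <assert.h>
#include <stdbool.h>

#define BPF_REG_SIZE 8

enum slot_mark { STACK_INVALID, STACK_MISC, STACK_SPILL };

struct stack_slot {
	enum slot_mark slot_type[BPF_REG_SIZE];
};

/* Model a spill of `size` bytes: the tail `size` bytes of the 8-byte
 * slot become STACK_SPILL, the rest stay STACK_MISC. */
static void spill_reg(struct stack_slot *s, int size)
{
	for (int i = 0; i < BPF_REG_SIZE; i++)
		s->slot_type[i] = STACK_MISC;
	for (int i = 0; i < size; i++)
		s->slot_type[BPF_REG_SIZE - 1 - i] = STACK_SPILL;
}

/* Old check: byte 0 only - misses spills narrower than 64 bits. */
static bool is_spilled_old(const struct stack_slot *s)
{
	return s->slot_type[0] == STACK_SPILL;
}

/* New check: the last byte is marked for spills of any width. */
static bool is_spilled_new(const struct stack_slot *s)
{
	return s->slot_type[BPF_REG_SIZE - 1] == STACK_SPILL;
}
```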
* [PATCH bpf-next v2 05/15] selftests/bpf: Add a test case for 32-bit spill tracking
From: Maxim Mikityanskiy <maxim@isovalent.com>
When a range check is performed on a register that was 32-bit spilled to
the stack, the IDs of the two instances of the register are the same, so
the range should also be the same.
Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
---
.../selftests/bpf/progs/verifier_spill_fill.c | 31 +++++++++++++++++++
1 file changed, 31 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index 848f2930f599..f303ac19cf41 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -735,4 +735,35 @@ __naked void stack_load_preserves_const_precision_subreg(void)
: __clobber_common);
}
+SEC("xdp")
+__description("32-bit spilled reg range should be tracked")
+__success __retval(0)
+__naked void spill_32bit_range_track(void)
+{
+ asm volatile(" \
+ call %[bpf_ktime_get_ns]; \
+ /* Make r0 bounded. */ \
+ r0 &= 65535; \
+ /* Assign an ID to r0. */ \
+ r1 = r0; \
+ /* 32-bit spill r0 to stack. */ \
+ *(u32*)(r10 - 8) = r0; \
+ /* Boundary check on r0. */ \
+ if r0 < 1 goto l0_%=; \
+ /* 32-bit fill r1 from stack. */ \
+ r1 = *(u32*)(r10 - 8); \
+ /* r1 == r0 => r1 >= 1 always. */ \
+ if r1 >= 1 goto l0_%=; \
+ /* Dead branch: the verifier should prune it. \
+ * Do an invalid memory access if the verifier \
+ * follows it. \
+ */ \
+ r0 = *(u64*)(r9 + 0); \
+l0_%=: r0 = 0; \
+ exit; \
+" :
+ : __imm(bpf_ktime_get_ns)
+ : __clobber_all);
+}
+
char _license[] SEC("license") = "GPL";
--
2.43.0
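The mechanism this test relies on can be sketched as follows: registers (including spilled ones) that share a nonzero ID receive the same tightened bounds, loosely mirroring what find_equal_scalars does. This is a hypothetical model, not the kernel implementation:

```c
#include <assert.h>
#include <stdint.h>

/* Toy register state: an ID shared between "equal" registers and an
 * unsigned value range. */
struct reg {
	uint32_t id;
	uint64_t umin, umax;
};

/* After a conditional jump tightens the bounds of `known`, propagate
 * the tighter range to every register carrying the same nonzero ID. */
static void propagate_bounds(struct reg *regs, int n,
			     const struct reg *known)
{
	for (int i = 0; i < n; i++) {
		if (known->id == 0 || regs[i].id != known->id)
			continue;
		if (known->umin > regs[i].umin)
			regs[i].umin = known->umin;
		if (known->umax < regs[i].umax)
			regs[i].umax = known->umax;
	}
}
```

In the test above, the `if r0 < 1` check tightens r0 to umin=1 on the fall-through path; since the spilled copy shares r0's ID, the fill into r1 inherits umin=1 and the second branch is provably dead.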
* [PATCH bpf-next v2 06/15] bpf: Add the assign_scalar_id_before_mov function
From: Maxim Mikityanskiy <maxim@isovalent.com>
Extract the common code that generates a register ID for src_reg before
MOV if needed into a new function. This function will also be used in
a following commit.
Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
---
kernel/bpf/verifier.c | 33 +++++++++++++++++++--------------
1 file changed, 19 insertions(+), 14 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index bc565f445410..e3eff2becd64 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4403,6 +4403,18 @@ static bool __is_pointer_value(bool allow_ptr_leaks,
return reg->type != SCALAR_VALUE;
}
+static void assign_scalar_id_before_mov(struct bpf_verifier_env *env,
+ struct bpf_reg_state *src_reg)
+{
+ if (src_reg->type == SCALAR_VALUE && !src_reg->id &&
+ !tnum_is_const(src_reg->var_off))
+ /* Ensure that src_reg has a valid ID that will be copied to
+ * dst_reg and then will be used by find_equal_scalars() to
+ * propagate min/max range.
+ */
+ src_reg->id = ++env->id_gen;
+}
+
/* Copy src state preserving dst->parent and dst->live fields */
static void copy_register_state(struct bpf_reg_state *dst, const struct bpf_reg_state *src)
{
@@ -13901,20 +13913,13 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
if (BPF_SRC(insn->code) == BPF_X) {
struct bpf_reg_state *src_reg = regs + insn->src_reg;
struct bpf_reg_state *dst_reg = regs + insn->dst_reg;
- bool need_id = src_reg->type == SCALAR_VALUE && !src_reg->id &&
- !tnum_is_const(src_reg->var_off);
if (BPF_CLASS(insn->code) == BPF_ALU64) {
if (insn->off == 0) {
/* case: R1 = R2
* copy register state to dest reg
*/
- if (need_id)
- /* Assign src and dst registers the same ID
- * that will be used by find_equal_scalars()
- * to propagate min/max range.
- */
- src_reg->id = ++env->id_gen;
+ assign_scalar_id_before_mov(env, src_reg);
copy_register_state(dst_reg, src_reg);
dst_reg->live |= REG_LIVE_WRITTEN;
dst_reg->subreg_def = DEF_NOT_SUBREG;
@@ -13929,8 +13934,8 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
bool no_sext;
no_sext = src_reg->umax_value < (1ULL << (insn->off - 1));
- if (no_sext && need_id)
- src_reg->id = ++env->id_gen;
+ if (no_sext)
+ assign_scalar_id_before_mov(env, src_reg);
copy_register_state(dst_reg, src_reg);
if (!no_sext)
dst_reg->id = 0;
@@ -13952,8 +13957,8 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
if (insn->off == 0) {
bool is_src_reg_u32 = src_reg->umax_value <= U32_MAX;
- if (is_src_reg_u32 && need_id)
- src_reg->id = ++env->id_gen;
+ if (is_src_reg_u32)
+ assign_scalar_id_before_mov(env, src_reg);
copy_register_state(dst_reg, src_reg);
/* Make sure ID is cleared if src_reg is not in u32
* range otherwise dst_reg min/max could be incorrectly
@@ -13967,8 +13972,8 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
/* case: W1 = (s8, s16)W2 */
bool no_sext = src_reg->umax_value < (1ULL << (insn->off - 1));
- if (no_sext && need_id)
- src_reg->id = ++env->id_gen;
+ if (no_sext)
+ assign_scalar_id_before_mov(env, src_reg);
copy_register_state(dst_reg, src_reg);
if (!no_sext)
dst_reg->id = 0;
--
2.43.0
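The rule that the extracted helper encodes can be modeled in a few lines of standalone C. This is a sketch under the assumption that only a non-constant scalar without an existing ID needs a fresh one (constants need no relation tracking, since their exact value is copied); the struct layouts are illustrative, not the kernel's:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

enum reg_type { SCALAR_VALUE, PTR_VALUE };

struct reg {
	enum reg_type type;
	uint32_t id;
	bool is_const; /* stands in for tnum_is_const(var_off) */
};

struct env {
	uint32_t id_gen;
};

/* Assign a fresh nonzero ID to a non-constant scalar that has none,
 * so a subsequent MOV/spill copies a usable ID to the destination. */
static void assign_scalar_id_before_mov(struct env *env, struct reg *src)
{
	if (src->type == SCALAR_VALUE && !src->id && !src->is_const)
		src->id = ++env->id_gen;
}
```

The function is idempotent: calling it on a register that already has an ID changes nothing, which is why the callers in check_alu_op can invoke it unconditionally once the width/sign-extension preconditions hold.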
* [PATCH bpf-next v2 07/15] bpf: Add the get_reg_width function
From: Maxim Mikityanskiy <maxim@isovalent.com>
Put calculation of the register value width into a dedicated function.
This function will also be used in a following commit.
Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
---
kernel/bpf/verifier.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index e3eff2becd64..4cd82a7c1318 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4450,6 +4450,11 @@ static bool is_bpf_st_mem(struct bpf_insn *insn)
return BPF_CLASS(insn->code) == BPF_ST && BPF_MODE(insn->code) == BPF_MEM;
}
+static int get_reg_width(struct bpf_reg_state *reg)
+{
+ return fls64(reg->umax_value);
+}
+
/* check_stack_{read,write}_fixed_off functions track spill/fill of registers,
* stack boundary and alignment are checked in check_mem_access()
*/
@@ -4502,7 +4507,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) && env->bpf_capable) {
save_register_state(env, state, spi, reg, size);
/* Break the relation on a narrowing spill. */
- if (fls64(reg->umax_value) > BITS_PER_BYTE * size)
+ if (get_reg_width(reg) > BITS_PER_BYTE * size)
state->stack[spi].spilled_ptr.id = 0;
} else if (!reg && !(off % BPF_REG_SIZE) && is_bpf_st_mem(insn) &&
insn->imm != 0 && env->bpf_capable) {
@@ -13955,7 +13960,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
return -EACCES;
} else if (src_reg->type == SCALAR_VALUE) {
if (insn->off == 0) {
- bool is_src_reg_u32 = src_reg->umax_value <= U32_MAX;
+ bool is_src_reg_u32 = get_reg_width(src_reg) <= 32;
if (is_src_reg_u32)
assign_scalar_id_before_mov(env, src_reg);
--
2.43.0
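The semantics of get_reg_width() can be demonstrated standalone: it is fls64() of the register's umax_value, i.e. the number of bits needed to represent the largest possible value, which makes the old `umax_value <= U32_MAX` test equivalent to `width <= 32`. A portable stand-in for the kernel's fls64(), assuming a GCC/Clang builtin:

```c
#include <assert.h>
#include <stdint.h>

/* fls64: position of the most significant set bit (1-based), 0 if none.
 * Portable stand-in for the kernel helper using a GCC/Clang builtin. */
static int fls64(uint64_t x)
{
	return x ? 64 - __builtin_clzll(x) : 0;
}

/* Width in bits needed to hold any value the register may contain. */
static int get_reg_width(uint64_t umax_value)
{
	return fls64(umax_value);
}
```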
* [PATCH bpf-next v2 08/15] bpf: Assign ID to scalars on spill
From: Maxim Mikityanskiy <maxim@isovalent.com>
Currently, when a bounded scalar register is spilled to the stack, its
ID is preserved, but only if it was already assigned, i.e. if this
register was MOVed before.
Assign an ID on spill if none is set, so that equal scalars can be
tracked if a register is spilled to the stack and filled into another
register.
One test is adjusted to reflect the change in register IDs.
Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
---
kernel/bpf/verifier.c | 8 +++++++-
.../selftests/bpf/progs/verifier_direct_packet_access.c | 2 +-
2 files changed, 8 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 4cd82a7c1318..055fa8096a08 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4505,9 +4505,15 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
mark_stack_slot_scratched(env, spi);
if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) && env->bpf_capable) {
+ bool reg_value_fits;
+
+ reg_value_fits = get_reg_width(reg) <= BITS_PER_BYTE * size;
+ /* Make sure that reg had an ID to build a relation on spill. */
+ if (reg_value_fits)
+ assign_scalar_id_before_mov(env, reg);
save_register_state(env, state, spi, reg, size);
/* Break the relation on a narrowing spill. */
- if (get_reg_width(reg) > BITS_PER_BYTE * size)
+ if (!reg_value_fits)
state->stack[spi].spilled_ptr.id = 0;
} else if (!reg && !(off % BPF_REG_SIZE) && is_bpf_st_mem(insn) &&
insn->imm != 0 && env->bpf_capable) {
diff --git a/tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c b/tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c
index be95570ab382..28b602ac9cbe 100644
--- a/tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c
+++ b/tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c
@@ -568,7 +568,7 @@ l0_%=: r0 = 0; \
SEC("tc")
__description("direct packet access: test23 (x += pkt_ptr, 4)")
-__failure __msg("invalid access to packet, off=0 size=8, R5(id=2,off=0,r=0)")
+__failure __msg("invalid access to packet, off=0 size=8, R5(id=3,off=0,r=0)")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void test23_x_pkt_ptr_4(void)
{
--
2.43.0
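The new spill rule can be sketched in a standalone model: when the value fits into the spill size, make sure the register carries an ID so the stack slot stays related to it; on a narrowing spill, break the relation by clearing the slot's ID. A simplified sketch under illustrative names, not the kernel code:

```c
#include <assert.h>
#include <stdint.h>

#define BITS_PER_BYTE 8

struct env {
	uint32_t id_gen;
};

struct reg {
	uint32_t id;
	uint64_t umax;
};

/* Bits needed to represent the register's maximum value. */
static int reg_width(uint64_t umax)
{
	return umax ? 64 - __builtin_clzll(umax) : 0;
}

/* Model spilling `src` into a `size`-byte stack slot. Returns the ID
 * recorded in the slot (modeling state->stack[spi].spilled_ptr.id):
 * nonzero when the value fits and the relation is kept, 0 otherwise. */
static uint32_t spill_scalar(struct env *env, struct reg *src, int size)
{
	int fits = reg_width(src->umax) <= BITS_PER_BYTE * size;

	if (fits && !src->id)
		src->id = ++env->id_gen; /* ensure a relation exists */
	return fits ? src->id : 0;      /* narrowing spill breaks it */
}
```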
* [PATCH bpf-next v2 09/15] selftests/bpf: Test assigning ID to scalars on spill
2024-01-09 23:34 ` Andrii Nakryiko
From: Maxim Mikityanskiy <maxim@isovalent.com>
The previous commit implemented assigning IDs to registers holding
scalars before a spill. Add test cases to check the new functionality.
Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
---
.../selftests/bpf/progs/verifier_spill_fill.c | 133 ++++++++++++++++++
1 file changed, 133 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index f303ac19cf41..b05aab925ee5 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -766,4 +766,137 @@ l0_%=: r0 = 0; \
: __clobber_all);
}
+SEC("xdp")
+__description("64-bit spill of 64-bit reg should assign ID")
+__success __retval(0)
+__naked void spill_64bit_of_64bit_ok(void)
+{
+ asm volatile (" \
+ /* Roll one bit to make the register inexact. */\
+ call %[bpf_get_prandom_u32]; \
+ r0 &= 0x80000000; \
+ r0 <<= 32; \
+ /* 64-bit spill r0 to stack - should assign an ID. */\
+ *(u64*)(r10 - 8) = r0; \
+ /* 64-bit fill r1 from stack - should preserve the ID. */\
+ r1 = *(u64*)(r10 - 8); \
+ /* Compare r1 with another register to trigger find_equal_scalars.\
+ * Having one random bit is important here, otherwise the verifier cuts\
+ * the corners. \
+ */ \
+ r2 = 0; \
+ if r1 != r2 goto l0_%=; \
+ /* The result of this comparison is predefined. */\
+ if r0 == r2 goto l0_%=; \
+ /* Dead branch: the verifier should prune it. Do an invalid memory\
+ * access if the verifier follows it. \
+ */ \
+ r0 = *(u64*)(r9 + 0); \
+ exit; \
+l0_%=: r0 = 0; \
+ exit; \
+" :
+ : __imm(bpf_get_prandom_u32)
+ : __clobber_all);
+}
+
+SEC("xdp")
+__description("32-bit spill of 32-bit reg should assign ID")
+__success __retval(0)
+__naked void spill_32bit_of_32bit_ok(void)
+{
+ asm volatile (" \
+ /* Roll one bit to make the register inexact. */\
+ call %[bpf_get_prandom_u32]; \
+ w0 &= 0x80000000; \
+ /* 32-bit spill r0 to stack - should assign an ID. */\
+ *(u32*)(r10 - 8) = r0; \
+ /* 32-bit fill r1 from stack - should preserve the ID. */\
+ r1 = *(u32*)(r10 - 8); \
+ /* Compare r1 with another register to trigger find_equal_scalars.\
+ * Having one random bit is important here, otherwise the verifier cuts\
+ * the corners. \
+ */ \
+ r2 = 0; \
+ if r1 != r2 goto l0_%=; \
+ /* The result of this comparison is predefined. */\
+ if r0 == r2 goto l0_%=; \
+ /* Dead branch: the verifier should prune it. Do an invalid memory\
+ * access if the verifier follows it. \
+ */ \
+ r0 = *(u64*)(r9 + 0); \
+ exit; \
+l0_%=: r0 = 0; \
+ exit; \
+" :
+ : __imm(bpf_get_prandom_u32)
+ : __clobber_all);
+}
+
+SEC("xdp")
+__description("16-bit spill of 16-bit reg should assign ID")
+__success __retval(0)
+__naked void spill_16bit_of_16bit_ok(void)
+{
+ asm volatile (" \
+ /* Roll one bit to make the register inexact. */\
+ call %[bpf_get_prandom_u32]; \
+ r0 &= 0x8000; \
+ /* 16-bit spill r0 to stack - should assign an ID. */\
+ *(u16*)(r10 - 8) = r0; \
+ /* 16-bit fill r1 from stack - should preserve the ID. */\
+ r1 = *(u16*)(r10 - 8); \
+ /* Compare r1 with another register to trigger find_equal_scalars.\
+ * Having one random bit is important here, otherwise the verifier cuts\
+ * the corners. \
+ */ \
+ r2 = 0; \
+ if r1 != r2 goto l0_%=; \
+ /* The result of this comparison is predefined. */\
+ if r0 == r2 goto l0_%=; \
+ /* Dead branch: the verifier should prune it. Do an invalid memory\
+ * access if the verifier follows it. \
+ */ \
+ r0 = *(u64*)(r9 + 0); \
+ exit; \
+l0_%=: r0 = 0; \
+ exit; \
+" :
+ : __imm(bpf_get_prandom_u32)
+ : __clobber_all);
+}
+
+SEC("xdp")
+__description("8-bit spill of 8-bit reg should assign ID")
+__success __retval(0)
+__naked void spill_8bit_of_8bit_ok(void)
+{
+ asm volatile (" \
+ /* Roll one bit to make the register inexact. */\
+ call %[bpf_get_prandom_u32]; \
+ r0 &= 0x80; \
+ /* 8-bit spill r0 to stack - should assign an ID. */\
+ *(u8*)(r10 - 8) = r0; \
+ /* 8-bit fill r1 from stack - should preserve the ID. */\
+ r1 = *(u8*)(r10 - 8); \
+ /* Compare r1 with another register to trigger find_equal_scalars.\
+ * Having one random bit is important here, otherwise the verifier cuts\
+ * the corners. \
+ */ \
+ r2 = 0; \
+ if r1 != r2 goto l0_%=; \
+ /* The result of this comparison is predefined. */\
+ if r0 == r2 goto l0_%=; \
+ /* Dead branch: the verifier should prune it. Do an invalid memory\
+ * access if the verifier follows it. \
+ */ \
+ r0 = *(u64*)(r9 + 0); \
+ exit; \
+l0_%=: r0 = 0; \
+ exit; \
+" :
+ : __imm(bpf_get_prandom_u32)
+ : __clobber_all);
+}
+
char _license[] SEC("license") = "GPL";
--
2.43.0
^ permalink raw reply related [flat|nested] 28+ messages in thread
* [PATCH bpf-next v2 10/15] bpf: Track spilled unbounded scalars
2024-01-08 20:51 [PATCH bpf-next v2 00/15] Improvements for tracking scalars in the BPF verifier Maxim Mikityanskiy
` (8 preceding siblings ...)
2024-01-08 20:52 ` [PATCH bpf-next v2 09/15] selftests/bpf: Test assigning " Maxim Mikityanskiy
@ 2024-01-08 20:52 ` Maxim Mikityanskiy
2024-01-12 19:10 ` Alexei Starovoitov
2024-01-08 20:52 ` [PATCH bpf-next v2 11/15] selftests/bpf: Test tracking " Maxim Mikityanskiy
` (5 subsequent siblings)
15 siblings, 1 reply; 28+ messages in thread
From: Maxim Mikityanskiy @ 2024-01-08 20:52 UTC (permalink / raw)
To: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Shung-Hsi Yu
Cc: John Fastabend, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, linux-kselftest, netdev,
Maxim Mikityanskiy
From: Maxim Mikityanskiy <maxim@isovalent.com>
Support the pattern where an unbounded scalar is spilled to the stack,
then boundary checks are performed on the src register, after which the
stack frame slot is refilled into a register.
Before this commit, the verifier didn't treat the src register and the
stack slot as related if the src register was an unbounded scalar. The
register state wasn't copied, the id wasn't preserved, and the stack
slot was marked as STACK_MISC. Subsequent boundary checks on the src
register wouldn't result in updating the boundaries of the spilled
variable on the stack.
After this commit, the verifier will preserve the bond between src and
dst even if src is unbounded, which makes it possible to do boundary
checks on src and refill dst later while still remembering its
boundaries. Such a pattern is sometimes generated by clang when
compiling complex long functions.
One test is adjusted to reflect that an untracked register is now marked
as precise at an earlier stage, and another test is adjusted to reflect
that unbounded scalars are now tracked.
Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
---
kernel/bpf/verifier.c | 7 +------
tools/testing/selftests/bpf/progs/verifier_spill_fill.c | 6 +++---
tools/testing/selftests/bpf/verifier/precise.c | 6 +++---
3 files changed, 7 insertions(+), 12 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 055fa8096a08..e7fff5f5aa1d 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4389,11 +4389,6 @@ static bool __is_scalar_unbounded(struct bpf_reg_state *reg)
reg->u32_min_value == 0 && reg->u32_max_value == U32_MAX;
}
-static bool register_is_bounded(struct bpf_reg_state *reg)
-{
- return reg->type == SCALAR_VALUE && !__is_scalar_unbounded(reg);
-}
-
static bool __is_pointer_value(bool allow_ptr_leaks,
const struct bpf_reg_state *reg)
{
@@ -4504,7 +4499,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
return err;
mark_stack_slot_scratched(env, spi);
- if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) && env->bpf_capable) {
+ if (reg && !(off % BPF_REG_SIZE) && reg->type == SCALAR_VALUE && env->bpf_capable) {
bool reg_value_fits;
reg_value_fits = get_reg_width(reg) <= BITS_PER_BYTE * size;
diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index b05aab925ee5..57eb70e100a3 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -452,9 +452,9 @@ l0_%=: r1 >>= 16; \
SEC("raw_tp")
__log_level(2)
__success
-__msg("fp-8=0m??mmmm")
-__msg("fp-16=00mm??mm")
-__msg("fp-24=00mm???m")
+__msg("fp-8=0m??scalar()")
+__msg("fp-16=00mm??scalar()")
+__msg("fp-24=00mm???scalar()")
__naked void spill_subregs_preserve_stack_zero(void)
{
asm volatile (
diff --git a/tools/testing/selftests/bpf/verifier/precise.c b/tools/testing/selftests/bpf/verifier/precise.c
index 8a2ff81d8350..0a9293a57211 100644
--- a/tools/testing/selftests/bpf/verifier/precise.c
+++ b/tools/testing/selftests/bpf/verifier/precise.c
@@ -183,10 +183,10 @@
.prog_type = BPF_PROG_TYPE_XDP,
.flags = BPF_F_TEST_STATE_FREQ,
.errstr = "mark_precise: frame0: last_idx 7 first_idx 7\
- mark_precise: frame0: parent state regs=r4 stack=:\
+ mark_precise: frame0: parent state regs=r4 stack=-8:\
mark_precise: frame0: last_idx 6 first_idx 4\
- mark_precise: frame0: regs=r4 stack= before 6: (b7) r0 = -1\
- mark_precise: frame0: regs=r4 stack= before 5: (79) r4 = *(u64 *)(r10 -8)\
+ mark_precise: frame0: regs=r4 stack=-8 before 6: (b7) r0 = -1\
+ mark_precise: frame0: regs=r4 stack=-8 before 5: (79) r4 = *(u64 *)(r10 -8)\
mark_precise: frame0: regs= stack=-8 before 4: (7b) *(u64 *)(r3 -8) = r0\
mark_precise: frame0: parent state regs=r0 stack=:\
mark_precise: frame0: last_idx 3 first_idx 3\
--
2.43.0
* [PATCH bpf-next v2 11/15] selftests/bpf: Test tracking spilled unbounded scalars
2024-01-08 20:51 [PATCH bpf-next v2 00/15] Improvements for tracking scalars in the BPF verifier Maxim Mikityanskiy
` (9 preceding siblings ...)
2024-01-08 20:52 ` [PATCH bpf-next v2 10/15] bpf: Track spilled unbounded scalars Maxim Mikityanskiy
@ 2024-01-08 20:52 ` Maxim Mikityanskiy
2024-01-08 20:52 ` [PATCH bpf-next v2 12/15] bpf: Preserve boundaries and track scalars on narrowing fill Maxim Mikityanskiy
` (4 subsequent siblings)
15 siblings, 0 replies; 28+ messages in thread
From: Maxim Mikityanskiy @ 2024-01-08 20:52 UTC (permalink / raw)
To: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Shung-Hsi Yu
Cc: John Fastabend, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, linux-kselftest, netdev,
Maxim Mikityanskiy
From: Maxim Mikityanskiy <maxim@isovalent.com>
The previous commit added tracking for unbounded scalars on spill. Add a
test case that checks the new functionality.
Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
---
.../selftests/bpf/progs/verifier_spill_fill.c | 27 +++++++++++++++++++
1 file changed, 27 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index 57eb70e100a3..cc6c5a3b464b 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -899,4 +899,31 @@ l0_%=: r0 = 0; \
: __clobber_all);
}
+SEC("xdp")
+__description("spill unbounded reg, then range check src")
+__success __retval(0)
+__naked void spill_unbounded(void)
+{
+ asm volatile (" \
+ /* Produce an unbounded scalar. */ \
+ call %[bpf_get_prandom_u32]; \
+ /* Spill r0 to stack. */ \
+ *(u64*)(r10 - 8) = r0; \
+ /* Boundary check on r0. */ \
+ if r0 > 16 goto l0_%=; \
+ /* Fill r0 from stack. */ \
+ r0 = *(u64*)(r10 - 8); \
+ /* Boundary check on r0 with predetermined result. */\
+ if r0 <= 16 goto l0_%=; \
+ /* Dead branch: the verifier should prune it. Do an invalid memory\
+ * access if the verifier follows it. \
+ */ \
+ r0 = *(u64*)(r9 + 0); \
+l0_%=: r0 = 0; \
+ exit; \
+" :
+ : __imm(bpf_get_prandom_u32)
+ : __clobber_all);
+}
+
char _license[] SEC("license") = "GPL";
--
2.43.0
* [PATCH bpf-next v2 12/15] bpf: Preserve boundaries and track scalars on narrowing fill
2024-01-08 20:51 [PATCH bpf-next v2 00/15] Improvements for tracking scalars in the BPF verifier Maxim Mikityanskiy
` (10 preceding siblings ...)
2024-01-08 20:52 ` [PATCH bpf-next v2 11/15] selftests/bpf: Test tracking " Maxim Mikityanskiy
@ 2024-01-08 20:52 ` Maxim Mikityanskiy
2024-01-09 23:51 ` Andrii Nakryiko
2024-01-08 20:52 ` [PATCH bpf-next v2 13/15] selftests/bpf: Add test cases for " Maxim Mikityanskiy
` (3 subsequent siblings)
15 siblings, 1 reply; 28+ messages in thread
From: Maxim Mikityanskiy @ 2024-01-08 20:52 UTC (permalink / raw)
To: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Shung-Hsi Yu
Cc: John Fastabend, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, linux-kselftest, netdev,
Maxim Mikityanskiy
From: Maxim Mikityanskiy <maxim@isovalent.com>
When the width of a fill is smaller than the width of the preceding
spill, the information about scalar boundaries can still be preserved,
as long as it's coerced to the right width (done by coerce_reg_to_size).
Moreover, if the actual value fits into the fill width, the ID can be
preserved as well for further tracking of equal scalars.
Implement the above improvements, which make narrowing fills behave the
same as narrowing spills and MOVs between registers.
Two tests are adjusted to accommodate endianness differences and to take
into account that a narrowing fill from the least significant bits is
now allowed.
reg_bounds_sync is added to coerce_reg_to_size to correctly adjust
umin/umax boundaries after the var_off truncation. For example, a 64-bit
value 0xXXXXXXXX00000000, when read as a 32-bit value, gets umin = 0,
umax = 0xFFFFFFFF, var_off = (0x0; 0xffffffff00000000), which needs to
be synced down to umax = 0, otherwise reg_bounds_sanity_check doesn't
pass.
Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
---
include/linux/bpf_verifier.h | 2 --
include/linux/filter.h | 12 ++++++++
kernel/bpf/verifier.c | 15 +++++++---
.../selftests/bpf/progs/verifier_spill_fill.c | 28 +++++++++++++------
4 files changed, 42 insertions(+), 15 deletions(-)
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index e11baecbde68..95ea7657f07e 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -239,8 +239,6 @@ enum bpf_stack_slot_type {
STACK_ITER,
};
-#define BPF_REG_SIZE 8 /* size of eBPF register in bytes */
-
#define BPF_REGMASK_ARGS ((1 << BPF_REG_1) | (1 << BPF_REG_2) | \
(1 << BPF_REG_3) | (1 << BPF_REG_4) | \
(1 << BPF_REG_5))
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 68fb6c8142fe..be784be7ed4e 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -39,6 +39,8 @@ struct sock_reuseport;
struct ctl_table;
struct ctl_table_header;
+#define BPF_REG_SIZE 8 /* size of eBPF register in bytes */
+
/* ArgX, context and stack frame pointer register positions. Note,
* Arg1, Arg2, Arg3, etc are used as argument mappings of function
* calls in BPF_CALL instruction.
@@ -881,6 +883,16 @@ bpf_ctx_narrow_access_offset(u32 off, u32 size, u32 size_default)
#define bpf_classic_proglen(fprog) (fprog->len * sizeof(fprog->filter[0]))
+static inline bool
+bpf_stack_narrow_access_ok(int off, int size, int spill_size)
+{
+#ifdef __BIG_ENDIAN
+ off -= spill_size - size;
+#endif
+
+ return !(off % BPF_REG_SIZE);
+}
+
static inline void bpf_prog_lock_ro(struct bpf_prog *fp)
{
#ifndef CONFIG_BPF_JIT_ALWAYS_ON
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index e7fff5f5aa1d..aeb3e198a5ea 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4774,7 +4774,8 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
if (dst_regno < 0)
return 0;
- if (!(off % BPF_REG_SIZE) && size == spill_size) {
+ if (size <= spill_size &&
+ bpf_stack_narrow_access_ok(off, size, spill_size)) {
/* The earlier check_reg_arg() has decided the
* subreg_def for this insn. Save it first.
*/
@@ -4782,6 +4783,12 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
copy_register_state(&state->regs[dst_regno], reg);
state->regs[dst_regno].subreg_def = subreg_def;
+
+ /* Break the relation on a narrowing fill.
+ * coerce_reg_to_size will adjust the boundaries.
+ */
+ if (get_reg_width(reg) > size * BITS_PER_BYTE)
+ state->regs[dst_regno].id = 0;
} else {
int spill_cnt = 0, zero_cnt = 0;
@@ -6057,10 +6064,10 @@ static void coerce_reg_to_size(struct bpf_reg_state *reg, int size)
* values are also truncated so we push 64-bit bounds into
* 32-bit bounds. Above were truncated < 32-bits already.
*/
- if (size < 4) {
+ if (size < 4)
__mark_reg32_unbounded(reg);
- reg_bounds_sync(reg);
- }
+
+ reg_bounds_sync(reg);
}
static void set_sext64_default_val(struct bpf_reg_state *reg, int size)
diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index cc6c5a3b464b..fab8ae9fe947 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -217,7 +217,7 @@ __naked void uninit_u32_from_the_stack(void)
SEC("tc")
__description("Spill a u32 const scalar. Refill as u16. Offset to skb->data")
-__failure __msg("invalid access to packet")
+__success __retval(0)
__naked void u16_offset_to_skb_data(void)
{
asm volatile (" \
@@ -225,19 +225,24 @@ __naked void u16_offset_to_skb_data(void)
r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \
w4 = 20; \
*(u32*)(r10 - 8) = r4; \
- r4 = *(u16*)(r10 - 8); \
+ r4 = *(u16*)(r10 - %[offset]); \
r0 = r2; \
- /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */\
+ /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=20 */\
r0 += r4; \
- /* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=umax=65535 */\
+ /* if (r0 > r3) R0=pkt,off=20 R2=pkt R3=pkt_end R4=20 */\
if r0 > r3 goto l0_%=; \
- /* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=20 */\
+ /* r0 = *(u32 *)r2 R0=pkt,off=20 R2=pkt R3=pkt_end R4=20 */\
r0 = *(u32*)(r2 + 0); \
l0_%=: r0 = 0; \
exit; \
" :
: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
- __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+ __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)),
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ __imm_const(offset, 8)
+#else
+ __imm_const(offset, 6)
+#endif
: __clobber_all);
}
@@ -268,7 +273,7 @@ l0_%=: r0 = 0; \
}
SEC("tc")
-__description("Spill a u32 const scalar. Refill as u16 from fp-6. Offset to skb->data")
+__description("Spill a u32 const scalar. Refill as u16 from MSB. Offset to skb->data")
__failure __msg("invalid access to packet")
__naked void _6_offset_to_skb_data(void)
{
@@ -277,7 +282,7 @@ __naked void _6_offset_to_skb_data(void)
r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \
w4 = 20; \
*(u32*)(r10 - 8) = r4; \
- r4 = *(u16*)(r10 - 6); \
+ r4 = *(u16*)(r10 - %[offset]); \
r0 = r2; \
/* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */\
r0 += r4; \
@@ -289,7 +294,12 @@ l0_%=: r0 = 0; \
exit; \
" :
: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
- __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+ __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)),
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ __imm_const(offset, 6)
+#else
+ __imm_const(offset, 8)
+#endif
: __clobber_all);
}
--
2.43.0
* [PATCH bpf-next v2 13/15] selftests/bpf: Add test cases for narrowing fill
2024-01-08 20:51 [PATCH bpf-next v2 00/15] Improvements for tracking scalars in the BPF verifier Maxim Mikityanskiy
` (11 preceding siblings ...)
2024-01-08 20:52 ` [PATCH bpf-next v2 12/15] bpf: Preserve boundaries and track scalars on narrowing fill Maxim Mikityanskiy
@ 2024-01-08 20:52 ` Maxim Mikityanskiy
2024-01-09 23:55 ` Andrii Nakryiko
2024-01-08 20:52 ` [PATCH bpf-next v2 14/15] bpf: Optimize state pruning for spilled scalars Maxim Mikityanskiy
` (2 subsequent siblings)
15 siblings, 1 reply; 28+ messages in thread
From: Maxim Mikityanskiy @ 2024-01-08 20:52 UTC (permalink / raw)
To: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Shung-Hsi Yu
Cc: John Fastabend, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, linux-kselftest, netdev,
Maxim Mikityanskiy
From: Maxim Mikityanskiy <maxim@isovalent.com>
The previous commit made it possible to preserve boundaries and track
IDs of scalars on narrowing fills. Add test cases for that pattern.
Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
---
.../selftests/bpf/progs/verifier_spill_fill.c | 108 ++++++++++++++++++
1 file changed, 108 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index fab8ae9fe947..3764111d190d 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -936,4 +936,112 @@ l0_%=: r0 = 0; \
: __clobber_all);
}
+SEC("xdp")
+__description("32-bit fill after 64-bit spill")
+__success __retval(0)
+__naked void fill_32bit_after_spill_64bit(void)
+{
+ asm volatile(" \
+ /* Randomize the upper 32 bits. */ \
+ call %[bpf_get_prandom_u32]; \
+ r0 <<= 32; \
+ /* 64-bit spill r0 to stack. */ \
+ *(u64*)(r10 - 8) = r0; \
+ /* 32-bit fill r0 from stack. */ \
+ r0 = *(u32*)(r10 - %[offset]); \
+ /* Boundary check on r0 with predetermined result. */\
+ if r0 == 0 goto l0_%=; \
+ /* Dead branch: the verifier should prune it. Do an invalid memory\
+ * access if the verifier follows it. \
+ */ \
+ r0 = *(u64*)(r9 + 0); \
+l0_%=: exit; \
+" :
+ : __imm(bpf_get_prandom_u32),
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ __imm_const(offset, 8)
+#else
+ __imm_const(offset, 4)
+#endif
+ : __clobber_all);
+}
+
+SEC("xdp")
+__description("32-bit fill after 64-bit spill of 32-bit value should preserve ID")
+__success __retval(0)
+__naked void fill_32bit_after_spill_64bit_preserve_id(void)
+{
+ asm volatile (" \
+ /* Randomize the lower 32 bits. */ \
+ call %[bpf_get_prandom_u32]; \
+ w0 &= 0xffffffff; \
+ /* 64-bit spill r0 to stack - should assign an ID. */\
+ *(u64*)(r10 - 8) = r0; \
+ /* 32-bit fill r1 from stack - should preserve the ID. */\
+ r1 = *(u32*)(r10 - %[offset]); \
+ /* Compare r1 with another register to trigger find_equal_scalars. */\
+ r2 = 0; \
+ if r1 != r2 goto l0_%=; \
+ /* The result of this comparison is predefined. */\
+ if r0 == r2 goto l0_%=; \
+ /* Dead branch: the verifier should prune it. Do an invalid memory\
+ * access if the verifier follows it. \
+ */ \
+ r0 = *(u64*)(r9 + 0); \
+ exit; \
+l0_%=: r0 = 0; \
+ exit; \
+" :
+ : __imm(bpf_get_prandom_u32),
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ __imm_const(offset, 8)
+#else
+ __imm_const(offset, 4)
+#endif
+ : __clobber_all);
+}
+
+SEC("xdp")
+__description("32-bit fill after 64-bit spill should clear ID")
+__failure __msg("math between ctx pointer and 4294967295 is not allowed")
+__naked void fill_32bit_after_spill_64bit_clear_id(void)
+{
+ asm volatile (" \
+ r6 = r1; \
+ /* Roll one bit to force the verifier to track both branches. */\
+ call %[bpf_get_prandom_u32]; \
+ r0 &= 0x8; \
+ /* Put a large number into r1. */ \
+ r1 = 0xffffffff; \
+ r1 <<= 32; \
+ r1 += r0; \
+ /* 64-bit spill r1 to stack - should assign an ID. */\
+ *(u64*)(r10 - 8) = r1; \
+ /* 32-bit fill r2 from stack - should clear the ID. */\
+ r2 = *(u32*)(r10 - %[offset]); \
+ /* Compare r2 with another register to trigger find_equal_scalars.\
+ * Having one random bit is important here, otherwise the verifier cuts\
+ * the corners. If the ID was mistakenly preserved on fill, this would\
+ * cause the verifier to think that r1 is also equal to zero in one of\
+ * the branches, and equal to eight on the other branch.\
+ */ \
+ r3 = 0; \
+ if r2 != r3 goto l0_%=; \
+l0_%=: r1 >>= 32; \
+ /* The verifier shouldn't propagate r2's range to r1, so it should\
+ * still remember r1 = 0xffffffff and reject the below.\
+ */ \
+ r6 += r1; \
+ r0 = *(u32*)(r6 + 0); \
+ exit; \
+" :
+ : __imm(bpf_get_prandom_u32),
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ __imm_const(offset, 8)
+#else
+ __imm_const(offset, 4)
+#endif
+ : __clobber_all);
+}
+
char _license[] SEC("license") = "GPL";
--
2.43.0
* [PATCH bpf-next v2 14/15] bpf: Optimize state pruning for spilled scalars
2024-01-08 20:51 [PATCH bpf-next v2 00/15] Improvements for tracking scalars in the BPF verifier Maxim Mikityanskiy
` (12 preceding siblings ...)
2024-01-08 20:52 ` [PATCH bpf-next v2 13/15] selftests/bpf: Add test cases for " Maxim Mikityanskiy
@ 2024-01-08 20:52 ` Maxim Mikityanskiy
2024-01-10 0:22 ` Andrii Nakryiko
2024-01-08 20:52 ` [PATCH bpf-next v2 15/15] selftests/bpf: states pruning checks for scalar vs STACK_{MISC,ZERO} Maxim Mikityanskiy
2024-01-12 3:00 ` [PATCH bpf-next v2 00/15] Improvements for tracking scalars in the BPF verifier patchwork-bot+netdevbpf
15 siblings, 1 reply; 28+ messages in thread
From: Maxim Mikityanskiy @ 2024-01-08 20:52 UTC (permalink / raw)
To: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Shung-Hsi Yu
Cc: John Fastabend, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, linux-kselftest, netdev
From: Eduard Zingerman <eddyz87@gmail.com>
The changes for scalar ID tracking of spilled unbound scalars led to a
certain verification performance regression. This commit mitigates the
regression by exploiting the following properties maintained by
check_stack_read_fixed_off():
- a mix of STACK_MISC, STACK_ZERO and STACK_INVALID marks is read as an
unbounded scalar register;
- a spi with all slots marked STACK_ZERO is read as a scalar register
with value zero.
This commit modifies stacksafe() to consider the situations above
equivalent.
Veristat results after this patch show significant gains:
$ ./veristat -e file,prog,states -f '!states_pct<10' -f '!states_b<10' -C not-opt after
File Program States (A) States (B) States (DIFF)
---------------- -------- ---------- ---------- ----------------
pyperf180.bpf.o on_event 10456 8422 -2034 (-19.45%)
pyperf600.bpf.o on_event 37319 22519 -14800 (-39.66%)
strobemeta.bpf.o on_event 13435 4703 -8732 (-64.99%)
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
kernel/bpf/verifier.c | 83 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 83 insertions(+)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index aeb3e198a5ea..cb82f8d4226f 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1170,6 +1170,12 @@ static void mark_stack_slot_misc(struct bpf_verifier_env *env, u8 *stype)
*stype = STACK_MISC;
}
+static bool is_spilled_scalar_reg64(const struct bpf_stack_state *stack)
+{
+ return stack->slot_type[0] == STACK_SPILL &&
+ stack->spilled_ptr.type == SCALAR_VALUE;
+}
+
static void scrub_spilled_slot(u8 *stype)
{
if (*stype != STACK_INVALID)
@@ -16459,11 +16465,45 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
}
}
+static bool is_stack_zero64(struct bpf_stack_state *stack)
+{
+ u32 i;
+
+ for (i = 0; i < ARRAY_SIZE(stack->slot_type); ++i)
+ if (stack->slot_type[i] != STACK_ZERO)
+ return false;
+ return true;
+}
+
+static bool is_stack_unbound_slot64(struct bpf_verifier_env *env,
+ struct bpf_stack_state *stack)
+{
+ u32 i;
+
+ for (i = 0; i < ARRAY_SIZE(stack->slot_type); ++i)
+ if (stack->slot_type[i] != STACK_ZERO &&
+ stack->slot_type[i] != STACK_MISC &&
+ (!env->allow_uninit_stack || stack->slot_type[i] != STACK_INVALID))
+ return false;
+ return true;
+}
+
+static bool is_spilled_unbound_scalar_reg64(struct bpf_stack_state *stack)
+{
+ return is_spilled_scalar_reg64(stack) && __is_scalar_unbounded(&stack->spilled_ptr);
+}
+
static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
struct bpf_func_state *cur, struct bpf_idmap *idmap, bool exact)
{
+ struct bpf_reg_state unbound_reg = {};
+ struct bpf_reg_state zero_reg = {};
int i, spi;
+ __mark_reg_unknown(env, &unbound_reg);
+ __mark_reg_const_zero(env, &zero_reg);
+ zero_reg.precise = true;
+
/* walk slots of the explored stack and ignore any additional
* slots in the current stack, since explored(safe) state
* didn't use them
@@ -16484,6 +16524,49 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
continue;
}
+ /* load of stack value with all MISC and ZERO slots produces unbounded
+ * scalar value, call regsafe to ensure scalar ids are compared.
+ */
+ if (is_spilled_unbound_scalar_reg64(&old->stack[spi]) &&
+ is_stack_unbound_slot64(env, &cur->stack[spi])) {
+ i += BPF_REG_SIZE - 1;
+ if (!regsafe(env, &old->stack[spi].spilled_ptr, &unbound_reg,
+ idmap, exact))
+ return false;
+ continue;
+ }
+
+ if (is_stack_unbound_slot64(env, &old->stack[spi]) &&
+ is_spilled_unbound_scalar_reg64(&cur->stack[spi])) {
+ i += BPF_REG_SIZE - 1;
+ if (!regsafe(env, &unbound_reg, &cur->stack[spi].spilled_ptr,
+ idmap, exact))
+ return false;
+ continue;
+ }
+
+ /* load of stack value with all ZERO slots produces scalar value 0,
+ * call regsafe to ensure scalar ids are compared and precision
+ * flags are taken into account.
+ */
+ if (is_spilled_scalar_reg64(&old->stack[spi]) &&
+ is_stack_zero64(&cur->stack[spi])) {
+ if (!regsafe(env, &old->stack[spi].spilled_ptr, &zero_reg,
+ idmap, exact))
+ return false;
+ i += BPF_REG_SIZE - 1;
+ continue;
+ }
+
+ if (is_stack_zero64(&old->stack[spi]) &&
+ is_spilled_scalar_reg64(&cur->stack[spi])) {
+ if (!regsafe(env, &zero_reg, &cur->stack[spi].spilled_ptr,
+ idmap, exact))
+ return false;
+ i += BPF_REG_SIZE - 1;
+ continue;
+ }
+
if (old->stack[spi].slot_type[i % BPF_REG_SIZE] == STACK_INVALID)
continue;
--
2.43.0
* [PATCH bpf-next v2 15/15] selftests/bpf: states pruning checks for scalar vs STACK_{MISC,ZERO}
2024-01-08 20:51 [PATCH bpf-next v2 00/15] Improvements for tracking scalars in the BPF verifier Maxim Mikityanskiy
` (13 preceding siblings ...)
2024-01-08 20:52 ` [PATCH bpf-next v2 14/15] bpf: Optimize state pruning for spilled scalars Maxim Mikityanskiy
@ 2024-01-08 20:52 ` Maxim Mikityanskiy
2024-01-10 0:27 ` Andrii Nakryiko
2024-01-12 3:00 ` [PATCH bpf-next v2 00/15] Improvements for tracking scalars in the BPF verifier patchwork-bot+netdevbpf
15 siblings, 1 reply; 28+ messages in thread
From: Maxim Mikityanskiy @ 2024-01-08 20:52 UTC (permalink / raw)
To: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Shung-Hsi Yu
Cc: John Fastabend, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, linux-kselftest, netdev
From: Eduard Zingerman <eddyz87@gmail.com>
Check that stacksafe() considers the following old vs cur stack spill
state combinations equivalent:
- spill of unbound scalar vs combination of STACK_{MISC,ZERO,INVALID}
- STACK_MISC vs spill of unbound scalar
- spill of scalar 0 vs STACK_ZERO
- STACK_ZERO vs spill of scalar 0
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
.../selftests/bpf/progs/verifier_spill_fill.c | 192 ++++++++++++++++++
1 file changed, 192 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index 3764111d190d..3cd3fe30357f 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -1044,4 +1044,196 @@ l0_%=: r1 >>= 32; \
: __clobber_all);
}
+/* stacksafe(): check if spill of unbound scalar in old state is
+ * considered equivalent to any state of the spill in the current state.
+ *
+ * On the first verification path an unbound scalar is written for
+ * fp-8 and later marked precise.
+ * On the second verification path a mix of STACK_MISC/ZERO/INVALID is
+ * written to fp-8. These should be considered equivalent.
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("10: (79) r0 = *(u64 *)(r10 -8)")
+__msg("10: safe")
+__msg("processed 16 insns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void old_unbound_scalar_vs_cur_anything(void)
+{
+ asm volatile(
+ /* get a random value for branching */
+ "call %[bpf_ktime_get_ns];"
+ "r7 = r0;"
+ /* get a random value for storing at fp-8 */
+ "call %[bpf_ktime_get_ns];"
+ "if r7 == 0 goto 1f;"
+ /* unbound scalar written to fp-8 */
+ "*(u64*)(r10 - 8) = r0;"
+ "goto 2f;"
+"1:"
+ /* mark fp-8 as mix of STACK_MISC/ZERO/INVALID */
+ "r1 = 0;"
+ "*(u8*)(r10 - 8) = r0;"
+ "*(u8*)(r10 - 7) = r1;"
+ /* fp-2..fp-6 remain STACK_INVALID */
+ "*(u8*)(r10 - 1) = r0;"
+"2:"
+ /* read fp-8 and force it precise, should be considered safe
+ * on second visit
+ */
+ "r0 = *(u64*)(r10 - 8);"
+ "r0 &= 0xff;"
+ "r1 = r10;"
+ "r1 += r0;"
+ "exit;"
+ :
+ : __imm(bpf_ktime_get_ns)
+ : __clobber_all);
+}
+
+/* stacksafe(): check if STACK_MISC in old state is considered
+ * equivalent to stack spill of unbound scalar in cur state.
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("8: (79) r0 = *(u64 *)(r10 -8) ; R0_w=scalar(id=1) R10=fp0 fp-8=scalar(id=1)")
+__msg("8: safe")
+__msg("processed 11 insns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void old_unbound_scalar_vs_cur_stack_misc(void)
+{
+ asm volatile(
+ /* get a random value for branching */
+ "call %[bpf_ktime_get_ns];"
+ "if r0 == 0 goto 1f;"
+ /* conjure unbound scalar at fp-8 */
+ "call %[bpf_ktime_get_ns];"
+ "*(u64*)(r10 - 8) = r0;"
+ "goto 2f;"
+"1:"
+ /* conjure STACK_MISC at fp-8 */
+ "call %[bpf_ktime_get_ns];"
+ "*(u64*)(r10 - 8) = r0;"
+ "*(u32*)(r10 - 4) = r0;"
+"2:"
+ /* read fp-8, should be considered safe on second visit */
+ "r0 = *(u64*)(r10 - 8);"
+ "exit;"
+ :
+ : __imm(bpf_ktime_get_ns)
+ : __clobber_all);
+}
+
+/* stacksafe(): check if stack spill of unbound scalar in old state is
+ * considered equivalent to STACK_MISC in cur state.
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("8: (79) r0 = *(u64 *)(r10 -8) ; R0_w=scalar() R10=fp0 fp-8=mmmmmmmm")
+__msg("8: safe")
+__msg("processed 11 insns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void old_stack_misc_vs_cur_unbound_scalar(void)
+{
+ asm volatile(
+ /* get a random value for branching */
+ "call %[bpf_ktime_get_ns];"
+ "if r0 == 0 goto 1f;"
+ /* conjure STACK_MISC at fp-8 */
+ "call %[bpf_ktime_get_ns];"
+ "*(u64*)(r10 - 8) = r0;"
+ "*(u32*)(r10 - 4) = r0;"
+ "goto 2f;"
+"1:"
+ /* conjure unbound scalar at fp-8 */
+ "call %[bpf_ktime_get_ns];"
+ "*(u64*)(r10 - 8) = r0;"
+"2:"
+ /* read fp-8, should be considered safe on second visit */
+ "r0 = *(u64*)(r10 - 8);"
+ "exit;"
+ :
+ : __imm(bpf_ktime_get_ns)
+ : __clobber_all);
+}
+
+/* stacksafe(): check if spill of register with value 0 in old state
+ * is considered equivalent to STACK_ZERO.
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("9: (79) r0 = *(u64 *)(r10 -8)")
+__msg("9: safe")
+__msg("processed 15 insns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void old_spill_zero_vs_stack_zero(void)
+{
+ asm volatile(
+ /* get a random value for branching */
+ "call %[bpf_ktime_get_ns];"
+ "r7 = r0;"
+ /* get a random value for storing at fp-8 */
+ "call %[bpf_ktime_get_ns];"
+ "if r7 == 0 goto 1f;"
+ /* conjure spilled register with value 0 at fp-8 */
+ "*(u64*)(r10 - 8) = r0;"
+ "if r0 != 0 goto 3f;"
+ "goto 2f;"
+"1:"
+ /* conjure STACK_ZERO at fp-8 */
+ "r1 = 0;"
+ "*(u64*)(r10 - 8) = r1;"
+"2:"
+ /* read fp-8 and force it precise, should be considered safe
+ * on second visit
+ */
+ "r0 = *(u64*)(r10 - 8);"
+ "r1 = r10;"
+ "r1 += r0;"
+"3:"
+ "exit;"
+ :
+ : __imm(bpf_ktime_get_ns)
+ : __clobber_all);
+}
+
+/* stacksafe(): similar to old_spill_zero_vs_stack_zero() but the
+ * other way around: check if STACK_ZERO is considered equivalent to
+ * spill of register with value 0.
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("8: (79) r0 = *(u64 *)(r10 -8)")
+__msg("8: safe")
+__msg("processed 14 insns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void old_stack_zero_vs_spill_zero(void)
+{
+ asm volatile(
+ /* get a random value for branching */
+ "call %[bpf_ktime_get_ns];"
+ "if r0 == 0 goto 1f;"
+ /* conjure STACK_ZERO at fp-8 */
+ "r1 = 0;"
+ "*(u64*)(r10 - 8) = r1;"
+ "goto 2f;"
+"1:"
+ /* conjure spilled register with value 0 at fp-8 */
+ "call %[bpf_ktime_get_ns];"
+ "*(u64*)(r10 - 8) = r0;"
+ "if r0 != 0 goto 3f;"
+"2:"
+ /* read fp-8 and force it precise, should be considered safe
+ * on second visit
+ */
+ "r0 = *(u64*)(r10 - 8);"
+ "r1 = r10;"
+ "r1 += r0;"
+"3:"
+ "exit;"
+ :
+ : __imm(bpf_ktime_get_ns)
+ : __clobber_all);
+}
+
char _license[] SEC("license") = "GPL";
--
2.43.0
^ permalink raw reply related [flat|nested] 28+ messages in thread
* Re: [PATCH bpf-next v2 09/15] selftests/bpf: Test assigning ID to scalars on spill
2024-01-08 20:52 ` [PATCH bpf-next v2 09/15] selftests/bpf: Test assigning " Maxim Mikityanskiy
@ 2024-01-09 23:34 ` Andrii Nakryiko
0 siblings, 0 replies; 28+ messages in thread
From: Andrii Nakryiko @ 2024-01-09 23:34 UTC (permalink / raw)
To: Maxim Mikityanskiy
Cc: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Shung-Hsi Yu, John Fastabend, Martin KaFai Lau,
Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo,
Jiri Olsa, Mykola Lysenko, Shuah Khan, David S. Miller,
Jakub Kicinski, Jesper Dangaard Brouer, bpf, linux-kselftest,
netdev, Maxim Mikityanskiy
On Mon, Jan 8, 2024 at 12:53 PM Maxim Mikityanskiy <maxtram95@gmail.com> wrote:
>
> From: Maxim Mikityanskiy <maxim@isovalent.com>
>
> The previous commit implemented assigning IDs to registers holding
> scalars before spill. Add the test cases to check the new functionality.
>
> Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
> Acked-by: Eduard Zingerman <eddyz87@gmail.com>
> ---
> .../selftests/bpf/progs/verifier_spill_fill.c | 133 ++++++++++++++++++
> 1 file changed, 133 insertions(+)
>
> diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> index f303ac19cf41..b05aab925ee5 100644
> --- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> +++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> @@ -766,4 +766,137 @@ l0_%=: r0 = 0; \
> : __clobber_all);
> }
>
[...]
> +
> +SEC("xdp")
> +__description("8-bit spill of 8-bit reg should assign ID")
> +__success __retval(0)
> +__naked void spill_8bit_of_8bit_ok(void)
> +{
> + asm volatile (" \
> + /* Roll one bit to make the register inexact. */\
> + call %[bpf_get_prandom_u32]; \
> + r0 &= 0x80; \
> + /* 8-bit spill r0 to stack - should assign an ID. */\
> + *(u8*)(r10 - 8) = r0; \
> + /* 8-bit fill r1 from stack - should preserve the ID. */\
> + r1 = *(u8*)(r10 - 8); \
> + /* Compare r1 with another register to trigger find_equal_scalars.\
> + * Having one random bit is important here, otherwise the verifier cuts\
> + * the corners. \
> + */ \
> + r2 = 0; \
> + if r1 != r2 goto l0_%=; \
> + /* The result of this comparison is predefined. */\
> + if r0 == r2 goto l0_%=; \
> + /* Dead branch: the verifier should prune it. Do an invalid memory\
> + * access if the verifier follows it. \
> + */ \
> + r0 = *(u64*)(r9 + 0); \
> + exit; \
> +l0_%=: r0 = 0; \
> + exit; \
> +" :
> + : __imm(bpf_get_prandom_u32)
> + : __clobber_all);
> +}
> +
Can you add a test where we spill a register of one size and fill a
different size? And what should the behavior be? Should we or should
we not preserve linked IDs in such a situation?
> char _license[] SEC("license") = "GPL";
> --
> 2.43.0
>
* Re: [PATCH bpf-next v2 12/15] bpf: Preserve boundaries and track scalars on narrowing fill
2024-01-08 20:52 ` [PATCH bpf-next v2 12/15] bpf: Preserve boundaries and track scalars on narrowing fill Maxim Mikityanskiy
@ 2024-01-09 23:51 ` Andrii Nakryiko
0 siblings, 0 replies; 28+ messages in thread
From: Andrii Nakryiko @ 2024-01-09 23:51 UTC (permalink / raw)
To: Maxim Mikityanskiy
Cc: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Shung-Hsi Yu, John Fastabend, Martin KaFai Lau,
Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo,
Jiri Olsa, Mykola Lysenko, Shuah Khan, David S. Miller,
Jakub Kicinski, Jesper Dangaard Brouer, bpf, linux-kselftest,
netdev, Maxim Mikityanskiy
On Mon, Jan 8, 2024 at 12:53 PM Maxim Mikityanskiy <maxtram95@gmail.com> wrote:
>
> From: Maxim Mikityanskiy <maxim@isovalent.com>
>
> When the width of a fill is smaller than the width of the preceding
> spill, the information about scalar boundaries can still be preserved,
> as long as it's coerced to the right width (done by coerce_reg_to_size).
> Even further, if the actual value fits into the fill width, the ID can
> be preserved as well for further tracking of equal scalars.
>
> Implement the above improvements, which makes narrowing fills behave the
> same as narrowing spills and MOVs between registers.
>
> Two tests are adjusted to accommodate for endianness differences and to
> take into account that it's now allowed to do a narrowing fill from the
> least significant bits.
>
> reg_bounds_sync is added to coerce_reg_to_size to correctly adjust
> umin/umax boundaries after the var_off truncation. For example, a 64-bit
> value 0xXXXXXXXX00000000, when read as a 32-bit value, gets umin = 0, umax =
> 0xFFFFFFFF, var_off = (0x0; 0xffffffff00000000), which needs to be
> synced down to umax = 0, otherwise reg_bounds_sanity_check doesn't pass.
>
> Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
> ---
> include/linux/bpf_verifier.h | 2 --
> include/linux/filter.h | 12 ++++++++
> kernel/bpf/verifier.c | 15 +++++++---
> .../selftests/bpf/progs/verifier_spill_fill.c | 28 +++++++++++++------
> 4 files changed, 42 insertions(+), 15 deletions(-)
>
> diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
> index e11baecbde68..95ea7657f07e 100644
> --- a/include/linux/bpf_verifier.h
> +++ b/include/linux/bpf_verifier.h
> @@ -239,8 +239,6 @@ enum bpf_stack_slot_type {
> STACK_ITER,
> };
>
> -#define BPF_REG_SIZE 8 /* size of eBPF register in bytes */
> -
> #define BPF_REGMASK_ARGS ((1 << BPF_REG_1) | (1 << BPF_REG_2) | \
> (1 << BPF_REG_3) | (1 << BPF_REG_4) | \
> (1 << BPF_REG_5))
> diff --git a/include/linux/filter.h b/include/linux/filter.h
> index 68fb6c8142fe..be784be7ed4e 100644
> --- a/include/linux/filter.h
> +++ b/include/linux/filter.h
> @@ -39,6 +39,8 @@ struct sock_reuseport;
> struct ctl_table;
> struct ctl_table_header;
>
> +#define BPF_REG_SIZE 8 /* size of eBPF register in bytes */
> +
> /* ArgX, context and stack frame pointer register positions. Note,
> * Arg1, Arg2, Arg3, etc are used as argument mappings of function
> * calls in BPF_CALL instruction.
> @@ -881,6 +883,16 @@ bpf_ctx_narrow_access_offset(u32 off, u32 size, u32 size_default)
>
> #define bpf_classic_proglen(fprog) (fprog->len * sizeof(fprog->filter[0]))
>
> +static inline bool
> +bpf_stack_narrow_access_ok(int off, int size, int spill_size)
this is used by verifier.c, right? So why not add this to bpf_verifier.h?
nit: given we have spill_size, should we s/size/fill_size/ for symmetry?
> +{
> +#ifdef __BIG_ENDIAN
> + off -= spill_size - size;
> +#endif
> +
> + return !(off % BPF_REG_SIZE);
> +}
> +
> static inline void bpf_prog_lock_ro(struct bpf_prog *fp)
> {
> #ifndef CONFIG_BPF_JIT_ALWAYS_ON
[...]
* Re: [PATCH bpf-next v2 13/15] selftests/bpf: Add test cases for narrowing fill
2024-01-08 20:52 ` [PATCH bpf-next v2 13/15] selftests/bpf: Add test cases for " Maxim Mikityanskiy
@ 2024-01-09 23:55 ` Andrii Nakryiko
0 siblings, 0 replies; 28+ messages in thread
From: Andrii Nakryiko @ 2024-01-09 23:55 UTC (permalink / raw)
To: Maxim Mikityanskiy
Cc: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Shung-Hsi Yu, John Fastabend, Martin KaFai Lau,
Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo,
Jiri Olsa, Mykola Lysenko, Shuah Khan, David S. Miller,
Jakub Kicinski, Jesper Dangaard Brouer, bpf, linux-kselftest,
netdev, Maxim Mikityanskiy
On Mon, Jan 8, 2024 at 12:53 PM Maxim Mikityanskiy <maxtram95@gmail.com> wrote:
>
> From: Maxim Mikityanskiy <maxim@isovalent.com>
>
> The previous commit allowed preserving boundaries and tracking IDs of
> scalars on narrowing fills. Add test cases for that pattern.
>
> Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
> Acked-by: Eduard Zingerman <eddyz87@gmail.com>
> ---
> .../selftests/bpf/progs/verifier_spill_fill.c | 108 ++++++++++++++++++
> 1 file changed, 108 insertions(+)
>
> diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> index fab8ae9fe947..3764111d190d 100644
> --- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> +++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> @@ -936,4 +936,112 @@ l0_%=: r0 = 0; \
> : __clobber_all);
> }
>
> +SEC("xdp")
> +__description("32-bit fill after 64-bit spill")
> +__success __retval(0)
> +__naked void fill_32bit_after_spill_64bit(void)
I guess these tests are an answer to my question about mixing
spill/fill sizes on an earlier patch (so disregard that)
> +{
> + asm volatile(" \
> + /* Randomize the upper 32 bits. */ \
> + call %[bpf_get_prandom_u32]; \
> + r0 <<= 32; \
> + /* 64-bit spill r0 to stack. */ \
> + *(u64*)(r10 - 8) = r0; \
> + /* 32-bit fill r0 from stack. */ \
> + r0 = *(u32*)(r10 - %[offset]); \
have you considered doing the BYTE_ORDER check right here and having
the offset embedded in the assembly instruction directly:
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
r0 = *(u32*)(r10 - 8);
#else
r0 = *(u32*)(r10 - 4);
#endif
It's a bit less jumping around the code when reading. And it's kind of
obvious that this is endianness-dependent, without jumping to the
definition of %[offset]?
> + /* Boundary check on r0 with predetermined result. */\
> + if r0 == 0 goto l0_%=; \
> + /* Dead branch: the verifier should prune it. Do an invalid memory\
> + * access if the verifier follows it. \
> + */ \
> + r0 = *(u64*)(r9 + 0); \
> +l0_%=: exit; \
> +" :
> + : __imm(bpf_get_prandom_u32),
> +#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
> + __imm_const(offset, 8)
> +#else
> + __imm_const(offset, 4)
> +#endif
> + : __clobber_all);
> +}
> +
[...]
* Re: [PATCH bpf-next v2 14/15] bpf: Optimize state pruning for spilled scalars
2024-01-08 20:52 ` [PATCH bpf-next v2 14/15] bpf: Optimize state pruning for spilled scalars Maxim Mikityanskiy
@ 2024-01-10 0:22 ` Andrii Nakryiko
2024-01-10 21:04 ` Eduard Zingerman
0 siblings, 1 reply; 28+ messages in thread
From: Andrii Nakryiko @ 2024-01-10 0:22 UTC (permalink / raw)
To: Maxim Mikityanskiy
Cc: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Shung-Hsi Yu, John Fastabend, Martin KaFai Lau,
Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo,
Jiri Olsa, Mykola Lysenko, Shuah Khan, David S. Miller,
Jakub Kicinski, Jesper Dangaard Brouer, bpf, linux-kselftest,
netdev
On Mon, Jan 8, 2024 at 12:53 PM Maxim Mikityanskiy <maxtram95@gmail.com> wrote:
>
> From: Eduard Zingerman <eddyz87@gmail.com>
>
> Changes for scalar ID tracking of spilled unbound scalars lead to
> a certain verification performance regression. This commit mitigates the
> regression by exploiting the following properties maintained by
> check_stack_read_fixed_off():
> - a mix of STACK_MISC, STACK_ZERO and STACK_INVALID marks is read as
> unbounded scalar register;
> - spi with all slots marked STACK_ZERO is read as scalar register with
> value zero.
>
> This commit modifies stacksafe() to consider situations above
> equivalent.
>
> Veristat results after this patch show significant gains:
>
> $ ./veristat -e file,prog,states -f '!states_pct<10' -f '!states_b<10' -C not-opt after
> File Program States (A) States (B) States (DIFF)
> ---------------- -------- ---------- ---------- ----------------
> pyperf180.bpf.o on_event 10456 8422 -2034 (-19.45%)
> pyperf600.bpf.o on_event 37319 22519 -14800 (-39.66%)
> strobemeta.bpf.o on_event 13435 4703 -8732 (-64.99%)
>
> Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> ---
> kernel/bpf/verifier.c | 83 +++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 83 insertions(+)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index aeb3e198a5ea..cb82f8d4226f 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -1170,6 +1170,12 @@ static void mark_stack_slot_misc(struct bpf_verifier_env *env, u8 *stype)
> *stype = STACK_MISC;
> }
>
> +static bool is_spilled_scalar_reg64(const struct bpf_stack_state *stack)
> +{
> + return stack->slot_type[0] == STACK_SPILL &&
> + stack->spilled_ptr.type == SCALAR_VALUE;
> +}
> +
> static void scrub_spilled_slot(u8 *stype)
> {
> if (*stype != STACK_INVALID)
> @@ -16459,11 +16465,45 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
> }
> }
>
> +static bool is_stack_zero64(struct bpf_stack_state *stack)
> +{
> + u32 i;
> +
> + for (i = 0; i < ARRAY_SIZE(stack->slot_type); ++i)
> + if (stack->slot_type[i] != STACK_ZERO)
> + return false;
> + return true;
> +}
> +
> +static bool is_stack_unbound_slot64(struct bpf_verifier_env *env,
> + struct bpf_stack_state *stack)
> +{
> + u32 i;
> +
> + for (i = 0; i < ARRAY_SIZE(stack->slot_type); ++i)
> + if (stack->slot_type[i] != STACK_ZERO &&
> + stack->slot_type[i] != STACK_MISC &&
> + (!env->allow_uninit_stack || stack->slot_type[i] != STACK_INVALID))
> + return false;
> + return true;
> +}
> +
> +static bool is_spilled_unbound_scalar_reg64(struct bpf_stack_state *stack)
> +{
> + return is_spilled_scalar_reg64(stack) && __is_scalar_unbounded(&stack->spilled_ptr);
> +}
> +
> static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
> struct bpf_func_state *cur, struct bpf_idmap *idmap, bool exact)
> {
> + struct bpf_reg_state unbound_reg = {};
> + struct bpf_reg_state zero_reg = {};
> int i, spi;
>
> + __mark_reg_unknown(env, &unbound_reg);
> + __mark_reg_const_zero(env, &zero_reg);
> + zero_reg.precise = true;
these are immutable, right? Would it make sense to set them up just
once as static variables instead of initializing on each check?
> +
> /* walk slots of the explored stack and ignore any additional
> * slots in the current stack, since explored(safe) state
> * didn't use them
> @@ -16484,6 +16524,49 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
> continue;
> }
>
we didn't check that cur->stack[spi] is ok to access yet; it's done a
bit later with `if (i >= cur->allocated_stack)`, if I'm not mistaken.
So these checks would need to be moved a bit lower, probably.
> + /* load of stack value with all MISC and ZERO slots produces unbounded
> + * scalar value, call regsafe to ensure scalar ids are compared.
> + */
> + if (is_spilled_unbound_scalar_reg64(&old->stack[spi]) &&
> + is_stack_unbound_slot64(env, &cur->stack[spi])) {
> + i += BPF_REG_SIZE - 1;
> + if (!regsafe(env, &old->stack[spi].spilled_ptr, &unbound_reg,
> + idmap, exact))
> + return false;
> + continue;
> + }
> +
> + if (is_stack_unbound_slot64(env, &old->stack[spi]) &&
> + is_spilled_unbound_scalar_reg64(&cur->stack[spi])) {
> + i += BPF_REG_SIZE - 1;
> + if (!regsafe(env, &unbound_reg, &cur->stack[spi].spilled_ptr,
> + idmap, exact))
> + return false;
> + continue;
> + }
scalar_old = scalar_cur = NULL;
if (is_spilled_unbound64(&old->..))
scalar_old = old->stack[spi].slot_type[0] == STACK_SPILL ?
&old->stack[spi].spilled_ptr : &unbound_reg;
if (is_spilled_unbound64(&cur->..))
scalar_cur = cur->stack[spi].slot_type[0] == STACK_SPILL ?
&cur->stack[spi].spilled_ptr : &unbound_reg;
if (scalar_old && scalar_cur) {
if (!regsafe(env, scalar_old, scalar_new, idmap, exact)
return false;
i += BPF_REG_SIZE - 1;
continue;
}
where is_spilled_unbound64() would be basically `return
is_spilled_unbound_scalar_reg64(&old->..) ||
is_stack_unbound_slot64(&old->...)`;
Similarly for zero case? Though I'm wondering if zero case should be
checked first, as it's actually a subset of is_spilled_unbound64 when
it comes to STACK_ZERO/STACK_MISC mixes, no?
> +
> + /* load of stack value with all ZERO slots produces scalar value 0,
> + * call regsafe to ensure scalar ids are compared and precision
> + * flags are taken into account.
> + */
> + if (is_spilled_scalar_reg64(&old->stack[spi]) &&
> + is_stack_zero64(&cur->stack[spi])) {
> + if (!regsafe(env, &old->stack[spi].spilled_ptr, &zero_reg,
> + idmap, exact))
> + return false;
> + i += BPF_REG_SIZE - 1;
> + continue;
> + }
> +
> + if (is_stack_zero64(&old->stack[spi]) &&
> + is_spilled_scalar_reg64(&cur->stack[spi])) {
> + if (!regsafe(env, &zero_reg, &cur->stack[spi].spilled_ptr,
> + idmap, exact))
> + return false;
> + i += BPF_REG_SIZE - 1;
> + continue;
> + }
> +
> if (old->stack[spi].slot_type[i % BPF_REG_SIZE] == STACK_INVALID)
> continue;
>
> --
> 2.43.0
>
* Re: [PATCH bpf-next v2 15/15] selftests/bpf: states pruning checks for scalar vs STACK_{MISC,ZERO}
2024-01-08 20:52 ` [PATCH bpf-next v2 15/15] selftests/bpf: states pruning checks for scalar vs STACK_{MISC,ZERO} Maxim Mikityanskiy
@ 2024-01-10 0:27 ` Andrii Nakryiko
2024-01-10 20:27 ` Eduard Zingerman
0 siblings, 1 reply; 28+ messages in thread
From: Andrii Nakryiko @ 2024-01-10 0:27 UTC (permalink / raw)
To: Maxim Mikityanskiy
Cc: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Shung-Hsi Yu, John Fastabend, Martin KaFai Lau,
Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo,
Jiri Olsa, Mykola Lysenko, Shuah Khan, David S. Miller,
Jakub Kicinski, Jesper Dangaard Brouer, bpf, linux-kselftest,
netdev
On Mon, Jan 8, 2024 at 12:53 PM Maxim Mikityanskiy <maxtram95@gmail.com> wrote:
>
> From: Eduard Zingerman <eddyz87@gmail.com>
>
> Check that stacksafe() considers the following old vs cur stack spill
> state combinations equivalent:
> - spill of unbound scalar vs combination of STACK_{MISC,ZERO,INVALID}
> - STACK_MISC vs spill of unbound scalar
> - spill of scalar 0 vs STACK_ZERO
> - STACK_ZERO vs spill of scalar 0
>
> Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> ---
> .../selftests/bpf/progs/verifier_spill_fill.c | 192 ++++++++++++++++++
> 1 file changed, 192 insertions(+)
>
> diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> index 3764111d190d..3cd3fe30357f 100644
> --- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> +++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> @@ -1044,4 +1044,196 @@ l0_%=: r1 >>= 32; \
> : __clobber_all);
> }
>
> +/* stacksafe(): check if spill of unbound scalar in old state is
> + * considered equivalent to any state of the spill in the current state.
> + *
> + * On the first verification path an unbound scalar is written for
> + * fp-8 and later marked precise.
> + * On the second verification path a mix of STACK_MISC/ZERO/INVALID is
> + * written to fp-8. These should be considered equivalent.
> + */
> +SEC("socket")
> +__success __log_level(2)
> +__msg("10: (79) r0 = *(u64 *)(r10 -8)")
> +__msg("10: safe")
> +__msg("processed 16 insns")
> +__flag(BPF_F_TEST_STATE_FREQ)
> +__naked void old_unbound_scalar_vs_cur_anything(void)
> +{
> + asm volatile(
> + /* get a random value for branching */
> + "call %[bpf_ktime_get_ns];"
> + "r7 = r0;"
> + /* get a random value for storing at fp-8 */
> + "call %[bpf_ktime_get_ns];"
> + "if r7 == 0 goto 1f;"
> + /* unbound scalar written to fp-8 */
> + "*(u64*)(r10 - 8) = r0;"
> + "goto 2f;"
> +"1:"
> + /* mark fp-8 as mix of STACK_MISC/ZERO/INVALID */
> + "r1 = 0;"
> + "*(u8*)(r10 - 8) = r0;"
this is actually a spilled register, not STACK_ZERO. Is it important?
> + "*(u8*)(r10 - 7) = r1;"
> + /* fp-2..fp-6 remain STACK_INVALID */
> + "*(u8*)(r10 - 1) = r0;"
> +"2:"
> + /* read fp-8 and force it precise, should be considered safe
> + * on second visit
> + */
> + "r0 = *(u64*)(r10 - 8);"
> + "r0 &= 0xff;"
> + "r1 = r10;"
> + "r1 += r0;"
> + "exit;"
> + :
> + : __imm(bpf_ktime_get_ns)
> + : __clobber_all);
> +}
> +
> +/* stacksafe(): check if STACK_MISC in old state is considered
> + * equivalent to stack spill of unbound scalar in cur state.
> + */
> +SEC("socket")
> +__success __log_level(2)
> +__msg("8: (79) r0 = *(u64 *)(r10 -8) ; R0_w=scalar(id=1) R10=fp0 fp-8=scalar(id=1)")
> +__msg("8: safe")
> +__msg("processed 11 insns")
> +__flag(BPF_F_TEST_STATE_FREQ)
> +__naked void old_unbound_scalar_vs_cur_stack_misc(void)
> +{
> + asm volatile(
> + /* get a random value for branching */
> + "call %[bpf_ktime_get_ns];"
> + "if r0 == 0 goto 1f;"
> + /* conjure unbound scalar at fp-8 */
> + "call %[bpf_ktime_get_ns];"
> + "*(u64*)(r10 - 8) = r0;"
> + "goto 2f;"
> +"1:"
> + /* conjure STACK_MISC at fp-8 */
> + "call %[bpf_ktime_get_ns];"
> + "*(u64*)(r10 - 8) = r0;"
> + "*(u32*)(r10 - 4) = r0;"
> +"2:"
> + /* read fp-8, should be considered safe on second visit */
> + "r0 = *(u64*)(r10 - 8);"
> + "exit;"
> + :
> + : __imm(bpf_ktime_get_ns)
> + : __clobber_all);
> +}
> +
> +/* stacksafe(): check if stack spill of unbound scalar in old state is
> + * considered equivalent to STACK_MISC in cur state.
> + */
> +SEC("socket")
> +__success __log_level(2)
> +__msg("8: (79) r0 = *(u64 *)(r10 -8) ; R0_w=scalar() R10=fp0 fp-8=mmmmmmmm")
> +__msg("8: safe")
> +__msg("processed 11 insns")
> +__flag(BPF_F_TEST_STATE_FREQ)
> +__naked void old_stack_misc_vs_cur_unbound_scalar(void)
> +{
> + asm volatile(
> + /* get a random value for branching */
> + "call %[bpf_ktime_get_ns];"
> + "if r0 == 0 goto 1f;"
> + /* conjure STACK_MISC at fp-8 */
> + "call %[bpf_ktime_get_ns];"
> + "*(u64*)(r10 - 8) = r0;"
> + "*(u32*)(r10 - 4) = r0;"
> + "goto 2f;"
> +"1:"
> + /* conjure unbound scalar at fp-8 */
> + "call %[bpf_ktime_get_ns];"
> + "*(u64*)(r10 - 8) = r0;"
> +"2:"
> + /* read fp-8, should be considered safe on second visit */
> + "r0 = *(u64*)(r10 - 8);"
> + "exit;"
> + :
> + : __imm(bpf_ktime_get_ns)
> + : __clobber_all);
> +}
> +
> +/* stacksafe(): check if spill of register with value 0 in old state
> + * is considered equivalent to STACK_ZERO.
> + */
> +SEC("socket")
> +__success __log_level(2)
> +__msg("9: (79) r0 = *(u64 *)(r10 -8)")
> +__msg("9: safe")
> +__msg("processed 15 insns")
> +__flag(BPF_F_TEST_STATE_FREQ)
> +__naked void old_spill_zero_vs_stack_zero(void)
> +{
> + asm volatile(
> + /* get a random value for branching */
> + "call %[bpf_ktime_get_ns];"
> + "r7 = r0;"
> + /* get a random value for storing at fp-8 */
> + "call %[bpf_ktime_get_ns];"
> + "if r7 == 0 goto 1f;"
> + /* conjure spilled register with value 0 at fp-8 */
> + "*(u64*)(r10 - 8) = r0;"
> + "if r0 != 0 goto 3f;"
> + "goto 2f;"
> +"1:"
> + /* conjure STACK_ZERO at fp-8 */
> + "r1 = 0;"
> + "*(u64*)(r10 - 8) = r1;"
this is not STACK_ZERO, it's full register spill
> +"2:"
> + /* read fp-8 and force it precise, should be considered safe
> + * on second visit
> + */
> + "r0 = *(u64*)(r10 - 8);"
> + "r1 = r10;"
> + "r1 += r0;"
> +"3:"
> + "exit;"
> + :
> + : __imm(bpf_ktime_get_ns)
> + : __clobber_all);
> +}
> +
> +/* stacksafe(): similar to old_spill_zero_vs_stack_zero() but the
> + * other way around: check if STACK_ZERO is considered equivalent to
> + * spill of register with value 0.
> + */
> +SEC("socket")
> +__success __log_level(2)
> +__msg("8: (79) r0 = *(u64 *)(r10 -8)")
> +__msg("8: safe")
> +__msg("processed 14 insns")
> +__flag(BPF_F_TEST_STATE_FREQ)
> +__naked void old_stack_zero_vs_spill_zero(void)
> +{
> + asm volatile(
> + /* get a random value for branching */
> + "call %[bpf_ktime_get_ns];"
> + "if r0 == 0 goto 1f;"
> + /* conjure STACK_ZERO at fp-8 */
> + "r1 = 0;"
> + "*(u64*)(r10 - 8) = r1;"
same, please double-check these STACK_xxx assumptions, as now we spill
registers instead of STACK_ZERO in a lot of cases
> + "goto 2f;"
> +"1:"
> + /* conjure spilled register with value 0 at fp-8 */
> + "call %[bpf_ktime_get_ns];"
> + "*(u64*)(r10 - 8) = r0;"
> + "if r0 != 0 goto 3f;"
> +"2:"
> + /* read fp-8 and force it precise, should be considered safe
> + * on second visit
> + */
> + "r0 = *(u64*)(r10 - 8);"
> + "r1 = r10;"
> + "r1 += r0;"
> +"3:"
> + "exit;"
> + :
> + : __imm(bpf_ktime_get_ns)
> + : __clobber_all);
> +}
> +
> char _license[] SEC("license") = "GPL";
> --
> 2.43.0
>
* Re: [PATCH bpf-next v2 15/15] selftests/bpf: states pruning checks for scalar vs STACK_{MISC,ZERO}
2024-01-10 0:27 ` Andrii Nakryiko
@ 2024-01-10 20:27 ` Eduard Zingerman
0 siblings, 0 replies; 28+ messages in thread
From: Eduard Zingerman @ 2024-01-10 20:27 UTC (permalink / raw)
To: Andrii Nakryiko, Maxim Mikityanskiy
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Shung-Hsi Yu, John Fastabend, Martin KaFai Lau, Song Liu,
Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
Mykola Lysenko, Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, linux-kselftest, netdev
On Tue, 2024-01-09 at 16:27 -0800, Andrii Nakryiko wrote:
[...]
> same, please double check this STACK_xxx assumptions, as now we spill
> registers instead of STACK_ZERO in a lot of cases
Right, the test is outdated after your recent fixes for STACK_ZERO.
Thank you for catching this.
* Re: [PATCH bpf-next v2 14/15] bpf: Optimize state pruning for spilled scalars
2024-01-10 0:22 ` Andrii Nakryiko
@ 2024-01-10 21:04 ` Eduard Zingerman
2024-01-10 21:52 ` Andrii Nakryiko
0 siblings, 1 reply; 28+ messages in thread
From: Eduard Zingerman @ 2024-01-10 21:04 UTC (permalink / raw)
To: Andrii Nakryiko, Maxim Mikityanskiy
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Shung-Hsi Yu, John Fastabend, Martin KaFai Lau, Song Liu,
Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
Mykola Lysenko, Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, linux-kselftest, netdev
On Tue, 2024-01-09 at 16:22 -0800, Andrii Nakryiko wrote:
[...]
> > static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
> > struct bpf_func_state *cur, struct bpf_idmap *idmap, bool exact)
> > {
> > + struct bpf_reg_state unbound_reg = {};
> > + struct bpf_reg_state zero_reg = {};
> > int i, spi;
> >
> > + __mark_reg_unknown(env, &unbound_reg);
> > + __mark_reg_const_zero(env, &zero_reg);
> > + zero_reg.precise = true;
>
> these are immutable, right? Would it make sense to set them up just
> once as static variables instead of initializing on each check?
Should be possible.
> > +
> > /* walk slots of the explored stack and ignore any additional
> > * slots in the current stack, since explored(safe) state
> > * didn't use them
> > @@ -16484,6 +16524,49 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
> > continue;
> > }
> >
>
> we didn't check that cur->stack[spi] is ok to access yet, it's done a
> bit later with `if (i >= cur->allocated_stack)`, if I'm not mistaken.
> So these checks would need to be moved a bit lower, probably.
Right. And it seems the issue is already present:
if (exact &&
old->stack[spi].slot_type[i % BPF_REG_SIZE] !=
cur->stack[spi].slot_type[i % BPF_REG_SIZE])
return false;
This is currently executed before `if (i >= cur->allocated_stack)` check as well.
Introduced by another commit of mine :(
> > + /* load of stack value with all MISC and ZERO slots produces unbounded
> > + * scalar value, call regsafe to ensure scalar ids are compared.
> > + */
> > + if (is_spilled_unbound_scalar_reg64(&old->stack[spi]) &&
> > + is_stack_unbound_slot64(env, &cur->stack[spi])) {
> > + i += BPF_REG_SIZE - 1;
> > + if (!regsafe(env, &old->stack[spi].spilled_ptr, &unbound_reg,
> > + idmap, exact))
> > + return false;
> > + continue;
> > + }
> > +
> > + if (is_stack_unbound_slot64(env, &old->stack[spi]) &&
> > + is_spilled_unbound_scalar_reg64(&cur->stack[spi])) {
> > + i += BPF_REG_SIZE - 1;
> > + if (!regsafe(env, &unbound_reg, &cur->stack[spi].spilled_ptr,
> > + idmap, exact))
> > + return false;
> > + continue;
> > + }
>
> scalar_old = scalar_cur = NULL;
> if (is_spilled_unbound64(&old->..))
> scalar_old = old->stack[spi].slot_type[0] == STACK_SPILL ?
> &old->stack[spi].spilled_ptr : &unbound_reg;
> if (is_spilled_unbound64(&cur->..))
> scalar_cur = cur->stack[spi].slot_type[0] == STACK_SPILL ?
> &cur->stack[spi].spilled_ptr : &unbound_reg;
> if (scalar_old && scalar_cur) {
> if (!regsafe(env, scalar_old, scalar_new, idmap, exact)
> return false;
> i += BPF_REG_SIZE - 1;
> continue;
> }
Ok, I'll switch to this.
(Although, I think old variant is a bit simpler to follow).
> where is_spilled_unbound64() would be basically `return
> is_spilled_unbound_scalar_reg64(&old->..) ||
> is_stack_unbound_slot64(&old->...)`;
>
> Similarly for zero case? Though I'm wondering if zero case should be
> checked first, as it's actually a subset of is_spilled_unbound64 when
> it comes to STACK_ZERO/STACK_MISC mixes, no?
Yes, makes sense.
[...]
* Re: [PATCH bpf-next v2 14/15] bpf: Optimize state pruning for spilled scalars
2024-01-10 21:04 ` Eduard Zingerman
@ 2024-01-10 21:52 ` Andrii Nakryiko
0 siblings, 0 replies; 28+ messages in thread
From: Andrii Nakryiko @ 2024-01-10 21:52 UTC (permalink / raw)
To: Eduard Zingerman
Cc: Maxim Mikityanskiy, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Shung-Hsi Yu, John Fastabend, Martin KaFai Lau,
Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo,
Jiri Olsa, Mykola Lysenko, Shuah Khan, David S. Miller,
Jakub Kicinski, Jesper Dangaard Brouer, bpf, linux-kselftest,
netdev
On Wed, Jan 10, 2024 at 1:04 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Tue, 2024-01-09 at 16:22 -0800, Andrii Nakryiko wrote:
> [...]
> > > static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
> > > struct bpf_func_state *cur, struct bpf_idmap *idmap, bool exact)
> > > {
> > > + struct bpf_reg_state unbound_reg = {};
> > > + struct bpf_reg_state zero_reg = {};
> > > int i, spi;
> > >
> > > + __mark_reg_unknown(env, &unbound_reg);
> > > + __mark_reg_const_zero(env, &zero_reg);
> > > + zero_reg.precise = true;
> >
> > these are immutable, right? Would it make sense to set them up just
> > once as static variables instead of initializing on each check?
>
> Should be possible.
>
> > > +
> > > /* walk slots of the explored stack and ignore any additional
> > > * slots in the current stack, since explored(safe) state
> > > * didn't use them
> > > @@ -16484,6 +16524,49 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
> > > continue;
> > > }
> > >
> >
> > we didn't check that cur->stack[spi] is ok to access yet, it's done a
> > bit later with `if (i >= cur->allocated_stack)`, if I'm not mistaken.
> > So these checks would need to be moved a bit lower, probably.
>
> Right. And it seems the issue is already present:
>
> if (exact &&
> old->stack[spi].slot_type[i % BPF_REG_SIZE] !=
> cur->stack[spi].slot_type[i % BPF_REG_SIZE])
> return false;
>
> This is currently executed before `if (i >= cur->allocated_stack)` check as well.
> Introduced by another commit of mine :(
I guess we'll need to move that too, then
>
> > > + /* load of stack value with all MISC and ZERO slots produces unbounded
> > > + * scalar value, call regsafe to ensure scalar ids are compared.
> > > + */
> > > + if (is_spilled_unbound_scalar_reg64(&old->stack[spi]) &&
> > > + is_stack_unbound_slot64(env, &cur->stack[spi])) {
> > > + i += BPF_REG_SIZE - 1;
> > > + if (!regsafe(env, &old->stack[spi].spilled_ptr, &unbound_reg,
> > > + idmap, exact))
> > > + return false;
> > > + continue;
> > > + }
> > > +
> > > + if (is_stack_unbound_slot64(env, &old->stack[spi]) &&
> > > + is_spilled_unbound_scalar_reg64(&cur->stack[spi])) {
> > > + i += BPF_REG_SIZE - 1;
> > > + if (!regsafe(env, &unbound_reg, &cur->stack[spi].spilled_ptr,
> > > + idmap, exact))
> > > + return false;
> > > + continue;
> > > + }
> >
> > scalar_old = scalar_cur = NULL;
> > if (is_spilled_unbound64(&old->..))
> > scalar_old = old->stack[spi].slot_type[0] == STACK_SPILL ?
> > &old->stack[spi].spilled_ptr : &unbound_reg;
> > if (is_spilled_unbound64(&cur->..))
> > scalar_cur = cur->stack[spi].slot_type[0] == STACK_SPILL ?
> > &cur->stack[spi].spilled_ptr : &unbound_reg;
> > if (scalar_old && scalar_cur) {
> > if (!regsafe(env, scalar_old, scalar_cur, idmap, exact))
> > return false;
> > i += BPF_REG_SIZE - 1;
> > continue;
> > }
>
> Ok, I'll switch to this.
> (Although, I think old variant is a bit simpler to follow).
my goal was to eliminate the duplicated logic inside each if and to
show at a high level that we are comparing two "logically unbound
scalars", regardless of whether that's a STACK_xxx mix or a spilled
scalar.
I haven't thought this through, but perhaps we can simplify further to
something like this:
if (is_spilled_unbound64(old) && is_spilled_unbound64(cur)) {
        scalar_cur = ...
        scalar_old = ...
        if (!regsafe(...))
                return false;
        i += BPF_REG_SIZE - 1;
        continue;
}
In general, this symmetry in two consecutive if conditions seems like
an opportunity to simplify. But if you think it's more complicated,
I'm fine with leaving it as is.
>
> > where is_spilled_unbound64() would be basically `return
> > is_spilled_unbound_scalar_reg64(&old->..) ||
> > is_stack_unbound_slot64(&old->...)`;
> >
> > Similarly for zero case? Though I'm wondering if zero case should be
> > checked first, as it's actually a subset of is_spilled_unbound64 when
> > it comes to STACK_ZERO/STACK_MISC mixes, no?
>
> Yes, makes sense.
>
> [...]
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH bpf-next v2 00/15] Improvements for tracking scalars in the BPF verifier
2024-01-08 20:51 [PATCH bpf-next v2 00/15] Improvements for tracking scalars in the BPF verifier Maxim Mikityanskiy
` (14 preceding siblings ...)
2024-01-08 20:52 ` [PATCH bpf-next v2 15/15] selftests/bpf: states pruning checks for scalar vs STACK_{MISC,ZERO} Maxim Mikityanskiy
@ 2024-01-12 3:00 ` patchwork-bot+netdevbpf
15 siblings, 0 replies; 28+ messages in thread
From: patchwork-bot+netdevbpf @ 2024-01-12 3:00 UTC (permalink / raw)
To: Maxim Mikityanskiy
Cc: eddyz87, ast, daniel, andrii, shung-hsi.yu, john.fastabend,
martin.lau, song, yonghong.song, kpsingh, sdf, haoluo, jolsa,
mykolal, shuah, davem, kuba, hawk, bpf, linux-kselftest, netdev,
maxim
Hello:
This series was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:
On Mon, 8 Jan 2024 22:51:54 +0200 you wrote:
> From: Maxim Mikityanskiy <maxim@isovalent.com>
>
> The goal of this series is to extend the verifier's capabilities of
> tracking scalars when they are spilled to stack, especially when the
> spill or fill is narrowing. It also contains a fix by Eduard for
> infinite loop detection and a state pruning optimization by Eduard that
> compensates for a verification complexity regression introduced by
> tracking unbounded scalars. These improvements reduce the surface of
> false rejections that I saw while working on Cilium codebase.
>
> [...]
Here is the summary with links:
- [bpf-next,v2,01/15] selftests/bpf: Fix the u64_offset_to_skb_data test
https://git.kernel.org/bpf/bpf-next/c/02fb00d34de1
- [bpf-next,v2,02/15] bpf: make infinite loop detection in is_state_visited() exact
https://git.kernel.org/bpf/bpf-next/c/3a96c705f48a
- [bpf-next,v2,03/15] selftests/bpf: check if imprecise stack spills confuse infinite loop detection
https://git.kernel.org/bpf/bpf-next/c/723909ae6496
- [bpf-next,v2,04/15] bpf: Make bpf_for_each_spilled_reg consider narrow spills
https://git.kernel.org/bpf/bpf-next/c/0e00a9551c61
- [bpf-next,v2,05/15] selftests/bpf: Add a test case for 32-bit spill tracking
https://git.kernel.org/bpf/bpf-next/c/221dffec93e8
- [bpf-next,v2,06/15] bpf: Add the assign_scalar_id_before_mov function
https://git.kernel.org/bpf/bpf-next/c/85b6e9d75c8e
- [bpf-next,v2,07/15] bpf: Add the get_reg_width function
https://git.kernel.org/bpf/bpf-next/c/b08973e4d9c4
- [bpf-next,v2,08/15] bpf: Assign ID to scalars on spill
https://git.kernel.org/bpf/bpf-next/c/26b560765e67
- [bpf-next,v2,09/15] selftests/bpf: Test assigning ID to scalars on spill
https://git.kernel.org/bpf/bpf-next/c/5a052eb509e9
- [bpf-next,v2,10/15] bpf: Track spilled unbounded scalars
https://git.kernel.org/bpf/bpf-next/c/53ac20c9e0dd
- [bpf-next,v2,11/15] selftests/bpf: Test tracking spilled unbounded scalars
https://git.kernel.org/bpf/bpf-next/c/9ba80a06cabb
- [bpf-next,v2,12/15] bpf: Preserve boundaries and track scalars on narrowing fill
(no matching commit)
- [bpf-next,v2,13/15] selftests/bpf: Add test cases for narrowing fill
(no matching commit)
- [bpf-next,v2,14/15] bpf: Optimize state pruning for spilled scalars
(no matching commit)
- [bpf-next,v2,15/15] selftests/bpf: states pruning checks for scalar vs STACK_{MISC,ZERO}
(no matching commit)
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
* Re: [PATCH bpf-next v2 10/15] bpf: Track spilled unbounded scalars
2024-01-08 20:52 ` [PATCH bpf-next v2 10/15] bpf: Track spilled unbounded scalars Maxim Mikityanskiy
@ 2024-01-12 19:10 ` Alexei Starovoitov
2024-01-12 20:44 ` Maxim Mikityanskiy
0 siblings, 1 reply; 28+ messages in thread
From: Alexei Starovoitov @ 2024-01-12 19:10 UTC (permalink / raw)
To: Maxim Mikityanskiy
Cc: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Shung-Hsi Yu, John Fastabend, Martin KaFai Lau,
Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo,
Jiri Olsa, Mykola Lysenko, Shuah Khan, David S. Miller,
Jakub Kicinski, Jesper Dangaard Brouer, bpf,
open list:KERNEL SELFTEST FRAMEWORK, Network Development,
Maxim Mikityanskiy
On Mon, Jan 8, 2024 at 12:53 PM Maxim Mikityanskiy <maxtram95@gmail.com> wrote:
>
> From: Maxim Mikityanskiy <maxim@isovalent.com>
>
> Support the pattern where an unbounded scalar is spilled to the stack,
> then boundary checks are performed on the src register, after which the
> stack frame slot is refilled into a register.
>
> Before this commit, the verifier didn't treat the src register and the
> stack slot as related if the src register was an unbounded scalar. The
> register state wasn't copied, the id wasn't preserved, and the stack
> slot was marked as STACK_MISC. Subsequent boundary checks on the src
> register wouldn't result in updating the boundaries of the spilled
> variable on the stack.
>
> After this commit, the verifier will preserve the bond between src and
> > dst even if src is unbounded, which permits doing boundary checks on src
> > and refilling dst later, still remembering its boundaries. Such a pattern
> is sometimes generated by clang when compiling complex long functions.
>
> One test is adjusted to reflect the fact that an untracked register is
> marked as precise at an earlier stage, and one more test is adjusted to
> reflect that now unbounded scalars are tracked.
>
> Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
> Acked-by: Eduard Zingerman <eddyz87@gmail.com>
> ---
> kernel/bpf/verifier.c | 7 +------
> tools/testing/selftests/bpf/progs/verifier_spill_fill.c | 6 +++---
> tools/testing/selftests/bpf/verifier/precise.c | 6 +++---
> 3 files changed, 7 insertions(+), 12 deletions(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 055fa8096a08..e7fff5f5aa1d 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -4389,11 +4389,6 @@ static bool __is_scalar_unbounded(struct bpf_reg_state *reg)
> reg->u32_min_value == 0 && reg->u32_max_value == U32_MAX;
> }
>
> -static bool register_is_bounded(struct bpf_reg_state *reg)
> -{
> - return reg->type == SCALAR_VALUE && !__is_scalar_unbounded(reg);
> -}
> -
> static bool __is_pointer_value(bool allow_ptr_leaks,
> const struct bpf_reg_state *reg)
> {
> @@ -4504,7 +4499,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
> return err;
>
> mark_stack_slot_scratched(env, spi);
> - if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) && env->bpf_capable) {
> + if (reg && !(off % BPF_REG_SIZE) && reg->type == SCALAR_VALUE && env->bpf_capable) {
> bool reg_value_fits;
>
> reg_value_fits = get_reg_width(reg) <= BITS_PER_BYTE * size;
> diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> index b05aab925ee5..57eb70e100a3 100644
> --- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> +++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> @@ -452,9 +452,9 @@ l0_%=: r1 >>= 16; \
> SEC("raw_tp")
> __log_level(2)
> __success
> -__msg("fp-8=0m??mmmm")
> -__msg("fp-16=00mm??mm")
> -__msg("fp-24=00mm???m")
> +__msg("fp-8=0m??scalar()")
> +__msg("fp-16=00mm??scalar()")
> +__msg("fp-24=00mm???scalar()")
> __naked void spill_subregs_preserve_stack_zero(void)
> {
> asm volatile (
> diff --git a/tools/testing/selftests/bpf/verifier/precise.c b/tools/testing/selftests/bpf/verifier/precise.c
> index 8a2ff81d8350..0a9293a57211 100644
> --- a/tools/testing/selftests/bpf/verifier/precise.c
> +++ b/tools/testing/selftests/bpf/verifier/precise.c
> @@ -183,10 +183,10 @@
> .prog_type = BPF_PROG_TYPE_XDP,
> .flags = BPF_F_TEST_STATE_FREQ,
> .errstr = "mark_precise: frame0: last_idx 7 first_idx 7\
> - mark_precise: frame0: parent state regs=r4 stack=:\
> + mark_precise: frame0: parent state regs=r4 stack=-8:\
> mark_precise: frame0: last_idx 6 first_idx 4\
> - mark_precise: frame0: regs=r4 stack= before 6: (b7) r0 = -1\
> - mark_precise: frame0: regs=r4 stack= before 5: (79) r4 = *(u64 *)(r10 -8)\
> + mark_precise: frame0: regs=r4 stack=-8 before 6: (b7) r0 = -1\
> + mark_precise: frame0: regs=r4 stack=-8 before 5: (79) r4 = *(u64 *)(r10 -8)\
> mark_precise: frame0: regs= stack=-8 before 4: (7b) *(u64 *)(r3 -8) = r0\
> mark_precise: frame0: parent state regs=r0 stack=:\
> mark_precise: frame0: last_idx 3 first_idx 3\
Yesterday I applied patches 1 through 11 to bpf-next.
Then Yonghong found that removal of register_is_bounded()
in this patch 10 makes __is_scalar_unbounded() unused, which
breaks the build.
So I dropped patches 10 and 11.
Today we found out that test_verifier is broken with patches 1 through 9.
It turned out that this hunk for verifier/precise.c in patch 10 should have
been in patch 8.
I manually took it and force-pushed bpf-next again.
Please test bisectability of the series more carefully in the future.
As for the register_is_bounded() issue:
maybe order patch 14, which uses __is_scalar_unbounded(), first, and
then add this patch 10?
Other ideas?
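For reference, the pattern this patch teaches the verifier to track (described in the commit message above) looks roughly like this at the BPF instruction level. This is a hypothetical fragment for illustration, not taken from the patch or the selftests:

```
r1 = *(u64 *)(r6 + 0)        /* r1 = unbounded scalar                    */
*(u64 *)(r10 - 8) = r1       /* spill: slot now shares r1's id and state */
if r1 > 16 goto l0           /* bounds check on the src register         */
r2 = *(u64 *)(r10 - 8)       /* refill: r2 inherits r1's narrowed bounds */
```

Before the patch, spilling an unbounded r1 turned fp-8 into STACK_MISC, so the later bounds check on r1 did not propagate to the refilled r2.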
* Re: [PATCH bpf-next v2 10/15] bpf: Track spilled unbounded scalars
2024-01-12 19:10 ` Alexei Starovoitov
@ 2024-01-12 20:44 ` Maxim Mikityanskiy
2024-01-12 20:50 ` Alexei Starovoitov
0 siblings, 1 reply; 28+ messages in thread
From: Maxim Mikityanskiy @ 2024-01-12 20:44 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Shung-Hsi Yu, John Fastabend, Martin KaFai Lau,
Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo,
Jiri Olsa, Mykola Lysenko, Shuah Khan, David S. Miller,
Jakub Kicinski, Jesper Dangaard Brouer, bpf,
open list:KERNEL SELFTEST FRAMEWORK, Network Development,
Maxim Mikityanskiy
On Fri, 12 Jan 2024 at 11:10:57 -0800, Alexei Starovoitov wrote:
> On Mon, Jan 8, 2024 at 12:53 PM Maxim Mikityanskiy <maxtram95@gmail.com> wrote:
> >
> > From: Maxim Mikityanskiy <maxim@isovalent.com>
> >
> > Support the pattern where an unbounded scalar is spilled to the stack,
> > then boundary checks are performed on the src register, after which the
> > stack frame slot is refilled into a register.
> >
> > Before this commit, the verifier didn't treat the src register and the
> > stack slot as related if the src register was an unbounded scalar. The
> > register state wasn't copied, the id wasn't preserved, and the stack
> > slot was marked as STACK_MISC. Subsequent boundary checks on the src
> > register wouldn't result in updating the boundaries of the spilled
> > variable on the stack.
> >
> > After this commit, the verifier will preserve the bond between src and
> > dst even if src is unbounded, which permits to do boundary checks on src
> > and refill dst later, still remembering its boundaries. Such a pattern
> > is sometimes generated by clang when compiling complex long functions.
> >
> > One test is adjusted to reflect the fact that an untracked register is
> > marked as precise at an earlier stage, and one more test is adjusted to
> > reflect that now unbounded scalars are tracked.
> >
> > Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
> > Acked-by: Eduard Zingerman <eddyz87@gmail.com>
> > ---
> > kernel/bpf/verifier.c | 7 +------
> > tools/testing/selftests/bpf/progs/verifier_spill_fill.c | 6 +++---
> > tools/testing/selftests/bpf/verifier/precise.c | 6 +++---
> > 3 files changed, 7 insertions(+), 12 deletions(-)
> >
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 055fa8096a08..e7fff5f5aa1d 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -4389,11 +4389,6 @@ static bool __is_scalar_unbounded(struct bpf_reg_state *reg)
> > reg->u32_min_value == 0 && reg->u32_max_value == U32_MAX;
> > }
> >
> > -static bool register_is_bounded(struct bpf_reg_state *reg)
> > -{
> > - return reg->type == SCALAR_VALUE && !__is_scalar_unbounded(reg);
> > -}
> > -
> > static bool __is_pointer_value(bool allow_ptr_leaks,
> > const struct bpf_reg_state *reg)
> > {
> > @@ -4504,7 +4499,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
> > return err;
> >
> > mark_stack_slot_scratched(env, spi);
> > - if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) && env->bpf_capable) {
> > + if (reg && !(off % BPF_REG_SIZE) && reg->type == SCALAR_VALUE && env->bpf_capable) {
> > bool reg_value_fits;
> >
> > reg_value_fits = get_reg_width(reg) <= BITS_PER_BYTE * size;
> > diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> > index b05aab925ee5..57eb70e100a3 100644
> > --- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> > +++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> > @@ -452,9 +452,9 @@ l0_%=: r1 >>= 16; \
> > SEC("raw_tp")
> > __log_level(2)
> > __success
> > -__msg("fp-8=0m??mmmm")
> > -__msg("fp-16=00mm??mm")
> > -__msg("fp-24=00mm???m")
> > +__msg("fp-8=0m??scalar()")
> > +__msg("fp-16=00mm??scalar()")
> > +__msg("fp-24=00mm???scalar()")
> > __naked void spill_subregs_preserve_stack_zero(void)
> > {
> > asm volatile (
> > diff --git a/tools/testing/selftests/bpf/verifier/precise.c b/tools/testing/selftests/bpf/verifier/precise.c
> > index 8a2ff81d8350..0a9293a57211 100644
> > --- a/tools/testing/selftests/bpf/verifier/precise.c
> > +++ b/tools/testing/selftests/bpf/verifier/precise.c
> > @@ -183,10 +183,10 @@
> > .prog_type = BPF_PROG_TYPE_XDP,
> > .flags = BPF_F_TEST_STATE_FREQ,
> > .errstr = "mark_precise: frame0: last_idx 7 first_idx 7\
> > - mark_precise: frame0: parent state regs=r4 stack=:\
> > + mark_precise: frame0: parent state regs=r4 stack=-8:\
> > mark_precise: frame0: last_idx 6 first_idx 4\
> > - mark_precise: frame0: regs=r4 stack= before 6: (b7) r0 = -1\
> > - mark_precise: frame0: regs=r4 stack= before 5: (79) r4 = *(u64 *)(r10 -8)\
> > + mark_precise: frame0: regs=r4 stack=-8 before 6: (b7) r0 = -1\
> > + mark_precise: frame0: regs=r4 stack=-8 before 5: (79) r4 = *(u64 *)(r10 -8)\
> > mark_precise: frame0: regs= stack=-8 before 4: (7b) *(u64 *)(r3 -8) = r0\
> > mark_precise: frame0: parent state regs=r0 stack=:\
> > mark_precise: frame0: last_idx 3 first_idx 3\
>
> Yesterday I applied patches 1 through 11 to bpf-next.
> Then Yonghong found that removal of register_is_bounded()
> in this patch 10 makes __is_scalar_unbounded() unused, which
> breaks the build.
> So I dropped patches 10 and 11.
>
> Today we found out that test_verifier is broken with patches 1 through 9.
> It turned out that this hunk for verifier/precise.c in patch 10 should have
> been in patch 8.
> I manually took it and force-pushed bpf-next again.
> Please test bisectability of the series more carefully in the future.
So sorry I caused this trouble! I indeed mistakenly attributed this hunk
to the wrong patch; I should have been more careful =/
> As for the register_is_bounded() issue:
> maybe order patch 14, which uses __is_scalar_unbounded(), first, and
> then add this patch 10?
> Other ideas?
As for the unused function warning, I thought it wasn't a big deal since
I start using the function again later in the same series. I couldn't
anticipate how it would turn out. The idea of having patch 14 at the end
of the series was to show the performance numbers. I'll reorder it before
patch 10 so that we avoid that warning. Sorry again for this mess.
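As a side note on process, per-patch bisectability can be checked mechanically before posting by walking the series oldest-first and running a build/test command after each commit. A minimal sketch, using a throwaway repo and `true` as a stand-in for the real check (e.g. building and running test_verifier; adjust to your tree):

```shell
#!/bin/sh
# Walk a patch series commit-by-commit, running a check after each one.
# `true` below is a stand-in for the real build/test command, e.g.
# `make -C tools/testing/selftests/bpf test_verifier` (hypothetical
# invocation, adjust to your setup).
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m 'patch 1'
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m 'patch 2'
# replay the series oldest-first, checking each intermediate state
for c in $(git rev-list --reverse HEAD); do
	git -c advice.detachedHead=false checkout -q "$c"
	true || { echo "series broken at $c"; exit 1; }
done
echo "series is bisectable"
```

`git rebase --exec '<check>'` achieves the same on a real branch and stops on the first failing patch.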
* Re: [PATCH bpf-next v2 10/15] bpf: Track spilled unbounded scalars
2024-01-12 20:44 ` Maxim Mikityanskiy
@ 2024-01-12 20:50 ` Alexei Starovoitov
0 siblings, 0 replies; 28+ messages in thread
From: Alexei Starovoitov @ 2024-01-12 20:50 UTC (permalink / raw)
To: Maxim Mikityanskiy
Cc: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Shung-Hsi Yu, John Fastabend, Martin KaFai Lau,
Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo,
Jiri Olsa, Mykola Lysenko, Shuah Khan, David S. Miller,
Jakub Kicinski, Jesper Dangaard Brouer, bpf,
open list:KERNEL SELFTEST FRAMEWORK, Network Development,
Maxim Mikityanskiy
On Fri, Jan 12, 2024 at 12:44 PM Maxim Mikityanskiy <maxtram95@gmail.com> wrote:
>
> On Fri, 12 Jan 2024 at 11:10:57 -0800, Alexei Starovoitov wrote:
> > On Mon, Jan 8, 2024 at 12:53 PM Maxim Mikityanskiy <maxtram95@gmail.com> wrote:
> > > [...]
> >
> > Yesterday I applied patches 1 through 11 to bpf-next.
> > Then Yonghong found that removal of register_is_bounded()
> > in this patch 10 makes __is_scalar_unbounded() unused, which
> > breaks the build.
> > So I dropped patches 10 and 11.
> >
> > Today we found out that test_verifier is broken with patches 1 through 9.
> > It turned out that this hunk for verifier/precise.c in patch 10 should have
> > been in patch 8.
> > I manually took it and force-pushed bpf-next again.
> > Please test bisectability of the series more carefully in the future.
>
> So sorry I caused this trouble! I indeed mistakenly attributed this hunk
> to the wrong patch; I should have been more careful =/
>
> > As for the register_is_bounded() issue:
> > maybe order patch 14, which uses __is_scalar_unbounded(), first, and
> > then add this patch 10?
> > Other ideas?
>
> As for the unused function warning, I thought it wasn't a big deal since
> I start using the function again later in the same series. I couldn't
> anticipate how it would turn out. The idea of having patch 14 at the end
> of the series was to show the performance numbers. I'll reorder it before
> patch 10 so that we avoid that warning. Sorry again for this mess.
Makes sense to me and no worries.
No one would have noticed if the whole series were applied.
Looking forward to v3.
end of thread, other threads: ~2024-01-12 20:50 UTC
Thread overview: 28+ messages
2024-01-08 20:51 [PATCH bpf-next v2 00/15] Improvements for tracking scalars in the BPF verifier Maxim Mikityanskiy
2024-01-08 20:51 ` [PATCH bpf-next v2 01/15] selftests/bpf: Fix the u64_offset_to_skb_data test Maxim Mikityanskiy
2024-01-08 20:51 ` [PATCH bpf-next v2 02/15] bpf: make infinite loop detection in is_state_visited() exact Maxim Mikityanskiy
2024-01-08 20:51 ` [PATCH bpf-next v2 03/15] selftests/bpf: check if imprecise stack spills confuse infinite loop detection Maxim Mikityanskiy
2024-01-08 20:51 ` [PATCH bpf-next v2 04/15] bpf: Make bpf_for_each_spilled_reg consider narrow spills Maxim Mikityanskiy
2024-01-08 20:51 ` [PATCH bpf-next v2 05/15] selftests/bpf: Add a test case for 32-bit spill tracking Maxim Mikityanskiy
2024-01-08 20:52 ` [PATCH bpf-next v2 06/15] bpf: Add the assign_scalar_id_before_mov function Maxim Mikityanskiy
2024-01-08 20:52 ` [PATCH bpf-next v2 07/15] bpf: Add the get_reg_width function Maxim Mikityanskiy
2024-01-08 20:52 ` [PATCH bpf-next v2 08/15] bpf: Assign ID to scalars on spill Maxim Mikityanskiy
2024-01-08 20:52 ` [PATCH bpf-next v2 09/15] selftests/bpf: Test assigning " Maxim Mikityanskiy
2024-01-09 23:34 ` Andrii Nakryiko
2024-01-08 20:52 ` [PATCH bpf-next v2 10/15] bpf: Track spilled unbounded scalars Maxim Mikityanskiy
2024-01-12 19:10 ` Alexei Starovoitov
2024-01-12 20:44 ` Maxim Mikityanskiy
2024-01-12 20:50 ` Alexei Starovoitov
2024-01-08 20:52 ` [PATCH bpf-next v2 11/15] selftests/bpf: Test tracking " Maxim Mikityanskiy
2024-01-08 20:52 ` [PATCH bpf-next v2 12/15] bpf: Preserve boundaries and track scalars on narrowing fill Maxim Mikityanskiy
2024-01-09 23:51 ` Andrii Nakryiko
2024-01-08 20:52 ` [PATCH bpf-next v2 13/15] selftests/bpf: Add test cases for " Maxim Mikityanskiy
2024-01-09 23:55 ` Andrii Nakryiko
2024-01-08 20:52 ` [PATCH bpf-next v2 14/15] bpf: Optimize state pruning for spilled scalars Maxim Mikityanskiy
2024-01-10 0:22 ` Andrii Nakryiko
2024-01-10 21:04 ` Eduard Zingerman
2024-01-10 21:52 ` Andrii Nakryiko
2024-01-08 20:52 ` [PATCH bpf-next v2 15/15] selftests/bpf: states pruning checks for scalar vs STACK_{MISC,ZERO} Maxim Mikityanskiy
2024-01-10 0:27 ` Andrii Nakryiko
2024-01-10 20:27 ` Eduard Zingerman
2024-01-12 3:00 ` [PATCH bpf-next v2 00/15] Improvements for tracking scalars in the BPF verifier patchwork-bot+netdevbpf