* [PATCH bpf-next 01/15] selftests/bpf: Fix the u64_offset_to_skb_data test
2023-12-20 21:39 [PATCH bpf-next 00/15] Improvements for tracking scalars in the BPF verifier Maxim Mikityanskiy
@ 2023-12-20 21:39 ` Maxim Mikityanskiy
2023-12-26 9:52 ` Shung-Hsi Yu
2023-12-20 21:40 ` [PATCH bpf-next 02/15] bpf: make infinite loop detection in is_state_visited() exact Maxim Mikityanskiy
` (13 subsequent siblings)
14 siblings, 1 reply; 24+ messages in thread
From: Maxim Mikityanskiy @ 2023-12-20 21:39 UTC (permalink / raw)
To: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko
Cc: John Fastabend, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, linux-kselftest, netdev,
Maxim Mikityanskiy
From: Maxim Mikityanskiy <maxim@isovalent.com>
The u64_offset_to_skb_data test is supposed to make a 64-bit fill, but
instead makes a 16-bit one. Fix the test according to its intention. The
16-bit fill is covered by u16_offset_to_skb_data.
Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
---
tools/testing/selftests/bpf/progs/verifier_spill_fill.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index 39fe3372e0e0..84eccab36582 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -243,7 +243,7 @@ l0_%=: r0 = 0; \
SEC("tc")
__description("Spill u32 const scalars. Refill as u64. Offset to skb->data")
-__failure __msg("invalid access to packet")
+__failure __msg("math between pkt pointer and register with unbounded min value is not allowed")
__naked void u64_offset_to_skb_data(void)
{
asm volatile (" \
@@ -253,7 +253,7 @@ __naked void u64_offset_to_skb_data(void)
w7 = 20; \
*(u32*)(r10 - 4) = r6; \
*(u32*)(r10 - 8) = r7; \
- r4 = *(u16*)(r10 - 8); \
+ r4 = *(u64*)(r10 - 8); \
r0 = r2; \
/* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */\
r0 += r4; \
--
2.42.1
* Re: [PATCH bpf-next 01/15] selftests/bpf: Fix the u64_offset_to_skb_data test
2023-12-20 21:39 ` [PATCH bpf-next 01/15] selftests/bpf: Fix the u64_offset_to_skb_data test Maxim Mikityanskiy
@ 2023-12-26 9:52 ` Shung-Hsi Yu
2023-12-26 10:38 ` Maxim Mikityanskiy
0 siblings, 1 reply; 24+ messages in thread
From: Shung-Hsi Yu @ 2023-12-26 9:52 UTC (permalink / raw)
To: Maxim Mikityanskiy
Cc: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, John Fastabend, Martin KaFai Lau, Song Liu,
Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
Mykola Lysenko, Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, linux-kselftest, netdev,
Maxim Mikityanskiy
On Wed, Dec 20, 2023 at 11:39:59PM +0200, Maxim Mikityanskiy wrote:
> From: Maxim Mikityanskiy <maxim@isovalent.com>
>
> The u64_offset_to_skb_data test is supposed to make a 64-bit fill, but
> instead makes a 16-bit one. Fix the test according to its intention. The
> 16-bit fill is covered by u16_offset_to_skb_data.
Cover letter mentioned
Patch 1 (Maxim): Fix for an existing test, it will matter later in the
series.
However, no subsequent patch touches upon u64_offset_to_skb_data(). Was
the follow-up missing from this series?
> Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
> [...]
> SEC("tc")
> __description("Spill u32 const scalars. Refill as u64. Offset to skb->data")
> -__failure __msg("invalid access to packet")
> +__failure __msg("math between pkt pointer and register with unbounded min value is not allowed")
> __naked void u64_offset_to_skb_data(void)
> {
> asm volatile (" \
> @@ -253,7 +253,7 @@ __naked void u64_offset_to_skb_data(void)
> w7 = 20; \
> *(u32*)(r10 - 4) = r6; \
> *(u32*)(r10 - 8) = r7; \
> - r4 = *(u16*)(r10 - 8); \
> + r4 = *(u64*)(r10 - 8); \
> r0 = r2; \
> /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */\
> r0 += r4; \
* Re: [PATCH bpf-next 01/15] selftests/bpf: Fix the u64_offset_to_skb_data test
2023-12-26 9:52 ` Shung-Hsi Yu
@ 2023-12-26 10:38 ` Maxim Mikityanskiy
2023-12-26 13:22 ` Shung-Hsi Yu
0 siblings, 1 reply; 24+ messages in thread
From: Maxim Mikityanskiy @ 2023-12-26 10:38 UTC (permalink / raw)
To: Shung-Hsi Yu
Cc: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, John Fastabend, Martin KaFai Lau, Song Liu,
Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
Mykola Lysenko, Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, linux-kselftest, netdev,
Maxim Mikityanskiy
On Tue, 26 Dec 2023 at 17:52:56 +0800, Shung-Hsi Yu wrote:
> On Wed, Dec 20, 2023 at 11:39:59PM +0200, Maxim Mikityanskiy wrote:
> > From: Maxim Mikityanskiy <maxim@isovalent.com>
> >
> > The u64_offset_to_skb_data test is supposed to make a 64-bit fill, but
> > instead makes a 16-bit one. Fix the test according to its intention. The
> > 16-bit fill is covered by u16_offset_to_skb_data.
>
> Cover letter mentioned
>
> Patch 1 (Maxim): Fix for an existing test, it will matter later in the
> series.
>
> However, no subsequent patch touches upon u64_offset_to_skb_data(). Was
> the follow-up missing from this series?
Thanks for your vigilance, but it's actually correct; sorry for not
making it clear enough. In patch 12 ("bpf: Preserve boundaries and track
scalars on narrowing fill") I modify u16_offset_to_skb_data, because it
becomes a valid pattern after that change. If I didn't change and fix
u64_offset_to_skb_data here, I'd need to modify it in patch 12 as well
(that's what I meant when I said "it will matter later in the series";
it's indeed subtle and implicit, now that I look at it), because it
would also start passing; however, that's not what we want, because:

1. Both tests would essentially test the same thing: a 16-bit fill after
a 32-bit spill.

2. The description of u64_offset_to_skb_data clearly says: "Refill as
u64". It's a typo in the code; u16->u64 makes sense, because we spill
two u32s and fill them as a single u64.

So, this patch essentially prevents wrong changes in a later patch.
> > Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
> > [...]
> > SEC("tc")
> > __description("Spill u32 const scalars. Refill as u64. Offset to skb->data")
> > -__failure __msg("invalid access to packet")
> > +__failure __msg("math between pkt pointer and register with unbounded min value is not allowed")
> > __naked void u64_offset_to_skb_data(void)
> > {
> > asm volatile (" \
> > @@ -253,7 +253,7 @@ __naked void u64_offset_to_skb_data(void)
> > w7 = 20; \
> > *(u32*)(r10 - 4) = r6; \
> > *(u32*)(r10 - 8) = r7; \
> > - r4 = *(u16*)(r10 - 8); \
> > + r4 = *(u64*)(r10 - 8); \
> > r0 = r2; \
> > /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */\
> > r0 += r4; \
* Re: [PATCH bpf-next 01/15] selftests/bpf: Fix the u64_offset_to_skb_data test
2023-12-26 10:38 ` Maxim Mikityanskiy
@ 2023-12-26 13:22 ` Shung-Hsi Yu
0 siblings, 0 replies; 24+ messages in thread
From: Shung-Hsi Yu @ 2023-12-26 13:22 UTC (permalink / raw)
To: Maxim Mikityanskiy
Cc: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, John Fastabend, Martin KaFai Lau, Song Liu,
Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
Mykola Lysenko, Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, linux-kselftest, netdev,
Maxim Mikityanskiy
On Tue, Dec 26, 2023 at 12:38:06PM +0200, Maxim Mikityanskiy wrote:
> On Tue, 26 Dec 2023 at 17:52:56 +0800, Shung-Hsi Yu wrote:
> > On Wed, Dec 20, 2023 at 11:39:59PM +0200, Maxim Mikityanskiy wrote:
> > > From: Maxim Mikityanskiy <maxim@isovalent.com>
> > >
> > > The u64_offset_to_skb_data test is supposed to make a 64-bit fill, but
> > > instead makes a 16-bit one. Fix the test according to its intention. The
> > > 16-bit fill is covered by u16_offset_to_skb_data.
> >
> > Cover letter mentioned
> >
> > Patch 1 (Maxim): Fix for an existing test, it will matter later in the
> > series.
> >
> > However, no subsequent patch touches upon u64_offset_to_skb_data(). Was
> > the follow-up missing from this series?
>
> Thanks for your vigilance, but it's actually correct; sorry for not
> making it clear enough. In patch 12 ("bpf: Preserve boundaries and track
> scalars on narrowing fill") I modify u16_offset_to_skb_data, because it
> becomes a valid pattern after that change. If I didn't change and fix
> u64_offset_to_skb_data here, I'd need to modify it in patch 12 as well
> (that's what I meant when I said "it will matter later in the series";
> it's indeed subtle and implicit, now that I look at it), because it
> would also start passing; however, that's not what we want, because:
>
> 1. Both tests would essentially test the same thing: a 16-bit fill after
> a 32-bit spill.
>
> 2. The description of u64_offset_to_skb_data clearly says: "Refill as
> u64". It's a typo in the code; u16->u64 makes sense, because we spill
> two u32s and fill them as a single u64.
>
> So, this patch essentially prevents wrong changes in a later patch.
Thanks for the thorough explanation. Now I can see and agree that the
u16->u64 change should be made. Digging back a bit, the change also
aligns with what's said in commit 0be2516f865f5 ("selftests/bpf: Tests
for state pruning with u32 spill/fill") that introduced the check:
    ... checks that a filled u64 register is marked unknown if the
    register spilled in the same stack slot was less than 8B.
Side note: the r4 value in the comment is still "R4=umax=65535"; that
probably should be updated as well now that r4 is unbounded.
> [...]
> > > - r4 = *(u16*)(r10 - 8); \
> > > + r4 = *(u64*)(r10 - 8); \
> > > r0 = r2; \
> > > /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */\
> > > r0 += r4; \
* [PATCH bpf-next 02/15] bpf: make infinite loop detection in is_state_visited() exact
2023-12-20 21:39 [PATCH bpf-next 00/15] Improvements for tracking scalars in the BPF verifier Maxim Mikityanskiy
2023-12-20 21:39 ` [PATCH bpf-next 01/15] selftests/bpf: Fix the u64_offset_to_skb_data test Maxim Mikityanskiy
@ 2023-12-20 21:40 ` Maxim Mikityanskiy
2023-12-20 21:40 ` [PATCH bpf-next 03/15] selftests/bpf: check if imprecise stack spills confuse infinite loop detection Maxim Mikityanskiy
` (12 subsequent siblings)
14 siblings, 0 replies; 24+ messages in thread
From: Maxim Mikityanskiy @ 2023-12-20 21:40 UTC (permalink / raw)
To: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko
Cc: John Fastabend, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, linux-kselftest, netdev
From: Eduard Zingerman <eddyz87@gmail.com>
The current infinite loop detection mechanism is speculative:
- first, a states_maybe_looping() check is done, which simply does memcmp
  for R1-R10 in the current frame;
- second, states_equal(..., exact=false) is called. With exact=false,
  states_equal() compares scalars for equality only if the scalar in the
  old state has a precision mark.
Such logic might be problematic if the compiler makes some unlucky stack
spill/fill decisions. An artificial example of a false positive looks
as follows:
r0 = ... unknown scalar ...
r0 &= 0xff;
*(u64 *)(r10 - 8) = r0;
r0 = 0;
loop:
r0 = *(u64 *)(r10 - 8);
if r0 > 10 goto exit_;
r0 += 1;
*(u64 *)(r10 - 8) = r0;
r0 = 0;
goto loop;
This commit updates the call to states_equal() to use exact=true,
forcing all scalar comparisons to be exact.
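To make the two comparison modes concrete, here is a reduced model in C
of the scalar comparison described above; it is a sketch for
illustration, not the verifier code, and it keeps only the unsigned
bounds and the precision flag:

#include <stdbool.h>
#include <stdint.h>

struct scalar {
    uint64_t umin, umax;
    bool precise;
};

static bool scalars_equal(const struct scalar *old, const struct scalar *cur,
                          bool exact)
{
    if (exact)
        /* infinite-loop check: only provably identical values match */
        return old->umin == cur->umin && old->umax == cur->umax;

    if (!old->precise)
        /* pruning mode: an imprecise old scalar matches anything, which
         * is what makes the loop above look "already visited"
         */
        return true;

    /* pruning mode with precision: cur must fit within old */
    return old->umin <= cur->umin && cur->umax <= old->umax;
}

In the example above it is the spilled counter at fp-8 that differs
between iterations, so with exact=true the states are no longer
considered equal and the program is not rejected as an infinite loop.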
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
kernel/bpf/verifier.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index f13008d27f35..89f8c527ed3c 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -17008,7 +17008,7 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
}
/* attempt to detect infinite loop to avoid unnecessary doomed work */
if (states_maybe_looping(&sl->state, cur) &&
- states_equal(env, &sl->state, cur, false) &&
+ states_equal(env, &sl->state, cur, true) &&
!iter_active_depths_differ(&sl->state, cur) &&
sl->state.callback_unroll_depth == cur->callback_unroll_depth) {
verbose_linfo(env, insn_idx, "; ");
--
2.42.1
* [PATCH bpf-next 03/15] selftests/bpf: check if imprecise stack spills confuse infinite loop detection
2023-12-20 21:39 [PATCH bpf-next 00/15] Improvements for tracking scalars in the BPF verifier Maxim Mikityanskiy
2023-12-20 21:39 ` [PATCH bpf-next 01/15] selftests/bpf: Fix the u64_offset_to_skb_data test Maxim Mikityanskiy
2023-12-20 21:40 ` [PATCH bpf-next 02/15] bpf: make infinite loop detection in is_state_visited() exact Maxim Mikityanskiy
@ 2023-12-20 21:40 ` Maxim Mikityanskiy
2023-12-20 21:40 ` [PATCH bpf-next 04/15] bpf: Make bpf_for_each_spilled_reg consider narrow spills Maxim Mikityanskiy
` (11 subsequent siblings)
14 siblings, 0 replies; 24+ messages in thread
From: Maxim Mikityanskiy @ 2023-12-20 21:40 UTC (permalink / raw)
To: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko
Cc: John Fastabend, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, linux-kselftest, netdev
From: Eduard Zingerman <eddyz87@gmail.com>
Verify that infinite loop detection logic separates states with
identical register states but different imprecise scalars spilled to
the stack.
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
.../selftests/bpf/progs/verifier_loops1.c | 24 +++++++++++++++++++
1 file changed, 24 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/verifier_loops1.c b/tools/testing/selftests/bpf/progs/verifier_loops1.c
index 71735dbf33d4..e07b43b78fd2 100644
--- a/tools/testing/selftests/bpf/progs/verifier_loops1.c
+++ b/tools/testing/selftests/bpf/progs/verifier_loops1.c
@@ -259,4 +259,28 @@ l0_%=: r2 += r1; \
" ::: __clobber_all);
}
+SEC("xdp")
+__success
+__naked void not_an_inifinite_loop(void)
+{
+ asm volatile (" \
+ call %[bpf_get_prandom_u32]; \
+ r0 &= 0xff; \
+ *(u64 *)(r10 - 8) = r0; \
+ r0 = 0; \
+loop_%=: \
+ r0 = *(u64 *)(r10 - 8); \
+ if r0 > 10 goto exit_%=; \
+ r0 += 1; \
+ *(u64 *)(r10 - 8) = r0; \
+ r0 = 0; \
+ goto loop_%=; \
+exit_%=: \
+ r0 = 0; \
+ exit; \
+" :
+ : __imm(bpf_get_prandom_u32)
+ : __clobber_all);
+}
+
char _license[] SEC("license") = "GPL";
--
2.42.1
* [PATCH bpf-next 04/15] bpf: Make bpf_for_each_spilled_reg consider narrow spills
2023-12-20 21:39 [PATCH bpf-next 00/15] Improvements for tracking scalars in the BPF verifier Maxim Mikityanskiy
` (2 preceding siblings ...)
2023-12-20 21:40 ` [PATCH bpf-next 03/15] selftests/bpf: check if imprecise stack spills confuse infinite loop detection Maxim Mikityanskiy
@ 2023-12-20 21:40 ` Maxim Mikityanskiy
2023-12-20 21:40 ` [PATCH bpf-next 05/15] selftests/bpf: Add a test case for 32-bit spill tracking Maxim Mikityanskiy
` (10 subsequent siblings)
14 siblings, 0 replies; 24+ messages in thread
From: Maxim Mikityanskiy @ 2023-12-20 21:40 UTC (permalink / raw)
To: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko
Cc: John Fastabend, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, linux-kselftest, netdev,
Maxim Mikityanskiy
From: Maxim Mikityanskiy <maxim@isovalent.com>
Adjust the check in bpf_get_spilled_reg to take into account spilled
registers narrower than 64 bits. That allows find_equal_scalars to
properly adjust the range of all spilled registers that have the same
ID. Before this change, it was possible for a register and a spilled
register to have the same IDs but different ranges if the spill was
narrower than 64 bits and a range check was performed on the register.
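For illustration, a reduced model in C of the slot_type layout that the
new check relies on; this is a sketch, not the kernel code, under the
assumption that a narrow spill marks the highest-indexed bytes of the
8-byte slot (as save_register_state() does) and leaves the rest as MISC:

#include <stdio.h>

#define BPF_REG_SIZE 8

enum slot_type { SLOT_INVALID, SLOT_MISC, SLOT_SPILL };

/* model of how an N-byte register spill marks one 8-byte stack slot */
static void mark_spill(enum slot_type slot[BPF_REG_SIZE], int size)
{
    for (int i = 0; i < BPF_REG_SIZE; i++)
        slot[i] = (i >= BPF_REG_SIZE - size) ? SLOT_SPILL : SLOT_MISC;
}

int main(void)
{
    enum slot_type slot[BPF_REG_SIZE];

    mark_spill(slot, 4); /* 32-bit spill */
    /* slot[0] is SLOT_MISC, so a check on byte 0 misses this spill;
     * slot[BPF_REG_SIZE - 1] is SLOT_SPILL for any spill size.
     */
    printf("slot[0]=%d slot[7]=%d\n", slot[0], slot[BPF_REG_SIZE - 1]);
    return 0;
}

Testing slot_type[BPF_REG_SIZE - 1] therefore recognizes spills of any
width, which is what lets find_equal_scalars() reach narrowly spilled
registers.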
Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
---
include/linux/bpf_verifier.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index d07d857ca67f..e11baecbde68 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -453,7 +453,7 @@ struct bpf_verifier_state {
#define bpf_get_spilled_reg(slot, frame, mask) \
(((slot < frame->allocated_stack / BPF_REG_SIZE) && \
- ((1 << frame->stack[slot].slot_type[0]) & (mask))) \
+ ((1 << frame->stack[slot].slot_type[BPF_REG_SIZE - 1]) & (mask))) \
? &frame->stack[slot].spilled_ptr : NULL)
/* Iterate over 'frame', setting 'reg' to either NULL or a spilled register. */
--
2.42.1
* [PATCH bpf-next 05/15] selftests/bpf: Add a test case for 32-bit spill tracking
2023-12-20 21:39 [PATCH bpf-next 00/15] Improvements for tracking scalars in the BPF verifier Maxim Mikityanskiy
` (3 preceding siblings ...)
2023-12-20 21:40 ` [PATCH bpf-next 04/15] bpf: Make bpf_for_each_spilled_reg consider narrow spills Maxim Mikityanskiy
@ 2023-12-20 21:40 ` Maxim Mikityanskiy
2023-12-20 21:40 ` [PATCH bpf-next 06/15] bpf: Add the assign_scalar_id_before_mov function Maxim Mikityanskiy
` (9 subsequent siblings)
14 siblings, 0 replies; 24+ messages in thread
From: Maxim Mikityanskiy @ 2023-12-20 21:40 UTC (permalink / raw)
To: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko
Cc: John Fastabend, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, linux-kselftest, netdev,
Maxim Mikityanskiy
From: Maxim Mikityanskiy <maxim@isovalent.com>
When a range check is performed on a register that was 32-bit spilled to
the stack, the IDs of the two instances of the register are the same, so
the range should also be the same.
Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
---
.../selftests/bpf/progs/verifier_spill_fill.c | 31 +++++++++++++++++++
1 file changed, 31 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index 84eccab36582..f2c1fe5b1dba 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -737,4 +737,35 @@ __naked void stack_load_preserves_const_precision_subreg(void)
: __clobber_common);
}
+SEC("xdp")
+__description("32-bit spilled reg range should be tracked")
+__success __retval(0)
+__naked void spill_32bit_range_track(void)
+{
+ asm volatile(" \
+ call %[bpf_ktime_get_ns]; \
+ /* Make r0 bounded. */ \
+ r0 &= 65535; \
+ /* Assign an ID to r0. */ \
+ r1 = r0; \
+ /* 32-bit spill r0 to stack. */ \
+ *(u32*)(r10 - 8) = r0; \
+ /* Boundary check on r0. */ \
+ if r0 < 1 goto l0_%=; \
+ /* 32-bit fill r1 from stack. */ \
+ r1 = *(u32*)(r10 - 8); \
+ /* r1 == r0 => r1 >= 1 always. */ \
+ if r1 >= 1 goto l0_%=; \
+ /* Dead branch: the verifier should prune it. \
+ * Do an invalid memory access if the verifier \
+ * follows it. \
+ */ \
+ r0 = *(u64*)(r9 + 0); \
+l0_%=: r0 = 0; \
+ exit; \
+" :
+ : __imm(bpf_ktime_get_ns)
+ : __clobber_all);
+}
+
char _license[] SEC("license") = "GPL";
--
2.42.1
* [PATCH bpf-next 06/15] bpf: Add the assign_scalar_id_before_mov function
2023-12-20 21:39 [PATCH bpf-next 00/15] Improvements for tracking scalars in the BPF verifier Maxim Mikityanskiy
` (4 preceding siblings ...)
2023-12-20 21:40 ` [PATCH bpf-next 05/15] selftests/bpf: Add a test case for 32-bit spill tracking Maxim Mikityanskiy
@ 2023-12-20 21:40 ` Maxim Mikityanskiy
2023-12-20 21:40 ` [PATCH bpf-next 07/15] bpf: Add the get_reg_width function Maxim Mikityanskiy
` (8 subsequent siblings)
14 siblings, 0 replies; 24+ messages in thread
From: Maxim Mikityanskiy @ 2023-12-20 21:40 UTC (permalink / raw)
To: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko
Cc: John Fastabend, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, linux-kselftest, netdev,
Maxim Mikityanskiy
From: Maxim Mikityanskiy <maxim@isovalent.com>
Extract the common code that generates a register ID for src_reg before
MOV if needed into a new function. This function will also be used in
a following commit.
Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
---
kernel/bpf/verifier.c | 33 +++++++++++++++++++--------------
1 file changed, 19 insertions(+), 14 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 89f8c527ed3c..a703e3adedd3 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4401,6 +4401,18 @@ static bool __is_pointer_value(bool allow_ptr_leaks,
return reg->type != SCALAR_VALUE;
}
+static void assign_scalar_id_before_mov(struct bpf_verifier_env *env,
+ struct bpf_reg_state *src_reg)
+{
+ if (src_reg->type == SCALAR_VALUE && !src_reg->id &&
+ !tnum_is_const(src_reg->var_off))
+ /* Ensure that src_reg has a valid ID that will be copied to
+ * dst_reg and then will be used by find_equal_scalars() to
+ * propagate min/max range.
+ */
+ src_reg->id = ++env->id_gen;
+}
+
/* Copy src state preserving dst->parent and dst->live fields */
static void copy_register_state(struct bpf_reg_state *dst, const struct bpf_reg_state *src)
{
@@ -13886,20 +13898,13 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
if (BPF_SRC(insn->code) == BPF_X) {
struct bpf_reg_state *src_reg = regs + insn->src_reg;
struct bpf_reg_state *dst_reg = regs + insn->dst_reg;
- bool need_id = src_reg->type == SCALAR_VALUE && !src_reg->id &&
- !tnum_is_const(src_reg->var_off);
if (BPF_CLASS(insn->code) == BPF_ALU64) {
if (insn->off == 0) {
/* case: R1 = R2
* copy register state to dest reg
*/
- if (need_id)
- /* Assign src and dst registers the same ID
- * that will be used by find_equal_scalars()
- * to propagate min/max range.
- */
- src_reg->id = ++env->id_gen;
+ assign_scalar_id_before_mov(env, src_reg);
copy_register_state(dst_reg, src_reg);
dst_reg->live |= REG_LIVE_WRITTEN;
dst_reg->subreg_def = DEF_NOT_SUBREG;
@@ -13914,8 +13919,8 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
bool no_sext;
no_sext = src_reg->umax_value < (1ULL << (insn->off - 1));
- if (no_sext && need_id)
- src_reg->id = ++env->id_gen;
+ if (no_sext)
+ assign_scalar_id_before_mov(env, src_reg);
copy_register_state(dst_reg, src_reg);
if (!no_sext)
dst_reg->id = 0;
@@ -13937,8 +13942,8 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
if (insn->off == 0) {
bool is_src_reg_u32 = src_reg->umax_value <= U32_MAX;
- if (is_src_reg_u32 && need_id)
- src_reg->id = ++env->id_gen;
+ if (is_src_reg_u32)
+ assign_scalar_id_before_mov(env, src_reg);
copy_register_state(dst_reg, src_reg);
/* Make sure ID is cleared if src_reg is not in u32
* range otherwise dst_reg min/max could be incorrectly
@@ -13952,8 +13957,8 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
/* case: W1 = (s8, s16)W2 */
bool no_sext = src_reg->umax_value < (1ULL << (insn->off - 1));
- if (no_sext && need_id)
- src_reg->id = ++env->id_gen;
+ if (no_sext)
+ assign_scalar_id_before_mov(env, src_reg);
copy_register_state(dst_reg, src_reg);
if (!no_sext)
dst_reg->id = 0;
--
2.42.1
* [PATCH bpf-next 07/15] bpf: Add the get_reg_width function
2023-12-20 21:39 [PATCH bpf-next 00/15] Improvements for tracking scalars in the BPF verifier Maxim Mikityanskiy
` (5 preceding siblings ...)
2023-12-20 21:40 ` [PATCH bpf-next 06/15] bpf: Add the assign_scalar_id_before_mov function Maxim Mikityanskiy
@ 2023-12-20 21:40 ` Maxim Mikityanskiy
2023-12-20 21:40 ` [PATCH bpf-next 08/15] bpf: Assign ID to scalars on spill Maxim Mikityanskiy
` (7 subsequent siblings)
14 siblings, 0 replies; 24+ messages in thread
From: Maxim Mikityanskiy @ 2023-12-20 21:40 UTC (permalink / raw)
To: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko
Cc: John Fastabend, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, linux-kselftest, netdev,
Maxim Mikityanskiy
From: Maxim Mikityanskiy <maxim@isovalent.com>
Put calculation of the register value width into a dedicated function.
This function will also be used in a following commit.
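As a quick illustration of what the computed width means, here is a
standalone sketch (not the kernel helper itself; __builtin_clzll stands
in for fls64()):

#include <stdint.h>
#include <stdio.h>

static int reg_width(uint64_t umax)
{
    return umax ? 64 - __builtin_clzll(umax) : 0; /* same result as fls64() */
}

int main(void)
{
    printf("%d\n", reg_width(255));        /* 8: fits an 8-bit spill */
    printf("%d\n", reg_width(UINT32_MAX)); /* 32: is_src_reg_u32 is true */
    printf("%d\n", reg_width(1ULL << 32)); /* 33: does not fit in 32 bits */
    return 0;
}

This is the quantity that the following patches compare against the
spill or fill width.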
Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
---
kernel/bpf/verifier.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index a703e3adedd3..b757fdbbbdd2 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4448,6 +4448,11 @@ static bool is_bpf_st_mem(struct bpf_insn *insn)
return BPF_CLASS(insn->code) == BPF_ST && BPF_MODE(insn->code) == BPF_MEM;
}
+static int get_reg_width(struct bpf_reg_state *reg)
+{
+ return fls64(reg->umax_value);
+}
+
/* check_stack_{read,write}_fixed_off functions track spill/fill of registers,
* stack boundary and alignment are checked in check_mem_access()
*/
@@ -4500,7 +4505,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) && env->bpf_capable) {
save_register_state(env, state, spi, reg, size);
/* Break the relation on a narrowing spill. */
- if (fls64(reg->umax_value) > BITS_PER_BYTE * size)
+ if (get_reg_width(reg) > BITS_PER_BYTE * size)
state->stack[spi].spilled_ptr.id = 0;
} else if (!reg && !(off % BPF_REG_SIZE) && is_bpf_st_mem(insn) &&
insn->imm != 0 && env->bpf_capable) {
@@ -13940,7 +13945,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
return -EACCES;
} else if (src_reg->type == SCALAR_VALUE) {
if (insn->off == 0) {
- bool is_src_reg_u32 = src_reg->umax_value <= U32_MAX;
+ bool is_src_reg_u32 = get_reg_width(src_reg) <= 32;
if (is_src_reg_u32)
assign_scalar_id_before_mov(env, src_reg);
--
2.42.1
* [PATCH bpf-next 08/15] bpf: Assign ID to scalars on spill
2023-12-20 21:39 [PATCH bpf-next 00/15] Improvements for tracking scalars in the BPF verifier Maxim Mikityanskiy
` (6 preceding siblings ...)
2023-12-20 21:40 ` [PATCH bpf-next 07/15] bpf: Add the get_reg_width function Maxim Mikityanskiy
@ 2023-12-20 21:40 ` Maxim Mikityanskiy
2023-12-25 3:15 ` Alexei Starovoitov
2023-12-20 21:40 ` [PATCH bpf-next 09/15] selftests/bpf: Test assigning " Maxim Mikityanskiy
` (6 subsequent siblings)
14 siblings, 1 reply; 24+ messages in thread
From: Maxim Mikityanskiy @ 2023-12-20 21:40 UTC (permalink / raw)
To: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko
Cc: John Fastabend, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, linux-kselftest, netdev,
Maxim Mikityanskiy
From: Maxim Mikityanskiy <maxim@isovalent.com>
Currently, when a scalar bounded register is spilled to the stack, its
ID is preserved, but only if it was already assigned, i.e. if this register
was MOVed before.
Assign an ID on spill if none is set, so that equal scalars could be
tracked if a register is spilled to the stack and filled into another
register.
One test is adjusted to reflect the change in register IDs.
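A reduced model in C of why the ID matters (a sketch, not verifier
code): once the spilled copy shares an ID with the source register, a
later bounds check on the source can be propagated to everything
carrying that ID, the way find_equal_scalars() does for the real
register state:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

struct scalar { uint32_t id; uint64_t umin, umax; };

/* propagate the refined bounds of *known to every scalar with the same ID */
static void propagate_bounds(const struct scalar *known,
                             struct scalar *others, int n)
{
    for (int i = 0; i < n; i++)
        if (others[i].id && others[i].id == known->id) {
            others[i].umin = known->umin;
            others[i].umax = known->umax;
        }
}

int main(void)
{
    struct scalar r0    = { .id = 1, .umin = 0, .umax = UINT64_MAX };
    struct scalar spill = r0;  /* the spill now gets the same ID as r0 */

    r0.umax = 16;              /* e.g. after "if r0 > 16 goto ..." */
    propagate_bounds(&r0, &spill, 1);
    printf("spilled umax = %" PRIu64 "\n", spill.umax); /* 16 */
    return 0;
}

Before this patch the spilled copy had id == 0 unless the register had
been MOVed earlier, so there was nothing for the propagation step to
match on.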
Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
---
kernel/bpf/verifier.c | 8 +++++++-
.../selftests/bpf/progs/verifier_direct_packet_access.c | 2 +-
2 files changed, 8 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index b757fdbbbdd2..caa768f1e369 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4503,9 +4503,15 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
mark_stack_slot_scratched(env, spi);
if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) && env->bpf_capable) {
+ bool reg_value_fits;
+
+ reg_value_fits = get_reg_width(reg) <= BITS_PER_BYTE * size;
+ /* Make sure that reg had an ID to build a relation on spill. */
+ if (reg_value_fits)
+ assign_scalar_id_before_mov(env, reg);
save_register_state(env, state, spi, reg, size);
/* Break the relation on a narrowing spill. */
- if (get_reg_width(reg) > BITS_PER_BYTE * size)
+ if (!reg_value_fits)
state->stack[spi].spilled_ptr.id = 0;
} else if (!reg && !(off % BPF_REG_SIZE) && is_bpf_st_mem(insn) &&
insn->imm != 0 && env->bpf_capable) {
diff --git a/tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c b/tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c
index be95570ab382..28b602ac9cbe 100644
--- a/tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c
+++ b/tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c
@@ -568,7 +568,7 @@ l0_%=: r0 = 0; \
SEC("tc")
__description("direct packet access: test23 (x += pkt_ptr, 4)")
-__failure __msg("invalid access to packet, off=0 size=8, R5(id=2,off=0,r=0)")
+__failure __msg("invalid access to packet, off=0 size=8, R5(id=3,off=0,r=0)")
__flag(BPF_F_ANY_ALIGNMENT)
__naked void test23_x_pkt_ptr_4(void)
{
--
2.42.1
* Re: [PATCH bpf-next 08/15] bpf: Assign ID to scalars on spill
2023-12-20 21:40 ` [PATCH bpf-next 08/15] bpf: Assign ID to scalars on spill Maxim Mikityanskiy
@ 2023-12-25 3:15 ` Alexei Starovoitov
2023-12-25 21:11 ` Maxim Mikityanskiy
0 siblings, 1 reply; 24+ messages in thread
From: Alexei Starovoitov @ 2023-12-25 3:15 UTC (permalink / raw)
To: Maxim Mikityanskiy
Cc: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, John Fastabend, Martin KaFai Lau, Song Liu,
Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
Mykola Lysenko, Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, open list:KERNEL SELFTEST FRAMEWORK,
Network Development, Maxim Mikityanskiy
On Wed, Dec 20, 2023 at 1:40 PM Maxim Mikityanskiy <maxtram95@gmail.com> wrote:
>
> From: Maxim Mikityanskiy <maxim@isovalent.com>
>
> Currently, when a scalar bounded register is spilled to the stack, its
> ID is preserved, but only if it was already assigned, i.e. if this register
> was MOVed before.
>
> Assign an ID on spill if none is set, so that equal scalars could be
> tracked if a register is spilled to the stack and filled into another
> register.
>
> One test is adjusted to reflect the change in register IDs.
>
> Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
> ---
> kernel/bpf/verifier.c | 8 +++++++-
> .../selftests/bpf/progs/verifier_direct_packet_access.c | 2 +-
> 2 files changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index b757fdbbbdd2..caa768f1e369 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -4503,9 +4503,15 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
>
> mark_stack_slot_scratched(env, spi);
> if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) && env->bpf_capable) {
> + bool reg_value_fits;
> +
> + reg_value_fits = get_reg_width(reg) <= BITS_PER_BYTE * size;
> + /* Make sure that reg had an ID to build a relation on spill. */
> + if (reg_value_fits)
> + assign_scalar_id_before_mov(env, reg);
Thanks.
I just debugged this issue as part of my bpf_cmp series.
llvm generated:
1093: (7b) *(u64 *)(r10 -96) = r0 ;
R0_w=scalar(smin=smin32=-4095,smax=smax32=256) R10=fp0
fp-96_w=scalar(smin=smin32=-4095,smax=smax32=256)
; if (bpf_cmp(filepart_length, >, MAX_PATH))
1094: (25) if r0 > 0x100 goto pc+903 ;
R0_w=scalar(id=53,smin=smin32=0,smax=umax=smax32=umax32=256,var_off=(0x0;
0x1ff))
the verifier refined the range of 'r0' here,
but the code just read spilled value from stack:
1116: (79) r1 = *(u64 *)(r10 -64) ; R1_w=map_value
; payload += filepart_length;
1117: (79) r2 = *(u64 *)(r10 -96) ;
R2_w=scalar(smin=smin32=-4095,smax=smax32=256) R10=fp0
fp-96=scalar(smin=smin32=-4095,smax=smax32=256)
1118: (0f) r1 += r2 ;
R1_w=map_value(map=data_heap,ks=4,vs=23040,off=148,smin=smin32=-4095,smax=smax32=3344)
And later errors as:
"R1 min value is negative, either use unsigned index or do a if (index
>=0) check."
This verifier improvement is certainly necessary.
Since you've analyzed this issue did you figure out a workaround
for C code on existing and older kernels?
* Re: [PATCH bpf-next 08/15] bpf: Assign ID to scalars on spill
2023-12-25 3:15 ` Alexei Starovoitov
@ 2023-12-25 21:11 ` Maxim Mikityanskiy
2023-12-25 21:26 ` Alexei Starovoitov
0 siblings, 1 reply; 24+ messages in thread
From: Maxim Mikityanskiy @ 2023-12-25 21:11 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, John Fastabend, Martin KaFai Lau, Song Liu,
Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
Mykola Lysenko, Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, open list:KERNEL SELFTEST FRAMEWORK,
Network Development, Maxim Mikityanskiy
On Sun, 24 Dec 2023 at 19:15:42 -0800, Alexei Starovoitov wrote:
> On Wed, Dec 20, 2023 at 1:40 PM Maxim Mikityanskiy <maxtram95@gmail.com> wrote:
> >
> > From: Maxim Mikityanskiy <maxim@isovalent.com>
> >
> > Currently, when a scalar bounded register is spilled to the stack, its
> > ID is preserved, but only if it was already assigned, i.e. if this register
> > was MOVed before.
> >
> > Assign an ID on spill if none is set, so that equal scalars could be
> > tracked if a register is spilled to the stack and filled into another
> > register.
> >
> > One test is adjusted to reflect the change in register IDs.
> >
> > Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
> > ---
> > kernel/bpf/verifier.c | 8 +++++++-
> > .../selftests/bpf/progs/verifier_direct_packet_access.c | 2 +-
> > 2 files changed, 8 insertions(+), 2 deletions(-)
> >
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index b757fdbbbdd2..caa768f1e369 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -4503,9 +4503,15 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
> >
> > mark_stack_slot_scratched(env, spi);
> > if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) && env->bpf_capable) {
> > + bool reg_value_fits;
> > +
> > + reg_value_fits = get_reg_width(reg) <= BITS_PER_BYTE * size;
> > + /* Make sure that reg had an ID to build a relation on spill. */
> > + if (reg_value_fits)
> > + assign_scalar_id_before_mov(env, reg);
>
> Thanks.
> I just debugged this issue as part of my bpf_cmp series.
>
> llvm generated:
>
> 1093: (7b) *(u64 *)(r10 -96) = r0 ;
> R0_w=scalar(smin=smin32=-4095,smax=smax32=256) R10=fp0
> fp-96_w=scalar(smin=smin32=-4095,smax=smax32=256)
> ; if (bpf_cmp(filepart_length, >, MAX_PATH))
> 1094: (25) if r0 > 0x100 goto pc+903 ;
> R0_w=scalar(id=53,smin=smin32=0,smax=umax=smax32=umax32=256,var_off=(0x0;
> 0x1ff))
>
> the verifier refined the range of 'r0' here,
> but the code just read spilled value from stack:
>
> 1116: (79) r1 = *(u64 *)(r10 -64) ; R1_w=map_value
> ; payload += filepart_length;
> 1117: (79) r2 = *(u64 *)(r10 -96) ;
> R2_w=scalar(smin=smin32=-4095,smax=smax32=256) R10=fp0
> fp-96=scalar(smin=smin32=-4095,smax=smax32=256)
> 1118: (0f) r1 += r2 ;
> R1_w=map_value(map=data_heap,ks=4,vs=23040,off=148,smin=smin32=-4095,smax=smax32=3344)
>
> And later errors as:
> "R1 min value is negative, either use unsigned index or do a if (index
> >=0) check."
>
> This verifier improvement is certainly necessary.
Glad that you found it useful!
> Since you've analyzed this issue did you figure out a workaround
> for C code on existing and older kernels?
Uhm... in my case (Cilium, it was a while ago) I did some big change
(reorganized function calls and revalidate_data() calls) that changed
codegen significantly, and the problematic pattern disappeared.
I can suggest trying to play with volatile, e.g., declare
filepart_length as volatile; if it doesn't help, create another volatile
variable and copy filepart_length to it before doing bpf_cmp (copying
reg->reg will assign an ID, but I'm not sure if they'll still be in
registers after being declared as volatile).
Unfortunately, I couldn't reproduce your issue locally, so I couldn't
try these suggestions myself. Is this the right code, or should I take
it from elsewhere?
https://patchwork.kernel.org/project/netdevbpf/list/?series=812010
What LLVM version do you see the issue on? I can try to look for a
specific C workaround if I reproduce it locally.
BTW, the asm workaround is obvious (copy reg to another reg to assign an
ID), so maybe an inline asm like this would do the thing?
asm volatile("r8 = %0" :: "r"(filepart_length) : "r8");
* Re: [PATCH bpf-next 08/15] bpf: Assign ID to scalars on spill
2023-12-25 21:11 ` Maxim Mikityanskiy
@ 2023-12-25 21:26 ` Alexei Starovoitov
0 siblings, 0 replies; 24+ messages in thread
From: Alexei Starovoitov @ 2023-12-25 21:26 UTC (permalink / raw)
To: Maxim Mikityanskiy
Cc: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, John Fastabend, Martin KaFai Lau, Song Liu,
Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
Mykola Lysenko, Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, open list:KERNEL SELFTEST FRAMEWORK,
Network Development, Maxim Mikityanskiy
On Mon, Dec 25, 2023 at 1:11 PM Maxim Mikityanskiy <maxtram95@gmail.com> wrote:
>
> On Sun, 24 Dec 2023 at 19:15:42 -0800, Alexei Starovoitov wrote:
> > On Wed, Dec 20, 2023 at 1:40 PM Maxim Mikityanskiy <maxtram95@gmail.com> wrote:
> > >
> > > From: Maxim Mikityanskiy <maxim@isovalent.com>
> > >
> > > Currently, when a scalar bounded register is spilled to the stack, its
> > > ID is preserved, but only if it was already assigned, i.e. if this register
> > > was MOVed before.
> > >
> > > Assign an ID on spill if none is set, so that equal scalars could be
> > > tracked if a register is spilled to the stack and filled into another
> > > register.
> > >
> > > One test is adjusted to reflect the change in register IDs.
> > >
> > > Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
> > > ---
> > > kernel/bpf/verifier.c | 8 +++++++-
> > > .../selftests/bpf/progs/verifier_direct_packet_access.c | 2 +-
> > > 2 files changed, 8 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > > index b757fdbbbdd2..caa768f1e369 100644
> > > --- a/kernel/bpf/verifier.c
> > > +++ b/kernel/bpf/verifier.c
> > > @@ -4503,9 +4503,15 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
> > >
> > > mark_stack_slot_scratched(env, spi);
> > > if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) && env->bpf_capable) {
> > > + bool reg_value_fits;
> > > +
> > > + reg_value_fits = get_reg_width(reg) <= BITS_PER_BYTE * size;
> > > + /* Make sure that reg had an ID to build a relation on spill. */
> > > + if (reg_value_fits)
> > > + assign_scalar_id_before_mov(env, reg);
> >
> > Thanks.
> > I just debugged this issue as part of my bpf_cmp series.
> >
> > llvm generated:
> >
> > 1093: (7b) *(u64 *)(r10 -96) = r0 ;
> > R0_w=scalar(smin=smin32=-4095,smax=smax32=256) R10=fp0
> > fp-96_w=scalar(smin=smin32=-4095,smax=smax32=256)
> > ; if (bpf_cmp(filepart_length, >, MAX_PATH))
> > 1094: (25) if r0 > 0x100 goto pc+903 ;
> > R0_w=scalar(id=53,smin=smin32=0,smax=umax=smax32=umax32=256,var_off=(0x0;
> > 0x1ff))
> >
> > the verifier refined the range of 'r0' here,
> > but the code just read spilled value from stack:
> >
> > 1116: (79) r1 = *(u64 *)(r10 -64) ; R1_w=map_value
> > ; payload += filepart_length;
> > 1117: (79) r2 = *(u64 *)(r10 -96) ;
> > R2_w=scalar(smin=smin32=-4095,smax=smax32=256) R10=fp0
> > fp-96=scalar(smin=smin32=-4095,smax=smax32=256)
> > 1118: (0f) r1 += r2 ;
> > R1_w=map_value(map=data_heap,ks=4,vs=23040,off=148,smin=smin32=-4095,smax=smax32=3344)
> >
> > And later errors as:
> > "R1 min value is negative, either use unsigned index or do a if (index
> > >=0) check."
> >
> > This verifier improvement is certainly necessary.
>
> Glad that you found it useful!
>
> > Since you've analyzed this issue did you figure out a workaround
> > for C code on existing and older kernels?
>
> Uhm... in my case (Cilium, it was a while ago) I did some big change
> (reorganized function calls and revalidate_data() calls) that changed
> codegen significantly, and the problematic pattern disappeared.
>
> I can suggest trying to play with volatile, e.g., declare
> filepart_length as volatile; if it doesn't help, create another volatile
> variable and copy filepart_length to it before doing bpf_cmp (copying
> reg->reg will assign an ID, but I'm not sure if they'll still be in
> registers after being declared as volatile).
>
> Unfortunately, I couldn't reproduce your issue locally, so I couldn't
> try these suggestions myself.
No worries.
> What LLVM version do you see the issue on? I can try to look for a
> specific C workaround if I reproduce it locally.
>
> BTW, the asm workaround is obvious (copy reg to another reg to assign an
> ID), so maybe an inline asm like this would do the thing?
>
> asm volatile("r8 = %0" :: "r"(filepart_length) : "r8");
Right. I tried:
asm volatile("%[reg]=%[reg]"::[reg]"r"((short)filepart_length));
and it forces ID assignment, but depending on the code it might still be
too late.
I've seen the pattern:
call ...
*(u64 *)(r10 -96) = r0
r0 = r0 // asm trick above
if r0 > 0x100 goto pc+903
So it may or may not help, but it was good to understand this issue.
* [PATCH bpf-next 09/15] selftests/bpf: Test assigning ID to scalars on spill
2023-12-20 21:39 [PATCH bpf-next 00/15] Improvements for tracking scalars in the BPF verifier Maxim Mikityanskiy
` (7 preceding siblings ...)
2023-12-20 21:40 ` [PATCH bpf-next 08/15] bpf: Assign ID to scalars on spill Maxim Mikityanskiy
@ 2023-12-20 21:40 ` Maxim Mikityanskiy
2023-12-20 21:40 ` [PATCH bpf-next 10/15] bpf: Track spilled unbounded scalars Maxim Mikityanskiy
` (5 subsequent siblings)
14 siblings, 0 replies; 24+ messages in thread
From: Maxim Mikityanskiy @ 2023-12-20 21:40 UTC (permalink / raw)
To: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko
Cc: John Fastabend, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, linux-kselftest, netdev,
Maxim Mikityanskiy
From: Maxim Mikityanskiy <maxim@isovalent.com>
The previous commit implemented assigning IDs to registers holding
scalars before spill. Add the test cases to check the new functionality.
Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
---
.../selftests/bpf/progs/verifier_spill_fill.c | 133 ++++++++++++++++++
1 file changed, 133 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index f2c1fe5b1dba..86881eaab4e2 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -768,4 +768,137 @@ l0_%=: r0 = 0; \
: __clobber_all);
}
+SEC("xdp")
+__description("64-bit spill of 64-bit reg should assign ID")
+__success __retval(0)
+__naked void spill_64bit_of_64bit_ok(void)
+{
+ asm volatile (" \
+ /* Roll one bit to make the register inexact. */\
+ call %[bpf_get_prandom_u32]; \
+ r0 &= 0x80000000; \
+ r0 <<= 32; \
+ /* 64-bit spill r0 to stack - should assign an ID. */\
+ *(u64*)(r10 - 8) = r0; \
+ /* 64-bit fill r1 from stack - should preserve the ID. */\
+ r1 = *(u64*)(r10 - 8); \
+ /* Compare r1 with another register to trigger find_equal_scalars.\
+ * Having one random bit is important here, otherwise the verifier cuts\
+ * the corners. \
+ */ \
+ r2 = 0; \
+ if r1 != r2 goto l0_%=; \
+ /* The result of this comparison is predefined. */\
+ if r0 == r2 goto l0_%=; \
+ /* Dead branch: the verifier should prune it. Do an invalid memory\
+ * access if the verifier follows it. \
+ */ \
+ r0 = *(u64*)(r9 + 0); \
+ exit; \
+l0_%=: r0 = 0; \
+ exit; \
+" :
+ : __imm(bpf_get_prandom_u32)
+ : __clobber_all);
+}
+
+SEC("xdp")
+__description("32-bit spill of 32-bit reg should assign ID")
+__success __retval(0)
+__naked void spill_32bit_of_32bit_ok(void)
+{
+ asm volatile (" \
+ /* Roll one bit to make the register inexact. */\
+ call %[bpf_get_prandom_u32]; \
+ w0 &= 0x80000000; \
+ /* 32-bit spill r0 to stack - should assign an ID. */\
+ *(u32*)(r10 - 8) = r0; \
+ /* 32-bit fill r1 from stack - should preserve the ID. */\
+ r1 = *(u32*)(r10 - 8); \
+ /* Compare r1 with another register to trigger find_equal_scalars.\
+ * Having one random bit is important here, otherwise the verifier cuts\
+ * the corners. \
+ */ \
+ r2 = 0; \
+ if r1 != r2 goto l0_%=; \
+ /* The result of this comparison is predefined. */\
+ if r0 == r2 goto l0_%=; \
+ /* Dead branch: the verifier should prune it. Do an invalid memory\
+ * access if the verifier follows it. \
+ */ \
+ r0 = *(u64*)(r9 + 0); \
+ exit; \
+l0_%=: r0 = 0; \
+ exit; \
+" :
+ : __imm(bpf_get_prandom_u32)
+ : __clobber_all);
+}
+
+SEC("xdp")
+__description("16-bit spill of 16-bit reg should assign ID")
+__success __retval(0)
+__naked void spill_16bit_of_16bit_ok(void)
+{
+ asm volatile (" \
+ /* Roll one bit to make the register inexact. */\
+ call %[bpf_get_prandom_u32]; \
+ r0 &= 0x8000; \
+ /* 16-bit spill r0 to stack - should assign an ID. */\
+ *(u16*)(r10 - 8) = r0; \
+ /* 16-bit fill r1 from stack - should preserve the ID. */\
+ r1 = *(u16*)(r10 - 8); \
+ /* Compare r1 with another register to trigger find_equal_scalars.\
+ * Having one random bit is important here, otherwise the verifier cuts\
+ * the corners. \
+ */ \
+ r2 = 0; \
+ if r1 != r2 goto l0_%=; \
+ /* The result of this comparison is predefined. */\
+ if r0 == r2 goto l0_%=; \
+ /* Dead branch: the verifier should prune it. Do an invalid memory\
+ * access if the verifier follows it. \
+ */ \
+ r0 = *(u64*)(r9 + 0); \
+ exit; \
+l0_%=: r0 = 0; \
+ exit; \
+" :
+ : __imm(bpf_get_prandom_u32)
+ : __clobber_all);
+}
+
+SEC("xdp")
+__description("8-bit spill of 8-bit reg should assign ID")
+__success __retval(0)
+__naked void spill_8bit_of_8bit_ok(void)
+{
+ asm volatile (" \
+ /* Roll one bit to make the register inexact. */\
+ call %[bpf_get_prandom_u32]; \
+ r0 &= 0x80; \
+ /* 8-bit spill r0 to stack - should assign an ID. */\
+ *(u8*)(r10 - 8) = r0; \
+ /* 8-bit fill r1 from stack - should preserve the ID. */\
+ r1 = *(u8*)(r10 - 8); \
+ /* Compare r1 with another register to trigger find_equal_scalars.\
+ * Having one random bit is important here, otherwise the verifier cuts\
+ * the corners. \
+ */ \
+ r2 = 0; \
+ if r1 != r2 goto l0_%=; \
+ /* The result of this comparison is predefined. */\
+ if r0 == r2 goto l0_%=; \
+ /* Dead branch: the verifier should prune it. Do an invalid memory\
+ * access if the verifier follows it. \
+ */ \
+ r0 = *(u64*)(r9 + 0); \
+ exit; \
+l0_%=: r0 = 0; \
+ exit; \
+" :
+ : __imm(bpf_get_prandom_u32)
+ : __clobber_all);
+}
+
char _license[] SEC("license") = "GPL";
--
2.42.1
* [PATCH bpf-next 10/15] bpf: Track spilled unbounded scalars
2023-12-20 21:39 [PATCH bpf-next 00/15] Improvements for tracking scalars in the BPF verifier Maxim Mikityanskiy
` (8 preceding siblings ...)
2023-12-20 21:40 ` [PATCH bpf-next 09/15] selftests/bpf: Test assigning " Maxim Mikityanskiy
@ 2023-12-20 21:40 ` Maxim Mikityanskiy
2023-12-20 21:40 ` [PATCH bpf-next 11/15] selftests/bpf: Test tracking " Maxim Mikityanskiy
` (4 subsequent siblings)
14 siblings, 0 replies; 24+ messages in thread
From: Maxim Mikityanskiy @ 2023-12-20 21:40 UTC (permalink / raw)
To: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko
Cc: John Fastabend, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, linux-kselftest, netdev,
Maxim Mikityanskiy
From: Maxim Mikityanskiy <maxim@isovalent.com>
Support the pattern where an unbounded scalar is spilled to the stack,
then boundary checks are performed on the src register, after which the
stack frame slot is refilled into a register.
Before this commit, the verifier didn't treat the src register and the
stack slot as related if the src register was an unbounded scalar. The
register state wasn't copied, the id wasn't preserved, and the stack
slot was marked as STACK_MISC. Subsequent boundary checks on the src
register wouldn't result in updating the boundaries of the spilled
variable on the stack.
After this commit, the verifier will preserve the bond between src and
dst even if src is unbounded, which makes it possible to do boundary
checks on src and refill dst later, still remembering its boundaries.
Such a pattern is sometimes generated by clang when compiling complex
long functions.
One test is adjusted to reflect the fact that an untracked register is
marked as precise at an earlier stage, and one more test is adjusted to
reflect that now unbounded scalars are tracked.
Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
---
kernel/bpf/verifier.c | 7 +------
tools/testing/selftests/bpf/progs/verifier_spill_fill.c | 6 +++---
tools/testing/selftests/bpf/verifier/precise.c | 6 +++---
3 files changed, 7 insertions(+), 12 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index caa768f1e369..9b5053389739 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4387,11 +4387,6 @@ static bool __is_scalar_unbounded(struct bpf_reg_state *reg)
reg->u32_min_value == 0 && reg->u32_max_value == U32_MAX;
}
-static bool register_is_bounded(struct bpf_reg_state *reg)
-{
- return reg->type == SCALAR_VALUE && !__is_scalar_unbounded(reg);
-}
-
static bool __is_pointer_value(bool allow_ptr_leaks,
const struct bpf_reg_state *reg)
{
@@ -4502,7 +4497,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
return err;
mark_stack_slot_scratched(env, spi);
- if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) && env->bpf_capable) {
+ if (reg && !(off % BPF_REG_SIZE) && reg->type == SCALAR_VALUE && env->bpf_capable) {
bool reg_value_fits;
reg_value_fits = get_reg_width(reg) <= BITS_PER_BYTE * size;
diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index 86881eaab4e2..92e446b18e10 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -454,9 +454,9 @@ l0_%=: r1 >>= 16; \
SEC("raw_tp")
__log_level(2)
__success
-__msg("fp-8=0m??mmmm")
-__msg("fp-16=00mm??mm")
-__msg("fp-24=00mm???m")
+__msg("fp-8=0m??scalar()")
+__msg("fp-16=00mm??scalar()")
+__msg("fp-24=00mm???scalar()")
__naked void spill_subregs_preserve_stack_zero(void)
{
asm volatile (
diff --git a/tools/testing/selftests/bpf/verifier/precise.c b/tools/testing/selftests/bpf/verifier/precise.c
index 8a2ff81d8350..0a9293a57211 100644
--- a/tools/testing/selftests/bpf/verifier/precise.c
+++ b/tools/testing/selftests/bpf/verifier/precise.c
@@ -183,10 +183,10 @@
.prog_type = BPF_PROG_TYPE_XDP,
.flags = BPF_F_TEST_STATE_FREQ,
.errstr = "mark_precise: frame0: last_idx 7 first_idx 7\
- mark_precise: frame0: parent state regs=r4 stack=:\
+ mark_precise: frame0: parent state regs=r4 stack=-8:\
mark_precise: frame0: last_idx 6 first_idx 4\
- mark_precise: frame0: regs=r4 stack= before 6: (b7) r0 = -1\
- mark_precise: frame0: regs=r4 stack= before 5: (79) r4 = *(u64 *)(r10 -8)\
+ mark_precise: frame0: regs=r4 stack=-8 before 6: (b7) r0 = -1\
+ mark_precise: frame0: regs=r4 stack=-8 before 5: (79) r4 = *(u64 *)(r10 -8)\
mark_precise: frame0: regs= stack=-8 before 4: (7b) *(u64 *)(r3 -8) = r0\
mark_precise: frame0: parent state regs=r0 stack=:\
mark_precise: frame0: last_idx 3 first_idx 3\
--
2.42.1
* [PATCH bpf-next 11/15] selftests/bpf: Test tracking spilled unbounded scalars
2023-12-20 21:39 [PATCH bpf-next 00/15] Improvements for tracking scalars in the BPF verifier Maxim Mikityanskiy
` (9 preceding siblings ...)
2023-12-20 21:40 ` [PATCH bpf-next 10/15] bpf: Track spilled unbounded scalars Maxim Mikityanskiy
@ 2023-12-20 21:40 ` Maxim Mikityanskiy
2023-12-20 21:40 ` [PATCH bpf-next 12/15] bpf: Preserve boundaries and track scalars on narrowing fill Maxim Mikityanskiy
` (3 subsequent siblings)
14 siblings, 0 replies; 24+ messages in thread
From: Maxim Mikityanskiy @ 2023-12-20 21:40 UTC (permalink / raw)
To: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko
Cc: John Fastabend, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, linux-kselftest, netdev,
Maxim Mikityanskiy
From: Maxim Mikityanskiy <maxim@isovalent.com>
The previous commit added tracking for unbounded scalars on spill. Add
the test case to check the new functionality.
Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
---
.../selftests/bpf/progs/verifier_spill_fill.c | 27 +++++++++++++++++++
1 file changed, 27 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index 92e446b18e10..809a09732168 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -901,4 +901,31 @@ l0_%=: r0 = 0; \
: __clobber_all);
}
+SEC("xdp")
+__description("spill unbounded reg, then range check src")
+__success __retval(0)
+__naked void spill_unbounded(void)
+{
+ asm volatile (" \
+ /* Produce an unbounded scalar. */ \
+ call %[bpf_get_prandom_u32]; \
+ /* Spill r0 to stack. */ \
+ *(u64*)(r10 - 8) = r0; \
+ /* Boundary check on r0. */ \
+ if r0 > 16 goto l0_%=; \
+ /* Fill r0 from stack. */ \
+ r0 = *(u64*)(r10 - 8); \
+ /* Boundary check on r0 with predetermined result. */\
+ if r0 <= 16 goto l0_%=; \
+ /* Dead branch: the verifier should prune it. Do an invalid memory\
+ * access if the verifier follows it. \
+ */ \
+ r0 = *(u64*)(r9 + 0); \
+l0_%=: r0 = 0; \
+ exit; \
+" :
+ : __imm(bpf_get_prandom_u32)
+ : __clobber_all);
+}
+
char _license[] SEC("license") = "GPL";
--
2.42.1
* [PATCH bpf-next 12/15] bpf: Preserve boundaries and track scalars on narrowing fill
2023-12-20 21:39 [PATCH bpf-next 00/15] Improvements for tracking scalars in the BPF verifier Maxim Mikityanskiy
` (10 preceding siblings ...)
2023-12-20 21:40 ` [PATCH bpf-next 11/15] selftests/bpf: Test tracking " Maxim Mikityanskiy
@ 2023-12-20 21:40 ` Maxim Mikityanskiy
2023-12-26 5:29 ` Shung-Hsi Yu
[not found] ` <a4c8b7b9f03ff3455fbf430862b370abe9337bc9.camel@gmail.com>
2023-12-20 21:40 ` [PATCH bpf-next 13/15] selftests/bpf: Add test cases for " Maxim Mikityanskiy
` (2 subsequent siblings)
14 siblings, 2 replies; 24+ messages in thread
From: Maxim Mikityanskiy @ 2023-12-20 21:40 UTC (permalink / raw)
To: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko
Cc: John Fastabend, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, linux-kselftest, netdev,
Maxim Mikityanskiy
From: Maxim Mikityanskiy <maxim@isovalent.com>
When the width of a fill is smaller than the width of the preceding
spill, the information about scalar boundaries can still be preserved,
as long as it's coerced to the right width (done by coerce_reg_to_size).
Even further, if the actual value fits into the fill width, the ID can
be preserved as well for further tracking of equal scalars.
Implement the above improvements, which makes narrowing fills behave the
same as narrowing spills and MOVs between registers.
Two tests are adjusted to accommodate endianness differences and to
take into account that it's now allowed to do a narrowing fill from the
least significant bits.
reg_bounds_sync is added to coerce_reg_to_size to correctly adjust the
umin/umax boundaries after the var_off truncation. For example, a 64-bit
value 0xXXXXXXXX00000000, when read as a 32-bit value, gets umin = 0, umax =
0xFFFFFFFF, var_off = (0x0; 0xffffffff00000000); this needs to be synced
down to umax = 0, otherwise reg_bounds_sanity_check doesn't pass.
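A minimal sketch of that sync step, using a simplified tnum model (the
in-kernel struct tnum and reg_bounds_sync() differ in detail):

	#include <stdint.h>

	/* Simplified model: value holds the known bits, mask the unknown ones. */
	struct tnum { uint64_t value; uint64_t mask; };

	/* Largest value consistent with var_off. */
	uint64_t tnum_umax(struct tnum t)
	{
		return t.value | t.mask;
	}

	/* The spilled 0xXXXXXXXX00000000 has var_off = {0, 0xffffffff00000000}.
	 * A 32-bit read truncates var_off to {0, 0}, while the size-based bound
	 * alone is umax = 0xFFFFFFFF. Syncing umax = min(umax, tnum_umax(var_off))
	 * tightens it to 0, which is what reg_bounds_sanity_check expects.
	 */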
Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
---
kernel/bpf/verifier.c | 20 ++++++++++---
.../selftests/bpf/progs/verifier_spill_fill.c | 28 +++++++++++++------
2 files changed, 35 insertions(+), 13 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 9b5053389739..b6e252539e52 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4772,7 +4772,13 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
if (dst_regno < 0)
return 0;
- if (!(off % BPF_REG_SIZE) && size == spill_size) {
+ if (size <= spill_size &&
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ !(off % BPF_REG_SIZE)
+#else
+ !((off + size - spill_size) % BPF_REG_SIZE)
+#endif
+ ) {
/* The earlier check_reg_arg() has decided the
* subreg_def for this insn. Save it first.
*/
@@ -4780,6 +4786,12 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
copy_register_state(&state->regs[dst_regno], reg);
state->regs[dst_regno].subreg_def = subreg_def;
+
+ /* Break the relation on a narrowing fill.
+ * coerce_reg_to_size will adjust the boundaries.
+ */
+ if (get_reg_width(reg) > size * BITS_PER_BYTE)
+ state->regs[dst_regno].id = 0;
} else {
int spill_cnt = 0, zero_cnt = 0;
@@ -6055,10 +6067,10 @@ static void coerce_reg_to_size(struct bpf_reg_state *reg, int size)
* values are also truncated so we push 64-bit bounds into
* 32-bit bounds. Above were truncated < 32-bits already.
*/
- if (size < 4) {
+ if (size < 4)
__mark_reg32_unbounded(reg);
- reg_bounds_sync(reg);
- }
+
+ reg_bounds_sync(reg);
}
static void set_sext64_default_val(struct bpf_reg_state *reg, int size)
diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index 809a09732168..de03e72e07a9 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -217,7 +217,7 @@ __naked void uninit_u32_from_the_stack(void)
SEC("tc")
__description("Spill a u32 const scalar. Refill as u16. Offset to skb->data")
-__failure __msg("invalid access to packet")
+__success __retval(0)
__naked void u16_offset_to_skb_data(void)
{
asm volatile (" \
@@ -225,19 +225,24 @@ __naked void u16_offset_to_skb_data(void)
r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \
w4 = 20; \
*(u32*)(r10 - 8) = r4; \
- r4 = *(u16*)(r10 - 8); \
+ r4 = *(u16*)(r10 - %[offset]); \
r0 = r2; \
- /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */\
+ /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=20 */\
r0 += r4; \
- /* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=umax=65535 */\
+ /* if (r0 > r3) R0=pkt,off=20 R2=pkt R3=pkt_end R4=20 */\
if r0 > r3 goto l0_%=; \
- /* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=20 */\
+ /* r0 = *(u32 *)r2 R0=pkt,off=20 R2=pkt R3=pkt_end R4=20 */\
r0 = *(u32*)(r2 + 0); \
l0_%=: r0 = 0; \
exit; \
" :
: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
- __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+ __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)),
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ __imm_const(offset, 8)
+#else
+ __imm_const(offset, 6)
+#endif
: __clobber_all);
}
@@ -270,7 +275,7 @@ l0_%=: r0 = 0; \
}
SEC("tc")
-__description("Spill a u32 const scalar. Refill as u16 from fp-6. Offset to skb->data")
+__description("Spill a u32 const scalar. Refill as u16 from MSB. Offset to skb->data")
__failure __msg("invalid access to packet")
__naked void _6_offset_to_skb_data(void)
{
@@ -279,7 +284,7 @@ __naked void _6_offset_to_skb_data(void)
r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \
w4 = 20; \
*(u32*)(r10 - 8) = r4; \
- r4 = *(u16*)(r10 - 6); \
+ r4 = *(u16*)(r10 - %[offset]); \
r0 = r2; \
/* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */\
r0 += r4; \
@@ -291,7 +296,12 @@ l0_%=: r0 = 0; \
exit; \
" :
: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
- __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+ __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)),
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ __imm_const(offset, 6)
+#else
+ __imm_const(offset, 8)
+#endif
: __clobber_all);
}
--
2.42.1
^ permalink raw reply related [flat|nested] 24+ messages in thread
* Re: [PATCH bpf-next 12/15] bpf: Preserve boundaries and track scalars on narrowing fill
2023-12-20 21:40 ` [PATCH bpf-next 12/15] bpf: Preserve boundaries and track scalars on narrowing fill Maxim Mikityanskiy
@ 2023-12-26 5:29 ` Shung-Hsi Yu
[not found] ` <a4c8b7b9f03ff3455fbf430862b370abe9337bc9.camel@gmail.com>
1 sibling, 0 replies; 24+ messages in thread
From: Shung-Hsi Yu @ 2023-12-26 5:29 UTC (permalink / raw)
To: Maxim Mikityanskiy
Cc: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, John Fastabend, Martin KaFai Lau, Song Liu,
Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
Mykola Lysenko, Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, linux-kselftest, netdev,
Maxim Mikityanskiy
On Wed, Dec 20, 2023 at 11:40:10PM +0200, Maxim Mikityanskiy wrote:
> When the width of a fill is smaller than the width of the preceding
> spill, the information about scalar boundaries can still be preserved,
> as long as it's coerced to the right width (done by coerce_reg_to_size).
> Even further, if the actual value fits into the fill width, the ID can
> be preserved as well for further tracking of equal scalars.
>
> Implement the above improvements, which makes narrowing fills behave the
> same as narrowing spills and MOVs between registers.
>
> Two tests are adjusted to accommodate for endianness differences and to
> take into account that it's now allowed to do a narrowing fill from the
> least significant bits.
>
> reg_bounds_sync is added to coerce_reg_to_size to correctly adjust
> umin/umax boundaries after the var_off truncation, for example, a 64-bit
> value 0xXXXXXXXX00000000, when read as a 32-bit, gets umin = 0, umax =
> 0xFFFFFFFF, var_off = (0x0; 0xffffffff00000000), which needs to be
> synced down to umax = 0, otherwise reg_bounds_sanity_check doesn't pass.
>
> Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
> ---
> kernel/bpf/verifier.c | 20 ++++++++++---
> .../selftests/bpf/progs/verifier_spill_fill.c | 28 +++++++++++++------
> 2 files changed, 35 insertions(+), 13 deletions(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 9b5053389739..b6e252539e52 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -4772,7 +4772,13 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
> if (dst_regno < 0)
> return 0;
>
> - if (!(off % BPF_REG_SIZE) && size == spill_size) {
> + if (size <= spill_size &&
> +#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
> + !(off % BPF_REG_SIZE)
> +#else
> + !((off + size - spill_size) % BPF_REG_SIZE)
> +#endif
If I understand correctly, it is preferred to keep endianness-checking
macros out of verifier.c and have a helper function handle this instead.
E.g. see bpf_ctx_narrow_access_offset() in include/linux/filter.h.
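Something along these lines might work (untested sketch, the helper name is
made up for illustration):

	/* Hypothetical helper, e.g. in include/linux/filter.h; not an existing API. */
	static inline int bpf_spill_fill_stack_off(int off, int fill_size,
						   int spill_size)
	{
	#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
		return off;
	#else
		return off + fill_size - spill_size;
	#endif
	}

so that check_stack_read_fixed_off() only needs
!(bpf_spill_fill_stack_off(off, size, spill_size) % BPF_REG_SIZE).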
> [...]
^ permalink raw reply [flat|nested] 24+ messages in thread
[parent not found: <a4c8b7b9f03ff3455fbf430862b370abe9337bc9.camel@gmail.com>]
* Re: [PATCH bpf-next 12/15] bpf: Preserve boundaries and track scalars on narrowing fill
[not found] ` <a4c8b7b9f03ff3455fbf430862b370abe9337bc9.camel@gmail.com>
@ 2024-01-05 17:48 ` Maxim Mikityanskiy
0 siblings, 0 replies; 24+ messages in thread
From: Maxim Mikityanskiy @ 2024-01-05 17:48 UTC (permalink / raw)
To: Eduard Zingerman
Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
John Fastabend, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, linux-kselftest, netdev,
Maxim Mikityanskiy
On Thu, 04 Jan 2024 at 04:27:00 +0200, Eduard Zingerman wrote:
> On Wed, 2023-12-20 at 23:40 +0200, Maxim Mikityanskiy wrote:
>
> [...]
>
> The two tests below were added by the following commit:
> ef979017b837 ("bpf: selftest: Add verifier tests for <8-byte scalar spill and refill")
>
> As far as I understand, the original intent was to check the behavior
> for stack read/write with non-matching size.
> I think these tests are redundant after patch #13. Wdyt?
_6_offset_to_skb_data is for sure not redundant. I don't test a partial
fill from the most significant bits in my patch 13.
u16_offset_to_skb_data is somewhat similar to
fill_32bit_after_spill_64bit, but they aren't exactly the same: the
former spills (u32)20 and fills (u16)20 (the same value), while my test
spills (u64)0xXXXXXXXX00000000 and fills (u32)0 (the most significant
bits are stripped). Maybe u16_offset_to_skb_data is redundant, but more
coverage is better than less coverage, isn't it?
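To spell out the difference (a sketch, stack offsets shown for little
endian):

	/* u16_offset_to_skb_data: the filled value is the spilled constant */
	w4 = 20;
	*(u32*)(r10 - 8) = r4;    /* spill (u32)20           */
	r4 = *(u16*)(r10 - 8);    /* fill (u16): still 20    */

	/* fill_32bit_after_spill_64bit: the random upper bits are stripped */
	call bpf_get_prandom_u32;
	r0 <<= 32;                /* r0 = 0xXXXXXXXX00000000 */
	*(u64*)(r10 - 8) = r0;    /* spill (u64)             */
	r0 = *(u32*)(r10 - 8);    /* fill (u32): known 0     */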
> > diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> > index 809a09732168..de03e72e07a9 100644
> > --- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> > +++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> > @@ -217,7 +217,7 @@ __naked void uninit_u32_from_the_stack(void)
> >
> > SEC("tc")
> > __description("Spill a u32 const scalar. Refill as u16. Offset to skb->data")
> > -__failure __msg("invalid access to packet")
> > +__success __retval(0)
> > __naked void u16_offset_to_skb_data(void)
> > {
> > asm volatile (" \
> > @@ -225,19 +225,24 @@ __naked void u16_offset_to_skb_data(void)
> > r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \
> > w4 = 20; \
> > *(u32*)(r10 - 8) = r4; \
> > - r4 = *(u16*)(r10 - 8); \
> > + r4 = *(u16*)(r10 - %[offset]); \
> > r0 = r2; \
> > - /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */\
> > + /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=20 */\
> > r0 += r4; \
> > - /* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=umax=65535 */\
> > + /* if (r0 > r3) R0=pkt,off=20 R2=pkt R3=pkt_end R4=20 */\
> > if r0 > r3 goto l0_%=; \
> > - /* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=20 */\
> > + /* r0 = *(u32 *)r2 R0=pkt,off=20 R2=pkt R3=pkt_end R4=20 */\
> > r0 = *(u32*)(r2 + 0); \
> > l0_%=: r0 = 0; \
> > exit; \
> > " :
> > : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
> > - __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
> > + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)),
> > +#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
> > + __imm_const(offset, 8)
> > +#else
> > + __imm_const(offset, 6)
> > +#endif
> > : __clobber_all);
> > }
> >
> > @@ -270,7 +275,7 @@ l0_%=: r0 = 0; \
> > }
> >
> > SEC("tc")
> > -__description("Spill a u32 const scalar. Refill as u16 from fp-6. Offset to skb->data")
> > +__description("Spill a u32 const scalar. Refill as u16 from MSB. Offset to skb->data")
> > __failure __msg("invalid access to packet")
> > __naked void _6_offset_to_skb_data(void)
> > {
> > @@ -279,7 +284,7 @@ __naked void _6_offset_to_skb_data(void)
> > r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \
> > w4 = 20; \
> > *(u32*)(r10 - 8) = r4; \
> > - r4 = *(u16*)(r10 - 6); \
> > + r4 = *(u16*)(r10 - %[offset]); \
> > r0 = r2; \
> > /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */\
> > r0 += r4; \
> > @@ -291,7 +296,12 @@ l0_%=: r0 = 0; \
> > exit; \
> > " :
> > : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
> > - __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
> > + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)),
> > +#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
> > + __imm_const(offset, 6)
> > +#else
> > + __imm_const(offset, 8)
> > +#endif
> > : __clobber_all);
> > }
> >
>
>
>
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH bpf-next 13/15] selftests/bpf: Add test cases for narrowing fill
2023-12-20 21:39 [PATCH bpf-next 00/15] Improvements for tracking scalars in the BPF verifier Maxim Mikityanskiy
` (11 preceding siblings ...)
2023-12-20 21:40 ` [PATCH bpf-next 12/15] bpf: Preserve boundaries and track scalars on narrowing fill Maxim Mikityanskiy
@ 2023-12-20 21:40 ` Maxim Mikityanskiy
2023-12-20 21:40 ` [PATCH bpf-next 14/15] bpf: Optimize state pruning for spilled scalars Maxim Mikityanskiy
2023-12-20 21:40 ` [PATCH bpf-next 15/15] selftests/bpf: states pruning checks for scalar vs STACK_{MISC,ZERO} Maxim Mikityanskiy
14 siblings, 0 replies; 24+ messages in thread
From: Maxim Mikityanskiy @ 2023-12-20 21:40 UTC (permalink / raw)
To: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko
Cc: John Fastabend, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, linux-kselftest, netdev,
Maxim Mikityanskiy
From: Maxim Mikityanskiy <maxim@isovalent.com>
The previous commit made it possible to preserve boundaries and track IDs
of scalars on narrowing fills. Add test cases for that pattern.
Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
---
.../selftests/bpf/progs/verifier_spill_fill.c | 108 ++++++++++++++++++
1 file changed, 108 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index de03e72e07a9..df195cf5c77b 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -938,4 +938,112 @@ l0_%=: r0 = 0; \
: __clobber_all);
}
+SEC("xdp")
+__description("32-bit fill after 64-bit spill")
+__success __retval(0)
+__naked void fill_32bit_after_spill_64bit(void)
+{
+ asm volatile(" \
+ /* Randomize the upper 32 bits. */ \
+ call %[bpf_get_prandom_u32]; \
+ r0 <<= 32; \
+ /* 64-bit spill r0 to stack. */ \
+ *(u64*)(r10 - 8) = r0; \
+ /* 32-bit fill r0 from stack. */ \
+ r0 = *(u32*)(r10 - %[offset]); \
+ /* Boundary check on r0 with predetermined result. */\
+ if r0 == 0 goto l0_%=; \
+ /* Dead branch: the verifier should prune it. Do an invalid memory\
+ * access if the verifier follows it. \
+ */ \
+ r0 = *(u64*)(r9 + 0); \
+l0_%=: exit; \
+" :
+ : __imm(bpf_get_prandom_u32),
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ __imm_const(offset, 8)
+#else
+ __imm_const(offset, 4)
+#endif
+ : __clobber_all);
+}
+
+SEC("xdp")
+__description("32-bit fill after 64-bit spill of 32-bit value should preserve ID")
+__success __retval(0)
+__naked void fill_32bit_after_spill_64bit_preserve_id(void)
+{
+ asm volatile (" \
+ /* Randomize the lower 32 bits. */ \
+ call %[bpf_get_prandom_u32]; \
+ w0 &= 0xffffffff; \
+ /* 64-bit spill r0 to stack - should assign an ID. */\
+ *(u64*)(r10 - 8) = r0; \
+ /* 32-bit fill r1 from stack - should preserve the ID. */\
+ r1 = *(u32*)(r10 - %[offset]); \
+ /* Compare r1 with another register to trigger find_equal_scalars. */\
+ r2 = 0; \
+ if r1 != r2 goto l0_%=; \
+ /* The result of this comparison is predefined. */\
+ if r0 == r2 goto l0_%=; \
+ /* Dead branch: the verifier should prune it. Do an invalid memory\
+ * access if the verifier follows it. \
+ */ \
+ r0 = *(u64*)(r9 + 0); \
+ exit; \
+l0_%=: r0 = 0; \
+ exit; \
+" :
+ : __imm(bpf_get_prandom_u32),
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ __imm_const(offset, 8)
+#else
+ __imm_const(offset, 4)
+#endif
+ : __clobber_all);
+}
+
+SEC("xdp")
+__description("32-bit fill after 64-bit spill should clear ID")
+__failure __msg("math between ctx pointer and 4294967295 is not allowed")
+__naked void fill_32bit_after_spill_64bit_clear_id(void)
+{
+ asm volatile (" \
+ r6 = r1; \
+ /* Roll one bit to force the verifier to track both branches. */\
+ call %[bpf_get_prandom_u32]; \
+ r0 &= 0x8; \
+ /* Put a large number into r1. */ \
+ r1 = 0xffffffff; \
+ r1 <<= 32; \
+ r1 += r0; \
+ /* 64-bit spill r1 to stack - should assign an ID. */\
+ *(u64*)(r10 - 8) = r1; \
+ /* 32-bit fill r2 from stack - should clear the ID. */\
+ r2 = *(u32*)(r10 - %[offset]); \
+ /* Compare r2 with another register to trigger find_equal_scalars.\
+ * Having one random bit is important here, otherwise the verifier cuts\
+ * the corners. If the ID was mistakenly preserved on fill, this would\
+ * cause the verifier to think that r1 is also equal to zero in one of\
+ * the branches, and equal to eight on the other branch.\
+ */ \
+ r3 = 0; \
+ if r2 != r3 goto l0_%=; \
+l0_%=: r1 >>= 32; \
+ /* The verifier shouldn't propagate r2's range to r1, so it should\
+ * still remember r1 = 0xffffffff and reject the below.\
+ */ \
+ r6 += r1; \
+ r0 = *(u32*)(r6 + 0); \
+ exit; \
+" :
+ : __imm(bpf_get_prandom_u32),
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ __imm_const(offset, 8)
+#else
+ __imm_const(offset, 4)
+#endif
+ : __clobber_all);
+}
+
char _license[] SEC("license") = "GPL";
--
2.42.1
^ permalink raw reply related [flat|nested] 24+ messages in thread
* [PATCH bpf-next 14/15] bpf: Optimize state pruning for spilled scalars
2023-12-20 21:39 [PATCH bpf-next 00/15] Improvements for tracking scalars in the BPF verifier Maxim Mikityanskiy
` (12 preceding siblings ...)
2023-12-20 21:40 ` [PATCH bpf-next 13/15] selftests/bpf: Add test cases for " Maxim Mikityanskiy
@ 2023-12-20 21:40 ` Maxim Mikityanskiy
2023-12-20 21:40 ` [PATCH bpf-next 15/15] selftests/bpf: states pruning checks for scalar vs STACK_{MISC,ZERO} Maxim Mikityanskiy
14 siblings, 0 replies; 24+ messages in thread
From: Maxim Mikityanskiy @ 2023-12-20 21:40 UTC (permalink / raw)
To: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko
Cc: John Fastabend, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, linux-kselftest, netdev
From: Eduard Zingerman <eddyz87@gmail.com>
Changes for scalar ID tracking of spilled unbound scalars lead to a
certain verification performance regression. This commit mitigates the
regression by exploiting the following properties maintained by
check_stack_read_fixed_off():
- a mix of STACK_MISC, STACK_ZERO and STACK_INVALID marks is read as
unbounded scalar register;
- spi with all slots marked STACK_ZERO is read as scalar register with
value zero.
This commit modifies stacksafe() to consider the situations above
equivalent.
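For example (a sketch mirroring the selftests added in patch 15), the two
branches below leave fp-8 as a spilled unbound scalar on one path and as
STACK_MISC on the other; with this change the verifier states at the final
load are considered equivalent:

	call bpf_ktime_get_ns;
	if r0 == 0 goto 1f;
	call bpf_ktime_get_ns;
	*(u64*)(r10 - 8) = r0;    /* fp-8: spilled unbound scalar */
	goto 2f;
1:	call bpf_ktime_get_ns;
	*(u64*)(r10 - 8) = r0;
	*(u32*)(r10 - 4) = r0;    /* partial overwrite marks fp-8 STACK_MISC */
2:	r0 = *(u64*)(r10 - 8);    /* both paths: unbound scalar */
	exit;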
Veristat results after this patch show significant gains:
$ ./veristat -e file,prog,states -f '!states_pct<10' -f '!states_b<10' -C not-opt after
File              Program   States (A)  States (B)     States (DIFF)
----------------  --------  ----------  ----------  ----------------
pyperf180.bpf.o   on_event       10456        8422   -2034 (-19.45%)
pyperf600.bpf.o   on_event       37319       22519  -14800 (-39.66%)
strobemeta.bpf.o  on_event       13435        4703   -8732 (-64.99%)
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
kernel/bpf/verifier.c | 83 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 83 insertions(+)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index b6e252539e52..a020d4d83524 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1168,6 +1168,12 @@ static void mark_stack_slot_misc(struct bpf_verifier_env *env, u8 *stype)
*stype = STACK_MISC;
}
+static bool is_spilled_scalar_reg64(const struct bpf_stack_state *stack)
+{
+ return stack->slot_type[0] == STACK_SPILL &&
+ stack->spilled_ptr.type == SCALAR_VALUE;
+}
+
static void scrub_spilled_slot(u8 *stype)
{
if (*stype != STACK_INVALID)
@@ -16449,11 +16455,45 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
}
}
+static bool is_stack_zero64(struct bpf_stack_state *stack)
+{
+ u32 i;
+
+ for (i = 0; i < ARRAY_SIZE(stack->slot_type); ++i)
+ if (stack->slot_type[i] != STACK_ZERO)
+ return false;
+ return true;
+}
+
+static bool is_stack_unbound_slot64(struct bpf_verifier_env *env,
+ struct bpf_stack_state *stack)
+{
+ u32 i;
+
+ for (i = 0; i < ARRAY_SIZE(stack->slot_type); ++i)
+ if (stack->slot_type[i] != STACK_ZERO &&
+ stack->slot_type[i] != STACK_MISC &&
+ (!env->allow_uninit_stack || stack->slot_type[i] != STACK_INVALID))
+ return false;
+ return true;
+}
+
+static bool is_spilled_unbound_scalar_reg64(struct bpf_stack_state *stack)
+{
+ return is_spilled_scalar_reg64(stack) && __is_scalar_unbounded(&stack->spilled_ptr);
+}
+
static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
struct bpf_func_state *cur, struct bpf_idmap *idmap, bool exact)
{
+ struct bpf_reg_state unbound_reg = {};
+ struct bpf_reg_state zero_reg = {};
int i, spi;
+ __mark_reg_unknown(env, &unbound_reg);
+ __mark_reg_const_zero(env, &zero_reg);
+ zero_reg.precise = true;
+
/* walk slots of the explored stack and ignore any additional
* slots in the current stack, since explored(safe) state
* didn't use them
@@ -16474,6 +16514,49 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
continue;
}
+ /* load of stack value with all MISC and ZERO slots produces unbounded
+ * scalar value, call regsafe to ensure scalar ids are compared.
+ */
+ if (is_spilled_unbound_scalar_reg64(&old->stack[spi]) &&
+ is_stack_unbound_slot64(env, &cur->stack[spi])) {
+ i += BPF_REG_SIZE - 1;
+ if (!regsafe(env, &old->stack[spi].spilled_ptr, &unbound_reg,
+ idmap, exact))
+ return false;
+ continue;
+ }
+
+ if (is_stack_unbound_slot64(env, &old->stack[spi]) &&
+ is_spilled_unbound_scalar_reg64(&cur->stack[spi])) {
+ i += BPF_REG_SIZE - 1;
+ if (!regsafe(env, &unbound_reg, &cur->stack[spi].spilled_ptr,
+ idmap, exact))
+ return false;
+ continue;
+ }
+
+ /* load of stack value with all ZERO slots produces scalar value 0,
+ * call regsafe to ensure scalar ids are compared and precision
+ * flags are taken into account.
+ */
+ if (is_spilled_scalar_reg64(&old->stack[spi]) &&
+ is_stack_zero64(&cur->stack[spi])) {
+ if (!regsafe(env, &old->stack[spi].spilled_ptr, &zero_reg,
+ idmap, exact))
+ return false;
+ i += BPF_REG_SIZE - 1;
+ continue;
+ }
+
+ if (is_stack_zero64(&old->stack[spi]) &&
+ is_spilled_scalar_reg64(&cur->stack[spi])) {
+ if (!regsafe(env, &zero_reg, &cur->stack[spi].spilled_ptr,
+ idmap, exact))
+ return false;
+ i += BPF_REG_SIZE - 1;
+ continue;
+ }
+
if (old->stack[spi].slot_type[i % BPF_REG_SIZE] == STACK_INVALID)
continue;
--
2.42.1
^ permalink raw reply related [flat|nested] 24+ messages in thread
* [PATCH bpf-next 15/15] selftests/bpf: states pruning checks for scalar vs STACK_{MISC,ZERO}
2023-12-20 21:39 [PATCH bpf-next 00/15] Improvements for tracking scalars in the BPF verifier Maxim Mikityanskiy
` (13 preceding siblings ...)
2023-12-20 21:40 ` [PATCH bpf-next 14/15] bpf: Optimize state pruning for spilled scalars Maxim Mikityanskiy
@ 2023-12-20 21:40 ` Maxim Mikityanskiy
14 siblings, 0 replies; 24+ messages in thread
From: Maxim Mikityanskiy @ 2023-12-20 21:40 UTC (permalink / raw)
To: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko
Cc: John Fastabend, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
Shuah Khan, David S. Miller, Jakub Kicinski,
Jesper Dangaard Brouer, bpf, linux-kselftest, netdev
From: Eduard Zingerman <eddyz87@gmail.com>
Check that stacksafe() considers the following old vs cur stack spill
state combinations equivalent:
- spill of unbound scalar vs combination of STACK_{MISC,ZERO,INVALID}
- STACK_MISC vs spill of unbound scalar
- spill of scalar 0 vs STACK_ZERO
- STACK_ZERO vs spill of scalar 0
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
.../selftests/bpf/progs/verifier_spill_fill.c | 192 ++++++++++++++++++
1 file changed, 192 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index df195cf5c77b..e2acc4fc3d10 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -1046,4 +1046,196 @@ l0_%=: r1 >>= 32; \
: __clobber_all);
}
+/* stacksafe(): check if spill of unbound scalar in old state is
+ * considered equivalent to any state of the spill in the current state.
+ *
+ * On the first verification path an unbound scalar is written for
+ * fp-8 and later marked precise.
+ * On the second verification path a mix of STACK_MISC/ZERO/INVALID is
+ * written to fp-8. These should be considered equivalent.
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("10: (79) r0 = *(u64 *)(r10 -8)")
+__msg("10: safe")
+__msg("processed 16 insns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void old_unbound_scalar_vs_cur_anything(void)
+{
+ asm volatile(
+ /* get a random value for branching */
+ "call %[bpf_ktime_get_ns];"
+ "r7 = r0;"
+ /* get a random value for storing at fp-8 */
+ "call %[bpf_ktime_get_ns];"
+ "if r7 == 0 goto 1f;"
+ /* unbound scalar written to fp-8 */
+ "*(u64*)(r10 - 8) = r0;"
+ "goto 2f;"
+"1:"
+ /* mark fp-8 as mix of STACK_MISC/ZERO/INVALID */
+ "r1 = 0;"
+ "*(u8*)(r10 - 8) = r0;"
+ "*(u8*)(r10 - 7) = r1;"
+ /* fp-2..fp-6 remain STACK_INVALID */
+ "*(u8*)(r10 - 1) = r0;"
+"2:"
+ /* read fp-8 and force it precise, should be considered safe
+ * on second visit
+ */
+ "r0 = *(u64*)(r10 - 8);"
+ "r0 &= 0xff;"
+ "r1 = r10;"
+ "r1 += r0;"
+ "exit;"
+ :
+ : __imm(bpf_ktime_get_ns)
+ : __clobber_all);
+}
+
+/* stacksafe(): check if STACK_MISC in old state is considered
+ * equivalent to stack spill of unbound scalar in cur state.
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("8: (79) r0 = *(u64 *)(r10 -8) ; R0_w=scalar(id=1) R10=fp0 fp-8=scalar(id=1)")
+__msg("8: safe")
+__msg("processed 11 insns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void old_unbound_scalar_vs_cur_stack_misc(void)
+{
+ asm volatile(
+ /* get a random value for branching */
+ "call %[bpf_ktime_get_ns];"
+ "if r0 == 0 goto 1f;"
+ /* conjure unbound scalar at fp-8 */
+ "call %[bpf_ktime_get_ns];"
+ "*(u64*)(r10 - 8) = r0;"
+ "goto 2f;"
+"1:"
+ /* conjure STACK_MISC at fp-8 */
+ "call %[bpf_ktime_get_ns];"
+ "*(u64*)(r10 - 8) = r0;"
+ "*(u32*)(r10 - 4) = r0;"
+"2:"
+ /* read fp-8, should be considered safe on second visit */
+ "r0 = *(u64*)(r10 - 8);"
+ "exit;"
+ :
+ : __imm(bpf_ktime_get_ns)
+ : __clobber_all);
+}
+
+/* stacksafe(): check if stack spill of unbound scalar in old state is
+ * considered equivalent to STACK_MISC in cur state.
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("8: (79) r0 = *(u64 *)(r10 -8) ; R0_w=scalar() R10=fp0 fp-8=mmmmmmmm")
+__msg("8: safe")
+__msg("processed 11 insns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void old_stack_misc_vs_cur_unbound_scalar(void)
+{
+ asm volatile(
+ /* get a random value for branching */
+ "call %[bpf_ktime_get_ns];"
+ "if r0 == 0 goto 1f;"
+ /* conjure STACK_MISC at fp-8 */
+ "call %[bpf_ktime_get_ns];"
+ "*(u64*)(r10 - 8) = r0;"
+ "*(u32*)(r10 - 4) = r0;"
+ "goto 2f;"
+"1:"
+ /* conjure unbound scalar at fp-8 */
+ "call %[bpf_ktime_get_ns];"
+ "*(u64*)(r10 - 8) = r0;"
+"2:"
+ /* read fp-8, should be considered safe on second visit */
+ "r0 = *(u64*)(r10 - 8);"
+ "exit;"
+ :
+ : __imm(bpf_ktime_get_ns)
+ : __clobber_all);
+}
+
+/* stacksafe(): check if spill of register with value 0 in old state
+ * is considered equivalent to STACK_ZERO.
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("9: (79) r0 = *(u64 *)(r10 -8)")
+__msg("9: safe")
+__msg("processed 15 insns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void old_spill_zero_vs_stack_zero(void)
+{
+ asm volatile(
+ /* get a random value for branching */
+ "call %[bpf_ktime_get_ns];"
+ "r7 = r0;"
+ /* get a random value for storing at fp-8 */
+ "call %[bpf_ktime_get_ns];"
+ "if r7 == 0 goto 1f;"
+ /* conjure spilled register with value 0 at fp-8 */
+ "*(u64*)(r10 - 8) = r0;"
+ "if r0 != 0 goto 3f;"
+ "goto 2f;"
+"1:"
+ /* conjure STACK_ZERO at fp-8 */
+ "r1 = 0;"
+ "*(u64*)(r10 - 8) = r1;"
+"2:"
+ /* read fp-8 and force it precise, should be considered safe
+ * on second visit
+ */
+ "r0 = *(u64*)(r10 - 8);"
+ "r1 = r10;"
+ "r1 += r0;"
+"3:"
+ "exit;"
+ :
+ : __imm(bpf_ktime_get_ns)
+ : __clobber_all);
+}
+
+/* stacksafe(): similar to old_spill_zero_vs_stack_zero() but the
+ * other way around: check if STACK_ZERO is considered equivalent to
+ * spill of register with value 0.
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("8: (79) r0 = *(u64 *)(r10 -8)")
+__msg("8: safe")
+__msg("processed 14 insns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void old_stack_zero_vs_spill_zero(void)
+{
+ asm volatile(
+ /* get a random value for branching */
+ "call %[bpf_ktime_get_ns];"
+ "if r0 == 0 goto 1f;"
+ /* conjure STACK_ZERO at fp-8 */
+ "r1 = 0;"
+ "*(u64*)(r10 - 8) = r1;"
+ "goto 2f;"
+"1:"
+ /* conjure spilled register with value 0 at fp-8 */
+ "call %[bpf_ktime_get_ns];"
+ "*(u64*)(r10 - 8) = r0;"
+ "if r0 != 0 goto 3f;"
+"2:"
+ /* read fp-8 and force it precise, should be considered safe
+ * on second visit
+ */
+ "r0 = *(u64*)(r10 - 8);"
+ "r1 = r10;"
+ "r1 += r0;"
+"3:"
+ "exit;"
+ :
+ : __imm(bpf_ktime_get_ns)
+ : __clobber_all);
+}
+
char _license[] SEC("license") = "GPL";
--
2.42.1
^ permalink raw reply related [flat|nested] 24+ messages in thread