BPF List
From: Andrii Nakryiko <andrii@kernel.org>
To: <bpf@vger.kernel.org>, <ast@kernel.org>, <daniel@iogearbox.net>,
	<martin.lau@kernel.org>
Cc: <andrii@kernel.org>, <kernel-team@meta.com>,
	Eduard Zingerman <eddyz87@gmail.com>
Subject: [PATCH v3 bpf-next 09/10] selftests/bpf: validate precision logic in partial_stack_load_preserves_zeros
Date: Mon, 4 Dec 2023 11:26:00 -0800
Message-ID: <20231204192601.2672497-10-andrii@kernel.org>
In-Reply-To: <20231204192601.2672497-1-andrii@kernel.org>

Enhance the partial_stack_load_preserves_zeros subtest with detailed
precision propagation log checks. We now expect fp-16 to hold a spilled,
initially imprecise, zero constant register, which is later marked as
precise even when a partial stack slot load is performed, i.e., even
when it's not a full register fill (!).
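The mark_precise log lines checked below describe a backward walk over the
instruction stream: the precision requirement starts at r2 (the operand of
"r1 += r2" at insn 11), migrates to stack slot fp-16 at the partial u8 load,
and finally lands on r0 at the "r0 = 0" instruction. The following plain
Python sketch (purely illustrative, not kernel code; instruction indices and
their effects are hardcoded from the expected log) models that walk:

```python
# Illustrative sketch only: models the backward precision propagation
# that the mark_precise log lines below describe. Not verifier code.

def backtrack_precision():
    """Walk insns 10..3 backwards, tracking what must be precise.

    Starting point: "r1 += r2" at insn 11 requires r2 to be precise.
    Returns (insn_idx, precise_regs, precise_stack_slots) snapshots
    taken *before* each instruction is processed, mirroring the
    "regs=... stack=... before N" lines in the verifier log.
    """
    regs, stack = {"r2"}, set()
    trace = []
    for idx in range(10, 2, -1):  # insns 10 down to 3
        trace.append((idx, frozenset(regs), frozenset(stack)))
        if idx == 10:
            # r2 = *(u8 *)(r10 -9): partial load from fp-16;
            # precision moves from r2 to the stack slot
            regs.discard("r2"); stack.add(-16)
        elif idx == 4:
            # *(u64 *)(r10 -16) = r0: spill; precision moves
            # from fp-16 back to the spilled register r0
            stack.discard(-16); regs.add("r0")
        elif idx == 3:
            # r0 = 0: constant source found, requirement resolved
            regs.discard("r0")
    return trace

for idx, regs, stack in backtrack_precision():
    print(f"before {idx}: regs={sorted(regs)} stack={sorted(stack)}")
```

Each printed snapshot corresponds to one "mark_precise: frame0: regs=...
stack=... before N" line in the expected log, e.g. the state before insn 9
is regs empty, stack slot -16 precise.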

Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 .../selftests/bpf/progs/verifier_spill_fill.c | 40 +++++++++++++++----
 1 file changed, 32 insertions(+), 8 deletions(-)

diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index 7c1f1927f01a..f7bebc79fec4 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -492,6 +492,22 @@ char single_byte_buf[1] SEC(".data.single_byte_buf");
 SEC("raw_tp")
 __log_level(2)
 __success
+/* make sure fp-8 is all STACK_ZERO */
+__msg("2: (7a) *(u64 *)(r10 -8) = 0          ; R10=fp0 fp-8_w=00000000")
+/* but fp-16 is spilled IMPRECISE zero const reg */
+__msg("4: (7b) *(u64 *)(r10 -16) = r0        ; R0_w=0 R10=fp0 fp-16_w=0")
+/* and now check that precision propagation works even for such tricky case */
+__msg("10: (71) r2 = *(u8 *)(r10 -9)         ; R2_w=P0 R10=fp0 fp-16_w=0")
+__msg("11: (0f) r1 += r2")
+__msg("mark_precise: frame0: last_idx 11 first_idx 0 subseq_idx -1")
+__msg("mark_precise: frame0: regs=r2 stack= before 10: (71) r2 = *(u8 *)(r10 -9)")
+__msg("mark_precise: frame0: regs= stack=-16 before 9: (bf) r1 = r6")
+__msg("mark_precise: frame0: regs= stack=-16 before 8: (73) *(u8 *)(r1 +0) = r2")
+__msg("mark_precise: frame0: regs= stack=-16 before 7: (0f) r1 += r2")
+__msg("mark_precise: frame0: regs= stack=-16 before 6: (71) r2 = *(u8 *)(r10 -1)")
+__msg("mark_precise: frame0: regs= stack=-16 before 5: (bf) r1 = r6")
+__msg("mark_precise: frame0: regs= stack=-16 before 4: (7b) *(u64 *)(r10 -16) = r0")
+__msg("mark_precise: frame0: regs=r0 stack= before 3: (b7) r0 = 0")
 __naked void partial_stack_load_preserves_zeros(void)
 {
 	asm volatile (
@@ -505,42 +521,50 @@ __naked void partial_stack_load_preserves_zeros(void)
 		/* load single U8 from non-aligned STACK_ZERO slot */
 		"r1 = %[single_byte_buf];"
 		"r2 = *(u8 *)(r10 -1);"
-		"r1 += r2;" /* this should be fine */
+		"r1 += r2;"
+		"*(u8 *)(r1 + 0) = r2;" /* this should be fine */
 
 		/* load single U8 from non-aligned ZERO REG slot */
 		"r1 = %[single_byte_buf];"
 		"r2 = *(u8 *)(r10 -9);"
-		"r1 += r2;" /* this should be fine */
+		"r1 += r2;"
+		"*(u8 *)(r1 + 0) = r2;" /* this should be fine */
 
 		/* load single U16 from non-aligned STACK_ZERO slot */
 		"r1 = %[single_byte_buf];"
 		"r2 = *(u16 *)(r10 -2);"
-		"r1 += r2;" /* this should be fine */
+		"r1 += r2;"
+		"*(u8 *)(r1 + 0) = r2;" /* this should be fine */
 
 		/* load single U16 from non-aligned ZERO REG slot */
 		"r1 = %[single_byte_buf];"
 		"r2 = *(u16 *)(r10 -10);"
-		"r1 += r2;" /* this should be fine */
+		"r1 += r2;"
+		"*(u8 *)(r1 + 0) = r2;" /* this should be fine */
 
 		/* load single U32 from non-aligned STACK_ZERO slot */
 		"r1 = %[single_byte_buf];"
 		"r2 = *(u32 *)(r10 -4);"
-		"r1 += r2;" /* this should be fine */
+		"r1 += r2;"
+		"*(u8 *)(r1 + 0) = r2;" /* this should be fine */
 
 		/* load single U32 from non-aligned ZERO REG slot */
 		"r1 = %[single_byte_buf];"
 		"r2 = *(u32 *)(r10 -12);"
-		"r1 += r2;" /* this should be fine */
+		"r1 += r2;"
+		"*(u8 *)(r1 + 0) = r2;" /* this should be fine */
 
 		/* for completeness, load U64 from STACK_ZERO slot */
 		"r1 = %[single_byte_buf];"
 		"r2 = *(u64 *)(r10 -8);"
-		"r1 += r2;" /* this should be fine */
+		"r1 += r2;"
+		"*(u8 *)(r1 + 0) = r2;" /* this should be fine */
 
 		/* for completeness, load U64 from ZERO REG slot */
 		"r1 = %[single_byte_buf];"
 		"r2 = *(u64 *)(r10 -16);"
-		"r1 += r2;" /* this should be fine */
+		"r1 += r2;"
+		"*(u8 *)(r1 + 0) = r2;" /* this should be fine */
 
 		"r0 = 0;"
 		"exit;"
-- 
2.34.1



Thread overview: 25+ messages
2023-12-04 19:25 [PATCH v3 bpf-next 00/10] Complete BPF verifier precision tracking support for register spills Andrii Nakryiko
2023-12-04 19:25 ` [PATCH v3 bpf-next 01/10] bpf: support non-r10 register spill/fill to/from stack in precision tracking Andrii Nakryiko
2023-12-04 19:25 ` [PATCH v3 bpf-next 02/10] selftests/bpf: add stack access precision test Andrii Nakryiko
2023-12-04 19:25 ` [PATCH v3 bpf-next 03/10] bpf: fix check for attempt to corrupt spilled pointer Andrii Nakryiko
2023-12-04 22:12   ` Eduard Zingerman
2023-12-04 22:15     ` Eduard Zingerman
2023-12-05  0:23     ` Andrii Nakryiko
2023-12-05  0:54       ` Eduard Zingerman
2023-12-05  3:56         ` Andrii Nakryiko
2023-12-05 13:34           ` Eduard Zingerman
2023-12-05 18:30             ` Andrii Nakryiko
2023-12-05 18:49               ` Eduard Zingerman
2023-12-05 18:55                 ` Andrii Nakryiko
2023-12-05  1:45       ` Alexei Starovoitov
2023-12-05  3:50         ` Andrii Nakryiko
2023-12-04 19:25 ` [PATCH v3 bpf-next 04/10] bpf: preserve STACK_ZERO slots on partial reg spills Andrii Nakryiko
2023-12-04 19:25 ` [PATCH v3 bpf-next 05/10] selftests/bpf: validate STACK_ZERO is preserved on subreg spill Andrii Nakryiko
2023-12-04 19:25 ` [PATCH v3 bpf-next 06/10] bpf: preserve constant zero when doing partial register restore Andrii Nakryiko
2023-12-04 19:25 ` [PATCH v3 bpf-next 07/10] selftests/bpf: validate zero preservation for sub-slot loads Andrii Nakryiko
2023-12-04 19:25 ` [PATCH v3 bpf-next 08/10] bpf: track aligned STACK_ZERO cases as imprecise spilled registers Andrii Nakryiko
2023-12-04 19:26 ` Andrii Nakryiko [this message]
2023-12-04 19:26 ` [PATCH v3 bpf-next 10/10] bpf: use common instruction history across all states Andrii Nakryiko
2023-12-04 22:32 ` [PATCH v3 bpf-next 00/10] Complete BPF verifier precision tracking support for register spills Andrii Nakryiko
2023-12-04 23:02   ` Yonghong Song
2023-12-04 23:52     ` Andrii Nakryiko
