* [RFC bpf 0/2] bpf: Fix precision tracking for BPF_ALU | BPF_TO_BE | BPF_END
@ 2023-10-30 13:21 Shung-Hsi Yu
2023-10-30 13:21 ` [RFC bpf 1/2] " Shung-Hsi Yu
2023-10-30 13:21 ` [RFC bpf 2/2] selftests/bpf: precision tracking test " Shung-Hsi Yu
0 siblings, 2 replies; 7+ messages in thread
From: Shung-Hsi Yu @ 2023-10-30 13:21 UTC (permalink / raw)
To: bpf
Cc: Shung-Hsi Yu, Daniel Borkmann, Andrii Nakryiko,
Alexei Starovoitov, Toke Høiland-Jørgensen,
John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Song Liu,
Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
Eduard Zingerman
Note: this is sent as an RFC because I'm quite unsure about the selftest.
(Please see the notes in patch 2, just above the diffstat.)
This patchset fixes, and adds a selftest for, the issue reported by
Mohamed Mahmoud and Toke Høiland-Jørgensen where the kernel can run into
a verifier bug during backtracking of a BPF_ALU | BPF_TO_BE | BPF_END
instruction[0]. As seen in the verifier log below, r0 was incorrectly
marked as precise even though its value was not being used.

Patch 1 fixes the issue based on Andrii's analysis, and patch 2 adds a
selftest for this case using inline assembly. Please see the individual
patches for details.
...
mark_precise: frame2: regs=r2 stack= before 1891: (77) r2 >>= 56
mark_precise: frame2: regs=r2 stack= before 1890: (dc) r2 = be64 r2
mark_precise: frame2: regs=r0,r2 stack= before 1889: (73) *(u8 *)(r1 +47) = r3
...
mark_precise: frame2: regs=r0 stack= before 212: (85) call pc+1617
BUG regs 1
processed 5112 insns (limit 1000000) max_states_per_insn 4 total_states 92 peak_states 90 mark_read 20
0: https://lore.kernel.org/r/87jzrrwptf.fsf@toke.dk
Shung-Hsi Yu (2):
bpf: Fix precision tracking for BPF_ALU | BPF_TO_BE | BPF_END
selftests/bpf: precision tracking test for BPF_ALU | BPF_TO_BE | BPF_END
kernel/bpf/verifier.c | 6 +++-
.../selftests/bpf/prog_tests/verifier.c | 2 ++
.../selftests/bpf/progs/verifier_precision.c | 29 +++++++++++++++++++
3 files changed, 36 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/bpf/progs/verifier_precision.c
base-commit: c17cda15cc86e65e9725641daddcd7a63cc9ad01
--
2.42.0
^ permalink raw reply [flat|nested] 7+ messages in thread

* [RFC bpf 1/2] bpf: Fix precision tracking for BPF_ALU | BPF_TO_BE | BPF_END
  2023-10-30 13:21 [RFC bpf 0/2] bpf: Fix precision tracking for BPF_ALU | BPF_TO_BE | BPF_END Shung-Hsi Yu
@ 2023-10-30 13:21 ` Shung-Hsi Yu
  2023-10-30 14:28   ` Eduard Zingerman
  2023-10-30 13:21 ` [RFC bpf 2/2] selftests/bpf: precision tracking test " Shung-Hsi Yu
  1 sibling, 1 reply; 7+ messages in thread
From: Shung-Hsi Yu @ 2023-10-30 13:21 UTC (permalink / raw)
To: bpf
Cc: Shung-Hsi Yu, Daniel Borkmann, Andrii Nakryiko,
    Alexei Starovoitov, Toke Høiland-Jørgensen,
    John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Song Liu,
    Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
    Eduard Zingerman, stable, Mohamed Mahmoud, Tao Lyu

BPF_END and BPF_NEG have a different specification for the source bit in
the opcode compared to other ALU/ALU64 instructions: the bit is either
reserved or used to specify the byte-swap endianness. In both cases the
source bit does not encode the source operand location, and src_reg is a
reserved field.

backtrack_insn() currently does not differentiate BPF_END and BPF_NEG
from other ALU/ALU64 instructions, which leads to r0 being incorrectly
marked as precise when processing BPF_ALU | BPF_TO_BE | BPF_END
instructions. This commit teaches backtrack_insn() to correctly mark
precision for such cases.

While precision tracking for BPF_NEG and the other BPF_END instructions
is already correct and does not need fixing -- their source bit is unset
and they are thus treated as the BPF_K case -- this commit opts to
process all BPF_NEG and BPF_END instructions within the same if-clause
so that it better aligns with the current convention used in the
verifier (e.g. check_alu_op).

Fixes: b5dc0163d8fd ("bpf: precise scalar_value tracking")
Cc: stable@vger.kernel.org
Reported-by: Mohamed Mahmoud <mmahmoud@redhat.com>
Tested-by: Toke Høiland-Jørgensen <toke@redhat.com>
Tested-by: Tao Lyu <tao.lyu@epfl.ch>
Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
---
 kernel/bpf/verifier.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 873ade146f3d..646dc49263fd 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3426,7 +3426,12 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx, int subseq_idx,
 	if (class == BPF_ALU || class == BPF_ALU64) {
 		if (!bt_is_reg_set(bt, dreg))
 			return 0;
-		if (opcode == BPF_MOV) {
+		if (opcode == BPF_END || opcode == BPF_NEG) {
+			/* sreg is reserved and unused
+			 * dreg still need precision before this insn
+			 */
+			return 0;
+		} else if (opcode == BPF_MOV) {
 			if (BPF_SRC(insn->code) == BPF_X) {
 				/* dreg = sreg or dreg = (s8, s16, s32)sreg
 				 * dreg needs precision after this insn
-- 
2.42.0

^ permalink raw reply related [flat|nested] 7+ messages in thread
* Re: [RFC bpf 1/2] bpf: Fix precision tracking for BPF_ALU | BPF_TO_BE | BPF_END
  2023-10-30 13:21 ` [RFC bpf 1/2] " Shung-Hsi Yu
@ 2023-10-30 14:28   ` Eduard Zingerman
  0 siblings, 0 replies; 7+ messages in thread
From: Eduard Zingerman @ 2023-10-30 14:28 UTC (permalink / raw)
To: Shung-Hsi Yu, bpf
Cc: Daniel Borkmann, Andrii Nakryiko, Alexei Starovoitov,
    Toke Høiland-Jørgensen, John Fastabend, Andrii Nakryiko,
    Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh,
    Stanislav Fomichev, Hao Luo, Jiri Olsa, stable, Mohamed Mahmoud,
    Tao Lyu

On Mon, 2023-10-30 at 21:21 +0800, Shung-Hsi Yu wrote:
> BPF_END and BPF_NEG have a different specification for the source bit
> in the opcode compared to other ALU/ALU64 instructions: the bit is
> either reserved or used to specify the byte-swap endianness. In both
> cases the source bit does not encode the source operand location, and
> src_reg is a reserved field.
>
> backtrack_insn() currently does not differentiate BPF_END and BPF_NEG
> from other ALU/ALU64 instructions, which leads to r0 being incorrectly
> marked as precise when processing BPF_ALU | BPF_TO_BE | BPF_END
> instructions. This commit teaches backtrack_insn() to correctly mark
> precision for such cases.
>
> While precision tracking for BPF_NEG and the other BPF_END
> instructions is already correct and does not need fixing -- their
> source bit is unset and they are thus treated as the BPF_K case --
> this commit opts to process all BPF_NEG and BPF_END instructions
> within the same if-clause so that it better aligns with the current
> convention used in the verifier (e.g. check_alu_op).
>
> Fixes: b5dc0163d8fd ("bpf: precise scalar_value tracking")
> Cc: stable@vger.kernel.org
> Reported-by: Mohamed Mahmoud <mmahmoud@redhat.com>
> Tested-by: Toke Høiland-Jørgensen <toke@redhat.com>
> Tested-by: Tao Lyu <tao.lyu@epfl.ch>
> Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>

Acked-by: Eduard Zingerman <eddyz87@gmail.com>

> ---
>  kernel/bpf/verifier.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 873ade146f3d..646dc49263fd 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -3426,7 +3426,12 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx, int subseq_idx,
>  	if (class == BPF_ALU || class == BPF_ALU64) {
>  		if (!bt_is_reg_set(bt, dreg))
>  			return 0;
> -		if (opcode == BPF_MOV) {
> +		if (opcode == BPF_END || opcode == BPF_NEG) {
> +			/* sreg is reserved and unused
> +			 * dreg still need precision before this insn
> +			 */
> +			return 0;
> +		} else if (opcode == BPF_MOV) {
>  			if (BPF_SRC(insn->code) == BPF_X) {
>  				/* dreg = sreg or dreg = (s8, s16, s32)sreg
>  				 * dreg needs precision after this insn

^ permalink raw reply [flat|nested] 7+ messages in thread
* [RFC bpf 2/2] selftests/bpf: precision tracking test for BPF_ALU | BPF_TO_BE | BPF_END
  2023-10-30 13:21 [RFC bpf 0/2] bpf: Fix precision tracking for BPF_ALU | BPF_TO_BE | BPF_END Shung-Hsi Yu
  2023-10-30 13:21 ` [RFC bpf 1/2] " Shung-Hsi Yu
@ 2023-10-30 13:21 ` Shung-Hsi Yu
  2023-10-30 14:36   ` Eduard Zingerman
  1 sibling, 1 reply; 7+ messages in thread
From: Shung-Hsi Yu @ 2023-10-30 13:21 UTC (permalink / raw)
To: bpf
Cc: Shung-Hsi Yu, Daniel Borkmann, Andrii Nakryiko,
    Alexei Starovoitov, Toke Høiland-Jørgensen,
    John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Song Liu,
    Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
    Eduard Zingerman, Mykola Lysenko, Shuah Khan

Add a test written with inline assembly to check that the verifier does
not incorrectly use the src_reg field of a BPF_ALU | BPF_TO_BE | BPF_END
instruction.

Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
---

This is the first time I'm writing a selftest, so there are a lot of
questions I can't answer myself. Looking for suggestions regarding:

1. Whether BPF_NEG and other BPF_END cases should be tested as well
2. While the suggested way of writing BPF assembly is with inline
   assembly[0], as done here, maybe it is better to have this test case
   added in verifier/precise.c and written using macros instead?
   The rationale is that ideally we want the selftest to be backported
   to the v5.3+ stable kernels alongside the fix, but the __msg macro
   used here is only available since v6.2.

0: https://lore.kernel.org/bpf/CAADnVQJHAPid9HouwMEnfwDDKuy8BnGia269KSbby2gA030OBg@mail.gmail.com/

 .../selftests/bpf/prog_tests/verifier.c       |  2 ++
 .../selftests/bpf/progs/verifier_precision.c  | 29 +++++++++++++++++++
 2 files changed, 31 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_precision.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index e3e68c97b40c..e5c61aa6604a 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -46,6 +46,7 @@
 #include "verifier_movsx.skel.h"
 #include "verifier_netfilter_ctx.skel.h"
 #include "verifier_netfilter_retcode.skel.h"
+#include "verifier_precision.skel.h"
 #include "verifier_prevent_map_lookup.skel.h"
 #include "verifier_raw_stack.skel.h"
 #include "verifier_raw_tp_writable.skel.h"
@@ -153,6 +154,7 @@ void test_verifier_meta_access(void) { RUN(verifier_meta_access); }
 void test_verifier_movsx(void) { RUN(verifier_movsx); }
 void test_verifier_netfilter_ctx(void) { RUN(verifier_netfilter_ctx); }
 void test_verifier_netfilter_retcode(void) { RUN(verifier_netfilter_retcode); }
+void test_verifier_precision(void) { RUN(verifier_precision); }
 void test_verifier_prevent_map_lookup(void) { RUN(verifier_prevent_map_lookup); }
 void test_verifier_raw_stack(void) { RUN(verifier_raw_stack); }
 void test_verifier_raw_tp_writable(void) { RUN(verifier_raw_tp_writable); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_precision.c b/tools/testing/selftests/bpf/progs/verifier_precision.c
new file mode 100644
index 000000000000..9236994387bf
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_precision.c
@@ -0,0 +1,29 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2023 SUSE LLC */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+int vals[] SEC(".data.vals") = {1, 2, 3, 4};
+
+SEC("?raw_tp")
+__success __log_level(2)
+__msg("mark_precise: frame0: regs=r2 stack= before 5: (bf) r1 = r6")
+__msg("mark_precise: frame0: regs=r2 stack= before 4: (57) r2 &= 3")
+__msg("mark_precise: frame0: regs=r2 stack= before 3: (dc) r2 = be16 r2")
+__msg("mark_precise: frame0: regs=r2 stack= before 2: (b7) r2 = 0")
+__naked int bpf_end(void)
+{
+	asm volatile (
+		"r2 = 0;"
+		"r2 = be16 r2;"
+		"r2 &= 0x3;"
+		"r1 = %[vals];"
+		"r1 += r2;"
+		"r0 = *(u32 *)(r1 + 0);"
+		"exit;"
+		:
+		: __imm_ptr(vals)
+		: __clobber_common);
+}
-- 
2.42.0

^ permalink raw reply related [flat|nested] 7+ messages in thread
* Re: [RFC bpf 2/2] selftests/bpf: precision tracking test for BPF_ALU | BPF_TO_BE | BPF_END
  2023-10-30 13:21 ` [RFC bpf 2/2] selftests/bpf: precision tracking test " Shung-Hsi Yu
@ 2023-10-30 14:36   ` Eduard Zingerman
  2023-10-30 17:17     ` Alexei Starovoitov
  0 siblings, 1 reply; 7+ messages in thread
From: Eduard Zingerman @ 2023-10-30 14:36 UTC (permalink / raw)
To: Shung-Hsi Yu, bpf
Cc: Daniel Borkmann, Andrii Nakryiko, Alexei Starovoitov,
    Toke Høiland-Jørgensen, John Fastabend, Andrii Nakryiko,
    Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh,
    Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko, Shuah Khan

On Mon, 2023-10-30 at 21:21 +0800, Shung-Hsi Yu wrote:
> Add a test written with inline assembly to check that the verifier does
> not incorrectly use the src_reg field of a BPF_ALU | BPF_TO_BE | BPF_END
> instruction.
>
> Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
> ---
>
> This is the first time I'm writing a selftest, so there are a lot of
> questions I can't answer myself. Looking for suggestions regarding:
>
> 1. Whether BPF_NEG and other BPF_END cases should be tested as well

It is probably good to test BPF_NEG; unfortunately the verifier does not
track range information for BPF_NEG, so I ended up with the following
contraption:

SEC("?raw_tp")
__success __log_level(2)
__msg("mark_precise: frame0: regs=r2 stack= before 3: (bf) r1 = r10")
__msg("mark_precise: frame0: regs=r2 stack= before 2: (55) if r2 != 0xfffffff8 goto pc+2")
__msg("mark_precise: frame0: regs=r2 stack= before 1: (87) r2 = -r2")
__msg("mark_precise: frame0: regs=r2 stack= before 0: (b7) r2 = 8")
__naked int bpf_neg(void)
{
	asm volatile (
		"r2 = 8;"
		"r2 = -r2;"
		"if r2 != -8 goto 1f;"
		"r1 = r10;"
		"r1 += r2;"
	"1:"
		"r0 = 0;"
		"exit;"
		::: __clobber_all);
}

Also, maybe it's good to test the bswap version of BPF_END (a CPU v4
instruction) for completeness, e.g. as follows:

#if (defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86) || \
     (defined(__TARGET_ARCH_riscv) && __riscv_xlen == 64) || \
     defined(__TARGET_ARCH_arm) || defined(__TARGET_ARCH_s390)) && \
     __clang_major__ >= 18

...
"r2 = bswap16 r2;"
...

#endif

> 2. While the suggested way of writing BPF assembly is with inline
>    assembly[0], as done here, maybe it is better to have this test case
>    added in verifier/precise.c and written using macros instead?
>    The rationale is that ideally we want the selftest to be backported
>    to the v5.3+ stable kernels alongside the fix, but the __msg macro
>    used here is only available since v6.2.

As far as I understand we want to have new tests written in assembly,
but let's wait for Alexei or Andrii to comment.

> 0: https://lore.kernel.org/bpf/CAADnVQJHAPid9HouwMEnfwDDKuy8BnGia269KSbby2gA030OBg@mail.gmail.com/
>
>  .../selftests/bpf/prog_tests/verifier.c       |  2 ++
>  .../selftests/bpf/progs/verifier_precision.c  | 29 +++++++++++++++++++
>  2 files changed, 31 insertions(+)
>  create mode 100644 tools/testing/selftests/bpf/progs/verifier_precision.c
>
> diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
> index e3e68c97b40c..e5c61aa6604a 100644
> --- a/tools/testing/selftests/bpf/prog_tests/verifier.c
> +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
> @@ -46,6 +46,7 @@
>  #include "verifier_movsx.skel.h"
>  #include "verifier_netfilter_ctx.skel.h"
>  #include "verifier_netfilter_retcode.skel.h"
> +#include "verifier_precision.skel.h"
>  #include "verifier_prevent_map_lookup.skel.h"
>  #include "verifier_raw_stack.skel.h"
>  #include "verifier_raw_tp_writable.skel.h"
> @@ -153,6 +154,7 @@ void test_verifier_meta_access(void) { RUN(verifier_meta_access); }
>  void test_verifier_movsx(void) { RUN(verifier_movsx); }
>  void test_verifier_netfilter_ctx(void) { RUN(verifier_netfilter_ctx); }
>  void test_verifier_netfilter_retcode(void) { RUN(verifier_netfilter_retcode); }
> +void test_verifier_precision(void) { RUN(verifier_precision); }
>  void test_verifier_prevent_map_lookup(void) { RUN(verifier_prevent_map_lookup); }
>  void test_verifier_raw_stack(void) { RUN(verifier_raw_stack); }
>  void test_verifier_raw_tp_writable(void) { RUN(verifier_raw_tp_writable); }
> diff --git a/tools/testing/selftests/bpf/progs/verifier_precision.c b/tools/testing/selftests/bpf/progs/verifier_precision.c
> new file mode 100644
> index 000000000000..9236994387bf
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/verifier_precision.c
> @@ -0,0 +1,29 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright (C) 2023 SUSE LLC */
> +
> +#include <linux/bpf.h>
> +#include <bpf/bpf_helpers.h>
> +#include "bpf_misc.h"
> +
> +int vals[] SEC(".data.vals") = {1, 2, 3, 4};
> +
> +SEC("?raw_tp")
> +__success __log_level(2)
> +__msg("mark_precise: frame0: regs=r2 stack= before 5: (bf) r1 = r6")
> +__msg("mark_precise: frame0: regs=r2 stack= before 4: (57) r2 &= 3")
> +__msg("mark_precise: frame0: regs=r2 stack= before 3: (dc) r2 = be16 r2")
> +__msg("mark_precise: frame0: regs=r2 stack= before 2: (b7) r2 = 0")
> +__naked int bpf_end(void)
> +{
> +	asm volatile (
> +		"r2 = 0;"
> +		"r2 = be16 r2;"
> +		"r2 &= 0x3;"
> +		"r1 = %[vals];"
> +		"r1 += r2;"
> +		"r0 = *(u32 *)(r1 + 0);"
> +		"exit;"
> +		:
> +		: __imm_ptr(vals)
> +		: __clobber_common);
> +}

Note: there are simpler ways to force r2 precise, e.g. add it to r10:

SEC("?raw_tp")
__success __log_level(2)
__msg("mark_precise: frame0: regs=r2 stack= before 2: (57) r2 &= 3")
__msg("mark_precise: frame0: regs=r2 stack= before 1: (dc) r2 = be16 r2")
__msg("mark_precise: frame0: regs=r2 stack= before 0: (b7) r2 = 0")
__naked int bpf_end(void)
{
	asm volatile (
		"r2 = 0;"
		"r2 = be16 r2;"
		"r2 &= 0x3;"
		"r1 = r10;"
		"r1 += r2;"
		"r0 = 0;"
		"exit;"
		::: __clobber_all);
}

^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [RFC bpf 2/2] selftests/bpf: precision tracking test for BPF_ALU | BPF_TO_BE | BPF_END
  2023-10-30 14:36   ` Eduard Zingerman
@ 2023-10-30 17:17     ` Alexei Starovoitov
  2023-10-31  5:22       ` Shung-Hsi Yu
  0 siblings, 1 reply; 7+ messages in thread
From: Alexei Starovoitov @ 2023-10-30 17:17 UTC (permalink / raw)
To: Eduard Zingerman
Cc: Shung-Hsi Yu, bpf, Daniel Borkmann, Andrii Nakryiko,
    Alexei Starovoitov, Toke Høiland-Jørgensen, John Fastabend,
    Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
    KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
    Shuah Khan

On Mon, Oct 30, 2023 at 7:36 AM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Mon, 2023-10-30 at 21:21 +0800, Shung-Hsi Yu wrote:
> > Add a test written with inline assembly to check that the verifier
> > does not incorrectly use the src_reg field of a BPF_ALU | BPF_TO_BE
> > | BPF_END instruction.
> >
> > Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
> > ---
> >
> > This is the first time I'm writing a selftest, so there are a lot of
> > questions I can't answer myself. Looking for suggestions regarding:
> >
> > 1. Whether BPF_NEG and other BPF_END cases should be tested as well
>
> It is probably good to test BPF_NEG; unfortunately the verifier does
> not track range information for BPF_NEG, so I ended up with the
> following contraption:

Makes sense to me.

> SEC("?raw_tp")
> __success __log_level(2)
> __msg("mark_precise: frame0: regs=r2 stack= before 3: (bf) r1 = r10")
> __msg("mark_precise: frame0: regs=r2 stack= before 2: (55) if r2 != 0xfffffff8 goto pc+2")
> __msg("mark_precise: frame0: regs=r2 stack= before 1: (87) r2 = -r2")
> __msg("mark_precise: frame0: regs=r2 stack= before 0: (b7) r2 = 8")
> __naked int bpf_neg(void)
> {
>         asm volatile (
>                 "r2 = 8;"
>                 "r2 = -r2;"
>                 "if r2 != -8 goto 1f;"
>                 "r1 = r10;"
>                 "r1 += r2;"
>         "1:"
>                 "r0 = 0;"
>                 "exit;"
>                 ::: __clobber_all);
> }
>
> Also, maybe it's good to test the bswap version of BPF_END (a CPU v4
> instruction) for completeness, e.g. as follows:
>
> #if (defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86) || \
>      (defined(__TARGET_ARCH_riscv) && __riscv_xlen == 64) || \
>      defined(__TARGET_ARCH_arm) || defined(__TARGET_ARCH_s390)) && \
>      __clang_major__ >= 18
>
> ...
> "r2 = bswap16 r2;"

+1. Let's have a test for this one as well.

> ...
>
> #endif
>
> > 2. While the suggested way of writing BPF assembly is with inline
> >    assembly[0], as done here, maybe it is better to have this test
> >    case added in verifier/precise.c and written using macros instead?
> >    The rationale is that ideally we want the selftest to be
> >    backported to the v5.3+ stable kernels alongside the fix, but the
> >    __msg macro used here is only available since v6.2.
>
> As far as I understand we want to have new tests written in assembly,
> but let's wait for Alexei or Andrii to comment.

Backports are not a reason to use macros.
Please continue with inline asm.

^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [RFC bpf 2/2] selftests/bpf: precision tracking test for BPF_ALU | BPF_TO_BE | BPF_END
  2023-10-30 17:17     ` Alexei Starovoitov
@ 2023-10-31  5:22       ` Shung-Hsi Yu
  0 siblings, 0 replies; 7+ messages in thread
From: Shung-Hsi Yu @ 2023-10-31 5:22 UTC (permalink / raw)
To: Alexei Starovoitov, Eduard Zingerman
Cc: bpf, Daniel Borkmann, Andrii Nakryiko, Alexei Starovoitov,
    Toke Høiland-Jørgensen, John Fastabend, Andrii Nakryiko,
    Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh,
    Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko, Shuah Khan

On Mon, Oct 30, 2023 at 10:17:10AM -0700, Alexei Starovoitov wrote:
> On Mon, Oct 30, 2023 at 7:36 AM Eduard Zingerman <eddyz87@gmail.com> wrote:
> > On Mon, 2023-10-30 at 21:21 +0800, Shung-Hsi Yu wrote:
> > > Add a test written with inline assembly to check that the verifier
> > > does not incorrectly use the src_reg field of a BPF_ALU | BPF_TO_BE
> > > | BPF_END instruction.
> > >
> > > Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
> > > ---
> > >
> > > This is the first time I'm writing a selftest, so there are a lot
> > > of questions I can't answer myself. Looking for suggestions
> > > regarding:
> > >
> > > 1. Whether BPF_NEG and other BPF_END cases should be tested as well
> >
> > It is probably good to test BPF_NEG; unfortunately the verifier does
> > not track range information for BPF_NEG, so I ended up with the
> > following contraption:
>
> Makes sense to me.
>
> > SEC("?raw_tp")
> > __success __log_level(2)
> > __msg("mark_precise: frame0: regs=r2 stack= before 3: (bf) r1 = r10")
> > __msg("mark_precise: frame0: regs=r2 stack= before 2: (55) if r2 != 0xfffffff8 goto pc+2")
> > __msg("mark_precise: frame0: regs=r2 stack= before 1: (87) r2 = -r2")
> > __msg("mark_precise: frame0: regs=r2 stack= before 0: (b7) r2 = 8")
> > __naked int bpf_neg(void)
> > {
> >         asm volatile (
> >                 "r2 = 8;"
> >                 "r2 = -r2;"
> >                 "if r2 != -8 goto 1f;"
> >                 "r1 = r10;"
> >                 "r1 += r2;"
> >         "1:"
> >                 "r0 = 0;"
> >                 "exit;"
> >                 ::: __clobber_all);
> > }
> >
> > Also, maybe it's good to test the bswap version of BPF_END (a CPU v4
> > instruction) for completeness, e.g. as follows:
> >
> > #if (defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86) || \
> >      (defined(__TARGET_ARCH_riscv) && __riscv_xlen == 64) || \
> >      defined(__TARGET_ARCH_arm) || defined(__TARGET_ARCH_s390)) && \
> >      __clang_major__ >= 18
> >
> > ...
> > "r2 = bswap16 r2;"
>
> +1. Let's have a test for this one as well.
>
> > ...
> >
> > #endif
> >
> > > 2. While the suggested way of writing BPF assembly is with inline
> > >    assembly[0], as done here, maybe it is better to have this test
> > >    case added in verifier/precise.c and written using macros
> > >    instead? The rationale is that ideally we want the selftest to
> > >    be backported to the v5.3+ stable kernels alongside the fix, but
> > >    the __msg macro used here is only available since v6.2.
> >
> > As far as I understand we want to have new tests written in assembly,
> > but let's wait for Alexei or Andrii to comment.
>
> Backports are not a reason to use macros.
> Please continue with inline asm.

Got it, I will add tests for negation and bswap with inline assembly.
Thank you both for the feedback and suggestions!

^ permalink raw reply [flat|nested] 7+ messages in thread
end of thread, other threads:[~2023-10-31  5:22 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-10-30 13:21 [RFC bpf 0/2] bpf: Fix precision tracking for BPF_ALU | BPF_TO_BE | BPF_END Shung-Hsi Yu
2023-10-30 13:21 ` [RFC bpf 1/2] " Shung-Hsi Yu
2023-10-30 14:28   ` Eduard Zingerman
2023-10-30 13:21 ` [RFC bpf 2/2] selftests/bpf: precision tracking test " Shung-Hsi Yu
2023-10-30 14:36   ` Eduard Zingerman
2023-10-30 17:17     ` Alexei Starovoitov
2023-10-31  5:22       ` Shung-Hsi Yu