From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
To: Leon Hwang <hffilwlqm@gmail.com>
Cc: <ast@kernel.org>, <daniel@iogearbox.net>, <andrii@kernel.org>,
<song@kernel.org>, <iii@linux.ibm.com>, <jakub@cloudflare.com>,
<bpf@vger.kernel.org>
Subject: Re: [RFC PATCH bpf-next v4 4/4] selftests/bpf: Add testcases for tailcall infinite loop fixing
Date: Wed, 6 Sep 2023 23:18:22 +0200
Message-ID: <ZPjsnjoS/nh7aMcF@boxer>
In-Reply-To: <20230903151448.61696-5-hffilwlqm@gmail.com>
On Sun, Sep 03, 2023 at 11:14:48PM +0800, Leon Hwang wrote:
> Add 4 test cases to confirm the tailcall infinite loop bug has been fixed.
>
> Like the tailcall_bpf2bpf cases, trace the bpf2bpf call with fentry/fexit
> programs, and then check the final count result.
>
> tools/testing/selftests/bpf/test_progs -t tailcalls
> 226/13 tailcalls/tailcall_bpf2bpf_fentry:OK
> 226/14 tailcalls/tailcall_bpf2bpf_fexit:OK
> 226/15 tailcalls/tailcall_bpf2bpf_fentry_fexit:OK
> 226/16 tailcalls/tailcall_bpf2bpf_fentry_entry:OK
> 226 tailcalls:OK
> Summary: 1/16 PASSED, 0 SKIPPED, 0 FAILED
>
> Signed-off-by: Leon Hwang <hffilwlqm@gmail.com>
> ---
> .../selftests/bpf/prog_tests/tailcalls.c | 299 ++++++++++++++++++
> .../bpf/progs/tailcall_bpf2bpf_fentry.c | 18 ++
> .../bpf/progs/tailcall_bpf2bpf_fexit.c | 18 ++
> 3 files changed, 335 insertions(+)
> create mode 100644 tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_fentry.c
> create mode 100644 tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_fexit.c
>
> diff --git a/tools/testing/selftests/bpf/prog_tests/tailcalls.c b/tools/testing/selftests/bpf/prog_tests/tailcalls.c
> index b20d7f77a5bce..331b4e455ad06 100644
> --- a/tools/testing/selftests/bpf/prog_tests/tailcalls.c
> +++ b/tools/testing/selftests/bpf/prog_tests/tailcalls.c
> @@ -884,6 +884,297 @@ static void test_tailcall_bpf2bpf_6(void)
> tailcall_bpf2bpf6__destroy(obj);
> }
>
> +static void tailcall_bpf2bpf_fentry_fexit(bool test_fentry, bool test_fexit)
> +{
> + struct bpf_object *tgt_obj = NULL, *fentry_obj = NULL, *fexit_obj = NULL;
> + struct bpf_link *fentry_link = NULL, *fexit_link = NULL;
> + int err, map_fd, prog_fd, main_fd, data_fd, i, val;
> + struct bpf_map *prog_array, *data_map;
> + struct bpf_program *prog;
> + char buff[128] = {};
> +
> + LIBBPF_OPTS(bpf_test_run_opts, topts,
> + .data_in = buff,
> + .data_size_in = sizeof(buff),
> + .repeat = 1,
> + );
> +
> + err = bpf_prog_test_load("tailcall_bpf2bpf2.bpf.o",
> + BPF_PROG_TYPE_SCHED_CLS,
> + &tgt_obj, &prog_fd);
> + if (!ASSERT_OK(err, "load tgt_obj"))
> + return;
> +
> + prog = bpf_object__find_program_by_name(tgt_obj, "entry");
> + if (!ASSERT_OK_PTR(prog, "find entry prog"))
> + goto out;
> +
> + main_fd = bpf_program__fd(prog);
> + if (!ASSERT_FALSE(main_fd < 0, "find entry prog fd"))
> + goto out;
> +
> + prog_array = bpf_object__find_map_by_name(tgt_obj, "jmp_table");
> + if (!ASSERT_OK_PTR(prog_array, "find jmp_table map"))
> + goto out;
> +
> + map_fd = bpf_map__fd(prog_array);
> + if (!ASSERT_FALSE(map_fd < 0, "find jmp_table map fd"))
> + goto out;
> +
> + prog = bpf_object__find_program_by_name(tgt_obj, "classifier_0");
> + if (!ASSERT_OK_PTR(prog, "find classifier_0 prog"))
> + goto out;
> +
> + prog_fd = bpf_program__fd(prog);
> + if (!ASSERT_FALSE(prog_fd < 0, "find classifier_0 prog fd"))
> + goto out;
> +
> + i = 0;
> + err = bpf_map_update_elem(map_fd, &i, &prog_fd, BPF_ANY);
> + if (!ASSERT_OK(err, "update jmp_table"))
> + goto out;
> +
> + if (test_fentry) {
> + fentry_obj = bpf_object__open_file("tailcall_bpf2bpf_fentry.bpf.o",
> + NULL);
> + if (!ASSERT_OK_PTR(fentry_obj, "open fentry_obj file"))
> + goto out;
> +
> + prog = bpf_object__find_program_by_name(fentry_obj, "fentry");
> + if (!ASSERT_OK_PTR(prog, "find fentry prog"))
> + goto out;
> +
> + err = bpf_program__set_attach_target(prog, prog_fd,
> + "subprog_tail");
> + if (!ASSERT_OK(err, "set_attach_target subprog_tail"))
> + goto out;
> +
> + err = bpf_object__load(fentry_obj);
> + if (!ASSERT_OK(err, "load fentry_obj"))
> + goto out;
> +
> + fentry_link = bpf_program__attach_trace(prog);
> + if (!ASSERT_OK_PTR(fentry_link, "attach_trace"))
> + goto out;
> + }
> +
> + if (test_fexit) {
> + fexit_obj = bpf_object__open_file("tailcall_bpf2bpf_fexit.bpf.o",
> + NULL);
> + if (!ASSERT_OK_PTR(fexit_obj, "open fexit_obj file"))
> + goto out;
> +
> + prog = bpf_object__find_program_by_name(fexit_obj, "fexit");
> + if (!ASSERT_OK_PTR(prog, "find fexit prog"))
> + goto out;
> +
> + err = bpf_program__set_attach_target(prog, prog_fd,
> + "subprog_tail");
> + if (!ASSERT_OK(err, "set_attach_target subprog_tail"))
> + goto out;
> +
> + err = bpf_object__load(fexit_obj);
> + if (!ASSERT_OK(err, "load fexit_obj"))
> + goto out;
> +
> + fexit_link = bpf_program__attach_trace(prog);
> + if (!ASSERT_OK_PTR(fexit_link, "attach_trace"))
> + goto out;
> + }
> +
> + err = bpf_prog_test_run_opts(main_fd, &topts);
> + ASSERT_OK(err, "tailcall");
> + ASSERT_EQ(topts.retval, 1, "tailcall retval");
> +
> + data_map = bpf_object__find_map_by_name(tgt_obj, "tailcall.bss");
> + if (!ASSERT_FALSE(!data_map || !bpf_map__is_internal(data_map),
> + "find tailcall.bss map"))
> + goto out;
> +
> + data_fd = bpf_map__fd(data_map);
> + if (!ASSERT_FALSE(data_fd < 0, "find tailcall.bss map fd"))
> + goto out;
> +
> + i = 0;
> + err = bpf_map_lookup_elem(data_fd, &i, &val);
> + ASSERT_OK(err, "tailcall count");
> + ASSERT_EQ(val, 33, "tailcall count");
> +
> + if (test_fentry) {
> + data_map = bpf_object__find_map_by_name(fentry_obj, ".bss");
> + if (!ASSERT_FALSE(!data_map || !bpf_map__is_internal(data_map),
> + "find tailcall_bpf2bpf_fentry.bss map"))
> + goto out;
> +
> + data_fd = bpf_map__fd(data_map);
> + if (!ASSERT_FALSE(data_fd < 0,
> + "find tailcall_bpf2bpf_fentry.bss map fd"))
> + goto out;
> +
> + i = 0;
> + err = bpf_map_lookup_elem(data_fd, &i, &val);
> + ASSERT_OK(err, "fentry count");
> + ASSERT_EQ(val, 33, "fentry count");
> + }
> +
> + if (test_fexit) {
> + data_map = bpf_object__find_map_by_name(fexit_obj, ".bss");
> + if (!ASSERT_FALSE(!data_map || !bpf_map__is_internal(data_map),
> + "find tailcall_bpf2bpf_fexit.bss map"))
> + goto out;
> +
> + data_fd = bpf_map__fd(data_map);
> + if (!ASSERT_FALSE(data_fd < 0,
> + "find tailcall_bpf2bpf_fexit.bss map fd"))
> + goto out;
> +
> + i = 0;
> + err = bpf_map_lookup_elem(data_fd, &i, &val);
> + ASSERT_OK(err, "fexit count");
> + ASSERT_EQ(val, 33, "fexit count");
> + }
> +
> +out:
> + bpf_link__destroy(fentry_link);
> + bpf_link__destroy(fexit_link);
> + bpf_object__close(fentry_obj);
> + bpf_object__close(fexit_obj);
> + bpf_object__close(tgt_obj);
> +}
> +
> +/* test_tailcall_bpf2bpf_fentry checks that the count value of the tail call
> + * limit enforcement matches with expectations when tailcall is preceded with
> + * bpf2bpf call, and the bpf2bpf call is traced by fentry.
> + */
> +static void test_tailcall_bpf2bpf_fentry(void)
> +{
> + tailcall_bpf2bpf_fentry_fexit(true, false);
> +}
> +
> +/* test_tailcall_bpf2bpf_fexit checks that the count value of the tail call
> + * limit enforcement matches with expectations when tailcall is preceded with
> + * bpf2bpf call, and the bpf2bpf call is traced by fexit.
> + */
> +static void test_tailcall_bpf2bpf_fexit(void)
> +{
> + tailcall_bpf2bpf_fentry_fexit(false, true);
> +}
> +
> +/* test_tailcall_bpf2bpf_fentry_fexit checks that the count value of the tail
> + * call limit enforcement matches with expectations when tailcall is preceded
> + * with bpf2bpf call, and the bpf2bpf call is traced by both fentry and fexit.
> + */
> +static void test_tailcall_bpf2bpf_fentry_fexit(void)
> +{
> + tailcall_bpf2bpf_fentry_fexit(true, true);
Would it be possible to modify existing test_tailcall_count() to have
fentry/fexit testing within? tailcall_bpf2bpf_fentry_fexit() basically
repeats the logic of test_tailcall_count(), right?
How about something like:
static void test_tailcall_bpf2bpf_fentry(void)
{
test_tailcall_count("tailcall_bpf2bpf2.bpf.o", true, false);
}
// rest of your test cases
and existing tailcall count tests:
static void test_tailcall_3(void)
{
test_tailcall_count("tailcall3.bpf.o", false, false);
}
static void test_tailcall_6(void)
{
test_tailcall_count("tailcall6.bpf.o", false, false);
}
> +}
> +
> +/* test_tailcall_bpf2bpf_fentry_entry checks that the count value of the tail
> + * call limit enforcement matches with expectations when tailcall is preceded
> + * with bpf2bpf call, and the bpf2bpf caller is traced by fentry.
> + */
> +static void test_tailcall_bpf2bpf_fentry_entry(void)
> +{
> + struct bpf_object *tgt_obj = NULL, *fentry_obj = NULL;
> + int err, map_fd, prog_fd, data_fd, i, val;
> + struct bpf_map *prog_array, *data_map;
> + struct bpf_link *fentry_link = NULL;
> + struct bpf_program *prog;
> + char buff[128] = {};
> +
> + LIBBPF_OPTS(bpf_test_run_opts, topts,
> + .data_in = buff,
> + .data_size_in = sizeof(buff),
> + .repeat = 1,
> + );
> +
> + err = bpf_prog_test_load("tailcall_bpf2bpf2.bpf.o",
> + BPF_PROG_TYPE_SCHED_CLS,
> + &tgt_obj, &prog_fd);
> + if (!ASSERT_OK(err, "load tgt_obj"))
> + return;
> +
> + prog_array = bpf_object__find_map_by_name(tgt_obj, "jmp_table");
> + if (!ASSERT_OK_PTR(prog_array, "find jmp_table map"))
> + goto out;
> +
> + map_fd = bpf_map__fd(prog_array);
> + if (!ASSERT_FALSE(map_fd < 0, "find jmp_table map fd"))
> + goto out;
> +
> + prog = bpf_object__find_program_by_name(tgt_obj, "classifier_0");
> + if (!ASSERT_OK_PTR(prog, "find classifier_0 prog"))
> + goto out;
> +
> + prog_fd = bpf_program__fd(prog);
> + if (!ASSERT_FALSE(prog_fd < 0, "find classifier_0 prog fd"))
> + goto out;
> +
> + i = 0;
> + err = bpf_map_update_elem(map_fd, &i, &prog_fd, BPF_ANY);
> + if (!ASSERT_OK(err, "update jmp_table"))
> + goto out;
> +
> + fentry_obj = bpf_object__open_file("tailcall_bpf2bpf_fentry.bpf.o",
> + NULL);
> + if (!ASSERT_OK_PTR(fentry_obj, "open fentry_obj file"))
> + goto out;
> +
> + prog = bpf_object__find_program_by_name(fentry_obj, "fentry");
> + if (!ASSERT_OK_PTR(prog, "find fentry prog"))
> + goto out;
> +
> + err = bpf_program__set_attach_target(prog, prog_fd, "classifier_0");
> + if (!ASSERT_OK(err, "set_attach_target classifier_0"))
> + goto out;
> +
> + err = bpf_object__load(fentry_obj);
> + if (!ASSERT_OK(err, "load fentry_obj"))
> + goto out;
> +
> + fentry_link = bpf_program__attach_trace(prog);
> + if (!ASSERT_OK_PTR(fentry_link, "attach_trace"))
> + goto out;
> +
> + err = bpf_prog_test_run_opts(prog_fd, &topts);
> + ASSERT_OK(err, "tailcall");
> + ASSERT_EQ(topts.retval, 1, "tailcall retval");
> +
> + data_map = bpf_object__find_map_by_name(tgt_obj, "tailcall.bss");
> + if (!ASSERT_FALSE(!data_map || !bpf_map__is_internal(data_map),
> + "find tailcall.bss map"))
> + goto out;
> +
> + data_fd = bpf_map__fd(data_map);
> + if (!ASSERT_FALSE(data_fd < 0, "find tailcall.bss map fd"))
> + goto out;
> +
> + i = 0;
> + err = bpf_map_lookup_elem(data_fd, &i, &val);
> + ASSERT_OK(err, "tailcall count");
> + ASSERT_EQ(val, 34, "tailcall count");
> +
> + data_map = bpf_object__find_map_by_name(fentry_obj, ".bss");
> + if (!ASSERT_FALSE(!data_map || !bpf_map__is_internal(data_map),
> + "find tailcall_bpf2bpf_fentry.bss map"))
> + goto out;
> +
> + data_fd = bpf_map__fd(data_map);
> + if (!ASSERT_FALSE(data_fd < 0,
> + "find tailcall_bpf2bpf_fentry.bss map fd"))
> + goto out;
> +
> + i = 0;
> + err = bpf_map_lookup_elem(data_fd, &i, &val);
> + ASSERT_OK(err, "fentry count");
> + ASSERT_EQ(val, 1, "fentry count");
> +
> +out:
> + bpf_link__destroy(fentry_link);
> + bpf_object__close(fentry_obj);
> + bpf_object__close(tgt_obj);
> +}
> +
> void test_tailcalls(void)
> {
> if (test__start_subtest("tailcall_1"))
> @@ -910,4 +1201,12 @@ void test_tailcalls(void)
> test_tailcall_bpf2bpf_4(true);
> if (test__start_subtest("tailcall_bpf2bpf_6"))
> test_tailcall_bpf2bpf_6();
> + if (test__start_subtest("tailcall_bpf2bpf_fentry"))
> + test_tailcall_bpf2bpf_fentry();
> + if (test__start_subtest("tailcall_bpf2bpf_fexit"))
> + test_tailcall_bpf2bpf_fexit();
> + if (test__start_subtest("tailcall_bpf2bpf_fentry_fexit"))
> + test_tailcall_bpf2bpf_fentry_fexit();
> + if (test__start_subtest("tailcall_bpf2bpf_fentry_entry"))
> + test_tailcall_bpf2bpf_fentry_entry();
> }
> diff --git a/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_fentry.c b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_fentry.c
> new file mode 100644
> index 0000000000000..8436c6729167c
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_fentry.c
> @@ -0,0 +1,18 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright Leon Hwang */
> +
> +#include "vmlinux.h"
> +#include <bpf/bpf_helpers.h>
> +#include <bpf/bpf_tracing.h>
> +
> +int count = 0;
> +
> +SEC("fentry/subprog_tail")
> +int BPF_PROG(fentry, struct sk_buff *skb)
> +{
> + count++;
> +
> + return 0;
> +}
> +
> +char _license[] SEC("license") = "GPL";
> diff --git a/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_fexit.c b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_fexit.c
> new file mode 100644
> index 0000000000000..fe16412c6e6e9
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_fexit.c
> @@ -0,0 +1,18 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright Leon Hwang */
> +
> +#include "vmlinux.h"
> +#include <bpf/bpf_helpers.h>
> +#include <bpf/bpf_tracing.h>
> +
> +int count = 0;
> +
> +SEC("fexit/subprog_tail")
> +int BPF_PROG(fexit, struct sk_buff *skb)
> +{
> + count++;
> +
> + return 0;
> +}
> +
> +char _license[] SEC("license") = "GPL";
> --
> 2.41.0
>