From: David Vernet <void@manifault.com>
To: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Cc: bpf@vger.kernel.org, Alexei Starovoitov <ast@kernel.org>,
Andrii Nakryiko <andrii@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
Martin KaFai Lau <martin.lau@kernel.org>,
Tejun Heo <tj@kernel.org>, Raj Sahu <rjsu26@vt.edu>,
Dan Williams <djwillia@vt.edu>,
Rishabh Iyer <rishabh.iyer@epfl.ch>,
Sanidhya Kashyap <sanidhya.kashyap@epfl.ch>
Subject: Re: [RFC PATCH v1 14/14] selftests/bpf: Add tests for exceptions runtime cleanup
Date: Mon, 12 Feb 2024 14:53:14 -0600
Message-ID: <20240212205314.GC2200361@maniforge.lan>
In-Reply-To: <20240201042109.1150490-15-memxor@gmail.com>
On Thu, Feb 01, 2024 at 04:21:09AM +0000, Kumar Kartikeya Dwivedi wrote:
> Add tests for the runtime cleanup support for exceptions, ensuring that
> resources are correctly identified and released when an exception is
> thrown. Also, we add negative tests to exercise corner cases the
> verifier should reject.
>
> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> ---
> tools/testing/selftests/bpf/DENYLIST.aarch64 | 1 +
> tools/testing/selftests/bpf/DENYLIST.s390x | 1 +
> .../bpf/prog_tests/exceptions_cleanup.c | 65 +++
> .../selftests/bpf/progs/exceptions_cleanup.c | 468 ++++++++++++++++++
> .../bpf/progs/exceptions_cleanup_fail.c | 154 ++++++
> .../selftests/bpf/progs/exceptions_fail.c | 13 -
> 6 files changed, 689 insertions(+), 13 deletions(-)
> create mode 100644 tools/testing/selftests/bpf/prog_tests/exceptions_cleanup.c
> create mode 100644 tools/testing/selftests/bpf/progs/exceptions_cleanup.c
> create mode 100644 tools/testing/selftests/bpf/progs/exceptions_cleanup_fail.c
>
> diff --git a/tools/testing/selftests/bpf/DENYLIST.aarch64 b/tools/testing/selftests/bpf/DENYLIST.aarch64
> index 5c2cc7e8c5d0..6fc79727cd14 100644
> --- a/tools/testing/selftests/bpf/DENYLIST.aarch64
> +++ b/tools/testing/selftests/bpf/DENYLIST.aarch64
> @@ -1,6 +1,7 @@
> bpf_cookie/multi_kprobe_attach_api # kprobe_multi_link_api_subtest:FAIL:fentry_raw_skel_load unexpected error: -3
> bpf_cookie/multi_kprobe_link_api # kprobe_multi_link_api_subtest:FAIL:fentry_raw_skel_load unexpected error: -3
> exceptions # JIT does not support calling kfunc bpf_throw: -524
> +exceptions_unwind # JIT does not support calling kfunc bpf_throw: -524
> fexit_sleep # The test never returns. The remaining tests cannot start.
> kprobe_multi_bench_attach # needs CONFIG_FPROBE
> kprobe_multi_test # needs CONFIG_FPROBE
> diff --git a/tools/testing/selftests/bpf/DENYLIST.s390x b/tools/testing/selftests/bpf/DENYLIST.s390x
> index 1a63996c0304..f09a73dee72c 100644
> --- a/tools/testing/selftests/bpf/DENYLIST.s390x
> +++ b/tools/testing/selftests/bpf/DENYLIST.s390x
> @@ -1,5 +1,6 @@
> # TEMPORARY
> # Alphabetical order
> exceptions # JIT does not support calling kfunc bpf_throw (exceptions)
> +exceptions_unwind # JIT does not support calling kfunc bpf_throw (exceptions)
> get_stack_raw_tp # user_stack corrupted user stack (no backchain userspace)
> stacktrace_build_id # compare_map_keys stackid_hmap vs. stackmap err -2 errno 2 (?)
> diff --git a/tools/testing/selftests/bpf/prog_tests/exceptions_cleanup.c b/tools/testing/selftests/bpf/prog_tests/exceptions_cleanup.c
> new file mode 100644
> index 000000000000..78df037b60ea
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/prog_tests/exceptions_cleanup.c
> @@ -0,0 +1,65 @@
> +#include "bpf/bpf.h"
> +#include "exceptions.skel.h"
> +#include <test_progs.h>
> +#include <network_helpers.h>
> +
> +#include "exceptions_cleanup.skel.h"
> +#include "exceptions_cleanup_fail.skel.h"
> +
> +static void test_exceptions_cleanup_fail(void)
> +{
> + RUN_TESTS(exceptions_cleanup_fail);
> +}
> +
> +void test_exceptions_cleanup(void)
> +{
> + LIBBPF_OPTS(bpf_test_run_opts, ropts,
> + .data_in = &pkt_v4,
> + .data_size_in = sizeof(pkt_v4),
> + .repeat = 1,
> + );
> + struct exceptions_cleanup *skel;
> + int ret;
> +
> + if (test__start_subtest("exceptions_cleanup_fail"))
> + test_exceptions_cleanup_fail();
RUN_TESTS() takes care of calling test__start_subtest(), etc. You should
be able to just call RUN_TESTS(exceptions_cleanup_fail) directly here.
> +
> + skel = exceptions_cleanup__open_and_load();
> + if (!ASSERT_OK_PTR(skel, "exceptions_cleanup__open_and_load"))
> + return;
> +
> + ret = exceptions_cleanup__attach(skel);
> + if (!ASSERT_OK(ret, "exceptions_cleanup__attach"))
> + return;
> +
> +#define RUN_EXC_CLEANUP_TEST(name) \
Should we add an if (test__start_subtest(#name)) check to this macro?
> + ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.name), \
> + &ropts); \
> + if (!ASSERT_OK(ret, #name ": return value")) \
> + return; \
> + if (!ASSERT_EQ(ropts.retval, 0xeB9F, #name ": opts.retval")) \
> + return; \
> + ret = bpf_prog_test_run_opts( \
> + bpf_program__fd(skel->progs.exceptions_cleanup_check), \
> + &ropts); \
> + if (!ASSERT_OK(ret, #name " CHECK: return value")) \
> + return; \
> + if (!ASSERT_EQ(ropts.retval, 0, #name " CHECK: opts.retval")) \
> + return; \
> + skel->bss->only_count = 0;
> +
> + RUN_EXC_CLEANUP_TEST(exceptions_cleanup_prog_num_iter);
> + RUN_EXC_CLEANUP_TEST(exceptions_cleanup_prog_num_iter_mult);
> + RUN_EXC_CLEANUP_TEST(exceptions_cleanup_prog_dynptr_iter);
> + RUN_EXC_CLEANUP_TEST(exceptions_cleanup_obj);
> + RUN_EXC_CLEANUP_TEST(exceptions_cleanup_percpu_obj);
> + RUN_EXC_CLEANUP_TEST(exceptions_cleanup_ringbuf);
> + RUN_EXC_CLEANUP_TEST(exceptions_cleanup_reg);
> + RUN_EXC_CLEANUP_TEST(exceptions_cleanup_null_or_ptr_do_ptr);
> + RUN_EXC_CLEANUP_TEST(exceptions_cleanup_null_or_ptr_do_null);
> + RUN_EXC_CLEANUP_TEST(exceptions_cleanup_callee_saved);
> + RUN_EXC_CLEANUP_TEST(exceptions_cleanup_frame);
> + RUN_EXC_CLEANUP_TEST(exceptions_cleanup_loop_iterations);
> + RUN_EXC_CLEANUP_TEST(exceptions_cleanup_dead_code_elim);
> + RUN_EXC_CLEANUP_TEST(exceptions_cleanup_frame_dce);
> +}
> diff --git a/tools/testing/selftests/bpf/progs/exceptions_cleanup.c b/tools/testing/selftests/bpf/progs/exceptions_cleanup.c
> new file mode 100644
> index 000000000000..ccf14fe6bd1b
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/exceptions_cleanup.c
> @@ -0,0 +1,468 @@
> +// SPDX-License-Identifier: GPL-2.0
> +#include <vmlinux.h>
> +#include <bpf/bpf_tracing.h>
> +#include <bpf/bpf_helpers.h>
> +#include <bpf/bpf_core_read.h>
> +#include <bpf/bpf_endian.h>
> +#include "bpf_misc.h"
> +#include "bpf_kfuncs.h"
> +#include "bpf_experimental.h"
> +
> +struct {
> + __uint(type, BPF_MAP_TYPE_RINGBUF);
> + __uint(max_entries, 8);
> +} ringbuf SEC(".maps");
> +
> +enum {
> + RES_DYNPTR,
> + RES_ITER,
> + RES_REG,
> + RES_SPILL,
> + __RES_MAX,
> +};
> +
> +struct bpf_resource {
> + int type;
> +};
> +
> +struct {
> + __uint(type, BPF_MAP_TYPE_HASH);
> + __uint(max_entries, 1024);
> + __type(key, int);
> + __type(value, struct bpf_resource);
> +} hashmap SEC(".maps");
> +
> +const volatile bool always_false = false;
> +bool only_count = false;
> +int res_count = 0;
> +
> +#define MARK_RESOURCE(ptr, type) ({ res_count++; bpf_map_update_elem(&hashmap, &(void *){ptr}, &(struct bpf_resource){type}, 0); });
> +#define FIND_RESOURCE(ptr) ((struct bpf_resource *)bpf_map_lookup_elem(&hashmap, &(void *){ptr}) ?: &(struct bpf_resource){__RES_MAX})
> +#define FREE_RESOURCE(ptr) bpf_map_delete_elem(&hashmap, &(void *){ptr})
> +#define VAL 0xeB9F
> +
> +SEC("fentry/bpf_cleanup_resource")
> +int BPF_PROG(exception_cleanup_mark_free, struct bpf_frame_desc_reg_entry *fd, void *ptr)
> +{
> + if (fd->spill_type == STACK_INVALID)
> + bpf_probe_read_kernel(&ptr, sizeof(ptr), ptr);
> + if (only_count) {
> + res_count--;
> + return 0;
> + }
> + switch (fd->spill_type) {
> + case STACK_SPILL:
> + if (FIND_RESOURCE(ptr)->type == RES_SPILL)
> + FREE_RESOURCE(ptr);
> + break;
> + case STACK_INVALID:
> + if (FIND_RESOURCE(ptr)->type == RES_REG)
> + FREE_RESOURCE(ptr);
> + break;
> + case STACK_ITER:
> + if (FIND_RESOURCE(ptr)->type == RES_ITER)
> + FREE_RESOURCE(ptr);
> + break;
> + case STACK_DYNPTR:
> + if (FIND_RESOURCE(ptr)->type == RES_DYNPTR)
> + FREE_RESOURCE(ptr);
> + break;
> + }
> + return 0;
> +}
> +
> +static long map_cb(struct bpf_map *map, void *key, void *value, void *ctx)
> +{
> + int *cnt = ctx;
> +
> + (*cnt)++;
> + return 0;
> +}
> +
> +SEC("tc")
> +int exceptions_cleanup_check(struct __sk_buff *ctx)
> +{
> + int cnt = 0;
> +
> + if (only_count)
> + return res_count;
> + bpf_for_each_map_elem(&hashmap, map_cb, &cnt, 0);
> + return cnt;
> +}
> +
> +SEC("tc")
> +int exceptions_cleanup_prog_num_iter(struct __sk_buff *ctx)
> +{
> + int i;
> +
> + bpf_for(i, 0, 10) {
> + MARK_RESOURCE(&___it, RES_ITER);
> + bpf_throw(VAL);
> + }
> + return 0;
> +}
> +
> +SEC("tc")
> +int exceptions_cleanup_prog_num_iter_mult(struct __sk_buff *ctx)
> +{
> + int i, j, k;
> +
> + bpf_for(i, 0, 10) {
> + MARK_RESOURCE(&___it, RES_ITER);
> + bpf_for(j, 0, 10) {
> + MARK_RESOURCE(&___it, RES_ITER);
> + bpf_for(k, 0, 10) {
> + MARK_RESOURCE(&___it, RES_ITER);
> + bpf_throw(VAL);
> + }
> + }
> + }
> + return 0;
> +}
> +
> +__noinline
> +static int exceptions_cleanup_subprog(struct __sk_buff *ctx)
> +{
> + int i;
> +
> + bpf_for(i, 0, 10) {
> + MARK_RESOURCE(&___it, RES_ITER);
> + bpf_throw(VAL);
> + }
> + return ctx->len;
> +}
> +
> +SEC("tc")
> +int exceptions_cleanup_prog_dynptr_iter(struct __sk_buff *ctx)
> +{
> + struct bpf_dynptr rbuf;
> + int ret = 0;
> +
> + bpf_ringbuf_reserve_dynptr(&ringbuf, 8, 0, &rbuf);
> + MARK_RESOURCE(&rbuf, RES_DYNPTR);
> + if (ctx->protocol)
> + ret = exceptions_cleanup_subprog(ctx);
> + bpf_ringbuf_discard_dynptr(&rbuf, 0);
> + return ret;
> +}
> +
> +SEC("tc")
> +int exceptions_cleanup_obj(struct __sk_buff *ctx)
> +{
> + struct { int i; } *p;
> +
> + p = bpf_obj_new(typeof(*p));
> + MARK_RESOURCE(&p, RES_SPILL);
> + bpf_throw(VAL);
> + return p->i;
> +}
> +
> +SEC("tc")
> +int exceptions_cleanup_percpu_obj(struct __sk_buff *ctx)
> +{
> + struct { int i; } *p;
> +
> + p = bpf_percpu_obj_new(typeof(*p));
> + MARK_RESOURCE(&p, RES_SPILL);
> + bpf_throw(VAL);
It would be neat if we could mark the bpf_throw() kfunc signature as
__attribute__((noreturn)) and have things work correctly, meaning you
wouldn't even have to return a value here. The verifier should know that
bpf_throw() is terminal, so it should be able to prune any subsequent
instructions as unreachable anyway.
> + return !p;
> +}
> +
> +SEC("tc")
> +int exceptions_cleanup_ringbuf(struct __sk_buff *ctx)
> +{
> + void *p;
> +
> + p = bpf_ringbuf_reserve(&ringbuf, 8, 0);
> + MARK_RESOURCE(&p, RES_SPILL);
> + bpf_throw(VAL);
> + return 0;
> +}
> +
> +SEC("tc")
> +int exceptions_cleanup_reg(struct __sk_buff *ctx)
> +{
> + void *p;
> +
> + p = bpf_ringbuf_reserve(&ringbuf, 8, 0);
> + MARK_RESOURCE(p, RES_REG);
> + bpf_throw(VAL);
> + if (p)
> + bpf_ringbuf_discard(p, 0);
Does the prog fail to load if you don't have this bpf_ringbuf_discard()
check? I assume not given that in
exceptions_cleanup_null_or_ptr_do_ptr() and elsewhere we do a reserve
without discarding. Is there some subtle stack state difference here or
something?
> + return 0;
> +}
> +
> +SEC("tc")
> +int exceptions_cleanup_null_or_ptr_do_ptr(struct __sk_buff *ctx)
> +{
> + union {
> + void *p;
> + char buf[8];
> + } volatile p;
> + u64 z = 0;
> +
> + __builtin_memcpy((void *)&p.p, &z, sizeof(z));
> + MARK_RESOURCE((void *)&p.p, RES_SPILL);
> + if (ctx->len)
> + p.p = bpf_ringbuf_reserve(&ringbuf, 8, 0);
> + bpf_throw(VAL);
> + return 0;
> +}
> +
> +SEC("tc")
> +int exceptions_cleanup_null_or_ptr_do_null(struct __sk_buff *ctx)
> +{
> + union {
> + void *p;
> + char buf[8];
> + } volatile p;
> +
> + p.p = 0;
> + MARK_RESOURCE((void *)p.buf, RES_SPILL);
> + if (!ctx->len)
> + p.p = bpf_ringbuf_reserve(&ringbuf, 8, 0);
> + bpf_throw(VAL);
> + return 0;
> +}
> +
> +__noinline static int mark_resource_subprog(u64 a, u64 b, u64 c, u64 d)
> +{
> + MARK_RESOURCE((void *)a, RES_REG);
> + MARK_RESOURCE((void *)b, RES_REG);
> + MARK_RESOURCE((void *)c, RES_REG);
> + MARK_RESOURCE((void *)d, RES_REG);
> + return 0;
> +}
> +
> +SEC("tc")
> +int exceptions_cleanup_callee_saved(struct __sk_buff *ctx)
> +{
> + asm volatile (
> + "r1 = %[ringbuf] ll; \
> + r2 = 8; \
> + r3 = 0; \
> + call %[bpf_ringbuf_reserve]; \
> + r6 = r0; \
> + r1 = %[ringbuf] ll; \
> + r2 = 8; \
> + r3 = 0; \
> + call %[bpf_ringbuf_reserve]; \
> + r7 = r0; \
> + r1 = %[ringbuf] ll; \
> + r2 = 8; \
> + r3 = 0; \
> + call %[bpf_ringbuf_reserve]; \
> + r8 = r0; \
> + r1 = %[ringbuf] ll; \
> + r2 = 8; \
> + r3 = 0; \
> + call %[bpf_ringbuf_reserve]; \
> + r9 = r0; \
> + r1 = r6; \
> + r2 = r7; \
> + r3 = r8; \
> + r4 = r9; \
> + call mark_resource_subprog; \
> + r1 = 0xeB9F; \
> + call bpf_throw; \
> + " : : __imm(bpf_ringbuf_reserve),
> + __imm_addr(ringbuf)
> + : __clobber_all);
> + mark_resource_subprog(0, 0, 0, 0);
> + return 0;
> +}
> +
> +SEC("tc")
> +int exceptions_cleanup_callee_saved_noopt(struct __sk_buff *ctx)
> +{
> + mark_resource_subprog(1, 2, 3, 4);
> + return 0;
> +}
> +
> +__noinline int global_subprog_throw(struct __sk_buff *ctx)
> +{
> + u64 *p = bpf_ringbuf_reserve(&ringbuf, 8, 0);
> + bpf_throw(VAL);
> + return p ? *p : 0 + ctx->len;
> +}
> +
> +__noinline int global_subprog(struct __sk_buff *ctx)
> +{
> + u64 *p = bpf_ringbuf_reserve(&ringbuf, 8, 0);
> + if (!p)
> + return ctx->len;
> + global_subprog_throw(ctx);
> + bpf_ringbuf_discard(p, 0);
> + return !!p + ctx->len;
> +}
> +
> +__noinline static int static_subprog(struct __sk_buff *ctx)
> +{
> + struct bpf_dynptr rbuf;
> + u64 *p, r = 0;
> +
> + bpf_ringbuf_reserve_dynptr(&ringbuf, 8, 0, &rbuf);
> + p = bpf_dynptr_data(&rbuf, 0, 8);
> + if (!p)
> + goto end;
> + *p = global_subprog(ctx);
> + r += *p;
> +end:
> + bpf_ringbuf_discard_dynptr(&rbuf, 0);
> + return r + ctx->len;
> +}
> +
> +SEC("tc")
> +int exceptions_cleanup_frame(struct __sk_buff *ctx)
> +{
> + struct foo { int i; } *p = bpf_obj_new(typeof(*p));
> + int i;
> + only_count = 1;
> + res_count = 4;
> + if (!p)
> + return 1;
> + p->i = static_subprog(ctx);
> + i = p->i;
> + bpf_obj_drop(p);
> + return i + ctx->len;
> +}
> +
> +SEC("tc")
> +__success
> +int exceptions_cleanup_loop_iterations(struct __sk_buff *ctx)
> +{
> + struct { int i; } *f[50] = {};
> + int i;
> +
> + only_count = true;
> +
> + for (i = 0; i < 50; i++) {
> + f[i] = bpf_obj_new(typeof(*f[0]));
> + if (!f[i])
> + goto end;
> + res_count++;
> + if (i == 49) {
> + bpf_throw(VAL);
> + }
> + }
> +end:
> + for (i = 0; i < 50; i++) {
> + if (!f[i])
> + continue;
> + bpf_obj_drop(f[i]);
> + }
> + return 0;
> +}
> +
> +SEC("tc")
> +int exceptions_cleanup_dead_code_elim(struct __sk_buff *ctx)
> +{
> + void *p;
> +
> + p = bpf_ringbuf_reserve(&ringbuf, 8, 0);
> + if (!p)
> + return 0;
> + asm volatile (
> + "r0 = r0; \
> + r0 = r0; \
> + r0 = r0; \
> + r0 = r0; \
> + r0 = r0; \
> + r0 = r0; \
> + r0 = r0; \
> + " ::: "r0");
> + bpf_throw(VAL);
> + bpf_ringbuf_discard(p, 0);
> + return 0;
> +}
> +
> +__noinline int global_subprog_throw_dce(struct __sk_buff *ctx)
> +{
> + u64 *p = bpf_ringbuf_reserve(&ringbuf, 8, 0);
> + bpf_throw(VAL);
> + return p ? *p : 0 + ctx->len;
> +}
> +
> +__noinline int global_subprog_dce(struct __sk_buff *ctx)
> +{
> + u64 *p = bpf_ringbuf_reserve(&ringbuf, 8, 0);
> + if (!p)
> + return ctx->len;
> + asm volatile (
> + "r0 = r0; \
> + r0 = r0; \
> + r0 = r0; \
> + r0 = r0; \
> + r0 = r0; \
> + r0 = r0; \
> + r0 = r0; \
> + " ::: "r0");
> + global_subprog_throw_dce(ctx);
> + bpf_ringbuf_discard(p, 0);
> + return !!p + ctx->len;
> +}
> +
> +__noinline static int static_subprog_dce(struct __sk_buff *ctx)
> +{
> + struct bpf_dynptr rbuf;
> + u64 *p, r = 0;
> +
> + bpf_ringbuf_reserve_dynptr(&ringbuf, 8, 0, &rbuf);
> + p = bpf_dynptr_data(&rbuf, 0, 8);
> + if (!p)
> + goto end;
> + asm volatile (
> + "r0 = r0; \
> + r0 = r0; \
> + r0 = r0; \
> + r0 = r0; \
> + r0 = r0; \
> + r0 = r0; \
> + r0 = r0; \
> + " ::: "r0");
> + *p = global_subprog_dce(ctx);
> + r += *p;
> +end:
> + bpf_ringbuf_discard_dynptr(&rbuf, 0);
> + return r + ctx->len;
> +}
> +
> +SEC("tc")
> +int exceptions_cleanup_frame_dce(struct __sk_buff *ctx)
> +{
> + struct foo { int i; } *p = bpf_obj_new(typeof(*p));
> + int i;
> + only_count = 1;
> + res_count = 4;
> + if (!p)
> + return 1;
> + p->i = static_subprog_dce(ctx);
> + i = p->i;
> + bpf_obj_drop(p);
> + return i + ctx->len;
> +}
> +
> +SEC("tc")
> +int reject_slot_with_zero_vs_ptr_ok(struct __sk_buff *ctx)
> +{
> + asm volatile (
> + "r7 = *(u32 *)(r1 + 0); \
> + r0 = 0; \
> + *(u64 *)(r10 - 8) = r0; \
> + r1 = %[ringbuf] ll; \
> + r2 = 8; \
> + r3 = 0; \
> + if r7 != 0 goto jump4; \
> + call %[bpf_ringbuf_reserve]; \
> + *(u64 *)(r10 - 8) = r0; \
> + jump4: \
> + r0 = 0; \
> + r1 = 0; \
> + call bpf_throw; \
> + " : : __imm(bpf_ringbuf_reserve),
> + __imm_addr(ringbuf)
> + : "memory");
> + return 0;
> +}
> +
> +char _license[] SEC("license") = "GPL";
> diff --git a/tools/testing/selftests/bpf/progs/exceptions_cleanup_fail.c b/tools/testing/selftests/bpf/progs/exceptions_cleanup_fail.c
> new file mode 100644
> index 000000000000..b3c70f92b35f
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/exceptions_cleanup_fail.c
> @@ -0,0 +1,154 @@
> +// SPDX-License-Identifier: GPL-2.0
> +#include <vmlinux.h>
> +#include <bpf/bpf_tracing.h>
> +#include <bpf/bpf_helpers.h>
> +#include <bpf/bpf_core_read.h>
> +
> +#include "bpf_misc.h"
> +#include "bpf_experimental.h"
> +
> +struct {
> + __uint(type, BPF_MAP_TYPE_RINGBUF);
> + __uint(max_entries, 8);
> +} ringbuf SEC(".maps");
> +
> +SEC("?tc")
> +__failure __msg("Unreleased reference")
> +int reject_with_reference(void *ctx)
> +{
> + struct { int i; } *f;
> +
> + f = bpf_obj_new(typeof(*f));
> + if (!f)
> + return 0;
> + bpf_throw(0);
> + return 0;
> +}
> +
> +SEC("?tc")
> +__failure __msg("frame_desc: merge: failed to merge old and new frame desc entry")
> +int reject_slot_with_distinct_ptr(struct __sk_buff *ctx)
> +{
> + void *p;
> +
> + if (ctx->len) {
> + p = bpf_ringbuf_reserve(&ringbuf, 8, 0);
> + } else {
> + p = bpf_obj_new(typeof(struct { int i; }));
> + }
> + bpf_throw(0);
> + return !p;
> +}
> +
> +SEC("?tc")
> +__failure __msg("frame_desc: merge: failed to merge old and new frame desc entry")
> +int reject_slot_with_distinct_ptr_old(struct __sk_buff *ctx)
> +{
> + void *p;
> +
> + if (ctx->len) {
> + p = bpf_obj_new(typeof(struct { int i; }));
> + } else {
> + p = bpf_ringbuf_reserve(&ringbuf, 8, 0);
> + }
> + bpf_throw(0);
> + return !p;
> +}
> +
> +SEC("?tc")
> +__failure __msg("frame_desc: merge: failed to merge old and new frame desc entry")
> +int reject_slot_with_misc_vs_ptr(struct __sk_buff *ctx)
> +{
> + void *p = (void *)bpf_ktime_get_ns();
> +
> + if (ctx->protocol)
> + p = bpf_ringbuf_reserve(&ringbuf, 8, 0);
> + bpf_throw(0);
> + return !p;
> +}
> +
> +SEC("?tc")
> +__failure __msg("Unreleased reference")
> +int reject_slot_with_misc_vs_ptr_old(struct __sk_buff *ctx)
> +{
> + void *p = bpf_ringbuf_reserve(&ringbuf, 8, 0);
> +
> + if (ctx->protocol)
> + p = (void *)bpf_ktime_get_ns();
> + bpf_throw(0);
> + return !p;
> +}
> +
> +SEC("?tc")
> +__failure __msg("frame_desc: merge: failed to merge old and new frame desc entry")
> +int reject_slot_with_invalid_vs_ptr(struct __sk_buff *ctx)
> +{
> + asm volatile (
> + "r7 = r1; \
> + r1 = %[ringbuf] ll; \
> + r2 = 8; \
> + r3 = 0; \
> + r4 = *(u32 *)(r7 + 0); \
> + r6 = *(u64 *)(r10 - 8); \
> + if r4 == 0 goto jump; \
> + call %[bpf_ringbuf_reserve]; \
> + r6 = r0; \
> + jump: \
> + r0 = 0; \
> + r1 = 0; \
> + call bpf_throw; \
> + " : : __imm(bpf_ringbuf_reserve),
> + __imm_addr(ringbuf)
> + : "memory");
> + return 0;
> +}
> +
> +SEC("?tc")
> +__failure __msg("Unreleased reference")
> +int reject_slot_with_invalid_vs_ptr_old(struct __sk_buff *ctx)
> +{
> + asm volatile (
> + "r7 = r1; \
> + r1 = %[ringbuf] ll; \
> + r2 = 8; \
> + r3 = 0; \
> + call %[bpf_ringbuf_reserve]; \
> + r6 = r0; \
> + r4 = *(u32 *)(r7 + 0); \
> + if r4 == 0 goto jump2; \
> + r6 = *(u64 *)(r10 - 8); \
> + jump2: \
> + r0 = 0; \
> + r1 = 0; \
> + call bpf_throw; \
> + " : : __imm(bpf_ringbuf_reserve),
> + __imm_addr(ringbuf)
> + : "memory");
> + return 0;
> +}
> +
> +SEC("?tc")
> +__failure __msg("Unreleased reference")
> +int reject_slot_with_zero_vs_ptr(struct __sk_buff *ctx)
> +{
> + asm volatile (
> + "r7 = *(u32 *)(r1 + 0); \
> + r1 = %[ringbuf] ll; \
> + r2 = 8; \
> + r3 = 0; \
> + call %[bpf_ringbuf_reserve]; \
> + *(u64 *)(r10 - 8) = r0; \
> + r0 = 0; \
> + if r7 != 0 goto jump3; \
> + *(u64 *)(r10 - 8) = r0; \
> + jump3: \
> + r0 = 0; \
> + r1 = 0; \
> + call bpf_throw; \
> + " : : __imm(bpf_ringbuf_reserve),
> + __imm_addr(ringbuf)
> + : "memory");
> + return 0;
> +}
> +
> +char _license[] SEC("license") = "GPL";
> diff --git a/tools/testing/selftests/bpf/progs/exceptions_fail.c b/tools/testing/selftests/bpf/progs/exceptions_fail.c
> index dfd164a7a261..1e73200c6276 100644
> --- a/tools/testing/selftests/bpf/progs/exceptions_fail.c
> +++ b/tools/testing/selftests/bpf/progs/exceptions_fail.c
> @@ -182,19 +182,6 @@ int reject_with_rbtree_add_throw(void *ctx)
> return 0;
> }
>
> -SEC("?tc")
> -__failure __msg("Unreleased reference")
> -int reject_with_reference(void *ctx)
> -{
> - struct foo *f;
> -
> - f = bpf_obj_new(typeof(*f));
> - if (!f)
> - return 0;
> - bpf_throw(0);
Hmm, so why is this a memory leak exactly? Apologies if this is already
explained clearly elsewhere in the stack.
> - return 0;
> -}
> -
> __noinline static int subprog_ref(struct __sk_buff *ctx)
> {
> struct foo *f;
> --
> 2.40.1
>