BPF List
From: sashiko-bot@kernel.org
To: "Paul Houssel" <paulhoussel2@gmail.com>
Cc: bpf@vger.kernel.org
Subject: Re: [PATCH v3 2/2] selftests/bpf: add tests to verify the enforcement of CONFIG_CGROUP_LSM_NUM
Date: Wed, 06 May 2026 21:24:14 +0000	[thread overview]
Message-ID: <20260506212414.CF3DCC2BCB0@smtp.kernel.org> (raw)
In-Reply-To: <20260506150547.767315-3-paulhoussel2@gmail.com>

Thank you for your contribution! Sashiko AI review found 5 potential issues to consider:
- [High] The test leaks `struct bpf_link *` objects, leading to memory and file descriptor leaks that permanently exhaust global LSM cgroup slots.
- [Medium] The test assumes any attachment failure is strictly due to the `CONFIG_CGROUP_LSM_NUM` limit, which masks genuine bugs.
- [Medium] The test is flaky when run in parallel because it assumes exclusive access to the global `cgroup_lsm_atype` array.
- [Medium] The test hardcodes `CONFIG_CGROUP_LSM_NUM=10`, making it brittle on non-default kernel configurations.
- [Low] The message provided to the `ASSERT_EQ` macro for the attached count is misleading.
--

commit c964298fd80079f668d77b4f5c169d14d01bc9e0
Author: Paul Houssel <paulhoussel2@gmail.com>

selftests/bpf: add tests to verify the enforcement of CONFIG_CGROUP_LSM_NUM

This commit adds a selftest to verify that the kernel enforces
CONFIG_CGROUP_LSM_NUM as the maximum number of concurrently attachable
per-cgroup LSM hook slots.

> diff --git a/tools/testing/selftests/bpf/prog_tests/cgroup_lsm_num.c b/tools/testing/selftests/bpf/prog_tests/cgroup_lsm_num.c
> new file mode 100644
> index 0000000000000..1c5825c6c3d07
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/prog_tests/cgroup_lsm_num.c

[ ... ]

> @@ -0,0 +1,60 @@

[ ... ]

> +	bpf_object__for_each_program(prog, skel->obj) {
> +		struct bpf_link *link;
> +
> +		link = bpf_program__attach_cgroup(prog, cgroup_fd);
> +		if (!link) {
> +			if (errno == EOPNOTSUPP) {
> +				test__skip();
> +				goto out;
> +			}
> +			failed++;

Should this check if errno is specifically E2BIG?

If the attachment fails for an unrelated reason, such as an invalid hook
configuration (EINVAL) or memory exhaustion (ENOMEM), this will still
increment the failure count and might falsely pass the test.

> +		} else {
> +			attached++;

Does this code leak the struct bpf_link objects?

The link pointer returned by bpf_program__attach_cgroup() is dynamically
allocated, but bpf_link__destroy() is never called to free it during or
after the loop.

Because these leaked file descriptors keep the BPF links active in the
kernel, they will indefinitely occupy the slots in the global
cgroup_lsm_atype array. Could this exhaust the available slots and cause
subsequent tests in the suite that use BPF_LSM_CGROUP hooks to
spuriously fail with -E2BIG?

> +		}
> +	}
> +
> +	// CONFIG_CGROUP_LSM_NUM set to 10
> +	// -> 10 programs shall be attached
> +	ASSERT_EQ(attached, 10, "at least one attached");

Since CONFIG_CGROUP_LSM_NUM limits the number of distinct per-cgroup
LSM hooks globally across the entire system, could this test fail when
run in parallel?

If the test suite is executed concurrently (e.g., test_progs -j) and another
running test utilizes lsm_cgroup hooks, some slots will already be occupied.
This would cause the limit to be exhausted prematurely and fail the assertion.

Will this test fail on custom or distribution kernels where the limit is
configured to a higher value?

While the selftest Kconfig fragment sets the limit to 10, selftests are
often executed against pre-built kernels. If the limit is 12 or higher,
all 12 BPF programs will attach successfully, causing the hardcoded
assertions to fail unconditionally.

Could the test dynamically read the limit via
extern int CONFIG_CGROUP_LSM_NUM __kconfig instead?

This isn't a bug, but the assertion message says "at least one attached"
when it actually verifies that exactly 10 are attached.

> +	// -> 2 programs shall be rejected
> +	ASSERT_EQ(failed, 2, "limit was enforced");
> +
> +out:
> +	close(cgroup_fd);
> +	cgroup_lsm_num__destroy(skel);
> +}

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260506150547.767315-1-paulhoussel2@gmail.com?part=2


Thread overview: 11+ messages
2026-05-06 15:05 [PATCH v3 0/2] Introduce CONFIG_CGROUP_LSM_NUM to render BPF_LSM_CGROUP attachment limit configurable Paul Houssel
2026-05-06 15:05 ` [PATCH v3 1/2] bpf: render CGROUP_LSM_NUM configurable as a KConfig Paul Houssel
2026-05-06 15:52   ` bot+bpf-ci
2026-05-06 16:11     ` Paul Chaignon
2026-05-06 21:08   ` sashiko-bot
2026-05-07 16:39     ` Paul Houssel
2026-05-08 22:16       ` Alexei Starovoitov
2026-05-06 15:05 ` [PATCH v3 2/2] selftests/bpf: add tests to verify the enforcement of CONFIG_CGROUP_LSM_NUM Paul Houssel
2026-05-06 16:05   ` Paul Chaignon
2026-05-06 21:24   ` sashiko-bot [this message]
2026-05-06 16:13 ` [PATCH v3 0/2] Introduce CONFIG_CGROUP_LSM_NUM to render BPF_LSM_CGROUP attachment limit configurable Paul Chaignon
