* [PATCH v2 0/2] Introduce CONFIG_CGROUP_LSM_NUM to render BPF_LSM_CGROUP attachment limit configurable
@ 2026-05-06 13:12 Paul Houssel
2026-05-06 13:12 ` [PATCH v2 1/2] bpf: render CGROUP_LSM_NUM configurable as a KConfig Paul Houssel
2026-05-06 13:12 ` [PATCH v2 2/2] selftests/bpf: add tests to verify the enforcement of CONFIG_CGROUP_LSM_NUM Paul Houssel
0 siblings, 2 replies; 7+ messages in thread
From: Paul Houssel @ 2026-05-06 13:12 UTC (permalink / raw)
To: paul.houssel, Andrii Nakryiko, Yonghong Song, Paul Houssel,
KP Singh, Alexei Starovoitov, Song Liu, Martin KaFai Lau,
Christian König, Florian Westphal, T.J. Mercier, Li RongQing,
Paul Chaignon, D. Wythe, Jakub Kicinski
Cc: Stanislav Fomichev, bpf
In include/linux/bpf-cgroup-defs.h, CGROUP_LSM_NUM defines the maximum
number of BPF_PROG_TYPE_LSM programs that can be simultaneously attached
using the BPF_LSM_CGROUP attachment type. It is currently hardcoded to 10.
This limit was introduced by commit c0e19f2c9a3e ("bpf: minimize number
of allocated lsm slots per program"), the first patch implementing
BPF_LSM_CGROUP attachment, and has not been changed since. Rather than
reserving one slot per LSM hook (a 1:1 static mapping across all 211
hooks available at the time), it introduced a dynamic scheme in which
only 10 slots exist per cgroup, allocated on demand.
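For illustration, here is a toy user-space model of that on-demand
scheme (names and details are mine for the sketch, not the kernel's
actual implementation):

```c
#include <assert.h>

#define CGROUP_LSM_NUM 10

/* Toy model: each slot remembers which LSM hook (identified here by a
 * BTF ID) it currently serves; a slot is claimed on first attachment
 * to a hook and reused for later attachments to the same hook. */
struct lsm_slot {
	unsigned int btf_id;
	int refcnt;
};

static struct lsm_slot slots[CGROUP_LSM_NUM];

/* Return the slot index serving this hook, claiming a free slot if
 * needed; -1 models the error the kernel returns once all slots are
 * taken by other hooks. */
static int slot_get(unsigned int btf_id)
{
	int i, free_idx = -1;

	for (i = 0; i < CGROUP_LSM_NUM; i++) {
		if (slots[i].refcnt && slots[i].btf_id == btf_id) {
			slots[i].refcnt++;
			return i;
		}
		if (!slots[i].refcnt && free_idx < 0)
			free_idx = i;
	}
	if (free_idx < 0)
		return -1;
	slots[free_idx].btf_id = btf_id;
	slots[free_idx].refcnt = 1;
	return free_idx;
}
```

The key property the selftest below relies on follows directly from this
model: attachments to already-claimed hooks keep succeeding, but the
11th distinct hook finds no free slot.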
In practice, eBPF-based tools may exceed this limit. I therefore propose
making CGROUP_LSM_NUM a Kconfig option so that users can tune it to their
requirements, rather than being constrained by a hardcoded default that
was chosen somewhat arbitrarily in the first implementation of this
attachment type. Conversely, some use cases may want to lower the limit
below 10 to reduce memory overhead.
Modifying this limit has been discussed previously in
https://lore.kernel.org/bpf/20220408225628.oog4a3qteauhqkdn@kafai-mbp.dhcp.thefacebook.com/,
where the view that this limit is too small was shared as well. However,
that discussion reached no conclusion on making the limit fully dynamic,
i.e. without a fixed array size.
Changes in V2:
- fix SOB in patch 1/2
- add reference to previous related discussion provided by Paul
Chaignon
- add Stanislav Fomichev to cc
- Link to V1:
https://lore.kernel.org/bpf/20260506065048.592474-1-paulhoussel2@gmail.com/
Paul Houssel (2):
bpf: render CGROUP_LSM_NUM configurable as a KConfig
selftests/bpf: add tests to verify the enforcement of
CONFIG_CGROUP_LSM_NUM
include/linux/bpf-cgroup-defs.h | 2 +-
kernel/bpf/Kconfig | 13 +++
tools/testing/selftests/bpf/config | 1 +
.../selftests/bpf/prog_tests/cgroup_lsm_num.c | 60 ++++++++++++
.../selftests/bpf/progs/cgroup_lsm_num.c | 92 +++++++++++++++++++
5 files changed, 167 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/bpf/prog_tests/cgroup_lsm_num.c
create mode 100644 tools/testing/selftests/bpf/progs/cgroup_lsm_num.c
--
2.54.0
* [PATCH v2 1/2] bpf: render CGROUP_LSM_NUM configurable as a KConfig
2026-05-06 13:12 [PATCH v2 0/2] Introduce CONFIG_CGROUP_LSM_NUM to render BPF_LSM_CGROUP attachment limit configurable Paul Houssel
@ 2026-05-06 13:12 ` Paul Houssel
2026-05-06 13:48 ` bot+bpf-ci
2026-05-06 19:46 ` sashiko-bot
2026-05-06 13:12 ` [PATCH v2 2/2] selftests/bpf: add tests to verify the enforcement of CONFIG_CGROUP_LSM_NUM Paul Houssel
1 sibling, 2 replies; 7+ messages in thread
From: Paul Houssel @ 2026-05-06 13:12 UTC (permalink / raw)
To: paul.houssel, Andrii Nakryiko, Yonghong Song, Paul Houssel,
KP Singh, Alexei Starovoitov, Song Liu, Martin KaFai Lau,
Christian König, Florian Westphal, T.J. Mercier, Li RongQing,
Paul Chaignon, D. Wythe, Jakub Kicinski
Cc: Stanislav Fomichev, bpf
In include/linux/bpf-cgroup-defs.h, CGROUP_LSM_NUM defines the maximum
number of BPF_PROG_TYPE_LSM programs that can be simultaneously attached
using the `BPF_LSM_CGROUP` attachment type. We set the value to the newly
introduced `CONFIG_CGROUP_LSM_NUM` Kconfig option, allowing users and
distributions to tune this limit at build time rather than relying on a
hardcoded value.
The option ranges from 0 to 300 and defaults to 10, preserving the
existing behaviour. There are currently 273 LSM hooks, but this number is
subject to change. I couldn't find a macro counting the total number of
LSM hooks and therefore arbitrarily set the upper bound to 300. I am open
to suggestions on whether and how to derive this limit automatically.
Signed-off-by: Paul Houssel <paulhoussel2@gmail.com>
---
include/linux/bpf-cgroup-defs.h | 2 +-
kernel/bpf/Kconfig | 13 +++++++++++++
2 files changed, 14 insertions(+), 1 deletion(-)
diff --git a/include/linux/bpf-cgroup-defs.h b/include/linux/bpf-cgroup-defs.h
index c9e6b26abab6..9ab5ca3dbaba 100644
--- a/include/linux/bpf-cgroup-defs.h
+++ b/include/linux/bpf-cgroup-defs.h
@@ -12,7 +12,7 @@ struct bpf_prog_array;
#ifdef CONFIG_BPF_LSM
/* Maximum number of concurrently attachable per-cgroup LSM hooks. */
-#define CGROUP_LSM_NUM 10
+#define CGROUP_LSM_NUM CONFIG_CGROUP_LSM_NUM
#else
#define CGROUP_LSM_NUM 0
#endif
diff --git a/kernel/bpf/Kconfig b/kernel/bpf/Kconfig
index eb3de35734f0..7f51598aa8fe 100644
--- a/kernel/bpf/Kconfig
+++ b/kernel/bpf/Kconfig
@@ -101,4 +101,17 @@ config BPF_LSM
If you are unsure how to answer this question, answer N.
+config CGROUP_LSM_NUM
+ int "Maximum number of per-cgroup LSM hooks"
+ depends on BPF_LSM
+ depends on CGROUP_BPF
+ range 0 300
+ default 10
+ help
+ Maximum number of concurrently attachable per-cgroup LSM hooks.
+ Increasing this value increases the size of the cgroup_lsm_atype
+ structure.
+
+ If you are unsure, leave the default value.
+
endmenu # "BPF subsystem"
--
2.54.0
* [PATCH v2 2/2] selftests/bpf: add tests to verify the enforcement of CONFIG_CGROUP_LSM_NUM
2026-05-06 13:12 [PATCH v2 0/2] Introduce CONFIG_CGROUP_LSM_NUM to render BPF_LSM_CGROUP attachment limit configurable Paul Houssel
2026-05-06 13:12 ` [PATCH v2 1/2] bpf: render CGROUP_LSM_NUM configurable as a KConfig Paul Houssel
@ 2026-05-06 13:12 ` Paul Houssel
2026-05-06 13:48 ` bot+bpf-ci
2026-05-06 20:07 ` sashiko-bot
1 sibling, 2 replies; 7+ messages in thread
From: Paul Houssel @ 2026-05-06 13:12 UTC (permalink / raw)
To: paul.houssel, Andrii Nakryiko, Yonghong Song, Paul Houssel,
KP Singh, Alexei Starovoitov, Song Liu, Martin KaFai Lau,
Christian König, Florian Westphal, T.J. Mercier, Li RongQing,
Paul Chaignon, D. Wythe, Jakub Kicinski
Cc: Stanislav Fomichev, bpf
Add a selftest that verifies the kernel correctly enforces
CONFIG_CGROUP_LSM_NUM as the maximum number of concurrently attachable
per-cgroup LSM hook slots.
The BPF program side (progs/cgroup_lsm_num.c) defines 12 lsm_cgroup
programs, each attached to a distinct LSM hook. The test side
(prog_tests/cgroup_lsm_num.c) attempts to attach all 12 programs one by
one to a cgroup, and verifies that exactly 10 succeed and 2 are rejected,
matching the value of CONFIG_CGROUP_LSM_NUM set to 10 in the selftest
Kconfig fragment.
Signed-off-by: Paul Houssel <paulhoussel2@gmail.com>
---
tools/testing/selftests/bpf/config | 1 +
.../selftests/bpf/prog_tests/cgroup_lsm_num.c | 60 ++++++++++++
.../selftests/bpf/progs/cgroup_lsm_num.c | 92 +++++++++++++++++++
3 files changed, 153 insertions(+)
create mode 100644 tools/testing/selftests/bpf/prog_tests/cgroup_lsm_num.c
create mode 100644 tools/testing/selftests/bpf/progs/cgroup_lsm_num.c
diff --git a/tools/testing/selftests/bpf/config b/tools/testing/selftests/bpf/config
index 24855381290d..e4c5dd86c640 100644
--- a/tools/testing/selftests/bpf/config
+++ b/tools/testing/selftests/bpf/config
@@ -11,6 +11,7 @@ CONFIG_BPF_STREAM_PARSER=y
CONFIG_BPF_SYSCALL=y
# CONFIG_BPF_UNPRIV_DEFAULT_OFF is not set
CONFIG_CGROUP_BPF=y
+CONFIG_CGROUP_LSM_NUM=10
CONFIG_CRYPTO_HMAC=y
CONFIG_CRYPTO_SHA256=y
CONFIG_CRYPTO_USER_API=y
diff --git a/tools/testing/selftests/bpf/prog_tests/cgroup_lsm_num.c b/tools/testing/selftests/bpf/prog_tests/cgroup_lsm_num.c
new file mode 100644
index 000000000000..1c5825c6c3d0
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/cgroup_lsm_num.c
@@ -0,0 +1,60 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 Orange */
+
+/*
+ * Test that the kernel enforces CONFIG_CGROUP_LSM_NUM as the maximum
+ * number of concurrently used per-cgroup LSM hook slots.
+ *
+ * - load a BPF object with 12 programs each on a distinct lsm_cgroup hook
+ * - attach them one by one via bpf_program__attach_cgroup()
+ * - at some point the slots are exhausted and attachment fails
+ * - verify that 10 attachments succeed and 2 fail
+ */
+
+#include <test_progs.h>
+#include <bpf/bpf.h>
+
+#include "cgroup_lsm_num.skel.h"
+#include "cgroup_helpers.h"
+
+void test_cgroup_lsm_num(void)
+{
+ struct cgroup_lsm_num *skel = NULL;
+ struct bpf_program *prog;
+ int cgroup_fd = -1;
+ int attached = 0;
+ int failed = 0;
+
+ cgroup_fd = test__join_cgroup("/cgroup_lsm_num");
+ if (!ASSERT_GE(cgroup_fd, 0, "join_cgroup"))
+ return;
+
+ skel = cgroup_lsm_num__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "open_and_load"))
+ goto out;
+
+ bpf_object__for_each_program(prog, skel->obj) {
+ struct bpf_link *link;
+
+ link = bpf_program__attach_cgroup(prog, cgroup_fd);
+ if (!link) {
+ if (errno == EOPNOTSUPP) {
+ test__skip();
+ goto out;
+ }
+ failed++;
+ } else {
+ attached++;
+ }
+ }
+
+ // CONFIG_CGROUP_LSM_NUM set to 10
+ // -> 10 programs shall be attached
+ ASSERT_EQ(attached, 10, "at least one attached");
+ // -> 2 programs shall be rejected
+ ASSERT_EQ(failed, 2, "limit was enforced");
+
+out:
+ close(cgroup_fd);
+ cgroup_lsm_num__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/cgroup_lsm_num.c b/tools/testing/selftests/bpf/progs/cgroup_lsm_num.c
new file mode 100644
index 000000000000..0cce61cd7b26
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/cgroup_lsm_num.c
@@ -0,0 +1,92 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 Orange */
+
+/*
+ * 12 LSM programs with lsm_cgroup attachment type, each on a distinct LSM
+ * hook. Used by prog_tests/cgroup_lsm_num.c to verify that the kernel
+ * enforces the CONFIG_CGROUP_LSM_NUM limit on unique per-cgroup LSM hook
+ * slots. With CONFIG_CGROUP_LSM_NUM set to 10, 10 shall be attached and 2
+ * rejected.
+ */
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+SEC("lsm_cgroup/socket_create")
+int BPF_PROG(hook0, int family, int type, int protocol, int kern)
+{
+ return 1;
+}
+
+SEC("lsm_cgroup/socket_post_create")
+int BPF_PROG(hook1, struct socket *sock, int family, int type,
+ int protocol, int kern)
+{
+ return 1;
+}
+
+SEC("lsm_cgroup/socket_socketpair")
+int BPF_PROG(hook2, struct socket *socka, struct socket *sockb)
+{
+ return 1;
+}
+
+SEC("lsm_cgroup/socket_bind")
+int BPF_PROG(hook3, struct socket *sock, struct sockaddr *address,
+ int addrlen)
+{
+ return 1;
+}
+
+SEC("lsm_cgroup/socket_connect")
+int BPF_PROG(hook4, struct socket *sock, struct sockaddr *address,
+ int addrlen)
+{
+ return 1;
+}
+
+SEC("lsm_cgroup/socket_listen")
+int BPF_PROG(hook5, struct socket *sock, int backlog)
+{
+ return 1;
+}
+
+SEC("lsm_cgroup/socket_accept")
+int BPF_PROG(hook6, struct socket *sock, struct socket *newsock)
+{
+ return 1;
+}
+
+SEC("lsm_cgroup/socket_sendmsg")
+int BPF_PROG(hook7, struct socket *sock, struct msghdr *msg, int size)
+{
+ return 1;
+}
+
+SEC("lsm_cgroup/socket_recvmsg")
+int BPF_PROG(hook8, struct socket *sock, struct msghdr *msg, int size,
+ int flags)
+{
+ return 1;
+}
+
+SEC("lsm_cgroup/socket_getsockname")
+int BPF_PROG(hook9, struct socket *sock)
+{
+ return 1;
+}
+
+SEC("lsm_cgroup/socket_getpeername")
+int BPF_PROG(hook10, struct socket *sock)
+{
+ return 1;
+}
+
+SEC("lsm_cgroup/socket_shutdown")
+int BPF_PROG(hook11, struct socket *sock, int how)
+{
+ return 1;
+}
--
2.54.0
* Re: [PATCH v2 1/2] bpf: render CGROUP_LSM_NUM configurable as a KConfig
2026-05-06 13:12 ` [PATCH v2 1/2] bpf: render CGROUP_LSM_NUM configurable as a KConfig Paul Houssel
@ 2026-05-06 13:48 ` bot+bpf-ci
2026-05-06 19:46 ` sashiko-bot
1 sibling, 0 replies; 7+ messages in thread
From: bot+bpf-ci @ 2026-05-06 13:48 UTC (permalink / raw)
To: paulhoussel2, paul.houssel, andrii, yonghong.song, paulhoussel2,
kpsingh, ast, song, martin.lau, christian.koenig, fw, tjmercier,
lirongqing, paul.chaignon, alibuda, kuba
Cc: sdf, bpf, ast, andrii, daniel, martin.lau, eddyz87, yonghong.song,
clm, ihor.solodrai
> diff --git a/include/linux/bpf-cgroup-defs.h b/include/linux/bpf-cgroup-defs.h
> index c9e6b26abab6..9ab5ca3dbaba 100644
> --- a/include/linux/bpf-cgroup-defs.h
> +++ b/include/linux/bpf-cgroup-defs.h
[ ... ]
> diff --git a/kernel/bpf/Kconfig b/kernel/bpf/Kconfig
> index eb3de35734f0..7f51598aa8fe 100644
> --- a/kernel/bpf/Kconfig
> +++ b/kernel/bpf/Kconfig
> @@ -101,4 +101,17 @@ config BPF_LSM
>
> If you are unsure how to answer this question, answer N.
>
> +config CGROUP_LSM_NUM
> + int "Maximum number of per-cgroup LSM hooks"
> + depends on BPF_LSM
> + depends on CGROUP_BPF
> + range 0 300
> + default 10
> + help
> + Maximum number of concurrently attachable per-cgroup LSM hooks.
> + Increasing this value increases the size of the cgroup_lsm_atype
> + structure.
The help text here mentions that increasing this value increases the size
of the cgroup_lsm_atype structure, but doesn't mention the more significant
memory impact on struct cgroup_bpf.
While the cgroup_lsm_atype array in kernel/bpf/cgroup.c is a single global
array that grows by (CONFIG_CGROUP_LSM_NUM - 10) * 8 bytes, struct cgroup_bpf
grows by approximately (CONFIG_CGROUP_LSM_NUM - 10) * 25 bytes per cgroup
instance, since MAX_CGROUP_BPF_ATTACH_TYPE expands to include CGROUP_LSM_NUM.
The affected arrays in struct cgroup_bpf are:
include/linux/bpf-cgroup-defs.h:struct cgroup_bpf {
struct bpf_prog_array __rcu *effective[MAX_CGROUP_BPF_ATTACH_TYPE];
struct hlist_head progs[MAX_CGROUP_BPF_ATTACH_TYPE];
u8 flags[MAX_CGROUP_BPF_ATTACH_TYPE];
u64 revisions[MAX_CGROUP_BPF_ATTACH_TYPE];
};
For a system with many cgroups, increasing CONFIG_CGROUP_LSM_NUM from 10 to
300 would add approximately 7.25 KB per cgroup. Should the help text mention
this per-cgroup memory impact so users understand the full cost when tuning
this configuration option?
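As a sanity check on those numbers (back-of-envelope arithmetic only,
assuming a 64-bit kernel, the struct layout quoted above, and no
padding between the arrays):

```c
#include <assert.h>
#include <stddef.h>

/* Per-slot footprint in struct cgroup_bpf: one effective-array pointer,
 * one hlist_head (a single pointer), one u8 flag and one u64 revision. */
static size_t per_slot_bytes(void)
{
	return sizeof(void *) + sizeof(void *) + sizeof(unsigned char) +
	       sizeof(unsigned long long);
}

/* Extra bytes per cgroup when raising the slot count from lo to hi. */
static size_t extra_per_cgroup(int lo, int hi)
{
	return (size_t)(hi - lo) * per_slot_bytes();
}
```

With lo = 10 and hi = 300 this gives 290 * 25 = 7250 bytes per cgroup,
consistent with the ~7.25 KB figure above.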
> +
> + If you are unsure, leave the default value.
> +
> endmenu # "BPF subsystem"
---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/25437889223
* Re: [PATCH v2 2/2] selftests/bpf: add tests to verify the enforcement of CONFIG_CGROUP_LSM_NUM
2026-05-06 13:12 ` [PATCH v2 2/2] selftests/bpf: add tests to verify the enforcement of CONFIG_CGROUP_LSM_NUM Paul Houssel
@ 2026-05-06 13:48 ` bot+bpf-ci
2026-05-06 20:07 ` sashiko-bot
1 sibling, 0 replies; 7+ messages in thread
From: bot+bpf-ci @ 2026-05-06 13:48 UTC (permalink / raw)
To: paulhoussel2, paul.houssel, andrii, yonghong.song, paulhoussel2,
kpsingh, ast, song, martin.lau, christian.koenig, fw, tjmercier,
lirongqing, paul.chaignon, alibuda, kuba
Cc: sdf, bpf, ast, andrii, daniel, martin.lau, eddyz87, yonghong.song,
clm, ihor.solodrai
> diff --git a/tools/testing/selftests/bpf/config b/tools/testing/selftests/bpf/config
> index 24855381290d..e4c5dd86c640 100644
> --- a/tools/testing/selftests/bpf/config
> +++ b/tools/testing/selftests/bpf/config
[ ... ]
> diff --git a/tools/testing/selftests/bpf/progs/cgroup_lsm_num.c b/tools/testing/selftests/bpf/progs/cgroup_lsm_num.c
> new file mode 100644
> index 000000000000..0cce61cd7b26
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/cgroup_lsm_num.c
> @@ -0,0 +1,92 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright (c) 2026 Orange */
> +
> +/*
> + * 12 LSM programs with lsm_cgroup attachment type, each on a distinct LSM
> + * hook. Used by prog_tests/cgroup_lsm_num.c to verify that the kernel
> + * enforces the CONFIG_CGROUP_LSM_NUM limit on unique per-cgroup LSM hook
> + * slots. With CONFIG_CGROUP_LSM_NUM set to 10, 10 shall be attached and 2
> + * rejected.
> + */
> +
> +#include "vmlinux.h"
> +#include <bpf/bpf_helpers.h>
> +#include <bpf/bpf_tracing.h>
> +
> +char _license[] SEC("license") = "GPL";
> +
> +SEC("lsm_cgroup/socket_create")
> +int BPF_PROG(hook0, int family, int type, int protocol, int kern)
> +{
> + return 1;
> +}
> +
> +SEC("lsm_cgroup/socket_post_create")
> +int BPF_PROG(hook1, struct socket *sock, int family, int type,
> + int protocol, int kern)
> +{
> + return 1;
> +}
> +
> +SEC("lsm_cgroup/socket_socketpair")
> +int BPF_PROG(hook2, struct socket *socka, struct socket *sockb)
> +{
> + return 1;
> +}
> +
> +SEC("lsm_cgroup/socket_bind")
> +int BPF_PROG(hook3, struct socket *sock, struct sockaddr *address,
> + int addrlen)
> +{
> + return 1;
> +}
> +
> +SEC("lsm_cgroup/socket_connect")
> +int BPF_PROG(hook4, struct socket *sock, struct sockaddr *address,
> + int addrlen)
> +{
> + return 1;
> +}
> +
> +SEC("lsm_cgroup/socket_listen")
> +int BPF_PROG(hook5, struct socket *sock, int backlog)
> +{
> + return 1;
> +}
> +
> +SEC("lsm_cgroup/socket_accept")
> +int BPF_PROG(hook6, struct socket *sock, struct socket *newsock)
> +{
> + return 1;
> +}
> +
> +SEC("lsm_cgroup/socket_sendmsg")
> +int BPF_PROG(hook7, struct socket *sock, struct msghdr *msg, int size)
> +{
> + return 1;
> +}
> +
> +SEC("lsm_cgroup/socket_recvmsg")
> +int BPF_PROG(hook8, struct socket *sock, struct msghdr *msg, int size,
> + int flags)
> +{
> + return 1;
> +}
> +
> +SEC("lsm_cgroup/socket_getsockname")
> +int BPF_PROG(hook9, struct socket *sock)
> +{
> + return 1;
> +}
> +
> +SEC("lsm_cgroup/socket_getpeername")
> +int BPF_PROG(hook10, struct socket *sock)
> +{
> + return 1;
> +}
> +
> +SEC("lsm_cgroup/socket_shutdown")
> +int BPF_PROG(hook11, struct socket *sock, int how)
> +{
> + return 1;
> +}
Could these 12 BPF program declarations be refactored using a macro? The
implementations are identical except for the hook name in the SEC() attribute.
A reviewer previously suggested this approach to reduce verbosity since only
the attach point changes between declarations.
Reference: https://lore.kernel.org/bpf/afs_CIQUer2tfh_3@mail.gmail.com/
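One possible shape for that macro (a sketch only: SEC() and BPF_PROG()
are stubbed out here so the pattern compiles standalone, whereas in the
selftest they come from <bpf/bpf_helpers.h> and <bpf/bpf_tracing.h>, and
the macro name is illustrative):

```c
#include <assert.h>

/* Stand-ins for the libbpf macros, for standalone illustration only. */
#define SEC(name)
#define BPF_PROG(fn, ...) fn(__VA_ARGS__)

struct socket; /* opaque here; vmlinux.h provides it in the selftest */

/* One definition per hook: only the section name and the prototype
 * vary between the 12 programs, so a single macro covers them all. */
#define LSM_CGROUP_ALLOW(hook, fn, ...)		\
	SEC("lsm_cgroup/" #hook)		\
	int BPF_PROG(fn, __VA_ARGS__) { return 1; }

LSM_CGROUP_ALLOW(socket_create, hook0, int family, int type, int protocol, int kern)
LSM_CGROUP_ALLOW(socket_listen, hook5, struct socket *sock, int backlog)
LSM_CGROUP_ALLOW(socket_shutdown, hook11, struct socket *sock, int how)
```

The remaining hooks would follow the same pattern, one invocation line
each, replacing roughly 80 lines of near-identical boilerplate.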
---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/25437889223
* Re: [PATCH v2 1/2] bpf: render CGROUP_LSM_NUM configurable as a KConfig
2026-05-06 13:12 ` [PATCH v2 1/2] bpf: render CGROUP_LSM_NUM configurable as a KConfig Paul Houssel
2026-05-06 13:48 ` bot+bpf-ci
@ 2026-05-06 19:46 ` sashiko-bot
1 sibling, 0 replies; 7+ messages in thread
From: sashiko-bot @ 2026-05-06 19:46 UTC (permalink / raw)
To: Paul Houssel; +Cc: bpf
Thank you for your contribution! Sashiko AI review found 3 potential issue(s) to consider:
- [High] Setting `CONFIG_CGROUP_LSM_NUM` to large values (e.g., 300) causes a stack frame size violation in `cgroup_bpf_inherit()`, breaking the build and risking stack overflows.
- [High] Increasing `CONFIG_CGROUP_LSM_NUM` massively bloats `struct cgroup` for every cgroup instance system-wide, causing severe un-reclaimable memory overhead.
- [Low] Setting `CONFIG_CGROUP_LSM_NUM` to 0 creates a global zero-length array.
--
commit 51a90971268ad1dbaa85eb0becb17cac4d98af78
Author: Paul Houssel <paulhoussel2@gmail.com>
bpf: render CGROUP_LSM_NUM configurable as a KConfig
This commit introduces the CONFIG_CGROUP_LSM_NUM Kconfig option to allow
configuring the maximum number of simultaneously attachable BPF_PROG_TYPE_LSM
programs per cgroup. It replaces the hardcoded limit of 10 with a
configurable range from 0 to 300.
> diff --git a/kernel/bpf/Kconfig b/kernel/bpf/Kconfig
> index eb3de35734f09..7f51598aa8fe6 100644
> --- a/kernel/bpf/Kconfig
> +++ b/kernel/bpf/Kconfig
[ ... ]
> @@ -101,4 +101,17 @@ config BPF_LSM
>
> If you are unsure how to answer this question, answer N.
>
> +config CGROUP_LSM_NUM
> + int "Maximum number of per-cgroup LSM hooks"
> + depends on BPF_LSM
> + depends on CGROUP_BPF
> + range 0 300
Could this large upper bound cause a stack frame size violation?
In kernel/bpf/cgroup.c, cgroup_bpf_inherit() allocates an array of pointers
on the stack:
struct bpf_prog_array *arrays[ARRAY_SIZE(cgrp->bpf.effective)];
The size of the effective array scales directly with MAX_CGROUP_BPF_ATTACH_TYPE.
If CONFIG_CGROUP_LSM_NUM is configured to its maximum of 300,
MAX_CGROUP_BPF_ATTACH_TYPE becomes 329. Allocating 329 pointers on the stack
requires 2,632 bytes, which exceeds the standard kernel stack frame warning
threshold (CONFIG_FRAME_WARN=2048).
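The arithmetic behind that claim, assuming 64-bit pointers (an
illustrative check, not a measurement):

```c
#include <assert.h>
#include <stddef.h>

/* Size of the on-stack bpf_prog_array pointer array in
 * cgroup_bpf_inherit() for a given MAX_CGROUP_BPF_ATTACH_TYPE. */
static size_t inherit_stack_bytes(int max_attach_type)
{
	return (size_t)max_attach_type * sizeof(void *);
}
```

With max_attach_type = 329 this yields 2632 bytes, above the default
CONFIG_FRAME_WARN threshold of 2048.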
Does this large upper bound also bloat struct cgroup for every cgroup
instance system-wide?
MAX_CGROUP_BPF_ATTACH_TYPE dictates the size of four arrays embedded directly
inside struct cgroup_bpf: effective (pointers), progs (hlist_heads),
flags (u8), and revisions (u64).
Increasing the limit from the default 10 to 300 adds over 6KB of memory overhead
to struct cgroup_bpf. Because struct cgroup_bpf is embedded within
struct cgroup, this results in >6KB of un-reclaimable slab memory bloat for
every single cgroup created system-wide.
Could a minimum value of 0 create a global zero-length array?
If set to 0, the declaration in kernel/bpf/cgroup.c expands to:
static struct cgroup_lsm_atype cgroup_lsm_atype[0];
While GCC supports zero-length arrays as an extension, declaring a static
global zero-length array is non-standard C and can trigger warnings in
fortified builds or with stricter compiler flags.
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260506131257.713895-1-paulhoussel2@gmail.com?part=1
* Re: [PATCH v2 2/2] selftests/bpf: add tests to verify the enforcement of CONFIG_CGROUP_LSM_NUM
2026-05-06 13:12 ` [PATCH v2 2/2] selftests/bpf: add tests to verify the enforcement of CONFIG_CGROUP_LSM_NUM Paul Houssel
2026-05-06 13:48 ` bot+bpf-ci
@ 2026-05-06 20:07 ` sashiko-bot
1 sibling, 0 replies; 7+ messages in thread
From: sashiko-bot @ 2026-05-06 20:07 UTC (permalink / raw)
To: Paul Houssel; +Cc: bpf
Thank you for your contribution! Sashiko AI review found 4 potential issue(s) to consider:
- [High] Resource Leak (Memory and File Descriptor) for bpf_link objects
- [High] Logic Flaw preventing graceful test skip on unsupported kernels
- [Medium] Test flakiness under concurrent execution due to global system limit
- [Medium] Test failure on kernels with different configurations due to hardcoded limit
--
commit 18126ecb98cd2c651e3993d4401ade44b2577a2f
Author: Paul Houssel <paulhoussel2@gmail.com>
selftests/bpf: add tests to verify the enforcement of CONFIG_CGROUP_LSM_NUM
This patch adds a selftest to verify that the kernel limits the number of
concurrently attachable per-cgroup LSM hook slots to the configured limit.
diff --git a/tools/testing/selftests/bpf/prog_tests/cgroup_lsm_num.c b/tools/testing/selftests/bpf/prog_tests/cgroup_lsm_num.c
new file mode 100644
index 0000000000000..1c5825c6c3d07
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/cgroup_lsm_num.c
[ ... ]
> void test_cgroup_lsm_num(void)
> {
Since CONFIG_CGROUP_LSM_NUM enforces a global limit across the system,
could parallel selftest execution (e.g. test_progs -j) consume slots and
cause this test to randomly fail?
Should this test be renamed to serial_test_cgroup_lsm_num() to prevent
concurrent execution issues?
[ ... ]
> bpf_object__for_each_program(prog, skel->obj) {
> struct bpf_link *link;
>
> link = bpf_program__attach_cgroup(prog, cgroup_fd);
> if (!link) {
> if (errno == EOPNOTSUPP) {
When CONFIG_BPF_LSM is disabled, bpf_cgroup_atype_find() returns
-EOPNOTSUPP, but __cgroup_bpf_attach() translates any negative atype to
-EINVAL.
Does this mean the test will receive EINVAL instead of EOPNOTSUPP and fail
to skip gracefully on unsupported kernels?
> test__skip();
> goto out;
> }
> failed++;
> } else {
> attached++;
> }
> }
When a program successfully attaches, bpf_program__attach_cgroup()
allocates a struct bpf_link and a kernel file descriptor. Since the link
pointer is overwritten on each loop iteration and the skeleton destroy
function does not free manually attached links, do these links and file
descriptors leak and prevent the cgroup from being destroyed?
>
> // CONFIG_CGROUP_LSM_NUM set to 10
> // -> 10 programs shall be attached
> ASSERT_EQ(attached, 10, "at least one attached");
BPF selftests frequently run against external or distribution kernels. If
the test runs on a kernel compiled with a larger limit (e.g., 20), all 12
programs will attach successfully and this assertion will fail.
Can we use extern int CONFIG_CGROUP_LSM_NUM __kconfig in the BPF code to
dynamically retrieve the actual limit instead of hardcoding 10?
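A minimal model of why that helps (the helper name is hypothetical; the
real fix would read the limit at runtime, e.g. via a __kconfig extern
exposed through the skeleton, rather than hardcoding 10):

```c
#include <assert.h>

/* With nprogs programs and slot_limit free slots, the first slot_limit
 * distinct-hook attachments succeed and the rest are rejected.
 * Asserting against the runtime limit instead of a constant keeps the
 * expected counts valid on any kernel configuration. */
static void expected_outcome(int slot_limit, int nprogs,
			     int *attached, int *failed)
{
	*attached = nprogs < slot_limit ? nprogs : slot_limit;
	*failed = nprogs - *attached;
}
```

On a kernel built with the default limit of 10 this predicts 10/2, while
on a kernel with a limit of 20 it predicts 12/0, which is exactly the
case the hardcoded assertion gets wrong.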
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260506131257.713895-1-paulhoussel2@gmail.com?part=2