* [PATCH v3 0/2] Introduce CONFIG_CGROUP_LSM_NUM to render BPF_LSM_CGROUP attachment limit configurable
@ 2026-05-06 15:05 Paul Houssel
2026-05-06 15:05 ` [PATCH v3 1/2] bpf: render CGROUP_LSM_NUM configurable as a KConfig Paul Houssel
` (2 more replies)
0 siblings, 3 replies; 11+ messages in thread
From: Paul Houssel @ 2026-05-06 15:05 UTC (permalink / raw)
To: paul.houssel, Andrii Nakryiko, Yonghong Song, Paul Houssel,
KP Singh, Alexei Starovoitov, Song Liu, Martin KaFai Lau,
Christian König, Florian Westphal, T.J. Mercier, Li RongQing,
Paul Chaignon, D. Wythe, Jakub Kicinski
Cc: Stanislav Fomichev, bpf
In include/linux/bpf-cgroup-defs.h, CGROUP_LSM_NUM defines the maximum
number of BPF_PROG_TYPE_LSM programs that can be simultaneously attached
using the BPF_LSM_CGROUP attachment type. It is currently hardcoded to 10.
This limit was introduced in commit c0e19f2c9a3e ("bpf: minimize number
of allocated lsm slots per program"), the first patch implementing
BPF_LSM_CGROUP attachment, and has not been changed since. Rather than
reserving one slot per LSM hook (a 1:1 static mapping across all 211
hooks available at the time), it introduced a dynamic scheme where only
10 slots exist per cgroup, allocated on demand.
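For readers unfamiliar with the scheme, the on-demand behaviour can be
sketched in userspace as follows. This is a hypothetical simplification;
the real logic lives in kernel/bpf/cgroup.c and keys slots on the BTF id
of the attached hook:

```c
#include <stddef.h>

#define CGROUP_LSM_NUM 10

/* Each distinct LSM hook, identified here by a nonzero BTF id, claims a
 * slot on first use; once all CGROUP_LSM_NUM slots are taken, further
 * distinct hooks are refused.
 */
static unsigned int slot_btf_id[CGROUP_LSM_NUM];

static int lsm_slot_find_or_alloc(unsigned int btf_id)
{
	int i;

	for (i = 0; i < CGROUP_LSM_NUM; i++)
		if (slot_btf_id[i] == btf_id)
			return i;	/* hook already owns a slot */
	for (i = 0; i < CGROUP_LSM_NUM; i++)
		if (!slot_btf_id[i]) {
			slot_btf_id[i] = btf_id;
			return i;	/* allocated on demand */
		}
	return -1;			/* exhausted, akin to -E2BIG */
}
```

Re-attaching to a hook that already holds a slot does not consume a new
one; only the 11th *distinct* hook is rejected.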
In practice, eBPF-based tools may exceed this limit. I therefore propose
making CGROUP_LSM_NUM a Kconfig option so that users can tune it to their
requirements, rather than being constrained by a static hardcoded default
that was chosen arbitrarily in the first implementation of this
attachment type. Conversely, some use cases may want to lower the limit
below 10 to reduce memory overhead.
Modifying this limit has been discussed previously in
https://lore.kernel.org/bpf/20220408225628.oog4a3qteauhqkdn@kafai-mbp.dhcp.thefacebook.com/,
where the view that this limit is too small was shared as well. However,
that discussion was inconclusive on how to make the limit dynamic,
without a fixed array size.
Changes since v2:
- refactor test eBPF programs by using a macro (patch 2)
- improve the kconfig help text by elaborating on the memory
overhead (patch 1)
- link to V2:
https://lore.kernel.org/bpf/20260506131257.713895-1-paulhoussel2@gmail.com/
Paul Houssel (2):
bpf: render CGROUP_LSM_NUM configurable as a KConfig
selftests/bpf: add tests to verify the enforcement of
CONFIG_CGROUP_LSM_NUM
include/linux/bpf-cgroup-defs.h | 2 +-
kernel/bpf/Kconfig | 19 ++++++
tools/testing/selftests/bpf/config | 1 +
.../selftests/bpf/prog_tests/cgroup_lsm_num.c | 60 +++++++++++++++++++
.../selftests/bpf/progs/cgroup_lsm_num.c | 46 ++++++++++++++
5 files changed, 127 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/bpf/prog_tests/cgroup_lsm_num.c
create mode 100644 tools/testing/selftests/bpf/progs/cgroup_lsm_num.c
--
2.54.0
^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH v3 1/2] bpf: render CGROUP_LSM_NUM configurable as a KConfig
2026-05-06 15:05 [PATCH v3 0/2] Introduce CONFIG_CGROUP_LSM_NUM to render BPF_LSM_CGROUP attachment limit configurable Paul Houssel
@ 2026-05-06 15:05 ` Paul Houssel
2026-05-06 15:52 ` bot+bpf-ci
2026-05-06 21:08 ` sashiko-bot
2026-05-06 15:05 ` [PATCH v3 2/2] selftests/bpf: add tests to verify the enforcement of CONFIG_CGROUP_LSM_NUM Paul Houssel
2026-05-06 16:13 ` [PATCH v3 0/2] Introduce CONFIG_CGROUP_LSM_NUM to render BPF_LSM_CGROUP attachment limit configurable Paul Chaignon
2 siblings, 2 replies; 11+ messages in thread
From: Paul Houssel @ 2026-05-06 15:05 UTC (permalink / raw)
To: paul.houssel, Andrii Nakryiko, Yonghong Song, Paul Houssel,
KP Singh, Alexei Starovoitov, Song Liu, Martin KaFai Lau,
Christian König, Florian Westphal, T.J. Mercier, Li RongQing,
Paul Chaignon, D. Wythe, Jakub Kicinski
Cc: Stanislav Fomichev, bpf
In include/linux/bpf-cgroup-defs.h, CGROUP_LSM_NUM defines the maximum
number of BPF_PROG_TYPE_LSM programs that can be simultaneously attached
using the `BPF_LSM_CGROUP` attachment type. We set the value to the newly
introduced `CONFIG_CGROUP_LSM_NUM` Kconfig option, allowing users and
distributions to tune this limit at build time rather than relying on a
hardcoded value.
The option ranges from 0 to 300 and defaults to 10, preserving the
existing behaviour. There are currently 273 LSM hooks, but this number is
subject to change. I couldn't find a macro counting the total number of
LSM hooks and therefore arbitrarily set the upper bound to 300. I am open
to suggestions on whether and how to derive this limit dynamically.
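For illustration, such a count could be derived at build time if the
hooks were exposed through a single X-macro list, in the style of
LSM_HOOK() in include/linux/lsm_hook_defs.h, by expanding each entry to
"+1". The three-hook list below is a stand-in, not the real hook list:

```c
/* Stand-in X-macro list; the real kernel list would be the LSM_HOOK()
 * entries from include/linux/lsm_hook_defs.h.
 */
#define DEMO_HOOK_LIST(HOOK)	\
	HOOK(socket_create)	\
	HOOK(socket_bind)	\
	HOOK(socket_connect)

/* Every list entry contributes exactly one "+1" to the sum. */
#define DEMO_COUNT_ONE(name) +1

enum { DEMO_HOOK_COUNT = 0 DEMO_HOOK_LIST(DEMO_COUNT_ONE) };
```

With such a count available, the Kconfig upper bound (or a build-time
sanity check on it) would no longer need to be hardcoded.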
Signed-off-by: Paul Houssel <paulhoussel2@gmail.com>
---
include/linux/bpf-cgroup-defs.h | 2 +-
kernel/bpf/Kconfig | 19 +++++++++++++++++++
2 files changed, 20 insertions(+), 1 deletion(-)
diff --git a/include/linux/bpf-cgroup-defs.h b/include/linux/bpf-cgroup-defs.h
index c9e6b26abab6..9ab5ca3dbaba 100644
--- a/include/linux/bpf-cgroup-defs.h
+++ b/include/linux/bpf-cgroup-defs.h
@@ -12,7 +12,7 @@ struct bpf_prog_array;
#ifdef CONFIG_BPF_LSM
/* Maximum number of concurrently attachable per-cgroup LSM hooks. */
-#define CGROUP_LSM_NUM 10
+#define CGROUP_LSM_NUM CONFIG_CGROUP_LSM_NUM
#else
#define CGROUP_LSM_NUM 0
#endif
diff --git a/kernel/bpf/Kconfig b/kernel/bpf/Kconfig
index eb3de35734f0..050af0b72651 100644
--- a/kernel/bpf/Kconfig
+++ b/kernel/bpf/Kconfig
@@ -101,4 +101,23 @@ config BPF_LSM
If you are unsure how to answer this question, answer N.
+config CGROUP_LSM_NUM
+ int "Maximum number of per-cgroup LSM hooks"
+ depends on BPF_LSM
+ depends on CGROUP_BPF
+ range 0 300
+ default 10
+ help
+ Maximum number of concurrently attachable per-cgroup LSM hooks.
+ Increasing this value has two memory costs:
+ - 8 bytes per added hook (due to growing
+ cgroup_lsm_atype[] array in kernel/bpf/cgroup.c)
+
+ - 25 bytes per added hook, because each hook adds a value to
+ MAX_CGROUP_BPF_ATTACH_TYPE and thus increases the
+ effective, progs, flags and revisions arrays in struct
+ cgroup_bpf
+
+ If you are unsure, leave the default value.
+
endmenu # "BPF subsystem"
--
2.54.0
* [PATCH v3 2/2] selftests/bpf: add tests to verify the enforcement of CONFIG_CGROUP_LSM_NUM
2026-05-06 15:05 [PATCH v3 0/2] Introduce CONFIG_CGROUP_LSM_NUM to render BPF_LSM_CGROUP attachment limit configurable Paul Houssel
2026-05-06 15:05 ` [PATCH v3 1/2] bpf: render CGROUP_LSM_NUM configurable as a KConfig Paul Houssel
@ 2026-05-06 15:05 ` Paul Houssel
2026-05-06 16:05 ` Paul Chaignon
2026-05-06 21:24 ` sashiko-bot
2026-05-06 16:13 ` [PATCH v3 0/2] Introduce CONFIG_CGROUP_LSM_NUM to render BPF_LSM_CGROUP attachment limit configurable Paul Chaignon
2 siblings, 2 replies; 11+ messages in thread
From: Paul Houssel @ 2026-05-06 15:05 UTC (permalink / raw)
To: paul.houssel, Andrii Nakryiko, Yonghong Song, Paul Houssel,
KP Singh, Alexei Starovoitov, Song Liu, Martin KaFai Lau,
Christian König, Florian Westphal, T.J. Mercier, Li RongQing,
Paul Chaignon, D. Wythe, Jakub Kicinski
Cc: Stanislav Fomichev, bpf
Add a selftest that verifies the kernel correctly enforces
CONFIG_CGROUP_LSM_NUM as the maximum number of concurrently attachable
per-cgroup LSM hook slots.
The BPF program side (progs/cgroup_lsm_num.c) defines 12 lsm_cgroup
programs, each attached to a distinct LSM hook. The test side
(prog_tests/cgroup_lsm_num.c) attempts to attach all 12 programs one by
one to a cgroup, and verifies that exactly 10 succeed and 2 are rejected,
matching the value of CONFIG_CGROUP_LSM_NUM set to 10 in the selftest
Kconfig fragment.
Signed-off-by: Paul Houssel <paulhoussel2@gmail.com>
---
tools/testing/selftests/bpf/config | 1 +
.../selftests/bpf/prog_tests/cgroup_lsm_num.c | 60 +++++++++++++++++++
.../selftests/bpf/progs/cgroup_lsm_num.c | 46 ++++++++++++++
3 files changed, 107 insertions(+)
create mode 100644 tools/testing/selftests/bpf/prog_tests/cgroup_lsm_num.c
create mode 100644 tools/testing/selftests/bpf/progs/cgroup_lsm_num.c
diff --git a/tools/testing/selftests/bpf/config b/tools/testing/selftests/bpf/config
index 24855381290d..e4c5dd86c640 100644
--- a/tools/testing/selftests/bpf/config
+++ b/tools/testing/selftests/bpf/config
@@ -11,6 +11,7 @@ CONFIG_BPF_STREAM_PARSER=y
CONFIG_BPF_SYSCALL=y
# CONFIG_BPF_UNPRIV_DEFAULT_OFF is not set
CONFIG_CGROUP_BPF=y
+CONFIG_CGROUP_LSM_NUM=10
CONFIG_CRYPTO_HMAC=y
CONFIG_CRYPTO_SHA256=y
CONFIG_CRYPTO_USER_API=y
diff --git a/tools/testing/selftests/bpf/prog_tests/cgroup_lsm_num.c b/tools/testing/selftests/bpf/prog_tests/cgroup_lsm_num.c
new file mode 100644
index 000000000000..1c5825c6c3d0
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/cgroup_lsm_num.c
@@ -0,0 +1,60 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 Orange */
+
+/*
+ * Test that the kernel enforces CONFIG_CGROUP_LSM_NUM as the maximum
+ * number of concurrently used per-cgroup LSM hook slots.
+ *
+ * - load a BPF object with 12 programs each on a distinct lsm_cgroup hook
+ * - attach them one by one via bpf_program__attach_cgroup()
+ * - at some point the slots are exhausted and attachment fails
+ * - verify that 10 attach successfully and 2 fail
+ */
+
+#include <test_progs.h>
+#include <bpf/bpf.h>
+
+#include "cgroup_lsm_num.skel.h"
+#include "cgroup_helpers.h"
+
+void test_cgroup_lsm_num(void)
+{
+ struct cgroup_lsm_num *skel = NULL;
+ struct bpf_program *prog;
+ int cgroup_fd = -1;
+ int attached = 0;
+ int failed = 0;
+
+ cgroup_fd = test__join_cgroup("/cgroup_lsm_num");
+ if (!ASSERT_GE(cgroup_fd, 0, "join_cgroup"))
+ return;
+
+ skel = cgroup_lsm_num__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "open_and_load"))
+ goto out;
+
+ bpf_object__for_each_program(prog, skel->obj) {
+ struct bpf_link *link;
+
+ link = bpf_program__attach_cgroup(prog, cgroup_fd);
+ if (!link) {
+ if (errno == EOPNOTSUPP) {
+ test__skip();
+ goto out;
+ }
+ failed++;
+ } else {
+ attached++;
+ }
+ }
+
+ // CONFIG_CGROUP_LSM_NUM set to 10
+ // -> 10 programs shall be attached
+ ASSERT_EQ(attached, 10, "at least one attached");
+ // -> 2 programs shall be rejected
+ ASSERT_EQ(failed, 2, "limit was enforced");
+
+out:
+ close(cgroup_fd);
+ cgroup_lsm_num__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/cgroup_lsm_num.c b/tools/testing/selftests/bpf/progs/cgroup_lsm_num.c
new file mode 100644
index 000000000000..662aee2283c2
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/cgroup_lsm_num.c
@@ -0,0 +1,46 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 Orange */
+
+/*
+ * 12 LSM programs with lsm_cgroup attachment type, each on a distinct LSM
+ * hook. Used by prog_tests/cgroup_lsm_num.c to verify that the kernel
+ * enforces the CONFIG_CGROUP_LSM_NUM limit on unique per-cgroup LSM hook
+ * slots. With CONFIG_CGROUP_LSM_NUM set to 10, 10 shall be attached and 2
+ * rejected.
+ */
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+#define LSM_CGROUP_HOOK(name, hook) \
+ SEC("lsm_cgroup/" #hook) \
+ int BPF_PROG(name) { return 1; }
+
+
+LSM_CGROUP_HOOK(hook0, socket_create)
+
+LSM_CGROUP_HOOK(hook1, socket_post_create)
+
+LSM_CGROUP_HOOK(hook2, socket_socketpair)
+
+LSM_CGROUP_HOOK(hook3, socket_bind)
+
+LSM_CGROUP_HOOK(hook4, socket_connect)
+
+LSM_CGROUP_HOOK(hook5, socket_listen)
+
+LSM_CGROUP_HOOK(hook6, socket_accept)
+
+LSM_CGROUP_HOOK(hook7, socket_sendmsg)
+
+LSM_CGROUP_HOOK(hook8, socket_recvmsg)
+
+LSM_CGROUP_HOOK(hook9, socket_getsockname)
+
+LSM_CGROUP_HOOK(hook10, socket_getpeername)
+
+LSM_CGROUP_HOOK(hook11, socket_shutdown)
+
--
2.54.0
* Re: [PATCH v3 1/2] bpf: render CGROUP_LSM_NUM configurable as a KConfig
2026-05-06 15:05 ` [PATCH v3 1/2] bpf: render CGROUP_LSM_NUM configurable as a KConfig Paul Houssel
@ 2026-05-06 15:52 ` bot+bpf-ci
2026-05-06 16:11 ` Paul Chaignon
2026-05-06 21:08 ` sashiko-bot
1 sibling, 1 reply; 11+ messages in thread
From: bot+bpf-ci @ 2026-05-06 15:52 UTC (permalink / raw)
To: paulhoussel2, paul.houssel, andrii, yonghong.song, paulhoussel2,
kpsingh, ast, song, martin.lau, christian.koenig, fw, tjmercier,
lirongqing, paul.chaignon, alibuda, kuba
Cc: sdf, bpf, ast, andrii, daniel, martin.lau, eddyz87, yonghong.song,
clm, ihor.solodrai
> diff --git a/include/linux/bpf-cgroup-defs.h b/include/linux/bpf-cgroup-defs.h
> index c9e6b26abab6..9ab5ca3dbaba 100644
> --- a/include/linux/bpf-cgroup-defs.h
> +++ b/include/linux/bpf-cgroup-defs.h
> @@ -12,7 +12,7 @@ struct bpf_prog_array;
>
> #ifdef CONFIG_BPF_LSM
> /* Maximum number of concurrently attachable per-cgroup LSM hooks. */
> -#define CGROUP_LSM_NUM 10
> +#define CGROUP_LSM_NUM CONFIG_CGROUP_LSM_NUM
> #else
> #define CGROUP_LSM_NUM 0
> #endif
[ ... ]
> diff --git a/kernel/bpf/Kconfig b/kernel/bpf/Kconfig
> index eb3de35734f0..050af0b72651 100644
> --- a/kernel/bpf/Kconfig
> +++ b/kernel/bpf/Kconfig
> @@ -101,4 +101,23 @@ config BPF_LSM
>
> If you are unsure how to answer this question, answer N.
>
> +config CGROUP_LSM_NUM
> + int "Maximum number of per-cgroup LSM hooks"
> + depends on BPF_LSM
> + depends on CGROUP_BPF
> + range 0 300
^^^
Should the minimum range be 1 instead of 0? When CONFIG_CGROUP_LSM_NUM=0,
the enum in include/linux/bpf-cgroup-defs.h calculates:
enum cgroup_bpf_attach_type {
...
CGROUP_LSM_START,
CGROUP_LSM_END = CGROUP_LSM_START + CGROUP_LSM_NUM - 1,
...
};
This results in CGROUP_LSM_END being less than CGROUP_LSM_START (END =
START - 1), which contradicts C enum range conventions. While the runtime
behavior stays safe since bpf_cgroup_atype_find returns -E2BIG for empty
arrays, the enum definition is semantically incorrect.
The Kconfig help text refers to "Maximum number of concurrently attachable
per-cgroup LSM hooks" which suggests at least one hook should be
attachable.
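If the minimum is raised to 1, the invariant could also be documented
with a build-time guard. A hedged sketch, not part of the patch;
LSM_START/LSM_END below are stand-ins for the real enum members:

```c
#include <assert.h>	/* static_assert (C11) */

#define CONFIG_CGROUP_LSM_NUM 10	/* stand-in for the Kconfig value */
#define CGROUP_LSM_NUM CONFIG_CGROUP_LSM_NUM

/* Mirror of the enum arithmetic in include/linux/bpf-cgroup-defs.h. */
enum {
	LSM_START = 0,
	LSM_END = LSM_START + CGROUP_LSM_NUM - 1,
};

/* Compilation fails if the range ever allows 0 again, i.e. if
 * CGROUP_LSM_END would precede CGROUP_LSM_START. */
static_assert(CGROUP_LSM_NUM >= 1, "CGROUP_LSM_NUM must be at least 1");
static_assert(LSM_END >= LSM_START, "LSM enum range must be non-empty");
```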
> + default 10
> + help
> + Maximum number of concurrently attachable per-cgroup LSM hooks.
> + Increasing this value has two memory costs:
> + - 8 bytes per added hook (due to growing
> + cgroup_lsm_atype[] array in kernel/bpf/cgroup.c)
> +
> + - 25 bytes per added hook, because each hook adds a value to
> + MAX_CGROUP_BPF_ATTACH_TYPE and thus increases the
> + effective, progs, flags and revisions arrays in struct
> + cgroup_bpf
> +
> + If you are unsure, leave the default value.
> +
> endmenu # "BPF subsystem"
---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/25444641622
* Re: [PATCH v3 2/2] selftests/bpf: add tests to verify the enforcement of CONFIG_CGROUP_LSM_NUM
2026-05-06 15:05 ` [PATCH v3 2/2] selftests/bpf: add tests to verify the enforcement of CONFIG_CGROUP_LSM_NUM Paul Houssel
@ 2026-05-06 16:05 ` Paul Chaignon
2026-05-06 21:24 ` sashiko-bot
1 sibling, 0 replies; 11+ messages in thread
From: Paul Chaignon @ 2026-05-06 16:05 UTC (permalink / raw)
To: Paul Houssel
Cc: paul.houssel, Andrii Nakryiko, Yonghong Song, KP Singh,
Alexei Starovoitov, Song Liu, Martin KaFai Lau,
Christian König, Florian Westphal, T.J. Mercier, Li RongQing,
D. Wythe, Jakub Kicinski, Stanislav Fomichev, bpf
On Wed, May 06, 2026 at 05:05:47PM +0200, Paul Houssel wrote:
> Add a selftest that verifies the kernel correctly enforces
> CONFIG_CGROUP_LSM_NUM as the maximum number of concurrently attachable
> per-cgroup LSM hook slots.
>
> The BPF program side (progs/cgroup_lsm_num.c) defines 12 lsm_cgroup
> programs, each attached to a distinct LSM hook. The test side
> (prog_tests/cgroup_lsm_num.c) attempts to attach all 12 programs one by
> one to a cgroup, and verifies that exactly 10 succeed and 2 are rejected,
> matching the value of CONFIG_CGROUP_LSM_NUM set to 10 in the selftest
> Kconfig fragment.
>
> Signed-off-by: Paul Houssel <paulhoussel2@gmail.com>
> ---
> tools/testing/selftests/bpf/config | 1 +
> .../selftests/bpf/prog_tests/cgroup_lsm_num.c | 60 +++++++++++++++++++
> .../selftests/bpf/progs/cgroup_lsm_num.c | 46 ++++++++++++++
> 3 files changed, 107 insertions(+)
> create mode 100644 tools/testing/selftests/bpf/prog_tests/cgroup_lsm_num.c
> create mode 100644 tools/testing/selftests/bpf/progs/cgroup_lsm_num.c
>
> diff --git a/tools/testing/selftests/bpf/config b/tools/testing/selftests/bpf/config
> index 24855381290d..e4c5dd86c640 100644
> --- a/tools/testing/selftests/bpf/config
> +++ b/tools/testing/selftests/bpf/config
> @@ -11,6 +11,7 @@ CONFIG_BPF_STREAM_PARSER=y
> CONFIG_BPF_SYSCALL=y
> # CONFIG_BPF_UNPRIV_DEFAULT_OFF is not set
> CONFIG_CGROUP_BPF=y
> +CONFIG_CGROUP_LSM_NUM=10
> CONFIG_CRYPTO_HMAC=y
> CONFIG_CRYPTO_SHA256=y
> CONFIG_CRYPTO_USER_API=y
> diff --git a/tools/testing/selftests/bpf/prog_tests/cgroup_lsm_num.c b/tools/testing/selftests/bpf/prog_tests/cgroup_lsm_num.c
> new file mode 100644
> index 000000000000..1c5825c6c3d0
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/prog_tests/cgroup_lsm_num.c
> @@ -0,0 +1,60 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright (c) 2026 Orange */
> +
> +/*
> + * Test that the kernel enforces CONFIG_CGROUP_LSM_NUM as the maximum
> + * number of concurrently used per-cgroup LSM hook slots.
> + *
> + * - load a BPF object with 12 programs each on a distinct lsm_cgroup hook
> + * - attach them one by one via bpf_program__attach_cgroup()
> + * - at some point the slots are exhausted and attachment fails
> + * - verify that 10 attach successfully and 2 fail
> + */
> +
> +#include <test_progs.h>
> +#include <bpf/bpf.h>
> +
> +#include "cgroup_lsm_num.skel.h"
> +#include "cgroup_helpers.h"
> +
> +void test_cgroup_lsm_num(void)
> +{
> + struct cgroup_lsm_num *skel = NULL;
> + struct bpf_program *prog;
> + int cgroup_fd = -1;
> + int attached = 0;
> + int failed = 0;
> +
> + cgroup_fd = test__join_cgroup("/cgroup_lsm_num");
> + if (!ASSERT_GE(cgroup_fd, 0, "join_cgroup"))
> + return;
> +
> + skel = cgroup_lsm_num__open_and_load();
> + if (!ASSERT_OK_PTR(skel, "open_and_load"))
> + goto out;
> +
> + bpf_object__for_each_program(prog, skel->obj) {
> + struct bpf_link *link;
> +
> + link = bpf_program__attach_cgroup(prog, cgroup_fd);
> + if (!link) {
> + if (errno == EOPNOTSUPP) {
> + test__skip();
> + goto out;
> + }
> + failed++;
> + } else {
> + attached++;
> + }
> + }
> +
> + // CONFIG_CGROUP_LSM_NUM set to 10
> + // -> 10 programs shall be attached
> + ASSERT_EQ(attached, 10, "at least one attached");
> + // -> 2 programs shall be rejected
> + ASSERT_EQ(failed, 2, "limit was enforced");
> +
> +out:
> + close(cgroup_fd);
> + cgroup_lsm_num__destroy(skel);
> +}
> diff --git a/tools/testing/selftests/bpf/progs/cgroup_lsm_num.c b/tools/testing/selftests/bpf/progs/cgroup_lsm_num.c
> new file mode 100644
> index 000000000000..662aee2283c2
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/cgroup_lsm_num.c
> @@ -0,0 +1,46 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright (c) 2026 Orange */
> +
> +/*
> + * 12 LSM programs with lsm_cgroup attachment type, each on a distinct LSM
> + * hook. Used by prog_tests/cgroup_lsm_num.c to verify that the kernel
> + * enforces the CONFIG_CGROUP_LSM_NUM limit on unique per-cgroup LSM hook
> + * slots. With CONFIG_CGROUP_LSM_NUM set to 10, 10 shall be attached and 2
> + * rejected.
> + */
> +
> +#include "vmlinux.h"
> +#include <bpf/bpf_helpers.h>
> +#include <bpf/bpf_tracing.h>
> +
> +char _license[] SEC("license") = "GPL";
> +
> +#define LSM_CGROUP_HOOK(name, hook) \
> + SEC("lsm_cgroup/" #hook) \
> + int BPF_PROG(name) { return 1; }
> +
> +
> +LSM_CGROUP_HOOK(hook0, socket_create)
> +
> +LSM_CGROUP_HOOK(hook1, socket_post_create)
> +
> +LSM_CGROUP_HOOK(hook2, socket_socketpair)
> +
> +LSM_CGROUP_HOOK(hook3, socket_bind)
> +
> +LSM_CGROUP_HOOK(hook4, socket_connect)
> +
> +LSM_CGROUP_HOOK(hook5, socket_listen)
> +
> +LSM_CGROUP_HOOK(hook6, socket_accept)
> +
> +LSM_CGROUP_HOOK(hook7, socket_sendmsg)
> +
> +LSM_CGROUP_HOOK(hook8, socket_recvmsg)
> +
> +LSM_CGROUP_HOOK(hook9, socket_getsockname)
> +
> +LSM_CGROUP_HOOK(hook10, socket_getpeername)
> +
> +LSM_CGROUP_HOOK(hook11, socket_shutdown)
Please remove all the unnecessary newlines. One empty line after the
macro definition is enough :)
> +
> --
> 2.54.0
>
* Re: [PATCH v3 1/2] bpf: render CGROUP_LSM_NUM configurable as a KConfig
2026-05-06 15:52 ` bot+bpf-ci
@ 2026-05-06 16:11 ` Paul Chaignon
0 siblings, 0 replies; 11+ messages in thread
From: Paul Chaignon @ 2026-05-06 16:11 UTC (permalink / raw)
To: bot+bpf-ci
Cc: paulhoussel2, paul.houssel, andrii, yonghong.song, kpsingh, ast,
song, martin.lau, christian.koenig, fw, tjmercier, lirongqing,
alibuda, kuba, sdf, bpf, daniel, eddyz87, clm, ihor.solodrai
On Wed, May 06, 2026 at 03:52:37PM +0000, bot+bpf-ci@kernel.org wrote:
> > diff --git a/include/linux/bpf-cgroup-defs.h b/include/linux/bpf-cgroup-defs.h
> > index c9e6b26abab6..9ab5ca3dbaba 100644
> > --- a/include/linux/bpf-cgroup-defs.h
> > +++ b/include/linux/bpf-cgroup-defs.h
> > @@ -12,7 +12,7 @@ struct bpf_prog_array;
> >
> > #ifdef CONFIG_BPF_LSM
> > /* Maximum number of concurrently attachable per-cgroup LSM hooks. */
> > -#define CGROUP_LSM_NUM 10
> > +#define CGROUP_LSM_NUM CONFIG_CGROUP_LSM_NUM
> > #else
> > #define CGROUP_LSM_NUM 0
> > #endif
>
> [ ... ]
>
> > diff --git a/kernel/bpf/Kconfig b/kernel/bpf/Kconfig
> > index eb3de35734f0..050af0b72651 100644
> > --- a/kernel/bpf/Kconfig
> > +++ b/kernel/bpf/Kconfig
> > @@ -101,4 +101,23 @@ config BPF_LSM
> >
> > If you are unsure how to answer this question, answer N.
> >
> > +config CGROUP_LSM_NUM
> > + int "Maximum number of per-cgroup LSM hooks"
> > + depends on BPF_LSM
> > + depends on CGROUP_BPF
> > + range 0 300
> ^^^
>
> Should the minimum range be 1 instead of 0? When CONFIG_CGROUP_LSM_NUM=0,
> the enum in include/linux/bpf-cgroup-defs.h calculates:
That makes sense to me, especially since we already have CONFIG_BPF_LSM
to disable BPF LSM altogether. CONFIG_CGROUP_LSM_NUM=0 could be useful
if we wanted to offer a way to disable per-cgroup LSM specifically, but
then you'd need to rework things a bit as pointed out by the bot. I'm
not sure it's worth it.
>
> enum cgroup_bpf_attach_type {
> ...
> CGROUP_LSM_START,
> CGROUP_LSM_END = CGROUP_LSM_START + CGROUP_LSM_NUM - 1,
> ...
> };
>
> This results in CGROUP_LSM_END being less than CGROUP_LSM_START (END =
> START - 1), which contradicts C enum range conventions. While the runtime
> behavior stays safe since bpf_cgroup_atype_find returns -E2BIG for empty
> arrays, the enum definition is semantically incorrect.
>
> The Kconfig help text refers to "Maximum number of concurrently attachable
> per-cgroup LSM hooks" which suggests at least one hook should be
> attachable.
>
> > + default 10
> > + help
> > + Maximum number of concurrently attachable per-cgroup LSM hooks.
> > + Increasing this value has two memory costs:
> > + - 8 bytes per added hook (due to growing
> > + cgroup_lsm_atype[] array in kernel/bpf/cgroup.c)
> > +
> > + - 25 bytes per added hook, because each hook adds a value to
> > + MAX_CGROUP_BPF_ATTACH_TYPE and thus increases the
> > + effective, progs, flags and revisions arrays in struct
> > + cgroup_bpf
> > +
> > + If you are unsure, leave the default value.
> > +
> > endmenu # "BPF subsystem"
>
>
> ---
> AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
> See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
>
> CI run summary: https://github.com/kernel-patches/bpf/actions/runs/25444641622
* Re: [PATCH v3 0/2] Introduce CONFIG_CGROUP_LSM_NUM to render BPF_LSM_CGROUP attachment limit configurable
2026-05-06 15:05 [PATCH v3 0/2] Introduce CONFIG_CGROUP_LSM_NUM to render BPF_LSM_CGROUP attachment limit configurable Paul Houssel
2026-05-06 15:05 ` [PATCH v3 1/2] bpf: render CGROUP_LSM_NUM configurable as a KConfig Paul Houssel
2026-05-06 15:05 ` [PATCH v3 2/2] selftests/bpf: add tests to verify the enforcement of CONFIG_CGROUP_LSM_NUM Paul Houssel
@ 2026-05-06 16:13 ` Paul Chaignon
2 siblings, 0 replies; 11+ messages in thread
From: Paul Chaignon @ 2026-05-06 16:13 UTC (permalink / raw)
To: Paul Houssel
Cc: paul.houssel, Andrii Nakryiko, Yonghong Song, KP Singh,
Alexei Starovoitov, Song Liu, Martin KaFai Lau,
Christian König, Florian Westphal, T.J. Mercier, Li RongQing,
D. Wythe, Jakub Kicinski, Stanislav Fomichev, bpf
On Wed, May 06, 2026 at 05:05:45PM +0200, Paul Houssel wrote:
> In include/linux/bpf-cgroup-defs.h, CGROUP_LSM_NUM defines the maximum
> number of BPF_PROG_TYPE_LSM programs that can be simultaneously attached
> using the BPF_LSM_CGROUP attachment type. It is currently hardcoded to 10.
>
> This limit was introduced in commit c0e19f2c9a3e ("bpf: minimize number
> of allocated lsm slots per program"), the first patch implementing
> BPF_LSM_CGROUP attachment, and has not been changed since. Rather than
> reserving one slot per LSM hook (a 1:1 static mapping across all 211
> hooks available at the time), it introduced a dynamic scheme where only
> 10 slots exist per cgroup, allocated on demand.
>
> In practice, eBPF-based tools may exceed this limit. I therefore propose
> making CGROUP_LSM_NUM a Kconfig option so that users can tune it to their
> requirements, rather than being constrained by a static hardcoded default
> that was chosen arbitrarily in the first implementation of this
> attachment type. Conversely, some use cases may want to lower the limit
> below 10 to reduce memory overhead.
>
> Modifying this limit has been discussed previously in
> https://lore.kernel.org/bpf/20220408225628.oog4a3qteauhqkdn@kafai-mbp.dhcp.thefacebook.com/,
> where the view that this limit is too small was shared as well. However,
> that discussion was inconclusive on how to make the limit dynamic,
> without a fixed array size.
>
> Changes since v2:
> - refactor test eBPF programs by using a macro (patch 2)
> - improve the kconfig help text by elaborating on the memory
> overhead (patch 1)
> - link to V2:
> https://lore.kernel.org/bpf/20260506131257.713895-1-paulhoussel2@gmail.com/
We typically keep the full changelogs (with each version). That can
help among other things to get a sense of whether the reviews are
converging.
>
>
>
> Paul Houssel (2):
> bpf: render CGROUP_LSM_NUM configurable as a KConfig
> selftests/bpf: add tests to verify the enforcement of
> CONFIG_CGROUP_LSM_NUM
>
> include/linux/bpf-cgroup-defs.h | 2 +-
> kernel/bpf/Kconfig | 19 ++++++
> tools/testing/selftests/bpf/config | 1 +
> .../selftests/bpf/prog_tests/cgroup_lsm_num.c | 60 +++++++++++++++++++
> .../selftests/bpf/progs/cgroup_lsm_num.c | 46 ++++++++++++++
> 5 files changed, 127 insertions(+), 1 deletion(-)
> create mode 100644 tools/testing/selftests/bpf/prog_tests/cgroup_lsm_num.c
> create mode 100644 tools/testing/selftests/bpf/progs/cgroup_lsm_num.c
>
> --
> 2.54.0
>
* Re: [PATCH v3 1/2] bpf: render CGROUP_LSM_NUM configurable as a KConfig
2026-05-06 15:05 ` [PATCH v3 1/2] bpf: render CGROUP_LSM_NUM configurable as a KConfig Paul Houssel
2026-05-06 15:52 ` bot+bpf-ci
@ 2026-05-06 21:08 ` sashiko-bot
2026-05-07 16:39 ` Paul Houssel
1 sibling, 1 reply; 11+ messages in thread
From: sashiko-bot @ 2026-05-06 21:08 UTC (permalink / raw)
To: Paul Houssel; +Cc: bpf
Thank you for your contribution! Sashiko AI review found 1 potential issue(s) to consider:
- [High] Increasing CONFIG_CGROUP_LSM_NUM to its maximum value (300) causes an excessive stack allocation in cgroup_bpf_inherit(), which will trigger a FRAME_WARN build failure and risks stack overflow.
--
commit 33800a93ec2833bc433c78926307347450251cdb
Author: Paul Houssel <paulhoussel2@gmail.com>
bpf: render CGROUP_LSM_NUM configurable as a KConfig
This commit makes the maximum number of concurrently attachable per-cgroup
LSM hooks configurable at build time via the new CONFIG_CGROUP_LSM_NUM
option. It allows the value to be tuned from 0 to 300, replacing the
previous hardcoded limit of 10.
> diff --git a/include/linux/bpf-cgroup-defs.h b/include/linux/bpf-cgroup-defs.h
> index c9e6b26abab6d..9ab5ca3dbaba3 100644
> --- a/include/linux/bpf-cgroup-defs.h
> +++ b/include/linux/bpf-cgroup-defs.h
[ ... ]
> #ifdef CONFIG_BPF_LSM
> /* Maximum number of concurrently attachable per-cgroup LSM hooks. */
> -#define CGROUP_LSM_NUM 10
> +#define CGROUP_LSM_NUM CONFIG_CGROUP_LSM_NUM
> #else
> #define CGROUP_LSM_NUM 0
> #endif
> diff --git a/kernel/bpf/Kconfig b/kernel/bpf/Kconfig
> index eb3de35734f09..050af0b72651d 100644
> --- a/kernel/bpf/Kconfig
> +++ b/kernel/bpf/Kconfig
[ ... ]
> +config CGROUP_LSM_NUM
> + int "Maximum number of per-cgroup LSM hooks"
> + depends on BPF_LSM
> + depends on CGROUP_BPF
> + range 0 300
If this is configured to the maximum value of 300, could this cause a
regression by overflowing the kernel stack in cgroup_bpf_inherit()?
In include/linux/bpf-cgroup-defs.h, MAX_CGROUP_BPF_ATTACH_TYPE scales
dynamically based on CGROUP_LSM_NUM. At 300, MAX_CGROUP_BPF_ATTACH_TYPE
grows to 328 (28 base hooks + 300).
In kernel/bpf/cgroup.c, cgroup_bpf_inherit() computes NR as
ARRAY_SIZE(cgrp->bpf.effective) and declares a local stack array:
kernel/bpf/cgroup.c:cgroup_bpf_inherit() {
...
#define NR ARRAY_SIZE(cgrp->bpf.effective)
struct bpf_prog_array *arrays[NR] = {};
...
}
On 64-bit architectures, this array of pointers would consume 2624 bytes
(328 * 8) on the stack. Does this exceed the standard CONFIG_FRAME_WARN
limits and risk build failures or runtime stack overflows?
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260506150547.767315-1-paulhoussel2@gmail.com?part=1
* Re: [PATCH v3 2/2] selftests/bpf: add tests to verify the enforcement of CONFIG_CGROUP_LSM_NUM
2026-05-06 15:05 ` [PATCH v3 2/2] selftests/bpf: add tests to verify the enforcement of CONFIG_CGROUP_LSM_NUM Paul Houssel
2026-05-06 16:05 ` Paul Chaignon
@ 2026-05-06 21:24 ` sashiko-bot
1 sibling, 0 replies; 11+ messages in thread
From: sashiko-bot @ 2026-05-06 21:24 UTC (permalink / raw)
To: Paul Houssel; +Cc: bpf
Thank you for your contribution! Sashiko AI review found 5 potential issue(s) to consider:
- [High] The test leaks `struct bpf_link *` objects, leading to memory and file descriptor leaks that permanently exhaust global LSM cgroup slots.
- [Medium] The test assumes any attachment failure is strictly due to the `CONFIG_CGROUP_LSM_NUM` limit, which masks genuine bugs.
- [Medium] The test is flaky when run in parallel because it assumes exclusive access to the global `cgroup_lsm_atype` array.
- [Medium] The test hardcodes `CONFIG_CGROUP_LSM_NUM=10`, making it brittle on non-default kernel configurations.
- [Low] The message provided to the `ASSERT_EQ` macro for the attached count is misleading.
--
commit c964298fd80079f668d77b4f5c169d14d01bc9e0
Author: Paul Houssel <paulhoussel2@gmail.com>
selftests/bpf: add tests to verify the enforcement of CONFIG_CGROUP_LSM_NUM
This commit adds a selftest to verify that the kernel enforces
CONFIG_CGROUP_LSM_NUM as the maximum number of concurrently attachable
per-cgroup LSM hook slots.
> diff --git a/tools/testing/selftests/bpf/prog_tests/cgroup_lsm_num.c b/tools/testing/selftests/bpf/prog_tests/cgroup_lsm_num.c
> new file mode 100644
> index 0000000000000..1c5825c6c3d07
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/prog_tests/cgroup_lsm_num.c
[ ... ]
> @@ -0,0 +1,60 @@
[ ... ]
> + bpf_object__for_each_program(prog, skel->obj) {
> + struct bpf_link *link;
> +
> + link = bpf_program__attach_cgroup(prog, cgroup_fd);
> + if (!link) {
> + if (errno == EOPNOTSUPP) {
> + test__skip();
> + goto out;
> + }
> + failed++;
Should this check if errno is specifically E2BIG?
If the attachment fails for an unrelated reason, such as an invalid hook
configuration (EINVAL) or memory exhaustion (ENOMEM), this will still
increment the failure count and might falsely pass the test.
> + } else {
> + attached++;
Does this code leak the struct bpf_link objects?
The link pointer returned by bpf_program__attach_cgroup() is dynamically
allocated, but bpf_link__destroy() is never called to free it during or
after the loop.
Because these leaked file descriptors keep the BPF links active in the
kernel, they will indefinitely occupy the slots in the global
cgroup_lsm_atype array. Could this exhaust the available slots and cause
subsequent tests in the suite that use BPF_LSM_CGROUP hooks to
spuriously fail with -E2BIG?
> + }
> + }
> +
> + // CONFIG_CGROUP_LSM_NUM set to 10
> + // -> 10 programs shall be attached
> + ASSERT_EQ(attached, 10, "at least one attached");
Since CONFIG_CGROUP_LSM_NUM limits the number of distinct per-cgroup
LSM hooks globally across the entire system, could this test fail when
run in parallel?
If the test suite is executed concurrently (e.g., test_progs -j) and another
running test utilizes lsm_cgroup hooks, some slots will already be occupied.
This would cause the limit to be exhausted prematurely and fail the assertion.
Will this test fail on custom or distribution kernels where the limit is
configured to a higher value?
While the selftest Kconfig fragment sets the limit to 10, selftests are
often executed against pre-built kernels. If the limit is greater than 12,
all 12 BPF programs will successfully attach, causing the hardcoded
assertion to fail unconditionally.
Could the test dynamically read the limit via
extern int CONFIG_CGROUP_LSM_NUM __kconfig instead?
This isn't a bug, but the assertion message says "at least one attached"
when it actually verifies that exactly 10 are attached.
> + // -> 2 programs shall be rejected
> + ASSERT_EQ(failed, 2, "limit was enforced");
> +
> +out:
> + close(cgroup_fd);
> + cgroup_lsm_num__destroy(skel);
> +}
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260506150547.767315-1-paulhoussel2@gmail.com?part=2
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH v3 1/2] bpf: render CGROUP_LSM_NUM configurable as a KConfig
2026-05-06 21:08 ` sashiko-bot
@ 2026-05-07 16:39 ` Paul Houssel
2026-05-08 22:16 ` Alexei Starovoitov
0 siblings, 1 reply; 11+ messages in thread
From: Paul Houssel @ 2026-05-07 16:39 UTC (permalink / raw)
To: Paul Chaignon, sashiko; +Cc: bpf
Ok, I agree. I'll set the minimum value to 1, since I don't see a
motivation for restricting per-cgroup attachments once BPF LSM is
already allowed by CONFIG_BPF_LSM. I would like your thoughts on the
upper bound. 300 is higher than the current total number of LSM
interfaces and would add considerable memory overhead. With
CONFIG_CGROUP_LSM_NUM=10, cgroup_bpf is 1024 bytes, since
MAX_CGROUP_BPF_ATTACH_TYPE=38.
```
$ pahole cgroup_bpf
struct cgroup_bpf {
struct bpf_prog_array * effective[38]; /* 0 304 */
/* --- cacheline 4 boundary (256 bytes) was 48 bytes ago --- */
struct hlist_head progs[38]; /* 304 304 */
/* --- cacheline 9 boundary (576 bytes) was 32 bytes ago --- */
u8 flags[38]; /* 608 38 */
/* XXX 2 bytes hole, try to pack */
/* --- cacheline 10 boundary (640 bytes) was 8 bytes ago --- */
u64 revisions[38]; /* 648 304 */
/* --- cacheline 14 boundary (896 bytes) was 56 bytes ago --- */
struct list_head storages; /* 952 16 */
/* --- cacheline 15 boundary (960 bytes) was 8 bytes ago --- */
struct bpf_prog_array * inactive; /* 968 8 */
struct percpu_ref refcnt; /* 976 16 */
struct work_struct release_work; /* 992 32 */
/* size: 1024, cachelines: 16, members: 8 */
/* sum members: 1022, holes: 1, sum holes: 2 */
};
```
With CONFIG_CGROUP_LSM_NUM=50, it grows to 2024 bytes, i.e. 25 bytes
per additional slot (8 + 8 + 8 + 1 for effective, progs, revisions,
and flags respectively):
```
$ pahole cgroup_bpf
struct cgroup_bpf {
struct bpf_prog_array * effective[78]; /* 0 624 */
/* --- cacheline 9 boundary (576 bytes) was 48 bytes ago --- */
struct hlist_head progs[78]; /* 624 624 */
/* --- cacheline 19 boundary (1216 bytes) was 32 bytes ago --- */
u8 flags[78]; /* 1248 78 */
/* XXX 2 bytes hole, try to pack */
/* --- cacheline 20 boundary (1280 bytes) was 48 bytes ago --- */
u64 revisions[78]; /* 1328 624 */
/* --- cacheline 30 boundary (1920 bytes) was 32 bytes ago --- */
struct list_head storages; /* 1952 16 */
struct bpf_prog_array * inactive; /* 1968 8 */
struct percpu_ref refcnt; /* 1976 16 */
/* --- cacheline 31 boundary (1984 bytes) was 8 bytes ago --- */
struct work_struct release_work; /* 1992 32 */
/* size: 2024, cachelines: 32, members: 8 */
/* sum members: 2022, holes: 1, sum holes: 2 */
/* last cacheline: 40 bytes */
};
```
Above a certain threshold the memory overhead indeed becomes too high,
and my VM crashed; this is probably the stack concern you raised, with
the frame in cgroup_bpf_inherit() exceeding CONFIG_FRAME_WARN=2048. I
therefore propose capping the value at 50 to stay below FRAME_WARN. In
practice, eBPF-based tools may exceed the current limit of 10, but I
don't see a use case requiring the attachment of all LSM hooks. For
example, Aqua Security's Tracee uses 24 LSM hooks [1], and the recently
announced MetArmor requires at least 14 LSM hooks if it traces the
following events with LSM programs: execve, bind, connect, accept,
sendmsg, recvmsg, namespace creation, open, kernel module loading, bpf,
ptrace (covered by 2 LSM interfaces), mount, and pivot_root (see slide
8 in [2]).
I would therefore propose setting the maximum value to 50 (where
cgroup_bpf is 2024 bytes) or 100. What do you think, Paul C?
[1] https://aquasecurity.github.io/tracee/latest/docs/events/
[2] https://lpc.events/event/19/contributions/2172/attachments/1845/3945/LPC%20conference%20-%20Carl%20v2.pdf
On Wed, 6 May 2026 at 23:08, <sashiko-bot@kernel.org> wrote:
>
> Thank you for your contribution! Sashiko AI review found 1 potential issue(s) to consider:
> - [High] Increasing CONFIG_CGROUP_LSM_NUM to its maximum value (300) causes an excessive stack allocation in cgroup_bpf_inherit(), which will trigger a FRAME_WARN build failure and risks stack overflow.
> --
>
> commit 33800a93ec2833bc433c78926307347450251cdb
> Author: Paul Houssel <paulhoussel2@gmail.com>
>
> bpf: render CGROUP_LSM_NUM configurable as a KConfig
>
> This commit makes the maximum number of concurrently attachable per-cgroup
> LSM hooks configurable at build time via the new CONFIG_CGROUP_LSM_NUM
> option. It allows the value to be tuned from 0 to 300, replacing the
> previous hardcoded limit of 10.
>
> > diff --git a/include/linux/bpf-cgroup-defs.h b/include/linux/bpf-cgroup-defs.h
> > index c9e6b26abab6d..9ab5ca3dbaba3 100644
> > --- a/include/linux/bpf-cgroup-defs.h
> > +++ b/include/linux/bpf-cgroup-defs.h
>
> [ ... ]
>
> > #ifdef CONFIG_BPF_LSM
> > /* Maximum number of concurrently attachable per-cgroup LSM hooks. */
> > -#define CGROUP_LSM_NUM 10
> > +#define CGROUP_LSM_NUM CONFIG_CGROUP_LSM_NUM
> > #else
> > #define CGROUP_LSM_NUM 0
> > #endif
>
> > diff --git a/kernel/bpf/Kconfig b/kernel/bpf/Kconfig
> > index eb3de35734f09..050af0b72651d 100644
> > --- a/kernel/bpf/Kconfig
> > +++ b/kernel/bpf/Kconfig
>
> [ ... ]
>
> > +config CGROUP_LSM_NUM
> > + int "Maximum number of per-cgroup LSM hooks"
> > + depends on BPF_LSM
> > + depends on CGROUP_BPF
> > + range 0 300
>
> If this is configured to the maximum value of 300, could this cause a
> regression by overflowing the kernel stack in cgroup_bpf_inherit()?
>
> In include/linux/bpf-cgroup-defs.h, MAX_CGROUP_BPF_ATTACH_TYPE scales
> dynamically based on CGROUP_LSM_NUM. At 300, MAX_CGROUP_BPF_ATTACH_TYPE
> grows to 328 (28 base hooks + 300).
>
> In kernel/bpf/cgroup.c, cgroup_bpf_inherit() computes NR as
> ARRAY_SIZE(cgrp->bpf.effective) and declares a local stack array:
>
> kernel/bpf/cgroup.c:cgroup_bpf_inherit() {
> ...
> #define NR ARRAY_SIZE(cgrp->bpf.effective)
> struct bpf_prog_array *arrays[NR] = {};
> ...
> }
>
> On 64-bit architectures, this array of pointers would consume 2624 bytes
> (328 * 8) on the stack. Does this exceed the standard CONFIG_FRAME_WARN
> limits and risk build failures or runtime stack overflows?
>
> --
> Sashiko AI review · https://sashiko.dev/#/patchset/20260506150547.767315-1-paulhoussel2@gmail.com?part=1
* Re: [PATCH v3 1/2] bpf: render CGROUP_LSM_NUM configurable as a KConfig
2026-05-07 16:39 ` Paul Houssel
@ 2026-05-08 22:16 ` Alexei Starovoitov
0 siblings, 0 replies; 11+ messages in thread
From: Alexei Starovoitov @ 2026-05-08 22:16 UTC (permalink / raw)
To: Paul Houssel; +Cc: Paul Chaignon, sashiko, bpf
On Thu, May 7, 2026 at 9:40 AM Paul Houssel <paulhoussel2@gmail.com> wrote:
>
> I would therefore propose to set the maximum value to 50 or 100 (where
> cgroup_bpf is 2024 bytes long), what do you think Paul C?
Sorry we cannot grow it with documentation only.
The whole thing needs to be refactored.
effective[MAX_CGROUP_BPF_ATTACH_TYPE]
is problematic as it is. Plenty of memory waste already.
It needs to be replaced with scalable approach.
Yes, it will be a big change to __cgroup_bpf_attach()
end of thread, other threads:[~2026-05-08 22:16 UTC | newest]
Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-05-06 15:05 [PATCH v3 0/2] Introduce CONFIG_CGROUP_LSM_NUM to render BPF_LSM_CGROUP attachment limit configurable Paul Houssel
2026-05-06 15:05 ` [PATCH v3 1/2] bpf: render CGROUP_LSM_NUM configurable as a KConfig Paul Houssel
2026-05-06 15:52 ` bot+bpf-ci
2026-05-06 16:11 ` Paul Chaignon
2026-05-06 21:08 ` sashiko-bot
2026-05-07 16:39 ` Paul Houssel
2026-05-08 22:16 ` Alexei Starovoitov
2026-05-06 15:05 ` [PATCH v3 2/2] selftests/bpf: add tests to verify the enforcement of CONFIG_CGROUP_LSM_NUM Paul Houssel
2026-05-06 16:05 ` Paul Chaignon
2026-05-06 21:24 ` sashiko-bot
2026-05-06 16:13 ` [PATCH v3 0/2] Introduce CONFIG_CGROUP_LSM_NUM to render BPF_LSM_CGROUP attachment limit configurable Paul Chaignon