* [PATCH bpf-next v3 0/5] bpf: Implement mprog API on top of existing cgroup progs
@ 2025-05-17 16:27 Yonghong Song
2025-05-17 16:27 ` [PATCH bpf-next v3 1/5] cgroup: Add bpf prog revisions to struct cgroup_bpf Yonghong Song
` (4 more replies)
0 siblings, 5 replies; 10+ messages in thread
From: Yonghong Song @ 2025-05-17 16:27 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, kernel-team,
Martin KaFai Lau
Currently, cgroup prog ordering is append-only at attachment time. This is
not ideal. In some cases, users want a specific ordering at a particular
cgroup level. For example, at Meta we have a case where three different
applications all have cgroup/setsockopt progs and require a specific
ordering. The current approach is to use a bpfchainer where one bpf prog
contains multiple global functions and each global function can be
freplaced by a prog from a specific application. The ordering of the global
functions decides the ordering of those application-specific bpf progs.
The bpfchainer is a centralized approach and is not desirable since one
of the applications has to act as a daemon. A decentralized attachment
approach is more favorable for those applications.
To address this, the existing mprog API ([2]) is an ideal solution:
support the BPF_F_BEFORE and BPF_F_AFTER flags on top of the existing
cgroup bpf implementation. More specifically, support is added for prog/link
attachment with BPF_F_BEFORE and BPF_F_AFTER. The kernel mprog
interface ([2]) itself is not used; the implementation is done directly in
the cgroup bpf code base. The mprog 'revision' is also implemented for
attach/detach/replace, so users can query the revision number to detect
changes to the cgroup prog list.
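As an illustration, a minimal user-space sketch of the intended usage,
mirroring the selftests added in patch 5 (assumptions: fd1/fd2 are fds of
loaded cgroup/getsockopt progs and cg is a cgroup fd):

    int err;

    LIBBPF_OPTS(bpf_prog_attach_opts, opta,
            .flags = BPF_F_ALLOW_MULTI,
            .expected_revision = 1,
    );
    /* prog list: [fd1] */
    err = bpf_prog_attach_opts(fd1, cg, BPF_CGROUP_GETSOCKOPT, &opta);

    LIBBPF_OPTS_RESET(opta,
            .flags = BPF_F_ALLOW_MULTI | BPF_F_BEFORE,
            .relative_fd = fd1,
            .expected_revision = 2,
    );
    /* prog list: [fd2, fd1] */
    err = bpf_prog_attach_opts(fd2, cg, BPF_CGROUP_GETSOCKOPT, &opta);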
The patch set contains 5 patches. Patch 1 adds revision support for
cgroup bpf progs. Patch 2 implements the mprog API for prog/link attach,
including the revision update. Patch 3 adds a new libbpf API to do
link-based cgroup attach with flags like BPF_F_BEFORE/BPF_F_AFTER.
Patch 4 moves two helpers needed by the tests into test_progs.h, and
Patch 5 adds two tests to validate the implementation.
[1] https://lore.kernel.org/r/20250224230116.283071-1-yonghong.song@linux.dev
[2] https://lore.kernel.org/r/20230719140858.13224-2-daniel@iogearbox.net
Changelogs:
v2 -> v3:
- v2: https://lore.kernel.org/bpf/20250508223524.487875-1-yonghong.song@linux.dev/
- Big change: replace get_anchor_prog() with get_prog_list() so that a
  'struct bpf_prog_list *' is returned directly.
- Support 'BPF_F_BEFORE | BPF_F_AFTER' attachment if the prog list is empty,
  flags do not have 'BPF_F_LINK | BPF_F_ID', and id_or_fd is 0.
- Add BPF_F_LINK support.
- Patch 4 is added to reuse id_from_prog_fd() and id_from_link_fd().
v1 -> v2:
- v1: https://lore.kernel.org/bpf/20250411011523.1838771-1-yonghong.song@linux.dev/
- Change cgroup_bpf.revisions from atomic64_t to u64.
- Added missing bpf_prog_put in various places.
- Rename get_cmp_prog() to get_anchor_prog(). The implementation tries to
  find the anchor prog regardless of whether id_or_fd is non-zero.
- Rename bpf_cgroup_prog_attached() to is_cgroup_prog_type() and handle
  BPF_PROG_TYPE_LSM properly (with the BPF_LSM_CGROUP attach type).
- Kept the 'id || id_or_fd' condition since the 'id' condition is also used
  in mprog.c, so I assume it is okay in cgroup.c as well.
Yonghong Song (5):
cgroup: Add bpf prog revisions to struct cgroup_bpf
bpf: Implement mprog API on top of existing cgroup progs
libbpf: Support link-based cgroup attach with options
selftests/bpf: Move some tc_helpers.h functions to test_progs.h
selftests/bpf: Add two selftests for mprog API based cgroup progs
include/linux/bpf-cgroup-defs.h | 1 +
include/uapi/linux/bpf.h | 7 +
kernel/bpf/cgroup.c | 195 ++++-
kernel/bpf/syscall.c | 43 +-
kernel/cgroup/cgroup.c | 5 +
tools/include/uapi/linux/bpf.h | 7 +
tools/lib/bpf/bpf.c | 44 +
tools/lib/bpf/bpf.h | 5 +
tools/lib/bpf/libbpf.c | 28 +
tools/lib/bpf/libbpf.h | 15 +
tools/lib/bpf/libbpf.map | 1 +
.../bpf/prog_tests/cgroup_mprog_opts.c | 749 ++++++++++++++++++
.../bpf/prog_tests/cgroup_mprog_ordering.c | 77 ++
.../selftests/bpf/prog_tests/tc_helpers.h | 28 -
.../selftests/bpf/progs/cgroup_mprog.c | 30 +
tools/testing/selftests/bpf/test_progs.h | 28 +
16 files changed, 1197 insertions(+), 66 deletions(-)
create mode 100644 tools/testing/selftests/bpf/prog_tests/cgroup_mprog_opts.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/cgroup_mprog_ordering.c
create mode 100644 tools/testing/selftests/bpf/progs/cgroup_mprog.c
--
2.47.1
* [PATCH bpf-next v3 1/5] cgroup: Add bpf prog revisions to struct cgroup_bpf
2025-05-17 16:27 [PATCH bpf-next v3 0/5] bpf: Implement mprog API on top of existing cgroup progs Yonghong Song
@ 2025-05-17 16:27 ` Yonghong Song
2025-05-17 16:27 ` [PATCH bpf-next v3 2/5] bpf: Implement mprog API on top of existing cgroup progs Yonghong Song
` (3 subsequent siblings)
4 siblings, 0 replies; 10+ messages in thread
From: Yonghong Song @ 2025-05-17 16:27 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, kernel-team,
Martin KaFai Lau
One of the key items in the mprog API is the revision for the prog list.
The revision number is increased whenever the prog list changes, e.g., on
attach, detach or replace.
Add a 'revisions' field to struct cgroup_bpf, representing the revisions for
all cgroup-related attachment types. The initial revision value is
set to 1, the same as in the kernel mprog implementation.
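Once patch 2 wires it up, user space can read the revision through the
prog query interface. A minimal sketch (assuming cg is a cgroup fd):

    LIBBPF_OPTS(bpf_prog_query_opts, optq);
    int err;

    err = bpf_prog_query_opts(cg, BPF_CGROUP_GETSOCKOPT, &optq);
    /* on success, optq.revision starts at 1 and is bumped by every
     * attach/detach/replace for this attach type
     */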
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
include/linux/bpf-cgroup-defs.h | 1 +
kernel/cgroup/cgroup.c | 5 +++++
2 files changed, 6 insertions(+)
diff --git a/include/linux/bpf-cgroup-defs.h b/include/linux/bpf-cgroup-defs.h
index 0985221d5478..c9e6b26abab6 100644
--- a/include/linux/bpf-cgroup-defs.h
+++ b/include/linux/bpf-cgroup-defs.h
@@ -63,6 +63,7 @@ struct cgroup_bpf {
*/
struct hlist_head progs[MAX_CGROUP_BPF_ATTACH_TYPE];
u8 flags[MAX_CGROUP_BPF_ATTACH_TYPE];
+ u64 revisions[MAX_CGROUP_BPF_ATTACH_TYPE];
/* list of cgroup shared storages */
struct list_head storages;
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 63e5b90da1f3..260ce8fc4ea4 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -2071,6 +2071,11 @@ static void init_cgroup_housekeeping(struct cgroup *cgrp)
for_each_subsys(ss, ssid)
INIT_LIST_HEAD(&cgrp->e_csets[ssid]);
+#ifdef CONFIG_CGROUP_BPF
+ for (int i = 0; i < ARRAY_SIZE(cgrp->bpf.revisions); i++)
+ cgrp->bpf.revisions[i] = 1;
+#endif
+
init_waitqueue_head(&cgrp->offline_waitq);
INIT_WORK(&cgrp->release_agent_work, cgroup1_release_agent);
}
--
2.47.1
* [PATCH bpf-next v3 2/5] bpf: Implement mprog API on top of existing cgroup progs
2025-05-17 16:27 [PATCH bpf-next v3 0/5] bpf: Implement mprog API on top of existing cgroup progs Yonghong Song
2025-05-17 16:27 ` [PATCH bpf-next v3 1/5] cgroup: Add bpf prog revisions to struct cgroup_bpf Yonghong Song
@ 2025-05-17 16:27 ` Yonghong Song
2025-05-22 20:45 ` Andrii Nakryiko
2025-05-17 16:27 ` [PATCH bpf-next v3 3/5] libbpf: Support link-based cgroup attach with options Yonghong Song
` (2 subsequent siblings)
4 siblings, 1 reply; 10+ messages in thread
From: Yonghong Song @ 2025-05-17 16:27 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, kernel-team,
Martin KaFai Lau
Currently, cgroup prog ordering is append-only at attachment time. This is
not ideal. In some cases, users want a specific ordering at a particular
cgroup level. To address this, the existing mprog API is an ideal solution:
support the BPF_F_BEFORE and BPF_F_AFTER flags.
But there are a few obstacles to using the kernel mprog interface directly.
Cgroup bpf progs already support prog attach/detach/replace and
link-based attach/detach/replace. For example, in struct
bpf_prog_array_item, the cgroup_storage field needs to stay together
with the bpf prog, but the mprog API struct bpf_mprog_fp only has the
bpf_prog as a member, which makes it difficult to use the kernel mprog
interface.
In another case, the current cgroup prog detach reuses the same flags as
attach. This differs from the kernel mprog interface, which uses flags
passed in from user space.
So, to avoid modifying existing behavior, I made the following changes to
support the mprog API for cgroup progs:
- The support is for the prog list at a given cgroup level. The cross-level
  prog list (a.k.a. the effective prog list) is not supported.
- Previously, BPF_F_PREORDER was supported only for prog attach; now
  it is also supported for link-based attach.
- For attach, BPF_F_BEFORE/BPF_F_AFTER/BPF_F_ID/BPF_F_LINK are supported,
  similar to kernel mprog but with a different implementation.
- For detach and replace, the existing implementation is used.
- For attach, detach and replace, the revision for a particular prog
  list, associated with a particular attach type, is incremented by 1
  (see the sketch after this list).
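The revision enables an optimistic read-modify-write pattern from user
space. A hedged sketch (anchor_fd/prog_fd/cg are assumed fds, based on the
selftests in this series):

    LIBBPF_OPTS(bpf_prog_query_opts, optq);
    LIBBPF_OPTS(bpf_prog_attach_opts, opta);
    int err;

    err = bpf_prog_query_opts(cg, BPF_CGROUP_GETSOCKOPT, &optq);

    opta.flags = BPF_F_ALLOW_MULTI | BPF_F_BEFORE;
    opta.relative_fd = anchor_fd;
    opta.expected_revision = optq.revision;
    /* fails with -ESTALE if the prog list changed since the query */
    err = bpf_prog_attach_opts(prog_fd, cg, BPF_CGROUP_GETSOCKOPT, &opta);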
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
include/uapi/linux/bpf.h | 7 ++
kernel/bpf/cgroup.c | 195 +++++++++++++++++++++++++++++----
kernel/bpf/syscall.c | 43 +++++---
tools/include/uapi/linux/bpf.h | 7 ++
4 files changed, 214 insertions(+), 38 deletions(-)
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 16e95398c91c..356cd2b185fb 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1794,6 +1794,13 @@ union bpf_attr {
};
__u64 expected_revision;
} netkit;
+ struct {
+ union {
+ __u32 relative_fd;
+ __u32 relative_id;
+ };
+ __u64 expected_revision;
+ } cgroup;
};
} link_create;
diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
index 62a1d8deb3dc..78e6fc70b8f9 100644
--- a/kernel/bpf/cgroup.c
+++ b/kernel/bpf/cgroup.c
@@ -624,6 +624,129 @@ static struct bpf_prog_list *find_attach_entry(struct hlist_head *progs,
return NULL;
}
+static struct bpf_link *bpf_get_anchor_link(u32 flags, u32 id_or_fd, enum bpf_prog_type type)
+{
+ struct bpf_link *link = ERR_PTR(-EINVAL);
+
+ if (flags & BPF_F_ID)
+ link = bpf_link_by_id(id_or_fd);
+ else if (id_or_fd)
+ link = bpf_link_get_from_fd(id_or_fd);
+ if (IS_ERR(link))
+ return link;
+ if (type && link->prog->type != type) {
+ bpf_link_put(link);
+ return ERR_PTR(-EINVAL);
+ }
+
+ return link;
+}
+
+static struct bpf_prog *bpf_get_anchor_prog(u32 flags, u32 id_or_fd, enum bpf_prog_type type)
+{
+ struct bpf_prog *prog = ERR_PTR(-EINVAL);
+
+ if (flags & BPF_F_ID)
+ prog = bpf_prog_by_id(id_or_fd);
+ else if (id_or_fd)
+ prog = bpf_prog_get(id_or_fd);
+ if (IS_ERR(prog))
+ return prog;
+ if (type && prog->type != type) {
+ bpf_prog_put(prog);
+ return ERR_PTR(-EINVAL);
+ }
+
+ return prog;
+}
+
+static struct bpf_prog_list *get_prog_list(struct hlist_head *progs, struct bpf_prog *prog,
+ u32 flags, u32 id_or_fd)
+{
+ bool link = flags & BPF_F_LINK, id = flags & BPF_F_ID;
+ struct bpf_prog *anchor_prog = NULL, *pltmp_prog;
+ bool preorder = flags & BPF_F_PREORDER;
+ struct bpf_link *anchor_link = NULL;
+ struct bpf_prog_list *pltmp;
+ int ret = -EINVAL;
+
+ if (link || id || id_or_fd) {
+ /* flags must have either BPF_F_BEFORE or BPF_F_AFTER */
+ if (!(flags & BPF_F_BEFORE) != !!(flags & BPF_F_AFTER))
+ return ERR_PTR(-EINVAL);
+ } else if (!hlist_empty(progs)) {
+ /* flags cannot have both BPF_F_BEFORE and BPF_F_AFTER */
+ if ((flags & BPF_F_BEFORE) && (flags & BPF_F_AFTER))
+ return ERR_PTR(-EINVAL);
+ }
+
+ if (link) {
+ anchor_link = bpf_get_anchor_link(flags, id_or_fd, prog->type);
+ if (IS_ERR(anchor_link))
+ return ERR_PTR(PTR_ERR(anchor_link));
+ anchor_prog = anchor_link->prog;
+ } else if (id || id_or_fd) {
+ anchor_prog = bpf_get_anchor_prog(flags, id_or_fd, prog->type);
+ if (IS_ERR(anchor_prog))
+ return ERR_PTR(PTR_ERR(anchor_prog));
+ }
+
+ if (!anchor_prog) {
+ /* if there is no anchor_prog, then BPF_F_PREORDER doesn't matter
+ * since either prepend or append to a combined list of progs will
+ * end up with correct result.
+ */
+ hlist_for_each_entry(pltmp, progs, node) {
+ if (flags & BPF_F_BEFORE)
+ return pltmp;
+ if (pltmp->node.next)
+ continue;
+ return pltmp;
+ }
+ return NULL;
+ }
+
+ hlist_for_each_entry(pltmp, progs, node) {
+ pltmp_prog = pltmp->link ? pltmp->link->link.prog : pltmp->prog;
+ if (pltmp_prog != anchor_prog)
+ continue;
+ if (!!(pltmp->flags & BPF_F_PREORDER) != preorder)
+ goto out;
+ if (anchor_link)
+ bpf_link_put(anchor_link);
+ else
+ bpf_prog_put(anchor_prog);
+ return pltmp;
+ }
+
+ ret = -ENOENT;
+out:
+ if (anchor_link)
+ bpf_link_put(anchor_link);
+ else
+ bpf_prog_put(anchor_prog);
+ return ERR_PTR(ret);
+}
+
+static int insert_pl_to_hlist(struct bpf_prog_list *pl, struct hlist_head *progs,
+ struct bpf_prog *prog, u32 flags, u32 id_or_fd)
+{
+ struct bpf_prog_list *pltmp;
+
+ pltmp = get_prog_list(progs, prog, flags, id_or_fd);
+ if (IS_ERR(pltmp))
+ return PTR_ERR(pltmp);
+
+ if (!pltmp)
+ hlist_add_head(&pl->node, progs);
+ else if (flags & BPF_F_BEFORE)
+ hlist_add_before(&pl->node, &pltmp->node);
+ else
+ hlist_add_behind(&pl->node, &pltmp->node);
+
+ return 0;
+}
+
/**
* __cgroup_bpf_attach() - Attach the program or the link to a cgroup, and
* propagate the change to descendants
@@ -633,6 +756,8 @@ static struct bpf_prog_list *find_attach_entry(struct hlist_head *progs,
* @replace_prog: Previously attached program to replace if BPF_F_REPLACE is set
* @type: Type of attach operation
* @flags: Option flags
+ * @id_or_fd: Relative prog id or fd
+ * @revision: bpf_prog_list revision
*
* Exactly one of @prog or @link can be non-null.
* Must be called with cgroup_mutex held.
@@ -640,7 +765,8 @@ static struct bpf_prog_list *find_attach_entry(struct hlist_head *progs,
static int __cgroup_bpf_attach(struct cgroup *cgrp,
struct bpf_prog *prog, struct bpf_prog *replace_prog,
struct bpf_cgroup_link *link,
- enum bpf_attach_type type, u32 flags)
+ enum bpf_attach_type type, u32 flags, u32 id_or_fd,
+ u64 revision)
{
u32 saved_flags = (flags & (BPF_F_ALLOW_OVERRIDE | BPF_F_ALLOW_MULTI));
struct bpf_prog *old_prog = NULL;
@@ -656,6 +782,9 @@ static int __cgroup_bpf_attach(struct cgroup *cgrp,
((flags & BPF_F_REPLACE) && !(flags & BPF_F_ALLOW_MULTI)))
/* invalid combination */
return -EINVAL;
+ if ((flags & BPF_F_REPLACE) && (flags & (BPF_F_BEFORE | BPF_F_AFTER)))
+ /* only either replace or insertion with before/after */
+ return -EINVAL;
if (link && (prog || replace_prog))
/* only either link or prog/replace_prog can be specified */
return -EINVAL;
@@ -666,6 +795,8 @@ static int __cgroup_bpf_attach(struct cgroup *cgrp,
atype = bpf_cgroup_atype_find(type, new_prog->aux->attach_btf_id);
if (atype < 0)
return -EINVAL;
+ if (revision && revision != cgrp->bpf.revisions[atype])
+ return -ESTALE;
progs = &cgrp->bpf.progs[atype];
@@ -694,22 +825,18 @@ static int __cgroup_bpf_attach(struct cgroup *cgrp,
if (pl) {
old_prog = pl->prog;
} else {
- struct hlist_node *last = NULL;
-
pl = kmalloc(sizeof(*pl), GFP_KERNEL);
if (!pl) {
bpf_cgroup_storages_free(new_storage);
return -ENOMEM;
}
- if (hlist_empty(progs))
- hlist_add_head(&pl->node, progs);
- else
- hlist_for_each(last, progs) {
- if (last->next)
- continue;
- hlist_add_behind(&pl->node, last);
- break;
- }
+
+ err = insert_pl_to_hlist(pl, progs, prog ? : link->link.prog, flags, id_or_fd);
+ if (err) {
+ kfree(pl);
+ bpf_cgroup_storages_free(new_storage);
+ return err;
+ }
}
pl->prog = prog;
@@ -728,6 +855,7 @@ static int __cgroup_bpf_attach(struct cgroup *cgrp,
if (err)
goto cleanup_trampoline;
+ cgrp->bpf.revisions[atype] += 1;
if (old_prog) {
if (type == BPF_LSM_CGROUP)
bpf_trampoline_unlink_cgroup_shim(old_prog);
@@ -759,12 +887,13 @@ static int cgroup_bpf_attach(struct cgroup *cgrp,
struct bpf_prog *prog, struct bpf_prog *replace_prog,
struct bpf_cgroup_link *link,
enum bpf_attach_type type,
- u32 flags)
+ u32 flags, u32 id_or_fd, u64 revision)
{
int ret;
cgroup_lock();
- ret = __cgroup_bpf_attach(cgrp, prog, replace_prog, link, type, flags);
+ ret = __cgroup_bpf_attach(cgrp, prog, replace_prog, link, type, flags,
+ id_or_fd, revision);
cgroup_unlock();
return ret;
}
@@ -852,6 +981,7 @@ static int __cgroup_bpf_replace(struct cgroup *cgrp,
if (!found)
return -ENOENT;
+ cgrp->bpf.revisions[atype] += 1;
old_prog = xchg(&link->link.prog, new_prog);
replace_effective_prog(cgrp, atype, link);
bpf_prog_put(old_prog);
@@ -977,12 +1107,14 @@ static void purge_effective_progs(struct cgroup *cgrp, struct bpf_prog *prog,
* @prog: A program to detach or NULL
* @link: A link to detach or NULL
* @type: Type of detach operation
+ * @revision: bpf_prog_list revision
*
* At most one of @prog or @link can be non-NULL.
* Must be called with cgroup_mutex held.
*/
static int __cgroup_bpf_detach(struct cgroup *cgrp, struct bpf_prog *prog,
- struct bpf_cgroup_link *link, enum bpf_attach_type type)
+ struct bpf_cgroup_link *link, enum bpf_attach_type type,
+ u64 revision)
{
enum cgroup_bpf_attach_type atype;
struct bpf_prog *old_prog;
@@ -1000,6 +1132,9 @@ static int __cgroup_bpf_detach(struct cgroup *cgrp, struct bpf_prog *prog,
if (atype < 0)
return -EINVAL;
+ if (revision && revision != cgrp->bpf.revisions[atype])
+ return -ESTALE;
+
progs = &cgrp->bpf.progs[atype];
flags = cgrp->bpf.flags[atype];
@@ -1025,6 +1160,7 @@ static int __cgroup_bpf_detach(struct cgroup *cgrp, struct bpf_prog *prog,
/* now can actually delete it from this cgroup list */
hlist_del(&pl->node);
+ cgrp->bpf.revisions[atype] += 1;
kfree(pl);
if (hlist_empty(progs))
@@ -1040,12 +1176,12 @@ static int __cgroup_bpf_detach(struct cgroup *cgrp, struct bpf_prog *prog,
}
static int cgroup_bpf_detach(struct cgroup *cgrp, struct bpf_prog *prog,
- enum bpf_attach_type type)
+ enum bpf_attach_type type, u64 revision)
{
int ret;
cgroup_lock();
- ret = __cgroup_bpf_detach(cgrp, prog, NULL, type);
+ ret = __cgroup_bpf_detach(cgrp, prog, NULL, type, revision);
cgroup_unlock();
return ret;
}
@@ -1063,6 +1199,7 @@ static int __cgroup_bpf_query(struct cgroup *cgrp, const union bpf_attr *attr,
struct bpf_prog_array *effective;
int cnt, ret = 0, i;
int total_cnt = 0;
+ u64 revision = 0;
u32 flags;
if (effective_query && prog_attach_flags)
@@ -1100,6 +1237,10 @@ static int __cgroup_bpf_query(struct cgroup *cgrp, const union bpf_attr *attr,
return -EFAULT;
if (copy_to_user(&uattr->query.prog_cnt, &total_cnt, sizeof(total_cnt)))
return -EFAULT;
+ if (!effective_query && from_atype == to_atype)
+ revision = cgrp->bpf.revisions[from_atype];
+ if (copy_to_user(&uattr->query.revision, &revision, sizeof(revision)))
+ return -EFAULT;
if (attr->query.prog_cnt == 0 || !prog_ids || !total_cnt)
/* return early if user requested only program count + flags */
return 0;
@@ -1182,7 +1323,8 @@ int cgroup_bpf_prog_attach(const union bpf_attr *attr,
}
ret = cgroup_bpf_attach(cgrp, prog, replace_prog, NULL,
- attr->attach_type, attr->attach_flags);
+ attr->attach_type, attr->attach_flags,
+ attr->relative_fd, attr->expected_revision);
if (replace_prog)
bpf_prog_put(replace_prog);
@@ -1204,7 +1346,7 @@ int cgroup_bpf_prog_detach(const union bpf_attr *attr, enum bpf_prog_type ptype)
if (IS_ERR(prog))
prog = NULL;
- ret = cgroup_bpf_detach(cgrp, prog, attr->attach_type);
+ ret = cgroup_bpf_detach(cgrp, prog, attr->attach_type, attr->expected_revision);
if (prog)
bpf_prog_put(prog);
@@ -1233,7 +1375,7 @@ static void bpf_cgroup_link_release(struct bpf_link *link)
}
WARN_ON(__cgroup_bpf_detach(cg_link->cgroup, NULL, cg_link,
- cg_link->type));
+ cg_link->type, 0));
if (cg_link->type == BPF_LSM_CGROUP)
bpf_trampoline_unlink_cgroup_shim(cg_link->link.prog);
@@ -1305,6 +1447,13 @@ static const struct bpf_link_ops bpf_cgroup_link_lops = {
.fill_link_info = bpf_cgroup_link_fill_link_info,
};
+#define BPF_F_LINK_ATTACH_MASK \
+ (BPF_F_ID | \
+ BPF_F_BEFORE | \
+ BPF_F_AFTER | \
+ BPF_F_PREORDER | \
+ BPF_F_LINK)
+
int cgroup_bpf_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
{
struct bpf_link_primer link_primer;
@@ -1312,7 +1461,7 @@ int cgroup_bpf_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
struct cgroup *cgrp;
int err;
- if (attr->link_create.flags)
+ if (attr->link_create.flags & (~BPF_F_LINK_ATTACH_MASK))
return -EINVAL;
cgrp = cgroup_get_from_fd(attr->link_create.target_fd);
@@ -1336,7 +1485,9 @@ int cgroup_bpf_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
}
err = cgroup_bpf_attach(cgrp, NULL, NULL, link,
- link->type, BPF_F_ALLOW_MULTI);
+ link->type, BPF_F_ALLOW_MULTI | attr->link_create.flags,
+ attr->link_create.cgroup.relative_fd,
+ attr->link_create.cgroup.expected_revision);
if (err) {
bpf_link_cleanup(&link_primer);
goto out_put_cgroup;
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 4b5f29168618..7e7e6c50718b 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -4185,6 +4185,25 @@ static int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog,
}
}
+static bool is_cgroup_prog_type(enum bpf_prog_type ptype, enum bpf_attach_type atype,
+ bool check_atype)
+{
+ switch (ptype) {
+ case BPF_PROG_TYPE_CGROUP_DEVICE:
+ case BPF_PROG_TYPE_CGROUP_SKB:
+ case BPF_PROG_TYPE_CGROUP_SOCK:
+ case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
+ case BPF_PROG_TYPE_CGROUP_SOCKOPT:
+ case BPF_PROG_TYPE_CGROUP_SYSCTL:
+ case BPF_PROG_TYPE_SOCK_OPS:
+ return true;
+ case BPF_PROG_TYPE_LSM:
+ return check_atype ? atype == BPF_LSM_CGROUP : true;
+ default:
+ return false;
+ }
+}
+
#define BPF_PROG_ATTACH_LAST_FIELD expected_revision
#define BPF_F_ATTACH_MASK_BASE \
@@ -4215,7 +4234,7 @@ static int bpf_prog_attach(const union bpf_attr *attr)
if (bpf_mprog_supported(ptype)) {
if (attr->attach_flags & ~BPF_F_ATTACH_MASK_MPROG)
return -EINVAL;
- } else {
+ } else if (!is_cgroup_prog_type(ptype, 0, false)) {
if (attr->attach_flags & ~BPF_F_ATTACH_MASK_BASE)
return -EINVAL;
if (attr->relative_fd ||
@@ -4243,20 +4262,6 @@ static int bpf_prog_attach(const union bpf_attr *attr)
case BPF_PROG_TYPE_FLOW_DISSECTOR:
ret = netns_bpf_prog_attach(attr, prog);
break;
- case BPF_PROG_TYPE_CGROUP_DEVICE:
- case BPF_PROG_TYPE_CGROUP_SKB:
- case BPF_PROG_TYPE_CGROUP_SOCK:
- case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
- case BPF_PROG_TYPE_CGROUP_SOCKOPT:
- case BPF_PROG_TYPE_CGROUP_SYSCTL:
- case BPF_PROG_TYPE_SOCK_OPS:
- case BPF_PROG_TYPE_LSM:
- if (ptype == BPF_PROG_TYPE_LSM &&
- prog->expected_attach_type != BPF_LSM_CGROUP)
- ret = -EINVAL;
- else
- ret = cgroup_bpf_prog_attach(attr, ptype, prog);
- break;
case BPF_PROG_TYPE_SCHED_CLS:
if (attr->attach_type == BPF_TCX_INGRESS ||
attr->attach_type == BPF_TCX_EGRESS)
@@ -4265,7 +4270,10 @@ static int bpf_prog_attach(const union bpf_attr *attr)
ret = netkit_prog_attach(attr, prog);
break;
default:
- ret = -EINVAL;
+ if (!is_cgroup_prog_type(ptype, prog->expected_attach_type, true))
+ ret = -EINVAL;
+ else
+ ret = cgroup_bpf_prog_attach(attr, ptype, prog);
}
if (ret)
@@ -4295,6 +4303,9 @@ static int bpf_prog_detach(const union bpf_attr *attr)
if (IS_ERR(prog))
return PTR_ERR(prog);
}
+ } else if (is_cgroup_prog_type(ptype, 0, false)) {
+ if (attr->attach_flags || attr->relative_fd)
+ return -EINVAL;
} else if (attr->attach_flags ||
attr->relative_fd ||
attr->expected_revision) {
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 16e95398c91c..356cd2b185fb 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1794,6 +1794,13 @@ union bpf_attr {
};
__u64 expected_revision;
} netkit;
+ struct {
+ union {
+ __u32 relative_fd;
+ __u32 relative_id;
+ };
+ __u64 expected_revision;
+ } cgroup;
};
} link_create;
--
2.47.1
* [PATCH bpf-next v3 3/5] libbpf: Support link-based cgroup attach with options
2025-05-17 16:27 [PATCH bpf-next v3 0/5] bpf: Implement mprog API on top of existing cgroup progs Yonghong Song
2025-05-17 16:27 ` [PATCH bpf-next v3 1/5] cgroup: Add bpf prog revisions to struct cgroup_bpf Yonghong Song
2025-05-17 16:27 ` [PATCH bpf-next v3 2/5] bpf: Implement mprog API on top of existing cgroup progs Yonghong Song
@ 2025-05-17 16:27 ` Yonghong Song
2025-05-17 16:27 ` [PATCH bpf-next v3 4/5] selftests/bpf: Move some tc_helpers.h functions to test_progs.h Yonghong Song
2025-05-17 16:27 ` [PATCH bpf-next v3 5/5] selftests/bpf: Add two selftests for mprog API based cgroup progs Yonghong Song
4 siblings, 0 replies; 10+ messages in thread
From: Yonghong Song @ 2025-05-17 16:27 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, kernel-team,
Martin KaFai Lau
Currently libbpf supports bpf_program__attach_cgroup() with signature:
LIBBPF_API struct bpf_link *
bpf_program__attach_cgroup(const struct bpf_program *prog, int cgroup_fd);
To support mprog-style attachment, additional fields like flags,
relative_{fd,id} and expected_revision are needed.
Add a new API:
LIBBPF_API struct bpf_link *
bpf_program__attach_cgroup_opts(const struct bpf_program *prog, int cgroup_fd,
const struct bpf_cgroup_opts *opts);
where bpf_cgroup_opts contains all the fields above.
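A usage sketch of the new API (assumptions: skel->progs.getsockopt_2 is a
loaded cgroup/getsockopt program, cg is a cgroup fd, and anchor_fd is the
fd of an already attached program; an expected_revision of 0 skips the
revision check):

    LIBBPF_OPTS(bpf_cgroup_opts, opts,
            .flags = BPF_F_BEFORE,
            .relative_fd = anchor_fd,
            .expected_revision = 2,
    );
    link = bpf_program__attach_cgroup_opts(skel->progs.getsockopt_2, cg, &opts);
    /* NULL on failure with errno set, e.g., ESTALE on a revision mismatch */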
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
tools/lib/bpf/bpf.c | 44 ++++++++++++++++++++++++++++++++++++++++
tools/lib/bpf/bpf.h | 5 +++++
tools/lib/bpf/libbpf.c | 28 +++++++++++++++++++++++++
tools/lib/bpf/libbpf.h | 15 ++++++++++++++
tools/lib/bpf/libbpf.map | 1 +
5 files changed, 93 insertions(+)
diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index a9c3e33d0f8a..6eb421ccf91b 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -837,6 +837,50 @@ int bpf_link_create(int prog_fd, int target_fd,
if (!OPTS_ZEROED(opts, netkit))
return libbpf_err(-EINVAL);
break;
+ case BPF_CGROUP_INET_INGRESS:
+ case BPF_CGROUP_INET_EGRESS:
+ case BPF_CGROUP_INET_SOCK_CREATE:
+ case BPF_CGROUP_INET_SOCK_RELEASE:
+ case BPF_CGROUP_INET4_BIND:
+ case BPF_CGROUP_INET6_BIND:
+ case BPF_CGROUP_INET4_POST_BIND:
+ case BPF_CGROUP_INET6_POST_BIND:
+ case BPF_CGROUP_INET4_CONNECT:
+ case BPF_CGROUP_INET6_CONNECT:
+ case BPF_CGROUP_UNIX_CONNECT:
+ case BPF_CGROUP_INET4_GETPEERNAME:
+ case BPF_CGROUP_INET6_GETPEERNAME:
+ case BPF_CGROUP_UNIX_GETPEERNAME:
+ case BPF_CGROUP_INET4_GETSOCKNAME:
+ case BPF_CGROUP_INET6_GETSOCKNAME:
+ case BPF_CGROUP_UNIX_GETSOCKNAME:
+ case BPF_CGROUP_UDP4_SENDMSG:
+ case BPF_CGROUP_UDP6_SENDMSG:
+ case BPF_CGROUP_UNIX_SENDMSG:
+ case BPF_CGROUP_UDP4_RECVMSG:
+ case BPF_CGROUP_UDP6_RECVMSG:
+ case BPF_CGROUP_UNIX_RECVMSG:
+ case BPF_CGROUP_SOCK_OPS:
+ case BPF_CGROUP_DEVICE:
+ case BPF_CGROUP_SYSCTL:
+ case BPF_CGROUP_GETSOCKOPT:
+ case BPF_CGROUP_SETSOCKOPT:
+ case BPF_LSM_CGROUP:
+ relative_fd = OPTS_GET(opts, cgroup.relative_fd, 0);
+ relative_id = OPTS_GET(opts, cgroup.relative_id, 0);
+ if (relative_fd && relative_id)
+ return libbpf_err(-EINVAL);
+ if (relative_id) {
+ attr.link_create.cgroup.relative_id = relative_id;
+ attr.link_create.flags |= BPF_F_ID;
+ } else {
+ attr.link_create.cgroup.relative_fd = relative_fd;
+ }
+ attr.link_create.cgroup.expected_revision =
+ OPTS_GET(opts, cgroup.expected_revision, 0);
+ if (!OPTS_ZEROED(opts, cgroup))
+ return libbpf_err(-EINVAL);
+ break;
default:
if (!OPTS_ZEROED(opts, flags))
return libbpf_err(-EINVAL);
diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
index 777627d33d25..1342564214c8 100644
--- a/tools/lib/bpf/bpf.h
+++ b/tools/lib/bpf/bpf.h
@@ -438,6 +438,11 @@ struct bpf_link_create_opts {
__u32 relative_id;
__u64 expected_revision;
} netkit;
+ struct {
+ __u32 relative_fd;
+ __u32 relative_id;
+ __u64 expected_revision;
+ } cgroup;
};
size_t :0;
};
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index e9c641a2fb20..6445165a24f2 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -12837,6 +12837,34 @@ struct bpf_link *bpf_program__attach_xdp(const struct bpf_program *prog, int ifi
return bpf_program_attach_fd(prog, ifindex, "xdp", NULL);
}
+struct bpf_link *
+bpf_program__attach_cgroup_opts(const struct bpf_program *prog, int cgroup_fd,
+ const struct bpf_cgroup_opts *opts)
+{
+ LIBBPF_OPTS(bpf_link_create_opts, link_create_opts);
+ __u32 relative_id;
+ int relative_fd;
+
+ if (!OPTS_VALID(opts, bpf_cgroup_opts))
+ return libbpf_err_ptr(-EINVAL);
+
+ relative_id = OPTS_GET(opts, relative_id, 0);
+ relative_fd = OPTS_GET(opts, relative_fd, 0);
+
+ if (relative_fd && relative_id) {
+ pr_warn("prog '%s': relative_fd and relative_id cannot be set at the same time\n",
+ prog->name);
+ return libbpf_err_ptr(-EINVAL);
+ }
+
+ link_create_opts.cgroup.expected_revision = OPTS_GET(opts, expected_revision, 0);
+ link_create_opts.cgroup.relative_fd = relative_fd;
+ link_create_opts.cgroup.relative_id = relative_id;
+ link_create_opts.flags = OPTS_GET(opts, flags, 0);
+
+ return bpf_program_attach_fd(prog, cgroup_fd, "cgroup", &link_create_opts);
+}
+
struct bpf_link *
bpf_program__attach_tcx(const struct bpf_program *prog, int ifindex,
const struct bpf_tcx_opts *opts)
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index d39f19c8396d..622de1b932ee 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -877,6 +877,21 @@ LIBBPF_API struct bpf_link *
bpf_program__attach_netkit(const struct bpf_program *prog, int ifindex,
const struct bpf_netkit_opts *opts);
+struct bpf_cgroup_opts {
+ /* size of this struct, for forward/backward compatibility */
+ size_t sz;
+ __u32 flags;
+ __u32 relative_fd;
+ __u32 relative_id;
+ __u64 expected_revision;
+ size_t :0;
+};
+#define bpf_cgroup_opts__last_field expected_revision
+
+LIBBPF_API struct bpf_link *
+bpf_program__attach_cgroup_opts(const struct bpf_program *prog, int cgroup_fd,
+ const struct bpf_cgroup_opts *opts);
+
struct bpf_map;
LIBBPF_API struct bpf_link *bpf_map__attach_struct_ops(const struct bpf_map *map);
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index 1205f9a4fe04..c7fc0bde5648 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -437,6 +437,7 @@ LIBBPF_1.6.0 {
bpf_linker__add_fd;
bpf_linker__new_fd;
bpf_object__prepare;
+ bpf_program__attach_cgroup_opts;
bpf_program__func_info;
bpf_program__func_info_cnt;
bpf_program__line_info;
--
2.47.1
* [PATCH bpf-next v3 4/5] selftests/bpf: Move some tc_helpers.h functions to test_progs.h
2025-05-17 16:27 [PATCH bpf-next v3 0/5] bpf: Implement mprog API on top of existing cgroup progs Yonghong Song
` (2 preceding siblings ...)
2025-05-17 16:27 ` [PATCH bpf-next v3 3/5] libbpf: Support link-based cgroup attach with options Yonghong Song
@ 2025-05-17 16:27 ` Yonghong Song
2025-05-17 16:27 ` [PATCH bpf-next v3 5/5] selftests/bpf: Add two selftests for mprog API based cgroup progs Yonghong Song
4 siblings, 0 replies; 10+ messages in thread
From: Yonghong Song @ 2025-05-17 16:27 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, kernel-team,
Martin KaFai Lau
Move the static inline functions id_from_prog_fd() and id_from_link_fd()
from prog_tests/tc_helpers.h to test_progs.h so that these two functions
can be reused by the cgroup mprog selftests added later in this series.
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
.../selftests/bpf/prog_tests/tc_helpers.h | 28 -------------------
tools/testing/selftests/bpf/test_progs.h | 28 +++++++++++++++++++
2 files changed, 28 insertions(+), 28 deletions(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/tc_helpers.h b/tools/testing/selftests/bpf/prog_tests/tc_helpers.h
index 924d0e25320c..d52a62af77bf 100644
--- a/tools/testing/selftests/bpf/prog_tests/tc_helpers.h
+++ b/tools/testing/selftests/bpf/prog_tests/tc_helpers.h
@@ -8,34 +8,6 @@
# define loopback 1
#endif
-static inline __u32 id_from_prog_fd(int fd)
-{
- struct bpf_prog_info prog_info = {};
- __u32 prog_info_len = sizeof(prog_info);
- int err;
-
- err = bpf_obj_get_info_by_fd(fd, &prog_info, &prog_info_len);
- if (!ASSERT_OK(err, "id_from_prog_fd"))
- return 0;
-
- ASSERT_NEQ(prog_info.id, 0, "prog_info.id");
- return prog_info.id;
-}
-
-static inline __u32 id_from_link_fd(int fd)
-{
- struct bpf_link_info link_info = {};
- __u32 link_info_len = sizeof(link_info);
- int err;
-
- err = bpf_link_get_info_by_fd(fd, &link_info, &link_info_len);
- if (!ASSERT_OK(err, "id_from_link_fd"))
- return 0;
-
- ASSERT_NEQ(link_info.id, 0, "link_info.id");
- return link_info.id;
-}
-
static inline __u32 ifindex_from_link_fd(int fd)
{
struct bpf_link_info link_info = {};
diff --git a/tools/testing/selftests/bpf/test_progs.h b/tools/testing/selftests/bpf/test_progs.h
index 870694f2a359..df2222a1806f 100644
--- a/tools/testing/selftests/bpf/test_progs.h
+++ b/tools/testing/selftests/bpf/test_progs.h
@@ -460,6 +460,34 @@ static inline void *u64_to_ptr(__u64 ptr)
return (void *) (unsigned long) ptr;
}
+static inline __u32 id_from_prog_fd(int fd)
+{
+ struct bpf_prog_info prog_info = {};
+ __u32 prog_info_len = sizeof(prog_info);
+ int err;
+
+ err = bpf_obj_get_info_by_fd(fd, &prog_info, &prog_info_len);
+ if (!ASSERT_OK(err, "id_from_prog_fd"))
+ return 0;
+
+ ASSERT_NEQ(prog_info.id, 0, "prog_info.id");
+ return prog_info.id;
+}
+
+static inline __u32 id_from_link_fd(int fd)
+{
+ struct bpf_link_info link_info = {};
+ __u32 link_info_len = sizeof(link_info);
+ int err;
+
+ err = bpf_link_get_info_by_fd(fd, &link_info, &link_info_len);
+ if (!ASSERT_OK(err, "id_from_link_fd"))
+ return 0;
+
+ ASSERT_NEQ(link_info.id, 0, "link_info.id");
+ return link_info.id;
+}
+
int bpf_find_map(const char *test, struct bpf_object *obj, const char *name);
int compare_map_keys(int map1_fd, int map2_fd);
int compare_stack_ips(int smap_fd, int amap_fd, int stack_trace_len);
--
2.47.1
* [PATCH bpf-next v3 5/5] selftests/bpf: Add two selftests for mprog API based cgroup progs
2025-05-17 16:27 [PATCH bpf-next v3 0/5] bpf: Implement mprog API on top of existing cgroup progs Yonghong Song
` (3 preceding siblings ...)
2025-05-17 16:27 ` [PATCH bpf-next v3 4/5] selftests/bpf: Move some tc_helpers.h functions to test_progs.h Yonghong Song
@ 2025-05-17 16:27 ` Yonghong Song
4 siblings, 0 replies; 10+ messages in thread
From: Yonghong Song @ 2025-05-17 16:27 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, kernel-team,
Martin KaFai Lau
Two tests are added:
- cgroup_mprog_opts, which mimics tc_opts.c ([1]). Both prog and link
  attach are tested, and some negative tests are included as well.
- cgroup_mprog_ordering, which runs the attached programs to verify the
  execution order implied by the mprog API flags.
[1] https://github.com/torvalds/linux/blob/master/tools/testing/selftests/bpf/prog_tests/tc_opts.c
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
.../bpf/prog_tests/cgroup_mprog_opts.c | 749 ++++++++++++++++++
.../bpf/prog_tests/cgroup_mprog_ordering.c | 77 ++
.../selftests/bpf/progs/cgroup_mprog.c | 30 +
3 files changed, 856 insertions(+)
create mode 100644 tools/testing/selftests/bpf/prog_tests/cgroup_mprog_opts.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/cgroup_mprog_ordering.c
create mode 100644 tools/testing/selftests/bpf/progs/cgroup_mprog.c
diff --git a/tools/testing/selftests/bpf/prog_tests/cgroup_mprog_opts.c b/tools/testing/selftests/bpf/prog_tests/cgroup_mprog_opts.c
new file mode 100644
index 000000000000..ad83da5fd4a2
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/cgroup_mprog_opts.c
@@ -0,0 +1,749 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2025 Meta Platforms, Inc. and affiliates. */
+#include <test_progs.h>
+#include "cgroup_helpers.h"
+#include "cgroup_mprog.skel.h"
+
+static void assert_mprog_count(int cg, int atype, int expected)
+{
+ __u32 count = 0, attach_flags = 0;
+ int err;
+
+ err = bpf_prog_query(cg, atype, 0, &attach_flags,
+ NULL, &count);
+ ASSERT_EQ(count, expected, "count");
+ ASSERT_EQ(err, 0, "prog_query");
+}
+
+static void test_prog_attach_detach(int atype)
+{
+ LIBBPF_OPTS(bpf_prog_attach_opts, opta);
+ LIBBPF_OPTS(bpf_prog_detach_opts, optd);
+ LIBBPF_OPTS(bpf_prog_query_opts, optq);
+ __u32 fd1, fd2, fd3, fd4, id1, id2, id3, id4;
+ struct cgroup_mprog *skel;
+ __u32 prog_ids[10];
+ int cg, err;
+
+ cg = test__join_cgroup("/prog_attach_detach");
+ if (!ASSERT_GE(cg, 0, "join_cgroup /prog_attach_detach"))
+ return;
+
+ skel = cgroup_mprog__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "skel_load"))
+ goto cleanup;
+
+ fd1 = bpf_program__fd(skel->progs.getsockopt_1);
+ fd2 = bpf_program__fd(skel->progs.getsockopt_2);
+ fd3 = bpf_program__fd(skel->progs.getsockopt_3);
+ fd4 = bpf_program__fd(skel->progs.getsockopt_4);
+
+ id1 = id_from_prog_fd(fd1);
+ id2 = id_from_prog_fd(fd2);
+ id3 = id_from_prog_fd(fd3);
+ id4 = id_from_prog_fd(fd4);
+
+ assert_mprog_count(cg, atype, 0);
+
+ LIBBPF_OPTS_RESET(opta,
+ .flags = BPF_F_ALLOW_MULTI | BPF_F_BEFORE | BPF_F_AFTER,
+ .expected_revision = 1,
+ );
+
+ /* ordering: [fd1] */
+ err = bpf_prog_attach_opts(fd1, cg, atype, &opta);
+ if (!ASSERT_EQ(err, 0, "prog_attach"))
+ goto cleanup;
+
+ assert_mprog_count(cg, atype, 1);
+
+ LIBBPF_OPTS_RESET(opta,
+ .flags = BPF_F_ALLOW_MULTI | BPF_F_BEFORE,
+ .expected_revision = 2,
+ );
+
+ /* ordering: [fd2, fd1] */
+ err = bpf_prog_attach_opts(fd2, cg, atype, &opta);
+ if (!ASSERT_EQ(err, 0, "prog_attach"))
+ goto cleanup1;
+
+ assert_mprog_count(cg, atype, 2);
+
+ LIBBPF_OPTS_RESET(opta,
+ .flags = BPF_F_ALLOW_MULTI | BPF_F_AFTER,
+ .relative_fd = fd2,
+ .expected_revision = 3,
+ );
+
+ /* ordering: [fd2, fd3, fd1] */
+ err = bpf_prog_attach_opts(fd3, cg, atype, &opta);
+ if (!ASSERT_EQ(err, 0, "prog_attach"))
+ goto cleanup2;
+
+ assert_mprog_count(cg, atype, 3);
+
+ LIBBPF_OPTS_RESET(opta,
+ .flags = BPF_F_ALLOW_MULTI,
+ .expected_revision = 4,
+ );
+
+ /* ordering: [fd2, fd3, fd1, fd4] */
+ err = bpf_prog_attach_opts(fd4, cg, atype, &opta);
+ if (!ASSERT_EQ(err, 0, "prog_attach"))
+ goto cleanup3;
+
+ assert_mprog_count(cg, atype, 4);
+
+ /* retrieve optq.prog_cnt */
+ err = bpf_prog_query_opts(cg, atype, &optq);
+ if (!ASSERT_OK(err, "prog_query"))
+ goto cleanup4;
+
+ /* optq.prog_cnt will be used in below query */
+ memset(prog_ids, 0, sizeof(prog_ids));
+ optq.prog_ids = prog_ids;
+ err = bpf_prog_query_opts(cg, atype, &optq);
+ if (!ASSERT_OK(err, "prog_query"))
+ goto cleanup4;
+
+ ASSERT_EQ(optq.count, 4, "count");
+ ASSERT_EQ(optq.revision, 5, "revision");
+ ASSERT_EQ(optq.prog_ids[0], id2, "prog_ids[0]");
+ ASSERT_EQ(optq.prog_ids[1], id3, "prog_ids[1]");
+ ASSERT_EQ(optq.prog_ids[2], id1, "prog_ids[2]");
+ ASSERT_EQ(optq.prog_ids[3], id4, "prog_ids[3]");
+ ASSERT_EQ(optq.prog_ids[4], 0, "prog_ids[4]");
+ ASSERT_EQ(optq.link_ids, NULL, "link_ids");
+
+cleanup4:
+ optd.expected_revision = 5;
+ err = bpf_prog_detach_opts(fd4, cg, atype, &optd);
+ ASSERT_OK(err, "prog_detach");
+ assert_mprog_count(cg, atype, 3);
+
+cleanup3:
+ LIBBPF_OPTS_RESET(optd);
+ err = bpf_prog_detach_opts(fd3, cg, atype, &optd);
+ ASSERT_OK(err, "prog_detach");
+ assert_mprog_count(cg, atype, 2);
+
+ /* Check revision after two detach operations */
+ err = bpf_prog_query_opts(cg, atype, &optq);
+ ASSERT_OK(err, "prog_query");
+ ASSERT_EQ(optq.revision, 7, "revision");
+
+cleanup2:
+ err = bpf_prog_detach_opts(fd2, cg, atype, &optd);
+ ASSERT_OK(err, "prog_detach");
+ assert_mprog_count(cg, atype, 1);
+
+cleanup1:
+ err = bpf_prog_detach_opts(fd1, cg, atype, &optd);
+ ASSERT_OK(err, "prog_detach");
+ assert_mprog_count(cg, atype, 0);
+
+cleanup:
+ cgroup_mprog__destroy(skel);
+ close(cg);
+}
+
+static void test_link_attach_detach(int atype)
+{
+ LIBBPF_OPTS(bpf_cgroup_opts, opta);
+ LIBBPF_OPTS(bpf_cgroup_opts, optd);
+ LIBBPF_OPTS(bpf_prog_query_opts, optq);
+ struct bpf_link *link1, *link2, *link3, *link4;
+ __u32 fd1, fd2, fd3, fd4, id1, id2, id3, id4;
+ struct cgroup_mprog *skel;
+ __u32 prog_ids[10];
+ int cg, err;
+
+ cg = test__join_cgroup("/link_attach_detach");
+ if (!ASSERT_GE(cg, 0, "join_cgroup /link_attach_detach"))
+ return;
+
+ skel = cgroup_mprog__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "skel_load"))
+ goto cleanup;
+
+ fd1 = bpf_program__fd(skel->progs.getsockopt_1);
+ fd2 = bpf_program__fd(skel->progs.getsockopt_2);
+ fd3 = bpf_program__fd(skel->progs.getsockopt_3);
+ fd4 = bpf_program__fd(skel->progs.getsockopt_4);
+
+ id1 = id_from_prog_fd(fd1);
+ id2 = id_from_prog_fd(fd2);
+ id3 = id_from_prog_fd(fd3);
+ id4 = id_from_prog_fd(fd4);
+
+ assert_mprog_count(cg, atype, 0);
+
+ LIBBPF_OPTS_RESET(opta,
+ .expected_revision = 1,
+ );
+
+ /* ordering: [fd1] */
+ link1 = bpf_program__attach_cgroup_opts(skel->progs.getsockopt_1, cg, &opta);
+ if (!ASSERT_OK_PTR(link1, "link_attach"))
+ goto cleanup;
+
+ assert_mprog_count(cg, atype, 1);
+
+ LIBBPF_OPTS_RESET(opta,
+ .flags = BPF_F_BEFORE | BPF_F_LINK,
+ .relative_id = id_from_link_fd(bpf_link__fd(link1)),
+ .expected_revision = 2,
+ );
+
+ /* ordering: [fd2, fd1] */
+ link2 = bpf_program__attach_cgroup_opts(skel->progs.getsockopt_2, cg, &opta);
+ if (!ASSERT_OK_PTR(link2, "link_attach"))
+ goto cleanup1;
+
+ assert_mprog_count(cg, atype, 2);
+
+ LIBBPF_OPTS_RESET(opta,
+ .flags = BPF_F_AFTER | BPF_F_LINK,
+ .relative_fd = bpf_link__fd(link2),
+ .expected_revision = 3,
+ );
+
+ /* ordering: [fd2, fd3, fd1] */
+ link3 = bpf_program__attach_cgroup_opts(skel->progs.getsockopt_3, cg, &opta);
+ if (!ASSERT_OK_PTR(link3, "link_attach"))
+ goto cleanup2;
+
+ assert_mprog_count(cg, atype, 3);
+
+ LIBBPF_OPTS_RESET(opta,
+ .expected_revision = 4,
+ );
+
+ /* ordering: [fd2, fd3, fd1, fd4] */
+ link4 = bpf_program__attach_cgroup_opts(skel->progs.getsockopt_4, cg, &opta);
+ if (!ASSERT_OK_PTR(link4, "link_attach"))
+ goto cleanup3;
+
+ assert_mprog_count(cg, atype, 4);
+
+ /* retrieve optq.prog_cnt */
+ err = bpf_prog_query_opts(cg, atype, &optq);
+ if (!ASSERT_OK(err, "prog_query"))
+ goto cleanup4;
+
+ /* optq.prog_cnt will be used in below query */
+ memset(prog_ids, 0, sizeof(prog_ids));
+ optq.prog_ids = prog_ids;
+ err = bpf_prog_query_opts(cg, atype, &optq);
+ if (!ASSERT_OK(err, "prog_query"))
+ goto cleanup4;
+
+ ASSERT_EQ(optq.count, 4, "count");
+ ASSERT_EQ(optq.revision, 5, "revision");
+ ASSERT_EQ(optq.prog_ids[0], id2, "prog_ids[0]");
+ ASSERT_EQ(optq.prog_ids[1], id3, "prog_ids[1]");
+ ASSERT_EQ(optq.prog_ids[2], id1, "prog_ids[2]");
+ ASSERT_EQ(optq.prog_ids[3], id4, "prog_ids[3]");
+ ASSERT_EQ(optq.prog_ids[4], 0, "prog_ids[4]");
+ ASSERT_EQ(optq.link_ids, NULL, "link_ids");
+
+cleanup4:
+ bpf_link__destroy(link4);
+ assert_mprog_count(cg, atype, 3);
+
+cleanup3:
+ bpf_link__destroy(link3);
+ assert_mprog_count(cg, atype, 2);
+
+ /* Check revision after two detach operations */
+ err = bpf_prog_query_opts(cg, atype, &optq);
+ ASSERT_OK(err, "prog_query");
+ ASSERT_EQ(optq.revision, 7, "revision");
+
+cleanup2:
+ bpf_link__destroy(link2);
+ assert_mprog_count(cg, atype, 1);
+
+cleanup1:
+ bpf_link__destroy(link1);
+ assert_mprog_count(cg, atype, 0);
+
+cleanup:
+ cgroup_mprog__destroy(skel);
+ close(cg);
+}
+
+static void test_mix_attach_detach(int atype)
+{
+ LIBBPF_OPTS(bpf_cgroup_opts, lopta);
+ LIBBPF_OPTS(bpf_cgroup_opts, loptd);
+ LIBBPF_OPTS(bpf_prog_attach_opts, opta);
+ LIBBPF_OPTS(bpf_prog_detach_opts, optd);
+ LIBBPF_OPTS(bpf_prog_query_opts, optq);
+ __u32 fd1, fd2, fd3, fd4, id1, id2, id3, id4;
+ struct bpf_link *link2, *link4;
+ struct cgroup_mprog *skel;
+ __u32 prog_ids[10];
+ int cg, err;
+
+ cg = test__join_cgroup("/mix_attach_detach");
+ if (!ASSERT_GE(cg, 0, "join_cgroup /mix_attach_detach"))
+ return;
+
+ skel = cgroup_mprog__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "skel_load"))
+ goto cleanup;
+
+ fd1 = bpf_program__fd(skel->progs.getsockopt_1);
+ fd2 = bpf_program__fd(skel->progs.getsockopt_2);
+ fd3 = bpf_program__fd(skel->progs.getsockopt_3);
+ fd4 = bpf_program__fd(skel->progs.getsockopt_4);
+
+ id1 = id_from_prog_fd(fd1);
+ id2 = id_from_prog_fd(fd2);
+ id3 = id_from_prog_fd(fd3);
+ id4 = id_from_prog_fd(fd4);
+
+ assert_mprog_count(cg, atype, 0);
+
+ LIBBPF_OPTS_RESET(opta,
+ .flags = BPF_F_ALLOW_MULTI,
+ .expected_revision = 1,
+ );
+
+ /* ordering: [fd1] */
+ err = bpf_prog_attach_opts(fd1, cg, atype, &opta);
+ if (!ASSERT_EQ(err, 0, "prog_attach"))
+ goto cleanup;
+
+ assert_mprog_count(cg, atype, 1);
+
+ LIBBPF_OPTS_RESET(lopta,
+ .flags = BPF_F_BEFORE,
+ .expected_revision = 2,
+ );
+
+ /* ordering: [fd2, fd1] */
+ link2 = bpf_program__attach_cgroup_opts(skel->progs.getsockopt_2, cg, &lopta);
+ if (!ASSERT_OK_PTR(link2, "link_attach"))
+ goto cleanup1;
+
+ assert_mprog_count(cg, atype, 2);
+
+ LIBBPF_OPTS_RESET(opta,
+ .flags = BPF_F_ALLOW_MULTI | BPF_F_AFTER | BPF_F_LINK,
+ .relative_id = id_from_link_fd(bpf_link__fd(link2)),
+ .expected_revision = 3,
+ );
+
+ /* ordering: [fd2, fd3, fd1] */
+ err = bpf_prog_attach_opts(fd3, cg, atype, &opta);
+ if (!ASSERT_EQ(err, 0, "prog_attach"))
+ goto cleanup2;
+
+ assert_mprog_count(cg, atype, 3);
+
+ LIBBPF_OPTS_RESET(lopta,
+ .expected_revision = 4,
+ );
+
+ /* ordering: [fd2, fd3, fd1, fd4] */
+ link4 = bpf_program__attach_cgroup_opts(skel->progs.getsockopt_4, cg, &lopta);
+ if (!ASSERT_OK_PTR(link4, "link_attach"))
+ goto cleanup3;
+
+ assert_mprog_count(cg, atype, 4);
+
+ /* retrieve optq.prog_cnt */
+ err = bpf_prog_query_opts(cg, atype, &optq);
+ if (!ASSERT_OK(err, "prog_query"))
+ goto cleanup4;
+
+ /* optq.prog_cnt will be used in below query */
+ memset(prog_ids, 0, sizeof(prog_ids));
+ optq.prog_ids = prog_ids;
+ err = bpf_prog_query_opts(cg, atype, &optq);
+ if (!ASSERT_OK(err, "prog_query"))
+ goto cleanup4;
+
+ ASSERT_EQ(optq.count, 4, "count");
+ ASSERT_EQ(optq.revision, 5, "revision");
+ ASSERT_EQ(optq.prog_ids[0], id2, "prog_ids[0]");
+ ASSERT_EQ(optq.prog_ids[1], id3, "prog_ids[1]");
+ ASSERT_EQ(optq.prog_ids[2], id1, "prog_ids[2]");
+ ASSERT_EQ(optq.prog_ids[3], id4, "prog_ids[3]");
+ ASSERT_EQ(optq.prog_ids[4], 0, "prog_ids[4]");
+ ASSERT_EQ(optq.link_ids, NULL, "link_ids");
+
+cleanup4:
+ bpf_link__destroy(link4);
+ assert_mprog_count(cg, atype, 3);
+
+cleanup3:
+ err = bpf_prog_detach_opts(fd3, cg, atype, &optd);
+ ASSERT_OK(err, "prog_detach");
+ assert_mprog_count(cg, atype, 2);
+
+ /* Check revision after two detach operations */
+ err = bpf_prog_query_opts(cg, atype, &optq);
+ ASSERT_OK(err, "prog_query");
+ ASSERT_EQ(optq.revision, 7, "revision");
+
+cleanup2:
+ bpf_link__destroy(link2);
+ assert_mprog_count(cg, atype, 1);
+
+cleanup1:
+ err = bpf_prog_detach_opts(fd1, cg, atype, &optd);
+ ASSERT_OK(err, "prog_detach");
+ assert_mprog_count(cg, atype, 0);
+
+cleanup:
+ cgroup_mprog__destroy(skel);
+ close(cg);
+}
+
+static void test_preorder_prog_attach_detach(int atype)
+{
+ LIBBPF_OPTS(bpf_prog_attach_opts, opta);
+ LIBBPF_OPTS(bpf_prog_detach_opts, optd);
+ __u32 fd1, fd2, fd3, fd4;
+ struct cgroup_mprog *skel;
+ int cg, err;
+
+ cg = test__join_cgroup("/preorder_prog_attach_detach");
+ if (!ASSERT_GE(cg, 0, "join_cgroup /preorder_prog_attach_detach"))
+ return;
+
+ skel = cgroup_mprog__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "skel_load"))
+ goto cleanup;
+
+ fd1 = bpf_program__fd(skel->progs.getsockopt_1);
+ fd2 = bpf_program__fd(skel->progs.getsockopt_2);
+ fd3 = bpf_program__fd(skel->progs.getsockopt_3);
+ fd4 = bpf_program__fd(skel->progs.getsockopt_4);
+
+ assert_mprog_count(cg, atype, 0);
+
+ LIBBPF_OPTS_RESET(opta,
+ .flags = BPF_F_ALLOW_MULTI,
+ .expected_revision = 1,
+ );
+
+ /* ordering: [fd1] */
+ err = bpf_prog_attach_opts(fd1, cg, atype, &opta);
+ if (!ASSERT_EQ(err, 0, "prog_attach"))
+ goto cleanup;
+
+ assert_mprog_count(cg, atype, 1);
+
+ LIBBPF_OPTS_RESET(opta,
+ .flags = BPF_F_ALLOW_MULTI | BPF_F_PREORDER,
+ .expected_revision = 2,
+ );
+
+ /* ordering: [fd1, fd2] */
+ err = bpf_prog_attach_opts(fd2, cg, atype, &opta);
+ if (!ASSERT_EQ(err, 0, "prog_attach"))
+ goto cleanup1;
+
+ assert_mprog_count(cg, atype, 2);
+
+ LIBBPF_OPTS_RESET(opta,
+ .flags = BPF_F_ALLOW_MULTI | BPF_F_AFTER,
+ .relative_fd = fd2,
+ .expected_revision = 3,
+ );
+
+ err = bpf_prog_attach_opts(fd3, cg, atype, &opta);
+ if (!ASSERT_EQ(err, -EINVAL, "prog_attach"))
+ goto cleanup2;
+
+ assert_mprog_count(cg, atype, 2);
+
+ LIBBPF_OPTS_RESET(opta,
+ .flags = BPF_F_ALLOW_MULTI | BPF_F_AFTER | BPF_F_PREORDER,
+ .relative_fd = fd2,
+ .expected_revision = 3,
+ );
+
+ /* ordering: [fd1, fd2, fd3] */
+ err = bpf_prog_attach_opts(fd3, cg, atype, &opta);
+ if (!ASSERT_EQ(err, 0, "prog_attach"))
+ goto cleanup2;
+
+ assert_mprog_count(cg, atype, 3);
+
+ LIBBPF_OPTS_RESET(opta,
+ .flags = BPF_F_ALLOW_MULTI,
+ .expected_revision = 4,
+ );
+
+ /* ordering: [fd2, fd3, fd1, fd4] */
+ err = bpf_prog_attach_opts(fd4, cg, atype, &opta);
+ if (!ASSERT_EQ(err, 0, "prog_attach"))
+ goto cleanup3;
+
+ assert_mprog_count(cg, atype, 4);
+
+ err = bpf_prog_detach_opts(fd4, cg, atype, &optd);
+ ASSERT_OK(err, "prog_detach");
+ assert_mprog_count(cg, atype, 3);
+
+cleanup3:
+ err = bpf_prog_detach_opts(fd3, cg, atype, &optd);
+ ASSERT_OK(err, "prog_detach");
+ assert_mprog_count(cg, atype, 2);
+
+cleanup2:
+ err = bpf_prog_detach_opts(fd2, cg, atype, &optd);
+ ASSERT_OK(err, "prog_detach");
+ assert_mprog_count(cg, atype, 1);
+
+cleanup1:
+ err = bpf_prog_detach_opts(fd1, cg, atype, &optd);
+ ASSERT_OK(err, "prog_detach");
+ assert_mprog_count(cg, atype, 0);
+
+cleanup:
+ cgroup_mprog__destroy(skel);
+ close(cg);
+}
+
+static void test_preorder_link_attach_detach(int atype)
+{
+ LIBBPF_OPTS(bpf_cgroup_opts, opta);
+ struct bpf_link *link1, *link2, *link3, *link4;
+ struct cgroup_mprog *skel;
+ __u32 fd2;
+ int cg;
+
+ cg = test__join_cgroup("/preorder_link_attach_detach");
+ if (!ASSERT_GE(cg, 0, "join_cgroup /preorder_link_attach_detach"))
+ return;
+
+ skel = cgroup_mprog__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "skel_load"))
+ goto cleanup;
+
+ fd2 = bpf_program__fd(skel->progs.getsockopt_2);
+
+ assert_mprog_count(cg, atype, 0);
+
+ LIBBPF_OPTS_RESET(opta,
+ .expected_revision = 1,
+ );
+
+ /* ordering: [fd1] */
+ link1 = bpf_program__attach_cgroup_opts(skel->progs.getsockopt_1, cg, &opta);
+ if (!ASSERT_OK_PTR(link1, "link_attach"))
+ goto cleanup;
+
+ assert_mprog_count(cg, atype, 1);
+
+ LIBBPF_OPTS_RESET(opta,
+ .flags = BPF_F_PREORDER,
+ .expected_revision = 2,
+ );
+
+ /* ordering: [fd1, fd2] */
+ link2 = bpf_program__attach_cgroup_opts(skel->progs.getsockopt_2, cg, &opta);
+ if (!ASSERT_OK_PTR(link2, "link_attach"))
+ goto cleanup1;
+
+ assert_mprog_count(cg, atype, 2);
+
+ LIBBPF_OPTS_RESET(opta,
+ .flags = BPF_F_AFTER,
+ .relative_fd = fd2,
+ .expected_revision = 3,
+ );
+
+ link3 = bpf_program__attach_cgroup_opts(skel->progs.getsockopt_3, cg, &opta);
+ if (!ASSERT_ERR_PTR(link3, "link_attach"))
+ goto cleanup2;
+
+ assert_mprog_count(cg, atype, 2);
+
+ LIBBPF_OPTS_RESET(opta,
+ .flags = BPF_F_AFTER | BPF_F_PREORDER,
+ .relative_fd = fd2,
+ .expected_revision = 3,
+ );
+
+ /* ordering: [fd1, fd2, fd3] */
+ link3 = bpf_program__attach_cgroup_opts(skel->progs.getsockopt_3, cg, &opta);
+ if (!ASSERT_OK_PTR(link3, "link_attach"))
+ goto cleanup2;
+
+ assert_mprog_count(cg, atype, 3);
+
+ LIBBPF_OPTS_RESET(opta,
+ .expected_revision = 4,
+ );
+
+ /* ordering: [fd2, fd3, fd1, fd4] */
+ link4 = bpf_program__attach_cgroup_opts(skel->progs.getsockopt_4, cg, &opta);
+ if (!ASSERT_OK_PTR(link4, "prog_attach"))
+ goto cleanup3;
+
+ assert_mprog_count(cg, atype, 4);
+
+ bpf_link__destroy(link4);
+ assert_mprog_count(cg, atype, 3);
+
+cleanup3:
+ bpf_link__destroy(link3);
+ assert_mprog_count(cg, atype, 2);
+
+cleanup2:
+ bpf_link__destroy(link2);
+ assert_mprog_count(cg, atype, 1);
+
+cleanup1:
+ bpf_link__destroy(link1);
+ assert_mprog_count(cg, atype, 0);
+
+cleanup:
+ cgroup_mprog__destroy(skel);
+ close(cg);
+}
+
+static void test_invalid_attach_detach(int atype)
+{
+ LIBBPF_OPTS(bpf_prog_attach_opts, opta);
+ __u32 fd1, fd2, id2;
+ struct cgroup_mprog *skel;
+ int cg, err;
+
+ cg = test__join_cgroup("/invalid_attach_detach");
+ if (!ASSERT_GE(cg, 0, "join_cgroup /invalid_attach_detach"))
+ return;
+
+ skel = cgroup_mprog__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "skel_load"))
+ goto cleanup;
+
+ fd1 = bpf_program__fd(skel->progs.getsockopt_1);
+ fd2 = bpf_program__fd(skel->progs.getsockopt_2);
+
+ id2 = id_from_prog_fd(fd2);
+
+ assert_mprog_count(cg, atype, 0);
+
+ LIBBPF_OPTS_RESET(opta,
+ .flags = BPF_F_ALLOW_MULTI | BPF_F_BEFORE | BPF_F_AFTER,
+ .relative_id = id2,
+ );
+
+ err = bpf_prog_attach_opts(fd1, cg, atype, &opta);
+ ASSERT_EQ(err, -EINVAL, "prog_attach");
+ assert_mprog_count(cg, atype, 0);
+
+ LIBBPF_OPTS_RESET(opta,
+ .flags = BPF_F_ALLOW_MULTI | BPF_F_BEFORE | BPF_F_ID,
+ );
+
+ err = bpf_prog_attach_opts(fd1, cg, atype, &opta);
+ ASSERT_EQ(err, -ENOENT, "prog_attach");
+ assert_mprog_count(cg, atype, 0);
+
+ LIBBPF_OPTS_RESET(opta,
+ .flags = BPF_F_ALLOW_MULTI | BPF_F_AFTER | BPF_F_ID,
+ );
+
+ err = bpf_prog_attach_opts(fd1, cg, atype, &opta);
+ ASSERT_EQ(err, -ENOENT, "prog_attach");
+ assert_mprog_count(cg, atype, 0);
+
+ LIBBPF_OPTS_RESET(opta,
+ .flags = BPF_F_ALLOW_MULTI | BPF_F_BEFORE | BPF_F_AFTER,
+ .relative_id = id2,
+ );
+
+ err = bpf_prog_attach_opts(fd1, cg, atype, &opta);
+ ASSERT_EQ(err, -EINVAL, "prog_attach");
+ assert_mprog_count(cg, atype, 0);
+
+ LIBBPF_OPTS_RESET(opta,
+ .flags = BPF_F_ALLOW_MULTI | BPF_F_LINK,
+ .relative_id = id2,
+ );
+
+ err = bpf_prog_attach_opts(fd1, cg, atype, &opta);
+ ASSERT_EQ(err, -EINVAL, "prog_attach");
+ assert_mprog_count(cg, atype, 0);
+
+ LIBBPF_OPTS_RESET(opta,
+ .flags = BPF_F_ALLOW_MULTI,
+ .relative_id = id2,
+ );
+
+ err = bpf_prog_attach_opts(fd1, cg, atype, &opta);
+ ASSERT_EQ(err, -EINVAL, "prog_attach");
+ assert_mprog_count(cg, atype, 0);
+
+ LIBBPF_OPTS_RESET(opta,
+ .flags = BPF_F_ALLOW_MULTI | BPF_F_BEFORE,
+ .relative_fd = fd1,
+ );
+
+ err = bpf_prog_attach_opts(fd1, cg, atype, &opta);
+ ASSERT_EQ(err, -ENOENT, "prog_attach");
+ assert_mprog_count(cg, atype, 0);
+
+ LIBBPF_OPTS_RESET(opta,
+ .flags = BPF_F_ALLOW_MULTI | BPF_F_AFTER,
+ .relative_fd = fd1,
+ );
+
+ err = bpf_prog_attach_opts(fd1, cg, atype, &opta);
+ ASSERT_EQ(err, -ENOENT, "prog_attach");
+ assert_mprog_count(cg, atype, 0);
+
+ LIBBPF_OPTS_RESET(opta,
+ .flags = BPF_F_ALLOW_MULTI,
+ );
+
+ err = bpf_prog_attach_opts(fd1, cg, atype, &opta);
+ if (!ASSERT_EQ(err, 0, "prog_attach"))
+ goto cleanup;
+ assert_mprog_count(cg, atype, 1);
+
+ LIBBPF_OPTS_RESET(opta,
+ .flags = BPF_F_ALLOW_MULTI | BPF_F_AFTER,
+ );
+
+ err = bpf_prog_attach_opts(fd1, cg, atype, &opta);
+ ASSERT_EQ(err, -EINVAL, "prog_attach");
+ assert_mprog_count(cg, atype, 1);
+
+ LIBBPF_OPTS_RESET(opta,
+ .flags = BPF_F_ALLOW_MULTI | BPF_F_REPLACE | BPF_F_AFTER,
+ .replace_prog_fd = fd1,
+ );
+
+ err = bpf_prog_attach_opts(fd1, cg, atype, &opta);
+ ASSERT_EQ(err, -EINVAL, "prog_attach");
+ assert_mprog_count(cg, atype, 1);
+cleanup:
+ cgroup_mprog__destroy(skel);
+ close(cg);
+}
+
+void test_cgroup_mprog_opts(void)
+{
+ if (test__start_subtest("prog_attach_detach"))
+ test_prog_attach_detach(BPF_CGROUP_GETSOCKOPT);
+ if (test__start_subtest("link_attach_detach"))
+ test_link_attach_detach(BPF_CGROUP_GETSOCKOPT);
+ if (test__start_subtest("mix_attach_detach"))
+ test_mix_attach_detach(BPF_CGROUP_GETSOCKOPT);
+ if (test__start_subtest("preorder_prog_attach_detach"))
+ test_preorder_prog_attach_detach(BPF_CGROUP_GETSOCKOPT);
+ if (test__start_subtest("preorder_link_attach_detach"))
+ test_preorder_link_attach_detach(BPF_CGROUP_GETSOCKOPT);
+ if (test__start_subtest("invalid_attach_detach"))
+ test_invalid_attach_detach(BPF_CGROUP_GETSOCKOPT);
+}
diff --git a/tools/testing/selftests/bpf/prog_tests/cgroup_mprog_ordering.c b/tools/testing/selftests/bpf/prog_tests/cgroup_mprog_ordering.c
new file mode 100644
index 000000000000..4a4e9710b474
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/cgroup_mprog_ordering.c
@@ -0,0 +1,77 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2025 Meta Platforms, Inc. and affiliates. */
+#include <test_progs.h>
+#include "cgroup_helpers.h"
+#include "cgroup_preorder.skel.h"
+
+static int run_getsockopt_test(int cg_parent, int sock_fd, bool has_relative_fd)
+{
+ LIBBPF_OPTS(bpf_prog_attach_opts, opts);
+ enum bpf_attach_type prog_p_atype, prog_p2_atype;
+ int prog_p_fd, prog_p2_fd;
+ struct cgroup_preorder *skel = NULL;
+ struct bpf_program *prog;
+ __u8 *result, buf;
+ socklen_t optlen;
+ int err = 0;
+
+ skel = cgroup_preorder__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "cgroup_preorder__open_and_load"))
+ return 0;
+
+ LIBBPF_OPTS_RESET(opts);
+ opts.flags = BPF_F_ALLOW_MULTI;
+ prog = skel->progs.parent;
+ prog_p_fd = bpf_program__fd(prog);
+ prog_p_atype = bpf_program__expected_attach_type(prog);
+ err = bpf_prog_attach_opts(prog_p_fd, cg_parent, prog_p_atype, &opts);
+ if (!ASSERT_OK(err, "bpf_prog_attach_opts-parent"))
+ goto close_skel;
+
+ opts.flags = BPF_F_ALLOW_MULTI | BPF_F_BEFORE;
+ if (has_relative_fd)
+ opts.relative_fd = prog_p_fd;
+ prog = skel->progs.parent_2;
+ prog_p2_fd = bpf_program__fd(prog);
+ prog_p2_atype = bpf_program__expected_attach_type(prog);
+ err = bpf_prog_attach_opts(prog_p2_fd, cg_parent, prog_p2_atype, &opts);
+ if (!ASSERT_OK(err, "bpf_prog_attach_opts-parent_2"))
+ goto detach_parent;
+
+ err = getsockopt(sock_fd, SOL_IP, IP_TOS, &buf, &optlen);
+ if (!ASSERT_OK(err, "getsockopt"))
+ goto detach_parent_2;
+
+ result = skel->bss->result;
+	ASSERT_TRUE(result[0] == 4 && result[1] == 3, "result values"); /* parent_2 (4) runs before parent (3) */
+
+detach_parent_2:
+ ASSERT_OK(bpf_prog_detach2(prog_p2_fd, cg_parent, prog_p2_atype),
+ "bpf_prog_detach2-parent_2");
+detach_parent:
+ ASSERT_OK(bpf_prog_detach2(prog_p_fd, cg_parent, prog_p_atype),
+ "bpf_prog_detach2-parent");
+close_skel:
+ cgroup_preorder__destroy(skel);
+ return err;
+}
+
+void test_cgroup_mprog_ordering(void)
+{
+ int cg_parent = -1, sock_fd = -1;
+
+ cg_parent = test__join_cgroup("/parent");
+ if (!ASSERT_GE(cg_parent, 0, "join_cgroup /parent"))
+ goto out;
+
+ sock_fd = socket(AF_INET, SOCK_STREAM, 0);
+ if (!ASSERT_GE(sock_fd, 0, "socket"))
+ goto out;
+
+ ASSERT_OK(run_getsockopt_test(cg_parent, sock_fd, false), "getsockopt_test_1");
+ ASSERT_OK(run_getsockopt_test(cg_parent, sock_fd, true), "getsockopt_test_2");
+
+out:
+ close(sock_fd);
+ close(cg_parent);
+}
diff --git a/tools/testing/selftests/bpf/progs/cgroup_mprog.c b/tools/testing/selftests/bpf/progs/cgroup_mprog.c
new file mode 100644
index 000000000000..6a0ea02c4de2
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/cgroup_mprog.c
@@ -0,0 +1,30 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2025 Meta Platforms, Inc. and affiliates. */
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+
+char _license[] SEC("license") = "GPL";
+
+SEC("cgroup/getsockopt")
+int getsockopt_1(struct bpf_sockopt *ctx)
+{
+ return 1;
+}
+
+SEC("cgroup/getsockopt")
+int getsockopt_2(struct bpf_sockopt *ctx)
+{
+ return 1;
+}
+
+SEC("cgroup/getsockopt")
+int getsockopt_3(struct bpf_sockopt *ctx)
+{
+ return 1;
+}
+
+SEC("cgroup/getsockopt")
+int getsockopt_4(struct bpf_sockopt *ctx)
+{
+ return 1;
+}
--
2.47.1
* Re: [PATCH bpf-next v3 2/5] bpf: Implement mprog API on top of existing cgroup progs
2025-05-17 16:27 ` [PATCH bpf-next v3 2/5] bpf: Implement mprog API on top of existing cgroup progs Yonghong Song
@ 2025-05-22 20:45 ` Andrii Nakryiko
2025-05-24 1:03 ` Yonghong Song
0 siblings, 1 reply; 10+ messages in thread
From: Andrii Nakryiko @ 2025-05-22 20:45 UTC (permalink / raw)
To: Yonghong Song
Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
kernel-team, Martin KaFai Lau
On Sat, May 17, 2025 at 9:27 AM Yonghong Song <yonghong.song@linux.dev> wrote:
>
> Current cgroup prog ordering is appending at attachment time. This is not
> ideal. In some cases, users want specific ordering at a particular cgroup
> level. To address this, the existing mprog API seems an ideal solution with
> supporting BPF_F_BEFORE and BPF_F_AFTER flags.
>
> But there are a few obstacles to directly use kernel mprog interface.
> Currently cgroup bpf progs already support prog attach/detach/replace
> and link-based attach/detach/replace. For example, in struct
> bpf_prog_array_item, the cgroup_storage field needs to be together
> with bpf prog. But the mprog API struct bpf_mprog_fp only has bpf_prog
> as the member, which makes it difficult to use kernel mprog interface.
>
> In another case, the current cgroup prog detach tries to use the
> same flag as in attach. This is different from mprog kernel interface
> which uses flags passed from user space.
>
> So to avoid modifying existing behavior, I made the following changes to
> support mprog API for cgroup progs:
> - The support is for prog list at cgroup level. Cross-level prog list
> (a.k.a. effective prog list) is not supported.
> - Previously, BPF_F_PREORDER is supported only for prog attach, now
> BPF_F_PREORDER is also supported by link-based attach.
> - For attach, BPF_F_BEFORE/BPF_F_AFTER/BPF_F_ID/BPF_F_LINK is supported
> similar to kernel mprog but with different implementation.
> - For detach and replace, use the existing implementation.
> - For attach, detach and replace, the revision for a particular prog
> list, associated with a particular attach type, will be updated
> by increasing count by 1.
>
> Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
> ---
> include/uapi/linux/bpf.h | 7 ++
> kernel/bpf/cgroup.c | 195 +++++++++++++++++++++++++++++----
> kernel/bpf/syscall.c | 43 +++++---
> tools/include/uapi/linux/bpf.h | 7 ++
> 4 files changed, 214 insertions(+), 38 deletions(-)
>
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 16e95398c91c..356cd2b185fb 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -1794,6 +1794,13 @@ union bpf_attr {
> };
> __u64 expected_revision;
> } netkit;
> + struct {
> + union {
> + __u32 relative_fd;
> + __u32 relative_id;
> + };
> + __u64 expected_revision;
> + } cgroup;
> };
> } link_create;
>
> diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
> index 62a1d8deb3dc..78e6fc70b8f9 100644
> --- a/kernel/bpf/cgroup.c
> +++ b/kernel/bpf/cgroup.c
> @@ -624,6 +624,129 @@ static struct bpf_prog_list *find_attach_entry(struct hlist_head *progs,
> return NULL;
> }
>
> +static struct bpf_link *bpf_get_anchor_link(u32 flags, u32 id_or_fd, enum bpf_prog_type type)
> +{
> + struct bpf_link *link = ERR_PTR(-EINVAL);
> +
> + if (flags & BPF_F_ID)
> + link = bpf_link_by_id(id_or_fd);
> + else if (id_or_fd)
> + link = bpf_link_get_from_fd(id_or_fd);
> + if (IS_ERR(link))
> + return link;
> + if (type && link->prog->type != type) {
> + bpf_link_put(link);
> + return ERR_PTR(-EINVAL);
> + }
> +
> + return link;
> +}
> +
> +static struct bpf_prog *bpf_get_anchor_prog(u32 flags, u32 id_or_fd, enum bpf_prog_type type)
> +{
> + struct bpf_prog *prog = ERR_PTR(-EINVAL);
> +
> + if (flags & BPF_F_ID)
> + prog = bpf_prog_by_id(id_or_fd);
> + else if (id_or_fd)
> + prog = bpf_prog_get(id_or_fd);
> + if (IS_ERR(prog))
> + return prog;
> + if (type && prog->type != type) {
> + bpf_prog_put(prog);
> + return ERR_PTR(-EINVAL);
> + }
> +
> + return prog;
> +}
> +
> +static struct bpf_prog_list *get_prog_list(struct hlist_head *progs, struct bpf_prog *prog,
> + u32 flags, u32 id_or_fd)
> +{
> + bool link = flags & BPF_F_LINK, id = flags & BPF_F_ID;
> + struct bpf_prog *anchor_prog = NULL, *pltmp_prog;
> + bool preorder = flags & BPF_F_PREORDER;
> + struct bpf_link *anchor_link = NULL;
> + struct bpf_prog_list *pltmp;
> + int ret = -EINVAL;
> +
> + if (link || id || id_or_fd) {
please, use "is_id" to make it obvious that this is bool, it's very
confusing to see "id || id_or_fd"
same for is_link, please
> + /* flags must have either BPF_F_BEFORE or BPF_F_AFTER */
> + if (!(flags & BPF_F_BEFORE) != !!(flags & BPF_F_AFTER))
either/or here means exclusive or inclusive?
if it's inclusive: if (flags & (BPF_F_BEFORE | BPF_F_AFTER)) should be
enough to check that at least one of them is set
if exclusive, below you use a different style of checking (which
arguably is easier to follow), so let's stay consistent
I got to say that my brain broke trying to reason about this pattern:
if (!(...) != !!(...))
Way too many exclamations/negations, IMO... I'm not sure what sort of
condition we are expressing here?
pw-bot: cr
> + return ERR_PTR(-EINVAL);
> + } else if (!hlist_empty(progs)) {
> + /* flags cannot have both BPF_F_BEFORE and BPF_F_AFTER */
> + if ((flags & BPF_F_BEFORE) && (flags & BPF_F_AFTER))
> + return ERR_PTR(-EINVAL);
do I understand correctly that neither BEFORE or AFTER might be set,
in which case it must be BPF_F_REPLACE, is that right? Can it happen
that we have neither REPLACE nor BEFORE/AFTER? Asked that below as
well...
> + }
> +
> + if (link) {
> + anchor_link = bpf_get_anchor_link(flags, id_or_fd, prog->type);
> + if (IS_ERR(anchor_link))
> + return ERR_PTR(PTR_ERR(anchor_link));
> + anchor_prog = anchor_link->prog;
> + } else if (id || id_or_fd) {
> + anchor_prog = bpf_get_anchor_prog(flags, id_or_fd, prog->type);
> + if (IS_ERR(anchor_prog))
> + return ERR_PTR(PTR_ERR(anchor_prog));
> + }
> +
> + if (!anchor_prog) {
> + /* if there is no anchor_prog, then BPF_F_PREORDER doesn't matter
> + * since either prepend or append to a combined list of progs will
> + * end up with correct result.
> + */
> + hlist_for_each_entry(pltmp, progs, node) {
> + if (flags & BPF_F_BEFORE)
> + return pltmp;
> + if (pltmp->node.next)
> + continue;
> + return pltmp;
> + }
> + return NULL;
> + }
> +
> + hlist_for_each_entry(pltmp, progs, node) {
> + pltmp_prog = pltmp->link ? pltmp->link->link.prog : pltmp->prog;
> + if (pltmp_prog != anchor_prog)
> + continue;
> + if (!!(pltmp->flags & BPF_F_PREORDER) != preorder)
> + goto out;
hm... thinking about this a bit more, is it illegal to have the same
BPF program attached as PREORDER and POSTORDER? That seems legit to
me, do we artificially disallow this?
And so my proposal is instead of `goto out;` do `continue;` and write
this loop as searching for an item and then checking whether that item
was found after the loop.
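Untested sketch of what I mean, reusing the declarations from your patch:

	hlist_for_each_entry(pltmp, progs, node) {
		pltmp_prog = pltmp->link ? pltmp->link->link.prog : pltmp->prog;
		if (pltmp_prog == anchor_prog &&
		    !!(pltmp->flags & BPF_F_PREORDER) == preorder)
			break;
	}
	/* hlist_for_each_entry() leaves pltmp NULL if nothing matched */
	if (anchor_link)
		bpf_link_put(anchor_link);
	else
		bpf_prog_put(anchor_prog);
	return pltmp ?: ERR_PTR(-ENOENT);

That way there is a single cleanup path, and a preorder mismatch simply
means "not found" instead of -EINVAL.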
> + if (anchor_link)
> + bpf_link_put(anchor_link);
> + else
> + bpf_prog_put(anchor_prog);
and this duplicated cleanup would be best to avoid, given it's not
just a singular bpf_prog_put()...
> + return pltmp;
> + }
> +
> + ret = -ENOENT;
> +out:
> + if (anchor_link)
> + bpf_link_put(anchor_link);
> + else
> + bpf_prog_put(anchor_prog);
> + return ERR_PTR(ret);
> +}
> +
> +static int insert_pl_to_hlist(struct bpf_prog_list *pl, struct hlist_head *progs,
> + struct bpf_prog *prog, u32 flags, u32 id_or_fd)
> +{
> + struct bpf_prog_list *pltmp;
> +
> + pltmp = get_prog_list(progs, prog, flags, id_or_fd);
> + if (IS_ERR(pltmp))
> + return PTR_ERR(pltmp);
> +
> + if (!pltmp)
> + hlist_add_head(&pl->node, progs);
> + else if (flags & BPF_F_BEFORE)
> + hlist_add_before(&pl->node, &pltmp->node);
> + else
> + hlist_add_behind(&pl->node, &pltmp->node);
> +
> + return 0;
> +}
> +
> /**
> * __cgroup_bpf_attach() - Attach the program or the link to a cgroup, and
> * propagate the change to descendants
> @@ -633,6 +756,8 @@ static struct bpf_prog_list *find_attach_entry(struct hlist_head *progs,
> * @replace_prog: Previously attached program to replace if BPF_F_REPLACE is set
> * @type: Type of attach operation
> * @flags: Option flags
> + * @id_or_fd: Relative prog id or fd
> + * @revision: bpf_prog_list revision
> *
> * Exactly one of @prog or @link can be non-null.
> * Must be called with cgroup_mutex held.
> @@ -640,7 +765,8 @@ static struct bpf_prog_list *find_attach_entry(struct hlist_head *progs,
> static int __cgroup_bpf_attach(struct cgroup *cgrp,
> struct bpf_prog *prog, struct bpf_prog *replace_prog,
> struct bpf_cgroup_link *link,
> - enum bpf_attach_type type, u32 flags)
> + enum bpf_attach_type type, u32 flags, u32 id_or_fd,
> + u64 revision)
> {
> u32 saved_flags = (flags & (BPF_F_ALLOW_OVERRIDE | BPF_F_ALLOW_MULTI));
> struct bpf_prog *old_prog = NULL;
> @@ -656,6 +782,9 @@ static int __cgroup_bpf_attach(struct cgroup *cgrp,
> ((flags & BPF_F_REPLACE) && !(flags & BPF_F_ALLOW_MULTI)))
> /* invalid combination */
> return -EINVAL;
> + if ((flags & BPF_F_REPLACE) && (flags & (BPF_F_BEFORE | BPF_F_AFTER)))
but can it be that neither is set?
> + /* only either replace or insertion with before/after */
> + return -EINVAL;
> if (link && (prog || replace_prog))
> /* only either link or prog/replace_prog can be specified */
> return -EINVAL;
[...]
* Re: [PATCH bpf-next v3 2/5] bpf: Implement mprog API on top of existing cgroup progs
2025-05-22 20:45 ` Andrii Nakryiko
@ 2025-05-24 1:03 ` Yonghong Song
2025-05-27 21:36 ` Andrii Nakryiko
0 siblings, 1 reply; 10+ messages in thread
From: Yonghong Song @ 2025-05-24 1:03 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
kernel-team, Martin KaFai Lau
On 5/22/25 1:45 PM, Andrii Nakryiko wrote:
> On Sat, May 17, 2025 at 9:27 AM Yonghong Song <yonghong.song@linux.dev> wrote:
>> Current cgroup prog ordering is appending at attachment time. This is not
>> ideal. In some cases, users want specific ordering at a particular cgroup
>> level. To address this, the existing mprog API seems an ideal solution with
>> supporting BPF_F_BEFORE and BPF_F_AFTER flags.
>>
>> But there are a few obstacles to directly use kernel mprog interface.
>> Currently cgroup bpf progs already support prog attach/detach/replace
>> and link-based attach/detach/replace. For example, in struct
>> bpf_prog_array_item, the cgroup_storage field needs to be together
>> with bpf prog. But the mprog API struct bpf_mprog_fp only has bpf_prog
>> as the member, which makes it difficult to use kernel mprog interface.
>>
>> In another case, the current cgroup prog detach tries to use the
>> same flag as in attach. This is different from mprog kernel interface
>> which uses flags passed from user space.
>>
>> So to avoid modifying existing behavior, I made the following changes to
>> support mprog API for cgroup progs:
>> - The support is for prog list at cgroup level. Cross-level prog list
>> (a.k.a. effective prog list) is not supported.
>> - Previously, BPF_F_PREORDER is supported only for prog attach, now
>> BPF_F_PREORDER is also supported by link-based attach.
>> - For attach, BPF_F_BEFORE/BPF_F_AFTER/BPF_F_ID/BPF_F_LINK is supported
>> similar to kernel mprog but with different implementation.
>> - For detach and replace, use the existing implementation.
>> - For attach, detach and replace, the revision for a particular prog
>> list, associated with a particular attach type, will be updated
>> by increasing count by 1.
>>
>> Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
>> ---
>> include/uapi/linux/bpf.h | 7 ++
>> kernel/bpf/cgroup.c | 195 +++++++++++++++++++++++++++++----
>> kernel/bpf/syscall.c | 43 +++++---
>> tools/include/uapi/linux/bpf.h | 7 ++
>> 4 files changed, 214 insertions(+), 38 deletions(-)
>>
>> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
>> index 16e95398c91c..356cd2b185fb 100644
>> --- a/include/uapi/linux/bpf.h
>> +++ b/include/uapi/linux/bpf.h
>> @@ -1794,6 +1794,13 @@ union bpf_attr {
>> };
>> __u64 expected_revision;
>> } netkit;
>> + struct {
>> + union {
>> + __u32 relative_fd;
>> + __u32 relative_id;
>> + };
>> + __u64 expected_revision;
>> + } cgroup;
>> };
>> } link_create;
>>
>> diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
>> index 62a1d8deb3dc..78e6fc70b8f9 100644
>> --- a/kernel/bpf/cgroup.c
>> +++ b/kernel/bpf/cgroup.c
>> @@ -624,6 +624,129 @@ static struct bpf_prog_list *find_attach_entry(struct hlist_head *progs,
>> return NULL;
>> }
>>
>> +static struct bpf_link *bpf_get_anchor_link(u32 flags, u32 id_or_fd, enum bpf_prog_type type)
>> +{
>> + struct bpf_link *link = ERR_PTR(-EINVAL);
>> +
>> + if (flags & BPF_F_ID)
>> + link = bpf_link_by_id(id_or_fd);
>> + else if (id_or_fd)
>> + link = bpf_link_get_from_fd(id_or_fd);
>> + if (IS_ERR(link))
>> + return link;
>> + if (type && link->prog->type != type) {
>> + bpf_link_put(link);
>> + return ERR_PTR(-EINVAL);
>> + }
>> +
>> + return link;
>> +}
>> +
>> +static struct bpf_prog *bpf_get_anchor_prog(u32 flags, u32 id_or_fd, enum bpf_prog_type type)
>> +{
>> + struct bpf_prog *prog = ERR_PTR(-EINVAL);
>> +
>> + if (flags & BPF_F_ID)
>> + prog = bpf_prog_by_id(id_or_fd);
>> + else if (id_or_fd)
>> + prog = bpf_prog_get(id_or_fd);
>> + if (IS_ERR(prog))
>> + return prog;
>> + if (type && prog->type != type) {
>> + bpf_prog_put(prog);
>> + return ERR_PTR(-EINVAL);
>> + }
>> +
>> + return prog;
>> +}
>> +
>> +static struct bpf_prog_list *get_prog_list(struct hlist_head *progs, struct bpf_prog *prog,
>> + u32 flags, u32 id_or_fd)
>> +{
>> + bool link = flags & BPF_F_LINK, id = flags & BPF_F_ID;
>> + struct bpf_prog *anchor_prog = NULL, *pltmp_prog;
>> + bool preorder = flags & BPF_F_PREORDER;
>> + struct bpf_link *anchor_link = NULL;
>> + struct bpf_prog_list *pltmp;
>> + int ret = -EINVAL;
>> +
>> + if (link || id || id_or_fd) {
> please, use "is_id" to make it obvious that this is bool, it's very
> confusing to see "id || id_or_fd"
>
> same for is_link, please
Okay, I was following the existing mprog.c code, shown below:
====
static int bpf_mprog_link(struct bpf_tuple *tuple,
u32 id_or_fd, u32 flags,
enum bpf_prog_type type)
{
struct bpf_link *link = ERR_PTR(-EINVAL);
bool id = flags & BPF_F_ID;
if (id)
link = bpf_link_by_id(id_or_fd);
else if (id_or_fd)
link = bpf_link_get_from_fd(id_or_fd);
====
But agree is_id/is_link is more clear.
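i.e., with the renaming it would read:
====
	bool is_link = flags & BPF_F_LINK, is_id = flags & BPF_F_ID;

	if (is_link || is_id || id_or_fd) {
====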
>
>> + /* flags must have either BPF_F_BEFORE or BPF_F_AFTER */
>> + if (!(flags & BPF_F_BEFORE) != !!(flags & BPF_F_AFTER))
> either/or here means exclusive or inclusive?
>
> if it's inclusive: if (flags & (BPF_F_BEFORE | BPF_F_AFTER)) should be
> enough to check that at least one of them is set
>
> if exclusive, below you use a different style of checking (which
> arguably is easier to follow), so let's stay consistent
>
>
> I got to say that my brain broke trying to reason about this pattern:
>
> if (!(...) != !!(...))
>
> Way too many exclamations/negations, IMO... I'm not sure what sort of
> condition we are expressing here?
Sorry for the confusion. What I meant is 'exclusive'. I guess I can do:
bool is_before = flags & BPF_F_BEFORE;
bool is_after = flags & BPF_F_AFTER;
if (is_link || is_id || id_or_fd) {
if (is_before == is_after)
return ERR_PTR(-EINVAL);
} else if (!hlist_empty(progs)) {
if (is_before && is_after)
return ERR_PTR(-EINVAL);
...
}
>
> pw-bot: cr
>
>> + return ERR_PTR(-EINVAL);
>> + } else if (!hlist_empty(progs)) {
>> + /* flags cannot have both BPF_F_BEFORE and BPF_F_AFTER */
>> + if ((flags & BPF_F_BEFORE) && (flags & BPF_F_AFTER))
>> + return ERR_PTR(-EINVAL);
> do I understand correctly that neither BEFORE or AFTER might be set,
> in which case it must be BPF_F_REPLACE, is that right? Can it happen
> that we have neither REPLACE nor BEFORE/AFTER? Asked that below as
> well...
I think 'neither REPLACE nor BEFORE/AFTER' is possible. In that case,
the prog is appended to the prog list.
The code path here should not have REPLACE. See the code
if (pl) {
old_prog = pl->prog;
} else {
pl = kmalloc(sizeof(*pl), GFP_KERNEL);
if (!pl) {
bpf_cgroup_storages_free(new_storage);
return -ENOMEM;
}
err = insert_pl_to_hlist(pl, progs, prog ? : link->link.prog, flags, id_or_fd);
if (err) {
kfree(pl);
bpf_cgroup_storages_free(new_storage);
return err;
}
}
If REPLACE is set in the flags and the prog replacement is successful,
'pl' will not be NULL.
>
>> + }
>> +
>> + if (link) {
>> + anchor_link = bpf_get_anchor_link(flags, id_or_fd, prog->type);
>> + if (IS_ERR(anchor_link))
>> + return ERR_PTR(PTR_ERR(anchor_link));
>> + anchor_prog = anchor_link->prog;
>> + } else if (id || id_or_fd) {
>> + anchor_prog = bpf_get_anchor_prog(flags, id_or_fd, prog->type);
>> + if (IS_ERR(anchor_prog))
>> + return ERR_PTR(PTR_ERR(anchor_prog));
>> + }
>> +
>> + if (!anchor_prog) {
>> + /* if there is no anchor_prog, then BPF_F_PREORDER doesn't matter
>> + * since either prepend or append to a combined list of progs will
>> + * end up with correct result.
>> + */
>> + hlist_for_each_entry(pltmp, progs, node) {
>> + if (flags & BPF_F_BEFORE)
>> + return pltmp;
>> + if (pltmp->node.next)
>> + continue;
>> + return pltmp;
>> + }
>> + return NULL;
>> + }
>> +
>> + hlist_for_each_entry(pltmp, progs, node) {
>> + pltmp_prog = pltmp->link ? pltmp->link->link.prog : pltmp->prog;
>> + if (pltmp_prog != anchor_prog)
>> + continue;
>> + if (!!(pltmp->flags & BPF_F_PREORDER) != preorder)
>> + goto out;
> hm... thinking about this a bit more, is it illegal to have the same
> BPF program attached as PREORDER and POSTORDER? That seems legit to
> me, do we artificially disallow this?
Good question, in find_attach_entry(), we have
hlist_for_each_entry(pl, progs, node) {
if (prog && pl->prog == prog && prog != replace_prog)
/* disallow attaching the same prog twice */
return ERR_PTR(-EINVAL);
if (link && pl->link == link)
/* disallow attaching the same link twice */
return ERR_PTR(-EINVAL);
}
Basically, attaching the same prog twice is not allowed, and the PREORDER
flag is not checked here. Should we relax this for this patch set?
>
> And so my proposal is instead of `goto out;` do `continue;` and write
> this loop as searching for an item and then checking whether that item
> was found after the loop.
>
>> + if (anchor_link)
>> + bpf_link_put(anchor_link);
>> + else
>> + bpf_prog_put(anchor_prog);
> and this duplicated cleanup would be best to avoid, given it's not
> just a singular bpf_prog_put()...
Will do.
>
>> + return pltmp;
>> + }
>> +
>> + ret = -ENOENT;
>> +out:
>> + if (anchor_link)
>> + bpf_link_put(anchor_link);
>> + else
>> + bpf_prog_put(anchor_prog);
>> + return ERR_PTR(ret);
>> +}
>> +
>> +static int insert_pl_to_hlist(struct bpf_prog_list *pl, struct hlist_head *progs,
>> + struct bpf_prog *prog, u32 flags, u32 id_or_fd)
>> +{
>> + struct bpf_prog_list *pltmp;
>> +
>> + pltmp = get_prog_list(progs, prog, flags, id_or_fd);
>> + if (IS_ERR(pltmp))
>> + return PTR_ERR(pltmp);
>> +
>> + if (!pltmp)
>> + hlist_add_head(&pl->node, progs);
>> + else if (flags & BPF_F_BEFORE)
>> + hlist_add_before(&pl->node, &pltmp->node);
>> + else
>> + hlist_add_behind(&pl->node, &pltmp->node);
>> +
>> + return 0;
>> +}
>> +
>> /**
>> * __cgroup_bpf_attach() - Attach the program or the link to a cgroup, and
>> * propagate the change to descendants
>> @@ -633,6 +756,8 @@ static struct bpf_prog_list *find_attach_entry(struct hlist_head *progs,
>> * @replace_prog: Previously attached program to replace if BPF_F_REPLACE is set
>> * @type: Type of attach operation
>> * @flags: Option flags
>> + * @id_or_fd: Relative prog id or fd
>> + * @revision: bpf_prog_list revision
>> *
>> * Exactly one of @prog or @link can be non-null.
>> * Must be called with cgroup_mutex held.
>> @@ -640,7 +765,8 @@ static struct bpf_prog_list *find_attach_entry(struct hlist_head *progs,
>> static int __cgroup_bpf_attach(struct cgroup *cgrp,
>> struct bpf_prog *prog, struct bpf_prog *replace_prog,
>> struct bpf_cgroup_link *link,
>> - enum bpf_attach_type type, u32 flags)
>> + enum bpf_attach_type type, u32 flags, u32 id_or_fd,
>> + u64 revision)
>> {
>> u32 saved_flags = (flags & (BPF_F_ALLOW_OVERRIDE | BPF_F_ALLOW_MULTI));
>> struct bpf_prog *old_prog = NULL;
>> @@ -656,6 +782,9 @@ static int __cgroup_bpf_attach(struct cgroup *cgrp,
>> ((flags & BPF_F_REPLACE) && !(flags & BPF_F_ALLOW_MULTI)))
>> /* invalid combination */
>> return -EINVAL;
>> + if ((flags & BPF_F_REPLACE) && (flags & (BPF_F_BEFORE | BPF_F_AFTER)))
> but can it be that neither is set?
I would say it is possible. In that case, the new prog is appended to
the end of the prog list.
>
>> + /* only either replace or insertion with before/after */
>> + return -EINVAL;
>> if (link && (prog || replace_prog))
>> /* only either link or prog/replace_prog can be specified */
>> return -EINVAL;
> [...]
* Re: [PATCH bpf-next v3 2/5] bpf: Implement mprog API on top of existing cgroup progs
2025-05-24 1:03 ` Yonghong Song
@ 2025-05-27 21:36 ` Andrii Nakryiko
2025-05-28 17:23 ` Yonghong Song
0 siblings, 1 reply; 10+ messages in thread
From: Andrii Nakryiko @ 2025-05-27 21:36 UTC (permalink / raw)
To: Yonghong Song
Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
kernel-team, Martin KaFai Lau
On Fri, May 23, 2025 at 6:04 PM Yonghong Song <yonghong.song@linux.dev> wrote:
>
>
>
> On 5/22/25 1:45 PM, Andrii Nakryiko wrote:
> > On Sat, May 17, 2025 at 9:27 AM Yonghong Song <yonghong.song@linux.dev> wrote:
> >> Current cgroup prog ordering is appending at attachment time. This is not
> >> ideal. In some cases, users want specific ordering at a particular cgroup
> >> level. To address this, the existing mprog API seems an ideal solution with
> >> supporting BPF_F_BEFORE and BPF_F_AFTER flags.
> >>
> >> But there are a few obstacles to directly use kernel mprog interface.
> >> Currently cgroup bpf progs already support prog attach/detach/replace
> >> and link-based attach/detach/replace. For example, in struct
> >> bpf_prog_array_item, the cgroup_storage field needs to be together
> >> with bpf prog. But the mprog API struct bpf_mprog_fp only has bpf_prog
> >> as the member, which makes it difficult to use kernel mprog interface.
> >>
> >> In another case, the current cgroup prog detach tries to use the
> >> same flag as in attach. This is different from mprog kernel interface
> >> which uses flags passed from user space.
> >>
> >> So to avoid modifying existing behavior, I made the following changes to
> >> support mprog API for cgroup progs:
> >> - The support is for prog list at cgroup level. Cross-level prog list
> >> (a.k.a. effective prog list) is not supported.
> >> - Previously, BPF_F_PREORDER is supported only for prog attach, now
> >> BPF_F_PREORDER is also supported by link-based attach.
> >> - For attach, BPF_F_BEFORE/BPF_F_AFTER/BPF_F_ID/BPF_F_LINK is supported
> >> similar to kernel mprog but with different implementation.
> >> - For detach and replace, use the existing implementation.
> >> - For attach, detach and replace, the revision for a particular prog
> >> list, associated with a particular attach type, will be updated
> >> by increasing count by 1.
> >>
> >> Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
> >> ---
> >> include/uapi/linux/bpf.h | 7 ++
> >> kernel/bpf/cgroup.c | 195 +++++++++++++++++++++++++++++----
> >> kernel/bpf/syscall.c | 43 +++++---
> >> tools/include/uapi/linux/bpf.h | 7 ++
> >> 4 files changed, 214 insertions(+), 38 deletions(-)
> >>
[...]
> >> +
> >> + if (link || id || id_or_fd) {
> > please, use "is_id" to make it obvious that this is bool, it's very
> > confusing to see "id || id_or_fd"
> >
> > same for is_link, please
>
> Okay, I was following the existing mprog.c code, shown below:
>
> ====
> static int bpf_mprog_link(struct bpf_tuple *tuple,
> u32 id_or_fd, u32 flags,
> enum bpf_prog_type type)
> {
> struct bpf_link *link = ERR_PTR(-EINVAL);
> bool id = flags & BPF_F_ID;
>
> if (id)
> link = bpf_link_by_id(id_or_fd);
> else if (id_or_fd)
> link = bpf_link_get_from_fd(id_or_fd);
> ====
>
> But agree is_id/is_link is more clear.
yeah, existing code is also confusing :)
>
> >
> >> + /* flags must have either BPF_F_BEFORE or BPF_F_AFTER */
> >> + if (!(flags & BPF_F_BEFORE) != !!(flags & BPF_F_AFTER))
> > either/or here means exclusive or inclusive?
> >
> > if it's inclusive: if (flags & (BPF_F_BEFORE | BPF_F_AFTER)) should be
> > enough to check that at least one of them is set
> >
> > if exclusive, below you use a different style of checking (which
> > arguably is easier to follow), so let's stay consistent
> >
> >
> > I got to say that my brain broke trying to reason about this pattern:
> >
> > if (!(...) != !!(...))
Note how you used singular ! in the left condition. Probably a typo,
but I wasn't sure if that was intentional.
> >
> > Way too many exclamations/negations, IMO... I'm not sure what sort of
> > condition we are expressing here?
>
> Sorry for the confusion. What I meant is 'exclusive'. I guess I can do:
It's fine to use a sort-of-standard pattern of !!(...) != !!(...),
just make sure to use the right amount of !!
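For the exclusive check here that would be:

	if (!!(flags & BPF_F_BEFORE) == !!(flags & BPF_F_AFTER))
		return ERR_PTR(-EINVAL);

i.e. bail out when both or neither of the two flags is set.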
>
> bool is_before = flags & BPF_F_BEFORE;
> bool is_after = flags & BPF_F_AFTER;
> if (is_link || is_id || id_or_fd) {
> if (is_before == is_after)
> return ERR_PTR(-EINVAL);
> } else if (!hlist_empty(progs)) {
> if (is_before && is_after)
> return ERR_PTR(-EINVAL);
> ...
> }
>
> >
> > pw-bot: cr
> >
> >> + return ERR_PTR(-EINVAL);
> >> + } else if (!hlist_empty(progs)) {
> >> + /* flags cannot have both BPF_F_BEFORE and BPF_F_AFTER */
> >> + if ((flags & BPF_F_BEFORE) && (flags & BPF_F_AFTER))
> >> + return ERR_PTR(-EINVAL);
> > do I understand correctly that neither BEFORE or AFTER might be set,
> > in which case it must be BPF_F_REPLACE, is that right? Can it happen
> > that we have neither REPLACE nor BEFORE/AFTER? Asked that below as
> > well...
>
> I think 'neither REPLACE nor BEFORE/AFTER' is possible. In that case,
> the prog is appended to the prog list.
>
> The code path here should not have REPLACE. See the code
>
> if (pl) {
> old_prog = pl->prog;
> } else {
> pl = kmalloc(sizeof(*pl), GFP_KERNEL);
> if (!pl) {
> bpf_cgroup_storages_free(new_storage);
> return -ENOMEM;
> }
>
> err = insert_pl_to_hlist(pl, progs, prog ? : link->link.prog, flags, id_or_fd);
> if (err) {
> kfree(pl);
> bpf_cgroup_storages_free(new_storage);
> return err;
> }
> }
>
> If REPLACE is set in the flags and the prog replacement is successful,
> 'pl' will not be NULL.
>
ok, thanks
> >
> >> + }
> >> +
> >> + if (link) {
> >> + anchor_link = bpf_get_anchor_link(flags, id_or_fd, prog->type);
> >> + if (IS_ERR(anchor_link))
> >> + return ERR_PTR(PTR_ERR(anchor_link));
> >> + anchor_prog = anchor_link->prog;
> >> + } else if (id || id_or_fd) {
> >> + anchor_prog = bpf_get_anchor_prog(flags, id_or_fd, prog->type);
> >> + if (IS_ERR(anchor_prog))
> >> + return ERR_PTR(PTR_ERR(anchor_prog));
> >> + }
> >> +
> >> + if (!anchor_prog) {
> >> + /* if there is no anchor_prog, then BPF_F_PREORDER doesn't matter
> >> + * since either prepend or append to a combined list of progs will
> >> + * end up with correct result.
> >> + */
> >> + hlist_for_each_entry(pltmp, progs, node) {
> >> + if (flags & BPF_F_BEFORE)
> >> + return pltmp;
> >> + if (pltmp->node.next)
> >> + continue;
> >> + return pltmp;
> >> + }
> >> + return NULL;
> >> + }
> >> +
> >> + hlist_for_each_entry(pltmp, progs, node) {
> >> + pltmp_prog = pltmp->link ? pltmp->link->link.prog : pltmp->prog;
> >> + if (pltmp_prog != anchor_prog)
> >> + continue;
> >> + if (!!(pltmp->flags & BPF_F_PREORDER) != preorder)
> >> + goto out;
> > hm... thinking about this a bit more, is it illegal to have the same
> > BPF program attached as PREORDER and POSTORDER? That seems legit to
> > me, do we artificially disallow this?
>
> Good question, in find_attach_entry(), we have
>
> hlist_for_each_entry(pl, progs, node) {
> if (prog && pl->prog == prog && prog != replace_prog)
> /* disallow attaching the same prog twice */
> return ERR_PTR(-EINVAL);
> if (link && pl->link == link)
> /* disallow attaching the same link twice */
> return ERR_PTR(-EINVAL);
> }
>
> Basically, attaching the same prog twice is not allowed, and the PREORDER
> flag is not checked here. Should we relax this for this patch set?
So with BPF link-based attachment we already allow multiple same BPF
programs to be attached, regardless of PREORDER. I don't think we need
to make any relaxations for old-style program attachment. But your
mprog API is effectively ignoring link stuff and checking for
uniqueness of the program (regardless of whether it came from link or
not), which is problematic, I think, no?
[...]
> >> @@ -640,7 +765,8 @@ static struct bpf_prog_list *find_attach_entry(struct hlist_head *progs,
> >> static int __cgroup_bpf_attach(struct cgroup *cgrp,
> >> struct bpf_prog *prog, struct bpf_prog *replace_prog,
> >> struct bpf_cgroup_link *link,
> >> - enum bpf_attach_type type, u32 flags)
> >> + enum bpf_attach_type type, u32 flags, u32 id_or_fd,
> >> + u64 revision)
> >> {
> >> u32 saved_flags = (flags & (BPF_F_ALLOW_OVERRIDE | BPF_F_ALLOW_MULTI));
> >> struct bpf_prog *old_prog = NULL;
> >> @@ -656,6 +782,9 @@ static int __cgroup_bpf_attach(struct cgroup *cgrp,
> >> ((flags & BPF_F_REPLACE) && !(flags & BPF_F_ALLOW_MULTI)))
> >> /* invalid combination */
> >> return -EINVAL;
> >> + if ((flags & BPF_F_REPLACE) && (flags & (BPF_F_BEFORE | BPF_F_AFTER)))
> > but can it be that neither is set?
>
> I would say it is possible. In that case, the new prog is appended to
> the end of the prog list.
ok, makes sense
>
> >
> >> + /* only either replace or insertion with before/after */
> >> + return -EINVAL;
> >> if (link && (prog || replace_prog))
> >> /* only either link or prog/replace_prog can be specified */
> >> return -EINVAL;
> > [...]
>
* Re: [PATCH bpf-next v3 2/5] bpf: Implement mprog API on top of existing cgroup progs
2025-05-27 21:36 ` Andrii Nakryiko
@ 2025-05-28 17:23 ` Yonghong Song
0 siblings, 0 replies; 10+ messages in thread
From: Yonghong Song @ 2025-05-28 17:23 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
kernel-team, Martin KaFai Lau
On 5/27/25 2:36 PM, Andrii Nakryiko wrote:
> On Fri, May 23, 2025 at 6:04 PM Yonghong Song <yonghong.song@linux.dev> wrote:
>>
>>
>> On 5/22/25 1:45 PM, Andrii Nakryiko wrote:
>>> On Sat, May 17, 2025 at 9:27 AM Yonghong Song <yonghong.song@linux.dev> wrote:
>>>> Current cgroup prog ordering is appending at attachment time. This is not
>>>> ideal. In some cases, users want specific ordering at a particular cgroup
>>>> level. To address this, the existing mprog API seems an ideal solution with
>>>> supporting BPF_F_BEFORE and BPF_F_AFTER flags.
>>>>
>>>> But there are a few obstacles to directly use kernel mprog interface.
>>>> Currently cgroup bpf progs already support prog attach/detach/replace
>>>> and link-based attach/detach/replace. For example, in struct
>>>> bpf_prog_array_item, the cgroup_storage field needs to be together
>>>> with bpf prog. But the mprog API struct bpf_mprog_fp only has bpf_prog
>>>> as the member, which makes it difficult to use kernel mprog interface.
>>>>
>>>> In another case, the current cgroup prog detach tries to use the
>>>> same flag as in attach. This is different from mprog kernel interface
>>>> which uses flags passed from user space.
>>>>
>>>> So to avoid modifying existing behavior, I made the following changes to
>>>> support mprog API for cgroup progs:
>>>> - The support is for prog list at cgroup level. Cross-level prog list
>>>> (a.k.a. effective prog list) is not supported.
>>>> - Previously, BPF_F_PREORDER is supported only for prog attach, now
>>>> BPF_F_PREORDER is also supported by link-based attach.
>>>> - For attach, BPF_F_BEFORE/BPF_F_AFTER/BPF_F_ID/BPF_F_LINK is supported
>>>> similar to kernel mprog but with different implementation.
>>>> - For detach and replace, use the existing implementation.
>>>> - For attach, detach and replace, the revision for a particular prog
>>>> list, associated with a particular attach type, will be updated
>>>> by increasing count by 1.
>>>>
>>>> Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
>>>> ---
>>>> include/uapi/linux/bpf.h | 7 ++
>>>> kernel/bpf/cgroup.c | 195 +++++++++++++++++++++++++++++----
>>>> kernel/bpf/syscall.c | 43 +++++---
>>>> tools/include/uapi/linux/bpf.h | 7 ++
>>>> 4 files changed, 214 insertions(+), 38 deletions(-)
>>>>
[...]
>>>> + }
>>>> +
>>>> + if (link) {
>>>> + anchor_link = bpf_get_anchor_link(flags, id_or_fd, prog->type);
>>>> + if (IS_ERR(anchor_link))
>>>> + return ERR_PTR(PTR_ERR(anchor_link));
>>>> + anchor_prog = anchor_link->prog;
>>>> + } else if (id || id_or_fd) {
>>>> + anchor_prog = bpf_get_anchor_prog(flags, id_or_fd, prog->type);
>>>> + if (IS_ERR(anchor_prog))
>>>> + return ERR_PTR(PTR_ERR(anchor_prog));
>>>> + }
>>>> +
>>>> + if (!anchor_prog) {
>>>> + /* if there is no anchor_prog, then BPF_F_PREORDER doesn't matter
>>>> + * since either prepend or append to a combined list of progs will
>>>> + * end up with correct result.
>>>> + */
>>>> + hlist_for_each_entry(pltmp, progs, node) {
>>>> + if (flags & BPF_F_BEFORE)
>>>> + return pltmp;
>>>> + if (pltmp->node.next)
>>>> + continue;
>>>> + return pltmp;
>>>> + }
>>>> + return NULL;
>>>> + }
>>>> +
>>>> + hlist_for_each_entry(pltmp, progs, node) {
>>>> + pltmp_prog = pltmp->link ? pltmp->link->link.prog : pltmp->prog;
>>>> + if (pltmp_prog != anchor_prog)
>>>> + continue;
>>>> + if (!!(pltmp->flags & BPF_F_PREORDER) != preorder)
>>>> + goto out;
>>> hm... thinking about this a bit more, is it illegal to have the same
>>> BPF program attached as PREORDER and POSTORDER? That seems legit to
>>> me, do we artificially disallow this?
>> Good question, in find_attach_entry(), we have
>>
>> hlist_for_each_entry(pl, progs, node) {
>> if (prog && pl->prog == prog && prog != replace_prog)
>> /* disallow attaching the same prog twice */
>> return ERR_PTR(-EINVAL);
>> if (link && pl->link == link)
>> /* disallow attaching the same link twice */
>> return ERR_PTR(-EINVAL);
>> }
>>
>> Basically, attaching the same prog twice is not allowed, and the PREORDER
>> flag is not checked here. Should we relax this for this patch set?
> So with BPF link-based attachment we already allow multiple same BPF
> programs to be attached, regardless of PREORDER. I don't think we need
> to make any relaxations for old-style program attachment. But your
> mprog API is effectively ignoring link stuff and checking for
> uniqueness of the program (regardless of whether it came from link or
> not), which is problematic, I think, no?
You are correct. I should check link instead of prog for link-based
attachment in mprog. Will fix.
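Roughly, the anchor matching in get_prog_list() would become something
like below (untested; 'struct bpf_cgroup_link' embeds 'struct bpf_link'
as 'link', as used elsewhere in the patch):
====
	hlist_for_each_entry(pltmp, progs, node) {
		if (is_link) {
			/* match the anchor by link identity, not by prog */
			if (pltmp->link && &pltmp->link->link == anchor_link)
				break;
		} else {
			pltmp_prog = pltmp->link ? pltmp->link->link.prog
						 : pltmp->prog;
			if (pltmp_prog == anchor_prog &&
			    !!(pltmp->flags & BPF_F_PREORDER) == preorder)
				break;
		}
	}
====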
>
> [...]
>
>>>> @@ -640,7 +765,8 @@ static struct bpf_prog_list *find_attach_entry(struct hlist_head *progs,
>>>> static int __cgroup_bpf_attach(struct cgroup *cgrp,
>>>> struct bpf_prog *prog, struct bpf_prog *replace_prog,
>>>> struct bpf_cgroup_link *link,
>>>> - enum bpf_attach_type type, u32 flags)
>>>> + enum bpf_attach_type type, u32 flags, u32 id_or_fd,
>>>> + u64 revision)
>>>> {
>>>> u32 saved_flags = (flags & (BPF_F_ALLOW_OVERRIDE | BPF_F_ALLOW_MULTI));
>>>> struct bpf_prog *old_prog = NULL;
>>>> @@ -656,6 +782,9 @@ static int __cgroup_bpf_attach(struct cgroup *cgrp,
>>>> ((flags & BPF_F_REPLACE) && !(flags & BPF_F_ALLOW_MULTI)))
>>>> /* invalid combination */
>>>> return -EINVAL;
>>>> + if ((flags & BPF_F_REPLACE) && (flags & (BPF_F_BEFORE | BPF_F_AFTER)))
>>> but can it be that neither is set?
>> I would say it is possible. In that case, the new prog is appended to
>> the end of the prog list.
> ok, makes sense
>
>>>> + /* only either replace or insertion with before/after */
>>>> + return -EINVAL;
>>>> if (link && (prog || replace_prog))
>>>> /* only either link or prog/replace_prog can be specified */
>>>> return -EINVAL;
>>> [...]