bpf.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [PATCH bpf-next v2 0/4] bpf: Implement mprog API on top of existing cgroup progs
@ 2025-05-08 22:35 Yonghong Song
  2025-05-08 22:35 ` [PATCH bpf-next v2 1/4] cgroup: Add bpf prog revisions to struct cgroup_bpf Yonghong Song
                   ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: Yonghong Song @ 2025-05-08 22:35 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, kernel-team,
	Martin KaFai Lau

Currently, cgroup progs are always appended to the prog list at attachment
time. This is not ideal. In some cases, users want a specific ordering at
a particular cgroup level. For example, at Meta we have a case where three
different applications all have cgroup/setsockopt progs and they require a
specific ordering. The current approach is to use a bpfchainer, where one
bpf prog contains multiple global functions and each global function can
be freplaced by a prog for a specific application. The ordering of the
global functions decides the ordering of those application-specific bpf
progs. Using bpfchainer is a centralized approach and is not desirable, as
one of the applications has to act as a daemon. A decentralized attachment
approach is more favorable for those applications.

To address this, the existing mprog API ([2]) seems an ideal solution,
supporting the BPF_F_BEFORE and BPF_F_AFTER flags on top of the existing
cgroup bpf implementation. More specifically, support is added for prog/link
attachment with BPF_F_BEFORE and BPF_F_AFTER. The kernel mprog interface
([2]) itself is not used; the implementation is done directly in the cgroup
bpf code base. The mprog 'revision' is also implemented for attach, detach
and replace, so users can query the revision number to detect changes to a
cgroup prog list.

The patch set contains 4 patches. Patch 1 adds revision support for
cgroup bpf progs. Patch 2 implements the mprog API for prog/link attach
and updates the revision. Patch 3 adds a new libbpf API to do cgroup
link attach with flags like BPF_F_BEFORE/BPF_F_AFTER. Patch 4 adds two
selftests to validate the implementation.

  [1] https://lore.kernel.org/r/20250224230116.283071-1-yonghong.song@linux.dev
  [2] https://lore.kernel.org/r/20230719140858.13224-2-daniel@iogearbox.net
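
As a rough illustration (not part of the patches themselves), an application
can insert its prog in front of another application's already attached prog
as below. The fd variables and the setsockopt attach type are only example
assumptions:

  #include <bpf/bpf.h>

  /* run my_fd's prog before other_fd's prog in this cgroup's setsockopt list */
  static int attach_before(int my_fd, int other_fd, int cg_fd)
  {
          LIBBPF_OPTS(bpf_prog_attach_opts, opts,
                  .flags = BPF_F_ALLOW_MULTI | BPF_F_BEFORE,
                  .relative_fd = other_fd,
          );

          return bpf_prog_attach_opts(my_fd, cg_fd,
                                      BPF_CGROUP_SETSOCKOPT, &opts);
  }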

Changelogs:
  v1 -> v2:
    - v1: https://lore.kernel.org/bpf/20250411011523.1838771-1-yonghong.song@linux.dev/
    - Change cgroup_bpf.revisions from atomic64_t to u64.
    - Added missing bpf_prog_put in various places.
    - Rename get_cmp_prog() to get_anchor_prog(). The implementation tries to
      find the anchor prog regardless of whether id_or_fd is specified.
    - Rename bpf_cgroup_prog_attached() to is_cgroup_prog_type() and handle
      BPF_PROG_TYPE_LSM properly (with BPF_LSM_CGROUP attach type).
    - I kept the 'id || id_or_fd' condition since the same 'id' condition is
      also used in mprog.c, so I assume it is okay in cgroup.c as well.

Yonghong Song (4):
  cgroup: Add bpf prog revisions to struct cgroup_bpf
  bpf: Implement mprog API on top of existing cgroup progs
  libbpf: Support link-based cgroup attach with options
  selftests/bpf: Add two selftests for mprog API based cgroup progs

 include/linux/bpf-cgroup-defs.h               |   1 +
 include/uapi/linux/bpf.h                      |   7 +
 kernel/bpf/cgroup.c                           | 144 +++-
 kernel/bpf/syscall.c                          |  44 +-
 kernel/cgroup/cgroup.c                        |   5 +
 tools/include/uapi/linux/bpf.h                |   7 +
 tools/lib/bpf/bpf.c                           |  44 +
 tools/lib/bpf/bpf.h                           |   5 +
 tools/lib/bpf/libbpf.c                        |  28 +
 tools/lib/bpf/libbpf.h                        |  15 +
 tools/lib/bpf/libbpf.map                      |   1 +
 .../bpf/prog_tests/cgroup_mprog_opts.c        | 752 ++++++++++++++++++
 .../bpf/prog_tests/cgroup_mprog_ordering.c    |  77 ++
 .../selftests/bpf/progs/cgroup_mprog.c        |  30 +
 14 files changed, 1123 insertions(+), 37 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/cgroup_mprog_opts.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/cgroup_mprog_ordering.c
 create mode 100644 tools/testing/selftests/bpf/progs/cgroup_mprog.c

-- 
2.47.1


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH bpf-next v2 1/4] cgroup: Add bpf prog revisions to struct cgroup_bpf
  2025-05-08 22:35 [PATCH bpf-next v2 0/4] bpf: Implement mprog API on top of existing cgroup progs Yonghong Song
@ 2025-05-08 22:35 ` Yonghong Song
  2025-05-15 20:39   ` Andrii Nakryiko
  2025-05-08 22:35 ` [PATCH bpf-next v2 2/4] bpf: Implement mprog API on top of existing cgroup progs Yonghong Song
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 11+ messages in thread
From: Yonghong Song @ 2025-05-08 22:35 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, kernel-team,
	Martin KaFai Lau

One of the key items in the mprog API is the revision for the prog list.
The revision number is increased whenever the prog list changes, e.g., on
attach, detach or replace.

Add a 'revisions' field to struct cgroup_bpf, representing the revisions
for all cgroup-related attachment types. The initial revision value is
set to 1, the same as in the kernel mprog implementation.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
 include/linux/bpf-cgroup-defs.h | 1 +
 kernel/cgroup/cgroup.c          | 5 +++++
 2 files changed, 6 insertions(+)

diff --git a/include/linux/bpf-cgroup-defs.h b/include/linux/bpf-cgroup-defs.h
index 0985221d5478..c9e6b26abab6 100644
--- a/include/linux/bpf-cgroup-defs.h
+++ b/include/linux/bpf-cgroup-defs.h
@@ -63,6 +63,7 @@ struct cgroup_bpf {
 	 */
 	struct hlist_head progs[MAX_CGROUP_BPF_ATTACH_TYPE];
 	u8 flags[MAX_CGROUP_BPF_ATTACH_TYPE];
+	u64 revisions[MAX_CGROUP_BPF_ATTACH_TYPE];
 
 	/* list of cgroup shared storages */
 	struct list_head storages;
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 63e5b90da1f3..260ce8fc4ea4 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -2071,6 +2071,11 @@ static void init_cgroup_housekeeping(struct cgroup *cgrp)
 	for_each_subsys(ss, ssid)
 		INIT_LIST_HEAD(&cgrp->e_csets[ssid]);
 
+#ifdef CONFIG_CGROUP_BPF
+	for (int i = 0; i < ARRAY_SIZE(cgrp->bpf.revisions); i++)
+		cgrp->bpf.revisions[i] = 1;
+#endif
+
 	init_waitqueue_head(&cgrp->offline_waitq);
 	INIT_WORK(&cgrp->release_agent_work, cgroup1_release_agent);
 }
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH bpf-next v2 2/4] bpf: Implement mprog API on top of existing cgroup progs
  2025-05-08 22:35 [PATCH bpf-next v2 0/4] bpf: Implement mprog API on top of existing cgroup progs Yonghong Song
  2025-05-08 22:35 ` [PATCH bpf-next v2 1/4] cgroup: Add bpf prog revisions to struct cgroup_bpf Yonghong Song
@ 2025-05-08 22:35 ` Yonghong Song
  2025-05-15 20:38   ` Andrii Nakryiko
  2025-05-08 22:35 ` [PATCH bpf-next v2 3/4] libbpf: Support link-based cgroup attach with options Yonghong Song
  2025-05-08 22:35 ` [PATCH bpf-next v2 4/4] selftests/bpf: Add two selftests for mprog API based cgroup progs Yonghong Song
  3 siblings, 1 reply; 11+ messages in thread
From: Yonghong Song @ 2025-05-08 22:35 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, kernel-team,
	Martin KaFai Lau

Currently, cgroup progs are always appended to the prog list at attachment
time. This is not ideal. In some cases, users want a specific ordering at a
particular cgroup level. To address this, the existing mprog API seems an
ideal solution, supporting the BPF_F_BEFORE and BPF_F_AFTER flags.

But there are a few obstacles to using the kernel mprog interface directly.
Cgroup bpf progs already support prog attach/detach/replace and link-based
attach/detach/replace. For example, in struct bpf_prog_array_item, the
cgroup_storage field needs to stay together with the bpf prog, but the mprog
API struct bpf_mprog_fp only has the bpf_prog as a member, which makes it
difficult to use the kernel mprog interface.

In another case, the current cgroup prog detach reuses the flags recorded
at attach time. This is different from the mprog kernel interface, which
uses the flags passed in from user space.

So to avoid modifying existing behavior, I made the following changes to
support the mprog API for cgroup progs:
 - The support is for the prog list at a given cgroup level. The cross-level
   prog list (a.k.a. the effective prog list) is not supported.
 - Previously, BPF_F_PREORDER was supported only for prog attach; now
   BPF_F_PREORDER is also supported for link-based attach.
 - For attach, BPF_F_BEFORE/BPF_F_AFTER/BPF_F_ID are supported similarly to
   kernel mprog but with a different implementation.
 - For detach and replace, the existing implementation is kept.
 - For attach, detach and replace, the revision of the prog list associated
   with the affected attach type is incremented by 1 (see the sketch below).
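
For example (a sketch only, not part of this patch; the getsockopt attach
type and the fd variables are just assumptions), user space can snapshot
the current revision with BPF_PROG_QUERY and pass it back as
expected_revision so that a racing attach/detach/replace is detected:

  #include <bpf/bpf.h>

  static int attach_after(int new_fd, int anchor_fd, int cg_fd)
  {
          LIBBPF_OPTS(bpf_prog_query_opts, optq);
          LIBBPF_OPTS(bpf_prog_attach_opts, opta);
          int err;

          /* snapshot the current revision of the getsockopt prog list */
          err = bpf_prog_query_opts(cg_fd, BPF_CGROUP_GETSOCKOPT, &optq);
          if (err)
                  return err;

          opta.flags = BPF_F_ALLOW_MULTI | BPF_F_AFTER;
          opta.relative_fd = anchor_fd;           /* prog already in the list */
          opta.expected_revision = optq.revision;

          /* fails with -ESTALE if the list changed since the query above */
          return bpf_prog_attach_opts(new_fd, cg_fd,
                                      BPF_CGROUP_GETSOCKOPT, &opta);
  }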

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
 include/uapi/linux/bpf.h       |   7 ++
 kernel/bpf/cgroup.c            | 144 ++++++++++++++++++++++++++++-----
 kernel/bpf/syscall.c           |  44 ++++++----
 tools/include/uapi/linux/bpf.h |   7 ++
 4 files changed, 165 insertions(+), 37 deletions(-)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 71d5ac83cf5d..a5c7992e8f7c 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1794,6 +1794,13 @@ union bpf_attr {
 				};
 				__u64		expected_revision;
 			} netkit;
+			struct {
+				union {
+					__u32	relative_fd;
+					__u32	relative_id;
+				};
+				__u64		expected_revision;
+			} cgroup;
 		};
 	} link_create;
 
diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
index 84f58f3d028a..7c258c4d9a74 100644
--- a/kernel/bpf/cgroup.c
+++ b/kernel/bpf/cgroup.c
@@ -624,6 +624,83 @@ static struct bpf_prog_list *find_attach_entry(struct hlist_head *progs,
 	return NULL;
 }
 
+static struct bpf_prog *get_anchor_prog(struct hlist_head *progs, struct bpf_prog *prog,
+					u32 flags, u32 id_or_fd, struct bpf_prog_list **ppltmp)
+{
+	struct bpf_prog *anchor_prog = NULL, *pltmp_prog;
+	bool preorder = flags & BPF_F_PREORDER;
+	struct bpf_prog_list *pltmp;
+	bool id = flags & BPF_F_ID;
+	int ret = -EINVAL;
+
+	if (id || id_or_fd) {
+		/* flags must have BPF_F_BEFORE or BPF_F_AFTER */
+		if (!(flags & (BPF_F_BEFORE | BPF_F_AFTER)))
+			return ERR_PTR(-EINVAL);
+
+		if (id)
+			anchor_prog = bpf_prog_by_id(id_or_fd);
+		else
+			anchor_prog = bpf_prog_get(id_or_fd);
+		if (IS_ERR(anchor_prog))
+			return anchor_prog;
+		if (anchor_prog->type != prog->type)
+			goto out;
+	}
+
+	if (!anchor_prog) {
+		hlist_for_each_entry(pltmp, progs, node) {
+			if ((flags & BPF_F_BEFORE) && *ppltmp)
+				break;
+			*ppltmp = pltmp;
+		}
+	}  else {
+		hlist_for_each_entry(pltmp, progs, node) {
+			pltmp_prog = pltmp->link ? pltmp->link->link.prog : pltmp->prog;
+			if (pltmp_prog != anchor_prog)
+				continue;
+			if (!!(pltmp->flags & BPF_F_PREORDER) != preorder)
+				goto out;
+			*ppltmp = pltmp;
+			break;
+		}
+		if (!*ppltmp) {
+			ret = -ENOENT;
+			goto out;
+		}
+	}
+
+	return anchor_prog;
+
+out:
+	bpf_prog_put(anchor_prog);
+	return ERR_PTR(ret);
+}
+
+static int insert_pl_to_hlist(struct bpf_prog_list *pl, struct hlist_head *progs,
+			      struct bpf_prog *prog, u32 flags, u32 id_or_fd)
+{
+	struct bpf_prog_list *pltmp = NULL;
+	struct bpf_prog *anchor_prog;
+
+	/* flags cannot have both BPF_F_BEFORE and BPF_F_AFTER */
+	if ((flags & BPF_F_BEFORE) && (flags & BPF_F_AFTER))
+		return -EINVAL;
+
+	anchor_prog = get_anchor_prog(progs, prog, flags, id_or_fd, &pltmp);
+	if (IS_ERR(anchor_prog))
+		return PTR_ERR(anchor_prog);
+
+	if (hlist_empty(progs))
+		hlist_add_head(&pl->node, progs);
+	else if (flags & BPF_F_BEFORE)
+		hlist_add_before(&pl->node, &pltmp->node);
+	else
+		hlist_add_behind(&pl->node, &pltmp->node);
+
+	return 0;
+}
+
 /**
  * __cgroup_bpf_attach() - Attach the program or the link to a cgroup, and
  *                         propagate the change to descendants
@@ -633,6 +710,8 @@ static struct bpf_prog_list *find_attach_entry(struct hlist_head *progs,
  * @replace_prog: Previously attached program to replace if BPF_F_REPLACE is set
  * @type: Type of attach operation
  * @flags: Option flags
+ * @id_or_fd: Relative prog id or fd
+ * @revision: bpf_prog_list revision
  *
  * Exactly one of @prog or @link can be non-null.
  * Must be called with cgroup_mutex held.
@@ -640,7 +719,8 @@ static struct bpf_prog_list *find_attach_entry(struct hlist_head *progs,
 static int __cgroup_bpf_attach(struct cgroup *cgrp,
 			       struct bpf_prog *prog, struct bpf_prog *replace_prog,
 			       struct bpf_cgroup_link *link,
-			       enum bpf_attach_type type, u32 flags)
+			       enum bpf_attach_type type, u32 flags, u32 id_or_fd,
+			       u64 revision)
 {
 	u32 saved_flags = (flags & (BPF_F_ALLOW_OVERRIDE | BPF_F_ALLOW_MULTI));
 	struct bpf_prog *old_prog = NULL;
@@ -656,6 +736,9 @@ static int __cgroup_bpf_attach(struct cgroup *cgrp,
 	    ((flags & BPF_F_REPLACE) && !(flags & BPF_F_ALLOW_MULTI)))
 		/* invalid combination */
 		return -EINVAL;
+	if ((flags & BPF_F_REPLACE) && (flags & (BPF_F_BEFORE | BPF_F_AFTER)))
+		/* only either replace or insertion with before/after */
+		return -EINVAL;
 	if (link && (prog || replace_prog))
 		/* only either link or prog/replace_prog can be specified */
 		return -EINVAL;
@@ -663,9 +746,12 @@ static int __cgroup_bpf_attach(struct cgroup *cgrp,
 		/* replace_prog implies BPF_F_REPLACE, and vice versa */
 		return -EINVAL;
 
+
 	atype = bpf_cgroup_atype_find(type, new_prog->aux->attach_btf_id);
 	if (atype < 0)
 		return -EINVAL;
+	if (revision && revision != cgrp->bpf.revisions[atype])
+		return -ESTALE;
 
 	progs = &cgrp->bpf.progs[atype];
 
@@ -694,22 +780,18 @@ static int __cgroup_bpf_attach(struct cgroup *cgrp,
 	if (pl) {
 		old_prog = pl->prog;
 	} else {
-		struct hlist_node *last = NULL;
-
 		pl = kmalloc(sizeof(*pl), GFP_KERNEL);
 		if (!pl) {
 			bpf_cgroup_storages_free(new_storage);
 			return -ENOMEM;
 		}
-		if (hlist_empty(progs))
-			hlist_add_head(&pl->node, progs);
-		else
-			hlist_for_each(last, progs) {
-				if (last->next)
-					continue;
-				hlist_add_behind(&pl->node, last);
-				break;
-			}
+
+		err = insert_pl_to_hlist(pl, progs, prog ? : link->link.prog, flags, id_or_fd);
+		if (err) {
+			kfree(pl);
+			bpf_cgroup_storages_free(new_storage);
+			return err;
+		}
 	}
 
 	pl->prog = prog;
@@ -728,6 +810,7 @@ static int __cgroup_bpf_attach(struct cgroup *cgrp,
 	if (err)
 		goto cleanup_trampoline;
 
+	cgrp->bpf.revisions[atype] += 1;
 	if (old_prog) {
 		if (type == BPF_LSM_CGROUP)
 			bpf_trampoline_unlink_cgroup_shim(old_prog);
@@ -759,12 +842,13 @@ static int cgroup_bpf_attach(struct cgroup *cgrp,
 			     struct bpf_prog *prog, struct bpf_prog *replace_prog,
 			     struct bpf_cgroup_link *link,
 			     enum bpf_attach_type type,
-			     u32 flags)
+			     u32 flags, u32 id_or_fd, u64 revision)
 {
 	int ret;
 
 	cgroup_lock();
-	ret = __cgroup_bpf_attach(cgrp, prog, replace_prog, link, type, flags);
+	ret = __cgroup_bpf_attach(cgrp, prog, replace_prog, link, type, flags,
+				  id_or_fd, revision);
 	cgroup_unlock();
 	return ret;
 }
@@ -852,6 +936,7 @@ static int __cgroup_bpf_replace(struct cgroup *cgrp,
 	if (!found)
 		return -ENOENT;
 
+	cgrp->bpf.revisions[atype] += 1;
 	old_prog = xchg(&link->link.prog, new_prog);
 	replace_effective_prog(cgrp, atype, link);
 	bpf_prog_put(old_prog);
@@ -977,12 +1062,14 @@ static void purge_effective_progs(struct cgroup *cgrp, struct bpf_prog *prog,
  * @prog: A program to detach or NULL
  * @link: A link to detach or NULL
  * @type: Type of detach operation
+ * @revision: bpf_prog_list revision
  *
  * At most one of @prog or @link can be non-NULL.
  * Must be called with cgroup_mutex held.
  */
 static int __cgroup_bpf_detach(struct cgroup *cgrp, struct bpf_prog *prog,
-			       struct bpf_cgroup_link *link, enum bpf_attach_type type)
+			       struct bpf_cgroup_link *link, enum bpf_attach_type type,
+			       u64 revision)
 {
 	enum cgroup_bpf_attach_type atype;
 	struct bpf_prog *old_prog;
@@ -1000,6 +1087,9 @@ static int __cgroup_bpf_detach(struct cgroup *cgrp, struct bpf_prog *prog,
 	if (atype < 0)
 		return -EINVAL;
 
+	if (revision && revision != cgrp->bpf.revisions[atype])
+		return -ESTALE;
+
 	progs = &cgrp->bpf.progs[atype];
 	flags = cgrp->bpf.flags[atype];
 
@@ -1025,6 +1115,7 @@ static int __cgroup_bpf_detach(struct cgroup *cgrp, struct bpf_prog *prog,
 
 	/* now can actually delete it from this cgroup list */
 	hlist_del(&pl->node);
+	cgrp->bpf.revisions[atype] += 1;
 
 	kfree(pl);
 	if (hlist_empty(progs))
@@ -1040,12 +1131,12 @@ static int __cgroup_bpf_detach(struct cgroup *cgrp, struct bpf_prog *prog,
 }
 
 static int cgroup_bpf_detach(struct cgroup *cgrp, struct bpf_prog *prog,
-			     enum bpf_attach_type type)
+			     enum bpf_attach_type type, u64 revision)
 {
 	int ret;
 
 	cgroup_lock();
-	ret = __cgroup_bpf_detach(cgrp, prog, NULL, type);
+	ret = __cgroup_bpf_detach(cgrp, prog, NULL, type, revision);
 	cgroup_unlock();
 	return ret;
 }
@@ -1063,6 +1154,7 @@ static int __cgroup_bpf_query(struct cgroup *cgrp, const union bpf_attr *attr,
 	struct bpf_prog_array *effective;
 	int cnt, ret = 0, i;
 	int total_cnt = 0;
+	u64 revision = 0;
 	u32 flags;
 
 	if (effective_query && prog_attach_flags)
@@ -1100,6 +1192,10 @@ static int __cgroup_bpf_query(struct cgroup *cgrp, const union bpf_attr *attr,
 		return -EFAULT;
 	if (copy_to_user(&uattr->query.prog_cnt, &total_cnt, sizeof(total_cnt)))
 		return -EFAULT;
+	if (!effective_query && from_atype == to_atype)
+		revision = cgrp->bpf.revisions[from_atype];
+	if (copy_to_user(&uattr->query.revision, &revision, sizeof(revision)))
+		return -EFAULT;
 	if (attr->query.prog_cnt == 0 || !prog_ids || !total_cnt)
 		/* return early if user requested only program count + flags */
 		return 0;
@@ -1182,7 +1278,8 @@ int cgroup_bpf_prog_attach(const union bpf_attr *attr,
 	}
 
 	ret = cgroup_bpf_attach(cgrp, prog, replace_prog, NULL,
-				attr->attach_type, attr->attach_flags);
+				attr->attach_type, attr->attach_flags,
+				attr->relative_fd, attr->expected_revision);
 
 	if (replace_prog)
 		bpf_prog_put(replace_prog);
@@ -1204,7 +1301,7 @@ int cgroup_bpf_prog_detach(const union bpf_attr *attr, enum bpf_prog_type ptype)
 	if (IS_ERR(prog))
 		prog = NULL;
 
-	ret = cgroup_bpf_detach(cgrp, prog, attr->attach_type);
+	ret = cgroup_bpf_detach(cgrp, prog, attr->attach_type, attr->expected_revision);
 	if (prog)
 		bpf_prog_put(prog);
 
@@ -1233,7 +1330,7 @@ static void bpf_cgroup_link_release(struct bpf_link *link)
 	}
 
 	WARN_ON(__cgroup_bpf_detach(cg_link->cgroup, NULL, cg_link,
-				    cg_link->type));
+				    cg_link->type, 0));
 	if (cg_link->type == BPF_LSM_CGROUP)
 		bpf_trampoline_unlink_cgroup_shim(cg_link->link.prog);
 
@@ -1312,7 +1409,8 @@ int cgroup_bpf_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
 	struct cgroup *cgrp;
 	int err;
 
-	if (attr->link_create.flags)
+	if (attr->link_create.flags &&
+	    (attr->link_create.flags & (~(BPF_F_ID | BPF_F_BEFORE | BPF_F_AFTER | BPF_F_PREORDER))))
 		return -EINVAL;
 
 	cgrp = cgroup_get_from_fd(attr->link_create.target_fd);
@@ -1336,7 +1434,9 @@ int cgroup_bpf_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
 	}
 
 	err = cgroup_bpf_attach(cgrp, NULL, NULL, link,
-				link->type, BPF_F_ALLOW_MULTI);
+				link->type, BPF_F_ALLOW_MULTI | attr->link_create.flags,
+				attr->link_create.cgroup.relative_fd,
+				attr->link_create.cgroup.expected_revision);
 	if (err) {
 		bpf_link_cleanup(&link_primer);
 		goto out_put_cgroup;
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index df33d19c5c3b..58ea3c38eabb 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -4184,6 +4184,25 @@ static int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog,
 	}
 }
 
+static bool is_cgroup_prog_type(enum bpf_prog_type ptype, enum bpf_attach_type atype,
+				bool check_atype)
+{
+	switch (ptype) {
+	case BPF_PROG_TYPE_CGROUP_DEVICE:
+	case BPF_PROG_TYPE_CGROUP_SKB:
+	case BPF_PROG_TYPE_CGROUP_SOCK:
+	case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
+	case BPF_PROG_TYPE_CGROUP_SOCKOPT:
+	case BPF_PROG_TYPE_CGROUP_SYSCTL:
+	case BPF_PROG_TYPE_SOCK_OPS:
+		return true;
+	case BPF_PROG_TYPE_LSM:
+		return check_atype ? atype == BPF_LSM_CGROUP : true;
+	default:
+		return false;
+	}
+}
+
 #define BPF_PROG_ATTACH_LAST_FIELD expected_revision
 
 #define BPF_F_ATTACH_MASK_BASE	\
@@ -4214,6 +4233,9 @@ static int bpf_prog_attach(const union bpf_attr *attr)
 	if (bpf_mprog_supported(ptype)) {
 		if (attr->attach_flags & ~BPF_F_ATTACH_MASK_MPROG)
 			return -EINVAL;
+	} else if (is_cgroup_prog_type(ptype, 0, false)) {
+		if (attr->attach_flags & BPF_F_LINK)
+			return -EINVAL;
 	} else {
 		if (attr->attach_flags & ~BPF_F_ATTACH_MASK_BASE)
 			return -EINVAL;
@@ -4242,20 +4264,6 @@ static int bpf_prog_attach(const union bpf_attr *attr)
 	case BPF_PROG_TYPE_FLOW_DISSECTOR:
 		ret = netns_bpf_prog_attach(attr, prog);
 		break;
-	case BPF_PROG_TYPE_CGROUP_DEVICE:
-	case BPF_PROG_TYPE_CGROUP_SKB:
-	case BPF_PROG_TYPE_CGROUP_SOCK:
-	case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
-	case BPF_PROG_TYPE_CGROUP_SOCKOPT:
-	case BPF_PROG_TYPE_CGROUP_SYSCTL:
-	case BPF_PROG_TYPE_SOCK_OPS:
-	case BPF_PROG_TYPE_LSM:
-		if (ptype == BPF_PROG_TYPE_LSM &&
-		    prog->expected_attach_type != BPF_LSM_CGROUP)
-			ret = -EINVAL;
-		else
-			ret = cgroup_bpf_prog_attach(attr, ptype, prog);
-		break;
 	case BPF_PROG_TYPE_SCHED_CLS:
 		if (attr->attach_type == BPF_TCX_INGRESS ||
 		    attr->attach_type == BPF_TCX_EGRESS)
@@ -4264,7 +4272,10 @@ static int bpf_prog_attach(const union bpf_attr *attr)
 			ret = netkit_prog_attach(attr, prog);
 		break;
 	default:
-		ret = -EINVAL;
+		if (!is_cgroup_prog_type(ptype, prog->expected_attach_type, true))
+			ret = -EINVAL;
+		else
+			ret = cgroup_bpf_prog_attach(attr, ptype, prog);
 	}
 
 	if (ret)
@@ -4294,6 +4305,9 @@ static int bpf_prog_detach(const union bpf_attr *attr)
 			if (IS_ERR(prog))
 				return PTR_ERR(prog);
 		}
+	} else if (is_cgroup_prog_type(ptype, 0, false)) {
+		if (attr->attach_flags || attr->relative_fd)
+			return -EINVAL;
 	} else if (attr->attach_flags ||
 		   attr->relative_fd ||
 		   attr->expected_revision) {
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 71d5ac83cf5d..a5c7992e8f7c 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1794,6 +1794,13 @@ union bpf_attr {
 				};
 				__u64		expected_revision;
 			} netkit;
+			struct {
+				union {
+					__u32	relative_fd;
+					__u32	relative_id;
+				};
+				__u64		expected_revision;
+			} cgroup;
 		};
 	} link_create;
 
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH bpf-next v2 3/4] libbpf: Support link-based cgroup attach with options
  2025-05-08 22:35 [PATCH bpf-next v2 0/4] bpf: Implement mprog API on top of existing cgroup progs Yonghong Song
  2025-05-08 22:35 ` [PATCH bpf-next v2 1/4] cgroup: Add bpf prog revisions to struct cgroup_bpf Yonghong Song
  2025-05-08 22:35 ` [PATCH bpf-next v2 2/4] bpf: Implement mprog API on top of existing cgroup progs Yonghong Song
@ 2025-05-08 22:35 ` Yonghong Song
  2025-05-15 20:42   ` Andrii Nakryiko
  2025-05-08 22:35 ` [PATCH bpf-next v2 4/4] selftests/bpf: Add two selftests for mprog API based cgroup progs Yonghong Song
  3 siblings, 1 reply; 11+ messages in thread
From: Yonghong Song @ 2025-05-08 22:35 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, kernel-team,
	Martin KaFai Lau

Currently, libbpf supports bpf_program__attach_cgroup() with the signature:
  LIBBPF_API struct bpf_link *
  bpf_program__attach_cgroup(const struct bpf_program *prog, int cgroup_fd);

To support mprog-style attachment, additional fields like flags,
relative_{fd,id} and expected_revision are needed.

Add a new API:
  LIBBPF_API struct bpf_link *
  bpf_program__attach_cgroup_opts(const struct bpf_program *prog, int cgroup_fd,
                                  const struct bpf_cgroup_opts *opts);
where bpf_cgroup_opts contains all the fields mentioned above.
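
A minimal usage sketch (the skeleton, program and variable names below are
hypothetical; error handling is trimmed):

  LIBBPF_OPTS(bpf_cgroup_opts, opts,
          .flags = BPF_F_BEFORE,
          .relative_id = other_prog_id,   /* prog already attached to the cgroup */
  );
  struct bpf_link *link;

  link = bpf_program__attach_cgroup_opts(skel->progs.my_getsockopt,
                                         cgroup_fd, &opts);
  if (!link)
          return -errno;  /* errno is set on failure */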

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
 tools/lib/bpf/bpf.c      | 44 ++++++++++++++++++++++++++++++++++++++++
 tools/lib/bpf/bpf.h      |  5 +++++
 tools/lib/bpf/libbpf.c   | 28 +++++++++++++++++++++++++
 tools/lib/bpf/libbpf.h   | 15 ++++++++++++++
 tools/lib/bpf/libbpf.map |  1 +
 5 files changed, 93 insertions(+)

diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index a9c3e33d0f8a..6eb421ccf91b 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -837,6 +837,50 @@ int bpf_link_create(int prog_fd, int target_fd,
 		if (!OPTS_ZEROED(opts, netkit))
 			return libbpf_err(-EINVAL);
 		break;
+	case BPF_CGROUP_INET_INGRESS:
+	case BPF_CGROUP_INET_EGRESS:
+	case BPF_CGROUP_INET_SOCK_CREATE:
+	case BPF_CGROUP_INET_SOCK_RELEASE:
+	case BPF_CGROUP_INET4_BIND:
+	case BPF_CGROUP_INET6_BIND:
+	case BPF_CGROUP_INET4_POST_BIND:
+	case BPF_CGROUP_INET6_POST_BIND:
+	case BPF_CGROUP_INET4_CONNECT:
+	case BPF_CGROUP_INET6_CONNECT:
+	case BPF_CGROUP_UNIX_CONNECT:
+	case BPF_CGROUP_INET4_GETPEERNAME:
+	case BPF_CGROUP_INET6_GETPEERNAME:
+	case BPF_CGROUP_UNIX_GETPEERNAME:
+	case BPF_CGROUP_INET4_GETSOCKNAME:
+	case BPF_CGROUP_INET6_GETSOCKNAME:
+	case BPF_CGROUP_UNIX_GETSOCKNAME:
+	case BPF_CGROUP_UDP4_SENDMSG:
+	case BPF_CGROUP_UDP6_SENDMSG:
+	case BPF_CGROUP_UNIX_SENDMSG:
+	case BPF_CGROUP_UDP4_RECVMSG:
+	case BPF_CGROUP_UDP6_RECVMSG:
+	case BPF_CGROUP_UNIX_RECVMSG:
+	case BPF_CGROUP_SOCK_OPS:
+	case BPF_CGROUP_DEVICE:
+	case BPF_CGROUP_SYSCTL:
+	case BPF_CGROUP_GETSOCKOPT:
+	case BPF_CGROUP_SETSOCKOPT:
+	case BPF_LSM_CGROUP:
+		relative_fd = OPTS_GET(opts, cgroup.relative_fd, 0);
+		relative_id = OPTS_GET(opts, cgroup.relative_id, 0);
+		if (relative_fd && relative_id)
+			return libbpf_err(-EINVAL);
+		if (relative_id) {
+			attr.link_create.cgroup.relative_id = relative_id;
+			attr.link_create.flags |= BPF_F_ID;
+		} else {
+			attr.link_create.cgroup.relative_fd = relative_fd;
+		}
+		attr.link_create.cgroup.expected_revision =
+			OPTS_GET(opts, cgroup.expected_revision, 0);
+		if (!OPTS_ZEROED(opts, cgroup))
+			return libbpf_err(-EINVAL);
+		break;
 	default:
 		if (!OPTS_ZEROED(opts, flags))
 			return libbpf_err(-EINVAL);
diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
index 777627d33d25..1342564214c8 100644
--- a/tools/lib/bpf/bpf.h
+++ b/tools/lib/bpf/bpf.h
@@ -438,6 +438,11 @@ struct bpf_link_create_opts {
 			__u32 relative_id;
 			__u64 expected_revision;
 		} netkit;
+		struct {
+			__u32 relative_fd;
+			__u32 relative_id;
+			__u64 expected_revision;
+		} cgroup;
 	};
 	size_t :0;
 };
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 617cfb9a7ff5..90b0e5e4e2d6 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -12837,6 +12837,34 @@ struct bpf_link *bpf_program__attach_xdp(const struct bpf_program *prog, int ifi
 	return bpf_program_attach_fd(prog, ifindex, "xdp", NULL);
 }
 
+struct bpf_link *
+bpf_program__attach_cgroup_opts(const struct bpf_program *prog, int cgroup_fd,
+				const struct bpf_cgroup_opts *opts)
+{
+	LIBBPF_OPTS(bpf_link_create_opts, link_create_opts);
+	__u32 relative_id;
+	int relative_fd;
+
+	if (!OPTS_VALID(opts, bpf_cgroup_opts))
+		return libbpf_err_ptr(-EINVAL);
+
+	relative_id = OPTS_GET(opts, relative_id, 0);
+	relative_fd = OPTS_GET(opts, relative_fd, 0);
+
+	if (relative_fd && relative_id) {
+		pr_warn("prog '%s': relative_fd and relative_id cannot be set at the same time\n",
+			prog->name);
+		return libbpf_err_ptr(-EINVAL);
+	}
+
+	link_create_opts.cgroup.expected_revision = OPTS_GET(opts, expected_revision, 0);
+	link_create_opts.cgroup.relative_fd = relative_fd;
+	link_create_opts.cgroup.relative_id = relative_id;
+	link_create_opts.flags = OPTS_GET(opts, flags, 0);
+
+	return bpf_program_attach_fd(prog, cgroup_fd, "cgroup", &link_create_opts);
+}
+
 struct bpf_link *
 bpf_program__attach_tcx(const struct bpf_program *prog, int ifindex,
 			const struct bpf_tcx_opts *opts)
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index d39f19c8396d..622de1b932ee 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -877,6 +877,21 @@ LIBBPF_API struct bpf_link *
 bpf_program__attach_netkit(const struct bpf_program *prog, int ifindex,
 			   const struct bpf_netkit_opts *opts);
 
+struct bpf_cgroup_opts {
+	/* size of this struct, for forward/backward compatibility */
+	size_t sz;
+	__u32 flags;
+	__u32 relative_fd;
+	__u32 relative_id;
+	__u64 expected_revision;
+	size_t :0;
+};
+#define bpf_cgroup_opts__last_field expected_revision
+
+LIBBPF_API struct bpf_link *
+bpf_program__attach_cgroup_opts(const struct bpf_program *prog, int cgroup_fd,
+				const struct bpf_cgroup_opts *opts);
+
 struct bpf_map;
 
 LIBBPF_API struct bpf_link *bpf_map__attach_struct_ops(const struct bpf_map *map);
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index 1205f9a4fe04..c7fc0bde5648 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -437,6 +437,7 @@ LIBBPF_1.6.0 {
 		bpf_linker__add_fd;
 		bpf_linker__new_fd;
 		bpf_object__prepare;
+		bpf_program__attach_cgroup_opts;
 		bpf_program__func_info;
 		bpf_program__func_info_cnt;
 		bpf_program__line_info;
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH bpf-next v2 4/4] selftests/bpf: Add two selftests for mprog API based cgroup progs
  2025-05-08 22:35 [PATCH bpf-next v2 0/4] bpf: Implement mprog API on top of existing cgroup progs Yonghong Song
                   ` (2 preceding siblings ...)
  2025-05-08 22:35 ` [PATCH bpf-next v2 3/4] libbpf: Support link-based cgroup attach with options Yonghong Song
@ 2025-05-08 22:35 ` Yonghong Song
  3 siblings, 0 replies; 11+ messages in thread
From: Yonghong Song @ 2025-05-08 22:35 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, kernel-team,
	Martin KaFai Lau

Two tests are added:
  - cgroup_mprog_opts, which mimics tc_opts.c ([1]). Both prog and link
    attach are tested. Some negative tests are also included.
  - cgroup_mprog_ordering, which actually runs the program with some mprog
    API flags.

  [1] https://github.com/torvalds/linux/blob/master/tools/testing/selftests/bpf/prog_tests/tc_opts.c

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
 .../bpf/prog_tests/cgroup_mprog_opts.c        | 752 ++++++++++++++++++
 .../bpf/prog_tests/cgroup_mprog_ordering.c    |  77 ++
 .../selftests/bpf/progs/cgroup_mprog.c        |  30 +
 3 files changed, 859 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/cgroup_mprog_opts.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/cgroup_mprog_ordering.c
 create mode 100644 tools/testing/selftests/bpf/progs/cgroup_mprog.c

diff --git a/tools/testing/selftests/bpf/prog_tests/cgroup_mprog_opts.c b/tools/testing/selftests/bpf/prog_tests/cgroup_mprog_opts.c
new file mode 100644
index 000000000000..a8374ea2267b
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/cgroup_mprog_opts.c
@@ -0,0 +1,752 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2025 Meta Platforms, Inc. and affiliates. */
+#include <test_progs.h>
+#include "cgroup_helpers.h"
+#include "cgroup_mprog.skel.h"
+
+static __u32 id_from_prog_fd(int fd)
+{
+	struct bpf_prog_info prog_info = {};
+	__u32 prog_info_len = sizeof(prog_info);
+	int err;
+
+	err = bpf_obj_get_info_by_fd(fd, &prog_info, &prog_info_len);
+	if (!ASSERT_OK(err, "id_from_prog_fd"))
+		return 0;
+
+	ASSERT_NEQ(prog_info.id, 0, "prog_info.id");
+	return prog_info.id;
+}
+
+static void assert_mprog_count(int cg, int atype, int expected)
+{
+	__u32 count = 0, attach_flags = 0;
+	int err;
+
+	err = bpf_prog_query(cg, atype, 0, &attach_flags,
+			     NULL, &count);
+	ASSERT_EQ(count, expected, "count");
+	ASSERT_EQ(err, 0, "prog_query");
+}
+
+static void test_prog_attach_detach(int atype)
+{
+	LIBBPF_OPTS(bpf_prog_attach_opts, opta);
+	LIBBPF_OPTS(bpf_prog_detach_opts, optd);
+	LIBBPF_OPTS(bpf_prog_query_opts, optq);
+	__u32 fd1, fd2, fd3, fd4, id1, id2, id3, id4;
+	struct cgroup_mprog *skel;
+	__u32 prog_ids[10];
+	int cg, err;
+
+	cg = test__join_cgroup("/prog_attach_detach");
+	if (!ASSERT_GE(cg, 0, "join_cgroup /prog_attach_detach"))
+		return;
+
+	skel = cgroup_mprog__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "skel_load"))
+		goto cleanup;
+
+	fd1 = bpf_program__fd(skel->progs.getsockopt_1);
+	fd2 = bpf_program__fd(skel->progs.getsockopt_2);
+	fd3 = bpf_program__fd(skel->progs.getsockopt_3);
+	fd4 = bpf_program__fd(skel->progs.getsockopt_4);
+
+	id1 = id_from_prog_fd(fd1);
+	id2 = id_from_prog_fd(fd2);
+	id3 = id_from_prog_fd(fd3);
+	id4 = id_from_prog_fd(fd4);
+
+	assert_mprog_count(cg, atype, 0);
+
+	LIBBPF_OPTS_RESET(opta,
+		.flags = BPF_F_ALLOW_MULTI,
+		.expected_revision = 1,
+	);
+
+	/* ordering: [fd1] */
+	err = bpf_prog_attach_opts(fd1, cg, atype, &opta);
+	if (!ASSERT_EQ(err, 0, "prog_attach"))
+		goto cleanup;
+
+	assert_mprog_count(cg, atype, 1);
+
+	LIBBPF_OPTS_RESET(opta,
+		.flags = BPF_F_ALLOW_MULTI | BPF_F_BEFORE,
+		.expected_revision = 2,
+	);
+
+	/* ordering: [fd2, fd1] */
+	err = bpf_prog_attach_opts(fd2, cg, atype, &opta);
+	if (!ASSERT_EQ(err, 0, "prog_attach"))
+		goto cleanup1;
+
+	assert_mprog_count(cg, atype, 2);
+
+	LIBBPF_OPTS_RESET(opta,
+		.flags = BPF_F_ALLOW_MULTI | BPF_F_AFTER,
+		.relative_fd = fd2,
+		.expected_revision = 3,
+	);
+
+	/* ordering: [fd2, fd3, fd1] */
+	err = bpf_prog_attach_opts(fd3, cg, atype, &opta);
+	if (!ASSERT_EQ(err, 0, "prog_attach"))
+		goto cleanup2;
+
+	assert_mprog_count(cg, atype, 3);
+
+	LIBBPF_OPTS_RESET(opta,
+		.flags = BPF_F_ALLOW_MULTI,
+		.expected_revision = 4,
+	);
+
+	/* ordering: [fd2, fd3, fd1, fd4] */
+	err = bpf_prog_attach_opts(fd4, cg, atype, &opta);
+	if (!ASSERT_EQ(err, 0, "prog_attach"))
+		goto cleanup3;
+
+	assert_mprog_count(cg, atype, 4);
+
+	/* retrieve optq.prog_cnt */
+	err = bpf_prog_query_opts(cg, atype, &optq);
+	if (!ASSERT_OK(err, "prog_query"))
+		goto cleanup4;
+
+	/* optq.prog_cnt will be used in below query */
+	memset(prog_ids, 0, sizeof(prog_ids));
+	optq.prog_ids = prog_ids;
+	err = bpf_prog_query_opts(cg, atype, &optq);
+	if (!ASSERT_OK(err, "prog_query"))
+		goto cleanup4;
+
+	ASSERT_EQ(optq.count, 4, "count");
+	ASSERT_EQ(optq.revision, 5, "revision");
+	ASSERT_EQ(optq.prog_ids[0], id2, "prog_ids[0]");
+	ASSERT_EQ(optq.prog_ids[1], id3, "prog_ids[1]");
+	ASSERT_EQ(optq.prog_ids[2], id1, "prog_ids[2]");
+	ASSERT_EQ(optq.prog_ids[3], id4, "prog_ids[3]");
+	ASSERT_EQ(optq.prog_ids[4], 0, "prog_ids[4]");
+	ASSERT_EQ(optq.link_ids, NULL, "link_ids");
+
+cleanup4:
+	optd.expected_revision = 5;
+	err = bpf_prog_detach_opts(fd4, cg, atype, &optd);
+	ASSERT_OK(err, "prog_detach");
+	assert_mprog_count(cg, atype, 3);
+
+cleanup3:
+	LIBBPF_OPTS_RESET(optd);
+	err = bpf_prog_detach_opts(fd3, cg, atype, &optd);
+	ASSERT_OK(err, "prog_detach");
+	assert_mprog_count(cg, atype, 2);
+
+	/* Check revision after two detach operations */
+	err = bpf_prog_query_opts(cg, atype, &optq);
+	ASSERT_OK(err, "prog_query");
+	ASSERT_EQ(optq.revision, 7, "revision");
+
+cleanup2:
+	err = bpf_prog_detach_opts(fd2, cg, atype, &optd);
+	ASSERT_OK(err, "prog_detach");
+	assert_mprog_count(cg, atype, 1);
+
+cleanup1:
+	err = bpf_prog_detach_opts(fd1, cg, atype, &optd);
+	ASSERT_OK(err, "prog_detach");
+	assert_mprog_count(cg, atype, 0);
+
+cleanup:
+	cgroup_mprog__destroy(skel);
+	close(cg);
+}
+
+static void test_link_attach_detach(int atype)
+{
+	LIBBPF_OPTS(bpf_cgroup_opts, opta);
+	LIBBPF_OPTS(bpf_cgroup_opts, optd);
+	LIBBPF_OPTS(bpf_prog_query_opts, optq);
+	struct bpf_link *link1, *link2, *link3, *link4;
+	__u32 fd1, fd2, fd3, fd4, id1, id2, id3, id4;
+	struct cgroup_mprog *skel;
+	__u32 prog_ids[10];
+	int cg, err;
+
+	cg = test__join_cgroup("/link_attach_detach");
+	if (!ASSERT_GE(cg, 0, "join_cgroup /link_attach_detach"))
+		return;
+
+	skel = cgroup_mprog__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "skel_load"))
+		goto cleanup;
+
+	fd1 = bpf_program__fd(skel->progs.getsockopt_1);
+	fd2 = bpf_program__fd(skel->progs.getsockopt_2);
+	fd3 = bpf_program__fd(skel->progs.getsockopt_3);
+	fd4 = bpf_program__fd(skel->progs.getsockopt_4);
+
+	id1 = id_from_prog_fd(fd1);
+	id2 = id_from_prog_fd(fd2);
+	id3 = id_from_prog_fd(fd3);
+	id4 = id_from_prog_fd(fd4);
+
+	assert_mprog_count(cg, atype, 0);
+
+	LIBBPF_OPTS_RESET(opta,
+		.expected_revision = 1,
+	);
+
+	/* ordering: [fd1] */
+	link1 = bpf_program__attach_cgroup_opts(skel->progs.getsockopt_1, cg, &opta);
+	if (!ASSERT_OK_PTR(link1, "link_attach"))
+		goto cleanup;
+
+	assert_mprog_count(cg, atype, 1);
+
+	LIBBPF_OPTS_RESET(opta,
+		.flags = BPF_F_BEFORE,
+		.expected_revision = 2,
+	);
+
+	/* ordering: [fd2, fd1] */
+	link2 = bpf_program__attach_cgroup_opts(skel->progs.getsockopt_2, cg, &opta);
+	if (!ASSERT_OK_PTR(link2, "link_attach"))
+		goto cleanup1;
+
+	assert_mprog_count(cg, atype, 2);
+
+	LIBBPF_OPTS_RESET(opta,
+		.flags = BPF_F_AFTER,
+		.relative_fd = fd2,
+		.expected_revision = 3,
+	);
+
+	/* ordering: [fd2, fd3, fd1] */
+	link3 = bpf_program__attach_cgroup_opts(skel->progs.getsockopt_3, cg, &opta);
+	if (!ASSERT_OK_PTR(link3, "link_attach"))
+		goto cleanup2;
+
+	assert_mprog_count(cg, atype, 3);
+
+	LIBBPF_OPTS_RESET(opta,
+		.expected_revision = 4,
+	);
+
+	/* ordering: [fd2, fd3, fd1, fd4] */
+	link4 = bpf_program__attach_cgroup_opts(skel->progs.getsockopt_4, cg, &opta);
+	if (!ASSERT_OK_PTR(link4, "link_attach"))
+		goto cleanup3;
+
+	assert_mprog_count(cg, atype, 4);
+
+	/* retrieve optq.prog_cnt */
+	err = bpf_prog_query_opts(cg, atype, &optq);
+	if (!ASSERT_OK(err, "prog_query"))
+		goto cleanup4;
+
+	/* optq.prog_cnt will be used in below query */
+	memset(prog_ids, 0, sizeof(prog_ids));
+	optq.prog_ids = prog_ids;
+	err = bpf_prog_query_opts(cg, atype, &optq);
+	if (!ASSERT_OK(err, "prog_query"))
+		goto cleanup4;
+
+	ASSERT_EQ(optq.count, 4, "count");
+	ASSERT_EQ(optq.revision, 5, "revision");
+	ASSERT_EQ(optq.prog_ids[0], id2, "prog_ids[0]");
+	ASSERT_EQ(optq.prog_ids[1], id3, "prog_ids[1]");
+	ASSERT_EQ(optq.prog_ids[2], id1, "prog_ids[2]");
+	ASSERT_EQ(optq.prog_ids[3], id4, "prog_ids[3]");
+	ASSERT_EQ(optq.prog_ids[4], 0, "prog_ids[4]");
+	ASSERT_EQ(optq.link_ids, NULL, "link_ids");
+
+cleanup4:
+	bpf_link__destroy(link4);
+	assert_mprog_count(cg, atype, 3);
+
+cleanup3:
+	bpf_link__destroy(link3);
+	assert_mprog_count(cg, atype, 2);
+
+	/* Check revision after two detach operations */
+	err = bpf_prog_query_opts(cg, atype, &optq);
+	ASSERT_OK(err, "prog_query");
+	ASSERT_EQ(optq.revision, 7, "revision");
+
+cleanup2:
+	bpf_link__destroy(link2);
+	assert_mprog_count(cg, atype, 1);
+
+cleanup1:
+	bpf_link__destroy(link1);
+	assert_mprog_count(cg, atype, 0);
+
+cleanup:
+	cgroup_mprog__destroy(skel);
+	close(cg);
+}
+
+static void test_mix_attach_detach(int atype)
+{
+	LIBBPF_OPTS(bpf_cgroup_opts, lopta);
+	LIBBPF_OPTS(bpf_cgroup_opts, loptd);
+	LIBBPF_OPTS(bpf_prog_attach_opts, opta);
+	LIBBPF_OPTS(bpf_prog_detach_opts, optd);
+	LIBBPF_OPTS(bpf_prog_query_opts, optq);
+	__u32 fd1, fd2, fd3, fd4, id1, id2, id3, id4;
+	struct bpf_link *link2, *link4;
+	struct cgroup_mprog *skel;
+	__u32 prog_ids[10];
+	int cg, err;
+
+	cg = test__join_cgroup("/mix_attach_detach");
+	if (!ASSERT_GE(cg, 0, "join_cgroup /mix_attach_detach"))
+		return;
+
+	skel = cgroup_mprog__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "skel_load"))
+		goto cleanup;
+
+	fd1 = bpf_program__fd(skel->progs.getsockopt_1);
+	fd2 = bpf_program__fd(skel->progs.getsockopt_2);
+	fd3 = bpf_program__fd(skel->progs.getsockopt_3);
+	fd4 = bpf_program__fd(skel->progs.getsockopt_4);
+
+	id1 = id_from_prog_fd(fd1);
+	id2 = id_from_prog_fd(fd2);
+	id3 = id_from_prog_fd(fd3);
+	id4 = id_from_prog_fd(fd4);
+
+	assert_mprog_count(cg, atype, 0);
+
+	LIBBPF_OPTS_RESET(opta,
+		.flags = BPF_F_ALLOW_MULTI,
+		.expected_revision = 1,
+	);
+
+	/* ordering: [fd1] */
+	err = bpf_prog_attach_opts(fd1, cg, atype, &opta);
+	if (!ASSERT_EQ(err, 0, "prog_attach"))
+		goto cleanup;
+
+	assert_mprog_count(cg, atype, 1);
+
+	LIBBPF_OPTS_RESET(lopta,
+		.flags = BPF_F_BEFORE,
+		.expected_revision = 2,
+	);
+
+	/* ordering: [fd2, fd1] */
+	link2 = bpf_program__attach_cgroup_opts(skel->progs.getsockopt_2, cg, &lopta);
+	if (!ASSERT_OK_PTR(link2, "link_attach"))
+		goto cleanup1;
+
+	assert_mprog_count(cg, atype, 2);
+
+	LIBBPF_OPTS_RESET(opta,
+		.flags = BPF_F_ALLOW_MULTI | BPF_F_AFTER,
+		.relative_fd = fd2,
+		.expected_revision = 3,
+	);
+
+	/* ordering: [fd2, fd3, fd1] */
+	err = bpf_prog_attach_opts(fd3, cg, atype, &opta);
+	if (!ASSERT_EQ(err, 0, "prog_attach"))
+		goto cleanup2;
+
+	assert_mprog_count(cg, atype, 3);
+
+	LIBBPF_OPTS_RESET(lopta,
+		.expected_revision = 4,
+	);
+
+	/* ordering: [fd2, fd3, fd1, fd4] */
+	link4 = bpf_program__attach_cgroup_opts(skel->progs.getsockopt_4, cg, &lopta);
+	if (!ASSERT_OK_PTR(link4, "link_attach"))
+		goto cleanup3;
+
+	assert_mprog_count(cg, atype, 4);
+
+	/* retrieve optq.prog_cnt */
+	err = bpf_prog_query_opts(cg, atype, &optq);
+	if (!ASSERT_OK(err, "prog_query"))
+		goto cleanup4;
+
+	/* optq.prog_cnt will be used in below query */
+	memset(prog_ids, 0, sizeof(prog_ids));
+	optq.prog_ids = prog_ids;
+	err = bpf_prog_query_opts(cg, atype, &optq);
+	if (!ASSERT_OK(err, "prog_query"))
+		goto cleanup4;
+
+	ASSERT_EQ(optq.count, 4, "count");
+	ASSERT_EQ(optq.revision, 5, "revision");
+	ASSERT_EQ(optq.prog_ids[0], id2, "prog_ids[0]");
+	ASSERT_EQ(optq.prog_ids[1], id3, "prog_ids[1]");
+	ASSERT_EQ(optq.prog_ids[2], id1, "prog_ids[2]");
+	ASSERT_EQ(optq.prog_ids[3], id4, "prog_ids[3]");
+	ASSERT_EQ(optq.prog_ids[4], 0, "prog_ids[4]");
+	ASSERT_EQ(optq.link_ids, NULL, "link_ids");
+
+cleanup4:
+	bpf_link__destroy(link4);
+	assert_mprog_count(cg, atype, 3);
+
+cleanup3:
+	err = bpf_prog_detach_opts(fd3, cg, atype, &optd);
+	ASSERT_OK(err, "prog_detach");
+	assert_mprog_count(cg, atype, 2);
+
+	/* Check revision after two detach operations */
+	err = bpf_prog_query_opts(cg, atype, &optq);
+	ASSERT_OK(err, "prog_query");
+	ASSERT_EQ(optq.revision, 7, "revision");
+
+cleanup2:
+	bpf_link__destroy(link2);
+	assert_mprog_count(cg, atype, 1);
+
+cleanup1:
+	err = bpf_prog_detach_opts(fd1, cg, atype, &optd);
+	ASSERT_OK(err, "prog_detach");
+	assert_mprog_count(cg, atype, 0);
+
+cleanup:
+	cgroup_mprog__destroy(skel);
+	close(cg);
+}
+
+static void test_preorder_prog_attach_detach(int atype)
+{
+	LIBBPF_OPTS(bpf_prog_attach_opts, opta);
+	LIBBPF_OPTS(bpf_prog_detach_opts, optd);
+	__u32 fd1, fd2, fd3, fd4;
+	struct cgroup_mprog *skel;
+	int cg, err;
+
+	cg = test__join_cgroup("/preorder_prog_attach_detach");
+	if (!ASSERT_GE(cg, 0, "join_cgroup /preorder_prog_attach_detach"))
+		return;
+
+	skel = cgroup_mprog__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "skel_load"))
+		goto cleanup;
+
+	fd1 = bpf_program__fd(skel->progs.getsockopt_1);
+	fd2 = bpf_program__fd(skel->progs.getsockopt_2);
+	fd3 = bpf_program__fd(skel->progs.getsockopt_3);
+	fd4 = bpf_program__fd(skel->progs.getsockopt_4);
+
+	assert_mprog_count(cg, atype, 0);
+
+	LIBBPF_OPTS_RESET(opta,
+		.flags = BPF_F_ALLOW_MULTI,
+		.expected_revision = 1,
+	);
+
+	/* ordering: [fd1] */
+	err = bpf_prog_attach_opts(fd1, cg, atype, &opta);
+	if (!ASSERT_EQ(err, 0, "prog_attach"))
+		goto cleanup;
+
+	assert_mprog_count(cg, atype, 1);
+
+	LIBBPF_OPTS_RESET(opta,
+		.flags = BPF_F_ALLOW_MULTI | BPF_F_PREORDER,
+		.expected_revision = 2,
+	);
+
+	/* ordering: [fd1, fd2] */
+	err = bpf_prog_attach_opts(fd2, cg, atype, &opta);
+	if (!ASSERT_EQ(err, 0, "prog_attach"))
+		goto cleanup1;
+
+	assert_mprog_count(cg, atype, 2);
+
+	LIBBPF_OPTS_RESET(opta,
+		.flags = BPF_F_ALLOW_MULTI | BPF_F_AFTER,
+		.relative_fd = fd2,
+		.expected_revision = 3,
+	);
+
+	err = bpf_prog_attach_opts(fd3, cg, atype, &opta);
+	if (!ASSERT_EQ(err, -EINVAL, "prog_attach"))
+		goto cleanup2;
+
+	assert_mprog_count(cg, atype, 2);
+
+	LIBBPF_OPTS_RESET(opta,
+		.flags = BPF_F_ALLOW_MULTI | BPF_F_AFTER | BPF_F_PREORDER,
+		.relative_fd = fd2,
+		.expected_revision = 3,
+	);
+
+	/* ordering: [fd1, fd2, fd3] */
+	err = bpf_prog_attach_opts(fd3, cg, atype, &opta);
+	if (!ASSERT_EQ(err, 0, "prog_attach"))
+		goto cleanup2;
+
+	assert_mprog_count(cg, atype, 3);
+
+	LIBBPF_OPTS_RESET(opta,
+		.flags = BPF_F_ALLOW_MULTI,
+		.expected_revision = 4,
+	);
+
+	/* ordering: [fd2, fd3, fd1, fd4] */
+	err = bpf_prog_attach_opts(fd4, cg, atype, &opta);
+	if (!ASSERT_EQ(err, 0, "prog_attach"))
+		goto cleanup3;
+
+	assert_mprog_count(cg, atype, 4);
+
+	err = bpf_prog_detach_opts(fd4, cg, atype, &optd);
+	ASSERT_OK(err, "prog_detach");
+	assert_mprog_count(cg, atype, 3);
+
+cleanup3:
+	err = bpf_prog_detach_opts(fd3, cg, atype, &optd);
+	ASSERT_OK(err, "prog_detach");
+	assert_mprog_count(cg, atype, 2);
+
+cleanup2:
+	err = bpf_prog_detach_opts(fd2, cg, atype, &optd);
+	ASSERT_OK(err, "prog_detach");
+	assert_mprog_count(cg, atype, 1);
+
+cleanup1:
+	err = bpf_prog_detach_opts(fd1, cg, atype, &optd);
+	ASSERT_OK(err, "prog_detach");
+	assert_mprog_count(cg, atype, 0);
+
+cleanup:
+	cgroup_mprog__destroy(skel);
+	close(cg);
+}
+
+static void test_preorder_link_attach_detach(int atype)
+{
+	LIBBPF_OPTS(bpf_cgroup_opts, opta);
+	struct bpf_link *link1, *link2, *link3, *link4;
+	struct cgroup_mprog *skel;
+	__u32 fd2;
+	int cg;
+
+	cg = test__join_cgroup("/preorder_link_attach_detach");
+	if (!ASSERT_GE(cg, 0, "join_cgroup /preorder_link_attach_detach"))
+		return;
+
+	skel = cgroup_mprog__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "skel_load"))
+		goto cleanup;
+
+	fd2 = bpf_program__fd(skel->progs.getsockopt_2);
+
+	assert_mprog_count(cg, atype, 0);
+
+	LIBBPF_OPTS_RESET(opta,
+		.expected_revision = 1,
+	);
+
+	/* ordering: [fd1] */
+	link1 = bpf_program__attach_cgroup_opts(skel->progs.getsockopt_1, cg, &opta);
+	if (!ASSERT_OK_PTR(link1, "link_attach"))
+		goto cleanup;
+
+	assert_mprog_count(cg, atype, 1);
+
+	LIBBPF_OPTS_RESET(opta,
+		.flags = BPF_F_PREORDER,
+		.expected_revision = 2,
+	);
+
+	/* ordering: [fd1, fd2] */
+	link2 = bpf_program__attach_cgroup_opts(skel->progs.getsockopt_2, cg, &opta);
+	if (!ASSERT_OK_PTR(link2, "link_attach"))
+		goto cleanup1;
+
+	assert_mprog_count(cg, atype, 2);
+
+	LIBBPF_OPTS_RESET(opta,
+		.flags = BPF_F_AFTER,
+		.relative_fd = fd2,
+		.expected_revision = 3,
+	);
+
+	link3 = bpf_program__attach_cgroup_opts(skel->progs.getsockopt_3, cg, &opta);
+	if (!ASSERT_ERR_PTR(link3, "link_attach"))
+		goto cleanup2;
+
+	assert_mprog_count(cg, atype, 2);
+
+	LIBBPF_OPTS_RESET(opta,
+		.flags = BPF_F_AFTER | BPF_F_PREORDER,
+		.relative_fd = fd2,
+		.expected_revision = 3,
+	);
+
+	/* ordering: [fd1, fd2, fd3] */
+	link3 = bpf_program__attach_cgroup_opts(skel->progs.getsockopt_3, cg, &opta);
+	if (!ASSERT_OK_PTR(link3, "link_attach"))
+		goto cleanup2;
+
+	assert_mprog_count(cg, atype, 3);
+
+	LIBBPF_OPTS_RESET(opta,
+		.expected_revision = 4,
+	);
+
+	/* ordering: [fd2, fd3, fd1, fd4] */
+	link4 = bpf_program__attach_cgroup_opts(skel->progs.getsockopt_4, cg, &opta);
+	if (!ASSERT_OK_PTR(link4, "prog_attach"))
+		goto cleanup3;
+
+	assert_mprog_count(cg, atype, 4);
+
+	bpf_link__destroy(link4);
+	assert_mprog_count(cg, atype, 3);
+
+cleanup3:
+	bpf_link__destroy(link3);
+	assert_mprog_count(cg, atype, 2);
+
+cleanup2:
+	bpf_link__destroy(link2);
+	assert_mprog_count(cg, atype, 1);
+
+cleanup1:
+	bpf_link__destroy(link1);
+	assert_mprog_count(cg, atype, 0);
+
+cleanup:
+	cgroup_mprog__destroy(skel);
+	close(cg);
+}
+
+static void test_invalid_attach_detach(int atype)
+{
+	LIBBPF_OPTS(bpf_prog_attach_opts, opta);
+	__u32 fd1, fd2, id2;
+	struct cgroup_mprog *skel;
+	int cg, err;
+
+	cg = test__join_cgroup("/invalid_attach_detach");
+	if (!ASSERT_GE(cg, 0, "join_cgroup /invalid_attach_detach"))
+		return;
+
+	skel = cgroup_mprog__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "skel_load"))
+		goto cleanup;
+
+	fd1 = bpf_program__fd(skel->progs.getsockopt_1);
+	fd2 = bpf_program__fd(skel->progs.getsockopt_2);
+
+	id2 = id_from_prog_fd(fd2);
+
+	assert_mprog_count(cg, atype, 0);
+
+	LIBBPF_OPTS_RESET(opta,
+		.flags = BPF_F_ALLOW_MULTI | BPF_F_BEFORE | BPF_F_AFTER,
+	);
+
+	err = bpf_prog_attach_opts(fd1, cg, atype, &opta);
+	ASSERT_EQ(err, -EINVAL, "prog_attach");
+	assert_mprog_count(cg, atype, 0);
+
+	LIBBPF_OPTS_RESET(opta,
+		.flags = BPF_F_ALLOW_MULTI | BPF_F_BEFORE | BPF_F_ID,
+	);
+
+	err = bpf_prog_attach_opts(fd1, cg, atype, &opta);
+	ASSERT_EQ(err, -ENOENT, "prog_attach");
+	assert_mprog_count(cg, atype, 0);
+
+	LIBBPF_OPTS_RESET(opta,
+		.flags = BPF_F_ALLOW_MULTI | BPF_F_AFTER | BPF_F_ID,
+	);
+
+	err = bpf_prog_attach_opts(fd1, cg, atype, &opta);
+	ASSERT_EQ(err, -ENOENT, "prog_attach");
+	assert_mprog_count(cg, atype, 0);
+
+	LIBBPF_OPTS_RESET(opta,
+		.flags = BPF_F_ALLOW_MULTI | BPF_F_BEFORE | BPF_F_AFTER,
+		.relative_id = id2,
+	);
+
+	err = bpf_prog_attach_opts(fd1, cg, atype, &opta);
+	ASSERT_EQ(err, -EINVAL, "prog_attach");
+	assert_mprog_count(cg, atype, 0);
+
+	LIBBPF_OPTS_RESET(opta,
+		.flags = BPF_F_ALLOW_MULTI | BPF_F_ID,
+		.relative_id = id2,
+	);
+
+	err = bpf_prog_attach_opts(fd1, cg, atype, &opta);
+	ASSERT_EQ(err, -EINVAL, "prog_attach");
+	assert_mprog_count(cg, atype, 0);
+
+	LIBBPF_OPTS_RESET(opta,
+		.flags = BPF_F_ALLOW_MULTI | BPF_F_BEFORE,
+		.relative_fd = fd1,
+	);
+
+	err = bpf_prog_attach_opts(fd1, cg, atype, &opta);
+	ASSERT_EQ(err, -ENOENT, "prog_attach");
+	assert_mprog_count(cg, atype, 0);
+
+	LIBBPF_OPTS_RESET(opta,
+		.flags = BPF_F_ALLOW_MULTI | BPF_F_AFTER,
+		.relative_fd = fd1,
+	);
+
+	err = bpf_prog_attach_opts(fd1, cg, atype, &opta);
+	ASSERT_EQ(err, -ENOENT, "prog_attach");
+	assert_mprog_count(cg, atype, 0);
+
+	LIBBPF_OPTS_RESET(opta,
+		.flags = BPF_F_ALLOW_MULTI,
+	);
+
+	err = bpf_prog_attach_opts(fd1, cg, atype, &opta);
+	if (!ASSERT_EQ(err, 0, "prog_attach"))
+		goto cleanup;
+	assert_mprog_count(cg, atype, 1);
+
+	LIBBPF_OPTS_RESET(opta,
+		.flags = BPF_F_ALLOW_MULTI | BPF_F_AFTER,
+	);
+
+	err = bpf_prog_attach_opts(fd1, cg, atype, &opta);
+	ASSERT_EQ(err, -EINVAL, "prog_attach");
+	assert_mprog_count(cg, atype, 1);
+
+	LIBBPF_OPTS_RESET(opta,
+		.flags = BPF_F_ALLOW_MULTI | BPF_F_REPLACE | BPF_F_AFTER,
+		.replace_prog_fd = fd1,
+	);
+
+	err = bpf_prog_attach_opts(fd1, cg, atype, &opta);
+	ASSERT_EQ(err, -EINVAL, "prog_attach");
+	assert_mprog_count(cg, atype, 1);
+cleanup:
+	cgroup_mprog__destroy(skel);
+	close(cg);
+}
+
+void test_cgroup_mprog_opts(void)
+{
+	if (test__start_subtest("prog_attach_detach"))
+		test_prog_attach_detach(BPF_CGROUP_GETSOCKOPT);
+	if (test__start_subtest("link_attach_detach"))
+		test_link_attach_detach(BPF_CGROUP_GETSOCKOPT);
+	if (test__start_subtest("mix_attach_detach"))
+		test_mix_attach_detach(BPF_CGROUP_GETSOCKOPT);
+	if (test__start_subtest("preorder_prog_attach_detach"))
+		test_preorder_prog_attach_detach(BPF_CGROUP_GETSOCKOPT);
+	if (test__start_subtest("preorder_link_attach_detach"))
+		test_preorder_link_attach_detach(BPF_CGROUP_GETSOCKOPT);
+	if (test__start_subtest("invalid_attach_detach"))
+		test_invalid_attach_detach(BPF_CGROUP_GETSOCKOPT);
+}
diff --git a/tools/testing/selftests/bpf/prog_tests/cgroup_mprog_ordering.c b/tools/testing/selftests/bpf/prog_tests/cgroup_mprog_ordering.c
new file mode 100644
index 000000000000..4a4e9710b474
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/cgroup_mprog_ordering.c
@@ -0,0 +1,77 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2025 Meta Platforms, Inc. and affiliates. */
+#include <test_progs.h>
+#include "cgroup_helpers.h"
+#include "cgroup_preorder.skel.h"
+
+static int run_getsockopt_test(int cg_parent, int sock_fd, bool has_relative_fd)
+{
+	LIBBPF_OPTS(bpf_prog_attach_opts, opts);
+	enum bpf_attach_type prog_p_atype, prog_p2_atype;
+	int prog_p_fd, prog_p2_fd;
+	struct cgroup_preorder *skel = NULL;
+	struct bpf_program *prog;
+	__u8 *result, buf;
+	socklen_t optlen;
+	int err = 0;
+
+	skel = cgroup_preorder__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "cgroup_preorder__open_and_load"))
+		return 0;
+
+	LIBBPF_OPTS_RESET(opts);
+	opts.flags = BPF_F_ALLOW_MULTI;
+	prog = skel->progs.parent;
+	prog_p_fd = bpf_program__fd(prog);
+	prog_p_atype = bpf_program__expected_attach_type(prog);
+	err = bpf_prog_attach_opts(prog_p_fd, cg_parent, prog_p_atype, &opts);
+	if (!ASSERT_OK(err, "bpf_prog_attach_opts-parent"))
+		goto close_skel;
+
+	opts.flags = BPF_F_ALLOW_MULTI | BPF_F_BEFORE;
+	if (has_relative_fd)
+		opts.relative_fd = prog_p_fd;
+	prog = skel->progs.parent_2;
+	prog_p2_fd = bpf_program__fd(prog);
+	prog_p2_atype = bpf_program__expected_attach_type(prog);
+	err = bpf_prog_attach_opts(prog_p2_fd, cg_parent, prog_p2_atype, &opts);
+	if (!ASSERT_OK(err, "bpf_prog_attach_opts-parent_2"))
+		goto detach_parent;
+
+	err = getsockopt(sock_fd, SOL_IP, IP_TOS, &buf, &optlen);
+	if (!ASSERT_OK(err, "getsockopt"))
+		goto detach_parent_2;
+
+	result = skel->bss->result;
+	ASSERT_TRUE(result[0] == 4 && result[1] == 3, "result values");
+
+detach_parent_2:
+	ASSERT_OK(bpf_prog_detach2(prog_p2_fd, cg_parent, prog_p2_atype),
+		  "bpf_prog_detach2-parent_2");
+detach_parent:
+	ASSERT_OK(bpf_prog_detach2(prog_p_fd, cg_parent, prog_p_atype),
+		  "bpf_prog_detach2-parent");
+close_skel:
+	cgroup_preorder__destroy(skel);
+	return err;
+}
+
+void test_cgroup_mprog_ordering(void)
+{
+	int cg_parent = -1, sock_fd = -1;
+
+	cg_parent = test__join_cgroup("/parent");
+	if (!ASSERT_GE(cg_parent, 0, "join_cgroup /parent"))
+		goto out;
+
+	sock_fd = socket(AF_INET, SOCK_STREAM, 0);
+	if (!ASSERT_GE(sock_fd, 0, "socket"))
+		goto out;
+
+	ASSERT_OK(run_getsockopt_test(cg_parent, sock_fd, false), "getsockopt_test_1");
+	ASSERT_OK(run_getsockopt_test(cg_parent, sock_fd, true), "getsockopt_test_2");
+
+out:
+	close(sock_fd);
+	close(cg_parent);
+}
diff --git a/tools/testing/selftests/bpf/progs/cgroup_mprog.c b/tools/testing/selftests/bpf/progs/cgroup_mprog.c
new file mode 100644
index 000000000000..6a0ea02c4de2
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/cgroup_mprog.c
@@ -0,0 +1,30 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2025 Meta Platforms, Inc. and affiliates. */
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+
+char _license[] SEC("license") = "GPL";
+
+SEC("cgroup/getsockopt")
+int getsockopt_1(struct bpf_sockopt *ctx)
+{
+	return 1;
+}
+
+SEC("cgroup/getsockopt")
+int getsockopt_2(struct bpf_sockopt *ctx)
+{
+	return 1;
+}
+
+SEC("cgroup/getsockopt")
+int getsockopt_3(struct bpf_sockopt *ctx)
+{
+	return 1;
+}
+
+SEC("cgroup/getsockopt")
+int getsockopt_4(struct bpf_sockopt *ctx)
+{
+	return 1;
+}
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH bpf-next v2 2/4] bpf: Implement mprog API on top of existing cgroup progs
  2025-05-08 22:35 ` [PATCH bpf-next v2 2/4] bpf: Implement mprog API on top of existing cgroup progs Yonghong Song
@ 2025-05-15 20:38   ` Andrii Nakryiko
  2025-05-15 21:05     ` Andrii Nakryiko
  2025-05-17 15:46     ` Yonghong Song
  0 siblings, 2 replies; 11+ messages in thread
From: Andrii Nakryiko @ 2025-05-15 20:38 UTC (permalink / raw)
  To: Yonghong Song
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, Martin KaFai Lau

On Thu, May 8, 2025 at 3:35 PM Yonghong Song <yonghong.song@linux.dev> wrote:
>
> Current cgroup prog ordering is appending at attachment time. This is not
> ideal. In some cases, users want specific ordering at a particular cgroup
> level. To address this, the existing mprog API seems an ideal solution with
> supporting BPF_F_BEFORE and BPF_F_AFTER flags.
>
> But there are a few obstacles to directly use kernel mprog interface.
> Currently cgroup bpf progs already support prog attach/detach/replace
> and link-based attach/detach/replace. For example, in struct
> bpf_prog_array_item, the cgroup_storage field needs to be together
> with bpf prog. But the mprog API struct bpf_mprog_fp only has bpf_prog
> as the member, which makes it difficult to use kernel mprog interface.
>
> In another case, the current cgroup prog detach tries to use the
> same flag as in attach. This is different from mprog kernel interface
> which uses flags passed from user space.
>
> So to avoid modifying existing behavior, I made the following changes to
> support mprog API for cgroup progs:
>  - The support is for prog list at cgroup level. Cross-level prog list
>    (a.k.a. effective prog list) is not supported.
>  - Previously, BPF_F_PREORDER is supported only for prog attach, now
>    BPF_F_PREORDER is also supported by link-based attach.
>  - For attach, BPF_F_BEFORE/BPF_F_AFTER/BPF_F_ID is supported similar to
>    kernel mprog but with different implementation.
>  - For detach and replace, use the existing implementation.
>  - For attach, detach and replace, the revision for a particular prog
>    list, associated with a particular attach type, will be updated
>    by increasing count by 1.
>
> Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
> ---
>  include/uapi/linux/bpf.h       |   7 ++
>  kernel/bpf/cgroup.c            | 144 ++++++++++++++++++++++++++++-----
>  kernel/bpf/syscall.c           |  44 ++++++----
>  tools/include/uapi/linux/bpf.h |   7 ++
>  4 files changed, 165 insertions(+), 37 deletions(-)
>

[...]

> +       if (!anchor_prog) {
> +               hlist_for_each_entry(pltmp, progs, node) {
> +                       if ((flags & BPF_F_BEFORE) && *ppltmp)
> +                               break;
> +                       *ppltmp = pltmp;

This is correct, but it's less obvious why because of all the
loops, breaks, and NULL anchor prog. The idea here is to find the very
first pl for BPF_F_BEFORE or the very last for BPF_F_AFTER, right? So
wouldn't this be more obviously correct:

hlist_for_each_entry(pltmp, progs, node) {
    if (flags & BPF_F_BEFORE) {
        *ppltmp = pltmp;
        return NULL;
    }
    *ppltmp = pltmp;
}
return NULL;


I.e., once you know the result, just return as early as possible and
don't require tracing through the rest of the code just to eventually
return all the same (but now somewhat disguised) values.


Though see my point about anchor_prog below, which will simplify this
to just `return pltmp;`


I'd also add a comment that if there is no anchor_prog, then
BPF_F_PREORDER doesn't matter because we either prepend or append to a
combined list of progs and end up with correct result
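
Something along these lines, perhaps (exact wording is of course up to you):

/* No anchor prog: BPF_F_PREORDER doesn't matter here, since we either
 * prepend (BPF_F_BEFORE) or append (BPF_F_AFTER) to the combined list
 * of progs, which gives the correct order either way.
 */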

> +               }
> +       }  else {
> +               hlist_for_each_entry(pltmp, progs, node) {
> +                       pltmp_prog = pltmp->link ? pltmp->link->link.prog : pltmp->prog;
> +                       if (pltmp_prog != anchor_prog)
> +                               continue;
> +                       if (!!(pltmp->flags & BPF_F_PREORDER) != preorder)
> +                               goto out;
> +                       *ppltmp = pltmp;
> +                       break;
> +               }
> +               if (!*ppltmp) {
> +                       ret = -ENOENT;
> +                       goto out;
> +               }
> +       }
> +
> +       return anchor_prog;
> +
> +out:
> +       bpf_prog_put(anchor_prog);
> +       return ERR_PTR(ret);
> +}
> +
> +static int insert_pl_to_hlist(struct bpf_prog_list *pl, struct hlist_head *progs,
> +                             struct bpf_prog *prog, u32 flags, u32 id_or_fd)
> +{
> +       struct bpf_prog_list *pltmp = NULL;
> +       struct bpf_prog *anchor_prog;
> +
> +       /* flags cannot have both BPF_F_BEFORE and BPF_F_AFTER */
> +       if ((flags & BPF_F_BEFORE) && (flags & BPF_F_AFTER))
> +               return -EINVAL;

I think this should be handled by get_anchor_prog(), both BPF_F_AFTER
and BPF_F_BEFORE will just result in no valid anchor program and we'll
error out below

> +
> +       anchor_prog = get_anchor_prog(progs, prog, flags, id_or_fd, &pltmp);
> +       if (IS_ERR(anchor_prog))
> +               return PTR_ERR(anchor_prog);

it's confusing that we return anchor_prog but actually never use it,
no? wouldn't it make more sense to just return struct bpf_prog_list *
for an anchor then?
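
E.g., hypothetically (name and exact parameter list just for illustration):

static struct bpf_prog_list *get_anchor_pl(struct hlist_head *progs,
					   struct bpf_prog *prog, u32 flags,
					   u32 id_or_fd);

so that insert_pl_to_hlist() only ever deals with the list entry it needs.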

> +
> +       if (hlist_empty(progs))
> +               hlist_add_head(&pl->node, progs);
> +       else if (flags & BPF_F_BEFORE)
> +               hlist_add_before(&pl->node, &pltmp->node);
> +       else
> +               hlist_add_behind(&pl->node, &pltmp->node);
> +
> +       return 0;
> +}
> +
>  /**
>   * __cgroup_bpf_attach() - Attach the program or the link to a cgroup, and
>   *                         propagate the change to descendants
> @@ -633,6 +710,8 @@ static struct bpf_prog_list *find_attach_entry(struct hlist_head *progs,
>   * @replace_prog: Previously attached program to replace if BPF_F_REPLACE is set
>   * @type: Type of attach operation
>   * @flags: Option flags
> + * @id_or_fd: Relative prog id or fd
> + * @revision: bpf_prog_list revision
>   *
>   * Exactly one of @prog or @link can be non-null.
>   * Must be called with cgroup_mutex held.
> @@ -640,7 +719,8 @@ static struct bpf_prog_list *find_attach_entry(struct hlist_head *progs,
>  static int __cgroup_bpf_attach(struct cgroup *cgrp,
>                                struct bpf_prog *prog, struct bpf_prog *replace_prog,
>                                struct bpf_cgroup_link *link,
> -                              enum bpf_attach_type type, u32 flags)
> +                              enum bpf_attach_type type, u32 flags, u32 id_or_fd,
> +                              u64 revision)
>  {
>         u32 saved_flags = (flags & (BPF_F_ALLOW_OVERRIDE | BPF_F_ALLOW_MULTI));
>         struct bpf_prog *old_prog = NULL;
> @@ -656,6 +736,9 @@ static int __cgroup_bpf_attach(struct cgroup *cgrp,
>             ((flags & BPF_F_REPLACE) && !(flags & BPF_F_ALLOW_MULTI)))
>                 /* invalid combination */
>                 return -EINVAL;
> +       if ((flags & BPF_F_REPLACE) && (flags & (BPF_F_BEFORE | BPF_F_AFTER)))
> +               /* only either replace or insertion with before/after */
> +               return -EINVAL;
>         if (link && (prog || replace_prog))
>                 /* only either link or prog/replace_prog can be specified */
>                 return -EINVAL;
> @@ -663,9 +746,12 @@ static int __cgroup_bpf_attach(struct cgroup *cgrp,
>                 /* replace_prog implies BPF_F_REPLACE, and vice versa */
>                 return -EINVAL;
>
> +

nit: unnecessary empty line?

>         atype = bpf_cgroup_atype_find(type, new_prog->aux->attach_btf_id);
>         if (atype < 0)
>                 return -EINVAL;
> +       if (revision && revision != cgrp->bpf.revisions[atype])
> +               return -ESTALE;
>
>         progs = &cgrp->bpf.progs[atype];
>

[...]

> @@ -1312,7 +1409,8 @@ int cgroup_bpf_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
>         struct cgroup *cgrp;
>         int err;
>
> -       if (attr->link_create.flags)
> +       if (attr->link_create.flags &&
> +           (attr->link_create.flags & (~(BPF_F_ID | BPF_F_BEFORE | BPF_F_AFTER | BPF_F_PREORDER))))

why the `attr->link_create.flags &&` check, seems unnecessary


also looking at BPF_F_ATTACH_MASK_MPROG, not allowing BPF_F_REPLACE
makes sense, but BPF_F_LINK makes sense for ordering, no?

>                 return -EINVAL;
>
>         cgrp = cgroup_get_from_fd(attr->link_create.target_fd);
> @@ -1336,7 +1434,9 @@ int cgroup_bpf_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
>         }
>
>         err = cgroup_bpf_attach(cgrp, NULL, NULL, link,
> -                               link->type, BPF_F_ALLOW_MULTI);
> +                               link->type, BPF_F_ALLOW_MULTI | attr->link_create.flags,
> +                               attr->link_create.cgroup.relative_fd,
> +                               attr->link_create.cgroup.expected_revision);
>         if (err) {
>                 bpf_link_cleanup(&link_primer);
>                 goto out_put_cgroup;
> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> index df33d19c5c3b..58ea3c38eabb 100644
> --- a/kernel/bpf/syscall.c
> +++ b/kernel/bpf/syscall.c
> @@ -4184,6 +4184,25 @@ static int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog,
>         }
>  }
>
> +static bool is_cgroup_prog_type(enum bpf_prog_type ptype, enum bpf_attach_type atype,
> +                               bool check_atype)
> +{
> +       switch (ptype) {
> +       case BPF_PROG_TYPE_CGROUP_DEVICE:
> +       case BPF_PROG_TYPE_CGROUP_SKB:
> +       case BPF_PROG_TYPE_CGROUP_SOCK:
> +       case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
> +       case BPF_PROG_TYPE_CGROUP_SOCKOPT:
> +       case BPF_PROG_TYPE_CGROUP_SYSCTL:
> +       case BPF_PROG_TYPE_SOCK_OPS:
> +               return true;
> +       case BPF_PROG_TYPE_LSM:
> +               return check_atype ? atype == BPF_LSM_CGROUP : true;
> +       default:
> +               return false;
> +       }
> +}
> +
>  #define BPF_PROG_ATTACH_LAST_FIELD expected_revision
>
>  #define BPF_F_ATTACH_MASK_BASE \
> @@ -4214,6 +4233,9 @@ static int bpf_prog_attach(const union bpf_attr *attr)
>         if (bpf_mprog_supported(ptype)) {
>                 if (attr->attach_flags & ~BPF_F_ATTACH_MASK_MPROG)
>                         return -EINVAL;
> +       } else if (is_cgroup_prog_type(ptype, 0, false)) {
> +               if (attr->attach_flags & BPF_F_LINK)
> +                       return -EINVAL;

Why disable BPF_F_LINK? It's just a matter of using FD/ID for link vs
program to specify the place to attach. It doesn't mean that we need
to attach through BPF link interface. Or am I misremembering?

>         } else {
>                 if (attr->attach_flags & ~BPF_F_ATTACH_MASK_BASE)
>                         return -EINVAL;
> @@ -4242,20 +4264,6 @@ static int bpf_prog_attach(const union bpf_attr *attr)
>         case BPF_PROG_TYPE_FLOW_DISSECTOR:
>                 ret = netns_bpf_prog_attach(attr, prog);
>                 break;
> -       case BPF_PROG_TYPE_CGROUP_DEVICE:
> -       case BPF_PROG_TYPE_CGROUP_SKB:
> -       case BPF_PROG_TYPE_CGROUP_SOCK:
> -       case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
> -       case BPF_PROG_TYPE_CGROUP_SOCKOPT:
> -       case BPF_PROG_TYPE_CGROUP_SYSCTL:
> -       case BPF_PROG_TYPE_SOCK_OPS:
> -       case BPF_PROG_TYPE_LSM:
> -               if (ptype == BPF_PROG_TYPE_LSM &&
> -                   prog->expected_attach_type != BPF_LSM_CGROUP)
> -                       ret = -EINVAL;
> -               else
> -                       ret = cgroup_bpf_prog_attach(attr, ptype, prog);
> -               break;
>         case BPF_PROG_TYPE_SCHED_CLS:
>                 if (attr->attach_type == BPF_TCX_INGRESS ||
>                     attr->attach_type == BPF_TCX_EGRESS)
> @@ -4264,7 +4272,10 @@ static int bpf_prog_attach(const union bpf_attr *attr)
>                         ret = netkit_prog_attach(attr, prog);
>                 break;
>         default:
> -               ret = -EINVAL;
> +               if (!is_cgroup_prog_type(ptype, prog->expected_attach_type, true))
> +                       ret = -EINVAL;
> +               else
> +                       ret = cgroup_bpf_prog_attach(attr, ptype, prog);
>         }
>
>         if (ret)

[...]

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH bpf-next v2 1/4] cgroup: Add bpf prog revisions to struct cgroup_bpf
  2025-05-08 22:35 ` [PATCH bpf-next v2 1/4] cgroup: Add bpf prog revisions to struct cgroup_bpf Yonghong Song
@ 2025-05-15 20:39   ` Andrii Nakryiko
  0 siblings, 0 replies; 11+ messages in thread
From: Andrii Nakryiko @ 2025-05-15 20:39 UTC (permalink / raw)
  To: Yonghong Song
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, Martin KaFai Lau

On Thu, May 8, 2025 at 3:35 PM Yonghong Song <yonghong.song@linux.dev> wrote:
>
> One of the key items in the mprog API is the revision for the prog list. The revision
> number will be increased if the prog list changes, e.g., attach, detach
> or replace.
>
> Add a 'revisions' field to struct cgroup_bpf, representing revisions for
> all cgroup-related attachment types. The initial revision value is
> set to 1, the same as in the kernel mprog implementation.
>
> Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
> ---
>  include/linux/bpf-cgroup-defs.h | 1 +
>  kernel/cgroup/cgroup.c          | 5 +++++
>  2 files changed, 6 insertions(+)
>

LGTM

Acked-by: Andrii Nakryiko <andrii@kernel.org>

> diff --git a/include/linux/bpf-cgroup-defs.h b/include/linux/bpf-cgroup-defs.h
> index 0985221d5478..c9e6b26abab6 100644
> --- a/include/linux/bpf-cgroup-defs.h
> +++ b/include/linux/bpf-cgroup-defs.h
> @@ -63,6 +63,7 @@ struct cgroup_bpf {
>          */
>         struct hlist_head progs[MAX_CGROUP_BPF_ATTACH_TYPE];
>         u8 flags[MAX_CGROUP_BPF_ATTACH_TYPE];
> +       u64 revisions[MAX_CGROUP_BPF_ATTACH_TYPE];
>
>         /* list of cgroup shared storages */
>         struct list_head storages;
> diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
> index 63e5b90da1f3..260ce8fc4ea4 100644
> --- a/kernel/cgroup/cgroup.c
> +++ b/kernel/cgroup/cgroup.c
> @@ -2071,6 +2071,11 @@ static void init_cgroup_housekeeping(struct cgroup *cgrp)
>         for_each_subsys(ss, ssid)
>                 INIT_LIST_HEAD(&cgrp->e_csets[ssid]);
>
> +#ifdef CONFIG_CGROUP_BPF
> +       for (int i = 0; i < ARRAY_SIZE(cgrp->bpf.revisions); i++)
> +               cgrp->bpf.revisions[i] = 1;
> +#endif
> +
>         init_waitqueue_head(&cgrp->offline_waitq);
>         INIT_WORK(&cgrp->release_agent_work, cgroup1_release_agent);
>  }
> --
> 2.47.1
>

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH bpf-next v2 3/4] libbpf: Support link-based cgroup attach with options
  2025-05-08 22:35 ` [PATCH bpf-next v2 3/4] libbpf: Support link-based cgroup attach with options Yonghong Song
@ 2025-05-15 20:42   ` Andrii Nakryiko
  0 siblings, 0 replies; 11+ messages in thread
From: Andrii Nakryiko @ 2025-05-15 20:42 UTC (permalink / raw)
  To: Yonghong Song
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, Martin KaFai Lau

On Thu, May 8, 2025 at 3:35 PM Yonghong Song <yonghong.song@linux.dev> wrote:
>
> Currently libbpf supports bpf_program__attach_cgroup() with signature:
>   LIBBPF_API struct bpf_link *
>   bpf_program__attach_cgroup(const struct bpf_program *prog, int cgroup_fd);
>
> To support mprog style attachment, additional fields like flags,
> relative_{fd,id} and expected_revision are needed.
>
> Add a new API:
>   LIBBPF_API struct bpf_link *
>   bpf_program__attach_cgroup_opts(const struct bpf_program *prog, int cgroup_fd,
>                                   const struct bpf_cgroup_opts *opts);
> where bpf_cgroup_opts contains all above needed fields.
>
> Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
> ---
>  tools/lib/bpf/bpf.c      | 44 ++++++++++++++++++++++++++++++++++++++++
>  tools/lib/bpf/bpf.h      |  5 +++++
>  tools/lib/bpf/libbpf.c   | 28 +++++++++++++++++++++++++
>  tools/lib/bpf/libbpf.h   | 15 ++++++++++++++
>  tools/lib/bpf/libbpf.map |  1 +
>  5 files changed, 93 insertions(+)
>

LGTM

Acked-by: Andrii Nakryiko <andrii@kernel.org>
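
Just for reference, an ordered attach with the new API could look roughly
like this (field names assumed from the commit description above; cgroup_fd
and anchor_prog_fd are placeholders, using the cgroup_mprog selftest progs
from patch 4; untested):

	LIBBPF_OPTS(bpf_cgroup_opts, opts,
		.flags = BPF_F_BEFORE,
		.relative_fd = anchor_prog_fd,
	);

	link = bpf_program__attach_cgroup_opts(skel->progs.getsockopt_2,
					       cgroup_fd, &opts);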

[...]

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH bpf-next v2 2/4] bpf: Implement mprog API on top of existing cgroup progs
  2025-05-15 20:38   ` Andrii Nakryiko
@ 2025-05-15 21:05     ` Andrii Nakryiko
  2025-05-17 15:49       ` Yonghong Song
  2025-05-17 15:46     ` Yonghong Song
  1 sibling, 1 reply; 11+ messages in thread
From: Andrii Nakryiko @ 2025-05-15 21:05 UTC (permalink / raw)
  To: Yonghong Song, Daniel Borkmann
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, kernel-team,
	Martin KaFai Lau

On Thu, May 15, 2025 at 1:38 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Thu, May 8, 2025 at 3:35 PM Yonghong Song <yonghong.song@linux.dev> wrote:
> >
> > Current cgroup prog ordering is appending at attachment time. This is not
> > ideal. In some cases, users want specific ordering at a particular cgroup
> > level. To address this, the existing mprog API seems an ideal solution with
> > supporting BPF_F_BEFORE and BPF_F_AFTER flags.
> >
> > But there are a few obstacles to directly use kernel mprog interface.
> > Currently cgroup bpf progs already support prog attach/detach/replace
> > and link-based attach/detach/replace. For example, in struct
> > bpf_prog_array_item, the cgroup_storage field needs to be together
> > with bpf prog. But the mprog API struct bpf_mprog_fp only has bpf_prog
> > as the member, which makes it difficult to use kernel mprog interface.
> >
> > In another case, the current cgroup prog detach tries to use the
> > same flag as in attach. This is different from mprog kernel interface
> > which uses flags passed from user space.
> >
> > So to avoid modifying existing behavior, I made the following changes to
> > support mprog API for cgroup progs:
> >  - The support is for prog list at cgroup level. Cross-level prog list
> >    (a.k.a. effective prog list) is not supported.
> >  - Previously, BPF_F_PREORDER is supported only for prog attach, now
> >    BPF_F_PREORDER is also supported by link-based attach.
> >  - For attach, BPF_F_BEFORE/BPF_F_AFTER/BPF_F_ID is supported similar to
> >    kernel mprog but with different implementation.
> >  - For detach and replace, use the existing implementation.
> >  - For attach, detach and replace, the revision for a particular prog
> >    list, associated with a particular attach type, will be updated
> >    by increasing count by 1.
> >
> > Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
> > ---
> >  include/uapi/linux/bpf.h       |   7 ++
> >  kernel/bpf/cgroup.c            | 144 ++++++++++++++++++++++++++++-----
> >  kernel/bpf/syscall.c           |  44 ++++++----
> >  tools/include/uapi/linux/bpf.h |   7 ++
> >  4 files changed, 165 insertions(+), 37 deletions(-)
> >
>
> [...]
>
> > +       if (!anchor_prog) {
> > +               hlist_for_each_entry(pltmp, progs, node) {
> > +                       if ((flags & BPF_F_BEFORE) && *ppltmp)
> > +                               break;
> > +                       *ppltmp = pltmp;
>
> This is correct, but it's less obvious why because of all the
> loops, breaks, and NULL anchor prog. The idea here is to find the very
> first pl for BPF_F_BEFORE or the very last for BPF_F_AFTER, right? So
> wouldn't this be more obviously correct:
>
> hlist_for_each_entry(pltmp, progs, node) {
>     if (flags & BPF_F_BEFORE) {
>         *ppltmp = pltmp;
>         return NULL;
>     }
>     *ppltmp = pltmp;
> }
> return NULL;
>
>
> I.e., once you know the result, just return as early as possible and
> don't require tracing through the rest of the code just to eventually
> return all the same (but now somewhat disguised) values.
>
>
> Though see my point about anchor_prog below, which will simplify this
> to just `return pltmp;`
>
>
> I'd also add a comment that if there is no anchor_prog, then
> BPF_F_PREORDER doesn't matter because we either prepend or append to a
> combined list of progs and end up with correct result
>
> > +               }
> > +       }  else {
> > +               hlist_for_each_entry(pltmp, progs, node) {
> > +                       pltmp_prog = pltmp->link ? pltmp->link->link.prog : pltmp->prog;
> > +                       if (pltmp_prog != anchor_prog)
> > +                               continue;
> > +                       if (!!(pltmp->flags & BPF_F_PREORDER) != preorder)
> > +                               goto out;
> > +                       *ppltmp = pltmp;
> > +                       break;
> > +               }
> > +               if (!*ppltmp) {
> > +                       ret = -ENOENT;
> > +                       goto out;
> > +               }
> > +       }
> > +
> > +       return anchor_prog;
> > +
> > +out:
> > +       bpf_prog_put(anchor_prog);
> > +       return ERR_PTR(ret);
> > +}
> > +
> > +static int insert_pl_to_hlist(struct bpf_prog_list *pl, struct hlist_head *progs,
> > +                             struct bpf_prog *prog, u32 flags, u32 id_or_fd)
> > +{
> > +       struct bpf_prog_list *pltmp = NULL;
> > +       struct bpf_prog *anchor_prog;
> > +
> > +       /* flags cannot have both BPF_F_BEFORE and BPF_F_AFTER */
> > +       if ((flags & BPF_F_BEFORE) && (flags & BPF_F_AFTER))
> > +               return -EINVAL;
>
> I think this should be handled by get_anchor_prog(), both BPF_F_AFTER
> and BPF_F_BEFORE will just result in no valid anchor program and we'll
> error out below

Oh, I just randomly realized that there is a special case that I think
is allowed by Daniel's mprog implementation, and it might be important
for some users. If both BPF_F_BEFORE and BPF_F_AFTER are specified and
there is no ID/FD, then this combination would succeed if and only if
the currently attached list of progs is empty. Check
bpf_mprog_attach() and how it handles BPF_F_BEFORE and BPF_F_AFTER
completely independently calculating tidx. If tidx ends up being
consistent (which should be -1 for an empty list), then that's where the
prog/link is inserted (-1 results in prepending into an empty list).
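
In the cgroup code that could be handled roughly like this (untested sketch,
error code and exact placement are just guesses):

	/* BPF_F_BEFORE | BPF_F_AFTER with no relative prog id/fd: only
	 * valid if the prog list is currently empty, mirroring what
	 * bpf_mprog_attach() ends up doing.
	 */
	if ((flags & BPF_F_BEFORE) && (flags & BPF_F_AFTER)) {
		if (id_or_fd || !hlist_empty(progs))
			return ERR_PTR(-EINVAL);
		return NULL; /* new prog simply becomes the head */
	}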


Daniel, can you please double check and generally take a look at this
patch set, given you have the most detailed knowledge of mprog
interface? Thanks!

>
> > +
> > +       anchor_prog = get_anchor_prog(progs, prog, flags, id_or_fd, &pltmp);
> > +       if (IS_ERR(anchor_prog))
> > +               return PTR_ERR(anchor_prog);
>
> it's confusing that we return anchor_prog but actually never use it,
> no? wouldn't it make more sense to just return struct bpf_prog_list *
> for an anchor then?
>
> > +
> > +       if (hlist_empty(progs))
> > +               hlist_add_head(&pl->node, progs);
> > +       else if (flags & BPF_F_BEFORE)
> > +               hlist_add_before(&pl->node, &pltmp->node);
> > +       else
> > +               hlist_add_behind(&pl->node, &pltmp->node);
> > +
> > +       return 0;
> > +}
> > +
> >  /**
> >   * __cgroup_bpf_attach() - Attach the program or the link to a cgroup, and
> >   *                         propagate the change to descendants
> > @@ -633,6 +710,8 @@ static struct bpf_prog_list *find_attach_entry(struct hlist_head *progs,
> >   * @replace_prog: Previously attached program to replace if BPF_F_REPLACE is set
> >   * @type: Type of attach operation
> >   * @flags: Option flags
> > + * @id_or_fd: Relative prog id or fd
> > + * @revision: bpf_prog_list revision
> >   *
> >   * Exactly one of @prog or @link can be non-null.
> >   * Must be called with cgroup_mutex held.
> > @@ -640,7 +719,8 @@ static struct bpf_prog_list *find_attach_entry(struct hlist_head *progs,
> >  static int __cgroup_bpf_attach(struct cgroup *cgrp,
> >                                struct bpf_prog *prog, struct bpf_prog *replace_prog,
> >                                struct bpf_cgroup_link *link,
> > -                              enum bpf_attach_type type, u32 flags)
> > +                              enum bpf_attach_type type, u32 flags, u32 id_or_fd,
> > +                              u64 revision)
> >  {
> >         u32 saved_flags = (flags & (BPF_F_ALLOW_OVERRIDE | BPF_F_ALLOW_MULTI));
> >         struct bpf_prog *old_prog = NULL;
> > @@ -656,6 +736,9 @@ static int __cgroup_bpf_attach(struct cgroup *cgrp,
> >             ((flags & BPF_F_REPLACE) && !(flags & BPF_F_ALLOW_MULTI)))
> >                 /* invalid combination */
> >                 return -EINVAL;
> > +       if ((flags & BPF_F_REPLACE) && (flags & (BPF_F_BEFORE | BPF_F_AFTER)))
> > +               /* only either replace or insertion with before/after */
> > +               return -EINVAL;
> >         if (link && (prog || replace_prog))
> >                 /* only either link or prog/replace_prog can be specified */
> >                 return -EINVAL;
> > @@ -663,9 +746,12 @@ static int __cgroup_bpf_attach(struct cgroup *cgrp,
> >                 /* replace_prog implies BPF_F_REPLACE, and vice versa */
> >                 return -EINVAL;
> >
> > +
>
> nit: unnecessary empty line?
>
> >         atype = bpf_cgroup_atype_find(type, new_prog->aux->attach_btf_id);
> >         if (atype < 0)
> >                 return -EINVAL;
> > +       if (revision && revision != cgrp->bpf.revisions[atype])
> > +               return -ESTALE;
> >
> >         progs = &cgrp->bpf.progs[atype];
> >
>
> [...]
>
> > @@ -1312,7 +1409,8 @@ int cgroup_bpf_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
> >         struct cgroup *cgrp;
> >         int err;
> >
> > -       if (attr->link_create.flags)
> > +       if (attr->link_create.flags &&
> > +           (attr->link_create.flags & (~(BPF_F_ID | BPF_F_BEFORE | BPF_F_AFTER | BPF_F_PREORDER))))
>
> why the `attr->link_create.flags &&` check, seems unnecessary
>
>
> also looking at BPF_F_ATTACH_MASK_MPROG, not allowing BPF_F_REPLACE
> makes sense, but BPF_F_LINK makes sense for ordering, no?
>
> >                 return -EINVAL;
> >
> >         cgrp = cgroup_get_from_fd(attr->link_create.target_fd);
> > @@ -1336,7 +1434,9 @@ int cgroup_bpf_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
> >         }
> >
> >         err = cgroup_bpf_attach(cgrp, NULL, NULL, link,
> > -                               link->type, BPF_F_ALLOW_MULTI);
> > +                               link->type, BPF_F_ALLOW_MULTI | attr->link_create.flags,
> > +                               attr->link_create.cgroup.relative_fd,
> > +                               attr->link_create.cgroup.expected_revision);
> >         if (err) {
> >                 bpf_link_cleanup(&link_primer);
> >                 goto out_put_cgroup;
> > diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> > index df33d19c5c3b..58ea3c38eabb 100644
> > --- a/kernel/bpf/syscall.c
> > +++ b/kernel/bpf/syscall.c
> > @@ -4184,6 +4184,25 @@ static int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog,
> >         }
> >  }
> >
> > +static bool is_cgroup_prog_type(enum bpf_prog_type ptype, enum bpf_attach_type atype,
> > +                               bool check_atype)
> > +{
> > +       switch (ptype) {
> > +       case BPF_PROG_TYPE_CGROUP_DEVICE:
> > +       case BPF_PROG_TYPE_CGROUP_SKB:
> > +       case BPF_PROG_TYPE_CGROUP_SOCK:
> > +       case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
> > +       case BPF_PROG_TYPE_CGROUP_SOCKOPT:
> > +       case BPF_PROG_TYPE_CGROUP_SYSCTL:
> > +       case BPF_PROG_TYPE_SOCK_OPS:
> > +               return true;
> > +       case BPF_PROG_TYPE_LSM:
> > +               return check_atype ? atype == BPF_LSM_CGROUP : true;
> > +       default:
> > +               return false;
> > +       }
> > +}
> > +
> >  #define BPF_PROG_ATTACH_LAST_FIELD expected_revision
> >
> >  #define BPF_F_ATTACH_MASK_BASE \
> > @@ -4214,6 +4233,9 @@ static int bpf_prog_attach(const union bpf_attr *attr)
> >         if (bpf_mprog_supported(ptype)) {
> >                 if (attr->attach_flags & ~BPF_F_ATTACH_MASK_MPROG)
> >                         return -EINVAL;
> > +       } else if (is_cgroup_prog_type(ptype, 0, false)) {
> > +               if (attr->attach_flags & BPF_F_LINK)
> > +                       return -EINVAL;
>
> Why disable BPF_F_LINK? It's just a matter of using FD/ID for link vs
> program to specify the place to attach. It doesn't mean that we need
> to attach through BPF link interface. Or am I misremembering?
>
> >         } else {
> >                 if (attr->attach_flags & ~BPF_F_ATTACH_MASK_BASE)
> >                         return -EINVAL;
> > @@ -4242,20 +4264,6 @@ static int bpf_prog_attach(const union bpf_attr *attr)
> >         case BPF_PROG_TYPE_FLOW_DISSECTOR:
> >                 ret = netns_bpf_prog_attach(attr, prog);
> >                 break;
> > -       case BPF_PROG_TYPE_CGROUP_DEVICE:
> > -       case BPF_PROG_TYPE_CGROUP_SKB:
> > -       case BPF_PROG_TYPE_CGROUP_SOCK:
> > -       case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
> > -       case BPF_PROG_TYPE_CGROUP_SOCKOPT:
> > -       case BPF_PROG_TYPE_CGROUP_SYSCTL:
> > -       case BPF_PROG_TYPE_SOCK_OPS:
> > -       case BPF_PROG_TYPE_LSM:
> > -               if (ptype == BPF_PROG_TYPE_LSM &&
> > -                   prog->expected_attach_type != BPF_LSM_CGROUP)
> > -                       ret = -EINVAL;
> > -               else
> > -                       ret = cgroup_bpf_prog_attach(attr, ptype, prog);
> > -               break;
> >         case BPF_PROG_TYPE_SCHED_CLS:
> >                 if (attr->attach_type == BPF_TCX_INGRESS ||
> >                     attr->attach_type == BPF_TCX_EGRESS)
> > @@ -4264,7 +4272,10 @@ static int bpf_prog_attach(const union bpf_attr *attr)
> >                         ret = netkit_prog_attach(attr, prog);
> >                 break;
> >         default:
> > -               ret = -EINVAL;
> > +               if (!is_cgroup_prog_type(ptype, prog->expected_attach_type, true))
> > +                       ret = -EINVAL;
> > +               else
> > +                       ret = cgroup_bpf_prog_attach(attr, ptype, prog);
> >         }
> >
> >         if (ret)
>
> [...]

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH bpf-next v2 2/4] bpf: Implement mprog API on top of existing cgroup progs
  2025-05-15 20:38   ` Andrii Nakryiko
  2025-05-15 21:05     ` Andrii Nakryiko
@ 2025-05-17 15:46     ` Yonghong Song
  1 sibling, 0 replies; 11+ messages in thread
From: Yonghong Song @ 2025-05-17 15:46 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, Martin KaFai Lau



On 5/15/25 4:38 AM, Andrii Nakryiko wrote:
> On Thu, May 8, 2025 at 3:35 PM Yonghong Song <yonghong.song@linux.dev> wrote:
>> Current cgroup prog ordering is appending at attachment time. This is not
>> ideal. In some cases, users want specific ordering at a particular cgroup
>> level. To address this, the existing mprog API seems an ideal solution with
>> supporting BPF_F_BEFORE and BPF_F_AFTER flags.
>>
>> But there are a few obstacles to directly use kernel mprog interface.
>> Currently cgroup bpf progs already support prog attach/detach/replace
>> and link-based attach/detach/replace. For example, in struct
>> bpf_prog_array_item, the cgroup_storage field needs to be together
>> with bpf prog. But the mprog API struct bpf_mprog_fp only has bpf_prog
>> as the member, which makes it difficult to use kernel mprog interface.
>>
>> In another case, the current cgroup prog detach tries to use the
>> same flag as in attach. This is different from mprog kernel interface
>> which uses flags passed from user space.
>>
>> So to avoid modifying existing behavior, I made the following changes to
>> support mprog API for cgroup progs:
>>   - The support is for prog list at cgroup level. Cross-level prog list
>>     (a.k.a. effective prog list) is not supported.
>>   - Previously, BPF_F_PREORDER is supported only for prog attach, now
>>     BPF_F_PREORDER is also supported by link-based attach.
>>   - For attach, BPF_F_BEFORE/BPF_F_AFTER/BPF_F_ID is supported similar to
>>     kernel mprog but with different implementation.
>>   - For detach and replace, use the existing implementation.
>>   - For attach, detach and replace, the revision for a particular prog
>>     list, associated with a particular attach type, will be updated
>>     by increasing count by 1.
>>
>> Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
>> ---
>>   include/uapi/linux/bpf.h       |   7 ++
>>   kernel/bpf/cgroup.c            | 144 ++++++++++++++++++++++++++++-----
>>   kernel/bpf/syscall.c           |  44 ++++++----
>>   tools/include/uapi/linux/bpf.h |   7 ++
>>   4 files changed, 165 insertions(+), 37 deletions(-)
>>
> [...]
>
>> +       if (!anchor_prog) {
>> +               hlist_for_each_entry(pltmp, progs, node) {
>> +                       if ((flags & BPF_F_BEFORE) && *ppltmp)
>> +                               break;
>> +                       *ppltmp = pltmp;
> This is correct, but it's less obvious why because of all the
> loops, breaks, and NULL anchor prog. The idea here is to find the very
> first pl for BPF_F_BEFORE or the very last for BPF_F_AFTER, right? So
> wouldn't this be more obviously correct:
>
> hlist_for_each_entry(pltmp, progs, node) {
>      if (flags & BPF_F_BEFORE) {
>          *ppltmp = pltmp;
>          return NULL;
>      }
>      *ppltmp = pltmp;
> }
> return NULL;
>
>
> I.e., once you know the result, just return as early as possible and
> don't require tracing through the rest of the code just to eventually
> return all the same (but now somewhat disguised) values.
>
>
> Though see my point about anchor_prog below, which will simplify this
> to just `return pltmp;`

Indeed, returning pltmp sounds like a better idea. I will do that.

>
>
> I'd also add a comment that if there is no anchor_prog, then
> BPF_F_PREORDER doesn't matter because we either prepend or append to a
> combined list of progs and end up with correct result

Okay, will add such comments.


>
>> +               }
>> +       }  else {
>> +               hlist_for_each_entry(pltmp, progs, node) {
>> +                       pltmp_prog = pltmp->link ? pltmp->link->link.prog : pltmp->prog;
>> +                       if (pltmp_prog != anchor_prog)
>> +                               continue;
>> +                       if (!!(pltmp->flags & BPF_F_PREORDER) != preorder)
>> +                               goto out;
>> +                       *ppltmp = pltmp;
>> +                       break;
>> +               }
>> +               if (!*ppltmp) {
>> +                       ret = -ENOENT;
>> +                       goto out;
>> +               }
>> +       }
>> +
>> +       return anchor_prog;
>> +
>> +out:
>> +       bpf_prog_put(anchor_prog);
>> +       return ERR_PTR(ret);
>> +}
>> +
>> +static int insert_pl_to_hlist(struct bpf_prog_list *pl, struct hlist_head *progs,
>> +                             struct bpf_prog *prog, u32 flags, u32 id_or_fd)
>> +{
>> +       struct bpf_prog_list *pltmp = NULL;
>> +       struct bpf_prog *anchor_prog;
>> +
>> +       /* flags cannot have both BPF_F_BEFORE and BPF_F_AFTER */
>> +       if ((flags & BPF_F_BEFORE) && (flags & BPF_F_AFTER))
>> +               return -EINVAL;
> I think this should be handled by get_anchor_prog(), both BPF_F_AFTER
> and BPF_F_BEFORE will just result in no valid anchor program and we'll
> error out below

Yes. I will move the above flags checking to get_anchor_prog().

>
>> +
>> +       anchor_prog = get_anchor_prog(progs, prog, flags, id_or_fd, &pltmp);
>> +       if (IS_ERR(anchor_prog))
>> +               return PTR_ERR(anchor_prog);
> it's confusing that we return anchor_prog but actually never use it,
> no? wouldn't it make more sense to just return struct bpf_prog_list *
> for an anchor then?

Totally agree. Will return 'struct bpf_prog_list *'.

>
>> +
>> +       if (hlist_empty(progs))
>> +               hlist_add_head(&pl->node, progs);
>> +       else if (flags & BPF_F_BEFORE)
>> +               hlist_add_before(&pl->node, &pltmp->node);
>> +       else
>> +               hlist_add_behind(&pl->node, &pltmp->node);
>> +
>> +       return 0;
>> +}
>> +
>>   /**
>>    * __cgroup_bpf_attach() - Attach the program or the link to a cgroup, and
>>    *                         propagate the change to descendants
>> @@ -633,6 +710,8 @@ static struct bpf_prog_list *find_attach_entry(struct hlist_head *progs,
>>    * @replace_prog: Previously attached program to replace if BPF_F_REPLACE is set
>>    * @type: Type of attach operation
>>    * @flags: Option flags
>> + * @id_or_fd: Relative prog id or fd
>> + * @revision: bpf_prog_list revision
>>    *
>>    * Exactly one of @prog or @link can be non-null.
>>    * Must be called with cgroup_mutex held.
>> @@ -640,7 +719,8 @@ static struct bpf_prog_list *find_attach_entry(struct hlist_head *progs,
>>   static int __cgroup_bpf_attach(struct cgroup *cgrp,
>>                                 struct bpf_prog *prog, struct bpf_prog *replace_prog,
>>                                 struct bpf_cgroup_link *link,
>> -                              enum bpf_attach_type type, u32 flags)
>> +                              enum bpf_attach_type type, u32 flags, u32 id_or_fd,
>> +                              u64 revision)
>>   {
>>          u32 saved_flags = (flags & (BPF_F_ALLOW_OVERRIDE | BPF_F_ALLOW_MULTI));
>>          struct bpf_prog *old_prog = NULL;
>> @@ -656,6 +736,9 @@ static int __cgroup_bpf_attach(struct cgroup *cgrp,
>>              ((flags & BPF_F_REPLACE) && !(flags & BPF_F_ALLOW_MULTI)))
>>                  /* invalid combination */
>>                  return -EINVAL;
>> +       if ((flags & BPF_F_REPLACE) && (flags & (BPF_F_BEFORE | BPF_F_AFTER)))
>> +               /* only either replace or insertion with before/after */
>> +               return -EINVAL;
>>          if (link && (prog || replace_prog))
>>                  /* only either link or prog/replace_prog can be specified */
>>                  return -EINVAL;
>> @@ -663,9 +746,12 @@ static int __cgroup_bpf_attach(struct cgroup *cgrp,
>>                  /* replace_prog implies BPF_F_REPLACE, and vice versa */
>>                  return -EINVAL;
>>
>> +
> nit: unnecessary empty line?

Ack

>
>>          atype = bpf_cgroup_atype_find(type, new_prog->aux->attach_btf_id);
>>          if (atype < 0)
>>                  return -EINVAL;
>> +       if (revision && revision != cgrp->bpf.revisions[atype])
>> +               return -ESTALE;
>>
>>          progs = &cgrp->bpf.progs[atype];
>>
> [...]
>
>> @@ -1312,7 +1409,8 @@ int cgroup_bpf_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
>>          struct cgroup *cgrp;
>>          int err;
>>
>> -       if (attr->link_create.flags)
>> +       if (attr->link_create.flags &&
>> +           (attr->link_create.flags & (~(BPF_F_ID | BPF_F_BEFORE | BPF_F_AFTER | BPF_F_PREORDER))))
> why the `attr->link_create.flags &&` check, seems unnecessary
>
>
> also looking at BPF_F_ATTACH_MASK_MPROG, not allowing BPF_F_REPLACE
> makes sense, but BPF_F_LINK makes sense for ordering, no?

I didn't add BPF_F_LINK as my current implementation didn't support it.
But as you mentioned, the mprog API does support it (to find the anchor prog).
I will add support in the next revision.

>
>>                  return -EINVAL;
>>
>>          cgrp = cgroup_get_from_fd(attr->link_create.target_fd);
>> @@ -1336,7 +1434,9 @@ int cgroup_bpf_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
>>          }
>>
>>          err = cgroup_bpf_attach(cgrp, NULL, NULL, link,
>> -                               link->type, BPF_F_ALLOW_MULTI);
>> +                               link->type, BPF_F_ALLOW_MULTI | attr->link_create.flags,
>> +                               attr->link_create.cgroup.relative_fd,
>> +                               attr->link_create.cgroup.expected_revision);
>>          if (err) {
>>                  bpf_link_cleanup(&link_primer);
>>                  goto out_put_cgroup;
>> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
>> index df33d19c5c3b..58ea3c38eabb 100644
>> --- a/kernel/bpf/syscall.c
>> +++ b/kernel/bpf/syscall.c
>> @@ -4184,6 +4184,25 @@ static int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog,
>>          }
>>   }
>>
>> +static bool is_cgroup_prog_type(enum bpf_prog_type ptype, enum bpf_attach_type atype,
>> +                               bool check_atype)
>> +{
>> +       switch (ptype) {
>> +       case BPF_PROG_TYPE_CGROUP_DEVICE:
>> +       case BPF_PROG_TYPE_CGROUP_SKB:
>> +       case BPF_PROG_TYPE_CGROUP_SOCK:
>> +       case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
>> +       case BPF_PROG_TYPE_CGROUP_SOCKOPT:
>> +       case BPF_PROG_TYPE_CGROUP_SYSCTL:
>> +       case BPF_PROG_TYPE_SOCK_OPS:
>> +               return true;
>> +       case BPF_PROG_TYPE_LSM:
>> +               return check_atype ? atype == BPF_LSM_CGROUP : true;
>> +       default:
>> +               return false;
>> +       }
>> +}
>> +
>>   #define BPF_PROG_ATTACH_LAST_FIELD expected_revision
>>
>>   #define BPF_F_ATTACH_MASK_BASE \
>> @@ -4214,6 +4233,9 @@ static int bpf_prog_attach(const union bpf_attr *attr)
>>          if (bpf_mprog_supported(ptype)) {
>>                  if (attr->attach_flags & ~BPF_F_ATTACH_MASK_MPROG)
>>                          return -EINVAL;
>> +       } else if (is_cgroup_prog_type(ptype, 0, false)) {
>> +               if (attr->attach_flags & BPF_F_LINK)
>> +                       return -EINVAL;
> Why disable BPF_F_LINK? It's just a matter of using FD/ID for link vs
> program to specify the place to attach. It doesn't mean that we need
> to attach through BPF link interface. Or am I misremembering?

Again, I didn't implement it. Will add support in the next revision.

>
>>          } else {
>>                  if (attr->attach_flags & ~BPF_F_ATTACH_MASK_BASE)
>>                          return -EINVAL;
>> @@ -4242,20 +4264,6 @@ static int bpf_prog_attach(const union bpf_attr *attr)
>>          case BPF_PROG_TYPE_FLOW_DISSECTOR:
>>                  ret = netns_bpf_prog_attach(attr, prog);
>>                  break;
>> -       case BPF_PROG_TYPE_CGROUP_DEVICE:
>> -       case BPF_PROG_TYPE_CGROUP_SKB:
>> -       case BPF_PROG_TYPE_CGROUP_SOCK:
>> -       case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
>> -       case BPF_PROG_TYPE_CGROUP_SOCKOPT:
>> -       case BPF_PROG_TYPE_CGROUP_SYSCTL:
>> -       case BPF_PROG_TYPE_SOCK_OPS:
>> -       case BPF_PROG_TYPE_LSM:
>> -               if (ptype == BPF_PROG_TYPE_LSM &&
>> -                   prog->expected_attach_type != BPF_LSM_CGROUP)
>> -                       ret = -EINVAL;
>> -               else
>> -                       ret = cgroup_bpf_prog_attach(attr, ptype, prog);
>> -               break;
>>          case BPF_PROG_TYPE_SCHED_CLS:
>>                  if (attr->attach_type == BPF_TCX_INGRESS ||
>>                      attr->attach_type == BPF_TCX_EGRESS)
>> @@ -4264,7 +4272,10 @@ static int bpf_prog_attach(const union bpf_attr *attr)
>>                          ret = netkit_prog_attach(attr, prog);
>>                  break;
>>          default:
>> -               ret = -EINVAL;
>> +               if (!is_cgroup_prog_type(ptype, prog->expected_attach_type, true))
>> +                       ret = -EINVAL;
>> +               else
>> +                       ret = cgroup_bpf_prog_attach(attr, ptype, prog);
>>          }
>>
>>          if (ret)
> [...]


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH bpf-next v2 2/4] bpf: Implement mprog API on top of existing cgroup progs
  2025-05-15 21:05     ` Andrii Nakryiko
@ 2025-05-17 15:49       ` Yonghong Song
  0 siblings, 0 replies; 11+ messages in thread
From: Yonghong Song @ 2025-05-17 15:49 UTC (permalink / raw)
  To: Andrii Nakryiko, Daniel Borkmann
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, kernel-team,
	Martin KaFai Lau



On 5/15/25 5:05 AM, Andrii Nakryiko wrote:
> On Thu, May 15, 2025 at 1:38 PM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
>> On Thu, May 8, 2025 at 3:35 PM Yonghong Song <yonghong.song@linux.dev> wrote:
>>> Current cgroup prog ordering is appending at attachment time. This is not
>>> ideal. In some cases, users want specific ordering at a particular cgroup
>>> level. To address this, the existing mprog API seems an ideal solution with
>>> supporting BPF_F_BEFORE and BPF_F_AFTER flags.
>>>
>>> But there are a few obstacles to directly use kernel mprog interface.
>>> Currently cgroup bpf progs already support prog attach/detach/replace
>>> and link-based attach/detach/replace. For example, in struct
>>> bpf_prog_array_item, the cgroup_storage field needs to be together
>>> with bpf prog. But the mprog API struct bpf_mprog_fp only has bpf_prog
>>> as the member, which makes it difficult to use kernel mprog interface.
>>>
>>> In another case, the current cgroup prog detach tries to use the
>>> same flag as in attach. This is different from mprog kernel interface
>>> which uses flags passed from user space.
>>>
>>> So to avoid modifying existing behavior, I made the following changes to
>>> support mprog API for cgroup progs:
>>>   - The support is for prog list at cgroup level. Cross-level prog list
>>>     (a.k.a. effective prog list) is not supported.
>>>   - Previously, BPF_F_PREORDER is supported only for prog attach, now
>>>     BPF_F_PREORDER is also supported by link-based attach.
>>>   - For attach, BPF_F_BEFORE/BPF_F_AFTER/BPF_F_ID is supported similar to
>>>     kernel mprog but with different implementation.
>>>   - For detach and replace, use the existing implementation.
>>>   - For attach, detach and replace, the revision for a particular prog
>>>     list, associated with a particular attach type, will be updated
>>>     by increasing count by 1.
>>>
>>> Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
>>> ---
>>>   include/uapi/linux/bpf.h       |   7 ++
>>>   kernel/bpf/cgroup.c            | 144 ++++++++++++++++++++++++++++-----
>>>   kernel/bpf/syscall.c           |  44 ++++++----
>>>   tools/include/uapi/linux/bpf.h |   7 ++
>>>   4 files changed, 165 insertions(+), 37 deletions(-)
>>>
>> [...]
>>
>>> +       if (!anchor_prog) {
>>> +               hlist_for_each_entry(pltmp, progs, node) {
>>> +                       if ((flags & BPF_F_BEFORE) && *ppltmp)
>>> +                               break;
>>> +                       *ppltmp = pltmp;
>> This is correct, but it's less obvious why because of all the
>> loops, breaks, and NULL anchor prog. The idea here is to find the very
>> first pl for BPF_F_BEFORE or the very last for BPF_F_AFTER, right? So
>> wouldn't this be more obviously correct:
>>
>> hlist_for_each_entry(pltmp, progs, node) {
>>      if (flags & BPF_F_BEFORE) {
>>          *ppltmp = pltmp;
>>          return NULL;
>>      }
>>      *ppltmp = pltmp;
>> }
>> return NULL;
>>
>>
>> I.e., once you know the result, just return as early as possible and
>> don't require tracing through the rest of the code just to eventually
>> return all the same (but now somewhat disguised) values.
>>
>>
>> Though see my point about anchor_prog below, which will simplify this
>> to just `return pltmp;`
>>
>>
>> I'd also add a comment that if there is no anchor_prog, then
>> BPF_F_PREORDER doesn't matter because we either prepend or append to a
>> combined list of progs and end up with correct result
>>
>>> +               }
>>> +       }  else {
>>> +               hlist_for_each_entry(pltmp, progs, node) {
>>> +                       pltmp_prog = pltmp->link ? pltmp->link->link.prog : pltmp->prog;
>>> +                       if (pltmp_prog != anchor_prog)
>>> +                               continue;
>>> +                       if (!!(pltmp->flags & BPF_F_PREORDER) != preorder)
>>> +                               goto out;
>>> +                       *ppltmp = pltmp;
>>> +                       break;
>>> +               }
>>> +               if (!*ppltmp) {
>>> +                       ret = -ENOENT;
>>> +                       goto out;
>>> +               }
>>> +       }
>>> +
>>> +       return anchor_prog;
>>> +
>>> +out:
>>> +       bpf_prog_put(anchor_prog);
>>> +       return ERR_PTR(ret);
>>> +}
>>> +
>>> +static int insert_pl_to_hlist(struct bpf_prog_list *pl, struct hlist_head *progs,
>>> +                             struct bpf_prog *prog, u32 flags, u32 id_or_fd)
>>> +{
>>> +       struct bpf_prog_list *pltmp = NULL;
>>> +       struct bpf_prog *anchor_prog;
>>> +
>>> +       /* flags cannot have both BPF_F_BEFORE and BPF_F_AFTER */
>>> +       if ((flags & BPF_F_BEFORE) && (flags & BPF_F_AFTER))
>>> +               return -EINVAL;
>> I think this should be handled by get_anchor_prog(), both BPF_F_AFTER
>> and BPF_F_BEFORE will just result in no valid anchor program and we'll
>> error out below
> Oh, I just randomly realized that there is a special case that I think
> is allowed by Daniel's mprog implementation, and it might be important
> for some users. If both BPF_F_BEFORE and BPF_F_AFTER are specified and
> there is no ID/FD, then this combination would succeed if and only if
> the currently attached list of progs is empty. Check
> bpf_mprog_attach() and how it handles BPF_F_BEFORE and BPF_F_AFTER
> completely independently calculating tidx. If tidx ends up being
> consistent (which should be -1 for an empty list), then that's where
> the prog/link is inserted (-1 results in prepending into an empty list).

I will add this support in the next revision.

>
>
> Daniel, can you please double check and generally take a look at this
> patch set, given you have the most detailed knowledge of mprog
> interface? Thanks!

Daniel, I will have another revision (v3) soon. Hopefully you can
review it as well. Thanks!

>
>>> +
>>> +       anchor_prog = get_anchor_prog(progs, prog, flags, id_or_fd, &pltmp);
>>> +       if (IS_ERR(anchor_prog))
>>> +               return PTR_ERR(anchor_prog);

[...]


^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2025-05-17 15:50 UTC | newest]

Thread overview: 11+ messages
2025-05-08 22:35 [PATCH bpf-next v2 0/4] bpf: Implement mprog API on top of existing cgroup progs Yonghong Song
2025-05-08 22:35 ` [PATCH bpf-next v2 1/4] cgroup: Add bpf prog revisions to struct cgroup_bpf Yonghong Song
2025-05-15 20:39   ` Andrii Nakryiko
2025-05-08 22:35 ` [PATCH bpf-next v2 2/4] bpf: Implement mprog API on top of existing cgroup progs Yonghong Song
2025-05-15 20:38   ` Andrii Nakryiko
2025-05-15 21:05     ` Andrii Nakryiko
2025-05-17 15:49       ` Yonghong Song
2025-05-17 15:46     ` Yonghong Song
2025-05-08 22:35 ` [PATCH bpf-next v2 3/4] libbpf: Support link-based cgroup attach with options Yonghong Song
2025-05-15 20:42   ` Andrii Nakryiko
2025-05-08 22:35 ` [PATCH bpf-next v2 4/4] selftests/bpf: Add two selftests for mprog API based cgroup progs Yonghong Song
