public inbox for cgroups@vger.kernel.org
* [RFC PATCH bpf-next 0/8] bpf, cgroup: Add bpf support for cgroup controller
@ 2023-09-22 11:28 Yafang Shao
       [not found] ` <20230922112846.4265-1-laoar.shao-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
  0 siblings, 1 reply; 21+ messages in thread
From: Yafang Shao @ 2023-09-22 11:28 UTC (permalink / raw)
  To: ast-DgEjT+Ai2ygdnm+yROfE0A, daniel-FeC+5ew28dpmcu3hnIyYJQ,
	john.fastabend-Re5JQEeQqe8AvxtiuMwx3w,
	andrii-DgEjT+Ai2ygdnm+yROfE0A, martin.lau-fxUVXftIFDnyG1zEObXtfA,
	song-DgEjT+Ai2ygdnm+yROfE0A, yonghong.song-fxUVXftIFDnyG1zEObXtfA,
	kpsingh-DgEjT+Ai2ygdnm+yROfE0A, sdf-hpIqsD4AKlfQT0dZR+AlfA,
	haoluo-hpIqsD4AKlfQT0dZR+AlfA, jolsa-DgEjT+Ai2ygdnm+yROfE0A,
	tj-DgEjT+Ai2ygdnm+yROfE0A, lizefan.x-EC8Uxl6Npydl57MIdRCFDg,
	hannes-druUgvl0LCNAfugRpC6u6w, yosryahmed-hpIqsD4AKlfQT0dZR+AlfA,
	mkoutny-IBi9RG/b67k
  Cc: cgroups-u79uwXL29TY76Z2rM5mHXA, bpf-u79uwXL29TY76Z2rM5mHXA,
	Yafang Shao

Currently, BPF is primarily confined to cgroup2, with the exception of
cgroup_iter, which supports cgroup1 fds. Unfortunately, this limitation
prevents us from harnessing the full potential of BPF within cgroup1
environments.

In our endeavor to seamlessly integrate BPF within our Kubernetes
environment, which relies on cgroup1, we have been exploring the
possibility of transitioning to cgroup2. While this transition is
forward-looking, it poses challenges due to the necessity for numerous
applications to adapt.

While we acknowledge that cgroup2 represents the future, we also recognize
that such transitions demand time and effort. As a result, we are
considering an alternative approach. Instead of migrating to cgroup2, we
are contemplating modifications to the BPF kernel code to ensure
compatibility with cgroup1. These adjustments appear to be relatively
minor, making this option more feasible.

Given the widespread use of cgroup1 in container environments, this change
would be beneficial to many users.

Based on our investigation, the optimal way to enable BPF on cgroup1 is to
go through the cgroup controllers. A cgroup is active only when it has one
or more controllers enabled, and in production environments a task is
always managed by at least one controller. We can therefore always
establish a direct link between a task and a cgroup controller, and act on
that connection. To that end, this patchset introduces the following new
kfuncs:

- bpf_cgroup_id_from_task_within_controller
  Retrieves the cgroup ID from a task within a specific cgroup controller.
- bpf_cgroup_acquire_from_id_within_controller
  Acquires the cgroup from a cgroup ID within a specific cgroup controller.
- bpf_cgroup_ancestor_id_from_task_within_controller
  Retrieves the ancestor cgroup ID from a task within a specific cgroup
  controller.

The advantage of these new BPF kfuncs is their ability to abstract away the
complexities of cgroup hierarchies, irrespective of whether they involve
cgroup1 or cgroup2.
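
As a rough illustration of the intended usage (a non-compilable sketch,
not part of this series: the __ksym declarations and the tracepoint and
controller chosen here are placeholders), a BPF program could pin its
logic to the cpu controller and behave identically whether that
controller sits on a cgroup1 or cgroup2 hierarchy:

```c
/* Sketch only: assumes the kfuncs from this series are declared and that
 * cpu_cgrp_id is visible via vmlinux.h; not a buildable example. */
u64 bpf_cgroup_id_from_task_within_controller(struct task_struct *task, int ssid) __ksym;
struct cgroup *bpf_cgroup_acquire_from_id_within_controller(u64 cgid, int ssid) __ksym;
void bpf_cgroup_release(struct cgroup *cgrp) __ksym;

SEC("tp_btf/sched_switch")
int BPF_PROG(on_switch, bool preempt, struct task_struct *prev, struct task_struct *next)
{
	struct cgroup *cgrp;
	u64 id;

	/* Resolve next's cgroup through the cpu controller, whichever
	 * hierarchy (v1 or v2) that controller is attached to. */
	id = bpf_cgroup_id_from_task_within_controller(next, cpu_cgrp_id);
	if (!id)
		return 0;

	cgrp = bpf_cgroup_acquire_from_id_within_controller(id, cpu_cgrp_id);
	if (!cgrp)
		return 0;
	/* ... inspect or account against cgrp ... */
	bpf_cgroup_release(cgrp);
	return 0;
}
```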

In the future, we may expand controller-based support to other BPF
functionalities, such as bpf_cgrp_storage, the attachment and detachment
of cgroups, skb_under_cgroup, and more.

Changes:
- bpf, cgroup: Enable cgroup_array map on cgroup1
  https://lore.kernel.org/bpf/20230903142800.3870-1-laoar.shao-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org/

Yafang Shao (8):
  bpf: Fix missed rcu read lock in bpf_task_under_cgroup()
  cgroup: Enable task_under_cgroup_hierarchy() on cgroup1
  cgroup: Add cgroup_get_from_id_within_subsys()
  bpf: Add new kfuncs support for cgroup controller
  selftests/bpf: Fix issues in setup_classid_environment()
  selftests/bpf: Add parallel support for classid
  selftests/bpf: Add new cgroup helper get_classid_cgroup_id()
  selftests/bpf: Add selftests for cgroup controller

 include/linux/cgroup-defs.h                   |  20 +++
 include/linux/cgroup.h                        |  31 +++-
 kernel/bpf/helpers.c                          |  77 ++++++++-
 kernel/cgroup/cgroup-internal.h               |  20 ---
 kernel/cgroup/cgroup.c                        |  32 +++-
 tools/testing/selftests/bpf/cgroup_helpers.c  |  65 ++++++--
 tools/testing/selftests/bpf/cgroup_helpers.h  |   3 +-
 .../bpf/prog_tests/cgroup_controller.c        | 149 ++++++++++++++++++
 .../selftests/bpf/prog_tests/cgroup_v1v2.c    |   2 +-
 .../bpf/progs/test_cgroup_controller.c        |  80 ++++++++++
 10 files changed, 430 insertions(+), 49 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/cgroup_controller.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_cgroup_controller.c

-- 
2.30.1 (Apple Git-130)


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [RFC PATCH bpf-next 1/8] bpf: Fix missed rcu read lock in bpf_task_under_cgroup()
       [not found] ` <20230922112846.4265-1-laoar.shao-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
@ 2023-09-22 11:28   ` Yafang Shao
  2023-09-22 11:28   ` [RFC PATCH bpf-next 2/8] cgroup: Enable task_under_cgroup_hierarchy() on cgroup1 Yafang Shao
                     ` (8 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Yafang Shao @ 2023-09-22 11:28 UTC (permalink / raw)
  To: ast-DgEjT+Ai2ygdnm+yROfE0A, daniel-FeC+5ew28dpmcu3hnIyYJQ,
	john.fastabend-Re5JQEeQqe8AvxtiuMwx3w,
	andrii-DgEjT+Ai2ygdnm+yROfE0A, martin.lau-fxUVXftIFDnyG1zEObXtfA,
	song-DgEjT+Ai2ygdnm+yROfE0A, yonghong.song-fxUVXftIFDnyG1zEObXtfA,
	kpsingh-DgEjT+Ai2ygdnm+yROfE0A, sdf-hpIqsD4AKlfQT0dZR+AlfA,
	haoluo-hpIqsD4AKlfQT0dZR+AlfA, jolsa-DgEjT+Ai2ygdnm+yROfE0A,
	tj-DgEjT+Ai2ygdnm+yROfE0A, lizefan.x-EC8Uxl6Npydl57MIdRCFDg,
	hannes-druUgvl0LCNAfugRpC6u6w, yosryahmed-hpIqsD4AKlfQT0dZR+AlfA,
	mkoutny-IBi9RG/b67k
  Cc: cgroups-u79uwXL29TY76Z2rM5mHXA, bpf-u79uwXL29TY76Z2rM5mHXA,
	Yafang Shao, Feng Zhou

When bpf_task_under_cgroup() is employed in a sleepable program, where it
runs without RCU protection, it can trigger a suspicious RCU usage warning
in the kernel log when CONFIG_PROVE_RCU is enabled.

[ 1259.662354] =============================
[ 1259.662357] WARNING: suspicious RCU usage
[ 1259.662358] 6.5.0+ #33 Not tainted
[ 1259.662360] -----------------------------
[ 1259.662361] include/linux/cgroup.h:423 suspicious rcu_dereference_check() usage!
[ 1259.662364]
other info that might help us debug this:

[ 1259.662366]
rcu_scheduler_active = 2, debug_locks = 1
[ 1259.662368] 1 lock held by trace/72954:
[ 1259.662369]  #0: ffffffffb5e3eda0 (rcu_read_lock_trace){....}-{0:0}, at: __bpf_prog_enter_sleepable+0x0/0xb0
[ 1259.662383]
stack backtrace:
[ 1259.662385] CPU: 50 PID: 72954 Comm: trace Kdump: loaded Not tainted 6.5.0+ #33
[ 1259.662391] Call Trace:
[ 1259.662393]  <TASK>
[ 1259.662395]  dump_stack_lvl+0x6e/0x90
[ 1259.662401]  dump_stack+0x10/0x20
[ 1259.662404]  lockdep_rcu_suspicious+0x163/0x1b0
[ 1259.662412]  task_css_set.part.0+0x23/0x30
[ 1259.662417]  bpf_task_under_cgroup+0xe7/0xf0
[ 1259.662422]  bpf_prog_7fffba481a3bcf88_lsm_run+0x5c/0x93
[ 1259.662431]  bpf_trampoline_6442505574+0x60/0x1000
[ 1259.662439]  bpf_lsm_bpf+0x5/0x20
[ 1259.662443]  ? security_bpf+0x32/0x50
[ 1259.662452]  __sys_bpf+0xe6/0xdd0
[ 1259.662463]  __x64_sys_bpf+0x1a/0x30
[ 1259.662467]  do_syscall_64+0x38/0x90
[ 1259.662472]  entry_SYSCALL_64_after_hwframe+0x6e/0xd8
[ 1259.662479] RIP: 0033:0x7f487baf8e29
...
[ 1259.662504]  </TASK>

This issue can be reproduced by executing a straightforward program, as
demonstrated below:

SEC("lsm.s/bpf")
int BPF_PROG(lsm_run, int cmd, union bpf_attr *attr, unsigned int size)
{
        struct cgroup *cgrp = NULL;
        struct task_struct *task;
        int ret = 0;

        if (cmd != BPF_LINK_CREATE)
                return 0;

        // The cgroup2 should be mounted first
        cgrp = bpf_cgroup_from_id(1);
        if (!cgrp)
                goto out;
        task = bpf_get_current_task_btf();
        if (bpf_task_under_cgroup(task, cgrp))
                ret = -1;
        bpf_cgroup_release(cgrp);

out:
        return ret;
}

After running the program, executing another BPF program will trigger the
warning. It's worth noting that task_under_cgroup_hierarchy() is also used
by bpf_current_task_under_cgroup(); however, that helper doesn't exhibit
this issue because it cannot be used in sleepable BPF programs.
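
On kernels without this fix, a sleepable program can work around the
warning by taking the RCU read lock itself around the call, using the
bpf_rcu_read_lock()/bpf_rcu_read_unlock() kfuncs available to sleepable
programs (a fragment sketch, not part of this series):

```c
/* Workaround sketch for unpatched kernels: hold the RCU read lock
 * explicitly across bpf_task_under_cgroup() in a sleepable program. */
void bpf_rcu_read_lock(void) __ksym;
void bpf_rcu_read_unlock(void) __ksym;

	task = bpf_get_current_task_btf();
	bpf_rcu_read_lock();
	if (bpf_task_under_cgroup(task, cgrp))
		ret = -1;
	bpf_rcu_read_unlock();
```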

Fixes: b5ad4cdc46c7 ("bpf: Add bpf_task_under_cgroup() kfunc")
Signed-off-by: Yafang Shao <laoar.shao-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
Cc: Feng Zhou <zhoufeng.zf-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org>
Signed-off-by: Yafang Shao <laoar.shao-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
---
 kernel/bpf/helpers.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index dd1c69ee3375..bb521b181cc3 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -2212,7 +2212,12 @@ __bpf_kfunc struct cgroup *bpf_cgroup_from_id(u64 cgid)
 __bpf_kfunc long bpf_task_under_cgroup(struct task_struct *task,
 				       struct cgroup *ancestor)
 {
-	return task_under_cgroup_hierarchy(task, ancestor);
+	long ret;
+
+	rcu_read_lock();
+	ret = task_under_cgroup_hierarchy(task, ancestor);
+	rcu_read_unlock();
+	return ret;
 }
 #endif /* CONFIG_CGROUPS */
 
-- 
2.30.1 (Apple Git-130)


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [RFC PATCH bpf-next 2/8] cgroup: Enable task_under_cgroup_hierarchy() on cgroup1
       [not found] ` <20230922112846.4265-1-laoar.shao-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
  2023-09-22 11:28   ` [RFC PATCH bpf-next 1/8] bpf: Fix missed rcu read lock in bpf_task_under_cgroup() Yafang Shao
@ 2023-09-22 11:28   ` Yafang Shao
  2023-09-22 11:28   ` [RFC PATCH bpf-next 3/8] cgroup: Add cgroup_get_from_id_within_subsys() Yafang Shao
                     ` (7 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Yafang Shao @ 2023-09-22 11:28 UTC (permalink / raw)
  To: ast-DgEjT+Ai2ygdnm+yROfE0A, daniel-FeC+5ew28dpmcu3hnIyYJQ,
	john.fastabend-Re5JQEeQqe8AvxtiuMwx3w,
	andrii-DgEjT+Ai2ygdnm+yROfE0A, martin.lau-fxUVXftIFDnyG1zEObXtfA,
	song-DgEjT+Ai2ygdnm+yROfE0A, yonghong.song-fxUVXftIFDnyG1zEObXtfA,
	kpsingh-DgEjT+Ai2ygdnm+yROfE0A, sdf-hpIqsD4AKlfQT0dZR+AlfA,
	haoluo-hpIqsD4AKlfQT0dZR+AlfA, jolsa-DgEjT+Ai2ygdnm+yROfE0A,
	tj-DgEjT+Ai2ygdnm+yROfE0A, lizefan.x-EC8Uxl6Npydl57MIdRCFDg,
	hannes-druUgvl0LCNAfugRpC6u6w, yosryahmed-hpIqsD4AKlfQT0dZR+AlfA,
	mkoutny-IBi9RG/b67k
  Cc: cgroups-u79uwXL29TY76Z2rM5mHXA, bpf-u79uwXL29TY76Z2rM5mHXA,
	Yafang Shao

At present, task_under_cgroup_hierarchy() only checks a task's position in
the default (cgroup2) hierarchy. Considering the ongoing prevalence of
cgroup1 and the substantial effort required to migrate all cgroup1-based
applications to cgroup2, a minor adjustment can expand its functionality
to cover cgroup1 as well. With this modification, BPF programs can easily
confirm a task's cgroup membership, e.g. verify whether a task belongs to
a cgroup1 directory such as '/sys/fs/cgroup/cpu,cpuacct/kubepods/' or
'/sys/fs/cgroup/cpu,cpuacct/system.slice/'.

Signed-off-by: Yafang Shao <laoar.shao-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
---
 include/linux/cgroup-defs.h     | 20 ++++++++++++++++++++
 include/linux/cgroup.h          | 30 ++++++++++++++++++++++++++----
 kernel/cgroup/cgroup-internal.h | 20 --------------------
 3 files changed, 46 insertions(+), 24 deletions(-)

diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
index f1b3151ac30b..5795825a04ff 100644
--- a/include/linux/cgroup-defs.h
+++ b/include/linux/cgroup-defs.h
@@ -299,6 +299,26 @@ struct css_set {
 	struct rcu_head rcu_head;
 };
 
+/*
+ * A cgroup can be associated with multiple css_sets as different tasks may
+ * belong to different cgroups on different hierarchies.  In the other
+ * direction, a css_set is naturally associated with multiple cgroups.
+ * This M:N relationship is represented by the following link structure
+ * which exists for each association and allows traversing the associations
+ * from both sides.
+ */
+struct cgrp_cset_link {
+	/* the cgroup and css_set this link associates */
+	struct cgroup		*cgrp;
+	struct css_set		*cset;
+
+	/* list of cgrp_cset_links anchored at cgrp->cset_links */
+	struct list_head	cset_link;
+
+	/* list of cgrp_cset_links anchored at css_set->cgrp_links */
+	struct list_head	cgrp_link;
+};
+
 struct cgroup_base_stat {
 	struct task_cputime cputime;
 
diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index b307013b9c6c..e16cfb98b44c 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -387,8 +387,8 @@ static inline void cgroup_unlock(void)
  * The caller can also specify additional allowed conditions via @__c, such
  * as locks used during the cgroup_subsys::attach() methods.
  */
-#ifdef CONFIG_PROVE_RCU
 extern spinlock_t css_set_lock;
+#ifdef CONFIG_PROVE_RCU
 #define task_css_set_check(task, __c)					\
 	rcu_dereference_check((task)->cgroups,				\
 		rcu_read_lock_sched_held() ||				\
@@ -543,15 +543,37 @@ static inline struct cgroup *cgroup_ancestor(struct cgroup *cgrp,
  * @ancestor: possible ancestor of @task's cgroup
  *
  * Tests whether @task's default cgroup hierarchy is a descendant of @ancestor.
- * It follows all the same rules as cgroup_is_descendant, and only applies
- * to the default hierarchy.
+ * It follows all the same rules as cgroup_is_descendant.
  */
 static inline bool task_under_cgroup_hierarchy(struct task_struct *task,
 					       struct cgroup *ancestor)
 {
 	struct css_set *cset = task_css_set(task);
+	struct cgrp_cset_link *link;
+	struct cgroup *cgrp = NULL;
+	bool ret = false;
+
+	if (ancestor->root == &cgrp_dfl_root)
+		return cgroup_is_descendant(cset->dfl_cgrp, ancestor);
+
+	if (cset == &init_css_set)
+		return ancestor == &ancestor->root->cgrp;
+
+	spin_lock_irq(&css_set_lock);
+	list_for_each_entry(link, &cset->cgrp_links, cgrp_link) {
+		struct cgroup *c = link->cgrp;
+
+		if (c->root == ancestor->root) {
+			cgrp = c;
+			break;
+		}
+	}
+	spin_unlock_irq(&css_set_lock);
 
-	return cgroup_is_descendant(cset->dfl_cgrp, ancestor);
+	if (WARN_ON_ONCE(!cgrp))
+		return ret;
+	ret = cgroup_is_descendant(cgrp, ancestor);
+	return ret;
 }
 
 /* no synchronization, the result can only be used as a hint */
diff --git a/kernel/cgroup/cgroup-internal.h b/kernel/cgroup/cgroup-internal.h
index c56071f150f2..620c60c9daa3 100644
--- a/kernel/cgroup/cgroup-internal.h
+++ b/kernel/cgroup/cgroup-internal.h
@@ -83,26 +83,6 @@ struct cgroup_file_ctx {
 	} procs1;
 };
 
-/*
- * A cgroup can be associated with multiple css_sets as different tasks may
- * belong to different cgroups on different hierarchies.  In the other
- * direction, a css_set is naturally associated with multiple cgroups.
- * This M:N relationship is represented by the following link structure
- * which exists for each association and allows traversing the associations
- * from both sides.
- */
-struct cgrp_cset_link {
-	/* the cgroup and css_set this link associates */
-	struct cgroup		*cgrp;
-	struct css_set		*cset;
-
-	/* list of cgrp_cset_links anchored at cgrp->cset_links */
-	struct list_head	cset_link;
-
-	/* list of cgrp_cset_links anchored at css_set->cgrp_links */
-	struct list_head	cgrp_link;
-};
-
 /* used to track tasks and csets during migration */
 struct cgroup_taskset {
 	/* the src and dst cset list running through cset->mg_node */
-- 
2.30.1 (Apple Git-130)


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [RFC PATCH bpf-next 3/8] cgroup: Add cgroup_get_from_id_within_subsys()
       [not found] ` <20230922112846.4265-1-laoar.shao-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
  2023-09-22 11:28   ` [RFC PATCH bpf-next 1/8] bpf: Fix missed rcu read lock in bpf_task_under_cgroup() Yafang Shao
  2023-09-22 11:28   ` [RFC PATCH bpf-next 2/8] cgroup: Enable task_under_cgroup_hierarchy() on cgroup1 Yafang Shao
@ 2023-09-22 11:28   ` Yafang Shao
  2023-09-22 11:28   ` [RFC PATCH bpf-next 4/8] bpf: Add new kfuncs support for cgroup controller Yafang Shao
                     ` (6 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Yafang Shao @ 2023-09-22 11:28 UTC (permalink / raw)
  To: ast-DgEjT+Ai2ygdnm+yROfE0A, daniel-FeC+5ew28dpmcu3hnIyYJQ,
	john.fastabend-Re5JQEeQqe8AvxtiuMwx3w,
	andrii-DgEjT+Ai2ygdnm+yROfE0A, martin.lau-fxUVXftIFDnyG1zEObXtfA,
	song-DgEjT+Ai2ygdnm+yROfE0A, yonghong.song-fxUVXftIFDnyG1zEObXtfA,
	kpsingh-DgEjT+Ai2ygdnm+yROfE0A, sdf-hpIqsD4AKlfQT0dZR+AlfA,
	haoluo-hpIqsD4AKlfQT0dZR+AlfA, jolsa-DgEjT+Ai2ygdnm+yROfE0A,
	tj-DgEjT+Ai2ygdnm+yROfE0A, lizefan.x-EC8Uxl6Npydl57MIdRCFDg,
	hannes-druUgvl0LCNAfugRpC6u6w, yosryahmed-hpIqsD4AKlfQT0dZR+AlfA,
	mkoutny-IBi9RG/b67k
  Cc: cgroups-u79uwXL29TY76Z2rM5mHXA, bpf-u79uwXL29TY76Z2rM5mHXA,
	Yafang Shao

Introduce a new helper function to retrieve the cgroup associated with a
specific cgroup ID within a particular subsystem.

Signed-off-by: Yafang Shao <laoar.shao-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
---
 include/linux/cgroup.h |  1 +
 kernel/cgroup/cgroup.c | 32 +++++++++++++++++++++++++++-----
 2 files changed, 28 insertions(+), 5 deletions(-)

diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index e16cfb98b44c..9f7616cbf710 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -656,6 +656,7 @@ static inline void cgroup_kthread_ready(void)
 
 void cgroup_path_from_kernfs_id(u64 id, char *buf, size_t buflen);
 struct cgroup *cgroup_get_from_id(u64 id);
+struct cgroup *cgroup_get_from_id_within_subsys(u64 cgid, int ssid);
 #else /* !CONFIG_CGROUPS */
 
 struct cgroup_subsys_state;
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 1fb7f562289d..d30a62eed14c 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -6195,17 +6195,28 @@ void cgroup_path_from_kernfs_id(u64 id, char *buf, size_t buflen)
 }
 
 /*
- * cgroup_get_from_id : get the cgroup associated with cgroup id
- * @id: cgroup id
+ * cgroup_get_from_id_within_subsys - get the cgroup associated with cgroup id
+ *                                    within specific subsystem
+ * @cgid: cgroup id
+ * @ssid: cgroup subsystem id, -1 for cgroup default tree
  * On success return the cgrp or ERR_PTR on failure
  * Only cgroups within current task's cgroup NS are valid.
  */
-struct cgroup *cgroup_get_from_id(u64 id)
+struct cgroup *cgroup_get_from_id_within_subsys(u64 cgid, int ssid)
 {
+	struct cgroup_root *root;
 	struct kernfs_node *kn;
-	struct cgroup *cgrp, *root_cgrp;
+	struct cgroup *cgrp;
 
-	kn = kernfs_find_and_get_node_by_id(cgrp_dfl_root.kf_root, id);
+	if (ssid == -1) {
+		root = &cgrp_dfl_root;
+	} else {
+		if (ssid < 0 || ssid >= CGROUP_SUBSYS_COUNT)
+			return ERR_PTR(-EINVAL);
+		root = cgroup_subsys[ssid]->root;
+	}
+
+	kn = kernfs_find_and_get_node_by_id(root->kf_root, cgid);
 	if (!kn)
 		return ERR_PTR(-ENOENT);
 
@@ -6226,6 +6237,17 @@ struct cgroup *cgroup_get_from_id(u64 id)
 	if (!cgrp)
 		return ERR_PTR(-ENOENT);
 
+	return cgrp;
+}
+
+struct cgroup *cgroup_get_from_id(u64 id)
+{
+	struct cgroup *root_cgrp, *cgrp;
+
+	cgrp = cgroup_get_from_id_within_subsys(id, -1);
+	if (IS_ERR(cgrp))
+		return cgrp;
+
 	root_cgrp = current_cgns_cgroup_dfl();
 	if (!cgroup_is_descendant(cgrp, root_cgrp)) {
 		cgroup_put(cgrp);
-- 
2.30.1 (Apple Git-130)


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [RFC PATCH bpf-next 4/8] bpf: Add new kfuncs support for cgroup controller
       [not found] ` <20230922112846.4265-1-laoar.shao-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
                     ` (2 preceding siblings ...)
  2023-09-22 11:28   ` [RFC PATCH bpf-next 3/8] cgroup: Add cgroup_get_from_id_within_subsys() Yafang Shao
@ 2023-09-22 11:28   ` Yafang Shao
  2023-09-22 11:28   ` [RFC PATCH bpf-next 5/8] selftests/bpf: Fix issues in setup_classid_environment() Yafang Shao
                     ` (5 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Yafang Shao @ 2023-09-22 11:28 UTC (permalink / raw)
  To: ast-DgEjT+Ai2ygdnm+yROfE0A, daniel-FeC+5ew28dpmcu3hnIyYJQ,
	john.fastabend-Re5JQEeQqe8AvxtiuMwx3w,
	andrii-DgEjT+Ai2ygdnm+yROfE0A, martin.lau-fxUVXftIFDnyG1zEObXtfA,
	song-DgEjT+Ai2ygdnm+yROfE0A, yonghong.song-fxUVXftIFDnyG1zEObXtfA,
	kpsingh-DgEjT+Ai2ygdnm+yROfE0A, sdf-hpIqsD4AKlfQT0dZR+AlfA,
	haoluo-hpIqsD4AKlfQT0dZR+AlfA, jolsa-DgEjT+Ai2ygdnm+yROfE0A,
	tj-DgEjT+Ai2ygdnm+yROfE0A, lizefan.x-EC8Uxl6Npydl57MIdRCFDg,
	hannes-druUgvl0LCNAfugRpC6u6w, yosryahmed-hpIqsD4AKlfQT0dZR+AlfA,
	mkoutny-IBi9RG/b67k
  Cc: cgroups-u79uwXL29TY76Z2rM5mHXA, bpf-u79uwXL29TY76Z2rM5mHXA,
	Yafang Shao

Introducing new kfuncs:

- bpf_cgroup_id_from_task_within_controller
  Retrieves the cgroup ID from a task within a specific cgroup controller.
- bpf_cgroup_acquire_from_id_within_controller
  Acquires the cgroup from a cgroup ID within a specific cgroup controller.
- bpf_cgroup_ancestor_id_from_task_within_controller
  Retrieves the ancestor cgroup ID from a task within a specific cgroup
  controller.

These kfuncs abstract away the underlying cgroup hierarchy, so callers
need not care whether it is cgroup1 or cgroup2.

Signed-off-by: Yafang Shao <laoar.shao-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
---
 kernel/bpf/helpers.c | 70 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 70 insertions(+)

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index bb521b181cc3..1316b5fda349 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -2219,6 +2219,73 @@ __bpf_kfunc long bpf_task_under_cgroup(struct task_struct *task,
 	rcu_read_unlock();
 	return ret;
 }
+
+/**
+ * bpf_cgroup_id_from_task_within_controller - To get the associated cgroup_id from
+ * a task within a specific cgroup controller.
+ * @task: The target task
+ * @ssid: The id of the cgroup controller, e.g. cpu_cgrp_id, memory_cgrp_id, etc.
+ */
+__bpf_kfunc u64 bpf_cgroup_id_from_task_within_controller(struct task_struct *task, int ssid)
+{
+	struct cgroup *cgroup;
+	u64 id = 0;
+
+	rcu_read_lock();
+	cgroup = task_cgroup(task, ssid);
+	if (!cgroup)
+		goto out;
+	id = cgroup_id(cgroup);
+
+out:
+	rcu_read_unlock();
+	return id;
+}
+
+/**
+ * bpf_cgroup_ancestor_id_from_task_within_controller - To get the ancestor's
+ * cgroup_id from a task within a specific cgroup controller.
+ * @task: The target task
+ * @ssid: The id of the cgroup controller, e.g. cpu_cgrp_id, memory_cgrp_id, etc.
+ * @level: The level of ancestor to look up
+ */
+__bpf_kfunc u64 bpf_cgroup_ancestor_id_from_task_within_controller(struct task_struct *task,
+								   int ssid, int level)
+{
+	struct cgroup *cgrp, *ancestor;
+	u64 id = 0;
+
+	rcu_read_lock();
+	cgrp = task_cgroup(task, ssid);
+	if (!cgrp)
+		goto out;
+	ancestor = cgroup_ancestor(cgrp, level);
+	if (ancestor)
+		id = cgroup_id(ancestor);
+
+out:
+	rcu_read_unlock();
+	return id;
+}
+
+/**
+ * bpf_cgroup_acquire_from_id_within_controller - To acquire the cgroup from a
+ * cgroup id within a specific cgroup controller. A cgroup acquired by this
+ * kfunc that is not stored in a map as a kptr must be released by calling
+ * bpf_cgroup_release().
+ * @cgid: cgroup id
+ * @ssid: The id of a cgroup controller, e.g. cpu_cgrp_id, memory_cgrp_id, etc.
+ */
+__bpf_kfunc struct cgroup *bpf_cgroup_acquire_from_id_within_controller(u64 cgid, int ssid)
+{
+	struct cgroup *cgrp;
+
+	cgrp = cgroup_get_from_id_within_subsys(cgid, ssid);
+	if (IS_ERR(cgrp))
+		return NULL;
+	return cgrp;
+}
+
 #endif /* CONFIG_CGROUPS */
 
 /**
@@ -2525,6 +2592,9 @@ BTF_ID_FLAGS(func, bpf_cgroup_release, KF_RELEASE)
 BTF_ID_FLAGS(func, bpf_cgroup_ancestor, KF_ACQUIRE | KF_RCU | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_cgroup_from_id, KF_ACQUIRE | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_task_under_cgroup, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cgroup_id_from_task_within_controller)
+BTF_ID_FLAGS(func, bpf_cgroup_ancestor_id_from_task_within_controller)
+BTF_ID_FLAGS(func, bpf_cgroup_acquire_from_id_within_controller, KF_ACQUIRE | KF_RET_NULL)
 #endif
 BTF_ID_FLAGS(func, bpf_task_from_pid, KF_ACQUIRE | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_throw)
-- 
2.30.1 (Apple Git-130)


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [RFC PATCH bpf-next 5/8] selftests/bpf: Fix issues in setup_classid_environment()
       [not found] ` <20230922112846.4265-1-laoar.shao-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
                     ` (3 preceding siblings ...)
  2023-09-22 11:28   ` [RFC PATCH bpf-next 4/8] bpf: Add new kfuncs support for cgroup controller Yafang Shao
@ 2023-09-22 11:28   ` Yafang Shao
  2023-09-22 11:28   ` [RFC PATCH bpf-next 6/8] selftests/bpf: Add parallel support for classid Yafang Shao
                     ` (4 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Yafang Shao @ 2023-09-22 11:28 UTC (permalink / raw)
  To: ast-DgEjT+Ai2ygdnm+yROfE0A, daniel-FeC+5ew28dpmcu3hnIyYJQ,
	john.fastabend-Re5JQEeQqe8AvxtiuMwx3w,
	andrii-DgEjT+Ai2ygdnm+yROfE0A, martin.lau-fxUVXftIFDnyG1zEObXtfA,
	song-DgEjT+Ai2ygdnm+yROfE0A, yonghong.song-fxUVXftIFDnyG1zEObXtfA,
	kpsingh-DgEjT+Ai2ygdnm+yROfE0A, sdf-hpIqsD4AKlfQT0dZR+AlfA,
	haoluo-hpIqsD4AKlfQT0dZR+AlfA, jolsa-DgEjT+Ai2ygdnm+yROfE0A,
	tj-DgEjT+Ai2ygdnm+yROfE0A, lizefan.x-EC8Uxl6Npydl57MIdRCFDg,
	hannes-druUgvl0LCNAfugRpC6u6w, yosryahmed-hpIqsD4AKlfQT0dZR+AlfA,
	mkoutny-IBi9RG/b67k
  Cc: cgroups-u79uwXL29TY76Z2rM5mHXA, bpf-u79uwXL29TY76Z2rM5mHXA,
	Yafang Shao

If the net_cls subsystem is already mounted, attempting to mount it again
in setup_classid_environment() will fail with EBUSY. Despite this, a fresh
tmpfs will already have been mounted at /sys/fs/cgroup, so the newly
created /sys/fs/cgroup/net_cls directory is empty, causing subsequent
setup operations to fail.

Here's an error log excerpt illustrating the issue when net_cls has already
been mounted at /sys/fs/cgroup/net_cls prior to running
setup_classid_environment():

- Before that change

  $ tools/testing/selftests/bpf/test_progs --name=cgroup_v1v2
  test_cgroup_v1v2:PASS:server_fd 0 nsec
  test_cgroup_v1v2:PASS:client_fd 0 nsec
  test_cgroup_v1v2:PASS:cgroup_fd 0 nsec
  test_cgroup_v1v2:PASS:server_fd 0 nsec
  run_test:PASS:skel_open 0 nsec
  run_test:PASS:prog_attach 0 nsec
  test_cgroup_v1v2:PASS:cgroup-v2-only 0 nsec
  (cgroup_helpers.c:248: errno: No such file or directory) Opening Cgroup Procs: /sys/fs/cgroup/net_cls/cgroup.procs
  (cgroup_helpers.c:540: errno: No such file or directory) Opening cgroup classid: /sys/fs/cgroup/net_cls/cgroup-test-work-dir/net_cls.classid
  run_test:PASS:skel_open 0 nsec
  run_test:PASS:prog_attach 0 nsec
  (cgroup_helpers.c:248: errno: No such file or directory) Opening Cgroup Procs: /sys/fs/cgroup/net_cls/cgroup-test-work-dir/cgroup.procs
  run_test:FAIL:join_classid unexpected error: 1 (errno 2)
  test_cgroup_v1v2:FAIL:cgroup-v1v2 unexpected error: -1 (errno 2)
  (cgroup_helpers.c:248: errno: No such file or directory) Opening Cgroup Procs: /sys/fs/cgroup/net_cls/cgroup.procs
  #44      cgroup_v1v2:FAIL
  Summary: 0/0 PASSED, 0 SKIPPED, 1 FAILED

- After that change
  $ tools/testing/selftests/bpf/test_progs --name=cgroup_v1v2
  #44      cgroup_v1v2:OK
  Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Yafang Shao <laoar.shao-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
---
 tools/testing/selftests/bpf/cgroup_helpers.c | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/bpf/cgroup_helpers.c b/tools/testing/selftests/bpf/cgroup_helpers.c
index 24ba56d42f2d..9c36d1db9f94 100644
--- a/tools/testing/selftests/bpf/cgroup_helpers.c
+++ b/tools/testing/selftests/bpf/cgroup_helpers.c
@@ -518,10 +518,20 @@ int setup_classid_environment(void)
 		return 1;
 	}
 
-	if (mount("net_cls", NETCLS_MOUNT_PATH, "cgroup", 0, "net_cls") &&
-	    errno != EBUSY) {
-		log_err("mount cgroup net_cls");
-		return 1;
+	if (mount("net_cls", NETCLS_MOUNT_PATH, "cgroup", 0, "net_cls")) {
+		if (errno != EBUSY) {
+			log_err("mount cgroup net_cls");
+			return 1;
+		}
+
+		if (rmdir(NETCLS_MOUNT_PATH)) {
+			log_err("rmdir cgroup net_cls");
+			return 1;
+		}
+		if (umount(CGROUP_MOUNT_DFLT)) {
+			log_err("umount cgroup base");
+			return 1;
+		}
 	}
 
 	cleanup_classid_environment();
-- 
2.30.1 (Apple Git-130)


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [RFC PATCH bpf-next 6/8] selftests/bpf: Add parallel support for classid
       [not found] ` <20230922112846.4265-1-laoar.shao-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
                     ` (4 preceding siblings ...)
  2023-09-22 11:28   ` [RFC PATCH bpf-next 5/8] selftests/bpf: Fix issues in setup_classid_environment() Yafang Shao
@ 2023-09-22 11:28   ` Yafang Shao
  2023-09-22 11:28   ` [RFC PATCH bpf-next 7/8] selftests/bpf: Add new cgroup helper get_classid_cgroup_id() Yafang Shao
                     ` (3 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Yafang Shao @ 2023-09-22 11:28 UTC (permalink / raw)
  To: ast-DgEjT+Ai2ygdnm+yROfE0A, daniel-FeC+5ew28dpmcu3hnIyYJQ,
	john.fastabend-Re5JQEeQqe8AvxtiuMwx3w,
	andrii-DgEjT+Ai2ygdnm+yROfE0A, martin.lau-fxUVXftIFDnyG1zEObXtfA,
	song-DgEjT+Ai2ygdnm+yROfE0A, yonghong.song-fxUVXftIFDnyG1zEObXtfA,
	kpsingh-DgEjT+Ai2ygdnm+yROfE0A, sdf-hpIqsD4AKlfQT0dZR+AlfA,
	haoluo-hpIqsD4AKlfQT0dZR+AlfA, jolsa-DgEjT+Ai2ygdnm+yROfE0A,
	tj-DgEjT+Ai2ygdnm+yROfE0A, lizefan.x-EC8Uxl6Npydl57MIdRCFDg,
	hannes-druUgvl0LCNAfugRpC6u6w, yosryahmed-hpIqsD4AKlfQT0dZR+AlfA,
	mkoutny-IBi9RG/b67k
  Cc: cgroups-u79uwXL29TY76Z2rM5mHXA, bpf-u79uwXL29TY76Z2rM5mHXA,
	Yafang Shao

Include the current pid in the classid cgroup path. This way, different
testers relying on classid-based configurations will have distinct classid
cgroup directories, enabling them to run concurrently. Additionally, we
leverage the current pid as the classid, ensuring unique identification.
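
The per-pid path scheme can be sketched as a stand-alone userspace helper
(names mirror the selftest macros in the diff below; this is an
illustrative re-implementation, not the selftest code itself):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Values taken from tools/testing/selftests/bpf/cgroup_helpers.c */
#define NETCLS_MOUNT_PATH "/sys/fs/cgroup/net_cls"
#define CGROUP_WORK_DIR "/cgroup-test-work-dir"

/* Build a classid work-dir path that is unique per tester pid. */
static int format_classid_path_pid(char *buf, size_t len, int pid)
{
	return snprintf(buf, len, "%s%s%d", NETCLS_MOUNT_PATH,
			CGROUP_WORK_DIR, pid);
}
```

For pid 1234 this yields '/sys/fs/cgroup/net_cls/cgroup-test-work-dir1234',
so concurrently running testers operate on distinct directories, and the
same pid doubles as a unique net_cls classid.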

Signed-off-by: Yafang Shao <laoar.shao-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
---
 tools/testing/selftests/bpf/cgroup_helpers.c  | 19 ++++++++++++-------
 tools/testing/selftests/bpf/cgroup_helpers.h  |  2 +-
 .../selftests/bpf/prog_tests/cgroup_v1v2.c    |  2 +-
 3 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/tools/testing/selftests/bpf/cgroup_helpers.c b/tools/testing/selftests/bpf/cgroup_helpers.c
index 9c36d1db9f94..e378fa057757 100644
--- a/tools/testing/selftests/bpf/cgroup_helpers.c
+++ b/tools/testing/selftests/bpf/cgroup_helpers.c
@@ -45,9 +45,12 @@
 #define format_parent_cgroup_path(buf, path) \
 	format_cgroup_path_pid(buf, path, getppid())
 
-#define format_classid_path(buf)				\
-	snprintf(buf, sizeof(buf), "%s%s", NETCLS_MOUNT_PATH,	\
-		 CGROUP_WORK_DIR)
+#define format_classid_path_pid(buf, pid)				\
+	snprintf(buf, sizeof(buf), "%s%s%d", NETCLS_MOUNT_PATH,	\
+		 CGROUP_WORK_DIR, pid)
+
+#define format_classid_path(buf)	\
+	format_classid_path_pid(buf, getpid())
 
 static __thread bool cgroup_workdir_mounted;
 
@@ -546,15 +549,17 @@ int setup_classid_environment(void)
 
 /**
  * set_classid() - Set a cgroupv1 net_cls classid
- * @id: the numeric classid
  *
- * Writes the passed classid into the cgroup work dir's net_cls.classid
+ * Writes the classid into the cgroup work dir's net_cls.classid
  * file in order to later on trigger socket tagging.
  *
+ * To make sure different classid testers have different classids, we use
+ * the current pid as the classid by default.
+ *
  * On success, it returns 0, otherwise on failure it returns 1. If there
  * is a failure, it prints the error to stderr.
  */
-int set_classid(unsigned int id)
+int set_classid(void)
 {
 	char cgroup_workdir[PATH_MAX - 42];
 	char cgroup_classid_path[PATH_MAX + 1];
@@ -570,7 +575,7 @@ int set_classid(unsigned int id)
 		return 1;
 	}
 
-	if (dprintf(fd, "%u\n", id) < 0) {
+	if (dprintf(fd, "%u\n", getpid()) < 0) {
 		log_err("Setting cgroup classid");
 		rc = 1;
 	}
diff --git a/tools/testing/selftests/bpf/cgroup_helpers.h b/tools/testing/selftests/bpf/cgroup_helpers.h
index 5c2cb9c8b546..92fc41daf4a4 100644
--- a/tools/testing/selftests/bpf/cgroup_helpers.h
+++ b/tools/testing/selftests/bpf/cgroup_helpers.h
@@ -29,7 +29,7 @@ int setup_cgroup_environment(void);
 void cleanup_cgroup_environment(void);
 
 /* cgroupv1 related */
-int set_classid(unsigned int id);
+int set_classid(void);
 int join_classid(void);
 
 int setup_classid_environment(void);
diff --git a/tools/testing/selftests/bpf/prog_tests/cgroup_v1v2.c b/tools/testing/selftests/bpf/prog_tests/cgroup_v1v2.c
index 9026b42914d3..addf720428f7 100644
--- a/tools/testing/selftests/bpf/prog_tests/cgroup_v1v2.c
+++ b/tools/testing/selftests/bpf/prog_tests/cgroup_v1v2.c
@@ -71,7 +71,7 @@ void test_cgroup_v1v2(void)
 	}
 	ASSERT_OK(run_test(cgroup_fd, server_fd, false), "cgroup-v2-only");
 	setup_classid_environment();
-	set_classid(42);
+	set_classid();
 	ASSERT_OK(run_test(cgroup_fd, server_fd, true), "cgroup-v1v2");
 	cleanup_classid_environment();
 	close(server_fd);
-- 
2.30.1 (Apple Git-130)


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [RFC PATCH bpf-next 7/8] selftests/bpf: Add new cgroup helper get_classid_cgroup_id()
       [not found] ` <20230922112846.4265-1-laoar.shao-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
                     ` (5 preceding siblings ...)
  2023-09-22 11:28   ` [RFC PATCH bpf-next 6/8] selftests/bpf: Add parallel support for classid Yafang Shao
@ 2023-09-22 11:28   ` Yafang Shao
  2023-09-22 11:28   ` [RFC PATCH bpf-next 8/8] selftests/bpf: Add selftests for cgroup controller Yafang Shao
                     ` (2 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Yafang Shao @ 2023-09-22 11:28 UTC (permalink / raw)
  To: ast-DgEjT+Ai2ygdnm+yROfE0A, daniel-FeC+5ew28dpmcu3hnIyYJQ,
	john.fastabend-Re5JQEeQqe8AvxtiuMwx3w,
	andrii-DgEjT+Ai2ygdnm+yROfE0A, martin.lau-fxUVXftIFDnyG1zEObXtfA,
	song-DgEjT+Ai2ygdnm+yROfE0A, yonghong.song-fxUVXftIFDnyG1zEObXtfA,
	kpsingh-DgEjT+Ai2ygdnm+yROfE0A, sdf-hpIqsD4AKlfQT0dZR+AlfA,
	haoluo-hpIqsD4AKlfQT0dZR+AlfA, jolsa-DgEjT+Ai2ygdnm+yROfE0A,
	tj-DgEjT+Ai2ygdnm+yROfE0A, lizefan.x-EC8Uxl6Npydl57MIdRCFDg,
	hannes-druUgvl0LCNAfugRpC6u6w, yosryahmed-hpIqsD4AKlfQT0dZR+AlfA,
	mkoutny-IBi9RG/b67k
  Cc: cgroups-u79uwXL29TY76Z2rM5mHXA, bpf-u79uwXL29TY76Z2rM5mHXA,
	Yafang Shao

Introduce a new helper function to retrieve the cgroup ID from a net_cls
cgroup directory.

Signed-off-by: Yafang Shao <laoar.shao-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
---
 tools/testing/selftests/bpf/cgroup_helpers.c | 28 +++++++++++++++-----
 tools/testing/selftests/bpf/cgroup_helpers.h |  1 +
 2 files changed, 23 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/bpf/cgroup_helpers.c b/tools/testing/selftests/bpf/cgroup_helpers.c
index e378fa057757..7cb2c9597b8f 100644
--- a/tools/testing/selftests/bpf/cgroup_helpers.c
+++ b/tools/testing/selftests/bpf/cgroup_helpers.c
@@ -417,26 +417,23 @@ int create_and_get_cgroup(const char *relative_path)
 }
 
 /**
- * get_cgroup_id() - Get cgroup id for a particular cgroup path
- * @relative_path: The cgroup path, relative to the workdir, to join
+ * get_cgroup_id_from_path() - Get cgroup id for a particular cgroup path
+ * @cgroup_workdir: The absolute cgroup path
  *
  * On success, it returns the cgroup id. On failure it returns 0,
  * which is an invalid cgroup id.
  * If there is a failure, it prints the error to stderr.
  */
-unsigned long long get_cgroup_id(const char *relative_path)
+unsigned long long get_cgroup_id_from_path(const char *cgroup_workdir)
 {
 	int dirfd, err, flags, mount_id, fhsize;
 	union {
 		unsigned long long cgid;
 		unsigned char raw_bytes[8];
 	} id;
-	char cgroup_workdir[PATH_MAX + 1];
 	struct file_handle *fhp, *fhp2;
 	unsigned long long ret = 0;
 
-	format_cgroup_path(cgroup_workdir, relative_path);
-
 	dirfd = AT_FDCWD;
 	flags = 0;
 	fhsize = sizeof(*fhp);
@@ -472,6 +469,14 @@ unsigned long long get_cgroup_id(const char *relative_path)
 	return ret;
 }
 
+unsigned long long get_cgroup_id(const char *relative_path)
+{
+	char cgroup_workdir[PATH_MAX + 1];
+
+	format_cgroup_path(cgroup_workdir, relative_path);
+	return get_cgroup_id_from_path(cgroup_workdir);
+}
+
 int cgroup_setup_and_join(const char *path) {
 	int cg_fd;
 
@@ -617,3 +622,14 @@ void cleanup_classid_environment(void)
 	join_cgroup_from_top(NETCLS_MOUNT_PATH);
 	nftw(cgroup_workdir, nftwfunc, WALK_FD_LIMIT, FTW_DEPTH | FTW_MOUNT);
 }
+
+/**
+ * get_classid_cgroup_id() - Get the cgroup id of a net_cls cgroup
+ */
+unsigned long long get_classid_cgroup_id(void)
+{
+	char cgroup_workdir[PATH_MAX + 1];
+
+	format_classid_path(cgroup_workdir);
+	return get_cgroup_id_from_path(cgroup_workdir);
+}
diff --git a/tools/testing/selftests/bpf/cgroup_helpers.h b/tools/testing/selftests/bpf/cgroup_helpers.h
index 92fc41daf4a4..e71da4ef031b 100644
--- a/tools/testing/selftests/bpf/cgroup_helpers.h
+++ b/tools/testing/selftests/bpf/cgroup_helpers.h
@@ -31,6 +31,7 @@ void cleanup_cgroup_environment(void);
 /* cgroupv1 related */
 int set_classid(void);
 int join_classid(void);
+unsigned long long get_classid_cgroup_id(void);
 
 int setup_classid_environment(void);
 void cleanup_classid_environment(void);
-- 
2.30.1 (Apple Git-130)


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [RFC PATCH bpf-next 8/8] selftests/bpf: Add selftests for cgroup controller
       [not found] ` <20230922112846.4265-1-laoar.shao-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
                     ` (6 preceding siblings ...)
  2023-09-22 11:28   ` [RFC PATCH bpf-next 7/8] selftests/bpf: Add new cgroup helper get_classid_cgroup_id() Yafang Shao
@ 2023-09-22 11:28   ` Yafang Shao
  2023-09-22 16:52   ` [RFC PATCH bpf-next 0/8] bpf, cgroup: Add bpf support " Tejun Heo
  2023-09-25 18:22   ` Kui-Feng Lee
  9 siblings, 0 replies; 21+ messages in thread
From: Yafang Shao @ 2023-09-22 11:28 UTC (permalink / raw)
  To: ast-DgEjT+Ai2ygdnm+yROfE0A, daniel-FeC+5ew28dpmcu3hnIyYJQ,
	john.fastabend-Re5JQEeQqe8AvxtiuMwx3w,
	andrii-DgEjT+Ai2ygdnm+yROfE0A, martin.lau-fxUVXftIFDnyG1zEObXtfA,
	song-DgEjT+Ai2ygdnm+yROfE0A, yonghong.song-fxUVXftIFDnyG1zEObXtfA,
	kpsingh-DgEjT+Ai2ygdnm+yROfE0A, sdf-hpIqsD4AKlfQT0dZR+AlfA,
	haoluo-hpIqsD4AKlfQT0dZR+AlfA, jolsa-DgEjT+Ai2ygdnm+yROfE0A,
	tj-DgEjT+Ai2ygdnm+yROfE0A, lizefan.x-EC8Uxl6Npydl57MIdRCFDg,
	hannes-druUgvl0LCNAfugRpC6u6w, yosryahmed-hpIqsD4AKlfQT0dZR+AlfA,
	mkoutny-IBi9RG/b67k
  Cc: cgroups-u79uwXL29TY76Z2rM5mHXA, bpf-u79uwXL29TY76Z2rM5mHXA,
	Yafang Shao

Add selftests for the cgroup controller on both cgroup1 and cgroup2.
The results are as follows:

  $ tools/testing/selftests/bpf/test_progs --name=cgroup_controller
  #40/1    cgroup_controller/test_cgroup1_controller:OK
  #40/2    cgroup_controller/test_invalid_cgroup_id:OK
  #40/3    cgroup_controller/test_sleepable_prog:OK
  #40/4    cgroup_controller/test_cgroup2_controller:OK
  #40      cgroup_controller:OK

Signed-off-by: Yafang Shao <laoar.shao-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
---
 .../bpf/prog_tests/cgroup_controller.c        | 149 ++++++++++++++++++
 .../bpf/progs/test_cgroup_controller.c        |  80 ++++++++++
 2 files changed, 229 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/cgroup_controller.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_cgroup_controller.c

diff --git a/tools/testing/selftests/bpf/prog_tests/cgroup_controller.c b/tools/testing/selftests/bpf/prog_tests/cgroup_controller.c
new file mode 100644
index 000000000000..f76ec1e65b2a
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/cgroup_controller.c
@@ -0,0 +1,149 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2023 Yafang Shao <laoar.shao-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> */
+
+#include <sys/types.h>
+#include <unistd.h>
+#include <test_progs.h>
+#include "cgroup_helpers.h"
+#include "test_cgroup_controller.skel.h"
+
+#define CGROUP2_DIR "/cgroup2_controller"
+
+static void bpf_cgroup1_controller(bool sleepable, __u64 cgrp_id)
+{
+	struct test_cgroup_controller *skel;
+	int err;
+
+	skel = test_cgroup_controller__open();
+	if (!ASSERT_OK_PTR(skel, "open"))
+		return;
+
+	skel->bss->target_pid = getpid();
+	skel->bss->ancestor_cgid = cgrp_id;
+
+	err = bpf_program__set_attach_target(skel->progs.fentry_run, 0, "bpf_fentry_test1");
+	if (!ASSERT_OK(err, "fentry_set_target"))
+		goto cleanup;
+
+	err = test_cgroup_controller__load(skel);
+	if (!ASSERT_OK(err, "load"))
+		goto cleanup;
+
+	/* Attach LSM prog first */
+	if (!sleepable) {
+		skel->links.lsm_net_cls = bpf_program__attach_lsm(skel->progs.lsm_net_cls);
+		if (!ASSERT_OK_PTR(skel->links.lsm_net_cls, "lsm_attach"))
+			goto cleanup;
+	} else {
+		skel->links.lsm_s_net_cls = bpf_program__attach_lsm(skel->progs.lsm_s_net_cls);
+		if (!ASSERT_OK_PTR(skel->links.lsm_s_net_cls, "lsm_attach_sleepable"))
+			goto cleanup;
+	}
+
+	/* LSM prog will be triggered when attaching fentry */
+	skel->links.fentry_run = bpf_program__attach_trace(skel->progs.fentry_run);
+	if (cgrp_id) {
+		ASSERT_NULL(skel->links.fentry_run, "fentry_attach_fail");
+	} else {
+		if (!ASSERT_OK_PTR(skel->links.fentry_run, "fentry_attach_success"))
+			goto cleanup;
+	}
+
+cleanup:
+	test_cgroup_controller__destroy(skel);
+}
+
+static void cgroup_controller_on_cgroup1(bool sleepable, bool invalid_cgid)
+{
+	__u64 cgrp_id;
+	int err;
+
+	/* Setup cgroup1 hierarchy */
+	err = setup_classid_environment();
+	if (!ASSERT_OK(err, "setup_classid_environment"))
+		return;
+
+	err = join_classid();
+	if (!ASSERT_OK(err, "join_cgroup1"))
+		goto cleanup;
+
+	cgrp_id = get_classid_cgroup_id();
+	if (invalid_cgid)
+		bpf_cgroup1_controller(sleepable, 0);
+	else
+		bpf_cgroup1_controller(sleepable, cgrp_id);
+
+cleanup:
+	/* Cleanup cgroup1 hierarchy */
+	cleanup_classid_environment();
+}
+
+static void bpf_cgroup2_controller(__u64 cgrp_id)
+{
+	struct test_cgroup_controller *skel;
+	int err;
+
+	skel = test_cgroup_controller__open();
+	if (!ASSERT_OK_PTR(skel, "open"))
+		return;
+
+	skel->bss->target_pid = getpid();
+	skel->bss->ancestor_cgid = cgrp_id;
+
+	err = bpf_program__set_attach_target(skel->progs.fentry_run, 0, "bpf_fentry_test1");
+	if (!ASSERT_OK(err, "fentry_set_target"))
+		goto cleanup;
+
+	err = test_cgroup_controller__load(skel);
+	if (!ASSERT_OK(err, "load"))
+		goto cleanup;
+
+	skel->links.lsm_cpu = bpf_program__attach_lsm(skel->progs.lsm_cpu);
+	if (!ASSERT_OK_PTR(skel->links.lsm_cpu, "lsm_attach"))
+		goto cleanup;
+
+	skel->links.fentry_run = bpf_program__attach_trace(skel->progs.fentry_run);
+	ASSERT_NULL(skel->links.fentry_run, "fentry_attach_fail");
+
+cleanup:
+	test_cgroup_controller__destroy(skel);
+}
+
+static void cgroup_controller_on_cgroup2(void)
+{
+	int cgrp_fd, err;
+	__u64 cgrp_id;
+
+	err = setup_cgroup_environment();
+	if (!ASSERT_OK(err, "cgrp2_env_setup"))
+		goto cleanup;
+
+	cgrp_fd = test__join_cgroup(CGROUP2_DIR);
+	if (!ASSERT_GE(cgrp_fd, 0, "cgroup_join_cgroup2"))
+		goto cleanup;
+
+	err = enable_controllers(CGROUP2_DIR, "cpu");
+	if (!ASSERT_OK(err, "cgrp2_env_setup"))
+		goto close_fd;
+
+	cgrp_id = get_cgroup_id(CGROUP2_DIR);
+	if (!ASSERT_NEQ(cgrp_id, 0, "cgroup2_id"))
+		goto close_fd;
+	bpf_cgroup2_controller(cgrp_id);
+
+close_fd:
+	close(cgrp_fd);
+cleanup:
+	cleanup_cgroup_environment();
+}
+
+void test_cgroup_controller(void)
+{
+	if (test__start_subtest("test_cgroup1_controller"))
+		cgroup_controller_on_cgroup1(false, false);
+	if (test__start_subtest("test_invalid_cgroup_id"))
+		cgroup_controller_on_cgroup1(false, true);
+	if (test__start_subtest("test_sleepable_prog"))
+		cgroup_controller_on_cgroup1(true, false);
+	if (test__start_subtest("test_cgroup2_controller"))
+		cgroup_controller_on_cgroup2();
+}
diff --git a/tools/testing/selftests/bpf/progs/test_cgroup_controller.c b/tools/testing/selftests/bpf/progs/test_cgroup_controller.c
new file mode 100644
index 000000000000..958804a34794
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_cgroup_controller.c
@@ -0,0 +1,80 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2023 Yafang Shao <laoar.shao-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> */
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_core_read.h>
+
+__u64 ancestor_cgid;
+int target_pid;
+
+struct cgroup *bpf_cgroup_acquire_from_id_within_controller(u64 cgid, int ssid) __ksym;
+u64 bpf_cgroup_id_from_task_within_controller(struct task_struct *task, int ssid) __ksym;
+u64 bpf_cgroup_ancestor_id_from_task_within_controller(struct task_struct *task,
+						       int ssid, int level) __ksym;
+long bpf_task_under_cgroup(struct task_struct *task, struct cgroup *ancestor) __ksym;
+void bpf_cgroup_release(struct cgroup *p) __ksym;
+
+static int bpf_link_create_verify(int cmd, union bpf_attr *attr, unsigned int size, int ssid)
+{
+	struct cgroup *cgrp = NULL;
+	struct task_struct *task;
+	__u64 cgid, root_cgid;
+	int ret = 0;
+
+	if (cmd != BPF_LINK_CREATE)
+		return 0;
+
+	task = bpf_get_current_task_btf();
+	/* Only act on the test's own pid, so testers can run in parallel */
+	if (target_pid != BPF_CORE_READ(task, pid))
+		return 0;
+
+	cgrp = bpf_cgroup_acquire_from_id_within_controller(ancestor_cgid, ssid);
+	if (!cgrp)
+		goto out;
+
+	if (bpf_task_under_cgroup(task, cgrp))
+		ret = -1;
+	bpf_cgroup_release(cgrp);
+
+	cgid = bpf_cgroup_id_from_task_within_controller(task, ssid);
+	if (cgid != ancestor_cgid)
+		ret = 0;
+
+	/* The level of root cgroup is 0, and its id is always 1 */
+	root_cgid = bpf_cgroup_ancestor_id_from_task_within_controller(task, ssid, 0);
+	if (root_cgid != 1)
+		ret = 0;
+
+out:
+	return ret;
+}
+
+SEC("lsm/bpf")
+int BPF_PROG(lsm_net_cls, int cmd, union bpf_attr *attr, unsigned int size)
+{
+	return bpf_link_create_verify(cmd, attr, size, net_cls_cgrp_id);
+}
+
+SEC("lsm.s/bpf")
+int BPF_PROG(lsm_s_net_cls, int cmd, union bpf_attr *attr, unsigned int size)
+{
+	return bpf_link_create_verify(cmd, attr, size, net_cls_cgrp_id);
+}
+
+SEC("lsm/bpf")
+int BPF_PROG(lsm_cpu, int cmd, union bpf_attr *attr, unsigned int size)
+{
+	return bpf_link_create_verify(cmd, attr, size, cpu_cgrp_id);
+}
+
+SEC("fentry")
+int BPF_PROG(fentry_run)
+{
+	return 0;
+}
+
+char _license[] SEC("license") = "GPL";
-- 
2.30.1 (Apple Git-130)


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [RFC PATCH bpf-next 0/8] bpf, cgroup: Add bpf support for cgroup controller
       [not found] ` <20230922112846.4265-1-laoar.shao-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
                     ` (7 preceding siblings ...)
  2023-09-22 11:28   ` [RFC PATCH bpf-next 8/8] selftests/bpf: Add selftests for cgroup controller Yafang Shao
@ 2023-09-22 16:52   ` Tejun Heo
       [not found]     ` <ZQ3GQmYrYyKAg2uK-NiLfg/pYEd1N0TnZuCh8vA@public.gmane.org>
  2023-09-25 18:22   ` Kui-Feng Lee
  9 siblings, 1 reply; 21+ messages in thread
From: Tejun Heo @ 2023-09-22 16:52 UTC (permalink / raw)
  To: Yafang Shao
  Cc: ast-DgEjT+Ai2ygdnm+yROfE0A, daniel-FeC+5ew28dpmcu3hnIyYJQ,
	john.fastabend-Re5JQEeQqe8AvxtiuMwx3w,
	andrii-DgEjT+Ai2ygdnm+yROfE0A, martin.lau-fxUVXftIFDnyG1zEObXtfA,
	song-DgEjT+Ai2ygdnm+yROfE0A, yonghong.song-fxUVXftIFDnyG1zEObXtfA,
	kpsingh-DgEjT+Ai2ygdnm+yROfE0A, sdf-hpIqsD4AKlfQT0dZR+AlfA,
	haoluo-hpIqsD4AKlfQT0dZR+AlfA, jolsa-DgEjT+Ai2ygdnm+yROfE0A,
	lizefan.x-EC8Uxl6Npydl57MIdRCFDg, hannes-druUgvl0LCNAfugRpC6u6w,
	yosryahmed-hpIqsD4AKlfQT0dZR+AlfA, mkoutny-IBi9RG/b67k,
	cgroups-u79uwXL29TY76Z2rM5mHXA, bpf-u79uwXL29TY76Z2rM5mHXA

Hello,

On Fri, Sep 22, 2023 at 11:28:38AM +0000, Yafang Shao wrote:
> - bpf_cgroup_id_from_task_within_controller
>   Retrieves the cgroup ID from a task within a specific cgroup controller.
> - bpf_cgroup_acquire_from_id_within_controller
>   Acquires the cgroup from a cgroup ID within a specific cgroup controller.
> - bpf_cgroup_ancestor_id_from_task_within_controller
>   Retrieves the ancestor cgroup ID from a task within a specific cgroup
>   controller.
> 
> The advantage of these new BPF kfuncs is their ability to abstract away the
> complexities of cgroup hierarchies, irrespective of whether they involve
> cgroup1 or cgroup2.

I'm afraid this is more likely to bring the unnecessary complexities of
cgroup1 into cgroup2.

> In the future, we may expand controller-based support to other BPF
> functionalities, such as bpf_cgrp_storage, the attachment and detachment
> of cgroups, skb_under_cgroup, and more.

I'm okay with minor / trivial quality of life improvements to cgroup1 but
this goes much beyond that and is starting to add complications for cgroup2
users, which I think is a pretty bad idea long term. I'm afraid I'm gonna
have to nack this approach.

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC PATCH bpf-next 0/8] bpf, cgroup: Add bpf support for cgroup controller
       [not found]     ` <ZQ3GQmYrYyKAg2uK-NiLfg/pYEd1N0TnZuCh8vA@public.gmane.org>
@ 2023-09-24  6:32       ` Yafang Shao
       [not found]         ` <CALOAHbA9-BT1daw-KXHtsrN=uRQyt-p6LU=BEpvF2Yk42A_Vxw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 21+ messages in thread
From: Yafang Shao @ 2023-09-24  6:32 UTC (permalink / raw)
  To: Tejun Heo
  Cc: ast-DgEjT+Ai2ygdnm+yROfE0A, daniel-FeC+5ew28dpmcu3hnIyYJQ,
	john.fastabend-Re5JQEeQqe8AvxtiuMwx3w,
	andrii-DgEjT+Ai2ygdnm+yROfE0A, martin.lau-fxUVXftIFDnyG1zEObXtfA,
	song-DgEjT+Ai2ygdnm+yROfE0A, yonghong.song-fxUVXftIFDnyG1zEObXtfA,
	kpsingh-DgEjT+Ai2ygdnm+yROfE0A, sdf-hpIqsD4AKlfQT0dZR+AlfA,
	haoluo-hpIqsD4AKlfQT0dZR+AlfA, jolsa-DgEjT+Ai2ygdnm+yROfE0A,
	lizefan.x-EC8Uxl6Npydl57MIdRCFDg, hannes-druUgvl0LCNAfugRpC6u6w,
	yosryahmed-hpIqsD4AKlfQT0dZR+AlfA, mkoutny-IBi9RG/b67k,
	cgroups-u79uwXL29TY76Z2rM5mHXA, bpf-u79uwXL29TY76Z2rM5mHXA

On Sat, Sep 23, 2023 at 12:52 AM Tejun Heo <tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org> wrote:
>
> Hello,
>
> On Fri, Sep 22, 2023 at 11:28:38AM +0000, Yafang Shao wrote:
> > - bpf_cgroup_id_from_task_within_controller
> >   Retrieves the cgroup ID from a task within a specific cgroup controller.
> > - bpf_cgroup_acquire_from_id_within_controller
> >   Acquires the cgroup from a cgroup ID within a specific cgroup controller.
> > - bpf_cgroup_ancestor_id_from_task_within_controller
> >   Retrieves the ancestor cgroup ID from a task within a specific cgroup
> >   controller.
> >
> > The advantage of these new BPF kfuncs is their ability to abstract away the
> > complexities of cgroup hierarchies, irrespective of whether they involve
> > cgroup1 or cgroup2.
>
> I'm afraid this is more likely to bring the unnecessary complexities of
> cgroup1 into cgroup2.

I concur with the idea that we should avoid introducing the
complexities of cgroup1 into cgroup2. Which specific change do you
believe might introduce these complexities into cgroup2? Is it the
modification within task_under_cgroup_hierarchy() or
cgroup_get_from_id()?

In fact, we have the option to utilize
bpf_cgroup_ancestor_id_from_task_within_controller() as a substitute
for bpf_task_under_cgroup(), which allows us to sidestep the need for
changes within task_under_cgroup_hierarchy() altogether.

>
> > In the future, we may expand controller-based support to other BPF
> > functionalities, such as bpf_cgrp_storage, the attachment and detachment
> > of cgroups, skb_under_cgroup, and more.
>
> I'm okay with minor / trivial quality of life improvements to cgroup1 but
> this goes much beyond that and is starting to complications to cgroup2
> users, which I think is a pretty bad idea long term. I'm afraid I'm gonna
> have to nack this approach.
>
> Thanks.
>
> --
> tejun



-- 
Regards
Yafang

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC PATCH bpf-next 0/8] bpf, cgroup: Add bpf support for cgroup controller
       [not found] ` <20230922112846.4265-1-laoar.shao-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
                     ` (8 preceding siblings ...)
  2023-09-22 16:52   ` [RFC PATCH bpf-next 0/8] bpf, cgroup: Add bpf support " Tejun Heo
@ 2023-09-25 18:22   ` Kui-Feng Lee
  2023-09-25 18:22     ` Kui-Feng Lee
       [not found]     ` <9e83bda8-ea1b-75b9-c55f-61cf11b4cd83-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
  9 siblings, 2 replies; 21+ messages in thread
From: Kui-Feng Lee @ 2023-09-25 18:22 UTC (permalink / raw)
  To: Yafang Shao, ast-DgEjT+Ai2ygdnm+yROfE0A,
	daniel-FeC+5ew28dpmcu3hnIyYJQ,
	john.fastabend-Re5JQEeQqe8AvxtiuMwx3w,
	andrii-DgEjT+Ai2ygdnm+yROfE0A, martin.lau-fxUVXftIFDnyG1zEObXtfA,
	song-DgEjT+Ai2ygdnm+yROfE0A, yonghong.song-fxUVXftIFDnyG1zEObXtfA,
	kpsingh-DgEjT+Ai2ygdnm+yROfE0A, sdf-hpIqsD4AKlfQT0dZR+AlfA,
	haoluo-hpIqsD4AKlfQT0dZR+AlfA, jolsa-DgEjT+Ai2ygdnm+yROfE0A,
	tj-DgEjT+Ai2ygdnm+yROfE0A, lizefan.x-EC8Uxl6Npydl57MIdRCFDg,
	hannes-druUgvl0LCNAfugRpC6u6w, yosryahmed-hpIqsD4AKlfQT0dZR+AlfA,
	mkoutny-IBi9RG/b67k
  Cc: cgroups-u79uwXL29TY76Z2rM5mHXA, bpf-u79uwXL29TY76Z2rM5mHXA



On 9/22/23 04:28, Yafang Shao wrote:
> Currently, BPF is primarily confined to cgroup2, with the exception of
> cgroup_iter, which supports cgroup1 fds. Unfortunately, this limitation
> prevents us from harnessing the full potential of BPF within cgroup1
> environments.
> 
> In our endeavor to seamlessly integrate BPF within our Kubernetes
> environment, which relies on cgroup1, we have been exploring the
> possibility of transitioning to cgroup2. While this transition is
> forward-looking, it poses challenges due to the necessity for numerous
> applications to adapt.
> 
> While we acknowledge that cgroup2 represents the future, we also recognize
> that such transitions demand time and effort. As a result, we are
> considering an alternative approach. Instead of migrating to cgroup2, we
> are contemplating modifications to the BPF kernel code to ensure
> compatibility with cgroup1. These adjustments appear to be relatively
> minor, making this option more feasible.

Do you mean giving up moving to cgroup2? Or, is it just a tentative
solution?


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC PATCH bpf-next 0/8] bpf, cgroup: Add bpf support for cgroup controller
       [not found]         ` <CALOAHbA9-BT1daw-KXHtsrN=uRQyt-p6LU=BEpvF2Yk42A_Vxw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2023-09-25 18:43           ` Tejun Heo
  2023-09-25 18:43             ` Tejun Heo
       [not found]             ` <ZRHU6MfwqRxjBFUH-NiLfg/pYEd1N0TnZuCh8vA@public.gmane.org>
  0 siblings, 2 replies; 21+ messages in thread
From: Tejun Heo @ 2023-09-25 18:43 UTC (permalink / raw)
  To: Yafang Shao
  Cc: ast-DgEjT+Ai2ygdnm+yROfE0A, daniel-FeC+5ew28dpmcu3hnIyYJQ,
	john.fastabend-Re5JQEeQqe8AvxtiuMwx3w,
	andrii-DgEjT+Ai2ygdnm+yROfE0A, martin.lau-fxUVXftIFDnyG1zEObXtfA,
	song-DgEjT+Ai2ygdnm+yROfE0A, yonghong.song-fxUVXftIFDnyG1zEObXtfA,
	kpsingh-DgEjT+Ai2ygdnm+yROfE0A, sdf-hpIqsD4AKlfQT0dZR+AlfA,
	haoluo-hpIqsD4AKlfQT0dZR+AlfA, jolsa-DgEjT+Ai2ygdnm+yROfE0A,
	lizefan.x-EC8Uxl6Npydl57MIdRCFDg, hannes-druUgvl0LCNAfugRpC6u6w,
	yosryahmed-hpIqsD4AKlfQT0dZR+AlfA, mkoutny-IBi9RG/b67k,
	cgroups-u79uwXL29TY76Z2rM5mHXA, bpf-u79uwXL29TY76Z2rM5mHXA

Hello,

On Sun, Sep 24, 2023 at 02:32:14PM +0800, Yafang Shao wrote:
> On Sat, Sep 23, 2023 at 12:52 AM Tejun Heo <tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org> wrote:
> >
> > Hello,
> >
> > On Fri, Sep 22, 2023 at 11:28:38AM +0000, Yafang Shao wrote:
> > > - bpf_cgroup_id_from_task_within_controller
> > >   Retrieves the cgroup ID from a task within a specific cgroup controller.
> > > - bpf_cgroup_acquire_from_id_within_controller
> > >   Acquires the cgroup from a cgroup ID within a specific cgroup controller.
> > > - bpf_cgroup_ancestor_id_from_task_within_controller
> > >   Retrieves the ancestor cgroup ID from a task within a specific cgroup
> > >   controller.
> > >
> > > The advantage of these new BPF kfuncs is their ability to abstract away the
> > > complexities of cgroup hierarchies, irrespective of whether they involve
> > > cgroup1 or cgroup2.
> >
> > I'm afraid this is more likely to bring the unnecessary complexities of
> > cgroup1 into cgroup2.
> 
> I concur with the idea that we should avoid introducing the
> complexities of cgroup1 into cgroup2. Which specific change do you
> believe might introduce these complexities into cgroup2? Is it the
> modification within task_under_cgroup_hierarchy() or
> cgroup_get_from_id()?

The helpers you are adding only make sense for cgroup1. e.g.
bpf_cgroup_ancestor_id_from_task_within_controller() makes no sense in
cgroup2. The ancestor ids don't change according to controllers. The only
thing you would ask in cgroup2 is the level at which a given controller is
enabled at along with the straight-forward "where am I in the hierarchy?"
questions. I really don't want to expose interfaces which assume that the
hierarchies change according to the controller in question.

Also, as pointed out before, this doesn't cover cgroup1 named hierarchies
which leaves out a good portion of cgroup1 use cases.

> In fact, we have the option to utilize
> bpf_cgroup_ancestor_id_from_task_within_controller() as a substitute
> for bpf_task_under_cgroup(), which allows us to sidestep the need for
> changes within task_under_cgroup_hierarchy() altogether.

I don't think this is the direction we should take. If you really want,
please tie the interface directly to the hierarchies. Don't hitch hierarchy
identification on the controllers. e.g. Introduce a cgroup1-only interface
which takes both hierarchy ID and cgroup ID to operate on them.

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC PATCH bpf-next 0/8] bpf, cgroup: Add bpf support for cgroup controller
       [not found]             ` <ZRHU6MfwqRxjBFUH-NiLfg/pYEd1N0TnZuCh8vA@public.gmane.org>
@ 2023-09-26  3:01               ` Yafang Shao
  2023-09-26  3:01                 ` Yafang Shao
  2023-09-26 18:25                 ` Tejun Heo
  0 siblings, 2 replies; 21+ messages in thread
From: Yafang Shao @ 2023-09-26  3:01 UTC (permalink / raw)
  To: Tejun Heo
  Cc: ast-DgEjT+Ai2ygdnm+yROfE0A, daniel-FeC+5ew28dpmcu3hnIyYJQ,
	john.fastabend-Re5JQEeQqe8AvxtiuMwx3w,
	andrii-DgEjT+Ai2ygdnm+yROfE0A, martin.lau-fxUVXftIFDnyG1zEObXtfA,
	song-DgEjT+Ai2ygdnm+yROfE0A, yonghong.song-fxUVXftIFDnyG1zEObXtfA,
	kpsingh-DgEjT+Ai2ygdnm+yROfE0A, sdf-hpIqsD4AKlfQT0dZR+AlfA,
	haoluo-hpIqsD4AKlfQT0dZR+AlfA, jolsa-DgEjT+Ai2ygdnm+yROfE0A,
	lizefan.x-EC8Uxl6Npydl57MIdRCFDg, hannes-druUgvl0LCNAfugRpC6u6w,
	yosryahmed-hpIqsD4AKlfQT0dZR+AlfA, mkoutny-IBi9RG/b67k,
	cgroups-u79uwXL29TY76Z2rM5mHXA, bpf-u79uwXL29TY76Z2rM5mHXA

On Tue, Sep 26, 2023 at 2:43 AM Tejun Heo <tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org> wrote:
>
> Hello,
>
> On Sun, Sep 24, 2023 at 02:32:14PM +0800, Yafang Shao wrote:
> > On Sat, Sep 23, 2023 at 12:52 AM Tejun Heo <tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org> wrote:
> > >
> > > Hello,
> > >
> > > On Fri, Sep 22, 2023 at 11:28:38AM +0000, Yafang Shao wrote:
> > > > - bpf_cgroup_id_from_task_within_controller
> > > >   Retrieves the cgroup ID from a task within a specific cgroup controller.
> > > > - bpf_cgroup_acquire_from_id_within_controller
> > > >   Acquires the cgroup from a cgroup ID within a specific cgroup controller.
> > > > - bpf_cgroup_ancestor_id_from_task_within_controller
> > > >   Retrieves the ancestor cgroup ID from a task within a specific cgroup
> > > >   controller.
> > > >
> > > > The advantage of these new BPF kfuncs is their ability to abstract away the
> > > > complexities of cgroup hierarchies, irrespective of whether they involve
> > > > cgroup1 or cgroup2.
> > >
> > > I'm afraid this is more likely to bring the unnecessary complexities of
> > > cgroup1 into cgroup2.
> >
> > I concur with the idea that we should avoid introducing the
> > complexities of cgroup1 into cgroup2. Which specific change do you
> > believe might introduce these complexities into cgroup2? Is it the
> > modification within task_under_cgroup_hierarchy() or
> > cgroup_get_from_id()?
>
> The helpers you are adding only make sense for cgroup1. e.g.
> bpf_cgroup_ancestor_id_from_task_within_controller() makes no sense in
> cgroup2. The ancestor ids don't change according to controllers. The only
> thing you would ask in cgroup2 is the level at which a given controller is
> enabled, along with the straightforward "where am I in the hierarchy?"
> questions. I really don't want to expose interfaces which assume that the
> hierarchies change according to the controller in question.

Makes sense.

>
> Also, as pointed out before, this doesn't cover cgroup1 named hierarchies
> which leaves out a good portion of cgroup1 use cases.
>
> > In fact, we have the option to utilize
> > bpf_cgroup_ancestor_id_from_task_within_controller() as a substitute
> > for bpf_task_under_cgroup(), which allows us to sidestep the need for
> > changes within task_under_cgroup_hierarchy() altogether.
>
> I don't think this is the direction we should take. If you really want,
> please tie the interface directly to the hierarchies. Don't hitch hierarchy
> identification on the controllers. e.g. Introduce a cgroup1-only interface
> which takes both hierarchy ID and cgroup ID to operate on them.

Thanks for your suggestion. I will think about it.
BTW, I can't find the hierarchy ID of systemd (/sys/fs/cgroup/systemd)
in /proc/cgroups. Is this intentional as part of the design, or might
it be possible that we overlooked it?
In userspace, where can we find the hierarchy ID of a named hierarchy?

-- 
Regards
Yafang

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC PATCH bpf-next 0/8] bpf, cgroup: Add bpf support for cgroup controller
       [not found]     ` <9e83bda8-ea1b-75b9-c55f-61cf11b4cd83-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
@ 2023-09-26  3:08       ` Yafang Shao
  2023-09-26  3:08         ` Yafang Shao
  0 siblings, 1 reply; 21+ messages in thread
From: Yafang Shao @ 2023-09-26  3:08 UTC (permalink / raw)
  To: Kui-Feng Lee
  Cc: ast-DgEjT+Ai2ygdnm+yROfE0A, daniel-FeC+5ew28dpmcu3hnIyYJQ,
	john.fastabend-Re5JQEeQqe8AvxtiuMwx3w,
	andrii-DgEjT+Ai2ygdnm+yROfE0A, martin.lau-fxUVXftIFDnyG1zEObXtfA,
	song-DgEjT+Ai2ygdnm+yROfE0A, yonghong.song-fxUVXftIFDnyG1zEObXtfA,
	kpsingh-DgEjT+Ai2ygdnm+yROfE0A, sdf-hpIqsD4AKlfQT0dZR+AlfA,
	haoluo-hpIqsD4AKlfQT0dZR+AlfA, jolsa-DgEjT+Ai2ygdnm+yROfE0A,
	tj-DgEjT+Ai2ygdnm+yROfE0A, lizefan.x-EC8Uxl6Npydl57MIdRCFDg,
	hannes-druUgvl0LCNAfugRpC6u6w, yosryahmed-hpIqsD4AKlfQT0dZR+AlfA,
	mkoutny-IBi9RG/b67k, cgroups-u79uwXL29TY76Z2rM5mHXA,
	bpf-u79uwXL29TY76Z2rM5mHXA

On Tue, Sep 26, 2023 at 2:22 AM Kui-Feng Lee <sinquersw-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
>
>
>
> On 9/22/23 04:28, Yafang Shao wrote:
> > Currently, BPF is primarily confined to cgroup2, with the exception of
> > cgroup_iter, which supports cgroup1 fds. Unfortunately, this limitation
> > prevents us from harnessing the full potential of BPF within cgroup1
> > environments.
> >
> > In our endeavor to seamlessly integrate BPF within our Kubernetes
> > environment, which relies on cgroup1, we have been exploring the
> > possibility of transitioning to cgroup2. While this transition is
> > forward-looking, it poses challenges due to the necessity for numerous
> > applications to adapt.
> >
> > While we acknowledge that cgroup2 represents the future, we also recognize
> > that such transitions demand time and effort. As a result, we are
> > considering an alternative approach. Instead of migrating to cgroup2, we
> > are contemplating modifications to the BPF kernel code to ensure
> > compatibility with cgroup1. These adjustments appear to be relatively
> > minor, making this option more feasible.
>
> Do you mean giving up moving to cgroup2? Or, is it just a tentative
> solution?

Our transition to cgroup2 won't happen in the near future. It's a
long-term job.

-- 
Regards
Yafang

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC PATCH bpf-next 0/8] bpf, cgroup: Add bpf support for cgroup controller
  2023-09-26  3:01               ` Yafang Shao
  2023-09-26  3:01                 ` Yafang Shao
@ 2023-09-26 18:25                 ` Tejun Heo
  2023-09-27  2:27                   ` Yafang Shao
  1 sibling, 1 reply; 21+ messages in thread
From: Tejun Heo @ 2023-09-26 18:25 UTC (permalink / raw)
  To: Yafang Shao
  Cc: ast, daniel, john.fastabend, andrii, martin.lau, song,
	yonghong.song, kpsingh, sdf, haoluo, jolsa, lizefan.x, hannes,
	yosryahmed, mkoutny, cgroups, bpf

Hello,

On Tue, Sep 26, 2023 at 11:01:08AM +0800, Yafang Shao wrote:
> Thanks for your suggestion. I will think about it.
> BTW, I can't find the hierarchy ID of systemd (/sys/fs/cgroup/systemd)
> in /proc/cgroups. Is this intentional as part of the design, or might
> it be possible that we overlooked it?
> In userspace, where can we find the hierarchy ID of a named hierarchy?

Yeah, /proc/cgroups only prints the hierarchies which have controllers
attached to them. The file is pretty sad in general. However,
/proc/PID/cgroup prints all existing hierarchies along with their IDs and
identifiers (controllers or names). Hopefully, that should be enough?
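
Concretely, each line of /proc/PID/cgroup has the form
`hierarchy-ID:controller-list:cgroup-path`, and a cgroup1 named hierarchy
carries a `name=` identifier in the second field. A quick shell sketch
(the sample line is illustrative; read the real file on a live system,
where values will differ):

```shell
# Sample /proc/self/cgroup line from a cgroup1 named hierarchy
# (illustrative values; a real system's output varies):
line="1:name=systemd:/system.slice/sshd.service"

# The first colon-separated field is the hierarchy ID.
hier_id=${line%%:*}

# The second field identifies the hierarchy: a controller list,
# or "name=<name>" for a named hierarchy.
ident=$(printf '%s' "$line" | cut -d: -f2)

echo "hierarchy ID: $hier_id, identifier: $ident"
# -> hierarchy ID: 1, identifier: name=systemd
```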

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC PATCH bpf-next 0/8] bpf, cgroup: Add bpf support for cgroup controller
  2023-09-26 18:25                 ` Tejun Heo
@ 2023-09-27  2:27                   ` Yafang Shao
  0 siblings, 0 replies; 21+ messages in thread
From: Yafang Shao @ 2023-09-27  2:27 UTC (permalink / raw)
  To: Tejun Heo
  Cc: ast, daniel, john.fastabend, andrii, martin.lau, song,
	yonghong.song, kpsingh, sdf, haoluo, jolsa, lizefan.x, hannes,
	yosryahmed, mkoutny, cgroups, bpf

On Wed, Sep 27, 2023 at 2:25 AM Tejun Heo <tj@kernel.org> wrote:
>
> Hello,
>
> On Tue, Sep 26, 2023 at 11:01:08AM +0800, Yafang Shao wrote:
> > Thanks for your suggestion. I will think about it.
> > BTW, I can't find the hierarchy ID of systemd (/sys/fs/cgroup/systemd)
> > in /proc/cgroups. Is this intentional as part of the design, or might
> > it be possible that we overlooked it?
> > In userspace, where can we find the hierarchy ID of a named hierarchy?
>
> Yeah, /proc/cgroups only prints the hierarchies which have controllers
> attached to them. The file is pretty sad in general. However,
> /proc/PID/cgroup prints all existing hierarchies along with their IDs and
> identifiers (controllers or names). Hopefully, that should be enough?

Right, /proc/self/cgroup can show the hierarchies. Thanks for your explanation.

-- 
Regards
Yafang

^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2023-09-27  3:12 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2023-09-22 11:28 [RFC PATCH bpf-next 0/8] bpf, cgroup: Add bpf support for cgroup controller Yafang Shao
     [not found] ` <20230922112846.4265-1-laoar.shao-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
2023-09-22 11:28   ` [RFC PATCH bpf-next 1/8] bpf: Fix missed rcu read lock in bpf_task_under_cgroup() Yafang Shao
2023-09-22 11:28   ` [RFC PATCH bpf-next 2/8] cgroup: Enable task_under_cgroup_hierarchy() on cgroup1 Yafang Shao
2023-09-22 11:28   ` [RFC PATCH bpf-next 3/8] cgroup: Add cgroup_get_from_id_within_subsys() Yafang Shao
2023-09-22 11:28   ` [RFC PATCH bpf-next 4/8] bpf: Add new kfuncs support for cgroup controller Yafang Shao
2023-09-22 11:28   ` [RFC PATCH bpf-next 5/8] selftests/bpf: Fix issues in setup_classid_environment() Yafang Shao
2023-09-22 11:28   ` [RFC PATCH bpf-next 6/8] selftests/bpf: Add parallel support for classid Yafang Shao
2023-09-22 11:28   ` [RFC PATCH bpf-next 7/8] selftests/bpf: Add new cgroup helper get_classid_cgroup_id() Yafang Shao
2023-09-22 11:28   ` [RFC PATCH bpf-next 8/8] selftests/bpf: Add selftests for cgroup controller Yafang Shao
2023-09-22 16:52   ` [RFC PATCH bpf-next 0/8] bpf, cgroup: Add bpf support " Tejun Heo
     [not found]     ` <ZQ3GQmYrYyKAg2uK-NiLfg/pYEd1N0TnZuCh8vA@public.gmane.org>
2023-09-24  6:32       ` Yafang Shao
     [not found]         ` <CALOAHbA9-BT1daw-KXHtsrN=uRQyt-p6LU=BEpvF2Yk42A_Vxw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2023-09-25 18:43           ` Tejun Heo
2023-09-25 18:43             ` Tejun Heo
     [not found]             ` <ZRHU6MfwqRxjBFUH-NiLfg/pYEd1N0TnZuCh8vA@public.gmane.org>
2023-09-26  3:01               ` Yafang Shao
2023-09-26  3:01                 ` Yafang Shao
2023-09-26 18:25                 ` Tejun Heo
2023-09-27  2:27                   ` Yafang Shao
2023-09-25 18:22   ` Kui-Feng Lee
2023-09-25 18:22     ` Kui-Feng Lee
     [not found]     ` <9e83bda8-ea1b-75b9-c55f-61cf11b4cd83-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
2023-09-26  3:08       ` Yafang Shao
2023-09-26  3:08         ` Yafang Shao

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox