cgroups.vger.kernel.org archive mirror
* [RFC PATCH bpf-next 0/3] bpf: Expand bpf_cgrp_storage to support cgroup1 non-attach case
@ 2023-12-05 14:37 Yafang Shao
  2023-12-05 14:37 ` [RFC PATCH bpf-next 1/3] bpf: Enable bpf_cgrp_storage for " Yafang Shao
                   ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: Yafang Shao @ 2023-12-05 14:37 UTC (permalink / raw)
  To: ast, daniel, john.fastabend, andrii, martin.lau, song,
	yonghong.song, kpsingh, sdf, haoluo, jolsa, tj
  Cc: bpf, cgroups, Yafang Shao

In the current cgroup1 environment, associating a cgroup with the
applications running in it from a BPF program requires storing a mapping of
cgroup_id to application, either in a hash map or in userspace. By enabling
bpf_cgrp_storage for cgroup1, application-specific information can instead
be stored conveniently in cgroup-local storage and used directly within BPF
programs. Moreover, enabling this feature for cgroup1 requires only minor
modifications for the non-attach case, streamlining the process.

Enabling this functionality for the cgroup1 attach case, however, presents
challenges, so for now it is enabled solely for the non-attach case.
Attempting to attach to a cgroup1 fd simply fails with -EBADF.

Yafang Shao (3):
  bpf: Enable bpf_cgrp_storage for cgroup1 non-attach case
  selftests/bpf: Add a new cgroup helper open_classid()
  selftests/bpf: Add selftests for cgroup1 local storage

 kernel/bpf/bpf_cgrp_storage.c                      |  6 +-
 tools/testing/selftests/bpf/cgroup_helpers.c       | 16 ++++
 tools/testing/selftests/bpf/cgroup_helpers.h       |  1 +
 .../selftests/bpf/prog_tests/cgrp_local_storage.c  | 92 +++++++++++++++++++++-
 .../selftests/bpf/progs/cgrp_ls_recursion.c        | 84 ++++++++++++++++----
 .../selftests/bpf/progs/cgrp_ls_sleepable.c        | 67 ++++++++++++++--
 tools/testing/selftests/bpf/progs/cgrp_ls_tp_btf.c | 82 +++++++++++++------
 7 files changed, 298 insertions(+), 50 deletions(-)

-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [RFC PATCH bpf-next 1/3] bpf: Enable bpf_cgrp_storage for cgroup1 non-attach case
  2023-12-05 14:37 [RFC PATCH bpf-next 0/3] bpf: Expand bpf_cgrp_storage to support cgroup1 non-attach case Yafang Shao
@ 2023-12-05 14:37 ` Yafang Shao
  2023-12-06  4:17   ` Yonghong Song
  2023-12-05 14:37 ` [RFC PATCH bpf-next 2/3] selftests/bpf: Add a new cgroup helper open_classid() Yafang Shao
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 11+ messages in thread
From: Yafang Shao @ 2023-12-05 14:37 UTC (permalink / raw)
  To: ast, daniel, john.fastabend, andrii, martin.lau, song,
	yonghong.song, kpsingh, sdf, haoluo, jolsa, tj
  Cc: bpf, cgroups, Yafang Shao

In the current cgroup1 environment, associating a cgroup with the
applications running in it from a BPF program requires storing a mapping of
cgroup_id to application, either in a hash map or in userspace. By enabling
bpf_cgrp_storage for cgroup1, application-specific information can instead
be stored conveniently in cgroup-local storage and used directly within BPF
programs. Moreover, enabling this feature for cgroup1 requires only minor
modifications for the non-attach case, streamlining the process.

Enabling this functionality for the cgroup1 attach case, however, presents
challenges, so for now it is enabled solely for the non-attach case.
Attempting to attach to a cgroup1 fd simply fails with -EBADF.

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
 kernel/bpf/bpf_cgrp_storage.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/bpf/bpf_cgrp_storage.c b/kernel/bpf/bpf_cgrp_storage.c
index d44fe8dd9732..28efd0a3f220 100644
--- a/kernel/bpf/bpf_cgrp_storage.c
+++ b/kernel/bpf/bpf_cgrp_storage.c
@@ -82,7 +82,7 @@ static void *bpf_cgrp_storage_lookup_elem(struct bpf_map *map, void *key)
 	int fd;
 
 	fd = *(int *)key;
-	cgroup = cgroup_get_from_fd(fd);
+	cgroup = cgroup_v1v2_get_from_fd(fd);
 	if (IS_ERR(cgroup))
 		return ERR_CAST(cgroup);
 
@@ -101,7 +101,7 @@ static long bpf_cgrp_storage_update_elem(struct bpf_map *map, void *key,
 	int fd;
 
 	fd = *(int *)key;
-	cgroup = cgroup_get_from_fd(fd);
+	cgroup = cgroup_v1v2_get_from_fd(fd);
 	if (IS_ERR(cgroup))
 		return PTR_ERR(cgroup);
 
@@ -131,7 +131,7 @@ static long bpf_cgrp_storage_delete_elem(struct bpf_map *map, void *key)
 	int err, fd;
 
 	fd = *(int *)key;
-	cgroup = cgroup_get_from_fd(fd);
+	cgroup = cgroup_v1v2_get_from_fd(fd);
 	if (IS_ERR(cgroup))
 		return PTR_ERR(cgroup);
 
-- 
2.30.1 (Apple Git-130)



* [RFC PATCH bpf-next 2/3] selftests/bpf: Add a new cgroup helper open_classid()
  2023-12-05 14:37 [RFC PATCH bpf-next 0/3] bpf: Expand bpf_cgrp_storage to support cgroup1 non-attach case Yafang Shao
  2023-12-05 14:37 ` [RFC PATCH bpf-next 1/3] bpf: Enable bpf_cgrp_storage for " Yafang Shao
@ 2023-12-05 14:37 ` Yafang Shao
  2023-12-06  4:18   ` Yonghong Song
  2023-12-05 14:37 ` [RFC PATCH bpf-next 3/3] selftests/bpf: Add selftests for cgroup1 local storage Yafang Shao
  2023-12-05 17:15 ` [RFC PATCH bpf-next 0/3] bpf: Expand bpf_cgrp_storage to support cgroup1 non-attach case Tejun Heo
  3 siblings, 1 reply; 11+ messages in thread
From: Yafang Shao @ 2023-12-05 14:37 UTC (permalink / raw)
  To: ast, daniel, john.fastabend, andrii, martin.lau, song,
	yonghong.song, kpsingh, sdf, haoluo, jolsa, tj
  Cc: bpf, cgroups, Yafang Shao

This new helper allows us to obtain a file descriptor for the net_cls
cgroup, which will be used in the subsequent patch.

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
 tools/testing/selftests/bpf/cgroup_helpers.c | 16 ++++++++++++++++
 tools/testing/selftests/bpf/cgroup_helpers.h |  1 +
 2 files changed, 17 insertions(+)

diff --git a/tools/testing/selftests/bpf/cgroup_helpers.c b/tools/testing/selftests/bpf/cgroup_helpers.c
index 5aa133bf3688..19be9c63d5e8 100644
--- a/tools/testing/selftests/bpf/cgroup_helpers.c
+++ b/tools/testing/selftests/bpf/cgroup_helpers.c
@@ -689,3 +689,19 @@ int get_cgroup1_hierarchy_id(const char *subsys_name)
 	fclose(file);
 	return found ? id : -1;
 }
+
+/**
+ * open_classid() - Open a cgroupv1 net_cls classid
+ *
+ * This function expects the cgroup working directory to already exist;
+ * it is only opened here, not created.
+ *
+ * On success, it returns the file descriptor. On failure it returns -1.
+ */
+int open_classid(void)
+{
+	char cgroup_workdir[PATH_MAX + 1];
+
+	format_classid_path(cgroup_workdir);
+	return open(cgroup_workdir, O_RDONLY);
+}
diff --git a/tools/testing/selftests/bpf/cgroup_helpers.h b/tools/testing/selftests/bpf/cgroup_helpers.h
index ee053641c026..502845160d88 100644
--- a/tools/testing/selftests/bpf/cgroup_helpers.h
+++ b/tools/testing/selftests/bpf/cgroup_helpers.h
@@ -33,6 +33,7 @@ void cleanup_cgroup_environment(void);
 int set_classid(void);
 int join_classid(void);
 unsigned long long get_classid_cgroup_id(void);
+int open_classid(void);
 
 int setup_classid_environment(void);
 void cleanup_classid_environment(void);
-- 
2.30.1 (Apple Git-130)



* [RFC PATCH bpf-next 3/3] selftests/bpf: Add selftests for cgroup1 local storage
  2023-12-05 14:37 [RFC PATCH bpf-next 0/3] bpf: Expand bpf_cgrp_storage to support cgroup1 non-attach case Yafang Shao
  2023-12-05 14:37 ` [RFC PATCH bpf-next 1/3] bpf: Enable bpf_cgrp_storage for " Yafang Shao
  2023-12-05 14:37 ` [RFC PATCH bpf-next 2/3] selftests/bpf: Add a new cgroup helper open_classid() Yafang Shao
@ 2023-12-05 14:37 ` Yafang Shao
  2023-12-06  4:24   ` Yonghong Song
  2023-12-05 17:15 ` [RFC PATCH bpf-next 0/3] bpf: Expand bpf_cgrp_storage to support cgroup1 non-attach case Tejun Heo
  3 siblings, 1 reply; 11+ messages in thread
From: Yafang Shao @ 2023-12-05 14:37 UTC (permalink / raw)
  To: ast, daniel, john.fastabend, andrii, martin.lau, song,
	yonghong.song, kpsingh, sdf, haoluo, jolsa, tj
  Cc: bpf, cgroups, Yafang Shao

Expand the test coverage from cgroup2 to include cgroup1. The results are
as follows:

Already existing test cases for cgroup2:
  #48/1    cgrp_local_storage/tp_btf:OK
  #48/2    cgrp_local_storage/attach_cgroup:OK
  #48/3    cgrp_local_storage/recursion:OK
  #48/4    cgrp_local_storage/negative:OK
  #48/5    cgrp_local_storage/cgroup_iter_sleepable:OK
  #48/6    cgrp_local_storage/yes_rcu_lock:OK
  #48/7    cgrp_local_storage/no_rcu_lock:OK

Expanded test cases for cgroup1:
  #48/8    cgrp_local_storage/cgrp1_tp_btf:OK
  #48/9    cgrp_local_storage/cgrp1_recursion:OK
  #48/10   cgrp_local_storage/cgrp1_negative:OK
  #48/11   cgrp_local_storage/cgrp1_iter_sleepable:OK
  #48/12   cgrp_local_storage/cgrp1_yes_rcu_lock:OK
  #48/13   cgrp_local_storage/cgrp1_no_rcu_lock:OK

Summary:
  #48      cgrp_local_storage:OK
  Summary: 1/13 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
 .../bpf/prog_tests/cgrp_local_storage.c       | 92 ++++++++++++++++++-
 .../selftests/bpf/progs/cgrp_ls_recursion.c   | 84 +++++++++++++----
 .../selftests/bpf/progs/cgrp_ls_sleepable.c   | 67 ++++++++++++--
 .../selftests/bpf/progs/cgrp_ls_tp_btf.c      | 82 ++++++++++++-----
 4 files changed, 278 insertions(+), 47 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/cgrp_local_storage.c b/tools/testing/selftests/bpf/prog_tests/cgrp_local_storage.c
index 63e776f4176e..9524cde4cbf6 100644
--- a/tools/testing/selftests/bpf/prog_tests/cgrp_local_storage.c
+++ b/tools/testing/selftests/bpf/prog_tests/cgrp_local_storage.c
@@ -19,6 +19,9 @@ struct socket_cookie {
 	__u64 cookie_value;
 };
 
+bool is_cgroup1;
+int target_hid;
+
 static void test_tp_btf(int cgroup_fd)
 {
 	struct cgrp_ls_tp_btf *skel;
@@ -29,6 +32,9 @@ static void test_tp_btf(int cgroup_fd)
 	if (!ASSERT_OK_PTR(skel, "skel_open_and_load"))
 		return;
 
+	skel->bss->is_cgroup1 = is_cgroup1;
+	skel->bss->target_hid = target_hid;
+
 	/* populate a value in map_b */
 	err = bpf_map_update_elem(bpf_map__fd(skel->maps.map_b), &cgroup_fd, &val1, BPF_ANY);
 	if (!ASSERT_OK(err, "map_update_elem"))
@@ -130,6 +136,9 @@ static void test_recursion(int cgroup_fd)
 	if (!ASSERT_OK_PTR(skel, "skel_open_and_load"))
 		return;
 
+	skel->bss->target_hid = target_hid;
+	skel->bss->is_cgroup1 = is_cgroup1;
+
 	err = cgrp_ls_recursion__attach(skel);
 	if (!ASSERT_OK(err, "skel_attach"))
 		goto out;
@@ -165,6 +174,9 @@ static void test_cgroup_iter_sleepable(int cgroup_fd, __u64 cgroup_id)
 	if (!ASSERT_OK_PTR(skel, "skel_open"))
 		return;
 
+	skel->bss->target_hid = target_hid;
+	skel->bss->is_cgroup1 = is_cgroup1;
+
 	bpf_program__set_autoload(skel->progs.cgroup_iter, true);
 	err = cgrp_ls_sleepable__load(skel);
 	if (!ASSERT_OK(err, "skel_load"))
@@ -202,6 +214,8 @@ static void test_yes_rcu_lock(__u64 cgroup_id)
 	if (!ASSERT_OK_PTR(skel, "skel_open"))
 		return;
 
+	skel->bss->target_hid = target_hid;
+	skel->bss->is_cgroup1 = is_cgroup1;
 	skel->bss->target_pid = syscall(SYS_gettid);
 
 	bpf_program__set_autoload(skel->progs.yes_rcu_lock, true);
@@ -229,6 +243,9 @@ static void test_no_rcu_lock(void)
 	if (!ASSERT_OK_PTR(skel, "skel_open"))
 		return;
 
+	skel->bss->target_hid = target_hid;
+	skel->bss->is_cgroup1 = is_cgroup1;
+
 	bpf_program__set_autoload(skel->progs.no_rcu_lock, true);
 	err = cgrp_ls_sleepable__load(skel);
 	ASSERT_ERR(err, "skel_load");
@@ -236,7 +253,26 @@ static void test_no_rcu_lock(void)
 	cgrp_ls_sleepable__destroy(skel);
 }
 
-void test_cgrp_local_storage(void)
+static void test_cgrp1_no_rcu_lock(void)
+{
+	struct cgrp_ls_sleepable *skel;
+	int err;
+
+	skel = cgrp_ls_sleepable__open();
+	if (!ASSERT_OK_PTR(skel, "skel_open"))
+		return;
+
+	skel->bss->target_hid = target_hid;
+	skel->bss->is_cgroup1 = is_cgroup1;
+
+	bpf_program__set_autoload(skel->progs.cgrp1_no_rcu_lock, true);
+	err = cgrp_ls_sleepable__load(skel);
+	ASSERT_OK(err, "skel_load");
+
+	cgrp_ls_sleepable__destroy(skel);
+}
+
+void cgrp2_local_storage(void)
 {
 	__u64 cgroup_id;
 	int cgroup_fd;
@@ -245,6 +281,8 @@ void test_cgrp_local_storage(void)
 	if (!ASSERT_GE(cgroup_fd, 0, "join_cgroup /cgrp_local_storage"))
 		return;
 
+	is_cgroup1 = 0;
+	target_hid = -1;
 	cgroup_id = get_cgroup_id("/cgrp_local_storage");
 	if (test__start_subtest("tp_btf"))
 		test_tp_btf(cgroup_fd);
@@ -263,3 +301,55 @@ void test_cgrp_local_storage(void)
 
 	close(cgroup_fd);
 }
+
+void cgrp1_local_storage(void)
+{
+	int cgrp1_fd, cgrp1_hid, cgrp1_id, err;
+
+	/* Setup cgroup1 hierarchy */
+	err = setup_classid_environment();
+	if (!ASSERT_OK(err, "setup_classid_environment"))
+		return;
+
+	err = join_classid();
+	if (!ASSERT_OK(err, "join_cgroup1"))
+		goto cleanup;
+
+	cgrp1_fd = open_classid();
+	if (!ASSERT_GE(cgrp1_fd, 0, "cgroup1 fd"))
+		goto cleanup;
+
+	cgrp1_id = get_classid_cgroup_id();
+	if (!ASSERT_GE(cgrp1_id, 0, "cgroup1 id"))
+		goto close_fd;
+
+	cgrp1_hid = get_cgroup1_hierarchy_id("net_cls");
+	if (!ASSERT_GE(cgrp1_hid, 0, "cgroup1 hid"))
+		goto close_fd;
+	target_hid = cgrp1_hid;
+	is_cgroup1 = 1;
+
+	if (test__start_subtest("cgrp1_tp_btf"))
+		test_tp_btf(cgrp1_fd);
+	if (test__start_subtest("cgrp1_recursion"))
+		test_recursion(cgrp1_fd);
+	if (test__start_subtest("cgrp1_negative"))
+		test_negative();
+	if (test__start_subtest("cgrp1_iter_sleepable"))
+		test_cgroup_iter_sleepable(cgrp1_fd, cgrp1_id);
+	if (test__start_subtest("cgrp1_yes_rcu_lock"))
+		test_yes_rcu_lock(cgrp1_id);
+	if (test__start_subtest("cgrp1_no_rcu_lock"))
+		test_cgrp1_no_rcu_lock();
+
+close_fd:
+	close(cgrp1_fd);
+cleanup:
+	cleanup_classid_environment();
+}
+
+void test_cgrp_local_storage(void)
+{
+	cgrp2_local_storage();
+	cgrp1_local_storage();
+}
diff --git a/tools/testing/selftests/bpf/progs/cgrp_ls_recursion.c b/tools/testing/selftests/bpf/progs/cgrp_ls_recursion.c
index a043d8fefdac..610c2427fd93 100644
--- a/tools/testing/selftests/bpf/progs/cgrp_ls_recursion.c
+++ b/tools/testing/selftests/bpf/progs/cgrp_ls_recursion.c
@@ -21,50 +21,100 @@ struct {
 	__type(value, long);
 } map_b SEC(".maps");
 
+int target_hid = 0;
+bool is_cgroup1 = 0;
+
+struct cgroup *bpf_task_get_cgroup1(struct task_struct *task, int hierarchy_id) __ksym;
+void bpf_cgroup_release(struct cgroup *cgrp) __ksym;
+
+static void __on_lookup(struct cgroup *cgrp)
+{
+	bpf_cgrp_storage_delete(&map_a, cgrp);
+	bpf_cgrp_storage_delete(&map_b, cgrp);
+}
+
 SEC("fentry/bpf_local_storage_lookup")
 int BPF_PROG(on_lookup)
 {
 	struct task_struct *task = bpf_get_current_task_btf();
+	struct cgroup *cgrp;
+
+	if (is_cgroup1) {
+		cgrp = bpf_task_get_cgroup1(task, target_hid);
+		if (!cgrp)
+			return 0;
 
-	bpf_cgrp_storage_delete(&map_a, task->cgroups->dfl_cgrp);
-	bpf_cgrp_storage_delete(&map_b, task->cgroups->dfl_cgrp);
+		__on_lookup(cgrp);
+		bpf_cgroup_release(cgrp);
+		return 0;
+	}
+
+	__on_lookup(task->cgroups->dfl_cgrp);
 	return 0;
 }
 
-SEC("fentry/bpf_local_storage_update")
-int BPF_PROG(on_update)
+static void __on_update(struct cgroup *cgrp)
 {
-	struct task_struct *task = bpf_get_current_task_btf();
 	long *ptr;
 
-	ptr = bpf_cgrp_storage_get(&map_a, task->cgroups->dfl_cgrp, 0,
-				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	ptr = bpf_cgrp_storage_get(&map_a, cgrp, 0, BPF_LOCAL_STORAGE_GET_F_CREATE);
 	if (ptr)
 		*ptr += 1;
 
-	ptr = bpf_cgrp_storage_get(&map_b, task->cgroups->dfl_cgrp, 0,
-				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	ptr = bpf_cgrp_storage_get(&map_b, cgrp, 0, BPF_LOCAL_STORAGE_GET_F_CREATE);
 	if (ptr)
 		*ptr += 1;
+}
 
+SEC("fentry/bpf_local_storage_update")
+int BPF_PROG(on_update)
+{
+	struct task_struct *task = bpf_get_current_task_btf();
+	struct cgroup *cgrp;
+
+	if (is_cgroup1) {
+		cgrp = bpf_task_get_cgroup1(task, target_hid);
+		if (!cgrp)
+			return 0;
+
+		__on_update(cgrp);
+		bpf_cgroup_release(cgrp);
+		return 0;
+	}
+
+	__on_update(task->cgroups->dfl_cgrp);
 	return 0;
 }
 
-SEC("tp_btf/sys_enter")
-int BPF_PROG(on_enter, struct pt_regs *regs, long id)
+static void __on_enter(struct pt_regs *regs, long id, struct cgroup *cgrp)
 {
-	struct task_struct *task;
 	long *ptr;
 
-	task = bpf_get_current_task_btf();
-	ptr = bpf_cgrp_storage_get(&map_a, task->cgroups->dfl_cgrp, 0,
-				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	ptr = bpf_cgrp_storage_get(&map_a, cgrp, 0, BPF_LOCAL_STORAGE_GET_F_CREATE);
 	if (ptr)
 		*ptr = 200;
 
-	ptr = bpf_cgrp_storage_get(&map_b, task->cgroups->dfl_cgrp, 0,
-				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	ptr = bpf_cgrp_storage_get(&map_b, cgrp, 0, BPF_LOCAL_STORAGE_GET_F_CREATE);
 	if (ptr)
 		*ptr = 100;
+}
+
+SEC("tp_btf/sys_enter")
+int BPF_PROG(on_enter, struct pt_regs *regs, long id)
+{
+	struct task_struct *task = bpf_get_current_task_btf();
+	struct cgroup *cgrp;
+
+	if (is_cgroup1) {
+		cgrp = bpf_task_get_cgroup1(task, target_hid);
+		if (!cgrp)
+			return 0;
+
+		__on_enter(regs, id, cgrp);
+		bpf_cgroup_release(cgrp);
+		return 0;
+	}
+
+	__on_enter(regs, id, task->cgroups->dfl_cgrp);
 	return 0;
 }
diff --git a/tools/testing/selftests/bpf/progs/cgrp_ls_sleepable.c b/tools/testing/selftests/bpf/progs/cgrp_ls_sleepable.c
index 4c7844e1dbfa..985ff419249c 100644
--- a/tools/testing/selftests/bpf/progs/cgrp_ls_sleepable.c
+++ b/tools/testing/selftests/bpf/progs/cgrp_ls_sleepable.c
@@ -17,7 +17,11 @@ struct {
 
 __u32 target_pid;
 __u64 cgroup_id;
+int target_hid;
+bool is_cgroup1;
 
+struct cgroup *bpf_task_get_cgroup1(struct task_struct *task, int hierarchy_id) __ksym;
+void bpf_cgroup_release(struct cgroup *cgrp) __ksym;
 void bpf_rcu_read_lock(void) __ksym;
 void bpf_rcu_read_unlock(void) __ksym;
 
@@ -37,23 +41,56 @@ int cgroup_iter(struct bpf_iter__cgroup *ctx)
 	return 0;
 }
 
+static void __no_rcu_lock(struct cgroup *cgrp)
+{
+	long *ptr;
+
+	/* Note that trace rcu is held in sleepable prog, so we can use
+	 * bpf_cgrp_storage_get() in sleepable prog.
+	 */
+	ptr = bpf_cgrp_storage_get(&map_a, cgrp, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	if (ptr)
+		cgroup_id = cgrp->kn->id;
+}
+
 SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
-int no_rcu_lock(void *ctx)
+int cgrp1_no_rcu_lock(void *ctx)
 {
 	struct task_struct *task;
 	struct cgroup *cgrp;
-	long *ptr;
+
+	if (!is_cgroup1)
+		return 0;
+
+	task = bpf_get_current_task_btf();
+	if (task->pid != target_pid)
+		return 0;
+
+	/* bpf_task_get_cgroup1 can work in sleepable prog */
+	cgrp = bpf_task_get_cgroup1(task, target_hid);
+	if (!cgrp)
+		return 0;
+
+	__no_rcu_lock(cgrp);
+	bpf_cgroup_release(cgrp);
+	return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
+int no_rcu_lock(void *ctx)
+{
+	struct task_struct *task;
+
+	if (is_cgroup1)
+		return 0;
 
 	task = bpf_get_current_task_btf();
 	if (task->pid != target_pid)
 		return 0;
 
 	/* task->cgroups is untrusted in sleepable prog outside of RCU CS */
-	cgrp = task->cgroups->dfl_cgrp;
-	ptr = bpf_cgrp_storage_get(&map_a, cgrp, 0,
-				   BPF_LOCAL_STORAGE_GET_F_CREATE);
-	if (ptr)
-		cgroup_id = cgrp->kn->id;
+	__no_rcu_lock(task->cgroups->dfl_cgrp);
 	return 0;
 }
 
@@ -68,6 +105,22 @@ int yes_rcu_lock(void *ctx)
 	if (task->pid != target_pid)
 		return 0;
 
+	if (is_cgroup1) {
+		bpf_rcu_read_lock();
+		cgrp = bpf_task_get_cgroup1(task, target_hid);
+		if (!cgrp) {
+			bpf_rcu_read_unlock();
+			return 0;
+		}
+
+		ptr = bpf_cgrp_storage_get(&map_a, cgrp, 0, BPF_LOCAL_STORAGE_GET_F_CREATE);
+		if (ptr)
+			cgroup_id = cgrp->kn->id;
+		bpf_cgroup_release(cgrp);
+		bpf_rcu_read_unlock();
+		return 0;
+	}
+
 	bpf_rcu_read_lock();
 	cgrp = task->cgroups->dfl_cgrp;
 	/* cgrp is trusted under RCU CS */
diff --git a/tools/testing/selftests/bpf/progs/cgrp_ls_tp_btf.c b/tools/testing/selftests/bpf/progs/cgrp_ls_tp_btf.c
index 9ebb8e2fe541..1c348f000f38 100644
--- a/tools/testing/selftests/bpf/progs/cgrp_ls_tp_btf.c
+++ b/tools/testing/selftests/bpf/progs/cgrp_ls_tp_btf.c
@@ -27,62 +27,100 @@ pid_t target_pid = 0;
 int mismatch_cnt = 0;
 int enter_cnt = 0;
 int exit_cnt = 0;
+int target_hid = 0;
+bool is_cgroup1 = 0;
 
-SEC("tp_btf/sys_enter")
-int BPF_PROG(on_enter, struct pt_regs *regs, long id)
+struct cgroup *bpf_task_get_cgroup1(struct task_struct *task, int hierarchy_id) __ksym;
+void bpf_cgroup_release(struct cgroup *cgrp) __ksym;
+
+static void __on_enter(struct pt_regs *regs, long id, struct cgroup *cgrp)
 {
-	struct task_struct *task;
 	long *ptr;
 	int err;
 
-	task = bpf_get_current_task_btf();
-	if (task->pid != target_pid)
-		return 0;
-
 	/* populate value 0 */
-	ptr = bpf_cgrp_storage_get(&map_a, task->cgroups->dfl_cgrp, 0,
+	ptr = bpf_cgrp_storage_get(&map_a, cgrp, 0,
 				   BPF_LOCAL_STORAGE_GET_F_CREATE);
 	if (!ptr)
-		return 0;
+		return;
 
 	/* delete value 0 */
-	err = bpf_cgrp_storage_delete(&map_a, task->cgroups->dfl_cgrp);
+	err = bpf_cgrp_storage_delete(&map_a, cgrp);
 	if (err)
-		return 0;
+		return;
 
 	/* value is not available */
-	ptr = bpf_cgrp_storage_get(&map_a, task->cgroups->dfl_cgrp, 0, 0);
+	ptr = bpf_cgrp_storage_get(&map_a, cgrp, 0, 0);
 	if (ptr)
-		return 0;
+		return;
 
 	/* re-populate the value */
-	ptr = bpf_cgrp_storage_get(&map_a, task->cgroups->dfl_cgrp, 0,
+	ptr = bpf_cgrp_storage_get(&map_a, cgrp, 0,
 				   BPF_LOCAL_STORAGE_GET_F_CREATE);
 	if (!ptr)
-		return 0;
+		return;
 	__sync_fetch_and_add(&enter_cnt, 1);
 	*ptr = MAGIC_VALUE + enter_cnt;
-
-	return 0;
 }
 
-SEC("tp_btf/sys_exit")
-int BPF_PROG(on_exit, struct pt_regs *regs, long id)
+SEC("tp_btf/sys_enter")
+int BPF_PROG(on_enter, struct pt_regs *regs, long id)
 {
 	struct task_struct *task;
-	long *ptr;
+	struct cgroup *cgrp;
 
 	task = bpf_get_current_task_btf();
 	if (task->pid != target_pid)
 		return 0;
 
-	ptr = bpf_cgrp_storage_get(&map_a, task->cgroups->dfl_cgrp, 0,
+	if (is_cgroup1) {
+		cgrp = bpf_task_get_cgroup1(task, target_hid);
+		if (!cgrp)
+			return 0;
+
+		__on_enter(regs, id, cgrp);
+		bpf_cgroup_release(cgrp);
+		return 0;
+	}
+
+	__on_enter(regs, id, task->cgroups->dfl_cgrp);
+	return 0;
+}
+
+static void __on_exit(struct pt_regs *regs, long id, struct cgroup *cgrp)
+{
+	long *ptr;
+
+	ptr = bpf_cgrp_storage_get(&map_a, cgrp, 0,
 				   BPF_LOCAL_STORAGE_GET_F_CREATE);
 	if (!ptr)
-		return 0;
+		return;
 
 	__sync_fetch_and_add(&exit_cnt, 1);
 	if (*ptr != MAGIC_VALUE + exit_cnt)
 		__sync_fetch_and_add(&mismatch_cnt, 1);
+}
+
+SEC("tp_btf/sys_exit")
+int BPF_PROG(on_exit, struct pt_regs *regs, long id)
+{
+	struct task_struct *task;
+	struct cgroup *cgrp;
+
+	task = bpf_get_current_task_btf();
+	if (task->pid != target_pid)
+		return 0;
+
+	if (is_cgroup1) {
+		cgrp = bpf_task_get_cgroup1(task, target_hid);
+		if (!cgrp)
+			return 0;
+
+		__on_exit(regs, id, cgrp);
+		bpf_cgroup_release(cgrp);
+		return 0;
+	}
+
+	__on_exit(regs, id, task->cgroups->dfl_cgrp);
 	return 0;
 }
-- 
2.30.1 (Apple Git-130)



* Re: [RFC PATCH bpf-next 0/3] bpf: Expand bpf_cgrp_storage to support cgroup1 non-attach case
  2023-12-05 14:37 [RFC PATCH bpf-next 0/3] bpf: Expand bpf_cgrp_storage to support cgroup1 non-attach case Yafang Shao
                   ` (2 preceding siblings ...)
  2023-12-05 14:37 ` [RFC PATCH bpf-next 3/3] selftests/bpf: Add selftests for cgroup1 local storage Yafang Shao
@ 2023-12-05 17:15 ` Tejun Heo
  2023-12-06  2:47   ` Alexei Starovoitov
  3 siblings, 1 reply; 11+ messages in thread
From: Tejun Heo @ 2023-12-05 17:15 UTC (permalink / raw)
  To: Yafang Shao
  Cc: ast, daniel, john.fastabend, andrii, martin.lau, song,
	yonghong.song, kpsingh, sdf, haoluo, jolsa, bpf, cgroups

On Tue, Dec 05, 2023 at 02:37:22PM +0000, Yafang Shao wrote:
> [...]
> 
> Yafang Shao (3):
>   bpf: Enable bpf_cgrp_storage for cgroup1 non-attach case
>   selftests/bpf: Add a new cgroup helper open_classid()
>   selftests/bpf: Add selftests for cgroup1 local storage

Acked-by: Tejun Heo <tj@kernel.org>

Thanks.

-- 
tejun


* Re: [RFC PATCH bpf-next 0/3] bpf: Expand bpf_cgrp_storage to support cgroup1 non-attach case
  2023-12-05 17:15 ` [RFC PATCH bpf-next 0/3] bpf: Expand bpf_cgrp_storage to support cgroup1 non-attach case Tejun Heo
@ 2023-12-06  2:47   ` Alexei Starovoitov
  2023-12-06  3:01     ` Yafang Shao
  0 siblings, 1 reply; 11+ messages in thread
From: Alexei Starovoitov @ 2023-12-06  2:47 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Yafang Shao, Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, bpf,
	open list:CONTROL GROUP (CGROUP)

On Tue, Dec 5, 2023 at 9:15 AM Tejun Heo <tj@kernel.org> wrote:
>
> On Tue, Dec 05, 2023 at 02:37:22PM +0000, Yafang Shao wrote:
> > [...]
> >
> > Yafang Shao (3):
> >   bpf: Enable bpf_cgrp_storage for cgroup1 non-attach case
> >   selftests/bpf: Add a new cgroup helper open_classid()
> >   selftests/bpf: Add selftests for cgroup1 local storage
>
> Acked-by: Tejun Heo <tj@kernel.org>


Yafang,
please resubmit without RFC tag, so it can get tested by BPF CI.


* Re: [RFC PATCH bpf-next 0/3] bpf: Expand bpf_cgrp_storage to support cgroup1 non-attach case
  2023-12-06  2:47   ` Alexei Starovoitov
@ 2023-12-06  3:01     ` Yafang Shao
  0 siblings, 0 replies; 11+ messages in thread
From: Yafang Shao @ 2023-12-06  3:01 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Tejun Heo, Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, bpf,
	open list:CONTROL GROUP (CGROUP)

On Wed, Dec 6, 2023 at 10:47 AM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Tue, Dec 5, 2023 at 9:15 AM Tejun Heo <tj@kernel.org> wrote:
> >
> > On Tue, Dec 05, 2023 at 02:37:22PM +0000, Yafang Shao wrote:
> > > [...]
> > >
> > > Yafang Shao (3):
> > >   bpf: Enable bpf_cgrp_storage for cgroup1 non-attach case
> > >   selftests/bpf: Add a new cgroup helper open_classid()
> > >   selftests/bpf: Add selftests for cgroup1 local storage
> >
> > Acked-by: Tejun Heo <tj@kernel.org>
>
>
> Yafang,
> please resubmit without RFC tag, so it can get tested by BPF CI.

will do it.

-- 
Regards
Yafang


* Re: [RFC PATCH bpf-next 1/3] bpf: Enable bpf_cgrp_storage for cgroup1 non-attach case
  2023-12-05 14:37 ` [RFC PATCH bpf-next 1/3] bpf: Enable bpf_cgrp_storage for " Yafang Shao
@ 2023-12-06  4:17   ` Yonghong Song
  0 siblings, 0 replies; 11+ messages in thread
From: Yonghong Song @ 2023-12-06  4:17 UTC (permalink / raw)
  To: Yafang Shao, ast, daniel, john.fastabend, andrii, martin.lau,
	song, kpsingh, sdf, haoluo, jolsa, tj
  Cc: bpf, cgroups


On 12/5/23 9:37 AM, Yafang Shao wrote:
> [...]
>
> Signed-off-by: Yafang Shao <laoar.shao@gmail.com>

Acked-by: Yonghong Song <yonghong.song@linux.dev>



* Re: [RFC PATCH bpf-next 2/3] selftests/bpf: Add a new cgroup helper open_classid()
  2023-12-05 14:37 ` [RFC PATCH bpf-next 2/3] selftests/bpf: Add a new cgroup helper open_classid() Yafang Shao
@ 2023-12-06  4:18   ` Yonghong Song
  0 siblings, 0 replies; 11+ messages in thread
From: Yonghong Song @ 2023-12-06  4:18 UTC (permalink / raw)
  To: Yafang Shao, ast, daniel, john.fastabend, andrii, martin.lau,
	song, kpsingh, sdf, haoluo, jolsa, tj
  Cc: bpf, cgroups


On 12/5/23 9:37 AM, Yafang Shao wrote:
> This new helper allows us to obtain the fd of a net_cls cgroup, which will
> be utilized in the subsequent patch.
>
> Signed-off-by: Yafang Shao <laoar.shao@gmail.com>

Acked-by: Yonghong Song <yonghong.song@linux.dev>



* Re: [RFC PATCH bpf-next 3/3] selftests/bpf: Add selftests for cgroup1 local storage
  2023-12-05 14:37 ` [RFC PATCH bpf-next 3/3] selftests/bpf: Add selftests for cgroup1 local storage Yafang Shao
@ 2023-12-06  4:24   ` Yonghong Song
  2023-12-06  9:00     ` Yafang Shao
  0 siblings, 1 reply; 11+ messages in thread
From: Yonghong Song @ 2023-12-06  4:24 UTC (permalink / raw)
  To: Yafang Shao, ast, daniel, john.fastabend, andrii, martin.lau,
	song, kpsingh, sdf, haoluo, jolsa, tj
  Cc: bpf, cgroups


On 12/5/23 9:37 AM, Yafang Shao wrote:
> Expand the test coverage from cgroup2 to include cgroup1. The results
> are as follows:
>
> Already existing test cases for cgroup2:
>    #48/1    cgrp_local_storage/tp_btf:OK
>    #48/2    cgrp_local_storage/attach_cgroup:OK
>    #48/3    cgrp_local_storage/recursion:OK
>    #48/4    cgrp_local_storage/negative:OK
>    #48/5    cgrp_local_storage/cgroup_iter_sleepable:OK
>    #48/6    cgrp_local_storage/yes_rcu_lock:OK
>    #48/7    cgrp_local_storage/no_rcu_lock:OK
>
> Expanded test cases for cgroup1:
>    #48/8    cgrp_local_storage/cgrp1_tp_btf:OK
>    #48/9    cgrp_local_storage/cgrp1_recursion:OK
>    #48/10   cgrp_local_storage/cgrp1_negative:OK
>    #48/11   cgrp_local_storage/cgrp1_iter_sleepable:OK
>    #48/12   cgrp_local_storage/cgrp1_yes_rcu_lock:OK
>    #48/13   cgrp_local_storage/cgrp1_no_rcu_lock:OK
>
> Summary:
>    #48      cgrp_local_storage:OK
>    Summary: 1/13 PASSED, 0 SKIPPED, 0 FAILED
>
> Signed-off-by: Yafang Shao <laoar.shao@gmail.com>

LGTM with a few nits below.

Acked-by: Yonghong Song <yonghong.song@linux.dev>

> ---
>   .../bpf/prog_tests/cgrp_local_storage.c       | 92 ++++++++++++++++++-
>   .../selftests/bpf/progs/cgrp_ls_recursion.c   | 84 +++++++++++++----
>   .../selftests/bpf/progs/cgrp_ls_sleepable.c   | 67 ++++++++++++--
>   .../selftests/bpf/progs/cgrp_ls_tp_btf.c      | 82 ++++++++++++-----
>   4 files changed, 278 insertions(+), 47 deletions(-)
>
> diff --git a/tools/testing/selftests/bpf/prog_tests/cgrp_local_storage.c b/tools/testing/selftests/bpf/prog_tests/cgrp_local_storage.c
> index 63e776f4176e..9524cde4cbf6 100644
> --- a/tools/testing/selftests/bpf/prog_tests/cgrp_local_storage.c
> +++ b/tools/testing/selftests/bpf/prog_tests/cgrp_local_storage.c
> @@ -19,6 +19,9 @@ struct socket_cookie {
>   	__u64 cookie_value;
>   };
>   
> +bool is_cgroup1;
> +int target_hid;
> +
>   static void test_tp_btf(int cgroup_fd)
>   {
>   	struct cgrp_ls_tp_btf *skel;
> @@ -29,6 +32,9 @@ static void test_tp_btf(int cgroup_fd)
>   	if (!ASSERT_OK_PTR(skel, "skel_open_and_load"))
>   		return;
>   
> +	skel->bss->is_cgroup1 = is_cgroup1;
> +	skel->bss->target_hid = target_hid;

Let's reverse the order as below to be consistent with other code patterns:
+	skel->bss->target_hid = target_hid;
+	skel->bss->is_cgroup1 = is_cgroup1;

> +
>   	/* populate a value in map_b */
>   	err = bpf_map_update_elem(bpf_map__fd(skel->maps.map_b), &cgroup_fd, &val1, BPF_ANY);
>   	if (!ASSERT_OK(err, "map_update_elem"))
> @@ -130,6 +136,9 @@ static void test_recursion(int cgroup_fd)
>   	if (!ASSERT_OK_PTR(skel, "skel_open_and_load"))
>   		return;
>   
> +	skel->bss->target_hid = target_hid;
> +	skel->bss->is_cgroup1 = is_cgroup1;
> +
>   	err = cgrp_ls_recursion__attach(skel);
>   	if (!ASSERT_OK(err, "skel_attach"))
>   		goto out;
> [...]
> diff --git a/tools/testing/selftests/bpf/progs/cgrp_ls_sleepable.c b/tools/testing/selftests/bpf/progs/cgrp_ls_sleepable.c
> index 4c7844e1dbfa..985ff419249c 100644
> --- a/tools/testing/selftests/bpf/progs/cgrp_ls_sleepable.c
> +++ b/tools/testing/selftests/bpf/progs/cgrp_ls_sleepable.c
> @@ -17,7 +17,11 @@ struct {
>   
>   __u32 target_pid;
>   __u64 cgroup_id;
> +int target_hid;
> +bool is_cgroup1;
>   
> +struct cgroup *bpf_task_get_cgroup1(struct task_struct *task, int hierarchy_id) __ksym;
> +void bpf_cgroup_release(struct cgroup *cgrp) __ksym;
>   void bpf_rcu_read_lock(void) __ksym;
>   void bpf_rcu_read_unlock(void) __ksym;
>   
> @@ -37,23 +41,56 @@ int cgroup_iter(struct bpf_iter__cgroup *ctx)
>   	return 0;
>   }
>   
> +static void __no_rcu_lock(struct cgroup *cgrp)
> +{
> +	long *ptr;
> +
> +	/* Note that trace rcu is held in sleepable prog, so we can use
> +	 * bpf_cgrp_storage_get() in sleepable prog.
> +	 */
> +	ptr = bpf_cgrp_storage_get(&map_a, cgrp, 0,
> +				   BPF_LOCAL_STORAGE_GET_F_CREATE);
> +	if (ptr)
> +		cgroup_id = cgrp->kn->id;
> +}
> +
>   SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
> -int no_rcu_lock(void *ctx)
> +int cgrp1_no_rcu_lock(void *ctx)
>   {
>   	struct task_struct *task;
>   	struct cgroup *cgrp;
> -	long *ptr;
> +
> +	if (!is_cgroup1)
> +		return 0;

Do we need this check? It looks like user space already controls whether
this program is loaded, depending on whether it is cgroup1.

> +
> +	task = bpf_get_current_task_btf();
> +	if (task->pid != target_pid)
> +		return 0;
> +
> +	/* bpf_task_get_cgroup1 can work in sleepable prog */
> +	cgrp = bpf_task_get_cgroup1(task, target_hid);
> +	if (!cgrp)
> +		return 0;
> +
> +	__no_rcu_lock(cgrp);
> +	bpf_cgroup_release(cgrp);
> +	return 0;
> +}
> +
> +SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
> +int no_rcu_lock(void *ctx)
> +{
> +	struct task_struct *task;
> +
> +	if (is_cgroup1)
> +		return 0;

Same here, check is not needed.

>   
>   	task = bpf_get_current_task_btf();
>   	if (task->pid != target_pid)
>   		return 0;
>   
>   	/* task->cgroups is untrusted in sleepable prog outside of RCU CS */
> -	cgrp = task->cgroups->dfl_cgrp;
> -	ptr = bpf_cgrp_storage_get(&map_a, cgrp, 0,
> -				   BPF_LOCAL_STORAGE_GET_F_CREATE);
> -	if (ptr)
> -		cgroup_id = cgrp->kn->id;
> +	__no_rcu_lock(task->cgroups->dfl_cgrp);
>   	return 0;
>   }
>   
> @@ -68,6 +105,22 @@ int yes_rcu_lock(void *ctx)
>   	if (task->pid != target_pid)
>   		return 0;
>   
> +	if (is_cgroup1) {
> +		bpf_rcu_read_lock();
> +		cgrp = bpf_task_get_cgroup1(task, target_hid);
> +		if (!cgrp) {
> +			bpf_rcu_read_unlock();
> +			return 0;
> +		}
> +
> +		ptr = bpf_cgrp_storage_get(&map_a, cgrp, 0, BPF_LOCAL_STORAGE_GET_F_CREATE);
> +		if (ptr)
> +			cgroup_id = cgrp->kn->id;
> +		bpf_cgroup_release(cgrp);
> +		bpf_rcu_read_unlock();
> +		return 0;
> +	}
> +
>   	bpf_rcu_read_lock();
>   	cgrp = task->cgroups->dfl_cgrp;
>   	/* cgrp is trusted under RCU CS */

[...]



* Re: [RFC PATCH bpf-next 3/3] selftests/bpf: Add selftests for cgroup1 local storage
  2023-12-06  4:24   ` Yonghong Song
@ 2023-12-06  9:00     ` Yafang Shao
  0 siblings, 0 replies; 11+ messages in thread
From: Yafang Shao @ 2023-12-06  9:00 UTC (permalink / raw)
  To: Yonghong Song
  Cc: ast, daniel, john.fastabend, andrii, martin.lau, song, kpsingh,
	sdf, haoluo, jolsa, tj, bpf, cgroups

On Wed, Dec 6, 2023 at 12:24 PM Yonghong Song <yonghong.song@linux.dev> wrote:
>
>
> On 12/5/23 9:37 AM, Yafang Shao wrote:
> > Expand the test coverage from cgroup2 to include cgroup1. The results
> > are as follows:
> >
> > Already existing test cases for cgroup2:
> >    #48/1    cgrp_local_storage/tp_btf:OK
> >    #48/2    cgrp_local_storage/attach_cgroup:OK
> >    #48/3    cgrp_local_storage/recursion:OK
> >    #48/4    cgrp_local_storage/negative:OK
> >    #48/5    cgrp_local_storage/cgroup_iter_sleepable:OK
> >    #48/6    cgrp_local_storage/yes_rcu_lock:OK
> >    #48/7    cgrp_local_storage/no_rcu_lock:OK
> >
> > Expanded test cases for cgroup1:
> >    #48/8    cgrp_local_storage/cgrp1_tp_btf:OK
> >    #48/9    cgrp_local_storage/cgrp1_recursion:OK
> >    #48/10   cgrp_local_storage/cgrp1_negative:OK
> >    #48/11   cgrp_local_storage/cgrp1_iter_sleepable:OK
> >    #48/12   cgrp_local_storage/cgrp1_yes_rcu_lock:OK
> >    #48/13   cgrp_local_storage/cgrp1_no_rcu_lock:OK
> >
> > Summary:
> >    #48      cgrp_local_storage:OK
> >    Summary: 1/13 PASSED, 0 SKIPPED, 0 FAILED
> >
> > Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
>
> LGTM with a few nits below.
>
> Acked-by: Yonghong Song <yonghong.song@linux.dev>
>
> > ---
> >   .../bpf/prog_tests/cgrp_local_storage.c       | 92 ++++++++++++++++++-
> >   .../selftests/bpf/progs/cgrp_ls_recursion.c   | 84 +++++++++++++----
> >   .../selftests/bpf/progs/cgrp_ls_sleepable.c   | 67 ++++++++++++--
> >   .../selftests/bpf/progs/cgrp_ls_tp_btf.c      | 82 ++++++++++++-----
> >   4 files changed, 278 insertions(+), 47 deletions(-)
> >
> > diff --git a/tools/testing/selftests/bpf/prog_tests/cgrp_local_storage.c b/tools/testing/selftests/bpf/prog_tests/cgrp_local_storage.c
> > index 63e776f4176e..9524cde4cbf6 100644
> > --- a/tools/testing/selftests/bpf/prog_tests/cgrp_local_storage.c
> > +++ b/tools/testing/selftests/bpf/prog_tests/cgrp_local_storage.c
> > @@ -19,6 +19,9 @@ struct socket_cookie {
> >       __u64 cookie_value;
> >   };
> >
> > +bool is_cgroup1;
> > +int target_hid;
> > +
> >   static void test_tp_btf(int cgroup_fd)
> >   {
> >       struct cgrp_ls_tp_btf *skel;
> > @@ -29,6 +32,9 @@ static void test_tp_btf(int cgroup_fd)
> >       if (!ASSERT_OK_PTR(skel, "skel_open_and_load"))
> >               return;
> >
> > +     skel->bss->is_cgroup1 = is_cgroup1;
> > +     skel->bss->target_hid = target_hid;
>
> Let's reverse the order as below to be consistent with other code patterns:

will change it.

> +       skel->bss->target_hid = target_hid;
> +       skel->bss->is_cgroup1 = is_cgroup1;
>
> > +
> >       /* populate a value in map_b */
> >       err = bpf_map_update_elem(bpf_map__fd(skel->maps.map_b), &cgroup_fd, &val1, BPF_ANY);
> >       if (!ASSERT_OK(err, "map_update_elem"))
> > @@ -130,6 +136,9 @@ static void test_recursion(int cgroup_fd)
> >       if (!ASSERT_OK_PTR(skel, "skel_open_and_load"))
> >               return;
> >
> > +     skel->bss->target_hid = target_hid;
> > +     skel->bss->is_cgroup1 = is_cgroup1;
> > +
> >       err = cgrp_ls_recursion__attach(skel);
> >       if (!ASSERT_OK(err, "skel_attach"))
> >               goto out;
> > [...]
> > diff --git a/tools/testing/selftests/bpf/progs/cgrp_ls_sleepable.c b/tools/testing/selftests/bpf/progs/cgrp_ls_sleepable.c
> > index 4c7844e1dbfa..985ff419249c 100644
> > --- a/tools/testing/selftests/bpf/progs/cgrp_ls_sleepable.c
> > +++ b/tools/testing/selftests/bpf/progs/cgrp_ls_sleepable.c
> > @@ -17,7 +17,11 @@ struct {
> >
> >   __u32 target_pid;
> >   __u64 cgroup_id;
> > +int target_hid;
> > +bool is_cgroup1;
> >
> > +struct cgroup *bpf_task_get_cgroup1(struct task_struct *task, int hierarchy_id) __ksym;
> > +void bpf_cgroup_release(struct cgroup *cgrp) __ksym;
> >   void bpf_rcu_read_lock(void) __ksym;
> >   void bpf_rcu_read_unlock(void) __ksym;
> >
> > @@ -37,23 +41,56 @@ int cgroup_iter(struct bpf_iter__cgroup *ctx)
> >       return 0;
> >   }
> >
> > +static void __no_rcu_lock(struct cgroup *cgrp)
> > +{
> > +     long *ptr;
> > +
> > +     /* Note that trace rcu is held in sleepable prog, so we can use
> > +      * bpf_cgrp_storage_get() in sleepable prog.
> > +      */
> > +     ptr = bpf_cgrp_storage_get(&map_a, cgrp, 0,
> > +                                BPF_LOCAL_STORAGE_GET_F_CREATE);
> > +     if (ptr)
> > +             cgroup_id = cgrp->kn->id;
> > +}
> > +
> >   SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
> > -int no_rcu_lock(void *ctx)
> > +int cgrp1_no_rcu_lock(void *ctx)
> >   {
> >       struct task_struct *task;
> >       struct cgroup *cgrp;
> > -     long *ptr;
> > +
> > +     if (!is_cgroup1)
> > +             return 0;
>
> Do we need this check? It looks like user space already controls whether
> this program is loaded, depending on whether it is cgroup1.

will remove this check.

>
> > +
> > +     task = bpf_get_current_task_btf();
> > +     if (task->pid != target_pid)
> > +             return 0;
> > +
> > +     /* bpf_task_get_cgroup1 can work in sleepable prog */
> > +     cgrp = bpf_task_get_cgroup1(task, target_hid);
> > +     if (!cgrp)
> > +             return 0;
> > +
> > +     __no_rcu_lock(cgrp);
> > +     bpf_cgroup_release(cgrp);
> > +     return 0;
> > +}
> > +
> > +SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
> > +int no_rcu_lock(void *ctx)
> > +{
> > +     struct task_struct *task;
> > +
> > +     if (is_cgroup1)
> > +             return 0;
>
> Same here, check is not needed.

will remove it.

Thanks for your review.

-- 
Regards
Yafang


end of thread, other threads:[~2023-12-06  9:00 UTC | newest]

Thread overview: 11+ messages
2023-12-05 14:37 [RFC PATCH bpf-next 0/3] bpf: Expand bpf_cgrp_storage to support cgroup1 non-attach case Yafang Shao
2023-12-05 14:37 ` [RFC PATCH bpf-next 1/3] bpf: Enable bpf_cgrp_storage for " Yafang Shao
2023-12-06  4:17   ` Yonghong Song
2023-12-05 14:37 ` [RFC PATCH bpf-next 2/3] selftests/bpf: Add a new cgroup helper open_classid() Yafang Shao
2023-12-06  4:18   ` Yonghong Song
2023-12-05 14:37 ` [RFC PATCH bpf-next 3/3] selftests/bpf: Add selftests for cgroup1 local storage Yafang Shao
2023-12-06  4:24   ` Yonghong Song
2023-12-06  9:00     ` Yafang Shao
2023-12-05 17:15 ` [RFC PATCH bpf-next 0/3] bpf: Expand bpf_cgrp_storage to support cgroup1 non-attach case Tejun Heo
2023-12-06  2:47   ` Alexei Starovoitov
2023-12-06  3:01     ` Yafang Shao
