* [PATCH v3 bpf-next 0/3] bpf: Add kmem_cache iterator and kfunc
@ 2024-10-02  6:54 Namhyung Kim
  2024-10-02  6:54 ` [PATCH v3 bpf-next 1/3] bpf: Add kmem_cache iterator Namhyung Kim
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Namhyung Kim @ 2024-10-02  6:54 UTC
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	LKML, bpf, Andrew Morton, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Vlastimil Babka, Roman Gushchin,
	Hyeonggon Yoo, linux-mm, Arnaldo Carvalho de Melo

Hello,

I'm proposing a new iterator and a kfunc for the slab memory allocator
to get information about each kmem_cache, like /proc/slabinfo or
/sys/kernel/slab, but in a more flexible way.

v3 changes)

 * rework kmem_cache_iter not to hold slab_mutex when running BPF  (Alexei)
 * add virt_addr_valid() check  (Alexei)
 * fix random test failure by running test with the current task  (Hyeonggon)

v2: https://lore.kernel.org/lkml/20240927184133.968283-1-namhyung@kernel.org/

 * rename it to "kmem_cache_iter"
 * fix a build issue
 * add Acked-by's from Roman and Vlastimil (Thanks!)
 * add error codes in the test for debugging

v1: https://lore.kernel.org/lkml/20240925223023.735947-1-namhyung@kernel.org/

My use case is the `perf lock contention` tool which shows contended
locks, but many of them are not global locks and don't have symbols.  If
it could translate the address of a lock in a slab object into the name
of the slab cache, it'd be much more useful.

I'm not aware of any type information in slab yet, but I was told
there's work to associate a BTF ID with it, which would definitely be
helpful for my use case.  We'd probably need another kfunc to get the
start address of the object, or the offset into the object from a given
address, once the type info is available.  But I want to start with
something simple first.

The kmem_cache_iter iterates over the kmem_cache objects, taking the
slab_mutex only while it walks the list, and will be useful for
userspace to prepare work for specific slabs, like setting up filters
in advance.  The bpf_get_kmem_cache() kfunc returns a pointer to the
slab cache from the address of a lock.  The test code reads from the
iterator and makes sure it finds the slab cache of the current task's
task_struct.
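
For example, once an iterator program is pinned, the slab info can be
read like a regular file.  A hypothetical session (the object file name
and the output below are illustrative; the actual output is whatever
the attached BPF program prints):

  $ bpftool iter pin kmem_cache_iter.bpf.o /sys/fs/bpf/slab_iter
  $ cat /sys/fs/bpf/slab_iter
  kmem_cache: 256
  task_struct: 10240
  ...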

The code is available at 'bpf/slab-iter-v3' branch in
https://git.kernel.org/pub/scm/linux/kernel/git/namhyung/linux-perf.git

Thanks,
Namhyung


Namhyung Kim (3):
  bpf: Add kmem_cache iterator
  mm/bpf: Add bpf_get_kmem_cache() kfunc
  selftests/bpf: Add a test for kmem_cache_iter

 include/linux/btf_ids.h                       |   1 +
 kernel/bpf/Makefile                           |   1 +
 kernel/bpf/helpers.c                          |   1 +
 kernel/bpf/kmem_cache_iter.c                  | 165 ++++++++++++++++++
 mm/slab_common.c                              |  19 ++
 .../bpf/prog_tests/kmem_cache_iter.c          |  64 +++++++
 tools/testing/selftests/bpf/progs/bpf_iter.h  |   7 +
 .../selftests/bpf/progs/kmem_cache_iter.c     |  66 +++++++
 8 files changed, 324 insertions(+)
 create mode 100644 kernel/bpf/kmem_cache_iter.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/kmem_cache_iter.c
 create mode 100644 tools/testing/selftests/bpf/progs/kmem_cache_iter.c


base-commit: 9502a7de5a61bec3bda841a830560c5d6d40ecac
-- 
2.46.1.824.gd892dcdcdd-goog




* [PATCH v3 bpf-next 1/3] bpf: Add kmem_cache iterator
  2024-10-02  6:54 [PATCH v3 bpf-next 0/3] bpf: Add kmem_cache iterator and kfunc Namhyung Kim
@ 2024-10-02  6:54 ` Namhyung Kim
  2024-10-02 10:54   ` Vlastimil Babka
  2024-10-02  6:54 ` [PATCH v3 bpf-next 2/3] mm/bpf: Add bpf_get_kmem_cache() kfunc Namhyung Kim
  2024-10-02  6:54 ` [PATCH v3 bpf-next 3/3] selftests/bpf: Add a test for kmem_cache_iter Namhyung Kim
  2 siblings, 1 reply; 6+ messages in thread
From: Namhyung Kim @ 2024-10-02  6:54 UTC
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	LKML, bpf, Andrew Morton, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Vlastimil Babka, Roman Gushchin,
	Hyeonggon Yoo, linux-mm, Arnaldo Carvalho de Melo

The new "kmem_cache" iterator will traverse the list of slab caches
and call attached BPF programs for each entry.  It should check the
argument (ctx.s) if it's NULL before using it.

Now the iteration grabs the slab_mutex only if it traverse the list and
releases the mutex when it runs the BPF program.  The kmem_cache entry
is protected by a refcount during the execution.

It includes the internal "mm/slab.h" header to access kmem_cache,
slab_caches and slab_mutex.  Hope it's ok to mm folks.

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
I've removed the Acked-by's from Roman and Vlastimil since the patch
has changed to not hold the slab_mutex while running the BPF program
and to manage the refcount.  Please review this change again!

 include/linux/btf_ids.h      |   1 +
 kernel/bpf/Makefile          |   1 +
 kernel/bpf/kmem_cache_iter.c | 165 +++++++++++++++++++++++++++++++++++
 3 files changed, 167 insertions(+)
 create mode 100644 kernel/bpf/kmem_cache_iter.c

diff --git a/include/linux/btf_ids.h b/include/linux/btf_ids.h
index c0e3e1426a82f5c4..139bdececdcfaefb 100644
--- a/include/linux/btf_ids.h
+++ b/include/linux/btf_ids.h
@@ -283,5 +283,6 @@ extern u32 btf_tracing_ids[];
 extern u32 bpf_cgroup_btf_id[];
 extern u32 bpf_local_storage_map_btf_id[];
 extern u32 btf_bpf_map_id[];
+extern u32 bpf_kmem_cache_btf_id[];
 
 #endif
diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
index 9b9c151b5c826b31..105328f0b9c04e37 100644
--- a/kernel/bpf/Makefile
+++ b/kernel/bpf/Makefile
@@ -52,3 +52,4 @@ obj-$(CONFIG_BPF_PRELOAD) += preload/
 obj-$(CONFIG_BPF_SYSCALL) += relo_core.o
 obj-$(CONFIG_BPF_SYSCALL) += btf_iter.o
 obj-$(CONFIG_BPF_SYSCALL) += btf_relocate.o
+obj-$(CONFIG_BPF_SYSCALL) += kmem_cache_iter.o
diff --git a/kernel/bpf/kmem_cache_iter.c b/kernel/bpf/kmem_cache_iter.c
new file mode 100644
index 0000000000000000..a77c08b82c6bc965
--- /dev/null
+++ b/kernel/bpf/kmem_cache_iter.c
@@ -0,0 +1,165 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (c) 2024 Google */
+#include <linux/bpf.h>
+#include <linux/btf_ids.h>
+#include <linux/slab.h>
+#include <linux/kernel.h>
+#include <linux/seq_file.h>
+
+#include "../../mm/slab.h" /* kmem_cache, slab_caches and slab_mutex */
+
+struct bpf_iter__kmem_cache {
+	__bpf_md_ptr(struct bpf_iter_meta *, meta);
+	__bpf_md_ptr(struct kmem_cache *, s);
+};
+
+static void *kmem_cache_iter_seq_start(struct seq_file *seq, loff_t *pos)
+{
+	loff_t cnt = 0;
+	struct kmem_cache *s = NULL;
+
+	mutex_lock(&slab_mutex);
+
+	/*
+	 * Find an entry at the given position in the slab_caches list instead
+	 * of keeping a reference (of the last visited entry, if any) out of
+	 * slab_mutex. It might miss something if one is deleted in the middle
+	 * while it releases the lock.  But it should be rare and there's not
+	 * much we can do about it.
+	 */
+	list_for_each_entry(s, &slab_caches, list) {
+		if (cnt == *pos) {
+			/*
+			 * Make sure this entry remains in the list by getting
+			 * a new reference count.  Note that boot_cache entries
+			 * have a negative refcount, so don't touch them.
+			 */
+			if (s->refcount > 0)
+				s->refcount++;
+			break;
+		}
+
+		cnt++;
+	}
+	mutex_unlock(&slab_mutex);
+
+	if (!s || list_entry_is_head(s, &slab_caches, list))
+		return NULL;
+
+	++*pos;
+	return s;
+}
+
+static void kmem_cache_iter_seq_stop(struct seq_file *seq, void *v)
+{
+	struct bpf_iter_meta meta;
+	struct bpf_iter__kmem_cache ctx = {
+		.meta = &meta,
+		.s = v,
+	};
+	struct bpf_prog *prog;
+	bool destroy = false;
+
+	meta.seq = seq;
+	prog = bpf_iter_get_info(&meta, true);
+	if (prog)
+		bpf_iter_run_prog(prog, &ctx);
+
+	mutex_lock(&slab_mutex);
+	if (ctx.s && ctx.s->refcount > 0)
+		destroy = true;
+	mutex_unlock(&slab_mutex);
+
+	if (destroy)
+		kmem_cache_destroy(ctx.s);
+}
+
+static void *kmem_cache_iter_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+{
+	struct kmem_cache *s = v;
+	struct kmem_cache *next = NULL;
+	bool destroy = false;
+
+	++*pos;
+
+	mutex_lock(&slab_mutex);
+
+	if (list_last_entry(&slab_caches, struct kmem_cache, list) != s) {
+		next = list_next_entry(s, list);
+		if (next->refcount > 0)
+			next->refcount++;
+	}
+
+	/* Skip kmem_cache_destroy() for active entries */
+	if (s->refcount > 1)
+		s->refcount--;
+	else if (s->refcount == 1)
+		destroy = true;
+
+	mutex_unlock(&slab_mutex);
+
+	if (destroy)
+		kmem_cache_destroy(s);
+
+	return next;
+}
+
+static int kmem_cache_iter_seq_show(struct seq_file *seq, void *v)
+{
+	struct bpf_iter_meta meta;
+	struct bpf_iter__kmem_cache ctx = {
+		.meta = &meta,
+		.s = v,
+	};
+	struct bpf_prog *prog;
+	int ret = 0;
+
+	meta.seq = seq;
+	prog = bpf_iter_get_info(&meta, false);
+	if (prog)
+		ret = bpf_iter_run_prog(prog, &ctx);
+
+	return ret;
+}
+
+static const struct seq_operations kmem_cache_iter_seq_ops = {
+	.start  = kmem_cache_iter_seq_start,
+	.next   = kmem_cache_iter_seq_next,
+	.stop   = kmem_cache_iter_seq_stop,
+	.show   = kmem_cache_iter_seq_show,
+};
+
+BTF_ID_LIST_GLOBAL_SINGLE(bpf_kmem_cache_btf_id, struct, kmem_cache)
+
+static const struct bpf_iter_seq_info kmem_cache_iter_seq_info = {
+	.seq_ops		= &kmem_cache_iter_seq_ops,
+};
+
+static void bpf_iter_kmem_cache_show_fdinfo(const struct bpf_iter_aux_info *aux,
+					    struct seq_file *seq)
+{
+	seq_puts(seq, "kmem_cache iter\n");
+}
+
+DEFINE_BPF_ITER_FUNC(kmem_cache, struct bpf_iter_meta *meta,
+		     struct kmem_cache *s)
+
+static struct bpf_iter_reg bpf_kmem_cache_reg_info = {
+	.target			= "kmem_cache",
+	.feature		= BPF_ITER_RESCHED,
+	.show_fdinfo		= bpf_iter_kmem_cache_show_fdinfo,
+	.ctx_arg_info_size	= 1,
+	.ctx_arg_info		= {
+		{ offsetof(struct bpf_iter__kmem_cache, s),
+		  PTR_TO_BTF_ID_OR_NULL | PTR_TRUSTED },
+	},
+	.seq_info		= &kmem_cache_iter_seq_info,
+};
+
+static int __init bpf_kmem_cache_iter_init(void)
+{
+	bpf_kmem_cache_reg_info.ctx_arg_info[0].btf_id = bpf_kmem_cache_btf_id[0];
+	return bpf_iter_reg_target(&bpf_kmem_cache_reg_info);
+}
+
+late_initcall(bpf_kmem_cache_iter_init);

base-commit: 9502a7de5a61bec3bda841a830560c5d6d40ecac
-- 
2.46.1.824.gd892dcdcdd-goog




* [PATCH v3 bpf-next 2/3] mm/bpf: Add bpf_get_kmem_cache() kfunc
  2024-10-02  6:54 [PATCH v3 bpf-next 0/3] bpf: Add kmem_cache iterator and kfunc Namhyung Kim
  2024-10-02  6:54 ` [PATCH v3 bpf-next 1/3] bpf: Add kmem_cache iterator Namhyung Kim
@ 2024-10-02  6:54 ` Namhyung Kim
  2024-10-02  6:54 ` [PATCH v3 bpf-next 3/3] selftests/bpf: Add a test for kmem_cache_iter Namhyung Kim
  2 siblings, 0 replies; 6+ messages in thread
From: Namhyung Kim @ 2024-10-02  6:54 UTC
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	LKML, bpf, Andrew Morton, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Vlastimil Babka, Roman Gushchin,
	Hyeonggon Yoo, linux-mm, Arnaldo Carvalho de Melo

The bpf_get_kmem_cache() kfunc returns the slab cache information for
a given virtual address, like virt_to_cache().  If the address is a
pointer to a slab object, it returns a valid kmem_cache pointer;
otherwise NULL is returned.

It doesn't take a reference on the kmem_cache, so the caller is
responsible for managing access.  The intended use case for now is to
symbolize locks in slab objects from the lock contention tracepoints.
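
As an illustration (a sketch, not part of this series), a program on
the contention_begin tracepoint could resolve a contended lock address
to the name of the slab cache that holds it:

  extern struct kmem_cache *bpf_get_kmem_cache(__u64 addr) __ksym;

  SEC("tp_btf/contention_begin")
  int BPF_PROG(probe_contention, void *lock, unsigned int flags)
  {
  	struct kmem_cache *s = bpf_get_kmem_cache((__u64)lock);
  	char name[32];

  	if (!s)
  		return 0;

  	/* the cache name tells what kind of object embeds the lock */
  	bpf_probe_read_kernel_str(name, sizeof(name), s->name);
  	bpf_printk("contended lock %llx in %s", (__u64)lock, name);
  	return 0;
  }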

Suggested-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Roman Gushchin <roman.gushchin@linux.dev> (mm/*)
Acked-by: Vlastimil Babka <vbabka@suse.cz> #mm/slab
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
 kernel/bpf/helpers.c |  1 +
 mm/slab_common.c     | 19 +++++++++++++++++++
 2 files changed, 20 insertions(+)

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 4053f279ed4cc7ab..3709fb14288105c6 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -3090,6 +3090,7 @@ BTF_ID_FLAGS(func, bpf_iter_bits_new, KF_ITER_NEW)
 BTF_ID_FLAGS(func, bpf_iter_bits_next, KF_ITER_NEXT | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_iter_bits_destroy, KF_ITER_DESTROY)
 BTF_ID_FLAGS(func, bpf_copy_from_user_str, KF_SLEEPABLE)
+BTF_ID_FLAGS(func, bpf_get_kmem_cache, KF_RET_NULL)
 BTF_KFUNCS_END(common_btf_ids)
 
 static const struct btf_kfunc_id_set common_kfunc_set = {
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 7443244656150325..5484e1cd812f698e 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1322,6 +1322,25 @@ size_t ksize(const void *objp)
 }
 EXPORT_SYMBOL(ksize);
 
+#ifdef CONFIG_BPF_SYSCALL
+#include <linux/btf.h>
+
+__bpf_kfunc_start_defs();
+
+__bpf_kfunc struct kmem_cache *bpf_get_kmem_cache(u64 addr)
+{
+	struct slab *slab;
+
+	if (!virt_addr_valid(addr))
+		return NULL;
+
+	slab = virt_to_slab((void *)(long)addr);
+	return slab ? slab->slab_cache : NULL;
+}
+
+__bpf_kfunc_end_defs();
+#endif /* CONFIG_BPF_SYSCALL */
+
 /* Tracepoints definitions. */
 EXPORT_TRACEPOINT_SYMBOL(kmalloc);
 EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc);
-- 
2.46.1.824.gd892dcdcdd-goog




* [PATCH v3 bpf-next 3/3] selftests/bpf: Add a test for kmem_cache_iter
  2024-10-02  6:54 [PATCH v3 bpf-next 0/3] bpf: Add kmem_cache iterator and kfunc Namhyung Kim
  2024-10-02  6:54 ` [PATCH v3 bpf-next 1/3] bpf: Add kmem_cache iterator Namhyung Kim
  2024-10-02  6:54 ` [PATCH v3 bpf-next 2/3] mm/bpf: Add bpf_get_kmem_cache() kfunc Namhyung Kim
@ 2024-10-02  6:54 ` Namhyung Kim
  2024-10-04 23:53   ` Alexei Starovoitov
  2 siblings, 1 reply; 6+ messages in thread
From: Namhyung Kim @ 2024-10-02  6:54 UTC
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	LKML, bpf, Andrew Morton, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Vlastimil Babka, Roman Gushchin,
	Hyeonggon Yoo, linux-mm, Arnaldo Carvalho de Melo

The test traverses all slab caches using the kmem_cache_iter and checks
whether the current task's pointer comes from the "task_struct" slab
cache.

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
 .../bpf/prog_tests/kmem_cache_iter.c          | 64 ++++++++++++++++++
 tools/testing/selftests/bpf/progs/bpf_iter.h  |  7 ++
 .../selftests/bpf/progs/kmem_cache_iter.c     | 66 +++++++++++++++++++
 3 files changed, 137 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/kmem_cache_iter.c
 create mode 100644 tools/testing/selftests/bpf/progs/kmem_cache_iter.c

diff --git a/tools/testing/selftests/bpf/prog_tests/kmem_cache_iter.c b/tools/testing/selftests/bpf/prog_tests/kmem_cache_iter.c
new file mode 100644
index 0000000000000000..3965e2924ac82d91
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/kmem_cache_iter.c
@@ -0,0 +1,64 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Google */
+
+#include <test_progs.h>
+#include <bpf/libbpf.h>
+#include <bpf/btf.h>
+#include "kmem_cache_iter.skel.h"
+
+static void test_kmem_cache_iter_check_task(struct kmem_cache_iter *skel)
+{
+	LIBBPF_OPTS(bpf_test_run_opts, opts,
+		.flags = 0,  /* run it with the current task */
+	);
+	int prog_fd = bpf_program__fd(skel->progs.check_task_struct);
+
+	/* get the task_struct and check if it's from a slab cache */
+	bpf_prog_test_run_opts(prog_fd, &opts);
+
+	/* the BPF program should set 'found' variable */
+	ASSERT_EQ(skel->bss->found, 1, "found task_struct");
+}
+
+void test_kmem_cache_iter(void)
+{
+	DECLARE_LIBBPF_OPTS(bpf_iter_attach_opts, opts);
+	struct kmem_cache_iter *skel = NULL;
+	union bpf_iter_link_info linfo = {};
+	struct bpf_link *link;
+	char buf[1024];
+	int iter_fd;
+
+	skel = kmem_cache_iter__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "kmem_cache_iter__open_and_load"))
+		return;
+
+	opts.link_info = &linfo;
+	opts.link_info_len = sizeof(linfo);
+
+	link = bpf_program__attach_iter(skel->progs.slab_info_collector, &opts);
+	if (!ASSERT_OK_PTR(link, "attach_iter"))
+		goto destroy;
+
+	iter_fd = bpf_iter_create(bpf_link__fd(link));
+	if (!ASSERT_GE(iter_fd, 0, "iter_create"))
+		goto free_link;
+
+	memset(buf, 0, sizeof(buf));
+	while (read(iter_fd, buf, sizeof(buf)) > 0) {
+		/* read out all contents */
+		printf("%s", buf);
+	}
+
+	/* next reads should return 0 */
+	ASSERT_EQ(read(iter_fd, buf, sizeof(buf)), 0, "read");
+
+	test_kmem_cache_iter_check_task(skel);
+
+	close(iter_fd);
+
+free_link:
+	bpf_link__destroy(link);
+destroy:
+	kmem_cache_iter__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/bpf_iter.h b/tools/testing/selftests/bpf/progs/bpf_iter.h
index c41ee80533ca219a..3305dc3a74b32481 100644
--- a/tools/testing/selftests/bpf/progs/bpf_iter.h
+++ b/tools/testing/selftests/bpf/progs/bpf_iter.h
@@ -24,6 +24,7 @@
 #define BTF_F_PTR_RAW BTF_F_PTR_RAW___not_used
 #define BTF_F_ZERO BTF_F_ZERO___not_used
 #define bpf_iter__ksym bpf_iter__ksym___not_used
+#define bpf_iter__kmem_cache bpf_iter__kmem_cache___not_used
 #include "vmlinux.h"
 #undef bpf_iter_meta
 #undef bpf_iter__bpf_map
@@ -48,6 +49,7 @@
 #undef BTF_F_PTR_RAW
 #undef BTF_F_ZERO
 #undef bpf_iter__ksym
+#undef bpf_iter__kmem_cache
 
 struct bpf_iter_meta {
 	struct seq_file *seq;
@@ -165,3 +167,8 @@ struct bpf_iter__ksym {
 	struct bpf_iter_meta *meta;
 	struct kallsym_iter *ksym;
 };
+
+struct bpf_iter__kmem_cache {
+	struct bpf_iter_meta *meta;
+	struct kmem_cache *s;
+} __attribute__((preserve_access_index));
diff --git a/tools/testing/selftests/bpf/progs/kmem_cache_iter.c b/tools/testing/selftests/bpf/progs/kmem_cache_iter.c
new file mode 100644
index 0000000000000000..3f6ec15a1bf6344c
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/kmem_cache_iter.c
@@ -0,0 +1,66 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Google */
+
+#include "bpf_iter.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+#define SLAB_NAME_MAX  256
+
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__uint(key_size, sizeof(void *));
+	__uint(value_size, SLAB_NAME_MAX);
+	__uint(max_entries, 1024);
+} slab_hash SEC(".maps");
+
+extern struct kmem_cache *bpf_get_kmem_cache(__u64 addr) __ksym;
+
+/* result, will be checked by userspace */
+int found;
+
+SEC("iter/kmem_cache")
+int slab_info_collector(struct bpf_iter__kmem_cache *ctx)
+{
+	struct seq_file *seq = ctx->meta->seq;
+	struct kmem_cache *s = ctx->s;
+
+	if (s) {
+		char name[SLAB_NAME_MAX];
+
+		/*
+		 * Check that the slab iterator implements the seq interface
+		 * properly; printing here is also useful for debugging.
+		 */
+		BPF_SEQ_PRINTF(seq, "%s: %u\n", s->name, s->object_size);
+
+		bpf_probe_read_kernel_str(name, sizeof(name), s->name);
+		bpf_map_update_elem(&slab_hash, &s, name, BPF_NOEXIST);
+	}
+
+	return 0;
+}
+
+SEC("raw_tp/bpf_test_finish")
+int BPF_PROG(check_task_struct)
+{
+	__u64 curr = bpf_get_current_task();
+	struct kmem_cache *s;
+	char *name;
+
+	s = bpf_get_kmem_cache(curr);
+	if (s == NULL) {
+		found = -1;
+		return 0;
+	}
+
+	name = bpf_map_lookup_elem(&slab_hash, &s);
+	if (name && !bpf_strncmp(name, 11, "task_struct"))
+		found = 1;
+	else
+		found = -2;
+
+	return 0;
+}
-- 
2.46.1.824.gd892dcdcdd-goog




* Re: [PATCH v3 bpf-next 1/3] bpf: Add kmem_cache iterator
  2024-10-02  6:54 ` [PATCH v3 bpf-next 1/3] bpf: Add kmem_cache iterator Namhyung Kim
@ 2024-10-02 10:54   ` Vlastimil Babka
  0 siblings, 0 replies; 6+ messages in thread
From: Vlastimil Babka @ 2024-10-02 10:54 UTC
  To: Namhyung Kim, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko
  Cc: Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	LKML, bpf, Andrew Morton, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo,
	linux-mm, Arnaldo Carvalho de Melo

On 10/2/24 08:54, Namhyung Kim wrote:
> The new "kmem_cache" iterator will traverse the list of slab caches
> and call attached BPF programs for each entry.  It should check the
> argument (ctx.s) if it's NULL before using it.
> 
> Now the iteration grabs the slab_mutex only if it traverse the list and
> releases the mutex when it runs the BPF program.  The kmem_cache entry
> is protected by a refcount during the execution.
> 
> It includes the internal "mm/slab.h" header to access kmem_cache,
> slab_caches and slab_mutex.  Hope it's ok to mm folks.
> 
> Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> ---
> I've removed the Acked-by's from Roman and Vlastimil since it's changed
> not to hold the slab_mutex and to manage the refcount.  Please review
> this change again!
> 
>  include/linux/btf_ids.h      |   1 +
>  kernel/bpf/Makefile          |   1 +
>  kernel/bpf/kmem_cache_iter.c | 165 +++++++++++++++++++++++++++++++++++
>  3 files changed, 167 insertions(+)
>  create mode 100644 kernel/bpf/kmem_cache_iter.c
> 
> diff --git a/include/linux/btf_ids.h b/include/linux/btf_ids.h
> index c0e3e1426a82f5c4..139bdececdcfaefb 100644
> --- a/include/linux/btf_ids.h
> +++ b/include/linux/btf_ids.h
> @@ -283,5 +283,6 @@ extern u32 btf_tracing_ids[];
>  extern u32 bpf_cgroup_btf_id[];
>  extern u32 bpf_local_storage_map_btf_id[];
>  extern u32 btf_bpf_map_id[];
> +extern u32 bpf_kmem_cache_btf_id[];
>  
>  #endif
> diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
> index 9b9c151b5c826b31..105328f0b9c04e37 100644
> --- a/kernel/bpf/Makefile
> +++ b/kernel/bpf/Makefile
> @@ -52,3 +52,4 @@ obj-$(CONFIG_BPF_PRELOAD) += preload/
>  obj-$(CONFIG_BPF_SYSCALL) += relo_core.o
>  obj-$(CONFIG_BPF_SYSCALL) += btf_iter.o
>  obj-$(CONFIG_BPF_SYSCALL) += btf_relocate.o
> +obj-$(CONFIG_BPF_SYSCALL) += kmem_cache_iter.o
> diff --git a/kernel/bpf/kmem_cache_iter.c b/kernel/bpf/kmem_cache_iter.c
> new file mode 100644
> index 0000000000000000..a77c08b82c6bc965
> --- /dev/null
> +++ b/kernel/bpf/kmem_cache_iter.c
> @@ -0,0 +1,165 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/* Copyright (c) 2024 Google */
> +#include <linux/bpf.h>
> +#include <linux/btf_ids.h>
> +#include <linux/slab.h>
> +#include <linux/kernel.h>
> +#include <linux/seq_file.h>
> +
> +#include "../../mm/slab.h" /* kmem_cache, slab_caches and slab_mutex */
> +
> +struct bpf_iter__kmem_cache {
> +	__bpf_md_ptr(struct bpf_iter_meta *, meta);
> +	__bpf_md_ptr(struct kmem_cache *, s);
> +};
> +
> +static void *kmem_cache_iter_seq_start(struct seq_file *seq, loff_t *pos)
> +{
> +	loff_t cnt = 0;
> +	struct kmem_cache *s = NULL;
> +
> +	mutex_lock(&slab_mutex);
> +
> +	/*
> +	 * Find an entry at the given position in the slab_caches list instead
> +	 * of keeping a reference (of the last visited entry, if any) out of
> +	 * slab_mutex. It might miss something if one is deleted in the middle
> +	 * while it releases the lock.  But it should be rare and there's not
> +	 * much we can do about it.
> +	 */
> +	list_for_each_entry(s, &slab_caches, list) {
> +		if (cnt == *pos) {
> +			/*
> +			 * Make sure this entry remains in the list by getting
> +			 * a new reference count.  Note that boot_cache entries
> +			 * have a negative refcount, so don't touch them.
> +			 */
> +			if (s->refcount > 0)
> +				s->refcount++;
> +			break;
> +		}
> +
> +		cnt++;
> +	}
> +	mutex_unlock(&slab_mutex);
> +
> +	if (!s || list_entry_is_head(s, &slab_caches, list))
> +		return NULL;
> +
> +	++*pos;
> +	return s;
> +}
> +
> +static void kmem_cache_iter_seq_stop(struct seq_file *seq, void *v)
> +{
> +	struct bpf_iter_meta meta;
> +	struct bpf_iter__kmem_cache ctx = {
> +		.meta = &meta,
> +		.s = v,
> +	};
> +	struct bpf_prog *prog;
> +	bool destroy = false;
> +
> +	meta.seq = seq;
> +	prog = bpf_iter_get_info(&meta, true);
> +	if (prog)
> +		bpf_iter_run_prog(prog, &ctx);
> +
> +	mutex_lock(&slab_mutex);
> +	if (ctx.s && ctx.s->refcount > 0)
> +		destroy = true;

I'd do the same optimization as in kmem_cache_iter_seq_next(), otherwise
this will always result in taking the mutex twice and performing
kvfree_rcu_barrier() needlessly?
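
i.e. decrement the refcount for active entries instead of destroying
them, same as in _next(), something like this (untested):

	mutex_lock(&slab_mutex);
	if (ctx.s) {
		/* Skip kmem_cache_destroy() for active entries */
		if (ctx.s->refcount > 1)
			ctx.s->refcount--;
		else if (ctx.s->refcount == 1)
			destroy = true;
	}
	mutex_unlock(&slab_mutex);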

> +	mutex_unlock(&slab_mutex);
> +
> +	if (destroy)
> +		kmem_cache_destroy(ctx.s);
> +}
> +
> +static void *kmem_cache_iter_seq_next(struct seq_file *seq, void *v, loff_t *pos)
> +{
> +	struct kmem_cache *s = v;
> +	struct kmem_cache *next = NULL;
> +	bool destroy = false;
> +
> +	++*pos;
> +
> +	mutex_lock(&slab_mutex);
> +
> +	if (list_last_entry(&slab_caches, struct kmem_cache, list) != s) {
> +		next = list_next_entry(s, list);
> +		if (next->refcount > 0)
> +			next->refcount++;
> +	}
> +
> +	/* Skip kmem_cache_destroy() for active entries */
> +	if (s->refcount > 1)
> +		s->refcount--;
> +	else if (s->refcount == 1)
> +		destroy = true;
> +
> +	mutex_unlock(&slab_mutex);
> +
> +	if (destroy)
> +		kmem_cache_destroy(s);
> +
> +	return next;
> +}
> +
> +static int kmem_cache_iter_seq_show(struct seq_file *seq, void *v)
> +{
> +	struct bpf_iter_meta meta;
> +	struct bpf_iter__kmem_cache ctx = {
> +		.meta = &meta,
> +		.s = v,
> +	};
> +	struct bpf_prog *prog;
> +	int ret = 0;
> +
> +	meta.seq = seq;
> +	prog = bpf_iter_get_info(&meta, false);
> +	if (prog)
> +		ret = bpf_iter_run_prog(prog, &ctx);
> +
> +	return ret;
> +}
> +
> +static const struct seq_operations kmem_cache_iter_seq_ops = {
> +	.start  = kmem_cache_iter_seq_start,
> +	.next   = kmem_cache_iter_seq_next,
> +	.stop   = kmem_cache_iter_seq_stop,
> +	.show   = kmem_cache_iter_seq_show,
> +};
> +
> +BTF_ID_LIST_GLOBAL_SINGLE(bpf_kmem_cache_btf_id, struct, kmem_cache)
> +
> +static const struct bpf_iter_seq_info kmem_cache_iter_seq_info = {
> +	.seq_ops		= &kmem_cache_iter_seq_ops,
> +};
> +
> +static void bpf_iter_kmem_cache_show_fdinfo(const struct bpf_iter_aux_info *aux,
> +					    struct seq_file *seq)
> +{
> +	seq_puts(seq, "kmem_cache iter\n");
> +}
> +
> +DEFINE_BPF_ITER_FUNC(kmem_cache, struct bpf_iter_meta *meta,
> +		     struct kmem_cache *s)
> +
> +static struct bpf_iter_reg bpf_kmem_cache_reg_info = {
> +	.target			= "kmem_cache",
> +	.feature		= BPF_ITER_RESCHED,
> +	.show_fdinfo		= bpf_iter_kmem_cache_show_fdinfo,
> +	.ctx_arg_info_size	= 1,
> +	.ctx_arg_info		= {
> +		{ offsetof(struct bpf_iter__kmem_cache, s),
> +		  PTR_TO_BTF_ID_OR_NULL | PTR_TRUSTED },
> +	},
> +	.seq_info		= &kmem_cache_iter_seq_info,
> +};
> +
> +static int __init bpf_kmem_cache_iter_init(void)
> +{
> +	bpf_kmem_cache_reg_info.ctx_arg_info[0].btf_id = bpf_kmem_cache_btf_id[0];
> +	return bpf_iter_reg_target(&bpf_kmem_cache_reg_info);
> +}
> +
> +late_initcall(bpf_kmem_cache_iter_init);
> 
> base-commit: 9502a7de5a61bec3bda841a830560c5d6d40ecac




* Re: [PATCH v3 bpf-next 3/3] selftests/bpf: Add a test for kmem_cache_iter
  2024-10-02  6:54 ` [PATCH v3 bpf-next 3/3] selftests/bpf: Add a test for kmem_cache_iter Namhyung Kim
@ 2024-10-04 23:53   ` Alexei Starovoitov
  0 siblings, 0 replies; 6+ messages in thread
From: Alexei Starovoitov @ 2024-10-04 23:53 UTC
  To: Namhyung Kim
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	LKML, bpf, Andrew Morton, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Vlastimil Babka, Roman Gushchin,
	Hyeonggon Yoo, linux-mm, Arnaldo Carvalho de Melo

On Tue, Oct 1, 2024 at 11:55 PM Namhyung Kim <namhyung@kernel.org> wrote:
>
> +++ b/tools/testing/selftests/bpf/progs/kmem_cache_iter.c
> @@ -0,0 +1,66 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright (c) 2024 Google */
> +
> +#include "bpf_iter.h"
> +#include <bpf/bpf_helpers.h>
> +#include <bpf/bpf_tracing.h>
> +
> +char _license[] SEC("license") = "GPL";
> +
> +#define SLAB_NAME_MAX  256
> +
> +struct {
> +       __uint(type, BPF_MAP_TYPE_HASH);
> +       __uint(key_size, sizeof(void *));
> +       __uint(value_size, SLAB_NAME_MAX);
> +       __uint(max_entries, 1024);
> +} slab_hash SEC(".maps");
> +
> +extern struct kmem_cache *bpf_get_kmem_cache(__u64 addr) __ksym;
> +
> +/* result, will be checked by userspace */
> +int found;
> +
> +SEC("iter/kmem_cache")
> +int slab_info_collector(struct bpf_iter__kmem_cache *ctx)
> +{
> +       struct seq_file *seq = ctx->meta->seq;
> +       struct kmem_cache *s = ctx->s;
> +
> +       if (s) {
> +               char name[SLAB_NAME_MAX];
> +
> +               /*
> +                * Check that the slab iterator implements the seq interface
> +                * properly; printing here is also useful for debugging.
> +                */
> +               BPF_SEQ_PRINTF(seq, "%s: %u\n", s->name, s->object_size);
> +
> +               bpf_probe_read_kernel_str(name, sizeof(name), s->name);
> +               bpf_map_update_elem(&slab_hash, &s, name, BPF_NOEXIST);
> +       }
> +
> +       return 0;
> +}
> +
> +SEC("raw_tp/bpf_test_finish")
> +int BPF_PROG(check_task_struct)
> +{
> +       __u64 curr = bpf_get_current_task();
> +       struct kmem_cache *s;
> +       char *name;
> +
> +       s = bpf_get_kmem_cache(curr);
> +       if (s == NULL) {
> +               found = -1;
> +               return 0;
> +       }
> +
> +       name = bpf_map_lookup_elem(&slab_hash, &s);
> +       if (name && !bpf_strncmp(name, 11, "task_struct"))
> +               found = 1;
> +       else
> +               found = -2;
> +
> +       return 0;
> +}

The test is a bit too simple.

Could you add a more comprehensive test that also demonstrates
the power of such a slab iterator?

Like progs/bpf_iter_task_vmas.c provides output equivalent to
cat /proc/pid/maps

and progs/bpf_iter_tcp6.c dumps equivalent output to
cat /proc/net/tcp6

Would be great to have a selftest that is equivalent to
cat /proc/slabinfo
(or at least close enough)

That will give more confidence that the interface works as intended.
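
Even a minimal version that only dumps the name and sizes of each cache
would be a good start.  A sketch (real slabinfo counters like
active_objs would need to walk the per-node partial lists):

	SEC("iter/kmem_cache")
	int slab_info(struct bpf_iter__kmem_cache *ctx)
	{
		struct seq_file *seq = ctx->meta->seq;
		struct kmem_cache *s = ctx->s;

		if (ctx->meta->seq_num == 0)
			BPF_SEQ_PRINTF(seq, "# name                <objsize> <size>\n");

		if (s)
			BPF_SEQ_PRINTF(seq, "%-20s %9u %6u\n",
				       s->name, s->object_size, s->size);
		return 0;
	}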


