* [PATCH v3 0/4] perf lock contention: Symbolize locks using slab cache names
From: Namhyung Kim @ 2024-12-20 6:00 UTC
To: Arnaldo Carvalho de Melo, Ian Rogers, Kan Liang
Cc: Jiri Olsa, Adrian Hunter, Peter Zijlstra, Ingo Molnar, LKML,
linux-perf-users, Andrii Nakryiko, Song Liu, bpf,
Stephane Eranian, Vlastimil Babka, Roman Gushchin, Hyeonggon Yoo,
Kees Cook, Chun-Tse Shao
Hello,
This is to support symbolization of dynamic locks using the slab
allocator's metadata. The kernel support was merged in v6.13.
It provides the new "kmem_cache" BPF iterator and the "bpf_get_kmem_cache"
kfunc to get the information from an address. Feature detection is
done using BTF type info, so it has no effect on old kernels.
v3 changes)
* fix build error with GEN_VMLINUX_H=1 (Arnaldo)
* update comment to explain slab cache ID (Vlastimil)
* add Ian's Acked-by
v2) https://lore.kernel.org/linux-perf-users/20241108061500.2698340-1-namhyung@kernel.org
* don't use libbpf_get_error() (Andrii)
v1) https://lore.kernel.org/linux-perf-users/20241105172635.2463800-1-namhyung@kernel.org
With this change, it can show locks in slab objects like below. I
added a "&" sign to distinguish them from global locks.
# perf lock con -abl sleep 1
contended total wait max wait avg wait address symbol
2 1.95 us 1.77 us 975 ns ffff9d5e852d3498 &task_struct (mutex)
1 1.18 us 1.18 us 1.18 us ffff9d5e852d3538 &task_struct (mutex)
4 1.12 us 354 ns 279 ns ffff9d5e841ca800 &kmalloc-cg-512 (mutex)
2 859 ns 617 ns 429 ns ffffffffa41c3620 delayed_uprobe_lock (mutex)
3 691 ns 388 ns 230 ns ffffffffa41c0940 pack_mutex (mutex)
3 421 ns 164 ns 140 ns ffffffffa3a8b3a0 text_mutex (mutex)
1 409 ns 409 ns 409 ns ffffffffa41b4cf8 tracepoint_srcu_srcu_usage (mutex)
2 362 ns 239 ns 181 ns ffffffffa41cf840 pcpu_alloc_mutex (mutex)
1 220 ns 220 ns 220 ns ffff9d5e82b534d8 &signal_cache (mutex)
1 215 ns 215 ns 215 ns ffffffffa41b4c28 tracepoint_srcu_srcu_usage (mutex)
The first two were from the "task_struct" slab cache. The cache name
happens to match the type name of the object, but there is no guarantee.
We would need to add type info to the slab cache to resolve the lock
inside the object. The third one has no dedicated slab cache and was
allocated by kmalloc.
These slab cache names can be used to filter specific locks with the -L
or --lock-filter option. (The name needs quotes to avoid special
handling of '&' by the shell.)
# perf lock con -ab -L '&task_struct' sleep 1
contended total wait max wait avg wait type caller
1 25.10 us 25.10 us 25.10 us mutex perf_event_exit_task+0x39
1 21.60 us 21.60 us 21.60 us mutex futex_exit_release+0x21
1 5.56 us 5.56 us 5.56 us mutex futex_exec_release+0x21
The code is available in the 'perf/lock-slab-v3' branch of my tree
git://git.kernel.org/pub/scm/linux/kernel/git/namhyung/linux-perf.git
Thanks,
Namhyung
Namhyung Kim (4):
perf lock contention: Add and use LCB_F_TYPE_MASK
perf lock contention: Run BPF slab cache iterator
perf lock contention: Resolve slab object name using BPF
perf lock contention: Handle slab objects in -L/--lock-filter option
tools/perf/builtin-lock.c | 39 ++++-
tools/perf/util/bpf_lock_contention.c | 140 +++++++++++++++++-
.../perf/util/bpf_skel/lock_contention.bpf.c | 95 +++++++++++-
tools/perf/util/bpf_skel/lock_data.h | 15 +-
tools/perf/util/bpf_skel/vmlinux/vmlinux.h | 8 +
tools/perf/util/lock-contention.h | 2 +
6 files changed, 292 insertions(+), 7 deletions(-)
--
2.47.1.613.gc27f4b7a9f-goog
* [PATCH v3 1/4] perf lock contention: Add and use LCB_F_TYPE_MASK
From: Namhyung Kim @ 2024-12-20 6:00 UTC
To: Arnaldo Carvalho de Melo, Ian Rogers, Kan Liang
Cc: Jiri Olsa, Adrian Hunter, Peter Zijlstra, Ingo Molnar, LKML,
linux-perf-users, Andrii Nakryiko, Song Liu, bpf,
Stephane Eranian, Vlastimil Babka, Roman Gushchin, Hyeonggon Yoo,
Kees Cook, Chun-Tse Shao
This is a preparation for a later change that will use more bits in the
flags. Rename the type part and use the mask to extract the type.
Acked-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
tools/perf/builtin-lock.c | 4 ++--
tools/perf/util/bpf_skel/lock_data.h | 3 ++-
2 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/tools/perf/builtin-lock.c b/tools/perf/builtin-lock.c
index f66948b1fbed96de..d9f3477d2b02b612 100644
--- a/tools/perf/builtin-lock.c
+++ b/tools/perf/builtin-lock.c
@@ -1490,7 +1490,7 @@ static const struct {
static const char *get_type_str(unsigned int flags)
{
- flags &= LCB_F_MAX_FLAGS - 1;
+ flags &= LCB_F_TYPE_MASK;
for (unsigned int i = 0; i < ARRAY_SIZE(lock_type_table); i++) {
if (lock_type_table[i].flags == flags)
@@ -1501,7 +1501,7 @@ static const char *get_type_str(unsigned int flags)
static const char *get_type_name(unsigned int flags)
{
- flags &= LCB_F_MAX_FLAGS - 1;
+ flags &= LCB_F_TYPE_MASK;
for (unsigned int i = 0; i < ARRAY_SIZE(lock_type_table); i++) {
if (lock_type_table[i].flags == flags)
diff --git a/tools/perf/util/bpf_skel/lock_data.h b/tools/perf/util/bpf_skel/lock_data.h
index de12892f992f8d43..4f0aae5483745dfa 100644
--- a/tools/perf/util/bpf_skel/lock_data.h
+++ b/tools/perf/util/bpf_skel/lock_data.h
@@ -32,7 +32,8 @@ struct contention_task_data {
#define LCD_F_MMAP_LOCK (1U << 31)
#define LCD_F_SIGHAND_LOCK (1U << 30)
-#define LCB_F_MAX_FLAGS (1U << 7)
+#define LCB_F_TYPE_MAX (1U << 7)
+#define LCB_F_TYPE_MASK 0x0000007FU
struct contention_data {
u64 total_time;
--
2.47.1.613.gc27f4b7a9f-goog
* [PATCH v3 2/4] perf lock contention: Run BPF slab cache iterator
From: Namhyung Kim @ 2024-12-20 6:00 UTC
To: Arnaldo Carvalho de Melo, Ian Rogers, Kan Liang
Cc: Jiri Olsa, Adrian Hunter, Peter Zijlstra, Ingo Molnar, LKML,
linux-perf-users, Andrii Nakryiko, Song Liu, bpf,
Stephane Eranian, Vlastimil Babka, Roman Gushchin, Hyeonggon Yoo,
Kees Cook, Chun-Tse Shao
Recently the kernel gained the kmem_cache iterator to traverse metadata
of slab caches. This can be used to symbolize dynamic locks in a slab.
The new slab_caches hash map uses the pointer of the kmem_cache as a
key and saves the name and an ID. The ID is saved in the flags part of
the lock.
Acked-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
tools/perf/util/bpf_lock_contention.c | 50 +++++++++++++++++++
.../perf/util/bpf_skel/lock_contention.bpf.c | 48 ++++++++++++++++++
tools/perf/util/bpf_skel/lock_data.h | 12 +++++
tools/perf/util/bpf_skel/vmlinux/vmlinux.h | 8 +++
4 files changed, 118 insertions(+)
diff --git a/tools/perf/util/bpf_lock_contention.c b/tools/perf/util/bpf_lock_contention.c
index 37e17c56f1064e60..169531d1865264be 100644
--- a/tools/perf/util/bpf_lock_contention.c
+++ b/tools/perf/util/bpf_lock_contention.c
@@ -12,12 +12,59 @@
#include <linux/zalloc.h>
#include <linux/string.h>
#include <bpf/bpf.h>
+#include <bpf/btf.h>
#include <inttypes.h>
#include "bpf_skel/lock_contention.skel.h"
#include "bpf_skel/lock_data.h"
static struct lock_contention_bpf *skel;
+static bool has_slab_iter;
+
+static void check_slab_cache_iter(struct lock_contention *con)
+{
+ struct btf *btf = btf__load_vmlinux_btf();
+ s32 ret;
+
+ if (btf == NULL) {
+ pr_debug("BTF loading failed: %s\n", strerror(errno));
+ return;
+ }
+
+ ret = btf__find_by_name_kind(btf, "bpf_iter__kmem_cache", BTF_KIND_STRUCT);
+ if (ret < 0) {
+ bpf_program__set_autoload(skel->progs.slab_cache_iter, false);
+ pr_debug("slab cache iterator is not available: %d\n", ret);
+ goto out;
+ }
+
+ has_slab_iter = true;
+
+ bpf_map__set_max_entries(skel->maps.slab_caches, con->map_nr_entries);
+out:
+ btf__free(btf);
+}
+
+static void run_slab_cache_iter(void)
+{
+ int fd;
+ char buf[256];
+
+ if (!has_slab_iter)
+ return;
+
+ fd = bpf_iter_create(bpf_link__fd(skel->links.slab_cache_iter));
+ if (fd < 0) {
+ pr_debug("cannot create slab cache iter: %d\n", fd);
+ return;
+ }
+
+ /* This will run the bpf program */
+ while (read(fd, buf, sizeof(buf)) > 0)
+ continue;
+
+ close(fd);
+}
int lock_contention_prepare(struct lock_contention *con)
{
@@ -109,6 +156,8 @@ int lock_contention_prepare(struct lock_contention *con)
skel->rodata->use_cgroup_v2 = 1;
}
+ check_slab_cache_iter(con);
+
if (lock_contention_bpf__load(skel) < 0) {
pr_err("Failed to load lock-contention BPF skeleton\n");
return -1;
@@ -304,6 +353,7 @@ static void account_end_timestamp(struct lock_contention *con)
int lock_contention_start(void)
{
+ run_slab_cache_iter();
skel->bss->enabled = 1;
return 0;
}
diff --git a/tools/perf/util/bpf_skel/lock_contention.bpf.c b/tools/perf/util/bpf_skel/lock_contention.bpf.c
index 1069bda5d733887f..bed446c42561d8bf 100644
--- a/tools/perf/util/bpf_skel/lock_contention.bpf.c
+++ b/tools/perf/util/bpf_skel/lock_contention.bpf.c
@@ -100,6 +100,13 @@ struct {
__uint(max_entries, 1);
} cgroup_filter SEC(".maps");
+struct {
+ __uint(type, BPF_MAP_TYPE_HASH);
+ __uint(key_size, sizeof(long));
+ __uint(value_size, sizeof(struct slab_cache_data));
+ __uint(max_entries, 1);
+} slab_caches SEC(".maps");
+
struct rw_semaphore___old {
struct task_struct *owner;
} __attribute__((preserve_access_index));
@@ -136,6 +143,8 @@ int perf_subsys_id = -1;
__u64 end_ts;
+__u32 slab_cache_id;
+
/* error stat */
int task_fail;
int stack_fail;
@@ -563,4 +572,43 @@ int BPF_PROG(end_timestamp)
return 0;
}
+/*
+ * bpf_iter__kmem_cache added recently so old kernels don't have it in the
+ * vmlinux.h. But we cannot add it here since it will cause a compiler error
+ * due to redefinition of the struct on later kernels.
+ *
+ * So it uses a CO-RE trick to access the member only if it has the type.
+ * This will support both old and new kernels without compiler errors.
+ */
+struct bpf_iter__kmem_cache___new {
+ struct kmem_cache *s;
+} __attribute__((preserve_access_index));
+
+SEC("iter/kmem_cache")
+int slab_cache_iter(void *ctx)
+{
+ struct kmem_cache *s = NULL;
+ struct slab_cache_data d;
+ const char *nameptr;
+
+ if (bpf_core_type_exists(struct bpf_iter__kmem_cache)) {
+ struct bpf_iter__kmem_cache___new *iter = ctx;
+
+ s = BPF_CORE_READ(iter, s);
+ }
+
+ if (s == NULL)
+ return 0;
+
+ nameptr = BPF_CORE_READ(s, name);
+ bpf_probe_read_kernel_str(d.name, sizeof(d.name), nameptr);
+
+ d.id = ++slab_cache_id << LCB_F_SLAB_ID_SHIFT;
+ if (d.id >= LCB_F_SLAB_ID_END)
+ return 0;
+
+ bpf_map_update_elem(&slab_caches, &s, &d, BPF_NOEXIST);
+ return 0;
+}
+
char LICENSE[] SEC("license") = "Dual BSD/GPL";
diff --git a/tools/perf/util/bpf_skel/lock_data.h b/tools/perf/util/bpf_skel/lock_data.h
index 4f0aae5483745dfa..c15f734d7fc4aecb 100644
--- a/tools/perf/util/bpf_skel/lock_data.h
+++ b/tools/perf/util/bpf_skel/lock_data.h
@@ -32,9 +32,16 @@ struct contention_task_data {
#define LCD_F_MMAP_LOCK (1U << 31)
#define LCD_F_SIGHAND_LOCK (1U << 30)
+#define LCB_F_SLAB_ID_SHIFT 16
+#define LCB_F_SLAB_ID_START (1U << 16)
+#define LCB_F_SLAB_ID_END (1U << 26)
+#define LCB_F_SLAB_ID_MASK 0x03FF0000U
+
#define LCB_F_TYPE_MAX (1U << 7)
#define LCB_F_TYPE_MASK 0x0000007FU
+#define SLAB_NAME_MAX 28
+
struct contention_data {
u64 total_time;
u64 min_time;
@@ -55,4 +62,9 @@ enum lock_class_sym {
LOCK_CLASS_RQLOCK,
};
+struct slab_cache_data {
+ u32 id;
+ char name[SLAB_NAME_MAX];
+};
+
#endif /* UTIL_BPF_SKEL_LOCK_DATA_H */
diff --git a/tools/perf/util/bpf_skel/vmlinux/vmlinux.h b/tools/perf/util/bpf_skel/vmlinux/vmlinux.h
index 4dcad7b682bdee9c..7b81d3173917fdb5 100644
--- a/tools/perf/util/bpf_skel/vmlinux/vmlinux.h
+++ b/tools/perf/util/bpf_skel/vmlinux/vmlinux.h
@@ -195,4 +195,12 @@ struct bpf_perf_event_data_kern {
*/
struct rq {};
+struct kmem_cache {
+ const char *name;
+} __attribute__((preserve_access_index));
+
+struct bpf_iter__kmem_cache {
+ struct kmem_cache *s;
+} __attribute__((preserve_access_index));
+
#endif // __VMLINUX_H
--
2.47.1.613.gc27f4b7a9f-goog
* [PATCH v3 3/4] perf lock contention: Resolve slab object name using BPF
From: Namhyung Kim @ 2024-12-20 6:00 UTC
To: Arnaldo Carvalho de Melo, Ian Rogers, Kan Liang
Cc: Jiri Olsa, Adrian Hunter, Peter Zijlstra, Ingo Molnar, LKML,
linux-perf-users, Andrii Nakryiko, Song Liu, bpf,
Stephane Eranian, Vlastimil Babka, Roman Gushchin, Hyeonggon Yoo,
Kees Cook, Chun-Tse Shao
The bpf_get_kmem_cache() kfunc returns the address of the slab cache
(kmem_cache) containing a given address. As the iterator already saved
the name of each slab cache, we can use it to symbolize dynamic kernel
locks in a slab.
Before:
root@virtme-ng:/home/namhyung/project/linux# tools/perf/perf lock con -abl sleep 1
contended total wait max wait avg wait address symbol
2 3.34 us 2.87 us 1.67 us ffff9d7800ad9600 (mutex)
2 2.16 us 1.93 us 1.08 us ffff9d7804b992d8 (mutex)
4 1.37 us 517 ns 343 ns ffff9d78036e6e00 (mutex)
1 1.27 us 1.27 us 1.27 us ffff9d7804b99378 (mutex)
2 845 ns 599 ns 422 ns ffffffff9e1c3620 delayed_uprobe_lock (mutex)
1 845 ns 845 ns 845 ns ffffffff9da0b280 jiffies_lock (spinlock)
2 377 ns 259 ns 188 ns ffffffff9e1cf840 pcpu_alloc_mutex (mutex)
1 305 ns 305 ns 305 ns ffffffff9e1b4cf8 tracepoint_srcu_srcu_usage (mutex)
1 295 ns 295 ns 295 ns ffffffff9e1c0940 pack_mutex (mutex)
1 232 ns 232 ns 232 ns ffff9d7804b7d8d8 (mutex)
1 180 ns 180 ns 180 ns ffffffff9e1b4c28 tracepoint_srcu_srcu_usage (mutex)
1 165 ns 165 ns 165 ns ffffffff9da8b3a0 text_mutex (mutex)
After:
root@virtme-ng:/home/namhyung/project/linux# tools/perf/perf lock con -abl sleep 1
contended total wait max wait avg wait address symbol
2 1.95 us 1.77 us 975 ns ffff9d5e852d3498 &task_struct (mutex)
1 1.18 us 1.18 us 1.18 us ffff9d5e852d3538 &task_struct (mutex)
4 1.12 us 354 ns 279 ns ffff9d5e841ca800 &kmalloc-cg-512 (mutex)
2 859 ns 617 ns 429 ns ffffffffa41c3620 delayed_uprobe_lock (mutex)
3 691 ns 388 ns 230 ns ffffffffa41c0940 pack_mutex (mutex)
3 421 ns 164 ns 140 ns ffffffffa3a8b3a0 text_mutex (mutex)
1 409 ns 409 ns 409 ns ffffffffa41b4cf8 tracepoint_srcu_srcu_usage (mutex)
2 362 ns 239 ns 181 ns ffffffffa41cf840 pcpu_alloc_mutex (mutex)
1 220 ns 220 ns 220 ns ffff9d5e82b534d8 &signal_cache (mutex)
1 215 ns 215 ns 215 ns ffffffffa41b4c28 tracepoint_srcu_srcu_usage (mutex)
Note that the name starts with a '&' sign for slab objects to indicate
they are dynamic locks. It won't give the exact lock or type names, but
it's still useful. We may add type info to the slab cache later to get
the exact name of the lock within the type.
Acked-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
tools/perf/util/bpf_lock_contention.c | 52 +++++++++++++++++++
.../perf/util/bpf_skel/lock_contention.bpf.c | 26 +++++++++-
2 files changed, 76 insertions(+), 2 deletions(-)
diff --git a/tools/perf/util/bpf_lock_contention.c b/tools/perf/util/bpf_lock_contention.c
index 169531d1865264be..a31ace04cb5e7a8f 100644
--- a/tools/perf/util/bpf_lock_contention.c
+++ b/tools/perf/util/bpf_lock_contention.c
@@ -2,6 +2,7 @@
#include "util/cgroup.h"
#include "util/debug.h"
#include "util/evlist.h"
+#include "util/hashmap.h"
#include "util/machine.h"
#include "util/map.h"
#include "util/symbol.h"
@@ -20,12 +21,25 @@
static struct lock_contention_bpf *skel;
static bool has_slab_iter;
+static struct hashmap slab_hash;
+
+static size_t slab_cache_hash(long key, void *ctx __maybe_unused)
+{
+ return key;
+}
+
+static bool slab_cache_equal(long key1, long key2, void *ctx __maybe_unused)
+{
+ return key1 == key2;
+}
static void check_slab_cache_iter(struct lock_contention *con)
{
struct btf *btf = btf__load_vmlinux_btf();
s32 ret;
+ hashmap__init(&slab_hash, slab_cache_hash, slab_cache_equal, /*ctx=*/NULL);
+
if (btf == NULL) {
pr_debug("BTF loading failed: %s\n", strerror(errno));
return;
@@ -49,6 +63,7 @@ static void run_slab_cache_iter(void)
{
int fd;
char buf[256];
+ long key, *prev_key;
if (!has_slab_iter)
return;
@@ -64,6 +79,34 @@ static void run_slab_cache_iter(void)
continue;
close(fd);
+
+ /* Read the slab cache map and build a hash with IDs */
+ fd = bpf_map__fd(skel->maps.slab_caches);
+ prev_key = NULL;
+ while (!bpf_map_get_next_key(fd, prev_key, &key)) {
+ struct slab_cache_data *data;
+
+ data = malloc(sizeof(*data));
+ if (data == NULL)
+ break;
+
+ if (bpf_map_lookup_elem(fd, &key, data) < 0)
+ break;
+
+ hashmap__add(&slab_hash, data->id, data);
+ prev_key = &key;
+ }
+}
+
+static void exit_slab_cache_iter(void)
+{
+ struct hashmap_entry *cur;
+ unsigned bkt;
+
+ hashmap__for_each_entry(&slab_hash, cur, bkt)
+ free(cur->pvalue);
+
+ hashmap__clear(&slab_hash);
}
int lock_contention_prepare(struct lock_contention *con)
@@ -397,6 +440,7 @@ static const char *lock_contention_get_name(struct lock_contention *con,
if (con->aggr_mode == LOCK_AGGR_ADDR) {
int lock_fd = bpf_map__fd(skel->maps.lock_syms);
+ struct slab_cache_data *slab_data;
/* per-process locks set upper bits of the flags */
if (flags & LCD_F_MMAP_LOCK)
@@ -415,6 +459,12 @@ static const char *lock_contention_get_name(struct lock_contention *con,
return "rq_lock";
}
+ /* look slab_hash for dynamic locks in a slab object */
+ if (hashmap__find(&slab_hash, flags & LCB_F_SLAB_ID_MASK, &slab_data)) {
+ snprintf(name_buf, sizeof(name_buf), "&%s", slab_data->name);
+ return name_buf;
+ }
+
return "";
}
@@ -589,5 +639,7 @@ int lock_contention_finish(struct lock_contention *con)
cgroup__put(cgrp);
}
+ exit_slab_cache_iter();
+
return 0;
}
diff --git a/tools/perf/util/bpf_skel/lock_contention.bpf.c b/tools/perf/util/bpf_skel/lock_contention.bpf.c
index bed446c42561d8bf..7182eb559496e34e 100644
--- a/tools/perf/util/bpf_skel/lock_contention.bpf.c
+++ b/tools/perf/util/bpf_skel/lock_contention.bpf.c
@@ -123,6 +123,8 @@ struct mm_struct___new {
struct rw_semaphore mmap_lock;
} __attribute__((preserve_access_index));
+extern struct kmem_cache *bpf_get_kmem_cache(u64 addr) __ksym __weak;
+
/* control flags */
const volatile int has_cpu;
const volatile int has_task;
@@ -496,8 +498,28 @@ int contention_end(u64 *ctx)
};
int err;
- if (aggr_mode == LOCK_AGGR_ADDR)
- first.flags |= check_lock_type(pelem->lock, pelem->flags);
+ if (aggr_mode == LOCK_AGGR_ADDR) {
+ first.flags |= check_lock_type(pelem->lock,
+ pelem->flags & LCB_F_TYPE_MASK);
+
+ /* Check if it's from a slab object */
+ if (bpf_get_kmem_cache) {
+ struct kmem_cache *s;
+ struct slab_cache_data *d;
+
+ s = bpf_get_kmem_cache(pelem->lock);
+ if (s != NULL) {
+ /*
+ * Save the ID of the slab cache in the flags
+ * (instead of full address) to reduce the
+ * space in the contention_data.
+ */
+ d = bpf_map_lookup_elem(&slab_caches, &s);
+ if (d != NULL)
+ first.flags |= d->id;
+ }
+ }
+ }
err = bpf_map_update_elem(&lock_stat, &key, &first, BPF_NOEXIST);
if (err < 0) {
--
2.47.1.613.gc27f4b7a9f-goog
* [PATCH v3 4/4] perf lock contention: Handle slab objects in -L/--lock-filter option
From: Namhyung Kim @ 2024-12-20 6:00 UTC
To: Arnaldo Carvalho de Melo, Ian Rogers, Kan Liang
Cc: Jiri Olsa, Adrian Hunter, Peter Zijlstra, Ingo Molnar, LKML,
linux-perf-users, Andrii Nakryiko, Song Liu, bpf,
Stephane Eranian, Vlastimil Babka, Roman Gushchin, Hyeonggon Yoo,
Kees Cook, Chun-Tse Shao
This allows filtering lock contention from specific slab objects only.
As in the lock symbol output, the '&' prefix is used to match slab
object names.
root@virtme-ng:/home/namhyung/project/linux# tools/perf/perf lock con -abl sleep 1
contended total wait max wait avg wait address symbol
3 14.99 us 14.44 us 5.00 us ffffffff851c0940 pack_mutex (mutex)
2 2.75 us 2.56 us 1.38 us ffff98d7031fb498 &task_struct (mutex)
4 1.42 us 557 ns 355 ns ffff98d706311400 &kmalloc-cg-512 (mutex)
2 953 ns 714 ns 476 ns ffffffff851c3620 delayed_uprobe_lock (mutex)
1 929 ns 929 ns 929 ns ffff98d7031fb538 &task_struct (mutex)
3 561 ns 210 ns 187 ns ffffffff84a8b3a0 text_mutex (mutex)
1 479 ns 479 ns 479 ns ffffffff851b4cf8 tracepoint_srcu_srcu_usage (mutex)
2 320 ns 195 ns 160 ns ffffffff851cf840 pcpu_alloc_mutex (mutex)
1 212 ns 212 ns 212 ns ffff98d7031784d8 &signal_cache (mutex)
1 177 ns 177 ns 177 ns ffffffff851b4c28 tracepoint_srcu_srcu_usage (mutex)
With the filter, it shows contention from task_struct objects only.
root@virtme-ng:/home/namhyung/project/linux# tools/perf/perf lock con -abl -L '&task_struct' sleep 1
contended total wait max wait avg wait address symbol
2 1.97 us 1.71 us 987 ns ffff98d7032fd658 &task_struct (mutex)
1 1.20 us 1.20 us 1.20 us ffff98d7032fd6f8 &task_struct (mutex)
It also works with other aggregation modes:
root@virtme-ng:/home/namhyung/project/linux# tools/perf/perf lock con -ab -L '&task_struct' sleep 1
contended total wait max wait avg wait type caller
1 25.10 us 25.10 us 25.10 us mutex perf_event_exit_task+0x39
1 21.60 us 21.60 us 21.60 us mutex futex_exit_release+0x21
1 5.56 us 5.56 us 5.56 us mutex futex_exec_release+0x21
Acked-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
tools/perf/builtin-lock.c | 35 ++++++++++++++++
tools/perf/util/bpf_lock_contention.c | 40 ++++++++++++++++++-
.../perf/util/bpf_skel/lock_contention.bpf.c | 21 +++++++++-
tools/perf/util/lock-contention.h | 2 +
4 files changed, 95 insertions(+), 3 deletions(-)
diff --git a/tools/perf/builtin-lock.c b/tools/perf/builtin-lock.c
index d9f3477d2b02b612..208c482daa56ef93 100644
--- a/tools/perf/builtin-lock.c
+++ b/tools/perf/builtin-lock.c
@@ -1539,6 +1539,12 @@ static void lock_filter_finish(void)
zfree(&filters.cgrps);
filters.nr_cgrps = 0;
+
+ for (int i = 0; i < filters.nr_slabs; i++)
+ free(filters.slabs[i]);
+
+ zfree(&filters.slabs);
+ filters.nr_slabs = 0;
}
static void sort_contention_result(void)
@@ -2305,6 +2311,27 @@ static bool add_lock_sym(char *name)
return true;
}
+static bool add_lock_slab(char *name)
+{
+ char **tmp;
+ char *sym = strdup(name);
+
+ if (sym == NULL) {
+ pr_err("Memory allocation failure\n");
+ return false;
+ }
+
+ tmp = realloc(filters.slabs, (filters.nr_slabs + 1) * sizeof(*filters.slabs));
+ if (tmp == NULL) {
+ pr_err("Memory allocation failure\n");
+ return false;
+ }
+
+ tmp[filters.nr_slabs++] = sym;
+ filters.slabs = tmp;
+ return true;
+}
+
static int parse_lock_addr(const struct option *opt __maybe_unused, const char *str,
int unset __maybe_unused)
{
@@ -2328,6 +2355,14 @@ static int parse_lock_addr(const struct option *opt __maybe_unused, const char *
continue;
}
+ if (*tok == '&') {
+ if (!add_lock_slab(tok + 1)) {
+ ret = -1;
+ break;
+ }
+ continue;
+ }
+
/*
* At this moment, we don't have kernel symbols. Save the symbols
* in a separate list and resolve them to addresses later.
diff --git a/tools/perf/util/bpf_lock_contention.c b/tools/perf/util/bpf_lock_contention.c
index a31ace04cb5e7a8f..fc8666222399c995 100644
--- a/tools/perf/util/bpf_lock_contention.c
+++ b/tools/perf/util/bpf_lock_contention.c
@@ -112,7 +112,7 @@ static void exit_slab_cache_iter(void)
int lock_contention_prepare(struct lock_contention *con)
{
int i, fd;
- int ncpus = 1, ntasks = 1, ntypes = 1, naddrs = 1, ncgrps = 1;
+ int ncpus = 1, ntasks = 1, ntypes = 1, naddrs = 1, ncgrps = 1, nslabs = 1;
struct evlist *evlist = con->evlist;
struct target *target = con->target;
@@ -201,6 +201,13 @@ int lock_contention_prepare(struct lock_contention *con)
check_slab_cache_iter(con);
+ if (con->filters->nr_slabs && has_slab_iter) {
+ skel->rodata->has_slab = 1;
+ nslabs = con->filters->nr_slabs;
+ }
+
+ bpf_map__set_max_entries(skel->maps.slab_filter, nslabs);
+
if (lock_contention_bpf__load(skel) < 0) {
pr_err("Failed to load lock-contention BPF skeleton\n");
return -1;
@@ -271,6 +278,36 @@ int lock_contention_prepare(struct lock_contention *con)
bpf_program__set_autoload(skel->progs.collect_lock_syms, false);
lock_contention_bpf__attach(skel);
+
+ /* run the slab iterator after attaching */
+ run_slab_cache_iter();
+
+ if (con->filters->nr_slabs) {
+ u8 val = 1;
+ int cache_fd;
+ long key, *prev_key;
+
+ fd = bpf_map__fd(skel->maps.slab_filter);
+
+ /* Read the slab cache map and build a hash with its address */
+ cache_fd = bpf_map__fd(skel->maps.slab_caches);
+ prev_key = NULL;
+ while (!bpf_map_get_next_key(cache_fd, prev_key, &key)) {
+ struct slab_cache_data data;
+
+ if (bpf_map_lookup_elem(cache_fd, &key, &data) < 0)
+ break;
+
+ for (i = 0; i < con->filters->nr_slabs; i++) {
+ if (!strcmp(con->filters->slabs[i], data.name)) {
+ bpf_map_update_elem(fd, &key, &val, BPF_ANY);
+ break;
+ }
+ }
+ prev_key = &key;
+ }
+ }
+
return 0;
}
@@ -396,7 +433,6 @@ static void account_end_timestamp(struct lock_contention *con)
int lock_contention_start(void)
{
- run_slab_cache_iter();
skel->bss->enabled = 1;
return 0;
}
diff --git a/tools/perf/util/bpf_skel/lock_contention.bpf.c b/tools/perf/util/bpf_skel/lock_contention.bpf.c
index 7182eb559496e34e..6c771ef751d83b43 100644
--- a/tools/perf/util/bpf_skel/lock_contention.bpf.c
+++ b/tools/perf/util/bpf_skel/lock_contention.bpf.c
@@ -100,6 +100,13 @@ struct {
__uint(max_entries, 1);
} cgroup_filter SEC(".maps");
+struct {
+ __uint(type, BPF_MAP_TYPE_HASH);
+ __uint(key_size, sizeof(long));
+ __uint(value_size, sizeof(__u8));
+ __uint(max_entries, 1);
+} slab_filter SEC(".maps");
+
struct {
__uint(type, BPF_MAP_TYPE_HASH);
__uint(key_size, sizeof(long));
@@ -131,6 +138,7 @@ const volatile int has_task;
const volatile int has_type;
const volatile int has_addr;
const volatile int has_cgroup;
+const volatile int has_slab;
const volatile int needs_callstack;
const volatile int stack_skip;
const volatile int lock_owner;
@@ -213,7 +221,7 @@ static inline int can_record(u64 *ctx)
__u64 addr = ctx[0];
ok = bpf_map_lookup_elem(&addr_filter, &addr);
- if (!ok)
+ if (!ok && !has_slab)
return 0;
}
@@ -226,6 +234,17 @@ static inline int can_record(u64 *ctx)
return 0;
}
+ if (has_slab && bpf_get_kmem_cache) {
+ __u8 *ok;
+ __u64 addr = ctx[0];
+ long kmem_cache_addr;
+
+ kmem_cache_addr = (long)bpf_get_kmem_cache(addr);
+ ok = bpf_map_lookup_elem(&slab_filter, &kmem_cache_addr);
+ if (!ok)
+ return 0;
+ }
+
return 1;
}
diff --git a/tools/perf/util/lock-contention.h b/tools/perf/util/lock-contention.h
index bd71fb73825aa8e1..a09f7fe877df8184 100644
--- a/tools/perf/util/lock-contention.h
+++ b/tools/perf/util/lock-contention.h
@@ -10,10 +10,12 @@ struct lock_filter {
int nr_addrs;
int nr_syms;
int nr_cgrps;
+ int nr_slabs;
unsigned int *types;
unsigned long *addrs;
char **syms;
u64 *cgrps;
+ char **slabs;
};
struct lock_stat {
--
2.47.1.613.gc27f4b7a9f-goog
* Re: [PATCH v3 0/4] perf lock contention: Symbolize locks using slab cache names
From: Arnaldo Carvalho de Melo @ 2024-12-20 19:20 UTC
To: Namhyung Kim
Cc: Ian Rogers, Kan Liang, Jiri Olsa, Adrian Hunter, Peter Zijlstra,
Ingo Molnar, LKML, linux-perf-users, Andrii Nakryiko, Song Liu,
bpf, Stephane Eranian, Vlastimil Babka, Roman Gushchin,
Hyeonggon Yoo, Kees Cook, Chun-Tse Shao
On Thu, Dec 19, 2024 at 10:00:05PM -0800, Namhyung Kim wrote:
> Hello,
>
> This is to support symbolization of dynamic locks using slab
> allocator's metadata. The kernel support is merged to v6.13.
>
> It provides the new "kmem_cache" BPF iterator and "bpf_get_kmem_cache"
> kfunc to get the information from an address. The feature detection is
> done using BTF type info and it won't have any effect on old kernels.
>
> v3 changes)
>
> * fix build error with GEN_VMLINUX_H=1 (Arnaldo)
Thanks, applied to perf-tools-next,
- Arnaldo
* Re: [PATCH v3 2/4] perf lock contention: Run BPF slab cache iterator
2024-12-20 6:00 ` [PATCH v3 2/4] perf lock contention: Run BPF slab cache iterator Namhyung Kim
@ 2024-12-20 23:52 ` Alexei Starovoitov
2024-12-21 23:55 ` Namhyung Kim
0 siblings, 1 reply; 9+ messages in thread
From: Alexei Starovoitov @ 2024-12-20 23:52 UTC (permalink / raw)
To: Namhyung Kim
Cc: Arnaldo Carvalho de Melo, Ian Rogers, Kan Liang, Jiri Olsa,
Adrian Hunter, Peter Zijlstra, Ingo Molnar, LKML, linux-perf-use.,
Andrii Nakryiko, Song Liu, bpf, Stephane Eranian, Vlastimil Babka,
Roman Gushchin, Hyeonggon Yoo, Kees Cook, Chun-Tse Shao
On Thu, Dec 19, 2024 at 10:01 PM Namhyung Kim <namhyung@kernel.org> wrote:
> +struct bpf_iter__kmem_cache___new {
> + struct kmem_cache *s;
> +} __attribute__((preserve_access_index));
> +
> +SEC("iter/kmem_cache")
> +int slab_cache_iter(void *ctx)
> +{
> + struct kmem_cache *s = NULL;
> + struct slab_cache_data d;
> + const char *nameptr;
> +
> + if (bpf_core_type_exists(struct bpf_iter__kmem_cache)) {
> + struct bpf_iter__kmem_cache___new *iter = ctx;
> +
> + s = BPF_CORE_READ(iter, s);
> + }
> +
> + if (s == NULL)
> + return 0;
> +
> + nameptr = BPF_CORE_READ(s, name);
since the feature depends on the latest kernel please use
direct access. There is no need to use BPF_CORE_READ() to
be compatible with old kernels.
Just iter->s and s->name will work and will be much faster.
Underneath these loads will be marked with PROBE_MEM flag and
will be equivalent to probe_read_kernel calls, but faster
since the whole thing will be inlined by JITs.
* Re: [PATCH v3 2/4] perf lock contention: Run BPF slab cache iterator
2024-12-20 23:52 ` Alexei Starovoitov
@ 2024-12-21 23:55 ` Namhyung Kim
2024-12-23 16:38 ` Arnaldo Carvalho de Melo
0 siblings, 1 reply; 9+ messages in thread
From: Namhyung Kim @ 2024-12-21 23:55 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: Arnaldo Carvalho de Melo, Ian Rogers, Kan Liang, Jiri Olsa,
Adrian Hunter, Peter Zijlstra, Ingo Molnar, LKML, linux-perf-use.,
Andrii Nakryiko, Song Liu, bpf, Stephane Eranian, Vlastimil Babka,
Roman Gushchin, Hyeonggon Yoo, Kees Cook, Chun-Tse Shao
Hi Alexei,
On Fri, Dec 20, 2024 at 03:52:36PM -0800, Alexei Starovoitov wrote:
> On Thu, Dec 19, 2024 at 10:01 PM Namhyung Kim <namhyung@kernel.org> wrote:
> > +struct bpf_iter__kmem_cache___new {
> > + struct kmem_cache *s;
> > +} __attribute__((preserve_access_index));
> > +
> > +SEC("iter/kmem_cache")
> > +int slab_cache_iter(void *ctx)
> > +{
> > + struct kmem_cache *s = NULL;
> > + struct slab_cache_data d;
> > + const char *nameptr;
> > +
> > + if (bpf_core_type_exists(struct bpf_iter__kmem_cache)) {
> > + struct bpf_iter__kmem_cache___new *iter = ctx;
> > +
> > + s = BPF_CORE_READ(iter, s);
> > + }
> > +
> > + if (s == NULL)
> > + return 0;
> > +
> > + nameptr = BPF_CORE_READ(s, name);
>
> since the feature depends on the latest kernel please use
> direct access. There is no need to use BPF_CORE_READ() to
> be compatible with old kernels.
> Just iter->s and s->name will work and will be much faster.
> Underneath these loads will be marked with PROBE_MEM flag and
> will be equivalent to probe_read_kernel calls, but faster
> since the whole thing will be inlined by JITs.
Oh, thanks for your review. I thought it was required, but it'd
definitely be better if we can access them directly. I'll fold
the change below into v4, unless Arnaldo does it first. :)
Thanks,
Namhyung
---8<---
diff --git a/tools/perf/util/bpf_skel/lock_contention.bpf.c b/tools/perf/util/bpf_skel/lock_contention.bpf.c
index 6c771ef751d83b43..6533ea9b044c71d1 100644
--- a/tools/perf/util/bpf_skel/lock_contention.bpf.c
+++ b/tools/perf/util/bpf_skel/lock_contention.bpf.c
@@ -635,13 +635,13 @@ int slab_cache_iter(void *ctx)
if (bpf_core_type_exists(struct bpf_iter__kmem_cache)) {
struct bpf_iter__kmem_cache___new *iter = ctx;
- s = BPF_CORE_READ(iter, s);
+ s = iter->s;
}
if (s == NULL)
return 0;
- nameptr = BPF_CORE_READ(s, name);
+ nameptr = s->name;
bpf_probe_read_kernel_str(d.name, sizeof(d.name), nameptr);
d.id = ++slab_cache_id << LCB_F_SLAB_ID_SHIFT;
* Re: [PATCH v3 2/4] perf lock contention: Run BPF slab cache iterator
2024-12-21 23:55 ` Namhyung Kim
@ 2024-12-23 16:38 ` Arnaldo Carvalho de Melo
0 siblings, 0 replies; 9+ messages in thread
From: Arnaldo Carvalho de Melo @ 2024-12-23 16:38 UTC (permalink / raw)
To: Namhyung Kim
Cc: Alexei Starovoitov, Ian Rogers, Kan Liang, Jiri Olsa,
Adrian Hunter, Peter Zijlstra, Ingo Molnar, LKML, linux-perf-use.,
Andrii Nakryiko, Song Liu, bpf, Stephane Eranian, Vlastimil Babka,
Roman Gushchin, Hyeonggon Yoo, Kees Cook, Chun-Tse Shao
On Sat, Dec 21, 2024 at 03:55:32PM -0800, Namhyung Kim wrote:
> Hi Alexei,
>
> On Fri, Dec 20, 2024 at 03:52:36PM -0800, Alexei Starovoitov wrote:
> > On Thu, Dec 19, 2024 at 10:01 PM Namhyung Kim <namhyung@kernel.org> wrote:
> > > +struct bpf_iter__kmem_cache___new {
> > > + struct kmem_cache *s;
> > > +} __attribute__((preserve_access_index));
> > > +
> > > +SEC("iter/kmem_cache")
> > > +int slab_cache_iter(void *ctx)
> > > +{
> > > + struct kmem_cache *s = NULL;
> > > + struct slab_cache_data d;
> > > + const char *nameptr;
> > > +
> > > + if (bpf_core_type_exists(struct bpf_iter__kmem_cache)) {
> > > + struct bpf_iter__kmem_cache___new *iter = ctx;
> > > +
> > > + s = BPF_CORE_READ(iter, s);
> > > + }
> > > +
> > > + if (s == NULL)
> > > + return 0;
> > > +
> > > + nameptr = BPF_CORE_READ(s, name);
> >
> > since the feature depends on the latest kernel please use
> > direct access. There is no need to use BPF_CORE_READ() to
> > be compatible with old kernels.
> > Just iter->s and s->name will work and will be much faster.
> > Underneath these loads will be marked with PROBE_MEM flag and
> > will be equivalent to probe_read_kernel calls, but faster
> > since the whole thing will be inlined by JITs.
>
> Oh, thanks for your review. I thought it was required, but it'd
> definitely be better if we can access them directly. I'll fold
> the change below into v4, unless Arnaldo does it first. :)
I'll check and adjust, thanks everybody :-)
- Arnaldo
> Thanks,
> Namhyung
>
>
> ---8<---
> diff --git a/tools/perf/util/bpf_skel/lock_contention.bpf.c b/tools/perf/util/bpf_skel/lock_contention.bpf.c
> index 6c771ef751d83b43..6533ea9b044c71d1 100644
> --- a/tools/perf/util/bpf_skel/lock_contention.bpf.c
> +++ b/tools/perf/util/bpf_skel/lock_contention.bpf.c
> @@ -635,13 +635,13 @@ int slab_cache_iter(void *ctx)
> if (bpf_core_type_exists(struct bpf_iter__kmem_cache)) {
> struct bpf_iter__kmem_cache___new *iter = ctx;
>
> - s = BPF_CORE_READ(iter, s);
> + s = iter->s;
> }
>
> if (s == NULL)
> return 0;
>
> - nameptr = BPF_CORE_READ(s, name);
> + nameptr = s->name;
> bpf_probe_read_kernel_str(d.name, sizeof(d.name), nameptr);
>
> d.id = ++slab_cache_id << LCB_F_SLAB_ID_SHIFT;