* [PATCH 0/4] perf lock contention: Symbolize locks using slab cache names (v1)
From: Namhyung Kim @ 2024-11-05 17:26 UTC
To: Arnaldo Carvalho de Melo, Ian Rogers, Kan Liang
Cc: Jiri Olsa, Adrian Hunter, Peter Zijlstra, Ingo Molnar, LKML,
linux-perf-users, Song Liu, bpf, Stephane Eranian,
Vlastimil Babka, Kees Cook, Roman Gushchin, Hyeonggon Yoo
Hello,
This series supports symbolization of dynamic locks using the slab
allocator's metadata. The kernel support is in the bpf-next tree now.
It provides the new "kmem_cache" BPF iterator and the "bpf_get_kmem_cache"
kfunc to get the information from an address. Feature detection is done
using BTF type info, so it has no effect on old kernels.
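For reference, the detection amounts to a vmlinux BTF lookup along these
lines (a minimal sketch using libbpf 1.0+; the helper name is
illustrative, mirroring what patch 2 does):

    #include <stdbool.h>
    #include <bpf/btf.h>

    static bool kernel_has_kmem_cache_iter(void)
    {
            struct btf *btf = btf__load_vmlinux_btf();
            bool found;

            if (btf == NULL)    /* no vmlinux BTF: treat as unsupported */
                    return false;

            /* this struct only exists on kernels with the iterator */
            found = btf__find_by_name_kind(btf, "bpf_iter__kmem_cache",
                                           BTF_KIND_STRUCT) >= 0;
            btf__free(btf);
            return found;
    }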
With this change, it can show locks in slab objects as below. A "&"
sign is prepended to the cache name to distinguish them from global locks.
# perf lock con -abl sleep 1
contended   total wait     max wait     avg wait           address   symbol
        2      1.95 us      1.77 us       975 ns  ffff9d5e852d3498   &task_struct (mutex)
        1      1.18 us      1.18 us      1.18 us  ffff9d5e852d3538   &task_struct (mutex)
        4      1.12 us       354 ns       279 ns  ffff9d5e841ca800   &kmalloc-cg-512 (mutex)
        2       859 ns       617 ns       429 ns  ffffffffa41c3620   delayed_uprobe_lock (mutex)
        3       691 ns       388 ns       230 ns  ffffffffa41c0940   pack_mutex (mutex)
        3       421 ns       164 ns       140 ns  ffffffffa3a8b3a0   text_mutex (mutex)
        1       409 ns       409 ns       409 ns  ffffffffa41b4cf8   tracepoint_srcu_srcu_usage (mutex)
        2       362 ns       239 ns       181 ns  ffffffffa41cf840   pcpu_alloc_mutex (mutex)
        1       220 ns       220 ns       220 ns  ffff9d5e82b534d8   &signal_cache (mutex)
        1       215 ns       215 ns       215 ns  ffffffffa41b4c28   tracepoint_srcu_srcu_usage (mutex)
The first two entries are from the "task_struct" slab cache. The cache
name happens to match the type name of the object, but there is no
guarantee; we would need to add type info to the slab cache to resolve
the lock inside the object. The third entry has no dedicated slab cache
and was allocated by kmalloc.
These slab cache names can also be used to filter specific locks with
the -L/--lock-filter option.
# perf lock con -ab -L '&task_struct' sleep 1
contended   total wait     max wait     avg wait  type   caller
        1     25.10 us     25.10 us     25.10 us  mutex  perf_event_exit_task+0x39
        1     21.60 us     21.60 us     21.60 us  mutex  futex_exit_release+0x21
        1      5.56 us      5.56 us      5.56 us  mutex  futex_exec_release+0x21
The code is available in the 'perf/lock-slab-v1' branch of my tree:
git://git.kernel.org/pub/scm/linux/kernel/git/namhyung/linux-perf.git
Thanks,
Namhyung
Namhyung Kim (4):
perf lock contention: Add and use LCB_F_TYPE_MASK
perf lock contention: Run BPF slab cache iterator
perf lock contention: Resolve slab object name using BPF
perf lock contention: Handle slab objects in -L/--lock-filter option
tools/perf/builtin-lock.c | 39 ++++-
tools/perf/util/bpf_lock_contention.c | 141 +++++++++++++++++-
.../perf/util/bpf_skel/lock_contention.bpf.c | 70 ++++++++-
tools/perf/util/bpf_skel/lock_data.h | 15 +-
tools/perf/util/bpf_skel/vmlinux/vmlinux.h | 8 +
tools/perf/util/lock-contention.h | 2 +
6 files changed, 268 insertions(+), 7 deletions(-)
--
2.47.0.199.ga7371fff76-goog
* [PATCH 1/4] perf lock contention: Add and use LCB_F_TYPE_MASK
From: Namhyung Kim @ 2024-11-05 17:26 UTC
To: Arnaldo Carvalho de Melo, Ian Rogers, Kan Liang
Cc: Jiri Olsa, Adrian Hunter, Peter Zijlstra, Ingo Molnar, LKML,
linux-perf-users, Song Liu, bpf, Stephane Eranian,
Vlastimil Babka, Kees Cook, Roman Gushchin, Hyeonggon Yoo
This is a preparation for a later change that will use more bits in the
flags. Rename the type part accordingly and use the mask to extract the
type.
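For clarity, the two expressions select the same bits today, so behavior
is unchanged; a sketch of the equivalence (not part of the patch):

    /* (LCB_F_MAX_FLAGS - 1) == (1U << 7) - 1 == 0x7F == LCB_F_TYPE_MASK,
     * i.e. bits 0..6 still carry the lock type, and the bits above
     * become available for new flag fields.
     */
    unsigned int type = flags & LCB_F_TYPE_MASK;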
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
tools/perf/builtin-lock.c | 4 ++--
tools/perf/util/bpf_skel/lock_data.h | 3 ++-
2 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/tools/perf/builtin-lock.c b/tools/perf/builtin-lock.c
index 062e2b56a2ab570e..89ee2a2f78603906 100644
--- a/tools/perf/builtin-lock.c
+++ b/tools/perf/builtin-lock.c
@@ -1597,7 +1597,7 @@ static const struct {
static const char *get_type_str(unsigned int flags)
{
- flags &= LCB_F_MAX_FLAGS - 1;
+ flags &= LCB_F_TYPE_MASK;
for (unsigned int i = 0; i < ARRAY_SIZE(lock_type_table); i++) {
if (lock_type_table[i].flags == flags)
@@ -1608,7 +1608,7 @@ static const char *get_type_str(unsigned int flags)
static const char *get_type_name(unsigned int flags)
{
- flags &= LCB_F_MAX_FLAGS - 1;
+ flags &= LCB_F_TYPE_MASK;
for (unsigned int i = 0; i < ARRAY_SIZE(lock_type_table); i++) {
if (lock_type_table[i].flags == flags)
diff --git a/tools/perf/util/bpf_skel/lock_data.h b/tools/perf/util/bpf_skel/lock_data.h
index de12892f992f8d43..4f0aae5483745dfa 100644
--- a/tools/perf/util/bpf_skel/lock_data.h
+++ b/tools/perf/util/bpf_skel/lock_data.h
@@ -32,7 +32,8 @@ struct contention_task_data {
#define LCD_F_MMAP_LOCK (1U << 31)
#define LCD_F_SIGHAND_LOCK (1U << 30)
-#define LCB_F_MAX_FLAGS (1U << 7)
+#define LCB_F_TYPE_MAX (1U << 7)
+#define LCB_F_TYPE_MASK 0x0000007FU
struct contention_data {
u64 total_time;
--
2.47.0.199.ga7371fff76-goog
* [PATCH 2/4] perf lock contention: Run BPF slab cache iterator
From: Namhyung Kim @ 2024-11-05 17:26 UTC
To: Arnaldo Carvalho de Melo, Ian Rogers, Kan Liang
Cc: Jiri Olsa, Adrian Hunter, Peter Zijlstra, Ingo Molnar, LKML,
linux-perf-users, Song Liu, bpf, Stephane Eranian,
Vlastimil Babka, Kees Cook, Roman Gushchin, Hyeonggon Yoo
Recently the kernel got the kmem_cache iterator to traverse metadata of
slab caches. This can be used to symbolize dynamic locks in a slab.

The new slab_caches hash map has the pointer of the kmem_cache as a key
and saves the name and an ID. The ID is saved in the flags part of the
lock stat.
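A sketch of the resulting flags layout, derived from the masks added to
lock_data.h in the diff below (the snippet itself is illustrative, not
part of the patch):

    /*
     * contention_data->flags:
     *   bits  0..6   lock type        (LCB_F_TYPE_MASK    0x0000007F)
     *   bits 16..25  slab cache ID    (LCB_F_SLAB_ID_MASK 0x03FF0000)
     *   bits 30..31  LCD_F_* per-process lock markers
     *
     * The ID stored by the iterator is pre-shifted, so a consumer can
     * use the masked value directly as a hash key:
     */
    unsigned int slab_id = flags & LCB_F_SLAB_ID_MASK;  /* 0: not a slab lock */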
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
tools/perf/util/bpf_lock_contention.c | 51 +++++++++++++++++++
.../perf/util/bpf_skel/lock_contention.bpf.c | 28 ++++++++++
tools/perf/util/bpf_skel/lock_data.h | 12 +++++
tools/perf/util/bpf_skel/vmlinux/vmlinux.h | 8 +++
4 files changed, 99 insertions(+)
diff --git a/tools/perf/util/bpf_lock_contention.c b/tools/perf/util/bpf_lock_contention.c
index 41a1ad08789511c3..a2efd40897bad316 100644
--- a/tools/perf/util/bpf_lock_contention.c
+++ b/tools/perf/util/bpf_lock_contention.c
@@ -12,12 +12,60 @@
#include <linux/zalloc.h>
#include <linux/string.h>
#include <bpf/bpf.h>
+#include <bpf/btf.h>
#include <inttypes.h>
#include "bpf_skel/lock_contention.skel.h"
#include "bpf_skel/lock_data.h"
static struct lock_contention_bpf *skel;
+static bool has_slab_iter;
+
+static void check_slab_cache_iter(struct lock_contention *con)
+{
+ struct btf *btf = btf__load_vmlinux_btf();
+ s32 ret;
+
+ ret = libbpf_get_error(btf);
+ if (ret) {
+ pr_debug("BTF loading failed: %d\n", ret);
+ return;
+ }
+
+ ret = btf__find_by_name_kind(btf, "bpf_iter__kmem_cache", BTF_KIND_STRUCT);
+ if (ret < 0) {
+ bpf_program__set_autoload(skel->progs.slab_cache_iter, false);
+ pr_debug("slab cache iterator is not available: %d\n", ret);
+ goto out;
+ }
+
+ has_slab_iter = true;
+
+ bpf_map__set_max_entries(skel->maps.slab_caches, con->map_nr_entries);
+out:
+ btf__free(btf);
+}
+
+static void run_slab_cache_iter(void)
+{
+ int fd;
+ char buf[256];
+
+ if (!has_slab_iter)
+ return;
+
+ fd = bpf_iter_create(bpf_link__fd(skel->links.slab_cache_iter));
+ if (fd < 0) {
+ pr_debug("cannot create slab cache iter: %d\n", fd);
+ return;
+ }
+
+ /* This will run the bpf program */
+ while (read(fd, buf, sizeof(buf)) > 0)
+ continue;
+
+ close(fd);
+}
int lock_contention_prepare(struct lock_contention *con)
{
@@ -109,6 +157,8 @@ int lock_contention_prepare(struct lock_contention *con)
skel->rodata->use_cgroup_v2 = 1;
}
+ check_slab_cache_iter(con);
+
if (lock_contention_bpf__load(skel) < 0) {
pr_err("Failed to load lock-contention BPF skeleton\n");
return -1;
@@ -304,6 +354,7 @@ static void account_end_timestamp(struct lock_contention *con)
int lock_contention_start(void)
{
+ run_slab_cache_iter();
skel->bss->enabled = 1;
return 0;
}
diff --git a/tools/perf/util/bpf_skel/lock_contention.bpf.c b/tools/perf/util/bpf_skel/lock_contention.bpf.c
index 1069bda5d733887f..fd24ccb00faec0ba 100644
--- a/tools/perf/util/bpf_skel/lock_contention.bpf.c
+++ b/tools/perf/util/bpf_skel/lock_contention.bpf.c
@@ -100,6 +100,13 @@ struct {
__uint(max_entries, 1);
} cgroup_filter SEC(".maps");
+struct {
+ __uint(type, BPF_MAP_TYPE_HASH);
+ __uint(key_size, sizeof(long));
+ __uint(value_size, sizeof(struct slab_cache_data));
+ __uint(max_entries, 1);
+} slab_caches SEC(".maps");
+
struct rw_semaphore___old {
struct task_struct *owner;
} __attribute__((preserve_access_index));
@@ -136,6 +143,8 @@ int perf_subsys_id = -1;
__u64 end_ts;
+__u32 slab_cache_id;
+
/* error stat */
int task_fail;
int stack_fail;
@@ -563,4 +572,23 @@ int BPF_PROG(end_timestamp)
return 0;
}
+SEC("iter/kmem_cache")
+int slab_cache_iter(struct bpf_iter__kmem_cache *ctx)
+{
+ struct kmem_cache *s = ctx->s;
+ struct slab_cache_data d;
+
+ if (s == NULL)
+ return 0;
+
+ d.id = ++slab_cache_id << LCB_F_SLAB_ID_SHIFT;
+ bpf_probe_read_kernel_str(d.name, sizeof(d.name), s->name);
+
+ if (d.id >= LCB_F_SLAB_ID_END)
+ return 0;
+
+ bpf_map_update_elem(&slab_caches, &s, &d, BPF_NOEXIST);
+ return 0;
+}
+
char LICENSE[] SEC("license") = "Dual BSD/GPL";
diff --git a/tools/perf/util/bpf_skel/lock_data.h b/tools/perf/util/bpf_skel/lock_data.h
index 4f0aae5483745dfa..c15f734d7fc4aecb 100644
--- a/tools/perf/util/bpf_skel/lock_data.h
+++ b/tools/perf/util/bpf_skel/lock_data.h
@@ -32,9 +32,16 @@ struct contention_task_data {
#define LCD_F_MMAP_LOCK (1U << 31)
#define LCD_F_SIGHAND_LOCK (1U << 30)
+#define LCB_F_SLAB_ID_SHIFT 16
+#define LCB_F_SLAB_ID_START (1U << 16)
+#define LCB_F_SLAB_ID_END (1U << 26)
+#define LCB_F_SLAB_ID_MASK 0x03FF0000U
+
#define LCB_F_TYPE_MAX (1U << 7)
#define LCB_F_TYPE_MASK 0x0000007FU
+#define SLAB_NAME_MAX 28
+
struct contention_data {
u64 total_time;
u64 min_time;
@@ -55,4 +62,9 @@ enum lock_class_sym {
LOCK_CLASS_RQLOCK,
};
+struct slab_cache_data {
+ u32 id;
+ char name[SLAB_NAME_MAX];
+};
+
#endif /* UTIL_BPF_SKEL_LOCK_DATA_H */
diff --git a/tools/perf/util/bpf_skel/vmlinux/vmlinux.h b/tools/perf/util/bpf_skel/vmlinux/vmlinux.h
index 4dcad7b682bdee9c..7b81d3173917fdb5 100644
--- a/tools/perf/util/bpf_skel/vmlinux/vmlinux.h
+++ b/tools/perf/util/bpf_skel/vmlinux/vmlinux.h
@@ -195,4 +195,12 @@ struct bpf_perf_event_data_kern {
*/
struct rq {};
+struct kmem_cache {
+ const char *name;
+} __attribute__((preserve_access_index));
+
+struct bpf_iter__kmem_cache {
+ struct kmem_cache *s;
+} __attribute__((preserve_access_index));
+
#endif // __VMLINUX_H
--
2.47.0.199.ga7371fff76-goog
* [PATCH 3/4] perf lock contention: Resolve slab object name using BPF
From: Namhyung Kim @ 2024-11-05 17:26 UTC
To: Arnaldo Carvalho de Melo, Ian Rogers, Kan Liang
Cc: Jiri Olsa, Adrian Hunter, Peter Zijlstra, Ingo Molnar, LKML,
linux-perf-users, Song Liu, bpf, Stephane Eranian,
Vlastimil Babka, Kees Cook, Roman Gushchin, Hyeonggon Yoo
The bpf_get_kmem_cache() kfunc returns the address of the slab cache
(kmem_cache) for a given address, or NULL if it's not a slab object.
As the name of the slab cache is available from the iterator, we can use
it to symbolize dynamic kernel locks in slab objects.
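On the BPF side this boils down to a guarded call through a weak ksym,
roughly as in this sketch (lookup_slab_id() is a hypothetical stand-in
for the slab_caches map lookup in the diff below):

    extern struct kmem_cache *bpf_get_kmem_cache(u64 addr) __ksym __weak;

    /* a NULL weak ksym means the running kernel lacks the kfunc */
    if (bpf_get_kmem_cache) {
            struct kmem_cache *s = bpf_get_kmem_cache(lock_addr);

            if (s != NULL)  /* the lock lives inside a slab object */
                    flags |= lookup_slab_id(s);  /* hypothetical lookup */
    }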
Before:
root@virtme-ng:/home/namhyung/project/linux# tools/perf/perf lock con -abl sleep 1
contended   total wait     max wait     avg wait           address   symbol
        2      3.34 us      2.87 us      1.67 us  ffff9d7800ad9600   (mutex)
        2      2.16 us      1.93 us      1.08 us  ffff9d7804b992d8   (mutex)
        4      1.37 us       517 ns       343 ns  ffff9d78036e6e00   (mutex)
        1      1.27 us      1.27 us      1.27 us  ffff9d7804b99378   (mutex)
        2       845 ns       599 ns       422 ns  ffffffff9e1c3620   delayed_uprobe_lock (mutex)
        1       845 ns       845 ns       845 ns  ffffffff9da0b280   jiffies_lock (spinlock)
        2       377 ns       259 ns       188 ns  ffffffff9e1cf840   pcpu_alloc_mutex (mutex)
        1       305 ns       305 ns       305 ns  ffffffff9e1b4cf8   tracepoint_srcu_srcu_usage (mutex)
        1       295 ns       295 ns       295 ns  ffffffff9e1c0940   pack_mutex (mutex)
        1       232 ns       232 ns       232 ns  ffff9d7804b7d8d8   (mutex)
        1       180 ns       180 ns       180 ns  ffffffff9e1b4c28   tracepoint_srcu_srcu_usage (mutex)
        1       165 ns       165 ns       165 ns  ffffffff9da8b3a0   text_mutex (mutex)
After:
root@virtme-ng:/home/namhyung/project/linux# tools/perf/perf lock con -abl sleep 1
contended   total wait     max wait     avg wait           address   symbol
        2      1.95 us      1.77 us       975 ns  ffff9d5e852d3498   &task_struct (mutex)
        1      1.18 us      1.18 us      1.18 us  ffff9d5e852d3538   &task_struct (mutex)
        4      1.12 us       354 ns       279 ns  ffff9d5e841ca800   &kmalloc-cg-512 (mutex)
        2       859 ns       617 ns       429 ns  ffffffffa41c3620   delayed_uprobe_lock (mutex)
        3       691 ns       388 ns       230 ns  ffffffffa41c0940   pack_mutex (mutex)
        3       421 ns       164 ns       140 ns  ffffffffa3a8b3a0   text_mutex (mutex)
        1       409 ns       409 ns       409 ns  ffffffffa41b4cf8   tracepoint_srcu_srcu_usage (mutex)
        2       362 ns       239 ns       181 ns  ffffffffa41cf840   pcpu_alloc_mutex (mutex)
        1       220 ns       220 ns       220 ns  ffff9d5e82b534d8   &signal_cache (mutex)
        1       215 ns       215 ns       215 ns  ffffffffa41b4c28   tracepoint_srcu_srcu_usage (mutex)
Note that the name for slab objects starts with a '&' sign to indicate
they are dynamic locks. It won't give the exact lock or type names, but
it's still useful. We may add type info to the slab cache later in order
to get the exact name of the lock within the type.
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
tools/perf/util/bpf_lock_contention.c | 52 +++++++++++++++++++
.../perf/util/bpf_skel/lock_contention.bpf.c | 21 +++++++-
2 files changed, 71 insertions(+), 2 deletions(-)
diff --git a/tools/perf/util/bpf_lock_contention.c b/tools/perf/util/bpf_lock_contention.c
index a2efd40897bad316..50c3039c647d4d77 100644
--- a/tools/perf/util/bpf_lock_contention.c
+++ b/tools/perf/util/bpf_lock_contention.c
@@ -2,6 +2,7 @@
#include "util/cgroup.h"
#include "util/debug.h"
#include "util/evlist.h"
+#include "util/hashmap.h"
#include "util/machine.h"
#include "util/map.h"
#include "util/symbol.h"
@@ -20,12 +21,25 @@
static struct lock_contention_bpf *skel;
static bool has_slab_iter;
+static struct hashmap slab_hash;
+
+static size_t slab_cache_hash(long key, void *ctx __maybe_unused)
+{
+ return key;
+}
+
+static bool slab_cache_equal(long key1, long key2, void *ctx __maybe_unused)
+{
+ return key1 == key2;
+}
static void check_slab_cache_iter(struct lock_contention *con)
{
struct btf *btf = btf__load_vmlinux_btf();
s32 ret;
+ hashmap__init(&slab_hash, slab_cache_hash, slab_cache_equal, /*ctx=*/NULL);
+
ret = libbpf_get_error(btf);
if (ret) {
pr_debug("BTF loading failed: %d\n", ret);
@@ -50,6 +64,7 @@ static void run_slab_cache_iter(void)
{
int fd;
char buf[256];
+ long key, *prev_key;
if (!has_slab_iter)
return;
@@ -65,6 +80,34 @@ static void run_slab_cache_iter(void)
continue;
close(fd);
+
+ /* Read the slab cache map and build a hash with IDs */
+ fd = bpf_map__fd(skel->maps.slab_caches);
+ prev_key = NULL;
+ while (!bpf_map_get_next_key(fd, prev_key, &key)) {
+ struct slab_cache_data *data;
+
+ data = malloc(sizeof(*data));
+ if (data == NULL)
+ break;
+
+ if (bpf_map_lookup_elem(fd, &key, data) < 0)
+ break;
+
+ hashmap__add(&slab_hash, data->id, data);
+ prev_key = &key;
+ }
+}
+
+static void exit_slab_cache_iter(void)
+{
+ struct hashmap_entry *cur;
+ unsigned bkt;
+
+ hashmap__for_each_entry(&slab_hash, cur, bkt)
+ free(cur->pvalue);
+
+ hashmap__clear(&slab_hash);
}
int lock_contention_prepare(struct lock_contention *con)
@@ -398,6 +441,7 @@ static const char *lock_contention_get_name(struct lock_contention *con,
if (con->aggr_mode == LOCK_AGGR_ADDR) {
int lock_fd = bpf_map__fd(skel->maps.lock_syms);
+ struct slab_cache_data *slab_data;
/* per-process locks set upper bits of the flags */
if (flags & LCD_F_MMAP_LOCK)
@@ -416,6 +460,12 @@ static const char *lock_contention_get_name(struct lock_contention *con,
return "rq_lock";
}
+ /* look slab_hash for dynamic locks in a slab object */
+ if (hashmap__find(&slab_hash, flags & LCB_F_SLAB_ID_MASK, &slab_data)) {
+ snprintf(name_buf, sizeof(name_buf), "&%s", slab_data->name);
+ return name_buf;
+ }
+
return "";
}
@@ -590,5 +640,7 @@ int lock_contention_finish(struct lock_contention *con)
cgroup__put(cgrp);
}
+ exit_slab_cache_iter();
+
return 0;
}
diff --git a/tools/perf/util/bpf_skel/lock_contention.bpf.c b/tools/perf/util/bpf_skel/lock_contention.bpf.c
index fd24ccb00faec0ba..b5bc37955560a58e 100644
--- a/tools/perf/util/bpf_skel/lock_contention.bpf.c
+++ b/tools/perf/util/bpf_skel/lock_contention.bpf.c
@@ -123,6 +123,8 @@ struct mm_struct___new {
struct rw_semaphore mmap_lock;
} __attribute__((preserve_access_index));
+extern struct kmem_cache *bpf_get_kmem_cache(u64 addr) __ksym __weak;
+
/* control flags */
const volatile int has_cpu;
const volatile int has_task;
@@ -496,8 +498,23 @@ int contention_end(u64 *ctx)
};
int err;
- if (aggr_mode == LOCK_AGGR_ADDR)
- first.flags |= check_lock_type(pelem->lock, pelem->flags);
+ if (aggr_mode == LOCK_AGGR_ADDR) {
+ first.flags |= check_lock_type(pelem->lock,
+ pelem->flags & LCB_F_TYPE_MASK);
+
+ /* Check if it's from a slab object */
+ if (bpf_get_kmem_cache) {
+ struct kmem_cache *s;
+ struct slab_cache_data *d;
+
+ s = bpf_get_kmem_cache(pelem->lock);
+ if (s != NULL) {
+ d = bpf_map_lookup_elem(&slab_caches, &s);
+ if (d != NULL)
+ first.flags |= d->id;
+ }
+ }
+ }
err = bpf_map_update_elem(&lock_stat, &key, &first, BPF_NOEXIST);
if (err < 0) {
--
2.47.0.199.ga7371fff76-goog
* [PATCH 4/4] perf lock contention: Handle slab objects in -L/--lock-filter option
From: Namhyung Kim @ 2024-11-05 17:26 UTC
To: Arnaldo Carvalho de Melo, Ian Rogers, Kan Liang
Cc: Jiri Olsa, Adrian Hunter, Peter Zijlstra, Ingo Molnar, LKML,
linux-perf-users, Song Liu, bpf, Stephane Eranian,
Vlastimil Babka, Kees Cook, Roman Gushchin, Hyeonggon Yoo
This allows filtering lock contention to specific slab objects only.
As in the lock symbol output, the '&' prefix is used to specify slab
cache names.
root@virtme-ng:/home/namhyung/project/linux# tools/perf/perf lock con -abl sleep 1
contended   total wait     max wait     avg wait           address   symbol
        3     14.99 us     14.44 us      5.00 us  ffffffff851c0940   pack_mutex (mutex)
        2      2.75 us      2.56 us      1.38 us  ffff98d7031fb498   &task_struct (mutex)
        4      1.42 us       557 ns       355 ns  ffff98d706311400   &kmalloc-cg-512 (mutex)
        2       953 ns       714 ns       476 ns  ffffffff851c3620   delayed_uprobe_lock (mutex)
        1       929 ns       929 ns       929 ns  ffff98d7031fb538   &task_struct (mutex)
        3       561 ns       210 ns       187 ns  ffffffff84a8b3a0   text_mutex (mutex)
        1       479 ns       479 ns       479 ns  ffffffff851b4cf8   tracepoint_srcu_srcu_usage (mutex)
        2       320 ns       195 ns       160 ns  ffffffff851cf840   pcpu_alloc_mutex (mutex)
        1       212 ns       212 ns       212 ns  ffff98d7031784d8   &signal_cache (mutex)
        1       177 ns       177 ns       177 ns  ffffffff851b4c28   tracepoint_srcu_srcu_usage (mutex)
With the filter, it shows contention from task_struct objects only.
root@virtme-ng:/home/namhyung/project/linux# tools/perf/perf lock con -abl -L '&task_struct' sleep 1
contended   total wait     max wait     avg wait           address   symbol
        2      1.97 us      1.71 us       987 ns  ffff98d7032fd658   &task_struct (mutex)
        1      1.20 us      1.20 us      1.20 us  ffff98d7032fd6f8   &task_struct (mutex)
It also works with other aggregation modes:
root@virtme-ng:/home/namhyung/project/linux# tools/perf/perf lock con -ab -L '&task_struct' sleep 1
contended   total wait     max wait     avg wait  type   caller
        1     25.10 us     25.10 us     25.10 us  mutex  perf_event_exit_task+0x39
        1     21.60 us     21.60 us     21.60 us  mutex  futex_exit_release+0x21
        1      5.56 us      5.56 us      5.56 us  mutex  futex_exec_release+0x21
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
tools/perf/builtin-lock.c | 35 ++++++++++++++++
tools/perf/util/bpf_lock_contention.c | 40 ++++++++++++++++++-
.../perf/util/bpf_skel/lock_contention.bpf.c | 21 +++++++++-
tools/perf/util/lock-contention.h | 2 +
4 files changed, 95 insertions(+), 3 deletions(-)
diff --git a/tools/perf/builtin-lock.c b/tools/perf/builtin-lock.c
index 89ee2a2f78603906..405e95666257b7fe 100644
--- a/tools/perf/builtin-lock.c
+++ b/tools/perf/builtin-lock.c
@@ -1646,6 +1646,12 @@ static void lock_filter_finish(void)
zfree(&filters.cgrps);
filters.nr_cgrps = 0;
+
+ for (int i = 0; i < filters.nr_slabs; i++)
+ free(filters.slabs[i]);
+
+ zfree(&filters.slabs);
+ filters.nr_slabs = 0;
}
static void sort_contention_result(void)
@@ -2412,6 +2418,27 @@ static bool add_lock_sym(char *name)
return true;
}
+static bool add_lock_slab(char *name)
+{
+ char **tmp;
+ char *sym = strdup(name);
+
+ if (sym == NULL) {
+ pr_err("Memory allocation failure\n");
+ return false;
+ }
+
+ tmp = realloc(filters.slabs, (filters.nr_slabs + 1) * sizeof(*filters.slabs));
+ if (tmp == NULL) {
+ pr_err("Memory allocation failure\n");
+ return false;
+ }
+
+ tmp[filters.nr_slabs++] = sym;
+ filters.slabs = tmp;
+ return true;
+}
+
static int parse_lock_addr(const struct option *opt __maybe_unused, const char *str,
int unset __maybe_unused)
{
@@ -2435,6 +2462,14 @@ static int parse_lock_addr(const struct option *opt __maybe_unused, const char *
continue;
}
+ if (*tok == '&') {
+ if (!add_lock_slab(tok + 1)) {
+ ret = -1;
+ break;
+ }
+ continue;
+ }
+
/*
* At this moment, we don't have kernel symbols. Save the symbols
* in a separate list and resolve them to addresses later.
diff --git a/tools/perf/util/bpf_lock_contention.c b/tools/perf/util/bpf_lock_contention.c
index 50c3039c647d4d77..2891a81380204b1d 100644
--- a/tools/perf/util/bpf_lock_contention.c
+++ b/tools/perf/util/bpf_lock_contention.c
@@ -113,7 +113,7 @@ static void exit_slab_cache_iter(void)
int lock_contention_prepare(struct lock_contention *con)
{
int i, fd;
- int ncpus = 1, ntasks = 1, ntypes = 1, naddrs = 1, ncgrps = 1;
+ int ncpus = 1, ntasks = 1, ntypes = 1, naddrs = 1, ncgrps = 1, nslabs = 1;
struct evlist *evlist = con->evlist;
struct target *target = con->target;
@@ -202,6 +202,13 @@ int lock_contention_prepare(struct lock_contention *con)
check_slab_cache_iter(con);
+ if (con->filters->nr_slabs && has_slab_iter) {
+ skel->rodata->has_slab = 1;
+ nslabs = con->filters->nr_slabs;
+ }
+
+ bpf_map__set_max_entries(skel->maps.slab_filter, nslabs);
+
if (lock_contention_bpf__load(skel) < 0) {
pr_err("Failed to load lock-contention BPF skeleton\n");
return -1;
@@ -272,6 +279,36 @@ int lock_contention_prepare(struct lock_contention *con)
bpf_program__set_autoload(skel->progs.collect_lock_syms, false);
lock_contention_bpf__attach(skel);
+
+ /* run the slab iterator after attaching */
+ run_slab_cache_iter();
+
+ if (con->filters->nr_slabs) {
+ u8 val = 1;
+ int cache_fd;
+ long key, *prev_key;
+
+ fd = bpf_map__fd(skel->maps.slab_filter);
+
+ /* Read the slab cache map and build a hash with its address */
+ cache_fd = bpf_map__fd(skel->maps.slab_caches);
+ prev_key = NULL;
+ while (!bpf_map_get_next_key(cache_fd, prev_key, &key)) {
+ struct slab_cache_data data;
+
+ if (bpf_map_lookup_elem(cache_fd, &key, &data) < 0)
+ break;
+
+ for (i = 0; i < con->filters->nr_slabs; i++) {
+ if (!strcmp(con->filters->slabs[i], data.name)) {
+ bpf_map_update_elem(fd, &key, &val, BPF_ANY);
+ break;
+ }
+ }
+ prev_key = &key;
+ }
+ }
+
return 0;
}
@@ -397,7 +434,6 @@ static void account_end_timestamp(struct lock_contention *con)
int lock_contention_start(void)
{
- run_slab_cache_iter();
skel->bss->enabled = 1;
return 0;
}
diff --git a/tools/perf/util/bpf_skel/lock_contention.bpf.c b/tools/perf/util/bpf_skel/lock_contention.bpf.c
index b5bc37955560a58e..048a04fc3a7fc27d 100644
--- a/tools/perf/util/bpf_skel/lock_contention.bpf.c
+++ b/tools/perf/util/bpf_skel/lock_contention.bpf.c
@@ -100,6 +100,13 @@ struct {
__uint(max_entries, 1);
} cgroup_filter SEC(".maps");
+struct {
+ __uint(type, BPF_MAP_TYPE_HASH);
+ __uint(key_size, sizeof(long));
+ __uint(value_size, sizeof(__u8));
+ __uint(max_entries, 1);
+} slab_filter SEC(".maps");
+
struct {
__uint(type, BPF_MAP_TYPE_HASH);
__uint(key_size, sizeof(long));
@@ -131,6 +138,7 @@ const volatile int has_task;
const volatile int has_type;
const volatile int has_addr;
const volatile int has_cgroup;
+const volatile int has_slab;
const volatile int needs_callstack;
const volatile int stack_skip;
const volatile int lock_owner;
@@ -213,7 +221,7 @@ static inline int can_record(u64 *ctx)
__u64 addr = ctx[0];
ok = bpf_map_lookup_elem(&addr_filter, &addr);
- if (!ok)
+ if (!ok && !has_slab)
return 0;
}
@@ -226,6 +234,17 @@ static inline int can_record(u64 *ctx)
return 0;
}
+ if (has_slab && bpf_get_kmem_cache) {
+ __u8 *ok;
+ __u64 addr = ctx[0];
+ long kmem_cache_addr;
+
+ kmem_cache_addr = (long)bpf_get_kmem_cache(addr);
+ ok = bpf_map_lookup_elem(&slab_filter, &kmem_cache_addr);
+ if (!ok)
+ return 0;
+ }
+
return 1;
}
diff --git a/tools/perf/util/lock-contention.h b/tools/perf/util/lock-contention.h
index 1a7248ff388947e1..95331b6ec062410d 100644
--- a/tools/perf/util/lock-contention.h
+++ b/tools/perf/util/lock-contention.h
@@ -10,10 +10,12 @@ struct lock_filter {
int nr_addrs;
int nr_syms;
int nr_cgrps;
+ int nr_slabs;
unsigned int *types;
unsigned long *addrs;
char **syms;
u64 *cgrps;
+ char **slabs;
};
struct lock_stat {
--
2.47.0.199.ga7371fff76-goog
* Re: [PATCH 3/4] perf lock contention: Resolve slab object name using BPF
From: Ian Rogers @ 2024-11-05 17:41 UTC
To: Namhyung Kim
Cc: Arnaldo Carvalho de Melo, Kan Liang, Jiri Olsa, Adrian Hunter,
Peter Zijlstra, Ingo Molnar, LKML, linux-perf-users, Song Liu,
bpf, Stephane Eranian, Vlastimil Babka, Kees Cook, Roman Gushchin,
Hyeonggon Yoo
On Tue, Nov 5, 2024 at 9:26 AM Namhyung Kim <namhyung@kernel.org> wrote:
>
> The bpf_get_kmem_cache() kfunc returns the address of the slab cache
> (kmem_cache) for a given address, or NULL if it's not a slab object.
> As the name of the slab cache is available from the iterator, we can
> use it to symbolize dynamic kernel locks in slab objects.
>
> [...]
>
> Note that the name for slab objects starts with a '&' sign to indicate
> they are dynamic locks. It won't give the exact lock or type names, but
> it's still useful. We may add type info to the slab cache later in order
> to get the exact name of the lock within the type.
Many variables may reference a lock through a pointer; should the name
not be associated with the lock, or derived from decoding the task_struct?
The '&' looks redundant as the addresses are clearly different.
How are >1 lock/mutex in the same struct handled?
Thanks,
Ian
* Re: [PATCH 3/4] perf lock contention: Resolve slab object name using BPF
From: Namhyung Kim @ 2024-11-05 20:45 UTC
To: Ian Rogers
Cc: Arnaldo Carvalho de Melo, Kan Liang, Jiri Olsa, Adrian Hunter,
Peter Zijlstra, Ingo Molnar, LKML, linux-perf-users, Song Liu,
bpf, Stephane Eranian, Vlastimil Babka, Kees Cook, Roman Gushchin,
Hyeonggon Yoo
Hi Ian,
On Tue, Nov 05, 2024 at 09:41:05AM -0800, Ian Rogers wrote:
> On Tue, Nov 5, 2024 at 9:26 AM Namhyung Kim <namhyung@kernel.org> wrote:
> >
> > The bpf_get_kmem_cache() kfunc returns the address of the slab cache
> > (kmem_cache) for a given address, or NULL if it's not a slab object.
> > As the name of the slab cache is available from the iterator, we can
> > use it to symbolize dynamic kernel locks in slab objects.
> >
> > [...]
> >
> > Note that the name for slab objects starts with a '&' sign to indicate
> > they are dynamic locks. It won't give the exact lock or type names, but
> > it's still useful. We may add type info to the slab cache later in order
> > to get the exact name of the lock within the type.
>
> Many variables may reference a lock through a pointer; should the name
> not be associated with the lock, or derived from decoding the task_struct?
I'm not sure I understood you correctly. But this only covers when the
lock variable is inside a slab object so that the address falls into the
slab pages.
> The '&' looks redundant as the addresses are clearly different.
Probably. But sometimes users may want clear separation without looking
at the address values. Also we want to use it in the filters and that
would need some form of indication for the slab locks.
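(For instance, the marker lets a single -L string mix both kinds, along
the lines of: perf lock con -abl -L 'jiffies_lock,&task_struct' sleep 1,
assuming the comma-separated token parsing added in patch 4.)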
> How are >1 lock/mutex in the same struct handled?
It cannot distinguish them for now. It'll be possible once we have type
info (BTF) for slab objects and a helper to tell the offset inside the
object for a given address. With that, we could have something
like &task_struct.futex_exit_mutex or &signal_struct.cred_guard_mutex.
Thanks,
Namhyung
* Re: [PATCH 2/4] perf lock contention: Run BPF slab cache iterator
From: Andrii Nakryiko @ 2024-11-06 19:36 UTC
To: Namhyung Kim
Cc: Arnaldo Carvalho de Melo, Ian Rogers, Kan Liang, Jiri Olsa,
Adrian Hunter, Peter Zijlstra, Ingo Molnar, LKML,
linux-perf-users, Song Liu, bpf, Stephane Eranian,
Vlastimil Babka, Kees Cook, Roman Gushchin, Hyeonggon Yoo
On Tue, Nov 5, 2024 at 9:27 AM Namhyung Kim <namhyung@kernel.org> wrote:
>
> Recently the kernel got the kmem_cache iterator to traverse metadata of
> slab caches. This can be used to symbolize dynamic locks in a slab.
>
> The new slab_caches hash map has the pointer of the kmem_cache as a key
> and saves the name and an ID. The ID is saved in the flags part of the
> lock stat.
>
> [...]
>
> static struct lock_contention_bpf *skel;
> +static bool has_slab_iter;
> +
> +static void check_slab_cache_iter(struct lock_contention *con)
> +{
> + struct btf *btf = btf__load_vmlinux_btf();
> + s32 ret;
> +
> + ret = libbpf_get_error(btf);
please don't use libbpf_get_error() in new code. I left that API for
cases when users might want to support both pre-1.0 libbpf and 1.0+,
but by now I don't think you should be caring about <1.0 versions. And
in 1.0+, you'll get btf == NULL on error, and errno will be set to the
error. So just check errno directly.
* Re: [PATCH 2/4] perf lock contention: Run BPF slab cache iterator
From: Namhyung Kim @ 2024-11-07 19:04 UTC
To: Andrii Nakryiko
Cc: Arnaldo Carvalho de Melo, Ian Rogers, Kan Liang, Jiri Olsa,
Adrian Hunter, Peter Zijlstra, Ingo Molnar, LKML,
linux-perf-users, Song Liu, bpf, Stephane Eranian,
Vlastimil Babka, Kees Cook, Roman Gushchin, Hyeonggon Yoo
Hello,
On Wed, Nov 06, 2024 at 11:36:19AM -0800, Andrii Nakryiko wrote:
> On Tue, Nov 5, 2024 at 9:27 AM Namhyung Kim <namhyung@kernel.org> wrote:
> >
> > Recently the kernel got the kmem_cache iterator to traverse metadata of
> > slab caches. This can be used to symbolize dynamic locks in a slab.
> >
> > The new slab_caches hash map has the pointer of the kmem_cache as a key
> > and saves the name and an ID. The ID is saved in the flags part of the
> > lock stat.
> >
> > [...]
> >
> > +static bool has_slab_iter;
> > +
> > +static void check_slab_cache_iter(struct lock_contention *con)
> > +{
> > + struct btf *btf = btf__load_vmlinux_btf();
> > + s32 ret;
> > +
> > + ret = libbpf_get_error(btf);
>
> please don't use libbpf_get_error() in new code. I left that API for
> cases when users might want to support both pre-1.0 libbpf and 1.0+,
> but by now I don't think you should be caring about <1.0 versions. And
> in 1.0+, you'll get btf == NULL on error, and errno will be set to the
> error. So just check errno directly.
Oh, great. I'll update the code like below.
if (btf == NULL) {
pr_debug("BTF loading failed: %s\n", strerror(errno));
return;
}
Thanks for your review,
Namhyung